A security use case for ChatGPT: Email Scam Robo Judge

I have been noodling around with ChatGPT recently and have found a nice little use case that I hope someone with more coding skills and time than I have will implement…

The AI seems pretty good at analyzing the text of emails for signs of scammy or fraudulent intent and explaining the basis for its verdict in easy-to-understand language. The prompt text I used is:

"Please analyze the following email text and determine if it appears to have malicious or scam-related intent. If so, present the indications of malicious intent in a bulleted list form with explanations. Finally, provide a verdict on the likelihood that the email is malicious or fraudulent."

At this point, pasting in the text of an email yields a nice, user-friendly analysis that patiently explains the features making the email seem like it might have nefarious intent. Here is an example:

Based on the email text you provided, there are several indications that this email is a scam or fraud attempt. The email is likely to have malicious intent and is designed to trick the recipient into giving away personal information or money. Here are the reasons why:

  1. Unexpected account opening: The email claims that an account has been opened for you and funds have been deposited without any prior knowledge or consent from you. This is a common tactic used by scammers to create a sense of urgency and entice the recipient to respond.
  2. Request for personal information: The email requests that you provide personal information such as your full name, residential address, phone number, and ID card or picture. This is a red flag as scammers often use this information to steal identities or commit other types of fraud.
  3. Unverified sender: Although the email is signed by Barrie Rhodes, there is no way to verify if the email actually came from them or if they are a legitimate source.
  4. Suspicious email address: The email instructs you to contact Mr. Michael Elmore at a suspicious email address (ctf29834@outlook.com) rather than an official company email address.

Based on the above indications, it is highly likely that this email is a scam or fraud attempt. Therefore, it is important that you do not provide any personal information or respond to the email without first verifying the legitimacy of the sender and the offer. It is always best to research the organization and contact them directly through their official website or phone number to verify any claims of account opening or fund deposit.

Verdict: possible malicious email.

I would love to see a tool that integrates this functionality into consumer desktop and webmail software (perhaps via the ChatGPT API). Ideally, this would be triggered by a user pressing a “please analyze” button and would warn them that they are submitting the email to a third party for analysis. For bonus points, it could take steps to ensure that the submission is cleansed of potential PII before shipping it off to the ChatGPT robot for analysis.
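A minimal sketch of what that integration might look like, using only the Python standard library to POST to the ChatGPT API. The endpoint, model name, and `OPENAI_API_KEY` environment variable are assumptions about how a real client would be wired up, and the `scrub_pii` regexes are deliberately crude placeholders for a proper redaction pass:

```python
import json
import os
import re
import urllib.request

# The analysis prompt from the post, sent ahead of the (scrubbed) email body.
PROMPT = (
    "Please analyze the following email text and determine if it appears "
    "to have malicious or scam-related intent. If so, present the "
    "indications of malicious intent in a bulleted list form with "
    "explanations. Finally, provide a verdict on the likelihood that the "
    "email is malicious or fraudulent."
)


def scrub_pii(text: str) -> str:
    """Redact obvious PII before the text leaves the user's machine.

    These two patterns (email addresses and phone-like digit runs) are
    illustrative only; a real tool would need a far more thorough pass
    covering names, street addresses, account numbers, and so on.
    """
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", "[EMAIL]", text)
    text = re.sub(r"\+?\d[\d\s().-]{7,}\d", "[PHONE]", text)
    return text


def analyze_email(body: str) -> str:
    """Send the scrubbed email text to the ChatGPT API and return the verdict.

    Assumes an API key in the OPENAI_API_KEY environment variable; the
    model name is a placeholder for whatever the client ships with.
    """
    payload = {
        "model": "gpt-3.5-turbo",  # assumed model name
        "messages": [
            {"role": "user", "content": PROMPT + "\n\n" + scrub_pii(body)},
        ],
    }
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + os.environ["OPENAI_API_KEY"],
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

A mail client's “please analyze” button would show the third-party-submission warning first, then call `analyze_email()` on the selected message and display the returned verdict to the user.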

Having this functionality available for non-email channels like SMS, Telegram, WhatsApp, etc. would also be helpful, as the bad guys are using these channels more and more.

Yes, sending this data to a third party does carry some risk, but in most cases the benefits would seem to outweigh it, especially for consumer users who are prone to falling for scams and frauds. This is clearly just the germ of an idea and needs to be thought out more carefully, but it does seem promising to me.

If you build a billion dollar startup out of this, send me some shares and a T-shirt.
