ChatGPT is already almost as good as humans at writing phishing emails

Current generative artificial intelligence (AI) models are already very good at writing believable phishing emails and can save precious time for attackers, a new IBM study has shown.

Cybersecurity professionals and government officials have long been warning that threat actors could weaponize ChatGPT and similar AI tools to expand their phishing campaigns.

Now, new research from IBM has demonstrated – in concrete detail – just how close AI-enabled tools are to perfecting the writing of phishing emails and fooling the average recipient.

IBM researchers recently released the results of an experiment they ran with an unnamed global healthcare company and its 1,600 employees. Half of them received a phishing email written entirely by humans – IBM’s X-Force team of hackers, responders, researchers, and analysts.

The other half got an email composed by ChatGPT. It turns out that humans are still better at tricking other people, but not by much: 14% of the employees who received the human-written phishing email clicked on its malicious link, compared with 11% of those who received the ChatGPT-written version.
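Those click rates come from fairly small groups: with the 1,600 employees split evenly, each email went to roughly 800 recipients. As a minimal sketch of what that three-point gap means, the two-proportion z-test below (in Python, with per-arm click counts inferred from the reported rates rather than published by IBM) suggests the difference sits right at the edge of conventional statistical significance:

```python
import math

# Reported setup: 1,600 employees split evenly, so ~800 per arm.
# Click rates: 14% for the human-written email, 11% for ChatGPT's.
# The exact counts below are inferred from those rates, not published by IBM.
n_human, n_ai = 800, 800
clicks_human = round(0.14 * n_human)  # ~112 clicks
clicks_ai = round(0.11 * n_ai)        # ~88 clicks

# Two-proportion z-test: is the 3-point gap bigger than chance?
p_pool = (clicks_human + clicks_ai) / (n_human + n_ai)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_human + 1 / n_ai))
z = (clicks_human / n_human - clicks_ai / n_ai) / se

# Two-sided p-value from the standard normal CDF (via math.erf)
p_value = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))
print(f"z = {z:.2f}, p = {p_value:.3f}")  # z ≈ 1.81, p ≈ 0.070
```

At p ≈ 0.07, the human edge is real but modest – consistent with the study’s framing that people still win, “but not by much.”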

Perhaps even more important, it took the researchers only five minutes to get ChatGPT to produce a convincing email, said Stephanie Carruthers, the IBM chief people hacker who led the experiment.

“With only five simple prompts, we were able to trick a generative AI model into developing highly convincing phishing emails in just five minutes — the same time it takes me to brew a cup of coffee,” said Carruthers.

“It generally takes my team about 16 hours to build a phishing email, and that’s without factoring in the infrastructure set-up. So, attackers can potentially save nearly two days of work by using generative AI models.”

To be sure, ChatGPT developer OpenAI has built in safeguards that prevent the chatbot from responding to direct requests for a phishing email, malware, or other malicious cyber tools.

But Carruthers and her team found a workaround. They started by asking ChatGPT to list the primary areas of concern for employees in the healthcare industry.

Next, the bot was prompted to list the top social-engineering and marketing techniques to use within the email – choices meant to maximize the number of employees who would click on a malicious link.

A prompt then asked ChatGPT who the sender should be – someone internal to the company, a vendor, or an outside organization. Finally, the researchers asked the bot to craft an email based on the information it had just provided.

“I have nearly a decade of social engineering experience, crafted hundreds of phishing emails and even I found the AI-generated phishing emails to be fairly persuasive,” said Carruthers.

Humans are still better than machines at creating phishing emails, she explained, because generative AI models still lack the emotional intelligence needed to trick larger numbers of people.

However, IBM’s X-Force has already observed tools such as WormGPT being sold on various forums and advertised as having phishing capabilities. This shows that attackers are testing AI’s use in phishing campaigns – and the technology is constantly improving.
