Artificial intelligence has advanced at an unprecedented pace over the past few months. While governments, industry, civil society and multilateral bodies alike deliberate how best to regulate it, nefarious non-state actors are already harnessing AI to scale up their malicious activities.
Since the launch of OpenAI’s ChatGPT in November last year, forums on the dark web have been buzzing about ways to harness the technology.
Just as people around the world have shared tips on using ChatGPT and other AI tools to enhance efficiency or outsource tasks, dark web users have been sharing tips on jailbreaking the technology to get around its safety and ethical guardrails, or on using it for more sophisticated malicious activity.
Now, just as legitimate users have moved on from exploring ChatGPT to building similar tools, the same has happened in the shadowy world of cybercrime.
Criminal breeding ground
In recent weeks the dark web has become a breeding ground for a new generation of standalone AI-powered tools and applications designed to cater to a cybercriminal’s every illicit need.
The first of these tools, WormGPT, appeared on the dark web on 13 July. Marketed as a ‘blackhat’ alternative to ChatGPT with no ethical boundaries, WormGPT is based on the open-source GPT-J large language model developed in 2021.
Available by subscription for €100 a month or €550 a year, WormGPT, according to its anonymous seller, offers a range of features such as unlimited character input, memory retention and coding capabilities.
Allegedly trained on malware data, the tool is used primarily to generate sophisticated phishing and business email compromise attacks and to write malicious code. It is constantly being updated with new features, which are advertised on a dedicated Telegram channel.
Hot on WormGPT’s heels, FraudGPT appeared for sale on the dark web on 22 July. The tool – based on GPT-3 technology – is marketed as an advanced bot for offensive purposes. Its uses include writing malicious code, creating undetectable malware and hacking tools, writing phishing pages and scam content, and finding security vulnerabilities. Subscriptions range from US$200 a month to US$1,700 for an annual licence.
According to the security firm that discovered it, FraudGPT is likely geared towards quick, high-volume phishing attacks, while WormGPT is more focused on generating sophisticated malware and ransomware.
New wave of AI-powered cybercrime
It’s too early to know how effective WormGPT and FraudGPT actually are. The specific datasets and algorithms they were trained on are unknown, and the GPT-J and GPT-3 models they are based on were released in 2021 – relatively old technology compared with more advanced models like OpenAI’s GPT-4.
And just as in the legitimate world, these AI tools could be overhyped. As anyone who has played around with ChatGPT, Google’s Bard or one of the other AI tools on the market knows, AI might promise the world, but it is still limited in what it can actually do.
It’s also entirely possible that the malicious AI bots for sale are scams in themselves, designed to defraud other cybercriminals. Cybercriminals are, after all, criminals.
Yet it’s safe to say that these tools are just the beginning of a new wave of AI-powered cybercrime. Despite its limitations, AI offers enormous opportunities for nefarious actors to enhance their malicious activity and expand their operations.
For example, AI can craft convincing phishing emails by mimicking authentic language and communication patterns, deceiving even savvy users and leading to more people unwittingly clicking on malicious links. AI can quickly scrape the internet for personal details about a target to develop a tailored scam or carry out identity theft.
AI can also assist in rapidly developing and deploying malware, including pinpointing vulnerabilities in software before they can be patched. It can be used to generate or refine malicious code, lowering the technical barriers for cybercriminals.
Sophisticated cyber threats
AI technology is also getting smarter – fast. There are already two new malicious AI tools in the works that represent a giant leap beyond WormGPT’s and FraudGPT’s capabilities.
The creator of FraudGPT is apparently developing DarkBART – a dark web version of Google’s Bard AI – and DarkBERT, a bot trained on data from the dark web. Both tools will have internet access and be integrated with Google Lens. Interestingly, DarkBERT was originally developed by researchers to help fight cybercrime.
The widespread adoption of AI by nefarious actors, combined with the technology’s rapid advancement, will only continue to elevate the scale and sophistication of malicious cyber threats. AI-powered cybercrime will demand an even more proactive approach to cybersecurity to counter these dynamic and evolving tactics.
Fortunately, AI also offers opportunities to enhance cybersecurity – and the principles of good cyber hygiene and awareness training remain relevant as the first line of defence against cybercriminals.
But individuals, organisations and the government will still need to get ready for an explosion of AI-powered cybercrime.
About the Author
Mercedes Page is a Senior Fellow at ASPI, and has more than a decade of experience working on foreign policy, defence and security issues across government, think-tanks, not-for-profits and the private sector. She was previously a fellow with the Schmidt Futures International Strategy Forum (Asia); a non-resident WSD-Handa Fellow at the Pacific Forum; and worked in the Australian defence industry. She is the founder and former CEO of Young Australians in International Affairs, and was one of the main authors of Australia’s 2021 International Cyber and Critical Tech Engagement Strategy.
This article first appeared on The ASPI Strategist, and is republished under a Creative Commons Licence; you can read the original here.