The widespread adoption of ChatGPT has led to a surge in the availability of “generative Artificial Intelligence (AI)” products that can generate new text passages, images, and other forms of media. However, these tools have raised concerns about their potential to produce misleading information, which can be difficult to spot because the models generate fluent, grammatically polished text. There are also worries about the proliferation of fake content and a corresponding rise in cybercrime. Unfortunately, one such AI tool, WormGPT, has validated these concerns, as cybercriminals are exploiting it to conduct sophisticated phishing attacks.
WormGPT, like ChatGPT, is an AI model built on a generative pre-trained transformer, in this case GPT-J, and is designed to produce text that resembles human language. However, unlike ChatGPT or Google’s Bard, WormGPT lacks safety measures to prevent it from responding to malicious prompts.
Essentially, WormGPT enables users to engage in illegal activities. It facilitates the creation of malware in Python and can generate persuasive, sophisticated phishing emails and Business Email Compromise (BEC) attacks. This means cybercriminals can craft convincing fraudulent emails to target unsuspecting individuals. In essence, WormGPT is ChatGPT without ethical boundaries.
According to a report by PC Magazine, the developer of the program stated, “This project (WormGPT) aims to provide an alternative to ChatGPT, one that lets you do all sorts of illegal stuff and easily sell it online in the future. Everything blackhat related that you can think of can be done with WormGPT, allowing anyone access to malicious activity without ever leaving the comfort of their home.”
Earlier this February, an Israeli cybersecurity firm disclosed how cybercriminals are working around ChatGPT’s restrictions by taking advantage of its API, as well as trading stolen premium accounts and selling brute-force software that breaks into ChatGPT accounts using huge lists of email addresses and passwords.
The fact that WormGPT operates without any ethical boundaries underscores the threat posed by generative AI: it permits even novice cybercriminals to launch attacks swiftly and at scale, without the technical wherewithal otherwise required to do so.
To safeguard against AI-generated phishing attacks, certain measures can be taken:
- Email verification: Implement a stringent email verification process, carefully examining sender addresses, dates, and other header details to detect potential phishing attempts.
- Firewalls: Employ robust firewalls, both at the desktop and network levels, to act as a protective barrier between your computer and external intruders.
- Stay informed about phishing techniques: Remain vigilant and stay updated on the latest phishing scams and methods that are continually evolving.
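The email-verification step above can be partly automated. As a minimal sketch, the following Python snippet (using only the standard-library `email` package) checks two common red flags: a Reply-To domain that differs from the From domain, and SPF/DKIM failures recorded by the receiving mail server. The heuristics and flag names here are illustrative assumptions, not a complete verification system.

```python
# Hypothetical sketch of automated header checks for phishing triage.
# The specific heuristics are illustrative, not exhaustive.
from email import message_from_string
from email.utils import parseaddr


def phishing_indicators(raw_email: str) -> list[str]:
    """Return simple red flags found in a raw RFC 5322 message's headers."""
    msg = message_from_string(raw_email)
    flags = []

    # 1. Reply-To domain differing from the From domain, a common
    #    BEC pattern (replies go to the attacker, not the spoofed sender).
    _, from_addr = parseaddr(msg.get("From", ""))
    _, reply_to = parseaddr(msg.get("Reply-To", ""))
    from_domain = from_addr.rsplit("@", 1)[-1].lower()
    if reply_to:
        reply_domain = reply_to.rsplit("@", 1)[-1].lower()
        if reply_domain != from_domain:
            flags.append("reply-to-domain-mismatch")

    # 2. SPF or DKIM failures recorded by the receiving server in the
    #    Authentication-Results header (RFC 8601).
    auth = msg.get("Authentication-Results", "").lower()
    if "spf=fail" in auth or "dkim=fail" in auth:
        flags.append("authentication-failure")

    return flags
```

A message flagged by checks like these would then be routed for closer human review rather than rejected outright, since legitimate mail (e.g. mailing lists) can also trip individual heuristics.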
By adopting these preventive measures, individuals and organizations can enhance their defenses against AI-generated phishing attacks.