
Fear the Rise of FraudGPT: The Dark Web’s New AI Villain

The emergence of AI chatbots like ChatGPT has significantly changed how people work and find information online, sparking curiosity even among those who haven’t tried them yet. However, the rise of generative AI models has also paved the way for new dangers. Dark web forums are now hosting discussions about a malicious AI tool known as FraudGPT, which cybercriminals are exploring for their own benefit.

**What is FraudGPT?**

FraudGPT is an AI bot similar to ChatGPT but with a dangerous twist. It is designed specifically for malicious activities such as writing spear-phishing emails, creating cracking tools, and carding. The tool is sold on various dark web marketplaces and on the Telegram app.

In July 2023, the Netenrich threat research team first discovered FraudGPT being advertised on the dark web. Unlike ChatGPT, FraudGPT lacks the safeguards and restrictions that would cause it to refuse suspicious queries.

FraudGPT reportedly receives updates every week or two and draws on several different AI models. It is sold on a subscription basis: monthly subscriptions cost $200, while annual memberships are priced at $1,700.

**How does it work?**

The Netenrich research team decided to investigate FraudGPT by purchasing and testing it. The interface of FraudGPT is quite similar to ChatGPT, with a chat window taking up most of the screen and a sidebar displaying the user’s previous requests. To get a response, users simply input their question in the provided box and hit “Enter.”

During the test, the team used FraudGPT to create a phishing email impersonating a bank. All the team needed to do was include the bank’s name in its prompt. FraudGPT not only completed the task but also indicated where a malicious link could be inserted in the email. The tool is also capable of generating harmful code to create hard-to-detect malware and identify potential targets.

The Netenrich team discovered that the supplier of FraudGPT had previously advertised hacking services for hire and was also associated with a similar program called WormGPT.

**Implications and Security Concerns**

The existence of FraudGPT raises concerns about hackers using AI to develop novel threats. Because it lets attackers write phishing emails and build landing pages in seconds, it also underscores the importance of vigilance for consumers.

Users must remain cautious about any requests for personal information and follow cybersecurity best practices. Cybersecurity professionals should keep their threat-detection tools up to date, especially considering that malicious actors may exploit programs like FraudGPT to gain unauthorized access to critical computer networks.
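One such best practice is to distrust links in unsolicited email, since AI-written phishing messages can look flawless while still pointing at lookalike domains. The sketch below is a minimal, illustrative heuristic (not a product or anything described in the Netenrich research): it extracts links from an email body and flags any whose host is not on a hypothetical allow-list of domains the user actually banks with. The domain `examplebank.com` and the `TRUSTED_DOMAINS` set are assumptions for the example.

```python
import re
from urllib.parse import urlparse

# Hypothetical allow-list of domains the user actually does business with.
TRUSTED_DOMAINS = {"examplebank.com"}

URL_PATTERN = re.compile(r"https?://[^\s\"'<>]+")

def suspicious_links(email_body: str) -> list[str]:
    """Return links whose host is not a trusted domain or subdomain.

    A crude heuristic: raw-IP hosts and any host that is not an exact
    match or subdomain of a trusted domain are flagged for review.
    """
    flagged = []
    for url in URL_PATTERN.findall(email_body):
        host = (urlparse(url).hostname or "").lower()
        is_ip = re.fullmatch(r"[\d.]+", host) is not None
        trusted = any(
            host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS
        )
        if is_ip or not trusted:
            flagged.append(url)
    return flagged

body = (
    "Dear customer, verify your account at "
    "https://examplebank-secure.com/login or visit "
    "https://www.examplebank.com/help for support."
)
print(suspicious_links(body))  # flags only the lookalike domain
```

A heuristic like this is no substitute for up-to-date threat-detection tooling, but it illustrates the general defensive idea: lookalike domains such as `examplebank-secure.com` pass a casual glance yet fail an exact-domain check.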

FraudGPT reminds us that hackers will continue to adapt their methods over time. It’s crucial for everyone, including those responsible for securing online infrastructures, to stay informed about emerging technologies and the associated threats. While programs like ChatGPT have their benefits, it’s essential to be aware of the potential risks involved.


FraudGPT, a malicious AI tool available on the Dark Web, poses significant security risks. Its ability to generate content for cyberattacks, such as spear-phishing emails and undetectable malware, makes it a concerning tool for cybercriminals. Users should remain vigilant, follow cybersecurity best practices, and keep their threat-detection tools updated to protect against such threats.

Please check **Reference 1** and **Reference 2** for more information on this research. All credit for this research goes to the researchers involved.

Note: The author of this article is Dhanshree Shenwai, a Computer Science Engineer with experience in the FinTech industry and a keen interest in AI applications.
