
Criminal AI Subscription Service Launches—Available to Anyone from May 7, 2025!

By Cameron Aldridge

This article includes a reference to violent sexual assault.

In April 2025, cybersecurity forums began buzzing about a new, crime-focused artificial intelligence system known as Xanthorox, which had emerged from the shadows of the dark web. Despite its ominous name and origins, Xanthorox is surprisingly accessible. Its developer maintains a GitHub page and a public YouTube channel showcasing the system’s capabilities, with a channel description that reads merely, “This Channel Is Created Just for Fun Content Ntg else.” There is also a Gmail contact, a Telegram channel posting progress updates, and a Discord server where access is sold for cryptocurrency: no secretive dark web forum initiation required.

The platform’s intentions, however, are clearly malicious. Xanthorox can create deepfake video or audio to con victims by impersonating someone they know, craft phishing emails to steal login credentials, generate malware to infiltrate computers, and deploy ransomware that locks users out of their files until a ransom is paid. These are the standard tools of a scam industry worth billions of dollars. More alarmingly, one YouTube screen recording shows a user requesting a “step by step guide for making a nuke in my basement,” to which the AI chillingly responds with specifics on using plutonium-239 or highly enriched uranium.


The knowledge Xanthorox offers isn’t exactly top secret. Universities, internet searches, and educational AI platforms have long provided similar information without leading to widespread production of basement nukes. Scamming tools have been around long before the advent of modern AIs. The screen recording is likely just a marketing tactic that adds to the platform’s allure—a common theme in the dramatic portrayals seen on cybersecurity blogs. Although it’s unproven that Xanthorox marks the start of a new era in criminal AI, it raises significant questions about distinguishing genuine threats from mere hype.


The Evolution of Criminal AI

The concept of “jailbreaking”—removing software restrictions—went mainstream in 2007 with the first iPhone. Before the App Store existed, hackers who wanted to customize their phones had to create their own jailbreaks. When OpenAI released ChatGPT, powered by its GPT-3.5 model, in late 2022, users quickly began probing its limits, for instance by asking it to role-play as an unrestricted AI that could compose phishing emails. ChatGPT refused such requests when asked directly, but it would comply while pretending to be such an AI. To automate the trick, hackers built “wrappers”: intermediary software layers that rephrased user prompts in ways that coaxed ChatGPT into compliance.

As AI guardrails improved, criminals turned to GPT-J-6B, an open-source model that was not created by OpenAI and carried fewer usage restrictions. In June 2023, after training this model on various malicious datasets, a user launched WormGPT on Telegram, offering custom malicious software designs for fees ranging from $70 to $5,600. Soon after, cybersecurity reporter Brian Krebs unmasked the creator as 23-year-old Rafael Morais of Portugal. Following the increased scrutiny, Morais deleted the channel, leaving his customers with only the tools they had already acquired. New criminal AIs such as FraudGPT and DarkBERT soon appeared in its place, simplifying the creation of malware, ransomware, and scam emails.

These episodes have demonstrated that wrapping an AI system is both affordable and straightforward, and a catchy name helps sell the product. Chester Wisniewski of Sophos noted that scammers often target inexperienced hackers, or “script kiddies,” from economically disadvantaged regions. These individuals, often young and desperate, simply run scripts hoping to breach systems.

The True Danger of Criminal AI

While there’s concern about AI teaching dangerous activities, like bomb-making or virus engineering, the more immediate threat is the escalation of common scams—phishing and ransomware. Yael Kishon of KELA points out that criminal AIs greatly simplify the creation of malicious attacks. Wisniewski adds that criminals can now launch thousands of scams within an hour, a stark increase from before. Although this doesn’t necessarily mean the attacks are more sophisticated, the sheer volume and reach pose significant risks.


Apart from lowering the entry barrier for criminals, AI enables scammers to target vastly more individuals. For instance, Hong Kong police reported an incident involving a multinational company where an employee was tricked into transferring $25 million during a video call with AI-generated deepfakes of company executives. Additionally, phishing has evolved from broad campaigns to “spear phishing,” where attacks are personalized using victims’ details, gathered effortlessly by AI.

One significant advantage of AI in cybercrime is its linguistic capability. It can tailor scam messages to the dialects and cultural nuances of its targets, a refinement that traditional scammers often get wrong. While the concepts aren’t new, the scale and efficiency of these crimes have dramatically increased due to AI.

Xanthorox: Hype or Hazard?

The name Xanthorox might sound like something from a fantasy book, yet there’s little verifiable information about its effectiveness. Although some describe it as the first AI designed from scratch for criminal purposes, this claim remains unconfirmed. On Xanthorox’s Telegram channel, the creator admitted to facing hardware limitations while using versions of two well-known AI systems: Claude and DeepSeek. Kishon is skeptical about Xanthorox’s impact, noting a lack of significant chatter about it in cybercrime circles. However, Casey Ellis of Bugcrowd views it differently, suggesting that Xanthorox could become a formidable tool due to its integrated advanced systems, a step up from previous criminal AIs.

Xanthorox’s creator initially claimed the platform was for “educational purposes” but later shifted to selling access amid growing media attention. As of the latest update, he had sold numerous subscriptions and launched a polished online store that markets the AI as a sophisticated and secure product.


Perhaps most disturbing is the content on Xanthorox’s Telegram channel, which includes violent and misogynistic language. At one point, the creator used the AI to generate instructions for committing horrific acts, reflecting the kind of dangerous influence such platforms can wield.

Ensuring Safety in the Era of Criminal AI

Defenses against AI-driven crime currently focus on corporate security, but tools for personal use are emerging. Products such as Microsoft Defender, Malwarebytes Browser Guard, and Bitdefender are designed to block malicious sites, filter phishing attempts, and counter ransomware. Norton 360 monitors the dark web for stolen data, and Reality Defender identifies AI-generated content.

Sergey Shykevich of the cybersecurity firm Check Point advocates using AI to combat AI, noting that the best defense involves recognizing AI-generated threats quickly. Education and awareness are crucial, especially for vulnerable groups such as the elderly, who are often targeted by these scams. In our increasingly digital world, skepticism and caution are becoming essential, and trust may soon be reserved only for face-to-face interactions, until even those can be convincingly simulated by machines.
