ChatGPT may be known to plagiarize an essay or two, but its rogue counterparts are doing far worse.
Copycat chatbots with criminal capabilities are surfacing on the dark web and, much like ChatGPT, can be accessed for a modest monthly subscription or one-time fee.
These large language models, as they're technically known, essentially function as a tool chest for savvy online scammers.
Several dark web chatbots, including DarkBERT, WormGPT and FraudGPT (the last of which sells for $200 a month or $1,700 annually), have recently caught the eye of cybersecurity firm SlashNext. They were flagged for their potential to create phishing scams and phony messages with remarkably believable imagery.
![indiscernible person in hooded sweatshirt uses the computer](https://nypost.com/wp-content/uploads/sites/2/2023/08/chatbots.gif?w=1024)
The company found evidence that DarkBERT illicitly sold ".edu" email addresses at $3 apiece to con artists impersonating academic institutions. These are used to wrongfully access student deals and discounts on marketplaces like Amazon.
Another grift, facilitated by FraudGPT, involves soliciting someone's banking info by posing as a trusted entity, such as the bank itself.
These kinds of swindles are nothing new, but they are more accessible than ever because of artificial intelligence, warns Lisa Palmer, an AI strategist for consulting firm AI Leaders.
![ChatGPT imposters are showing up on the dark web and making it easier for criminals to operate.](https://nypost.com/wp-content/uploads/sites/2/2023/08/netenrich-research-1.jpg?w=1024)
"This is about crime that can be personalized at a large scale. [Scammers] can create campaigns that are highly personalized for hundreds of targeted victims versus having to create each one separately," she told The Post, adding that fraudulent deepfake video and audio is now easy to create.
Furthermore, these attacks don’t just pose a threat to the elderly and less-than-tech-savvy.
"Since [these kinds of models] are trained across large amounts of publicly available data, they can be used to search for patterns and information that's shared about a government that they're looking to infiltrate or attack," Palmer said. "It could be gathering details about specific businesses that would allow for things like ransom or reputation attacks."
AI could also facilitate a serious crime that cybersecurity already struggles to defend against: identity theft.
![Chatbots are being designed on the dark web so that users can pay a subscription to have them create scams.](https://nypost.com/wp-content/uploads/sites/2/2023/08/ai-fraud-bot-1.jpg?w=1024)
"Think about things like identity theft and being able to create identity theft campaigns," Palmer said. "They're highly personalized at a large scale. What you're talking about here is taking crimes to an elevated level."
Serving justice to those responsible for the outlaw LLMs won't be easy, either.
"For those that are sophisticated organizations, it's exceptionally hard to catch them," Palmer said.
"On the other end of that, we also have new criminals who are being emboldened by these language models, because they make it easier for people without high-tech skills to enter illegal enterprises."