

We spent the last year studying hundreds of discussions about AI crime in all its forms, including on hacker forums on the dark web, trying to figure out which crimes are real and which are fantasy, and which are still somewhere in the future versus already here.

And while claims about AI cybercrime range from premature and speculative to outright fantasy, AI-assisted scams are very real, and we’re already seeing victims of crimes accelerated or assisted by AI.


The criminal world is drowning in stolen data. In 2021 alone, it’s estimated that more than 40 billion records were exposed or stolen in data breaches. According to Juniper Research, nearly 150 billion records were compromised in just the last five years.

There are an estimated 24 billion stolen credentials (username and password combos) currently circulating on the dark web. That’s three complete sets of credentials for every human on earth. And in January 2024, a stash of more than 26 billion records was discovered on an unprotected server – a compilation of data from multiple recent and previous breaches.

Until recently, criminals were limited by time and tools in what they could do with all this information. But AI is making it much easier for cyber criminals to sort through these billions of records and solve one of the biggest criminal challenges – connecting the dots: analyzing those vast troves of stolen information to find the pieces that match, then putting them together so they can be used to commit crimes that are not just convincing, but carried out at scale.
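
To see why connecting the dots has become so cheap, consider a minimal sketch (the field names and records below are hypothetical). Joining two breach dumps on an exact key like an email address is a trivial dictionary lookup, and where fields disagree, even crude fuzzy matching can confirm the link. Matching messy, inconsistent records at scale is exactly the sort of task modern AI handles far better than this toy example.

```python
# Illustrative sketch only: linking fragments from two hypothetical
# breach dumps into a fuller profile. All field names are invented.
from difflib import SequenceMatcher

breach_a = [
    {"email": "jdoe@example.com", "name": "Jon Doe", "ssn_last4": "1234"},
]
breach_b = [
    {"email": "jdoe@example.com", "name": "Jonathan Doe", "address": "12 Elm St"},
    {"email": "asmith@example.com", "name": "Ann Smith", "address": "9 Oak Ave"},
]

def similar(a: str, b: str) -> float:
    """Crude string similarity score between 0 and 1."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Join on the exact key first, then confirm with a fuzzy name match.
index = {record["email"]: record for record in breach_b}
profiles = []
for record in breach_a:
    candidate = index.get(record["email"])
    if candidate and similar(record["name"], candidate["name"]) > 0.6:
        profiles.append({**candidate, **record})  # merged, fuller profile

print(profiles)  # one linked identity built from two partial records
```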


Speaking of stolen data, identity theft has been the top consumer crime for more than a decade and relies on a constant feed of personal information. The more information, and the more accurate that information, the better. And that’s where AI comes in.

Not only is AI making it easier to capitalize on the billions of personal records already stolen in data breaches, it’s making it much easier to launch more (and more convincing) identity thefts. Is it already here? In its third annual Identity Fraud Report, verification company Sumsub reported that in North America alone, deepfake-based identity fraud surged 1,740% in 2023.


Synthetic identity theft – where criminals use a mixture of real and concocted information to create entirely new identities – is nothing new, but like so much else in crime, AI is making it much easier to grow and scale.

For example, it could include a real Social Security number and address, combined with entirely made-up information like photos and utility bills. Using these hybrid identities, thieves are able to open up multiple bank accounts, credit card accounts, and lines of credit.

These identities could also be used to create entire personas. One security expert predicted that a synthetic identity could be used to apply for employment benefits, housing assistance, food stamps, and other benefits totaling more than $2 million per identity.


In the year following the launch of ChatGPT, there was a reported 1,265% increase in phishing emails and a nearly 1,000% rise in credential phishing.

More of these phishing attacks are successfully tricking recipients because AI is making it easier to create, launch, and manage massive spam and phishing campaigns that are so well-researched and convincing, they’re almost impossible to spot. And accurately translating these phishing and business email compromise (BEC) emails into multiple languages is also a breeze for AI.


One of the best ways to verify whether a person is real is to look at their past. What does the Internet say about them? What evidence is there to prove, or at least suggest, that they really exist?

Before AI, it was almost impossible to create a believable fake Internet history. But thanks to AI, it’s much easier to create very detailed and believable online profiles and histories, from professional websites to complete social media profiles, LinkedIn pages, employment history, and even certifications.

AI tools have already been shown to create realistic websites within minutes, whether to fake a criminal’s identity, serve as a front for fraudulent job offers, launch B2B frauds, or support phishing and BEC campaigns.

A fake website can include logos, team members with complete profiles, product and service descriptions, social media accounts, testimonials and reviews, blogs, press releases, physical addresses, phone numbers, and so on. So if you want to claim, for example, that you’ve been running your own construction business for the last 20 years and employ 30 people, AI will make that happen. Or at least appear to happen.


In just the last few months we’ve seen a huge increase in reports of AI successfully creating very realistic fake versions of real humans – photos, voices, and videos so realistic and lifelike that it’s almost impossible even for friends and family members to distinguish them from the real person.

And with the growth in the use of video for everything from training to marketing, PR, and social media, snippets of our voices and faces exist everywhere. Scammers can now use just a few seconds of these snippets to create a complete clone of a voice.

And the scams are working. In February 2024 a multinational firm in Hong Kong revealed it had lost $25 million to a scam built on a video conference call in which every other participant was fabricated by deepfake technology.

In one recent demonstration, a security expert tricked an employee of the 60 Minutes program into sharing a correspondent’s passport number, simply by using a clone of the correspondent’s voice. The attack took less than five minutes to construct.


Not all targets are created equal, and whether the mark is a CEO or other executive, a wealthy consumer, or their advisers, AI will be much better at identifying and sorting the best targets. That includes doing the in-depth background research and setting up a social engineering or phishing attack that will be very hard to detect or defend against.


For many of us, the humble password is often the first and only line of defense guarding the things we value most.

And AI is setting its sights on them. In recent demonstrations, AI-driven password crackers were set loose on a collection of millions of passwords stolen in data breaches.

According to reports, 81% of the passwords were cracked in less than a month, 71% in less than a day, and 65% in less than an hour. Any seven-character password could be cracked in six minutes or less. AI can also sort through hundreds of millions of exposed and compromised passwords and usernames, and quickly find which other sites are using the same password/username combos.
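
The math behind that seven-character figure is easy to check. Here’s a rough sketch – the guess rate is an assumption, since real cracking rigs vary enormously with the hardware and the hash algorithm – showing how quickly each password length falls:

```python
# Back-of-the-envelope cracking times by password length. The guess
# rate is an assumption (a GPU rig attacking a fast, unsalted hash);
# real-world rates vary enormously.
GUESSES_PER_SECOND = 2e11  # assumed
CHARSET = 95               # printable ASCII characters

for length in range(6, 10):
    keyspace = CHARSET ** length              # all possible passwords
    minutes = keyspace / GUESSES_PER_SECOND / 60
    print(f"{length} chars: {keyspace:.1e} combos, ~{minutes:,.1f} minutes")
```

At that assumed rate, every seven-character password falls in about six minutes, and each extra character multiplies the attacker’s work by 95 – one reason password length matters so much.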


For criminals, finding and exploiting the millions of security holes that exist every day can be a costly, time-consuming, and repetitive task – a task that AI is ideally suited for.

AI can also scan billions of lines of code almost instantly to discover flaws, weaknesses, or mistakes. It’s also very good at writing exploits to take advantage of the vulnerabilities it discovers.


ChatGPT, perhaps the most popular of all AI tools, has been used not only to create malicious code but also to create code capable of changing quickly and automatically to evade antivirus software.

In early 2023, security researchers launched a proof-of-concept malware called BlackMamba that used AI to both eliminate the need for the command-and-control infrastructure typically used by cyber criminals, and to generate new malware on the fly in order to evade detection.

And AI is also helping to make malware smarter and more capable, able to do more damage, infiltrate more deeply into a network, morph and hide, and find and steal the most valuable data.


Creating and spreading misinformation and disinformation is something that AI seems born to excel at. Using the same techniques and tactics as phishing campaigns, AI can be deployed to create and optimize the distribution of all kinds of misinformation, disinformation, fake news and images, and conspiracy theories.

And it will also present a frightening threat to elections and democracies. One leading AI expert admitted to being completely terrified of the 2024 election and an expected misinformation tsunami, while another expert suggested that AI will turn elections into a train wreck. Misleading AI-generated content will be at the forefront of these unsettling attacks.


In 2023, the FBI and the Department of Justice warned that the fastest-growing crime against children was sextortion – using fake but highly realistic social media profiles to trick teens and kids into sharing sensitive or sexually explicit photos and videos, then extorting them for money under the threat of releasing that content to family or the public.

In 2021 the National Center for Missing and Exploited Children received 139 reports of sextortion. Two years later, that number had jumped to 26,000. AI is expected to take that kind of crime even further by generating deepfake pornographic photos and videos that appear to include the face or likeness of the victim.

One West African gang is believed to be responsible for nearly half of all global sextortion targeting minors, even advertising “how to” guides in chat rooms and on social media sites.


In late 2023, global security firm Sophos showed how it was able to create a complete fraudulent website using nothing but AI. The site included hyper-realistic images, audio, and product descriptions, a fake Facebook login, and a checkout page able to steal users’ login credentials and credit cards. The researchers were also able to create hundreds of similar websites in a matter of minutes with a single button click.

Juniper Research estimates that global losses from e-commerce fraud from 2023 to 2027 will surpass $343 billion. Those losses will likely be shared by businesses and consumers.


AI will make it much easier for unsophisticated and entry-level criminals or wannabes to scale up more advanced and complex attacks with fewer resources and lower costs. According to security firm Trend Micro, “One thing we can derive from this [AI] is that the bar to becoming a cybercriminal has been tremendously lowered. Anyone with a broken moral compass can start creating malware without coding know-how.”


Another AI capability that will make crime easier and life harder is deepfake forgery. AI is very capable of forging and counterfeiting the most complicated documents, including birth certificates, driver’s licenses, and even passports.

It’s also capable of forging all the stuff that’s supposed to make counterfeiting much more difficult – things like watermarks, holograms, microprinting, special fonts and logos, and of course, a user’s photo and even signature.

AI can also forge utility bills, which will make identity theft and other frauds much easier to commit. And it can easily forge and create paper trails of invoices that can be used to trick companies into inadvertently paying scammers.


Where AI is already shining in the criminal underworld is in helping criminals improve their hacking tools. Monitoring of hacker chat rooms on the dark web has shown a keen interest among criminals in using currently available AI tools like ChatGPT to improve existing malware, write better code faster, and even create entirely new tools.


AI learns and grows from nothing but data, and it has an insatiable appetite for more. And that includes your personal information.

With so much of this information, chances are AI will know far more about you than you’re comfortable with – your behavior, habits, choices, preferences, political and social opinions, locations, connections, and so on. It may also, perhaps mistakenly, draw inferences about you based on inaccurate or incomplete data.


The deepfake porn threat has already emerged in a number of ways, from revenge porn to sextortion to a tool for humiliating people. But the threat goes far beyond humiliation.

In 2022, as 25-year-old Cara Hunter was running for political office in Northern Ireland, her world and her campaign were rocked by the release of a very graphic deepfake porn video purporting to show her. The most likely motive was to embarrass and humiliate her, and ultimately either to force her to drop out of the race or to turn voters against her. She didn’t drop out, and she won, although by just a handful of votes.

This was one of the first recorded instances of deepfake porn being used to influence political races and elections, and likely won’t be the last.


If AI really is going to mean more criminals, more crimes, and more victims, and especially victims of financial crimes and scams, chances are the first place those victims are going to turn is their local police department.

We know from two decades of fighting identity theft alone that no police department has the resources to investigate or prosecute such an overwhelming number of crimes, most of which are far beyond their jurisdiction anyway. The last thing they need is more of them.

And the rapid advancements in AI-driven document forgery will present additional challenges for law enforcement. It will be nearly impossible, at least on initial inspection, to recognize fake driver’s licenses, vehicle registrations, and proof-of-insurance documents.


One of the most important ingredients in the success of humans and communities is our ability to trust each other. Trust is hard earned and easily squandered, and thanks to AI, it’s becoming a threatened species.

We are quickly approaching a point where we humans will no longer trust anything we see or hear – no matter how it’s presented, who’s presenting it, or how thoroughly it’s been verified and authenticated. This trust deficit is likely to seep into every part of human life.

As a recent article in Newsweek put it: “These developments will have far-reaching consequences. Schools and universities will face AI-generated submissions, undermining their traditional tests. Businesses will struggle to identify capable employees, as the credentials they relied on become watered down. Political leaders skilled at mastering their image in a TV context will find they no longer convey the same credibility and influence. The very concept of expertise—something people traditionally associated with professional-sounding tone and language—will lose the signal that has given credibility to its purveyors.”


There are thousands of scam call centers operating around the world, usually beyond the reach of the law, each year bilking millions of victims out of billions of dollars through tech support scams, investment scams, and romance scams. As soon as their operators start deploying AI to run these operations, we expect even more victims.

AI can help these criminals eliminate most of their setup and operating costs (like buildings and people), churn out even more calls, and, using conversational AI, make those calls much more convincing and effective.

That means lower costs, bigger profits, and more victims. All great incentives for more of these criminals to move to AI. 


AI has already caused its fair share of stress and anxiety, whether it’s driving political misinformation and disinformation, accelerating deepfake scams, or helping to drive romance scams and sextortion.

And you don’t have to be a victim to feel victimized. Bad AI will be a major threat to trust and confidence and we can’t ignore the psychological impact that it’s going to have on all humans.


As consumers, we’re not helpless against these crimes. If there is a silver lining, it’s that the threat of AI crime might finally persuade more people to take cybercrime and scams a little more seriously. History has shown that most consumers still do very little to protect themselves from these crimes, often in the mistaken belief that it will simply never happen to them.

AI makes it more likely that these crimes will eventually happen to all of us, and might be more devastating too. The best defense remains the same – things like greater vigilance and awareness, and a handful of good habits and behaviors.

Author: Neal O'Farrell

Neal O’Farrell has spent more than 40 years fighting cybercrime and fraud around the world, and recently launched the Center for AI Crime to focus on how AI is changing fraud for everyone. He was the first expert to teach law enforcement about identity fraud more than 20 years ago, headed the Identity Theft Council for a decade, and was invited to advise President Barack Obama's STOCK Act Panel in 2013.