How AI is Changing the Scale and Speed of Crypto Fraud

TRM Team

Key takeaways

  • AI is industrializing crypto fraud: Artificial intelligence has transformed crypto-enabled fraud from a labor-intensive activity into a machine-scaled ecosystem driven by automation, personalization, and rapid iteration.
  • Illicit crypto activity reached record levels in 2025: Illicit crypto volume totaled USD 158 billion in 2025, increasing nearly 145% year over year. TRM estimates that scam-related activity alone accounted for approximately USD 30 billion, with underreporting likely pushing the true figure significantly higher.
  • AI dramatically increases scale, speed, and adaptability: TRM observed an approximately 500% increase in AI-enabled scam activity over the past year, reflecting rapid integration of generative AI into fraud operations, including phishing, impersonation, laundering automation, and synthetic identity creation.
  • Automation creates strategic asymmetry between attackers and defenders: Criminal actors can experiment with thousands of micro-campaigns at low cost, while compliance teams and law enforcement must operate within legal, regulatory, and evidentiary constraints.
  • Blockchain transparency remains a structural advantage: AI may accelerate fraud, but it does not eliminate traceability. On-chain transactions are permanent and observable, enabling entity clustering, anomaly detection, and forensic analysis when paired with defensive AI and expert investigation.

{{horizontal-line}}

Artificial intelligence (AI) is transforming crypto-enabled fraud from a labor-constrained enterprise into an industrialized, machine-scaled ecosystem. 

In 2025, illicit crypto volume reached an all-time high of USD 158 billion, increasing nearly 145% year over year. Within that broader landscape, scam-related activity alone accounted for an estimated USD 30 billion, with substantial underreporting suggesting the true figure may be significantly higher. 

At the same time, TRM observed a roughly 500% increase in AI-enabled scam activity over the past year. The convergence of generative AI, programmable financial infrastructure, and global crypto liquidity has altered the economics, velocity, and scalability of fraud. In this piece, we’ll examine how AI is reshaping the lifecycle of crypto fraud and what that structural shift means for investigators, policymakers, and financial institutions.

What is AI-enabled fraud? 

AI-enabled fraud refers to scams and financial crimes that leverage artificial intelligence to automate, personalize, and scale deceptive activity. While fraud in crypto is not new, AI is reshaping how it is executed — making schemes faster to deploy, harder to detect, and more convincing to victims.

How AI scales illicit activity from labor-constrained fraud to machine-scaled deception

Crypto-enabled fraud has existed since the earliest days of digital assets. Historically, however, even large-scale fraud operations were constrained by human labor. Investment scams, romance fraud, impersonation schemes, and technical support fraud required sustained engagement by trained operators. Call centers, script libraries, multilingual recruitment, shift management, and quality control imposed natural ceilings on growth.

Today, artificial intelligence has removed many of those ceilings.

AI-enabled fraud does not necessarily introduce entirely new crime typologies. It enhances existing tactics by removing the limitations of human-driven operations.

  • Threat actors use generative AI to produce polished phishing emails, fake investment websites, and realistic customer support chatbots in seconds
  • Large language models (LLMs) can tailor outreach messages to specific demographics or individuals, dramatically increasing the likelihood of engagement
  • AI-powered translation tools allow fraudsters to localize scams across jurisdictions without linguistic barriers

AI is also accelerating impersonation. Deepfake audio and video tools enable criminals to mimic executives, romantic partners, or public figures with increasing realism. In crypto-related scams, this can include fabricated endorsements of token launches, fake trading signals, or impersonated exchange representatives. These tactics exploit the speed and borderless nature of digital assets, where transactions can be executed — and irreversibly settled — within minutes.

Beyond social engineering

Importantly, AI-enabled fraud is not limited to social engineering. Machine learning models can be used to test stolen credentials at scale, optimize money laundering flows, or identify vulnerabilities in smart contracts. As AI tools become more accessible and easier to use, the barrier to entry for conducting sophisticated fraud declines — potentially expanding the pool of actors capable of executing high-impact schemes.

What previously required huge teams of operators can now be automated, templated, and continuously refined — transforming the economics of deception. 

The critical change is scalability. Human fraud scales linearly with headcount. AI-enabled fraud scales with compute.

The result is a threat landscape where fraud operations may become more industrialized — blending automation, personalization, and rapid cross-platform coordination. Understanding how AI enhances existing fraud typologies is the first step toward building the tools, controls, and public-private partnerships needed to disrupt its impact.

The data context: Industrialized illicit activity

TRM’s 2026 Crypto Crime Report provides the macroeconomic backdrop to this transformation. Illicit crypto volume reached USD 158 billion in 2025, up nearly 145% from the previous year. While illicit activity declined slightly as a proportion of total on-chain volume — falling from 1.3% to 1.2% — the absolute increase signals expansion in both ecosystem size and adversarial capability.

More revealing is TRM’s liquidity-based metric. In 2025, illicit entities captured 2.7% of available crypto liquidity. Rather than measuring share of transaction volume, this framing captures how much deployable capital adversaries actually command. It underscores that illicit actors are not marginal participants but embedded components of global crypto markets.

Scam-related activity remains one of the largest consumer-facing threat categories. TRM estimates approximately USD 30 billion in crypto scam volume during 2025 alone. However, fraud reporting gaps remain significant. Victim underreporting due to embarrassment, uncertainty, or delayed discovery means the true scale of harm may be materially higher. In some categories, underreporting could push total harm as much as 85% higher than what observable datasets capture.

Overlaying these figures is a dramatic behavioral shift. TRM observed a roughly 500% increase in AI-enabled scam activity over the past year. This increase reflects rapid integration of generative AI into fraud pipelines — not a marginal adoption curve.

Synthetic trust at industrial scale

The defining input to most successful crypto scams — and one of the most psychologically damaging to victims — is not infrastructure: it’s trust. Pig butchering schemes, high-yield investment fraud, and romance-based scams rely on carefully constructed emotional credibility. Historically, this required sustained human interaction, with operators maintaining multiple online personas, cultivating relationships over weeks or months, and carefully timing financial requests.

Generative AI compresses and multiplies this process.

AI systems can generate persuasive narratives tailored to cultural context and language — sustaining simultaneous conversations with hundreds of victims, while maintaining coherent memory and tone across those interactions. Automated translation tools eliminate linguistic barriers that previously limited geographic reach. And fraud networks can deploy dynamic engagement models across continents without scaling human staffing proportionally.

This creates what can be described as synthetic trust at scale. The interaction feels personal, but is algorithmically generated — and industrially replicated.

How AI compresses the fraud lifecycle

Artificial intelligence does not merely increase outreach volume; it accelerates the entire fraud lifecycle.

Reconnaissance becomes automated through data scraping and signal prioritization. Outreach messages are dynamically generated and optimized for engagement rates. Grooming interactions persist without fatigue or scheduling constraints. Extraction phases leverage AI-generated spoofed platforms and dashboards that mirror legitimate exchanges with high fidelity.

Money laundering also benefits from automation. Criminal networks are increasingly integrating scripting tools and automation into cross-chain routing strategies. While not all laundering relies on advanced machine learning, the broader availability of automated routing tools lowers the barrier to executing structured dispersal strategies. Funds can move across decentralized exchanges, bridges, and liquidity pools with greater speed and less manual coordination.

This means investigators now face shorter windows between initial victim payment and complex dispersal across chains.

The economic implications of automation

AI also fundamentally changes fraud’s cost structure. Human operators require training, infrastructure, compensation, and oversight. AI systems, once deployed, operate continuously and autonomously. 

  • The marginal cost of engaging an additional victim approaches zero
  • Script optimization becomes data-driven
  • Campaign variants can be tested rapidly

This cost compression enables smaller criminal groups to achieve scale previously reserved for large, centralized scam centers. Fraud-as-a-service ecosystems further modularize the process, allowing content generation, infrastructure hosting, and laundering strategies to be packaged and distributed.

The result is democratized capability: deception at scale is no longer exclusive to well-resourced syndicates.

Strategic asymmetry: An unbalanced playing field between attackers and defenders

AI-enabled fraud introduces a form of strategic asymmetry between attackers and defenders.

Attackers can afford to experiment

With generative AI and automation tools, criminals can launch thousands of low-cost micro-campaigns, test different narratives, and optimize for conversion in real time. Failed attempts carry little downside, and success requires only a small percentage of victims to respond.

Defenders are often constrained

Defenders operate under very different constraints. Compliance teams and law enforcement agencies must adhere to legal and ethical standards, due process, and internal controls. False positives carry real regulatory, financial, and reputational costs. Investigative resources are finite, and actions must be defensible.

This imbalance — speed and scale on one side, accountability and precision on the other — reshapes the fraud landscape.

Addressing it requires more than incremental improvements to existing controls. It underscores the importance of defensive AI within blockchain intelligence. The same computational capabilities that enable fraud — automation, pattern recognition, and rapid iteration — must also power detection, entity clustering, anomaly identification, and near real-time network mapping. Only by matching machine-scale threats with machine-scale analysis can investigators and compliance teams begin to close the gap.
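To make the idea of machine-scale analysis concrete, here is a minimal sketch of one common building block: unsupervised anomaly detection over simple per-wallet behavioral features. The features, synthetic data, and thresholds below are illustrative assumptions, not a description of TRM’s models, and the sketch assumes Python with numpy and scikit-learn available.

```python
# Minimal sketch: flagging anomalous wallet behavior with an unsupervised
# model. Feature choices, synthetic data, and thresholds are illustrative
# assumptions only, not production methodology.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical per-wallet features: mean transfer size (USD),
# daily fan-out (distinct counterparties), median seconds between sends.
normal = np.column_stack([
    rng.lognormal(6, 1, 1000),      # typical transfer sizes
    rng.poisson(3, 1000),           # typical fan-out
    rng.exponential(3600, 1000),    # human-speed pacing
])
automated = np.column_stack([
    rng.lognormal(8, 0.2, 20),      # uniform, larger transfers
    rng.poisson(40, 20),            # high fan-out dispersal
    rng.exponential(30, 20),        # machine-speed pacing
])
X = np.vstack([normal, automated])

# Isolation Forest treats points that are easy to isolate as anomalous.
model = IsolationForest(contamination=0.02, random_state=42)
labels = model.fit_predict(X)       # -1 = flagged for analyst review

print(f"{(labels == -1).sum()} wallets flagged out of {len(X)}")
```

In practice, flagged wallets are a starting point for clustering and human review, not a verdict.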

What AI doesn’t change: Using the transparency of the blockchain to fight AI-enabled fraud

Despite the increased speed and scale AI enables for bad actors, blockchains retain a structural advantage for defenders: transparency.

Transactions are recorded on a public, immutable ledger. Funds move through observable pathways and, in most cases, ultimately intersect with exchanges, stablecoin issuers, or other liquidity nodes. Even AI-enabled fraud — no matter how automated or sophisticated — leaves forensic traces on-chain.

Artificial intelligence may accelerate fraud, but it does not eliminate traceability.

In fact, the more automated an operation becomes, the more data it generates. Patterns emerge. Infrastructure overlaps. Reused wallets, common service providers, and recurring behavioral signatures create investigative footholds for analysts equipped with the right tools.
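As a simplified illustration of how those footholds become clusters, the sketch below groups addresses that share any observed piece of infrastructure, such as a common exchange deposit address or hosting provider, using a basic union-find structure. The addresses and attributes are invented for the example; real-world clustering draws on far richer heuristics and context.

```python
# Minimal sketch: grouping scam-linked addresses that share infrastructure
# (e.g., the same deposit address or hosting provider). The observations
# and the shared-attribute heuristic are hypothetical, for illustration.
from collections import defaultdict

# Hypothetical observations: (scam_address, shared_attribute)
observations = [
    ("0xScamA", "deposit:0xExch1"),
    ("0xScamB", "deposit:0xExch1"),    # same cash-out point as A
    ("0xScamB", "host:panel.example"),
    ("0xScamC", "host:panel.example"), # same phishing-kit host as B
    ("0xScamD", "deposit:0xExch9"),    # unrelated so far
]

# Union-find: addresses sharing any attribute collapse into one cluster.
parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path compression
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

for addr, attr in observations:
    union(addr, attr)

clusters = defaultdict(set)
for addr, _ in observations:
    clusters[find(addr)].add(addr)

for members in clusters.values():
    print(sorted(members))  # A, B, C form one cluster; D stands alone
```

Clusters built this way are leads rather than conclusions, which is why the human layer described next matters.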

At the same time, technology alone is not sufficient. Human judgment remains essential. Attribution decisions require contextual intelligence. Victim recovery demands coordination across exchanges, issuers, and law enforcement. Enforcement actions must meet evidentiary thresholds that machines cannot independently determine.

AI can surface signals at scale. But interpretation, prioritization, and accountability still require experienced investigators.

The future of crypto fraud will be defined by automation, velocity, and industrial coordination. The defensive response must match that sophistication with equally advanced analytics, cross-sector collaboration, and disciplined investigative oversight.

{{horizontal-line}}

Frequently asked questions (FAQs)

1. What is AI-enabled crypto fraud?

AI-enabled crypto fraud refers to scams and financial crimes that use artificial intelligence (AI) to automate, personalize, and scale deceptive activity involving digital assets. AI tools such as large language models, deepfake generators, and automation scripts enhance traditional fraud tactics rather than replacing them.

2. How does AI increase the scale of crypto scams?

AI increases scale by reducing the need for human labor. Generative AI can create phishing emails, fake investment websites, and multilingual messages instantly. Automation tools allow fraudsters to engage hundreds or thousands of victims simultaneously, lowering marginal costs and increasing campaign efficiency.

3. Did crypto fraud increase in 2025?

Yes. In 2025, illicit crypto volume reached approximately USD 158 billion, up nearly 145% year over year. Scam-related activity accounted for roughly USD 30 billion, with reporting gaps suggesting the true figure may be materially higher.

4. What types of scams use artificial intelligence?

Common AI-enhanced scam types include:

  • Investment and high-yield crypto fraud
  • Romance and pig butchering scams
  • Executive impersonation using deepfake audio or video
  • AI-generated phishing campaigns
  • Automated laundering and cross-chain routing

AI improves speed, realism, and targeting precision across these categories.

5. What is “synthetic trust” in crypto fraud?

Synthetic trust refers to AI-generated credibility that appears personal and authentic, but is produced algorithmically. Fraud networks use AI to simulate emotional relationships, investment expertise, or institutional legitimacy at scale, increasing victim confidence.

6. Does AI make crypto transactions untraceable?

No. Artificial intelligence may accelerate fraud, but blockchain transactions remain immutable and permanently recorded on public ledgers. Funds typically pass through exchanges, stablecoin issuers, or liquidity providers, creating forensic traces that investigators can analyze.

7. How does AI create asymmetry between attackers and defenders?

Attackers can experiment rapidly and tolerate failure, launching thousands of low-cost campaigns. Defenders, including compliance teams and law enforcement, must minimize false positives, follow due process, and meet evidentiary standards. This creates an imbalance in speed and operational flexibility.

8. How can blockchain intelligence counter AI-enabled fraud?

Defensive AI within blockchain intelligence platforms can:

  • Detect anomalous transaction patterns
  • Cluster related wallets and entities
  • Map cross-chain fund flows
  • Identify infrastructure reuse
  • Prioritize high-risk activity in near real time

These tools help investigators match machine-scale threats with machine-scale analysis.
