Houston Museum Hack Highlights Growing Threat of AI-driven Crypto Scams
Today (June 4, 2025), followers of the Houston Museum of Natural Science's (HMNS) Instagram account were met with a disturbing surprise. Instead of dinosaur fossils or interactive science exhibits, the institution's feed featured graphic content and false promises of a "$25,000 in BTC" giveaway supposedly tied to Elon Musk. The posts were accompanied by an incoherent rant and a link to a fraudulent crypto website, part of a well-worn scam type known as a "Musk giveaway."
The museum quickly secured its social media account and took the videos down. But the incident is the latest in a wave of social media hijackings used to promote crypto fraud, a trend now supercharged by generative artificial intelligence (genAI). These scams combine deepfake visuals, synthetic voiceovers, and hijacked platforms to reach unsuspecting victims with sophisticated disinformation at scale.

Museums as soft targets in the age of deepfake fraud
The HMNS attack wasn’t unique — but it was especially jarring. Cultural institutions like museums are high-trust brands with broad digital followings, yet they often lack enterprise-level cybersecurity resources. This combination makes them ideal targets for threat actors seeking to exploit reputational capital for financial gain. And in 2025, that exploitation increasingly involves crypto and AI.
By hijacking a verified Instagram account, scammers gained immediate legitimacy and a pre-built audience. The explicit bait — a deepfake-style video referencing Elon Musk — is part of a known fraud scheme that TRM and Chainabuse, TRM Labs’ open-source fraud reporting platform, have tracked for years. But as TRM’s Head of Fraud Intelligence, Ian Schade, puts it: “We’re now seeing a transformation. Deepfake crypto scams used to be crude. Today, thanks to AI-as-a-service platforms, they’re high-volume, polished operations with multi-language support and fake help desks.”
TRM and Chainabuse: Tracking the AI-fraud surge
According to Chainabuse, reports of genAI-enabled scams rose by 456% from May 2024 to April 2025, compared with the same period in 2023-24, which had itself seen a 78% increase over 2022-23. Taken together, that is roughly a tenfold rise in reports over two years.
These scams often begin with a hijacked or spoofed YouTube or Instagram account. The attacker uploads a manipulated livestream or story, frequently featuring a celebrity like Elon Musk, with a link to a malicious site. Victims are promised free crypto or high-yield returns. In a June 2024 case reported on Chainabuse, a user lost funds to an Elon Musk deepfake livestream. TRM traced the scammer's wallet to MEXC, a centralized exchange known to receive inflows from similar fraud operations. Over USD 5 million in crypto passed through the scammer's network.
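To illustrate the tracing step at a toy scale, the sketch below totals inflows to a reported scam address and shows where the funds move next. Every transfer record, address label, and amount here is hypothetical; real investigations draw on full blockchain indexes and attribution data rather than a hand-built list.

```python
# Toy illustration of the "follow the money" step described above: given a set
# of transfer records (hypothetical data), total the inflows to a reported scam
# address and see where the consolidated funds move next.
from collections import defaultdict

# Hypothetical transfer records: (tx_hash, sender, recipient, usd_value)
transfers = [
    ("0xaa1", "victim_1",  "scam_addr",    12_000),
    ("0xaa2", "victim_2",  "scam_addr",    48_500),
    ("0xaa3", "victim_3",  "scam_addr",     7_250),
    ("0xbb1", "scam_addr", "exchange_dep", 60_000),  # consolidation to an exchange deposit address
]

def trace(address: str):
    inflow = defaultdict(float)   # who sent funds to the address
    outflow = defaultdict(float)  # where the address sent funds
    for _, sender, recipient, usd in transfers:
        if recipient == address:
            inflow[sender] += usd
        if sender == address:
            outflow[recipient] += usd
    return inflow, outflow

inflow, outflow = trace("scam_addr")
print(f"Total inflows:  USD {sum(inflow.values()):,.0f} from {len(inflow)} counterparties")
print(f"Total outflows: USD {sum(outflow.values()):,.0f} -> {list(outflow)}")
```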
TRM has also documented deepfake scams impersonating other public figures including Ripple CEO Brad Garlinghouse, MicroStrategy co-founder Michael Saylor, and Ark Invest CEO Cathie Wood. Many of these scams now incorporate synthetic voice, chatbot support, and persuasive phishing pages powered by large language models (LLMs). In one particularly alarming case, scammers used an AI-generated avatar of a fake CEO to promote a global Ponzi scheme that ultimately collected nearly USD 200 million from victims, many in Southeast Asia.
Real face models and the rise of fraud-as-a-service
The HMNS case fits into a broader and darker trend: the professionalization of online scams using AI. On messaging platforms like Telegram, TRM analysts have observed so-called “real face models” advertising their services to fraud factories and online casino operators. These individuals participate in video calls with potential victims using deepfake overlays to appear more attractive or mimic someone else.
TRM has also uncovered scams where funds from deepfake-enabled operations were routed directly to AI-as-a-service vendors — suggesting that criminals are purchasing access to deepfake tools and actors much like a corporate customer would procure software-as-a-service (SaaS) licenses.
Beyond social media: Grooming, extortion, and multimodal AI attacks
Not all AI scams begin with a museum takeover. TRM’s February 2025 report, The Rise of AI-Enabled Crime, outlines how threat actors are deploying LLMs to automate financial grooming scams, build realistic personas, and generate multilingual phishing campaigns at scale.
In one observed case, a financial grooming scam involving deepfakes received over USD 60 million, primarily on Ethereum. The scammer used a manipulated video feed to build trust during video calls, pushing victims into long-term investment frauds.
“Criminals are now combining deepfake videos, pictures, and voice, LLM-based persuasion, and hijacked platforms to create fraud campaigns that feel eerily legitimate,” says Schade. “The increasingly realistic nature of the content is what makes these scams so effective — and so dangerous.”
What comes next
HMNS was fortunate: no financial losses have been reported. But the reputational risk was real, and the vectors used (account takeover, AI-driven impersonation, crypto bait) are all expanding.
The solution will require more than just better passwords. Public awareness, stronger digital hygiene for institutions, and tighter coordination between platforms, law enforcement, and blockchain intelligence firms are all part of the response. TRM continues to monitor these trends and provide actionable intelligence to support law enforcement around the world.
If you encounter a scam like the one targeting HMNS, especially one involving impersonation, crypto giveaways, or AI-generated content, you can report it directly on Chainabuse. Every report helps strengthen the collective defense.
As Schade concludes, “The scammers are evolving fast — but so are the tools we have to stop them.”
{{horizontal-line}}
AI-driven crypto scams FAQs
What are AI-driven crypto scams and how do they work?
AI-driven crypto scams use genAI tools — such as deepfakes, synthetic voiceovers, and LLMs — to create highly convincing, fraudulent content that tricks victims into sending cryptocurrency to scammers. These scams often involve hijacking trusted institutions’ social media accounts or creating fake websites to promote fraudulent giveaways, fake investments, or phishing campaigns. The AI technology makes scams more polished and harder to detect, enabling scammers to operate at a larger scale.
Why do scammers target trusted institutions like museums and cultural organizations?
Trusted institutions often have large, loyal audiences and established reputations, making them attractive targets for scammers seeking to exploit that trust. Many of these institutions lack enterprise-level cybersecurity resources, leaving them vulnerable to account takeovers or impersonation attacks. By hijacking or spoofing a trusted account, scammers gain immediate legitimacy, making it easier to deceive followers with crypto scams.
What types of AI tools are commonly used in these scams?
Common AI tools in these scams include:
- Deepfake videos that convincingly impersonate celebrities or authority figures
- Synthetic voiceovers that mimic real voices to promote fake crypto giveaways
- LLMs to generate realistic phishing messages and support fake help desks
- AI-as-a-service platforms that let scammers buy or rent deepfake tools, avatars, or even entire fraud campaigns on demand
How big is the problem of AI-driven scams targeting institutions?
According to TRM Labs and Chainabuse, reports of genAI-enabled scams rose by 456% between May 2024 and April 2025, on top of a 78% increase the year before. These scams often leverage hijacked accounts on popular platforms like Instagram and YouTube, then spread fake crypto giveaways and investment opportunities to a wide audience. Some scams have stolen millions of dollars' worth of cryptocurrency, causing significant financial and reputational damage to both victims and the hijacked institutions.
What can institutions and the public do to protect themselves?
Institutions should:
- Implement strong cybersecurity measures, including two-factor authentication (see the sketch after this list) and robust account monitoring
- Educate staff and followers about deepfake scams and phishing tactics
- Collaborate with blockchain intelligence firms and law enforcement to track and disrupt scammers
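To make the two-factor authentication point concrete, here is a minimal sketch of TOTP (RFC 6238), the time-based one-time code scheme behind most authenticator apps. The demo secret and function names are illustrative only; a real deployment should use a vetted library and hardened secret storage rather than this toy.

```python
# Minimal TOTP (RFC 6238) sketch: one way a login flow layers a time-based
# one-time code on top of a password. Illustrative only.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive the current time-based one-time code from a base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval          # 30-second time step
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def verify(secret_b32: str, submitted: str) -> bool:
    """Compare a user-submitted code against the expected one in constant time."""
    return hmac.compare_digest(totp(secret_b32), submitted)

if __name__ == "__main__":
    demo_secret = "JBSWY3DPEHPK3PXP"  # demo value; never hard-code real secrets
    print("Current code:", totp(demo_secret))
```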
The public can:
- Verify offers carefully, especially those promising free crypto or high returns
- Report suspicious activity using platforms like Chainabuse
- Stay updated on scam trends and think critically about offers that seem too good to be true