A startling story is rocketing across social feeds via posts on X. But despite the wide sharing, no police filings or reputable news reports corroborate it. Here's what we actually know, and what's being conflated.

The claim

Multiple viral posts on X assert that “a man on Tinder created a fake woman using Higgsfield AI, matched with 150 men, and has scammed $4 million so far.” None of the posts include names, a jurisdiction, a police report, or a court filing; they circulate as bare claims or short videos with captions. 

What we could (and couldn’t) verify

  • No official record: A search of recent federal releases and mainstream coverage turns up no case matching the exact “Higgsfield + Tinder + $4M + 150 men” description. Notably, the U.S. Department of Justice did announce an unrelated case in July 2025: a Whittier, California man charged with using dating apps (including Tinder) to defraud victims of more than $2 million over several years. Those filings do not mention Higgsfield or any generative‑AI tool.  
  • Independent reporting on the $2M case: Local and national outlets (KABC/ABC7, The Guardian, Los Angeles Times) covered that Whittier arrest in late July, again with no reference to Higgsfield or to AI‑generated personas. It’s a different case with different facts.  
  • About Higgsfield (the company): Higgsfield is a real AI startup founded by former Snap AI lead Alex Mashrabov, best known for a video model (Diffuse) and a photorealistic image model called Soul. Public materials and tech‑press coverage describe creative features and high‑aesthetic outputs—not fraud. There’s likewise no credible reporting linking the firm to a $4M Tinder scam.  
  • How the rumor may have spread: One popular video touts a "Tinder ad built with Higgsfield in 47 minutes," which appears to have been repurposed in some posts with sensational captions. That promotional content is not evidence of a crime; the leap from "AI‑made ad" to "$4M scam" is unsupported. (This is an inference based on the posted video and the timing of the captions.)

Bottom line: As of publication, there’s no verifiable evidence for the specific claim that someone used Higgsfield to fabricate a woman on Tinder and steal $4 million from 150 men. Treat it as an uncorroborated viral rumor unless and until a law‑enforcement release or courtroom filing surfaces with names, dates, and charges. 

The real—and rising—problem: AI‑assisted romance fraud

Even if this particular story doesn’t hold up, AI‑enabled romance scams are very real and growing:

  • The FBI’s Internet Crime Complaint Center (IC3) logged $16.6 billion in reported losses across all cybercrime in 2024, up 33% year‑over‑year. “Confidence/Romance” scams alone accounted for about $672 million in reported losses.  
  • The U.S. Federal Trade Commission reports $12.5 billion in overall consumer fraud losses in 2024, a 25% jump from 2023—a sign that more people who encounter fraud are actually losing money, even though complaint volumes have stayed roughly steady.
  • Reporting and research show criminals increasingly seeding dating platforms with AI‑generated profile photos and even using deepfake audio/video to “verify” identities over calls. Platforms are experimenting with stronger ID checks in response.  

Why unverified claims spread so easily

Sensational, screenshot‑ready narratives thrive on short‑form video platforms and X, where proof is often replaced by plausibility. Posts that reference a hot‑topic tool (like a buzzy AI model) ride the algorithm and can be mistaken for reporting. Without names, documents, or locations, though, these stories are not evidence—they’re claims.

If you cover such stories, the safest framing is: Viral posts allege X; no corroboration from authorities or major outlets as of [date]. Then pivot to documented cases and published data to inform readers and avoid amplifying misinformation. The Whittier case is a ready example for the dating‑app context, but it should not be conflated with the Higgsfield rumor. 

How to spot (and short‑circuit) AI‑assisted catfishing

  • Move to live video early. Scammers resist real‑time, spontaneous verification. If they agree but the video looks too perfect or badly synced, treat it as a red flag—deepfake tools are improving but still glitch under pressure.  
  • Study the photos. Watch for visual artifacts across “different” images: identical lighting, repeated backgrounds, inconsistent earrings/teeth, distorted accessories—common tells of AI‑generated portraits.  
  • Refuse money requests—of any kind. No fees, urgent bills, “investment opportunities,” or crypto transfers. Direct them to official channels if they claim emergencies. The DOJ Whittier indictment details classic investment‑style lies—no fancy AI needed.  
  • Check the trend lines, not the hype. Losses are rising across multiple fraud categories; skepticism and second‑factor verification save money and heartache.  

The takeaway

  • The “Higgsfield Tinder $4M” story is unverified at best.
  • A separate, real federal case—$2M via dating apps—has no Higgsfield or AI component in filings.  
  • AI does amplify romance fraud risks, but the smarter coverage links viral rumors to hard evidence: court documents, agency reports, and platform policy changes.  

If you encounter a post like this in the wild, ask: Who’s named? What’s the case number? Which agency? Where’s the docket? Stories that can’t answer those basics usually aren’t stories yet—they’re claims waiting for facts.
