Article At A Glance
- Internal Meta documents projected roughly 10% of its 2024 revenue — about $16 billion — would come from ads linked to scams and banned goods.
- Facebook accounts for 57% of all social media scam reports, making it the single most exploited platform for online fraud globally.
- Meta removed 159 million scam ads and 10.9 million accounts tied to criminal scam centers in 2025 — but critics say that’s not nearly enough.
- Scam operations have gone fully industrial, with organized crime compounds in Southeast Asia running fraud like a 9-to-5 business — keep reading to find out how they actually work.
- There are specific, actionable steps you can take right now to avoid becoming the next victim of a Meta platform scam.
Social media scams have quietly become one of the most profitable — and most destructive — forms of organized crime on the planet, and the platforms hosting them may be benefiting far more than most people realize.
ScamAdviser, a platform dedicated to helping consumers identify and avoid online scams, has been tracking this crisis as it accelerates. The numbers are staggering, and the systems enabling them are more deeply embedded in the platforms you use every day than the companies involved would like to admit.
Meta Is Earning Billions From the Same Scams Targeting You
This isn’t a fringe theory or a speculative claim. According to a bombshell Reuters investigation citing internal company documents, Meta’s own projections estimated that approximately 10% of its 2024 revenue — roughly $16 billion — would come from advertising linked to scams and banned goods. That single figure reframes the entire conversation about why scams on Facebook and Instagram have been so difficult to eliminate.
The uncomfortable truth is that scam ads are revenue. Every fraudulent product promotion, every fake celebrity endorsement, every too-good-to-be-true investment opportunity — when it runs as a paid ad, Meta gets paid first. The victim gets defrauded second.
The $16 Billion Question: How Much Does Meta Profit From Scams?
Reuters Investigation Key Finding: Internal Meta documents dated December 2024, as reported by Reuters, projected that ads linked to scams and banned goods would account for up to 10% of Meta’s 2024 total revenue — approximately $16 billion. The same documents reportedly showed Meta instructed staff not to take enforcement actions that would threaten more than 0.15% of company revenue.
That 0.15% revenue protection threshold is the detail that should concern every Facebook and Instagram user. It means that internally, there was reportedly a financial ceiling on how aggressively Meta would act against scam advertisers — a hard limit built around protecting the bottom line, not the user.
The Wall Street Journal added more texture to this picture, reporting that Meta’s systems allowed suspicious advertisers to rack up as many as 32 automated strikes for financial fraud before triggering a ban. Thirty-two strikes. For financial fraud. That’s not an oversight — that’s a policy.
Meta disputes the Reuters figures directly, arguing that scam advertising undermines the trust its entire ad business depends on. The company maintains that it has strong financial incentives to fight fraud, not enable it. But the documented internal thresholds tell a more complicated story.
What the Reuters Investigation Actually Found
The Reuters investigation didn’t just surface a revenue estimate. It revealed a documented internal decision-making framework where enforcement actions against bad actors were weighed against their revenue impact. Billions of scam ads were reportedly appearing every day across Meta’s family of apps, and the internal response was calibrated — at least in part — around how much cleaning up those ads would cost the company financially.
Meta’s Internal Rule: Don’t Touch More Than 0.15% of Revenue
To put that 0.15% threshold in context: 0.15% of Meta’s ~$160 billion in projected 2024 revenue is approximately $240 million. That means enforcement actions that would cost Meta more than $240 million in ad revenue were reportedly flagged for caution. Given that scam ads were projected to generate $16 billion, the vast majority of that fraudulent revenue sat comfortably above the enforcement trigger line.
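Those figures are easy to sanity-check. The sketch below reproduces the arithmetic using the article's reported estimates; all inputs are reported projections, not confirmed Meta data:

```python
# Back-of-the-envelope check of the figures reported by Reuters.
# All inputs are the article's reported estimates, not confirmed Meta data.
projected_2024_revenue = 160e9   # ~$160 billion projected 2024 revenue
scam_ad_share = 0.10             # ~10% reportedly linked to scams/banned goods
enforcement_ceiling = 0.0015     # reported 0.15% revenue-protection threshold

scam_ad_revenue = projected_2024_revenue * scam_ad_share
threshold_dollars = projected_2024_revenue * enforcement_ceiling

print(f"Estimated scam-linked revenue: ${scam_ad_revenue / 1e9:.0f} billion")
print(f"Enforcement caution threshold: ${threshold_dollars / 1e6:.0f} million")
# The reported scam revenue is roughly 67x the enforcement ceiling
print(f"Scam revenue vs. threshold: {scam_ad_revenue / threshold_dollars:.0f}x")
```

The ratio is the point: the reported scam-linked revenue dwarfs the reported per-action enforcement ceiling by nearly two orders of magnitude.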
Meta’s Official Response to the Allegations
Meta has pushed back firmly on the framing, stating that scams damage user trust and that trust is the foundation of its advertising ecosystem. The company points to its 2025 enforcement numbers — 159 million scam ads removed, 10.9 million accounts disabled — as evidence of genuine commitment. U.S. lawmakers, however, have called for a formal investigation into Meta’s “facilitation of and profiting from” fraudulent advertising, suggesting the official response hasn’t fully satisfied regulators.
Social Media Is Now the #1 Starting Point for Scams
The Better Business Bureau’s data makes the platform concentration impossible to ignore. Facebook alone accounts for 57% of all online scam reports tied to a specific platform. Instagram adds another 22%. WhatsApp contributes 8%. That means Meta’s three core platforms are the origin point for 87% of social media scams tracked by the BBB — a concentration that points directly at structural, platform-level problems rather than isolated bad actors.
Why Facebook and Instagram Are Scammers’ Favorite Hunting Grounds
The answer is scale, targeting precision, and a low barrier to entry. Facebook’s advertising infrastructure lets anyone — including criminals — precisely target users by age, location, interests, and financial behaviors. A scammer running a fake investment scheme can specifically target users aged 55–70 who have shown interest in retirement planning. That level of targeting capability, available cheaply and at scale, is why Meta platforms dominate scam origin statistics. Instagram adds the visual trust layer — polished feeds, influencer-style content, and celebrity imagery that makes fraudulent offers look credible at first glance.
The $10 Billion Americans Lost in 2023 Alone
Americans reported losing more than $10 billion to online fraud in 2023, according to the FTC — marking the first time that threshold had ever been crossed. Social media was identified as a leading contact method for fraud. The actual figure is almost certainly higher, given that a large percentage of scam victims never report their losses due to embarrassment or lack of awareness about reporting channels.
The Industrial Scale of Online Scam Operations
What’s changed in the past five years isn’t just the volume of scams — it’s the organizational sophistication behind them. These are no longer lone operators running email phishing schemes from a laptop. Modern online fraud, particularly the kind that flows through Meta’s platforms, is backed by criminal enterprises running operations that look more like corporations than crime rings.
Meta itself has described online scamming as “one of the fastest-growing forms of organized crime globally.” That framing is accurate. The infrastructure behind these scams includes HR recruitment pipelines, shift schedules, performance quotas, and management hierarchies — all operating out of physical compounds, primarily across Southeast Asia.
How a Modern Scam Operation Is Structured:
Recruitment Layer: Victims (often themselves trafficked) or willing workers are recruited via fake job ads on — frequently — Meta platforms.
Targeting Layer: Workers are assigned social media personas and trained scripts. Facebook and Instagram profiles are used to establish fake trust with targets.
Engagement Layer: Scammers build relationships over days or weeks before introducing the financial hook — often a fake investment platform or romance-based money request.
Extraction Layer: Once the victim sends money or cryptocurrency, funds are laundered through layered accounts before reaching compound leadership.
Scale: A single compound can run hundreds of simultaneous scam relationships across multiple platforms at any given time.
The Global Anti-Scam Alliance has consistently flagged Meta platforms as the primary recruitment and engagement channels for these industrial-scale operations. The combination of free account creation, powerful targeting tools, and historically high strike tolerances before bans made Meta’s ecosystem the natural habitat for these criminal enterprises.
How Criminal Scam Centers Actually Operate
These scam compounds aren’t improvised. Workers — some of whom are trafficking victims forced to participate — operate during structured shifts, targeting specific demographic profiles identified through social media data. Scripts are refined based on what works. Managers track conversion rates. The most effective emotional manipulation tactics get standardized and distributed across the operation. It is, in every operational sense, a business — one built entirely on defrauding people who trust what they see on their social media feeds.
Southeast Asian Scam Compounds: The Factory Floor of Online Fraud
The physical epicenter of this crisis sits across Myanmar, Cambodia, Laos, and the Philippines, where large-scale scam compounds operate with relative impunity. These aren’t back-alley operations — some compounds are housed in multi-story office buildings behind security fencing, staffed by hundreds of workers running simultaneous fraud campaigns across Facebook, Instagram, and WhatsApp. In late 2024, Meta reported taking down more than 2 million accounts connected to these scam compound networks — a number that sounds significant until you consider the scale at which new accounts can be created.
The Bangkok operation in 2025, conducted jointly with the Royal Thai Police, the FBI, and Britain’s National Crime Agency, resulted in 21 arrests and the disabling of more than 150,000 accounts tied to these networks. It was one of the most significant coordinated enforcement actions against scam infrastructure to date — and it still represented a fraction of the total operational capacity of these criminal enterprises.
Why No Single Platform or Government Can Stop This Alone
The jurisdictional reality of these operations is precisely what makes them so resilient. Compounds operate in countries with weak enforcement frameworks. Workers are sometimes trafficking victims with no agency in the scheme. Funds move through cryptocurrency channels that cross multiple borders before landing. Meta can remove accounts, but account creation is free and takes minutes. Law enforcement can arrest operators, but the compound infrastructure rebuilds. Solving this requires coordinated pressure across tech platforms, financial systems, and international law enforcement simultaneously — and that coordination is still developing.
Meta’s Response: 159 Million Ads Removed in 2025
Meta’s 2025 enforcement announcement was, by any measure, the most aggressive public action the company has taken against scam infrastructure. The company reported removing 159 million scam ads across all categories and disabling 10.9 million Facebook and Instagram accounts associated with criminal scam centers. The scale of those numbers is striking — and they also inadvertently confirm just how saturated Meta’s platforms had become with fraudulent content in the first place.
The New AI Detection Tools Meta Deployed
Meta introduced several new detection and warning tools as part of its 2025 anti-scam push. These include:
- Automated warnings triggered when users receive suspicious Facebook friend requests from accounts displaying scam behavior patterns
- Real-time alerts when scammers attempt to move conversations from Facebook Messenger to WhatsApp — a common tactic used to shift victims to less monitored channels
- Enhanced AI classifiers trained to identify scam ad patterns at the creative and targeting level before ads are approved to run
- Expanded cooperation protocols with law enforcement agencies that allow faster account disabling when criminal investigations are underway
These tools represent a meaningful upgrade from Meta’s previous detection infrastructure. The critical question experts are asking is whether they address the structural incentive problem — because better detection tools deployed within a system that still allows 32 strikes before banning a financial fraud advertiser may catch more scams at the edges while leaving the core pipeline intact.
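Meta has not published how its classifiers work, so any concrete example here is necessarily invented. As a rough illustration of what identifying scam patterns "at the creative level" means in practice, here is a toy rule-based pre-screen over ad text; every pattern and weight is made up for the example, and real systems rely on trained models rather than keyword rules:

```python
# Toy illustration of pattern-based ad screening. This is NOT Meta's
# system; every rule and weight here is invented for the example.
import re

# Patterns loosely based on the red flags discussed in this article
URGENCY = re.compile(r"only \d+ left|expires in \d+ minutes|today only", re.I)
RETURNS = re.compile(r"guaranteed (returns|profit)|double your (money|investment)", re.I)
ENDORSE = re.compile(r"endorsed by|as seen with", re.I)

def risk_score(ad_text: str) -> int:
    """Crude additive score: higher means more scam-like signals."""
    score = 0
    if URGENCY.search(ad_text):
        score += 2   # manufactured urgency
    if RETURNS.search(ad_text):
        score += 3   # impossible financial promises
    if ENDORSE.search(ad_text):
        score += 1   # endorsement framing worth verifying
    return score

ad = "Endorsed by a famous investor! Guaranteed returns. Offer expires in 10 minutes."
score = risk_score(ad)
print(score, "-> flag for human review" if score >= 3 else "-> pass")
```

Even this crude sketch shows why detection alone is insufficient: a screen like this only works if flagged ads are actually blocked, which is exactly where the strike-threshold policy comes in.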
The Bangkok Operation: 21 Arrests and 150,000 Accounts Disabled
The Bangkok operation stands out as a template for what coordinated cross-border enforcement can look like. Meta’s security teams worked directly with the Royal Thai Police, the FBI, and the UK’s National Crime Agency to map the account network, identify physical infrastructure, and time the platform-level takedowns to coincide with the physical arrests. The result — 21 arrests and 150,000 disabled accounts — disrupted operations that had been running continuous fraud campaigns targeting users across multiple countries. Meta has indicated it intends to expand this law enforcement partnership model, though scaling that approach across all active compound locations remains a significant logistical challenge.
$49 Million Earned From Political Deepfake Scams Over 7 Years
The New York Times reported that Meta allowed scammers to run more than 150,000 political advertising scams involving deepfakes and misleading paid content — campaigns that generated over $49 million in revenue for Meta across a seven-year period. These weren’t obscure edge cases. They were paid political-style ads featuring AI-generated video of real public figures, designed to look like legitimate endorsements of financial products, investment schemes, or outright disinformation.
Deepfake scam ads are particularly damaging because they exploit the trust people have in recognizable faces. When a user sees what appears to be a credible public figure endorsing a product in a professional-looking video, the psychological barriers to clicking — and potentially being defrauded — drop significantly. The $49 million figure means Meta was being paid to distribute content it had the technical capacity to detect, across a seven-year window where the deepfake problem was actively being reported and escalated publicly.
What Experts Say Meta Still Needs to Do
The consensus among cybersecurity researchers and digital rights experts is that Meta’s 2025 enforcement actions, while substantial, address symptoms rather than the underlying system design. Removing 159 million scam ads after they’ve been approved and run is reactive enforcement. The ask from experts is proactive structural change — specifically, lowering the financial fraud strike threshold dramatically, implementing mandatory identity verification for advertisers running financial product or investment content, and making real-time scam ad data available to independent researchers who can audit enforcement consistency.
There’s also a growing call for financial accountability — the argument that platforms profiting from scam ad revenue should face liability proportional to the documented harm those ads cause. Currently, Section 230 of the Communications Decency Act provides broad legal protection for platforms hosting third-party content. Reforming that protection specifically for paid advertising — where the platform is an active commercial participant, not a passive host — is a legislative conversation that is gaining traction in both the U.S. and EU regulatory environments.
How to Spot and Avoid Meta Scams Right Now
Understanding the platform-level failures is important context — but it doesn’t protect your money today. Here’s what you can actively do to reduce your exposure to scams running on Facebook and Instagram right now.
1. Treat Urgency in Ads as a Red Flag
Scam ads are engineered around artificial urgency. Phrases like “only 3 left,” “offer expires in 10 minutes,” or “exclusive deal for Facebook users today only” are pressure tactics designed to override your critical thinking before you can verify the offer. Legitimate businesses don’t need to manufacture panic to make a sale.
When you feel that pressure spike while viewing an ad — that physical impulse to act immediately — treat it as a signal to stop and verify, not a reason to click faster. Open a separate browser tab, search the company name independently, and check for reviews on platforms outside of Meta’s ecosystem before engaging with anything financial.
2. Verify Celebrity Endorsements Before Clicking
Deepfake technology has made celebrity endorsement scams extremely convincing. If an ad features a public figure endorsing a product — especially an investment platform, cryptocurrency scheme, or health supplement — assume it may be fabricated until independently verified. Search the celebrity’s name plus the product name in a standard web search. If the endorsement is real, it will exist outside of a single Facebook ad. If the only evidence is the ad itself, treat it as fraudulent.
3. Check the Ad’s Sponsor Page Before Engaging
Every ad running on Facebook has a “Sponsored” label and a linked Page behind it. Before clicking any ad that asks for money, personal information, or a sign-up, click the Page name directly and look at its history. Meta’s “Page Transparency” feature shows when the Page was created, where it’s managed from, and whether the Page name has been changed recently.
What to Look For in an Ad’s Sponsor Page:
Page Age: Legitimate businesses have Pages with history. A Page created within the last 30–90 days running financial or investment ads is a serious red flag.
Name Changes: Scammers frequently recycle old Pages by changing the name. If the transparency section shows a recent name change, treat the Page with extreme suspicion.
Management Location: If the ad is targeting U.S. users but the Page is managed entirely from a country with no apparent connection to the business, that mismatch warrants investigation.
Post History: Scam Pages often have sparse, low-quality post histories — or a sudden burst of posts right before the ad campaign launched. A healthy business Page has consistent, varied content over time.
This check takes about 45 seconds and can save you thousands of dollars. It’s the single most underused protection tool Meta has already built into its platform. The information is publicly visible — most people just don’t know to look for it.
If the Page doesn’t exist independently — meaning it was created purely to run ads with no organic content, community engagement, or business history — close the ad immediately and report it. A real business has a real presence. Scam operations create Pages as throwaway infrastructure, and the transparency data exposes them clearly if you know where to look.
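The checklist above can be condensed into a quick scoring routine. The sketch below encodes this article's rules of thumb in Python; the field values are things you would read manually off the Page Transparency panel (Meta exposes no public API for this), and the 90-day and post-count thresholds are illustrative assumptions, not official guidance:

```python
from datetime import date

def page_red_flags(created: date, renamed_recently: bool,
                   managed_from: str, target_country: str,
                   organic_posts: int, today: date) -> list[str]:
    """Apply this article's sponsor-page checks to values you read
    manually from a Page's 'Page Transparency' panel."""
    flags = []
    if (today - created).days < 90:
        flags.append("Page created within the last 90 days")
    if renamed_recently:
        flags.append("Recent page name change")
    if managed_from != target_country:
        flags.append(f"Managed from {managed_from} but targeting {target_country}")
    if organic_posts < 5:
        flags.append("Little or no organic post history")
    return flags

# Hypothetical example: a brand-new, renamed, foreign-managed Page
# with no organic content trips all four checks.
flags = page_red_flags(created=date(2025, 11, 1), renamed_recently=True,
                       managed_from="XX", target_country="US",
                       organic_posts=0, today=date(2025, 12, 15))
for f in flags:
    print("RED FLAG:", f)
```

Any single flag warrants caution; a Page that trips several at once before asking for money is almost certainly throwaway scam infrastructure.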
4. Never Send Money to Someone You Only Met on Social Media
This applies to romance scams, investment opportunities, crypto platforms, and any other financial request from someone whose entire relationship with you exists inside a Meta platform. The pig butchering scam — where criminals spend weeks or months building a fake romantic or friendly relationship before introducing a fraudulent investment platform — is now one of the most financially devastating scam categories globally. These relationships feel completely real. The emotional investment is real. The money lost is real. The person is not.
The threshold is straightforward: if you have never met someone in person, verified their identity through independent means outside of social media, and confirmed the legitimacy of any financial platform they’re recommending through a licensed financial regulator’s database — do not send money. Not a small amount to test it. Not cryptocurrency because it feels less like “real money.” Nothing. The entire architecture of these scams is built around making the first transfer feel reasonable.
5. Report Suspicious Ads Directly to Meta and the FTC
Reporting scam ads isn’t just self-protection — it’s the mechanism that feeds Meta’s detection systems and builds the regulatory pressure that drives enforcement. On Facebook or Instagram, tap the three dots on any ad and select “Report Ad,” then choose the most accurate category — “Scam or Fraud” is typically the correct option. For financial losses or identity theft resulting from a scam, file a report with the FTC at ReportFraud.ftc.gov. If the scam involved cryptocurrency, the FBI’s Internet Crime Complaint Center (IC3) at ic3.gov is the appropriate additional channel. Every report filed contributes to the documented record that regulators, lawmakers, and researchers use to push for structural platform changes.
The Crisis Will Worsen Before It Gets Better
The convergence of industrial-scale scam infrastructure, deepfake technology that makes fraudulent content increasingly indistinguishable from legitimate content, and platform incentive structures that have historically tolerated high volumes of fraudulent advertising means the near-term trajectory points upward: more scams, more sophisticated operations, reaching more people. The tools to fight back are developing, but they are developing in response to a problem that has already reached a scale most people don’t fully grasp. Staying informed, skeptical, and proactive about what you engage with on social media is no longer optional. It is the most effective individual defense available right now.
Frequently Asked Questions
The following questions reflect what people most commonly ask when they first encounter the scale of the social media scam problem. The answers are grounded in verified reporting and documented platform data.
How much money does Meta make from scam ads?
- Internal Meta documents, cited by Reuters, projected approximately 10% of Meta’s 2024 revenue would come from scam-linked advertising.
- 10% of Meta’s projected 2024 revenue equals roughly $16 billion.
- Meta disputes this figure and maintains that scam advertising undermines the trust its ad business depends on.
- The New York Times separately documented over $49 million earned by Meta specifically from political deepfake scam ads over seven years.
- Meta removed 159 million scam ads in 2025, which implicitly confirms the volume of fraudulent content that had been running across its platforms.
It’s important to distinguish between what Meta acknowledges and what internal documents reportedly show. The company’s public position is that it actively fights scam advertising. The Reuters investigation’s findings, drawn from internal projections, suggest that enforcement decisions were being weighed against revenue thresholds in ways that allowed substantial fraudulent advertising to continue.
Meta has not confirmed the specific $16 billion figure publicly, and the company has challenged the framing of the Reuters report. However, it has not disputed the existence of internal strike threshold policies that limited how aggressively enforcement teams could act against revenue-generating advertisers — including fraudulent ones.
What types of scams are most common on Facebook and Instagram?
The most prevalent scam categories running across Meta platforms include investment fraud (particularly cryptocurrency schemes), romance scams built around extended fake relationship development, fake product advertisements that collect payment without delivering goods, political deepfake ads featuring AI-generated celebrity or politician endorsements, and account takeover phishing attempts disguised as Meta security notifications. Pig butchering scams — long-con romance and investment hybrids — have grown particularly rapidly, with scam compounds in Southeast Asia running hundreds of simultaneous operations targeting Western users specifically.
Deepfake-powered scams represent the fastest-growing threat category. The combination of accessible AI video generation tools and Meta’s massive reach means that a convincing fake endorsement ad can reach millions of targeted users within hours of launch. The platform’s ad approval systems have historically been faster than its fraud detection systems — meaning scam ads often run, collect victims, and get reported before removal catches up.
What is Meta doing to stop scam ads in 2025?
Meta’s 2025 Anti-Scam Enforcement Summary:
Accounts Disabled: 10.9 million Facebook and Instagram accounts linked to criminal scam centers.
Scam Ads Removed: 159 million ads across all fraud categories.
Law Enforcement Operations: Joint action with the Royal Thai Police, FBI, and UK National Crime Agency resulting in 21 arrests and 150,000+ account disablements in Bangkok.
New Detection Tools: AI-powered scam classifiers, suspicious friend request warnings, and alerts when conversations are being redirected from Messenger to WhatsApp.
Scam Compound Focus: Expanded targeting of entire criminal networks rather than individual accounts, including 2+ million account takedowns tied to Southeast Asian compounds beginning in late 2024.
The 2025 enforcement numbers are the largest Meta has publicly reported. They represent a genuine escalation in the company’s stated commitment to reducing scam infrastructure on its platforms. The shift toward network-level takedowns — disabling entire criminal ecosystems rather than individual accounts — is a meaningful tactical evolution, since account-level bans had proven ineffective against operations that could recreate thousands of accounts within days.
However, independent cybersecurity experts have consistently emphasized that reactive enforcement — removing scam content after it runs — needs to be accompanied by proactive structural changes. Specifically, stricter advertiser identity verification for financial and investment content, lower fraud strike thresholds before account bans, and transparent third-party auditing of ad approval processes are the measures most frequently cited as the missing elements in Meta’s current approach.
Whether the 2025 actions represent a turning point or a temporary escalation driven by regulatory and media pressure remains an open question. The structural incentive dynamics that allowed the problem to grow to its current scale have not been publicly addressed in Meta’s enforcement announcements.
How do I report a scam ad on Facebook or Instagram?
On Facebook, click the three dots in the upper right corner of any ad and select “Report Ad,” then choose “It’s a scam or fraud.” On Instagram, tap the three dots on a sponsored post and follow the same reporting flow. For financial losses resulting from a scam, file a complaint with the FTC at ReportFraud.ftc.gov. Cryptocurrency-related fraud should additionally be reported to the FBI’s Internet Crime Complaint Center at ic3.gov. If you’ve been targeted by a romance scam or pig butchering operation, the Global Anti-Scam Organization (GASO) maintains resources specifically for those victims, including assistance with documenting cases for law enforcement referral.
Are social media platforms legally responsible for scam ads?
Current Legal Landscape for Platform Scam Ad Liability:
Section 230 Protection: The Communications Decency Act broadly shields platforms from liability for third-party content — including, in most current interpretations, paid advertising placed by third parties.
The Paid Advertising Distinction: Legal scholars and legislators are increasingly arguing that paid ads — where the platform is a commercial participant, not a passive host — should not receive the same Section 230 protection as organic user content.
FTC Authority: The FTC has authority over deceptive advertising practices and has been increasing scrutiny of platforms that knowingly host fraudulent paid content.
EU Digital Services Act: European regulators have moved further than U.S. counterparts, with the DSA imposing active due diligence obligations on large platforms, including requirements to detect and remove illegal advertising content proactively.
U.S. Legislative Momentum: U.S. lawmakers formally called for a federal investigation into Meta’s “facilitation of and profiting from” fraudulent advertising in 2025, signaling growing appetite for legislative action.
Under current U.S. law, social media platforms carry very limited legal liability for scam ads placed by third-party advertisers. Section 230 has consistently been interpreted to protect platforms from lawsuits based on third-party content, and that protection has generally extended to paid advertising. This is a significant gap — particularly given that platforms actively sell targeting capabilities, approve ad content, and collect revenue from ads that defraud their own users.
The EU’s Digital Services Act has shifted this calculus for European operations, imposing proactive obligations on platforms classified as Very Large Online Platforms — a category Meta clearly meets. Under the DSA, Meta faces real regulatory consequences for failing to detect and remove illegal content, including fraudulent advertising, within defined timeframes. U.S. regulatory frameworks have not yet reached comparable specificity.
The argument gaining traction in U.S. legislative discussions is that paid advertising is fundamentally different from organic user content. When a platform charges money to distribute content, reviews that content for policy compliance, and profits from its distribution, the “passive host” framing that underpins Section 230 protection becomes harder to sustain legally or morally. That argument hasn’t yet produced legislative change, but the 2025 congressional calls for investigation into Meta’s scam ad revenue suggest the window for voluntary platform action before mandatory regulatory intervention may be narrowing.
For individual users, the practical implication is this: if you lose money to a scam ad on Facebook or Instagram, your legal recourse against Meta directly is currently extremely limited under U.S. law. Your primary channels are FTC complaints, state attorney general offices, and — in cases involving wire fraud — federal law enforcement. The legal framework that would hold platforms financially accountable proportional to their documented scam ad revenue does not yet fully exist in the United States, but it is actively being constructed.


