Court Rules Meta's AI Tools Actively Fueled Fraudulent Investment Ads

Meta's AI isn't just a tool—it's now legally a maker of fraud. This landmark ruling could reshape accountability for generative AI across Big Tech.

A US court has ruled that Meta’s AI-powered advertising tools actively shaped fraudulent investment content rather than merely hosting it. The decision strips the company of legal protections under Section 230 of the Communications Decency Act. This exposes Meta to potential securities fraud claims under Rule 10b-5, marking a significant shift in how platforms may be held accountable for AI-generated content.

The case, known as *Bouck v. Meta*, hinged on whether the company's tools went beyond passive hosting. The court determined that Meta's AI systems materially developed the misleading investment ads, making the platform potentially liable as the "maker" of fraudulent statements. This aligns with the "maker" doctrine, under which the entity with ultimate authority over a statement's content can be held responsible for its false claims.

The ruling follows a similar legal theory that survived dismissal in *Forrest v. Meta*. While courts have yet to settle whether AI-driven ad platforms can be held liable under securities law, the decision signals growing scrutiny of generative AI's role in fraud: regulators and plaintiffs are increasingly targeting the infrastructure behind AI-generated scams rather than individual bad actors. Other tech giants, including Alphabet, Snap, TikTok, and X, also use generative AI in their advertising products, so the ruling could set a precedent that leaves them vulnerable to similar legal challenges.

With its Section 230 defence rejected, Meta now faces potential securities fraud lawsuits under Rule 10b-5. The decision underscores the legal risks of AI-generated advertising content, and companies relying on similar tools may need to reassess their liability exposure as regulators focus on the systems that enable fraud.
