Good news is so rare these days, you don’t quite know how to take it. You want to celebrate, but a rival instinct tells you it’ll be pulled back somehow, the same feeling you get when your team scores a late winner, but you’re filled with instant dread that the goal will be overturned on a video replay.
I confess that is how I responded to the double legal blow dealt this week to Meta, the company that owns Facebook and Instagram, when two US juries on successive days found against it in a pair of landmark cases. First came a verdict in New Mexico, fining the company $375m (£280m) for enabling harm, including child sexual exploitation, on its platforms and for misleading consumers about their safety. Twenty-four hours later, jurors in California awarded $6m in damages to a young user who had argued that Meta (along with YouTube) had deliberately designed addictive products that had hooked her from childhood, causing her grave harm.
Campaigners were thrilled, believing they had at last made a breakthrough in their long battle to tame the tech companies that shape so much of our daily lives – influencing what we know of the world, how we talk to others and how we feel about ourselves. I spoke the day after the California verdict to Frances Haugen, the former Facebook employee turned whistleblower who, by releasing 20,000 pages of internal documents in 2021, provided clear evidence that the company knew its platforms were causing harm, whether damaging children or destabilising democracy, but went ahead anyway in pursuit of “astronomical profits”. Haugen told me that Meta could be facing its “asbestos moment”, with its products deemed toxic and facing legal payouts amounting to, by her calculation, as much as a trillion dollars – a sum that, she says, would make “bankruptcy” a genuine possibility.
Before assessing the likelihood of that outcome, it’s worth reminding ourselves of the toxicity. Alongside Haugen’s files, a useful text is Careless People, the 2025 memoir written by fellow Facebook whistleblower Sarah Wynn-Williams. There she describes how the company, able to track users’ activity on and off the platform, could see when, for example, girls aged between 13 and 17 deleted a selfie. Realising that signalled the girls’ dissatisfaction with their appearance, the company saw a way to monetise that unhappiness. For a fee, a cosmetics company could serve a beauty ad to those children at that very moment.
Facebook did not hide this behaviour; it boasted of it. Wynn-Williams reveals how Facebook made a presentation for an Australian client, bragging that its ability to monitor users’ online lives – their posts, their photos, their conversations with friends – enabled it to know exactly when teenage girls were feeling “worthless”, “insecure”, “stressed”, “defeated”, “anxious”, “stupid”, “useless” and “like a failure”. Those were optimal moments for selling.
Dissenting voices within the company expressed unease, only to be dismissed. The court in New Mexico heard how a former Meta employee wrote to Mark Zuckerberg, founder and CEO, urging him to see the danger in allowing young girls access to a cosmetic surgery filter on Instagram that let users see how they would look with bigger eyes or thicker lips. The colleague emailed to say that one of his daughters had been “hospitalised twice for body dysmorphia” and that, when it came to body image, “the pressure on them and their peers coming through social media is intense”. Zuckerberg was unmoved. He said it would be “paternalistic” to limit users’ “ability to present themselves in these ways”.
Haugen, who used to work in the company’s civic integrity team, told me how colleagues might propose a small tweak that would substantially reduce the harm the platform was doing. But if that tweak – say, not sending notifications to children late at night, urging them to come back to Instagram – caused so much as a 1% drop in user engagement, the bosses would veto it. As Haugen puts it: “Mark said the most important thing is increasing time spent on the platform.”
So it’s no surprise that so many have welcomed this week’s court decisions. At long last, the Davids taking on the social media Goliaths have found a way around the so-called liability shield that had protected them for decades.
Passed in 1996, section 230 of the US Communications Decency Act established that tech companies could not be held responsible for the content posted on their platforms, any more than you could hold the Post Office responsible for the contents of an abusive letter. The California case, in particular, swerved past that shield by focusing not on the content – this or that unpleasant post – but rather on the content recommendation system, meaning the machinery that determines what users see.
That machinery – addictive by design, whether it’s automatic video play or the infinite feed that encourages perpetual scrolling – is entirely devised and operated by the tech companies. Which means they are liable for the harm it does. As lawyer Ravi Naik, who acts for Wynn-Williams and others, put it to me: “These systems are made by people. These are not abstract entities, handed down by the gods. This is about the decisions of people and accountability for the choices they made. Isn’t that what the law is for?”
What of my instinctive worry that this win could be overturned, VAR-style? It’s true that this week’s verdicts will be appealed, and that they could work their way up the system until they reach the US supreme court which, in its current, Trump-shaped form, could rule in big tech’s favour. It’s true, too, that years of legal wrangling will allow the tech firms to keep on doing what they’ve been doing and to keep making billions. But legal experts say jury verdicts are less prone to being overturned than judges’ rulings. And, with thousands of similar cases in the pipeline, it would take only a minuscule fraction of the US’s teenagers to combine in a successful class-action lawsuit to devastate Meta. Haugen has done the maths: 150,000 teenagers awarded $6m each would leave Meta with a trillion-dollar bill.
Another worry: this is only the US – what about the rest of the world? Admittedly, while the likes of the UK and EU have stringent rules, enforcement has been lacking. European regulators have been fearful of the US tech behemoths, just as European governments have been scared of the Trump administration. But that could be changing. As Europeans stand aside from Trump’s disastrous war on Iran, there are signs they are becoming readier to assert their own “digital sovereignty”. Others are doing that already: note Australia’s ban on social media for under-16s, a move Indonesia is following starting Saturday.
Perhaps the largest concern is AI. Is it possible that the law has finally landed a punch on old social media platforms just as a newer, greater menace enters the ring? Not for the first time, Zuckerberg promises what to him is a bright new future, but which, to almost everyone else, sounds like a dystopian nightmare. He wants to see AI become a godlike “superintelligence” more powerful than the human brain, and looks forward to the day when AI fills the role now played by our friends. Surely this week’s legal victories do nothing to curb that threat?
Don’t be so sure. The courts have now ruled that tech firms are liable for their systems, and AI, says Naik, “is entirely a human-designed system”. Every choice that led, for example, Elon Musk’s Grok to produce fake nude images of real women to order was a human one – for which Grok’s creator is now being held accountable, in the form of lawsuits filed by both US authorities and individuals who say they were abused by the chatbot, among them Ashley St Clair, the mother of one of Musk’s children.
Of course, the broligarchy consists of determined men with unfathomably deep pockets and a friend in the White House. No one should assume they will fold quickly or easily. But in the long war against those who have done so much to corrode 21st-century life, this week brought an important victory – and we should celebrate it.
-
Jonathan Freedland is a Guardian columnist
-
Do you have an opinion on the issues raised in this article? If you would like to submit a response of up to 300 words by email to be considered for publication in our letters section, please click here.
