Why the two cases against Meta and YouTube are even bigger than you think.
And why lots of digital rights groups are pretending not to notice.
The two cases that ruled against Meta and YouTube are a fundamental turning of the tide against social media platforms for two reasons, one obvious and one less so.
The first reason is that there is now a clear precedent in US law (which is really the only place that counts, as all the major social media platforms are domiciled in the US) that social media platforms intentionally presented a public health risk to children and young people. This is huge: these platforms are intentionally harmful and are, in effect, criminally negligent.
Comparisons with tobacco companies are now unavoidable. And long overdue. So far, so bad.
The second reason is that it blows apart the strange legal anomaly in US law that was fundamental in enabling these criminally toxic business models in the first place: Section 230.
You may have heard about this little clause, but it takes some time to fully unpack why it has been so extraordinarily destructive – and so astonishingly lucrative for digital platforms.
What is especially shocking here is that many leading groups and figures in US digital rights still defend Section 230, for example the Electronic Frontier Foundation and Wikimedia, to name but two. That is a scandal – and an affront not just to young people, but humans of all ages.
That’s why a lot of digital rights groups have been strangely silent about these cases – and it goes a long way to explain why they have been so incapable of mounting an effective challenge to big tech over the past twenty years.
We’ll come to that later.
The two cases
The two legal cases were actually quite different, but also devastatingly complementary.
In New Mexico, the state’s Attorney General, Raúl Torrez, accused Meta of creating subscription services that became a haven for child predators. Although children weren’t supposed to have accounts on its platforms, parents could set up subscription accounts for their children and so sidestep the rules. Predictably, on a vast, largely unregulated platform, these accounts attracted older male predators – and in some cases, parents actually encouraged it. Staff and users raised numerous warnings, but were ignored.
Torrez said: “Meta executives knew their products harmed children, disregarded warnings from their own employees and lied to the public about what they knew.”
In California, the case was brought by a teenager, who argued that she had become addicted to social media platforms from the age of 6 – and this was a direct result of design features in social media apps, such as infinite scroll, algorithmically generated recommendations and autoplay videos. Crucially, internal documents from Meta and YouTube showed that executives knew the harmful impact of their apps on children, but did nothing to change them.
Both of these legal actions made compelling cases that social media companies had built infrastructures of harm towards children. The damages in the New Mexico case were far greater (although, at $324m, still small change for Meta), but the California case is likely to have wider ramifications.
That’s because the legal team took direct aim at Section 230 – which has acted like a protective shield for internet companies for 30 years – and they won.
Section dissection
Section 230 emerged out of an early internet moral panic among US Republicans and religious groups about kids and internet pornography (which looked hysterical and puritanical at the time but, it has to be said, now looks highly prescient). That panic escalated into a very dubious law: the Communications Decency Act (CDA) of 1996.
In US law, the precedent had been set at that point that internet companies – the digital giants of the time were AOL (which, as it happens, I worked for back then, before it sank without trace), CompuServe and a few others – were not responsible for content posted on message boards by their users, as long as they had not in any way altered it. In effect they were being regulated as utilities, like a telephone company, rather than as content providers, like a TV channel.
Which kind of made sense then; it allowed the early internet to grow and thrive without the threat of huge libel actions. What Republicans intended with the new law, however, was for online platforms to take down indecent content that could be harmful to children. Of course these early internet companies refused – because if they took content down, they could be treated as publishers and held responsible for it. They were supported by California politicians, anxious to keep that early internet boom booming (these were the early days of what became the dot-com bubble).
Lobbyists for internet companies argued the proposed law was an egregious attack on that great American shibboleth of free speech: the First Amendment.
And in a desperate attempt to get internet companies on board, a grubby compromise (is there any other kind?) was hatched: internet companies could take down and/or interfere with users’ content – and would still not be responsible for it:
“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” (47 U.S.C. § 230(c)(1))
In other words, internet companies could manipulate any users’ content in any way they wished with no accountability whatsoever. And this was all to protect free speech.
How to have your cake and eat it. And then scoff it down in one greedy mouthful.
In 1997, just a year after it became law, the Communications Decency Act was ruled unconstitutional by the Supreme Court for its sweeping constraints on free speech. But Section 230 was spared, because it apparently protected free speech – so it stayed on the statute book as a stand-alone clause with no parent law attached. Weird, huh?
And Section 230 is still there to this day.
btw reclaimed | systems is completely free – but you can also pay too. As it happens, my small group of paid subscribers are all incredibly amazing people in surprisingly different ways – and you could be one of them!
Impunity meets profit - what could possibly go wrong?
Section 230 was already a very bad idea when internet companies’ only revenue was subscription fees for internet connections. But when online advertising suddenly became stunningly lucrative, starting with Google’s AdWords programme (which is still Google’s principal profit spigot), things got very much worse.
Google, and shortly afterwards Facebook, quickly realised that the easiest way to maximise revenue was to maximise user “engagement”. And the way to do that was to manipulate content feeds to keep their users hooked.
As these companies weren’t responsible for the content in those feeds, they could fill them with whatever they liked, which is precisely what they did. And I’m guessing you may have noticed the consequences in your own feeds for the last fifteen years or so.
Furthermore, these platforms had every incentive to allow their massively profitable advertising systems to be used by anyone – including criminal gangs – to sell harmful and deceptive content as entirely legitimate advertising.
In other words, Section 230 enabled these platforms to become the core foundational infrastructure for a global toxic-content industrial complex. It’s abundantly clear from thousands of internal documents that these companies were well aware of this, not least because they made tens of billions of dollars in profit from it every year.
Section 230 gave these companies a copper-bottomed guarantee that they simply didn’t need to give a shit. Which is exactly what they didn’t do. Give a shit, that is.
And to a great degree they built their empires on that guarantee.
We know this from revelations such as Facebook’s own research concluding that fraudulent advertising was causing harm – but, as removing it would have hit up to 20 per cent of their revenue, they decided not to.
We also know that Instagram was actually showing vulnerable teenage girls content classified as “eating disorder adjacent”.
We also know that Meta’s own research showed that its content was causing harm to young people – but they simply buried it.
In her shocking account of her time at Facebook, Sarah Wynn-Williams recounts how Instagram actually created a special advertising slot so advertisers could target girls and young women with beauty products – right after they took a selfie but didn’t post it.
How sick does a company culture have to be to think that through?
Free speech in the breach
The idea that Section 230 was in any way protecting free speech is risible. It wasn’t at all; Section 230 also allowed social media companies to take down any content whenever they felt like it. Which they frequently did – and do – with algorithms that arbitrarily ban content they deem annoying or politically inconvenient. Like this case from last year, when Meta decided to shut down content from queer groups, on abortion advice, and on reproductive health.
Or maybe content was just banned by mistake – but who knows? There was no one to ask, much less appeal to. Because the social media platforms were not legally responsible, they didn’t have to explain themselves.
But they still allowed shockingly harmful content to stay up – as was revealed with Instagram’s infamous whitelist, when, for example, the Brazilian footballer Neymar was allowed to post revenge-porn images of a woman who had accused him of rape.
Remember all those Senate hearings a few years back, with Zuckerberg pretending to look apologetic? He was essentially saying: yes, well, but Section 230. Free speech, you know.
He was holding the ultimate get-out-of-jail-free card.
I’ve researched disinformation and harmful content for around a decade, and if you’re wondering why the digital giants have got away with such egregious, systematically abusive behaviour for so long, Section 230 is pretty much it.
Section objection
And here’s why these two trials were so significant.
In both these cases, Meta and Google trotted out the same Section 230 defence they always do: it wasn’t their content, it’s all very complicated, what’s a poor social media platform to do, etc, etc.
But it didn’t work.
And in California in particular, that’s because the lawyers made a groundbreaking case. They argued it wasn’t the content that was at fault here; it was the infrastructures that served up the content.
And the jury said: yes, that’s exactly right. Of course these companies are entirely responsible for that.
Both cases will be vigorously appealed – with all the vigour multi-trillion-dollar behemoths can possibly muster. Which is quite a lot of very expensively lawyered vigour.
But if they stick, these rulings will be momentous – and there is a whole bunch of other cases in the pipeline. Some are class-action lawsuits, in which social media companies could end up paying out many billions to many thousands, even millions, of claimants (alas, most likely only to US citizens).
So that’s potentially a very good end, to a very nasty story. Maybe...
Uncivil society
Yet you’ll notice that very few digital rights or “civic tech” organisations have said anything about it.
That’s because a shocking number of them are actually fervent supporters of Section 230.
Here’s what the august Electronic Frontier Foundation, arguably the founding organisation of the digital rights movement, says about it:
“For more than 25 years, Section 230 has protected us all: small blogs and websites, big platforms, and individual users. The free and open internet as we know it couldn’t exist without Section 230.”
The EFF even claims of Section 230:
“It does not protect companies that create illegal or harmful content.”
Which is pretty much exactly as disingenuous as the internet companies themselves.
And here’s Wikimedia (the parent foundation of Wikipedia):
“Wikipedia and much of the modern internet could not exist without Section 230, an essential US law that protects internet platforms from lawsuits against the content their users share online and content moderation decisions regarding it.”
If that were the case, how come Wikipedia has not been taken down in other countries, which have nothing like Section 230 on their statute books? And how come the place where Wikipedia is currently most under attack is the US?
It’s an entirely specious argument.
Why is this? It could be because many people in digital rights are much closer to big tech companies than they let on.
But there’s also a dirty secret about digital rights activists, and the world of so-called “civic tech”:
Many of the most prominent digital rights figures are deeply committed “cyber-libertarians” – just as much, in fact, as the Silicon Valley oligarchs they are supposedly holding to account.
By cyberlibertarian, I mean it quite specifically in the terms described by the pre-eminent theorist on the topic, David Golumbia (who died shortly after releasing his seminal work of the same name1).
This is a belief that the internet is a fundamentally liberatory technology and therefore a purer distillation of human aspiration than even democratic processes.
So according to Golumbia:
“Cyberlibertarianism is a commitment to the belief that digital technology is or should be beyond the oversight of democratic governments—meaning democratic political sovereignty.”
There is a fundamental paradox at the heart of this (a lie might be a more accurate description), because cyberlibertarianism has always required the proactive complicity of the state for its ideal of the internet to exist.
And of course Section 230 is the perfect example of that (we could also look at publicly funded internet bodies, the entire Trump regime and countless other examples).
Yet despite the current state of the internet and AI - which is a pretty shocking indictment of this thoroughly sinister ideology - these groups haven’t given it up, and they certainly don’t intend to any time soon.
Incidentally, the founders of EFF (John Perry Barlow2) and Wikipedia (Jimmy Wales) are both cited by Golumbia in his original paper on the topic as textbook cyberlibertarians.
This may explain why, despite millions of dollars and euros going into philanthropy networks, conferences, think tanks and campaigns on every digital issue you can think of, civic tech has achieved virtually nothing when it comes to reining in big tech.
In many ways civic tech has simply validated those companies; it offers the promise that the internet could all be so much better, while systematically undermining the means for doing so, such as challenging the power structures that support the current internet, like Section 230.
Now what do we do…
So these two rulings are huge – and we can see other areas where people are successfully pushing back against big tech:
For example, restricting smartphone use for young people – and bans in schools. Twenty-three countries have now successfully introduced such measures, and more are coming. There’s a lot to say about that – and such bans have their limitations, for a number of reasons.
Nevertheless these are huge popular wins for people against big technology – and these recent court verdicts make the case unarguable.
Yet most civic tech organisations say nothing about such measures – or are even critical of them – largely because they interfere with individual freedom.
The EFF (again...) have even launched a “resource hub” against such “misguided” laws.
That’s because they still cling to the idea that these are fundamentally wonderful technologies that have just accidentally fallen into the wrong hands. They are in complete denial about these technologies’ harms – and about the capacity of societies to collectively do something about them.
Another huge area of resistance against big tech is AI data centres: hundreds of these projects have now been stopped by local communities in the US and worldwide. It may well be that this local resistance could stop the AI boom in its tracks (if the coming energy crunch from the war on Iran doesn’t do it first – but that’s another story).
Again, virtually nothing from the world of civic tech. Most of these organisations want nothing to do with these battles.
Instead they’re focused on campaigning for “responsible AI”, “ethical AI”, “sustainable AI” and “Public AI”.
Now, I’ve been closely involved in digital rights and other tech issues for well over two decades – and I can personally guarantee you that none of these AIs has any chance of ever happening.
So people everywhere are winning against big tech; resistance can actually be highly effective. We actually can shape our future.
But we’re not going to get any help from the digital rights organisations that were set up to do precisely that.
Thanks for reading this far…..
Do leave a comment with your views - I get some really thoughtful feedback sometimes, and that makes the whole writing endeavour feel worthwhile…and don’t feel you have to agree with me….
PS: I was going to add some more on steps to take - but I’d gone on long enough already, so I’ll leave that for another post…
One quick note: all my posts are written completely AI-free. I’m keenly aware that a lot of AI models use sentence constructions that I use too, for example: it’s not this; it’s that. But if you are in any way suspicious (and tbh, these days on Substack, why wouldn’t you be), let me assure you: it’s not AI bad; it’s organically bad. And whatever clichéd tropes I use, I was doing them long before any AI was – and I dare say, will be doing them long after.
I even saw today a “certified AI free” certificate, underwritten by blockchain technology – which sounds to me like taking up crack to kick a gambling habit. So for now, you’ll just have to take my uncertified word for it.
1. David Golumbia, Cyberlibertarianism: The Right-Wing Politics of Digital Technology (2024)
Random side note: I did actually talk to John Perry Barlow once, in an art squat in SoHo, New York (which tells you how long ago it was, for such places to have existed). He definitely fitted the cyberlibertarian description, but he also effortlessly fitted in with the radical anti-globalisation crowd there (people actually thought they could stop it then). And I do recall him bellowing more than once that Microsoft was, and I quote, “a fascist corporation”. At the time I felt that was a little strong, although now I see he was, on balance, accurate.




