Public Service Announcement: the internet remains neutral
Court cases against social media firms are starting to stick. Perhaps this presents a moment for the audiences who consume content and the people who create it to get back on the same page.
Despite some implicit suggestions to the contrary, the internet itself remains neutral. I say this as a reminder that the data transmission technology that girdles our world, and defines large parts of the activity upon it, is exactly what we, the people, make of it.
The reason for making such a point comes in a week during which two of the world’s largest commercial empires, Meta and Google, which both owe their existence to such technology, have been dealt hefty legal blows.
First, jurors in New Mexico found that Meta had violated the state’s unfair practices law, after Attorney General Raúl Torrez built a case that the company failed to properly safeguard its apps from online predators targeting children.
The total penalty of $375m in this case “was reached after the jury decided there were thousands of violations of the act, each with a maximum penalty of $5,000.” If each violation represents an affected user, the financial maths look grim for Meta once extrapolated across the entire United States. Meta plans to appeal.
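For a rough sense of scale, here is a back-of-envelope sketch, assuming, purely for illustration, that every counted violation was assessed at the statutory $5,000 maximum:

```python
# Back-of-envelope: how many violations the $375m award implies,
# assuming each one was assessed at the statutory $5,000 maximum.
TOTAL_AWARD = 375_000_000      # reported New Mexico penalty, in dollars
MAX_PER_VIOLATION = 5_000      # maximum penalty per violation under the act

implied_violations = TOTAL_AWARD // MAX_PER_VIOLATION
print(f"{implied_violations:,} violations at the statutory maximum")
```

In practice the jury may have assessed many violations below the maximum, in which case the real count would be higher still; the point is simply that the award implies tens of thousands of violations in a single state.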
Then, last night, a jury in Los Angeles ruled that both Meta and Google were liable for the depression and anxiety caused to a young woman who became addicted to their platforms. The jury concluded the two tech companies should pay her $3 million in compensatory damages and another $3 million in punitive damages, with Meta liable for 70% of the total. Again, Meta has said it plans to appeal.
In a memorable quote from the LA case, lawyer Mark Lanier stated: “They didn’t just build apps - they built traps.”
Content unaware
In both cases, legal attention centred not on the content served by the platforms, but on the mechanisms by which it was served. That is an important difference, as platforms have proven adept at avoiding responsibility for content, in a way publishers cannot, by claiming protection under Section 230 of the US Communications Decency Act.
This, then, is where the attention economy has taken us, and with all the powers of hindsight granted to me, it seems inevitable that this is where we would end up, as ever more intrusive and manipulative methods have been employed to keep billions scrolling. It was inevitable, then, that among these vast numbers of users would be those who can be led astray, who do not develop healthy tech habits, who become enveloped in online worlds that do not serve their ultimate interests.
Yet the crumb of comfort we can offer our own publishing industry is that the platforms are not the internet, and the internet is far from its final form. Legal blows that curtail the more invidious eyeball-capturing methods of Big Tech are to be welcomed. I’m not sure this is a “Big Tobacco” moment, but it’s certainly something for Zuckerberg and Pichai to put in their pipes and choke on.
Relatedly, in the UK, where I am based, much debate surrounds the possibility of introducing a social media ban for the under-16s, as has been implemented in Australia. To this end, a government-backed trial has been announced in which 300 volunteer families will try a combination of “social media bans, digital curfews, and time limits on apps” to gauge the effect on minors. This, it must be noted, comes from an administration that is also considering lowering the voting age to 16.
If implemented nationally, this could bring about a scenario in which at the stroke of midnight on your 16th birthday, you are suddenly dropped into a world of Communist Cat memes, people setting their arses on fire for clicks, and many, many people offering their opinions on things they know little about. That’s quite the drop.
We might be trying to end the era of the “screenager”, but the internet literacy the young frequently possess is, in my thinking, a guard against manipulation that someone of my analogue/digital crossover generation can’t fully comprehend, no matter how many hours I’ve spent online gaming.
As the argument would go, just because someone has been beaten to death with a toaster, you don’t ban toasters.
A moment - but is it pivotal?
Speaking as the somewhat erratic chairman of a 15-year-old’s content consumption committee, I can say it’s the relationship you have with the child, and your understanding of them that matters most.
Some degree of literacy about what the various platforms actually are and how they work is useful, but it’s the bond of trust between a child and a guardian that remains the single most important bulwark against some of the internet’s wilder shores, and I say this as someone who read a Marquis de Sade book at 15.
It is, of course, not difficult to scale this view up to the role that responsible media creators and publishers can play, and have played, in society at large.
The scale of coverage given to the two verdicts hints at their potential real effects on the major platforms, particularly since so many similar suits are stacked up behind them in other courts. The financial impact of these two specific awards will be barely noticeable to the social media firms, but the real-world impact on how they do things in future could be huge.
I am also interested to see how it might help the publishing and media industry to regain a voice on the power of good and trusted content, a message which in recent years has seemed drowned out by the real-world power of the platforms and their design decisions to bend content to their ends.
Publishing & Media
Journalists on picket lines
It's a bad week to be a media manager. In Australia, ABC journalists and technical staff walked out for 24 hours yesterday, their first strike in 20 years, after 60% of the workforce rejected a below-inflation pay deal. On the other side of the world, in New York, ProPublica's unionised staff voted 92% in favour of the same action after more than two years of bargaining for a contract. Two very different newsrooms, one underlying frustration on which both creative teams and media business owners can agree: content isn't cheap, and making money from it is hard. Will the social media and Google adtech trials help swing the needle?
Read
Old date, new problem
Here's a quick but useful SEO reminder: if your pages show both a publish date and an updated date, Google will favour the original publish date when displaying results. One publisher tested this by adding both dates, which knocked their blog CTR down by 20%. After they removed the original date, Google picked up the updated one and CTR recovered straight away. If you regularly update older content, showing two dates may send mixed signals to Google and undermine the freshness you're trying to convey.
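Beyond what's visible on the page, dates can also be declared to search engines via schema.org Article structured data, which keeps `datePublished` and `dateModified` as separate fields. A minimal sketch with hypothetical values, just to show where each date lives:

```python
import json

# Minimal schema.org Article markup (hypothetical headline and dates,
# for illustration only). datePublished carries the original date;
# dateModified signals the update.
article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Example headline",
    "datePublished": "2023-04-01T09:00:00+00:00",
    "dateModified": "2026-03-10T12:00:00+00:00",
}

# This JSON-LD would sit in a <script type="application/ld+json"> tag
# in the page's HTML.
print(json.dumps(article, indent=2))
```

Which date Google chooses to display in results remains its decision; the markup simply makes the published/modified distinction explicit rather than leaving Google to infer it from on-page text.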
Read
Google writes your headlines now
Google is testing AI-generated headline rewrites in Search, and publishers who watched what happened in Discover will recognise the pattern. Last year, what Google labelled a "small UI experiment" quickly became a full feature. Now Search is getting the same treatment, and Google is using identical language: small, narrow, not approved for wider rollout. All things we've heard before. What makes this different from Google's existing title rewrites is that the new version generates text that doesn't appear anywhere in the original article: it isn't pulling from your H1 or OG title, it's Google writing its own version of your headline. No disclosure, no opt-out, and no way to see what's happening unless you check manually. Of course, a headline that's good for Google might not be good for anyone else, and then we're off into the whole "So what is good for Google?" question.
Read
The March shuffle
Google has blessed us with a new Spam Update in March, the first spam update since August 2025 and the second algorithm update of the year overall. It was described as a standard update rolling out across all languages and locations, expected to wrap up within a few days. What it's specifically targeting isn't clear, but if your rankings or traffic have moved unexpectedly this week, this is a likely culprit.
Read
Stop collecting data, start using it
Most publishers are sitting on more audience data than they know what to do with, but that isn't the problem: the issue is that the data lives in multiple different places, and nobody has the full picture. Glide Nexa is built to fix that: a single audience interaction platform that pulls first-party data into unified profiles, connects editorial behaviour to commercial outcomes, and gives everyone a shared view of who their readers actually are and what keeps them coming back. Personalised recommendations, dynamic paywall messaging, habit-forming formats, and interaction and engagement signals - it's all there, without the enterprise price tag that puts such insight out of reach of most publishers.
Read
Big Tech
Europe to Google: wrap it up
A coalition of European publishers, tech companies, and industry bodies - including Axel Springer, News Corp, Condé Nast, and the European Publishers Council - has written jointly to the European Commission urging regulators to bring their antitrust investigation into Google's search practices to a close, and to do it fast. The probe has been running since March 2025 under the Digital Markets Act, with a stated target of around 12 months to conclude; that deadline has now passed. The signatories argue the delay is already doing damage, creating uncertainty that is affecting investment in both media and tech. For its part, Google says it has made changes to address the concerns, and the Commission says it plans to move swiftly.
Read
Fake Claude site, real malware
Imagine searching Google for help with a Claude plugin, clicking a sponsored ad at the top of the results, and landing on a site that looks exactly like Anthropic's official documentation. The catch: it only looks official, and anyone following its instructions ends up downloading credential-stealing malware. The ad was verified by Google, meaning the advertiser had passed its identity checks. Google did remove the ad after being tipped off, but it raises the question of how many of these malicious ads are still out there.
Read
Who's Sora now
Disney had signed a deal with OpenAI to invest $1 billion and license some of its characters for use in the video-maker app Sora, with the goal of eventually integrating the technology into Disney+. That deal is now off, following OpenAI's decision to shut down the standalone Sora app. The entertainment company will still explore partnerships elsewhere, with a pointed note about finding platforms that "respect IP and the rights of creators", a jab aimed squarely at how the Sora app launched.
Read
The case that couldn't stick
Two US news publishers - Helena World Chronicle LLC and Emmerich Newspapers - lost their bid to sue Google for monopolising the online news market. The publishers claimed that Google's grip on search traffic left them with no meaningful alternative for reaching readers online. The judge rejected their market definition as too flawed to hold up, and found they had failed to show that any losses qualified as antitrust harm rather than simply the reality of operating in a Google-dominated market.
Read
AI & Copyright
Steal now, lawyer up later
A leaked video of a closed-door Stanford lecture has resurfaced, in which former Google CEO Eric Schmidt tells students that if they need copyrighted content to build an AI product, they should just take it, and then hire lawyers to clean up the mess if the product succeeds. It's a blunt articulation of something the AI industry usually dresses up in fine legal language. The attitude he describes isn't new: the industry has trained most models on vast quantities of books and creative work without paying anyone a dime, while making sure no one can train on its outputs. We can use yours; you can't use ours. The double standard runs deep.
Read
ISP ≠ Copyright police
The US Supreme Court has ruled unanimously in favour of Cox Communications, deciding that internet service providers can't be held liable for copyright infringement committed by their customers. Back in 2018, Sony and other major music labels sued Cox, arguing it should have cut off customers accused of pirating music. The court disagreed, ruling that declining to disconnect those customers does not make Cox liable for their infringement. Cox called it a huge victory; the Recording Industry Association of America (RIAA) does not agree.
Read
Name dropping without asking
As our previous comment mentioned, Grammarly quietly launched a feature called Expert Review, which offered writing suggestions framed as coming from real, named people - S. King for your plot, C. Sagan for your science writing, and so on - but failed to mention that none of them had agreed to this. Writers objected, a class-action lawsuit followed, and the feature was pulled after eight months. CEO Shishir Mehrotra sat down with tech writer Nilay Patel to apologise, and the whole episode leaves one question hanging: if a product slaps a real person's name onto its AI output, where is the line between inspiration and liability?
Read
Fair use? Not for everyone
Jack Conte, the founder of Patreon, which helps creators get paid for their work, had a pointed observation about the AI industry's fair use argument: it fell apart the moment OpenAI signed deals with Disney, Condé Nast, Vox, and Warner Music. As those deals show, the problem was never that compensation is impossible; it's that smaller creators have no leverage to demand it. Conte isn't calling for AI to be stopped, but he does point out that the industry's selective generosity tells you everything about where the power sits.
Read
The AI Overview gap
AI Overviews appear in less than 6% of breaking news and major headline queries - a fraction of the rate seen in categories such as Health, Tech, or Business. Google seems cautious about generating AI summaries when news is moving fast, which makes sense given the risk of hallucinations. What's more interesting is who actually gets cited: YouTube is at the top, followed by Wikipedia and Instagram, with traditional news publishers nowhere to be found in the top ten. NewzDash shares more info.
Read



