AI content labelling - a maker's mark or a misinformation band-aid?
Plans for labelling of AI-created content are gathering pace in Europe. Aside from how and where, no-one really needs to ask why.
What’s in a label? Quite a lot if it says “made from mechanically reclaimed meat”. In that spirit, the EU Commission has just commenced work to produce a code of practice on the use of GenAI that will “be a voluntary instrument to help providers of generative AI systems effectively meet their transparency obligations”. As things stand currently, it’s hard to see how that isn’t a sensible thing. If you are what you eat, you may also think what you see or read.
From the consumer perspective, this move will be noticeable once agreement has been reached on how to label AI content. The European Commission gives us an idea of its direction of approach by saying “the requirement reflects the growing difficulty in distinguishing AI-generated content from authentic, human-produced material”.
You won’t find OpenAI et al using the word “authentic” much - far too close to “author”.
Reflexively, this column is against a great deal of regulation. Loosen the asphyxiating grip of Big Tech and let the content dogs run free where they will, is our North Star. The internet is the work of humans, and humans should be trusted to develop its uses.
Unfortunately though, we’re at a point where the broad perception is that the platforms are the internet, and the more people believe it, the truer it becomes. It’s a constraining idea, and one that will shatter again in time.
However, we are where we are now. A new study from the University of Maryland, which looked at US news publications, found that "of 186,000 articles studied in the summer of 2025, 9.1% were either AI-generated (5.2%) or mixed (3.9%)".
Going from zero GenAI content to nearly 10% in such a short time span is quite some adoption curve, although not as earth-shattering as some tech shills would have us believe.
Tellingly, the study found the proportion of GenAI content was higher in publications with smaller audiences than in larger players' content. I'd think this is a matter of business realities: those with deeper pockets and stronger editorial voices in the C-suite are simply less likely to harness GenAI to produce content. In our deeply uneven publishing market, such use may be far from ideal, but it remains understandable.
More delightfully for those of us who maintain some limited pride in our ability to communicate with the written word, the Maryland study revealed that where GenAI was found among, shall we say, premium publications, it appeared mostly in the writing of public figures who had written for them. Or not written for them, as it turns out.
That in itself is revealing, suggesting that many of those prominent enough to appear in such publications on their own terms are actually poor communicators who don't understand their own ideas and have to resort to GenAI.
Or maybe they just can’t write. Scathing I know, but what other conclusions are there? Maybe they didn’t have time. In which case, they shouldn’t have accepted the commission.
Coming back to what the European Commission aims to introduce in the second half of 2026 - a voluntary code, adherence to which will surely mitigate any GenAI-related legal trouble in the EU by having such content clearly labelled - it brings up a certain paradox.
Normally, if someone makes something fabulous that the world also regards as fabulous, then the creator will want people to know who made it. That’s just good marketing.
In the case of GenAI, the ultimate unique proposition is that something is being made without the involvement of humans, but, crucially, it's as good as anything a human can create and indistinguishable from it. The value is in this lack of difference. ChatGPT's awful attempt at fiction, or metafiction, illustrates this. If it were as good as promised, we would not know that it was GenAI. This is not the case, and although we all fear humanity falling for some machine-produced mass deception, the fact is we're a long, long way from it.
I’m certainly not against the idea of labelling content as GenAI, at the very least as a precautionary measure. The technology and its uses are still reasonably fluid, and will settle in places we may not expect. It’s not that mechanically reclaimed meat is explicitly bad for you, but you should know when you’re eating it.
Google eyed for breakup
Brussels is asking Google to break up, and not like teenagers do. The EU believes the best remedy for the Google ad monopoly problem is to dismantle the company’s ad business entirely, starting by asking the company how it would do such a thing, lest the EU step in and do it itself. Such a sledgehammer move is rare in the EU, but it is indicative of the will to loosen Google’s grip on the advertising market - particularly as US courts consider a similar move. “If you cannot go for structural remedies now, when the US is on the same page, then you’re unlikely to ever do it,” said one observer.
Read
Getty wins a battle not a war
In a court battle closely watched from all sides, Stability AI walked away mostly unscathed from Getty’s UK courtroom challenge alleging mass theft of IP. The photo giant called foul over 12 million scraped images, but scored a relatively low-stakes win over trademark infringement rather than a sweeping victory across the wider question of IP hijacking.
Read
Meta’s money from scams revealed
Reuters has seen Meta documents which hint at the sheer scale of the money it makes from scam ads. And we wonder why it seems so slow in clamping down on fraudulent-looking commercial content. Now we know.
Read
Hands off our anime
Japan is out of patience with AI anime remixes. Its Content Overseas Distribution Association (CODA) has demanded that OpenAI’s Sora 2 cease using its members’ content, with legal remedies threatened. OpenAI is working under the assumption of opt-outs, where CODA says opt-ins should be the norm, as most people would assume is the case. Hollywood similarly went Super Saiyan a few weeks ago and prompted an OpenAI U-turn in just a few days, so let’s see if OpenAI pays as much attention this time.
Read
Selling less, caring more
The Guardian is going all in on brains, not clicks, as its US affiliate arm, The Filter, skips SEO tricks and goes straight for the things its audience actually cares about. A small team, led by Nick Mokey, is working overtime to ensure their recommendations are on target for what their audience wants.
Read
ChatGPT leaks into Google
ChatGPT prompts are appearing via Google (again), this time in site-owner Search Console data. Err, yup.
Read
Sweeney gets another Epic win
Perhaps the only individual to take on Google and win is the CEO of Epic Games, Tim Sweeney. His win against Google in 2024 led to third-party app stores and non-Google payment methods becoming accessible on Android devices. The latest development is a Google/Epic joint motion under which third-party app stores will be allowed on Android worldwide, not just in the US. Google is, of course, still being slippery about some aspects of compliance.
Read
Photos over pixels: NPPA pushes back
Is photojournalism under threat from AI? With AI offering an alternative illustrative route and, more importantly, newsroom budget cuts, the NPPA steps in to remind everyone that a picture really is worth a thousand words, and a lot of trust.
Read
AI goes rogue, gets benched
One reason publishers keep humans between AI content and audiences is their understanding of the laws of defamation and libel (among many others). Google pulled its Gemma GenAI model from its AI Studio after US Senator Marsha Blackburn accused it of spouting wild and false claims about her. The search giant put this down to hallucination and claimed the model wasn’t meant for everyday users - it’s not clear that would hold up as a defence in a trial, should it come to that.
Read
Fighting the algorithm
With big tech and AI taking whatever they please, not everyone is sitting around waiting for permission. Branko Brkic, alongside Nobel laureate Maria Ressa, is building Project Kontinuum, an operation that aims to rebuild trust globally, sprinkled with a little anti-propaganda. The plan starts with “Choose Truth”, a worldwide campaign for journalism, and ends with “The News Social”, an algorithm-free network made by journalists for journalists, not for clicks.
Read
When AI goes shopping
Amazon is not really vibing with AI shopping sprees. The master of retail has slapped Perplexity AI with a lawsuit, saying its bot sneaked into customers’ accounts and pretended to be human while shopping. Perplexity labels it innovation, while the entire ordeal sparks a whole new conversation around AI agents posing as people.
Read
From memes to headlines: Reddit’s new move
Reddit wants news publishers to be more like a cocktail party and less like a press conference. With a little help from its Pro Tools, news outlets can post, track, and AI-optimise their way into the subreddits that are right for them - but let’s not forget that Reddit is getting a piece of the cake as well. The Front Page of the Internet will reap booming referral traffic and the title of AI’s favourite training ground, while Facebook slowly fades into the void.
Read



