Dutch courage in taking a stand against the LLMs
A domestic LLM project in the Netherlands is well-placed to assert a bit of sovereignty over the supremacy of the big players.
European aspirations towards doing anything significant in the tech spheres dominated by large US companies have become something of a meme among considerable parts of the commentariat, which these days means virtually anyone on social media.
The reasons for this are manifold, but they manifest most visibly in European policymakers being overly proud of legislation designed to limit the use of a technology they have had no hand in creating. It feels a bit like being terribly keen on limiting how fast steam engines can travel, for fear that people’s eyes will pop out at sufficient velocity, while angrily shouting at the steam engine brigade from a horse and cart.
This is of course a partial truth, and yet US commercial dominance in tech is real, and with it comes the funding to further work on the technology to increase that dominance. Europe lacks not in brains, but in focus.
However, there are ideas and efforts that cut against this narrative within Europe and one such is set to make its public face known in the Netherlands soon, with the launch of GPT-NL.
The aim of this government-backed project has been to produce a Netherlands-specific LLM, using Dutch language sources, and with the full co-operation of NDP, the national umbrella organisation for commercial news publishers.
As an excellent full write-up on the project in Computer Weekly puts it: “GPT-NL claims to be the first AI initiative anywhere in the world to have reached paid, consensual agreements with all major publishers in a single market for the use of their content in model training.”
This project is actually along the lines of what could be an ideal application for the technology. Devoid of irresponsible talk of machine sentience, and happily free of tech cultist hopium, it is, in concept, a fairly straightforward effort to harness good, licensed data and hook it up in a way that is useful for the consumer, those being people in the Netherlands. The Dutch, if you know them, are a fairly direct people.
“We set a precedent that strengthens the position of journalism in the Netherlands over the long term,” according to Rien van Beemen, the chair of NDP Nieuwsmedia, in the project’s latest progress report. “AI innovation can happen ethically, without large-scale unlawful use of journalists’ work.”
GPT-NL is now on trial with a number of different users, all within the public sector, and aims for a broader commercial roll-out later this year. It has reportedly reached technical benchmarks that place it within competitive parameters when set against an OpenAI product, for example.
As a side note, I do enjoy the fascinating game of performance metrics for LLM models, which are typically so obscure that surely someone has been tempted to just make one up. Bifurcated Ohm Throughput. Maybe they have. How would we know?
Is this Dutch model a glimpse of a possible future? One thing that possibly counts against it is language. Dutch is unique. It features sounds that are deliberately impossible for Germans to make, never mind English people. They have a language moat, in effect, and it could be argued that a specialist LLM using Dutch language sources is really a necessity, lest they be overwhelmed. There is also a commercial proposition in that cultural fact.
However, that’s not the whole story. The sources are the more substantive part of it. This is an agreement, forged at depth, with commercial news organisations, the very organisations that are under most pressure in the current technical epoch, to provide GPT-NL with high quality data. It is a leap of faith by those organisations, but the situation requires such leaps. It is also to be hoped that the sources reflect a range of opinions and positions: if some users come to see GPT-NL as a government propaganda dissemination operation, then, given the febrile political situation in the Netherlands, it will fail to attract broad adoption.
Nevertheless, if successful, GPT-NL could provide a model for others, and with user adoption comes improvement. Whether such ideas are enough to pull Europe out of the long and obscenely well-funded shadow of the big US players is yet to be seen.
Publishing & Media
Glide Live: London - get it in your diary
Glide Live is coming to London for the first time. Our event for invited guests from newspapers, media, sports and technology will address where things are heading, what's working to engage audiences more, and what's not. There'll be talks from SEO stars like Barry Adams, Harry Clarkson-Bennett, and Steve Wilson-Beales, dives into topics as varied as AI content marketplaces, SEO in an AI world, and media product innovations which work from some of the most highly-engaged app makers in the country. The Chatham House Rule is in effect, with the emphasis on real talk pertaining to real outcomes by people who have much to say. It's not all PowerPoints, so crowd mingling and food and drinks are expected. A small group, a beautiful venue, and people who actually have something interesting to say. Invites will be coming soon, so keep 'em peeled.
Read
Search still dominates
Reports of the Death of Search are greatly exaggerated. The Datos State of Search Q1 2026 report shows traditional search, and Google, still wear the crown when it comes to desktop discovery, with Google commanding a 94.3% share in the US and 95.5% across the EU and UK, while Bing, DuckDuckGo, and Yahoo fight over scraps. Countering the DoS tale, traditional search is still growing faster than AI search, in which Google AI Mode holds less than 0.2% share, and ChatGPT usage seems to have plateaued. There are holes of course: the data leaves out mobile users and most non-US markets, where AI usage might be very different, but however you cut it, Google is still light years ahead on every number.
Read
People Inc's great escape
The publisher behind 40 brands, People Inc, has decided to think things through after its Google traffic nosedived 63% in two years. Now, 41% of its digital revenue comes from non-website sources - social ad sales, events, AI licensing deals, and its own ad-targeting tool D/Cipher+. The result: 24% year-on-year growth in Q1 and digital revenue of $253m. The playbook? Bots get the block, and then they need to pay rent. People Inc has struck deals with Meta, OpenAI, and Microsoft for content, while taking into consideration that AI firms are eyeing fresh reporting, not evergreen content. Whether others can follow their example, that's a different story.
Read
Google's (sorta) subscribe button
Google has rolled out Preferred Sources globally, which allows users to "subscribe" to a publisher's content in Top Stories. Implementation is pretty simple: add a CTA to your articles where your readers can opt in. The early data from English-language publishers shows that the results aren't spectacular so far; however, it's free, easy, and anything that brings returning visitors is worth a shot.
Read
Are you worth crawling?
In a race between visibility and the query, which wins? It's the topic of a good dive into the outcomes of search, and why the search even took place, by Greg Jarboe. He outlines that search captures demand at the finish line, but what created that demand happens across a fragmented web of news, social, and niche communities. If there is no entity recognition across Wikipedia, Reddit, and LinkedIn, AI systems will struggle to care about you. Between 40% and 60% of cited sources change monthly across Google AI Mode and ChatGPT, making AI visibility far less stable than organic rankings. Some advice: focus on optimising citability through original data, structured content, clear entity signals, and a presence in communities that actually matter to your audience.
Read
Spotify brands people
Spotify has rolled out a "Verified by Spotify" badge to distinguish real artists from AI-generated muzak. To qualify, an artist needs at least 10,000 active listeners over three consecutive months, as well as an identifiable presence off-platform, in things like social media, live dates, and merch. While some artists already have the badge, there are many still missing, including Joni Mitchell, Celia Cruz, and, err... Richard Wagner and George Frideric Handel - well known for their social media presence, gigs, and signed compositions. Independent artists who already struggle for visibility may lose out.
Read
More tools, more hours, same job
Is AI replacing artists and creatives, or is it just making them work more? According to a poll by Gallup, "AI-exposed roles" like composers and art directors are working more, not less, and although the way they work has changed, employment hasn't dropped. Now they're using AI to experiment with ideas, generate drafts, and organise workflows. Whether they like it is a different story. Do the creative juices flow better in a chatbot back-and-forth?
Read
Big Tech
Foxes build henhouse
A bipartisan bill, called the LIFT AI Act, would introduce changes to the K-12 curriculum to teach "AI literacy", defined as the ability to use AI effectively, interpret its outputs, and mitigate its risks. The bill is already endorsed by OpenAI, Google, Microsoft, HP, and the American Federation of Teachers. The irony is hard to miss: the companies whose products are disrupting children's learning and encouraging students to turn off their brains for AI are now backing legislation that teaches how to use said products responsibly. The bill attempts to push young people closer to a technology that they increasingly dislike.
Read
Section 230 can't always save
How the long-standing internet/content/liability law Section 230 is being circumvented raises a fundamental question about how large platforms have avoided legal scrutiny over the decades. Three US court cases, one in California, one in New Mexico, and one in Massachusetts, have managed to find their way around Section 230. How? Instead of going straight to suing over content posted by users, the plaintiffs targeted how the platforms were designed and used. Prospect Magazine explains how the damages in these lawsuits could become existential for the platforms.
Read
Papers, please
Mozilla, the EFF, Tor Project, Proton, and a coalition of privacy and digital rights groups have protested proposed UK online age verification laws, claiming they will force users of any age into intrusive identity checks. The open letter, published by the Open Rights Group, argues the proposals would fragment the open web, force users into walled ecosystems, and create countless opportunities for data breaches. They offered an alternative focusing on regulating the business models that actually cause harm, such as mass data harvesting. While the UK Government is still pondering it, the fact that the Children's Wellbeing and Schools Bill has already passed leaves little wiggle room on the issue.
Read
The trackers nobody checked
Ad trackers, embedded in almost all of the 20 US state-run health insurance exchanges, have been quietly shipping your personal information to TikTok, Meta, Snap, LinkedIn, and Google, according to a Bloomberg report. The states apparently didn't really understand what their trackers were sharing, so they shipped data that included sex, citizenship, race, ZIP codes, and even what pages users visited. The tech companies are washing their hands of it, claiming their TOS tell advertisers what not to share. Without a federal privacy law, and combined with the state health exchanges' ignorance, it's every user for themself.
Read
AI & Copyright
Friendly AI lies more
According to a new paper from researchers at the Oxford Internet Institute, AI models tuned to be warmer and more empathetic are about 60% more likely to give incorrect answers. The effect gets worse when users express sadness or state beliefs up front - bots are much more likely to agree without challenging a standpoint. Models tuned to be colder perform better. The researchers suggest the AI might be following a pattern from its human training data, where being warm was rewarded over being correct.
Read
France shakes things up
The French Senate has pulled a how-the-turntables move in the form of the Darcos law, which will shake future AI copyright disputes up: instead of creators having to prove their work was used to train LLMs, now the AI companies will have to prove it wasn't. 81 French cultural and media organisations, from music and film to publishing and press, are now pushing the National Assembly to schedule a debate as soon as possible. The bill comes after the failures of earlier negotiations between AI companies and rights holders, and according to the French Council of State, it is compatible with both the constitution and EU law. The bill already has 25,000 signatures.
Read
The podcast flood no one asked for
According to Podcast Index, more than a third of new podcast feeds created each day are AI-generated, with one company - Inception Point AI - responsible for nearly a quarter of all new podcast output, with 10,000 active shows and 3,000 episodes per week. While some people enjoy their information delivered quickly and without editorial voice, in general the audience for this synthetic chatter is not really clear. Is it bots, serving bots, serving ads to bots?
Read
The Academy draws a thin line
The Oscars have updated their rules to require acting performances to be "demonstrably performed by humans" and screenplays to be "human-authored". While it sounds like a ban, it really only applies to entirely AI-generated content in those two categories, while VFX, Best Song, and Best Animated Feature remain open to AI, it would seem. Critics have already pointed out that in these two categories AI-generated performances were the least likely to win anyway, and that the better solution would be to remove the prestige factor from AI-generated performances before studios start to use it as a medal in front of investors. Still no word on acting animals, though.
Read



