Why AI content marketplaces are the future
A US body is formulating a content marketplace framework in which AI companies can pay a fair price for fair use.
One of the central commercial disparities affecting the publishing industry in the current era is the simple fact that big players are able to strike deals directly with AI companies, while all-important small- and medium-sized publishers generally remain prey to whatever content-harvesting crawlers head their way to feast on their data for free.
The reasons behind this current awful state of affairs can be reasonably encapsulated by the word "disruption". The speed at which LLM systems advanced and pushed into widespread public usage after ChatGPT emerged less than three years ago has been remarkable to witness, even for veterans of other disruptions.
Publishing was akin to a medieval baggage train set upon by an invading army, such was the extent of the carnage and carrying off of valuables by those waving the AI banner, a banner which, for a time, protected them from accusations of theft.
Flat-fee access deals were struck with these raiders by larger publishers, but those were largely the kind of deals you strike in a hurry with raiders, which are never good deals.
If we take the AI companies at face value - not just the biggest ones, but also some you've never heard of or are never likely to - many of them enjoy inflated valuations which even a large, profitable publisher will only reach in a fever dream. Given that, even the better deals struck with larger publishers still look like pennies thrown to beggars.
As it becomes clearer by the day that website referrals from AI Answer Engines are not going to be a traffic driver of any reasonable kind for our industry, it also becomes apparent that a market value must start to emerge for the content used to train the LLMs driving the disruption.
This is apparent because the AI Answer Engines that are, by all accounts, destined to become the Search of the immediate future require quality content to consume in order to have any use, and hence any popularity.
So then, a virtuous commercial circle seems possible between those who produce content and those who require said content to make their LLMs work better than their rivals. Or if not quite virtuous, a better state than what is happening right now.
To this end, the non-profit Interactive Advertising Bureau in the United States is driving an initiative to "Set AI-Era Publisher Monetization Standards" through a working party set up under the IAB Tech Lab umbrella. As they say themselves: "It's time to build the technical plumbing for a better path forward."
Locking the door to robots
The AI Content Monetization Protocols (CoMP) Working Group is considering three central components in order to achieve this "better path forward". The first is a reliable and secure way of blocking bot traffic and preventing unwanted scraping of content. The IAB Tech Lab describe this as a "door with a lock". It's clear the honourable but ignorable robots.txt protocol no longer cuts the mustard in a more dishonourable and ignorant age, so this is an obvious and needed requirement to meet.
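To make that concrete, here is a minimal sketch of what a "door with a lock" could amount to at the web-server level: requests from known AI crawlers are turned away unless they carry a licence token agreed in a deal. The crawler names are real, publicly documented user-agent strings; the token scheme and everything else is an assumption for illustration, not anything the CoMP group has specified.

```python
# Illustrative only: a crude "door with a lock" that robots.txt alone cannot provide.
# The user-agent substrings are real, documented AI crawlers; the licence-token scheme is hypothetical.
KNOWN_AI_CRAWLERS = ("GPTBot", "ClaudeBot", "CCBot", "PerplexityBot")
LICENSED_TOKENS = {"token-issued-to-a-paying-partner"}  # placeholder value

def gate_request(headers: dict) -> int:
    """Return an HTTP status: let humans through, block unlicensed AI crawlers."""
    user_agent = headers.get("User-Agent", "")
    is_ai_crawler = any(bot in user_agent for bot in KNOWN_AI_CRAWLERS)
    is_licensed = headers.get("X-License-Token") in LICENSED_TOKENS
    if is_ai_crawler and not is_licensed:
        return 402  # Payment Required: the content is available, but only under a licence
    return 200
```

The point of the 402 status in this sketch is that the content isn't refused outright; it's offered at a price.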
Second comes a requirement for "LLM-Friendly Discovery". Essentially, this points to the pressing need for a marketplace where publishers can provide content to those seeking to utilise it for AI applications. The seekers can find the right content, and the publishers can get paid. Content can be sampled and compared.
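No discovery format has been published yet, but one way to picture it is a machine-readable catalogue entry a publisher might expose so that a buyer's tooling can find, sample and compare content. Every field name below is invented purely for illustration.

```python
# Hypothetical catalogue entry for "LLM-Friendly Discovery" -- every field name is invented for illustration.
catalogue_entry = {
    "publisher": "Example Gazette",
    "collection": "science-desk-archive",
    "items": 48_200,                       # articles available to license
    "date_range": ["2005-01-01", "2025-06-30"],
    "languages": ["en"],
    "sample_url": "https://example.com/licensing/sample.jsonl",   # small free sample for evaluation
    "pricing_models": ["per_crawl", "per_use", "outcome_based"],  # see the payment models below
    "contact": "licensing@example.com",
}
```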
Finally, and importantly, there's a requirement for what the IAB Tech Lab term an "LLM Ingest API": an industry-standard mechanism for content ingestion by AI systems, with monetisation built in. You take it, you pay for it. This has been termed, with possible irony, the "easy button" by the IAB working group.
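Again, no spec exists yet, so purely as a sketch of the "you take it, you pay for it" idea, an ingest call from the buyer's side could look something like the following, with a charge metered on every fetch. The endpoint, header and response fields are all assumptions.

```python
import json
import urllib.request

# Hypothetical "LLM Ingest API" call -- endpoint, header and response fields are invented for illustration.
def ingest(article_id: str, licence_key: str) -> dict:
    """Fetch one licensed article; the charge is metered against the buyer's account on each call."""
    request = urllib.request.Request(
        f"https://marketplace.example.com/v1/ingest/{article_id}",
        headers={"Authorization": f"Bearer {licence_key}"},
    )
    with urllib.request.urlopen(request) as response:
        payload = json.load(response)
    # Illustrative response shape: {"text": "...", "charge": {"amount": 0.002, "currency": "USD"}}
    return payload
```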
In short, this is a serious attempt to set the technical standards required for a functioning content marketplace, and a logical step if we are to move on from the plunder era to something more equitable.
There is a debate about what monetisation would look like, of course, and the IAB Tech Lab group isn't foolish about it. They accept that a number of different models need to exist in such a content marketplace, everything from an all-you-can-eat website buffet open 24/7 to a much more exclusive menu of content options, depending on the client and the publisher.
Then there are the payment models themselves, with three main types identified:
Pay Per Crawl
Aggregation or Pay Per Use
Outcome or attribution-based payment
Naturally I'm in the Pay Per Crawl or nothing camp, which is why I'm not on the working group. Among many other reasons.
As much as I would like to see commercial vengeance visited on the AI industry, it's not the path forward. These three options outlined by the IAB Tech Lab are not mutually exclusive; they form a framework in which both parties in any deal can find a model that works for them.
It even brings to mind the possibility of a co-operative venture in which publisher and AI business share both risk and reward. Imagine such a thing!
Of course, there's no guarantee that the IAB attempt to "build the technical plumbing" for a Content Marketplace will prove a success. What counts in its favour: the timing is right for such a thing to emerge, it has some serious people on board, and it is an American organisation in a field the US leads.
The situation in which only the big publishers can get paid isn't good for anyone. After all, the big publishers were little publishers once.
Speed up the pace of product development and new launches with Glide Go, a pre-configured deployment of Glide CMS paired with a full-featured website hosted and managed by Glide.
Shift your focus to content and revenue while we manage the rest.
Request a demo to see Glide Go in action.
"Oh go on, one more can't hurt..."
For reader loyalty, think of content like a box of chocolates: another bite is always good! Forget Google or social media, your biggest source of pageviews is already on your site - that little gem called internal traffic from users already consuming your treats. Analytics firm Chartbeat reveals that nearly 40% of views come from existing site users, a massive sign that loyalty matters and serving up good related content is vital. Understanding where this internal traffic comes from, and how different audiences behave, helps turn first-time visitors into regulars. INMA shares the details.
Read
These aren't the bots you're looking for
Press Gazette are fast becoming the Force against AI-generated authors and writers passing themselves off as humans. Journalistic Jedi Charlotte Tobitt unveils more examples of non-existent writers in the pages of respected titles, as the case for verification meetings and calls escalates and legitimate writers and contributors fight against fictions within their own industry.
Read
Denmark vs reading crisis
Denmark has declared war on a reading crisis, scrapping its 25% sales tax on books to try to rescue reading, particularly among the young. The government says it wants less scrolling and more reading, and it hopes this $52m-a-year gamble will put more books into the hands of young people, following the lead of countries such as Norway and the UK, which have reduced VAT on books.
Read
Tech news payments eyed
It turns out that when local reporters vanish, so does information. The Local Journalism Index 2025 shows the US has gone from 40 journalists per 100,000 population in 2002, down to just 8.2 per 100,000 today, a near 80% drop. Canada saw the red flags and made Big Tech pay rent with Google ponying up C$100 million a year to support newsrooms. US legislators and industry figures are looking north to see how a similar scheme could work for local news there. States are studying their own plans, but it's a challenging task when tech giants are throwing money at lobbying.
Read
Judge blocks FTC "free press threat"
When US outlet Media Matters reported on the placement of big-brand advertising next to hateful content on X, it probably expected the snark-storm from platform owner Elon Musk, and perhaps even an FTC investigation - but not one into its reporting. A federal judge came to the rescue at the eleventh hour, stating that the Government agency's investigation seemed a little too conveniently timed, and a little too interested in punishing journalism. Judge Sparkle Sooknanan likened it to retaliation dressed as regulation, warning that when regulators go after journalists, they go against the First Amendment.
Read
Small questions, big impact
Sometimes all it takes to get to a smarter conversation is a little nudge. The Financial Times tested AI-generated questions mid-article, of course edited by humans, to invite thoughtful comments. The results? A better tone, more views, and fresh voices in the thread. It turns out that a small prompt can make a huge difference.
Read
Bot on the landscape
As AI eats the web, Fastly's latest report [PDF download] says it's not people clogging your sites, it's bots. Lots of bots. Crawlers make up 80% of AI bot traffic, led by Meta (52%), Google (23%), and OpenAI (20%), while on the real-time fetching side OpenAI is claimed to account for a huge 98% of requests. They can overwhelm sites and rack up costs, particularly hurting smaller publishers, and some regularly ignore robots.txt to boot. In response, some webmasters are trying to fight back with traps and gibberish feeders, but as the tools evolve, so do the bots.
Read
Google AI and traffic: friend or foe
Google says AI is not hurting traffic to sites. Site owners and publishing bodies say that is a lie. SEO master Barry Adams discusses whether the warning reports are all wrong or if someone is missing the point. We know who we believe.
Read
New AI mess at Meta
Meta has found itself in the middle of a growing firestorm, after leaked internal docs showed its AI chatbots were allowed to get far too friendly with children, generate false info, and even mimic relationships with users. Lawmakers are calling it "deeply disturbing" and investigations are already on the table. Meta says the troubling policies have since been scrubbed, but not before one tragic case showed just how real the consequences of unchecked AI can be. Another item to add to Meta's legal pile. Meanwhile its short-lived Meta Superintelligence Lab is being broken up as part of efforts to "lead in AI". Related?
Read
Adblock battle reignited
Germany's top court has reignited a battle over ad blockers which, critics say, could outlaw the tech and other browser-based extensions. Axel Springer is going after Adblock Plus for allowing browsers to adjust site code when it reaches the user's machine, a common feature in many browser extensions. Browser-maker Mozilla says the action could lead to the removal of ad blockers but also countless other extensions which users have chosen to install, such as accessibility tools, language tools, and so on.
Read
GitHub's open source revolt
Microsoft's shakeup of developer platform GitHub - the biggest collaboration tool in tech, used by over 150m contributors - has caused fears that it is doing with computer code what the likes of OpenAI and Perplexity have been accused of doing with content. There are now concerns around code use, broken promises, and eroded trust, and calls for a developer-led alternative as devs fear a key aspect of the open web is being locked away.
Read



