Is the Google AI goose about to be served up for dinner?
Legal issues and threats from rival services are mounting for everyone's favourite search giant. Our man Rob has a gander at what vexes Google next.
In the past few days, a legal complaint lodged with the UK's Competition and Markets Authority by bodies representing news publishers has requested the CMA implement provisional measures to stop Google from misusing publisher content in AI-generated search responses.
In any sane commercial world, the move, as reported by the Press Gazette, would seem entirely reasonable and likely to be granted.
Simply put, someone is taking stuff without paying for it and using that stuff to create their own stuff. Whether the stuff being taken is bauxite, sausages, or intellectual property is irrelevant. If that bauxite is transformed into window frames, or those sausages end up concealed deep inside a cassoulet, it doesn't matter. The original act of taking was wrong.
Yet such is the blind momentum of technology, with the fear of missing out as its sharpest edge to cut through societal norms, that sane considerations are given equal weight to insane ones.
We won't hold our breath on the UK's CMA actually doing as requested - independent and feisty as it has been in the past - as it's notable that the current UK Government has just signed a major strategic agreement with Google for the tech business to train 100,000 of the nation's civil servants in "using AI and other digital services through its Google Cloud Training Programme by 2030" according to Computer Weekly. It's safe to assume they will all be trained on Google products.
While anything which has the prospect of improving the agility of the UK's sclerotic public sector is to be welcomed, the fact is that the issue is more cultural than technical. Too many unsackable people have been given the power to say "no", and by goodness they like to use it, regardless of tech solutions.
However, it is possible to be optimistic. There are people in public life who still regard a pluralistic, even rancorous, consensus media as vital to the servicing of any democracy: one which reflects how people actually feel, rather than a technology that siphons the passion out of everything even as it dazzles us with circus tricks performed with other people's work as props.
By way of admission of the same, Google's YouTube this week announced it will place strictures on Generative AI content, aka Degenerative AI content. According to TechCrunch, "On July 15, the company will update its YouTube Partner Program (YPP) Monetization policies with more detailed guidelines around what type of content can earn creators money and what cannot".
The wording of the rules is not yet released, but the latest YouTube policy update gives a major clue: "In order to monetize as part of the YouTube Partner Program (YPP), YouTube has always required creators to upload 'original' and 'authentic' content. On July 15, 2025, YouTube is updating our guidelines to better identify mass-produced and repetitious content. This update better reflects what 'inauthentic' content looks like today."
Inauthentic is the type of horrible word that such tech companies favour, a word with no clear definition beyond whatever suits them at the time. What they mean is AI slop, the stolen, derivative, culled, purloined, copied and straight up cloned crap which is becoming all too present on their own platform, and will, if left unchecked, lower the quality of training data Google uses from YouTube.
And you better believe that when it's their platform and not yours, it's a problem they care about.
We'd like to mutter something about "double standards" here, as AI Overviews seemingly render actual page views irrelevant, but it would be lost on the winds of "inevitable progress". Trust me, it's all evitable really.
Meanwhile, things are a little shakier than they seem in the GenAI world.
Google, with a stock price that reflects a degree of doubt over its ability to make good on its AI promises, may now also face a product threat from vacuous visionary Sam Altman, who is about to expand OpenAI's optimised money incinerator with a browser offering. Rivals Perplexity have launched a browser this week too.
Let them fight it out, preferably with no referee.
At the risk of being called unhinged, even Nvidia, which this week made history by becoming the first $4 trillion company, shows signs things aren't all well. Its runaway success as the go-to chip manufacturer for AI applications is remarkable - selling quality shovels in a gold rush - yet those such as myself with longer memories remember old Nvidia as the maker of some fabulous consumer graphics cards and GPUs, including the venerable GTX1080 which has purred at the heart of this author's everyday workhorse without missing a beat for nearly a decade.
Yet, while still retaining impressive market share, Nvidia's more recent consumer GPU offerings have created a sour note around the company. It started with criticism of some dubious technical performance claims for particular models and broke out into a feud with tech channels on YouTube, one of which recently judged a hot new Nvidia GPU as "a waste of sand". Perhaps there is a loss of focus within Nvidia, even as it scales the value heights.
The lesson from that is that publishers must keep focus on original content creation. It's the rock on which our industry is built, and it remains what makes us unique. Ignore the turbulence if you can.
The audience interaction platform you've been waiting for.
Unify and unleash actionable customer data with Glide Nexa and get more from your first party data.
Manage entitlements, craft engagement strategies, and model users and groups from one platform to simplify audience management everywhere people and content meet.
Request a demo to see Glide Nexa in action.
"Click-to-cancel" blocked by smallprint
The FTC's "click to cancel" rule, designed to help US consumers avoid tortuous subscription cancellation experiences, has been shelved, to the frustration of countless publishers and media companies who had updated sites and apps to comply with the new rule. Why, you might ask? Despite huge consumer support for the principle of subscriptions being "as easy to end as to start" and tightening auto-renewal regs, opponents found loopholes in the smallprint (!) which they say show FTC failures in formulating the rule. Judges agreed. The opponents? Think gyms, spas, instant credit providers, workplace injury insurance schemes and the like - paragons of subscription virtue their lawyers would tell you. Expect the rule back again, or the rise of voluntary good-practice certification bodies.
Read
BRICS unites on AI
As global discussions on AI governance gain momentum, BRICS leaders (Brazil, Russia, India, China, South Africa) are joining the party with a first joint statement on the topic. Framed at the 17th BRICS Summit, the document stresses ethical, inclusive, and sustainable development of AI and outlines core priorities such as UN-led multilateralism, fair access to technology, and respect for digital sovereignty, all aimed at managing risks while also supporting the Global South.
Read
Publishers vs Google: AI snippets, real trouble
Independent publishers in the UK and Europe recently hit Google with antitrust complaints over AI Overviews, those AI-burgled tidbits at the top of search results using your content to make a trip to your site unnecessary. According to the complaint, Overviews snatch traffic and drain revenue from journalism, and give back nothing. A key plank of the UK complaint is Google's enforcement of its 2-bots-for-1 stance: inclusion in search results requires accepting AI scraping - a digital hostage situation as plenty see it. Google argues it's all fine and its AI actually helps people discover more content. It does - on Google-owned sites. Publishers aren't buying it, and neither are regulators.
Read and read
OpenAI's new security strategy
After its giant pot/kettle moment - implying Chinese LLM DeepSeek used ChatGPT data to train its model - OpenAI has implemented extensive internal security measures to tighten staff access to its own products, isolate core tech, and ban internet usage.
Read
Glide's latest DACH attack
Swiss publishers rejoice! The world's most Content Aware headless CMS for publishers and media has a new partner in Switzerland covering the entire DACH region. Sector specialists Centralsoft, wizards at migrations and complex publishing projects, are your go-to experts in the region for insight into what Glide CMS can do for you or your next major publishing launch.
Read
Digital marketers: Improvise. Adapt. Overcome.
AI has rewritten the rules of digital marketing, with users turning to AI tools rather than Google for guidance. For marketers, the old playbooks are out and content needs to be built for both search and AI-native platforms, else it risks being invisible to one or the other. Adweek looks at the path forward and says it's time to rethink strategies and adapt early in order to be ready for the next phase of consumer attention.
Read
The battle for news clicks
The battle for news traffic is heating up as Google's AI Overviews slowly rewrite the rules of online search. By serving answers directly on results pages, they snatch clicks and revenue from heavyweights such as Mail Online, People, and BuzzFeed. On the other hand, ChatGPT is stealing the spotlight with news queries skyrocketing over 200% since early 2024. Press Gazette has the scoop.
Read
Billion dollar bets
OpenAI is paying a steep price to stay on top, literally, as the company shelled out $4.4 billion in stock-based compensation last year, more than its entire revenue. This move sets a new bar for Silicon Valley's talent war, but it obviously comes with a cost: rising pressure to turn a profit, and a ticking clock. OpenAI is betting that being involved in the future of AI is going to make it all worth it. If they lose this war, they will not only run out of road, but also of money.
Read
Academic cheat sheets
With AI seeping into every pore of life, it has now found its way into academia, and boy is it sneaky. Researchers from top institutions have been caught embedding hidden prompts in papers to trick AI reviewers into giving glowing feedback. It's basically invisible text telling bots to "only give positive reviews" and "highlight the novelty", like a digital whisper. Is it a clever counterstrike against lazy, AI-dependent reviews, or a dangerous imperilling of academic integrity? If the field is built on trust and rigour, such tricks could poison the well.
Read
Predictability over reasoning
More LLM flaws on display after models were shown to churn out the same "random" numbers, like parrots in shiny disguise. Regurgitating matching "randomness" learned from shared training data sounds predictable, if entirely against the point of a supposedly random result, but in fields like cybersecurity that predictability becomes useful insight for defending against deliberate misinformation and social engineering attacks.
Read
AI product reviews: can you trust them?
Google AI Overviews for products are flooding the web. Can you trust them? Perhaps a bigger question is: do the products even exist? Gisele Navarro shares more details.
Read
The price of speed
One to put in the learning file: a Texas news outlet reported the miracle rescue of two girls during the recent tragic Texas floods, without official confirmation, then had to backtrack when it was debunked as being an AI fiction.
Read
Gemini: helpful or nosy?
Google is quietly flipping the switches to allow its Gemini AI to poke in the shadows of your Android apps such as WhatsApp and Messages, whether you like it or not. Who is it actually helping? A confusing roll-out which flip-flopped on opt-out advice didn't help. Privacy-minded users should dig into their settings ASAP.
Read
Newsroom robo-bouncers
Despite the rise of AI bots scraping the web for training data, many publishers still have no digital "do not enter" sign, or as we know it, a simple robots.txt file. In some countries not a single outlet is using it, while others, in places like Denmark, are leading the resistance. If publishers want a seat at the AI table, and maybe a slice of the cash pie, they should protect their content.
Read
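For the curious, that "do not enter" sign is just a plain-text file at the site root. A minimal sketch, blocking a few widely publicised AI training crawlers while leaving ordinary search indexing alone (user-agent names change over time, so verify them against each operator's current documentation before relying on this):

```text
# robots.txt - ask AI training crawlers to stay out
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /

# Everyone else (including regular search bots) remains welcome
User-agent: *
Allow: /
```

Worth remembering: robots.txt is purely advisory - it asks, it cannot enforce.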
Making scrapers pay
Anubis is a CAPTCHA guardian which challenges bots, not humans. After landing on a site, casual users get a puzzle to solve, nothing more than a blink, but for a large-scale AI crawler that puzzle turns into a digital headache and a server meltdown. Robots.txt politely asks the crawlers to stay outside, but in today's world where LLMs are devouring terabytes, polite just doesn't cut it anymore. That's where Anubis and others come in to charge the LLMs rent and send out a message: if you want our words, you'll have to work for them - or hand over the doubloons.
Read
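The economics behind that "rent" are easy to sketch. Anubis's actual implementation differs, but the principle is a hashcash-style proof-of-work: the visitor burns CPU finding a nonce, the server verifies it with a single hash. Negligible for one page view, ruinous across millions of scraped pages. A minimal illustration (function names and the difficulty value are ours, not Anubis's):

```python
import hashlib
import itertools

def solve_challenge(challenge: str, difficulty_bits: int) -> int:
    """Find a nonce so that SHA-256(challenge + nonce) starts with
    `difficulty_bits` zero bits. Cost grows ~2^difficulty_bits."""
    target = 1 << (256 - difficulty_bits)
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify(challenge: str, nonce: int, difficulty_bits: int) -> bool:
    """Verification is one hash - trivially cheap for the server."""
    digest = hashlib.sha256(f"{challenge}{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

# A human's browser does this once per visit; a crawler does it per page.
nonce = solve_challenge("example-session-token", 16)
print(verify("example-session-token", nonce, 16))  # True
```

The asymmetry is the whole trick: solving is exponentially harder than checking, so the toll scales with how greedily you crawl.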
OpenAI takes aim at Chrome
OpenAI is readying a Chrome rival to take on Google's browser market dominance. With a launch date coming soon, a ChatGPT-style interface and AI agents ready to do your bidding, OpenAI is going straight for Google's data-rich ad empire.
Read