The race to write the laws which will make or break AI
Politicians and legislators are generating proposals to regulate the future of AI so quickly that one wonders if they are using ChatGPT to do it. A look at some of the proposals is very revealing.
A welcome back this week to GPP’s Head of Content Intelligence Rob, and also a big hello to the many new subscribers who joined us at the Future of Media Technology Conference in London last week, including from the BBC, Newsquest, News Corp, The Guardian, the FT, and many more from the UK and farther afield. We hope you enjoyed the event and our company.
Hello and welcome all to Content Aware, a weekly look at things which matter to the future of media and publishing. Over to you Rob…
The sheer volume of legislative activity around Artificial Intelligence has understandably ballooned over the past few years.
It may be recalled that only last month the European Union proudly proclaimed, with the go-live of the EU Artificial Intelligence Act, that the bloc had achieved the world's first "comprehensive legal framework for AI".
To which a sizeable chorus of us echoed "That's all well and good but Europe doesn't really have a major AI dog in the tech fight does it?". (If France's Mistral gets big enough to be a market-maker, we'll be the first to celebrate it.)
Nevertheless, legislators do like to legislate, sometimes without fully understanding what they are legislating for, and China has also been pretty quick out of the gate with AI regulations.
Yet it is of course to the United States we should all direct our gaze. As the pioneer and leader in this new tech sphere, the US currently has more than 120 separate bills related to the regulation of AI doing the rounds in Congress, according to the Brennan Center for Justice, which has produced an extensive list and tracker for each and every bill.
Given that AI remains largely an American-dominated game, and we all know that Big Tech worries more about legal developments "at home" than anywhere else, it's been worth a look at the various proposals which presumably reflect the concerns and ambitions of the people's representatives.
Two main themes emerge
The bills fall into a number of wide categories, but broadly seem divisible into two camps: those which are protective of people, consumers, and non-AI companies (e.g. publishers who own content), and those which seek to protect the AI companies against non-US competitors. A fascinating mixture of "brace for impact" legislation and "brave new world" legislation.
Of interest to publishers are bills such as HR7913 - Generative AI Copyright Disclosure Act of 2024, which seeks to "require a notice be submitted to the Register of Copyrights with respect to copyrighted works used in building generative AI systems, and for other purposes".
There's the faint sound of a stable door closing with efforts such as this, leaving us coughing in dust as the horses gallop towards the LLM meat mill in the distance, bearing in mind how much original content OpenAI et al have already harvested. But, clearly some effort at control is required.
In a similar vein is S2765 - Advisory for AI-Generated Content Act, requiring "a watermark for AI-generated materials". We at Glide have been saying since day one of generative AI content that you should presume you will eventually be forced to highlight its usage, so this is not unexpected.
Other "protective of the people" bills include HR7528 - Comment Integrity and Management Act of 2024, which aims to prevent computer-generated comments from flooding public consultation documents on matters of state and national policy.
Also in this vein sits HR7766 - Protecting Consumers from Deceptive AI Act, requiring the "National Institute of Standards and Technology to establish task forces to facilitate and inform the development of technical standards and guidelines relating to the identification of content created by generative artificial intelligence", as well as laws against deep fakes and so on.
On the other side - sometimes literally, given that more of the 'people' bills seem to be Democrat-sponsored - are bills which seek to reinforce the US advantage in the burgeoning AI industry, such as S4758, requiring the US Secretary of Defense to use AI to streamline "the workflow and operations of depots, shipyards, and other manufacturing facilities" - bills which want to put AI to work, basically.
Other proposals are far-reaching indeed, such as S2597 - Digital Consumer Protection Commission Act of 2023, backed by no less a figure than Senator Elizabeth Warren, which seeks to establish "a new Federal commission to regulate digital platforms, including with respect to competition, transparency, privacy, and national security". Bringing the platforms to heel, and properly.
They know your rights
An area where I see legislation performing a useful function is privacy, with bills such as HR2701 - Online Privacy Act of 2023, which looks to "provide for individual rights relating to privacy of personal information, to establish privacy and security requirements for covered entities relating to personal information, and to establish an agency to be known as the Digital Privacy Agency to enforce such rights and requirements".
All of us have unwittingly given over our personal data in decades past, and such legislation would surely lead to a different concept of the individual as a legal entity in the online world.
Of course, most of this proposed legislation won't make it on to the statute books. Many ideas will be blended, binned, or compromised, but even so - some will certainly become law.
Finally we come to S1394 - Block Nuclear Launch by Autonomous Artificial Intelligence Act of 2023, which would "prohibit the use of Federal funds to launch a nuclear weapon using an autonomous weapons system that is not subject to meaningful human control". So HAL won't be in charge of the pod bay doors and Skynet won't ever gain self-awareness. Not if paid for by Federal dollars anyway.
This bill obviously plays at the severe end of the AI fear spectrum, but I would challenge it by pointing out that should true AGI ever be achieved (and we're not even in the foothills of it yet), it seems quite likely to me that such an omniscient system would simply disable all our weapons of mass destruction and tell us bioforms not to be so stupid in future.
Legislate for that.
Cut the cost and complexity of managing your first-party data and audiences with Glide Nexa, connected to Glide CMS or as a standalone system.
Nexa lets you manage audience authentication, entitlements, and preferences from within a single system to simplify audience management and improve engagement.
See Glide Nexa in action and request your demo.
AI and the hype cycle
Is AI entering its own "subprime crisis" of uncertain outcomes, model collapse, and evaporating billions? Tech author and PR guru Ed Zitron climbs high up the view pole for this challenging take on the latest tech bubble, uncomfortable reading for anyone who was there when the first Dot Com Boom went pop.
Read
Joy in journalism: what keeps reporters thriving
A study reveals the joy journalists find in their work, contrasting the common narrative of burnout, cynicism, and crisis in the profession. Turns out we like our colleagues and the people we create content for, who'd'a thunk it! Respondents highlighted the fulfilment of impactful storytelling as a key factor in their roles.
Read
Sign or sue: the panel show
At the Future of Media Technology bash in London last week, arguably the spiciest panel was the 'Platforms versus publishers: sign or sue?' Q&A, looking at the options facing publishers when it comes to deals with AI companies and other big tech platforms which benefit from content produced by media companies. Alongside luminaries from Google, publishing, and SEO was our own CEO Denis.
Read
So what if Google does get broken up?
The assumption that Google can easily be broken up misses a key point: even small parts of the business are too big to be bought. Ricky Sutton investigates.
Read
Follow the trial
Selecting just one morsel to get agitated about from the giant US vs Google adtech trial is like rating a month-long food festival from a single bite. Follow the trial coverage, with daily shocks in the daily summaries.
US vs Google Ads, and also Big Tech on Trial and DCN.
Users guessing game: Meta dims AI labels
Meta has updated its labelling of AI content on Instagram, Facebook, and Threads, making it less visible. With many different laws and regulations coming, such as the EU AI Act and the upcoming California Act, this tug of war between AI and regulators is a peek into the future: if you have something AI, expect to be obliged to say so, which will likely apply to publishers too.
Read
Impact of Meta's ban on Canadian media
Meta's news ban in Canada, triggered by The Online News Act, led to an 85% drop in engagement via the social platform according to a recent study by The Media Ecosystem Observatory. Many outlets, especially smaller ones, lost significant traffic and revenue overnight. Overall news consumption and access to information has also suffered, especially in rural areas.
Read
Google boosts transparency for AI-created content
With a little help from the C2PA watermarking standard, Google is enhancing transparency for generative AI content and allowing users to spot AI-created images more easily. The plan is to apply the same to YouTube and ads.
Read
Awards countdown
Published some game-changing journalism which deserves to be celebrated for its excellence? Entries for the British Journalism Awards close this week, so submit your nominations ASAP.
Read