OpenAI's most absurd request yet to content owners: prove it's yours
Accused of theft, OpenAI's defence strategy against content owners centres on demands to see the notes, drafts, and interviews of writers and reporters.
It's 2036, and your Satoshi Nakamoto F-3p Autonomous Personal Mobocubicle has just been Forcibly Reassigned from right outside your nanoweave iYurt on the Puerto Las Vegas beachside. In dustspeak, your ride has been stolen.
Obviously, you shout at your fridge to call the police and, as befits a Gold Subscriber, within seconds you're assigning access codes to the shimmering holocop which picked up the support ticket.
Unfortunately, there's a blocker to the carbon-neutral blue lights. In our parallel future reality, President Altman's Constitutional Amendment Release Notes v2035 decreed rigorous new evidential standards to "streamline" the investigative process. The dreaded "Ownership is Theft" clause.
In a somewhat unexpected turn of events, down at the ChatGPT-Precinct (Motto: "To deflect and swerve"), as the official subscriber to the stolen hoverpotty you are being asked to provide additional evidence of your vehicular responsibility before any investigation can proceed.
Said responsibility is to be verified by producing a sworn statement from each and every material-extraction and fabrication worker involved in producing your vehicle, right down to the metal dug from the Earth, with dates and times of transformation.
What strangeness is this of which I write? Why such a flight of fancy? It's in light of a revelation from the expert Substack of Graham Lovelace, which illustrates the contempt OpenAI holds for publishers when it appears it might have to pay for their content.
Hand over your notes
Poring over the legal docs in the OpenAI vs NYT case, which puts copyright theft and AI pillaging into both the spotlight and the courts, Lovelace spotted the outrageous request from OpenAI "to force the news org to hand over reporters’ notes 'for each asserted work' - thus proving the Times originated the stories it says have been ripped off".
Exsqueeze me? Baking powder?
A publisher such as the NYT, which produces several novels' worth of words every day, would therefore be expected to prove the provenance of each and every one of them in order to futureproof itself against such a demand.
It's absurd and insulting, and shows the depths to which OpenAI will stoop in order to turn the work of others into profit for itself. I don't need to draw a parallel with doctors' notes and patient privacy for anyone in media to see how odious it is to demand access to journalists' notes.
Happily, the Honorable Sidney H. Stein, human judge, cared nought for the request and rejected it out of hand, commenting that the NYT's "newsgathering process on a story-by-story basis has no relevance to whether it is entitled to enforce the millions of copyrights it has registered over the years".
He added that "OpenAI's claim that it needs all 'reporter's notes, interview memos, records of materials cited, or other files for each asserted work' - purportedly to determine whether The Times’s works are in fact protectable intellectual property - is unprecedented and turns copyright law on its head."
Essentially, "That's not how Copyright works, bozo." While not all heroes wear capes, this one was in robes.
Aside from the request transparently being an acidic rot on the freedom of individuals to speak to the press wherever they may be in the world - and freedom is a debatable term in many places - it also denigrates what reporters' notes are actually for.
Anyone who has worked on contentious stories is aware of the threat of legal action, or has actually been subject to it, and knows we must be able to prove the validity of our reporting through our notes and retained primary material. They are the safety net, both protecting the reporter and the title, and reinforcing the guts of the story against the basic refutation of "Prove it...".
Conversely, keeping those notes secret is something people have gone to jail for, because - as any decent reporter would do for, say, an OpenAI whistleblower revealing the inner workings of the world's most influential start-up - the sanctity of a source's identity in pursuit of the public good is extremely hard won.
The central takeaway from this is that OpenAI fundamentally do not care for the process of journalism, especially news journalism, and worse, they seem to regard an assault on the basic human process of extracting facts from the unknown by asking questions as an opportunity, not a threat.
By their actions, or attempted actions, they make themselves known.
COPIED, not copied
Coming just in time to save our notes and sources, the bipartisan US COPIED Act seeks to protect artists, journalists and songwriters from the big bad of AI theft and deepfakes. The bill mandates clear origin information for digital content and prevents AI use of that content without consent. Without a doubt a step in the right direction by legislators who are playing catch-up - but who are no longer shy to do so.
Read
Content converters
Press Gazette recently analysed leading subscription newsbrands to see which are the cream at converting readers to subscribers. The Atlantic and Barron's, both with impressive conversion rates above 20%, lead the pack. The global newsbrands listed have amassed legions of digital subscribers, with a little help from paywalls and subscription strategies.
Read
Care in the community
Relevant to our main comment piece, The Bureau of Investigative Journalism's exposé on Philip Morris was ignited by a whistleblower, who risked everything to unveil the company's hidden funding of scientific research. The investigation peered deep into the inner workings of the company, shedding light on the daunting hurdles and risks involved in uncovering corporate misconduct as well as highlighting again the responsibilities reporters grapple with in protecting sources.
Read
Court in the act
A story about dedication, commitment, and the battle with the commercial realities of court reporting, seen through the eyes of the best. Despite the struggles, its exponents remain dedicated to preserving the diverse and sometimes disturbing stories in the public record, fighting for public awareness and judicial transparency.
Read
Keep AI's hands off of our stuff!
Proof News reveals that every major AI company is at it... and that pinching content from each other's platforms and partners is apparently just as acceptable as pinching it from anyone else. It's easier to ask for forgiveness than for permission, they say.
Read
No more secret AI makeovers for images
Google recently updated its documentation on labelling AI-manipulated images, adopting new metadata standards from the IPTC, the metadata standards body, to provide additional information about images beyond their Google News context. With this change it will be easier to properly identify images which were manipulated by AI in search results, enhancing transparency and accuracy when AI-generated content is displayed. Google adds that, for transparency, it works with the IPTC to assess which metadata standards should go into AI-generated images. A rough sketch of what that labelling can look like in practice follows below.
Read
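For the technically minded, here is a minimal sketch of how such a label might be embedded in an image file: the IPTC "Digital Source Type" property, carried in the file's XMP metadata. The field name, vocabulary value and exiftool invocation below reflect our reading of the public IPTC documentation rather than Google's actual pipeline, and the file name is purely illustrative.

```python
# Minimal sketch: marking an image as AI-generated via the IPTC
# "Digital Source Type" property, written with the exiftool CLI.
# Assumes exiftool is installed and on PATH; check the current IPTC and
# Google documentation before relying on the exact field or value.
import subprocess

# IPTC controlled-vocabulary URI for media created by a trained algorithm (generative AI)
TRAINED_ALGORITHMIC_MEDIA = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def mark_as_ai_generated(image_path: str) -> None:
    """Embed the XMP-iptcExt:DigitalSourceType property in the image file."""
    subprocess.run(
        [
            "exiftool",
            f"-XMP-iptcExt:DigitalSourceType={TRAINED_ALGORITHMIC_MEDIA}",
            "-overwrite_original",
            image_path,
        ],
        check=True,
    )

if __name__ == "__main__":
    mark_as_ai_generated("example.jpg")  # hypothetical file name
```

Search engines and newsroom systems that understand the vocabulary can then surface the label, which is the transparency the documentation change is aiming at.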
Auf Wiedersehen disks!
Germany follows Japan in consigning ancient tech to the scrapheap. Last week Japan's Digital Agency announced the elimination of floppy disks from government systems, and now Germany's F123 frigate fleet, which has relied on the whirring disks and magnets since the 1990s to control vital warship systems, is following suit. Makes you wonder what other obsolete tech controls vital operations - let's hope no CMS is that old!
Read
Meta hearse
More indications that regulators are more than happy to put legal requirements ahead of the product plans of the big platforms, even if that makes them DOA. Meta joins Apple in saying mooted AI products won't make it past EU regulations as they stand.
Read