Good morning. The AI copyright question was deferred to the courts this week. The UK government and the White House reached that conclusion independently, within 48 hours of each other: neither is ready to make a definitive decision.

Governments in the US, UK and EU are still working through the detail. In the meantime, the courts will start answering the questions that policy has not.

Elsewhere, a CEO asked ChatGPT how to get out of a $250m contract — and acted on the response. Here’s what matters this week. 🎯

— Philip

If you read one thing this morning, read the Briefing Room. Everything else is optional.

BRIEFING ROOM

Courts not codes

On Wednesday, the UK government published its Report on Copyright and AI, meeting a statutory deadline under the Data (Use and Access) Act 2025. Two days later, the White House released a legislative blueprint to Congress setting out the Trump administration’s position on AI and intellectual property. The two documents were produced on different continents, by governments with different political orientations and different legal systems. Both arrived at the same place.

Policy diverges, outcome converges

The UK government's report concludes that it is “not the right time to make concrete proposals” on the questions that matter most: whether AI training on copyrighted material requires a licence, and whether rights holders are entitled to opt out. Its firmest recommendation concerns s.9(3) of the Copyright, Designs and Patents Act 1988, a provision you may not have encountered. It gives copyright in computer-generated works (outputs created by software with no human author, such as automated financial reports or algorithm-generated data compilations) to whoever arranged for their creation. Similar provisions exist in other common law jurisdictions but are rare elsewhere. The government's “preference” is to remove it. On most other points, the report says more evidence is needed.

The White House position is characteristically more direct: AI training on copyrighted material does not, in the administration’s view, violate copyright law. But the blueprint then acknowledges “arguments to the contrary exist” and explicitly supports letting the federal courts resolve the dispute. Congress is invited to codify whatever the courts find. The effect is the same.

Written in judgments

Neither government has decided. Both point to the courts. Legal teams should now watch the major cases as closely as the legislative proposals. That calendar is already moving.

  • 🇺🇸 There are reportedly ~100 active AI copyright cases in US federal courts alone.

  • 🇪🇺 Earlier this month, the EU Court of Justice held its first oral hearing in Like Company v Google (C-250/25), in which a Hungarian publisher alleges that Gemini systematically reproduced its press publications without authorisation. The court must determine whether large language model training constitutes reproduction, and whether the text and data mining exception applies. An Advocate General opinion is expected in September. If the court finds that training constitutes reproduction, and that the text and data mining exception is either unavailable or has been overridden by rights-holder opt-outs, it could require AI developers to obtain licences across the EU. That would materially diverge from the White House blueprint.

  • 🇬🇧 UK courts are not bound by the CJEU outcome, but they will be watching. With no broad fair use doctrine equivalent to the US, and with Parliament having declined to introduce a mandatory text and data mining exception after pressure from the creative industries in 2023, the domestic litigation landscape is fertile ground for landmark cases. Getty Images v Stability AI is proceeding to the Court of Appeal, providing the first significant domestic testing ground for the same training and reproduction questions now before the CJEU.

The impact looks set to arrive by judgment before legislation.

RISK RADAR
  • 🇪🇺 More AI Act timeline wrangling The European Parliament's Internal Market and Consumer Protection Committee (IMCO) and its Civil Liberties, Justice and Home Affairs Committee (LIBE) voted jointly on Wednesday to adopt their position on the Digital Omnibus AI Act amendments. High-risk AI obligations are now set for December 2027 for biometrics, critical infrastructure, education, employment, law enforcement and similar use cases, and August 2028 for AI systems embedded in products. The committees also added an explicit prohibition on "nudifier" AI systems to the Article 5 banned practices list, and extended SME compliance reliefs to small mid-caps (fewer than 750 employees and under €150m turnover). The extension for watermarking AI-generated content under Article 50(2) was shortened to 2 November 2026, earlier than the Commission's original February 2027 proposal. The full Parliament plenary votes tomorrow.

    • Why it matters: Many compliance timelines built around the Digital Omnibus assumed a longer period to implement watermarking. Deferrals to December 2027 give more room for high-risk system deployment reviews, but 2027 is now within current planning cycles, not beyond them.

  • 🇪🇺 Privacy enforcement theme of the year The European Data Protection Board launched its 2026 Coordinated Enforcement Action (CEA) on Thursday. The theme is GDPR Articles 12, 13 and 14 (the provisions requiring organisations to tell individuals what data is being processed, why, and on what basis, at the point of collection). More than 25 DPAs are coordinating investigations.

    • Why it matters: The EDPB chooses a Coordinated Enforcement Action theme every year. Previous themes include cookies and consent tools (2023), controller-processor contracts (2024) and data retention (2025). In-house teams can use this launch to prioritise their privacy gap analysis programmes. Enforcement may focus on the content of layered privacy notices and the information presented at the point of collection within product data flows. The 2026 CEA signals that notices which haven't been reviewed since the last major product update are the current exposure.

  • 🇪🇺 The limits of Send to All The Court of Justice of the EU ruled on Thursday in Brillen Rottler (C-526/24) that a data controller may, in exceptional circumstances, reject even a first subject access request as “excessive” if it can show the request was made not to exercise transparency rights but to manufacture an Article 82 damages claim. The threshold is deliberately high: “exceptional circumstances”, with the burden of proof on the controller. In this case, the claimant submitted DSARs to multiple unconnected controllers shortly after subscribing to their services, then pursued compensation where responses fell short. The judgment is persuasive but, of course, not binding on UK courts.

    • Why it matters: The judgment gives a narrow but usable defence against DSARs deployed as a litigation tactic. Legal teams could add a gate to their intake process to flag requests that fit the pattern. That said, the court was equally clear about how well protected genuine first requests remain. This is not a general escape route, just a helpful line drawn at the margins.

FROM THE SIDEBAR
Quick signals worth clocking (optional reading)

🎮 A gaming CEO consulted ChatGPT rather than his lawyers on how to escape a $250m earn-out, with predictable consequences.

POLL OF THE WEEK

Last week we asked: When geopolitical risk emerges, how early is Legal involved?


🟧🟧⬜⬜⬜⬜  🔭 Early: we actively monitor geopolitical developments
🟧⬜⬜⬜⬜⬜  ⚠️  Once business exposure becomes plausible
🟩🟩🟩⬜⬜⬜  ⚖️  When contracts or sanctions questions arise
⬜⬜⬜⬜⬜⬜  🚨 Usually after the issue has escalated

Enjoying the signal?

If you know an in-house lawyer who’s tired of the noise and wants to sound smarter in the boardroom, feel free to forward this edition.

💬 Forward to a colleague

🧠 Was this forwarded to you? Subscribe here to get it every Wednesday.

When you’re ready, here’s how I can help

I’m a General Counsel helping tech and SaaS scale-ups navigate digital regulation. I work with a small number of leadership teams as a Fractional GC or through targeted advisory sprints focused on:

  • AI & Regulatory Strategy: Translating regimes like the EU AI Act into design-level guardrails.

  • Strategic Triage: Making high-stakes calls with imperfect information to keep decisions moving.

  • Investor-Ready Foundations: Hardening your commercial architecture and contracts for the next funding round.

I work with 3-4 leadership teams at a time. If you’re navigating AI deployment, regulatory exposure or investor scrutiny, reply directly to this email.

Too much legal content is dull and jargon-filled. Profiles in Legal is for lawyers who want to think clearly, sound credible in the room and get promoted.

🪃 Reply to this email with what you think we should cover

📣 Request to partner with us

This newsletter is for general information only and does not constitute legal advice. Seek professional advice for specific situations.
