Good morning. The story this week is not new law. It’s existing regulators discovering that the frameworks they already enforce apply directly to AI conduct. For Legal, these cases show where (and how) AI strategy is already live compliance exposure. Three enforcement signals in this edition point in the same direction. Two others sit outside AI entirely: a reminder that risk rarely arrives one theme at a time.
As always, the aim is a briefing that keeps you informed - without the padding. 🎯
— Philip
If you read one thing this morning, read Risk Radar. Everything else is optional.
BRIEFING ROOM
Already regulated

On Wednesday, the UK’s competition authority published a detailed analysis identifying AI-driven pricing coordination - including systems that learn coordinated outcomes without any explicit human instruction - as a live competition enforcement priority. The blog also adds a word to the compliance vocabulary: it cites research showing AI agents can learn to coordinate through “steganographic” techniques. Steganography is the practice of concealing messages within ordinary, innocuous data (images, audio or text) so that the communication itself goes undetected.
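For readers who want the intuition rather than the doctrine: a toy sketch of how a hidden signal could, in principle, ride on an ordinary-looking price. This is illustrative only - it is not drawn from the CMA analysis or the cited research, and the function names and parity scheme are invented for the example.

```python
# Toy "steganographic" signalling sketch (hypothetical, for illustration):
# one hidden bit is embedded in the parity of the final digit of a price,
# so an observer sees only a normal-looking price in cents/pence.

def encode_signal(base_price_cents: int, signal_bit: int) -> int:
    """Embed one hidden bit (0 or 1) in the parity of the posted price."""
    price = base_price_cents - (base_price_cents % 2)  # force an even base
    return price + signal_bit  # odd final digit = 1, even = 0

def decode_signal(observed_price_cents: int) -> int:
    """Recover the hidden bit from an observed market price."""
    return observed_price_cents % 2

posted = encode_signal(1999, 1)   # agent A posts a price carrying bit 1
received = decode_signal(posted)  # agent B reads the bit off the market
```

The point of the sketch is the compliance one: the channel is the public price itself, so nothing in logs or messages would show an "agreement" - only the pattern of posted prices would.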
This approach of applying existing law aligns with the ICO's initial analysis of agentic AI from earlier this year, which reaffirmed that UK GDPR applies in full to autonomous systems operating on behalf of organisations.
Two regulators. Two existing legal frameworks. One consistent position.
The law that was always in the room
Most in-house teams are currently managing AI risk with a particular eye on August 2026: tracking EU AI Act milestones, building governance frameworks in anticipation of dedicated enforcement.
The implicit assumption is simple: prepare now so you're ready when the rules arrive.
But the rules are already here.
GDPR has governed AI systems processing personal data since 2018. Competition law has always applied to coordination that reduces rivalry, whether the actor is human or algorithmic. Liability for delegated systems has always sat with the organisation deploying them.
The EU AI Act will add a new layer. The layer underneath has been in force for years.
Whose agent is it, anyway?
The ICO is unambiguous: legal accountability stays with the organisation. An agentic system operating on your behalf generates compliance exposure at each step of its task chain — purpose limitation, data minimisation, lawful basis — without a human in the loop to catch the gaps.
The CMA runs parallel logic. A pricing algorithm that learns coordinated outcomes without instruction requires no boardroom agreement to engage competition law. Effect is the test, not intent — and the structural conditions that enabled the behaviour are enough.
GCs will already be considering:
🔹 Purpose limitation drift. Agentic systems pursuing open-ended goals may process personal data for purposes that weren't declared at the point of design. GDPR's purpose limitation principle applies to each step of a task chain - not just the initial instruction.
🔹 Tacit collusion. The CMA has now stated explicitly that AI systems that learn to coordinate pricing without human direction may still engage competition law. "No one told it to do that" isn’t a defence.
🔹 Hub-and-spoke risk in shared tools. Deploying a third-party AI service alongside market competitors, without any direct agreement between those competitors, may be assessed as a potential hub-and-spoke arrangement. As has always been the case, intent is not required; structural effect is enough.
🔹 Audit trail exposure. Governance frameworks that document policies rather than decisions will face pressure. Regulators are starting to ask what the system actually did, on what basis, and who was in a position to intervene.
The EU AI Act will impose structured risk classifications and dedicated enforcement machinery when it fully enters into force. But UK regulators moving right now are using laws that have been in place for years - and they are looking at systems that are already live. The CMA disclosed this week that it is already deploying agentic AI internally to screen for competition breaches at scale. That is the more immediate compliance horizon.
RISK RADAR
🛢️ Force majeure goes live in the Gulf. The effective closure of the Strait of Hormuz following US-Israeli strikes on Iran has triggered the most significant cascade of force majeure declarations in energy and shipping contracts in years. QatarEnergy formally declared force majeure on LNG supply contracts on Wednesday; Maersk, CMA CGM and Hapag-Lloyd suspended Strait transits and invoked force majeure provisions in their bills of lading the same week. Marine war risk insurers withdrew cover. European gas prices rose sharply as the closure took hold, with Qatar alone accounting for approximately 20% of global LNG exports.
Why it matters: Force majeure clauses that have sat dormant in energy, shipping and supply contracts are now being invoked and tested simultaneously across multiple contract chains. GCs with counterparties in the energy, LNG, or maritime supply chain have a live question about whether upstream declarations by carriers and producers travel through to their own contractual positions. Boards will now be asking how quickly supply chain risk governance can be operationalised.
🧩 Anonymous data, personal problem. Gaining attention this week is a Court of Appeal ruling from last month against DSG Retail (Dixons/Currys) in a significant data security judgment, confirming that controllers must protect against third-party “jigsaw identification” risk. The case arose from a 2017-18 attack in which hackers scraped approximately 5.6 million payment card numbers and expiry dates. DSG had tried to limit its liability over those records because the stolen data couldn't identify individuals in the attacker's hands. The Court of Appeal unanimously disagreed: if data is personal data from the controller's perspective, the obligation to protect it is fully engaged regardless of what a third party can do with what they take.
Why it matters: The judgment narrows a defence that legal teams might have assumed - that exfiltrated data wasn't “personal” because the attacker couldn't identify anyone from it. The case arose under the Data Protection Act 1998, but the court's reasoning explicitly engages with CJEU GDPR jurisprudence and applies by analogy to current UK GDPR security obligations. GCs reviewing data security frameworks, anonymisation strategies or breach response positions may find this a useful reference point when sense-checking whether internal "non-personal data" classifications hold up under scrutiny.
🗂️ The EDPB starts counting data brokers. The EDPB published a market study on Wednesday mapping the data broker landscape, establishing a working methodology for identifying brokers, a typology of eight business model categories and an initial risk assessment. The study was commissioned through the EDPB's Support Pool of Experts at the request of the Belgian DPA. More than 40 data brokers and providers were identified as active in Belgium alone - a notable methodological finding being that standard industry classification codes proved unreliable, because companies operating in this space routinely don't self-identify as data brokers.
Why it matters: Regulators don't build market taxonomies without enforcement intent in the medium term. The eight categories the EDPB has mapped include AI platforms integrating personal data, data pools and cleanrooms, and marketplaces handling aggregated datasets - descriptions that reach well beyond what organisations typically think of as “data brokers”. For GCs reviewing data vendor relationships or third-party data supply chains, this study is a useful early signal of where regulatory attention is heading.
FROM THE SIDEBAR
Quick signals worth clocking (optional reading)
POLL OF THE WEEK
Events in the Middle East this week highlight how quickly geopolitical developments become contract and supply chain questions. So this week we're asking:
When geopolitical risk emerges, how early is Legal involved?
Last week, we asked “When do you expect an autonomous AI agent to become part of your legal team’s workflow?”
⬜⬜⬜⬜⬜ ⏰ By Q3 2026
🟧🟧⬜⬜⬜ 🕟 By end of 2026
🟩🟩🟩🟩🟩 🕤 By end of 2027
🟧⬜⬜⬜⬜ ⏾ Never
This suggests most teams see agentic AI entering legal workflows in the medium term rather than immediately - a significant gear-shift in itself for a risk-averse, tradition-led profession. But tools capable of delegated execution are already shipping, and the CLOC report linked above suggests boards expect legal to absorb headcount freezes using tech. Time may speed up.
Enjoying the signal?
If you know an in-house lawyer who’s tired of the noise and wants to sound smarter in the boardroom, feel free to forward this edition.
💬 Forward to a colleague
🧠 Was this forwarded to you? Subscribe here to get it every Wednesday.
When you’re ready, here’s how I can help

I’m a General Counsel helping tech and SaaS scale-ups navigate digital regulation. I work with a small number of leadership teams as a Fractional GC or through targeted advisory sprints focused on:
AI & Regulatory Strategy: Translating regimes like the EU AI Act into design-level guardrails.
Strategic Triage: Making high-stakes calls with imperfect information to keep decisions moving.
Investor-Ready Foundations: Hardening your commercial architecture and contracts for the next funding round.
I work with 3-4 leadership teams at a time. If you’re navigating AI deployment, regulatory exposure or investor scrutiny, reply directly to this email.
- Philip
Too much legal content is dull and jargon-filled. Profiles in Legal is for lawyers who want to think clearly, sound credible in the room and get promoted.
This newsletter is for general information only and does not constitute legal advice. Seek professional advice for specific situations.

