Good morning. This week, two of our Risk Radar items are examples of law being used to make things easier for businesses, rather than harder. DUAA aims to lower friction around some familiar data protection decisions. And the UK Jurisdiction Taskforce’s draft statement on AI liability carefully turns legal theory into practical, worked-through scenarios that businesses can actually reason with.
Elsewhere, the European Commission makes a bold move on “addictive design”; and Oatly is in court about milk.
Here are some calm signals, early orientation and something practical to work with before decisions lock in. 🎯
— Philip
If you read one thing this morning, read the Risk Radar. Everything else is optional.
BRIEFING ROOM
From SaaSpocalypse to organisational choice

Last week, markets wiped as much as $1 trillion off tech stocks, driven by concerns over AI capital spending alongside nerves around a so-called “SaaSpocalypse”. The fear is that enterprise customers will move away from seat-based SaaS applications to agentic AI systems that co-ordinate workflows one layer up.
Thomson Reuters, owner of Westlaw, dropped 21% after Anthropic launched knowledge-worker plugins for its Cowork tool (covered in Edition 10). RELX (LexisNexis) and Wolters Kluwer saw similar reactions as investors digested what this could mean for professional software businesses.
Executives pushed back quickly. Jensen Huang, CEO of Nvidia, called the idea that AI would replace the software industry “illogical”. Steve Hasker, CEO of Thomson Reuters, pointed in an earnings call to the company’s decades-old moat of proprietary professional content.
Both can be true
Whilst enterprises are unlikely, and contractually unable, to rip out decades of embedded SaaS systems overnight, agentic AI can still drive structural change incrementally. As AI is layered into existing tools, processes and roles, leadership conversations move from productivity gains to the shape of the organisation’s workforce itself.
Recent data shows how impactful those conversations can be. UK companies adopting AI are seeing productivity gains paired with job losses; US companies, using similar tools, are adding headcount. The technology is broadly similar. The outcomes are not.
The difference seems to lie in whether leadership is prepared to rethink how work is organised, or whether AI is simply being used to accelerate the status quo.
“Just a pilot”
Senior legal teams will hear this shift first in the language of product trials. What presents as a narrow tooling decision often sits downstream of a much wider choice: whether AI is being used to change how work is done or to do the same work with fewer people.
This is where the GC lens matters.
Investors, regulators and staff will look to Legal to identify:
🔹 when procurement terms are shaping who is responsible for decisions made by AI-enabled systems.
🔹 when a “pilot” crosses the line into an operational dependency that is hard to unwind.
🔹 when a vendor rollout is, in substance, a workforce or organisational decision.
This is the kind of moment where having a steady frame matters more than having all the answers.
RISK RADAR

🇬🇧 DUAA now mostly live. On Thursday, most of the remaining data protection provisions in the UK’s Data (Use and Access) Act 2025 came into force. Key changes include:
🔹 the introduction of “recognised legitimate interests”: a new lawful basis under which listed processing activities are treated as legitimate without the Art. 6(1)(f) UK GDPR balancing test.
🔹 a relaxation of the requirement for a qualifying lawful basis before solely automated decision-making, except where special category data is involved, with rights to object and to human intervention preserved.
🔹 statutory footing for existing ICO guidance and practice on DSAR handling.
The ICO also updated its “by design and by default” guidance, with further guidance and some final implementing provisions still to come.
Why it matters: DUAA is intended to create some pro-growth divergence from the EU regime. It eases friction in some common business use cases. Direct marketing and intra-group data sharing for administrative purposes are now expressly named in the legislation as examples of legitimate interests, two of the most practically useful changes. For Legal teams, this reduces pressure to over-engineer consent or contractual necessity, including in areas like AI-assisted recruitment. But it doesn’t remove judgement. The ICO continues to expect transparency, meaningful human intervention in automated decisions and defensible governance choices. Less box-ticking, not less responsibility.
📱 EU targets TikTok for “addictive design”. On Friday, the European Commission issued preliminary findings that TikTok’s product design breaches the Digital Services Act, focusing on features such as infinite scroll, autoplay, push notifications and highly personalised recommender systems. The Commission believes that TikTok’s risk assessment did not adequately account for foreseeable harms to wellbeing (including for minors and vulnerable adults) and that existing mitigation measures (screen-time prompts, parental controls) are too easily dismissed to constitute effective risk reduction. TikTok can still respond, and the Commission may yet adjust its position, but the direction of travel is now on the record.
Why it matters: Capturing and holding attention has long been a core metric of social media. Now the EU has the tools to enforce penalties based on how engagement mechanics are built. This shifts the centre of gravity for product risk discussions away from content moderation rules and communications plans, and toward design choices, incentives and how friction operates in the user experience. For GCs, this provides a clean way to frame internal conversations about product risk: safety tools need to be present and need to meaningfully change behaviour.
🤖 UK paper to explain how AI liability actually lands. The UK Jurisdiction Taskforce, chaired by the Master of the Rolls, is consulting until Friday on a draft legal statement that aims to give businesses certainty on how English law allocates liability for AI harms. The core premise is that AI has no legal personality, so liability must attach to people or organisations, usually via contract or established negligence principles. That mapping is less intuitive, though not impossible, when harms are caused by non-deterministic autonomous agents.
Why it matters: This gives Legal teams something concrete to work with in board-level AI risk discussions. The paper sets out detailed hypothetical commercial scenarios involving customers, suppliers, and AI actors (including foundation model providers) and works through where liability is likely to sit. That kind of analysis is rare at this stage. It helps move conversations away from abstract accountability and toward practical questions of contracting, risk allocation, and governance - well before real test cases start working their way through the courts.
FROM THE SIDEBAR
Quick signals worth clocking (optional reading)
🐮 Today the UK Supreme Court will rule on Oatly’s “Post Milk Generation” trademark, a case turning on technical restrictions on the use of “milk” for non-animal products.
🇫🇷 France’s CNIL issued half a billion euros of fines in 2025.
🙋 People are hiring out their bodies to work for AI agents.
POLL OF THE WEEK
When something is framed as a “pilot”, who usually drives the decision?
Last week, we asked “In your organisation, what is AI primarily doing to legal work today?”. The results suggest that for Legal, AI is still being used to compress execution rather than to change how work is organised:
⬜⬜⬜⬜💸 Reducing external counsel spend
⬜⬜⬜⬜🤖 Automating low-level internal work
🟩🟩🟩🟩👥 Enabling the same team to handle more
⬜⬜⬜⬜✂️ Optimising headcount
HIRING BOARD
This week’s senior legal hiring spans big-tech scale, regulated incumbents and AI-native SaaS.
🇫🇷 Amazon, Corporate Counsel EU Books: a senior generalist embedded in product and consumer businesses, formalising legal as a day-to-day operator inside fast-moving device and platform teams.
🇬🇧 BP, Counsel, Litigation & Disputes: a specialist disputes role reinforcing legal as a financial and reputational risk control function, tightly focused on high-value, cross-border matters rather than business enablement.
🇫🇷 Dailymotion, Legal Counsel, Privacy & Business Affairs: a hybrid privacy–commercial role sitting close to product, advertising, and content, signalling that regulatory exposure is now inseparable from revenue mechanics in media platforms.
🌍 n8n, Senior Legal Counsel, EMEA/DACH: an early, foundational hire tasked with scaling contracting, privacy, and AI governance in parallel, explicitly positioning legal as infrastructure for growth in an AI-first company.
Taken together, these roles show the bar stretching in two directions at once. Large incumbents continue to ring-fence legal expertise around disputes and mature product lines, while growth-stage and AI-native companies expect senior lawyers to build systems, unblock revenue, and shape governance as the product evolves - almost uncomfortably close to product, data, and AI design.
Enjoying Profiles in Legal?
If you know an in-house lawyer who values signal amid the noise, feel free to forward this on.
💬 Forward to a colleague
🧠 If this was forwarded to you, you can subscribe here
ABOUT THE EDITOR

I’m a General Counsel advising leadership teams on regulatory, product and board-level decisions in tech and regulated markets.
I work with a small number of companies on the judgement calls behind growth, regulatory pressure and investor scrutiny - and on the key contracts that follow from those decisions. Get in touch.
- Philip
Too much legal content is dull and jargon-filled. Profiles in Legal is for lawyers who want to think clearly, sound credible in the room and get promoted.
This newsletter is for general information only and does not constitute legal advice. Seek professional advice for specific situations.
