Good morning. Some of the most personally affecting matters I’ve worked on as a lawyer have been product liability cases. Moments where choices made years earlier were being dissected under a harsh light. Some of the most satisfying work has been the opposite: helping product teams ship. Moving fast. Enabling growth. Translating rules into momentum.

Those two worlds are getting closer. This week brings real commercial examples - addictive design, AI imagery, autonomous agents - where accountability is moving upstream. The only defensible position will be for governance to live inside product architecture and system permissions - not layered on at the end. AI governance is not a policy question. It’s a design question.

Today’s edition looks at where those lines are beginning to blur - and what that means for in-house teams navigating speed, scrutiny and uncertainty. 🎯

— Philip

If you read one thing this morning, read the Briefing Room. Everything else is optional.

BRIEFING ROOM

Addictive design on trial

Mark Zuckerberg appeared in an LA court last week to answer questions in a landmark trial focussed on whether Meta intentionally designed its social media platforms to be addictive. The plaintiffs allege that compulsive use of platforms such as Instagram has exacerbated depression and suicidal ideation among young users. 

Snapchat and TikTok settled equivalent claims, but Meta and Google (for YouTube) are pursuing their defence in court.

The proceedings have drawn significant public attention, bolstered by the presence of Prince Harry, who addressed bereaved families in Los Angeles to advocate for systemic accountability and “safety by design”.

Adam Mosseri, head of Instagram, challenged the use of the word “addiction” in relation to Instagram’s products, saying “I'm sure I've said that I've been addicted to a Netflix show when I binged it really late one night, but I don't think it's the same thing as clinical addiction" (BBC). 

Commentators have observed how these product liability cases mirror “Big Tobacco” cases of the 1990s.

The “bellwether” case is expected to have a profound impact on the liability of social media companies in the US. 

Digital product liability

For GCs, the case mirrors a wider trend in legislation and regulatory enforcement: away from downstream liability and towards liability at the level of upstream product design (as explored further in our Risk Radar):

🔹 The plaintiffs sidestep the usual platform shield under section 230 of the US Communications Decency Act by targeting product design rather than user-generated content.

🔹 Internal documents have become central. From Zuckerberg denying the existence of internal KPIs on user “time spent” on the apps, to Meta’s then head of global affairs (and one-time UK deputy prime minister) Nick Clegg quoted as having written in an email: “The fact that we have age limits which are unenforced makes it difficult to claim we are doing all we can”. 

🔹 Legislative momentum is converging. The EU is expected to regulate features such as infinite scroll, autoplay and “streaks” under the proposed Digital Fairness Act. In the UK, the government has publicly backed an “end to addictive design”.

UX optimisation is now being litigated as a design choice carrying foreseeable risk. The next frontier of platform liability may turn not on what users post, but on what product teams choose to optimise.

RISK RADAR
  • 🇬🇧 ICO issues record £14.47m fine against Reddit for children’s privacy failures. Yesterday, the UK Information Commissioner’s Office imposed its largest-ever non-data security penalty after finding that Reddit failed to implement robust age assurance measures and did not have a lawful basis for processing the personal data of children under 13. The ICO also found that Reddit had not conducted a data protection impact assessment (DPIA) assessing risks to children prior to January 2025, despite allowing under-18s to use the platform. The regulator criticised reliance on self-declaration of age and emphasised that organisations must match age assurance methods to the level of risk on their platform.

    • Why it matters: This was a late addition to today’s edition, but one that only reinforces the broader direction of travel. A record fine for failures in age assurance and DPIA processes shows how quickly upstream design decisions can translate into material enforcement. GCs will recognise the structural point: where services are likely to be accessed by children, lawful basis, age controls and risk assessments cannot be retrofitted once a product has scaled. Governance that sits downstream of launch is increasingly difficult to defend.

  • 🇬🇧 UK speeds up crackdown on non-consensual intimate images. The UK government announced amendments to its Crime and Policing Bill to require platforms to remove non-consensual intimate images within 48 hours of being flagged. Failure to comply could trigger fines of up to 10% of qualifying worldwide revenue or service blocking in the UK. In parallel, Ofcom confirmed it is accelerating its timeline to require platforms to deploy proactive “hash matching” technology to detect and block illegal intimate images - including explicit deepfakes - at the point of upload. Subject to parliamentary approval, these measures could come into force as early as this summer.

    • Why it matters: Regulators are enforcing against intimate image abuse as the highest level of online harm. In the same week that Ofcom issued a £1.35m fine to a website for lack of age verification, this is another real-world example of legal risk shifting further upstream to operational product design choices. Legal teams in organisations with user-generated content risk can use this week’s developments to pressure-test: whether internal “notice and action” systems can reliably meet 48-hour removal windows at scale; whether proactive moderation tools (e.g., hash matching) can be technically and contractually implemented; and whether the steps taken can be readily explained in response to an information request. Proactive detection is no longer a reputational differentiator. It is rapidly becoming a regulatory expectation.
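For readers who want the mechanics: hash matching works by comparing a fingerprint of each upload against a database of fingerprints of known illegal images. The minimal sketch below uses a cryptographic hash (SHA-256) purely to illustrate the deny-at-upload flow; production systems typically use perceptual hashes that survive resizing and re-encoding, and the blocklist contents here are hypothetical.

```python
import hashlib


def sha256_hex(data: bytes) -> str:
    """Fingerprint raw file bytes as a SHA-256 hex digest."""
    return hashlib.sha256(data).hexdigest()


def should_block(upload: bytes, blocklist: set[str]) -> bool:
    """Block the upload if its fingerprint matches a known illegal image."""
    return sha256_hex(upload) in blocklist


# Hypothetical blocklist, seeded from an industry hash-sharing scheme
blocklist = {sha256_hex(b"known-illegal-image-bytes")}

print(should_block(b"known-illegal-image-bytes", blocklist))  # True: blocked at upload
print(should_block(b"a-legitimate-photo", blocklist))         # False: allowed through
```

The legal relevance is the design choice: the check runs at the point of upload, before content is ever published, which is precisely the “proactive” posture Ofcom is moving towards.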

  • 🌍 Global privacy regulators issue joint warning on AI-generated imagery. On Monday, data protection authorities from 61 jurisdictions published a joint statement addressing the privacy risks of AI systems that generate realistic images and videos of identifiable individuals without their knowledge or consent. The signatories highlight concerns around non-consensual intimate imagery, defamatory depictions and harm to children, especially where image-generation tools are embedded within social media platforms. The statement sets out common expectations for organisations developing or deploying AI content generation systems, including: implementing robust safeguards against misuse; ensuring meaningful transparency about system capabilities and limits; providing accessible and rapid removal mechanisms; and protecting children.

    • Why it matters: While legal frameworks differ across jurisdictions, sixty-one regulators have now aligned publicly around a common risk framing. That reduces scope for jurisdictional divergence and increases the likelihood of coordinated scrutiny where harms arise - as we’ve already seen in regulatory responses to Grok (Edition 7). Product counsel will now understand that genAI features need to be viewed through a data protection lens as well as a content moderation one. The statement also shows AI governance surfacing in operational reality. Organisations are expected to anticipate foreseeable misuse and build in preventative controls, rather than rely solely on reactive complaint handling once harm occurs.

  • 🇳🇱🦞 Dutch DPA warns of major security risks in autonomous AI agents. The Dutch Data Protection Authority (AP) has cautioned organisations against deploying experimental AI agents such as OpenClaw, citing cybersecurity risks. OpenClaw enables users to grant an autonomous AI assistant full access to local systems, email and connected services. The AP describes such agents as potential “Trojan horses” if inadequately secured. Security researchers have identified malicious plugins, vulnerabilities enabling indirect prompt injection, and weaknesses that could lead to credential theft, account takeover or even remote system control. The regulator stresses that use of open-source tools does not dilute GDPR accountability and is pushing for clarification at EU level that autonomous AI agents fall squarely within the scope of the EU AI Act’s product safety framework.

    • Why it matters: The underlying risks of experimental AI agents are not surprising. What is notable is a leading European data protection regulator issuing a formal warning to businesses at this early stage of the technology’s development. AI agents merit their own governance category, presenting risks beyond those of a standard software tool. This raises the expectation on in-house legal teams. Assessing AI agents requires an understanding of permission structures, plugin ecosystems, prompt injection risks and system-level execution rights. The governance conversation is moving below the interface layer and into technical architecture. Lawyers advising on AI deployment will want to engage with this detail to avoid blind spots in their oversight.
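To make “permission structures” concrete: the core safeguard regulators are gesturing at is a deny-by-default gate between the agent and the systems it can touch. The sketch below is illustrative only - the action names are hypothetical and real deployments would add authentication, logging and human review - but it shows the architectural point that an agent’s tool calls can be checked against an explicit allowlist before anything executes.

```python
# Hypothetical allowlist of actions this agent has been granted
ALLOWED_ACTIONS = {"read_calendar", "draft_email"}


def is_permitted(action: str) -> bool:
    """Deny by default: only explicitly granted actions may run."""
    return action in ALLOWED_ACTIONS


def run_tool_call(action: str, target: str) -> str:
    """Execute an agent's requested tool call only if the gate allows it."""
    if not is_permitted(action):
        # Blocked calls are surfaced for review rather than silently executed
        return f"BLOCKED: {action} on {target}"
    return f"OK: {action} on {target}"


print(run_tool_call("read_calendar", "team calendar"))  # permitted action
print(run_tool_call("delete_files", "/home/user"))      # denied by default
```

The contrast with tools like the OpenClaw example above is stark: granting an agent “full access to local systems” is the opposite of this pattern, which is why the AP reaches for the “Trojan horse” framing.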

FROM THE SIDEBAR
Quick signals worth clocking (optional reading)

🏦 HSBC's in-house team implements Harvey, proving it’s not just for law firms 

POLL OF THE WEEK

When do you expect an autonomous AI agent to become part of your legal team’s workflow?

Login or Subscribe to participate

Last week we asked: “When something is framed as a ‘pilot’, who usually drives the decision?”

🟩🟩🟩🟩🧩Product

⬜⬜⬜⬜⬜💰Sales

🟧⬜⬜⬜⬜📣Marketing

⬜⬜⬜⬜⬜⚖️Legal

Responses suggest Product drives most “pilot” decisions. Legal rarely does.

That dynamic works well when pilots remain contained. It becomes more complex when pilots evolve into core product features or AI-enabled systems. This week’s stories on addictive design and AI deployment illustrate how quickly experimental features can become the focus of regulatory scrutiny. The sequencing of involvement may matter more than the label.

HIRING BOARD

This week’s senior vacancies reflect the shift from legal-as-advisory to legal-as-architecture, in the worlds of autonomous mobility, algorithmic operations and global digital safety. 

🌎 SaaS.Group, GC & Head of Legal Operations (AI-First): Signals the emergence of the "AI-first" GC, where the mandate is to replace traditional junior associates with LLM-driven automation and "no-code" legal stacks. A genuinely cutting-edge spec.

🇬🇧 Epic Games (maker of Fortnite), Senior Counsel, Regulatory: Formalises the "always-on" regulatory function required to navigate the fragmentation of global online safety laws.

🇬🇧 FreeNow by Lyft, Principal Legal Counsel: Moves legal upstream into product design by tasking counsel with engineering the transition from human-driver liability to automated system responsibility.

The bar is moving from managing risk to building the infrastructure that contains it. Whether it is hard-coding liability frameworks for autonomous vehicles or auditing hundreds of SaaS contracts via AI agents, these roles suggest that the next generation of senior in-house leaders must be as comfortable with algorithmic logic and technical "stacks" as they are with statutory interpretation.

Enjoying the signal?

If you know an in-house lawyer who’s tired of the noise and wants to sound smarter in the boardroom, feel free to forward this edition.

💬 Forward to a colleague

🧠 Was this forwarded to you? Subscribe here to get it every Wednesday.

When you’re ready, here’s how I can help

I’m a General Counsel helping tech and SaaS scale-ups navigate digital regulation. I work with a small number of leadership teams as a Fractional GC or through targeted advisory sprints focused on AI & Regulatory Strategy, Strategic Triage and Investor-Ready Foundations.

I work with 3-4 leadership teams at a time. If you’re navigating AI deployment, regulatory exposure or investor scrutiny, reply directly to this email.

Too much legal content is dull and jargon-filled. Profiles in Legal is for lawyers who want to think clearly, sound credible in the room and get promoted.

🪃 Reply to this email with what you think we should cover

📣 Request to partner with us

This newsletter is for general information only and does not constitute legal advice. Seek professional advice for specific situations.

Keep Reading