Good morning. Two US jury verdicts landed this week, four days apart, and both caught mainstream attention. Legal teams will have spotted that the liability question for social media platforms is no longer what they published but how they were built. New Mexico won $375m from Meta under state consumer protection law. A Los Angeles jury found Meta and YouTube negligent under a product liability theory. Section 230, the statute that has protected platforms from content claims for three decades, had nothing to say about either case.
Alongside that: the ICO published guidance on DUAA changes; and a 2000 NYPD document may have just resolved 25 years of speculation about who Banksy actually is.
Here's what matters this week. 🎯
— Philip
If you read one thing this morning, read the Briefing Room. Everything else is optional.
BRIEFING ROOM
Not a content problem

Two jury verdicts in four days. On Tuesday last week, New Mexico won a civil enforcement action against Meta under the state's Unfair Practices Act: $375m in civil penalties at $5,000 per violation, making New Mexico the first US state to prevail at trial against a major tech company for harming young people. On Wednesday, a Los Angeles jury found Meta 70% liable and YouTube 30% liable in the first bellwether civil case from the federal multi-district litigation, awarding $6m. In both cases, the theory was not that the platforms published harmful content; it was that they built harmful products. Both companies are appealing. (Edition 13).
Design as the defect
The Los Angeles plaintiff used YouTube from age 6 and Instagram from age 9. The jury found that both platforms deliberately designed their products to be addictive, knew the design caused harm and failed to warn users. Section 230, the statute that immunises platforms for what third parties post, says nothing about the design choices the platform itself makes. The LA verdict signals that plaintiffs have found the gap - and intend to use it. Around 2,000 cases remain in the federal pipeline, resting on the same theory.
Meanwhile, in New Mexico, the attorney general used consumer protection law to reach the same design decisions that Section 230 would have blocked had they been framed as content claims. Internal Meta communications about a 2019 decision to make Messenger end-to-end encrypted — a decision internal documents showed would exclude 7.5 million reports of child sexual abuse on the platform from being seen by law enforcement — were used at trial. Phase 2, a public nuisance claim heard by a judge without a jury, begins 4 May. If the court agrees, remedies may not stop at penalties: they may include mandatory product redesign and funded public programmes.
280 million
Policy interventions against social media are growing. Last week Indonesia (population 280 million) became the latest country to impose mandatory age-gating on digital platforms, joining Australia, Denmark and Norway. The regulation covers any platform using features that expose children to documented categories of risk: autoplay, engagement-based algorithms, ephemeral content, AI recommendation systems. That means AI chatbots and gaming apps sit within scope alongside Instagram and TikTok. Existing accounts held by under-16s are being deactivated.
The GC lens
Legal teams will have spotted the bigger-picture message.
🔹Internal docs. The evidence that damaged Meta most in New Mexico was not a post or an algorithm — it was an internal email about a product design decision. Privilege does not protect business communications between executives about engineering choices, even sensitive ones. Legal teams should identify which product design deliberations involve legal risk assessment and ensure counsel is involved from the outset, not brought in after the decision is made. That is the difference between legal advice and disclosure risk.
🔹New theories of harm. Both the LA product liability theory and the Indonesian regulation apply to any product with recommendation algorithms, engagement mechanics, autoplay or push notification design calibrated for return frequency. The two most exposed sectors beyond social media are gaming and AI-powered consumer tools. In-house teams in those sectors should have a current view of which design features carry product liability characterisation risk.
🔹Innovative route to a landmark decision. Neither case attacked Section 230. Both cases made it irrelevant. That is the more significant signal: when a legal framework appears to protect a sector comprehensively, determined lawyers tend to look for the route around it rather than through it. Product liability, consumer protection law and public nuisance are three routes that emerged this week alone. GCs will stress-test the legal frameworks protecting their own organisations.
RISK RADAR

🐰 CMA publishes final petcare report The CMA published its final report on Wednesday, finding that fewer than half of pet owners using a large corporate group knew their practice was part of a chain. The remedies package requires vet businesses to make clear (on signage, at premises and online) whether they are part of a group or an independent business. Common ownership must be visible at the point of care. The report explicitly identifies dental, physiotherapy, optician and care home markets as sharing the same structural characteristics.
Why it matters: The mandatory ownership disclosure remedy establishes a principle the CMA has already signalled it intends to apply beyond vets: that operating consumer-facing professional services businesses under independently branded trading names, without surfacing the group parent, is itself a competition concern. This will raise eyebrows among GCs advising PE-backed consolidators running multiple brands in any of the named adjacent sectors. Any acquisition model that relies on acquired brands feeling independent now carries CMA scrutiny risk, not just in vets but across regulated consumer services.
🇬🇧 ICO DUAA guidance The ICO published updated guidance covering Data Use and Access Act 2025 changes. The centrepiece is the new lawful basis of “recognised legitimate interest”, covering five pre-approved purposes where no balancing test is required, including crime prevention, national security, safeguarding vulnerable individuals, and voluntary data sharing with public bodies on written request. Updated guidance on the existing legitimate interests basis and purpose limitation was also published. Two deadlines to note: a consultation on research, archiving and statistics closes 27 April, and formal data protection complaints processes must be in place by 19 June.
Why it matters: The new basis removes the LIA overhead for situations where in-house teams have so far carried a disproportionate documentation burden. Fraud and financial crime teams sharing data with law enforcement, HR and employment lawyers managing workplace safeguarding situations, and legal teams handling emergency disclosures all have a materially cleaner option available. The complaints process deadline is the more pressing item: eleven weeks away, and informal arrangements are now a compliance gap rather than a pragmatic shortcut.
🇬🇧 ICO/Ofcom Joint Statement on Age Assurance On Wednesday, the ICO and Ofcom published a joint statement on age assurance addressed to platforms subject to the Online Safety Act 2023 and the Children's Code. Both regulators set out a shared expectation that technical age assurance (not self-declaration) is required where children are likely to access a service or where age-restricted content is available. Acceptable methods include device-based signals, facial age estimation, mobile number or email verification, and credit card checks. A date-of-birth field is not listed.
Why it matters: A joint statement from two regulators is enforcement signalling, not guidance. Ofcom has been explicit that it intends to pursue early enforcement actions to establish precedent under the Online Safety Act. Platforms serving or likely to serve under-18s that have been relying on self-declaration have a live compliance gap, and the enforcement calendar is moving. Read alongside this week's LA and New Mexico verdicts and Indonesia's age regulation, age assurance is becoming the primary accountability mechanism for children online across courts, legislatures and regulators simultaneously — the convergence is deliberate.
FROM THE SIDEBAR
Quick signals worth clocking (optional reading)
📈 Finance got quants and it got disrupted. In-house lawyers are now trying the same trick on legal, 38 apps at a time.
Twenty-five years of speculation about Banksy's identity, and the answer may have been sitting in an NYPD archive since 2000.
🇬🇧 The Employment Rights Act's April instalment brings day-one sick pay and day-one paternity leave.
POLL OF THE WEEK
Last week we asked: How will the AI copyright question actually be resolved?
🟧⬜⬜ ⚖️ In the courts
🟧⬜⬜ 🏛️ In Parliament or Congress
⬜⬜⬜ 🤝 Industry licensing deals
🟧⬜⬜ 🌀 It won't be resolved, we'll just adapt
Three votes, a three-way tie, and the sharpest finding is the option that got nothing: nobody in the Profiles in Legal readership believes industry licensing deals will resolve this. The music industry's negotiated model, the publishing sector's opt-out hopes: neither lands as a credible endgame for this audience. The split between courts, parliament and “we'll just adapt” probably reflects genuine complexity rather than indecision. Experienced in-house teams know these aren't mutually exclusive outcomes.
We're asking again. Last week's Briefing Room observed that both the UK government and the White House had deferred to the courts. This week, two juries delivered verdicts on a related but distinct question (platform design liability) via exactly that route. Whether the copyright question follows the same path is worth a second vote.
How will the AI copyright question actually be resolved?
Enjoying the signal?
If you know an in-house lawyer who’s tired of the noise and wants to sound smarter in the boardroom, feel free to forward this edition.
💬 Forward to a colleague
🧠 Was this forwarded to you? Subscribe here to get it every Wednesday.
When you’re ready, here’s how I can help

I’m a General Counsel helping tech and SaaS scale-ups navigate digital regulation. I work with a small number of leadership teams as a Fractional GC or through targeted advisory sprints focused on:
AI & Regulatory Strategy: Translating regimes like the EU AI Act into design-level guardrails.
Strategic Triage: Making high-stakes calls with imperfect information to keep decisions moving.
Investor-Ready Foundations: Hardening your commercial architecture and contracts for the next funding round.
I work with 3-4 leadership teams at a time. If you’re navigating AI deployment, regulatory exposure or investor scrutiny, reply directly to this email.
- Philip
Too much legal content is dull and jargon-filled. Profiles in Legal is for lawyers who want to think clearly, sound credible in the room and get promoted.
This newsletter is for general information only and does not constitute legal advice. Seek professional advice for specific situations.
