Good morning.  One of the hardest parts of senior legal work is noticing when something long treated as “normal” stops being safe.

Early in my career, I worked alongside marketing and media teams for whom Legal was often a distant support function. Many products were deployed as established market practice rather than examined as legal questions. “Custom audiences” built from hashed customer data sat firmly in that category: technical, routine, and rarely escalated.

What struck me then - and still does now - is how often assumptions harden simply because something has been done that way for a long time. This week’s Risk Radar includes a French enforcement decision that challenges those legacy assumptions. It’s not novel technology, but it is a reminder that regulators are willing to revisit long-standing tech practices through a stricter lens, particularly where consent models were never refreshed.

That theme - how yesterday’s “normal” becomes today’s exposure - runs through this edition, from AI-driven organisation design to the widening expectations around governance and accountability.

This edition focuses on the quiet work Legal does when “normal” starts to age badly. 🎯

— Philip

If you read one thing this morning, read the Risk Radar. Everything else is optional.

BRIEFING ROOM

Davos 2026: AI confesses it’s a headcount story

Alongside geopolitical recalibrations, the other subject dominating big speeches at last week’s World Economic Forum was, predictably, AI.

Reflecting trends we’ve seen in legal departments, the tone has shifted from “productivity tool pilot” to organisation design.

One data point doing the rounds came from Wing Venture Capital’s survey of 181 chief-level technology, data, security and AI leaders: 66% of large enterprises (10k+ employees) expect “meaningful” 10–25% headcount reductions in teams affected by AI over the next three years. Customer service, software engineering, IT, sales and marketing are identified as functions with the most “potential”.

The public-facing narrative stayed upbeat, yet the numbers were blunt. IMF Managing Director Kristalina Georgieva said 40% of jobs globally will be significantly impacted by AI, rising to 60% in advanced economies. She cited the IMF’s own translation team shrinking from 200 to 50. Debate is already turning to whether the loss of entry-level roles will dry up talent promotion pipelines.

Meanwhile, the “how fast does this get weird” debate continues. DeepMind CEO Demis Hassabis put human-level AGI at five to ten years away, slower than some rival timelines being floated in the market.

Your org chart just got a product manager

Executives will likely treat AI rollout as an operating model change.

Fewer layers. Narrower roles. More standardised decisions pushed into systems. More work externalised to vendors or contractors where cheaper than rebuilding capability internally.

Inside organisations, expect a familiar pattern. Phase one is “prove value”; phase two is “remove cost”; phase three is “change what we hire for”. We’ve already seen this in the evolution of legal counsel job specs (Edition 7, Edition 8).

Humans in the lead, lawyers on the hook

The leadership mantra emerging at Davos felt governance-flavoured: “human in the lead, not human in the loop”. It’s a quotable line from Accenture CEO Julie Sweet that will travel in exec rooms where AI governance still feels abstract and reactive.

For GCs in the lead, agentic AI reducing headcount has many operational angles:

🔹 Procurement: shadow AI becomes more consequential if squeezed middle managers approve SaaS upgrades with embedded AI, without a parallel pass over data use, audit rights and liability allocation.

🔹 HR: consultation duties, works councils and discrimination risk are the vanilla employment-law expectations. Increasing reliance on externalised capacity, such as contractors, builds risk around IP and confidentiality leakage and tax classification errors (in the UK, IR35).

🔹 Governance: if leaders are becoming more open about material job impacts, boards will expect a matching story for controls and accountability. This could start with internal audit and risk teams asking for stronger oversight - and clearer ownership - than product and engineering teams are used to producing.

The headcount story is becoming easier to say out loud. The harder work for GCs is being ready for the governance questions that follow.

RISK RADAR
  • 📱 French regulator targets “hashed uploads” for social ads. France’s CNIL announced on Thursday that it had fined an unnamed large retailer (reportedly an international brand) €3.5m for transferring loyalty-programme email addresses and phone numbers to a social network for targeted advertising without valid consent. The practice ran from 2018 to 2024 and involved over 10.5 million individuals; the decision was adopted with 16 other EU DPAs and deliberately made public to signal expectations around widespread ad-tech practices. 

    • Why it matters: This cuts across the long-standing marketing practice of building “custom audiences”. Uploading hashed customer lists to platforms has often been treated as low-risk and quasi-anonymised, at least by the junior marketing execs running campaigns. The decision confirms that regulators view these flows as full personal-data transfers requiring explicit, purpose-specific consent and DPIA-level scrutiny, which may surface exposure wherever legacy ad-tech setups were never revisited. It’s a useful reference for GCs reminding teams that “hashed” never meant “out of scope” and that legacy ad-tech assumptions age badly.
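The technical point is easy to demonstrate: hashing is deterministic, so a platform that holds its own users’ email addresses can match an uploaded “hashed” list straight back to identified accounts. A minimal, illustrative Python sketch (the normalisation and SHA-256 step mirror common ad-platform practice, but the names and data here are invented):

```python
import hashlib

def hash_email(email: str) -> str:
    # Normalise then hash: the same email always yields the same digest.
    normalised = email.strip().lower()
    return hashlib.sha256(normalised.encode("utf-8")).hexdigest()

# Advertiser uploads "hashed" loyalty-programme emails...
uploaded = {hash_email("Jane.Doe@example.com")}

# ...and the platform hashes its own known users the same way.
platform_users = {"jane.doe@example.com": "account-123"}
matches = {account for email, account in platform_users.items()
           if hash_email(email) in uploaded}

print(matches)  # the "anonymised" upload re-identifies account-123
```

Because both sides apply the same function, the hash acts as a shared identifier rather than an anonymiser, which is why the data stays personal data.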

  • 🧒 Age checks move from national rule to coordinated expectation. Age assurance is starting to be seen as a baseline platform capability rather than a niche obligation tied to adult content. On Thursday, the UK’s Ofcom, together with fellow members of the Global Online Safety Regulators Network (a forum launched in 2022 that brings together independent online safety regulators from Europe, Asia, Africa and the Pacific), published shared principles for online age verification, whilst noting that its own rules on “highly effective age assurance” have been in force since July.

    • Why it matters: This points to an emerging cross-regulator consensus that age gating is a foundational platform capability, not a niche obligation tied only to adult content. The risk is less about which vendor or method is used and more about whether a service can show that its approach is accurate, robust and privacy-preserving in practice. To demonstrate how prominent age verification is becoming, the US consumer protection authority, the FTC, is today convening a full-day open workshop on the topic, focused on tools, regulatory contours and interaction with the US Children’s Online Privacy Protection Act.

  • 🇬🇧 Individual accountability takes centre stage in FCA market abuse enforcement. The UK Financial Conduct Authority fined two former Carillion plc finance directors for reckless involvement in misleading market statements, citing failures to escalate known financial issues and weaknesses in procedures, systems and controls. Separately, it fined an external oil-rig consultant £310k for insider dealing after trading on confidential drilling data obtained through his consultancy role. In both cases, the FCA emphasised personal knowledge of inside information and responsibility for how that information was handled, regardless of formal title or employment status.

    • Why it matters: Market integrity accountability follows access to sensitive information. Senior lawyers and GCs could mention these fines in audit and compliance refresh conversations, to underline that insider lists, share-dealing rules and escalation routes need to reflect how information actually moves across senior finance leaders and external advisers.

FROM THE SIDEBAR
Quick signals worth clocking (optional reading)

🇦🇺 Following Australia’s social media ban for under 16s (Edition 3), the UK government is consulting on similar restrictions in response to public pressure. Reports already suggest sales of books and board games are up since the ban.

🇸🇪 Swedish DPA fines SaaS processor used by sports clubs SEK 6m (around €565k) after SQL-injection breach affecting 2m+ users, mainly children.

🧑‍💻 A new library of vibe-coded projects made by lawyers.

POLL OF THE WEEK

Last week, we asked: “How seriously does your leadership team take AI governance today?”

Reassuringly, the results put it firmly in boardroom risk territory.

🟩🟩🟩🟩⬜🛡️ As a core business risk, on a par with data protection or cyber

🟩⬜⬜⬜⬜🕰️ As important, but something we’ll “formalise later”

⬜⬜⬜⬜⬜☑️ As a compliance box to tick when regulators force the issue

⬜⬜⬜⬜⬜🙈 As someone else’s problem (tech, vendors, or the future)

That sets up a practical follow-on: how AI is actually landing inside legal teams. Vote by clicking below.

HIRING BOARD

This week’s senior hiring leans towards legal operators trusted to work close to frontier technology, complex commercial models and organisational scale.

  • 🇮🇪 🇺🇸 🇬🇧 Anthropic, Commercial Counsel EMEA: commercial counsel supporting AI deployment, complex enterprise deals and evolving regulatory expectations, alongside a growing US legal team spanning M&A, product safety, regulatory and platform work. A clear signal of legal scaling around frontier AI.

  • 🇬🇧 Snap, Senior Commercial Counsel, Platform & Tech: senior deal lawyer anchored in strategic partnerships across AI, AR, hardware and adtech, sitting close to product integrations and regulatory edges.

  • 🇬🇧 Multiverse, Legal Director (GC’s right hand): second-in-command legal leader combining M&A, fundraising, commercial, AI/data governance and team scaling, with a mandate to drive efficiency metrics and AI adoption.

  • 🇬🇧 🇮🇪 Stripe, Legal Counsel (EMEA Marketing): specialist commercial and privacy role supporting marketing, adtech and data practices at scale, reflecting rising scrutiny of how growth teams use data.

Unique role of the week? A chance to work at 🇺🇸 The Metropolitan Museum of Art on Fifth Avenue as Associate General Counsel (Employment): a classic employment-centred AGC covering investigations, audits, labour and immigration, a reminder that some institutions still prize deep functional specialism over blended generalism.

Enjoying Profiles in Legal?

If you know an in-house lawyer who gets pulled into “hallway question” conversations, feel free to forward this on.

💬 Forward to a colleague

🧠 If this was forwarded to you, you can subscribe here

ABOUT THE EDITOR

I’m a General Counsel working at the intersection of regulation, product and board-level decision-making in tech and regulated markets.

I work with a small number of companies and leadership teams where senior legal judgment is needed to navigate growth, regulatory pressure or investor scrutiny. Get in touch.

Too much legal content is dull and jargon-filled. Profiles in Legal is for lawyers who want to think clearly, sound credible in the room and get promoted.

🪃 Reply to this email with what you think we should cover

📣 Request to partner with us

This newsletter is for general information only and does not constitute legal advice. Seek professional advice for specific situations.
