Good morning. For in-house lawyers and regulators, this week can feel like an air pocket. The break is over, the machine is ramping up, but objectives are not yet finalised. We’re not doing 2026 predictions. Instead, this edition looks at how risks and opportunities are already lining up. Our top story unpacks Gmail’s new AI summary features and what they may change inside organisations. We also track practical movement in ongoing stories: agentic AI, the EU AI Act, and the Grok controversy. This isn’t an AI newsletter, but its impact on risk is becoming unavoidable. 

You don’t need to have a view on all of this yet - but you do need to know where judgement will be tested first. 🎯

— Philip

If you read one thing this morning, read the Briefing Room. Everything else is optional.

BRIEFING ROOM

TL;DR? Gmail adds AI summaries

Canva AI

On Thursday, Google announced roll-out of a new set of AI features in Gmail, branded as AI Overviews and an “AI Inbox”, powered by its Gemini models. Users will be shown summaries of email threads, suggested actions and priority items surfaced at the top of the inbox. Users will also be able to ask their inbox questions in natural language and receive an AI-generated answer drawn from their emails.

Google describes this as turning Gmail into a “personal, proactive inbox assistant”. Unlike earlier tools (Smart Reply), this sits as a new interaction layer over email itself. Google’s approach differs from Microsoft Copilot by pushing AI summaries into the default Gmail flow, not an optional layer users switch on.

Your inbox just grew a middleman

When summaries, to-dos and “what matters” views sit above the underlying messages, behaviour adapts. People read less of the original material, move faster and rely more on what the system chooses to surface. Decisions may increasingly be taken on the basis of AI-mediated versions of emails and attachments.

In-house lawyers will feel the impact as inboxes shift from passive mailboxes to decision layers.

Lost in summarisation

💉 Prompt injection and manipulation. As reported recently by Forbes, Google itself has warned that AI-powered Gmail summaries can be influenced by hidden instructions embedded in emails or attachments, such as white-on-white text invisible to human readers. In these cases, no hacking is required: the AI simply follows what it reads, potentially altering how content is summarised or prioritised.

👩‍⚖️ Privilege and evidence risk. If decisions are taken on the basis of AI summaries rather than original emails or attachments, important nuance may be missed and audit trails may become harder to reconstruct later.

🎭 Counterparty behaviour. As AI summaries become normal, counterparties may experiment with how information is framed or embedded in communications, knowing that a human may never read the source in full.

⚠️ Accountability drift. Lawyers hardly need reminding that reliance on AI-generated summaries doesn’t shift responsibility. Explanations that begin with “the summary said” are unlikely to be persuasive internally, let alone externally.

🧩 Governance gaps. Many organisations have AI policies focused on tools and models, but less clarity on AI-mediated workflows such as inboxes, task lists and prioritisation layers.

None of this means organisations should panic or disable these tools. But it does mean inboxes have become governance surfaces.

RISK RADAR
  • 🇬🇧 ICO publishes Agentic AI report. Though part of its horizon-scanning Tech Futures series rather than formal guidance, it’s notable that the UK’s data protection authority has devoted 68 pages to agentic AI. The report defines agentic systems as AI that combines LLMs with tools such as databases, operating systems, memory, APIs and payment rails, allowing them to pursue goals, plan steps, take actions and adapt with limited human involvement. These systems go well beyond chatbots. The ICO urges caution around hype, but is clear that autonomy does not exempt responsibility for data protection compliance. 

    • Why it matters: Beyond “... but, liability”, the report gives real examples of how agentic AI can create board-level risks for deployers, not just vendors. For example, agents might process data beyond the GDPR purpose or what is necessary; they might create special category data; and they might reduce transparency to identify controllers and processors. This is where “interesting research” becomes a question someone asks Legal. More guidance is promised through 2026, including on automated decision-making.

  • 🤖 Grok outcry intensifies. As we covered last week, X is under fire for its public chatbot being used to created illegal content, including explicit images of children. The case has given Ofcom a high profile opportunity to vocalise its new powers (see Profiles edition 5). Ofcom said it had made urgent contact with the platform in early January, carried out an expedited evidence assessment, and will examine whether X has properly risk-assessed its AI tools, taken appropriate steps to prevent illegal content from being seen by users in the UK, and deployed effective age-assurance measures.

    • Why it matters: This is one of the first high-profile tests of the UK’s Online Safety Act applied to an AI tool embedded in a major social platform, not just user-generated content. For in-house lawyers, it shows that regulators are willing to treat AI-generated illegal content as part of platform safety obligations, not an abstract technical edge case. Ofcom can issue substantive fines and measures disruptive to business, including blocking access to services or sites, for tools that are rushed out without scrutiny. Senior politicians, right up to the Prime Minister, have publicly condemned the content and backed Ofcom’s actions, suggesting reputational risk is now front-page.

  • 🇪🇺  The European Commission launches its AI Act Single Information Platform in beta. The hub is designed to help organisations understand their AI Act duties, and seems to be arriving well ahead of its statutory deadline. In parallel, the EU AI Office launched its process to draft a voluntary Code of Practice to support compliance with Article 50 of the AI Act, which covers transparent disclosure of AI-generated content.  

    • Why it matters: Organisations have an opportunity to stress-test their AI governance approach early, before enforcement expectations harden. Included interactive tools like an “AI Act Compliance Checker” and “AI Act Explorer” make entry-level compliance more accessible and intuitive. A formal channel to submit questions to the AI Office offers a rare chance to test assumptions against an official source.

FROM THE SIDEBAR
Quick signals worth clocking (optional reading)

🧑‍💻 As if today’s edition needed proof, Bloomberg predicts Legal will evolve from “most tech-averse department” to vibe-coding and even build-not-buy through 2026.

👫 Is our profession bifurcating?

📈 The number of legaltech genAI products nears 1000.

POLL OF THE WEEK

The latest polling by the Ada Lovelace Institute finds the UK public is deeply sceptical that government will regulate AI in the public interest. Respondents prioritise fairness and safety and would like to see an independent regulator for AI with real teeth. That scepticism tends to surface first inside organisations.

In our last poll, we asked: Which word best sums up your legal team’s 2025?

⬜⬜⬜⬜⬜ 🤖AI (obviously)

🟩🟩🟩⬜⬜ 📉Do more with less

🟧🟧⬜⬜⬜ 🔥Firefighting

⬜⬜⬜⬜⬜ 🧩Fractional

⬜⬜⬜⬜⬜ ✍️Other

HIRING BOARD

We’ve seen a raft of tech roles this week. The focus of the job specs reflects a positioning shift for in-house legal.

  • 🇳🇱 Booking.com is hiring a Head of Cyber Legal, formalising cyber as its own senior legal vertical connecting privacy, security, fintech and incident response.

  • 🇬🇧 Google’s search for a Regulatory and Litigation Counsel (Content) anticipates increased regulatory defence given Ofcom and the European Commission’s new powers.  

  • 🇮🇪 Meta’s ad for a corporate governance lawyer pairs “emerging regulations” with board decision making. Corporate governance as a frontline response to increasing regulatory complexity. 

  • 🇸🇪 PayPal is hiring in Stockholm, Luxembourg and the UK. The European role emphasises legal as enabling market-entry. The London role appears more classic commercial. 

  • 🇫🇷 Docusign’s AGC position explicitly leads with AI governance, “leveraging AI to accelerate the commercial legal function” and the rapidly evolving digital landscape.

  • 🇬🇧 Microsoft’s spec is also upfront about “using AI solutions to scale work” as a requirement.

Taken together, the roles point to a raised bar for in-house legal: digital regulatory literacy as table stakes; AI as part of the job rather than a side topic; and judgment delivered to product and go-to-market teams under ambiguity. (Exact territory this newsletter is designed to help you navigate.)

IN THE CALENDAR

🧑‍⚖️ Tomorrow: UK Supreme Court handing down its first judgment of the year, a commercial contract dispute (Providence v Hexagon, UKSC/2024/0130). Potentially interesting for drafting risk but unlikely to reset principles of contract law.

🎿 Monday: World Economic Forum kicks off in Davos. Often more signal than substance, but watch for language around AI governance, online safety, and “responsible innovation”.

Enjoying Profiles in Legal?

If this was useful, forwarding it to a colleague is the way this grows.

💬 Forward to a fellow innovator in Legal

🧠 If you were forwarded this, subscribe now

ABOUT THE EDITOR

I work with scale-ups as a fractional GC, covering commercial, regulatory and AI governance. Fixed days per month, fixed fee. Typical work includes contract strategy, regulatory triage, and board-level risk decisions.

Too much legal content is dull and jargon-filled. Profiles in Legal is for lawyers who want to think clearly, sound credible in the room and get promoted.

🪃 Reply to this email with what you think we should cover

📣 Request to partner with us

This newsletter is for general information only and does not constitute legal advice. Seek professional advice for specific situations.

Keep Reading

No posts found