Good morning. This is not a technology newsletter. But occasionally a technological shift is so consequential for risk and accountability that it cannot be ignored.
Agentic AI is positioned to be more lasting than a standard compliance cycle; more impactful than GDPR c. 2016. When systems begin to act, not just passively react, lawyers know that exposure multiplies.
Profiles in Legal has long noted that Legal needs to be embedded earlier in product design. Agentic AI extends that logic further: into the architecture of operational systems themselves.
That doesn’t mean breathless hype or panic. It means an opportunity for governance to advance from auditing outcomes to shaping architecture. 🎯
— Philip
If you read one thing this morning, read the Briefing Room. Everything else is optional.
BRIEFING ROOM
Agentic AI joins the enterprise stack
This was the week that C-suite expectations of Legal using agentic AI went mainstream.
In January, Anthropic released “Cowork” as part of its Claude product, describing it as “the future of AI at work”. The agentic tool allows AI to sustain multi-step workflows to generate work products and to take actions, for example working with actual files and operating browsers. Combined with Claude’s library of “Skills” that can be applied to any given project, Cowork has the potential to radically alter the structure of legal work. Practising lawyers are already outlining how in viral social media posts this week.
There’s more: last week, Anthropic announced connectors for Claude for mainstream business software tools including Google Workspace, Apollo and Docusign - and, notably, also for Harvey.
These releases are building momentum. Despite Anthropic’s ditching of its flagship safety pledge and its designation by the Trump administration as a “supply chain risk”, Claude became the number one free app on Apple’s App Store over the weekend.
There is an ongoing debate over whether Cowork reflects the ability of foundational model providers such as Anthropic to compete with, or enhance, legal-specific AI businesses such as Harvey. That’s a microcosm of the much larger “SaaSpocalypse” debate (Edition 11), where SaaS business share prices have been under pressure from the fear that customers will move away from seat-based applications to agentic systems co-ordinating workflows one layer up.
It’s not just Claude. Last week, Thomson Reuters announced one million professional users of CoCounsel, noting that forthcoming updates will be “designed around conversational task execution”.
The future already started
As CPOs continue to find AI use cases, and AI vendors roll out agentic offerings, GCs will need to address the shift in terms of what this means for the design of the organisation and of the legal team.
🔹 More than faster automation. Paired with probabilistic models, these workflows enable AI agents to reason, make decisions and act in the real world without constant human intervention.
🔹 New disciplines for Legal. These roll-outs underline the urgency of entire fields such as AI governance: understanding when, where and how to implement guardrails. Those guardrails will need to align with board-level expectations of accountability; legal requirements such as the GDPR and the EU AI Act; and principles of ethics and company values.
🔹 AI doesn’t shift responsibility. AI agents are a tool, not a substitute, and accountability sits with people and organisations accordingly. Far from replacing lawyers, this puts a premium on human strategic judgement.
AI agents can now execute the workflow, and many companies will rebuild around this power. But the design of that workflow, and the risk it carries, remains a human decision. Legal must stay in that room. Claude doesn’t need sleep; the board does.
RISK RADAR
🏨 Checking out the competition. The UK’s Competition and Markets Authority launched an investigation into Hilton, IHG, and Marriott regarding the suspected sharing of competitively sensitive information via a data analytics tool provided by CoStar. While CoStar expressed surprise, noting that the platform has been a “long-standing” industry standard for decades, the regulator is examining whether this historical or forward-looking data exchange - even if aggregated or anonymised - effectively reduces competitive uncertainty. This follows a similar US consumer lawsuit involving CoStar and these hotel chains, which the companies successfully defeated last year.
Why it matters: Long-standing, seemingly innocuous tools can generate real competition law exposure if they make competitor behaviour more predictable. “Market standard” isn’t always low risk. For GCs, the structural lesson will travel beyond hospitality. As organisations adopt analytics platforms and agentic systems that aggregate data and coordinate workflows, the legal question shifts from what the tool is designed to do, to what it enables in practice. When technology reduces uncertainty between market actors, regulators are likely to look more closely.
⏮️ Be kind, rewind. Last week, the French data protection authority (CNIL) launched a public consultation on draft recommendations for “session replay” tools - software that records user interactions like clicks, mouse movements, and form inputs to recreate their browsing journey. The regulator is targeting both the developers who design these solutions and the website or app operators who deploy them. This move signals a push for stricter adherence to data minimisation and more granular consent mechanisms via Consent Management Platforms (CMPs).
Why it matters: Session replay software is often treated as a routine product analytics layer. The CNIL is signalling that regulators may look beyond labels and examine what is actually being captured, how long it is retained, and whether users meaningfully understand the extent of monitoring. For Legal, it’s another example of the need for governance in operational design. Where replay tools capture granular behavioural data, particularly in forms or authenticated environments, assumptions around consent, necessity and internal access controls may need revisiting. Back-end visibility into user behaviour is unlikely to be treated as low-risk simply because it is standard practice.
🇫🇷 Access, not archive. A recent French Court of Appeal decision, now gaining traction, holds that simply appearing in an email “To” or “cc” field does not give employees a general right to receive full copies of workplace correspondence under the GDPR. The court clarified that the purpose of the right of access is to enable individuals to verify the lawfulness and accuracy of processing, rather than to serve as a broad litigation discovery mechanism. This ruling aligns with existing ICO and EDPB guidance, which maintains that appearing in a “To” or “cc” field does not automatically transform an entire document into the recipient’s personal data.
Why it matters: Although a French decision, this is generally useful for employers facing broad, tactical and increasingly AI-generated DSARs during grievances or settlement negotiations. The judgment reinforces that the right of access is a compliance tool, not a litigation discovery mechanism. GCs may find this helpful when framing internal responses to high-volume requests. The exercise remains fact-specific: where substantive personal data is embedded in emails or files, disclosure may still be required. But the decision supports a more proportionate approach and reduces the risk that appearing in an inbox automatically converts the entire mailbox into disclosable material.
FROM THE SIDEBAR
Quick signals worth clocking (optional reading)
POLL OF THE WEEK
In our current poll, we’re asking “When do you expect an autonomous AI agent to become part of your legal team’s workflow?”
So far, the dominant view is that agentic AI is coming, but not yet.
⬜⬜⬜ ⏰ By Q3 2026
🟧⬜⬜ 🕟 By end of 2026
🟩🟩🟩 🕤 By end of 2027
🟧⬜⬜ ⏾ Never
Most respondents are placing it in the medium term rather than this year’s operating plan. But tools capable of delegated execution are already shipping. Adoption may arrive incrementally - through pilots, integrations and workflow experiments - before anyone formally decides to “hire” an agent.
Perhaps this week’s Briefing Room influences your thinking.
There’s still time to vote below.
When do you expect an autonomous AI agent to become part of your legal team’s workflow?
Enjoying the signal?
If you know an in-house lawyer who’s tired of the noise and wants to sound smarter in the boardroom, feel free to forward this edition.
💬 Forward to a colleague
🧠 Was this forwarded to you? Subscribe here to get it every Wednesday.
When you’re ready, here’s how I can help

I’m a General Counsel helping tech and SaaS scale-ups navigate digital regulation. I work with a small number of leadership teams as a Fractional GC or through targeted advisory sprints focused on:
AI & Regulatory Strategy: Translating regimes like the EU AI Act into design-level guardrails.
Strategic Triage: Making high-stakes calls with imperfect information to keep decisions moving.
Investor-Ready Foundations: Hardening your commercial architecture and contracts for the next funding round.
I work with 3-4 leadership teams at a time. If you’re navigating AI deployment, regulatory exposure or investor scrutiny, reply directly to this email.
- Philip
Too much legal content is dull and jargon-filled. Profiles in Legal is for lawyers who want to think clearly, sound credible in the room and get promoted.
This newsletter is for general information only and does not constitute legal advice. Seek professional advice for specific situations.