Good morning. Even the quietest week of the year has not slowed down developments in AI. In today’s edition, we cover real shifts in consumer and business behaviour you can take straight into conversations with your teams. From the banal (brainrot and AI-assisted fraud) to the horrifying (explicit deepfakes and UGC-driven harassment), each opens new points of exposure for organisations. And it’s only week one. As always, our aim is to help you stay ahead and sound smarter with the exec. 🎯

— Philip

BRIEFING ROOM

The Business Cost of Synthetic Reality

This image was manipulated in seconds using generative AI. (Profiles in Legal / ChatGPT)

This week we’ve seen the collapse in trust of digital images directly impact the business models of several household names. Here’s how Legal can turn that into influence with product and leadership.

  • Deliveroo, JustEat and their restaurant customers are bearing the cost of AI edits by end users that make food look undercooked to support refund claims. Casual fraud has become accessible to everyday consumers.  

  • The head of Instagram warned that infinite synthetic content risks eroding the app’s audience appeal. The platform is already seeing that raw, unfiltered content is trusted more than polished, aesthetic shots. 

  • At the same time, analysis shows as much as a third of YouTube content is AI slop or brainrot, though these channels still win millions of subscribers. 

  • On a much more distressing level, French and UK authorities opened investigations following reports that X’s Grok AI fulfilled user requests for non-consensual explicit deepfakes of women and even minors. 

Technology has provided systems that allow end users to generate harmful, fraudulent and illegal content quickly and for free. 

As is a recurring theme in regulatory trends we’ve been covering, prevention is better than cure. Businesses will need to mitigate at the product and systems level, rather than relying on after-the-event detection. 

Food delivery apps and online retailers might enhance their refund processes by investing in AI detectors, shared databases of suspected offenders and/or asking for live video of deficient products, which is harder to fake. Social media platforms might lean more on labelling or even blocking AI-generated content.

To navigate and stay solvent in the AI age, consumers and businesses will need to focus more on provenance than moderation (incidentally, a theme echoed by our first Risk Radar item, below).

RISK RADAR
  • 🧑‍💻 “Post first, apologise later” is over. A December CJEU ruling is making waves in the online platform space. A woman’s photos and phone number were used in a fake “sexual services” ad posted by an anonymous user on a Romanian classifieds site (Publi24.ro). The platform removed it within an hour once notified, but it had already been copied onto other sites. The Court ruled that, in publishing the sensitive personal data contained in the user-generated content, the platform may be a data controller, even a joint controller with the original poster. That can push obligations upstream, including pre-publication vetting for sensitive data and investment in technical and organisational measures aimed at preventing loss of control (for example, to limit the data being copied and re-shared). The Court did not allow the platform to rely on the e-Commerce Directive hosting exemption, which is available only where the platform plays a genuinely passive, purely technical role.

    • Why it matters: The platform “hosting defence” just got weaker. If your company is actively setting the terms for how data is published for commercial purposes, GDPR controller duties can cut across reliance on the e-Commerce Directive. The ruling justifies increasing investment in pre-publication moderation tools and verification methods. Anonymous and real-time posting models now carry materially higher risk. This is required reading for any platform hosting user-generated content. 

  • 🇺🇸 AI orders down, enforcement intact. In late December, the US FTC set aside an order against Rytr LLC, which had been in place following its “Operation AI Comply”. Rytr sold an AI writing assistant which, the consumer protection regulator said at the time, allowed subscribers “to generate false and deceptive online reviews”. Following President Trump’s AI Executive Order and AI Action Plan, the FTC has now rowed back, saying “condemning a technology or service simply because it potentially could be used in a problematic manner is inconsistent with the law and ordered liberty”. Yet, on the same day, the FTC warned 10 companies about potential breaches of its new Consumer Review Rule targeting deceptive reviews.

    • Why it matters: While politics may shift US regulators away from AI-specific regulatory overreach, their existing toolkits remain intact. Even absent an overarching AI law à la EU AI Act, product and marketing practices that rely on AI remain squarely subject to long-standing rules on deception, unfair practices and consumer harm.

HIRING BOARD

🇺🇸 TikTok is hiring a new Head of Legal Operations.

🇬🇧 Dunnhumby is hiring a tech and AI Senior Legal Counsel.

💷 Deutsche Bank is looking for a Legal Counsel.

IN THE CALENDAR

🎿 19 January - World Economic Forum kicks off in Davos. Often more signal than substance, but watch for language around AI governance, online safety, and “responsible innovation”.

🇺🇸 28 January - FTC hosts a workshop exploring effective age verification technology, underscoring a broader, cross-jurisdictional shift toward verified user age as a governance and compliance issue (as seen from the UK’s Online Safety Act to Australia’s social media restrictions for children). 

FROM THE SIDEBAR

🧠 Agentic lawyering is coming but preserve your critical thinking.

🇨🇳 China is moving to regulate human-like AI, with bans on harmful manipulation by bots, a duty to intervene if emotional dependence is detected and specific protections for children. It’s being called the “world’s strictest” AI proposal.

🤖 Move over SEO, it’s GEO. Or AEO.

Enjoying Profiles in Legal?

Our readers are curious, commercially sharp and allergic to legalese. If that’s you - welcome.

💬 Forward to a fellow innovator in Legal

🧠 If you were forwarded this, subscribe now

ABOUT THE EDITOR

I work with start-ups and scale-ups as a fractional GC, covering commercial, regulatory and AI governance. Fixed days per month, fixed fee. Typical work includes contract strategy, regulatory triage, and board-level risk decisions.

Too much legal content is dull and jargon-filled. Profiles in Legal is for lawyers who want to think clearly, sound credible in the room and get promoted.

🪃 Reply to this email with what you think we should cover

📣 Request to partner with us

This newsletter is for general information only and does not constitute legal advice. Seek professional advice for specific situations.

Keep Reading