TL;DR. On 21 April 2026, Google embedded three new features explicitly labelled 'agentic safety and policy' into Ads Advisor — a documented admission that autonomous AI agents managing advertising accounts expose organisations to real compliance risks. The retrofit signals a pattern every enterprise deploying agentic AI should audit now.
The Case in One Paragraph
On 21 April 2026, Google published an official announcement introducing three new features into Ads Advisor, its AI assistant for Google Ads. The features are explicitly described as 'agentic safety and policy' measures — built specifically to govern autonomous AI agents operating on advertising accounts. According to the official announcement, they are designed to 'protect and streamline' Google Ads accounts. The word 'protect' does not belong in the vocabulary of interface improvements. It belongs in the vocabulary of risk management. One day later, on 22 April, Google announced the eighth generation of its TPU chips, which it explicitly describes as infrastructure for 'the agentic era': large-scale deployment of autonomous AI agents is not a working hypothesis — it is the sector's declared strategic direction.
What Actually Went Wrong
Ads Advisor is designed to let AI agents act autonomously on campaign parameters: bid adjustments, targeting recommendations, account structure modifications. Google's advertising policies constitute one of the densest regulatory corpora in the digital sector — thousands of pages covering prohibited content, sensitive targeting, financial advertising, health, and gambling.
An AI agent designed to maximise performance is not, by construction, calibrated on regulatory compliance — unless that constraint is explicitly encoded in its decision architecture. When an agent operates at machine cadence, the margin between an optimising action and a non-compliant one can close in milliseconds. The 21 April update confirms this by implication: Google judged it necessary to retrofit a dedicated safety layer onto an already-deployed product. The agents were operating before the guardrails existed.
Three Root Causes That Travel Beyond This Case
1. Deployment speed outpaces safety maturity
Organisations — and platforms themselves — deploy AI agents on high-impact operational systems before safety mechanisms are formalised. Google, by retrofitting these features into Ads Advisor, provides the most direct demonstration: even leading vendors proceed through successive adjustments rather than designing the safety architecture before deployment.
2. Policy complexity escapes agents without explicit constraints
Advertising rules are contextual, evolving, and frequently ambiguous. An AI agent optimising on a performance metric — click-through rate, cost per conversion — does not spontaneously integrate compliance. It must be encoded as a hard constraint in the decision system, not a secondary recommendation.
3. Human review has not kept pace with machine execution
Human approval cycles were designed for manual workflows. When agents act at machine frequency — dozens, potentially hundreds of adjustments per hour — traditional review processes become structurally inadequate. The gap between action speed and control speed is precisely where policy violations accumulate.
Three Levers to Avoid the Same Fate in Your Organisation
1. Audit your policies before any agentic deployment
Identify every rule your agents must comply with — platform policies, sector regulations, GDPR constraints — and encode them as hard constraints, not performance parameters. An AI agent must not be able to trigger a non-compliant action, even if that action improves its primary objective.
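To make 'encode as a hard constraint' concrete, here is a minimal Python sketch of a policy gate placed between an agent's proposed action and its execution. The action fields, policy categories and helper names (`AgentAction`, `execute_with_policy_gate`) are hypothetical, chosen for illustration — they are not part of any Google Ads or agent-framework API, and your own rule set would be far richer.

```python
from dataclasses import dataclass

@dataclass
class AgentAction:
    campaign_id: str
    action_type: str          # e.g. "bid_adjustment", "targeting_change"
    targeting_category: str   # e.g. "general", "health", "financial"
    predicted_uplift: float   # the agent's own performance estimate

# Hard constraints: checked before execution, regardless of predicted uplift.
PROHIBITED_TARGETING = {"health", "gambling", "political"}

def is_compliant(action: AgentAction) -> bool:
    """Return False if the action violates an encoded policy rule."""
    return action.targeting_category not in PROHIBITED_TARGETING

def execute_with_policy_gate(action: AgentAction, execute) -> bool:
    """Only execute actions that pass the compliance gate.

    The performance estimate is deliberately ignored here: a hard
    constraint is never traded off against the primary objective.
    """
    if not is_compliant(action):
        # Block and log instead of executing.
        print(f"BLOCKED: {action.action_type} on {action.campaign_id} "
              f"(targeting: {action.targeting_category})")
        return False
    execute(action)
    return True
```

The design point is that the gate sits outside the agent's objective function: a blocked action stays blocked even when the agent's `predicted_uplift` is high.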
2. Define human approval thresholds by action type
Any modification above a defined budget ceiling, any change affecting sensitive targeting, any action on a campaign under compliance review — these must trigger mandatory human review before execution. The criterion is not the perceived importance of the action, but its violation potential.
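A minimal sketch of such routing logic follows, assuming your governance policy defines a budget ceiling and a list of sensitive targeting categories. The threshold value and function name are illustrative placeholders, not a prescribed standard.

```python
from enum import Enum, auto

class Decision(Enum):
    AUTO_EXECUTE = auto()
    REQUIRE_HUMAN_APPROVAL = auto()

# Illustrative values; real ceilings come from your governance policy.
BUDGET_CEILING_EUR = 500.0
SENSITIVE_TARGETING = {"health", "financial", "political"}

def route_action(budget_delta_eur: float = 0.0,
                 targeting_category: str = "general",
                 campaign_under_review: bool = False) -> Decision:
    """Route an agent action on its violation potential, not its perceived importance."""
    if campaign_under_review:
        return Decision.REQUIRE_HUMAN_APPROVAL
    if abs(budget_delta_eur) > BUDGET_CEILING_EUR:
        return Decision.REQUIRE_HUMAN_APPROVAL
    if targeting_category in SENSITIVE_TARGETING:
        return Decision.REQUIRE_HUMAN_APPROVAL
    return Decision.AUTO_EXECUTE
```

Everything routed to REQUIRE_HUMAN_APPROVAL waits in a queue; everything else executes at machine speed, which is what keeps the review workload compatible with agentic cadence.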
3. Deploy real-time monitoring of agent actions
Post-hoc reports are insufficient when agents operate at machine cadence. Real-time alerts on account status changes, platform-detected violations, and abnormal spend deviations represent the minimum viable governance layer for agentic deployment. Do not discover problems in the monthly report.
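As one example of a real-time check, here is a short sketch that flags abnormal spend deviations against recent history, assuming you already stream hourly spend per campaign into your monitoring layer. The function name and the three-sigma threshold are assumptions for illustration, not a recommended calibration.

```python
import statistics

def spend_alert(hourly_spend: list[float], latest_spend: float,
                sigma_threshold: float = 3.0) -> bool:
    """Flag the latest hourly spend if it deviates abnormally from recent history.

    Returns True when the deviation exceeds `sigma_threshold` standard
    deviations from the mean of the supplied history window.
    """
    if len(hourly_spend) < 2:
        return False  # not enough history to judge
    mean = statistics.mean(hourly_spend)
    stdev = statistics.stdev(hourly_spend)
    if stdev == 0:
        return latest_spend != mean
    return abs(latest_spend - mean) > sigma_threshold * stdev
```

Similar checks on account status changes and platform-detected violations complete the minimum viable governance layer described above.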
Has your organisation defined formal policy constraints for its AI agents — or is it letting those agents optimise freely on high-regulatory-stakes systems?
If this analysis speaks to you, I publish a piece of this calibre every day on digital innovation and enterprise AI. 👉 Get the next one straight in your inbox — sign-up takes ten seconds, and each edition is read before 9 a.m. by leaders of European SMEs, mid-caps and public institutions.
This article is part of the Neurolinks AI & Automation blog.