Security researchers uncovered a flaw in Google Gemini for Workspace that lets attackers hide malicious instructions in an email’s HTML and CSS. By styling text in white‑on‑white or zero font size, bad actors make their content invisible to human readers but still parse by Gemini’s summarization engine.

The Mechanics of the Prompt‑Injection Attack
When a user selects Gemini’s “Summarize this email” feature, the AI assistant processes both visible text and buried HTML directives. Attackers wrap hidden instructions in custom tags (often labeled) directing Gemini to append a fake Google security warning that urges the user to call a fraudulent number or visit a phishing site. No links or attachments are necessary, since the embedded HTML alone triggers the malicious output.
Scope Across Google Workspace Apps
This vulnerability affects any Workspace app that offers AI summarization, including Gmail, Docs, Slides, and Drive. An attacker who compromises one user’s inbox could automate newsletters or ticketing emails to millions of recipients, turning every summary into a potential phishing beacon or even enabling self‑replicating “AI worms” that spread harmful prompts across an organization.
Recommended Defensive Measures
Experts recommend sanitizing inbound HTML as a scaling routine that ensures all invisible styling and opaque tags are stripped of code prior to processing by the AI. The deployment of LLM firewalls that examine and filter AI will be able to stop instructions that are concealed instructions. Employees should also be trained not to regard the AI summaries as a source of information but rather as an informational resource to get a grip on a situation and cross-check alerts that do sound alarming by reading the entire email.
Urgent Need for Provider‑Side Fixes
Google is requested to improve its HTML parsing through sandboxing or ingestion blacklisting of secretive content by analysts. Better attribution context that is easily distinguished from the AI-generated text regarding the source material would assist the user in differentiating between official messages and prompts made by attackers.

AI assistants are unavoidable in everyday work, so programmed monitoring and sanitization have to be as highly advanced to counter the developing dangers. The Gemini bug shows that the AI characteristics bring novel avenues of social engineering. Tightening content checks, using specific AI security tools, and educating users will help organizations reduce the chances of being in danger of prompt injection attacks and maintain the safety of their Workspace environments.