
A new flaw in Google Gemini for Workspace could allow cybercriminals to dupe victims into unknowingly generating email summaries that appear legitimate but include malicious instructions or warnings that direct users to phishing sites. By utilizing this technique, threat actors circumvent the typical requirement that the end user open attachments or direct links. This type of attack leverages indirect prompt injections hidden inside emails that are obeyed by Gemini when generating the message summary.

What’s Notable and Unique
- The malicious instructions are not rendered in Gmail or visible to the human eye when reading the email, as the font is typically turned to white or sized to 0. However, if the recipient were to ask Gemini to generate a summary of the received email, the AI tool would parse and follow the invisible instructions.
- AI systems constantly struggle against threat actors who find new ways to exploit them for malicious purposes. Organizations managing these systems must implement safeguards to prevent misuse without significantly disrupting everyday user experiences.
Analyst Comments
Arete notes that this “hidden in plain sight” style of attack is likely to rise in the near future with the ongoing integration of AI into day-to-day workflows. Google has released recommended safeguards to aid in the prevention of this style of attack, which, in theory, should prevent it from being successful when properly implemented. However, it appears unlikely that end users will broadly implement these safeguards, especially when using Gemini for personal applications.