EmailGPT's Prompt Injection Flaw Threatens User Data

Penka Hristovska
Penka Hristovska Senior Editor
Penka Hristovska Penka Hristovska Senior Editor

A major security issue has been identified in EmailGPT, a Chrome extension that leverages OpenAI’s GPT models to assist with email drafting in Gmail.

The vulnerability, which was discovered by researchers at Synopsys Cybersecurity Research Center (CyRC), allows attackers to hijack the AI service by submitting prompts that cause the LLM (large language model) to follow the attacker’s commands.

This means a malicious user could create a prompt that injects unintended functionality, leading to data extraction, spam campaigns using compromised accounts, and generating misleading email content for disinformation. It could also lead to denial-of-service attacks and financial losses.

The affected part of the EmailGPT software is the main branch, and the vulnerability is particularly concerning because anyone with access to the EmailGPT service can exploit it, raising fears of widespread abuse.

“Exploitation of this vulnerability would lead to intellectual property leakage, denial-of-service, and direct financial loss through an attacker making repeated requests to the AI provider’s API which are pay-per-use”, Synopsys Cybersecurity Research Center (CyRC) said.

The vulnerability was assigned a CVSS base score 6.5, indicating medium severity.

CyRC revealed in the blog post where it discussed the vulnerability that despite several attempts to contact the developers, it has yet to receive a response within their 90-day disclosure period. As a result, CyRC advised users to immediately remove EmailGPT applications from their networks to mitigate potential risks.

“This latest research by the Synopsys Cybersecurity Research Center further highlights the importance of strong governance on building, securing and red-teaming the AI models themselves. At its core, AI is code that can be exploited like any other code and the same processes need to be implemented to secure that code to prevent these unwanted prompt exploits,” said Patrick Harr, CEO at SlashNext.

“Security and governance of the AI models are paramount as part of the culture and hygiene of companies building and proving the AI models either through applications or APIs. Customers particularly businesses need to demand proof of how the suppliers of these models are securing themselves including data access BEFORE they incorporate them into their business,” he added.

About the Author
Penka Hristovska
Penka Hristovska
Senior Editor

About the Author

Penka Hristovska is an editor at SafetyDetectives. She was an editor at several review sites that covered all things technology — including VPNs and password managers — and had previously written on various topics, from online security and gaming to computer hardware. She’s highly interested in the latest developments in the cybersecurity space and enjoys learning about new trends in the tech sector. When she’s not in “research mode,” she’s probably re-watching Lord of The Rings or playing DOTA 2 with her friends.

Leave a Comment