SecDevOps.comSecDevOps.com
Multiple ChatGPT Security Bugs Allow Rampant Data Theft

Multiple ChatGPT Security Bugs Allow Rampant Data Theft

Dark Reading(1 months ago)Updated 1 months ago

Attackers can use them to inject arbitrary prompts, exfiltrate personal user information, bypass safety mechanisms, and take other malicious actions.

TechTarget and Informa Tech’s Digital Business Combine.TechTarget and InformaTogether, we power an unparalleled network of 220+ online properties covering 10,000+ granular topics, serving an audience of 50+ million professionals with original, objective content from trusted sources. We help you gain critical insights and make more informed decisions across your business priorities.Attackers can use them to inject arbitrary prompts, exfiltrate personal user information, bypass safety mechanisms, and take other malicious actions.November 6, 2025In yet another "Your chatbot may be leaking" moment, researchers have uncovered multiple weaknesses in OpenAI's ChatGPT that could allow an attacker to exfiltrate private information from a user's chat history and stored memories.The issues — seven of them in total — stem largely from how ChatGPT and its helper model, SearchGPT, behave when browsing or searching the Web in response to user queries, whether looking up information, summarizing pages, or opening URLs. They allow attackers to manipulate the chatbot's behavior in different ways without the user's knowledge.Researchers at Tenable who discovered the flaws described them as leaving millions of ChatGPT users as potentially vulnerable to attacks. "By mixing and matching all of the vulnerabilities and techniques we discovered, we were able to create proofs of concept (PoCs) for multiple complete attack vectors," Tenable researchers Moshe Bernstein and Liv Matan said in a report this week. These included exploits for indirect prompt injection, bypassing safety features, exfiltrating private user information, and creating persistence.Tenable's discovery adds to a growing body of research exposing fundamental security weaknesses in large language models and AI chatbots. Since ChatGPT's public debut in late 2022, researchers have repeatedly demonstrated how prompt injection attacks, data leakage vulnerabilities, and jailbreaking techniques can compromise these systems in ways fundamentally different from traditional software vulnerabilities, and how they are a lot harder to mitigate. The new research is another reminder of the need for caution for enterprises that are integrating LLMs and chatbots into their workflows without much thought about the potential security implications.Related:Sora 2 Makes Videos So Believable, Reality Checks Are RequiredIn a nutshell, the seven vulnerabilities Tenable uncovered stem from how ChatGPT ingests and processes instructions from external sources, including websites it browses, search results, blog comments, and specially crafted URLs. The security vendor showed how attackers could exploit the flaws by hiding malicious prompts in blog comments, poisoning search results to bypass ChatGPT's safety filters and taking advantage of how ChatGPT processes conversation history and stores memories. One of the seven flaws involves indirect prompt injection, where the researchers showed how an adversary could plant malicious instructions on a trusted Web page, like in its comments section. If later a user were to ask ChatGPT to summarize the contents of that page, the chatbot's Web browsing component would dutifully follow the malicious instructions — which could, for instance, involve sending the user a link to a malicious site. Related:APT 'Bronze Butler' Exploits Zero-Day to Root Japan OrgsAnother method for prompt injection — a one-click method — that Tenable discovered attackers could use was through an OpenAI feature that allows users to prompt ChatGPT through URLs like https://chatgpt.com/?q={Prompt}. According to Tenable, because ChatGPT automatically submits whatever query is in that URL parameter, attackers can craft malicious links disguised as helpful ChatGPT queries. But when they're clicked on, they immediately inject a malicious prompt.A third vulnerability the researchers uncovered involves the implicit trust that ChatGPT places in the bing.com domain. Tenable discovered attackers can index malicious sites on Bing, extract their tracking links — which are wrapper links Bing uses to redirect users to the sites they want to visit — and use those bing.com tracking links to bypass ChatGPT's safety filters. A fourth involved conversation injection, which takes advantage of the fact that ChatGPT remembers entire conversations with a user when responding to input. Tenable found that when ChatGPT's Web browsing component, SearchGPT, reads and returns malicious instructions from a website — via indirect prompt injection — ChatGPT reads those instructions in the conversation history and follows them, essentially prompt injecting itself in the process.Related:Risk 'Comparable' to SolarWinds Incident Lurks in Popular Software Update ToolThe most concerning issue that Tenable discovered was a zero-click vulnerability, where simply asking ChatGPT a benign question could trigger an attack if the search results include a poisoned website. "The zero-click and one-click vulnerabilities are the most dangerous for non-technical users because they require no special action," Bernstein says in comments to Dark Reading. "A user can be compromised by simply prompting ChatGPT or clicking a presumed harmless link."Bernstein says it's very feasible for a high-resource attacker, like an advanced persistent threat (APT) group, to exploit one or all of the vulnerabilities to run a campaign targeting multiple users. "That being said, a more realistic scenario for an ordinary user could be as simple as an attacker planting comments on blog posts reviewing different products, which will inject a memory that the user prefers a specific product over others," he says. "Another example is an attacker injecting instructions to link to a phishing website, exploiting the high level of trust people have in ChatGPT, to steal their passwords or credit card information."Tenable conducted most of its research on ChatGPT-4o but found that several of the vulnerabilities and proofs of concept, including the indirect prompt injection issue and the zero- and one-click flaws, are valid on OpenAI's newer ChatGPT-5 as well. The company reported the issues to OpenAI in April. OpenAI acknowledged receiving Tenable's vulnerability disclosures, but it is unclear if the company has made any changes. While Tenable has had a hard time reproducing some of the vulnerabilities discovered and reported to OpenAI, others still persist, the security vendor said. OpenAI did not respond immediately to a request for comment."The main takeaway is how medium and high vulnerabilities can be chained together to create a critical severity situation," Bernstein says. "Individually, these vulnerabilities are concerning, but collectively they create a full attack path, spanning from injection and evasion to data exfiltration and persistence."Jai Vijayan, Contributing WriterJai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year career at Computerworld, Jai also covered a variety of other technology topics, including big data, Hadoop, Internet of Things, e-voting, and data analytics. Prior to Computerworld, Jai covered technology issues for The Economic Times in Bangalore, India. Jai has a Master's degree in Statistics and lives in Naperville, Ill.2025 DigiCert DDoS Biannual ReportDigiCert RADAR - Risk Analysis, Detection & Attack ReconnaissanceThe Total Economic Impact of DigiCert ONEIDC MarketScape: Worldwide Exposure Management 2025 Vendor AssessmentThe Forrester Wave™: Unified Vulnerability Management Solutions, Q3 2025How AI & Autonomous Patching Eliminate Exposure RisksThe Cloud is No Longer Enough: Securing the Modern Digital PerimeterSecuring the Hybrid Workforce: Challenges and SolutionsCybersecurity Outlook 2026Threat Hunting Tools & Techniques for Staying Ahead of Cyber AdversariesYou May Also LikeNov 13, 2025How AI & Autonomous Patching Eliminate Exposure RisksThe Cloud is No Longer Enough: Securing the Modern Digital PerimeterSecuring the Hybrid Workforce: Challenges and SolutionsCybersecurity Outlook 2026Threat Hunting Tools & Techniques for Staying Ahead of Cyber AdversariesPKI Modernization WhitepaperEDR v XDR v MDR- The Cybersecurity ABCs ExplainedHow to Chart a Path to Exposure Management MaturitySecurity Leaders' Guide to Exposure Management StrategyThe NHI Buyers GuideCopyright © 2025 TechTarget, Inc. d/b/a Informa TechTarget. This website is owned and operated by Informa TechTarget, part of a global network that informs, influences and connects the world’s technology buyers and sellers. All copyright resides with them. Informa PLC’s registered office is 5 Howick Place, London SW1P 1WG. Registered in England and Wales. TechTarget, Inc.’s registered office is 275 Grove St. Newton, MA 02466.

Source: This article was originally published on Dark Reading

Read full article on source →

Related Articles