LinkedIn and 3rd parties use essential and non-essential cookies to provide, secure, analyze and improve our Services, and to show you relevant ads (including professional and job ads) on and off LinkedIn. Learn more in our Cookie Policy.
Select Accept to consent or Reject to decline non-essential cookies for this use. You can update your choices at any time in your settings.
Tenable Research has published a disturbing set of findings in their report “HackedGPT: Novel AI Vulnerabilities Open the Door for Private Data Leakage”, revealing seven fundamentally serious vulnerabilities in OpenAI’s ChatGPT platform, affecting both GPT-4o and the newly-released GPT-5 models, that permit attackers to exfiltrate private user data — and do so with astonishing stealth.
According to the Tenable blog, “hundreds of millions of users interact with LLMs on a daily basis and could be vulnerable to these attacks.”
Large language model systems like ChatGPT increasingly serve as a primary information hub for hundreds of millions of users. If an attacker can silently extract chat histories, private memories, or sensitive context from a user’s ChatGPT session, the implications stretch from individual privacy violations all the way to enterprise-data exposure.
How The Platform Is Structured — and Why That Matters
Tenable’s researchers have detailed a technical breakdown of ChatGPT’s architecture, exposing how features intended for convenience or richness become liability vectors.
Here are the key components:
System Prompt: The foundational instructions that define the model’s role, capabilities, limitations and tool-access. Tenable found that system prompts referenced tools like the “bio” tool (for memory) and a “web” tool (for browsing).
Memory (“bio” tool): ChatGPT by default retains a long-term memory of user-specified information or information it deems important, to improve contextual continuity. That means private details from prior chats may be stored and subsequently used.
Web Tool / Browsing Context: ChatGPT integrates a browsing capability: via commands like search and open_url, it can fetch web content or navigate URLs. Notably, the browsing is handled by a secondary LLM dubbed “SearchGPT” in Tenable’s analysis — a design intended to isolate browsing from direct user context/memory, but which Tenable found is insufficient isolation.
It is within these added layers — browsing, memory, context chaining — that the attacker finds opportunities to slip in invisible instructions, manipulate context, and cause data leakage.
The Seven Vulnerabilities: From “Zero-Click” To Memory Poisoning
Tenable categorise seven distinct attack vectors:
Indirect Prompt Injection via Browsing Context Attackers implant malicious instructions in web content (e.g., blog comments, forum posts). When ChatGPT’s browsing tool fetches and summarises that content, the hidden instruction is executed.
Zero-Click Indirect Prompt Injection in Search Context Perhaps the most alarming: a malicious website gets indexed by search engines. A user simply asks ChatGPT a legitimate question which triggers a search. That search returns/exposes the malicious content implicitly, without the user clicking anything. ChatGPT (via SearchGPT) brings the malicious prompt into its context and executes it. Zero further user interaction needed.
One-Click Prompt Injection via URL Parameter A crafted URL (e.g., chat.openai.com/?q=<malicious_prompt>) allows the attacker to inject instructions via the q= parameter. When a user clicks, ChatGPT immediately begins executing the injected prompt.
url_safe Safety Mechanism Bypass ChatGPT uses a “safe URL” mechanism to prevent malicious links from being rendered. Researchers found a way to abuse a whitelisted domain (e.g., bing.com/ck/a/…) to embed malicious redirect links that bypass the filter and lead to data exfiltration.
Conversation Injection After SearchGPT returns output (including potentially malicious instructions), ChatGPT takes that content into its conversational context. Subsequent user prompts are now answered within a manipulated context, enabling instruction chaining. Basically: “I summarised this website for you” → the summary contains hidden instructions → ChatGPT uses them in the next conversation turn.
Malicious Content Hiding By exploiting a markdown rendering flaw, attackers can hide injected instructions from user view, but ChatGPT still stores them in memory/context. For example: injecting content inside fenced code blocks or hidden inline text that visually doesn’t appear to user – yet is processed by the model.
Persistent Memory Injection Attackers can manipulate ChatGPT’s memory tool so that hidden instructions or exfiltration-logic get stored as a memory. That means future ChatGPT sessions with that user may continue leaking data without additional attacker prompts.
Proofs of Concept & Vendor Response
Tenable did more than hypothesise: they demonstrated full attack chains covering GPT-4o and GPT-5. For example, they illustrate scenarios such as: malicious blog comment → user asks innocuous question → ChatGPT invokes SearchGPT and fetches the malicious comment → hidden instruction triggers memory update and data exfiltration. Another example: via URL parameter exploitation (?q=) enabling prompt injection without user awareness.
OpenAI’s Reaction
Tenable states they disclosed the vulnerabilities to OpenAI through coordinated channels. Some vulnerabilities have been addressed via published Technical Research Advisories (TRAs) — e.g., TRA-2025-22, TRA-2025-11, TRA-2025-06. However, Tenable emphasises that despite patches, “prompt injection remains an inherent LLM challenge”, and that several proof-of-concepts still work on GPT-5.
Why This Matters: Broad Implications for AI Security
For Users
Private data you share with ChatGPT—via memory, past chats, preferences—may be accessible to adversaries using stealthy LLM-manipulation.
You may think you’re asking a harmless question (“What’s the best Italian restaurant near me”), but if the model’s context has been manipulated you might unknowingly trigger data exfiltration.
The “zero-click” nature of some vulnerabilities means you don’t need to click a malicious link or open a file: simply interacting with the model in a trusted way can expose you.
For Enterprises
Organisations adopting ChatGPT or analogous LLMs as knowledge bases, assistants or agents must treat LLM risk as enterprise-grade risk. Data leakage from internal chat/memory systems could expose business secrets, personal employee information or client data.
The integrated tool-chain (LLM + memory module + browsing + tool usage) increases attack surface. Prompt injection remains one of the most persistent attack vectors against LLM agents.
Relying solely on “isolation” (e.g., giving browsing to a separate LLM) is not sufficient: Tenable found that SearchGPT’s isolation design did not fully prevent propagation of malicious prompts back into ChatGPT’s memory/context.
For AI Vendors & Platform Designers
Well-known traditional software security analogues (SQL injection, cross-site scripting) now have their counterpart: prompt injection. As one white paper observed: “Applications built on LLMs can’t distinguish valid vs malicious instructions once they are in the prompt/context.”
Defenses must go beyond simple keyword filters or allow-list approaches; the architecture must assume worst-case adversarial context.
Memory, tool-use, browsing, contextual chaining—all these add complexity and create new trust boundaries that adversaries can exploit.
Monitoring for unusual model behaviour (e.g., memory content with odd instructions, unexpected URL generation) becomes essential.
Transparency around what the system prompt contains, how memory is handled, how browsing context is sanitized is critical — yet many systems abstract this away from end-users.
Conclusion
These vulnerabilities strike at the heart of trust in AI systems: when you ask a question to a conversational agent, you expect it to help — not surreptitiously siphon your private data.
That the attack vectors are indirect (via browsing context, memory injection, URL parameter misuse) and in some cases zero-click means that threat models must evolve. It is no longer just about “don’t click the malicious link” — it’s about “what happens inside the model behind the scenes”.
For users, the practical takeaway is caution: treat your ChatGPT sessions as you would a sensitive information channel. For organisations, the message is urgent: audit your LLM workflows, assume adversarial context, and implement monitoring & defence layers. For vendors, the imperative is clear: building safe LLM services goes beyond functionality — it demands architecture designed from the ground up for adversarial robustness.
Unless prompt-injection risk is treated as first-order, it may become the Achilles’ heel of the large language model ecosystem.
About Tenable
Tenable Holdings, Inc., headquartered in Columbia, Maryland, is a cybersecurity company best known for its vulnerability scanning software, Nessus. First released in 1998, Nessus has become one of the most widely used tools for vulnerability assessment across the industry. As of December 31, 2023, Tenable served around 44,000 customers, including 65% of Fortune 500 companies.
Join us live and discover how to detect and defend against the fakes 👇🏻
It’s really Terrifying!!
But now I am sure you will be able to find where my funds are hidden since 2021-2025 and those behind this crime!!
Thank you Cyber Security for your Analysis, it's highly appreciated!!
Wow. This is bad. Perhaps one has to start segregating LLMs on a per use case base? One for searching the internet, one for helping you with personal correspondences, and so forth.
Awareness is great but I'm not sure there's much organisations can do about this. You can either have no access at all to LLMs or accept these risks. Internal guard rails aren't going to stop these sort of fundamental flaws.
Critical vulnerabilities in widely used AI tools like ChatGPT highlight how security must evolve alongside technology. Staying ahead means continuous #upskill and #training to understand these risks and implement safeguards. Embracing #innovation through platforms like #AcademyIT helps professionals build resilience in an AI-driven world.
IT Support Specialist | Cybersecurity Enthusiast | System Administration & Data Management
2dWow This really intense.
--
2dIt’s really Terrifying!! But now I am sure you will be able to find where my funds are hidden since 2021-2025 and those behind this crime!! Thank you Cyber Security for your Analysis, it's highly appreciated!!
Transformative senior technology leader and engineer, enabling you to achieve more with fewer risks.
2dWow. This is bad. Perhaps one has to start segregating LLMs on a per use case base? One for searching the internet, one for helping you with personal correspondences, and so forth.
Security Manager @ BT Group | Governance, Risk and Compliance (GRC) | Cyber Threat Intelligence (CTI) | CISSP, CISM, CRISC, CCSP NIST CSF, CIS, NIS, DORA, ISO27001, MITRE ATT&CK
2dAwareness is great but I'm not sure there's much organisations can do about this. You can either have no access at all to LLMs or accept these risks. Internal guard rails aren't going to stop these sort of fundamental flaws.
Critical vulnerabilities in widely used AI tools like ChatGPT highlight how security must evolve alongside technology. Staying ahead means continuous #upskill and #training to understand these risks and implement safeguards. Embracing #innovation through platforms like #AcademyIT helps professionals build resilience in an AI-driven world.