A novel cybersecurity attack technique exploits the fundamental difference between how AI assistants parse webpages and how browsers render them, allowing malicious commands to be hidden from AI analysis while remaining fully visible to human users. Researchers at browser security firm LayerX have developed a proof-of-concept that uses custom font glyph substitution and CSS styling tricks to create a dual representation of text on a webpage. The underlying HTML contains a benign string that AI tools read during analysis. However, through font remapping and visual tricks such as extremely small font sizes or specific color choices, the browser renders a completely different, malicious command to the visitor. This creates a critical blind spot: AI-powered security scanners and assistants see a safe page, while the user is socially engineered into copying and executing a dangerous instruction.
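The dual representation can be illustrated with a small simulation. A font's character-to-glyph table behaves like a lookup map, so the same source bytes can be drawn as entirely different symbols. The mapping and strings below are hypothetical stand-ins, not taken from LayerX's proof-of-concept:

```python
# Illustrative simulation of the dual-representation trick: the HTML source
# carries one string, while a malicious custom font assigns different glyph
# shapes to those codepoints, so the browser *draws* a different command.
# GLYPH_MAP models the font's character-to-glyph table as a plain dict.

SOURCE_TEXT = "abcdefgh"  # benign-looking string in the raw HTML (what an AI parser reads)

GLYPH_MAP = {  # hypothetical malicious glyph assignments
    "a": "r", "b": "m", "c": " ", "d": "-",
    "e": "r", "f": "f", "g": " ", "h": "/",
}

def rendered(text: str, cmap: dict) -> str:
    """What the visitor sees once the browser applies the font's glyph map."""
    return "".join(cmap.get(ch, ch) for ch in text)

print(SOURCE_TEXT)                       # the AI analyzes: "abcdefgh"
print(rendered(SOURCE_TEXT, GLYPH_MAP))  # the user sees:   "rm -rf /"
```

In a real attack the remapping lives inside a WOFF/TTF file served via `@font-face`, so nothing in the HTML text itself betrays the substitution.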
The attack hinges on a sophisticated social engineering vector. An attacker crafts a webpage, often disguised as a legitimate tutorial or support forum, that instructs a user to run a specific command in their terminal. To an AI tool like ChatGPT, Claude, or Microsoft Copilot that is asked to assess the page's safety, the HTML source reveals only the innocuous text. The AI, which typically processes structured text and code, never interprets the final visual output that the browser produces from the custom font mappings and CSS. Consequently, the AI assistant would incorrectly assure the user that the page and its instructions are safe, increasing the likelihood that the user complies with the malicious prompt.
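A minimal sketch shows why text-level analysis misses the trick. Extracting text from the HTML, as an AI assistant effectively does, returns only the bytes in the source; the `@font-face` rule that restyles them never enters the analysis. The page below is a hypothetical example (the `TextExtractor` helper and the placeholder strings are my own, not LayerX's PoC):

```python
# Extract the visible text of a page the way a text-oriented analyzer would:
# walk the markup and collect character data, skipping style/script bodies.
from html.parser import HTMLParser

PAGE = """
<html><head><style>
@font-face { font-family: remap; src: url(data:font/woff2;base64,...); }
.cmd { font-family: remap; }
</style></head>
<body><p>To fix the issue, run: <code class="cmd">abcdefgh</code></p></body>
</html>
"""  # "abcdefgh" is a benign-looking placeholder; the font redraws it

class TextExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.chunks, self.skip = [], False
    def handle_starttag(self, tag, attrs):
        if tag in ("style", "script"):
            self.skip = True
    def handle_endtag(self, tag):
        if tag in ("style", "script"):
            self.skip = False
    def handle_data(self, data):
        if not self.skip:
            self.chunks.append(data)

extractor = TextExtractor()
extractor.feed(PAGE)
text = "".join(extractor.chunks)
# `text` contains only the benign source string; the glyph-swapping CSS
# and font data are invisible to this level of analysis.
print(text)
```

This is exactly the vantage point from which the AI pronounces the page safe: the command it evaluates is the one in the source, not the one the user reads on screen.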
According to LayerX, their testing as of December 2025 confirmed the technique's effectiveness against a wide array of leading AI assistants, including OpenAI's ChatGPT, Anthropic's Claude, Google's Gemini, Microsoft Copilot, xAI's Grok, and others such as Perplexity and Brave's Leo. The core vulnerability stems from the AI's operational paradigm: it analyzes the webpage as a data structure (the Document Object Model or raw HTML), not as a rendered, visual experience. The browser's rendering engine acts as an interpreter for the hidden glyph instructions, creating a disconnect between what the AI "sees" and what the human sees.
This discovery exposes a significant and growing challenge in the AI security landscape. As organizations and individuals increasingly rely on AI agents to summarize content, check for threats, and automate tasks, ensuring these tools can accurately assess the true user experience is paramount. The LayerX report serves as a stark warning that current AI analysis methods are insufficient for detecting attacks that manipulate the presentation layer of web content. Mitigating this threat will require AI developers to incorporate advanced rendering simulation or visual analysis into their security protocols, moving beyond pure text and code parsing to understand the final output a user actually interacts with.
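One mitigation direction short of full rendering simulation is to scan a page's CSS for presentation-layer red flags, such as embedded custom fonts and near-invisible text, before trusting text-level analysis. The sketch below uses illustrative regex heuristics and thresholds of my own choosing; it is not a feature of any vendor's product:

```python
# Heuristic scan for CSS constructs commonly used to make rendered text
# diverge from source text: data-URI fonts, tiny font sizes, zero opacity.
import re

def presentation_red_flags(html: str) -> list:
    """Return human-readable warnings about suspicious presentation CSS."""
    flags = []
    # Custom fonts shipped inline as data: URIs can remap glyphs arbitrarily.
    if re.search(r"@font-face[^}]*src:\s*url\(data:", html, re.IGNORECASE):
        flags.append("custom font embedded as a data: URI")
    # Text below ~4px is effectively invisible to the user.
    for size in re.findall(r"font-size:\s*(\d+(?:\.\d+)?)px", html):
        if float(size) < 4:
            flags.append("near-invisible font-size: %spx" % size)
    # Fully transparent elements hide source text from the viewer entirely.
    if re.search(r"opacity:\s*0(?:\.0+)?\s*[;}]", html):
        flags.append("fully transparent element")
    return flags

sample = ('<style>@font-face{font-family:x;'
          'src:url(data:font/woff2;base64,AA)}'
          '.c{font-size:1px}</style>')
print(presentation_red_flags(sample))
```

Heuristics like these can only raise suspicion, not prove intent; a robust defense would still need to compare the rendered pixels (for example, a headless-browser screenshot) against the extracted text.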