Critical AI Sandbox Flaws in Amazon Bedrock, LangSmith, and SGLang Enable Data Theft and Remote Code Execution
Cybersecurity researchers have uncovered a novel and critical attack vector targeting AI code execution environments. The method exploits Domain Name System (DNS) queries to exfiltrate sensitive data and, in severe cases, establish remote command-and-control (C2) channels. Vulnerabilities identified in prominent platforms—including Amazon Bedrock's AgentCore Code Interpreter, LangChain's LangSmith, and the SGLang server—demonstrate systemic weaknesses in how AI sandboxes enforce network isolation. These flaws could allow malicious actors to bypass intended security controls, leading to significant data breaches and system compromise.

A detailed report from BeyondTrust highlights a fundamental flaw in Amazon Bedrock's AgentCore Code Interpreter, a fully managed service launched in August 2025 designed to let AI agents execute code in isolated sandboxes. Despite being configured for "no network access," the sandbox mode permits outbound DNS queries. This oversight, carrying a CVSS score of 7.5 (High severity), enables threat actors to establish bidirectional C2 communication. An attacker can abuse this to set up an interactive reverse shell, exfiltrate data via DNS queries—especially if an overprivileged AWS Identity and Access Management (IAM) role is attached—and deliver additional payloads. The system can be coerced into polling a malicious DNS server for commands stored in DNS A records, executing them, and returning results via subdomain queries, completely bypassing the expected network isolation.
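The exfiltration half of this technique works because arbitrary data can be packed into the labels of a DNS name: each label is limited to 63 bytes, so a payload is base32-encoded (DNS names are case-insensitive, ruling out base64) and split across sequenced queries to an attacker-controlled domain. The sketch below illustrates only that encoding scheme; the domain, chunking format, and function names are hypothetical and not taken from the BeyondTrust report, and the code builds query strings without sending anything:

```python
import base64

# Hypothetical attacker-controlled domain. DNS labels max out at 63 bytes
# (and full names at 253), so payloads must be chunked across queries.
EXFIL_DOMAIN = "c2.example.com"
LABEL_MAX = 63

def encode_exfil_queries(payload: bytes) -> list[str]:
    """Pack arbitrary bytes into DNS-safe base32 labels, one query per chunk."""
    b32 = base64.b32encode(payload).decode().rstrip("=").lower()
    chunks = [b32[i:i + LABEL_MAX] for i in range(0, len(b32), LABEL_MAX)]
    # A sequence-number label lets the receiving server reassemble in order.
    return [f"{i}.{chunk}.{EXFIL_DOMAIN}" for i, chunk in enumerate(chunks)]

def decode_exfil_queries(queries: list[str]) -> bytes:
    """Reassemble the payload from queries observed by a malicious resolver."""
    parts = {}
    for q in queries:
        seq, chunk = q.removesuffix("." + EXFIL_DOMAIN).split(".", 1)
        parts[int(seq)] = chunk
    b32 = "".join(parts[i] for i in sorted(parts)).upper()
    b32 += "=" * (-len(b32) % 8)  # restore the stripped base32 padding
    return base64.b32decode(b32)
```

Because the sandbox only needs to *resolve* these names, the data leaves via the platform's own DNS infrastructure, which is exactly why a "no network access" mode that still forwards DNS queries fails to isolate anything.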

The security implications extend beyond a single vendor. Researchers also demonstrated similar DNS exfiltration techniques against LangChain's LangSmith tracing and monitoring service. By manipulating AI-generated code within LangSmith's evaluation framework, attackers could embed malicious DNS queries to leak environment variables, API keys, and other secrets. Furthermore, the SGLang server, a high-performance runtime for LLM deployment, was found vulnerable to a path traversal attack. This flaw allows reading arbitrary files on the server, which could then be exfiltrated using the same DNS method. These cases collectively point to a pattern where AI development and inference tools, in their rush to provide powerful code execution capabilities, have inadequately sandboxed network-level interactions.
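The SGLang issue is a classic path traversal: a file-serving endpoint joins user-supplied input onto a base directory without checking whether the normalized result still lives inside it. A minimal sketch of the vulnerable pattern and a containment check (the function names and base directory are illustrative, not SGLang's actual code, and real code should also use `os.path.realpath` to defeat symlink tricks):

```python
import os

BASE_DIR = "/srv/model_files"  # illustrative base directory

def resolve_unsafe(name: str) -> str:
    # Vulnerable: "../" sequences in 'name' survive the join and
    # normalize to a path outside BASE_DIR.
    return os.path.normpath(os.path.join(BASE_DIR, name))

def resolve_safe(name: str) -> str:
    full = os.path.normpath(os.path.join(BASE_DIR, name))
    # Reject any resolved path that escapes the base directory.
    # (Comparing with commonpath also rejects prefix tricks like
    # "/srv/model_files_evil".)
    if os.path.commonpath([BASE_DIR, full]) != BASE_DIR:
        raise PermissionError(f"path traversal blocked: {name!r}")
    return full
```

Chained with DNS exfiltration, an arbitrary-file read like this lets an attacker pull secrets off the server even when conventional egress is blocked.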

Remediation requires immediate, layered defenses. Amazon has reportedly addressed the Bedrock issue, emphasizing that the service is designed to prohibit network access and that customers must follow IAM best practices. LangChain has released patches for LangSmith, and the SGLang maintainers have fixed the path traversal vulnerability. For organizations using these AI tools, critical steps include: rigorously applying the principle of least privilege to IAM roles attached to AI services, segmenting AI development and production networks, monitoring for anomalous DNS query patterns (especially long, encoded subdomains), and subjecting AI code interpreters to the same security scrutiny as any internet-facing application. As AI agents become more autonomous and capable of code execution, ensuring their operational environments are truly isolated is paramount to preventing novel data exfiltration and remote code execution attacks.
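The DNS-monitoring recommendation above can be approximated with a simple length-and-entropy heuristic over query names, since encoded exfiltration payloads produce labels that are unusually long and close to random. The thresholds below are illustrative starting points for tuning against your own traffic, not vendor guidance:

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits per character of the string's character distribution."""
    if not s:
        return 0.0
    counts = Counter(s)
    total = len(s)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def is_suspicious_query(qname: str, max_label_len: int = 30,
                        entropy_threshold: float = 4.0) -> bool:
    """Flag DNS names whose subdomain labels are unusually long or
    high-entropy -- a common signature of encoded exfiltration payloads."""
    labels = qname.rstrip(".").split(".")
    for label in labels[:-2]:  # skip the registrable domain itself
        if len(label) > max_label_len:
            return True
        if len(label) >= 16 and shannon_entropy(label) > entropy_threshold:
            return True
    return False
```

A heuristic like this belongs in the DNS resolver or egress logging layer, where it can alert on sandbox workloads that should have no reason to emit such queries at all.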
