Vercel AI SDK, OpenAI SDK, and Anthropic SDK Security Issues in JavaScript and TypeScript
Introduction
SDKs from Vercel, OpenAI, and Anthropic are widely used to embed AI functionality in JavaScript and TypeScript applications. They simplify model calls, but they also expand the attack surface: application-layer risks range from credential exposure to unvalidated model outputs influencing downstream execution.
API Key Exposure
All three SDKs authenticate with API keys. Developers often hardcode them in client-side code during rapid prototyping; if that code reaches production, the credentials are visible to anyone who opens the browser's developer tools or inspects the shipped JavaScript bundle. Attackers can use stolen keys to run model queries on the victim's account, leading to resource abuse or data leakage. In 2023, multiple GitHub repositories were found leaking OpenAI API keys embedded directly in Next.js applications.
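A minimal sketch of server-side key handling, assuming a Node/Next.js environment: the key is read only from a server environment variable, and the helper refuses to start if a secret has been placed in a `NEXT_PUBLIC_` variable (which Next.js inlines into the browser bundle). The helper name and the `NEXT_PUBLIC_OPENAI_API_KEY` check are illustrative, not part of any SDK.

```typescript
// Illustrative server-side helper: the key comes from the environment,
// never from a constant embedded in code shipped to the browser.
function getOpenAIKey(): string {
  const key = process.env.OPENAI_API_KEY;
  if (!key) {
    throw new Error("OPENAI_API_KEY is not set; refusing to start");
  }
  // NEXT_PUBLIC_* variables are inlined into the client bundle by Next.js,
  // so a secret stored there is effectively public. Fail fast if found.
  if (process.env.NEXT_PUBLIC_OPENAI_API_KEY) {
    throw new Error(
      "API key found in a NEXT_PUBLIC_ variable; move it to a server-only variable"
    );
  }
  return key;
}
```

In practice this helper would be called only from server routes or API handlers, so the key never appears in client-side code at all.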
Unvalidated Model Output
Applications frequently feed LLM responses directly into application logic. For example, one Vercel AI SDK application let users query a model and receive executable SQL as output. An attacker supplied a crafted prompt that caused the model to emit a destructive DROP TABLE statement, which the application executed because the developer never validated the model output.
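One way to reduce this risk is to treat model-generated SQL as untrusted input and allow only single read-only SELECT statements before execution. The sketch below is a simple allowlist-style validator under that assumption; a production system would also use a read-only database role and a real SQL parser.

```typescript
// Keywords that indicate a write or destructive operation. This is a
// defense-in-depth blocklist on top of the SELECT-only allowlist below.
const FORBIDDEN = /\b(drop|delete|update|insert|alter|truncate|grant|exec)\b/i;

// Returns true only for a single read-only SELECT statement.
function isSafeReadOnlySql(sql: string): boolean {
  const trimmed = sql.trim().replace(/;+\s*$/, ""); // drop trailing semicolons
  if (trimmed.includes(";")) return false;          // reject multi-statement payloads
  if (!/^select\b/i.test(trimmed)) return false;    // allow only SELECT
  if (FORBIDDEN.test(trimmed)) return false;        // block destructive keywords
  return true;
}
```

With this guard in place, a prompt-injected `DROP TABLE users` or a chained `SELECT 1; DROP TABLE users` is rejected before it ever reaches the database.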
Insecure Token Scopes
The Anthropic SDK, like its peers, relies on scoped tokens for enterprise deployments. Misconfigured tokens that grant excessive rights can be abused if leaked. A token intended for testing can end up with production-level privileges.
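A least-privilege check can be enforced in code before a token is used. The sketch below assumes a hypothetical set of scope names (they are illustrative, not actual Anthropic API scopes) and rejects both missing scopes and over-scoped admin tokens used for routine calls.

```typescript
// Illustrative scope names; real deployments would use the provider's
// actual scope taxonomy.
type TokenScope = "inference:read" | "inference:write" | "admin";

// Throws if the token lacks the required scope, or if an admin-scoped
// token is being used for a non-admin operation (over-scoping).
function assertLeastPrivilege(scopes: TokenScope[], required: TokenScope): void {
  if (scopes.includes("admin") && required !== "admin") {
    throw new Error("Over-scoped token: admin scope used for a non-admin call");
  }
  if (!scopes.includes(required)) {
    throw new Error(`Token missing required scope: ${required}`);
  }
}
```

Failing loudly on an over-scoped token surfaces the "test token promoted to production" problem during development rather than after a leak.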
MITRE ATT&CK Mapping
An indicative mapping of the issues above to ATT&CK techniques: hardcoded API keys correspond to T1552.001 (Unsecured Credentials: Credentials In Files); use of stolen keys against a provider's API maps to T1078 (Valid Accounts); running model queries on a victim's account aligns with T1496 (Resource Hijacking); and a prompt-injected DROP TABLE maps to T1485 (Data Destruction).
Conclusion
AI SDKs are not secure by default. Hardcoded credentials, over-scoped tokens, and unvalidated model outputs introduce direct application-layer risks. Security teams must enforce secret management, validate model outputs before execution, and apply least-privilege principles to API tokens.
References
- GitGuardian. (2023). API key leaks in GitHub repositories. GitGuardian Blog. https://blog.gitguardian.com/api-key-leaks/
- MITRE ATT&CK®. (2024). ATT&CK Techniques. MITRE. https://attack.mitre.org/