Vercel AI SDK, OpenAI SDK, and Anthropic SDK Security Issues in JavaScript and TypeScript

This series shows how vulnerabilities propagate through the stack and provides a framework for defending AI applications in production.

written by
Mahesh Babu, Head of Marketing
published on
September 5, 2025
topic
Application Security

Introduction

SDKs from Vercel, OpenAI, and Anthropic are widely used to embed AI functionality into JavaScript and TypeScript applications. They simplify model calls, but they also expand the attack surface. Application security issues range from credential exposure to unvalidated model outputs influencing downstream execution.

API Key Exposure

All three SDKs require API keys. Developers often hardcode them in client-side code during rapid prototyping. In production, those credentials ship to the browser, where anyone can read them in developer tools or the JavaScript bundle. Attackers can use stolen keys to run model queries on the victim's account, leading to resource abuse or data leakage. In 2023, multiple GitHub repositories were found leaking OpenAI API keys embedded directly in Next.js applications.
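One way to enforce server-side key handling is to fail fast when the key is missing or has been exposed through a client-visible variable. The sketch below is illustrative: `getServerApiKey` is a hypothetical helper, and the `OPENAI_API_KEY` / `NEXT_PUBLIC_` names reflect common Next.js conventions rather than anything mandated by the SDKs.

```typescript
// Hypothetical guard for safe API key handling in a Next.js-style app.
// Assumption: the key lives in a server-only environment variable and is
// never bundled into code delivered to the browser.
function getServerApiKey(env: Record<string, string | undefined>): string {
  const key = env["OPENAI_API_KEY"];
  if (!key) {
    throw new Error("OPENAI_API_KEY is not set; configure it server-side");
  }
  // Next.js exposes only NEXT_PUBLIC_-prefixed variables to the browser,
  // so an AI provider key must never carry that prefix.
  if (env["NEXT_PUBLIC_OPENAI_API_KEY"]) {
    throw new Error("API key exposed via NEXT_PUBLIC_ prefix; remove it");
  }
  return key;
}
```

Keeping this check at application startup means a misconfigured deployment refuses to boot instead of silently leaking credentials to every visitor.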

Unvalidated Model Output

SDKs often feed LLM responses directly into application logic. For example, a Vercel AI SDK application allowed users to query a model and receive executable SQL as output. An attacker crafted a prompt that made the model generate a destructive SQL DROP TABLE statement, which was then executed because the developer never validated the model output.
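A minimal mitigation is to treat model output as untrusted input and allowlist what the application actually needs. The validator below is a sketch under one assumption: the feature only ever requires read-only SELECT queries, so everything else is rejected before it reaches the database driver. The function name and keyword list are illustrative, not part of any SDK.

```typescript
// Hypothetical allowlist guard for model-generated SQL.
// Rejects anything that is not a single read-only SELECT statement.
const FORBIDDEN = /\b(DROP|DELETE|UPDATE|INSERT|ALTER|TRUNCATE|GRANT|EXEC)\b/i;

function validateModelSql(sql: string): string {
  // Normalize: trim whitespace and strip trailing semicolons.
  const trimmed = sql.trim().replace(/;+\s*$/, "");
  if (!/^SELECT\b/i.test(trimmed)) {
    throw new Error("Rejected: only SELECT statements are allowed");
  }
  if (FORBIDDEN.test(trimmed)) {
    throw new Error("Rejected: destructive keyword in model output");
  }
  if (trimmed.includes(";")) {
    throw new Error("Rejected: multiple statements are not allowed");
  }
  return trimmed;
}
```

Keyword filtering alone is bypassable; in practice this belongs alongside a read-only database role and parameterized execution, so a missed pattern still cannot drop a table.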

Insecure Token Scopes

The Anthropic SDK, like its peers, relies on scoped tokens for enterprise deployments. Misconfigured tokens that grant excessive rights can be abused if leaked. A token intended for testing can end up with production-level privileges.
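Least privilege can be checked mechanically before a token is used. The sketch below is hypothetical: the `TokenInfo` shape, scope strings, and environment labels are assumptions for illustration, since real scope vocabularies vary by vendor.

```typescript
// Hypothetical least-privilege check for a scoped API token.
interface TokenInfo {
  scopes: string[];
  environment: "test" | "production";
}

function assertLeastPrivilege(
  token: TokenInfo,
  requiredScopes: string[],
  expectedEnv: "test" | "production"
): void {
  // A test workload must never run with a production token, and vice versa.
  if (token.environment !== expectedEnv) {
    throw new Error(
      `Token environment mismatch: expected ${expectedEnv}, got ${token.environment}`
    );
  }
  // Any scope beyond what the workload needs is a liability if the token leaks.
  const excess = token.scopes.filter((s) => !requiredScopes.includes(s));
  if (excess.length > 0) {
    throw new Error(`Over-scoped token; unnecessary scopes: ${excess.join(", ")}`);
  }
}
```

Running this assertion in CI or at service startup turns "a test token quietly carries production privileges" into a loud, early failure.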

MITRE ATT&CK Mapping

  • Hardcoded API keys → T1552 (Unsecured Credentials). Example: OpenAI API keys exposed in client-side Next.js apps.
  • Model output injection → T1059 (Command and Scripting Interpreter). Example: LLM-generated SQL injected into runtime query execution.
  • Over-scoped tokens → T1528 (Steal Application Access Token). Example: Anthropic SDK token leaked with production privileges.

Conclusion

AI SDKs are not secure by default. Hardcoded credentials, over-scoped tokens, and unvalidated model outputs introduce direct application-layer risks. Security teams must enforce secret management, validate model outputs before execution, and apply least-privilege principles to API tokens.

References

  • GitGuardian. (2023). API key leaks in GitHub repositories. GitGuardian Blog. https://blog.gitguardian.com/api-key-leaks/
  • MITRE ATT&CK®. (2024). ATT&CK Techniques. MITRE. https://attack.mitre.org/
