Security Risks Across the AI Application Stack: A Researcher’s Guide


Introduction
The modern AI application stack is being built on JavaScript and TypeScript. From large language model (LLM) orchestration to vector databases, from preprocessing pipelines to edge runtimes, enterprises are wiring AI directly into production systems at unprecedented speed. This stack is powerful, but it is not secure by default.
Application security teams must understand where the risks live. Attacks rarely target the model in isolation. They target the glue code, the runtimes, the frameworks, and the dependencies that stitch AI applications together. The AI stack is just another application stack—but one where immaturity, rapid adoption, and novel attack surfaces collide.
This series will dissect the AI application stack layer by layer, analyzing real-world security issues in the packages, frameworks, and runtimes that developers rely on today.
Mapping the AI Application Stack
The AI application stack can be broken into five layers, each with distinct security considerations.
1. Agent Frameworks
Frameworks such as LangChain, LangGraph, and CrewAI connect LLMs to external tools, APIs, and workflows. They enable automation but expose risks including prompt injection, tool exploitation, and API overreach.
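One common tool-exploitation pattern is an agent loop that executes whatever tool the model names. A minimal sketch of the mitigation, using hypothetical names (`ToolCall`, `runTool`, the tool registry) rather than any real LangChain API, is a per-request allowlist enforced outside the model's control:

```typescript
// Hypothetical agent tool dispatch. An injected prompt can make the model
// request any registered tool, so permission checks must not trust the model.

type ToolCall = { tool: string; args: Record<string, string> };

const TOOLS: Record<string, (args: Record<string, string>) => string> = {
  search: (args) => `results for ${args.query}`,
  // A dangerous capability an injected prompt might try to reach:
  shell: (args) => `executed ${args.cmd}`,
};

// Allowlist fixed server-side per request, independent of model output.
const ALLOWED = new Set(["search"]);

function runTool(call: ToolCall): string {
  if (!ALLOWED.has(call.tool)) {
    throw new Error(`tool ${call.tool} not permitted for this request`);
  }
  return TOOLS[call.tool](call.args);
}
```

The design point is that authorization lives in the dispatcher, not in the prompt: even a fully hijacked model can only reach the tools the request was scoped to.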
2. Full-Stack Frameworks
Front-end and server-side frameworks such as Next.js (Vercel), React, Vue, and Angular power most enterprise AI applications. They inherit well-known issues like DOM clobbering, cross-site scripting, and dependency compromise.
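LLM output is untrusted input, so rendering it into HTML revives classic XSS. The sketch below uses a hand-rolled `escapeHtml` helper for illustration; in production a vetted sanitizer such as DOMPurify is the better choice:

```typescript
// Never interpolate raw model output into markup. Escape (or sanitize)
// first, exactly as you would for user-supplied content.

function escapeHtml(untrusted: string): string {
  return untrusted
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

function renderAnswer(modelOutput: string): string {
  return `<div class="answer">${escapeHtml(modelOutput)}</div>`;
}
```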
3. Data Pipelines
Libraries such as TensorFlow.js, Transformers.js, and Hugging Face Datasets/Tokenizers.js handle preprocessing, inference, and embedding management. They introduce risks of poisoned models, dataset manipulation, and WASM runtime vulnerabilities.
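A basic defense against swapped or poisoned model and dataset artifacts is pinning each artifact to a known digest at build time and failing closed on mismatch. The helpers below (`sha256Hex`, `verifyArtifact`) are an illustrative sketch, not part of any of the libraries named above:

```typescript
import { createHash } from "node:crypto";

// Verify a downloaded model/dataset blob against a pinned digest before
// loading it, so a tampered artifact is rejected rather than executed.

function sha256Hex(bytes: Buffer): string {
  return createHash("sha256").update(bytes).digest("hex");
}

function verifyArtifact(bytes: Buffer, pinnedDigest: string): boolean {
  if (sha256Hex(bytes) !== pinnedDigest) {
    throw new Error("artifact digest mismatch: refusing to load");
  }
  return true;
}
```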
4. Vector Databases
Backends such as Pinecone, Weaviate, and Milvus store embeddings and power semantic search. Security issues include data exfiltration through query abuse, metadata injection, and poisoning attacks.
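Query abuse often comes down to how metadata filters are built. The sketch below loosely mirrors Pinecone-style filter objects, but `buildFilter` and the `tenantId` field are illustrative assumptions; the point is to construct filters as validated, structured objects rather than splicing user input into a filter expression:

```typescript
// Server-side tenant scoping for a vector query. Building the filter as a
// typed object (after validating the id) prevents a crafted tenant string
// from widening the filter into a cross-tenant read.

type MetadataFilter = { tenantId: { $eq: string } };

function buildFilter(tenantId: string): MetadataFilter {
  if (!/^[a-z0-9-]+$/.test(tenantId)) {
    throw new Error("invalid tenant id");
  }
  return { tenantId: { $eq: tenantId } };
}
```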
5. Runtimes and Deployment Environments
Execution layers such as Node.js, Deno, and Bun support AI backends. Serverless and edge runtimes such as AWS Lambda, Vercel Edge Functions, and Cloudflare Workers introduce additional risks. Common issues include metadata credential theft, prototype pollution, build pipeline compromise, and misconfigured permissions.
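Prototype pollution deserves a concrete look, since it has appeared repeatedly in Node.js utility packages. The merge helpers below are illustrative, not taken from any specific library; the unsafe variant shows the bug class, the safe variant the standard fix:

```typescript
// A naive deep merge follows "__proto__" keys in attacker-controlled JSON
// and writes through to Object.prototype, polluting every object.

function unsafeMerge(target: any, source: any): any {
  for (const key of Object.keys(source)) {
    const value = source[key];
    if (value && typeof value === "object") {
      target[key] = unsafeMerge(target[key] ?? {}, value); // follows __proto__
    } else {
      target[key] = value;
    }
  }
  return target;
}

function safeMerge(target: any, source: any): any {
  for (const key of Object.keys(source)) {
    // Skip the keys that reach the prototype chain.
    if (key === "__proto__" || key === "constructor" || key === "prototype") {
      continue;
    }
    const value = source[key];
    if (value && typeof value === "object") {
      target[key] = safeMerge(target[key] ?? {}, value);
    } else {
      target[key] = value;
    }
  }
  return target;
}
```

Using `Object.create(null)` for merge targets or freezing `Object.prototype` are complementary hardening measures at the runtime layer.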
Why Security Teams Need a Stack-Wide View
Security issues in AI applications are not isolated events. They chain across layers. A poisoned dataset ingested through Hugging Face may be embedded into Pinecone, retrieved by LangChain, and then exfiltrated through a Next.js API endpoint running on Vercel Edge. Each layer amplifies the next.
Traditional scanning tools are not designed for this context. They flag theoretical CVEs without proving whether they are exploitable in production. What matters is runtime exploitability: whether the vulnerability is reachable, whether it executes, and whether it can be chained with others.
What This Series Covers
This series will publish posts covering:
- Agent Frameworks: LangChain, LangGraph, CrewAI
- Model / LLM Integration SDKs: Vercel AI SDK, OpenAI SDK, Anthropic SDK
- Full-Stack Frameworks: Next.js, React, Vue, Angular
- Data Pipelines: TensorFlow.js, Transformers.js, Hugging Face Datasets/Tokenizers
- Vector Databases: Pinecone, Weaviate, Milvus
- Runtimes: Node.js, Deno, Bun
- Serverless/Edge: AWS Lambda, Vercel Edge Functions, Cloudflare Workers
Each post will document real-world security issues, provide MITRE ATT&CK mappings, and cite references for further research.
Conclusion
The AI application stack is becoming enterprise infrastructure. Its attack surface is broad, immature, and expanding. For security researchers and product security teams, the goal is not to treat AI as “special,” but to treat it as potentially vulnerable application code, because that is what it is.
This series will show how vulnerabilities propagate through the stack and provide a framework for defending AI applications in production.