Security Risks Across the AI Application Stack: A Researcher’s Guide


Written by Mahesh Babu
Published September 5, 2025
Topic: Application Security

Introduction

The modern AI application stack is being built on JavaScript and TypeScript. From large language model (LLM) orchestration to vector databases, from preprocessing pipelines to edge runtimes, enterprises are wiring AI directly into production systems at unprecedented speed. This stack is powerful, but it is not secure by default.

Application security teams must understand where the risks live. Attacks rarely target the model in isolation. They target the glue code, the runtimes, the frameworks, and the dependencies that stitch AI applications together. The AI stack is just another application stack, but one where immaturity, rapid adoption, and novel attack surfaces collide.

This series will dissect the AI application stack layer by layer, analyzing real-world security issues in the packages, frameworks, and runtimes that developers rely on today.

Mapping the AI Application Stack

The AI application stack can be broken into five layers, each with distinct security considerations.

1. Agent Frameworks

Frameworks such as LangChain, LangGraph, and CrewAI connect LLMs to external tools, APIs, and workflows. They enable automation but expose risks including prompt injection, tool exploitation, and API overreach.
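To make this concrete, here is a minimal, framework-agnostic sketch of agent-to-tool wiring. The `fetchUrl` tool and the JSON tool-call format are hypothetical stand-ins, not LangChain or CrewAI APIs; the point is that once the model chooses tool arguments, injected instructions in retrieved content choose them too.

```ts
// Hypothetical agent-to-tool wiring; a stand-in for LangChain/CrewAI-style tools.
type ToolCall = { tool: string; args: Record<string, string> };

const tools: Record<string, (args: Record<string, string>) => Promise<string>> = {
  // An HTTP tool like this turns prompt injection into SSRF/exfiltration:
  // a poisoned document can instruct the model to call it with a URL that
  // embeds secrets lifted from the conversation context.
  fetchUrl: async ({ url }) => (await fetch(url)).text(),
};

async function runToolCall(modelOutput: string): Promise<string> {
  const call: ToolCall = JSON.parse(modelOutput); // the model picks tool + args

  // Mitigation sketch: allow-list destinations instead of trusting the model.
  if (call.tool === 'fetchUrl') {
    const host = new URL(call.args.url).hostname;
    if (host !== 'api.internal.example.com') {
      throw new Error(`tool call blocked: ${host} is not allow-listed`);
    }
  }

  const tool = tools[call.tool];
  if (!tool) throw new Error(`unknown tool: ${call.tool}`);
  return tool(call.args);
}
```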

2. Full-Stack Frameworks

Front-end and server-side frameworks such as Next.js (Vercel), React, Vue, and Angular power most enterprise AI applications. They inherit well-known issues like DOM clobbering, cross-site scripting, and dependency compromise.
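A recurring failure mode is rendering model output as HTML. A minimal React sketch follows; component names are illustrative, and the sanitization step assumes the widely used `dompurify` package.

```tsx
import React from 'react';
import DOMPurify from 'dompurify';

// Unsafe: LLM output can carry attacker-controlled markup (e.g. via prompt
// injection in retrieved documents), so piping it into the DOM is classic XSS.
export function UnsafeAnswer({ llmOutput }: { llmOutput: string }) {
  return <div dangerouslySetInnerHTML={{ __html: llmOutput }} />;
}

// Safer: sanitize before rendering, or better still, render it as plain text,
// which React escapes by default.
export function SafeAnswer({ llmOutput }: { llmOutput: string }) {
  return <div dangerouslySetInnerHTML={{ __html: DOMPurify.sanitize(llmOutput) }} />;
}
```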

3. Data Pipelines

Libraries such as TensorFlow.js, Transformers.js, and Hugging Face Datasets/Tokenizers.js handle preprocessing, inference, and embedding management. They introduce risks of poisoned models, dataset manipulation, and WASM runtime vulnerabilities.
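One practical control is to treat model identifiers like any other untrusted input. The sketch below assumes the `@xenova/transformers` (Transformers.js) package; the allow-list and revision pinning are illustrative hardening, not library requirements.

```ts
import { pipeline } from '@xenova/transformers';

// Risky: a model ID taken from user input or mutable config lets an attacker
// point the pipeline at a poisoned repo (malicious weights, tokenizer, config).
const ALLOWED_MODELS = new Set(['Xenova/all-MiniLM-L6-v2']);

export async function loadEmbedder(modelId: string) {
  if (!ALLOWED_MODELS.has(modelId)) {
    throw new Error(`model ${modelId} is not on the allow-list`);
  }
  // Pinning a revision keeps a compromised upstream repo from silently
  // swapping weights under the same name; use an immutable commit in practice.
  return pipeline('feature-extraction', modelId, { revision: 'main' });
}
```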

4. Vector Databases

Backends such as Pinecone, Weaviate, and Milvus store embeddings and power semantic search. Security issues include data exfiltration through query abuse, metadata injection, and poisoning attacks.
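Metadata filters are a common injection point: if the filter object is built from raw request input, a caller can widen it beyond their own tenant. A minimal sketch assuming the official Pinecone Node client, with illustrative index and field names:

```ts
import { Pinecone } from '@pinecone-database/pinecone';

const pc = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });
const index = pc.index('docs'); // illustrative index name

// Risky: forwarding a client-supplied filter lets a caller send
// { tenantId: { "$ne": "none" } } and read every tenant's embeddings.
export async function search(vector: number[], sessionTenantId: string) {
  return index.query({
    vector,
    topK: 10,
    includeMetadata: true,
    // Safer: build the filter server-side from the authenticated session,
    // never from the request body.
    filter: { tenantId: { $eq: sessionTenantId } },
  });
}
```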

5. Runtimes and Deployment Environments

Execution layers such as Node.js, Deno, and Bun support AI backends. Serverless and edge runtimes such as AWS Lambda, Vercel Edge Functions, and Cloudflare Workers introduce additional risks. Common issues include metadata credential theft, prototype pollution, build pipeline compromise, and misconfigured permissions.
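Prototype pollution is worth a concrete illustration because it is specific to these JavaScript runtimes. A minimal sketch of the classic vulnerable deep-merge pattern, and one way to close it:

```ts
// Classic prototype-pollution sink: a naive recursive merge of untrusted JSON.
function merge(target: any, source: any): any {
  for (const key of Object.keys(source)) {
    if (source[key] && typeof source[key] === 'object') {
      target[key] = merge(target[key] ?? {}, source[key]);
    } else {
      target[key] = source[key];
    }
  }
  return target;
}

// JSON.parse creates "__proto__" as an own key, so the merge walks up into
// Object.prototype and plants isAdmin on every object in the process:
merge({}, JSON.parse('{"__proto__": {"isAdmin": true}}'));
console.log(({} as any).isAdmin); // true, pollution succeeded

// Mitigation sketch: reject dangerous keys (or merge into null-prototype objects).
function safeMerge(target: any, source: any): any {
  for (const key of Object.keys(source)) {
    if (key === '__proto__' || key === 'constructor' || key === 'prototype') continue;
    if (source[key] && typeof source[key] === 'object') {
      target[key] = safeMerge(target[key] ?? {}, source[key]);
    } else {
      target[key] = source[key];
    }
  }
  return target;
}
```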

Why Security Teams Need a Stack-Wide View

Security issues in AI applications are not isolated events. They chain across layers. A poisoned dataset ingested through Hugging Face may be embedded into Pinecone, retrieved by LangChain, and then exfiltrated through a Next.js API endpoint running on Vercel Edge. Each layer amplifies the next.

Traditional scanning tools are not designed for this context. They flag theoretical CVEs without proving whether they are exploitable in production. What matters is runtime exploitability: whether the vulnerability is reachable, whether it executes, and whether it can be chained with others.

What This Series Covers

This series will cover:

  • Agent Frameworks: LangChain, LangGraph, CrewAI
  • Model / LLM Integration SDKs: Vercel AI SDK, OpenAI SDK, Anthropic SDK
  • Full-Stack Frameworks: Next.js, React, Vue, Angular
  • Data Pipelines: TensorFlow.js, Transformers.js, Hugging Face Datasets/Tokenizers
  • Vector Databases: Pinecone, Weaviate, Milvus
  • Runtimes: Node.js, Deno, Bun
  • Serverless/Edge: AWS Lambda, Vercel Edge Functions, Cloudflare Workers

Each post will document real-world security issues, provide MITRE ATT&CK mappings, and cite references for further research.

Conclusion

The AI application stack is becoming enterprise infrastructure. Its attack surface is broad, immature, and expanding. For security researchers and product security teams, the goal is not to treat AI as “special,” but to treat it as potentially vulnerable application code, because that is what it is.

This series will show how vulnerabilities propagate through the stack and provide a framework for defending AI applications in production.
