Security Risks Across the AI Application Stack: A Researcher’s Guide

This series will dissect the AI application stack layer by layer, analyzing real-world security issues in the packages, frameworks, and runtimes that developers rely on today.

Written by Mahesh Babu
Published on September 5, 2025
Topic: Application Security

Introduction

The modern AI application stack is being built on JavaScript and TypeScript. From large language model (LLM) orchestration to vector databases, from preprocessing pipelines to edge runtimes, enterprises are wiring AI directly into production systems at unprecedented speed. This stack is powerful, but it is not secure by default.

Application security teams must understand where the risks live. Attacks rarely target the model in isolation. They target the glue code, the runtimes, the frameworks, and the dependencies that stitch AI applications together. The AI stack is just another application stack—but one where immaturity, rapid adoption, and novel attack surfaces collide.

This series will dissect the AI application stack layer by layer, analyzing real-world security issues in the packages, frameworks, and runtimes that developers rely on today.

Mapping the AI Application Stack

The AI application stack can be broken into five layers, each with distinct security considerations.

1. Agent Frameworks

Frameworks such as LangChain, LangGraph, and CrewAI connect LLMs to external tools, APIs, and workflows. They enable automation but expose risks including prompt injection, tool exploitation, and API overreach.
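To make the tool-exploitation risk concrete, here is a framework-agnostic sketch; the tool shape and the fetchReport helper are illustrative placeholders, not any particular framework's API. It contrasts a tool that hands model-chosen input straight to a shell with one that exposes a narrow, allowlisted capability.

```typescript
// Hypothetical agent tool definitions (framework-agnostic sketch).
// An LLM-driven agent decides which tool to call and with what arguments,
// so every argument must be treated as attacker-influenced input.

type Tool = {
  name: string;
  description: string;
  run: (input: string) => Promise<string>;
};

// Risky: the model can pass arbitrary commands straight to the shell.
const unsafeShellTool: Tool = {
  name: "run_shell",
  description: "Run a shell command and return its output",
  run: async (input) => {
    const { execSync } = await import("node:child_process");
    return execSync(input).toString(); // prompt injection => arbitrary command execution
  },
};

// Safer: expose a narrow capability with an explicit allowlist and no shell.
const ALLOWED_REPORTS = new Set(["daily-usage", "error-summary"]);

// Hypothetical read-only internal API, stubbed for illustration.
async function fetchReport(id: string): Promise<unknown> {
  return { id, rows: [] };
}

const reportTool: Tool = {
  name: "fetch_report",
  description: "Fetch a predefined report by id (daily-usage | error-summary)",
  run: async (input) => {
    const id = input.trim();
    if (!ALLOWED_REPORTS.has(id)) {
      throw new Error(`report id not allowed: ${id}`);
    }
    return JSON.stringify(await fetchReport(id));
  },
};
```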

2. Full-Stack Frameworks

Front-end and server-side frameworks such as Next.js (Vercel), React, Vue, and Angular power most enterprise AI applications. They inherit well-known issues like DOM clobbering, cross-site scripting, and dependency compromise.
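A common failure mode at this layer is rendering model output as HTML. A minimal React sketch, assuming DOMPurify for sanitization, shows the difference:

```tsx
// Minimal sketch: rendering model output in a React component.
// LLM output must be treated as untrusted, exactly like user-generated content.

import DOMPurify from "dompurify";

// Risky: if the model (or a prompt-injected document it summarized) emits
// <img onerror=...> or <script>, this renders it straight into the DOM.
export function UnsafeAnswer({ answer }: { answer: string }) {
  return <div dangerouslySetInnerHTML={{ __html: answer }} />;
}

// Safer: render as plain text where possible, or sanitize before injecting markup.
export function SafeAnswer({ answer }: { answer: string }) {
  const clean = DOMPurify.sanitize(answer, { USE_PROFILES: { html: true } });
  return <div dangerouslySetInnerHTML={{ __html: clean }} />;
}
```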

3. Data Pipelines

Libraries such as TensorFlow.js, Transformers.js, and Hugging Face Datasets/Tokenizers.js handle preprocessing, inference, and embedding management. They introduce risks of poisoned models, dataset manipulation, and WASM runtime vulnerabilities.
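One low-cost mitigation against poisoned or swapped model artifacts is to pin and verify their digests before loading. A minimal Node.js sketch follows; the file path and expected hash are placeholders that would in practice come from a signed manifest or lockfile committed with the application.

```typescript
// Minimal sketch: verify the digest of a model artifact before loading it.

import { createHash } from "node:crypto";
import { readFile } from "node:fs/promises";

const EXPECTED_SHA256 =
  "0000000000000000000000000000000000000000000000000000000000000000"; // placeholder digest

export async function verifyModelArtifact(path: string): Promise<void> {
  const bytes = await readFile(path);
  const digest = createHash("sha256").update(bytes).digest("hex");
  if (digest !== EXPECTED_SHA256) {
    throw new Error(`model artifact digest mismatch for ${path}: got ${digest}`);
  }
}

// await verifyModelArtifact("./models/onnx/model.onnx"); // run before handing the file to the inference runtime
```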

4. Vector Databases

Backends such as Pinecone, Weaviate, and Milvus store embeddings and power semantic search. Security issues include data exfiltration through query abuse, metadata injection, and poisoning attacks.
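Query abuse is often a missing-filter problem. A minimal sketch, assuming a generic client with Pinecone-style metadata filters (the VectorIndex interface here is hypothetical), scopes every retrieval to the authenticated tenant so a query cannot be used to pull another tenant's embeddings:

```typescript
// Minimal sketch: server-side tenant scoping for vector search.
// The client interface is hypothetical; the point is that the tenant filter
// is applied server-side and never taken from the request body or model output.

type QueryRequest = {
  vector: number[];
  topK: number;
  filter?: Record<string, unknown>;
};

interface VectorIndex {
  query(req: QueryRequest): Promise<{ id: string; score: number }[]>;
}

export async function searchForTenant(
  index: VectorIndex,
  tenantId: string, // derived from the authenticated session, not the prompt
  embedding: number[],
) {
  return index.query({
    vector: embedding,
    topK: 10,
    // Scope every query to the caller's tenant so retrieval cannot be used
    // to exfiltrate another tenant's documents.
    filter: { tenant_id: { $eq: tenantId } },
  });
}
```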

5. Runtimes and Deployment Environments

Execution layers such as Node.js, Deno, and Bun support AI backends. Serverless and edge runtimes such as AWS Lambda, Vercel Edge Functions, and Cloudflare Workers introduce additional risks. Common issues include metadata credential theft, prototype pollution, build pipeline compromise, and misconfigured permissions.
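Prototype pollution in particular keeps resurfacing in the glue code that merges untrusted JSON (tool arguments, webhook payloads, agent "memory") into configuration objects. A minimal sketch of the vulnerable pattern and a guarded variant:

```typescript
// Minimal sketch: prototype pollution via a naive deep merge of untrusted JSON.

function unsafeMerge(target: any, source: any): any {
  for (const key of Object.keys(source)) {
    if (typeof source[key] === "object" && source[key] !== null) {
      target[key] = unsafeMerge(target[key] ?? {}, source[key]);
    } else {
      target[key] = source[key];
    }
  }
  return target;
}

// A payload like {"__proto__": {"isAdmin": true}} pollutes Object.prototype:
unsafeMerge({}, JSON.parse('{"__proto__":{"isAdmin":true}}'));
console.log(({} as any).isAdmin); // true

// Guarded variant: skip prototype-bearing keys and merge onto null-prototype objects.
const BLOCKED_KEYS = new Set(["__proto__", "prototype", "constructor"]);

function safeMerge(
  target: Record<string, unknown>,
  source: Record<string, unknown>,
): Record<string, unknown> {
  for (const key of Object.keys(source)) {
    if (BLOCKED_KEYS.has(key)) continue;
    const value = source[key];
    if (typeof value === "object" && value !== null && !Array.isArray(value)) {
      target[key] = safeMerge(
        (target[key] as Record<string, unknown>) ?? Object.create(null),
        value as Record<string, unknown>,
      );
    } else {
      target[key] = value;
    }
  }
  return target;
}
```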

Why Security Teams Need a Stack-Wide View

Security issues in AI applications are not isolated events. They chain across layers. A poisoned dataset ingested through Hugging Face may be embedded into Pinecone, retrieved by LangChain, and then exfiltrated through a Next.js API endpoint running on Vercel Edge. Each layer amplifies the next.

Traditional scanning tools are not designed for this context. They flag theoretical CVEs without proving whether they are exploitable in production. What matters is runtime exploitability: whether the vulnerability is reachable, whether it executes, and whether it can be chained with others.

What This Series Covers

This series will publish eight posts that work through the stack above, layer by layer.

Each post will document real-world security issues, provide MITRE ATT&CK mappings, and cite references for further research.

Conclusion

The AI application stack is becoming enterprise infrastructure. Its attack surface is broad, immature, and expanding. For security researchers and product security teams, the goal is not to treat AI as “special,” but to treat it as potentially vulnerable application code, because that is what it is.

This series will show how vulnerabilities propagate through the stack and provide a framework for defending AI applications in production.

Blog written by Mahesh Babu, Head of Marketing
