TensorFlow.js and Transformers.js Security Issues in JavaScript and TypeScript Applications


Introduction
TensorFlow.js and Transformers.js let developers run machine learning models directly in JavaScript and TypeScript environments. They are widely adopted for preprocessing, inference, and integrating AI into web and Node.js applications. Their ease of use, however, conceals significant application security issues.
Untrusted Model Execution in TensorFlow.js
TensorFlow.js executes models in the browser or in Node.js. Many projects download pre-trained models directly from unverified sources, which creates a clear supply-chain vulnerability. In 2021, security researchers demonstrated that a poisoned model could contain malicious layers that triggered JavaScript execution in the client runtime. A developer who imported such a model from an untrusted repository unknowingly handed attackers arbitrary code execution inside end-user browsers.
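One practical mitigation is to treat model artifacts like any other third-party dependency and pin their digests. The sketch below is a minimal illustration, assuming a hypothetical model URL and a SHA-256 digest recorded when the model was reviewed; it hashes model.json before letting TensorFlow.js parse it.

```typescript
import * as tf from '@tensorflow/tfjs';

// Both values are illustrative: use your own model location and the
// known-good digest captured at review time.
const MODEL_URL = 'https://models.example.com/sentiment/model.json';
const EXPECTED_SHA256 = '<pinned-sha256-of-model.json>';

async function sha256Hex(buf: ArrayBuffer): Promise<string> {
  const digest = await crypto.subtle.digest('SHA-256', buf);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, '0'))
    .join('');
}

async function loadVerifiedModel(): Promise<tf.LayersModel> {
  // Fetch the topology file ourselves so it can be hashed before parsing.
  const res = await fetch(MODEL_URL);
  const body = await res.arrayBuffer();
  if ((await sha256Hex(body)) !== EXPECTED_SHA256) {
    throw new Error('model.json failed integrity check; refusing to load');
  }
  // Simplification: a production loader would also hash each weight shard
  // referenced by model.json and feed the verified bytes to TensorFlow.js
  // through a custom tf.io.IOHandler instead of fetching twice.
  return tf.loadLayersModel(MODEL_URL);
}
```

Digest pinning does not make a model safe, but it does guarantee that the artifact running in production is the same one the team reviewed.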
Model Poisoning in Transformers.js
Transformers.js provides access to Hugging Face models from JavaScript and TypeScript. Attackers can poison embeddings or alter tokenizers to leak sensitive input data. For example, a sentiment analysis model can be modified to include hidden output channels: when queried with sensitive text, it returns encoded tokens that represent the data. In production pipelines, this turns otherwise trusted inference calls into a covert data exfiltration channel.
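A first line of defense is to pin the exact model revision that was reviewed and to validate that inference outputs match the expected schema, so that extra fields or unexpected labels, the kind of hidden channel described above, are caught at runtime. Below is a minimal sketch using the @xenova/transformers package; the revision value is illustrative, and in practice it should be a reviewed commit hash rather than a branch name.

```typescript
import { pipeline } from '@xenova/transformers';

// Pin the model and revision that went through security review.
const classifier = await pipeline(
  'sentiment-analysis',
  'Xenova/distilbert-base-uncased-finetuned-sst-2-english',
  { revision: 'main' } // illustrative: pin a reviewed commit hash instead
);

const raw = await classifier('I love this product');
const outputs = (Array.isArray(raw) ? raw : [raw]) as Array<{
  label: string;
  score: number;
}>;

// A sentiment pipeline should return only { label, score } pairs with
// known labels. Anything else suggests the model differs from the one
// that was reviewed.
const ALLOWED_LABELS = new Set(['POSITIVE', 'NEGATIVE']);
for (const out of outputs) {
  if (!ALLOWED_LABELS.has(out.label) || typeof out.score !== 'number') {
    throw new Error(`unexpected classifier output: ${JSON.stringify(out)}`);
  }
}
```

Schema validation will not catch data encoded inside legitimate-looking scores, so it complements, rather than replaces, provenance checks on the model files themselves.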
Dependency Bloat and Native Bindings
Both libraries pull in large dependency graphs that include native bindings. TensorFlow.js relies on WebGL and WASM backends, and attackers can target outdated WASM runtimes with buffer-overflow exploits. In 2022, multiple WASM sandbox escapes were published, showing how attacker-supplied data could break isolation and execute arbitrary code on the host system.
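Exposure here can be reduced by pinning exact package versions and serving the WASM binaries from your own origin, so backend updates pass through normal review instead of arriving silently from a CDN. A minimal sketch, assuming the audited .wasm files are copied to a /static/tfjs-wasm/ path at build time:

```typescript
import * as tf from '@tensorflow/tfjs';
// Importing the package registers the WASM backend with TensorFlow.js.
import { setWasmPaths } from '@tensorflow/tfjs-backend-wasm';

// Serve the audited WASM binaries from your own origin (path is
// illustrative) instead of loading them from a third-party CDN.
setWasmPaths('/static/tfjs-wasm/');

await tf.setBackend('wasm');
await tf.ready();

// Log the resolved backend and library version so runtime monitoring
// has a baseline to alert on.
console.log(`tfjs ${tf.version.tfjs} on backend: ${tf.getBackend()}`);
```

Self-hosting the binaries also means a compromised CDN cannot swap in a vulnerable or backdoored runtime without the change showing up in your own deployment pipeline.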
MITRE ATT&CK Mapping
The attack paths described above map onto well-known adversary techniques:
- Poisoned or trojanized models pulled from unverified repositories: Supply Chain Compromise, Compromise Software Supply Chain (T1195.002).
- Malicious model layers that trigger code execution in the client, and WASM sandbox escapes: Exploitation for Client Execution (T1203).
- Covert channels that encode sensitive inputs into inference outputs: Exfiltration Over Web Service (T1567).
Conclusion
TensorFlow.js and Transformers.js expand AI capabilities in JavaScript, but they also expand the attack surface. Poisoned models, covert exfiltration channels, and WASM runtime exploits create direct risks for application security teams. Defenses must include verifying model provenance, scanning model files for anomalies, and continuously monitoring runtime behavior during inference.
References
- Carlini, N., et al. (2021). Extracting Training Data from Large Language Models. arXiv. https://arxiv.org/abs/2012.07805
- MITRE ATT&CK®. (2024). ATT&CK Techniques. MITRE. https://attack.mitre.org/
- Hugging Face. (2024). Security best practices for model use. Hugging Face Docs. https://huggingface.co/docs
- Google. (2023). TensorFlow.js security considerations. TensorFlow Documentation. https://www.tensorflow.org/js