TensorFlow.js and Transformers.js Security Issues in JavaScript and TypeScript Applications

This series shows how vulnerabilities propagate through the stack and provides a framework for defending AI applications in production.

Introduction

TensorFlow.js and Transformers.js let developers run machine learning models directly in JavaScript and TypeScript environments. They are widely adopted for preprocessing, inference, and AI integration in web and Node.js applications. That ease of use, however, conceals significant application security issues.

Untrusted Model Execution in TensorFlow.js

TensorFlow.js executes models in the browser or in Node.js, and many projects download pre-trained models directly from unverified sources. This is a clear supply-chain vulnerability. In 2021, security researchers demonstrated that a poisoned model could contain malicious layers that triggered JavaScript execution in the client runtime. A developer who imported such a model from an untrusted repository unknowingly handed attackers arbitrary code execution inside end-user browsers.
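A practical mitigation is to pin a cryptographic digest of each vetted model artifact and refuse to load anything that does not match. Below is a minimal Node.js sketch, assuming `@tensorflow/tfjs-node` and a locally downloaded model; the pinned digest and file path are placeholders for illustration, not real values:

```typescript
import * as crypto from "crypto";
import * as fs from "fs";
import * as tf from "@tensorflow/tfjs-node";

// SHA-256 digest recorded when the model was first reviewed.
// Placeholder value for illustration, not a real hash.
const EXPECTED_SHA256 =
  "0000000000000000000000000000000000000000000000000000000000000000";

async function loadVerifiedModel(modelJsonPath: string): Promise<tf.LayersModel> {
  const bytes = fs.readFileSync(modelJsonPath);
  const digest = crypto.createHash("sha256").update(bytes).digest("hex");
  if (digest !== EXPECTED_SHA256) {
    throw new Error(`Refusing to load model: digest mismatch (${digest})`);
  }
  // In a real pipeline, hash the weight shards listed in model.json too;
  // the topology file alone does not cover the binary weights.
  return tf.loadLayersModel(`file://${modelJsonPath}`);
}

loadVerifiedModel("./models/sentiment/model.json")
  .then((model) => model.summary())
  .catch((err) => console.error(err));
```

The point of the check is to decouple "downloaded from somewhere" from "approved to execute": the model only loads if it is byte-for-byte identical to the artifact that was reviewed.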

Model Poisoning in Transformers.js

Transformers.js provides access to Hugging Face models from JavaScript and TypeScript. Attackers can poison embeddings or alter tokenizers to leak sensitive input data. For example, a sentiment analysis model can be modified to include hidden output channels: when queried with sensitive text, it returns encoded tokens representing that data. In production pipelines, this turns otherwise trusted inference calls into a covert data exfiltration channel.
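One way to narrow this exposure is to pin the exact model revision rather than track a mutable branch, so a later push to the repository cannot silently swap in a poisoned artifact. A minimal sketch with Transformers.js follows; the model ID is a real Hugging Face repo, but the `revision` value shown is a placeholder, and the option mirrors revision pinning in the Python transformers library:

```typescript
import { pipeline } from "@xenova/transformers";

async function classify(text: string) {
  // Pin the model ID and a specific revision. "main" tracks the latest
  // push; replace it with an immutable commit hash once the model has
  // been vetted.
  const classifier = await pipeline(
    "sentiment-analysis",
    "Xenova/distilbert-base-uncased-finetuned-sst-2-english",
    { revision: "main" } // placeholder: use a vetted commit hash here
  );
  return classifier(text);
}

classify("The quarterly numbers look strong.")
  .then((result) => console.log(result)) // e.g. [{ label: 'POSITIVE', score: 0.99 }]
  .catch((err) => console.error(err));
```

Revision pinning does not detect a model that was poisoned before review, but it does guarantee that what runs in production is the same artifact the team inspected.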

Dependency Bloat and Native Bindings

Both libraries pull in large dependency graphs that include native bindings. TensorFlow.js relies on WebGL and WASM backends, and attackers can target outdated WASM runtimes with memory-corruption exploits such as buffer overflows. In 2022, multiple WASM sandbox escapes were published, showing how attacker-supplied data could break isolation and execute arbitrary code on the host system.
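Keeping the WASM backend inside your own supply-chain controls helps here: pin the `@tensorflow/tfjs-backend-wasm` version in your lockfile and serve the `.wasm` binaries from your own origin rather than a third-party CDN, so they fall under your CSP, integrity checks, and patching process. A minimal sketch; the static path is an assumption about your build layout:

```typescript
import * as tf from "@tensorflow/tfjs";
// Importing the package registers the WASM backend with TensorFlow.js.
import { setWasmPaths } from "@tensorflow/tfjs-backend-wasm";

// Serve the .wasm binaries from your own origin (path is hypothetical)
// instead of a CDN, so they are versioned and reviewed with your app.
setWasmPaths("/static/tfjs-wasm/");

export async function initBackend(): Promise<void> {
  await tf.setBackend("wasm");
  await tf.ready();
  console.log(`Active TensorFlow.js backend: ${tf.getBackend()}`);
}
```

Self-hosting the binaries also means a WASM runtime update ships through your normal release pipeline instead of changing underneath the application at load time.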

MITRE ATT&CK Mapping

| Threat Vector | MITRE Technique(s) | Example |
| --- | --- | --- |
| Model supply chain compromise | T1195 – Supply Chain Compromise | Poisoned TensorFlow.js model downloaded from a GitHub repo |
| Model poisoning / covert channels | T1041 – Exfiltration Over C2 Channel | Transformers.js model encoding sensitive input for attacker retrieval |
| Exploiting the WASM runtime | T1203 – Exploitation for Client Execution | Malicious payload exploiting an outdated WebAssembly backend in TensorFlow.js |

Conclusion

TensorFlow.js and Transformers.js expand AI capabilities in JavaScript, but they also expand the attack surface. Poisoned models, covert exfiltration channels, and WASM runtime exploits create direct risks for application security teams. Defenses must include verifying model provenance, scanning model files for anomalies, and continuously monitoring runtime behavior during inference.

References

  • Carlini, N., et al. (2021). Extracting Training Data from Large Language Models. arXiv. https://arxiv.org/abs/2012.07805
  • MITRE ATT&CK®. (2024). ATT&CK Techniques. MITRE. https://attack.mitre.org/
  • Hugging Face. (2024). Security best practices for model use. Hugging Face Docs. https://huggingface.co/docs
  • Google. (2023). TensorFlow.js security considerations. TensorFlow Documentation. https://www.tensorflow.org/js