
CVE-2025-59425

Overview

vLLM is an inference and serving engine for large language models (LLMs). Before version 0.11.0rc2, vLLM's API key support validated keys using a method vulnerable to a timing attack: the string comparison takes longer the more characters the provided AP...
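The flaw class described above is a classic timing side channel: a naive `==` comparison can return as soon as the first mismatching character is found, so response latency leaks how much of the key an attacker has guessed correctly. The sketch below illustrates the general pattern and the standard mitigation (`hmac.compare_digest`); the function and key names are illustrative, not vLLM's actual code.

```python
import hmac

# Hypothetical server-side secret, for illustration only.
EXPECTED_API_KEY = "example-secret-key"

def check_api_key_vulnerable(provided: str) -> bool:
    # Naive equality check: the interpreter can stop at the first
    # mismatching character, so timing reveals how many leading
    # characters of the guess were correct.
    return provided == EXPECTED_API_KEY

def check_api_key_constant_time(provided: str) -> bool:
    # hmac.compare_digest compares in time independent of where the
    # inputs differ, closing the timing side channel.
    return hmac.compare_digest(provided.encode(), EXPECTED_API_KEY.encode())
```

Both functions return the same boolean result; only the constant-time variant avoids leaking information through latency, which is why fixes for this class of bug typically swap `==` for `hmac.compare_digest` (or an equivalent constant-time primitive).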


