Nixpkgs security tracker

Suggestions search

With package: vllm

Found 6 matching suggestions

CVE-2026-44222
6.5 MEDIUM
  • CVSS version (CVSS): 3.1
  • Attack Vector (AV): Network (N)
  • Attack Complexity (AC): Low (L)
  • Privileges Required (PR): Low (L)
  • User Interaction (UI): None (N)
  • Scope (S): Unchanged (U)
  • Confidentiality (C): None (N)
  • Integrity (I): None (N)
  • Availability (A): High (H)
  • Modified Attack Vector (MAV): Network (N)
  • Modified Attack Complexity (MAC): Low (L)
  • Modified Privileges Required (MPR): Low (L)
  • Modified User Interaction (MUI): None (N)
  • Modified Confidentiality (MC): None (N)
  • Modified Scope (MS): Unchanged (U)
  • Modified Integrity (MI): None (N)
  • Modified Availability (MA): High (H)
updated 6 hours ago by @LeSuisse
Activity log:
  • Created suggestion
  • @LeSuisse accepted
  • @LeSuisse published on GitHub
vLLM: Remote DoS via Special-Token Placeholders

vLLM is an inference and serving engine for large language models (LLMs). From 0.6.1 to before 0.20.0, there is a Token Injection vulnerability in vLLM’s multimodal processing. Unauthenticated, text-only prompts that spell out special tokens are interpreted as control tokens. Image and video placeholder sequences supplied without matching data cause vLLM to index into empty grids during input-position computation, raising an unhandled IndexError and terminating the worker or degrading availability. Multimodal paths that rely on image_grid_thw/video_grid_thw are affected. This vulnerability is fixed in 0.20.0.
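
The crash mechanism can be sketched in a few lines. This is an illustration of the failure mode described above, not vLLM's actual code; the tensor name image_grid_thw comes from the advisory, everything else is assumed:

    import torch

    # No image was attached to the request, so the grid of image patch
    # dimensions is empty.
    image_grid_thw = torch.empty((0, 3), dtype=torch.long)

    # A text-only prompt spelled out an image placeholder token, so the
    # input-position computation assumes at least one grid row exists.
    t, h, w = image_grid_thw[0]  # IndexError: index 0 is out of bounds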

Affected products

vllm
  • ==>= 0.6.1, < 0.20.0

Matching in nixpkgs

pkgs.vllm

High-throughput and memory-efficient inference and serving engine for LLMs

Package maintainers

CVE-2026-44223
6.5 MEDIUM
  • CVSS version (CVSS): 3.1
  • Attack Vector (AV): Network (N)
  • Attack Complexity (AC): Low (L)
  • Privileges Required (PR): Low (L)
  • User Interaction (UI): None (N)
  • Scope (S): Unchanged (U)
  • Confidentiality (C): None (N)
  • Integrity (I): None (N)
  • Availability (A): High (H)
  • Modified Attack Vector (MAV): Network (N)
  • Modified Attack Complexity (MAC): Low (L)
  • Modified Privileges Required (MPR): Low (L)
  • Modified User Interaction (MUI): None (N)
  • Modified Confidentiality (MC): None (N)
  • Modified Scope (MS): Unchanged (U)
  • Modified Integrity (MI): None (N)
  • Modified Availability (MA): High (H)
updated 6 hours ago by @LeSuisse
Activity log:
  • Created suggestion
  • @LeSuisse accepted
  • @LeSuisse published on GitHub
vLLM: extract_hidden_states speculative decoding crashes server on any request with penalty parameters

vLLM is an inference and serving engine for large language models (LLMs). From 0.18.0 to before 0.20.0, the extract_hidden_states speculative decoding proposer in vLLM returns a tensor with an incorrect shape after the first decode step, causing a RuntimeError that crashes the EngineCore process. The crash is triggered when any request in the batch uses sampling penalty parameters (repetition_penalty, frequency_penalty, or presence_penalty). A single request with a penalty parameter (e.g., "repetition_penalty": 1.1) is sufficient to crash the server. This vulnerability is fixed in 0.20.0.
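
A single request like the following sketch is reported to be enough to crash the server. This assumes, for illustration only, a vLLM instance with the extract_hidden_states proposer enabled and its OpenAI-compatible API on localhost:8000; the model name is a placeholder:

    import requests

    # Any one of repetition_penalty, frequency_penalty, or presence_penalty
    # in the batch triggers the shape mismatch after the first decode step.
    resp = requests.post(
        "http://localhost:8000/v1/completions",
        json={
            "model": "my-model",        # placeholder
            "prompt": "Hello",
            "max_tokens": 16,
            "repetition_penalty": 1.1,  # vLLM extension to the OpenAI schema
        },
        timeout=30,
    )
    print(resp.status_code)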

Affected products

vllm
  • ==>= 0.18.0, < 0.20.0

Matching in nixpkgs

pkgs.vllm

High-throughput and memory-efficient inference and serving engine for LLMs

Package maintainers

CVE-2026-27893
8.8 HIGH
  • CVSS version (CVSS): 3.1
  • Attack Vector (AV): Network (N)
  • Attack Complexity (AC): Low (L)
  • Privileges Required (PR): None (N)
  • User Interaction (UI): Required (R)
  • Scope (S): Unchanged (U)
  • Confidentiality (C): High (H)
  • Integrity (I): High (H)
  • Availability (A): High (H)
  • Modified Attack Vector (MAV): Network (N)
  • Modified Attack Complexity (MAC): Low (L)
  • Modified Privileges Required (MPR): None (N)
  • Modified User Interaction (MUI): Required (R)
  • Modified Confidentiality (MC): High (H)
  • Modified Scope (MS): Unchanged (U)
  • Modified Integrity (MI): High (H)
  • Modified Availability (MA): High (H)
updated 1 month, 2 weeks ago by @LeSuisse
Activity log:
  • Created suggestion
  • @LeSuisse accepted
  • @LeSuisse published on GitHub
vLLM's hardcoded trust_remote_code=True in NemotronVL and KimiK25 bypasses user security opt-out

vLLM is an inference and serving engine for large language models (LLMs). Starting in version 0.10.1 and prior to version 0.18.0, two model implementation files hardcode `trust_remote_code=True` when loading sub-components, bypassing the user's explicit `--trust-remote-code=False` security opt-out. This enables remote code execution via malicious model repositories even when the user has explicitly disabled remote code trust. Version 0.18.0 patches the issue.
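
The bug class is easy to picture. The sketch below is illustrative, not vLLM's actual source; the function names are invented, and only the trust_remote_code keyword of Hugging Face's from_pretrained loaders is a real API:

    from transformers import AutoConfig

    def load_subcomponent_vulnerable(repo: str):
        # Hardcoded True: custom code shipped in `repo` executes even when
        # the operator started the server with --trust-remote-code=False.
        return AutoConfig.from_pretrained(repo, trust_remote_code=True)

    def load_subcomponent_fixed(repo: str, trust_remote_code: bool):
        # Thread the operator's setting through to every sub-component load.
        return AutoConfig.from_pretrained(repo, trust_remote_code=trust_remote_code)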

Affected products

vllm
  • ==>= 0.10.1, < 0.18.0

Matching in nixpkgs

pkgs.vllm

High-throughput and memory-efficient inference and serving engine for LLMs

Package maintainers

Upstream advisory: https://github.com/vllm-project/vllm/security/advisories/GHSA-7972-pg2x-xr59
CVE-2026-25960
7.1 HIGH
  • CVSS version (CVSS): 3.1
  • Attack Vector (AV): Network (N)
  • Attack Complexity (AC): Low (L)
  • Privileges Required (PR): Low (L)
  • User Interaction (UI): None (N)
  • Scope (S): Unchanged (U)
  • Confidentiality (C): High (H)
  • Integrity (I): None (N)
  • Availability (A): Low (L)
  • Modified Attack Vector (MAV): Network (N)
  • Modified Attack Complexity (MAC): Low (L)
  • Modified Privileges Required (MPR): Low (L)
  • Modified User Interaction (MUI): None (N)
  • Modified Confidentiality (MC): High (H)
  • Modified Scope (MS): Unchanged (U)
  • Modified Integrity (MI): None (N)
  • Modified Availability (MA): Low (L)
updated 2 months ago by @mweinelt
Activity log:
  • Created suggestion
  • @mweinelt ignored
    4 packages
    • pkgsRocm.vllm
    • python312Packages.vllm
    • python313Packages.vllm
    • pkgsRocm.python3Packages.vllm
  • @mweinelt accepted
  • @mweinelt published on GitHub
SSRF Protection Bypass in vLLM

vLLM is an inference and serving engine for large language models (LLMs). The SSRF protection fix for CVE-2026-24779, added in 0.15.1, can be bypassed in the load_from_url_async method due to inconsistent URL parsing behavior between the validation layer and the actual HTTP client. The SSRF fix uses urllib3.util.parse_url() to validate and extract the hostname from user-provided URLs. However, load_from_url_async uses aiohttp for making the actual HTTP requests, and aiohttp internally uses the yarl library for URL parsing. This vulnerability is fixed in 0.17.0.
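
The root cause is a parser differential, which a small diagnostic sketch can make concrete. This is not the 0.17.0 fix, just a way to observe the disagreement; both imports are the real libraries named above:

    from urllib3.util import parse_url  # used by the validation layer
    from yarl import URL                # used internally by aiohttp

    def hosts_disagree(raw_url: str) -> bool:
        validated_host = parse_url(raw_url).host
        fetched_host = URL(raw_url).host
        # If these differ, the host that passed SSRF validation is not the
        # host aiohttp will actually connect to -- the bypass condition.
        return validated_host != fetched_host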

Affected products

vllm
  • ==>= 0.15.1, < 0.17.0

Matching in nixpkgs

pkgs.vllm

High-throughput and memory-efficient inference and serving engine for LLMs

Ignored packages (4)

Package maintainers

Upstream advisory: https://github.com/vllm-project/vllm/security/advisories/GHSA-v359-jj2v-j536
Upstream advisory: https://github.com/vllm-project/vllm/security/advisories/GHSA-qh4c-xf7m-gxfc
CVE-2026-22778
9.8 CRITICAL
  • CVSS version (CVSS): 3.1
  • Attack Vector (AV): Network (N)
  • Attack Complexity (AC): Low (L)
  • Privileges Required (PR): None (N)
  • User Interaction (UI): None (N)
  • Scope (S): Unchanged (U)
  • Confidentiality (C): High (H)
  • Integrity (I): High (H)
  • Availability (A): High (H)
  • Modified Attack Vector (MAV): Network (N)
  • Modified Attack Complexity (MAC): Low (L)
  • Modified Privileges Required (MPR): None (N)
  • Modified User Interaction (MUI): None (N)
  • Modified Confidentiality (MC): High (H)
  • Modified Scope (MS): Unchanged (U)
  • Modified Integrity (MI): High (H)
  • Modified Availability (MA): High (H)
updated 3 months ago by @jopejoe1
Activity log:
  • Created suggestion
  • @jopejoe1 accepted
  • @jopejoe1 published on GitHub
vLLM leaks a heap address when PIL throws an error

vLLM is an inference and serving engine for large language models (LLMs). From 0.8.3 to before 0.14.1, when an invalid image is sent to vLLM's multimodal endpoint, PIL throws an error. vLLM returns this error to the client, leaking a heap address. With this leak, an attacker can reduce ASLR from 4 billion guesses to ~8 guesses. This vulnerability can be chained with a heap overflow in the JPEG2000 decoder in OpenCV/FFmpeg to achieve remote code execution. This vulnerability is fixed in 0.14.1.
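
The leak works because PIL's error message embeds the repr() of the input object (for example "cannot identify image file <_io.BytesIO object at 0x7f...>"), and that repr() includes a heap address. A defensive sketch in the spirit of the fix (not the actual 0.14.1 patch) is to log the full error server-side and return only a generic message:

    import io
    import logging

    from PIL import Image, UnidentifiedImageError

    def decode_image(data: bytes) -> Image.Image:
        try:
            return Image.open(io.BytesIO(data))
        except (UnidentifiedImageError, OSError):
            logging.exception("image decode failed")     # full detail stays server-side
            raise ValueError("invalid image") from None  # no repr(), no address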

Affected products

vllm
  • ==>= 0.8.3, < 0.14.1

Matching in nixpkgs

pkgs.vllm

High-throughput and memory-efficient inference and serving engine for LLMs

pkgs.pkgsRocm.vllm

High-throughput and memory-efficient inference and serving engine for LLMs

Package maintainers

Upstream fix: https://github.com/vllm-project/vllm/releases/tag/v0.14.1
Upstream advisory: https://github.com/vllm-project/vllm/security/advisories/GHSA-4r2x-xpjr-7cvv

Unstable fix: https://github.com/NixOS/nixpkgs/pull/483505
CVE-2026-22807
8.8 HIGH
  • CVSS version (CVSS): 3.1
  • Attack Vector (AV): Network (N)
  • Attack Complexity (AC): Low (L)
  • Privileges Required (PR): None (N)
  • User Interaction (UI): Required (R)
  • Scope (S): Unchanged (U)
  • Confidentiality (C): High (H)
  • Integrity (I): High (H)
  • Availability (A): High (H)
  • Modified Attack Vector (MAV): Network (N)
  • Modified Attack Complexity (MAC): Low (L)
  • Modified Privileges Required (MPR): None (N)
  • Modified User Interaction (MUI): Required (R)
  • Modified Confidentiality (MC): High (H)
  • Modified Scope (MS): Unchanged (U)
  • Modified Integrity (MI): High (H)
  • Modified Availability (MA): High (H)
updated 3 months, 2 weeks ago by @LeSuisse
Activity log:
  • Created suggestion
  • @LeSuisse ignored
    4 packages
    • pkgsRocm.vllm
    • python312Packages.vllm
    • python313Packages.vllm
    • pkgsRocm.python3Packages.vllm
  • @LeSuisse accepted
  • @LeSuisse published on GitHub
vLLM affected by RCE via auto_map dynamic module loading during model initialization

vLLM is an inference and serving engine for large language models (LLMs). Starting in version 0.10.1 and prior to version 0.14.0, vLLM loads Hugging Face `auto_map` dynamic modules during model resolution without gating on `trust_remote_code`, allowing attacker-controlled Python code in a model repo/path to execute at server startup. An attacker who can influence the model repo/path (local directory or remote Hugging Face repo) can achieve arbitrary code execution on the vLLM host during model load. This happens before any request handling and does not require API access. Version 0.14.0 fixes the issue.
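
The gating the fix implies can be sketched as follows; this is an illustration, not vLLM's actual patch, and the helper name is invented. Hugging Face `auto_map` entries in config.json map auto-classes to Python files inside the model repo, which transformers imports and executes when resolved:

    import json
    import pathlib

    def check_dynamic_modules(model_path: str, trust_remote_code: bool) -> None:
        config = json.loads((pathlib.Path(model_path) / "config.json").read_text())
        if "auto_map" in config and not trust_remote_code:
            raise RuntimeError(
                "model config declares auto_map dynamic modules; "
                "refusing to load without trust_remote_code"
            )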

Affected products

vllm
  • ==>= 0.10.1, < 0.14.0

Matching in nixpkgs

pkgs.vllm

High-throughput and memory-efficient inference and serving engine for LLMs

Ignored packages (4)

pkgs.pkgsRocm.vllm

High-throughput and memory-efficient inference and serving engine for LLMs

Package maintainers

Upstream advisory: https://github.com/vllm-project/vllm/security/advisories/GHSA-2pc9-4j83-qjmr
Upstream fix: https://github.com/vllm-project/vllm/commit/78d13ea9de4b1ce5e4d8a5af9738fea71fb024e5