5.6 MEDIUM
- CVSS Version: 3.1
- Attack Vector (AV): Network (N)
- Attack Complexity (AC): High (H)
- Privileges Required (PR): None (N)
- User Interaction (UI): None (N)
- Scope (S): Unchanged (U)
- Confidentiality (C): Low (L)
- Integrity (I): Low (L)
- Availability (A): Low (L)
- Exploit Code Maturity (E): Proof-of-Concept (P)
- Remediation Level (RL): Official Fix (O)
- Report Confidence (RC): Confirmed (C)
- Modified Attack Vector (MAV): Network (N)
- Modified Attack Complexity (MAC): High (H)
- Modified Privileges Required (MPR): None (N)
- Modified User Interaction (MUI): None (N)
- Modified Confidentiality (MC): Low (L)
- Modified Scope (MS): Unchanged (U)
- Modified Integrity (MI): Low (L)
- Modified Availability (MA): Low (L)
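The 5.6 (MEDIUM) score above can be reproduced from the vector with the standard CVSS v3.1 equations. A minimal sketch, using the metric weights from the CVSS v3.1 specification (not an official implementation; the temporal score is derived here, not stated in the entry):

```python
def roundup(x: float) -> float:
    """CVSS v3.1 Roundup: smallest one-decimal value >= x."""
    i = round(x * 100000)
    return i / 100000 if i % 10000 == 0 else (i // 10000 + 1) / 10

# Weights for this entry's vector: AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:L
av, ac, pr, ui = 0.85, 0.44, 0.85, 0.85
c = i = a = 0.22

iss = 1 - (1 - c) * (1 - i) * (1 - a)
impact = 6.42 * iss                        # Scope: Unchanged
exploitability = 8.22 * av * ac * pr * ui
base = 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))

# Temporal metrics: E:P (0.94), RL:O (0.95), RC:C (1.0)
temporal = roundup(base * 0.94 * 0.95 * 1.0)
```

Running this yields `base == 5.6`, matching the score shown above.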
vllm KV Block kv_cache_interface.py has_mamba_layers uninitialized resource
A vulnerability was found in vllm up to 0.19.0. Affected is the function has_mamba_layers in the file vllm/v1/kv_cache_interface.py of the KV Block Handler component. Manipulation leads to use of an uninitialized resource. The attack can be initiated remotely, but its complexity is high and exploitation is considered difficult. A public exploit exists and could be used. A patch is available as commit 1ad67864c0c20f167929e64c875f5c28e1aad9fd; applying it is the recommended fix.
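The advisory gives no code details, so as a purely hypothetical illustration of the "use of uninitialized resource" (CWE-908) pattern the title describes: an attribute is only assigned on one constructor branch, and a later accessor assumes it always exists. All names below are invented for illustration; this is not vllm's actual code:

```python
# Hypothetical illustration of CWE-908 -- NOT vllm's actual code.
class KVCacheConfigSketch:
    def __init__(self, layer_types=None):
        if layer_types is not None:          # buggy: attribute is only set
            self.layer_types = layer_types   # on one branch

    def has_mamba_layers(self) -> bool:
        # Raises AttributeError when layer_types was never initialized.
        return "mamba" in self.layer_types


class FixedKVCacheConfigSketch:
    def __init__(self, layer_types=None):
        # Fix: the attribute is always initialized, on every path.
        self.layer_types = layer_types or []

    def has_mamba_layers(self) -> bool:
        return "mamba" in self.layer_types
```

The fix pattern is the usual one for this CWE: give the resource a defined value on every construction path so accessors never observe it uninitialized.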
References
- VDB-359740 | vllm KV Block kv_cache_interface.py has_mamba_layers uninitialized resource (VDB entry, technical description)
- https://github.com/vllm-project/vllm/issues/39146 (issue tracking)
Ignored references (2)
- Submit #801297 | vllm-project vLLM 0.19.0 Use of Uninitialized Resource (third-party advisory)
Affected products
- ==0.1
- ==0.2
- ==0.3
- ==0.4
- ==0.5
- ==0.6
- ==0.7
- ==0.8
- ==0.9
- ==0.10
- ==0.11
- ==0.12
- ==0.13
- ==0.14
- ==0.15
- ==0.16
- ==0.17
- ==0.18
- ==0.19.0
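Since the affected range is contiguous (every listed release up to 0.19.0), checking a deployment reduces to a version comparison. A sketch with an illustrative helper, `is_affected` (not part of any advisory tooling; pre-release suffixes such as `rc1` are not handled):

```python
def parse(v: str) -> tuple:
    """Split a plain dotted version string into an integer tuple."""
    return tuple(int(part) for part in v.split("."))

# Highest affected release per the advisory ("up to 0.19.0").
LAST_AFFECTED = parse("0.19.0")

def is_affected(installed: str) -> bool:
    t = parse(installed)
    # Normalize to three components so "0.19" and "0.19.0" compare equal.
    t = t + (0,) * (3 - len(t))
    return t <= LAST_AFFECTED
```

For example, `is_affected("0.19.0")` is true while `is_affected("0.20.0")` is false.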
Matching in nixpkgs
- pkgs.vllm
- pkgs.pkgsRocm.vllm
- pkgs.python312Packages.vllm
- pkgs.python313Packages.vllm

Each attribute builds the same package: "High-throughput and memory-efficient inference and serving engine for LLMs".
Package maintainers
- @happysalada Raphael Megzari <raphael@megzari.com>
- @CertainLach Yaroslav Bolyukin <iam@lach.pw>
- @LunNova Luna Nova <nixpkgs-maintainer@lunnova.dev>
- @daniel-fahey Daniel Fahey <daniel.fahey+nixpkgs@pm.me>