Nixpkgs Security Tracker

Suggestions search

With package: python312Packages.vllm

Found 3 matching suggestions

created 1 week, 6 days ago
vLLM vulnerable to Server-Side Request Forgery (SSRF) in `MediaConnector`

vLLM is an inference and serving engine for large language models (LLMs). Prior to version 0.14.1, a Server-Side Request Forgery (SSRF) vulnerability exists in the `MediaConnector` class within the vLLM project's multimodal feature set. The `load_from_url` and `load_from_url_async` methods fetch and process media from user-supplied URLs, but they use different Python URL-parsing libraries when enforcing the target-host restriction. Because these libraries interpret backslashes differently, the host-name restriction can be bypassed, allowing an attacker to coerce the vLLM server into making arbitrary requests to internal network resources. This is particularly critical in containerized environments such as `llm-d`, where a compromised vLLM pod could be used to scan the internal network, interact with other pods, and potentially cause denial of service or access sensitive data. For example, an attacker could make the vLLM pod send malicious requests to an internal `llm-d` management endpoint, destabilizing the system by falsely reporting metrics such as the KV cache state. Version 0.14.1 contains a patch for the issue.
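The parser-differential pattern described here can be illustrated with a minimal sketch (the allowlist, host names, and helper functions below are invented for illustration and are not vLLM's actual code): a validation check built on `urllib.parse` and a fetch path that follows WHATWG-style URL semantics, where backslashes behave like forward slashes, can disagree about which host a URL targets.

```python
from urllib.parse import urlsplit

ALLOWED_HOSTS = {"allowed.example"}  # hypothetical media-host allowlist


def naive_host_check(url: str) -> bool:
    # Validation side: urllib.parse does not treat '\' as a delimiter, so
    # everything before the last '@' in the authority is read as userinfo.
    return urlsplit(url).hostname in ALLOWED_HOSTS


def whatwg_style_host(url: str) -> str:
    # Fetching side: WHATWG-style parsers treat '\' in http(s) URLs as '/',
    # which ends the authority early and changes the effective host.
    return urlsplit(url.replace("\\", "/")).hostname


url = "http://internal-service\\@allowed.example/image.png"
print(naive_host_check(url))   # True  -> the allowlist check passes
print(whatwg_style_host(url))  # 'internal-service' -> the request goes elsewhere
```

Normalizing to a single parse (or rejecting URLs that contain backslashes) before the allowlist check removes this ambiguity.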

Affected products

vllm
  • <0.14.1

Matching in nixpkgs

Package maintainers

created 4 months, 3 weeks ago
vLLM: a completions API request with an empty prompt will crash the vLLM API server

A flaw was found in the vLLM library. A completions API request with an empty prompt will crash the vLLM API server, resulting in a denial of service.
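The request class involved is an ordinary OpenAI-compatible completions call with an empty `prompt` field. A minimal sketch of what such a request looks like (the endpoint, port, and model name are assumptions for illustration, not taken from the advisory):

```python
import requests

# Hypothetical local vLLM OpenAI-compatible server; adjust host, port, and model.
resp = requests.post(
    "http://localhost:8000/v1/completions",
    json={"model": "my-model", "prompt": "", "max_tokens": 16},  # empty prompt
    timeout=30,
)
print(resp.status_code, resp.text)
```

On affected versions (before 0.5.5), the advisory reports that such a request crashes the API server rather than being rejected with a validation error.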

Affected products

vllm
  • <0.5.5
rhelai1/bootc-nvidia-rhel9
rhelai1/instructlab-nvidia-rhel9

Matching in nixpkgs

pkgs.vllm

High-throughput and memory-efficient inference and serving engine for LLMs

Package maintainers

created 4 months, 3 weeks ago
vLLM: denial of service in the vLLM JSON web API

A vulnerability was found in the ilab model serve component, where improper handling of the `best_of` parameter in the vLLM JSON web API can lead to a denial of service (DoS). The API used for LLM-based sentence or chat completion accepts a `best_of` parameter to return the best completion from several options. When this parameter is set to a large value, the API does not handle timeouts or resource exhaustion properly, allowing an attacker to cause a DoS by consuming excessive system resources. The API becomes unresponsive, preventing legitimate users from accessing the service.
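A common mitigation for this class of issue is to bound work-multiplying request parameters before the request reaches the engine. A minimal sketch of such a guard (the cap and the function below are illustrative assumptions, not vLLM's actual validation code):

```python
MAX_BEST_OF = 8  # illustrative cap; an appropriate limit depends on the deployment


def validate_completion_params(params: dict) -> dict:
    """Reject parameters that multiply per-request work (sketch, not vLLM code)."""
    best_of = int(params.get("best_of") or 1)
    if best_of < 1 or best_of > MAX_BEST_OF:
        raise ValueError(f"best_of={best_of} is outside the allowed range 1..{MAX_BEST_OF}")
    return params
```

Combined with a server-side timeout on generation, a cap like this keeps a single request from exhausting the service's resources.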

Affected products

vllm
  • <0.5.0.post1
rhelai1/bootc-nvidia-rhel9
rhelai1/instructlab-nvidia-rhel9

Matching in nixpkgs

pkgs.vllm

High-throughput and memory-efficient inference and serving engine for LLMs

Package maintainers