vLLM is an inference and serving engine for large language models (LLMs). Starting in version 0.10.1 and prior to version 0.18.0, two model implementation files hardcode `trust_remote_code=True` when loading sub-components, bypassing the user's explicit `--trust-remote-code=False` security opt-out. This enables remote code execution via malicious model repositories even when the user has explicitly disabled remote code trust. Version 0.18.0 patches the issue.
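The Python sketch below illustrates the class of bug the advisory describes; it is not the actual vLLM source, and the function and parameter names are illustrative assumptions. It contrasts a sub-component loader that hardcodes `trust_remote_code=True` (so code bundled in a malicious model repository can run regardless of the server flag) with a variant that forwards the caller's setting, which is the behavior `--trust-remote-code=False` is supposed to guarantee.

```python
# Illustrative sketch of the bug class described above -- not the actual vLLM code.
# Function names, parameter names, and repo layout are hypothetical.
from transformers import AutoConfig, AutoModel


def load_subcomponent_unsafe(repo_id: str):
    """Anti-pattern: the hardcoded flag lets Python code bundled in the model
    repository execute, even if the server was started with --trust-remote-code=False."""
    config = AutoConfig.from_pretrained(repo_id, trust_remote_code=True)  # hardcoded opt-in
    return AutoModel.from_pretrained(repo_id, config=config, trust_remote_code=True)


def load_subcomponent(repo_id: str, trust_remote_code: bool = False):
    """Fixed pattern: propagate the user's explicit choice to every sub-component
    load so the security opt-out is actually enforced."""
    config = AutoConfig.from_pretrained(repo_id, trust_remote_code=trust_remote_code)
    return AutoModel.from_pretrained(repo_id, config=config, trust_remote_code=trust_remote_code)
```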
Advisories
No advisories yet.
Fixes
Solution
No solution given by the vendor.
Workaround
No workaround given by the vendor.
History
Fri, 27 Mar 2026 04:00:00 +0000
| Type | Values Removed | Values Added |
|---|---|---|
| Description | | vLLM is an inference and serving engine for large language models (LLMs). Starting in version 0.10.1 and prior to version 0.18.0, two model implementation files hardcode `trust_remote_code=True` when loading sub-components, bypassing the user's explicit `--trust-remote-code=False` security opt-out. This enables remote code execution via malicious model repositories even when the user has explicitly disabled remote code trust. Version 0.18.0 patches the issue. |
| Title | | vLLM's hardcoded trust_remote_code=True in NemotronVL and KimiK25 bypasses user security opt-out |
| Weaknesses | | CWE-693 |
| References | | |
| Metrics | | cvssV3_1 |
Status: PUBLISHED
Assigner: GitHub_M
Published:
Updated: 2026-03-26T23:56:53.579Z
Reserved: 2026-02-24T15:19:29.717Z
Link: CVE-2026-27893
Status: Received
Published: 2026-03-27T00:16:22.333
Modified: 2026-03-27T00:16:22.333
Link: CVE-2026-27893
OpenCVE Enrichment
No data.
Weaknesses
CWE-693: Protection Mechanism Failure