# vLLM Production Stack: reference stack for production vLLM deployment

| [**Blog**](https://lmcache.github.io) | [**Docs**](https://docs.vllm.ai/projects/production-stack) | [**Production-Stack Slack Channel**](https://vllm-dev.slack.com/archives/C089SMEAKRA) | [**LMCache Slack**](https://join.slack.com/t/lmcacheworkspace/shared_invite/zt-2viziwhue-5Amprc9k5hcIdXT7XevTaQ) | [**Interest Form**](https://forms.gle/mQfQDUXbKfp2St1z7) | [**Official Email**](mailto:contact@lmcache.ai) |

## Latest News

- 📄 [Official documentation](https://docs.vllm.ai/projects/production-stack) released for production-stack!
- ✨ [Cloud Deployment Tutorials](https://github.com/vllm-project/production-stack/blob/main/tutorials) for Lambda Labs, AWS EKS, and Google GCP are out!
- 🛤️ The 2025 Q1 roadmap is released! [Join the discussion now](https://github.com/vllm-project/production-stack/issues/26)!
- 🔥 vLLM Production Stack is released! Check out our [release blog](https://blog.lmcache.ai/2025-01-21-stack-release) posted on January 22, 2025.

## Community Events

We host **bi-weekly** community meetings at the following timeslot:

- Every other Tuesday at 5:30 PM PT – [Add to Calendar](https://drive.usercontent.google.com/u/0/uc?id=1I3WuivUVAq1vZ2XSW4rmqgD5c0bQcxE0&export=download)

All are welcome to join!

## Introduction

The **vLLM Production Stack** project provides a reference implementation of an inference stack built on top of vLLM, which allows you to:

- 🚀 Scale from a single vLLM instance to a distributed vLLM deployment without changing any application code
- 💻 Monitor the metrics through a web dashboard
- 😄 Enjoy the performance benefits brought by request routing and KV cache offloading

## Step-By-Step Tutorials

0. How to [*Install Kubernetes (kubectl, helm, minikube, etc.)*](https://github.com/vllm-project/production-stack/blob/main/tutorials/00-install-kubernetes-env.md)?
1. How to [*Deploy Production Stack on Major Cloud Platforms (AWS, GCP, Lambda Labs, Azure)*](https://github.com/vllm-project/production-stack/blob/main/tutorials/cloud_deployments)?
2. How to [*Set up a Minimal vLLM Production Stack*](https://github.com/vllm-project/production-stack/blob/main/tutorials/01-minimal-helm-installation.md)?
3. How to [*Customize vLLM Configs (optional)*](https://github.com/vllm-project/production-stack/blob/main/tutorials/02-basic-vllm-config.md)?
4. How to [*Load Your LLM Weights*](https://github.com/vllm-project/production-stack/blob/main/tutorials/03-load-model-from-pv.md)?
5. How to [*Launch Different LLMs in vLLM Production Stack*](https://github.com/vllm-project/production-stack/blob/main/tutorials/04-launch-multiple-model.md)?
6. How to [*Enable KV Cache Offloading with LMCache*](https://github.com/vllm-project/production-stack/blob/main/tutorials/05-offload-kv-cache.md)?

## Architecture

The stack is set up using [Helm](https://helm.sh/docs/) and contains the following key parts:

- **Serving engine**: The vLLM engines that run different LLMs.
- **Request router**: Directs requests to appropriate backends based on routing keys or session IDs to maximize KV cache reuse.
- **Observability stack**: Monitors the metrics of the backends through [Prometheus](https://github.com/prometheus/prometheus) + [Grafana](https://grafana.com/).

*Architecture of the stack*
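
Once the stack is installed with the Helm commands in the deployment section below, the serving engine and router show up as ordinary Kubernetes resources. Below is a minimal inspection sketch, assuming the Helm release is named `vllm` as in that section; actual resource names depend on your release and model names.

```bash
# Hedged sketch: resource names vary with your Helm release and model names.

# Serving engine and router pods created by the chart
kubectl get pods

# Services: the router Service is the single entry point for client requests
kubectl get svc

# The chart's Deployments (one per model, plus the router)
kubectl get deployments
```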

## Roadmap

We are actively working on this project and will release the following features soon. Please stay tuned!

- **Autoscaling** based on vLLM-specific metrics
- Support for **disaggregated prefill**
- **Router improvements** (e.g., a more performant router implemented in a non-Python language, KV-cache-aware routing algorithms, better fault tolerance, etc.)

## Deploying the stack via Helm

### Prerequisites

- A running Kubernetes (K8s) environment with GPUs
  - Run `cd utils && bash install-minikube-cluster.sh`
  - Or follow our [tutorial](tutorials/00-install-kubernetes-env.md)

### Deployment

vLLM Production Stack can be deployed via Helm charts. Clone the repo locally and execute the following commands for a minimal deployment:

```bash
git clone https://github.com/vllm-project/production-stack.git
cd production-stack/
helm repo add vllm https://vllm-project.github.io/production-stack
helm install vllm vllm/vllm-stack -f tutorials/assets/values-01-minimal-example.yaml
```

The deployed stack provides the same [**OpenAI API interface**](https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html#openai-compatible-server) as vLLM and can be accessed through a Kubernetes Service. To validate the installation and send a query to the stack, refer to [this tutorial](tutorials/01-minimal-helm-installation.md); a hedged query sketch is also included after the dashboard feature list below.

For more information about customizing the Helm chart, please refer to [values.yaml](https://github.com/vllm-project/production-stack/blob/main/helm/values.yaml) and our other [tutorials](https://github.com/vllm-project/production-stack/tree/main/tutorials).

### Uninstall

```bash
helm uninstall vllm
```

## Grafana Dashboard

### Features

The Grafana dashboard provides the following insights:

1. **Available vLLM Instances**: Displays the number of healthy instances.
2. **Request Latency Distribution**: Visualizes end-to-end request latency.
3. **Time-to-First-Token (TTFT) Distribution**: Monitors response times for token generation.
4. **Number of Running Requests**: Tracks the number of active requests per instance.
5. **Number of Pending Requests**: Tracks requests waiting to be processed.
6. **GPU KV Usage Percent**: Monitors GPU KV cache usage.
7. **GPU KV Cache Hit Rate**: Displays the hit rate for the GPU KV cache.
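
Sending a few test requests is also the quickest way to make the panels above start populating. Below is a hedged sketch, not an authoritative procedure: the Service name assumes a Helm release named `vllm` (verify with `kubectl get svc`), and `facebook/opt-125m` is only an assumption based on the minimal example values; `GET /v1/models` reports what your stack actually serves.

```bash
# Hedged sketch: the Service name assumes a Helm release named "vllm".
kubectl port-forward svc/vllm-router-service 30080:80

# In another terminal: list the models the stack actually serves
curl http://localhost:30080/v1/models

# Send an OpenAI-style completion request (replace the model with one listed above)
curl http://localhost:30080/v1/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "facebook/opt-125m",
        "prompt": "Once upon a time,",
        "max_tokens": 10
      }'
```

After a handful of requests, panels such as request latency, TTFT, and running/pending requests should begin to show data.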

*Grafana dashboard to monitor the deployment*
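
To open the dashboard itself, one option is to port-forward the Grafana Service installed by the observability stack. This is a hedged sketch: the namespace and Service name below are assumptions for a typical kube-prometheus-stack install, and [`observability/README.md`](./observability/README.md) is the authoritative reference.

```bash
# Hedged sketch: namespace and Service name are assumptions; check them first.
kubectl get svc -n monitoring

# Forward the Grafana Service (replace the name with the one listed above)
kubectl port-forward -n monitoring svc/kube-prom-stack-grafana 3000:80

# Then open http://localhost:3000 in a browser and load the vLLM dashboard.
```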

### Configuration

See the details in [`observability/README.md`](./observability/README.md).

## Router

The router ensures efficient request distribution among backends. It supports:

- Routing to endpoints that run different models
- Exporting observability metrics for each serving engine instance, including QPS, time-to-first-token (TTFT), number of pending/running/finished requests, and uptime
- Automatic service discovery and fault tolerance via the Kubernetes API
- Model aliases
- Multiple routing algorithms:
  - Round-robin routing
  - Session-ID based routing
  - Prefix-aware routing (WIP)

Please refer to the [router documentation](./src/vllm_router/README.md) for more details.

## Contributing

We welcome and value any contributions and collaborations. Please check out [CONTRIBUTING.md](CONTRIBUTING.md) for how to get involved.

## License

This project is licensed under Apache License 2.0. See the `LICENSE` file for details.

## Sponsors

We are grateful to our sponsors who support our development and benchmarking efforts:

*GMI Cloud*

---

For any issues or questions, feel free to open an issue or contact us ([@ApostaC](https://github.com/ApostaC), [@YuhanLiu11](https://github.com/YuhanLiu11), [@Shaoting-Feng](https://github.com/Shaoting-Feng)).