# sparse

**Repository Path**: anolis/sparse

## Basic Information

- **Project Name**: sparse
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: Apache-2.0
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 1
- **Created**: 2025-09-08
- **Last Updated**: 2025-09-08

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

# Sparse: A GPU Resource Scheduling System for Deep Learning Recommendation System Training

## Project Overview

This project targets the GPU resource scheduling problem in deep learning recommendation system (DLRS) training and proposes:

1. **Accurate resource prediction**: fuses model structural features (e.g., large embedding layers)
2. **Minimum-contention scheduler**: a GPU-sharing policy based on performance-interference analysis
3. **Low-latency optimization**: data-dependency-aware affinity scheduling
4. **Production-grade K8s integration**: fully integrated with the Kubernetes scheduling framework

> 📌 **Key value**: on identical hardware, cluster utilization improves by **50%-133%** over the default Kubernetes scheduler, and job completion time drops by 14%.

## Technical Highlights

### 1. Resource Demand Prediction Model

| Prediction target | Algorithm | Key idea |
| ----------------- | --------- | -------- |
| GPU compute | Random forest regression | Analyzes the distinct impact of sparse features on FP32 cores / memory bandwidth |
| GPU memory | Second-order multivariate linear regression | Quantifies the non-linear relationship between embedding-table size and memory footprint |

**Accuracy**: R² = 0.86 (outperforms baselines such as Horus)
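The README does not ship the trained predictors, so the following is a minimal sketch of the two-model approach described in the table above, using scikit-learn. The feature names and training values (`embedding_rows`, `embedding_dim`, `batch_size`, `dense_params`) are hypothetical placeholders, not the project's actual feature set or data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Hypothetical model-structure features per training job:
# [embedding_rows, embedding_dim, batch_size, dense_params].
X = np.array([
    [1e6, 16, 512, 2e5],
    [5e6, 32, 1024, 8e5],
    [2e7, 64, 2048, 3e6],
    [1e5, 8, 256, 5e4],
])
y_compute = np.array([0.21, 0.38, 0.74, 0.09])  # observed GPU utilization share (illustrative)
y_memory = np.array([1.8, 6.5, 29.0, 0.6])      # observed GPU memory in GB (illustrative)

# GPU compute demand: random forest regression, as in the table above.
compute_model = RandomForestRegressor(n_estimators=100, random_state=0)
compute_model.fit(X, y_compute)

# GPU memory demand: second-order multivariate regression
# (degree-2 polynomial expansion + ordinary least squares).
memory_model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
memory_model.fit(X, y_memory)

new_job = np.array([[8e6, 32, 1024, 1e6]])
print("predicted GPU utilization:", compute_model.predict(new_job)[0])
print("predicted GPU memory (GB):", memory_model.predict(new_job)[0])
```

In the real system the predicted demands are written back to the pod as annotations by the admission webhook (see the architecture diagram below); the data here is only meant to show the shape of the two regressors.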
### 2. Minimum-Contention Scheduling Algorithm

```python
# Pseudocode of the optimization problem; X_ij is the co-location decision variable
def schedule(jobs, nodes):
    # Objective: minimize the global contention value
    minimize Σ( Interference(job_i, job_j) * X_ij )
    # Constraints: GPU memory capacity, job topology dependencies, etc.
    subject to mem_constraint & affinity_constraint
```

- **Dynamic interference assessment**: analyzes, in real time, how co-located jobs affect JCT (job completion time)
- **Mixed-deployment policy**: keeps sparse training jobs (Wide&Deep) from contending with compute-intensive jobs (BERT/YOLO)

### 3. Low-Latency Optimization

```mermaid
graph LR
    A[Embedding DB] -->|remote lookup| B[Training node]
    C[Local cache proxy] -->|cache hit| B
    B -->|affinity scheduling| A
```

- **Topology-aware scheduling**: co-locates embedding consumers with storage nodes in the same rack
- **Tiered caching**: a DaemonSet provides node-level embedding caches

## Quick Start

### Requirements

- Kubernetes ≥ v1.20
- NVIDIA GPU Operator deployed
- Prometheus (for metrics collection)

### Algorithm Deployment

#### Deploy the CRDs

The manifests to apply are located in the `crd` and `default` folders under `crd-controller/config`. Run:

```
kubectl apply -k ./crd-controller/config/crd
kubectl apply -k ./crd-controller/config/default
```

#### Deploy the scheduler

Run:

```
kubectl apply -f ./scheduler/yaml/kube-gpu-sparse-scheduler.yaml
kubectl apply -f ./scheduler/yaml/node-conf.yaml
```

**Note**: set the image address in `kube-gpu-sparse-scheduler.yaml`:

```
containers:
  - command:
      - /usr/local/bin/kube-scheduler
      - --config=/etc/kubernetes/config/kube-gpu-sparse-scheduler-config.yaml
      - --v=4
    image:
```

Also update the information in `node-conf.yaml`:

```
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-gpu-sparse-scheduler-node-conf
  namespace: kube-gpu-sparse
data:
  config.yaml: |
    ## Prometheus configuration for monitoring (optional)
    prometheus:
      host: "http://prometheus.monitoring.svc.cluster.local:9090"  ## Prometheus server endpoint

    ## GPU node configuration list
    nodes:
      ## First GPU node - Basic configuration with older GPU model
      - name: gpu-node-01                      ## Unique node identifier
        ip: 10.0.1.100                         ## Node IP address
        gpu: 2                                 ## Number of GPUs on this node
        gpu-mem: 16                            ## Total GPU memory in GB (8 GB per GPU)
        model: GTX1080Ti                       ## GPU model/type
        uuids: [                               ## GPU device UUIDs (get via nvidia-smi -L)
          "GPU-12345678-1234-1234-1234-123456789abc",
          "GPU-87654321-4321-4321-4321-cba987654321"
        ]
        network: eth0                          ## Primary network interface
        network_bandwidth_bytes: 1073741824    ## Network bandwidth in bytes per second (1 GiB/s)
        region: zone-a                         ## Logical region/zone for scheduling

      ## Second GPU node - High-end configuration
      - name: gpu-node-02                      ## Unique node identifier
        ip: 10.0.1.101                         ## Node IP address
        gpu: 4                                 ## Number of GPUs on this node
        gpu-mem: 64                            ## Total GPU memory in GB (16 GB per GPU)
        model: RTX3090                         ## GPU model/type
        uuids: [                               ## GPU device UUIDs
          "GPU-aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
          "GPU-ffffffff-gggg-hhhh-iiii-jjjjjjjjjjjj",
          "GPU-kkkkkkkk-llll-mmmm-nnnn-oooooooooooo",
          "GPU-pppppppp-qqqq-rrrr-ssss-tttttttttttt"
        ]
        network: bond0                         ## Bonded network interface for higher bandwidth
        network_bandwidth_bytes: 10737418240   ## Network bandwidth in bytes per second (10 GiB/s)
        region: zone-b                         ## Different region for load balancing

      ## Third GPU node - Enterprise configuration
      - name: gpu-node-03                      ## Unique node identifier
        ip: 10.0.1.102                         ## Node IP address
        gpu: 8                                 ## Number of GPUs on this node
        gpu-mem: 320                           ## Total GPU memory in GB (40 GB per GPU)
        model: A100                            ## High-end enterprise GPU model
        uuids: [                               ## GPU device UUIDs
          "GPU-11111111-2222-3333-4444-555555555555",
          "GPU-66666666-7777-8888-9999-aaaaaaaaaaaa",
          "GPU-bbbbbbbb-cccc-dddd-eeee-ffffffffffff",
          "GPU-gggggggg-hhhh-iiii-jjjj-kkkkkkkkkkkk",
          "GPU-llllllll-mmmm-nnnn-oooo-pppppppppppp",
          "GPU-qqqqqqqq-rrrr-ssss-tttt-uuuuuuuuuuuu",
          "GPU-vvvvvvvv-wwww-xxxx-yyyy-zzzzzzzzzzzz",
          "GPU-00000000-1111-2222-3333-444444444444"
        ]
        network: ens192                        ## Standard network interface
        network_bandwidth_bytes: 21474836480   ## Network bandwidth in bytes per second (20 GiB/s)
        region: zone-c                         ## Third availability zone

    ## Additional notes:
    ## 1. GPU UUIDs can be obtained by running: nvidia-smi -L
    ## 2. Network bandwidth should be measured in bytes per second
    ## 3. GPU memory is per-node total (sum of all GPUs on that node)
    ## 4. Regions can be used for affinity/anti-affinity scheduling
    ## 5. Network interface names vary by system (eth0, ens192, bond0, etc.)
    ## 6. IP addresses should be reachable within your cluster network
```

#### Deploy the webhook

```
kubectl apply -f ./webhook/yaml
```

**Note**: set the `image` field in `webhook/yaml/deployment.yaml`:

```
containers:
  - name: resource-analyze-webhook
    image:
```

## Performance Comparison

### Makespan (job completion time)

| Scheduler | High-load scenario | Continuous-submission scenario |
| --------- | ------------------ | ------------------------------ |
| Kube-Exclusive | 100% | 100% |
| Kube-Shared | 92% | 89% |
| **Sparse (ours)** | **78%** | **85%** |

Values are normalized to Kube-Exclusive (= 100%); lower is better.

## Architecture

```mermaid
graph TD
    A[API Server] -->|webhook| B[Demand prediction module]
    B -->|writes annotation| C[Scheduling queue]
    C --> D[Scheduler core]
    D -->|decision| E[Node filter]
    E -->|bind| F[GPU node]
    G[Cache DaemonSet] -->|local cache| F
```

## Contributing

Issues and PRs are welcome! Suggested directions:

- Extend model support (e.g., Transformer-based recommendation models)
- Integrate MPS (Multi-Process Service) optimization
- Develop a static code analyzer that automatically extracts model features (see the sketch at the end of this README)

## License

Apache 2.0 © 2023 PHILLI LEE

---
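As a rough starting point for the static-analysis contribution idea listed above, here is a minimal sketch, assuming PyTorch models; `extract_embedding_features` and the returned feature names are hypothetical and not part of the current codebase:

```python
import torch.nn as nn

def extract_embedding_features(model: nn.Module) -> dict:
    """Walk a model and collect embedding-table statistics of the kind the
    resource predictor could consume (hypothetical feature names).
    nn.EmbeddingBag layers could be handled the same way."""
    tables = [m for m in model.modules() if isinstance(m, nn.Embedding)]
    return {
        "num_embedding_tables": len(tables),
        "total_embedding_rows": sum(t.num_embeddings for t in tables),
        "total_embedding_params": sum(t.num_embeddings * t.embedding_dim for t in tables),
    }

class ToyWideAndDeep(nn.Module):
    """Structure-only toy model (no forward pass) used to demonstrate extraction."""
    def __init__(self):
        super().__init__()
        self.user_emb = nn.Embedding(1_000_000, 16)  # large sparse table
        self.item_emb = nn.Embedding(50_000, 32)
        self.dense = nn.Linear(48, 1)

if __name__ == "__main__":
    print(extract_embedding_features(ToyWideAndDeep()))
```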