A vLLM (0.12.0) out-of-tree platform plugin that enables running vLLM on NPU (Ascend/torch_npu).
Last updated: 2 hours ago

Omni_Infer is a suite of inference accelerators designed for the Ascend NPU platform, offering native support and an expanding feature set.
Last updated: 2 months ago

CacheHit is a cache simulator where users can load memory access traces to simulate underlying cache behaviors.
Last updated: over 4 years ago
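To illustrate what a trace-driven cache simulator like CacheHit does, here is a minimal sketch: replay a list of memory addresses against a modeled cache and count hits and misses. The class name, trace format, and parameters below are hypothetical illustrations, not CacheHit's actual API.

```python
class DirectMappedCache:
    """Direct-mapped cache model: each memory block maps to exactly one line."""

    def __init__(self, num_lines: int, block_size: int):
        self.num_lines = num_lines
        self.block_size = block_size
        self.tags = [None] * num_lines  # stored tag per line (None = invalid)
        self.hits = 0
        self.misses = 0

    def access(self, address: int) -> bool:
        """Simulate one memory access; return True on a cache hit."""
        block = address // self.block_size
        index = block % self.num_lines
        tag = block // self.num_lines
        if self.tags[index] == tag:
            self.hits += 1
            return True
        self.tags[index] = tag  # miss: fill (or evict and refill) the line
        self.misses += 1
        return False


# Replay a small synthetic trace: 16 lines of 64 bytes each (1 KiB cache).
# 0x4000 maps to the same line as 0x0000, causing a conflict eviction.
cache = DirectMappedCache(num_lines=16, block_size=64)
trace = [0x0000, 0x0004, 0x0040, 0x0000, 0x4000, 0x0000]
for addr in trace:
    cache.access(addr)
print(cache.hits, cache.misses)  # prints "2 4"
```

A real simulator would add set associativity, replacement policies (LRU, random), and write handling, but the hit/miss accounting above is the core loop.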