@daxiaoxu233
许珉瑞
A vLLM out-of-tree platform plugin that enables running vLLM on NPU (Ascend/torch_npu).
Omni_Infer is a suite of inference accelerators designed for the Ascend NPU platform, offering native support and an expanding feature set.
Notes and source code from my process of learning CUDA.