# PyTorch LeNet on NPU

**Repository Path**: tianyu__zhou/pytorch_lenet_on_npu

## Basic Information

- **Project Name**: PyTorch LeNet on NPU
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: MIT
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2021-09-13
- **Last Updated**: 2022-09-29

## Categories & Tags

**Categories**: Uncategorized

**Tags**: None

## README

# PyTorch LeNet on Huawei NPU accelerator

#### Training Environment

For PyTorch 1.5, Docker images are available at https://ascendhub.huawei.com/#/detail/pytorch-modelzoo.

For PyTorch 1.8, manual compilation is needed at the moment; see https://gitee.com/ascend/pytorch.

#### About

This repo serves as a quick guide for migrating your existing PyTorch training scripts from GPU/CPU infrastructure to the Huawei NPU platform. In a nutshell, instead of assigning your device as **cpu** or **cuda:0**, you use **npu:0**. There may be some API changes; refer to https://gitee.com/ascend/pytorch for details. A minimal migration sketch is included at the end of this README.

#### Start Training

Note that for PyTorch 1.5, remove the line **import torch_npu** from **train_npu.py**.

##### Single Node, Single Device

> cd test; bash train_1p.sh

##### Single Node, Multiple Devices

> cd test; bash train_np.sh
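
#### Migration Sketch

The device switch described in the About section is usually a one-line change in an existing training script. The snippet below is an illustrative sketch, not this repo's actual **train_npu.py**: the LeNet definition, tensor shapes, and hyperparameters are placeholders, and it assumes PyTorch 1.8 with the torch_npu adapter installed (on PyTorch 1.5, drop the **import torch_npu** line, as noted above).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

import torch_npu  # Ascend adapter; remove this line on PyTorch 1.5 (see above)


class LeNet(nn.Module):
    """Classic LeNet-5 for 1x32x32 inputs (illustrative, not the repo's exact model)."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 6, kernel_size=5)
        self.conv2 = nn.Conv2d(6, 16, kernel_size=5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, num_classes)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = torch.flatten(x, 1)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        return self.fc3(x)


# The only NPU-specific change: use "npu:0" instead of "cpu" or "cuda:0".
device = torch.device("npu:0")

model = LeNet().to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
criterion = nn.CrossEntropyLoss()

# Dummy batch in place of a real MNIST DataLoader, just to show the device moves.
images = torch.randn(8, 1, 32, 32).to(device)
labels = torch.randint(0, 10, (8,)).to(device)

optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.4f}")
```

The rest of a typical training loop (data loading, checkpointing, evaluation) is unchanged; only the device string and the adapter import differ from a CUDA or CPU setup.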