From 5090e999fa76713585f3f4567554a0b2d9f45a17 Mon Sep 17 00:00:00 2001
From: yanhuiling
Date: Fri, 3 Feb 2023 16:13:25 +0800
Subject: [PATCH] Updated README.en

---
 README.en.md | 58 ++++++++++++++++++++++++++++++++++++----------------
 1 file changed, 40 insertions(+), 18 deletions(-)

diff --git a/README.en.md b/README.en.md
index 829e3d9..61727ef 100644
--- a/README.en.md
+++ b/README.en.md
@@ -1,11 +1,34 @@
-# Ascend apex
-
+# Ascend Apex
+
+## Apex Auxiliary Software
+
+| AscendPyTorch Version | Supported PyTorch Version | PyTorch Gitee Branch | Apex Gitee Branch |
+| :-------------------- | :------------------------ | :------------------- | :---------------- |
+| 2.0.2                 | 1.5.0.post2               | 2.0.2.tr5            | 2.0.2.tr5         |
+| 2.0.3                 | 1.5.0.post3               | 2.0.3.tr5            | 2.0.3.tr5         |
+| 2.0.4                 | 1.5.0.post4               | 2.0.4.tr5            | 2.0.4.tr5         |
+| 3.0.rc1               | 1.5.0.post5               | v1.5.0-3.0.rc1       | v1.5.0-3.0.rc1    |
+| 3.0.rc1               | 1.8.1.rc1                 | v1.8.1-3.0.rc1       | v1.8.1-3.0.rc1    |
+| 3.0.rc2               | 1.5.0.post6               | v1.5.0-3.0.rc2       | v1.5.0-3.0.rc2    |
+| 3.0.rc2               | 1.8.1.rc2                 | v1.8.1-3.0.rc2       | v1.8.1-3.0.rc2    |
+| 3.0.rc3               | 1.5.0.post7               | v1.5.0-3.0.rc3       | v1.5.0-3.0.rc3    |
+| 3.0.rc3               | 1.8.1.rc3                 | v1.8.1-3.0.rc3       | v1.8.1-3.0.rc3    |
+| 3.0.rc3               | 1.11.0.rc1 (beta)         | v1.11.0-3.0.rc3      | v1.11.0-3.0.rc3   |
+| 3.0.0                 | 1.5.0.post8               | v1.5.0-3.0.0         | v1.5.0-3.0.0      |
+| 3.0.0                 | 1.8.1                     | v1.8.1-3.0.0         | v1.8.1-3.0.0      |
+| 3.0.0                 | 1.11.0.rc2 (beta)         | v1.11.0-3.0.0        | v1.11.0-3.0.0     |
+
 ## Full Code Generation and Compilation
 
-Note: The root directory in the following description refers to the root directory of Ascend apex.
+Note: The root directory in the following description refers to the root directory of Ascend Apex.
+
+##### Obtain the Apex source code adapted to Ascend.
+
+```
+git clone -b master https://gitee.com/ascend/apex.git
+```
 
-**Obtain the native apex source code.**
+##### Obtain the native Apex source code.
 
 Obtain the source code from GitHub and run the following command in the root directory:
 ```
@@ -18,7 +41,7 @@ git checkout 4ef930c1c884fdca5f472ab2ce7cb9b505d26c1a
 cd ..
 ```
 
-**Generate the apex code adapted to Ascend AI Processors.**
+##### Generate the Apex code adapted to Ascend AI Processors.
 
 Go to the **scripts** directory and run the following command:
 ```
 bash gen.sh
 ```
 The full code adapted to NPUs is generated in the **apex** directory under the root directory.
 
-**Compile the binary package of apex.**
+##### Compile the binary package of Apex.
 
-1. Ensure that PyTorch of the NPU version can be properly used. Otherwise, the apex compilation will be affected.
+1. Ensure that the NPU version of PyTorch can be used properly; otherwise, Apex cannot be compiled correctly.
 2. Go to the **apex** directory under the root directory and run the following command:
 ```
@@ -42,37 +65,36 @@ The generated binary package is stored in the current **dist** directory.
 
 Go to the **dist** directory and run the following command:
 ```
 pip3 uninstall apex
-pip3 install --upgrade apex-0.1+ascend-cp37-cp37m-linux_{arch}.whl *arch* indicates the architecture, which can be AArch64 or x86_64.
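+# Pick the wheel that matches your Python build: the cp37 tag corresponds to
+# Python 3.7, so an aarch64 host with Python 3.7 would install (illustrative
+# filename) apex-0.1+ascend-cp37-cp37m-linux_aarch64.whl.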
+pip3 install --upgrade apex-0.1+ascend-cp37-cp37m-linux_{arch}.whl   # arch: aarch64 or x86_64
 ```
 
 ## Features
 
 **Supported features:**
-
 - [x] O1 mode
 - [x] O2 mode
 - [x] Static loss scale
 - [x] Dynamic loss scale
-- [x] combine tensors
-- [x] combine grad for unscale
-- [x] npu fused optimizer: adadelta, adam, adamp, adamw, sgd, lamb, rmsprop, rmsprop_tf
+- [x] Combine tensors
+- [x] Combine grad for unscale
+- [x] NPU fused optimizer: adadelta, adam, adamp, adamw, sgd, lamb, rmsprop, rmsprop_tf
 - [x] Adjustable parameters such as **dynamic_init_scale**, **scale_growth_factor**, **scale_backoff_factor**, and **scale_window** are added for dynamic loss scale.
 
 **Note:**
-In the current version, apex is implemented using Python and does not support AscendCL or CUDA optimization.
+In the current version, Apex is implemented in Python and does not support AscendCL or CUDA optimization.
 
-## Method of Use
-**Mixed precision:**
+## Tool Usage
+**Automatic mixed precision:**
 
-For details, see https://nvidia.github.io/apex/amp.html.
+For details, see [https://nvidia.github.io/apex/amp.html](https://nvidia.github.io/apex/amp.html).
 
-**combine grad for unscale: **
+**Combine grad for unscale:**
 
 In **amp.initialize()**, set **combine_grad** to **True**.
 
-**npu fused optimizer: **
+**NPU fused optimizer:**
 
 Replace the original optimizer with **apex.optimizers.xxx**, where *xxx* indicates the name of the fusion optimizer.
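+
+A minimal usage sketch tying these pieces together (illustrative only: it
+assumes the NPU build of PyTorch exposes the device through `.npu()` and that
+the fused SGD variant is named `apex.optimizers.NpuFusedSGD`; check the
+installed package for the exact optimizer names):
+
+```
+import torch
+# On PyTorch 1.8.1 and later for Ascend, `import torch_npu` may also be needed
+# before NPU devices become available.
+import apex
+from apex import amp
+
+model = torch.nn.Linear(16, 4).npu()   # move the model to the NPU
+# Hypothetical fused-optimizer name; substitute any apex.optimizers.xxx variant.
+optimizer = apex.optimizers.NpuFusedSGD(model.parameters(), lr=0.01)
+
+# combine_grad=True enables the "combine grad for unscale" feature listed above.
+model, optimizer = amp.initialize(model, optimizer, opt_level="O2",
+                                  loss_scale="dynamic", combine_grad=True)
+
+inputs = torch.randn(8, 16).npu()
+loss = model(inputs).sum()
+with amp.scale_loss(loss, optimizer) as scaled_loss:   # standard Apex AMP scaling
+    scaled_loss.backward()
+optimizer.step()
+```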