diff --git a/TensorFlow/built-in/nlp/Transformer_ID0004_for_TensorFlow/README.md b/TensorFlow/built-in/nlp/Transformer_ID0004_for_TensorFlow/README.md
index 2bce93c95fce2b129c6254bf0534eb245a34b405..f1a0216439dcd90213d649b4371999fa25c00f49 100644
--- a/TensorFlow/built-in/nlp/Transformer_ID0004_for_TensorFlow/README.md
+++ b/TensorFlow/built-in/nlp/Transformer_ID0004_for_TensorFlow/README.md
@@ -111,7 +111,7 @@ run_config = NPURunConfig(
Training environment preparation
1. For hardware environment setup, see each hardware product's documentation "[Driver and Firmware Installation and Upgrade Guide]( https://support.huawei.com/enterprise/zh/category/ai-computing-platform-pid-1557196528909)". Firmware and drivers matching the CANN version must be installed on the hardware device.
-2. Docker must be installed on the host, and you need to log in to the [Ascend Hub](https://ascendhub.huawei.com/#/detail?name=ascend-tensorflow-arm) to obtain the image.
+2. Docker must be installed on the host, and you need to log in to the [Ascend Hub](https://www.hiascend.com/developer/ascendhub/) to obtain the image.
The images supported by the current model are listed in [Table 1](#zh-cn_topic_0000001074498056_table1519011227314).
@@ -126,11 +126,11 @@ run_config = NPURunConfig(
- |
+ |
- ARM architecture: ascend-tensorflow-arm
- x86 architecture: ascend-tensorflow-x86
|
20.2.0
|
- 20.2
+ | 8.0
|
@@ -485,4 +485,4 @@ python3.7.5 transformer_online_inference.py
--batchSize batch size for online inference
```
-Online inference involves compiling a model with dynamic shapes, so it takes quite a long time; for the medium model, a complete run takes about 1 h
\ No newline at end of file
+Online inference involves compiling a model with dynamic shapes, so it takes quite a long time; for the medium model, a complete run takes about 1 h
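
The final hunk documents the `--batchSize` flag of the online-inference entry point. A minimal invocation sketch follows; only the script name and the `--batchSize` flag appear in this diff, while the batch-size value and the working directory are assumptions:

```shell
# Sketch of an online-inference run (batch size 32 is an assumed value).
# Because the model is compiled with dynamic shapes, the first run is
# slow; per the README, the medium model takes about 1 h end to end.
python3.7.5 transformer_online_inference.py \
    --batchSize 32
```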