From 4fb6f7fe8b37ff5eabd1eccd1b7ebb2497b4a337 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E5=AE=A6=E6=99=93=E7=8E=B2?= <3174348550@qq.com>
Date: Thu, 30 Oct 2025 11:35:18 +0800
Subject: [PATCH] modify error anchors

---
 docs/lite/docs/source_en/advanced/third_party/npu_info.md    | 2 +-
 docs/lite/docs/source_zh_cn/advanced/third_party/npu_info.md | 1 +
 .../advanced_development/training_template_instruction.md    | 4 ++--
 tutorials/source_en/debug/sdc.md                             | 2 +-
 tutorials/source_en/model_infer/lite_infer/overview.md       | 2 +-
 tutorials/source_zh_cn/debug/sdc.md                          | 2 +-
 6 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/docs/lite/docs/source_en/advanced/third_party/npu_info.md b/docs/lite/docs/source_en/advanced/third_party/npu_info.md
index 53cf507fe0..78d231cb86 100644
--- a/docs/lite/docs/source_en/advanced/third_party/npu_info.md
+++ b/docs/lite/docs/source_en/advanced/third_party/npu_info.md
@@ -28,7 +28,7 @@ For more information about compilation, see [Linux Environment Compilation](http
 
 When developers need to integrate the use of Kirin NPU features, it is important to note:
 
-- [Configure the Kirin NPU backend](https://www.mindspore.cn/lite/docs/en/master/infer/runtime_cpp.html#configuring-the-npu-backend).
+- [Configure the Kirin NPU backend](https://www.mindspore.cn/lite/docs/en/master/infer/runtime_cpp.html#configuring-the-kirin-npu-backend).
   For more information about using Runtime to perform inference, see [Using Runtime to Perform Inference (C++)](https://www.mindspore.cn/lite/docs/en/master/infer/runtime_cpp.html).
 
 - Compile and execute the binary. If you use dynamic linking, refer to [compile output](https://www.mindspore.cn/lite/docs/en/master/build/build.html) when the compile option is `-I arm64` or `-I arm32`.
diff --git a/docs/lite/docs/source_zh_cn/advanced/third_party/npu_info.md b/docs/lite/docs/source_zh_cn/advanced/third_party/npu_info.md
index 655998572b..86935547af 100644
--- a/docs/lite/docs/source_zh_cn/advanced/third_party/npu_info.md
+++ b/docs/lite/docs/source_zh_cn/advanced/third_party/npu_info.md
@@ -26,6 +26,7 @@ bash build.sh -I arm64 -j8
 
 - 集成说明
 
   开发者需要集成使用Kirin NPU功能时,需要注意:
+
   - 在代码中[配置Kirin NPU后端](https://www.mindspore.cn/lite/docs/zh-CN/master/infer/runtime_cpp.html#配置使用npu后端),有关使用Runtime执行推理详情见[使用Runtime执行推理(C++)](https://www.mindspore.cn/lite/docs/zh-CN/master/infer/runtime_cpp.html)。
 
   - 编译执行可执行程序。如采用动态加载方式,参考[编译输出](https://www.mindspore.cn/lite/docs/zh-CN/master/build/build.html)中编译选项为`-I arm64`或`-I arm32`时的内容,配置好环境变量,将会动态加载libhiai.so、libhiai_ir.so、libhiai_ir_build.so、libhiai_hcl_model_runtime.so。例如:
diff --git a/docs/mindformers/docs/source_zh_cn/advanced_development/training_template_instruction.md b/docs/mindformers/docs/source_zh_cn/advanced_development/training_template_instruction.md
index 128f2242bb..e7b90ccaca 100644
--- a/docs/mindformers/docs/source_zh_cn/advanced_development/training_template_instruction.md
+++ b/docs/mindformers/docs/source_zh_cn/advanced_development/training_template_instruction.md
@@ -46,7 +46,7 @@ MindSpore Transformers对于不同训练场景提供了对应的配置模板,
 
 ### 数据集配置修改
 
 1. 预训练场景使用Megatron数据集,详情请参考[Megatron数据集](https://www.mindspore.cn/mindformers/docs/zh-CN/master/feature/dataset.html#megatron%E6%95%B0%E6%8D%AE%E9%9B%86)。
-2. 微调场景使用HuggingFace数据集,详情请参考[HuggingFace数据集](https://www.mindspore.cn/mindformers/docs/zh-CN/master/feature/dataset.html#huggingface%E6%95%B0%E6%8D%AE%E9%9B%86)。
+2. 微调场景使用HuggingFace数据集,详情请参考[HuggingFace数据集](https://www.mindspore.cn/mindformers/docs/zh-CN/master/feature/dataset.html#hugging-face%E6%95%B0%E6%8D%AE%E9%9B%86)。
 
 ### 模型配置修改
@@ -59,7 +59,7 @@
 | Qwen2_5 |
 
 2. 生成的模型配置优先以yaml配置为准,未配置参数则取值pretrained_model_dir路径下的config.json中的参数。如若要修改定制模型配置,则只需要在model_config中添加相关配置即可。
 
-3. 通用配置详情请参考[模型配置](https://www.mindspore.cn/mindformers/docs/zh-CN/master/feature/configuration.html#%E6%A8%A1%E5%9E%8B%E9%85%8D%E7%BD%AE)。
+3. 通用配置详情请参考[模型配置](https://www.mindspore.cn/mindformers/docs/zh-CN/master/feature/configuration.html#legacy-%E6%A8%A1%E5%9E%8B%E9%85%8D%E7%BD%AE)。
 
 ## 进阶配置修改
diff --git a/tutorials/source_en/debug/sdc.md b/tutorials/source_en/debug/sdc.md
index e5126400b2..02f51c912e 100644
--- a/tutorials/source_en/debug/sdc.md
+++ b/tutorials/source_en/debug/sdc.md
@@ -369,7 +369,7 @@ When numerical anomalies are detected, the training task fails and alerts are re
 
 * Search application logs for **ERROR** level error logs with the keyword "accuracy sensitivity feature abnormal";
 * Monitor the NPU health status: if Health Status displays Warning, Error Code displays 80818C00, and Error Information displays node type=SoC, sensor type=Check Sensor, event state=check fail;
-* Check the [Ascend Device Plugin](https://github.com/Ascend/ascend-device-plugin) events, report error code 80818C00, event type is fault event, and the fault level is minor.
+* Check the [MindCluster](https://gitcode.com/Ascend/mind-cluster) events, report error code 80818C00, event type is fault event, and the fault level is minor.
 
 When using combined detection, if feature value detection anomalies occur and CheckSum detects silent faults, warning logs can be found in the training logs:
 
diff --git a/tutorials/source_en/model_infer/lite_infer/overview.md b/tutorials/source_en/model_infer/lite_infer/overview.md
index a619402fac..2fc0694c34 100644
--- a/tutorials/source_en/model_infer/lite_infer/overview.md
+++ b/tutorials/source_en/model_infer/lite_infer/overview.md
@@ -38,7 +38,7 @@ The MindSpore Lite inference framework supports the conversion of MindSpore trai
 
 3. [Quantification after Training](https://www.mindspore.cn/lite/docs/en/master/advanced/quantization.html)
 
-4. [Lightweight Micro inference deployment](https://www.mindspore.cn/lite/docs/en/master/advanced/micro.html#%20Model%20inference%20code%20generation)
+4. [Lightweight Micro inference deployment](https://www.mindspore.cn/lite/docs/en/master/advanced/micro.html#generating-model-inference-code)
 
 5. [Benchmark Debugging Tool](https://www.mindspore.cn/lite/docs/en/master/tools/benchmark.html)
 
diff --git a/tutorials/source_zh_cn/debug/sdc.md b/tutorials/source_zh_cn/debug/sdc.md
index 6d1a05a08a..7ad0be2e8e 100644
--- a/tutorials/source_zh_cn/debug/sdc.md
+++ b/tutorials/source_zh_cn/debug/sdc.md
@@ -369,7 +369,7 @@ $ grep -m1 'Global CheckSum result is' worker_0.log
 
 * 通过搜索应用类日志,查询**ERROR**级别错误日志,关键字"accuracy sensitivity feature abnormal";
 * 通过监控NPU健康状态:Health Status显示Warning,Error Code显示80818C00,Error Information显示node type=SoC, sensor type=Check Sensor, event state=check fail;
-* 通过查看[Ascend Device Plugin](https://github.com/Ascend/ascend-device-plugin)事件,上报错误码80818C00,事件类型为故障事件,故障级别次要。
+* 通过查看[MindCluster](https://gitcode.com/Ascend/mind-cluster)事件,上报错误码80818C00,事件类型为故障事件,故障级别次要。
 
 当使用联合检测时,若训练中发生特征值异常、CheckSum检测出静默故障,会在业务训练日志中产生告警:
-- 
Gitee
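The anchor corrections in this patch all follow the same heading-to-anchor convention: lowercase the heading, drop punctuation, turn spaces into hyphens, and percent-encode non-ASCII characters. A minimal Python sketch of that rule, assuming GitHub-style slug generation (which the corrected links appear to follow); `heading_anchor` is an illustrative helper, not part of any MindSpore tooling:

```python
import re
import urllib.parse

def heading_anchor(heading: str) -> str:
    """Approximate a docs-site heading -> URL-anchor slug:
    lowercase, strip punctuation, spaces to hyphens,
    percent-encode non-ASCII (e.g. CJK) characters."""
    slug = heading.strip().lower()
    slug = re.sub(r"[^\w\- ]", "", slug)  # drop punctuation such as ( ) : ,
    slug = slug.replace(" ", "-")
    return urllib.parse.quote(slug)       # CJK characters become %XX escapes

# The corrected anchors above match this scheme:
print(heading_anchor("Configuring the Kirin NPU Backend"))
# configuring-the-kirin-npu-backend
print(heading_anchor("Hugging Face 数据集"))
# hugging-face-%E6%95%B0%E6%8D%AE%E9%9B%86
```

Running the helper against a heading before hard-coding its anchor would have caught the stale `#configuring-the-npu-backend` and unhyphenated `#huggingface...` links fixed here.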