MindSpore / models
[Question]: RuntimeError: Run task for graph:kernel_graph_1 error!
DONE · #I7ONO5
zc0416 · Created on 2023-07-28 10:19
### Please describe your question

I am running a model adapted from the basic ModelArts BERT on MindSpore 2.0.0. Every run fails after a little more than one epoch: one of the eight ranks reports EE9999, and the other seven report `EI0002: The wait execution of the Notify register times out`. What is causing this?

log0:

```
epoch: 1, current epoch percent: 0.874, step: 49900, outputs are (Tensor(shape=[1], dtype=Float32, value= [ 3.32405716e-01]), Tensor(shape=[], dtype=Bool, value= False), Tensor(shape=[], dtype=Float32, value= 262144))
Train epoch time: 54512.023 ms, per step time: 545.120 ms
use last line
use last line
epoch: 1, current epoch percent: 0.878, step: 50000, outputs are (Tensor(shape=[1], dtype=Float32, value= [ 5.08989751e-01]), Tensor(shape=[], dtype=Bool, value= False), Tensor(shape=[], dtype=Float32, value= 262144))
Train epoch time: 69872.935 ms, per step time: 698.729 ms
use last line
use last line
use last line
use last line
use last line
epoch: 1, current epoch percent: 0.882, step: 50100, outputs are (Tensor(shape=[1], dtype=Float32, value= [ 3.82104754e-01]), Tensor(shape=[], dtype=Bool, value= False), Tensor(shape=[], dtype=Float32, value= 262144))
Train epoch time: 54909.512 ms, per step time: 549.095 ms
[EVENT] RUNTIME(133931,python):2023-07-27-05:48:12.311.231 [engine.cc:1355] 138165 ReportTimeoutProc: report timeout! streamId=4, pendingTaskNum=2, reportCount=119728043, parseTaskCount=119728043
[EVENT] RUNTIME(133931,python):2023-07-27-05:51:16.631.225 [engine.cc:1355] 138165 ReportTimeoutProc: report timeout! streamId=4, pendingTaskNum=2, reportCount=119728043, parseTaskCount=119728043
[EVENT] RUNTIME(133931,python):2023-07-27-05:54:20.951.216 [engine.cc:1355] 138165 ReportTimeoutProc: report timeout! streamId=4, pendingTaskNum=2, reportCount=119728043, parseTaskCount=119728043
[EVENT] RUNTIME(133931,python):2023-07-27-05:57:25.271.243 [engine.cc:1355] 138165 ReportTimeoutProc: report timeout! streamId=4, pendingTaskNum=2, reportCount=119728043, parseTaskCount=119728043
[EVENT] RUNTIME(133931,python):2023-07-27-06:00:29.591.214 [engine.cc:1355] 138165 ReportTimeoutProc: report timeout! streamId=4, pendingTaskNum=2, reportCount=119728043, parseTaskCount=119728043
[EVENT] RUNTIME(133931,python):2023-07-27-06:03:33.907.214 [engine.cc:1355] 138165 ReportTimeoutProc: report timeout! streamId=4, pendingTaskNum=2, reportCount=119728043, parseTaskCount=119728043
[EVENT] RUNTIME(133931,python):2023-07-27-06:06:38.227.218 [engine.cc:1355] 138165 ReportTimeoutProc: report timeout! streamId=4, pendingTaskNum=2, reportCount=119728043, parseTaskCount=119728043
[EVENT] RUNTIME(133931,python):2023-07-27-06:09:42.547.217 [engine.cc:1355] 138165 ReportTimeoutProc: report timeout! streamId=4, pendingTaskNum=2, reportCount=119728043, parseTaskCount=119728043
[EVENT] RUNTIME(133931,python):2023-07-27-06:12:46.867.224 [engine.cc:1355] 138165 ReportTimeoutProc: report timeout! streamId=4, pendingTaskNum=2, reportCount=119728043, parseTaskCount=119728043
[EVENT] RUNTIME(133931,python):2023-07-27-06:15:51.187.213 [engine.cc:1355] 138165 ReportTimeoutProc: report timeout! streamId=4, pendingTaskNum=2, reportCount=119728043, parseTaskCount=119728043
[EVENT] HCCL(133931,python):2023-07-27-06:16:09.047.280 [heartbeat.cc:412][133931][164460]rank [[192.168.0.7][0]]: rank [[192.168.0.7][2]] error status[1] by rank [[192.168.0.7][0]]
[EVENT] HCCL(133931,python):2023-07-27-06:16:10.049.963 [heartbeat.cc:412][133931][164460]rank [[192.168.0.7][0]]: rank [[192.168.0.7][4]] error status[1] by rank [[192.168.0.7][0]]
[ERROR] RUNTIME(133931,python):2023-07-27-06:16:47.886.710 [engine.cc:1311]138165 ReportExceptProc:Real task exception! device_id=0, stream_id=5, task_id=2, task_type=3 (EVENT_WAIT)
[ERROR] RUNTIME(133931,python):2023-07-27-06:16:47.886.755 [engine.cc:1315]138165 ReportExceptProc:Task exception! device_id=0, stream_id=4, task_id=15010, type=13, retCode=0x91, [the model stream execute failed]
[ERROR] RUNTIME(133931,python):2023-07-27-06:16:47.888.130 [task.cc:3537]138165 ReportErrorInfo:model execute error, retCode=0x91, [the model stream execute failed].
[ERROR] RUNTIME(133931,python):2023-07-27-06:16:47.888.150 [task.cc:3514]138165 PrintErrorInfo:model execute task failed, device_id=0, model stream_id=4, model task_id=15010, flip_num=3651, model_id=0, first_task_id=65535
[ERROR] RUNTIME(133931,python):2023-07-27-06:16:47.888.187 [task.cc:106]138165 PrintErrorInfo:Task execute failed, device_id=0, stream_id=5, task_id=2, flip_num=0, task_type=3.
[ERROR] RUNTIME(133931,python):2023-07-27-06:16:47.888.250 [callback.cc:91]138165 Notify:notify [HCCL] task fail start.notify taskid:2 streamid:5 retcode:507011
[ERROR] RUNTIME(133931,python):2023-07-27-06:16:47.889.398 [callback.cc:91]138165 Notify:notify [MindSpore] task fail start.notify taskid:2 streamid:5 retcode:507011
[ERROR] RUNTIME(133931,python):2023-07-27-06:16:47.889.446 [stream.cc:1122]138165 GetError:Stream Synchronize failed, stream_id=4, retCode=0x91, [the model stream execute failed].
[ERROR] RUNTIME(133931,python):2023-07-27-06:16:47.889.458 [stream.cc:1125]138165 GetError:report error module_type=7, module_name=EE9999
[ERROR] RUNTIME(133931,python):2023-07-27-06:16:47.889.468 [stream.cc:1125]138165 GetError:Task execute failed, device_id=0, stream_id=5, task_id=2, flip_num=0, task_type=3.
[ERROR] RUNTIME(133931,python):2023-07-27-06:16:47.889.526 [logger.cc:345]138165 StreamSynchronize:Stream synchronize failed
[ERROR] RUNTIME(133931,python):2023-07-27-06:16:47.889.581 [api_c.cc:719]138165 rtStreamSynchronize:ErrCode=507011, desc=[the model stream execute failed], InnerCode=0x7150050
[ERROR] RUNTIME(133931,python):2023-07-27-06:16:47.889.593 [error_message_manage.cc:49]138165 FuncErrorReason:report error module_type=3, module_name=EE8888
[ERROR] RUNTIME(133931,python):2023-07-27-06:16:47.889.611 [error_message_manage.cc:49]138165 FuncErrorReason:rtStreamSynchronize execute failed, reason=[the model stream execute failed]
[WARNING] DEVICE(133931,fffe10d5e160,python):2023-07-27-06:16:47.912.627 [mindspore/ccsrc/plugin/device/ascend/hal/device/ascend_kernel_runtime.cc:742] GetDumpPath] The environment variable 'MS_OM_PATH' is not set, the files of node dump will save to the process local path, as ./rank_id/node_dump/...
[ERROR] DEVICE(133931,fffe10d5e160,python):2023-07-27-06:16:47.912.738 [mindspore/ccsrc/plugin/device/ascend/hal/device/ascend_kernel_runtime.cc:760] DumpTaskExceptionInfo] Task fail infos task_id: 2, stream_id: 5, tid: 133931, device_id: 0, retcode: 507011 ( model execute failed)
Traceback (most recent call last):
  File "/root/lxd_code/gene_pretrain/code/train/t5rpe_new/device0/./run_pretrain.py", line 207, in <module>
    run_pretrain()
  File "/root/lxd_code/gene_pretrain/code/train/t5rpe_new/device0/src/model_utils/moxing_adapter.py", line 68, in wrapped_func
    run_func(*args, **kwargs)
  File "/root/lxd_code/gene_pretrain/code/train/t5rpe_new/device0/./run_pretrain.py", line 201, in run_pretrain
    model.train(new_repeat_count, ds, callbacks=callback,
  File "/root/miniconda3/envs/mindspore_py39/lib/python3.9/site-packages/mindspore/train/model.py", line 1044, in train
    self._train(epoch,
  File "/root/miniconda3/envs/mindspore_py39/lib/python3.9/site-packages/mindspore/train/model.py", line 100, in wrapper
    func(self, *args, **kwargs)
  File "/root/miniconda3/envs/mindspore_py39/lib/python3.9/site-packages/mindspore/train/model.py", line 597, in _train
    self._train_dataset_sink_process(epoch, train_dataset, list_callback,
  File "/root/miniconda3/envs/mindspore_py39/lib/python3.9/site-packages/mindspore/train/model.py", line 681, in _train_dataset_sink_process
    outputs = train_network(*inputs)
  File "/root/miniconda3/envs/mindspore_py39/lib/python3.9/site-packages/mindspore/nn/cell.py", line 620, in __call__
    out = self.compile_and_run(*args, **kwargs)
  File "/root/miniconda3/envs/mindspore_py39/lib/python3.9/site-packages/mindspore/nn/cell.py", line 942, in compile_and_run
    return _cell_graph_executor(self, *new_args, phase=self.phase)
  File "/root/miniconda3/envs/mindspore_py39/lib/python3.9/site-packages/mindspore/common/api.py", line 1439, in __call__
    return self.run(obj, *args, phase=phase)
  File "/root/miniconda3/envs/mindspore_py39/lib/python3.9/site-packages/mindspore/common/api.py", line 1478, in run
    return self._exec_pip(obj, *args, phase=phase_real)
  File "/root/miniconda3/envs/mindspore_py39/lib/python3.9/site-packages/mindspore/common/api.py", line 102, in wrapper
    results = fn(*arg, **kwargs)
  File "/root/miniconda3/envs/mindspore_py39/lib/python3.9/site-packages/mindspore/common/api.py", line 1458, in _exec_pip
    return self._graph_executor(args, phase)
RuntimeError: Run task for graph:kernel_graph_1 error! The details refer to 'Ascend Error Message'.

----------------------------------------------------
- Ascend Error Message:
----------------------------------------------------
EE9999: Inner Error, Please contact support engineer!
EE9999  Task execute failed, device_id=0, stream_id=5, task_id=2, flip_num=0, task_type=3.[FUNC:GetError][FILE:stream.cc][LINE:1125]
        TraceBack (most recent call last):
        rtStreamSynchronize execute failed, reason=[the model stream execute failed][FUNC:FuncErrorReason][FILE:error_message_manage.cc][LINE:49]

(Please search "Ascend Error Message" at https://www.mindspore.cn for error code description)

----------------------------------------------------
- C++ Call Stack: (For framework developers)
----------------------------------------------------
mindspore/ccsrc/plugin/device/ascend/hal/hardware/ascend_graph_executor.cc:256 RunGraph

[WARNING] MD(133931,ffff9ed47010,python):2023-07-27-06:16:49.928.814 [mindspore/ccsrc/minddata/dataset/engine/datasetops/data_queue_op.cc:115] ~DataQueueOp] preprocess_batch: 50194; batch_queue: 16, 16, 16, 16, 16, 16, 16, 16, 16, 16; push_start_time: 2023-07-27-05:44:07.591.112, 2023-07-27-05:44:08.132.669, 2023-07-27-05:44:08.682.632, 2023-07-27-05:44:09.223.711, 2023-07-27-05:44:09.767.355, 2023-07-27-05:44:10.315.898, 2023-07-27-05:44:10.860.925, 2023-07-27-05:44:11.408.294, 2023-07-27-05:44:11.955.528, 2023-07-27-05:44:12.500.722; push_end_time: 2023-07-27-05:44:07.591.479, 2023-07-27-05:44:08.133.057, 2023-07-27-05:44:08.683.056, 2023-07-27-05:44:09.224.106, 2023-07-27-05:44:09.767.742, 2023-07-27-05:44:10.316.266, 2023-07-27-05:44:10.861.297, 2023-07-27-05:44:11.408.658, 2023-07-27-05:44:11.955.874, 2023-07-27-05:44:12.501.082.
[EVENT] HCCL(133931,python):2023-07-27-06:16:50.060.629 [opexecounter.cc:80][hccl-133931-0-1690379684-hccl_world_group][0]head counting tasks tag:[HcomAllReduce_6629421139219749105_0] count:[50194]
[EVENT] HCCL(133931,python):2023-07-27-06:16:50.060.711 [opexecounter.cc:80][hccl-133931-0-1690379684-hccl_world_group][0]tail counting tasks tag:[HcomAllReduce_6629421139219749105_0] count:[50194]
[EVENT] HCCL(133931,python):2023-07-27-06:16:50.285.881 [opexecounter.cc:80][hccl-133931-0-1690379684-hccl_world_group][0]head counting tasks tag:[HcomAllReduce_6629421139219749105_1] count:[50194]
[EVENT] HCCL(133931,python):2023-07-27-06:16:50.285.948 [opexecounter.cc:80][hccl-133931-0-1690379684-hccl_world_group][0]tail counting tasks tag:[HcomAllReduce_6629421139219749105_1] count:[50194]
[EVENT] HCCL(133931,python):2023-07-27-06:16:50.331.309 [hcom_executor.cc:39][hccl-133931-0-1690379684-hccl_world_group][0][Finalize][HcomExecutor]Hcom Excutor Finalize end. ret[0]
[EVENT] HCCL(133931,python):2023-07-27-06:16:50.331.472 [hcom.cc:314][hccl-133931-0-1690379684-hccl_world_group][0]Entry-HcomDestroy:void
```

log1:

```
[EVENT] RUNTIME(133507,python):2023-07-27-06:12:46.867.226 [engine.cc:1355] 137895 ReportTimeoutProc: report timeout! streamId=12, pendingTaskNum=2, reportCount=119713029, parseTaskCount=119713029
[EVENT] RUNTIME(133507,python):2023-07-27-06:15:51.187.227 [engine.cc:1355] 137895 ReportTimeoutProc: report timeout! streamId=12, pendingTaskNum=2, reportCount=119713029, parseTaskCount=119713029
[ERROR] RUNTIME(133507,python):2023-07-27-06:16:02.704.231 [engine.cc:1311]137895 ReportExceptProc:Real task exception! device_id=1, stream_id=2, task_id=6, task_type=14 (NOTIFY_WAIT)
[ERROR] RUNTIME(133507,python):2023-07-27-06:16:02.704.298 [engine.cc:1315]137895 ReportExceptProc:Task exception! device_id=1, stream_id=12, task_id=14218, type=13, retCode=0x91, [the model stream execute failed]
[ERROR] RUNTIME(133507,python):2023-07-27-06:16:02.707.555 [task.cc:3537]137895 ReportErrorInfo:model execute error, retCode=0x91, [the model stream execute failed].
[ERROR] RUNTIME(133507,python):2023-07-27-06:16:02.707.583 [task.cc:3514]137895 PrintErrorInfo:model execute task failed, device_id=1, model stream_id=12, model task_id=14218, flip_num=3651, model_id=0, first_task_id=65535
[ERROR] RUNTIME(133507,python):2023-07-27-06:16:02.707.670 [task.cc:3847]137895 PrintErrorInfo:Notify wait execute failed, device_id=1, stream_id=2, task_id=6, flip_num=0, notify_id=6
[ERROR] RUNTIME(133507,python):2023-07-27-06:16:02.707.722 [callback.cc:91]137895 Notify:notify [HCCL] task fail start.notify taskid:6 streamid:2 retcode:507011
[ERROR] HCCL(133507,python):2023-07-27-06:16:02.708.983 [task_exception_handler.cc:220][133507][137895][TaskExceptionHandler][Callback]Task from HCCL run failed.
[ERROR] HCCL(133507,python):2023-07-27-06:16:02.709.071 [task_exception_handler.cc:223][133507][137895][TaskExceptionHandler][Callback]Task run failed, base information is streamID:[2], taskID[6], taskType[Notify Wait], tag[HcomAllReduce_6629421139219749105_0].
[ERROR] HCCL(133507,python):2023-07-27-06:16:02.709.147 [task_exception_handler.cc:225][133507][137895][TaskExceptionHandler][Callback]Task run failed, para information is notify id:[0x0000000100000030], stage:[ffffffff], remote rank:[0].
[ERROR] RUNTIME(133507,python):2023-07-27-06:16:02.709.386 [callback.cc:91]137895 Notify:notify [MindSpore] task fail start.notify taskid:6 streamid:2 retcode:507011
[ERROR] RUNTIME(133507,python):2023-07-27-06:16:02.709.508 [stream.cc:1122]137895 GetError:Stream Synchronize failed, stream_id=12, retCode=0x91, [the model stream execute failed].
[ERROR] RUNTIME(133507,python):2023-07-27-06:16:02.709.520 [stream.cc:1125]137895 GetError:report error module_type=2, module_name=EI9999
[ERROR] RUNTIME(133507,python):2023-07-27-06:16:02.709.530 [stream.cc:1125]137895 GetError:Notify wait execute failed, device_id=1, stream_id=2, task_id=6, flip_num=0, notify_id=6
[ERROR] RUNTIME(133507,python):2023-07-27-06:16:02.709.566 [logger.cc:345]137895 StreamSynchronize:Stream synchronize failed
[ERROR] RUNTIME(133507,python):2023-07-27-06:16:02.709.587 [api_c.cc:719]137895 rtStreamSynchronize:ErrCode=507011, desc=[the model stream execute failed], InnerCode=0x7150050
[ERROR] RUNTIME(133507,python):2023-07-27-06:16:02.709.597 [error_message_manage.cc:49]137895 FuncErrorReason:report error module_type=3, module_name=EE8888
[ERROR] RUNTIME(133507,python):2023-07-27-06:16:02.709.612 [error_message_manage.cc:49]137895 FuncErrorReason:rtStreamSynchronize execute failed, reason=[the model stream execute failed]
[WARNING] DEVICE(133507,fffd367fc160,python):2023-07-27-06:16:02.852.515 [mindspore/ccsrc/plugin/device/ascend/hal/device/ascend_kernel_runtime.cc:742] GetDumpPath] The environment variable 'MS_OM_PATH' is not set, the files of node dump will save to the process local path, as ./rank_id/node_dump/...
[ERROR] DEVICE(133507,fffd367fc160,python):2023-07-27-06:16:02.852.658 [mindspore/ccsrc/plugin/device/ascend/hal/device/ascend_kernel_runtime.cc:760] DumpTaskExceptionInfo] Task fail infos task_id: 6, stream_id: 2, tid: 133507, device_id: 1, retcode: 507011 ( model execute failed)
[WARNING] MD(133507,fffc8d7fa160,python):2023-07-27-06:16:02.870.419 [mindspore/ccsrc/minddata/dataset/engine/datasetops/data_queue_op.cc:280] SendDataToAscend] Thread has already been terminated.
Traceback (most recent call last):
  File "/root/lxd_code/gene_pretrain/code/train/t5rpe_new/device1/./run_pretrain.py", line 207, in <module>
    run_pretrain()
  File "/root/lxd_code/gene_pretrain/code/train/t5rpe_new/device1/src/model_utils/moxing_adapter.py", line 68, in wrapped_func
    run_func(*args, **kwargs)
  File "/root/lxd_code/gene_pretrain/code/train/t5rpe_new/device1/./run_pretrain.py", line 201, in run_pretrain
    model.train(new_repeat_count, ds, callbacks=callback,
  File "/root/miniconda3/envs/mindspore_py39/lib/python3.9/site-packages/mindspore/train/model.py", line 1044, in train
    self._train(epoch,
  File "/root/miniconda3/envs/mindspore_py39/lib/python3.9/site-packages/mindspore/train/model.py", line 100, in wrapper
    func(self, *args, **kwargs)
  File "/root/miniconda3/envs/mindspore_py39/lib/python3.9/site-packages/mindspore/train/model.py", line 597, in _train
    self._train_dataset_sink_process(epoch, train_dataset, list_callback,
  File "/root/miniconda3/envs/mindspore_py39/lib/python3.9/site-packages/mindspore/train/model.py", line 681, in _train_dataset_sink_process
    outputs = train_network(*inputs)
  File "/root/miniconda3/envs/mindspore_py39/lib/python3.9/site-packages/mindspore/nn/cell.py", line 620, in __call__
    out = self.compile_and_run(*args, **kwargs)
  File "/root/miniconda3/envs/mindspore_py39/lib/python3.9/site-packages/mindspore/nn/cell.py", line 942, in compile_and_run
    return _cell_graph_executor(self, *new_args, phase=self.phase)
  File "/root/miniconda3/envs/mindspore_py39/lib/python3.9/site-packages/mindspore/common/api.py", line 1439, in __call__
    return self.run(obj, *args, phase=phase)
  File "/root/miniconda3/envs/mindspore_py39/lib/python3.9/site-packages/mindspore/common/api.py", line 1478, in run
    return self._exec_pip(obj, *args, phase=phase_real)
  File "/root/miniconda3/envs/mindspore_py39/lib/python3.9/site-packages/mindspore/common/api.py", line 102, in wrapper
    results = fn(*arg, **kwargs)
  File "/root/miniconda3/envs/mindspore_py39/lib/python3.9/site-packages/mindspore/common/api.py", line 1458, in _exec_pip
    return self._graph_executor(args, phase)
RuntimeError: Run task for graph:kernel_graph_1 error! The details refer to 'Ascend Error Message'.

----------------------------------------------------
- Ascend Error Message:
----------------------------------------------------
EI0002: The wait execution of the Notify register times out.
        Reason: The Notify register has not received the Notify record from remote rank [0].
        base information: [streamID:[2], taskID[6], taskType[Notify Wait], tag[HcomAllReduce_6629421139219749105_0].]
        task information: [notify id:[0x0000000100000030], stage:[ffffffff], remote rank:[0].]
        Possible Cause:
        1. An exception occurs during the execution on some NPUs in the cluster. As a result, collective communication operation failed.
        2. The execution speed on some NPU in the cluster is too slow to complete a communication operation within the timeout interval. (default 1800s, You can set the interval by using HCCL_EXEC_TIMEOUT.)
        3. The number of training samples of each NPU is inconsistent.
        4. Packet loss or other connectivity problems occur on the communication link.
        Solution:
        1. If this error is reported on part of these ranks, check other ranks to see whether other errors have been reported earlier.
        2. If this error is reported for all ranks, check whether the error reporting time is consistent (the maximum difference must not exceed 1800s). If not, locate the cause, or set the HCCL_EXEC_TIMEOUT environment variable to a larger value.
        3. Check whether the completion queue element (CQE) of the error exists in the plog (grep -rn 'error cqe'). If so, check the network connection status. (For details, see the TLS command and HCCN connectivity check examples.)
        4. Ensure that the number of training samples of each NPU is consistent.
        TraceBack (most recent call last):
        Notify wait execute failed, device_id=1, stream_id=2, task_id=6, flip_num=0, notify_id=6[FUNC:GetError][FILE:stream.cc][LINE:1125]
        rtStreamSynchronize execute failed, reason=[the model stream execute failed][FUNC:FuncErrorReason][FILE:error_message_manage.cc][LINE:49]

(Please search "Ascend Error Message" at https://www.mindspore.cn for error code description)
```
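For readers who hit the same EI0002, below is a minimal sketch (not from the original report) of the two mitigations that the error message itself suggests: raising `HCCL_EXEC_TIMEOUT` and verifying that every rank iterates the same number of batches. It assumes a single-machine multi-NPU launch; the MindRecord path, batch size, and timeout value are placeholders.

```python
import os

# Raise the HCCL execution timeout (in seconds; the EI0002 text says the
# default is 1800 s) BEFORE MindSpore initializes collective communication.
os.environ["HCCL_EXEC_TIMEOUT"] = "3600"

import mindspore as ms
from mindspore.communication import init, get_rank, get_group_size

ms.set_context(mode=ms.GRAPH_MODE, device_target="Ascend")
init()  # set up HCCL for this rank

# Shard the data by rank so every NPU sees the same number of samples;
# the file path is a placeholder for the MindRecord files run_pretrain.py uses.
ds = ms.dataset.MindDataset(
    dataset_files=["/path/to/pretrain.mindrecord"],
    num_shards=get_group_size(),  # one shard per rank
    shard_id=get_rank(),
)
ds = ds.batch(16, drop_remainder=True)  # drop ragged tail batches on every rank

# If this count differs across ranks, the short ranks stop entering AllReduce
# early and the remaining ranks block in Notify Wait until EI0002 fires.
print(f"rank {get_rank()}: {ds.get_dataset_size()} batches per epoch")
```

That failure shape would also match the logs above: one rank dies with EE9999 while the other seven wait in `HcomAllReduce` until the Notify-wait timeout, so checking the earliest-failing rank first (Solution 1) is usually the fastest route.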
Comments (4)