Ascend / pytorch
RuntimeError: npuSynchronizeDevice:build/CMakeFiles/torch_npu.dir/compiler_depend.ts:467 NPU function error: AclrtSynchronizeDeviceWithTimeout, error code is 507048
#IBS5FF · Training issue · DONE
Estelle-gqy · Created on 2025-03-10 15:08
1. Problem description (with error log context):
Single-machine, 8-card distributed training fails during evaluation with the error below.

2. Software versions:
- torch = 2.4.0
- torch-npu = 2.4.0.post2
- CANN version = 8.0.RC3
- arch = aarch64
- python = 3.10.16
- npu-smi = 23.0.6
- ubuntu = 22.04
- hardware: 910B2, 64 GB, 8 cards

3. Test steps:
Model parallelism combined with data parallelism, with the environment variable `export HCCL_CONNECT_TIMEOUT=6000`. The failing code is as follows:

```
def evaluate_ppo(self):  # noqa: C901
    # self.model.eval()
    """Samples model on `eval_prompts`, logs stats with `reward_fn` or `metric_fn` if provided"""
    stats = {}
    all_full_ids = []
    all_rev_kl = []
    all_lens = []

    table = []

    with torch.no_grad():
        for batch in tqdm(self.eval_dataloader, "Generation Evaluation", disable=(not get_rank() == 0)):
            batch, no_model_batch = batch
            batch, _ = self.eval_pipeline.move_to_device(batch, no_model_batch, self.device)

            # 2. Synchronize the NPU stream (added)
            stream = torch.npu.current_stream()
            stream.synchronize()

            # Get the student model's output
            gen_out = self.generate(
                **batch,
                return_dict_in_generate=True,
                output_scores=True
            )
            full_ids = gen_out.sequences
            gen_logits = gen_out.scores  # NOTE: [b, s, h_p]
            inf_mask = torch.isinf(gen_logits)

            # 5. Synchronize the NPU stream again before collecting data (added)
            stream = torch.npu.current_stream()
            stream.synchronize()

            all_full_ids.append(full_ids)

            input_ids = batch["input_ids"]
            gen_ids = full_ids[:, input_ids.size(1):]
            mask = self.get_mask(full_ids)
            mask = mask[:, input_ids.size(1)-1:input_ids.size(1)+gen_ids.size(1)-1]
            lens = torch.sum(mask, dim=-1)

            teacher_rewards = self.reward_fn(input_ids, gen_ids)["rewards"]  # \log p(y_t | y_{<t}, x)
            _, logprobs = self.compute_logits_and_log_probs(input_ids, gen_ids, inf_mask=inf_mask, base="base")  # \log q_{\theta}(y_t | y_{<t}, x)
            kl = get_rev_kl(teacher_rewards, logprobs, mask)  # reverse-KL score
            kl = kl.sum(-1)

            if self.args.length_norm:
                kl = kl / lens

            all_rev_kl.append(kl)
            all_lens.append(lens)

        # 7. Add a distributed synchronization point (added)
        torch.distributed.barrier()

        all_full_ids = torch.cat(all_full_ids, dim=0)
        all_rev_kl = torch.cat(all_rev_kl, dim=0)
        all_lens = torch.cat(all_lens, dim=0)

        full_ids = all_gather(all_full_ids, dim=1, world_size=self.dp_world_size, group=self.dp_group, op="stack")
        full_ids = full_ids.view(-1, full_ids.size(-1))
        prompt_ids = full_ids[:, :self.eval_pipeline.max_prompt_length]

        all_rev_kl = all_gather(all_rev_kl, dim=0, world_size=self.dp_world_size, group=self.dp_group)
        stats["rev_kl"] = all_rev_kl.mean()

        all_lens = all_gather(all_lens, dim=0, world_size=self.dp_world_size, group=self.dp_group)
        stats["lens"] = all_lens.float().mean()

        # 7. Add a distributed synchronization point (added)
        torch.distributed.barrier()

        response_texts = []
        if get_rank() == 0:
            # Synchronize the stream before decoding (added)
            stream = torch.npu.current_stream()
            stream.synchronize()

            prompt_texts = self.tokenizer.batch_decode(prompt_ids, skip_special_tokens=True)
            response_texts = self.tokenizer.batch_decode(full_ids[:, self.eval_pipeline.max_prompt_length:], skip_special_tokens=True)
            gen_texts = [p + g for p, g in zip(prompt_texts, response_texts)]

            columns = ["prompts"]
            columns_data = [prompt_texts]
            # in online setting, compute the reward for validation
            columns.append("samples")
            if isinstance(gen_texts[0], str):
                columns_data.append(gen_texts)
            else:
                columns_data.append(gen_texts.tolist())
            table.append(list(zip(*columns_data)))

    # 9. Add a global synchronization point (key fix)
    torch.distributed.barrier()

    # Log and display evaluation metrics
    if get_rank() == 0:
        rows = sum(list(map(list, zip(*table))), [])
        # Add metrics/rewards to the table's title
        table_title = f"Evaluation #{self.nth_evaluation}"
        for k, x in stats.items():
            if k.startswith("reward") or k.startswith("metrics"):
                table_title += f" {k}: {significant(x)}"

        rich_table = Table(*columns, title=table_title, show_lines=True)
        for ix in range(min(3, len(rows))):
            rich_table.add_row(*[str(significant(x)) for x in rows[ix]])

        try:
            Console().print(rich_table)
        except:
            pass

    # 10. Synchronize all processes before returning (added)
    torch.distributed.barrier()
    self.nth_evaluation += 1

    return stats, table, response_texts
```

4. Error messages:

**The exact same error has been raised at each of the following lines: `all_lens.append(lens)`, `torch.distributed.barrier()`, `full_ids = all_gather(all_full_ids, dim=1, world_size=self.dp_world_size, group=self.dp_group, op="stack")`, and `prompt_texts = self.tokenizer.batch_decode(prompt_ids, skip_special_tokens=True)`.**

**[Update 03-11]: The error does not occur when the eval dataloader has only a few samples (e.g. 64); increasing the sample count to 200 triggers it.**

Generation Evaluation: 0%| | 0/106 [00:00<?, ?it/s]len(self.train_dataloader), 4 len(self.train_dataloader), 4 len(self.train_dataloader), 4 len(self.train_dataloader), 4 Generation Evaluation: 37%|███████████████████████████████████████████████████████▏ | 39/106 [39:39<1:11:12, 63.76s/it] Generation Evaluation: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 106/106 [1:26:00<00:00, 48.68s/it] Traceback (most recent call last): Traceback (most recent call last): Traceback (most recent call last): File "/home/jovyan/gqy/minillm/./train_minillm.py", line 113, in <module> File "/home/jovyan/gqy/minillm/./train_minillm.py", line 113, in <module> File "/home/jovyan/gqy/minillm/./train_minillm.py", line 113, in <module> Traceback (most recent call last): File "/home/jovyan/gqy/minillm/./train_minillm.py", line 113, in <module> main()main() File "/home/jovyan/gqy/minillm/./train_minillm.py", line 99, in main File "/home/jovyan/gqy/minillm/./train_minillm.py", line 99, in main main() File "/home/jovyan/gqy/minillm/./train_minillm.py", line 99, in main main() File "/home/jovyan/gqy/minillm/./train_minillm.py", line 99, in main train(train( File "/home/jovyan/gqy/minillm/minillm/__init__.py", line 52, in train File "/home/jovyan/gqy/minillm/minillm/__init__.py", line 52, in train train( File "/home/jovyan/gqy/minillm/minillm/__init__.py", line 52, in train train( File "/home/jovyan/gqy/minillm/minillm/__init__.py", line 52, in train trainer.train()trainer.train() File "/home/jovyan/gqy/minillm/minillm/trainer.py", line 244, in train File "/home/jovyan/gqy/minillm/minillm/trainer.py", line 244, in train trainer.train() File "/home/jovyan/gqy/minillm/minillm/trainer.py", line 244, in train trainer.train() File "/home/jovyan/gqy/minillm/minillm/trainer.py", line 244, in train self.global_iter_count = 1self.global_iter_count = 1 File "/home/jovyan/gqy/minillm/minillm/trainer.py", line 432, in evaluate File "/home/jovyan/gqy/minillm/minillm/trainer.py", line 432, in evaluate self.global_iter_count = 1 File "/home/jovyan/gqy/minillm/minillm/trainer.py", line 432, in evaluate self.global_iter_count = 1 File "/home/jovyan/gqy/minillm/minillm/trainer.py", line 432, in evaluate File "/home/jovyan/gqy/minillm/minillm/trainer.py", line 505, in evaluate_ppo File "/home/jovyan/gqy/minillm/minillm/trainer.py", line 505, in evaluate_ppo File "/home/jovyan/gqy/minillm/minillm/trainer.py", line 505, in
evaluate_ppo File "/home/jovyan/gqy/minillm/minillm/trainer.py", line 505, in evaluate_ppo all_lens.append(lens)all_lens.append(lens) File "/home/jovyan/.conda/envs/model_dis/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 47, in wrapper File "/home/jovyan/.conda/envs/model_dis/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 47, in wrapper all_lens.append(lens)all_lens.append(lens) File "/home/jovyan/.conda/envs/model_dis/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 47, in wrapper File "/home/jovyan/.conda/envs/model_dis/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 47, in wrapper return func(*args, **kwargs)return func(*args, **kwargs)return func(*args, **kwargs) File "/home/jovyan/.conda/envs/model_dis/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 3703, in barrier File "/home/jovyan/.conda/envs/model_dis/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 3703, in barrier File "/home/jovyan/.conda/envs/model_dis/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 3703, in barrier return func(*args, **kwargs) File "/home/jovyan/.conda/envs/model_dis/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 3703, in barrier work.wait()work.wait() work.wait()RuntimeError RuntimeErrorRuntimeError: : npuSynchronizeDevice:build/CMakeFiles/torch_npu.dir/compiler_depend.ts:467 NPU function error: AclrtSynchronizeDeviceWithTimeout, error code is 507048 [ERROR] 2025-03-10-04:19:37 (PID:1185975, Device:2, RankID:2) ERR00100 PTA call acl api failed [Error]: The execution of the internal task times out. Rectify the fault based on the error information in the ascend log. EI0002: [PID: 1185975] 2025-03-10-04:19:37.576.321 The wait execution of the Notify register times out. Reason: The Notify register has not received the Notify record from remote rank [unknown].base information: [streamID:[4265454028], taskID[13], tag[AllReduce_100.124.104.201%eth0_60000_0_1741569404003403], AlgType(level 0-1-2):[fullmesh-ring-ring].] task information: [] Possible Cause: 1. An exception occurs during the execution on some NPUs in the cluster. As a result, collective communication operation failed.2. The execution speed on some NPU in the cluster is too slow to complete a communication operation within the timeout interval. (default 1800s, You can set the interval by using HCCL_EXEC_TIMEOUT.)3. The number of training samples of each NPU is inconsistent.4. Packet loss or other connectivity problems occur on the communication link. Solution: 1. If this error is reported on part of these ranks, check other ranks to see whether other errors have been reported earlier.2. If this error is reported for all ranks, check whether the error reporting time is consistent (the maximum difference must not exceed 1800s). If not, locate the cause or adjust the locate the cause or set the HCCL_EXEC_TIMEOUT environment variable to a larger value.3. Check whether the completion queue element (CQE) of the error exists in the plog(grep -rn 'error cqe'). If so, check the network connection status. (For details, see the TLS command and HCCN connectivity check examples.)4. Ensure that the number of training samples of each NPU is consistent. 
For details:https://www.hiascend.com/document TraceBack (most recent call last): The error from device(chipId:2, dieId:0), serial number is 3, hccl fftsplus task timeout occurred during task execution, stream_id:4, sq_id:4, task_id:13, stuck notify num:1, timeout:1836.[FUNC:ProcessStarsHcclFftsPlusTimeoutErrorInfo][FILE:device_error_proc.cc][LINE:1645] The 0 stuck notify wait context info:(context_id=4, notify_id=3).[FUNC:ProcessStarsHcclFftsPlusTimeoutErrorInfo][FILE:device_error_proc.cc][LINE:1652] The wait execution of the Notify register times out. Reason: The Notify register has not received the Notify record from remote rank [unknown].base information: [streamID:[4265454028], taskID[13], tag[AllReduce_100.124.104.201%eth0_60000_0_1741569404003403], AlgType(level 0-1-2):[fullmesh-ring-ring].] task information: [] rtDeviceSynchronizeWithTimeout execute failed, reason=[fftsplus timeout][FUNC:FuncErrorReason][FILE:error_message_manage.cc][LINE:53] wait for compute device to finish failed, runtime result = 507048.[FUNC:ReportCallError][FILE:log_inner.cpp][LINE:161] npuSynchronizeDevice:build/CMakeFiles/torch_npu.dir/compiler_depend.ts:467 NPU function error: AclrtSynchronizeDeviceWithTimeout, error code is 507048 [ERROR] 2025-03-10-04:19:37 (PID:1185974, Device:1, RankID:1) ERR00100 PTA call acl api failed [Error]: The execution of the internal task times out. Rectify the fault based on the error information in the ascend log. EI0002: [PID: 1185974] 2025-03-10-04:19:37.566.086 The wait execution of the Notify register times out. Reason: The Notify register has not received the Notify record from remote rank [unknown].base information: [streamID:[3917014364], taskID[13], tag[AllReduce_100.124.104.201%eth0_60000_0_1741569404003403], AlgType(level 0-1-2):[fullmesh-ring-ring].] task information: [] Possible Cause: 1. An exception occurs during the execution on some NPUs in the cluster. As a result, collective communication operation failed.2. The execution speed on some NPU in the cluster is too slow to complete a communication operation within the timeout interval. (default 1800s, You can set the interval by using HCCL_EXEC_TIMEOUT.)3. The number of training samples of each NPU is inconsistent.4. Packet loss or other connectivity problems occur on the communication link. Solution: 1. If this error is reported on part of these ranks, check other ranks to see whether other errors have been reported earlier.2. If this error is reported for all ranks, check whether the error reporting time is consistent (the maximum difference must not exceed 1800s). If not, locate the cause or adjust the locate the cause or set the HCCL_EXEC_TIMEOUT environment variable to a larger value.3. Check whether the completion queue element (CQE) of the error exists in the plog(grep -rn 'error cqe'). If so, check the network connection status. (For details, see the TLS command and HCCN connectivity check examples.)4. Ensure that the number of training samples of each NPU is consistent. 
For details:https://www.hiascend.com/document TraceBack (most recent call last): The error from device(chipId:1, dieId:0), serial number is 3, hccl fftsplus task timeout occurred during task execution, stream_id:4, sq_id:4, task_id:13, stuck notify num:1, timeout:1836.[FUNC:ProcessStarsHcclFftsPlusTimeoutErrorInfo][FILE:device_error_proc.cc][LINE:1645] The 0 stuck notify wait context info:(context_id=4, notify_id=11).[FUNC:ProcessStarsHcclFftsPlusTimeoutErrorInfo][FILE:device_error_proc.cc][LINE:1652] The wait execution of the Notify register times out. Reason: The Notify register has not received the Notify record from remote rank [unknown].base information: [streamID:[3917014364], taskID[13], tag[AllReduce_100.124.104.201%eth0_60000_0_1741569404003403], AlgType(level 0-1-2):[fullmesh-ring-ring].] task information: [] rtDeviceSynchronizeWithTimeout execute failed, reason=[fftsplus timeout][FUNC:FuncErrorReason][FILE:error_message_manage.cc][LINE:53] wait for compute device to finish failed, runtime result = 507048.[FUNC:ReportCallError][FILE:log_inner.cpp][LINE:161] : work.wait()npuSynchronizeDevice:build/CMakeFiles/torch_npu.dir/compiler_depend.ts:467 NPU function error: AclrtSynchronizeDeviceWithTimeout, error code is 507048 [ERROR] 2025-03-10-04:19:37 (PID:1185976, Device:3, RankID:3) ERR00100 PTA call acl api failed [Error]: The execution of the internal task times out. Rectify the fault based on the error information in the ascend log. EI0002: [PID: 1185976] 2025-03-10-04:19:37.588.583 The wait execution of the Notify register times out. Reason: The Notify register has not received the Notify record from remote rank [unknown].base information: [streamID:[3517536076], taskID[13], tag[AllReduce_100.124.104.201%eth0_60000_0_1741569404003403], AlgType(level 0-1-2):[fullmesh-ring-ring].] task information: [] Possible Cause: 1. An exception occurs during the execution on some NPUs in the cluster. As a result, collective communication operation failed.2. The execution speed on some NPU in the cluster is too slow to complete a communication operation within the timeout interval. (default 1800s, You can set the interval by using HCCL_EXEC_TIMEOUT.)3. The number of training samples of each NPU is inconsistent.4. Packet loss or other connectivity problems occur on the communication link. Solution: 1. If this error is reported on part of these ranks, check other ranks to see whether other errors have been reported earlier.2. If this error is reported for all ranks, check whether the error reporting time is consistent (the maximum difference must not exceed 1800s). If not, locate the cause or adjust the locate the cause or set the HCCL_EXEC_TIMEOUT environment variable to a larger value.3. Check whether the completion queue element (CQE) of the error exists in the plog(grep -rn 'error cqe'). If so, check the network connection status. (For details, see the TLS command and HCCN connectivity check examples.)4. Ensure that the number of training samples of each NPU is consistent. 
For details:https://www.hiascend.com/document TraceBack (most recent call last): The error from device(chipId:3, dieId:0), serial number is 3, hccl fftsplus task timeout occurred during task execution, stream_id:4, sq_id:4, task_id:13, stuck notify num:1, timeout:1836.[FUNC:ProcessStarsHcclFftsPlusTimeoutErrorInfo][FILE:device_error_proc.cc][LINE:1645] The 0 stuck notify wait context info:(context_id=4, notify_id=17).[FUNC:ProcessStarsHcclFftsPlusTimeoutErrorInfo][FILE:device_error_proc.cc][LINE:1652] The wait execution of the Notify register times out. Reason: The Notify register has not received the Notify record from remote rank [unknown].base information: [streamID:[3517536076], taskID[13], tag[AllReduce_100.124.104.201%eth0_60000_0_1741569404003403], AlgType(level 0-1-2):[fullmesh-ring-ring].] task information: [] rtDeviceSynchronizeWithTimeout execute failed, reason=[fftsplus timeout][FUNC:FuncErrorReason][FILE:error_message_manage.cc][LINE:53] wait for compute device to finish failed, runtime result = 507048.[FUNC:ReportCallError][FILE:log_inner.cpp][LINE:161] RuntimeError: npuSynchronizeDevice:build/CMakeFiles/torch_npu.dir/compiler_depend.ts:467 NPU function error: AclrtSynchronizeDeviceWithTimeout, error code is 507048 [ERROR] 2025-03-10-04:19:37 (PID:1185973, Device:0, RankID:0) ERR00100 PTA call acl api failed [Error]: The execution of the internal task times out. Rectify the fault based on the error information in the ascend log. EI0002: [PID: 1185973] 2025-03-10-04:19:37.591.361 The wait execution of the Notify register times out. Reason: The Notify register has not received the Notify record from remote rank [unknown].base information: [streamID:[3366440588], taskID[13], tag[AllReduce_100.124.104.201%eth0_60000_0_1741569404003403], AlgType(level 0-1-2):[fullmesh-ring-ring].] task information: [] Possible Cause: 1. An exception occurs during the execution on some NPUs in the cluster. As a result, collective communication operation failed.2. The execution speed on some NPU in the cluster is too slow to complete a communication operation within the timeout interval. (default 1800s, You can set the interval by using HCCL_EXEC_TIMEOUT.)3. The number of training samples of each NPU is inconsistent.4. Packet loss or other connectivity problems occur on the communication link. Solution: 1. If this error is reported on part of these ranks, check other ranks to see whether other errors have been reported earlier.2. If this error is reported for all ranks, check whether the error reporting time is consistent (the maximum difference must not exceed 1800s). If not, locate the cause or adjust the locate the cause or set the HCCL_EXEC_TIMEOUT environment variable to a larger value.3. Check whether the completion queue element (CQE) of the error exists in the plog(grep -rn 'error cqe'). If so, check the network connection status. (For details, see the TLS command and HCCN connectivity check examples.)4. Ensure that the number of training samples of each NPU is consistent. 
For details:https://www.hiascend.com/document TraceBack (most recent call last): The error from device(chipId:0, dieId:0), serial number is 3, hccl fftsplus task timeout occurred during task execution, stream_id:4, sq_id:4, task_id:13, stuck notify num:4, timeout:1836.[FUNC:ProcessStarsHcclFftsPlusTimeoutErrorInfo][FILE:device_error_proc.cc][LINE:1645] The 0 stuck notify wait context info:(context_id=9, notify_id=14).[FUNC:ProcessStarsHcclFftsPlusTimeoutErrorInfo][FILE:device_error_proc.cc][LINE:1652] The 1 stuck notify wait context info:(context_id=11, notify_id=24).[FUNC:ProcessStarsHcclFftsPlusTimeoutErrorInfo][FILE:device_error_proc.cc][LINE:1652] The 2 stuck notify wait context info:(context_id=13, notify_id=20).[FUNC:ProcessStarsHcclFftsPlusTimeoutErrorInfo][FILE:device_error_proc.cc][LINE:1652] The 3 stuck notify wait context info:(context_id=15, notify_id=18).[FUNC:ProcessStarsHcclFftsPlusTimeoutErrorInfo][FILE:device_error_proc.cc][LINE:1652] The wait execution of the Notify register times out. Reason: The Notify register has not received the Notify record from remote rank [unknown].base information: [streamID:[3366440588], taskID[13], tag[AllReduce_100.124.104.201%eth0_60000_0_1741569404003403], AlgType(level 0-1-2):[fullmesh-ring-ring].] task information: [] rtDeviceSynchronizeWithTimeout execute failed, reason=[fftsplus timeout][FUNC:FuncErrorReason][FILE:error_message_manage.cc][LINE:53] wait for compute device to finish failed, runtime result = 507048.[FUNC:ReportCallError][FILE:log_inner.cpp][LINE:161] [W compiler_depend.ts:487] Warning: NPU warning, error code is 507048[Error]: [Error]: The execution of the internal task times out. Rectify the fault based on the error information in the ascend log. EH9999: Inner Error! rtDeviceSynchronizeWithTimeout execute failed, reason=[fftsplus timeout][FUNC:FuncErrorReason][FILE:error_message_manage.cc][LINE:53] EH9999: [PID: 1185973] 2025-03-10-04:19:38.591.420 wait for compute device to finish failed, runtime result = 507048.[FUNC:ReportCallError][FILE:log_inner.cpp][LINE:161] TraceBack (most recent call last): (function npuSynchronizeUsedDevices) [W compiler_depend.ts:122] Warning: NPU warning, error code is 507048[Error]: [Error]: The execution of the internal task times out. Rectify the fault based on the error information in the ascend log. EH9999: Inner Error! rtDeviceSynchronizeWithTimeout execute failed, reason=[fftsplus timeout][FUNC:FuncErrorReason][FILE:error_message_manage.cc][LINE:53] EH9999: [PID: 1185973] 2025-03-10-04:19:58.399.507 wait for compute device to finish failed, runtime result = 507048.[FUNC:ReportCallError][FILE:log_inner.cpp][LINE:161] TraceBack (most recent call last): (function empty_cache) [W compiler_depend.ts:469] Warning: NPU warning, error code is 507048[Error]: [Error]: The execution of the internal task times out. Rectify the fault based on the error information in the ascend log. EH9999: Inner Error! rtDeviceSynchronizeWithTimeout execute failed, reason=[fftsplus timeout][FUNC:FuncErrorReason][FILE:error_message_manage.cc][LINE:53] EH9999: [PID: 1185973] 2025-03-10-04:19:58.943.347 wait for compute device to finish failed, runtime result = 507048.[FUNC:ReportCallError][FILE:log_inner.cpp][LINE:161] TraceBack (most recent call last): (function npuSynchronizeDevice) [W compiler_depend.ts:122] Warning: NPU warning, error code is 507048[Error]: [Error]: The execution of the internal task times out. Rectify the fault based on the error information in the ascend log. EH9999: Inner Error! 
rtDeviceSynchronizeWithTimeout execute failed, reason=[fftsplus timeout][FUNC:FuncErrorReason][FILE:error_message_manage.cc][LINE:53] EH9999: [PID: 1185973] 2025-03-10-04:19:59.775.426 wait for compute device to finish failed, runtime result = 507048.[FUNC:ReportCallError][FILE:log_inner.cpp][LINE:161] TraceBack (most recent call last): (function empty_cache) [W compiler_depend.ts:469] Warning: NPU warning, error code is 507048[Error]: [Error]: The execution of the internal task times out. Rectify the fault based on the error information in the ascend log. EH9999: Inner Error! rtDeviceSynchronizeWithTimeout execute failed, reason=[fftsplus timeout][FUNC:FuncErrorReason][FILE:error_message_manage.cc][LINE:53] EH9999: [PID: 1185973] 2025-03-10-04:20:00.287.260 wait for compute device to finish failed, runtime result = 507048.[FUNC:ReportCallError][FILE:log_inner.cpp][LINE:161] TraceBack (most recent call last): (function npuSynchronizeDevice) [W compiler_depend.ts:122] Warning: NPU warning, error code is 507048[Error]: [Error]: The execution of the internal task times out. Rectify the fault based on the error information in the ascend log. EH9999: Inner Error! rtDeviceSynchronizeWithTimeout execute failed, reason=[fftsplus timeout][FUNC:FuncErrorReason][FILE:error_message_manage.cc][LINE:53] EH9999: [PID: 1185973] 2025-03-10-04:20:00.803.311 wait for compute device to finish failed, runtime result = 507048.[FUNC:ReportCallError][FILE:log_inner.cpp][LINE:161] TraceBack (most recent call last): (function empty_cache) [W compiler_depend.ts:469] Warning: NPU warning, error code is 507048[Error]: [Error]: The execution of the internal task times out. Rectify the fault based on the error information in the ascend log. EH9999: Inner Error! rtDeviceSynchronizeWithTimeout execute failed, reason=[fftsplus timeout][FUNC:FuncErrorReason][FILE:error_message_manage.cc][LINE:53] EH9999: [PID: 1185973] 2025-03-10-04:20:01.311.298 wait for compute device to finish failed, runtime result = 507048.[FUNC:ReportCallError][FILE:log_inner.cpp][LINE:161] TraceBack (most recent call last): (function npuSynchronizeDevice) [W compiler_depend.ts:122] Warning: NPU warning, error code is 507048[Error]: [Error]: The execution of the internal task times out. Rectify the fault based on the error information in the ascend log. EH9999: Inner Error! rtDeviceSynchronizeWithTimeout execute failed, reason=[fftsplus timeout][FUNC:FuncErrorReason][FILE:error_message_manage.cc][LINE:53] EH9999: [PID: 1185973] 2025-03-10-04:20:01.823.326 wait for compute device to finish failed, runtime result = 507048.[FUNC:ReportCallError][FILE:log_inner.cpp][LINE:161] TraceBack (most recent call last): (function empty_cache) [W compiler_depend.ts:469] Warning: NPU warning, error code is 507048[Error]: [Error]: The execution of the internal task times out. Rectify the fault based on the error information in the ascend log. EH9999: Inner Error! rtDeviceSynchronizeWithTimeout execute failed, reason=[fftsplus timeout][FUNC:FuncErrorReason][FILE:error_message_manage.cc][LINE:53] EH9999: [PID: 1185973] 2025-03-10-04:20:02.335.348 wait for compute device to finish failed, runtime result = 507048.[FUNC:ReportCallError][FILE:log_inner.cpp][LINE:161] TraceBack (most recent call last): (function npuSynchronizeDevice) [W compiler_depend.ts:122] Warning: NPU warning, error code is 507048[Error]: [Error]: The execution of the internal task times out. Rectify the fault based on the error information in the ascend log. EH9999: Inner Error! 
rtDeviceSynchronizeWithTimeout execute failed, reason=[fftsplus timeout][FUNC:FuncErrorReason][FILE:error_message_manage.cc][LINE:53] EH9999: [PID: 1185973] 2025-03-10-04:20:02.851.347 wait for compute device to finish failed, runtime result = 507048.[FUNC:ReportCallError][FILE:log_inner.cpp][LINE:161] TraceBack (most recent call last): (function empty_cache) [W compiler_depend.ts:469] Warning: NPU warning, error code is 507048[Error]: [Error]: The execution of the internal task times out. Rectify the fault based on the error information in the ascend log. EH9999: Inner Error! rtDeviceSynchronizeWithTimeout execute failed, reason=[fftsplus timeout][FUNC:FuncErrorReason][FILE:error_message_manage.cc][LINE:53] EH9999: [PID: 1185973] 2025-03-10-04:20:03.359.328 wait for compute device to finish failed, runtime result = 507048.[FUNC:ReportCallError][FILE:log_inner.cpp][LINE:161] TraceBack (most recent call last): (function npuSynchronizeDevice) [W compiler_depend.ts:122] Warning: NPU warning, error code is 507048[Error]: [Error]: The execution of the internal task times out. Rectify the fault based on the error information in the ascend log. EH9999: Inner Error! rtDeviceSynchronizeWithTimeout execute failed, reason=[fftsplus timeout][FUNC:FuncErrorReason][FILE:error_message_manage.cc][LINE:53] EH9999: [PID: 1185973] 2025-03-10-04:20:03.875.310 wait for compute device to finish failed, runtime result = 507048.[FUNC:ReportCallError][FILE:log_inner.cpp][LINE:161] TraceBack (most recent call last): (function empty_cache) [W compiler_depend.ts:469] Warning: NPU warning, error code is 507048[Error]: [Error]: The execution of the internal task times out. Rectify the fault based on the error information in the ascend log. EH9999: Inner Error! rtDeviceSynchronizeWithTimeout execute failed, reason=[fftsplus timeout][FUNC:FuncErrorReason][FILE:error_message_manage.cc][LINE:53] EH9999: [PID: 1185973] 2025-03-10-04:20:04.383.291 wait for compute device to finish failed, runtime result = 507048.[FUNC:ReportCallError][FILE:log_inner.cpp][LINE:161] TraceBack (most recent call last): (function npuSynchronizeDevice) [W compiler_depend.ts:122] Warning: NPU warning, error code is 507048[Error]: [Error]: The execution of the internal task times out. Rectify the fault based on the error information in the ascend log. EH9999: Inner Error! rtDeviceSynchronizeWithTimeout execute failed, reason=[fftsplus timeout][FUNC:FuncErrorReason][FILE:error_message_manage.cc][LINE:53] EH9999: [PID: 1185973] 2025-03-10-04:20:04.895.384 wait for compute device to finish failed, runtime result = 507048.[FUNC:ReportCallError][FILE:log_inner.cpp][LINE:161] TraceBack (most recent call last): (function empty_cache) [W compiler_depend.ts:469] Warning: NPU warning, error code is 507048[Error]: [Error]: The execution of the internal task times out. Rectify the fault based on the error information in the ascend log. EH9999: Inner Error! rtDeviceSynchronizeWithTimeout execute failed, reason=[fftsplus timeout][FUNC:FuncErrorReason][FILE:error_message_manage.cc][LINE:53] EH9999: [PID: 1185973] 2025-03-10-04:20:05.407.455 wait for compute device to finish failed, runtime result = 507048.[FUNC:ReportCallError][FILE:log_inner.cpp][LINE:161] TraceBack (most recent call last): (function npuSynchronizeDevice) [W compiler_depend.ts:122] Warning: NPU warning, error code is 507048[Error]: [Error]: The execution of the internal task times out. Rectify the fault based on the error information in the ascend log. EH9999: Inner Error! 
rtDeviceSynchronizeWithTimeout execute failed, reason=[fftsplus timeout][FUNC:FuncErrorReason][FILE:error_message_manage.cc][LINE:53] EH9999: [PID: 1185973] 2025-03-10-04:20:05.923.406 wait for compute device to finish failed, runtime result = 507048.[FUNC:ReportCallError][FILE:log_inner.cpp][LINE:161] TraceBack (most recent call last): (function empty_cache) [W compiler_depend.ts:469] Warning: NPU warning, error code is 507048[Error]: [Error]: The execution of the internal task times out. Rectify the fault based on the error information in the ascend log. EH9999: Inner Error! rtDeviceSynchronizeWithTimeout execute failed, reason=[fftsplus timeout][FUNC:FuncErrorReason][FILE:error_message_manage.cc][LINE:53] EH9999: [PID: 1185973] 2025-03-10-04:20:06.467.417 wait for compute device to finish failed, runtime result = 507048.[FUNC:ReportCallError][FILE:log_inner.cpp][LINE:161] TraceBack (most recent call last): (function npuSynchronizeDevice) [2025-03-10 04:20:20,376] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 1185977 closing signal SIGTERM [2025-03-10 04:20:20,376] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 1185978 closing signal SIGTERM [2025-03-10 04:20:20,376] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 1185979 closing signal SIGTERM [2025-03-10 04:20:20,376] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 1185980 closing signal SIGTERM [2025-03-10 04:20:27,429] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: 1) local_rank: 0 (pid: 1185973) of binary: /home/jovyan/.conda/envs/model_dis/bin/python Traceback (most recent call last): File "/home/jovyan/.conda/envs/model_dis/bin/torchrun", line 8, in <module> sys.exit(main()) File "/home/jovyan/.conda/envs/model_dis/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper return f(*args, **kwargs) File "/home/jovyan/.conda/envs/model_dis/lib/python3.10/site-packages/torch/distributed/run.py", line 806, in main run(args) File "/home/jovyan/.conda/envs/model_dis/lib/python3.10/site-packages/torch/distributed/run.py", line 797, in run elastic_launch( File "/home/jovyan/.conda/envs/model_dis/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 134, in __call__ return launch_agent(self._config, self._entrypoint, list(args)) File "/home/jovyan/.conda/envs/model_dis/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 264, in launch_agent raise ChildFailedError( torch.distributed.elastic.multiprocessing.errors.ChildFailedError: ./train_minillm.py FAILED Failures: [1]: time : 2025-03-10_04:20:20 host : nb-546241047869523123-0 rank : 1 (local_rank: 1) exitcode : 1 (pid: 1185974) error_file: <N/A> traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html [2]: time : 2025-03-10_04:20:20 host : nb-546241047869523123-0 rank : 2 (local_rank: 2) exitcode : 1 (pid: 1185975) error_file: <N/A> traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html [3]: time : 2025-03-10_04:20:20 host : nb-546241047869523123-0 rank : 3 (local_rank: 3) exitcode : 1 (pid: 1185976) error_file: <N/A> traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html Root Cause (first observed failure): [0]: time : 2025-03-10_04:20:20 host : nb-546241047869523123-0 rank : 0 (local_rank: 0) exitcode : 1 (pid: 1185973) error_file: <N/A> traceback : To enable traceback see: 
https://pytorch.org/docs/stable/elastic/errors.html

5. Log files:
The full log is too large to paste here; see the link: [log file link](https://pan.quark.cn/s/2d01e19b2bea)
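A sanity check that may help narrow this down, given the 03-11 update and possible cause 3 in the HCCL message ("the number of training samples of each NPU is inconsistent"): confirm that every data-parallel rank iterates the same number of eval batches before the collectives in `evaluate_ppo` run. The snippet below is only a rough diagnostic sketch, assuming the same `eval_dataloader` and `dp_group` attributes as the code in section 3; the helper name `check_equal_eval_batches` is made up for illustration.

```
import torch.distributed as dist


def check_equal_eval_batches(eval_dataloader, dp_group=None):
    """Gather each data-parallel rank's eval batch count and fail fast if they differ."""
    world_size = dist.get_world_size(group=dp_group)
    counts = [None] * world_size
    # all_gather_object requires the default device to already be set when the
    # backend is hccl/nccl (e.g. torch.npu.set_device(local_rank) at startup).
    dist.all_gather_object(counts, len(eval_dataloader), group=dp_group)
    if len(set(counts)) != 1:
        raise RuntimeError(
            f"Uneven eval batch counts across ranks: {counts}. "
            "Pad the eval set or drop the last incomplete shard so every rank "
            "runs the same number of iterations before the collective calls."
        )
    return counts[0]


# Hypothetical usage at the top of evaluate_ppo:
#   check_equal_eval_batches(self.eval_dataloader, dp_group=self.dp_group)
```

If the gathered counts differ, padding the eval set to a multiple of the data-parallel world size (or dropping the last incomplete shard) keeps the ranks in lockstep at every barrier and all_gather. The HCCL message also suggests raising `HCCL_EXEC_TIMEOUT` (in addition to `HCCL_CONNECT_TIMEOUT`) if individual eval iterations legitimately run longer than the default 1800 s.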
https://pytorch.org/docs/stable/elastic/errors.html 五、日志信息: 日志太大,点击链接查看 [日志文件链接](https://pan.quark.cn/s/2d01e19b2bea)
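For anyone hitting the same failure: the "stuck notify" entries together with timeout:1836 indicate that an AllReduce was still waiting for a peer rank's Notify record when the HCCL execution timeout expired (1836 s matches the default of HCCL_EXEC_TIMEOUT), i.e. at least one rank never reached the matching collective within that window. Below is a minimal debugging sketch, not a confirmed fix for this issue; the concrete timeout values are assumptions, and ASCEND_LAUNCH_BLOCKING=1 is only there to make the failing operator surface at its real call site.

```python
# Debugging sketch only; the values are illustrative assumptions, not a verified fix.
# HCCL reads these variables at initialization, so set them before the process group
# is created (top of the training script, or export them in the launch script).
import os

os.environ.setdefault("HCCL_EXEC_TIMEOUT", "7200")     # collective execution (notify wait) timeout, default 1836 s
os.environ.setdefault("HCCL_CONNECT_TIMEOUT", "6000")  # link build-up timeout
os.environ.setdefault("ASCEND_LAUNCH_BLOCKING", "1")   # synchronous launch: errors point at the real op, at some speed cost

import torch
import torch_npu  # noqa: F401  -- registers the NPU device and the HCCL backend
import torch.distributed as dist

dist.init_process_group(backend="hccl")
```

If raising HCCL_EXEC_TIMEOUT only postpones the crash, the divergence is usually in the evaluation loop itself (ranks issuing different numbers of collectives, or one rank generating much more slowly); logging the batch index on every rank right before each collective call helps narrow down which rank stops posting its Notify record.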