Ascend / pytorch
Multi-card training error
DONE
#I8NY3R
Training issue
liuhc33
Created on 2023-12-13 10:36
Hi, my Python version is 3.8.10, my CANN version is 7.0RC1, and my torch version is 1.11.0. When training my own model, single-card training works fine, but multi-card training (4 cards) fails with the following:
```
Traceback (most recent call last):
  File "bin/train_ddp.py", line 122, in <module>
    main()
  File "bin/train_ddp.py", line 97, in main
    task.to_distributed(args.nodes, args.node_id, args.gpus, args.gpu)
  File "/home/liuhangchen/Workspace/eteh-v2-release-offline_export_transfDDP_202311/env_train/eteh/tools/interface/pytorch_backend/th_task.py", line 59, in to_distributed
    self.trainer.model = DDP(self.model, device_ids=[
  File "/usr/local/python3.8.10/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 641, in __init__
    dist._verify_params_across_processes(self.process_group, parameters)
RuntimeError: [ERROR] HCCL error in: /usr1/01/workspace/j_ctViaEnN/pytorch/torch_npu/csrc/distributed/HCCLUtils.hpp:80.
EJ0001: Failed to initialize the HCCP process. Reason: Maybe the last training process is running.
Solution: Wait for 10s after killing the last training process and try again.
TraceBack (most recent call last):
  tsd client wait response fail, device response code[1]. unknown device error.[FUNC:WaitRsp][FILE:process_mode_manager.cpp][LINE:270]

[the identical EJ0001 traceback is printed twice more by the other worker processes]

/usr/local/python3.8.10/lib/python3.8/tempfile.py:818: ResourceWarning: Implicitly cleaning up <TemporaryDirectory '/tmp/tmplttd8atn'>
  _warnings.warn(warn_message, ResourceWarning)
/usr/local/python3.8.10/lib/python3.8/tempfile.py:818: ResourceWarning: Implicitly cleaning up <TemporaryDirectory '/tmp/tmpiud_t2y7'>
  _warnings.warn(warn_message, ResourceWarning)
/usr/local/python3.8.10/lib/python3.8/tempfile.py:818: ResourceWarning: Implicitly cleaning up <TemporaryDirectory '/tmp/tmph9uzpg9i'>
  _warnings.warn(warn_message, ResourceWarning)
Traceback (most recent call last):
  File "bin/train_ddp.py", line 122, in <module>
    main()
  File "bin/train_ddp.py", line 97, in main
    task.to_distributed(args.nodes, args.node_id, args.gpus, args.gpu)
  File "/home/liuhangchen/Workspace/eteh-v2-release-offline_export_transfDDP_202311/env_train/eteh/tools/interface/pytorch_backend/th_task.py", line 59, in to_distributed
    self.trainer.model = DDP(self.model, device_ids=[
  File "/usr/local/python3.8.10/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 641, in __init__
    dist._verify_params_across_processes(self.process_group, parameters)
RuntimeError: [ERROR] HCCL error in: /usr1/01/workspace/j_ctViaEnN/pytorch/torch_npu/csrc/distributed/HCCLUtils.hpp:80.
EI0006: Getting socket times out. Reason: Remote Rank did not send the data in time. Please check the reason for the rank being stuck
Solution: 1. Check the rank service processes with other errors or no errors in the cluster.
2. If this error is reported for all NPUs, check whether the time difference between the earliest and latest errors is greater than the connect timeout interval (120s by default). If so, adjust the timeout interval by using the HCCL_CONNECT_TIMEOUT environment variable.
3. Check the connectivity of the communication link between nodes. (For details, see the TLS command and HCCN connectivity check examples.). For details: https://www.hiascend.com/document
/usr/local/python3.8.10/lib/python3.8/tempfile.py:818: ResourceWarning: Implicitly cleaning up <TemporaryDirectory '/tmp/tmp77wv06bj'>
  _warnings.warn(warn_message, ResourceWarning)
./run.offline.transDDP.sh: line 73: 74074 Segmentation fault      python3 bin/train_ddp.py -train_config ${train_config} -data_config ${data_conf} -train_name HKUST -task_file bin.taskegs.pytorch_backend.task_ctc_att -task_name CtcAttTask -exp_dir ${exp_dir} -num_epochs ${epochs} -seed 100 --split --nodes $num_nodes --node_id $node_rank --gpus $num_gpus --gpu $gpu_id --dist_backend $dist_backend --init_method $init_method
```
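The two HCCL errors above come with their own suggested remedies: EJ0001 points to a stale training process still holding the NPUs, and EI0006 points to the HCCL connect timeout (120 s by default). Below is a minimal sketch along those lines, assuming a torch_npu setup like the reporter's. The script name `train_ddp.py` is taken from the log; the timeout value, the function names, and the `to_distributed` rewrite are illustrative, not the repository's actual code.
```python
import os
import subprocess
import time

import torch
import torch_npu  # Ascend NPU plugin; registers the "hccl" backend
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def clean_up_before_relaunch() -> None:
    # EJ0001: "Maybe the last training process is running" -- kill any
    # stale workers and wait ~10 s, exactly as the error message advises.
    # "train_ddp.py" comes from the reporter's log.
    subprocess.run(["pkill", "-f", "train_ddp.py"], check=False)
    time.sleep(10)
    # EI0006: raise the HCCL connect timeout (default 120 s) so slower
    # ranks have time to join; the value here is an illustrative guess,
    # and it must be exported before the worker processes start.
    os.environ["HCCL_CONNECT_TIMEOUT"] = "1200"


def to_distributed(model: torch.nn.Module, rank: int, world_size: int) -> DDP:
    # Bind this process to its own NPU *before* creating the process
    # group, so the four ranks do not contend for the same device.
    torch.npu.set_device(rank)
    # Assumes env:// rendezvous, i.e. MASTER_ADDR/MASTER_PORT are set.
    dist.init_process_group("hccl", rank=rank, world_size=world_size)
    return DDP(model.to(f"npu:{rank}"), device_ids=[rank])
```
If EJ0001 persists even with no stale processes, checking device health with `npu-smi info` before relaunching is a common next step.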
Comments (9)
Status: DONE
Assignee: not set
Labels: not set
Project: none
Milestone: none
Pull Requests: none linked (merging a linked pull request may close this issue)
Branches: none linked
Start date - due date: not set
Pinned: no
Priority: unspecified
Estimated duration (hours): not set
Participants (2)
Language: Python
Clone (HTTPS): https://gitee.com/ascend/pytorch.git
Clone (SSH): git@gitee.com:ascend/pytorch.git