Ascend / pytorch
Issue #IBE61G: Task launch fails during single-node multi-card training
Status: DONE
Label: 训练问题 (training issue)
Author: 洛小颜
Created: 2024-12-27 14:43
I. Problem (with error-log context):

When launching single-node multi-card training, the task fails to start:

Model creation failed: InnerRun:torch_npu/csrc/framework/OpParamMaker.cpp:197 NPU error, error code is 207001
[ERROR] 2024-12-24-11:59:10 (PID:307680, Device:4, RankID:4) ERR01100 OPS call acl api failed
[Error]: Failed to apply for memory. Check the remaining storage space in the hardware environment.
EE9999: Inner Error! pool is nullptr or id less than zero, now id=-1.[FUNC:GetItemById][FILE:pool.cc][LINE:251]
EE9999 load args failed, retCode=0x711000e.[FUNC:Load][FILE:arg_loader.cc][LINE:317]
TraceBack (most recent call last):
Failed to load args and l2ctrl, stream_id=15, retCode=0x711000e.[FUNC:LaunchKernel][FILE:context.cc][LINE:700]
rtKernelLaunchWithFlagV2 execute failed, reason=[alloc memory error][FUNC:FuncErrorReason][FILE:error_message_manage.cc][LINE:50]
Call rtKernelLaunchWithFlagV2(stub_func_, block_dim_, &args_ex_, sm_desc, stream, 0U, &cfg_) fail, ret: 0x32899[FUNC:DoLaunchKernelWithArgsEx][FILE:op_task.cc][LINE:791]
Call static_cast<rtError_t>(DoLaunchKernelWithArgsEx(stream)) fail, ret: 0x32899[FUNC:DoLaunchKernel][FILE:op_task.cc][LINE:781]
invoke rtKernelLaunch failed, ret = 207001, task = 0/1_-1_0_trans_TransData_1_tvmbin[FUNC:LaunchKernel][FILE:op_task.cc][LINE:394]
[Exec][Op]Execute op failed. op type = Identity, ge result = 207001[FUNC:ReportCallError][FILE:log_inner.cpp][LINE:161]

II. Software versions:

-- CANN version: 7.0.0.beta1
-- PyTorch version: 2.1.0
-- Python version: Python 3.9
-- OS version: Ubuntu 20.04 LTS

III. Test steps:

torchrun --nproc-per-node=8 main_train.py
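For reference, a minimal sketch of the initialization pattern that the log output in section IV implies. main_train.py itself is not attached to the report, so the function and variable names and the exact ordering here are assumptions; the sketch only shows the standard torchrun + HCCL setup that would produce log lines like the ones below:

import os

import torch
import torch_npu
from torch_npu.contrib import transfer_to_npu  # noqa: F401 -- installs the CUDA->NPU shims described in the ImportWarning below
import torch.distributed as dist


def init_distributed():
    # Matches the "before/after: torch_npu.distributed.is_hccl_available()" lines in the log.
    print("before: torch_npu.distributed.is_hccl_available():",
          torch_npu.distributed.is_hccl_available())
    local_rank = int(os.environ["LOCAL_RANK"])   # set by torchrun
    world_size = int(os.environ["WORLD_SIZE"])   # set by torchrun
    # Bind this process to its NPU before creating the process group, so that
    # collectives and model allocations land on the intended device.
    torch.npu.set_device(local_rank)
    dist.init_process_group(backend="hccl",
                            rank=int(os.environ["RANK"]),
                            world_size=world_size)
    device = torch.device(f"npu:{local_rank}")
    print(f"is_distributed: True, local_rank: {local_rank}, "
          f"world_size: {world_size}, device: {device}, "
          f"device_count: {torch.npu.device_count()}")
    return device

With transfer_to_npu imported, code written against torch.cuda would run unchanged on NPU, which is why the warning below lists the patched factory functions.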
IV. Log information:

before: torch_npu.distributed.is_hccl_available(): True
/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/site-packages/torch_npu/contrib/transfer_to_npu.py:209: ImportWarning:
*************************************************************************************************************
The torch.Tensor.cuda and torch.nn.Module.cuda are replaced with torch.Tensor.npu and torch.nn.Module.npu now..
The torch.cuda.DoubleTensor is replaced with torch.npu.FloatTensor cause the double type is not supported now..
The backend in torch.distributed.init_process_group set to hccl now..
The torch.cuda.* and torch.cuda.amp.* are replaced with torch.npu.* and torch.npu.amp.* now..
The device parameters have been replaced with npu in the function below:
torch.logspace, torch.randint, torch.hann_window, torch.rand, torch.full_like, torch.ones_like, torch.rand_like, torch.randperm, torch.arange, torch.frombuffer, torch.normal, torch._empty_per_channel_affine_quantized, torch.empty_strided, torch.empty_like, torch.scalar_tensor, torch.tril_indices, torch.bartlett_window, torch.ones, torch.sparse_coo_tensor, torch.randn, torch.kaiser_window, torch.tensor, torch.triu_indices, torch.as_tensor, torch.zeros, torch.randint_like, torch.full, torch.eye, torch._sparse_csr_tensor_unsafe, torch.empty, torch._sparse_coo_tensor_unsafe, torch.blackman_window, torch.zeros_like, torch.range, torch.sparse_csr_tensor, torch.randn_like, torch.from_file, torch._cudnn_init_dropout_state, torch._empty_affine_quantized, torch.linspace, torch.hamming_window, torch.empty_quantized, torch._pin_memory, torch.autocast, torch.load, torch.Generator, torch.Tensor.new_empty, torch.Tensor.new_empty_strided, torch.Tensor.new_full, torch.Tensor.new_ones, torch.Tensor.new_tensor, torch.Tensor.new_zeros, torch.Tensor.to, torch.nn.Module.to, torch.nn.Module.to_empty
*************************************************************************************************************
warnings.warn(msg, ImportWarning)
before: torch_npu.distributed.is_hccl_available(): True   (repeated once by each of the remaining 7 ranks)

/usr/local/Ascend/ascend-toolkit/7.0.0.alpha003/python/site-packages/tbe/tvm/contrib/ccec.py:776: DeprecationWarning: invalid escape sequence \L
  if not dirpath.find("AppData\Local\Temp"):
(printed once per rank, 8 times in total)
/usr/local/Ascend/ascend-toolkit/latest/python/site-packages/tbe/dsl/classifier/transdata/transdata_classifier.py:222: DeprecationWarning: invalid escape sequence \B
(printed once per rank, 8 times in total)
/usr/local/Ascend/ascend-toolkit/latest/python/site-packages/tbe/dsl/unify_schedule/vector/transdata/common/graph/transdata_graph_info.py:143: DeprecationWarning: invalid escape sequence \c
(printed once per rank, 8 times in total)

after: torch_npu.distributed.is_hccl_available(): True
is_distributed: True, local_rank: N, world_size: 8, device: npu:N, device_count: 8
NPU memory status:
  Allocated: 0.00 MB
  Cached: 0.00 MB
  Max memory: 0.00 MB
(the block above is printed by every rank N = 0..7; the per-rank outputs are interleaved in the raw log)

Using distributed training on NPU-0,1,2,3,4,5,6,7, 8 processes in total

Rank 0 additionally prints the run configuration:
==================================================
main_train: 2024-12-24 11:59:09.291022
learning_rate: 0.0001
optim: lamb
weight_decay: 0
clamp_value: 10
use_clip_grad_norm: True
max_norm: 1.0
use_l1: False
BATCH_SIZE: 640
num_workers: 32
pred_dim: 1
years: 2
ASCEND_LAUNCH_BLOCKING = 1
OMP_NUM_THREADS = 1
HCCL_CONNECT_TIMEOUT = 300

After model creation the other ranks report:
NPU memory status:
  Allocated: 265.96 MB
  Cached: 274.00 MB
  Max memory: 265.96 MB
(printed seven times in total, interleaved around the error below)

Rank 4 then fails with the same 207001 error block quoted in section I ("Model creation failed: ... Failed to apply for memory. Check the remaining storage space in the hardware environment. ... [Exec][Op]Execute op failed. op type = Identity, ge result = 207001").

[2024-12-24 11:59:20,846] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 307676 closing signal SIGTERM
(identical SIGTERM warnings follow for processes 307677, 307678, 307679, 307681, 307682 and 307683)
[2024-12-24 11:59:50,847] torch.distributed.elastic.multiprocessing.api: [WARNING] Unable to shutdown process 307676 via Signals.SIGTERM, forcefully exiting via Signals.SIGKILL
[2024-12-24 11:59:51,044] torch.distributed.elastic.multiprocessing.api: [WARNING] Unable to shutdown process 307677 via Signals.SIGTERM, forcefully exiting via Signals.SIGKILL

While being killed, the multiprocessing pool workers (ForkServerPoolWorker-2, -4, -7, -8 and -9) each crash with a BrokenPipeError; their tracebacks are interleaved in the raw log and all follow the same pattern:

Traceback (most recent call last):
  File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/multiprocessing/pool.py", line 131, in worker
    put((job, i, result))
  File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/multiprocessing/queues.py", line 377, in put
    self._writer.send_bytes(obj)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/multiprocessing/connection.py", line 200, in send_bytes
    self._send_bytes(m[offset:offset + size])
  File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/multiprocessing/connection.py", line 411, in _send_bytes
    self._send(header + buf)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/multiprocessing/connection.py", line 368, in _send
    n = write(self._handle, buf)
BrokenPipeError: [Errno 32] Broken pipe

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/multiprocessing/pool.py", line 136, in worker
    put((job, i, (False, wrapped)))
  File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/multiprocessing/queues.py", line 377, in put
    self._writer.send_bytes(obj)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/multiprocessing/connection.py", line 200, in send_bytes
    self._send_bytes(m[offset:offset + size])
  File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/multiprocessing/connection.py", line 411, in _send_bytes
    self._send(header + buf)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/multiprocessing/connection.py", line 368, in _send
    n = write(self._handle, buf)
BrokenPipeError: [Errno 32] Broken pipe
...
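The "NPU memory status" blocks above are presumably produced by a small helper in main_train.py. A minimal sketch of such a helper, assuming torch_npu mirrors the torch.cuda memory API under torch.npu (which is what the transfer_to_npu warning in section IV states):

import torch
import torch_npu  # noqa: F401 -- registers the torch.npu namespace


def print_npu_memory_status(device=None):
    # Hypothetical reconstruction; the actual helper is not in the report.
    mib = 1024 ** 2
    print("NPU memory status:")
    print(f"  Allocated:  {torch.npu.memory_allocated(device) / mib:.2f} MB")
    print(f"  Cached:     {torch.npu.memory_reserved(device) / mib:.2f} MB")
    print(f"  Max memory: {torch.npu.max_memory_allocated(device) / mib:.2f} MB")

Note that such a helper only sees memory allocated through this process's caching allocator; the failing rank shows only ~266 MB allocated when "Failed to apply for memory" fires, so the logs alone do not show how much device memory other processes were holding.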
Comments (3)