Ascend / pytorch
Multi-node, multi-NPU issue: RuntimeError: [ERROR] HCCL error
DONE
#I9H56E
Training issue
yourui opened this issue on 2024-04-16 16:08
1. Problem description (with error log context):

```log
Traceback (most recent call last):
  File "/data/test_npu.py", line 29, in <module>
    main()
  File "/data/test_npu.py", line 25, in main
    run(rank, world_size)
  File "/data/test_npu.py", line 17, in run
    dist.all_reduce(mat)
  File "/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch/distributed/c10d_logger.py", line 47, in wrapper
    return func(*args, **kwargs)
  File "/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch/distributed/distributed_c10d.py", line 2050, in all_reduce
    work = group.allreduce([tensor], opts)
RuntimeError: [ERROR] HCCL error in: torch_npu/csrc/distributed/ProcessGroupHCCL.cpp:41.
```

2. Software versions:

-- CANN version (e.g., CANN 3.0.x, 5.x.x): 7.0.0
-- Tensorflow/Pytorch/MindSpore version: torch 2.1.0, torch_npu 2.1.0
-- Python version (e.g., Python 3.7.5): 3.9.19
-- MindStudio version (e.g., MindStudio 2.0.0 (beta3)): None
-- OS version (e.g., Ubuntu 18.04): Ubuntu 22.04

3. Test steps:

test_master.sh

```bash
GPUS_PER_NODE=8
MASTER_ADDR=*.*.*.*
MASTER_PORT=29005
NNODES=2
NODE_RANK=0
WORLD_SIZE=$(($GPUS_PER_NODE * $NNODES))

DISTRIBUTED_ARGS="
    --nproc_per_node $GPUS_PER_NODE \
    --nnodes $NNODES \
    --node_rank $NODE_RANK \
    --master_addr $MASTER_ADDR \
    --master_port $MASTER_PORT
"

torchrun $DISTRIBUTED_ARGS test_npu.py > test_master.log 2>&1
```

test_slave.sh

```bash
GPUS_PER_NODE=8
MASTER_ADDR=*.*.*.*
MASTER_PORT=29005
NNODES=2
NODE_RANK=1
WORLD_SIZE=$(($GPUS_PER_NODE * $NNODES))

DISTRIBUTED_ARGS="
    --nproc_per_node $GPUS_PER_NODE \
    --nnodes $NNODES \
    --node_rank $NODE_RANK \
    --master_addr $MASTER_ADDR \
    --master_port $MASTER_PORT
"

torchrun $DISTRIBUTED_ARGS test_npu.py > test_slave.log 2>&1
```

test_npu.py

```python
import torch
import torch_npu
import torch.distributed as dist
import os

def setup(rank, world_size):
    dist.init_process_group(backend='hccl', rank=rank, world_size=world_size)

def cleanup():
    dist.destroy_process_group()

def run(rank, size):
    torch.manual_seed(42 + rank)
    mat = torch.randn(2, 2, device=f'npu:{rank % 8}')
    print(f'Rank {rank} initial matrix:\n{mat}')
    dist.all_reduce(mat)
    if rank == 0:
        print(f'Reduced matrix on root:\n {mat}')

def main():
    rank = int(os.environ['RANK'])
    world_size = int(os.environ['WORLD_SIZE'])
    setup(rank, world_size)
    run(rank, world_size)
    cleanup()

if __name__ == "__main__":
    main()
```

4. Log output:

**test_master.log**

```log
[2024-04-16 15:48:32,815] torch.distributed.run: [WARNING]
[2024-04-16 15:48:32,815] torch.distributed.run: [WARNING] *****************************************
[2024-04-16 15:48:32,815] torch.distributed.run: [WARNING] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
[2024-04-16 15:48:32,815] torch.distributed.run: [WARNING] *****************************************
/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch_npu/dynamo/__init__.py:18: UserWarning: Register eager implementation for the 'npu' backend of dynamo, as torch_npu was not compiled with torchair.
  warnings.warn(
/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch_npu/dynamo/__init__.py:18: UserWarning: Register eager implementation for the 'npu' backend of dynamo, as torch_npu was not compiled with torchair.
  warnings.warn(
/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch_npu/dynamo/__init__.py:18: UserWarning: Register eager implementation for the 'npu' backend of dynamo, as torch_npu was not compiled with torchair.
  warnings.warn(
/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch_npu/dynamo/__init__.py:18: UserWarning: Register eager implementation for the 'npu' backend of dynamo, as torch_npu was not compiled with torchair.
  warnings.warn(
/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch_npu/dynamo/__init__.py:18: UserWarning: Register eager implementation for the 'npu' backend of dynamo, as torch_npu was not compiled with torchair.
  warnings.warn(
/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch_npu/dynamo/__init__.py:18: UserWarning: Register eager implementation for the 'npu' backend of dynamo, as torch_npu was not compiled with torchair.
  warnings.warn(
/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch_npu/dynamo/__init__.py:18: UserWarning: Register eager implementation for the 'npu' backend of dynamo, as torch_npu was not compiled with torchair.
  warnings.warn(
/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch_npu/dynamo/__init__.py:18: UserWarning: Register eager implementation for the 'npu' backend of dynamo, as torch_npu was not compiled with torchair.
  warnings.warn(
Warning: Device do not support double dtype now, dtype cast repalce with float.
Rank 0 initial matrix:
tensor([[-0.3010, 0.5186], [-0.1569, -1.1761]], device='npu:0')
Warning: Device do not support double dtype now, dtype cast repalce with float.
Rank 2 initial matrix:
tensor([[ 0.0884, -1.0816], [-0.4099, 0.3553]], device='npu:2')
Warning: Device do not support double dtype now, dtype cast repalce with float.
Rank 1 initial matrix:
tensor([[-0.1896, 0.1807], [ 1.0292, -1.5843]], device='npu:1')
Warning: Device do not support double dtype now, dtype cast repalce with float.
Warning: Device do not support double dtype now, dtype cast repalce with float.
Rank 4 initial matrix:
tensor([[ 1.5810, -1.6340], [-0.1435, 2.3288]], device='npu:4')
Rank 7 initial matrix:
tensor([[0.3208, 0.0338], [0.0500, 0.9941]], device='npu:7')
Warning: Device do not support double dtype now, dtype cast repalce with float.
Rank 5 initial matrix:
tensor([[ 0.0212, 0.1011], [-0.6113, -0.0202]], device='npu:5')
Warning: Device do not support double dtype now, dtype cast repalce with float.
Warning: Device do not support double dtype now, dtype cast repalce with float.
Rank 6 initial matrix:
tensor([[-0.6829, 0.9623], [ 0.3159, 1.7030]], device='npu:6')
Rank 3 initial matrix:
tensor([[ 1.0250, -0.4651], [-0.0470, -1.1238]], device='npu:3')
Traceback (most recent call last):
  File "/data/test_npu.py", line 29, in <module>
    main()
  File "/data/test_npu.py", line 25, in main
    run(rank, world_size)
  File "/data/test_npu.py", line 17, in run
    dist.all_reduce(mat)
  File "/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch/distributed/c10d_logger.py", line 47, in wrapper
    return func(*args, **kwargs)
  File "/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch/distributed/distributed_c10d.py", line 2050, in all_reduce
    work = group.allreduce([tensor], opts)
RuntimeError: [ERROR] HCCL error in: torch_npu/csrc/distributed/ProcessGroupHCCL.cpp:41.
EI0006: Getting socket times out. Reason: Remote Rank did not send the data in time. Please check the reason for the rank being stuck
Solution: 1. Check the rank service processes with other errors or no errors in the cluster.
2. If this error is reported for all NPUs, check whether the time difference between the earliest and latest errors is greater than the connect timeout interval (120s by default). If so, adjust the timeout interval by using the HCCL_CONNECT_TIMEOUT environment variable.
3. Check the connectivity of the communication link between nodes. (For details, see the TLS command and HCCN connectivity check examples.). For details: https://www.hiascend.com/document
Traceback (most recent call last):
  File "/data/test_npu.py", line 29, in <module>
    main()
  File "/data/test_npu.py", line 25, in main
    run(rank, world_size)
  File "/data/test_npu.py", line 17, in run
    dist.all_reduce(mat)
  File "/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch/distributed/c10d_logger.py", line 47, in wrapper
    return func(*args, **kwargs)
  File "/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch/distributed/distributed_c10d.py", line 2050, in all_reduce
    work = group.allreduce([tensor], opts)
RuntimeError: [ERROR] HCCL error in: torch_npu/csrc/distributed/ProcessGroupHCCL.cpp:41.
EI0006: Getting socket times out. Reason: Remote Rank did not send the data in time. Please check the reason for the rank being stuck
Solution: 1. Check the rank service processes with other errors or no errors in the cluster.
2. If this error is reported for all NPUs, check whether the time difference between the earliest and latest errors is greater than the connect timeout interval (120s by default). If so, adjust the timeout interval by using the HCCL_CONNECT_TIMEOUT environment variable.
3. Check the connectivity of the communication link between nodes. (For details, see the TLS command and HCCN connectivity check examples.). For details: https://www.hiascend.com/document
Traceback (most recent call last):
  File "/data/test_npu.py", line 29, in <module>
    main()
  File "/data/test_npu.py", line 25, in main
    run(rank, world_size)
  File "/data/test_npu.py", line 17, in run
    dist.all_reduce(mat)
  File "/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch/distributed/c10d_logger.py", line 47, in wrapper
    return func(*args, **kwargs)
  File "/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch/distributed/distributed_c10d.py", line 2050, in all_reduce
    work = group.allreduce([tensor], opts)
RuntimeError: [ERROR] HCCL error in: torch_npu/csrc/distributed/ProcessGroupHCCL.cpp:41.
EI0006: Getting socket times out. Reason: Remote Rank did not send the data in time. Please check the reason for the rank being stuck
Solution: 1. Check the rank service processes with other errors or no errors in the cluster.
2. If this error is reported for all NPUs, check whether the time difference between the earliest and latest errors is greater than the connect timeout interval (120s by default). If so, adjust the timeout interval by using the HCCL_CONNECT_TIMEOUT environment variable.
3. Check the connectivity of the communication link between nodes. (For details, see the TLS command and HCCN connectivity check examples.). For details: https://www.hiascend.com/document
......
```

**test_slave.log**

```log
[2024-04-16 15:48:57,594] torch.distributed.run: [WARNING]
[2024-04-16 15:48:57,594] torch.distributed.run: [WARNING] *****************************************
[2024-04-16 15:48:57,594] torch.distributed.run: [WARNING] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
[2024-04-16 15:48:57,594] torch.distributed.run: [WARNING] *****************************************
/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch_npu/dynamo/__init__.py:18: UserWarning: Register eager implementation for the 'npu' backend of dynamo, as torch_npu was not compiled with torchair.
  warnings.warn(
/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch_npu/dynamo/__init__.py:18: UserWarning: Register eager implementation for the 'npu' backend of dynamo, as torch_npu was not compiled with torchair.
  warnings.warn(
/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch_npu/dynamo/__init__.py:18: UserWarning: Register eager implementation for the 'npu' backend of dynamo, as torch_npu was not compiled with torchair.
  warnings.warn(
/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch_npu/dynamo/__init__.py:18: UserWarning: Register eager implementation for the 'npu' backend of dynamo, as torch_npu was not compiled with torchair.
  warnings.warn(
/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch_npu/dynamo/__init__.py:18: UserWarning: Register eager implementation for the 'npu' backend of dynamo, as torch_npu was not compiled with torchair.
  warnings.warn(
/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch_npu/dynamo/__init__.py:18: UserWarning: Register eager implementation for the 'npu' backend of dynamo, as torch_npu was not compiled with torchair.
  warnings.warn(
/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch_npu/dynamo/__init__.py:18: UserWarning: Register eager implementation for the 'npu' backend of dynamo, as torch_npu was not compiled with torchair.
  warnings.warn(
/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch_npu/dynamo/__init__.py:18: UserWarning: Register eager implementation for the 'npu' backend of dynamo, as torch_npu was not compiled with torchair.
  warnings.warn(
Warning: Device do not support double dtype now, dtype cast repalce with float.
Rank 10 initial matrix:
tensor([[-0.3249, -1.3386], [ 1.4715, 0.3518]], device='npu:2')
Warning: Device do not support double dtype now, dtype cast repalce with float.
Rank 15 initial matrix:
tensor([[ 0.0527, 0.2152], [-1.5841, -0.6014]], device='npu:7')
Warning: Device do not support double dtype now, dtype cast repalce with float.
Warning: Device do not support double dtype now, dtype cast repalce with float.
Rank 11 initial matrix:
tensor([[-0.0631, 0.9608], [ 1.4438, 0.8795]], device='npu:3')
Warning: Device do not support double dtype now, dtype cast repalce with float.
Rank 9 initial matrix:
tensor([[ 0.6343, -1.3501], [ 1.9291, 0.9612]], device='npu:1')
Rank 14 initial matrix:
tensor([[ 0.7910, 1.1560], [-0.6018, 0.1331]], device='npu:6')
Warning: Device do not support double dtype now, dtype cast repalce with float.
Rank 12 initial matrix:
tensor([[-0.6680, -1.8644], [-1.3363, -0.1779]], device='npu:4')
Warning: Device do not support double dtype now, dtype cast repalce with float.
Rank 8 initial matrix:
tensor([[-2.1638, 0.2733], [-1.7053, 0.8722]], device='npu:0')
Warning: Device do not support double dtype now, dtype cast repalce with float.
Rank 13 initial matrix:
tensor([[ 1.9668, -2.1328], [ 0.8167, 0.0494]], device='npu:5')
Traceback (most recent call last):
  File "/data/test_npu.py", line 29, in <module>
    main()
  File "/data/test_npu.py", line 25, in main
    run(rank, world_size)
  File "/data/test_npu.py", line 17, in run
    dist.all_reduce(mat)
  File "/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch/distributed/c10d_logger.py", line 47, in wrapper
    return func(*args, **kwargs)
  File "/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch/distributed/distributed_c10d.py", line 2050, in all_reduce
    work = group.allreduce([tensor], opts)
RuntimeError: [ERROR] HCCL error in: torch_npu/csrc/distributed/ProcessGroupHCCL.cpp:41.
[2024-04-16 15:49:37,728] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 401524 closing signal SIGTERM
[2024-04-16 15:49:37,729] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 401525 closing signal SIGTERM
[2024-04-16 15:49:37,729] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 401526 closing signal SIGTERM
[2024-04-16 15:49:37,729] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 401527 closing signal SIGTERM
[2024-04-16 15:49:37,729] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 401528 closing signal SIGTERM
[2024-04-16 15:49:37,729] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 401529 closing signal SIGTERM
[2024-04-16 15:49:37,729] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 401530 closing signal SIGTERM
/root/miniconda3/envs/llm/lib/python3.9/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 97 leaked semaphore objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '
/root/miniconda3/envs/llm/lib/python3.9/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 97 leaked semaphore objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '
[2024-04-16 15:50:07,730] torch.distributed.elastic.multiprocessing.api: [WARNING] Unable to shutdown process 401524 via Signals.SIGTERM, forcefully exiting via Signals.SIGKILL
/root/miniconda3/envs/llm/lib/python3.9/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 97 leaked semaphore objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '
[2024-04-16 15:50:07,867] torch.distributed.elastic.multiprocessing.api: [WARNING] Unable to shutdown process 401525 via Signals.SIGTERM, forcefully exiting via Signals.SIGKILL
Process ForkServerPoolWorker-4:
Process ForkServerPoolWorker-6:
Process ForkServerPoolWorker-3:
Traceback (most recent call last):
Traceback (most recent call last):
  File "/root/miniconda3/envs/llm/lib/python3.9/multiprocessing/pool.py", line 131, in worker
    put((job, i, result))
  File "/root/miniconda3/envs/llm/lib/python3.9/multiprocessing/pool.py", line 131, in worker
    put((job, i, result))
  File "/root/miniconda3/envs/llm/lib/python3.9/multiprocessing/queues.py", line 377, in put
    self._writer.send_bytes(obj)
  File "/root/miniconda3/envs/llm/lib/python3.9/multiprocessing/queues.py", line 377, in put
    self._writer.send_bytes(obj)
  File "/root/miniconda3/envs/llm/lib/python3.9/multiprocessing/connection.py", line 200, in send_bytes
    self._send_bytes(m[offset:offset + size])
  File "/root/miniconda3/envs/llm/lib/python3.9/multiprocessing/connection.py", line 200, in send_bytes
    self._send_bytes(m[offset:offset + size])
  File "/root/miniconda3/envs/llm/lib/python3.9/multiprocessing/connection.py", line 411, in _send_bytes
    self._send(header + buf)
  File "/root/miniconda3/envs/llm/lib/python3.9/multiprocessing/connection.py", line 411, in _send_bytes
    self._send(header + buf)
  File "/root/miniconda3/envs/llm/lib/python3.9/multiprocessing/connection.py", line 368, in _send
    n = write(self._handle, buf)
  File "/root/miniconda3/envs/llm/lib/python3.9/multiprocessing/connection.py", line 368, in _send
    n = write(self._handle, buf)
BrokenPipeError: [Errno 32] Broken pipe
BrokenPipeError: [Errno 32] Broken pipe
During handling of the above exception, another exception occurred:
......
```
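The EI0006 text in the master log points at the HCCL connect timeout (120 s by default). One low-risk experiment is to raise it in both launch scripts before `torchrun` starts; a minimal sketch, where the 1200 s value is only a placeholder to tune:

```bash
# Sketch: raise the HCCL connect timeout so slow cross-node socket setup is not
# aborted after the default 120 s. Add this near the top of test_master.sh and
# test_slave.sh, before the existing torchrun line.
export HCCL_CONNECT_TIMEOUT=1200   # seconds; placeholder value, adjust as needed

torchrun $DISTRIBUTED_ARGS test_npu.py > test_master.log 2>&1
```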
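The same EI0006 hint also asks for a connectivity check of the inter-node link. On Ascend hosts the device NICs can be inspected with `hccn_tool`, which ships with the NPU driver; a rough sketch (flag spelling follows the Ascend HCCN documentation linked in the error, and PEER_IP is a placeholder for the remote card's NIC address):

```bash
# Sketch: list each local card's device NIC IP, then ping a card on the other node.
# Every card pair that HCCL needs to connect should be reachable this way.
PEER_IP=192.168.100.101                      # placeholder; substitute the remote NPU NIC IP

for i in $(seq 0 7); do
    hccn_tool -i "$i" -ip -g                 # print the device NIC IP of card $i
done

hccn_tool -i 0 -ping -g address "$PEER_IP"   # reachability check from card 0 to the peer
```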
Comments (15)