1. Problem description (with error log context):
Traceback (most recent call last):
File "/data/test_npu.py", line 29, in <module>
main()
File "/data/test_npu.py", line 25, in main
run(rank, world_size)
File "/data/test_npu.py", line 17, in run
dist.all_reduce(mat)
File "/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch/distributed/c10d_logger.py", line 47, in wrapper
return func(*args, **kwargs)
File "/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch/distributed/distributed_c10d.py", line 2050, in all_reduce
work = group.allreduce([tensor], opts)
RuntimeError: [ERROR] HCCL error in: torch_npu/csrc/distributed/ProcessGroupHCCL.cpp:41.
2. Software versions:
-- CANN version (e.g., CANN 3.0.x, 5.x.x): 7.0.0
-- TensorFlow/PyTorch/MindSpore version: torch 2.1.0, torch_npu 2.1.0
-- Python version (e.g., Python 3.7.5): 3.9.19
-- MindStudio version (e.g., MindStudio 2.0.0 (beta3)): None
-- OS version (e.g., Ubuntu 18.04): Ubuntu 22.04
3. Test steps:
test_master.sh
GPUS_PER_NODE=8
MASTER_ADDR=*.*.*.*
MASTER_PORT=29005
NNODES=2
NODE_RANK=0
WORLD_SIZE=$(($GPUS_PER_NODE * $NNODES))
DISTRIBUTED_ARGS="
--nproc_per_node $GPUS_PER_NODE \
--nnodes $NNODES \
--node_rank $NODE_RANK \
--master_addr $MASTER_ADDR \
--master_port $MASTER_PORT
"
torchrun $DISTRIBUTED_ARGS test_npu.py > test_master.log 2>&1
test_slave.sh
GPUS_PER_NODE=8
MASTER_ADDR=*.*.*.*
MASTER_PORT=29005
NNODES=2
NODE_RANK=1
WORLD_SIZE=$(($GPUS_PER_NODE * $NNODES))
DISTRIBUTED_ARGS="
--nproc_per_node $GPUS_PER_NODE \
--nnodes $NNODES \
--node_rank $NODE_RANK \
--master_addr $MASTER_ADDR \
--master_port $MASTER_PORT
"
torchrun $DISTRIBUTED_ARGS test_npu.py > test_slave.log 2>&1
test_npu.py
import torch
import torch_npu
import torch.distributed as dist
import os


def setup(rank, world_size):
    # torchrun provides RANK/WORLD_SIZE; initialize the HCCL process group with them
    dist.init_process_group(backend='hccl', rank=rank, world_size=world_size)


def cleanup():
    dist.destroy_process_group()


def run(rank, size):
    torch.manual_seed(42 + rank)
    # map each global rank onto one of the 8 local NPUs
    mat = torch.randn(2, 2, device=f'npu:{rank % 8}')
    print(f'Rank {rank} initial matrix:\n{mat}')
    dist.all_reduce(mat)
    if rank == 0:
        print(f'Reduced matrix on root:\n {mat}')


def main():
    rank = int(os.environ['RANK'])
    world_size = int(os.environ['WORLD_SIZE'])
    setup(rank, world_size)
    run(rank, world_size)
    cleanup()


if __name__ == "__main__":
    main()
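For reference, the same test_npu.py can first be sanity-checked on a single node before involving the second machine (a minimal sketch, assuming 8 NPUs on one host and the same environment as above):

torchrun --nproc_per_node 8 --nnodes 1 --node_rank 0 --master_addr 127.0.0.1 --master_port 29005 test_npu.py

If this single-node run completes, the all_reduce path within one machine works, and the failure is more likely in the cross-node communication setup.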
4. Log information:
test_master.log
[2024-04-16 15:48:32,815] torch.distributed.run: [WARNING]
[2024-04-16 15:48:32,815] torch.distributed.run: [WARNING] *****************************************
[2024-04-16 15:48:32,815] torch.distributed.run: [WARNING] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
[2024-04-16 15:48:32,815] torch.distributed.run: [WARNING] *****************************************
/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch_npu/dynamo/__init__.py:18: UserWarning: Register eager implementation for the 'npu' backend of dynamo, as torch_npu was not compiled with torchair.
warnings.warn(
/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch_npu/dynamo/__init__.py:18: UserWarning: Register eager implementation for the 'npu' backend of dynamo, as torch_npu was not compiled with torchair.
warnings.warn(
/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch_npu/dynamo/__init__.py:18: UserWarning: Register eager implementation for the 'npu' backend of dynamo, as torch_npu was not compiled with torchair.
warnings.warn(
/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch_npu/dynamo/__init__.py:18: UserWarning: Register eager implementation for the 'npu' backend of dynamo, as torch_npu was not compiled with torchair.
warnings.warn(
/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch_npu/dynamo/__init__.py:18: UserWarning: Register eager implementation for the 'npu' backend of dynamo, as torch_npu was not compiled with torchair.
warnings.warn(
/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch_npu/dynamo/__init__.py:18: UserWarning: Register eager implementation for the 'npu' backend of dynamo, as torch_npu was not compiled with torchair.
warnings.warn(
/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch_npu/dynamo/__init__.py:18: UserWarning: Register eager implementation for the 'npu' backend of dynamo, as torch_npu was not compiled with torchair.
warnings.warn(
/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch_npu/dynamo/__init__.py:18: UserWarning: Register eager implementation for the 'npu' backend of dynamo, as torch_npu was not compiled with torchair.
warnings.warn(
Warning: Device do not support double dtype now, dtype cast repalce with float.
Rank 0 initial matrix:
tensor([[-0.3010, 0.5186],
[-0.1569, -1.1761]], device='npu:0')
Warning: Device do not support double dtype now, dtype cast repalce with float.
Rank 2 initial matrix:
tensor([[ 0.0884, -1.0816],
[-0.4099, 0.3553]], device='npu:2')
Warning: Device do not support double dtype now, dtype cast repalce with float.
Rank 1 initial matrix:
tensor([[-0.1896, 0.1807],
[ 1.0292, -1.5843]], device='npu:1')
Warning: Device do not support double dtype now, dtype cast repalce with float.
Warning: Device do not support double dtype now, dtype cast repalce with float.
Rank 4 initial matrix:
tensor([[ 1.5810, -1.6340],
[-0.1435, 2.3288]], device='npu:4')
Rank 7 initial matrix:
tensor([[0.3208, 0.0338],
[0.0500, 0.9941]], device='npu:7')
Warning: Device do not support double dtype now, dtype cast repalce with float.
Rank 5 initial matrix:
tensor([[ 0.0212, 0.1011],
[-0.6113, -0.0202]], device='npu:5')
Warning: Device do not support double dtype now, dtype cast repalce with float.
Warning: Device do not support double dtype now, dtype cast repalce with float.
Rank 6 initial matrix:
tensor([[-0.6829, 0.9623],
[ 0.3159, 1.7030]], device='npu:6')
Rank 3 initial matrix:
tensor([[ 1.0250, -0.4651],
[-0.0470, -1.1238]], device='npu:3')
Traceback (most recent call last):
File "/data/test_npu.py", line 29, in <module>
main()
File "/data/test_npu.py", line 25, in main
run(rank, world_size)
File "/data/test_npu.py", line 17, in run
dist.all_reduce(mat)
File "/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch/distributed/c10d_logger.py", line 47, in wrapper
return func(*args, **kwargs)
File "/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch/distributed/distributed_c10d.py", line 2050, in all_reduce
work = group.allreduce([tensor], opts)
RuntimeError: [ERROR] HCCL error in: torch_npu/csrc/distributed/ProcessGroupHCCL.cpp:41.
EI0006: Getting socket times out. Reason: Remote Rank did not send the data in time. Please check the reason for the rank being stuck
Solution: 1. Check the rank service processes with other errors or no errors in the cluster.2. If this error is reported for all NPUs, check whether the time difference between the earliest and latest errors is greater than the connect timeout interval (120s by default). If so, adjust the timeout interval by using the HCCL_CONNECT_TIMEOUT environment variable.3. Check the connectivity of the communication link between nodes. (For details, see the TLS command and HCCN connectivity check examples.). For details:https://www.hiascend.com/document
Traceback (most recent call last):
File "/data/test_npu.py", line 29, in <module>
main()
File "/data/test_npu.py", line 25, in main
run(rank, world_size)
File "/data/test_npu.py", line 17, in run
dist.all_reduce(mat)
File "/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch/distributed/c10d_logger.py", line 47, in wrapper
return func(*args, **kwargs)
File "/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch/distributed/distributed_c10d.py", line 2050, in all_reduce
work = group.allreduce([tensor], opts)
RuntimeError: [ERROR] HCCL error in: torch_npu/csrc/distributed/ProcessGroupHCCL.cpp:41.
EI0006: Getting socket times out. Reason: Remote Rank did not send the data in time. Please check the reason for the rank being stuck
Solution: 1. Check the rank service processes with other errors or no errors in the cluster.2. If this error is reported for all NPUs, check whether the time difference between the earliest and latest errors is greater than the connect timeout interval (120s by default). If so, adjust the timeout interval by using the HCCL_CONNECT_TIMEOUT environment variable.3. Check the connectivity of the communication link between nodes. (For details, see the TLS command and HCCN connectivity check examples.). For details:https://www.hiascend.com/document
Traceback (most recent call last):
File "/data/test_npu.py", line 29, in <module>
main()
File "/data/test_npu.py", line 25, in main
run(rank, world_size)
File "/data/test_npu.py", line 17, in run
dist.all_reduce(mat)
File "/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch/distributed/c10d_logger.py", line 47, in wrapper
return func(*args, **kwargs)
File "/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch/distributed/distributed_c10d.py", line 2050, in all_reduce
work = group.allreduce([tensor], opts)
RuntimeError: [ERROR] HCCL error in: torch_npu/csrc/distributed/ProcessGroupHCCL.cpp:41.
EI0006: Getting socket times out. Reason: Remote Rank did not send the data in time. Please check the reason for the rank being stuck
Solution: 1. Check the rank service processes with other errors or no errors in the cluster.2. If this error is reported for all NPUs, check whether the time difference between the earliest and latest errors is greater than the connect timeout interval (120s by default). If so, adjust the timeout interval by using the HCCL_CONNECT_TIMEOUT environment variable.3. Check the connectivity of the communication link between nodes. (For details, see the TLS command and HCCN connectivity check examples.). For details:https://www.hiascend.com/document
......
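The EI0006 text above points at HCCL_CONNECT_TIMEOUT (120 s by default). If the ranks merely reach initialization at very different times, the interval can be raised before launching; a minimal sketch, assuming the export is added to both test_master.sh and test_slave.sh:

export HCCL_CONNECT_TIMEOUT=1200   # seconds; the value here is only an example
torchrun $DISTRIBUTED_ARGS test_npu.py > test_master.log 2>&1

This does not fix a genuine connectivity problem; it only rules out plain startup-time skew.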
test_slave.log
[2024-04-16 15:48:57,594] torch.distributed.run: [WARNING]
[2024-04-16 15:48:57,594] torch.distributed.run: [WARNING] *****************************************
[2024-04-16 15:48:57,594] torch.distributed.run: [WARNING] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
[2024-04-16 15:48:57,594] torch.distributed.run: [WARNING] *****************************************
/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch_npu/dynamo/__init__.py:18: UserWarning: Register eager implementation for the 'npu' backend of dynamo, as torch_npu was not compiled with torchair.
warnings.warn(
/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch_npu/dynamo/__init__.py:18: UserWarning: Register eager implementation for the 'npu' backend of dynamo, as torch_npu was not compiled with torchair.
warnings.warn(
/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch_npu/dynamo/__init__.py:18: UserWarning: Register eager implementation for the 'npu' backend of dynamo, as torch_npu was not compiled with torchair.
warnings.warn(
/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch_npu/dynamo/__init__.py:18: UserWarning: Register eager implementation for the 'npu' backend of dynamo, as torch_npu was not compiled with torchair.
warnings.warn(
/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch_npu/dynamo/__init__.py:18: UserWarning: Register eager implementation for the 'npu' backend of dynamo, as torch_npu was not compiled with torchair.
warnings.warn(
/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch_npu/dynamo/__init__.py:18: UserWarning: Register eager implementation for the 'npu' backend of dynamo, as torch_npu was not compiled with torchair.
warnings.warn(
/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch_npu/dynamo/__init__.py:18: UserWarning: Register eager implementation for the 'npu' backend of dynamo, as torch_npu was not compiled with torchair.
warnings.warn(
/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch_npu/dynamo/__init__.py:18: UserWarning: Register eager implementation for the 'npu' backend of dynamo, as torch_npu was not compiled with torchair.
warnings.warn(
Warning: Device do not support double dtype now, dtype cast repalce with float.
Rank 10 initial matrix:
tensor([[-0.3249, -1.3386],
[ 1.4715, 0.3518]], device='npu:2')
Warning: Device do not support double dtype now, dtype cast repalce with float.
Rank 15 initial matrix:
tensor([[ 0.0527, 0.2152],
[-1.5841, -0.6014]], device='npu:7')
Warning: Device do not support double dtype now, dtype cast repalce with float.
Warning: Device do not support double dtype now, dtype cast repalce with float.
Rank 11 initial matrix:
tensor([[-0.0631, 0.9608],
[ 1.4438, 0.8795]], device='npu:3')
Warning: Device do not support double dtype now, dtype cast repalce with float.
Rank 9 initial matrix:
tensor([[ 0.6343, -1.3501],
[ 1.9291, 0.9612]], device='npu:1')
Rank 14 initial matrix:
tensor([[ 0.7910, 1.1560],
[-0.6018, 0.1331]], device='npu:6')
Warning: Device do not support double dtype now, dtype cast repalce with float.
Rank 12 initial matrix:
tensor([[-0.6680, -1.8644],
[-1.3363, -0.1779]], device='npu:4')
Warning: Device do not support double dtype now, dtype cast repalce with float.
Rank 8 initial matrix:
tensor([[-2.1638, 0.2733],
[-1.7053, 0.8722]], device='npu:0')
Warning: Device do not support double dtype now, dtype cast repalce with float.
Rank 13 initial matrix:
tensor([[ 1.9668, -2.1328],
[ 0.8167, 0.0494]], device='npu:5')
Traceback (most recent call last):
File "/data/test_npu.py", line 29, in <module>
main()
File "/data/test_npu.py", line 25, in main
run(rank, world_size)
File "/data/test_npu.py", line 17, in run
dist.all_reduce(mat)
File "/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch/distributed/c10d_logger.py", line 47, in wrapper
return func(*args, **kwargs)
File "/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch/distributed/distributed_c10d.py", line 2050, in all_reduce
work = group.allreduce([tensor], opts)
RuntimeError: [ERROR] HCCL error in: torch_npu/csrc/distributed/ProcessGroupHCCL.cpp:41.
[2024-04-16 15:49:37,728] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 401524 closing signal SIGTERM
[2024-04-16 15:49:37,729] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 401525 closing signal SIGTERM
[2024-04-16 15:49:37,729] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 401526 closing signal SIGTERM
[2024-04-16 15:49:37,729] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 401527 closing signal SIGTERM
[2024-04-16 15:49:37,729] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 401528 closing signal SIGTERM
[2024-04-16 15:49:37,729] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 401529 closing signal SIGTERM
[2024-04-16 15:49:37,729] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 401530 closing signal SIGTERM
/root/miniconda3/envs/llm/lib/python3.9/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 97 leaked semaphore objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d '
/root/miniconda3/envs/llm/lib/python3.9/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 97 leaked semaphore objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d '
[2024-04-16 15:50:07,730] torch.distributed.elastic.multiprocessing.api: [WARNING] Unable to shutdown process 401524 via Signals.SIGTERM, forcefully exiting via Signals.SIGKILL
/root/miniconda3/envs/llm/lib/python3.9/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 97 leaked semaphore objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d '
[2024-04-16 15:50:07,867] torch.distributed.elastic.multiprocessing.api: [WARNING] Unable to shutdown process 401525 via Signals.SIGTERM, forcefully exiting via Signals.SIGKILL
Process ForkServerPoolWorker-4:
Process ForkServerPoolWorker-6:
Process ForkServerPoolWorker-3:
Traceback (most recent call last):
Traceback (most recent call last):
File "/root/miniconda3/envs/llm/lib/python3.9/multiprocessing/pool.py", line 131, in worker
put((job, i, result))
File "/root/miniconda3/envs/llm/lib/python3.9/multiprocessing/pool.py", line 131, in worker
put((job, i, result))
File "/root/miniconda3/envs/llm/lib/python3.9/multiprocessing/queues.py", line 377, in put
self._writer.send_bytes(obj)
File "/root/miniconda3/envs/llm/lib/python3.9/multiprocessing/queues.py", line 377, in put
self._writer.send_bytes(obj)
File "/root/miniconda3/envs/llm/lib/python3.9/multiprocessing/connection.py", line 200, in send_bytes
self._send_bytes(m[offset:offset + size])
File "/root/miniconda3/envs/llm/lib/python3.9/multiprocessing/connection.py", line 200, in send_bytes
self._send_bytes(m[offset:offset + size])
File "/root/miniconda3/envs/llm/lib/python3.9/multiprocessing/connection.py", line 411, in _send_bytes
self._send(header + buf)
File "/root/miniconda3/envs/llm/lib/python3.9/multiprocessing/connection.py", line 411, in _send_bytes
self._send(header + buf)
File "/root/miniconda3/envs/llm/lib/python3.9/multiprocessing/connection.py", line 368, in _send
n = write(self._handle, buf)
File "/root/miniconda3/envs/llm/lib/python3.9/multiprocessing/connection.py", line 368, in _send
n = write(self._handle, buf)
BrokenPipeError: [Errno 32] Broken pipe
BrokenPipeError: [Errno 32] Broken pipe
During handling of the above exception, another exception occurred:
......
Addendum:
A similar error was reported when attempting multi-node, multi-NPU training, so we suspected a torch_npu issue and implemented the simple all-reduce test above.
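Because the EI0006 solution text above also points at link connectivity between the nodes, basic reachability of the rendezvous port can be checked with standard tools before launching (a hedged sketch, run from the non-master node, with MASTER_ADDR/MASTER_PORT taken from the scripts above):

ping -c 3 $MASTER_ADDR            # host-level reachability
nc -zv $MASTER_ADDR $MASTER_PORT  # TCP reachability of the torchrun rendezvous port (master side must be running)

Note that this only covers the torchrun rendezvous; the HCCL sockets themselves (port 60000 in the plogs below) are a separate path.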
ModelLink Mixtral pretraining
sh examples/mixtral/pretrain_mixtral_8x7b_ptd_slave_npu1.sh
NODE_RANK 1
[2024-04-16 16:13:38,281] torch.distributed.run: [WARNING]
[2024-04-16 16:13:38,281] torch.distributed.run: [WARNING] *****************************************
[2024-04-16 16:13:38,281] torch.distributed.run: [WARNING] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
[2024-04-16 16:13:38,281] torch.distributed.run: [WARNING] *****************************************
/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch_npu/dynamo/__init__.py:18: UserWarning: Register eager implementation for the 'npu' backend of dynamo, as torch_npu was not compiled with torchair.
warnings.warn(
/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch_npu/dynamo/__init__.py:18: UserWarning: Register eager implementation for the 'npu' backend of dynamo, as torch_npu was not compiled with torchair.
warnings.warn(
/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch_npu/dynamo/__init__.py:18: UserWarning: Register eager implementation for the 'npu' backend of dynamo, as torch_npu was not compiled with torchair.
warnings.warn(
/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch_npu/dynamo/__init__.py:18: UserWarning: Register eager implementation for the 'npu' backend of dynamo, as torch_npu was not compiled with torchair.
warnings.warn(
/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch_npu/dynamo/__init__.py:18: UserWarning: Register eager implementation for the 'npu' backend of dynamo, as torch_npu was not compiled with torchair.
warnings.warn(
/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch_npu/dynamo/__init__.py:18: UserWarning: Register eager implementation for the 'npu' backend of dynamo, as torch_npu was not compiled with torchair.
warnings.warn(
/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch_npu/dynamo/__init__.py:18: UserWarning: Register eager implementation for the 'npu' backend of dynamo, as torch_npu was not compiled with torchair.
warnings.warn(
/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch_npu/dynamo/__init__.py:18: UserWarning: Register eager implementation for the 'npu' backend of dynamo, as torch_npu was not compiled with torchair.
warnings.warn(
Zarr-based strategies will not be registered because of missing packages
/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch_npu/contrib/transfer_to_npu.py:124: RuntimeWarning: torch.jit.script will be disabled by transfer_to_npu, which currently does not support it.
warnings.warn(msg, RuntimeWarning)
Zarr-based strategies will not be registered because of missing packages
/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch_npu/contrib/transfer_to_npu.py:124: RuntimeWarning: torch.jit.script will be disabled by transfer_to_npu, which currently does not support it.
warnings.warn(msg, RuntimeWarning)
Zarr-based strategies will not be registered because of missing packages
Zarr-based strategies will not be registered because of missing packages
/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch_npu/contrib/transfer_to_npu.py:124: RuntimeWarning: torch.jit.script will be disabled by transfer_to_npu, which currently does not support it.
warnings.warn(msg, RuntimeWarning)
Zarr-based strategies will not be registered because of missing packages
/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch_npu/contrib/transfer_to_npu.py:124: RuntimeWarning: torch.jit.script will be disabled by transfer_to_npu, which currently does not support it.
warnings.warn(msg, RuntimeWarning)
/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch_npu/contrib/transfer_to_npu.py:124: RuntimeWarning: torch.jit.script will be disabled by transfer_to_npu, which currently does not support it.
warnings.warn(msg, RuntimeWarning)
Zarr-based strategies will not be registered because of missing packages
Zarr-based strategies will not be registered because of missing packages
/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch_npu/contrib/transfer_to_npu.py:124: RuntimeWarning: torch.jit.script will be disabled by transfer_to_npu, which currently does not support it.
warnings.warn(msg, RuntimeWarning)
/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch_npu/contrib/transfer_to_npu.py:124: RuntimeWarning: torch.jit.script will be disabled by transfer_to_npu, which currently does not support it.
warnings.warn(msg, RuntimeWarning)
Zarr-based strategies will not be registered because of missing packages
/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch_npu/contrib/transfer_to_npu.py:124: RuntimeWarning: torch.jit.script will be disabled by transfer_to_npu, which currently does not support it.
warnings.warn(msg, RuntimeWarning)
> compiling dataset index builder ...
make: Entering directory '/data/ModelLink/megatron/core/datasets'
make: Nothing to be done for 'default'.
make: Leaving directory '/data/ModelLink/megatron/core/datasets'
>>> done with dataset index builder. Compilation time: 0.166 seconds
-----------------------------COC Settings: ------------------------------------
is coc turned on: False
-------------------------------------------------------------------------------
COC REPLACE NONE!
Warning: The torch.npu.*DtypeTensor constructors are no longer recommended. It's best to use methods such as torch.tensor(data, dtype=*, device='npu') to create tensors.
Warning: The torch.npu.*DtypeTensor constructors are no longer recommended. It's best to use methods such as torch.tensor(data, dtype=*, device='npu') to create tensors.
Warning: The torch.npu.*DtypeTensor constructors are no longer recommended. It's best to use methods such as torch.tensor(data, dtype=*, device='npu') to create tensors.
Warning: The torch.npu.*DtypeTensor constructors are no longer recommended. It's best to use methods such as torch.tensor(data, dtype=*, device='npu') to create tensors.
Warning: The torch.npu.*DtypeTensor constructors are no longer recommended. It's best to use methods such as torch.tensor(data, dtype=*, device='npu') to create tensors.
Warning: The torch.npu.*DtypeTensor constructors are no longer recommended. It's best to use methods such as torch.tensor(data, dtype=*, device='npu') to create tensors.
Warning: The torch.npu.*DtypeTensor constructors are no longer recommended. It's best to use methods such as torch.tensor(data, dtype=*, device='npu') to create tensors.
Warning: The torch.npu.*DtypeTensor constructors are no longer recommended. It's best to use methods such as torch.tensor(data, dtype=*, device='npu') to create tensors.
Warning: Device do not support double dtype now, dtype cast repalce with float.
Traceback (most recent call last):
File "/data/ModelLink/pretrain_gpt.py", line 265, in <module>
pretrain(train_valid_test_datasets_provider,
File "/data/ModelLink/megatron/training.py", line 126, in pretrain
torch.distributed.all_reduce(start_time_tensor,
File "/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch/distributed/c10d_logger.py", line 47, in wrapper
return func(*args, **kwargs)
File "/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch/distributed/distributed_c10d.py", line 2050, in all_reduce
work = group.allreduce([tensor], opts)
RuntimeError: [ERROR] HCCL error in: torch_npu/csrc/distributed/ProcessGroupHCCL.cpp:41.
Warning: Device do not support double dtype now, dtype cast repalce with float.
Traceback (most recent call last):
File "/data/ModelLink/pretrain_gpt.py", line 265, in <module>
pretrain(train_valid_test_datasets_provider,
File "/data/ModelLink/megatron/training.py", line 126, in pretrain
torch.distributed.all_reduce(start_time_tensor,
File "/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch/distributed/c10d_logger.py", line 47, in wrapper
return func(*args, **kwargs)
File "/root/miniconda3/envs/llm/lib/python3.9/site-packages/torch/distributed/distributed_c10d.py", line 2050, in all_reduce
work = group.allreduce([tensor], opts)
RuntimeError: [ERROR] HCCL error in: torch_npu/csrc/distributed/ProcessGroupHCCL.cpp:41.
For the simple test above, could you provide the plog logs so we can see what the first reported error was?
Hello, the first error in the logs is: [ERROR] TEFUSION(412438,python):2024-04-17-09:12:36.535.075 [common_utils.cc:531]412438 FcntlLockFileSet fcntl F_SETLK failed. type:1, recursiveCnt:0, errno:37, No locks available
The disk does not support fcntl locks. As a usage restriction, NFSv3 disks cannot be used: NFSv3 and earlier do not support flock. If it is a local disk, the operating system side needs to investigate why the flock cannot be acquired.
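As a quick check of whether a given directory supports the required locking, one can try to take a lock there with the standard flock utility (a minimal sketch; /data is only a placeholder for the path being tested):

touch /data/.flock_test
flock -x /data/.flock_test -c 'echo flock acquired OK'   # typically fails on NFSv3-style mounts without lock support

If the lock cannot be acquired even on a local disk, the OS configuration needs to be examined as noted above.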
Switched to a different disk, and... now there is a new problem.
master.plog
[EVENT] HCCP(468114,python):2024-04-17-15:37:01.270.229 [ra_host.c:818]tid:468114,ra_socket_listen_start(818) : Input parameters: [0]th, phy_id[0], local_ip[172.17.0.2]
[EVENT] HCCP(468114,python):2024-04-17-15:37:01.270.258 [rs_socket.c:646]tid:468114,rs_socket_listen_bind_listen(646) : socket bind: family 2, addr 172.17.0.2, port 60000
[TRACE] HCCL(468114,python):2024-04-17-15:37:01.270.389 [status:init] [op_base.cc:331][468114]HcclGetRootInfo success, take time [7027]us
[EVENT] HCCP(468114,python):2024-04-17-15:37:04.441.618 [ra_host.c:253]tid:468114,ra_init(253) : Input parameters: phy_id[0], nic_position:[1]
[EVENT] HCCP(468114,python):2024-04-17-15:37:04.442.681 [ra_hdc.c:1419]tid:468114,ra_hdc_init(1419) : hdc init start! logic id is 0, phy id is 0
[EVENT] HCCP(468114,python):2024-04-17-15:37:04.443.416 [ra_hdc.c:1454]tid:468114,ra_hdc_init(1454) : hdc init OK! phy_id[0]
[EVENT] HCCP(468114,python):2024-04-17-15:37:04.447.619 [ra_host.c:1845]tid:468114,ra_get_ifaddrs(1845) : Input parameters: phy_id[0], nic_position:[1], interface num[2]
[EVENT] HCCP(468114,python):2024-04-17-15:37:04.448.985 [ra_host.c:740]tid:468114,ra_socket_batch_connect(740) : Input parameters: [0]th, phy_id[0], local_ip[172.17.0.2], remote_ip[172.17.0.2], tag:[topo_detect_default_tag_60000]
[ERROR] HCCL(468114,python):2024-04-17-15:39:01.270.943 [topoinfo_exchange_server.cc:91][471424][Get][Connection]topo exchange server get socket timeout! timeout[120 s]
[ERROR] HCCL(468114,python):2024-04-17-15:39:01.271.006 [topoinfo_exchange_server.cc:161][471424][TopoInfoExchangeServer][DispalyConnectionedRank]total connected num is [8],line num is [1]
[ERROR] HCCL(468114,python):2024-04-17-15:39:01.271.048 [topoinfo_exchange_server.cc:174][471424][TopoInfoExchangeServer][DispalyConnectionedRank]connected rankinfo[LINE 0]: [0000000000000000],[0000000000000001],[0000000000000002],[0000000000000003],[0000000000000004],[0000000000000005],[0000000000000006],[0000000000000007];
[ERROR] HCCL(468114,python):2024-04-17-15:39:01.271.059 [topoinfo_exchange_server.cc:75][471424]call trace: hcclRet -> 9
[ERROR] HCCL(468114,python):2024-04-17-15:39:01.271.070 [topoinfo_exchange_server.cc:41][471424][TopoInfoExchangeServer][Setup]cluster topo exchange server connect client failed
[EVENT] HCCP(468114,python):2024-04-17-15:39:01.271.128 [ra_host.c:856]tid:471424,ra_socket_listen_stop(856) : Input parameters: [0]th, phy_id[0], local_ip[172.17.0.2]
[ERROR] HCCL(468114,python):2024-04-17-15:39:01.271.248 [topoinfo_detect.cc:49][471424][Setup][TopoExchangeServer]setup topoExchangeServer failed, ret[9]
[ERROR] HCCL(468114,python):2024-04-17-15:39:04.451.819 [adapter_hccp.cc:765][468114][Recv][RaSocket]errNo[0x0000000005000013] Wait timeout for sockets recv, data[0xffffeaf5a428], size[4], recvSize[0], The most common cause is that the firewall is incorrectly configured. Check the firewall configuration or try to disable the firewall.
[ERROR] HCCL(468114,python):2024-04-17-15:39:04.452.070 [topoinfo_exchange_base.cc:136][468114]call trace: hcclRet -> 9
[ERROR] HCCL(468114,python):2024-04-17-15:39:04.452.087 [topoinfo_exchange_base.cc:103][468114][Recv][ClusterInfoMsg]receive msg length from fdhandle failed, ret[9]
[ERROR] HCCL(468114,python):2024-04-17-15:39:04.452.097 [topoinfo_exchange_agent.cc:267][468114]call trace: hcclRet -> 4
[ERROR] HCCL(468114,python):2024-04-17-15:39:04.452.107 [topoinfo_exchange_agent.cc:71][468114]call trace: hcclRet -> 4
[ERROR] HCCL(468114,python):2024-04-17-15:39:04.452.122 [topoinfo_exchange_agent.cc:43][468114]call trace: hcclRet -> 4
[ERROR] HCCL(468114,python):2024-04-17-15:39:04.452.133 [topoinfo_detect.cc:206][468114]call trace: hcclRet -> 4
[ERROR] HCCL(468114,python):2024-04-17-15:39:04.452.153 [op_base.cc:425][468114][Init][CommRootInfo]errNo[0x0000000005000004] setup topo detect error
[EVENT] HCCP(468114,python):2024-04-17-15:39:04.452.173 [ra_host.c:1797]tid:468114,ra_get_ifnum(1797) : Input parameters: phy_id[0], nic_position:[0]
[EVENT] HCCP(468114,python):2024-04-17-15:39:04.454.078 [ra_host.c:1845]tid:468114,ra_get_ifaddrs(1845) : Input parameters: phy_id[0], nic_position:[0], interface num[4]
[ERROR] HCCL(468114,python):2024-04-17-15:39:04.455.488 [op_base.cc:468][468114][Init][CommRootInfo]HcclCommInitRootInfo failed, rankNum[16], rank[0], server[172.17.0.2%eth0], return[0x0000000005000004], rootInfo identifier[172.17.0.2%eth0_60000_0_1713339421270374]
[ERROR] HCCL(468114,python):2024-04-17-15:39:04.456.259 [op_base.cc:1222][468114][HcclCommDestroy] comm is not exist, comm=0x28329340, group=172.17.0.2%eth0_60000_0_1713339421270374, deviceLogicId=0
slave.plog
[EVENT] HCCP(415996,python):2024-04-17-15:37:26.606.605 [ra_hdc.c:1419]tid:415996,ra_hdc_init(1419) : hdc init start! logic id is 5, phy id is 5
[EVENT] HCCP(415996,python):2024-04-17-15:37:26.607.173 [ra_hdc.c:1454]tid:415996,ra_hdc_init(1454) : hdc init OK! phy_id[5]
[EVENT] HCCP(415996,python):2024-04-17-15:37:26.611.038 [ra_host.c:414]tid:415996,ra_socket_init_v1(414) : socket init:mode=0 phy_id=5 family=2 ip=172.17.0.2
[EVENT] HCCP(415996,python):2024-04-17-15:37:26.611.506 [ra_host.c:1845]tid:415996,ra_get_ifaddrs(1845) : Input parameters: phy_id[5], nic_position:[1], interface num[2]
[EVENT] HCCP(415996,python):2024-04-17-15:37:26.612.862 [ra_host.c:740]tid:415996,ra_socket_batch_connect(740) : Input parameters: [0]th, phy_id[5], local_ip[172.17.0.2], remote_ip[172.17.0.2], tag:[topo_detect_default_tag_60000]
[EVENT] HCCP(415996,python):2024-04-17-15:38:07.415.872 [rs_drv_socket.c:103]tid:419327,rs_drv_connect(103) : client connect success. client family 2 addr 172.17.0.2:60000, server addr 172.17.0.2:60000, fd:125
[ERROR] HCCL(415996,python):2024-04-17-15:38:07.417.526 [topoinfo_exchange_base.cc:105][415996][Recv][ClusterInfoMsg]receive msg length from fdhandle failed, msg length is beyond [1 ~ 10485760].
[ERROR] HCCL(415996,python):2024-04-17-15:38:07.417.549 [topoinfo_exchange_agent.cc:267][415996]call trace: hcclRet -> 4
[ERROR] HCCL(415996,python):2024-04-17-15:38:07.417.561 [topoinfo_exchange_agent.cc:71][415996]call trace: hcclRet -> 4
[ERROR] HCCL(415996,python):2024-04-17-15:38:07.417.575 [topoinfo_exchange_agent.cc:43][415996]call trace: hcclRet -> 4
[ERROR] HCCL(415996,python):2024-04-17-15:38:07.417.585 [topoinfo_detect.cc:206][415996]call trace: hcclRet -> 4
[ERROR] HCCL(415996,python):2024-04-17-15:38:07.417.607 [op_base.cc:425][415996][Init][CommRootInfo]errNo[0x0000000005000004] setup topo detect error
[EVENT] HCCP(415996,python):2024-04-17-15:38:07.417.631 [ra_host.c:1797]tid:415996,ra_get_ifnum(1797) : Input parameters: phy_id[0], nic_position:[0]
[EVENT] HCCP(415996,python):2024-04-17-15:38:07.422.512 [ra_host.c:1845]tid:415996,ra_get_ifaddrs(1845) : Input parameters: phy_id[0], nic_position:[0], interface num[2]
[ERROR] HCCL(415996,python):2024-04-17-15:38:07.423.521 [op_base.cc:468][415996][Init][CommRootInfo]HcclCommInitRootInfo failed, rankNum[16], rank[13], server[172.17.0.2%eth0], return[0x0000000005000004], rootInfo identifier[172.17.0.2%eth0_60000_0_1713339421270374]
[ERROR] HCCL(415996,python):2024-04-17-15:38:07.423.906 [op_base.cc:1222][415996][HcclCommDestroy] comm is not exist, comm=0x376e5160, group=172.17.0.2%eth0_60000_0_1713339421270374, deviceLogicId=5
It seems that of the previous eight logs, only three reported the disk locking error; the rest all showed [ERROR] HCCL(412442,python):2024-04-17-09:13:30.994.753 [topoinfo_exchange_base.cc:105][412442][Recv][ClusterInfoMsg]receive msg length from fdhandle failed, msg length is beyond [1 ~ 10485760].
I'm not sure whether that refers to the system disk? I moved the code to a different disk and reran; now all 8 plogs show the error above, and there are no more lock-failure errors.
Hello, could you provide the complete logs so the HCCL team can locate the issue? Everything under /root/ascend/log is needed.
https://gitee.com/csyourui/test_data/blob/master/hccl_error_alllog.zip
Uploaded. The older logs had not been cleared beforehand, so today's two runs are the logs dated 0417.
The logs show that the host NIC IP of the two machines is the same; please check the configuration of both machines with ifconfig. You can refer to https://www.hiascend.com/document/detail/zh/canncommercial/700/reference/envvar/envref_07_0068.html for the configuration.
The two machines are configured identically; each host runs a Docker container, and 172.17.0.2 is the container's address on its respective host.
Is the Docker network not supported, or does it need special configuration?
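For context, the usual way to avoid the bridge address 172.17.0.2 is to run the container with host networking so that HCCL sees the physical NIC IPs (10.227.5.x) directly; a hedged sketch, in which the image name and mount list are placeholders:

# one --device entry per local NPU plus the Ascend management devices
docker run -it --network host \
    --device /dev/davinci0 --device /dev/davinci1 --device /dev/davinci2 --device /dev/davinci3 \
    --device /dev/davinci4 --device /dev/davinci5 --device /dev/davinci6 --device /dev/davinci7 \
    --device /dev/davinci_manager --device /dev/devmm_svm --device /dev/hisi_hdc \
    -v /usr/local/Ascend/driver:/usr/local/Ascend/driver \
    my_npu_image:latest /bin/bash

With --network host the container shares the host's interfaces, so MASTER_ADDR can be the host's bond0 address rather than the Docker bridge address.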
gpu-1@gpu-1:~$ ifconfig
bond0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST> mtu 1500
inet 10.227.5.221 netmask 255.255.255.0 broadcast 10.227.5.255
inet6 fe80::a8e5:22ff:fed5:767d prefixlen 64 scopeid 0x20
ether aa:e5:22:d5:76:7d txqueuelen 1000 (Ethernet)
RX packets 245228144 bytes 182385338495 (182.3 GB)
RX errors 0 dropped 1606657 overruns 0 frame 0
TX packets 39250476 bytes 33434217737 (33.4 GB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
inet6 fe80::42:b8ff:fec2:9833 prefixlen 64 scopeid 0x20
ether 02:42:b8:c2:98:33 txqueuelen 0 (Ethernet)
RX packets 2288843 bytes 128554092 (128.5 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 4402003 bytes 6645557237 (6.6 GB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
(base) gpu-2@gpu-2:~$ ifconfig
bond0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST> mtu 1500
inet 10.227.5.222 netmask 255.255.255.0 broadcast 10.227.5.255
inet6 fe80::54fc:11ff:fe85:39ff prefixlen 64 scopeid 0x20
ether 56:fc:11:85:39:ff txqueuelen 1000 (Ethernet)
RX packets 683025571 bytes 688431016725 (688.4 GB)
RX errors 0 dropped 2561220 overruns 0 frame 0
TX packets 660374023 bytes 949120004281 (949.1 GB)
TX errors 0 dropped 5 overruns 0 carrier 0 collisions 0
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
inet6 fe80::42:56ff:fe0e:c83d prefixlen 64 scopeid 0x20
ether 02:42:56:0e:c8:3d txqueuelen 0 (Ethernet)
RX packets 3712539 bytes 1271184747 (1.2 GB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 5673998 bytes 8412846130 (8.4 GB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Since one of the two environments serves as master_addr, one of the logs will naturally show the same local_ip and remote_ip; that should be reasonable, right?
After resetting the Docker network to the host network, the problem shown in the logs changed as well.
With master_addr set to 10.227.5.222, the two addresses on that environment are now identical.
master.plog
[EVENT] HCCP(20171,python):2024-04-25-01:24:25.859.501 [ra_host.c:414]tid:20171,ra_socket_init_v1(414) : socket init:mode=0 phy_id=0 family=2 ip=10.227.5.222
[EVENT] HCCP(20171,python):2024-04-25-01:24:25.859.520 [ra_host.c:818]tid:20171,ra_socket_listen_start(818) : Input parameters: [0]th, phy_id[0], local_ip[10.227.5.222]
[EVENT] HCCP(20171,python):2024-04-25-01:24:25.859.610 [rs_socket.c:646]tid:20171,rs_socket_listen_bind_listen(646) : socket bind: family 2, addr 10.227.5.222, port 60000
[TRACE] HCCL(20171,python):2024-04-25-01:24:25.859.737 [status:init] [op_base.cc:331][20171]HcclGetRootInfo success, take time [13785]us
[EVENT] HCCP(20171,python):2024-04-25-01:24:29.101.737 [ra_host.c:253]tid:20171,ra_init(253) : Input parameters: phy_id[0], nic_position:[1]
[EVENT] HCCP(20171,python):2024-04-25-01:24:29.135.351 [ra_hdc.c:1419]tid:20171,ra_hdc_init(1419) : hdc init start! logic id is 0, phy id is 0
[EVENT] HCCP(20171,python):2024-04-25-01:24:29.135.810 [ra_hdc.c:1454]tid:20171,ra_hdc_init(1454) : hdc init OK! phy_id[0]
[EVENT] HCCP(20171,python):2024-04-25-01:24:29.140.482 [ra_host.c:1845]tid:20171,ra_get_ifaddrs(1845) : Input parameters: phy_id[0], nic_position:[1], interface num[2]
[EVENT] HCCP(20171,python):2024-04-25-01:24:29.141.395 [ra_host.c:740]tid:20171,ra_socket_batch_connect(740) : Input parameters: [0]th, phy_id[0], local_ip[10.227.5.222], remote_ip[10.227.5.222], tag:[topo_detect_default_tag_60000]
[EVENT] HCCP(20171,python):2024-04-25-01:24:30.785.766 [ra_host.c:778]tid:20171,ra_socket_batch_close(778) : Input parameters: [0]th, phy_id[0], local_ip[10.227.5.222]
[ERROR] HCCL(20171,python):2024-04-25-01:24:30.827.995 [config.cc:354][20171][Check][RankIpFamily]rank[7] device ip family[2] is not same with others[10].
[ERROR] HCCL(20171,python):2024-04-25-01:24:30.828.011 [topoinfo_exchange_agent.cc:389][20171]call trace: hcclRet -> 1
[ERROR] HCCL(20171,python):2024-04-25-01:24:30.828.029 [topoinfo_exchange_agent.cc:49][20171]call trace: hcclRet -> 1
[ERROR] HCCL(20171,python):2024-04-25-01:24:30.828.039 [topoinfo_detect.cc:206][20171]call trace: hcclRet -> 1
[ERROR] HCCL(20171,python):2024-04-25-01:24:30.828.070 [op_base.cc:425][20171][Init][CommRootInfo]errNo[0x0000000005000001] setup topo detect error
[EVENT] HCCP(20171,python):2024-04-25-01:24:30.828.090 [ra_host.c:1797]tid:20171,ra_get_ifnum(1797) : Input parameters: phy_id[0], nic_position:[0]
[EVENT] HCCP(20171,python):2024-04-25-01:24:30.836.206 [ra_host.c:1845]tid:20171,ra_get_ifaddrs(1845) : Input parameters: phy_id[0], nic_position:[0], interface num[6]
[ERROR] HCCL(20171,python):2024-04-25-01:24:30.839.603 [op_base.cc:468][20171][Init][CommRootInfo]HcclCommInitRootInfo failed, rankNum[16], rank[0], server[10.227.5.222%bond0], return[0x0000000005000001], rootInfo identifier[10.227.5.222%bond0_60000_0_1714008265859722]
[ERROR] HCCL(20171,python):2024-04-25-01:24:30.839.871 [op_base.cc:1222][20171][HcclCommDestroy] comm is not exist, comm=0x30d36ed0, group=10.227.5.222%bond0_60000_0_1714008265859722, deviceLogicId=0
[EVENT] HCCP(20171,python):2024-04-25-01:24:30.842.689 [ra_host.c:778]tid:23349,ra_socket_batch_close(778) : Input parameters: [0]th, phy_id[0], local_ip[10.227.5.222]
slave.plog
[EVENT] HCCP(20151,python):2024-04-25-01:24:50.351.605 [ra_host.c:1797]tid:20151,ra_get_ifnum(1797) : Input parameters: phy_id[0], nic_position:[0]
[EVENT] HCCP(20151,python):2024-04-25-01:24:50.355.910 [ra_host.c:1845]tid:20151,ra_get_ifaddrs(1845) : Input parameters: phy_id[0], nic_position:[0], interface num[6]
[EVENT] HCCP(20151,python):2024-04-25-01:24:50.357.879 [ra_host.c:1637]tid:20151,ra_socket_set_white_list_status(1637) : Input parameters: enable[0]
[EVENT] HCCP(20151,python):2024-04-25-01:24:50.357.976 [ra_host.c:253]tid:20151,ra_init(253) : Input parameters: phy_id[0], nic_position:[0]
[EVENT] HCCP(20151,python):2024-04-25-01:24:50.358.209 [rs_ssl.c:1062]tid:20151,rs_ssl_init(1062) : TLS SWITCH (0)
[EVENT] HCCP(20151,python):2024-04-25-01:24:50.358.514 [rs_epoll.c:470]tid:23345,rs_epoll_handle(470) : pthread[epoll_pthread] is alive!
[EVENT] HCCP(20151,python):2024-04-25-01:24:50.358.554 [rs_epoll.c:595]tid:23346,rs_connect_handle(595) : pthread[connect_pthread] is alive!
[EVENT] HCCP(20151,python):2024-04-25-01:24:50.358.589 [rs.c:401]tid:20151,rs_init(401) : rs init success, chip_id[0]
[EVENT] HCCP(20151,python):2024-04-25-01:24:53.185.646 [ra_host.c:253]tid:20151,ra_init(253) : Input parameters: phy_id[0], nic_position:[1]
[EVENT] HCCP(20151,python):2024-04-25-01:24:53.186.295 [ra_hdc.c:1419]tid:20151,ra_hdc_init(1419) : hdc init start! logic id is 0, phy id is 0
[EVENT] HCCP(20151,python):2024-04-25-01:24:53.186.799 [ra_hdc.c:1454]tid:20151,ra_hdc_init(1454) : hdc init OK! phy_id[0]
[EVENT] HCCP(20151,python):2024-04-25-01:24:53.190.455 [ra_host.c:414]tid:20151,ra_socket_init_v1(414) : socket init:mode=0 phy_id=0 family=2 ip=10.227.5.221
[EVENT] HCCP(20151,python):2024-04-25-01:24:53.190.837 [ra_host.c:1845]tid:20151,ra_get_ifaddrs(1845) : Input parameters: phy_id[0], nic_position:[1], interface num[2]
[EVENT] HCCP(20151,python):2024-04-25-01:24:53.191.795 [ra_host.c:740]tid:20151,ra_socket_batch_connect(740) : Input parameters: [0]th, phy_id[0], local_ip[10.227.5.221], remote_ip[10.227.5.222], tag:[topo_detect_default_tag_60000]
[EVENT] HCCP(20151,python):2024-04-25-01:24:54.259.772 [ra_host.c:778]tid:20151,ra_socket_batch_close(778) : Input parameters: [0]th, phy_id[0], local_ip[10.227.5.221]
[ERROR] HCCL(20151,python):2024-04-25-01:24:54.261.535 [config.cc:354][20151][Check][RankIpFamily]rank[7] device ip family[2] is not same with others[10].
[ERROR] HCCL(20151,python):2024-04-25-01:24:54.261.552 [topoinfo_exchange_agent.cc:389][20151]call trace: hcclRet -> 1
[ERROR] HCCL(20151,python):2024-04-25-01:24:54.261.569 [topoinfo_exchange_agent.cc:49][20151]call trace: hcclRet -> 1
[ERROR] HCCL(20151,python):2024-04-25-01:24:54.261.579 [topoinfo_detect.cc:206][20151]call trace: hcclRet -> 1
[ERROR] HCCL(20151,python):2024-04-25-01:24:54.261.608 [op_base.cc:425][20151][Init][CommRootInfo]errNo[0x0000000005000001] setup topo detect error
[EVENT] HCCP(20151,python):2024-04-25-01:24:54.261.630 [ra_host.c:1797]tid:20151,ra_get_ifnum(1797) : Input parameters: phy_id[0], nic_position:[0]
[EVENT] HCCP(20151,python):2024-04-25-01:24:54.285.340 [ra_host.c:1845]tid:20151,ra_get_ifaddrs(1845) : Input parameters: phy_id[0], nic_position:[0], interface num[6]
[ERROR] HCCL(20151,python):2024-04-25-01:24:54.298.910 [op_base.cc:468][20151][Init][CommRootInfo]HcclCommInitRootInfo failed, rankNum[16], rank[8], server[10.227.5.221%bond0], return[0x0000000005000001], rootInfo identifier[10.227.5.222%bond0_60000_0_1714008265859722]
[ERROR] HCCL(20151,python):2024-04-25-01:24:54.299.494 [op_base.cc:1222][20151][HcclCommDestroy] comm is not exist, comm=0x2e421d80, group=10.227.5.222%bond0_60000_0_1714008265859722, deviceLogicId=0
[ERROR] HCCL(20171,python):2024-04-25-01:24:30.827.995 [config.cc:354][20171][Check][RankIpFamily]rank[7] device ip family[2] is not same with others[10].
The device NICs' IP address families are inconsistent. You can run the following commands on the bare-metal host to confirm:
For a single machine with 8 NPUs:
for i in {0..7}; do hccn_tool -i $i -ip -g ; done
For a single machine with 16 NPUs:
for i in {0..15}; do hccn_tool -i $i -ip -g ; done
Thanks, it does look like this is the problem, but I still don't understand how to configure this hccn_tool network. All the tutorials I found online look like this:
hccn_tool -i 0 -ip -s address 192.168.100.101 netmask 255.255.255.0
hccn_tool -i 1 -ip -s address 192.168.101.101 netmask 255.255.255.0
hccn_tool -i 2 -ip -s address 192.168.102.101 netmask 255.255.255.0
hccn_tool -i 3 -ip -s address 192.168.103.101 netmask 255.255.255.0
hccn_tool -i 4 -ip -s address 192.168.100.100 netmask 255.255.255.0
hccn_tool -i 5 -ip -s address 192.168.101.100 netmask 255.255.255.0
hccn_tool -i 6 -ip -s address 192.168.102.100 netmask 255.255.255.0
hccn_tool -i 7 -ip -s address 192.168.103.100 netmask 255.255.255.0
The above is a single-machine configuration, which looks reasonable since the cards can form their own single-machine LAN (192.*). But if I have multiple machines, and the machines are connected through LAN IPs such as 10.227.5.222 and 10.227.5.221, how should I configure it? Also, hccn assigns an IP to every card: how are those IPs related to the host's IP? If those IPs need to be on the 10.227.5.* network, can they simply be obtained? And after all that, which IP should master_addr be set to?