Ascend / MindSpeed-LLM
qwen1.5-32b dual-node training error

#IA5CHB · Status: DONE · Label: training issue
changhan · created 2024-06-14 09:57
1. Problem (with error log context):

During qwen1.5-32b dual-node pretraining, both nodes report the error below:

```
root@test:/home/0612-ModelLink/ModelLink# bash examples/qwen15/pretrain_qwen15_32b_ptd.sh
[2024-06-13 12:24:11,800] torch.distributed.run: [WARNING]
[2024-06-13 12:24:11,800] torch.distributed.run: [WARNING] *****************************************
[2024-06-13 12:24:11,800] torch.distributed.run: [WARNING] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
[2024-06-13 12:24:11,800] torch.distributed.run: [WARNING] *****************************************
/usr/local/python3.10.9/lib/python3.10/site-packages/torch_npu/contrib/transfer_to_npu.py:209: ImportWarning:
*************************************************************************************************************
The torch.Tensor.cuda and torch.nn.Module.cuda are replaced with torch.Tensor.npu and torch.nn.Module.npu now..
The torch.cuda.DoubleTensor is replaced with torch.npu.FloatTensor cause the double type is not supported now..
The backend in torch.distributed.init_process_group set to hccl now..
The torch.cuda.* and torch.cuda.amp.* are replaced with torch.npu.* and torch.npu.amp.* now..
The device parameters have been replaced with npu in the function below:
torch.logspace, torch.randint, torch.hann_window, torch.rand, torch.full_like, torch.ones_like, torch.rand_like, torch.randperm, torch.arange, ... [remainder of the function list garbled in the captured log]
*************************************************************************************************************
warnings.warn(msg, ImportWarning)
Traceback (most recent call last):
  File "/home/0612-ModelLink/ModelLink/pretrain_gpt.py", line 9, in <module>
    import modellink
  File "/home/0612-ModelLink/ModelLink/modellink/__init__.py", line 24, in <module>
    from . import patchs
  File "/home/0612-ModelLink/ModelLink/modellink/patchs/__init__.py", line 20, in <module>
    from .megatron_patch import exec_patch
  File "/home/0612-ModelLink/ModelLink/modellink/patchs/megatron_patch.py", line 20, in <module>
    from mindspeed.core.fusions.rotary_pos_embedding import rotary_embedding_init_wrapper
  File "/usr/local/python3.10.9/lib/python3.10/site-packages/mindspeed/core/fusions/rotary_pos_embedding.py", line 5, in <module>
    from megatron.training import get_args
  File "/home/0612-ModelLink/ModelLink/megatron/training/__init__.py", line 16, in <module>
    from .initialize import initialize_megatron
  File "/home/0612-ModelLink/ModelLink/megatron/training/initialize.py", line 18, in <module>
    from megatron.training.arguments import parse_args, validate_args
  File "/home/0612-ModelLink/ModelLink/megatron/training/arguments.py", line 14, in <module>
    from megatron.core.models.retro.utils import (
  File "/home/0612-ModelLink/ModelLink/megatron/core/models/retro/__init__.py", line 12, in <module>
    from .decoder_spec import get_retro_decoder_block_spec
  File "/home/0612-ModelLink/ModelLink/megatron/core/models/retro/decoder_spec.py", line 9, in <module>
    from megatron.core.models.gpt.gpt_layer_specs import (
  File "/home/0612-ModelLink/ModelLink/megatron/core/models/gpt/__init__.py", line 1, in <module>
    from .gpt_model import GPTModel
  File "/home/0612-ModelLink/ModelLink/megatron/core/models/gpt/gpt_model.py", line 17, in <module>
    from megatron.core.transformer.transformer_block import TransformerBlock
  File "/home/0612-ModelLink/ModelLink/megatron/core/transformer/transformer_block.py", line 16, in <module>
    from megatron.core.transformer.custom_layers.transformer_engine import (
  File "/home/0612-ModelLink/ModelLink/megatron/core/transformer/custom_layers/transformer_engine.py", line 539, in <module>
    class TEDelayedScaling(te.common.recipe.DelayedScaling):
AttributeError: module 'transformer_engine' has no attribute 'common'
/usr/local/python3.10.9/lib/python3.10/tempfile.py:860: ResourceWarning: Implicitly cleaning up <TemporaryDirectory '/tmp/tmprlccipns'>
```

2. Software versions:

- CANN version: 8.0.RC1
- Transformers version: 4.41.1
- PyTorch version: 2.2.0
- Python version: 3.10
- OS version: Ubuntu 18.04 LTS (Bionic Beaver)

3. Steps to reproduce:

After the weight conversion and dataset preprocessing below completed, the training script reported the error above.

Weight conversion command:

```
python tools/checkpoint/convert_ckpt.py \
    --model-type GPT \
    --loader llama2_hf \
    --saver megatron \
    --target-tensor-parallel-size 8 \
    --target-pipeline-parallel-size 2 \
    --num-layers-per-virtual-pipeline-stage 2 \
    --params-dtype bf16 \
    --load-dir /data/weight/Qwen1.5-32B-chat/ \
    --save-dir ./model_weights/Qwen1.5-32B-v0.1-tp8-pp2-vpp2/ \
    --tokenizer-model /data/weight/Qwen1.5-32B-chat/tokenizer.json \
    --add-qkv-bias
```

Dataset preprocessing command:

```
python ./tools/preprocess_data.py \
    --input /data/train-00000-of-00001-a09b74b3ef9c3b56.parquet \
    --tokenizer-name-or-path /data/weight/Qwen1.5-32B-chat/ \
    --output-prefix ./dataset/qwen-1.5-32b-hf/alpaca \
    --workers 4 \
    --log-interval 1000 \
    --tokenizer-type PretrainedFromHF \
    --append-eod
```

4. Log information: see the log in section 1.
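The failing line in the traceback is an import-time class definition: megatron/core/transformer/custom_layers/transformer_engine.py subclasses `te.common.recipe.DelayedScaling`, and the `transformer_engine` module that Python resolves has no `common` attribute. Below is a minimal diagnostic sketch, assuming it is run in the same Python environment that launches training (the snippet is illustrative, not from the original report):

```python
# Diagnostic sketch (assumption: run in the training environment) to see which
# transformer_engine package Python resolves and whether it exposes the
# common.recipe.DelayedScaling chain that Megatron's class definition expects.
import importlib

te = importlib.import_module("transformer_engine")
print("module file:", getattr(te, "__file__", "<namespace package, no file>"))
print("version:", getattr(te, "__version__", "<no __version__>"))
print("has common:", hasattr(te, "common"))

recipe = getattr(getattr(te, "common", None), "recipe", None)
print("has common.recipe.DelayedScaling:",
      recipe is not None and hasattr(recipe, "DelayedScaling"))
```

If the printed module file points at an unexpected location (for example a stale or stub `transformer_engine` package in site-packages), that incomplete package shadowing the expected one would explain the missing attribute.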
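Note also that the AttributeError fires while `pretrain_gpt.py` is still importing modules: in Python, the base-class expression of a `class` statement is evaluated eagerly when the defining module loads. A small illustration of that behavior (hypothetical stub module, not code from any of the repositories involved):

```python
# Hypothetical stub standing in for an incomplete transformer_engine package.
import types

te_stub = types.ModuleType("te_stub")  # deliberately has no `common` submodule

try:
    # The base-class expression is evaluated as the class statement runs, so a
    # missing attribute aborts the import of the module defining the class.
    class TEDelayedScaling(te_stub.common.recipe.DelayedScaling):
        pass
except AttributeError as exc:
    print(exc)  # module 'te_stub' has no attribute 'common'
```

This also explains why both nodes fail identically: the crash happens at import time, before any distributed communication or model setup starts, so it is independent of the dual-node configuration.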
Comments (14)