
[ST][MS][MF][llama2/gpt2/codellama][910B1 16P] Network training fails

DONE
Bug-Report
Created on 2024-03-19 14:30
name: Bug Report
about: Use this template for reporting a bug
labels: kind/bug

Describe the current behavior / 问题描述 (Mandatory / 必填)

Training of the llama2/gpt2/codellama networks fails on the 910B1 environment.
Model repo: https://gitee.com/mindspore/mindformers/blob/dev/docs/model_cards/llama2.md

Environment / 环境信息 (Mandatory / 必填)

  • Hardware Environment (Ascend/GPU/CPU) / 硬件环境: Ascend

/device ascend

  • Software Environment / 软件环境 (Mandatory / 必填):
    -- MindSpore version: r2.3_20240318021514_647c66237
    -- Python version: 3.7 (per the conda env path in the traceback below)
    -- OS platform and distribution (e.g., Linux Ubuntu 16.04):
    -- GCC/Compiler version (if compiled from source):

Failing versions:
CANN: Milan_C17/20240308
MS: r2.3_20240318021514_647c66237
MF: dev_20240318121523_4a844784a3b
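
The remaining blank fields can be filled from the environment itself; a minimal sketch (assuming a git checkout of mindformers in the working directory):

# Installed MindSpore version
python -c "import mindspore as ms; print(ms.__version__)"
# Python and OS information
python --version
head -n 2 /etc/os-release
# MindFormers commit, assuming it was installed from a git checkout
git -C mindformers rev-parse --short HEAD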

  • Execute Mode / 执行模式 (Mandatory / 必填) (PyNative/Graph):

/mode pynative
/mode graph

Related testcase / 关联用例 (Mandatory / 必填)

Testcase repo path: MindFormers_Test/cases/llama2/13b/train/
Testcase:
test_mf_llama2_13b_train_alpaca_16p_0001

Steps to reproduce the issue / 重现步骤 (Mandatory / 必填)

  1. Get the code from mindformers.
  2. cd mindformers/scripts
  3. Change the checkpoint and dataset paths in the config file to local paths (see the sketch after this list).
  4. Launch both 8-device groups:

bash run_distribute.sh /home/workspace/config/hccl_16p.json ./configs/llama2/run_llama2_13b_910b_finetune.yaml [0,8] finetune 16
bash run_distribute.sh /home/workspace/config/hccl_16p.json ./configs/llama2/run_llama2_13b_910b_finetune.yaml [8,16] finetune 16

  5. Check whether the network trains successfully.
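
For step 3, a minimal sketch of the path edits. The key names load_checkpoint and dataset_dir follow the usual MindFormers llama2 YAML layout but are assumptions here; verify them against the actual config file:

# Point the config at local checkpoint and dataset copies before launching
# (key names assumed; dataset_dir may appear under both train and eval sections)
sed -i 's|^load_checkpoint:.*|load_checkpoint: "/path/to/local/llama2_13b.ckpt"|' ./configs/llama2/run_llama2_13b_910b_finetune.yaml
sed -i 's|dataset_dir:.*|dataset_dir: "/path/to/local/alpaca.mindrecord"|' ./configs/llama2/run_llama2_13b_910b_finetune.yaml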

Describe the expected behavior / 预期结果 (Mandatory / 必填)

The network trains successfully.

Related log / screenshot / 日志 / 截图 (Mandatory / 必填)

[ERROR] KERNEL(2671083,ffff951de480,python):2024-03-19-11:53:08.037.101 [mindspore/ccsrc/plugin/factory/ms_factory.h:53] Register] Kernel NotEqual is already registered!
[WARNING] GE_ADPT(2671083,ffff951de480,python):2024-03-19-11:53:08.103.617 [mindspore/ccsrc/transform/symbol/acl_tdt_symbol.cc:49] LoadAcltdtApiSymbol] Dlopen /usr/local/Ascend/latest/lib64/libacl_tdt_channel.so failed!libacl_tdt_queue.so: cannot open shared object file: No such file or directory
2024-03-19 11:53:14,103 - mindformers[mindformers/tools/utils.py:153] - INFO - set output path to '/home/jenkins0/MindFormers_Test/cases/llama2/13b/train/test_mf_llama2_13b_train_alpaca_16p_0001/output'
[WARNING] DEVICE(2671083,ffff951de480,python):2024-03-19-11:53:14.284.288 [mindspore/ccsrc/plugin/device/ascend/hal/device/ascend_memory_adapter.cc:95] Initialize] Reserved memory size for other components(3187671040) is less than recommend size(4073724928), It may lead to Out Of Memory in HCCL or other components, Please double check context key 'variable_memory_max_size'/'max_device_memory'
2024-03-19 11:53:31,012 - mindformers[mindformers/tools/cloud_adapter/cloud_monitor.py:43] - ERROR - Traceback (most recent call last):
 File "/home/jenkins0/MindFormers_Test/cases/llama2/13b/train/test_mf_llama2_13b_train_alpaca_16p_0001/scripts/mf_parallel15/mindformers/core/context/build_context.py", line 116, in init_context
   init()
 File "/home/miniconda3/envs/ci/lib/python3.7/site-packages/mindspore/communication/management.py", line 188, in init
   init_hccl()
RuntimeError: acltdtCreateChannelWithCapacity is null.

----------------------------------------------------
- C++ Call Stack: (For framework developers)
----------------------------------------------------
mindspore/ccsrc/transform/symbol/symbol_utils.h:26 RunAscendApi


During handling of the above exception, another exception occurred:

Traceback (most recent call last):
 File "/home/jenkins0/MindFormers_Test/cases/llama2/13b/train/test_mf_llama2_13b_train_alpaca_16p_0001/scripts/mf_parallel15/mindformers/tools/cloud_adapter/cloud_monitor.py", line 34, in wrapper
   result = run_func(*args, **kwargs)
 File "run_mindformer.py", line 35, in main
   build_context(config)
 File "/home/jenkins0/MindFormers_Test/cases/llama2/13b/train/test_mf_llama2_13b_train_alpaca_16p_0001/scripts/mf_parallel15/mindformers/core/context/build_context.py", line 44, in build_context
   context_config=config.context, parallel_config=config.parallel)
 File "/home/jenkins0/MindFormers_Test/cases/llama2/13b/train/test_mf_llama2_13b_train_alpaca_16p_0001/scripts/mf_parallel15/mindformers/core/context/build_context.py", line 118, in init_context
   raise RuntimeError("Notice: if you are trying to run with a single device, please set "
RuntimeError: Notice: if you are trying to run with a single device, please set use_parallel=False. If not, please check the error message above.

Traceback (most recent call last):
 File "/home/jenkins0/MindFormers_Test/cases/llama2/13b/train/test_mf_llama2_13b_train_alpaca_16p_0001/scripts/mf_parallel15/mindformers/core/context/build_context.py", line 116, in init_context
   init()
 File "/home/miniconda3/envs/ci/lib/python3.7/site-packages/mindspore/communication/management.py", line 188, in init
   init_hccl()
RuntimeError: acltdtCreateChannelWithCapacity is null.

----------------------------------------------------
- C++ Call Stack: (For framework developers)
----------------------------------------------------
mindspore/ccsrc/transform/symbol/symbol_utils.h:26 RunAscendApi

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
 File "run_mindformer.py", line 267, in <module>
   main(config_)
 File "/home/jenkins0/MindFormers_Test/cases/llama2/13b/train/test_mf_llama2_13b_train_alpaca_16p_0001/scripts/mf_parallel15/mindformers/tools/cloud_adapter/cloud_monitor.py", line 44, in wrapper
   raise exc
 File "/home/jenkins0/MindFormers_Test/cases/llama2/13b/train/test_mf_llama2_13b_train_alpaca_16p_0001/scripts/mf_parallel15/mindformers/tools/cloud_adapter/cloud_monitor.py", line 34, in wrapper
   result = run_func(*args, **kwargs)
 File "run_mindformer.py", line 35, in main
   build_context(config)
 File "/home/jenkins0/MindFormers_Test/cases/llama2/13b/train/test_mf_llama2_13b_train_alpaca_16p_0001/scripts/mf_parallel15/mindformers/core/context/build_context.py", line 44, in build_context
   context_config=config.context, parallel_config=config.parallel)
 File "/home/jenkins0/MindFormers_Test/cases/llama2/13b/train/test_mf_llama2_13b_train_alpaca_16p_0001/scripts/mf_parallel15/mindformers/core/context/build_context.py", line 118, in init_context
   raise RuntimeError("Notice: if you are trying to run with a single device, please set "
RuntimeError: Notice: if you are trying to run with a single device, please set use_parallel=False. If not, please check the error message above.
Error in atexit._run_exitfuncs:
RuntimeError: The pointer[runtime_instance_] is null.

----------------------------------------------------
- Framework Unexpected Exception Raised:
----------------------------------------------------
This exception is caused by framework's unexpected error. Please create an issue at https://gitee.com/mindspore/mindspore/issues to get help.

----------------------------------------------------
- C++ Call Stack: (For framework developers)
----------------------------------------------------
mindspore/ccsrc/plugin/device/ascend/hal/hardware/ge_device_res_manager.h:82 GetStream

free(): double free detected in tcache 2
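
Reading the log bottom-up: the dlopen of /usr/local/Ascend/latest/lib64/libacl_tdt_channel.so fails because libacl_tdt_queue.so is missing, so the acltdt symbols never load and init() / init_hccl() aborts with "acltdtCreateChannelWithCapacity is null"; the use_parallel hint and the atexit double free are downstream noise. A quick way to confirm the CANN runtime is intact (a sketch; the /usr/local/Ascend prefix and set_env.sh location are the usual defaults, adjust if the package lives elsewhere):

# List the acltdt libraries shipped with the CANN package; both
# libacl_tdt_channel.so and libacl_tdt_queue.so should be present
ls /usr/local/Ascend/latest/lib64/ | grep acl_tdt
# Re-export the CANN paths in case LD_LIBRARY_PATH points at a stale install
source /usr/local/Ascend/ascend-toolkit/set_env.sh

An empty or partial listing here matches the "CANN package environment issue" root cause recorded below.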


Special notes for this issue/备注 (Optional / 选填)

Route this to 冯浩.

Comments (6)

zhangjie18 created the Bug-Report
zhangjie18 added the kind/bug label
zhangjie18 added the v2.3.0 label
zhangjie18 added the v2.3.0.alpha label
zhangjie18 added the stage/func-debug label
zhangjie18 added the sig/mindformers label
zhangjie18 added the attr/function label

Please assign a maintainer to check this issue.
@zhangjie18

Thanks for the report. You can comment //mindspore-assistant to get help faster:

  1. If you are new to MindSpore, the tutorials may already have the answer.
  2. If you are an experienced PyTorch user, you may need the following:
     1. For PyNative issues, set set_context(pynative_synchronize=True) to get an accurate error stack for debugging.
     2. For model accuracy tuning, see the tuning guide on the official website.
  3. If you are reporting a framework bug, please make sure the issue includes the MindSpore version, the backend type (CPU/GPU/Ascend), the environment, the official link to the training code, and the launch command that reproduces the error.
  4. If you have already found the root cause, you are welcome to submit a PR to the MindSpore open-source community and we will review it promptly.
zhangjie18 added the device/ascend label
xiangminshan changed the assignee from xiangminshan to 冯浩
冯浩 added 冯浩 as a collaborator
冯浩 changed the assignee from 冯浩 to niyuxin94520

CANN package environment issue.

xiangminshan changed the priority from Major to Critical
hsshuai changed the task status from TODO to VALIDATION
hsshuai added the rct/cann label
hsshuai added the rca/others label
hsshuai added the rct/refactor label
hsshuai changed the assignee from niyuxin94520 to zhangjie18
hsshuai added niyuxin94520 as a collaborator
hsshuai changed the milestone from B-SIG-MindFormers to B-SolutionTest

Regression versions:
CANN: Milan_C17/20240321
MS: r2.3.q1_20240322204955_5a94f41a4670cc (2.3.B090)
MF: r1.1.tr5_20240322145526_e61f4a0e72e69
Regression steps: follow the steps in this issue
Basic problem: resolved
[training-success screenshots omitted]
Test conclusion: regression passed
Regression date: 2024-03-26

i-robot added the foruda label
zhangjie18 changed the task status from VALIDATION to DONE
niyuxin94520 added Yulin Cao as a collaborator
