
[ST][MS][Large-Cluster Special Test] Mixtral network single-card simulated compilation fails on 910B

VALIDATION
Bug-Report
Created on 2024-05-17 11:33
name: Bug Report
about: Use this template for reporting a bug
labels: kind/bug

Describe the current behavior (Mandatory)

Single-card simulated compilation of the Mixtral network fails with an error on Ascend 910B.

Environment (Mandatory)

  • Hardware Environment (Ascend/GPU/CPU):
    /device ascend

  • Software Environment (Mandatory):
    -- MindSpore version: master_20240510061514_c6a1400a90
    -- Python version: Python 3.7 (per the traceback paths below)
    -- OS platform and distribution:
    -- GCC/Compiler version (if compiled from source):

  • Execute Mode (Mandatory) (PyNative/Graph):
    /mode graph

Related testcase (Mandatory)

mixtral_ascend910b_mixtral_8x7b_4096_64_False_8_000

Steps to reproduce the issue (Mandatory)

  1. Get the code from mindformers
  2. Set the parameter values (see the sketch after this list):
    RANK_ID: 13
    RANK_SIZE: 64
    batch_size: 1
    compile_cache: true
    data_parallel: 2
    device_id: 2
    enable_parallel_optimizer: true
    expert_num: 8
    expert_parallel: 2
    fine_grain_interleave: false
    micro_batch_interleave_num: 1
    micro_batch_num: 184
    mode: 8x7b
    model_parallel: 8
    param_init_type: bfloat16
    pipeline_stage: 4
    seq_length: 4096
    use_seq_parallel: true
  3. Set ENABLE_CELL_REUSE=1
  4. Start network training
  5. Verify that the network compiles successfully
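A minimal sketch of steps 2-4, assuming the rank and cell-reuse settings are exported as environment variables before the MindFormers entry script starts; the DEVICE_ID env-var form of device_id and the config file name mixtral_8x7b.yaml are assumptions for illustration:

import os
import subprocess

# Single-card simulated compilation: pretend to be rank 13 of a 64-rank
# cluster while actually running on local card 2 (values from this issue).
os.environ["RANK_ID"] = "13"
os.environ["RANK_SIZE"] = "64"
os.environ["DEVICE_ID"] = "2"           # assumed env-var form of device_id: 2
os.environ["ENABLE_CELL_REUSE"] = "1"   # step 3: enable cell reuse

# Step 4: launch training; the config file name is a hypothetical placeholder
# for the actual test-case YAML carrying the parallel settings listed above.
subprocess.run(
    ["python", "run_mindformer.py", "--config", "mixtral_8x7b.yaml"],
    check=True,
)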

Describe the expected behavior (Mandatory)

Simulated compilation of the network succeeds.

Related log / screenshot (Mandatory)

Traceback (most recent call last):
  File "run_mindformer.py", line 263, in <module>
    main(config_)
  File "/home/jenkins/bmz/Solution_Special_Test/cases/01special_test/mixtral/test_ms_mixtral_8x7b_910b_0001/mixtral_ascend910b_mixtral_8x7b_4096_64_False_8_001/mixtral_ascend910b_mixtral_8x7b_4096_64_0013_000/mindformers/tools/cloud_adapter/cloud_monitor.py", line 44, in wrapper
    raise exc
  File "/home/jenkins/bmz/Solution_Special_Test/cases/01special_test/mixtral/test_ms_mixtral_8x7b_910b_0001/mixtral_ascend910b_mixtral_8x7b_4096_64_False_8_001/mixtral_ascend910b_mixtral_8x7b_4096_64_0013_000/mindformers/tools/cloud_adapter/cloud_monitor.py", line 34, in wrapper
    result = run_func(*args, **kwargs)
  File "run_mindformer.py", line 39, in main
    trainer.train()
  File "/home/miniconda3/envs/ci/lib/python3.7/site-packages/mindspore/_checkparam.py", line 1372, in wrapper
    return func(*args, **kwargs)
  File "/home/jenkins/bmz/Solution_Special_Test/cases/01special_test/mixtral/test_ms_mixtral_8x7b_910b_0001/mixtral_ascend910b_mixtral_8x7b_4096_64_False_8_001/mixtral_ascend910b_mixtral_8x7b_4096_64_0013_000/mindformers/trainer/trainer.py", line 424, in train
    is_full_config=True)
  File "/home/jenkins/bmz/Solution_Special_Test/cases/01special_test/mixtral/test_ms_mixtral_8x7b_910b_0001/mixtral_ascend910b_mixtral_8x7b_4096_64_False_8_001/mixtral_ascend910b_mixtral_8x7b_4096_64_0013_000/mindformers/trainer/causal_language_modeling/causal_language_modeling.py", line 120, in train
    **kwargs)
  File "/home/jenkins/bmz/Solution_Special_Test/cases/01special_test/mixtral/test_ms_mixtral_8x7b_910b_0001/mixtral_ascend910b_mixtral_8x7b_4096_64_False_8_001/mixtral_ascend910b_mixtral_8x7b_4096_64_0013_000/mindformers/trainer/base_trainer.py", line 778, in training_process
    initial_epoch=config.runner_config.initial_epoch)
  File "/home/miniconda3/envs/ci/lib/python3.7/site-packages/mindspore/train/model.py", line 1087, in train
    initial_epoch=initial_epoch)
  File "/home/miniconda3/envs/ci/lib/python3.7/site-packages/mindspore/train/model.py", line 115, in wrapper
    func(self, *args, **kwargs)
  File "/home/miniconda3/envs/ci/lib/python3.7/site-packages/mindspore/train/model.py", line 637, in _train
    cb_params, sink_size, initial_epoch, valid_infos)
  File "/home/miniconda3/envs/ci/lib/python3.7/site-packages/mindspore/train/model.py", line 721, in _train_dataset_sink_process
    outputs = train_network(*inputs)
  File "/home/miniconda3/envs/ci/lib/python3.7/site-packages/mindspore/nn/cell.py", line 695, in __call__
    out = self.compile_and_run(*args, **kwargs)
  File "/home/miniconda3/envs/ci/lib/python3.7/site-packages/mindspore/nn/cell.py", line 1013, in compile_and_run
    self.compile(*args, **kwargs)
  File "/home/miniconda3/envs/ci/lib/python3.7/site-packages/mindspore/nn/cell.py", line 997, in compile
    jit_config_dict=self._jit_config_dict, **kwargs)
  File "/home/miniconda3/envs/ci/lib/python3.7/site-packages/mindspore/common/api.py", line 1642, in compile
    result = self._graph_executor.compile(obj, args, kwargs, phase, self._use_vm_mode())
RuntimeError: Compile graph kernel_graph1 failed.

----------------------------------------------------
- Ascend Error Message:
----------------------------------------------------
E19999: Inner Error!
E19999: 2024-05-10-19:14:30.521.527  Call OptimizeGraphPrepare failed, ret:1343225860, engine_name:hccl_graph_optimizer, graph_name:kernel_graph1[FUNC:OptimizeOriginalGraphForQuantize][FILE:graph_optimize.cc][LINE:269]
        TraceBack (most recent call last):
        [Call][PreRun] Failed, graph_id:2, session_id:0.[FUNC:CompileGraph][FILE:graph_manager.cc][LINE:4409]
        [Compile][Graph]Compile graph failed, error code:1343225857, session_id:0, graph_id:2.[FUNC:CompileGraph][FILE:ge_api.cc][LINE:1165]

(Please search "CANN Common Error Analysis" at https://www.mindspore.cn for error code description)

----------------------------------------------------
- C++ Call Stack: (For framework developers)
----------------------------------------------------
mindspore/ccsrc/plugin/device/ascend/hal/hardware/ge_graph_executor.cc:948 CompileGraph

Special notes for this issue (Optional)

Route to Zhou Yaqiang (zhouyaqiang0).

Comments (6)

baimz created the Bug-Report
baimz added the kind/bug label
baimz added the attr/function label
baimz added the stage/coding label
baimz added the master label
baimz added the sig/parallel label

Please assign a maintainer to check this issue.
@baimz

Thanks for your question. You can comment //mindspore-assistant to get help faster:

  1. If you are new to MindSpore, you may find the answer in the tutorials
  2. If you are an experienced PyTorch user, you may need:
     - For dynamic-graph (PyNative) issues, set set_context(pynative_synchronize=True) to get a usable error stack for debugging (a sketch follows this list)
     - For model accuracy tuning, see the tuning guide on the official website
  3. If you are reporting a framework bug, please confirm the issue provides the MindSpore version, the backend type (CPU/GPU/Ascend), the environment, an official link to the training code, and the launch commands needed to reproduce the error
  4. If you have already identified the root cause, you are welcome to submit a PR to the MindSpore open-source community; we will review it as soon as possible
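A minimal sketch of the debugging switch from tip 2; set_context and pynative_synchronize are named in the tip itself, while pairing them with PYNATIVE_MODE here is just the usual combination:

import mindspore as ms

# With synchronous execution, a PyNative-mode failure is raised at the exact
# Python line that launched the failing operator rather than at a later
# asynchronous flush, so the reported stack points at the real culprit.
ms.set_context(mode=ms.PYNATIVE_MODE, pynative_synchronize=True)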
duanjiali added collaborator duanjiali
duanjiali changed the assignee from duanjiali to zhouyaqiang0
fangwenyi added the device/ascend label

After lowering max_device_memory from 59GB to 57GB, the test case passes. The error occurred because MindSpore reserved too much device memory, leaving insufficient memory for TDT and HCCL. Please retest on one card of a real environment that can run the workload to confirm the problem is gone.

@zhouyaqiang0 Is there a rule of thumb for setting max_device_memory? From the error message alone, it is not apparent that the failure is caused by unreasonable device-memory allocation.

Slightly earlier in the log there are minddata-related errors, and with plog enabled there are memory-related errors as well.

fangwenyi set the related branch to master
fangwenyi set the issue backend type to Ascend
zhouyaqiang0 changed the task status from TODO to VALIDATION
zhouyaqiang0 added the ctl/solutiontest label
zhouyaqiang0 added the rca/others label
zhouyaqiang0 added the rct/cann label
zhouyaqiang0 removed collaborator duanjiali
zhouyaqiang0 changed the assignee from zhouyaqiang0 to duanjiali

Appearance & Root Cause:
MindSpore reserved too much device memory, leaving insufficient memory for TDT and HCCL.
Fix Solution:
Lower max_device_memory from 59GB to 57GB and retest on one card of a real environment that can run the workload (a sketch follows).
Relation PR:
Not involved
Selftest Result:
Not involved
Self-test Report & DT Review:
Not involved
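A minimal sketch of the fix, assuming the cap is applied through mindspore.set_context; in the actual test case the same value would presumably go through the MindFormers YAML, which is an assumption here:

import mindspore as ms

# Cap how much device memory MindSpore may reserve for itself so that TDT
# (data transfer) and HCCL (collective communication) keep enough headroom
# on the 910B card. 57GB is the value validated in this issue; it was 59GB
# when the compile failed.
ms.set_context(max_device_memory="57GB")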

duanjiali added collaborator duanjiali
duanjiali changed the assignee from duanjiali to baimz

