
MindSpore / mindspore


[ST][MS][Large-cluster special test] Mixtral network, single-card simulated compilation on 910B: the first run fails while saving the compile cache, but a second run that loads the compile cache succeeds

TODO
Bug-Report
Created on 2024-05-17 11:30

name: Bug Report
about: Use this template for reporting a bug
labels: kind/bug

Describe the current behavior / 问题描述 (Mandatory / 必填)

When the Mixtral network is compiled in single-card simulated compilation on 910B, the first run fails while saving the compile cache, but a second run that loads the compile cache succeeds.

Environment / 环境信息 (Mandatory / 必填)

  • Hardware Environment(Ascend/GPU/CPU) / 硬件环境:

/device ascend

  • Software Environment / 软件环境 (Mandatory / 必填):
    -- MindSpore version (e.g., 1.7.0.Bxxx) : master_20240510061514_c6a1400a90
    -- Python version (e.g., Python 3.7.5) :
    -- OS platform and distribution (e.g., Linux Ubuntu 16.04):
    -- GCC/Compiler version (if compiled from source):

  • Execute Mode / 执行模式 (Mandatory / 必填)(PyNative/Graph):

/mode pynative
/mode graph

Related testcase / 关联用例 (Mandatory / 必填)

mixtral_ascend910b_mixtral_8x7b_4096_64_False_8_000

Steps to reproduce the issue / 重现步骤 (Mandatory / 必填)

  1. Get the code from mindformers
  2. Set the parameter values (see the launch sketch after this list):
    RANK_ID: 13
    RANK_SIZE: 64
    batch_size: 1
    compile_cache: true
    data_parallel: 2
    device_id: 2
    enable_parallel_optimizer: true
    expert_num: 8
    expert_parallel: 2
    fine_grain_interleave: false
    micro_batch_interleave_num: 1
    micro_batch_num: 184
    mode: 8x7b
    model_parallel: 8
    param_init_type: bfloat16
    pipeline_stage: 4
    seq_length: 4096
    use_seq_parallel: true
  3. Set ENABLE_CELL_REUSE=1
  4. Start network training
  5. Verify that the network compiles successfully
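Below is a minimal launch sketch of the reproduction, not the actual test harness: the mindformers YAML path and the cache directory are hypothetical, and it assumes the MS_COMPILER_CACHE_ENABLE / MS_COMPILER_CACHE_PATH environment variables control saving and loading of the compile cache.

import os
import subprocess

env = dict(os.environ)
env.update({
    "RANK_ID": "13",                              # rank simulated on the single card
    "RANK_SIZE": "64",                            # full cluster size from the test case
    "ENABLE_CELL_REUSE": "1",                     # step 3 of the reproduction
    "MS_COMPILER_CACHE_ENABLE": "1",              # compile_cache: true in the config
    "MS_COMPILER_CACHE_PATH": "./compile_cache",  # assumed cache location
})

# First run: the compile cache is saved; in this issue it fails in GE CompileGraph.
# Second run with the same cache path: the cache is loaded and the run "succeeds".
subprocess.run(
    ["python", "run_mindformer.py",
     "--config", "configs/mixtral/run_mixtral_8x7b.yaml"],  # hypothetical config path
    env=env,
    check=True,
)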

Describe the expected behavior / 预期结果 (Mandatory / 必填)

The simulated compilation of the network succeeds.

Related log / screenshot / 日志 / 截图 (Mandatory / 必填)

The first run, which saves the compile cache, reports an error:

Traceback (most recent call last):
  File "run_mindformer.py", line 263, in <module>
    main(config_)
  File "/home/jenkins/bmz/Solution_Special_Test/cases/01special_test/mixtral/test_ms_mixtral_8x7b_910b_0001/mixtral_ascend910b_mixtral_8x7b_4096_64_False_8_001/mixtral_ascend910b_mixtral_8x7b_4096_64_0013_000/mindformers/tools/cloud_adapter/cloud_monitor.py", line 44, in wrapper
    raise exc
  File "/home/jenkins/bmz/Solution_Special_Test/cases/01special_test/mixtral/test_ms_mixtral_8x7b_910b_0001/mixtral_ascend910b_mixtral_8x7b_4096_64_False_8_001/mixtral_ascend910b_mixtral_8x7b_4096_64_0013_000/mindformers/tools/cloud_adapter/cloud_monitor.py", line 34, in wrapper
    result = run_func(*args, **kwargs)
  File "run_mindformer.py", line 39, in main
    trainer.train()
  File "/home/miniconda3/envs/ci/lib/python3.7/site-packages/mindspore/_checkparam.py", line 1372, in wrapper
    return func(*args, **kwargs)
  File "/home/jenkins/bmz/Solution_Special_Test/cases/01special_test/mixtral/test_ms_mixtral_8x7b_910b_0001/mixtral_ascend910b_mixtral_8x7b_4096_64_False_8_001/mixtral_ascend910b_mixtral_8x7b_4096_64_0013_000/mindformers/trainer/trainer.py", line 424, in train
    is_full_config=True)
  File "/home/jenkins/bmz/Solution_Special_Test/cases/01special_test/mixtral/test_ms_mixtral_8x7b_910b_0001/mixtral_ascend910b_mixtral_8x7b_4096_64_False_8_001/mixtral_ascend910b_mixtral_8x7b_4096_64_0013_000/mindformers/trainer/causal_language_modeling/causal_language_modeling.py", line 120, in train
    **kwargs)
  File "/home/jenkins/bmz/Solution_Special_Test/cases/01special_test/mixtral/test_ms_mixtral_8x7b_910b_0001/mixtral_ascend910b_mixtral_8x7b_4096_64_False_8_001/mixtral_ascend910b_mixtral_8x7b_4096_64_0013_000/mindformers/trainer/base_trainer.py", line 778, in training_process
    initial_epoch=config.runner_config.initial_epoch)
  File "/home/miniconda3/envs/ci/lib/python3.7/site-packages/mindspore/train/model.py", line 1087, in train
    initial_epoch=initial_epoch)
  File "/home/miniconda3/envs/ci/lib/python3.7/site-packages/mindspore/train/model.py", line 115, in wrapper
    func(self, *args, **kwargs)
  File "/home/miniconda3/envs/ci/lib/python3.7/site-packages/mindspore/train/model.py", line 637, in _train
    cb_params, sink_size, initial_epoch, valid_infos)
  File "/home/miniconda3/envs/ci/lib/python3.7/site-packages/mindspore/train/model.py", line 721, in _train_dataset_sink_process
    outputs = train_network(*inputs)
  File "/home/miniconda3/envs/ci/lib/python3.7/site-packages/mindspore/nn/cell.py", line 695, in __call__
    out = self.compile_and_run(*args, **kwargs)
  File "/home/miniconda3/envs/ci/lib/python3.7/site-packages/mindspore/nn/cell.py", line 1013, in compile_and_run
    self.compile(*args, **kwargs)
  File "/home/miniconda3/envs/ci/lib/python3.7/site-packages/mindspore/nn/cell.py", line 997, in compile
    jit_config_dict=self._jit_config_dict, **kwargs)
  File "/home/miniconda3/envs/ci/lib/python3.7/site-packages/mindspore/common/api.py", line 1642, in compile
    result = self._graph_executor.compile(obj, args, kwargs, phase, self._use_vm_mode())
RuntimeError: Compile graph kernel_graph1 failed.

----------------------------------------------------
- Ascend Error Message:
----------------------------------------------------
E19999: Inner Error!
E19999: 2024-05-10-19:14:30.521.527  Call OptimizeGraphPrepare failed, ret:1343225860, engine_name:hccl_graph_optimizer, graph_name:kernel_graph1[FUNC:OptimizeOriginalGraphForQuantize][FILE:graph_optimize.cc][LINE:269]
        TraceBack (most recent call last):
        [Call][PreRun] Failed, graph_id:2, session_id:0.[FUNC:CompileGraph][FILE:graph_manager.cc][LINE:4409]
        [Compile][Graph]Compile graph failed, error code:1343225857, session_id:0, graph_id:2.[FUNC:CompileGraph][FILE:ge_api.cc][LINE:1165]

(Please search "CANN Common Error Analysis" at https://www.mindspore.cn for error code description)

----------------------------------------------------
- C++ Call Stack: (For framework developers)
----------------------------------------------------
mindspore/ccsrc/plugin/device/ascend/hal/hardware/ge_graph_executor.cc:948 CompileGraph

The second run, which loads the compile cache, succeeds:

2024-05-10 19:17:21,537 - mindformers[mindformers/core/callback/callback.py:319] - INFO - { Epoch:[  1/  1], step:[    2/  141], loss: 0.000, per_step_time: 62830ms, lr: 0.0, overflow cond: False, loss_scale: 65535.0
2024-05-10 19:17:22,088 - mindformers[mindformers/core/callback/callback.py:319] - INFO - { Epoch:[  1/  1], step:[    4/  141], loss: 0.000, per_step_time: 219ms, lr: 0.0, overflow cond: False, loss_scale: 65535.0
2024-05-10 19:17:22,561 - mindformers[mindformers/core/callback/callback.py:319] - INFO - { Epoch:[  1/  1], step:[    6/  141], loss: 0.000, per_step_time: 229ms, lr: 0.0, overflow cond: False, loss_scale: 65535.0
2024-05-10 19:17:23,025 - mindformers[mindformers/core/callback/callback.py:319] - INFO - { Epoch:[  1/  1], step:[    8/  141], loss: 0.000, per_step_time: 226ms, lr: 0.0, overflow cond: False, loss_scale: 65535.0
2024-05-10 19:17:23,493 - mindformers[mindformers/core/callback/callback.py:319] - INFO - { Epoch:[  1/  1], step:[   10/  141], loss: 0.000, per_step_time: 228ms, lr: 0.0, overflow cond: False, loss_scale: 65535.0
2024-05-10 19:17:23,964 - mindformers[mindformers/core/callback/callback.py:319] - INFO - { Epoch:[  1/  1], step:[   12/  141], loss: 0.000, per_step_time: 229ms, lr: 0.0, overflow cond: False, loss_scale: 65535.0
2024-05-10 19:17:24,427 - mindformers[mindformers/core/callback/callback.py:319] - INFO - { Epoch:[  1/  1], step:[   14/  141], loss: 0.000, per_step_time: 226ms, lr: 0.0, overflow cond: False, loss_scale: 65535.0
2024-05-10 19:17:24,896 - mindformers[mindformers/core/callback/callback.py:319] - INFO - { Epoch:[  1/  1], step:[   16/  141], l

Special notes for this issue/备注 (Optional / 选填)

Route to 肖尧 (xiaoyao).

Comments (8)

baimz created the Bug-Report
baimz added the kind/bug label
baimz added the attr/function label
baimz added the stage/coding label
baimz added the sig/parallel label
baimz added the master label

Please assign a maintainer to check this issue.
@baimz

Thank you for your question. You can comment //mindspore-assistant to get help faster:

  1. If you are new to MindSpore, you may find the answer in the tutorials
  2. If you are an experienced PyTorch user, you may need:
     1. If you run into a dynamic-graph (PyNative) issue, set set_context(pynative_synchronize=True) to get the error stack and help locate the problem
     2. For model accuracy tuning, refer to the tuning guide on the official website
     3. If you are reporting a framework bug, please make sure the issue includes the MindSpore version, the backend type used (CPU, GPU, Ascend), the environment, the official link to the training code, and how to launch the code that reproduces the error
     4. If you have already identified the root cause, you are welcome to submit a PR to the MindSpore open-source community and we will review it as soon as possible
duanjiali added collaborator duanjiali
duanjiali changed the assignee from duanjiali to xiaoyao
fangwenyi added the device/ascend label

The issue is now clear. During the first compilation, the front-end compile cache was generated, but the back-end compile cache for kernel_graph1 was not. During the second compilation, MindSpore used the script hash produced by the first compilation to decide that this was not a first compilation and passed a fake graph to GE's CompileGraph interface; GE could not find the cache files from the first compilation, so it compiled the fake graph, and the compilation "succeeded". The run can produce a loss, but the accuracy is wrong.
This is not a functional design or implementation defect; it is a compile-cache usage problem. We suggest closing this issue as "not a bug".

If the compile cache was not generated successfully on the first run, you should not simply run a second time; the cache should be deleted first (a minimal sketch follows).
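A minimal sketch of that advice, assuming the cache lives under MS_COMPILER_CACHE_PATH (the default directory below is hypothetical):

import os
import shutil

CACHE_ROOT = os.environ.get("MS_COMPILER_CACHE_PATH", "./compile_cache")  # assumed location

def clear_compile_cache(previous_run_failed: bool) -> None:
    """Delete the on-disk compile cache left behind by a failed compilation."""
    if previous_run_failed and os.path.isdir(CACHE_ROOT):
        shutil.rmtree(CACHE_ROOT)
        print(f"Removed stale compile cache at {CACHE_ROOT}; the next run will recompile from scratch.")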


@xiaoyao 1. This scenario can in theory still occur. The compile cache should in principle have an integrity check; if the cache is incomplete, the framework should do a full recompilation and emit a warning.
2. Why would the first run succeed and the second run fail?


@leiwei2

  1. The current compile cache indeed lacks this kind of verification (see the sketch below for what such a check could look like).
  2. In this case the first run failed and the second run passed, but the accuracy is wrong.
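For illustration only, a hypothetical sketch of the kind of completeness check discussed above; the file names and cache layout are assumptions, not MindSpore's actual cache format. The idea is to reuse the cache only when both the front-end cache and every expected back-end kernel_graph cache exist, and otherwise to recompile fully with a warning.

import os
import warnings

def compile_cache_is_complete(cache_dir, expected_backend_graphs):
    """Return True only if the front-end cache and all back-end graph caches exist."""
    frontend_ok = os.path.isfile(os.path.join(cache_dir, "frontend_graph.mindir"))  # assumed file name
    backend_ok = all(
        os.path.isfile(os.path.join(cache_dir, graph + ".om"))  # assumed back-end artifact name
        for graph in expected_backend_graphs
    )
    return frontend_ok and backend_ok

def should_reuse_cache(cache_dir, expected_backend_graphs):
    """Decide between reusing the cache and forcing a full recompilation."""
    if compile_cache_is_complete(cache_dir, expected_backend_graphs):
        return True
    warnings.warn("Compile cache is incomplete; falling back to a full recompilation.")
    return False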
xiaoyao removed the attr/function label
xiaoyao added the ccb/rfc label

This has been reviewed by 鲍翀 and 郭琦. Converted to a requirement.

fangwenyi set the related branch to master
fangwenyi set the issue backend type to Ascend
