MindSpore / mindspore

[ST][MS][master] Ascend environment: with enable_parallel_optimizer=True combined with a supported optimizer and without allreduce group fusion, bert 8p training fails with a GE-related error

VALIDATION
Bug-Report
Created on 2024-05-20 21:25
Template: Bug Report — Use this template for reporting a bug — labels: kind/bug

Describe the current behavior (Mandatory)

With enable_parallel_optimizer=True combined with a supported optimizer, and without allreduce group fusion, the bert network's 8p training fails with a GE-related error.
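For readers unfamiliar with the feature named in the title, the failing scenario can be sketched roughly as follows. This is a hypothetical illustration using the public MindSpore auto-parallel API, not the actual test script (which lives under solution_test and is not shown in this issue):

```python
# Hypothetical sketch of the configuration under test: optimizer-state
# sharding enabled, 8 Ascend devices, and NO allreduce group fusion
# (all_reduce_fusion_config deliberately left unset), matching the
# "without allreduce group fusion" condition in the title.
import mindspore as ms

ms.set_context(mode=ms.GRAPH_MODE, device_target="Ascend")
ms.set_auto_parallel_context(
    parallel_mode="semi_auto_parallel",
    device_num=8,
    enable_parallel_optimizer=True,  # shard optimizer states across devices
    # all_reduce_fusion_config=[...]  # intentionally NOT set: no fusion groups
)
```

Under this configuration the graph compiler inserts per-parameter Broadcast/AllGather communication for the sharded optimizer states, which is where the GE memory-assignment error below is raised.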

Environment (Mandatory)

  • Hardware Environment (Ascend/GPU/CPU):

Please delete the backend not involved:
/device ascend

  • Software Environment (Mandatory):
    -- MindSpore version (e.g., 1.7.0.Bxxx):
    -- Python version (e.g., Python 3.7.5):
    -- OS platform and distribution (e.g., Linux Ubuntu 16.04):
    -- GCC/Compiler version (if compiled from source):
    MindSpore version: commit_id = '[sha1]:2f410aa8,[branch]:(HEAD,origin/master,origin/HEAD,master)'
    Run package: runpkg_version:Milan_C18/20240517

  • Execute Mode (Mandatory) (PyNative/Graph):

Please delete the mode not involved:
/mode graph

Related testcase (Mandatory)

test_ms_optimizer_automodelparallel_normal_008

Steps to reproduce the issue (Mandatory)

source /home/miniconda3/bin/activate feature_39
export TRAIN_MODE=GRAPH_MODE
export DEVICE_TYPE=Ascend910B_Arm
export ENV_DEVICE=0
source solution_test/env_set.source -e ascend

cd solution_test/remaining/test_scripts/mindspore/features/automodelparallel/optimizer_parallel
pytest -s test_ms_optimizer_automodelparallel.py::test_ms_optimizer_automodelparallel_normal_008

Describe the expected behavior (Mandatory)

Training completes normally and the test case passes.

Related log / screenshot (Mandatory)

Traceback (most recent call last):
File "/data/jenkins_workspace/TDT_deployment/solution_test/remaining/test_scripts/mindspore/features/automodelparallel/optimizer_parallel/test_ms_optimizer_bert_without_allreduce_base_loss_8p/run_pretrain.py", line 288, in <module>
run_pretrain()
File "/data/jenkins_workspace/TDT_deployment/solution_test/remaining/test_scripts/mindspore/features/automodelparallel/optimizer_parallel/test_ms_optimizer_bert_without_allreduce_base_loss_8p/src/model_utils/moxing_adapter.py", line 109, in wrapped_func
run_func(*args, **kwargs)
File "/data/jenkins_workspace/TDT_deployment/solution_test/remaining/test_scripts/mindspore/features/automodelparallel/optimizer_parallel/test_ms_optimizer_bert_without_allreduce_base_loss_8p/run_pretrain.py", line 282, in run_pretrain
model.train(new_repeat_count, ds, callbacks=callback,
File "/home/miniconda3/envs/feature_39/lib/python3.9/site-packages/mindspore/train/model.py", line 1082, in train
self._train(epoch,
File "/home/miniconda3/envs/feature_39/lib/python3.9/site-packages/mindspore/train/model.py", line 115, in wrapper
func(self, *args, **kwargs)
File "/home/miniconda3/envs/feature_39/lib/python3.9/site-packages/mindspore/train/model.py", line 636, in _train
self._train_dataset_sink_process(epoch, train_dataset, list_callback,
File "/home/miniconda3/envs/feature_39/lib/python3.9/site-packages/mindspore/train/model.py", line 721, in _train_dataset_sink_process
outputs = train_network(*inputs)
File "/home/miniconda3/envs/feature_39/lib/python3.9/site-packages/mindspore/nn/cell.py", line 696, in __call__
out = self.compile_and_run(*args, **kwargs)
File "/home/miniconda3/envs/feature_39/lib/python3.9/site-packages/mindspore/nn/cell.py", line 1014, in compile_and_run
self.compile(*args, **kwargs)
File "/home/miniconda3/envs/feature_39/lib/python3.9/site-packages/mindspore/nn/cell.py", line 997, in compile
_cell_graph_executor.compile(self, *self._compile_args, phase=self.phase,
File "/home/miniconda3/envs/feature_39/lib/python3.9/site-packages/mindspore/common/api.py", line 1643, in compile
result = self._graph_executor.compile(obj, args, kwargs, phase, self._use_vm_mode())
RuntimeError: Compile graph kernel_graph1 failed.


  • Ascend Error Message:

E19999: Inner Error!
E19999: 2024-05-18-21:40:40.383.389 Node[Default/network/Switch-op1_kernel_graph2/Default/network/optimizer/Broadcast-op1]input offset [14820334080]should equal to output offset[14820334592]with ref in[23]to output[23][FUNC:CheckRefNodeOffset][FILE:graph_mem_assigner.cc][LINE:2272]
TraceBack (most recent call last):
[Call][PreRun] Failed, graph_id:2, session_id:0.[FUNC:CompileGraph][FILE:graph_manager.cc][LINE:4542]
[Compile][Graph]Compile graph failed, error code:1343225857, session_id:0, graph_id:2.[FUNC:CompileGraph][FILE:ge_api.cc][LINE:1178]
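One observation on the numbers in the E19999 message above (an observation from the log, not a confirmed root cause): the input and output offsets that GE's ref-node check expects to be equal differ by exactly 512 bytes, and both are 512-byte aligned, which matches the typical memory-alignment granularity used by the GE memory assigner:

```python
# The two offsets reported by graph_mem_assigner.cc in the error above.
input_offset = 14_820_334_080
output_offset = 14_820_334_592

# The mismatch is exactly one 512-byte alignment block.
print(output_offset - input_offset)  # 512

# Both offsets are themselves 512-byte aligned.
print(input_offset % 512)   # 0
print(output_offset % 512)  # 0
```

This suggests the Broadcast node's ref input/output were assigned to adjacent alignment blocks rather than the same address, but confirming that requires the GE memory-assignment logs.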

Special notes for this issue (Optional)

Forward to 俞超杰 (yuchaojie).

Comments (4)

wenli created this Bug-Report
wenli added labels: kind/bug, master, device/ascend, sig/ascend, attr/function, stage/func-debug
wenli added collaborator mudongrui

Please assign a maintainer to check this issue.
@wenli

Thanks for your report. You can comment //mindspore-assistant to get help faster:

  1. If you are new to MindSpore, you may find the answer in the tutorials
  2. If you are an experienced PyTorch user, you may need:
     1. For dynamic-graph (PyNative) issues, set set_context(pynative_synchronize=True) to get an error stack that helps locate the problem
     2. For model accuracy tuning, see the tuning guide on the official website
  3. If you are reporting a framework bug, please confirm the issue provides the MindSpore version, the backend type used (CPU, GPU, Ascend), the environment, an official link to the training code, and a way to launch code that reproduces the error
  4. If you have already identified the root cause, you are welcome to submit a PR to the MindSpore open-source community; we will review it as soon as possible
i-robot added label: dts-szv
yuchaojie added label: rct/cann
wenli added label: br_base
fangwenyi set related branch: master
fangwenyi set related branch: br_base
fangwenyi set issue backend type: Ascend
yuchaojie added labels: rca/others, ctl/solutiontest
yuchaojie added collaborator yuchaojie
yuchaojie changed assignee from yuchaojie to wenli
yuchaojie changed status from TODO to VALIDATION
yuchaojie changed milestone from B-SIG-ASCEND to B-SolutionTest
