MindSpore / mindspore

ModelArts多节点训练gather函数报错 '>=' not supported between instances of 'str' and 'int'

WIP
Question
Created on 2024-02-01 09:31

Describe the current behavior (Mandatory)

Training on a single node with 8 cards works without any problem, but multi-node runs (e.g., 16 or 32 cards) fail with the error below.

Environment (Mandatory)

  • Hardware Environment (Ascend/GPU/CPU):

/device ascend

  • Software Environment (Mandatory):
    -- MindSpore version: 2.1.1
    -- Python version: 3.7.10
    -- OS platform and distribution: ModelArts
    -- GCC/Compiler version (if compiled from source): 7.3.0

  • Execute Mode (Mandatory) (PyNative/Graph):

/mode graph

Related testcase (Mandatory)

Steps to reproduce the issue (Mandatory)

  1. On Notebook, 8-card training works normally.
  2. On ModelArts, a single-node 8-card job trains normally.
  3. On ModelArts, a multi-node multi-card job fails with the error.

Describe the expected behavior (Mandatory)

Training completes normally.

Related log / screenshot (Mandatory)

Error log:
Default/network/acc_net/GatherV2-op115458
Traceback (most recent call last):
File "/home/ma-user/modelarts/user-job-dir/hubert_ms2/examples/ss_spksemi_msp_sub/prebucket_speechsplit_spksemi_pretrain.py", line 223, in
train()
File "/home/ma-user/modelarts/user-job-dir/hubert_ms2/haihe/adapter/moxing_adapter.py", line 194, in wrapped_func
run_func(*args, **kwargs)
File "/home/ma-user/modelarts/user-job-dir/hubert_ms2/examples/ss_spksemi_msp_sub/prebucket_speechsplit_spksemi_pretrain.py", line 206, in train
model.train(num_epochs, train_dataset, callbacks=callback_list, dataset_sink_mode=False)
File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.7/site-packages/mindspore/train/model.py", line 1066, in train
initial_epoch=initial_epoch)
File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.7/site-packages/mindspore/train/model.py", line 113, in wrapper
func(self, *args, **kwargs)
File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.7/site-packages/mindspore/train/model.py", line 613, in _train
self._train_process(epoch, train_dataset, list_callback, cb_params, initial_epoch, valid_infos)
File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.7/site-packages/mindspore/train/model.py", line 914, in _train_process
outputs = self._train_network(*next_element)
File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.7/site-packages/mindspore/nn/cell.py", line 637, in call
out = self.compile_and_run(*args, **kwargs)
File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.7/site-packages/mindspore/nn/cell.py", line 961, in compile_and_run
self.compile(*args, **kwargs)
File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.7/site-packages/mindspore/nn/cell.py", line 939, in compile
jit_config_dict=self._jit_config_dict, *compile_args, **kwargs)
File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.7/site-packages/mindspore/common/api.py", line 1623, in compile
result = self._graph_executor.compile(obj, args, kwargs, phase, self._use_vm_mode())
RuntimeError: TBE Single op compile failed. Compile failed op number:1, failed log:op: gather_v2_10893291838084043881_4.


  • Operator Compilation Exception Message: (For framework developers)

2024-01-31 10:56:29.894046+00:00: Query except_msg:Traceback (most recent call last):
File "/usr/local/Ascend/ascend-toolkit/latest/python/site-packages/te_fusion/parallel_compilation.py", line 1683, in run
optional_input_mode=self._optional_input_mode)
File "/usr/local/Ascend/ascend-toolkit/latest/python/site-packages/te_fusion/fusion_manager.py", line 1622, in build_single_op
build_res = call_op()
File "/usr/local/Ascend/ascend-toolkit/latest/python/site-packages/te_fusion/fusion_manager.py", line 1604, in call_op
build_res = build_static_op(op_info, new_attrs, caxis_valus)
File "/usr/local/Ascend/ascend-toolkit/latest/python/site-packages/te_fusion/fusion_manager.py", line 1531, in build_static_op
opfunc(*inputs, *outputs, *new_attrs, **kwargs)
File "/usr/local/Ascend/ascend-toolkit/latest/python/site-packages/tbe/common/register/operation_func_mgr.py", line 197, in wrapper
return func(*args, **kwargs)
File "/usr/local/Ascend/ascend-toolkit/latest/python/site-packages/tbe/common/utils/para_check.py", line 534, in _in_wrapper
return func(*args, **kwargs)
File "/usr/local/Ascend/ascend-toolkit/latest/opp/built-in/op_impl/ai_core/tbe/impl/dynamic/gather_v2.py", line 4466, in gather_v2
gather_v2_dsl(x, indices, axis, y, batch_dims, negative_index_support, kernel_name, impl_mode)
File "/usr/local/Ascend/ascend-toolkit/latest/opp/built-in/op_impl/ai_core/tbe/impl/dynamic/gather_v2.py", line 4379, in gather_v2_dsl
ins = classify([x, indices, real_axis, batch_dims], OpPatternMode.GATHER, {"gather_type": "gather"})
File "/usr/local/Ascend/ascend-toolkit/latest/python/site-packages/tbe/dsl/api.py", line 1136, in classify
return shape_classifier.classify(ins, mode, extra_params)
File "/usr/local/Ascend/ascend-toolkit/latest/python/site-packages/tbe/common/register/operation_func_mgr.py", line 276, in wrapper
return func(*args, **kwargs)
File "/usr/local/Ascend/ascend-toolkit/latest/python/site-packages/tbe/common/register/operation_func_mgr.py", line 276, in wrapper
return func(*args, **kwargs)
File "/usr/local/Ascend/ascend-toolkit/latest/python/site-packages/tbe/dsl/classifier/shape_classifier.py", line 90, in classify
return classifier_func(ins, extra_params)
File "/usr/local/Ascend/ascend-toolkit/latest/python/site-packages/tbe/dsl/classifier/shape_classifier.py", line 105, in wrapper
return func(*args, **kwargs)
File "/usr/local/Ascend/ascend-toolkit/latest/python/site-packages/tbe/dsl/classifier/gather_classifier.py", line 41, in classify_gather
return GatherClassifier(ins).classify()
File "/usr/local/Ascend/ascend-toolkit/latest/python/site-packages/tbe/dsl/classifier/gather_classifier.py", line 93, in init
self.axis = ins[2] if ins[2] >= 0 else ins[2] + len(self.params_shape)
TypeError: '>=' not supported between instances of 'str' and 'int'
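The failing comparison at the bottom of the trace (gather_classifier.py:93, `ins[2] >= 0`) can be reproduced in plain Python: if the axis reaches the classifier as a string instead of an int, `'>=' is not supported between str and int`. A minimal, framework-free sketch — `normalize_axis` is a hypothetical stand-in mirroring that line, not the actual TBE code:

```python
# Framework-free sketch of the axis normalization at gather_classifier.py:93.
# `normalize_axis` is a hypothetical helper, not the real TBE function.
def normalize_axis(axis, rank):
    # This is the comparison that fails when `axis` arrives as a str.
    return axis if axis >= 0 else axis + rank

print(normalize_axis(0, 2))    # int axis: fine
print(normalize_axis(-1, 2))   # negative int axis wraps around
try:
    normalize_axis("0", 2)     # a str axis reproduces the error above
except TypeError as e:
    print(type(e).__name__, e)
```

This suggests the op's axis attribute is being serialized or passed down as a string somewhere in the multi-node compile path, rather than the gather semantics themselves being wrong.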


  • The Function Call Stack: (For framework developers)

In file /home/ma-user/modelarts/user-job-dir/hubert_ms2/examples/ss_spksemi_msp_sub/speech_split.py:322/ y = self.gather(self.label_embs_concat, target, 0) # look up each class's emb by the classification sequence labels to form the label sequence/
In file /home/ma-user/modelarts/user-job-dir/hubert_ms2/examples/ss_spksemi_msp_sub/speech_split.py:392/ logit_m = self.compute_pred(proj_xs_m, ys_m)/
In file /home/ma-user/modelarts/user-job-dir/hubert_ms2/examples/ss_spksemi_msp_sub/speech_split.py:504/ loss = self.acc_net(xs, xs_len, ys, padding_mask, wavlm_mask, spk_labels)/
In file /home/ma-user/modelarts/user-job-dir/hubert_ms2/examples/ss_spksemi_msp_sub/speech_split.py:602/ loss = self.network(*input_args)/
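For reference, the semantics of the failing call at speech_split.py:322, `y = self.gather(self.label_embs_concat, target, 0)` — gathering embedding rows by label indices along axis 0 — can be sketched with NumPy. The shapes below are illustrative only, not taken from the actual model, and the key point is that the axis is a Python int:

```python
import numpy as np

# Illustrative shapes only: 4 classes, embedding dim 3.
label_embs_concat = np.arange(12.0).reshape(4, 3)
target = np.array([2, 0, 2])                    # classification sequence labels
y = np.take(label_embs_concat, target, axis=0)  # axis passed as a Python int
print(y.shape)  # (3, 3): one embedding row per label
```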


  • C++ Call Stack: (For framework developers)

mindspore/ccsrc/plugin/device/ascend/hal/device/kernel_build_ascend.cc:140 KernelBuildParallelCompile

Special notes for this issue (Optional)

Comments (11)

wangtianrui created the Bug-Report

Please assign a maintainer to check this issue.
@fangwenyi @chengxiaoli @Shawny

Thanks for your feedback. You can comment //mindspore-assistant to get help faster; see the label list for more labels.

  1. If you are new to MindSpore, you may find the answer in the tutorials.
  2. If you are an experienced PyTorch user, you may want to check: Typical differences from PyTorch / the PyTorch-to-MindSpore API mapping table.
  3. For PyNative (dynamic graph) issues, you can set mindspore.set_context(pynative_synchronize=True) to get a clearer error stack for locating the problem.
  4. For model accuracy tuning, see the tuning guide on the official website.
  5. If you are reporting a framework bug, please make sure the issue provides the MindSpore version, the backend type (CPU, GPU, Ascend), the environment, an official link to the training code, and the launch method for code that reproduces the error.
  6. If you have already found the root cause, you are welcome to submit a PR to the MindSpore open-source community; we will review it as soon as possible.
Shawny set the assignee to hedongdong
Shawny changed the status from TODO to WIP
Shawny changed the type from Bug-Report to Question
Shawny set the related project to MindSpore Issue Assistant
Shawny set the planned start date to 2024-02-01
Shawny set the planned due date to 2024-03-01
Shawny added the mindspore-assistant label
Shawny added the sig/ops label
Shawny added the sig/parallel label

Hello, we are contacting the relevant owner to investigate.

Some additional information: the model I'm training is essentially a 12-layer transformer, and I compute an extra loss from the intermediate features at layer 4, roughly:

Model part

re = []
for i in range(12):
    x = self.attention(x, padding_mask)
    if i == 4:
        re.append(x)
re.append(x)

loss

loss = self.loss1(re[0], label0) + self.loss2(re[1], label1)
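The snippet above can be sketched as a self-contained toy to show which features end up in `re` — the block-4 tap feeds loss1 and the final output feeds loss2. `attention` here is a placeholder transform, not the actual layer:

```python
# Toy version of the 12-block stack with an intermediate tap at block 4.
# `attention` is a placeholder, not the real attention layer.
def attention(x):
    return x + 1

def forward(x):
    re = []
    for i in range(12):
        x = attention(x)
        if i == 4:
            re.append(x)   # intermediate feature (output of block index 4)
    re.append(x)           # final feature after all 12 blocks
    return re

feats = forward(0)
print(len(feats), feats)   # 2 features: [5, 12]
```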

If I remove the intermediate loss1 computation, the model also trains fine on multiple nodes with multiple cards. But with it included, the gather that errors out is actually the one in loss2.

A very puzzling bug...

Could you provide a network script that reproduces the problem, along with the dataset and the launch method?

Sorry, this is my own code and dataset, so I'm unable to share them.

Shawny changed the assignee from hedongdong to yangluhang
Shawny added collaborator hedongdong

Hello, since this issue has received no reply, we will close it later. If you still have questions, please provide the details and set the issue status back to WIP, and we will continue to follow up. Thank you.
