Ascend / MindSpeed-MM
Error when training internvl3-8b

Status: DONE
Issue ID: #ICOMHR
Label: Training issue
Author: jiaow
Created: 2025-07-24 17:54
1. Problem description (with error log context):

Startup and parallel-group initialization complete normally:

```
<frozen importlib._bootstrap>:671: ImportWarning: TBEMetaPathLoader.exec_module() not found; falling back to load_module()
all tp groups [[0], [1], [2], [3], [4], [5], [6], [7]]
all ep groups [[0], [1], [2], [3], [4], [5], [6], [7]]
all dp groups [[0, 1], [2, 3], [4, 5], [6, 7]]
all_dp_modulo_exp_group_ranks [[0, 1], [2, 3], [4, 5], [6, 7]]
all_tensor_and_expert_group_ranks [[0], [1], [2], [3], [4], [5], [6], [7]]
all_data_parallel_group_ranks_with_cp [[0, 1], [2, 3], [4, 5], [6, 7]]
> initialized tensor model parallel with size 1
> initialized pipeline model parallel with size 4
> setting random seeds to 1234 ...
> compiling dataset index builder ...
make: Entering directory '/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/megatron/core/datasets'
make: Nothing to be done for 'default'.
make: Leaving directory '/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/megatron/core/datasets'
>>> done with dataset index builder. Compilation time: 0.067 seconds
time to initialize megatron (seconds): -43.989
[after megatron is initialized] datetime: 2025-07-24 17:37:24
building VLMModel ...
image encoder pipeline config: pp_rank:0, pre_process:True, post_process:True, local_num_layers:24
text decoder pipeline config: pp_rank:0, pre_process:True, post_process:False, local_num_layers:6
text decoder pipeline config: pp_rank:1, pre_process:False, post_process:False, local_num_layers:8
text decoder pipeline config: pp_rank:2, pre_process:False, post_process:False, local_num_layers:8
text decoder pipeline config: pp_rank:3, pre_process:False, post_process:True, local_num_layers:6
```

Every rank then fails with the same traceback while building VLMModel:

```
Traceback (most recent call last):
  File "/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/pretrain_vlm.py", line 158, in <module>
    pretrain(
  File "/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/mindspeed_mm/training.py", line 186, in pretrain
    model, optimizer, opt_param_scheduler = setup_model_and_optimizer(
  File "/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/MindSpeed/mindspeed/core/training.py", line 537, in wrapper
    model, optimizer, opt_param_scheduler = setup_model_and_optimizer(*args, **kwargs)
  File "/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/megatron/training/training.py", line 535, in setup_model_and_optimizer
    model = get_model(model_provider_func, model_type)
  File "/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/mindspeed_mm/tasks/finetune/lora/lora_patch.py", line 171, in wrapper
    model = fn(model_provider_func, model_type, wrap_with_ddp)
  File "/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/megatron/training/training.py", line 410, in get_model
    model = model_provider_func(
  File "/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/mindspeed_mm/tasks/finetune/lora/lora_patch.py", line 42, in wrapper
    model = model_provider_func(*args, **kwargs)
  File "/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/pretrain_vlm.py", line 41, in model_provider
    model = VLMModel(vlm_config)
  File "/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/mindspeed_mm/models/vlm_model.py", line 83, in __init__
    self.text_decoder = self._build_text_decoder_model(config.text_decoder)
  File "/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/mindspeed_mm/models/vlm_model.py", line 283, in _build_text_decoder_model
    return MMGPTModel(
  File "/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/mindspeed_mm/models/common/mm_gpt_model.py", line 108, in __init__
    self.rotary_pos_emb = DynamicRotaryEmbedding(
  File "/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/MindSpeed/mindspeed/core/models/common/embeddings/rotary_pos_embedding.py", line 111, in wrapper
    fn(self, *args, **kwargs)
TypeError: RotaryEmbedding.__init__() got an unexpected keyword argument 'config'
```
"/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/MindSpeed/mindspeed/core/models/common/embeddings/rotary_pos_embedding.py", line 111, in wrapper fn(self, *args, **kwargs): RotaryEmbedding.__init__() got an unexpected keyword argument 'config' TypeErrormodel = model_provider_func(*args, **kwargs): RotaryEmbedding.__init__() got an unexpected keyword argument 'config' File "/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/pretrain_vlm.py", line 41, in model_provider fn(self, *args, **kwargs) TypeError: RotaryEmbedding.__init__() got an unexpected keyword argument 'config' return MMGPTModel(model = VLMModel(vlm_config) File "/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/mindspeed_mm/models/common/mm_gpt_model.py", line 108, in __init__ File "/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/mindspeed_mm/models/vlm_model.py", line 83, in __init__ self.rotary_pos_emb = DynamicRotaryEmbedding( File "/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/MindSpeed/mindspeed/core/models/common/embeddings/rotary_pos_embedding.py", line 111, in wrapper self.text_decoder = self._build_text_decoder_model(config.text_decoder) File "/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/mindspeed_mm/models/vlm_model.py", line 283, in _build_text_decoder_model fn(self, *args, **kwargs) TypeError: RotaryEmbedding.__init__() got an unexpected keyword argument 'config' text decoder pipeline config: pp_rank:1, pre_process:False, post_process:False, local_num_layers:8 return MMGPTModel( File "/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/mindspeed_mm/models/common/mm_gpt_model.py", line 108, in __init__ self.rotary_pos_emb = DynamicRotaryEmbedding( File "/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/MindSpeed/mindspeed/core/models/common/embeddings/rotary_pos_embedding.py", line 111, in wrapper fn(self, *args, **kwargs) TypeError: RotaryEmbedding.__init__() got an unexpected keyword argument 'config' Traceback (most recent call last): File "/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/pretrain_vlm.py", line 158, in <module> pretrain( File "/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/mindspeed_mm/training.py", line 186, in pretrain model, optimizer, opt_param_scheduler = setup_model_and_optimizer( File "/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/MindSpeed/mindspeed/core/training.py", line 537, in wrapper model, optimizer, opt_param_scheduler = setup_model_and_optimizer(*args, **kwargs) File "/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/megatron/training/training.py", line 535, in setup_model_and_optimizer model = get_model(model_provider_func, model_type) File "/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/mindspeed_mm/tasks/finetune/lora/lora_patch.py", line 171, in wrapper model = fn(model_provider_func, model_type, wrap_with_ddp) File "/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/megatron/training/training.py", line 410, in get_model model = model_provider_func( File "/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/mindspeed_mm/tasks/finetune/lora/lora_patch.py", line 42, in wrapper model = model_provider_func(*args, **kwargs) File "/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/pretrain_vlm.py", line 41, in model_provider model = VLMModel(vlm_config) File "/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/mindspeed_mm/models/vlm_model.py", line 83, in __init__ self.text_decoder = self._build_text_decoder_model(config.text_decoder) File 
"/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/mindspeed_mm/models/vlm_model.py", line 283, in _build_text_decoder_model return MMGPTModel( File "/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/mindspeed_mm/models/common/mm_gpt_model.py", line 108, in __init__ self.rotary_pos_emb = DynamicRotaryEmbedding( File "/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/MindSpeed/mindspeed/core/models/common/embeddings/rotary_pos_embedding.py", line 111, in wrapper fn(self, *args, **kwargs) TypeError: RotaryEmbedding.__init__() got an unexpected keyword argument 'config' text decoder pipeline config: pp_rank:0, pre_process:True, post_process:False, local_num_layers:6 Traceback (most recent call last): File "/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/pretrain_vlm.py", line 158, in <module> pretrain( File "/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/mindspeed_mm/training.py", line 186, in pretrain model, optimizer, opt_param_scheduler = setup_model_and_optimizer( File "/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/MindSpeed/mindspeed/core/training.py", line 537, in wrapper model, optimizer, opt_param_scheduler = setup_model_and_optimizer(*args, **kwargs) File "/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/megatron/training/training.py", line 535, in setup_model_and_optimizer model = get_model(model_provider_func, model_type) File "/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/mindspeed_mm/tasks/finetune/lora/lora_patch.py", line 171, in wrapper model = fn(model_provider_func, model_type, wrap_with_ddp) File "/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/megatron/training/training.py", line 410, in get_model model = model_provider_func( File "/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/mindspeed_mm/tasks/finetune/lora/lora_patch.py", line 42, in wrapper model = model_provider_func(*args, **kwargs) File "/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/pretrain_vlm.py", line 41, in model_provider model = VLMModel(vlm_config) File "/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/mindspeed_mm/models/vlm_model.py", line 83, in __init__ text decoder pipeline config: pp_rank:0, pre_process:True, post_process:False, local_num_layers:6 self.text_decoder = self._build_text_decoder_model(config.text_decoder) File "/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/mindspeed_mm/models/vlm_model.py", line 283, in _build_text_decoder_model return MMGPTModel( File "/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/mindspeed_mm/models/common/mm_gpt_model.py", line 108, in __init__ self.rotary_pos_emb = DynamicRotaryEmbedding( File "/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/MindSpeed/mindspeed/core/models/common/embeddings/rotary_pos_embedding.py", line 111, in wrapper fn(self, *args, **kwargs) TypeError: RotaryEmbedding.__init__() got an unexpected keyword argument 'config' Traceback (most recent call last): File "/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/pretrain_vlm.py", line 158, in <module> pretrain( File "/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/mindspeed_mm/training.py", line 186, in pretrain model, optimizer, opt_param_scheduler = setup_model_and_optimizer( File "/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/MindSpeed/mindspeed/core/training.py", line 537, in wrapper model, optimizer, opt_param_scheduler = setup_model_and_optimizer(*args, **kwargs) File "/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/megatron/training/training.py", line 535, in 
setup_model_and_optimizer model = get_model(model_provider_func, model_type) File "/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/mindspeed_mm/tasks/finetune/lora/lora_patch.py", line 171, in wrapper model = fn(model_provider_func, model_type, wrap_with_ddp) File "/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/megatron/training/training.py", line 410, in get_model model = model_provider_func( File "/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/mindspeed_mm/tasks/finetune/lora/lora_patch.py", line 42, in wrapper model = model_provider_func(*args, **kwargs) File "/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/pretrain_vlm.py", line 41, in model_provider model = VLMModel(vlm_config) File "/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/mindspeed_mm/models/vlm_model.py", line 83, in __init__ self.text_decoder = self._build_text_decoder_model(config.text_decoder) File "/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/mindspeed_mm/models/vlm_model.py", line 283, in _build_text_decoder_model return MMGPTModel( File "/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/mindspeed_mm/models/common/mm_gpt_model.py", line 108, in __init__ self.rotary_pos_emb = DynamicRotaryEmbedding( File "/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/MindSpeed/mindspeed/core/models/common/embeddings/rotary_pos_embedding.py", line 111, in wrapper fn(self, *args, **kwargs) TypeError: RotaryEmbedding.__init__() got an unexpected keyword argument 'config' [2025-07-24 17:37:33,240] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 1197781 closing signal SIGTERM [2025-07-24 17:37:56,038] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: 1) local_rank: 1 (pid: 1197782) of binary: /opt/miniconda/envs/ascend910b/bin/python3.10 Traceback (most recent call last): File "/opt/miniconda/envs/ascend910b/bin/torchrun", line 8, in <module> sys.exit(main()) File "/opt/miniconda/envs/ascend910b/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper return f(*args, **kwargs) File "/opt/miniconda/envs/ascend910b/lib/python3.10/site-packages/torch/distributed/run.py", line 806, in main run(args) File "/opt/miniconda/envs/ascend910b/lib/python3.10/site-packages/torch/distributed/run.py", line 797, in run elastic_launch( File "/opt/miniconda/envs/ascend910b/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 134, in __call__ return launch_agent(self._config, self._entrypoint, list(args)) File "/opt/miniconda/envs/ascend910b/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 264, in launch_agent raise ChildFailedError( torch.distributed.elastic.multiprocessing.errors.ChildFailedError: ============================================================ pretrain_vlm.py FAILED ------------------------------------------------------------ Failures: [1]: time : 2025-07-24_17:37:33 host : worker-Ascend910B-74ip rank : 2 (local_rank: 2) exitcode : 1 (pid: 1197783) error_file: <N/A> traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html [2]: time : 2025-07-24_17:37:33 host : worker-Ascend910B-74ip 二、软件版本: -- CANN 版本 (e.g., CANN 3.0.x,5.x.x): --Tensorflow/Pytorch/MindSpore 版本: --Python 版本 (e.g., Python 3.7.5): -- MindStudio版本 (e.g., MindStudio 2.0.0 (beta3)): --操作系统版本 (e.g., Ubuntu 18.04): 三、测试步骤: 单节点训练internvl3-8b,训练脚本如下所示: #!/bin/bash source /usr/local/Ascend/ascend-toolkit/set_env.sh export ASCEND_SLOG_PRINT_TO_STDOUT=0 export 
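The TypeError in the traceback above is a plain keyword-argument mismatch: the MindSpeed patch wrapper at rotary_pos_embedding.py line 111 forwards everything it receives via fn(self, *args, **kwargs), and the RotaryEmbedding.__init__ it lands in has no config parameter. That pattern usually points to a version mismatch between MindSpeed-MM, MindSpeed, and the Megatron-LM checkout. A minimal sketch of the mechanism, with hypothetical stand-in signatures rather than the real MindSpeed/Megatron sources:

```python
# Hypothetical stand-ins, not the real MindSpeed/Megatron sources.

class RotaryEmbedding:
    # An __init__ built against an older API, with no 'config' parameter.
    def __init__(self, kv_channels, rotary_base=10000):
        self.kv_channels = kv_channels
        self.rotary_base = rotary_base

def rotary_init_wrapper(fn):
    # Mirrors the pattern in the traceback: a patch wrapper that blindly
    # forwards every positional and keyword argument to the wrapped __init__.
    def wrapper(self, *args, **kwargs):
        fn(self, *args, **kwargs)
    return wrapper

RotaryEmbedding.__init__ = rotary_init_wrapper(RotaryEmbedding.__init__)

# A caller written against a newer API passes config=..., and the keyword
# falls straight through the wrapper into an __init__ that cannot take it:
try:
    RotaryEmbedding(kv_channels=128, config=object())
except TypeError as e:
    print(e)  # RotaryEmbedding.__init__() got an unexpected keyword argument 'config'
```

If that reading is correct, the usual remedy is to align the Megatron-LM commit and MindSpeed branch with the versions pinned for this MindSpeed-MM release, rather than to edit the call site.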
3. Test steps:

Single-node training of internvl3-8b, using the training script below:

```bash
#!/bin/bash
source /usr/local/Ascend/ascend-toolkit/set_env.sh

export ASCEND_SLOG_PRINT_TO_STDOUT=0
export ASCEND_GLOBAL_LOG_LEVEL=3
export TASK_QUEUE_ENABLE=2
export COMBINED_ENABLE=1
export CPU_AFFINITY_CONF=1
export HCCL_CONNECT_TIMEOUT=1200
# This variable only exists to get past Megatron's validation; it has no effect on NPU
export CUDA_DEVICE_MAX_CONNECTIONS=1
#export PYTORCH_NPU_ALLOC_CONF=expandable_segments:True
export ACLNN_CACHE_LIMIT=100000

NPUS_PER_NODE=8
MASTER_ADDR=localhost
MASTER_PORT=6000
NNODES=1
NODE_RANK=0
WORLD_SIZE=$(($NPUS_PER_NODE * $NNODES))
echo $MASTER_ADDR
echo $NNODES

MBS=1
GRAD_ACC_STEP=64
TP=1
PP=4
CP=1
DP=$(($WORLD_SIZE/$TP/$PP/$CP))
GBS=$(($MBS*$GRAD_ACC_STEP*$DP))

MM_DATA="./examples/internvl3/data_8B.json"
MM_MODEL="./examples/internvl3/model_8B.json"
MM_TOOL="./mindspeed_mm/tools/tools.json"
LOAD_PATH="/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/raw_ckpt/InternVL3-8B"
SAVE_PATH="/data2/jiaow/2-internvl-train/code/train/MindSpeed-MM/output_ckpt"

MM_ARGS="
    --mm-data ${MM_DATA} \
    --mm-model ${MM_MODEL} \
    --mm-tool ${MM_TOOL}
"

DISTRIBUTED_ARGS="
    --nproc_per_node $NPUS_PER_NODE \
    --nnodes $NNODES \
    --node_rank $NODE_RANK \
    --master_addr $MASTER_ADDR \
    --master_port $MASTER_PORT
"

GPT_ARGS="
    --tensor-model-parallel-size ${TP} \
    --pipeline-model-parallel-size ${PP} \
    --context-parallel-size ${CP} \
    --micro-batch-size ${MBS} \
    --global-batch-size ${GBS} \
    --seq-length 4096 \
    --tokenizer-type NullTokenizer \
    --vocab-size 151674 \
    --position-embedding-type rope \
    --rotary-base 1000000 \
    --swiglu \
    --no-masked-softmax-fusion \
    --lr 2e-5 \
    --min-lr 0.0 \
    --train-iters 5000 \
    --lr-decay-style cosine \
    --weight-decay 0.05 \
    --clip-grad 1.0 \
    --adam-beta1 0.9 \
    --adam-beta2 0.999 \
    --no-gradient-accumulation-fusion \
    --no-load-optim \
    --no-load-rng \
    --no-save-optim \
    --no-save-rng \
    --use-distributed-optimizer \
    --use-flash-attn \
    --bf16 \
    --load $LOAD_PATH \
    --variable-seq-lengths \
    --normalization RMSNorm \
    --num-workers 4 \
    --enable-dummy-optimizer \
    --trust-remote-code \
"

OUTPUT_ARGS="
    --log-interval 1 \
    --save-interval 5000 \
    --eval-interval 5000 \
    --eval-iters 5000 \
    --save $SAVE_PATH \
"

logfile=$(date +%Y%m%d)_$(date +%H%M%S)
mkdir -p logs
torchrun $DISTRIBUTED_ARGS \
    pretrain_vlm.py \
    $GPT_ARGS \
    $MM_ARGS \
    $OUTPUT_ARGS \
    --distributed-backend nccl \
    | tee logs/train_${logfile}.log 2>&1

chmod 440 logs/train_${logfile}.log
chmod -R 640 $SAVE_PATH

STEP_TIME=`grep "elapsed time per iteration" logs/train_${logfile}.log | awk -F ':' '{print$5}' | awk -F '|' '{print$1}' | head -n 150 | tail -n 100 | awk '{sum+=$1} END {if (NR != 0) printf("%.1f",sum/NR)}'`
SAMPLES_PER_SECOND=`awk 'BEGIN{printf "%.3f\n", '${GBS}'*1000/'${STEP_TIME}'}'`
echo "Elapsed Time Per iteration: $STEP_TIME"
echo "Average Samples per Second: $SAMPLES_PER_SECOND"

LOG_TOKENS_PER_SECOND=`grep "tokens per sample" logs/train_${logfile}.log`
if [ "$LOG_TOKENS_PER_SECOND" ]; then
    AVERAGE_TOKENS=`grep "tokens per sample" logs/train_${logfile}.log | awk -F 'tokens per sample:' '{print$2}' | awk -F '|' '{print$1}' | head -n 150 | tail -n 100 | awk '{sum+=$1} END {if (NR != 0) printf("%.1f",sum/NR)}'`
    TOKENS_PER_SECOND=`awk 'BEGIN{printf "%.3f\n", '${SAMPLES_PER_SECOND}'*'${AVERAGE_TOKENS}'}'`
    echo "Consumed Tokens per Second: $TOKENS_PER_SECOND"
fi
```
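As a cross-check, the parallelism arithmetic in the script is consistent with the startup log (a worked computation under the values set above):

```
WORLD_SIZE = NPUS_PER_NODE * NNODES    = 8 * 1       = 8
DP         = WORLD_SIZE / (TP*PP*CP)   = 8 / (1*4*1) = 2
GBS        = MBS * GRAD_ACC_STEP * DP  = 1 * 64 * 2  = 128
```

DP = 2 matches the four data-parallel groups of size two ([[0, 1], [2, 3], [4, 5], [6, 7]]) printed at startup, so the parallel layout itself initialized as intended before the failure.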