MindSpore / mindformers (https://gitee.com/mindspore/mindformers.git)
910A qwen2-7B inference: error when loading weights

#IAK730 · Question · DONE · wende.ma · created 2024-08-15 11:47 · 3 comments
Trying to run qwen2-7B inference on a 910A; qwen1.5-7B inference already works on the same setup.

Environment:
- mindformers 1.0.2
- mindpet 1.0.3
- mindspore 2.2.14
- Python 3.9.18
- CANN 7.0.0
- driver 23.0.0
- branch 1.0.a

Two-card inference fails with:

```
RuntimeError: For 'load_param_into_net', model.layers.0.attention.wk.weight in the argument 'net' should have the same shape as model.layers.0.attention.wk.weight in the argument 'parameter_dict'. But got its shape (1792, 3584) in the argument 'net' and shape (256, 3584) in the argument 'parameter_dict'. May you need to check whether the checkpoint you loaded is correct or the batch size and so on in the 'net' and 'parameter_dict' are same.
```

Config file used:

```yaml
seed: 0
output_dir: './output'  # path to save checkpoint/strategy
load_checkpoint: '/home/mwd/data/Qwen2-7B'
src_strategy_path_or_dir: ''
auto_trans_ckpt: True  # If true, auto transform load_checkpoint to load in distributed model
only_save_strategy: False
resume_training: False
use_parallel: True
run_mode: 'predict'

# trainer config
trainer:
  type: CausalLanguageModelingTrainer
  model_name: 'qwen1_5_7b'

# dataset
train_dataset: &train_dataset
  data_loader:
    type: MindDataset
    dataset_dir: ""
    shuffle: True
  input_columns: ["input_ids", "labels", "attention_mask"]
  num_parallel_workers: 8
  python_multiprocessing: False
  drop_remainder: True
  batch_size: 1
  repeat: 1
  numa_enable: False
  prefetch_size: 1
train_dataset_task:
  type: CausalLanguageModelDataset
  dataset_config: *train_dataset

# runner config
runner_config:
  epochs: 5
  batch_size: 1
  sink_mode: True
  sink_size: 2
runner_wrapper:
  type: MFTrainOneStepCell
  scale_sense:
    type: DynamicLossScaleUpdateCell
    loss_scale_value: 65536
    scale_factor: 2
    scale_window: 1000
  use_clip_grad: True

# optimizer
optimizer:
  type: FP32StateAdamWeightDecay
  beta1: 0.9
  beta2: 0.95
  eps: 1.e-6
  weight_decay: 0.1

# lr schedule
lr_schedule:
  type: CosineWithWarmUpLR
  learning_rate: 1.e-5
  warmup_ratio: 0.01
  total_steps: -1  # -1 means it will load the total steps of the dataset

# callbacks
callbacks:
  - type: MFLossMonitor
  - type: CheckpointMointor
    prefix: "qwen2"
    save_checkpoint_steps: 10000
    keep_checkpoint_max: 3
    integrated_save: False
    async_save: False
  - type: ObsMonitor

# default parallel of device num = 8 for Atlas 800T A2
parallel_config:
  data_parallel: 1
  model_parallel: 2
  pipeline_stage: 1
  micro_batch_num: 1
  vocab_emb_dp: False
  gradient_aggregation_group: 4
# when model parallel is greater than 1, we can set micro_batch_interleave_num=2, that may accelerate the train process.
micro_batch_interleave_num: 1

# recompute config
recompute_config:
  recompute: False
  select_recompute: False
  parallel_optimizer_comm_recompute: False
  mp_comm_recompute: False
  recompute_slice_activation: False

model:
  model_config:
    type: LlamaConfig
    batch_size: 1
    seq_length: 2048
    hidden_size: 3584
    num_layers: 28
    num_heads: 28
    vocab_size: 152064
    intermediate_size: 18944
    qkv_has_bias: True
    rms_norm_eps: 1.0e-6
    theta: 1000000.0
    emb_dropout_prob: 0.0
    eos_token_id: 151643
    pad_token_id: 151643
    bos_token_id: 151643
    compute_dtype: "float16"
    layernorm_compute_type: "float32"
    softmax_compute_type: "float16"
    rotary_dtype: "float16"
    param_init_type: "float16"
    use_past: True
    use_flash_attention: False
    use_paged_attention: False
    # block_size: 16
    # num_blocks: 512
    use_past_shard: False
    offset: 0
    checkpoint_name_or_path: ""
    repetition_penalty: 1
    max_decode_length: 512
    top_k: 0
    top_p: 0.8
    do_sample: False
    # configuration items copied from Qwen
    rotary_pct: 1.0
    rotary_emb_base: 10000
    kv_channels: 128
  arch:
    type: LlamaForCausalLM

processor:
  return_tensors: ms
  tokenizer:
    model_max_length: 32768
    vocab_file: "../../data/Qwen2-7B/vocab.json"
    merges_file: "../../data/Qwen2-7B/merges.txt"
    unk_token: "<|endoftext|>"
    eos_token: "<|endoftext|>"
    pad_token: "<|endoftext|>"
    type: Qwen2Tokenizer
  type: Qwen2Processor

# mindspore context init config
context:
  mode: 0  # 0--Graph Mode; 1--Pynative Mode
  device_target: "Ascend"
  enable_graph_kernel: False
  graph_kernel_flags: "--disable_expand_ops=Softmax,Dropout --enable_parallel_fusion=true --reduce_fuse_depth=8 --enable_auto_tensor_inplace=true"
  ascend_config:
    precision_mode: "must_keep_origin_dtype"
  max_call_depth: 10000
  max_device_memory: "31GB"
  save_graphs: False
  save_graphs_path: "./graph"
  device_id: 0

# parallel context config
parallel:
  parallel_mode: 1  # 0-data parallel, 1-semi-auto parallel, 2-auto parallel, 3-hybrid parallel
  gradients_mean: False
  enable_alltoall: False
  full_batch: True
  search_mode: "sharding_propagation"
  enable_parallel_optimizer: True
  strategy_ckpt_config:
    save_file: "./ckpt_strategy.ckpt"
    only_trainable_params: False
  parallel_optimizer_config:
    gradient_accumulation_shard: False
    parallel_optimizer_threshold: 64
```
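The two shapes in the RuntimeError already narrow the cause down. Below is a minimal sanity-check sketch, not taken from the issue itself: it assumes Qwen2-7B's published attention layout (28 query heads, 4 KV heads, head_dim 128), and the 4-KV-head figure is an outside assumption, since the config above never declares a KV-head count. Under `model_parallel: 2`, a network built as plain multi-head attention allocates a per-card `wk` of (1792, 3584), while a checkpoint sliced for grouped-query attention with 4 KV heads yields (256, 3584). Those are exactly the two shapes in the error.

```python
# Hypothetical sanity check for the shape mismatch in the RuntimeError above.
# Assumes Qwen2-7B's published attention layout (28 query heads, 4 KV heads,
# head_dim 128); the 4-KV-head figure is NOT stated anywhere in this issue.
hidden_size = 3584                    # model_config.hidden_size
num_heads = 28                        # model_config.num_heads (query heads)
head_dim = hidden_size // num_heads   # 128
model_parallel = 2                    # parallel_config.model_parallel (two cards)

# wk rows the network allocates when the config declares no KV-head count,
# i.e. it assumes plain multi-head attention:
mha_rows = (num_heads // model_parallel) * head_dim
print((mha_rows, hidden_size))        # (1792, 3584): the 'net' shape

# wk rows in a checkpoint slice for grouped-query attention with 4 KV heads:
n_kv_heads = 4
gqa_rows = (n_kv_heads // model_parallel) * head_dim
print((gqa_rows, hidden_size))        # (256, 3584): the 'parameter_dict' shape
```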
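If that arithmetic holds, one plausible fix (not confirmed by this issue's resolution) is to declare the grouped-query layout in `model_config` so the network builds `wk`/`wv` at the checkpoint's shape. `n_kv_heads` is the field mindformers' LlamaConfig uses for this; whether the 1.0.a branch's qwen2 template honors it should be verified against that branch's reference config:

```yaml
# Hypothetical addition to model_config above; n_kv_heads: 4 is assumed from
# Qwen2-7B's released weights, not confirmed by this issue.
model:
  model_config:
    num_heads: 28     # query heads, as in the original config
    n_kv_heads: 4     # declare GQA so per-card wk becomes (256, 3584) under mp=2
```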