Ascend / MindSpeed-LLM
How should the LORA_CHECKPOINT parameter be set for LoRA fine-tuning?
#I9GZWW · Requirement · DONE
g30041353 · Created 2024-04-16 11:08
How should the LORA_CHECKPOINT parameter be set for LoRA fine-tuning? In the screenshot below it is set to an empty folder (the same as the save directory). After fine-tuning, converting the trained Megatron model back to HuggingFace format fails with the following error:

```
loading checkpoint from ./output-llama2-7b-lora at iteration 200
Traceback (most recent call last):
  File "/home/guan/temp/convert/ModelLink/tools/checkpoint/convert_ckpt.py", line 98, in <module>
Loader exited, exiting saver
    main()
  File "/home/guan/temp/convert/ModelLink/tools/checkpoint/convert_ckpt.py", line 91, in main
    loader.load_checkpoint(queue, args)
  File "/home/guan/temp/convert/ModelLink/tools/checkpoint/loader_megatron.py", line 380, in load_checkpoint
    _load_checkpoint(queue, args)
  File "/home/guan/temp/convert/ModelLink/tools/checkpoint/loader_megatron.py", line 236, in _load_checkpoint
    all_models = [get_models(tp_size, md.params_dtype)]
  File "/home/guan/temp/convert/ModelLink/tools/checkpoint/loader_megatron.py", line 161, in get_models
    load_checkpoint_mg(model_, None, None)
  File "/home/guan/temp/convert/ModelLink/megatron/checkpointing.py", line 584, in load_checkpoint
    model[0].load_state_dict(state_dict['model'], strict=strict)
  File "/home/guan/temp/convert/ModelLink/modellink/model/gpt_model.py", line 103, in load_state_dict
    self.language_model.load_state_dict(state_dict, strict=strict)
  File "/home/guan/temp/convert/ModelLink/megatron/model/language_model.py", line 608, in load_state_dict
    self.encoder.load_state_dict(state_dict_, strict=strict)
  File "/home/guan/temp/convert/ModelLink/megatron/model/transformer.py", line 1793, in load_state_dict
    super().load_state_dict(state_dict_, strict)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 2152, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for ParallelTransformer:
    Missing key(s) in state_dict:
    "layers.0.input_norm.weight", "layers.0.self_attention.query_key_value.weight", "layers.0.self_attention.dense.weight", "layers.0.post_attention_norm.weight", "layers.0.mlp.dense_h_to_4h.weight", "layers.0.mlp.dense_4h_to_h.weight",
    "layers.1.input_norm.weight", "layers.1.self_attention.query_key_value.weight", "layers.1.self_attention.dense.weight", "layers.1.post_attention_norm.weight", "layers.1.mlp.dense_h_to_4h.weight", "layers.1.mlp.dense_4h_to_h.weight",
    "layers.2.input_norm.weight", "layers.2.self_attention.query_key_value.weight", "layers.2.self_attention.dense.weight", "layers.2.post_attention_norm.weight", "layers.2.mlp.dense_h_to_4h.weight", "layers.2.mlp.dense_4h_to_h.weight",
    "layers.3.input_norm.weight", "layers.3.self_attention.query_key_value.weight", "layers.3.self_attention.dense.weight", "layers.3.post_attention_norm.weight", "layers.3.mlp.dense_h_to_4h.weight", "layers.3.mlp.dense_4h_to_h.weight",
    "layers.4.input_norm.weight", "layers.4.self_attention.query_key_value.weight", "layers.4.self_attention.dense.weight", "layers.4.post_attention_norm.weight", "layers.4.mlp.dense_h_to_4h.weight", "layers.4.mlp.dense_4h_to_h.weight",
    "layers.5.input_norm.weight", "layers.5.self_attention.query_key_value.weight", "layers.5.self_attention.dense.weight", "layers.5.post_attention_norm.weight", "layers.5.mlp.dense_h_to_4h.weight", "layers.5.mlp.dense_4h_to_h.weight",
    "layers.6.input_norm.weight", "layers.6.self_attention.query_key_value.weight", "layers.6.self_attention.dense.weight", "layers.6.post_attention_norm.weight", "layers.6.mlp.dense_h_to_4h.weight", "layers.6.mlp.dense_4h_to_h.weight",
    "layers.7.input_norm.weight", "layers.7.self_attention.query_key_value.weight", "layers.7.self_attention.dense.weight", "layers.7.post_attention_norm.weight", "layers.7.mlp.dense_h_to_4h.weight", "layers.7.mlp.dense_4h_to_h.weight",
    "layers.8.input_norm.weight", "layers.8.self_attention.query_key_value.weight", "layers.8.self_attention.dense.weight", "layers.8.post_attention_norm.weight", "layers.8.mlp.dense_h_to_4h.weight", "layers.8.mlp.dense_4h_to_h.weight",
    "layers.9.input_norm.weight", "layers.9.self_attention.query_key_value.weight", "layers.9.self_attention.dense.weight", "layers.9.post_attention_norm.weight", "layers.9.mlp.dense_h_to_4h.weight", "layers.9.mlp.dense_4h_to_h.weight",
    "layers.10.input_norm.weight", "layers.10.self_attention.query_key_value.weight", "layers.10.self_attention.dense.weight", "layers.10.post_attention_norm.weight", "layers.10.mlp.dense_h_to_4h.weight", "layers.10.mlp.dense_4h_to_h.weight",
    "layers.11.input_norm.weight", "layers.11.self_attention.query_key_value.weight", "layers.11.self_attention.dense.weight", "layers.11.post_attention_norm.weight", "layers.11.mlp.dense_h_to_4h.weight", "layers.11.mlp.dense_4h_to_h.weight",
    "layers.12.input_norm.weight", "layers.12.self_attention.query_key_value.weight", "layers.12.self_attention.dense.weight", "layers.12.post_attention_norm.weight", "layers.12.mlp.dense_h_to_4h.weight", "layers.12.mlp.dense_4h_to_h.weight",
    "layers.13.input_norm.weight", "layers.13.self_attention.query_key_value.weight", "layers.13.self_attention.dense.weight", "layers.13.post_attention_norm.weight", "layers.13.mlp.dense_h_to_4h.weight", "layers.13.mlp.dense_4h_to_h.weight",
    "layers.14.input_norm.weight", "layers.14.self_attention.query_key_value.weight", "layers.14.self_attention.dense.weight", "layers.14.post_attention_norm.weight", "layers.14.mlp.dense_h_to_4h.weight", "layers.14.mlp.dense_4h_to_h.weight",
    "layers.15.input_norm.weight", "layers.15.self_attention.query_key_value.weight", "layers.15.self_attention.dense.weight", "layers.15.post_attention_norm.weight", "layers.15.mlp.dense_h_to_4h.weight", "layers.15.mlp.dense_4h_to_h.weight",
    "layers.16.input_norm.weight", "layers.16.self_attention.query_key_value.weight", "layers.16.self_attention.dense.weight", "layers.16.post_attention_norm.weight", "layers.16.mlp.dense_h_to_4h.weight", "layers.16.mlp.dense_4h_to_h.weight",
    "layers.17.input_norm.weight", "layers.17.self_attention.query_key_value.weight", "layers.17.self_attention.dense.weight", "layers.17.post_attention_norm.weight", "layers.17.mlp.dense_h_to_4h.weight", "layers.17.mlp.dense_4h_to_h.weight",
    "layers.18.input_norm.weight", "layers.18.self_attention.query_key_value.weight", "layers.18.self_attention.dense.weight", "layers.18.post_attention_norm.weight", "layers.18.mlp.dense_h_to_4h.weight", "layers.18.mlp.dense_4h_to_h.weight",
    "layers.19.input_norm.weight", "layers.19.self_attention.query_key_value.weight", "layers.19.self_attention.dense.weight", "layers.19.post_attention_norm.weight", "layers.19.mlp.dense_h_to_4h.weight", "layers.19.mlp.dense_4h_to_h.weight",
    "layers.20.input_norm.weight", "layers.20.self_attention.query_key_value.weight", "layers.20.self_attention.dense.weight", "layers.20.post_attention_norm.weight", "layers.20.mlp.dense_h_to_4h.weight", "layers.20.mlp.dense_4h_to_h.weight",
    "layers.21.input_norm.weight", "layers.21.self_attention.query_key_value.weight", "layers.21.self_attention.dense.weight", "layers.21.post_attention_norm.weight", "layers.21.mlp.dense_h_to_4h.weight", "layers.21.mlp.dense_4h_to_h.weight",
    "layers.22.input_norm.weight", "layers.22.self_attention.query_key_value.weight", "layers.22.self_attention.dense.weight", "layers.22.post_attention_norm.weight", "layers.22.mlp.dense_h_to_4h.weight", "layers.22.mlp.dense_4h_to_h.weight",
    "layers.23.input_norm.weight", "layers.23.self_attention.query_key_value.weight", "layers.23.self_attention.dense.weight", "layers.23.post_attention_norm.weight", "layers.23.mlp.dense_h_to_4h.weight", "layers.23.mlp.dense_4h_to_h.weight",
    "layers.24.input_norm.weight", "layers.24.self_attention.query_key_value.weight", "layers.24.self_attention.dense.weight", "layers.24.post_attention_norm.weight", "layers.24.mlp.dense_h_to_4h.weight", "layers.24.mlp.dense_4h_to_h.weight",
    "layers.25.input_norm.weight", "layers.25.self_attention.query_key_value.weight", "layers.25.self_attention.dense.weight", "layers.25.post_attention_norm.weight", "layers.25.mlp.dense_h_to_4h.weight", "layers.25.mlp.dense_4h_to_h.weight",
    "layers.26.input_norm.weight", "layers.26.self_attention.query_key_value.weight", "layers.26.self_attention.dense.weight", "layers.26.post_attention_norm.weight", "layers.26.mlp.dense_h_to_4h.weight", "layers.26.mlp.dense_4h_to_h.weight",
    "layers.27.input_norm.weight", "layers.27.self_attention.query_key_value.weight", "layers.27.self_attention.dense.weight", "layers.27.post_attention_norm.weight", "layers.27.mlp.dense_h_to_4h.weight", "layers.27.mlp.dense_4h_to_h.weight",
    "layers.28.input_norm.weight", "layers.28.self_attention.query_key_value.weight", "layers.28.self_attention.dense.weight", "layers.28.post_attention_norm.weight", "layers.28.mlp.dense_h_to_4h.weight", "layers.28.mlp.dense_4h_to_h.weight",
    "layers.29.input_norm.weight", "layers.29.self_attention.query_key_value.weight", "layers.29.self_attention.dense.weight", "layers.29.post_attention_norm.weight", "layers.29.mlp.dense_h_to_4h.weight", "layers.29.mlp.dense_4h_to_h.weight",
    "layers.30.input_norm.weight", "layers.30.self_attention.query_key_value.weight", "layers.30.self_attention.dense.weight", "layers.30.post_attention_norm.weight", "layers.30.mlp.dense_h_to_4h.weight", "layers.30.mlp.dense_4h_to_h.weight",
    "layers.31.input_norm.weight", "layers.31.self_attention.query_key_value.weight", "layers.31.self_attention.dense.weight", "layers.31.post_attention_norm.weight", "layers.31.mlp.dense_h_to_4h.weight", "layers.31.mlp.dense_4h_to_h.weight",
    "final_norm.weight".
    Unexpected key(s) in state_dict:
    "layers.0.self_attention.query_key_value.lora_A.default.weight", "layers.0.self_attention.query_key_value.lora_B.default.weight", "layers.0.self_attention.dense.lora_A.default.weight", "layers.0.self_attention.dense.lora_B.default.weight", "layers.0.mlp.dense_h_to_4h.lora_A.default.weight", "layers.0.mlp.dense_h_to_4h.lora_B.default.weight", "layers.0.mlp.dense_4h_to_h.lora_A.default.weight", "layers.0.mlp.dense_4h_to_h.lora_B.default.weight",
    "layers.1.self_attention.query_key_value.lora_A.default.weight", "layers.1.self_attention.query_key_value.lora_B.default.weight", "layers.1.self_attention.dense.lora_A.default.weight", "layers.1.self_attention.dense.lora_B.default.weight", "layers.1.mlp.dense_h_to_4h.lora_A.default.weight", "layers.1.mlp.dense_h_to_4h.lora_B.default.weight", "layers.1.mlp.dense_4h_to_h.lora_A.default.weight", "layers.1.mlp.dense_4h_to_h.lora_B.default.weight",
    "layers.2.self_attention.query_key_value.lora_A.default.weight", "layers.2.self_attention.query_key_value.lora_B.default.weight", "layers.2.self_attention.dense.lora_A.default.weight", "
```
Comments (3)