predict_llama2_7b.yaml
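# Usage sketch (an assumption, not part of the original file; flag names and the
# config path may differ across MindFormers releases): a predict config like this
# is typically passed to the repository's run_mindformer.py entry script, e.g.
#   python run_mindformer.py --config configs/llama2/predict_llama2_7b.yaml \
#     --run_mode predict --predict_data "<your prompt>"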
seed: 0
output_dir: './output' # path to save checkpoint/strategy
run_mode: 'predict'
use_parallel: False
load_checkpoint: ''
auto_trans_ckpt: False # If true, automatically transform the load_checkpoint weights to match the distributed model
# trainer config
trainer:
  type: CausalLanguageModelingTrainer
  model_name: 'llama2_7b'
# default parallel config for device num = 8 on Atlas 800T A2
parallel_config:
  model_parallel: 1
  pipeline_stage: 1
# mindspore context init config
context:
  mode: 0 # 0--Graph Mode; 1--Pynative Mode
  max_device_memory: "58GB"
  device_id: 0
# model config
model:
  model_config:
    type: LlamaConfig
    batch_size: 1 # can be increased for batched prediction
    seq_length: 4096
    hidden_size: 4096
    num_layers: 32
    num_heads: 32
    vocab_size: 32000
    multiple_of: 256
    rms_norm_eps: 1.0e-5
    bos_token_id: 1
    eos_token_id: 2
    pad_token_id: 0
    ignore_token_id: -100
    compute_dtype: "float16"
    layernorm_compute_type: "float32"
    softmax_compute_type: "float32"
    rotary_dtype: "float16"
    param_init_type: "float16"
    use_past: True # incremental (KV-cache) inference
    scaling_factor: 1.0 # scale factor for sequence-length extension
    extend_method: "None" # supports "None", "PI", "NTK"
    use_flash_attention: True # FlashAttention can accelerate training and finetuning
    block_size: 16 # tokens per KV-cache block (paged attention)
    num_blocks: 1024 # total KV-cache blocks; block_size * num_blocks caps cached tokens
    is_dynamic: True
    qkv_concat: False
    offset: 0
    checkpoint_name_or_path: "llama2_7b"
    repetition_penalty: 1 # 1 disables the repetition penalty
    max_decode_length: 512
    top_k: 3
    top_p: 1
    do_sample: False # greedy decoding; top_k/top_p are not used
  arch:
    type: LlamaForCausalLM
processor:
  return_tensors: ms
  tokenizer:
    unk_token: '<unk>'
    bos_token: '<s>'
    eos_token: '</s>'
    pad_token: '<unk>'
    type: LlamaTokenizer
  type: LlamaProcessor
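The sketch below is editorial, not part of the repository: it loads this file with PyYAML and sanity-checks a few inference-related values, namely the paged KV-cache capacity implied by block_size and num_blocks, a rough float16 cache-memory estimate, and the effective decoding mode. The local file path and the memory formula are assumptions, not MindFormers API calls.

import yaml

# Assumes the config above is saved locally as predict_llama2_7b.yaml.
with open("predict_llama2_7b.yaml", "r", encoding="utf-8") as f:
    cfg = yaml.safe_load(f)

mc = cfg["model"]["model_config"]

# Paged KV cache: block_size tokens per block, num_blocks blocks in the pool.
cache_tokens = mc["block_size"] * mc["num_blocks"]    # 16 * 1024 = 16384
needed_tokens = mc["batch_size"] * mc["seq_length"]   # 1 * 4096 = 4096
assert cache_tokens >= needed_tokens, "KV-cache pool smaller than batch_size * seq_length"

# Rough cache size, assuming K and V are kept in float16 (2 bytes) and that
# llama2_7b uses full multi-head attention (no grouped-query sharing), so K and V
# are each hidden_size wide per token per layer.
kv_bytes = 2 * mc["num_layers"] * cache_tokens * mc["hidden_size"] * 2
print(f"KV-cache capacity: {cache_tokens} tokens (~{kv_bytes / 2**30:.1f} GiB)")

# do_sample: False means greedy decoding, so top_k/top_p have no effect here.
print("greedy decoding:", not mc["do_sample"], "| max_decode_length:", mc["max_decode_length"])

With the values above, block_size * num_blocks gives 16384 cached tokens, four times batch_size * seq_length = 4096, which leaves headroom for longer prompts or a larger prediction batch before the block pool has to grow.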