[WARNING] PARALLEL(1390182,ffff8723a010,python):2024-10-11-08:58:27.466.383 [mindspore/ccsrc/frontend/parallel/step_parallel.cc:1929] ReshapeInit] FindNextLayout for Default/network-MFTrainOneStepCell/network-_VirtualDatasetCell/_backbone-PetModel/pet_model-LoraModel/lora_model-LlamaForCausalLM/model-LlamaModel/layers-CellList/31-LLamaDecodeLayer/attention-LLamaAttention/Reshape-op0 return nullptr, and is_next_reshape is 1. If reshape is not the last primitive, there must be some error.
[WARNING] PARALLEL(1390182,ffff8723a010,python):2024-10-11-08:58:47.494.697 [mindspore/ccsrc/frontend/parallel/graph_util/graph_utils.cc:68] GetTensorRedistributionFromCNode] Default/network-MFTrainOneStepCell/clip_grad_norm-ClipGradNorm/Sqrt-op0 has no OperatorInfo.
[ERROR] GE_ADPT(1390182,fff99c515160,python):2024-10-11-09:03:00.854.335 [mindspore/ccsrc/transform/graph_ir/graph_runner.cc:371] RunGraphWithStreamAsync] Call GE RunGraphWithStreamAsync Failed, ret is: 4294967295
2024-10-11 09:04:01,007 - mindformers[mindformers/tools/cloud_adapter/cloud_monitor.py:43] - ERROR - Traceback (most recent call last):
File "/home/ma-user/work/mindformers/mindformers/tools/cloud_adapter/cloud_monitor.py", line 34, in wrapper
result = run_func(*args, **kwargs)
File "/home/ma-user/work/mindformers/run_mindformer.py", line 41, in main
trainer.train()
File "/home/ma-user/miniconda3/lib/python3.9/site-packages/mindspore/_checkparam.py", line 1372, in wrapper
return func(*args, **kwargs)
File "/home/ma-user/work/mindformers/mindformers/trainer/trainer.py", line 421, in train
self.trainer.train(
File "/home/ma-user/work/mindformers/mindformers/trainer/causal_language_modeling/causal_language_modeling.py", line 113, in train
self.training_process(
File "/home/ma-user/work/mindformers/mindformers/trainer/base_trainer.py", line 788, in training_process
model.train(config.runner_config.epochs, dataset,
File "/home/ma-user/miniconda3/lib/python3.9/site-packages/mindspore/train/model.py", line 1082, in train
self._train(epoch,
File "/home/ma-user/miniconda3/lib/python3.9/site-packages/mindspore/train/model.py", line 115, in wrapper
func(self, *args, **kwargs)
File "/home/ma-user/miniconda3/lib/python3.9/site-packages/mindspore/train/model.py", line 636, in _train
self._train_dataset_sink_process(epoch, train_dataset, list_callback,
File "/home/ma-user/miniconda3/lib/python3.9/site-packages/mindspore/train/model.py", line 721, in _train_dataset_sink_process
outputs = train_network(*inputs)
File "/home/ma-user/miniconda3/lib/python3.9/site-packages/mindspore/nn/cell.py", line 695, in call
out = self.compile_and_run(*args, **kwargs)
File "/home/ma-user/miniconda3/lib/python3.9/site-packages/mindspore/nn/cell.py", line 1016, in compile_and_run
return _cell_graph_executor(self, *new_args, phase=self.phase)
File "/home/ma-user/miniconda3/lib/python3.9/site-packages/mindspore/common/api.py", line 1684, in call
return self.run(obj, *args, phase=phase)
File "/home/ma-user/miniconda3/lib/python3.9/site-packages/mindspore/common/api.py", line 1723, in run
return self._exec_pip(obj, *args, phase=phase_real)
File "/home/ma-user/miniconda3/lib/python3.9/site-packages/mindspore/common/api.py", line 132, in wrapper
results = fn(*arg, **kwargs)
File "/home/ma-user/miniconda3/lib/python3.9/site-packages/mindspore/common/api.py", line 1703, in _exec_pip
return self._graph_executor(args, phase)
RuntimeError: Exec graph failed
- Ascend Error Message:
EL0004: 2024-10-11-09:02:31.185.921 Failed to allocate memory.
Possible Cause: Available memory is insufficient.
Solution: Close applications not in use.
TraceBack (most recent call last):
rtKernelGetAddrAndPrefCntV2 execute failed, reason=[kernel lookup error][FUNC:FuncErrorReason][FILE:error_message_manage.cc][LINE:53]
Call rtKernelGetAddrAndPrefCntV2(bin_handle, tiling_key, nullptr, RT_DYNAMIC_SHAPE_KERNEL, &kernel_info) fail, ret: 0x7BC93[FUNC:GetAddrAndPrefCnt][FILE:tbe_kernel_handle.cc][LINE:141]
Assert ((ffts_proto_transfer_->Transfer(op_desc_, task_def.ffts_plus_task(), ffts_plus_task_info_, desc_buf_host_, desc_buffer_len_)) == ge::SUCCESS) failed[FUNC:Init][FILE:ffts_plus_task_info.cc][LINE:153]
Failed to init task index 4126, related node Default/network-MFTrainOneStepCell/network-_VirtualDatasetCell/_backbone-PetModel/pet_model-LoraModel/lora_model-LlamaForCausalLM/model-LlamaModel/layers-CellList/0-LLamaDecodeLayer/attention-LLamaAttention/flash_attention-FlashAttention/FlashAttentionScore-op1[FUNC:InitTaskInfoV2][FILE:model_args_manager.cc][LINE:215]
Call ops_kernel_info_store unloadTask fail[FUNC:Release][FILE:hccl_task_info.cc][LINE:683]
Check param ops_kernel_store nullptr[FUNC:Release][FILE:hccl_task_info.cc][LINE:675]
GraphManager RunGrapWithStreamhAsync failed,session id = 0, graph id = 2, stream = 0xaaab29bf5410.[FUNC:RunGraphWithStreamAsync][FILE:inner_session.cc][LINE:513]
[Run][Graph]Run graph with stream asyn failed, error code = 1343225857, session id = 0,graph id = 2, stream = 0xaaab29bf5410.[FUNC:RunGraphWithStreamAsync][FILE:ge_api.cc][LINE:800]
(Please search "CANN Common Error Analysis" at https://www.mindspore.cn for error code description)
- C++ Call Stack: (For framework developers)
mindspore/ccsrc/plugin/device/ascend/hal/hardware/ge_graph_executor.cc:1295 RunGraphRefMode
Traceback (most recent call last):
  File "/home/ma-user/work/mindformers/run_mindformer.py", line 271, in <module>
    main(config_)
  File "/home/ma-user/work/mindformers/mindformers/tools/cloud_adapter/cloud_monitor.py", line 44, in wrapper
    raise exc
  [... the remaining frames, the "RuntimeError: Exec graph failed", and the Ascend EL0004 message repeat the traceback above verbatim: cloud_monitor.py logs the error and then re-raises it ...]
[WARNING] MD(1390182,ffff8723a010,python):2024-10-11-09:04:01.290.609 [mindspore/ccsrc/minddata/dataset/engine/datasetops/data_queue_op.cc:163] ~DataQueueOp]
preprocess_batch: 100;
batch_queue: 1, 1, 1, 1, 1, 1, 1, 1, 1, 1;
push_start_time -> push_end_time
2024-10-11-08:57:49.283.598 -> 2024-10-11-08:57:49.283.749
2024-10-11-08:57:49.283.832 -> 2024-10-11-08:57:49.283.979
2024-10-11-08:57:49.284.064 -> 2024-10-11-08:57:49.284.208
2024-10-11-08:57:49.284.293 -> 2024-10-11-08:57:49.284.439
2024-10-11-08:57:49.284.525 -> 2024-10-11-08:57:49.284.671
2024-10-11-08:57:49.284.759 -> 2024-10-11-08:57:49.284.904
2024-10-11-08:57:49.284.990 -> 2024-10-11-08:57:49.285.137
2024-10-11-08:57:49.285.220 -> 2024-10-11-08:57:49.285.367
2024-10-11-08:57:49.285.453 -> 2024-10-11-08:57:49.285.596
2024-10-11-08:57:49.285.686 -> 2024-10-11-08:57:49.285.834
For more details, please refer to the FAQ at https://www.mindspore.cn/docs/en/master/faq/data_processing.html.
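
Taken together, the logs point at a single root cause: the Ascend EL0004 error ("Failed to allocate memory") raised while setting up the FlashAttentionScore task. GE's RunGraphWithStreamAsync then returns 4294967295 (-1 as an unsigned 32-bit value) and MindSpore surfaces it as "RuntimeError: Exec graph failed"; the PARALLEL warnings and the DataQueueOp dump are most likely incidental. Before relaunching, it is worth confirming how much HBM is actually free on the card (npu-smi info reports per-device usage, and a crashed previous job can keep memory pinned) and making the memory ceiling explicit. A minimal sketch, assuming a single Ascend card and MindSpore 2.x; the "50GB" cap is an illustrative value, not a verified one:

# Minimal sketch: make the device-memory ceiling explicit before building the graph.
# Assumption: a single Ascend card; max_device_memory mirrors the YAML key
# context.max_device_memory and must stay below the HBM actually free on the device.
import mindspore as ms

ms.set_context(mode=ms.GRAPH_MODE, device_target="Ascend",
               max_device_memory="50GB")  # illustrative; the failing run used "55GB"

The full YAML used for the failing run and the launch command are reproduced below; the memory-relevant keys are train_dataset.batch_size, model_config.seq_length, param_init_type, recompute_config, and context.max_device_memory.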
seed: 42
output_dir: './output'
load_checkpoint: '/home/ma-user/qwen15/ckpt/transform.ckpt'
auto_trans_ckpt: True  # If true, auto transform load_checkpoint to load in distributed model
only_save_strategy: False
resume_training: False
run_mode: 'finetune'

# trainer config
trainer:
  type: CausalLanguageModelingTrainer
  model_name: 'qwen2_7b'
# if True, do evaluate during the training process. if false, do nothing.
# note that the task trainer should support _evaluate_in_training function.
do_eval: False
eval_step_interval: -1   # num of step intervals between each eval, -1 means no step end eval.
eval_epoch_interval: 50  # num of epoch intervals between each eval, 1 means eval on every epoch end.

# runner config
runner_config:
  epochs: 5
  batch_size: 1
  sink_mode: True
  sink_size: 1

# wrapper cell config
runner_wrapper:
  type: MFTrainOneStepCell
  scale_sense:
    type: DynamicLossScaleUpdateCell
    loss_scale_value: 4096
    scale_factor: 2
    scale_window: 1000
  use_clip_grad: True

# optimizer
optimizer:
  type: FP32StateAdamWeightDecay
  beta1: 0.9
  beta2: 0.95
  eps: 1.e-8
  learning_rate: 1.e-6
  weight_decay: 0.01

# lr schedule
lr_schedule:
  type: CosineWithWarmUpLR
  learning_rate: 1.e-6
  warmup_ratio: 0.01
  total_steps: -1  # -1 means it will load the total steps of the dataset

# dataset
train_dataset: &train_dataset
  data_loader:
    type: MindDataset
    dataset_dir: ""
    shuffle: True
  input_columns: ["input_ids", "target_ids", "attention_mask"]
  num_parallel_workers: 8
  python_multiprocessing: False
  drop_remainder: True
  batch_size: 4
  repeat: 1
  numa_enable: False
  prefetch_size: 1
train_dataset_task:
  type: CausalLanguageModelDataset
  dataset_config: *train_dataset

# eval dataset
eval_dataset: &eval_dataset
  data_loader:
    type: MindDataset
    dataset_dir: ""
    shuffle: False
  input_columns: ["input_ids", "target_ids", "attention_mask"]
  num_parallel_workers: 1
  python_multiprocessing: False
  drop_remainder: False
  repeat: 1
  numa_enable: False
  prefetch_size: 1
eval_dataset_task:
  type: CausalLanguageModelDataset
  dataset_config: *eval_dataset

use_parallel: False

# parallel context config
parallel:
  parallel_mode: 1  # 0-data parallel, 1-semi-auto parallel, 2-auto parallel, 3-hybrid parallel
  gradients_mean: False
  enable_alltoall: False
  full_batch: True
  search_mode: "sharding_propagation"
  enable_parallel_optimizer: True
  strategy_ckpt_save_file: "./ckpt_strategy.ckpt"
  parallel_optimizer_config:
    gradient_accumulation_shard: False
    parallel_optimizer_threshold: 64
# default parallel config for device num = 8 on 910B
parallel_config:
  data_parallel: 1
  model_parallel: 1
  pipeline_stage: 1
  use_seq_parallel: True
  micro_batch_num: 1
  vocab_emb_dp: True
  gradient_aggregation_group: 4
# when model parallel is greater than 1, we can set micro_batch_interleave_num=2, that may accelerate the train process.
micro_batch_interleave_num: 1

# recompute config
recompute_config:
  recompute: True
  select_recompute: False
  parallel_optimizer_comm_recompute: False
  mp_comm_recompute: True
  recompute_slice_activation: False

# callbacks
callbacks:
  - type: MFLossMonitor
  - type: CheckpointMonitor
    prefix: "qwen2"
    save_checkpoint_steps: 5000
    keep_checkpoint_max: 1
    integrated_save: False
    async_save: False
  - type: ObsMonitor

# mindspore context init config
context:
  mode: 0  # 0--Graph Mode; 1--Pynative Mode
  device_target: "Ascend"
  enable_graph_kernel: False
  max_call_depth: 10000
  max_device_memory: "55GB"
  save_graphs: False
  save_graphs_path: "./graph"
  device_id: 0
  jit_config:
    jit_level: "O1"
  ascend_config:
    precision_mode: "must_keep_origin_dtype"

# model config
model:
  model_config:
    type: LlamaConfig
    batch_size: 1  # add for increase predict
    seq_length: 4096
    hidden_size: 4096
    num_layers: 32
    num_heads: 32
    vocab_size: 151936
    intermediate_size: 11008
    qkv_has_bias: True
    rms_norm_eps: 1.0e-6
    theta: 1000000.0
    max_position_embedding: 32768
    emb_dropout_prob: 0.0
    eos_token_id: 151643
    pad_token_id: 151643
    compute_dtype: "bfloat16"
    layernorm_compute_type: "float32"
    softmax_compute_type: "float16"
    rotary_dtype: "float16"
    param_init_type: "float32"
    use_past: False
    extend_method: "None"  # support "None", "PI", "NTK"
    use_flash_attention: True
    fine_grain_interleave: 1
    qkv_concat: False
    block_size: 32
    num_blocks: 128
    offset: 0
    checkpoint_name_or_path: "/home/ma-user/qwen15/ckpt/transform.ckpt"
    repetition_penalty: 1
    max_decode_length: 512
    top_k: 0
    top_p: 0.8
    do_sample: False
    compute_in_2d: True
    # configuration items copied from Qwen
    rotary_pct: 1.0
    rotary_emb_base: 1000000
    kv_channels: 128
  arch:
    type: LlamaForCausalLM

processor:
  return_tensors: ms
  tokenizer:
    model_max_length: 4096
    vocab_file: "/home/ma-user/qwen15/vocab.json"
    merges_file: "/home/ma-user/qwen15/merges.txt"
    unk_token: "<|endoftext|>"
    eos_token: "<|endoftext|>"
    pad_token: "<|endoftext|>"
    type: Qwen2Tokenizer
  type: Qwen2Processor

# metric
metric:
  type: PerplexityMetric

eval_callbacks:
  - type: ObsMonitor

auto_tune: False
filepath_prefix: './autotune'
autotune_per_step: 10

profile: False
profile_start_step: 1
profile_stop_step: 10
init_start_profile: False
profile_communication: False
profile_memory: True
layer_scale: False
layer_decay: 0.65
lr_scale_factor: 256

# aicc
remote_save_url: "Please input obs url on AICC platform."
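
Two things in this config bear directly on the allocation failure. First, it is effectively a single-card run (use_parallel: False, data_parallel = model_parallel = pipeline_stage = 1; use_seq_parallel: True has no effect while model_parallel is 1), yet train_dataset.batch_size is 4 at seq_length 4096. Second, param_init_type: "float32" keeps full-precision weights alongside the bfloat16 compute copies. A back-of-envelope budget, with every constant an assumption (rounded parameter count, activations and LoRA states ignored), already crowds a 55 GB pool:

# Rough HBM budget for the run above. All constants are assumptions
# (approximate Qwen1.5-7B parameter count; activations, gradients and
# LoRA optimizer states are not counted at all).
GIB = 2 ** 30
params = 7.7e9                    # ~7B-class model, rounded up
fp32_weights = params * 4 / GIB   # param_init_type: "float32"
bf16_compute = params * 2 / GIB   # compute_dtype: "bfloat16" working copies (assumed resident)
print(f"fp32 weights ~{fp32_weights:.0f} GiB + bf16 copies ~{bf16_compute:.0f} GiB "
      f"= ~{fp32_weights + bf16_compute:.0f} GiB before a single activation")

If these numbers are even roughly right, little headroom is left for activations and workspace at batch 4 x 4096 tokens, which is where the FlashAttentionScore task init fails. Dropping train_dataset.batch_size to 1, shortening seq_length, or actually using the 8-card layout the parallel_config comment anticipates are the obvious levers.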
bash /home/ma-user/work/mindformers/scripts/msrun_launcher.sh "/home/ma-user/work/mindformers/research/qwen1_5/run_qwen1_5.py
--config /home/ma-user/work/mindformers/research/qwen1_5/finetune_qwen1_5_7b.yaml
--load_checkpoint /home/ma-user/qwen15/ckpt/transform.ckpt
--auto_trans_ckpt True
--train_dataset /home/ma-user/qwen15/alpaca-messages.mindrecord
--run_mode finetune" 1
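
The trailing 1 passed to msrun_launcher.sh is the worker count, which matches the config (1 x 1 x 1 devices). If the run is moved to eight cards, the worker count and parallel_config have to change together; below is a tiny consistency check, with the product rule (data_parallel * model_parallel * pipeline_stage == worker count) taken as the usual MindFormers convention rather than a verified invariant of this exact version:

# Sanity check that the msrun worker count matches the parallel layout in the YAML.
# Assumption: the standard MindFormers rule dp * mp * pp == number of workers.
def check_parallel(dp: int, mp: int, pp: int, worker_num: int) -> None:
    product = dp * mp * pp
    if product != worker_num:
        raise ValueError(f"parallel_config product {product} != worker count {worker_num}")

check_parallel(dp=1, mp=1, pp=1, worker_num=1)  # the failing run: one worker, one card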