/usr/local/python3.11.10/lib/python3.11/site-packages/torch_npu/contrib/transfer_to_npu.py:246: RuntimeWarning: torch.jit.script and torch.jit.script_method will be disabled by transfer_to_npu, which currently does not support them, if you need to enable them, please do not use transfer_to_npu.
warnings.warn(msg, RuntimeWarning)
2025-04-27 08:17:06,756 - modelscope - INFO - PyTorch version 2.4.0 Found.
2025-04-27 08:17:06,757 - modelscope - INFO - Loading ast index from /root/.cache/modelscope/ast_indexer
2025-04-27 08:17:06,819 - modelscope - INFO - Loading done! Current index file version is 1.15.0, with md5 d9b2fb29d4a982bfb7209e3708098c33 and a total number of 980 components indexed
failed to import ttsfrd, use WeTextProcessing instead
/usr/local/python3.11.10/lib/python3.11/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: '/usr/local/python3.11.10/lib/python3.11/site-packages/torchvision/image.so: undefined symbol: _ZN3c1017RegisterOperatorsD1Ev'If you don't plan on using image functionality from `torchvision.io`, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have `libjpeg` or `libpng` installed before building `torchvision` from source?
warn(
/usr/local/python3.11.10/lib/python3.11/site-packages/diffusers/models/lora.py:393: FutureWarning: `LoRACompatibleLinear` is deprecated and will be removed in version 1.0.0. Use of `LoRACompatibleLinear` is deprecated. Please switch to PEFT backend by installing PEFT: `pip install peft`.
/usr/local/python3.11.10/lib/python3.11/site-packages/librosa/core/intervals.py:15: DeprecationWarning: path is deprecated. Use files() instead. Refer to https://importlib-resources.readthedocs.io/en/latest/using.html#migrating-from-legacy for migration advice.
with resources.path("librosa.core", "intervals.msgpack") as imsgpack:
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
/usr/local/python3.11.10/lib/python3.11/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py:69: UserWarning: Specified provider 'CUDAExecutionProvider' is not in available provider names.Available providers: 'AzureExecutionProvider, CPUExecutionProvider'
warnings.warn(
2025-04-27 08:17:29,304 WETEXT INFO building fst for zh_normalizer ...
2025-04-27 08:18:16,056 WETEXT INFO done
2025-04-27 08:18:16,056 WETEXT INFO fst path: /usr/local/python3.11.10/lib/python3.11/site-packages/tn/zh_tn_tagger.fst
attention mask must be NULL, when Qs,Kvs is unAlign or Qs is not equal to Kvs, Qs = 53, Kvs = 53[FUNC:RunBigKernelTilingWithParams][FILE:prompt_flash_attention_tiling.cpp][LINE:2446]
[ERROR] OP(123766,python3):2025-04-27-08:35:57.111.020 [prompt_flash_attention_tiling.cpp:2446][OP_TILING][RunBigKernelTilingWithParams][123766] OpName:[GetBasicShape310P] "attention mask must be NULL, when Qs,Kvs is unAlign or Qs is not equal to Kvs, Qs = 53, Kvs = 53"
[ERROR] OP(123766,python3):2025-04-27-08:35:57.111.331 [acl_rfft1d.cpp:164][NNOP][~UniqueExecutor][123766] errno[561102] OpName:[aclnnInnerPromptFlashAttention_318] When aclnnInnerPromptFlashAttentionGetWorkspaceSize do success, ReleaseTo(executor) should be called before return.