Ascend / MindSpeed-LLM
llama3-8b inference reports an error on the second question
DONE · #I9K8L2 · Bug-Report
yyh17 created on 2024-04-28 10:25
I. Problem description (error log context attached below):

II. Software versions:
-- CANN version (e.g., CANN 3.0.x, 5.x.x): community edition 8.0.RC2.alpha001
-- Tensorflow/Pytorch/MindSpore version: torch 2.1.0, torch-npu 2.1.0.post3
-- Python version (e.g., Python 3.7.5): Python 3.8.19
-- MindStudio version (e.g., MindStudio 2.0.0 (beta3)):
-- OS version (e.g., Ubuntu 18.04): Ubuntu 18.04

III. Test steps:
Run inference with the converted Megatron weights: bash examples/llama3/generate_llama3_8b_ptd.sh
The error is raised when the second question is asked.

IV. Log information:

Warning: The torch.npu.*DtypeTensor constructors are no longer recommended. It's best to use methods such as torch.tensor(data, dtype=*, device='npu') to create tensors.
loading checkpoint from ./model_from_hf/llama-3-8b-tp1/ at iteration 1
checkpoint version 3.0
successfully loaded checkpoint from ./model_from_hf/llama-3-8b-tp1/ at iteration 1
/home/ModelLink/modellink/tasks/inference/text_generation/module.py:379: UserWarning: AutoNonVariableTypeMode is deprecated and will be removed in 1.10 release. For kernel implementations please use AutoDispatchBelowADInplaceOrView instead, If you are looking for a user facing API to enable running your inference-only workload, please use c10::InferenceMode. Using AutoDispatchBelowADInplaceOrView in user code is under risk of producing silent wrong result in some edge cases. See Note [AutoDispatchBelowAutograd] for more details. (Triggered internally at torch_npu/csrc/aten/common/TensorFactories.cpp:74.)
broadcast_rank[dist.get_rank()] = 1
INFO:root: =============== Greedy Search ================
INFO:root: You: how are you?
ModelLink: I hope you are well. I am fine. I am writing to you because I have a problem. I am a student and I am studying in the university. I am studying in the university of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of the city of
INFO:root:==============================================
INFO:root: Elapsed: 26.07s
INFO:root: ================ Do Sample =================
INFO:root: You: how are you?
ModelLink: [' did u hear that there is a plan to demolish a big building that u and u liked?\nhow is he? did you hear that there is a plan to demolish a building that you have liked?\nHow are you? Have you heard the news? The building that you like is about to be demolished.\nHow are you? Do you know the news? The building you have like is going to be demolished.', " hope you're fine!!\nWe just launched a new project: [url removed, login to view]\nIt's a music recommendation engine based on LastFM.com that uses tags (like the tag used in [url removed, login to view])\nWe think it's a very very good engine - especially considering that we spent almost no time on it! so we'd like to use it in our sites.\nWe'll be glad to offer you some kind of deal (free ads etc.) that will work for you (depending on your needs)\nPlease send your bid to [url removed, login to view] and give us some feedback!!\nThank you very much in advance for your bid!!\nSee more: work on line site, what is the best site to work on, what is a recommendation letter, music work, job line com, job com, how to work on site, how to work online, how to work in music, how to work as a music producer, how to send feedback to a boss, how to make some music, how to make money online as a producer, how to be music producer, how to be a producer, how to be a music producer, how much money can you make on line, how much do music producers make, engine in c,"]
INFO:root:============================================
INFO:root: Elapsed: 26.82s
INFO:root: =============== Beam Search =================
INFO:root: You: how are you?
ModelLink: I hope you are doing well. I am fine. I am writing to you because I am interested in your profile. I would like to know more about you. I hope to hear from you soon. I am waiting for your reply.
INFO:root:=============================================
INFO:root: Elapsed: 5.06s
INFO:root: ======== Beam Search with sampling ==========
INFO:root: You: how are you?
ModelLink: I hope all is well with you. I am doing fine. I am writing to you because I want to know if you are still interested in me. I have been thinking about you a lot and I want to know if you are still interested in me. I have been thinking about you a lot and I want to know if you are still interested in me. I have been thinking about you a lot and I want to know if you are still interested in me. I have been thinking about you a lot and I want to know if you are still interested in me. I have been thinking about you a lot and I want to know if you are still interested in me. I have been thinking about you a lot and I want to know if you are still interested in me. I have been thinking about you a lot and I want to know if you are still interested in me. I have been thinking about you a lot and I want to know if you are still interested in me. I have been thinking about you a lot and I want to know if you are still interested in me. I have been thinking about you a lot and I want to know if you are still interested in me. I have been thinking about you a lot and I want to know if you
INFO:root:=============================================
INFO:root: Elapsed: 24.63s
INFO:root: ===========================================
Warning: Device do not support double dtype now, dtype cast repalce with float.
INFO:root:Probability Distribution: tensor([[1.3892e-05, 1.1534e-08, 7.2071e-06, ..., 7.0762e-11, 7.0762e-11, 7.0762e-11],
        [2.5979e-06, 1.0377e-04, 4.0237e-06, ..., 4.8681e-12, 4.8681e-12, 4.8681e-12],
        [2.3368e-05, 5.1627e-07, 3.3004e-08, ..., 1.5613e-11, 1.5613e-11, 1.5613e-11],
        ...,
        [4.2867e-09, 1.7320e-09, 3.3056e-10, ..., 5.9946e-13, 5.9946e-13, 5.9946e-13],
        [2.2400e-10, 4.9981e-11, 7.5460e-12, ..., 7.3248e-15, 7.3248e-15, 7.3248e-15],
        [3.3625e-10, 1.3168e-10, 8.2873e-12, ..., 3.8982e-14, 3.8982e-14, 3.8982e-14]], device='npu:0')
INFO:root:Beam Search Score: tensor([0.7731, 0.7715], device='npu:0')
INFO:root:===========================================
INFO:root: Elapsed: 49.06s
INFO:root:===========================================================
INFO:root:1. If you want to quit, please entry one of [q, quit, exit]
INFO:root:2. To create new title, please entry one of [clear, new]
INFO:root:===========================================================
You >> how's it going?
ModelLink: I'm new to the forum and I'm here to learn a bit about the world of Linux. I've been using windows for many years, and I'm tired of it. I'm a simple guy, I just want to be able to surf the web, check my e-mails, and play some games. I'm running a desktop with an i5 processor, 8 GB of RAM, and an nVidia GTX 660 graphics card. I've tried linux mint, and I'm currently using ubuntu. I've tried the live version of both and I have no problem with installing them, but when I try to install them on my hard drive, I'm stuck at 0%. I've tried installing both in legacy mode and in EFI mode, but nothing seems to work. I've tried to use the "easybcd" program to create a bootable usb drive, but I'm always getting a "not a bootable drive" error. Does anyone know why I'm getting this error? And if there is any way to fix it? Thank you very much in advance for any help you can provide. Please post the output of the following commands: 1. sudo parted -l 2. sudo fdisk -l 3. sudo blkid
You >> how are you ?
ModelLink: [E OpParamMaker.cpp:273] call aclnnFlashAttentionScore failed, detail:EZ9999: Inner Error!
EZ9999: 2024-04-28-02:12:31.934.126 get unsupported atten_mask shape, the shape is [320, 320][FUNC:AnalyzeOptionalInput][FILE:flash_attention_score_tiling_general.cpp][LINE:1360]
TraceBack (most recent call last):
fail to analyze context info[FUNC:GetShapeAttrsInfo][FILE:flash_attention_score_tiling_general.cpp][LINE:826]
Tiling failed
Tiling Failed.
Kernel Run failed. opType: 102, FlashAttentionScore
launch failed for FlashAttentionScore, errno:561103.
[ERROR] 2024-04-28-02:12:31 (PID:1892, Device:0, RankID:0) ERR01005 OPS internal error
Exception raised from operator() at third_party/op-plugin/op_plugin/ops/v2r1/opapi/FlashAttentionKernelNpuOpApi.cpp:457 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x68 (0xffffb37ff898 in /root/anaconda3/envs/py38_llama3/lib/python3.8/site-packages/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::string const&) + 0x6c (0xffffb37b82a8 in /root/anaconda3/envs/py38_llama3/lib/python3.8/site-packages/torch/lib/libc10.so)
frame #2: <unknown function> + 0xd0efa8 (0xfffdce813fa8 in /root/anaconda3/envs/py38_llama3/lib/python3.8/site-packages/torch_npu/lib/libtorch_npu.so)
frame #3: <unknown function> + 0xe26ad0 (0xfffdce92bad0 in /root/anaconda3/envs/py38_llama3/lib/python3.8/site-packages/torch_npu/lib/libtorch_npu.so)
frame #4: <unknown function> + 0x56a1c0 (0xfffdce06f1c0 in /root/anaconda3/envs/py38_llama3/lib/python3.8/site-packages/torch_npu/lib/libtorch_npu.so)
frame #5: <unknown function> + 0x56a5e8 (0xfffdce06f5e8 in /root/anaconda3/envs/py38_llama3/lib/python3.8/site-packages/torch_npu/lib/libtorch_npu.so)
frame #6: <unknown function> + 0x5684c0 (0xfffdce06d4c0 in /root/anaconda3/envs/py38_llama3/lib/python3.8/site-packages/torch_npu/lib/libtorch_npu.so)
frame #7: <unknown function> + 0x946ec (0xffffb38266ec in /root/anaconda3/envs/py38_llama3/lib/python3.8/site-packages/torch/lib/libc10.so)
frame #8: <unknown function> + 0x7088 (0xffffbcebb088 in /lib/aarch64-linux-gnu/libpthread.so.0)
Traceback (most recent call last):
  File "inference.py", line 62, in <module>
    task_factory(args, model, system_template=system_template, dialog_template=dialog_template)
  File "/home/ModelLink/modellink/tasks/inference/text_generation/infer_base.py", line 79, in task_factory
    task_map.get(task)(
  File "/home/ModelLink/modellink/tasks/inference/text_generation/infer_base.py", line 300, in task_chat
    for output in responses:
  File "/home/ModelLink/modellink/tasks/inference/text_generation/module.py", line 426, in _yield
    for output, context_lengths, log_probs in token_stream:
  File "/home/ModelLink/modellink/tasks/inference/text_generation/utils.py", line 150, in greedy_search_or_sampling
    yield from _post_process(
  File "/home/ModelLink/modellink/tasks/inference/text_generation/utils.py", line 158, in _post_process
    for tokens, _, log_probs in batch_token_iterator:
  File "/home/ModelLink/modellink/tasks/inference/text_generation/utils.py", line 248, in sample_sequence_batch
    logits = _recompute_forward(model,
  File "/home/ModelLink/modellink/tasks/inference/text_generation/utils.py", line 433, in _recompute_forward
    output = forward_step(model,
  File "/home/ModelLink/modellink/tasks/inference/text_generation/utils.py", line 194, in forward_step
    output_tensor = model(
  File "/root/anaconda3/envs/py38_llama3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/root/anaconda3/envs/py38_llama3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/ModelLink/megatron/core/distributed/distributed_data_parallel.py", line 136, in forward
    return self.module(*inputs, **kwargs)
  File "/root/anaconda3/envs/py38_llama3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/root/anaconda3/envs/py38_llama3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/ModelLink/megatron/model/module.py", line 181, in forward
    outputs = self.module(*inputs, **kwargs)
  File "/root/anaconda3/envs/py38_llama3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/root/anaconda3/envs/py38_llama3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/ModelLink/modellink/model/gpt_model.py", line 64, in forward
    lm_output = self.language_model(
  File "/root/anaconda3/envs/py38_llama3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/root/anaconda3/envs/py38_llama3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/ModelLink/modellink/model/language_model.py", line 30, in wrapper
    return fn(self, *args, **kwargs)
  File "/home/ModelLink/megatron/model/language_model.py", line 493, in forward
    encoder_output = self.encoder(
  File "/root/anaconda3/envs/py38_llama3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/root/anaconda3/envs/py38_llama3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/ModelLink/megatron/model/transformer.py", line 1761, in forward
    hidden_states = layer(
  File "/root/anaconda3/envs/py38_llama3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/root/anaconda3/envs/py38_llama3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/ModelLink/megatron/model/transformer.py", line 1146, in forward
    norm_output = self.input_norm(hidden_states)
  File "/root/anaconda3/envs/py38_llama3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/root/anaconda3/envs/py38_llama3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/ModelLink/AscendSpeed/ascendspeed/core/fusions/rms_norm.py", line 36, in rms_norm_forward
    output = self._norm(x.float()).type_as(x)
  File "/home/ModelLink/megatron/model/rms_norm.py", line 27, in _norm
    return x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + self.eps)
RuntimeError: The Inner error is reported as above. Since the operator is called asynchronously, the stacktrace may be inaccurate. If you want to get the accurate stacktrace, pleace set the environment variable ASCEND_LAUNCH_BLOCKING=1.
[ERROR] 2024-04-28-02:12:31 (PID:1892, Device:0, RankID:0) ERR00005 PTA internal error

Please collect the log information for your runtime environment as described below. If the problem is related to operator development, it is recommended to also provide the logs from the UT/ST tests and the single-operator integration tests.
How to provide the logs: package the logs and upload them as an attachment. If the logs exceed the attachment size limit, upload them to an external network drive and provide a link.
For how to obtain the logs, see the wiki: https://gitee.com/ascend/modelzoo/wikis/%E5%A6%82%E4%BD%95%E8%8E%B7%E5%8F%96%E6%97%A5%E5%BF%97%E5%92%8C%E8%AE%A1%E7%AE%97%E5%9B%BE?sort_id=4097825