Ascend / samples
Issue: mmad operator, attempting to switch to int8 type
Status: DONE | #ICO66I | Type: Requirement
Author: zhuye_0729, created 2025-07-23 10:16
1. Problem description (with error log context):

After changing the half type to int8_t in the mmad sample (operator/ascendc/0_introduction/20_mmad_kernellaunch/MmadBiasInvocation), the computed result is wrong. I then consulted the Mmad API documentation and tried to adjust the fractal handling in SplitA and SplitB, because for int8 the A-matrix fractal appears to be 16*32 and the B-matrix fractal 32*16, but however I rearrange it the result is still incorrect, and the ND/NZ formats plus the on-the-fly format conversion are quite confusing to me. So how should the code be adjusted to run Mmad with int8 inputs?

The full run output is as follows:

-- The C compiler identification is GNU 11.4.0 -- The CXX compiler identification is GNU 11.4.0 -- Detecting C compiler ABI info -- Detecting C compiler ABI info - done -- Check for working C compiler: /usr/bin/cc - skipped -- Detecting C compile features -- Detecting C compile features - done -- Detecting CXX compiler ABI info -- Detecting CXX compiler ABI info - done -- Check for working CXX compiler: /usr/bin/c++ - skipped -- Detecting CXX compile features -- Detecting CXX compile features - done -- Configuring done -- Generating done -- Build files have been written to: /root/wja/project/samples/operator/ascendc/0_introduction/20_mmad_kernellaunch_my/MmadBiasInvocation/build [ 2%] Creating directories for 'ascendc_kernels_npu_precompile' [ 4%] No download step for 'ascendc_kernels_npu_precompile' [ 6%] No update step for 'ascendc_kernels_npu_precompile' [ 9%] No patch step for 'ascendc_kernels_npu_precompile' [ 11%] Performing configure step for 'ascendc_kernels_npu_precompile' -- The C compiler identification is GNU 11.4.0 -- The CXX compiler identification is GNU 11.4.0 -- Detecting C compiler ABI info -- Detecting C compiler ABI info - done -- Check for working C compiler: /usr/bin/cc - skipped -- Detecting C compile features -- Detecting C compile features - done -- Detecting CXX compiler ABI info -- Detecting CXX compiler ABI info - done -- Check for working CXX compiler: /usr/bin/c++ - skipped -- Detecting CXX compile features -- Detecting CXX compile features - done -- Configuring done -- Generating done -- Build files have been written to: /root/wja/project/samples/operator/ascendc/0_introduction/20_mmad_kernellaunch_my/MmadBiasInvocation/build/ascendc_kernels_npu_precompile-prefix/src/ascendc_kernels_npu_precompile-build [ 13%] Performing build step for 'ascendc_kernels_npu_precompile' [100%] Building CXX object CMakeFiles/precompile_obj.dir/root/wja/project/samples/operator/ascendc/0_introduction/20_mmad_kernellaunch_my/MmadBiasInvocation/mmad_custom.cpp.o [100%] Built target precompile_obj [100%] Built target check_src_template [ 15%] No install step for 'ascendc_kernels_npu_precompile' [ 18%] Completed 'ascendc_kernels_npu_precompile' [ 18%] Built target ascendc_kernels_npu_precompile [ 20%] Creating directories for 'ascendc_kernels_npu_preprocess' [ 22%] No download step for 'ascendc_kernels_npu_preprocess' [ 25%] No update step for 'ascendc_kernels_npu_preprocess' [ 27%] No patch step for 'ascendc_kernels_npu_preprocess' [ 29%] Performing configure step for 'ascendc_kernels_npu_preprocess' -- The C compiler identification is GNU 11.4.0 -- The CXX compiler identification is GNU 11.4.0 -- Detecting C compiler ABI info -- Detecting C compiler ABI info - done -- Check for working C compiler: /usr/bin/cc - skipped -- Detecting C compile features -- Detecting C compile features - done -- Detecting CXX compiler ABI info -- Detecting CXX compiler ABI info - done -- Check for working CXX compiler: /usr/bin/c++ - skipped -- Detecting CXX compile features -- Detecting CXX compile features - done -- Configuring done -- Generating done -- Build files have been written to: 
/root/wja/project/samples/operator/ascendc/0_introduction/20_mmad_kernellaunch_my/MmadBiasInvocation/build/ascendc_kernels_npu_preprocess-prefix/src/ascendc_kernels_npu_preprocess-build [ 31%] Performing build step for 'ascendc_kernels_npu_preprocess' [100%] Building CXX object CMakeFiles/aic_obj.dir/root/wja/project/samples/operator/ascendc/0_introduction/20_mmad_kernellaunch_my/MmadBiasInvocation/mmad_custom.cpp.o [100%] Building CXX object CMakeFiles/preprocess_obj.dir/root/wja/project/samples/operator/ascendc/0_introduction/20_mmad_kernellaunch_my/MmadBiasInvocation/mmad_custom.cpp.o [100%] Building CXX object CMakeFiles/aiv_obj.dir/root/wja/project/samples/operator/ascendc/0_introduction/20_mmad_kernellaunch_my/MmadBiasInvocation/mmad_custom.cpp.o [100%] Built target preprocess_obj [100%] Built target aic_obj /usr/local/Ascend/ascend-toolkit/latest/tools/ccec_compiler/bin/ld.lld -m aicorelinux -Ttext=0 /root/wja/project/samples/operator/ascendc/0_introduction/20_mmad_kernellaunch_my/MmadBiasInvocation/build/ascendc_kernels_npu_preprocess-prefix/src/ascendc_kernels_npu_preprocess-build/CMakeFiles/aic_obj.dir/root/wja/project/samples/operator/ascendc/0_introduction/20_mmad_kernellaunch_my/MmadBiasInvocation/mmad_custom.cpp.o -o /root/wja/project/samples/operator/ascendc/0_introduction/20_mmad_kernellaunch_my/MmadBiasInvocation/build/ascendc_kernels_npu_preprocess-prefix/src/ascendc_kernels_npu_preprocess-build/CMakeFiles/aic_obj.dir/root/wja/project/samples/operator/ascendc/0_introduction/20_mmad_kernellaunch_my/MmadBiasInvocation/mmad_custom.cpp.o[100%] Built target merge_aic_obj_text [100%] Built target aiv_obj /usr/local/Ascend/ascend-toolkit/latest/tools/ccec_compiler/bin/ld.lld -m aicorelinux -Ttext=0 /root/wja/project/samples/operator/ascendc/0_introduction/20_mmad_kernellaunch_my/MmadBiasInvocation/build/ascendc_kernels_npu_preprocess-prefix/src/ascendc_kernels_npu_preprocess-build/CMakeFiles/aiv_obj.dir/root/wja/project/samples/operator/ascendc/0_introduction/20_mmad_kernellaunch_my/MmadBiasInvocation/mmad_custom.cpp.o -o /root/wja/project/samples/operator/ascendc/0_introduction/20_mmad_kernellaunch_my/MmadBiasInvocation/build/ascendc_kernels_npu_preprocess-prefix/src/ascendc_kernels_npu_preprocess-build/CMakeFiles/aiv_obj.dir/root/wja/project/samples/operator/ascendc/0_introduction/20_mmad_kernellaunch_my/MmadBiasInvocation/mmad_custom.cpp.o[100%] Built target merge_aiv_obj_text [100%] Built target _host_cpp [ 34%] No install step for 'ascendc_kernels_npu_preprocess' [ 36%] Completed 'ascendc_kernels_npu_preprocess' [ 36%] Built target ascendc_kernels_npu_preprocess [ 43%] Creating directories for 'ascendc_kernels_npu_aic_device' [ 43%] Creating directories for 'ascendc_kernels_npu_host' [ 43%] Creating directories for 'ascendc_kernels_npu_aiv_device' [ 50%] No download step for 'ascendc_kernels_npu_aic_device' [ 50%] No download step for 'ascendc_kernels_npu_aiv_device' [ 50%] No download step for 'ascendc_kernels_npu_host' [ 56%] No update step for 'ascendc_kernels_npu_aic_device' [ 56%] No update step for 'ascendc_kernels_npu_host' [ 56%] No update step for 'ascendc_kernels_npu_aiv_device' [ 63%] No patch step for 'ascendc_kernels_npu_host' [ 63%] No patch step for 'ascendc_kernels_npu_aic_device' [ 63%] No patch step for 'ascendc_kernels_npu_aiv_device' [ 70%] Performing configure step for 'ascendc_kernels_npu_host' [ 70%] Performing configure step for 'ascendc_kernels_npu_aic_device' [ 70%] Performing configure step for 'ascendc_kernels_npu_aiv_device' -- The C compiler 
identification is GNU 11.4.0 -- The C compiler identification is GNU 11.4.0 -- The C compiler identification is GNU 11.4.0 -- The CXX compiler identification is GNU 11.4.0 -- The CXX compiler identification is GNU 11.4.0 -- The CXX compiler identification is GNU 11.4.0 -- Detecting C compiler ABI info -- Detecting C compiler ABI info -- Detecting C compiler ABI info -- Detecting C compiler ABI info - done -- Detecting C compiler ABI info - done -- Detecting C compiler ABI info - done -- Check for working C compiler: /usr/bin/cc - skipped -- Detecting C compile features -- Check for working C compiler: /usr/bin/cc - skipped -- Detecting C compile features -- Detecting C compile features - done -- Detecting C compile features - done -- Detecting CXX compiler ABI info -- Detecting CXX compiler ABI info -- Check for working C compiler: /usr/bin/cc - skipped -- Detecting C compile features -- Detecting C compile features - done -- Detecting CXX compiler ABI info -- Detecting CXX compiler ABI info - done -- Detecting CXX compiler ABI info - done -- Check for working CXX compiler: /usr/bin/c++ - skipped -- Check for working CXX compiler: /usr/bin/c++ - skipped -- Detecting CXX compile features -- Detecting CXX compile features -- Detecting CXX compile features - done -- Detecting CXX compile features - done -- Detecting CXX compiler ABI info - done -- Configuring done -- Configuring done -- Generating done -- Build files have been written to: /root/wja/project/samples/operator/ascendc/0_introduction/20_mmad_kernellaunch_my/MmadBiasInvocation/build/ascendc_kernels_npu_aiv_device-prefix/src/ascendc_kernels_npu_aiv_device-build -- Generating done -- Build files have been written to: /root/wja/project/samples/operator/ascendc/0_introduction/20_mmad_kernellaunch_my/MmadBiasInvocation/build/ascendc_kernels_npu_aic_device-prefix/src/ascendc_kernels_npu_aic_device-build -- Check for working CXX compiler: /usr/bin/c++ - skipped -- Detecting CXX compile features -- Detecting CXX compile features - done -- Configuring done -- Generating done -- Build files have been written to: /root/wja/project/samples/operator/ascendc/0_introduction/20_mmad_kernellaunch_my/MmadBiasInvocation/build/ascendc_kernels_npu_host-prefix/src/ascendc_kernels_npu_host-build [ 75%] Performing build step for 'ascendc_kernels_npu_aic_device' [ 75%] Performing build step for 'ascendc_kernels_npu_aiv_device' [ 77%] Performing build step for 'ascendc_kernels_npu_host' [ 79%] No install step for 'ascendc_kernels_npu_aiv_device' [100%] Building CXX object CMakeFiles/device_aic_obj.dir/root/wja/project/samples/operator/ascendc/0_introduction/20_mmad_kernellaunch_my/MmadBiasInvocation/build/auto_gen/ascendc_kernels_npu/auto_gen_mmad_custom.cpp.o [100%] Building CXX object CMakeFiles/host_bisheng_obj.dir/root/wja/project/samples/operator/ascendc/0_introduction/20_mmad_kernellaunch_my/MmadBiasInvocation/mmad_custom.cpp.o [ 81%] Completed 'ascendc_kernels_npu_aiv_device' [ 81%] Built target ascendc_kernels_npu_aiv_device [100%] Built target host_bisheng_obj [ 84%] Performing install step for 'ascendc_kernels_npu_host' Consolidate compiler generated dependencies of target host_bisheng_obj [100%] Building CXX object CMakeFiles/host_bisheng_obj.dir/root/wja/project/samples/operator/ascendc/0_introduction/20_mmad_kernellaunch_my/MmadBiasInvocation/mmad_custom.cpp.o [100%] Built target host_bisheng_obj Install the project... 
-- Install configuration: "Debug" -- Installing: /root/wja/project/samples/operator/ascendc/0_introduction/20_mmad_kernellaunch_my/MmadBiasInvocation/build/ascendc_kernels_npu_host_dir/./objects-Debug/host_bisheng_obj/root/wja/project/samples/operator/ascendc/0_introduction/20_mmad_kernellaunch_my/MmadBiasInvocation/mmad_custom.cpp.o [ 86%] Completed 'ascendc_kernels_npu_host' [ 86%] Built target ascendc_kernels_npu_host [100%] Built target device_aic_obj /usr/local/Ascend/ascend-toolkit/latest/tools/ccec_compiler/bin/ld.lld -m aicorelinux -r -Ttext=0 /root/wja/project/samples/operator/ascendc/0_introduction/20_mmad_kernellaunch_my/MmadBiasInvocation/build/ascendc_kernels_npu_aic_device-prefix/src/ascendc_kernels_npu_aic_device-build/CMakeFiles/device_aic_obj.dir/root/wja/project/samples/operator/ascendc/0_introduction/20_mmad_kernellaunch_my/MmadBiasInvocation/build/auto_gen/ascendc_kernels_npu/auto_gen_mmad_custom.cpp.o -static -o /root/wja/project/samples/operator/ascendc/0_introduction/20_mmad_kernellaunch_my/MmadBiasInvocation/build/ascendc_kernels_npu_aic_device_dir/device_aic.o [100%] Built target merge_aic_device_obj [ 88%] No install step for 'ascendc_kernels_npu_aic_device' [ 90%] Completed 'ascendc_kernels_npu_aic_device' [ 90%] Built target ascendc_kernels_npu_aic_device /usr/local/Ascend/ascend-toolkit/latest/tools/ccec_compiler/bin/ld.lld -m aicorelinux -Ttext=0 /root/wja/project/samples/operator/ascendc/0_introduction/20_mmad_kernellaunch_my/MmadBiasInvocation/build/ascendc_kernels_npu_aic_device_dir/device_aic.o -static -o /root/wja/project/samples/operator/ascendc/0_introduction/20_mmad_kernellaunch_my/MmadBiasInvocation/build/ascendc_kernels_npu_merge_obj_dir/device_aic.o [ 90%] Built target ascendc_kernels_npu_merge_obj [ 93%] Building CXX object CMakeFiles/ascendc_kernels_npu_host_stub_obj.dir/auto_gen/ascendc_kernels_npu/host_stub.cpp.o [ 93%] Built target ascendc_kernels_npu_host_stub_obj [ 95%] Linking CXX shared library lib/libascendc_kernels_npu.so /usr/local/Ascend/ascend-toolkit/latest/bin/ascendc_pack_kernel /root/wja/project/samples/operator/ascendc/0_introduction/20_mmad_kernellaunch_my/MmadBiasInvocation/build/CMakeFiles/ascendc_kernels_npu_host_stub_obj.dir/auto_gen/ascendc_kernels_npu/host_stub.cpp.o /root/wja/project/samples/operator/ascendc/0_introduction/20_mmad_kernellaunch_my/MmadBiasInvocation/build/ascendc_kernels_npu_merge_obj_dir/device_aic.o 2 /root/wja/project/samples/operator/ascendc/0_introduction/20_mmad_kernellaunch_my/MmadBiasInvocation/build/CMakeFiles/ascendc_kernels_npu_host_stub_obj.dir/auto_gen/ascendc_kernels_npu/host_stub.cpp.o recompile: /usr/bin/c++ -fPIC -g -Wl,-z,relro -Wl,-z,now -Wl,-z,noexecstack -shared -Wl,-soname,libascendc_kernels_npu.so -o lib/libascendc_kernels_npu.so CMakeFiles/ascendc_kernels_npu_host_stub_obj.dir/auto_gen/ascendc_kernels_npu/host_stub.cpp.o /root/wja/project/samples/operator/ascendc/0_introduction/20_mmad_kernellaunch_my/MmadBiasInvocation/build/ascendc_kernels_npu_host_dir/objects-Debug/host_bisheng_obj/root/wja/project/samples/operator/ascendc/0_introduction/20_mmad_kernellaunch_my/MmadBiasInvocation/mmad_custom.cpp.o -L/usr/local/Ascend/ascend-toolkit/latest/lib64 -L/usr/local/Ascend/ascend-toolkit/latest/tools/simulator/Ascend910B2/lib /usr/local/Ascend/ascend-toolkit/latest/lib64/libascendc_runtime.a -lascend_dump -lc_sec [ 95%] Built target ascendc_kernels_npu [ 97%] Building CXX object CMakeFiles/ascendc_kernels_bbit.dir/main.cpp.o [100%] Linking CXX executable ascendc_kernels_bbit [100%] Built 
target ascendc_kernels_bbit -- Install configuration: "Debug" -- Installing: /root/wja/project/samples/operator/ascendc/0_introduction/20_mmad_kernellaunch_my/MmadBiasInvocation/out/lib/libascendc_kernels_npu.so -- Installing: /root/wja/project/samples/operator/ascendc/0_introduction/20_mmad_kernellaunch_my/MmadBiasInvocation/out/include -- Installing: /root/wja/project/samples/operator/ascendc/0_introduction/20_mmad_kernellaunch_my/MmadBiasInvocation/out/include/ascendc_kernels_npu -- Installing: /root/wja/project/samples/operator/ascendc/0_introduction/20_mmad_kernellaunch_my/MmadBiasInvocation/out/include/ascendc_kernels_npu/aclrtlaunch_triple_chevrons_func.h -- Installing: /root/wja/project/samples/operator/ascendc/0_introduction/20_mmad_kernellaunch_my/MmadBiasInvocation/out/include/ascendc_kernels_npu/aclrtlaunch_mmad_custom.h -- Installing: /root/wja/project/samples/operator/ascendc/0_introduction/20_mmad_kernellaunch_my/MmadBiasInvocation/out/bin/ascendc_kernels_bbit 计算完成,最终结果为 int32 类型。 Golden result (first 4x4 block): [[751 693 644 657] [792 727 634 724] [747 693 706 642] [673 651 627 661]] opType=mmad_custom, DumpHead: AIC-0, CoreType=AIC, block dim=1, total_block_num=1, block_remain_len=1041504, block_initial_space=1048576, rsv=0, magic=5aa5bccd CANN Version: 8.2.RC1.alpha002, TimeStamp: 20250522125202340 DumpTensor: desc=1, addr=0, data_type=int8, position=L1 [[3,9,6,5,4,1,7,6,2,8,1,9,9,1,2,6,7,7,4,9,3,2,7,1,1,3,6,6,8,7,9,5], [9,1,5,6,1,5,9,8,2,2,5,4,8,7,9,3,5,8,8,3,1,3,8,5,1,4,1,2,6,9,9,4], [9,6,6,7,5,1,5,3,6,1,2,3,5,6,1,5,3,2,9,7,7,6,2,1,8,4,3,7,9,2,8,3], [6,4,1,2,8,5,1,6,1,4,5,4,7,4,8,1,7,1,3,7,5,2,7,1,5,8,8,3,9,3,4,1], [8,8,9,1,5,1,5,4,4,6,3,5,2,6,7,7,4,2,8,1,7,9,9,5,8,5,8,5,2,1,5,2], [1,5,9,5,7,6,3,1,5,5,5,7,9,5,1,6,3,9,2,4,5,1,7,4,8,6,1,6,5,4,1,8], [2,2,6,5,8,3,2,2,9,9,3,5,3,7,7,3,2,3,6,6,7,3,5,9,2,8,1,7,3,5,2,7], [6,4,9,5,2,4,9,8,8,3,4,7,6,9,3,4,3,2,8,8,2,6,3,2,9,9,7,1,6,7,1,4], [9,4,3,9,8,5,6,5,9,7,5,4,3,2,9,7,7,9,7,8,3,6,4,2,8,8,7,1,5,5,7,4], [3,9,1,7,1,3,3,2,2,9,1,4,9,5,1,9,8,4,9,5,4,6,7,4,9,6,8,1,7,3,7,5], [9,2,1,4,9,2,8,5,2,3,6,2,5,3,8,4,3,5,5,7,7,8,5,8,2,5,9,9,4,1,5,3], [7,3,9,2,4,8,2,9,3,6,6,3,3,3,1,7,9,5,2,8,7,6,9,7,7,8,8,5,2,8,5,2], [2,4,9,8,4,1,7,3,4,8,5,2,7,7,1,3,2,7,3,8,4,2,2,9,9,3,6,3,3,1,1,1], [3,3,6,5,7,9,7,9,4,6,2,3,3,7,9,3,9,8,6,8,7,5,9,2,7,4,6,7,9,9,2,4], [2,1,8,5,4,8,7,8,9,6,3,5,9,2,9,2,5,9,8,3,5,5,6,9,9,6,4,4,4,8,2,9], [2,3,1,6,1,8,5,3,2,7,2,3,1,9,5,3,4,5,4,3,2,1,4,4,4,9,6,7,2,3,5,6], [4,9,6,2,7,4,7,3,7,5,4,7,6,1,7,8,6,7,8,8,5,3,2,8,8,4,8,7,7,5,3,9], [4,8,3,9,1,1,2,6,2,5,8,6,6,9,3,7,2,4,9,8,5,9,2,2,1,3,2,3,5,1,7,2], [7,6,4,4,4,4,9,5,2,8,3,9,4,6,6,4,3,4,1,1,9,2,4,7,1,7,6,2,4,8,3,9], [7,2,3,1,6,9,3,8,5,9,6,1,9,4,5,5,2,9,5,1,2,1,2,9,7,4,7,4,2,1,8,6], [5,3,2,4,6,8,8,8,6,3,8,5,9,4,1,7,7,7,4,3,1,2,8,3,2,1,1,2,7,6,5,5], [9,4,6,2,5,7,9,8,4,9,6,7,5,9,9,8,2,4,4,1,5,1,9,5,6,6,6,5,8,6,4,2], [8,9,2,1,1,2,4,3,7,2,9,5,5,8,6,7,9,6,1,3,3,7,1,2,8,9,1,1,6,3,1,5], [3,5,6,6,7,6,5,9,4,5,5,9,8,8,7,6,3,9,6,7,5,1,4,1,9,7,7,3,8,5,7,9], [8,3,4,8,1,8,5,3,2,8,4,1,4,1,6,2,9,3,5,8,2,1,2,5,2,7,3,2,4,2,3,3], [4,7,8,4,6,3,2,2,6,6,1,6,1,2,1,8,5,6,1,1,5,7,4,6,1,4,3,4,9,8,8,9], [1,7,2,5,8,7,3,1,9,9,8,5,4,9,2,1,6,2,4,7,3,9,1,1,2,5,7,7,8,6,5,3], [4,8,4,3,1,6,7,4,1,3,2,9,9,4,3,7,7,2,3,8,5,7,9,6,3,4,2,7,7,2,8,8], [8,9,2,4,7,5,7,5,1,1,3,3,8,6,6,7,7,8,2,2,2,8,5,1,7,7,6,9,8,9,8,3], [5,1,2,8,2,5,6,1,5,2,2,7,6,7,9,1,3,2,5,5,1,7,6,3,8,5,5,6,1,8,5,6], [9,9,3,4,2,9,1,9,4,7,9,5,8,1,1,4,1,4,6,5,4,4,1,4,1,5,8,1,8,5,2,5], [1,5,3,3,4,6,1,9,4,5,6,5,1,8,3,9,4,4,1,1,6,5,9,7,4,4,3,5,1,1,1,9]] DumpTensor: 
desc=2, addr=400, data_type=int8, position=L1 [[6,3,7,5,7,6,1,2,4,6,5,3,1,3,6,5,2,6,6,8,7,3,3,6,3,2,6,9,9,6,3,1], [4,1,3,1,4,6,9,8,5,5,6,5,5,1,9,6,5,5,5,1,8,1,6,3,9,7,4,3,5,3,2,7], [9,5,4,2,9,6,8,4,5,2,7,3,1,8,3,3,5,1,2,4,2,3,7,6,3,1,1,3,4,5,1,3], [3,6,5,9,5,3,9,5,2,5,4,2,2,5,5,4,2,3,6,1,5,4,5,2,2,3,3,1,7,9,1,2], [4,1,7,6,5,9,8,6,1,9,7,9,5,8,4,3,9,2,2,7,7,9,4,6,7,5,7,2,8,6,4,8], [9,7,1,6,5,8,8,5,6,4,8,9,6,5,1,9,5,1,2,2,7,3,7,4,3,8,6,2,9,1,9,7], [6,2,9,2,8,7,5,2,8,9,6,9,8,4,5,9,9,8,2,6,3,1,3,9,3,3,8,7,2,5,7,5], [9,3,6,9,4,1,4,4,2,8,5,1,1,9,7,4,5,1,1,2,7,3,7,8,9,4,9,5,3,8,4,6], [4,4,7,3,4,6,6,4,7,2,1,1,9,5,6,5,4,6,3,8,2,1,6,5,7,9,6,5,2,3,3,3], [1,3,2,1,5,1,7,8,5,6,3,5,7,6,5,4,2,8,7,1,9,1,9,5,4,7,1,7,3,5,9,2], [7,6,9,6,7,9,7,2,2,1,4,3,4,9,3,4,7,3,6,5,6,2,5,9,8,1,2,5,8,7,9,6], [3,9,4,1,3,5,1,2,5,8,2,8,3,1,1,8,1,2,8,6,5,1,8,5,7,4,1,2,1,8,1,3], [1,4,1,5,6,9,9,3,2,1,1,6,1,2,7,5,5,4,6,5,7,9,1,6,9,1,3,5,6,2,1,2], [7,8,5,2,3,7,9,9,7,7,8,1,2,8,8,8,1,1,2,2,2,1,4,7,9,3,1,7,8,9,6,2], [1,4,2,7,2,9,9,2,3,2,7,2,6,6,9,7,7,8,5,5,9,7,6,1,3,5,3,9,8,9,6,4], [4,6,1,8,3,2,8,9,3,3,1,2,8,5,9,3,5,3,7,1,2,2,8,8,2,9,5,2,4,8,6,5], [4,7,6,7,1,5,9,3,5,7,6,9,8,2,4,1,2,1,1,2,5,8,4,1,6,1,9,9,9,1,6,2], [1,2,3,2,3,1,4,9,5,3,8,2,4,2,1,5,3,4,4,6,6,5,3,5,3,9,8,3,9,5,6,3], [5,8,3,7,2,1,7,5,2,3,7,2,5,3,8,8,6,2,9,7,7,5,4,7,2,6,5,4,1,7,3,7], [4,5,6,2,5,5,3,2,9,3,2,2,9,8,6,9,1,3,3,4,2,2,6,4,1,5,9,4,3,8,1,1], [7,7,8,1,5,2,6,2,5,9,8,8,7,9,9,8,4,4,6,7,9,1,5,7,7,1,3,9,8,1,5,1], [3,2,1,8,3,8,1,5,4,1,9,5,1,5,5,8,7,9,1,4,7,9,8,6,4,1,7,4,4,3,2,4], [6,5,1,4,3,1,2,6,3,5,3,9,1,1,4,2,8,9,5,4,8,1,8,1,1,8,3,8,9,3,9,5], [7,8,1,1,8,7,7,4,1,3,4,7,1,5,3,1,6,9,4,5,3,8,2,3,6,2,7,8,5,2,9,2], [5,7,7,4,6,2,3,1,1,7,5,9,3,2,9,4,4,4,2,3,1,4,1,6,8,9,9,2,7,4,8,4], [1,3,6,5,6,4,6,4,4,1,4,7,8,8,4,1,6,4,2,3,8,3,5,6,6,9,4,5,2,8,4,7], [8,7,4,9,5,4,3,5,1,8,2,5,6,9,3,3,6,4,8,9,7,7,3,6,9,8,1,2,4,2,1,2], [6,1,4,1,5,7,1,1,3,3,2,1,3,5,9,8,6,8,5,6,5,4,1,2,2,3,6,3,8,3,2,9], [5,3,5,3,9,5,1,2,7,5,2,2,8,4,9,6,2,5,4,3,7,4,9,6,2,2,5,9,2,1,1,5], [7,1,1,3,3,7,2,2,9,5,3,8,2,5,5,7,4,5,8,7,4,3,9,1,4,7,7,9,8,6,1,6], [3,5,1,1,3,4,4,5,7,4,5,3,1,8,9,4,5,9,3,4,2,6,3,9,6,4,4,8,7,5,2,3], [7,2,5,9,1,2,7,8,6,2,3,9,3,3,8,6,6,1,3,5,8,5,8,7,2,4,6,6,1,9,2,7]] DumpTensor: desc=3, addr=800, data_type=int32, position=L1 [5, 7, 3, 5, 6, 4, 2, 3, 1, 8, 8, 5, 6, 4, 9, 2, 1, 8, 5, 4, 7, 4, 1, 9, 9, 3, 8, 1, 6, 8, 6, 7] a2Local: a2[0]: 0, a2[1]: 0, a2[2]: 0, a2[3]: 0 b2Local: b2[0]: 0, b2[1]: 0, b2[2]: 0, b2[3]: 0 bias2Local: bias2[0]: 0, bias2[1]: 0, bias2[2]: 0, bias2[3]: 0 DumpTensor: desc=2, addr=0, data_type=int32, position=L0C [[712,796,852,978,799,705,866,930,709,703,684,901,844,734,880,704,737,737,854,849,707,686,776,917,696,721,736,899,773,745,869,697], [639,656,815,946,674,659,733,888,700,671,694,860,730,682,893,672,694,656,766,805,656,682,644,768,623,567,663,778,694,620,753,594], [767,770,894,886,756,732,735,905,612,766,769,964,830,677,923,685,705,695,868,915,715,702,747,822,694,657,668,856,835,668,823,637], [745,691,879,875,708,774,702,820,674,629,702,894,798,659,854,669,713,784,924,989,738,819,777,966,760,702,825,892,852,747,970,707], [833,855,1003,1088,783,823,854,991,803,770,856,1017,972,823,1024,795,627,755,860,913,805,697,833,892,677,752,708,924,813,686,858,761], [862,723,869,922,739,738,707,856,623,648,647,923,760,676,949,680,835,816,935,950,814,740,869,924,695,757,769,940,884,737,1022,693], [604,555,766,863,717,716,605,790,641,613,593,730,756,588,794,588,847,858,1006,1091,823,794,910,1024,815,797,843,1041,993,809,1053,749], 
[879,877,1018,1073,769,808,867,995,793,768,860,978,1007,773,1023,772,575,546,721,736,611,588,613,736,501,569,543,763,641,562,712,549], [891,865,964,1132,850,826,881,1032,780,758,811,1028,994,815,1082,776,629,637,782,810,725,683,770,794,655,618,602,875,665,674,810,745], [777,816,897,844,704,769,682,866,632,681,707,849,800,657,918,692,771,700,836,881,739,703,740,820,612,662,664,843,744,631,901,668], [697,729,790,853,701,649,793,766,650,626,644,836,703,657,852,647,849,861,994,986,848,874,823,1009,718,804,890,1006,881,773,1053,766], [670,725,810,761,631,660,698,787,682,622,681,815,703,757,841,613,861,874,1004,1151,850,865,896,1065,851,768,836,1040,1012,808,1034,830], [570,566,687,742,610,599,620,691,526,537,604,706,639,581,698,554,685,725,801,842,648,645,765,845,629,663,641,863,751,641,887,645], [723,672,923,947,738,805,835,858,732,640,675,926,716,741,916,731,759,773,837,908,774,683,863,904,654,699,637,961,778,726,887,690], [816,820,985,943,765,686,851,1001,752,766,720,968,787,854,993,681,678,641,832,815,612,646,659,841,618,585,617,795,764,684,776,619], [715,730,816,827,703,735,752,771,629,615,707,788,680,646,924,715,692,684,725,731,680,607,675,686,517,568,544,830,707,547,820,586], [780,656,673,645,747,852,754,909,813,718,840,811,695,998,828,639,743,625,632,704,699,748,722,896,834,642,789,787,760,954,838,710], [684,667,625,688,710,782,752,807,724,672,836,722,758,875,733,584,699,517,588,556,592,658,597,834,713,638,682,682,649,848,727,587], [798,685,741,643,778,817,803,861,791,764,875,723,698,998,731,605,706,624,642,580,659,840,657,864,777,726,839,758,667,879,768,662], [673,601,652,647,619,808,734,858,807,760,775,692,725,878,713,691,756,718,723,697,796,830,860,905,812,731,871,881,774,1009,861,672], [929,771,763,816,847,933,906,1047,874,833,968,966,899,1085,881,739,815,666,688,628,779,779,741,898,805,733,857,821,702,952,729,611], [749,608,670,646,715,770,764,873,790,776,786,752,778,922,771,660,853,678,735,623,762,829,866,986,856,814,872,859,810,1050,896,697], [619,496,573,493,681,718,661,737,676,652,683,699,639,796,664,564,920,746,795,746,820,914,892,1081,902,885,867,881,958,1097,973,762], [857,750,814,720,798,860,891,1089,888,872,949,950,850,1002,955,823,615,527,523,529,609,643,566,755,631,606,586,638,620,823,598,553], [856,770,846,732,823,978,877,1015,888,846,986,910,821,1088,901,791,680,597,574,629,649,789,659,696,768,628,786,691,733,814,641,551], [746,624,640,651,691,763,730,863,783,728,772,764,660,957,746,629,773,570,596,587,721,767,695,922,742,720,784,819,654,877,749,663], [740,619,589,607,646,764,675,913,767,651,788,755,672,867,796,616,889,657,770,712,838,887,841,1039,929,813,884,853,758,1132,878,701], [723,650,644,577,663,815,639,766,706,574,758,687,687,902,678,616,906,754,777,785,829,971,829,1062,942,847,983,988,856,1089,935,758], [587,515,505,549,555,617,604,728,578,544,584,633,642,803,674,611,713,634,587,646,683,809,683,837,739,714,805,700,660,871,701,579], [769,670,593,663,755,884,716,892,790,773,801,799,777,934,722,616,763,732,711,648,689,801,734,910,810,734,850,750,712,959,823,708], [879,699,719,709,835,869,768,956,855,777,889,846,790,1081,819,688,667,634,630,658,635,673,688,814,736,679,758,761,693,854,723,647], [722,583,557,597,691,766,684,813,706,659,799,800,674,851,785,590,683,575,605,519,566,732,614,799,705,687,702,601,633,801,666,548]] 9364fb1f2445cdc7125f9ea01e30a902 output/golden.bin ed0beff0e69ef4adb9d84c299d3235ed output/output.bin Found mismatched elements: data index: 000000, expected: 751, actual: 712, diff: -39 data index: 000001, expected: 693, actual: 796, diff: 103 
data index: 000002, expected: 644, actual: 852, diff: 208 data index: 000003, expected: 657, actual: 978, diff: 321 data index: 000004, expected: 748, actual: 799, diff: 51 data index: 000005, expected: 760, actual: 705, diff: -55 data index: 000006, expected: 843, actual: 866, diff: 23 data index: 000007, expected: 731, actual: 930, diff: 199 data index: 000008, expected: 779, actual: 709, diff: -70 data index: 000009, expected: 795, actual: 703, diff: -92 data index: 000010, expected: 681, actual: 684, diff: 3 data index: 000011, expected: 806, actual: 901, diff: 95 data index: 000012, expected: 733, actual: 844, diff: 111 data index: 000013, expected: 783, actual: 734, diff: -49 data index: 000014, expected: 962, actual: 880, diff: -82 data index: 000015, expected: 866, actual: 704, diff: -162 data index: 000016, expected: 725, actual: 780, diff: 55 data index: 000017, expected: 752, actual: 656, diff: -96 data index: 000018, expected: 761, actual: 673, diff: -88 data index: 000019, expected: 714, actual: 645, diff: -69 data index: 000020, expected: 919, actual: 747, diff: -172 data index: 000021, expected: 629, actual: 852, diff: 223 data index: 000022, expected: 901, actual: 754, diff: -147 data index: 000023, expected: 846, actual: 909, diff: 63 data index: 000024, expected: 780, actual: 813, diff: 33 data index: 000025, expected: 778, actual: 718, diff: -60 data index: 000026, expected: 811, actual: 840, diff: 29 data index: 000027, expected: 857, actual: 811, diff: -46 data index: 000028, expected: 839, actual: 695, diff: -144 data index: 000029, expected: 826, actual: 998, diff: 172 data index: 000030, expected: 578, actual: 828, diff: 250 data index: 000031, expected: 679, actual: 639, diff: -40 data index: 000032, expected: 792, actual: 737, diff: -55 data index: 000033, expected: 727, actual: 737, diff: 10 data index: 000034, expected: 634, actual: 854, diff: 220 data index: 000035, expected: 724, actual: 849, diff: 125 data index: 000036, expected: 738, actual: 707, diff: -31 data index: 000037, expected: 817, actual: 686, diff: -131 data index: 000038, expected: 866, actual: 776, diff: -90 data index: 000039, expected: 683, actual: 917, diff: 234 data index: 000040, expected: 741, actual: 696, diff: -45 data index: 000042, expected: 783, actual: 736, diff: -47 data index: 000043, expected: 749, actual: 899, diff: 150 data index: 000044, expected: 612, actual: 773, diff: 161 data index: 000045, expected: 792, actual: 745, diff: -47 data index: 000046, expected: 927, actual: 869, diff: -58 data index: 000047, expected: 866, actual: 697, diff: -169 data index: 000048, expected: 759, actual: 743, diff: -16 data index: 000049, expected: 758, actual: 625, diff: -133 data index: 000050, expected: 708, actual: 632, diff: -76 data index: 000051, expected: 734, actual: 704, diff: -30 data index: 000052, expected: 894, actual: 699, diff: -195 data index: 000053, expected: 652, actual: 748, diff: 96 data index: 000054, expected: 839, actual: 722, diff: -117 data index: 000055, expected: 849, actual: 896, diff: 47 data index: 000056, expected: 736, actual: 834, diff: 98 data index: 000057, expected: 713, actual: 642, diff: -71 data index: 000058, expected: 821, actual: 789, diff: -32 data index: 000059, expected: 953, actual: 787, diff: -166 data index: 000060, expected: 928, actual: 760, diff: -168 data index: 000061, expected: 885, actual: 954, diff: 69 data index: 000062, expected: 688, actual: 838, diff: 150 data index: 000063, expected: 666, actual: 710, diff: 44 data index: 000064, 
expected: 747, actual: 639, diff: -108 data index: 000065, expected: 693, actual: 656, diff: -37 data index: 000066, expected: 706, actual: 815, diff: 109 data index: 000067, expected: 642, actual: 946, diff: 304 data index: 000068, expected: 746, actual: 674, diff: -72 data index: 000069, expected: 740, actual: 659, diff: -81 data index: 000070, expected: 777, actual: 733, diff: -44 data index: 000071, expected: 621, actual: 888, diff: 267 data index: 000072, expected: 679, actual: 700, diff: 21 data index: 000073, expected: 720, actual: 671, diff: -49 data index: 000074, expected: 715, actual: 694, diff: -21 data index: 000075, expected: 666, actual: 860, diff: 194 data index: 000076, expected: 666, actual: 730, diff: 64 data index: 000077, expected: 780, actual: 682, diff: -98 data index: 000078, expected: 1003, actual: 893, diff: -110 data index: 000079, expected: 847, actual: 672, diff: -175 data index: 000080, expected: 674, actual: 684, diff: 10 data index: 000081, expected: 683, actual: 667, diff: -16 data index: 000082, expected: 654, actual: 625, diff: -29 data index: 000083, expected: 698, actual: 688, diff: -10 data index: 000084, expected: 792, actual: 710, diff: -82 data index: 000085, expected: 589, actual: 782, diff: 193 data index: 000086, expected: 746, actual: 752, diff: 6 data index: 000087, expected: 862, actual: 807, diff: -55 data index: 000088, expected: 736, actual: 724, diff: -12 data index: 000089, expected: 668, actual: 672, diff: 4 data index: 000090, expected: 783, actual: 836, diff: 53 data index: 000091, expected: 778, actual: 722, diff: -56 data index: 000092, expected: 801, actual: 758, diff: -43 data index: 000093, expected: 771, actual: 875, diff: 104 data index: 000094, expected: 527, actual: 733, diff: 206 data index: 000095, expected: 628, actual: 584, diff: -44 data index: 000096, expected: 673, actual: 694, diff: 21 data index: 000097, expected: 651, actual: 656, diff: 5 data index: 000098, expected: 627, actual: 766, diff: 139 data index: 000099, expected: 661, actual: 805, diff: 144 data index: 000100, expected: 671, actual: 656, diff: -15 ... (more errors exist but not shown)
----------------------------------------
Error ratio: 0.9980, Tolerance: 0.0000
[ERROR] Verification failed: Result mismatch.

mmad_custom_cube_only.h was changed to the following (see mainly the comments and the adjusted parts):

```
/**
 * @file mmad_custom_cube_only.h
 *
 * Copyright (C) 2023-2024. Huawei Technologies Co., Ltd. All rights reserved.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
 */
#ifndef MMAD_CUSTOM_CUBE_ONLY_H
#define MMAD_CUSTOM_CUBE_ONLY_H
#include "kernel_operator.h"

// int8_t type, cube block: [16, 16]
constexpr uint32_t CUBE_BLOCK = 16;
constexpr uint32_t CUBE_BLOCK_SIZE = 16 * 16;

class KernelMmad {
public:
    __aicore__ inline KernelMmad()
    {
        aSize = m * k;
        bSize = k * n;
        cSize = m * n;
    }
    __aicore__ inline void Init(GM_ADDR a, GM_ADDR b, GM_ADDR bias, GM_ADDR c)
    {
        // set cube only
        KERNEL_TASK_TYPE_DEFAULT(KERNEL_TYPE_AIC_ONLY);
        aGM.SetGlobalBuffer((__gm__ int8_t *)a);
        bGM.SetGlobalBuffer((__gm__ int8_t *)b);
        cGM.SetGlobalBuffer((__gm__ int32_t *)c);
        biasGM.SetGlobalBuffer((__gm__ int32_t *)bias);
        pipe.InitBuffer(inQueueA1, 1, aSize * sizeof(int8_t));
        pipe.InitBuffer(inQueueA2, 1, aSize * sizeof(int8_t));
        pipe.InitBuffer(inQueueB1, 1, bSize * sizeof(int8_t));
        pipe.InitBuffer(inQueueB2, 1, bSize * sizeof(int8_t));
        pipe.InitBuffer(outQueueCO1, 1, cSize * sizeof(int32_t));
        pipe.InitBuffer(inQueueC1, 1, n * sizeof(int32_t));
        pipe.InitBuffer(outQueueC2, 1, n * sizeof(int32_t));
    }
    __aicore__ inline void Process()
    {
        CopyIn();
        SplitA();
        SplitB();
        SplitBias();
        Compute();
        CopyOut();
    }

private:
    __aicore__ inline uint32_t CeilCubeBlock(uint32_t len)
    {
        return (len + CUBE_BLOCK - 1) / CUBE_BLOCK;
    }
    // For the K dimension, each fractal should be 32 elements
    __aicore__ inline uint32_t CeilCubeBlock_K(uint32_t len)
    {
        return (len + 32 - 1) / 32;
    }
    __aicore__ inline void CopyIn()
    {
        AscendC::LocalTensor<int8_t> a1Local = inQueueA1.AllocTensor<int8_t>();
        AscendC::LocalTensor<int8_t> b1Local = inQueueB1.AllocTensor<int8_t>();
        AscendC::LocalTensor<int32_t> bias1Local = inQueueC1.AllocTensor<int32_t>();

        AscendC::Nd2NzParams nd2nzA1Params;
        nd2nzA1Params.ndNum = 1;
        nd2nzA1Params.nValue = m;
        nd2nzA1Params.dValue = k;
        nd2nzA1Params.srcNdMatrixStride = 0;
        nd2nzA1Params.srcDValue = k;
        // nd2nzA1Params.dstNzC0Stride = CeilCubeBlock(m) * CUBE_BLOCK;
        nd2nzA1Params.dstNzC0Stride = CeilCubeBlock(m) * CUBE_BLOCK; // 32; why is this tied to CeilCubeBlock???
        nd2nzA1Params.dstNzNStride = 1;
        nd2nzA1Params.dstNzMatrixStride = 0;
        AscendC::DataCopy(a1Local, aGM, nd2nzA1Params);

        AscendC::Nd2NzParams nd2nzB1Params;
        nd2nzB1Params.ndNum = 1;
        nd2nzB1Params.nValue = k;
        nd2nzB1Params.dValue = n;
        nd2nzB1Params.srcNdMatrixStride = 0;
        nd2nzB1Params.srcDValue = n;
        nd2nzB1Params.dstNzC0Stride = CeilCubeBlock(k) * CUBE_BLOCK;
        nd2nzB1Params.dstNzNStride = 1;
        nd2nzB1Params.dstNzMatrixStride = 0;
        AscendC::DataCopy(b1Local, bGM, nd2nzB1Params);

        AscendC::DataCopy(bias1Local, biasGM, n);
        inQueueA1.EnQue(a1Local);
        inQueueB1.EnQue(b1Local);
        inQueueC1.EnQue(bias1Local);
    }
    __aicore__ inline void SplitA()
    {
        AscendC::LocalTensor<int8_t> a1Local = inQueueA1.DeQue<int8_t>();
        AscendC::LocalTensor<int8_t> a2Local = inQueueA2.AllocTensor<int8_t>();
        // uint32_t dstOffset = CeilCubeBlock(k) * CUBE_BLOCK_SIZE;
        // uint32_t srcOffset = CUBE_BLOCK_SIZE;
        uint32_t dstOffset = 32 * 16;
        uint32_t srcOffset = 16 * 32;
        AscendC::LoadData2DParams loadDataParams;
        // loadDataParams.repeatTimes = CeilCubeBlock(k);
        // loadDataParams.srcStride = CeilCubeBlock(m);
        loadDataParams.repeatTimes = 1;
        loadDataParams.srcStride = 2;
        loadDataParams.dstGap = 0;
        loadDataParams.ifTranspose = false;
        // for (int i = 0; i < CeilCubeBlock(m); ++i) {
        for (int i = 0; i < 2; ++i) {
            AscendC::LoadData(a2Local[i * dstOffset], a1Local[i * srcOffset], loadDataParams);
        }
        inQueueA2.EnQue<int8_t>(a2Local);
        uint32_t array[] = {static_cast<uint32_t>(32), static_cast<uint32_t>(32)};
        AscendC::ShapeInfo shapeInfo(2, array);
        AscendC::DumpTensor(a1Local, 1, 32 * 32, shapeInfo);
        inQueueA1.FreeTensor(a1Local);
    }
    __aicore__ inline void SplitB()
    {
        AscendC::LocalTensor<int8_t> b1Local = inQueueB1.DeQue<int8_t>();
        AscendC::LocalTensor<int8_t> b2Local = inQueueB2.AllocTensor<int8_t>();
        // uint32_t dstOffset = CeilCubeBlock(n) * CUBE_BLOCK_SIZE;
        // uint32_t srcOffset = CUBE_BLOCK_SIZE;
        uint32_t dstOffset = 32 * 16;
        uint32_t srcOffset = 16 * 32;
        // Nz -> Zn
        AscendC::LoadData2DParams loadDataParams;
        // loadDataParams.repeatTimes = CeilCubeBlock(n);
        // loadDataParams.srcStride = CeilCubeBlock(k);
        loadDataParams.repeatTimes = 2;
        loadDataParams.srcStride = 1;
        loadDataParams.dstGap = 0;
        loadDataParams.ifTranspose = true;
        // for (int i = 0; i < CeilCubeBlock(k); ++i) {
        for (int i = 0; i < 1; ++i) {
            AscendC::LoadData(b2Local[i * dstOffset], b1Local[i * srcOffset], loadDataParams);
        }
        inQueueB1.FreeTensor(b1Local);
        uint32_t array[] = {static_cast<uint32_t>(32), static_cast<uint32_t>(32)};
        AscendC::ShapeInfo shapeInfo(2, array);
        AscendC::DumpTensor(b1Local, 2, 32 * 32, shapeInfo);
        inQueueB2.EnQue<int8_t>(b2Local);
    }
    __aicore__ inline void SplitBias()
    {
        AscendC::LocalTensor<int32_t> bias1Local = inQueueC1.DeQue<int32_t>();
        AscendC::LocalTensor<int32_t> bias2Local = outQueueC2.AllocTensor<int32_t>();
        AscendC::DataCopy(bias2Local, bias1Local, { 1, (uint16_t)(n * sizeof(int32_t) / 64), 0, 0 });
        outQueueC2.EnQue<int32_t>(bias2Local);
        AscendC::DumpTensor(bias1Local, 3, 32);
        inQueueC1.FreeTensor(bias1Local);
    }
    __aicore__ inline void Compute()
    {
        AscendC::LocalTensor<int8_t> a2Local = inQueueA2.DeQue<int8_t>();
        AscendC::LocalTensor<int8_t> b2Local = inQueueB2.DeQue<int8_t>();
        AscendC::LocalTensor<int32_t> bias2Local = outQueueC2.DeQue<int32_t>();
        AscendC::LocalTensor<int32_t> c1Local = outQueueCO1.AllocTensor<int32_t>();
        AscendC::MmadParams mmadParams;
        mmadParams.m = m;
        mmadParams.n = n;
        mmadParams.k = k;
        mmadParams.cmatrixInitVal = false;
        // Print a few a2/b2 values for debugging; not useful, they all print as 0
        AscendC::printf("a2Local: \n");
        AscendC::printf("a2[0]: %d, a2[1]: %d, a2[2]: %d, a2[3]: %d\n", a2Local(0), a2Local(1), a2Local(2), a2Local(3));
        AscendC::printf("b2Local: \n");
        AscendC::printf("b2[0]: %d, b2[1]: %d, b2[2]: %d, b2[3]: %d\n", b2Local(0), b2Local(1), b2Local(2), b2Local(3));
        AscendC::printf("bias2Local: \n");
        AscendC::printf("bias2[0]: %d, bias2[1]: %d, bias2[2]: %d, bias2[3]: %d\n", bias2Local(0), bias2Local(1), bias2Local(2), bias2Local(3));
        AscendC::Mmad(c1Local, a2Local, b2Local, bias2Local, mmadParams);
        outQueueCO1.EnQue<int32_t>(c1Local);
        uint32_t array[] = {static_cast<uint32_t>(32), static_cast<uint32_t>(32)};
        AscendC::ShapeInfo shapeInfo(2, array);
        AscendC::DumpTensor(c1Local, 2, 32 * 32, shapeInfo);
        inQueueA2.FreeTensor(a2Local);
        inQueueB2.FreeTensor(b2Local);
        outQueueC2.FreeTensor(bias2Local);
    }
    __aicore__ inline void CopyOut()
    {
        AscendC::LocalTensor<int32_t> c1Local = outQueueCO1.DeQue<int32_t>();
        AscendC::FixpipeParamsV220 fixpipeParams;
        fixpipeParams.nSize = n;
        fixpipeParams.mSize = m;
        fixpipeParams.srcStride = m;
        fixpipeParams.dstStride = n;
        fixpipeParams.ndNum = 1;
        fixpipeParams.srcNdStride = 0;
        fixpipeParams.dstNdStride = 0;
        AscendC::Fixpipe(cGM, c1Local, fixpipeParams);
        outQueueCO1.FreeTensor(c1Local);
    }

private:
    AscendC::TPipe pipe;
    AscendC::TQue<AscendC::TPosition::A1, 1> inQueueA1;
    AscendC::TQue<AscendC::TPosition::A2, 1> inQueueA2;
    AscendC::TQue<AscendC::TPosition::B1, 1> inQueueB1;
    AscendC::TQue<AscendC::TPosition::B2, 1> inQueueB2;
    AscendC::TQue<AscendC::TPosition::CO1, 1> outQueueCO1;
    AscendC::TQue<AscendC::TPosition::C1, 1> inQueueC1;
    AscendC::TQue<AscendC::TPosition::C2, 1> outQueueC2;
    AscendC::GlobalTensor<int8_t> aGM;
    AscendC::GlobalTensor<int8_t> bGM;
    AscendC::GlobalTensor<int32_t> cGM;
    AscendC::GlobalTensor<int32_t> biasGM;
    uint16_t m = 32, k = 32, n = 32;
    uint16_t aSize, bSize, cSize;
};
#endif // MMAD_CUSTOM_CUBE_ONLY_H
```

main.cpp was changed as follows (only the data types were changed):

```
size_t aFileSize = M * K * sizeof(int8_t);
size_t bFileSize = K * N * sizeof(int8_t);
size_t biasFileSize = N * sizeof(int32_t);
size_t cFileSize = M * N * sizeof(int32_t);
uint32_t blockDim = 1;
```

gen_data.py was changed to the following (mainly the data types):

```
import numpy as np
import os


def gen_golden_data():
    M = 32
    N = 32
    K = 32
    x1_gm = np.random.randint(1, 10, [M, K]).astype(np.int8)
    x2_gm = np.random.randint(1, 10, [K, N]).astype(np.int8)
    bias_gm = np.random.randint(1, 10, [N]).astype(np.int32)
    golden = np.matmul(x1_gm.astype(np.int32), x2_gm.astype(np.int32)) + bias_gm

    os.system("mkdir -p input")
    os.system("mkdir -p output")
    x1_gm.tofile("./input/x1_gm.bin")
    x2_gm.tofile("./input/x2_gm.bin")
    bias_gm.tofile("./input/bias_gm.bin")
    golden.tofile("./output/golden.bin")

    # Confirm the final result really is int32
    assert golden.dtype == np.int32
    print("\n计算完成,最终结果为 int32 类型。")

    # Print a small part of the result for inspection
    print("\nGolden result (first 4x4 block):")
    print(golden[:4, :4])


if __name__ == "__main__":
    gen_golden_data()
```

2. Software versions:
-- CANN version: 8.2.RC1.alpha002
-- Tensorflow/Pytorch/MindSpore version: none
-- Python version: none
-- MindStudio version: none
-- OS version (e.g., Ubuntu 18.04): Ubuntu 22.04.5 LTS

3. Test steps:
bash ./run.sh -r npu -v Ascend910B2
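As a side note for reasoning about the fractal sizes mentioned in the problem description (A fractal 16*32, B fractal 32*16 for int8), here is a small host-side sketch. It is plain numpy, not Ascend C; the helper name fractal_geometry is made up for illustration, and the C0 = 32 bytes / element-size rule is my reading of the Mmad/LoadData documentation, so it should be double-checked against the docs:

```
import numpy as np

# Cube fractal geometry on Ascend (assumed): one A (Zz) fractal is 16 rows x C0 columns
# and one B (Zn) fractal is C0 rows x 16 columns, where C0 = 32 bytes / sizeof(dtype)
# (16 elements for half, 32 elements for int8).
def fractal_geometry(dtype, m=32, k=32, n=32):
    c0 = 32 // np.dtype(dtype).itemsize                     # elements per 32-byte cube block
    a_fractal = (16, c0)                                    # A fractal: 16 x C0
    b_fractal = (c0, 16)                                    # B fractal: C0 x 16
    elems_per_fractal = 16 * c0                             # 256 for half, 512 for int8
    a_fractals = ((m + 15) // 16) * ((k + c0 - 1) // c0)    # fractals covering A (m x k)
    b_fractals = ((k + c0 - 1) // c0) * ((n + 15) // 16)    # fractals covering B (k x n)
    return c0, a_fractal, b_fractal, elems_per_fractal, a_fractals, b_fractals

for dt in (np.float16, np.int8):
    c0, a_f, b_f, per, na, nb = fractal_geometry(dt)
    print(np.dtype(dt).name, "C0 =", c0, "A fractal", a_f, "B fractal", b_f,
          "elements/fractal =", per, "A fractals =", na, "B fractals =", nb)

# For the 32x32x32 int8 case this gives C0 = 32, A fractals of 16x32, B fractals of
# 32x16 and 512 elements per fractal -- the same 32*16 / 16*32 offsets that the
# SplitA/SplitB code above hard-codes.
```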
data index: 000002, expected: 644, actual: 852, diff: 208 data index: 000003, expected: 657, actual: 978, diff: 321 data index: 000004, expected: 748, actual: 799, diff: 51 data index: 000005, expected: 760, actual: 705, diff: -55 data index: 000006, expected: 843, actual: 866, diff: 23 data index: 000007, expected: 731, actual: 930, diff: 199 data index: 000008, expected: 779, actual: 709, diff: -70 data index: 000009, expected: 795, actual: 703, diff: -92 data index: 000010, expected: 681, actual: 684, diff: 3 data index: 000011, expected: 806, actual: 901, diff: 95 data index: 000012, expected: 733, actual: 844, diff: 111 data index: 000013, expected: 783, actual: 734, diff: -49 data index: 000014, expected: 962, actual: 880, diff: -82 data index: 000015, expected: 866, actual: 704, diff: -162 data index: 000016, expected: 725, actual: 780, diff: 55 data index: 000017, expected: 752, actual: 656, diff: -96 data index: 000018, expected: 761, actual: 673, diff: -88 data index: 000019, expected: 714, actual: 645, diff: -69 data index: 000020, expected: 919, actual: 747, diff: -172 data index: 000021, expected: 629, actual: 852, diff: 223 data index: 000022, expected: 901, actual: 754, diff: -147 data index: 000023, expected: 846, actual: 909, diff: 63 data index: 000024, expected: 780, actual: 813, diff: 33 data index: 000025, expected: 778, actual: 718, diff: -60 data index: 000026, expected: 811, actual: 840, diff: 29 data index: 000027, expected: 857, actual: 811, diff: -46 data index: 000028, expected: 839, actual: 695, diff: -144 data index: 000029, expected: 826, actual: 998, diff: 172 data index: 000030, expected: 578, actual: 828, diff: 250 data index: 000031, expected: 679, actual: 639, diff: -40 data index: 000032, expected: 792, actual: 737, diff: -55 data index: 000033, expected: 727, actual: 737, diff: 10 data index: 000034, expected: 634, actual: 854, diff: 220 data index: 000035, expected: 724, actual: 849, diff: 125 data index: 000036, expected: 738, actual: 707, diff: -31 data index: 000037, expected: 817, actual: 686, diff: -131 data index: 000038, expected: 866, actual: 776, diff: -90 data index: 000039, expected: 683, actual: 917, diff: 234 data index: 000040, expected: 741, actual: 696, diff: -45 data index: 000042, expected: 783, actual: 736, diff: -47 data index: 000043, expected: 749, actual: 899, diff: 150 data index: 000044, expected: 612, actual: 773, diff: 161 data index: 000045, expected: 792, actual: 745, diff: -47 data index: 000046, expected: 927, actual: 869, diff: -58 data index: 000047, expected: 866, actual: 697, diff: -169 data index: 000048, expected: 759, actual: 743, diff: -16 data index: 000049, expected: 758, actual: 625, diff: -133 data index: 000050, expected: 708, actual: 632, diff: -76 data index: 000051, expected: 734, actual: 704, diff: -30 data index: 000052, expected: 894, actual: 699, diff: -195 data index: 000053, expected: 652, actual: 748, diff: 96 data index: 000054, expected: 839, actual: 722, diff: -117 data index: 000055, expected: 849, actual: 896, diff: 47 data index: 000056, expected: 736, actual: 834, diff: 98 data index: 000057, expected: 713, actual: 642, diff: -71 data index: 000058, expected: 821, actual: 789, diff: -32 data index: 000059, expected: 953, actual: 787, diff: -166 data index: 000060, expected: 928, actual: 760, diff: -168 data index: 000061, expected: 885, actual: 954, diff: 69 data index: 000062, expected: 688, actual: 838, diff: 150 data index: 000063, expected: 666, actual: 710, diff: 44 data index: 000064, 
expected: 747, actual: 639, diff: -108
data index: 000065, expected: 693, actual: 656, diff: -37
data index: 000066, expected: 706, actual: 815, diff: 109
data index: 000067, expected: 642, actual: 946, diff: 304
data index: 000068, expected: 746, actual: 674, diff: -72
data index: 000069, expected: 740, actual: 659, diff: -81
data index: 000070, expected: 777, actual: 733, diff: -44
data index: 000071, expected: 621, actual: 888, diff: 267
data index: 000072, expected: 679, actual: 700, diff: 21
data index: 000073, expected: 720, actual: 671, diff: -49
data index: 000074, expected: 715, actual: 694, diff: -21
data index: 000075, expected: 666, actual: 860, diff: 194
data index: 000076, expected: 666, actual: 730, diff: 64
data index: 000077, expected: 780, actual: 682, diff: -98
data index: 000078, expected: 1003, actual: 893, diff: -110
data index: 000079, expected: 847, actual: 672, diff: -175
data index: 000080, expected: 674, actual: 684, diff: 10
data index: 000081, expected: 683, actual: 667, diff: -16
data index: 000082, expected: 654, actual: 625, diff: -29
data index: 000083, expected: 698, actual: 688, diff: -10
data index: 000084, expected: 792, actual: 710, diff: -82
data index: 000085, expected: 589, actual: 782, diff: 193
data index: 000086, expected: 746, actual: 752, diff: 6
data index: 000087, expected: 862, actual: 807, diff: -55
data index: 000088, expected: 736, actual: 724, diff: -12
data index: 000089, expected: 668, actual: 672, diff: 4
data index: 000090, expected: 783, actual: 836, diff: 53
data index: 000091, expected: 778, actual: 722, diff: -56
data index: 000092, expected: 801, actual: 758, diff: -43
data index: 000093, expected: 771, actual: 875, diff: 104
data index: 000094, expected: 527, actual: 733, diff: 206
data index: 000095, expected: 628, actual: 584, diff: -44
data index: 000096, expected: 673, actual: 694, diff: 21
data index: 000097, expected: 651, actual: 656, diff: 5
data index: 000098, expected: 627, actual: 766, diff: 139
data index: 000099, expected: 661, actual: 805, diff: 144
data index: 000100, expected: 671, actual: 656, diff: -15
... (more errors exist but not shown)
----------------------------------------
Error ratio: 0.9980, Tolerance: 0.0000
[ERROR] Verification failed: Result mismatch.

mmad_custom_cube_only.h was changed to the following (see mainly the comments and the adjusted parts):
```
/**
 * @file mmad_custom_cube_only.h
 *
 * Copyright (C) 2023-2024. Huawei Technologies Co., Ltd. All rights reserved.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
 */
#ifndef MMAD_CUSTOM_CUBE_ONLY_H
#define MMAD_CUSTOM_CUBE_ONLY_H
#include "kernel_operator.h"

// int8_t type, cube block: [16, 16]
constexpr uint32_t CUBE_BLOCK = 16;
constexpr uint32_t CUBE_BLOCK_SIZE = 16 * 16;

class KernelMmad {
public:
    __aicore__ inline KernelMmad()
    {
        aSize = m * k;
        bSize = k * n;
        cSize = m * n;
    }
    __aicore__ inline void Init(GM_ADDR a, GM_ADDR b, GM_ADDR bias, GM_ADDR c)
    {
        // set cube only
        KERNEL_TASK_TYPE_DEFAULT(KERNEL_TYPE_AIC_ONLY);
        aGM.SetGlobalBuffer((__gm__ int8_t *)a);
        bGM.SetGlobalBuffer((__gm__ int8_t *)b);
        cGM.SetGlobalBuffer((__gm__ int32_t *)c);
        biasGM.SetGlobalBuffer((__gm__ int32_t *)bias);
        pipe.InitBuffer(inQueueA1, 1, aSize * sizeof(int8_t));
        pipe.InitBuffer(inQueueA2, 1, aSize * sizeof(int8_t));
        pipe.InitBuffer(inQueueB1, 1, bSize * sizeof(int8_t));
        pipe.InitBuffer(inQueueB2, 1, bSize * sizeof(int8_t));
        pipe.InitBuffer(outQueueCO1, 1, cSize * sizeof(int32_t));
        pipe.InitBuffer(inQueueC1, 1, n * sizeof(int32_t));
        pipe.InitBuffer(outQueueC2, 1, n * sizeof(int32_t));
    }
    __aicore__ inline void Process()
    {
        CopyIn();
        SplitA();
        SplitB();
        SplitBias();
        Compute();
        CopyOut();
    }

private:
    __aicore__ inline uint32_t CeilCubeBlock(uint32_t len)
    {
        return (len + CUBE_BLOCK - 1) / CUBE_BLOCK;
    }
    // For the K dimension, each fractal block should be 32 elements
    __aicore__ inline uint32_t CeilCubeBlock_K(uint32_t len)
    {
        return (len + 32 - 1) / 32;
    }
    __aicore__ inline void CopyIn()
    {
        AscendC::LocalTensor<int8_t> a1Local = inQueueA1.AllocTensor<int8_t>();
        AscendC::LocalTensor<int8_t> b1Local = inQueueB1.AllocTensor<int8_t>();
        AscendC::LocalTensor<int32_t> bias1Local = inQueueC1.AllocTensor<int32_t>();

        AscendC::Nd2NzParams nd2nzA1Params;
        nd2nzA1Params.ndNum = 1;
        nd2nzA1Params.nValue = m;
        nd2nzA1Params.dValue = k;
        nd2nzA1Params.srcNdMatrixStride = 0;
        nd2nzA1Params.srcDValue = k;
        // nd2nzA1Params.dstNzC0Stride = CeilCubeBlock(m) * CUBE_BLOCK;
        nd2nzA1Params.dstNzC0Stride = CeilCubeBlock(m) * CUBE_BLOCK; // 32 -- why is this tied to CeilCubeBlock???
        nd2nzA1Params.dstNzNStride = 1;
        nd2nzA1Params.dstNzMatrixStride = 0;
        AscendC::DataCopy(a1Local, aGM, nd2nzA1Params);

        AscendC::Nd2NzParams nd2nzB1Params;
        nd2nzB1Params.ndNum = 1;
        nd2nzB1Params.nValue = k;
        nd2nzB1Params.dValue = n;
        nd2nzB1Params.srcNdMatrixStride = 0;
        nd2nzB1Params.srcDValue = n;
        nd2nzB1Params.dstNzC0Stride = CeilCubeBlock(k) * CUBE_BLOCK;
        nd2nzB1Params.dstNzNStride = 1;
        nd2nzB1Params.dstNzMatrixStride = 0;
        AscendC::DataCopy(b1Local, bGM, nd2nzB1Params);

        AscendC::DataCopy(bias1Local, biasGM, n);
        inQueueA1.EnQue(a1Local);
        inQueueB1.EnQue(b1Local);
        inQueueC1.EnQue(bias1Local);
    }
    __aicore__ inline void SplitA()
    {
        AscendC::LocalTensor<int8_t> a1Local = inQueueA1.DeQue<int8_t>();
        AscendC::LocalTensor<int8_t> a2Local = inQueueA2.AllocTensor<int8_t>();
        // uint32_t dstOffset = CeilCubeBlock(k) * CUBE_BLOCK_SIZE;
        // uint32_t srcOffset = CUBE_BLOCK_SIZE;
        uint32_t dstOffset = 32 * 16;
        uint32_t srcOffset = 16 * 32;
        AscendC::LoadData2DParams loadDataParams;
        // loadDataParams.repeatTimes = CeilCubeBlock(k);
        // loadDataParams.srcStride = CeilCubeBlock(m);
        loadDataParams.repeatTimes = 1;
        loadDataParams.srcStride = 2;
        loadDataParams.dstGap = 0;
        loadDataParams.ifTranspose = false;
        // for (int i = 0; i < CeilCubeBlock(m); ++i) {
        for (int i = 0; i < 2; ++i) {
            AscendC::LoadData(a2Local[i * dstOffset], a1Local[i * srcOffset], loadDataParams);
        }
        inQueueA2.EnQue<int8_t>(a2Local);
        uint32_t array[] = {static_cast<uint32_t>(32), static_cast<uint32_t>(32)};
        AscendC::ShapeInfo shapeInfo(2, array);
        AscendC::DumpTensor(a1Local, 1, 32 * 32, shapeInfo);
        inQueueA1.FreeTensor(a1Local);
    }
    __aicore__ inline void SplitB()
    {
        AscendC::LocalTensor<int8_t> b1Local = inQueueB1.DeQue<int8_t>();
        AscendC::LocalTensor<int8_t> b2Local = inQueueB2.AllocTensor<int8_t>();
        // uint32_t dstOffset = CeilCubeBlock(n) * CUBE_BLOCK_SIZE;
        // uint32_t srcOffset = CUBE_BLOCK_SIZE;
        uint32_t dstOffset = 32 * 16;
        uint32_t srcOffset = 16 * 32;
        // Nz -> Zn
        AscendC::LoadData2DParams loadDataParams;
        // loadDataParams.repeatTimes = CeilCubeBlock(n);
        // loadDataParams.srcStride = CeilCubeBlock(k);
        loadDataParams.repeatTimes = 2;
        loadDataParams.srcStride = 1;
        loadDataParams.dstGap = 0;
        loadDataParams.ifTranspose = true;
        // for (int i = 0; i < CeilCubeBlock(k); ++i) {
        for (int i = 0; i < 1; ++i) {
            AscendC::LoadData(b2Local[i * dstOffset], b1Local[i * srcOffset], loadDataParams);
        }
        inQueueB1.FreeTensor(b1Local);
        uint32_t array[] = {static_cast<uint32_t>(32), static_cast<uint32_t>(32)};
        AscendC::ShapeInfo shapeInfo(2, array);
        AscendC::DumpTensor(b1Local, 2, 32 * 32, shapeInfo);
        inQueueB2.EnQue<int8_t>(b2Local);
    }
    __aicore__ inline void SplitBias()
    {
        AscendC::LocalTensor<int32_t> bias1Local = inQueueC1.DeQue<int32_t>();
        AscendC::LocalTensor<int32_t> bias2Local = outQueueC2.AllocTensor<int32_t>();
        AscendC::DataCopy(bias2Local, bias1Local, { 1, (uint16_t)(n * sizeof(int32_t) / 64), 0, 0 });
        outQueueC2.EnQue<int32_t>(bias2Local);
        AscendC::DumpTensor(bias1Local, 3, 32);
        inQueueC1.FreeTensor(bias1Local);
    }
    __aicore__ inline void Compute()
    {
        AscendC::LocalTensor<int8_t> a2Local = inQueueA2.DeQue<int8_t>();
        AscendC::LocalTensor<int8_t> b2Local = inQueueB2.DeQue<int8_t>();
        AscendC::LocalTensor<int32_t> bias2Local = outQueueC2.DeQue<int32_t>();
        AscendC::LocalTensor<int32_t> c1Local = outQueueCO1.AllocTensor<int32_t>();
        AscendC::MmadParams mmadParams;
        mmadParams.m = m;
        mmadParams.n = n;
        mmadParams.k = k;
        mmadParams.cmatrixInitVal = false;
        // Print a few values of a2/b2 -- not useful, everything is 0, cannot be printed
        AscendC::printf("a2Local: \n");
        AscendC::printf("a2[0]: %d, a2[1]: %d, a2[2]: %d, a2[3]: %d\n", a2Local(0), a2Local(1), a2Local(2), a2Local(3));
        AscendC::printf("b2Local: \n");
        AscendC::printf("b2[0]: %d, b2[1]: %d, b2[2]: %d, b2[3]: %d\n", b2Local(0), b2Local(1), b2Local(2), b2Local(3));
        AscendC::printf("bias2Local: \n");
        AscendC::printf("bias2[0]: %d, bias2[1]: %d, bias2[2]: %d, bias2[3]: %d\n", bias2Local(0), bias2Local(1), bias2Local(2), bias2Local(3));
        AscendC::Mmad(c1Local, a2Local, b2Local, bias2Local, mmadParams);
        outQueueCO1.EnQue<int32_t>(c1Local);
        uint32_t array[] = {static_cast<uint32_t>(32), static_cast<uint32_t>(32)};
        AscendC::ShapeInfo shapeInfo(2, array);
        AscendC::DumpTensor(c1Local, 2, 32 * 32, shapeInfo);
        inQueueA2.FreeTensor(a2Local);
        inQueueB2.FreeTensor(b2Local);
        outQueueC2.FreeTensor(bias2Local);
    }
    __aicore__ inline void CopyOut()
    {
        AscendC::LocalTensor<int32_t> c1Local = outQueueCO1.DeQue<int32_t>();
        AscendC::FixpipeParamsV220 fixpipeParams;
        fixpipeParams.nSize = n;
        fixpipeParams.mSize = m;
        fixpipeParams.srcStride = m;
        fixpipeParams.dstStride = n;
        fixpipeParams.ndNum = 1;
        fixpipeParams.srcNdStride = 0;
        fixpipeParams.dstNdStride = 0;
        AscendC::Fixpipe(cGM, c1Local, fixpipeParams);
        outQueueCO1.FreeTensor(c1Local);
    }

private:
    AscendC::TPipe pipe;
    AscendC::TQue<AscendC::TPosition::A1, 1> inQueueA1;
    AscendC::TQue<AscendC::TPosition::A2, 1> inQueueA2;
    AscendC::TQue<AscendC::TPosition::B1, 1> inQueueB1;
    AscendC::TQue<AscendC::TPosition::B2, 1> inQueueB2;
    AscendC::TQue<AscendC::TPosition::CO1, 1> outQueueCO1;
    AscendC::TQue<AscendC::TPosition::C1, 1> inQueueC1;
    AscendC::TQue<AscendC::TPosition::C2, 1> outQueueC2;
    AscendC::GlobalTensor<int8_t> aGM;
    AscendC::GlobalTensor<int8_t> bGM;
    AscendC::GlobalTensor<int32_t> cGM;
    AscendC::GlobalTensor<int32_t> biasGM;
    uint16_t m = 32, k = 32, n = 32;
    uint16_t aSize, bSize, cSize;
};
#endif // MMAD_CUSTOM_CUBE_ONLY_H
```
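To reason about the loop counts and offsets in SplitA/SplitB, here is a small numpy sketch of the fractal geometry as I understand it for int8 (this is my assumption from the Mmad/LoadData documentation, not something I have verified against the hardware): with C0 = 32 for int8, one A (L0A) fractal would be 16 rows x 32 columns and one B (L0B) fractal 32 rows x 16 columns, and accumulating the block-wise products must still reproduce the full np.matmul. The script is plain numpy, not Ascend C; it only illustrates how many fractal blocks a 32x32x32 problem decomposes into.
```
import numpy as np

# Assumption: int8 => C0 = 32, A fractal = 16 x 32 (zZ), B fractal = 32 x 16.
# M = K = N = 32 as in gen_data.py.
M, K, N = 32, 32, 32
A_FRAC = (16, 32)   # rows x cols of one A fractal block
B_FRAC = (32, 16)   # rows x cols of one B fractal block

a = np.random.randint(1, 10, (M, K)).astype(np.int8)
b = np.random.randint(1, 10, (K, N)).astype(np.int8)

# Split A into fractal blocks: 2 blocks along M, 1 block along K.
a_blocks = [[a[i:i + A_FRAC[0], j:j + A_FRAC[1]]
             for j in range(0, K, A_FRAC[1])]
            for i in range(0, M, A_FRAC[0])]
# Split B into fractal blocks: 1 block along K, 2 blocks along N.
b_blocks = [[b[i:i + B_FRAC[0], j:j + B_FRAC[1]]
             for j in range(0, N, B_FRAC[1])]
            for i in range(0, K, B_FRAC[0])]

print("A fractal blocks (M x K):", len(a_blocks), "x", len(a_blocks[0]))  # 2 x 1
print("B fractal blocks (K x N):", len(b_blocks), "x", len(b_blocks[0]))  # 1 x 2

# Block-wise accumulation must reproduce the full int32 matmul.
c = np.zeros((M, N), dtype=np.int32)
for mi in range(len(a_blocks)):
    for ni in range(len(b_blocks[0])):
        acc = np.zeros((A_FRAC[0], B_FRAC[1]), dtype=np.int32)
        for ki in range(len(b_blocks)):
            acc += a_blocks[mi][ki].astype(np.int32) @ b_blocks[ki][ni].astype(np.int32)
        c[mi * A_FRAC[0]:(mi + 1) * A_FRAC[0], ni * B_FRAC[1]:(ni + 1) * B_FRAC[1]] = acc

assert np.array_equal(c, a.astype(np.int32) @ b.astype(np.int32))
print("block-wise product matches np.matmul")
```
So for 32x32x32 I expect two 16x32 A fractals and two 32x16 B fractals; whether repeatTimes/srcStride in LoadData2DParams have to follow exactly this counting is the part I am unsure about.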
main.cpp was changed as follows (only the data types were changed):
```
size_t aFileSize = M * K * sizeof(int8_t); // uint16_t represent half
size_t bFileSize = K * N * sizeof(int8_t); // uint16_t represent half
size_t biasFileSize = N * sizeof(int32_t); // uint16_t represent half
size_t cFileSize = M * N * sizeof(int32_t);
uint32_t blockDim = 1;
```
gen_data.py was changed to the following (mainly the data types were changed):
```
import numpy as np
import os


def gen_golden_data():
    M = 32
    N = 32
    K = 32

    x1_gm = np.random.randint(1, 10, [M, K]).astype(np.int8)
    x2_gm = np.random.randint(1, 10, [K, N]).astype(np.int8)
    bias_gm = np.random.randint(1, 10, [N]).astype(np.int32)
    golden = np.matmul(x1_gm.astype(np.int32), x2_gm.astype(np.int32)) + bias_gm

    os.system("mkdir -p input")
    os.system("mkdir -p output")
    x1_gm.tofile("./input/x1_gm.bin")
    x2_gm.tofile("./input/x2_gm.bin")
    bias_gm.tofile("./input/bias_gm.bin")
    golden.tofile("./output/golden.bin")

    # Confirm that the final result is indeed int32
    assert golden.dtype == np.int32
    print("\n计算完成,最终结果为 int32 类型。")

    # Print a small part of the result for inspection
    print("\nGolden result (first 4x4 block):")
    print(golden[:4, :4])


if __name__ == "__main__":
    gen_golden_data()
```
2. Software versions:
-- CANN version: 8.2.RC1.alpha002
-- Tensorflow/Pytorch/MindSpore version: none
-- Python version: none
-- MindStudio version: none
-- OS version (e.g., Ubuntu 18.04): Ubuntu 22.04.5 LTS

3. Test steps:
bash ./run.sh -r npu -v Ascend910B2