Visual Prediction test fails with MKL-DNN due to incorrect Pooling dimensions

Status: Completed
Created: 2021-03-27 03:33

From GitHub user Sand3r-:
I've managed to correct the code of AnalysisPredictor (analysis_predictor.cc) so that it actually runs the MKL-DNN engine in analyzer_vis_tester.cc (previously the MKL-DNN layers weren't called at all). The corrected code can be found at https://github.com/Sand3r-/Paddle on branch mgallus/fix_mkldnn_at_vis_test.

However, after running the test with MKL-DNN enabled, it crashed.

To assess whether the crash was caused by the fuses, I disabled all the relevant fuses.
The program still crashed.

I then debugged the program deep enough to find out that the crash was caused by MKL-DNN's check for pooling output size consistency (the relevant line can be found here). It says that the determined output size of (1, 16, 24, 385) is incorrect and that the correct output shape should be (1, 16, 24, 384), given that:
the input to the pooling has shape of (1, 16, 48, 769),
kernel size is (2, 2),
padding of (0,0),
and strides of (2, 2).

According to the formula found at http://cs231n.github.io/convolutional-networks/?utm_source=top.caibaojian.com/48879, Section "Pooling Layer", this gives the following:

width = (old_width - filter_size) / stride + 1

width = (769 - 2) / 2 + 1 = 384 (integer/floor division), hence the MKL-DNN check is correct.
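For reference, the floor-mode computation can be reproduced with a few lines of C++ (a minimal sketch; the helper name is mine, the shapes and parameters are the ones reported above):

```cpp
#include <iostream>

// Pooling output size without ceil_mode, i.e. the floor formula
// out = (in - filter + 2 * padding) / stride + 1 (integer division).
int PoolOutFloor(int in, int filter, int padding, int stride) {
  return (in - filter + 2 * padding) / stride + 1;
}

int main() {
  // Height: (48 - 2) / 2 + 1 = 24; width: (769 - 2) / 2 + 1 = 384.
  std::cout << PoolOutFloor(48, 2, 0, 2) << " x "
            << PoolOutFloor(769, 2, 0, 2) << std::endl;  // prints 24 x 384
  return 0;
}
```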

That would imply that the model downloaded for the purposes of test_analyzer_ocr is flawed, as it contains incorrectly computed pooling output size(s?).

After investigating how the reference CPU implementation of Paddle computes the output shape, I noticed that it must have used the ceil_mode attribute, which uses the formula
(input_size - filter_size + 2 * padding + stride - 1) / stride + 1 here for output shape computation. That indeed results in a computed output width of 385 rather than 384. Now the question is: is that ceil_mode formula necessary, and if so, why? Can the model not be trained without it? I haven't found any usages in the code that would alter the behaviour of pooling when this parameter is enabled. Moreover, is the formula even correct if it outputs different results?
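To make the 384 vs. 385 discrepancy above concrete, here is an analogous sketch for the ceil_mode formula (again, the helper name is mine, not Paddle's):

```cpp
#include <iostream>

// ceil_mode output size, per the formula quoted above:
// out = (in - filter + 2 * padding + stride - 1) / stride + 1.
int PoolOutCeil(int in, int filter, int padding, int stride) {
  return (in - filter + 2 * padding + stride - 1) / stride + 1;
}

int main() {
  // Width: (769 - 2 + 0 + 1) / 2 + 1 = 385, one more than the 384
  // produced by the floor formula, hence the MKL-DNN mismatch.
  std::cout << PoolOutCeil(769, 2, 0, 2) << std::endl;  // prints 385
  return 0;
}
```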

Comments (4)

PaddlePaddle-Gardener created this task

From GitHub user luotao1:

is that ceil_mode formula necessary, and if so, why?

The ceil_mode formula is necessary.
In the v2 code, CudnnPoolLayer.cpp supports the ceil_mode attribute as well, i.e. the cudnn library supports it.
https://github.com/PaddlePaddle/Paddle/blob/642cf6ca2f77d409bfd1ecff9f0604f4b911167b/paddle/legacy/gserver/layers/CudnnPoolLayer.cpp#L28-L35

Cannot the model be trained without it?

The model cannot be trained without it; it was trained with cudnn before.

From GitHub user Sand3r-:
@luotao I have reviewed the code you've linked; however, I cannot find a direct reference to ceil_mode in this file. The mode in the quoted code is related to max vs. average pooling. Could you provide more information, or perhaps any papers that say more about this ceil_mode? We need to modify the MKL-DNN integration to support this scenario.

Update:
I have found the following:
https://github.com/PaddlePaddle/Paddle/blob/642cf6ca2f77d409bfd1ecff9f0604f4b911167b/paddle/legacy/gserver/layers/CudnnPoolLayer.cpp#L87-L91

And it indeed makes use of the other formula:
https://github.com/PaddlePaddle/Paddle/blob/71b1c397d7d7cf7cc02c274fb2d69aaf7f794933/paddle/legacy/math/MathUtils.cpp#L71-L79
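For readers without the repository at hand, the linked helper appears to select between the two formulas via a boolean flag; the following is a paraphrased sketch of its logic (identifier names approximate, not copied verbatim):

```cpp
// Paraphrase of the legacy output-size helper linked above:
// when the flag is false, the ceil_mode-style formula is used.
int outputSize(int imageSize, int filterSize, int padding, int stride,
               bool caffeMode) {
  if (caffeMode) {
    // floor formula
    return (imageSize - filterSize + 2 * padding) / stride + 1;
  }
  // ceil_mode formula
  return (imageSize - filterSize + 2 * padding + stride - 1) / stride + 1;
}
```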

However, what I am curious about is whether this change affects the computations in any way, since I haven't found it used anywhere in the computation code.

PaddlePaddle-Coordinator changed the task status from To do to Completed
PaddlePaddle-Coordinator added the Intel label
