From GitHub user Sand3r-:
I've managed to correct the code of AnalysisPredictor (analysis_predictor.cc) so that it actually runs the MKL-DNN engine in analyzer_vis_tester.cc (previously the MKL-DNN layers weren't called at all). The corrected code can be found at https://github.com/Sand3r-/Paddle on branch mgallus/fix_mkldnn_at_vis_test.
However, after running the test with MKL-DNN enabled, it crashed.
To assess whether the fuses were the cause, I disabled all the relevant fuses, but the program still crashed.
I then debugged the program deep enough to find that the crash was caused by MKL-DNN's check for pooling output size consistency (the relevant line can be found here). It reports that the determined output size of (1, 16, 24, 385) is incorrect, and that the correct output shape should be (1, 16, 24, 384), given that the input to the pooling has shape (1, 16, 48, 769), the kernel size is (2, 2), the padding is (0, 0), and the strides are (2, 2).
According to the formula found at http://cs231n.github.io/convolutional-networks/?utm_source=top.caibaojian.com/48879 (Section "Pooling Layer"), this gives:
width = (old_width - filter_size) / stride + 1
width = (769 - 2) / 2 + 1 = 384 (with integer, i.e. floor, division),
hence the MKL-DNN check is correct.
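As a sanity check, the floor-mode formula above can be evaluated directly with the shapes quoted from the failing test (a minimal sketch, not Paddle code):

```python
def pool_out_floor(size, filter_size, stride, padding=0):
    # Standard (floor) pooling output size, as in the cs231n formula:
    # (size - filter_size + 2 * padding) / stride + 1, with floor division.
    return (size - filter_size + 2 * padding) // stride + 1

# Width dimension of the failing pooling op: input 769, 2x2 kernel, stride 2.
print(pool_out_floor(769, 2, 2))  # -> 384, matching the MKL-DNN check
```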
That would imply that the model downloaded for the purposes of test_analyzer_ocr is flawed, as it contains incorrectly computed pooling output size(s?).
After investigating how the reference CPU implementation of Paddle computes the output shape, I noticed that it must have used the ceil_mode attribute, which uses the formula
(input_size - filter_size + 2 * padding + stride - 1) / stride + 1
for output shape computation (here). That indeed yields an output width of 385 rather than 384. Now the question is: is the ceil_mode formula necessary, and if so, why? Can't the model be trained without it? I haven't found any usages in the code that would alter the behaviour of pooling when this parameter is enabled. Moreover, is the formula even correct if it outputs different results?
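For comparison, the ceil_mode formula quoted above can be checked the same way (again a sketch; the "+ stride - 1" term simply rounds the integer division up instead of down):

```python
def pool_out_ceil(size, filter_size, stride, padding=0):
    # ceil_mode pooling output size: adding (stride - 1) before the floor
    # division is equivalent to taking the ceiling of the division.
    return (size - filter_size + 2 * padding + stride - 1) // stride + 1

print(pool_out_ceil(769, 2, 2))  # -> 385, the shape stored in the OCR model
```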
is the ceil_mode formula necessary, and if so, why?
The ceil_mode formula is necessary.
In the v2 code, CudnnPoolLayer.cpp supports the ceil_mode attribute as well, i.e., the cuDNN library supports it.
https://github.com/PaddlePaddle/Paddle/blob/642cf6ca2f77d409bfd1ecff9f0604f4b911167b/paddle/legacy/gserver/layers/CudnnPoolLayer.cpp#L28-L35
Can't the model be trained without it?
The model cannot be trained without it; it was trained with cuDNN before.
From GitHub user Sand3r-:
Thank you for a quick response!
From GitHub user Sand3r-:
@luotao I have reviewed the code you linked, but I cannot find a direct reference to ceil_mode in this file. The mode in the quoted code relates to max or average pooling. Could you provide more details, or perhaps any papers that say more about this ceil_mode? We need to modify the MKL-DNN integration to support this scenario.
Update:
I have found the following
https://github.com/PaddlePaddle/Paddle/blob/642cf6ca2f77d409bfd1ecff9f0604f4b911167b/paddle/legacy/gserver/layers/CudnnPoolLayer.cpp#L87-L91
And it indeed makes use of the other formula:
https://github.com/PaddlePaddle/Paddle/blob/71b1c397d7d7cf7cc02c274fb2d69aaf7f794933/paddle/legacy/math/MathUtils.cpp#L71-L79
However, what I am curious about is whether this change affects the computations in any way, since I haven't found it used anywhere in the computation code.
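The legacy helper linked above selects between the two formulas with a mode flag; a rough Python paraphrase of that selection logic (names follow the linked C++, but this is a sketch under my reading of the code, not the exact source):

```python
def output_size(image_size, filter_size, padding, stride, caffe_mode):
    # Paraphrase of the outputSize helper in the legacy MathUtils code:
    # caffe_mode selects floor-style division (what MKL-DNN expects),
    # otherwise the ceil-style formula (the ceil_mode / cuDNN path) is used.
    if caffe_mode:
        return (image_size - filter_size + 2 * padding) // stride + 1
    return (image_size - filter_size + 2 * padding + stride - 1) // stride + 1

# The two modes disagree exactly when stride does not evenly divide
# (image_size - filter_size + 2 * padding), as with width 769 here:
print(output_size(769, 2, 0, 2, True))   # -> 384
print(output_size(769, 2, 0, 2, False))  # -> 385
```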