torch.nn.MaxPool1d(
kernel_size,
stride=None,
padding=0,
dilation=1,
return_indices=False,
ceil_mode=False
)
For more information, see torch.nn.MaxPool1d.
class mindspore.nn.MaxPool1d(
kernel_size=1,
stride=1,
pad_mode="valid"
)
For more information, see mindspore.nn.MaxPool1d.
PyTorch: The output shape can be adjusted through the padding parameter. If the shape of the input is $ (N, C, L_{in}) $, the shape of the output is $ (N, C, L_{out}) $, where
$$ L_{out} = \left\lfloor \frac{L_{in} + 2 \times \text{padding} - \text{dilation} \times (\text{kernel_size} - 1) - 1}{\text{stride}} + 1\right\rfloor $$
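As a quick sanity check, the formula above can be evaluated in plain Python with the values used in the PyTorch example on this page (L_in=50, kernel_size=3, stride=2, padding=1). The helper name is chosen here for illustration only.

```python
import math

def torch_maxpool1d_out_len(l_in, kernel_size, stride, padding=0, dilation=1):
    """Output length of torch.nn.MaxPool1d per the formula above."""
    return math.floor(
        (l_in + 2 * padding - dilation * (kernel_size - 1) - 1) / stride + 1
    )

print(torch_maxpool1d_out_len(50, 3, 2, padding=1))  # 25, matching the example below
```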
MindSpore: There is no padding parameter; padding behavior is controlled solely by the pad_mode parameter. If the shape of the input is $ (N, C, L_{in}) $, the shape of the output is $ (N, C, L_{out}) $, where
pad_mode is "valid":
$$ L_{out} = \left\lceil \frac{L_{in} - (\text{kernel_size} - 1)}{\text{stride}}\right\rceil $$
pad_mode is "same":
$$ L_{out} = \left\lceil \frac{L_{in}}{\text{stride}}\right\rceil $$
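The two MindSpore formulas can likewise be checked in plain Python against the example shapes below (L_in=50, kernel_size=3, stride=2). The helper name is illustrative, not part of the MindSpore API.

```python
import math

def ms_maxpool1d_out_len(l_in, kernel_size, stride, pad_mode="valid"):
    """Output length of mindspore.nn.MaxPool1d per the formulas above."""
    if pad_mode == "valid":
        return math.ceil((l_in - (kernel_size - 1)) / stride)
    return math.ceil(l_in / stride)  # pad_mode == "same"

print(ms_maxpool1d_out_len(50, 3, 2, "valid"))  # 24, matching the first example below
print(ms_maxpool1d_out_len(50, 3, 2, "same"))   # 25, matching the second example below
```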
import mindspore as ms
import mindspore.nn as nn
import torch
import numpy as np
# In MindSpore, pad_mode="valid"
pool = nn.MaxPool1d(kernel_size=3, stride=2, pad_mode="valid")
input_x = ms.Tensor(np.random.randn(20, 16, 50).astype(np.float32))
output = pool(input_x)
print(output.shape)
# Out:
# (20, 16, 24)
# In MindSpore, pad_mode="same"
pool = nn.MaxPool1d(kernel_size=3, stride=2, pad_mode="same")
input_x = ms.Tensor(np.random.randn(20, 16, 50).astype(np.float32))
output = pool(input_x)
print(output.shape)
# Out:
# (20, 16, 25)
# In torch, padding=1
m = torch.nn.MaxPool1d(3, stride=2, padding=1)
input_x = torch.randn(20, 16, 50)
output = m(input_x)
print(output.shape)
# Out:
# torch.Size([20, 16, 25])