Function Differences with torch.flatten

torch.flatten

torch.flatten(
    input,
    start_dim=0,
    end_dim=-1
)

For more information, see torch.flatten.

mindspore.ops.Flatten

class mindspore.ops.Flatten(*args, **kwargs)(input_x)

For more information, see mindspore.ops.Flatten.

Differences

PyTorch: Supports flattening elements over a contiguous range of dimensions specified by start_dim and end_dim.

MindSpore: Only the 0th dimension is retained; the elements of all remaining dimensions are flattened into a single dimension.

Code Example

import mindspore as ms
import mindspore.ops as ops
import torch
import numpy as np

# In MindSpore, only the 0th dimension is retained and the rest are flattened.
input_tensor = ms.Tensor(np.ones(shape=[1, 2, 3, 4]), ms.float32)
flatten = ops.Flatten()
output = flatten(input_tensor)
print(output.shape)
# Out:
# (1, 24)

# In PyTorch, the range of dimensions to flatten is specified by start_dim and end_dim; the other dimensions are retained.
input_tensor = torch.Tensor(np.ones(shape=[1, 2, 3, 4]))
output1 = torch.flatten(input=input_tensor, start_dim=1)
print(output1.shape)
# Out:
# torch.Size([1, 24])

input_tensor = torch.Tensor(np.ones(shape=[1, 2, 3, 4]))
output2 = torch.flatten(input=input_tensor, start_dim=2)
print(output2.shape)
# Out:
# torch.Size([1, 2, 12])
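
To approximate torch.flatten's start_dim/end_dim behavior in MindSpore, one option is to compute the target shape manually and reshape. The following is a minimal sketch (the flatten_range helper is hypothetical, not a MindSpore API), assuming mindspore.Tensor.reshape is available:

import mindspore as ms
import numpy as np

# Hypothetical helper that mimics torch.flatten(input, start_dim, end_dim)
# by merging the dimensions in [start_dim, end_dim] and reshaping.
def flatten_range(x, start_dim=0, end_dim=-1):
    shape = x.shape
    ndim = len(shape)
    start = start_dim % ndim
    end = end_dim % ndim
    merged = 1
    for size in shape[start:end + 1]:
        merged *= size
    new_shape = shape[:start] + (merged,) + shape[end + 1:]
    return x.reshape(new_shape)

x = ms.Tensor(np.ones(shape=[1, 2, 3, 4]), ms.float32)
print(flatten_range(x, start_dim=1).shape)
# Out:
# (1, 24)
print(flatten_range(x, start_dim=2).shape)
# Out:
# (1, 2, 12)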