
# Function Differences with torch.distributed.all_gather

## torch.distributed.all_gather

```python
torch.distributed.all_gather(
    tensor_list,
    tensor,
    group=None,
    async_op=False
)
```

For more information, see torch.distributed.all_gather.

## mindspore.ops.AllGather

```python
class mindspore.ops.AllGather(group=GlobalComm.WORLD_COMM_GROUP)(input_x)
```

For more information, see mindspore.ops.AllGather.

## Differences

PyTorch: The inputs are the tensor broadcast by the current process (`tensor`), the communication group `group`, and the async op flag `async_op`. The output is `tensor_list` after the AllGather op, whose type is `list[Tensor]` and whose length is the number of devices in the communication group. The return value is an async work handle if `async_op=True`, otherwise `None`.
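
The minimal sketch below illustrates this calling convention. It assumes a process group has already been initialized (for example with `torch.distributed.init_process_group`); the `gather_example` function name and the `rank`/`world_size` arguments are illustrative.

```python
import torch
import torch.distributed as dist

def gather_example(rank, world_size):
    # Each rank contributes a tensor of the same shape.
    tensor = torch.tensor([float(rank)])
    # The caller pre-allocates the output list; its length must equal world_size.
    tensor_list = [torch.zeros(1) for _ in range(world_size)]
    dist.all_gather(tensor_list, tensor)             # blocking call, returns None
    # With async_op=True the call returns a work handle that must be waited on.
    work = dist.all_gather(tensor_list, tensor, async_op=True)
    work.wait()
    return tensor_list                               # list[Tensor] of length world_size
```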

MindSpore: The input of this interface is a single tensor `input_x`. The output is a tensor whose first dimension equals the number of devices in the communication group multiplied by the first dimension of the input, i.e. the per-device inputs are concatenated along the first dimension. This interface currently does not support configuring `async_op`.
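
A comparable sketch for MindSpore is shown below. It assumes the communication environment has been initialized with `mindspore.communication.init()`; the `Net` class name and the input shape are illustrative.

```python
import numpy as np
from mindspore import ops, nn, Tensor
from mindspore.communication import init

init()  # initialize the communication group for distributed execution

class Net(nn.Cell):
    def __init__(self):
        super().__init__()
        # Defaults to GlobalComm.WORLD_COMM_GROUP; there is no async_op option.
        self.all_gather = ops.AllGather()

    def construct(self, x):
        return self.all_gather(x)

net = Net()
out = net(Tensor(np.ones([2, 8]).astype(np.float32)))
# With N devices in the group, out.shape would be (2 * N, 8):
# the per-device inputs are concatenated along the first dimension.
```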

| Class | Sub-class | PyTorch | MindSpore | Difference |
| --- | --- | --- | --- | --- |
| Param | Param 1 | tensor_list | - | PyTorch: the output list after AllGather. MindSpore does not have this parameter. |
|  | Param 2 | tensor | - | PyTorch: the tensor broadcast by the current process. MindSpore does not have this parameter. |
|  | Param 3 | group | group | No difference |
|  | Param 4 | async_op | - | PyTorch: the async op flag. MindSpore does not have this parameter. |