Function Differences with torch.distributed.all_reduce

torch.distributed.all_reduce

torch.distributed.all_reduce(
    tensor,
    op=<ReduceOp.SUM: 0>,
    group=None,
    async_op=False
)

For more information, see torch.distributed.all_reduce.

mindspore.ops.AllReduce

class mindspore.ops.AllReduce(
    op=ReduceOp.SUM,
    group=GlobalComm.WORLD_COMM_GROUP
)(input_x)

For more information, see mindspore.ops.AllReduce.

Differences

PyTorch: The inputs are the tensor tensor contributed by the current process, the AllReduce operation op, the communication group group, and the async op flag async_op. After the AllReduce operation, the result is written back to tensor in place. The return value is an async work handle if async_op=True, otherwise None.
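
A minimal PyTorch sketch of this behavior (not from the original page; it assumes the default process group has already been initialized, for example with dist.init_process_group, and that the script runs with one process per rank):

import torch
import torch.distributed as dist

# Assumes dist.init_process_group(...) has already been called.
tensor = torch.ones(2, 2) * dist.get_rank()   # each rank contributes its own values
work = dist.all_reduce(tensor, op=dist.ReduceOp.SUM, async_op=True)
work.wait()                                   # wait for the collective to complete
# tensor now holds the element-wise sum across all ranks, written back in place;
# all_reduce returned a work handle rather than the result.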

MindSpore: The input of this interface is a tensor input_x. The output is a tensor with the same shape as input_x, holding the result of the AllReduce operation configured by op within the communication group group. This interface does not currently support configuring async_op.
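
A corresponding MindSpore sketch (not from the original page; it assumes the job is launched with a distributed launcher such as mpirun and that mindspore.communication.init() can initialize the communication environment):

import numpy as np
import mindspore as ms
from mindspore import ops
from mindspore.communication import init

init()                                           # join the default world communication group
all_reduce = ops.AllReduce(op=ops.ReduceOp.SUM)  # group defaults to GlobalComm.WORLD_COMM_GROUP
input_x = ms.Tensor(np.ones([2, 2]).astype(np.float32))
output = all_reduce(input_x)                     # returns a new tensor with the same shape as input_x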

| Class | Sub-class | PyTorch | MindSpore | Difference |
| --- | --- | --- | --- | --- |
| Parameters | Parameter 1 | tensor | - | PyTorch: the input tensor; the result is written back to it. MindSpore does not have this parameter. |
| | Parameter 2 | op | op | No difference |
| | Parameter 3 | group | group | No difference |
| | Parameter 4 | async_op | - | PyTorch: the async op flag. MindSpore does not have this parameter. |
| Input | Single input | - | input_x | PyTorch: not applicable. MindSpore: the input tensor of AllReduce. |