```python
torch.distributed.all_reduce(
    tensor,
    op=ReduceOp.SUM,
    group=None,
    async_op=False
)
```
For more information, see torch.distributed.all_reduce.
```python
class mindspore.ops.AllReduce(
    op=ReduceOp.SUM,
    group=GlobalComm.WORLD_COMM_GROUP
)(input_x)
```
For more information, see mindspore.ops.AllReduce.
PyTorch: The inputs are the tensor held by the current process (`tensor`), the AllReduce operation (`op`), the communication group (`group`), and the async op flag (`async_op`). After the AllReduce operation, the result is written back into `tensor` in place. The return value is an async work handle if `async_op=True`, otherwise `None`.
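A minimal runnable sketch of the PyTorch call, assuming a 2-process job launched with `torchrun` (e.g. `torchrun --nproc_per_node=2 demo.py`) and the `gloo` backend for CPU tensors; both choices are illustrative, not required by the API:

```python
import torch
import torch.distributed as dist

# torchrun sets the rendezvous environment variables for us.
dist.init_process_group(backend="gloo")
rank = dist.get_rank()

tensor = torch.tensor([float(rank + 1)])  # rank 0 holds 1.0, rank 1 holds 2.0
dist.all_reduce(tensor, op=dist.ReduceOp.SUM)  # in place: tensor becomes 3.0 on both ranks
print(f"rank {rank}: {tensor}")

dist.destroy_process_group()
```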
MindSpore: The input of this interface is a single tensor, `input_x`. The output tensor has the same shape as `input_x` and holds the result of the AllReduce operation configured by `op` over the communication group `group`. This interface currently does not support configuring `async_op`.
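A corresponding MindSpore sketch; it assumes the processes are started with a MindSpore distributed launcher (e.g. `msrun`) so that `init()` can set up the world communication group. Note that the result is returned as a new tensor rather than written back into `input_x`:

```python
import numpy as np
import mindspore as ms
from mindspore import Tensor, ops
from mindspore.communication import init, get_rank

init()  # join the default (world) communication group
rank = get_rank()

all_reduce = ops.AllReduce(op=ops.ReduceOp.SUM)
input_x = Tensor(np.array([rank + 1.0]), ms.float32)
output = all_reduce(input_x)  # new tensor; input_x is unchanged
print(f"rank {rank}: {output}")
```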
Class | Sub-class | PyTorch | MindSpore | Difference |
---|---|---|---|---|
Param | Param 1 | tensor | - | PyTorch: the input tensor; the result is written back to it in place. MindSpore: does not have this parameter. |
 | Param 2 | op | op | No difference |
 | Param 3 | group | group | No difference |
 | Param 4 | async_op | - | PyTorch: the async op flag. MindSpore: does not have this parameter. |
Input | Single input | - | input_x | PyTorch: not applicable. MindSpore: the input tensor of AllReduce. |
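Since `async_op` is the one PyTorch parameter with no MindSpore counterpart here, a short PyTorch-only sketch of the asynchronous path (same launch assumptions as the earlier example):

```python
import torch
import torch.distributed as dist

dist.init_process_group(backend="gloo")

tensor = torch.ones(4)
# With async_op=True the call returns a work handle instead of None,
# so the reduction can overlap with other computation until wait().
work = dist.all_reduce(tensor, op=dist.ReduceOp.SUM, async_op=True)
work.wait()  # tensor now holds the element-wise sum across all ranks

dist.destroy_process_group()
```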