1.
Reviewed the operator documentation and ran the corresponding test example.
The test result meets the requirements, but the output format shown in the docstring does not match the actual output; the documented format should be updated to match the actual output.
_________________________________________________ [doctest] mindspore.ops.operations.sparse_ops.DenseToDenseSetOperation _________________________________________________
228 ``Ascend`` ``CPU``
229
230 Examples:
231 >>> from mindspore import Tensor
232 >>> from mindspore.ops import operations as P
233 >>> x1 = Tensor([[2, 2, 0], [2, 2, 1], [0, 2, 2]], dtype=mstype.int32)
234 >>> x2 = Tensor([[2, 2, 1], [0, 2, 0], [0, 1, 1]], dtype=mstype.int32)
235 >>> dtod=P.DenseToDenseSetOperation(set_operation="a-b",validate_indices=True)
236 >>> res=dtod(x1,x2)
237 >>> print(res)
Differences (unified diff with -expected +actual):
@@ -1,3 +1,4 @@
-(Tensor(shape=[3, 2], dtype=Int64, value=[[0, 0],[1, 0],[2, 0]]),
- Tensor(shape=[3], dtype=Int32, value= [0, 1, 2]),
- Tensor(shape=[2], dtype=Int64, value= [3, 1]))
+(Tensor(shape=[3, 2], dtype=Int64, value=
+[[0, 0],
+ [1, 0],
+ [2, 0]]), Tensor(shape=[3], dtype=Int32, value= [0, 1, 2]), Tensor(shape=[2], dtype=Int64, value= [3, 1]))
2.
Reviewed the operator documentation: the parameter set_operation does not document a default value,
but in practice a value must be specified.
def test_densetodensesetoperation_attr_set_operation_none():
input_list = []
input_list.append(Tensor(np.random.randint(-20, 20, (4, 8, 6)), dtype=mstype.int32))
input_list.append(Tensor(np.random.randint(-20, 20, (4, 8, 6)), dtype=mstype.int32))
fact = DenseToDenseSetOperationMock(attributes={"set_operation": ""}, inputs=input_list)
# with pytest.raises(ValueError):
> fact.forward_mindspore_impl()
test_densetodensesetoperation.py:741:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../share/ops/primitive/densetodensesetoperation_ops.py:45: in forward_mindspore_impl
out = net(*self.input_x)
../share/utils.py:185: in __call__
out = super().__call__(*args, **kwargs)
/root/archiconda3/envs/xuegao3.7/lib/python3.7/site-packages/mindspore/nn/cell.py:604: in __call__
raise err
/root/archiconda3/envs/xuegao3.7/lib/python3.7/site-packages/mindspore/nn/cell.py:600: in __call__
output = self._run_construct(cast_inputs, kwargs)
/root/archiconda3/envs/xuegao3.7/lib/python3.7/site-packages/mindspore/nn/cell.py:417: in _run_construct
output = self.construct(*cast_inputs, **kwargs)
../share/ops/primitive/densetodensesetoperation_ops.py:24: in construct
return self.dtdsetoperation(x1, x2)
/root/archiconda3/envs/xuegao3.7/lib/python3.7/site-packages/mindspore/ops/primitive.py:294: in __call__
return _run_op(self, self.name, args)
/root/archiconda3/envs/xuegao3.7/lib/python3.7/site-packages/mindspore/common/api.py:93: in wrapper
results = fn(*arg, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
obj = Prim[DenseToDenseSetOperation]<set_operation=, validate_indices=True, cust_aicpu=DenseToDenseSetOperation>, op_name = 'DenseToDenseSetOperation'
args = (Tensor(shape=[4, 8, 6], dtype=Int32, value=
[[[ -7, -3, 17, 3, 12, -8],
[ -7, 8, -16, 19, -4, -2],
[ ...3, -19],
...
[-16, 9, 14, 7, 14, -6],
[ 18, -1, -2, -7, 4, -6],
[ 3, -4, -16, -5, 5, -7]]]))
@_wrap_func
def _run_op(obj, op_name, args):
"""Single op execution function supported by ge in PyNative mode."""
> output = real_run_op(obj, op_name, args)
E ValueError: build/mindspore/merge/mindspore/core/ops_merge.cc:9243 DenseToDenseSetOperationInferShape] For DenseToDenseSetOperation, the attr set_operation must be any one of ['a-b','b-a','intersection','union'], but got .
3.
When the shapes of inputs x1 and x2 are unequal, the error message should be revised:
it should not claim the shape must equal some fixed value (the current message implies the rank must equal x2's rank), but rather that the shapes of the two inputs must be equal.
test_densetodensesetoperation_input_shape_rank_diff
def test_densetodensesetoperation_input_shape_rank_diff():
input_list = []
input_list.append(Tensor(np.random.randint(-20, 20, (3, 4, 3)), dtype=mstype.int64))
input_list.append(Tensor(np.random.randint(-20, 20, (3, 4, 3, 5)), dtype=mstype.int64))
fact = DenseToDenseSetOperationMock(attributes={"set_operation": "a-b"}, inputs=input_list)
# with pytest.raises(ValueError):
> fact.forward_mindspore_impl()
test_densetodensesetoperation.py:721:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../share/ops/primitive/densetodensesetoperation_ops.py:45: in forward_mindspore_impl
out = net(*self.input_x)
../share/utils.py:185: in __call__
out = super().__call__(*args, **kwargs)
/root/miniconda3/envs/xuegao3.7/lib/python3.7/site-packages/mindspore/nn/cell.py:573: in __call__
out = self.compile_and_run(*args)
/root/miniconda3/envs/xuegao3.7/lib/python3.7/site-packages/mindspore/nn/cell.py:952: in compile_and_run
self.compile(*inputs)
/root/miniconda3/envs/xuegao3.7/lib/python3.7/site-packages/mindspore/nn/cell.py:925: in compile
_cell_graph_executor.compile(self, *inputs, phase=self.phase, auto_parallel_mode=self._auto_parallel_mode)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <mindspore.common.api._CellGraphExecutor object at 0x7f62ea8d7a10>, obj = WrapOp<>, phase = 'train.1653128499055747328.140052793770736.28', do_convert = True
auto_parallel_mode = False
args = (Tensor(shape=[3, 4, 3], dtype=Int64, value=
[[[-11, 15, 16],
[ 19, -4, 7],
[-18, -6, -17],
[ 7, 18, -8...-5, -19, -14, -12, -13]],
[[-19, 13, 3, -12, -5],
[ -1, 11, 12, 18, 11],
[ 1, 17, 19, 17, 11]]]]))
def compile(self, obj, *args, phase='predict', do_convert=True, auto_parallel_mode=False):
"""
Compiles graph.
Args:
obj (Function/Cell): The function or cell instance need compile.
args (tuple): Function or cell input arguments.
phase (str): The name of compile phase. Default: 'predict'.
do_convert (bool): When set to True, convert ME graph to GE graph after compiling graph.
auto_parallel_mode: When set to True, use auto parallel mode to compile graph.
Return:
Str, the full phase of the cell.
Bool, if the graph has been compiled before, return False, else return True.
"""
obj.__parse_method__ = 'construct'
if not hasattr(obj, obj.__parse_method__):
raise AttributeError(
'The class {} dose not have method {}'.format(obj.__class__.__name__, obj.__parse_method__))
args_list = args
if hasattr(obj, "enable_tuple_broaden"):
self.enable_tuple_broaden = obj.enable_tuple_broaden
self._graph_executor.set_enable_tuple_broaden(self.enable_tuple_broaden)
key = self._graph_executor.generate_arguments_key(args_list, self.enable_tuple_broaden)
obj.arguments_key = str(key)
phase = phase + '.' + str(obj.create_time) + '.' + str(id(obj)) + '.' + obj.arguments_key
if phase in obj.compile_cache and self.has_compiled(phase):
logger.debug("%r graph has existed.", phase)
return phase, False
obj.check_names()
_check_full_batch()
self._set_dataset_mode(args_list)
self._set_compile_cache_dep_files(phase)
is_sink_mode = args and isinstance(args[0], Tensor) and args[0].virtual_flag
if auto_parallel_mode and _need_to_full() and not is_sink_mode and obj.auto_parallel_compile_and_run():
args_list = _to_full_tensor(args, _get_device_num(), _get_global_rank())
enable_ge = context.get_context("enable_ge")
self._graph_executor.set_weights_values(obj.parameters_dict())
> result = self._graph_executor.compile(obj, args_list, phase, self._use_vm_mode())
E ValueError: mindspore/core/utils/check_convert_utils.cc:398 CheckInteger] For primitive[DenseToDenseSetOperation], the x1_rank and x2 rank must be equal to 4, but got 3.
E The function call stack (See file '/home/zhangxuebao/MindSporeTest/operations/rank_0/om/analyze_fail.dat' for more details):
E # 0 In file /home/zhangxuebao/MindSporeTest/share/ops/primitive/densetodensesetoperation_ops.py(24)
E return self.dtdsetoperation(x1, x2)
Please assign a maintainer to check this issue.
@xuebao_zhang
Appearance & Root Cause
Reviewed the operator documentation and ran the corresponding test example.
The test result meets the requirements, but the output format shown in the docstring does not match the actual output; the documented format should be updated to match the actual output.
Fix Solution
Updated the corresponding test example to match the actual output.
Self Validation
The example has now been updated to:
Examples:
>>> from mindspore import Tensor
>>> from mindspore.ops import operations as P
>>> from mindspore.common import dtype as mstype
>>> x1 = Tensor([[2, 2, 0], [2, 2, 1], [0, 2, 2]], dtype=mstype.int32)
>>> x2 = Tensor([[2, 2, 1], [0, 2, 0], [0, 1, 1]], dtype=mstype.int32)
>>> dtod=P.DenseToDenseSetOperation(set_operation="a-b",validate_indices=True)
>>> res=dtod(x1,x2)
>>> print(res[0])
[[0 0]
[1 0]
[2 0]]
>>> print(res[1])
[0 1 2]
>>> print(res[2])
[3 1]
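For reference, the "a-b" result above can be reproduced with a plain NumPy sketch. This is a hypothetical re-implementation of the row-wise set difference, not the operator's actual AICPU kernel: each row along the last axis is treated as a set, and the result is emitted in the operator's COO-style (indices, values, dense_shape) layout.

```python
import numpy as np

def dense_set_difference(x1, x2):
    """Row-wise set difference (a-b) over the last axis, returning
    COO-style (indices, values, dense_shape) like the operator's output."""
    indices, values = [], []
    max_len = 0
    for row, (a, b) in enumerate(zip(x1, x2)):
        # Treat each row as a set; sort for a deterministic order.
        diff = sorted(set(a.tolist()) - set(b.tolist()))
        max_len = max(max_len, len(diff))
        for col, v in enumerate(diff):
            indices.append([row, col])
            values.append(v)
    dense_shape = [len(x1), max_len]
    return np.array(indices), np.array(values), np.array(dense_shape)

x1 = np.array([[2, 2, 0], [2, 2, 1], [0, 2, 2]], dtype=np.int32)
x2 = np.array([[2, 2, 1], [0, 2, 0], [0, 1, 1]], dtype=np.int32)
idx, vals, shape = dense_set_difference(x1, x2)
print(idx)    # [[0 0], [1 0], [2 0]]
print(vals)   # [0 1 2]
print(shape)  # [3 1]
```

Row 0 gives {0, 2} - {1, 2} = {0}, row 1 gives {1, 2} - {0, 2} = {1}, and row 2 gives {0, 2} - {0, 1} = {2}, which matches the corrected docstring output above.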
Related PR
#I48O5K: [Crowdsourcing][Compute - AICPU Development] DenseToDenseSetOperation
Appearance & Root Cause
Reviewed the operator documentation: the parameter set_operation does not document a default value,
but in practice a value must be specified.
Fix Solution
The original requirement description did not specify a default for this attribute; it has now been given the default value "a-b".
Self Validation
If the attribute is not provided, it now defaults to "a-b".
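A minimal sketch of the defaulted attribute check, consistent with the ValueError in the log above (hypothetical Python-side validation for illustration; the real check lives in the operator's infer code):

```python
VALID_SET_OPERATIONS = ("a-b", "b-a", "intersection", "union")

def check_set_operation(set_operation="a-b"):
    """Validate the set_operation attribute, defaulting to 'a-b'."""
    if set_operation not in VALID_SET_OPERATIONS:
        # Mirrors the error reported by DenseToDenseSetOperationInferShape.
        raise ValueError(
            "For DenseToDenseSetOperation, the attr set_operation must be "
            "any one of %s, but got '%s'."
            % (list(VALID_SET_OPERATIONS), set_operation))
    return set_operation

check_set_operation()           # returns the default "a-b"
check_set_operation("union")    # a valid explicit value passes through
# check_set_operation("")       # would raise ValueError, as in the log above
```

With the default in place, the empty-string case from test_densetodensesetoperation_attr_set_operation_none only occurs when the caller passes "" explicitly.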
Related PR
#I48O5K: [Crowdsourcing][Compute - AICPU Development] DenseToDenseSetOperation
Appearance & Root Cause
When the shapes of inputs x1 and x2 are unequal, the error message should be revised:
it should not claim the shape must equal some fixed value (the current message implies the rank must equal x2's rank), but rather that the shapes of the two inputs must be equal.
Fix Solution
Changed the check to compare the two rank values directly and raise a hand-written error message.
Self Validation
The error message now reads:
"For DenseToDenseSetOperation, the rank of input x1 and x2 must be equal, "
"but got x1_rank [%d] and x2_rank [%d]"
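The corrected check can be exercised with a small stand-alone sketch (a hypothetical Python mirror of the C++ infer-shape check, not the actual MindSpore source):

```python
def check_ranks_equal(x1_rank, x2_rank):
    """Raise the corrected error message when the two input ranks differ."""
    if x1_rank != x2_rank:
        # Reports both ranks instead of treating x2's rank as a fixed target.
        raise ValueError(
            "For DenseToDenseSetOperation, the rank of input x1 and x2 "
            "must be equal, but got x1_rank [%d] and x2_rank [%d]"
            % (x1_rank, x2_rank))

check_ranks_equal(3, 3)    # equal ranks pass silently
# check_ranks_equal(3, 4)  # would raise, naming both ranks
```

For the rank-3 vs rank-4 inputs in test_densetodensesetoperation_input_shape_rank_diff, this reports "got x1_rank [3] and x2_rank [4]" rather than the misleading "must be equal to 4, but got 3".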
Related PR
#I48O5K: [Crowdsourcing][Compute - AICPU Development] DenseToDenseSetOperation
def test_densetodensesetoperation_input_default():
x1 = Tensor(np.random.randn(57, 4, 4, 8, 8).astype(np.int32))
x2 = Tensor(np.random.randn(57, 4, 4, 8, 7).astype(np.int32))
net1 = DenseToDenseSetOperation(set_operation="a-b", validate_indices=True)
fact1 = AnyNetFactory(net=net1)
out1 = fact1(x1, x2)
net2 = P.DenseToDenseSetOperation()
fact2 = AnyNetFactory(net=net2)
out2 = fact2(x1, x2)
len_out1 = len(out1)
for i in range(len_out1):
assert np.all(out1[i].asnumpy() == out2[i].asnumpy())
Regression test passed.