Thanks goes to these wonderful people:
changzherui,chenfei_mindspore,chenjianping,chenkang,chenweifeng,chujinjin,fangwenyi,GuoZhibin,guozhijian,hangq,hanhuifeng,haozhang,hedongdong,尤澍,zhoufeng,代宇鑫
Contributions of any kind are welcome!
[BETA] JIT Fallback supports variable scenarios. In static graph mode, JIT Fallback supports returning Dict and Scalar types, setting properties of non-Parameter objects, partial in-place modification of List, and third-party libraries such as NumPy (a minimal sketch of the Dict return support follows this feature list). It also supports operations on user-defined classes and lets Python basic operators and built-in functions use more data types. It is compatible with features such as control flow, side effects, and automatic differentiation. For more details, please refer to Static Graph Syntax Support.
[BETA] In static graph mode, the error message for using undefined variables in control flow scenes is improved. Variables used in if, while, and for control flow branches must be initialized and defined before the control flow.
[STABLE] Add the ReWrite module, which supports modifying multiple networks in batches based on customized rules.
[BETA] Add the optim_ex module for optimizers, which extends the current functionality, supports parameter grouping for every parameter in the optimizer, and supports modifying parameters by assignment during training.
[STABLE] Optimize the PyTorch and MindSpore API mapping table, specifying the differences between APIs in functionality, parameters, inputs, outputs, and specialized cases.
[STABLE] Support offloading parameters or intermediate activations to CPU or NVMe storage during training. Users can enable this offload feature through the context configuration to scale up the trainable model size.
[STABLE] Enhanced automatic parallel capability including:
The performance of the automatically generated strategy for typical networks is no less than 90% of the default configuration.
Support 3D hybrid parallel training: automatic operator-level strategy generation combined with manually configured pipeline partition.
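A minimal sketch of the Dict-return support mentioned in the JIT Fallback item above, assuming graph mode and the mindspore.jit decorator (the function and values are illustrative):

```python
import numpy as np
import mindspore as ms
from mindspore import Tensor

ms.set_context(mode=ms.GRAPH_MODE)

@ms.jit
def describe(x):
    # Returning a Python dict from a graph-mode function relies on the
    # JIT Fallback support for Dict return values described above.
    return {"mean": x.mean(), "max": x.max()}

out = describe(Tensor(np.array([1.0, 2.0, 3.0]), ms.float32))
print(out)
```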
- mindspore.dataset.GraphData, mindspore.dataset.Graph, mindspore.dataset.InMemoryGraphDataset, and mindspore.dataset.ArgoverseDataset are no longer evolved and are deprecated. Use MindSpore Graph Learning for related functional replacements. When replacing networks in model repositories that use these APIs, please refer to GCN and GAT.
- mindspore.set_context adds the jit_syntax_level option, which is used to set the JIT syntax support level. For more details, please refer to set_context.
- The model.infer_predict_layout interface has a new parameter skip_backend_compile with a default value of False. Set it to True to skip the backend compilation process and obtain the parameter slicing strategy.
- mindspore.ops.ApplyAdamWithAmsgradV2: it is recommended to call this operator through the API mindspore.nn.Adam.
- mindspore.ops.UpsampleTrilinear3D: it is recommended to call this operator through the API mindspore.ops.interpolate.
- mindspore.ops.UpsampleNearest3D: it is recommended to call this operator through the API mindspore.ops.interpolate.
- mindspore.ops.ScatterNonAliasingAdd: it is recommended to use the operator primitive mindspore.ops.TensorScatterAdd as a replacement.
- Interface name: mindspore.nn.Dense, mindspore.nn.Conv1d, mindspore.nn.Conv1dTranspose, mindspore.nn.Conv2d, mindspore.nn.Conv2dTranspose, mindspore.nn.Conv3d, mindspore.nn.Conv3dTranspose

  Changes: Change the initialization parameter strategy. The default value of weight_init is changed from "normal" to None, and the default value of bias_init is changed from "zeros" to None.

  Description: The default initialization method for weights has been changed from "normal" to the internal HeUniform initialization. The default initialization method for bias has been changed from "zeros" to the internal Uniform initialization.
| Original interface | v2.1 interface |
|---|---|
| mindspore.nn.Dense(in_channels, out_channels, weight_init='normal', bias_init='zeros', has_bias=True, activation=None) | mindspore.nn.Dense(in_channels, out_channels, weight_init=None, bias_init=None, has_bias=True, activation=None) |
| mindspore.nn.Conv1d(in_channels, out_channels, kernel_size, stride=1, pad_mode='same', padding=0, dilation=1, group=1, has_bias=False, weight_init='normal', bias_init='zeros') | mindspore.nn.Conv1d(in_channels, out_channels, kernel_size, stride=1, pad_mode='same', padding=0, dilation=1, group=1, has_bias=False, weight_init=None, bias_init=None) |
| mindspore.nn.Conv1dTranspose(in_channels, out_channels, kernel_size, stride=1, pad_mode='same', padding=0, dilation=1, group=1, has_bias=False, weight_init='normal', bias_init='zeros') | mindspore.nn.Conv1dTranspose(in_channels, out_channels, kernel_size, stride=1, pad_mode='same', padding=0, dilation=1, group=1, has_bias=False, weight_init=None, bias_init=None) |
| mindspore.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, pad_mode='same', padding=0, dilation=1, group=1, has_bias=False, weight_init='normal', bias_init='zeros', data_format='NCHW') | mindspore.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, pad_mode='same', padding=0, dilation=1, group=1, has_bias=False, weight_init=None, bias_init=None, data_format='NCHW') |
| mindspore.nn.Conv2dTranspose(in_channels, out_channels, kernel_size, stride=1, pad_mode='same', padding=0, output_padding=0, dilation=1, group=1, has_bias=False, weight_init='normal', bias_init='zeros') | mindspore.nn.Conv2dTranspose(in_channels, out_channels, kernel_size, stride=1, pad_mode='same', padding=0, output_padding=0, dilation=1, group=1, has_bias=False, weight_init=None, bias_init=None) |
| mindspore.nn.Conv3d(in_channels, out_channels, kernel_size, stride=1, pad_mode='same', padding=0, dilation=1, group=1, has_bias=False, weight_init='normal', bias_init='zeros', data_format='NCDHW') | mindspore.nn.Conv3d(in_channels, out_channels, kernel_size, stride=1, pad_mode='same', padding=0, dilation=1, group=1, has_bias=False, weight_init=None, bias_init=None, data_format='NCDHW') |
| mindspore.nn.Conv3dTranspose(in_channels, out_channels, kernel_size, stride=1, pad_mode='same', padding=0, dilation=1, group=1, output_padding=0, has_bias=False, weight_init='normal', bias_init='zeros', data_format='NCDHW') | mindspore.nn.Conv3dTranspose(in_channels, out_channels, kernel_size, stride=1, pad_mode='same', padding=0, dilation=1, group=1, output_padding=0, has_bias=False, weight_init=None, bias_init=None, data_format='NCDHW') |
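If the previous default initialization is still wanted, the old values can be passed explicitly. A minimal sketch (the layer sizes are illustrative):

```python
import mindspore.nn as nn

# Passing the former defaults explicitly restores the pre-2.1 behavior:
# weights initialized with "normal" and bias with "zeros".
dense = nn.Dense(16, 8, weight_init="normal", bias_init="zeros")
conv = nn.Conv2d(3, 16, kernel_size=3, weight_init="normal", bias_init="zeros")
```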
- mindspore.ops.AdaptiveAvgPool2D
- mindspore.ops.BatchToSpaceNDV2
- mindspore.ops.CeLU
- mindspore.ops.ExtractVolumePatches
- mindspore.ops.FFTWithSize
- mindspore.ops.FillDiagonal
- mindspore.ops.FractionalMaxPool3DWithFixedKsize
- mindspore.ops.Im2Col
- mindspore.ops.MaskedScatter
- mindspore.ops.MatrixBandPart
- mindspore.ops.MatrixInverse
- mindspore.ops.MaxPoolWithArgmaxV2
- mindspore.ops.Ormqr
- mindspore.ops.RandpermV2
- mindspore.ops.ResizeBicubic
- mindspore.ops.Triu
- mindspore.ops.Zeta

Interface: mindspore.ops.MultitypeFuncGraph

Change: The interface parameter doc_url was used as a test feature in MindSpore 2.0.0.rc1. After the optimization in MindSpore 2.0.0, users no longer need to configure this parameter, so it is deleted in MindSpore 2.0.0.

| Original Interface | Interface v2.0.0 |
|---|---|
| mindspore.ops.MultitypeFuncGraph(name, read_value=False, doc_url="") | mindspore.ops.MultitypeFuncGraph(name, read_value=False) |
Interface: mindspore.set_context(auto_tune_mode="GA,RL")

Change: The AutoTune tool has been deprecated; the auto_tune_mode option is deleted. New tuning tools will be planned in the future.

Interface: mindspore.set_context(mode=PYNATIVE_MODE)

Change: The default value is changed from GRAPH_MODE to PYNATIVE_MODE.

Description: If the running mode is not set and graph mode is needed, set it as follows: mindspore.set_context(mode=GRAPH_MODE).

| Original Interface | Interface v2.0.0-rc1 |
|---|---|
| mindspore.set_context(mode=GRAPH_MODE) | mindspore.set_context(mode=PYNATIVE_MODE) |

Interface: mindspore.train.Model.train

Change: The default value of dataset_sink_mode is changed from True to False.

Description: If dataset_sink_mode is not set and data sinking is needed, set it as follows: Model.train(dataset_sink_mode=True).

| Original Interface | Interface v2.0.0-rc1 |
|---|---|
| Model.train(dataset_sink_mode=True) | Model.train(dataset_sink_mode=False) |

Interface: mindspore.export

Change: The file_format parameter is changed from AIR to having no default value.

Description: If file_format was not set originally, it now needs to be set explicitly, for example: mindspore.export(net, *inputs, file_name, file_format="AIR", **kwargs).

| Original Interface | Interface v2.0.0-rc1 |
|---|---|
| mindspore.export(net, *inputs, file_name, file_format="AIR", **kwargs) | mindspore.export(net, *inputs, file_name, file_format, **kwargs) |
Interface: mindspore.ops.norm

Change: The ord parameter function is extended to support multiple forms.

| Original Interface | Interface v2.0.0-rc1 |
|---|---|
| ops.norm(input_x, axis, p=2, keep_dims=False, epsilon=1e-12) >>> # Example: >>> input = Tensor(np.array([[[1.0, 2.0], [3.0, 4.0]], ... [[5.0, 6.0], [7.0, 8.0]]]).astype(np.float32)) >>> output = ops.norm(input, [0, 1], p=2) | ops.norm(A, ord=None, dim=None, keepdim=False, *, dtype=None) >>> # Example: >>> input = Tensor(np.array([[[1.0, 2.0], [3.0, 4.0]], ... [[5.0, 6.0], [7.0, 8.0]]]).astype(np.float32)) >>> output = ops.norm(input, ord=2, dim=(0, 1)) |

Interface: mindspore.Tensor.norm

Change: The ord parameter function is extended to support multiple forms.

Description: For details, see the example of ops.norm.

| Original Interface | Interface v2.0.0-rc1 |
|---|---|
| Tensor.norm(axis, p=2, keep_dims=False, epsilon=1e-12) | Tensor.norm(ord=None, dim=None, keepdim=False, *, dtype=None) |

Interface: mindspore.ops.dropout

Change: The seed0 and seed1 parameters are deleted and the seed=None parameter is added. Instead of returning Tensors and masks, only Tensors are returned. The input parameter training=True is added.

| Original Interface | Interface v2.0.0-rc1 |
|---|---|
| ops.dropout(x, p=0.5, seed0=0, seed1=0) >>> # Example: >>> input = Tensor(((20, 16), (50, 50)), ... mindspore.float32) >>> output, mask = dropout(x, p=0.5) | ops.dropout(input, p=0.5, training=True, seed=None) >>> # Example: >>> input = Tensor(((20, 16), (50, 50)), ... mindspore.float32) >>> output = ops.dropout(input, p=0.5, training=True) |

Interface: mindspore.ops.dropout2d

Change: The return value is changed from Tensor and mask to Tensor only. The input parameter training=True is added.

| Original Interface | Interface v2.0.0-rc1 |
|---|---|
| ops.dropout2d(x, p=0.5) >>> # Example: >>> input = Tensor(np.ones([2, 1, 2, 3]), ... mindspore.float32) >>> output, mask = dropout2d(input, 0.5) | ops.dropout2d(input, p=0.5, training=True) >>> # Example: >>> input = Tensor(np.ones([2, 1, 2, 3]), ... mindspore.float32) >>> output = ops.dropout2d(input, 0.5, training=True) |

Interface: mindspore.ops.dropout3d

Change: The return value is changed from Tensor and mask to Tensor only. The input parameter training=True is added.

| Original Interface | Interface v2.0.0-rc1 |
|---|---|
| ops.dropout3d(x, p=0.5) >>> # Example: >>> input = Tensor(np.ones([2, 1, 2, 3]), ... mindspore.float32) >>> output, mask = dropout3d(input, 0.5) | ops.dropout3d(input, p=0.5, training=True) >>> # Example: >>> input = Tensor(np.ones([2, 1, 2, 3]), ... mindspore.float32) >>> output = ops.dropout3d(input, 0.5, training=True) |
Interface: mindspore.ops.std

Change: The interface is refactored, and its usage is more consistent with user habits.

Description: If the parameter unbiased has been set, use the following replacement: unbiased=False -> ddof=0, unbiased=True -> ddof=1.

| Original Interface | Interface v2.0.0-rc1 |
|---|---|
| ops.std(input_x, axis=(), unbiased=True, keep_dims=False) | ops.std(input, axis=None, ddof=0, keepdims=False) |

Interface: mindspore.load_param_into_net

Change: Parameters that are not loaded in the ckpt are added as return values.

| Original Interface | Interface v2.0.0-rc1 |
|---|---|
| net_param = load_param_into_net() | net_param, ckpt_param = load_param_into_net() |

Interface: mindspore.nn.BCELoss

Change: The default value of reduction is changed from 'none' to 'mean'.

| Original Interface | Interface v2.0.0-rc1 |
|---|---|
| BCELoss(weight=None, reduction='none') >>> # Example: >>> weight = Tensor(np.array([[1.0, 2.0, 3.0], ... [4.0, 3.3, 2.2]]), ... mindspore.float32) >>> loss = nn.BCELoss(weight=weight, reduction='mean') >>> logits = Tensor(np.array([[0.1, 0.2, 0.3], ... [0.5, 0.7, 0.9]]), ... mindspore.float32) >>> labels = Tensor(np.array([[0, 1, 0], [0, 0, 1]]), ... mindspore.float32) >>> output = loss(logits, labels) >>> print(output) >>> 1.8952923 | BCELoss(weight=None, reduction='mean') >>> # Example: >>> weight = Tensor(np.array([[1.0, 2.0, 3.0], ... [4.0, 3.3, 2.2]]), ... mindspore.float32) >>> loss = nn.BCELoss(weight=weight) >>> logits = Tensor(np.array([[0.1, 0.2, 0.3], ... [0.5, 0.7, 0.9]]), ... mindspore.float32) >>> labels = Tensor(np.array([[0, 1, 0], [0, 0, 1]]), ... mindspore.float32) >>> output = loss(logits, labels) >>> print(output) >>> 1.8952923 |
Interface: mindspore.ops.split

Change: The interface is refactored and is now easier to use. The order of the second and third parameters is adjusted, and the split_size_or_sections function is modified and extended.

| Original Interface | Interface v2.0.0-rc1 |
|---|---|
| ops.split(input_x, axis=0, output_num=1) >>> # Example: >>> input = Tensor(np.array([[1, 1, 1, 1], [2, 2, 2, 2]]), ... mindspore.int32) >>> output = ops.split(input, axis=1, output_num=4) | ops.split(tensor, split_size_or_sections, axis=0) >>> # Example: >>> input = Tensor(np.array([[1, 1, 1, 1], [2, 2, 2, 2]]), ... mindspore.int32) >>> output = ops.split(input, split_size_or_sections=1, axis=1) |

Interface: mindspore.Tensor.split

Change: The interface is refactored and is now easier to use. The positions of the two parameters are adjusted, and the split_size_or_sections function is modified and extended.

Description: For details, see the example of ops.split.

| Original Interface | Interface v2.0.0-rc1 |
|---|---|
| Tensor.split(axis=0, output_num=1) | Tensor.split(split_size_or_sections, axis=0) |

Interface: mindspore.ops.pad

Change: The parameter name paddings is changed to padding, and the mode and value functions are added.

| Original Interface | Interface v2.0.0-rc1 |
|---|---|
| ops.pad(input_x, paddings) >>> # Example: >>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], ... [0.4, 0.5, -3.2]]), ... mindspore.float32) >>> paddings = ((1, 2), (2, 1)) >>> output = ops.pad(input_x, paddings) | ops.pad(input_x, padding, mode='constant', value=None) >>> # Example: >>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], ... [0.4, 0.5, -3.2]]), ... mindspore.float32) >>> paddings = (2, 1, 1, 2) >>> output = ops.pad(input_x, paddings) |

Interface: mindspore.ops.meshgrid

Change: The input parameter is changed from inputs to *inputs.

| Original Interface | Interface v2.0.0-rc1 |
|---|---|
| ops.meshgrid(inputs, indexing='xy') >>> # Example: >>> x = Tensor(np.array([1, 2, 3, 4]).astype(np.int32)) >>> y = Tensor(np.array([5, 6, 7]).astype(np.int32)) >>> z = Tensor(np.array([8, 9, 0, 1, 2]).astype(np.int32)) >>> output = ops.meshgrid((x, y, z), indexing='xy') | ops.meshgrid(*inputs, indexing='xy') >>> # Example: >>> x = Tensor(np.array([1, 2, 3, 4]).astype(np.int32)) >>> y = Tensor(np.array([5, 6, 7]).astype(np.int32)) >>> z = Tensor(np.array([8, 9, 0, 1, 2]).astype(np.int32)) >>> output = ops.meshgrid(x, y, z, indexing='xy') |
Interface: mindspore.ops.max

Change: The order of the return values is exchanged, from "index, value" to "value, index".

| Original Interface | Interface v2.0.0-rc1 |
|---|---|
| ops.max(x, axis=0, keep_dims=False) >>> # Example: >>> input = Tensor(np.array([0.0, 0.4, 0.6, 0.7, 0.1]), ... mindspore.float32) >>> index, output = ops.max(input) >>> print(index, output) >>> 3 0.7 | ops.max(input, axis=None, keepdims=False, *, initial=None, where=True, return_indices=False) >>> # Example: >>> input = Tensor(np.array([0.0, 0.4, 0.6, 0.7, 0.1]), ... mindspore.float32) >>> output, index = ops.max(input, axis=0) >>> print(output, index) |

Interface: mindspore.ops.min

Change: The order of the return values is exchanged, from "index, value" to "value, index".

| Original Interface | Interface v2.0.0-rc1 |
|---|---|
| ops.min(x, axis=0, keep_dims=False) >>> # Example: >>> input = Tensor(np.array([0.0, 0.4, 0.6, 0.7, 0.1]), ... mindspore.float32) >>> index, output = ops.min(input) >>> 0 0.0 | ops.min(input, axis=None, keepdims=False, *, initial=None, where=True, return_indices=False) >>> # Example: >>> input = Tensor(np.array([0.0, 0.4, 0.6, 0.7, 0.1]), ... mindspore.float32) >>> output, index = ops.min(input, keepdims=True) >>> 0.0 0 |

Interface: mindspore.ops.random_gamma

Change: The seed2 parameter is deleted and the default of seed is changed from 0 to None. The framework behavior is unified and complies with actual application scenarios and user habits.

| Original Interface | Interface v2.0.0-rc1 |
|---|---|
| ops.random_gamma(shape, alpha, seed=0, seed2=0) | ops.random_gamma(shape, alpha, seed=None) |

Interface: mindspore.ops.standard_laplace

Change: The seed2 parameter is deleted and the default of seed is changed from 0 to None. The framework behavior is unified and complies with actual application scenarios and user habits.

| Original Interface | Interface v2.0.0-rc1 |
|---|---|
| ops.standard_laplace(shape, seed=0, seed2=0) | ops.standard_laplace(shape, seed=None) |

Interface: mindspore.ops.standard_normal

Change: The seed2 parameter is deleted and the default of seed is changed from 0 to None. The framework behavior is unified and complies with actual application scenarios and user habits.

| Original Interface | Interface v2.0.0-rc1 |
|---|---|
| ops.standard_normal(shape, seed=0, seed2=0) | ops.standard_normal(shape, seed=None) |

Interface: mindspore.ops.bernoulli

Change: The default value of seed is changed from -1 to None, matching actual application scenarios.

| Original Interface | Interface v2.0.0-rc1 |
|---|---|
| ops.bernoulli(x, p=0.5, seed=-1) | ops.bernoulli(input, p=0.5, seed=None) |
Interface: mindspore.data_sink

Change: The steps parameter is deleted, the parameter name jit is changed to jit_config, and the new input_signature parameter is added. Usability is improved to meet the requirements of actual application scenarios.

| Original Interface | Interface v2.0.0-rc1 |
|---|---|
| mindspore.data_sink(fn, dataset, steps, sink_size=1, jit=False) | mindspore.data_sink(fn, dataset, sink_size=1, jit_config=None, input_signature=None) |
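Since the steps parameter is removed, the caller now controls how many sink steps run. A minimal usage sketch of the new signature (the training function and the tiny dataset below are illustrative placeholders):

```python
import numpy as np
import mindspore as ms
import mindspore.dataset as ds

def train_step(data, label):
    # Placeholder computation standing in for a real forward/backward step.
    return data.sum() + label.sum()

# A tiny in-memory dataset yielding (data, label) pairs.
dataset = ds.NumpySlicesDataset(
    {"data": np.ones((8, 4), np.float32), "label": np.zeros((8, 1), np.float32)},
    shuffle=False)

sink_fn = ms.data_sink(train_step, dataset, sink_size=1)
for _ in range(4):  # the caller decides how many sink steps to run
    out = sink_fn()
```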
Interface: mindspore.ops.conv2d

Change: Extend the interface function: add the bias parameter and modify the parameter names and parameter order.

| Original Interface | Interface v2.0.0-rc1 |
|---|---|
| conv2d(inputs, weight, pad_mode="valid", padding=0, stride=1, dilation=1, group=1) | conv2d(input, weight, bias=None, stride=1, pad_mode="valid", padding=0, dilation=1, groups=1) |

Interface: mindspore.dataset.vision.Pad

Change: Adjust the input parameter padding of Pad, RandomCrop, and RandomCropWithBbox. When the padding input is a sequence of length 2, the first value now fills the left/right boundaries and the second value fills the top/bottom boundaries; previously, the first value filled the left/top boundaries and the second value filled the right/bottom boundaries.

Description: A padding parameter of size 2 is not compatible with the behavior of earlier versions. The padding parameter needs to be given explicitly as (left, right, top, bottom).

| Original Interface | Interface v2.0.0-rc1 |
|---|---|
| mindspore.dataset.vision.Pad(padding=(1,2)) Indicates that the left/upper part of the image is filled with 1 pixel, and the right/lower part is filled with 2 pixels. | mindspore.dataset.vision.Pad(padding=(1,2,1,2)) Indicates that the left/upper part of the image is filled with 1 pixel, and the right/lower part is filled with 2 pixels. |

Interface: mindspore.dataset.Dataset.map

Change: Delete the column_order parameter. In most cases, output_columns and column_order have the same value, so column_order does not need to be passed. To adjust the order of data columns, use mindspore.dataset.Dataset.project.

| Original Interface | Interface v2.0.0-rc1 |
|---|---|
| >>> dataset = dataset.map(operations=[transforms], ... input_columns=["column_a"], ... output_columns=["column_b", "column_c"], ... column_order=["column_c", "column_b"]) | >>> dataset = dataset.map(operations=[transforms], ... input_columns=["column_a"], ... output_columns=["column_b", "column_c"]) >>> dataset = dataset.project(["column_c", "column_b"]) |

Interface: mindspore.dataset.Dataset.batch

Change: Delete the column_order parameter. In most cases, output_columns and column_order have the same value, so column_order does not need to be passed. To adjust the order of data columns, use mindspore.dataset.Dataset.project.

| Original Interface | Interface v2.0.0-rc1 |
|---|---|
| >>> dataset = dataset.batch(batch_size=4, ... input_columns=["column_a"], ... output_columns=["column_b", "column_c"], ... column_order=["column_c", "column_b"]) | >>> dataset = dataset.batch(batch_size=4, ... input_columns=["column_a"], ... output_columns=["column_b", "column_c"]) >>> dataset = dataset.project(["column_c", "column_b"]) |

Interface: mindspore.dataset.Dataset.batch

Change: Split the batch method into two methods: batch and padded_batch. The pad_info parameter is moved from the batch method to the padded_batch method.

Description: To use the pad_info parameter, use the padded_batch method instead.

| Original Interface | Interface v2.0.0-rc1 |
|---|---|
| >>> dataset = dataset.batch(batch_size=4, ... drop_remainder=True, pad_info=...) | >>> dataset = dataset.padded_batch(batch_size=4, ... drop_remainder=True, pad_info=...) |
Thanks goes to these wonderful people:
alashkari,anzhengqi,archer2049,B.L.LAN,baihuawei,bichaoyang,BJ-WANG,Bokai Li,Brian-K,caifubi,caiyimeng,cathwong,changzherui,ChenDonYY,chenfei_mindspore,chengang,chengbin,chenhaozhe,chenjianping,chenkang,chenweifeng,chuht,chujinjin,davidanugraha,DavidFFFan,DeshiChen,douzhixing,emmmmtang,Erpim,Ethan,fangwenyi,fangzehua,fangzhou0329,fary86,fengyixing,gaoshuanglong,Gaoxiong,gaoyong10,gengdongjie,gongdaguo1,Greatpan,GuoZhibin,guozhijian,hangq,hanhuifeng,haozhang,hedongdong,Henry Shi,heterogeneous_to_backoff_2_0,huangbingjian,huanghui,huangxinjing,hujiahui8,hujingsong,huoxinyou,jachua,jiahongQian,jianghui58,jiangzhenguang,jiaorui,jiaoy1224,jijiarong,jjfeing,JoeyLin,json,JuiceZ,jxl,kairui_kou,KevinYi,kisnwang,KXiong,laiyongqiang,lanzhineng,liangchenghui,liangzelang,LiangZhibo,lianliguang,lichen,ligan,lijunbin,limingqi107,ling,linqingke,liubuyu,liuchao,liuchuting,liujunzhu,liuluobin,liutongtong9,liuyang811,lixiao,liyan2022,liyejun,liyuxia,looop5,luochao60,luojianing,luoyang,luoyuan,lyqlola,maning202007,maoyaomin,Margaret_wangrui,mayadong,MaZhiming,melody,mengyuanli,michaelzhu_70ab,Mohammad Motallebi,moran,NaCN,nomindcarry,OwenSec,panfengfeng,panshaowu,panzhihui,pkuliuliu,qinzheng,qiuzhongya,qujianwei,r1chardf1d0,Renyuan Zhang,RobinGrosman,shaojunsong,shenwei41,Soaringfish,tangdezhi_123,tanghuikang,tan-wei-cheng,TinaMengtingZhang,TronZhang,TuDouNi,VectorSL,wang_ziqi,wanghenchang,wangnan39,wangpingan,wangshaocong,wangshengnan123,wangtongyu6,weichaoran,wind-zyx,wqx,wtcheng,wujueying,wYann,XianglongZeng,xiaohanzhang,xiaotianci,xiaoyao,XinDu,xulei,xumengjuan1,xupan,xwkgch,yanghaoran,yangluhang,yangruoqi713,yangshuo,yangsijia,yangzhenzhang,yanzhenxiang2020,Yanzhi_YI,yao_yf,yefeng,yeyunpeng2020,Yi_zhang95,yide12,YijieChen,YingLai Lin,YingtongHu,youshu,yuchaojie,yuedongli,YuJianfeng,zangqx,ZengZitao,zhangbuxue,zhangdanyang,zhangdong,zhangfanghe,zhangqi,zhangqinghua,zhangyanhui,zhangyinxia,zhangyongxian,zhangzhaoju,zhanzhan,zhengzuohe,ZhidanLiu,zhixinaa,zhoufeng,zhouyaqiang0,zhuguodong,zhupuxu,zhuyuxiao,zichun_ye,zjun,zlq2020,zong_shuai,ZPaC,zuochuanyong,zyli2020,陈宇,范吉斌,冯一航,胡彬,宦晓玲,黄勇,雷元哲,李良灿,李林杰,刘崇鸣,刘力力,刘勇琪,吕浩宇,吕昱峰(Nate.River),没有窗户的小巷,沈竞兴,十六夜,王程浩,王禹程,王振邦,徐安越,徐永飞,杨旭华,于振华,俞涵,张清华,张澍坤,张栩浩,张学同,赵英灼,周超,周洪叶,朱家兴
Contributions of any kind are welcome!
The list type is supported in GRAPH_MODE.
- mindspore.ops.AdaptiveAvgPool3D
- mindspore.ops.AffineGrid
- mindspore.ops.Angle
- mindspore.ops.BartlettWindow
- mindspore.ops.Bernoulli
- mindspore.ops.BesselI0
- mindspore.ops.BesselI1
- mindspore.ops.BesselJ0
- mindspore.ops.BesselJ1
- mindspore.ops.BesselK0
- mindspore.ops.BesselK0e
- mindspore.ops.BesselK1
- mindspore.ops.BesselK1e
- mindspore.ops.BesselY0
- mindspore.ops.BesselY1
- mindspore.ops.Bincount
- mindspore.ops.BlackmanWindow
- mindspore.ops.ChannelShuffle
- mindspore.ops.Cholesky
- mindspore.ops.Col2Im
- mindspore.ops.Complex
- mindspore.ops.ComplexAbs
- mindspore.ops.Cross
- mindspore.ops.CTCLossV2
- mindspore.ops.Cummin
- mindspore.ops.Diag
- mindspore.ops.Digamma
- mindspore.ops.Expand
- mindspore.ops.Fmax
- mindspore.ops.Gcd
- mindspore.ops.Geqrf
- mindspore.ops.GLU
- mindspore.ops.GridSampler2D
- mindspore.ops.GridSampler3D
- mindspore.ops.HammingWindow
- mindspore.ops.Heaviside
- mindspore.ops.Hypot
- mindspore.ops.Igamma
- mindspore.ops.IndexFill
- mindspore.ops.InplaceIndexAdd
- mindspore.ops.InplaceUpdateV2
- mindspore.ops.Lcm
- mindspore.ops.LeftShift
- mindspore.ops.LogicalXor
- mindspore.ops.Logit
- mindspore.ops.LogSpace
- mindspore.ops.LuUnpack
- mindspore.ops.MatrixDiagPartV3
- mindspore.ops.MatrixDiagV3
- mindspore.ops.MatrixSetDiagV3
- mindspore.ops.MaxPool3DWithArgmax
- mindspore.ops.MaxUnpool2D
- mindspore.ops.MaxUnpool3D
- mindspore.ops.MultiMarginLoss
- mindspore.ops.MultinomialWithReplacement
- mindspore.ops.Mvlgamma
- mindspore.ops.NanToNum
- mindspore.ops.NextAfter
- mindspore.ops.Orgqr
- mindspore.ops.Polygamma
- mindspore.ops.ResizeBilinearV2
- mindspore.ops.RightShift
- mindspore.ops.ScatterNdDiv
- mindspore.ops.ScatterNdMul
- mindspore.ops.SearchSorted
- mindspore.ops.Sinc
- mindspore.ops.Trace
- mindspore.ops.Tril
- mindspore.ops.TrilIndices
- mindspore.ops.TriuIndices
- mindspore.ops.UniqueConsecutive
- mindspore.ops.Cummax
- mindspore.ops.FillV2
- mindspore.ops.IsClose
- mindspore.ops.MatrixSolve
- mindspore.ops.Median
- mindspore.ops.MultilabelMarginLoss
- mindspore.ops.NonZero
- mindspore.ops.Pdist
- mindspore.ops.Polar
- mindspore.ops.RandomGamma
- mindspore.ops.RandomPoisson
- mindspore.ops.RandomShuffle
- mindspore.ops.Renorm
- mindspore.ops.ScatterNdMax
- mindspore.ops.ScatterNdMin
- mindspore.ops.Svd
- mindspore.ops.TripletMarginLoss
The mindspore.compression feature was deprecated in MindSpore 1.8 and is removed in this version. The related mindspore.nn.quant interfaces are also removed simultaneously: mindspore.nn.FakeQuantWithMinMaxObserver, mindspore.nn.Conv2dBnFoldQuantOneConv, mindspore.nn.Conv2dBnFoldQuant, mindspore.nn.Conv2dBnWithoutFoldQuant, mindspore.nn.Conv2dQuant, mindspore.nn.DenseQuant, mindspore.nn.ActQuant, mindspore.nn.TensorAddQuant, mindspore.nn.MulQuant. Please use MindSpore Golden Stick instead to implement QuantAwareTraining in MindSpore.

The mindspore.dataset.close_pool, mindspore.dataset.to_device, and mindspore.dataset.set_dynamic_columns interfaces were deprecated in an earlier version and are removed in this version.

Interface: mindspore.set_context(mode=PYNATIVE_MODE)

Change: The default value is changed from GRAPH_MODE to PYNATIVE_MODE.

Description: If the running mode is not set and graph mode is needed, set it as follows: mindspore.set_context(mode=GRAPH_MODE).

| Original Interface | Interface v2.0.0-rc1 |
|---|---|
| mindspore.set_context(mode=GRAPH_MODE) | mindspore.set_context(mode=PYNATIVE_MODE) |
Interface: mindspore.train.Model.train

Change: The default value of dataset_sink_mode is changed from True to False.

Description: If dataset_sink_mode is not set and data sinking is needed, set it as follows: Model.train(dataset_sink_mode=True).

| Original Interface | Interface v2.0.0-rc1 |
|---|---|
| Model.train(dataset_sink_mode=True) | Model.train(dataset_sink_mode=False) |

Interface: mindspore.export

Change: The file_format parameter is changed from AIR to having no default value.

Description: If file_format was not set originally, it now needs to be set explicitly, for example: mindspore.export(net, *inputs, file_name, file_format="AIR", **kwargs).

| Original Interface | Interface v2.0.0-rc1 |
|---|---|
| mindspore.export(net, *inputs, file_name, file_format="AIR", **kwargs) | mindspore.export(net, *inputs, file_name, file_format, **kwargs) |
Interface: mindspore.ops.norm

Change: The ord parameter function is extended to support multiple forms.

| Original Interface | Interface v2.0.0-rc1 |
|---|---|
| ops.norm(input_x, axis, p=2, keep_dims=False, epsilon=1e-12) >>> # Example: >>> input = Tensor(np.array([[[1.0, 2.0], [3.0, 4.0]], ... [[5.0, 6.0], [7.0, 8.0]]]).astype(np.float32)) >>> output = ops.norm(input, [0, 1], p=2) | ops.norm(A, ord=None, dim=None, keepdim=False, *, dtype=None) >>> # Example: >>> input = Tensor(np.array([[[1.0, 2.0], [3.0, 4.0]], ... [[5.0, 6.0], [7.0, 8.0]]]).astype(np.float32)) >>> output = ops.norm(input, ord=2, dim=(0, 1)) |

Interface: mindspore.Tensor.norm

Change: The ord parameter function is extended to support multiple forms.

Description: For details, see the example of ops.norm.

| Original Interface | Interface v2.0.0-rc1 |
|---|---|
| Tensor.norm(axis, p=2, keep_dims=False, epsilon=1e-12) | Tensor.norm(ord=None, dim=None, keepdim=False, *, dtype=None) |

Interface: mindspore.ops.dropout

Change: The seed0 and seed1 parameters are deleted and the seed=None parameter is added. Instead of returning Tensors and masks, only Tensors are returned. The input parameter training=True is added.

| Original Interface | Interface v2.0.0-rc1 |
|---|---|
| ops.dropout(x, p=0.5, seed0=0, seed1=0) >>> # Example: >>> input = Tensor(((20, 16), (50, 50)), ... mindspore.float32) >>> output, mask = dropout(x, p=0.5) | ops.dropout(input, p=0.5, training=True, seed=None) >>> # Example: >>> input = Tensor(((20, 16), (50, 50)), ... mindspore.float32) >>> output = ops.dropout(input, p=0.5, training=True) |

Interface: mindspore.ops.dropout2d

Change: The return value is changed from Tensor and mask to Tensor only. The input parameter training=True is added.

| Original Interface | Interface v2.0.0-rc1 |
|---|---|
| ops.dropout2d(x, p=0.5) >>> # Example: >>> input = Tensor(np.ones([2, 1, 2, 3]), ... mindspore.float32) >>> output, mask = dropout2d(input, 0.5) | ops.dropout2d(input, p=0.5, training=True) >>> # Example: >>> input = Tensor(np.ones([2, 1, 2, 3]), ... mindspore.float32) >>> output = ops.dropout2d(input, 0.5, training=True) |

Interface: mindspore.ops.dropout3d

Change: The return value is changed from Tensor and mask to Tensor only. The input parameter training=True is added.

| Original Interface | Interface v2.0.0-rc1 |
|---|---|
| ops.dropout3d(x, p=0.5) >>> # Example: >>> input = Tensor(np.ones([2, 1, 2, 3]), ... mindspore.float32) >>> output, mask = dropout3d(input, 0.5) | ops.dropout3d(input, p=0.5, training=True) >>> # Example: >>> input = Tensor(np.ones([2, 1, 2, 3]), ... mindspore.float32) >>> output = ops.dropout3d(input, 0.5, training=True) |
Interface: mindspore.ops.std

Change: The interface is refactored, and its usage is more consistent with user habits.

Description: If the parameter unbiased has been set, use the following replacement: unbiased=False -> ddof=0, unbiased=True -> ddof=1.

| Original Interface | Interface v2.0.0-rc1 |
|---|---|
| ops.std(input_x, axis=(), unbiased=True, keep_dims=False) | ops.std(input, axis=None, ddof=0, keepdims=False) |

Interface: mindspore.load_param_into_net

Change: Parameters that are not loaded in the ckpt are added as return values.

| Original Interface | Interface v2.0.0-rc1 |
|---|---|
| net_param = load_param_into_net() | net_param, ckpt_param = load_param_into_net() |

Interface: mindspore.nn.BCELoss

Change: The default value of reduction is changed from 'none' to 'mean'.

| Original Interface | Interface v2.0.0-rc1 |
|---|---|
| BCELoss(weight=None, reduction='none') >>> # Example: >>> weight = Tensor(np.array([[1.0, 2.0, 3.0], ... [4.0, 3.3, 2.2]]), ... mindspore.float32) >>> loss = nn.BCELoss(weight=weight, reduction='mean') >>> logits = Tensor(np.array([[0.1, 0.2, 0.3], ... [0.5, 0.7, 0.9]]), ... mindspore.float32) >>> labels = Tensor(np.array([[0, 1, 0], [0, 0, 1]]), ... mindspore.float32) >>> output = loss(logits, labels) >>> print(output) >>> 1.8952923 | BCELoss(weight=None, reduction='mean') >>> # Example: >>> weight = Tensor(np.array([[1.0, 2.0, 3.0], ... [4.0, 3.3, 2.2]]), ... mindspore.float32) >>> loss = nn.BCELoss(weight=weight) >>> logits = Tensor(np.array([[0.1, 0.2, 0.3], ... [0.5, 0.7, 0.9]]), ... mindspore.float32) >>> labels = Tensor(np.array([[0, 1, 0], [0, 0, 1]]), ... mindspore.float32) >>> output = loss(logits, labels) >>> print(output) >>> 1.8952923 |
Interface: mindspore.ops.split

Change: The interface is refactored and is now easier to use. The order of the second and third parameters is adjusted, and the split_size_or_sections function is modified and extended.

| Original Interface | Interface v2.0.0-rc1 |
|---|---|
| ops.split(input_x, axis=0, output_num=1) >>> # Example: >>> input = Tensor(np.array([[1, 1, 1, 1], [2, 2, 2, 2]]), ... mindspore.int32) >>> output = ops.split(input, axis=1, output_num=4) | ops.split(tensor, split_size_or_sections, axis=0) >>> # Example: >>> input = Tensor(np.array([[1, 1, 1, 1], [2, 2, 2, 2]]), ... mindspore.int32) >>> output = ops.split(input, split_size_or_sections=1, axis=1) |

Interface: mindspore.Tensor.split

Change: The interface is refactored and is now easier to use. The positions of the two parameters are adjusted, and the split_size_or_sections function is modified and extended.

Description: For details, see the example of ops.split.

| Original Interface | Interface v2.0.0-rc1 |
|---|---|
| Tensor.split(axis=0, output_num=1) | Tensor.split(split_size_or_sections, axis=0) |

Interface: mindspore.ops.pad

Change: The parameter name paddings is changed to padding, and the mode and value functions are added.

| Original Interface | Interface v2.0.0-rc1 |
|---|---|
| ops.pad(input_x, paddings) >>> # Example: >>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], ... [0.4, 0.5, -3.2]]), ... mindspore.float32) >>> paddings = ((1, 2), (2, 1)) >>> output = ops.pad(input_x, paddings) | ops.pad(input_x, padding, mode='constant', value=None) >>> # Example: >>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], ... [0.4, 0.5, -3.2]]), ... mindspore.float32) >>> paddings = (2, 1, 1, 2) >>> output = ops.pad(input_x, paddings) |

Interface: mindspore.ops.meshgrid

Change: The input parameter is changed from inputs to *inputs.

| Original Interface | Interface v2.0.0-rc1 |
|---|---|
| ops.meshgrid(inputs, indexing='xy') >>> # Example: >>> x = Tensor(np.array([1, 2, 3, 4]).astype(np.int32)) >>> y = Tensor(np.array([5, 6, 7]).astype(np.int32)) >>> z = Tensor(np.array([8, 9, 0, 1, 2]).astype(np.int32)) >>> output = ops.meshgrid((x, y, z), indexing='xy') | ops.meshgrid(*inputs, indexing='xy') >>> # Example: >>> x = Tensor(np.array([1, 2, 3, 4]).astype(np.int32)) >>> y = Tensor(np.array([5, 6, 7]).astype(np.int32)) >>> z = Tensor(np.array([8, 9, 0, 1, 2]).astype(np.int32)) >>> output = ops.meshgrid(x, y, z, indexing='xy') |
Interface: mindspore.ops.max

Change: The order of the return values is exchanged, from "index, value" to "value, index".

| Original Interface | Interface v2.0.0-rc1 |
|---|---|
| ops.max(x, axis=0, keep_dims=False) >>> # Example: >>> input = Tensor(np.array([0.0, 0.4, 0.6, 0.7, 0.1]), ... mindspore.float32) >>> index, output = ops.max(input) >>> print(index, output) >>> 3 0.7 | ops.max(input, axis=None, keepdims=False, *, initial=None, where=True, return_indices=False) >>> # Example: >>> input = Tensor(np.array([0.0, 0.4, 0.6, 0.7, 0.1]), ... mindspore.float32) >>> output, index = ops.max(input, axis=0) >>> print(output, index) |

Interface: mindspore.ops.min

Change: The order of the return values is exchanged, from "index, value" to "value, index".

| Original Interface | Interface v2.0.0-rc1 |
|---|---|
| ops.min(x, axis=0, keep_dims=False) >>> # Example: >>> input = Tensor(np.array([0.0, 0.4, 0.6, 0.7, 0.1]), ... mindspore.float32) >>> index, output = ops.min(input) >>> 0 0.0 | ops.min(input, axis=None, keepdims=False, *, initial=None, where=True, return_indices=False) >>> # Example: >>> input = Tensor(np.array([0.0, 0.4, 0.6, 0.7, 0.1]), ... mindspore.float32) >>> output, index = ops.min(input, keepdims=True) >>> 0.0 0 |

Interface: mindspore.ops.random_gamma

Change: The seed2 parameter is deleted and the default of seed is changed from 0 to None. The framework behavior is unified and complies with actual application scenarios and user habits.

| Original Interface | Interface v2.0.0-rc1 |
|---|---|
| ops.random_gamma(shape, alpha, seed=0, seed2=0) | ops.random_gamma(shape, alpha, seed=None) |

Interface: mindspore.ops.standard_laplace

Change: The seed2 parameter is deleted and the default of seed is changed from 0 to None. The framework behavior is unified and complies with actual application scenarios and user habits.

| Original Interface | Interface v2.0.0-rc1 |
|---|---|
| ops.standard_laplace(shape, seed=0, seed2=0) | ops.standard_laplace(shape, seed=None) |

Interface: mindspore.ops.standard_normal

Change: The seed2 parameter is deleted and the default of seed is changed from 0 to None. The framework behavior is unified and complies with actual application scenarios and user habits.

| Original Interface | Interface v2.0.0-rc1 |
|---|---|
| ops.standard_normal(shape, seed=0, seed2=0) | ops.standard_normal(shape, seed=None) |

Interface: mindspore.ops.bernoulli

Change: The default value of seed is changed from -1 to None, matching actual application scenarios.

| Original Interface | Interface v2.0.0-rc1 |
|---|---|
| ops.bernoulli(x, p=0.5, seed=-1) | ops.bernoulli(input, p=0.5, seed=None) |
Interface: mindspore.data_sink

Change: The steps parameter is deleted, the parameter name jit is changed to jit_config, and the new input_signature parameter is added. Usability is improved to meet the requirements of actual application scenarios.

| Original Interface | Interface v2.0.0-rc1 |
|---|---|
| mindspore.data_sink(fn, dataset, steps, sink_size=1, jit=False) | mindspore.data_sink(fn, dataset, sink_size=1, jit_config=None, input_signature=None) |

Interface: mindspore.ops.conv2d

Change: Extend the interface function: add the bias parameter and modify the parameter names and parameter order.

| Original Interface | Interface v2.0.0-rc1 |
|---|---|
| conv2d(inputs, weight, pad_mode="valid", padding=0, stride=1, dilation=1, group=1) | conv2d(input, weight, bias=None, stride=1, pad_mode="valid", padding=0, dilation=1, groups=1) |
Interface: mindspore.dataset.vision.Pad

Change: Adjust the input parameter padding of Pad, RandomCrop, and RandomCropWithBbox. When the padding input is a sequence of length 2, the first value now fills the left/right boundaries and the second value fills the top/bottom boundaries; previously, the first value filled the left/top boundaries and the second value filled the right/bottom boundaries.

Description: A padding parameter of size 2 is not compatible with the behavior of earlier versions. The padding parameter needs to be given explicitly as (left, right, top, bottom).

| Original Interface | Interface v2.0.0-rc1 |
|---|---|
| mindspore.dataset.vision.Pad(padding=(1,2)) Indicates that the left/upper part of the image is filled with 1 pixel, and the right/lower part is filled with 2 pixels. | mindspore.dataset.vision.Pad(padding=(1,2,1,2)) Indicates that the left/upper part of the image is filled with 1 pixel, and the right/lower part is filled with 2 pixels. |

Interface: mindspore.dataset.Dataset.map

Change: Delete the column_order parameter. In most cases, output_columns and column_order have the same value, so column_order does not need to be passed. To adjust the order of data columns, use mindspore.dataset.Dataset.project.

| Original Interface | Interface v2.0.0-rc1 |
|---|---|
| >>> dataset = dataset.map(operations=[transforms], ... input_columns=["column_a"], ... output_columns=["column_b", "column_c"], ... column_order=["column_c", "column_b"]) | >>> dataset = dataset.map(operations=[transforms], ... input_columns=["column_a"], ... output_columns=["column_b", "column_c"]) >>> dataset = dataset.project(["column_c", "column_b"]) |

Interface: mindspore.dataset.Dataset.batch

Change: Delete the column_order parameter. In most cases, output_columns and column_order have the same value, so column_order does not need to be passed. To adjust the order of data columns, use mindspore.dataset.Dataset.project.

| Original Interface | Interface v2.0.0-rc1 |
|---|---|
| >>> dataset = dataset.batch(batch_size=4, ... input_columns=["column_a"], ... output_columns=["column_b", "column_c"], ... column_order=["column_c", "column_b"]) | >>> dataset = dataset.batch(batch_size=4, ... input_columns=["column_a"], ... output_columns=["column_b", "column_c"]) >>> dataset = dataset.project(["column_c", "column_b"]) |

Interface: mindspore.dataset.Dataset.batch

Change: Split the batch method into two methods: batch and padded_batch. The pad_info parameter is moved from the batch method to the padded_batch method.

Description: To use the pad_info parameter, use the padded_batch method instead.

| Original Interface | Interface v2.0.0-rc1 |
|---|---|
| >>> dataset = dataset.batch(batch_size=4, ... drop_remainder=True, pad_info=...) | >>> dataset = dataset.padded_batch(batch_size=4, ... drop_remainder=True, pad_info=...) |
[I66PE6] Fix the coredump caused by abnormal input to the AssignSub primitive.
[I6F5E6] Fix the data_sink function timeout on Ascend.
Thanks goes to these wonderful people:
alashkari,anzhengqi,archer2049,B.L.LAN,baihuawei,bichaoyang,BJ-WANG,Bokai Li,Brian-K,caifubi,caiyimeng,cathwong,changzherui,ChenDonYY,chenfei_mindspore,chengang,chengbin,chenhaozhe,chenjianping,chenkang,chenweifeng,chuht,chujinjin,davidanugraha,DavidFFFan,DeshiChen,douzhixing,emmmmtang,Erpim,Ethan,fangwenyi,fangzehua,fangzhou0329,fary86,fengyixing,gaoshuanglong,Gaoxiong,gaoyong10,gengdongjie,gongdaguo1,Greatpan,GuoZhibin,guozhijian,hangq,hanhuifeng,haozhang,hedongdong,Henry Shi,heterogeneous_to_backoff_2_0,huangbingjian,huanghui,huangxinjing,hujiahui8,hujingsong,huoxinyou,jachua,jiahongQian,jianghui58,jiangzhenguang,jiaorui,jiaoy1224,jijiarong,jjfeing,JoeyLin,json,JuiceZ,jxl,kairui_kou,KevinYi,kisnwang,KXiong,laiyongqiang,lanzhineng,liangchenghui,liangzelang,LiangZhibo,lianliguang,lichen,ligan,lijunbin,limingqi107,ling,linqingke,liubuyu,liuchao,liuchuting,liujunzhu,liuluobin,liutongtong9,liuyang811,lixiao,liyan2022,liyejun,liyuxia,looop5,luochao60,luojianing,luoyang,luoyuan,lyqlola,maning202007,maoyaomin,Margaret_wangrui,mayadong,MaZhiming,melody,mengyuanli,michaelzhu_70ab,Mohammad Motallebi,moran,NaCN,nomindcarry,OwenSec,panfengfeng,panshaowu,panzhihui,pkuliuliu,qinzheng,qiuzhongya,qujianwei,r1chardf1d0,Renyuan Zhang,RobinGrosman,shaojunsong,shenwei41,Soaringfish,tangdezhi_123,tanghuikang,tan-wei-cheng,TinaMengtingZhang,TronZhang,TuDouNi,VectorSL,wang_ziqi,wanghenchang,wangnan39,wangpingan,wangshaocong,wangshengnan123,wangtongyu6,weichaoran,wind-zyx,wqx,wtcheng,wujueying,wYann,XianglongZeng,xiaohanzhang,xiaotianci,xiaoyao,XinDu,xulei,xumengjuan1,xupan,xwkgch,yanghaoran,yangluhang,yangruoqi713,yangshuo,yangsijia,yangzhenzhang,yanzhenxiang2020,Yanzhi_YI,yao_yf,yefeng,yeyunpeng2020,Yi_zhang95,yide12,YijieChen,YingLai Lin,YingtongHu,youshu,yuchaojie,yuedongli,YuJianfeng,zangqx,ZengZitao,zhangbuxue,zhangdanyang,zhangdong,zhangfanghe,zhangqi,zhangqinghua,zhangyanhui,zhangyinxia,zhangyongxian,zhangzhaoju,zhanzhan,zhengzuohe,ZhidanLiu,zhixinaa,zhoufeng,zhouyaqiang0,zhuguodong,zhupuxu,zhuyuxiao,zichun_ye,zjun,zlq2020,zong_shuai,ZPaC,zuochuanyong,zyli2020,陈宇,范吉斌,冯一航,胡彬,宦晓玲,黄勇,雷元哲,李良灿,李林杰,刘崇鸣,刘力力,刘勇琪,吕浩宇,吕昱峰(Nate.River),没有窗户的小巷,沈竞兴,十六夜,王程浩,王禹程,王振邦,徐安越,徐永飞,杨旭华,于振华,俞涵,张清华,张澍坤,张栩浩,张学同,赵英灼,周超,周洪叶,朱家兴
Contributions of any kind are welcome!
- mindspore.dataset.config.set_seed does not take effect for random seeds in distributed training scenarios.
- Supports more operators with distributed implementations.
  Element-wise operators: AddN, BitwiseAnd, BitwiseOr, BitwiseXor, CumProd, HShrink, HSigmoid, IsFinite, Mish, MulNoNan, Rint, SeLU, SoftShrink, TruncateDiv, TruncateMod, Xdivy, Xlogy, InplaceAdd, InplaceSub, InplaceUpdate, Cdist, L2Loss, Lerp.
  Math operators: SquaredDifference, Erfinv, MaskedFill, SplitV, Gamma, KLDivLoss, LinSpace.
  Scatter operators: ScatterAdd, ScatterDiv, ScatterMax, ScatterMul, ScatterNdAdd, ScatterNdSub, ScatterNdUpdate, ScatterSub, TensorScatterAdd, TensorScatterDiv, TensorScatterMax, TensorScatterMul, TensorScatterUpdate.
- Add new APIs transform_checkpoints and transform_checkpoint_by_rank to transform distributed checkpoint files by strategy files. Please refer to Distributed Resilience Training and Inference.
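A hedged sketch of calling the new checkpoint-transformation API. The parameter names below (source/destination checkpoint directories, checkpoint prefix, and the two strategy files) are an assumption and should be checked against the API documentation; all paths are placeholders:

```python
import mindspore as ms

# Assumed signature; every path below is a placeholder.
ms.transform_checkpoints(
    src_checkpoints_dir="./src_checkpoints",   # checkpoints saved under the source strategy
    dst_checkpoints_dir="./dst_checkpoints",   # where transformed checkpoints are written
    ckpt_prefix="checkpoint_",                 # filename prefix of the checkpoint files
    src_strategy_file="./src_strategy.ckpt",   # strategy file of the source parallel layout
    dst_strategy_file="./dst_strategy.ckpt")   # strategy file of the target parallel layout
```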
- mindspore.ops.AdaptiveMaxPool3D
- mindspore.ops.AdjustHue
- mindspore.ops.BartlettWindow
- mindspore.ops.BesselJ0
- mindspore.ops.BesselJ1
- mindspore.ops.BesselK0
- mindspore.ops.BesselK0e
- mindspore.ops.BesselK1
- mindspore.ops.BesselK1e
- mindspore.ops.BesselY0
- mindspore.ops.BesselY1
- mindspore.ops.Betainc
- mindspore.ops.Bincount
- mindspore.ops.BlackmanWindow
- mindspore.ops.Bucketize
- mindspore.ops.CombinedNonMaxSuppression
- mindspore.ops.CompareAndBitpack
- mindspore.ops.Complex
- mindspore.ops.DataFormatVecPermute
- mindspore.ops.EuclideanNorm
- mindspore.ops.Expand
- mindspore.ops.ExtractGlimpse
- mindspore.ops.FillDiagonal
- mindspore.ops.FractionalAvgPool
- mindspore.ops.FractionalMaxPool
- mindspore.ops.Gcd
- mindspore.ops.HammingWindow
- mindspore.ops.Histogram
- mindspore.ops.HSVToRGB
- mindspore.ops.Lcm
- mindspore.ops.LeftShift
- mindspore.ops.ListDiff
- mindspore.ops.LogSpace
- mindspore.ops.Lstsq
- mindspore.ops.MatrixDiagPartV3
- mindspore.ops.MatrixDiagV3
- mindspore.ops.MatrixExp
- mindspore.ops.MatrixPower
- mindspore.ops.MaxPool3DWithArgmax
- mindspore.ops.MaxUnpool2D
- mindspore.ops.MultilabelMarginLoss
- mindspore.ops.NextAfter
- mindspore.ops.Orgqr
- mindspore.ops.ReduceStd
- mindspore.ops.RGBToHSV
- mindspore.ops.RightShift
- mindspore.ops.SampleDistortedBoundingBoxV2
- mindspore.ops.ScaleAndTranslate
- mindspore.ops.ScatterAddWithAxis
- mindspore.ops.ScatterNdDiv
- mindspore.ops.ScatterNdMax
- mindspore.ops.ScatterNdMul
- mindspore.ops.STFT
- mindspore.ops.Trace
- mindspore.ops.UpsampleNearest3D
- mindspore.ops.UpsampleTrilinear3D
- mindspore.parallel.transform_checkpoints
- mindspore.parallel.transform_checkpoint_by_rank
- The mindspore.ms_function interface is renamed to mindspore.jit, and mindspore.ms_function will be deprecated and removed in a future version (see the sketch after this list).
- The mindspore.ms_class interface is renamed to mindspore.jit_class, and mindspore.ms_class will be deprecated and removed in a future version.
- The mindspore.ops.ms_kernel interface is renamed to mindspore.ops.kernel, and mindspore.ops.ms_kernel will be deprecated and removed in a future version.
- The mindspore.dataset.map interface parameter column_order does not take effect; use mindspore.dataset.project instead.
- mindspore.dataset.close_pool, mindspore.dataset.to_device, and mindspore.dataset.set_dynamic_columns are deprecated and removed in this version.
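A minimal sketch of the first rename in this list: code that previously used mindspore.ms_function can switch to mindspore.jit (the decorated function is illustrative):

```python
import numpy as np
import mindspore as ms
from mindspore import Tensor

@ms.jit  # formerly @ms.ms_function, which is now deprecated
def scaled_sum(x, y):
    return (x + y) * 2

out = scaled_sum(Tensor(np.ones((2, 2)), ms.float32),
                 Tensor(np.ones((2, 2)), ms.float32))
```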
Thanks goes to these wonderful people:
AGroupofProbiotocs, anzhengqi, askmiao, baihuawei, baiyangfan, bai-yangfan, bingyaweng, BowenK, buxue, caifubi, CaoJian, caojian05, caozhou, Cathy, changzherui, chenbo116, chenfei, chengxianbin, chenhaozhe, chenjianping, chenzomi, chenzupeng, chujinjin, cj, cjh9368, Corleone, damon0626, danish, Danish, davidmc, dayschan, doitH, dong-li001, fary86, fuzhiye, Gaoxiong, GAO_HYP_XYJ, gengdongjie, Gogery, gongdaguo, gray0v0, gukecai, guoqi, gzhcv, hangq, hanhuifeng2020, Harshvardhan, He, heleiwang, hesham, hexia, Hoai, HuangBingjian, huangdongrun, huanghui, huangxinjing, huqi, huzhifeng, hwjiaorui, Jiabin Liu, jianghui58, Jiaqi, jin-xiulang, jinyaohui, jjfeing, John, jonyguo, JulyAi, jzg, kai00, kingfo, kingxian, kpy, kswang, liuyongqi, laiyongqiang, leonwanghui, liangchenghui, liangzelang, lichen_101010, lichenever, lihongkang, lilei, limingqi107, ling, linqingke, Lin Xh, liubuyu, liuwenhao4, liuxiao78, liuxiao93, liuyang_655, liuzhongkai, Lixia, lixian, liyanliu, liyong, lizhenyu, luopengting, lvchangquan, lvliang, lz, maning202007, Margaret_wangrui, mengyuanli, Ming_blue, ms_yan, ougongchang, panfengfeng, panyifeng, Payne, Peilin, peixu_ren, Pengyongrong, qianlong, qianjiahong, r1chardf1d0, riemann_penn, rmdyh, Sheng, shenwei41, simson, Simson, Su, sunsuodong, tao_yunhao, tinazhang, VectorSL, , Wan, wandongdong, wangdongxu, wangmin, wangyue01, wangzhe, wanyiming, Wei, wenchunjiang, wilfChen, WilliamLian, wsc, wudenggang, wukesong, wuweikang, wuxuejian, Xiao Tianci, Xiaoda, xiefangqi, xinyunfan, xuanyue, xuyongfei, yanghaitao, yanghaitao1, yanghaoran, YangLuo, yangruoqi713, yankai, yanzhenxiang2020, yao_yf, yepei6, yeyunpeng, Yi, yoni, yoonlee666, yuchaojie, yujianfeng, yuximiao, zengzitao, Zhang, zhanghuiyao, zhanghui_china, zhangxinfeng3, zhangyihui, zhangz0911gm, zhanke, zhanyuan, zhaodezan, zhaojichen, zhaoting, zhaozhenlong, zhengjun10, zhiqwang, zhoufeng, zhousiyi, zhouyaqiang, zhouyifengCode, Zichun, Ziyan, zjun, ZPaC, wangfengwfwf, zymaa, gerayking, shu-kun-zhang.
Contributions of any kind are welcome!
Thanks goes to these wonderful people:
archer2049, caifubi, chenfei_mindspore, gaoshuanglong, Greatpan, guozhijian, huoxinyou, Kxiong, lanzhineng, lijunbin, liubuyu, liuchuting, luochao60, lyqlola, nomindcarry, TuDouNi, xiaotianci, xupan, yangshuo, yefeng, YingtongHu, yuchaojie, zhoufeng, ZPaC, 刘勇琪, 吕昱峰, 王禹程, 于振华.
Contributions of any kind are welcome!
Thanks goes to these wonderful people:
AGroupofProbiotocs, anzhengqi, askmiao, baihuawei, baiyangfan, bai-yangfan, bingyaweng, BowenK, buxue, caifubi, CaoJian, caojian05, caozhou, Cathy, changzherui, chenbo116, chenfei, chengxianbin, chenhaozhe, chenjianping, chenzomi, chenzupeng, chujinjin, cj, cjh9368, Corleone, damon0626, danish, Danish, davidmc, dayschan, doitH, dong-li001, fary86, fuzhiye, Gaoxiong, GAO_HYP_XYJ, gengdongjie, Gogery, gongdaguo, gray0v0, gukecai, guoqi, gzhcv, hangq, hanhuifeng2020, Harshvardhan, He, heleiwang, hesham, hexia, Hoai, HuangBingjian, huangdongrun, huanghui, huangxinjing, huqi, huzhifeng, hwjiaorui, Jiabin Liu, jianghui58, Jiaqi, jin-xiulang, jinyaohui, jjfeing, John, jonyguo, JulyAi, jzg, kai00, kingfo, kingxian, kpy, kswang, liuyongqi, laiyongqiang, leonwanghui, liangchenghui, liangzelang, lichen_101010, lichenever, lihongkang, lilei, limingqi107, ling, linqingke, Lin Xh, liubuyu, liuwenhao4, liuxiao78, liuxiao93, liuyang_655, liuzhongkai, Lixia, lixian, liyanliu, liyong, lizhenyu, luopengting, lvchangquan, lvliang, lz, maning202007, Margaret_wangrui, mengyuanli, Ming_blue, ms_yan, ougongchang, panfengfeng, panyifeng, Payne, Peilin, peixu_ren, Pengyongrong, qianlong, qianjiahong, r1chardf1d0, riemann_penn, rmdyh, Sheng, shenwei41, simson, Simson, Su, sunsuodong, tao_yunhao, tinazhang, VectorSL, , Wan, wandongdong, wangdongxu, wangmin, wangyue01, wangzhe, wanyiming, Wei, wenchunjiang, wilfChen, WilliamLian, wsc, wudenggang, wukesong, wuweikang, wuxuejian, Xiao Tianci, Xiaoda, xiefangqi, xinyunfan, xuanyue, xuyongfei, yanghaitao, yanghaitao1, yanghaoran, YangLuo, yangruoqi713, yankai, yanzhenxiang2020, yao_yf, yepei6, yeyunpeng, Yi, yoni, yoonlee666, yuchaojie, yujianfeng, yuximiao, zengzitao, Zhang, zhanghuiyao, zhanghui_china, zhangxinfeng3, zhangyihui, zhangz0911gm, zhanke, zhanyuan, zhaodezan, zhaojichen, zhaoting, zhaozhenlong, zhengjun10, zhiqwang, zhoufeng, zhousiyi, zhouyaqiang, zhouyifengCode, Zichun, Ziyan, zjun, ZPaC, wangfengwfwf, zymaa, gerayking, shu-kun-zhang.
Contributions of any kind are welcome!
- mindspore.amp.LossScaler, mindspore.amp.DynamicLossScaler, mindspore.amp.StaticLossScaler, mindspore.amp.auto_mixed_precision, and mindspore.amp.all_finite.
- nn.AdaptiveAvgPool3d
- ops.adaptive_avg_pool3d
- ops.addcdiv
- ops.addcmul
- ops.approximate_equal
- ops.atanh
- ops.bessel_i0
- ops.bessel_i0e
- ops.bessel_i1
- ops.bessel_i1e
- ops.bessel_j0
- ops.bessel_j1
- ops.bessel_k0
- ops.bessel_k0e
- ops.bessel_k1
- ops.bessel_k1e
- ops.bessel_y0
- ops.bessel_y1
- ops.bias_add
- ops.bitwise_and
- ops.bitwise_or
- ops.bitwise_xor
- ops.grid_sample
- ops.inplace_update
- ops.isclose
- ops.isnan
- ops.lerp
- ops.random_poisson
- ops.reverse_sequence
- ops.scatter_mul
- ops.scatter_nd_max
- ops.scatter_nd_min
- ops.SparseToDense
- ops.square
- ops.standard_laplace
- ops.std
- ops.trunc
- ops.unsorted_segment_sum
- ops.xdivy
- ops.xlogy
- ops.poisson; use ops.random_poisson instead.
- ops.SparseApplyAdagrad; use ops.SparseApplyAdagradV2 instead.

[BUGFIX] The logic of the auto mixed precision (amp) O2 level is revised. In addition to the BatchNorm1d and BatchNorm2d operators, two more operators, BatchNorm3d and LayerNorm, are added. These four operators still use the float32 data type when calculating.
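For reference, a minimal sketch of applying the O2 level through the amp interface listed above (the network is illustrative):

```python
import mindspore as ms
from mindspore import nn
from mindspore.amp import auto_mixed_precision

net = nn.SequentialCell([nn.Dense(8, 8), nn.BatchNorm1d(8), nn.ReLU()])
# Under O2, BatchNorm1d/2d/3d and LayerNorm are kept in float32 after this fix,
# while the remaining layers run in float16.
net = auto_mixed_precision(net, "O2")
```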
[BUGFIX] Fix the problem that, when processing string data, if output_numpy=True is specified when calling the create_dict_iterator or create_tuple_iterator interface, the obtained data is of type numpy.bytes_. After this fix, these interfaces directly return numpy.str_ data, and users do not need to perform string decoding on it. Likewise, user-defined processing functions now also receive data of type numpy.str_, matching the original source data type.
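A minimal sketch of the behavior after this fix, assuming a small generator-based dataset with a string column (dataset contents are illustrative):

```python
import numpy as np
import mindspore.dataset as ds

def gen():
    for s in ["hello", "world"]:
        yield (np.array(s),)   # a string column

dataset = ds.GeneratorDataset(gen, column_names=["text"])

for row in dataset.create_dict_iterator(output_numpy=True):
    # The string column now arrives as numpy.str_, so no .decode() is needed.
    print(type(row["text"]), row["text"])
```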
Thanks goes to these wonderful people:
AGroupofProbiotocs, anzhengqi, askmiao, baihuawei, baiyangfan, bai-yangfan, bingyaweng, BowenK, buxue, caifubi, CaoJian, caojian05, caozhou, Cathy, changzherui, chenbo116, chenfei, chengxianbin, chenhaozhe, chenjianping, chenzomi, chenzupeng, chujinjin, cj, cjh9368, Corleone, damon0626, danish, Danish, davidmc, dayschan, doitH, dong-li001, fary86, fuzhiye, Gaoxiong, GAO_HYP_XYJ, gengdongjie, Gogery, gongdaguo, gray0v0, gukecai, guoqi, gzhcv, hangq, hanhuifeng2020, Harshvardhan, He, hesham, hexia, Hoai, HuangBingjian, huangdongrun, huanghui, huangxinjing, huqi, huzhifeng, hwjiaorui, Jiabin Liu, jianghui58, Jiaqi, jin-xiulang, jinyaohui, jjfeing, John, jonyguo, JulyAi, jzg, kai00, kingfo, kingxian, kpy, kswang, liuyongqi, laiyongqiang, leonwanghui, liangchenghui, liangzelang, lichen_101010, lichenever, lihongkang, lilei, limingqi107, ling, linqingke, Lin Xh, liubuyu, liuwenhao4, liuxiao78, liuxiao93, liuyang_655, liuzhongkai, liyanliu, lizhenyu, lvchangquan, lvliang, lz, maning202007, Margaret_wangrui, mengyuanli, Ming_blue, ms_yan, panfengfeng, panyifeng, Payne, peixu_ren, Pengyongrong, qianjiahong, r1chardf1d0, riemann_penn, rmdyh, Sheng, shenwei41, simson, Simson, Su, sunsuodong, tao_yunhao, tinazhang, VectorSL, Wan, wandongdong, wangdongxu, wangmin, wangyue01, wangzhe, wanyiming, Wei, wenchunjiang, wilfChen, WilliamLian, wsc, wudenggang, wukesong, wuweikang, Xiao Tianci, Xiaoda, xiefangqi, xinyunfan, xuanyue, xuyongfei, yanghaitao, yanghaoran, YangLuo, yangruoqi713, yankai, yanzhenxiang2020, yao_yf, yepei6, yeyunpeng, Yi, yoni, yoonlee666, yuchaojie, yujianfeng, yuximiao, zengzitao, Zhang, zhanghuiyao, zhanghui_china, zhangxinfeng3, zhangyihui, zhangz0911gm, zhanyuan, zhaojichen, zhaoting, zhaozhenlong, zhengjun10, zhiqwang, zhoufeng, zhousiyi, zhouyaqiang, zhouyifengCode, Zichun, Ziyan, zjun, ZPaC, wangfengwfwf, zymaa, gerayking, shu-kun-zhang.
Contributions of any kind are welcome!
Thanks goes to these wonderful people:
AGroupofProbiotocs, anzhengqi, askmiao, baihuawei, baiyangfan, bai-yangfan, bingyaweng, BowenK, buxue, caifubi, CaoJian, caojian05, caozhou, Cathy, changzherui, chenbo116, chenfei, chengxianbin, chenhaozhe, chenjianping, chenzomi, chenzupeng, chujinjin, cj, cjh9368, Corleone, damon0626, danish, Danish, davidmc, dayschan, doitH, dong-li001, fary86, fuzhiye, Gaoxiong, GAO_HYP_XYJ, gengdongjie, Gogery, gongdaguo, gray0v0, gukecai, guoqi, gzhcv, hangq, hanhuifeng2020, Harshvardhan, He, heleiwang, hesham, hexia, Hoai, HuangBingjian, huangdongrun, huanghui, huangxinjing, huqi, huzhifeng, hwjiaorui, Jiabin Liu, jianghui58, Jiaqi, jin-xiulang, jinyaohui, jjfeing, John, jonyguo, JulyAi, jzg, kai00, kingfo, kingxian, kpy, kswang, liuyongqi, laiyongqiang, leonwanghui, liangchenghui, liangzelang, lichen_101010, lichenever, lihongkang, lilei, limingqi107, ling, linqingke, Lin Xh, liubuyu, liuwenhao4, liuxiao78, liuxiao93, liuyang_655, liuzhongkai, Lixia, lixian, liyanliu, liyong, lizhenyu, luopengting, lvchangquan, lvliang, lz, maning202007, Margaret_wangrui, mengyuanli, Ming_blue, ms_yan, ougongchang, panfengfeng, panyifeng, Payne, Peilin, peixu_ren, Pengyongrong, qianlong, qianjiahong, r1chardf1d0, riemann_penn, rmdyh, Sheng, shenwei41, simson, Simson, Su, sunsuodong, tao_yunhao, tinazhang, VectorSL, , Wan, wandongdong, wangdongxu, wangmin, wangyue01, wangzhe, wanyiming, Wei, wenchunjiang, wilfChen, WilliamLian, wsc, wudenggang, wukesong, wuweikang, wuxuejian, Xiao Tianci, Xiaoda, xiefangqi, xinyunfan, xuanyue, xuyongfei, yanghaitao, yanghaitao1, yanghaoran, YangLuo, yangruoqi713, yankai, yanzhenxiang2020, yao_yf, yepei6, yeyunpeng, Yi, yoni, yoonlee666, yuchaojie, yujianfeng, yuximiao, zengzitao, Zhang, zhanghuiyao, zhanghui_china, zhangxinfeng3, zhangyihui, zhangz0911gm, zhanke, zhanyuan, zhaodezan, zhaojichen, zhaoting, zhaozhenlong, zhengjun10, zhiqwang, zhoufeng, zhousiyi, zhouyaqiang, zhouyifengCode, Zichun, Ziyan, zjun, ZPaC, wangfengwfwf, zymaa, gerayking.
Contributions of any kind are welcome!
mindspore.Model.fit API, add mindspore.callback.EarlyStopping and mindspore.callback.ReduceLROnPlateau in Callback.
mindspore.dataset.vision.c_transforms.SoftDvppDecodeRandomCropResizeJpeg and mindspore.dataset.vision.c_transforms.SoftDvppDecodeResizeJpeg interfaces.
on_train_epoch_end method in LossMonitor, which implements printing metric information at the epoch level when it is used in mindspore.Model.fit (a usage sketch follows).
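Below is a minimal sketch of wiring the new callbacks into mindspore.Model.fit. The toy dataset, network, metric, and the "eval_loss" monitor value are illustrative assumptions following the callbacks' documented defaults, not part of the release notes.

```python
import numpy as np
import mindspore.dataset as ds
from mindspore import nn, Model
from mindspore.train.callback import LossMonitor, EarlyStopping, ReduceLROnPlateau

def toy_dataset():
    # Placeholder regression data, only to make the sketch self-contained.
    x = np.random.randn(64, 8).astype(np.float32)
    y = np.random.randn(64, 1).astype(np.float32)
    return ds.NumpySlicesDataset({"data": x, "label": y}, shuffle=False).batch(8)

net = nn.Dense(8, 1)
model = Model(net, loss_fn=nn.MSELoss(), optimizer=nn.Adam(net.trainable_params()),
              metrics={"mae"})

callbacks = [
    LossMonitor(),                                      # also prints metrics at epoch end inside fit
    ReduceLROnPlateau(monitor="eval_loss", patience=2),
    EarlyStopping(monitor="eval_loss", patience=5),
]
# fit() interleaves training and validation, so the callbacks above can watch the eval loss.
model.fit(10, toy_dataset(), valid_dataset=toy_dataset(), callbacks=callbacks)
```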
filter_prefix of mindspore.load_checkpoint interface: empty string ("") is no longer supported, and the matching rules are changed from strong matching to fuzzy matching.
Thanks goes to these wonderful people:
AGroupofProbiotocs, anzhengqi, askmiao, baihuawei, baiyangfan, bai-yangfan, bingyaweng, BowenK, buxue, caifubi, CaoJian, caojian05, caozhou, Cathy, changzherui, chenbo116, chenfei, chengxianbin, chenhaozhe, chenjianping, chenzomi, chenzupeng, chujinjin, cj, cjh9368, Corleone, damon0626, danish, Danish, davidmc, dayschan, doitH, dong-li001, fary86, fuzhiye, Gaoxiong, GAO_HYP_XYJ, gengdongjie, Gogery, gongdaguo, gray0v0, gukecai, guoqi, gzhcv, hangq, hanhuifeng2020, Harshvardhan, He, heleiwang, hesham, hexia, Hoai, HuangBingjian, huangdongrun, huanghui, huangxinjing, huqi, huzhifeng, hwjiaorui, Jiabin Liu, jianghui58, Jiaqi, jin-xiulang, jinyaohui, jjfeing, John, jonyguo, JulyAi, jzg, kai00, kingfo, kingxian, kpy, kswang, liuyongqi, laiyongqiang, leonwanghui, liangchenghui, liangzelang, lichen_101010, lichenever, lihongkang, lilei, limingqi107, ling, linqingke, Lin Xh, liubuyu, liuwenhao4, liuxiao78, liuxiao93, liuyang_655, liuzhongkai, Lixia, lixian, liyanliu, liyong, lizhenyu, luopengting, lvchangquan, lvliang, lz, maning202007, Margaret_wangrui, mengyuanli, Ming_blue, ms_yan, ougongchang, panfengfeng, panyifeng, Payne, Peilin, peixu_ren, Pengyongrong, qianlong, qianjiahong, r1chardf1d0, riemann_penn, rmdyh, Sheng, shenwei41, simson, Simson, Su, sunsuodong, tao_yunhao, tinazhang, VectorSL, , Wan, wandongdong, wangdongxu, wangmin, wangyue01, wangzhe, wanyiming, Wei, wenchunjiang, wilfChen, WilliamLian, wsc, wudenggang, wukesong, wuweikang, wuxuejian, Xiao Tianci, Xiaoda, xiefangqi, xinyunfan, xuanyue, xuyongfei, yanghaitao, yanghaitao1, yanghaoran, YangLuo, yangruoqi713, yankai, yanzhenxiang2020, yao_yf, yepei6, yeyunpeng, Yi, yoni, yoonlee666, yuchaojie, yujianfeng, yuximiao, zengzitao, Zhang, zhanghuiyao, zhanghui_china, zhangxinfeng3, zhangyihui, zhangz0911gm, zhanke, zhanyuan, zhaodezan, zhaojichen, zhaoting, zhaozhenlong, zhengjun10, zhiqwang, zhoufeng, zhousiyi, zhouyaqiang, zhouyifengCode, Zichun, Ziyan, zjun, ZPaC, wangfengwfwf, zymaa, gerayking, shu-kun-zhang.
Contributions of any kind are welcome!
mindspore.numpy.rand(), mindspore.numpy.randn(), mindspore.numpy.randint(), and mindspore.ops.arange().
mindspore.train.callback.History in Callback.
mindspore.ms_class class decorator.
__getitem__/__next__ methods of GeneratorDataset return a single NumPy object.
Use ulimit -u 10240 to increase the number of threads/processes available to the current user when specifying too many processes or threads for loading a dataset causes RuntimeError: can't start new thread.
import mindspore.dataset.engine.datasets as ds. Use import mindspore.dataset as ds instead, as recommended in the MindSpore doc.
mindspore.ms_class interface, as a class decorator for user-defined classes. It allows MindSpore to identify user-defined classes and access their attributes and methods. (!30855:Support user-defined classes by ms_class decorators) A usage sketch follows.
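Below is a minimal sketch of the ms_class decorator in graph mode; the Config and Net classes are illustrative assumptions, not part of the release notes.

```python
import numpy as np
import mindspore as ms
from mindspore import nn, context

@ms.ms_class
class Config:
    """A plain Python class made visible to graph compilation via ms_class."""
    def __init__(self):
        self.scale = 2.0

class Net(nn.Cell):
    def __init__(self):
        super().__init__()
        self.cfg = Config()

    def construct(self, x):
        # Attributes of the ms_class instance can be read inside construct.
        return x * self.cfg.scale

context.set_context(mode=context.GRAPH_MODE)
out = Net()(ms.Tensor(np.ones((2, 3), np.float32)))
print(out)
```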
mindspore.SparseTensor and use mindspore.COOTensor instead. (!28505:Change SparseTensor to COOTensor)
Thanks goes to these wonderful people:
AGroupofProbiotocs, anzhengqi, askmiao, baihuawei, baiyangfan, bai-yangfan, bingyaweng, BowenK, buxue, caifubi, CaoJian, caojian05, caozhou, Cathy, changzherui, chenbo116, chenfei, chengxianbin, chenhaozhe, chenjianping, chenzomi, chenzupeng, chujinjin, cj, cjh9368, Corleone, damon0626, danish, Danish, davidmc, dayschan, doitH, dong-li001, fary86, fuzhiye, Gaoxiong, GAO_HYP_XYJ, gengdongjie, Gogery, gongdaguo, gray0v0, gukecai, guoqi, gzhcv, hangq, hanhuifeng2020, Harshvardhan, He, heleiwang, hesham, hexia, Hoai, HuangBingjian, huangdongrun, huanghui, huangxinjing, huqi, huzhifeng, hwjiaorui, Jiabin Liu, jianghui58, Jiaqi, jin-xiulang, jinyaohui, jjfeing, John, jonyguo, JulyAi, jzg, kai00, kingfo, kingxian, kpy, kswang, liuyongqi, laiyongqiang, leonwanghui, liangchenghui, liangzelang, lichen_101010, lichenever, lihongkang, lilei, limingqi107, ling, linqingke, Lin Xh, liubuyu, liuwenhao4, liuxiao78, liuxiao93, liuyang_655, liuzhongkai, Lixia, lixian, liyanliu, liyong, lizhenyu, luopengting, lvchangquan, lvliang, lz, maning202007, Margaret_wangrui, mengyuanli, Ming_blue, ms_yan, ougongchang, panfengfeng, panyifeng, Payne, Peilin, peixu_ren, Pengyongrong, qianlong, qianjiahong, r1chardf1d0, riemann_penn, rmdyh, Sheng, shenwei41, simson, Simson, Su, sunsuodong, tao_yunhao, tinazhang, VectorSL, , Wan, wandongdong, wangdongxu, wangmin, wangyue01, wangzhe, wanyiming, Wei, wenchunjiang, wilfChen, WilliamLian, wsc, wudenggang, wukesong, wuweikang, wuxuejian, Xiao Tianci, Xiaoda, xiefangqi, xinyunfan, xuanyue, xuyongfei, yanghaitao, yanghaitao1, yanghaoran, YangLuo, yangruoqi713, yankai, yanzhenxiang2020, yao_yf, yepei6, yeyunpeng, Yi, yoni, yoonlee666, yuchaojie, yujianfeng, yuximiao, zengzitao, Zhang, zhanghuiyao, zhanghui_china, zhangxinfeng3, zhangyihui, zhangz0911gm, zhanke, zhanyuan, zhaodezan, zhaojichen, zhaoting, zhaozhenlong, zhengjun10, zhiqwang, zhoufeng, zhousiyi, zhouyaqiang, zhouyifengCode, Zichun, Ziyan, zjun, ZPaC, wangfengwfwf, zymaa, gerayking.
Contributions of any kind are welcome!
Thanks goes to these wonderful people:
Adel, AGroupofProbiotocs, anthonyaje, anzhengqi, askmiao, baihuawei, baiyangfan, bai-yangfan, bingyaweng, BowenK, buxue, caifubi, CaoJian, caojian05, caozhou, Cathy, changzherui, chenbo116, chenfei, chengxianbin, chenhaozhe, chenjianping, chenzomi, chenzupeng, chujinjin, cj, cjh9368, Corleone, damon0626, danish, Danish, davidmc, dayschan, doitH, dong-li001, eric, Eric, fary86, fuzhiye, Gaoxiong, GAO_HYP_XYJ, gengdongjie, Gogery, gongdaguo, gray0v0, gukecai, guoqi, gzhcv, hangq, hanhuifeng2020, Harshvardhan, He, heleiwang, hexia, Hoai, HuangBingjian, huangdongrun, huanghui, huangxinjing, huqi, huzhifeng, hwjiaorui, Islam Amin, Jesse, , Jiabin Liu, jianghui58, jiangzhiwen, Jiaqi, jin-xiulang, jinyaohui, jjfeing, John, Jonathan, jonyguo, JulyAi, jzg, kai00, kingfo, kingxian, kpy, kswang, laiyongqiang, leonwanghui, Li, liangchenghui, liangzelang, lichen_101010, lichenever, lihongkang, lilei, limingqi107, ling, linqingke, Lin Xh, liubuyu, liuwenhao4, liuxiao78, liuxiao93, liuyang_655, liuzhongkai, Lixia, lixian, liyanliu, liyong, lizhenyu, luopengting, luoyang, lvchangquan, lvliang, lz, mahdi, Mahdi, maning202007, Margaret_wangrui, mayang, mengyuanli, Ming_blue, nhussain, ougongchang, panfengfeng, panyifeng, Payne, Peilin, peixu_ren, Pengyongrong, qianlong, qianjiahong, r1chardf1d0, riemann_penn, rmdyh, Sheng, shenwei41, simson, Simson, Su, sunsuodong, tao_yunhao, tinazhang, VectorSL, , Wan, wandongdong, wangdongxu, wangmin, wangnan39@huawei.com, wangyue01, wangzhe, wanyiming, Wei, wenchunjiang, wilfChen, WilliamLian, wsc, wudenggang, wukesong, wuweikang, wuxuejian, Xiao Tianci, Xiaoda, xiefangqi, xinyunfan, xuanyue, xulei2020, Xun, xuyongfei, yanghaitao, yanghaitao1, yanghaoran, YangLuo, yangruoqi713, yankai, yanzhenxiang2020, yao_yf, yepei6, yeyunpeng, Yi, yoni, yoonlee666, yuchaojie, yujianfeng, yuximiao, zengzitao, Zhang, zhanghaibo5@huawei.com, zhanghuiyao, zhanghui_china, zhangxinfeng3, zhangyihui, zhangz0911gm, zhanke, zhanyuan, zhaodezan, zhaojichen, zhaoting, zhaozhenlong, zhengjun10, Zhenglong Li, zhiqwang, zhoufeng, zhousiyi, zhouyaqiang, zhouyifengCode, Zichun, Zirui, Ziyan, zjun, ZPaC, wangfengwfwf, zymaa, gerayking.
Contributions of any kind are welcome!
mindspore.nn.LSTMCell from single-layer LSTM to single-cell LSTM.
mindspore.ops.Custom to customize your own operators for Ascend (AICore, AICPU), GPU, and CPU backends; the custom type can be one of TBE, AKG, pure Python function, or prebuilt binary (called an aot operator).
mindspore.dataset.MindDataset interface changes input parameter dataset_file. (!27542:dataset: change dataset_file into dataset_files for MindDataset)
MindDataset contains the input parameter dataset_file, which is in the singular form. It can receive a single file path or a list that stores multiple file paths. Thus it is preferred to change the input parameter dataset_file into the plural form. In addition, the input parameters of most dataset APIs, such as TFRecordDataset, are in the plural form (dataset_files). To ensure consistency, the input parameter dataset_file of MindDataset is changed to the plural form dataset_files; the updated version can be seen in the API of mindspore.dataset.MindDataset. A minimal before/after sketch follows.
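Below is a minimal before/after sketch of the renamed parameter; the file name data.mindrecord is a placeholder, not part of the release notes.

```python
import mindspore.dataset as ds

# MindSpore 1.5 and earlier (singular parameter):
# dataset = ds.MindDataset(dataset_file="data.mindrecord")

# MindSpore 1.6 and later (plural parameter, accepts a str or a list of str):
dataset = ds.MindDataset(dataset_files=["data.mindrecord"])
for item in dataset.create_dict_iterator(num_epochs=1):
    print(item)
```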
mindspore.Tensor's property virtual_flag (!26989:change tensor.virtual_flag and parameter.is_init interfaces to inner api)
mindspore.Parameter's property is_init (!26989:change tensor.virtual_flag and parameter.is_init interfaces to inner api)
mindspore.nn.ROC's interface roc (!25713:ROC.roc change to inner api)
The shard() interface of primitive is changed from shard(strategy) to shard(in_strategy=None, out_strategy=None).
The set_auto_parallel_context() interface of context is changed from set_auto_parallel_context(parallel_mode=AUTO_PARALLEL, auto_parallel_search_mode="dynamic_programming") to set_auto_parallel_context(parallel_mode=AUTO_PARALLEL, search_mode="dynamic_programming"). A minimal sketch of the renamed arguments follows.
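Below is a minimal sketch of the renamed search_mode argument and the new shard() keyword form; the "auto_parallel" string form, the MatMul strategy, and the 8-device split are illustrative assumptions, and a real run additionally requires distributed initialization.

```python
from mindspore import context, ops

# 1.5 and earlier:
# context.set_auto_parallel_context(parallel_mode="auto_parallel",
#                                   auto_parallel_search_mode="dynamic_programming")
# 1.6 and later: the keyword is renamed to search_mode.
context.set_auto_parallel_context(parallel_mode="auto_parallel",
                                  search_mode="dynamic_programming")

# shard() now takes explicit keywords instead of a single positional strategy;
# the strategy below splits MatMul's second input across 8 devices.
matmul = ops.MatMul().shard(in_strategy=((1, 1), (1, 8)))
```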
The mindspore.train.callback.SummaryCollector interface's parameter collect_specified_data adds a new option collect_landscape. (!26229:add loss landscape visualization function)
collect_landscape can collect the parameters needed to create the loss landscape; the updated version can be seen in the API of mindspore.train.callback.SummaryCollector.
mindspore.train.callback adds a new interface SummaryLandscape. (!26229:add loss landscape visualization function)
SummaryLandscape helps you collect loss landscape information. It can create a landscape in the PCA direction or a random direction by calculating the loss; the updated version can be seen in the API of mindspore.train.callback.SummaryLandscape.
Thanks goes to these wonderful people:
AGroupofProbiotocs, anzhengqi, askmiao, baihuawei, baiyangfan, bai-yangfan, bingyaweng, BowenK, buxue, caifubi, CaoJian, caojian05, caozhou, Cathy, changzherui, chenbo116, chenfei, chengxianbin, chenhaozhe, chenjianping, chenzomi, chenzupeng, chujinjin, cj, cjh9368, Corleone, damon0626, danish, Danish, davidmc, dayschan, doitH, dong-li001, fary86, fuzhiye, Gaoxiong, GAO_HYP_XYJ, gengdongjie, Gogery, gongdaguo, gray0v0, gukecai, guoqi, gzhcv, hangq, hanhuifeng2020, Harshvardhan, He, heleiwang, hesham, hexia, Hoai, HuangBingjian, huangdongrun, huanghui, huangxinjing, huqi, huzhifeng, hwjiaorui, Jiabin Liu, jianghui58, Jiaqi, jin-xiulang, jinyaohui, jjfeing, John, jonyguo, JulyAi, jzg, kai00, kingfo, kingxian, kpy, kswang, liuyongqi, laiyongqiang, leonwanghui, liangchenghui, liangzelang, lichen_101010, lichenever, lihongkang, lilei, limingqi107, ling, linqingke, Lin Xh, liubuyu, liuwenhao4, liuxiao78, liuxiao93, liuyang_655, liuzhongkai, Lixia, lixian, liyanliu, liyong, lizhenyu, luopengting, lvchangquan, lvliang, lz, maning202007, Margaret_wangrui, mengyuanli, Ming_blue, ms_yan, ougongchang, panfengfeng, panyifeng, Payne, Peilin, peixu_ren, Pengyongrong, qianlong, qianjiahong, r1chardf1d0, riemann_penn, rmdyh, Sheng, shenwei41, simson, Simson, Su, sunsuodong, tao_yunhao, tinazhang, VectorSL, , Wan, wandongdong, wangdongxu, wangmin, wangnan39@huawei.com, wangyue01, wangzhe, wanyiming, Wei, wenchunjiang, wilfChen, WilliamLian, wsc, wudenggang, wukesong, wuweikang, wuxuejian, Xiao Tianci, Xiaoda, xiefangqi, xinyunfan, xuanyue, xuyongfei, yanghaitao, yanghaitao1, yanghaoran, YangLuo, yangruoqi713, yankai, yanzhenxiang2020, yao_yf, yepei6, yeyunpeng, Yi, yoni, yoonlee666, yuchaojie, yujianfeng, yuximiao, zengzitao, Zhang, zhanghaibo5@huawei.com, zhanghuiyao, zhanghui_china, zhangxinfeng3, zhangyihui, zhangz0911gm, zhanke, zhanyuan, zhaodezan, zhaojichen, zhaoting, zhaozhenlong, zhengjun10, zhiqwang, zhoufeng, zhousiyi, zhouyaqiang, zhouyifengCode, Zichun, Ziyan, zjun, ZPaC, wangfengwfwf, zymaa, gerayking.
Contributions of any kind are welcome!
Thanks goes to these wonderful people:
Adel, AGroupofProbiotocs, anthonyaje, anzhengqi, askmiao, baihuawei, baiyangfan, bai-yangfan, bingyaweng, BowenK, buxue, caifubi, CaoJian, caojian05, caozhou, Cathy, changzherui, chenbo116, chenfei, chengxianbin, chenhaozhe, chenjianping, chenzomi, chenzupeng, chujinjin, cj, cjh9368, Corleone, damon0626, danish, Danish, davidmc, dayschan, doitH, dong-li001, eric, Eric, fary86, fuzhiye, Gaoxiong, GAO_HYP_XYJ, gengdongjie, Gogery, gongdaguo, gray0v0, gukecai, guoqi, gzhcv, hangq, hanhuifeng2020, Harshvardhan, He, heleiwang, hexia, Hoai, HuangBingjian, huangdongrun, huanghui, huangxinjing, huqi, huzhifeng, hwjiaorui, Islam Amin, Jesse, , Jiabin Liu, jianghui58, jiangzhiwen, Jiaqi, jin-xiulang, jinyaohui, jjfeing, John, Jonathan, jonyguo, JulyAi, jzg, kai00, kingfo, kingxian, kpy, kswang, laiyongqiang, leonwanghui, Li, liangchenghui, liangzelang, lichen_101010, lichenever, lihongkang, lilei, limingqi107, ling, linqingke, Lin Xh, liubuyu, liuwenhao4, liuxiao78, liuxiao93, liuyang_655, liuzhongkai, Lixia, lixian, liyanliu, liyong, lizhenyu, luopengting, luoyang, lvchangquan, lvliang, lz, mahdi, Mahdi, maning202007, Margaret_wangrui, mayang, mengyuanli, Ming_blue, nhussain, ougongchang, panfengfeng, panyifeng, Payne, Peilin, peixu_ren, Pengyongrong, qianlong, qianjiahong, r1chardf1d0, riemann_penn, rmdyh, Sheng, shenwei41, simson, Simson, Su, sunsuodong, tao_yunhao, tinazhang, VectorSL, , Wan, wandongdong, wangdongxu, wangmin, wangnan39@huawei.com, wangyue01, wangzhe, wanyiming, Wei, wenchunjiang, wilfChen, WilliamLian, wsc, wudenggang, wukesong, wuweikang, wuxuejian, Xiao Tianci, Xiaoda, xiefangqi, xinyunfan, xuanyue, xulei2020, Xun, xuyongfei, yanghaitao, yanghaitao1, yanghaoran, YangLuo, yangruoqi713, yankai, yanzhenxiang2020, yao_yf, yepei6, yeyunpeng, Yi, yoni, yoonlee666, yuchaojie, yujianfeng, yuximiao, zengzitao, Zhang, zhanghaibo5@huawei.com, zhanghuiyao, zhanghui_china, zhangxinfeng3, zhangyihui, zhangz0911gm, zhanke, zhanyuan, zhaodezan, zhaojichen, zhaoting, zhaozhenlong, zhengjun10, Zhenglong Li, zhiqwang, zhoufeng, zhousiyi, zhouyaqiang, zhouyifengCode, Zichun, Zirui, Ziyan, zjun, ZPaC, wangfengwfwf, zymaa, gerayking.
Contributions of any kind are welcome!
Thanks goes to these wonderful people:
Adel, AGroupofProbiotocs, anthonyaje, anzhengqi, askmiao, baihuawei, baiyangfan, bai-yangfan, bingyaweng, BowenK, buxue, caifubi, CaoJian, caojian05, caozhou, Cathy, changzherui, chenbo116, chenfei, chengxianbin, chenhaozhe, chenjianping, chenzomi, chenzupeng, chujinjin, cj, cjh9368, Corleone, damon0626, danish, Danish, davidmc, dayschan, doitH, dong-li001, eric, Eric, fary86, fuzhiye, Gaoxiong, GAO_HYP_XYJ, gengdongjie, Gogery, gongdaguo, gray0v0, gukecai, guoqi, gzhcv, hangq, hanhuifeng2020, Harshvardhan, He, heleiwang, hexia, Hoai, HuangBingjian, huangdongrun, huanghui, huangxinjing, huqi, huzhifeng, hwjiaorui, Islam Amin, Jesse, , Jiabin Liu, jianghui58, jiangzhiwen, Jiaqi, jin-xiulang, jinyaohui, jjfeing, John, Jonathan, jonyguo, JulyAi, jzg, kai00, kingfo, kingxian, kpy, kswang, laiyongqiang, leonwanghui, Li, liangchenghui, liangzelang, lichen_101010, lichenever, lihongkang, lilei, limingqi107, ling, linqingke, Lin Xh, liubuyu, liuwenhao4, liuxiao78, liuxiao93, liuyang_655, liuzhongkai, Lixia, lixian, liyanliu, liyong, lizhenyu, luopengting, luoyang, lvchangquan, lvliang, lz, mahdi, Mahdi, maning202007, Margaret_wangrui, mayang, mengyuanli, Ming_blue, nhussain, ougongchang, panfengfeng, panyifeng, Payne, Peilin, peixu_ren, Pengyongrong, qianlong, qianjiahong, r1chardf1d0, riemann_penn, rmdyh, Sheng, shenwei41, simson, Simson, Su, sunsuodong, tao_yunhao, tinazhang, VectorSL, , Wan, wandongdong, wangdongxu, wangmin, wangnan39@huawei.com, wangyue01, wangzhe, wanyiming, Wei, wenchunjiang, wilfChen, WilliamLian, wsc, wudenggang, wukesong, wuweikang, wuxuejian, Xiao Tianci, Xiaoda, xiefangqi, xinyunfan, xuanyue, xulei2020, Xun, xuyongfei, yanghaitao, yanghaitao1, yanghaoran, YangLuo, yangruoqi713, yankai, yanzhenxiang2020, yao_yf, yepei6, yeyunpeng, Yi, yoni, yoonlee666, yuchaojie, yujianfeng, yuximiao, zengzitao, Zhang, zhanghaibo5@huawei.com, zhanghuiyao, zhanghui_china, zhangxinfeng3, zhangyihui, zhangz0911gm, zhanke, zhanyuan, zhaodezan, zhaojichen, zhaoting, zhaozhenlong, zhengjun10, Zhenglong Li, zhiqwang, zhoufeng, zhousiyi, zhouyaqiang, zhouyifengCode, Zichun, Zirui, Ziyan, zjun, ZPaC, wangfengwfwf, zymaa, gerayking.
Contributions of any kind are welcome!
model_zoo has been separated into an individual repository models.
while and break, continue statements of training network in GRAPH_MODE.
Configuring the recomputation of the communication operations generated by model parallel and optimizer parallel to save memory on the devices. Users can pass mp_comm_recompute and parallel_optimizer_comm_recompute to enable the recomputation of the communication operations; a minimal sketch follows.
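Below is a minimal sketch of enabling that recomputation through Cell.recompute; the tiny Block cell and the flag values are illustrative assumptions, not part of the release notes.

```python
import mindspore.nn as nn

class Block(nn.Cell):
    def __init__(self):
        super().__init__()
        self.dense = nn.Dense(16, 16)

    def construct(self, x):
        return self.dense(x)

block = Block()
# Recompute this cell, including the communication operators introduced by
# model parallel and optimizer parallel (keyword names taken from the note above).
block.recompute(mp_comm_recompute=True, parallel_optimizer_comm_recompute=True)
```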
for statement. (!23669:Fix inline pass problem in switch.)
Previously, we needed to set the dump path in the dump config file. To make the dump feature easier to use on cloud, we support the new environment variable MS_DIAGNOSTIC_DATA_PATH.
| 1.3.0 | 1.4.0 |
| --- | --- |
| path is a mandatory field. | path field is optional. If path field is not provided or is empty string, MS_DIAGNOSTIC_DATA_PATH should be set in environment. |
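Below is a minimal sketch of supplying the dump base directory through the environment instead of the path field; the directory is a placeholder, and the variable can equally be exported in the shell before launching the training job.

```python
import os

# If "path" is omitted or empty in the dump config file (1.4.0 behaviour above),
# MindSpore reads the base directory from this environment variable instead.
# Set it before dump is initialized, e.g. at the very top of the training script.
os.environ["MS_DIAGNOSTIC_DATA_PATH"] = "/tmp/ms_diagnostic"
```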
Thanks goes to these wonderful people:
Adel, AGroupofProbiotocs, anthonyaje, anzhengqi, askmiao, baihuawei, baiyangfan, bai-yangfan, bingyaweng, BowenK, buxue, caifubi, CaoJian, caojian05, caozhou, Cathy, changzherui, chenbo116, chenfei, chengxianbin, chenhaozhe, chenjianping, chenzomi, chenzupeng, chujinjin, cj, cjh9368, Corleone, damon0626, danish, Danish, davidmc, dayschan, doitH, dong-li001, eric, Eric, fary86, fuzhiye, Gaoxiong, GAO_HYP_XYJ, gengdongjie, Gogery, gongdaguo, gray0v0, gukecai, guoqi, gzhcv, hangq, hanhuifeng2020, Harshvardhan, He, heleiwang, hexia, Hoai, HuangBingjian, huangdongrun, huanghui, huangxinjing, huqi, huzhifeng, hwjiaorui, Islam Amin, Jesse, , Jiabin Liu, jianghui58, jiangzhiwen, Jiaqi, jin-xiulang, jinyaohui, jjfeing, John, Jonathan, jonyguo, JulyAi, jzg, kai00, kingfo, kingxian, kpy, kswang, laiyongqiang, leonwanghui, Li, liangchenghui, liangzelang, lichen_101010, lichenever, lihongkang, lilei, limingqi107, ling, linqingke, Lin Xh, liubuyu, liuwenhao4, liuxiao78, liuxiao93, liuyang_655, liuzhongkai, Lixia, lixian, liyanliu, liyong, lizhenyu, luopengting, luoyang, lvchangquan, lvliang, lz, mahdi, Mahdi, maning202007, Margaret_wangrui, mayang, mengyuanli, Ming_blue, nhussain, ougongchang, panfengfeng, panyifeng, Payne, Peilin, peixu_ren, Pengyongrong, qianlong, qianjiahong, r1chardf1d0, riemann_penn, rmdyh, Sheng, shenwei41, simson, Simson, Su, sunsuodong, tao_yunhao, tinazhang, VectorSL, , Wan, wandongdong, wangdongxu, wangmin, wangnan39@huawei.com, wangyue01, wangzhe, wanyiming, Wei, wenchunjiang, wilfChen, WilliamLian, wsc, wudenggang, wukesong, wuweikang, wuxuejian, Xiao Tianci, Xiaoda, xiefangqi, xinyunfan, xuanyue, xulei2020, Xun, xuyongfei, yanghaitao, yanghaitao1, yanghaoran, YangLuo, yangruoqi713, yankai, yanzhenxiang2020, yao_yf, yepei6, yeyunpeng, Yi, yoni, yoonlee666, yuchaojie, yujianfeng, yuximiao, zengzitao, Zhang, zhanghaibo5@huawei.com, zhanghuiyao, zhanghui_china, zhangxinfeng3, zhangyihui, zhangz0911gm, zhanke, zhanyuan, zhaodezan, zhaojichen, zhaoting, zhaozhenlong, zhengjun10, Zhenglong Li, zhiqwang, zhoufeng, zhousiyi, zhouyaqiang, zhouyifengCode, Zichun, Zirui, Ziyan, zjun, ZPaC, wangfengwfwf, zymaa, gerayking.
Contributions of any kind are welcome!
run_check to check whether MindSpore is working properly or not.
set_indexes to select the inputs of update in the specified order.
_Loss to an external API LossBase as the base class of losses.
Gather.
mindspore.dataset.Dataset.device_que interface removes the unused parameter prefetch_size. (!18973:Delete unused param in device_que)
Previously, we had a parameter prefetch_size in device_que to define the number of records to prefetch ahead of the user's request. However, this parameter was never used, so it was ineffective. Therefore, we remove this parameter in 1.3.0, and users can set this configuration by mindspore.dataset.config.set_prefetch_size; a minimal sketch follows.
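Below is a minimal sketch of the installation check and the global prefetch configuration mentioned above; the prefetch value 16 is an arbitrary example.

```python
import mindspore
import mindspore.dataset as ds

# Verify that the MindSpore installation is working.
mindspore.run_check()

# Since 1.3.0 the prefetch depth is configured globally instead of via the
# removed prefetch_size parameter of device_que.
ds.config.set_prefetch_size(16)
```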
mindspore.nn.optim.thor interface changes to lowercase thor and adds two parameters enable_clip_grad and frequency. (!17212:clearn codechekc for thor)
The parameter enable_clip_grad is used for gradient clipping, and the parameter frequency is used to control the update interval of the second-order information matrix.
Previously, we could only dump tensor data for one or all steps. To make the dump feature easier to use, we changed the dump configuration format and dump structure. View the New Dump Tutorial.
| 1.2.1 | 1.3.0 |
| --- | --- |
| iteration is an int. | iteration is a string. |
| op_debug_mode is in async_dump_settings field. | op_debug_mode is in common_dump_settings field. async_dump_settings is removed. |
Previously, Training on Device used TrainSession while Inference on Device used LiteSession. To simplify implementation, we move the TrainSession functions to LiteSession as virtual functions, and move the APIs previously defined in train_session.h to lite_session.h.
class MS_API LiteSession {
...
static LiteSession *CreateTrainSession(const std::string &filename, const lite::Context *context,
bool train_mode = false, const lite::TrainCfg *cfg = nullptr);
static LiteSession *CreateTransferSession(const std::string &filename_backbone, const std::string &filename_head,
const lite::Context *context, bool train_mode = false,
const lite::TrainCfg *cfg = nullptr);
virtual int Train() { return mindspore::lite::RET_ERROR; }
virtual int Eval() { return mindspore::lite::RET_OK; }
virtual int SetupVirtualBatch(int virtual_batch_multiplier, float lr = -1.0f, float momentum = -1.0f) {
return mindspore::lite::RET_ERROR;
}
virtual std::vector<tensor::MSTensor *> GetPredictions() const {
std::vector<tensor::MSTensor *> outputs;
return outputs;
}
  ...
};
Previously, Training on Device used the SaveToFile API to save the training model to a file. The Export API was added in this release to support more formats, more model types (the train or inference part of the model), and saving the weight-quantized model of training.
virtual int Export(const std::string &file_name, lite::ModelType model_type = lite::MT_TRAIN,
lite::QuantizationType quant_type = lite::QT_DEFAULT, lite::FormatType = lite::FT_FLATBUFFERS) {
return mindspore::lite::RET_ERROR;
}
When training on the device, we may need to update the model feature maps and get the model feature maps, particularly in the MindSpore Federated scenario.
virtual std::vector<tensor::MSTensor *> GetFeatureMaps() const {
std::vector<tensor::MSTensor *> features;
return features;
}
virtual int UpdateFeatureMaps(const std::vector<tensor::MSTensor *> &features) { return mindspore::lite::RET_ERROR; }
Previously, if we wanted to create a LiteSession object, we needed to call two APIs:
MSConfig config;
// config options ...
LiteSession liteSession = new LiteSession();
boolean ret = liteSession.init(config);
if (!ret) {
// handle init LiteSession failed ...
}
Now we can create a LiteSession object with the new API like this:
MSConfig config;
// config options ...
LiteSession liteSession = createSession(config);
if (liteSession == null) {
// handle create LiteSession failed ...
}
Previously, if we wanted to run inference on a model, we needed to call APIs like:
MSConfig config;
// config options ...
LiteSession liteSession = new LiteSession();
boolean initSessionRet = liteSession.init(config);
if (!initSessionRet) {
// handle init LiteSession failed and return ...
}
Model model = new Model();
boolean loadModelRet = model.loadModel(modelMappedByteBuffer);
if (!loadModelRet) {
// handle load model failed and return ...
}
boolean compileModelRet = liteSession.compileGraph(model);
if (!compileModelRet) {
// handle compile model failed and return ...
}
model.free();
// liteSession is ready to inference model, call runGraph in LiteSession.class ...
Now we can use the new API like this:
MSConfig config;
// config options ...
LiteSession liteSession = createSession(modelMappedByteBuffer, config);
if (liteSession == null) {
// handle init LiteSession failed and return ...
}
// liteSession is ready to inference model, call runGraph in LiteSession.class ...
The new createSession method integrates four old APIs: LiteSession.init, Model.loadModel, LiteSession.compileGraph, and model.free. It is simpler and more efficient as it saves one modelBuffer copy operation.
We added new C++ feature-map APIs in the LiteSession class; correspondingly, we added new Java APIs in LiteSession.java.
public List<MSTensor> getFeaturesMap() {
List<Long> ret = this.getFeaturesMap(this.sessionPtr);
ArrayList<MSTensor> tensors = new ArrayList<MSTensor>();
for (Long msTensorAddr : ret) {
MSTensor msTensor = new MSTensor(msTensorAddr);
tensors.add(msTensor);
}
return tensors;
}
public boolean updateFeatures(List<MSTensor> features) {
long[] inputsArray = new long[features.size()];
for (int i = 0; i < features.size(); i++) {
inputsArray[i] = features.get(i).getMSTensorPtr();
}
return this.updateFeatures(this.sessionPtr, inputsArray);
}
We added a new C++ export API in the LiteSession class; correspondingly, we added a new Java API in LiteSession.java.
public boolean export(String modelFileName, int modelType, int quantizationType) {
return this.export(this.sessionPtr, modelFileName, modelType, quantizationType);
}
To align with the updates to the C++ API in the LiteSession class, new Java APIs are added to LiteSession.java correspondingly.
public class LiteSession {
...
public static LiteSession createTrainSession(String modelName, final MSConfig config, boolean trainMode){...}
public boolean train() {...}
public boolean eval() {...}
...
}
Thanks goes to these wonderful people:
Adel, AGroupofProbiotocs, anthonyaje, anzhengqi, askmiao, baihuawei, baiyangfan, bai-yangfan, bingyaweng, BowenK, buxue, caifubi, CaoJian, caojian05, caozhou, Cathy, changzherui, chenbo116, chenfei, chengxianbin, chenhaozhe, chenjianping, chenzomi, chenzupeng, chujinjin, cj, cjh9368, Corleone, damon0626, danish, Danish, davidmc, dayschan, doitH, dong-li001, eric, Eric, fary86, fuzhiye, Gaoxiong, GAO_HYP_XYJ, gengdongjie, Gogery, gongdaguo, gray0v0, gukecai, guoqi, gzhcv, hangq, hanhuifeng2020, Harshvardhan, He, heleiwang, hexia, Hoai, HuangBingjian, huangdongrun, huanghui, huangxinjing, huqi, huzhifeng, hwjiaorui, Islam Amin, Jesse, , Jiabin Liu, jianghui58, jiangzhiwen, Jiaqi, jin-xiulang, jinyaohui, jjfeing, John, Jonathan, jonyguo, JulyAi, jzg, kai00, kingfo, kingxian, kpy, kswang, laiyongqiang, leonwanghui, Li, liangchenghui, liangzelang, lichen_101010, lichenever, lihongkang, lilei, limingqi107, ling, linqingke, Lin Xh, liubuyu, liuwenhao4, liuxiao78, liuxiao93, liuyang_655, liuzhongkai, Lixia, lixian, liyanliu, liyong, lizhenyu, luopengting, luoyang, lvchangquan, lvliang, lz, mahdi, Mahdi, maning202007, Margaret_wangrui, mayang, mengyuanli, Ming_blue, nhussain, ougongchang, panfengfeng, panyifeng, Payne, Peilin, peixu_ren, Pengyongrong, qianlong, qianjiahong, r1chardf1d0, riemann_penn, rmdyh, Sheng, shenwei41, simson, Simson, Su, sunsuodong, tao_yunhao, tinazhang, VectorSL, , Wan, wandongdong, wangdongxu, wangmin, wangnan39@huawei.com, wangyue01, wangzhe, wanyiming, Wei, wenchunjiang, wilfChen, WilliamLian, wsc, wudenggang, wukesong, wuweikang, wuxuejian, Xiao Tianci, Xiaoda, xiefangqi, xinyunfan, xuanyue, xulei2020, Xun, xuyongfei, yanghaitao, yanghaitao1, yanghaoran, YangLuo, yangruoqi713, yankai, yanzhenxiang2020, yao_yf, yepei6, yeyunpeng, Yi, yoni, yoonlee666, yuchaojie, yujianfeng, yuximiao, zengzitao, Zhang, zhanghaibo5@huawei.com, zhanghuiyao, zhanghui_china, zhangxinfeng3, zhangyihui, zhangz0911gm, zhanke, zhanyuan, zhaodezan, zhaojichen, zhaoting, zhaozhenlong, zhengjun10, Zhenglong Li, zhiqwang, zhoufeng, zhousiyi, zhouyaqiang, zhouyifengCode, Zichun, Zirui, Ziyan, zjun, ZPaC, wangfengwfwf, zymaa, gerayking.
Contributions of any kind are welcome!