When the training result deviates from expectations, the input and output of an operator can be saved for debugging through the data dump.
In dynamic graph mode, the Dump function only supports overflow detection on Ascend. To inspect nodes that did not overflow, use Python's native execution capabilities: users can view and record the corresponding inputs and outputs while the network script runs.
In static graph mode, MindSpore provides the Dump function to save the graph and the input and output data of operators during model training to disk files.
Using dump to help debugging is divided into two steps: 1. data preparation; 2. data analysis.
The data preparation phase uses synchronous Dump or asynchronous Dump to generate Dump data. See Synchronous Dump Step and Asynchronous Dump Step for details.
When preparing data, you can refer to the following best practice: set the iteration parameter to save only the data of the problematic iteration and the one before it. For example, if the problem to be analyzed appears in the 10th iteration (counting from 1), you can set "iteration": "8|9". Note that the iteration parameter counts iterations from 0. Saving the data of these two iterations covers problem analysis in most scenarios.
If you have installed MindSpore Insight, you can use the offline debugger of MindSpore Insight to analyze the data. Currently it only supports analyzing data saved by e2e_dump. See Using the Offline Debugger for the usage of the offline debugger.
If MindSpore Insight is not installed, you need to analyze the data through the following steps.
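For example, the best practice above corresponds to a Dump configuration file like the following (a minimal sketch; the path and net name are placeholders, and the meaning of each field is described later in this document):

```json
{
    "common_dump_settings": {
        "dump_mode": 0,
        "path": "/absolute_path",
        "net_name": "MyNet",
        "iteration": "8|9",
        "saved_data": "tensor",
        "input_output": 0,
        "kernels": ["Default/Conv-op12"],
        "support_device": [0,1,2,3,4,5,6,7]
    },
    "e2e_dump_settings": {
        "enable": true,
        "trans_flag": true
    }
}
```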
Find the corresponding operator from the script.
The Dump function needs the IR file of the final execution graph. The IR file contains the full name of each operator, the operator's input and output dependencies in the computational graph, and the trace information from the operator to the corresponding script code. The IR file can be viewed with the vi
command. For the configuration of the Dump function, see Synchronous Dump Step and Asynchronous Dump Step. For the directory structure of the Dump output, see Synchronous Dump Data Object Directory and Asynchronous Dump Data Object Directory. Then find the operator corresponding to the script code through the graph file, and refer to Synchronous Dump Data Analysis Sample and Asynchronous Dump Data Analysis Sample.
From operator to Dump data.
After understanding the mapping relationship between the script and the operator, you can determine the name of the operator you want to analyze and find the dump file corresponding to the operator. Please refer to Synchronous Dump Data Object Directory and Asynchronous Dump Data Object Directory.
Analyze Dump data.
By analyzing the Dump data, you can compare it with data from third-party frameworks. For the synchronous Dump data format, please refer to Introduction to Synchronous Dump Data File. For the asynchronous Dump data format, please refer to Introduction to Asynchronous Dump Data File.
Analysis of static graph operator results.
Through the IR graph obtained by the Dump function (only e2e_dump supports saving the IR graph), you can understand the mapping relationship between the script code and the executed operators (for details, see MindSpore IR Introduction). Combined with the input and output data of the executed operators, it is possible to analyze possible overflow, gradient explosion, and gradient vanishing during training, and trace back to the script code that may have problems.
Analysis of the feature map.
Analyze the information of the feature map by obtaining the output data of the layer.
Model migration.
In the scenario of migrating a model from a third-party framework (TensorFlow or PyTorch) to MindSpore, by comparing the output data of operators at the same position, you can analyze whether the training results of the third-party framework and MindSpore for the same model are close enough, in order to locate model precision issues.
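As a sketch of such a comparison, assuming the output of the corresponding operator in the third-party framework has also been exported to an .npy file, the two tensors can be compared with NumPy (the helper below is illustrative, not part of MindSpore; the file names are hypothetical):

```python
import numpy as np

def compare_tensors(ms_file, ref_file, rtol=1e-3, atol=1e-5):
    """Compare a MindSpore dump tensor against a reference tensor
    exported from another framework at the same network position."""
    a = np.load(ms_file)
    b = np.load(ref_file)
    if a.shape != b.shape:
        return False, f"shape mismatch: {a.shape} vs {b.shape}"
    # Maximum absolute and relative error are the usual first metrics.
    abs_err = np.abs(a - b).max()
    denom = np.maximum(np.abs(b), atol)
    rel_err = (np.abs(a - b) / denom).max()
    ok = bool(np.allclose(a, b, rtol=rtol, atol=atol))
    return ok, f"max_abs_err={abs_err:.3e}, max_rel_err={rel_err:.3e}"
```

Tolerances should be chosen per precision mode; for example, fp16 training usually needs looser tolerances than fp32.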
MindSpore provides two modes: synchronous Dump and asynchronous Dump.
The configuration files required by the two modes and the dumped data formats differ:
Create a configuration file in JSON format; the name and location of the JSON file can be customized.
{
"common_dump_settings": {
"dump_mode": 0,
"path": "/absolute_path",
"net_name": "ResNet50",
"iteration": "0|5-8|100-120",
"saved_data": "tensor",
"input_output": 0,
"kernels": ["Default/Conv-op12"],
"support_device": [0,1,2,3,4,5,6,7]
},
"e2e_dump_settings": {
"enable": true,
"trans_flag": true
}
}
dump_mode: 0: dump all operator data in the network; 1: dump only the operator data specified in "kernels".
path: The absolute path where Dump saves data.
net_name: The customized net name, for example "ResNet50".
iteration: Specify the iterations of the data to be dumped; the type is string. Use "|" to separate the step ranges to be saved. For example, "0|5-8|100-120" dumps the data of the 1st, 6th to 9th, and 101st to 121st steps. If iteration is set to "all", data of every iteration is dumped.
saved_data: Specify what data is to be dumped; the type is string. Use "tensor" to dump complete tensor data, "statistic" to dump tensor statistics, and "full" to dump both tensor data and statistics. Synchronous statistics dump is only supported on GPU and Ascend; using "statistic" or "full" on CPU results in an exception. The default is "tensor".
input_output: 0: dump both the input and output of the kernel; 1: dump the input of the kernel; 2: dump the output of the kernel. This configuration parameter only supports Ascend and CPU; GPU can only dump the output of operators.
kernels: This item can be configured in two formats. You can set set_context(save_graphs=2) and execute the network to obtain the operator names from the generated trace_code_graph_{graph_id} IR file (for details, please refer to Saving IR). Note that whether set_context(save_graphs=2) is set may change the IDs of the same operator, so when dumping specified operators, keep this setting unchanged after obtaining the operator names. Alternatively, you can obtain the operator names from the ms_output_trace_code_graph_{graph_id}.ir file saved by Dump; refer to Synchronous Dump Data Object Directory.
support_device: Supported devices; the default is [0,1,2,3,4,5,6,7]. You can specify device ids to dump data of specific devices. This configuration parameter is invalid on the CPU, because there is no concept of device on the CPU, but the parameter still needs to be kept in the json file.
enable: When set to true, synchronous Dump is enabled. When set to false, asynchronous dump is used on Ascend and synchronous dump is still used on GPU.
trans_flag: Enable the trans flag, which transforms the device data format into NCHW. If true, the data is saved in 4D (NCHW) format on the Host side; if false, the Device-side data format is retained. This configuration parameter is invalid on the CPU, because there is no format conversion on the CPU, but the parameter still needs to be kept in the json file.
Set Dump environment variable.
Specify the json configuration file of Dump.
export MINDSPORE_DUMP_CONFIG=${xxx}
"xxx" represents the absolute path to the configuration file.
export MINDSPORE_DUMP_CONFIG=/path/to/data_dump.json
If the path
field is not set or set to an empty string in the Dump configuration file, you also need to configure the environment variable MS_DIAGNOSTIC_DATA_PATH
.
export MS_DIAGNOSTIC_DATA_PATH=${yyy}
Then "$MS_DIAGNOSTIC_DATA_PATH/debug_dump" is regarded as path. If the path field is set in the Dump configuration file, the value of that field is still used as the actual output path.
Note: in distributed scenarios, the Dump environment variable needs to be set before calling mindspore.communication.init.
Execute the training script to dump data.
After the training is started, if the MINDSPORE_DUMP_CONFIG
environment variable is correctly configured, the content of the configuration file will be read and the operator data will be saved according to the data storage path specified in the Dump configuration.
In synchronous mode, if you want to dump data in GPU environment, you must use the non-data sink mode (set the dataset_sink_mode
parameter in model.train
or DatasetHelper
to False
) to ensure that you can get the dump data of each step.
If model.train
or DatasetHelper
is not called in the script, the default is non-data sinking mode. Using the Dump function will automatically generate the IR file of the final execution graph.
You can set set_context(reserve_class_name_in_scope=False)
in your training script to avoid dump failures caused by file names that are too long.
Read and parse synchronous dump data through numpy.load
, refer to Introduction to Synchronous Dump Data File.
After starting the training, the data objects saved by the synchronous Dump include the final execution graph (ms_output_trace_code_graph_{graph_id}.ir
file) and the input and output data of the operators in the graph. The data directory structure is as follows:
{path}/
- rank_{rank_id}/
- .dump_metadata/
- {net_name}/
- {graph_id}/
- {iteration_id}/
statistic.csv
{op_type}.{op_name}.{task_id}.{stream_id}.{timestamp}.{input_output_index}.{slot}.{format}.npy
- constants/
Parameter.data-{data_id}.0.0.{timestamp}.output.0.DefaultFormat.npy
...
- graphs/
ms_output_trace_code_graph_{graph_id}.pb
ms_output_trace_code_graph_{graph_id}.ir
- execution_order/
ms_execution_order_graph_{graph_id}.csv
ms_global_execution_order_graph_{graph_id}.csv
path: the absolute path set in the data_dump.json configuration file.
rank_id: the id of the logical device.
net_name: the network name set in the data_dump.json configuration file.
graph_id: the id of the training graph.
iteration_id: the iteration of the training.
op_type: the type of the operator.
op_name: the name of the operator.
task_id: the id of the task.
stream_id: the id of the stream.
timestamp: the timestamp.
input_output_index: the index of the input or output. For example, output_0 means that the file is the data of the first output Tensor of the operator.
slot: the id of the slot.
format: the format of the data.
data_id: the id of constant data.
For multi-graph networks, due to control flow, some subgraphs may not be executed. Dump only saves the executed nodes, so the {graph_id} in the .pb file names in the graphs directory does not necessarily appear in the {graph_id} directories under {net_name}.
statistic.csv is generated only when saved_data is "statistic" or "full". Complete tensor data files named {op_type}.{op_name}.{task_id}.{stream_id}.{timestamp}.{input_output_index}.{slot}.{format}.npy are generated only when saved_data is "tensor" or "full".
The data file generated by the synchronous Dump is a binary file with the suffix .npy
, and the file naming format is:
{op_type}.{op_name}.{task_id}.{stream_id}.{timestamp}.{input_output_index}.{slot}.{format}.npy
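Because the op_name itself contains no dots (MindSpore converts ".", "/", and spaces in operator names to underscores), the fields of such a file name can be recovered with a plain string split. The helper below is illustrative, not part of MindSpore:

```python
def parse_dump_filename(name):
    """Split a synchronous-dump tensor file name into its fields.

    Assumes the naming pattern
    {op_type}.{op_name}.{task_id}.{stream_id}.{timestamp}.{io}.{slot}.{format}.npy
    and that op_name itself contains no '.'.
    """
    parts = name.split(".")
    if parts[-1] != "npy" or len(parts) != 9:
        raise ValueError(f"unexpected dump file name: {name}")
    keys = ("op_type", "op_name", "task_id", "stream_id",
            "timestamp", "input_output", "slot", "format")
    return dict(zip(keys, parts[:-1]))
```

For the sample file Conv2d.Conv2D-op12.0.0.1623124369613540.output.0.DefaultFormat.npy, this yields op_name "Conv2D-op12", input_output "output", and slot "0".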
The constant data file generated by the synchronous Dump is in the same format as the data file, whereas {op_type}, {task_id}, {stream_id}, {input_output_index}, {slot}, and {format} are the same for all constant data. Note that non-Tensor types do not generate data files.
Parameter.data-{data_id}.0.0.{timestamp}.output.0.DefaultFormat.npy
Users can use the NumPy interface numpy.load to read the data.
The statistics file generated by the synchronous dump is named statistic.csv. This file stores key statistics for all tensors dumped in the same directory (whose file names follow the pattern {op_type}.{op_name}.{task_id}.{stream_id}.{timestamp}.{input_output_index}.{slot}.{format}.npy). Each row in statistic.csv summarizes a single tensor and contains the statistics: Op Type, Op Name, Task ID, Stream ID, Timestamp, IO, Slot, Data Size, Data Type, Shape, Max Value, Min Value, Avg Value, Count, Negative Zero Count, Positive Zero Count, NaN Count, Negative Inf Count, Positive Inf Count, Zero Count, MD5. Note that opening this file directly with Excel may cause data to be displayed incorrectly; please use commands like vi or cat, or import the CSV into Excel as text.
The suffixes of the final execution graph files generated by synchronous Dump are .pb
and .ir
respectively, and the file naming format is:
ms_output_trace_code_graph_{graph_id}.pb
ms_output_trace_code_graph_{graph_id}.ir
The files with the suffix .ir
can be opened and viewed by the vi
command.
The suffix of the node execution sequence file generated by the synchronous Dump is .csv
, and the file naming format is:
ms_execution_order_graph_{graph_id}.csv
The suffix of the graph execution history file is .csv
. The file naming format is:
ms_global_execution_order_graph_{graph_id}.csv
This file stores the list of iterations in which the graph was executed. After the graph is compiled, it may be split into multiple sub-graphs. Since sub-graphs share the same graph execution history with root graph, only root graph will generate an execution history file.
.dump_metadata
records the original training information, and data_dump.json
saves the dump configuration set by the user.
In order to better demonstrate the process of using dump to save and analyze data, we provide a complete set of sample scripts; you only need to execute bash dump_sync_dump.sh
for synchronous dump.
After the graph corresponding to the script is saved to the disk through the Dump function, the final execution graph file ms_output_trace_code_graph_{graph_id}.ir
will be generated. This file saves the stack information of each operator in the corresponding graph, and records the generation script corresponding to the operator.
Take AlexNet script as an example:
...
def conv(in_channels, out_channels, kernel_size, stride=1, padding=0, pad_mode="valid"):
weight = weight_variable()
return nn.Conv2d(in_channels, out_channels,
kernel_size=kernel_size, stride=stride, padding=padding,
weight_init=weight, has_bias=False, pad_mode=pad_mode)
def fc_with_initialize(input_channels, out_channels):
weight = weight_variable()
bias = weight_variable()
return nn.Dense(input_channels, out_channels, weight, bias)
def weight_variable():
return TruncatedNormal(0.02)
class AlexNet(nn.Cell):
"""
Alexnet
"""
def __init__(self, num_classes=10, channel=3):
super(AlexNet, self).__init__()
self.conv1 = conv(channel, 96, 11, stride=4)
self.conv2 = conv(96, 256, 5, pad_mode="same")
self.conv3 = conv(256, 384, 3, pad_mode="same")
self.conv4 = conv(384, 384, 3, pad_mode="same")
self.conv5 = conv(384, 256, 3, pad_mode="same")
self.relu = nn.ReLU()
self.max_pool2d = nn.MaxPool2d(kernel_size=3, stride=2)
self.flatten = nn.Flatten()
self.fc1 = fc_with_initialize(6 * 6 * 256, 4096)
self.fc2 = fc_with_initialize(4096, 4096)
self.fc3 = fc_with_initialize(4096, num_classes)
def construct(self, x):
"""
The construct function.
Args:
x(int): Input of the network.
Returns:
Tensor, the output of the network.
"""
x = self.conv1(x)
x = self.relu(x)
x = self.max_pool2d(x)
x = self.conv2(x)
x = self.relu(x)
x = self.max_pool2d(x)
x = self.conv3(x)
x = self.relu(x)
x = self.conv4(x)
x = self.relu(x)
x = self.conv5(x)
x = self.relu(x)
x = self.max_pool2d(x)
x = self.flatten(x)
x = self.fc1(x)
x = self.relu(x)
x = self.fc2(x)
x = self.relu(x)
x = self.fc3(x)
return x
...
If the user wants to view the code at line 175 in the script:
x = self.conv3(x)
After executing the network training, you can find multiple operator information corresponding to the line of code from the final execution graph (ms_output_trace_code_graph_{graph_id}.ir
file). The content of the file corresponding to Conv2D-op12 is as follows:
%20(equivoutput) = Conv2D(%17, %19) {instance name: conv2d} primitive_attrs: {IsFeatureMapInputList: (0), kernel_size: (3, 3), mode: 1, out_channel: 384, input_names: [
x, w], pri_format: NC1HWC0, pad: (0, 0, 0, 0), visited: true, pad_mod: same, format: NCHW, pad_list: (1, 1, 1, 1), precision_flag: reduce, groups: 1, output_used_num:
(1), stream_id: 0, stride: (1, 1, 1, 1), group: 1, dilation: (1, 1, 1, 1), output_names: [output], IsFeatureMapOutput: true, ms_function_graph: true}
: (<Tensor[Float32], (32, 256, 13, 13)>, <Tensor[Float32], (384, 256, 3, 3)>) -> (<Tensor[Float32], (32, 384, 13, 13)>)
: (<Float16xNC1HWC0[const vector][32, 16, 13, 13, 16]>, <Float16xFracZ[const vector][144, 24, 16, 16]>) -> (<Float32xNC1HWC0[const vector][32, 24, 13, 13, 16]>)
: full_name_with_scope: (Default/network-WithLossCell/_backbone-AlexNet/conv3-Conv2d/Conv2D-op12)
...
# In file ./tain_alexnet.py(175)/ x = self.conv3(x)/
...
The meanings of the lines in the file content shown above are as follows:
The input and output of the operator on the Host side (the first line) and on the Device side (the second line; some operators may not have it). It can be seen from the execution graph that the operator has two inputs (left of the arrow) and one output (right of the arrow).
: (<Tensor[Float32], (32, 256, 13, 13)>, <Tensor[Float32], (384, 256, 3, 3)>) -> (<Tensor[Float32], (32, 384, 13, 13)>)
: (<Float16xNC1HWC0[const vector][32, 16, 13, 13, 16]>, <Float16xFracZ[const vector][144, 24, 16, 16]>) -> (<Float32xNC1HWC0[const vector][32, 24, 13, 13, 16]>)
Operator name. It can be seen from the execution graph that the full name of the operator in the final execution graph is Default/network-WithLossCell/_backbone-AlexNet/conv3-Conv2d/Conv2D-op12
.
: (Default/network-WithLossCell/_backbone-AlexNet/conv3-Conv2d/Conv2D-op12)
The training script code corresponding to the operator. By searching the training script code to be queried, multiple matching operators can be found.
# In file {Absolute path of model_zoo}/official/cv/alexnet/src/alexnet.py(175)/ x = self.conv3(x)/
Through the operator name and the input/output information, you can find the unique corresponding Tensor data file. For example, if you want to view the dump file corresponding to the first output of the Conv2D-op12 operator, you can obtain the following information:
operator_name
: Conv2D-op12
.
input_output_index
: output.0
indicates that the file is the data of the first output Tensor of the operator.
slot
: 0, this tensor only has one slot.
Search for the corresponding file name in the data object file directory saved by Dump:
Conv2d.Conv2D-op12.0.0.1623124369613540.output.0.DefaultFormat.npy
.
To restore the data, execute:
import numpy
numpy.load("Conv2D.Conv2D-op12.0.0.1623124369613540.output.0.DefaultFormat.npy")
This generates the numpy.array data.
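Once loaded, the array can be checked for the overflow and vanishing-value symptoms discussed earlier. A minimal sketch using plain NumPy (the function name is illustrative, not a MindSpore API):

```python
import numpy as np

def tensor_health(path):
    """Load a dumped tensor and summarize statistics that typically
    signal overflow (NaN/Inf), exploding values (large max_abs), or
    vanishing values (high zero_ratio, tiny max_abs)."""
    data = np.load(path).astype(np.float64).ravel()
    finite = data[np.isfinite(data)]
    return {
        "nan_count": int(np.isnan(data).sum()),
        "inf_count": int(np.isinf(data).sum()),
        "max_abs": float(np.abs(finite).max()) if finite.size else 0.0,
        "zero_ratio": float((data == 0.0).mean()) if data.size else 0.0,
    }
```

Running this over the dump files of successive operators in execution order helps pinpoint where a NaN or Inf first appears.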
MindSpore provides debugging capabilities for large networks through asynchronous dumps on Ascend.
Create a configuration file data_dump.json.
The name and location of the JSON file can be customized.
{
"common_dump_settings": {
"dump_mode": 0,
"path": "/absolute_path",
"net_name": "ResNet50",
"iteration": "0|5-8|100-120",
"saved_data": "tensor",
"input_output": 0,
"kernels": ["Default/Conv-op12"],
"support_device": [0,1,2,3,4,5,6,7],
"op_debug_mode": 0,
"file_format": "npy"
}
}
dump_mode: 0: dump all operator data in the network; 1: dump the data of the kernels in the kernels list. When overflow detection is enabled, this field is ignored and Dump only saves the data of overflow nodes.
path: The absolute path to save Dump data. When the graph compilation level is O0, MindSpore creates a new subdirectory for each step under this path.
net_name: The customized net name, for example "ResNet50".
iteration: Specify the iterations to dump; the type is string. Use "|" to separate the step ranges to be saved. For example, "0|5-8|100-120" dumps the data of the 1st, 6th to 9th, and 101st to 121st steps. If iteration is set to "all", data of every iteration is dumped. When overflow detection is enabled in PyNative mode, it must be set to "all".
saved_data: Specify what data is to be dumped; the type is string. Use "tensor" to dump tensor data, "statistic" to dump tensor statistics, and "full" to dump both tensor data and statistics. The default is "tensor". Asynchronous statistics dump is only supported when file_format is set to npy; using "statistic" or "full" when file_format is set to bin results in an exception.
input_output: 0: dump both the input and output of the operator; 1: dump the input of the operator; 2: dump the output of the operator.
kernels: List of operator names. To specify operators, first set the environment variables for saving graph files, then obtain the operator names from the saved graph files. Please refer to the documentation on the Ascend Developer Zone for DUMP_GE_GRAPH, DUMP_GRAPH_LEVEL and DUMP_GRAPH_PATH, the environment variables for saving graph files.
support_device: Supported devices; the default is [0,1,2,3,4,5,6,7]. You can specify device ids to dump data of specific devices.
op_debug_mode: This attribute is used for operator overflow debugging. 0: disable the overflow check function; 1: enable AiCore overflow check; 2: enable Atomic overflow check; 3: enable all overflow checks; 4: enable the lightweight exception dump function. Set it to 0 when processing full Dump data; if it is not 0, only the data of overflow or exception operators is dumped.
file_format: Dump file type; either npy or bin. npy: data is dumped as npy files in host format. bin: data is dumped as protobuf files in device format and must be converted with the provided data analysis tool before parsing; please refer to Asynchronous Dump Data Analysis Sample for details. The default is bin.
Set Dump environment variable.
export MINDSPORE_DUMP_CONFIG=${Absolute path of data_dump.json}
If the path
field is not set or set to an empty string in the Dump configuration file, you also need to configure the environment variable MS_DIAGNOSTIC_DATA_PATH
.
export MS_DIAGNOSTIC_DATA_PATH=${yyy}
Then "$MS_DIAGNOSTIC_DATA_PATH/debug_dump" is regarded as path
. If the path
field in configuration file is not empty, it is still used as the path to save Dump data.
Note: in distributed scenarios, the Dump environment variable needs to be set before calling mindspore.communication.init.
Execute the training script to dump data.
You can set set_context(reserve_class_name_in_scope=False)
in your training script to avoid dump failures caused by file names that are too long.
Refer to Asynchronous Dump Data Analysis Sample to analyze the Dump data file.
Note:
If you want to dump the data of all or some operators, set the dump_mode option in the json configuration file to 0 or 1.
For communication operators (AllReduce, AllGather, ReduceScatter, Broadcast, NeighborExchange, NeighborExchange2, AlltoAll), because the input address is overwritten by the output when executed on the device, asynchronous dump cannot directly save their input data; it saves the output data of their input operators instead. You can view the input operators of a communication operator through the IR graph.
The Dump directory structure in graph mode is as follows:
{path}/
- {time}/
- {device_id}/
- {model_name}/
- {model_id}/
- {iteration_id}/
statistic.csv
{op_type}.{op_name}.{task_id}.{stream_id}.{timestamp}
Opdebug.Node_OpDebug.{task_id}.{stream_id}.{timestamp}
mapping.csv
When using MS_ACL_DUMP_CFG_PATH to enable ACL dump and the graph compilation level is not O0, the Dump directory structure is as follows; its main feature is the {step_id} directory, which represents the user-side training step id:
{path}/
- {step_id}/
- {time}/
- {device_id}/
- {model_id}/
- {iteration_id}/
statistic.csv
{op_type}.{op_name}.{task_id}.{stream_id}.{timestamp}
Opdebug.Node_OpDebug.{task_id}.{stream_id}.{timestamp}
mapping.csv
When using MS_ACL_DUMP_CFG_PATH to enable ACL dump and the graph compilation level is O0, the Dump directory structure is as follows; its main feature is that there are no {model_name} and {model_id} directories. In this scenario, the dump files for dynamic-shape operators are saved in the {iteration_id} directory and the dump files for static-shape operators are saved in the {device_id} directory:
{path}/
- {step_id}/
- {time}/
- {device_id}/
- {iteration_id}/
statistic.csv
{op_type}.{op_name}.{task_id}.{stream_id}.{timestamp}
Opdebug.Node_OpDebug.{task_id}.{stream_id}.{timestamp}
mapping.csv
path: the absolute path set in the data_dump.json configuration file.
device_id: the id of the device.
model_name: the model name generated by MindSpore.
model_id: the id of the model.
graph_id: the id of the training graph.
iteration_id: the iteration of the training.
op_type: the type of the operator.
op_name: the name of the operator.
task_id: the id of the task.
stream_id: the id of the stream.
timestamp: the timestamp.
step_id: the user-side training step id.
The overflow file (Opdebug.Node_OpDebug.{task_id}.{stream_id}.{timestamp}) is only saved when overflow dump is enabled and an overflow is detected.
If file_format is set to npy, the operator files are saved as npy files and the overflow file is saved as a json file. The file naming formats are:
{op_type}.{op_name}.{task_id}.{stream_id}.{timestamp}.{input_output_index}.{slot}.{format}.npy
Opdebug.Node_OpDebug.{task_id}.{stream_id}.{timestamp}.output.0.json
If the length of the tensor file name defined according to the naming rules exceeds the OS file name length limit (usually 255 characters), the tensor file will be renamed to a string of random numbers. The mapping relationship will be written to the file 'mapping.csv' in the same directory.
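When a tensor file has been renamed because of the file-name length limit, the original name can be recovered from mapping.csv. The exact column layout is not documented here, so this sketch assumes each row holds the renamed file followed by the original name (an illustrative helper, not a MindSpore API):

```python
import csv

def resolve_renamed(mapping_file, renamed):
    """Look up the original tensor file name for a renamed dump file.
    Assumes each mapping.csv row is: renamed_file, original_name."""
    with open(mapping_file, newline="") as f:
        for row in csv.reader(f):
            if len(row) >= 2 and row[0] == renamed:
                return row[1]
    return None  # not found in the mapping
```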
If file_format is set to npy, the file can be loaded with numpy.load.
If file_format is not configured, or is set to bin, the original data files generated by asynchronous Dump and the overflow files generated by overflow detection are in protobuf format. They need to be parsed using the data analysis tool that comes with the HiSilicon Run package. For details, please refer to How to view dump data files.
The data format on the Device side may differ from the definition in the computational graph on the Host side. The bin-file data of the asynchronous dump is in the Device-side format; if you want to convert it to the Host-side format, refer to How to convert dump data file format.
If the file is saved in bin
format, the file naming format is:
{op_type}.{op_name}.{task_id}.{stream_id}.{timestamp}
Take the Conv2D-op12 of the AlexNet network as an example: Conv2D.Default_network-WithLossCell__backbone-AlexNet_conv3-Conv2d_Conv2D-op12.2.7.161243956333802, where Conv2D is {op_type}, Default_network-WithLossCell__backbone-AlexNet_conv3-Conv2d_Conv2D-op12 is {op_name}, 2 is {task_id}, 7 is {stream_id}, and 161243956333802 is {timestamp}.
If ".", "/", "\", or spaces appear in op_type or op_name, they are converted to underscores.
The original data file generated by dump can also be parsed by using the data parsing tool DumpParser of MindSpore Insight. Please refer to DumpParser Introduction for the usage of DumpParser. The data format parsed by MindSpore Insight is exactly the same as that of synchronous dump.
If file_format is set to npy, the naming convention of data files generated by asynchronous dump is the same as that of synchronous dump; please refer to Introduction to Synchronous Dump Data File. The overflow file generated by overflow detection is in json format, and the content analysis of the overflow file can refer to Analyzing the Data File of an Overflow/Underflow Operator.
The saved_data
option only takes effect when file_format
is "npy". If saved_data
is "statistic" or "full", tensor statistics will be dumped in statistic.csv
. When saved_data
is "tensor" or "full", full tensor data will be dumped in {op_type}.{op_name}.{task_id}.{stream_id}.{timestamp}.{input_output_index}.{slot}.{format}.npy
. The format of the statistics file is the same as that of synchronous dump. Please refer to Introduction to Synchronous Dump Data File.
The constant dump file, final execution graph file and execution order file naming rules generated by asynchronous Dump are the same as that of synchronous Dump. You can refer to Introduction to Synchronous Dump Data File.
In order to better demonstrate the process of using dump to save and analyze data, we provide a complete set of sample scripts; you only need to execute bash run_async_dump.sh
for asynchronous dump.
Through the asynchronous Dump function, the data files generated by the operators' asynchronous Dump can be obtained. If file_format in the Dump configuration file is set to "npy", steps 1 and 2 below can be skipped. If file_format is not set, or is set to "bin", the tensor files need to be converted to .npy format.
Parse the dumped file using the msaccucmp.py provided in the Run package. The location of the msaccucmp.py file may differ between environments; you can find it with the find command:
find ${run_path} -name "msaccucmp.py"
run_path: the installation path of the Run package.
After finding msaccucmp.py, go to the /absolute_path directory and run the following command to parse the Dump data:
python ${The absolute path of msaccucmp.py} convert -d {file path of dump} -out {file path of output}
The {file path of dump} can be the path to a single .bin file, or a folder containing .bin files.
If you need to convert the data format, please refer to the user instructions link https://www.hiascend.com/document/detail/en/CANNCommunityEdition/600alphaX/developmenttools/devtool/atlasaccuracy_16_0077.html.
For example, the data file generated by Dump is:
Conv2D.Default_network-WithLossCell__backbone-AlexNet_conv3-Conv2d_Conv2D-op12.2.7.161243956333802
Then execute:
python3.7.5 msaccucmp.py convert -d /path/to/Conv2D.Default_network-WithLossCell__backbone-AlexNet_conv3-Conv2d_Conv2D-op12.2.7.161243956333802 -out ./output -f NCHW -t npy
All input and output data of this operator can be generated under ./output. Each piece of data is saved as a file with the .npy suffix in NCHW format. The result is as follows:
Conv2D.Default_network-WithLossCell__backbone-AlexNet_conv3-Conv2d_Conv2D-op12.2.7.161243956333802.input.0.32x256x13x13.npy
Conv2D.Default_network-WithLossCell__backbone-AlexNet_conv3-Conv2d_Conv2D-op12.2.7.161243956333802.input.1.384x256x3x3.npy
Conv2D.Default_network-WithLossCell__backbone-AlexNet_conv3-Conv2d_Conv2D-op12.2.7.161243956333802.output.0.32x384x13x13.npy
The end of the file name shows which input or output of the operator the file corresponds to, as well as the dimensions of the data. For example, from the first .npy file name
Conv2D.Default_network-WithLossCell__backbone-AlexNet_conv3-Conv2d_Conv2D-op12.2.7.161243956333802.input.0.32x256x13x13.npy
it can be seen that the file is the 0th input of the operator, and the dimensions of the data are 32x256x13x13.
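The IO direction, index, and shape can likewise be pulled out of such a converted file name programmatically. The small helper below is illustrative; it assumes the name ends with {input|output}.{index}.{shape}.npy as in the example above:

```python
def parse_converted_name(name):
    """Extract the IO direction, index and shape from a file name
    produced by 'msaccucmp.py convert', whose names end with
    '{input|output}.{index}.{dim0xdim1x...}.npy'."""
    parts = name[:-len(".npy")].split(".")
    shape = tuple(int(d) for d in parts[-1].split("x"))
    io, index = parts[-3], int(parts[-2])
    return io, index, shape
```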
The corresponding data can be read through numpy.load("file_name")
. For example:
import numpy
numpy.load("Conv2D.Default_network-WithLossCell__backbone-AlexNet_conv3-Conv2d_Conv2D-op12.2.7.161243956333802.input.0.32x256x13x13.npy")
Note: if data of type bfloat16 is saved to the npy file, it will be converted to type float32.