
Copyright (c) Microsoft Corporation. All rights reserved.

Licensed under the MIT License.

Tutorial #1: Train an image classification model with Azure Machine Learning

In this tutorial, you train a machine learning model on remote compute resources. You'll use the training and deployment workflow for Azure Machine Learning service (preview) in a Python Jupyter notebook. You can then use the notebook as a template to train your own machine learning model with your own data. This tutorial is part one of a two-part tutorial series.

This tutorial trains a simple logistic regression using the MNIST dataset and scikit-learn with Azure Machine Learning. MNIST is a popular dataset consisting of 70,000 grayscale images. Each image is a handwritten digit of 28x28 pixels, representing a number from 0 to 9. The goal is to create a multi-class classifier to identify the digit a given image represents.

Learn how to:

  • Set up your development environment
  • Access and examine the data
  • Train a simple logistic regression model on a remote cluster
  • Review training results, find and register the best model

You'll learn how to select a model and deploy it in part two of this tutorial.

Prerequisites

See prerequisites in the Azure Machine Learning documentation.

On the computer running this notebook, use conda to install matplotlib, numpy, and scikit-learn=0.22.1.
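If you'd like to verify the local environment before continuing, a quick check such as the one below (not part of the original tutorial) prints the installed versions:

import matplotlib
import numpy
import sklearn

# print the versions of the locally installed packages
print("matplotlib:", matplotlib.__version__)
print("numpy:", numpy.__version__)
print("scikit-learn:", sklearn.__version__)  # the tutorial expects 0.22.1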

Set up your development environment

All the setup for your development work can be accomplished in a Python notebook. Setup includes:

  • Importing Python packages
  • Connecting to a workspace to enable communication between your local computer and remote resources
  • Creating an experiment to track all your runs
  • Creating a remote compute target to use for training

Import packages

Import Python packages you need in this session. Also display the Azure Machine Learning SDK version.

%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt

import azureml.core
from azureml.core import Workspace

# check core SDK version number
print("Azure ML SDK Version: ", azureml.core.VERSION)

Connect to workspace

Create a workspace object from the existing workspace. Workspace.from_config() reads the file config.json and loads the details into an object named ws.

# load workspace configuration from the config.json file in the current folder.
ws = Workspace.from_config()
print(ws.name, ws.location, ws.resource_group, sep='\t')
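If a config.json file is not present in the current folder, one alternative (not shown in the original tutorial) is to connect with explicit values. The subscription ID, resource group, and workspace name below are placeholders that you would replace with your own:

# connect without a config.json by passing the workspace details explicitly
# (replace the placeholder values with your own)
from azureml.core import Workspace
ws = Workspace.get(name="<workspace-name>",
                   subscription_id="<subscription-id>",
                   resource_group="<resource-group>")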

Create experiment

Create an experiment to track the runs in your workspace. A workspace can have multiple experiments.

experiment_name = 'sklearn-mnist'

from azureml.core import Experiment
exp = Experiment(workspace=ws, name=experiment_name)

Create or attach an existing compute resource

By using Azure Machine Learning Compute, a managed service, data scientists can train machine learning models on clusters of Azure virtual machines, including VMs with GPU support. In this tutorial, you create Azure Machine Learning Compute as your training environment. You will submit Python code to run on this cluster later in the tutorial. The code below creates a compute cluster for you if one doesn't already exist in your workspace.

Creating the compute takes approximately 5 minutes. If an AmlCompute cluster with that name is already in your workspace, the code skips the creation process.

from azureml.core.compute import AmlCompute
from azureml.core.compute import ComputeTarget
import os

# choose a name for your cluster
compute_name = os.environ.get("AML_COMPUTE_CLUSTER_NAME", "cpu-cluster")
# environment variables are strings, so cast the node counts to int
compute_min_nodes = int(os.environ.get("AML_COMPUTE_CLUSTER_MIN_NODES", 0))
compute_max_nodes = int(os.environ.get("AML_COMPUTE_CLUSTER_MAX_NODES", 4))

# This example uses a CPU VM. To use a GPU VM, set the SKU to STANDARD_NC6
vm_size = os.environ.get("AML_COMPUTE_CLUSTER_SKU", "STANDARD_D2_V2")


if compute_name in ws.compute_targets:
    compute_target = ws.compute_targets[compute_name]
    if compute_target and type(compute_target) is AmlCompute:
        print("found compute target: " + compute_name)
else:
    print("creating new compute target...")
    provisioning_config = AmlCompute.provisioning_configuration(vm_size = vm_size,
                                                                min_nodes = compute_min_nodes, 
                                                                max_nodes = compute_max_nodes)

    # create the cluster
    compute_target = ComputeTarget.create(ws, compute_name, provisioning_config)
    
    # can poll for a minimum number of nodes and for a specific timeout. 
    # if no min node count is provided it will use the scale settings for the cluster
    compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)
    
    # For a more detailed view of current AmlCompute status, use get_status()
    print(compute_target.get_status().serialize())

You now have the necessary packages and compute resources to train a model in the cloud.

Explore data

Before you train a model, you need to understand the data that you are using to train it. In this section you learn how to:

  • Download the MNIST dataset
  • Display some sample images

Download the MNIST dataset

Use Azure Open Datasets to get the raw MNIST data files. Azure Open Datasets are curated public datasets that you can use to add scenario-specific features to machine learning solutions for more accurate models. Each dataset has a corresponding class, MNIST in this case, to retrieve the data in different ways.

This code retrieves the data as a FileDataset object, which is a subclass of Dataset. A FileDataset references one or multiple files of any format in your datastores or public URLs. The class provides you with the ability to download or mount the files to your compute by creating a reference to the data source location. Additionally, you register the Dataset to your workspace for easy retrieval during training.

Follow the how-to to learn more about Datasets and their usage in the SDK.

from azureml.core import Dataset
from azureml.opendatasets import MNIST

data_folder = os.path.join(os.getcwd(), 'data')
os.makedirs(data_folder, exist_ok=True)

mnist_file_dataset = MNIST.get_file_dataset()
mnist_file_dataset.download(data_folder, overwrite=True)

mnist_file_dataset = mnist_file_dataset.register(workspace=ws,
                                                 name='mnist_opendataset',
                                                 description='training and test dataset',
                                                 create_new_version=True)
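Once registered, the dataset can be retrieved by name in later sessions. A minimal sketch, assuming the registration above succeeded:

# retrieve the registered dataset by name (sketch for a later session)
registered_ds = Dataset.get_by_name(workspace=ws, name='mnist_opendataset')
print(registered_ds.name, registered_ds.version)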

Display some sample images

Load the compressed files into numpy arrays. Then use matplotlib to plot 30 random images from the dataset with their labels above them. Note that this step requires a load_data function that's included in a utils.py file. This file is included in the sample folder. Make sure it is placed in the same folder as this notebook. The load_data function simply parses the compressed files into numpy arrays.

# make sure utils.py is in the same directory as this code
from utils import load_data
import glob


# note we also shrink the intensity values (X) from 0-255 to 0-1. This helps the model converge faster.
X_train = load_data(glob.glob(os.path.join(data_folder,"**/train-images-idx3-ubyte.gz"), recursive=True)[0], False) / 255.0
X_test = load_data(glob.glob(os.path.join(data_folder,"**/t10k-images-idx3-ubyte.gz"), recursive=True)[0], False) / 255.0
y_train = load_data(glob.glob(os.path.join(data_folder,"**/train-labels-idx1-ubyte.gz"), recursive=True)[0], True).reshape(-1)
y_test = load_data(glob.glob(os.path.join(data_folder,"**/t10k-labels-idx1-ubyte.gz"), recursive=True)[0], True).reshape(-1)


# now let's show some randomly chosen images from the training set.
count = 0
sample_size = 30
plt.figure(figsize = (16, 6))
for i in np.random.permutation(X_train.shape[0])[:sample_size]:
    count = count + 1
    plt.subplot(1, sample_size, count)
    plt.axhline('')
    plt.axvline('')
    plt.text(x=10, y=-10, s=y_train[i], fontsize=18)
    plt.imshow(X_train[i].reshape(28, 28), cmap=plt.cm.Greys)
plt.show()

Train on a remote cluster

For this task, you submit the job to run on the remote training cluster you set up earlier. To submit a job you:

  • Create a directory
  • Create a training script
  • Create a script run configuration
  • Submit the job

Create a directory

Create a directory to deliver the necessary code from your computer to the remote resource.

import os
script_folder = os.path.join(os.getcwd(), "sklearn-mnist")
os.makedirs(script_folder, exist_ok=True)

Create a training script

To submit the job to the cluster, first create a training script. Run the following code to create the training script called train.py in the directory you just created.

%%writefile $script_folder/train.py

import argparse
import os
import numpy as np
import glob

from sklearn.linear_model import LogisticRegression
import joblib

from azureml.core import Run
from utils import load_data

# let user feed in 2 parameters, the dataset to mount or download, and the regularization rate of the logistic regression model
parser = argparse.ArgumentParser()
parser.add_argument('--data-folder', type=str, dest='data_folder', help='data folder mounting point')
parser.add_argument('--regularization', type=float, dest='reg', default=0.01, help='regularization rate')
args = parser.parse_args()

data_folder = args.data_folder
print('Data folder:', data_folder)

# load train and test set into numpy arrays
# note we scale the pixel intensity values to 0-1 (by dividing it with 255.0) so the model can converge faster.
X_train = load_data(glob.glob(os.path.join(data_folder, '**/train-images-idx3-ubyte.gz'), recursive=True)[0], False) / 255.0
X_test = load_data(glob.glob(os.path.join(data_folder, '**/t10k-images-idx3-ubyte.gz'), recursive=True)[0], False) / 255.0
y_train = load_data(glob.glob(os.path.join(data_folder, '**/train-labels-idx1-ubyte.gz'), recursive=True)[0], True).reshape(-1)
y_test = load_data(glob.glob(os.path.join(data_folder, '**/t10k-labels-idx1-ubyte.gz'), recursive=True)[0], True).reshape(-1)

print(X_train.shape, y_train.shape, X_test.shape, y_test.shape, sep = '\n')

# get hold of the current run
run = Run.get_context()

print('Train a logistic regression model with regularization rate of', args.reg)
clf = LogisticRegression(C=1.0/args.reg, solver="liblinear", multi_class="auto", random_state=42)
clf.fit(X_train, y_train)

print('Predict the test set')
y_hat = clf.predict(X_test)

# calculate accuracy on the prediction
acc = np.average(y_hat == y_test)
print('Accuracy is', acc)

run.log('regularization rate', float(args.reg))
run.log('accuracy', float(acc))

os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=clf, filename='outputs/sklearn_mnist_model.pkl')

Notice how the script gets data and saves models:

  • The training script reads an argument to find the directory containing the data. When you submit the job later, you point to the dataset for this argument: parser.add_argument('--data-folder', type=str, dest='data_folder', help='data folder mounting point')
  • The training script saves your model into a directory named outputs.
    joblib.dump(value=clf, filename='outputs/sklearn_mnist_model.pkl')
    Anything written in this directory is automatically uploaded into your workspace. You'll access your model from this directory later in the tutorial.

The file utils.py is referenced from the training script to load the dataset correctly. Copy this script into the script folder so that it can be accessed along with the training script on the remote resource.

import shutil
shutil.copy('utils.py', script_folder)

Configure the training job

Create a ScriptRunConfig object to specify the configuration details of your training job, including your training script, environment to use, and the compute target to run on. Configure the ScriptRunConfig by specifying:

  • The directory that contains your scripts. All the files in this directory are uploaded into the cluster nodes for execution.
  • The compute target. In this case, you use the AmlCompute cluster you created.
  • The training script name, train.py.
  • An environment that contains the libraries needed to run the script.
  • Arguments required from the training script.

In this tutorial, the target is AmlCompute. All files in the script folder are uploaded into the cluster nodes for execution. The data_folder is set to use the dataset.

First, create the environment that contains: the scikit-learn library, azureml-dataset-runtime required for accessing the dataset, and azureml-defaults which contains the dependencies for logging metrics. The azureml-defaults package also contains the dependencies required for deploying the model as a web service later in part 2 of the tutorial.

Once the environment is defined, register it with the Workspace to re-use it in part 2 of the tutorial.

from azureml.core.environment import Environment
from azureml.core.conda_dependencies import CondaDependencies

# to install required packages
env = Environment('tutorial-env')
cd = CondaDependencies.create(pip_packages=['azureml-dataset-runtime[pandas,fuse]', 'azureml-defaults'], conda_packages = ['scikit-learn==0.22.1'])

env.python.conda_dependencies = cd

# Register environment to re-use later
env.register(workspace = ws)

Then, create the ScriptRunConfig by specifying the training script, compute target and environment.

from azureml.core import ScriptRunConfig

args = ['--data-folder', mnist_file_dataset.as_mount(), '--regularization', 0.5]

src = ScriptRunConfig(source_directory=script_folder,
                      script='train.py', 
                      arguments=args,
                      compute_target=compute_target,
                      environment=env)

Submit the job to the cluster

Run the experiment by submitting the ScriptRunConfig object. You can then navigate to the Azure portal to monitor the run.

run = exp.submit(config=src)
run

Since the call is asynchronous, it returns a Preparing or Running state as soon as the job is started.
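If you want to check the state programmatically at any point, a quick sketch:

# optional: query the current state of the run (e.g. Preparing, Running, Completed)
print(run.get_status())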

Monitor a remote run

In total, the first run takes approximately 10 minutes. But for subsequent runs, as long as the dependencies in the Azure ML environment don't change, the same image is reused, so the container startup time is much faster.

Here is what's happening while you wait:

  • Image creation: A Docker image is created matching the Python environment specified by the Azure ML environment. The image is built and stored in the ACR (Azure Container Registry) associated with your workspace. Image creation and uploading takes about 5 minutes.

    This stage happens once for each Python environment since the container is cached for subsequent runs. During image creation, logs are streamed to the run history. You can monitor the image creation progress using these logs.

  • Scaling: If the remote cluster requires more nodes to execute the run than currently available, additional nodes are added automatically. Scaling typically takes about 5 minutes.

  • Running: In this stage, the necessary scripts and files are sent to the compute target, then data stores are mounted/copied, then the entry_script is run. While the job is running, stdout and the files in the ./logs directory are streamed to the run history. You can monitor the run's progress using these logs.

  • Post-Processing: The ./outputs directory of the run is copied over to the run history in your workspace so you can access these results.

You can check the progress of a running job in multiple ways. This tutorial uses a Jupyter widget as well as a wait_for_completion method.

Jupyter widget

Watch the progress of the run with a Jupyter widget. Like the run submission, the widget is asynchronous and provides live updates every 10-15 seconds until the job completes.

from azureml.widgets import RunDetails
RunDetails(run).show()

If you need to cancel a run, you can follow the instructions in the Azure Machine Learning documentation.
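A run submitted through the SDK can also be canceled programmatically. The snippet below is a sketch, guarded so it doesn't stop your job accidentally while you follow the tutorial:

# optional: cancel the run from the SDK; flip the flag only if you want to stop the job
cancel_run = False
if cancel_run:
    run.cancel()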

Get log results upon completion

Model training happens in the background. You can use wait_for_completion to block and wait until the model has completed training before running more code.

# specify show_output to True for a verbose log
run.wait_for_completion(show_output=True) 

Display run results

You now have a model trained on a remote cluster. Retrieve all the metrics logged during the run, including the accuracy of the model:

print(run.get_metrics())

In the next tutorial you will explore this model in more detail.

Register model

The last step in the training script wrote the file outputs/sklearn_mnist_model.pkl to a directory named outputs on the VM of the cluster where the job executed. outputs is a special directory: all content in it is automatically uploaded to your workspace. This content appears in the run record in the experiment under your workspace. Hence, the model file is now also available in your workspace.

You can see files associated with that run.

print(run.get_file_names())

Register the model in the workspace so that you (or other collaborators) can later query, examine, and deploy this model.

# register model 
model = run.register_model(model_name='sklearn_mnist', model_path='outputs/sklearn_mnist_model.pkl')
print(model.name, model.id, model.version, sep='\t')
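In a later session, for example in part two of the tutorial, the registered model can be retrieved by name. A minimal sketch, assuming the registration above succeeded:

# retrieve the latest registered version of the model by name (sketch)
from azureml.core.model import Model
retrieved_model = Model(workspace=ws, name='sklearn_mnist')
print(retrieved_model.name, retrieved_model.version)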

Next steps

In this Azure Machine Learning tutorial, you used Python to:

  • Set up your development environment
  • Access and examine the data
  • Train multiple models on a remote cluster using the popular scikit-learn machine learning library
  • Review training details and register the best model

You are ready to deploy this registered model using the instructions in the next part of the tutorial series:

Tutorial 2 - Deploy models
