Part 2: Machine learning model deployment on Microsoft Azure

Read Time: 3 min

In the previous post, we learned how to design a machine learning model in an Azure notebook, how to create a workspace on Azure, and how to register the trained model on the cloud. "Part 2: Machine learning model deployment on Microsoft Azure" is a continuation of part 1, where we will learn how to deploy the model on an Azure Container Instance.

Process followed for deployment

  • Register the trained model
  • Create python script for initialization and run
  • Select the target where to deploy the model
  • Test and validate the model

Register the model

To start with deployment, you first need to register the model within the workspace. You can find this in more detail in the previous post.

from azureml.core.model import Model

model = Model.register(workspace=ws_new,
                       model_path="./model_fd1.pkl",
                       model_name="frauddetection")

Once it's registered, the model is visible under the Models section of the workspace on the Azure portal.

Once the model is registered to the workspace, we need to prepare three things for deployment:

  • Scoring Script
  • Dependencies
  • Configuration information

Scoring script

The scoring script is a Python file that defines an initialization function and a run function. The file itself can have any name.


init() loads the registered model from the workspace.


run() receives the request data, runs prediction, and returns the result.

import json
import joblib
from azureml.core.model import Model

def init():
    global model
    model_path = Model.get_model_path(model_name='frauddetection')
    model = joblib.load(model_path)

def run(raw_data):
    # The service passes the request body to run() as a JSON string.
    data = json.loads(raw_data)['data']
    result = model.predict(data)
    return result.tolist()
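Before deploying, the init()/run() pattern can be smoke-tested locally. A minimal sketch, using the standard library's pickle and a stand-in DummyModel in place of the registered joblib artifact (both are assumptions for illustration only):

```python
import json
import os
import pickle
import tempfile

class DummyModel:
    """Stand-in for the trained fraud model; predicts class 0 for every row."""
    def predict(self, data):
        return [0 for _ in data]

# Simulate the registered .pkl artifact on disk.
model_path = os.path.join(tempfile.gettempdir(), "model_fd1.pkl")
with open(model_path, "wb") as f:
    pickle.dump(DummyModel(), f)

def init():
    # In the deployed container, Model.get_model_path() resolves this path.
    global model
    with open(model_path, "rb") as f:
        model = pickle.load(f)

def run(raw_data):
    # The web service hands run() the request body as a JSON string.
    data = json.loads(raw_data)["data"]
    return model.predict(data)

init()
sample = json.dumps({"data": [[3160041896, 185.5, 4823, 1, 5, 0, 0, 0, 0, 0]]})
print(run(sample))  # → [0]
```

If this round-trip works locally, the same two functions can be dropped into the scoring script unchanged (with joblib and Model.get_model_path restored).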

Set dependencies

Dependencies are the libraries (and their versions) the model needs at inference time. From these dependencies we create a .yml file that is fed to the Azure container instance.

For example, this project uses the following libraries:

from azureml.core.conda_dependencies import CondaDependencies

dependencies = CondaDependencies()
dependencies.add_conda_package("scikit-learn")

# Save the environment specification to myenv.yml for the image build step.
dependencies.save_to_file(".", "myenv.yml")
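The resulting myenv.yml is a standard conda environment file. Its exact contents depend on the SDK version, but it looks roughly like this (an illustrative sketch, not verbatim output):

```yaml
name: project_environment
dependencies:
  - python=3.6
  - scikit-learn
  - pip:
      - azureml-defaults
```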


Deploy as web service

First, let's understand what a web service is.

A web service is self-contained software exposed over the network so that other applications and products can consume it. For this project we will use an Azure web service so that we can run inference against the model at any time, from any location.

To create the web service, first set the deployment configuration:

from azureml.core.webservice import AciWebservice, Webservice
from azureml.core.image import ContainerImage
aci_config = AciWebservice.deploy_configuration(cpu_cores = 1, memory_gb = 1)

Create container image

A container image is an immutable file that packages executable code. To create the container image, you pass the environment dependencies and the scoring script as arguments.

# Build a container image
image_config = ContainerImage.image_configuration(execution_script = "score.py",  # filename of the scoring script above
                                                  runtime = "python",
                                                  conda_file = "myenv.yml")

Deploy the service

Finally deploy the web service as follows:

service = Webservice.deploy_from_model(name = "frauddetection1",
                                        deployment_config = aci_config,
                                        models = [model],
                                        image_config = image_config,
                                        workspace = ws_new)

# The service deployment can take several minutes: wait for completion.
service.wait_for_deployment(show_output = True)

Once the service is created, the output shows the deployment progress and finishes with a Succeeded status when the service is healthy.

Consume web service

To consume the web service means to use it for prediction, i.e. to score new data.

Finally, you can test the service by feeding it sample data:

import json

sample = json.dumps({"data": [[3160041896, 185.5, 4823, 1, 5, 0, 0, 0, 0, 0]]})
result = service.run(input_data=sample)
print(result)
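Besides service.run(), every ACI web service also exposes an HTTP endpoint at service.scoring_uri, so any client can call it over REST. A minimal sketch of building that request with only the standard library; the URI below is a placeholder for your service's actual scoring URI:

```python
import json
import urllib.request

def build_scoring_request(scoring_uri, payload):
    """Build a POST request carrying the JSON payload the service expects."""
    body = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        scoring_uri,
        data=body,
        headers={"Content-Type": "application/json"},
    )

req = build_scoring_request(
    "http://example.invalid/score",  # placeholder: use service.scoring_uri
    {"data": [[3160041896, 185.5, 4823, 1, 5, 0, 0, 0, 0, 0]]},
)
# result = json.loads(urllib.request.urlopen(req).read())  # run against a live service
```

Because the request carries the same JSON body as service.run(), the deployed scoring script handles both paths identically.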

This is how machine learning model deployment is done on Microsoft Azure.