Suppose you have a Machine Learning project and you want to show it to a friend. How would you do it? You want to build something through which your friend can understand it better. Obviously, you cannot share your Jupyter notebook with end-users and ask them to simply run it to get the output.

Model building is not the only step in the Machine Learning Lifecycle. The end goal is to put the model in production so that anyone can use it.

This is where model deployment comes into the picture. It is probably the most essential step in the Machine Learning lifecycle.

But sadly, not many people talk about model deployment; most talk only about model building. That’s why we decided to focus on model deployment in this blog.

In this blog, we will discuss what model deployment is, what it entails, and, most importantly, how to create APIs using Flask. There are various ways to deploy your models, but today we will learn to deploy Machine Learning models using Flask.

Didn’t understand any of that?

No worries! We are going to explain everything in detail.

So, what can you expect from this blog?

By the end of this blog, you will be able to deploy your models using Flask. You will create a UI for the project where users can enter values and get the prediction in a clean format. Does that sound interesting? I hope so.

Two years ago not many people spoke about inference and the talk was mostly around how we developed models. But now we have crossed the early adoption stage and are talking about democratising AI. And that’s where inference is taking the centre stage. In many cases, target deployment is not as much as we see in the training environment.

Sunil Kumar Vuppala, Principal Scientist at Philips Research

Table of Contents

  • What is Model Deployment?
  • Flask framework
  • Creating a Conda Environment
  • Installing Flask
  • Problem Statement
  • Saving the pickle file of the trained model
  • Creating a Webpage
  • Creating the main Flask application
  • Let’s see the final Webpage
  • Conclusion

What is Model Deployment?

We have understood the problem statement, preprocessed the data, and also created machine learning models. Now, what next?

We want to make our model accessible to end-users as well. Right? Here comes the Model Deployment part.

Model deployment is the process of integrating a machine learning model into an existing production environment where it can take in an input and return an output.


What is the Flask Framework?

Flask is a micro web framework written in Python. It provides you with tools, libraries, and technologies that allow you to build a web application.

What are the features of Flask?

  • Integrated support for unit testing.
  • RESTful request dispatching.
  • Uses the Jinja2 template engine.
  • Support for secure cookies (client-side sessions).
  • Extensive documentation.
  • Google App Engine compatibility.
  • Nicely shaped, coherent APIs.
  • Easily deployable in production.

Let us get started with Flask and integrate our Machine Learning model with it.

Step 1: Creating a Conda Environment

Why is a new Conda environment needed?

A virtual environment is a tool that keeps the dependencies required by different projects separate by creating an isolated space for each project’s dependencies.

Suppose one project requires TensorFlow 2.6 while another requires TensorFlow 2.7. You cannot install both in the same base environment, which is why creating a separate virtual environment for each project is important.

Command to create a new Conda environment

Open your Anaconda Prompt or command prompt and type the following command to create a new Conda environment.

conda create -n myenv python=3.8

Activate the newly created Conda environment:

conda activate myenv

Step 2: Installing Flask

 pip install Flask
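To quickly check that the installation worked, you can run a minimal Flask app. Here is a small sketch; the file name hello.py is just an example:

from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    # A single route that confirms the server is up
    return "Flask is working!"

if __name__ == "__main__":
    # Start the local development server
    app.run(debug=True)

Save it as hello.py, run python hello.py, and open http://127.0.0.1:5000/ in your browser.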

Let’s deploy a use case with the help of Flask


For this blog, we will be working on Customer Churn prediction.

This project aims to predict whether a customer will leave the bank, based on various factors.

The factors are:

  • Credit Score
  • Geography
  • Gender
  • Age
  • Tenure
  • Balance
  • NumOfProducts
  • HasCrCard
  • IsActiveMember
  • Estimated Salary

You can download the dataset from the link given below:

Step 3: Saving the pickle file of the trained model

We have created the prediction model using a Random Forest classifier. Then, we created a pickle file of the trained model.

Pickle in Python is primarily used for serializing and deserializing a Python object structure. In other words, it converts a Python object into a byte stream so it can be stored in a file or database, used to maintain program state across sessions, or transported over a network.
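The exact training code lives in your notebook, but a minimal sketch of training the Random Forest and pickling it could look like this. The file name Churn_Modelling.csv, the column names, and the Geography/Gender encodings are assumptions based on the factors listed above; adjust them to your dataset.

import pickle
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Load the churn dataset (file name assumed)
df = pd.read_csv("Churn_Modelling.csv")

# Encode the categorical columns as numbers before training (assumed mapping)
df["Geography"] = df["Geography"].map({"France": 0, "Germany": 1, "Spain": 2})
df["Gender"] = df["Gender"].map({"Female": 0, "Male": 1})

# Features and target (column names assumed from the factors listed above)
features = ["CreditScore", "Geography", "Gender", "Age", "Tenure", "Balance",
            "NumOfProducts", "HasCrCard", "IsActiveMember", "EstimatedSalary"]
X = df[features]
y = df["Exited"]

# Train the Random Forest classifier
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X, y)

# Save the trained model as a pickle file
with open("model.pkl", "wb") as f:
    pickle.dump(model, f)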

Step 4: Creating a Webpage

Now, to create an interface where users can interact with our Machine Learning model, we will build a webpage using HTML, CSS, and Bootstrap.

Here, we have created an HTML form that takes the required inputs from the user, namely Credit Score, Age, Tenure, Account Balance, Number of Products, credit card status, active-member status, salary, location, and gender. The form then sends the values entered by the user to the predict route.

A POST route is used when the user wants to send data to the server, for example when submitting a form.

After receiving the values, the application will return the output at the bottom of the web page.
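The exact markup depends on your design, but a bare-bones version of the form (without the CSS and Bootstrap styling) might look like the sketch below. The field names and the /predict route are assumptions and must match what app.py expects; {{ prediction_text }} is the Jinja2 placeholder where the result will appear.

<!-- templates/index.html : a minimal form that posts the inputs to the /predict route -->
<form action="/predict" method="POST">
  <input type="number" name="CreditScore" placeholder="Credit Score" required>
  <input type="number" name="Age" placeholder="Age" required>
  <input type="number" name="Tenure" placeholder="Tenure" required>
  <input type="number" step="any" name="Balance" placeholder="Account Balance" required>
  <input type="number" name="NumOfProducts" placeholder="Number of Products" required>
  <select name="HasCrCard"><option value="1">Has credit card</option><option value="0">No credit card</option></select>
  <select name="IsActiveMember"><option value="1">Active member</option><option value="0">Not active</option></select>
  <input type="number" step="any" name="EstimatedSalary" placeholder="Estimated Salary" required>
  <select name="Geography"><option>France</option><option>Germany</option><option>Spain</option></select>
  <select name="Gender"><option>Male</option><option>Female</option></select>
  <button type="submit">Predict</button>
</form>
<!-- The prediction is rendered here at the bottom of the page -->
<p>{{ prediction_text }}</p>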

Step 5: Creating the Main Flask Application

Now, we will create the main Flask application, which will get the inputs from the webpage and render the output on the screen. It will also serve the website on the local server.

First of all, create a new file “app.py”.

We are first importing the required libraries.

render_template -> renders the specified HTML template

request -> gives access to the data of the incoming HTTP request, such as the values submitted through a form

jsonify -> serializes data to JavaScript Object Notation (JSON) format and wraps it in a Response object with the application/json mimetype

pickle -> used for serializing and de-serializing a Python object structure

Using Flask(__name__), we instantiate a Flask application object.

Finally, we load the pickled model.
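Putting these pieces together, the top of app.py might look like the following sketch; the pickle file name model.pkl is an assumption and should match whatever you saved in Step 3.

from flask import Flask, render_template, request, jsonify
import pickle

# Instantiate the Flask application object
app = Flask(__name__)

# Load the trained model saved in Step 3 (file name assumed)
model = pickle.load(open("model.pkl", "rb"))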

Next, we will create the home page route.
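As a sketch, this route simply renders the HTML page created in Step 4 (assumed to be saved as templates/index.html):

@app.route("/")
def home():
    # Render the form page created in Step 4
    return render_template("index.html")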

Now comes the main logic of the application. We will create the route that gets the inputs from the user and renders the output.

In this route, we first read the inputs entered by the user using the request object. Then we pass these values to the model’s predict method to get the output.

Finally, we render the output on the web page.
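Continuing in app.py, a sketch of this route could look like the following. The form field names and the Geography/Gender encodings mirror the assumptions made in the earlier sketches, and the feature order must match the order used during training.

@app.route("/predict", methods=["POST"])
def predict():
    # Read the values submitted through the HTML form
    credit_score = float(request.form["CreditScore"])
    age = float(request.form["Age"])
    tenure = float(request.form["Tenure"])
    balance = float(request.form["Balance"])
    num_products = float(request.form["NumOfProducts"])
    has_cr_card = float(request.form["HasCrCard"])
    is_active = float(request.form["IsActiveMember"])
    salary = float(request.form["EstimatedSalary"])

    # Encode the categorical inputs the same way as during training (assumed mapping)
    geography = {"France": 0, "Germany": 1, "Spain": 2}[request.form["Geography"]]
    gender = {"Female": 0, "Male": 1}[request.form["Gender"]]

    # Feature order must match the order used when training the model
    row = [[credit_score, geography, gender, age, tenure, balance,
            num_products, has_cr_card, is_active, salary]]
    prediction = model.predict(row)[0]

    # Render the result at the bottom of the same page
    result = "The customer is likely to leave the bank." if prediction == 1 else "The customer is likely to stay."
    return render_template("index.html", prediction_text=result)


if __name__ == "__main__":
    # Start the local development server
    app.run(debug=True)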

Step 6: Let’s test the output on the webpage

Now comes the final moment. Let’s see the final output.
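Assuming the application file is named app.py, start the development server and open the page in your browser:

python app.py

Then go to http://127.0.0.1:5000/, fill in the form, and hit Predict to see the result rendered at the bottom of the page.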

Conclusion

This is how you deploy ML models with Flask. Wasn’t that simple? You can easily do the same and bring your models to life, making them accessible to anyone in the world.

Want to learn with us live? Do you like learning along with practising?

We are organizing an end-to-end Machine Learning Bootcamp where you will learn deployment using Flask, database integration, and Docker.

You can find more details about the Bootcamp below: