Deployment of an ML Model Using FastAPI

Flask vs FastAPI vs Django?

Both Django and Flask are great frameworks, no doubt, but FastAPI is making rapid progress.

Django comes in handy when a service depends on a database, needs a simple admin interface, and perhaps requires a nice web GUI. All of that comes out of the box with Django, thanks to its amazing ORM, admin app, and template engine.

When a simple microservice that exposes a couple of API endpoints is needed, Flask shines. Personally speaking, I have used Flask extensively for deploying ML models.

However, when it comes to RESTful microservices, neither Flask nor Django lives up to expectations on performance and development speed. This is where FastAPI beats the other two.

Why is FastAPI better than Flask?

The reasoning is pretty straightforward: Flask uses WSGI, whereas FastAPI uses ASGI. For those who don't know WSGI and ASGI, let me explain them briefly.

WSGI stands for Web Server Gateway Interface. With WSGI, you define your application as a callable that takes two arguments: the first, environ, describes the request and the environment the server is running in; the second, start_response, is a synchronous callable which you call to start the response before yielding the body.

WSGI has no official way to deal with WebSockets (wsgi.websocket is an unofficial workaround, though). WSGI also can't work with HTTP/2, and we can't use async or await with it.
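The two-argument interface described above can be sketched as a minimal WSGI application (the function name and response text are just illustrative):

```python
# A minimal WSGI application: a callable taking `environ` (a dict
# describing the request and server environment) and `start_response`
# (a synchronous callable used to begin the response). The returned
# iterable yields the response body.
def simple_app(environ, start_response):
    status = "200 OK"
    headers = [("Content-Type", "text/plain")]
    start_response(status, headers)
    return [b"Hello from WSGI"]
```

Such a callable can be served by any WSGI server, for example the standard library's wsgiref.simple_server.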

ASGI stands for Asynchronous Server Gateway Interface. In ASGI, too, you define your application as a callable, but this one is asynchronous by default.

ASGI is the successor of the successful WSGI. ASGI's goal is to remain the standard compatibility layer between web servers, frameworks, and applications, as WSGI is, but for asynchronous Python. An ASGI application takes three arguments: scope, which is similar to environ in WSGI and describes the specific connection, and receive and send, two asynchronous callables through which the application receives and sends event messages. This allows multiple incoming and outgoing events for each application.

The main advantage is that ASGI allows background coroutines, so the application is able to do other things, such as listening for events.

In short WSGI is synchronous whereas ASGI is asynchronous.

Some of the advantages offered by FastAPI:

FastAPI is a modern, fast (high-performance) web framework for building APIs with Python 3.6+ based on standard Python type hints.

The key features are: very high performance (on par with NodeJS and Go), faster development with fewer bugs, great editor support thanks to type hints, automatic interactive documentation, and standards-based design built on OpenAPI and JSON Schema.

So let's get started and build our ML model.

The dataset we are using is Bank Note Authentication. You can download it from the link.

The data has four independent features (variance, skewness, kurtosis, and entropy) and one dependent feature, the class label (bank note authentic or not).

Dataset

Using a standard train-test split, we divide the dataset into training and test sets. Note that I am not performing any EDA, as the purpose of this blog is deployment using FastAPI.

I have not normalised the data, since I will be using a random forest to build the model. It is robust to outliers and copes reasonably well with class imbalance.

After building the model, we store it in a pickle file.

After this we need to create two files: requirements.txt and a Procfile. The contents of the Procfile are as follows:

web: gunicorn -w 4 -k uvicorn.workers.UvicornWorker app:app

This declares the process as a web app, with Gunicorn running 4 workers.

You can use Gunicorn to manage Uvicorn and run multiple of these concurrent processes. That way, you get the best of concurrency and parallelism.

For requirements.txt, we need fastapi, uvicorn, and gunicorn installed.

pip install fastapi uvicorn gunicorn
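A minimal requirements.txt for this setup could look like the following sketch (version pins omitted; scikit-learn, pandas, and numpy are included because the app unpickles and runs the model):

```
fastapi
uvicorn
gunicorn
scikit-learn
pandas
numpy
```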

Now let's go through the app.py file and look at each step in detail.

Step 1 Library imports

import uvicorn
from fastapi import FastAPI
from BankNotes import BankNote
import numpy as np
import pickle
import pandas as pd

After making sure that all the packages are installed, we can move to the next step.
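Note the `from BankNotes import BankNote` line: the post never shows BankNotes.py, but it must define a Pydantic model describing the request body. A minimal version could look like this (field names are assumed to match the four dataset features):

```python
# BankNotes.py - a Pydantic model describing the four input features,
# so FastAPI can validate and document the request body automatically.
from pydantic import BaseModel


class BankNote(BaseModel):
    variance: float
    skewness: float
    curtosis: float
    entropy: float
```

Declaring the schema this way is what gives FastAPI its automatic request validation and the typed fields you'll later see in the /docs UI.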

Step 2 Creating the FASTAPI object

app = FastAPI()
with open("classifier.pkl", "rb") as pickle_in:
    classifier = pickle.load(pickle_in)

Step 3 Defining the routes through the created object

Step 4 Defining the port no for the app

Step 5 Command to run the app

uvicorn app:app --reload

You can run this command in your terminal. The point to note is that the first app refers to the filename (app.py), whereas the second app refers to the FastAPI object we created in step 2.

Now that we have the model running on our system, we can deploy it to a cloud platform. The platform I am using for this blog is Heroku: very easy and simple to use.

Step 6 Once you have logged into your Heroku account, you should be able to create a new app.

Logging into your account

Step 7 Next, create a new app for your project. Choose a unique name.

Unique name to app

Next, upload all your project files to GitHub, connect Heroku to your repository, and click deploy on the main branch. Once all dependencies are installed, you will have your app running at the desired URL; append /docs to it to see the UI.

Connecting with Github

NOTE: Make sure you have requirements.txt and the Procfile in your repository.

UI of app

We can clearly see what parameters our ML model takes, along with their data types.

data type of features
trying out the ML model

With the UI, we can easily see the response to our request.

Response

To see live demo head over to this link

For all the code used in the blog, you can refer to my repository.

I hope that after reading this blog you can go about deploying your ML model with the help of FastAPI. Happy learning :)

ML engineer | Data scientist