Model Deployment with FastAPI, Azure, and Docker | by Sabrine Bendimerad | Sep, 2024


A Complete Guide to Serving a Machine Learning Model with FastAPI

(Image source: pixabay.com)

Welcome to the third article in my MLOps series. In the first article, we explored Docker and how it simplifies application packaging. In the second article, we managed machine learning models using MLflow, Azure, and Docker. Now we'll bring everything together by building a FastAPI application that serves the model we previously stored on Azure. This lets us expose a prediction service that can be reached from anywhere.
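To preview where we're headed, here is a minimal sketch of such a service. It assumes the model was registered with MLflow, as in the previous article; the model name, version, and input schema below are placeholders, not the actual ones we will use.

```python
# app.py -- minimal sketch of a FastAPI prediction service (illustrative only)
import mlflow.pyfunc
import pandas as pd
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Prediction Service")

# Hypothetical registered-model URI; replace with your own model name/version
MODEL_URI = "models:/my_model/1"
model = mlflow.pyfunc.load_model(MODEL_URI)

class PredictionRequest(BaseModel):
    features: list[float]  # hypothetical input schema

@app.post("/predict")
def predict(request: PredictionRequest):
    # Wrap the input in a DataFrame and return the prediction as JSON
    data = pd.DataFrame([request.features])
    prediction = model.predict(data)
    return {"prediction": prediction.tolist()}
```

We'll build the real version of this step by step in the rest of the article.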

An API is like a bridge: whenever you interact with a library in Python, you're using its API. It's the public part of an application that you can interact with, while the implementation behind it stays hidden.
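For example, calling a function from Python's built-in json module means using that module's public API; the serialization logic behind it is hidden from you (the data here is just a made-up example):

```python
import json

# json.dumps() is part of the json module's public API;
# how the serialization works internally is hidden from us.
print(json.dumps({"model": "my_model", "version": 1}))
```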

APIs are commonly used to communicate with web applications, and they provide a set of URLs that return data (you send a request with some parameters and get a response back). Most often, the data comes back in formats like JSON or XML, which are easy to parse. This is different from websites that return HTML, which carries markup for rendering pages in a browser. With APIs, you get just the raw data.
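As a quick illustration (using GitHub's public search API purely as an example, not a service we rely on later), a request with parameters comes back as raw JSON rather than HTML:

```python
import requests

# Send a GET request with query parameters and read the JSON response
response = requests.get(
    "https://api.github.com/search/repositories",
    params={"q": "fastapi", "per_page": 1},
)
data = response.json()  # raw data, no HTML to parse
print(data["total_count"])
```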

Some APIs are public, while others are private. When building an API, you decide what data to share, how to…
