Demo project for deployment of ML models using FastAPI.
- Python 3.x
- pip (Python package manager)
- Create a Python virtual environment.
python3 -m venv model_api_env
- Activate newly created venv.
source model_api_env/bin/activate
- Install FastAPI and the Uvicorn ASGI server (Uvicorn is used to run the app in a later step).
pip install fastapi uvicorn
- Edit Inbound rules in cloud environment and enable a port for ModelAPI deployment.
- Allow inbound traffic in the OS firewall.
sudo ufw allow <PORT>
- Clone ModelAPI.
git clone https://github.com/pramit-d/ModelAPI
- Install all the required Python packages.
cd ModelAPI
pip install -r requirements.txt
- Run ModelAPI.
uvicorn main:app --host 0.0.0.0 --port <PORT>
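For orientation, a minimal main.py behind `uvicorn main:app` for this kind of image-classification service might look like the sketch below. This is an assumption-laden sketch, not the repository's actual code: the endpoint path, the `predict_image` name, the 32x32 input size, and the preprocessing are all guesses; only `model = models.load_model("image.h5")` and `class_names` are mentioned elsewhere in this README.

```python
# Hypothetical sketch of an image-classification API; the real main.py may differ.
import io

import numpy as np
from fastapi import FastAPI, File, UploadFile
from PIL import Image
from tensorflow.keras import models

app = FastAPI()
model = models.load_model("image.h5")  # pre-trained model shipped with the repo
class_names = ["airplane", "automobile", "bird", "cat", "deer",
               "dog", "frog", "horse", "ship", "truck"]

@app.post("/predict")
async def predict_image(file: UploadFile = File(...)):
    # Read the upload, resize to the model's assumed input size (32x32),
    # scale pixels to [0, 1], and add a batch dimension.
    data = await file.read()
    img = Image.open(io.BytesIO(data)).convert("RGB").resize((32, 32))
    x = np.expand_dims(np.asarray(img, dtype="float32") / 255.0, axis=0)
    probs = model.predict(x)[0]
    return {"prediction": class_names[int(np.argmax(probs))]}
```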
- Access ModelAPI.
<VM_IP_ADDRESS>:<PORT>
- Open the interactive API docs.
<VM_IP_ADDRESS>:<PORT>/docs
You will see the automatic interactive API documentation (provided by Swagger UI).
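Besides the Swagger UI, you can call the API from a script. The helper below builds a multipart/form-data request body using only the standard library; the form field name "file" and the endpoint path in the usage comment are assumptions about the ModelAPI service, not confirmed details.

```python
import json
import urllib.request
import uuid

def build_multipart(field_name, filename, payload, content_type="image/png"):
    """Build a multipart/form-data body by hand (no third-party deps)."""
    boundary = uuid.uuid4().hex
    body = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{field_name}"; filename="{filename}"\r\n'
        f"Content-Type: {content_type}\r\n\r\n"
    ).encode() + payload + f"\r\n--{boundary}--\r\n".encode()
    return body, f"multipart/form-data; boundary={boundary}"

# Hypothetical usage (field name and /predict path are assumptions):
# body, ctype = build_multipart("file", "cat.png", open("cat.png", "rb").read())
# req = urllib.request.Request(
#     "http://<VM_IP_ADDRESS>:<PORT>/predict", data=body,
#     headers={"Content-Type": ctype}, method="POST")
# print(json.load(urllib.request.urlopen(req)))
```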
- Use ModelAPI to predict images.
It classifies an uploaded image as one of the following classes:
[airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck]
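Conceptually, the final prediction step is an argmax over per-class scores, mapped through this class list. A minimal stand-alone illustration (the score vector below is made up for the example):

```python
class_names = ["airplane", "automobile", "bird", "cat", "deer",
               "dog", "frog", "horse", "ship", "truck"]

def label_for(scores):
    """Return the class name with the highest score."""
    best = max(range(len(scores)), key=lambda i: scores[i])
    return class_names[best]

# Made-up score vector peaking at index 3 ("cat"):
scores = [0.01, 0.02, 0.05, 0.80, 0.02, 0.04, 0.01, 0.02, 0.02, 0.01]
print(label_for(scores))  # cat
```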
- Model: You can replace the pre-trained model (image.h5) with your own trained model: place it in the project directory and update the model = models.load_model("image.h5") line in main.py.
- Input Data: By default, the API accepts image files for prediction. To accept a different type of input, such as text, modify the predict_image function in main.py: change its parameter to take text instead of an image file, update the preprocessing logic to handle the new format, and make sure the preprocessed data matches the model's expected input.
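To make the text-input idea concrete, the model would need a preprocessing step that turns a string into a fixed-length numeric vector before prediction. The toy function below only illustrates the shape of that step; the vocabulary, the `<unk>`/`<pad>` scheme, and the length are invented for the example, and a real text model would ship with its own tokenizer.

```python
# Toy text preprocessing: map words to ids, then pad/truncate to a fixed length.
# Vocabulary and MAX_LEN are made up; a real model defines its own.
VOCAB = {"<pad>": 0, "<unk>": 1, "good": 2, "bad": 3, "movie": 4}
MAX_LEN = 6

def preprocess_text(text):
    ids = [VOCAB.get(w, VOCAB["<unk>"]) for w in text.lower().split()]
    ids = ids[:MAX_LEN] + [VOCAB["<pad>"]] * max(0, MAX_LEN - len(ids))
    return ids

print(preprocess_text("good movie"))  # [2, 4, 0, 0, 0, 0]
```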
- Class Names: Update the class_names list in main.py with your own class names.
The pre-trained model used in this project was obtained from Hugging Face.