💾 Monitor predictions in HTTP API endpoints

In this tutorial, you’ll learn to monitor the predictions of a FastAPI inference endpoint and log them to a Rubrix dataset.

This tutorial walks you through 4 basic steps:

  • 💾 Load the model you want to use.

  • 🔄 Convert model output to Rubrix format.

  • 💻 Create a FastAPI endpoint.

  • 🤖 Add middleware to automate logging to Rubrix.

Let’s get started!


Setup Rubrix

Rubrix is a free and open-source tool to explore, annotate, and monitor data for NLP projects.

If you are new to Rubrix, check out the ⭐ GitHub repository.

If you have not installed and launched Rubrix, check the Setup and Installation guide.

Once installed, you only need to import Rubrix:
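
[ ]:
import rubrix as rb  # "rb" is the conventional alias used across the Rubrix docs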

Install tutorial dependencies

Apart from Rubrix, we’ll need the following libraries:

  • transformers

  • spaCy

  • uvicorn

  • FastAPI

And the following models:

  • distilbert-base-uncased-finetuned-sst-2-english: a sentiment-analysis model.

  • en_core_web_sm: spaCy’s trained pipeline for English.

To install all requirements, run the following commands:

[ ]:
# spaCy
!pip install spacy
# spaCy pipeline
!python -m spacy download en_core_web_sm
# FastAPI
!pip install fastapi
# transformers
!pip install transformers
# uvicorn
!pip install "uvicorn[standard]"

The transformers sentiment-analysis model will be downloaded automatically in the next step.

Loading models

Let’s load our pretrained pipelines; we’ll apply them to some example text in the next step:

[ ]:
from transformers import pipeline
import spacy

transformers_pipeline = pipeline("sentiment-analysis", return_all_scores=True)
spacy_pipeline = spacy.load("en_core_web_sm")

For more information about using the transformers library with Rubrix, check out the tutorial How to label your data and fine-tune a 🤗 sentiment classifier.

Model output

Let’s try the transformers pipeline on an example:

[ ]:
from pprint import pprint

batch = ['I really like rubrix!']
predictions = transformers_pipeline(batch)
pprint(predictions)

The output is a list containing, for each input text, a list of two elements:

  • The first dictionary contains the NEGATIVE sentiment label and its score.

  • The second dictionary contains the same data, but for the POSITIVE sentiment.
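
For a positive sentence like the one above, the printed structure looks roughly like this (the exact scores will differ):

[[{'label': 'NEGATIVE', 'score': 0.0003},
  {'label': 'POSITIVE', 'score': 0.9997}]]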

Convert output to Rubrix format

To log the output to Rubrix, we need to supply a list of dictionaries, each containing two keys:

  • labels: a list of strings, one for each sentiment label.

  • scores: a list of floats, one for each label’s predicted probability.

[ ]:
rubrix_format = [
    {
        "labels": [p["label"] for p in prediction],
        "scores": [p["score"] for p in prediction],
    }
    for prediction in predictions
]
pprint(rubrix_format)
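
After conversion, each input is represented by a single dictionary, for example (scores again illustrative):

[{'labels': ['NEGATIVE', 'POSITIVE'], 'scores': [0.0003, 0.9997]}]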

Create prediction endpoint

[ ]:
from fastapi import FastAPI
from typing import List

app_transformers = FastAPI()

# prediction endpoint using transformers pipeline
@app_transformers.post("/")
def predict_transformers(batch: List[str]):
    predictions = transformers_pipeline(batch)
    return [
        {
            "labels": [p["label"] for p in prediction],
            "scores": [p["score"] for p in prediction],
        }
        for prediction in predictions
    ]
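
If you want to sanity-check the endpoint before adding any logging, you can call it in-process with FastAPI’s TestClient (this snippet is not part of the original tutorial; depending on your FastAPI/Starlette version it may also require the requests or httpx package):

[ ]:
from fastapi.testclient import TestClient

# call the endpoint without starting a server; nothing is logged to Rubrix yet
client = TestClient(app_transformers)
response = client.post("/", json=["I really like rubrix!"])
print(response.json())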

Add Rubrix logging middleware to the application

[ ]:
from rubrix.client.asgi import RubrixLogHTTPMiddleware

app_transformers.add_middleware(
    RubrixLogHTTPMiddleware,
    api_endpoint="/transformers/",  # the endpoint that will be logged
    dataset="monitoring_transformers",  # your dataset name
    # you could post-process the predict output with a custom records_mapper function
    # records_mapper=custom_text_classification_mapper,
)

Do the same for spaCy

We’ll add a custom records mapper to convert spaCy’s output to Rubrix’s TokenClassificationRecord format.

Mapper

[ ]:
import re
import datetime

from rubrix.client.models import TokenClassificationRecord

def custom_mapper(inputs, outputs):
    """Turn the endpoint's (inputs, outputs) pair into a Rubrix TokenClassificationRecord."""
    spaces_regex = re.compile(r"\s+")
    text = inputs
    return TokenClassificationRecord(
        text=text,
        # simple whitespace tokenization of the input text
        tokens=spaces_regex.split(text),
        # map each predicted entity to a (label, start, end) tuple
        prediction=[
            (entity["label"], entity["start"], entity["end"])
            for entity in (
                outputs.get("entities") if isinstance(outputs, dict) else outputs
            )
        ],
        event_timestamp=datetime.datetime.now(),
    )
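
To get a feel for what the mapper returns, you can call it directly with a hand-crafted prediction (the text, entity label, and offsets below are purely illustrative and not part of the original tutorial):

[ ]:
sample_text = "Apple is looking at buying a U.K. startup"
sample_output = {"entities": [{"label": "ORG", "start": 0, "end": 5}]}
print(custom_mapper(sample_text, sample_output))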

FastAPI application

[ ]:
app_spacy = FastAPI()

app_spacy.add_middleware(
    RubrixLogHTTPMiddleware,
    api_endpoint="/spacy/",
    dataset="monitoring_spacy",
    records_mapper=custom_mapper
)

# prediction endpoint using spacy pipeline
@app_spacy.post("/")
def predict_spacy(batch: List[str]):
    predictions = []
    for text in batch:
        doc = spacy_pipeline(text)  # spaCy Doc creation
        # Entity annotations
        entities = [
            {"label": ent.label_, "start": ent.start_char, "end": ent.end_char}
            for ent in doc.ents
        ]

        prediction = {
            "text": text,
            "entities": entities,
        }
        predictions.append(prediction)
    return predictions
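
For reference, this endpoint returns one dictionary per input text, each holding the original text and its entities, roughly like this (the entity labels and offsets below are only an illustration):

[{'text': 'Apple is looking at buying a U.K. startup',
  'entities': [{'label': 'ORG', 'start': 0, 'end': 5},
               {'label': 'GPE', 'start': 29, 'end': 33}]}]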

Putting it all together

[ ]:
app = FastAPI()

@app.get("/")
def root():
    return {"message": "alive"}

app.mount("/transformers", app_transformers)
app.mount("/spacy", app_spacy)

Launch the application

To launch the application, copy all of the code above into a file named main.py and run the following command:

[ ]:
!uvicorn main:app
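
With the server running, you can send batches of text to both endpoints, for instance with the requests library from another notebook or script (this example is not part of the original tutorial and assumes uvicorn’s default address, http://localhost:8000, and that requests is installed):

[ ]:
import requests

# sentiment predictions, logged by the middleware to the "monitoring_transformers" dataset
response = requests.post(
    "http://localhost:8000/transformers/",
    json=["I really like rubrix!"],
)
print(response.json())

# named entity predictions, logged to the "monitoring_spacy" dataset
response = requests.post(
    "http://localhost:8000/spacy/",
    json=["Apple is looking at buying a U.K. startup"],
)
print(response.json())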

Transformers demo

[Transformers Log Demo]

spaCy demo

[spaCy Log Demo]

Summary

In this tutorial, we have learned how to automatically log model outputs to Rubrix. This can be used to continuously and transparently monitor HTTP inference endpoints.

Next steps

🙋‍♀️ Join the Rubrix community! A good place to start is the discussion forum.

⭐ Star the Rubrix GitHub repo to stay updated.