💫 Explore and analyze spaCy NER pipelines

In this tutorial, we will learn to log spaCy Named Entity Recognition (NER) predictions.

This is useful for:

  • 🧐 Evaluating pre-trained models.

  • 🔎 Spotting frequent errors both during development and production.

  • 📈 Improving your pipelines over time using the Rubrix annotation mode.

  • 🎮 Monitoring your model predictions using the Rubrix integration with Kibana.

Let’s get started!

Introduction

In this tutorial we will learn how to explore and analyze spaCy NER pipelines in an easy way.

We will load the Gutenberg Time dataset from the Hugging Face Hub and use a transformer-based spaCy model for detecting entities in this dataset and log the detected entities into a Rubrix dataset. This dataset can be used for exploring the quality of predictions and for creating a new training set, by correcting, adding and validating entities.

Then, we will use a smaller spaCy model for detecting entities and log the detected entities into the same Rubrix dataset, comparing its predictions with those of the previous model. And, as a bonus, we will use Rubrix and spaCy on a more challenging dataset: IMDB.

Setup

Rubrix is a free and open-source tool to explore, annotate, and monitor data for NLP projects.

If you are new to Rubrix, visit and ⭐ star the Github repo for more materials like this tutorial and detailed docs.

If you have not installed and launched Rubrix yet, check the Setup and Installation guide.

For this tutorial we also need the third-party library datasets and, of course, spaCy together with pytorch, which can be installed via pip:

[ ]:
%pip install torch -qqq
%pip install datasets "spacy[transformers]~=3.0" protobuf -qqq

Our dataset

For this tutorial, we’re going to use the Gutenberg Time dataset from the Hugging Face Hub. It contains all explicit time references in a dataset of 52,183 novels whose full text is available via Project Gutenberg. Since the examples are extracts from novels, we are bound to find plenty of named entities.

[ ]:
from datasets import load_dataset

dataset = load_dataset("gutenberg_time", split="train", streaming=True)

Let’s have a look at the first 5 examples of the train set.

[ ]:
import pandas as pd

pd.DataFrame(dataset.take(5))

Logging spaCy NER entities into Rubrix

Using a Transformer-based pipeline

Let’s download our RoBERTa-based pretrained pipeline and instantiate a spaCy nlp pipeline with it.

[ ]:
!python -m spacy download en_core_web_trf
[ ]:
import spacy

nlp = spacy.load("en_core_web_trf")

Now let’s apply the nlp pipeline to the first 50 examples in our dataset, collecting the tokens and NER entities.

[ ]:
import rubrix as rb
from tqdm.auto import tqdm

records = []

for record in tqdm(list(dataset.take(50))):
    # We only need the text of each instance
    text = record["tok_context"]

    # spaCy Doc creation
    doc = nlp(text)

    # Entity annotations
    entities = [
        (ent.label_, ent.start_char, ent.end_char)
        for ent in doc.ents
    ]

    # Pre-tokenized input text
    tokens = [token.text for token in doc]

    # Rubrix TokenClassificationRecord list
    records.append(
        rb.TokenClassificationRecord(
            text=text,
            tokens=tokens,
            prediction=entities,
            prediction_agent="en_core_web_trf",
        )
    )
[ ]:
rb.log(records=records, name="gutenberg_spacy_ner")

If you go to the gutenberg_spacy_ner dataset in Rubrix you can explore the predictions of this model.

You can:

  • Filter records containing specific entity types,

  • See the most frequent “mentions” or surface forms for each entity type. Mentions are the string values of specific entities; for example, “1 month” could be a mention of a duration entity. This is useful for error analysis, to quickly spot potential issues and problematic entity types,

  • Use the free-text search to find records containing specific words (the sketch after this list shows how to do the same programmatically),

  • And validate, include or reject specific entity annotations to build a new training set.
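You can also run similar queries from the notebook. The following is a minimal sketch that assumes rb.load accepts an Elasticsearch-style query string and returns a pandas DataFrame; the exact signature and return type may differ between Rubrix versions.

[ ]:
import rubrix as rb

# Hedged sketch: load the logged records back and keep only those whose text
# contains the word "month" (query syntax assumed to be Elasticsearch-like)
df = rb.load("gutenberg_spacy_ner", query="text:month")
df.head()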

Using a smaller but more efficient pipeline

Now let’s compare with a smaller, but more efficient pre-trained model.

Let’s first download it:

[ ]:
!python -m spacy download en_core_web_sm
[ ]:
import spacy

nlp = spacy.load("en_core_web_sm")
[ ]:
records = []    # Creating an empty record list to save all the records

for record in tqdm(list(dataset.take(50))):

    text = record["tok_context"]  # We only need the text of each instance
    doc = nlp(text)    # spaCy Doc creation

    # Entity annotations
    entities = [
        (ent.label_, ent.start_char, ent.end_char)
        for ent in doc.ents
    ]

    # Pre-tokenized input text
    tokens = [token.text for token in doc]


    # Rubrix TokenClassificationRecord list
    records.append(
        rb.TokenClassificationRecord(
            text=text,
            tokens=tokens,
            prediction=entities,
            prediction_agent="en_core_web_sm",
        )
    )
[ ]:
rb.log(records=records, name="gutenberg_spacy_ner")

Exploring and comparing en_core_web_sm and en_core_web_trf models

If you go to your gutenberg_spacy_ner dataset, you can explore and compare the results of both models.

To only see predictions of a specific model, you can use the predicted by filter, which comes from the prediction_agent parameter of your TokenClassificationRecord.

[Screenshot: exploring both spaCy models’ predictions in the Rubrix UI]
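If you prefer a quick programmatic check, the snippet below is a sketch that assumes rb.load returns a pandas DataFrame exposing a prediction_agent column; column names and return types may vary across Rubrix versions.

[ ]:
# Hedged sketch: count how many records each pipeline logged
df = rb.load("gutenberg_spacy_ner")
df["prediction_agent"].value_counts()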

Explore the IMDB dataset

So far, both spaCy pretrained models seem to work pretty well. Let’s try with a more challenging dataset, which is more dissimilar to the original training data these models have been trained on.

[ ]:
imdb = load_dataset("imdb", split="test")
[ ]:
records = []
for record in tqdm(imdb.select(range(50))):
    # We only need the text of each instance
    text = record["text"]

    # spaCy Doc creation
    doc = nlp(text)

    # Entity annotations
    entities = [
        (ent.label_, ent.start_char, ent.end_char)
        for ent in doc.ents
    ]

    # Pre-tokenized input text
    tokens = [token.text for token in doc]

    # Rubrix TokenClassificationRecord list
    records.append(
        rb.TokenClassificationRecord(
            text=text,
            tokens=tokens,
            prediction=entities,
            prediction_agent="en_core_web_sm",
        )
    )
[ ]:
rb.log(records=records, name="imdb_spacy_ner")

Exploring this dataset highlights the need for fine-tuning on specific domains.

For example, if we check the most frequent mentions for the Person entity type, we find two highly frequent misclassified mentions: gore (the film genre) and Oscar (the award).

You can easily check every example by using the filters and search-box.
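To double-check this outside the UI, you can also tally the most frequent PERSON mentions directly from the records list we just built. This small self-contained sketch only relies on the label and character offsets already stored in each record’s prediction.

[ ]:
from collections import Counter

# Sketch: count the most frequent PERSON mentions predicted on the IMDB sample
person_mentions = Counter(
    record.text[start:end]
    for record in records
    for label, start, end in record.prediction
    if label == "PERSON"
)
person_mentions.most_common(10)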

Summary

In this tutorial, you learned how to log and explore different spaCy NER models with Rubrix. Now you can:

  • Build custom dashboards using Kibana to monitor and visualize spaCy models.

  • Build training sets using pre-trained spaCy models.

Next steps

🙋‍♀️ Join the Rubrix community! A good place to start is the discussion forum.

⭐ Star the Rubrix Github repo to stay updated.