Weak supervision

This guide gives you a brief introduction to weak supervision with Rubrix.

Rubrix currently supports weak supervision for text classification use cases, but we’ll be adding support for token classification (e.g., Named Entity Recognition) soon.

This feature is experimental; you can expect some changes in the Python API. Please report any issues you encounter on GitHub.

Labeling workflow

Rubrix weak supervision in a nutshell

Doing weak supervision with Rubrix should be straightforward. In keeping with the spirit of the rest of the library, you can use virtually any weak supervision library or method, such as Snorkel or FlyingSquid.

Rubrix weak supervision support is built around two basic abstractions:

Rule

A rule encodes a heuristic for labeling a record.

Heuristics can be defined using Elasticsearch queries:

plz = Rule(query="plz OR please", label="SPAM")

or with Python functions (similar to Snorkel’s labeling functions, which you can use as well):

from typing import Optional

def contains_http(record: rb.TextClassificationRecord) -> Optional[str]:
    if "http" in record.inputs["text"]:
        return "SPAM"

Besides textual features, Python labeling functions can exploit metadata features:

def author_channel(record: rb.TextClassificationRecord) -> Optional[str]:
    # the word channel appears in the comment author name
    if "channel" in record.metadata["author"]:
        return "SPAM"

A rule should return either a string value, that is, a weak label, or None to abstain.
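
For example, the following function (used again later in this guide) labels short comments as "HAM" and explicitly returns None to abstain on everything else:

def short_comment(record: rb.TextClassificationRecord) -> Optional[str]:
    # comments with fewer than 5 words are likely legitimate
    return "HAM" if len(record.inputs["text"].split()) < 5 else None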

Weak Labels

WeakLabels objects bundle and apply a set of rules to the records of a Rubrix dataset. Applying a rule to a record means either assigning a weak label or abstaining.

This abstraction provides you with the building blocks for training and testing weak supervision “denoising”, “label” or even “end” models:

rules = [contains_http, author_channel]
weak_labels = WeakLabels(
    rules=rules,
    dataset="weak_supervision_yt"
)

# returns a summary of the applied rules
weak_labels.summary()

More information about these abstractions can be found in the Python Labeling module docs.

Built-in label models

To make things even easier for you, we provide wrapper classes around the most common label models, that directly consume a WeakLabels object. This makes working with those models a breeze. Take a look at the list of built-in models in the labeling module docs.
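
For example, the built-in Snorkel wrapper (used later in this guide) consumes the WeakLabels object directly:

from rubrix.labeling.text_classification import Snorkel

# fit Snorkel's label model on our weak labels
label_model = Snorkel(weak_labels)
label_model.fit()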

Workflow

A typical workflow to use weak supervision is:

  1. Create a Rubrix dataset with your raw dataset. If you already have some labelled data, you can log it into the same dataset.

  2. Define a set of rules, exploring and trying out different things directly in the Rubrix web app.

  3. Create a WeakLabels object and apply the rules. Typically, you’ll iterate between this step and step 2.

  4. Once you are satisfied with your weak labels, use the matrix of the WeakLabels instance with your library/method of choice to build a training set or even train a downstream text classification model (see the sketch of the matrix accessors right after this list).
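
A minimal sketch of the WeakLabels accessors involved in step 4 (all of them are used in the examples below):

# rows are records, columns are rules; integer values follow weak_labels.label2int
train_matrix = weak_labels.matrix(has_annotation=False)  # records without annotations
test_matrix = weak_labels.matrix(has_annotation=True)    # records with gold annotations
gold_labels = weak_labels.annotation()                   # gold annotations as integers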

This guide shows you end-to-end examples using Snorkel, FlyingSquid, and Weasel. Let’s get started!

Example dataset

We’ll be using a well-known dataset for weak supervision examples, the YouTube Spam Collection dataset, which is a binary classification task for detecting spam comments in YouTube videos.

[1]:
import pandas as pd

# load data
train_df = pd.read_csv('../tutorials/data/yt_comments_train.csv')
test_df = pd.read_csv('../tutorials/data/yt_comments_test.csv')

# preview data
train_df.head()
[1]:
Unnamed: 0 author date text label video
0 0 Alessandro leite 2014-11-05T22:21:36 pls http://www10.vakinha.com.br/VaquinhaE.aspx... -1.0 1
1 1 Salim Tayara 2014-11-02T14:33:30 if your like drones, plz subscribe to Kamal Ta... -1.0 1
2 2 Phuc Ly 2014-01-20T15:27:47 go here to check the views :3 -1.0 1
3 3 DropShotSk8r 2014-01-19T04:27:18 Came here to check the views, goodbye. -1.0 1
4 4 css403 2014-11-07T14:25:48 i am 2,126,492,636 viewer :D -1.0 1

1. Create a Rubrix dataset with unlabelled data and test data

Let’s create Rubrix records from the train (unlabelled) and test (labelled) datasets and log them into a Rubrix dataset.

[ ]:
import rubrix as rb

# build records from the train dataset
records = [
    rb.TextClassificationRecord(
        inputs=row.text,
        metadata={"video":row.video, "author": row.author}
    )
    for i,row in train_df.iterrows()
]

# build records from the test dataset
labels = ["HAM", "SPAM"]
records += [
    rb.TextClassificationRecord(
        inputs=row.text,
        annotation=labels[row.label],
        metadata={"video":row.video, "author": row.author}
    )
    for i,row in test_df.iterrows()
]

# log records to Rubrix
rb.log(records, name="weak_supervision_yt")

After this step, you have a fully browsable dataset available at http://localhost:6900/weak_supervision_yt (or the base URL where your Rubrix instance is hosted).

2. Defining rules

Let’s now define some of the rules proposed in the tutorial Snorkel Intro Tutorial: Data Labeling.

Remember you can use Elasticsearch’s query string DSL and test your queries directly in the web app. Available fields in the query are described in the Rubrix web app reference.

[4]:
from rubrix.labeling.text_classification import Rule, WeakLabels

#  rules defined as Elasticsearch queries
check_out = Rule(query="check out", label="SPAM")
plz = Rule(query="plz OR please", label="SPAM")
subscribe = Rule(query="subscribe", label="SPAM")
my = Rule(query="my", label="SPAM")
song = Rule(query="song", label="HAM")
love = Rule(query="love", label="HAM")
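
Queries can also target specific fields using Elasticsearch’s field syntax. The field names below are illustrative; the exact fields available are listed in the Rubrix web app reference:

# illustrative field-qualified rules; check the web app reference for available fields
author_channel_rule = Rule(query="metadata.author:*channel*", label="SPAM")
exact_phrase = Rule(query='inputs.text:"check out"', label="SPAM")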

Besides using the UI, if you want to quickly see the effect of a rule, you can do:

[10]:
# display full length text
pd.set_option('display.max_colwidth', None)

# get the subset for the rule query
rb.load(name="weak_supervision_yt", query="plz OR please")[['inputs']]
[10]:
inputs
0 {'text': 'Thank you. Please give your email. '}
1 {'text': 'HUH HYUCK HYUCK IM SPECIAL WHO'S WATCHING THIS IN 2015 IM FROM AUSTRALIA OR SOMETHING GIVE ME ATTENTION PLEASE IM JUST A RAPPER WITH A DREAM IM GONNA SHARE THIS ON GOOGLE PLUS BECAUSE IM SO COOL.'}
2 {'text': 'Media is Evil! Please see and share: W W W. THE FARRELL REPORT. NET Top Ex UK Police Intelligence Analyst turned Whistleblower Tony Farrell exposes a horrific monstrous cover-up perpetrated by criminals operating crimes from inside Mainstream Entertainment and Media Law firms. Beware protect your children!! These devils brutally target innocent people. These are the real criminals linked to London's 7/7 attacks 2005. MUST SEE AND MAKE VIRAL!!! Also see UK Column video on 31st January 2013.'}
3 {'text': 'hey guys if you guys can please SUBSCRIBE to my channel ,i'm a young rapper really dedicated i post a video everyday ,i post a verse (16 bars)(part of a song)everyday to improve i'm doing this for 365 days ,right now i'm on day 41 i'm doing it for a whole year without missing one day if you guys can please SUBSCRIBE and follow me on my journey to my dream watch me improve, it really means a lot to me thank you (:, i won't let you down i promise(: i'm lyrical i keep it real!'}
4 {'text': 'Please do buy these new Christmas shirts! You can buy at any time before December 4th and they are sold worldwide! Don't miss out: http://teespring.com/treechristmas'}
... ...
181 {'text': 'Please subscribe to us and thank you'}
182 {'text': 'My honest opinion. It's a very mediocre song. Nothing unique or special about her music, lyrics or voice. Nothing memorable like Billie Jean or Beat It. Before her millions of fans reply with hate comments, i know this is a democracy and people are free to see what they want. But then don't I have the right to express my opinion? Please don't reply with dumb comments lie "if you don't like it don't watch it". I just came here to see what's the buzz about(661 million views??) and didn't like what i saw. OK?'}
183 {'text': 'EVERYONE PLEASE GO SUBSCRIBE TO MY CHANNEL OR JUST LOON AT MY VIDEOS'}
184 {'text': 'please suscribe i am bored of 5 subscribers try to get it to 20!'}
185 {'text': 'https://www.facebook.com/eeccon/posts/733949243353321?comment_id=734237113324534&offset=0&total_comments=74 please like frigea marius gabriel comment :D'}

186 rows × 1 columns

You can also define plain Python labeling functions:

[ ]:
import re

# rules defined as Python labeling functions
def contains_http(record: rb.TextClassificationRecord):
    if "http" in record.inputs["text"]:
        return "SPAM"

def short_comment(record: rb.TextClassificationRecord):
    return "HAM" if len(record.inputs["text"].split()) < 5 else None

def regex_check_out(record: rb.TextClassificationRecord):
    return "SPAM" if re.search(r"check.*out", record.inputs["text"], flags=re.I) else None

3. Building and analyzing weak labels

[ ]:
# bundle our rules in a list
rules = [check_out, plz, subscribe, my, song, love, contains_http, short_comment, regex_check_out]

# apply the rules to a dataset to obtain the weak labels
weak_labels = WeakLabels(
    rules=rules,
    dataset="weak_supervision_yt"
)
[26]:
# show some stats about the rules, see the `summary()` docstring for details
weak_labels.summary()
[26]:
polarity coverage overlaps conflicts correct incorrect precision
check out {SPAM} 0.235379 0.229147 0.028763 90 0 1.000000
plz OR please {SPAM} 0.089166 0.079099 0.019175 40 0 1.000000
subscribe {SPAM} 0.108341 0.084372 0.028763 60 0 1.000000
my {SPAM} 0.190316 0.167306 0.050815 82 12 0.872340
song {HAM} 0.139981 0.085331 0.034995 78 18 0.812500
love {HAM} 0.097795 0.075743 0.032119 56 14 0.800000
contains_http {SPAM} 0.096357 0.066155 0.045062 12 0 1.000000
short_comment {HAM} 0.259827 0.113135 0.058965 168 16 0.913043
regex_check_out {SPAM} 0.220997 0.220518 0.026846 90 0 1.000000
total {SPAM, HAM} 0.764621 0.447267 0.116970 676 60 0.918478
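
To see where the numbers in the total row come from: coverage is the fraction of records that receive at least one weak label, and overlaps is the fraction of records labeled by more than one rule. A minimal sketch to recompute them from the weak label matrix, assuming the default abstention value:

# abstentions are encoded as weak_labels.label2int[None] (-1 by default)
matrix = weak_labels.matrix()
labeled = matrix != weak_labels.label2int[None]

coverage = labeled.any(axis=1).mean()        # at least one rule fires
overlaps = (labeled.sum(axis=1) > 1).mean()  # more than one rule fires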

4. Using the weak labels

At this step you have at least two options:

  1. Use the weak labels to train a “denoising” or label model and build a less noisy training set. Highly popular options for this are Snorkel and FlyingSquid. After this step, you can train a downstream model with the “clean” labels.

  2. Use the weak labels directly with recent “end-to-end” (e.g., Weasel) or joint models (e.g., COSINE).

Let’s see some examples:

Label model with Snorkel

Snorkel is by far the most popular option for using weak supervision, and Rubrix provides built-in support for it. Using Snorkel with Rubrix’s WeakLabels is as simple as:

[ ]:
%pip install snorkel -qqq
[ ]:
from rubrix.labeling.text_classification import Snorkel

# we pass our WeakLabels instance to our Snorkel label model
label_model = Snorkel(weak_labels)

# we train the model
label_model.fit()

# we check its performance
label_model.score()

After fitting your label model, you can quickly explore its predictions before building a training set for training a downstream text classifier.

This step is useful for validation, manual revision, or defining score thresholds for accepting labels from your label model (for example, only considering labels with a score greater than 0.8; see the sketch after the next cell).

[ ]:
# get your training records with the predictions of the label model
records_for_training = label_model.predict()

# log the records to a new dataset in Rubrix
rb.log(records_for_training, name="snorkel_results")
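
To apply the kind of score threshold mentioned above, you can filter the predicted records before logging them. A sketch, assuming each record's prediction is a list of (label, score) tuples (as in the FlyingSquid example below):

# keep only records whose top label model score exceeds 0.8
confident_records = [
    rec for rec in records_for_training
    if max(score for _, score in rec.prediction) > 0.8
]
rb.log(confident_records, name="snorkel_results_confident")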

Label model with FlyingSquid

FlyingSquid is a powerful method developed by Hazy Research, a research group from Stanford behind ground-breaking work on programmatic data labeling, including Snorkel. FlyingSquid uses a closed-form solution for fitting the label model with great speed gains and similar performance.

[ ]:
%pip install flyingsquid pgmpy -qqq

By default, the WeakLabels class uses -1 as the value for abstention. FlyingSquid, however, expects 0 for abstentions. With Rubrix you can define a custom label2int mapping to account for this:

[ ]:
weak_labels = WeakLabels(
    rules=rules,
    dataset="weak_supervision_yt",
    label2int={None: 0, "SPAM": -1, "HAM": 1},
)
[ ]:
from flyingsquid.label_model import LabelModel

# train our label model
label_model = LabelModel(len(weak_labels.rules))
label_model.fit(L_train=weak_labels.matrix(has_annotation=False), verbose=True)
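
Thanks to the custom label2int mapping, the annotated test split can be used to check the label model's accuracy. A minimal sketch, assuming FlyingSquid's predictions come back as a flat array of -1/1 values matching our SPAM/HAM mapping:

# evaluate the label model on the annotated (test) records
test_matrix = weak_labels.matrix(has_annotation=True)
test_predictions = label_model.predict(L_matrix=test_matrix).squeeze()
print((test_predictions == weak_labels.annotation()).mean())  # accuracy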

As with Snorkel, after fitting your label model you can explore its predictions before building a training set for a downstream text classifier, for example to validate them manually or to define a score threshold for accepting labels (such as only considering labels with a score greater than 0.8).

[ ]:
# get the part of the weak label matrix that has no corresponding annotation
train_matrix = weak_labels.matrix(has_annotation=False)

# get predictions from our label model
predictions = label_model.predict_proba(L_matrix=train_matrix)
predicted_labels = label_model.predict(L_matrix=train_matrix)
preds = [[('SPAM', pred[0]), ('HAM', pred[1])] for pred in predictions]

# get the records that do not have an annotation
train_records = weak_labels.records(has_annotation=False)
[ ]:
# add the predictions to the records
def add_prediction(record, prediction):
    record.prediction = prediction
    return record

train_records_with_lm_prediction = [
    add_prediction(rec, pred)
    for rec, pred, label in zip(train_records, preds, predicted_labels)
    if label != weak_labels.label2int[None] # exclude records where the label model abstains
]

# log a new dataset to Rubrix
rb.log(train_records_with_lm_prediction, name="flyingsquid_results")

Joint Model with Weasel

Weasel lets you train downstream models end-to-end, using weak labels directly. In contrast to Snorkel or FlyingSquid, which are two-stage approaches, Weasel is a one-stage method that jointly trains the label and the end model at the same time. For more details, check out the End-to-End Weak Supervision paper presented at NeurIPS 2021.

In this guide we will show you how to train a Hugging Face transformers model directly with weak labels using Weasel. Since Weasel uses PyTorch Lightning for training, some basic knowledge of PyTorch is helpful, but not strictly necessary.

First, we need to install the Weasel Python package:

[ ]:
!python -m pip install git+https://github.com/autonlab/weasel#egg=weasel[all]

Before we get started, we need to define some classes that wrap our data and our end model so Weasel can work with them.

[ ]:
from weasel.datamodules.base_datamodule import AbstractWeaselDataset, AbstractDownstreamDataset
from weasel.models.downstream_models.base_model import DownstreamBaseModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from torch.utils.data import DataLoader
import torch


class TrainDataset(AbstractWeaselDataset):
    def __init__(self, L, inputs):
        super().__init__(L, None)
        self.inputs = inputs

        if self.L.shape[0] != len(self.inputs):
            raise ValueError("L and inputs have different number of samples")

    def __getitem__(self, item):
        return self.L[item], self.inputs[item]


class TestDataset(AbstractDownstreamDataset):
    def __init__(self, inputs, Y):
        super().__init__(None, Y)
        self.inputs = inputs

        if len(self.Y) != len(self.inputs):
            raise ValueError("inputs and Y have different number of samples")

    def __getitem__(self, item):
        return self.inputs[item], self.Y[item]

class TrainCollator:
    def __init__(self, tokenizer):
        self._tokenizer = tokenizer
    def __call__(self, batch):
        L = torch.stack([b[0] for b in batch])
        inputs = {key: [b[1][key] for b in batch] for key in batch[0][1]}
        return L, self._tokenizer.pad(inputs, return_tensors="pt")


class TestCollator:
    def __init__(self, tokenizer):
        self._tokenizer = tokenizer
    def __call__(self, batch):
        Y = torch.stack([b[1] for b in batch])
        inputs = {key: [b[0][key] for b in batch] for key in batch[0][0]}
        return self._tokenizer.pad(inputs, return_tensors="pt"), Y


class TransformersEndModel(DownstreamBaseModel):
    def __init__(self, name: str, num_labels: int = 2):
        super().__init__()
        self.out_dim = num_labels
        self.model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=num_labels)

    def forward(self, kwargs):
        model_output = self.model(**kwargs)
        return model_output["logits"]

The first step is to obtain our weak labels. For this we use the same rules and dataset as in the examples above (Snorkel and FlyingSquid). Note that this re-creates the WeakLabels object with its default label2int mapping (abstention as -1).

[ ]:
# obtain our weak labels
weak_labels = WeakLabels(
    rules=rules,
    dataset="weak_supervision_yt"
)

In a second step we instantiate our end model, which in our case will be a pre-trained transformer from the Hugging Face Hub. Here we choose the small ELECTRA model by Google that shows excellent performance given its moderate number of parameters. Due to its size, you can fine-tune it on your CPU within a reasonable amount of time.

[ ]:
# instantiate our transformers end model
end_model = TransformersEndModel("google/electra-small-discriminator", num_labels=2)

With our end-model at hand, we can now instantiate the Weasel model. Apart from the end-model, it also includes a neural encoder that tries to estimate latent labels.

[ ]:
from weasel.models import Weasel

# instantiate our weasel end-to-end model
weasel = Weasel(
    end_model=end_model,
    num_LFs=len(weak_labels.rules),
    n_classes=2,
    encoder={'hidden_dims': [32, 10]},
    optim_encoder={'name': 'adam', 'lr': 1e-4},
    optim_end_model={'name': 'adam', 'lr': 5e-5},
)

Afterwards, we wrap our data in torch Datasets and DataLoaders, so that Weasel and PyTorch Lightning can work with it. In this step we also tokenize the data. Here we need to be careful to use the tokenizer that corresponds to our end model.

[ ]:
# tokenizer for our transformers end model
tokenizer = AutoTokenizer.from_pretrained("google/electra-small-discriminator")

# torch dataset of our training data
# (we convert the numpy arrays to tensors, since the collators stack them with torch.stack)
train_ds = TrainDataset(
    L=torch.tensor(weak_labels.matrix(has_annotation=False)),
    inputs=[tokenizer(rec.inputs["text"], truncation=True)
            for rec in weak_labels.records(has_annotation=False)],
)

# torch dataset of our test data
test_ds = TestDataset(
    inputs=[tokenizer(rec.inputs["text"], truncation=True)
            for rec in weak_labels.records(has_annotation=True)],
    Y=torch.tensor(weak_labels.annotation()),
)

# torch data loader for our training data
train_loader = DataLoader(
    dataset=train_ds,
    collate_fn=TrainCollator(tokenizer),
    batch_size=8,
)

# torch data loader for our test data
test_loader = DataLoader(
    dataset=test_ds,
    collate_fn=TestCollator(tokenizer),
    batch_size=16,
)
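
Before training, it can help to sanity-check one batch from the training loader. A small sketch of what to expect:

# inspect one training batch: weak labels and padded tokenized inputs
L_batch, encodings = next(iter(train_loader))
print(L_batch.shape)                 # (batch_size, number of rules)
print(encodings["input_ids"].shape)  # (batch_size, padded sequence length)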

Now we have everything ready to start the training of our Weasel model. For the training process, Weasel relies on the excellent PyTorch Lightning Trainer. It provides tons of options and features to optimize the training process, but the defaults below should give you reasonable results. Keep in mind that you are fine-tuning a full-blown transformer model, albeit a small one.

[ ]:
import pytorch_lightning as pl

# instantiate the pytorch-lightning trainer
trainer = pl.Trainer(
    gpus=0,  # >= 1 to use GPU(s)
    max_epochs=2,
    logger=None,
    callbacks=[pl.callbacks.ModelCheckpoint(monitor="Val/accuracy", mode="max")]
)

# fit the model end-to-end
trainer.fit(
    model=weasel,
    train_dataloaders=train_loader,
    val_dataloaders=test_loader
)

After the training we can call the Trainer.test method to check the final performance. The model should have achieved an accuracy of around 0.94.

[ ]:
trainer.test(dataloaders=test_loader)  # List of test metrics

To use the model for inference, you can either use its predict method:

[ ]:
# Example text for the inference
text = "In my head this is like 2 years ago.. Time FLIES"

# Get predictions for the example text
predicted_probs, predicted_label = weasel.predict(
    tokenizer(text, return_tensors="pt")
)

# Map predicted int to label
weak_labels.int2label[int(predicted_label)]  # HAM

Or you can instantiate one of the popular transformers pipelines, providing the end model and the tokenizer directly:

[ ]:
from transformers import pipeline

# modify the id2label mapping of the model
weasel.end_model.model.config.id2label = weak_labels.int2label

# create transformers pipeline
classifier = pipeline("text-classification", model=weasel.end_model.model, tokenizer=tokenizer)

# use pipeline for predictions
classifier(text)  # [{'label': 'HAM', 'score': 0.6110987663269043}]