📰 Building a news classifier with weak supervision

In this tutorial, we will build a news classifier using rules and weak supervision:

  • 📰 For this example, we use the AG News dataset but you can follow this process to programmatically label any dataset.

  • 🤿 The train split without labels is used to build a training set with rules, Rubrix, and Snorkel’s Label Model.

  • 🔧 The test set is used for evaluating our weak labels, label model and downstream news classifier.

  • 🤯 We achieve 0.84 macro avg. f1-score without using a single example from the original dataset and using a pretty lightweight model (scikit-learn’s MultinomialNB).

The following diagram shows the overall process for using Weak supervision with Rubrix:

[Figure: Labeling workflow]

Introduction

Weak supervision is a branch of machine learning where noisy, limited, or imprecise sources are used to provide supervision signal for labeling large amounts of training data in a supervised learning setting. This approach alleviates the burden of obtaining hand-labeled data sets, which can be costly or impractical. Instead, inexpensive weak labels are employed with the understanding that they are imperfect, but can nonetheless be used to create a strong predictive model. [Wikipedia]

For a broader introduction to weak supervision, as well as further references, we recommend the excellent overview by Alex Ratner et al.

This tutorial aims to be a practical introduction to weak supervision and will walk you through its entire process. First we will generate weak labels with Rubrix, combine these labels with Snorkel, and finally train a classifier with Scikit Learn.

Setup

Rubrix is a free and open-source tool to explore, annotate, and monitor data for NLP projects.

If you are new to Rubrix, check out the ⭐ Github repository.

If you have not installed and launched Rubrix yet, check the Setup and Installation guide.

For this tutorial we also need some third party libraries that can be installed via pip:

[ ]:
%pip install snorkel datasets scikit-learn -qqq

1. Load test and unlabelled datasets into Rubrix

First, let’s download the ag_news data set and have a quick look at it.

[ ]:
from datasets import load_dataset

# load our data
dataset = load_dataset("ag_news")

# get the index to label mapping
labels = dataset["test"].features["label"].names
[2]:
import pandas as pd

# quick look at our data
with pd.option_context('display.max_colwidth', None):
    display(dataset["test"].to_pandas().head())
text label
0 Fears for T N pension after talks Unions representing workers at Turner Newall say they are 'disappointed' after talks with stricken parent firm Federal Mogul. 2
1 The Race is On: Second Private Team Sets Launch Date for Human Spaceflight (SPACE.com) SPACE.com - TORONTO, Canada -- A second\team of rocketeers competing for the #36;10 million Ansari X Prize, a contest for\privately funded suborbital space flight, has officially announced the first\launch date for its manned rocket. 3
2 Ky. Company Wins Grant to Study Peptides (AP) AP - A company founded by a chemistry researcher at the University of Louisville won a grant to develop a method of producing better peptides, which are short chains of amino acids, the building blocks of proteins. 3
3 Prediction Unit Helps Forecast Wildfires (AP) AP - It's barely dawn when Mike Fitzpatrick starts his shift with a blur of colorful maps, figures and endless charts, but already he knows what the day will bring. Lightning will strike in places he expects. Winds will pick up, moist places will dry and flames will roar. 3
4 Calif. Aims to Limit Farm-Related Smog (AP) AP - Southern California's smog-fighting agency went after emissions of the bovine variety Friday, adopting the nation's first rules to reduce air pollution from dairy cow manure. 3

Now we will log the test split of our data set to Rubrix, which we will be using for testing our label and downstream models.

[ ]:
import rubrix as rb

# build our test records
records = [
    rb.TextClassificationRecord(
        inputs=record["text"],
        metadata={"split": "test"},
        annotation=labels[record["label"]]
    )
    for record in dataset["test"]
]

# log the records to Rubrix
rb.log(records, name="news")

In a second step we log the train split without labels. Remember, our goal is to programmatically build a training set using rules and weak supervision.

[ ]:
# build our training records without labels
records = [
    rb.TextClassificationRecord(
        inputs=record["text"],
        metadata={"split": "unlabelled"},
    )
    for record in dataset["train"]
]

# log the records to Rubrix
rb.log(records, name="news")

The result of the above is the following dataset in Rubrix, with 127,600 records (120,000 unlabelled and 7,600 for testing).

You can use the web app to find good rules for programmatic labeling!
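
To double-check these numbers from Python, you can load the dataset back and count the records per split. A minimal sketch; it relies on the "split" metadata we attached when logging, and on rb.load returning a pandas DataFrame:

[ ]:
import rubrix as rb

# load the "news" dataset from Rubrix as a pandas DataFrame
df = rb.load("news")

# count the records per split using the metadata we attached when logging
print(df.metadata.apply(lambda m: m["split"]).value_counts())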

2. Interactive weak labeling: Finding and defining rules

After logging the dataset, you can find and save rules directly with the UI. Then, you can read the rules with Python to train a label or downstream model, as we’ll see in the next step.
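
As a quick preview of that step, rules saved in the UI can be loaded and inspected from Python (a minimal sketch; each rule carries its query string and the label it assigns):

[ ]:
from rubrix.labeling.text_classification import load_rules

# read the rules saved through the web app
rules = load_rules(dataset="news")

# each rule holds an Elasticsearch query string and the label it assigns
for rule in rules:
    print(f"{rule.query!r} -> {rule.label!r}")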

3. Denoise weak labels with Snorkel’s Label Model

The goal at this step is to denoise the weak labels we’ve just created using rules. There are several approaches to this problem using different statistical methods.

In this tutorial, we’re going to use Snorkel, but you can actually use any other label model or weak supervision method, such as FlyingSquid for example (see the Weak supervision guide for more details). For convenience, Rubrix defines a simple wrapper around Snorkel’s Label Model so it’s easier to use with Rubrix weak labels and datasets.

Let’s first read the rules defined in our dataset and create our weak labels:

[27]:
from rubrix.labeling.text_classification import load_rules, WeakLabels

rules = load_rules(dataset="news")

weak_labels = WeakLabels(
    rules=rules,
    dataset="news"
)
weak_labels.summary()
[27]:
polarity coverage overlaps conflicts correct incorrect precision
sci* {Sci/Tech} 0.016600 0.003176 0.001588 138 33 0.807018
dollar* {Business} 0.016592 0.006723 0.002990 108 41 0.724832
*ball {Sports} 0.030132 0.010015 0.001425 257 31 0.892361
conflict {World} 0.003052 0.000999 0.000287 23 5 0.821429
financ* {Business} 0.019620 0.007622 0.005298 90 70 0.562500
match {Sports} 0.008629 0.002138 0.000287 78 7 0.917647
goal {Sports} 0.005585 0.001774 0.000395 41 9 0.820000
election {World} 0.017235 0.011789 0.002192 128 27 0.825806
president* {World} 0.053346 0.018590 0.007188 353 130 0.730849
techn* {Sci/Tech} 0.030310 0.012277 0.005143 193 75 0.720149
software {Sci/Tech} 0.030132 0.010380 0.003354 209 47 0.816406
computer* {Sci/Tech} 0.027312 0.011782 0.003664 192 61 0.758893
game {Sports} 0.038768 0.010333 0.002672 252 79 0.761329
team {Sports} 0.031867 0.010875 0.002874 242 62 0.796053
minist* {World} 0.033455 0.008923 0.004191 259 33 0.886986
stock* {Business} 0.041123 0.017800 0.006933 311 56 0.847411
oil {Business} 0.035817 0.014694 0.004376 247 60 0.804560
internet {Sci/Tech} 0.028234 0.009032 0.002889 216 39 0.847059
total {Sports, Sci/Tech, World, Business} 0.378056 0.079171 0.025546 3337 865 0.794146
[ ]:
from rubrix.labeling.text_classification import Snorkel

# create the label model
label_model = Snorkel(weak_labels)

# fit the model
label_model.fit()

# test it with labeled test set
label_model.score()
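
If you prefer to skip the convenience wrapper, you can fit Snorkel’s LabelModel directly on the weak label matrix. A minimal sketch, assuming weak_labels.matrix() returns the records-by-rules matrix with -1 encoding abstention (Rubrix’s default convention):

[ ]:
from snorkel.labeling.model import LabelModel

# weak label matrix for the unlabelled split: one row per record,
# one column per rule, -1 where a rule abstained
L_train = weak_labels.matrix(has_annotation=False)

# fit Snorkel's generative label model (4 news categories)
snorkel_model = LabelModel(cardinality=4, verbose=False)
snorkel_model.fit(L_train=L_train, n_epochs=500, seed=43)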

4. Prepare our training set

We now have a “denoised” training set that we can prepare for training a downstream model. The label model’s predict method returns TextClassificationRecord objects with the predictions from the label model.

We can either refine and review these records using the Rubrix web app, use them as is, or filter them by score, as sketched below.
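
Score-based filtering could look like the following sketch (the 0.8 threshold is an arbitrary choice, and we assume each record’s prediction is a list of (label, score) tuples sorted by descending score):

[ ]:
# get records with the label model's predictions
records = label_model.predict()

# keep only records whose top prediction is reasonably confident
confident_records = [
    record for record in records
    if record.prediction[0][1] > 0.8  # (label, score) of the top prediction
]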

In this case, we assume the predictions are precise enough and use them without any revision. Our training set has ~45,000 records, which corresponds to all records where the label model has not abstained.

[30]:
import pandas as pd

# get records with the predictions from the label model
records = label_model.predict()

# build a simple dataframe with text and the prediction with the highest score
df_train = pd.DataFrame([
    {"text": record.inputs["text"], "label": label_model.weak_labels.label2int[record.prediction[0][0]]}
    for record in records
])

# quick look at our training data with the weak labels from our label model
with pd.option_context('display.max_colwidth', None):
    display(df_train)
text label
0 Biotech Bug Busters Try to Save Venezuela Art Works (Reuters) Reuters - Biotechnology is meeting art\in Venezuela as scientists try to save the country's art\treasures from being ruined by its tropical insects, fungi and\humidity. 0
1 Wolves not entirely to blame for farm losses Paris - Wolves, lions, cheetahs and other predators inflict relatively few losses on livestock and farmers gain only a temporary boost if these marauders are culled, New Scientist says. 0
2 EU, U.S. Talks on Aircraft Aid Grounded BRUSSELS (Reuters) - U.S. and EU negotiators disagreed on Thursday about state aid for aircraft rivals Airbus and Boeing, winding up no closer on a sensitive issue that has gathered steam before the U.S. presidential election. 1
3 Gold Fields Appeal to Exchange on Harmony Bid Fails (Update1) Gold Fields Ltd. #39;s appeal to South Africa #39;s stock market regulator to block Harmony Gold Mining Co. #39;s 43.9 billion rand (\$7. 2
4 Video Games Go Live for Annual Awards Show LOS ANGELES (Reuters) - A felon will host, a Playboy model will work the red carpet, and "the most destructive band in history" will play on the first major live video game awards show, airing on Spike TV on Tuesday. 3
... ... ...
45393 Singapore's Economy Grows in 2004 (AP) AP - Singapore's economy expanded 5.4 percent in the fourth quarter from a year ago and grew by 8.1 percent for the full year in 2004, the city-state's Ministry of Trade said Monday. 1
45394 Krispy Kreme Posts Loss, Stock Off 16 Pct LOS ANGELES (Reuters) - Krispy Kreme Doughnuts Inc. <A HREF="http://www.investor.reuters.com/FullQuote.aspx?ticker=KKD.N target=/stocks/quickinfo/fullquote">KKD.N</A> on Monday reported a quarterly loss due to store closings and sluggish sales, sending its stock down 16 percent. 2
45395 Sun partners for high-speed Ethernet Sun Microsystems will integrate drivers for S2io's Xframe 10 Gigabit Ethernet Adapter into the Solaris operating system for Sparc, AMD Opteron, and Intel Xeon servers. In addition, S2io will partner with Sun to develop a TCP/IP offload engine with remote direct memory access functionality to enhance performance and scalability in intense computer and server environments. 0
45396 Taking the Pulse of Planet Earth Scientists are planning to take the pulse of the planet -- and more -- in an effort to improve weather forecasts, predict energy needs months in advance, anticipate disease outbreaks and even tell fishermen where the catch will be abundant. 0
45397 Woodward Can Make Switch - Hogg Scotland back row forward Allister Hogg sees no reason why England coach Sir Clive Woodward cannot make the switch from rugby to football. 3

45398 rows × 2 columns

[31]:
# for the test set, we can retrieve the records with validated annotations (the original ag_news test set)
df_test = rb.load("news", query="status:Validated")

# transform data to match our training set format
df_test['text'] = df_test.inputs.transform(lambda r: r['text'])
df_test['annotation'] = df_test['annotation'].apply(
    lambda r: label_model.weak_labels.label2int[r]
)
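
As a quick sanity check (a sketch), we can look at the label distribution of the mapped test set; building the inverse of label2int keeps the output readable:

[ ]:
# inverse mapping from integer ids back to label names
int2label = {v: k for k, v in label_model.weak_labels.label2int.items()}

# label distribution of the mapped test set
print(df_test['annotation'].map(int2label).value_counts())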

5. Train a downstream model with scikit-learn

Now, let’s train our final model using scikit-learn:

[32]:
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

# define our final classifier
classifier = Pipeline([
    ('vect', CountVectorizer()),
    ('clf', MultinomialNB())
])

# fit the classifier
classifier.fit(
    X=df_train.text.tolist(),
    y=df_train.label.values
)
[32]:
Pipeline(steps=[('vect', CountVectorizer()), ('clf', MultinomialNB())])
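
You can also try the classifier on a made-up headline. A small sketch, assuming the weak-label integer ids line up with the dataset’s label names (the same assumption the classification report below makes):

[ ]:
# predict the category of a hypothetical headline
pred = classifier.predict(["Stocks rally as quarterly earnings beat expectations"])

# map the integer id back to a label name (assumes the id order matches `labels`)
print(labels[pred[0]])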
[33]:
# compute the test accuracy
accuracy = classifier.score(
    X=df_test.text.tolist(),
    y=label_model.weak_labels.annotation()
)

print(f"Test accuracy: {accuracy}")
Test accuracy: 0.8418681318681319

Not too bad! 🥳

We have achieved around 0.84 accuracy without even using a single example from the original ag_news train set and with a small set of rules (less than 30). Also, we’ve improved over the 0.81 accuracy of our Label Model.

Finally, let’s take a look at more detailed metrics:

[37]:
from sklearn import metrics

# get predictions for the test set
predicted = classifier.predict(df_test.text.tolist())

print(metrics.classification_report(label_model.weak_labels.annotation(), predicted, target_names=labels))
              precision    recall  f1-score   support

       World       0.78      0.86      0.82      2285
      Sports       0.85      0.86      0.85      2278
    Business       0.88      0.67      0.76      2236
    Sci/Tech       0.87      0.98      0.92      2301

    accuracy                           0.84      9100
   macro avg       0.84      0.84      0.84      9100
weighted avg       0.84      0.84      0.84      9100

At this point, we could go back to the UI and define more rules for the labels with lower performance. Looking at the table above, we might want to add rules to increase the recall of the Business label.
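
For instance, a couple of extra Business rules could be defined in Python and evaluated right away (a sketch; the queries below are illustrative guesses, not tuned rules):

[ ]:
from rubrix.labeling.text_classification import Rule, WeakLabels

# hypothetical extra rules aimed at the Business label
extra_rules = [
    Rule(query="profit*", label="Business"),
    Rule(query="earnings", label="Business"),
]

# re-compute the weak labels with the extended rule set and check the stats
weak_labels = WeakLabels(rules=rules + extra_rules, dataset="news")
weak_labels.summary()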

Summary

In this tutorial, we saw how you can leverage weak supervision to quickly build up a large training data set and use it to train a first lightweight model.

Rubrix is a very handy tool for starting the weak supervision process: it makes it easy to find a good set of starting rules and to iterate on them quickly. Since Rubrix also provides built-in support for the most common label models, you can get from rules to weak labels in a few straightforward steps. For more suggestions on how to leverage weak labels, check out our weak supervision guide, where we describe an interesting approach to jointly train the label model and a transformers downstream model.

Next steps

If you are interested in the topic of weak supervision check our weak supervision guide.

⭐ Rubrix Github repo to stay updated.

🙋‍♀️ Join the Rubrix community on Slack

Appendix. Create rules and weak labels from Python

For some use cases, you might want to use Python to define labeling rules and generate weak labels. Rubrix lets you define and test rules and labeling functions directly in Python. This can be useful for combining them with rules defined in the UI, and for leveraging structured resources such as lexicons and gazetteers, which are easier to use directly in a programmatic environment.
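
For instance, a lexicon can be wrapped in a plain Python labeling function that receives a record and returns a label, or None to abstain. A minimal sketch (the word list is made up, and we assume callables can be passed as rules alongside Rule objects, as described in the weak supervision guide):

[ ]:
# a tiny, made-up finance lexicon
FINANCE_TERMS = {"stocks", "shares", "market", "investors", "profit"}

def finance_lexicon(record):
    """Return 'Business' if the text contains a finance term, else abstain."""
    words = set(record.inputs["text"].lower().split())
    if words & FINANCE_TERMS:
        return "Business"
    return None

# labeling functions can be combined with UI-defined rules, e.g.:
# weak_labels = WeakLabels(rules=rules + [finance_lexicon], dataset="news")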

In this section, we recreate the rules from the UI, this time directly in Python:

[14]:
from rubrix.labeling.text_classification import Rule

# define queries and patterns for each category (using ES DSL)
queries = [
  (["money", "financ*", "dollar*"], "Business"),
  (["war", "gov*", "minister*", "conflict"], "World"),
  (["footbal*", "sport*", "game", "play*"], "Sports"),
  (["sci*", "techno*", "computer*", "software", "web"], "Sci/Tech")
]

# define rules
rules = [
    Rule(query=term, label=label)
    for terms,label in queries
    for term in terms
]
[ ]:
from rubrix.labeling.text_classification import WeakLabels

# generate the weak labels
weak_labels = WeakLabels(
    rules=rules,
    dataset="news"
)

On our machine it took around 24 seconds to apply the rules and to generate weak labels for the 127,600 examples.

Typically, you will want to iterate on the rules and check their statistics. For this, you can use the weak_labels.summary() method:

[16]:
weak_labels.summary()
[16]:
polarity coverage overlaps conflicts correct incorrect precision
money {Business} 0.008276 0.002437 0.001936 30 37 0.447761
financ* {Business} 0.019655 0.005893 0.005188 80 55 0.592593
dollar* {Business} 0.016591 0.003542 0.002908 87 37 0.701613
war {World} 0.011779 0.003213 0.001348 75 26 0.742574
gov* {World} 0.045078 0.010878 0.006270 170 174 0.494186
minister* {World} 0.030031 0.007531 0.002821 193 22 0.897674
conflict {World} 0.003041 0.001003 0.000102 18 4 0.818182
footbal* {Sports} 0.013166 0.004945 0.000439 107 7 0.938596
sport* {Sports} 0.021191 0.007045 0.001223 139 23 0.858025
game {Sports} 0.038879 0.014083 0.002375 216 71 0.752613
play* {Sports} 0.052453 0.016889 0.005063 268 112 0.705263
sci* {Sci/Tech} 0.016552 0.002735 0.001309 114 26 0.814286
techno* {Sci/Tech} 0.027218 0.008433 0.003174 155 60 0.720930
computer* {Sci/Tech} 0.027320 0.011058 0.004459 159 54 0.746479
software {Sci/Tech} 0.030243 0.009655 0.003346 184 41 0.817778
web {Sci/Tech} 0.015376 0.004067 0.001607 76 25 0.752475
total {World, Sci/Tech, Business, Sports} 0.317022 0.053582 0.019561 2071 774 0.727944

From the above, we see that our rules cover around 32% of the original training set with a total precision of about 0.73. Our hope is that the label model and the downstream model will improve both the recall and the precision of the final classifier.
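
Since summary() returns a pandas DataFrame, this rule iteration can also be scripted, for example to flag rules whose precision falls below a threshold (a sketch; the 0.6 cut-off is an arbitrary choice):

[ ]:
summary = weak_labels.summary()

# flag rules with low empirical precision on the annotated test set
low_precision = summary[summary["precision"] < 0.6]
print(low_precision[["coverage", "precision"]])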