Classification¶

This section describes a family of BERT-based models that solve a variety of classification tasks.

Insult detection is a binary classification task of identifying whether a given sequence is an insult directed at another participant in the conversation.

Sentiment analysis is the task of classifying the polarity of a given sequence. The number of classes may vary depending on the data: binary positive/negative classification, multiclass classification with an added neutral class, or classification over a set of different emotions.

The models trained for the paraphrase detection task identify whether two sentences expressed with different words convey the same meaning.

Topic classification refers to the task of classifying an utterance by topic within the conversational domain.

2. Get started with the model¶

[ ]:

!pip install -q deeppavlov


Then make sure that all the required packages for the model are installed.

[ ]:

!python -m deeppavlov install insults_kaggle_bert


insults_kaggle_bert is the name of the model’s config_file. What is a Config File?

The configuration file defines the model and describes its hyperparameters. To use another model, change the name of the config_file here and further. The full list of classification models with their config names can be found in the table.
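As an illustrative sketch (not the actual contents of insults_kaggle_bert, which contains many more model-specific settings), the top-level sections of a DeepPavlov config typically look like this, shown here as a Python dict:

```python
# Illustrative sketch of a DeepPavlov config file's top-level layout.
# Section names below are typical; the values are placeholders.
config_skeleton = {
    "dataset_reader": {          # where and how to read the data
        "class_name": "basic_classification_reader",
        "data_path": "path/to/data",
    },
    "dataset_iterator": {        # how to batch and split samples
        "class_name": "basic_classification_iterator",
    },
    "chainer": {                 # the pipeline from input to output
        "in": ["x"],
        "out": ["y_pred_labels"],
        "pipe": [],              # tokenizer, BERT encoder, classifier head, ...
    },
    "train": {"batch_size": 64, "epochs": 5},   # training hyperparameters
    "metadata": {"variables": {}, "download": []},
}

print(sorted(config_skeleton))
# → ['chainer', 'dataset_iterator', 'dataset_reader', 'metadata', 'train']
```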

3. Use the model for prediction¶

3.1 Predict using Python¶

After installing the model, build it from the config and use it for prediction.

[ ]:

from deeppavlov import build_model

model = build_model('insults_kaggle_bert', download=True)

Input format: List[sentences]

Output format: List[labels]

[ ]:

model(['You are kind of stupid', 'You are a wonderful person!'])

['Insult', 'Not Insult']


3.2 Predict using CLI¶

You can also get predictions in an interactive mode through CLI (Command Line Interface).

[ ]:

!python -m deeppavlov interact insults_kaggle_bert -d


-d is an optional download flag (the alternative to download=True in Python code); it downloads the pre-trained model along with embeddings and all other files needed to run the model.

Or make predictions for samples from a file.

[ ]:

!python -m deeppavlov predict insults_kaggle_bert -f <file-name>
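The input file is assumed here to be plain text with one utterance per line, mirroring the List[sentences] input format above; the file name sentences.txt is just an example:

```python
# Hypothetical example: write one utterance per line to a text file
# that can then be passed via the -f flag for batch prediction.
sentences = ['You are kind of stupid', 'You are a wonderful person!']

with open('sentences.txt', 'w') as f:
    f.write('\n'.join(sentences))

# each line of the file is one sample
with open('sentences.txt') as f:
    print(f.read().splitlines())
```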


4. Evaluation¶

4.1 Evaluate from Python¶

[ ]:

from deeppavlov import evaluate_model

model = evaluate_model('insults_kaggle_bert', download=True)

4.2 Evaluate from CLI¶

[ ]:

!python -m deeppavlov evaluate insults_kaggle_bert -d


5. Train the model on your data¶

5.1 Train your model from Python¶

To train the model on your data, you need to change the path to the training data in the config_file.

Parse the config_file and change the path to your data from Python.

[ ]:

from deeppavlov import train_model
from deeppavlov.core.commands.utils import parse_config

model_config = parse_config('insults_kaggle_bert')

# dataset that the model was trained on
print(model_config['dataset_reader']['data_path'])

~/.deeppavlov/downloads/insults_data


Provide a data_path to your own dataset. You can also change any of the hyperparameters of the model.

[ ]:

# download and unzip a new example dataset
!wget http://files.deeppavlov.ai/datasets/insults_data.tar.gz
!tar -xzvf "insults_data.tar.gz"

[ ]:

# provide a path to the directory with your train, valid and test files
model_config['dataset_reader']['data_path'] = './insults_data'
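Since the parsed config behaves like a nested dictionary, hyperparameters can be overridden by ordinary key assignment before training. A minimal sketch (the dict below is a stand-in; check your parsed config for the exact key names):

```python
# Sketch: hyperparameters are overridden by key assignment.
# This stand-in dict mimics a parsed config; real configs have more keys.
model_config = {
    "dataset_reader": {"data_path": "~/.deeppavlov/downloads/insults_data"},
    "train": {"batch_size": 64, "epochs": 5},
}

model_config["train"]["epochs"] = 3  # e.g. fewer epochs for a quick run

print(model_config["train"]["epochs"])  # → 3
```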


Train the model using the new config¶

[ ]:

model = train_model(model_config)


[ ]:

model(['You are kind of stupid', 'You are a wonderful person!'])

['Insult', 'Not Insult']


5.2 Train your model from CLI¶

To train the model on your data, create a copy of a config file and change the data_path variable in it. After that, train the model using your new config_file. You can also change any of the hyperparameters of the model.
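The copy-and-edit step can be sketched in Python: treat the config as JSON, change data_path, and save the result as model_config.json for the CLI. The dict below is a stand-in; in practice, start from a copy of the original insults_kaggle_bert config file.

```python
import json

# Sketch: write a modified copy of a config as model_config.json
# so it can be passed to `python -m deeppavlov train model_config.json`.
# `config` here is a stand-in; a real config has many more sections.
config = {"dataset_reader": {"data_path": "~/.deeppavlov/downloads/insults_data"}}
config["dataset_reader"]["data_path"] = "./insults_data"

with open("model_config.json", "w") as f:
    json.dump(config, f, indent=2)

with open("model_config.json") as f:
    print(json.load(f)["dataset_reader"]["data_path"])  # → ./insults_data
```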

[ ]:

!python -m deeppavlov train model_config.json


6. Models list¶

The table below lists all classification models available in the DeepPavlov Library.

| Config name | Language | Task | Dataset | Model Size | Metric | Score |
|---|---|---|---|---|---|---|
| insults_kaggle_bert | En | Insults | Insults | 1.1 GB | ROC-AUC | 0.8770 |
| paraphraser_rubert | Ru | Paraphrase | Paraphrase Corpus | 2.0 GB | F1 | 0.8738 |
| paraphraser_convers_distilrubert_2L | Ru | Paraphrase | Paraphrase Corpus | 1.2 GB | F1 | 0.7396 |
| paraphraser_convers_distilrubert_6L | Ru | Paraphrase | Paraphrase Corpus | 1.6 GB | F1 | 0.8354 |
| sentiment_sst_conv_bert | En | Sentiment | SST | 1.1 GB | Accuracy | 0.6626 |
|  | Ru | Sentiment |  | 6.2 GB | F1-macro | 0.9961 |
| rusentiment_bert | Ru | Sentiment | RuSentiment | 1.3 GB | F1-weighted | 0.7005 |
| rusentiment_convers_bert | Ru | Sentiment | RuSentiment | 1.5 GB | F1-weighted | 0.7724 |
| topics_distilbert_base_uncased | En | Topics | DeepPavlov Topics | 6.2 GB | F1-macro | 0.9961 |