Classification models in DeepPavlov

DeepPavlov contains code for training and using classification models implemented as a number of different neural networks or sklearn models. The models can be used for binary, multi-class, or multi-label classification. The available classifiers are listed below (more details are given in the corresponding sections):

  • BERT classifier (see here) builds the BERT 8 architecture for the classification problem on TensorFlow.

  • Keras classifier (see here) builds a neural network on Keras with the TensorFlow backend.

  • Sklearn classifier (see here) wraps most of the sklearn classifiers.

Quick start

Command line

INSTALL First, whichever model you choose, you need to install its additional requirements:

python -m deeppavlov install <path_to_config>

where <path_to_config> is a path to one of the provided config files or its name without an extension, for example “intents_snips”.

To download pre-trained models, vocabs, and embeddings for the dataset of interest, one should run the following command, providing the corresponding config file name (see above), or pass the -d flag to commands like interact, telegram, train, and evaluate:

python -m deeppavlov download <path_to_config>

where <path_to_config> is a path to one of the provided config files or its name without an extension, for example “intents_snips”.

When using KerasClassificationModel on Windows, one has to set KERAS_BACKEND to tensorflow:

set "KERAS_BACKEND=tensorflow"

INTERACT One can run the following command to interact with the provided config via the command line interface:

python -m deeppavlov interact <path_to_config> [-d]

where <path_to_config> is a path to one of the provided config files or its name without an extension, for example “intents_snips”. With the optional -d parameter, all the data required to run the selected pipeline will be downloaded.

TRAIN After preparing the config file (including changes to the dataset, pipeline elements, or parameters), one can train the model either from scratch or starting from a pre-trained model. To train from scratch, set load_path to an empty or non-existing directory and save_path to the directory where the trained model will be saved. To continue training from a saved model, set load_path to an existing directory containing the model's files. Note that a saved model can only be loaded if the sizes of the network layers coincide; other model parameters, as well as the training parameters, embedder, tokenizer, preprocessor, and postprocessors, can be changed, but be careful when changing the embedder: different token embeddings will not give the same results. Then training can be run in the following way:

python -m deeppavlov train <path_to_config>

where <path_to_config> is a path to one of the provided config files or its name without an extension, for example “intents_snips”. With the optional -d parameter, all the data required to run the selected pipeline will be downloaded.
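As an illustration, below is a sketch of the path-related keys of the classifier component inside a config (shown as a Python dict fragment; the class name and directory names are illustrative and depend on the chosen config):

# Path settings of the classifier component (illustrative fragment)
classifier_component = {
    "class_name": "keras_classification_model",
    "save_path": "./my_model/cls",  # where the trained model will be saved
    "load_path": "./my_model/cls",  # point to an empty/non-existing directory to train from scratch
    # ... other model parameters ...
}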

Python code

One can also use these configs in Python code. When using KerasClassificationModel on Windows, one needs to set KERAS_BACKEND to tensorflow in the following way:

import os

os.environ["KERAS_BACKEND"] = "tensorflow"

INTERACT To download the required data, one has to set the download parameter to True. Then one can build a model from a configuration file and interact with it:

from deeppavlov import build_model, configs

CONFIG_PATH = configs.classifiers.intents_snips  # could also be configuration dictionary or string path or `pathlib.Path` instance

model = build_model(CONFIG_PATH, download=True)  # in case of necessity to download some data

model = build_model(CONFIG_PATH, download=False)  # otherwise

print(model(["What is the weather in Boston today?"]))

>>> [['GetWeather']]

TRAIN Training can also be run in the following way:

from deeppavlov import train_model, configs

CONFIG_PATH = configs.classifiers.intents_snips  # could also be configuration dictionary or string path or `pathlib.Path` instance

model = train_model(CONFIG_PATH, download=True)  # in case of necessity to download some data

model = train_model(CONFIG_PATH, download=False)  # otherwise
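If a config needs changes before training (for example, a different model directory), it can be loaded as a dictionary and edited in Python. A minimal sketch, assuming the config keeps its model directory in a metadata variable (the variable name, "MODEL_PATH" here, differs between configs and should be checked first):

from deeppavlov import configs, train_model
from deeppavlov.core.common.file import read_json

config = read_json(configs.classifiers.intents_snips)
# Redirect where the trained model will be stored; the variable name is illustrative.
config["metadata"]["variables"]["MODEL_PATH"] = "./my_snips_model"

model = train_model(config, download=True)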

BERT models

BERT (Bidirectional Encoder Representations from Transformers) 8 is a Transformer pre-trained on the masked language modeling and next sentence prediction tasks. This approach showed state-of-the-art results on a wide range of NLP tasks in English.

deeppavlov.models.bert.BertClassifierModel (see here) provides an easy-to-use solution for the classification problem using pre-trained BERT. Several pre-trained English, multilingual, and Russian BERT models are provided in our BERT documentation.

Two main components of the BERT classifier pipeline in DeepPavlov are deeppavlov.models.preprocessors.BertPreprocessor (see here) and deeppavlov.models.bert.BertClassifierModel (see here). Raw texts should be given to bert_preprocessor, which tokenizes them into subtokens, encodes the subtokens with their indices, and creates token and segment masks. If classes are converted to one-hot labels in the pipeline, one_hot_labels should be set to true.

bert_classifier applies a dense layer with as many units as there are classes on top of the pooled output of the Transformer encoder, followed by a softmax activation (sigmoid if the multilabel parameter is set to true in the config).
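For illustration, a sketch of these two components as they typically appear in a BERT classification config (shown as Python dicts; parameter values and the {BERT_PATH}/{MODEL_PATH} variables are illustrative and should be taken from the actual config):

# bert_preprocessor: subtoken tokenization, index encoding, token/segment masks
bert_preprocessor = {
    "class_name": "bert_preprocessor",
    "vocab_file": "{BERT_PATH}/vocab.txt",
    "do_lower_case": False,
    "max_seq_length": 64,
    "in": ["x"],
    "out": ["bert_features"]
}

# bert_classifier: dense layer over the pooled encoder output
bert_classifier = {
    "class_name": "bert_classifier",
    "n_classes": "#classes_vocab.len",
    "one_hot_labels": True,
    "bert_config_file": "{BERT_PATH}/bert_config.json",
    "pretrained_bert": "{BERT_PATH}/bert_model.ckpt",
    "save_path": "{MODEL_PATH}/model",
    "load_path": "{MODEL_PATH}/model",
    "in": ["bert_features"],
    "in_y": ["y_onehot"],
    "out": ["y_pred_probas"]
}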

Neural Networks on Keras

deeppavlov.models.classifiers.KerasClassificationModel (see here) contains a number of different neural network configurations for the classification task. Please note that each model has its own parameters that should be specified in the config; information about the parameters can be found here. One of the available network configurations is chosen via the model_name parameter in the config. The available models are listed below (a sketch of a corresponding config fragment is given after the list):

  • cnn_model – Shallow-and-wide CNN 1 with max pooling after convolution,

  • dcnn_model – Deep CNN with number of layers determined by the given number of kernel sizes and filters,

  • cnn_model_max_and_aver_pool – Shallow-and-wide CNN 1 with max and average pooling concatenation after convolution,

  • bilstm_model – Bidirectional LSTM,

  • bilstm_bilstm_model – 2-layers bidirectional LSTM,

  • bilstm_cnn_model – Bidirectional LSTM followed by shallow-and-wide CNN,

  • cnn_bilstm_model – Shallow-and-wide CNN followed by bidirectional LSTM,

  • bilstm_self_add_attention_model – Bidirectional LSTM followed by self additive attention layer,

  • bilstm_self_mult_attention_model – Bidirectional LSTM followed by self multiplicative attention layer,

  • bigru_model – Bidirectional GRU model.
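As an illustration of the parameters mentioned above, here is a sketch of the model component of a config using cnn_model (shown as a Python dict; values are illustrative, and each model_name requires its own set of parameters as described in the documentation linked above):

keras_component = {
    "class_name": "keras_classification_model",
    "model_name": "cnn_model",             # one of the architectures listed above
    "n_classes": "#classes_vocab.len",
    "embedding_size": "#my_embedder.dim",
    "kernel_sizes_cnn": [1, 2, 3],
    "filters_cnn": 256,
    "dense_size": 100,
    "dropout_rate": 0.5,
    "optimizer": "Adam",
    "learning_rate": 0.01,
    "loss": "binary_crossentropy",
    "save_path": "{MODEL_PATH}/model",
    "load_path": "{MODEL_PATH}/model",
    "in": ["x_emb"],
    "in_y": ["y_onehot"],
    "out": ["y_pred_probas"]
}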

Sklearn models

deeppavlov.models.sklearn.SklearnComponent (see here) is a universal wrapper for any sklearn model that can be fitted. The model_class parameter should be set to the full name of the model (for example, sklearn.feature_extraction.text:TfidfVectorizer or sklearn.linear_model:LogisticRegression). The infer_method parameter should be set to the class method used for prediction (predict, predict_proba, predict_log_proba, or transform). Since text classification in DeepPavlov assigns a list of labels to each sample, the output of a classifier sklearn_component must be a list of labels for each sample; therefore, for an sklearn classifier component one should set ensure_list_output to true.
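A sketch of a TF-IDF vectorizer plus logistic regression classifier expressed as two sklearn_component entries (shown as Python dicts; paths and variable names are illustrative):

sklearn_pipe_fragment = [
    {
        "class_name": "sklearn_component",
        "model_class": "sklearn.feature_extraction.text:TfidfVectorizer",
        "infer_method": "transform",
        "save_path": "{MODEL_PATH}/tfidf.pkl",
        "load_path": "{MODEL_PATH}/tfidf.pkl",
        "in": ["x"],
        "fit_on": ["x"],
        "out": ["x_vec"]
    },
    {
        "class_name": "sklearn_component",
        "model_class": "sklearn.linear_model:LogisticRegression",
        "infer_method": "predict",
        "ensure_list_output": True,
        "save_path": "{MODEL_PATH}/logreg.pkl",
        "load_path": "{MODEL_PATH}/logreg.pkl",
        "in": ["x_vec"],
        "fit_on": ["x_vec", "y"],
        "out": ["y_pred"]
    }
]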

Pre-trained models

We also provide pre-trained models for classification on the DSTC 2 dataset, the SNIPS dataset, the “AG News” dataset, “Detecting Insults in Social Commentary”, and a Russian Twitter sentiment dataset.

The DSTC 2 dataset does not originally contain information about intents; therefore, a Dstc2IntentsDatasetIterator (deeppavlov/dataset_iterators/dstc2_intents_interator.py) instance extracts artificial intents for each user reply using information from the acts and slots.

Below we give several examples of intent construction:

System: “Hello, welcome to the Cambridge restaurant system. You can ask for restaurants by area, price range or food type. How may I help you?”

User: “cheap restaurant”

In the original dataset this user reply has the following characteristics:

{"goals": {"pricerange": "cheap"},
 "db_result": null,
 "dialog-acts": [{"slots": [["pricerange", "cheap"]], "act": "inform"}]}

This message contains only one intent: inform_pricerange.

User: “thank you good bye”

In the original dataset this user reply has the following characteristics:

{"goals": {"food": "dontcare", "pricerange": "cheap", "area": "south"},
 "db_result": null,
 "dialog-acts": [{"slots": [], "act": "thankyou"}, {"slots": [], "act": "bye"}]}

This message contains two intents (thankyou, bye). The train, valid, and test division is the same as on the dataset web-site.
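The construction rule illustrated by these examples can be sketched as follows (a simplified sketch of the logic, not the actual iterator code): the artificial intent is the act name, extended with the slot name when the act carries slots.

def build_intents(dialog_acts):
    """Simplified sketch: one artificial intent per (act, slot) pair."""
    intents = []
    for act in dialog_acts:
        if act["slots"]:
            for slot_name, _slot_value in act["slots"]:
                intents.append(f"{act['act']}_{slot_name}")
        else:
            intents.append(act["act"])
    return intents

print(build_intents([{"slots": [["pricerange", "cheap"]], "act": "inform"}]))
# ['inform_pricerange']
print(build_intents([{"slots": [], "act": "thankyou"}, {"slots": [], "act": "bye"}]))
# ['thankyou', 'bye']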

The SNIPS dataset contains an intent classification task with 7 intents (approximately 2.4k samples per intent):

  • GetWeather

  • BookRestaurant

  • PlayMusic

  • AddToPlaylist

  • RateBook

  • SearchScreeningEvent

  • SearchCreativeWork

Initially, the classification model on the SNIPS dataset 7 was trained only as a usage example; that is why the pre-trained model provided for SNIPS uses embeddings trained on the DSTC-2 dataset, which is not the best choice for this task. The train set is divided into train and validation sets to illustrate how basic_classification_iterator works.

The Detecting Insults in Social Commentary dataset contains a binary classification task of detecting insults directed at participants of a conversation. The train, valid, and test division is the same as in the Kaggle challenge.

The AG News dataset contains a topic classification task with 5 classes (labeled on a 0 to 4 scale). The test set is the original one from the web-site, the validation set is a stratified 1/5 split of the original train set with seed 42, and the train set is the rest.

The Twitter mokoron dataset contains sentiment classification of Russian tweets into positive and negative replies 2. It was labeled automatically. The train, valid, and test division was made manually (stratified splits: 1/5 of the whole dataset for the test set with seed 42, then 1/5 of the rest for the validation set with seed 42). The two provided pre-trained models were trained on the same dataset, with and without preprocessing. The main difference between their scores is caused by the fact that some symbols (deleted during preprocessing) were used for automatic labelling. Therefore, the model trained on preprocessed data can be considered to rely on semantics, while the model trained on unprocessed data relies on punctuation and syntax.

The RuSentiment dataset contains sentiment classification of Russian-language social media posts into 5 classes: ‘positive’, ‘negative’, ‘neutral’, ‘speech’, ‘skip’.

The SentiRuEval dataset contains sentiment classification of Russian-language reviews into 4 classes: ‘positive’, ‘negative’, ‘neutral’, ‘both’. Datasets on four different themes (‘Banks’, ‘Telecom’, ‘Restaurants’, ‘Cars’) are combined into one big dataset.

The Questions on Yahoo Answers labeled as either informational or conversational dataset (Yahoo-L31) contains intent classification of English questions into two categories: informational (0) and conversational (1). The dataset includes some additional metadata, but for the presented pre-trained model only the question Title and Label were used. Embeddings were obtained from a language model (ELMo) fine-tuned on the Yahoo-L6 dataset (Yahoo! Answers Comprehensive Questions and Answers). We do not provide these datasets; both are available upon request to Yahoo Research. Therefore, this model is available only for interaction.

Stanford Sentiment Treebank contains 5-class fine-grained sentiment classification of sentences. Each sentence was initially labelled with a floating point value from 0 to 1. For fine-grained classification, the floating point labels are converted to integer labels according to the intervals [0, 0.2], (0.2, 0.4], (0.4, 0.6], (0.6, 0.8], (0.8, 1.0], corresponding to the very negative, negative, neutral, positive, and very positive classes.
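A sketch of this conversion:

def sst_class(score: float) -> str:
    """Map a floating point sentiment score in [0, 1] to one of five SST classes."""
    classes = ["very negative", "negative", "neutral", "positive", "very positive"]
    for cls, upper_bound in zip(classes, [0.2, 0.4, 0.6, 0.8, 1.0]):
        if score <= upper_bound:
            return cls
    return classes[-1]

print(sst_class(0.15))  # very negative
print(sst_class(0.55))  # neutral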

Yelp Reviews contains 5-class sentiment classification of product reviews. The labels are 1, 2, 3, 4, 5, corresponding to the very negative, negative, neutral, positive, and very positive classes. The reviews are relatively long (truncated to 200 subtokens).

Task | Dataset | Lang | Model | Metric | Valid | Test | Downloads
28 intents | DSTC 2 | En | DSTC 2 emb | Accuracy | 0.7613 | 0.7733 | 800 Mb
28 intents | DSTC 2 | En | Wiki emb | Accuracy | 0.9629 | 0.9617 | 8.5 Gb
28 intents | DSTC 2 | En | BERT | Accuracy | 0.9673 | 0.9636 | 800 Mb
7 intents | SNIPS-2017 7 | En | DSTC 2 emb | F1-macro | 0.8591 | – | 800 Mb
7 intents | SNIPS-2017 7 | En | Wiki emb | F1-macro | 0.9820 | – | 8.5 Gb
7 intents | SNIPS-2017 7 | En | Tfidf + SelectKBest + PCA + Wiki emb | F1-macro | 0.9673 | – | 8.6 Gb
7 intents | SNIPS-2017 7 | En | Wiki emb weighted by Tfidf | F1-macro | 0.9786 | – | 8.5 Gb
Insult detection | Insults | En | Reddit emb | ROC-AUC | 0.9263 | 0.8556 | 6.2 Gb
Insult detection | Insults | En | English BERT | ROC-AUC | 0.9255 | 0.8612 | 1200 Mb
Insult detection | Insults | En | English Conversational BERT | ROC-AUC | 0.9389 | 0.8941 | 1200 Mb
5 topics | AG News | En | Wiki emb | Accuracy | 0.8922 | 0.9059 | 8.5 Gb
Intent | Yahoo-L31 | En | Yahoo-L31 on conversational BERT | ROC-AUC | 0.9436 | – | 1200 Mb
Sentiment | SST | En | 5-classes SST on conversational BERT | Accuracy | 0.6456 | 0.6715 | 400 Mb
Sentiment | SST | En | 5-classes SST on multilingual BERT | Accuracy | 0.5738 | 0.6024 | 660 Mb
Sentiment | Yelp | En | 5-classes Yelp on conversational BERT | Accuracy | 0.6925 | 0.6842 | 400 Mb
Sentiment | Yelp | En | 5-classes Yelp on multilingual BERT | Accuracy | 0.5896 | 0.5874 | 660 Mb
Sentiment | Twitter mokoron | Ru | RuWiki+Lenta emb w/o preprocessing | Accuracy | 0.9965 | 0.9961 | 6.2 Gb
Sentiment | Twitter mokoron | Ru | RuWiki+Lenta emb with preprocessing | Accuracy | 0.7823 | 0.7759 | 6.2 Gb
Sentiment | RuSentiment | Ru | RuWiki+Lenta emb | F1-weighted | 0.6541 | 0.7016 | 6.2 Gb
Sentiment | RuSentiment | Ru | Twitter emb super-convergence 6 | F1-weighted | 0.7301 | 0.7576 | 3.4 Gb
Sentiment | RuSentiment | Ru | ELMo | F1-weighted | 0.7519 | 0.7875 | 700 Mb
Sentiment | RuSentiment | Ru | Multi-language BERT | F1-weighted | 0.6809 | 0.7193 | 1900 Mb
Sentiment | RuSentiment | Ru | Conversational RuBERT | F1-weighted | 0.7548 | 0.7742 | 657 Mb
Intent | Ru like Yahoo-L31 | Ru | Conversational vs Informational on ELMo | ROC-AUC | 0.9412 | – | 700 Mb

How to train on other datasets

We provide the dataset reader BasicClassificationDatasetReader and the dataset iterator BasicClassificationDatasetIterator to work with .csv and .json files. These classes are described in the readers docs and the dataset iterators docs.

Data files should be in the following format (classes can be separated by a custom symbol given in the config as class_sep; here class_sep=","). A sketch of the corresponding reader configuration follows the table:

x | y
text_0 | class_0
text_1 | class_0
text_2 | class_1,class_2
text_3 | class_1,class_0,class_2
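Such a reader and iterator configuration might look like this (shown as a Python dict; the data_path and split proportions are illustrative):

reader_and_iterator_fragment = {
    "dataset_reader": {
        "class_name": "basic_classification_reader",
        "x": "x",                     # name of the text column in the csv file
        "y": "y",                     # name of the labels column
        "data_path": "./my_dataset",  # directory containing train.csv (and optionally valid.csv / test.csv)
        "class_sep": ","
    },
    "dataset_iterator": {
        "class_name": "basic_classification_iterator",
        "seed": 42,
        "field_to_split": "train",
        "split_fields": ["train", "valid"],
        "split_proportions": [0.9, 0.1]
    }
}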

To train a model one should

  • set data_path to the directory to which train.csv should be downloaded,

  • set save_path to the directory where the trained model should be saved,

  • set all other parameters of model as well as embedder, tokenizer and preprocessor to desired ones.

Then the training process can be run in the same way:

python -m deeppavlov train <path_to_config>

Comparison

The comparison of the presented models is given on the SNIPS dataset 7. The evaluation of model scores was conducted in the same way as in 3 to allow comparison with the results from the report of the dataset authors. The results were achieved with parameter tuning and embeddings trained on a Reddit dataset.

Model | AddToPlaylist | BookRestaurant | GetWeather | PlayMusic | RateBook | SearchCreativeWork | SearchScreeningEvent
api.ai | 0.9931 | 0.9949 | 0.9935 | 0.9811 | 0.9992 | 0.9659 | 0.9801
ibm.watson | 0.9931 | 0.9950 | 0.9950 | 0.9822 | 0.9996 | 0.9643 | 0.9750
microsoft.luis | 0.9943 | 0.9935 | 0.9925 | 0.9815 | 0.9988 | 0.9620 | 0.9749
wit.ai | 0.9877 | 0.9913 | 0.9921 | 0.9766 | 0.9977 | 0.9458 | 0.9673
snips.ai | 0.9873 | 0.9921 | 0.9939 | 0.9729 | 0.9985 | 0.9455 | 0.9613
recast.ai | 0.9894 | 0.9943 | 0.9910 | 0.9660 | 0.9981 | 0.9424 | 0.9539
amazon.lex | 0.9930 | 0.9862 | 0.9825 | 0.9709 | 0.9981 | 0.9427 | 0.9581
Shallow-and-wide CNN | 0.9956 | 0.9973 | 0.9968 | 0.9871 | 0.9998 | 0.9752 | 0.9854

How to improve the performance

  • One can use FastText 4 to train embeddings that are better suited for the considered dataset (see the sketch after this list).

  • One can use some custom preprocessing to clean texts.

  • One can use ELMo 5 or BERT 8.

  • All the parameters should be tuned on the validation set.
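For the first point, a minimal sketch of training custom embeddings with the fasttext Python package (file names are hypothetical; this step is independent of DeepPavlov, and the resulting binary can then be plugged into the embedder component of a config):

import fasttext

# train_texts.txt: one preprocessed text per line from the target dataset
embeddings = fasttext.train_unsupervised("train_texts.txt", model="skipgram", dim=100)
embeddings.save_model("domain_embeddings.bin")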

References

1. Kim Y. Convolutional neural networks for sentence classification // arXiv preprint arXiv:1408.5882. – 2014.

2. Rubtsova Yu. V. Constructing a corpus for sentiment classifier training // Software & Systems, 2015, No. 1 (109), pp. 72–78. (In Russian.)

3. https://www.slideshare.net/KonstantinSavenkov/nlu-intent-detection-benchmark-by-intento-august-2017

4. Bojanowski P., Grave E., Joulin A., Mikolov T. Enriching Word Vectors with Subword Information.

5. Peters M. E. et al. Deep contextualized word representations // arXiv preprint arXiv:1802.05365. – 2018.

6. Smith L. N., Topin N. Super-convergence: Very fast training of residual networks using large learning rates. – 2018.

7. Coucke A. et al. Snips voice platform: an embedded spoken language understanding system for private-by-design voice interfaces // arXiv preprint arXiv:1805.10190. – 2018.

8. Devlin J. et al. BERT: Pre-training of deep bidirectional transformers for language understanding // arXiv preprint arXiv:1810.04805. – 2018.