deeppavlov.models.classifiers

class deeppavlov.models.classifiers.keras_classification_model.KerasClassificationModel(text_size: int, model_name: str, optimizer: str = 'Adam', loss: str = 'binary_crossentropy', lear_rate: float = 0.01, lear_rate_decay: float = 0.0, last_layer_activation='sigmoid', confident_threshold: float = 0.5, **kwargs)[source]

Class implements a Keras model for the classification task on multi-class, multi-label data.

Parameters:
  • text_size – maximal length of text in tokens (words); longer texts are cut, shorter ones are padded with zeros (pre-padding)
  • model_name – particular method of this class to initialize model configuration
  • optimizer – function name from keras.optimizers
  • loss – function name from keras.losses.
  • lear_rate – learning rate for optimizer.
  • lear_rate_decay – learning rate decay for optimizer
  • last_layer_activation – activation function applied after the classification layer. For multi-label classification use sigmoid, otherwise softmax.
  • confident_threshold – probability threshold for converting probabilities to labels; the value is from 0 to 1. If all probabilities are lower than confident_threshold, the label with the highest probability is assigned. If last_layer_activation is softmax (i.e. not multi-label classification), set it to 1.
  • classes – list of class names present in the dataset (in the config it is given as the keys of the vocabulary over y)
  • embedder – embedder
  • tokenizer – tokenizer
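
In a pipeline config this component is configured through the parameters above. A minimal sketch of such a fragment, written here as a Python dict; the registered class name, the vocabulary reference and all values are illustrative assumptions, not a verbatim config:

    config_fragment = {
        "class_name": "keras_classification_model",  # registered name of this class (assumed)
        "model_name": "cnn_model",                    # one of the builder methods documented below
        "text_size": 15,                              # pad/cut every text to 15 tokens
        "optimizer": "Adam",
        "loss": "binary_crossentropy",
        "lear_rate": 0.01,
        "lear_rate_decay": 0.1,
        "last_layer_activation": "sigmoid",           # sigmoid for multi-label, softmax otherwise
        "confident_threshold": 0.5,
        "classes": "#classes_vocab.keys()",           # usually a reference to the vocabulary over y
    }
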
opt

dictionary with all model parameters

tokenizer

tokenizer class instance

fasttext_model

fasttext model instance

classes

list of considered classes

n_classes

number of considered classes

model

keras model itself

epochs_done

number of epochs that were done

batches_seen

number of batches that were seen

train_examples_seen

number of training samples that were seen

sess

tf session

optimizer

keras.optimizers instance

__call__(data: List[str], *args) → Tuple[numpy.ndarray, List[dict]][source]

Infer on the given data

Parameters:
  • data – list of sentences
  • *args – additional arguments
Returns:

for each sentence, a vector of probabilities of belonging to each class, or a list of labels the sentence belongs to
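
A minimal inference sketch using the public DeepPavlov API; the config path is a placeholder and the exact outputs depend on the pipeline's output step:

    from deeppavlov import build_model

    # build_model assembles the pipeline described in a config file;
    # "my_intents_config.json" stands in for a real config path
    model = build_model("my_intents_config.json", download=False)

    # the pipeline calls KerasClassificationModel.__call__ under the hood
    predictions = model(["book me a table for two", "what is the weather today"])
    print(predictions)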

texts2vec(sentences: List[List[str]]) → numpy.ndarray[source]

Convert texts to vector representations using the embedder, padding them up to self.opt["text_size"] tokens

Parameters: sentences – list of lists of tokens
Returns: array of embedded texts
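
The pre-padding behaviour can be illustrated with a standalone sketch; the random 100-dimensional embedder and the truncation side are assumptions standing in for the real embedder:

    import numpy as np

    text_size = 15       # target length in tokens, as in self.opt["text_size"]
    embedding_dim = 100  # dimensionality of the stand-in embedder

    def embed(token: str) -> np.ndarray:
        # placeholder for a real embedder lookup (e.g. fastText vectors)
        rng = np.random.default_rng(abs(hash(token)) % (2 ** 32))
        return rng.standard_normal(embedding_dim)

    def texts2vec_sketch(sentences):
        batch = []
        for tokens in sentences:
            tokens = tokens[:text_size]  # cut long texts (truncation side is an assumption)
            vectors = [embed(t) for t in tokens]
            padding = [np.zeros(embedding_dim)] * (text_size - len(vectors))
            batch.append(padding + vectors)  # pre-padding with zero vectors
        return np.asarray(batch)

    print(texts2vec_sketch([["book", "a", "table"]]).shape)  # (1, 15, 100)
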
train_on_batch(texts: List[str], labels: list) → [<class 'float'>, typing.List[float]][source]

Train the model on the given batch

Parameters:
  • texts – list of texts
  • labels – list of labels
Returns:

metrics values on the given batch
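
A hedged sketch of driving training manually; normally the DeepPavlov trainer does this, and both clf (a KerasClassificationModel instance) and iterator (an object with a DeepPavlov-style gen_batches method) are assumed to exist:

    def train_loop(clf, iterator, epochs: int = 5, batch_size: int = 64) -> None:
        for _ in range(epochs):
            for texts, labels in iterator.gen_batches(batch_size, data_type="train"):
                # returns loss (and any other compiled metrics) on this batch
                metrics = clf.train_on_batch(texts, labels)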

infer_on_batch(texts: List[str], labels: list = None) → [<class 'float'>, typing.List[float], <class 'numpy.ndarray'>][source]

Infer the model on the given batch

Parameters:
  • texts – list of texts
  • labels – list of labels
Returns:

metrics values on the given batch if labels are given, predictions otherwise
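
The two modes, sketched; clf and the data variables are assumed to exist:

    def validate_and_predict(clf, valid_texts, valid_labels, test_texts):
        # with labels: metric values on the batch, useful for validation
        val_metrics = clf.infer_on_batch(valid_texts, labels=valid_labels)
        # without labels: per-class probability predictions
        probabilities = clf.infer_on_batch(test_texts)
        return val_metrics, probabilities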

cnn_model(kernel_sizes_cnn: List[int], filters_cnn: int, dense_size: int, coef_reg_cnn: float = 0.0, coef_reg_den: float = 0.0, dropout_rate: float = 0.0, **kwargs) → keras.engine.training.Model[source]

Build an uncompiled shallow-and-wide CNN model.

Parameters:
  • kernel_sizes_cnn – list of kernel sizes of convolutions.
  • filters_cnn – number of filters for convolutions.
  • dense_size – number of units for dense layer.
  • coef_reg_cnn – l2-regularization coefficient for convolutions.
  • coef_reg_den – l2-regularization coefficient for dense layers.
  • dropout_rate – dropout rate used after convolutions and between dense layers.
  • kwargs – other unused parameters
Returns:

uncompiled instance of Keras Model

Return type:

keras.models.Model
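
An illustrative parameter block for selecting this architecture via model_name; the values are examples only:

    cnn_params = {
        "model_name": "cnn_model",
        "kernel_sizes_cnn": [1, 2, 3],  # three parallel convolutional branches
        "filters_cnn": 256,
        "dense_size": 100,
        "coef_reg_cnn": 1e-4,
        "coef_reg_den": 1e-4,
        "dropout_rate": 0.5,
    }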

dcnn_model(kernel_sizes_cnn: List[int], filters_cnn: int, dense_size: int, coef_reg_cnn: float = 0.0, coef_reg_den: float = 0.0, dropout_rate: float = 0.0, **kwargs) → keras.engine.training.Model[source]

Build an uncompiled deep CNN model.

Parameters:
  • kernel_sizes_cnn – list of kernel sizes of convolutions.
  • filters_cnn – number of filters for convolutions.
  • dense_size – number of units for dense layer.
  • coef_reg_cnn – l2-regularization coefficient for convolutions.
  • coef_reg_den – l2-regularization coefficient for dense layers.
  • dropout_rate – dropout rate used after convolutions and between dense layers.
  • kwargs – other unused parameters
Returns:

uncompiled instance of Keras Model

Return type:

keras.models.Model

cnn_model_max_and_aver_pool(kernel_sizes_cnn: List[int], filters_cnn: int, dense_size: int, coef_reg_cnn: float = 0.0, coef_reg_den: float = 0.0, dropout_rate: float = 0.0, **kwargs) → keras.engine.training.Model[source]

Build an uncompiled shallow-and-wide CNN model where average pooling after the convolutions is replaced with a concatenation of average and max pooling.

Parameters:
  • kernel_sizes_cnn – list of kernel sizes of convolutions.
  • filters_cnn – number of filters for convolutions.
  • dense_size – number of units for dense layer.
  • coef_reg_cnn – l2-regularization coefficient for convolutions. Default: 0.0.
  • coef_reg_den – l2-regularization coefficient for dense layers. Default: 0.0.
  • dropout_rate – dropout rate used after convolutions and between dense layers. Default: 0.0.
  • kwargs – other unused parameters
Returns:

uncompiled instance of Keras Model

Return type:

keras.models.Model

bilstm_model(units_lstm: int, dense_size: int, coef_reg_lstm: float = 0.0, coef_reg_den: float = 0.0, dropout_rate: float = 0.0, rec_dropout_rate: float = 0.0, **kwargs) → keras.engine.training.Model[source]

Build an uncompiled BiLSTM model.

Parameters:
  • units_lstm (int) – number of units for LSTM.
  • dense_size (int) – number of units for dense layer.
  • coef_reg_lstm (float) – l2-regularization coefficient for LSTM. Default: 0.0.
  • coef_reg_den (float) – l2-regularization coefficient for dense layers. Default: 0.0.
  • dropout_rate (float) – dropout rate to be used after BiLSTM and between dense layers. Default: 0.0.
  • rec_dropout_rate (float) – dropout rate for LSTM. Default: 0.0.
  • kwargs – other unused parameters
Returns:

uncompiled instance of Keras Model

Return type:

keras.models.Model
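
An illustrative parameter block for the recurrent builders, using this method as an example; the values are assumptions:

    bilstm_params = {
        "model_name": "bilstm_model",
        "units_lstm": 128,
        "dense_size": 100,
        "coef_reg_lstm": 1e-4,
        "coef_reg_den": 1e-4,
        "dropout_rate": 0.5,
        "rec_dropout_rate": 0.5,
    }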

bilstm_bilstm_model(units_lstm_1: int, units_lstm_2: int, dense_size: int, coef_reg_lstm: float = 0.0, coef_reg_den: float = 0.0, dropout_rate: float = 0.0, rec_dropout_rate: float = 0.0, **kwargs) → keras.engine.training.Model[source]

Build an uncompiled two-layer BiLSTM model.

Parameters:
  • units_lstm_1 – number of units for the first LSTM layer.
  • units_lstm_2 – number of units for the second LSTM layer.
  • dense_size – number of units for dense layer.
  • coef_reg_lstm – l2-regularization coefficient for LSTM. Default: 0.0.
  • coef_reg_den – l2-regularization coefficient for dense layers. Default: 0.0.
  • dropout_rate – dropout rate to be used after BiLSTM and between dense layers. Default: 0.0.
  • rec_dropout_rate – dropout rate for LSTM. Default: 0.0.
  • kwargs – other unused parameters
Returns:

uncompiled instance of Keras Model

Return type:

keras.models.Model

bilstm_cnn_model(units_lstm: int, kernel_sizes_cnn: List[int], filters_cnn: int, dense_size: int, coef_reg_lstm: float = 0.0, coef_reg_cnn: float = 0.0, coef_reg_den: float = 0.0, dropout_rate: float = 0.0, rec_dropout_rate: float = 0.0, **kwargs) → keras.engine.training.Model[source]

Build an uncompiled BiLSTM-CNN model.

Parameters:
  • units_lstm – number of units for LSTM.
  • kernel_sizes_cnn – list of kernel sizes of convolutions.
  • filters_cnn – number of filters for convolutions.
  • dense_size – number of units for dense layer.
  • coef_reg_lstm – l2-regularization coefficient for LSTM. Default: 0.0.
  • coef_reg_cnn – l2-regularization coefficient for convolutions. Default: 0.0.
  • coef_reg_den – l2-regularization coefficient for dense layers. Default: 0.0.
  • dropout_rate – dropout rate to be used after BiLSTM and between dense layers. Default: 0.0.
  • rec_dropout_rate – dropout rate for LSTM. Default: 0.0.
  • kwargs – other unused parameters
Returns:

uncompiled instance of Keras Model

Return type:

keras.models.Model

cnn_bilstm_model(kernel_sizes_cnn: List[int], filters_cnn: int, units_lstm: int, dense_size: int, coef_reg_cnn: float = 0.0, coef_reg_lstm: float = 0.0, coef_reg_den: float = 0.0, dropout_rate: float = 0.0, rec_dropout_rate: float = 0.0, **kwargs) → keras.engine.training.Model[source]

Build an uncompiled CNN-BiLSTM model.

Parameters:
  • kernel_sizes_cnn – list of kernel sizes of convolutions.
  • filters_cnn – number of filters for convolutions.
  • units_lstm – number of units for LSTM.
  • dense_size – number of units for dense layer.
  • coef_reg_cnn – l2-regularization coefficient for convolutions. Default: 0.0.
  • coef_reg_lstm – l2-regularization coefficient for LSTM. Default: 0.0.
  • coef_reg_den – l2-regularization coefficient for dense layers. Default: 0.0.
  • dropout_rate – dropout rate to be used after BiLSTM and between dense layers. Default: 0.0.
  • rec_dropout_rate – dropout rate for LSTM. Default: 0.0.
  • kwargs – other unused parameters
Returns:

uncompiled instance of Keras Model

Return type:

keras.models.Model

bilstm_self_add_attention_model(units_lstm: int, dense_size: int, self_att_hid: int, self_att_out: int, coef_reg_lstm: float = 0.0, coef_reg_den: float = 0.0, dropout_rate: float = 0.0, rec_dropout_rate: float = 0.0, **kwargs) → keras.engine.training.Model[source]

Build an uncompiled BiLSTM model with additive self-attention.

Parameters:
  • units_lstm – number of units for LSTM.
  • self_att_hid – number of hidden units in self-attention
  • self_att_out – number of output units in self-attention
  • dense_size – number of units for dense layer.
  • coef_reg_lstm – l2-regularization coefficient for LSTM. Default: 0.0.
  • coef_reg_den – l2-regularization coefficient for dense layers. Default: 0.0.
  • dropout_rate – dropout rate to be used after BiLSTM and between dense layers. Default: 0.0.
  • rec_dropout_rate – dropout rate for LSTM. Default: 0.0.
  • kwargs – other unused parameters
Returns:

uncompiled instance of Keras Model

Return type:

keras.models.Model

bilstm_self_mult_attention_model(units_lstm: int, dense_size: int, self_att_hid: int, self_att_out: int, coef_reg_lstm: float = 0.0, coef_reg_den: float = 0.0, dropout_rate: float = 0.0, rec_dropout_rate: float = 0.0, **kwargs) → keras.engine.training.Model[source]

Build an uncompiled BiLSTM model with multiplicative self-attention.

Parameters:
  • units_lstm – number of units for LSTM.
  • self_att_hid – number of hidden units in self-attention
  • self_att_out – number of output units in self-attention
  • dense_size – number of units for dense layer.
  • coef_reg_lstm – l2-regularization coefficient for LSTM. Default: 0.0.
  • coef_reg_den – l2-regularization coefficient for dense layers. Default: 0.0.
  • dropout_rate – dropout rate to be used after BiLSTM and between dense layers. Default: 0.0.
  • rec_dropout_rate – dropout rate for LSTM. Default: 0.0.
  • kwargs – other unused parameters
Returns:

uncompiled instance of Keras Model

Return type:

keras.models.Model

bigru_model(units_lstm: int, dense_size: int, coef_reg_lstm: float = 0.0, coef_reg_den: float = 0.0, dropout_rate: float = 0.0, rec_dropout_rate: float = 0.0, **kwargs) → keras.engine.training.Model[source]

Build an uncompiled BiGRU model.

Parameters:
  • units_lstm – number of units for GRU.
  • dense_size – number of units for dense layer.
  • coef_reg_lstm – l2-regularization coefficient for GRU. Default: 0.0.
  • coef_reg_den – l2-regularization coefficient for dense layers. Default: 0.0.
  • dropout_rate – dropout rate to be used after BiGRU and between dense layers. Default: 0.0.
  • rec_dropout_rate – dropout rate for GRU. Default: 0.0.
  • kwargs – other unused parameters
Returns:

uncompiled instance of Keras Model

Return type:

keras.models.Model