Question Answering Model for SQuAD dataset

Task definition

Question Answering on the SQuAD dataset is the task of finding the answer to a question in a given context (e.g., a paragraph from Wikipedia), where the answer to each question is a segment of the context:

Context:

In meteorology, precipitation is any product of the condensation of atmospheric water vapor that falls under gravity. The main forms of precipitation include drizzle, rain, sleet, snow, graupel and hail… Precipitation forms as smaller droplets coalesce via collision with other rain drops or ice crystals within a cloud. Short, intense periods of rain in scattered locations are called “showers”.

Question:

Where do water droplets collide with ice crystals to form precipitation?

Answer:

within a cloud

Datasets that follow this task format include SQuAD, SQuAD 2.0 and SDSJ Task B, all covered in the pretrained models section below.

Models

The SQuAD model in DeepPavlov is based on BERT. The model predicts the start and end positions of the answer in a given context. Its performance is compared with published results in the pretrained models section of this documentation.

BERT

A pretrained BERT can be used for Question Answering on the SQuAD dataset simply by applying two linear transformations to the BERT outputs for each subtoken. The first/second linear transformation is used to predict the probability that the current subtoken is the start/end position of the answer.
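
As a rough illustration, the following sketch (plain PyTorch with assumed tensor shapes and names, not the actual DeepPavlov implementation) shows how per-subtoken start/end probabilities can be computed from BERT outputs with a single linear layer that produces two logits per subtoken:

import torch
import torch.nn as nn

# `hidden_states` stands in for BERT's per-subtoken outputs (assumed shapes).
batch_size, seq_len, hidden_size = 2, 384, 768
hidden_states = torch.randn(batch_size, seq_len, hidden_size)

# One linear layer with two outputs per subtoken is equivalent to two
# separate linear transformations: one for the start, one for the end.
qa_outputs = nn.Linear(hidden_size, 2)

logits = qa_outputs(hidden_states)                      # (batch, seq_len, 2)
start_logits, end_logits = logits.split(1, dim=-1)
start_probs = start_logits.squeeze(-1).softmax(dim=-1)  # P(subtoken is answer start)
end_probs = end_logits.squeeze(-1).softmax(dim=-1)      # P(subtoken is answer end)

# Simple decoding: independently take the most probable start and end positions.
start_idx = start_probs.argmax(dim=-1)
end_idx = end_probs.argmax(dim=-1)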

The PyTorch BERT for SQuAD model is documented in torch_transformers_squad:TorchTransformersSquad.

Configuration

Default configs can be found in the deeppavlov/configs/squad/ folder.

Prerequisites

Before using the model, make sure that all required packages are installed by running the command:

python -m deeppavlov install squad_bert

Running this command installs the requirements for deeppavlov/configs/squad/squad_bert.json.

Model usage from Python

from deeppavlov import build_model

# Build the model from the squad_bert config and download pretrained weights.
model = build_model('squad_bert', download=True)

# The model takes a batch of contexts and a batch of questions.
model(['DeepPavlov is a library for NLP and dialog systems.'], ['What is DeepPavlov?'])
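
The model accepts batches, so several context/question pairs can be processed in one call. The texts below are illustrative; the returned lists follow the out field of the config (for squad_bert, the predicted answers together with their start positions and scores):

contexts = [
    'DeepPavlov is a library for NLP and dialog systems.',
    'In meteorology, precipitation is any product of the condensation '
    'of atmospheric water vapor that falls under gravity.',
]
questions = [
    'What is DeepPavlov?',
    'What is precipitation?',
]
predictions = model(contexts, questions)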

Model usage from CLI

Training

Warning: training with the default config requires about 10 GB of GPU memory. Run the following command to train the model:

python -m deeppavlov train squad_bert
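
Training can also be launched from Python with the same config name (a minimal sketch; the data and the pretrained BERT are downloaded according to the config):

from deeppavlov import train_model

# Downloads the data required by the config and runs training.
model = train_model('squad_bert', download=True)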

Interact mode

Interact mode provides a command-line interface to an already trained model.

To run the model in interact mode, run the following command:

python -m deeppavlov interact squad_bert

The model will ask you to type in a context and a question.

Pretrained models:

SQuAD

The pretrained model is available for download:

python -m deeppavlov download squad_bert

It achieves an F-1 score of ~88 and an EM of ~80 on the SQuAD v1.1 dev set.

In the following table you can find a comparison with published results. Results of the most recent competitive solutions can be found on the SQuAD Leaderboard.

Model (single model)             EM (dev)    F-1 (dev)
DeepPavlov BERT                  81.49       88.86
BiDAF + Self Attention + ELMo    --          85.6
QANet                            75.1        83.8
FusionNet                        75.3        83.6
R-Net                            71.1        79.5
BiDAF                            67.7        77.3

SQuAD with contexts without correct answers

For cases when the answer is not necessarily present in the given context, we provide the squad_noans config with a pretrained model. This model outputs an empty string if there is no answer in the context. squad_noans was trained on the SQuAD 2.0 dataset.

A special trainable no_answer token is added to the output of the self-attention layer; this allows the model to select the no_answer token when the answer is not present in the given context.
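
A minimal sketch of the idea (plain PyTorch with assumed shapes and names, not the actual DeepPavlov implementation): a trainable vector is prepended to the contextual representations, so the span-prediction layers can point at this extra position when no answer exists.

import torch
import torch.nn as nn

batch_size, seq_len, hidden_size = 2, 384, 768        # assumed shapes
context_repr = torch.randn(batch_size, seq_len, hidden_size)

# Trainable no_answer representation, shared across the batch.
no_answer = nn.Parameter(torch.randn(hidden_size))
no_answer_batch = no_answer.expand(batch_size, 1, hidden_size)

# Prepend it, so position 0 now stands for "no answer in this context".
extended = torch.cat([no_answer_batch, context_repr], dim=1)  # (batch, seq_len + 1, hidden)

start_logits = nn.Linear(hidden_size, 1)(extended).squeeze(-1)
predicted_start = start_logits.argmax(dim=-1)
has_answer = predicted_start != 0   # 0 means the model selected the no_answer token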

We got 57.88 EM and 65.91 F-1 on the ground truth Wikipedia article (we used the same Wiki dump as DrQA):

Model config                                                  EM (dev)    F-1 (dev)
DeepPavlov                                                    75.54       83.56
Simple and Effective Multi-Paragraph Reading Comprehension    59.14       67.34
DrQA                                                          49.7        --

The pretrained model is available and can be downloaded (~2.5 GB):

python -m deeppavlov download qa_squad2_bert
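
Once downloaded, the model is used from Python in the same way as the other SQuAD configs; it returns an empty string when no answer is found in the context (the example strings below are illustrative):

from deeppavlov import build_model

model = build_model('qa_squad2_bert', download=True)
# The answer to this question is absent from the context, so an empty string is expected.
model(['DeepPavlov is a library for NLP and dialog systems.'],
      ['Who is the president of France?'])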

SDSJ Task B

The pretrained model is available and can be downloaded:

python -m deeppavlov download squad_ru_bert
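
Usage from Python mirrors the English model (the Russian example below is illustrative):

from deeppavlov import build_model

model = build_model('squad_ru_bert', download=True)
model(['DeepPavlov - библиотека для NLP и диалоговых систем.'],
      ['Что такое DeepPavlov?'])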

Link to SDSJ Task B dataset: http://files.deeppavlov.ai/datasets/sber_squad-v1.1.tar.gz

Model config         EM (dev)    F-1 (dev)
DeepPavlov RuBERT    66.21       84.71