Question Answering Model for SQuAD dataset

Task definition

Question Answering on the SQuAD dataset is the task of finding the answer to a question in a given context (e.g., a paragraph from Wikipedia), where the answer to each question is a segment of the context:

Context:

In meteorology, precipitation is any product of the condensation of atmospheric water vapor that falls under gravity. The main forms of precipitation include drizzle, rain, sleet, snow, graupel and hail… Precipitation forms as smaller droplets coalesce via collision with other rain drops or ice crystals within a cloud. Short, intense periods of rain in scattered locations are called “showers”.

Question:

Where do water droplets collide with ice crystals to form precipitation?

Answer:

within a cloud
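In the SQuAD data format, each answer is stored as the answer text plus its character offset in the context. Below is a minimal sketch of how the example above could look as a SQuAD-style record; the outer "data"/"paragraphs" wrapping is omitted and the id is hypothetical:

# Simplified SQuAD-style record for the example above.
context = ("Precipitation forms as smaller droplets coalesce via collision "
           "with other rain drops or ice crystals within a cloud.")
answer = "within a cloud"
record = {
    "context": context,
    "qas": [{
        "id": "example-1",  # hypothetical id
        "question": "Where do water droplets collide with ice crystals "
                    "to form precipitation?",
        # answer_start is the character offset of the answer span in the context
        "answers": [{"text": answer, "answer_start": context.find(answer)}],
    }],
}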

Datasets that follow this task format:

- Stanford Question Answering Dataset (SQuAD)
- SDSJ Task B

Model

The Question Answering Model is based on R-Net, proposed by Microsoft Research Asia (“R-NET: Machine Reading Comprehension with Self-Matching Networks”), and on its implementation by Wenxuan Zhou.

Model documentation: SquadModel
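The key component of R-Net is the self-matching layer: each passage position attends over the entire passage, so evidence far from the current word can be aggregated into its representation. Below is a minimal NumPy sketch of that idea; it uses scaled dot-product attention for brevity, whereas R-Net itself uses additive attention with a gating mechanism:

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_matching(passage):
    # passage: (seq_len, hidden) matrix of passage encodings
    scores = passage @ passage.T / np.sqrt(passage.shape[-1])
    attn = softmax(scores, axis=-1)  # each row attends over the whole passage
    # concatenate the original encoding with its attended summary
    return np.concatenate([passage, attn @ passage], axis=-1)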

Configuration

The default config can be found at deeppavlov/configs/squad/squad.json

Prerequisites

Before using the model, make sure that all required packages are installed by running the following command:

python -m deeppavlov install squad

Model usage from Python

from deeppavlov import build_model, configs

# download and build the pretrained SQuAD model
model = build_model(configs.squad.squad, download=True)
# the first list holds contexts, the second the matching questions
model(['DeepPavlov is a library for NLP and dialog systems.'], ['What is DeepPavlov?'])
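For each context-question pair, the call returns the answer span extracted from the context, together with the answer's position in the context and a model score.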

Model usage from CLI

Training

Warning: training with the default config requires about 9 GB of GPU memory. Run the following command to train the model:

python -m deeppavlov train deeppavlov/configs/squad/squad.json

Interact mode

Interact mode provides a command-line interface to an already trained model.

To run the model in interact mode, use the following command:

python -m deeppavlov interact deeppavlov/configs/squad/squad.json

The model will ask you to type in a context and a question.

Pretrained models:

SQuAD

A pretrained model is available and can be downloaded (~2.4 GB):

python -m deeppavlov download deeppavlov/configs/squad/squad.json

It achieves ~80 F-1 and ~71 EM on the SQuAD v1.1 dev set.
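EM (exact match) and F-1 are the standard SQuAD metrics: EM checks whether the predicted span matches a reference answer exactly, while F-1 measures token overlap between prediction and reference. A simplified sketch follows; the official SQuAD evaluation script additionally strips punctuation and articles and takes the maximum over all reference answers:

from collections import Counter

def exact_match(prediction, truth):
    return float(prediction.strip().lower() == truth.strip().lower())

def f1_score(prediction, truth):
    pred_tokens = prediction.lower().split()
    true_tokens = truth.lower().split()
    # count tokens shared between prediction and reference
    overlap = sum((Counter(pred_tokens) & Counter(true_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(true_tokens)
    return 2 * precision * recall / (precision + recall)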

The following table compares our model with published results. Results of the most recent competitive solutions can be found on the SQuAD Leaderboard.

Model (single model)          | EM (dev) | F-1 (dev)
DeepPavlov                    | 71.49    | 80.34
BiDAF + Self Attention + ELMo |          | 85.6
QANet                         | 75.1     | 83.8
FusionNet                     | 75.3     | 83.6
R-Net                         | 71.1     | 79.5
BiDAF                         | 67.7     | 77.3

SQuAD with contexts without correct answers

For the case when the answer is not necessarily present in the given context, we provide the squad_noans config with a pretrained model. This model outputs an empty string if there is no answer in the context. The model was not trained on the SQuAD dataset itself: for each question-context pair from SQuAD, we extracted contexts from the same Wikipedia article and ranked them by the tf-idf score between the question and the context. In this manner we built a dataset with contexts that do not contain the answer.
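The ranking step can be illustrated with a short scikit-learn sketch; this is illustrative only, and the exact tokenization and weighting used to build the dataset may differ:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rank_contexts(question, contexts):
    # fit tf-idf on the question together with all candidate contexts
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform([question] + contexts)
    # cosine similarity between the question and every context
    scores = cosine_similarity(matrix[0], matrix[1:]).ravel()
    # highest-scoring contexts first
    return sorted(zip(contexts, scores), key=lambda pair: -pair[1])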

A special trainable no_answer token is added to the output of the self-attention layer; this lets the model select the no_answer token when the answer is not present in the given context.
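A minimal NumPy sketch of this mechanism follows; in the real model the no_answer vector and the scoring weights are trainable parameters, and the names here are illustrative:

import numpy as np

def answer_start_probs(context_repr, no_answer_vec, w):
    # context_repr: (seq_len, hidden) self-attention output
    # no_answer_vec: (hidden,) no_answer representation (trainable in the real model)
    # w: (hidden,) scoring weights
    extended = np.vstack([context_repr, no_answer_vec])  # (seq_len + 1, hidden)
    logits = extended @ w
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return probs  # probs[-1] is the probability that no answer is present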

We obtained 57.88 EM and 65.91 F-1 on the ground-truth Wikipedia article (we used the same Wikipedia dump as DrQA):

Model config                                               | EM (dev) | F-1 (dev)
DeepPavlov                                                 | 57.88    | 65.91
Simple and Effective Multi-Paragraph Reading Comprehension | 59.14    | 67.34
DrQA                                                       | 49.7     |

A pretrained model is available and can be downloaded (~2.5 GB):

python -m deeppavlov download deeppavlov/configs/squad/multi_squad_noans.json

SDSJ Task B

A pretrained model for SDSJ Task B (a Russian reading comprehension dataset) is available and can be downloaded (~4.8 GB):

python -m deeppavlov download deeppavlov/configs/squad/squad_ru.json

Model config | EM (dev) | F-1 (dev)
DeepPavlov   | 60.62    | 80.04