Goal-Oriented Dialogue Bot

This component of the DeepPavlov Library, also known as Go-Bot, is designed to enable development of ML-driven goal-oriented dialogue bots.

It supports two different approaches to defining the domain model and behavior of a given bot: the DSTC2 dataset format and a (limited) subset of RASA DSLs (Domain-Specific Languages).

To experiment with the Go-Bot you can pick one of the two available pre-trained models designed around the DSTC2 dataset (English), or follow the tutorial for using RASA DSLs.

Quick DSTC2-Based Demos

To quickly try out the Go-Bot capabilities, you can use one of the two available pretrained models for the DSTC2 dataset (English). Check them out by running this code:

from deeppavlov import build_model, configs

bot1 = build_model(configs.go_bot.gobot_dstc2, download=True)

bot1(['hi, i want restaurant in the cheap pricerange'])
bot1(['bye'])

bot2 = build_model(configs.go_bot.gobot_dstc2_best, download=True)

bot2(['hi, i want chinese restaurant'])
bot2(['bye'])

If some required packages are missing, install all the requirements by running the following command:

python -m deeppavlov install gobot_dstc2

Intro

The Go-Bot is based on [1], which introduces Hybrid Code Networks (HCNs) that combine an RNN with domain-specific knowledge and system action templates.

Diagram 1. Dotted lines correspond to optional modules, black squares to trained models, and trapezes to modules that depend on the dataset and must be provided by the software developer.

Here is a simple example of interaction with a trained dialogue bot (available for download):

Note

Models that rely on fastText word embeddings will require 8.5 GB of disk space; those that use only one-hot encoding of words will require less than 300 MB.

>> Hello, welcome to the Cambridge restaurant system. You can ask for restaurants by area, price range or food type. How may I help you?
x::hi, i want cheap food
2018-05-16 12:28:48.459 INFO in 'deeppavlov.models.go_bot.bot'['bot'] at line 346: Made api_call with {'pricerange': 'cheap'}, got 22 results.
>> The lucky star serves chinese food in the cheap price range.
x::is there anything with french food?
2018-05-16 12:29:02.795 INFO in 'deeppavlov.models.go_bot.bot'['bot'] at line 346: Made api_call with {'pricerange': 'cheap', 'food': 'french'}, got 0 results.
>> Sorry there is no french restaurant in the cheap price range.
x::ok, maybe there is something in expensive pricerange?
2018-05-16 12:29:23.337 INFO in 'deeppavlov.models.go_bot.bot'['bot'] at line 346: Made api_call with {'pricerange': 'expensive', 'food': 'french'}, got 2 results.
>> Cote serves french food in the expensive price range.
x::in what part of town is it?
2018-05-16 12:29:31.193 INFO in 'deeppavlov.models.go_bot.bot'['bot'] at line 346: Made api_call with {'pricerange': 'expensive', 'food': 'french'}, got 2 results.
>> Restaurant two two is a nice place in the north of town serving tasty french food.
x::ok, give me their phone number
>> The phone number of restaurant two two is 01223 351880.
x::address and post code too
>> Restaurant two two is on 22 chesterton road chesterton, c.b 4, 3 a.x.
x::ok, thank you!
>> Restaurant two two serves french food in the expensive price range.
x::and if i wanted an italian restuarant?
2018-05-16 12:30:15.98 INFO in 'deeppavlov.models.go_bot.bot'['bot'] at line 346: Made api_call with {'pricerange': 'expensive', 'food': 'italian'}, got 4 results.
>> Frankie and bennys serves italian food in the expensive price range.
x::more cheaply priced?
2018-05-16 12:30:36.560 INFO in 'deeppavlov.models.go_bot.bot'['bot'] at line 346: Made api_call with {'pricerange': 'cheap', 'food': 'italian'}, got 5 results.
>> Da vinci pizzeria serves italian food in the cheap price range.
x::ok, give me their phone number and address, please
>> The phone number of da vinci pizzeria is 01223 351707 and it is on 20 milton road chesterton.
x::thanks
>> Da vinci pizzeria serves italian food in the cheap price range.
x::bye
>> You are welcome!

Quick Start: DSTC2

Building Goal-Oriented Bot Using DSTC-2

DSTC is a series of competitions originally known as the “Dialog State Tracking Challenges” (DSTC, for short). Starting as an initiative to provide a common testbed for the task of Dialog State Tracking, the first challenge was organized in 2012-2013, followed by DSTC2&3 in 2014, DSTC4 in 2015, and DSTC5 in 2016. Given the remarkable success of the first five editions, and understanding both the complexity of the dialog phenomenon and the interest of the research community in a wider variety of dialog-related problems, the DSTC rebranded itself as “Dialog System Technology Challenges” for its sixth edition. DSTC6 and DSTC7 were then completed in 2017 and 2018, respectively.

DSTC-2 released a large number of training dialogs related to restaurant search. Compared to the first DSTC (which was in the bus timetables domain), DSTC-2 introduced changing user goals and tracking of ‘requested slots’, as well as the new restaurants domain.

Historically, DeepPavlov’s Go-Bot used this DSTC-2 approach to defining the domain model and behavior of goal-oriented bots. In this section you will learn how to use this approach to build a DSTC-2-based Go-Bot.

Requirements

TO TRAIN a go_bot model you should have:

  1. (optional, but recommended) pretrained named entity recognition model (NER)

  2. (optional, but recommended) pretrained intents classifier model

  3. (optional) any sentence (word) embeddings for English

TO INFER from a go_bot model you should additionally have:

  1. pretrained vocabulary of dataset utterance tokens

    • it is trained in the same config as the go_bot model

  2. pretrained goal-oriented bot model

    • config configs/go_bot/gobot_dstc2.json is recommended

    • slot_filler section of go_bot’s config should match NER’s configuration

    • intent_classifier section of go_bot’s config should match classifier’s configuration

Configs

For a working example config, see configs/go_bot/gobot_dstc2.json (a model without embeddings).

A minimal model without slot_filler, intent_classifier and embedder is configured in configs/go_bot/gobot_dstc2_minimal.json.

The best state-of-the-art model (with an attention mechanism; it relies on an embedder and does not use bag-of-words) is configured in configs/go_bot/gobot_dstc2_best.json.
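
To see which pipeline components a particular config wires together, you can load it as a plain dictionary. The snippet below is a minimal sketch that assumes the standard chainer/pipe layout of DeepPavlov configs and uses the read_json helper; consult the config files themselves for the exact contents:

from deeppavlov import configs
from deeppavlov.core.common.file import read_json

# Compare which pipeline components the full and the minimal DSTC2 configs use.
for config_path in (configs.go_bot.gobot_dstc2, configs.go_bot.gobot_dstc2_minimal):
    config = read_json(config_path)
    components = [c.get('class_name', '<component>') for c in config['chainer']['pipe']]
    print(config_path.name, '->', components)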

Usage example

To interact with a pretrained go_bot model from the command line, run:

python -m deeppavlov interact <path_to_config> [-d]

where <path_to_config> is one of the provided config files.

You can also train your own model by running:

python -m deeppavlov train <path_to_config> [-d]

The -d parameter downloads

  • data required to train your model (embeddings, etc.);

  • a pretrained model, if available (not provided for all configs).

Pretrained models are available for the DSTC2 configs gobot_dstc2.json and gobot_dstc2_best.json.

After downloading the required files you can use the configs in your Python code. To infer from a pretrained model with config path <path_to_config>:

from deeppavlov import build_model

CONFIG_PATH = '<path_to_config>'
model = build_model(CONFIG_PATH)

# Start the dialogue with an empty utterance; type 'exit' to stop the loop.
utterance = ""
while utterance != 'exit':
    print(">> " + model([utterance])[0])
    utterance = input(':: ')

Config parameters

To configure your own pipelines that contain a "go_bot" component, refer to the documentation for the GoalOrientedBot and GoalOrientedBotNetwork classes.
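
As a rough illustration, the sketch below locates the "go_bot" component inside a loaded config and prints some of the parameters mentioned in this documentation. It assumes the config identifies components by a class_name key and that the listed parameter names follow the DSTC2 config; check the GoalOrientedBot documentation for the full parameter set:

from deeppavlov import configs
from deeppavlov.core.common.file import read_json

config = read_json(configs.go_bot.gobot_dstc2)

# Find the "go_bot" component of the pipeline and inspect some of its parameters.
go_bot_component = next(c for c in config['chainer']['pipe']
                        if c.get('class_name') == 'go_bot')
for key in ('template_path', 'template_type', 'slot_filler',
            'intent_classifier', 'database', 'api_call_action'):
    print(key, '=', go_bot_component.get(key))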

Datasets

DSTC2

The Hybrid Code Network model was trained and evaluated on a modification of the dataset from the Dialogue State Tracking Challenge 2 [2]. The modifications were as follows:

  • new turns with api calls

    • added api_calls to restaurant database (example: {"text": "api_call area=\"south\" food=\"dontcare\" pricerange=\"cheap\"", "dialog_acts": ["api_call"]})

  • new actions

    • bot dialog actions were concatenated into one action (example: {"dialog_acts": ["ask", "request"]} -> {"dialog_acts": ["ask_request"]})

    • if a slot key was associated with the dialog action, the new act was a concatenation of an act and a slot key (example: {"dialog_acts": ["ask"], "slot_vals": ["area"]} -> {"dialog_acts": ["ask_area"]})

  • new train/dev/test split

    • the original DSTC2 consisted of dialogs collected with three different MDP policies; the original train and dev datasets (covering two of the policies) were merged and randomly split into train/dev/test

  • minor fixes

    • fixed several dialogs, where actions were wrongly annotated

    • uppercased first letter of bot responses

    • unified punctuation for bot responses

See deeppavlov.dataset_readers.dstc2_reader.DSTC2DatasetReader for implementation.
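
For reference, here is a minimal sketch of loading the modified dataset with this reader; the data_path value is just a placeholder directory (the reader downloads the DSTC2 files into it if they are not already present):

from deeppavlov.dataset_readers.dstc2_reader import DSTC2DatasetReader

# Read the modified DSTC2 data; the result is a dict with 'train', 'valid'
# and 'test' keys, each containing dialog turns.
data = DSTC2DatasetReader().read(data_path='dstc2_data')  # placeholder path
print(len(data['train']), 'training turns')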

Your data

Dialogs

If your model uses DSTC2 and relies on "dstc2_reader" (DSTC2DatasetReader), all the needed files, if not present in the DSTC2DatasetReader.data_path directory, will be downloaded from the web.

If your model needs to be trained on different data, you have several ways of achieving that (sorted by increasing amount of code):

  1. Use "dialog_iterator" in dataset iterator config section and "dstc2_reader" in dataset reader config section (the simplest, but not the best way):

    • set dataset_reader.data_path to your data directory;

    • your data files should have the same format as expected in DSTC2DatasetReader.read() method.

  2. Use "dialog_iterator" in dataset iterator config section and "your_dataset_reader" in dataset reader config section (recommended):

    • clone deeppavlov.dataset_readers.dstc2_reader.DSTC2DatasetReader to YourDatasetReader;

    • register as "your_dataset_reader";

    • rewrite it so that it implements the same interface as the original. In particular, YourDatasetReader.read() must have the same output as DSTC2DatasetReader.read() (a minimal reader skeleton is sketched after this list):

      • train — training dialog turns consisting of tuples:

        • the first tuple element contains the user’s utterance info (as a dictionary with the following fields):

          • text — utterance string

          • intents — list of string intents, associated with user’s utterance

          • db_result — a database response (optional)

          • episode_done — set to true if the current utterance is the start of a new dialog, and false (or omitted) otherwise (optional)

        • the second tuple element contains the system’s response info

          • text — utterance string

          • act — an act, associated with the system’s utterance

      • valid — validation dialog turns in the same format

      • test — test dialog turns in the same format

  3. Use your own dataset iterator and dataset reader (if option 2 doesn’t work for you).
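
Below is a minimal, hypothetical skeleton of such a custom reader (option 2 above). The class name, registration name, and the hard-coded example turn are placeholders; they only illustrate the output structure described in this list:

from deeppavlov.core.common.registry import register
from deeppavlov.core.data.dataset_reader import DatasetReader


@register('your_dataset_reader')  # the name to use in the dataset_reader config section
class YourDatasetReader(DatasetReader):

    def read(self, data_path: str, **kwargs) -> dict:
        # A real reader would parse your own files found in data_path.
        # Each turn is a (user_info, system_info) tuple as described above;
        # the intent and act labels below are placeholders.
        example_turn = (
            {'text': 'hi, i want cheap food',
             'intents': ['inform_pricerange'],
             'episode_done': True},
            {'text': 'The lucky star serves chinese food in the cheap price range.',
             'act': 'inform_pricerange_inform_food_offer_name'}
        )
        return {'train': [example_turn], 'valid': [], 'test': []}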

Templates

You should provide a mapping from actions to text templates in the following format:

action1<tab>template1
action2<tab>template2
...
actionN<tab>templateN

where slots to be filled in the templates should start with “#” and must not contain whitespace.

For example,

bye You are welcome!
canthear  Sorry, I can't hear you.
expl-conf_area  Did you say you are looking for a restaurant in the #area of town?
inform_area+inform_food+offer_name  #name is a nice place in the #area of town serving tasty #food food.

It is recommended to use the "DefaultTemplate" value for the template_type parameter.
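
To make the “#” convention concrete, here is a toy, self-contained illustration of filling such a template; this is not the actual DeepPavlov template engine, only a sketch of the format:

import re

# A template in the format described above: slot placeholders start with "#".
template = "#name is a nice place in the #area of town serving tasty #food food."

# Slot values as they might come from the slot filler or the database.
slots = {'name': 'The lucky star', 'area': 'south', 'food': 'chinese'}

# Substitute every "#slot" occurrence with its value.
filled = re.sub(r'#(\w+)', lambda m: slots.get(m.group(1), m.group(0)), template)
print(filled)
# The lucky star is a nice place in the south of town serving tasty chinese food.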

Quick Start: RASA DSLs

Building Goal-Oriented Bot Using RASA DSLs

While the DSTC-2 schema format is quite rich, preparing this kind of dataset with all the required annotations might be challenging. To simplify the process of building goal-oriented bots using DeepPavlov technology, we have introduced (limited) support for defining them using RASA DSLs.

DSLs, or Domain-Specific Languages, provide a rich mechanism for defining the behavior, or “the what”, while the underlying system uses a parser to transform these definitions into commands that implement this behavior, or “the how”, using the system’s components.

RASA.ai is another well-known open-source Conversational AI framework. Its approach to defining the domain model and behavior of goal-oriented bots makes building simple goal-oriented bots quite easy. In this section you will learn how to use the key parts of RASA DSLs (configuration files) to build your own goal-oriented chatbot based on DeepPavlov’s Go-Bot framework.

We encourage you to read the tutorial notebook to get a better understanding of how to build basic and more advanced goal-oriented bots with these RASA DSLs.

Note: As mentioned in our blog post, this is the very beginning of our work focused on supporting RASA DSLs as a way to configure DeepPavlov-based goal-oriented chatbots.

While there are several configuration files used by the RASA platform, each with its own corresponding DSL (mostly re-purposed Markdown and YAML), for now DeepPavlov’s Go-Bot supports only three essential files: stories.md, nlu.md, and domain.yml.

These files allow you to define user stories that match intents and bot actions, intents with slots and entities, and the training data for the NLU components.

In this release, only a subset of the functionality of these files is supported.

stories.md

stories.md is a mechanism used to teach your chatbot how to respond to user messages. It allows you to control your chatbot’s dialog management.

The full RASA functionality is described in the original documentation.

The format supported by DeepPavlov is the subset of features described in “What makes up a story” section.

The original format features are: User Messages, Actions, Events, Checkpoints, OR Statements, End-to-End Story Evaluation Format.

  • We do support all the functionality of the User Messages format feature.

  • We do support only utterance actions of the Actions format feature; custom actions are not supported yet.

  • We do not support the Events, Checkpoints, and OR Statements format features.

format

See the original documentation for the detailed stories.md format description.

The stories file is a Markdown file of the following format:

## story_title(not used by algorithm, but useful to work with for humans)
* user_action_label{"1st_slot_present_in_action": "slot1_value", .., "Nth_slot_present_in_action": "slotN_value"}
 - system_respective_utterance
* another_user_action_of_the_same_format
  - another_system_response
...

## another_story_title
...

nlu.md

nlu.md represents the NLU model of your chatbot. It allows you to provide training examples that show how your chatbot should understand user messages, and then train a model on these examples.

We do support the format described in the Markdown format section of the original RASA documentation with the following limitations:

  • an extended entities annotation format ([<entity-text>]{"entity": "<entity name>", "role": "<role name>", ...}) is not supported

  • synonyms, regex features and lookup tables format features are not supported

format

See the original documentation on the RASA NLU Markdown format for the detailed nlu.md format description.

The NLU file is a Markdown file of the following format:

## intent:possible_user_action_label_1
- An example of user text that has the possible_user_action_label_1 action label
- Another example of user text that has the possible_user_action_label_1 action label
...

## intent:possible_user_action_label_N
- An example of user text that has the possible_user_action_label_N action label, with a [slot value](slot1_name) annotated inline
<!-- Slotfilling dataset is provided as an inline markup of user texts -->
...

domain.yml

domain.yml helps you to define the universe your chatbot lives in: what user inputs it expects to get, what actions it should be able to predict, how to respond, and what information to store.

The format supported by DeepPavlov is the same as that described in the original documentation, with the following limitations:

  • only textual slots are allowed

  • only slot classes are allowed as entity classes

  • only textual response actions are allowed, currently without variable support

format

See the original documentation on the RASA Domains YAML config format for the detailed domain.yml format description.

The domain file is a YAML file of the following format:

# slots section lists the possible slot names (aka slot types)
# that are used in the domain (i.e. relevant for bot's tasks)
# currently only type: text is supported
slots:
  slot1_name:
    type: text
  ...
  slotN_name:
    type: text

# the entities list mirrors the 2nd-level keys of the slots list
# and is present to support upcoming features. Stay tuned for updates!
entities:
- slot1_name
...
- slotN_name

# the intents section lists the intents that can appear in the stories;
# together they describe the user-side part of the go-bot's experience
intents:
  - user_action_label
  - another_user_action_of_the_same_format
  ...

# the responses section lists the system response templates.
# Although system response titles are usually informative by themselves
#   (one could even find them sufficient when no actual "Natural Language" is needed,
#    e.g. for button actions in bot apps),
# it is still extremely useful to be able to serialize a response title to text.
# That is what this section is for.
responses:
  system_utterance_1:
    - text: "The text that system responds with"
  another_system_response:
    - text: "Here some text again"

Database (Optional)

If your dataset doesn’t involve any API calls to an external database, just do not set the database and api_call_action parameters and skip the rest of this section.

Otherwise, you should either

  1. provide an SQL table with the requested items, or

  2. construct such a table from the db_result items provided in the train samples. This can be done with the following command:

    python -m deeppavlov train configs/go_bot/database_<your_dataset>.json
    

    where configs/go_bot/database_<your_dataset>.json is a copy of configs/go_bot/database_dstc2.json with save_path, primary_keys, and unknown_value configured. A rough sketch of what the resulting database component does is shown below.
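
For reference, here is a rough sketch using the Sqlite3Database class; the constructor arguments mirror the save_path, primary_keys and unknown_value parameters mentioned above, while the records, path and slot values are made-up examples:

from deeppavlov.core.data.sqlite_database import Sqlite3Database

# Build a table from db_result-like records; here "name" is assumed to be the primary key.
database = Sqlite3Database(primary_keys=['name'],
                           save_path='my_bot/db.sqlite',
                           unknown_value='UNK')
database.fit([{'name': 'the lucky star', 'food': 'chinese', 'pricerange': 'cheap'},
              {'name': 'da vinci pizzeria', 'food': 'italian', 'pricerange': 'cheap'}])

# At inference time the bot queries the database with the currently tracked slots.
print(database([{'food': 'italian', 'pricerange': 'cheap'}]))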

Comparison

Scores for different modifications of our bot model and comparison with existing benchmarks:

| Dataset           | Lang | Model                                     | Metric        | Test  | Downloads |
|-------------------|------|-------------------------------------------|---------------|-------|-----------|
| DSTC 2 (modified) | En   | basic bot                                 | Turn Accuracy | 0.380 | 10 Mb     |
| DSTC 2 (modified) | En   | bot with slot filler                      | Turn Accuracy | 0.542 | 400 Mb    |
| DSTC 2 (modified) | En   | bot with slot filler, intents & attention | Turn Accuracy | 0.553 | 8.5 Gb    |
| DSTC 2            | En   | Bordes and Weston (2016) [3]              | Turn Accuracy | 0.411 |           |
| DSTC 2            | En   | Eric and Manning (2017) [4]               | Turn Accuracy | 0.480 |           |
| DSTC 2            | En   | Perez and Liu (2016) [5]                  | Turn Accuracy | 0.487 |           |
| DSTC 2            | En   | Williams et al. (2017) [1]                | Turn Accuracy | 0.556 |           |