License: Apache 2.0 | Python 3.6 | TensorFlow 1.4

Dialogue Bot for goal-oriented tasks

The dialogue bot is based on [1] which introduces Hybrid Code Networks (HCNs) that combine an RNN with domain-specific knowledge and system action templates.

Diagram 1. Dotted lines correspond to optional modules, black squares to trained models, and trapezes to modules that depend on a dataset and must be provided by the software developer.

Here is a simple example of interaction with a trained dialogue bot (can be downloaded with deeppavlov/

>> Hello, welcome to the Phystech restaurant system. You can ask for restaurants by area, price range or food type. How may I help you?
:: hi, i want some food
>> What part of town do you have in mind?
:: north of Dolgoprudny
>> Would you like something in the cheap, moderate, or expensive price range?
:: something chep, please
>> "Time to eat" is a nice place in the north of town and the prices are cheap.
:: what kind of food does it serve? 
>> "Time to eat" serves russian food.
:: i like that! give me their address then
>> Sure, "Time to eat" is on Sobina Square, 1/4.
:: and phone number. In case i wanted anything for delivery.
>> The phone number of "Time to eat" is +7(965)173-37-33.
:: I won't need that, but give me their post code too.
>> The post code of "Time to eat" is 141700.
:: nice, thanks!
>> "Time to eat" is a nice place in the centre of town serving tasty creative food.
:: good bye
>> You are welcome!
:: exit



To use a go_bot model you should have:

  1. (optional, but recommended) a pretrained named entity recognition (NER) model
  2. (optional, but recommended) a pretrained intents classifier model
  3. (optional) downloaded English fastText embeddings trained on Wikipedia
    • you can use any English embeddings of your choice, but edit the go_bot config accordingly
  4. pretrained goal-oriented bot model itself
    • config deeppavlov/skills/go_bot/config.json is recommended
    • slot_filler section of go_bot’s config should match NER’s configuration
    • intent_classifier section of go_bot’s config should match classifier’s configuration
    • double-check that the corresponding load_path parameters point to the NER and intent classifier model files
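For illustration, the slot_filler and intent_classifier sections of the bot's config might look like the sketch below. All component names and paths here are hypothetical placeholders; consult the actual config.json for the real field names and values:

```json
{
  "model": {
    "slot_filler": {
      "name": "dstc2_ner",
      "load_path": "path/to/ner/model"
    },
    "intent_classifier": {
      "name": "intents_classifier",
      "load_path": "path/to/intents/model"
    }
  }
}
```

The key point is that each section must mirror the configuration the corresponding model was trained with, and load_path must point to that model's saved files.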

Example configs:

For a working exemplary config see deeppavlov/skills/go_bot/config.json (a model without embeddings).

A minimal model (without slot_filler, intent_classifier and embedder) is configured in deeppavlov/skills/go_bot/config_minimal.json.

A full model (with fastText embeddings) is configured in deeppavlov/skills/go_bot/config_all.json.

Usage example

from deeppavlov.core.commands.infer import build_model_from_config
from deeppavlov.core.commands.utils import set_usr_dir
from deeppavlov.core.common.file import read_json

CONFIG_PATH = 'path/to/config.json'

model = build_model_from_config(read_json(CONFIG_PATH))

utterance = ""
while utterance != 'quit':
    print(">> " + model.infer(utterance))
    utterance = input(':: ')
Alternatively, you can interact with a model from the command line:

cd deeppavlov
python3 interact path/to/config.json


Config parameters

To be used for training, your config JSON file should include the following parameters:

Do not forget to set the train_now parameter to true in the vocabs.word_vocab and model sections.

See deeppavlov/skills/go_bot/config.json for details.
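As a sketch, the relevant flags could look like this (the section layout is illustrative; see the real config for the exact structure):

```json
{
  "vocabs": {
    "word_vocab": {"train_now": true}
  },
  "model": {"train_now": true}
}
```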

Train run

The easiest way to run the training is by using the deeppavlov/ script:

cd deeppavlov
python3 train path/to/config.json



The Hybrid Code Network model was trained and evaluated on a modification of a dataset from Dialogue State Tracking Challenge 2 [2]. The modifications were as follows:

Your data

If your model uses DSTC2 and relies on the dstc2_datasetreader DatasetReader, all needed files, if not present in the dataset_reader.data_path directory, will be downloaded from the internet.
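For example, a dataset_reader section pointing at a local data directory might look like the following sketch (the data_path value is a hypothetical placeholder):

```json
{
  "dataset_reader": {
    "name": "dstc2_datasetreader",
    "data_path": "path/to/dstc2/data"
  }
}
```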

If your model needs to be trained on different data, you have several ways of achieving that (sorted by increasing amount of code):

  1. Use "dialog_dataset" in dataset config section and "dstc2_datasetreader" in dataset reader config section (the simplest, but not the best way):
  2. Use "dialog_dataset" in dataset config section and "your_dataset_reader" in dataset reader config section (recommended):
    • clone deeppavlov.dataset_readers.dstc2_dataset_reader:DSTC2DatasetReader to YourDatasetReader;
    • register as "your_dataset_reader";
    • rewrite it so that it implements the same interface as the original class. In particular, it must produce the same output structure:
      • train — training dialog turns consisting of tuples:
        • first tuple element contains the user's utterance info
          • text — utterance string
          • intents — list of string intents, associated with user’s utterance
          • db_result — a database response (optional)
          • episode_done — set to true, if current utterance is the start of a new dialog, and false (or skipped) otherwise (optional)
        • second tuple element contains the system's response info
          • text — response utterance string
          • act — an act associated with the system's response
      • valid — validation dialog turns in the same format
      • test — test dialog turns in the same format

#TODO: change str act to a list of acts
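To make the expected output format concrete, here is a minimal, self-contained sketch of a reader that produces the structure described above. The class name and the example turn are illustrative, and the deeppavlov @register decorator is omitted so that the snippet runs standalone:

```python
class YourDatasetReader:
    """Illustrative reader producing {'train': ..., 'valid': ..., 'test': ...}
    where each split is a list of (user_turn, system_turn) tuples."""

    @staticmethod
    def _turn(user_text, system_text, system_act,
              intents=None, db_result=None, episode_done=False):
        user = {"text": user_text, "intents": intents or []}
        if db_result is not None:
            user["db_result"] = db_result   # optional database response
        if episode_done:
            user["episode_done"] = True     # marks the start of a new dialog
        system = {"text": system_text, "act": system_act}
        return user, system

    def read(self, data_path=None):
        # A real reader would parse files under data_path; here we return
        # a single hand-written training turn for illustration.
        train = [
            self._turn("hi, i want some food",
                       "What part of town do you have in mind?",
                       "request_area",
                       intents=["inform_food"],
                       episode_done=True),
        ]
        # valid and test must follow the same format as train
        return {"train": train, "valid": [], "test": []}
```

A real implementation would fill train, valid and test from your dialog files while keeping exactly this per-turn dictionary layout.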

  3. Use your own dataset and dataset reader (if option 2 doesn't work for you):


Since our dataset is a modified version of the official DSTC2 dataset [2], the resulting metrics cannot be compared with evaluations on the original dataset.

However, comparisons between bot model modifications trained on our DSTC2 dataset are presented below:

| Model | Config | Test action accuracy | Test turn accuracy |
|---|---|---|---|
| basic bot | config_minimal.json | 0.5271 | 0.4853 |
| bot with slot filler & fastText embeddings | | 0.5305 | 0.5147 |
| bot with slot filler & intents | config.json | 0.5436 | 0.5261 |
| bot with slot filler & intents & embeddings | config_all.json | 0.5307 | 0.5145 |

#TODO: add dialog accuracies


[1] Jason D. Williams, Kavosh Asadi, Geoffrey Zweig, "Hybrid Code Networks: practical and efficient end-to-end dialog control with supervised and reinforcement learning", 2017.

[2] Dialog State Tracking Challenge 2 dataset