class deeppavlov.models.seq2seq_go_bot.bot.Seq2SeqGoalOrientedBot(network_parameters: Dict, embedder: deeppavlov.core.models.component.Component, source_vocab: deeppavlov.core.models.component.Component, target_vocab: deeppavlov.core.models.component.Component, start_of_sequence_token: str, end_of_sequence_token: str, knowledge_base_keys, save_path: str, load_path: str = None, debug: bool = False, **kwargs)[source]

A goal-oriented bot based on a sequence-to-sequence RNN. For implementation details, see Seq2SeqGoalOrientedBotNetwork. Pretrained on the KvretDatasetReader dataset.

  • network_parameters – parameters passed to the Seq2SeqGoalOrientedBotNetwork object.
  • embedder – word embeddings model, see deeppavlov.models.embedders.
  • source_vocab – vocabulary of input tokens.
  • target_vocab – vocabulary of bot response tokens.
  • start_of_sequence_token – token that marks the start of an input sequence.
  • end_of_sequence_token – token that marks the end of an input sequence and the start of an output sequence.
  • debug – whether to display debug output.
  • **kwargs – parameters passed to the parent NNModel class.
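The start- and end-of-sequence tokens above frame the decoder's input and target the way seq2seq training typically requires. A minimal pure-Python sketch (not DeepPavlov code; the token strings are illustrative assumptions):

```python
# Minimal sketch of how start/end-of-sequence tokens frame a seq2seq
# training pair: the decoder input is shifted right by the SOS token,
# and the decoder target is terminated by the EOS token.
SOS, EOS = "<SOS>", "<EOS>"

def frame_target(response_tokens):
    """Build (decoder_input, decoder_target) from a tokenized response."""
    decoder_input = [SOS] + response_tokens   # decoder sees SOS first
    decoder_target = response_tokens + [EOS]  # decoder must predict EOS last
    return decoder_input, decoder_target

inp, tgt = frame_target(["hello", "there"])
```

At inference time the decoder is seeded with the start token and generation stops when the end token is produced.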
class deeppavlov.models.seq2seq_go_bot.network.Seq2SeqGoalOrientedBotNetwork(hidden_size: int, source_vocab_size: int, target_vocab_size: int, target_start_of_sequence_index: int, target_end_of_sequence_index: int, knowledge_base_entry_embeddings: numpy.ndarray, kb_attention_hidden_sizes: List[int], decoder_embeddings: numpy.ndarray, learning_rate: float, beam_width: int = 1, end_learning_rate: float = None, decay_steps: int = 1000, decay_power: float = 1.0, dropout_rate: float = 0.0, state_dropout_rate: float = 0.0, optimizer: str = 'AdamOptimizer', **kwargs)[source]

The Seq2SeqGoalOrientedBotNetwork is a recurrent network that encodes a user utterance and generates a response in a sequence-to-sequence manner.

The network architecture is similar to that of .

  • hidden_size – RNN hidden layer size.
  • source_vocab_size – size of a vocabulary of encoder tokens.
  • target_vocab_size – size of a vocabulary of decoder tokens.
  • target_start_of_sequence_index – index of a start of sequence token during decoding.
  • target_end_of_sequence_index – index of an end of sequence token during decoding.
  • knowledge_base_entry_embeddings – matrix with embeddings of knowledge base entries, size is (number of entries, embedding size).
  • kb_attention_hidden_sizes – list of sizes for attention hidden units.
  • decoder_embeddings – matrix with embeddings for decoder output tokens, size is (target_vocab_size + number of knowledge base entries, embedding size).
  • beam_width – width of beam search decoding.
  • learning_rate – learning rate during training.
  • end_learning_rate – if set, learning rate starts from learning_rate value and decays polynomially to the value of end_learning_rate.
  • decay_steps – number of steps of learning rate decay.
  • decay_power – power used to calculate learning rate decay for polynomial strategy.
  • dropout_rate – probability of weights’ dropout.
  • state_dropout_rate – probability of rnn state dropout.
  • optimizer – one of tf.train.Optimizer subclasses as a string.
  • **kwargs – parameters passed to a parent TFModel class.
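The learning_rate, end_learning_rate, decay_steps and decay_power parameters together describe a polynomial decay schedule. A minimal pure-Python sketch of that schedule, following the standard tf.train.polynomial_decay formula (an assumption about the exact variant used here):

```python
def polynomial_decay(step, lr_start, lr_end, decay_steps, power):
    """Polynomial learning-rate decay: the rate starts at lr_start and
    decays to lr_end over decay_steps steps, then stays at lr_end."""
    step = min(step, decay_steps)            # clamp past the decay window
    frac = 1.0 - step / decay_steps          # remaining fraction of decay
    return (lr_start - lr_end) * frac ** power + lr_end

# with power=1.0 the decay is linear from lr_start to lr_end
lr_mid = polynomial_decay(500, 1e-3, 1e-4, decay_steps=1000, power=1.0)
```

When end_learning_rate is not set, the bot trains with a constant learning_rate instead.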
load(*args, **kwargs)[source]

Load model parameters from self.load_path

save(*args, **kwargs)[source]

Save model parameters to self.save_path

class deeppavlov.models.seq2seq_go_bot.kb.KnowledgeBase(save_path: str, load_path: str = None, tokenizer: Callable = None, *args, **kwargs)[source]

A custom dictionary that encodes knowledge facts from KvretDatasetReader data.


>>> from deeppavlov.models.seq2seq_go_bot.kb import KnowledgeBase
>>> kb = KnowledgeBase(save_path="kb.json", load_path="kb.json")
>>> kb.fit(['person1'], [['name', 'hair', 'eyes']], [[{'name': 'Sasha', 'hair': 'long   dark', 'eyes': 'light blue '}]])

>>> kb(['person1'])
[[('sasha_name', 'Sasha'), ('sasha_hair', 'long   dark'), ('sasha_eyes', 'light blue ')]]

>>> kb(['person_that_doesnt_exist'])
[[]]
  • save_path – path to save the dictionary with knowledge.
  • load_path – path to load the json with knowledge.
  • tokenizer – tokenizer used to split entity values into tokens (inputs batch of strings and outputs batch of lists of tokens).
  • **kwargs – parameters passed to parent Estimator.
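The doctest above shows that each stored entry is flattened into (key, value) pairs whose keys combine a normalized entity prefix with the attribute name (e.g. 'sasha_name', 'sasha_hair'). A minimal pure-Python sketch of that key construction (not the DeepPavlov implementation; the choice of 'name' as the primary field is an assumption inferred from the example):

```python
def entry_to_facts(entry, primary="name"):
    """Flatten one knowledge-base entry (a dict of attributes) into
    (key, value) pairs, prefixing every key with the normalized
    primary value: lowercased, with spaces replaced by underscores."""
    prefix = entry[primary].strip().lower().replace(" ", "_")
    return [(f"{prefix}_{attr}", value) for attr, value in entry.items()]

facts = entry_to_facts({'name': 'Sasha', 'hair': 'long   dark', 'eyes': 'light blue '})
```

This reproduces the ('sasha_name', 'Sasha') style of keys returned when the knowledge base is queried.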
class deeppavlov.models.seq2seq_go_bot.kb.KnowledgeBaseEntityNormalizer(remove: bool = False, denormalize: bool = False, **kwargs)[source]

Uses instance of KnowledgeBase to normalize or to undo normalization of entities in the input utterance.

To normalize is to substitute all mentions of database entities with their normalized form.

To undo normalization is to substitute all mentions of database normalized entities with their original form.


>>> from deeppavlov.models.seq2seq_go_bot.kb import KnowledgeBase
>>> kb = KnowledgeBase(save_path="kb.json", load_path="kb.json", tokenizer=lambda strings: [s.split() for s in strings])
>>> kb.fit(['person1'], [['name', 'hair', 'eyes']], [[{'name': 'Sasha', 'hair': 'long   dark', 'eyes': 'light blue '}]])
>>> kb(['person1'])
[[('sasha_name', ['Sasha']), ('sasha_hair', ['long', 'dark']), ('sasha_eyes', ['light', 'blue'])]]

>>> from deeppavlov.models.seq2seq_go_bot.kb import KnowledgeBaseEntityNormalizer
>>> normalizer = KnowledgeBaseEntityNormalizer(denormalize=False, remove=False)
>>> normalizer([["some", "guy", "with", "long", "dark", "hair", "said", "hi"]], kb(['person1']))
[['some', 'guy', 'with', 'sasha_hair', 'hair', 'said', 'hi']]

>>> denormalizer = KnowledgeBaseEntityNormalizer(denormalize=True)
>>> denormalizer([['some', 'guy', 'with', 'sasha_hair', 'hair', 'said', 'hi']], kb(['person1']))
[['some', 'guy', 'with', 'long', 'dark', 'hair', 'said', 'hi']]

>>> remover = KnowledgeBaseEntityNormalizer(denormalize=False, remove=True)
>>> remover([["some", "guy", "with", "long", "dark", "hair", "said", "hi"]], kb(['person1']))
[['some', 'guy', 'with', 'hair', 'said', 'hi']]
  • denormalize – flag indicating whether to normalize or to undo normalization (“denormalize”).
  • remove – flag indicating whether to remove entity mentions while normalizing (denormalize=False); ignored when denormalize=True.
  • **kwargs – parameters passed to parent Component class.
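The behavior shown in the doctests above can be sketched in a few lines of pure Python (not the DeepPavlov implementation): normalization replaces each run of tokens matching an entity's value with the entity's key (or drops it when remove=True), and denormalization expands each key back into its value tokens.

```python
def normalize(tokens, entities, remove=False):
    """Replace each mention of an entity's value tokens with its key.
    `entities` is a list of (key, value_tokens) pairs, e.g. the output
    of querying the knowledge base. With remove=True, mentions are
    dropped instead of being replaced."""
    out, i = [], 0
    while i < len(tokens):
        for key, value_tokens in entities:
            n = len(value_tokens)
            if n and tokens[i:i + n] == value_tokens:
                if not remove:
                    out.append(key)
                i += n
                break
        else:
            out.append(tokens[i])
            i += 1
    return out

def denormalize(tokens, entities):
    """Substitute normalized entity keys back with their value tokens."""
    mapping = dict(entities)
    out = []
    for tok in tokens:
        out.extend(mapping.get(tok, [tok]))
    return out

entities = [('sasha_hair', ['long', 'dark'])]
normalized = normalize(['some', 'guy', 'with', 'long', 'dark', 'hair'], entities)
```

Normalization followed by denormalization with the same entities recovers the original token sequence, which is why the bot can generate entity keys internally and still produce natural text.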