Our goal is to provide AI-application developers and researchers with:
A set of pre-trained NLP models, pre-defined dialog system components (ML/DL/Rule-based), and pipeline templates;
A framework for implementing and testing their own dialog models;
Tools for application integration with adjacent infrastructure (messengers, helpdesk software, etc.);
Benchmarking environments for conversational models and uniform access to relevant datasets.
Model is any NLP model that does not necessarily communicate with the user in natural language.
Component is a reusable functional part of a model.
Rule-based Models cannot be trained.
Machine Learning Models can be trained only standalone.
Deep Learning Models can be trained independently or in an end-to-end mode when joined in a chain.
Chainer builds a model pipeline from heterogeneous components (Rule-based/ML/DL). It allows one to train and run inference on the pipeline as a whole.
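The chaining idea can be sketched in plain Python. This is a conceptual illustration only, not DeepPavlov's actual Chainer API: the `Pipeline` class and component functions below are hypothetical, showing how heterogeneous components (here a rule-based tokenizer and a stand-in classifier) are called in sequence, each consuming the previous component's output.

```python
# Conceptual sketch of a chainer-style pipeline (illustrative, not the real API).

class Pipeline:
    """Calls an ordered list of components, feeding each one's output to the next."""

    def __init__(self, components):
        self.components = components  # ordered list of callables

    def __call__(self, x):
        for component in self.components:
            x = component(x)
        return x


def tokenize(text):
    """A rule-based component: lowercase and split on whitespace."""
    return text.lower().split()


def classify(tokens):
    """A stand-in for an ML/DL component: map tokens to a label."""
    return "greeting" if "hello" in tokens else "other"


pipeline = Pipeline([tokenize, classify])
print(pipeline("Hello there"))  # -> greeting
```

In the real library, components are declared in a config file and the pipeline as a whole can also be trained, which a plain function chain like this cannot express.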
The smallest building block of the library is a Component. A Component stands for any kind of function in an NLP pipeline. It can be implemented as a neural network, a non-neural ML model, or a rule-based system. Components can be joined into a Model. A Model solves a larger NLP task than a Component. However, in terms of implementation, Models are not different from Components.
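The last point can be made concrete with a small sketch: because a Model exposes the same callable interface as a Component, a Model built from Components can itself be plugged in as a Component elsewhere. The function names below are illustrative, not DeepPavlov classes.

```python
# Illustrative sketch: Models and Components share one interface (a callable).

def strip_punct(text):
    """Rule-based Component: drop everything except letters, digits, and spaces."""
    return "".join(ch for ch in text if ch.isalnum() or ch.isspace())


def count_words(text):
    """Another simple Component: count whitespace-separated tokens."""
    return len(text.split())


def word_count_model(text):
    """A 'Model' composed of the two Components above."""
    return count_words(strip_punct(text))


# Because it is just a callable, the Model can be reused as a Component
# inside a larger pipeline, exactly like its parts.
print(word_count_model("Hello, world!"))  # -> 2
```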
Most DeepPavlov models are built on top of PyTorch. Other external libraries can also be used to build basic components.