deeppavlov.core.trainers

Trainer classes.

class deeppavlov.core.trainers.FitTrainer(chainer_config: dict, *, batch_size: int = -1, metrics: Iterable[Union[str, dict]] = ('accuracy',), evaluation_targets: Iterable[str] = ('valid', 'test'), show_examples: bool = False, max_test_batches: int = -1, **kwargs)[source]

Trainer class for fitting and evaluating Estimators

Parameters
  • chainer_config – "chainer" block of a configuration file

  • batch_size – batch_size to use for partial fitting (if available) and evaluation, the whole dataset is used if batch_size is negative or zero (default is -1)

  • metrics – iterable of metrics where each metric can be a registered metric name or a dict with name and inputs keys, where name is a registered metric name and inputs is a collection of parameter names from chainer’s inner memory that will be passed to the metric function; the default value of inputs is a concatenation of chainer’s in_y and out fields (default is ('accuracy',))

  • evaluation_targets – data types on which to evaluate trained pipeline (default is ('valid', 'test'))

  • show_examples – a flag used to print inputs, expected outputs and predicted outputs for the last batch in evaluation logs (default is False)

  • max_test_batches – maximum number of batches for pipeline testing and evaluation, ignored if negative (default is -1)

  • **kwargs – additional parameters whose names will be logged but otherwise ignored
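
As a usage sketch, a FitTrainer can be constructed directly from a "chainer" configuration block and fitted through a data iterator. The configuration below is purely illustrative; the component name, model class and paths are assumptions for this sketch, not part of this reference:

    from deeppavlov.core.trainers import FitTrainer
    from deeppavlov.core.data.data_learning_iterator import DataLearningIterator

    # Illustrative "chainer" block with a single fittable component;
    # the component configuration itself is a hypothetical example.
    chainer_config = {
        "in": ["x"],
        "in_y": ["y"],
        "pipe": [
            {
                "class_name": "sklearn_component",  # assumed registered Estimator wrapper
                "model_class": "sklearn.feature_extraction.text:TfidfVectorizer",
                "in": ["x"],
                "fit_on": ["x"],
                "out": ["x_vec"],
                "save_path": "vectorizer/tfidf.pkl",
                "load_path": "vectorizer/tfidf.pkl"
            }
        ],
        "out": ["x_vec"]
    }

    trainer = FitTrainer(chainer_config,
                         batch_size=-1,               # fit on the whole dataset at once
                         metrics=('accuracy',),
                         evaluation_targets=('valid', 'test'))

    # data is a dict of 'train'/'valid'/'test' lists of (input, expected output) pairs
    data = {'train': [('cats are animals', 'animal'), ('roses are plants', 'plant')],
            'valid': [('dogs are animals', 'animal')],
            'test': [('tulips are plants', 'plant')]}
    iterator = DataLearningIterator(data)

    trainer.train(iterator)                           # fits every Estimator in the pipeline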

evaluate(iterator: DataLearningIterator, evaluation_targets: Optional[Iterable[str]] = None) → Dict[str, dict][source]

Run test() on multiple data types using the provided data iterator

Parameters
  • iterator – DataLearningIterator used for evaluation

  • evaluation_targets – iterable of data types to evaluate on

Returns

a dictionary with data types as keys and evaluation reports as values
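
Continuing the sketch above, evaluation over the configured targets might look like this; the exact contents of each per-target report are an assumption based on the test() description below:

    reports = trainer.evaluate(iterator, evaluation_targets=('valid', 'test'))
    for data_type, report in reports.items():
        print(data_type, report)   # each report holds metric values for that data type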

fit_chainer(iterator: Union[DataFittingIterator, DataLearningIterator]) → None[source]

Build the pipeline Chainer and successively fit Estimator components using a provided data iterator

get_chainer() → Chainer[source]

Returns a Chainer built from self.chainer_config for inference
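
For instance, fit_chainer() and get_chainer() can be combined to obtain an inference-ready pipeline; calling the resulting Chainer on a batch of raw inputs is an assumption about the Chainer interface, not something documented in this section:

    trainer.fit_chainer(iterator)      # fit all Estimator components on the data
    chainer = trainer.get_chainer()    # build an inference-mode pipeline
    outputs = chainer(['a small batch', 'of raw inputs'])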

test(data: Iterable[Tuple[Collection[Any], Collection[Any]]], metrics: Optional[Collection[Metric]] = None, *, start_time: Optional[float] = None, show_examples: Optional[bool] = None) → dict[source]

Calculate metrics on the provided data for the currently stored Chainer and return a report

Parameters
  • data – iterable of batches of inputs and expected outputs

  • metrics – collection of Metric namedtuples, each containing a name for the report, a metric function and its input names (if omitted, self.metrics is used)

  • start_time – start time for test report

  • show_examples – a flag used to return inputs, expected outputs and predicted outputs for the last batch in a result report (if omitted, self.show_examples is used)

Returns

a report dict containing the calculated metrics, the time spent, the number of examples in the tested data and, optionally, the examples themselves
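
A small sketch of calling test() directly, assuming the iterator exposes a gen_batches() method yielding (inputs, expected outputs) batches:

    valid_batches = iterator.gen_batches(batch_size=32, data_type='valid')
    report = trainer.test(valid_batches, show_examples=True)
    print(report)   # metric values, time spent, examples count and (optionally) examples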

train(iterator: Union[DataFittingIterator, DataLearningIterator]) → None[source]

Calls fit_chainer() with the provided data iterator as an argument

class deeppavlov.core.trainers.NNTrainer(chainer_config: dict, *, batch_size: int = 1, epochs: int = -1, start_epoch_num: int = 0, max_batches: int = -1, metrics: Iterable[Union[str, dict]] = ('accuracy',), train_metrics: Optional[Iterable[Union[str, dict]]] = None, metric_optimization: str = 'maximize', evaluation_targets: Iterable[str] = ('valid', 'test'), show_examples: bool = False, tensorboard_log_dir: Optional[Union[Path, str]] = None, max_test_batches: int = -1, validate_first: bool = True, validation_patience: int = 5, val_every_n_epochs: int = -1, val_every_n_batches: int = -1, log_every_n_batches: int = -1, log_every_n_epochs: int = -1, log_on_k_batches: int = 1, **kwargs)[source]

Trainer class for training and evaluating pipelines containing Estimators and an NNModel

Parameters
  • chainer_config – "chainer" block of a configuration file

  • batch_size – batch_size to use for partial fitting (if available) and evaluation, the whole dataset is used if batch_size is negative or zero (default is 1)

  • epochs – maximum number of epochs to train the pipeline, ignored if negative or zero (default is -1)

  • start_epoch_num – starting epoch number for reports (default is 0)

  • max_batches – maximum number of batches to train the pipeline, ignored if negative or zero (default is -1)

  • metrics – iterable of metrics where each metric can be a registered metric name or a dict with name and inputs keys, where name is a registered metric name and inputs is a collection of parameter names from chainer’s inner memory that will be passed to the metric function; the default value of inputs is a concatenation of chainer’s in_y and out fields; the first metric is used for early stopping (default is ('accuracy',))

  • train_metrics – metrics calculated for train logs (if omitted, metrics argument is used)

  • metric_optimization – one of 'maximize' or 'minimize' — strategy for metric optimization used in early stopping (default is 'maximize')

  • evaluation_targets – data types on which to evaluate a trained pipeline (default is ('valid', 'test'))

  • show_examples – a flag used to print inputs, expected outputs and predicted outputs for the last batch in evaluation logs (default is False)

  • tensorboard_log_dir – path to a directory where TensorBoard logs can be stored, ignored if None (default is None)

  • validate_first – flag used to calculate metrics on the 'valid' data type before starting training (default is True)

  • validation_patience – how many consecutive validations without improvement of the validation metric to tolerate before early stopping, ignored if negative or zero (default is 5)

  • val_every_n_epochs – how often (in epochs) to validate the pipeline, ignored if negative or zero (default is -1)

  • val_every_n_batches – how often (in batches) to validate the pipeline, ignored if negative or zero (default is -1)

  • log_every_n_epochs – how often (in epochs) to calculate metrics on train data, ignored if negative or zero (default is -1)

  • log_every_n_batches – how often (in batches) to calculate metrics on train data, ignored if negative or zero (default is -1)

  • log_on_k_batches – number of random train batches used to calculate metrics for the log (default is 1)

  • max_test_batches – maximum number of batches for pipeline testing and evaluation, overrides log_on_k_batches, ignored if negative (default is -1)

  • **kwargs – additional parameters whose names will be logged but otherwise ignored
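
A minimal construction sketch using the parameters documented above; the chainer_config dict and the metric input names ('y', 'y_pred') are illustrative assumptions:

    from deeppavlov.core.trainers import NNTrainer

    trainer = NNTrainer(chainer_config,                # "chainer" block of a config file
                        batch_size=64,
                        epochs=10,
                        metrics=('accuracy',
                                 {'name': 'f1', 'inputs': ['y', 'y_pred']}),  # assumed input names
                        metric_optimization='maximize',
                        validation_patience=5,         # stop after 5 validations without improvement
                        val_every_n_epochs=1,
                        log_every_n_batches=100)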

The trainer saves the model whenever it sees progress in scores. The full rules are as follows:

  • For the validation savepoint:
    • 0th validation (optional). Don’t save the model, just establish a baseline.

    • 1st validation.
      • If we have a baseline, save the model if we see an improvement, don’t save otherwise.

      • If we don’t have a baseline, save the model.

    • 2nd and later validations. Save the model only if we see an improvement.

  • For the at-train-exit savepoint:
    • Save the model if training exits before the 1st validation (to capture early training results), don’t save otherwise.

evaluate(iterator: DataLearningIterator, evaluation_targets: Optional[Iterable[str]] = None) → Dict[str, dict]

Run test() on multiple data types using the provided data iterator

Parameters
  • iterator – DataLearningIterator used for evaluation

  • evaluation_targets – iterable of data types to evaluate on

Returns

a dictionary with data types as keys and evaluation reports as values

fit_chainer(iterator: Union[DataFittingIterator, DataLearningIterator]) → None

Build the pipeline Chainer and successively fit Estimator components using a provided data iterator

get_chainer() → Chainer

Returns a Chainer built from self.chainer_config for inference

test(data: Iterable[Tuple[Collection[Any], Collection[Any]]], metrics: Optional[Collection[Metric]] = None, *, start_time: Optional[float] = None, show_examples: Optional[bool] = None) → dict

Calculate metrics on the provided data for the currently stored Chainer and return a report

Parameters
  • data – iterable of batches of inputs and expected outputs

  • metrics – collection of Metric namedtuples, each containing a name for the report, a metric function and its input names (if omitted, self.metrics is used)

  • start_time – start time for test report

  • show_examples – a flag used to return inputs, expected outputs and predicted outputs for the last batch in a result report (if omitted, self.show_examples is used)

Returns

a report dict containing the calculated metrics, the time spent, the number of examples in the tested data and, optionally, the examples themselves

train(iterator: DataLearningIterator) → None[source]

Call fit_chainer() and then train_on_batches() with the provided data iterator as an argument

train_on_batches(iterator: DataLearningIterator) → None[source]

Train the pipeline on batches using the provided data iterator and initialization parameters
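
An end-to-end sketch, assuming iterator is a DataLearningIterator built over 'train'/'valid'/'test' data as in the FitTrainer example above:

    trainer.train(iterator)               # fit_chainer() followed by train_on_batches()
    reports = trainer.evaluate(iterator)  # evaluates on ('valid', 'test') by default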