metrics

Metric functions for evaluating model predictions.

deeppavlov.metrics.accuracy.sets_accuracy(y_true: Union[list, numpy.ndarray], y_predicted: Union[list, numpy.ndarray]) → float

Calculate accuracy in terms of set coincidence

Parameters:
  • y_true – true values
  • y_predicted – predicted values
Returns:

fraction of samples whose set of predicted values exactly coincides with the true set
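
The set-coincidence computation described above can be sketched in pure Python. This is an illustrative re-implementation based on the docstring, not DeepPavlov's actual code; the function name is hypothetical:

```python
def sets_accuracy_sketch(y_true, y_predicted):
    """Fraction of samples whose predicted value set exactly equals the true set."""
    matches = sum(set(t) == set(p) for t, p in zip(y_true, y_predicted))
    return matches / len(y_true)

# Order within each sample does not matter, only set equality:
print(sets_accuracy_sketch([["a", "b"], ["c"]], [["b", "a"], ["d"]]))  # 0.5
```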

deeppavlov.metrics.accuracy.classification_accuracy(y_true: List[list], y_predicted: List[Tuple[list, dict]]) → float

Calculate accuracy in terms of set coincidence for the special case of predictions produced by a classification KerasIntentModel

Parameters:
  • y_true – true labels
  • y_predicted – predictions. Each prediction is a tuple of two elements: (predicted_labels, a dictionary like {"label_i": probability_i})
Returns:

fraction of samples whose set of predicted values exactly coincides with the true set
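
As a sketch of how the tuple-shaped predictions are handled, the probability dictionary can simply be discarded before comparing label sets. This is an illustrative re-implementation inferred from the docstring, not the library's code:

```python
def classification_accuracy_sketch(y_true, y_predicted):
    """Set-coincidence accuracy where each prediction is (labels, {label: prob})."""
    predicted_labels = [labels for labels, _probs in y_predicted]
    matches = sum(set(t) == set(p) for t, p in zip(y_true, predicted_labels))
    return matches / len(y_true)

preds = [(["b", "a"], {"a": 0.9, "b": 0.8}), (["d"], {"d": 0.7})]
print(classification_accuracy_sketch([["a", "b"], ["c"]], preds))  # 0.5
```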

deeppavlov.metrics.fmeasure_classification.classification_fmeasure(y_true: List[list], y_predicted: List[Tuple[list, dict]], average='macro') → float

Calculate macro-averaged F1-measure

Parameters:
  • y_true – true binary labels
  • y_predicted – predictions. Each prediction is a tuple of two elements: (predicted_labels, a dictionary like {"label_i": probability_i}), where each probability is a float or a Keras tensor
  • average – determines the type of averaging performed on the data
Returns:

F1-measure
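
Macro averaging computes F1 per label and then takes the unweighted mean, so rare labels count as much as frequent ones. A minimal pure-Python sketch of that averaging scheme (the function name is hypothetical and this is not the library's implementation):

```python
def f1_macro_sketch(y_true, y_pred_labels):
    """Unweighted mean of per-label F1 scores over multi-label predictions."""
    labels = sorted({l for row in y_true for l in row} |
                    {l for row in y_pred_labels for l in row})
    f1_scores = []
    for lab in labels:
        tp = sum(lab in t and lab in p for t, p in zip(y_true, y_pred_labels))
        fp = sum(lab not in t and lab in p for t, p in zip(y_true, y_pred_labels))
        fn = sum(lab in t and lab not in p for t, p in zip(y_true, y_pred_labels))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1_scores.append(2 * precision * recall / (precision + recall)
                         if precision + recall else 0.0)
    return sum(f1_scores) / len(f1_scores)
```

For example, with truth `[["a"], ["b"]]` and predictions `[["a"], ["a"]]`, label "a" gets F1 = 2/3 and label "b" gets F1 = 0, so the macro score is 1/3.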

deeppavlov.metrics.fmeasure_classification.classification_fmeasure_weighted(y_true: List[list], y_predicted: List[Tuple[list, dict]], average='weighted') → float

Calculate support-weighted F1-measure

Parameters:
  • y_true – true binary labels
  • y_predicted – predictions. Each prediction is a tuple of two elements: (predicted_labels, a dictionary like {"label_i": probability_i}), where each probability is a float or a Keras tensor
  • average – determines the type of averaging performed on the data
Returns:

F1-measure
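
The weighted variant differs from macro averaging only in that each label's F1 is weighted by its support (the number of true occurrences), so frequent labels dominate the score. An illustrative sketch under that assumption, not the library's code:

```python
def f1_weighted_sketch(y_true, y_pred_labels):
    """Support-weighted mean of per-label F1 scores."""
    labels = sorted({l for row in y_true for l in row} |
                    {l for row in y_pred_labels for l in row})
    weighted_sum, support_total = 0.0, 0
    for lab in labels:
        tp = sum(lab in t and lab in p for t, p in zip(y_true, y_pred_labels))
        fp = sum(lab not in t and lab in p for t, p in zip(y_true, y_pred_labels))
        fn = sum(lab in t and lab not in p for t, p in zip(y_true, y_pred_labels))
        support = tp + fn  # number of true occurrences of this label
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        weighted_sum += f1 * support
        support_total += support
    return weighted_sum / support_total
```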

deeppavlov.metrics.log_loss.classification_log_loss(y_true: List[list], y_predicted: List[Tuple[list, dict]]) → float

Calculate log loss for the classification module

Parameters:
  • y_true – true binary labels
  • y_predicted – predictions. Each prediction is a tuple of two elements: (predicted_labels, a dictionary like {"label_i": probability_i}), where each probability is a float or a Keras tensor
Returns:

log loss
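
Log loss penalizes confident wrong predictions heavily: the per-label term is -[y·log(p) + (1-y)·log(1-p)]. A minimal sketch of binary cross-entropy over one-hot labels and probabilities (illustrative only; the clipping constant and function name are assumptions, not DeepPavlov's code):

```python
import math

def log_loss_sketch(y_true_onehot, y_prob):
    """Mean binary cross-entropy over all sample/label pairs."""
    eps = 1e-15  # clip probabilities away from 0 and 1 to keep log() finite
    total, count = 0.0, 0
    for true_row, prob_row in zip(y_true_onehot, y_prob):
        for t, p in zip(true_row, prob_row):
            p = min(max(p, eps), 1 - eps)
            total += -(t * math.log(p) + (1 - t) * math.log(1 - p))
            count += 1
    return total / count
```

A perfectly confident correct prediction contributes essentially zero loss, while a 0.5 probability on a true label contributes log 2 ≈ 0.693.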

deeppavlov.metrics.roc_auc_score.classification_roc_auc_score(y_true: List[list], y_predicted: List[Tuple[list, dict]]) → float

Compute Area Under the Curve (AUC) from prediction scores.

Parameters:
  • y_true – true binary labels
  • y_predicted – predictions. Each prediction is a tuple of two elements: (predicted_labels, a dictionary like {"label_i": probability_i})
Returns:

Area Under the Curve (AUC) from prediction scores
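
For intuition, binary ROC AUC equals the probability that a randomly chosen positive sample is scored above a randomly chosen negative one (the Mann-Whitney U formulation). A pure-Python sketch of that equivalence for binary labels, not the library's multi-label implementation:

```python
def roc_auc_sketch(y_true, scores):
    """AUC as the fraction of positive/negative pairs ranked correctly,
    counting ties as half a win."""
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(roc_auc_sketch([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.2]))  # 1.0
```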