metrics¶
Different Metric functions.
- deeppavlov.metrics.accuracy.sets_accuracy(y_true: Union[list, ndarray], y_predicted: Union[list, ndarray]) → float [source]¶
Calculates accuracy in terms of set coincidence.
- Parameters
y_true – true values
y_predicted – predicted values
- Returns
proportion of samples whose set of predicted values coincides exactly with the true set
- Alias:
sets_accuracy
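As an illustration of the set-coincidence idea (a minimal sketch, not the deeppavlov implementation itself):

```python
def sets_accuracy_sketch(y_true, y_predicted):
    # A sample counts as correct only when its predicted label set
    # coincides exactly with the true label set (order and duplicates ignored).
    matches = [set(t) == set(p) for t, p in zip(y_true, y_predicted)]
    return sum(matches) / len(matches)

# 2 of 3 samples have exactly coinciding sets
print(sets_accuracy_sketch([["a", "b"], ["c"], ["a"]],
                           [["b", "a"], ["c", "d"], ["a"]]))
```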
- deeppavlov.metrics.fmeasure.round_f1(y_true, y_predicted)[source]¶
Calculates F1 (binary) measure.
- Parameters
y_true – list of true values
y_predicted – list of predicted values
- Returns
F1 score
- Alias:
f1
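The function name suggests probabilistic predictions are rounded to hard 0/1 labels before scoring; a self-contained sketch of that computation (illustrative names, not the deeppavlov source):

```python
def round_f1_sketch(y_true, y_predicted):
    # Round probabilistic predictions to hard 0/1 labels before scoring.
    preds = [round(p) for p in y_predicted]
    tp = sum(1 for t, p in zip(y_true, preds) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, preds) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, preds) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

print(round_f1_sketch([1, 0, 1, 1], [0.7, 0.6, 0.2, 0.9]))  # precision = recall = 2/3
```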
- deeppavlov.metrics.fmeasure.round_f1_macro(y_true, y_predicted)[source]¶
Calculates macro-averaged F1 measure.
- Parameters
y_true – list of true values
y_predicted – list of predicted values
- Returns
F1 score
- Alias:
f1_macro
- deeppavlov.metrics.fmeasure.round_f1_weighted(y_true, y_predicted)[source]¶
Calculates support-weighted F1 measure.
- Parameters
y_true – list of true values
y_predicted – list of predicted values
- Returns
F1 score
- Alias:
f1_weighted
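The macro and weighted variants differ only in how per-class F1 scores are averaged: macro weights every class equally, weighted uses each class's support. A self-contained sketch of both (illustrative helper names, not the deeppavlov source):

```python
def class_f1(y_true, y_pred, cls):
    # Binary F1 with one class treated as the positive label.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p == cls)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != cls and p == cls)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p != cls)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

def f1_macro_sketch(y_true, y_pred):
    # Unweighted mean over classes: every class counts equally.
    classes = sorted(set(y_true))
    return sum(class_f1(y_true, y_pred, c) for c in classes) / len(classes)

def f1_weighted_sketch(y_true, y_pred):
    # Mean over classes weighted by class support (frequency in y_true).
    classes = sorted(set(y_true))
    n = len(y_true)
    return sum(class_f1(y_true, y_pred, c) * y_true.count(c) / n
               for c in classes)
```

With imbalanced classes the two scores diverge: for `y_true=[0, 0, 0, 1]`, `y_pred=[0, 0, 1, 1]` the macro score averages the two per-class F1 values equally, while the weighted score leans toward the majority class.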
- deeppavlov.metrics.fmeasure.ner_f1(y_true, y_predicted)[source]¶
Calculates F1 measure for the Named Entity Recognition task.
- Parameters
y_true – list of true values
y_predicted – list of predicted values
- Returns
F1 score
- Alias:
ner_f1
- deeppavlov.metrics.fmeasure.ner_token_f1(y_true, y_predicted, print_results=False)[source]¶
Calculates F1 measure for the Named Entity Recognition task without taking BIO or BIOES markup into account.
- Parameters
y_true – list of true values
y_predicted – list of predicted values
print_results – if True, then F1 score for each entity type is printed
- Returns
F1 score
- Alias:
ner_token_f1
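At the token level, the B-/I- prefixes are stripped so only entity types are compared, and per-type F1 scores are averaged. The following is a simplified macro-averaged sketch of that idea, not the library's exact implementation (span-level ner_f1 is stricter, requiring whole entity spans to match):

```python
def strip_bio(tag):
    # Drop the "B-"/"I-" prefix so only the entity type remains ("O" is kept).
    return tag.split("-", 1)[-1] if "-" in tag else tag

def token_f1_sketch(y_true, y_pred):
    # Flatten sentences into token streams with markup stripped.
    true_tags = [strip_bio(t) for sent in y_true for t in sent]
    pred_tags = [strip_bio(t) for sent in y_pred for t in sent]
    types = sorted({t for t in true_tags if t != "O"})
    scores = []
    for ent in types:
        tp = sum(1 for t, p in zip(true_tags, pred_tags) if t == ent and p == ent)
        fp = sum(1 for t, p in zip(true_tags, pred_tags) if t != ent and p == ent)
        fn = sum(1 for t, p in zip(true_tags, pred_tags) if t == ent and p != ent)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        scores.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(scores) / len(scores) if scores else 0.0

# "B-PER" vs "I-PER" disagree on markup but agree on type, so token F1 is 1.0
print(token_f1_sketch([["B-PER", "I-PER", "O"]], [["B-PER", "B-PER", "O"]]))
```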
- deeppavlov.metrics.log_loss.sk_log_loss(y_true: Union[List[List[float]], List[List[int]], ndarray], y_predicted: Union[List[List[float]], List[List[int]], ndarray]) → float [source]¶
Calculates log loss.
- Parameters
y_true – list or array of true values
y_predicted – list or array of predicted values
- Returns
Log loss
- Alias:
log_loss
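For binary labels the computation reduces to the familiar cross-entropy formula; a minimal sketch (not the deeppavlov source, which delegates to scikit-learn):

```python
import math

def log_loss_sketch(y_true, y_predicted, eps=1e-15):
    # Clip probabilities away from 0 and 1 so the logarithm stays finite.
    total = 0.0
    for t, p in zip(y_true, y_predicted):
        p = min(max(p, eps), 1 - eps)
        total += t * math.log(p) + (1 - t) * math.log(1 - p)
    return -total / len(y_true)

print(log_loss_sketch([1, 0], [0.9, 0.1]))  # -log(0.9) ≈ 0.105
```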
- deeppavlov.metrics.roc_auc_score.roc_auc_score(y_true: Union[List[List[float]], List[List[int]], ndarray], y_pred: Union[List[List[float]], List[List[int]], ndarray]) → float [source]¶
Computes Area Under the Receiver Operating Characteristic Curve (ROC AUC) from prediction scores.
- Parameters
y_true – true binary labels
y_pred – target scores: probability estimates of the positive class or non-thresholded decision values
- Returns
Area Under the Curve (AUC) from prediction scores
- Alias:
roc_auc
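ROC AUC equals the probability that a randomly chosen positive sample is scored above a randomly chosen negative one. A brute-force sketch of the binary case (illustrative only; the library computes this via scikit-learn):

```python
def roc_auc_sketch(y_true, y_score):
    # Count, over all positive/negative pairs, how often the positive
    # is scored higher; ties count as half a win.
    pos = [s for t, s in zip(y_true, y_score) if t == 1]
    neg = [s for t, s in zip(y_true, y_score) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(roc_auc_sketch([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 3 of 4 pairs -> 0.75
```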