mlnext.score.eval_metrics_all#

mlnext.score.eval_metrics_all(y: List[ndarray], y_hat: List[ndarray]) → Dict[str, float][source]#

Calculates combined accuracy, f1, precision, recall and AUC scores for multiple arrays. Each pair of arrays is shortened to the minimum length of the two, then all pairs are stacked on top of each other to calculate the combined scores.

Parameters:
  • y (List[np.ndarray]) – List of ground truth arrays.

  • y_hat (List[np.ndarray]) – List of prediction arrays.

Returns:

Returns a dict with all scores.

Return type:

T.Dict[str, float]

Example

>>> y = [np.ones((10, 1)), np.zeros((10, 1))]
>>> y_hat = [np.ones((10, 1)), np.zeros((10, 1))]
>>> eval_metrics_all(y, y_hat)
{'accuracy': 1.0, 'precision': 1.0, 'recall': 1.0, 'f1': 1.0,
 'roc_auc': 1.0}
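
The truncate-and-stack step can be illustrated as follows. This is a minimal sketch of the behavior described above, not the library's actual implementation; the helper name `_truncate_and_stack` is hypothetical.

```python
import numpy as np


def _truncate_and_stack(y, y_hat):
    # Hypothetical sketch: shorten each (y, y_hat) pair to the minimum
    # common length, then stack all pairs into two long arrays.
    pairs = [
        (a[: min(len(a), len(b))], b[: min(len(a), len(b))])
        for a, b in zip(y, y_hat)
    ]
    y_s = np.vstack([a for a, _ in pairs])
    y_hat_s = np.vstack([b for _, b in pairs])
    return y_s, y_hat_s


# Arrays of unequal length: each pair is cut to 8 rows (8 + 8 = 16 total).
y = [np.ones((10, 1)), np.zeros((8, 1))]
y_hat = [np.ones((8, 1)), np.zeros((10, 1))]
y_s, y_hat_s = _truncate_and_stack(y, y_hat)
print(y_s.shape)  # (16, 1)
print(float((y_s == y_hat_s).mean()))  # accuracy on the stacked arrays: 1.0
```

With the arrays aligned and stacked, a single set of scores is computed over all samples rather than averaging per-array metrics.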