mlnext.score.eval_metrics

mlnext.score.eval_metrics(y: ndarray, y_hat: ndarray) → Dict[str, float]

Calculates the accuracy, f1, precision, recall, and recall_anomalies scores.

Parameters:
  • y (np.ndarray) – Ground truth labels.

  • y_hat (np.ndarray) – Predictions (0 or 1).

Returns:

Returns a dict with all scores.

Return type:

Dict[str, float]

Example

>>> import numpy as np
>>> from mlnext.score import eval_metrics
>>> y, y_hat = np.ones((10, 1)), np.ones((10, 1))
>>> eval_metrics(y, y_hat)
{'accuracy': 1.0, 'precision': 1.0, 'recall': 1.0, 'f1': 1.0,
 'anomalies': 1.0}
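
The scores in the returned dict follow the standard binary classification formulas. As a rough sketch of what is computed (not mlnext's actual implementation, and omitting the anomaly-specific recall), the core metrics can be derived from the confusion-matrix counts; the helper name `binary_metrics` here is hypothetical:

```python
import numpy as np

def binary_metrics(y: np.ndarray, y_hat: np.ndarray) -> dict:
    # Hypothetical helper illustrating the standard formulas behind
    # accuracy, precision, recall, and f1 for binary labels (0 or 1).
    y, y_hat = y.ravel(), y_hat.ravel()
    tp = np.sum((y == 1) & (y_hat == 1))  # true positives
    fp = np.sum((y == 0) & (y_hat == 1))  # false positives
    fn = np.sum((y == 1) & (y_hat == 0))  # false negatives

    accuracy = float(np.mean(y == y_hat))
    precision = float(tp / (tp + fp)) if (tp + fp) else 0.0
    recall = float(tp / (tp + fn)) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return {'accuracy': accuracy, 'precision': precision,
            'recall': recall, 'f1': f1}

y = np.array([1, 1, 0, 0, 1])
y_hat = np.array([1, 0, 0, 1, 1])
print(binary_metrics(y, y_hat))
```

With one false positive and one false negative out of five samples, accuracy is 0.6 while precision, recall, and f1 all come out to 2/3.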