Learning status flag

[ { "severity": "severe", "severity_num": 3, "nodes_affected": 3, "metrics_affected": 29, "event_type": "updated", "id": "67766ea89a79cb09b34db8c9", "event_occured": "2025-01-02T10:54:00Z", "alert_status": 0.1724137931034483,

Above is an example of the top section of an anomaly alert. The "alert_status" field is explained below.

The more data Eyer observes, the more trustworthy the alerts become. For this reason we introduced a multilevel flag that indicates how confident you can be in an alert. The flag is calculated individually for every single time series/metric and is propagated to the alert level as the mean of the flags of all the metrics involved in the alert (illustrated by the sketch after the list below).

Learning status on a single metric:

5 - Unreliable data collection; not enough statistics.

4 - Data interruption of at least 72 hours for HF, or 7 days for LF, at the time of the last relearning (coming soon).

3 - We have collected enough data to start seeing patterns for parts of the day, but there is noise for some of the hours.

2 - We have collected a good amount of data for parts of the day, but there is still noise for some of the hours.

1 - We have collected enough data to start seeing patterns for the day.

0 - Data collection is good for the day.
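
To make the propagation concrete, here is a minimal sketch in Python; the metric names and flag values are purely illustrative and not taken from the Eyer API:

from statistics import mean

# Per-metric learning status flags, from 5 (not reliable) down to 0 (good),
# as listed above. The metric names and values here are hypothetical.
metric_flags = {
    "cpu_usage": 0,
    "response_time": 1,
    "queue_depth": 0,
}

# The alert-level flag is the mean of the flags of all metrics involved in
# the alert, so it can be a non-integer value such as the 0.1724... shown
# in the example payload above.
alert_learning_status = mean(metric_flags.values())
print(alert_learning_status)  # 0.333...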

For the first few weeks all the metrics and the alerts will have flag 5; after that, most of the metrics should move to 1 and 0. Metrics that are still in status 5 after one month are too sparse to be analysed by our algorithm, and they can be a considerable source of noise. Our recommendation for filtering out noisy alerts is to focus on alerts with learning status flag < 2 (see the sketch below).
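
As an illustration of that recommendation, here is a minimal Python sketch, assuming alert payloads shaped like the example at the top of this page (the second alert and its values are hypothetical):

def filter_reliable_alerts(alerts, threshold=2):
    # Keep only alerts whose alert-level learning status ("alert_status" in
    # the example payload) is below the given threshold; a missing flag is
    # treated as unreliable (5).
    return [alert for alert in alerts if alert.get("alert_status", 5) < threshold]

# Illustrative alert payloads; only the fields used here are shown.
alerts = [
    {"id": "67766ea89a79cb09b34db8c9", "severity": "severe", "alert_status": 0.1724137931034483},
    {"id": "0000000000000000000000aa", "severity": "minor", "alert_status": 4.2},
]

print(filter_reliable_alerts(alerts))  # only the first alert remains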

 
