
Anomalies on individual time series are grouped into a single alert spanning several nodes, both to give each anomaly more context and to reduce the number of alerts sent to the user. Time series on the same node are already considered related, so they are always alerted together. To capture inter-node relations, we use the groups built from correlations among nodes: time series belonging to nodes in the same group are alerted together.
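The grouping described above can be sketched as follows. This is a minimal illustration, not the pipeline's actual API: the anomaly representation and the `node_to_group` mapping (the correlation groups) are assumptions.

```python
from collections import defaultdict

def group_anomalies(anomalies, node_to_group):
    """Group single-series anomalies into one alert per correlation group.

    anomalies: list of (node, series) pairs that currently have an anomaly.
    node_to_group: mapping from node name to its correlation-group id.
    Returns a dict mapping group id -> list of (node, series) in that alert.
    """
    alerts = defaultdict(list)
    for node, series in anomalies:
        # Series on the same node map to the same group, so they always
        # land in the same alert; correlated nodes join the alert through
        # their shared group id.
        alerts[node_to_group[node]].append((node, series))
    return dict(alerts)
```

For example, with nodes `n1` and `n2` correlated into group `g1` and `n3` alone in `g2`, anomalies on all three nodes would produce two alerts, one per group.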

Nodes with at least one metric whose anomaly reaches a certain score are included in an alert. Alerting is configurable by criticality: yellow, orange, or red. As more time series are added to the alert, the user is re-alerted according to an alerting formula. The formula takes into account not only how many nodes in a group have anomalies of a given criticality, but also how many time series have reached that criticality on each of those nodes. As the alert grows, individual metric anomalies are not all re-alerted immediately; instead, they are alerted in growing batches.
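The batched re-alerting idea can be sketched like this. The criticality ordering follows the text (yellow &lt; orange &lt; red), but the power-of-two batch sizes are an assumption for illustration; the actual alerting formula, which also weighs nodes per group against series per node, is not reproduced here.

```python
# Hypothetical severity ordering taken from the text's yellow/orange/red levels.
SEVERITY_SCORE = {"yellow": 1, "orange": 2, "red": 3}

def reaches_criticality(anomaly_level: str, configured_level: str) -> bool:
    """An anomaly qualifies if its level meets the configured criticality."""
    return SEVERITY_SCORE[anomaly_level] >= SEVERITY_SCORE[configured_level]

def should_realert(count_at_last_alert: int, current_count: int) -> bool:
    """Re-alert only when the number of anomalous series crosses the next
    growing batch threshold (powers of two here, as an assumption), so a
    growing alert does not re-fire on every single new anomaly."""
    threshold = 1
    while threshold <= count_at_last_alert:
        threshold *= 2
    return current_count >= threshold
```

With these assumed thresholds, an alert that last fired with 2 anomalous series would not re-fire at 3 but would at 4, then again at 8, and so on.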

Alerting is enabled once the first metrics of your environment are onboarded to the ML pipeline (see Onboarding, preprocessing and filtering of the data). Metrics are onboarded only after a minimum of 7 days, to allow enough data to learn baselines and correlations. As more data is collected, baselines and correlations improve, and alerting becomes less noisy once the first few weeks have passed.
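The 7-day onboarding gate can be expressed as a simple check. Only the 7-day minimum comes from the text; the function and field names are illustrative.

```python
from datetime import datetime, timedelta

# Minimum training window before alerting is enabled (from the text).
MIN_TRAINING_DAYS = 7

def alerting_enabled(first_sample: datetime, now: datetime) -> bool:
    """A metric becomes eligible for alerting only once at least 7 days of
    data have been collected to learn baselines and correlations."""
    return now - first_sample >= timedelta(days=MIN_TRAINING_DAYS)
```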
