Different anomalies in a single time series are grouped into an alert, which can include several related nodes. This provides more context for each anomaly and reduces the number of alerts sent to the user. Time series within the same node are seen as related, so they are always alerted together. To capture relationships between different nodes, we use groups based on correlations among nodes. Time series from nodes in the same group are also alerted together.
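The grouping logic can be pictured with a minimal sketch in Python; the anomaly fields and the node-to-group mapping below are illustrative assumptions, not the pipeline's actual data model.

    from collections import defaultdict

    # Hypothetical anomalies, each tagged with the node it was detected on.
    anomalies = [
        {"metric": "cpu.usage", "node": "node-a", "severity": "high"},
        {"metric": "memory.used", "node": "node-a", "severity": "medium"},
        {"metric": "request.latency", "node": "node-b", "severity": "high"},
    ]

    # Assumed mapping from each node to its correlation group.
    node_groups = {"node-a": "group-1", "node-b": "group-1", "node-c": "group-2"}

    # Anomalies from nodes in the same group (and therefore also from the
    # same node) are collected into a single alert.
    alerts = defaultdict(list)
    for anomaly in anomalies:
        group = node_groups.get(anomaly["node"], anomaly["node"])
        alerts[group].append(anomaly)

    for group, grouped in alerts.items():
        print(f"Alert for {group}: {len(grouped)} anomalies")

In this sketch all three anomalies land in one alert, because node-a and node-b belong to the same correlation group.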

Alerting is enabled once your environment's metrics are onboarded to the ML pipeline, which takes at least 7 days to gather enough data for baselines and correlations (Onboarding, preprocessing and filtering of the data). As more data is collected, these baselines and correlations improve, reducing the noise in alerts over the first few weeks.

Each alert includes a field for the alert's severity and a field for the severity of each deviation included in the alert (Alerts - structure and data explained). You can use both severities to set up notifications and automated actions.
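As a sketch of how the two severity fields might be used, the payload below uses assumed field names (severity, deviations) rather than the exact schema described in Alerts - structure and data explained.

    # Hypothetical alert payload; field names are assumptions for illustration.
    alert = {
        "severity": "critical",  # severity of the alert itself
        "deviations": [
            {"metric": "cpu.usage", "severity": "critical"},
            {"metric": "disk.io", "severity": "warning"},
        ],
    }

    def route_alert(alert: dict) -> None:
        """Notify or act based on alert-level and deviation-level severity."""
        if alert["severity"] == "critical":
            print("Paging on-call: critical alert")
        # Per-deviation severity can drive finer-grained automated actions.
        for deviation in alert["deviations"]:
            if deviation["severity"] == "critical":
                print(f"Auto-remediation triggered for {deviation['metric']}")

    route_alert(alert)

Routing on the alert-level severity keeps notifications coarse, while the per-deviation severities can target automation at the specific metrics that deviated most.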

...