This document describes some best practices and examples to help you get the most out of your alerts configuration.
Before reading this, we recommend you read Alerts concepts and workflow.
Define policies for entities or people
When designing your policies, consider:
- The parts of your architecture that need personnel to be responsible for them
- The individuals who are responsible for one or more parts of your infrastructure
An organization may have many entities monitored by APM, Browser, Infrastructure, and Synthetics. Here are some examples of considerations for different teams:
- Operations personnel may need notifications for poor back-end performance, such as server memory and load averages.
- The product owner may need notifications for positive front-end performance, such as improved end-user Apdex scores or sales being monitored in dashboards.
By following these best practices, key personnel will receive actionable notifications for the metrics that matter to them, and overall, the organization will be able to identify and respond to trends or patterns more efficiently.
Control how many notifications you get
The more conditions you define, the more incidents can be triggered and monitored. For example, your organization may need an alerting solution that accommodates extensive IT systems. Create policies with multiple conditions for multiple monitored entities, and have them notify you through one or more notification channels. Set your incident preference to determine how condition violations are grouped into incidents and lead to notifications.
On the other hand, your organization may not need an extensive alerting structure. The fewer conditions you define, and the broader your incident preference rollup, the fewer incidents will be opened. For example, for a simple alerting solution, you could create a single policy with only an email notification channel.
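As a minimal sketch of the simple case, here is how a policy with a low-noise incident preference might be created through New Relic's REST API (v2). The endpoint and field names follow the public API, but verify them against the current documentation; the `NEW_RELIC_API_KEY` environment variable is an assumption of this example:

```python
import json
import os
import urllib.request

def build_policy_payload(name, incident_preference="PER_POLICY"):
    # "PER_POLICY" rolls all violations in the policy into a single
    # open incident, which keeps notification volume to a minimum.
    return {"policy": {"name": name, "incident_preference": incident_preference}}

payload = build_policy_payload("Simple email-only policy")

# Only send the request if an API key is configured; otherwise just
# print the payload so it can be inspected.
api_key = os.environ.get("NEW_RELIC_API_KEY")
if api_key:
    req = urllib.request.Request(
        "https://api.newrelic.com/v2/alerts_policies.json",
        data=json.dumps(payload).encode(),
        headers={"X-Api-Key": api_key, "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.status)
else:
    print(json.dumps(payload))
```

Switching `incident_preference` to a finer-grained value (such as per-condition grouping) opens more incidents from the same violations, which suits the extensive alerting structure described above.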
Set thresholds for conditions
Set the thresholds for your policy's conditions to meaningful levels for your environment. Here are some suggested guidelines:
|Guideline|Description|
|---|---|
|Set threshold levels|Avoid setting thresholds too low. For example, if you set a CPU condition threshold of 75% for 5 minutes on your production servers, and they routinely exceed that level, you will increase the likelihood of un-actionable alerts or false positives.|
|Experiment with settings|You do not need to edit files or restart software, so feel free to make quick changes to your threshold levels and adjust as necessary. Adjust your conditions over time.|
|Disable settings|You can disable any condition in a policy. This is useful, for example, if you want to continue using other conditions in the policy while you experiment with other metrics or thresholds.|
In most of our products (except Infrastructure), the color-coded health status indicator in the user interface changes as the alerting threshold escalates or returns to normal. This allows you to monitor a situation through our UI before a critical threshold passes, without needing to receive specific notifications about it.
For example, you can define a critical (red) threshold that notifies you when the error percentage for your app is above 10 percent at least once in any five-minute period. You can also define an optional warning (yellow) threshold with different criteria.
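The critical and warning thresholds just described might be expressed as a condition payload like the one below. This is a sketch modeled on the `terms` structure in New Relic's REST API v2; the field names should be checked against the current documentation, and the entity ID is hypothetical:

```python
# A condition with both a critical and a warning threshold term.
condition = {
    "condition": {
        "name": "High error percentage",
        "type": "apm_app_metric",
        "metric": "error_percentage",
        "enabled": True,  # set to False to disable the condition
                          # while experimenting with other thresholds
        "entities": ["12345"],  # hypothetical application ID
        "terms": [
            {   # critical (red): error rate above 10% at least once
                # in any five-minute period
                "priority": "critical",
                "threshold": "10",
                "operator": "above",
                "duration": "5",
                "time_function": "any",
            },
            {   # optional warning (yellow): a lower threshold that
                # changes the health status indicator before the
                # critical threshold passes
                "priority": "warning",
                "threshold": "5",
                "operator": "above",
                "duration": "5",
                "time_function": "any",
            },
        ],
    }
}

warning_thresholds = [
    t["threshold"]
    for t in condition["condition"]["terms"]
    if t["priority"] == "warning"
]
print(warning_thresholds)
```

Because thresholds are just values in the condition, adjusting them over time is a matter of updating `threshold` (or `duration`) rather than editing files or restarting software.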
Select notification channels
You can create notification channels first and then assign policies to them. You can also create policies first and then assign notification channels to them. This flexibility allows you to tailor who gets notified, using the method that is most useful to them.
For example, you could:
- Identify your operations team's Slack channel as a general level of alerting, and use the on-call PagerDuty contact as an after-hours or escalated level of alerting.
- Create webhooks with customized messages for a variety of situations or personnel.
By tailoring notifications to the most useful channel and policy, you can avoid alert fatigue and help the right personnel receive and respond to incidents they care about in a systematic way.
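As a small illustration of the channel-to-policy pairing, the REST API v2 associates existing channels with a policy through query parameters on a single request. This sketch only builds the request URL; the endpoint shape follows the public API but should be verified, and the IDs are hypothetical:

```python
from urllib.parse import urlencode

def policy_channels_url(policy_id, channel_ids):
    # The v2 endpoint takes the policy ID and a comma-separated list
    # of channel IDs as query parameters on a PUT request.
    query = urlencode({
        "policy_id": policy_id,
        "channel_ids": ",".join(str(c) for c in channel_ids),
    })
    return f"https://api.newrelic.com/v2/alerts_policy_channels.json?{query}"

# Hypothetical IDs: a policy paired with a Slack channel for general
# alerting and a PagerDuty channel for escalations.
url = policy_channels_url(98765, [111, 222])
print(url)
```

Because the same policy can be attached to several channels, and a channel can serve several policies, you can route general alerts to Slack and escalations to PagerDuty without duplicating conditions.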