Alerts best practices

This document describes best practices and examples to help you get the most out of your New Relic Alerts configuration.

Before reading this document, we recommend you first read New Relic Alerts concepts and workflow.

Define policies for entities or people

When designing your alert policies, consider:

  • The parts of your architecture that need personnel to be responsible for them
  • The individuals who are responsible for one or more parts of your infrastructure

An organization may have multiple entities monitored by New Relic APM, Browser, Infrastructure, and Synthetics. Examples of considerations for different teams:

  • Software developers may need alert notifications for both front-end and back-end performance, such as webpage response time and page load JavaScript errors.
  • Operations personnel may need alert notifications for poor back-end performance, such as server memory and load averages.
  • The product owner may need alert notifications for positive front-end performance, such as improved end user Apdex scores or sales being monitored by Insights.

By following alerting best practices, key personnel will receive actionable alert notifications for the metrics that matter to them, and overall, the organization will be able to identify and respond to trends or patterns more efficiently.

Decide how many alert notifications you need

The more alert conditions you define, the more incidents can be triggered and monitored. For example, your organization may need an alerting solution that accommodates extensive IT systems. Create alert policies with multiple conditions for multiple monitored entities, and have them notify you through one or more notification channels. Set your incident preference to determine how violations are grouped into incidents, which in turn controls when notifications are sent.

On the other hand, your organization may not need an extensive alerting structure. The fewer alert conditions you define and the more minimal your incident preference, the fewer incidents and notifications you will receive. For example, for a simple alerting solution, you could create a single alert policy with only an email notification channel.
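
If you manage alerting as configuration, a policy and its incident preference can also be created programmatically. The following is a minimal sketch using New Relic's REST API (v2) and Python's requests library; the API key, policy name, and preference value are placeholders to replace with your own.

```python
import requests

API_KEY = "YOUR_ADMIN_API_KEY"  # placeholder: a New Relic Admin API key
HEADERS = {"X-Api-Key": API_KEY, "Content-Type": "application/json"}

# Create a policy whose incident preference rolls all violations into a
# single incident per policy -- the most minimal notification behavior.
# Other accepted values are PER_CONDITION and PER_CONDITION_AND_TARGET.
payload = {
    "policy": {
        "name": "Production web tier",        # placeholder policy name
        "incident_preference": "PER_POLICY",
    }
}

response = requests.post(
    "https://api.newrelic.com/v2/alerts_policies.json",
    headers=HEADERS,
    json=payload,
)
response.raise_for_status()
policy_id = response.json()["policy"]["id"]
print(f"Created policy {policy_id}")
```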

Set thresholds for conditions

Set the thresholds for your alert policy's conditions to meaningful levels for your environment. Here are some suggested guidelines:

  • Set threshold levels: Avoid setting thresholds too low. For example, if you set a CPU alerting threshold of 75% for 5 minutes on your production servers and CPU usage routinely runs above that level, you will increase the likelihood of un-actionable alerts or false positives.
  • Experiment with settings: You do not need to edit files or restart software, so feel free to make quick changes to your threshold levels and adjust as necessary.
  • Adjust settings: Adjust your conditions over time.
      • As you use our products to help you optimize your entity's performance, tighten your thresholds to keep pace with your improved performance.
      • If you are rolling out something that you know will negatively impact your performance for a period of time, loosen your thresholds to allow for this.
  • Disable settings: You can disable any alert condition in a policy. This is useful, for example, if you want to continue using the policy's other alert conditions while you experiment with other metrics or thresholds. A scripted way to adjust or disable a condition is sketched after this list.
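
Adjusting or disabling a condition can be scripted as well. The following rough sketch assumes the classic Alerts REST API (v2), which replaces an existing condition's definition when you update it; the condition ID, application ID, name, and threshold values are placeholders.

```python
import requests

API_KEY = "YOUR_ADMIN_API_KEY"   # placeholder: a New Relic Admin API key
CONDITION_ID = 123456            # placeholder: ID of an existing condition
HEADERS = {"X-Api-Key": API_KEY, "Content-Type": "application/json"}

# The v2 API replaces the condition with the definition you send, so include
# the full condition. Here the threshold is loosened ahead of a planned
# rollout; setting "enabled" to False instead would disable the condition.
condition = {
    "condition": {
        "type": "apm_app_metric",
        "name": "Error percentage (production)",  # placeholder condition name
        "enabled": True,                           # False disables the condition
        "entities": ["987654"],                    # placeholder APM application ID
        "metric": "error_percentage",
        "condition_scope": "application",
        "terms": [
            {
                "duration": "10",        # loosened from 5 to 10 minutes
                "operator": "above",
                "priority": "critical",
                "threshold": "15",       # loosened from 10 to 15 percent
                "time_function": "all",
            }
        ],
    }
}

response = requests.put(
    f"https://api.newrelic.com/v2/alerts_conditions/{CONDITION_ID}.json",
    headers=HEADERS,
    json=condition,
)
response.raise_for_status()
```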

In most of our products (except Infrastructure), the color-coded health status indicator in the user interface changes as warning or critical thresholds are violated or the situation returns to normal. This allows you to monitor a situation through our UI before the critical threshold is crossed, without needing to receive specific notifications about it.

For example, you can define a critical (red) threshold that notifies you when the error percentage for your app is above 10 percent at least once in any five-minute period. You can also define an optional warning (yellow) threshold with different criteria.
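
As a sketch of that example, assuming the classic Alerts REST API (v2), the condition below defines a critical term that opens a violation when error percentage is above 10 percent at least once in a five-minute window, plus an optional warning term at a lower, arbitrary placeholder value. The policy and application IDs are placeholders.

```python
import requests

API_KEY = "YOUR_ADMIN_API_KEY"   # placeholder: a New Relic Admin API key
POLICY_ID = 123456               # placeholder: the policy this condition belongs to
HEADERS = {"X-Api-Key": API_KEY, "Content-Type": "application/json"}

condition = {
    "condition": {
        "type": "apm_app_metric",
        "name": "Error percentage",           # placeholder condition name
        "enabled": True,
        "entities": ["987654"],               # placeholder APM application ID
        "metric": "error_percentage",
        "condition_scope": "application",
        "terms": [
            # Critical (red): above 10% at least once in any 5-minute period.
            {"duration": "5", "operator": "above", "priority": "critical",
             "threshold": "10", "time_function": "any"},
            # Warning (yellow): an earlier signal at a lower placeholder value.
            {"duration": "5", "operator": "above", "priority": "warning",
             "threshold": "5", "time_function": "any"},
        ],
    }
}

response = requests.post(
    f"https://api.newrelic.com/v2/alerts_conditions/policies/{POLICY_ID}.json",
    headers=HEADERS,
    json=condition,
)
response.raise_for_status()
print(response.json()["condition"]["id"])
```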

Select notification channels

You can create notification channels first and then assign alert policies to them, or create alert policies first and then assign notification channels to them. This flexibility allows you to tailor who gets notified, using the method that is most useful to each person or team.

For example, you could:

  • Identify your operations team's Slack channel as a general level of alerting, and use the on-call PagerDuty contact as an after-hours or escalated level of alerting.
  • Create webhooks with customized messages for a variety of situations or personnel.

By tailoring notifications to the most useful channel and policy, you can avoid alert fatigue and help the right personnel receive and respond to incidents they care about in a systematic way.
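
To illustrate, here is a minimal sketch, again assuming the classic Alerts REST API (v2), that creates an email notification channel and associates it with an existing policy. The recipient address, channel name, and policy ID are placeholders; Slack, PagerDuty, and webhook channels use the same endpoints with a different type and configuration.

```python
import requests

API_KEY = "YOUR_ADMIN_API_KEY"   # placeholder: a New Relic Admin API key
POLICY_ID = 123456               # placeholder: an existing alert policy ID
HEADERS = {"X-Api-Key": API_KEY, "Content-Type": "application/json"}

# 1. Create an email notification channel.
channel = {
    "channel": {
        "name": "Ops on-call email",                     # placeholder channel name
        "type": "email",
        "configuration": {"recipients": "ops@example.com"},
    }
}
response = requests.post(
    "https://api.newrelic.com/v2/alerts_channels.json",
    headers=HEADERS,
    json=channel,
)
response.raise_for_status()
channel_id = response.json()["channels"][0]["id"]   # ID of the new channel

# 2. Associate the channel with the policy so its conditions notify this channel.
response = requests.put(
    "https://api.newrelic.com/v2/alerts_policy_channels.json",
    headers=HEADERS,
    params={"policy_id": POLICY_ID, "channel_ids": channel_id},
)
response.raise_for_status()
```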
