
Alert Conditions for Cloud Cost Intelligence

preview

We're still working on this feature, but we'd love for you to try it out!

This feature is currently provided as part of a preview program pursuant to our pre-release policies.

After setting up Cloud Cost Intelligence, create alert policies with thresholds to receive proactive notifications before you exceed financial limits, helping you avoid unexpected charges. Within a budget, you can set multiple alert thresholds based on the percentage of budget usage.
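For example, if your monthly budget were $10,000, an NRQL sketch like the following would express spend as a percentage of that budget (the budget value here is a hypothetical constant you supply, not something Cloud Cost Intelligence provides):

```sql
-- Percentage of a hypothetical $10,000 monthly budget consumed so far
FROM CloudCost
SELECT sum(line_item_unblended_cost) / 10000 * 100 AS '% of monthly budget'
SINCE this month
```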

Create a new alert condition

An alert condition is a continuously running query that measures a given set of events against a defined threshold and opens an incident when the threshold is met for a specified duration.

  1. Go to one.newrelic.com > Alerts > Alert Policies.
  2. On the policy list page, click + New alert condition.
  3. To build alerts from scratch, use Write your own query.

Set your signal behavior

You can use an NRQL query to define the signals you want an alert condition to use as the foundation for your alert. For this example, you will be using this query:

FROM CloudCost SELECT sum(line_item_unblended_cost) FACET product_region_code

Using this query for your alert condition tells New Relic you want to know the unblended cost broken down by product region code.

To learn more about using NRQL (New Relic's query language), visit our NRQL documentation.
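Before attaching thresholds, it can help to see how the signal trends over time. As a sketch, you might extend the example query with standard NRQL clauses:

```sql
-- Daily unblended cost per region over the past week
FROM CloudCost
SELECT sum(line_item_unblended_cost)
FACET product_region_code
TIMESERIES 1 day SINCE 1 week ago
```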

Run and preview your signal

  1. After you've defined your signal, click Run. A chart will appear and display the parameters that you've set.

    Tip

    To set up cross-account alerts, select a data account from the drop-down list. Note that you can only query data from one account at a time for cross-account alerts.

    [Screenshot: create a new alert condition page with advanced signal settings highlighted]

    For this example, the chart will show the estimated usage cost for your company's cloud resources, broken down by product region code. This allows you to monitor the cost of your cloud resources across different regions.

  2. Click Next and begin configuring your alert condition.

Set thresholds for alert conditions

Next, set the threshold values and window duration that determine when an incident opens: when the signal breaches a threshold for the specified duration, New Relic opens an incident.

[Screenshot: fine-tune alert condition page with window duration highlighted]

Add alert condition details

Your alert condition is fully defined and will create an incident when your thresholds are breached. Name this condition and attach it to a policy to complete the setup.

A policy is the sorting system for your incidents. You can connect policies to workflows to define where you want New Relic to send this information and how often.

[Screenshot: naming a new alert condition]

Name your alert condition

A best practice for naming your condition involves a structured format that conveys essential information at a glance. Include the following elements in your condition names:

  • Priority: Indicate the severity or urgency of the alert, like P1, P2, P3.
  • Signal: Specify the metric or condition being monitored, like High Avg Latency or Low Throughput.
  • Entity: Identify the affected system, application, or component.
An example of a well-formed condition name following this structure would be P2 | High Avg Latency | Cloud Cost Intelligence.

Existing policy

If you already have a policy you want to connect to an alert condition, then select the existing policy. See alert policies for more information.

New policy

Balancing responsiveness and fatigue in your alerting strategy is crucial. Let's explore the policy options:

  1. One issue per policy (default):
    • Pros: Reduces noise and ensures immediate action.
    • Cons: Groups all incidents under one issue, even if they're triggered by different conditions. It's not ideal when you're monitoring several distinct cost concerns.
  2. One issue per condition:
    • Pros: Creates separate issues for each condition, which is ideal for isolating and addressing specific cost issues.
    • Cons: Can generate more alerts, potentially leading to fatigue.
  3. An issue for every incident:
    • Pros: Provides the most granular detail, which is useful for feeding external systems.
    • Cons: It is the noisiest option, it's not ideal for internal consumption due to potential overload, and it makes it challenging to track broader trends and prioritize effectively.
See creating policies for more information.
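If you manage policies through the NerdGraph API rather than the UI, the three options above map to an incident preference enum on the policy. A hedged sketch (the account ID and policy name are placeholders; PER_POLICY, PER_CONDITION, and PER_CONDITION_AND_TARGET correspond to options 1-3):

```graphql
mutation {
  alertsPolicyCreate(
    accountId: 1234567  # replace with your New Relic account ID
    policy: {
      name: "Cloud Cost Intelligence"
      incidentPreference: PER_CONDITION  # one issue per condition (option 2)
    }
  ) {
    id
    name
    incidentPreference
  }
}
```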

Close open alert events

An incident automatically closes when the targeted signal returns to a non-breaching state for the period indicated in the condition's thresholds. This wait time is called the recovery period.

When an incident closes automatically:

  1. The closing timestamp is backdated to the start of the recovery period.
  2. The evaluation resets and restarts from when the previous incident ended.

All conditions have an incident time limit setting that automatically force closes a long-lasting incident. It defaults to 3 days, and we recommend keeping the default settings for your first alert.

Another way to close an open incident when the signal does not return data is by configuring a loss of signal threshold. Refer to the loss of signal documentation for more details.

Title template

Using a title template is optional, but we recommend it. An alert condition defines a set of thresholds you want to monitor; if any of those thresholds is breached, an incident is created. A meaningful title template helps you pinpoint issues and resolve outages faster. See title templates for more information.

Description template

Since this alert condition lets you know when your unblended cost breaches a threshold, you want to make sure your developers have all the information they need when notified about the incident. You can use workflows to notify a team Slack channel when an incident is created. See custom incident for more information.
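As a sketch, a title template for this condition could follow the naming convention above and interpolate the breached facet. The {{...}} placeholders assume New Relic's title template variable syntax (for example, tags exposing the FACET value); check the title templates documentation for the exact attribute names available to your condition:

```
P2 | High cloud cost | {{tags.product_region_code}}
```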

Runbook URL

An operations runbook detailing investigation, triage, or remediation steps is often linked in this field.

To learn more about cross-account alerts, see Cross-account alerts.

Copyright © 2026 New Relic Inc.
