This page describes limits and rules pertaining to New Relic alerting:
| Category | Limited condition | Minimum value | Maximum value |
| --- | --- | --- | --- |
| Alert policies | Policy name | 1 character | 128 characters |
| | Policies per account | N/A | 10K policies |
| Alert conditions | Matched data points per minute, per account (learn more) | N/A | 300M |
| | Alert query scan operations per minute, per account (learn more) | N/A | 2.5B |
| | Condition name | 1 character | 128 characters |
| | Conditions per policy | 0 conditions | 500 conditions |
| | NRQL conditions per account | 0 conditions | 4K conditions |
| | Targets (product entities) per condition | 1 target | 5K targets for NRQL conditions; 1K targets for non-NRQL conditions |
| | Thresholds per condition | 1 Warning or 1 Critical | 1 Warning and 1 Critical |
| Alert incidents | Custom incident descriptions | N/A | 4K characters |
| | Aggregation window duration | 30 seconds | 2 hours |
| | Incidents per issue | 1 incident | 10K incidents (incidents beyond this limit are not persisted) |
| | Incident search API: page size | 1 page (25 incidents or fewer) | 1K pages (25K incidents) |
| Workflows | Workflows per account | N/A | Initial limit: 1K |
| | Workflow filter size | 1 character | 4,096 characters per workflow |
| Notification channels (Legacy) | Channel limitations | | |
NRDB alert query matched data points per minute
The alert condition **Matched data points per minute** limit applies to the total rate of matched data points for the alerting queries in a New Relic account.
If this limit is exceeded, you won't be able to create or update conditions for the impacted account until the rate goes below the limit. Existing alert conditions aren't affected.
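For a rough sense of how conditions add up (the numbers here are illustrative, not quoted figures): if a condition's query matches 10K data points per minute and uses a 5-minute sliding window that advances every minute, each point is aggregated into roughly five overlapping windows, contributing about 50K matched data points per minute toward the 300M limit.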
You can see your matched data points and any limit incidents in the limits UI.
To understand what conditions are leading to the most throughput, you can perform a query like:
```
FROM NrAiSignal
SELECT sum(aggregatedDataPointsCount) AS 'alert matched data points'
FACET conditionId
```
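To watch how this total trends against the limit over time, a variant of the same query (a sketch using the same event type and attribute) can chart the per-minute rate:

```
FROM NrAiSignal
SELECT sum(aggregatedDataPointsCount) AS 'alert matched data points'
TIMESERIES 1 minute SINCE 1 hour ago
```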
Some tips on optimizing your matched data points:
- If you're using sliding windows, note that this can significantly increase the number of data points. To lower the number of data points, you can use a longer aggregation duration.
- Use `WHERE` clauses to scope down the amount of data being alerted on. Using `WHERE` instead of `FACET` can produce more efficient alerts in some cases (see the sketch after this list).
- Combine similar alerts. If you have several alert conditions that are similar, consider grouping them together with combined filters.
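As a minimal sketch of the `WHERE`-over-`FACET` tip (the `SystemSample` attributes and host names here are hypothetical examples, not part of your account):

```
// Faceting evaluates a separate signal for every host that reports the attribute:
FROM SystemSample SELECT average(cpuPercent) FACET hostname

// Scoping with WHERE matches only the data you actually need to alert on:
FROM SystemSample SELECT average(cpuPercent) WHERE hostname IN ('web-1', 'web-2')
```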
To request a limit increase, talk to your New Relic account representative.
Alert query scan operations per minute
The alert condition **Alert query scan operations per minute** limit applies to the total rate of query scan operations on ingested events.
A query scan operation is the work performed by the New Relic pipeline to match ingested events to alert queries registered in a New Relic account.
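For a rough sense of scale (illustrative numbers, not quoted figures): if an account ingests 1M Transaction events per minute and five alert conditions query the Transaction data type, each event is examined once per condition, for roughly 5M query scan operations per minute.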
If this limit is exceeded, you won't be able to create or update conditions for the impacted account until the rate goes below the limit. Existing alert conditions aren't affected.
You can see your query scan operations and any limit incidents in the limits UI.
When matching events to alert queries, all events from the data type that the query references must be examined. Here are a few common ways to reduce the number of events in a given data type, which in turn decreases alert query scan operations:
- When alerting on logs data, use log partitions to limit which logs are being scanned for alert queries (see the sketches after this list).
- When alerting on custom events, break up larger custom event types.
- Use custom events instead of alerting on transaction events.
- Create metrics to aggregate data.
- Use metric timeslice queries when possible instead of alerting on transaction events.
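As a minimal sketch of the first and last tips above (the partition name, log attribute, and timeslice metric name are hypothetical):

```
// Alert on a dedicated log partition so only its events are scanned,
// instead of the full Log data type:
FROM Log_alerting SELECT count(*) WHERE level = 'ERROR'

// Alert on an APM metric timeslice instead of raw Transaction events:
FROM Metric SELECT average(newrelic.timeslice.value)
WHERE metricTimesliceName = 'Memcache/allWeb' AND appName = 'My Application'
```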
In addition to the above tips, cleaning up any unused or unneeded alert queries (alert conditions) will decrease the number of query scan operations.
To request a limit increase, talk to your New Relic account representative.