You sent metric data points to the Metric API but are not seeing what you expect when querying the data. Use the following checklist to determine the root cause:
- Make sure you are querying the data correctly.
- Check the HTTP status codes returned by the API. Issues like authorization failures can be diagnosed with HTTP status codes.
- If you are sending data from a Prometheus server via New Relic's remote_write endpoint, check your Prometheus server logs for errors or non-2xx HTTP responses from the New Relic endpoint.
- Query your account for NrIntegrationError events. New Relic's ingestion endpoints are asynchronous, meaning the endpoint verifies the payload after it returns the HTTP response. If any issues occur while verifying your payload, an NrIntegrationError event is created in your account. New Relic also uses NrIntegrationError events to notify customers when various rate limits have been reached.
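The checklist above can be sketched in code. The endpoint URL, the Api-Key header, and the payload shape follow New Relic's Metric API; the diagnose_status helper and its messages are illustrative, not an official mapping:

```python
# Sketch: send a metric batch and triage the HTTP status code.
# diagnose_status() is a hypothetical helper, not part of any New Relic SDK.
import json
import urllib.request

METRIC_API = "https://metric-api.newrelic.com/metric/v1"

def diagnose_status(status: int) -> str:
    """First-pass diagnosis of a Metric API HTTP status (illustrative)."""
    if status == 202:
        # The endpoint is asynchronous: 202 means "received", not "valid".
        # Payload problems surface later as NrIntegrationError events.
        return "accepted; check NrIntegrationError events for async validation errors"
    if status in (401, 403):
        return "authorization failure; verify the API key"
    return f"unexpected status {status}; inspect the response body"

def send_metrics(api_key: str, metrics: list) -> str:
    """POST a batch of metric data points and return a diagnosis string."""
    payload = json.dumps([{"metrics": metrics}]).encode()
    req = urllib.request.Request(
        METRIC_API,
        data=payload,
        headers={"Content-Type": "application/json", "Api-Key": api_key},
    )
    with urllib.request.urlopen(req) as resp:
        return diagnose_status(resp.status)
```

A 2xx response alone is therefore not confirmation that the data was accepted; pair the status check with the NrIntegrationError query described below.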
View error details
For an introduction to using the NrIntegrationError event, see the NrIntegrationError event documentation. Here's an example NRQL query for examining issues with Metric API ingest:
SELECT count(*) FROM NrIntegrationError WHERE newRelicFeature = 'Metrics' FACET category, message LIMIT 100 SINCE 24 hours ago
The category field indicates the type of error, and the message field provides more detailed information about the error. If the category is rateLimit, you should also examine the rateLimitType field for more information on the type of rate limiting.
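To break rate-limit errors down by type, the example query above can be narrowed and faceted on rateLimitType (the field names come from this document; the exact facet output depends on your account):

SELECT count(*) FROM NrIntegrationError WHERE newRelicFeature = 'Metrics' AND category = 'rateLimit' FACET rateLimitType SINCE 24 hours ago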
Description and solution
There is an issue with the JSON payload, such as a JSON syntax error, an invalid attribute name, or a value that is too long.
You are sending too many datapoints per minute. If you get this error, you can either send data less frequently, or request changes to your metric rate limits by contacting your New Relic account representative, or visiting our Support portal.
You have an attribute with a high number of unique values.
You have Prometheus servers reporting too many unique timeseries via New Relic's remote_write endpoint.
Reduce the number of unique timeseries by modifying your Prometheus server configuration to scrape fewer targets, or by using relabel rules in the remote_write section of your server configuration to drop timeseries or highly unique labels.
Too many requests per minute are being sent. To resolve this, put more datapoints in each request, and send them less frequently.
You have exceeded your daily error group limit. Incoming error groups will be dropped for the remainder of the day, and collection will resume as normal after UTC midnight. To resolve this, reduce the number of unique error messages collected by New Relic.
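For the Prometheus remote_write case above, relabel rules are a minimal sketch of the fix. The URL, the prometheus_server name, and the regex values here are placeholders; the write_relabel_configs mechanism itself is standard Prometheus configuration:

```yaml
# Illustrative remote_write config: drop a metric family and a
# high-cardinality label before data reaches New Relic.
remote_write:
  - url: https://metric-api.newrelic.com/prometheus/v1/write?prometheus_server=my-server
    write_relabel_configs:
      # Drop every timeseries whose metric name matches the regex.
      - source_labels: [__name__]
        regex: "go_gc_duration_seconds.*"
        action: drop
      # Remove a highly unique label from all timeseries.
      - regex: "pod_uid"
        action: labeldrop
```

Because write_relabel_configs runs only on the remote_write path, these rules reduce what is sent to New Relic without affecting local Prometheus storage or queries.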
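For the "too many requests per minute" case, batching is the usual fix: put many datapoints in each request instead of one request per point. A minimal sketch, with an illustrative batch size rather than an official New Relic limit:

```python
# Sketch: chunk datapoints so each HTTP request carries many points.
# The batch size of 1000 is illustrative, not a documented limit.
from typing import Iterator, List

def batch(datapoints: List[dict], size: int = 1000) -> Iterator[List[dict]]:
    """Yield datapoints in fixed-size chunks."""
    for i in range(0, len(datapoints), size):
        yield datapoints[i:i + size]

points = [{"name": "cache.hits", "type": "count", "value": n} for n in range(2500)]
batches = list(batch(points))
# 2500 points now need 3 requests instead of 2500 single-point requests.
```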
Match errors to ingested payloads
If an NrIntegrationError event is created as a result of a syntax issue with the HTTP request payload, the event contains the attributes apiKeyPrefix and requestId:
- apiKeyPrefix matches the first 6 characters of the API key used to send the data.
- requestId matches the requestId sent in the HTTP response.
To view these fields, run this NRQL query:
SELECT message, apiKeyPrefix, requestId FROM NrIntegrationError LIMIT 100
To verify a specific
requestId, run this NRQL query:
SELECT * FROM NrIntegrationError WHERE requestId = 'REQUEST_ID'
Programmatically retrieve NrIntegrationError events
To programmatically retrieve these errors:
Create an HTTP request as shown below:
If your organization hosts data in the EU data center, ensure you're using the EU region endpoints.

```bash
curl -H "Accept: application/json" \
  -H "X-Query-Key: YOUR_API_KEY_HERE" \
  "https://insights-api.newrelic.com/v1/accounts/YOUR_ACCOUNT_HERE/query?nrql=SELECT%20*%20FROM%20NrIntegrationError%20where%20newRelicFeature='Metrics'"
```
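The same request can be built programmatically. This sketch URL-encodes the NRQL and switches hosts for EU accounts; the account ID is a placeholder, and the EU hostname is an assumption based on New Relic's regional endpoint naming:

```python
# Sketch: build the query URL used by the curl example above.
from urllib.parse import quote

def query_url(account_id: str, nrql: str, eu: bool = False) -> str:
    """Return the query-API URL for the given NRQL, URL-encoding the query."""
    # EU hostname assumed from New Relic's regional endpoint convention.
    host = "insights-api.eu.newrelic.com" if eu else "insights-api.newrelic.com"
    return f"https://{host}/v1/accounts/{account_id}/query?nrql={quote(nrql)}"

url = query_url(
    "YOUR_ACCOUNT_HERE",
    "SELECT * FROM NrIntegrationError WHERE newRelicFeature = 'Metrics'",
)
```

Send the resulting URL with the same Accept and X-Query-Key headers shown in the curl example.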