
Troubleshooting OpenTelemetry with New Relic

Troubleshooting OpenTelemetry with New Relic may just be a matter of making sure you are following best practices, but sometimes you may need to take additional steps to diagnose your issues. Here are some examples of specific problems you might encounter, along with steps and tools to resolve them.

OpenTelemetry data sent via OTLP is not queryable

Problem

You sent OpenTelemetry metrics, logs, or traces using OTLP and are unable to view the data. Before digging deeper, make sure you're following the relevant best practices for sending OpenTelemetry data to New Relic.

Solution

There are a number of tools you can use to validate the successful delivery of telemetry data to our platform. A good first step is to check the data management hub to facet ingested data and determine how much data is arriving from various sources. You can also use metrics and events or the query builder to look for data faceted by the instrumentation.provider or newrelic.source attributes:

FROM Log, Metric, Span SELECT datapointcount() WHERE instrumentation.provider = 'opentelemetry' FACET instrumentation.provider, newrelic.source

This query should tell you whether data is arriving via OTLP. If the data you expect is not present, try this alternate query:

FROM Log, Metric, Span SELECT count(*) WHERE newrelic.source LIKE 'api.%.otlp'

You can also check for integration errors by querying NrIntegrationError events. This can help you determine whether you have configuration or format issues or if you've run into our platform limits.
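
For example, a query like the following (a minimal sketch; widen the time window if you've been sending data for longer) lists recent integration errors along with their details:

FROM NrIntegrationError SELECT * SINCE 1 day ago LIMIT 100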

Important

The ingest limits for metrics, logs, and traces via OTLP are the same as our other data ingest API limits.

Various parts of the New Relic UI rely on the presence of specific attributes to function properly. You can use the NRQL console feature in many places to check the WHERE or FACET clauses of the query for required attributes. You can also edit those clauses and re-run the query to determine whether there is data present with those attributes missing. Examples of required attributes include service.name and service.instance.id. For a more complete list of examples, see resources.
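
For example, a query along these lines (a sketch; swap in whichever attribute the UI view expects) shows whether spans are arriving without a service.name set:

FROM Span SELECT count(*) WHERE instrumentation.provider = 'opentelemetry' AND service.name IS NULL SINCE 1 hour ago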

OpenTelemetry log correlation isn't working

Problem

You've correlated your logs with your service using service.name (see OpenTelemetry logs: Best practices) so you can see logs associated with traces, but you don't see any logs in the New Relic UI. In this scenario, the log data has made it to New Relic, but isn't showing up in the distributed trace UI with corresponding spans.

Solution

To correlate your logs with trace data, the logs need to include the trace context, which contains trace_id and span_id. However, to ensure your logs show up in the New Relic UI, you'll need to configure rules in your log pipeline to translate trace_id and span_id to trace.id and span.id.
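
How you perform that translation depends on your log pipeline. As a hedged sketch, if your logs flow through an OpenTelemetry Collector that includes the transform processor, statements like the following copy the attributes to the names the UI expects (the source attribute names are illustrative and may differ depending on your log forwarder):

processors:
  transform/logs:
    log_statements:
      - context: log
        statements:
          - set(attributes["trace.id"], attributes["trace_id"]) where attributes["trace_id"] != nil
          - set(attributes["span.id"], attributes["span_id"]) where attributes["span_id"] != nil

Remember to add transform/logs to the processors list of your logs pipeline.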

OpenTelemetry entities or relationships are missing

Problem

You sent OpenTelemetry data from a service or infrastructure component and either the entity or its relationships are missing or incorrect.

Solution

OpenTelemetry entities are synthesized based on the public rules described for the EXT-SERVICE entity type. The standard matching rule relies on the presence of the service.name attribute, which follows the OpenTelemetry semantic conventions.

To set the service.name with the OpenTelemetry Java SDK, include it in your resource:

var resource = Resource.getDefault()
    .merge(Resource.builder().put(SERVICE_NAME, serviceName).build());

Depending on the SDK, you may also set the service.name by declaring it in the OTEL_RESOURCE_ATTRIBUTES or OTEL_SERVICE_NAME environment variables.
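
For example, in a POSIX shell either of the following (the values are illustrative) sets the service name before your application starts; if both variables are set, OTEL_SERVICE_NAME takes precedence:

export OTEL_SERVICE_NAME="my-service"
export OTEL_RESOURCE_ATTRIBUTES="service.name=my-service,service.instance.id=instance-1"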

For logs, you can use a structured log template to inject the service.name. See Logs in context with Log4j2 for an example.

Tip

For more OpenTelemetry examples with New Relic, visit the newrelic-opentelemetry-examples repository on GitHub.

Troubleshooting the OpenTelemetry Collector

The best source of collector troubleshooting tips and monitoring practices is the up-to-date guidance maintained by the OpenTelemetry community. See the links below for community troubleshooting documents.

Collector logs

Set the log level in the collector configuration under service::telemetry::logs. The default level is INFO. Supported levels are DEBUG, INFO, WARN, ERROR, DPANIC, PANIC, and FATAL.
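
For example, this collector configuration snippet raises the collector's own log output to DEBUG:

service:
  telemetry:
    logs:
      level: DEBUG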

For troubleshooting tips, see logs troubleshooting (GitHub).

Collector metrics

The following NRQL query shows all the available metrics from the collector itself in New Relic:

FROM Metric SELECT uniques(metricName) WHERE metricName LIKE 'otelcol_%' LIMIT MAX

For troubleshooting tips, see:

Collector traces

For troubleshooting tips, see zPages (GitHub).
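
As a hedged sketch, enabling zPages in a collector configuration looks roughly like this (localhost:55679 is the conventional default endpoint; confirm the zpages extension is included in your collector distribution):

extensions:
  zpages:
    endpoint: localhost:55679

service:
  extensions: [zpages]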
