Python custom metrics API

The Python agent custom metric API provides calls which enable you to report additional custom metrics. These custom metrics can be charted using custom dashboards through the New Relic UI.

Each API call consists of the prefix newrelic.agent. followed by the API name and its arguments; for example, newrelic.agent.record_custom_metric(). For convenience, the headings below show the API name, followed by the complete API call format and additional details.
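The examples in this document assume that the agent package has already been imported:

import newrelic.agent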

Custom metric API calls

For most use cases, the custom metric API is the simplest way to collect custom metrics. The custom metric API includes the following call:

newrelic.agent.record_custom_metric(name, value, application=None)

Records a custom metric.

newrelic.agent.record_custom_metric('Custom/Value', value())

If application is left at its default value of None, the custom metric is recorded against the application associated with the transaction currently being monitored. Calls that omit the application should therefore be restricted to code that runs as part of a web transaction or background task.
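For example, a minimal sketch of recording a metric from inside a monitored background task (the task and metric names here are illustrative only):

@newrelic.agent.background_task()
def process_batch(items):
    # The application argument is omitted; the metric is recorded against
    # the application associated with the current background task.
    newrelic.agent.record_custom_metric('Custom/Batch/Size', len(items))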

To record custom metrics from a separate background thread, or from other code running outside a transaction, pass the application object for the application to which the custom metrics should be reported.

import threading
import time

application = newrelic.agent.register_application()

def report_custom_metrics():
    while True:
        # value() is a placeholder for however the metric value is obtained.
        newrelic.agent.record_custom_metric('Custom/Value', value(), application)
        time.sleep(10.0)

thread = threading.Thread(target=report_custom_metrics)
thread.daemon = True
thread.start()

The value of the custom metric being recorded can be a numeric value, or it can be a dictionary corresponding to an already aggregated data sample for a specific metric.

When supplying a dictionary for the metric value, the fields that can be supplied are:

  • count
  • total
  • min
  • max
  • sum_of_squares

These fields have the same meanings as described for the New Relic Plugins API. The Plugins API metric naming and value reference documentation can be used as a general guide to what can be done with metrics reported via record_custom_metric(). The only requirement is that, because the metrics are reported via the APM agent API, metric names must use the Custom/ prefix rather than the Component/ prefix; otherwise they will not be selectable in the metric chooser of the custom dashboard editor.
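For example, an already aggregated sample can be recorded by passing a dictionary as the value (the numbers here are illustrative, corresponding to four samples of 1, 2, 3, and 4):

newrelic.agent.record_custom_metric('Custom/Value', {
    'count': 4,
    'total': 10.0,
    'min': 1.0,
    'max': 4.0,
    'sum_of_squares': 30.0,
})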

newrelic.agent.record_custom_metrics(metrics, application=None)

Records a set of custom metrics, where the passed metrics can be any iterable object which yields (name, value) tuples.

def metrics():
    yield 'Custom/Value-1', 1
    yield 'Custom/Value-2', 2
    yield 'Custom/Value-3', 3

newrelic.agent.record_custom_metrics(metrics())

The same rules regarding the naming of custom metrics apply as described for record_custom_metric(). The application against which the custom metrics are recorded is also determined in the same way.
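As with record_custom_metric(), an application object can be passed explicitly when reporting from outside a monitored transaction:

application = newrelic.agent.register_application()
newrelic.agent.record_custom_metrics(metrics(), application)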

Data source API calls

The data source API provides a way of generating metrics using a pull-style API rather than the push-style API implemented by record_custom_metric(). The use of a pull-style API allows metrics to be generated in a way that is synchronized with the harvest cycle, which can be important when generating rate or utilization metrics.

The data source API includes the following calls:

newrelic.agent.register_data_source(source, application=None, name=None, settings=None, **properties)

Registers a data source to be polled at the completion of each harvest cycle to generate additional custom metrics.

Each metric returned by a data source is a simple (name, value) tuple, where the value is either a numeric value or a dictionary corresponding to an already aggregated data sample for that metric.

When returning a dictionary as the metric value, the fields that can be supplied are:

  • count
  • total
  • min
  • max
  • sum_of_squares

These fields have the same meanings as described for the New Relic Plugins API. The Plugins API metric naming and value reference documentation can be used as a general guide to what can be done with metrics reported via the agent's data source mechanism. The only requirement is that, because the metrics are reported via the APM agent API, metric names must use the Custom/ prefix rather than the Component/ prefix; otherwise they will not be selectable in the metric chooser of the custom dashboard editor.

If the application argument is left at its default of None, the data source will be polled at the end of each harvest cycle for every application being monitored. Alternatively, when an application object is supplied, the data source will only be polled to generate metrics for that one specific application.

Where a data source needs to retain distinct data for each reporting application, a factory pattern can be used to create a separate instance of the data source for each application it is used with.

The name provided when registering a data source is only used for logging purposes and defaults to the name provided by the data source itself.

In addition to using the register_data_source() API call within an application to register a data source, data sources can also be configured in the agent configuration file. Using the agent configuration file lets you register additional data sources for custom metrics without needing to modify your code.

To add a data source using the agent configuration file, add a section whose name starts with the prefix data-source:. The prefix should be followed by a unique value to distinguish the section from that of any other data source if more than one is specified.

[data-source:memory-usage]
enabled = true
function = samplers:memory_metrics
# application = ...
# name = ...

[data-source:cpu-usage]
enabled = true
function = samplers:CPUMetricsDataSource
# application = ...
# name = ...

If the data source is implemented as a function, the function setting takes the form module:function; if it is implemented as a class, it takes the form module:class. The module must be able to be found on the Python module search path.

As with register_data_source(), the application to report data to and the name are optional.
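For example, the configuration above assumes a hypothetical samplers.py module that can be found on the module search path and that defines the referenced callables, for instance using the decorators described below:

# samplers.py (hypothetical module referenced by the configuration above)

import newrelic.agent

@newrelic.agent.data_source_generator(name='Memory Usage')
def memory_metrics():
    ...  # yield (name, value) tuples

@newrelic.agent.data_source_factory(name='CPU Usage')
class CPUMetricsDataSource(object):
    ...  # see the full example below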

newrelic.agent.data_source_generator(name=None, **properties)

The data_source_generator decorator is used to wrap a data source implemented as a generator. This is appropriate where there is no need to retain state information between the calls that generate metrics, and where a single instance of the data source can be used with multiple applications.

import os

import psutil

@newrelic.agent.data_source_generator(name='Memory Usage')
def memory_metrics():
    # Report resident and virtual memory usage of this process in MB.
    p = psutil.Process(os.getpid())
    m = p.memory_info()
    yield ('Custom/Memory/Physical', float(m.rss)/(1024*1024))
    yield ('Custom/Memory/Virtual', float(m.vms)/(1024*1024))

The name provided when using the decorator is only used for logging purposes. If it is not provided, then the callable name derived from the decorated function will instead be used as the name of the data source.
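When registering the data source from code rather than from the configuration file, the decorated generator is passed to register_data_source() in the same way as any other data source:

newrelic.agent.register_data_source(memory_metrics)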

newrelic.agent.data_source_factory(name=None, **properties)

The data_source_factory decorator is used to wrap a data source implemented as a class, where a separate instance of the data source is required because it needs to retain state, or because it needs to synchronize the generation of metrics with the harvest cycle interval, such as when generating rate or utilization metrics.

import os
import time
import multiprocessing

@newrelic.agent.data_source_factory(name='CPU Usage')
class CPUMetricsDataSource(object):

    def __init__(self, settings, environ):
        self.last_timestamp = None
        self.times = None

    def start(self):
        # Record a baseline when the data source is started.
        self.last_timestamp = time.time()
        self.times = os.times()

    def stop(self):
        # Discard any retained state when the data source is stopped.
        self.last_timestamp = None
        self.times = None

    def __call__(self):
        if self.times is None:
            return

        # Calculate the user CPU time consumed since the last harvest and
        # the corresponding utilization across all available CPUs.
        now = time.time()
        new_times = os.times()
        elapsed_time = now - self.last_timestamp
        user_time = new_times[0] - self.times[0]
        utilization = user_time / (elapsed_time * multiprocessing.cpu_count())
        self.last_timestamp = now
        self.times = new_times

        yield ('Custom/CPU/User Time', user_time)
        yield ('Custom/CPU/User/Utilization', utilization)

newrelic.agent.register_data_source(CPUMetricsDataSource)

The name provided when using the decorator is only used for logging purposes. If it is not provided, then the name derived from the decorated class will instead be used as the name of the data source.

For more help

Additional resources include:

  • Join the discussion about Python in the New Relic Online Technical Community! The Technical Community is a public platform to discuss and troubleshoot your New Relic toolset.
  • If you need additional help, get support at support.newrelic.com.