Here are some tips and tricks to help you get the most out of using the Python agent to monitor your web application.
Decorators and introspection
When naming a transaction or a function trace node, the Python agent needs to introspect the name of the function being called. When the function is a plain function or a class method, this works fine. If, however, a decorator has been applied to the function or class method, and the decorator was not implemented in a way that preserves the ability to introspect the wrapped function, the agent will use the name of the decorator wrapper function instead of the wrapped function.
A decorator is typically implemented using the following pattern:
def decorator(f):
    def _decorator():
        f()
    return _decorator
@decorator
def foo():
    pass
The problem with this implementation is that if we look at the value of foo.__name__, it will be _decorator and not foo. If the decorator is contained in a separate module, foo.__module__ will give the name of the module the decorator is declared in, not the name of the module containing the wrapped function.
To ensure modules and functions are named correctly, use the wraps() decorator from the standard library functools module to wrap the inner decorator function:
import functools
def decorator(f):
    @functools.wraps(f)
    def _decorator():
        f()
    return _decorator
@decorator
def foo():
    pass
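As a quick sanity check, functools.wraps() copies the wrapped function's metadata onto the wrapper, so introspecting the decorated function now reports the original details. Assuming the decorator and foo above were defined in an interactive session (so the module is __main__), you would see:

>>> foo.__name__
'foo'
>>> foo.__module__
'__main__'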
For more information, see the documentation for the wraps() function on the Python website.
Identifying a transaction
When the Python agent records error details for a transaction or captures a slow transaction trace, there is no unique identifier you can capture and use to correlate it with other data you record separately for the transaction. This makes it difficult to match up information from the UI with your own web application logging, for example.
If you want to be able to do such cross matching, or simply capture some additional detail against a transaction that may help in this situation, you can use the agent API to record custom parameters against the transaction. If an error occurs, or a slow transaction trace is recorded, and it is then displayed in the UI, the additional parameters you supplied will be displayed under the Custom parameters section of the error or trace details.
To add such additional details, you can use the add_custom_attribute() function in the newrelic.agent module.
Important
Do not use brackets [suffix] at the end of your transaction name. The agent automatically strips brackets from the name. Instead, use parentheses (suffix) or other symbols if needed.
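For example, if you set transaction names explicitly with the agent's set_transaction_name() call, a minimal sketch of a name ending in a parenthesized suffix might look like the following; the name polls:detail (cached) is purely illustrative:

import newrelic.agent

def detail(request, poll_id):
    # Illustrative name only: a bracketed suffix such as '[cached]' would
    # have the brackets stripped by the agent, so parentheses are used.
    newrelic.agent.set_transaction_name('polls:detail (cached)')
    ...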
Take, for example, the following code from the polls application in the Django web framework tutorial.
from django.shortcuts import render_to_response, get_object_or_404

from polls.models import Poll
def detail(request, poll_id):
    p = get_object_or_404(Poll, pk=poll_id)
    return render_to_response('polls/detail.html', {'poll': p})
If you wanted to record the poll ID against the transaction, so that it is available if a problem occurs, you would use:
import newrelic.agent
def detail(request, poll_id):
    newrelic.agent.add_custom_attribute('poll_id', poll_id)
    p = get_object_or_404(Poll, pk=poll_id)
    return render_to_response('polls/detail.html', {'poll': p})
The value used for a custom parameter can be any value that can be serialized by the JSON encoder provided by the json module. This includes tuples, lists, and dictionaries.
Recommendation: Restrict values to integer, float, or string values. If you do use more complex data structures, avoid associating too large a data structure, as it will need to be cached until it is harvested, packaged up, and sent to the core application.
Simple values are also preferred because, if a value cannot be serialized to JSON, the error details or slow transaction trace may be thrown away.
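As a sketch of this recommendation, rather than attaching a whole data structure as a single attribute, you could flatten it into individual simple values. The poll_question attribute below assumes the tutorial's Poll model has a question field, so treat the attribute names as illustrative:

import newrelic.agent

from django.shortcuts import render_to_response, get_object_or_404

from polls.models import Poll

def detail(request, poll_id):
    p = get_object_or_404(Poll, pk=poll_id)
    # Record individual simple values rather than a dictionary such as
    # {'id': p.id, 'question': p.question}; simple values always
    # serialize cleanly to JSON.
    newrelic.agent.add_custom_attribute('poll_id', int(p.id))
    newrelic.agent.add_custom_attribute('poll_question', str(p.question))
    return render_to_response('polls/detail.html', {'poll': p})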
Recording custom metrics
We can only instrument publicly available third-party modules for Python that we know about. To extend instrumentation for your own code, use the techniques described in adding Python instrumentation. This will give you coverage in performance breakdown data for web transactions and in slow transaction traces.
You can record arbitrary performance data via an API call (for example, timing or computer resource data), then use metrics and events to chart and track that data. You can use custom metrics to unify your monitoring.
To record custom metrics, use the record_custom_metric() function in the newrelic.agent module. It takes two arguments: the name of the metric and the value. The value should be an integer or floating point value.
import newrelic.agent
def create_account(request, ...):
    ...
    newrelic.agent.record_custom_metric('Custom/Signups', 1)
    ...
    return render_to_response('...', ...)
Start the path of the metric name with Custom/ to avoid namespace collisions with existing agent metrics (for example, Custom/MyMetric).
The custom metrics will be aggregated across each harvest cycle and then sent to the collector. In this case, each 1 will be added up to record the number of signups in the harvest cycle.
Although function trace instrumentation can be used to track function calls or time spent in a block of code, it does not allow separate custom charting. Custom metrics can still be used to record and chart how long something takes.
To make this easier, you can predefine a context manager class for timing and recording of the metric.
import time

import newrelic.agent

class CustomMetricTimedNode(object):

    def __init__(self, name):
        self.name = name
        self.start_time = None
        self.end_time = None

    def __enter__(self):
        self.start_time = time.time()
        return self

    def __exit__(self, exc, value, tb):
        if not self.start_time:
            return
        self.end_time = time.time()
        duration = self.end_time - self.start_time
        newrelic.agent.record_custom_metric(self.name, duration)
This can then be used in conjunction with a with statement around a block of code.
def function():
    with CustomMetricTimedNode('Custom/TimedNode1'):
        time.sleep(0.01)
The context manager could be applied to functions by creating a function decorator.
import functools

def custom_metric_timed_node(name):
    def _decorator(f):
        @functools.wraps(f)
        def _wrapper(*args, **kwargs):
            with CustomMetricTimedNode(name):
                return f(*args, **kwargs)
        return _wrapper
    return _decorator
It would then be used as:
@custom_metric_timed_node('Custom/TimedNode2')
def function():
    time.sleep(0.01)
As when adding instrumentation for function tracing, it is recommended that you choose metric names from a finite set and keep them relatively short. Avoid names that are unbounded in value, otherwise the sheer number of them makes it hard to display them in a meaningful way. A large number of unique names can also result in excessive memory use and in the agent forcibly normalizing your names to limit how many unique names there are.
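As an illustrative sketch, avoid embedding an unbounded value such as a user ID in the metric name itself; record the data under a fixed name instead. The helper function below is hypothetical:

import newrelic.agent

def record_signup_metric(user_id):
    # Avoid unbounded names: this would create a new metric for every user.
    # newrelic.agent.record_custom_metric('Custom/Signups/%s' % user_id, 1)

    # Prefer a fixed name drawn from a small, known set; individual values
    # are aggregated per harvest cycle.
    newrelic.agent.record_custom_metric('Custom/Signups', 1)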
Caution
Collecting too many metrics can impact the performance of both your application and the agent. To avoid potential data problems, try to keep the total number of unique metrics introduced by custom instrumentation under 2000.