Notes
This release of the Python agent is a hotfix release to address a problem where the agent could fail to validate the SSL certificate of the New Relic collector in some environments.
The agent can be installed using easy_install/pip/distribute via the Python Package Index or can be downloaded directly from our download site.
For a list of known issues with the Python agent, see Status of the Python agent.
Bug Fix
If the Python agent was used in an environment where the certifi package was installed, the Python agent would use the certifi CA certificates bundle to validate the certificate of the New Relic collector. However, the latest release of certifi (November 20, 2015) removed some older CA certificates with 1024-bit keys.
The SSL certificate for the New Relic collector is cross-signed with both a 1024-bit certificate and a 2048-bit certificate, but in some circumstances, the stronger root certificate was not used for validation. When the 1024-bit certificate was no longer included in the certifi bundle, SSL validation would fail. Affected customers would see warnings in their agent log stating "Data collector is not contactable" due to an SSLError.
To address this issue, the agent no longer uses the certifi CA certificates bundle, nor the certificates bundled with requests. Instead, it only uses the CA bundle included with the agent to validate the New Relic collector certificate.
Notes
This release of the Python agent is a hotfix release to address a problem where the package failed to install under certain circumstances.
The agent can be installed using easy_install/pip/distribute via the Python Package Index or can be downloaded directly from our download site.
For a list of known issues with the Python agent, see Status of the Python agent.
Bug Fix
The README.rst file contained non-ASCII characters, which could result in a UnicodeDecodeError during installation. Those characters have been removed.
Notes
This release of the Python agent reports error events to Insights and captures enhanced error data to support the new Advanced Error Analytics feature in APM.
The agent can be installed using easy_install/pip/distribute via the Python Package Index or can be downloaded directly from our download site.
For a list of known issues with the Python agent, see Status of the Python agent.
New Feature
- Error Events
The Python agent now sends TransactionError events for Advanced Error Analytics, which power the new APM Errors functionality (currently in Beta). This allows users to create charts that facet and filter their error data by attributes, as well as explore their error events in Insights. For details, see the APM Errors documentation.
Changed Feature
- Additional Attributes collected
The agent now collects additional attributes for web transactions:
- HTTP request headers: Host and Accept
- HTTP response header: Content-Length
Bug Fix
- Improved unicode support for exception messages
Unicode exception messages will still be preserved, even if sys.setdefaultencoding() has been called to change the default encoding.
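For example, an error raised with a non-ASCII message is still reported with its message intact. Below is a minimal Python 2 sketch; the setdefaultencoding() reload shown here is something an application might do, not something the agent requires, and the message text is purely illustrative.

    # -*- coding: utf-8 -*-
    import sys

    # Python 2 only: re-expose and change the interpreter's default encoding,
    # as some applications do. The agent still preserves the original unicode
    # exception message when reporting this error.
    reload(sys)
    sys.setdefaultencoding('latin-1')

    raise ValueError(u"café was not found")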
Notes
This release of the Python agent adds much more flexibility around what attributes are sent to New Relic, and where they are displayed.
The agent can be installed using easy_install/pip/distribute via the Python Package Index or can be downloaded directly from our download site.
For a list of known issues with the Python agent, see Status of the Python agent.
New Feature
- Flexible capturing of attributes
Attributes are key-value pairs that contain additional information to be added to an event or transaction. These key-value pairs can be viewed within transaction traces in New Relic APM, traced errors in New Relic APM, transaction events in Insights, and page views in Insights.
A number of new configuration settings have been introduced to allow you to customize exactly which attributes will be sent to each of these destinations.
For details, see Python agent attributes.
Deprecated Settings
Several configuration settings have been deprecated. The most commonly used of the deprecated settings are capture_params and ignored_params. It is still possible to achieve the same functionality as the old settings by using the new attributes.include and attributes.exclude settings. For examples, see Python agent attribute examples.
A complete list of deprecated settings can be found in deprecated configuration settings.
While the usage of deprecated settings is still supported, we recommend upgrading your configuration to use the new settings as soon as possible.
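As a rough illustration, a configuration that previously used the deprecated settings to capture query string parameters while omitting a sensitive one could be expressed with the new settings along these lines. This is a newrelic.ini sketch; the parameter name credit_card is purely illustrative, and the exact attribute names are described in the attribute documentation.

    [newrelic]
    # Previously (deprecated settings):
    #   capture_params = true
    #   ignored_params = credit_card

    # Roughly equivalent with the new attribute settings: request parameters
    # are not collected by default, so include them explicitly and exclude
    # the one that must never be reported.
    attributes.include = request.parameters.*
    attributes.exclude = request.parameters.credit_card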
Changed Feature
Previously, it was possible to save a list, dict, or tuple as an attribute value that could be displayed in transaction and error traces. However, these same attributes could not be displayed in Insights events. Now, all attributes are handled in a consistent manner, which means that all attribute values must be one of the following types:
- Python 2: str, unicode, int, long, float, bool
- Python 3: str, bytes, int, float, bool
All values which are not one of these types are automatically converted by calling str(value).
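The conversion rule can be pictured with a short sketch. This is illustrative code only, not the agent's internal implementation; the Python 3 types are shown.

    # Values of an allowed type pass through unchanged; anything else
    # (lists, dicts, tuples, custom objects, ...) is converted with str().
    ALLOWED_TYPES = (str, bytes, int, float, bool)

    def coerce_attribute_value(value):
        if isinstance(value, ALLOWED_TYPES):
            return value
        return str(value)

    print(coerce_attribute_value({"plan": "gold"}))  # -> {'plan': 'gold'}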
Notes
This release of the Python agent adds the ability to strip exception messages from error traces, in order to prevent the inadvertent capture of sensitive information.
The agent can be installed using easy_install/pip/distribute via the Python Package Index or can be downloaded directly from our download site.
New Features
- Allowing Exception Messages
Because an exception message can contain sensitive information, the agent now provides the ability to strip exception messages before sending error traces to APM. Exception messages will be stripped automatically in high-security mode.
For exception messages you know to be safe, you can add them to an allow list so that those messages are passed unaltered to APM. Two new configuration settings control this feature: strip_exception_messages.enabled and strip_exception_messages.whitelist.
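In a newrelic.ini file the feature could be configured along these lines. This is a sketch only; the exception name in the allow list is purely illustrative, and the exact format of list entries is described in the agent configuration documentation.

    [newrelic]
    # Replace exception messages before error data is sent to APM.
    strip_exception_messages.enabled = true

    # Exceptions whose messages are known to be safe and may be passed
    # through unaltered (illustrative entry).
    strip_exception_messages.whitelist = myapp.exceptions:SafeLookupError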
Bug Fixes
capture_request_params API disabled for high-security mode
When operating in high-security mode, the agent should not capture query string parameters. However, prior to this release, it was possible to call newrelic.agent.capture_request_params(flag=True), even if the agent was in high-security mode, and the agent would capture and report query string parameters. Now, the capture_request_params API call does not override the capture_params setting when the agent is in high-security mode, so query parameters are not captured.
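The API call in question looks like this (a minimal sketch):

    import newrelic.agent

    # Request that query string parameters be captured for the current
    # transaction. With high-security mode enabled, this call no longer
    # takes effect and query parameters remain uncollected.
    newrelic.agent.capture_request_params(flag=True)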
Notes
This release of the Python agent adds the ability to customize the hostname displayed in the APM UI, and updates the solrpy and pysolr instrumentation so that Solr metrics now appear in the Databases tab in the UI.
The agent can be installed using easy_install/pip/distribute via the Python Package Index or can be downloaded directly from our download site.
For a list of known issues with the Python agent, see Status of the Python agent.
New Features
- Customize hostname displayed in APM
A new configuration setting has been added: process_host.display_name. When set in the newrelic.ini configuration file, the display name will be used in the APM UI, in place of the hostname that the agent automatically captures. In addition, the display name can be set using the NEW_RELIC_PROCESS_HOST_DISPLAY_NAME environment variable.
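A minimal newrelic.ini sketch (the display name itself is illustrative):

    [newrelic]
    process_host.display_name = billing-worker-01

Equivalently, the same name can be supplied through the environment when launching the process, for example NEW_RELIC_PROCESS_HOST_DISPLAY_NAME=billing-worker-01.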
Features Changed
- Update solrpy and pysolr instrumentation
Previously, solrpy and pysolr instrumentation reported metrics in the Solr namespace. Now, to align them with our recent changes to SQL and NoSQL instrumentation, solrpy and pysolr have been updated to report metrics in the Datastore namespace, which means that time spent in calls to Solr will be listed both in the main overview chart and in the Databases tab in the UI.
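No application changes are required to benefit from this; a typical pysolr call such as the one below (a sketch with an assumed local Solr URL and query) is simply reported under the new namespace.

    import pysolr

    # Sketch only: the Solr URL and query are illustrative. Calls made through
    # the instrumented client now appear in the Databases tab.
    solr = pysolr.Solr('http://localhost:8983/solr/collection1')
    results = solr.search('category:books')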
Notes
This release of the Python agent adds support for Django 1.8.
The agent can be installed using easy_install/pip/distribute via the Python Package Index or can be downloaded directly from our download site.
For a list of known issues with the Python agent, see Status of the Python agent.
New Features
- Support for Django 1.8.
Features Changed
- The list of modules loaded by the application will no longer include version numbers. In certain cases, attempting to determine the version numbers of packages can generate excessive CPU overhead, so version detection has been preemptively disabled to prevent any such occurrence.
Bugs Fixed
- When using the psycopg2 Postgres database adapter, if the psycopg2.extras.register_json() function was used, then instrumentation for the psycopg2 module would fail. Now, register_json() is instrumented correctly (see the sketch after this list).
- If a Django class based view was registered as the view handler in urls.py, the transaction was named after the class name, and not the method of the class based view which handled the request. Now, the transaction is named after the method.
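A minimal sketch of the previously problematic pattern (the connection string is an assumption for illustration):

    import psycopg2
    import psycopg2.extras

    # Registering the JSON type adapter no longer breaks the agent's
    # psycopg2 instrumentation.
    conn = psycopg2.connect('dbname=example')
    psycopg2.extras.register_json(conn)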
Notes
This release of the Python agent adds instrumentation for Elasticsearch as a new datastore product and a more granular breakdown of various SQL operations in the “Databases” tab in the APM UI. In addition, the stack traces captured by the agent are now being trimmed to remove any code snippets.
The agent can be installed using easy_install/pip/distribute via the Python Package Index or can be downloaded directly from our download site.
For a list of known issues with the Python agent, see Status of the Python agent.
New Features
Improved SQL Breakdown
This agent release adds the ability to see the breakdown of time spent in SQL statements such as CREATE, DROP, ALTER, SET, CALL, EXEC, EXECUTE, COMMIT and ROLLBACK. Execution of stored procedures through the callproc() or CALL statements will provide further breakdown based on the name of the stored procedure.
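For example, a stored procedure invoked through the DB-API callproc() method, sketched below with an assumed psycopg2 connection and an illustrative procedure name, is now broken down under the procedure's name.

    import psycopg2

    # Sketch only: connection details, procedure name and arguments are
    # assumptions for illustration.
    conn = psycopg2.connect('dbname=example')
    cursor = conn.cursor()

    # Time spent here is now reported under the stored procedure's name.
    cursor.callproc('refresh_account_totals', ['acct-42'])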
Elasticsearch Support
Instrumentation support for the official Elasticsearch client module and the separate pyelasticsearch module has been added. Time spent in calls made to Elasticsearch will be listed both in the main overview chart and in the Databases tab in the UI. Previously, calls to Elasticsearch would have been shown as time spent in external web service calls.
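As with the other datastore support, no application changes are needed; calls made through the official client along the lines of the sketch below (host, index name and query are assumptions) are now reported as datastore time.

    from elasticsearch import Elasticsearch

    # Sketch only: host, index and query are illustrative. Time spent in
    # these calls now appears in the Databases tab rather than being
    # attributed to external web service calls.
    es = Elasticsearch(['localhost:9200'])
    results = es.search(index='articles', body={'query': {'match_all': {}}})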
Features Changed
Remove code snippets in stack traces
Stack traces captured for errors and slow SQL queries will no longer include code snippets. This change is to prevent the possibility of capturing sensitive data embedded within the code. It reduces the overhead in capturing stack trace information, and also avoids a potential problem caused when the code on disk has changed in the time since the process was started.
Bugs Fixed
- Messages sent to the data collector containing parts which were already compressed and encoded are no longer compressed a second time at the HTTP request level, which previously caused additional overhead.
- Guard against a potential agent error where an invalid URL was being passed to an instrumented external web service client.
- Motor (an asynchronous MongoDB library) incorrectly returns a non-string object when the agent tries to access the __name__ attribute on Motor objects. This caused the agent to fail when calculating the name for an object, since we rely on this value being a string as specified by the Python object model definition. The agent now overrides the incorrect behavior of Motor to ensure that we can still generate names of objects correctly.
- When using Python 3 with audit logging enabled, if messages being sent to our data collector were large enough that they were being compressed at the HTTP request level, the audit logging code would fail due to a bytes/Unicode mismatch.
- Instrumentation for the decr() method of the umemcache client for Memcached was incorrectly calling the stats() method.
Notes
This release of the Python agent is a minor bug fix release, including changes which may help to reduce the incidence of spurious warnings about the agent not being able to communicate with our service.
The agent can be installed using easy_install/pip/distribute via the Python Package Index or can be downloaded directly from our download site.
For a list of known issues with the Python agent, see Status of the Python agent.
Bugs Fixed
Improved management of the network connection to our service
- When the agent registered itself with our data collector, it wasn't closing the socket connection immediately; instead it held the connection open for up to a minute, until the first batch of data was reported. If the socket connection was closed remotely during that time, a BadStatusLine exception would be seen in the logs when the attempt was made to upload data.
- When the agent received an internal restart request from our data collector as the result of a server-side configuration change, the socket connection wasn't being closed explicitly. In the case of CPython it would still be cleaned up and closed immediately due to reference counting, but under PyPy the time of closure depended on when garbage collection occurred. This could mean that the socket descriptor stayed in use for a while.
Compatibility modules for transitioning from Python 2 to Python 3
When compatibility modules for Python 2/3 migration such as pies2overrides and future were installed in a Python 2 installation, they installed modules which mimic modules that would normally only ever exist in a Python 3 installation. The presence of these modules was confusing the agent's instrumentation mechanisms. The result was that use of http.client from Python 3 in a Python 2 application would fail.
Failures when making calls to external web services
- If an HTTP client module was supplied None as the value for the URL being requested, this would cause an exception when the agent was recording the data for that transaction.
- Use of the ExternalTrace context manager class directly, for recording external web service calls, would fail if there was no active transaction. This could occur in the time before the agent has successfully been able to register with our data collector (see the sketch after this list).
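A rough sketch of direct ExternalTrace usage is shown below; the constructor arguments (current transaction, library name, URL) are assumptions that may differ between agent versions, so consult the agent API documentation for your release.

    import newrelic.agent

    # Sketch only: the ExternalTrace argument list shown here is an assumption.
    transaction = newrelic.agent.current_transaction()

    with newrelic.agent.ExternalTrace(transaction, 'requests', 'https://example.com/api'):
        pass  # the outbound HTTP call would be made here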
Setting of response content length when using Django
The Django middleware installed by the agent to perform insertion of RUM monitoring code into responses would always set the Content-Length header, even if it was not previously set. This could cause issues where a frontend had been set up with an expectation that Content-Length headers would never exist.
Notes
This release of the Python agent includes a major update to how we capture and represent metrics for both SQL databases and NoSQL datastores. Metrics for both types of products will now be displayed in a unified "Databases" tab in the APM UI, and these metrics will also be associated with the specific product being used. In addition, we have enabled support for key transactions on other transaction types, such as background tasks.
The agent can be installed using easy_install/pip/distribute via the Python Package Index or can be downloaded directly from our download site.
For a list of known issues with the Python agent, see Status of the Python agent.
New Features
Unified view for SQL database and NoSQL datastore products.
The response time charts in the application overview page will now include NoSQL datastores such as Memcached, Redis, and MongoDB, and also the product name of existing SQL databases such as MySQL, Postgres, Oracle, etc.
We also introduced a new unified view within the APM UI for visualizing time spent in queries made against SQL databases and NoSQL datastore products.
For existing SQL databases, in addition to the existing breakdown of SQL statements and operations, the queries are now also associated with the database product being used.
For NoSQL datastores such as Memcached, Redis and MongoDB, we have now added information about operations performed against those products, similar to what is being done for SQL databases.
Because this introduces a notable change to how SQL database metrics are collected, it is important that you upgrade the Python agent version on all hosts. If you are unable to transition to the latest agent version on all hosts at the same time, you can still access old and new metric data for SQL databases, but the information will be split across two separate views.
Key transactions for other transactions.
In addition to being able to create key transactions for web transactions, it is now possible to create key transactions for other transactions, such as background tasks executed by Celery.
Key transactions enable the setting of a per transaction Apdex and alerts, as well as the ability to run X-Ray sessions, including transaction specific thread profiling.
NoSQL datastore instrumentation.
To complement our new unified view for SQL database and NoSQL datastore products in the APM UI, we have upgraded our existing instrumentation for Redis and MongoDB. Previously, these only provided a function breakdown in transaction summaries and sample transaction traces, but now they will report into the unified view for database and datastore products under their respective product categories.
Existing instrumentation for Memcached clients has similarly been updated, and new support for the pymemcache module has been added.
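As a rough illustration of the newly supported module (the server address, key, and import path are assumptions that may vary between pymemcache versions), standard client calls are timed by the agent and reported under the Datastore view.

    from pymemcache.client.base import Client

    # Sketch only: server address and cache key are illustrative.
    client = Client(('localhost', 11211))
    client.set('greeting', 'hello')
    value = client.get('greeting')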