Notes
This release of the Python agent adds instrumentation for Elasticsearch as a new datastore product and a more granular breakdown of various SQL operations in the “Databases” tab in the APM UI. In addition, the stack traces captured by the agent are now being trimmed to remove any code snippets.
The agent can be installed using easy_install/pip/distribute via the Python Package Index or can be downloaded directly from our download site.
For a list of known issues with the Python agent, see Status of the Python agent.
New Features
Improved SQL Breakdown
This agent release adds the ability to see the breakdown of time spent in SQL statements such as CREATE, DROP, ALTER, SET, CALL, EXEC, EXECUTE, COMMIT and ROLLBACK. Execution of stored procedures through the callproc() or CALL statements will provide further breakdown based on the name of the stored procedure.
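For example, a stored procedure invoked through the DB-API 2.0 callproc() method, or via an explicit CALL statement, will now be broken out under the name of that procedure. A minimal sketch, assuming the MySQLdb driver and a hypothetical stored procedure named find_user (any instrumented DB-API 2.0 driver follows the same pattern):

```python
# Minimal sketch of calls that now appear in the SQL breakdown. The
# MySQLdb driver, credentials and find_user procedure are placeholder
# examples; any instrumented DB-API 2.0 driver works the same way.
import MySQLdb

connection = MySQLdb.connect(db="example", user="app", passwd="secret")
try:
    cursor = connection.cursor()

    # Reported in the breakdown under the stored procedure's name.
    cursor.callproc("find_user", (42,))

    # An explicit CALL statement is broken out the same way.
    cursor.execute("CALL find_user(42)")

    # COMMIT and ROLLBACK now show up as their own operations too.
    connection.commit()
finally:
    connection.close()
```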
Elasticsearch Support
Instrumentation support for the official Elasticsearch client module and the separate pyelasticsearch module have been added. Time spent in calls made to Elasticsearch will be listed in both the main overview chart, as well as in the Databases tab in the UI. Previously, calls to Elasticsearch would have been shown as time spent in external web service calls.
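As an illustration, calls like the following made with the elasticsearch-py client of that era will now be reported as Elasticsearch datastore time (a minimal sketch; the node address, index and document are placeholders, and method signatures have changed in later client versions):

```python
# Minimal sketch using the official elasticsearch-py client (1.x era
# signatures). The node address, index name and document are placeholders.
from elasticsearch import Elasticsearch

es = Elasticsearch(["localhost:9200"])

# Both calls are now recorded as Elasticsearch datastore time rather than
# as generic external web service calls.
es.index(index="articles", doc_type="article", id=1,
         body={"title": "Hello"})
result = es.search(index="articles",
                   body={"query": {"match_all": {}}})
print(result["hits"]["total"])
```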
Features Changed
Remove code snippets in stack traces
Stack traces captured for errors and slow SQL queries will no longer include code snippets. This change is to prevent the possibility of capturing sensitive data embedded within the code. It reduces the overhead in capturing stack trace information, and also avoids a potential problem caused when the code on disk has changed in the time since the process was started.
Bugs Fixed
- Ensure that messages sent to the data collector containing parts which were already compressed and encoded, were not being compressed a second time at the HTTP request level causing additional overhead.
- Guard against a potential agent error where an invalid URL was being passed to an instrumented external web service client.
- Motor (an asynchronous MongoDB library) incorrectly returns a non-string object when the agent accesses the `__name__` attribute on Motor objects. This caused the agent to fail when calculating the name for an object, since we rely on this value being a string as specified by the Python object model definition. The agent now overrides the incorrect behavior of Motor so that it can still generate object names correctly (see the sketch after this list).
- When using Python 3 with audit logging enabled, if messages being sent to our data collector were large enough to be compressed at the HTTP request level, the audit logging code would fail due to a bytes/Unicode mismatch.
- Instrumentation for the `decr()` method of the umemcache client for Memcached was incorrectly calling the `stats()` method.
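The naming issue fixed for Motor comes down to the agent deriving names from attributes such as `__name__`, which the Python object model defines as strings. A purely illustrative sketch of that expectation, not the agent's actual code:

```python
# Purely illustrative: deriving a metric-style name from an object, which
# relies on __name__ and __module__ being strings as the Python object
# model specifies. This is not the agent's actual implementation.
def callable_name(obj):
    module = getattr(obj, "__module__", None) or "<unknown>"
    name = getattr(obj, "__name__", None)
    if not isinstance(name, str):
        # An object returning a non-string __name__ (as Motor did) would
        # previously break name generation; fall back to the type name.
        name = type(obj).__name__
    return "%s:%s" % (module, name)

def handler():
    pass

print(callable_name(handler))   # e.g. "__main__:handler"
```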
Notes
This release of the Python agent is a minor bug fix release, including changes which may help to reduce the incidence of spurious warnings about not being able to communicate with our service.
The agent can be installed using easy_install/pip/distribute via the Python Package Index or can be downloaded directly from our download site.
For a list of known issues with the Python agent, see Status of the Python agent.
Bugs Fixed
Improved management of the network connection to our service
- When the agent registered itself with our data collector, it wasn't closing the socket connection immediately; instead it held the connection open for up to a minute, until the first batch of data was reported. If the socket connection was closed remotely during that time, a `BadStatusLine` exception would be seen in the logs when the attempt was made to upload data.
- When the agent received an internal restart request from our data collector as the result of a server side configuration change, the socket connection wasn't being closed explicitly. Under CPython it would still be cleaned up and closed immediately due to reference counting, but under PyPy the close depended on when garbage collection occurred, meaning the socket descriptor could stay in use for a while.
Compatibility modules for transitioning from Python 2 to Python 3
When compatibility modules for Python 2/3 migration such as `pies2overrides` and `future` were installed in a Python 2 installation, they installed modules which mimic modules that would normally only ever exist in a Python 3 installation. The presence of these modules confused the agent's instrumentation mechanisms. The result was that use of `http.client` from Python 3 in a Python 2 application would fail.
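For illustration, the compatibility pattern involved looks like the following when the `future` package is installed under Python 2, where the `http.client` import resolves to the compatibility package rather than the real Python 3 standard library (a sketch of typical `future` usage, not agent code):

```python
# Sketch of the compatibility pattern involved: with the "future" package
# installed under Python 2, Python 3 style standard library imports such
# as http.client become available via its documented aliasing mechanism.
from future import standard_library
standard_library.install_aliases()

from http.client import HTTPConnection

connection = HTTPConnection("example.com")
connection.request("GET", "/")
response = connection.getresponse()
print(response.status)
```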
Failures when making calls to external web services
- If an HTTP client module was supplied `None` as the value for the URL being requested, this would cause an exception when the agent was recording the data for that transaction.
- Use of the `ExternalTrace` context manager class directly, for recording external web service calls, would fail if there was no active transaction. This could occur in the time before the agent has successfully registered with our data collector.
Setting of response content length when using Django
The Django middleware installed by the agent to insert RUM monitoring code into responses would always set the `Content-Length` header, even if it was not previously set. This could cause issues where a frontend had been set up with the expectation that `Content-Length` headers would never exist.
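The corrected behaviour is illustrated by the following sketch of an old-style Django response middleware that only refreshes `Content-Length` when the response already carried that header. This is an illustration of the fix, not the agent's actual middleware:

```python
# Illustrative only: an old-style Django response middleware that leaves
# Content-Length alone unless the response already carried the header.
# This mirrors the corrected behaviour; it is not the agent's own code.
class ContentLengthMiddleware(object):

    def process_response(self, request, response):
        if response.has_header("Content-Length"):
            # Only refresh the header when it was already set upstream.
            response["Content-Length"] = str(len(response.content))
        return response
```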
Notes
This release of the Python agent includes a major update to how we capture and represent metrics for both SQL databases and NoSQL datastores. Metrics for both types of products will now be displayed in a unified "Databases" tab in the APM UI, and these metrics will also be associated with the specific product being used. In addition, we have enabled support for having key transactions on other transactions, such as background tasks.
The agent can be installed using easy_install/pip/distribute via the Python Package Index or can be downloaded directly from our download site.
For a list of known issues with the Python agent, see Status of the Python agent.
New Features
Unified view for SQL database and NoSQL datastore products
The response time charts in the application overview page will now include NoSQL datastores such as Memcached, Redis and MongoDB and also the product name of existing SQL databases such as MySQL, Postgres, Oracle etc.
We also introduced a new unified view within the APM UI for visualizing time spent in queries made against SQL databases and NoSQL datastore products.
For existing SQL databases, in addition to the existing breakdown of SQL statements and operations, the queries are now also associated with the database product being used.
For NoSQL datastores such as Memcached, Redis and MongoDB, we have now added information about operations performed against those products, similar to what is being done for SQL databases.
Because this introduces a notable change to how SQL database metrics are collected, it is important that you upgrade the Python agent version on all hosts. If you are unable to transition to the latest agent version on all hosts at the same time, you can still access old and new metric data for SQL databases, but the information will be split across two separate views.
Key transactions for other transactions
In addition to being able to create key transactions for web transactions, it is now possible to create key transactions for other transactions, such as background tasks executed by Celery.
Key transactions enable the setting of a per transaction Apdex and alerts, as well as the ability to run X-Ray sessions, including transaction specific thread profiling.
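For reference, a background task can be recorded as a non-web transaction with the agent's `background_task` decorator, after which it can be promoted to a key transaction in the UI. A minimal sketch; the task body and names are placeholders:

```python
# Minimal sketch: recording a background job as a non-web transaction so
# it can be made a key transaction. The task body and names are
# placeholders.
import newrelic.agent

@newrelic.agent.background_task(name="reports:nightly_rollup", group="Task")
def nightly_rollup():
    pass  # ... the actual work would happen here ...

if __name__ == "__main__":
    newrelic.agent.initialize("newrelic.ini")
    newrelic.agent.register_application(timeout=10.0)
    nightly_rollup()
```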
NoSQL datastore instrumentation
To complement our new unified view for SQL database and NoSQL datastore products in the APM UI, we have upgraded our existing instrumentation for Redis and MongoDB. Previously, these only provided a function breakdown in transaction summaries and sample transaction traces, but now they will report into the unified view for database and datastore products under their respective product categories.
Existing instrumentation for Memcached clients has similarly been updated, and new support for the `pymemcache` module has been added.
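For example, operations made through the newly supported `pymemcache` module are now reported under the Memcached product (a minimal sketch assuming a memcached server on localhost):

```python
# Minimal sketch using pymemcache; assumes a memcached server listening on
# localhost:11211. These operations are reported under Memcached.
from pymemcache.client.base import Client

client = Client(("localhost", 11211))
client.set("greeting", "hello")
print(client.get("greeting"))
client.delete("greeting")
```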
Notes
This release of the Python agent includes bug fixes for issues with agent registration and use of proxies which could result in no data being reported.
The agent can be installed using easy_install/pip/distribute via the Python Package Index or can be downloaded directly from our download site.
For a list of known issues with the Python agent, see Status of the Python agent.
Bug Fixes
Agent not recovering when errors occur during registration
If the agent initially registered with our data collector successfully, but subsequently failed to upload agent setting information due to a transient backend or network issue, the agent was not recovering from the error properly. The consequence of this was that the agent would not completely start up and no data would be collected or reported by that process. The operation of the web application as a whole would not have been affected. This issue, which was introduced in version 2.36.0.30 of the agent, is now fixed.
Agent not able to connect via some proxy servers
The Python agent was not able to connect to our data collector to register when certain proxy server installations or configurations were being used. We have updated the version of the internal HTTP client library used to resolve the issue.
Identification of Python web server being used
The Python agent was incorrectly reporting the Python web server being used as Tornado when both the 'gunicorn' and 'tornado' Python modules were being imported, even if the Tornado web server module wasn't actually being used. This did not affect the operation of the agent but could lead to confusion when trying to debug deployment issues.
Important
The end-of-life date for this agent version is July 29, 2019. To update to the latest agent version, see Update the agent. For more information, see End-of-life policy.
Notes
This release of the Python agent enables the collection of transaction traces for synthetic requests by default, and adds the individual trip trace visualization for Cross Application Tracing.
The agent can be installed using easy_install/pip/distribute via the Python Package Index or can be downloaded directly from our download site.
For a list of known issues with the Python agent, see Status of the Python agent.
New Features
Cross Application Tracing for individual transactions
Version 2.38.0.31 of the Python agent introduced aggregated transaction maps. In this release, we have built on that feature to add the ability to view an end-to-end visualization of an individual transaction, in order to better understand what backend applications and external services were called during the course of a single transaction.
Features Changed
Synthetic support enabled by default
When the Python agent added support for the collection of transaction traces for synthetics requests in version 2.32.0.28, the feature had to be explicitly enabled in the agent configuration file. With this release, support for synthetics is enabled by default.
Bug Fixes
Non-ASCII characters in transaction names
If a transaction name contained a Unicode character outside of the ASCII range, the generation of the cross application tracing attributes would result in a UnicodeEncodeError. This bug affected versions 2.38.0.31 through 2.38.2.33. The issue is fixed in the current release.
Parsing SQL statements referencing the database schema
Previously, SQL statements that referenced a table from a database schema namespace, such as “schema.table_name”, would not always be parsed correctly, resulting in the table being identified as just “schema”. The agent now handles this case correctly, and the table is reported as “schema.table_name”.
Additional support for Tornado 1.X
If Tornado 1.X was being used, the instrumentation would fail at run time causing all web requests to fail. Monitoring of Tornado 1.X applications should now work correctly.
Important
The end-of-life date for this agent version is July 29, 2019. To update to the latest agent version, see Update the agent. For more information, see End-of-life policy.
Notes
This release of the Python agent fixes a memory leak which affects our Tornado instrumentation, and which was a factor in the recent issues we had with the Django template instrumentation.
The agent can be installed using easy_install/pip/distribute via the Python Package Index or can be downloaded directly from our download site.
For a list of known issues with the Python agent, see Status of the Python agent.
Bug Fixes
Due to an issue with the low level function wrappers we use to instrument third party Python modules, memory was being leaked and process memory usage could increase over time. This issue affects versions 2.32.0 through 2.38.0 of the Python agent and has been impacting our Tornado instrumentation.
The memory leak has also been identified as the root cause for memory leaks in our Django template instrumentation, affecting versions 2.32.0.28 through 2.36.0.30. The Django template instrumentation was disabled in the prior 2.38.0.31 release while we investigated the cause. It remains disabled by default.
Important
The end-of-life date for this agent version is July 29, 2019. To update to the latest agent version, see Update the agent. For more information, see End-of-life policy.
Notes
This release of the Python agent adds support for the mapping features of Cross Application Tracing, which provide a visualization of how requests flow through multiple applications within a distributed system.
The agent can be installed using easy_install/pip/distribute via the Python Package Index or can be downloaded directly from our download site.
For a list of known issues with the Python agent, see Status of the Python agent.
New Features
Cross Application Tracing
The Cross Application Tracing feature is now enhanced with the aggregate transaction map visualizations. This new visualization will help users spot bottlenecks in external services and give them an end-to-end understanding of what other applications and services are called within a transaction. All agents involved in the cross application communication must be upgraded to see the complete graph.
Improvements
Tornado 3.x Instrumentation
The newly refactored Tornado instrumentation is now turned on by default. It was introduced in Python agent version 2.32.0.28 under an optional feature flag. This change improves the stability of our Tornado instrumentation and accounts for the incremental changes introduced to the Tornado 3.x source tree. It also provides a more granular view of where time is spent in a WebTransaction, by distinguishing time spent doing work from time spent waiting on an asynchronous call.
Please be advised that we currently do not have instrumentation for Tornado 4.x, but we are working to add support for it.
Capacity Analysis for mod_wsgi
The capacity analysis report shows how many instances of your web application are running and how busy they are. For a web application, the busyness measure is calculated by looking at how much of the time the total available set of request handler threads is kept busy.
In the case of Apache/mod_wsgi in daemon mode, this measure has been over-reported due to the way mod_wsgi manages threads. If you are using mod_wsgi version 4.1.0 or higher, the measure will now be reported more accurately, as the agent will use information provided by mod_wsgi about the total number of request handler threads available, rather than trying to calculate it from which threads have been seen handling requests.
Bug Fixes
In versions 2.28.0.26 and 2.32.0.28 of the agent, we added new features to track template includes and inclusion tags when using the Django template rendering system. We have had a few reports indicating that, for some users but not all, the instrumentation implementing these features may be causing an ongoing growth in memory usage for the web application processes over time. Right now we don't understand the root cause of why this occurs in some installations, so to be on the safe side we have disabled these two features while we investigate further.
The consequence of these features being disabled is that the web transaction performance breakdown and sample transaction traces will no longer show a separate metric or entry where one template has been included in another. Further, if tracking of special inclusion tags had been configured, those will also no longer be shown. We anticipate re-enabling the features once we have worked out what is occurring and made changes to ensure it does not happen again.
Important
The end-of-life date for this agent version is July 29, 2019. To update to the latest agent version, see Update the agent. For more information, see End-of-life policy.
Notes
This release of the Python agent provides improvements to how we report agent configuration information. This enables us to better help you debug any issues you may experience with configuring the agent.
The agent can be installed using easy_install/pip/distribute via the Python Package Index or can be downloaded directly from our download site.
For a list of known issues with the Python agent, see Status of the Python agent.
New Features
Configuration Information
Agent configuration information displayed in the New Relic UI will now reflect the final configuration used by the agent for an application. This includes the result of any server side configuration settings, which are applied on top of the agent defaults, agent configuration file or environment variables. Previously, the result of applying the server side configuration settings was not being displayed.
Bug Fixes
- If running a system with a Linux 3.X kernel, the agent could fail when attempting to register with our data collector. No data would subsequently be collected or reported. This would occur for versions of Python 2 prior to Python 2.7.3 and versions of Python 3 prior to Python 3.3, if they were built on a system with a Linux 3.X kernel.
Important
The end-of-life date for this agent version is July 29, 2019. To update to the latest agent version, see Update the agent. For more information, see End-of-life policy.
Notes
This release of the Python agent provides support for Labels and Rollups, making it possible to organize your applications in the APM UI into meaningful categories.
The agent can be installed using easy_install/pip/distribute via the Python Package Index or can be downloaded directly from our download site.
For a list of known issues with the Python agent, see Status of the Python agent.
New Features
Labels and Rollups
The Python agent now supports the ability to apply labels to applications, so that you can easily sort, filter, and page through all of the applications on your account's Applications list.
Configuration can be done in the newrelic.ini file:

labels = Server:One;Data Center:Primary

Labels can also be configured by setting a NEW_RELIC_LABELS environment variable:

NEW_RELIC_LABELS=Server:One;Data Center:Primary

More information on using labels to categorize your applications can be found in the New Relic APM documentation.
New CPU Reporting in Environment Snapshot
Previously, the Python agent captured two CPU-related values to report to the Environment Snapshot: Logical Processors and Physical Processors. Now, it captures the following three values:
- Logical Processors: The total number of hyper-threaded execution contexts available, including execution contexts that may exist on the same core. This value remains unchanged from previous agents.
- Physical Cores: The total number of physical CPU cores available, counting hyper-threaded siblings as a single core. This value was previously reported as "Physical Processors."
- Physical Processor Packages: The total number of processor packages or dies (each of which may contain multiple physical cores). This value is new with this agent release.
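The distinction between the three values can be seen by inspecting /proc/cpuinfo on Linux, as in this illustrative sketch (not necessarily how the agent itself derives the numbers):

```python
# Illustrative only: counting logical processors, physical cores and
# processor packages from /proc/cpuinfo on Linux. This is not necessarily
# how the agent derives the values it reports.
def cpu_counts(path="/proc/cpuinfo"):
    logical = 0
    cores = set()
    packages = set()
    physical_id = core_id = None

    with open(path) as cpuinfo:
        for line in cpuinfo:
            if line.startswith("processor"):
                logical += 1
            elif line.startswith("physical id"):
                physical_id = line.split(":", 1)[1].strip()
                packages.add(physical_id)
            elif line.startswith("core id"):
                core_id = line.split(":", 1)[1].strip()
                cores.add((physical_id, core_id))

    # Fall back sensibly when the fields are missing (for example in
    # some virtualized environments).
    return logical, len(cores) or logical, len(packages) or 1

print(cpu_counts())   # e.g. (8, 4, 1) on a hyper-threaded quad core
```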
Bug Fixes
- A "Runtime Error: transaction already active" will no longer be seen in the case where the agent created nested transaction wrappers and
newrelic.agent.ignore_transaction()
was called within the outer wrapper but outside the inner wrapper. Previously, this error could have also been triggered when using the WSGI environment setting fornewrelic.ignore_transaction
set by SetEnv in mod_wsgi. - Prior to this version, the HTTP_REFERER URL reported for a transaction contained query parameters, even if the
capture_params
setting was set to False. Now, thecapture_params
setting is respected when reporting the HTTP_REFERER URL.
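For reference, `newrelic.agent.ignore_transaction()` is the supported way to discard the current transaction from within monitored code, as in this minimal sketch (the handler shown is a placeholder):

```python
# Minimal sketch: discarding the current transaction from within monitored
# code. The health_check handler is a placeholder for any code path whose
# transactions should not be recorded.
import newrelic.agent

def health_check(request):
    # Tell the agent not to record the transaction this code runs in.
    newrelic.agent.ignore_transaction()
    return "OK"
```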
Important
The end-of-life date for this agent version is July 29, 2019. To update to the latest agent version, see Update the agent. For more information, see End-of-life policy.
Notes
This release of the Python agent provides a preview of a significant overhaul of our instrumentation for Tornado version 3.2 and earlier. Further improvements to our Django instrumentation are also included, allowing time spent in rendering Django sub templates to be viewed separately.
The agent can be installed using easy_install/pip/distribute via the Python Package Index or can be downloaded directly from our download site.
For a list of known issues with the Python agent, see Status of the Python agent.
New Features
Breakout of Django template rendering
Previously when using Django templates, if the Django `include` tag was being used, the time spent rendering that sub template was shown under the generic `Template/Render` category. Now, when the `include` tag is used, a distinct metric is created corresponding to the rendering time for that sub template. This will appear in the transaction performance breakdown and also in sample transaction traces. Because the name of the template is included in the metric name, sample transaction traces now also provide additional context for understanding where time is being spent when rendering that template.
Features Changed
Preview of improved Tornado instrumentation
Our instrumentation for Tornado version 3.2 and earlier has been experiencing a number of issues which could result in data collection stopping and, in some cases, runtime exceptions which would affect the outcome of the executing web request. Upon investigation we found this came about due to incremental changes in the internals of Tornado made after we originally implemented the instrumentation, changes we didn't pick up as having been made. The Tornado developers have also recently released version 4.0 of Tornado, which has much more significant internal changes that cause our instrumentation to fail completely.
Before we could embark on supporting version 4.0 of Tornado, we deemed it necessary to completely overhaul our existing instrumentation for older versions first, in order to give us a good foundation on which to then implement support for version 4.0. This release of the agent provides a preview of the improved instrumentation for versions of Tornado prior to 4.0. The new instrumentation is not enabled by default and you will need to explicitly enable it. We have provided it as an opt-in change at this point so you can properly test with the update first and provide us with feedback on any remaining issues you may find.
To enable the preview of the new Tornado instrumentation, you will need to add the following setting to the [newrelic] section of the agent configuration file:

feature_flag = tornado.instrumentation.r2

If you are running on Heroku and are not using an agent configuration file, you can instead set the NEW_RELIC_FEATURE_FLAG environment variable. You can do this by running the Heroku command:

heroku config:set NEW_RELIC_FEATURE_FLAG=tornado.instrumentation.r2
Bug Fixes
When an exception was raised by a WSGI application during the yielding of response content via a generator, the recording of that web request by the agent may not have been closed off properly. This would result in no further web requests handled by that thread being recorded and reported. If this occurred for all request handler threads, then no data would be reported for the whole web process. This issue relates to the behaviour of the Python garbage collector and when Python objects are destroyed. At this point we believe it only affects users of PyPy and does not affect users of CPython, as CPython's reference counting model usually gives more deterministic behaviour around when Python objects are destroyed. We don't, however, rule out that it could affect CPython, and it may explain a matching problem we have seen in a couple of cases where users were using uWSGI.
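The pattern involved is a WSGI application that yields its response content from a generator, where per-request cleanup only runs when the iterable is closed or garbage collected, as in this simplified illustration:

```python
# Simplified illustration of the pattern involved, not the agent's code.
# If an exception escapes while the response is being yielded, cleanup is
# deferred to close() or garbage collection; that is deterministic under
# CPython's reference counting but may be delayed under PyPy.
def application(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])

    def generate():
        try:
            yield b"first chunk\n"
            yield b"second chunk\n"
        finally:
            # Per-request cleanup would normally happen here; it only runs
            # when the generator is exhausted, closed or collected.
            pass

    return generate()
```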
When attempting to use Tornado as a worker for gunicorn, an exception could occur on startup which would result in gunicorn failing immediately and exiting. In order to use this combination it was previously necessary to disable the gunicorn specific instrumentation related to WSGI applications by adding to your agent configuration file:
[import-hook:gunicorn.app.base]
enabled = false

If you were using the Tornado worker with gunicorn and using this workaround, the underlying problem has now been addressed and you should be able to remove that section from your agent configuration file.
The mechanism we used for applying function wrappers for instrumentation was not optimal for methods of classes. This could result in problems, including unexpected runtime exceptions, especially when trying to apply instrumentation to class methods, or to methods of an existing instance of a type.