If you're using Celery as a distributed task queuing system, you can use the Python agent to record Celery processes as non-web transactions.
To get the Python agent working with Celery, first follow the agent installation instructions to install, configure, and test the agent. Then use these Celery-specific instructions.
Run Celery
The command you use to run Celery with the agent depends on your Celery version and your specific setup.
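For example, if you start workers with the `newrelic-admin` wrapper script, the command typically looks something like the sketch below, which assumes a config file named `newrelic.ini` and a Celery app module named `tasks`; adjust the paths and options for your setup:

```bash
NEW_RELIC_CONFIG_FILE=newrelic.ini newrelic-admin run-program celery -A tasks worker --loglevel=info
```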
Select the application name
The `app_name` setting in the Python agent configuration file sets the app name displayed in the New Relic UI.
- If your Python agent monitors Celery tasks and you set `app_name` to the same value used in your application agent's `app_name`, the data from both sources is combined in the UI under that name.
- If you set different names, the data appears separately in the UI under two different names.
By setting multiple app names in the agent config files, you can monitor both the combined data and the segregated data. Here's a common way to do this, using a Django application as an example:
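The snippet below is a minimal sketch of that pattern, assuming separate agent config files for the Django web processes and the Celery workers; `YOUR_APP_NAME` and the names in parentheses are placeholders:

```ini
# Agent config file used by the Django web processes
app_name = YOUR_APP_NAME (Django); YOUR_APP_NAME (Combined)

# Agent config file used by the Celery worker processes
app_name = YOUR_APP_NAME (Celery); YOUR_APP_NAME (Combined)
```

With this layout, the Django data and the Celery data each report under their own first name, while the shared second name collects the combined data.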
Ignore task retry errors
When using Celery, a task can fail and raise a `celery.exceptions:Retry` or `celery.exceptions:RetryTaskError` exception, depending on which version of Celery you're using.
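As a rough illustration of where these exceptions come from (a sketch only; the broker URL, task name, and retry arguments are placeholders), calling `self.retry()` inside a bound task raises Celery's retry exception, which the agent records as an error unless you ignore it:

```python
from celery import Celery

app = Celery("tasks", broker="redis://localhost:6379/0")  # placeholder broker URL

@app.task(bind=True, max_retries=3)
def fetch_page(self, url):
    try:
        ...  # do the real work here
    except Exception as exc:
        # self.retry() raises celery.exceptions.Retry (or RetryTaskError on
        # older Celery versions), which is what the ignore settings filter out.
        raise self.retry(exc=exc, countdown=5)
```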
To ignore these errors, use the New Relic server-side agent configuration UI or the agent config file. UI changes override config file changes, per the configuration precedence rules.
To ignore these errors from the UI:
- From one.newrelic.com, select APM > (select an app) > Settings > Application.
- Select Server-side agent configuration.
- From Error collection, enter the errors you want to ignore, separated by commas.
To ignore these errors using the agent config file, use the `ignore_errors` setting with a space-separated list of exceptions:
error_collector.ignore_errors = celery.exceptions:Retry celery.exceptions:RetryTaskError
Troubleshooting
When a Celery worker process is killed suddenly, the agent can't complete its normal shutdown process, so its final data payload is never sent. As a result, the agent reports fewer Celery transactions than expected, or no transactions at all.
This may occur when you use the `CELERYD_MAX_TASKS_PER_CHILD` setting, which defines the maximum number of tasks a pool worker process can execute before it's replaced with a new one. When that limit is reached, the worker is forcibly shut down and its data is not recorded by the agent. For example, if `MAX_TASKS_PER_CHILD = 1`, no data is reported.
How you troubleshoot this depends on why you want to use the `MAX_TASKS_PER_CHILD` limit in your application:
- To allow the normal shutdown process, return this setting to its default (no limit).
- To lessen the impact of the problem, raise the `MAX_TASKS_PER_CHILD` limit (see the example below).
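For example, the limit might be raised in a Celery configuration module like this (a minimal sketch; `1000` is an arbitrary value, and the setting name depends on your Celery version):

```python
# celeryconfig.py
# Newer lowercase setting name (Celery 4 and later):
worker_max_tasks_per_child = 1000

# Older uppercase name referenced above, for legacy Celery versions:
# CELERYD_MAX_TASKS_PER_CHILD = 1000
```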
Important
For Python agent version 3.2.2.94 or higher, the agent shuts down when the `MAX_TASKS_PER_CHILD` limit is reached and no data is lost.
Important
The agent can't monitor the main Celery process; it can only monitor the worker processes. See the Activate application warning.