Integrate the Python agent on Microsoft Azure Container Apps and App Service
preview
We're still working on this feature, but we'd love for you to try it out!
This feature is currently provided as part of a preview program pursuant to our pre-release policies.
This document describes how to integrate New Relic into Azure-hosted Python applications without modifying the application code. There are two supported instances of this capability: Azure Container Apps and Azure App Service.
Important: New Relic integration for App Services that use containerized images is not supported.
Compatibility and requirements
Before you begin, we recommend the following:

- Starting with a Container App or App Service that has been deployed
- Installing the Azure CLI in your environment if not using the Azure Portal
Info
Keep in mind that the Python agent doesn't capture telemetry for Azure Functions without our integration. We recommend installing the Azure Functions monitoring integration if you'd like to collect data about Azure Functions.
Integrate the Python agent onto Container Apps
In certain cases, an app managed through Azure Container Apps already has an image that cannot be modified by the user (or the user may simply not want to modify the app). This provides a way to integrate New Relic into the environment without having to make any modifications to the code that builds the containerized image.
This can be done through the Azure Portal or the Azure CLI.
```bash
az containerapp show --name $CONTAINER_APP_NAME --resource-group $RESOURCE_GROUP --output yaml > demoapp.yaml
```
This command produces a partial template file containing information about the container app. You'll need to add more information to this file to link New Relic to the app.
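For reference, the relevant portion of the exported `demoapp.yaml` looks roughly like this before editing (the container image and name shown here are illustrative placeholders for your app):

```yaml
properties:
  template:
    containers:
    - image: myregistry.azurecr.io/demoapp:latest  # illustrative; your app image
      name: demoapp
    initContainers: null
    volumes: null
```

The three steps below fill in the `volumes`, `initContainers`, and `volumeMounts` sections of this template.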
Mount volume to container app
In properties > template, there will be a section called volumes. Right now, this says volumes: null. We will replace that line with these lines:
```yaml
volumes:
- name: $VOLUME_NAME
  storageName: $STORAGE_MOUNT_NAME
  storageType: AzureFile
```
Here, $STORAGE_MOUNT_NAME is the storage mount name used in Step 2, and $VOLUME_NAME is a name of your choosing.
Add init container
In properties > template, there will be a section called initContainers. Right now, this says initContainers: null. We will replace that line with these lines:
```yaml
initContainers:
- args:
  - -c
  - cp -r /instrumentation /mnt/
  command:
  - /bin/sh
  image: docker.io/newrelic/newrelic-python-init
  name: nr-init-container
```
Link volume to containers
In properties > template, we now have containers and initContainers sections. Within each of these sections, add the following lines:
```yaml
volumeMounts:
- mountPath: /mnt/instrumentation
  volumeName: $VOLUME_NAME
```
Here, $VOLUME_NAME is the volume name chosen earlier.
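Before running the update in the next step, it can help to sanity-check that all three manual edits landed in `demoapp.yaml`. A minimal sketch (the heredoc below is an illustrative stand-in for your exported template; in practice, skip it and run the checks against the real file):

```shell
#!/bin/sh
# Illustrative stand-in for the exported template after the manual edits.
cat > demoapp.yaml <<'EOF'
properties:
  template:
    volumes:
    - name: nr-volume
      storageName: nr-storage
      storageType: AzureFile
    initContainers:
    - name: nr-init-container
      image: docker.io/newrelic/newrelic-python-init
    containers:
    - name: demoapp
      volumeMounts:
      - mountPath: /mnt/instrumentation
        volumeName: nr-volume
EOF

# Each grep verifies one of the three manual edits from the steps above.
for key in 'storageType: AzureFile' \
           'newrelic-python-init' \
           'mountPath: /mnt/instrumentation'; do
  if grep -q "$key" demoapp.yaml; then
    echo "ok: $key"
  else
    echo "missing: $key" >&2
  fi
done
```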
Update container app with new configuration
```bash
az containerapp update --name $CONTAINER_APP_NAME --resource-group $RESOURCE_GROUP --yaml demoapp.yaml
```
This should re-deploy the container app. Wait a few minutes for the init container to finish running.
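Optionally, to confirm the init container completed, you can inspect its logs. A sketch, using the `nr-init-container` name from the step above:

```bash
# View logs from the New Relic init container to confirm the
# instrumentation files were copied into the shared volume.
az containerapp logs show \
  --name $CONTAINER_APP_NAME \
  --resource-group $RESOURCE_GROUP \
  --container nr-init-container
```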
Integrate the Python agent onto Azure App Service
Currently, App Services support sidecars but not init containers. Until init container support is available, you can use this prebuild script instead. Note: this works only for App Services deployed from code, not for containerized images.
This can be done through the Azure Portal or the Azure CLI:
```bash
az webapp config appsettings set --name ${WEB_APP_NAME} --resource-group ${RESOURCE_GROUP} --settings NEW_RELIC_LICENSE_KEY=$NEW_RELIC_LICENSE_KEY NEW_RELIC_AZURE_OPERATOR_ENABLED=true NEW_RELIC_APP_NAME="Azure Service App" PYTHONPATH="/home:/home/workspace/newrelic"
```
If a specific agent version is desired, add the AGENT_VERSION environment variable set to the version number, prefixed with v (for example, v10.0.0), as shown below:
```bash
az webapp config appsettings set --name ${WEB_APP_NAME} --resource-group ${RESOURCE_GROUP} --settings AGENT_VERSION=v10.0.0
```
Add prebuild.sh as a startup file setting
```bash
az webapp config set --resource-group ${RESOURCE_GROUP} --name ${WEB_APP_NAME} --startup-file "/home/prebuild.sh"
```
This will take a few minutes.
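To confirm the settings from the previous steps were applied, you can list the app settings. A sketch, filtering for one of the variables set above with a JMESPath query:

```bash
# List app settings and check that the New Relic operator flag is present.
az webapp config appsettings list \
  --name ${WEB_APP_NAME} \
  --resource-group ${RESOURCE_GROUP} \
  --query "[?name=='NEW_RELIC_AZURE_OPERATOR_ENABLED']"
```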
Troubleshooting [#troubleshooting]
In some cases, telemetry may not be available, or the prebuild.sh script may cause the existing application to fail re-deployment. To remedy this, enable these environment variables: