Learn how to configure your synthetics job manager by using environment variables in your configuration.
Important
Custom modules, permanent data storage, and user-defined environment variables are not supported for the synthetics job manager at this time.
Note that New Relic is not liable for any modifications you make to the synthetics job manager files.
Environment variables
Environment variables allow you to fine-tune the synthetics job manager configuration to meet your specific environmental and functional needs.
The variables are provided at startup using the -e, --env argument.
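For example, a minimal Docker startup might look like the following sketch. The container name and key are placeholders, and the image tag and volume mount reflect a typical Docker deployment; adjust them to your environment.

```
# Minimal sketch: start the synthetics job manager, passing the required
# environment variable with -e. YOUR_CONTAINER_NAME and the key are placeholders.
docker run \
  --name YOUR_CONTAINER_NAME \
  -e "PRIVATE_LOCATION_KEY=YOUR_PRIVATE_LOCATION_KEY" \
  -v /var/run/docker.sock:/var/run/docker.sock:rw \
  -d --restart unless-stopped \
  newrelic/synthetics-job-manager:latest
```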
The following table shows all the environment variables that the synthetics job manager supports. PRIVATE_LOCATION_KEY is required; all other variables are optional.
| Name | Description |
|---|---|
| PRIVATE_LOCATION_KEY | REQUIRED. Private location key, as found on the Private Location entity list. |
| Format: Default: |
| Points the synthetics job manager to a given |
| For US-based accounts, the endpoint is: For EU-based accounts, the endpoint is: Ensure your synthetics job manager can connect to the appropriate endpoint in order to serve your monitor. |
| The Docker Registry domain where the runtime images are hosted. Use this to override |
| The Docker repository / organization where the runtime images are hosted. Use this to override |
| Proxy server host used for Horde communication. Format: |
| Proxy server port used for Horde communication. Format: |
| Proxy server username used for Horde communication. Format: |
| Proxy server password used for Horde communication. Format: |
| Accept self signed proxy certificates for the proxy server connection used for Horde communication? Acceptable values: |
| The maximum number of seconds that your monitor checks are allowed to run. This value must be an integer greater than 0 and no greater than 900 seconds (that is, from 1 second to 15 minutes). Default: 180 seconds |
| When contacting New Relic Support, they may ask you to increase this to Default: |
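Optional variables are passed the same way as the required key. The sketch below is illustrative only: the variable names shown (CHECK_TIMEOUT, HORDE_API_PROXY_HOST, HORDE_API_PROXY_PORT) are assumptions based on the descriptions in the table above, so confirm the exact names before using them.

```
# Sketch only: optional variable names below are assumed for illustration.
# Verify them against the supported variables table before use.
docker run \
  -e "PRIVATE_LOCATION_KEY=YOUR_PRIVATE_LOCATION_KEY" \
  -e "CHECK_TIMEOUT=300" \
  -e "HORDE_API_PROXY_HOST=proxy.example.com" \
  -e "HORDE_API_PROXY_PORT=8888" \
  -v /var/run/docker.sock:/var/run/docker.sock:rw \
  -d --restart unless-stopped \
  newrelic/synthetics-job-manager:latest
```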
The variables are provided at startup using the --set argument.
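For example, a typical Helm install passes the private location key with --set. In this sketch, the repo alias, release name, and namespace are placeholders; the repository URL is New Relic's public Helm chart repository, but verify it against the current documentation.

```
# Sketch: install the synthetics-job-manager chart, passing values with --set.
# Repo alias, release name, and namespace are placeholders.
helm repo add YOUR_REPO_NAME https://helm-charts.newrelic.com
helm install YOUR_RELEASE_NAME YOUR_REPO_NAME/synthetics-job-manager \
  -n YOUR_NAMESPACE --create-namespace \
  --set synthetics.privateLocationKey=YOUR_PRIVATE_LOCATION_KEY
```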
The following list shows all the environment variables that the synthetics job manager supports. synthetics.privateLocationKey is required; all other variables are optional.
A number of additional advanced settings are available and fully documented in our Helm chart README.
| Name | Description |
|---|---|
| REQUIRED if |
| REQUIRED if |
| Number of replicas to maintain with your installation Default: |
| The name of the secret object used to pull an image from a specified container registry. |
| Name override used for your Deployment, replacing the default. |
| Release version of synthetics-job-manager to use instead of the version specified in chart.yml. |
| If contacting New Relic Support, they may ask you to increase this to Default: |
| For US-based accounts, the endpoint is: For EU-based accounts, the endpoint is: Ensure your synthetics job manager can connect to the appropriate endpoint in order to serve your monitor. |
| The Docker Registry and Organization where the Minion Runner image is hosted. Use this to override |
| Proxy server used for Horde communication. Format: |
| Proxy server port used for Horde communication. Format: |
| Accept self signed certificates when using a proxy server for Horde communication. Acceptable values: |
| Proxy server username for Horde communication. Format: |
| Proxy server password for Horde communication. Format: |
| The maximum number of seconds that your monitor checks are allowed to run. This value must be an integer greater than 0 and no greater than 900 seconds (that is, from 1 second to 15 minutes). Default: 180 seconds |
| The container to pull. Default: |
| The pull policy. Default: |
| Set a custom security context for the synthetics-job-manager pod. |
| Whether or not the persistent ping runtime should be deployed. This can be disabled if you do not use ping monitors. Default: |
| The number of ping runtime containers to deploy. Increase the replicaCount to scale the deployment based on your ping monitoring needs. Default: |
| The container image to pull for the ping runtime. Default: |
| The pull policy for the ping-runtime container. Default: |
| Whether or not the Node.js API runtime should be deployed. This can be disabled if you do not use scripted API monitors. Default: |
| The number of Node.js API runtime Default: |
| The number of Node.js API runtime Default: |
| The container image to pull for the Node.js API runtime. Default: |
| The pull policy for the Node.js API runtime container. Default: |
| Whether or not the Node.js browser runtime should be deployed. This can be disabled if you do not use simple or scripted browser monitors. Default: |
| The number of Chrome browser runtime Default: |
| The number of Chrome browser runtime Default: |
| The container image to pull for the Node.js browser runtime. Default: |
| The pull policy for the Node.js browser runtime container. Default: |
Sizing considerations for Kubernetes and Docker
Tip
Docker-specific sizing considerations will be available soon.
If you're working in larger environments, you may need to customize the job manager configuration to meet minimum requirements to execute synthetic monitors efficiently. Many factors can impact sizing requirements for a synthetics job manager deployment, including:
- If all runtimes are required based on expected usage
- The number of jobs per minute by monitor type (ping, simple or scripted browser, and scripted API)
- Job duration, including jobs that time out at around 3 minutes
- The number of job failures. When a monitor starts to fail, automatic retries are scheduled to provide the built-in 3/3 retry logic. These additional jobs add to the throughput requirements of the synthetics job manager.
In addition to the sizing configuration settings listed below, additional synthetics job managers can be deployed with the same private location key to load balance jobs across multiple environments.
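As a minimal sketch on Docker, scaling out is simply a matter of starting another job manager container, on the same host or a different one, with the same private location key; jobs for the location are then distributed across both. The container name and key are placeholders.

```
# Sketch: a second job manager sharing the same private location key.
docker run \
  --name YOUR_SECOND_CONTAINER_NAME \
  -e "PRIVATE_LOCATION_KEY=YOUR_PRIVATE_LOCATION_KEY" \
  -v /var/run/docker.sock:/var/run/docker.sock:rw \
  -d --restart unless-stopped \
  newrelic/synthetics-job-manager:latest
```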
Each runtime used by the K8s synthetics job manager can be sized independently by setting values in the Helm chart.
Additional ping runtimes can be started to help execute ping monitor load by increasing the ping-runtime.replicaCount setting from its default value of 1.
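For example, a Helm upgrade along these lines scales the ping runtime to two replicas on an existing release; the release name, repo alias, and namespace are placeholders.

```
# Sketch: scale the ping runtime to 2 replicas, keeping existing values.
helm upgrade YOUR_RELEASE_NAME YOUR_REPO_NAME/synthetics-job-manager \
  -n YOUR_NAMESPACE \
  --reuse-values \
  --set ping-runtime.replicaCount=2
```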
The Node.js API and Node.js browser runtimes are sized independently using a combination of the parallelism and completions settings. Ideal configurations for these settings vary based on customer requirements.
The parallelism setting controls how many jobs of that type (Node.js API or Node.js browser) can run concurrently in your K8s cluster. It is the equivalent of the synthetics.heavyWorkers configuration in the containerized private minion (CPM). Ensure that your K8s cluster has enough resources available to run this number of concurrent runtimes based on their resource request and limit values.
The completions setting controls how many jobs of this type must complete before waiting on the every-1-minute CronJob schedule to start additional jobs of this runtime type. The completions setting should be set to an estimate of how many jobs can be completed in one minute, taking the parallelism setting into account. When completions is greater than 1, completed jobs will display in kubectl get pods, but resources are released as soon as those jobs are marked complete or failed.
Formulae to help calculate a baseline for completions and parallelism for each runtime:
completions = 60 / avg job duration
parallelism = jobs per minute / completions
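As a worked example, suppose a runtime's jobs average 10 seconds and arrive at about 30 jobs per minute; these numbers are assumptions for illustration, not measurements.

```
# Assumed measurements (illustrative only):
#   avg job duration = 10 seconds, load = 30 jobs per minute
completions = 60 / 10 = 6
parallelism = 30 / 6 = 5
```

With these assumed numbers, the runtime would be configured to run 5 jobs at once and finish 6 jobs per CronJob cycle. Use the queries below to replace the assumptions with real measurements from your private location.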
Different runtimes will likely have different job durations and rates. The following queries can be used to obtain average duration and rate for a private location.
# non-ping average job duration by runtime type
FROM SyntheticCheck SELECT average(duration) AS 'avg job duration' WHERE type != 'SIMPLE' AND location = 'YOUR_PRIVATE_LOCATION' FACET type SINCE 1 hour ago
# non-ping jobs per minute by runtime type
FROM SyntheticCheck SELECT rate(uniqueCount(id), 1 minute) AS 'jobs per minute' WHERE type != 'SIMPLE' AND location = 'YOUR_PRIVATE_LOCATION' FACET type SINCE 1 hour ago
Tip
The above queries are based on current results. If your private location does not have any results or the job manager is not performing at its best, query results may not be accurate. In that case, try a few different values for completions and parallelism until you see jobs filling at least 1 minute (enough completions) and a queue that is not growing (enough parallelism).
| Example | Description |
|---|---|
| The runtime will execute 1 job per minute. After 1 job completes, the |
| The runtime will execute 1 job at a time. After the job completes, a new job will start immediately. After the |
| The runtime will execute 3 jobs at once. After any of these jobs complete, a new job will start immediately. After the |
If jobs take longer to complete, fewer completions are needed to fill 1 minute but more parallel pods are needed. Similarly, if more jobs need to be processed per minute, more parallel pods are needed.
You can use the completions value to obtain a good starting point for the value of parallelism. The parallelism setting directly affects how many jobs per minute can be run: too small a value and the queue may grow; too large a value and the node may become resource constrained.
If your parallelism setting is working well to keep the queue at zero, it's not a bad idea to set a higher value for completions than what was initially calculated from 60 / avg job duration. This can help accommodate cases where jobs complete faster than expected and avoid the scenario where the completions value is reached before 1 minute has elapsed. The most efficient completions setting will fill the entire minute with running and completing jobs. If there aren't enough completions, job processing for that runtime will sit idle for some number of seconds until the next CronJob can spin up another set of completions.
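Putting this together, a Helm upgrade applying calculated values might look like the following sketch. The node-browser-runtime and node-api-runtime key prefixes are assumptions patterned after ping-runtime.replicaCount, and the numbers reuse the worked example above; confirm the exact value names in the Helm chart README before applying them.

```
# Sketch: apply calculated parallelism/completions to an existing release.
# The *-runtime key prefixes are assumed; check the chart README for exact names.
helm upgrade YOUR_RELEASE_NAME YOUR_REPO_NAME/synthetics-job-manager \
  -n YOUR_NAMESPACE \
  --reuse-values \
  --set node-browser-runtime.parallelism=5 \
  --set node-browser-runtime.completions=6 \
  --set node-api-runtime.parallelism=2 \
  --set node-api-runtime.completions=10
```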