
Synthetics job manager configuration

This doc guides you through configuring your synthetics job manager, showing you how to set user-defined variables for scripted monitors, install custom node modules, configure permanent data storage, and tune environment variables for sizing.

User-defined variables for scripted monitors

Private synthetics job managers let you configure environment variables for scripted monitors. These variables are managed locally on the SJM and can be accessed via $env.USER_DEFINED_VARIABLES. You can set user-defined variables in two ways: mount a JSON file, or supply an environment variable to the SJM at launch. If both are provided, the SJM uses only the values provided by the environment.
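As a minimal sketch of both approaches, consider a hypothetical user_defined_variables.json file (the variable name and value are placeholders):

{
  "MY_VARIABLE": "my_value"
}

Mounting the file at launch (the container path shown is an assumption for illustration):

docker run ... -v /path/to/user_defined_variables.json:/var/lib/newrelic/synthetics/variables/user_defined_variables.json:rw ...

Supplying the same values as an environment variable at launch instead (the USER_DEFINED_VARIABLES variable name is an assumption for illustration):

docker run ... -e USER_DEFINED_VARIABLES='{"MY_VARIABLE":"my_value"}' ...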

Accessing user-defined environment variables from scripts

To reference a configured user-defined environment variable, use the reserved property $env.USER_DEFINED_VARIABLES followed by the name of a given variable in dot notation.

For example, $env.USER_DEFINED_VARIABLES.MY_VARIABLE
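Within a scripted monitor, the reference behaves like any other value. A minimal sketch, assuming a user-defined variable named MY_VARIABLE has been configured:

// MY_VARIABLE is a placeholder name for a configured user-defined variable
var myValue = $env.USER_DEFINED_VARIABLES.MY_VARIABLE;
console.log('MY_VARIABLE is set to: ' + myValue);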

Caution

User-defined environment variables are not sanitized from logs. Consider using the secure credentials feature for sensitive information.
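For comparison, a secure credential is referenced through $secure rather than $env and is masked in logs. A sketch, assuming a secure credential named MY_SECRET has been created in your account:

// MY_SECRET is a placeholder name for a secure credential configured
// in New Relic; its value is not written to logs in plain text.
var apiKey = $secure.MY_SECRET;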

Custom node modules

Custom node modules are supported in both the CPM and the SJM. They allow you to create a customized set of node modules and use them in scripted monitors (scripted API and scripted browser) for synthetic monitoring.

To set up the modules:

  1. Create a directory containing a package.json file that follows official npm guidelines in the root of the directory. The SJM will install any dependencies listed in the package.json's dependencies field, and these dependencies will be available when running monitors on the private synthetics job manager. See the example below.
  2. Once you've created the custom modules directory and its package.json, apply it to your SJM for Docker or Kubernetes.
  3. To check whether the modules were installed correctly or whether any errors occurred, review the SJM logs for the section titled "... Initialization of Custom Modules ...". These logs include the npm installation output, with information about the installation process and any errors encountered.
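For example, a minimal package.json might look like the following. The smallest package matches the require('smallest') call below; the name, description, and version values are placeholders:

{
  "name": "custom-modules",
  "version": "1.0.0",
  "description": "example custom modules directory",
  "dependencies": {
    "smallest": "^1.0.0"
  }
}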

Now you can add require('smallest'); to the scripts of monitors you send to this private location.

Change package.json for custom modules

In addition to local and hosted modules, you can use custom Node.js modules as well. To update the custom modules used by your SJM, make changes to the package.json file and restart the SJM. During the restart, the SJM detects the configuration change and automatically performs cleanup and reinstallation so the updated modules are applied.
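For a Docker-based SJM, the restart might be as simple as the following sketch (YOUR_CONTAINER_NAME is a placeholder):

docker restart YOUR_CONTAINER_NAME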

Caution

Local modules: While your package.json can include any local module, these modules must reside inside the tree under your custom module directory. If stored outside the tree, the initialization process will fail and you will see an error message in the Docker logs after launching the SJM.
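As a sketch, a local module kept inside the custom modules directory can be declared with a relative file: path (my-local-module is a placeholder name):

{
  "dependencies": {
    "my-local-module": "file:./my-local-module"
  }
}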

Permanent data storage

You may want to use permanent data storage to provide the user_defined_variables.json file or to support custom node modules.

Docker

To set permanent data storage on Docker:

  1. Create a directory on the host where you are launching the Job Manager. This is your source directory.
  2. Launch the Job Manager, mounting the source directory to the target directory /var/lib/newrelic/synthetics.

Example:

docker run ... -v /sjm-volume:/var/lib/newrelic/synthetics:rw ...
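The source directory holds the files described earlier in this doc. One possible layout (the file and directory names under /sjm-volume are assumptions for illustration):

/sjm-volume/user_defined_variables.json
/sjm-volume/custom-modules/package.json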

Kubernetes

To set permanent data storage on Kubernetes, you have two options:

  1. Provide an existing PersistentVolumeClaim (PVC) for an existing PersistentVolume (PV), setting the synthetics.persistence.existingClaimName configuration value.

Example:

helm install ... --set synthetics.persistence.existingClaimName=sjm-claim ...
  2. Provide an existing PersistentVolume (PV) name, setting the synthetics.persistence.existingVolumeName configuration value. Helm will generate a PVC for you.

You may optionally set the following values as well:

  • synthetics.persistence.storageClass: the storage class of the existing PV. If not provided, Kubernetes will use the default storage class.
  • synthetics.persistence.size: the size for the claim. If not set, the default is currently 2Gi.
Example:

helm install ... --set synthetics.persistence.existingVolumeName=sjm-volume --set synthetics.persistence.storageClass=standard ...

Environment variables

Environment variables allow you to fine-tune the synthetics job manager configuration to meet your specific environmental and functional needs.

Sizing considerations for Kubernetes and Docker

Tip

Docker-specific sizing considerations will be available soon.

If you're working in larger environments, you may need to customize the job manager configuration to meet minimum requirements to execute synthetic monitors efficiently. Many factors can impact sizing requirements for a synthetics job manager deployment, including:

  • If all runtimes are required based on expected usage
  • The number of jobs per minute by monitor type (ping, simple or scripted browser, and scripted API)
  • Job duration, including jobs that time out at around 3 minutes
  • The number of job failures. When a monitor starts to fail, automatic retries are scheduled to provide the built-in 3/3 retry logic. These additional jobs add to the throughput requirements of the synthetics job manager.

In addition to the sizing configuration settings listed below, additional synthetics job managers can be deployed with the same private location key to load balance jobs across multiple environments.
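As a sketch, launching a second SJM against the same private location might look like the following; the PRIVATE_LOCATION_KEY variable name is an assumption for illustration:

docker run ... -e PRIVATE_LOCATION_KEY=YOUR_PRIVATE_LOCATION_KEY ...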

Kubernetes

Each runtime used by the Kubernetes synthetics job manager can be sized independently by setting values in the helm chart.

Additional ping runtimes can be started to help execute ping monitor load by increasing the ping-runtime.replicaCount setting from the default value of 1.
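For example, following the helm install pattern shown earlier:

helm install ... --set ping-runtime.replicaCount=2 ...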

The Node.js API and Node.js Browser runtimes are sized independently using a combination of the parallelism and completions settings. Ideal configurations for these settings will vary based on customer requirements.

The parallelism setting controls how many pods of a particular runtime run concurrently. The parallelism setting is the equivalent of the synthetics.heavyWorkers configuration in the containerized private minion (CPM). Ensure that your Kubernetes cluster has enough resources available to run this number of pods based on their resource request and limit values.

The completions setting controls how many pods of a particular runtime must complete before the CronJob can start another Kubernetes Job for that runtime. Note the difference between a Kubernetes Job (capital J) and a synthetics monitor job. For improved efficiency, completions should be set to 6-10x the parallelism value. This can help to minimize the "nearing the end of completions" inefficiency, where fewer than the parallelism number of pods could end up running as the Kubernetes Job waits for all completions to finish.

When completions is greater than 1, pods with a "Completed" status will remain visible in the output of kubectl get pods -n YOUR_NAMESPACE until all completions defined in the Kubernetes Job have been met, for example 6/6 completions. Resources are released from the node when a pod has a status of Completed or Failed.

A Kubernetes Job age of 5 minutes (kubectl get jobs -n YOUR_NAMESPACE) is a conservative target to account for variability in how long it takes pods to complete and how many synthetics jobs need to run per minute (jobs rate). The following equations can be used as a starting point for completions and parallelism for each runtime. Adjustments may need to be made based on observations of private location queue growth.

completions = 300 / avg job duration (s)
parallelism = synthetics jobs per 5 minutes / completions
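As a worked example, suppose a runtime averages 30-second jobs and needs to handle 50 synthetics jobs per 5 minutes (illustrative numbers only):

completions = 300 / 30 = 10
parallelism = 50 / 10 = 5

These values could then be applied through the helm chart. The node-api-runtime value names below are assumptions for illustration, following the ping-runtime.replicaCount pattern above:

helm install ... --set node-api-runtime.parallelism=5 --set node-api-runtime.completions=10 ...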

Different runtimes will likely have different synthetics job durations and rates. The following queries can be used to obtain average duration and rate for a private location.

# non-ping average job duration by runtime type
FROM SyntheticCheck SELECT average(duration) AS 'avg job duration' WHERE type != 'SIMPLE' AND location = 'YOUR_PRIVATE_LOCATION' FACET type SINCE 1 hour ago
# non-ping jobs per 5 minutes by runtime type
FROM SyntheticCheck SELECT rate(uniqueCount(id), 5 minutes) AS 'jobs per 5 minutes' WHERE type != 'SIMPLE' AND location = 'YOUR_PRIVATE_LOCATION' FACET type SINCE 1 hour ago

Tip

The above queries are based on current results. If your private location does not have any results or the job manager is not performing at its best, query results may not be accurate. In that case, try a few different values for completions and parallelism until you see a kubectl get jobs -n YOUR_NAMESPACE duration of at least 5 minutes (enough completions) and the queue is not growing (enough parallelism).

Example: parallelism=1, completions=1

Description: The runtime will execute 1 synthetics job per minute. After 1 job completes, the CronJob configuration will start a new job at the next minute. Throughput will be extremely limited with this configuration.

Example: parallelism=1, completions=6

Description: The runtime will execute 1 synthetics job at a time. After the job completes, a new job will start immediately. After the completions setting number of jobs completes, the CronJob configuration will start a new Kubernetes Job and reset the completions counter. Throughput will be limited, but slightly better. A single long-running synthetics job will block the processing of any other synthetics jobs of this type.

Example: parallelism=3, completions=24

Description: The runtime will execute 3 synthetics jobs at once. After any of these jobs complete, a new job will start immediately. After the completions setting number of jobs completes, the CronJob configuration will start a new Kubernetes Job and reset the completions counter. Throughput is much better with this or similar configurations. A single long-running synthetics job will have limited impact on the processing of other synthetics jobs of this type.

If synthetics jobs take longer to complete, fewer completions are needed to fill 5 minutes with jobs but more parallel pods will be needed. Similarly, if more synthetics jobs need to be processed per minute, more parallel pods will be needed. The parallelism setting directly affects how many synthetics jobs per minute can be run. Too small a value and the queue may grow. Too large a value and nodes may become resource constrained.

If your parallelism setting is working well to keep the queue at zero, setting a higher value for completions than what is calculated from 300 / avg job duration can help to improve efficiency in a couple of ways:

  • Accommodate variability in job durations such that at least 1 minute is filled with synthetics jobs, which is the minimum CronJob duration.
  • Reduce the number of completions cycles to minimize the "nearing the end of completions" inefficiency where the next set of completions can't start until the final job completes.

It's important to note that the completions value should not be too large or the CronJob will experience warning events like the following:

8m40s Warning TooManyMissedTimes cronjob/synthetics-node-browser-runtime too many missed start times: 101. Set or decrease .spec.startingDeadlineSeconds or check clock skew

Tip

Please keep in mind that New Relic is not liable for any modifications you make to the synthetics job manager files.
