The job manager can operate in a Docker container system environment, a Podman container system environment, or a Kubernetes container orchestration system environment. The job manager will auto-detect its environment to select the appropriate operating mode.
Synthetics job manager features
Because the synthetics job manager operates as a container instead of a virtual machine, we've simplified installation, getting started, and updating your job management and orchestration. It also comes with some additional functionality:
Ability to leverage a Docker container as a sandbox environment.
Kubernetes-specific features
The job manager introduces some additional Kubernetes-specific functionality:
Uses Kubernetes CronJob resources to run non-ping monitors
Doesn't require privileged access to the Docker socket
Supports hosted and on-premise Kubernetes clusters
Supports various container engines such as Docker and Containerd
Deployable via Helm charts as well as configuration YAMLs
Allows configurable resource allocation per job runtime (ping, API, and browser) for optimal resource management
Observability offered via the New Relic Kubernetes cluster explorer
System requirements and compatibility
To host synthetics job managers, your system must meet the minimum requirements for the chosen system environment.
Caution
Do not modify any synthetics job manager files. New Relic is not liable for any modifications you make. For more information, contact your account representative or a New Relic technical sales rep.
Compatibility and requirements (Docker and Podman)
Operating system
Linux kernel: 3.10 or higher; macOS: 10.11 or higher; Windows: Windows 10 64-bit or higher
You must also configure Docker to run Linux containers in order for synthetics job managers to work on Windows systems.
The Docker synthetics job manager is not designed for use with container orchestrators such as AWS ECS, Docker Swarm, Apache Mesos, or Azure Container Instances. Running the Docker synthetics job manager in a container orchestrator results in unexpected issues because it functions as an orchestrator itself. If you're using container orchestration, see our Kubernetes synthetics job manager requirements.
The Podman synthetics job manager is not designed for use with container orchestrators such as AWS ECS, Docker Swarm, Apache Mesos, or Azure Container Instances. Running the Podman synthetics job manager in a container orchestrator results in unexpected issues because it functions as an orchestrator itself. If you're using container orchestration, see our Kubernetes synthetics job manager requirements.
Compatibility and requirements (Kubernetes)
Operating system
Linux kernel: 3.10 or higher; macOS: 10.11 or higher
Linux containers, including job manager, only run on Linux K8s nodes.
Processor
A modern, multi-core CPU
Synthetics job manager pod
CPU (vCPU/Core): 0.5 up to 0.75; Memory: 800Mi up to 1600Mi
Resources allocated to a synthetics job manager pod are user configurable.
Ping runtime pod
CPU (vCPU/Core): 0.5 up to 0.75; Memory: 500Mi up to 1Gi
Additional considerations:
Resources allocated to a ping runtime pod are user configurable.
The maximum limit-request resource ratio for both CPU and memory is 2.
Node.js API runtime pod
CPU (vCPU/Core): 0.5 up to 0.75; Memory: 1250Mi up to 2500Mi
Additional considerations:
Resources allocated to a Node.js API runtime pod are user configurable.
The maximum limit-request resource ratio for both CPU and memory is 2.
Node.js browser runtime pod
CPU (vCPU/Core): 1.0 up to 1.5; Memory: 2000Mi up to 3000Mi
Additional considerations:
Resources allocated to a Node.js browser runtime pod are user configurable.
The maximum limit-request resource ratio for both CPU and memory is 2.
To view versions, dependencies, default values for how many runtime pods start with each synthetics job manager and more, please see Show help and examples below.
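As an illustration of runtime-based resource configuration, a Helm install can override the chart's resource values. The value paths below are a sketch only: run helm show values against your chart version (shown later in this document) to confirm the actual keys, and treat YOUR_REPO_NAME, YOUR_NAMESPACE, and the private location key as placeholders.

# Illustrative only: confirm the value keys with "helm show values" first.
# Keep the limit-request ratio at or below 2, per the table above.
helm upgrade --install synthetics-job-manager YOUR_REPO_NAME/synthetics-job-manager \
  --namespace YOUR_NAMESPACE --create-namespace \
  --set synthetics.privateLocationKey=YOUR_PRIVATE_LOCATION_KEY \
  --set ping-runtime.resources.requests.cpu=500m \
  --set ping-runtime.resources.limits.cpu=750m \
  --set ping-runtime.resources.requests.memory=500Mi \
  --set ping-runtime.resources.limits.memory=1Gi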
Private location key
Before launching synthetics job managers, you must have a private location key. Your synthetics job manager uses the key to authenticate against New Relic and run monitors associated with that private location.
In the Private locations index, locate the private location you want your synthetics job manager to be assigned to.
Note the key associated with the private location, indicated by the key icon.
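For example, in a Docker container system environment the key is supplied as an environment variable when you launch the container. This is a minimal sketch only: the container name is a placeholder, the sandboxing socket mount described later in this document is omitted for brevity, and the full launch options are covered in the Docker environment configuration docs.

# Minimal sketch: pass the private location key at launch (placeholders in caps).
docker run \
  --name YOUR_CONTAINER_NAME \
  -e "PRIVATE_LOCATION_KEY=YOUR_PRIVATE_LOCATION_KEY" \
  -d --restart unless-stopped \
  newrelic/synthetics-job-manager:latest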
Install, update, and configure synthetics job manager
If you're running more than one Docker or Podman private location container on the same host, you'll have port conflicts. To avoid this port contention, make sure you do the following when you start setting up job managers:
Unless you are hosting the images in a local image repository, you need to allow connections to docker.io through your firewall so that Docker or Podman can pull the synthetics job manager and synthetics runtime images. When the synthetics job manager starts up, the runtime images are pulled automatically. See Docker environment configuration, Podman environment configuration, and Kubernetes environment configuration for details on how to set up a local repository and the runner registry endpoint.
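One quick way to confirm that your firewall allows the required registry access is to pull the job manager image manually; the tag shown here is illustrative.

# Docker host
docker pull docker.io/newrelic/synthetics-job-manager:latest
# Podman host
podman pull docker.io/newrelic/synthetics-job-manager:latest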
Check if the synthetics job manager pod is up and running:
kubectl get -n YOUR_NAMESPACE pods
Once the status of each pod shows as Running, your synthetics job manager is up and ready to run monitors assigned to your private location.
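If a pod stays in a state other than Running, standard kubectl inspection commands can help you find out why (the namespace is a placeholder):

kubectl describe -n YOUR_NAMESPACE pods
kubectl get events -n YOUR_NAMESPACE --sort-by=.metadata.creationTimestamp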
Stop or delete the synthetics job manager
On a Docker or Podman container system environment, use the respective stop procedure to stop the synthetics job manager. On a Kubernetes container orchestration system environment, use the Kubernetes delete procedure to stop the synthetics job manager from running.
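For example, on a Docker host the standard container lifecycle commands apply; Podman accepts the same syntax. The container name is a placeholder for whatever name you used at launch.

docker stop YOUR_CONTAINER_NAME
docker rm YOUR_CONTAINER_NAME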
Delete the namespace set up for the synthetics job manager in your Kubernetes cluster:
kubectl delete namespace YOUR_NAMESPACE
Sandboxing and dependencies
Sandboxing and dependencies are applicable to the synthetics job manager in a Docker or Podman container system environment.
The synthetics job manager runs in Docker and is able to leverage Docker as a sandboxing technology. This ensures complete isolation of the monitor execution, which improves security, reliability, and repeatability. Every time a scripted or browser monitor is executed, the synthetics job manager creates a brand new Docker container to run it in using the matching runtime.
The synthetics job manager container needs to be configured to communicate with the Docker engine in order to spawn additional runtime containers. Each spawned container is then dedicated to run a check associated with the synthetic monitor running on the private location the synthetics job manager container is associated with.
There is a crucial dependency at launch. To enable sandboxing, ensure that your writable Docker UNIX socket is mounted at /var/run/docker.sock, or set the DOCKER_HOST environment variable. For more information, see Docker's Daemon socket option.
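For instance, a Docker launch that enables sandboxing mounts the socket read-write. This is a sketch with the other launch options elided; see the Docker environment configuration docs for the full command.

docker run ... -v /var/run/docker.sock:/var/run/docker.sock:rw ... newrelic/synthetics-job-manager:latest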
Caution
Core count on the host determines how many runtime containers the synthetics job manager can run concurrently on the host. Since memory requirements are scaled to the expected count of runtime containers, we recommend not running multiple synthetics job managers on the same host to avoid resource contention.
The synthetics job manager runs in Podman and is able to leverage Podman as a sandboxing technology. This ensures complete isolation of the monitor execution, which improves security, reliability, and repeatability. Every time a scripted or browser monitor is executed, the synthetics job manager creates a brand new Podman container to run it in using the matching runtime.
The synthetics job manager container needs to be configured to communicate with the Podman engine in order to spawn additional runtime containers. Each spawned container is then dedicated to run a check associated with the synthetic monitor running on the private location the synthetics job manager container is associated with.
There is a crucial dependency at launch. To enable sandboxing, create a Podman API service, which sets up Podman to provide HTTP API access:
mkdir -p ~/.config/systemd/user
touch ~/.config/systemd/user/podman-api.service
vi ~/.config/systemd/user/podman-api.service
Define the service to expose the Podman API on port 8000:
[Unit]
Description=Podman API Service
After=default.target
[Service]
Type=simple
ExecStart=/usr/bin/podman system service -t 0 tcp:0.0.0.0:8000
Restart=on-failure
[Install]
WantedBy=default.target
Enable and start the Podman API service:
systemctl --user daemon-reload
systemctl --user enable podman-api.service
systemctl --user start podman-api.service
systemctl --user status podman-api.service
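To verify that the service is responding, you can hit the Docker-compatible ping endpoint that the Podman API service also serves; the port matches the unit file above, and the expected response body is OK.

curl http://localhost:8000/_ping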
Caution
Core count on the host determines how many runtime containers the synthetics job manager can run concurrently on the host. Since memory requirements are scaled to the expected count of runtime containers, we recommend not running multiple synthetics job managers on the same host to avoid resource contention.
Security, sandboxing, and running as non-root
By default, the software running inside a synthetics job manager is executed with root user privileges. This is suitable for most scenarios, as the execution is sandboxed.
If your environment requires you to run the synthetics job manager as a non-root user, additional steps are required. In the following example, the non-root user is my_user.
Ensure that my_user can use the Docker engine on the host:
Verify that my_user has read/write permissions to all the directories and volumes passed to the synthetics job manager. To set these permissions, use the chmod command (see the example commands after this list).
Get the uid of my_user for use in the run command: id -u my_user.
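The following commands sketch one common way to satisfy these conditions on a Linux host. The docker group, the example path, and my_user are illustrative; adapt them to your environment.

# Let my_user talk to the Docker engine (on hosts that grant access via the docker group).
sudo usermod -aG docker my_user
# Give my_user read/write access to a directory passed to the job manager (example path).
sudo chown -R my_user /path/to/your/volume
chmod -R u+rw /path/to/your/volume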
Once these conditions are met, use the option "-u <uid>:<gid>" when launching synthetics job manager:
docker run ... -u 1002 ...
OR
docker run ... -u 1002 -e DOCKER_HOST=http://localhost:2375 ...
Understand your Docker, Podman, or Kubernetes environments
Below is additional information about maintaining and understanding the job manager's container environment. View license information, understand the job manager's network settings, and check out the Docker image repo.
For a synthetics job manager in the Kubernetes container orchestration system environment, the following Helm show commands can be used to view the chart.yaml and the values.yaml, respectively:
helm show chart YOUR_REPO_NAME/synthetics-job-manager
helm show values YOUR_REPO_NAME/synthetics-job-manager
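If you haven't added a chart repository yet, the job manager chart is published in New Relic's Helm repository. The repository alias is a placeholder; substitute your own mirror if you host the chart locally.

helm repo add YOUR_REPO_NAME https://helm-charts.newrelic.com
helm repo update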
Some of our open-source software is listed under multiple software licenses, and in that case we have listed the license we've chosen to use. Our license information is also available in our licenses documentation.
For both Docker and Kubernetes, the synthetics job manager and its runtime containers will inherit network settings from the host. For an example of this on a Docker container system environment, see the Docker site.
A bridge network is created for communication between the synthetics job manager and runtime containers. This means networking command options like --network and --dns passed to the synthetics job manager container at launch (such as through Docker run commands on a Docker container system environment) are not inherited or used by the runtime containers.
When these networks are created, they pull from the default IP address pool configured for the daemon. For an example of this on a Docker container system environment, see the Docker site.
In the case of Podman, we don't use a bridge network for communication between the synthetics job manager and runtime containers; instead, we use a Podman pod. All containers in a Podman pod share the same network namespace, which means they share the same IP address within that pod. Although the containers share the same IP, their services are exposed on different ports.
A single synthetics job manager Docker image serves the Docker, Podman, and Kubernetes environments. The Docker image is hosted on Docker Hub. To make sure your Docker image is up to date, see the Docker Hub newrelic/synthetics-job-manager repository.
Additional considerations for synthetics job manager connection
Connection
Description
Synthetics job managers without Internet access
A synthetics job manager can operate without access to the internet, but with some exceptions. The synthetics job manager needs to be able to contact the "synthetics-horde.nr-data.net" domain. This is necessary for it to report data to New Relic and to receive monitors to execute. Ask your network administrator whether this is a problem and how to set up exceptions (see the connectivity check example after this table).
Communicate with synthetics via a proxy
To set up communication with New Relic by proxy, use the environment variables named HORDE_API_PROXY*.
Arguments passed at launch
This applies to Docker and Podman container environments only. Arguments passed to the synthetics job manager container at launch do not get passed on to the runtime containers spawned by the synthetics job manager. Docker and Podman have no concept of "inheritance" or a "hierarchy" of containers, and we don't copy the configuration that is passed from the synthetics job manager to the runtime containers. However, in the case of Podman, arguments passed at the pod level are shared between the synthetics job manager and the runtime containers within the pod. The only shared configuration between them is the one set at the Docker daemon level.
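As a quick connectivity check from a restricted host, you can confirm that the synthetics endpoint resolves and accepts connections. The HTTP status returned for an unauthenticated request may vary; a DNS failure, connection refusal, or timeout points to a firewall or DNS problem.

curl -sSv -o /dev/null https://synthetics-horde.nr-data.net/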