Our VMware Tanzu integration helps you understand the health and performance of your Tanzu environment. Query data from different Tanzu instances and cloud providers, and go from high level views down to the most granular data, such as the last duration of the garbage collector pause.
VMware Tanzu data visualized in a New Relic One dashboard.
The integration uses Loggregator to collect metrics and events generated by all Tanzu platform components and applications that run on cells. It connects to our platform by instrumenting the VMware Tanzu Application Service (TAS) and the Cloud Foundry Application Runtime (CFAR).
To collect data from VMware PKS, use the New Relic Cluster Monitoring integration.
With the New Relic VMware Tanzu integration you can:
- Monitor the health of your deployments using our extensive collection of charts and dashboards.
- Set alerts based on any metrics collected from Firehose.
- Retrieve logs and metrics related to user apps deployed on the platform.
- Stream metrics from platform components and health metrics from BOSH-deployed VMs.
- Filter logs and metrics by configuring the nozzle during and after the installation.
- Scale the number of instances of the nozzle to support different volumes of data.
- Use the data retrieved to monitor Key Performance and Key Capacity Scaling indicators.
- Instrument and monitor multiple VMware Tanzu instances using the same account.
- Optionally send LogMessage and HttpStartStop envelopes to New Relic Logs, including logs in context support for LogMessage envelopes.
Our integration is compatible with VMware Tanzu (Pivotal Platform) version 2.5 to 2.11, and Ops Manager version 2.5 to 2.10. BOSH stemcells must be based on Ubuntu Xenial.
Before installing the integration, make sure you have a VMware Tanzu account.
This integration sends custom events and logs. If you find you are reaching the custom event data collection and data retention limits of your subscription, please reach out to your New Relic representative.
You can also deploy the nozzle as a standard application: edit the manifest and run cf push from the command line. See how to build and deploy the integration in our GitHub repository.
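If you push the nozzle as a regular app, the manifest might look roughly like the following. This is a hypothetical sketch: the application name and the environment variable names shown here are illustrative assumptions, not the exact settings the nozzle expects; check the GitHub repository for the authoritative manifest and variable names.

```yaml
# Hypothetical manifest.yml sketch for deploying the nozzle with `cf push`.
# The env variable names below are assumptions -- confirm them against
# the nozzle's GitHub repository before use.
applications:
  - name: newrelic-firehose-nozzle
    memory: 512M
    instances: 2            # scale instances to match your Firehose volume
    env:
      NRF_NEWRELIC_INSERT_KEY: <your-insert-key>
      NRF_CF_API_URL: https://api.sys.example.com
      NRF_FIREHOSE_SUBSCRIPTION_ID: newrelic.firehose
```

Scaling the `instances` value is how the nozzle supports different volumes of Firehose data, as noted in the capability list above.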
Once you install and activate the VMware Tanzu integration, you can find the data and predefined charts in one.newrelic.com > Infrastructure > Third-party services > VMware Tanzu dashboard. You can query the data to create custom charts and dashboards, and add them to your account.
If you collect data from multiple Tanzu environments, use the pcf.IP attribute with FACET to discriminate between events from different Tanzu deployments.
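For example, a query like the following breaks event counts out per deployment. This is a sketch: PCFContainerMetric and pcf.IP both appear in this document, but adapt the event type and time window to your own data.

```sql
SELECT count(*) FROM PCFContainerMetric FACET pcf.IP SINCE 30 minutes ago
```

Each facet corresponds to one Tanzu deployment, since pcf.IP uniquely identifies the source environment.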
Tanzu metrics are aggregated to reduce memory and network consumption. However, you can increase the number of samples by adjusting the drain interval in the configuration.
Many prebuilt dashboards and charts displaying VMware Tanzu data are available upon request. Contact your New Relic representative to get them added to your New Relic account.
Here is a sample NRQL query that sets up an alert on memory consumption for apps in the system space:
SELECT average(app.memory.used) FROM PCFContainerMetric WHERE metric.name = 'app.memory' AND app.space.name = 'system' FACET app.instance.uid
Here is the resulting chart in New Relic One:
For more information on NRQL queries and how to set up different notification channels for alerts, see Create alert conditions for NRQL queries.
Creating alert conditions from Infrastructure > Settings is currently not supported for this integration.
The VMware Tanzu integration provides the following metric data:
- Shared fields (Aggregation, App, Decoration)
If the value of metric.name is app.disk, two additional fields are available:
- Total available disk, in bytes
- Disk currently used, as a percentage
If the value of metric.name is app.memory, two additional fields are available:
- Total available memory, in bytes
- Memory currently used, as a percentage
Current value of the counter
- Length of the response, in bytes
- Duration of the HTTP request, in milliseconds
- Method of the request
- Role of the emitting process in the request cycle (server or client)
- Remote address of the request. For a server, this should be the origin of the request
- ID for tracking the lifecycle of the request
- UNIX timestamp (in nanoseconds) when the request was sent (by a client) or received (by a server)
- Status code returned with the response to the request
- UNIX timestamp (in nanoseconds) when the request was received
- Destination of the request
- Contents of the UserAgent header on the request
- Application that emitted the message (or to which the message is related)
- Type of the message (OUT or ERR)
- Instance that emitted the message
- Source of the message. For Cloud Foundry, this can be, for example, APP or RTR
- UNIX timestamp (in nanoseconds) when the log was written
A flat list of key-value pairs fetched from Loggregator. For an extensive list, see the official documentation.
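The log fields above can be queried like any other event attributes. The following sketch assumes the nozzle stores LogMessage envelopes under an event type named PCFLogMessage and an app.name attribute; both names are assumptions here, so verify them against the event types present in your account before relying on this query.

```sql
SELECT count(*) FROM PCFLogMessage FACET app.name SINCE 1 hour ago
```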
VMware Tanzu metrics contain shared data fields in the following categories:
Fields generated by the aggregation process.
- Maximum value of the metric recorded by the nozzle since the last aggregated metric was sent
- Minimum value of the metric recorded by the nozzle since the last aggregated metric was sent
- Name of the reported metric. Note: this field may contain hundreds of different values
- Last received value of the metric
- Number of samples of the metric received by the nozzle since the last aggregated metric was sent
- Sum of all the metric values recorded by the nozzle since the last aggregated metric was sent
- Metric type (for example, gauge)
- Metric unit (for example, bytes)
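Because the nozzle aggregates samples between drains, you can reconstruct a true average from the aggregation fields rather than averaging the last values. The field names metric.sum and metric.samples.count in this sketch are assumptions based on the descriptions above (sum of all values, and number of samples); confirm the exact attribute names in your data before using it.

```sql
SELECT sum(metric.sum) / sum(metric.samples.count)
FROM PCFContainerMetric
WHERE metric.name = 'app.memory'
FACET app.name
```

Dividing the summed values by the summed sample counts weights each drain interval by how many samples it actually contained.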
Fields that describe the source of the data.
- Status of the application
- ID of the application instance
- Number of instances required
- Name of the application
- Organization the application belongs to
- Space where the application is running
Fields that contain information related to the agent, the PCF environment, and a timestamp.
Shared by all data types.
- Nozzle IP address
- Agent subscription ID, registered at the Firehose
- Version of the nozzle
- API URL of your Tanzu environment
- IP address (used to uniquely identify the source)
- Deployment name (used to uniquely identify the source)
- API URL of your Tanzu environment
- Index of the job (used to uniquely identify the source)
- Job name (used to uniquely identify the source)
- Unique description of the origin of the event
- UNIX timestamp (in milliseconds) of the event
- Type of the wrapped event
- Source of the custom event