Our VMware Tanzu integration helps you understand the health and performance of your Tanzu environment. Query data from different Tanzu instances and cloud providers, and go from high level views down to the most granular data, such as the last duration of the garbage collector pause.
VMware Tanzu data visualized in a New Relic dashboard.
The integration uses Loggregator to collect metrics and events generated by all Tanzu platform components and applications that run on cells. It connects to our platform by instrumenting the VMware Tanzu Application Service (TAS) and the Cloud Foundry Application Runtime (CFAR).
Tip
To collect data from VMware PKS, use the New Relic Cluster Monitoring integration.
Features
With the New Relic VMware Tanzu integration you can:
- Monitor the health of your deployments using our extensive collection of charts and dashboards.
- Set alerts based on any metrics collected from Firehose.
- Retrieve logs and metrics related to user apps deployed on the platform.
- Stream metrics from platform components and health metrics from BOSH-deployed VMs.
- Filter logs and metrics by configuring the nozzle during and after the installation.
- Scale the number of instances of the nozzle to support different volumes of data.
- Use the data retrieved to monitor Key Performance and Key Capacity Scaling indicators.
- Instrument and monitor multiple VMware Tanzu instances using the same account.
- Optionally send LogMessage and HttpStartStop envelopes to New Relic's Logs UI, including logs in context support for LogMessage envelopes.
Compatibility and requirements
Our integration is compatible with VMware Tanzu Application Service version 2.10 to 3.0, and Ops Manager version 2.8 to 3.0. BOSH stemcells must be based on Ubuntu Xenial.
Before installing the integration, make sure that you have a VMware Tanzu account.
Tip
This integration sends custom events and logs. If you find you are reaching the custom event data collection and data retention limits of your subscription, please reach out to your New Relic representative.
Install and activate
The quickest way to install the VMware Tanzu integration is by importing the nr-firehose-nozzle tile into Ops Manager. For more information, see the VMware Tanzu documentation.
You can also deploy the nozzle as a standard application, edit the manifest, and run cf push from the command line; see how to build and deploy the integration in our GitHub repository.
Find and use data
Once you install and activate the VMware Tanzu integration, you can find the data and predefined charts in one.newrelic.com > Infrastructure > Third-party services > VMware Tanzu dashboard. You can query the data to create custom charts and dashboards, and add them to your account.
If you collect data from multiple Tanzu environments, use the pcf.domain and pcf.IP attributes with WHERE or FACET to discriminate between events from different Tanzu deployments.
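For example, this NRQL query uses the pcf.domain attribute (listed under Decoration fields below) to break down event counts by deployment:

```sql
SELECT count(*) FROM PCFValueMetric FACET pcf.domain SINCE 1 hour ago
```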
Important
Tanzu metrics are aggregated to reduce memory and network consumption. However, you can increase the number of samples by adjusting the drain interval in the configuration.
Tip
Many prebuilt dashboards and charts displaying VMware Tanzu data are available upon request. Contact your New Relic representative to get them added to your New Relic account.
Set up an alert
VMware Tanzu provides a list of indicators on key performance and key capacity scaling, together with warning and critical values that you can monitor using NRQL alert conditions.
Here is a sample NRQL query that sets up an alert on memory consumption related to the system space:
SELECT average(app.memory.used) FROM PCFContainerMetric WHERE metric.name = 'app.memory' AND app.space.name = 'system' FACET app.instance.uid
Here is the resulting chart:
For more information on NRQL queries and how to set up different notification channels for alerts, see Create alert conditions for NRQL queries.
Important
Creating alert conditions from Infrastructure > Settings is currently not supported for this integration.
Metric data
The VMware Tanzu integration provides the following metric data:
- PCFContainerMetric
- PCFCounterEvent
- PCFHttpStartStop
- PCFLogMessage
- PCFValueMetric
- Shared fields (Aggregation, App, Decoration)
PCFContainerMetric
Resource usage of an app in a container. Contains all the shared Aggregation, App, and Decoration fields.
If the value of metric.name is app.disk, two additional fields are available:
Name | Description |
---|---|
| Total available disk in bytes |
| Disk currently used as a percentage |
If the value of metric.name is app.memory, two additional fields are available:
Name | Description |
---|---|
| Total available memory in bytes |
| Memory currently used as a percentage |
PCFCounterEvent
Increment of a counter. Contains all the shared Aggregation and Decoration fields.
Name | Description |
---|---|
| Current value of the counter |
PCFHttpStartStop
The whole lifecycle of an HTTP request. Contains all the shared Decoration fields. These events can optionally be sent to New Relic for visualization in the Logs UI.
Name | Description |
---|---|
| Length of response (in bytes) |
| Duration of the HTTP request (in milliseconds) |
| Method of the request |
| Role of the emitting process in the request cycle (server or client) |
| Remote address of the request. For a server, this should be the origin of the request |
| ID for tracking the lifecycle of the request |
| UNIX timestamp (in nanoseconds) when the request was sent (by a client) or received (by a server) |
| Status code returned with the response to the request |
| UNIX timestamp (in nanoseconds) when the request was received |
| Destination of the request |
| Contents of the UserAgent header on the request |
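For example, to chart the volume of HTTP request lifecycles captured by the nozzle over time, you could run:

```sql
SELECT count(*) FROM PCFHttpStartStop TIMESERIES SINCE 30 minutes ago
```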
PCFLogMessage
Log lines and associated metadata. Contains all the shared Aggregation, App, and Decoration fields. These events can optionally be sent to New Relic for visualization in the Logs UI.
Name | Description |
---|---|
| Application that emitted the message (or to which the application is related) |
| Log message |
| Type of the message ( |
| Instance that emitted the message |
| Source of the message. For Cloud Foundry, this can be |
| UNIX timestamp (in nanoseconds) when the log was written |
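For example, since PCFLogMessage shares the App fields, you can break down log volume by the space an application runs in:

```sql
SELECT count(*) FROM PCFLogMessage FACET app.space.name SINCE 1 hour ago
```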
PCFValueMetric
A flat list of key-value pairs fetched from Loggregator. For an extensive list, see the official documentation.
Contains all the shared Aggregation and Decoration fields.
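To explore which value metrics your deployment emits, you can list the distinct metric names (note that metric.name may contain hundreds of different values):

```sql
SELECT uniques(metric.name) FROM PCFValueMetric SINCE 1 hour ago
```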
Fields shared across metric data
VMware Tanzu metrics contain shared data fields in the following categories:
Aggregation fields
Fields generated by the aggregation process.
Shared by PCFCounterEvent, PCFContainerMetric, and PCFValueMetric.
Name | Description |
---|---|
| Maximum value of the metric recorded by the nozzle from the last aggregated metric sent |
| Minimum value of the metric recorded by the nozzle from the last aggregated metric sent |
| Name of the reported metric Note: the field may contain hundreds of different values |
| Last received value of the metric |
| Number of samples of the metric received by the nozzle since the last aggregated metric sent |
| Sum of all the metric values recorded by the nozzle from the last aggregated metric sent |
| Metric type (for example, |
| Metric unit. For example, |
App fields
Fields that describe the source of the data.
Shared by PCFContainerMetric and PCFLogMessage.
Name | Description |
---|---|
| Status of the application |
| ID of the application instance |
| Number of instances required |
| Name of the application |
| Organization the application belongs to |
| Space where the application is running |
Decoration fields
Fields that contain information related to the agent, the PCF environment, and a timestamp.
Shared by all data types.
Name | Description |
---|---|
| Nozzle ID |
| Nozzle IP address |
| Agent subscription ID, registered at the firehose |
| Version of the nozzle |
| API URL of your Tanzu environment |
| IP address (used to uniquely identify source) |
| Deployment name (used to uniquely identify source) |
| API URL of your Tanzu environment |
| Index of job (used to uniquely identify the source) |
| Job name (used to uniquely identify the source) |
| Unique description of the origin of the event |
| UNIX timestamp (in milliseconds) of the event. Example: |
| Type of wrapped event |
| Source of the custom event |