
Google VertexAI monitoring integration

New Relic's integrations include an integration for reporting your Google Cloud Vertex AI data to our products. Here, we explain how to activate the integration and what data it collects.

Activate integration

To enable the integration, follow the standard procedures to connect your GCP service to New Relic.

Configuration and polling

You can change the polling frequency and filter data using configuration options.

Default polling information for the GCP Vertex AI integration:

  • New Relic polling interval: 5 minutes
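
To confirm that new samples are arriving at roughly this cadence, you can run a NRQL query through the NerdGraph API (api.newrelic.com/graphql). The following Python sketch is a minimal, hypothetical example: the run_nrql helper, account ID, and API key placeholders are ours, and the query assumes the GcpVertexAiEndpointSample event type described in the next section.

```python
# Minimal sketch: run a NRQL query through NerdGraph to check when the most
# recent GcpVertexAiEndpointSample event arrived. Assumes a valid User API key
# and account ID; "run_nrql" is our own helper name, not part of the integration.
import requests

NERDGRAPH_URL = "https://api.newrelic.com/graphql"

def run_nrql(account_id: int, api_key: str, nrql: str) -> list:
    """Execute a NRQL query via NerdGraph and return the result rows."""
    graphql = """
    query($accountId: Int!, $nrql: Nrql!) {
      actor {
        account(id: $accountId) {
          nrql(query: $nrql) {
            results
          }
        }
      }
    }
    """
    response = requests.post(
        NERDGRAPH_URL,
        headers={"API-Key": api_key, "Content-Type": "application/json"},
        json={"query": graphql, "variables": {"accountId": account_id, "nrql": nrql}},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["data"]["actor"]["account"]["nrql"]["results"]

if __name__ == "__main__":
    # Placeholder credentials -- replace with your own account ID and User API key.
    results = run_nrql(
        account_id=1234567,
        api_key="NRAK-XXXXXXXXXXXXXXXXXXXXXXXXXXX",
        nrql="SELECT latest(timestamp) FROM GcpVertexAiEndpointSample SINCE 30 minutes ago",
    )
    print(results)
```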

Find and use data

To find your integration data, go to one.newrelic.com > All capabilities > Infrastructure > GCP and select an integration.

Data is attached to the following event types:

| Entity | Event Type | Provider |
| --- | --- | --- |
| Endpoint | GcpVertexAiEndpointSample | GcpVertexAiEndpoint |
| Feature store | GcpVertexAiFeaturestoreSample | GcpVertexAiFeaturestore |
| Feature Online Store | GcpVertexAiFeatureOnlineStoreSample | GcpVertexAiFeatureOnlineStore |
| Location | GcpVertexAiLocationSample | GcpVertexAiLocation |
| Index | GcpVertexAiIndexSample | GcpVertexAiIndex |
| PipelineJob | GcpVertexAiPipelineJobSample | GcpVertexAiPipelineJob |

For more on how to use your data, see Understand and use integration data.
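
As a starting point, you can query any of the event types above in the query builder, or programmatically with the run_nrql helper sketched in the Configuration and polling section. The query below is only an illustration: entityName is a commonly present attribute on infrastructure sample events, but confirm the exact attribute names for these events in the data explorer.

```python
# Hypothetical starting-point query against the event types listed above.
# "entityName" is an assumed attribute name -- confirm the real attribute
# names for these events in the data explorer before relying on them.
endpoint_activity_nrql = """
FROM GcpVertexAiEndpointSample
SELECT count(*)
FACET entityName
SINCE 1 hour ago
"""

# Reusing the run_nrql helper sketched in the "Configuration and polling" section:
# rows = run_nrql(account_id=1234567, api_key="NRAK-...", nrql=endpoint_activity_nrql)
```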

Metric data

This integration collects GCP data for VertexAI.

VertexAI Endpoint data

| Metric | Unit | Description |
| --- | --- | --- |
| prediction.online.accelerator.duty_cycle | Percent | Average fraction of time over the past sample period during which the accelerator(s) were actively processing. |
| prediction.online.accelerator.memory.bytes_used | Bytes | Amount of accelerator memory allocated by the deployed model replica. |
| prediction.online.error_count | Count | Number of online prediction errors. |
| prediction.online.memory.bytes_used | Bytes | Amount of memory allocated by the deployed model replica and currently in use. |
| prediction.online.network.received_bytes_count | Bytes | Number of bytes received over the network by the deployed model replica. |
| prediction.online.network.sent_bytes_count | Bytes | Number of bytes sent over the network by the deployed model replica. |
| prediction.online.prediction_count | Count | Number of online predictions. |
| prediction.online.prediction_latencies | Milliseconds | Online prediction latency of the deployed model. |
| prediction.online.private.prediction_latencies | Milliseconds | Online prediction latency of the private deployed model. |
| prediction.online.replicas | Count | Number of active replicas used by the deployed model. |
| prediction.online.response_count | Count | Number of different online prediction response codes. |
| prediction.online.target_replicas | Count | Target number of active replicas needed for the deployed model. |
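
For example, you could approximate an online prediction error rate by combining prediction.online.error_count and prediction.online.prediction_count in one query. The snippet below is a sketch only: the provider. attribute prefix is an assumption about how these metrics appear on GcpVertexAiEndpointSample events, so verify the actual attribute names before using it.

```python
# Sketch of an error-rate query built from the Endpoint metrics above.
# The "provider." prefix is an assumed attribute naming convention -- verify the
# real attribute names on GcpVertexAiEndpointSample in the data explorer.
error_rate_nrql = """
FROM GcpVertexAiEndpointSample
SELECT sum(`provider.prediction.online.error_count`) /
       sum(`provider.prediction.online.prediction_count`) * 100
       AS 'Online prediction error rate (%)'
SINCE 1 day ago TIMESERIES
"""
```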

VertexAI Featurestore data

| Metric | Unit | Description |
| --- | --- | --- |
| featurestore.cpu_load | Percent | The average CPU load for a node in the Featurestore online storage. |
| featurestore.cpu_load_hottest_node | Percent | The CPU load for the hottest node in the Featurestore online storage. |
| featurestore.node_count | Count | The number of nodes for the Featurestore online storage. |
| featurestore.online_entities_updated | Count | Number of entities updated on the Featurestore online storage. |
| featurestore.online_serving.latencies | Milliseconds | Online serving latencies by EntityType. |
| featurestore.online_serving.request_bytes_count | Bytes | Request size by EntityType. |
| featurestore.online_serving.request_count | Count | Featurestore online serving count by EntityType. |
| featurestore.online_serving.response_size | Bytes | Response size by EntityType. |
| featurestore.storage.billable_processed_bytes | Bytes | Number of bytes billed for offline data processed. |
| featurestore.storage.stored_bytes | Bytes | Bytes stored in Featurestore. |
| featurestore.streaming_write.offline_processed_count | Count | Number of streaming write requests processed for offline storage. |
| featurestore.streaming_write.offline_write_delays | Seconds | Time (in seconds) from when the write API is called until the data is written to offline storage. |

VertexAI FeatureOnlineStore data

| Metric | Unit | Description |
| --- | --- | --- |
| featureonlinestore.online_serving.request_count | Count | Number of online serving requests by FeatureView. |
| featureonlinestore.online_serving.serving_bytes_count | Bytes | Serving response size by FeatureView. |
| featureonlinestore.online_serving.serving_latencies | Milliseconds | Online serving latencies by FeatureView. |
| featureonlinestore.running_sync | Count | Number of running syncs at a given point in time. |
| featureonlinestore.serving_data_ages | Seconds | Measure of the serving data age in seconds. |
| featureonlinestore.serving_data_by_sync_time | Count | Breakdown of data in the Feature Online Store by synced timestamp. |
| featureonlinestore.storage.bigtable_cpu_load | Percent | The average CPU load of nodes in the Feature Online Store. |
| featureonlinestore.storage.bigtable_cpu_load_hottest_node | Percent | The CPU load of the hottest node in the Feature Online Store. |
| featureonlinestore.storage.bigtable_nodes | Count | The number of nodes for the Feature Online Store (Bigtable). |
| featureonlinestore.storage.stored_bytes | Bytes | Bytes stored in the Feature Online Store. |

VertexAI Location data

| Metric | Unit | Description |
| --- | --- | --- |
| online_prediction_requests_per_base_model | Count | Number of requests per base model. |
| quota.online_prediction_requests_per_base_model.exceeded | Count | Number of attempts to exceed the limit on the quota metric. |
| quota.online_prediction_requests_per_base_model.limit | Count | Current limit on the quota metric. |
| quota.online_prediction_requests_per_base_model.usage | Count | Current usage on the quota metric. |
| executing_vertexai_pipeline_jobs | Count | Number of pipeline jobs being executed. |
| executing_vertexai_pipeline_tasks | Count | Number of pipeline tasks being executed. |

VertexAI Index data

| Metric | Unit | Description |
| --- | --- | --- |
| matching_engine.stream_update.datapoint_count | Count | Number of successfully upserted or removed datapoints. |
| matching_engine.stream_update.latencies | Milliseconds | The latency between when the user receives an UpsertDatapointsResponse or RemoveDatapointsResponse and when that update takes effect. |
| matching_engine.stream_update.request_count | Count | Number of stream update requests. |

VertexAI Pipeline Job data

| Metric | Unit | Description |
| --- | --- | --- |
| pipelinejob.duration | Seconds | Runtime seconds of the pipeline job being executed (from creation to end). |
| pipelinejob.task_completed_count | Count | Total number of completed pipeline tasks. |
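
To track pipeline run times, you could chart the duration metric from the table above. As with the earlier examples, the provider. prefix and the entityName facet are assumptions; check which attributes actually appear on GcpVertexAiPipelineJobSample events.

```python
# Sketch: average pipeline job duration over time, faceted by an assumed
# entity-name attribute. Verify attribute names on GcpVertexAiPipelineJobSample.
pipeline_duration_nrql = """
FROM GcpVertexAiPipelineJobSample
SELECT average(`provider.pipelinejob.duration`) AS 'Avg pipeline duration (s)'
FACET entityName
SINCE 1 week ago TIMESERIES
"""
```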
