PostgreSQL monitoring integration

New Relic Infrastructure's PostgreSQL integration reports data from your PostgreSQL instance to New Relic Infrastructure. This document explains how to install and activate the PostgreSQL integration, and describes the data collected.

Access to this feature depends on your subscription level. Requires Infrastructure Pro.

Features

The New Relic PostgreSQL on-host integration collects inventory and metric data from your PostgreSQL instance and sends it to the New Relic platform, where you can aggregate and visualize key performance metrics. Instance-, database-, and cluster-level data can help you more easily find the source of any problems that arise.

Compatibility and requirements

To use the PostgreSQL integration, ensure your system meets these requirements:

Install

On-host integrations do not automatically update. For best results, regularly update both the integration package and the Infrastructure agent.

To install the PostgreSQL integration:

  1. Follow the procedures to install the Infrastructure integration package using the file name nri-postgresql.
  2. From the command line, change the directory to the integrations folder:

    cd /etc/newrelic-infra/integrations.d
  3. Create a copy of the sample configuration file by running:

    sudo cp postgresql-config.yml.sample postgresql-config.yml
  4. Edit the postgresql-config.yml file using the configuration settings described below.
  5. Before you restart the Infrastructure agent, create a user with READ permissions on the required functions.
  6. Restart the Infrastructure agent.

PostgreSQL users and permissions

Create a user with READ permissions on:

  • pg_stat_database
  • pg_stat_database_conflicts
  • pg_stat_bgwriter

You can complete this step before or after you configure the postgresql-config.yml file. To create the user for the PostgreSQL integration:

-- Replace 'PASSWORD' with a secure password for the monitoring user.
CREATE USER new_relic WITH PASSWORD 'PASSWORD';
GRANT SELECT ON pg_stat_database TO new_relic;
GRANT SELECT ON pg_stat_database_conflicts TO new_relic;
GRANT SELECT ON pg_stat_bgwriter TO new_relic;

Configure

Use the PostgreSQL integration's postgresql-config.yml configuration file to provide the required login credentials and to configure how data is collected. For an example configuration, see the example config file.

Commands

The postgresql-config.yml file provides one command:

  • all_data: collects both inventory and metric data for the configured PostgreSQL instance.

Arguments

The all_data command accepts the following arguments:

  • username: The user name for the PostgreSQL connection. Required.

  • password: The password for the PostgreSQL connection. Required.

  • hostname: The hostname for the PostgreSQL connection. Default: localhost.

  • port: The port where PostgreSQL is running. Default: 5432.

  • collection_list: A JSON object describing the list of entities to collect, using this format:

    '{"database_name":{"schema_name":{"table_name":["index_name"]}}}'
  • pgbouncer: Collect pgbouncer metrics. Default: false.

  • enable_ssl: Determines if SSL is enabled. If true, ssl_cert_location and ssl_key_location are required. Default: false.

  • trust_server_certificate: If true, the server certificate is not verified for SSL. If false, the server certificate identified in ssl_root_cert_location is verified. Default: false.

  • ssl_root_cert_location: Absolute path to PEM-encoded root certificate file. Required if trust_server_certificate is false.

  • ssl_cert_location: Absolute path to PEM-encoded client certificate file. Required if enable_ssl is true.

  • ssl_key_location: Absolute path to PEM-encoded client key file. Required if enable_ssl is true.

  • timeout: Maximum wait for connection, in seconds. Set to 0 for no timeout. Default: 10.
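
The nested collection_list value can be error-prone to write by hand. As an illustration only (not part of the integration), the database → schema → table → index nesting can be built and serialized with a small script; all names below are placeholders:

```python
import json

# Hypothetical names; replace with your own database, schema, tables, and indexes.
collection_list = {
    "postgres": {                                     # database
        "public": {                                   # schema
            "pg_table1": ["pg_index1", "pg_index2"],  # table with two indexes
            "pg_table2": [],                          # table with no indexes to monitor
        }
    }
}

# Compact separators produce the JSON string expected in postgresql-config.yml.
print(json.dumps(collection_list, separators=(",", ":")))
```

Paste the printed string into the collection_list argument, wrapped in single quotes.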

Example postgresql-config.yml file configuration:

PostgreSQL configuration file
integration_name: com.newrelic.postgresql

instances:
  # A name for the collection
  - name: sample_postgres
    # The only supported command is all_data.
    command: all_data
    arguments:
      # The username for the Postgres instance. Required.
      username: postgres
      # The password for the Postgres instance. Required.
      password: pass
      # The hostname for the Postgres instance. Defaults to localhost.
      hostname: psql-sample.localnet
      # The port of the Postgres instance. If PgBouncer is being used,
      # use the port it is running on. Defaults to 5432.
      port: 6432
      # The JSON object which contains the entities to monitor. The nesting
      # levels of JSON are database name -> schema name -> table name -> index name.
      collection_list: '{"postgres":{"public":{"pg_table1":["pg_index1","pg_index2"],"pg_table2":[]}}}'
      # True if SSL is to be used. Defaults to false.
      enable_ssl: true
      # True if the SSL certificate should be trusted without validating.
      # Setting this to true may open up the monitoring service to MITM attacks.
      # Defaults to false.
      trust_server_certificate: false
      ssl_root_cert_location: /etc/newrelic-infra/root_cert.crt
      ssl_cert_location: /etc/newrelic-infra/postgresql.crt
      ssl_key_location: /etc/newrelic-infra/postgresql.key
      timeout: 10
    # Used to label your data.
    labels:
      env: production
      role: postgresql

Find and use data

To find your integration data in Infrastructure, go to infrastructure.newrelic.com > Integrations > On-host integrations and select one of the PostgreSQL integration links.

To query and visualize your PostgreSQL data in New Relic Insights, use the event types described in the Metric data section below:

Metric data

The PostgreSQL integration collects the following database metric attributes. Some attributes apply to all PostgreSQL event types. Metric names are prefixed with a category indicator and a period, such as db. or index.

To include these attributes in your Insights query, use the PostgresqlDatabaseSample event type.

PostgresqlDatabaseSample attributes:

  • db.connections: Number of backends currently connected to this database.
  • db.commitsPerSecond: Committed transactions per second.
  • db.rollbacksPerSecond: Transactions rolled back per second.
  • db.readsPerSecond: Disk blocks read in this database per second.
  • db.bufferHitsPerSecond: Number of times per second disk blocks were found already in the buffer cache, so that a read was not necessary. This only includes hits in the PostgreSQL buffer cache, not the operating system's file system cache.
  • db.rowsReturnedPerSecond: Rows returned by queries per second.
  • db.rowsFetchedPerSecond: Rows fetched by queries per second.
  • db.rowsInsertedPerSecond: Rows inserted per second.
  • db.rowsUpdatedPerSecond: Rows updated per second.
  • db.rowsDeletedPerSecond: Rows deleted per second.
  • db.conflicts.tablespacePerSecond: Queries in this database canceled due to dropped tablespaces, per second.
  • db.conflicts.locksPerSecond: Queries in this database canceled due to lock timeouts, per second.
  • db.conflicts.snapshotPerSecond: Queries in this database canceled due to old snapshots, per second.
  • db.conflicts.bufferpinPerSecond: Queries in this database canceled due to pinned buffers, per second.
  • db.conflicts.deadlockPerSecond: Queries in this database canceled due to deadlocks, per second.
  • db.tempFilesCreatedPerSecond: Temporary files created by queries in this database, per second. All temporary files are counted, regardless of why the temporary file was created (for example, sorting or hashing), and regardless of the log_temp_files setting.
  • db.tempWrittenInBytesPerSecond: Total amount of data written to temporary files by queries in this database, per second. All temporary files are counted, regardless of why the temporary file was created, and regardless of the log_temp_files setting.
  • db.deadlocksPerSecond: Deadlocks detected in this database per second.
  • db.readTimeInMillisecondsPerSecond: Time spent reading data file blocks by backends in this database, in milliseconds.
  • db.writeTimeInMillisecondsPerSecond: Time spent writing data file blocks by backends in this database, in milliseconds.
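
The disk-read and buffer-hit rates can be combined into a buffer cache hit ratio, a standard PostgreSQL tuning calculation. This is an illustrative sketch, not a metric the integration reports directly; the sample rates are invented:

```python
def buffer_cache_hit_ratio(buffer_hits_per_second: float,
                           reads_per_second: float) -> float:
    """Fraction of block requests served from PostgreSQL's buffer cache.

    buffer_hits_per_second corresponds to db.bufferHitsPerSecond and
    reads_per_second to db.readsPerSecond (blocks read from disk).
    """
    total = buffer_hits_per_second + reads_per_second
    if total == 0:
        return 1.0  # no block requests at all: treat as fully cached
    return buffer_hits_per_second / total

# Invented sample values: 950 cache hits/s vs 50 disk reads/s.
print(buffer_cache_hit_ratio(950, 50))  # 0.95
```

A ratio well below roughly 0.99 on a busy database often suggests shared_buffers is too small for the working set.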

To include these attributes in your Insights query, use the PostgresqlIndexSample event type.

PostgresqlIndexSample attributes:

  • index.sizeInBytes: The size of the index, in bytes.
  • index.rowsReadPerSecond: Index entries returned by scans on this index, per second.
  • index.rowsFetchedPerSecond: Index entries fetched by scans on this index, per second.

To include these attributes in your Insights query, use the PostgresqlInstanceSample event type.

PostgresqlInstanceSample attributes:

  • bgwriter.checkpointsScheduledPerSecond: Scheduled checkpoints performed, per second.
  • bgwriter.checkpointsRequestedPerSecond: Requested checkpoints performed, per second.
  • bgwriter.buffersWrittenForCheckpointsPerSecond: Buffers written during checkpoints, per second.
  • bgwriter.buffersWrittenByBackgroundWriterPerSecond: Buffers written by the background writer, per second.
  • bgwriter.backgroundWriterStopsPerSecond: Number of times per second the background writer stopped a cleaning scan because it had written too many buffers.
  • bgwriter.buffersWrittenByBackendPerSecond: Buffers written directly by a backend, per second.
  • bgwriter.buffersAllocatedPerSecond: Buffers allocated per second.
  • bgwriter.backendFsyncCallsPerSecond: Number of times per second a backend had to execute its own fsync call. Normally the background writer handles fsync calls even when the backend does its own write.
  • bgwriter.checkpointWriteTimeInMillisecondsPerSecond: Time spent in the portion of checkpoint processing where files are written to disk, in milliseconds.
  • bgwriter.checkpointSyncTimeInMillisecondsPerSecond: Time spent in the portion of checkpoint processing where files are synchronized to disk, in milliseconds.
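
A common health check on these checkpoint counters is the share of checkpoints that were requested (triggered by WAL volume) rather than scheduled; a persistently high requested share often indicates the WAL size settings are too small. A sketch of that arithmetic over a sampling interval, with invented counts:

```python
def requested_checkpoint_fraction(scheduled: int, requested: int) -> float:
    """Share of checkpoints triggered on demand rather than by schedule.

    Inputs correspond to bgwriter.checkpointsScheduledPerSecond and
    bgwriter.checkpointsRequestedPerSecond, accumulated over an interval.
    """
    total = scheduled + requested
    if total == 0:
        return 0.0  # no checkpoints in the interval
    return requested / total

# Invented interval: 1 scheduled checkpoint and 3 requested checkpoints.
print(requested_checkpoint_fraction(1, 3))  # 0.75
```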

To include these attributes in your Insights query, use the PostgresqlTableSample event type.

PostgresqlTableSample attributes:

  • table.totalSizeInBytes: The total disk space used by the table, including indexes and TOAST data.
  • table.indexSizeInBytes: The total disk space used by indexes attached to the table.
  • table.liveRows: Number of live rows.
  • table.deadRows: Number of dead rows.
  • table.indexBlocksReadPerSecond: Disk blocks read from all indexes on this table, per second.
  • table.indexBlocksHitPerSecond: Buffer hits in all indexes on this table, per second.
  • table.indexToastBlocksReadPerSecond: Disk blocks read from this table's TOAST table index, per second.
  • table.indexToastBlocksHitPerSecond: Buffer hits in this table's TOAST table index, per second.
  • table.lastVacuum: Time of the last manual vacuum on this table.
  • table.lastAutoVacuum: Time of the last automatic vacuum on this table.
  • table.lastAnalyze: Time of the last manual analyze on this table.
  • table.lastAutoAnalyze: Time of the last automatic analyze on this table.
  • table.sequentialScansPerSecond: Sequential scans initiated on this table, per second.
  • table.sequentialScanRowsFetchedPerSecond: Live rows fetched by sequential scans, per second.
  • table.indexScansPerSecond: Index scans initiated on this table, per second.
  • table.indexScanRowsFetchedPerSecond: Live rows fetched by index scans, per second.
  • table.rowsInsertedPerSecond: Rows inserted per second.
  • table.rowsUpdatedPerSecond: Rows updated per second.
  • table.rowsDeletedPerSecond: Rows deleted per second.
  • table.bloatSizeInBytes: Size of bloat, in bytes.
  • table.dataSizeInBytes: Disk space used by the main fork of the table.
  • table.bloatRatio: Fraction of the table's data size that is bloat.
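
table.bloatRatio relates the two size metrics above: it is the bloat size divided by the table's data size. A minimal sketch of that relationship, with invented byte values:

```python
def bloat_ratio(bloat_size_in_bytes: int, data_size_in_bytes: int) -> float:
    """Fraction of the table's data size that is bloat.

    Inputs correspond to table.bloatSizeInBytes and table.dataSizeInBytes.
    """
    if data_size_in_bytes == 0:
        return 0.0  # empty table: no bloat by definition
    return bloat_size_in_bytes / data_size_in_bytes

# Invented example: 64 MiB of bloat in a table whose main fork is 256 MiB.
print(bloat_ratio(64 * 1024**2, 256 * 1024**2))  # 0.25
```

A rising bloat ratio over time is a common signal that vacuuming is not keeping up with the table's update and delete rate.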

To include these attributes in your Insights query, use the PgBouncerSample event type.

PgBouncerSample attributes:

  • pgbouncer.stats.transactionsPerSecond: The transaction rate.
  • pgbouncer.stats.queriesPerSecond: The query rate.
  • pgbouncer.stats.bytesInPerSecond: The total network traffic received.
  • pgbouncer.stats.bytesOutPerSecond: The total network traffic sent.
  • pgbouncer.stats.totalTransactionDurationInMillisecondsPerSecond: Time spent by pgbouncer in transactions.
  • pgbouncer.stats.totalQueryDurationInMillisecondsPerSecond: Time spent by pgbouncer actively querying PostgreSQL.
  • pgbouncer.stats.avgTransactionCount: The average number of transactions per second in the last stat period.
  • pgbouncer.stats.avgTransactionDurationInMilliseconds: The average transaction duration.
  • pgbouncer.stats.avgQueryCount: The average number of queries per second in the last stat period.
  • pgbouncer.stats.avgBytesIn: The client network traffic received.
  • pgbouncer.stats.avgBytesOut: The client network traffic sent.
  • pgbouncer.stats.avgQueryDurationInMilliseconds: The average query duration.
  • pgbouncer.pools.clientConnectionsActive: Client connections linked to a server connection and able to process queries.
  • pgbouncer.pools.clientConnectionsWaiting: Client connections waiting on a server connection.
  • pgbouncer.pools.serverConnectionsActive: Server connections linked to a client connection.
  • pgbouncer.pools.serverConnectionsIdle: Server connections idle and ready for a client query.
  • pgbouncer.pools.serverConnectionsUsed: Server connections idle for more than server_check_delay and needing server_check_query.
  • pgbouncer.pools.serverConnectionsTested: Server connections currently running either server_reset_query or server_check_query.
  • pgbouncer.pools.serverConnectionsLogin: Server connections currently in the process of logging in.
  • pgbouncer.pools.maxwaitInMilliseconds: Age of the oldest unserved client connection.
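
The pgbouncer averages can be cross-checked against the totals: an average query duration is the total query duration divided by the number of queries in the same stat period. An illustrative sketch with made-up numbers:

```python
def avg_query_duration_ms(total_query_duration_ms: float,
                          query_count: int) -> float:
    """Average query duration, in the spirit of
    pgbouncer.stats.avgQueryDurationInMilliseconds.

    Derived from the total query duration and the number of queries
    observed in the same stat period; the values are illustrative only.
    """
    if query_count == 0:
        return 0.0  # no queries in the period
    return total_query_duration_ms / query_count

# Made-up period: 1500 ms spent actively querying across 600 queries.
print(avg_query_duration_ms(1500, 600))  # 2.5
```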

Inventory data

The PostgreSQL integration collects each setting from pg_settings along with its boot_val and reset_val. The inventory data appears on the Infrastructure Inventory page, under the config/postgresql source.

Troubleshooting

Here are some troubleshooting tips for the PostgreSQL integration:

  • If you have connection problems, make sure you can connect to the cluster from the same host with psql.
  • If you have problems collecting PgBouncer metrics, make sure you are connected to the instance through PgBouncer. The default PgBouncer port is 6432.

For more help

Recommendations for learning more: