PostgreSQL monitoring integration

The New Relic PostgreSQL on-host integration collects inventory and metric data from your PostgreSQL instance and sends it to the New Relic platform, where you can aggregate and visualize key performance metrics. Data from instances, databases, and clusters helps you find the source of problems.

Read on to install the integration, and to see what data we collect.

Compatibility and requirements

Our integration is compatible with PostgreSQL 9.0 or higher.
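
If you're not sure whether a server meets this requirement, one way to check its version is to run the following from psql:

    SHOW server_version;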

Before installing the integration, make sure that your PostgreSQL version is supported and that you can create a user with the permissions described in PostgreSQL users and permissions below.

This integration is released as open source under the MIT license on GitHub.

Install and activate

To install the PostgreSQL integration, follow the instructions for your environment:

ECS

See Monitor service running on ECS.

Kubernetes

See Monitor service running on Kubernetes.

Linux
  1. Follow the procedures to install the Infrastructure integration package using the file name nri-postgresql.
  2. Change directory to the integrations folder:

    cd /etc/newrelic-infra/integrations.d
  3. Copy the sample configuration file:

    sudo cp postgresql-config.yml.sample postgresql-config.yml
  4. Edit the postgresql-config.yml file as described in the configuration settings.
  5. Before you restart the Infrastructure agent, create a user with READ permissions on the required views, as described in PostgreSQL users and permissions below.
  6. Restart the Infrastructure agent.
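
For example, on systemd-based Linux distributions the restart typically looks like this (adjust the command to your distribution's service manager):

    sudo systemctl restart newrelic-infra
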
Windows
  1. Download the nri-postgresql .MSI installer image from:

    http://download.newrelic.com/infrastructure_agent/windows/integrations/nri-postgresql/nri-postgresql-amd64.msi

  2. To install from the Windows command prompt, run:
    msiexec.exe /qn /i PATH\TO\nri-postgresql-amd64.msi
  3. In the Integrations directory, C:\Program Files\New Relic\newrelic-infra\integrations.d\, create a copy of the sample configuration file by running:

    cp postgresql-config.yml.sample postgresql-config.yml
  4. Edit the postgresql-config.yml file as described in the configuration settings.
  5. Restart the Infrastructure agent.
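
For example, from an elevated command prompt (this assumes the default service name used by the Infrastructure agent install):

    net stop newrelic-infra
    net start newrelic-infra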

PostgreSQL users and permissions

Create a user with READ permissions on:

  • pg_stat_database
  • pg_stat_database_conflicts
  • pg_stat_bgwriter

You can complete this step before or after you configure the postgresql-config.yml file. To create the user for the PostgreSQL integration:

CREATE USER new_relic WITH PASSWORD 'PASSWORD';
GRANT SELECT ON pg_stat_database TO new_relic;
GRANT SELECT ON pg_stat_database_conflicts TO new_relic;
GRANT SELECT ON pg_stat_bgwriter TO new_relic;
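
To confirm the new user can read the required views, you can run a quick check such as the following (the host and database here are placeholders; adjust them to your environment):

    psql -U new_relic -h localhost -d postgres -c 'SELECT count(*) FROM pg_stat_database;'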

Configure the integration

The PostgreSQL integration's configuration is where you set the required login credentials and control how data is collected. Which options you change depends on your setup and preferences.

There are several ways to configure the integration, depending on how it was installed.

The configuration options are listed below. For an example configuration, see the sample postgresql-config.yml file later in this section.

Commands

The configuration accepts the following commands:

  • all_data: collects both inventory and metric data for the configured PostgreSQL instance.

Arguments

The all_data command accepts the following arguments:

  • username: the user name for the PostgreSQL connection. Required.

  • password: the password for the PostgreSQL connection. Required.

  • hostname: the hostname for the PostgreSQL connection. Default: localhost.

  • port: the port where PostgreSQL is running. Default: 5432.

  • collection_list: a JSON array, a JSON object, or the string literal ALL that specifies the entities to be collected. Required except for PgBouncer.

    • JSON array: Interpreted as a list of database names from which to collect all relevant metrics, including any tables and indexes belonging to that database. For example:

      collection_list: '["postgres"]'
      
    • JSON object: Only the entities specified in the object are collected; no automatic discovery is performed. The JSON nesting levels are database name -> schema name -> table name -> index name. For example:

      collection_list: '{"postgres":{"public":{"pg_table1":["pg_index1","pg_index2"],"pg_table2":[]}}}'
    • ALL: collect metrics for all databases, schemas, tables, and indexes discovered. For example:

      collection_list: 'ALL'
  • pgbouncer: collect pgbouncer metrics. Default: false.

  • enable_ssl: determines if SSL is enabled. If true, ssl_cert_location and ssl_key_location are required. Default: false.

  • trust_server_certificates: if true, the server certificate is not verified for SSL. If false, the server certificate identified in ssl_root_cert_location is verified. Default: false.

  • ssl_root_cert_location: absolute path to PEM-encoded root certificate file. Required if trust_server_certificates is false.

  • ssl_cert_location: absolute path to PEM-encoded client certificate file. Required if enable_ssl is true.

  • ssl_key_location: absolute path to PEM-encoded client key file. Required if enable_ssl is true.

  • timeout: maximum wait for connection, in seconds. Set to 0 for no timeout. Default: 10.

  • database: the PostgreSQL database to connect to. Default: postgres.

  • custom_metrics_query: a SQL query that requires the columns metric_name, metric_type, and metric_value. metric_type can be gauge, rate, delta, or attribute. Additional columns collected with the query are added to the metric set as attributes.
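
For example, a query shaped like the following (it mirrors the commented example in the sample configuration below) reports a delta metric named rows_inserted and adds database_id as an extra attribute on the metric set:

SELECT
  'rows_inserted' AS "metric_name",
  'delta'         AS "metric_type",
  sd.tup_inserted AS "metric_value",
  sd.datid        AS "database_id"
FROM pg_stat_database sd;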

Example postgresql-config.yml file configuration:

integration_name: com.newrelic.postgresql

instances:
  # A name for the collection
  - name: sample_postgres
    # The only supported command is all_data.
    command: all_data
    arguments:
      # The username for the Postgres instance. Required.
      username: postgres
      # The password for the Postgres instance. Required.
      password: pass
      # The hostname for the Postgres instance. Defaults to localhost.
      hostname: psql-sample.localnet
      # The port of the Postgres instance. If PgBouncer is being used,
      # use the port it is running on. Defaults to 5432.
      port: 6432
      # The JSON object which contains the entities to monitor. The nesting
      # levels of JSON are database name -> schema name -> table name -> index name.
      collection_list: '{"postgres":{"public":{"pg_table1":["pg_index1","pg_index2"],"pg_table2":[]}}}'
      # True if SSL is to be used. Defaults to false.
      enable_ssl: true
      # True if the SSL certificate should be trusted without validating.
      # Setting this to true may open up the monitoring service to MITM attacks.
      # Defaults to false.
      trust_server_certificate: false
      ssl_root_cert_location: /etc/newrelic-infra/root_cert.crt
      ssl_cert_location: /etc/newrelic-infra/postgresql.crt
      ssl_key_location: /etc/newrelic-infra/postgresql.key
      timeout:  10
      # custom_metrics_query:
        # select
          # 'rows_inserted' as "metric_name",
          # 'delta' as "metric_type",
          # sd.tup_inserted as "metric_value",
          # sd.datid as "database_id"
        # from pg_stat_database sd;
    # Used to label your data.
    labels:
      env: production
      role: postgresql

Find and use data

Data from this service is reported to an integration dashboard.

Metrics are attached to these event types: PostgresqlDatabaseSample, PostgresqlIndexSample, PostgresqlInstanceSample, PostgresqlTableSample, and PgBouncerSample.

You can query this data for troubleshooting purposes or to create custom charts and dashboards.
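
For example, a NRQL query along these lines charts the connection count reported by the integration (the event and attribute names are listed under Metric data below):

    SELECT average(`db.connections`) FROM PostgresqlDatabaseSample TIMESERIES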

For more on how to find and use your data, see Understand integration data.

Metric data

The PostgreSQL integration collects the following database metric attributes. Some attributes apply to all PostgreSQL event types, while others are prefixed with a category indicator and a period, such as db. or index.

These attributes are attached to the PostgresqlDatabaseSample event type:

  • db.connections: Number of backends currently connected to this database.
  • db.commitsPerSecond: Committed transactions per second.
  • db.rollbacksPerSecond: Transactions rolled back per second.
  • db.readsPerSecond: Number of disk blocks read in this database per second.
  • db.bufferHitsPerSecond: Number of times disk blocks were found already in the buffer cache, so that a read was not necessary. This only includes hits in the PostgreSQL buffer cache, not the operating system's file system cache.
  • db.rowsReturnedPerSecond: Rows returned by queries per second.
  • db.rowsFetchedPerSecond: Rows fetched by queries per second.
  • db.rowsInsertedPerSecond: Rows inserted per second.
  • db.rowsUpdatedPerSecond: Rows updated per second.
  • db.rowsDeletedPerSecond: Rows deleted per second.
  • db.conflicts.tablespacePerSecond: Number of queries in this database that have been canceled due to dropped tablespaces.
  • db.conflicts.locksPerSecond: Number of queries in this database that have been canceled due to lock timeouts.
  • db.conflicts.snapshotPerSecond: Number of queries in this database that have been canceled due to old snapshots.
  • db.conflicts.bufferpinPerSecond: Number of queries in this database that have been canceled due to pinned buffers.
  • db.conflicts.deadlockPerSecond: Number of queries in this database that have been canceled due to deadlocks.
  • db.tempFilesCreatedPerSecond: Number of temporary files created by queries in this database. All temporary files are counted, regardless of why the temporary file was created (for example, sorting or hashing), and regardless of the log_temp_files setting.
  • db.tempWrittenInBytesPerSecond: Total amount of data written to temporary files by queries in this database. All temporary files are counted, regardless of why the temporary file was created, and regardless of the log_temp_files setting.
  • db.deadlocksPerSecond: Number of deadlocks detected in this database.
  • db.readTimeInMillisecondsPerSecond: Time spent reading data file blocks by backends in this database, in milliseconds.
  • db.writeTimeInMillisecondsPerSecond: Time spent writing data file blocks by backends in this database, in milliseconds.

These attributes are attached to the PostgresqlIndexSample event type:

  • index.sizeInBytes: The size of an index.
  • index.rowsReadPerSecond: The number of index entries returned by scans on this index.
  • index.rowsFetchedPerSecond: The number of index entries fetched by scans on this index.

These attributes are attached to the PostgresqlInstanceSample event type:

  • bgwriter.checkpointsScheduledPerSecond: Number of scheduled checkpoints that have been performed.
  • bgwriter.checkpointsRequestedPerSecond: Number of requested checkpoints that have been performed.
  • bgwriter.buffersWrittenForCheckpointsPerSecond: Number of buffers written during checkpoints.
  • bgwriter.buffersWrittenByBackgroundWriterPerSecond: Number of buffers written by the background writer.
  • bgwriter.backgroundWriterStopsPerSecond: Number of times the background writer stopped a cleaning scan because it had written too many buffers.
  • bgwriter.buffersWrittenByBackendPerSecond: Number of buffers written directly by a backend.
  • bgwriter.buffersAllocatedPerSecond: Number of buffers allocated.
  • bgwriter.backendFsyncCallsPerSecond: Number of times a backend had to execute its own fsync call. Normally the background writer handles them even when the backend does its own write.
  • bgwriter.checkpointWriteTimeInMillisecondsPerSecond: Total amount of time that has been spent in the portion of checkpoint processing where files are written to disk, in milliseconds.
  • bgwriter.checkpointSyncTimeInMillisecondsPerSecond: Total amount of time that has been spent in the portion of checkpoint processing where files are synchronized to disk, in milliseconds.

These attributes are attached to the PostgresqlTableSample event type:

  • table.totalSizeInBytes: The total disk space used by the table, including indexes and TOAST data.
  • table.indexSizeInBytes: The total disk space used by indexes attached to the specified table.
  • table.liveRows: Number of live rows.
  • table.deadRows: Number of dead rows.
  • table.indexBlocksReadPerSecond: The number of disk blocks read from all indexes on this table.
  • table.indexBlocksHitPerSecond: The number of buffer hits in all indexes on this table.
  • table.indexToastBlocksReadPerSecond: The number of disk blocks read from this table's TOAST table index.
  • table.indexToastBlocksHitPerSecond: The number of buffer hits in this table's TOAST table index.
  • table.lastVacuum: Time of last vacuum on table.
  • table.lastAutoVacuum: Time of last automatic vacuum on table.
  • table.lastAnalyze: Time of last analyze on table.
  • table.lastAutoAnalyze: Time of last automatic analyze on table.
  • table.sequentialScansPerSecond: Number of sequential scans initiated on this table per second.
  • table.sequentialScanRowsFetchedPerSecond: Number of live rows fetched by sequential scans per second.
  • table.indexScansPerSecond: Number of index scans initiated on this table.
  • table.indexScanRowsFetchedPerSecon: Number of live rows fetched by index scans.
  • table.rowsInsertedPerSecond: Rows inserted per second.
  • table.rowsUpdatedPerSecond: Rows updated per second.
  • table.rowsDeletedPerSecond: Rows deleted per second.
  • table.bloatSizeInBytes: Size of bloat in bytes.
  • table.dataSizeInBytes: Size of disk space used by the main fork of the table.
  • table.bloatRatio: Fraction of table data size that is bloat.

These attributes are attached to the PgBouncerSample event type:

  • pgbouncer.stats.transactionsPerSecond: The transaction rate.
  • pgbouncer.stats.queriesPerSecond: The query rate.
  • pgbouncer.stats.bytesInPerSecond: The total network traffic received.
  • pgbouncer.stats.bytesOutPerSecond: The total network traffic sent.
  • pgbouncer.stats.totalTransactionDurationInMillisecondsPerSecond: Time spent by pgbouncer in transaction.
  • pgbouncer.stats.totalQueryDurationInMillisecondsPerSecond: Time spent by pgbouncer actively querying PostgreSQL.
  • pgbouncer.stats.avgTransactionCount: The average number of transactions per second in the last stat period.
  • pgbouncer.stats.avgTransactionDurationInMilliseconds: The average transaction duration.
  • pgbouncer.stats.avgQueryCount: The average number of queries per second in the last stat period.
  • pgbouncer.stats.avgBytesIn: The client network traffic received.
  • pgbouncer.stats.avgBytesOut: The client network traffic sent.
  • pgbouncer.stats.avgQueryDurationInMilliseconds: The average query duration.
  • pgbouncer.pools.clientConnectionsActive: Client connections linked to a server connection and able to process queries.
  • pgbouncer.pools.clientConnectionsWaiting: Client connections waiting on a server connection.
  • pgbouncer.pools.serverConnectionsActive: Server connections linked to a client connection.
  • pgbouncer.pools.serverConnectionsIdle: Server connections idle and ready for a client query.
  • pgbouncer.pools.serverConnectionsUsed: Server connections idle for more than server_check_delay, needing server_check_query.
  • pgbouncer.pools.serverConnectionsTested: Server connections currently running either server_reset_query or server_check_query.
  • pgbouncer.pools.serverConnectionsLogin: Server connections currently in the process of logging in.
  • pgbouncer.pools.maxwaitInMilliseconds: Age of the oldest unserved client connection.

Inventory data

The PostgreSQL integration collects each setting from pg_settings along with its boot_val and reset_val. The inventory data appears on the Infrastructure Inventory page, under the config/postgresql source.
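
To see roughly what is collected, you can inspect the same view yourself with a query such as:

SELECT name, setting, boot_val, reset_val
FROM pg_settings;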

Troubleshooting

Here are some troubleshooting tips for the PostgreSQL integration:

  • If you have connection problems, make sure you can connect to the cluster from the same box with psql.
  • If you have problems collecting PgBouncer metrics, make sure you are connected to the instance through PgBouncer. Default port is 6432.
  • If you get the error message Error creating list of entities to collect: pq: unsupported startup parameter: extra_float_digits, set ignore_startup_parameters = extra_float_digits in the PgBouncer config file.
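
For example, in pgbouncer.ini (the file location varies by installation):

    [pgbouncer]
    ignore_startup_parameters = extra_float_digits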

For more help

Recommendations for learning more: