With our Apache Flink dashboard, you can easily track your logs, keep an eye on your instrumentation sources, and get an overview of uptime and downtime for all of your application instances. Built with our infrastructure agent and our Prometheus OpenMetrics integration, the Flink dashboard takes advantage of OpenMetrics endpoint scraping, so you can view all your most important data in one place.
After you set up Flink with New Relic, your data displays in dashboards like these, right out of the box.
Install the infrastructure agent and Prometheus OpenMetrics integration
To get Flink data into New Relic, first install our infrastructure agent, then expose your metrics by installing our Prometheus OpenMetrics integration.
- Follow our guided install to instrument your system with the infrastructure agent. Or if you prefer, you can install the infrastructure agent via the command line.
- Install our Prometheus OpenMetrics integration.
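If you choose the command-line route, the infrastructure agent can be installed with New Relic's guided install command. The snippet below is a sketch for a Linux host: the API key and account ID placeholders are assumptions, so copy the exact command shown on the guided install page for your account.

# Guided install of the infrastructure agent on Linux (sketch).
# Replace YOUR_API_KEY and YOUR_ACCOUNT_ID with the values from your guided install page.
curl -Ls https://download.newrelic.com/install/newrelic-cli/scripts/install.sh | bash && \
sudo NEW_RELIC_API_KEY=YOUR_API_KEY NEW_RELIC_ACCOUNT_ID=YOUR_ACCOUNT_ID /usr/local/bin/newrelic install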
Configure Prometheus OpenMetrics for Apache Flink
After you've installed Prometheus OpenMetrics, you need to configure the nri-prometheus-config.yml file. Your configuration file should match our snippet in the nri-prometheus repository:
integrations:
  - name: nri-prometheus
    config:
      # Defaults to true. When standalone is set to `false`, `nri-prometheus` requires an infrastructure agent to send data.
      standalone: false

      # When running with the infrastructure agent, emitters will have to include `infra-sdk`.
      emitters: infra-sdk

      # Match the name of your cluster with the name seen in New Relic.
      cluster_name: "YOUR_CLUSTER_NAME_HERE"

      targets:
        - description: "YOUR_DESCRIPTION_HERE"
          urls: ["job-cluster:9249", "taskmanager1:9249", "taskmanager2:9249"]
        #  tls_config:
        #    ca_file_path: "/etc/etcd/etcd-client-ca.crt"
        #    cert_file_path: "/etc/etcd/etcd-client.crt"
        #    key_file_path: "/etc/etcd/etcd-client.key"

      # Defaults to false. This determines whether or not the integration should run in verbose mode.
      verbose: false

      # Defaults to false and does not include verbose mode. Audit mode logs the uncompressed data
      # sent to New Relic and can lead to a high log volume.
      audit: false

      # `scrape_timeout` is not a mandatory configuration and defaults to 30s.
      # The HTTP client timeout when fetching data from endpoints.
      # scrape_timeout: "YOUR_TIMEOUT_DURATION"

      scrape_duration: "5s"

      # `worker_threads` is not a mandatory configuration and defaults to `4`. For clusters with more
      # than 400 endpoints, slowly increase the worker threads until the scrape time falls within the
      # desired `scrape_duration`. Note: increasing this value too much results in huge memory
      # consumption if too many metrics are scraped at once.
      # worker_threads: 4

      # Defaults to false. Determines if the integration should skip TLS verification or not.
      insecure_skip_verify: false

    timeout: 10s
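The targets above assume that each Flink JobManager and TaskManager exposes Prometheus metrics on port 9249. If you haven't enabled that yet, here's a minimal sketch of the relevant conf/flink-conf.yaml settings; the exact keys depend on your Flink version (newer releases use a factory.class style), so check the Flink metrics documentation for your release:

# conf/flink-conf.yaml: enable Flink's built-in Prometheus metrics reporter.
# The flink-metrics-prometheus jar must be available to Flink (older releases ship it in the opt/ directory).
metrics.reporter.prom.class: org.apache.flink.metrics.prometheus.PrometheusReporter
# Port (or port range) the reporter listens on; 9249 matches the targets above.
metrics.reporter.prom.port: 9249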
Manually set up log forwarding
While the infrastructure agent includes a log forwarder, you may need to set up forwarding of your Flink logs manually. To do this:
- Go to your logging.yml file.
- Add the following snippet anywhere in the file:
- name: flink-log
  file: /home/flink-virtualbox/flink/build-target/log/flink_taskmanager.log
  attributes:
    logtype: flink-logs
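For the configuration changes to take effect, restart the infrastructure agent. On a systemd-based Linux host, for example:

sudo systemctl restart newrelic-infra.service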
Get Apache Flink metrics as a dashboard
Once you've installed the Apache Flink quickstart, you can see your critical Apache Flink data in New Relic. To find your dashboard, go to one.newrelic.com > Dashboards, then select Apache Flink. You can also query your data directly. For example:
FROM Metric SELECT sum(flink_jobmanager_job_totalNumberOfCheckpoints) AS 'Total Number of Checkpoints'
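To chart the same metric over time instead of as a single value, add a TIMESERIES clause to the query:

FROM Metric SELECT sum(flink_jobmanager_job_totalNumberOfCheckpoints) AS 'Total Number of Checkpoints' TIMESERIES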
What's next?
If you want to further customize your Apache Flink dashboards, you can learn more about building NRQL queries and managing your dashboards in the New Relic UI:
- Introduction to the query builder to create basic and advanced queries.
- Introduction to dashboards to customize your dashboard and carry out different actions.
- Manage your dashboard to adjust your dashboard's display mode, or to add more content to your dashboard.