With New Relic's real-time profiling for Java using Java Flight Recorder (JFR) metrics, you can run continuous, always-on profiling of your Java code in production environments. The accompanying JVM cluster timeline view provides a fast and intuitive way to diagnose cluster-wide performance problems. For example, you can quickly see how an application’s deployment affects the overall health of the cluster.
Find performance bottlenecks
Troubleshooting performance bottlenecks in your Java application or service can help you better understand the following:
- Where you’re wasting resources
- When an incident occurs
- What happened during an incident
- What performance issues led up to an incident
To make troubleshooting faster and easier, you need to see the high fidelity runtime characteristics of your code running on the JVM, and you need that data in real time.
The New Relic JFR daemon runs as its own Java process and monitors a JVM for JFR events over remote JMX. Using the New Relic Java telemetry SDK as the underlying implementation, the JFR daemon converts JFR events into New Relic telemetry types and reports them to New Relic's metric and event ingest APIs.
On startup the JFR daemon checks if the application you are monitoring is also being monitored by the New Relic Java agent.
- If the Java agent is present, then the daemon will obtain the entity GUID associated with the application. Both the JFR data and the data collected by the agent will then report to the same APM entity.
- If the Java agent is not detected, then the daemon will report as a unique entity under the app name that you configured for it.
Requirements
- The JFR daemon must be run with Java 11 or higher.
- The application being monitored by the daemon must use a version of Java that supports Java Flight Recorder: Java 8 (specifically, version 8u262+) or higher.
- An Insights insert API key.
- The JFR daemon jar.
- Required for flamegraphs, otherwise optional but recommended: New Relic Java agent version 6.1.0 or higher installed on your JVM. The JFR daemon can run without the Java agent, but if the agent is present, agent and daemon data are combined into a single APM application.
Apps running with the JFR daemon should expect the JFR subsystem to use about 150MB of additional memory.
To monitor an application with the JFR daemon, first expose a remote JMX port on that application by adding the following system properties:
-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=1099 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false
By default, the JFR daemon will connect via JMX to localhost on port 1099 and send data to New Relic's US production metric and event ingest endpoints. To change this behavior, see the fully documented configuration options.
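For example, a monitored application might be launched like this. This is a sketch: my-app.jar is a placeholder for your application, and the flags and port match the defaults described above.

```shell
# The JMX system properties from above, collected into one variable.
# Port 1099 matches the JFR daemon's default JMX target.
JMX_FLAGS="-Dcom.sun.management.jmxremote \
-Dcom.sun.management.jmxremote.port=1099 \
-Dcom.sun.management.jmxremote.ssl=false \
-Dcom.sun.management.jmxremote.authenticate=false"

# my-app.jar is a placeholder; echo is used so the sketch can be run
# without the jar actually being present.
echo java $JMX_FLAGS -jar my-app.jar
```

Disabling SSL and authentication, as shown here, is only appropriate when the JMX port is not reachable from untrusted networks.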
To use the JFR daemon:
Download the latest version of the JFR daemon jar: jfr-daemon-n.n.n.jar.
Register an Insert API key for publishing data to New Relic.
Required: Assign your Insert API key to the INSIGHTS_INSERT_KEY environment variable.
Recommended: Set the name of the application being monitored to the NEW_RELIC_APP_NAME environment variable. If this is not set, then the default app name will be used.
Start your application and the JFR daemon:
java -jar jfr-daemon-n.n.n.jar
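Taken together, the setup above might look like this. The key and app name values are placeholders, and this sketch assumes the daemon reads its key from the INSIGHTS_INSERT_KEY environment variable as described in the steps above.

```shell
# Placeholder values: substitute your real Insert API key and app name.
export INSIGHTS_INSERT_KEY="your-insert-api-key"
export NEW_RELIC_APP_NAME="My Java Service"

# Start the daemon (echoed here so the sketch runs without the jar
# present; n.n.n stands for the version you downloaded).
echo java -jar jfr-daemon-n.n.n.jar
```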
Optional (but recommended): Include the Java agent:
Install Java agent version 6.1.0 or higher to monitor your application.
Configure the Java agent to communicate with the JFR daemon by adding the following to the agent's newrelic.yml configuration file:

  jmx:
    enabled: true
    linkingMetadataMBean: true
linkingMetadataMBean allows the JFR daemon to acquire the entity GUID that was generated by the Java agent for the application. If the entity GUID is successfully acquired, then data collected by the daemon will be reported to the same application as the Java agent. Any name configured for the daemon using NEW_RELIC_APP_NAME would be overridden in favor of the name specified by the agent.
View your data
To view your data, go to one.newrelic.com > Entity explorer > (select service) > More Views > Realtime Profiling Java.
Understand JVM cluster behavior over time
The JVM cluster timeline view shows the JVM behavior across your entire cluster. This timeline enables quicker troubleshooting and issue detection; for example, at a glance you can see:
- How a recent deployment affected the rest of the JVM cluster
- When a JVM restarted
- How an individual instance was affected by its noisy neighbor
Each row of the timeline represents a specific JVM over time. Inside each row, a box represents a 5-minute period of that JVM’s life. From least severe to most severe, yellow, orange, and red traffic lights indicate anomalous behavior for a JVM, so you can drill down into that instance and the right time period when investigating errors or other performance issues.
The details panel for each JVM provides several critical views:
- How resources are allocated within a process
- How garbage collection affects performance
- How to track garbage collection with logs
- How CPU is used
Identify resource intensive code paths using flamegraphs
- Early access to our flamegraphs feature is available upon request. All other real-time profiling features are available by default. To request this early access, please email firstname.lastname@example.org.
- At this time, this requires deployment of both the JFR daemon and Java agent.
Use flamegraphs to identify the Java classes and methods that are most frequently executed in your application code. By using flamegraphs to optimize the hot spots in your code you can reduce resource consumption and increase your application’s overall performance.