In order to report request queuing, New Relic agents depend on an HTTP header set by the frontend web server (such as Apache or Nginx) or load balancer (such as HAProxy or F5). These examples use the X-Request-Start header, since it has broader support across platforms. If this does not work with your server configuration, try the X-Queue-Start header instead; the syntax is otherwise the same.
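For illustration, the value the agent reads is a t= prefix followed by a Unix epoch timestamp; the resolution depends on the frontend, and the timestamp below is a made-up microsecond value:

X-Request-Start: t=1577836800000000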
Apache
Apache's mod_headers module includes a %t variable that produces a correctly formatted value: t= followed by the time the request was received, in microseconds since the epoch. To enable request queue reporting, add this line to your Apache config:
RequestHeader set X-Request-Start "%t"
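If your environment needs the X-Queue-Start header instead, the same line applies with only the header name changed:

RequestHeader set X-Queue-Start "%t"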
Nginx
If you are using Nginx version 1.2.6 or higher and the latest version of the Ruby, Python, or PHP agent, Nginx can easily be configured to report queue time. (For Nginx versions older than 1.2.6, you must recompile Nginx with a module or patch.)
Configuration with Nginx 1.2.6 or higher uses the ${msec} variable, which is the current time in seconds with millisecond resolution. For more information, see http://nginx.org/en/docs/http/ngx_http_core_module.html#variables.
Add the appropriate directive to your Nginx config. Which directive you need depends on how Nginx hands requests to your application: general Nginx use (HTTP proxying), Passenger version 5 or higher, older Passenger versions, fastcgi, or uWSGI. Example directives for each case are sketched below.
F5 load balancers
For F5 load balancers, use this configuration snippet:
when HTTP_REQUEST_SEND {
    # TCL 8.4 so we have to calculate the time in millisecond resolution
    # Calculation from: https://groups.google.com/forum/?fromgroups=#!topic/comp.lang.tcl/tV9H6TDv0t8
    set secs [clock seconds]
    set ms [clock clicks -milliseconds]
    set base [expr { $secs * 1000 }]
    set fract [expr { $ms - $base }]
    if { $fract >= 1000 } {
        set diff [expr { $fract / 1000 }]
        incr secs $diff
        incr fract [expr { -1000 * $diff }]
    }
    set micros [format "%d%03d000" $secs $fract]

    # Want this header inserted as if coming from the client
    clientside {
        HTTP::header insert X-Request-Start "t=${micros}"
    }
}
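The format string concatenates the whole seconds with the zero-padded millisecond fraction and appends three zeros, so ${micros} ends up as the request time in microseconds since the epoch, matching the t= value format used in the other examples.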
Network timing
Even with request queuing configured, the frontend server's setup can still affect network time in your browser data. This is because the frontend server does not add the queuing time header until after it actually accepts and processes the request.
The queuing time headers can never account for backlog in the listener socket used to accept requests. For example, if the frontend server's configuration results in a backlog of requests that queue in the listener socket, page load timing will show an increase in network time.