
Logging best practices guide

Welcome to the New Relic logging best practices guide. Here you'll find in-depth recommendations for how to optimize our logs features and manage data consumption.

Tip

Got lots of logs? Check out our tutorial on how to optimize and manage them.

Forwarding logs

Here are some tips on log forwarding to supplement our log forwarding docs:

  • When forwarding logs, we recommend using our New Relic infrastructure agent and/or APM agents. If you cannot use New Relic agents, use other supported agents (like FluentBit, Fluentd, and Logstash).

    Example configurations for supported logging agents are available on GitHub.

  • Add a logtype attribute to all the data you forward. The attribute is required to use our built-in parsing rules, and it can also be used to create custom parsing rules based on the data type. The logtype attribute is considered a well-known attribute and is used in our quickstart dashboards for log summary information.

  • Use our built-in parsing rules for well-known log types. We will automatically parse logs from many different well-known log types when you set the relevant logtype attribute.

    Here's an example of how to add the logtype attribute to a log forwarded by our infrastructure agent:

    logs:
      - name: mylog
        file: /var/log/mylog.log
        attributes:
          logtype: mylog
  • Use New Relic integrations to forward logs for other common data types.

Data partitions

If you're consuming 1 TB or more of log data per day, you should put an ingest governance plan in place for your logs, including a plan to partition the data into functional and thematic groupings. As a best practice, keep each partition to no more than 1 TB; this helps both performance and usability. If you send all your logs to one giant "bucket" (the default log partition) in a single account, you may experience slow or failed queries. For more details, see NRQL query rate limits.

One way to improve query performance is by limiting the time range being searched. Searching for logs over long periods of time will return more results and require more time. Avoid searches longer than one week whenever possible, and use the time-range selector to narrow searches to smaller, more specific time windows.
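For example, a query that filters on an attribute and narrows the time window (the logtype value here is hypothetical) returns much faster than an open-ended search across all logs:

    FROM Log SELECT * WHERE logtype = 'mylog' SINCE 30 minutes ago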

Another way to improve search performance is by using data partitions. Here are some best practices for data partitions:

  • Make sure you use partitions early in your logs onboarding process. Create a strategy for using partitions so that your users know where to search and find specific logs. That way your alerts, dashboards, and saved views don't need to be modified if you implement partitions later in your logs journey.

  • To avoid limits and improve query time, create a data partition rule for specific data types when your total ingest exceeds 1 TB per day. Certain high-volume infrastructure logs (for example, Kubernetes logs, CDN logs, VPC logs, syslog, or web logs) can be good candidates for their own partitions.

  • Even if your ingest volume is low, you can still use data partitions for a logical separation of data, or simply to improve query performance across separate data types.

  • Use a secondary data partition namespace for data that you want a shorter retention time for. Secondary data partitions have a default 30-day retention period.

  • To search data partitions in the Logs UI, open the partition selector and check the partitions you want to search. If you're using NRQL, use the FROM clause to specify Log or Log_<partition>. For example:

    FROM Log_<my_partition_name> SELECT * SINCE 1 hour ago

    Or to search logs on multiple partitions:

    FROM Log, Log_<my_partition_name> SELECT * SINCE 1 hour ago

Parsing logs

Parsing your logs at ingest is the best way to make your log data more usable by you and other users in your organization. When you parse out attributes, you can easily use them to search in the Logs UI and in NRQL without having to parse data at query time. This also allows you to use them easily in alerts and dashboards.

For parsing logs we recommend that you:

  • Parse your logs at ingest to create attributes (or fields), which you can use when searching or when creating dashboards and alerts. Attributes can be strings of data or numerical values.

  • Use the logtype attribute you added to your logs at ingest, along with other NRQL WHERE clauses to match the data you want to parse. Write specific matching rules to filter logs as precisely as possible. For example:

    WHERE logtype='mylog' AND message LIKE '%error%'
  • Use our built-in parsing rules and the associated logtype attribute whenever possible. If the built-in rules don't work for your data, use a different logtype attribute name (for example, apache_logs instead of apache, or iis_w3c_custom instead of iis_w3c), and then create a new parsing rule in the UI using a modified version of the built-in rules so that it works for your log data format.

  • Use our Parsing UI to test and validate your Grok rules. Using the Paste log option, you can paste in one of your log messages to test your Grok expression before creating and saving a permanent parsing rule.

  • Use external FluentBit configurations for parsing multi-line logs and for other, more extensive pre-parsing before ingesting into New Relic. For details and configuration of multi-line parsing with our infrastructure agent, see this blog post.

  • Create optimized Grok patterns that match the filtered logs and extract the attributes you need. Avoid overusing expensive Grok patterns like GREEDYDATA. If you need help identifying any sub-optimal parsing rules, talk to your New Relic account representative.
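    As a sketch, suppose your application writes access-style lines such as 203.0.113.7 GET /checkout 502 1484 (a made-up format). A parsing rule for it pairs a narrow matching clause:

    WHERE logtype = 'myapp_access'

    with a Grok pattern that names and types each field:

    ^%{IP:client_ip} %{WORD:http_method} %{URIPATH:request_path} %{INT:status_code:int} %{INT:bytes:int}$

    After parsing, attributes such as client_ip and status_code are available for searching, alerting, and dashboards without any query-time parsing.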

Grok best practices

  • Use Grok types to specify the type of attribute value to extract. If omitted, values are extracted as strings. This is especially important for numerical values if you want to be able to use NRQL functions (for example, monthOf(), max(), average(), >, <) on these attributes.
  • Use the Parsing UI to test your Grok patterns. You can paste sample logs in the Parsing UI to validate your Grok or Regex patterns and if they're extracting the attributes as you expect.
  • Add anchors to your parsing logic, such as ^ to match the beginning of a line or $ to match the end of a line.
  • Use ()? around a pattern to mark optional fields.
  • Avoid overusing expensive Grok patterns like %{GREEDYDATA}. Try to always use a valid Grok pattern and Grok type when extracting attributes.
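Putting these practices together, here's a hedged sketch (the log format and attribute names are invented) that combines anchors, a typed capture, and an optional group:

    ^%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} payment completed in %{INT:duration_ms:int} ms( trace=%{WORD:trace_id})?$

Because duration_ms is captured as an integer, NRQL functions such as max(duration_ms) work on it directly, and the optional ( trace=... )? group only populates trace_id when that suffix is present in the log line.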

Drop filter rules

Drop logs at ingest

  • Create drop filter rules to drop logs that aren't useful or aren't required for any of your dashboard, alerting, or troubleshooting use cases.
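    For example, a drop rule built on NRQL like the following (the logtype and message filter are hypothetical) discards matching records at ingest when used with the DROP_DATA action, so they never count toward your data ingest:

    SELECT * FROM Log WHERE logtype = 'myapp' AND message LIKE '%DEBUG%'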

Drop attributes from your logs at ingest

  • Create drop rules to drop unused attributes from your logs.
  • Drop the message attribute after parsing: if you've parsed the message attribute into new attributes that hold the data you need, there's no reason to keep the original message field.
  • Example: If you're forwarding data from AWS infrastructure, you can create drop rules to drop any AWS attributes that may create unwanted data bloat.
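As a sketch (the attribute names are illustrative), a rule using the DROP_ATTRIBUTES action lists the attributes to remove in its SELECT clause while keeping the rest of each matching log record:

    SELECT message, aws.region FROM Log WHERE logtype = 'myapp_access'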

New Relic logs sizing

  • How we bill for storage may differ from how some of our competitors do. How we meter log data is similar to how we meter and bill for other types of data, as defined in Usage calculation.
  • If our cloud integrations (AWS, Azure, GCP) are installed, we add cloud metadata to every log record, which adds to your overall ingest bill. You can drop this data, however, to reduce your ingest.
  • The main drivers for log data overhead are below, in order of impact:

Searching logs

  • For common log searches, create and use Saved views in the UI. Create a search for your data and click + Add Column to add additional attributes to the UI table. You can then rearrange the columns so they appear in the order you want and save the result as a saved view with either private or public permissions. Configure saved views as public so that you and other users can easily run common searches with all the relevant attribute data displayed. This is a good practice for third-party applications like Apache and NGINX, so users can easily see those logs without searching.

  • Use the query builder to run searches using NRQL. The query builder allows you to use all the advanced functions available in NRQL.

  • Create or use available quickstart dashboards to answer common questions about your logs and to look at your log data over time in time series graphs. Create dashboards with multiple panels to slice and dice your log data in many different ways.

  • Use our advanced NRQL functions like capture() or aparse() to parse data at search time (see the examples after this list).

  • Install the Logs analysis and/or APM logs monitoring quickstart dashboards to quickly gain more insight into your log data. To add these dashboards go to one.newrelic.com > Add Data > Logging > Dashboards.
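For instance (the attribute values are hypothetical), the first query below charts log volume over time by logtype, and the second uses capture() to pull an order ID out of the message at search time:

    FROM Log SELECT count(*) FACET logtype TIMESERIES SINCE 1 day ago

    FROM Log SELECT capture(message, r'order=(?P<order_id>\d+)') AS order_id WHERE message LIKE '%order=%' SINCE 1 hour ago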

What's next?

See Get started with log management.
