
Optimize your data ingest

Data ingest governance is the practice of getting optimal value from the telemetry data collected by an organization. This is especially important for a complex organization that has numerous business units and working groups. This is the third part of a four-part guide to optimizing your New Relic data ingest, and is part of our series on observability maturity.

Before you start

This guide contains detailed recommendations for optimizing your data ingest. Before using this guide, we recommend you review our general data management docs.

Desired outcome

Maximize the observability value of your data by optimizing data ingest. Reduce non-essential ingest data so you can stay within your budget.

Process

The process includes the following steps, each explained in more detail in the sections below.

Prioritize your observability objectives

One of the most important parts of the data ingest governance framework is to align collected telemetry with observability value drivers. You need to ensure that you understand what the primary observability objective is when you configure new telemetry.

When you introduce new telemetry, you want to understand what it delivers to your overall observability solution. Your new data might overlap with data you already collect. If you're considering telemetry that you can't align to any of the key objectives, reconsider introducing that data.

Objectives include:

  • Meeting an internal SLA
  • Meeting an external SLA
  • Supporting feature innovation (A/B performance and adoption testing)
  • Monitoring customer experience
  • Holding vendors and internal service providers to their SLAs
  • Monitoring business process health
  • Meeting other compliance requirements

Alignment to these objectives is what allows you to make flexible and intuitive decisions about prioritizing one set of data over another, and helps guide teams on where to start when instrumenting new platforms and services.

Develop an optimization plan

For this section, we'll make two core assumptions:

Use the following examples to help you visualize how you would assess your own telemetry ingest and make the sometimes hard decisions that are needed to get within budget. Although each of these examples tries to focus on a value driver, most instrumentation serves more than one value driver. This is the hardest part of data ingest governance.

Tip

We recommend tracking the plan in a task-management tool you're familiar with. This helps you manage the optimization plan and understand the effect each optimization task has. You can use this Data optimization plan template.

Use data reduction techniques to execute your plan

At this stage you've given thought to all of the kinds of telemetry in your account(s) and how it relates to your value drivers. This section will provide detailed technical instructions and examples on how to reduce a variety of telemetry types.

There are two main ways to approach data reduction:

  • Through configuration
  • Through using drop rules

Optimization through configuration

This section covers various ways to configure New Relic features to optimize data reporting and ingest.
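As one illustration, the infrastructure agent's sampling intervals can be raised in `newrelic-infra.yml` so that the agent reports event samples less frequently. The option names below come from the infrastructure agent's configuration settings; exact defaults vary by agent version, so treat this as a sketch rather than recommended values:

```yaml
# newrelic-infra.yml — sample intervals are in seconds.
# Raising an interval reduces how many samples the agent reports,
# lowering ingest at the cost of coarser-grained data.
metrics_system_sample_rate: 60
metrics_storage_sample_rate: 120
metrics_network_sample_rate: 120
metrics_process_sample_rate: 120
# Per the agent docs, setting a sampler to -1 disables it entirely:
# metrics_process_sample_rate: -1
```

After changing these values, restart the agent and compare ingest before and after to confirm the reduction matches your expectations.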

Optimization with drop rules

A simple rule for understanding what you can do with drop rules is: If you can query it you can drop it.
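For example, suppose you want to drop verbose debug logs. Assuming a hypothetical `level` attribute on your log events, you can first query the data to confirm exactly what a drop rule would discard:

```sql
-- Count the log records that would match the drop condition.
-- The 'level' attribute is an assumption; substitute your own.
SELECT count(*) FROM Log WHERE level = 'DEBUG' SINCE 1 day ago
```

The same `WHERE` clause can then be reused verbatim as the condition in the drop rule's NRQL, which is what makes the query-it-first workflow reliable.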

Drop filter rules help you accomplish several important goals:

  • Lower costs by storing only the logs relevant to your account.
  • Protect privacy and security by removing personally identifiable information (PII).
  • Reduce noise by removing irrelevant events and attributes.

A note of caution: when creating drop rules, you're responsible for ensuring that the rules accurately identify and discard only the data that meets the conditions you've established. You're also responsible for monitoring the rule, as well as the data you disclose to New Relic. Always test and retest your queries and, after the drop rule is installed, make sure it works as intended. Creating a dashboard to monitor your data pre- and post-drop will help.
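For such a dashboard, NRQL's `bytecountestimate()` function can approximate how much ingest a given slice of data contributes. A sketch, assuming a hypothetical `service_name` attribute to facet on:

```sql
-- Approximate GB of log ingest per service over the last day.
-- New Relic's examples commonly use 10e8 as the bytes-to-GB divisor.
-- Run this before and after installing a drop rule to verify its effect.
SELECT bytecountestimate()/10e8 AS 'GB ingested'
FROM Log FACET service_name SINCE 1 day ago TIMESERIES
```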

Here's some guidance on using drop rules to optimize data ingest for specific tools.
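Drop rules themselves are created through a NerdGraph mutation. The sketch below shows the general shape with a placeholder account ID and a hypothetical `appName` condition; check the NerdGraph API explorer for the current schema before relying on it:

```graphql
mutation {
  nrqlDropRulesCreate(
    accountId: 1234567
    rules: [
      {
        action: DROP_DATA
        nrql: "SELECT * FROM Log WHERE appName = 'my-test-app'"
        description: "Drop logs from a hypothetical test application"
      }
    ]
  ) {
    successes { id }
    failures { error { reason description } }
  }
}
```

`DROP_DATA` discards entire matching records; the alternative `DROP_ATTRIBUTES` action, paired with a `SELECT attr1, attr2 FROM ...` query, removes only the listed attributes while keeping the rest of the event.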

Exercise

Answering the following questions will help you develop confidence in your ability to develop and execute optimization plans. You may want to use the Data ingest baseline and Data ingest entity breakdown dashboards from the Baselining section. Install those dashboards as described and see how many of these questions you can answer.

Questions
Identify three drop rules that could reduce this organization's ingest by at least 5% per month. Include the NerdGraph syntax for each drop rule in your response.
Suggest three instrumentation configuration changes you could implement to reduce this organization's ingest by at least 5% per month. Include the configuration snippets in your response.
What are three things you could do to reduce data volume from K8s monitoring? How much data reduction could you achieve? What are the potential trade-offs of this reduction? (For example, would you lose any substantial observability?)
1. Use the data ingest governance baseline dashboard to identify an account that's sending a large amount of log data to New Relic.
2. Find and select that account from the account switcher.
3. Navigate to the logs page of the account and select patterns from the left-side menu.
4. Review the log patterns shown and give some examples of the low-value log patterns. What makes them low value? How much total reduction could you achieve by dropping these logs?
Based on your overall analysis of this organization, what telemetry is underutilized?

Conclusion

The process section showed us how to associate our telemetry with specific observability value drivers or objectives, which can make the hard decisions involved in optimizing our account ingest somewhat easier. We learned how to describe a high-level optimization plan that reduces ingest while protecting our objectives. Finally, we were introduced to a rich set of recipes for configuration- and drop-rule-based ingest optimizations.
