For New Relic organizations on our usage-based pricing model, several factors can impact your costs. This doc will help you estimate your New Relic data ingest costs.
The amount of ingested data varies from one New Relic organization to the next, based on what's monitored, which features are used, the behavior of the monitored applications, and more. Understanding the factors that affect data ingest helps you predict where in the approximate range your ingest will fall. For example, our logs-in-context feature adds metadata to each log event, and the percentage increase depends on the size of the log lines (the smaller the log event, the higher the percentage increase). Given this variability, the best way to estimate your costs is to set up a test New Relic account and extrapolate your usage from it. Your actual usage is shown in the data management UI.
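The "smaller log events = higher % increase" effect follows from the fixed-size metadata that logs-in-context attaches to each event. This sketch illustrates the arithmetic; the 200-byte overhead is a hypothetical figure for illustration, not an actual New Relic measurement.

```python
# Why logs-in-context overhead is a larger percentage for smaller log
# lines: the linking metadata added per event is roughly fixed in size,
# so it dominates short messages.

OVERHEAD_BYTES = 200  # hypothetical fixed metadata per log event

def percent_increase(raw_line_bytes: int) -> float:
    """Percentage growth of a log event after adding fixed metadata."""
    return OVERHEAD_BYTES / raw_line_bytes * 100

for size in (100, 500, 2000):
    print(f"{size}-byte line: +{percent_increase(size):.0f}%")
```

A 100-byte line triples in size, while a 2000-byte line grows by only a tenth, even though both gain the same absolute number of bytes.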
Here are some tips for extrapolating usage from a New Relic account:
If you're just signing up for New Relic, consider creating a test installation with an environment similar to what you'll need moving forward. Then use the baseline ingest from the trial to extrapolate what your full environment would require. To do this, create a new free account, and use our instant observability options to get started reporting data. Note that APM, infrastructure monitoring, and logs tend to produce the bulk of most customers' data, but that can vary.
As noted above, data ingest and the associated costs can vary greatly between accounts, depending on your architecture and New Relic setup, which is why we recommend creating a New Relic account to extrapolate usage. If you'd rather not create a test account, you can use our cost estimator spreadsheet, which generates a rough cost estimate. The spreadsheet offers directional guidance based on simplified inputs and built-in assumptions (which may not hold for your use case); it's not a guarantee of your actual costs. To get started, make a copy of this Google spreadsheet.
Note that this spreadsheet lets you choose one of two data options: Data Plus at US$0.50 per GB, or the original data option at US$0.30 per GB. If your organization has a different data cost, adjust the cost estimate to match that difference.
The sections below explain how to fill out the spreadsheet. Note that the spreadsheet provides only an estimate: it's not a binding billing proposal.
To arrive at the ingest rates used in the estimator, we analyzed about 10,000 existing New Relic organizations of various sizes. Note that you get 100 GB of data ingest per month for free.
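The basic arithmetic the spreadsheet performs can be sketched as follows: billable gigabytes are whatever you ingest beyond the free 100 GB per month, multiplied by your per-GB rate ($0.30 for the original data option, $0.50 for Data Plus). This is a simplified sketch of the calculation, not the spreadsheet's exact formula.

```python
def estimated_monthly_cost(ingest_gb: float, rate_per_gb: float,
                           free_gb: float = 100.0) -> float:
    """Estimated monthly ingest cost: GB beyond the free tier
    times the per-GB rate ($0.30 original, $0.50 Data Plus)."""
    billable = max(ingest_gb - free_gb, 0.0)
    return billable * rate_per_gb

# 400 GB/month on the original data option:
print(round(estimated_monthly_cost(400, 0.30), 2))  # 90.0
# Under 100 GB/month, ingest is free:
print(estimated_monthly_cost(80, 0.30))  # 0.0
```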
The following sections explain how to use the various parts of the spreadsheet.
Ingest rates are measured per agent, not per host. You might have multiple agents monitoring a single host.
In the APM data volume section of the spreadsheet, you estimate whether you have low, medium, or high ingest rates from APM agents. We've built an average of the data volume for all APM agent types into the spreadsheet calculator. When filling out this section, consider these questions:
How many APM agents will you deploy?
What types of applications will you monitor? Understanding how the application is used and the application complexity is important. For example, e-commerce apps will have much higher throughput than an internal application.
Will you use features that contribute to higher ingest rates? See the criteria questions that follow for more detail.
Criteria for calculating ingest rates per APM agent
In general, use higher ingest rates for applications that are integration/business tiers, are large business-to-consumer (B2C) sites, or have significant custom instrumentation or metrics. In other words, select High in these cases:
For apps in production environments where you expect high throughput and a high number of errors.
For complex app architectures (for example, a single front-end request spawns multiple back-end requests).
If you have a high number of key transactions.
If you have custom instrumentation and APM metrics.
For transactions with a lot of attributes.
Add APM agent ingest to the spreadsheet:
Add the number of APM agents that you will monitor.
Approximate the amount of ingest you'll need for your agents and select one of the options. In general, if you're on the Standard pricing edition (the edition new organizations start at), you can probably select Low:
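The APM section of the spreadsheet boils down to multiplying your agent count by a per-agent ingest rate for the tier you select. The tier values below are hypothetical placeholders for illustration; the spreadsheet's actual Low/Medium/High figures may differ.

```python
# Hypothetical per-agent monthly ingest rates in GB; the spreadsheet's
# built-in Low/Medium/High averages may differ from these.
APM_GB_PER_AGENT = {"low": 20, "medium": 60, "high": 150}

def apm_ingest_gb(agent_count: int, tier: str) -> int:
    """Estimated monthly APM ingest: agents times per-agent rate."""
    return agent_count * APM_GB_PER_AGENT[tier]

# 10 agents at the Low tier:
print(apm_ingest_gb(10, "low"))  # 200
```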
Sizing your infrastructure monitoring data ingest depends on the number of agents and integrations you have, and how much data they're each reporting.
When calculating the volume of your infrastructure ingest, take into account:
How many infrastructure agents do you think you'll need?
Which integrations contribute to higher ingest rates? The following are some approximate sizes. Also take the size of your environments into account; if they're very large, these rates might not be accurate.
Add infrastructure agent ingest to the spreadsheet:
At step 3 in the spreadsheet, input your estimated number of infrastructure agents. To determine this, decide how many hosts you'll run infrastructure agents on.
At step 4, assign a size for the volume of your infrastructure:
Start with your base ingest rate as Low if you'll have only a few on-host integrations.
Adjust to Medium or High depending on how many integrations you run and how much data they report. Consider whether you have cloud integrations with large footprints, a large number of database on-host integrations, or multiple or large Kubernetes clusters. For example:
If running two or more low or medium impact integrations such as cloud or on-host integrations, choose Medium ingest rate.
If running all three types of integrations (on-host, cloud, containers) or observing very large Kubernetes environments, choose High for your ingest rates.
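The Low/Medium/High selection described above can be sketched as a simple decision function. The thresholds mirror the guidance in this doc but are a rough heuristic, not an official New Relic formula.

```python
# A sketch of the infrastructure ingest tier selection described above.
# Thresholds follow this doc's guidance and are approximate.

ALL_TYPES = {"on-host", "cloud", "containers"}

def infra_tier(num_integrations: int, integration_types: set,
               large_kubernetes: bool) -> str:
    """integration_types is a subset of {"on-host", "cloud", "containers"}."""
    if large_kubernetes or integration_types >= ALL_TYPES:
        return "High"
    if num_integrations >= 2:
        return "Medium"
    return "Low"

print(infra_tier(1, {"on-host"}, False))           # Low
print(infra_tier(3, {"on-host", "cloud"}, False))  # Medium
print(infra_tier(0, set(), True))                  # High
```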
For this section, you add an estimated amount of ingest in gigabytes.
Because each vendor measures log data differently, there is no easy way to establish a baseline estimate of log volume in New Relic from an existing implementation. The best way to estimate your log volume is to send a sample amount of log data and extrapolate.
Log events are stored and metered as JSON objects, which are always larger than the original raw log on disk.
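To see why a stored log event is larger than the raw line, compare a raw log line with a JSON event built from it. The envelope attributes here (`hostname`, `service.name`) are hypothetical examples; actual New Relic log events carry their own set of attributes.

```python
import json

# A raw log line on disk versus the JSON log event it becomes.
raw = "2024-05-01T12:00:00Z ERROR payment failed for order 123"

# Hypothetical event envelope; real attribute names and counts vary.
event = {
    "timestamp": "2024-05-01T12:00:00Z",
    "level": "ERROR",
    "message": "payment failed for order 123",
    "hostname": "web-01",
    "service.name": "checkout",
}

stored = json.dumps(event)
print(f"raw: {len(raw)} bytes, stored JSON: {len(stored)} bytes")
```

The JSON keys, quoting, and extra attributes mean the metered size always exceeds the raw line size.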
You can adjust the data retention settings for each data source. To learn about retention and the baselines, see Data retention.
Retention considerations:
For each additional month (30 days) of retention on top of your existing retention (the default retention periods for our original data option, or 90 days for the Data Plus option), the cost is $0.05 per GB ingested per month.
Retention is added evenly across all namespaces up to a maximum of 395 days. Retention cannot be extended for just one namespace (for example, just logs or custom events). The increased rate is applied to all ingested data.
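The extended-retention charge described above is straightforward to compute: each additional 30 days of retention costs $0.05 per GB ingested per month. A minimal sketch of that arithmetic:

```python
def retention_cost(ingest_gb_per_month: float, extra_months: int) -> float:
    """Extended-retention charge: $0.05 per GB ingested per month,
    for each additional 30 days of retention beyond the default."""
    return ingest_gb_per_month * 0.05 * extra_months

# 500 GB/month ingest with 2 extra months of retention:
print(round(retention_cost(500, 2), 2))  # 50.0
```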
In section 7 of the spreadsheet, select the additional months of retention that you want.
View the calculated estimate
When you complete the extended retention section, the total estimated price is displayed in the Calculations section of the spreadsheet.
Other potential data ingest costs
Because this billing calculation was designed for newer customers, it uses the implementations and costs that our newer customers often have. For example, we haven't provided cost estimates for browser monitoring, mobile monitoring, network performance monitoring, or other services. (Note that neither our basic alerting features nor our synthetic monitors contribute to data ingest.) For many organizations, these other costs often represent only about 5% of the costs calculated in the spreadsheet, but high levels of data ingest by other tools can make that proportion higher.
Other billing factors
Data ingest is one billing factor. To learn about others, see Usage-based pricing.