Google Cloud Observability pricing
The pricing for Google Cloud Observability lets you control your usage and spending. Google Cloud Observability products are priced by data volume or usage. You can use the free data usage allotments to get started with no upfront fees or commitments.
The following tables summarize the pricing information for Cloud Logging, Cloud Monitoring, and Cloud Trace.
Cloud Logging pricing summary
Feature | Price¹ | Free allotment per month | Effective date |
---|---|---|---|
Logging storage*, except for vended network logs | $0.50/GiB; one-time charge for streaming logs into log bucket storage for indexing, querying, and analysis; includes up to 30 days of storage in log buckets. No additional charges for querying and analyzing log data. | First 50 GiB/project/month | July 1, 2018 |
Vended network logs storage† | $0.25/GiB; one-time charge for streaming network telemetry logs into log bucket storage for indexing, querying, and analysis; includes up to 30 days of storage in log buckets. No additional charges for querying and analyzing log data. | Not applicable | October 1, 2024 |
Logging retention‡ | $0.01 per GiB per month for logs retained more than 30 days; billed monthly according to retention. | Logs retained for the default retention period don't incur a retention cost. | January 1, 2022 |
Log Router♣ | No additional charge | Not applicable | Not applicable |
Log Analytics♥ | No additional charge | Not applicable | Not applicable |
* There are no storage charges for logs stored in the _Required log bucket.
† Vended logs are Google Cloud networking logs that are generated by Google Cloud services when the generation of these logs is enabled. Vended logs include VPC Flow Logs, Firewall Rules Logging, and Cloud NAT logs. These logs are also subject to Network telemetry pricing. For more information, see Vended logs.
‡ There are no retention charges for logs stored in the _Required log bucket, which has a fixed retention period of 400 days.
♣ Log routing is defined as forwarding logs received through the Cloud Logging API to a supported destination. Destination charges might apply to routed logs.
♥ There is no charge to upgrade a log bucket to use Log Analytics or to issue SQL queries from the Log Analytics page.
Note: The pricing language for Cloud Logging changed on July 19, 2023; however, the free allotments and the rates haven't changed. Your bill might refer to the old pricing language.
Cloud Monitoring pricing summary
Feature | Price | Free allotment per month | Effective date |
---|---|---|---|
All Monitoring data except data ingested by using Managed Service for Prometheus | $0.2580/MiB¹: first 150-100,000 MiB; $0.1510/MiB: next 100,000-250,000 MiB; $0.0610/MiB: >250,000 MiB | All non-chargeable Google Cloud metrics; first 150 MiB per billing account for metrics charged by bytes ingested | July 1, 2018 |
Metrics ingested by using Google Cloud Managed Service for Prometheus, including GKE control plane metrics | $0.06/million samples†: first 0-50 billion samples ingested#; $0.048/million samples: next 50-250 billion samples ingested; $0.036/million samples: next 250-500 billion samples ingested; $0.024/million samples: >500 billion samples ingested | Not applicable | August 8, 2023 |
Monitoring API calls | $0.01/1,000 Read API calls (Write API calls are free) | First 1 million Read API calls included per billing account | July 1, 2018 |
Execution of Monitoring uptime checks | $0.30/1,000 executions‡ | 1 million executions per Google Cloud project | October 1, 2022 |
Execution of Monitoring Synthetic Monitors | $1.20/1,000 executions* | 100 executions per billing account | November 1, 2023 |
Alerting policies | $1.50 per month for each condition in an alerting policy; $0.35 per 1,000,000 time series returned by the query of a metric alerting policy condition♣ | Not applicable | April 2026 |
# Samples are counted per billing account.
‡ Executions are charged to the billing account in which they are defined. For more information, see Pricing for uptime-check execution.
* Executions are charged to the billing account in which they are defined. For each execution, you might incur additional charges from other Google Cloud services, including services such as Cloud Run functions, Cloud Storage, and Cloud Logging. For information about these additional charges, see the pricing document for the respective Google Cloud service.
♣ For more information, see Pricing for alerting.
Cloud Trace pricing summary
Feature | Price | Free allotment per month | Effective date |
---|---|---|---|
Trace ingestion | $0.20/million spans | First 2.5 million spans per billing account | November 1, 2018 |
For detailed information about the costs for Google Cloud Observability products, see the corresponding sections later on this page.
For information about GKE Enterprise pricing, see GKE Enterprise.
Viewing your usage
To view your current usage, go to the Cloud Billing Reports page of the Google Cloud console.
Based on your current usage data, you can estimate your bills by using the pricing calculator.
For example, consider a configuration where every Compute Engine VM instance generates 10 GiB of chargeable logs and 20 MiB of chargeable metrics per month. By using the pricing calculator you can determine the expected Cloud Monitoring and Cloud Logging costs:
 | 1 VM | 10 VMs | 100 VMs | 1,000 VMs |
---|---|---|---|---|
Metrics cost per month | $0.00 | $12.90 | $477.30 | $5,121.30 |
Logging cost per month | $0.00 | $25.00 | $475.00 | $4,975.00 |
Total cost | $0.00 | $37.90 | $952.30 | $10,096.30 |
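For quick estimates outside the pricing calculator, the arithmetic behind this table is simple enough to script. The following Python sketch reproduces it under the same assumptions (10 GiB of chargeable logs and 20 MiB of chargeable metrics per VM per month, with metric volume staying in the first pricing tier); the rates and free allotments are taken from the summary tables earlier on this page.

```python
# Rough monthly cost estimate for a VM fleet, using the rates and free
# allotments from the pricing summary tables on this page. Assumes metric
# volume stays within the first chargeable tier ($0.258/MiB).
LOG_RATE_PER_GIB = 0.50      # $/GiB of log storage beyond the free allotment
LOG_FREE_GIB = 50            # free GiB per project per month
METRIC_RATE_PER_MIB = 0.258  # $/MiB of metric data beyond the free allotment
METRIC_FREE_MIB = 150        # free MiB per billing account per month

def monthly_costs(num_vms, gib_logs_per_vm=10, mib_metrics_per_vm=20):
    """Return (metrics_cost, logging_cost) in dollars for the fleet."""
    metrics_mib = num_vms * mib_metrics_per_vm
    logs_gib = num_vms * gib_logs_per_vm
    metrics_cost = max(0, metrics_mib - METRIC_FREE_MIB) * METRIC_RATE_PER_MIB
    logging_cost = max(0, logs_gib - LOG_FREE_GIB) * LOG_RATE_PER_GIB
    return metrics_cost, logging_cost

for vms in (1, 10, 100, 1000):
    metrics, logging = monthly_costs(vms)
    print(f"{vms:>4} VMs: metrics ${metrics:,.2f}, logging ${logging:,.2f}, "
          f"total ${metrics + logging:,.2f}")
```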
Configuring a billing alert
To be notified if your billable or forecasted charges exceed a budget, create an alert by using the Budgets and alerts page of the Google Cloud console:
1. In the Google Cloud console, go to the Billing page. You can also find this page by using the search bar.
2. If you have more than one Cloud Billing account, then do one of the following:
   - To manage Cloud Billing for the current project, select Go to linked billing account.
   - To locate a different Cloud Billing account, select Manage billing accounts and choose the account for which you'd like to set a budget.
3. In the Billing navigation menu, select Budgets & alerts.
4. Click Create budget.
5. Complete the budget dialog. In this dialog, you select Google Cloud projects and products, and then you create a budget for that combination. By default, you are notified when you reach 50%, 90%, and 100% of the budget. For complete documentation, see Set budgets and budget alerts.
Cloud Logging
Log buckets are the Logging containers that store logs data.
Logging charges for the volume of log data that is stored
in the _Default
log bucket and in user-defined log buckets.
Pricing applies to non-vended network logs when the volume exceeds the
free monthly allotment,
and to vended network logs.
For the _Default
log bucket and for user-defined log buckets,
Logging also charges when logs are
retained for more than the default retention period, which is
30 days.
There are no additional charges by Logging to route logs,
to use the Cloud Logging API, to configure
log scopes,
or for logs stored in the _Required
log bucket,
which has a fixed retention period of 400 days.
This section provides information about the following topics:
- Cloud Logging storage model
- Storage pricing
- Retention pricing
- Vended network logs
- Reduce your logs storage
- Logs-based metrics pricing
- Create alerting policy on monthly log bytes ingested
For a summary of pricing information, see Cloud Logging pricing summary.
For limits that apply to your use of Logging, including data retention periods, see Quotas and limits.
To view and understand your Cloud Logging usage data, see Estimating your bills.
Cloud Logging storage model
For each Google Cloud project, Logging automatically creates two log buckets: _Required and _Default. For these two buckets, Logging automatically creates log sinks named _Required and _Default that route logs to the correspondingly named log buckets. You can't disable or modify the _Required sink. You can disable or otherwise modify the _Default sink to prevent the _Default bucket from storing new logs.
You can create user-defined log buckets in any of your Google Cloud projects. You can also configure sinks to route any combination of logs, even across Google Cloud projects in your Google Cloud organization, to these log buckets.
For the _Default
log bucket and for user-defined log buckets, you can
configure a custom retention period.
You can upgrade your log buckets to use Log Analytics. There is no charge to upgrade a log bucket to use Log Analytics.
For more information on Cloud Logging buckets and sinks, see Routing and storage overview.
Storage pricing
Logging doesn't charge for logs stored in the _Required
bucket.
You can't delete the _Required
bucket or modify the _Required
sink.
The _Required
bucket stores the following logs:
- Admin Activity audit logs
- System Event audit logs
- Google Workspace Admin Audit logs
- Enterprise Groups Audit logs
- Login Audit logs
- Access Transparency logs. For information about enabling Access Transparency logs, see the Access Transparency logs documentation.
Logging charges for the pre-indexed volume of logs data that is
stored in the _Default
log bucket and in user-defined log buckets,
when the total volume exceeds the free monthly allotment.
Every write of a log entry to the _Default
log bucket or to a
user-defined log bucket counts toward your storage allotment.
For example, if you have sinks that route a log entry to
three log buckets, then that log entry is stored three times.
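As a minimal sketch of that multiplication (the bucket names besides _Default are hypothetical):

```python
# A log entry's size counts toward the storage allotment once per log
# bucket that receives it. The extra bucket names here are hypothetical.
entry_size_bytes = 1_024
destination_buckets = ["_Default", "audit-bucket", "security-bucket"]

stored_bytes = entry_size_bytes * len(destination_buckets)
print(stored_bytes)  # 3072: the entry is stored, and counted, three times
```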
Retention pricing
The following table lists the data retention periods for logs stored in log buckets:
Bucket | Default retention period | Custom retention |
---|---|---|
_Required | 400 days | Not configurable |
_Default | 30 days | Configurable |
User-defined | 30 days | Configurable |
Logging charges retention costs when the logs are retained longer
than the default retention period. You can't configure the retention period
for the _Required
log bucket.
There are no retention costs when logs are stored only for
the default retention period of the log bucket.
If you shorten the retention period of a log bucket, then there is a seven-day grace period in which expired logs aren't deleted. You can't query or view expired logs. However, in those seven days, you can restore full access by extending the retention period of the log bucket. Logs stored during the grace period count toward your retention costs.
If you route a log entry to multiple log buckets, then you can be charged
storage and retention costs multiple times. For example, suppose you route
a log entry to the _Default
log bucket and to a user-defined log bucket.
Also, assume that you configure a custom retention period for both buckets
that is longer than 30 days. For this configuration,
you receive two storage charges and two retention charges.
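The retention charge in that example can be sketched as follows; this assumes the $0.01/GiB rate is billed for each month that logs are kept beyond the bucket's default 30-day period.

```python
# Retention pricing sketch: $0.01 per GiB per month applies only to logs
# retained beyond the bucket's default 30-day retention period.
RETENTION_RATE_PER_GIB = 0.01

def monthly_retention_cost(stored_gib, retention_days, default_days=30):
    return stored_gib * RETENTION_RATE_PER_GIB if retention_days > default_days else 0.0

# 100 GiB routed to both the _Default bucket and a user-defined bucket,
# each configured with 90-day retention, incurs the charge twice.
print(2 * monthly_retention_cost(100, retention_days=90))  # 2.0 dollars/month
```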
Vended network logs
Vended network logs are available only when you configure log generation. The services that generate vended network logs charge for log generation. If you store these logs in a log bucket or route them to another supported destination, then you are also subject to charges from Cloud Logging or the destination. For information about log-generation costs, see Network telemetry pricing.
To learn how to enable vended network logs, see Configure VPC Flow Logs, Use Firewall Rules Logging, and Cloud NAT: Logs and metrics.
To find your vended network logs, in the Logs Explorer filter by the following log names:
projects/PROJECT_ID/logs/compute.googleapis.com%2Fvpc_flows
projects/PROJECT_ID/logs/compute.googleapis.com%2Ffirewall
projects/PROJECT_ID/logs/compute.googleapis.com%2Fnat_flows
projects/PROJECT_ID/logs/networkmanagement.googleapis.com%2Fvpc_flows
Reduce your logs storage
To reduce your Cloud Logging storage costs, configure exclusion filters on your log sinks to exclude certain logs from being routed. Exclusion filters can remove all log entries that match the filter, or they can remove only a percentage of the logs. When a log entry matches an exclusion filter of a sink, the sink doesn't route the log entry to the destination. Excluded log entries don't count against your storage allotment. For instructions on setting exclusion filters, see Logs exclusions.
Another way to reduce your Cloud Logging storage costs is to route logs out of Cloud Logging to a supported destination. Cloud Logging doesn't charge to route logs to supported destinations. However, you might be charged when logs are received by a destination.
For information about routing logs out of Cloud Logging, see Route logs to supported destinations.
Logs-based metrics pricing
System-defined logs-based metrics are provided for all Google Cloud projects and are non-chargeable.
User-defined logs-based metrics are a class of Cloud Monitoring custom metrics and are chargeable. For pricing details, see Chargeable metrics.
For more information, see Overview of logs-based metrics.
Create alerting policy on monthly log bytes ingested
To create an alerting policy that triggers when the number of log bytes written to your log buckets exceeds your user-defined limit for Cloud Logging, use the following settings.
New condition field | Value |
---|---|
Resource and Metric | In the Resources menu, select Global. In the Metric categories menu, select Logs-based metric. In the Metrics menu, select Monthly log bytes ingested. |
Filter | None |
Across time series: Time series aggregation | sum |
Rolling window | 60 m |
Rolling window function | max |

Configure alert trigger field | Value |
---|---|
Condition type | Threshold |
Alert trigger | Any time series violates |
Threshold position | Above threshold |
Threshold value | You determine the acceptable value. |
Retest window | Minimum acceptable value is 30 minutes. |
Cloud Monitoring
Monitoring charges for the following:

- Metrics measured by bytes ingested, when the ingested metric data exceeds the free monthly metric allotment. Non-chargeable metrics don't count towards the allotment limit.
- Metrics measured by number of samples ingested.
- Cloud Monitoring API read calls that exceed the free monthly API allotment. Monitoring API write calls don't count towards the allotment limit.
- Execution of uptime checks.
- Execution of synthetic monitors.
- Alerting policy conditions, measured by the number of active conditions per month.
- Time series returned by the query of an alerting policy condition.

In Monitoring, ingestion refers to the process of writing time series to Monitoring. Each time series includes some number of data points; those data points are the basis for ingestion charges. For pricing information, see Cloud Monitoring pricing.
This section provides the following information:
- Definitions of chargeable and non-chargeable metrics.
- Descriptions of byte- and sample-based ingestion strategies.
- Pricing examples for metrics charged by bytes ingested.
- Pricing examples for metrics charged by samples ingested.
- Pricing examples for execution of uptime checks (Effective date: October 1, 2022).
- Pricing examples for execution of synthetic monitors (Effective date: November 1, 2023).
- Descriptions and examples of pricing for alerting (Effective date: April 2026).
For the current pricing information, see Cloud Monitoring Pricing.
For limits that apply to your use of Monitoring, see Quotas and limits.
To view your current usage, do one of the following:

- In the Google Cloud console, go to the Billing page. You can also find this page by using the search bar.
- In the Google Cloud console, go to the Settings page. If you use the search bar to find this page, then select the result whose subheading is Monitoring.
Based on your current usage data, you can estimate your bills.
Non-chargeable metrics
Metric data from Google Cloud, GKE Enterprise, and Knative isn't chargeable. Non-chargeable (free) metrics include the following:
- Google Cloud metrics. For additional information, see Footnote 2.
- GKE Enterprise metrics. For additional information, see Footnote 2.
- Istio metrics
- Knative metrics
- Google Kubernetes Engine system metrics
- agent.googleapis.com/agent/ metrics
Chargeable metrics
All metric data, except for those metrics listed in the section titled Non-chargeable metrics, is chargeable. Most metric ingestion is charged by the number of bytes, but some is charged by the number of samples; these pricing models are described in the following sections.
The following factors contribute to ingestion costs:
- The type of data points (scalar values or distribution values) collected by the metrics. For information about the data type associated with a specific metric type, see the list of metrics. For information about scalar and distribution data types, see Value types.
- The number of data points written to time series. This value depends on the frequency with which the data is sampled and on the cardinality of your data. The cardinality determines how many time series are generated for a combination of metric and monitored-resource types; for more information, see Cardinality.

The values for the metric and resource labels that are part of your time series don't contribute to your charges.
Metrics charged by bytes ingested
The following metrics are chargeable and priced by the number of bytes ingested:
- Agent metrics under agent.googleapis.com, except the agent.googleapis.com/agent/ group. As of August 6, 2021, the agent.googleapis.com/processes/ metrics are charged at 5% of the volume rate for other chargeable metrics. For example, ingesting 100 MiB of process metrics costs the same as ingesting 5 MiB of other chargeable metrics.³
- Metrics from third-party integrations with the Ops Agent. These metrics are ingested into Cloud Monitoring with identifiers of the form workload.googleapis.com/APPLICATION.METRIC; for example, the metric type workload.googleapis.com/nginx.requests falls into this category.
- OpenTelemetry Protocol (OTLP) metrics ingested into Cloud Monitoring as workload.googleapis.com metrics by the Ops Agent. This is a configuration option; for more information, see Ingestion formats for OTLP metrics.
- Custom metrics, including but not limited to metrics sent by using the Cloud Monitoring API or language-specific client libraries, OpenCensus, and OpenTelemetry.
For pricing purposes, the ingestion volume is computed as follows:
- For a scalar data type: 8 bytes for each data point written to a time series. User-defined logs-based counter metrics fall into this category.
- For a distribution data type: 80 bytes for each data point written to a time series.
For information about data points in time series, see Time series: data from a monitored resource.
Metrics charged by samples ingested
The following metrics are chargeable and priced by the number of samples ingested:
- Metrics from Google Cloud Managed Service for Prometheus: prometheus.googleapis.com metrics.
For pricing purposes, the sample count is computed as follows:
- For a scalar data type: 1 for each point written to a time series.
- For a distribution data type: 2 for each point written to a time series, plus 1 for each histogram bucket that has a non-zero count.
For information about data points in time series, see Time series: data from a monitored resource.
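A minimal sketch of these counting rules, assuming the histogram is represented simply as a list of per-bucket counts:

```python
# Sample counting for Managed Service for Prometheus pricing:
# a scalar point counts as 1 sample; a distribution point counts as
# 2 samples plus 1 for each histogram bucket with a non-zero count.
def samples_for_scalar_point():
    return 1

def samples_for_distribution_point(bucket_counts):
    return 2 + sum(1 for count in bucket_counts if count > 0)

print(samples_for_scalar_point())                       # 1
print(samples_for_distribution_point([4, 0, 7, 0, 2]))  # 5 (2 + 3 non-zero buckets)
```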
Alerting on metrics ingested
It isn't possible to create an alert based on the monthly metrics ingested. However, you can create an alert for your Cloud Monitoring costs. For information, see Configuring a billing alert.
Pricing examples based on bytes ingested
The following examples illustrate how to estimate the costs of collecting metric data for metrics charged by bytes ingested. These examples are intended to illustrate calculations; for comprehensive estimates, use the Pricing Calculator. In that tool, use the Google Cloud Observability product section to enter your metric, logging, and trace data.
The basic scenario is this: You have some number of monitored resources, such as Compute Engine, Google Kubernetes Engine, or App Engine, that are writing data from some number of metrics each month.
The variables across the scenarios include:
- The number of resources.
- The number of metrics.
- Whether the metrics are Google Cloud metrics or not.
- The rate at which the metric data is written.
The examples in this section are for Monitoring pricing as of July 2020.
Common background
In the following pricing examples, each metric data point ingested is assumed to be of type double, int64, or bool; these count as 8 bytes for pricing purposes. There are roughly 730 hours (365 days / 12 months * 24 hours) in a month, or 43,800 minutes.
For one metric writing data at the rate of 1 data point/minute for one month:
- Total data points is: 43,800
- Total volume ingested is:
- 350,400 bytes (43,800 data points * 8 bytes)
- 0.33416748 MiB (350,400 bytes / 1,048,576 bytes/MiB)
For one metric writing data at the rate of 1 data point/hour for one month:
- Total data points is: 730
- Total volume ingested:
- 5,840 bytes (730 data points * 8 bytes)
- 0.005569458 MiB (5,840 bytes / 1,048,576 bytes/MiB)
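These per-metric volumes can be computed directly; the following sketch reproduces the numbers above.

```python
# Monthly ingestion volume for one metric charged by bytes ingested.
# Scalar data points (double, int64, bool) count as 8 bytes each.
BYTES_PER_SCALAR_POINT = 8
MINUTES_PER_MONTH = 43_800  # roughly 730 hours
HOURS_PER_MONTH = 730

def mib_per_month(points_per_month, bytes_per_point=BYTES_PER_SCALAR_POINT):
    return points_per_month * bytes_per_point / 2**20  # 1 MiB = 1,048,576 bytes

print(mib_per_month(MINUTES_PER_MONTH))  # 0.33416748... (1 point/minute)
print(mib_per_month(HOURS_PER_MONTH))    # 0.005569458... (1 point/hour)
```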
Examples
Scenario 1: You have 1,000 resources, each writing 75 metrics. These are Google Cloud metrics only, writing at the rate of 1 data point/minute.
- Monthly ingestion: 25,063 MiB: 0.33416748 MiB for one metric * 75,000 (that is, 1,000 resources, 75 metrics)
- Approximate cost per month: $0.00 (Google Cloud metrics are included free)
MiB ingested | Rate ($/MiB) | Cost ($) |
---|---|---|
unlimited | 0.00 | $0.00 |
Total: 25,063 | | $0.00 |
Scenario 2: You have 1,000 resources, each writing 75 custom metrics. These are chargeable metrics writing at the rate of 1 data point/minute.
- Monthly ingestion: 25,063 MiB (same as above)
- Approximate cost per month: $6,427.55
MiB ingested | Rate ($/MiB) | Cost ($) |
---|---|---|
150 | 0.00 | $0.00 |
24,913 | 0.258 | $6,427.55 |
Total: 25,063 | | $6,427.55 |
Scenario 3: You have 1,000 resources, each writing 75 custom metrics. These are chargeable metrics writing at the rate of 1 data point/hour.
- Monthly ingestion: 418 MiB = 0.005569458 MiB for one metric * 75,000
- Approximate cost per month: $69.14
MiB ingested | Rate ($/MiB) | Cost ($) |
---|---|---|
150 | 0.00 | $0.00 |
268 | 0.258 | $69.14 |
Total: 418 | | $69.14 |
Scenario 4: You have 1 resource writing 500,000 metrics. These are chargeable metrics writing each at the rate of 1 data point/minute.
- Monthly ingestion: 167,084 MiB: 0.33416748 MiB for one metric * 500,000
- Approximate cost per month: $35,890.98
MiB ingested | Rate ($/MiB) | Cost ($) |
---|---|---|
150 | 0.00 | $0.00 |
99,850 | 0.258 | $25,761.30 |
67,084 | 0.151 | $10,129.68 |
Total: 167,084 | | $35,890.98 |
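The tiered arithmetic in these scenarios can be captured in a short function. This sketch hard-codes the byte-ingestion tiers used above; it's an illustration of the calculation, not an official estimator.

```python
# Stacked-tier pricing for metrics charged by bytes ingested:
# first 150 MiB free, then $0.258/MiB up to 100,000 MiB,
# $0.151/MiB up to 250,000 MiB, and $0.061/MiB beyond that.
BYTE_TIERS = [(150, 0.0), (100_000, 0.258), (250_000, 0.151), (float("inf"), 0.061)]

def monthly_byte_ingestion_cost(mib_ingested):
    cost, lower = 0.0, 0.0
    for upper, rate in BYTE_TIERS:
        if mib_ingested <= lower:
            break
        cost += (min(mib_ingested, upper) - lower) * rate
        lower = upper
    return cost

print(f"${monthly_byte_ingestion_cost(25_063):,.2f}")   # Scenario 2: $6,427.55
print(f"${monthly_byte_ingestion_cost(167_084):,.2f}")  # Scenario 4: $35,890.98
```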
Pricing for controllability and predictability
Pricing for Managed Service for Prometheus is designed to be controllable. Because you are charged on a per-sample basis, you can use the following levers to control costs:
- Sampling period: Changing the metric-scraping period from 15 seconds to 60 seconds can result in a 75% cost savings, without sacrificing cardinality. You can configure sampling periods on a per-job, per-target, or global basis.
- Filtering: You can use filtering to reduce the number of samples sent to the service's global datastore; for more information, see Filtering exported metrics. Use metric-relabeling configs in your Prometheus scrape configuration to drop metrics at ingestion time, based on label matchers.
- Keeping high-cardinality, low-value data local: You can run standard Prometheus alongside the managed service, using the same scrape configs, and keep locally any data that's not worth sending to the service's global datastore.
Pricing for Managed Service for Prometheus is designed to be predictable.
- You are not penalized for having sparse histograms. Samples are counted only for the first non-zero bucket value and then for each bucket n whose value is greater than the value of bucket n-1. For example, a histogram with the values 10 10 13 14 14 14 counts as three samples, for the first, third, and fourth buckets. Depending on how many histograms you use, and what you use them for, the exclusion of unchanged buckets from pricing typically results in 20% to 40% fewer samples being counted for billing purposes than the absolute number of histogram buckets would indicate.
- Because you are charged on a per-sample basis, you are not penalized for rapidly scaled-up and scaled-down, preemptible, or ephemeral containers, like those created by HPA or GKE Autopilot. If Managed Service for Prometheus charged on a per-metric basis, then you would pay for a full month's cardinality, all at once, each time a new container was spun up. With per-sample pricing, you pay only while the container is running.
Queries, including alert queries
All queries issued by the user, including queries issued when Prometheus recording rules are run, are charged through Cloud Monitoring API calls. For the current rate, see the summary table for Managed Service for Prometheus pricing or Monitoring pricing.
Pricing examples based on samples ingested
The following examples illustrate how to estimate the costs for collecting metrics charged by samples ingested. Sample-based charging is used for Google Cloud Managed Service for Prometheus.
These examples are intended to illustrate calculation techniques, not to provide billing data.
The basic scenario is this: You have some number of containers or pods that are writing points across some number of time series each month. The data might consist of scalar values or distributions.
The variables across the scenarios include:
- The number of containers or pods.
- The number of time series.
- Whether the data consists of scalar values, distributions, or both.
- The rate at which the data is written.
Counting samples
Before you can estimate prices, you need to know how to count samples. The number of samples counted for a value depends on the following:
- Whether the value is a scalar or a distribution
- The rate at which the values are written
This section describes how to estimate the number of samples written for a time series over the monthly billing period.
In a month, there are roughly 730 hours (365 days / 12 months * 24 hours), 43,800 minutes, or 2,628,000 seconds.
If a time series writes scalar values, then each value counts as one sample. The number of samples written in a month depends only on how frequently the values are written. Consider the following examples:
- For values written every 15 seconds:
- Write rate: 1 value/15s = 1 sample/15s
- Samples per month: 175,200 (1 sample/15s * 2,628,000 seconds/month)
- For values written every 60 seconds:
- Write rate: 1 value/60s = 1 sample/60s
- Samples per month: 43,800 (1 sample/60s * 2,628,000 seconds/month)
If a time series writes distribution values, then each value can contain 2 + n samples, where n is the number of buckets in the histogram. The number of samples written in a month depends on the number of buckets in your histograms and on how frequently the values are written.
For example, each instance of a 50-bucket histogram can contain 52 samples. If the values are written once every 60 seconds, then a 50-bucket histogram writes at most 2,277,600 samples per month. If the histogram has 100 buckets and is written once every 60 seconds, then each histogram can contain 102 samples and writes at most 4,467,600 samples per month.
Most distribution time series contain fewer than the maximum number of samples. In practice, between 20% and 40% of histogram buckets are empty. This percentage is even higher for users with sparse histograms, such as those generated by Istio.
When counting samples for pricing, only buckets with non-empty values are included. The maximum number of samples per histogram is 2 + n. If 25% of your buckets are empty, then the expected number of samples is 2 + .75n per histogram. If 40% of your buckets are empty, then the expected number of samples is 2 + .60n per histogram.
The following calculations and summary table show the maximum number of samples and more realistic expected numbers of samples:
For 50-bucket histogram values written every 15 seconds:
- Write rate: 1 value/15s
- Maximum samples:
- Per histogram: 52
- Per month: 9,110,400 (52 * 1 value/15s * 2,628,000 seconds/month)
- Expected samples, assuming 25% empty:
- Per histogram: 39.5 (2 + .75(50), or 2 + (50 - 12.5))
- Per month: 6,920,400 (39.5 * 1 value/15s * 2,628,000 seconds/month)
- Expected samples, assuming 40% empty:
- Per histogram: 32 (2 + .6(50), or 2 + (50 - 20))
- Per month: 5,606,400 (32 * 1 value/15s * 2,628,000 seconds/month)
For 50-bucket histogram values written every 60 seconds:
- Write rate: 1 value/60s
- Maximum samples:
- Per histogram: 52
- Per month: 2,277,600 (52 * 1 value/60s * 2,628,000 seconds/month)
- Expected samples, assuming 25% empty:
- Per histogram: 39.5 (2 + .75(50), or 2 + (50 - 12.5))
- Per month: 1,730,100 (39.5 * 1 value/60s * 2,628,000 seconds/month)
- Expected samples, assuming 40% empty:
- Per histogram: 32 (2 + .6(50), or 2 + (50 - 20))
- Per month: 1,401,600 (32 * 1 value/60s * 2,628,000 seconds/month)
For 100-bucket histogram values written every 15 seconds:
- Write rate: 1 value/15s
- Maximum samples:
- Per histogram: 102
- Per month: 17,870,400 (102 * 1 value/15s * 2,628,000 seconds/month)
- Expected samples, assuming 25% empty:
- Per histogram: 77 (2 + .75(100), or 2 + (100 - 25))
- Per month: 13,490,400 (77 * 1 value/15s * 2,628,000 seconds/month)
- Expected samples, assuming 40% empty:
- Per histogram: 62 (2 + .6(100), or 2 + (100 - 40))
- Per month: 10,862,400 (62 * 1 value/15s * 2,628,000 seconds/month)
For 100-bucket histogram values written every 60 seconds:
- Write rate: 1 value/60s
- Maximum samples:
- Per histogram: 102
- Per month: 4,467,600 (102 * 1 value/60s * 2,628,000 seconds/month)
- Expected samples, assuming 25% empty:
- Per histogram: 77 (2 + .75(100), or 2 + (100 - 25))
- Per month: 3,372,600 (77 * 1 value/60s * 2,628,000 seconds/month)
- Expected samples, assuming 40% empty:
- Per histogram: 62 (2 + .6(100), or 2 + (100 - 40))
- Per month: 2,715,600 (62 * 1 value/60s * 2,628,000 seconds/month)
The following table summarizes the preceding information:
Bucket count | Write rate | Samples per month (max) | Samples per month (25% empty) | Samples per month (40% empty) |
---|---|---|---|---|
50 | 1 sample/15s | 9,110,400 | 6,920,400 | 5,606,400 |
50 | 1 sample/60s | 2,277,600 | 1,730,100 | 1,401,600 |
100 | 1 sample/15s | 17,870,400 | 13,490,400 | 10,862,400 |
100 | 1 sample/60s | 4,467,600 | 3,372,600 | 2,715,600 |
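The following sketch generalizes this table; frac_empty is the assumed fraction of empty buckets.

```python
# Expected samples per month for one distribution time series:
# each point counts as 2 samples plus 1 per non-empty bucket.
SECONDS_PER_MONTH = 2_628_000

def distribution_samples_per_month(buckets, write_interval_s, frac_empty=0.0):
    samples_per_point = 2 + buckets * (1 - frac_empty)
    points_per_month = SECONDS_PER_MONTH // write_interval_s
    return int(samples_per_point * points_per_month)

print(distribution_samples_per_month(50, 15))         # 9,110,400 (max)
print(distribution_samples_per_month(50, 15, 0.25))   # 6,920,400
print(distribution_samples_per_month(100, 60, 0.40))  # 2,715,600
```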
Examples
To estimate prices, count the number of samples written over a month and apply the pricing values. Samples are priced by the million, for stacked ranges, as follows:
Ingestion range | Managed Service for Prometheus | Maximum for range |
---|---|---|
Up to 50 billion (50,000 million) | $0.06/million | $3,000.00 |
50 billion to 250 billion (250,000 million) | $0.048/million | $9,600.00 |
250 billion to 500 billion (500,000 million) | $0.036/million | $9,000.00 |
Over 500 billion (500,000 million) | $0.024/million | Not applicable |
The rest of this section works through possible scenarios.
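The stacked-range calculation can be sketched in a few lines; the printed values match the scenario totals worked out below.

```python
# Stacked-tier pricing for samples ingested. Tier boundaries and rates are
# in millions of samples and dollars per million, from the preceding table.
SAMPLE_TIERS = [(50_000, 0.06), (250_000, 0.048), (500_000, 0.036), (float("inf"), 0.024)]

def monthly_sample_cost(sample_millions):
    cost, lower = 0.0, 0.0
    for upper, rate in SAMPLE_TIERS:
        if sample_millions <= lower:
            break
        cost += (min(sample_millions, upper) - lower) * rate
        lower = upper
    return cost

print(f"${monthly_sample_cost(17_520):,.2f}")     # Scenario 1, variant A: $1,051.20
print(f"${monthly_sample_cost(175_200):,.2f}")    # Scenario 2, variant A: $9,009.60
print(f"${monthly_sample_cost(1_349_040):,.2f}")  # Scenario 3, variant A: $41,976.96
```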
Scenario 1: You have 100 containers, each writing 1,000 scalar time series.
Variant A: If each time series is written every 15 seconds (1 sample/15s), then the number of samples written per month is 17,520,000,000 (175,200 samples/month * 1,000 time series * 100 containers), or 17,520 million.
Variant B: If each time series is written every 60 seconds (1 sample/60s), then the number of samples written per month is 4,380,000,000 (43,800 samples/month * 1,000 time series * 100 containers), or 4,380 million.
In both of these cases, there are fewer than 50,000 million samples, so only the first rate applies. No samples are charged at the other rates.
Variant | Samples ingested | Ingestion range | Managed Service for Prometheus ($0.06, $0.048, $0.036, $0.024) |
---|---|---|---|
A (1 sample/15s) | 17,520 million | Up to 50,000 million | $1,051.20 |
A total | 17,520 million | | $1,051.20 |
B (1 sample/60s) | 4,380 million | Up to 50,000 million | $262.80 |
B total | 4,380 million | | $262.80 |
Scenario 2: You have 1,000 containers, each writing 1,000 scalar time series.
Variant A: If each time series is written every 15 seconds (1 sample/15s), then the number of samples written per month is 175,200,000,000, or 175,200 million:
- The first 50,000 million samples are charged at the first rate.
- The remaining 125,200 million samples are charged at the second rate.
- There are no samples charged at the other rates.
Variant B: If each time series is written every 60 seconds (1 sample/60s), then the number of samples written per month is 43,800,000,000, or 43,800 million. This monthly value is less than 50,000 million samples, so only the first rate applies.
Variant | Samples ingested | Ingestion range | Managed Service for Prometheus ($0.06, $0.048, $0.036, $0.024) |
---|---|---|---|
A (1 sample/15s) | 50,000 million | Up to 50,000 million | $3,000.00 |
 | 125,200 million | Up to 250,000 million | $6,009.60 |
A total | 175,200 million | | $9,009.60 |
B (1 sample/60s) | 43,800 million | Up to 50,000 million | $2,628.00 |
B total | 43,800 million | | $2,628.00 |
Scenario 3: You have 100 containers, each writing 1,000 100-bucket distribution time series. You expect 25% of the buckets to be empty.
Variant A: If each time series is written every 15 seconds (1 sample/15s), then the number of samples written per month is 1,349,040,000,000 (13,490,400 samples/month * 1,000 time series * 100 containers), or 1,349,040 million.
- The first 50,000 million samples are charged at the first rate.
- The next 200,000 million samples are charged at the second rate.
- The next 250,000 million samples are charged at the third rate.
- The remaining 849,040 million samples are charged at the fourth rate.
Variant B: If each time series is written every 60 seconds (1 sample/60s), then the number of samples written per month is 337,260,000,000 (3,372,600 samples/month * 1,000 time series * 100 containers), or 337,260 million.
- The first 50,000 million samples are charged at the first rate.
- The next 200,000 million samples are charged at the second rate.
- The remaining 87,260 million samples are charged at the third rate.
Variant | Samples ingested | Ingestion range | Managed Service for Prometheus ($0.06, $0.048, $0.036, $0.024) |
---|---|---|---|
A (1 sample/15s) | 50,000 million | Up to 50,000 million | $3,000.00 |
 | 200,000 million | Up to 250,000 million | $9,600.00 |
 | 250,000 million | Up to 500,000 million | $9,000.00 |
 | 849,040 million | Over 500,000 million | $20,376.96 |
A total | 1,349,040 million | | $41,976.96 |
B (1 sample/60s) | 50,000 million | Up to 50,000 million | $3,000.00 |
 | 200,000 million | Up to 250,000 million | $9,600.00 |
 | 87,260 million | Up to 500,000 million | $3,141.36 |
B total | 337,260 million | | $15,741.36 |
Scenario 4: You have 1,000 containers, each writing 10,000 100-bucket distribution time series. You expect 40% of the buckets to be empty.
Variant A: If each time series is written every 15 seconds (1 sample/15s), then the number of samples written per month is 108,624,000,000,000 (10,862,400 samples/month * 10,000 time series * 1,000 containers), or 108,624,000 million.
- The first 50,000 million samples are charged at the first rate.
- The next 200,000 million samples are charged at the second rate.
- The next 250,000 million samples are charged at the third rate.
- The remaining 108,124,000 million samples are charged at the fourth rate.
Variant B: If each time series is written every 60 seconds (1 sample/60s), then the number of samples written per month is 27,156,000,000,000 (2,715,600 samples/month * 10,000 time series * 1,000 containers), or 27,156,000 million.
- The first 50,000 million samples are charged at the first rate.
- The next 200,000 million samples are charged at the second rate.
- The next 250,000 million samples are charged at the third rate.
- The remaining 26,656,000 million samples are charged at the fourth rate.
Variant | Samples ingested | Ingestion range | Managed Service for Prometheus ($0.06, $0.048, $0.036, $0.024) |
---|---|---|---|
A (1 sample/15s) | 50,000 million | Up to 50,000 million | $3,000.00 |
 | 200,000 million | Up to 250,000 million | $9,600.00 |
 | 250,000 million | Up to 500,000 million | $9,000.00 |
 | 108,124,000 million | Over 500,000 million | $2,594,976.00 |
A total | 108,624,000 million | | $2,616,576.00 |
B (1 sample/60s) | 50,000 million | Up to 50,000 million | $3,000.00 |
 | 200,000 million | Up to 250,000 million | $9,600.00 |
 | 250,000 million | Up to 500,000 million | $9,000.00 |
 | 26,656,000 million | Over 500,000 million | $639,744.00 |
B total | 27,156,000 million | | $661,344.00 |
Scenario 5: You have the following:
- 1,000 containers, each writing 1,000 scalar time series every 15 seconds. The number of samples written per month is 175,200,000,000, or 175,200 million. (Scenario 2, variant A.)
- 1,000 containers, each writing 10,000 100-bucket distribution time series every 15 seconds. You expect 40% of the buckets to be empty. The number of samples written per month is 108,624,000,000,000, or 108,624,000 million. (Scenario 4, variant A.)
The total number of samples per month is 108,799,200 million (175,200 million + 108,624,000 million).
- The first 50,000 million samples are charged at the first rate.
- The next 200,000 million samples are charged at the second rate.
- The next 250,000 million samples are charged at the third rate.
- The remaining 108,299,200 million samples are charged at the fourth rate.
Variant | Samples ingested | Ingestion range | Managed Service for Prometheus ($0.06, $0.048, $0.036, $0.024) |
---|---|---|---|
2A + 4A | 50,000 million | Up to 50,000 million | $3,000.00 |
 | 200,000 million | Up to 250,000 million | $9,600.00 |
 | 250,000 million | Up to 500,000 million | $9,000.00 |
 | 108,299,200 million | Over 500,000 million | $2,599,180.80 |
Total | 108,799,200 million | | $2,620,780.80 |
Pricing for uptime-check execution (Effective date: October 1, 2022)
Monitoring charges for each regional execution of an uptime check, beyond the free monthly allotment of 1 million executions. A check that executes in three regions counts as three executions.
The cost for uptime-check execution is $0.30/1,000 executions. The charge appears on your bill as SKU "CA14-D3DE-E67F" for "Monitoring Uptime Checks".
The following examples illustrate how to estimate the costs for executing uptime checks. These examples are intended to illustrate calculation techniques, not to provide billing data.
Counting executions of uptime checks
To estimate the cost of your uptime checks, you need to know how many regional executions occur in a month. Monitoring charges $0.30/1,000 executions, with a free monthly allotment of 1 million executions.
To estimate the cost of your uptime checks, you can use the following calculation:
(EXECUTIONS_PER_MONTH - 1,000,000) * .0003
For each uptime check, the number of executions depends on the following configuration choices:
- How frequently the uptime check executes: every minute, 5 minutes, 10 minutes, or 15 minutes.
- The number of regions in which the uptime check executes.
- The number of targets the uptime check is configured for. If the uptime check is configured for a single VM, then the number of targets is 1. If the uptime check is configured for a resource group, then the number of targets is the number of resources in the group.
When you configure an uptime check, you specify a location for the uptime check, and each location maps to one or more regions. The following table shows the valid locations for uptime checks and the regions to which they map:
Location for uptime-check configuration | Includes Google Cloud regions |
---|---|
ASIA_PACIFIC | asia-southeast1 |
EUROPE | europe-west1 |
SOUTH_AMERICA | southamerica-east1 |
USA | us-central1, us-east4, us-west1 |
GLOBAL | All regions included by other locations |
You must configure your uptime checks to execute in at least three regions.
To estimate the number of executions for an uptime check, you need to know how many regions are covered by the uptime-check location:
- ASIA_PACIFIC, EUROPE, and SOUTH_AMERICA each include 1 region.
- USA includes 3 regions.
- GLOBAL includes 6 regions.
In a month, there are roughly 730 hours (365 days / 12 months * 24 hours) or 43,800 minutes.
- An uptime check configured to run once a minute in USA runs in 3 regions. If this uptime check is configured to check a single VM, then this uptime check executes 131,400 (3 * 43,800) times in a month. If the check is configured to check a 10-member resource group, then the uptime check executes 1,314,000 (10 * 131,400) times in a month.
- An uptime check configured to run once a minute in ASIA_PACIFIC, EUROPE, and USA runs in 5 regions. This uptime check executes 219,000 times in a month if configured for a single target.
The following table shows the hourly and monthly execution counts for a single uptime check configured to run with different frequencies in different numbers of regions:
Frequency of check execution | Number of regions | Hourly executions per target | Monthly executions per target |
---|---|---|---|
Once every 1 minute | 3 | 180 | 131,400 |
Once every 1 minute | 4 | 240 | 175,200 |
Once every 1 minute | 5 | 300 | 219,000 |
Once every 1 minute | 6 | 360 | 262,800 |
Once every 5 minutes | 3 | 36 | 26,280 |
Once every 5 minutes | 4 | 48 | 35,040 |
Once every 5 minutes | 5 | 60 | 43,800 |
Once every 5 minutes | 6 | 72 | 52,560 |
Once every 10 minutes | 3 | 18 | 13,140 |
Once every 10 minutes | 4 | 24 | 17,520 |
Once every 10 minutes | 5 | 30 | 21,900 |
Once every 10 minutes | 6 | 36 | 26,280 |
Once every 15 minutes | 3 | 12 | 8,760 |
Once every 15 minutes | 4 | 16 | 11,680 |
Once every 15 minutes | 5 | 20 | 14,600 |
Once every 15 minutes | 6 | 24 | 17,520 |
Examples
To estimate prices, determine your total monthly executions and subtract 1,000,000. Any remaining executions are charged at $0.30/1,000 executions, so multiply the remaining executions by .0003.
(EXECUTIONS_PER_MONTH - 1,000,000) * .0003
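As a sketch, the execution count and cost for the scenarios that follow can be computed like this:

```python
# Uptime-check cost sketch: a check executes once per period, per region,
# per target; the first 1,000,000 executions per month are free, and the
# rest cost $0.30 per 1,000 (that is, $0.0003 each).
MINUTES_PER_MONTH = 43_800

def monthly_executions(period_minutes, regions, targets=1):
    return (MINUTES_PER_MONTH // period_minutes) * regions * targets

def monthly_uptime_cost(total_executions):
    return max(0, total_executions - 1_000_000) * 0.0003

# Scenario 2 below: one check in USA (3 regions), 10-member resource group.
execs = monthly_executions(period_minutes=1, regions=3, targets=10)
print(execs, f"${monthly_uptime_cost(execs):.2f}")  # 1314000 $94.20
```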
Scenario 1: You have 1 uptime check in location USA that checks 1 VM once a minute. This check runs in 3 regions. The check executes 131,400 times a month and costs nothing.

Total monthly executions | Chargeable monthly executions (over 1,000,000) | Cost ($0.30/1,000 executions) |
---|---|---|
131,400 | 0 | $0.00 |
Scenario 2: You have 1 uptime check in location USA that checks a 10-member resource group once a minute. This check runs in 3 regions. The check executes 1,314,000 (10 * 131,400) times a month and costs $94.20/month. The only difference between this scenario and Scenario 1 is the number of targets.

Total monthly executions | Chargeable monthly executions (over 1,000,000) | Cost ($0.30/1,000 executions) |
---|---|---|
1,314,000 (10 targets) | 314,000 | $94.20 |
Scenario 3: You have 10 GLOBAL uptime checks, each of which checks 1 VM once a minute. These checks run in 6 regions, so each check executes 262,800 times a month. The total monthly executions is 2,628,000 (10 * 262,800). This scenario costs $488.40/month.

Total monthly executions | Chargeable monthly executions (over 1,000,000) | Cost ($0.30/1,000 executions) |
---|---|---|
2,628,000 | 1,628,000 | $488.40 |
Scenario 4: You have 5 uptime checks in location USA that check 1 VM once every 5 minutes. These checks run in 3 regions, so each check executes 26,280 times a month. The total monthly executions for this set of checks is 131,400 (5 * 26,280).

You also have 2 GLOBAL uptime checks that check 1 VM once every 15 minutes. These checks run in 6 regions, so each check executes 17,520 times a month. The total monthly executions for this set of checks is 35,040 (2 * 17,520).

Your total monthly executions is 166,440 (131,400 + 35,040). This scenario costs nothing.

Total monthly executions | Chargeable monthly executions (over 1,000,000) | Cost ($0.30/1,000 executions) |
---|---|---|
166,440 | 0 | $0.00 |
Pricing for synthetic-monitor execution (Effective date: November 1, 2023)
Cloud Monitoring charges for each execution of a synthetic monitor, beyond the free allotment per month of 100 executions per billing account. For example, if you create 3 synthetic monitors and configure each of them to execute every 5 minutes, then your total number of executions per month is 26,784:
Number of executions per month = 3 synthetic monitors * 1 execution per monitor per 5 minutes *
1440 minutes per day * 31 days per month
= 26,784
To determine the number of chargeable executions, subtract the free allotment from the total number of executions, and then multiply the result by the cost:

Total monthly executions | Chargeable monthly executions (over 100 executions per billing account) | Cost ($1.20/1,000 executions) |
---|---|---|
26,784 | 26,684 | $32.02 |
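The same calculation in a minimal Python sketch:

```python
# Synthetic-monitor cost sketch: 100 free executions per billing account
# per month, then $1.20 per 1,000 executions (31-day month assumed).
monitors = 3
executions = monitors * (1440 // 5) * 31  # one execution per 5 minutes
chargeable = max(0, executions - 100)
print(executions, chargeable, f"${chargeable * 1.20 / 1_000:.2f}")
# 26784 26684 $32.02
```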
Pricing for alerting
Starting no sooner than April 2026, Cloud Monitoring will begin charging for alerting. The pricing model is as follows:
- $1.50 per month for each condition in an alerting policy.
- $0.35 per 1,000,000 time series returned by the query of a metric alerting policy condition.
This section provides the following information:
- Definitions of alerting terminology.
- Examples of charges for various alerting policy configurations.
- Suggestions for reducing costs by consolidating or deleting alerting policies.
- Information about opting out of billing for alerting policies.
Definitions
Condition: The condition of an alerting policy describes when a resource, or a group of resources, is in a state that requires a response.
- Alerting policies that use filters to create metric-threshold or metric-absence queries can combine up to six conditions.
- Alerting policies that use other query types, such as PromQL queries, can have only a single condition.

The charge is $1.50 per month for each condition. To stop being charged for a condition, you must delete the alerting policy. Snoozing or disabling the policy doesn't stop you from being charged.
Metric and log-based alerting policies: Alerting policies that use any condition type except log-match conditions are metric alerting policies; the conditions of metric alerting policies return time series. During each execution period, conditions in metric alerting policies execute their queries against the Cloud Monitoring datastore. The returned time series are then evaluated against a threshold to determine whether the alerting policy fires.
Log-based alerting policies use log-match conditions. Log-match conditions return no time series.
Execution period: How frequently Cloud Monitoring executes your condition. For most condition types, this is 30 seconds and can't be changed. Conditions that use a PromQL query can set this period. For more information, see Increase the length of the execution period (PromQL only).
Time series returned: During every execution period, a metric alerting policy executes the query of its condition against the Cloud Monitoring datastore. Cloud Monitoring returns time series data as a response to each query. Each time series in the response counts as one time series returned.
The number of time series returned in a month is determined by three factors:
- The shape and scope of the underlying data.
- The filters and aggregations you use in the query of your condition.
- The execution period.
For example, consider a configuration where you have the following:
- 100 virtual machines (VMs), where each VM belongs to one service.
- Each VM emits one metric,
metric_name
, which has a label with 10 values. - Five total services.
Because you have 100 VMs, each of which can generate 10 time series (one for each label value), you have a total of 1,000 underlying time series. Each VM also contains a metadata-like label that records which of your five services the VM belongs to.
You could configure your alerting policies in the following ways by using PromQL, where each configuration results in a different number of time series returned per execution period:
Configuration | PromQL query | Time series returned per period |
---|---|---|
No aggregation | rate(metric_name[1m]) | 1,000 |
Aggregate to the VM | sum by (vm) (rate(metric_name[1m])) | 100 |
Aggregate to label value | sum by (label_key) (rate(metric_name[1m])) | 10 |
Aggregate to the service | sum by (service) (rate(metric_name[1m])) | 5 |
Aggregate to label value and service | sum by (service, label_key) (rate(metric_name[1m])) | 50 |
Aggregate to the fleet | sum(rate(metric_name[1m])) | 1 |
Filter and aggregate to one VM | sum(rate(metric_name{vm="my_vm_name"}[1m])) | 1 |
Filter and aggregate to one service | sum(rate(metric_name{service="my_service_name"}[1m])) | 1 |
Pricing examples
The following examples take place in a 30-day month, resulting in the following evaluation periods:
- 86,400 30-second execution periods per month
- 172,800 15-second execution periods per month (PromQL queries only)
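A minimal sketch of this pricing model, used by the examples that follow (series counts are per condition per execution period):

```python
# Alerting cost sketch: $1.50 per condition per month, plus $0.35 per
# million time series returned across all execution periods in the month.
def monthly_alerting_cost(conditions, series_per_condition_per_period,
                          periods_per_month=86_400):
    condition_cost = conditions * 1.50
    series_returned = conditions * series_per_condition_per_period * periods_per_month
    return condition_cost + series_returned * 0.35 / 1_000_000

print(f"${monthly_alerting_cost(1, 100):.2f}")           # Example 1: $4.52
print(f"${monthly_alerting_cost(100, 1):.2f}")           # Example 2: $153.02
print(f"${monthly_alerting_cost(1, 100, 172_800):.2f}")  # Example 3: $7.55
```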
Example 1: One policy, aggregating to the VM, 30 seconds
In this example, use the following configurations:
Data
- 100 VMs
- Each VM emits one metric, metric_name
- metric_name has one label, which has 10 values

Alerting policy

- One alert condition
- Condition aggregates to the VM level
- 30-second execution period
- Condition cost: 1 condition * $1.50 per month = $1.50 per month
- Time series cost: 100 time series returned per period * 86,400 periods per month = 8.6 million time series returned per month * $0.35 per million time series = $3.02 per month
- Total cost: $4.52 per month
Example 2: 100 policies (one per VM), aggregating to the VM, 30 seconds
In this example, use the following configurations:
Data
- 100 VMs
- Each VM emits one metric, metric_name
- metric_name has one label, which has 10 values

Alerting policy

- 100 conditions
- Each condition is filtered and aggregated to one VM
- 30-second execution period
- Condition cost: 100 conditions * $1.50 per month = $150 per month
- Time series cost: 100 conditions * 1 time series returned per condition per period * 86,400 periods per month = 8.6 million time series returned per month * $0.35 per million time series = $3.02 per month
- Total cost: $153.02 per month
Example 3: One policy, aggregating to the VM, 15 seconds
In this example, use the following configurations:
Data
- 100 VMs
- Each VM emits one metric, metric_name
- metric_name has one label, which has 10 values

Alerting policy

- One PromQL alert condition
- Condition aggregates to the VM level
- 15-second execution period
- Condition cost: 1 condition * $1.50 per month = $1.50 per month
- Time series cost: 100 time series returned per period * 172,800 periods per month = 17.3 million time series returned per month * $0.35 per million time series = $6.05 per month
- Total cost: $7.55 per month
Example 4: Aggregate one policy to each service, 30 seconds
In this example, use the following configurations:
Data
- 100 VMs, where each VM belongs to one service
- Five total services
- Each VM emits one metric, metric_name
- metric_name has one label, which has 10 values

Alerting policy

- One condition
- Condition aggregates to the service level
- 30-second execution period
- Condition cost: 1 condition * $1.50 per month = $1.50 per month
- Time series cost: 5 time series returned per period * 86,400 periods per month = 432,000 time series returned per month * $0.35 per million time series = $0.15 per month
- Total cost: $1.65 per month
Example 5: Aggregate one policy to the VM; higher underlying cardinality per VM, 30 seconds
In this example, use the following configurations:
Data
- 100 VMs
- Each VM emits one metric, metric_name
- metric_name has 100 labels with 1,000 values each

Alerting policy

- One condition
- Condition aggregates to the VM level
- 30-second execution period
- Condition cost: 1 condition * $1.50 per month = $1.50 per month
- Time series cost: 100 time series returned per period * 86,400 periods per month = 8.6 million time series returned per month * $0.35 per million time series = $3.02 per month
- Total cost: $4.52 per month
Example 6: Aggregate one policy to the VM; union two metrics in one condition, 30 seconds
In this example, use the following configurations:
Data
- 100 VMs
- Each VM emits two metrics, metric_name_1 and metric_name_2
- Both metrics have one label with 10 values each

Alerting policy

- One condition
- Condition aggregates to the VM level
- Condition uses an OR operator to union the metrics
- Condition cost: 1 condition * $1.50 per month = $1.50 per month
- Time series cost: 2 metrics * 100 time series returned per metric per period * 86,400 periods per month = 17.3 million time series returned per month * $0.35 per million time series = $6.05 per month
- Total cost: $7.55 per month
Example 7: 100 log-based alerting policies
In this example, use the following configuration:
Alerting policies
- 100 conditions (one condition per log-based alerting policy)
- Condition cost: 100 conditions * $1.50 per month = $150.00 per month
- Time series cost: $0 (Log-based alerting policies do not return time series.)
- Total cost: $150.00 per month
Suggestions for reducing your alerting bill
When you configure your metric-based alerting policies, use the following suggestions to help reduce the cost of your alerting bills.

Consolidate alerting policies to operate over more resources
Because of the $1.50-per-condition cost, it is more cost effective to use one alerting policy to monitor multiple resources than it is to use one alerting policy to monitor each resource. For example, compare Example 1 to Example 2: In both examples, you monitor the same number of resources. However, Example 2 uses 100 alerting policies, while Example 1 uses only one alerting policy. As a result, Example 1 is almost $150 cheaper per month.
Aggregate to only the level that you need to alert on
Aggregating to finer levels of granularity results in higher costs than aggregating to coarser levels. For example, aggregating to the Google Cloud project level is cheaper than aggregating to the cluster level, and aggregating to the cluster level is cheaper than aggregating to the cluster and namespace level.
For example, compare Example 1 to Example 4: Both examples operate over the same underlying data and have a single alerting policy. However, because the alerting policy in Example 4 aggregates to the service, it is less expensive than the alerting policy in Example 1, which aggregates more granularly to the VM.
In addition, compare Example 1 to Example 5: In this case, the metric cardinality in Example 5 is 10,000 times higher than the metric cardinality in Example 1. However, because the alerting policy in Example 1 and in Example 5 both aggregate to the VM, and because the number of VMs is the same in both examples, the examples are equivalent in price.
When you configure your alerting policies, choose aggregation levels that work best for your use case. For example, if you care about alerting on CPU utilization, then you might want to aggregate to the VM and CPU level. If you care about alerting on latency by endpoint, then you might want to aggregate to the endpoint level.
Don't alert on raw, unaggregated data
Monitoring uses a dimensional metrics system, where any metric has total cardinality equal to the number of resources monitored multiplied by the number of label combinations on that metric. For example, if you have 100 VMs emitting a metric, and that metric has 10 labels with 10 values each, then your total cardinality is 100 * 10 * 10 = 10,000.
As a result of how cardinality scales, alerting on raw data can be extremely expensive. In the previous example, you have 10,000 time series returned for each execution period. However, if you aggregate to the VM, then you have only 100 time series returned per execution period, regardless of the label cardinality of the underlying data.
Alerting on raw data also puts you at risk for increased time series when your metrics receive new labels. In the previous example, if a user adds a new label to your metric, then your total cardinality increases to 100 * 11 * 10 = 11,000 time series. In this case, your number of returned time series increases by 1,000 each execution period even though your alerting policy is unchanged. If you instead aggregate to the VM, then, despite the increased underlying cardinality, you still have only 100 time series returned.
Filter out unnecessary responses
Configure your conditions to evaluate only data that's necessary for your alerting needs. If you wouldn't take action to fix something, then exclude it from your alerting policies. For example, you probably don't need to alert on an intern's development VM.
To reduce unnecessary alerts and costs, you can filter out time series that aren't important. You can use Google Cloud metadata labels to tag assets with categories and then filter out the unneeded metadata categories.
Use top-stream operators to reduce the number of time series returned
If your condition uses a PromQL or an MQL query, then you can use a top-streams operator to select a number of time series returned with the highest values.
For example, a topk(metric, 5)
clause in a PromQL query limits
the number of time series returned to five in each execution period.
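As a minimal sketch (the metric and label names are assumptions for illustration), a condition query might keep only the five busiest endpoints:

```python
# Hypothetical PromQL condition query: alert only on the five endpoints with
# the highest request rate, instead of returning every endpoint's time series.
TOP_STREAMS_QUERY = """
topk(5, sum by (endpoint) (rate(http_requests_total[5m])))
"""
```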
Limiting to a top number of time series might result in missing data and faulty alerts, such as:
- If more than N time series violate your threshold, then you will miss data outside the top N time series.
- If a violating time series occurs outside the top N time series, then your incidents might auto-close despite the excluded time series still violating the threshold.
- Your condition queries might not show you important context such as baseline time series that are functioning as intended.
To mitigate such risks, choose large values for N and use the top-streams operator only in alerting policies that evaluate many time series, such as alerts for individual Kubernetes containers.
Increase the length of the execution period (PromQL only)
If your condition uses a PromQL query, then you can modify the length of your execution period by setting the `evaluationInterval` field in the condition.
Longer evaluation intervals result in fewer time series returned per month; for example, a condition query with a 15-second interval runs twice as often as a query with a 30-second interval, and a query with a 1-minute interval runs half as often as a query with a 30-second interval.
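The following sketch shows such a condition through the google-cloud-monitoring Python client library; the query and display name are placeholders, and the field names follow the library's PromQL condition type, so verify them against the client library reference:

```python
# Sketch: a PromQL-based alerting condition evaluated every 5 minutes.
from google.cloud import monitoring_v3
from google.protobuf import duration_pb2

condition = monitoring_v3.AlertPolicy.Condition(
    display_name="High error rate (5-minute execution period)",
    condition_prometheus_query_language=(
        monitoring_v3.AlertPolicy.Condition.PrometheusQueryLanguageCondition(
            query="sum by (service) (rate(http_errors_total[10m])) > 5",
            # Longer interval -> fewer executions -> fewer time series returned per month.
            evaluation_interval=duration_pb2.Duration(seconds=300),
        )
    ),
)
```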
Opting out
If you have an existing Google Cloud contract that doesn't expire until April 2026, you can delay billing for alerting until your contract is due for renewal by requesting an exemption from the Cloud Monitoring alerting billing team. Exemptions for customers with active contracts will be considered on a case-by-case basis.
You can request an exemption until November 1, 2024. To request a billing exemption until contract renewal, fill out the billing-exemption request form.
Error Reporting
Error data can be reported to your Google Cloud project by using the Error Reporting API or the Cloud Logging API.
There are no charges for using Error Reporting. However, you might incur Cloud Logging costs because log entries are generated and then stored by Cloud Logging.
For limits that apply to your use of Error Reporting, see Quotas and limits.
Cloud Profiler
There is no cost associated with using Cloud Profiler.
For limits that apply to your use of Profiler, see Quotas and limits.
Cloud Trace
Trace charges are based on the number of trace spans ingested and scanned. When latency data is sent to Trace, it's packaged as a trace that is composed of spans, and the spans are ingested by the Cloud Trace backend. When you view trace data, the stored spans are scanned by Cloud Trace. This section provides the following information:
- Definitions of chargeable and non-chargeable trace spans
- Pricing examples
- Information about how to reduce your trace span ingestion
- Settings for an alerting policy that can notify you if your trace span ingestion reaches a threshold
For the current pricing information, see Cloud Trace Pricing.
For limits that apply to your use of Trace, see Quotas and limits.
For information about how to view your current or past usage, see Estimating your bills.
Non-chargeable trace spans
Cloud Trace pricing doesn't apply to spans auto-generated by App Engine Standard, Cloud Run functions, or Cloud Run: ingestion of these spans is non-chargeable.
Auto-generated traces don't consume Cloud Trace API quota, and these traces are not counted in the API usage metrics.
Chargeable trace spans
Ingestion of trace spans, except for the spans listed in the section titled Non-chargeable trace spans, is chargeable and is priced by ingested volume. This includes spans created by instrumentation that you add to your App Engine Standard application.
Pricing examples
These examples use Trace pricing as of July 2020. The sketch after the list reproduces the arithmetic.
- If you ingest 2 million spans in a month, your cost is $0. (Your first 2.5 million spans ingested in a month are free.)
- If you ingest 14 million spans in a month, your cost is $2.30. (Your first 2.5 million spans in a month are free. The remaining spans' cost is calculated as 11.5 million spans * $0.20/million spans = $2.30.)
- If you ingest 1 billion spans in a month, your cost is $199.50. (Your first 2.5 million spans in a month are free. The remaining spans' cost is calculated as 997.5 million spans * $0.20/million spans = $199.50.)
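The following minimal Python sketch encodes that rate structure (a flat $0.20 per million spans after the first 2.5 million free spans, at the July 2020 rates):

```python
# Monthly Cloud Trace cost at the July 2020 rates used in the examples above.
FREE_SPANS = 2_500_000
PRICE_PER_MILLION_SPANS = 0.20  # USD

def trace_cost(spans_ingested: int) -> float:
    """Return the monthly charge for the given number of ingested spans."""
    billable_spans = max(0, spans_ingested - FREE_SPANS)
    return billable_spans / 1_000_000 * PRICE_PER_MILLION_SPANS

print(trace_cost(2_000_000))      # 0.0
print(trace_cost(14_000_000))     # 2.3
print(trace_cost(1_000_000_000))  # 199.5
```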
Reducing your trace usage
To control Trace span ingestion volume, you can manage your trace sampling rate to balance how many traces you need for performance analysis with your cost tolerance.
For high-traffic systems, most customers can sample at 1 in 1,000 transactions, or even 1 in 10,000 transactions, and still have enough information for performance analysis.
Sampling rate is configured with the Cloud Trace client libraries.
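For example, if you instrument your application with the OpenTelemetry Python SDK (one of the libraries that can export to Cloud Trace), a ratio-based sampler sets the sampling rate; the 1-in-1,000 ratio here is simply the figure suggested above:

```python
# Sample roughly 1 in 1,000 traces with the OpenTelemetry SDK's ratio sampler.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.sampling import TraceIdRatioBased

provider = TracerProvider(sampler=TraceIdRatioBased(1 / 1000))
trace.set_tracer_provider(provider)
```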
Alerting on monthly spans ingested
To create an alerting policy that triggers when the number of Cloud Trace spans you ingest per month exceeds a user-defined limit, use the following settings. A sketch of the equivalent API call follows the tables.
New condition Field | Value |
---|---|
Resource and Metric | In the Resources menu, select Global. In the Metric categories menu, select Billing. In the Metrics menu, select Monthly trace spans ingested. |
Filter | |
Across time series Time series aggregation | sum |
Rolling window | 60 m |
Rolling window function | max |
Configure alert trigger Field | Value |
---|---|
Condition type | Threshold |
Alert trigger | Any time series violates |
Threshold position | Above threshold |
Threshold value | You determine the acceptable value. |
Retest window | Minimum acceptable value is 30 minutes. |
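A minimal sketch of the same policy created through the google-cloud-monitoring Python client library follows; the metric type string is an assumption (confirm the exact name in Metrics Explorer), and the threshold value is a placeholder:

```python
# Sketch: alerting policy for monthly trace spans ingested. The metric type and
# threshold are assumptions; verify both before creating the policy.
from google.cloud import monitoring_v3
from google.protobuf import duration_pb2

client = monitoring_v3.AlertPolicyServiceClient()

condition = monitoring_v3.AlertPolicy.Condition(
    display_name="Monthly trace spans ingested above threshold",
    condition_threshold=monitoring_v3.AlertPolicy.Condition.MetricThreshold(
        # Assumed metric type for "Monthly trace spans ingested".
        filter=(
            'metric.type = "cloudtrace.googleapis.com/billing/monthly_spans_ingested" '
            'AND resource.type = "global"'
        ),
        comparison=monitoring_v3.ComparisonType.COMPARISON_GT,
        threshold_value=2_000_000,  # you determine the acceptable value
        duration=duration_pb2.Duration(seconds=1800),  # 30-minute retest window
        aggregations=[
            monitoring_v3.Aggregation(
                alignment_period=duration_pb2.Duration(seconds=3600),  # 60 m rolling window
                per_series_aligner=monitoring_v3.Aggregation.Aligner.ALIGN_MAX,
                cross_series_reducer=monitoring_v3.Aggregation.Reducer.REDUCE_SUM,
            )
        ],
    ),
)

policy = monitoring_v3.AlertPolicy(
    display_name="Trace spans ingested",
    combiner=monitoring_v3.AlertPolicy.ConditionCombinerType.OR,
    conditions=[condition],
)

created = client.create_alert_policy(
    name="projects/YOUR_PROJECT_ID", alert_policy=policy
)
print(created.name)
```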
GKE Enterprise
There is no charge for GKE Enterprise system logs and metrics. Control plane logs, control plane metrics, and a curated subset of Kube state metrics are enabled by default for GKE clusters on Google Cloud that are registered at cluster creation time in a GKE Enterprise enabled project. Control plane logs incur Cloud Logging charges, while default-on metrics are included at no additional charge.
For the list of included GKE logs and metrics, see What logs are collected and Available metrics.
In a Google Distributed Cloud cluster, GKE Enterprise system logs and metrics include the following:
- Logs and metrics from all components in an admin cluster
- Logs and metrics from components in these namespaces in a user cluster:
`kube-system`, `gke-system`, `gke-connect`, `knative-serving`, `istio-system`, `monitoring-system`, `config-management-system`, `gatekeeper-system`, `cnrm-system`
Frequently asked questions
Which product features are free to use?
Usage of Google Cloud Observability products is priced by data volume. Other than the data volume costs described on this page, usage of all additional Google Cloud Observability product features is free.
How much will I have to pay?
To estimate your usage costs, see Estimating your bills.
To get help with billing questions, see Billing questions.
How do I understand the details of my usage?
Several metrics let you drill into your logs and metrics volume by using Metrics Explorer. See View detailed usage in Metrics Explorer for details.
If you're interested in learning how to manage your costs, see these blog posts:
- Cloud Logging pricing for Cloud Admins: How to approach it & save cost
- Four steps to managing your Cloud Logging costs on a budget
How do metrics scopes and log scopes affect billing?
For the most part, metrics scopes and log scopes don't affect billing. Logs and metrics are charged by the project, billing account, folder, or organization that receives the data. The metrics scope for a project defines the collection of the resources whose metrics the project can view and monitor. When you define a metrics scope, you don't affect which resource receives metric data or cause data to be duplicated. Similarly, a log scope only lists the resources that store or route the log entries that you want to view.
For example, suppose your organization has 100 virtual machines (VMs): 60 VMs are hosted by Project-A and 40 VMs are in Project-B. Project-A receives and stores the metrics for its VMs, and it's charged when metrics are chargeable. Similarly, Project-B receives and stores the metrics for its VMs, and it's charged when metrics are chargeable. If you create a metrics scope that includes Project-A and Project-B, then you can view the combined metrics for your 100 VMs: just the metrics for Project-A, just the metrics for Project-B, or the combination. Even though you now have two ways to view the metrics of Project-A, there are no billing implications.
What happens if I go over the free allotments?
You are automatically billed for any usage over your free allotments. You don't lose any logs or metrics. To better understand your potential costs, review Estimating your bills.
You can create an alerting policy that monitors your usage and notifies you when you approach the threshold for billing.
I have a large number of Google Cloud logs in my project(s) that I do not use. I am concerned about charges for these logs. How do I avoid this?
You can exclude logs to control which logs are ingested into Logging. See Reducing your logs usage for details.
Will services that send logs to my project receive an error if logs are excluded?
No. Services that send log entries cannot determine whether the log entries are ingested into Logging or not.
Will I be charged twice for Virtual Private Cloud flow logs?
If you send your VPC flow logs to Logging, VPC flow logs generation charges are waived, and only Logging charges apply. However, if you exclude your VPC flow logs from Logging, VPC flow logs generation charges apply. For more information, see the Google Cloud Pricing Calculator and select the tab titled "Cloud Load Balancing and Network Services".
1 For pricing purposes, all units are treated as binary measures, for example, as mebibytes (MiB, or 2^20 bytes) or gibibytes (GiB, or 2^30 bytes).
2 There is no charge for Google Cloud metrics or GKE Enterprise metrics that are measured at up to 1 data point per minute, the current highest resolution. In the future, metrics measured at higher resolutions might incur a charge.
3 Process metrics are currently collected at a pre-defined default rate of once per minute, which can't be changed. This data generally changes slowly, so these metrics are currently over-sampled. Therefore, charging process metrics at 5% of the standard rate aligns with the standard rate if the metrics were sampled at 20-minute intervals. Users who collect 100 MiB of data from these metrics are charged for only 5 MiB.
What's next
- Read the Google Cloud Observability documentation.
- Try the Pricing calculator.
- Learn about Google Cloud Observability solutions and use cases.