# Prometheus: Aggregation for Prometheus, Thanos or other remote write storage with M3

This document is a getting started guide to using the M3 Coordinator, or both the M3 Coordinator and M3 Aggregator, to aggregate metrics for a compatible Prometheus remote write storage backend.

The only requirement is a storage backend that supports the Prometheus remote write protocol.
## Testing with docker compose

To test a full end-to-end example, clone the M3 repository and follow the docker compose development stack guide for the M3 and Prometheus remote write stack.
## Basic guide with single M3 Coordinator sidecar aggregation

Start by downloading the M3 Coordinator config template.

Update the endpoints to match your Prometheus remote write compatible storage setup. You should end up with a config similar to:
```yaml
backend: prom-remote-write

prometheusRemoteBackend:
  endpoints:
    # Replace the addresses below with your storage's remote write URLs.
    # This points to a Prometheus started with `--storage.tsdb.retention.time=720h`
    - name: unaggregated
      address: "http://prometheus-raw:9090/api/v1/write"
    # This points to a Prometheus started with `--storage.tsdb.retention.time=1440h`
    - name: aggregated
      address: "http://prometheus-agg:9090/api/v1/write"
      storagePolicy:
        # Should match the retention of the Prometheus instance. The Coordinator
        # uses it to route metrics correctly.
        retention: 1440h
        # Resolution instructs M3 Aggregator to downsample incoming metrics at the given rate.
        # By tuning resolution we can control how much storage Prometheus needs,
        # at the cost of query accuracy as the range shrinks.
        resolution: 5m
        downsample:
          all: true
    # Another example of a Prometheus configured for a very long retention but with 1h resolution.
    # Because of downsample: all == false, metrics are downsampled based on mapping and rollup rules.
    - name: historical
      address: "http://prometheus-hist:9090/api/v1/write"
      storagePolicy:
        retention: 8760h
        resolution: 1h
        downsample:
          all: false
```
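With the Coordinator running, point your Prometheus instances (or any remote write client) at it. A minimal Prometheus `remote_write` stanza, assuming the Coordinator listens on its default port 7201 on localhost:

```yaml
remote_write:
  - url: "http://localhost:7201/api/v1/prom/remote/write"
```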
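To get a feel for the storage/accuracy tradeoff that resolution controls, here is a small back-of-the-envelope sketch. The 15s scrape interval and the retention/resolution pairings are illustrative assumptions, not values mandated by this guide:

```python
# Rough per-series sample counts for raw vs. downsampled storage.
# Assumes a 15s scrape interval for the unaggregated endpoint.
def samples_per_series(retention_hours: int, resolution_seconds: int) -> int:
    """Number of samples one series accumulates over the full retention."""
    return retention_hours * 3600 // resolution_seconds

raw = samples_per_series(720, 15)      # unaggregated: 720h of 15s scrapes
agg = samples_per_series(1440, 300)    # aggregated: 1440h at 5m resolution
hist = samples_per_series(8760, 3600)  # historical: 8760h at 1h resolution

print(raw, agg, hist)  # 172800 17280 8760
```

Even with double the retention, the 5m aggregated endpoint stores an order of magnitude fewer samples per series than the raw one; the 1h historical endpoint stores fewer still over a full year.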
## More advanced deployments

Refer to the M3 Aggregation for any Prometheus remote write storage guide for details on more advanced deployment options.