
GreptimeDB vs. Prometheus

Prometheus stores metrics locally. For long-term retention at scale, you add Thanos or Mimir — that's 5-8 components for one signal. GreptimeDB replaces the whole stack with one database.

Struggling to scale Prometheus?
There's a structural fix.

Prometheus scraped it. Thanos archived it. Mimir distributed it. You're running three systems to do one job. A multi-value model Prometheus lacks, the SQL correlation you've been wanting, a unified cost model for metrics + logs: these aren't add-ons. They're reasons the three-pillar model is over.

CHALLENGER

Prometheus + Thanos/Mimir

Pull-based scraper · PromQL only · 5-8 components to scale

  • High ops overhead at scale
  • Siloed metrics workflow for broader observability
  • Single-value time series — no multi-value or wide event model
  • Deep analysis requires exporting to a separate data warehouse
VS

GREPTIMEDB

GreptimeDB

Metrics + Logs + Traces · SQL + PromQL · Native object storage

  • One database, stateless scale-out
  • SQL + PromQL on top of object storage
  • Multi-value rows (wide events), not just single-value series
  • Replace Prometheus + data warehouse with one system
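To make the data-model difference concrete, here is a minimal SQL sketch following GreptimeDB's documented DDL shape (the table and column names are hypothetical): one wide row carries several values per timestamp, where Prometheus would model the same host as three separate single-value series.

```sql
-- One multi-value table instead of three single-value series.
-- Table and column names are illustrative placeholders.
CREATE TABLE cpu (
  host STRING,
  usr  DOUBLE,
  sys  DOUBLE,
  idle DOUBLE,
  ts   TIMESTAMP TIME INDEX,
  PRIMARY KEY (host)
);

-- Correlate all three values in one pass with plain SQL.
SELECT host, avg(usr + sys) AS busy
FROM cpu
WHERE ts > now() - INTERVAL '5 minutes'
GROUP BY host;
```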
Architecture comparison

Why scaling Prometheus means adding components - and why GreptimeDB doesn't.

Prometheus Stack

5-8 COMPONENTS

Prometheus (scraper + local TSDB)

remote write

Thanos Sidecar / Mimir Distributor

compact + index

Thanos Compactor + Store Gateway

query fan-out

Thanos Querier / Mimir Query Frontend

long-term

S3 object storage (via Thanos/Mimir)

GreptimeDB

1 DATABASE

Frontend node (stateless, auto-scale)

compute-storage disaggregation

Datanode (compute, stateless)

native object storage

  • PromQL + SQL in same query
  • Metrics + Logs + Traces same engine
  • Scale compute and storage independently
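The "PromQL + SQL in same query" point refers to GreptimeDB's TQL extension, which lets a SQL session evaluate PromQL expressions. A sketch, assuming the `TQL EVAL (start, end, step)` form from the docs and a hypothetical `http_requests_total` metric:

```sql
-- Evaluate a PromQL expression from a SQL session.
-- Timestamps and step are placeholders.
TQL EVAL (1676738180, 1676738780, '30s')
  sum(rate(http_requests_total[1m]));
```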
Feature comparison
| Dimension | GreptimeDB | Prometheus | Thanos / Mimir |
| --- | --- | --- | --- |
| Query language | SQL + PromQL (dual) | PromQL only | PromQL only |
| Data model | Multi-value rows (wide events) | Single-value time series | Single-value time series |
| Data types | Metrics + Logs + Traces | Metrics only | Metrics only |
| Storage | Native object storage (S3, OSS, GCS) | Local disk | Object storage via sidecar (Thanos) or native (Mimir) |
| Scaling model | Compute-storage disaggregation, stateless | Federation only | Multi-component (ops heavy) |
| OpenTelemetry | Native OTLP (all signals) | Metrics only (remote write) | Metrics only (Thanos); OTLP metrics supported (Mimir) |
| Continuous aggregation | Built-in SQL aggregation + Flow streaming engine | Recording rules (limited) | Recording rules (limited) |
| High availability | Native clustering with automatic failover | Requires manual federation | Multi-component HA setup |
| License | Apache 2.0 | Apache 2.0 | Thanos: Apache 2.0; Mimir: AGPLv3 |
| Migration effort | PromQL-compatible, remote write ready | — | Requires infra redesign |

Thanos and Mimir have different architectures. This column summarizes common patterns.
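The "Flow streaming engine" row can be sketched with GreptimeDB's `CREATE FLOW` statement, which continuously materializes an aggregation into a sink table. The table names here are hypothetical and the exact windowing syntax is an assumption; check the Flow documentation for your version:

```sql
-- Continuously roll raw requests up into per-minute counts.
-- requests_1m and http_requests are illustrative placeholders.
CREATE FLOW requests_per_minute
SINK TO requests_1m
AS
SELECT
  host,
  count(*) AS total,
  date_bin(INTERVAL '1 minute', ts) AS time_window
FROM http_requests
GROUP BY host, time_window;
```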

Migration path - as fast as one week

PromQL-compatible. Remote write ready. No rewrite required.

Remote Write redirect

Docs

Point Prometheus remote_write at the GreptimeDB endpoint. Zero downtime. Existing scrape configs untouched.

30 min
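In prometheus.yml this is a single block; scrape configs stay as they are. The host, port, and database name below are placeholders (port 4000 is GreptimeDB's default HTTP port per its docs; verify the endpoint path for your deployment):

```yaml
# prometheus.yml: only remote_write changes.
# Host, port, and db value are placeholders.
remote_write:
  - url: "http://greptimedb:4000/v1/prometheus/write?db=public"
```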

Grafana datasource

Docs

Replace the Grafana datasource: GreptimeDB exposes a Prometheus-compatible API, so existing dashboards work immediately.

1 hour
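With Grafana's datasource provisioning, the swap is one YAML file pointing the standard Prometheus datasource type at GreptimeDB. The URL path is an assumption based on GreptimeDB's Prometheus-compatible API; verify it against the docs for your version:

```yaml
# grafana/provisioning/datasources/greptimedb.yaml
# Host and URL path are placeholders.
apiVersion: 1
datasources:
  - name: GreptimeDB
    type: prometheus
    access: proxy
    url: http://greptimedb:4000/v1/prometheus
```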

Backfill history

Export historical data from Thanos snapshots or promtool. Bulk insert via gRPC without impacting live writes.

1-3 days

Decommission Thanos

After data validation, shut down Thanos Sidecar, Compactor, and Store Gateway one by one.

2 weeks

DeepXplore
CEO
We replaced Thanos with GreptimeDB for Prometheus long-term storage. Queries that used to crawl now return significantly faster, and we no longer manage Thanos sidecars, compactors, and store gateways.
