GreptimeDB vs. ClickHouse

A great analytics engine: ClickStack adds OTLP ingestion for observability, but it sits apart from the core OLAP engine. GreptimeDB builds PromQL, OTLP, and trace support into one unified engine.

ClickHouse queries fast.
But observability needs more than a query engine.

ClickHouse is a high-performance columnar OLAP database in which time is just another dimension, not the organizing principle of storage. ClickStack adds OTLP ingestion and ClickHouse ships early-stage experimental PromQL, but both are additions layered on top of the OLAP core. Using ClickHouse for observability typically requires buffering middleware, ingestion workers, and transformation pipelines before data reaches the database. Read the full [log benchmark report](/blogs/2025-04-01-clickhhouse-greptimedb-log-monitoring) comparing GreptimeDB and ClickHouse in log scenarios.

CHALLENGER

ClickHouse

OLAP engine — observability runs on ClickStack, a separate layer

  • Time is just another column — storage layout not optimized for time-series access patterns
  • Observability ingestion typically needs Redis buffering + workers + transform pipelines
  • Early-stage PromQL (experimental, limited functions); ClickStack OTLP is separate from OLAP core
  • Log storage 50% larger in [log benchmark](/blogs/2025-04-01-clickhhouse-greptimedb-log-monitoring) (2.6GB vs 1.3GB)
VS

GREPTIMEDB

GreptimeDB

SQL + PromQL + OTLP + Jaeger — all built in

  • Timestamp-first storage layout — designed for time-series from the ground up
  • Native OTLP endpoint — SDK writes directly to database, no intermediate queue
  • Dynamic schema — new span attributes auto-create columns, no migration needed
  • 50% lower log storage with better compression
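To make the no-middleware write path concrete, the sketch below builds a minimal OTLP/HTTP JSON metrics payload of the kind an SDK or plain HTTP client can post straight to the database's OTLP endpoint. The field names follow the OTLP protobuf-JSON encoding; the metric name, attributes, and endpoint URL are illustrative assumptions, not values from this page.

```python
import json
import time

# Minimal OTLP/HTTP JSON metrics payload (protobuf-JSON encoding of
# ExportMetricsServiceRequest). Metric name and attributes are made up.
now_ns = int(time.time() * 1e9)
payload = {
    "resourceMetrics": [{
        "resource": {"attributes": [
            {"key": "service.name", "value": {"stringValue": "checkout"}}
        ]},
        "scopeMetrics": [{
            "metrics": [{
                "name": "http_requests_total",
                "sum": {
                    "aggregationTemporality": 2,  # CUMULATIVE
                    "isMonotonic": True,
                    "dataPoints": [{
                        "timeUnixNano": str(now_ns),
                        "asInt": "42",
                        "attributes": [
                            {"key": "status", "value": {"stringValue": "200"}}
                        ],
                    }],
                },
            }]
        }],
    }]
}

body = json.dumps(payload)
# A client would POST `body` with Content-Type: application/json to the
# database's OTLP endpoint (e.g. http://<host>:4000/v1/otlp/v1/metrics,
# an assumed path) with no queue or worker in between.
```

Because new attributes auto-create columns, adding a key to `attributes` in a later payload needs no schema migration.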
Architecture comparison

ClickHouse is OLAP-first. Observability runs on ClickStack, a separate layer.

ClickHouse Stack

4-8 COMPONENTS

  • Collectors / ETL / Kafka pipeline (ingest + transform)
  • ClickHouse shards + replicas (store + compute)
  • Materialized views + rollups (optimize queries)
  • Separate tooling for observability UX (dashboard + alert adaptation)

GreptimeDB

1 DATABASE

  • Frontend node (stateless, auto-scale): query + ingest gateway
  • Datanode (compute, stateless): backed by native object storage

  • PromQL + SQL support out of the box
  • Logs, metrics, and traces in one engine
  • Scale compute and storage independently
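To illustrate what "PromQL support" means in practice, the snippet below is a simplified Python model of what PromQL's `rate()` computes over a range window: the per-second increase of a monotonic counter. Real PromQL engines also extrapolate to the window boundaries and handle counter resets; the sample data here is invented.

```python
def simple_rate(samples):
    """Per-second increase of a monotonic counter over a window.

    `samples` is a list of (unix_ts, counter_value) pairs, oldest first.
    Simplified: real PromQL rate() also extrapolates to the window edges
    and handles counter resets.
    """
    (t0, v0), (t1, v1) = samples[0], samples[-1]
    return (v1 - v0) / (t1 - t0)

# Counter scraped every 15s: 0 -> 30 -> 60 requests over 30 seconds
samples = [(0, 0), (15, 30), (30, 60)]
print(simple_rate(samples))  # 2.0 requests/second
```

In a dual-query engine, the same table backing a query like `rate(http_requests_total[30s])` on the PromQL endpoint can also be read with an ordinary SQL range query.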
Feature comparison
| Dimension | GreptimeDB | ClickHouse |
| --- | --- | --- |
| Query language | SQL + PromQL (dual, native) | SQL primary; early-stage experimental PromQL support |
| Data types | Metrics + logs + traces in one database | OLAP; observability via ClickStack |
| PromQL support | Native PromQL query engine | Experimental (limited to basic functions such as rate/delta/increase) |
| OpenTelemetry | Native OTLP (all signals) | OTLP ingestion via ClickStack (separate from core OLAP engine) |
| Trace support | Native Jaeger Query API | Via ClickStack |
| Storage design | Timestamp-first layout, optimized for time-series access | Time is another column; general-purpose OLAP layout |
| Schema evolution | Dynamic: new attributes auto-create columns | Requires ALTER TABLE or schema migration |
| Storage format | Apache Parquet (columnar, compressed) | MergeTree engine family (columnar) |
| Log storage efficiency | 1.3GB (13% compression ratio) | 2.6GB (26% compression ratio), 2x larger |
| Storage backend | Native object storage (S3, OSS, GCS) | Local disk primary; S3 via cold storage |
| Ingestion pipeline | SDK → native OTLP endpoint → database (no middleware) | Typically requires buffering (Redis/Kafka) + ingestion workers |
| Scaling model | Compute-storage disaggregation, stateless | Shared-nothing sharding, stateful replicas |
| Continuous aggregation | Built-in SQL + Flow streaming engine | Materialized views + aggregating MergeTree |
| License | Apache 2.0 | Apache 2.0 |
| Integration effort | Out-of-the-box observability stack | ClickStack + configuration for observability |

Storage comparison from log benchmark tests. Results vary by workload and configuration.

Migration path - as fast as one week

Start with your highest-pressure signal and consolidate step by step.

Redirect new ingestion

Docs

Route incoming observability streams to GreptimeDB using compatible protocols while existing ClickHouse pipelines keep running.

30 min
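One common way to implement this step is dual-writing from an OpenTelemetry Collector: add a second exporter pointing at GreptimeDB while the existing ClickHouse pipeline keeps receiving data. A minimal sketch, assuming an `otlphttp`-capable gateway on the ClickHouse side; the endpoints are placeholders, and the GreptimeDB OTLP path is an assumption to be checked against the linked docs.

```yaml
receivers:
  otlp:
    protocols:
      http:

exporters:
  # Existing pipeline feeding ClickHouse (placeholder endpoint)
  otlphttp/clickstack:
    endpoint: http://clickstack-gateway:4318
  # New destination: GreptimeDB's native OTLP endpoint (assumed path)
  otlphttp/greptimedb:
    endpoint: http://greptimedb:4000/v1/otlp

service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [otlphttp/clickstack, otlphttp/greptimedb]
```

Once dashboards are verified against GreptimeDB, the ClickStack exporter can simply be dropped from the pipeline.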

Switch dashboard datasource

Docs

Update Grafana datasource and keep dashboards online with minimal query rewrites.

1 hour

Backfill historical data

Export historical partitions and bulk load into GreptimeDB for unified retention and query.

1-3 days
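A minimal sketch of the backfill flow, assuming partitions are first exported from ClickHouse to Parquet (e.g. via `clickhouse-client ... FORMAT Parquet`) and then bulk-loaded. The snippet only builds the HTTP request a loader would send to GreptimeDB's SQL-over-HTTP API; the host, endpoint path, bucket, table name, and exact `COPY` syntax are illustrative assumptions to verify against the docs.

```python
from urllib.parse import urlencode

# SQL a loader might run against GreptimeDB to bulk-load an exported
# Parquet file from object storage (table and path are made up).
copy_sql = (
    "COPY app_logs FROM 's3://backfill-bucket/logs-2024-01.parquet' "
    "WITH (FORMAT = 'parquet')"
)

# Build the query string a client would send to the SQL HTTP API
# (host and path are assumptions).
query = urlencode({"sql": copy_sql})
url = f"http://greptimedb:4000/v1/sql?{query}"
print(url)
```

Loading partition by partition lets you verify row counts and retention in GreptimeDB before cutting queries over.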

Retire custom glue

After verification, remove redundant ETL and protocol-conversion components built around ClickHouse.

2 weeks

Poizon
Staff Engineer
We built a cost-effective, real-time monitoring architecture with GreptimeDB. P99 query latency dropped from seconds to milliseconds after replacing multi-stage ETL pipelines with GreptimeDB's unified approach.
