
GreptimeDB vs. Grafana Loki

Loki hitting a performance ceiling? Labels-only indexing was a clever trade-off — until your team needed to search log bodies at scale.

Promtail pushed it.
Loki indexed labels only.

Grafana Loki is a horizontally scalable, multi-tenant log aggregation system inspired by Prometheus. Loki indexes only label metadata — not the log content itself. This keeps storage cheap, but means every log body query is a brute-force scan. At scale, that trade-off becomes a ceiling: large queries time out, search is limited to minutes of data, and ad hoc troubleshooting slows down. Read the [full performance comparison report](/blogs/2025-08-07-beyond-loki-greptimedb-log-scenario-performance-report) between GreptimeDB and Loki.
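The trade-off can be sketched with a toy in-memory model (hypothetical data structures, not Loki's actual internals): a label index answers stream-selection queries without ever reading chunk bodies, but any query on log content must scan every candidate chunk line by line.

```python
# Toy model of label-only indexing (illustrative, not Loki internals).
chunks = [
    {"labels": {"app": "api", "env": "prod"}, "lines": ["GET /health 200", "timeout calling db"]},
    {"labels": {"app": "web", "env": "prod"}, "lines": ["GET / 200", "GET /login 200"]},
    {"labels": {"app": "api", "env": "dev"},  "lines": ["timeout calling cache"]},
]

# The index maps label pairs -> chunk ids; it never sees log bodies.
index = {}
for i, c in enumerate(chunks):
    for pair in c["labels"].items():
        index.setdefault(pair, set()).add(i)

def select_streams(**labels):
    """Label query: resolved entirely from the index (cheap)."""
    ids = set(range(len(chunks)))
    for pair in labels.items():
        ids &= index.get(pair, set())
    return ids

def grep_bodies(needle, **labels):
    """Body search: every candidate chunk must be scanned line by line."""
    hits = []
    for i in sorted(select_streams(**labels)):
        hits += [ln for ln in chunks[i]["lines"] if needle in ln]
    return hits
```

Label selection stays index-only, but `grep_bodies` touches every line of every matching stream — which is why narrowing labels first is mandatory in Loki, and why unlabeled keyword search degrades as volume grows.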

CHALLENGER

Grafana Loki

Label-first logging stack that scales well but limits ad hoc text analytics

  • Label-only index — log body search is brute-force at scale
  • Large queries time out, search range limited to minutes
  • Separate systems still needed for metrics and traces
VS

GREPTIMEDB

GreptimeDB

Full-text index + SQL for logs, unified with metrics and traces

  • Full-text index — keyword search 40-80x faster in [benchmark tests](/blogs/2025-08-07-beyond-loki-greptimedb-log-scenario-performance-report)
  • Sub-second queries across hours/days of log data
  • Unified engine for logs, metrics, and traces in one system
Architecture comparison

Why scaling Loki-based logging adds pipeline complexity - and why GreptimeDB stays simpler.

Loki Stack

4-7 COMPONENTS

Promtail / Fluent Bit / Vector

collect + parse + ship

Loki Distributor + Ingester

write + chunk

Index Gateway + Querier

fan-out query

Compactor + Object Storage

retention + optimize

GreptimeDB

1 DATABASE

Frontend node (stateless, auto-scale)

query + ingest gateway

Datanode (compute, stateless)

native object storage

  • Drop-in migration path via Loki Push API compatibility
  • Full-text + structured query in SQL
  • Scale compute and storage independently
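As an illustration of the SQL side, a full-text keyword search can look like the following. The table and column names are hypothetical; `matches` is GreptimeDB's full-text search function per its documentation, assuming a full-text index on the `message` column:

```sql
-- Hypothetical table `app_logs` with a full-text-indexed `message` column.
SELECT ts, host, message
FROM app_logs
WHERE matches(message, 'timeout')          -- full-text keyword search
  AND ts >= now() - INTERVAL '24 hours'    -- hours of data, not minutes
ORDER BY ts DESC
LIMIT 100;
```

The same statement can mix full-text predicates with ordinary structured filters (`host`, time range), which is the "full-text + structured query in SQL" point above.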
Feature comparison
| Dimension | GreptimeDB | Grafana Loki |
| --- | --- | --- |
| Indexing strategy | Full-text index + inverted index + secondary index | Label-based indexing only (no full-text) |
| Query language | SQL + PromQL (dual interface) | LogQL |
| Data types | Metrics + logs + traces in one database | Logs only (separate systems for metrics/traces) |
| Query performance | Sub-second with full-text search (40-80x on keyword queries) | Fast label queries, brute-force text search at scale |
| Log processing | Built-in pipeline engine for parsing and transformation | Basic parsing with structured metadata |
| Storage format | Apache Parquet (columnar, compressed) | Custom chunks (compressed log streams) |
| Storage architecture | Compute-storage disaggregation, native object storage | Distributed with object storage backends |
| Ingestion protocols | SQL, gRPC, OTLP, Loki Push API, Elasticsearch Bulk API, HTTP | HTTP Push API, Promtail, Fluent Bit, Vector |
| OpenTelemetry | Native OTLP (all signals) | Native OTLP log ingestion; query model still label-index-based |
| License | Apache 2.0 | AGPL 3.0 |
| Operational complexity | Single system for all observability data | Requires Mimir + Tempo for complete observability |

Performance data from benchmark tests. Results vary by workload and configuration. See the full benchmark report.

Migration path - as fast as one week

Loki Push API compatible. Keep existing agents, migrate step by step.

Redirect ingest endpoint

Docs

Switch Promtail / Fluent Bit / Vector output to the GreptimeDB Loki Push API endpoint with no downtime.
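With Promtail, for example, the change is a one-line client URL swap. The GreptimeDB endpoint below is illustrative — check the GreptimeDB docs for the exact Loki Push API path and port in your deployment:

```yaml
# promtail-config.yaml (fragment) -- only the client URL changes.
clients:
  # Before: - url: http://loki:3100/loki/api/v1/push
  - url: http://greptimedb:4000/v1/loki/api/v1/push   # illustrative endpoint
    external_labels:
      source: promtail
```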

30 min

Grafana datasource

Docs

Point the Grafana datasource at GreptimeDB. Existing Explore workflows and dashboards continue to work.
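One low-friction option, since GreptimeDB is MySQL wire-compatible, is a provisioned MySQL datasource; the host, port, and database below are assumptions for a default deployment, and a dedicated GreptimeDB datasource plugin is another route:

```yaml
# grafana/provisioning/datasources/greptimedb.yaml (illustrative)
apiVersion: 1
datasources:
  - name: GreptimeDB
    type: mysql                # GreptimeDB speaks the MySQL wire protocol
    url: greptimedb:4002       # default MySQL-protocol port; adjust per deployment
    database: public
```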

1 hour

Backfill historical logs

Export log chunks from object storage and bulk import to GreptimeDB while live ingestion continues.
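A backfill loop of this shape can be sketched in a few lines. The export reader and ingest call are placeholders, not a real Loki or GreptimeDB client; the point is fixed-size batching so the bulk import does not starve live ingestion:

```python
# Sketch of a batched backfill: read an export stream of log lines and ship
# them in fixed-size batches. All names here are illustrative.
from typing import Iterable, Iterator

def batched(lines: Iterable[str], size: int) -> Iterator[list[str]]:
    """Group an export stream into bulk-import batches of `size` lines."""
    batch: list[str] = []
    for line in lines:
        batch.append(line)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:
        yield batch  # trailing partial batch

def backfill(lines: Iterable[str], size: int, send) -> int:
    """Push each batch via `send` (e.g. a bulk-import HTTP call); return count."""
    sent = 0
    for batch in batched(lines, size):
        send(batch)
        sent += len(batch)
    return sent

# Stand-in for a real export + ingest client: collect batches in memory.
shipped: list[list[str]] = []
n = backfill((f"log {i}" for i in range(7)), 3, shipped.append)
```

In a real migration, `send` would be a rate-limited bulk-import request and `lines` would come from the exported Loki chunks, so the loop can run alongside live ingestion until the two backends converge.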

1-3 days

Decommission Loki cluster

After data validation, gradually turn off Loki components and keep GreptimeDB as the primary log backend.

2 weeks

OceanBase
Staff Engineer
Migrating from Loki to GreptimeDB enables high-performance querying of massive log data at scale, offers multi-cloud deployment flexibility, and significantly simplifies application and deployment architecture.
