Last November, we shared the key highlights and release plan for GreptimeDB v1.0 and shipped four beta releases. This week, we're publishing the first release candidate of v1.0.0 — marking the feature freeze and the start of the stability validation phase on the road to v1.0 GA.
Development Stats
Here are the development statistics for rc.1 (2025-12-29 → 2026-02-02, 35 days):
- 100 PRs merged, touching 595 files
- Code changes: +40,116 / −9,157
- 12 contributors
Breakdown of major improvements:
- 34 feature enhancements: region repartition, query trace improvements, `json_get` function enhancements, etc.
- 28 bug fixes: pipeline loading, timestamp index inference, histogram result corrections, etc.
- 4 refactors: removal of `region_numbers`, `CreateTableProcedure` refactoring, etc.
- 3 performance optimizations: primary key filter speedups, PromQL query optimizations, etc.
Thanks to all 12 contributors who participated in this release!

If you're interested in building observability databases, come join the GreptimeDB community — contributions of all kinds are welcome.
Highlights
Region Repartition
As workloads evolve, a table's data distribution and hotspot patterns can change. A single-partition table may need to scale out into a distributed partitioned table; a partitioned table may need additional partitions to handle heavier read/write loads; or small partitions may need to be merged to reduce data fragmentation.
RC1 introduces region repartition, allowing users to dynamically adjust partition rules and data distribution at runtime — without rebuilding tables or migrating data.
**Split** — Split one partition into multiple partitions. Useful when a single partition grows too large or a hotspot emerges:

```sql
ALTER TABLE sensor_readings SPLIT PARTITION (
  device_id < 100
) INTO (
  device_id < 100 AND area < 'South',
  device_id < 100 AND area >= 'South'
);
```

**Merge** — Merge multiple partitions into one. Useful when partitions are too fine-grained, wasting resources:

```sql
ALTER TABLE sensor_readings MERGE PARTITION (
  device_id < 100 AND area < 'South',
  device_id < 100 AND area >= 'South'
);
```

You can also control execution behavior with a WITH clause:
```sql
ALTER TABLE sensor_readings SPLIT PARTITION (
  device_id < 100
) INTO (
  device_id < 100 AND area < 'South',
  device_id < 100 AND area >= 'South'
) WITH (
  TIMEOUT = '5m',
  WAIT = false
);
```

- When `WAIT = false`, the statement returns a `procedure_id` immediately. You can check execution status via `ADMIN procedure_state(procedure_id)`.
- `TIMEOUT` sets the overall time limit for the operation.
Limitations:
- Only supported in distributed cluster deployments
- Shared object storage and GC must be enabled
- All Datanodes must have access to the same object storage
Metric Engine Primary Key Filter Optimization
RC1 introduces a fast-path optimization for primary key filtering in the Metric Engine. By comparing the byte-encoded representation of primary key values directly (using memcomparable), it avoids the overhead of per-value decoding and materialization.
Microbenchmark results:
| Scenario | After | Before | Speedup |
|---|---|---|---|
| eq/dense | 13.65 ns | 275.98 ns | 20.2× |
| gt/dense | 14.12 ns | 312.15 ns | 22.1× |
| lt_eq/dense | 12.77 ns | 268.72 ns | 21.0× |
| or_eq/dense | 14.16 ns | 1.29 µs | 91.1× |
| eq/sparse | 101.60 ns | 376.19 ns | 3.7× |
| gt/sparse | 108.18 ns | 370.65 ns | 3.4× |
With the default dense codec, primary key filtering is 20–90× faster. Sparse codec scenarios also see 3–11× improvements.
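The core trick behind this fast path is that a memcomparable encoding maps values to byte strings whose lexicographic order matches the values' natural order, so filters can be evaluated on raw encoded keys without decoding each value first. A minimal Python sketch of the idea (illustrative only — not GreptimeDB's actual codec):

```python
import struct

def encode_u64(value: int) -> bytes:
    """Big-endian, fixed-width encoding: byte-wise lexicographic
    comparison of the output matches numeric comparison of the input."""
    return struct.pack(">Q", value)

# A filter like `device_id < 100` can now be evaluated by comparing
# encoded keys directly, skipping per-value decoding and materialization.
threshold = encode_u64(100)
keys = [encode_u64(v) for v in (7, 99, 100, 3500)]
matches = [k < threshold for k in keys]
```

Because the comparison is a plain byte-wise `memcmp` over short strings, it sidesteps the decode-and-materialize work that dominates the "Before" column above.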
Other Improvements
- `json_get` enhancement: The `json_get` UDF now supports user-specified return types.
- PromQL optimization: Optimized the PromQL planner to use TSID, skipping unnecessary label columns.
- Query trace tuning: Improved query tracing for better observability into query execution.
- BulkMemtable optimization: Part compaction no longer requires encoding to Parquet, reducing overhead.
- MySQL/PostgreSQL protocol: Added more functions for improved compatibility.
- Flow enhancement: Added support for the `last_non_null` function.
- PromQL parser upgrade: Partial compatibility with Prometheus 3.0 syntax.
Notable Bug Fixes
- Write stall: Fixed an issue where flush logic could cause an unrecoverable write stall under certain conditions.
- Pipeline loading: Fixed pipeline loading failures.
- Timestamp index inference: Fixed incorrect timestamp index inference.
- Histogram results: Corrected histogram query result calculations.
- JSON Unicode parsing: Fixed parsing of JSON strings containing Unicode code point literals.
- `COPY FROM`: Fixed a bug where `COPY FROM` might not correctly import CSV data.
Compatibility Notes
This release contains the following breaking changes.
1. Heartbeat Configuration Change
Heartbeat configuration is now managed centrally by Metasrv. If you previously configured `[heartbeat]` settings in `datanode.toml`, you need to remove them.
Heartbeat intervals are now controlled via Metasrv's `heartbeat_interval` option:
- Frontend heartbeat interval = 6 × `heartbeat_interval`
- Flownode/Datanode heartbeat interval = 1 × `heartbeat_interval`
For example, if `heartbeat_interval` is set to 3s, the Frontend heartbeat interval will be 18s, and the Flownode/Datanode heartbeat interval will be 3s.
In most cases, the default value works well and no changes are needed.
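The derived intervals follow mechanically from the multipliers above; a tiny sketch (the helper name is ours, the multipliers come from this compatibility note):

```python
def derived_heartbeat_intervals(base_seconds: float) -> dict[str, float]:
    # Multipliers per the compatibility note:
    # Frontend = 6 x heartbeat_interval; Flownode/Datanode = 1 x.
    return {
        "frontend": 6 * base_seconds,
        "flownode": 1 * base_seconds,
        "datanode": 1 * base_seconds,
    }

intervals = derived_heartbeat_intervals(3)  # heartbeat_interval = "3s"
```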
2. Removal of region_numbers Field
The `TableMeta.region_numbers` field has been removed.
Impact: After upgrading, downgrading to an older version may cause compatibility issues. Older versions may fail when reading metadata produced by the new version, as the `region_numbers` field no longer exists.
Closing
For the full changelog, see the GitHub Release.
Thanks to all contributors and users. We'll continue working toward 1.0 GA as planned.


