Releases · tikv/pd
pd-server v2.1.0
- Optimize availability
  - Introduce the version control mechanism and support compatible rolling updates of the cluster
  - Enable `Raft PreVote` among PD nodes to avoid leader re-election when the network recovers after network isolation
  - Enable `raft learner` by default to lower the risk of unavailable data caused by machine failure during scheduling
  - TSO allocation is no longer affected by the system clock going backwards (see the allocator sketch after this list)
  - Support the `Region merge` feature to reduce the overhead brought by metadata
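The clock-safety guarantee falls out of the TSO layout: the low 18 bits are a logical counter and the high bits are a physical timestamp in milliseconds. Below is a minimal sketch of a monotonic allocator under that layout; the names are hypothetical, and PD's real allocator also persists a timestamp high-water mark to etcd, which is omitted here.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// TSO layout: physical milliseconds in the high bits, an 18-bit
// logical counter in the low bits.
const logicalBits = 18

// Allocator is a hypothetical, simplified TSO allocator.
type Allocator struct {
	mu           sync.Mutex
	lastPhysical int64 // milliseconds
	logical      uint64
}

// Next returns a strictly increasing timestamp even if the system
// clock jumps backwards: the physical part is then frozen and the
// logical counter advances instead. (A real allocator must also handle
// the logical counter overflowing its 18 bits.)
func (a *Allocator) Next() uint64 {
	a.mu.Lock()
	defer a.mu.Unlock()

	physical := time.Now().UnixNano() / int64(time.Millisecond)
	if physical > a.lastPhysical {
		a.lastPhysical = physical
		a.logical = 0
	} else {
		a.logical++ // clock went backwards or did not advance
	}
	return uint64(a.lastPhysical)<<logicalBits | a.logical
}

func main() {
	var a Allocator
	first, second := a.Next(), a.Next()
	fmt.Println(second > first) // always true
}
```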
- Optimize the scheduler
  - Optimize the handling of down stores to speed up making up replicas
  - Optimize the hotspot scheduler to improve its adaptability when traffic statistics jitter
  - Optimize the start of Coordinator to reduce the unnecessary scheduling caused by restarting PD
  - Optimize the issue that Balance Scheduler schedules small Regions frequently (see the filter sketch after this list)
  - Optimize Region merge to consider the number of rows within the Region
  - Add more commands to control the scheduling policy
  - Improve the PD simulator to simulate scheduling scenarios
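Several of these scheduler changes come down to filtering candidate Regions before generating operators. Here is a minimal illustrative sketch; the types are hypothetical, not PD's actual `Filter` interface (which also considers store state, labels, and pending peers), but it shows how a size threshold keeps the Balance Scheduler away from tiny Regions and where per-filter metrics would hook in.

```go
package main

import "fmt"

// RegionInfo is a hypothetical, stripped-down view of a Region.
type RegionInfo struct {
	ID     uint64
	SizeMB int64
}

// Filter rejects Regions a scheduler should not touch.
type Filter interface {
	Name() string
	Skip(r *RegionInfo) bool
}

// minSizeFilter skips Regions below a size threshold so the Balance
// Scheduler does not waste operators on near-empty Regions.
type minSizeFilter struct{ minMB int64 }

func (f minSizeFilter) Name() string            { return "min-size" }
func (f minSizeFilter) Skip(r *RegionInfo) bool { return r.SizeMB < f.minMB }

// candidates returns the Regions that pass every filter. A real
// implementation would bump a per-filter counter on each rejection,
// which is what Filter-related metrics would expose.
func candidates(regions []*RegionInfo, filters ...Filter) []*RegionInfo {
	var out []*RegionInfo
next:
	for _, r := range regions {
		for _, f := range filters {
			if f.Skip(r) {
				continue next
			}
		}
		out = append(out, r)
	}
	return out
}

func main() {
	regions := []*RegionInfo{{ID: 1, SizeMB: 2}, {ID: 2, SizeMB: 96}}
	fmt.Println(len(candidates(regions, minSizeFilter{minMB: 20}))) // 1
}
```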
- API and operation tools
  - Add the `GetPrevRegion` interface to support the TiDB reverse scan feature (client usage sketched after this list)
  - Add the `BatchSplitRegion` interface to speed up TiKV Region splitting
  - Add the `GCSafePoint` interface to support distributed GC in TiDB
  - Add the `GetAllStores` interface to support distributed GC in TiDB
  - pd-ctl supports:
    - querying the Region information of a specific store
    - querying the topN Region information sorted by version
    - splitting Regions using statistics
    - formatting JSON output by calling `jq`
    - more accurate TSO decoding
  - pd-recover doesn't need to provide the `max-replica` parameter
  - Add the
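Three of these interfaces are reachable from the Go PD client as well as over gRPC. A minimal usage sketch, assuming `github.com/pingcap/pd/client`; the endpoint address, key, and safe point value are placeholders, and the signatures are approximate rather than authoritative:

```go
package main

import (
	"context"
	"fmt"
	"log"

	pd "github.com/pingcap/pd/client"
)

func main() {
	// Connect to PD; TLS options are left empty here.
	cli, err := pd.NewClient([]string{"127.0.0.1:2379"}, pd.SecurityOption{})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()
	ctx := context.Background()

	// GetPrevRegion: the Region immediately before the one containing
	// the key, which is what a TiDB reverse scan needs.
	region, leader, err := cli.GetPrevRegion(ctx, []byte("some-key"))
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(region.GetId(), leader.GetStoreId())

	// GetAllStores: every store in the cluster, so distributed GC can
	// resolve locks on each of them.
	stores, err := cli.GetAllStores(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(len(stores))

	// UpdateGCSafePoint: persist the GC safe point in PD so all
	// components agree on the oldest timestamp that must stay readable.
	safePoint, err := cli.UpdateGCSafePoint(ctx, 400000000000000000)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(safePoint)
}
```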
- Metrics
  - Add related metrics for `Filter`
  - Add metrics about etcd Raft state machine
  - Add related metrics for
- Performance
  - Optimize the performance of Region heartbeat to reduce the memory overhead brought by heartbeats
  - Optimize the Region tree performance
  - Optimize the performance of computing hotspot statistics
pd-server v2.0.9
pd-server v2.1.0-rc.5
pd-server v2.1.0-rc.4
- Fix the issue that the tombstone TiKV is not removed from Grafana #1261
- Fix the data race issue when grpc-go configures the status #1265
- Fix the issue that the PD server gets stuck caused by etcd startup failure #1267
- Fix the issue that data race might occur during leader switching #1273
- Fix the issue that extra warning logs might be output when TiKV becomes tombstone #1280
pd-server v2.1.0-rc.3
pd-server v2.1.0-rc.2
Features
- Support the `GetAllStores` interface
- Add the statistics of scheduling estimation in the simulator
Improvements
- Optimize the handling process of down stores to make up replicas as soon as possible
- Optimize the start of Coordinator to reduce the unnecessary scheduling caused by restarting PD
- Optimize the memory usage to reduce the overhead caused by heartbeats
- Optimize error handling and improve the log information
- Support querying the Region information of a specific store in pd-ctl
- Support querying the topN Region information based on version
- Support more accurate TSO decoding in pd-ctl (see the decoding sketch below)
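For reference, a TSO decomposes into a physical part (Unix milliseconds) in the high bits and an 18-bit logical counter in the low bits. A standalone sketch of the decoding that `pd-ctl tso` performs; the sample value is an arbitrary placeholder:

```go
package main

import (
	"fmt"
	"time"
)

const logicalBits = 18

// decodeTSO splits a TSO into its physical (wall-clock) and logical
// (per-millisecond counter) components.
func decodeTSO(ts uint64) (time.Time, uint64) {
	physicalMs := int64(ts >> logicalBits)
	logical := ts & (1<<logicalBits - 1)
	sec, ms := physicalMs/1000, physicalMs%1000
	return time.Unix(sec, ms*int64(time.Millisecond)), logical
}

func main() {
	physical, logical := decodeTSO(404599058331041793)
	fmt.Println(physical, logical)
}
```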
Bug fix
- Fix the issue that the `hot store` command in pd-ctl exits abnormally
pd-server v2.1.0-rc.1
Features
- Introduce the version control mechanism and support compatible rolling updates of the cluster
- Enable the `region merge` feature
- Support the `GetPrevRegion` interface
- Support splitting Regions in batch
- Support storing the GC safepoint
Improvements
- Optimize the issue that TSO allocation is affected by the system clock going backwards
- Optimize the performance of handling Region heartbeats
- Optimize the Region tree performance
- Optimize the performance of computing hotspot statistics
- Optimize the error codes returned by the API interfaces
- Add options of controlling scheduling strategies
- Prohibit using special characters in `label`
- Improve the scheduling simulator
- Support splitting Regions using statistics in pd-ctl
- Support formatting JSON output by calling `jq` in pd-ctl
- Add metrics about etcd Raft state machine
Bug fixes
- Fix the issue that the namespace is not reloaded after switching Leader
- Fix the issue that namespace scheduling exceeds the schedule limit
- Fix the issue that hotspot scheduling exceeds the schedule limit
- Fix the issue that wrong logs are output when the PD client closes
- Fix the wrong statistics of Region heartbeat latency
pd-server v2.0.5
Bug Fixes
- Fix the issue that replica migration uses up TiKV disk space in some scenarios
- Fix the crash issue caused by `AdjacentRegionScheduler`
pd-server v2.1.0-beta
Improvements
- Enable `Raft PreVote` between PD nodes to avoid leader re-election when the network recovers after network isolation
- Optimize the issue that Balance Scheduler schedules small Regions frequently
- Optimize the hotspot scheduler to improve its adaptability when traffic statistics jitter
- Skip the Regions with a large number of rows when scheduling `region merge`
- Enable `raft learner` by default to lower the risk of unavailable data caused by machine failure during scheduling
- Remove `max-replica` from `pd-recover`
- Add `Filter` metrics
Bug Fixes
- Fix the issue that Region information is not updated after tikv-ctl unsafe recovery
- Fix the issue that TiKV disk space is used up caused by replica migration in some scenarios
Compatibility notes
- Rolling back to v2.0.x or earlier is not supported because the storage engine format has been updated in this version
- `raft learner` is enabled by default in the new version of PD. If the cluster is upgraded from 1.x to 2.1, the machines must be stopped before the upgrade, or a rolling update must be applied to TiKV first and then to PD
pd-server v2.0.4
- Improve the behavior of the unset scheduling argument `max-pending-peer-count` by changing it to mean no limit on the maximum number of `PendingPeer`s
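In other words, an unset `max-pending-peer-count` (value 0) now skips the limit check entirely instead of imposing a cap. A hypothetical sketch of the resulting check, with names invented for illustration:

```go
package main

import "fmt"

// allowSchedule reports whether a store may accept another pending peer.
// A maxPendingPeerCount of 0 (the unset value) means "no limit" rather
// than a hard cap.
func allowSchedule(pendingPeers, maxPendingPeerCount uint64) bool {
	if maxPendingPeerCount == 0 {
		return true // unset: unlimited PendingPeers
	}
	return pendingPeers < maxPendingPeerCount
}

func main() {
	fmt.Println(allowSchedule(100, 0))  // true: unset means unlimited
	fmt.Println(allowSchedule(100, 16)) // false: explicit cap enforced
}
```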