Releases · pingcap/tiup
v1.3.5
v1.3.4
v1.3.3
Fixes
- Fix the issue that tiup will hang forever when reloading a stopped cluster (#1044, @9547)
- Fix the issue that `tiup mirror merge` does not work on official offline packages (#1121, @lucklove)
- Fix the issue that there may be no retry when downloading a component fails (#1137, @lucklove)
- Fix the issue that PD dashboard does not report grafana address in playground (#1142, @9547)
- Fix the issue that the default selected version may be a prerelease version (#1128, @lucklove)
- Fix the issue that the error message is confusing when the patched tarball is not correct (#1175, @lucklove); see the patch example after this list
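For context on where such a tarball is used, here is a minimal, hedged sketch of hot-patching a component with `tiup cluster patch`; the cluster name `test-cluster` and the tarball path are hypothetical placeholders.

```bash
# Apply a replacement binary (packaged as a .tar.gz) to all TiDB instances
# of the hypothetical cluster "test-cluster".
# Use -N host:port instead of -R to patch a single node;
# add --overwrite to keep the patch across later upgrades.
tiup cluster patch test-cluster /path/to/tidb-hotfix.tar.gz -R tidb
```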
Improvements
- Add a hint in the install script that darwin-arm64 is not supported (#1123, @terasum)
- Improve playground welcome information for connecting TiDB (#1133, @dveeden)
- Bind the latest stable grafana and prometheus when deploying DM (#1129, @lucklove)
- Use the advertised host instead of 0.0.0.0 for tiup-playground (#1152, @9547)
- Check the tarball checksum on tiup-server when publishing a component (#1163, @lucklove)
v1.3.2
Fixes
- Fix the issue that the grafana and alertmanager targets are not set in prometheus.yaml (#1041, @9547)
- Fix the issue that grafana deployed by tiup-dm is missing home.json (#1056, @lucklove)
- Fix the issue that the expiration of a cloned mirror is shortened after publishing a component to it (#1051, @lucklove)
- Fix the issue that tiup-cluster may remove wrong paths for imported cluster on scale-in (#1068, @AstroProfundis)
  - Risk of this issue: if an imported cluster has a deploy dir ending with `/`, and sub dirs like `<deploy-dir>//sub`, wrong paths could be deleted on scale-in
- Fix the issue that imported `*_exporter` has wrong binary path (#1101, @AstroProfundis)
Improvements
v1.3.1
Fixes
- Workaround the issue that store IDs in PD may not be monotonically assigned (#1011, @AstroProfundis)
  - Currently, the ID allocator is guaranteed not to allocate duplicated IDs, but when the PD leader changes multiple times, the IDs may not be monotonic
  - For tiup < v1.2.1, the command `tiup cluster display` may delete stores (without confirmation) by mistake due to this issue (high risk)
  - For tiup >= v1.2.1 and <= v1.3.0, the command `tiup cluster display` may display `up` stores as `tombstone` and encourage the user to delete them with the command `tiup cluster prune` (medium risk)
- Fix the issue that `cluster check` always fails on the THP check even though THP is disabled (#1005, @lucklove)
- Fix the issue that the command `tiup mirror merge -h` outputs wrong usage (#1008, @lucklove)
  - The syntax of this command should be `tiup mirror merge <mirror-dir-1> [mirror-dir-N]`, but it outputs `tiup mirror merge <base> <mirror-dir-1> [mirror-dir-N]`; see the merge example after this list
- Fix the issue that prometheus doesn't collect drainer metrics (#1012, @SE-Bin)
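To illustrate the corrected usage, here is a minimal, hedged sketch of merging mirrors; the mirror directory names are hypothetical placeholders.

```bash
# Point tiup at the mirror that should receive the merged components
# (a local mirror directory; the name is hypothetical).
tiup mirror set ./base-mirror

# Merge one or more other mirror directories into the current mirror;
# note there is no separate <base> argument on the command line.
tiup mirror merge ./extra-mirror-1 ./extra-mirror-2
```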
Improvements
- Optimize the case where the display command waits for a long time because the status cannot be obtained in time when network problems occur (#986, @9547)
- Cluster and dm component support version input without leading 'v' (#1009, @AstroProfundis)
- When a user tries to clean logs with the command `tiup cluster clean --logs`, add a warning to explain that we will stop the cluster before cleaning the logs (#1029, @lucklove); a usage sketch follows this list
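A minimal, hedged sketch of the clean workflow this warning applies to; the cluster name `test-cluster` is a hypothetical placeholder.

```bash
# Remove only the log files of the hypothetical cluster "test-cluster";
# tiup-cluster now warns that the cluster will be stopped before cleaning.
tiup cluster clean test-cluster --logs

# Start the cluster again afterwards.
tiup cluster start test-cluster
```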
v1.3.0
New Features
- Modify TiFlash's query memory limit from 10 GB to 0 (unlimited) in the playground cluster (#907, @LittleFall)
- Import configuration into topology meta when migrating a cluster from Ansible (#766, @yuzhibotao)
  - Before, we stored the imported ansible config in ansible-imported-configs, which is hidden from users; in this release, we merge the configs into meta.yaml so that the user can view the config with the command `tiup cluster edit-config`
- Enhance the `tiup mirror` command (#860, @lucklove); see the mirror workflow sketch after this list
  - Support merging two or more mirrors into one
  - Support publishing components to a local mirror besides the remote mirror
  - Support adding component owners to a local mirror
- Partially support deploying a cluster with hostnames besides IP addresses (EXPERIMENTAL) (#948, #949, @fln)
  - Not usable for production, as there would be issues if a hostname resolves to a new IP address after deployment
- Support setting a custom timeout for waiting for instances to come up in the playground cluster (#968, @unbyte)
- Support checking and disabling THP in `tiup cluster check` (#964, @anywhy)
- Support signing remote manifests and rotating root.json (#967, @lucklove)
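A minimal, hedged sketch of the local-mirror workflow these enhancements enable; the directory names, owner id, and component tarball are hypothetical placeholders.

```bash
# Work against a local mirror directory (hypothetical path).
tiup mirror set ./local-mirror

# Merge another mirror's components into the current local mirror.
tiup mirror merge ./other-mirror

# Add a new component owner to the local mirror.
tiup mirror grant example-owner --name "Example Owner"

# Publish a component tarball to the current (local) mirror;
# arguments: <component> <version> <tarball> <entry-point>.
tiup mirror publish hello v0.0.1 ./hello-v0.0.1-linux-amd64.tar.gz hello
```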
Fixes
- Fix the issue that the public key created by TiUP was not removed after the cluster was destroyed (#910, @9547)
- Fix the issue that the user-defined grafana username and password are not imported from the tidb-ansible cluster correctly (#937, @AstroProfundis)
- Fix the issue that the playground cluster does not quit components in the correct order: TiDB -> TiKV -> PD (#933, @unbyte)
- Fix the issue that TiKV reports a wrong advertise address when `--status-addr` is set to a wildcard address like `0.0.0.0` (#951, @lucklove)
- Fix the issue that Prometheus doesn't reload targets after the scale-in action (#958, @9547)
- Fix the issue that the config file for TiFlash is missing in the playground cluster (#969, @unbyte)
- Fix the issue that TiFlash startup fails without stderr output when numa is enabled but numactl cannot be found (#984, @lucklove)
- Fix the issue that the deployment environment fails to copy the config file when zsh is configured (#982, @9547)
Improvements
- Enable memory buddyinfo monitoring on node_exporter to collect and expose statistics of memory fragmentation (#904, @9547)
- Move error logs dumped by tiup-dm and tiup-cluster to `${TIUP_HOME}/logs` (#908, @9547)
- Allow running a pure TiKV cluster (without TiDB) in the playground (#926, @sticnarf)
- Add confirm stage for upgrade action (#963, @Win-Man)
- Omit debug log from console output in tiup-cluster (#977, @AstroProfundis)
- Prompt list of paths to be deleted before processing in the clean action of tiup-cluster (#981, #993, @AstroProfundis)
- Make error message of monitor port conflict more readable (#966, @JaySon-Huang)
v1.2.5
Fixes
- Fix the issue that TiUP can't operate a cluster which has tispark workers but no tispark master (#924, @AstroProfundis)
  - Root cause: once the tispark master has been removed from the cluster, any later action will be rejected by TiUP
  - Fix: make it possible for broken clusters to fix the "no tispark master" error by scaling out a new tispark master node, as sketched after this list
- Fix the issue that it reports `pump node id not found` while the drainer node id is not found (#925, @lucklove)
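A minimal, hedged sketch of recovering such a cluster by scaling out a tispark master; the cluster name, host, and topology file are hypothetical placeholders.

```bash
# Describe the new tispark master in a scale-out topology (hypothetical host).
cat > scale-out-tispark.yaml <<EOF
tispark_masters:
  - host: 10.0.1.20
EOF

# Scale out the hypothetical cluster "test-cluster" with the new node;
# after this, other cluster operations are accepted again.
tiup cluster scale-out test-cluster scale-out-tispark.yaml
```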
Improvements
- Support deploy TiFlash on multi-disks with "storage" configurations since v4.0.9 (#931, #938, @JaySon-Huang)
- Check for duplicated pd_servers.name in the topology before actually deploying the cluster (#922, @anywhy)
v1.2.4
Fixes
- Fix the issue that Pump & Drainer have different node ids between tidb-ansible and TiUP (#903, @lucklove)
  - For a cluster imported from tidb-ansible, if the pump or drainer is restarted, it will start with a new node id
  - Risk of this issue: binlog may not work correctly after restarting pump or drainer
- Fix the issue that the audit log may get lost in some special cases (#879, #882, @9547)
  - If the user executes two commands one after the other and the second one quits within 1 second, the audit log of the first command will be overwritten by the second one
  - Risk caused by this issue: some audit logs may get lost in the above case
- Fix the issue that new components deployed with `tiup cluster scale-out` don't auto-start when rebooting (#905, @9547)
  - Risk caused by this issue: the cluster may be unavailable after rebooting
- Fix the issue that data directory of tiflash is not deleted if multiple data directories are specified (#871, @9547)
- Fix the issue that `node_exporter` and `blackbox_exporter` are not cleaned up after scaling in all instances on the specified host (#857, @9547)
- Fix the issue that the patch command will fail when trying to patch a dm cluster (#884, @lucklove)
- Fix the issue that the bench component reports `Error 1105: client has multi-statement capability disabled` (#887, @mahjonp)
- Fix the issue that the TiSpark node can't be upgraded (#901, @lucklove)
- Fix the issue that tiup-playground can't start TiFlash with newest nightly PD (#902, @lucklove)
Improvements
- Ignore the no-tispark-master error when listing clusters, since the master node may be removed by `scale-in --force` (#920, @AstroProfundis)
v1.2.3
v1.2.1
Risk Events
A critical bug introduced in v1.0.0 has been fixed in v1.0.8: if the user wants to scale in some TiKV nodes with the command `tiup cluster scale-in` of tiup-cluster, TiUP may delete TiKV nodes by mistake, causing TiDB cluster data loss.
The root cause:
- TiUP treats these TiKV nodes' state as `tombstone` by mistake and reports an error that confuses the user.
- The user then executes the command `tiup cluster display` to confirm the real state of the cluster, but the `display` command also shows these TiKV nodes in `tombstone` state.
- What's worse, the `display` command destroys tombstone nodes automatically, with no user confirmation required, so these TiKV nodes were destroyed by mistake.
To prevent this, we introduce a safer, manual way to clean up tombstone nodes in this release.
Improvements
- Introduce a safer way to clean up tombstone nodes (#858, @lucklove); see the workflow sketch after this list
  - When a user scales in a TiKV server, its data is not deleted until the user executes a `display` command; this is risky because the user is given no chance to confirm
  - We have added a `prune` command for the cleanup stage; the `display` command will not clean up tombstone instances any more
- Skip auto-starting the cluster before the scale-out action because there may be some damaged instances that can't be started (#848, @lucklove)
  - In this version, the user should make sure the cluster is working correctly by themselves before executing `scale-out`
- Introduce a more graceful way to check TiKV labels (#843, @lucklove)
  - Before this change, we checked TiKV labels from the config files of TiKV and PD servers; however, servers imported from a tidb-ansible deployment don't store the latest labels in their local config, which causes inaccurate label information
  - After this change, we fetch PD and TiKV labels with the PD API in the display command
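A minimal, hedged sketch of the new tombstone cleanup workflow; the cluster name and node address are hypothetical placeholders.

```bash
# Scale in a TiKV node of the hypothetical cluster "test-cluster";
# the store becomes tombstone once region migration finishes.
tiup cluster scale-in test-cluster --node 10.0.1.11:20160

# Check the node state; display no longer destroys tombstone nodes automatically.
tiup cluster display test-cluster

# Explicitly clean up tombstone instances once the state is confirmed.
tiup cluster prune test-cluster
```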
Fixes
- Fix the issue that there is a data race when the same file is saved concurrently (#836, @9547)
  - We found that when a cluster was deployed with TLS enabled, the ca.crt file was saved multiple times in parallel, which could leave the ca.crt file empty
  - The impact of this issue is that the tiup client may not be able to communicate with the cluster
- Fix the issue that files copied by TiUP may have a different mode from the original files (#844, @lucklove)
- Fix the issue that the tiup script is not updated after scaling in PD (#824, @9547)