Bug Report
What version of Kubernetes are you using?
Client Version: v1.31.1
Kustomize Version: v5.4.2
What version of TiDB Operator are you using?
v1.6.0
What did you do?
We deployed a TiDB cluster with 3 replicas each of PD, TiKV, and TiDB. After the cluster was initialized, we set `status.report-status` to `false` in `spec.tidb.config` and applied the change.
After the TiDB Operator successfully reconfigures the TiDB cluster, it loses connectivity to the cluster, mistakenly concludes that the cluster is unhealthy, and constantly tries to run failover. The failover spawns new pods; however, the operator still cannot contact the new pods.
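For reference, a minimal sketch of the change we applied. The cluster name and the surrounding fields are placeholders; only the relevant part of the `TidbCluster` spec is shown, using the TOML string form that `spec.tidb.config` accepts:

```yaml
apiVersion: pingcap.com/v1alpha1
kind: TidbCluster
metadata:
  name: basic                 # placeholder cluster name
spec:
  tidb:
    replicas: 3
    config: |
      [status]
      report-status = false   # disables TiDB's HTTP status API
```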
The health check fails at `tidb-operator/pkg/manager/member/tidb_member_manager.go` (line 303 at commit 24fa283), which constantly triggers the Failover function: with `report-status` disabled, TiDB no longer serves its HTTP status API, so the check can never succeed.
How to reproduce
Add `status.status-port` to the `spec.tidb.config` and apply the change.
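A sketch of that reproduction step, for the same placeholder cluster as above; the port value 10081 is only an example, and only the changed fragment of the spec is shown:

```yaml
spec:
  tidb:
    config: |
      [status]
      status-port = 10081   # example value; the default TiDB status port is 10080
```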
What did you expect to see?
We expected either that TiDB would restart and the new configuration would take effect, or that the TiDB Operator would reject the change, since the operator's operations depend on the TiDB HTTP API service.
What did you see instead?
The last TiDB pod terminated and restarted. After that, the TiDB Operator could not connect to the cluster and hung.