---
title: Deploy a TiDB Cluster Using TiUP
summary: Learn how to easily deploy a TiDB cluster using TiUP.
---
This guide describes how to deploy a TiDB Self-Managed cluster using TiUP in the production environment.
TiUP is a cluster operation and maintenance tool introduced in TiDB v4.0. It provides TiUP cluster, a Golang-based component for managing TiDB clusters. By using TiUP cluster, you can easily perform routine database operations, such as deploying, starting, stopping, destroying, scaling, and upgrading TiDB clusters, as well as managing TiDB cluster parameters.
TiUP also supports deploying TiDB, TiFlash, TiCDC, and the monitoring system. This guide introduces how to deploy TiDB clusters with different topologies.
Before deployment, make sure that you have read the prerequisite documents on software and hardware environment requirements and on environment and configuration checks.
In addition, it is recommended to learn the Best Practices for TiDB Security Configuration.
You can deploy TiUP on the control machine in either of two ways: online deployment and offline deployment.
Log in to the control machine using a regular user account (take the `tidb` user as an example). Subsequent TiUP installation and cluster management can be performed by the `tidb` user.
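For example, a login might look like the following. The host name is a placeholder, and this sketch assumes the `tidb` user has already been created on the control machine:

```shell
# Log in to the control machine as the regular user used for TiUP operations
ssh tidb@control-machine-host
```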
- Install TiUP by running the following command:
{{< copyable "shell-regular" >}}
curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
- Set TiUP environment variables:
- Redeclare the global environment variables:
{{< copyable "shell-regular" >}}
source .bash_profile
- Confirm whether TiUP is installed:
{{< copyable "shell-regular" >}}
which tiup
- Install the TiUP cluster component:
{{< copyable "shell-regular" >}}
tiup cluster
- If TiUP is already installed, update the TiUP cluster component to the latest version:
{{< copyable "shell-regular" >}}
tiup update --self && tiup update cluster
If `Updated successfully!` is displayed, the TiUP cluster is updated successfully.

- Verify the current version of your TiUP cluster:
{{< copyable "shell-regular" >}}
tiup --binary cluster
Perform the following steps in this section to deploy a TiDB cluster offline using TiUP:
Method 1: Download the offline binary packages (TiUP offline package included) of the target TiDB version using the following links. You need to download both the server and toolkit packages. Note that by downloading these packages, you agree to the Privacy Policy.
https://download.pingcap.org/tidb-community-server-{version}-linux-{arch}.tar.gz
https://download.pingcap.org/tidb-community-toolkit-{version}-linux-{arch}.tar.gz
Tip:

`{version}` in the link indicates the version number of TiDB and `{arch}` indicates the architecture of the system, which can be `amd64` or `arm64`. For example, the download link for `v8.5.0` in the `amd64` architecture is `https://download.pingcap.org/tidb-community-toolkit-v8.5.0-linux-amd64.tar.gz`.
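For example, to download both packages for `v8.5.0` on `amd64`, you could run something like the following. `wget` is only one option, and the version and architecture here are placeholders to substitute with your own:

```shell
# Download the server package and the toolkit package for the target version
wget https://download.pingcap.org/tidb-community-server-v8.5.0-linux-amd64.tar.gz
wget https://download.pingcap.org/tidb-community-toolkit-v8.5.0-linux-amd64.tar.gz
```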
Method 2: Manually pack an offline component package using `tiup mirror clone`. The detailed steps are as follows:
- Install the TiUP package manager online.
- Install the TiUP tool:
{{< copyable "shell-regular" >}}
curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
- Redeclare the global environment variables:
{{< copyable "shell-regular" >}}
source .bash_profile
- Confirm whether TiUP is installed:
{{< copyable "shell-regular" >}}
which tiup
- Pull the mirror using TiUP.
- Pull the needed components on a machine that has access to the Internet:
{{< copyable "shell-regular" >}}
tiup mirror clone tidb-community-server-${version}-linux-amd64 ${version} --os=linux --arch=amd64
The command above creates a directory named `tidb-community-server-${version}-linux-amd64` in the current directory, which contains the component package necessary for starting a cluster.

- Pack the component package by using the `tar` command and send the package to the control machine in the isolated environment:

{{< copyable "shell-regular" >}}
tar czvf tidb-community-server-${version}-linux-amd64.tar.gz tidb-community-server-${version}-linux-amd64
`tidb-community-server-${version}-linux-amd64.tar.gz` is an independent offline environment package.
- Customize the offline mirror, or adjust the contents of an existing offline mirror.

If you want to adjust an existing offline mirror (such as adding a new version of a component), take the following steps:
- When pulling an offline mirror, you can get an incomplete offline mirror by specifying specific information via parameters, such as the component and version information. For example, you can pull an offline mirror that includes only the offline mirror of TiUP v1.12.3 and TiUP Cluster v1.12.3 by running the following command:
{{< copyable "shell-regular" >}}
tiup mirror clone tiup-custom-mirror-v1.12.3 --tiup v1.12.3 --cluster v1.12.3
If you only need the components for a particular platform, you can specify them using the `--os` or `--arch` parameters.

- Refer to step 2 of "Pull the mirror using TiUP", and send this incomplete offline mirror to the control machine in the isolated environment.
- Check the path of the current offline mirror on the control machine in the isolated environment. If your TiUP tool is of a recent version, you can get the current mirror address by running the following command:
{{< copyable "shell-regular" >}}
tiup mirror show
If the output of the above command indicates that the `show` command does not exist, you might be using an older version of TiUP. In this case, you can get the current mirror address from `$HOME/.tiup/tiup.toml`. Record this mirror address. In the following steps, `${base_mirror}` is used to refer to this address.

- Merge an incomplete offline mirror into an existing offline mirror:
First, copy the `keys` directory in the current offline mirror to the `$HOME/.tiup` directory:

{{< copyable "shell-regular" >}}
cp -r ${base_mirror}/keys $HOME/.tiup/
Then use the TiUP command to merge the incomplete offline mirror into the mirror in use:
{{< copyable "shell-regular" >}}
tiup mirror merge tiup-custom-mirror-v1.12.3
- When the above steps are completed, check the result by running the `tiup list` command. In this document's example, the outputs of both `tiup list tiup` and `tiup list cluster` show that the corresponding components of `v1.12.3` are available.
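For example, with the custom `v1.12.3` mirror merged as described above, the check could look like this:

```shell
# Verify that the merged components are now listed in the current mirror
tiup list tiup
tiup list cluster
```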
After sending the package to the control machine of the target cluster, install the TiUP component by running the following commands:
{{< copyable "shell-regular" >}}
tar xzvf tidb-community-server-${version}-linux-amd64.tar.gz && \
sh tidb-community-server-${version}-linux-amd64/local_install.sh && \
source /home/tidb/.bash_profile
The `local_install.sh` script automatically runs the `tiup mirror set tidb-community-server-${version}-linux-amd64` command to set the current mirror address to `tidb-community-server-${version}-linux-amd64`.
If you download the offline packages via the download links, you need to merge the server package and the toolkit package into an offline mirror. If you manually pack the offline component packages using the `tiup mirror clone` command, you can skip this step.
Run the following commands to merge the offline toolkit package into the server package directory:
tar xf tidb-community-toolkit-${version}-linux-amd64.tar.gz
ls -ld tidb-community-server-${version}-linux-amd64 tidb-community-toolkit-${version}-linux-amd64
cd tidb-community-server-${version}-linux-amd64/
cp -rp keys ~/.tiup/
tiup mirror merge ../tidb-community-toolkit-${version}-linux-amd64
To switch the mirror to another directory, run the `tiup mirror set <mirror-dir>` command. To switch the mirror to the online environment, run the `tiup mirror set https://tiup-mirrors.pingcap.com` command.
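For reference, the two switches look like the following. The local directory path is a placeholder for your own offline mirror location:

```shell
# Switch to a local offline mirror directory (placeholder path)
tiup mirror set /path/to/tidb-community-server-${version}-linux-amd64

# Switch back to the official online mirror
tiup mirror set https://tiup-mirrors.pingcap.com
```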
Run the following command to create a cluster topology file:
{{< copyable "shell-regular" >}}
tiup cluster template > topology.yaml
In the following two common scenarios, you can generate recommended topology templates by running the corresponding commands:
- For hybrid deployment: Multiple instances are deployed on a single machine. For details, see Hybrid Deployment Topology.
{{< copyable "shell-regular" >}}
tiup cluster template --full > topology.yaml
- For geo-distributed deployment: TiDB clusters are deployed in geographically distributed data centers. For details, see Geo-Distributed Deployment Topology.
{{< copyable "shell-regular" >}}
tiup cluster template --multi-dc > topology.yaml
Run `vi topology.yaml` to see the configuration file content:
{{< copyable "shell-regular" >}}
global:
user: "tidb"
ssh_port: 22
deploy_dir: "/tidb-deploy"
data_dir: "/tidb-data"
server_configs: {}
pd_servers:
- host: 10.0.1.4
- host: 10.0.1.5
- host: 10.0.1.6
tidb_servers:
- host: 10.0.1.7
- host: 10.0.1.8
- host: 10.0.1.9
tikv_servers:
- host: 10.0.1.1
- host: 10.0.1.2
- host: 10.0.1.3
monitoring_servers:
- host: 10.0.1.4
grafana_servers:
- host: 10.0.1.4
alertmanager_servers:
- host: 10.0.1.4
The following examples cover several common scenarios. You need to modify the configuration file (named `topology.yaml`) according to the topology description and templates in the corresponding links. For other scenarios, edit the configuration template accordingly.
Application | Configuration task | Configuration file template | Topology description |
---|---|---|---|
OLTP | Deploy minimal topology | Simple minimal configuration template<br/>Full minimal configuration template | This is the basic cluster topology, including tidb-server, tikv-server, and pd-server. |
HTAP | Deploy the TiFlash topology | Simple TiFlash configuration template<br/>Full TiFlash configuration template | This is to deploy TiFlash along with the minimal cluster topology. TiFlash is a columnar storage engine, and gradually becomes a standard cluster topology. |
Replicate incremental data using TiCDC | Deploy the TiCDC topology | Simple TiCDC configuration template<br/>Full TiCDC configuration template | This is to deploy TiCDC along with the minimal cluster topology. TiCDC supports multiple downstream platforms, such as TiDB, MySQL, Kafka, MQ, and storage services. |
Use OLAP on Spark | Deploy the TiSpark topology | Simple TiSpark configuration template<br/>Full TiSpark configuration template | This is to deploy TiSpark along with the minimal cluster topology. TiSpark is a component built for running Apache Spark on top of TiDB/TiKV to answer the OLAP queries. Currently, TiUP cluster's support for TiSpark is still experimental. |
Deploy multiple instances on a single machine | Deploy a hybrid topology | Simple configuration template for hybrid deployment<br/>Full configuration template for hybrid deployment | The deployment topologies also apply when you need to add extra configurations for the directory, port, resource ratio, and label. |
Deploy TiDB clusters across data centers | Deploy a geo-distributed deployment topology | Configuration template for geo-distributed deployment | This topology takes the typical architecture of three data centers in two cities as an example. It introduces the geo-distributed deployment architecture and the key configuration that requires attention. |
Note:

- For parameters that should be globally effective, configure these parameters of corresponding components in the `server_configs` section of the configuration file.
- For parameters that should be effective on a specific node, configure these parameters in the `config` of this node.
- Use `.` to indicate the subcategory of the configuration, such as `log.slow-threshold`. For more formats, see TiUP configuration template.
- If you need to specify the user group name to be created on the target machine, see this example.
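For illustration, the following sketch shows the two configuration scopes side by side in `topology.yaml`. The parameter values are placeholders, not tuning recommendations:

```yaml
server_configs:                   # globally effective parameters, grouped by component
  tidb:
    log.slow-threshold: 300       # "." indicates the subcategory of the configuration
tikv_servers:
  - host: 10.0.1.1
    config:                       # effective only on this TiKV node
      server.grpc-concurrency: 4
```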
For more configuration description, see the following configuration examples:
- TiDB `config.toml.example`
- TiKV `config.toml.example`
- PD `config.toml.example`
- TiFlash `config.toml.example`
Note:

You can use secret keys or interactive passwords for security authentication when you deploy TiDB using TiUP:

- If you use secret keys, specify the path of the keys through `-i` or `--identity_file`.
- If you use passwords, add the `-p` flag to enter the password interaction window.
- If password-free login to the target machine has been configured, no authentication is required.
In general, TiUP creates the user and group specified in the `topology.yaml` file on the target machine, with the following exceptions:

- The user name configured in `topology.yaml` already exists on the target machine.
- You have used the `--skip-create-user` option in the command line to explicitly skip the step of creating the user.
Before you run the `deploy` command, use the `check` and `check --apply` commands to detect and automatically repair potential risks in the cluster:
- Check for potential risks:
{{< copyable "shell-regular" >}}
tiup cluster check ./topology.yaml --user root [-p] [-i /home/root/.ssh/gcp_rsa]
- Enable automatic repair:
{{< copyable "shell-regular" >}}
tiup cluster check ./topology.yaml --apply --user root [-p] [-i /home/root/.ssh/gcp_rsa]
- Deploy a TiDB cluster:
{{< copyable "shell-regular" >}}
tiup cluster deploy tidb-test v8.5.0 ./topology.yaml --user root [-p] [-i /home/root/.ssh/gcp_rsa]
In the `tiup cluster deploy` command above:

- `tidb-test` is the name of the TiDB cluster to be deployed.
- `v8.5.0` is the version of the TiDB cluster to be deployed. You can see the latest supported versions by running `tiup list tidb`.
- `topology.yaml` is the initialization configuration file.
- `--user root` indicates logging into the target machine as the `root` user to complete the cluster deployment. The `root` user is expected to have `ssh` and `sudo` privileges to the target machine. Alternatively, you can use other users with `ssh` and `sudo` privileges to complete the deployment.
- `[-i]` and `[-p]` are optional. If you have configured login to the target machine without password, these parameters are not required. If not, choose one of the two parameters. `[-i]` is the private key of the root user (or other users specified by `--user`) that has access to the target machine. `[-p]` is used to input the user password interactively.
At the end of the output log, you will see ``Deployed cluster `tidb-test` successfully``. This indicates that the deployment is successful.
{{< copyable "shell-regular" >}}
tiup cluster list
TiUP supports managing multiple TiDB clusters. The preceding command outputs information about all the clusters currently managed by TiUP, including the cluster name, deployment user, version, and secret key information.
For example, run the following command to check the status of the `tidb-test` cluster:
{{< copyable "shell-regular" >}}
tiup cluster display tidb-test
The expected output includes the instance ID, role, host, listening port, status (because the cluster is not started yet, the status is `Down`/`inactive`), and directory information.
Starting from TiUP cluster v1.9.0, safe start is introduced as a new start method. Starting a database using this method improves database security, and it is recommended that you use this method.
After safe start, TiUP automatically generates a password for the TiDB root user and returns the password in the command-line interface.
Note:

After safe start of a TiDB cluster, you cannot log in to TiDB using a root user without a password. Therefore, you need to record the password returned in the command output for future logins.

The password is generated only once. If you do not record it or you forget it, refer to Forget the `root` password to change the password.
Method 1: Safe start
{{< copyable "shell-regular" >}}
tiup cluster start tidb-test --init
If the output is as follows, the start is successful:
{{< copyable "shell-regular" >}}
Started cluster `tidb-test` successfully.
The root password of TiDB database has been changed.
The new password is: 'y_+3Hwp=*AWz8971s6'.
Copy and record it to somewhere safe, it is only displayed once, and will not be stored.
The generated password can NOT be got again in future.
Method 2: Standard start
{{< copyable "shell-regular" >}}
tiup cluster start tidb-test
If the output log includes ``Started cluster `tidb-test` successfully``, the start is successful. After standard start, you can log in to a database using a root user without a password.
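For example, with the sample topology above you could connect through one of the TiDB nodes using the MySQL client. The host is taken from the sample `topology.yaml` and `4000` is the default TiDB port; adjust both to your deployment, and add `-p` to be prompted for the password if you used safe start:

```shell
mysql -h 10.0.1.7 -P 4000 -u root
```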
{{< copyable "shell-regular" >}}
tiup cluster display tidb-test
If the output log shows the `Up` status, the cluster is running properly.
If you have deployed TiFlash along with the TiDB cluster, see the TiFlash documentation.
If you have deployed TiCDC along with the TiDB cluster, see the TiCDC documentation to learn how to stream data.
If you want to scale out or scale in your TiDB cluster without interrupting the online services, see Scale a TiDB Cluster Using TiUP.