From 69827783e5deeddffc0bbd50c112ea96e4a9a53a Mon Sep 17 00:00:00 2001 From: Tahlia Richardson <3069029+tahliar@users.noreply.github.com> Date: Mon, 3 Jun 2024 14:51:08 +1000 Subject: [PATCH] Move crm cluster join to bootstrap chapter --- xml/book_full_install.xml | 2 +- xml/ha_add_nodes.xml | 240 ----------------------------------- xml/ha_autoyast_deploy.xml | 148 +++++++++++++++++++++ xml/ha_bootstrap_install.xml | 72 +++++++++++ xml/ha_log_in.xml | 4 +- 5 files changed, 223 insertions(+), 243 deletions(-) delete mode 100644 xml/ha_add_nodes.xml create mode 100644 xml/ha_autoyast_deploy.xml diff --git a/xml/book_full_install.xml b/xml/book_full_install.xml index f7c2fb46..3b0e9278 100644 --- a/xml/book_full_install.xml +++ b/xml/book_full_install.xml @@ -67,7 +67,7 @@ - + diff --git a/xml/ha_add_nodes.xml b/xml/ha_add_nodes.xml deleted file mode 100644 index ffed805a..00000000 --- a/xml/ha_add_nodes.xml +++ /dev/null @@ -1,240 +0,0 @@ - - - - %entities; -]> - - - Adding more nodes - - - - - - - - - yes - - - - - - Adding nodes with <command>crm cluster join</command> - - You can add more nodes to the cluster with the crm cluster join bootstrap script. - The script only needs access to an existing cluster node, and completes the basic setup - on the current machine automatically. - - - For more information, run the crm cluster join --help command. - - - Adding nodes with <command>crm cluster join</command> - - - Log in to a node as &rootuser;, or as a user with sudo privileges. - - - - - Start the bootstrap script: - - - - - If you set up the first node as &rootuser;, you can run this command with - no additional parameters: - -&prompt.root;crm cluster join - - - - If you set up the first node as a sudo user, you must - specify the user and node with the option: - -&prompt.user;sudo crm cluster join -c USER@&node1; - - - - If you set up the first node as a sudo user with SSH agent forwarding, - use the following command: - -&prompt.user;sudo --preserve-env=SSH_AUTH_SOCK crm cluster join --use-ssh-agent -c USER@&node1; - - - - If NTP is not configured to start at boot time, a message - appears. The script also checks for a hardware watchdog device. - You are warned if none is present. - - - - - If you did not already specify &node1; - with , you will be prompted for the IP address of the first node. - - - - - If you did not already configure passwordless SSH access between - both machines, you will be prompted for the password of the first node. - - - After logging in to the specified node, the script copies the - &corosync; configuration, configures SSH and &csync;, - brings the current machine online as a new cluster node, and - starts the service needed for &hawk2;. - - - - - Repeat this procedure for each node. You can check the status of the cluster at any time - with the crm status command, or by logging in to &hawk2; and navigating to - StatusNodes. - - - - - Adding nodes manually - - - - - - - Adding nodes with &ay; - - - After you have installed and set up a two-node cluster, you can extend the - cluster by cloning existing nodes with &ay; and adding the clones to the cluster. - - - &ay; uses profiles that contains installation and configuration data. - A profile tells &ay; what to install and how to configure the installed system to - get a ready-to-use system in the end. This profile can then be used - for mass deployment in different ways (for example, to clone existing - cluster nodes).
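To illustrate the mass-deployment idea mentioned above, the installer can fetch an &ay; profile over the network at boot time via the autoyast parameter. A minimal sketch, assuming the profile has been exported to a web server; the host name and path below are placeholders, not values from this guide:

    # Hypothetical boot parameter: fetch an AutoYaST profile over HTTP
    # (replace the host and path with your own).
    autoyast=http://example.com/profiles/ha-node.xml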
- - - For detailed instructions on how to use &ay; in various scenarios, - see the - &ayguide; for &sls; &productnumber;. - - - - Identical hardware - - assumes you are rolling - out &productname; &productnumber; to a set of machines with identical hardware - configurations. - - - If you need to deploy cluster nodes on non-identical hardware, refer to the - &deploy; for &sls; &productnumber;, - chapter Automated Installation, section - Rule-Based Autoinstallation. - - - - - Cloning a cluster node with &ay; - - - Make sure the node you want to clone is correctly installed and - configured. For details, see the &haquick; or - . - - - - - Follow the description outlined in the &sle; - &productnumber; &deploy; for simple mass - installation. This includes the following basic steps: - - - - - Creating an &ay; profile. Use the &ay; GUI to create and modify - a profile based on the existing system configuration. In &ay;, - choose the &ha; module and click the - Clone button. If needed, adjust the configuration - in the other modules and save the resulting control file as XML. - - - If you have configured DRBD, you can select and clone this module in - the &ay; GUI, too. - - - - - Determining the source of the &ay; profile and the parameter to - pass to the installation routines for the other nodes. - - - - - Determining the source of the &sls; and &productname; - installation data. - - - - - Determining and setting up the boot scenario for autoinstallation. - - - - - Passing the command line to the installation routines, either by - adding the parameters manually or by creating an - info file. - - - - - Starting and monitoring the autoinstallation process. - - - - - - - - After the clone has been successfully installed, execute the following - steps to make the cloned node join the cluster: - - - - Bringing the cloned node online - - - Transfer the key configuration files from the already configured nodes - to the cloned node with &csync; as described in - . - - - - - To bring the node online, start the cluster services on the cloned - node as described in . - - - - - - The cloned node now joins the cluster because the - /etc/corosync/corosync.conf file has been applied to - the cloned node via &csync;. The CIB is automatically synchronized - among the cluster nodes. - - - - diff --git a/xml/ha_autoyast_deploy.xml b/xml/ha_autoyast_deploy.xml new file mode 100644 index 00000000..4cb8732e --- /dev/null +++ b/xml/ha_autoyast_deploy.xml @@ -0,0 +1,148 @@ + + + + %entities; ]> + + + + Deploying nodes with &ay; + + + + After you have installed and set up a two-node cluster, you can extend the + cluster by cloning existing nodes with &ay; and adding the clones to the cluster. + + &ay; uses profiles that contain installation and configuration data. + A profile tells &ay; what to install and how to configure the installed system to + get a ready-to-use system in the end. This profile can then be used + for mass deployment in different ways (for example, to clone existing cluster nodes). + + + For detailed instructions on how to use &ay; in various scenarios, see the + + &ayguide; for &sls; &productnumber;. + + + + + yes + + + + + + + Identical hardware + + assumes you are rolling + out &productname; &productnumber; to a set of machines with identical hardware + configurations. + + + If you need to deploy cluster nodes on non-identical hardware, refer to the + &deploy; for &sls; &productnumber;, + chapter Automated Installation, section + Rule-Based Autoinstallation.
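Before rolling a cloned profile out to many machines, it can be worth checking that the saved control file is at least well-formed XML. A minimal sketch using xmllint (shipped with libxml2); the profile path is a placeholder:

    # Hypothetical check: confirm the exported control file parses as XML.
    xmllint --noout /tmp/ha-node.xml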
+ + + + + Cloning a cluster node with &ay; + + + Make sure the node you want to clone is correctly installed and + configured. For details, see the &haquick; or + . + + + + + Follow the description outlined in the &sle; + &productnumber; &deploy; for simple mass + installation. This includes the following basic steps: + + + + + Creating an &ay; profile. Use the &ay; GUI to create and modify + a profile based on the existing system configuration. In &ay;, + choose the &ha; module and click the + Clone button. If needed, adjust the configuration + in the other modules and save the resulting control file as XML. + + + If you have configured DRBD, you can select and clone this module in + the &ay; GUI, too. + + + + + Determining the source of the &ay; profile and the parameter to + pass to the installation routines for the other nodes. + + + + + Determining the source of the &sls; and &productname; + installation data. + + + + + Determining and setting up the boot scenario for autoinstallation. + + + + + Passing the command line to the installation routines, either by + adding the parameters manually or by creating an + info file. + + + + + Starting and monitoring the autoinstallation process. + + + + + + + + After the clone has been successfully installed, execute the following + steps to make the cloned node join the cluster: + + + + Bringing the cloned node online + + + Transfer the key configuration files from the already configured nodes + to the cloned node with &csync; as described in + . + + + + + To bring the node online, start the cluster services on the cloned + node as described in . + + + + + + The cloned node now joins the cluster because the + /etc/corosync/corosync.conf file has been applied to + the cloned node via &csync;. The CIB is automatically synchronized + among the cluster nodes. + + + diff --git a/xml/ha_bootstrap_install.xml b/xml/ha_bootstrap_install.xml index 770b3faf..9247ca5b 100644 --- a/xml/ha_bootstrap_install.xml +++ b/xml/ha_bootstrap_install.xml @@ -291,4 +291,76 @@ + + + + Adding nodes with <command>crm cluster join</command> + + You can add more nodes to the cluster with the crm cluster join bootstrap script. + The script only needs access to an existing cluster node and completes the basic setup + on the current machine automatically. + + + For more information, run the crm cluster join --help command. + + + Adding nodes with <command>crm cluster join</command> + + + Start the bootstrap script: + + + + + If you set up the first node as &rootuser;, you can run this command with + no additional parameters: + +&prompt.root;crm cluster join + + + + If you set up the first node as a sudo user, you must + specify the user and node with the -c option: + +&prompt.user;sudo crm cluster join -c USER@&node1; + + + + If you set up the first node as a sudo user with SSH agent forwarding, + use the following command: + +&prompt.user;sudo --preserve-env=SSH_AUTH_SOCK crm cluster join --use-ssh-agent -c USER@&node1; + + + + If NTP is not configured to start at boot time, a message + appears. The script also checks for a hardware watchdog device. + You are warned if none is present. + + + + + If you did not already specify the first cluster node + with -c, you will be prompted for its IP address. + + + + + If you did not already configure passwordless SSH access between the cluster nodes, + you will be prompted for the password of the first node.
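If you prefer to set up passwordless SSH access yourself before running the script, a minimal sketch, assuming &rootuser; access and a first node reachable under the placeholder name alice:

    # Hypothetical sketch: create a key pair (skip if one already exists)
    # and copy the public key to the first cluster node.
    ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_ed25519
    ssh-copy-id root@alice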
+ + + After logging in to the specified node, the script copies the + &corosync; configuration, configures SSH and &csync;, + brings the current machine online as a new cluster node, and + starts the service needed for &hawk2;. + + + + + Repeat this procedure for each node. You can check the status of the cluster at any time + with the crm status command, or by logging in to &hawk2; and navigating to + Status > Nodes. + + diff --git a/xml/ha_log_in.xml b/xml/ha_log_in.xml index 914ef49f..29e89fc9 100644 --- a/xml/ha_log_in.xml +++ b/xml/ha_log_in.xml @@ -46,7 +46,7 @@ (or be generated) locally on the node, not on a remote system. - To log into to the first cluster node as the &rootuser; user, run the following command: + To log in to the first cluster node as the &rootuser; user, run the following command: user@local> ssh root@NODE1 @@ -60,7 +60,7 @@ locally on the node, not on a remote system. - To log into to the first cluster node as a sudo user, run the + To log in to the first cluster node as a sudo user, run the following command: user@local> ssh USER@NODE1
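For the sudo user case above, the private key must be available from an SSH agent on your local machine before you log in. A minimal sketch of starting an agent, adding a key, and logging in with agent forwarding enabled; USER, alice, and the key path are placeholders:

    # Hypothetical sketch: start an agent, add your key, and log in with
    # agent forwarding (-A) so the key can be used on the node.
    eval "$(ssh-agent)"
    ssh-add ~/.ssh/id_ed25519
    ssh -A USER@alice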