From ccf8a89c18cd23024ab7b9a53e1a445a8b16e60d Mon Sep 17 00:00:00 2001
From: Tahlia Richardson <3069029+tahliar@users.noreply.github.com>
Date: Thu, 30 Nov 2023 12:17:42 +1000
Subject: [PATCH] Add SBD advice from archived TIDs (#359)

* Add note about /etc/crm/profiles.yml

bsc#1202661 jsc#DOCTEAM-725

* Add SBD advice from TIDs

bsc#1202661 jsc#DOCTEAM-725

* Add review suggestions
---
 xml/article_installation.xml  |  8 +++++
 xml/ha_storage_protection.xml | 57 ++++++++++++++++++++++++++++++-----
 2 files changed, 57 insertions(+), 8 deletions(-)

diff --git a/xml/article_installation.xml b/xml/article_installation.xml
index 2ea80427..adbc5ab6 100644
--- a/xml/article_installation.xml
+++ b/xml/article_installation.xml
@@ -292,6 +292,14 @@
+
+ Cluster configuration for different platforms
+
+ The crm cluster init script detects the system environment (for example,
+ &ms; Azure) and adjusts certain cluster settings based on the profile for that environment.
+ For more information, see the file /etc/crm/profiles.yml.
+
+
diff --git a/xml/ha_storage_protection.xml b/xml/ha_storage_protection.xml
index 461414f7..1e0adf66 100644
--- a/xml/ha_storage_protection.xml
+++ b/xml/ha_storage_protection.xml
@@ -185,7 +185,7 @@
- Requirements
+ Requirements and restrictions
  You can use up to three SBD devices for storage-based fencing.
@@ -216,6 +216,18 @@
  An SBD device can be shared between different clusters, as long as no more than 255 nodes share the device.
+
+ Fencing does not work with an asymmetric SBD setup. When using more
+ than one SBD device, all nodes must have a slot in all SBD devices.
+
+
+
+ When using more than one SBD device, all devices must have the same configuration,
+ for example, the same timeout values.
+
+
  For clusters with more than two nodes, you can also use SBD in diskless mode.
@@ -284,7 +296,8 @@
  Calculation of timeouts
  When using SBD as a fencing mechanism, it is vital to consider the timeouts
- of all components, because they depend on each other.
+ of all components, because they depend on each other. When using more than one
+ SBD device, all devices must have the same timeout values.
@@ -611,7 +624,8 @@
  stonith-timeout = Timeout (msgwait) + 20%
  If your SBD device resides on a multipath group, use the
- -1 and -4 options to adjust the timeouts to use for SBD. For
+ -1 and -4 options to adjust the timeouts to use for SBD. If you initialized
+ more than one device, you must set the same timeout values for all devices. For
  details, see . All timeouts are given in seconds:
  &prompt.root;sbd -d /dev/disk/by-id/DEVICE_ID -4 180 -1 90 create
@@ -689,6 +703,15 @@
  Timeout (msgwait) : 10
  .
+
+ Copy the configuration file to all nodes by using csync2:
+&prompt.root;csync2 -xv
+ For more information, see .
+
+
  After you have added your SBD devices to the SBD configuration file,
@@ -706,9 +729,18 @@
  Timeout (msgwait) : 10
  cluster services are started.
- Restart the cluster services on each node:
- &prompt.root;crm cluster restart
- This automatically triggers the start of the SBD daemon.
+ Restart the cluster services on all nodes at once by using the
+ --all option:
+ &prompt.root;crm cluster restart --all
+ This automatically triggers the start of the SBD daemon.
+
+ Restart cluster services for SBD changes
+
+ If any SBD metadata changes, you must restart the cluster services again. To keep critical
+ cluster resources running during the restart, consider putting the cluster into maintenance
+ mode first. For more information, see .
+
+
@@ -969,9 +1001,18 @@
  SBD_WATCHDOG_TIMEOUT=5
  cluster services are started.
- Restart the cluster services on each node:
- &prompt.root;crm cluster restart
- This automatically triggers the start of the SBD daemon.
+ Restart the cluster services on all nodes at once by using the
+ --all option:
+ &prompt.root;crm cluster restart --all
+ This automatically triggers the start of the SBD daemon.
+
+ Restart cluster services for SBD changes
+
+ If any SBD metadata changes, you must restart the cluster services again. To keep critical
+ cluster resources running during the restart, consider putting the cluster into maintenance
+ mode first. For more information, see .
+
+
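
Not part of the patch above: the following shell sketch walks through the workflow the added advice describes, assuming two SBD devices. The device paths DEVICE_ID_1/DEVICE_ID_2 and the 180/90-second timeouts (giving stonith-timeout = 180 + 20% = 216) are illustrative placeholders, not values required by the documentation.

# Hypothetical device paths; substitute your own stable /dev/disk/by-id/ paths.
DEV1=/dev/disk/by-id/DEVICE_ID_1
DEV2=/dev/disk/by-id/DEVICE_ID_2

# Initialize both devices with the SAME timeouts (msgwait 180 s, watchdog 90 s),
# since all SBD devices must share one configuration.
for DEV in "$DEV1" "$DEV2"; do
    sbd -d "$DEV" -4 180 -1 90 create
done

# Check that the metadata (including the timeouts) matches on both devices.
sbd -d "$DEV1" dump
sbd -d "$DEV2" dump

# /etc/sysconfig/sbd on every node should then list both devices, for example:
#   SBD_DEVICE="/dev/disk/by-id/DEVICE_ID_1;/dev/disk/by-id/DEVICE_ID_2"
# Copy the configuration file to all nodes.
csync2 -xv

# stonith-timeout = msgwait + 20% = 180 + 36 = 216 seconds.
crm configure property stonith-timeout=216

# SBD metadata changed, so restart the cluster services on all nodes.
# Maintenance mode keeps resources running during the restart.
crm configure property maintenance-mode=true
crm cluster restart --all
# Once the cluster is back up, leave maintenance mode again.
crm configure property maintenance-mode=false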