Add SBD advice from TIDs
bsc#1202661
jsc#DOCTEAM-725
tahliar committed Nov 23, 2023
1 parent 0665f04 commit a28235f
Showing 1 changed file with 44 additions and 7 deletions.
51 changes: 44 additions & 7 deletions xml/ha_storage_protection.xml
@@ -216,6 +216,18 @@
<para>An SBD device can be shared between different clusters, as
long as no more than 255 nodes share the device. </para>
</listitem>
<listitem>
<para>
When using more than one SBD device, all nodes must have a slot in all SBD devices.
Fencing does not work with an asymmetric SBD setup.
</para>
</listitem>
<listitem>
<para>
When using more than one SBD device, all devices must have the same configuration,
for example, the same timeout values.
</para>
</listitem>
<listitem>
<para>For clusters with more than two nodes, you can also use SBD in
<emphasis>diskless</emphasis> mode.
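The two items added above boil down to a symmetric layout: every node sees the same devices with the same settings. As a minimal sketch of what that looks like in /etc/sysconfig/sbd, identical on all nodes (the device IDs below are placeholders, not values from this commit):

    # /etc/sysconfig/sbd -- must be identical on every cluster node
    # Hypothetical multipath device IDs; up to three devices, separated by ";"
    SBD_DEVICE="/dev/disk/by-id/scsi-SBD1;/dev/disk/by-id/scsi-SBD2;/dev/disk/by-id/scsi-SBD3"
    SBD_WATCHDOG_DEV=/dev/watchdog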
@@ -284,7 +296,8 @@
<title>Calculation of timeouts</title>
<para>
When using SBD as a fencing mechanism, it is vital to consider the timeouts
of all components, because they depend on each other.
of all components, because they depend on each other. When using more than one
SBD device, all devices must have the same timeout values.
</para>
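A worked example of how the values chain together, using the multipath timeouts that appear later in this file (a watchdog timeout of 90 seconds, with the msgwait timeout at least twice that) and the stonith-timeout formula from the hunk below:

    watchdog timeout  =  90                  # hardware watchdog fires if SBD stalls this long
    msgwait timeout   = 180                  # at least 2 x the watchdog timeout
    stonith-timeout   = 180 * 1.2 = 216      # msgwait plus a 20% safety margin

The resulting value would then be set as a cluster property, for example with crm configure property stonith-timeout=216.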
<variablelist>
<varlistentry>
@@ -611,7 +624,8 @@ stonith-timeout = Timeout (msgwait) + 20%</screen>
</step>
<step>
<para>If your SBD device resides on a multipath group, use the <option>-1</option>
and <option>-4</option> options to adjust the timeouts to use for SBD. For
and <option>-4</option> options to adjust the timeouts to use for SBD. If you initialize
more than one device, you must set the same timeout values for all devices. For
details, see <xref linkend="sec-ha-storage-protect-watchdog-timings"/>.
All timeouts are given in seconds:</para>
<screen>&prompt.root;<command>sbd -d /dev/disk/by-id/<replaceable>DEVICE_ID</replaceable> -4 180</command><co xml:id="co-ha-sbd-msgwait"/> <command>-1 90</command><co xml:id="co-ha-sbd-watchdog"/> <command>create</command></screen>
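If more than one device is initialized, sbd accepts the -d option several times, so the same timeouts can be stamped onto all devices in one invocation; a sketch with two placeholder device IDs:

    sbd -d /dev/disk/by-id/DEVICE_ID_1 -d /dev/disk/by-id/DEVICE_ID_2 -4 180 -1 90 create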
@@ -689,6 +703,15 @@ Timeout (msgwait) : 10
<link xlink:href="https://www.suse.com/support/kb/doc/?id=000019356"/>.
</para>
</step>
<step>
<para>
Copy the configuration file to all nodes by using <command>csync2</command>:
</para>
<screen>&prompt.root;<command>csync2 -xv</command></screen>
<para>
For more information, see <xref linkend="sec-ha-installation-setup-csync2"/>.
</para>
</step>
</procedure>
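For the csync2 step above to distribute the change, /etc/sysconfig/sbd must be part of the synchronized file set. A minimal csync2.cfg fragment illustrating that, assuming the bootstrap-generated group name and key path and two hypothetical node names:

    group ha_group
    {
        key /etc/csync2/key_hagroup;
        host alice;
        host bob;
        include /etc/sysconfig/sbd;
    }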

<para>After you have added your SBD devices to the SBD configuration file,
@@ -706,9 +729,16 @@ Timeout (msgwait) : 10
cluster services are started.</para>
</step>
<step>
<para>Restart the cluster services on each node:</para>
<screen>&prompt.root;<command>crm cluster restart</command></screen>
<para>Restart the cluster services on all nodes at once by using the <option>--all</option>
option:</para>
<screen>&prompt.root;<command>crm cluster restart --all</command></screen>
<para> This automatically triggers the start of the SBD daemon. </para>
<important>
<title>Restart cluster services for SBD changes</title>
<para>
If any SBD metadata changes, you must restart the cluster services again.
</para>
</important>
</step>
</procedure>
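After the restart, one quick check that the SBD daemon is up and every node owns a slot is to list the allocation table on a device (the device ID is a placeholder, the output is illustrative):

    sbd -d /dev/disk/by-id/DEVICE_ID list
    0   alice   clear
    1   bob     clear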

@@ -969,9 +999,16 @@ SBD_WATCHDOG_TIMEOUT=5</screen>
cluster services are started.</para>
</step>
<step>
<para>Restart the cluster services on each node:</para>
<screen>&prompt.root;<command>crm cluster restart</command></screen>
<para> This automatically triggers the start of the SBD daemon. </para>
<para>Restart the cluster services on all nodes at once by using the <option>--all</option>
option:</para>
<screen>&prompt.root;<command>crm cluster restart --all</command></screen>
<para> This automatically triggers the start of the SBD daemon. </para>
<important>
<title>Restart cluster services for SBD changes</title>
<para>
If any SBD metadata changes, you must restart the cluster services again.
</para>
</important>
</step>
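In diskless mode there is no device whose slot table could be listed, so a reasonable sanity check after the restart is to ask SBD which watchdog devices it can drive (the output shape is illustrative):

    sbd query-watchdog
    Discovered 1 watchdog devices:
    [1] /dev/watchdog
    Identity: iTCO_wdt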
<step>
<para>
