Merge branch 'feature/proofing_corrections' into maintenance/SLEHA12SP4
- integrated all proofing corrections (only affected Admin Guide,
  the other 5 guides/articles did not have any proofing changes)
taroth21 committed Nov 28, 2018
2 parents 3aa0d1b + 3270406 commit 3716271
Showing 11 changed files with 66 additions and 68 deletions.
4 changes: 2 additions & 2 deletions xml/ha_clvm.xml
@@ -173,7 +173,7 @@ cLVM) for more information and details to integrate here - really helpful-->
<listitem>
<para>
Check if the <systemitem class="daemon">lvmetad</systemitem> daemon is
disabled because it cannot work with cLVM. In <filename>/etc/lvm/lvm.conf</filename>,
disabled, because it cannot work with cLVM. In <filename>/etc/lvm/lvm.conf</filename>,
the keyword <literal>use_lvmetad</literal> must be set to <literal>0</literal>
(the default is <literal>1</literal>).
Copy the configuration to all nodes, if necessary.
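As a rough sketch (placing the keyword in the global section of lvm.conf is an assumption; check the installed lvm.conf for the exact layout), the setting would look like:

    global {
        use_lvmetad = 0
    }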
@@ -516,7 +516,7 @@ cLVM) for more information and details to integrate here - really helpful-->


<sect2 xml:id="sec.ha.clvm.scenario.iscsi">
<title>Scenario: cLVM With iSCSI on SANs</title>
<title>Scenario: cLVM with iSCSI on SANs</title>
<para>
The following scenario uses two SAN boxes which export their iSCSI
targets to several clients. The general idea is displayed in
7 changes: 3 additions & 4 deletions xml/ha_concepts.xml
@@ -12,7 +12,7 @@
<para>
&productnamereg; is an integrated suite of open source clustering
technologies that enables you to implement highly available physical and
virtual Linux clusters, and to eliminate single point of failure. It
virtual Linux clusters, and to eliminate single points of failure. It
ensures the high availability and manageability of critical
resources including data, applications, and services. Thus, it helps you
maintain business continuity, protect data integrity, and reduce
@@ -150,7 +150,6 @@
<para>
&productname; supports the clustering of both physical and
virtual Linux servers. Mixing both types of servers is supported as well.
&sls; &productnumber; ships with &xen;,
&sls; &productnumber; ships with Xen and KVM (Kernel-based Virtual Machine).
Both are open source virtualization hypervisors. Virtualization guest
systems (also known as VMs) can be managed as services by the cluster.
@@ -185,7 +184,7 @@
centers. The cluster usually uses unicast for communication between
the nodes and manages failover internally. Network latency is usually
low (&lt;5&nbsp;ms for distances of approximately 20 miles). Storage
preferably is connected by fibre channel. Data replication is done by
is preferably connected by fibre channel. Data replication is done by
storage internally, or by host based mirror under control of the cluster.
</para>
</listitem>
@@ -749,7 +748,7 @@
data or complete resource recovery. For this Pacemaker comes with a
fencing subsystem, stonithd. &stonith; is an acronym for <quote>Shoot
The Other Node In The Head</quote>.
It usually is implemented with a &stonith; shared block device, remote
It is usually implemented with a &stonith; shared block device, remote
management boards, or remote power switches. In &pace;, &stonith;
devices are modeled as resources (and configured in the CIB) to
enable them to be easily used.
22 changes: 11 additions & 11 deletions xml/ha_config_basics.xml
@@ -45,7 +45,7 @@
<para>Two-node clusters</para>
</listitem>
<listitem>
<para>clusters with more than two nodes. This means usually an odd number of nodes.</para>
<para>Clusters with more than two nodes. This usually means an odd number of nodes.</para>
</listitem>
</itemizedlist>
<para>
@@ -82,9 +82,9 @@
</formalpara>
<formalpara>
<title>Usage scenario:</title>
<para>Classical stretched clusters, focus on service high availability
<para>Classic stretched clusters, focus on high availability of services
and local data redundancy. For databases and enterprise
resource planning. One of the most popular setup during the last
resource planning. One of the most popular setups during the last few
years.
</para>
</formalpara>
@@ -102,7 +102,7 @@
</formalpara>
<formalpara>
<title>Usage scenario:</title>
<para>Classical stretched cluster, focus on service high availability
<para>Classic stretched cluster, focus on high availability of services
and data redundancy. For example, databases, enterprise resource planning.
</para>
</formalpara>
@@ -224,7 +224,7 @@
Whenever communication fails between one or more nodes and the rest of the
cluster, a cluster partition occurs. The nodes can only communicate with
other nodes in the same partition and are unaware of the separated nodes.
A cluster partition is defined to have quorum (is <quote>quorate</quote>)
A cluster partition is defined as having quorum (can <quote>quorate</quote>)
if it has the majority of nodes (or votes).
How this is achieved is done by <emphasis>quorum calculation</emphasis>.
Quorum is a requirement for fencing.
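To make the majority rule concrete (assuming one vote per node): in a five-node cluster, a partition needs at least three nodes to hold the majority and therefore quorum; a partition of only two nodes is not quorate.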
@@ -256,7 +256,7 @@ C = number of cluster nodes</screen>
of cluster nodes.
Two-node clusters make sense for stretched setups across two sites.
Clusters with an odd number of nodes can be built on either one single
site or might being spread across three sites.
site or might be spread across three sites.
</para>
</listitem>
</varlistentry>
@@ -322,9 +322,9 @@ C = number of cluster nodes</screen>
or a single node <quote>quorum</quote>&mdash;or not.
</para>
<para>
For two node clusters the only meaningful behaviour is to always
react in case of quorum loss. The first step always should be
trying to fence the lost node.
For two-node clusters the only meaningful behavior is to always
react in case of quorum loss. The first step should always be
to try to fence the lost node.
</para>
</listitem>
</varlistentry>
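A minimal sketch of how this is commonly reflected in corosync.conf for a two-node cluster (the two_node option and the values shown are assumptions to verify against the installed corosync version):

    quorum {
        provider: corosync_votequorum
        expected_votes: 2
        two_node: 1
    }

With two_node set to 1, the quorum calculation is adjusted so that the surviving node can keep running resources after the lost node has been fenced.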
@@ -450,7 +450,7 @@ C = number of cluster nodes</screen>
use the following settings:
</para>
<example>
<title>Excerpt of &corosync; Configuration for a N-Node Cluster</title>
<title>Excerpt of &corosync; Configuration for an N-Node Cluster</title>
<screen>quorum {
provider: corosync_votequorum <co xml:id="co.corosync.quorum.n-node.corosync_votequorum"/>
expected_votes: <replaceable>N</replaceable> <co xml:id="co.corosync.quorum.n-node.expected_votes"/>
@@ -470,7 +470,7 @@ C = number of cluster nodes</screen>
<para>
Enables the wait for all (WFA) feature.
When WFA is enabled, the cluster will be quorate for the first time
only after all nodes have been visible.
only after all nodes have become visible.
To avoid some start-up race conditions, setting <option>wait_for_all</option>
to <literal>1</literal> may help.
For example, in a five-node cluster every node has one vote and thus,
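Picking up the five-node example, a hedged sketch of the corresponding quorum section (illustrative values only; expected_votes must match the real node count):

    quorum {
        provider: corosync_votequorum
        expected_votes: 5
        wait_for_all: 1
    }

With wait_for_all enabled, this cluster only becomes quorate for the first time after all five nodes have been seen at least once.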
6 changes: 3 additions & 3 deletions xml/ha_docupdates.xml
@@ -146,7 +146,7 @@ toms 2014-08-12:
(<link xlink:href="&bsc;1098429"/>,
<link xlink:href="&bsc;1108586"/>,
<link xlink:href="&bsc;1108604"/>,
<link xlink:href="&bsc;1108624"/>.
<link xlink:href="&bsc;1108624"/>).
</para>
</listitem>
</itemizedlist>
@@ -203,7 +203,7 @@ toms 2014-08-12:
</listitem>
<listitem>
<para>
Added chapter <xref linkend="cha.ha.maintenance"/>. Moved respective
Added <xref linkend="cha.ha.maintenance"/>. Moved respective
sections from <xref linkend="cha.ha.config.basics"/>,
<xref linkend="cha.conf.hawk2"/>, and <xref linkend="cha.ha.manual_config"/>
there. The new chapter gives an overview of different options the cluster stack
@@ -464,7 +464,7 @@ toms 2014-08-12:
<listitem>
<para>
In <xref linkend="sec.ha.cluster-md.overview"/>, mentioned that each
disk need to be accessible by Cluster MD on each node (<link
disk needs to be accessible by Cluster MD on each node (<link
xlink:href="https://bugzilla.suse.com/show_bug.cgi?id=938502"/>).
</para>
</listitem>
10 changes: 5 additions & 5 deletions xml/ha_fencing.xml
@@ -149,10 +149,10 @@
increasingly popular and may even become standard in off-the-shelf
computers. However, if they share a power supply with their host (a
cluster node), they might not work when needed. If a node stays without
power, the device supposed to control it would be useless. Therefor, it
is highly recommended using battery backed Lights-out devices.
Another aspect is this devices are accessed by network. This might
imply single point of failure, or security concerns.
power, the device supposed to control it would be useless. Therefore, it
is highly recommended to use battery backed lights-out devices.
Another aspect is that these devices are accessed by network. This might
imply a single point of failure, or security concerns.
</para>
</listitem>
</varlistentry>
@@ -434,7 +434,7 @@ hostlist</screen>
<para>The Kdump plug-in must be used in concert with another, real &stonith;
device, for example, <literal>external/ipmi</literal>.
The order of the fencing devices must be specified by <command>crm configure
fencing_topology</command>. To achieve that Kdump is checked before
fencing_topology</command>. For Kdump to be checked before
triggering a real fencing mechanism (like <literal>external/ipmi</literal>),
use a configuration similar to the following:</para>
<screen>fencing_topology \
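A hedged sketch of such a setup (node names and STONITH resource IDs are hypothetical; the point is only the ordering, with the Kdump plug-in listed before the real fencing device):

    crm configure fencing_topology \
        alice: stonith-kdump stonith-ipmi \
        bob: stonith-kdump stonith-ipmi

Pacemaker then consults the Kdump plug-in first and falls back to the IPMI-based device if no crash dump is confirmed to be in progress.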
2 changes: 1 addition & 1 deletion xml/ha_glossary.xml
@@ -453,7 +453,7 @@ performance will be met during a contractual measurement period.</para>
<glossentry xml:id="gloss.quorum"><glossterm>quorum</glossterm>
<glossdef>
<para>
In a cluster, a cluster partition is defined to have quorum (is
In a cluster, a cluster partition is defined to have quorum (can
<quote>quorate</quote>) if it has the majority of nodes (or votes).
Quorum distinguishes exactly one partition. It is part of the algorithm
to prevent several disconnected partitions or nodes from proceeding and
38 changes: 19 additions & 19 deletions xml/ha_maintenance.xml
@@ -20,7 +20,7 @@
</para>
<para>
This chapter explains how to manually take down a cluster node without
negative side-effects. It also gives an overview of different options the
negative side effects. It also gives an overview of different options the
cluster stack provides for executing maintenance tasks.
</para>
</abstract>
@@ -147,7 +147,7 @@ Node <replaceable>&node2;</replaceable>: standby

<variablelist>
<varlistentry xml:id="vle.ha.maint.mode.cluster">
<!--<term>Putting the Cluster Into Maintenance Mode</term>-->
<!--<term>Putting the Cluster into Maintenance Mode</term>-->
<term><xref linkend="sec.ha.maint.mode.cluster" xrefstyle="select:title"/></term>
<listitem>
<para>
@@ -158,7 +158,7 @@ Node <replaceable>&node2;</replaceable>: standby
</listitem>
</varlistentry>
<varlistentry xml:id="vle.ha.maint.mode.node">
<!--<term>Putting a Node Into Maintenance Mode</term>-->
<!--<term>Putting a Node into Maintenance Mode</term>-->
<term><xref linkend="sec.ha.maint.mode.node" xrefstyle="select:title"/></term>
<listitem>
<para>
@@ -175,7 +175,7 @@ Node <replaceable>&node2;</replaceable>: standby
<para>
A node that is in standby mode can no longer run resources. Any resources
running on the node will be moved away or stopped (in case no other node
is eligible to run the resource). Also, all monitor operations will be
is eligible to run the resource). Also, all monitoring operations will be
stopped on the node (except for those with
<literal>role="Stopped"</literal>).
</para>
@@ -186,11 +186,11 @@ Node <replaceable>&node2;</replaceable>: standby
</listitem>
</varlistentry>
<varlistentry xml:id="vle.ha.maint.mode.rsc">
<!--<term>Putting a Resource Into Maintenance Mode</term>-->
<!--<term>Putting a Resource into Maintenance Mode</term>-->
<term><xref linkend="sec.ha.maint.mode.rsc" xrefstyle="select:title"/></term>
<listitem>
<para>
When this mode is enabled for a resource, no monitor operations will be
When this mode is enabled for a resource, no monitoring operations will be
triggered for the resource.
</para>
<para>
@@ -201,7 +201,7 @@ Node <replaceable>&node2;</replaceable>: standby
</listitem>
</varlistentry>
<varlistentry xml:id="vle.ha.maint.rsc.unmanaged">
<!--<term>Putting a Resource Into Unmanaged Mode</term>-->
<!--<term>Putting a Resource into Unmanaged Mode</term>-->
<term><xref linkend="sec.ha.maint.rsc.unmanaged" xrefstyle="select:title"/></term>
<listitem>
<para>
@@ -266,7 +266,7 @@ Node <replaceable>&node2;</replaceable>: standby
</sect1>

<sect1 xml:id="sec.ha.maint.mode.cluster">
<title>Putting the Cluster Into Maintenance Mode</title>
<title>Putting the Cluster into Maintenance Mode</title>
<para>
To put the cluster into maintenance mode on the &crmshell;, use the following command:</para>
<screen>&prompt.root;<command>crm</command> configure property maintenance-mode=true</screen>
@@ -275,7 +275,7 @@ Node <replaceable>&node2;</replaceable>: standby
<screen>&prompt.root;<command>crm</command> configure property maintenance-mode=false</screen>

<procedure xml:id="pro.ha.maint.mode.cluster.hawk2">
<title>Putting the Cluster Into Maintenance Mode with &hawk2;</title>
<title>Putting the Cluster into Maintenance Mode with &hawk2;</title>
<step>
<para>
Start a Web browser and log in to the cluster as described in
@@ -315,7 +315,7 @@ Node <replaceable>&node2;</replaceable>: standby
</sect1>

<sect1 xml:id="sec.ha.maint.mode.node">
<title>Putting a Node Into Maintenance Mode</title>
<title>Putting a Node into Maintenance Mode</title>
<para>
To put a node into maintenance mode on the &crmshell;, use the following command:</para>
<screen>&prompt.root;<command>crm</command> node maintenance <replaceable>NODENAME</replaceable></screen>
@@ -324,7 +324,7 @@ Node <replaceable>&node2;</replaceable>: standby
<screen>&prompt.root;<command>crm</command> node ready <replaceable>NODENAME</replaceable></screen>

<procedure xml:id="pro.ha.maint.mode.nodes.hawk2">
<title>Putting a Node Into Maintenance Mode with &hawk2;</title>
<title>Putting a Node into Maintenance Mode with &hawk2;</title>
<step>
<para>
Start a Web browser and log in to the cluster as described in
@@ -352,7 +352,7 @@ Node <replaceable>&node2;</replaceable>: standby
</sect1>

<sect1 xml:id="sec.ha.maint.node.standby">
<title>Putting a Node Into Standby Mode</title>
<title>Putting a Node into Standby Mode</title>
<para>
To put a node into standby mode on the &crmshell;, use the following command:</para>
<screen>&prompt.root;crm node standby <replaceable>NODENAME</replaceable></screen>
@@ -361,7 +361,7 @@ Node <replaceable>&node2;</replaceable>: standby
<screen>&prompt.root;crm node online <replaceable>NODENAME</replaceable></screen>

<procedure xml:id="pro.ha.maint.node.standby.hawk2">
<title>Putting a Node Into Standby Mode with &hawk2;</title>
<title>Putting a Node into Standby Mode with &hawk2;</title>
<step>
<para>
Start a Web browser and log in to the cluster as described in
@@ -394,7 +394,7 @@ Node <replaceable>&node2;</replaceable>: standby
</sect1>

<sect1 xml:id="sec.ha.maint.mode.rsc">
<title>Putting a Resource Into Maintenance Mode</title>
<title>Putting a Resource into Maintenance Mode</title>
<para>
To put a resource into maintenance mode on the &crmshell;, use the following command:</para>
<screen>&prompt.root;<command>crm</command> resource maintenance <replaceable>RESOURCE_ID</replaceable> true</screen>
@@ -403,7 +403,7 @@ Node <replaceable>&node2;</replaceable>: standby
<screen>&prompt.root;<command>crm</command> resource maintenance <replaceable>RESOURCE_ID</replaceable> false</screen>

<procedure xml:id="pro.ha.maint.mode.rsc.hawk2">
<title>Putting a Resource Into Maintenance Mode with &hawk2;</title>
<title>Putting a Resource into Maintenance Mode with &hawk2;</title>
<step>
<para>
Start a Web browser and log in to the cluster as described in
@@ -459,7 +459,7 @@ Node <replaceable>&node2;</replaceable>: standby
</sect1>

<sect1 xml:id="sec.ha.maint.rsc.unmanaged">
<title>Putting a Resource Into Unmanaged Mode</title>
<title>Putting a Resource into Unmanaged Mode</title>
<para>
To put a resource into unmanaged mode on the &crmshell;, use the following command:</para>
<screen>&prompt.root;<command>crm</command> resource unmanage <replaceable>RESOURCE_ID</replaceable></screen>
@@ -468,7 +468,7 @@ Node <replaceable>&node2;</replaceable>: standby
<screen>&prompt.root;<command>crm</command> resource manage <replaceable>RESOURCE_ID</replaceable></screen>

<procedure xml:id="pro.ha.maint.rsc.unmanaged.hawk2">
<title>Putting a Resource Into Unmanaged Mode with &hawk2;</title>
<title>Putting a Resource into Unmanaged Mode with &hawk2;</title>
<step>
<para>
Start a Web browser and log in to the cluster as described in
@@ -551,7 +551,7 @@ Node <replaceable>&node2;</replaceable>: standby
<step>
<para>
Check if you have resources of the type <literal>ocf:pacemaker:controld</literal>
or any dependencies on this type of resources. Resources of the type
or any dependencies on this type of resource. Resources of the type
<literal>ocf:pacemaker:controld</literal> are DLM resources.
</para>
<substeps>
@@ -564,7 +564,7 @@
<para>
The reason is that stopping &pace; also stops the &corosync; service, on
whose membership and messaging services DLM depends. If &corosync; stops,
the DLM resource will assume a split-brain scenario and trigger a fencing
the DLM resource will assume a split brain scenario and trigger a fencing
operation.
</para>
</step>
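One possible way to perform this check from the shell (the grep-based form is a convenience assumption, not the documented procedure):

    crm configure show | grep ocf:pacemaker:controld

Any primitives (or clones and groups referencing them) returned here are DLM resources, and the considerations above apply before stopping the cluster stack on that node.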
2 changes: 1 addition & 1 deletion xml/ha_rear.xml
@@ -66,7 +66,7 @@
<para>
Understanding &rear;'s complex functionality is essential for making the
tool work as intended. Therefore, read this chapter carefully and
familiarize with &rear; before a disaster strikes. You should also be
familiarize yourself with &rear; before a disaster strikes. You should also be
aware of &rear;'s known limitations and test your system in advance.
</para>
</note>
5 changes: 2 additions & 3 deletions xml/ha_requirements.xml
@@ -137,9 +137,8 @@
<para>
When using DRBD* to implement a mirroring RAID system that distributes
data across two machines, make sure to only access the device provided
by DRBD&mdash;never the backing device. Use bonded NICs. Same NICs as
the rest of the cluster uses are possible to leverage the redundancy
provided there.
by DRBD&mdash;never the backing device. Use bonded NICs. To leverage the
redundancy it is possible to use the same NICs as the rest of the cluster.
</para>
</listitem>
</itemizedlist>