diff --git a/xml/ha_cluster_lvm.xml b/xml/ha_cluster_lvm.xml
index bd9c55481..a740abda4 100644
--- a/xml/ha_cluster_lvm.xml
+++ b/xml/ha_cluster_lvm.xml
@@ -830,7 +830,7 @@ vdc 253:32 0 20G 0 disk
logical volume for a cmirrord setup on &productname; 11 or 12 as
described in ).
+ />.)
By default, mdadm reserves a certain amount of space
@@ -843,13 +843,13 @@ vdc 253:32 0 20G 0 disk
The offset must leave enough space on the device
for cluster MD to write its metadata to it. On the other hand, the offset
must be small enough for the remaining capacity of the device to accommodate
- all physical volume extents of the migrated volume. Because the volume can
+ all physical volume extents of the migrated volume. Because the volume may
have spanned the complete device minus the mirror log, the offset must be
smaller than the size of the mirror log.
- We recommend to set the to 128 KB.
- If no value is specified for the offset, its default value is 1 KB
+ We recommend setting the offset to 128 kB.
+ If no value is specified for the offset, its default value is 1 kB
(1024 bytes).
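+ For example, a possible mdadm invocation for creating the Cluster MD
+ device with this offset could look as follows (the device names are
+ examples only; adjust them to your environment):
+ &prompt.root;mdadm --create /dev/md0 --bitmap=clustered --metadata=1.2 \
+    --level=mirror --raid-devices=2 --data-offset=128 /dev/vdb /dev/vdc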
diff --git a/xml/ha_concepts.xml b/xml/ha_concepts.xml
index b502b51f9..a0bfe9ba6 100644
--- a/xml/ha_concepts.xml
+++ b/xml/ha_concepts.xml
@@ -622,7 +622,7 @@
Cluster Resource Manager (Pacemaker)
Pacemaker as cluster resource manager is the brain
- which reacts to events occurring in the cluster. Its is implemented as
+ which reacts to events occurring in the cluster. It is implemented as
pacemaker-controld, the cluster
controller, which coordinates all actions. Events can be nodes that join
or leave the cluster, failure of resources, or scheduled activities such
@@ -637,7 +637,7 @@
The local resource manager is located between the Pacemaker layer and the
resources layer on each node. It is implemented as the pacemaker-execd daemon. Through this daemon,
- Pacemaker can start, stop and monitor resources.
+ Pacemaker can start, stop, and monitor resources.
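+ For example, such actions can be requested from the &crmshell; and are
+ then carried out through this daemon on the affected node
+ (RESOURCE is a placeholder for a configured resource ID):
+ &prompt.root;crm resource start RESOURCE
+ &prompt.root;crm resource stop RESOURCE
+ &prompt.root;crm resource status RESOURCE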
@@ -688,8 +688,8 @@
Resources and Resource Agents
- In an &ha; cluster, the services that need to be highly available are
- called resources. Resource agents (RAs) are scripts that start, stop and
+ In a &ha; cluster, the services that need to be highly available are
+ called resources. Resource agents (RAs) are scripts that start, stop, and
monitor cluster resources.
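+ As a hypothetical example, a primitive resource that uses the IPaddr2
+ resource agent and defines a monitor operation could be configured on
+ the &crmshell; like this (resource ID, IP address, and timings are
+ example values only):
+ &prompt.root;crm configure primitive admin-ip ocf:heartbeat:IPaddr2 \
+    params ip=192.168.1.10 \
+    op monitor interval=10s timeout=20s
+ The agent's start, stop, and monitor actions are then invoked by the
+ cluster as needed.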
diff --git a/xml/ha_config_basics.xml b/xml/ha_config_basics.xml
index a26b0eec3..199e7481f 100644
--- a/xml/ha_config_basics.xml
+++ b/xml/ha_config_basics.xml
@@ -224,7 +224,7 @@
Whenever communication fails between one or more nodes and the rest of the
cluster, a cluster partition occurs. The nodes can only communicate with
other nodes in the same partition and are unaware of the separated nodes.
- A cluster partition is defined as having quorum (can quorate)
+ A cluster partition is defined as having quorum (being quorate)
if it has the majority of nodes (or votes).
- How this is achieved is done by quorum calculation.
+ This is achieved by quorum calculation.
Quorum is a requirement for fencing.
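+ For example, a partition of a five-node cluster is quorate only if it
+ contains at least three nodes. In a four-node cluster, three nodes are
+ also required, because two out of four votes is not a majority.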
@@ -256,8 +256,8 @@ C = number of cluster nodes
- We strongly recommend to use either a two-node cluster or an odd number
- of cluster nodes.
+ We strongly recommend using either a two-node cluster or an odd number
+ of cluster nodes.
Two-node clusters make sense for stretched setups across two sites.
- Clusters with an odd number of nodes can be built on either one single
- site or might being spread across three sites.
+ Clusters with an odd number of nodes can either be built on a single
+ site or be spread across three sites.
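+ Whether a two-node cluster stays quorate after losing one node depends
+ on the quorum configuration. With corosync's votequorum this is usually
+ handled by the two_node option, for example (a sketch of the relevant
+ corosync.conf section; adapt it to your setup):
+ quorum {
+    provider: corosync_votequorum
+    two_node: 1
+ }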
diff --git a/xml/ha_fencing.xml b/xml/ha_fencing.xml
index a70115211..c007030be 100644
--- a/xml/ha_fencing.xml
+++ b/xml/ha_fencing.xml
@@ -184,15 +184,16 @@
pacemaker-fenced
- pacemaker-fenced is a daemon which can be accessed by local processes or over
+ pacemaker-fenced is a daemon which can be accessed by local processes or over
the network. It accepts the commands which correspond to fencing
operations: reset, power-off, and power-on. It can also check the
status of the fencing device.
- The pacemaker-fenced daemon runs on every node in the &ha; cluster. The
- pacemaker-fenced instance running on the DC node receives a fencing request
- from the pacemaker-controld. It is up to this and other pacemaker-fenced programs to carry
+ The pacemaker-fenced daemon runs on every node in the &ha; cluster. The
+ pacemaker-fenced instance running on the DC node receives a fencing request
+ from the pacemaker-controld. It
+ is up to this and other pacemaker-fenced programs to carry
out the desired fencing operation.
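+ For testing purposes, such a fencing request can also be triggered
+ manually, for example from the &crmshell; (NODENAME is a placeholder):
+ &prompt.root;crm node fence NODENAME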
@@ -210,8 +211,9 @@
fence-agents package, too,
the plug-ins contained there are installed in
/usr/sbin/fence_*.) All &stonith; plug-ins look
- the same to pacemaker-fenced, but are quite different on the other side
- reflecting the nature of the fencing device.
+ the same to pacemaker-fenced,
+ but are quite different on the other side, reflecting the nature of the
+ fencing device.
Some plug-ins support more than one device. A typical example is
@@ -229,7 +231,7 @@
To set up fencing, you need to configure one or more &stonith;
- resources—the pacemaker-fenced daemon requires no configuration. All
+ resources—the pacemaker-fenced daemon requires no configuration. All
configuration is stored in the CIB. A &stonith; resource is a resource of
class stonith (see
). &stonith; resources
@@ -328,7 +330,7 @@ commit
outcome. The only way to do that is to assume that the operation is
going to succeed and send the notification beforehand. But if the
operation fails, problems could arise. Therefore, by convention,
- pacemaker-fenced refuses to terminate its host.
+ pacemaker-fenced refuses to terminate its host.
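+ As an illustration of the &stonith; resource configuration mentioned
+ above, a minimal sketch on the &crmshell; could look as follows (it
+ assumes SBD-based fencing; choose the plug-in that matches your fencing
+ device, for example from the output of crm ra list stonith):
+ &prompt.root;crm configure primitive stonith-sbd stonith:external/sbd \
+    params pcmk_delay_max=30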
diff --git a/xml/ha_glossary.xml b/xml/ha_glossary.xml
index bf28aa024..c2bebb373 100644
--- a/xml/ha_glossary.xml
+++ b/xml/ha_glossary.xml
@@ -171,7 +171,7 @@
The management entity responsible for coordinating all non-local
- interactions in an &ha; cluster. The &hasi; uses Pacemaker as CRM.
+ interactions in a &ha; cluster. The &hasi; uses Pacemaker as CRM.
The CRM is implemented as pacemaker-controld. It interacts with several
components: local resource managers, both on its own node and on the other nodes,
@@ -282,8 +282,8 @@
isolated or failing cluster members. There are two classes of fencing:
resource level fencing and node level fencing. Resource level fencing ensures
exclusive access to a given resource. Node level fencing prevents a failed
- node from accessing shared resources entirely and prevents that resources run
- a node whose status is uncertain. This is usually done in a simple and
+ node from accessing shared resources entirely and prevents resources from running
+ on a node whose status is uncertain. This is usually done in a simple and
abrupt way: reset or power off the node.
@@ -329,7 +329,7 @@ performance will be met during a contractual measurement period.
The local resource manager is located between the Pacemaker layer and the
resources layer on each node. It is implemented as the pacemaker-execd daemon. Through this daemon,
- Pacemaker can start, stop and monitor resources.
+ Pacemaker can start, stop, and monitor resources.
@@ -419,7 +419,7 @@ performance will be met during a contractual measurement period.
quorum
- In a cluster, a cluster partition is defined to have quorum (can
+ In a cluster, a cluster partition is defined to have quorum (be
quorate) if it has the majority of nodes (or votes).
Quorum distinguishes exactly one partition. It is part of the algorithm
to prevent several disconnected partitions or nodes from proceeding and
diff --git a/xml/ha_hawk2_history_i.xml b/xml/ha_hawk2_history_i.xml
index 9bf09c6b9..7e3503cd1 100644
--- a/xml/ha_hawk2_history_i.xml
+++ b/xml/ha_hawk2_history_i.xml
@@ -317,7 +317,7 @@
Viewing Transition Details in the History Explorer
For each transition, the cluster saves a copy of the state which it provides
- as input to pacemaker-schedulerd.
+ as input to pacemaker-schedulerd.
The path to this archive is logged. All
pe-* files are generated on the Designated
Coordinator (DC). As the DC can change in a cluster, there may be
@@ -376,7 +376,7 @@
crm history transition log peinput
This includes details from the following daemons:
- pacemaker-schedulerd ,
+ pacemaker-schedulerd,
pacemaker-controld, and
pacemaker-execd.
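+ A possible sequence on the &crmshell; is to list the available
+ pe-input files first and then inspect one of them (PEINPUT is a
+ placeholder for one of the listed files):
+ &prompt.root;crm history peinputs
+ &prompt.root;crm history transition log PEINPUT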
diff --git a/xml/ha_maintenance.xml b/xml/ha_maintenance.xml
index 8bc7d8ec7..0032ad1c0 100644
--- a/xml/ha_maintenance.xml
+++ b/xml/ha_maintenance.xml
@@ -147,7 +147,7 @@ Node &node2;: standby
-
+
@@ -158,7 +158,7 @@ Node &node2;: standby
-
+
@@ -169,7 +169,7 @@ Node &node2;: standby
-
+
@@ -186,7 +186,7 @@ Node &node2;: standby
-
+
@@ -266,16 +266,16 @@ Node &node2;: standby
- Putting the Cluster into Maintenance Mode
+ Putting the Cluster in Maintenance Mode
- To put the cluster into maintenance mode on the &crmshell;, use the following command:
+ To put the cluster in maintenance mode on the &crmshell;, use the following command:
&prompt.root;crm configure property maintenance-mode=true
- To put the cluster back into normal mode after your maintenance work is done, use the following command:
+ To put the cluster back to normal mode after your maintenance work is done, use the following command:
&prompt.root;crm configure property maintenance-mode=false
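+ Whether the property is currently set can be checked, for example,
+ with the following command:
+ &prompt.root;crm configure show | grep maintenance-mode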
- Putting the Cluster into Maintenance Mode with &hawk2;
+ Putting the Cluster in Maintenance Mode with &hawk2;
Start a Web browser and log in to the cluster as described in
@@ -315,16 +315,16 @@ Node &node2;: standby
- Putting a Node into Maintenance Mode
+ Putting a Node in Maintenance Mode
- To put a node into maintenance mode on the &crmshell;, use the following command:
+ To put a node in maintenance mode on the &crmshell;, use the following command:
&prompt.root;crm node maintenance NODENAME
- To put the node back into normal mode after your maintenance work is done, use the following command:
+ To put the node back to normal mode after your maintenance work is done, use the following command:
&prompt.root;crm node ready NODENAME
- Putting a Node into Maintenance Mode with &hawk2;
+ Putting a Node in Maintenance Mode with &hawk2;
Start a Web browser and log in to the cluster as described in
@@ -352,16 +352,16 @@ Node &node2;: standby
- Putting a Node into Standby Mode
+ Putting a Node in Standby Mode
- To put a node into standby mode on the &crmshell;, use the following command:
+ To put a node in standby mode on the &crmshell;, use the following command:
&prompt.root;crm node standby NODENAME
To bring the node back online after your maintenance work is done, use the following command:
&prompt.root;crm node online NODENAME
- Putting a Node into Standby Mode with &hawk2;
+ Putting a Node in Standby Mode with &hawk2;
Start a Web browser and log in to the cluster as described in
@@ -518,7 +518,7 @@ Node &node2;: standby
- Rebooting a Cluster Node While In Maintenance Mode
+ Rebooting a Cluster Node While in Maintenance Mode
Implications