The following procedure contains information for rebooting and deploying the management node that is currently hosting the LiveCD. At the end of this procedure, the LiveCD will no longer be active. The node it was running on will join the Kubernetes cluster as the third and final master node, forming a quorum.
IMPORTANT: While the node is rebooting, it will only be available through Serial-Over-LAN (SOL) and local terminals. This procedure entails deactivating the LiveCD, meaning the LiveCD and all of its resources will be unavailable.
- Required services
- Notice of danger
- Hand-off
- Reboot
- Enable NCN disk wiping safeguard
- Remove the default NTP pool
- Configure DNS and NTP on each BMC
- Next topic
## Required services

These services must be healthy before the reboot of the LiveCD can take place. If the health checks performed earlier in the install completed successfully (Validate CSM Health), then the following platform services will be healthy and ready for reboot of the LiveCD:
- Utility Storage (Ceph)
- `cray-bss`
- `cray-dhcp-kea`
- `cray-dns-unbound`
- `cray-ipxe`
- `cray-sls`
- `cray-tftp`
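As a quick spot-check (a minimal sketch, not a substitute for the Validate CSM Health procedure; it assumes `kubectl` is configured on the PIT node and that `ncn-s001` can run Ceph commands):

pit# kubectl get pods -n services | grep -E "cray-(bss|dhcp-kea|dns-unbound|ipxe|sls|tftp)"
pit# ssh ncn-s001 ceph -s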
## Notice of danger

Administrators are strongly encouraged to be mindful of pitfalls during this segment of the CSM install. The steps below contain their own warnings, but the overall risks are:

- SSH will stop working when the LiveCD reboots; the serial console will need to be used.
- If the PIT node was booted from a remote ISO, rebooting will discard all running changes on the node; a USB device remains accessible after the install.
- The NCN will never wipe a USB device during installation.
- Prior to shutting down the PIT node, noting the CMN IP addresses of the other NCNs will be helpful if troubleshooting is required (see the sketch below).
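For that last point, a minimal sketch for capturing the CMN IP addresses before rebooting (run from the PIT node; adjust the node list for the system):

pit# for ncn in ncn-m002 ncn-m003; do echo "== $ncn =="; ssh $ncn 'ip -4 a show bond0.cmn0 | grep inet'; done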
## Hand-off

The steps in this section load hand-off data before a later procedure reboots the LiveCD node.
- Start a new typescript.

  - Exit the current typescript, if one is active.

    pit# exit

  - Start a new typescript on the PIT node.

    pit# mkdir -pv /var/www/ephemeral/prep/admin && pushd /var/www/ephemeral/prep/admin && script -af csm-livecd-reboot.$(date +%Y-%m-%d).txt
    pit# export PS1='\u@\H \D{%Y-%m-%d} \t \w # '
- Upload the SLS file.

  NOTE: The `SYSTEM_NAME` environment variable must be set.

  pit# csi upload-sls-file --sls-file /var/www/ephemeral/prep/${SYSTEM_NAME}/sls_input_file.json

  Expected output looks similar to the following:

  2021/02/02 14:05:15 Retrieving S3 credentials ( sls-s3-credentials ) for SLS
  2021/02/02 14:05:15 Uploading SLS file: /var/www/ephemeral/prep/eniac/sls_input_file.json
  2021/02/02 14:05:15 Successfully uploaded SLS Input File.
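  If `SYSTEM_NAME` is not already set, something similar to the following sets and verifies it (a sketch; `eniac` is only the example system name used in the sample output):

  pit# export SYSTEM_NAME=eniac
  pit# echo "${SYSTEM_NAME:?ERROR -- SYSTEM_NAME is not set}"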
- Get a token to use for authenticated communication with the gateway.

  NOTE: `api-gw-service-nmn.local` is legacy, and will be replaced with `api-gw-service.nmn`.

  pit# export TOKEN=$(curl -k -s -S -d grant_type=client_credentials \
         -d client_id=admin-client \
         -d client_secret=`kubectl get secrets admin-client-auth -o jsonpath='{.data.client-secret}' | base64 -d` \
         https://api-gw-service-nmn.local/keycloak/realms/shasta/protocol/openid-connect/token | jq -r '.access_token')
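  A quick sanity check that a token was actually returned (a sketch; an empty or `null` value usually means the Keycloak query or the `admin-client-auth` secret lookup failed):

  pit# [[ -n "${TOKEN}" && "${TOKEN}" != "null" ]] && echo "Token acquired (${#TOKEN} characters)" || echo "ERROR: TOKEN is empty or null"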
- Validate that the `CSM_RELEASE` and `CSM_PATH` variables are set.

  These variables were set and added to `/etc/environment` during the earlier Bootstrap PIT Node step of the install. `CSM_PATH` should be the fully-qualified path to the expanded CSM release tarball on the PIT node.

  pit# echo "CSM_RELEASE=${CSM_RELEASE} CSM_PATH=${CSM_PATH}"
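  For a stricter check than simply echoing the values, the following sketch also confirms that `CSM_PATH` exists as a directory:

  pit# [[ -n "${CSM_RELEASE}" && -d "${CSM_PATH}" ]] && echo "OK: CSM_RELEASE=${CSM_RELEASE}" || echo "ERROR: CSM_RELEASE unset or CSM_PATH is not a directory"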
- Upload NCN boot artifacts into S3.

  - Run the following command.

    pit# artdir=/var/www/ephemeral/data && k8sdir=$artdir/k8s && cephdir=$artdir/ceph && csi handoff ncn-images \
           --k8s-kernel-path $k8sdir/*.kernel \
           --k8s-initrd-path $k8sdir/initrd.img*.xz \
           --k8s-squashfs-path $k8sdir/secure-*.squashfs \
           --ceph-kernel-path $cephdir/*.kernel \
           --ceph-initrd-path $cephdir/initrd.img*.xz \
           --ceph-squashfs-path $cephdir/secure-*.squashfs

    The end of the command output contains a block similar to this:

    Run the following commands so that the versions of the images that were just uploaded can be used in other steps:
    export KUBERNETES_VERSION=x.y.z
    export CEPH_VERSION=x.y.z

  - Run the `export` commands listed at the end of the output from the previous step.
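  Afterwards, confirm that the exports took effect; the values should be the real version numbers from the upload output, not the `x.y.z` placeholders:

  pit# echo "KUBERNETES_VERSION=${KUBERNETES_VERSION} CEPH_VERSION=${CEPH_VERSION}"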
- Upload the `data.json` file to BSS, the `cloud-init` data source.

  If any changes have been made to this file (for example, as a result of any customizations or workarounds), then use the path to the modified file instead.

  This step will prompt for the root password of the NCNs.

  pit# csi handoff bss-metadata --data-file /var/www/ephemeral/configs/data.json || echo "ERROR: csi handoff bss-metadata failed"
- Patch the metadata for the Ceph nodes to have the correct run commands.

  pit# python3 /usr/share/doc/csm/scripts/patch-ceph-runcmd.py
- Ensure that the DNS server value is correctly set to point toward Unbound at `10.92.100.225` (NMN) and `10.94.100.225` (HMN).

  pit# csi handoff bss-update-cloud-init --set meta-data.dns-server="10.92.100.225 10.94.100.225" --limit Global
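  To confirm the value landed in BSS, the Global boot parameters can be queried back (a sketch; it assumes the standard BSS endpoint and that the Global entry exposes `cloud-init` meta-data in this shape):

  pit# curl -s -k -H "Authorization: Bearer ${TOKEN}" \
         "https://api-gw-service-nmn.local/apis/bss/boot/v1/bootparameters?name=Global" | \
         jq '.[]."cloud-init"."meta-data"."dns-server"'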
- Preserve logs and configuration files, if desired (optional).

  After the PIT node is redeployed, all files on its local drives will be lost. It is recommended to retain some of the log and configuration files, because they may be useful if issues are encountered during the remainder of the install.

  The following commands create a `tar` archive of these files, storing it in a directory that will be backed up in the next step.

  pit# mkdir -pv /var/www/ephemeral/prep/logs && ls -d \
         /etc/dnsmasq.d \
         /etc/os-release \
         /etc/sysconfig/network \
         /opt/cray/tests/cmsdev.log \
         /opt/cray/tests/install/logs \
         /opt/cray/tests/logs \
         /root/.canu \
         /root/.config/cray/logs \
         /root/csm*.{log,txt} \
         /tmp/*.log \
         /usr/share/doc/csm/install/scripts/csm_services/yapl.log \
         /var/log/conman \
         /var/log/zypper.log 2>/dev/null | sed 's_^/__' | xargs tar -C / -czvf /var/www/ephemeral/prep/logs/pit-backup-$(date +%Y-%m-%d_%H-%M-%S).tgz
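  To verify that the archive was created and to inspect its contents:

  pit# tar -tzf /var/www/ephemeral/prep/logs/pit-backup-*.tgz | head -n 20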
- Back up the bootstrap information from `ncn-m001`.

  NOTE: This preserves information that should always be kept together in order to fresh-install the system again.

  - Log in and set up passwordless SSH to the PIT node.

    Copy only the public keys from `ncn-m002` and `ncn-m003` to the PIT node. Do not set up passwordless SSH from the PIT node, or the key will have to be securely tracked or expunged if using a USB installation.

    The `ssh` commands below may prompt for the NCN root password.

    pit# ssh ncn-m002 cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys && ssh ncn-m003 cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys && chmod 600 /root/.ssh/authorized_keys

  - Back up files from the PIT to `ncn-m002`.

    pit# ssh ncn-m002 \
           "mkdir -pv /metal/bootstrap
           rsync -e 'ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null' -rltD -P --delete pit.nmn:/var/www/ephemeral/prep /metal/bootstrap/
           rsync -e 'ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null' -rltD -P --delete pit.nmn:${CSM_PATH}/cray-pre-install-toolkit*.iso /metal/bootstrap/"

  - Back up files from the PIT to `ncn-m003`.

    pit# ssh ncn-m003 \
           "mkdir -pv /metal/bootstrap
           rsync -e 'ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null' -rltD -P --delete pit.nmn:/var/www/ephemeral/prep /metal/bootstrap/
           rsync -e 'ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null' -rltD -P --delete pit.nmn:${CSM_PATH}/cray-pre-install-toolkit*.iso /metal/bootstrap/"
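  A quick check that both backups landed (a sketch):

  pit# for ncn in ncn-m002 ncn-m003; do echo "== $ncn =="; ssh $ncn 'ls -ld /metal/bootstrap/prep /metal/bootstrap/*.iso'; done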
- Set the PIT node to PXE boot.

  - List IPv4 boot options using `efibootmgr`.

    pit# efibootmgr | grep -Ei "ip(v4|4)"

  - Set and trim the boot order on the PIT node.

    This only needs to be done for the PIT node, not for any of the other NCNs. See Setting boot order and Trimming boot order.

  - Tell the PIT node to PXE boot on the next boot.

    Use `efibootmgr` to set the next boot device to the first PXE boot option. This step assumes the boot order was set up in the previous step.

    pit# efibootmgr -n $(efibootmgr | grep -Ei "ip(v4|4)" | awk '{print $1}' | head -n 1 | tr -d Boot*) | grep -i bootnext
    BootNext: 0014
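  To double-check what was just configured, print the relevant `efibootmgr` fields; `BootNext` should name the PXE option selected above:

  pit# efibootmgr | grep -E "^(BootNext|BootOrder)"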
- Collect a backdoor login. Fetch the CMN IP address of `ncn-m002` for use as a backdoor during the reboot of `ncn-m001`.

  - Get the IP address.

    pit# ssh ncn-m002 'ip a show bond0.cmn0 | grep inet'

    Expected output will look similar to the following (exact values may differ):

    inet 10.102.11.13/24 brd 10.102.11.255 scope global bond0.cmn0
    inet6 fe80::1602:ecff:fed9:7820/64 scope link

  - Log in from another external machine to verify that SSH is up and running for this session.

    external# ssh root@10.102.11.13
    ncn-m002#

    Keep this terminal active; it will enable `kubectl` commands during the bring-up of the new NCN. If the reboot successfully redeploys the LiveCD node, then this terminal can be exited.

POINT OF NO RETURN: The next step will wipe the underlying node's disks clean. It will ignore USB devices. Remote ISOs are at risk here; even though a backup of the PIT node has been performed, it is not possible to boot back to the same state. This is the last step before rebooting the node.
- Wipe the disks on the PIT node.

  WARNING: Risk of USER ERROR! Do not assume that the first three disks (for example, `sda`, `sdb`, and `sdc`) are the ones to wipe; device names are not pinned to any physical disk layout. Choosing the wrong ones may result in wiping the USB device. USB devices can only be wiped by operators at this point in the install; they are never wiped by the CSM installer.

  - Select the disks to wipe (SATA/NVME/SAS).

    pit# md_disks="$(lsblk -l -o SIZE,NAME,TYPE,TRAN | grep -E '(sata|nvme|sas)' | sort -h | awk '{print "/dev/" $2}')"

  - Run a sanity check by printing the disks into the typescript or console.

    pit# echo $md_disks

    Expected output looks similar to the following:

    /dev/sda /dev/sdb /dev/sdc

  - Wipe. This is irreversible.

    pit# wipefs --all --force $md_disks

    If any disks had labels present, output looks similar to the following:

    /dev/sda: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
    /dev/sda: 8 bytes were erased at offset 0x6fc86d5e00 (gpt): 45 46 49 20 50 41 52 54
    /dev/sda: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
    /dev/sdb: 6 bytes were erased at offset 0x00000000 (crypto_LUKS): 4c 55 4b 53 ba be
    /dev/sdb: 6 bytes were erased at offset 0x00004000 (crypto_LUKS): 53 4b 55 4c ba be
    /dev/sdc: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
    /dev/sdc: 8 bytes were erased at offset 0x6fc86d5e00 (gpt): 45 46 49 20 50 41 52 54
    /dev/sdc: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa

    If any wiping was done, the output should appear similar to the above. If this command is re-run, there may be no output, or an ignorable error.
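    To confirm that no filesystem signatures remain, `wipefs` can be run again without options; in this mode it only reports signatures, and empty output means the disks are clean:

    pit# wipefs $md_disks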
- Quit the typescript session and copy the typescript file off of `ncn-m001`.

  - Stop the typescript session:

    pit# exit

  - Back up the completed typescript file by re-running the `rsync` commands in the Backup Bootstrap Information section.
## Reboot

- (Optional) Set up ConMan or a serial console, if not already active, from any laptop or other system with network connectivity to the cluster.

  external# script -a boot.livecd.$(date +%Y-%m-%d).txt
  external# export PS1='\u@\H \D{%Y-%m-%d} \t \w # '
  external# SYSTEM_NAME=eniac
  external# USERNAME=root
  external# export IPMI_PASSWORD=changeme
  external# ipmitool -I lanplus -U $USERNAME -E -H ${SYSTEM_NAME}-ncn-m001-mgmt chassis power status
  external# ipmitool -I lanplus -U $USERNAME -E -H ${SYSTEM_NAME}-ncn-m001-mgmt sol activate
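  If `sol activate` reports that an SOL payload is already active, a stale session can usually be cleared first (same variables as above):

  external# ipmitool -I lanplus -U $USERNAME -E -H ${SYSTEM_NAME}-ncn-m001-mgmt sol deactivate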
- Reboot the LiveCD.

  pit# reboot
- Wait for the node to boot, acquire its hostname (`ncn-m001`), and run `cloud-init`.

  If all of that happens successfully, then skip the rest of this step and proceed to the next step. Otherwise, use the following information to remediate the problems.

  NOTES:

  - If the node has PXE boot issues, such as getting PXE errors or not pulling the `ipxe.efi` binary, see PXE boot troubleshooting.
  - If `ncn-m001` did not run all of the `cloud-init` scripts, then the following commands need to be run (but only in that circumstance).

    ncn-m001# cloud-init clean ; cloud-init init ; cloud-init modules -m init ; \
              cloud-init modules -m config ; cloud-init modules -m final
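  The overall `cloud-init` result can also be checked directly on the node (assuming the installed `cloud-init` version provides the `status` subcommand):

  ncn-m001# cloud-init status --long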
- Once `cloud-init` has completed successfully, log in and start a typescript (the IP address used here is the one noted for `ncn-m002` in an earlier step).

  external# ssh root@10.102.11.13
  ncn-m002# pushd /metal/bootstrap/prep/admin
  ncn-m002# script -af csm-verify.$(date +%Y-%m-%d).txt
  ncn-m002# export PS1='\u@\H \D{%Y-%m-%d} \t \w # '
  ncn-m002# ssh ncn-m001
- Run `kubectl get nodes` to see the full Kubernetes cluster.

  ncn-m001# kubectl get nodes

  Expected output looks similar to the following:

  NAME       STATUS   ROLES                  AGE   VERSION
  ncn-m001   Ready    control-plane,master   27s   v1.20.13
  ncn-m002   Ready    control-plane,master   4h    v1.20.13
  ncn-m003   Ready    control-plane,master   4h    v1.20.13
  ncn-w001   Ready    <none>                 4h    v1.20.13
  ncn-w002   Ready    <none>                 4h    v1.20.13
  ncn-w003   Ready    <none>                 4h    v1.20.13
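  To flag any node that is not fully up (a sketch; empty output means every node reports `Ready`):

  ncn-m001# kubectl get nodes --no-headers | awk '$2 != "Ready" {print "NOT READY: " $0}'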
- Restore and verify the site link.

  Restore networking files from the manual backup taken during the Backup the bootstrap information step.

  ncn-m001# SYSTEM_NAME=eniac
  ncn-m001# rsync ncn-m002:/metal/bootstrap/prep/${SYSTEM_NAME}/pit-files/ifcfg-lan0 /etc/sysconfig/network/ && \
            wicked ifreload lan0 && \
            wicked ifstatus lan0

  Expected output looks similar to the following:

  lan0            up
        link:     #32, state up, mtu 1500
        type:     bridge, hwaddr 90:e2:ba:0f:11:c2
        config:   compat:suse:/etc/sysconfig/network/ifcfg-lan0
        leases:   ipv4 static granted
        addr:     ipv4 172.30.53.88/20 [static]
- Verify that the site link (`lan0`) and the VLANs have IP addresses.

  Examine the output to ensure that each interface has been assigned an IPv4 address.

  ncn-m001# for INT in lan0 bond0.nmn0 bond0.hmn0 bond0.can0 bond0.cmn0 ; do
                ip a show $INT || echo "ERROR: Command failed: ip a show $INT"
            done
- Verify that the default route is via the CMN.

  ncn-m001# ip r show default
- Verify that there is not a metal bootstrap IP address.

  ncn-m001# ip a show bond0
- Verify that the `zypper` repositories are empty and that all remote SUSE repositories are disabled.

  If the `rm` command fails because the files do not exist, this is not an error and should be ignored.

  ncn-m001# rm -v /etc/zypp/repos.d/* && zypper ms --remote --disable
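  Listing the repositories afterwards should confirm that none remain:

  ncn-m001# zypper lr || echo "No repositories defined (expected at this point)"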
- Download and install/upgrade the documentation RPM.
- Exit the typescript and move the backup to `ncn-m001`.

  This is required to facilitate reinstallations, because it pulls the preparation data back over to the documented area (`ncn-m001`).

  ncn-m001# exit
  ncn-m002# exit    # typescript exited
  ncn-m002# rsync -rltDv -P /metal/bootstrap ncn-m001:/metal/ && rm -rfv /metal/bootstrap
  ncn-m002# exit

## Enable NCN disk wiping safeguard

The next steps require `csi` from the installation media. `csi` is not otherwise provided on an NCN, because it is only used for CSM installation and bootstrap.
- SSH back into `ncn-m001`, or restart a local console.
or restart a local console. -
Resume the typescript.
ncn-m001# script -af /metal/bootstrap/prep/admin/csm-verify.$(date +%Y-%m-%d).txt ncn-m001# export PS1='\u@\H \D{%Y-%m-%d} \t \w # '
- Obtain access to CSI.

  ncn-m001# mkdir -pv /mnt/livecd /mnt/rootfs /mnt/sqfs && \
            mount -v /metal/bootstrap/cray-pre-install-toolkit-*.iso /mnt/livecd/ && \
            mount -v /mnt/livecd/LiveOS/squashfs.img /mnt/sqfs/ && \
            mount -v /mnt/sqfs/LiveOS/rootfs.img /mnt/rootfs/ && \
            cp -pv /mnt/rootfs/usr/bin/csi /tmp/csi && \
            /tmp/csi version && \
            umount -vl /mnt/sqfs /mnt/rootfs /mnt/livecd

  NOTE: `/tmp/csi` will delete itself on the next reboot. The `/tmp` directory is `tmpfs` and lives in memory; it does not persist across restarts.
Authenticate with the cluster.
ncn-m001# export TOKEN=$(curl -k -s -S -d grant_type=client_credentials \ -d client_id=admin-client \ -d client_secret=`kubectl get secrets admin-client-auth -o jsonpath='{.data.client-secret}' | base64 -d` \ https://api-gw-service-nmn.local/keycloak/realms/shasta/protocol/openid-connect/token | jq -r '.access_token')
- Set the wipe safeguard to allow safe reboots on all NCNs.

  ncn-m001# /tmp/csi handoff bss-update-param --set metal.no-wipe=1
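  To verify the change, the BSS boot parameters can be queried back; every NCN entry should now carry `metal.no-wipe=1` (a sketch, assuming the standard BSS endpoint):

  ncn-m001# curl -s -k -H "Authorization: Bearer ${TOKEN}" https://api-gw-service-nmn.local/apis/bss/boot/v1/bootparameters | jq -r '.[].params' | grep -o "metal.no-wipe=[01]" | sort | uniq -c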
## Remove the default NTP pool

Run the following command on `ncn-m001` to remove the default pool, which can cause contention issues with NTP.

ncn-m001# sed -i "s/^! pool pool\.ntp\.org.*//" /etc/chrony.conf
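To confirm that the pool line is gone and that chrony is using the expected local sources (a sketch):

ncn-m001# grep "pool.ntp.org" /etc/chrony.conf || echo "Default pool removed"
ncn-m001# chronyc sources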
## Configure DNS and NTP on each BMC

NOTE: Only follow this section if the NCNs are HPE hardware. If the system uses Gigabyte or Intel hardware, skip this section.

Configure DNS and NTP on the BMC of each management node except `ncn-m001`. However, the commands in this section are all run on `ncn-m001`.
- Validate that the system is HPE hardware.

  ncn-m001# ipmitool mc info | grep "Hewlett Packard Enterprise" || echo "Not HPE hardware -- SKIP these steps"
- Set environment variables.

  Set the `IPMI_PASSWORD` and `USERNAME` variables to the BMC credentials for the NCNs.

  Using `read -s` for this prevents the credentials from being echoed to the screen or saved in the shell history.

  ncn-m001# read -s IPMI_PASSWORD
  ncn-m001# read -s USERNAME
  ncn-m001# export IPMI_PASSWORD USERNAME
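  Before looping over every BMC, the credentials can be sanity-checked against a single BMC (a sketch, reusing the `ipmitool` pattern from earlier in this procedure):

  ncn-m001# ipmitool -I lanplus -U $USERNAME -E -H ncn-m002-mgmt chassis power status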
- Set the `BMCS` variable to the list of BMCs for all master, worker, and storage nodes, except `ncn-m001-mgmt`:

  ncn-m001# BMCS=$(grep -Eo "[[:space:]]ncn-[msw][0-9][0-9][0-9]-mgmt([.]|[[:space:]]|$)" /etc/hosts | sed 's/^.*\(ncn-[msw][0-9][0-9][0-9]-mgmt\).*$/\1/' | sort -u | grep -v "^ncn-m001-mgmt$") ; echo $BMCS

  Expected output looks similar to the following:

  ncn-m002-mgmt ncn-m003-mgmt ncn-s001-mgmt ncn-s002-mgmt ncn-s003-mgmt ncn-w001-mgmt ncn-w002-mgmt ncn-w003-mgmt
- Run the following to loop through all of the BMCs (except `ncn-m001-mgmt`) and apply the desired settings.

  ncn-m001# for BMC in $BMCS ; do
                echo "$BMC: Disabling DHCP and configuring NTP on the BMC using data from cloud-init"
                /opt/cray/csm/scripts/node_management/set-bmc-ntp-dns.sh ilo -H $BMC -S -n
                echo
                echo "$BMC: Configuring DNS on the BMC using data from cloud-init"
                /opt/cray/csm/scripts/node_management/set-bmc-ntp-dns.sh ilo -H $BMC -d
                echo
                echo "$BMC: Showing settings"
                /opt/cray/csm/scripts/node_management/set-bmc-ntp-dns.sh ilo -H $BMC -s
                echo
            done ; echo "Configuration completed on all NCN BMCs"
## Next topic

After completing this procedure, proceed to Configure Administrative Access.