From 69b45b7a9f2808377c13302ec60b0a6f8095e853 Mon Sep 17 00:00:00 2001
From: Kevin Klinger
Date: Thu, 11 Jul 2024 11:51:24 +0200
Subject: [PATCH 01/48] Adding HA subscription to Preparations

---
 adoc/SAP-EIC-Main.adoc | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/adoc/SAP-EIC-Main.adoc b/adoc/SAP-EIC-Main.adoc
index fb9dddbd..887ed725 100644
--- a/adoc/SAP-EIC-Main.adoc
+++ b/adoc/SAP-EIC-Main.adoc
@@ -47,6 +47,9 @@ https://help.sap.com/docs/integration-suite?locale=en-US and search for the "Edg
 ** {slem} {slem_version}
 ** {rancher} {rancher_version}
 ** {lh} {lh_version}
+** {sle_ha} *
+
++++*+++ Only needed if you want to set up {rancher} in a highly available setup.

 IMPORTANT: If you want to use different versions of {slem}, {rancher}, {rke} or {lh},
 make sure to check the support matrix for the related solutions you want to use:
 https://www.suse.com/suse-rancher/support-matrix/all-supported-versions/

From 9ef2c5668d36fe1bd493fe1a19f0c6ca0db869c8 Mon Sep 17 00:00:00 2001
From: Kevin Klinger
Date: Thu, 11 Jul 2024 11:51:38 +0200
Subject: [PATCH 02/48] Adding explanation for $ and #

---
 adoc/SAP-EIC-Main.adoc | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/adoc/SAP-EIC-Main.adoc b/adoc/SAP-EIC-Main.adoc
index 887ed725..6e356165 100644
--- a/adoc/SAP-EIC-Main.adoc
+++ b/adoc/SAP-EIC-Main.adoc
@@ -8,6 +8,7 @@
 :slem: SUSE Linux Enterprise Micro
 :slem_version: 5.4
 :sles_version: 15 SP5
+:sle_ha: SUSE Linux Enterprise High Availability Extension
 :lh: Longhorn
 :lh_version: 1.5.5
 :rancher: Rancher Prime
@@ -37,6 +38,9 @@ It will guide you through the steps of:
 NOTE: This guide does not contain information about sizing your landscapes. Visit
 https://help.sap.com/docs/integration-suite?locale=en-US and search for the "Edge Integration Cell Sizing Guide".

+NOTE: In this guide we'll use $ and # for shell commands, where # means that the command needs to be executed as the root user and
+$ means that the command can be run by any user.
+
 ++++
 ++++

From 1e9bf3f83a8fa1e438a882321d55809e2c2d439d Mon Sep 17 00:00:00 2001
From: Ulrich Schairer
Date: Fri, 12 Jul 2024 15:54:32 +0200
Subject: [PATCH 03/48] use rac for installation of cert-manager

fixed some typos
---
 adoc/SAP-Rancher-RKE2-Installation.adoc |  6 ++--
 adoc/SAPDI3-Rancher.adoc                | 43 ++++++++++++++++++++++---
 2 files changed, 41 insertions(+), 8 deletions(-)

diff --git a/adoc/SAP-Rancher-RKE2-Installation.adoc b/adoc/SAP-Rancher-RKE2-Installation.adoc
index 9c761ef3..d7ea25ba 100644
--- a/adoc/SAP-Rancher-RKE2-Installation.adoc
+++ b/adoc/SAP-Rancher-RKE2-Installation.adoc
@@ -46,11 +46,11 @@ machineGlobalConfig:
     - ipvs-strict-arp=true
 ----

-To do so, apply all configuration as usuall and hit the *Edit as YAML* button in the creation step, as shown below:
+To do so, apply all configuration as usual and hit the *Edit as YAML* button in the creation step, as shown below:

 image::SAP-Rancher-Create-Config-YAML.png[title=Rancher create custom cluster yaml config,scaledwidth=99%]

-The excrept is to be located under *spec.rkeConfig*. An example can be seen here:
+The excerpt is to be located under *spec.rkeConfig*. An example can be seen here:

 image::SAP-Rancher-Create-StrictARP.png[title=Rancher create Cluster with strict ARP, scaledwidth=99%]

@@ -79,4 +79,4 @@ If your {rancher} instance does hold a self-signed certifcate, make sure to tick
 You can run the command on all nodes in parallel and don't have to wait until a single node is down.
 Once all machines are registered, you can see the cluster status at the top, changing from "updating" to "active".
-At this point in time, your Kubernetes cluster is ready to be used.
\ No newline at end of file
+At this point in time, your Kubernetes cluster is ready to be used.

diff --git a/adoc/SAPDI3-Rancher.adoc b/adoc/SAPDI3-Rancher.adoc
index aee6fbd6..2197a65c 100644
--- a/adoc/SAPDI3-Rancher.adoc
+++ b/adoc/SAPDI3-Rancher.adoc
@@ -115,14 +115,13 @@ EOF
 Create configuration files for additional cluster nodes:

 ----
-# cat > /etc/rancher/rke2/config.yaml
+# cat <<EOF > /etc/rancher/rke2/config.yaml
 server: https://"FQDN of registration address":9345
 token: 'your cluster token'
 tls-san:
 - FQDN of fixed registration address on load balancer
 - other hostname
 - IP v4 address
-
 EOF
 ----

@@ -159,12 +158,46 @@ The easiest option to install Helm is to run:

 ==== Installing cert-manager

+Even though cert-manager is available for deployment using the {rancher} Apps, we recommend using the {rac}.
+
+==== Create Secret for {rac}
+First we need to create a namespace and the *imagePullSecret* for installing the cert-manager.
+
+----
+kubectl create namespace cert-manager
+kubectl -n cert-manager create secret docker-registry application-collection --docker-server=dp.apps.rancher.io --docker-login= --docker-password=
+----
+
+How to create the *imagePullSecret* is described in the Section xref:SAP-EIC-Main.adoc#imagePullSecret[].
+
+===== Installing the application
+
+You will need to login to the {rac}:
+
+----
+$ helm registry login dp.apps.rancher.io/charts -u -p
+----
+
+Now pull the Helm chart from the {rac}:
+
+----
+$ helm pull oci://dp.apps.rancher.io/charts/cert-manager --untar
+----
+
+Install cert-manager:
+
+----
+$ helm install --namespace cert-manager --set crds.enabled=true --set-json 'imagePullSecrets=[{"name":"application-collection"}]' cert-manager ./certmanager
+----
+
 === Installing {rancher}

 To install {rancher}, you need to add the related Helm repository.

From 1ccafe6cc89277b4611d7bc6102a70ebf9720c83 Mon Sep 17 00:00:00 2001
From: Ulrich Schairer
Date: Fri, 12 Jul 2024 16:01:19 +0200
Subject: [PATCH 04/48] changed HERE doc

---
 adoc/SAP-EIC-SLEMicro.adoc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/adoc/SAP-EIC-SLEMicro.adoc b/adoc/SAP-EIC-SLEMicro.adoc
index 87f27732..de984a9f 100644
--- a/adoc/SAP-EIC-SLEMicro.adoc
+++ b/adoc/SAP-EIC-SLEMicro.adoc
@@ -174,7 +174,7 @@ To do so, create and populate the file */etc/modules-load.d/ip_vs.conf* on each

 [source, shell]
 ----
-# cat <> /etc/modules-load.d/ip_vs.conf
+# cat <<EOF > /etc/modules-load.d/ip_vs.conf
 ip_vs
 ip_vs_rr
 ip_vs_wrr

From 797dfd1ae3c44f41907c8f25d10e737cf6832669 Mon Sep 17 00:00:00 2001
From: Kevin Klinger
Date: Tue, 23 Jul 2024 15:34:43 +0200
Subject: [PATCH 05/48] Adding ns to imagePullSecret

---
 adoc/SAP-EIC-Main.adoc | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/adoc/SAP-EIC-Main.adoc b/adoc/SAP-EIC-Main.adoc
index 6e356165..f0ab0c5f 100644
--- a/adoc/SAP-EIC-Main.adoc
+++ b/adoc/SAP-EIC-Main.adoc
@@ -147,9 +147,13 @@ Get your user name and your access token for the {rac}.
 Then run:

 ----
-$ kubectl create secret docker-registry application-collection --docker-server=dp.apps.rancher.io --docker-username= --docker-password=
+$ kubectl -n create secret docker-registry application-collection --docker-server=dp.apps.rancher.io --docker-username= --docker-password=
 ----

+As secrets are namespace sensitive, you'll need to create this for every namespace needed.
+If you're deploying {metallb}, {redis} and {pg} as described in this guide, you'll need to create the imagePullSecret in the related namespaces,
+which means you'll have three secrets with the same content but in three different namespaces.
+
 ==== Creating an imagePullSecret using {rancher}

 You can also create an imagePullSecret using {rancher}.

From 74e9c0e8087f4c4215b96702cbc113e4a9492697 Mon Sep 17 00:00:00 2001
From: Kevin Klinger
Date: Tue, 23 Jul 2024 15:35:20 +0200
Subject: [PATCH 06/48] Fixing cert-manager

---
 adoc/SAPDI3-Rancher.adoc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/adoc/SAPDI3-Rancher.adoc b/adoc/SAPDI3-Rancher.adoc
index 2197a65c..e1e99127 100644
--- a/adoc/SAPDI3-Rancher.adoc
+++ b/adoc/SAPDI3-Rancher.adoc
@@ -191,7 +191,7 @@ $ helm pull oci://dp.apps.rancher.io/charts/cert-manager --untar
 Install cert-manager:

 ----
-$ helm install --namespace cert-manager --set crds.enabled=true --set-json 'imagePullSecrets=[{"name":"application-collection"}]' cert-manager ./certmanager
+$ helm install --namespace cert-manager --set crds.enabled=true --set-json 'global.imagePullSecrets=[{"name":"application-collection"}]' cert-manager ./cert-manager
 ----

From d7294a7a5580d6524d1e0cc5d1af2fea27efccc0 Mon Sep 17 00:00:00 2001
From: Kevin Klinger
Date: Tue, 23 Jul 2024 15:36:26 +0200
Subject: [PATCH 07/48] Adding first landscape overview

---
 adoc/SAP-EIC-Main.adoc                  |  14 +++
 images/src/svg/SAP-EIC-Architecture.svg | 127 ++++++++++++++++++++++++
 2 files changed, 141 insertions(+)
 create mode 100644 images/src/svg/SAP-EIC-Architecture.svg

diff --git a/adoc/SAP-EIC-Main.adoc b/adoc/SAP-EIC-Main.adoc
index f0ab0c5f..80b70926 100644
--- a/adoc/SAP-EIC-Main.adoc
+++ b/adoc/SAP-EIC-Main.adoc
@@ -79,6 +79,20 @@ Furthermore:
 ++++

+== Landscape Overview
+
+The following picture shows the landscape overview:
+
+image::SAP-EIC-Architecture.svg[scaledwidth=99%,opts=inline,Embedded]
+
+The dark blue rectangles represent Kubernetes clusters. +
+The olive rectangles represent Kubernetes nodes that hold the roles of Control Plane and Worker combined. +
+The green rectangles represent Kubernetes Control Plane nodes. +
+The orange rectangles represent Kubernetes Worker nodes.
+
+In this document we'll guide you through the installation of the required clusters.
+
+
 == Installing {slem} {slem_version}

 There are several ways to install {slem} {slem_version}. For this best practice guide, we use the installation method via graphical installer.
 Further installation routines can be found in the https://documentation.suse.com/sle-micro/5.4/html/SLE-Micro-all/book-deployment-slemicro.html[Deployment Guide for SUSE Linux Enterprise Micro 5.4].
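A note on the namespace handling introduced in patch 05 above: because the pull secret is namespace-scoped, the same `kubectl create secret docker-registry` call has to be repeated for every target namespace. A minimal sketch of how that repetition can be scripted, assuming the three namespaces used later in this series and two placeholder shell variables (`RAC_USER`, `RAC_TOKEN`) holding the {rac} credentials; neither the loop nor the variable names are part of the patches:

----
for ns in metallb redis postgresql; do
  # idempotently create the namespace, then the pull secret inside it
  kubectl create namespace "$ns" --dry-run=client -o yaml | kubectl apply -f -
  kubectl -n "$ns" create secret docker-registry application-collection \
    --docker-server=dp.apps.rancher.io \
    --docker-username="$RAC_USER" --docker-password="$RAC_TOKEN"
done
----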
diff --git a/images/src/svg/SAP-EIC-Architecture.svg b/images/src/svg/SAP-EIC-Architecture.svg
new file mode 100644
index 00000000..52a3732f
--- /dev/null
+++ b/images/src/svg/SAP-EIC-Architecture.svg
@@ -0,0 +1,127 @@
[The added SVG markup is not reproduced here. It draws the landscape diagram referenced above: a "Rancher Cluster" of three combined Control Plane/Worker nodes that "Creates / Manages" a "Production Cluster" and a "QA / Dev Cluster", each consisting of three Control Plane and three Worker nodes.]

From a34b19db3c36eb1472c4967727f7416e9322f2e3 Mon Sep 17 00:00:00 2001
From: Dominik_Mathern
Date: Wed, 24 Jul 2024 09:13:26 +0200
Subject: [PATCH 08/48] Change of metallb and longhorn partition

Added Longhorn partition and removed MetalLB kernel parameters.
---
 adoc/SAP-EIC-SLEMicro.adoc | 63 ++++++++++++++++++++++----------------
 1 file changed, 37 insertions(+), 26 deletions(-)

diff --git a/adoc/SAP-EIC-SLEMicro.adoc b/adoc/SAP-EIC-SLEMicro.adoc
index de984a9f..3810c39c 100644
--- a/adoc/SAP-EIC-SLEMicro.adoc
+++ b/adoc/SAP-EIC-SLEMicro.adoc
@@ -167,38 +167,49 @@ Per default {slem} runs a timer for *transactional-update* in the background whi

 ifdef::metallb[]
 // Needed due to Github issue: https://github.com/rancher/rke2/issues/3710
 [#metal-slem]
-=== Preparation for {metallb}
-If you want to use {metallb} as a Kubernetes Load Balancer, you need to make sure that the kernel modules for ip_vs are loaded correctly on boot time.
-To do so, create and populate the file */etc/modules-load.d/ip_vs.conf* on each cluster node as followed:
+=== Preparation for {lh}
+For {lh} we need to do some preparation steps. First we need to install additional packages on all worker nodes. Then we will attach a second disk to the worker nodes, create a filesystem on top of it and mount it to the longhorn default location.

-[source, shell]
+We will install open-iscsi as a requirement for longhorn and Logical Volume Management for adding a filesystem to longhorn.
 ----
-# cat <<EOF > /etc/modules-load.d/ip_vs.conf
-ip_vs
-ip_vs_rr
-ip_vs_wrr
-ip_vs_sh
-EOF
+transactional-update pkg install open-iscsi lvm2
 ----
 endif::[]

+After the needed packages are installed, we will create a new logical volume with Logical Volume Management.

-// To do so, create a file on each cluster node named:
+First we want to create a new physical volume. In our case the second disk is called vdb and we use this as longhorn volume.
+----
+pvcreate /dev/vdb
+----

-// ----
-// /etc/modules-load.d/ip_vs.conf
-// ----
+After the physical volume is created, we create a volume group called vgdata:
+----
+vgcreate vgdata /dev/vdb
+----

-// Now, you need to add the entries for the related kernel modules:
-// ----
-// ip_vs
-// ip_vs_rr
-// ip_vs_wrr
-// ip_vs_sh
-// ----
+Now we can create the logical volume, and we will use 100% of the disk.
+----
+lvcreate -n lvlonghorn -l100%FREE vgdata
+----

+We will create the XFS filesystem on the logical volume. You don't need to create a partition on top of it.
+----
+mkfs.xfs /dev/vgdata/lvlonghorn
+----

-// Reboot the nodes and check that the kernel modules are loaded successfully:
-// ----
-// # lsmod | grep ip_vs
-// ----
+Before we can mount the device, we need to create the directory structure.
+---- +mkdir -p /var/lib/longhorn +---- -// Now, you need to add the entries for the related kernel modules: -// ---- -// ip_vs -// ip_vs_rr -// ip_vs_wrr -// ip_vs_sh -// ---- +That the mount of the filesystem is persistent we add an entry into the fstab +---- +echo -e "/dev/vgdata/lvlonghorn /var/lib/longhorn xfs defaults 0 0" >> /etc/fstab +---- + +Now we can mount the filesystem +---- +mount -a +---- -// Reboot the nodes and check that the kernel modules are loaded successfully: -// ---- -// # lsmod | grep ip_vs -// ---- From 0c1e9f96d327e1e9fa087da5da1347b108eb882f Mon Sep 17 00:00:00 2001 From: Dominik_Mathern Date: Thu, 25 Jul 2024 11:30:51 +0200 Subject: [PATCH 09/48] Typo in service iscsid --- adoc/SAPDI3-Longhorn.adoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/adoc/SAPDI3-Longhorn.adoc b/adoc/SAPDI3-Longhorn.adoc index 3167d990..5fa48fe4 100644 --- a/adoc/SAPDI3-Longhorn.adoc +++ b/adoc/SAPDI3-Longhorn.adoc @@ -13,7 +13,7 @@ Before {lh} can be installed on a Kubernetes cluster, all nodes must have the `open-iscsi` package installed, and the ISCSI daemon needs to be started. To do so, run: ---- # zypper in -y open-iscsi -# systemctl iscsid enable --now +# systemctl enable iscsid --now ---- To make sure a node is prepared for {lh}, you can use the following script to check: From 161ba8c3de85b8bc7bb628227687975732756621 Mon Sep 17 00:00:00 2001 From: Dominik_Mathern Date: Thu, 25 Jul 2024 15:40:46 +0200 Subject: [PATCH 10/48] Update SAP-Rancher-RKE2-Installation.adoc Remove advanced configuration for MetalLB. --- adoc/SAP-Rancher-RKE2-Installation.adoc | 34 ------------------------- 1 file changed, 34 deletions(-) diff --git a/adoc/SAP-Rancher-RKE2-Installation.adoc b/adoc/SAP-Rancher-RKE2-Installation.adoc index d7ea25ba..dfdf88b6 100644 --- a/adoc/SAP-Rancher-RKE2-Installation.adoc +++ b/adoc/SAP-Rancher-RKE2-Installation.adoc @@ -27,40 +27,6 @@ In the next step, make sure to select a Kubernetes version that is supported by ++++ -// Section is only needed if metallb shall be used -// Ref.: https://forums.rancher.com/t/kube-proxy-settings-in-custom-rke2-cluster/40107/2 -// Ref.: https://github.com/rancher/rke2/issues/3710 -ifdef::metallb[] -[#metal-rke] -If you do not plan to use {metallb}, please continue xref:SAP-Rancher-RKE2-Installation.adoc#nmetallb[below]. - -To prepare {rke} for running {metallb}, you'll need to enable strictarp mode for ipvs in kube-proxy. -To enable strictarp for clusters you want to roll out using {rancher}, you'll need to add the following lines to your configuration: - - -[source,yaml] ----- -machineGlobalConfig: - kube-proxy-arg: - - proxy-mode=ipvs - - ipvs-strict-arp=true ----- - -To do so, apply all configuration as usual and hit the *Edit as YAML* button in the creation step, as shown below: - -image::SAP-Rancher-Create-Config-YAML.png[title=Rancher create custom cluster yaml config,scaledwidth=99%] - -The excerpt is to be located under *spec.rkeConfig*. An example can be seen here: - -image::SAP-Rancher-Create-StrictARP.png[title=Rancher create Cluster with strict ARP, scaledwidth=99%] - -endif::[] - -++++ - -++++ - -[#nmetallb] If you don't have any further requirements to Kubernetes, you can click the "Create" button at the very bottom. In any other cases talk to your administrators before making adjustements. 
From 0d4b574fa399e03e6b8cb88f4ce3c46e709191f5 Mon Sep 17 00:00:00 2001
From: Dominik_Mathern
Date: Thu, 25 Jul 2024 15:57:35 +0200
Subject: [PATCH 11/48] ifdef for SLE Micro and Longhorn

---
 adoc/SAP-EIC-Main.adoc                  |   3 +-
 adoc/SAP-EIC-Metallb.adoc               |   6 -
 adoc/SAP-EIC-SLEMicro.adoc              | 157 ++++--------------------
 adoc/SAP-Rancher-RKE2-Installation.adoc |   3 +
 adoc/SAPDI3-Longhorn.adoc               |   5 +-
 5 files changed, 34 insertions(+), 140 deletions(-)

diff --git a/adoc/SAP-EIC-Main.adoc b/adoc/SAP-EIC-Main.adoc
index 80b70926..3b3ee63b 100644
--- a/adoc/SAP-EIC-Main.adoc
+++ b/adoc/SAP-EIC-Main.adoc
@@ -94,8 +94,7 @@ In this document we'll guide you through the installation of the required clust

 == Installing {slem} {slem_version}

-There are several ways to install {slem} {slem_version}. For this best practice guide, we use the installation method via graphical installer.
-Further installation routines can be found in the https://documentation.suse.com/sle-micro/5.4/html/SLE-Micro-all/book-deployment-slemicro.html[Deployment Guide for SUSE Linux Enterprise Micro 5.4].
+There are several ways to install {slem} {slem_version}. For this best practice guide, we use the installation method via graphical installer. But in cloud-native deployments it is highly recommended to use Infrastructure as Code technologies to fully automate the deployment and lifecycle processes.

 include::SAP-EIC-SLEMicro.adoc[SLEMicro]

diff --git a/adoc/SAP-EIC-Metallb.adoc b/adoc/SAP-EIC-Metallb.adoc
index 4fefc440..0a6aadc2 100644
--- a/adoc/SAP-EIC-Metallb.adoc
+++ b/adoc/SAP-EIC-Metallb.adoc
@@ -13,12 +13,6 @@ There are several ways to deploy {metallb}. In this guide we'll describe how to

 Please make sure to have a range of IP addresses available for configuring {metallb}.

-===== Preparations
-
-Make sure the related Kernel modules are loaded on your Kubernetes worker nodes as described in xref:SAP-EIC-SLEMicro#metal-slem[].
-
-Make sure you enabled strictarp as described in xref:SAP-Rancher-RKE2-Installation.adoc#metal-rke[]
-
 ===== Installation of {metallb}

diff --git a/adoc/SAP-EIC-SLEMicro.adoc b/adoc/SAP-EIC-SLEMicro.adoc
index 3810c39c..59ef8f0f 100644
--- a/adoc/SAP-EIC-SLEMicro.adoc
+++ b/adoc/SAP-EIC-SLEMicro.adoc
@@ -1,134 +1,21 @@
 [#SLEMicro]

-=== Preparation
+=== Installation

 On each server in your environment for {eic} and {rancher},
 install {slem} {slem_version} as the operating system.
-This chapter describes all recommended steps for the installation.
+The manual installation is described in the {slem} {slem_version} Deployment Guide in our documentation: https://documentation.suse.com/sle-micro/{slem_version}/single-html/SLE-Micro-deployment/#cha-install[SLE Micro Deployment Guide].
+
+At the end of the installation process, check in the summary window that these security settings are configured:
+
+ ** The firewall will be disabled.
+ ** The SSH service will be enabled.
+ ** SELinux will be set in permissive mode.
+
+We need to set SELinux to permissive mode because some components of the Edge Integration Cell violate SELinux rules, and the application would not work otherwise.

 TIP: If you have already set up all machines and the operating system,
 skip this chapter.

-++++
-
-++++
-
-* Mount the {slem} into your virtual machine and start the VM.
-* When the boot menu appears select *Installation*.
-+
-image::EIC_SLE_Micro_setup_boot_menu.png[title=SLE Micro Boot Menu,scaledwidth=99%]
-
-++++
-
-++++
-
-* Select your *Language*, *Keyboard Layout* and accept the License Agreement.
-+
-image::EIC_SLE_Micro_setup_License_Agreement.png[title=SLE Micro Setup License Agreement,scaledwidth=99%]
-
-++++
-
-++++
-
-* It is recommended to use a static network configuration.
-During the installation setup, the first time to adjust this is when the registration page is displayed.
-In the upper right corner, click the button *Network Configuration ...*: - -image::EIC_SLE_Micro_setup_Registration.png[title=SLE Micro Setup Registration,scaledwidth=99%] - -++++ - -++++ - -* The *Network Settings* page is displayed. By default, the network adapter is configured to use DHCP. -To change this, click the Button *Edit*. -+ -image::EIC_SLE_Micro_setup_Network_Settings.png[title=SLE Micro Setup Network Settings,scaledwidth=99%] - -++++ - -++++ - -* On the *Network Card Setup* page, select *Statically Assigned IP Address* and fill in the fields *IP Address*, *Subnet Mask* and *Hostname*. -+ -image::EIC_SLE_Micro_setup_Network_Card_Setup.png[title=SLE Micro Setup Network Card,scaledwidth=99%] - -++++ - -++++ - -* Back to the *Network Settings* go top the *Hostname/DNS* Section and set your *hostname*, *Name Server* and *Domain Search*. -+ -image::EIC_SLE_Micro_setup_Network_Settings_DNS.png[title=SLE Micro Setup Hostname/DNS,scaledwidth=99%] - -++++ - -++++ - -* Then switch to the *Routing* Section and go to *Add*. -+ -image::EIC_SLE_Micro_setup_Network_Settings_Routing.png[title=SLE Micro Setup Hostname/DNS,scaledwidth=99%] +At the end of the installation process in the summary windows you need to check if these Security Settings are configured: -++++ - -++++ - -* Fill out the *Gateway* and set it as *Default Route*. -+ -image::EIC_SLE_Micro_setup_Network_Settings_default_route.png[title=SLE Micro Setup Network Settings Default Route,scaledwidth=99%] - -++++ - -++++ - -* You will come back to the *Registration* page and here we will select *Skip Registration* and will do it later. -+ -image::EIC_SLE_Micro_setup_skip_Registration.png[title=SLE Micro Setup Skip Registration,scaledwidth=99%] - -++++ - -++++ - -* In the next window you can change the NTP Server or keep the default. -+ -image::EIC_SLE_Micro_setup_NTP_Configuration.png[title=SLE Micro Setup NTP Configuration,scaledwidth=99%] - -++++ - -++++ - -* On the next page fill out our password for the *root* user and if you want you can import public ssh keys for the root user. -+ -image::EIC_SLE_Micro_setup_Authentication.png[title=SLE Micro Setup Authentication for the System Administrator "root",scaledwidth=99%] + ** The firewall will be disabled. + ** The SSH service will be enabled. + ** SELinux will be set in permissive mode. -++++ - -++++ - -* On the last page you see a summary of your *Installation Settings* where you can change the disk layout, software packages and more. Please make sure that: - - ** The firewall will be disabled. - ** The SSH service will be enabled. - ** Kdump status is disabled. - ** SELinux will be set in permissive mode. +We need to set SELinux into permissive mode, because some components of the Edge Integration Cell violated SELinux rules and the application will not work. -+ -image::EIC_SLE_Micro_setup_Installation_Settings01.png[title=SLE Micro Setup Installation Settings upper page,scaledwidth=99%] -image::EIC_SLE_Micro_setup_Installation_Settings02.png[title=SLE Micro Setup Installation Settings lower page,scaledwidth=99%] -* To disable Kdump, scroll down and click its label . This opens the *Kdump Start-Up* page. -On that page, make sure "Disable Kdump" is selected. - -* To set SELinux im permissive mode, scroll down and click on *Security*. This open the *Security* page. On the right site there is the menu entry *Selected Module*. Open the dropdown menu and select *Permissive*. - -* Click on *Install* and confirm the installation. 
-+
-image::EIC_SLE_Micro_setup_Confirm_Installation.png[title=SLE Micro Setup Confirm Installation,scaledwidth=99%]
-
-* After the installation is finished you need to reboot the system.
-+
-image::EIC_SLE_Micro_setup_reboot.png[title=SLE Micro Setup reboot,scaledwidth=99%]
-
-* You will see a login screen and you can login with your choosen Username and password.

 === Register your system
 To bring your system up to date you need to register your system against a SUSE Manager, RMT server or direct to the SCC Portal. We will describe the process in our guide with the direct connect to the SCC. For more information please look into the {slem} documentation.

 === Disable automatic reboot
-Per default {slem} runs a timer for *transactional-update* in the background which could automatic reboot your system. We will disable it. 
+Per default {slem} runs a timer for *transactional-update* in the background which could automatically reboot your system. We will disable it.

 ----
 # systemctl --now disable transactional-update.timer
 ----

 === Preparation for {lh}
-For {lh} we need to do some preparation steps. First we need to install additional packages on all worker nodes. Then we will attach a second disk to the worker nodes, create a filesystem on top of it and mount it to the longhorn default location.
+For {lh} we need to do some preparation steps. First we need to install additional packages on all worker nodes. Then we will attach a second disk to the worker nodes, create a filesystem on top of it and mount it to the longhorn default location. The size of the second disk depends on your use case.

-We will install open-iscsi as a requirement for longhorn and Logical Volume Management for adding a filesystem to longhorn.
+We need to install some packages as a requirement for longhorn and Logical Volume Management for adding a filesystem to longhorn.
 ----
-transactional-update pkg install open-iscsi lvm2
+# transactional-update pkg install lvm2 jq nfs-client cryptsetup open-iscsi
 ----

-After the needed packages are installed, we will create a new logical volume with Logical Volume Management.
+After the needed packages are installed, you need to reboot your machine.
+----
+# reboot
+----
+
+Now we can enable the iscsid service.
+
+----
+# systemctl enable iscsid --now
+----
+
+==== Create filesystem for longhorn
+Then we will create a new logical volume with Logical Volume Management.

 First we want to create a new physical volume. In our case the second disk is called vdb and we use this as longhorn volume.

diff --git a/adoc/SAP-Rancher-RKE2-Installation.adoc b/adoc/SAP-Rancher-RKE2-Installation.adoc
index dfdf88b6..1347ce44 100644
--- a/adoc/SAP-Rancher-RKE2-Installation.adoc
+++ b/adoc/SAP-Rancher-RKE2-Installation.adoc
@@ -27,6 +27,9 @@ In the next step, make sure to select a Kubernetes version that is supported by
 ++++

+
+
+[#nmetallb]
 If you don't have any further requirements to Kubernetes, you can click the "Create" button at the very bottom.
 In any other cases talk to your administrators before making adjustements.
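One practical note on the package installation above: transactional-update installs into a new snapshot, so the packages and the iscsid unit only become usable after the reboot. A quick post-reboot check, assuming exactly the package list from the patch:

----
# rpm -q lvm2 jq nfs-client cryptsetup open-iscsi   # all five should report a version
# systemctl is-enabled iscsid                       # should report "enabled"
----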
diff --git a/adoc/SAPDI3-Longhorn.adoc b/adoc/SAPDI3-Longhorn.adoc
index 5fa48fe4..13dcd42b 100644
--- a/adoc/SAPDI3-Longhorn.adoc
+++ b/adoc/SAPDI3-Longhorn.adoc
@@ -8,13 +8,16 @@ This chapter details the minimum requirements to install {lh} and describes thre
 For more details, visit https://longhorn.io/docs/{lh_version}/deploy/install/

 === Requirements
-
+ifndef::slem[]
 Before {lh} can be installed on a Kubernetes cluster, all nodes must have the `open-iscsi` package installed, and the ISCSI daemon needs to be started. To do so, run:
+
+
 ----
 # zypper in -y open-iscsi
 # systemctl enable iscsid --now
 ----
+endif::[]

 To make sure a node is prepared for {lh}, you can use the following script to check:

From c9cfeca899727cf5ee5c08141bf0252032ec7445 Mon Sep 17 00:00:00 2001
From: Kevin Klinger
Date: Thu, 25 Jul 2024 16:04:30 +0200
Subject: [PATCH 12/48] Adding rac to DI guide

---
 adoc/SAPDI3-SUSE_Kubernetes_Stack.adoc | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/adoc/SAPDI3-SUSE_Kubernetes_Stack.adoc b/adoc/SAPDI3-SUSE_Kubernetes_Stack.adoc
index f24cebba..7f1180a2 100644
--- a/adoc/SAPDI3-SUSE_Kubernetes_Stack.adoc
+++ b/adoc/SAPDI3-SUSE_Kubernetes_Stack.adoc
@@ -7,9 +7,9 @@
 :sles_version: 15 SP4
 :sles4sap: SUSE Linux Enterprise Server for SAP Applications
 :lh: Longhorn
-:rancher: SUSE Rancher
+:rancher: Rancher Prime
 :harvester: Harvester
-
+:rac: Rancher Application Collection

 = {di} 3 on SUSE's Kubernetes Stack

From bf24185a66d820cc91b110f54e252eb5d63e7c4f Mon Sep 17 00:00:00 2001
From: Kevin Klinger
Date: Mon, 29 Jul 2024 14:11:56 +0200
Subject: [PATCH 13/48] Improve ImagePullSecret reusability

---
 adoc/SAP-EIC-ImagePullSecrets.adoc     | 51 ++++++++++++++++++++++
 adoc/SAP-EIC-Main.adoc                 | 58 ++++----------------------
 adoc/SAP-EIC-PostgreSQL.adoc           |  2 +-
 adoc/SAP-EIC-Redis.adoc                |  2 +-
 adoc/SAPDI3-Rancher.adoc               |  2 +-
 adoc/SAPDI3-SUSE_Kubernetes_Stack.adoc |  8 ++++
 6 files changed, 69 insertions(+), 54 deletions(-)
 create mode 100644 adoc/SAP-EIC-ImagePullSecrets.adoc

diff --git a/adoc/SAP-EIC-ImagePullSecrets.adoc b/adoc/SAP-EIC-ImagePullSecrets.adoc
new file mode 100644
index 00000000..645f6f58
--- /dev/null
+++ b/adoc/SAP-EIC-ImagePullSecrets.adoc
@@ -0,0 +1,51 @@
+[#imagePullSecret]
+= Creating an imagePullSecret for the {rac}
+
+To make the resources available for deployment, you need to create an imagePullSecret.
+In this guide we use the name _application-collection_ for it.
+
+== Creating an imagePullSecret using kubectl
+
+Using `kubectl` to create the imagePullSecret is quite easy.
+Get your user name and your access token for the {rac}.
+Then run:
+
+----
+$ kubectl -n create secret docker-registry application-collection --docker-server=dp.apps.rancher.io --docker-username= --docker-password=
+----
+
+As secrets are namespace sensitive, you'll need to create this for every namespace needed.
+
+++++
+
+++++
+
+== Creating an imagePullSecret using {rancher}
+
+You can also create an imagePullSecret using {rancher}.
+Therefore, open {rancher} and enter your cluster.
+
+Navigate to *Storage* -> *Secrets* as shown below:
+
+image::EIC-Secrets-Menu.png[title=Secrets Menu,scaledwidth=99%]
+
+++++
+
+++++
+
+Click the *Create* button in the top right corner.
+
+image::EIC-Secrets-Overview.png[title=Secrets Overview,scaledwidth=99%]
+
+A window will appear asking you to select the Secret type.
Select *Registry* as shown here: + +image::EIC-Secrets-Types.png[title=Secrets Type Selection,scaledwidth=99%] + +++++ + +++++ + +Enter a name such as _application-collection_ for the Secret. In the text box *Registry Domain Name*, enter _dp.apps.rancher.io_. +Enter your user name and password and click the *Create* button at the bottom right. + +image::EIC-Secret-Create.png[title=Secrets Creation Step,scaledwidth=99%] \ No newline at end of file diff --git a/adoc/SAP-EIC-Main.adoc b/adoc/SAP-EIC-Main.adoc index 3b3ee63b..c585eb9a 100644 --- a/adoc/SAP-EIC-Main.adoc +++ b/adoc/SAP-EIC-Main.adoc @@ -146,57 +146,6 @@ You must log in to {rac}. This can be done as follows: $ helm registry login dp.apps.rancher.io/charts -u -p ---- - -[#imagePullSecret] -=== Creating an imagePullSecret - -To make the resources available for deployment, you need to create an imagePullSecret. -In this guide we use the name _application-collection_ for it. - -==== Creating an imagePullSecret using kubectl - -Using `kubectl` to create the imagePullSecret is quite easy. -Get your user name and your access token for the {rac}. -Then run: - ----- -$ kubectl -n create secret docker-registry application-collection --docker-server=dp.apps.rancher.io --docker-username= --docker-password= ----- - -As secrets are namespace sensitive, you'll need to create this for every namespace needed. -If you're deploying {metallb}, {redis} and {pg} as described in this guide, you'll need to create the imagePullSecret in the related namespaces, -which means, you'll have three secrets with the same content but in three different namespaces. - -==== Creating an imagePullSecret using {rancher} - -You can also create an imagePullSecret using {rancher}. -Therefore, open {rancher} and enter your cluster. - -Navigate to *Storage* -> *Secrets* as shown below: - -image::EIC-Secrets-Menu.png[title=Secrets Menu,scaledwidth=99%] - -++++ - -++++ - -Click the *Create* button in the top right corner. - -image::EIC-Secrets-Overview.png[title=Secrets Overview,scaledwidth=99%] - -A window will appear asking you to select the Secret type. Select *Registry* as shown here: - -image::EIC-Secrets-Types.png[title=Secrets Type Selection,scaledwidth=99%] - -++++ - -++++ - -Enter a name such as _application-collection_ for the Secret. In the text box *Registry Domain Name*, enter _dp.apps.rancher.io_. -Enter your user name and password and click the *Create* button at the bottom right. - -image::EIC-Secret-Create.png[title=Secrets Creation Step,scaledwidth=99%] - ++++ ++++ @@ -255,6 +204,13 @@ to install {eic} in your prepared environments. [#Appendix] == Appendix +include::SAP-EIC-ImagePullSecrets.adoc[leveloffset=+2] + +++++ + +++++ + +[#selfSignedCertificates] === Using self-signed certificates In this chapter we will explain how to create self-signed certificates and how to make them available within Kubernetes. diff --git a/adoc/SAP-EIC-PostgreSQL.adoc b/adoc/SAP-EIC-PostgreSQL.adoc index ed6c1120..a16cf5e0 100644 --- a/adoc/SAP-EIC-PostgreSQL.adoc +++ b/adoc/SAP-EIC-PostgreSQL.adoc @@ -21,7 +21,7 @@ First we need to create a namespace and the *imagePullSecret* for installing the kubectl create namespace postgresql ---- -How to create the *imagePullSecret* is described in the Section xref:SAP-EIC-Main.adoc#imagePullSecret[]. +How to create the *imagePullSecret* is described in the Section xref:SAP-EIC-ImagePullSecrets.adoc#imagePullSecret[]. ===== Create Secret with certificates Second we need to create the Kubernetes secret with the certificates. 
You will find an example how to to dis in the xref:SAP-EIC-Main.adoc#Appendix[].
+Second we need to create the Kubernetes secret with the certificates. You will find an example how to do this in the xref:SAP-EIC-Main.adoc#selfSignedCertificates[].

 ===== Installing the application

From 4e5e7975d75a63e307a2cb8a270fb631004a3848 Mon Sep 17 00:00:00 2001
From: Kevin Klinger
Date: Mon, 29 Jul 2024 14:12:45 +0200
Subject: [PATCH 15/48] Remove unused anchor

---
 adoc/SAP-Rancher-RKE2-Installation.adoc | 2 --
 1 file changed, 2 deletions(-)

diff --git a/adoc/SAP-Rancher-RKE2-Installation.adoc b/adoc/SAP-Rancher-RKE2-Installation.adoc
index 1347ce44..ac0a44ef 100644
--- a/adoc/SAP-Rancher-RKE2-Installation.adoc
+++ b/adoc/SAP-Rancher-RKE2-Installation.adoc
@@ -28,8 +28,6 @@ In the next step, make sure to select a Kubernetes version that is supported by
 ++++

-
-[#nmetallb]
 If you don't have any further requirements to Kubernetes, you can click the "Create" button at the very bottom.
In any other cases talk to your administrators before making adjustements. From 8b87f17827fa1df833ccf26c25ce4941d1e2f8cf Mon Sep 17 00:00:00 2001 From: Kevin Klinger Date: Mon, 29 Jul 2024 14:21:06 +0200 Subject: [PATCH 16/48] Improve reusability --- adoc/SAP-EIC-Main.adoc | 6 +++--- adoc/SAP-EIC-Metallb.adoc | 10 +++++----- adoc/SAP-EIC-PostgreSQL.adoc | 8 ++++---- adoc/SAP-EIC-Redis.adoc | 4 ++-- 4 files changed, 14 insertions(+), 14 deletions(-) diff --git a/adoc/SAP-EIC-Main.adoc b/adoc/SAP-EIC-Main.adoc index c585eb9a..f517265d 100644 --- a/adoc/SAP-EIC-Main.adoc +++ b/adoc/SAP-EIC-Main.adoc @@ -154,7 +154,7 @@ $ helm registry login dp.apps.rancher.io/charts -u -p This chapter is intended to walk you through installing and configuring {metallb} on your Kubernetes cluster used for {eic}. -include::SAP-EIC-Metallb.adoc[Metallb] +include::SAP-EIC-Metallb.adoc[Metallb, leveloffset=2] ++++ ++++ @@ -171,7 +171,7 @@ For more information about persistence in {redis}, see https://redis.io/docs/management/persistence/. -include::SAP-EIC-Redis.adoc[] +include::SAP-EIC-Redis.adoc[leveloffset=2] ++++ @@ -184,7 +184,7 @@ include::SAP-EIC-Redis.adoc[] Before deploying {pg}, ensure that the requirements described at https://me.sap.com/notes/3247839 are met. -include::SAP-EIC-PostgreSQL.adoc[] +include::SAP-EIC-PostgreSQL.adoc[leveloffset=2] ++++ diff --git a/adoc/SAP-EIC-Metallb.adoc b/adoc/SAP-EIC-Metallb.adoc index 0a6aadc2..c72c42c0 100644 --- a/adoc/SAP-EIC-Metallb.adoc +++ b/adoc/SAP-EIC-Metallb.adoc @@ -1,10 +1,10 @@ -==== Installation and Configuration of {metallb} +== Installation and Configuration of {metallb} There are multiple ways to install the {metallb} software. In this guide we'll cover how to install {metallb} using kubectl or Helm. A complete overview and more details about {metallb} can be found on their link:https://metallb.universe.tf/[official website] -===== Pre-requisites +=== Pre-requisites Before starting the installation, make sure you meet all the requirements. In particular, you should pay attention to network addon compatibility. If you are trying to run {metallb} on a cloud platform, you should also look at the cloud compatibility page and make sure your cloud platform can work with {metallb} (most cannot). @@ -14,12 +14,12 @@ There are several ways to deploy {metallb}. In this guide we'll describe how to Please make sure to have a range of IP addresses available for configuring {metallb}. -===== Installation of {metallb} +=== Installation of {metallb} To install {metallb} run the following lines in your terminal: ---- -$ helm pull oci://dp.apps.rancher.io/charts/metallb --untar +$ helm pull oci://dp.apps.rancher.io/charts/metallb --version=0.14.7 --untar $ helm install --namespace=metallb --set-json 'imagePullSecrets=[{"name":"application-collection"}]' --create-namespace metallb ./metallb ---- @@ -27,7 +27,7 @@ $ helm install --namespace=metallb --set-json 'imagePullSecrets=[{"name":"applic ++++ -==== Configuration +== Configuration {metallb} needs two configurations to function properly: diff --git a/adoc/SAP-EIC-PostgreSQL.adoc b/adoc/SAP-EIC-PostgreSQL.adoc index c8234fd3..ab30784a 100644 --- a/adoc/SAP-EIC-PostgreSQL.adoc +++ b/adoc/SAP-EIC-PostgreSQL.adoc @@ -11,11 +11,11 @@ In this guide we'll describe one variant of installing {pg}. There are other possible ways to setup {pg} which are not focussed in this guide. It is also possible to install {pg} as a single instance on top of our operation system. 
We will focus on installing {pg} into a kubernetes cluster, because we also need a {redis} database and we will put them together into one cluster. -==== Deploying {pg} +== Deploying {pg} Even though {pg} is available for deployment using the {rancher} Apps, we recommend to use the {rac}. The {pg} chart can be found at https://apps.rancher.io/applications/postgresql. -==== Create Secret for {rac} +== Create Secret for {rac} First we need to create a namespace and the *imagePullSecret* for installing the {pg} database into the cluster. ---- kubectl create namespace postgresql @@ -23,10 +23,10 @@ kubectl create namespace postgresql How to create the *imagePullSecret* is described in the Section xref:SAP-EIC-ImagePullSecrets.adoc#imagePullSecret[]. -===== Create Secret with certificates +=== Create Secret with certificates Second we need to create the Kubernetes secret with the certificates. You will find an example how to to this in the xref:SAP-EIC-Main.adoc#selfSignedCertificates[]. -===== Installing the application +=== Installing the application You will need to login to the {rac} which can be done like: ---- diff --git a/adoc/SAP-EIC-Redis.adoc b/adoc/SAP-EIC-Redis.adoc index f835c965..aadc2b7f 100644 --- a/adoc/SAP-EIC-Redis.adoc +++ b/adoc/SAP-EIC-Redis.adoc @@ -16,7 +16,7 @@ link:https://redis.io/docs/management/sentinel/[Sentinel] instead of link:https://redis.io/docs/management/scaling/[Cluster] -==== Deploying Redis +== Deploying Redis Even though {redis} is available for deployment using the {rancher} Apps, we recommend to use the {rac}. The {redis} chart can be found at https://apps.rancher.io/applications/redis . @@ -26,7 +26,7 @@ The {redis} chart can be found at https://apps.rancher.io/applications/redis . ++++ -===== Deploy the chart +=== Deploy the chart If you want to use self signed certificates, you can find instructions how to create such in xref:SAP-EIC-ImagePullSecrets.adoc#imagePullSecret[] From d5b23d2c128ae82239a780a733e03376b8d392e4 Mon Sep 17 00:00:00 2001 From: Kevin Klinger Date: Mon, 29 Jul 2024 15:15:32 +0200 Subject: [PATCH 17/48] Improve readability by using cross links --- adoc/SAP-EIC-ImagePullSecrets.adoc | 9 +++++++++ adoc/SAP-EIC-Metallb.adoc | 8 ++++++++ adoc/SAP-EIC-PostgreSQL.adoc | 3 ++- adoc/SAP-EIC-Redis.adoc | 12 +++++++++++- adoc/SAPDI3-Rancher.adoc | 7 +++---- 5 files changed, 33 insertions(+), 6 deletions(-) diff --git a/adoc/SAP-EIC-ImagePullSecrets.adoc b/adoc/SAP-EIC-ImagePullSecrets.adoc index 645f6f58..fd59713e 100644 --- a/adoc/SAP-EIC-ImagePullSecrets.adoc +++ b/adoc/SAP-EIC-ImagePullSecrets.adoc @@ -16,6 +16,15 @@ $ kubectl -n create secret docker-registry application-collection -- As secrets are namespace sensitive, you'll need to create this for every namespace needed. +ifdef::eic[] +The related secret can then be used for the components: + +* xref:SAPDI3-Rancher.adoc#rancherIBS[Cert-Manager] +* xref:SAP-EIC-Metallb.adoc#metalIBS[MetalLB] +* xref:SAP-EIC-Redis.adoc#redisIPS[Redis] +* xref:SAP-EIC-PostgreSQL.adoc#pgIPS[PostgreSQL] +endif::[] + ++++ ++++ diff --git a/adoc/SAP-EIC-Metallb.adoc b/adoc/SAP-EIC-Metallb.adoc index c72c42c0..643e13c3 100644 --- a/adoc/SAP-EIC-Metallb.adoc +++ b/adoc/SAP-EIC-Metallb.adoc @@ -13,6 +13,14 @@ There are several ways to deploy {metallb}. In this guide we'll describe how to Please make sure to have a range of IP addresses available for configuring {metallb}. +Before you can deploy {metallb} from {rac}, you need to create the namespace and an ImagePullSecret. 
+To create the related namespace, run: +---- +$ kubectl create namespace metallb +---- + +[#metalIBS] +Instructions how to create the *imagePullSecret* can be found in xref:SAP-EIC-ImagePullSecrets.adoc#imagePullSecret[] === Installation of {metallb} diff --git a/adoc/SAP-EIC-PostgreSQL.adoc b/adoc/SAP-EIC-PostgreSQL.adoc index ab30784a..4d0b8dd8 100644 --- a/adoc/SAP-EIC-PostgreSQL.adoc +++ b/adoc/SAP-EIC-PostgreSQL.adoc @@ -18,9 +18,10 @@ The {pg} chart can be found at https://apps.rancher.io/applications/postgresql. == Create Secret for {rac} First we need to create a namespace and the *imagePullSecret* for installing the {pg} database into the cluster. ---- -kubectl create namespace postgresql +$ kubectl create namespace postgresql ---- +[#pgIPS] How to create the *imagePullSecret* is described in the Section xref:SAP-EIC-ImagePullSecrets.adoc#imagePullSecret[]. === Create Secret with certificates diff --git a/adoc/SAP-EIC-Redis.adoc b/adoc/SAP-EIC-Redis.adoc index aadc2b7f..000da39c 100644 --- a/adoc/SAP-EIC-Redis.adoc +++ b/adoc/SAP-EIC-Redis.adoc @@ -28,7 +28,17 @@ The {redis} chart can be found at https://apps.rancher.io/applications/redis . === Deploy the chart -If you want to use self signed certificates, you can find instructions how to create such in xref:SAP-EIC-ImagePullSecrets.adoc#imagePullSecret[] +To deploy the chart you'll need to create the related namespace and *imagePullSecret* first. +To create the namespace, run: +---- +$ kubectl create namespace redis +---- + +[#redisIPS] +Instructions how to create the *imagePullSecret* can be found in xref:SAP-EIC-ImagePullSecrets.adoc#imagePullSecret[] + + +If you want to use self signed certificates, you can find instructions how to create such in xref:SAP-EIC-Main.adoc#selfSignedCertificates[] Create a file *values.yaml* which holds some configuration for the {redis} Helm chart. The config may look like: diff --git a/adoc/SAPDI3-Rancher.adoc b/adoc/SAPDI3-Rancher.adoc index 88415477..eea8ec1a 100644 --- a/adoc/SAPDI3-Rancher.adoc +++ b/adoc/SAPDI3-Rancher.adoc @@ -162,14 +162,13 @@ Even though cert-manager is available for deployment using the {rancher} Apps, w ==== Create Secret for {rac} First we need to create a namespace and the *imagePullSecret* for installing the cert-manager. - - +To create the namespace, run: ---- -kubectl create namespace cert-manager -kubectl -n cert-manager create secret docker-registry application-collection --docker-server=dp.apps.rancher.io --docker-login= --docker-password= +$ kubectl create namespace cert-manager ---- +[#rancherIBS] How to create the *imagePullSecret* is described in the xref:SAP-EIC-ImagePullSecrets.adoc#imagePullSecret[]. 
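Once the namespace and pull secret from the hunk above exist, the cert-manager deployment from patches 03 and 06 can be verified end to end. A minimal sketch; the release name and namespace follow the series, nothing else is assumed:

----
$ kubectl -n cert-manager get secret application-collection   # the pull secret is in place
$ helm -n cert-manager status cert-manager                    # the release should be "deployed"
$ kubectl -n cert-manager get pods                            # all pods should reach Running
----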
From c940dd3856aa5dd934450efc5338eeeb8964c61f Mon Sep 17 00:00:00 2001
From: Kevin Klinger
Date: Mon, 29 Jul 2024 15:21:58 +0200
Subject: [PATCH 18/48] Fix SAPDI3-RKE2 guide

---
 adoc/SAPDI3-RKE2-Install.adoc | 1 +
 1 file changed, 1 insertion(+)

diff --git a/adoc/SAPDI3-RKE2-Install.adoc b/adoc/SAPDI3-RKE2-Install.adoc
index 13b6e1f0..57bac772 100644
--- a/adoc/SAPDI3-RKE2-Install.adoc
+++ b/adoc/SAPDI3-RKE2-Install.adoc
@@ -11,6 +11,7 @@
 :harvester: Harvester
 :k8s: Kubernetes
 :vmw: VMware
+:rac: Rancher Application Collection

 = {di} 3 on Rancher Kubernetes Engine 2

From 65fcd76385577f2317b6d9b842cbf3b7826fd38c Mon Sep 17 00:00:00 2001
From: Kevin Klinger
Date: Tue, 30 Jul 2024 10:33:39 +0200
Subject: [PATCH 19/48] Improve landscape overview

---
 adoc/SAP-EIC-Main.adoc                  | 24 +++++--
 images/src/svg/SAP-EIC-Architecture.svg | 90 +++++++++++++++----------
 2 files changed, 71 insertions(+), 43 deletions(-)

diff --git a/adoc/SAP-EIC-Main.adoc b/adoc/SAP-EIC-Main.adoc
index f517265d..7f0a47eb 100644
--- a/adoc/SAP-EIC-Main.adoc
+++ b/adoc/SAP-EIC-Main.adoc
@@ -81,17 +81,28 @@ Furthermore:

 == Landscape Overview

-The following picture shows the landscape overview:
+To run {eic} in a production-ready and supported way, you'll need to set up multiple Kubernetes clusters and their nodes.
+Those comprise a Kubernetes cluster where you'll install {rancher} to set up and manage the production and non-production clusters.
+For this {rancher} cluster, we recommend using 3 Kubernetes nodes and a load balancer.
+
+The {eic} will need to run in a dedicated Kubernetes cluster.
+For an HA setup of this cluster, we recommend using 3 Kubernetes Control Plane and 3 Kubernetes Worker nodes.
+
+To give you a graphical overview of what's needed, please take a look at the landscape overview:

 image::SAP-EIC-Architecture.svg[scaledwidth=99%,opts=inline,Embedded]

-The dark blue rectangles represent Kubernetes clusters. +
-The olive rectangles represent Kubernetes nodes that hold the roles of Control Plane and Worker combined. +
-The green rectangles represent Kubernetes Control Plane nodes. +
-The orange rectangles represent Kubernetes Worker nodes.
+* The dark blue rectangles represent Kubernetes clusters.
+* The olive rectangles represent Kubernetes nodes that hold the roles of Control Plane and Worker combined.
+* The green rectangles represent Kubernetes Control Plane nodes.
+* The orange rectangles represent Kubernetes Worker nodes.

-In this document we'll guide you through the installation of the required clusters.
+We'll use this graphical overview through the guide to visualize what's the next step and what it's for.

+++++
+
+++++

 == Installing {slem} {slem_version}

 There are several ways to install {slem} {slem_version}. For this best practice guide, we use the installation method via graphical installer. But in cloud-native deployments it is highly recommended to use Infrastructure as Code technologies to fully automate the deployment and lifecycle processes.
 include::SAP-EIC-SLEMicro.adoc[SLEMicro]

 ++++
 ++++

-//TODO check dependencies of other doc files to adjust header hierarchy
 include::SAPDI3-Rancher.adoc[Rancher]

 ++++
 ++++

diff --git a/images/src/svg/SAP-EIC-Architecture.svg b/images/src/svg/SAP-EIC-Architecture.svg
index 52a3732f..93fdcde4 100644
--- a/images/src/svg/SAP-EIC-Architecture.svg
+++ b/images/src/svg/SAP-EIC-Architecture.svg
[The SVG markup changes are not reproduced here: the landscape diagram is restyled and gains arrowhead markers on the "Creates / Manages" connectors; the cluster and node labels (Rancher Cluster, Production Cluster, QA / Dev Cluster, Control Plane, Worker) are unchanged.]

From f4d3b8f9fbc474f7639de03f464bbfec20173037 Mon Sep 17 00:00:00 2001
From: Kevin Klinger
Date: Thu, 1 Aug 2024 14:27:33 +0200
Subject: [PATCH 20/48] Improve reusability of Rancher chapter

---
 adoc/SAP-EIC-Main.adoc                 | 3 +++
 adoc/SAPDI3-RKE2-Install.adoc          | 2 ++
 adoc/SAPDI3-Rancher.adoc               | 2 --
 adoc/SAPDI3-SUSE_Kubernetes_Stack.adoc | 2 ++
 4 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/adoc/SAP-EIC-Main.adoc b/adoc/SAP-EIC-Main.adoc
index 7f0a47eb..464ac3c8 100644
--- a/adoc/SAP-EIC-Main.adoc
+++ b/adoc/SAP-EIC-Main.adoc
@@ -113,6 +113,9 @@ include::SAP-EIC-SLEMicro.adoc[SLEMicro]

 ++++

+== Installing {rancher}
+
+
 include::SAPDI3-Rancher.adoc[Rancher]

 ++++
diff --git a/adoc/SAPDI3-RKE2-Install.adoc b/adoc/SAPDI3-RKE2-Install.adoc
index 57bac772..8f51eb33 100644
--- a/adoc/SAPDI3-RKE2-Install.adoc
+++ b/adoc/SAPDI3-RKE2-Install.adoc
@@ -37,6 +37,8 @@ One runs {rancher} Management server and the other runs the actual workload, whi

 include::SAPDI3-Requirements.adoc[Requirements]

+== Installing {rancher}
+
 include::SAPDI3-Rancher.adoc[Rancher]

 include::SAPDI3-Longhorn.adoc[Longhorn]
diff --git a/adoc/SAPDI3-Rancher.adoc b/adoc/SAPDI3-Rancher.adoc
index eea8ec1a..de06ae46 100644
--- a/adoc/SAPDI3-Rancher.adoc
+++ b/adoc/SAPDI3-Rancher.adoc
@@ -1,7 +1,5 @@
 [#Rancher]

-== Installing {rancher}
-
 === Preparation

 In order to have an high available {rancher} setup, you'll need a load balancer for you {rancher} nodes.
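For context on the Preparation section touched above: once /etc/rancher/rke2/config.yaml is in place on a node (see patches 03 and 27), the cluster is brought up by enabling the rke2-server service, and the join of the remaining nodes can be watched from the first one. A sketch using the RKE2 default paths; the paths themselves are not spelled out in the patches:

----
# systemctl enable rke2-server --now
# /var/lib/rancher/rke2/bin/kubectl --kubeconfig /etc/rancher/rke2/rke2.yaml get nodes
----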
diff --git a/adoc/SAPDI3-SUSE_Kubernetes_Stack.adoc b/adoc/SAPDI3-SUSE_Kubernetes_Stack.adoc index 697c104b..4f2a7185 100644 --- a/adoc/SAPDI3-SUSE_Kubernetes_Stack.adoc +++ b/adoc/SAPDI3-SUSE_Kubernetes_Stack.adoc @@ -63,6 +63,8 @@ https://docs.harvesterhci.io/v1.0/rancher/rancher-integration#rancher--harvester include::SAPDI3-Harvester-Installation.adoc[Harvester] +== Installing {rancher} + include::SAPDI3-Rancher.adoc[Rancher] include::SAPDI3-Harvester-Rancher.adoc[Harvester-Rancher] From b48fa5c4a1bde8ec842ace084b005fb486442cc0 Mon Sep 17 00:00:00 2001 From: Kevin Klinger Date: Thu, 1 Aug 2024 15:21:50 +0200 Subject: [PATCH 21/48] Fix typo --- adoc/SAP-EIC-ImagePullSecrets.adoc | 4 ++-- adoc/SAP-EIC-Metallb.adoc | 2 +- adoc/SAPDI3-Rancher.adoc | 2 +- 3 files changed, 4 insertions(+), 4 deletions(-) diff --git a/adoc/SAP-EIC-ImagePullSecrets.adoc b/adoc/SAP-EIC-ImagePullSecrets.adoc index fd59713e..2171b4ec 100644 --- a/adoc/SAP-EIC-ImagePullSecrets.adoc +++ b/adoc/SAP-EIC-ImagePullSecrets.adoc @@ -19,8 +19,8 @@ As secrets are namespace sensitive, you'll need to create this for every namespa ifdef::eic[] The related secret can then be used for the components: -* xref:SAPDI3-Rancher.adoc#rancherIBS[Cert-Manager] -* xref:SAP-EIC-Metallb.adoc#metalIBS[MetalLB] +* xref:SAPDI3-Rancher.adoc#rancherIPS[Cert-Manager] +* xref:SAP-EIC-Metallb.adoc#metalIPS[MetalLB] * xref:SAP-EIC-Redis.adoc#redisIPS[Redis] * xref:SAP-EIC-PostgreSQL.adoc#pgIPS[PostgreSQL] endif::[] diff --git a/adoc/SAP-EIC-Metallb.adoc b/adoc/SAP-EIC-Metallb.adoc index 643e13c3..c8228572 100644 --- a/adoc/SAP-EIC-Metallb.adoc +++ b/adoc/SAP-EIC-Metallb.adoc @@ -19,7 +19,7 @@ To create the related namespace, run: $ kubectl create namespace metallb ---- -[#metalIBS] +[#metalIPS] Instructions how to create the *imagePullSecret* can be found in xref:SAP-EIC-ImagePullSecrets.adoc#imagePullSecret[] === Installation of {metallb} diff --git a/adoc/SAPDI3-Rancher.adoc b/adoc/SAPDI3-Rancher.adoc index de06ae46..d6154afb 100644 --- a/adoc/SAPDI3-Rancher.adoc +++ b/adoc/SAPDI3-Rancher.adoc @@ -166,7 +166,7 @@ To create the namespace, run: $ kubectl create namespace cert-manager ---- -[#rancherIBS] +[#rancherIPS] How to create the *imagePullSecret* is described in the xref:SAP-EIC-ImagePullSecrets.adoc#imagePullSecret[]. From aaef2011757a6637adaac75cfd4783b219800b20 Mon Sep 17 00:00:00 2001 From: Kevin Klinger Date: Thu, 1 Aug 2024 16:46:54 +0200 Subject: [PATCH 22/48] Using landscape overview in doc --- adoc/SAP-EIC-Main.adoc | 26 ++- images/src/svg/SAP-EIC-Architecture-RKE2.svg | 159 +++++++++++++++++ .../src/svg/SAP-EIC-Architecture-Rancher.svg | 160 ++++++++++++++++++ images/src/svg/SAP-EIC-Architecture.svg | 92 +++++----- 4 files changed, 396 insertions(+), 41 deletions(-) create mode 100644 images/src/svg/SAP-EIC-Architecture-RKE2.svg create mode 100644 images/src/svg/SAP-EIC-Architecture-Rancher.svg diff --git a/adoc/SAP-EIC-Main.adoc b/adoc/SAP-EIC-Main.adoc index 464ac3c8..3cfa4575 100644 --- a/adoc/SAP-EIC-Main.adoc +++ b/adoc/SAP-EIC-Main.adoc @@ -97,9 +97,11 @@ image::SAP-EIC-Architecture.svg[scaledwidth=99%,opts=inline,Embedded] * The green rectangles represent Kubernetes Control Plane nodes. * The orange rectangles represent Kubernetes Worker nodes. -We'll use this graphical overview through the guide to visualize what's the next step and what it's for. +We'll use this graphical overview through the guide to visualize what's the next step and what it's for. 
+We'll use this graphical overview through the guide to visualize what's the next step and what it's for.
+Starting with the installation of the operating system of each machine / Kubernetes node, we'll guide you through every step to take to get a fully set-up Kubernetes landscape ready for the deployment of {eic}.
+
 ++++

 ++++

 == Installing {rancher}

+By now you should have installed the operating system on every Kubernetes node.
+You're now ready to install a {rancher} cluster.
+Taking another look at the landscape overview, this means we'll now cover how to set up the upper part of the given graphic:
+
+image::SAP-EIC-Architecture-Rancher.svg[scaledwidth=99%,opts=inline,Embedded]
+
+++++
+
+++++

 include::SAPDI3-Rancher.adoc[Rancher]

 == Installing RKE2 using {rancher}
+
+After installing the {rancher} cluster, we can now use it to create the {rke} clusters for {eic}.
+SAP recommends setting up not only a production landscape, but also QA / Dev systems for {eic}. Both can be set up the same way using {rancher}.
+How to do this is covered in this chapter.
+Taking another look at the landscape overview, this means we'll now cover how to set up the lower part of the given graphic:
+
+image::SAP-EIC-Architecture-RKE2.svg[scaledwidth=99%,opts=inline,Embedded]
+
+++++
+
+++++
+
+
 include::SAP-Rancher-RKE2-Installation.adoc[]

 ++++

diff --git a/images/src/svg/SAP-EIC-Architecture-RKE2.svg b/images/src/svg/SAP-EIC-Architecture-RKE2.svg
new file mode 100644
index 00000000..b88f1e28
--- /dev/null
+++ b/images/src/svg/SAP-EIC-Architecture-RKE2.svg
@@ -0,0 +1,159 @@
[The added SVG markup is not reproduced here: it is the shared landscape diagram with the Production and QA / Dev RKE2 clusters highlighted as the current step.]

diff --git a/images/src/svg/SAP-EIC-Architecture-Rancher.svg b/images/src/svg/SAP-EIC-Architecture-Rancher.svg
new file mode 100644
index 00000000..6b784d5d
--- /dev/null
+++ b/images/src/svg/SAP-EIC-Architecture-Rancher.svg
@@ -0,0 +1,160 @@
[The added SVG markup is not reproduced here: it is the shared landscape diagram with the Rancher cluster highlighted as the current step.]

diff --git a/images/src/svg/SAP-EIC-Architecture.svg b/images/src/svg/SAP-EIC-Architecture.svg
index 93fdcde4..e006c921 100644
--- a/images/src/svg/SAP-EIC-Architecture.svg
+++ b/images/src/svg/SAP-EIC-Architecture.svg
[The SVG markup changes are not reproduced here: styling adjustments to the shared architecture diagram; the cluster and node labels are unchanged.]
Control Plane - + Control Plane - Worker + Worker - Worker + Worker - Worker + Worker - + - + + marker-end="url(#arrowhead)"/> - + - - + marker-end="url(#arrowhead)"/> + \ No newline at end of file From 101a73997ae27d626d7e670bc8e3679c5b460058 Mon Sep 17 00:00:00 2001 From: Kevin Klinger Date: Thu, 1 Aug 2024 16:47:08 +0200 Subject: [PATCH 23/48] Remove SLEM version from docinfo --- adoc/SAP-EIC-Main-docinfo.xml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/adoc/SAP-EIC-Main-docinfo.xml b/adoc/SAP-EIC-Main-docinfo.xml index aa0861a9..c96636c4 100644 --- a/adoc/SAP-EIC-Main-docinfo.xml +++ b/adoc/SAP-EIC-Main-docinfo.xml @@ -29,7 +29,7 @@ Longhorn --> -SUSE Linux Enterprise Micro 5.4 +SUSE Linux Enterprise Micro Rancher Kubernetes Engine 2 Longhorn Rancher Prime From f5e31095c22699057ff0e4d230e584cfe579a54a Mon Sep 17 00:00:00 2001 From: Kevin Klinger Date: Fri, 2 Aug 2024 09:14:26 +0200 Subject: [PATCH 24/48] Remove not needed page break --- adoc/SAP-EIC-ImagePullSecrets.adoc | 4 ---- 1 file changed, 4 deletions(-) diff --git a/adoc/SAP-EIC-ImagePullSecrets.adoc b/adoc/SAP-EIC-ImagePullSecrets.adoc index 2171b4ec..088de46d 100644 --- a/adoc/SAP-EIC-ImagePullSecrets.adoc +++ b/adoc/SAP-EIC-ImagePullSecrets.adoc @@ -50,10 +50,6 @@ A window will appear asking you to select the Secret type. Select *Registry* as image::EIC-Secrets-Types.png[title=Secrets Type Selection,scaledwidth=99%] -++++ - -++++ - Enter a name such as _application-collection_ for the Secret. In the text box *Registry Domain Name*, enter _dp.apps.rancher.io_. Enter your user name and password and click the *Create* button at the bottom right. From c2a54719cd7e74501ba3ff5a9ddfbcbca278c1c6 Mon Sep 17 00:00:00 2001 From: Kevin Klinger Date: Fri, 2 Aug 2024 10:07:41 +0200 Subject: [PATCH 25/48] Fix typo --- adoc/SAP-Rancher-RKE2-Installation.adoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/adoc/SAP-Rancher-RKE2-Installation.adoc b/adoc/SAP-Rancher-RKE2-Installation.adoc index ac0a44ef..31587967 100644 --- a/adoc/SAP-Rancher-RKE2-Installation.adoc +++ b/adoc/SAP-Rancher-RKE2-Installation.adoc @@ -36,7 +36,7 @@ Once you've clicked the "Create" button, you should see a screen like this: image::SAP-Rancher-Create-Register.png[title=Rancher create registration,scaledwidth=99%] In the first step here, select the roles your node(s) should receive. 
-A common high avaiability setup holds:
+A common high availability setup holds:

* 3 x etcd / control plane nodes
* 3 x worker nodes

From a3490525fc0330fcdf6776013f8c207eda652f17 Mon Sep 17 00:00:00 2001
From: Dominik_Mathern
Date: Tue, 6 Aug 2024 09:23:50 +0200
Subject: [PATCH 26/48] Update SAP-EIC-Main.adoc

---
 adoc/SAP-EIC-Main.adoc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/adoc/SAP-EIC-Main.adoc b/adoc/SAP-EIC-Main.adoc
index 3cfa4575..b7da129a 100644
--- a/adoc/SAP-EIC-Main.adoc
+++ b/adoc/SAP-EIC-Main.adoc
@@ -1,5 +1,5 @@
 :docinfo:
-
+//test
 // defining article ID

 [#art-sap-eic-slemicro54]

From a56c2f3a6c77230761ffb1afbe662f479cf2c64d Mon Sep 17 00:00:00 2001
From: Dominik_Mathern
Date: Tue, 6 Aug 2024 09:30:15 +0200
Subject: [PATCH 27/48] extend Longhorn and Rancher Prime

---
 adoc/SAPDI3-Longhorn.adoc | 15 ++++++++++++++-
 adoc/SAPDI3-Rancher.adoc  | 16 ++++++++++++----
 2 files changed, 26 insertions(+), 5 deletions(-)

diff --git a/adoc/SAPDI3-Longhorn.adoc b/adoc/SAPDI3-Longhorn.adoc
index 13dcd42b..c10d9043 100644
--- a/adoc/SAPDI3-Longhorn.adoc
+++ b/adoc/SAPDI3-Longhorn.adoc
@@ -33,6 +33,18 @@ https://longhorn.io/docs/{lh_version}/deploy/install/install-with-rancher/

 === Installing {lh} using Helm

+ifdef::slem[]
+----
+$ helm repo add rancher-v2.8-charts https://raw.githubusercontent.com/rancher/charts/release-v2.8
+$ helm repo update
+$ helm upgrade --install longhorn-crd rancher-v2.8-charts/longhorn-crd \
+--namespace longhorn-system \
+--create-namespace
+$ helm upgrade --install longhorn rancher-v2.8-charts/longhorn \
+--namespace longhorn-system
+----
+endif::[]
+
 To install Longhorn using Helm, run the following commands:
 ----
 $ helm repo add longhorn https://charts.longhorn.io
@@ -42,6 +54,7 @@ $ helm install longhorn longhorn/longhorn --namespace longhorn-system --create-n

 These commands will add the Longhorn Helm charts to the list of Helm repositories, update the Helm repository, and execute the installation of Longhorn.

+ifndef::slem[]
 === Installing {lh} using `kubectl`

 You can install {lh} using `kubectl` with the following command:
@@ -100,6 +113,6 @@ EOF
 $ kubectl -n longhorn-system apply -f longhorn-ingress.yaml
 ----

-
+endif::[]

 For more details, visit https://longhorn.io/docs/{lh_version}/deploy/accessing-the-ui/longhorn-ingress/.
\ No newline at end of file
diff --git a/adoc/SAPDI3-Rancher.adoc b/adoc/SAPDI3-Rancher.adoc
index d6154afb..600debaf 100644
--- a/adoc/SAPDI3-Rancher.adoc
+++ b/adoc/SAPDI3-Rancher.adoc
@@ -104,6 +104,7 @@ On the first master node:
 # mkdir -p /etc/rancher/rke2
 # cat < /etc/rancher/rke2/config.yaml
 token: 'your cluster token'
+system-default-registry: registry.rancher.com
 tls-san:
 - FQDN of fixed registration address on load balancer
 - other hostname
@@ -116,6 +117,7 @@ Create configuration files for additional cluster nodes:
 # cat < /etc/rancher/rke2/config.yaml
 server: https://"FQDN of registration address":9345
 token: 'your cluster token'
+system-default-registry: registry.rancher.com
 tls-san:
 - FQDN of fixed registration address on load balancer
 - other hostname
@@ -123,6 +125,9 @@ tls-san:
 EOF
 ----

+IMPORTANT: You also need to take care of etcd snapshots and to perform backups of your Rancher instance. This is not covered in this document; you can find more information in our documentation.
+
+IMPORTANT: For security reasons, we generally recommend activating the CIS profile when installing RKE2. This is currently still being validated and will be included in the documentation at a later date.
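+
+While a full backup concept is beyond the scope of this document, RKE2 itself can take scheduled etcd snapshots. As a minimal sketch, the following server options could be added to /etc/rancher/rke2/config.yaml; the schedule and retention count are only example values and need to be adapted to your backup policy:
+
+[source, yaml]
+----
+# take an etcd snapshot every six hours and keep the last 20 of them
+etcd-snapshot-schedule-cron: "0 */6 * * *"
+etcd-snapshot-retention: 20
+----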
Now it is time to enable and start the RKE2 components and run on each cluster node: ---- @@ -149,7 +154,7 @@ For convenience, the `kubectl` binary can be added to the *$PATH* and the given In order to install {rancher} and some of its required components, you'll need to use Helm. -The easiest option to install Helm is to run: +One way to install Helm is to run: ---- # curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash ---- @@ -188,7 +193,10 @@ $ helm pull oci://dp.apps.rancher.io/charts/cert-manager --untar Install cert-manager: ---- -$ helm install --namespace cert-manager --set crds.enabled=true --set-json 'global.imagePullSecrets=[{"name":"application-collection"}]' cert-manager ./cert-manager +$ helm install --namespace cert-manager \ +--set crds.enabled=true \ +--set-json 'global.imagePullSecrets=[{"name":"application-collection"}]' \ +cert-manager ./cert-manager ---- @@ -200,7 +208,7 @@ $ helm install --namespace cert-manager --set crds.enabled=true --set-json 'glob To install {rancher}, you need to add the related Helm repository. To achieve that, use the following command: ---- -$ helm repo add rancher https://charts.rancher.com/server-charts/prime +$ helm repo add rancher-prime https://charts.rancher.com/server-charts/prime ---- As a next step, create the cattle-system namespace in Kubernetes as follows: @@ -210,7 +218,7 @@ $ kubectl create namespace cattle-system The Kubernetes cluster is now ready for the installation of {rancher}: ---- -$ helm install rancher rancher/rancher \ +$ helm install rancher rancher-prime/rancher \ --namespace cattle-system \ --set hostname= \ --set replicas=3 From 7a83f704b049b4174055c436ffc7f86ec4736f18 Mon Sep 17 00:00:00 2001 From: Dominik_Mathern Date: Thu, 15 Aug 2024 10:30:43 +0200 Subject: [PATCH 28/48] Added Version Table --- adoc/SAP-EIC-Main.adoc | 23 ++++++++++++++++++++--- 1 file changed, 20 insertions(+), 3 deletions(-) diff --git a/adoc/SAP-EIC-Main.adoc b/adoc/SAP-EIC-Main.adoc index b7da129a..fece4d66 100644 --- a/adoc/SAP-EIC-Main.adoc +++ b/adoc/SAP-EIC-Main.adoc @@ -45,12 +45,29 @@ $ that the command can be run by any user. ++++ +== Supported and used Versions +There are several versions for the different used software components available. So we want to show you our support matrix, which versions we use in our Best Practise Guide. + +[cols="1,1"] +|=== +|Product | Version +|SUSE Linux Enterprise Micro | 5.5 +|Rancher Kubernetes Engine | 1.28 +|Rancher Prime | 2.8.5 +|Longhorn | 1.6 +|cert-manager | 1.15 +|MetalLB | 0.14.7 +|PostgresSQL | 15.7 +|Redis | 7.2.5 +|=== + + == Preparations * Get subscriptions for: -** {slem} {slem_version} -** {rancher} {rancher_version} -** {lh} {lh_version} +** {slem} +** {rancher} +** {lh} ** {sle_ha} * +++*+++ Only needed if you want to setup {rancher} in a high available setup. From ca8f018bd304d839c838084c2eac3ec23b06e265 Mon Sep 17 00:00:00 2001 From: Dominik_Mathern Date: Thu, 15 Aug 2024 10:30:58 +0200 Subject: [PATCH 29/48] added new Loadbalancer config --- adoc/SAPDI3-Rancher.adoc | 87 +++++++++++++++++++++++++++++++++++++++- 1 file changed, 86 insertions(+), 1 deletion(-) diff --git a/adoc/SAPDI3-Rancher.adoc b/adoc/SAPDI3-Rancher.adoc index 600debaf..0299dd31 100644 --- a/adoc/SAPDI3-Rancher.adoc +++ b/adoc/SAPDI3-Rancher.adoc @@ -17,6 +17,91 @@ Setup a virtual machine or bare metal server with {sles} and the HA Extension or Create the configuration for haproxy. Here follows an example configuration file for haproxy, please adapt for the actual environment. 
+ +ifdef::slem[] +---- +# cat < /etc/haproxy/haproxy.cfg +global + log /dev/log local0 + log /dev/log local1 notice + chroot /var/lib/haproxy + # stats socket /run/haproxy/admin.sock mode 660 level admin + stats timeout 30s + user haproxy + group haproxy + daemon + + # general hardlimit for the process of connections to handle, this is separate to backend/listen + # Added in 'global' AND 'defaults'!!! - global affects only system limits (ulimit/maxsock) and defaults affects only listen/backend-limits - hez + maxconn 400000 + + # Default SSL material locations + ca-base /etc/ssl/certs + crt-base /etc/ssl/private + + tune.ssl.default-dh-param 2048 + + # Default ciphers to use on SSL-enabled listening sockets. + # For more information, see ciphers(1SSL). This list is from: + # https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/ + ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5: !DSS + ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets + +defaults + mode tcp + log global + option tcplog + option redispatch + option tcpka + option dontlognull + retries 2 + timeout connect 5s + timeout client 5s + timeout server 5s + timeout tunnel 86400s + maxconn 400000 + +listen stats + bind *:9000 + mode http + stats hide-version + stats uri /stats + +listen rancher_apiserver + bind my_lb_address:6443 + option httpchk GET /healthz + http-check expect status 401 + server mynode1 mynode1.domain.local:6443 check check-ssl verify none + server mynode2 mynode2.domain.local:6443 check check-ssl verify none + server mynode3 mynode3.domain.local:6443 check check-ssl verify none +listen rancher_register + bind my_lb_address:9345 + option httpchk GET /ping + http-check expect status 200 + server mynode1 mynode1.domain.local:9345 check check-ssl verify none + server mynode2 mynode2.domain.local:9345 check check-ssl verify none + server mynode3 mynode3.domain.local:9345 check check-ssl verify none + +listen rancher_ingress80 + bind my_lb_address:80 + option httpchk GET / + http-check expect status 404 + server mynode1 mynode1.domain.local:80 check + server mynode2 mynode2.domain.local:80 check + server mynode3 mynode3.domain.local:80 check + +listen rancher_ingress443 + bind my_lb_address:443 + option httpchk GET / + http-check expect status 404 + server mynode1 mynode1.domain.local:443 check check-ssl verify none + server mynode2 mynode2.domain.local:443 check check-ssl verify none + server mynode3 mynode3.domain.local:443 check check-ssl verify none +EOF +---- +endif::[] + +ifndef::slem[] ---- # cat < /etc/haproxy/haproxy.cfg global @@ -76,7 +161,7 @@ backend rke2serverbackend server mynode1 192.168.122.20:9345 check EOF ---- - +endif::[] Check the configuration file: ---- # haproxy -f /path/to/your/haproxy.conf -c From 9938481569ee425bedc79b839b87ab9c9833fd4f Mon Sep 17 00:00:00 2001 From: Dominik_Mathern Date: Thu, 15 Aug 2024 21:56:40 +0200 Subject: [PATCH 30/48] Outsource login registry and simplify helm install --- ...IC-LoginRegistryApplicationCollection.adoc | 18 +++ adoc/SAP-EIC-Main.adoc | 103 ++++++++++++++++-- adoc/SAP-EIC-Metallb.adoc | 17 ++- adoc/SAP-EIC-PostgreSQL.adoc | 12 +- adoc/SAP-EIC-Redis.adoc | 7 +- 5 files changed, 133 insertions(+), 24 deletions(-) create mode 100644 adoc/SAP-EIC-LoginRegistryApplicationCollection.adoc diff --git a/adoc/SAP-EIC-LoginRegistryApplicationCollection.adoc b/adoc/SAP-EIC-LoginRegistryApplicationCollection.adoc new file mode 100644 index 
00000000..eba0b019 --- /dev/null +++ b/adoc/SAP-EIC-LoginRegistryApplicationCollection.adoc @@ -0,0 +1,18 @@ +[#LoginApplicationCollection] += Login into the Application Collection Registry + +To install the HELM Charts from the _application-collection_ you need to login into the registry. This needs to be done with the HELM client. + +To login to the {rac} which can be done like: +---- +$ helm registry login dp.apps.rancher.io/charts -u -p +---- + +ifdef::eic[] +The login process is needed for the following application installations: + +//* xref:SAPDI3-Rancher.adoc#rancherLIR[Cert-Manager] +* xref:SAP-EIC-Metallb.adoc#metalLIR[MetalLB] +* xref:SAP-EIC-Redis.adoc#redisLIR[Redis] +* xref:SAP-EIC-PostgreSQL.adoc#pgLIR[PostgreSQL] +endif::[] \ No newline at end of file diff --git a/adoc/SAP-EIC-Main.adoc b/adoc/SAP-EIC-Main.adoc index fece4d66..5de78268 100644 --- a/adoc/SAP-EIC-Main.adoc +++ b/adoc/SAP-EIC-Main.adoc @@ -18,9 +18,14 @@ :elm: SAP Edge Lifecycle Management :rac: Rancher Application Collection :redis: Redis +:redis_version: 7.2.5 :sis: SAP Integration Suite :pg: PostgreSQL +:pg_version: 15.7 :metallb: MetalLB +:metallb_version: 0.14.7 +:cm: cert-manager +:cm_version: 1.15.2 = {eic} on SUSE @@ -51,14 +56,15 @@ There are several versions for the different used software components available. [cols="1,1"] |=== |Product | Version -|SUSE Linux Enterprise Micro | 5.5 -|Rancher Kubernetes Engine | 1.28 -|Rancher Prime | 2.8.5 -|Longhorn | 1.6 -|cert-manager | 1.15 -|MetalLB | 0.14.7 -|PostgresSQL | 15.7 -|Redis | 7.2.5 + +|{slem} | {slem_version} +|{rke} | 1.28 +|{rancher} | {rancher_version} +|{lh} | {lh_version} +|{cm} | {cm_version} +|{metallb} | {metallb_version} +|{pg} | {pg_version} +|{redis} | {redis_version} |=== @@ -259,7 +265,7 @@ to install {eic} in your prepared environments. == Appendix include::SAP-EIC-ImagePullSecrets.adoc[leveloffset=+2] - +include::SAP-EIC-LoginRegistryApplicationCollection.adoc[leveloffset=+2] ++++ ++++ @@ -267,7 +273,7 @@ include::SAP-EIC-ImagePullSecrets.adoc[leveloffset=+2] [#selfSignedCertificates] === Using self-signed certificates -In this chapter we will explain how to create self-signed certificates and how to make them available within Kubernetes. +In this chapter we will explain how to create self-signed certificates and how to make them available within Kubernetes. We will describe to possible solutions to do this. You can create everything on the operation system layer or you also can use cert-manager in your downstream cluster. ==== Creating self-signed certificates @@ -332,8 +338,83 @@ For an example of uploading your certificates to Kubernetes, see the following e $ kubectl -n create secret generic --from-file=./root.pem --from-file=./server.pem --from-file=./server.key ---- -NOTE: Most applications are expecting to have the secret to be used in the same namespace as the application. +NOTE: All applications are expecting to have the secret to be used in the same namespace as the application. + +==== Using cert-manager +cert-manager needs to be available in your Downstream Cluster. To install cert-manager in your downstream cluster you can use the same installation steps which are described in the Rancher Prime installation. 
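+
+Before creating the resources below, it is worth verifying that cert-manager is actually up and running in the downstream cluster. Assuming it was installed into the cert-manager namespace as shown in the {rancher} installation chapter, a quick check could look like this (the deployment name may differ depending on the chart version):
+
+[source, bash]
+----
+$ kubectl -n cert-manager get pods
+$ kubectl -n cert-manager rollout status deploy/cert-manager
+----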
+First we need to create a selfsigned-issuer.yaml file:
+
+[source,yaml]
+----
+apiVersion: cert-manager.io/v1
+kind: ClusterIssuer
+metadata:
+  name: selfsigned-issuer
+spec:
+  selfSigned: {}
+----
+
+Then we create a Certificate resource for the CA, called my-ca-cert.yaml:
+[source,yaml]
+----
+apiVersion: cert-manager.io/v1
+kind: Certificate
+metadata:
+  name: my-ca-cert
+  namespace: cert-manager
+spec:
+  isCA: true
+  commonName: .cluster.local
+  secretName: my-ca-secret
+  issuerRef:
+    name: selfsigned-issuer
+    kind: ClusterIssuer
+  dnsNames:
+  - ".cluster.local"
+  - "*..cluster.local"
+----
+
+For creating a ClusterIssuer that uses the generated CA, we create the my-ca-issuer.yaml file:
+[source,yaml]
+----
+apiVersion: cert-manager.io/v1
+kind: ClusterIssuer
+metadata:
+  name: my-ca-issuer
+spec:
+  ca:
+    secretName: my-ca-secret
+----
+The last resource we need to create is the certificate itself, which is signed by the CA we just created. You can name the YAML file application-name-certificate.yaml:
+[source,yaml]
+----
+apiVersion: cert-manager.io/v1
+kind: Certificate
+metadata:
+  name: 
+  namespace:  # needs to be created manually
+spec:
+  dnsNames:
+  - .cluster.local
+  issuerRef:
+    group: cert-manager.io
+    kind: ClusterIssuer
+    name: my-ca-issuer
+  secretName: 
+  usages:
+  - digital signature
+  - key encipherment
+----
+
+Apply the YAML files to your Kubernetes cluster:
+[source, bash]
+----
+$ kubectl apply -f selfsigned-issuer.yaml
+$ kubectl apply -f my-ca-cert.yaml
+$ kubectl apply -f my-ca-issuer.yaml
+$ kubectl apply -f application-name-certificate.yaml
+----
+
+When you deploy your applications via HELM Charts you can use the generated certificate. In the Kubernetes Secret Certificate are 3 files stored. The tls.crt, tls.key and ca.crt which you cann use in the values.yaml file of your application.
 
 ++++
 
 ++++
diff --git a/adoc/SAP-EIC-Metallb.adoc b/adoc/SAP-EIC-Metallb.adoc
index c8228572..0189c5e4 100644
--- a/adoc/SAP-EIC-Metallb.adoc
+++ b/adoc/SAP-EIC-Metallb.adoc
@@ -24,11 +24,22 @@ Instructions how to create the *imagePullSecret* can be found in xref:SAP-EIC-Im
 
 === Installation of {metallb}
 
+[#metalLIR]
+Before you can install the application, you need to login into the registry. You can find the instruction in xref:SAP-EIC-LoginRegistryApplicationCollection.adoc#LoginApplicationCollection[]
+
 To install {metallb} run the following lines in your terminal:
 
+Create a values.yaml file with the following configuration:
+
+[source,yaml]
+----
+imagePullSecrets:
+  - name: application-collection
+----
+
+Then install the metallb application.
 ----
-$ helm pull oci://dp.apps.rancher.io/charts/metallb --version=0.14.7 --untar
-$ helm install --namespace=metallb --set-json 'imagePullSecrets=[{"name":"application-collection"}]' --create-namespace metallb ./metallb
+# helm install metallb oci://dp.apps.rancher.io/charts/metallb -f values.yaml --namespace=metallb --version 0.14.7
 ----
 
 ++++
@@ -53,7 +64,7 @@ metadata:
   namespace: metallb
 spec:
   addresses:
-  - 192.168.1.240-192.168.1.250
+  - 192.168.1.240/32
 EOF
 ----
 
diff --git a/adoc/SAP-EIC-PostgreSQL.adoc b/adoc/SAP-EIC-PostgreSQL.adoc
index 4d0b8dd8..14da413b 100644
--- a/adoc/SAP-EIC-PostgreSQL.adoc
+++ b/adoc/SAP-EIC-PostgreSQL.adoc
@@ -25,14 +25,11 @@ $ kubectl create namespace postgresql
 How to create the *imagePullSecret* is described in the Section xref:SAP-EIC-ImagePullSecrets.adoc#imagePullSecret[].
 
 === Create Secret with certificates
-Second we need to create the Kubernetes secret with the certificates. You will find an example how to to this in the xref:SAP-EIC-Main.adoc#selfSignedCertificates[].
+Second we need to create the Kubernetes secret with the certificates. You will find an example how to do this in the xref:SAP-EIC-Main.adoc#selfSignedCertificates[].
 
 === Installing the application
-
-You will need to login to the {rac} which can be done like:
-----
-$ helm registry login dp.apps.rancher.io/charts -u  -p 
-----
+[#pgLIR]
+Before you can install the application, you need to login into the registry. You can find the instruction in xref:SAP-EIC-LoginRegistryApplicationCollection.adoc#LoginApplicationCollection[]
 
 Create a file *values.yaml* which holds some configuration for the {pg} Helm chart.
 The config may look like:
@@ -78,8 +75,7 @@ persistentVolumeClaimRetentionPolicy:
 
 To install the application run:
 ----
-$ helm pull oci://dp.apps.rancher.io/charts/postgres --untar
-$ helm install -f values.yaml --namespace=postgresql ./postgresql
+$ helm install postgresql oci://dp.apps.rancher.io/charts/postgresql -f values.yaml --namespace=postgresql
 ----
 
diff --git a/adoc/SAP-EIC-Redis.adoc b/adoc/SAP-EIC-Redis.adoc
index 000da39c..bde1b875 100644
--- a/adoc/SAP-EIC-Redis.adoc
+++ b/adoc/SAP-EIC-Redis.adoc
@@ -40,6 +40,10 @@ Instructions how to create the *imagePullSecret* can be found in xref:SAP-EIC-Im
 
 If you want to use self signed certificates, you can find instructions how to create such in xref:SAP-EIC-Main.adoc#selfSignedCertificates[]
 
+[#redisLIR]
+Before you can install the application, you need to login into the registry. You can find the instruction in xref:SAP-EIC-LoginRegistryApplicationCollection.adoc#LoginApplicationCollection[]
+
+
 Create a file *values.yaml* which holds some configuration for the {redis} Helm chart.
 The config may look like:
 ----
@@ -68,6 +72,5 @@ tls:
 
 To install the application run:
 ----
-$ helm pull oci://dp.apps.rancher.io/charts/redis --untar
-$ helm install -f values.yaml --namespace=redis --create-namespace redis ./redis
+$ helm install metallb oci://dp.apps.rancher.io/charts/postgres -f values.yaml --namespace=redis
 ----

From 1bcda487dfa49698faf506d0879ac758e97b3b20 Mon Sep 17 00:00:00 2001
From: Dominik_Mathern
Date: Fri, 16 Aug 2024 10:18:19 +0200
Subject: [PATCH 31/48] final commit.
--- ...IC-LoginRegistryApplicationCollection.adoc | 2 +- adoc/SAP-EIC-Main.adoc | 13 ++- adoc/SAP-EIC-Metallb.adoc | 20 +++-- adoc/SAP-EIC-PostgreSQL.adoc | 3 + adoc/SAP-EIC-Redis.adoc | 17 +++- adoc/SAP-EIC-SLEMicro.adoc | 43 ++++++---- adoc/SAPDI3-Longhorn.adoc | 10 ++- adoc/SAPDI3-Rancher.adoc | 83 ++++++++++++------- 8 files changed, 128 insertions(+), 63 deletions(-) diff --git a/adoc/SAP-EIC-LoginRegistryApplicationCollection.adoc b/adoc/SAP-EIC-LoginRegistryApplicationCollection.adoc index eba0b019..9e111f9e 100644 --- a/adoc/SAP-EIC-LoginRegistryApplicationCollection.adoc +++ b/adoc/SAP-EIC-LoginRegistryApplicationCollection.adoc @@ -11,7 +11,7 @@ $ helm registry login dp.apps.rancher.io/charts -u -p ifdef::eic[] The login process is needed for the following application installations: -//* xref:SAPDI3-Rancher.adoc#rancherLIR[Cert-Manager] +* xref:SAPDI3-Rancher.adoc#rancherLIR[Cert-Manager] * xref:SAP-EIC-Metallb.adoc#metalLIR[MetalLB] * xref:SAP-EIC-Redis.adoc#redisLIR[Redis] * xref:SAP-EIC-PostgreSQL.adoc#pgLIR[PostgreSQL] diff --git a/adoc/SAP-EIC-Main.adoc b/adoc/SAP-EIC-Main.adoc index 5de78268..499873a6 100644 --- a/adoc/SAP-EIC-Main.adoc +++ b/adoc/SAP-EIC-Main.adoc @@ -190,7 +190,7 @@ NOTE: Keep in mind that the descriptions and instructions below might differ fro === Logging in to {rac} -{rancher} instances prior to version 2.9 cannot integrate the {rac}. Therefore, you need to use the console and Helm. +To access the {rac} you need to login. Therefore, you can use the console and Helm client. The easiest way to do so is to use the built-in shell in {rancher}. To access it, navigate to your cluster and click *Kubectl Shell* as shown below: image::EIC-Rancher-Kubectl-Button.png[title=Rancher Shell Access,scaledwidth=99%] @@ -281,6 +281,8 @@ WARNING: We strongly advise against using self-signed certificates in production The first step is to create a certification authority (hereinafter referred to as CA) with a key and certificate. The following excerpt provides an example of how to create one with a passphrase of your choice: + +[source, bash] ---- $ openssl req -x509 -sha256 -days 1825 -newkey rsa:2048 -keyout rootCA.key -out rootCA.crt -passout pass: -subj "/C=DE/ST=BW/L=Nuremberg/O=SUSE" ---- @@ -288,13 +290,16 @@ $ openssl req -x509 -sha256 -days 1825 -newkey rsa:2048 -keyout rootCA.key -out This will give you the files `rootCA.key` and `rootCA.crt`. The server certificate requires a certificate signing request (hereinafter referred to as CSR). The following excerpt shows how to create such a CSR: + +[source, bash] ---- $ openssl req -newkey rsa:2048 -keyout domain.key -out domain.csr -passout pass: -subj "/C=DE/ST=BW/L=Nuremberg/O=SUSE" ---- Before you can sign the CSR, you need to add the DNS names of your Kuberntes Services to the CSR. Therefore, create a file with the content below and replace the ** and ** with the name of your Kubernetes service and the namespace in which it is placed: - + +[source, bash] ---- authorityKeyIdentifier=keyid,issuer basicConstraints=CA:FALSE @@ -322,6 +327,7 @@ $ openssl rsa -passin pass: -in domain.key -out server.key Some applications (like Redis) require a full certificate chain to operate. 
To get a full certificate chain, link the generated file _server.pem_ with the file _rootCA.crt_ as follows: +[source, bash] ---- $ cat server.pem rootCA.crt > chained.pem ---- @@ -333,7 +339,8 @@ You should then have the files _server.pem_, _server.key_ and _chained.pem_ that To use certificate files in Kubernetes, you need to save them as so-called *Secrets*. For an example of uploading your certificates to Kubernetes, see the following excerpt: - + +[source, bash] ---- $ kubectl -n create secret generic --from-file=./root.pem --from-file=./server.pem --from-file=./server.key ---- diff --git a/adoc/SAP-EIC-Metallb.adoc b/adoc/SAP-EIC-Metallb.adoc index 0189c5e4..cff01702 100644 --- a/adoc/SAP-EIC-Metallb.adoc +++ b/adoc/SAP-EIC-Metallb.adoc @@ -11,7 +11,7 @@ If you are trying to run {metallb} on a cloud platform, you should also look at There are several ways to deploy {metallb}. In this guide we'll describe how to use the {rac} to deploy {metallb}. -Please make sure to have a range of IP addresses available for configuring {metallb}. +Please make sure to have one IP addresses available for configuring {metallb}. Before you can deploy {metallb} from {rac}, you need to create the namespace and an ImagePullSecret. To create the related namespace, run: @@ -39,7 +39,10 @@ imagePullSecrets: Then install the metallb application. ---- -# helm install metallb oci://dp.apps.rancher.io/charts/metallb -f values.yaml --namespace=metallb --version 0.14.7 +$ helm install metallb oci://dp.apps.rancher.io/charts/metallb \ +-f values.yaml \ +--namespace=metallb \ +--version 0.14.7 ---- ++++ @@ -54,9 +57,9 @@ Then install the metallb application. - L2 advertisement configuration Create the configuration files for the {metallb} IP address pool: - +[source,bash] ---- -# cat <iprange.yaml +$ cat <iprange.yaml apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: @@ -69,9 +72,9 @@ EOF ---- and the layer 2 network advertisement: - +[source,bash] ---- -# cat < l2advertisement.yaml +$ cat < l2advertisement.yaml apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: @@ -82,7 +85,8 @@ EOF Apply the configuration: +[source,bash] ---- -# kubectl apply -f iprange.yaml -# kubectl apply -f l2advertisement.yaml +$ kubectl apply -f iprange.yaml +$ kubectl apply -f l2advertisement.yaml ---- diff --git a/adoc/SAP-EIC-PostgreSQL.adoc b/adoc/SAP-EIC-PostgreSQL.adoc index 14da413b..519b62e5 100644 --- a/adoc/SAP-EIC-PostgreSQL.adoc +++ b/adoc/SAP-EIC-PostgreSQL.adoc @@ -17,6 +17,7 @@ The {pg} chart can be found at https://apps.rancher.io/applications/postgresql. == Create Secret for {rac} First we need to create a namespace and the *imagePullSecret* for installing the {pg} database into the cluster. +[source, bash] ---- $ kubectl create namespace postgresql ---- @@ -33,6 +34,7 @@ Before you can install the application, you need to login into the registry. You Create a file *values.yaml* which holds some configuration for the {pg} Helm chart. 
The config may look like:
 
+[source, yaml]
 ----
 global:
   # -- Global override for container image registry pull secrets
@@ -76,6 +78,7 @@ persistentVolumeClaimRetentionPolicy:
 
 To install the application run:
+[source, bash]
 ----
 $ helm install postgresql oci://dp.apps.rancher.io/charts/postgresql -f values.yaml --namespace=postgresql
 ----
 
diff --git a/adoc/SAP-EIC-Redis.adoc b/adoc/SAP-EIC-Redis.adoc
index bde1b875..6f91931b 100644
--- a/adoc/SAP-EIC-Redis.adoc
+++ b/adoc/SAP-EIC-Redis.adoc
@@ -30,6 +30,8 @@ The {redis} chart can be found at https://apps.rancher.io/applications/redis .
 To deploy the chart you'll need to create the related namespace and *imagePullSecret* first.
 To create the namespace, run:
 
+
+[source, bash]
 ----
 $ kubectl create namespace redis
 ----
@@ -46,7 +48,15 @@ Before you can install the application, you need to login into the registry. You
 
 Create a file *values.yaml* which holds some configuration for the {redis} Helm chart.
 The config may look like:
+
+[source, yaml]
 ----
+images:
+  redis:
+    # -- Image name to use for the Redis container
+    repository: dp.apps.rancher.io/containers/redis
+    # -- Image tag to use for the Redis container
+    tag: 7.2.5
 storageClassName: "longhorn"
 global:
   imagePullSecrets: ["application-collection"]
@@ -70,7 +80,10 @@ tls:
 ----
 
 To install the application run:
-
+[source, bash]
 ----
-$ helm install metallb oci://dp.apps.rancher.io/charts/postgres -f values.yaml --namespace=redis
+$ helm install redis oci://dp.apps.rancher.io/charts/redis \
+-f values.yaml \
+--namespace=redis \
+--version 7.2.5
 ----
 
diff --git a/adoc/SAP-EIC-SLEMicro.adoc b/adoc/SAP-EIC-SLEMicro.adoc
index 59ef8f0f..1aef7e25 100644
--- a/adoc/SAP-EIC-SLEMicro.adoc
+++ b/adoc/SAP-EIC-SLEMicro.adoc
@@ -21,12 +21,15 @@ skip this chapter.
 To bring your system up to date you need to register your system against a SUSE Manager, RMT server or direct to the SCC Portal. We will describe the process in our guide with the direct connect to the SCC. For more information please look into the {slem} documentation.
 Registering the system is possible from the command line using the *transactional-update register* command. For information that goes beyond the scope of this section, refer to the inline documentation with *SUSEConnect --help*.
 To register {slem} with SUSE Customer Center, run *transactional-update register* as follows:
+[source, bash]
 ----
-# transactional-update register -r REGISTRATION_CODE -e EMAIL_ADDRESS
+$ transactional-update register -r REGISTRATION_CODE -e EMAIL_ADDRESS
 ----
 To register with a local registration server, additionally provide the URL to the server:
+
+[source, bash]
 ----
-# transactional-update register -r REGISTRATION_CODE -e EMAIL_ADDRESS \
+$ transactional-update register -r REGISTRATION_CODE -e EMAIL_ADDRESS \
   --url "https://suse_register.example.com/"
 ----
 Replace *REGISTRATION_CODE* with the registration code you received with your copy of {slem}. Replace *EMAIL_ADDRESS* with the e-mail address associated with the SUSE account you or your organization uses to manage subscriptions.
@@ -36,15 +39,17 @@ You can found more information in the {slem} {slem_version} link:https://documen
 
 === Update your system
 Log in to the system; after your system is registered, you can update it with the *transactional-update* command.
+[source, bash]
 ----
-# transactional-update
+$ transactional-update
 ----
 
 === Disable automatic reboot
 By default {slem} runs a timer for *transactional-update* in the background which could automatically reboot your system. 
We will disable it. +[source, bash] ---- -# systemctl --now disable transactional-update.timer +$ systemctl --now disable transactional-update.timer ---- ++++ @@ -55,56 +60,66 @@ Per default {slem} runs a timer for *transactional-update* in the background whi For {lh} we need to do some preparation steps. First we need to install addional packages on all worker nodes. Then we will attach a second disk to the worker nodes, create a filesystem ontop of it and mount it to the longhorn default location. The size of the second disk depends on your use case. We need to install some packages as a requirement for longhorn and Logical Volume Management for adding a filesystem to longhorn. +[source, bash] ---- -# transactional-update pkg install lvm2 jq nfs-client cryptsetup open-iscsi +$ transactional-update pkg install lvm2 jq nfs-client cryptsetup open-iscsi ---- After the needed packages are installed you need to reboot your machine. +[source, bash] ---- -# reboot +$ reboot ---- Now we can you enable the iscsid server. +[source, bash] ---- -# systemctl enable iscsid --now +$ systemctl enable iscsid --now ---- ==== Create filesystem for longhorn Then we will create with the Logical Volume Management a new logical volume. First we want to create a new physical volume. In our case the second disk is called vdb and we use this as longhorn volume. +[source, bash] ---- -pvcreate /dev/vdb +$ pvcreate /dev/vdb ---- After the physical volume is created we create a volume group called vgdata +[source, bash] ---- -vgcreate vgdata /dev/vdb +$ vgcreate vgdata /dev/vdb ---- Now we cann create the logical volume and we will use 100% of the disk. +[source, bash] ---- -lvcreate -n lvlonghorn -l100%FREE vgdata +$ lvcreate -n lvlonghorn -l100%FREE vgdata ---- We will create the XFS filesystem on the logical volume. You don't need to create a partion on top of it. +[source, bash] ---- -mkfs.xfs /dev/vgdata/lvlonghorn +$ mkfs.xfs /dev/vgdata/lvlonghorn ---- Before we can mount the device we need to create the directory structure. +[source, bash] ---- -mkdir -p /var/lib/longhorn +$ mkdir -p /var/lib/longhorn ---- That the mount of the filesystem is persistent we add an entry into the fstab +[source, bash] ---- -echo -e "/dev/vgdata/lvlonghorn /var/lib/longhorn xfs defaults 0 0" >> /etc/fstab +$ echo -e "/dev/vgdata/lvlonghorn /var/lib/longhorn xfs defaults 0 0" >> /etc/fstab ---- Now we can mount the filesystem +[source, bash] ---- -mount -a +$ mount -a ---- diff --git a/adoc/SAPDI3-Longhorn.adoc b/adoc/SAPDI3-Longhorn.adoc index c10d9043..2602c1e4 100644 --- a/adoc/SAPDI3-Longhorn.adoc +++ b/adoc/SAPDI3-Longhorn.adoc @@ -33,7 +33,9 @@ https://longhorn.io/docs/{lh_version}/deploy/install/install-with-rancher/ === Installing {lh} using Helm -ifdef::slem[] +ifdef::eic[] +To install Longhorn using Helm, run the following commands: +[source, bash] ---- $ helm repo add rancher-v2.8-charts https://raw.githubusercontent.com/rancher/charts/release-v2.8 $ helm repo update @@ -44,8 +46,9 @@ $ helm upgrade --install longhorn rancher-v2.8-charts/longhorn \ --namespace longhorn-system ---- endif::[] - +ifndef::eic[] To install Longhorn using Helm, run the following commands: +[source, bash] ---- $ helm repo add longhorn https://charts.longhorn.io $ helm repo update @@ -53,13 +56,14 @@ $ helm install longhorn longhorn/longhorn --namespace longhorn-system --create-n ---- These commands will add the Longhorn Helm charts to the list of Helm repositories, update the Helm repository, and execute the installation of Longhorn. 
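+
+To verify that the deployment went through, you can watch the pods in the longhorn-system namespace and check that the default StorageClass was created. Assuming the namespace used above, a quick check could be:
+
+[source, bash]
+----
+$ kubectl -n longhorn-system get pods
+$ kubectl get storageclass longhorn
+----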
- +endif::[] ifndef::slem[] === Installing {lh} using `kubectl` You can install {lh} using `kubectl` with the following command: [subs="attributes"] +[source, bash] ---- $ kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v{lh_version}/deploy/longhorn.yaml ---- diff --git a/adoc/SAPDI3-Rancher.adoc b/adoc/SAPDI3-Rancher.adoc index 0299dd31..68c8e383 100644 --- a/adoc/SAPDI3-Rancher.adoc +++ b/adoc/SAPDI3-Rancher.adoc @@ -11,14 +11,16 @@ If you do not plan to set up a high available {rancher} cluster, you can skip th Setup a virtual machine or bare metal server with {sles} and the HA Extension or use {sles4sap}. Install the haproxy package. +[source, bash] ---- -# zypper in haproxy +$ zypper in haproxy ---- Create the configuration for haproxy. Here follows an example configuration file for haproxy, please adapt for the actual environment. -ifdef::slem[] +ifdef::eic[] +[source, bash] ---- # cat < /etc/haproxy/haproxy.cfg global @@ -101,7 +103,8 @@ EOF ---- endif::[] -ifndef::slem[] +ifndef::eic[] +[source, bash] ---- # cat < /etc/haproxy/haproxy.cfg global @@ -163,31 +166,34 @@ EOF ---- endif::[] Check the configuration file: +[source, bash] ---- -# haproxy -f /path/to/your/haproxy.conf -c +$ haproxy -f /path/to/your/haproxy.conf -c ---- Enable and start the haproxy load balancer: +[source, bash] ---- -# systemctl enable haproxy -# systemctl start haproxy +$ systemctl enable haproxy +$ systemctl start haproxy ---- Do not forget to restart or reload haproxy if there were changes to the haproxy config file. - ==== Installing RKE2 To install RKE2, the script provided at https://get.rke2.io can be used as follows: +[source, bash] ---- -# curl -sfL https://get.rke2.io | sh - +$ curl -sfL https://get.rke2.io | sh - ---- For HA setups it is necessary to create RKE2 cluster configuration files in advance. On the first master node: +[source, bash] ---- -# mkdir -p /etc/rancher/rke2 -# cat < /etc/rancher/rke2/config.yaml +$ mkdir -p /etc/rancher/rke2 +$ cat < /etc/rancher/rke2/config.yaml token: 'your cluster token' system-default-registry: registry.rancher.com tls-san: @@ -198,8 +204,9 @@ EOF ---- Create configuration files for additional cluster nodes: +[source, bash] ---- -# cat < /etc/rancher/rke2/config.yaml +$ cat < /etc/rancher/rke2/config.yaml server: https://"FQDN of registration address":9345 token: 'your cluster token' system-default-registry: registry.rancher.com @@ -215,19 +222,24 @@ IMPORTANT: You also need take about ETCD Snapshots and to perfom backups of your IMPORTANT: For security reasons, we generally recommend activating the CIS profile when installing RKE2. This is currently still being validated and will be included in the documentation at a later date. 
Now it is time to enable and start the RKE2 components and run on each cluster node: +[source, bash] ---- -# systemctl enable rke2-server --now +$ systemctl enable rke2-server --now ---- To verify the installation, run the following command: + +[source, bash] ---- -# /var/lib/rancher/rke2/bin/kubectl --kubeconfig /etc/rancher/rke2/rke2.yaml get nodes +$ /var/lib/rancher/rke2/bin/kubectl --kubeconfig /etc/rancher/rke2/rke2.yaml get nodes ---- For convenience, the `kubectl` binary can be added to the *$PATH* and the given `kubeconfig` can be set via an environment variable: + +[source, bash] ---- -# export PATH=$PATH:/var/lib/rancher/rke2/bin/ -# export KUBECONFIG=/etc/rancher/rke2/rke2.yaml +$ export PATH=$PATH:/var/lib/rancher/rke2/bin/ +$ export KUBECONFIG=/etc/rancher/rke2/rke2.yaml ---- ++++ @@ -240,8 +252,9 @@ For convenience, the `kubectl` binary can be added to the *$PATH* and the given In order to install {rancher} and some of its required components, you'll need to use Helm. One way to install Helm is to run: +[source, bash] ---- -# curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash +$ curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash ---- ==== Installing cert-manager @@ -252,6 +265,7 @@ Even though cert-manager is available for deployment using the {rancher} Apps, w First we need to create a namespace and the *imagePullSecret* for installing the cert-manager. To create the namespace, run: +[source, bash] ---- $ kubectl create namespace cert-manager ---- @@ -262,46 +276,49 @@ How to create the *imagePullSecret* is described in the xref:SAP-EIC-ImagePullSe ===== Installing the application +ifdef::eic[] +[#rancherLIR] +Before you can install the application, you need to login into the registry. You can find the instruction in xref:SAP-EIC-LoginRegistryApplicationCollection.adoc#LoginApplicationCollection[] +endif::[] + +ifndef::eic[] You will need to login to the {rac}: +[source, bash] ---- $ helm registry login dp.apps.rancher.io/charts -u -p ---- +endif::[] -Now pull the helmchart from the {rac}: - ----- -$ helm pull oci://dp.apps.rancher.io/charts/cert-manager --untar ----- - - -Install cert-manager: - +[source, bash] ---- -$ helm install --namespace cert-manager \ +$ helm install cert-manager oci://dp.apps.rancher.io/charts/cert-manager \ --set crds.enabled=true \ --set-json 'global.imagePullSecrets=[{"name":"application-collection"}]' \ -cert-manager ./cert-manager +--namespace=cert-manager \ +--version 1.15.2 ---- - - - - === Installing {rancher} To install {rancher}, you need to add the related Helm repository. To achieve that, use the following command: + +[source, bash] ---- $ helm repo add rancher-prime https://charts.rancher.com/server-charts/prime ---- As a next step, create the cattle-system namespace in Kubernetes as follows: + +[source, bash] ---- $ kubectl create namespace cattle-system ---- The Kubernetes cluster is now ready for the installation of {rancher}: + +[source, bash] ---- $ helm install rancher rancher-prime/rancher \ --namespace cattle-system \ @@ -310,8 +327,10 @@ $ helm install rancher rancher-prime/rancher \ ---- During the rollout of {rancher}, you can monitor the progress using the following command: + +[source, bash] ---- -$ kubectl -n cattle-system rollout status deploy/rancher +$ kubectl -n cattle-system rollout status deploy/rancher-prime ---- When the deployment is done, you can access the {rancher} cluster at https://[]. 
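+
+For the very first login, {rancher} generates a bootstrap password. Assuming the default secret name used by the Helm chart, it can be retrieved as follows:
+
+[source, bash]
+----
+$ kubectl get secret --namespace cattle-system bootstrap-secret \
+-o go-template='{{.data.bootstrapPassword|base64decode}}{{"\n"}}'
+----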
From ec9fd50c69f12063c94e13ed31905697176befd7 Mon Sep 17 00:00:00 2001 From: Kevin Klinger Date: Tue, 27 Aug 2024 14:04:06 +0200 Subject: [PATCH 32/48] fix typo --- adoc/SAP-EIC-Metallb.adoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/adoc/SAP-EIC-Metallb.adoc b/adoc/SAP-EIC-Metallb.adoc index cff01702..3e98756f 100644 --- a/adoc/SAP-EIC-Metallb.adoc +++ b/adoc/SAP-EIC-Metallb.adoc @@ -11,7 +11,7 @@ If you are trying to run {metallb} on a cloud platform, you should also look at There are several ways to deploy {metallb}. In this guide we'll describe how to use the {rac} to deploy {metallb}. -Please make sure to have one IP addresses available for configuring {metallb}. +Please make sure to have one IP address available for configuring {metallb}. Before you can deploy {metallb} from {rac}, you need to create the namespace and an ImagePullSecret. To create the related namespace, run: From 42996ac614fa7897d1091dbedf13289dcbc5445a Mon Sep 17 00:00:00 2001 From: lpinne Date: Wed, 28 Aug 2024 10:14:06 +0200 Subject: [PATCH 33/48] SLES4SAP-hana-sr-guide-PerfOpt-15.adoc: clarify test case, issue #446 --- adoc/SLES4SAP-hana-sr-guide-PerfOpt-15.adoc | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/adoc/SLES4SAP-hana-sr-guide-PerfOpt-15.adoc b/adoc/SLES4SAP-hana-sr-guide-PerfOpt-15.adoc index b2cc3bab..d711dc29 100644 --- a/adoc/SLES4SAP-hana-sr-guide-PerfOpt-15.adoc +++ b/adoc/SLES4SAP-hana-sr-guide-PerfOpt-15.adoc @@ -2394,7 +2394,7 @@ recovery procedure. database. .{testProc} -. Crash the secondary node by sending a 'fast-reboot' system request. +. Crash the secondary site node (currently primary {HANA}) by sending a 'fast-reboot' system request. + [subs="attributes,quotes"] ---- @@ -2402,7 +2402,7 @@ recovery procedure. ---- .{testRecover} -. If SBD fencing is used, pacemaker will not automatically restart +. If SBD fencing is used, pacemaker will not automatically restart after being fenced. In this case clear the fencing flag on all SBD devices and subsequently start pacemaker. + From c862df60837f7c83a60f38ec904fddadbd2298eb Mon Sep 17 00:00:00 2001 From: Kevin Klinger Date: Mon, 2 Sep 2024 11:25:37 +0200 Subject: [PATCH 34/48] Improve support matrix & fix typos --- ...EIC-LoginRegistryApplicationCollection.adoc | 3 ++- adoc/SAP-EIC-Main.adoc | 18 ++++++++++-------- adoc/SAP-EIC-Metallb.adoc | 1 + 3 files changed, 13 insertions(+), 9 deletions(-) diff --git a/adoc/SAP-EIC-LoginRegistryApplicationCollection.adoc b/adoc/SAP-EIC-LoginRegistryApplicationCollection.adoc index 9e111f9e..1b94005c 100644 --- a/adoc/SAP-EIC-LoginRegistryApplicationCollection.adoc +++ b/adoc/SAP-EIC-LoginRegistryApplicationCollection.adoc @@ -3,7 +3,8 @@ To install the HELM Charts from the _application-collection_ you need to login into the registry. This needs to be done with the HELM client. -To login to the {rac} which can be done like: +To login to the {rac} run: +[source, bash] ---- $ helm registry login dp.apps.rancher.io/charts -u -p ---- diff --git a/adoc/SAP-EIC-Main.adoc b/adoc/SAP-EIC-Main.adoc index 499873a6..43a19936 100644 --- a/adoc/SAP-EIC-Main.adoc +++ b/adoc/SAP-EIC-Main.adoc @@ -14,7 +14,7 @@ :rancher: Rancher Prime :rancher_version: 2.8.3 :rke: Rancher Kubernetes Engine 2 -:eic: SAP Edge Integration Cell +:eic: Edge Integration Cell :elm: SAP Edge Lifecycle Management :rac: Rancher Application Collection :redis: Redis @@ -50,8 +50,9 @@ $ that the command can be run by any user. 
++++
 
-== Supported and used Versions
-There are several versions for the different used software components available. So we want to show you our support matrix, which versions we use in our Best Practise Guide.
+== Supported and used versions
+
+The support matrix below shows which versions of the given software we'll use in this guide.
 
 [cols="1,1"]
 |===
 |Product | Version
-|SUSE Linux Enterprise Micro | 5.5
-|Rancher Kubernetes Engine | 1.28
-|Rancher Prime | 2.8.5
-|Longhorn | 1.6
-|cert-manager | 1.15
-|MetalLB | 0.14.7
-|PostgresSQL | 15.7
-|Redis | 7.2.5
+
+|{slem} | {slem_version}
+|{rke} | 1.28
+|{rancher} | {rancher_version}
+|{lh} | {lh_version}
+|{cm} | {cm_version}
+|{metallb} | {metallb_version}
+|{pg} | {pg_version}
+|{redis} | {redis_version}
 |===
 
+IMPORTANT: If you want to use different versions of {slem}, {rancher}, {rke} or {lh}, make sure to check the support matrix for the related solutions you want to use: +
+https://www.suse.com/suse-rancher/support-matrix/all-supported-versions/ +
+For {redis} and {pg}, make sure to pick versions compatible with {eic}, which can be found in https://me.sap.com/notes/3247839 . +
+Other versions of {metallb} or {cm} can be used but may not have been tested.
 
 == Preparations
 
 * Get subscriptions for:
-** {slem} {slem_version}
-** {rancher} {rancher_version}
-** {lh} {lh_version}
+** {slem}
+** {rancher}
+** {lh}
 ** {sle_ha} *
 
 +++*+++ Only needed if you want to setup {rancher} in a high available setup.
 
-IMPORTANT: If you want to use different versions of {slem}, {rancher}, {rke} or {lh}, make sure to check the support matrix for the related solutions you want to use:
-https://www.suse.com/suse-rancher/support-matrix/all-supported-versions/
-
-
 Furthermore:
 
 * Check the storage requirements.
@@ -190,7 +192,7 @@ NOTE: Keep in mind that the descriptions and instructions below might differ fro
 
 === Logging in to {rac}
 
-{rancher} instances prior to version 2.9 cannot integrate the {rac}. Therefore, you need to use the console and Helm.
+To access the {rac}, you need to log in. This can be done using the console and the Helm client.
 The easiest way to do so is to use the built-in shell in {rancher}. To access it, navigate to your cluster and click *Kubectl Shell* as shown below:
 
 image::EIC-Rancher-Kubectl-Button.png[title=Rancher Shell Access,scaledwidth=99%]
@@ -273,7 +275,7 @@ include::SAP-EIC-LoginRegistryApplicationCollection.adoc[leveloffset=+2]
 [#selfSignedCertificates]
 === Using self-signed certificates
 
-In this chapter we will explain how to create self-signed certificates and how to make them available within Kubernetes. We will describe to possible solutions to do this. You can create everything on the operation system layer or you also can use cert-manager in your downstream cluster.
+In this chapter we will explain how to create self-signed certificates and how to make them available within Kubernetes. We will describe two possible solutions to do this. You can create everything on the operating system layer, or you can use cert-manager in your downstream cluster.
 
 ==== Creating self-signed certificates
diff --git a/adoc/SAP-EIC-Metallb.adoc b/adoc/SAP-EIC-Metallb.adoc
index 3e98756f..eb21a0eb 100644
--- a/adoc/SAP-EIC-Metallb.adoc
+++ b/adoc/SAP-EIC-Metallb.adoc
@@ -38,6 +38,7 @@ imagePullSecrets:
 ----
 
 Then install the metallb application.
+[source, bash]
 ----
 $ helm install metallb oci://dp.apps.rancher.io/charts/metallb \
 -f values.yaml \

From b600f9fbe4173b3c602337f568d47613fde8ed13 Mon Sep 17 00:00:00 2001
From: Kevin Klinger
Date: Wed, 4 Sep 2024 10:58:27 +0200
Subject: [PATCH 35/48] Adding more source types

---
 adoc/SAP-EIC-Main.adoc    | 3 +++
 adoc/SAPDI3-Longhorn.adoc | 6 +++++-
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/adoc/SAP-EIC-Main.adoc b/adoc/SAP-EIC-Main.adoc
index 43a19936..cb0d877a 100644
--- a/adoc/SAP-EIC-Main.adoc
+++ b/adoc/SAP-EIC-Main.adoc
@@ -204,6 +204,7 @@ image::EIC-Rancher-Kubectl-Shell.png[title=Rancher Shell Overview,scaledwidth=99
 
 You must log in to {rac}. This can be done as follows:
 
+[source, bash]
 ----
 $ helm registry login dp.apps.rancher.io/charts -u  -p 
 ----
@@ -313,6 +314,7 @@ DNS.2 = ..svc.cluster.local
 
 You can now use the previously created files _rootCA.key_ and _rootCA.crt_ with the extension file to sign the CSR. 
The example below shows how to do that by passing the extension file (here called _domain.ext_): +[source, bash] ---- $ openssl x509 -req -CA rootCA.crt -CAkey rootCA.key -in domain.csr -out server.pem -days 365 -CAcreateserial -extfile domain.ext -passin pass: ---- @@ -322,6 +324,7 @@ This creates a file called _server.pem_ which is your certificate to be used for Your _domain.key_ is still encrypted at this point, but the application requires an unencrypted server key. To decrypt, run the given command which will create the _server.key_. +[source, bash] ---- $ openssl rsa -passin pass: -in domain.key -out server.key ---- diff --git a/adoc/SAPDI3-Longhorn.adoc b/adoc/SAPDI3-Longhorn.adoc index 2602c1e4..9229c7df 100644 --- a/adoc/SAPDI3-Longhorn.adoc +++ b/adoc/SAPDI3-Longhorn.adoc @@ -12,7 +12,7 @@ ifndef::slem[] Before {lh} can be installed on a Kubernetes cluster, all nodes must have the `open-iscsi` package installed, and the ISCSI daemon needs to be started. To do so, run: - +[source, bash] ---- # zypper in -y open-iscsi # systemctl enable iscsid --now @@ -20,6 +20,7 @@ all nodes must have the `open-iscsi` package installed, and the ISCSI daemon nee endif::[] To make sure a node is prepared for {lh}, you can use the following script to check: +[source, bash] ---- $ curl -sSfL https://raw.githubusercontent.com/longhorn/longhorn/v1.6.2/scripts/environment_check.sh | bash ---- @@ -75,6 +76,7 @@ $ kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v{lh_vers * Create a basic _auth_ file: + +[source, bash] ---- $ USER=; \ PASSWORD=; \ @@ -83,12 +85,14 @@ $ USER=; \ * Create a secret from the file _auth_: + +[source, bash] ---- $ kubectl -n longhorn-system create secret generic basic-auth --from-file=auth ---- * Create the Ingress with basic authentication: + +[source, bash] ---- $ cat < longhorn-ingress.yaml apiVersion: networking.k8s.io/v1beta1 From c2754966ed8c750b71f55b7fbe5d48f23b3d030a Mon Sep 17 00:00:00 2001 From: Kevin Klinger Date: Wed, 4 Sep 2024 11:14:05 +0200 Subject: [PATCH 36/48] Adding pagebreaks for readability --- adoc/SAP-EIC-Main.adoc | 5 ++++- adoc/SAP-EIC-Metallb.adoc | 4 ++++ adoc/SAP-EIC-SLEMicro.adoc | 4 ++++ adoc/SAPDI3-Rancher.adoc | 4 ++++ 4 files changed, 16 insertions(+), 1 deletion(-) diff --git a/adoc/SAP-EIC-Main.adoc b/adoc/SAP-EIC-Main.adoc index cb0d877a..504caaa9 100644 --- a/adoc/SAP-EIC-Main.adoc +++ b/adoc/SAP-EIC-Main.adoc @@ -73,6 +73,9 @@ https://www.suse.com/suse-rancher/support-matrix/all-supported-versions/ + For {redis} and {pg}, make sure to pick versions compatible to {eic}, which can be found in https://me.sap.com/notes/3247839 . + Other versions of {metallb} or {cm} can be used but may have not been tested. +++++ + +++++ == Preparations @@ -268,10 +271,10 @@ to install {eic} in your prepared environments. 
== Appendix
 
 include::SAP-EIC-ImagePullSecrets.adoc[leveloffset=+2]
-include::SAP-EIC-LoginRegistryApplicationCollection.adoc[leveloffset=+2]
 
 ++++
 
 ++++
+include::SAP-EIC-LoginRegistryApplicationCollection.adoc[leveloffset=+2]
 
 [#selfSignedCertificates]
 === Using self-signed certificates
diff --git a/adoc/SAP-EIC-Metallb.adoc b/adoc/SAP-EIC-Metallb.adoc
index eb21a0eb..f3fd59b1 100644
--- a/adoc/SAP-EIC-Metallb.adoc
+++ b/adoc/SAP-EIC-Metallb.adoc
@@ -22,6 +22,10 @@ $ kubectl create namespace metallb
 [#metalIPS]
 Instructions how to create the *imagePullSecret* can be found in xref:SAP-EIC-ImagePullSecrets.adoc#imagePullSecret[]
 
+++++
+
+++++
+
 === Installation of {metallb}
diff --git a/adoc/SAP-EIC-SLEMicro.adoc b/adoc/SAP-EIC-SLEMicro.adoc
index 1aef7e25..d75ad376 100644
--- a/adoc/SAP-EIC-SLEMicro.adoc
+++ b/adoc/SAP-EIC-SLEMicro.adoc
@@ -78,6 +78,10 @@ Now we can you enable the iscsid server.
 $ systemctl enable iscsid --now
 ----
 
+++++
+
+++++
+
 ==== Create filesystem for longhorn
diff --git a/adoc/SAPDI3-Rancher.adoc b/adoc/SAPDI3-Rancher.adoc
index 68c8e383..a0f43955 100644
--- a/adoc/SAPDI3-Rancher.adoc
+++ b/adoc/SAPDI3-Rancher.adoc
@@ -203,6 +203,10 @@ tls-san:
 EOF
 ----
 
+++++
+
+++++
+
 Create configuration files for additional cluster nodes:

From 670f1b0b89641d90838ca2fb1296aca3aab845f6 Mon Sep 17 00:00:00 2001
From: Kevin Klinger
Date: Wed, 4 Sep 2024 11:15:15 +0200
Subject: [PATCH 37/48] Unify spelling of Helm

---
 adoc/SAP-EIC-LoginRegistryApplicationCollection.adoc | 2 +-
 adoc/SAP-EIC-Main.adoc | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/adoc/SAP-EIC-LoginRegistryApplicationCollection.adoc b/adoc/SAP-EIC-LoginRegistryApplicationCollection.adoc
index 1b94005c..3c899967 100644
--- a/adoc/SAP-EIC-LoginRegistryApplicationCollection.adoc
+++ b/adoc/SAP-EIC-LoginRegistryApplicationCollection.adoc
@@ -1,7 +1,7 @@
 [#LoginApplicationCollection]
 = Login into the Application Collection Registry
 
-To install the HELM Charts from the _application-collection_ you need to login into the registry. This needs to be done with the HELM client.
+To install the Helm Charts from the _application-collection_ you need to login into the registry. This needs to be done with the Helm client.
 
 To login to the {rac} run:
 [source, bash]
diff --git a/adoc/SAP-EIC-Main.adoc b/adoc/SAP-EIC-Main.adoc
index 504caaa9..37fedf98 100644
--- a/adoc/SAP-EIC-Main.adoc
+++ b/adoc/SAP-EIC-Main.adoc
@@ -429,7 +429,7 @@ $ kubectl apply -f my-ca-issuer.yaml
 $ kubectl apply -f application-name-certificate.yaml
 ----
 
-When you deploy your applications via HELM Charts you can use the generated certificate. In the Kubernetes Secret Certificate are 3 files stored. The tls.crt, tls.key and ca.crt which you cann use in the values.yaml file of your application.
+When you deploy your applications via Helm Charts you can use the generated certificate. The Kubernetes Secret holds three files, tls.crt, tls.key and ca.crt, which you can use in the values.yaml file of your application.
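+
+As an illustration of how such a certificate Secret can be referenced, the sketch below shows a TLS snippet in the style of the {redis} values.yaml used earlier. The secret name is a placeholder, and the exact key names depend on the respective chart, so check the chart's values documentation:
+
+[source, yaml]
+----
+tls:
+  enabled: true
+  # name of the Kubernetes Secret created by cert-manager (hypothetical name)
+  certificatesSecret: "my-app-tls"
+  certFilename: "tls.crt"
+  certKeyFilename: "tls.key"
+  certCAFilename: "ca.crt"
+----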
++++ From dca86e764a3ab554dc45b75ec5ba8326a87031a0 Mon Sep 17 00:00:00 2001 From: Kevin Klinger Date: Mon, 9 Sep 2024 14:57:13 +0200 Subject: [PATCH 38/48] Fix missing merge lines --- adoc/SAP-EIC-ImagePullSecrets.adoc | 2 +- adoc/SAP-EIC-Main.adoc | 57 ------------------------------ adoc/SAP-EIC-PostgreSQL.adoc | 4 --- 3 files changed, 1 insertion(+), 62 deletions(-) diff --git a/adoc/SAP-EIC-ImagePullSecrets.adoc b/adoc/SAP-EIC-ImagePullSecrets.adoc index 088de46d..8e73d62d 100644 --- a/adoc/SAP-EIC-ImagePullSecrets.adoc +++ b/adoc/SAP-EIC-ImagePullSecrets.adoc @@ -53,4 +53,4 @@ image::EIC-Secrets-Types.png[title=Secrets Type Selection,scaledwidth=99%] Enter a name such as _application-collection_ for the Secret. In the text box *Registry Domain Name*, enter _dp.apps.rancher.io_. Enter your user name and password and click the *Create* button at the bottom right. -image::EIC-Secret-Create.png[title=Secrets Creation Step,scaledwidth=99%] \ No newline at end of file +image::EIC-Secret-Create.png[title=Secrets Creation Step,scaledwidth=99%] diff --git a/adoc/SAP-EIC-Main.adoc b/adoc/SAP-EIC-Main.adoc index 5ea3e94d..74968cf1 100644 --- a/adoc/SAP-EIC-Main.adoc +++ b/adoc/SAP-EIC-Main.adoc @@ -80,20 +80,10 @@ Other versions of {metallb} or {cm} can be used but may have not been tested. == Prerequisites * Get subscriptions for: -<<<<<<< HEAD ** {slem} ** {rancher} ** {lh} ** {sle_ha} * -======= -** {slem} {slem_version} -** {rancher} {rancher_version} -** {lh} {lh_version} - -IMPORTANT: If you want to use versions of {slem}, {rancher}, {rke} or {lh} other than those listed here, -be sure to check the support matrix for the specific solutions you plan to use. See -https://www.suse.com/suse-rancher/support-matrix/all-supported-versions/. ->>>>>>> 8fa03f45af2c4d62213cc787393de38f00a7b33a +++*+++ Only needed if you want to setup {rancher} in a high available setup. @@ -222,53 +212,6 @@ You must log in to {rac}. This can be done as follows: $ helm registry login dp.apps.rancher.io/charts -u -p ---- -<<<<<<< HEAD -======= - -[#imagePullSecret] -=== Creating an imagePullSecret - -To make the resources available for deployment, you need to create an imagePullSecret. -In this guide we use the name _application-collection_ for it. - -==== Creating an imagePullSecret using kubectl - -Using `kubectl` to create the imagePullSecret is quite easy. -Get your user name and your access token for the {rac}. -Then run: - ----- -$ kubectl create secret docker-registry application-collection --docker-server=dp.apps.rancher.io --docker-username= --docker-password= ----- - -==== Creating an imagePullSecret using {rancher} - -You can also create an imagePullSecret using {rancher}. -Therefore, open {rancher} and enter your cluster. - -Navigate to *Storage* -> *Secrets* as shown below: - -image::EIC-Secrets-Menu.png[title=Secrets Menu,scaledwidth=99%] - -++++ - -++++ - -Click *Create* in the top right corner. - -image::EIC-Secrets-Overview.png[title=Secrets Overview,scaledwidth=99%] - -A window will appear asking you to select the Secret type. Select *Registry* as shown here: - -image::EIC-Secrets-Types.png[title=Secrets Type Selection,scaledwidth=99%] - - -Enter a name such as _application-collection_ for the Secret. In the text box *Registry Domain Name*, enter _dp.apps.rancher.io_. -Enter your user name and password and click the *Create* button at the bottom right. 
- -image::EIC-Secret-Create.png[title=Secrets Creation Step,scaledwidth=99%] - ->>>>>>> 8fa03f45af2c4d62213cc787393de38f00a7b33a ++++ ++++ diff --git a/adoc/SAP-EIC-PostgreSQL.adoc b/adoc/SAP-EIC-PostgreSQL.adoc index 21178211..3f0c322e 100644 --- a/adoc/SAP-EIC-PostgreSQL.adoc +++ b/adoc/SAP-EIC-PostgreSQL.adoc @@ -76,10 +76,6 @@ persistentVolumeClaimRetentionPolicy: whenDeleted: Delete ---- -++++ - -++++ - To install the application run: [source, bash] ---- From 97062e33fdd3fcd040b5878b80b161225f76d078 Mon Sep 17 00:00:00 2001 From: Suse-KevinKlinger <59616796+Suse-KevinKlinger@users.noreply.github.com> Date: Mon, 9 Sep 2024 15:03:25 +0200 Subject: [PATCH 39/48] Sap eic (#1) * Adding HA subscription to Preparations * Adding explenation for $ and # * use rac for installation of cert-manager * Fixing cert-manager * Adding first landscape overview * Change of metallb and longhorn parititon *Added Longhorn partition and remove MetalLB kernel parameter. * Added Version Table * added new Loadbalancer config * Outsource login registry and simplify helm install --------- Co-authored-by: Ulrich Schairer Co-authored-by: Dominik_Mathern --- adoc/SAP-EIC-ImagePullSecrets.adoc | 56 ++++ ...IC-LoginRegistryApplicationCollection.adoc | 19 ++ adoc/SAP-EIC-Main-docinfo.xml | 2 +- adoc/SAP-EIC-Main.adoc | 260 +++++++++++++----- adoc/SAP-EIC-Metallb.adoc | 58 ++-- adoc/SAP-EIC-PostgreSQL.adoc | 45 +-- adoc/SAP-EIC-Redis.adoc | 50 +++- adoc/SAP-EIC-SLEMicro.adoc | 235 ++++++---------- adoc/SAP-Rancher-RKE2-Installation.adoc | 39 +-- adoc/SAPDI3-Longhorn.adoc | 31 ++- adoc/SAPDI3-RKE2-Install.adoc | 3 + adoc/SAPDI3-Rancher.adoc | 187 +++++++++++-- adoc/SAPDI3-SUSE_Kubernetes_Stack.adoc | 14 +- images/src/svg/SAP-EIC-Architecture-RKE2.svg | 159 +++++++++++ .../src/svg/SAP-EIC-Architecture-Rancher.svg | 160 +++++++++++ images/src/svg/SAP-EIC-Architecture.svg | 157 +++++++++++ 16 files changed, 1136 insertions(+), 339 deletions(-) create mode 100644 adoc/SAP-EIC-ImagePullSecrets.adoc create mode 100644 adoc/SAP-EIC-LoginRegistryApplicationCollection.adoc create mode 100644 images/src/svg/SAP-EIC-Architecture-RKE2.svg create mode 100644 images/src/svg/SAP-EIC-Architecture-Rancher.svg create mode 100644 images/src/svg/SAP-EIC-Architecture.svg diff --git a/adoc/SAP-EIC-ImagePullSecrets.adoc b/adoc/SAP-EIC-ImagePullSecrets.adoc new file mode 100644 index 00000000..8e73d62d --- /dev/null +++ b/adoc/SAP-EIC-ImagePullSecrets.adoc @@ -0,0 +1,56 @@ +[#imagePullSecret] += Creating an imagePullSecret for the {rac} + +To make the resources available for deployment, you need to create an imagePullSecret. +In this guide we use the name _application-collection_ for it. + +== Creating an imagePullSecret using kubectl + +Using `kubectl` to create the imagePullSecret is quite easy. +Get your user name and your access token for the {rac}. +Then run: + +---- +$ kubectl -n create secret docker-registry application-collection --docker-server=dp.apps.rancher.io --docker-username= --docker-password= +---- + +As secrets are namespace sensitive, you'll need to create this for every namespace needed. + +ifdef::eic[] +The related secret can then be used for the components: + +* xref:SAPDI3-Rancher.adoc#rancherIPS[Cert-Manager] +* xref:SAP-EIC-Metallb.adoc#metalIPS[MetalLB] +* xref:SAP-EIC-Redis.adoc#redisIPS[Redis] +* xref:SAP-EIC-PostgreSQL.adoc#pgIPS[PostgreSQL] +endif::[] + +++++ + +++++ + +== Creating an imagePullSecret using {rancher} + +You can also create an imagePullSecret using {rancher}. 
+Therefore, open {rancher} and enter your cluster. + +Navigate to *Storage* -> *Secrets* as shown below: + +image::EIC-Secrets-Menu.png[title=Secrets Menu,scaledwidth=99%] + +++++ + +++++ + +Click the *Create* button in the top right corner. + +image::EIC-Secrets-Overview.png[title=Secrets Overview,scaledwidth=99%] + +A window will appear asking you to select the Secret type. Select *Registry* as shown here: + +image::EIC-Secrets-Types.png[title=Secrets Type Selection,scaledwidth=99%] + +Enter a name such as _application-collection_ for the Secret. In the text box *Registry Domain Name*, enter _dp.apps.rancher.io_. +Enter your user name and password and click the *Create* button at the bottom right. + +image::EIC-Secret-Create.png[title=Secrets Creation Step,scaledwidth=99%] diff --git a/adoc/SAP-EIC-LoginRegistryApplicationCollection.adoc b/adoc/SAP-EIC-LoginRegistryApplicationCollection.adoc new file mode 100644 index 00000000..3c899967 --- /dev/null +++ b/adoc/SAP-EIC-LoginRegistryApplicationCollection.adoc @@ -0,0 +1,19 @@ +[#LoginApplicationCollection] += Login into the Application Collection Registry + +To install the Helm Charts from the _application-collection_ you need to login into the registry. This needs to be done with the Helm client. + +To login to the {rac} run: +[source, bash] +---- +$ helm registry login dp.apps.rancher.io/charts -u -p +---- + +ifdef::eic[] +The login process is needed for the following application installations: + +* xref:SAPDI3-Rancher.adoc#rancherLIR[Cert-Manager] +* xref:SAP-EIC-Metallb.adoc#metalLIR[MetalLB] +* xref:SAP-EIC-Redis.adoc#redisLIR[Redis] +* xref:SAP-EIC-PostgreSQL.adoc#pgLIR[PostgreSQL] +endif::[] \ No newline at end of file diff --git a/adoc/SAP-EIC-Main-docinfo.xml b/adoc/SAP-EIC-Main-docinfo.xml index 94de316f..79f32c7e 100644 --- a/adoc/SAP-EIC-Main-docinfo.xml +++ b/adoc/SAP-EIC-Main-docinfo.xml @@ -29,7 +29,7 @@ Longhorn -SUSE Linux Enterprise Micro 5.4 +SUSE Linux Enterprise Micro Rancher Kubernetes Engine 2 Longhorn Rancher Prime diff --git a/adoc/SAP-EIC-Main.adoc b/adoc/SAP-EIC-Main.adoc index aa9fe7c8..74968cf1 100644 --- a/adoc/SAP-EIC-Main.adoc +++ b/adoc/SAP-EIC-Main.adoc @@ -1,5 +1,5 @@ :docinfo: - +//test // defining article ID [#art-sap-eic-slemicro54] @@ -8,18 +8,24 @@ :slem: SUSE Linux Enterprise Micro :slem_version: 5.4 :sles_version: 15 SP5 +:sle_ha: SUSE Linux Enterprise High Availability Extension :lh: Longhorn :lh_version: 1.5.5 :rancher: Rancher Prime :rancher_version: 2.8.3 :rke: Rancher Kubernetes Engine 2 -:eic: SAP Edge Integration Cell +:eic: Edge Integration Cell :elm: SAP Edge Lifecycle Management :rac: Rancher Application Collection :redis: Redis +:redis_version: 7.2.5 :sis: SAP Integration Suite :pg: PostgreSQL +:pg_version: 15.7 :metallb: MetalLB +:metallb_version: 0.14.7 +:cm: cert-manager +:cm_version: 1.15.2 = {eic} on SUSE @@ -37,6 +43,36 @@ It will guide you through the steps of: NOTE: This guide does not contain information about sizing your landscapes. Visit https://help.sap.com/docs/integration-suite?locale=en-US and search for the "Edge Integration Cell Sizing Guide". +NOTE: In this guide we'll use $ and # for shell commands, where # means that the command needs to be executed as a root user and +$ that the command can be run by any user. + +++++ + +++++ + +== Supported and used versions + +The support matrix below shows which versions of the given software we'll use in this guide. 
+ +[cols="1,1"] +|=== +|Product | Version + +|{slem} | {slem_version} +|{rke} | 1.28 +|{rancher} | {rancher_version} +|{lh} | {lh_version} +|{cm} | {cm_version} +|{metallb} | {metallb_version} +|{pg} | {pg_version} +|{redis} | {redis_version} +|=== + +IMPORTANT: If you want to use different versions of {slem}, {rancher}, {rke} or {lh}, make sure to check the support matrix for the related solutions you want to use: +https://www.suse.com/suse-rancher/support-matrix/all-supported-versions/ + +For {redis} and {pg}, make sure to pick versions compatible to {eic}, which can be found in https://me.sap.com/notes/3247839 . + +Other versions of {metallb} or {cm} can be used but may have not been tested. + ++++ ++++ @@ -44,14 +80,12 @@ https://help.sap.com/docs/integration-suite?locale=en-US and search for the "Edg == Prerequisites * Get subscriptions for: -** {slem} {slem_version} -** {rancher} {rancher_version} -** {lh} {lh_version} - -IMPORTANT: If you want to use versions of {slem}, {rancher}, {rke} or {lh} other than those listed here, -be sure to check the support matrix for the specific solutions you plan to use. See -https://www.suse.com/suse-rancher/support-matrix/all-supported-versions/. +** {slem} +** {rancher} +** {lh} +** {sle_ha} * ++++*+++ Only needed if you want to setup {rancher} in a high available setup. Additionally, @@ -73,9 +107,35 @@ Additionally, ++++ +== Landscape Overview + +To run {eic} in a production ready and supported way, you'll need to setup multiple Kubernetes clusters and their nodes. +Those comprise a Kubernetes cluster where you'll install {rancher} to setup and manage the production and non-production clusters. +For this {rancher} cluster, we recommend using 3 Kubernetes nodes and a load balancer. + +The {eic} will need to run in a dedicated Kubernetes cluster. +For a HA setup of this cluster, we recommend using 3 Kubernetes Control Plane and 3 Kubernetes Worker nodes. + +To give you a graphical overview of what's needed, please take a look at the landscape overview: + +image::SAP-EIC-Architecture.svg[scaledwidth=99%,opts=inline,Embedded] + +* The dark blue rectangles represent Kubernetes clusters. +* The olive rectangles represent Kubernetes nodes that hold the roles of Control Plane and Worker combined. +* The green rectangles represent Kubernetes Control Plane nodes. +* The orange rectangles represent Kubernetes Worker nodes. + +We'll use this graphical overview through the guide to visualize what's the next step and what it's for. + + +Starting with the installation of the operating system of each machine/ Kubernetes node, we'll guide you through every step to take to get a fully set up Kubernetes landscape ready for the deployment of {eic}. + +++++ + +++++ + == Installing {slem} {slem_version} -There are several ways to install {slem} {slem_version}. For this best practice guide, we use the installation method via graphical installer. -Further installation routines can be found in the https://documentation.suse.com/sle-micro/5.4/html/SLE-Micro-all/book-deployment-slemicro.html[Deployment Guide for SUSE Linux Enterprise Micro 5.4]. +There are several ways to install {slem} {slem_version}. For this best practice guide, we use the installation method via graphical installer. But in cloud-native deployments it is highly recommended to use Infrastructure as Code technologies to fully automate the deployment and lifecycle processes. 
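As a starting point for such automation, {slem} supports first-boot configuration with Combustion. The sketch below only sets a root password and enables SSH; it assumes the script is provided as a file named `script` on a configuration medium labeled `combustion`, and the password hash is a placeholder you must replace. Treat it as a minimal example, not a complete deployment pipeline.

[source, bash]
----
#!/bin/bash
# combustion: network
# Minimal first-boot configuration for SLE Micro (Combustion)
set -euo pipefail

# Set a root password; generate the hash with: openssl passwd -6
echo 'root:$6$REPLACE_WITH_HASH' | chpasswd -e

# Allow remote administration from the first boot on
systemctl enable sshd
----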
include::SAP-EIC-SLEMicro.adoc[SLEMicro] @@ -83,7 +143,18 @@ include::SAP-EIC-SLEMicro.adoc[SLEMicro] ++++ -//TODO check dependencies of other doc files to adjust header hierarchy +== Installing {rancher} + +By now you should have installed the operating system on every Kubernetes node. +You're now ready to install a {rancher} cluster. +Taking a look again on the landscape overview, this means, we'll now cover how to setup the upper part of the given graphic: + +image::SAP-EIC-Architecture-Rancher.svg[scaledwidth=99%,opts=inline,Embedded] + +++++ + +++++ + include::SAPDI3-Rancher.adoc[Rancher] ++++ @@ -91,6 +162,19 @@ include::SAPDI3-Rancher.adoc[Rancher] ++++ == Installing RKE2 using {rancher} + +After installing the {rancher} cluster, we can now facilitate this one to create the {rke} clusters for {eic}. +SAP recommends to setup not only a production landscape, but to have QA / Dev systems for {eic}. Both can be set up the same way using {rancher}. +How to do this is covered in this chapter. +Taking a look again on the landscape overview, this means, we'll now cover how to setup the lower part of the given graphic: + +image::SAP-EIC-Architecture-RKE2.svg[scaledwidth=99%,opts=inline,Embedded] + +++++ + +++++ + + include::SAP-Rancher-RKE2-Installation.adoc[] ++++ @@ -111,7 +195,7 @@ NOTE: Keep in mind that the descriptions and instructions below might differ fro === Logging in to {rac} -{rancher} instances prior to version 2.9 cannot integrate the {rac}. Therefore, you need to use the console and Helm. +To access the {rac} you need to login. Therefore, you can use the console and Helm client. The easiest way to do so is to use the built-in shell in {rancher}. To access it, navigate to your cluster and click *Kubectl Shell* as shown below: image::EIC-Rancher-Kubectl-Button.png[title=Rancher Shell Access,scaledwidth=99%] @@ -123,54 +207,11 @@ image::EIC-Rancher-Kubectl-Shell.png[title=Rancher Shell Overview,scaledwidth=99 You must log in to {rac}. This can be done as follows: +[source, bash] ---- $ helm registry login dp.apps.rancher.io/charts -u -p ---- - -[#imagePullSecret] -=== Creating an imagePullSecret - -To make the resources available for deployment, you need to create an imagePullSecret. -In this guide we use the name _application-collection_ for it. - -==== Creating an imagePullSecret using kubectl - -Using `kubectl` to create the imagePullSecret is quite easy. -Get your user name and your access token for the {rac}. -Then run: - ----- -$ kubectl create secret docker-registry application-collection --docker-server=dp.apps.rancher.io --docker-username= --docker-password= ----- - -==== Creating an imagePullSecret using {rancher} - -You can also create an imagePullSecret using {rancher}. -Therefore, open {rancher} and enter your cluster. - -Navigate to *Storage* -> *Secrets* as shown below: - -image::EIC-Secrets-Menu.png[title=Secrets Menu,scaledwidth=99%] - -++++ - -++++ - -Click *Create* in the top right corner. - -image::EIC-Secrets-Overview.png[title=Secrets Overview,scaledwidth=99%] - -A window will appear asking you to select the Secret type. Select *Registry* as shown here: - -image::EIC-Secrets-Types.png[title=Secrets Type Selection,scaledwidth=99%] - - -Enter a name such as _application-collection_ for the Secret. In the text box *Registry Domain Name*, enter _dp.apps.rancher.io_. -Enter your user name and password and click the *Create* button at the bottom right. 
- -image::EIC-Secret-Create.png[title=Secrets Creation Step,scaledwidth=99%] - ++++ ++++ @@ -179,7 +220,7 @@ image::EIC-Secret-Create.png[title=Secrets Creation Step,scaledwidth=99%] This chapter is intended to guide you through installing and configuring {metallb} on your Kubernetes cluster used for {eic}. -include::SAP-EIC-Metallb.adoc[Metallb] +include::SAP-EIC-Metallb.adoc[Metallb, leveloffset=2] ++++ ++++ @@ -196,7 +237,7 @@ For more information about persistence in {redis}, see https://redis.io/docs/management/persistence/. -include::SAP-EIC-Redis.adoc[] +include::SAP-EIC-Redis.adoc[leveloffset=2] ++++ @@ -209,7 +250,7 @@ include::SAP-EIC-Redis.adoc[] Before deploying {pg}, ensure that the requirements described at https://me.sap.com/notes/3247839 are met. -include::SAP-EIC-PostgreSQL.adoc[] +include::SAP-EIC-PostgreSQL.adoc[leveloffset=2] ++++ @@ -229,9 +270,16 @@ to install {eic} in your prepared environments. [#Appendix] == Appendix +include::SAP-EIC-ImagePullSecrets.adoc[leveloffset=+2] +++++ + +++++ +include::SAP-EIC-LoginRegistryApplicationCollection.adoc[leveloffset=+2] + +[#selfSignedCertificates] === Using self-signed certificates -In this chapter we will explain how to create self-signed certificates and how to make them available within Kubernetes. +In this chapter we will explain how to create self-signed certificates and how to make them available within Kubernetes. We will describe two possible solutions to do this. You can create everything on the operation system layer or you also can use cert-manager in your downstream cluster. ==== Creating self-signed certificates @@ -239,6 +287,8 @@ WARNING: We strongly advise against using self-signed certificates in production The first step is to create a certificate authority (hereinafter referred to as CA) with a key and certificate. The following excerpt provides an example of how to create one with a passphrase of your choice: + +[source, bash] ---- $ openssl req -x509 -sha256 -days 1825 -newkey rsa:2048 -keyout rootCA.key -out rootCA.crt -passout pass: -subj "/C=DE/ST=BW/L=Nuremberg/O=SUSE" ---- @@ -246,13 +296,16 @@ $ openssl req -x509 -sha256 -days 1825 -newkey rsa:2048 -keyout rootCA.key -out This will give you the files `rootCA.key` and `rootCA.crt`. The server certificate requires a certificate signing request (hereinafter referred to as CSR). The following excerpt shows how to create such a CSR: + +[source, bash] ---- $ openssl req -newkey rsa:2048 -keyout domain.key -out domain.csr -passout pass: -subj "/C=DE/ST=BW/L=Nuremberg/O=SUSE" ---- Before you can sign the CSR, you need to add the DNS names of your Kuberntes Services to the CSR. Therefore, create a file with the content below and replace the ** and ** with the name of your Kubernetes service and the namespace in which it is placed: - + +[source, bash] ---- authorityKeyIdentifier=keyid,issuer basicConstraints=CA:FALSE @@ -264,6 +317,7 @@ DNS.2 = ..svc.cluster.local You can now use the previously created files _rootCA.key_ and _rootCA.crt_ with the extension file to sign the CSR. The example below shows how to do that by passing the extension file (here called _domain.ext_): +[source, bash] ---- $ openssl x509 -req -CA rootCA.crt -CAkey rootCA.key -in domain.csr -out server.pem -days 365 -CAcreateserial -extfile domain.ext -passin pass: ---- @@ -273,6 +327,7 @@ This creates a file called _server.pem_ which is your certificate to be used for Your _domain.key_ is still encrypted at this point, but the application requires an unencrypted server key. 
To decrypt, run the given command which will create the _server.key_. +[source, bash] ---- $ openssl rsa -passin pass: -in domain.key -out server.key ---- @@ -280,6 +335,7 @@ $ openssl rsa -passin pass: -in domain.key -out server.key Some applications (like Redis) require a full certificate chain to operate. To get a full certificate chain, link the generated file _server.pem_ with the file _rootCA.crt_ as follows: +[source, bash] ---- $ cat server.pem rootCA.crt > chained.pem ---- @@ -291,13 +347,89 @@ You should then have the files _server.pem_, _server.key_ and _chained.pem_ that To use certificate files in Kubernetes, you need to save them as so-called *Secrets*. For an example of uploading your certificates to Kubernetes, see the following excerpt: - + +[source, bash] ---- $ kubectl -n create secret generic --from-file=./root.pem --from-file=./server.pem --from-file=./server.key ---- -NOTE: Most applications are expecting to have the secret to be used in the same namespace as the application. +NOTE: All applications are expecting to have the secret to be used in the same namespace as the application. + +==== Using cert-manager +cert-manager needs to be available in your Downstream Cluster. To install cert-manager in your downstream cluster you can use the same installation steps which are described in the Rancher Prime installation. +First we need to create a selfsigned-issuer.yaml file: + +[source,yaml] +---- +apiVersion: cert-manager.io/v1 +kind: ClusterIssuer +metadata: + name: selfsigned-issuer +spec: + selfSigned: {} +---- + +Then we create the a Certificate Ressource for the CA calles my-ca-cert.yaml: +[source,yaml] +---- +apiVersion: cert-manager.io/v1 +kind: Certificate +metadata: + name: my-ca-cert + namespace: cert-manager +spec: + isCA: true + commonName: .cluster.local + secretName: my-ca-secret + issuerRef: + name: selfsigned-issuer + kind: ClusterIssuer + dnsNames: + - ".cluster.local" + - "*..cluster.local" + +---- +For creating a ClusterIssuer using the Generated CA we create the my-ca-issuer.yaml file +[source,yaml] +---- +apiVersion: cert-manager.io/v1 +kind: ClusterIssuer +metadata: + name: my-ca-issuer +spec: + ca: + secretName: my-ca-secret +---- +The last ressource which we need to create is the certificate itself. This certificate is signed by our created CA. You can name the yaml file application-name-certificate.yaml +[source,yaml] +---- +kind: Certificate +metadata: + name: + namespace: // need to be created manually. +spec: + dnsNames: + - .cluster.local + issuerRef: + group: cert-manager.io + kind: ClusterIssuer + name: my-ca-issuer + secretName: + usages: + - digital signature + - key encipherment +---- + +Apply the yaml file to your kubernetes cluster. +[source, bash] +---- +$ kubectl apply -f selfsigned-issuer.yaml +$ kubectl apply -f my-ca-cert.yaml +$ kubectl apply -f my-ca-issuer.yaml +$ kubectl apply -f application-name-certificate.yaml +---- +When you deploy your applications via Helm Charts you can use the generated certificate. In the Kubernetes Secret Certificate are 3 files stored. The tls.crt, tls.key and ca.crt which you cann use in the values.yaml file of your application. ++++ diff --git a/adoc/SAP-EIC-Metallb.adoc b/adoc/SAP-EIC-Metallb.adoc index 886da0d7..91563b6a 100644 --- a/adoc/SAP-EIC-Metallb.adoc +++ b/adoc/SAP-EIC-Metallb.adoc @@ -1,39 +1,60 @@ -==== Installing and configuring {metallb} +== Installation and Configuration of {metallb} There are multiple ways to install the {metallb} software. 
In this guide, we will cover how to install {metallb} using `kubectl` or Helm.
A complete overview and more details about {metallb} can be found on the link:https://metallb.universe.tf/[official website for {metallb}]

-===== Prerequisites
+=== Pre-requisites

Before starting the installation, ensure that all requirements are met.
In particular, you should pay attention to network addon compatibility.
If you are trying to run {metallb} on a cloud platform, you should also look at the cloud compatibility page and make sure your cloud platform works with {metallb} (note that most cloud platforms do *not*).

There are several ways to deploy {metallb}. In this guide, we will describe how to use the {rac} to deploy {metallb}.
-Make sure to have a range of IP addresses available for configuring {metallb}.
+Please make sure to have one IP address available for configuring {metallb}.

-===== Preparation
+Before you can deploy {metallb} from {rac}, you need to create the namespace and an ImagePullSecret.
+To create the related namespace, run:
+[source, bash]
+----
+$ kubectl create namespace metallb
+----

-Ensure that the associated kernel modules are loaded on your Kubernetes worker nodes as described in xref:SAP-EIC-SLEMicro#metal-slem[].
+[#metalIPS]
+Instructions on how to create the *imagePullSecret* can be found in xref:SAP-EIC-ImagePullSecrets.adoc#imagePullSecret[]

-Make sure you enabled `strictarp` as described in xref:SAP-Rancher-RKE2-Installation.adoc#metal-rke[]
+++++
+
+++++

+=== Installation of {metallb}

-===== Installing {metallb}
+[#metalLIR]
+Before you can install the application, you need to log in to the registry. You can find the instructions in xref:SAP-EIC-LoginRegistryApplicationCollection.adoc#LoginApplicationCollection[]

-To install {metallb} run the following lines in your terminal:
+To install {metallb}, first create a _values.yaml_ file with the following configuration:
+
+[source,yaml]
----
-$ helm pull oci://dp.apps.rancher.io/charts/metallb --untar
-$ helm install --namespace=metallb --set-json 'imagePullSecrets=[{"name":"application-collection"}]' --create-namespace metallb ./metallb
+imagePullSecrets:
+  - name: application-collection
+----
+
+Then install the {metallb} application:
+[source, bash]
+----
+$ helm install metallb oci://dp.apps.rancher.io/charts/metallb \
+-f values.yaml \
+--namespace=metallb \
+--version 0.14.7
----

++++

++++

-==== Configuring {metallb}
+== Configuration

{metallb} needs two configurations to function properly:

@@ -41,9 +62,9 @@ $ helm install --namespace=metallb --set-json 'imagePullSecrets=[{"name":"applic
- L2 advertisement configuration

Create the configuration files for the {metallb} IP address pool:
-
+[source,bash]
----
-# cat <<EOF > iprange.yaml
+$ cat <<EOF > iprange.yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
@@ -51,14 +72,14 @@ metadata:
  namespace: metallb
spec:
  addresses:
-  - 192.168.1.240-192.168.1.250
+  - 192.168.1.240/32
EOF
----

-Create the layer 2 network advertisement:
-
+and the layer 2 network advertisement:
+[source,bash]
----
-# cat <<EOF > l2advertisement.yaml
+$ cat <<EOF > l2advertisement.yaml
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
@@ -69,7 +90,8 @@
EOF
----

Apply the configuration:

+[source,bash]
----
-# kubectl apply -f iprange.yaml
-# kubectl apply -f l2advertisement.yaml
+$ kubectl apply -f iprange.yaml
+$ kubectl apply -f l2advertisement.yaml
----
diff --git a/adoc/SAP-EIC-PostgreSQL.adoc b/adoc/SAP-EIC-PostgreSQL.adoc
index 290987b7..3f0c322e 100644
--- a/adoc/SAP-EIC-PostgreSQL.adoc
+++ b/adoc/SAP-EIC-PostgreSQL.adoc
@@ -11,30 +11,35 @@
SUSE does *not* offer database support for {pg} on Kubernetes.
To get support, go to link:https://www.postgresql.org/support/[The PostgreSQL Global Development Group].

-==== Deploying {pg}
-Although {pg} is available for deployment using the {rancher} Apps, we recommend using the {rac}.
+IMPORTANT: In this guide, we will describe one variant of installing {pg}.
+There are other possible ways to set up {pg} that are not covered in this guide. It is also possible to install {pg} as a single instance on top of the operating system.
+We will focus on installing {pg} into a Kubernetes cluster, because we also need a {redis} database and we will put them together into one cluster.
+
+== Deploying {pg}
+Even though {pg} is available for deployment using the {rancher} Apps, we recommend using the {rac}.
The {pg} chart can be found at https://apps.rancher.io/applications/postgresql.

-==== Creating the Secret for {rac}
-First, create a namespace and the *imagePullSecret* for installing the {pg} database in the cluster.
+== Creating the Secret for {rac}
+First, we need to create a namespace and the *imagePullSecret* for installing the {pg} database into the cluster.
+[source, bash]
----
-kubectl create namespace postgresql
+$ kubectl create namespace postgresql
----

-How to create the *imagePullSecret* is described in section xref:SAP-EIC-Main.adoc#imagePullSecret[].
-
-===== Creating the Secret with certificates
-Next, create the Kubernetes Secret with the certificates. You can find an example of the procedure in xref:SAP-EIC-Main.adoc#Appendix[].
+[#pgIPS]
+How to create the *imagePullSecret* is described in xref:SAP-EIC-ImagePullSecrets.adoc#imagePullSecret[].

-===== Installing the application
+=== Creating the Secret with certificates
+Next, we need to create the Kubernetes Secret with the certificates. You can find an example of how to do this in xref:SAP-EIC-Main.adoc#selfSignedCertificates[].

-Log in to the {rac}. This can be done as follows:
-----
-$ helm registry login dp.apps.rancher.io/charts -u -p 
-----
+=== Installing the application
+[#pgLIR]
+Before you can install the application, you need to log in to the registry.
You can find the instructions in xref:SAP-EIC-LoginRegistryApplicationCollection.adoc#LoginApplicationCollection[]

-Create a configuration file _values.yaml_ that holds some configurations for the {pg} Helm chart.
-The configuration could look like this:
+Create a file *values.yaml* that holds some configuration for the {pg} Helm chart.
+The configuration may look like this:
+[source, yaml]
----
global:
  # -- Global override for container image registry pull secrets
@@ -71,14 +76,10 @@ persistentVolumeClaimRetentionPolicy:
  whenDeleted: Delete
----

-++++
-
-++++
-
To install the application run:
+[source, bash]
----
-$ helm pull oci://dp.apps.rancher.io/charts/postgres --untar
-$ helm install -f values.yaml --namespace=postgresql ./postgresql
+$ helm install postgresql oci://dp.apps.rancher.io/charts/postgres -f values.yaml --namespace=postgresql
----
diff --git a/adoc/SAP-EIC-Redis.adoc b/adoc/SAP-EIC-Redis.adoc
index 00862f9e..90b29649 100644
--- a/adoc/SAP-EIC-Redis.adoc
+++ b/adoc/SAP-EIC-Redis.adoc
@@ -12,7 +12,15 @@
SUSE does not offer database support for {redis}.
For support requests contact link:https://redis.com/[Redis Ltd.].

-==== Deploying Redis
+IMPORTANT: In this guide, we will describe one variant of installing {redis}, which is called Redis Cluster.
+There are other possible ways to set up {redis} that are not covered in this guide.
+Check whether you rather require
+link:https://redis.io/docs/management/sentinel/[Sentinel]
+instead of
+link:https://redis.io/docs/management/scaling/[Cluster].
+
+== Deploying Redis

Although {redis} is available for deployment using the {rancher} Apps, we recommend using the {rac}.
The {redis} chart can be found at https://apps.rancher.io/applications/redis .
@@ -22,13 +30,37 @@ The {redis} chart can be found at https://apps.rancher.io/applications/redis .

++++

-===== Deploying the chart
+=== Deploying the chart
+
+To deploy the chart, you'll need to create the related namespace and *imagePullSecret* first.
+To create the namespace, run:
+
+[source, bash]
+----
+$ kubectl create namespace redis
+----
+
+[#redisIPS]
+Instructions on how to create the *imagePullSecret* can be found in xref:SAP-EIC-ImagePullSecrets.adoc#imagePullSecret[]
+
+
+If you want to use self-signed certificates, you can find instructions on how to create them in xref:SAP-EIC-Main.adoc#selfSignedCertificates[]

-If you want to use self-signed certificates, you can find instructions how to create such certificates in xref:SAP-EIC-Main.adoc#Appendix[].
+[#redisLIR]
+Before you can install the application, you need to log in to the registry. You can find the instructions in xref:SAP-EIC-LoginRegistryApplicationCollection.adoc#LoginApplicationCollection[]

-Create a configuration file _values.yaml_ that holds some configurations for the {redis} Helm chart.
-The configuration could look like this:
+
+Create a file *values.yaml* that holds some configuration for the {redis} Helm chart.
+The configuration may look like this:
+
+[source, yaml]
----
+images:
+  redis:
+    # -- Image name to use for the Redis container
+    repository: dp.apps.rancher.io/containers/redis
+    # -- Image tag to use for the Redis container
+    tag: 7.2.5
storageClassName: "longhorn"
global:
  imagePullSecrets: ["application-collection"]
@@ -52,8 +84,10 @@ tls:
----

To install the application run:
-
+[source, bash]
----
-$ helm pull oci://dp.apps.rancher.io/charts/redis --untar
-$ helm install -f values.yaml --namespace=redis --create-namespace redis ./redis
+$ helm install redis oci://dp.apps.rancher.io/charts/redis \
+-f values.yaml \
+--namespace=redis \
+--version 7.2.5
----
diff --git a/adoc/SAP-EIC-SLEMicro.adoc b/adoc/SAP-EIC-SLEMicro.adoc
index 9a36c33f..744aaf50 100644
--- a/adoc/SAP-EIC-SLEMicro.adoc
+++ b/adoc/SAP-EIC-SLEMicro.adoc
@@ -1,137 +1,21 @@
[#SLEMicro]
-=== Preparation
+=== Installation

On each server in your environment for {eic} and {rancher}, install {slem} {slem_version} as the operating system.
-This chapter describes all recommended steps for the installation.
+The manual installation is described in the https://documentation.suse.com/sle-micro/{slem_version}/single-html/SLE-Micro-deployment/#cha-install[{slem} {slem_version} Deployment Guide].

-TIP: If you have already set up all machines and the operating system,
-skip this chapter.
-
-++++
-
-++++
-
-* Mount the {slem} into your virtual machine and start the VM.
-* When the boot menu appears select *Installation*.
-+
-image::EIC_SLE_Micro_setup_boot_menu.png[title=SLE Micro Boot Menu,scaledwidth=99%]
-
-++++
-
-++++
-
-* Select your *Language*, *Keyboard Layout* and accept the License Agreement.
-+
-image::EIC_SLE_Micro_setup_License_Agreement.png[title=SLE Micro Setup License Agreement,scaledwidth=99%]
-
-++++
-
-++++
-
-* It is recommended to use a static network configuration.
-During the installation setup, the first time to adjust this is when the registration page is displayed.
-In the upper right corner, click the button *Network Configuration ...*:
-
-image::EIC_SLE_Micro_setup_Registration.png[title=SLE Micro Setup Registration,scaledwidth=99%]
-
-++++
-
-++++
-
-* The *Network Settings* page is displayed. By default, the network adapter is configured to use DHCP.
-To change this, click the Button *Edit*.
-+
-image::EIC_SLE_Micro_setup_Network_Settings.png[title=SLE Micro Setup Network Settings,scaledwidth=99%]
-
-++++
-
-++++
-
-* On the *Network Card Setup* page, select *Statically Assigned IP Address* and fill in the fields *IP Address*, *Subnet Mask* and *Hostname*.
-+
-image::EIC_SLE_Micro_setup_Network_Card_Setup.png[title=SLE Micro Setup Network Card,scaledwidth=99%]
-
-++++
-
-++++
-
-* Back to the *Network Settings* go top the *Hostname/DNS* Section and set your *hostname*, *Name Server* and *Domain Search*.
-+
-image::EIC_SLE_Micro_setup_Network_Settings_DNS.png[title=SLE Micro Setup Hostname/DNS,scaledwidth=99%]
-
-++++
-
-++++
-
-* Then switch to the *Routing* Section and go to *Add*.
-+
-image::EIC_SLE_Micro_setup_Network_Settings_Routing.png[title=SLE Micro Setup Hostname/DNS,scaledwidth=99%]
-
-++++
-
-++++
-
-* Fill out the *Gateway* and set it as *Default Route*.
-+
-image::EIC_SLE_Micro_setup_Network_Settings_default_route.png[title=SLE Micro Setup Network Settings Default Route,scaledwidth=99%]
-
-++++
-
-++++
-
-* You will come back to the *Registration* page and here we will select *Skip Registration* and will do it later.
-+ -image::EIC_SLE_Micro_setup_skip_Registration.png[title=SLE Micro Setup Skip Registration,scaledwidth=99%] - -++++ - -++++ - -* In the next window you can change the NTP Server or keep the default. -+ -image::EIC_SLE_Micro_setup_NTP_Configuration.png[title=SLE Micro Setup NTP Configuration,scaledwidth=99%] - -++++ - -++++ - -* On the next page, enter your password for the *root* user. If you want, you can also import public SSH keys for the *root* user. -+ -image::EIC_SLE_Micro_setup_Authentication.png[title=SLE Micro Setup Authentication for the System Administrator "root",scaledwidth=99%] - -++++ - -++++ +At the end of the installation process in the summary windows you need to check if these Security Settings are configured: -* On the last page you see a summary of your *Installation Settings* where you can change the disk layout, software packages and more. Make sure that: + ** The firewall will be disabled. + ** The SSH service will be enabled. + ** SELinux will be set in permissive mode. - ** the firewall will be disabled. - ** the SSH service will be enabled. - ** `kdump` status is disabled. - ** SELinux is set to permissive mode. - -+ -image::EIC_SLE_Micro_setup_Installation_Settings01.png[title=SLE Micro Setup Installation Settings upper page,scaledwidth=99%] -image::EIC_SLE_Micro_setup_Installation_Settings02.png[title=SLE Micro Setup Installation Settings lower page,scaledwidth=99%] - -* To disable `kdump`, scroll down and click its label. This opens the *Kdump Start-Up* page. -On that page, make sure *Disable Kdump* is selected. - -* To set SELinux to permissive mode, scroll down and click *Security*. This opens the *Security* page. -On the right site there is the menu entry *Selected Module*. Open the drop-down box and select *Permissive*. - -* Click *Install* and confirm the installation. -+ -image::EIC_SLE_Micro_setup_Confirm_Installation.png[title=SLE Micro Setup Confirm Installation,scaledwidth=99%] - -* After the installation is finished, reboot the system. -+ -image::EIC_SLE_Micro_setup_reboot.png[title=SLE Micro Setup reboot,scaledwidth=99%] - -* You will see a login screen. Log in with your user name and password. +We need to set SELinux into permissive mode, because some components of the Edge Integration Cell violated SELinux rules and the application will not work. +TIP: If you have already set up all machines and the operating system, +skip this chapter. === Registering your system To get your system up-to-date, you need to register it with SUSE Manager, an RMT server or directly with the SCC Portal. @@ -141,12 +25,13 @@ Registering the system is possible from the command line using the `transactiona For information that goes beyond the scope of this section, refer to the inline documentation with *SUSEConnect --help*. To register {slem} with SUSE Customer Center, run `transactional-update register` as follows: +[source, bash] ---- -# transactional-update register -r REGISTRATION_CODE -e EMAIL_ADDRESS +$ transactional-update register -r REGISTRATION_CODE -e EMAIL_ADDRESS ---- To register with a local registration server, additionally specify the URL to the server: ---- -# transactional-update register -r REGISTRATION_CODE -e EMAIL_ADDRESS \ +$ transactional-update register -r REGISTRATION_CODE -e EMAIL_ADDRESS \ --url "https://suse_register.example.com/" ---- Do not forget to replace @@ -162,7 +47,7 @@ Find more information about registering your system in the {slem} {slem_version} Log in to the system. 
After your system is registered, you can update it with the `transactional-update` command.
----
-# transactional-update
+$ transactional-update
----

=== Disabling automatic reboot

Per default {slem} runs a timer for `transactional-update` in the background which could automatically reboot your system.
Disable it with the following command:
+[source, bash]
----
-# systemctl --now disable transactional-update.timer
+$ systemctl --now disable transactional-update.timer
----

++++

++++

-ifdef::metallb[]
-// Needed due to Github issue: https://github.com/rancher/rke2/issues/3710
-[#metal-slem]
-=== Preparing for {metallb}
+=== Preparation for {lh}
+For {lh}, we need to do some preparation steps. First, we need to install additional packages on all worker nodes. Then we will attach a second disk to the worker nodes, create a file system on top of it, and mount it to the {lh} default location. The size of the second disk depends on your use case.

-If you want to use {metallb} as a Kubernetes Load Balancer, you need to make sure that the kernel modules for ip_vs are loaded correctly at boot time.
-To do so, create and populate the file _/etc/modules-load.d/ip_vs.conf_ on each cluster node as follows:
+We need to install some packages that are required by {lh}, as well as Logical Volume Management for adding a file system for {lh}.
+[source, bash]
+----
+$ transactional-update pkg install lvm2 jq nfs-client cryptsetup open-iscsi
+----

-[source, shell]
+After the needed packages are installed, you need to reboot your machine.
+[source, bash]
----
-# cat <> /etc/modules-load.d/ip_vs.conf
-ip_vs
-ip_vs_rr
-ip_vs_wrr
-ip_vs_sh
-EOF
+$ reboot
----

-endif::[]
+Now we can enable the iscsid service.

-// To do so, create a file on each cluster node named:
+[source, bash]
+----
+$ systemctl enable iscsid --now
+----

-// ----
-// /etc/modules-load.d/ip_vs.conf
-// ----
+++++
+
+++++
+
+==== Creating the file system for {lh}
+Then we will create a new logical volume with Logical Volume Management.
+
+First, we create a new physical volume. In our case, the second disk is called _vdb_, and we use it as the {lh} volume.
+[source, bash]
+----
+$ pvcreate /dev/vdb
+----
+
+After the physical volume is created, we create a volume group called _vgdata_:
+[source, bash]
+----
+$ vgcreate vgdata /dev/vdb
+----
+
+Now we can create the logical volume, using 100% of the disk:
+[source, bash]
+----
+$ lvcreate -n lvlonghorn -l100%FREE vgdata
+----
+
+We will create the XFS file system on the logical volume. You do not need to create a partition on top of it.
+[source, bash]
+----
+$ mkfs.xfs /dev/vgdata/lvlonghorn
+----
+
+Before we can mount the device, we need to create the directory structure.
+[source, bash] +---- +$ mkdir -p /var/lib/longhorn +---- + +That the mount of the filesystem is persistent we add an entry into the fstab +[source, bash] +---- +$ echo -e "/dev/vgdata/lvlonghorn /var/lib/longhorn xfs defaults 0 0" >> /etc/fstab +---- -// Now, you need to add the entries for the related kernel modules: -// ---- -// ip_vs -// ip_vs_rr -// ip_vs_wrr -// ip_vs_sh -// ---- +Now we can mount the filesystem +[source, bash] +---- +$ mount -a +---- -// Reboot the nodes and check that the kernel modules are loaded successfully: -// ---- -// # lsmod | grep ip_vs -// ---- diff --git a/adoc/SAP-Rancher-RKE2-Installation.adoc b/adoc/SAP-Rancher-RKE2-Installation.adoc index a89eb300..ff7fe7a6 100644 --- a/adoc/SAP-Rancher-RKE2-Installation.adoc +++ b/adoc/SAP-Rancher-RKE2-Installation.adoc @@ -27,42 +27,9 @@ In the next step, make sure you select a Kubernetes version that is supported by ++++ -// Section is only needed if metallb shall be used -// Ref.: https://forums.rancher.com/t/kube-proxy-settings-in-custom-rke2-cluster/40107/2 -// Ref.: https://github.com/rancher/rke2/issues/3710 -ifdef::metallb[] -[#metal-rke] -If you do not plan to use {metallb}, continue xref:SAP-Rancher-RKE2-Installation.adoc#nmetallb[below]. -To prepare {rke} for running {metallb}, you need to enable `strictarp` mode for `ipvs` in `kube-proxy`. -To enable `strictarp` for clusters you want to roll out using {rancher}, you need to add the following lines to your configuration: - - -[source,yaml] ----- -machineGlobalConfig: - kube-proxy-arg: - - proxy-mode=ipvs - - ipvs-strict-arp=true ----- - -To do so, apply all configurations as usuall and click the *Edit as YAML* button in the creation step, as shown below: - -image::SAP-Rancher-Create-Config-YAML.png[title=Rancher create custom cluster yaml config,scaledwidth=99%] - -The excerpt must be saved under _spec.rkeConfig_. An example can be seen here: - -image::SAP-Rancher-Create-StrictARP.png[title=Rancher create Cluster with strict ARP, scaledwidth=99%] - -endif::[] - -++++ - -++++ - -[#nmetallb] -If you do not have any further requirements to Kubernetes, click *Create* at the very bottom. -In all other cases speak to your administrators before making any adjustements. +If you don't have any further requirements to Kubernetes, you can click the "Create" button at the very bottom. +In any other cases talk to your administrators before making adjustements. After you click *Create*, you should see a screen like this: @@ -79,4 +46,4 @@ If your {rancher} instance holds a self-signed certifcate, make sure to activate You can run the command on all nodes in parallel. You do not need to wait until a single node is down. When all machines are registered, you can see the cluster status at the top, changing from "updating" to "active". -At this point in time, your Kubernetes cluster is ready to be used. \ No newline at end of file +At this point in time, your Kubernetes cluster is ready to be used. diff --git a/adoc/SAPDI3-Longhorn.adoc b/adoc/SAPDI3-Longhorn.adoc index 55ca6a67..ab6b2e67 100644 --- a/adoc/SAPDI3-Longhorn.adoc +++ b/adoc/SAPDI3-Longhorn.adoc @@ -8,13 +8,16 @@ This chapter details the minimum requirements to install {lh} and describes thre For more details, visit https://longhorn.io/docs/{lh_version}/deploy/install/ === Requirements - +ifndef::slem[] Before {lh} can be installed on a Kubernetes cluster, all nodes must have the `open-iscsi` package installed, and the ISCSI daemon needs to be started. 
To do so, run: + +[source, bash] ---- # zypper in -y open-iscsi -# systemctl iscsid enable --now +# systemctl enable iscsid --now ---- +endif::[] To esure a node is prepared for {lh}, you can use the following script to check: ---- @@ -30,7 +33,22 @@ https://longhorn.io/docs/{lh_version}/deploy/install/install-with-rancher/ === Installing {lh} using Helm +ifdef::eic[] +To install Longhorn using Helm, run the following commands: +[source, bash] +---- +$ helm repo add rancher-v2.8-charts https://raw.githubusercontent.com/rancher/charts/release-v2.8 +$ helm repo update +$ helm upgrade --install longhorn-crd rancher-v2.8-charts/longhorn-crd \ +--namespace longhorn-system \ +--create-namespace +$ helm upgrade --install longhorn rancher-v2.8-charts/longhorn \ +--namespace longhorn-system +---- +endif::[] +ifndef::eic[] To install Longhorn using Helm, run the following commands: +[source, bash] ---- $ helm repo add longhorn https://charts.longhorn.io $ helm repo update @@ -38,12 +56,14 @@ $ helm install longhorn longhorn/longhorn --namespace longhorn-system --create-n ---- These commands will add the Longhorn Helm charts to the list of Helm repositories, update the Helm repository, and execute the installation of Longhorn. - +endif::[] +ifndef::slem[] === Installing {lh} using `kubectl` You can install {lh} using `kubectl` with the following command: [subs="attributes"] +[source, bash] ---- $ kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v{lh_version}/deploy/longhorn.yaml ---- @@ -54,6 +74,7 @@ $ kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v{lh_vers * Create a basic _auth_ file: + +[source, bash] ---- $ USER=; \ PASSWORD=; \ @@ -62,12 +83,14 @@ $ USER=; \ * Create a Secret from the file _auth_: + +[source, bash] ---- $ kubectl -n longhorn-system create secret generic basic-auth --from-file=auth ---- * Create the Ingress with basic authentication: + +[source, bash] ---- $ cat < longhorn-ingress.yaml apiVersion: networking.k8s.io/v1beta1 @@ -96,6 +119,6 @@ EOF $ kubectl -n longhorn-system apply -f longhorn-ingress.yaml ---- - +endif::[] For more details, visit https://longhorn.io/docs/{lh_version}/deploy/accessing-the-ui/longhorn-ingress/. \ No newline at end of file diff --git a/adoc/SAPDI3-RKE2-Install.adoc b/adoc/SAPDI3-RKE2-Install.adoc index 13b6e1f0..8f51eb33 100644 --- a/adoc/SAPDI3-RKE2-Install.adoc +++ b/adoc/SAPDI3-RKE2-Install.adoc @@ -11,6 +11,7 @@ :harvester: Harvester :k8s: Kubernetes :vmw: VMware +:rac: Rancher Application Collection = {di} 3 on Rancher Kubernetes Engine 2 @@ -36,6 +37,8 @@ One runs {rancher} Management server and the other runs the actual workload, whi include::SAPDI3-Requirements.adoc[Requirements] +== Installing {rancher} + include::SAPDI3-Rancher.adoc[Rancher] include::SAPDI3-Longhorn.adoc[Longhorn] diff --git a/adoc/SAPDI3-Rancher.adoc b/adoc/SAPDI3-Rancher.adoc index d6330338..2815c1f2 100644 --- a/adoc/SAPDI3-Rancher.adoc +++ b/adoc/SAPDI3-Rancher.adoc @@ -1,7 +1,5 @@ [#Rancher] -== Installing {rancher} - === Preparation To have a highly available {rancher} setup, you need a load balancer for your {rancher} nodes. @@ -14,12 +12,100 @@ If you do not plan to set up a highly available {rancher} cluster, you can skip Set up a virtual machine or a bare metal server with {sles} and the SUSE Linux Enterprise High Availability or use {sles4sap}. Install the `haproxy` package. +[source, bash] ---- -# zypper in haproxy +$ zypper in haproxy ---- Create the configuration for `haproxy`. 
-Find an example configuration file for `haproxy` below and adapt for the actual environment. +Find an an example configuration file for `haproxy` below and adapt for the actual environment. + +ifdef::eic[] +[source, bash] +---- +# cat < /etc/haproxy/haproxy.cfg +global + log /dev/log local0 + log /dev/log local1 notice + chroot /var/lib/haproxy + # stats socket /run/haproxy/admin.sock mode 660 level admin + stats timeout 30s + user haproxy + group haproxy + daemon + + # general hardlimit for the process of connections to handle, this is separate to backend/listen + # Added in 'global' AND 'defaults'!!! - global affects only system limits (ulimit/maxsock) and defaults affects only listen/backend-limits - hez + maxconn 400000 + + # Default SSL material locations + ca-base /etc/ssl/certs + crt-base /etc/ssl/private + + tune.ssl.default-dh-param 2048 + + # Default ciphers to use on SSL-enabled listening sockets. + # For more information, see ciphers(1SSL). This list is from: + # https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/ + ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5: !DSS + ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets + +defaults + mode tcp + log global + option tcplog + option redispatch + option tcpka + option dontlognull + retries 2 + timeout connect 5s + timeout client 5s + timeout server 5s + timeout tunnel 86400s + maxconn 400000 + +listen stats + bind *:9000 + mode http + stats hide-version + stats uri /stats + +listen rancher_apiserver + bind my_lb_address:6443 + option httpchk GET /healthz + http-check expect status 401 + server mynode1 mynode1.domain.local:6443 check check-ssl verify none + server mynode2 mynode2.domain.local:6443 check check-ssl verify none + server mynode3 mynode3.domain.local:6443 check check-ssl verify none +listen rancher_register + bind my_lb_address:9345 + option httpchk GET /ping + http-check expect status 200 + server mynode1 mynode1.domain.local:9345 check check-ssl verify none + server mynode2 mynode2.domain.local:9345 check check-ssl verify none + server mynode3 mynode3.domain.local:9345 check check-ssl verify none + +listen rancher_ingress80 + bind my_lb_address:80 + option httpchk GET / + http-check expect status 404 + server mynode1 mynode1.domain.local:80 check + server mynode2 mynode2.domain.local:80 check + server mynode3 mynode3.domain.local:80 check + +listen rancher_ingress443 + bind my_lb_address:443 + option httpchk GET / + http-check expect status 404 + server mynode1 mynode1.domain.local:443 check check-ssl verify none + server mynode2 mynode2.domain.local:443 check check-ssl verify none + server mynode3 mynode3.domain.local:443 check check-ssl verify none +EOF +---- +endif::[] + +ifndef::eic[] +[source, bash] ---- # cat < /etc/haproxy/haproxy.cfg global @@ -79,34 +165,37 @@ backend rke2serverbackend server mynode1 192.168.122.20:9345 check EOF ---- - +endif::[] Check the configuration file: +[source, bash] ---- -# haproxy -f /path/to/your/haproxy.conf -c +$ haproxy -f /path/to/your/haproxy.conf -c ---- Enable and start the `haproxy` load balancer: ---- -# systemctl enable haproxy -# systemctl start haproxy +$ systemctl enable haproxy +$ systemctl start haproxy ---- Do not forget to restart or reload `haproxy` if any changes are made to the haproxy configuration file. 
-
==== Installing RKE2

To install RKE2, the script provided at https://get.rke2.io can be used as follows:
+[source, bash]
----
-# curl -sfL https://get.rke2.io | sh -
+$ curl -sfL https://get.rke2.io | sh -
----

For HA setups, it is necessary to create RKE2 cluster configuration files in advance.
On the first master node:
+[source, bash]
----
-# mkdir -p /etc/rancher/rke2
-# cat <<EOF > /etc/rancher/rke2/config.yaml
+$ mkdir -p /etc/rancher/rke2
+$ cat <<EOF > /etc/rancher/rke2/config.yaml
token: 'your cluster token'
+system-default-registry: registry.rancher.com
tls-san:
- FQDN of fixed registration address on load balancer
- other hostname
- IP v4 address
EOF
----

+++++
+
+++++
+
Create configuration files for additional cluster nodes:
+[source, bash]
----
-# cat <<EOF > /etc/rancher/rke2/config.yaml
+$ cat <<EOF > /etc/rancher/rke2/config.yaml
server: https://"FQDN of registration address":9345
token: 'your cluster token'
+system-default-registry: registry.rancher.com
tls-san:
- FQDN of fixed registration address on load balancer
- other hostname
- IP v4 address
-
EOF
----

+IMPORTANT: You also need to take care of ETCD snapshots and to perform backups of your {rancher} instance. This is not covered in this document; you can find more information in our documentation.
+
+IMPORTANT: For security reasons, we generally recommend activating the CIS profile when installing RKE2. This is currently still being validated and will be included in the documentation at a later date.

Now enable and start the RKE2 components and run the following command on each cluster node:
----
-# systemctl enable rke2-server --now
+$ systemctl enable rke2-server --now
----

To verify the installation, run the following command:
+
+[source, bash]
----
-# /var/lib/rancher/rke2/bin/kubectl --kubeconfig /etc/rancher/rke2/rke2.yaml get nodes
+$ /var/lib/rancher/rke2/bin/kubectl --kubeconfig /etc/rancher/rke2/rke2.yaml get nodes
----

For convenience, the `kubectl` binary can be added to the *$PATH* and the given `kubeconfig` can be set via an environment variable:
+
+[source, bash]
----
-# export PATH=$PATH:/var/lib/rancher/rke2/bin/
-# export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
+$ export PATH=$PATH:/var/lib/rancher/rke2/bin/
+$ export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
----

++++

++++

=== Installing Helm

To install {rancher} and some of its required components, you need to use Helm.

-The easiest option to install Helm is to run:
+One way to install Helm is to run:

+[source, bash]
----
-# curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
+$ curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
----

==== Installing cert-manager

To install the `cert-manager` package, do the following:
----
-$ helm repo add jetstack https://charts.jetstack.io
-$ helm repo update
-$ helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --set installCRDs=true
+$ kubectl create namespace cert-manager
+----
+
+[#rancherIPS]
+How to create the *imagePullSecret* is described in xref:SAP-EIC-ImagePullSecrets.adoc#imagePullSecret[].
+
+
+===== Installing the application
+
+ifdef::eic[]
+[#rancherLIR]
+Before you can install the application, you need to log in to the registry.
You can find the instruction in xref:SAP-EIC-LoginRegistryApplicationCollection.adoc#LoginApplicationCollection[] +endif::[] + +ifndef::eic[] +You will need to login to the {rac}: + +[source, bash] +---- +$ helm registry login dp.apps.rancher.io/charts -u -p +---- +endif::[] + +[source, bash] +---- +$ helm install cert-manager oci://dp.apps.rancher.io/charts/cert-manager \ +--set crds.enabled=true \ +--set-json 'global.imagePullSecrets=[{"name":"application-collection"}]' \ +--namespace=cert-manager \ +--version 1.15.2 ---- === Installing {rancher} To install {rancher}, you need to add the related Helm repository. To achieve that, use the following command: + +[source, bash] ---- -$ helm repo add rancher https://charts.rancher.com/server-charts/prime +$ helm repo add rancher-prime https://charts.rancher.com/server-charts/prime ---- Next, create the `cattle-system` namespace in Kubernetes as follows: @@ -181,16 +312,20 @@ $ kubectl create namespace cattle-system ---- The Kubernetes cluster is now ready for the installation of {rancher}: + +[source, bash] ---- -$ helm install rancher rancher/rancher \ +$ helm install rancher rancher-prime/rancher \ --namespace cattle-system \ --set hostname= \ --set replicas=3 ---- During the rollout of {rancher}, you can monitor the progress using the following command: + +[source, bash] ---- -$ kubectl -n cattle-system rollout status deploy/rancher +$ kubectl -n cattle-system rollout status deploy/rancher-prime ---- When the deployment is done, you can access the {rancher} cluster at https://[]. diff --git a/adoc/SAPDI3-SUSE_Kubernetes_Stack.adoc b/adoc/SAPDI3-SUSE_Kubernetes_Stack.adoc index f24cebba..4f2a7185 100644 --- a/adoc/SAPDI3-SUSE_Kubernetes_Stack.adoc +++ b/adoc/SAPDI3-SUSE_Kubernetes_Stack.adoc @@ -7,9 +7,9 @@ :sles_version: 15 SP4 :sles4sap: SUSE Linux Enterprise Server for SAP Applications :lh: Longhorn -:rancher: SUSE Rancher +:rancher: Rancher Prime :harvester: Harvester - +:rac: Rancher Application Collection = {di} 3 on SUSE's Kubernetes Stack @@ -63,6 +63,8 @@ https://docs.harvesterhci.io/v1.0/rancher/rancher-integration#rancher--harvester include::SAPDI3-Harvester-Installation.adoc[Harvester] +== Installing {rancher} + include::SAPDI3-Rancher.adoc[Rancher] include::SAPDI3-Harvester-Rancher.adoc[Harvester-Rancher] @@ -84,6 +86,14 @@ include::SAPDI3-Install.adoc[DI-Install] ++++ +== Appendix + +include::SAP-EIC-ImagePullSecrets.adoc[leveloffset=+2] + +++++ + +++++ + :leveloffset: 0 // Standard SUSE Best Practices includes == Legal notice diff --git a/images/src/svg/SAP-EIC-Architecture-RKE2.svg b/images/src/svg/SAP-EIC-Architecture-RKE2.svg new file mode 100644 index 00000000..b88f1e28 --- /dev/null +++ b/images/src/svg/SAP-EIC-Architecture-RKE2.svg @@ -0,0 +1,159 @@ + + + + + + + + + + Rancher Cluster + + + + + + Control + Plane + + Worker + + + + Control + Plane + + Worker + + + + Control + Plane + + Worker + + + + Creates / Manages + Creates / Manages + + + + + Production + Cluster + + + + + + Control + Plane + + + + + Control + Plane + + + + Control + Plane + + + + + Worker + + Worker + + Worker + + + + + + QA / Dev + Cluster + + + + + + Control + Plane + + + + Control + Plane + + + + Control + Plane + + + + + Worker + + Worker + + Worker + + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/images/src/svg/SAP-EIC-Architecture-Rancher.svg b/images/src/svg/SAP-EIC-Architecture-Rancher.svg new file mode 100644 index 00000000..6b784d5d --- /dev/null +++ b/images/src/svg/SAP-EIC-Architecture-Rancher.svg @@ 
-0,0 +1,160 @@ + + + + + + + + + + Rancher Cluster + + + + + + Control + Plane + + Worker + + + + Control + Plane + + Worker + + + + Control + Plane + + Worker + + + + Creates / Manages + Creates / Manages + + + + + Production + Cluster + + + + + + Control + Plane + + + + + Control + Plane + + + + Control + Plane + + + + + Worker + + Worker + + Worker + + + + + + QA / Dev + Cluster + + + + + + Control + Plane + + + + Control + Plane + + + + Control + Plane + + + + + Worker + + Worker + + Worker + + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/images/src/svg/SAP-EIC-Architecture.svg b/images/src/svg/SAP-EIC-Architecture.svg new file mode 100644 index 00000000..e006c921 --- /dev/null +++ b/images/src/svg/SAP-EIC-Architecture.svg @@ -0,0 +1,157 @@ + + + + + + + + + + Rancher Cluster + + + + + + Control + Plane + + Worker + + + + Control + Plane + + Worker + + + + Control + Plane + + Worker + + + + Creates / Manages + Creates / Manages + + + + + Production + Cluster + + + + + + Control + Plane + + + + + Control + Plane + + + + Control + Plane + + + + + Worker + + Worker + + Worker + + + + + + QA / Dev + Cluster + + + + + + Control + Plane + + + + Control + Plane + + + + Control + Plane + + + + + Worker + + Worker + + Worker + + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file From 74a8712a674995b0f2b648ee610e2a843acc6199 Mon Sep 17 00:00:00 2001 From: Kevin Klinger Date: Tue, 10 Sep 2024 11:45:50 +0200 Subject: [PATCH 40/48] Fix merge issues --- adoc/SAP-EIC-Metallb.adoc | 8 ++++---- adoc/SAPDI3-Rancher.adoc | 2 +- 2 files changed, 5 insertions(+), 5 deletions(-) diff --git a/adoc/SAP-EIC-Metallb.adoc b/adoc/SAP-EIC-Metallb.adoc index 91563b6a..04c7f819 100644 --- a/adoc/SAP-EIC-Metallb.adoc +++ b/adoc/SAP-EIC-Metallb.adoc @@ -1,17 +1,17 @@ -== Installation and Configuration of {metallb} +== Installing and configuring of {metallb} There are multiple ways to install the {metallb} software. In this guide, we will cover how to install {metallb} using `kubectl` or Helm. A complete overview and more details about {metallb} can be found on the link:https://metallb.universe.tf/[official website for {metallb}] -=== Pre-requisites +=== Prerequisites Before starting the installation, ensure that all requirements are met. In particular, you should pay attention to network addon compatibility. If you are trying to run {metallb} on a cloud platform, you should also look at the cloud compatibility page and make sure your cloud platform works with {metallb} (note that most cloud platforms do *not*). There are several ways to deploy {metallb}. In this guide, we will describe how to use the {rac} to deploy {metallb}. -Please make sure to have one IP address available for configuring {metallb}. +Make sure to have one IP address available for configuring {metallb}. Before you can deploy {metallb} from {rac}, you need to create the namespace and an ImagePullSecret. To create the related namespace, run: @@ -54,7 +54,7 @@ $ helm install metallb oci://dp.apps.rancher.io/charts/metallb \ ++++ -== Configuration +== Configuring {metallb} {metallb} needs two configurations to function properly: diff --git a/adoc/SAPDI3-Rancher.adoc b/adoc/SAPDI3-Rancher.adoc index 2815c1f2..9f12394e 100644 --- a/adoc/SAPDI3-Rancher.adoc +++ b/adoc/SAPDI3-Rancher.adoc @@ -18,7 +18,7 @@ $ zypper in haproxy ---- Create the configuration for `haproxy`. -Find an an example configuration file for `haproxy` below and adapt for the actual environment. 
diff --git a/adoc/SAPDI3-Rancher.adoc b/adoc/SAPDI3-Rancher.adoc
index 2815c1f2..9f12394e 100644
--- a/adoc/SAPDI3-Rancher.adoc
+++ b/adoc/SAPDI3-Rancher.adoc
@@ -18,7 +18,7 @@
 $ zypper in haproxy
 ----
 
 Create the configuration for `haproxy`.
-Find an an example configuration file for `haproxy` below and adapt for the actual environment.
+Find an example configuration file for `haproxy` below and adapt it for the actual environment.
 
 ifdef::eic[]
 [source, bash]

From e6b793f8b40131ea0a8f3b1cb58760c0391ee576 Mon Sep 17 00:00:00 2001
From: Kevin Klinger
Date: Tue, 10 Sep 2024 15:16:49 +0200
Subject: [PATCH 41/48] Fix further merge issues

---
 adoc/SAP-EIC-Metallb.adoc | 12 ------------
 1 file changed, 12 deletions(-)

diff --git a/adoc/SAP-EIC-Metallb.adoc b/adoc/SAP-EIC-Metallb.adoc
index b9d0d58c..04c7f819 100644
--- a/adoc/SAP-EIC-Metallb.adoc
+++ b/adoc/SAP-EIC-Metallb.adoc
@@ -1,29 +1,17 @@
-<<<<<<< HEAD
-== Installation and Configuration of {metallb}
-=======
 == Installing and configuring {metallb}
->>>>>>> sap-eic
 
 There are multiple ways to install the {metallb} software.
 In this guide, we will cover how to install {metallb} using `kubectl` or Helm.
 A complete overview and more details about {metallb} can be found on the link:https://metallb.universe.tf/[official website for {metallb}]
 
-<<<<<<< HEAD
-=== Pre-requisites
-=======
 === Prerequisites
->>>>>>> sap-eic
 
 Before starting the installation, ensure that all requirements are met.
 In particular, you should pay attention to network addon compatibility.
 If you are trying to run {metallb} on a cloud platform, you should also look at the cloud compatibility page and make sure your cloud platform works with {metallb} (note that most cloud platforms do *not*).
 
 There are several ways to deploy {metallb}.
 In this guide, we will describe how to use the {rac} to deploy {metallb}.
 
-<<<<<<< HEAD
-Please make sure to have one IP address available for configuring {metallb}.
-=======
 Make sure to have one IP address available for configuring {metallb}.
->>>>>>> sap-eic
 
 Before you can deploy {metallb} from {rac}, you need to create the namespace and an ImagePullSecret.
 To create the related namespace, run:

From 65b91c0738bb15724214a4e7e8b6e4af92afa3e7 Mon Sep 17 00:00:00 2001
From: Kevin Klinger
Date: Tue, 10 Sep 2024 15:35:38 +0200
Subject: [PATCH 42/48] Reintroduce SLEM version

---
 adoc/SAP-EIC-Main-docinfo.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/adoc/SAP-EIC-Main-docinfo.xml b/adoc/SAP-EIC-Main-docinfo.xml
index 79f32c7e..94de316f 100644
--- a/adoc/SAP-EIC-Main-docinfo.xml
+++ b/adoc/SAP-EIC-Main-docinfo.xml
@@ -29,7 +29,7 @@
 Longhorn
 
-SUSE Linux Enterprise Micro
+SUSE Linux Enterprise Micro 5.4
 Rancher Kubernetes Engine 2
 Longhorn
 Rancher Prime

From 3dd7f94934cd97dfefaca6ef52178d5a84b3f14b Mon Sep 17 00:00:00 2001
From: Meike Chabowski
Date: Tue, 10 Sep 2024 20:06:10 +0200
Subject: [PATCH 43/48] Implemented fixes and edits from doc review

According to style guide and doc policies, fixed typos, wording, style,
grammar.
---
 ...IC-LoginRegistryApplicationCollection.adoc |  4 +-
 adoc/SAP-EIC-Main.adoc                        | 28 +++++------
 adoc/SAP-EIC-PostgreSQL.adoc                  | 25 +++++------
 adoc/SAP-EIC-Redis.adoc                       | 29 +++++-------
 adoc/SAP-EIC-SLEMicro.adoc                    | 45 ++++++++-----------
 adoc/SAP-Rancher-RKE2-Installation.adoc       |  4 +-
 6 files changed, 58 insertions(+), 77 deletions(-)

diff --git a/adoc/SAP-EIC-LoginRegistryApplicationCollection.adoc b/adoc/SAP-EIC-LoginRegistryApplicationCollection.adoc
index 3c899967..eacf1e89 100644
--- a/adoc/SAP-EIC-LoginRegistryApplicationCollection.adoc
+++ b/adoc/SAP-EIC-LoginRegistryApplicationCollection.adoc
@@ -1,9 +1,9 @@
 [#LoginApplicationCollection]
 = Login into the Application Collection Registry
 
-To install the Helm Charts from the _application-collection_ you need to login into the registry. This needs to be done with the Helm client.
+To install the Helm Charts from the _application-collection_ you need to log in to the registry. This needs to be done with the Helm client.
 
-To login to the {rac} run:
+To log in to the {rac}, run:
 [source, bash]
 ----
 $ helm registry login dp.apps.rancher.io/charts -u <user> -p <password>
 ----

diff --git a/adoc/SAP-EIC-Main.adoc b/adoc/SAP-EIC-Main.adoc
index 74968cf1..4e213192 100644
--- a/adoc/SAP-EIC-Main.adoc
+++ b/adoc/SAP-EIC-Main.adoc
@@ -109,14 +109,14 @@ Additionally,
 
 == Landscape Overview
 
-To run {eic} in a production ready and supported way, you'll need to setup multiple Kubernetes clusters and their nodes.
-Those comprise a Kubernetes cluster where you'll install {rancher} to setup and manage the production and non-production clusters.
-For this {rancher} cluster, we recommend using 3 Kubernetes nodes and a load balancer.
+To run {eic} in a production-ready and supported way, you need to set up multiple Kubernetes clusters and their nodes.
+Those comprise a Kubernetes cluster where you will install {rancher} to set up and manage the production and non-production clusters.
+For this {rancher} cluster, we recommend using three Kubernetes nodes and a load balancer.
 
 The {eic} will need to run in a dedicated Kubernetes cluster.
-For a HA setup of this cluster, we recommend using 3 Kubernetes Control Plane and 3 Kubernetes Worker nodes.
+For an HA setup of this cluster, we recommend using three Kubernetes control planes and three Kubernetes worker nodes.
 
-To give you a graphical overview of what's needed, please take a look at the landscape overview:
+For a graphical overview of what is needed, take a look at the landscape overview:
 
 image::SAP-EIC-Architecture.svg[scaledwidth=99%,opts=inline,Embedded]
 
@@ -125,17 +125,17 @@
 
 * The green rectangles represent Kubernetes Control Plane nodes.
 * The orange rectangles represent Kubernetes Worker nodes.
 
-We'll use this graphical overview through the guide to visualize what's the next step and what it's for.
+We will use this graphic overview in the guide to illustrate what the next step is and what it is for.
 
-Starting with the installation of the operating system of each machine/ Kubernetes node, we'll guide you through every step to take to get a fully set up Kubernetes landscape ready for the deployment of {eic}.
+Starting with installing the operating system of each machine or Kubernetes node, we will walk you through all the steps you need to take to get a fully set up Kubernetes landscape for deploying {eic}.
 
 ++++
 
 ++++
 
 == Installing {slem} {slem_version}
 
-There are several ways to install {slem} {slem_version}. For this best practice guide, we use the installation method via graphical installer. But in cloud-native deployments it is highly recommended to use Infrastructure as Code technologies to fully automate the deployment and lifecycle processes.
+There are several ways to install {slem} {slem_version}. For this best practice guide, we use the installation method via graphical installer. But in cloud-native deployments it is highly recommended to use Infrastructure-as-Code technologies to fully automate the deployment and lifecycle processes.
 
 include::SAP-EIC-SLEMicro.adoc[SLEMicro]
 
@@ -146,15 +146,11 @@
 
 == Installing {rancher}
 
 By now you should have installed the operating system on every Kubernetes node.
-You're now ready to install a {rancher} cluster.
-Taking a look again on the landscape overview, this means, we'll now cover how to setup the upper part of the given graphic:
+You are now ready to install a {rancher} cluster.
+Taking another look at the landscape overview, we will now cover how to set up the upper part of the given graphic:
 
 image::SAP-EIC-Architecture-Rancher.svg[scaledwidth=99%,opts=inline,Embedded]
 
-++++
-
-++++
-
 include::SAPDI3-Rancher.adoc[Rancher]
 
 ++++
 
 ++++
 
 == Installing RKE2 using {rancher}
 
 After installing the {rancher} cluster, we can now facilitate this one to create the {rke} clusters for {eic}.
-SAP recommends to setup not only a production landscape, but to have QA / Dev systems for {eic}. Both can be set up the same way using {rancher}.
+SAP recommends to set up not only a production landscape, but to have QA / Dev systems for {eic}. Both can be set up the same way using {rancher}.
 How to do this is covered in this chapter.
-Taking a look again on the landscape overview, this means, we'll now cover how to setup the lower part of the given graphic:
+Looking at the landscape overview again, we will now deal with setting up the lower part of the given graphic:
 
 image::SAP-EIC-Architecture-RKE2.svg[scaledwidth=99%,opts=inline,Embedded]

diff --git a/adoc/SAP-EIC-PostgreSQL.adoc b/adoc/SAP-EIC-PostgreSQL.adoc
index 3f0c322e..ba0e938a 100644
--- a/adoc/SAP-EIC-PostgreSQL.adoc
+++ b/adoc/SAP-EIC-PostgreSQL.adoc
@@ -1,10 +1,6 @@
 :pg: PostgreSQL
 :redis: Redis
 
-In the instructions below, we only describe one variant of installing {pg}.
-There are other possible ways to set up {pg} which are not covered in this guide. It is also possible
-to install {pg} as a single instance on the operating system.
-We will focus on installing {pg} in a Kubernetes cluster as we also need a {redis} database and we will clustering that together.
 
 IMPORTANT::
 SUSE does *not* offer database support for {pg} on Kubernetes.
 To get support, go to link:https://www.postgresql.org/support/[The PostgreSQL Global Development Group].
 
 IMPORTANT::
-In this guide we'll describe one variant of installing {pg}.
-There are other possible ways to setup {pg} which are not focussed in this guide. It is also possible to install {pg} as a single instance on top of our operation system.
-We will focus on installing {pg} into a kubernetes cluster, because we also need a {redis} database and we will put them together into one cluster.
+The instructions below describe only one variant of installing {pg}.
+There are other possible ways to set up {pg} which are not covered in this guide.
+It is also possible to install {pg} as a single instance on the operating system.
+We will focus on installing {pg} in a Kubernetes cluster as we also need a {redis} database and we will cluster them together.
 
 == Deploying {pg}
 Even though {pg} is available for deployment using the {rancher} Apps, we recommend to use the {rac}.
 The {pg} chart can be found at https://apps.rancher.io/applications/postgresql.
 
-== Create Secret for {rac}
-First we need to create a namespace and the *imagePullSecret* for installing the {pg} database into the cluster.
+== Creating Secret for {rac}
+First, create a namespace and the *imagePullSecret* for installing the {pg} database in the cluster.
 [source, bash]
 ----
 $ kubectl create namespace postgresql
 ----
 
 How to create the *imagePullSecret* is described in the Section xref:SAP-EIC-ImagePullSecrets.adoc#imagePullSecret[].
=== Create Secret with certificates
-Second we need to create the Kubernetes secret with the certificates. You will find an example how to do this in the xref:SAP-EIC-Main.adoc#selfSignedCertificates[].
+Second, create the Kubernetes secret with the certificates. You will find an example of how to do this in the xref:SAP-EIC-Main.adoc#selfSignedCertificates[].
 
 === Installing the application
 [#pgLIR]
-Before you can install the application, you need to login into the registry. You can find the instruction in xref:SAP-EIC-LoginRegistryApplicationCollection.adoc#LoginApplicationCollection[]
+Before you can install the application, you need to log in to the registry. You can find the instruction in xref:SAP-EIC-LoginRegistryApplicationCollection.adoc#LoginApplicationCollection[]
 
-Create a file *values.yaml* which holds some configuration for the {pg} Helm chart.
-The config may look like:
+Create a file *values.yaml* which holds some configurations for the {pg} Helm chart.
+The configuration may look like:
 [source, yaml]
 ----
 global:
@@ -76,7 +73,7 @@
 persistentVolumeClaimRetentionPolicy:
   whenDeleted: Delete
 ----
 
-To install the application run:
+To install the application, run:
 [source, bash]
 ----
 $ helm install metallb oci://dp.apps.rancher.io/charts/postgres -f values.yaml --namespace=postgres
 ----

diff --git a/adoc/SAP-EIC-Redis.adoc b/adoc/SAP-EIC-Redis.adoc
index 90b29649..72ef7298 100644
--- a/adoc/SAP-EIC-Redis.adoc
+++ b/adoc/SAP-EIC-Redis.adoc
@@ -2,10 +2,6 @@
 
 :redis: Redis
 
-The following instructions describe only one variant of installing {redis} which is called Redis Cluster.
-There are other possible ways to set up {redis} that are not covered in this guide.
-Check if you require link:https://redis.io/docs/management/sentinel/[{redis} Sentinel]
-instead of link:https://redis.io/docs/management/scaling/[{redis} Cluster].
 
 IMPORTANT::
 SUSE does not offer database support for {redis}.
 For support requests contact link:https://redis.com/[Redis Ltd.].
 
 IMPORTANT::
-In this guide we'll describe one variant of installing {redis} which is called Redis Cluster.
-There are other possible ways to setup {redis} which are not focussed in this guide.
-Please check out if you rather require
-link:https://redis.io/docs/management/sentinel/[Sentinel]
-instead of
-link:https://redis.io/docs/management/scaling/[Cluster]
+The following instructions describe only one variant of installing {redis} which is called Redis Cluster.
+There are other possible ways to set up {redis} that are not covered in this guide.
+Check if you require link:https://redis.io/docs/management/sentinel/[{redis} Sentinel]
+instead of link:https://redis.io/docs/management/scaling/[{redis} Cluster].
 
 == Deploying Redis
 
 Even though {redis} is available for deployment using the {rancher} Apps, we recommend to use the {rac}.
 The {redis} chart can be found at https://apps.rancher.io/applications/redis .
 
 ++++
 
-=== Deploy the chart
+=== Deploying the chart
 
-To deploy the chart you'll need to create the related namespace and *imagePullSecret* first.
+To deploy the chart, create the related namespace and *imagePullSecret* first.
 
 To create the namespace, run:
 
 [source, bash]
 ----
 $ kubectl create namespace redis
 ----
 
 Instructions how to create the *imagePullSecret* can be found in xref:SAP-EIC-ImagePullSecrets.adoc#imagePullSecret[]
 
-If you want to use self signed certificates, you can find instructions how to create such in xref:SAP-EIC-Main.adoc#selfSignedCertificates[]
+If you want to use self-signed certificates, you can find instructions on how to create them in xref:SAP-EIC-Main.adoc#selfSignedCertificates[]
 
 [#redisLIR]
-Before you can install the application, you need to login into the registry. You can find the instruction in xref:SAP-EIC-LoginRegistryApplicationCollection.adoc#LoginApplicationCollection[]
+Before you can install the application, you need to log in to the registry. You can find the instruction in xref:SAP-EIC-LoginRegistryApplicationCollection.adoc#LoginApplicationCollection[]
 
-Create a file *values.yaml* which holds some configuration for the {redis} Helm chart.
-The config may look like:
+Create a file *values.yaml* which holds some configurations for the {redis} Helm chart.
+The configuration may look like:
 
 [source, yaml]
 ----
@@ -83,7 +78,7 @@
 tls:
   caCertFilename: "root.pem"
 ----
 
-To install the application run:
+To install the application, run:
 [source, bash]
 ----
 $ helm install metallb oci://dp.apps.rancher.io/charts/redis \
 -f values.yaml \
 --namespace=redis
 ----

diff --git a/adoc/SAP-EIC-SLEMicro.adoc b/adoc/SAP-EIC-SLEMicro.adoc
index 744aaf50..b80bab95 100644
--- a/adoc/SAP-EIC-SLEMicro.adoc
+++ b/adoc/SAP-EIC-SLEMicro.adoc
@@ -11,15 +11,15 @@
 At the end of the installation process in the summary windows you need to check that:
 ** The SSH service will be enabled.
 ** SELinux will be set in permissive mode.
 
-We need to set SELinux into permissive mode, because some components of the Edge Integration Cell violated SELinux rules and the application will not work.
+Set SELinux into _permissive_ mode, because otherwise, some components of the Edge Integration Cell violate SELinux rules, and the application will not work.
 
-TIP: If you have already set up all machines and the operating system,
-skip this chapter.
+TIP: If you have already set up all machines and the operating system, skip this chapter.
+
 
 === Registering your system
 
-To get your system up-to-date, you need to register it with SUSE Manager, an RMT server or directly with the SCC Portal.
-We describe the process with the direct connection to SCC in the instructions below. For more information, see the {slem} documentation.
+To get your system up-to-date, you need to register it with SUSE Manager, an RMT server, or directly with the SCC Portal.
+Find the registration process with a direct connection to SCC described in the instructions below. For more information, see the {slem} documentation.
 
 Registering the system is possible from the command line using the `transactional-update register` command.
 For information that goes beyond the scope of this section, refer to the inline documentation with *SUSEConnect --help*.
 
@@ -60,76 +60,69 @@
 $ transactional-update
 
 === Disabling automatic reboot
 
-Per default {slem} runs a timer for `transactional-update` in the background which could automatically reboot your system.
 Disable it with the following command:
 
 [source, bash]
 ----
 $ systemctl --now disable transactional-update.timer
 ----
 
-++++
-
-++++
-
-=== Preparation for {lh}
-For {lh} we need to do some preparation steps. First we need to install addional packages on all worker nodes. Then we will attach a second disk to the worker nodes, create a filesystem ontop of it and mount it to the longhorn default location. The size of the second disk depends on your use case.
+=== Preparing for {lh}
+For {lh} you need to do some preparation steps. First, install some addional packages on all worker nodes. Then attach a second disk to the worker nodes, create a filesystem ontop of it and mount it to the longhorn default location. The size of the second disk depends on your use case.
 
-We need to install some packages as a requirement for longhorn and Logical Volume Management for adding a filesystem to longhorn.
+Install some packages as a requirement for longhorn and Logical Volume Management for adding a file system to longhorn.
 [source, bash]
 ----
 $ transactional-update pkg install lvm2 jq nfs-client cryptsetup open-iscsi
 ----
 
-After the needed packages are installed you need to reboot your machine.
+After the needed packages are installed, you need to reboot your machine.
 [source, bash]
 ----
 $ reboot
 ----
 
-Now we can you enable the iscsid server.
+Now you can enable the _iscsid_ server.
 [source, bash]
 ----
 $ systemctl enable iscsid --now
 ----
 
-++++
-
-++++
 
-==== Create filesystem for longhorn
-Then we will create with the Logical Volume Management a new logical volume.
+==== Creating file system for {lh}
+
+The next step is to create a new logical volume with the Logical Volume Management.
 
-First we want to create a new physical volume. In our case the second disk is called vdb and we use this as longhorn volume.
+First, you need to create a new physical volume. In our case the second disk is called _vdb_. Use this as longhorn volume.
 [source, bash]
 ----
 $ pvcreate /dev/vdb
 ----
 
-After the physical volume is created we create a volume group called vgdata
+After the physical volume is created, create a volume group called _vgdata_:
 [source, bash]
 ----
 $ vgcreate vgdata /dev/vdb
 ----
 
-Now we cann create the logical volume and we will use 100% of the disk.
+Now create the logical volume; use 100% of the disk.
 [source, bash]
 ----
 $ lvcreate -n lvlonghorn -l100%FREE vgdata
 ----
 
-We will create the XFS filesystem on the logical volume. You don't need to create a partion on top of it.
+On the logical volume, create the XFS file system. You do not need to create a partition on top of it.
 [source, bash]
 ----
 $ mkfs.xfs /dev/vgdata/lvlonghorn
 ----
 
-Before we can mount the device we need to create the directory structure.
+Before you can mount the device, you need to create the directory structure.
 [source, bash]
 ----
 $ mkdir -p /var/lib/longhorn
 ----
 
-That the mount of the filesystem is persistent we add an entry into the fstab
+Add an entry to _fstab_ to ensure that the mount of the file system is persistent:
 [source, bash]
 ----
 $ echo -e "/dev/vgdata/lvlonghorn /var/lib/longhorn xfs defaults 0 0" >> /etc/fstab
 ----
 
-Now we can mount the filesystem
+Finally, you can mount the file system as follows:
 [source, bash]
 ----
 $ mount -a
 ----
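After these steps it can help to double-check the storage preparation on each worker node. The following commands are a suggested sketch only; the device and mount point follow the steps above:

[source, bash]
----
# The logical volume should be mounted on the Longhorn default location
$ df -h /var/lib/longhorn

# lsblk shows the vgdata/lvlonghorn layout created above
$ lsblk /dev/vdb

# iscsid must be enabled and running so that Longhorn volumes can attach
$ systemctl status iscsid
----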
diff --git a/adoc/SAP-Rancher-RKE2-Installation.adoc b/adoc/SAP-Rancher-RKE2-Installation.adoc
index ff7fe7a6..95c4b9f0 100644
--- a/adoc/SAP-Rancher-RKE2-Installation.adoc
+++ b/adoc/SAP-Rancher-RKE2-Installation.adoc
@@ -28,10 +28,10 @@
 In the next step, make sure you select a Kubernetes version that is supported by SAP.
 
 ++++
 
-If you don't have any further requirements to Kubernetes, you can click the "Create" button at the very bottom.
+If you do not have any further requirements to Kubernetes, you can click the *Create* button at the very bottom.
 In any other cases talk to your administrators before making adjustements.
 
-After you click *Create*, you should see a screen like this:
+After you have clicked *Create*, you should see a screen like this:
 
 image::SAP-Rancher-Create-Register.png[title=Rancher create registration,scaledwidth=99%]

From 30757e5cef09934a5b62459e358af4c08d971b17 Mon Sep 17 00:00:00 2001
From: Meike Chabowski
Date: Wed, 11 Sep 2024 10:26:36 +0200
Subject: [PATCH 44/48] Implemented doc edits

Fixed typos, punctuation, wording, grammar, style according to style guide
and documentation policies.
---
 adoc/SAP-EIC-ImagePullSecrets.adoc |  2 +-
 adoc/SAP-EIC-Main.adoc             | 24 +++++++++++++-----------
 adoc/SAP-EIC-SLEMicro.adoc         |  6 +++---
 adoc/SAPDI3-Rancher.adoc           |  4 ++--
 4 files changed, 19 insertions(+), 17 deletions(-)

diff --git a/adoc/SAP-EIC-ImagePullSecrets.adoc b/adoc/SAP-EIC-ImagePullSecrets.adoc
index 8e73d62d..0a00bcd8 100644
--- a/adoc/SAP-EIC-ImagePullSecrets.adoc
+++ b/adoc/SAP-EIC-ImagePullSecrets.adoc
@@ -14,7 +14,7 @@ Then run:
 $ kubectl -n <namespace> create secret docker-registry application-collection --docker-server=dp.apps.rancher.io --docker-username=<user> --docker-password=<password>
 ----
 
-As secrets are namespace sensitive, you'll need to create this for every namespace needed.
+As secrets are namespace-sensitive, you need to create this for every namespace needed.
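Because the secret is needed in several namespaces, it can be convenient to script its creation. The loop below is a sketch only; the namespace list and the `RAC_USER`/`RAC_PASS` variables are assumptions you need to adapt:

[source, bash]
----
# Create the application-collection pull secret in every namespace that needs it
$ for ns in cert-manager metallb postgres redis; do
    kubectl -n "$ns" create secret docker-registry application-collection \
      --docker-server=dp.apps.rancher.io \
      --docker-username="$RAC_USER" \
      --docker-password="$RAC_PASS"
  done
----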
 ifdef::eic[]
 The related secret can then be used for the components:

diff --git a/adoc/SAP-EIC-Main.adoc b/adoc/SAP-EIC-Main.adoc
index 4e213192..ca73805c 100644
--- a/adoc/SAP-EIC-Main.adoc
+++ b/adoc/SAP-EIC-Main.adoc
@@ -43,8 +43,8 @@ It will guide you through the steps of:
 NOTE: This guide does not contain information about sizing your landscapes. Visit
 https://help.sap.com/docs/integration-suite?locale=en-US and search for the "Edge Integration Cell Sizing Guide".
 
-NOTE: In this guide we'll use $ and # for shell commands, where # means that the command needs to be executed as a root user and
-$ that the command can be run by any user.
+NOTE: In this guide, we use $ and # for shell commands, where # means that the command needs to be executed as a root user and
+$ means that the command can be run by any user.
 
@@ -85,7 +85,7 @@
 ** {lh}
 ** {sle_ha} *
 
-+++*+++ Only needed if you want to setup {rancher} in a high available setup.
++++*+++ Only needed if you want to set up {rancher} in a high availability setup.
 
 Additionally,
@@ -128,7 +128,7 @@
 
-Starting with installing the operating system of each machine or Kubernetes node, we will walk you through all the steps you need to take to get a fully set up Kubernetes landscape for deploying {eic}.
+Starting with installing the operating system of each machine or Kubernetes node, we will walk you through all the steps you need to take to get a fully set-up Kubernetes landscape for deploying {eic}.
 
@@ -159,7 +159,7 @@
 == Installing RKE2 using {rancher}
 
-After installing the {rancher} cluster, we can now facilitate this one to create the {rke} clusters for {eic}.
+After having installed the {rancher} cluster, we can now make use of this one to create the {rke} clusters for {eic}.
 SAP recommends to set up not only a production landscape, but to have QA / Dev systems for {eic}. Both can be set up the same way using {rancher}.
 How to do this is covered in this chapter.
 Looking at the landscape overview again, we will now deal with setting up the lower part of the given graphic:
 
@@ -352,8 +352,8 @@ $ kubectl -n <namespace> create secret generic <secret-name> --from-file=./root.pem
 ----
 
 NOTE: All applications are expecting to have the secret to be used in the same namespace as the application.
 
 ==== Using cert-manager
-cert-manager needs to be available in your Downstream Cluster. To install cert-manager in your downstream cluster you can use the same installation steps which are described in the Rancher Prime installation.
-First we need to create a selfsigned-issuer.yaml file:
+`cert-manager` needs to be available in your Downstream Cluster. To install `cert-manager` in your downstream cluster, you can use the same installation steps that are described in the Rancher Prime installation section.
+First, create a _selfsigned-issuer.yaml_ file:
 
 [source,yaml]
 ----
 apiVersion: cert-manager.io/v1
 kind: ClusterIssuer
 metadata:
   name: selfsigned-issuer
 spec:
   selfSigned: {}
 ----
 
-Then we create the a Certificate Ressource for the CA calles my-ca-cert.yaml:
+Then create a Certificate resource for the CA called _my-ca-cert.yaml_:
 [source,yaml]
 ----
 apiVersion: cert-manager.io/v1
 kind: Certificate
 metadata:
   name: my-ca-cert
   namespace: cert-manager
 spec:
   isCA: true
   commonName: my-ca
   secretName: my-ca-secret
   issuerRef:
     name: selfsigned-issuer
     kind: ClusterIssuer
   dnsNames:
   - "*..cluster.local"
 ----
 
-For creating a ClusterIssuer using the Generated CA we create the my-ca-issuer.yaml file
+For creating a _ClusterIssuer_ using the generated CA, create the _my-ca-issuer.yaml_ file:
 [source,yaml]
 ----
 apiVersion: cert-manager.io/v1
 kind: ClusterIssuer
 metadata:
   name: my-ca-issuer
 spec:
   ca:
     secretName: my-ca-secret
 ----
 
-The last ressource which we need to create is the certificate itself. This certificate is signed by our created CA. You can name the yaml file application-name-certificate.yaml
+The last resource you need to create is the certificate itself. This certificate is signed by your created CA. You can name the yaml file _application-name-certificate.yaml_.
 [source,yaml]
 ----
 kind: Certificate
 apiVersion: cert-manager.io/v1
 metadata:
   name: application-name
   namespace: <namespace>
 spec:
   secretName: application-name
   issuerRef:
     name: my-ca-issuer
     kind: ClusterIssuer
 ----
 
 [source, bash]
 ----
 $ kubectl apply -f selfsigned-issuer.yaml
 $ kubectl apply -f my-ca-cert.yaml
 $ kubectl apply -f my-ca-issuer.yaml
 $ kubectl apply -f application-name-certificate.yaml
 ----
 
-When you deploy your applications via Helm Charts you can use the generated certificate. In the Kubernetes Secret Certificate are 3 files stored. The tls.crt, tls.key and ca.crt which you cann use in the values.yaml file of your application.
+When you deploy your applications via Helm Charts, you can use the generated certificate.
+In the Kubernetes Secret Certificate, three files are stored. These are the files _tls.crt_, _tls.key_ and _ca.crt_, which you can use in the _values.yaml_ file of your application.
 
 ++++
 
 ++++
 
 :leveloffset: 0
+
 // Standard SUSE Best Practices includes
 == Legal notice
 include::common_sbp_legal_notice.adoc[]
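To confirm that `cert-manager` actually issued the certificates, you can inspect the resulting resources. This is a minimal sketch, assuming the resource names used in the examples above:

[source, bash]
----
# READY should turn "True" once the CA and the application certificate are issued
$ kubectl -n cert-manager get certificate my-ca-cert
$ kubectl -n <namespace> get certificate application-name

# Optionally inspect issuer and validity of the generated leaf certificate
$ kubectl -n <namespace> get secret application-name -o jsonpath='{.data.tls\.crt}' \
    | base64 -d | openssl x509 -noout -issuer -enddate
----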
diff --git a/adoc/SAP-EIC-SLEMicro.adoc b/adoc/SAP-EIC-SLEMicro.adoc
index b80bab95..1458f051 100644
--- a/adoc/SAP-EIC-SLEMicro.adoc
+++ b/adoc/SAP-EIC-SLEMicro.adoc
@@ -52,7 +52,7 @@
 
 === Disabling automatic reboot
 
-Per default {slem} runs a timer for `transactional-update` in the background which could automatically reboot your system.
+By default {slem} runs a timer for `transactional-update` in the background which could automatically reboot your system.
 Disable it with the following command:
 
 [source, bash]
 ----
 $ systemctl --now disable transactional-update.timer
 ----
 
 === Preparing for {lh}
-For {lh} you need to do some preparation steps. First, install some addional packages on all worker nodes. Then attach a second disk to the worker nodes, create a filesystem ontop of it and mount it to the longhorn default location. The size of the second disk depends on your use case.
+For {lh} you need to do some preparation steps. First, install some additional packages on all worker nodes. Then attach a second disk to the worker nodes, create a file system on top of it and mount it to the Longhorn default location. The size of the second disk depends on your use case.
 
-Install some packages as a requirement for longhorn and Logical Volume Management for adding a file system to longhorn.
+Install some packages as a requirement for Longhorn and Logical Volume Management for adding a file system to Longhorn.
 [source, bash]
 ----
 $ transactional-update pkg install lvm2 jq nfs-client cryptsetup open-iscsi
 ----

diff --git a/adoc/SAPDI3-Rancher.adoc b/adoc/SAPDI3-Rancher.adoc
index 9f12394e..b5a0ce7a 100644
--- a/adoc/SAPDI3-Rancher.adoc
+++ b/adoc/SAPDI3-Rancher.adoc
@@ -3,13 +3,13 @@
 
 === Preparation
 
 To have a highly available {rancher} setup, you need a load balancer for your {rancher} nodes.
-This section describes how to set up a custom load balancer using `haproxy`. If you already have a load balancer, you can make use of that to make {rancher} highly available.
+This section describes how to set up a custom load balancer using `haproxy`. If you already have a load balancer, you can use that to make {rancher} highly available.
 
 If you do not plan to set up a highly available {rancher} cluster, you can skip this section.
 
 ==== Installing an `haproxy`-based load balancer
 
-Set up a virtual machine or a bare metal server with {sles} and the SUSE Linux Enterprise High Availability or use {sles4sap}.
+Set up a virtual machine or a bare metal server with {sles} and SUSE Linux Enterprise High Availability or use {sles4sap}.
 Install the `haproxy` package.
 
 [source, bash]
 ----
 $ zypper in haproxy
 ----

From cfe565d56300f2ba7b012b80e3398a9ef67a9e9f Mon Sep 17 00:00:00 2001
From: lpinne
Date: Fri, 20 Sep 2024 10:46:13 +0200
Subject: =?UTF-8?q?SLES4SAP-hana-angi-perfopt-15.adoc=20SLES?=
 =?UTF-8?q?4SAP-hana-angi-scaleout-perfopt-15.adoc=20SLES4SAP-hana-scaleOu?=
 =?UTF-8?q?t-PerfOpt-15.adoc=20SLES4SAP-hana-scaleout-multitarget-perfopt-?=
 =?UTF-8?q?15.adoc=20SLES4SAP-hana-sr-guide-PerfOpt-15.adoc=20SLES4SAP-han?=
 =?UTF-8?q?a-sr-guide-costopt-15.adoc:=20requirements,=20dos=20and=20don?=
 =?UTF-8?q?=C2=B4ts?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 adoc/SLES4SAP-hana-angi-perfopt-15.adoc                 |  7 ++++++-
 adoc/SLES4SAP-hana-angi-scaleout-perfopt-15.adoc        |  8 ++++++++
 adoc/SLES4SAP-hana-scaleOut-PerfOpt-15.adoc             | 10 +++++++++-
 adoc/SLES4SAP-hana-scaleout-multitarget-perfopt-15.adoc |  8 ++++++++
 adoc/SLES4SAP-hana-sr-guide-PerfOpt-15.adoc             | 12 ++++++++----
 adoc/SLES4SAP-hana-sr-guide-costopt-15.adoc             |  9 +++++++--
 6 files changed, 46 insertions(+), 8 deletions(-)

diff --git a/adoc/SLES4SAP-hana-angi-perfopt-15.adoc b/adoc/SLES4SAP-hana-angi-perfopt-15.adoc
index 05c468fa..803cb8cf 100644
--- a/adoc/SLES4SAP-hana-angi-perfopt-15.adoc
+++ b/adoc/SLES4SAP-hana-angi-perfopt-15.adoc
@@ -390,6 +390,8 @@ Linux cluster.
 * {HANA} feature Secondary Time Travel is not supported.
 * The {HANA} Fast Restart feature on RAM-tmfps and {HANA} on persistent memory can be used,
 as long as they are transparent to Linux HA.
+* No manual actions must be performed on the {HANA} database while it is controlled
+by the Linux cluster. All administrative actions need to be aligned with the cluster.
// TODO PRIO3: align with manual pages SAPHanaSR(7) and susHanaSR.py(7) For the HA/DR provider hook scripts _susHanaSR.py_ and _susTkOver.py_, the following @@ -2844,7 +2846,8 @@ manually re-registering a site. rules mentioned in this setup guide are allowed. For public cloud refer to the cloud specific documentation. * Using {SAP} tools for attempting start/stop/takeover actions on a database -while the cluster is in charge of managing that database. +while the cluster is in charge of managing that database. Same for unregistering/disabling +system replication. IMPORTANT: As "migrating" or "moving" resources in crm-shell, HAWK or other tools would add client-prefer location rules, support is limited to maintenance @@ -3557,4 +3560,6 @@ include::common_gfdl1.2_i.adoc[] // // REVISION 0.1 2024/04 // - Initial version +// REVISION 0.2 2024/09 +// - updated requirements, dos and don´ts // diff --git a/adoc/SLES4SAP-hana-angi-scaleout-perfopt-15.adoc b/adoc/SLES4SAP-hana-angi-scaleout-perfopt-15.adoc index 9598ecb3..75262412 100644 --- a/adoc/SLES4SAP-hana-angi-scaleout-perfopt-15.adoc +++ b/adoc/SLES4SAP-hana-angi-scaleout-perfopt-15.adoc @@ -453,6 +453,8 @@ See also manual page susHanaSR.py(7). or it can be upgraded as described in respective documentation. Not allowed is mixing old and new cluster attributes or hook scripts within one cluster. +* No manual actions must be performed on the {HANA} database while it is controlled + by the Linux cluster. All administrative actions need to be aligned with the cluster. Find more details in the REQUIREMENTS section of manual pages SAPHanaSR-ScaleOut(7), ocf_suse_SAPHanaController(7), ocf_suse_SAPHanaFilesystem(7), @@ -2958,6 +2960,10 @@ In your project, *avoid* the following: * Adding location rules for the clone, multi-state or IP resource. Only location rules mentioned in this setup guide are allowed. +* Using {SAP} tools for attempting start/stop/takeover actions on a database + while the cluster is in charge of managing that database. Same for unregistering/disabling + system replication. + * As "migrating" or "moving" resources in _crm-shell_, HAWK or other tools would add client-location rules, these activities are completely forbidden! @@ -3242,4 +3248,6 @@ include::common_gfdl1.2_i.adoc[] // // REVISION 0.1 (2024-05-27) // - copied from classic scale-out multi-target +// REVISION 0.2 (2024-09-20) +// - updated requirements, dos and don´ts // diff --git a/adoc/SLES4SAP-hana-scaleOut-PerfOpt-15.adoc b/adoc/SLES4SAP-hana-scaleOut-PerfOpt-15.adoc index 9716afe7..381d0726 100644 --- a/adoc/SLES4SAP-hana-scaleOut-PerfOpt-15.adoc +++ b/adoc/SLES4SAP-hana-scaleOut-PerfOpt-15.adoc @@ -407,6 +407,8 @@ However, all nodes in one Linux cluster have to use the same style. or it can be upgraded as described in respective documentation. Not allowed is mixing old and new cluster attributes or hook scripts within one Linux cluster. +* No manual actions must be performed on the {HANA} database while it is controlled + by the Linux cluster. All administrative actions need to be aligned with the cluster. Find more details in the REQUIREMENTS section of manual pages SAPHanaSR-ScaleOut(7), ocf_suse_SAPHanaController(7), @@ -2483,6 +2485,10 @@ In your project, *avoid* the following: * Adding location rules for the clone, multi-state or IP resource. Only location rules mentioned in this setup guide are allowed. +* Using {SAP} tools for attempting start/stop/takeover actions on a database + while the cluster is in charge of managing that database. 
Same for unregistering/disabling + system replication. + * As "migrating" or "moving" resources in _crm-shell_, HAWK or other tools would add client-prefer location rules, these activities are completely *forbidden!*. @@ -2766,5 +2772,7 @@ include::common_gfdl1.2_i.adoc[] // REVISION 0.3 (2023-04-03) // - SAP native systemd support is default for HANA 2.0 SPS07 // REVISION 0.3a (2024-02-14) -// - HANA 2.0 SPS05 rev.059 Python 3 needed +// - HANA 2.0 SPS05 rev.059 Python 3 needed +// REVISION 0.3b (2024-09-20) +// - requirements, dos and don´ts // diff --git a/adoc/SLES4SAP-hana-scaleout-multitarget-perfopt-15.adoc b/adoc/SLES4SAP-hana-scaleout-multitarget-perfopt-15.adoc index b5adc806..d5f04339 100644 --- a/adoc/SLES4SAP-hana-scaleout-multitarget-perfopt-15.adoc +++ b/adoc/SLES4SAP-hana-scaleout-multitarget-perfopt-15.adoc @@ -445,6 +445,8 @@ See also manual page SAPHanaSrMultiTarget.py(7). or it can be upgraded as described in respective documentation. Not allowed is mixing old and new cluster attributes or hook scripts within one cluster. +* No manual actions must be performed on the {HANA} database while it is controlled + by the Linux cluster. All administrative actions need to be aligned with the cluster. Find more details in the REQUIREMENTS section of manual pages SAPHanaSR-ScaleOut(7), ocf_suse_SAPHanaController(7), @@ -2847,6 +2849,10 @@ In your project, *avoid* the following: * Adding location rules for the clone, multi-state or IP resource. Only location rules mentioned in this setup guide are allowed. +* Using {SAP} tools for attempting start/stop/takeover actions on a database + while the cluster is in charge of managing that database. Same for unregistering/disabling + system replication. + * As "migrating" or "moving" resources in _crm-shell_, HAWK or other tools would add client-location rules, these activities are completely *forbidden!*. @@ -3131,4 +3137,6 @@ include::common_gfdl1.2_i.adoc[] // - SAP native systemd support is default for HANA 2.0 SPS07 // REVISION 0.3a (2024-02-14) // - HANA 2.0 SPS05 rev.059 Python 3 needed +// REVISION 0.3b (2024-09-20) +// - requirements, dos and don´ts // diff --git a/adoc/SLES4SAP-hana-sr-guide-PerfOpt-15.adoc b/adoc/SLES4SAP-hana-sr-guide-PerfOpt-15.adoc index d711dc29..6b79547b 100644 --- a/adoc/SLES4SAP-hana-sr-guide-PerfOpt-15.adoc +++ b/adoc/SLES4SAP-hana-sr-guide-PerfOpt-15.adoc @@ -418,6 +418,8 @@ Linux cluster. * {HANA} feature Secondary Time Travel is not supported. * The {HANA} Fast Restart feature on RAM-tmfps and {HANA} on persistent memory can be used, as long as they are transparent to Linux HA. +* No manual actions must be performed on the {HANA} database while it is controlled +by the Linux cluster. All administrative actions need to be aligned with the cluster. // TODO PRIO3: align with manual pages SAPHanaSR(7) and SAPHanaSR.py(7) For the HA/DR provider hook scripts SAPHanaSR.py and susTkOver.py, the following @@ -434,7 +436,6 @@ crm_attribute. * The hook provider needs to be added to the {HANA} global configuration, in memory and on disk (in persistence). - For the HA/DR provider hook script susChkSrv.py, the following requirements apply: * {HANA} 2.0 SPS05 or later provides the HA/DR provider hook method srServiceStateChanged() @@ -2790,8 +2791,8 @@ In your project, you should: * Define STONITH before adding other resources to the cluster. * Do intensive testing. * Tune the timeouts of operations of SAPHana and SAPHanaTopology. 
-* Start with the parameter values PREFER_SITE_TAKEOVER=”true”, AUTOMATED_REGISTER=”false” and -DUPLICATE_PRIMARY_TIMEOUT=”7200”. +* Start with the parameter values PREFER_SITE_TAKEOVER=”true”, AUTOMATED_REGISTER=”false” +and DUPLICATE_PRIMARY_TIMEOUT=”7200”. * Always wait for pending cluster actions to finish before doing something. * Set up a test cluster for testing configuration changes and administrative procedure before applying them on the production cluster. @@ -2809,7 +2810,8 @@ manually re-registering a site. rules mentioned in this setup guide are allowed. For public cloud refer to the cloud specific documentation. * Using {SAP} tools for attempting start/stop/takeover actions on a database -while the cluster is in charge of managing that database. +while the cluster is in charge of managing that database. Same for unregistering/disabling +system replication. IMPORTANT: As "migrating" or "moving" resources in crm-shell, HAWK or other tools would add client-prefer location rules, support is limited to maintenance @@ -3537,4 +3539,6 @@ include::common_gfdl1.2_i.adoc[] // REVISION 1.6c 2024/03 // - updated references // - pointer to SAPHanaSR-angi +// REVISION 1.6d 2024/09 +// - updated requirements, dos and don´ts // diff --git a/adoc/SLES4SAP-hana-sr-guide-costopt-15.adoc b/adoc/SLES4SAP-hana-sr-guide-costopt-15.adoc index 7e837835..cfe41723 100644 --- a/adoc/SLES4SAP-hana-sr-guide-costopt-15.adoc +++ b/adoc/SLES4SAP-hana-sr-guide-costopt-15.adoc @@ -390,6 +390,8 @@ _{refsidadm}_ is not allowed to terminate the processes of the other tenant user scenario. Hence, only one replicating database pair and one non-replicating database in the same cluster as described in this guide are supported for the cost-optimized scenario. +* No manual actions must be performed on the {HANA} database while it is controlled +by the Linux cluster. All administrative actions need to be aligned with the cluster. The {SLES4SAP} versions are: @@ -3049,7 +3051,8 @@ manually re-registering a site. rules mentioned in this setup guide are allowed. For public cloud refer to the cloud specific documentation. * Using {SAP} tools for attempting start/stop/takeover actions on a database -while the cluster is in charge of managing that database. +while the cluster is in charge of managing that database. Same for unregistering/disabling +system replication. 
IMPORTANT: As "migrating" or "moving" resources in crm-shell, HAWK or other tools would add client-prefer location rules, support is limited to maintenance @@ -3978,5 +3981,7 @@ include::common_gfdl1.2_i.adoc[] // REVISION 1.6 2023/04 // - SAP native systemd support is default for HANA 2.0 SPS07 // REVISION 1.6b 2024/02 -// - HANA 2.0 SPS05 rev.059 Python 3 needed +// - HANA 2.0 SPS05 rev.059 Python 3 needed +// REVISION 1.6c 2024/09 +// - requirements, dos and don´ts // From 0bf9d581bbe9c7b11734d7000ca2ab63f4072b31 Mon Sep 17 00:00:00 2001 From: Meike Chabowski Date: Tue, 1 Oct 2024 16:17:56 +0200 Subject: [PATCH 46/48] Templates for SBP adocs with metadata Can be used as templates for adoc, docinfo.xml and DC file creation --- adoc/template-sbp-docinfo.xml | 188 ++++++++++++++++++++++++++++++++++ adoc/template-sbp.adoc | 87 ++++++++++++++++ template-DC-sbp-filename | 19 ++++ 3 files changed, 294 insertions(+) create mode 100644 adoc/template-sbp-docinfo.xml create mode 100644 adoc/template-sbp.adoc create mode 100644 template-DC-sbp-filename diff --git a/adoc/template-sbp-docinfo.xml b/adoc/template-sbp-docinfo.xml new file mode 100644 index 00000000..699ae15b --- /dev/null +++ b/adoc/template-sbp-docinfo.xml @@ -0,0 +1,188 @@ + + + https://github.com/SUSE/suse-best-practices/issues/new + SAP Edge Integration Cell on SUSE + + + + + + + + + +Best Practices + + + + + SAP + + + + + + Data Intelligence + Containerization + Installation + + + +SAP Edge Integration Cell on SUSE + + +How to make use of SUSE’s full stack offerings for container workloads for an installation of SAP's Edge Integration Cell. + + +Install SAP's EIC on SUSE’s container workloads stack + + + + SUSE Linux Enterprise Micro + Rancher Kubernetes Engine + Rancher Prime + Longhorn + + + +SUSE Linux Enterprise Micro 5.4 +Rancher Kubernetes Engine 2 +Longhorn +Rancher Prime +SAP Integration Suite + + + + + + Kevin + Klinger + + + SAP Solution Architect + SUSE + + + + + Dominik + Mathern + + + SAP Solution Architect + SUSE + + + + + Dr. Ulrich + Schairer + + + SAP Solution Architect + SUSE + + + + + + + + + + + + + + + + + + + + + yyyy-mm-dd + + text text text + + + + yyyy-mm-dd older + + text text text + + + + + + + + text text text text + + + + Disclaimer: + Documents published as part of the SUSE Best Practices series have been contributed voluntarily + by SUSE employees and third parties. They are meant to serve as examples of how particular + actions can be performed. They have been compiled with utmost attention to detail. + However, this does not guarantee complete accuracy. SUSE cannot verify that actions described + in these documents do what is claimed or whether actions described have unintended consequences. + SUSE LLC, its affiliates, the authors, and the translators may not be held liable for possible errors + or the consequences thereof. + + diff --git a/adoc/template-sbp.adoc b/adoc/template-sbp.adoc new file mode 100644 index 00000000..bba78bec --- /dev/null +++ b/adoc/template-sbp.adoc @@ -0,0 +1,87 @@ +:docinfo: + +// Defining article ID +// Article ID is needed for revhistory in docinfo.xml file +// As ID, use the SBP filename, or if too long, shorten filename wisely +[#art-sbp-filename] + + +// If you use variables, define them here. 
Examples below +:sles: SUSE Linux Enterprise Server +:sles4sap: SUSE Linux Enterprise Server for SAP Applications +:slm: SUSE Linux Micro + + += + +== + +text text text text text text text + +* bullet 1 +** sub-bullet 1 +* bullet 2 +** sub-bullet 2 +* bullet 3 +** sub-bullet 3 + + +NOTE: text text text + +// if you need to insert a page break, use: +++++ + +++++ + +=== + +text text text text + +// include a link with text, example: +https://help.sap.com/docs/integration-suite/sap-integration-suite/setting-up-and-managing-edge-integration-cell[Installation Guide at help.sap.com] + +//include content from another adoc document, example +include::SAP-EIC-SLEMicro.adoc[SLEMicro] + +// include image with title, example +image::EIC-Rancher-Kubectl-Shell.png[title=Rancher Shell Overview,scaledwidth=99%] + +// insert code block or screen or command +---- +$ helm registry login dp.apps.rancher.io/charts -u -p +---- + +// add a section ID +[#section-id-name] +== + +// mark command or package, example +`kubectl` + +// make text bold, example +*Storage* + +// make text italic, example +_Secret type_ + + +// add pagebreak at the end of your content +++++ + +++++ + +// At the very end of the document, you need to add GPDL and Legal Notice + + +:leveloffset: 0 +// Standard SUSE Best Practices includes +== Legal notice +include::common_sbp_legal_notice.adoc[] + +++++ + +++++ + +// Standard SUSE Best Practices includes +:leveloffset: 0 +include::common_gfdl1.2_i.adoc[] diff --git a/template-DC-sbp-filename b/template-DC-sbp-filename new file mode 100644 index 00000000..6bd427ab --- /dev/null +++ b/template-DC-sbp-filename @@ -0,0 +1,19 @@ + MAIN="sbp-filename.adoc" + +ADOC_TYPE="article" + +ADOC_POST="yes" + +# ADOC_ATTRIBUTES="--attribute docdate=2024-09-11" + +# stylesheets +STYLEROOT=/usr/share/xml/docbook/stylesheet/sbp +FALLBACK_STYLEROOT=/usr/share/xml/docbook/stylesheet/suse2022-ns + +XSLTPARAM="--stringparam publishing.series=sbp" + +#DRAFT=yes +ROLE="sbp" +#PROFROLE="sbp" + +DOCBOOK5_RNG_URI="http://docbook.org/xml/5.2/rng/docbookxi.rnc" From f1553d42cdf8e20ca49c6ef2996d929001343c95 Mon Sep 17 00:00:00 2001 From: Dominik_Mathern Date: Tue, 8 Oct 2024 10:26:17 +0200 Subject: [PATCH 47/48] Hotfix: RKE2 Installation Hotfix: We need to add the RKE2 Version, that we can install RKE2 through Rancher Prime private registry. --- adoc/SAPDI3-Rancher.adoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/adoc/SAPDI3-Rancher.adoc b/adoc/SAPDI3-Rancher.adoc index b5a0ce7a..369c81ff 100644 --- a/adoc/SAPDI3-Rancher.adoc +++ b/adoc/SAPDI3-Rancher.adoc @@ -185,7 +185,7 @@ Do not forget to restart or reload `haproxy` if any changes are made to the hapr To install RKE2, the script provided at https://get.rke2.io can be used as follows: [source, bash] ---- -$ curl -sfL https://get.rke2.io | sh - +$ curl -sfL https://get.rke2.io | INSTALL_RKE2_VERSION=v1.28.13-rke2r1 sh ---- For HA setups, it is necessary to create RKE2 cluster configuration files in advance. From 5bc272236d40b12c866f8618f7eb95f6f246d05e Mon Sep 17 00:00:00 2001 From: Meike Chabowski Date: Thu, 24 Oct 2024 15:16:46 +0200 Subject: [PATCH 48/48] Fixed two copy paste errors in code block Copy and paste errors caused two wrong code blocks. This is fixed now. 
--- adoc/SAP-EIC-PostgreSQL.adoc | 2 +- adoc/SAP-EIC-Redis.adoc | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/adoc/SAP-EIC-PostgreSQL.adoc b/adoc/SAP-EIC-PostgreSQL.adoc index ba0e938a..38d86d16 100644 --- a/adoc/SAP-EIC-PostgreSQL.adoc +++ b/adoc/SAP-EIC-PostgreSQL.adoc @@ -76,7 +76,7 @@ persistentVolumeClaimRetentionPolicy: To install the application, run: [source, bash] ---- -$ helm install metallb oci://dp.apps.rancher.io/charts/postgres -f values.yaml --namespace=postgres +$ helm install postgres oci://dp.apps.rancher.io/charts/postgres -f values.yaml --namespace=postgres ---- diff --git a/adoc/SAP-EIC-Redis.adoc b/adoc/SAP-EIC-Redis.adoc index 72ef7298..4626c9ea 100644 --- a/adoc/SAP-EIC-Redis.adoc +++ b/adoc/SAP-EIC-Redis.adoc @@ -81,7 +81,7 @@ tls: To install the application, run: [source, bash] ---- -$ helm install metallb oci://dp.apps.rancher.io/charts/redis \ +$ helm install redis oci://dp.apps.rancher.io/charts/redis \ -f values.yaml \ --namespace=redis --version
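With the release names corrected, a short smoke test can confirm that the charts are installed under the expected names. This is a suggested check only, assuming the namespaces used in this guide:

[source, bash]
----
# Each namespace should now list a release named after its application
$ helm list --namespace postgres
$ helm list --namespace redis

# The pods of both databases should reach the Running state
$ kubectl -n postgres get pods
$ kubectl -n redis get pods
----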