From a28277e37331d14b9df6f7c966c1683aaeb7c259 Mon Sep 17 00:00:00 2001 From: Frank Sundermeyer Date: Mon, 4 Nov 2024 10:55:14 +0100 Subject: [PATCH] Sbp rework metadata 04042024 (#456) * Updated metadata to fit new schema --------- Co-authored-by: Meike Chabowski --- DC-CaaSP3-DataHub2 | 2 +- ...curity_Hardening_Guide_for_SAP_HANA_SLES12 | 2 +- ...ng_Guide_for_SAP_HANA_SLES15_SP2_and_later | 2 +- DC-RKE-SAP-DI31 | 2 +- DC-RKE2-SAP-DI31 | 2 +- DC-SAP-EIC | 2 +- ...NetWeaver-7.50-SLE-15-Setup-Guide-AliCloud | 2 +- DC-SAP-S4HA10-setupguide-simplemount-sle15 | 2 +- DC-SAP-S4HA10-setupguide-sle15 | 2 +- DC-SAP-convergent-mediation-ha-setup-sle15 | 2 +- DC-SAP-nw740-sle15-setupguide | 2 +- DC-SAPDI-RKE-Harvester | 2 +- DC-SAPDI3-RKE2-Installation | 2 +- DC-SAPDI3-SUSE-VMWare-VSAN | 2 +- DC-SAP_HA740_SetupGuide_AWS | 2 +- DC-SAP_NW740_SLE12_SetupGuide | 2 +- DC-SAP_S4HA10_SetupGuide-SLE12 | 2 +- DC-SAP_S4HA10_SetupGuide-SLE15 | 2 +- DC-SBP-CaaSP4-SAP-Data-Hub-2 | 2 +- DC-SBP-CaaSP4-SAP-Data-Intelligence-3 | 2 +- DC-SBP-CloudLS-master | 2 +- DC-SBP-DI-30-CaaSP42-Install-Guide | 2 +- DC-SBP-Private-Registry | 2 +- DC-SBP-SAP-HANA-PerOpt-HA-Azure | 2 +- DC-SBP-SAP-MULTI-SID | 2 +- DC-SBP-SLES4SAP-HANAonKVM-SLES15SP2 | 2 +- DC-SBP-SLES4SAP-HANAonKVM-SLES15SP4 | 2 +- DC-SBP-SLES4SAP-sap-infra-monitoring | 2 +- DC-SLES-SAP-hana-scaleOut-PerfOpt-12-AWS | 2 +- DC-SLES4SAP-hana-angi-perfopt-15 | 2 +- DC-SLES4SAP-hana-angi-scaleout-perfopt-15 | 2 +- DC-SLES4SAP-hana-scaleOut-PerfOpt-12 | 2 +- DC-SLES4SAP-hana-scaleOut-PerfOpt-15 | 2 +- ...S4SAP-hana-scaleout-multitarget-perfopt-15 | 2 +- DC-SLES4SAP-hana-sr-guide-CostOpt-12 | 2 +- DC-SLES4SAP-hana-sr-guide-PerfOpt-12 | 2 +- DC-SLES4SAP-hana-sr-guide-PerfOpt-12-Alicloud | 2 +- DC-SLES4SAP-hana-sr-guide-PerfOpt-12_AWS | 2 +- DC-SLES4SAP-hana-sr-guide-PerfOpt-15 | 2 +- DC-SLES4SAP-hana-sr-guide-costopt-15 | 2 +- DC-SLES4SAP-hana-sr-guide-perfopt-15-aws | 2 +- DC-TRD-Linux-gs-wordpress-lamp-sles | 2 +- ...LES-SAP-HA-automation-quickstart-cloud-aws | 2 +- ...S-SAP-HA-automation-quickstart-cloud-azure | 3 +- ...LES-SAP-HA-automation-quickstart-cloud-gcp | 3 +- adoc/CaaSP40_DI3X_Install_Guide-docinfo.xml | 37 +- adoc/CaaSP40_DI3X_Install_Guide.adoc | 4 +- adoc/CaaSP4_DH2X_Install_Guide-docinfo.xml | 36 +- adoc/CaaSP4_DH2X_Install_Guide.adoc | 3 + adoc/CaaSP_DH2X_Install_Guide-docinfo.xml | 36 +- adoc/CaaSP_DH2X_Install_Guide.adoc | 5 +- adoc/CloudLS_master-docinfo.xml | 22 +- adoc/CloudLS_master.adoc | 4 + ...ning_Guide_for_SAP_HANA_SLES12-docinfo.xml | 23 +- ...y_Hardening_Guide_for_SAP_HANA_SLES12.adoc | 3 + ...ning_Guide_for_SAP_HANA_SLES15-docinfo.xml | 23 +- ...y_Hardening_Guide_for_SAP_HANA_SLES15.adoc | 3 + ..._SAP_HANA_SLES15_SP2_and_later-docinfo.xml | 29 +- ...ide_for_SAP_HANA_SLES15_SP2_and_later.adoc | 3 + adoc/Private-Registry-docinfo.xml | 35 +- adoc/Private-Registry.adoc | 3 + ...AP-DI-30-CaaSP42-Install-Guide-docinfo.xml | 35 +- adoc/SAP-DI-30-CaaSP42-Install-Guide.adoc | 3 + adoc/SAP-EIC-Main-docinfo.xml | 15 +- adoc/SAP-Multi-SID-docinfo.xml | 29 +- adoc/SAP-Multi-SID.adoc | 3 + ...50-SLE-15-Setup-Guide-AliCloud-docinfo.xml | 28 +- ...aver-7.50-SLE-15-Setup-Guide-AliCloud.adoc | 3 + ...S4HA10-setup-simplemount-sle15-docinfo.xml | 28 +- adoc/SAP-S4HA10-setup-simplemount-sle15.adoc | 3 + adoc/SAP-S4HA10-setupguide-sle15-docinfo.xml | 37 +- adoc/SAP-S4HA10-setupguide-sle15.adoc | 4 + ...rgent-mediation-ha-setup-sle15-docinfo.xml | 10 +- adoc/SAPDI3-RKE2-Install-docinfo.xml | 31 +- adoc/SAPDI3-RKE2-Install.adoc | 3 + 
adoc/SAPDI3-SUSE-VMWare-VSAN-docinfo.xml | 37 +- adoc/SAPDI3-SUSE-VMWare-VSAN.adoc | 3 + adoc/SAPDI3-SUSE_Kubernetes_Stack-docinfo.xml | 32 +- adoc/SAPDI3-SUSE_Kubernetes_Stack.adoc | 3 + adoc/SAPDI31-RKE-Instguide-docinfo.xml | 25 +- adoc/SAPDI31-RKE-Instguide.adoc | 3 + adoc/SAPDI31-RKE2-Instguide-docinfo.xml | 27 +- adoc/SAPDI31-RKE2-Instguide.adoc | 3 + adoc/SAP_HA740_SetupGuide_AWS-docinfo.xml | 25 +- adoc/SAP_HA740_SetupGuide_AWS.adoc | 4 + adoc/SAP_HA740_SetupGuide_SLE12-docinfo.xml | 38 +- adoc/SAP_HA740_SetupGuide_SLE12.adoc | 4 + adoc/SAP_S4HA10_SetupGuide_SLE12-docinfo.xml | 28 +- adoc/SAP_S4HA10_SetupGuide_SLE12.adoc | 3 + adoc/SLES4SAP-HANAonKVM-15SP2-docinfo.xml | 23 +- adoc/SLES4SAP-HANAonKVM-15SP2.adoc | 3 + adoc/SLES4SAP-HANAonKVM-15SP4-docinfo.xml | 26 +- adoc/SLES4SAP-HANAonKVM-15SP4.adoc | 3 + .../SLES4SAP-hana-angi-perfopt-15-docinfo.xml | 13 +- ...-hana-angi-scaleout-perfopt-15-docinfo.xml | 12 +- ...P-hana-scaleOut-PerfOpt-12-AWS-docinfo.xml | 28 +- ...SLES4SAP-hana-scaleOut-PerfOpt-12-AWS.adoc | 3 + ...S4SAP-hana-scaleOut-PerfOpt-12-docinfo.xml | 27 +- adoc/SLES4SAP-hana-scaleOut-PerfOpt-12.adoc | 4 + ...S4SAP-hana-scaleOut-PerfOpt-15-docinfo.xml | 33 +- adoc/SLES4SAP-hana-scaleOut-PerfOpt-15.adoc | 4 + ...caleout-multitarget-perfopt-15-docinfo.xml | 29 +- ...-hana-scaleout-multitarget-perfopt-15.adoc | 3 + ...S4SAP-hana-sr-guide-CostOpt-12-docinfo.xml | 25 +- adoc/SLES4SAP-hana-sr-guide-CostOpt-12.adoc | 3 + ...P-hana-sr-guide-PerfOpt-12-AWS-docinfo.xml | 23 +- ...SLES4SAP-hana-sr-guide-PerfOpt-12-AWS.adoc | 3 + ...a-sr-guide-PerfOpt-12-Alicloud-docinfo.xml | 30 +- ...SAP-hana-sr-guide-PerfOpt-12-Alicloud.adoc | 3 + ...S4SAP-hana-sr-guide-PerfOpt-12-docinfo.xml | 26 +- adoc/SLES4SAP-hana-sr-guide-PerfOpt-12.adoc | 5 + ...S4SAP-hana-sr-guide-PerfOpt-15-docinfo.xml | 28 +- adoc/SLES4SAP-hana-sr-guide-PerfOpt-15.adoc | 3 + ...S4SAP-hana-sr-guide-costopt-15-docinfo.xml | 28 +- adoc/SLES4SAP-hana-sr-guide-costopt-15.adoc | 3 + ...P-hana-sr-guide-perfopt-15-aws-docinfo.xml | 30 +- ...SLES4SAP-hana-sr-guide-perfopt-15-aws.adoc | 4 + .../SLES4SAP-sap-infra-monitoring-docinfo.xml | 28 +- adoc/SLES4SAP-sap-infra-monitoring.adoc | 3 + ...D-Linux-gs-wordpress-lamp-sles-docinfo.xml | 39 +- adoc/TRD-Linux-gs-wordpress-lamp-sles.adoc | 3 + ...HA-automation-quickstart-cloud-docinfo.xml | 43 +- ...ES-SAP-HA-automation-quickstart-cloud.adoc | 3 + adoc/sap-nw740-sle15-setupguide-docinfo.xml | 27 +- adoc/sap-nw740-sle15-setupguide.adoc | 3 + adoc/sap_hana_azure-main_document-docinfo.xml | 41 +- adoc/sap_hana_azure-main_document.adoc | 4 + .../SAP_S4HA10_SetupGuide-docinfo.xml | 0 xml/MAIN-SBP-AMD-EPYC-2-SLES15SP1.xml | 164 +- xml/MAIN-SBP-AMD-EPYC-3-SLES15SP2.xml | 62 +- xml/MAIN-SBP-AMD-EPYC-4-SLES15SP4.xml | 1508 ++++++++--------- xml/MAIN-SBP-AMD-EPYC-SLES12SP3.xml | 42 +- xml/MAIN-SBP-CSP-UpdateInfra.xml | 66 +- xml/MAIN-SBP-DRBD.xml | 166 +- xml/MAIN-SBP-GCC-10.xml | 124 +- xml/MAIN-SBP-GCC-11.xml | 259 +-- xml/MAIN-SBP-GCC-12.xml | 683 ++++---- xml/MAIN-SBP-HANAonKVM-SLES12SP2.xml | 77 +- xml/MAIN-SBP-KMP-Manual-SLE12SP2.xml | 64 +- xml/MAIN-SBP-KMP-Manual.xml | 55 +- xml/MAIN-SBP-Migrate-z-KVM.xml | 180 +- xml/MAIN-SBP-Multi-PXE-Install.xml | 105 +- xml/MAIN-SBP-OracleWeblogic-SLES12SP3.xml | 42 +- xml/MAIN-SBP-Quilting-OSC.xml | 54 +- xml/MAIN-SBP-RPM-Packaging.xml | 218 +-- xml/MAIN-SBP-SAP-AzureSolutionTemplates.xml | 56 +- ...AIN-SBP-SLE-OffLine-Upgrade-Local-Boot.xml | 132 +- ...N-SBP-SLE15-Custom-Installation-Medium.xml | 129 +- xml/MAIN-SBP-SLES-MFAD.xml | 91 +- 
xml/MAIN-SBP-SLSA.xml | 143 +- xml/MAIN-SBP-SUMA-on-IBM-PowerVM.xml | 156 +- xml/MAIN-SBP-SUSE-oem-identification.xml | 102 +- xml/MAIN-SBP-SUSE-security-report-2021.xml | 208 ++- xml/MAIN-SBP-SUSE-security-report-2022.xml | 76 +- xml/MAIN-SBP-SUSE-security-report-2023.xml | 48 +- xml/MAIN-SBP-Spectre-Meltdown-L1TF.xml | 208 +-- xml/MAIN-SBP-intelsupport.xml | 142 +- xml/MAIN-SBP-oraclerac.xml | 181 +- xml/MAIN-SBP-oracleweblogic.xml | 87 +- xml/MAIN-SBP-performance-tuning.xml | 56 +- xml/MAIN-SBP-rook-ceph-kubernetes.xml | 762 +++++---- xml/MAIN-SBP-rpiquick-SLES12SP3.xml | 46 +- xml/MAIN-SBP-susemanager.xml | 60 +- 163 files changed, 4141 insertions(+), 3858 deletions(-) rename {adoc => attic}/SAP_S4HA10_SetupGuide-docinfo.xml (100%) diff --git a/DC-CaaSP3-DataHub2 b/DC-CaaSP3-DataHub2 index 0b08e942e..ce654028d 100644 --- a/DC-CaaSP3-DataHub2 +++ b/DC-CaaSP3-DataHub2 @@ -4,7 +4,7 @@ ADOC_TYPE="article" ADOC_POST="yes" -ADOC_ATTRIBUTES="--attribute docdate=2019-12-05" +# ADOC_ATTRIBUTES="--attribute docdate=2019-12-05" # stylesheets STYLEROOT=/usr/share/xml/docbook/stylesheet/sbp diff --git a/DC-OS_Security_Hardening_Guide_for_SAP_HANA_SLES12 b/DC-OS_Security_Hardening_Guide_for_SAP_HANA_SLES12 index 6abe503a1..4de4741ac 100644 --- a/DC-OS_Security_Hardening_Guide_for_SAP_HANA_SLES12 +++ b/DC-OS_Security_Hardening_Guide_for_SAP_HANA_SLES12 @@ -4,7 +4,7 @@ ADOC_TYPE="article" ADOC_POST="yes" -ADOC_ATTRIBUTES="--attribute docdate=2022-02-09" +# ADOC_ATTRIBUTES="--attribute docdate=2022-02-09" # stylesheets STYLEROOT=/usr/share/xml/docbook/stylesheet/sbp diff --git a/DC-OS_Security_Hardening_Guide_for_SAP_HANA_SLES15_SP2_and_later b/DC-OS_Security_Hardening_Guide_for_SAP_HANA_SLES15_SP2_and_later index dcba3e80c..15138ae2d 100644 --- a/DC-OS_Security_Hardening_Guide_for_SAP_HANA_SLES15_SP2_and_later +++ b/DC-OS_Security_Hardening_Guide_for_SAP_HANA_SLES15_SP2_and_later @@ -4,7 +4,7 @@ ADOC_TYPE="article" ADOC_POST="yes" -ADOC_ATTRIBUTES="--attribute docdate=2022-02-09" +# ADOC_ATTRIBUTES="--attribute docdate=2022-02-09" # stylesheets STYLEROOT=/usr/share/xml/docbook/stylesheet/sbp diff --git a/DC-RKE-SAP-DI31 b/DC-RKE-SAP-DI31 index c769bf288..944762108 100644 --- a/DC-RKE-SAP-DI31 +++ b/DC-RKE-SAP-DI31 @@ -4,7 +4,7 @@ ADOC_TYPE="article" ADOC_POST="yes" -ADOC_ATTRIBUTES="--attribute docdate=2021-03-11" +# ADOC_ATTRIBUTES="--attribute docdate=2021-03-11" # stylesheets STYLEROOT=/usr/share/xml/docbook/stylesheet/sbp diff --git a/DC-RKE2-SAP-DI31 b/DC-RKE2-SAP-DI31 index 28df541a6..fbd003384 100644 --- a/DC-RKE2-SAP-DI31 +++ b/DC-RKE2-SAP-DI31 @@ -4,7 +4,7 @@ ADOC_TYPE="article" ADOC_POST="yes" -ADOC_ATTRIBUTES="--attribute docdate=2021-07-26" +# ADOC_ATTRIBUTES="--attribute docdate=2021-07-26" # stylesheets STYLEROOT=/usr/share/xml/docbook/stylesheet/sbp diff --git a/DC-SAP-EIC b/DC-SAP-EIC index da2ca9045..56f3cbb67 100644 --- a/DC-SAP-EIC +++ b/DC-SAP-EIC @@ -4,7 +4,7 @@ ADOC_TYPE="article" ADOC_POST="yes" -ADOC_ATTRIBUTES="--attribute docdate=2024-07-15" +# ADOC_ATTRIBUTES="--attribute docdate=2024-09-11" # stylesheets STYLEROOT=/usr/share/xml/docbook/stylesheet/sbp diff --git a/DC-SAP-NetWeaver-7.50-SLE-15-Setup-Guide-AliCloud b/DC-SAP-NetWeaver-7.50-SLE-15-Setup-Guide-AliCloud index 300253eb6..22cd55085 100644 --- a/DC-SAP-NetWeaver-7.50-SLE-15-Setup-Guide-AliCloud +++ b/DC-SAP-NetWeaver-7.50-SLE-15-Setup-Guide-AliCloud @@ -4,7 +4,7 @@ ADOC_TYPE="article" ADOC_POST="yes" -ADOC_ATTRIBUTES="--attribute docdate=2021-02-23" +# ADOC_ATTRIBUTES="--attribute docdate=2021-02-23" # stylesheets 
STYLEROOT=/usr/share/xml/docbook/stylesheet/sbp diff --git a/DC-SAP-S4HA10-setupguide-simplemount-sle15 b/DC-SAP-S4HA10-setupguide-simplemount-sle15 index 0bf73ef9c..5ad74e31c 100644 --- a/DC-SAP-S4HA10-setupguide-simplemount-sle15 +++ b/DC-SAP-S4HA10-setupguide-simplemount-sle15 @@ -4,7 +4,7 @@ ADOC_TYPE="article" ADOC_POST="yes" -ADOC_ATTRIBUTES="--attribute docdate=2022-02-28" +# ADOC_ATTRIBUTES="--attribute docdate=2022-02-28" # stylesheets STYLEROOT=/usr/share/xml/docbook/stylesheet/sbp diff --git a/DC-SAP-S4HA10-setupguide-sle15 b/DC-SAP-S4HA10-setupguide-sle15 index 9bbf82531..862de2b36 100644 --- a/DC-SAP-S4HA10-setupguide-sle15 +++ b/DC-SAP-S4HA10-setupguide-sle15 @@ -4,7 +4,7 @@ ADOC_TYPE="article" ADOC_POST="yes" -ADOC_ATTRIBUTES="--attribute docdate=2022-02-28" +# ADOC_ATTRIBUTES="--attribute docdate=2022-02-28" # stylesheets STYLEROOT=/usr/share/xml/docbook/stylesheet/sbp diff --git a/DC-SAP-convergent-mediation-ha-setup-sle15 b/DC-SAP-convergent-mediation-ha-setup-sle15 index b57b97d55..08159acd7 100644 --- a/DC-SAP-convergent-mediation-ha-setup-sle15 +++ b/DC-SAP-convergent-mediation-ha-setup-sle15 @@ -4,7 +4,7 @@ ADOC_TYPE="article" ADOC_POST="yes" -ADOC_ATTRIBUTES="--attribute docdate=2024-05-24" +# ADOC_ATTRIBUTES="--attribute docdate=2024-05-24" # stylesheets STYLEROOT=/usr/share/xml/docbook/stylesheet/sbp diff --git a/DC-SAP-nw740-sle15-setupguide b/DC-SAP-nw740-sle15-setupguide index e19560e45..3714eeba1 100644 --- a/DC-SAP-nw740-sle15-setupguide +++ b/DC-SAP-nw740-sle15-setupguide @@ -4,7 +4,7 @@ ADOC_TYPE="article" ADOC_POST="yes" -ADOC_ATTRIBUTES="--attribute docdate=2023-05-22" +# ADOC_ATTRIBUTES="--attribute docdate=2023-05-22" # stylesheets STYLEROOT=/usr/share/xml/docbook/stylesheet/sbp diff --git a/DC-SAPDI-RKE-Harvester b/DC-SAPDI-RKE-Harvester index 6460cf660..32670c61b 100644 --- a/DC-SAPDI-RKE-Harvester +++ b/DC-SAPDI-RKE-Harvester @@ -4,7 +4,7 @@ ADOC_TYPE="article" ADOC_POST="yes" -ADOC_ATTRIBUTES="--attribute docdate=2024-04-01" +# ADOC_ATTRIBUTES="--attribute docdate=2024-04-01" # stylesheets STYLEROOT=/usr/share/xml/docbook/stylesheet/sbp diff --git a/DC-SAPDI3-RKE2-Installation b/DC-SAPDI3-RKE2-Installation index c397c1a72..2e12ca142 100644 --- a/DC-SAPDI3-RKE2-Installation +++ b/DC-SAPDI3-RKE2-Installation @@ -4,7 +4,7 @@ ADOC_TYPE="article" ADOC_POST="yes" -ADOC_ATTRIBUTES="--attribute docdate=2022-11-11" +# ADOC_ATTRIBUTES="--attribute docdate=2022-11-11" # stylesheets STYLEROOT=/usr/share/xml/docbook/stylesheet/sbp diff --git a/DC-SAPDI3-SUSE-VMWare-VSAN b/DC-SAPDI3-SUSE-VMWare-VSAN index ffc984afc..a69ee7af1 100644 --- a/DC-SAPDI3-SUSE-VMWare-VSAN +++ b/DC-SAPDI3-SUSE-VMWare-VSAN @@ -4,7 +4,7 @@ ADOC_TYPE="article" ADOC_POST="yes" -ADOC_ATTRIBUTES="--attribute docdate=2023-07-24" +# ADOC_ATTRIBUTES="--attribute docdate=2023-07-24" # stylesheets STYLEROOT=/usr/share/xml/docbook/stylesheet/sbp diff --git a/DC-SAP_HA740_SetupGuide_AWS b/DC-SAP_HA740_SetupGuide_AWS index a0aa90340..e3cdd903e 100644 --- a/DC-SAP_HA740_SetupGuide_AWS +++ b/DC-SAP_HA740_SetupGuide_AWS @@ -4,7 +4,7 @@ ADOC_TYPE="article" ADOC_POST="yes" -ADOC_ATTRIBUTES="--attribute docdate=2020-02-06" +# ADOC_ATTRIBUTES="--attribute docdate=2020-02-06" # stylesheets STYLEROOT=/usr/share/xml/docbook/stylesheet/sbp diff --git a/DC-SAP_NW740_SLE12_SetupGuide b/DC-SAP_NW740_SLE12_SetupGuide index 1529d31ed..b3974f196 100644 --- a/DC-SAP_NW740_SLE12_SetupGuide +++ b/DC-SAP_NW740_SLE12_SetupGuide @@ -4,7 +4,7 @@ ADOC_TYPE="article" ADOC_POST="yes" -ADOC_ATTRIBUTES="--attribute 
docdate=2020-04-03" +# ADOC_ATTRIBUTES="--attribute docdate=2020-04-03" # stylesheets STYLEROOT=/usr/share/xml/docbook/stylesheet/sbp diff --git a/DC-SAP_S4HA10_SetupGuide-SLE12 b/DC-SAP_S4HA10_SetupGuide-SLE12 index 45b0361de..286e4ded7 100644 --- a/DC-SAP_S4HA10_SetupGuide-SLE12 +++ b/DC-SAP_S4HA10_SetupGuide-SLE12 @@ -4,7 +4,7 @@ ADOC_TYPE="article" ADOC_POST="yes" -ADOC_ATTRIBUTES="--attribute docdate=2020-02-25" +# ADOC_ATTRIBUTES="--attribute docdate=2020-02-25" # stylesheets STYLEROOT=/usr/share/xml/docbook/stylesheet/sbp diff --git a/DC-SAP_S4HA10_SetupGuide-SLE15 b/DC-SAP_S4HA10_SetupGuide-SLE15 index 9bbf82531..862de2b36 100644 --- a/DC-SAP_S4HA10_SetupGuide-SLE15 +++ b/DC-SAP_S4HA10_SetupGuide-SLE15 @@ -4,7 +4,7 @@ ADOC_TYPE="article" ADOC_POST="yes" -ADOC_ATTRIBUTES="--attribute docdate=2022-02-28" +# ADOC_ATTRIBUTES="--attribute docdate=2022-02-28" # stylesheets STYLEROOT=/usr/share/xml/docbook/stylesheet/sbp diff --git a/DC-SBP-CaaSP4-SAP-Data-Hub-2 b/DC-SBP-CaaSP4-SAP-Data-Hub-2 index 9d6a4eada..0b4562f10 100644 --- a/DC-SBP-CaaSP4-SAP-Data-Hub-2 +++ b/DC-SBP-CaaSP4-SAP-Data-Hub-2 @@ -4,7 +4,7 @@ ADOC_TYPE="article" ADOC_POST="yes" -ADOC_ATTRIBUTES="--attribute docdate=2021-03-15" +# ADOC_ATTRIBUTES="--attribute docdate=2021-03-15" # stylesheets STYLEROOT=/usr/share/xml/docbook/stylesheet/sbp diff --git a/DC-SBP-CaaSP4-SAP-Data-Intelligence-3 b/DC-SBP-CaaSP4-SAP-Data-Intelligence-3 index da70cc5fa..2efce0d0b 100644 --- a/DC-SBP-CaaSP4-SAP-Data-Intelligence-3 +++ b/DC-SBP-CaaSP4-SAP-Data-Intelligence-3 @@ -4,7 +4,7 @@ ADOC_TYPE="article" ADOC_POST="yes" -ADOC_ATTRIBUTES="--attribute docdate=2021-03-11" +# ADOC_ATTRIBUTES="--attribute docdate=2021-03-11" # stylesheets STYLEROOT=/usr/share/xml/docbook/stylesheet/sbp diff --git a/DC-SBP-CloudLS-master b/DC-SBP-CloudLS-master index 00c60e6e9..3543df0fc 100644 --- a/DC-SBP-CloudLS-master +++ b/DC-SBP-CloudLS-master @@ -4,7 +4,7 @@ ADOC_TYPE="article" ADOC_POST="yes" -ADOC_ATTRIBUTES="--attribute docdate=2019-07-24" +# ADOC_ATTRIBUTES="--attribute docdate=2019-07-24" # stylesheets STYLEROOT=/usr/share/xml/docbook/stylesheet/sbp diff --git a/DC-SBP-DI-30-CaaSP42-Install-Guide b/DC-SBP-DI-30-CaaSP42-Install-Guide index a56141ff8..1fb1300dc 100644 --- a/DC-SBP-DI-30-CaaSP42-Install-Guide +++ b/DC-SBP-DI-30-CaaSP42-Install-Guide @@ -4,7 +4,7 @@ ADOC_TYPE="article" ADOC_POST="yes" -ADOC_ATTRIBUTES="--attribute docdate=2021-03-11" +# ADOC_ATTRIBUTES="--attribute docdate=2021-03-11" # stylesheets STYLEROOT=/usr/share/xml/docbook/stylesheet/sbp diff --git a/DC-SBP-Private-Registry b/DC-SBP-Private-Registry index 085cb9734..b91a76ca7 100644 --- a/DC-SBP-Private-Registry +++ b/DC-SBP-Private-Registry @@ -4,7 +4,7 @@ ADOC_TYPE="article" ADOC_POST="yes" -ADOC_ATTRIBUTES="--attribute docdate=2021-01-27" +# ADOC_ATTRIBUTES="--attribute docdate=2021-01-27" # stylesheets STYLEROOT=/usr/share/xml/docbook/stylesheet/sbp diff --git a/DC-SBP-SAP-HANA-PerOpt-HA-Azure b/DC-SBP-SAP-HANA-PerOpt-HA-Azure index cb6cc5379..0c2eaf654 100644 --- a/DC-SBP-SAP-HANA-PerOpt-HA-Azure +++ b/DC-SBP-SAP-HANA-PerOpt-HA-Azure @@ -2,7 +2,7 @@ MAIN="sap_hana_azure-main_document.adoc" ADOC_TYPE="article" ADOC_POST="yes" -ADOC_ATTRIBUTES="--attribute docdate=2020-04-02" +# ADOC_ATTRIBUTES="--attribute docdate=2020-04-02" STYLEROOT=/usr/share/xml/docbook/stylesheet/sbp FALLBACK_STYLEROOT=/usr/share/xml/docbook/stylesheet/suse2022-ns diff --git a/DC-SBP-SAP-MULTI-SID b/DC-SBP-SAP-MULTI-SID index 0e3d575f2..eca18b3a3 100644 --- a/DC-SBP-SAP-MULTI-SID +++ b/DC-SBP-SAP-MULTI-SID 
@@ -4,7 +4,7 @@ ADOC_TYPE="article" ADOC_POST="yes" -ADOC_ATTRIBUTES="--attribute docdate=2020-02-25" +# ADOC_ATTRIBUTES="--attribute docdate=2020-02-25" # stylesheets STYLEROOT=/usr/share/xml/docbook/stylesheet/sbp diff --git a/DC-SBP-SLES4SAP-HANAonKVM-SLES15SP2 b/DC-SBP-SLES4SAP-HANAonKVM-SLES15SP2 index a244c00d8..194105eac 100644 --- a/DC-SBP-SLES4SAP-HANAonKVM-SLES15SP2 +++ b/DC-SBP-SLES4SAP-HANAonKVM-SLES15SP2 @@ -4,7 +4,7 @@ ADOC_TYPE="article" ADOC_POST="yes" -ADOC_ATTRIBUTES="--attribute docdate=2021-12-02" +# ADOC_ATTRIBUTES="--attribute docdate=2021-12-02" # stylesheets STYLEROOT=/usr/share/xml/docbook/stylesheet/sbp diff --git a/DC-SBP-SLES4SAP-HANAonKVM-SLES15SP4 b/DC-SBP-SLES4SAP-HANAonKVM-SLES15SP4 index 829560060..ec5a4a758 100644 --- a/DC-SBP-SLES4SAP-HANAonKVM-SLES15SP4 +++ b/DC-SBP-SLES4SAP-HANAonKVM-SLES15SP4 @@ -4,7 +4,7 @@ ADOC_TYPE="article" ADOC_POST="yes" -ADOC_ATTRIBUTES="--attribute docdate=2023-11-29" +# ADOC_ATTRIBUTES="--attribute docdate=2023-11-29" # stylesheets STYLEROOT=/usr/share/xml/docbook/stylesheet/sbp diff --git a/DC-SBP-SLES4SAP-sap-infra-monitoring b/DC-SBP-SLES4SAP-sap-infra-monitoring index df59b6263..64546f6ba 100644 --- a/DC-SBP-SLES4SAP-sap-infra-monitoring +++ b/DC-SBP-SLES4SAP-sap-infra-monitoring @@ -4,7 +4,7 @@ ADOC_TYPE="article" ADOC_POST="yes" -ADOC_ATTRIBUTES="--attribute docdate=2023-09-29" +# ADOC_ATTRIBUTES="--attribute docdate=2023-09-29" # stylesheets STYLEROOT=/usr/share/xml/docbook/stylesheet/sbp diff --git a/DC-SLES-SAP-hana-scaleOut-PerfOpt-12-AWS b/DC-SLES-SAP-hana-scaleOut-PerfOpt-12-AWS index a4f6565af..02dc46b43 100644 --- a/DC-SLES-SAP-hana-scaleOut-PerfOpt-12-AWS +++ b/DC-SLES-SAP-hana-scaleOut-PerfOpt-12-AWS @@ -4,7 +4,7 @@ ADOC_TYPE="article" ADOC_POST="yes" -ADOC_ATTRIBUTES="--attribute docdate=2022-05-27" +# ADOC_ATTRIBUTES="--attribute docdate=2022-05-27" # stylesheets STYLEROOT=/usr/share/xml/docbook/stylesheet/sbp diff --git a/DC-SLES4SAP-hana-angi-perfopt-15 b/DC-SLES4SAP-hana-angi-perfopt-15 index 7b62cecf0..d285b9d4e 100644 --- a/DC-SLES4SAP-hana-angi-perfopt-15 +++ b/DC-SLES4SAP-hana-angi-perfopt-15 @@ -4,7 +4,7 @@ ADOC_TYPE="article" ADOC_POST="yes" -ADOC_ATTRIBUTES="--attribute docdate=2024-05-24" +# ADOC_ATTRIBUTES="--attribute docdate=2024-05-24" STYLEROOT=/usr/share/xml/docbook/stylesheet/sbp FALLBACK_STYLEROOT=/usr/share/xml/docbook/stylesheet/suse2022-ns diff --git a/DC-SLES4SAP-hana-angi-scaleout-perfopt-15 b/DC-SLES4SAP-hana-angi-scaleout-perfopt-15 index e31c1ee4d..9ed85f51d 100644 --- a/DC-SLES4SAP-hana-angi-scaleout-perfopt-15 +++ b/DC-SLES4SAP-hana-angi-scaleout-perfopt-15 @@ -4,7 +4,7 @@ ADOC_TYPE="article" ADOC_POST="yes" -ADOC_ATTRIBUTES="--attribute docdate=2024-07-04" +# ADOC_ATTRIBUTES="--attribute docdate=2024-07-04" # stylesheets STYLEROOT=/usr/share/xml/docbook/stylesheet/sbp diff --git a/DC-SLES4SAP-hana-scaleOut-PerfOpt-12 b/DC-SLES4SAP-hana-scaleOut-PerfOpt-12 index 2d188f9ec..327e1fa33 100644 --- a/DC-SLES4SAP-hana-scaleOut-PerfOpt-12 +++ b/DC-SLES4SAP-hana-scaleOut-PerfOpt-12 @@ -4,7 +4,7 @@ ADOC_TYPE="article" ADOC_POST="yes" -ADOC_ATTRIBUTES="--attribute docdate=2022-12-07" +# ADOC_ATTRIBUTES="--attribute docdate=2022-12-07" # stylesheets STYLEROOT=/usr/share/xml/docbook/stylesheet/sbp diff --git a/DC-SLES4SAP-hana-scaleOut-PerfOpt-15 b/DC-SLES4SAP-hana-scaleOut-PerfOpt-15 index 64aa112e0..70bafe881 100644 --- a/DC-SLES4SAP-hana-scaleOut-PerfOpt-15 +++ b/DC-SLES4SAP-hana-scaleOut-PerfOpt-15 @@ -4,7 +4,7 @@ ADOC_TYPE="article" ADOC_POST="yes" -ADOC_ATTRIBUTES="--attribute 
docdate=2022-12-07" +# ADOC_ATTRIBUTES="--attribute docdate=2022-12-07" # stylesheets STYLEROOT=/usr/share/xml/docbook/stylesheet/sbp diff --git a/DC-SLES4SAP-hana-scaleout-multitarget-perfopt-15 b/DC-SLES4SAP-hana-scaleout-multitarget-perfopt-15 index f3be1315c..6e0ec55f5 100644 --- a/DC-SLES4SAP-hana-scaleout-multitarget-perfopt-15 +++ b/DC-SLES4SAP-hana-scaleout-multitarget-perfopt-15 @@ -4,7 +4,7 @@ ADOC_TYPE="article" ADOC_POST="yes" -ADOC_ATTRIBUTES="--attribute docdate=2022-12-22" +# ADOC_ATTRIBUTES="--attribute docdate=2022-12-22" # stylesheets STYLEROOT=/usr/share/xml/docbook/stylesheet/sbp diff --git a/DC-SLES4SAP-hana-sr-guide-CostOpt-12 b/DC-SLES4SAP-hana-sr-guide-CostOpt-12 index 4bebbd4c6..397e35b4f 100644 --- a/DC-SLES4SAP-hana-sr-guide-CostOpt-12 +++ b/DC-SLES4SAP-hana-sr-guide-CostOpt-12 @@ -4,7 +4,7 @@ ADOC_TYPE="article" ADOC_POST="yes" -ADOC_ATTRIBUTES="--attribute docdate=2022-12-07" +# ADOC_ATTRIBUTES="--attribute docdate=2022-12-07" STYLEROOT=/usr/share/xml/docbook/stylesheet/sbp FALLBACK_STYLEROOT=/usr/share/xml/docbook/stylesheet/suse2022-ns diff --git a/DC-SLES4SAP-hana-sr-guide-PerfOpt-12 b/DC-SLES4SAP-hana-sr-guide-PerfOpt-12 index 6628f9c56..75c74b9f7 100644 --- a/DC-SLES4SAP-hana-sr-guide-PerfOpt-12 +++ b/DC-SLES4SAP-hana-sr-guide-PerfOpt-12 @@ -4,7 +4,7 @@ ADOC_TYPE="article" ADOC_POST="yes" -ADOC_ATTRIBUTES="--attribute docdate=2022-12-07" +# ADOC_ATTRIBUTES="--attribute docdate=2022-12-07" STYLEROOT=/usr/share/xml/docbook/stylesheet/sbp FALLBACK_STYLEROOT=/usr/share/xml/docbook/stylesheet/suse2022-ns diff --git a/DC-SLES4SAP-hana-sr-guide-PerfOpt-12-Alicloud b/DC-SLES4SAP-hana-sr-guide-PerfOpt-12-Alicloud index b677818d8..89c43b7a8 100644 --- a/DC-SLES4SAP-hana-sr-guide-PerfOpt-12-Alicloud +++ b/DC-SLES4SAP-hana-sr-guide-PerfOpt-12-Alicloud @@ -4,7 +4,7 @@ ADOC_TYPE="article" ADOC_POST="yes" -ADOC_ATTRIBUTES="--attribute docdate=2019-05-24" +# ADOC_ATTRIBUTES="--attribute docdate=2019-05-24" # stylesheets STYLEROOT=/usr/share/xml/docbook/stylesheet/sbp diff --git a/DC-SLES4SAP-hana-sr-guide-PerfOpt-12_AWS b/DC-SLES4SAP-hana-sr-guide-PerfOpt-12_AWS index 62c2574ce..295392c83 100644 --- a/DC-SLES4SAP-hana-sr-guide-PerfOpt-12_AWS +++ b/DC-SLES4SAP-hana-sr-guide-PerfOpt-12_AWS @@ -4,7 +4,7 @@ ADOC_TYPE="article" ADOC_POST="yes" -ADOC_ATTRIBUTES="--attribute docdate=2021-10-06" +# ADOC_ATTRIBUTES="--attribute docdate=2021-10-06" #stylesheets STYLEROOT=/usr/share/xml/docbook/stylesheet/sbp diff --git a/DC-SLES4SAP-hana-sr-guide-PerfOpt-15 b/DC-SLES4SAP-hana-sr-guide-PerfOpt-15 index 9ed0c3412..61ed9f6ff 100644 --- a/DC-SLES4SAP-hana-sr-guide-PerfOpt-15 +++ b/DC-SLES4SAP-hana-sr-guide-PerfOpt-15 @@ -4,7 +4,7 @@ ADOC_TYPE="article" ADOC_POST="yes" -ADOC_ATTRIBUTES="--attribute docdate=2022-12-07" +# ADOC_ATTRIBUTES="--attribute docdate=2022-12-07" STYLEROOT=/usr/share/xml/docbook/stylesheet/sbp FALLBACK_STYLEROOT=/usr/share/xml/docbook/stylesheet/suse2022-ns diff --git a/DC-SLES4SAP-hana-sr-guide-costopt-15 b/DC-SLES4SAP-hana-sr-guide-costopt-15 index 3c1272f28..9eb5d7ddf 100644 --- a/DC-SLES4SAP-hana-sr-guide-costopt-15 +++ b/DC-SLES4SAP-hana-sr-guide-costopt-15 @@ -4,7 +4,7 @@ ADOC_TYPE="article" ADOC_POST="yes" -ADOC_ATTRIBUTES="--attribute docdate=2022-12-07" +# ADOC_ATTRIBUTES="--attribute docdate=2022-12-07" STYLEROOT=/usr/share/xml/docbook/stylesheet/sbp FALLBACK_STYLEROOT=/usr/share/xml/docbook/stylesheet/suse2022-ns diff --git a/DC-SLES4SAP-hana-sr-guide-perfopt-15-aws b/DC-SLES4SAP-hana-sr-guide-perfopt-15-aws index 819450e16..45823d39a 100644 --- 
a/DC-SLES4SAP-hana-sr-guide-perfopt-15-aws +++ b/DC-SLES4SAP-hana-sr-guide-perfopt-15-aws @@ -4,7 +4,7 @@ ADOC_TYPE="article" ADOC_POST="yes" -ADOC_ATTRIBUTES="--attribute docdate=2022-02-28" +# ADOC_ATTRIBUTES="--attribute docdate=2022-02-28" #stylesheets STYLEROOT=/usr/share/xml/docbook/stylesheet/sbp diff --git a/DC-TRD-Linux-gs-wordpress-lamp-sles b/DC-TRD-Linux-gs-wordpress-lamp-sles index ff822b504..0bf9116a5 100644 --- a/DC-TRD-Linux-gs-wordpress-lamp-sles +++ b/DC-TRD-Linux-gs-wordpress-lamp-sles @@ -4,7 +4,7 @@ ADOC_TYPE="article" ADOC_POST="yes" -ADOC_ATTRIBUTES="--attribute docdate=2021-11-17 --attribute env-daps=1" +# ADOC_ATTRIBUTES="--attribute docdate=2021-11-17 --attribute env-daps=1" # stylesheets STYLEROOT=/usr/share/xml/docbook/stylesheet/trd diff --git a/DC-TRD-SLES-SAP-HA-automation-quickstart-cloud-aws b/DC-TRD-SLES-SAP-HA-automation-quickstart-cloud-aws index abf3d114e..4f0024b28 100644 --- a/DC-TRD-SLES-SAP-HA-automation-quickstart-cloud-aws +++ b/DC-TRD-SLES-SAP-HA-automation-quickstart-cloud-aws @@ -4,7 +4,7 @@ ADOC_TYPE="article" ADOC_POST="yes" -ADOC_ATTRIBUTES=" --attribute docdate=2021-09-09" +# ADOC_ATTRIBUTES=" --attribute docdate=2021-09-09" ADOC_ATTRIBUTES=" --attribute env-daps=1" # diff --git a/DC-TRD-SLES-SAP-HA-automation-quickstart-cloud-azure b/DC-TRD-SLES-SAP-HA-automation-quickstart-cloud-azure index 2307a83bc..d1d9cdfb1 100644 --- a/DC-TRD-SLES-SAP-HA-automation-quickstart-cloud-azure +++ b/DC-TRD-SLES-SAP-HA-automation-quickstart-cloud-azure @@ -4,7 +4,8 @@ ADOC_TYPE="article" ADOC_POST="yes" -ADOC_ATTRIBUTES="--attribute docdate=2021-09-09 --attribute env-daps=1" +# ADOC_ATTRIBUTES=" --attribute docdate=2021-09-09" +ADOC_ATTRIBUTES=" --attribute env-daps=1" # # you maybe need daps -d DC.. -force to generate the output diff --git a/DC-TRD-SLES-SAP-HA-automation-quickstart-cloud-gcp b/DC-TRD-SLES-SAP-HA-automation-quickstart-cloud-gcp index 334ef672c..519b1f456 100644 --- a/DC-TRD-SLES-SAP-HA-automation-quickstart-cloud-gcp +++ b/DC-TRD-SLES-SAP-HA-automation-quickstart-cloud-gcp @@ -4,7 +4,8 @@ ADOC_TYPE="article" ADOC_POST="yes" -ADOC_ATTRIBUTES="--attribute docdate=2021-09-09 --attribute env-daps=1" +# ADOC_ATTRIBUTES=" --attribute docdate=2021-09-09" +ADOC_ATTRIBUTES=" --attribute env-daps=1" # # you maybe need daps -d DC.. -force to generate the output diff --git a/adoc/CaaSP40_DI3X_Install_Guide-docinfo.xml b/adoc/CaaSP40_DI3X_Install_Guide-docinfo.xml index 800ad78a9..9d24f200b 100644 --- a/adoc/CaaSP40_DI3X_Install_Guide-docinfo.xml +++ b/adoc/CaaSP40_DI3X_Install_Guide-docinfo.xml @@ -5,44 +5,32 @@ - - - -SUSE CaaS Platform, SLES for SAP 15 - + + -SUSE Best Practices - +Best Practices - Containerization SAP Containerization Data Intelligence Installation + Configuration -SAP Data Intelligence 3 on SUSE CaaS Platform 4: Installation Guide +SAP DI 3 on SUSE CaaSP 4 Installation Guide This document describes the installation and configuration of SUSE CaaS Platform 4 and SAP Data Intelligence 3. +Installing SAP DI 3 on SUSE CaaSP 4 - SUSE CaaSP - SAP DI - SLES for SAP + SUSE CaaS Platform + SUSE Linux Enterprise Server for SAP Applications -2021-03-11 SUSE CaaS Platform 4 SAP Data Intelligence 3 SUSE Linux Enterprise Server for SAP Applications 15 SP1 - - - Dr. 
Ulrich @@ -67,7 +55,14 @@ -2021-03-11 + + + 2021-03-11 + + + + + diff --git a/adoc/CaaSP40_DI3X_Install_Guide.adoc b/adoc/CaaSP40_DI3X_Install_Guide.adoc index 50d821a6f..1e75fcea4 100644 --- a/adoc/CaaSP40_DI3X_Install_Guide.adoc +++ b/adoc/CaaSP40_DI3X_Install_Guide.adoc @@ -1,6 +1,8 @@ - :docinfo: +// defining article ID +[#art-caasp40-sapdi3x-install] + = SAP Data Intelligence 3 on SUSE CaaS Platform 4: Installation Guide diff --git a/adoc/CaaSP4_DH2X_Install_Guide-docinfo.xml b/adoc/CaaSP4_DH2X_Install_Guide-docinfo.xml index 92018047b..527951d4d 100644 --- a/adoc/CaaSP4_DH2X_Install_Guide-docinfo.xml +++ b/adoc/CaaSP4_DH2X_Install_Guide-docinfo.xml @@ -5,31 +5,26 @@ - - - -SUSE CaaS Platform -4 + + -SUSE Best Practices - +Best Practices - Containerization SAP Data Intelligence Installation - Configuration + Containerization + Configuration -SAP Data Hub 2 on SUSE CaaS Platform 4: Installation Guide +SAP Data Hub 2 on SUSE CaaSP 4 Installation Guide This document describes the installation and configuration of SUSE CaaS Platform 4 and SAP Data Hub 2. +Installing SAP DH2 on SUSE CaaSP 4 - SUSE CaaSP - SAP Data Hub - SLES for SAP + SUSE CaaS Platform + SUSE Linux Enterprise Server for SAP Applications -2021-03-15 SUSE CaaS Platform 4 SUSE Linux Enterprise Server for SAP Applications 15 SP1 @@ -48,7 +43,7 @@ - + - + + + 2021-03-15 + + + + + diff --git a/adoc/CaaSP4_DH2X_Install_Guide.adoc b/adoc/CaaSP4_DH2X_Install_Guide.adoc index 1da1896cd..1489aa0fe 100644 --- a/adoc/CaaSP4_DH2X_Install_Guide.adoc +++ b/adoc/CaaSP4_DH2X_Install_Guide.adoc @@ -1,6 +1,9 @@ :docinfo: +// defining article ID +[#art-caasp4-sapdh2x-install] + = SAP Data Hub 2 on SUSE CaaS Platform 4: Installation Guide diff --git a/adoc/CaaSP_DH2X_Install_Guide-docinfo.xml b/adoc/CaaSP_DH2X_Install_Guide-docinfo.xml index f7dd3b236..b3b652024 100644 --- a/adoc/CaaSP_DH2X_Install_Guide-docinfo.xml +++ b/adoc/CaaSP_DH2X_Install_Guide-docinfo.xml @@ -5,35 +5,32 @@ -SUSE CaaS Platform -3 + + -SUSE Best Practices - +Best Practices - Containerization SAP Configuration Data Intelligence Installation + Containerization -SAP Data Hub 2 on SUSE CaaS Platform 3: Installation Guide +SAP Data Hub 2 on SUSE CaaSP 3 Installation Guide This document describes the installation and configuration of SUSE CaaS Platform 3 and SAP Data Hub 2. 
+Installing SAP DH2 on SUSE CaaSP 3 - SUSE CaaSP - SLES for SAP - SAP Data Hub - -2019-12-05 + SUSE CaaS Platform + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE CaaS Platform 3 -SUSE Linux Enterprise Server for SAP Applications 15 +SUSE Linux Enterprise Server for SAP Applications 12 SP5 and 15 SP1 SAP Data Hub 2 - - @@ -63,7 +60,6 @@ --> - @@ -75,7 +71,15 @@ - + + + 2019-12-05 + + + + + + diff --git a/adoc/CaaSP_DH2X_Install_Guide.adoc b/adoc/CaaSP_DH2X_Install_Guide.adoc index f4dc08787..dbec3035e 100644 --- a/adoc/CaaSP_DH2X_Install_Guide.adoc +++ b/adoc/CaaSP_DH2X_Install_Guide.adoc @@ -1,6 +1,9 @@ :docinfo: +// defining article ID +[#art-caasp3-sapdh2x-install] + = SAP Data Hub 2 on SUSE CaaS Platform 3: Installation Guide ++++ @@ -51,7 +54,7 @@ It is recommended to do the installation of SAP Data Hub Foundation from an exte The hardware and operating system specifications for the jump host can be for example as follows: -- SUSE Linux Enterprise Server 12 SP4 or SUSE Linux Enterprise Server 15 (or even openSUSE Leap 15.X) +- SUSE Linux Enterprise Server 12 SP5 or SUSE Linux Enterprise Server 15 (or even openSUSE Leap 15.X) - 2 Cores - 8 GiB RAM - Disk space: 50 GiB for `/`, including the space for the SAP Data Hub 2 software and at least 20 GiB for `/var/lib/docker` (necessary for the SAP Data Hub 2 installation) diff --git a/adoc/CloudLS_master-docinfo.xml b/adoc/CloudLS_master-docinfo.xml index 9aa720fdd..07c3758f1 100644 --- a/adoc/CloudLS_master-docinfo.xml +++ b/adoc/CloudLS_master-docinfo.xml @@ -9,11 +9,8 @@ - SUSE OpenStack Cloud, SUSE Enterprise Storage, SUSE Linux Enterprise Server - -SUSE Best Practices - +Best Practices Cloud @@ -24,12 +21,12 @@ Describes how to design and build a large, scalable private cloud to provide Infrastructure as a Service (IaaS) based on open source and open APIs +Describes design of large scale clouds - SLES - SOC - SES + SUSE Linux Enterprise + SUSE OpenStack Cloud + SUSE Enterprise Storage -2019-07-24 SUSE Linux Enterprise Server SUSE OpenStack Cloud @@ -123,7 +120,14 @@ - + + + 2019-07-24 + + + + + diff --git a/adoc/CloudLS_master.adoc b/adoc/CloudLS_master.adoc index b8fa6c976..55308686e 100644 --- a/adoc/CloudLS_master.adoc +++ b/adoc/CloudLS_master.adoc @@ -1,4 +1,8 @@ :docinfo: + +// defining article ID +[#art-cloudls-architecture] + = Large Scale SUSE OpenStack Clouds - An Architecture Guide // Start of the document diff --git a/adoc/OS_Security_Hardening_Guide_for_SAP_HANA_SLES12-docinfo.xml b/adoc/OS_Security_Hardening_Guide_for_SAP_HANA_SLES12-docinfo.xml index e6d4a9051..9c5f31952 100644 --- a/adoc/OS_Security_Hardening_Guide_for_SAP_HANA_SLES12-docinfo.xml +++ b/adoc/OS_Security_Hardening_Guide_for_SAP_HANA_SLES12-docinfo.xml @@ -6,25 +6,21 @@ - - SUSE Linux Enterprise Server for SAP Applications - 12 -SUSE Best Practices - +Best Practices SAP Security -Operating System Security Hardening Guide for SAP HANA for SUSE Linux Enterprise Server 12 +OS Security Hardening Guide for SAP HANA on SLES 12 This document guides through various hardening methods for SUSE Linux Enterprise Server for SAP Applications 12 to run SAP HANA. 
+How to harden SLES for SAP Applications to run SAP HANA - SLES for SAP + SUSE Linux Enterprise Server for SAP Applications -2022-02-09 SUSE Linux Enterprise Server for SAP Applications 12 SAP HANA @@ -77,9 +73,16 @@ - + - + + + 2022-02-09 + + + + + diff --git a/adoc/OS_Security_Hardening_Guide_for_SAP_HANA_SLES12.adoc b/adoc/OS_Security_Hardening_Guide_for_SAP_HANA_SLES12.adoc index ef5ab29a5..edb3b94cb 100644 --- a/adoc/OS_Security_Hardening_Guide_for_SAP_HANA_SLES12.adoc +++ b/adoc/OS_Security_Hardening_Guide_for_SAP_HANA_SLES12.adoc @@ -1,6 +1,9 @@ :docinfo: :localdate: +// defining article ID +[#art-os-sec-guide-saphana-sles12] + = Operating System Security Hardening Guide for SAP HANA for SUSE Linux Enterprise Server 12 :Revision: 1.3 diff --git a/adoc/OS_Security_Hardening_Guide_for_SAP_HANA_SLES15-docinfo.xml b/adoc/OS_Security_Hardening_Guide_for_SAP_HANA_SLES15-docinfo.xml index 740e0c081..23acc72a1 100644 --- a/adoc/OS_Security_Hardening_Guide_for_SAP_HANA_SLES15-docinfo.xml +++ b/adoc/OS_Security_Hardening_Guide_for_SAP_HANA_SLES15-docinfo.xml @@ -6,25 +6,22 @@ - - SUSE Linux Enterprise Server for SAP Applications - 15 GA, SP1 -SUSE Best Practices - +Best Practices SAP Security -Operating System Security Hardening Guide for SAP HANA for SUSE Linux Enterprise Server 15 GA and SP1 +OS Security Hardening Guide for SAP HANA on SLES15 SP1 This document guides through various hardening methods for SUSE Linux Enterprise Server for SAP Applications 15 GA and SP1 to run SAP HANA. +Hardening SLES for SAP Applications 15 to run SAP HANA - SLES for SAP + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications -2022-02-09 SUSE Linux Enterprise Server for SAP Applications 15 GA and SP1 @@ -77,7 +74,15 @@ - + + + + 2022-02-09 + + + + + diff --git a/adoc/OS_Security_Hardening_Guide_for_SAP_HANA_SLES15.adoc b/adoc/OS_Security_Hardening_Guide_for_SAP_HANA_SLES15.adoc index 1cc818931..824ad90b1 100644 --- a/adoc/OS_Security_Hardening_Guide_for_SAP_HANA_SLES15.adoc +++ b/adoc/OS_Security_Hardening_Guide_for_SAP_HANA_SLES15.adoc @@ -1,6 +1,9 @@ :docinfo: :localdate: +// defining article ID +[#art-os-sec-guide-saphana-sles15sp1] + = Operating System Security Hardening Guide for SAP HANA for SUSE Linux Enterprise Server 15 GA and SP1 :Revision: 1.2 diff --git a/adoc/OS_Security_Hardening_Guide_for_SAP_HANA_SLES15_SP2_and_later-docinfo.xml b/adoc/OS_Security_Hardening_Guide_for_SAP_HANA_SLES15_SP2_and_later-docinfo.xml index e9478ecde..353c50fd4 100644 --- a/adoc/OS_Security_Hardening_Guide_for_SAP_HANA_SLES15_SP2_and_later-docinfo.xml +++ b/adoc/OS_Security_Hardening_Guide_for_SAP_HANA_SLES15_SP2_and_later-docinfo.xml @@ -1,3 +1,4 @@ + https://github.com/SUSE/suse-best-practices/issues/new @@ -6,25 +7,25 @@ - - SUSE Linux Enterprise Server for SAP Applications 15 SP2 and later - -SUSE Best Practices - +Best Practices SAP Security -Operating System Security Hardening Guide for SAP HANA for SUSE Linux Enterprise Server 15 SP2+ +OS Security Hardening Guide for SAP HANA on SLES15 SP2+ This document guides through various hardening methods for - SUSE Linux Enterprise Server for SAP Applications 15 SP2 and later to run SAP HANA. +SUSE Linux Enterprise Server for SAP Applications 15 SP2 and later to run SAP HANA. 
+How to harden SLES for SAP 15 SP2+ for SAP HANA - SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications -2022-02-09 SUSE Linux Enterprise Server for SAP Applications 15 SP2 and later @@ -78,7 +79,14 @@ - + + + 2022-02-09 + + + + + @@ -97,3 +105,4 @@ + diff --git a/adoc/OS_Security_Hardening_Guide_for_SAP_HANA_SLES15_SP2_and_later.adoc b/adoc/OS_Security_Hardening_Guide_for_SAP_HANA_SLES15_SP2_and_later.adoc index e5f3d0b95..dd56e3f5b 100644 --- a/adoc/OS_Security_Hardening_Guide_for_SAP_HANA_SLES15_SP2_and_later.adoc +++ b/adoc/OS_Security_Hardening_Guide_for_SAP_HANA_SLES15_SP2_and_later.adoc @@ -1,6 +1,9 @@ :docinfo: :localdate: +// defining article ID +[#art-os-sec-guide-saphana-sles15sp2] + = Operating System Security Hardening Guide for SAP HANA for SUSE Linux Enterprise Server 15 SP2 and later :Revision: 1.3 diff --git a/adoc/Private-Registry-docinfo.xml b/adoc/Private-Registry-docinfo.xml index 330bd8bd3..b8a445e6b 100644 --- a/adoc/Private-Registry-docinfo.xml +++ b/adoc/Private-Registry-docinfo.xml @@ -5,11 +5,9 @@ -SUSE CaaS Platform 4.5, RKE2, K3s - + -SUSE Best Practices - +Best Practices Containerization @@ -20,29 +18,18 @@ SUSE Private Registry Powered by Harbor 2.1 This guide provides instructions how to deploy and maintain a private container registry - using Harbor 2.1. +using Harbor 2.1. +Maintaining a private container registry with Harbor - SUSE CaaSP - RKE - K3s + SUSE CaaS Platform + Rancher Kubernetes Engine -2022-02-09 SUSE CaaS Platform 4.5 or higher Rancher Kubernetes Engine 2 K3s + + + 2021-01-27 + + + + + + This guide provides instructions how to deploy and maintain a private container registry using Harbor 2.1. diff --git a/adoc/Private-Registry.adoc b/adoc/Private-Registry.adoc index 1d065fec2..976f788d3 100644 --- a/adoc/Private-Registry.adoc +++ b/adoc/Private-Registry.adoc @@ -1,6 +1,9 @@ :imagesdir: ../images/src/png/ :docinfo: +// defining article ID +[#art-private-registry] + // Replacement entities :kube: Kubernetes :spr: SUSE Private Registry diff --git a/adoc/SAP-DI-30-CaaSP42-Install-Guide-docinfo.xml b/adoc/SAP-DI-30-CaaSP42-Install-Guide-docinfo.xml index e9c62aaa1..76f0445b9 100644 --- a/adoc/SAP-DI-30-CaaSP42-Install-Guide-docinfo.xml +++ b/adoc/SAP-DI-30-CaaSP42-Install-Guide-docinfo.xml @@ -5,14 +5,10 @@ - -SUSE CaaS Platform -4 + -SUSE Best Practices - +Best Practices - Containerization SAP @@ -20,14 +16,14 @@ Data Intelligence Installation -SAP Data Intelligence 3 on SUSE CaaSP 4.2: Installation Guide +SAP DI 3 on SUSE CaaSP 4.2 Installation Guide This document describes the installation and configuration of SUSE CaaS Platform 4.2 and SAP Data Intelligence 3. +How to install SAP DI 3 on SUSE CaaSP 4.2 - SUSE CaaSP - SLES for SAP - SAP DI + SUSE CaaS Platform + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications -2021-03-11 SUSE CaaS Platform 4.2 SUSE Linux Enterprise Server for SAP Applications 15 @@ -73,15 +69,20 @@ - - + + + 2021-03-11 + + + + + - -SAP Data Intelligence 3 is the tool set to govern big amount of data. -SUSE CaaS Platform 4 is the Kubernetes base that makes deploying SAP Data Intelligence 3 easy. -This document describes the installation and configuration of SUSE CaaS Platform 4 and SAP Data Intelligence 3. 
+ SAP Data Intelligence 3 is the tool set to govern big amount of data. + SUSE CaaS Platform 4 is the Kubernetes base that makes deploying SAP Data Intelligence 3 easy. + This document describes the installation and configuration of SUSE CaaS Platform 4 and SAP Data Intelligence 3. Disclaimer: diff --git a/adoc/SAP-DI-30-CaaSP42-Install-Guide.adoc b/adoc/SAP-DI-30-CaaSP42-Install-Guide.adoc index eb224d907..83d128a7f 100644 --- a/adoc/SAP-DI-30-CaaSP42-Install-Guide.adoc +++ b/adoc/SAP-DI-30-CaaSP42-Install-Guide.adoc @@ -1,5 +1,8 @@ :docinfo: +// defining article ID +[#art-sapdi30-caasp42] + = SAP Data Intelligence 3 on CaaS Platform 4.2: Installation Guide diff --git a/adoc/SAP-EIC-Main-docinfo.xml b/adoc/SAP-EIC-Main-docinfo.xml index 94de316fd..201265de7 100644 --- a/adoc/SAP-EIC-Main-docinfo.xml +++ b/adoc/SAP-EIC-Main-docinfo.xml @@ -5,14 +5,12 @@ -SUSE Linux Enterprise Micro 5.4, RKE2, Rancher Prime, Longhorn - + + Best Practices - Containerization - SAP + SAP - Data Intelligence Containerization @@ -20,8 +18,8 @@ SAP Edge Integration Cell on SUSE -This document describes how to make use of SUSE’s full stack offerings for container workloads for an installation of SAP Edge Integration Cell. -Install SAP EIC on SUSE’s container workloads stack +This document describes how to make use of SUSE’s full stack offerings for container workloads for an installation of SAP's Edge Integration Cell. +Install SAP's EIC on SUSE’s container workloads stack SUSE Linux Enterprise Micro Rancher Kubernetes Engine @@ -33,6 +31,7 @@ Rancher Kubernetes Engine 2 Longhorn Rancher Prime +SAP Integration Suite @@ -91,7 +90,7 @@ SUSE® offers a full stack for your container workloads. This best practice document describes how you can make use of this offerings - for your installation of SAP Edge Integration Cell. The operations of SAP Edge Integration Cell and/or SAP Integration Suite are not covered in this document. + for your installation of Edge Integration Cell included with SAP Integration Suite. The operations of SAP Edge Integration Cell and/or SAP Integration Suite are not covered in this document. diff --git a/adoc/SAP-Multi-SID-docinfo.xml b/adoc/SAP-Multi-SID-docinfo.xml index 558611684..f95f14b5c 100644 --- a/adoc/SAP-Multi-SID-docinfo.xml +++ b/adoc/SAP-Multi-SID-docinfo.xml @@ -7,12 +7,8 @@ - - SUSE Linux Enterprise Server for SAP Applications - 12, 15 -SUSE Best Practices - +Best Practices SAP @@ -23,11 +19,17 @@ SAP S/4HANA and SAP NetWeaver Multi-SID Cluster Guide This document explains how to implement multiple SAP NetWeaver and S/4HANA systems in a High Availability Cluster solution. 
+HA solution with multiple SAP NW and S/4HANA systems - SLES for SAP - SLES for SAP + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications -2020-02-25 SUSE Linux Enterprise Server for SAP Applications 12 and 15 @@ -71,9 +73,14 @@ - - + + + 2020-02-25 + + + + + diff --git a/adoc/SAP-Multi-SID.adoc b/adoc/SAP-Multi-SID.adoc index 168993721..c7f61d13b 100644 --- a/adoc/SAP-Multi-SID.adoc +++ b/adoc/SAP-Multi-SID.adoc @@ -138,6 +138,9 @@ :localdate: +// defining article ID +[#art-sap-multi-sid] + // REVISION 1.0 2019/10 diff --git a/adoc/SAP-NetWeaver-7.50-SLE-15-Setup-Guide-AliCloud-docinfo.xml b/adoc/SAP-NetWeaver-7.50-SLE-15-Setup-Guide-AliCloud-docinfo.xml index 026456ca0..d5f3d04ff 100644 --- a/adoc/SAP-NetWeaver-7.50-SLE-15-Setup-Guide-AliCloud-docinfo.xml +++ b/adoc/SAP-NetWeaver-7.50-SLE-15-Setup-Guide-AliCloud-docinfo.xml @@ -7,12 +7,8 @@ - - SUSE Linux Enterprise Server for SAP Applications - 15 -SUSE Best Practices - +Best Practices SAP @@ -20,14 +16,20 @@ High Availability Clustering Cloud + Deployment -SAP NetWeaver Enqueue Replication 1 High Availability Cluster - Setup Guide for SAP NetWeaver 7.40 and 7.50 on Alibaba Cloud +SAP NetWeaver 7.40 and 7.50 ER 1 HA Cluster on AliCloud This guide describes how to set up a pacemaker cluster using SLES for SAP Applications 15 for the Enqueue Replication scenario on Alibaba Cloud. +Pacemaker cluster with SLES for SAP on Alibaba Cloud - SLES for SAP + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications -2021-02-23 SUSE Linux Enterprise Server for SAP Applications 15 SAP NetWeaver 7.40 and 7.50 @@ -103,8 +105,14 @@ - - + + + 2021-02-23 + + + + + diff --git a/adoc/SAP-NetWeaver-7.50-SLE-15-Setup-Guide-AliCloud.adoc b/adoc/SAP-NetWeaver-7.50-SLE-15-Setup-Guide-AliCloud.adoc index 59dcff3f6..ead91b87f 100644 --- a/adoc/SAP-NetWeaver-7.50-SLE-15-Setup-Guide-AliCloud.adoc +++ b/adoc/SAP-NetWeaver-7.50-SLE-15-Setup-Guide-AliCloud.adoc @@ -2,6 +2,9 @@ :slesProdVersion: 15 +// defining article ID +[#art-sapnw750-sle15-alicloud] + = SAP NetWeaver Enqueue Replication 1 High Availability Cluster - SAP NetWeaver 7.40 and 7.50 on Alibaba Cloud: Setup Guide //:toc: diff --git a/adoc/SAP-S4HA10-setup-simplemount-sle15-docinfo.xml b/adoc/SAP-S4HA10-setup-simplemount-sle15-docinfo.xml index 9a8ddbb4f..efdec2b33 100644 --- a/adoc/SAP-S4HA10-setup-simplemount-sle15-docinfo.xml +++ b/adoc/SAP-S4HA10-setup-simplemount-sle15-docinfo.xml @@ -7,25 +7,27 @@ - - SUSE Linux Enterprise Server for SAP Applications - 15 -SUSE Best Practices - +Best Practices SAP High Availability Clustering + Deployment -SAP S/4 HANA - Enqueue Replication 2 High Availability Cluster With Simple Mount +SAP S/4 HANA ER 2 HA cluster with Simple Mount This document explains how to deploy an S/4 HANA Enqueue Replication 2 High Availability Cluster solution. 
+Deploying an S/4 HANA Enqueue Replication 2 HA cluster - SLES for SAP + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications -2022-02-28 SUSE Linux Enterprise Server for SAP Applications 15 SAP HANA @@ -80,8 +82,14 @@ - - + + + 2022-02-28 + + + + + diff --git a/adoc/SAP-S4HA10-setup-simplemount-sle15.adoc b/adoc/SAP-S4HA10-setup-simplemount-sle15.adoc index e48f4ca98..26f83c67d 100644 --- a/adoc/SAP-S4HA10-setup-simplemount-sle15.adoc +++ b/adoc/SAP-S4HA10-setup-simplemount-sle15.adoc @@ -6,6 +6,9 @@ :slesProdVersion: 15 // +// defining article ID +[#art-saphana-cluster-simplemount] + = SAP S/4 HANA - Enqueue Replication 2 High Availability Cluster With Simple Mount: Setup Guide // Revision {Revision} from {docdate} diff --git a/adoc/SAP-S4HA10-setupguide-sle15-docinfo.xml b/adoc/SAP-S4HA10-setupguide-sle15-docinfo.xml index b95dbaaaf..af72cb708 100644 --- a/adoc/SAP-S4HA10-setupguide-sle15-docinfo.xml +++ b/adoc/SAP-S4HA10-setupguide-sle15-docinfo.xml @@ -7,11 +7,8 @@ - - SUSE Linux Enterprise Server for SAP Applications - 15 -SUSE Best Practices +Best Practices SAP @@ -19,13 +16,19 @@ High Availability Clustering + Deployment -SAP S/4 HANA - Enqueue Replication 2 High Availability Cluster +SAP S/4 HANA Enqueue Replication 2 HA Cluster This document explains how to deploy an S/4 HANA Enqueue Replication 2 High Availability Cluster solution. +Deploying an S/4 Hana ER 2 HA cluster on SLES 15 - SLES for SAP + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications -2022-02-28 SUSE Linux Enterprise Server for SAP Applications 15 SAP HANA @@ -51,16 +54,6 @@ SUSE - - + + + 2022-02-28 + + + + + diff --git a/adoc/SAP-S4HA10-setupguide-sle15.adoc b/adoc/SAP-S4HA10-setupguide-sle15.adoc index bdefa9ed5..6a6586639 100644 --- a/adoc/SAP-S4HA10-setupguide-sle15.adoc +++ b/adoc/SAP-S4HA10-setupguide-sle15.adoc @@ -6,6 +6,10 @@ :slesProdVersion: 15 // +// defining article ID +[#art-sap-s4hana10-sle15] + + = SAP S/4 HANA - Enqueue Replication 2 High Availability Cluster: Setup Guide // == SAP S/4 HANA 1809 High Availability Cluster diff --git a/adoc/SAP-convergent-mediation-ha-setup-sle15-docinfo.xml b/adoc/SAP-convergent-mediation-ha-setup-sle15-docinfo.xml index 2e4683998..40b07288c 100644 --- a/adoc/SAP-convergent-mediation-ha-setup-sle15-docinfo.xml +++ b/adoc/SAP-convergent-mediation-ha-setup-sle15-docinfo.xml @@ -4,13 +4,10 @@ SAP Convergent Mediation ControlZone High Availability Cluster - Setup Guide SLES15 - + - - SUSE Linux Enterprise Server for SAP Applications - 15 Best Practices @@ -25,11 +22,11 @@ SAP Convergent Mediation ControlZone HA Cluster guide This document explains how to configure an SAP Convergent Mediation ControlZone High Availability Cluster solution on SLES 15 SP4 and newer Configure a Convergent Mediation HA Cluster on SLES 15 - + SUSE Linux Enterprise Server for SAP Applications 15 SAP Convergent Mediation @@ -77,7 +74,6 @@ - diff --git a/adoc/SAPDI3-RKE2-Install-docinfo.xml b/adoc/SAPDI3-RKE2-Install-docinfo.xml index 3207073c8..fe746b76b 100644 --- 
a/adoc/SAPDI3-RKE2-Install-docinfo.xml +++ b/adoc/SAPDI3-RKE2-Install-docinfo.xml @@ -5,32 +5,29 @@ - - Rancher Kubernetes Engine 2, SLES for SAP 15 SP4, SAP Data Intelligence 3.3 - + - -SUSE Best Practices - +Best Practices - Containerization SAP Configuration Data Intelligence Installation + Containerization SAP Data Intelligence 3 on Rancher Kubernetes Engine 2 This document describes the installation and configuration of SAP Data Intelligence 3 deployed on Rancher Kubernetes Engine 2. +Installing SAP Data Intelligence 3 on RKE2 and SLES 15 - SLES for SAP - RKE - SAP DI + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + Rancher Kubernetes Engine -2022-11-11 -SUSE Linux Enterprise Server for SAP Applications 15 SP4 +SUSE Linux Enterprise Server for SAP Applications 15 Rancher Kubernetes Engine 2 SAP Data Intelligence 3.3 @@ -75,8 +72,14 @@ - - + + + 2022-11-11 + + + + + diff --git a/adoc/SAPDI3-RKE2-Install.adoc b/adoc/SAPDI3-RKE2-Install.adoc index 8f51eb334..4d2aece08 100644 --- a/adoc/SAPDI3-RKE2-Install.adoc +++ b/adoc/SAPDI3-RKE2-Install.adoc @@ -1,6 +1,9 @@ :docinfo: +// defining article ID +[#art-art-sapdi3-rke2-install] + :di: SAP Data Intelligence :di_version: 3.3 :sles: SUSE Linux Enterprise Server diff --git a/adoc/SAPDI3-SUSE-VMWare-VSAN-docinfo.xml b/adoc/SAPDI3-SUSE-VMWare-VSAN-docinfo.xml index 342bf6c43..0073ec44f 100644 --- a/adoc/SAPDI3-SUSE-VMWare-VSAN-docinfo.xml +++ b/adoc/SAPDI3-SUSE-VMWare-VSAN-docinfo.xml @@ -1,38 +1,37 @@ https://github.com/SUSE/suse-best-practices/issues/new - SAP Data Intelligence 3 on SUSE's Kubernetes Stack + SAP Data Intelligence 3 on SUSE's Kubernetes Stack using VMware vSAN and vSphere - - Rancher Kubernetes Engine 2, VMWare vSAN, VMWare vSphere, SLES for SAP 15 SP4, SAP Data Intelligence 3 - + -SUSE Best Practices - +Best Practices - Containerization SAP + Containerization Configuration Data Intelligence Installation -SAP Data Intelligence 3 on Rancher Kubernetes Engine 2 using VMware vSAN and vSphere -This document describes the installation and configuration of SAP Data Intelligence 3 deployed on SUSE's RKE2 and VMWare vsphere and vsan. +SAP DI3 on RKE 2 with VMware vSAN and vSphere +This document describes the installation and configuration of SAP Data Intelligence 3 deployed on SUSE's RKE2 and VMWare vSphere and vSan. 
+Configuring SAP DI3 deployed on RKE2 and VMware - SLES for SAP - SAP DI - RKE + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + Rancher Kubernetes Engine -2023-07-24 SUSE Linux Enterprise Server for SAP Applications15 SP4 Rancher Kubernetes Engine 2 SAP Data Intelligence 3 - +VMware vSAN +VMware vSphere @@ -74,8 +73,14 @@ - - + + + 2023-07-24 + + + + + diff --git a/adoc/SAPDI3-SUSE-VMWare-VSAN.adoc b/adoc/SAPDI3-SUSE-VMWare-VSAN.adoc index 4af01f643..52d86a501 100644 --- a/adoc/SAPDI3-SUSE-VMWare-VSAN.adoc +++ b/adoc/SAPDI3-SUSE-VMWare-VSAN.adoc @@ -1,6 +1,9 @@ :docinfo: +// defining article ID +[#art-art-sapdi3-vmware-vsan] + :di: SAP Data Intelligence :di_version: 3.3 :sles: SUSE Linux Enterprise Server diff --git a/adoc/SAPDI3-SUSE_Kubernetes_Stack-docinfo.xml b/adoc/SAPDI3-SUSE_Kubernetes_Stack-docinfo.xml index 96bd2b25a..64aa529b8 100644 --- a/adoc/SAPDI3-SUSE_Kubernetes_Stack-docinfo.xml +++ b/adoc/SAPDI3-SUSE_Kubernetes_Stack-docinfo.xml @@ -5,40 +5,34 @@ - - Rancher Kubernetes Engine 2, Harvester, SUSE Linux Enterprise Server 15 SP4, SAP Data Intelligence 3 - + -SUSE Best Practices - +Best Practices - Containerization SAP + Containerization Configuration Data Intelligence Installation SAP Data Intelligence 3 on SUSE’s Kubernetes Stack This document describes the installation and configuration of SAP Data Intelligence 3 deployed on SUSE's Kubernetes stack, including Harvester, Rancher, RKE2 and Longhorn +Installing SAP DI3 on SUSE's Kubernetes Stack - SLES for SAP - SAP DI - RKE + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + Rancher Kubernetes Engine Harvester -2022-11-11 SUSE Linux Enterprise Server for SAP Applications 15 SP4 Rancher Kubernetes Engine 2 Harvester SAP Data Intelligence 3 - - - - @@ -79,8 +73,14 @@ - - + + + 2022-11-11 + + + + + diff --git a/adoc/SAPDI3-SUSE_Kubernetes_Stack.adoc b/adoc/SAPDI3-SUSE_Kubernetes_Stack.adoc index 4f2a71858..4be3412f3 100644 --- a/adoc/SAPDI3-SUSE_Kubernetes_Stack.adoc +++ b/adoc/SAPDI3-SUSE_Kubernetes_Stack.adoc @@ -1,6 +1,9 @@ :docinfo: +// defining article ID +[#art-sapdi3-suse-kubernetes-stack] + :di: SAP Data Intelligence :di_version: 3.3 :sles: SUSE Linux Enterprise Server diff --git a/adoc/SAPDI31-RKE-Instguide-docinfo.xml b/adoc/SAPDI31-RKE-Instguide-docinfo.xml index a13f981ed..10e4903fa 100644 --- a/adoc/SAPDI31-RKE-Instguide-docinfo.xml +++ b/adoc/SAPDI31-RKE-Instguide-docinfo.xml @@ -9,7 +9,7 @@ Rancher RKE 1, SUSE Linux Enterprise Server 15 SP2, SAP Data Intelligence 3.1 -SUSE Best Practices +Best Practices Containerization @@ -22,12 +22,17 @@ SAP Data Intelligence 3.1 on Rancher Kubernetes Engine 1 This document describes the installation and configuration of RKE 1 from SUSE and SAP Data Intelligence 3.1. 
+Installing RKE1 and SAP DI3 on SLES for SAP 15 - SLES for SAP - SAP DI - RKE + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + + Rancher Kubernetes Engine -2021-03-11 + SUSE Linux Enterprise Server for SAP Applications 15 SP2 Rancher Kubernetes Engine 1 @@ -73,6 +78,16 @@ + + + 2021-03-11 + + + + + + + diff --git a/adoc/SAPDI31-RKE-Instguide.adoc b/adoc/SAPDI31-RKE-Instguide.adoc index 355e5cfbf..d2eef3a01 100644 --- a/adoc/SAPDI31-RKE-Instguide.adoc +++ b/adoc/SAPDI31-RKE-Instguide.adoc @@ -1,5 +1,8 @@ :docinfo: +// defining article ID +[#art-sapdi31-rke-instguide] + = SAP Data Intelligence 3.1 on Rancher Kubernetes Engine 1: Installation Guide == Introduction diff --git a/adoc/SAPDI31-RKE2-Instguide-docinfo.xml b/adoc/SAPDI31-RKE2-Instguide-docinfo.xml index 4f797290b..5bba38664 100644 --- a/adoc/SAPDI31-RKE2-Instguide-docinfo.xml +++ b/adoc/SAPDI31-RKE2-Instguide-docinfo.xml @@ -9,7 +9,7 @@ Rancher Kubernetes Engine 2, SUSE Linux Enterprise Server 15 SP2, SAP Data Intelligence 3.1 -SUSE Best Practices +Best Practices Containerization @@ -20,15 +20,19 @@ Data Intelligence Installation -SAP Data Intelligence 3.1 on Rancher Kubernetes Engine 2 +SAP Data Intelligence 3.1 on RKE 2 This document describes the installation and configuration - of RKE 2 from SUSE and SAP Data Intelligence 3.1. +of RKE 2 from SUSE and SAP Data Intelligence 3.1. +Installation and configuration of SAP DI 3.1. on RKE2 - SLES for SAP - SAP DI - RKE + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + + Rancher Kubernetes Engine -2021-07-26 + SUSE Linux Enterprise Server for SAP Applications 15 SP2 Rancher Kubernetes Engine 2 @@ -74,6 +78,15 @@ + + + 2021-07-26 + + + + + + diff --git a/adoc/SAPDI31-RKE2-Instguide.adoc b/adoc/SAPDI31-RKE2-Instguide.adoc index 7cbdc35fb..92c05b1db 100644 --- a/adoc/SAPDI31-RKE2-Instguide.adoc +++ b/adoc/SAPDI31-RKE2-Instguide.adoc @@ -1,5 +1,8 @@ :docinfo: +// defining article ID +[#art-sapdi31-rke2-instguide] + = SAP Data Intelligence 3.1 on Rancher Kubernetes Engine 2: Installation Guide diff --git a/adoc/SAP_HA740_SetupGuide_AWS-docinfo.xml b/adoc/SAP_HA740_SetupGuide_AWS-docinfo.xml index f22f9115a..1114ea46e 100644 --- a/adoc/SAP_HA740_SetupGuide_AWS-docinfo.xml +++ b/adoc/SAP_HA740_SetupGuide_AWS-docinfo.xml @@ -7,13 +7,8 @@ - - SUSE Linux Enterprise Server for SAP - Applications - 12 SP3+ -SUSE Best Practices - +Best Practices SAP @@ -21,14 +16,17 @@ High Availability Clustering Cloud + Deployment SAP NetWeaver High Availability Cluster7.40 for the AWS Cloud Explains how to deploy a high availability cluster solution using SLES for SAP Applications 12 for the Enqueue Replication scenario on AWS +Deploying an HA solution with SLES for SAP 12 on AWS - SLES for SAP + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications -2020-02-06 SUSE Linux Enterprise Server for SAP Applications 12 SP3 and newer SAP NetWeaver 7.40 @@ -83,7 +81,6 @@ --> - @@ -95,8 +92,14 @@ - - + + + 2020-02-06 + + + + + diff --git a/adoc/SAP_HA740_SetupGuide_AWS.adoc b/adoc/SAP_HA740_SetupGuide_AWS.adoc index 31d40c79c..a3c252e84 100644 --- 
a/adoc/SAP_HA740_SetupGuide_AWS.adoc +++ b/adoc/SAP_HA740_SetupGuide_AWS.adoc @@ -7,6 +7,10 @@ // Start of the document // + +// defining article ID +[#art-sapha740-setup-aws] + = SAP NetWeaver High Availability Cluster 7.40 for the AWS Cloud: Setup Guide // Fabian Herschel, Bernd Schubert, Stefan Schneider (AWS) // 2019/6/28 diff --git a/adoc/SAP_HA740_SetupGuide_SLE12-docinfo.xml b/adoc/SAP_HA740_SetupGuide_SLE12-docinfo.xml index 2e0993199..b58a8158a 100644 --- a/adoc/SAP_HA740_SetupGuide_SLE12-docinfo.xml +++ b/adoc/SAP_HA740_SetupGuide_SLE12-docinfo.xml @@ -7,25 +7,24 @@ - - SUSE Linux Enterprise Server for SAP Applications - 12 - -SUSE Best Practices +Best Practices SAP High Availability Clustering + Deployment -SAP S/4 HANA - Enqueue Replication 1 High Availability Cluster +SAP S/4 HANA Enqueue Replication 1 HA Cluster on SLES12 This document explains how to deploy an SAP NetWeaver Enqueue Replication 1 High Availability Cluster solution. +Deploying an SAP NW ER1 HA cluster solution on SLES 12 - SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications -2020-04-03 SUSE Linux Enterprise Server for SAP Applications 12 SAP NetWeaver 7.40 and 7.50 @@ -42,16 +41,6 @@ - - Bernd - Schubert - - - SAP Solution Architect - SUSE - - - + - @@ -81,8 +69,14 @@ - - + + + 2020-04-03 + + + + + diff --git a/adoc/SAP_HA740_SetupGuide_SLE12.adoc b/adoc/SAP_HA740_SetupGuide_SLE12.adoc index 2965cb6ad..365b78c8a 100644 --- a/adoc/SAP_HA740_SetupGuide_SLE12.adoc +++ b/adoc/SAP_HA740_SetupGuide_SLE12.adoc @@ -1,5 +1,9 @@ :docinfo: + +// defining article ID +[#art-sapha740-setup-sle12] + :slesProdVersion: 12 = SAP NetWeaver Enqueue Replication 1 High Availability Cluster - SAP NetWeaver 7.40 and 7.50: Setup Guide for SUSE Linux Enterprise Server 12 diff --git a/adoc/SAP_S4HA10_SetupGuide_SLE12-docinfo.xml b/adoc/SAP_S4HA10_SetupGuide_SLE12-docinfo.xml index 19e6b1bcc..f8242128f 100644 --- a/adoc/SAP_S4HA10_SetupGuide_SLE12-docinfo.xml +++ b/adoc/SAP_S4HA10_SetupGuide_SLE12-docinfo.xml @@ -7,29 +7,27 @@ - - SUSE Linux Enterprise Server for SAP Applications - 12 -SUSE Best Practices - +Best Practices SAP High Availability Clustering + Deployment -SAP S/4 HANA - Enqueue Replication 2 High Availability Cluster -This document explains how to deploy an S/4 HANA Enqueue Replication 2 High Availability Cluster solution. +SAP S/4 HANA - Enqueue Replication 2 HA Cluster +This document explains how to deploy an SAP S/4 HANA Enqueue Replication 2 High Availability Cluster solution on SLES 12. 
+Deploying an SAP S/4 hana ER2 HA cluster on SLES12 - SLES for SAP + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications -2020-02-25 SUSE Linux Enterprise Server for SAP Applications 12 - @@ -70,8 +68,14 @@ - - + + + 2020-02-25 + + + + + diff --git a/adoc/SAP_S4HA10_SetupGuide_SLE12.adoc b/adoc/SAP_S4HA10_SetupGuide_SLE12.adoc index 69fefe382..49950c794 100644 --- a/adoc/SAP_S4HA10_SetupGuide_SLE12.adoc +++ b/adoc/SAP_S4HA10_SetupGuide_SLE12.adoc @@ -3,6 +3,9 @@ :slesProdVersion: 12 // +// defining article ID +[#art-sap-s4ha10-setup-sles12] + = SAP S/4 HANA - Enqueue Replication 2 High Availability Cluster: Setup Guide for SUSE Linux Enterprise Server 12 // == SAP S/4 HANA 1809 High Availability Cluster diff --git a/adoc/SLES4SAP-HANAonKVM-15SP2-docinfo.xml b/adoc/SLES4SAP-HANAonKVM-15SP2-docinfo.xml index 5ca058eb4..fdbb6acf3 100644 --- a/adoc/SLES4SAP-HANAonKVM-15SP2-docinfo.xml +++ b/adoc/SLES4SAP-HANAonKVM-15SP2-docinfo.xml @@ -7,12 +7,8 @@ - - SUSE Linux Enterprise Server for SAP Applications - 15 SP2 -SUSE Best Practices - +Best Practices SAP @@ -20,13 +16,14 @@ Virtualization Configuration -SUSE Best Practices for SAP HANA on KVM +SLES for SAP 15 SP2 Best Practices for SAP HANA on KVM This document describes how SLES for SAP Applications 15 SP2 with KVM should be configured to run SAP HANA for use in production environments. +Configuring KVM on SLES for SAP 15 SP2 for SAP HANA - SLES for SAP + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications -2021-12-02 SUSE Linux Enterprise Server for SAP Applications 15 SP2 @@ -70,8 +67,14 @@ - - + + + 2021-12-02 + + + + + diff --git a/adoc/SLES4SAP-HANAonKVM-15SP2.adoc b/adoc/SLES4SAP-HANAonKVM-15SP2.adoc index 7481d0dc9..039ffd5dd 100644 --- a/adoc/SLES4SAP-HANAonKVM-15SP2.adoc +++ b/adoc/SLES4SAP-HANAonKVM-15SP2.adoc @@ -2,6 +2,9 @@ :localdate: +// defining article ID +[#art-sap-hana-kvm-sles15sp2] + // Document Variables :DocumentName: SUSE Best Practices for SAP HANA on KVM :slesProdVersion: 15 SP2 diff --git a/adoc/SLES4SAP-HANAonKVM-15SP4-docinfo.xml b/adoc/SLES4SAP-HANAonKVM-15SP4-docinfo.xml index b9593769c..5d1e5b99c 100644 --- a/adoc/SLES4SAP-HANAonKVM-15SP4-docinfo.xml +++ b/adoc/SLES4SAP-HANAonKVM-15SP4-docinfo.xml @@ -7,12 +7,8 @@ - - SUSE Linux Enterprise Server for SAP Applications - 15 SP4 -SUSE Best Practices - +Best Practices SAP @@ -20,15 +16,17 @@ Virtualization Configuration -SUSE Best Practices for SAP HANA on KVM +SLES for SAP 15 SP4 Best Practices for SAP HANA on KVM This document describes how SLES for SAP Applications 15 SP4 with KVM should be configured to run SAP HANA for use in production environments. 
+Configuring KVM on SLES for SAP 15 SP4 for SAP HANA - SLES for SAP + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications -2023-11-29 -SUSE Linux Enterprise Server for SAP Applications 15 SP4 +SUSE Linux Enterprise Server for SAP Applications 15 SP4 and later @@ -80,8 +78,14 @@ - - + + + 2023-11-29 + + + + + diff --git a/adoc/SLES4SAP-HANAonKVM-15SP4.adoc b/adoc/SLES4SAP-HANAonKVM-15SP4.adoc index 9fbfd2f19..1700713aa 100644 --- a/adoc/SLES4SAP-HANAonKVM-15SP4.adoc +++ b/adoc/SLES4SAP-HANAonKVM-15SP4.adoc @@ -2,6 +2,9 @@ :localdate: +// defining article ID +[#art-sap-hana-kvm-sles15sp4] + // Document Variables :DocumentName: SUSE Best Practices for SAP HANA on KVM :slesProdVersion: 15 SP4 diff --git a/adoc/SLES4SAP-hana-angi-perfopt-15-docinfo.xml b/adoc/SLES4SAP-hana-angi-perfopt-15-docinfo.xml index a6c58ed0e..703b0d0a2 100644 --- a/adoc/SLES4SAP-hana-angi-perfopt-15-docinfo.xml +++ b/adoc/SLES4SAP-hana-angi-perfopt-15-docinfo.xml @@ -5,14 +5,10 @@ - + - - SUSE Linux Enterprise Server for SAP Applications - 15 SP4, SP5, SP6 Best Practices - SAP @@ -25,11 +21,11 @@ How to install and customize SLES for SAP Applications for SAP HANA system replication in the scale-up performance-optimized scenario using SAPHanaSR-angi Install SAP HANA SR Scale-Up performance-opt with angi - + SUSE Linux Enterprise Server for SAP Applications 15 @@ -82,9 +78,6 @@ - - - SUSE® Linux Enterprise Server for SAP Applications is optimized in various ways for SAP* diff --git a/adoc/SLES4SAP-hana-angi-scaleout-perfopt-15-docinfo.xml b/adoc/SLES4SAP-hana-angi-scaleout-perfopt-15-docinfo.xml index 2a4dab101..e4079022d 100644 --- a/adoc/SLES4SAP-hana-angi-scaleout-perfopt-15-docinfo.xml +++ b/adoc/SLES4SAP-hana-angi-scaleout-perfopt-15-docinfo.xml @@ -7,12 +7,8 @@ - - SUSE Linux Enterprise Server for SAP Applications - 15 SP4, SP5, SP6 SUSE Best Practices - SAP @@ -25,12 +21,11 @@ How to install and customize SLES for SAP Applications for SAP HANA system replication in the scale-out performance-optimized scenario using SAPHanaSR-angi Install SAP HANA SR Scale-Out performance-opt with angi - - + SUSE Linux Enterprise Server for SAP Applications 15 @@ -83,9 +78,6 @@ - - - SUSE® Linux Enterprise Server for SAP Applications is diff --git a/adoc/SLES4SAP-hana-scaleOut-PerfOpt-12-AWS-docinfo.xml b/adoc/SLES4SAP-hana-scaleOut-PerfOpt-12-AWS-docinfo.xml index 35bc212e7..a9d613ed0 100644 --- a/adoc/SLES4SAP-hana-scaleOut-PerfOpt-12-AWS-docinfo.xml +++ b/adoc/SLES4SAP-hana-scaleOut-PerfOpt-12-AWS-docinfo.xml @@ -6,30 +6,26 @@ - - - SUSE Linux Enterprise Server for SAP - Applications - 12 SP4+ -SUSE Best Practices - +Best Practices SAP High Availability + Clustering Cloud Installation -SAP HANA System Replication Scale-Out High Availability in Amazon Web Services +SAP HANA System Replication Scale-Out HA in AWS How to install and customize SLES for SAP Applications for SAP HANA Scale-Out system replication with automated failover in the AWS Cloud. 
+SAP HANA System Replication Scale-Out HA in AWS - SLES for SAP - -2022-05-27 + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications 12 SP4 or newer Amazon Web Services @@ -114,8 +110,14 @@ - - + + + 2022-05-27 + + + + + diff --git a/adoc/SLES4SAP-hana-scaleOut-PerfOpt-12-AWS.adoc b/adoc/SLES4SAP-hana-scaleOut-PerfOpt-12-AWS.adoc index 3976ea20e..218c0327f 100644 --- a/adoc/SLES4SAP-hana-scaleOut-PerfOpt-12-AWS.adoc +++ b/adoc/SLES4SAP-hana-scaleOut-PerfOpt-12-AWS.adoc @@ -2,6 +2,9 @@ include::Variables.adoc[] :docinfo: +// defining article ID +[#art-sap-hana-perfopt12-aws] + // // Start of the document // diff --git a/adoc/SLES4SAP-hana-scaleOut-PerfOpt-12-docinfo.xml b/adoc/SLES4SAP-hana-scaleOut-PerfOpt-12-docinfo.xml index cb22ef869..6b7848da2 100644 --- a/adoc/SLES4SAP-hana-scaleOut-PerfOpt-12-docinfo.xml +++ b/adoc/SLES4SAP-hana-scaleOut-PerfOpt-12-docinfo.xml @@ -5,14 +5,9 @@ - - - - SUSE Linux Enterprise Server for SAP Applications - 12 SP2+ + -SUSE Best Practices - +Best Practices SAP @@ -21,13 +16,16 @@ Clustering Installation -SAP HANA System Replication Scale-Out - Performance Optimized Scenario +SAP HANA SR Scale-Out Performance Optimized Scenario How to install and customize SLES for SAP Applications for SAP HANA Scale-Out system replication automation in the performance optimized scenario +SAP HANA SR Scale-Out Performance Optimized Scenario - SLES for SAP + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications -2022-12-07 SUSE Linux Enterprise Server for SAP Applications 12 SP2 or newer @@ -81,7 +79,14 @@ - + + + 2022-12-07 + + + + + SUSE® Linux Enterprise Server for SAP Applications is diff --git a/adoc/SLES4SAP-hana-scaleOut-PerfOpt-12.adoc b/adoc/SLES4SAP-hana-scaleOut-PerfOpt-12.adoc index f45da6370..5e3ac355f 100644 --- a/adoc/SLES4SAP-hana-scaleOut-PerfOpt-12.adoc +++ b/adoc/SLES4SAP-hana-scaleOut-PerfOpt-12.adoc @@ -1,6 +1,10 @@ // Load document variables include::Variables.adoc[] :docinfo: + +// defining article ID +[#art-sap-hana-perfopt12] + // // Start of the document // diff --git a/adoc/SLES4SAP-hana-scaleOut-PerfOpt-15-docinfo.xml b/adoc/SLES4SAP-hana-scaleOut-PerfOpt-15-docinfo.xml index 0c0cb22e3..3730f1656 100644 --- a/adoc/SLES4SAP-hana-scaleOut-PerfOpt-15-docinfo.xml +++ b/adoc/SLES4SAP-hana-scaleOut-PerfOpt-15-docinfo.xml @@ -6,15 +6,9 @@ - - - - SUSE Linux Enterprise Server for SAP - Applications - 15 + -SUSE Best Practices - +Best Practices SAP @@ -23,13 +17,18 @@ Clustering Installation -SAP HANA System Replication Scale-Out - Performance Optimized Scenario -How to install and customize SLES for SAP Applications for SAP HANA Scale-Out system +SAP HANA SR Scale-Out Performance Optimized with SLES15 +How to install and customize SLES for SAP Applications 15 for SAP HANA Scale-Out system replication automation in the performance optimized scenario +SAP HANA SR Scale-Out Performance Optimized with SLES15 - SLES for SAP + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications -2022-12-07 SUSE Linux Enterprise Server for SAP Applications 15 @@ -83,8 +82,14 @@ 
- - + + + 2022-12-07 + + + + + diff --git a/adoc/SLES4SAP-hana-scaleOut-PerfOpt-15.adoc b/adoc/SLES4SAP-hana-scaleOut-PerfOpt-15.adoc index 381d07264..5ca2428b1 100644 --- a/adoc/SLES4SAP-hana-scaleOut-PerfOpt-15.adoc +++ b/adoc/SLES4SAP-hana-scaleOut-PerfOpt-15.adoc @@ -2,6 +2,10 @@ include::Var_SLES4SAP-hana-scaleOut-PerfOpt-15.txt[] include::Var_SLES4SAP-hana-scaleOut-PerfOpt-15-param.txt[] :docinfo: + +// defining article ID +[#art-sap-hana-perfopt15] + // // Start of the document // diff --git a/adoc/SLES4SAP-hana-scaleout-multitarget-perfopt-15-docinfo.xml b/adoc/SLES4SAP-hana-scaleout-multitarget-perfopt-15-docinfo.xml index acffc1e9d..6f60bdcb1 100644 --- a/adoc/SLES4SAP-hana-scaleout-multitarget-perfopt-15-docinfo.xml +++ b/adoc/SLES4SAP-hana-scaleout-multitarget-perfopt-15-docinfo.xml @@ -6,14 +6,8 @@ - - - SUSE Linux Enterprise Server for SAP - Applications - 15 -SUSE Best Practices - +Best Practices SAP @@ -22,12 +16,17 @@ Clustering Installation -SAP HANA System Replication Scale-Out - Multi-Target Performance-Optimized Scenario +SAP HANA SR Multi-Target Performance-Optimized Scenario Install & customize SLES for SAP Applications for SAP HANA Scale-Out in performance optimized scenarios, add a 3rd site in a multi-target architecture +SAP HANA SR Multi-Target Performance-Optimized Scenario - SLES for SAP + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications -2022-12-22 SUSE Linux Enterprise Server for SAP Applications 15 @@ -81,8 +80,14 @@ - - + + + 2022-12-22 + + + + + diff --git a/adoc/SLES4SAP-hana-scaleout-multitarget-perfopt-15.adoc b/adoc/SLES4SAP-hana-scaleout-multitarget-perfopt-15.adoc index d5f043398..a323c101d 100644 --- a/adoc/SLES4SAP-hana-scaleout-multitarget-perfopt-15.adoc +++ b/adoc/SLES4SAP-hana-scaleout-multitarget-perfopt-15.adoc @@ -6,6 +6,9 @@ include::Var_SLES4SAP-hana-scaleOut-multiTarget-PerfOpt-15-param.txt[] // Start of the document // +// defining article ID +[#art-sap-multi-target-perfopt] + = {saphana} System Replication Scale-Out - Multi-Target Performance-Optimized Scenario // TODO PRIO3: use variables like {usecase}, as in scale-up diff --git a/adoc/SLES4SAP-hana-sr-guide-CostOpt-12-docinfo.xml b/adoc/SLES4SAP-hana-sr-guide-CostOpt-12-docinfo.xml index 1c627c61f..962d2a5ca 100644 --- a/adoc/SLES4SAP-hana-sr-guide-CostOpt-12-docinfo.xml +++ b/adoc/SLES4SAP-hana-sr-guide-CostOpt-12-docinfo.xml @@ -6,13 +6,8 @@ - - - SUSE Linux Enterprise Server for SAP Applications - 12 SP4 -SUSE Best Practices - +Best Practices SAP @@ -21,15 +16,16 @@ Clustering Installation -SAP HANA System Replication Scale-Up - Cost Optimized Scenario +SAP HANA SR Scale-Up - Cost Optimized Scenario SLES12 How to install and customize SLES for SAP Applications for SAP HANA Scale-Up system replication automation in the cost optimized scenario +SAP HANA SR Scale-Up Cost Optimized with SLES12 - SLES for SAP + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications -2022-12-07 -SUSE Linux Enterprise Server for SAP Applications 12 SP4 +SUSE Linux Enterprise Server for SAP Applications 12 SP4 and later @@ -81,7 +77,14 @@ - + + + 2022-12-07 + + + + + SUSE® Linux Enterprise Server for SAP Applications is diff --git a/adoc/SLES4SAP-hana-sr-guide-CostOpt-12.adoc 
b/adoc/SLES4SAP-hana-sr-guide-CostOpt-12.adoc index 7241a3300..ba7596cba 100644 --- a/adoc/SLES4SAP-hana-sr-guide-CostOpt-12.adoc +++ b/adoc/SLES4SAP-hana-sr-guide-CostOpt-12.adoc @@ -10,6 +10,9 @@ include::Var_SLES4SAP-hana-sr-guide-CostOpt-12.txt[] include::Var_SLES4SAP-hana-sr-guide-CostOpt-12-param.txt[] +// defining article ID +[#art-sap-hana-srguide-costopt12] + // Start of the document = {SAPHANA} System Replication Scale-Up - Cost Optimized Scenario diff --git a/adoc/SLES4SAP-hana-sr-guide-PerfOpt-12-AWS-docinfo.xml b/adoc/SLES4SAP-hana-sr-guide-PerfOpt-12-AWS-docinfo.xml index 5b560d5fc..bca2d7f7d 100644 --- a/adoc/SLES4SAP-hana-sr-guide-PerfOpt-12-AWS-docinfo.xml +++ b/adoc/SLES4SAP-hana-sr-guide-PerfOpt-12-AWS-docinfo.xml @@ -7,12 +7,8 @@ - - SUSE Linux Enterprise Server for SAP Applications - 12 SP5 -SUSE Best Practices - +Best Practices SAP @@ -21,13 +17,13 @@ Cloud Deployment -SAP HANA High Availability Cluster for the AWS Cloud +SAP HANA HA Cluster on AWS Cloud and SLES12 How to install and customize SLES for SAP Applications for SAP HANA system replication in the performance-optimized scenario on the AWS platform +Installing an SAP HANA HA cluster on AWS and SLES12 - SLES for SAP + SUSE Linux Enterprise Server for SAP Applications -2021-10-06 SUSE Linux Enterprise Server for SAP Applications 12 SP5 Amazon Web Services @@ -112,9 +108,14 @@ - - + + + 2021-10-06 + + + + + diff --git a/adoc/SLES4SAP-hana-sr-guide-PerfOpt-12-AWS.adoc b/adoc/SLES4SAP-hana-sr-guide-PerfOpt-12-AWS.adoc index bf90f28ae..8053953a6 100644 --- a/adoc/SLES4SAP-hana-sr-guide-PerfOpt-12-AWS.adoc +++ b/adoc/SLES4SAP-hana-sr-guide-PerfOpt-12-AWS.adoc @@ -5,6 +5,9 @@ :slesProdVersion: 12 +// defining article ID +[#art-sap-hana-srguide-perfopt12-aws] + = SAP HANA High Availability Cluster for the AWS Cloud: Setup Guide (v12) //= SUSE Linux Enterprise Server for SAP Applications 12 SP5 for the AWS Cloud - Setup Guide diff --git a/adoc/SLES4SAP-hana-sr-guide-PerfOpt-12-Alicloud-docinfo.xml b/adoc/SLES4SAP-hana-sr-guide-PerfOpt-12-Alicloud-docinfo.xml index 3a0d6ae20..d5fbacd06 100644 --- a/adoc/SLES4SAP-hana-sr-guide-PerfOpt-12-Alicloud-docinfo.xml +++ b/adoc/SLES4SAP-hana-sr-guide-PerfOpt-12-Alicloud-docinfo.xml @@ -7,13 +7,8 @@ - - SUSE Linux Enterprise Server for SAP - Applications - 12 SP2+ -SUSE Best Practices - +Best Practices SAP @@ -22,15 +17,18 @@ Cloud Deployment -SAP HANA High Availability Cross-Zone Solution on Alibaba Cloud +SAP HANA HA Cross-Zone Solution on Alibaba Cloud This document explains how to deploy an SAP HANA - High Availability solution across different Zones on Alibaba Cloud. + High Availability solution across different zones on Alibaba Cloud. 
+SAP HANA HA cross-zone solution on AliCloud and SLES12 - SLES for SAP + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications -2019-05-24 -SUSE Linux Enterprise Server for SAP Applications 15 +SUSE Linux Enterprise Server for SAP Applications 12 and later Alibaba Cloud @@ -82,8 +80,14 @@ - - + + + 2019-05-24 + + + + + diff --git a/adoc/SLES4SAP-hana-sr-guide-PerfOpt-12-Alicloud.adoc b/adoc/SLES4SAP-hana-sr-guide-PerfOpt-12-Alicloud.adoc index a3a736b65..fa141f7e8 100644 --- a/adoc/SLES4SAP-hana-sr-guide-PerfOpt-12-Alicloud.adoc +++ b/adoc/SLES4SAP-hana-sr-guide-PerfOpt-12-Alicloud.adoc @@ -2,6 +2,9 @@ :localdate: +// defining article ID +[#art-sap-hana-srguide-perfopt12-ali] + // Start of the document // diff --git a/adoc/SLES4SAP-hana-sr-guide-PerfOpt-12-docinfo.xml b/adoc/SLES4SAP-hana-sr-guide-PerfOpt-12-docinfo.xml index ecc4e2505..b8e9d6a9e 100644 --- a/adoc/SLES4SAP-hana-sr-guide-PerfOpt-12-docinfo.xml +++ b/adoc/SLES4SAP-hana-sr-guide-PerfOpt-12-docinfo.xml @@ -6,13 +6,8 @@ - - -SUSE Linux Enterprise Server for SAP Applications -12 SP4 -SUSE Best Practices -Best Practices +Best Practices SAP @@ -21,15 +16,16 @@ Clustering Installation -SAP HANA System Replication Scale-Up - Performance Optimized Scenario +SAP HANA SR Scale-Up - Performance Optimized Scenario How to install and customize SLES for SAP Applications for SAP HANA system replication in the performance optimized scenario +SAP HANA SR Scale-Up performance optimized with SLES12 - SLES for SAP + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications -2022-12-07 -SUSE Linux Enterprise Server for SAP Applications 12 SP4 +SUSE Linux Enterprise Server for SAP Applications 12 SP4 and later @@ -81,8 +77,14 @@ - - + + + 2022-12-07 + + + + + diff --git a/adoc/SLES4SAP-hana-sr-guide-PerfOpt-12.adoc b/adoc/SLES4SAP-hana-sr-guide-PerfOpt-12.adoc index 739415bb8..066c6b578 100644 --- a/adoc/SLES4SAP-hana-sr-guide-PerfOpt-12.adoc +++ b/adoc/SLES4SAP-hana-sr-guide-PerfOpt-12.adoc @@ -5,6 +5,11 @@ include::Var_SLES4SAP-hana-sr-guide-PerfOpt-12.txt[] include::Var_SLES4SAP-hana-sr-guide-PerfOpt-12-param.txt[] :docinfo: // + +// defining article ID +[#art-sap-hana-srguide-perfopt12] + + // Start of the document // diff --git a/adoc/SLES4SAP-hana-sr-guide-PerfOpt-15-docinfo.xml b/adoc/SLES4SAP-hana-sr-guide-PerfOpt-15-docinfo.xml index 6b33eb4cf..456e22597 100644 --- a/adoc/SLES4SAP-hana-sr-guide-PerfOpt-15-docinfo.xml +++ b/adoc/SLES4SAP-hana-sr-guide-PerfOpt-15-docinfo.xml @@ -6,13 +6,8 @@ - - - SUSE Linux Enterprise Server for SAP Applications - 15 -SUSE Best Practices - +Best Practices SAP @@ -21,13 +16,18 @@ Clustering Installation -SAP HANA System Replication Scale-Up - Performance Optimized Scenario +SAP HANA SR Scale-Up - Performance Optimized Scenario SLES15 How to install and customize SLES for SAP Applications for SAP HANA system replication in the performance-optimized scenario +SAP HANA SR Scale-Up performance optimized with SLES15 - SLES for SAP + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications -2022-12-07 SUSE Linux Enterprise Server for SAP Applications 15 @@ -81,8 +81,14 @@ 
- - + + + 2022-12-07 + + + + + diff --git a/adoc/SLES4SAP-hana-sr-guide-PerfOpt-15.adoc b/adoc/SLES4SAP-hana-sr-guide-PerfOpt-15.adoc index 6b79547b7..ccfb01233 100644 --- a/adoc/SLES4SAP-hana-sr-guide-PerfOpt-15.adoc +++ b/adoc/SLES4SAP-hana-sr-guide-PerfOpt-15.adoc @@ -1,5 +1,8 @@ :docinfo: +// defining article ID +[#art-sap-hana-srguide-perfopt15] + // Load document variables include::Var_SLES4SAP-hana-sr-guide-PerfOpt-15.txt[] include::Var_SLES4SAP-hana-sr-guide-PerfOpt-15-param.txt[] diff --git a/adoc/SLES4SAP-hana-sr-guide-costopt-15-docinfo.xml b/adoc/SLES4SAP-hana-sr-guide-costopt-15-docinfo.xml index 5bf72b3b6..15fec9407 100644 --- a/adoc/SLES4SAP-hana-sr-guide-costopt-15-docinfo.xml +++ b/adoc/SLES4SAP-hana-sr-guide-costopt-15-docinfo.xml @@ -6,13 +6,8 @@ - - - SUSE Linux Enterprise Server for SAP Applications - 15 -SUSE Best Practices - +Best Practices SAP @@ -21,13 +16,19 @@ Clustering Installation -SAP HANA System Replication Scale-Up - Cost Optimized Scenario +SAP HANA SR Scale-Up - Cost Optimized Scenario SLES15 How to install and customize SLES for SAP Applications for SAP HANA Scale-Up system replication automation in the cost optimized scenario +SAP HANA SR Scale-Up Cost Optimized with SLES15 - SLES for SAP + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications -2022-12-07 SUSE Linux Enterprise Server for SAP Applications 15 @@ -101,7 +102,14 @@ - + + + 2022-12-07 + + + + + SUSE® Linux Enterprise Server for SAP Applications is diff --git a/adoc/SLES4SAP-hana-sr-guide-costopt-15.adoc b/adoc/SLES4SAP-hana-sr-guide-costopt-15.adoc index cfe417230..be7d851a1 100644 --- a/adoc/SLES4SAP-hana-sr-guide-costopt-15.adoc +++ b/adoc/SLES4SAP-hana-sr-guide-costopt-15.adoc @@ -4,6 +4,9 @@ include::Var_SLES4SAP-hana-sr-guide-CostOpt-15.txt[] include::Var_SLES4SAP-hana-sr-guide-CostOpt-15-param.txt[] +// defining article ID +[#art-sap-hana-srguide-costopt15] + // Start of the document = {SAPHANA} System Replication Scale-Up - Cost Optimized Scenario diff --git a/adoc/SLES4SAP-hana-sr-guide-perfopt-15-aws-docinfo.xml b/adoc/SLES4SAP-hana-sr-guide-perfopt-15-aws-docinfo.xml index 93ad644b2..670b3ce28 100644 --- a/adoc/SLES4SAP-hana-sr-guide-perfopt-15-aws-docinfo.xml +++ b/adoc/SLES4SAP-hana-sr-guide-perfopt-15-aws-docinfo.xml @@ -6,13 +6,8 @@ - - - SUSE Linux Enterprise Server for SAP Applications - 15 SP1 -SUSE Best Practices - +Best Practices SAP @@ -25,12 +20,17 @@ SAP HANA High Availability Cluster for the AWS Cloud How to install and customize SLES for SAP Applications for SAP HANA system replication in the performance-optimized scenario on AWS +SAP HANA HA cluster on AWS Cloud and SLES15 - SLES for SAP + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications -2022-02-28 -SUSE Linux Enterprise Server for SAP Applications 15 SP1 +SUSE Linux Enterprise Server for SAP Applications 15 SP1 and later Amazon Web Services @@ -44,7 +44,7 @@ SUSE - + Bernd Schubert @@ -93,8 +93,14 @@ - - + + + 2022-02-28 + + + + + diff --git 
a/adoc/SLES4SAP-hana-sr-guide-perfopt-15-aws.adoc b/adoc/SLES4SAP-hana-sr-guide-perfopt-15-aws.adoc index 44ff51d82..9dc93ffd1 100644 --- a/adoc/SLES4SAP-hana-sr-guide-perfopt-15-aws.adoc +++ b/adoc/SLES4SAP-hana-sr-guide-perfopt-15-aws.adoc @@ -11,6 +11,10 @@ include::Var_SLES4SAP-hana-sr-guide-PerfOpt-15.txt[] include::Var_SLES4SAP-hana-sr-guide-PerfOpt-15-param.txt[] :docinfo: // + +// defining article ID +[#art-sap-hana-srguide-perfopt15-aws] + // Start of the document // diff --git a/adoc/SLES4SAP-sap-infra-monitoring-docinfo.xml b/adoc/SLES4SAP-sap-infra-monitoring-docinfo.xml index 6d302eede..f4c193068 100644 --- a/adoc/SLES4SAP-sap-infra-monitoring-docinfo.xml +++ b/adoc/SLES4SAP-sap-infra-monitoring-docinfo.xml @@ -6,27 +6,28 @@ - -SUSE Linux Enterprise Server for SAP Applications -15 SP3 -SUSE Best Practices - +Best Practices SAP Monitoring -SUSE Best Practices for SAP HANA on KVM +Infrastructure monitoring for SAP Systems How to install and customize SLES for SAP Applications to monitor hardware-related metrics to help increase the uptime of critical SAP applications +Infrastructure monitoring for SAP Systems - SLES for SAP + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications -2023-09-29 -SUSE Linux Enterprise Server for SAP Applications 15 SP3 +SUSE Linux Enterprise Server for SAP Applications 15 SP3 and later @@ -78,7 +79,14 @@ - + + + 2023-09-29 + + + + + diff --git a/adoc/SLES4SAP-sap-infra-monitoring.adoc b/adoc/SLES4SAP-sap-infra-monitoring.adoc index 7656526bd..54616141f 100644 --- a/adoc/SLES4SAP-sap-infra-monitoring.adoc +++ b/adoc/SLES4SAP-sap-infra-monitoring.adoc @@ -5,6 +5,9 @@ // enable docinfo :docinfo: +// defining article ID +[#art-sap-infra-monitoring] + :reg: ® :tm: ™ diff --git a/adoc/TRD-Linux-gs-wordpress-lamp-sles-docinfo.xml b/adoc/TRD-Linux-gs-wordpress-lamp-sles-docinfo.xml index bf3414dfb..842453359 100644 --- a/adoc/TRD-Linux-gs-wordpress-lamp-sles-docinfo.xml +++ b/adoc/TRD-Linux-gs-wordpress-lamp-sles-docinfo.xml @@ -9,33 +9,27 @@ - -SUSE Linux Enterprise Server -12, 15 -Technical Reference Documentation +Technical References Getting Started - WordPress; + WordPress SUSE - - WordPress on LAMP on SUSE Linux Enterprise Server This guide helps users install and configure WordPress - using the LAMP stack on SUSE Linux Enterprise Server. +using the LAMP stack on SUSE Linux Enterprise Server. +Installing and configuring WordPress on LAMP on SLES - SLES - SLES + SUSE Linux Enterprise Server + SUSE Linux Enterprise Server + SUSE Linux Enterprise Server + SUSE Linux Enterprise Server + SUSE Linux Enterprise Server + SUSE Linux Enterprise Server + SUSE Linux Enterprise Server -2023-07-27 SUSE Linux Enterprise Server 15 SUSE Linux Enterprise Server 12 @@ -48,7 +42,7 @@ Yang - + SUSE @@ -71,7 +65,14 @@ - + + + 2023-07-27 + + + + + This guide helps users install and configure WordPress using the LAMP stack on SUSE Linux Enterprise Server. 
diff --git a/adoc/TRD-Linux-gs-wordpress-lamp-sles.adoc b/adoc/TRD-Linux-gs-wordpress-lamp-sles.adoc index 157c1d1fe..6e3db47f5 100644 --- a/adoc/TRD-Linux-gs-wordpress-lamp-sles.adoc +++ b/adoc/TRD-Linux-gs-wordpress-lamp-sles.adoc @@ -1,5 +1,8 @@ :docinfo: +// defining article ID +[#art-wordpress-lamp-sles] + // = {title} = WordPress on LAMP on SUSE Linux Enterprise Server // SUSE Linux Enterprise Server 12, SUSE Linux Enterprise Server 15 diff --git a/adoc/TRD-SLES-SAP-HA-automation-quickstart-cloud-docinfo.xml b/adoc/TRD-SLES-SAP-HA-automation-quickstart-cloud-docinfo.xml index 3bccf1b72..7ad372eda 100644 --- a/adoc/TRD-SLES-SAP-HA-automation-quickstart-cloud-docinfo.xml +++ b/adoc/TRD-SLES-SAP-HA-automation-quickstart-cloud-docinfo.xml @@ -8,30 +8,29 @@ -SLES for SAP, SLES +SLES for SAP 12 and 15, SLES -Technical Reference Documentation +Technical References Getting Started SAP {cloud} - - -Using SUSE Automation to Deploy an SAP HANA Cluster {cloud} Cloud Platform + +SAP HANA Cluster on {cloud} with SUSE Automation Deployment of a two-node SAP HANA High Availability Cluster using the SUSE Automation Project into a sandbox environment, operated on a public cloud +Deploying an SAP HANA HA cluster with SUSE Automation - SLES for SAP - SLES for SAP + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications -2021-09-09 SUSE Linux Enterprise Server for SAP Applications SUSE Linux Enterprise Server @@ -67,16 +66,14 @@ - - - - + + + 2021-09-09 + + + + + diff --git a/adoc/TRD-SLES-SAP-HA-automation-quickstart-cloud.adoc b/adoc/TRD-SLES-SAP-HA-automation-quickstart-cloud.adoc index f25f21afb..565414914 100644 --- a/adoc/TRD-SLES-SAP-HA-automation-quickstart-cloud.adoc +++ b/adoc/TRD-SLES-SAP-HA-automation-quickstart-cloud.adoc @@ -1,6 +1,9 @@ // enable docinfo :docinfo: +// defining article ID +[#art-sap-ha-automation-cloud] + // the ifdef's make it possible to only change the DC file for generating the right document ifdef::Azure[] :cloud: Azure diff --git a/adoc/sap-nw740-sle15-setupguide-docinfo.xml b/adoc/sap-nw740-sle15-setupguide-docinfo.xml index 99617236f..d37b1e6f0 100644 --- a/adoc/sap-nw740-sle15-setupguide-docinfo.xml +++ b/adoc/sap-nw740-sle15-setupguide-docinfo.xml @@ -8,26 +8,28 @@ - - SUSE Linux Enterprise Server for SAP Applications - 15 -SUSE Best Practices - +Best Practices SAP High Availability Clustering + Deployment -SAP NetWeaver Enqueue Replication 1 High Availability Cluster - SAP NetWeaver 7.40 and 7.50 +SAP NetWeaver 7.40 and 7.50 ER 1 HA Cluster Setup Guide This document explains how to deploy an SAP NetWeaver Enqueue Replication 1 High Availability Cluster solution. 
+Deploying an SAP NW ER 1 HA cluster - SLES for SAP + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications -2023-05-22 SUSE Linux Enterprise Server for SAP Applications 15 SAP NetWeaver 7.40 and 7.50 @@ -82,7 +84,14 @@ - + + + 2023-05-22 + + + + + diff --git a/adoc/sap-nw740-sle15-setupguide.adoc b/adoc/sap-nw740-sle15-setupguide.adoc index 3a3e5909c..2899a2597 100644 --- a/adoc/sap-nw740-sle15-setupguide.adoc +++ b/adoc/sap-nw740-sle15-setupguide.adoc @@ -2,6 +2,9 @@ // Document Variables :slesProdVersion: 15 +// defining article ID +[#art-sapnw740-sles15] + = SAP NetWeaver Enqueue Replication 1 High Availability Cluster - SAP NetWeaver 7.40 and 7.50: Setup Guide // Standard SUSE includes //include::common_copyright_gfdl.adoc[] diff --git a/adoc/sap_hana_azure-main_document-docinfo.xml b/adoc/sap_hana_azure-main_document-docinfo.xml index 8331eb9a4..f840c8341 100644 --- a/adoc/sap_hana_azure-main_document-docinfo.xml +++ b/adoc/sap_hana_azure-main_document-docinfo.xml @@ -7,12 +7,8 @@ - - SUSE Linux Enterprise Server for SAP Applications - 15 SP1 -SUSE Best Practices - +Best Practices SAP @@ -21,14 +17,20 @@ Automation Cloud -SAP HANA High Availability Cluster Automation Operating on Azure +SAP HANA HA Cluster Automation Operating on Azure How to build an automated SAP HANA System Replication (SR) Performance Optimized High Availability (HA) cluster operating on Microsoft Azure +Building an automated SAP HANA cluster on Azure - SLES for SAP + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications -2020-04-02 -SUSE Linux Enterprise Server for SAP Applications 15 SP1 + +SUSE Linux Enterprise Server for SAP Applications 15 SP1 and later Microsoft Azure @@ -42,17 +44,7 @@ SUSE - + + + 2020-02-25 + + + + + diff --git a/adoc/sap_hana_azure-main_document.adoc b/adoc/sap_hana_azure-main_document.adoc index a375e2d2f..6d0094c9f 100644 --- a/adoc/sap_hana_azure-main_document.adoc +++ b/adoc/sap_hana_azure-main_document.adoc @@ -35,6 +35,10 @@ Handing over the document to the documentation team :localdate: + +// defining article ID +[#art-saphana-automation-azure] + = SAP HANA High Availability Cluster Automation Operating on Azure: Getting Started // Revision {revision} from {date} diff --git a/adoc/SAP_S4HA10_SetupGuide-docinfo.xml b/attic/SAP_S4HA10_SetupGuide-docinfo.xml similarity index 100% rename from adoc/SAP_S4HA10_SetupGuide-docinfo.xml rename to attic/SAP_S4HA10_SetupGuide-docinfo.xml diff --git a/xml/MAIN-SBP-AMD-EPYC-2-SLES15SP1.xml b/xml/MAIN-SBP-AMD-EPYC-2-SLES15SP1.xml index aaf63bec6..ba0b7af7b 100644 --- a/xml/MAIN-SBP-AMD-EPYC-2-SLES15SP1.xml +++ b/xml/MAIN-SBP-AMD-EPYC-2-SLES15SP1.xml @@ -6,14 +6,13 @@ ]>
+ xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" xmlns:its="http://www.w3.org/2005/11/its" + xml:id="art-sbp-amdepyc2-sles15sp1" xml:lang="en"> Optimizing Linux for AMD EPYC™ 7002 Series Processors with SUSE Linux Enterprise 15 SP1 - SUSE Linux Enterprise Server - 15 SP1 + https://github.com/SUSE/suse-best-practices/issues/new @@ -24,85 +23,83 @@ - SUSE Best Practices - - + Best Practices + Tuning & Performance - + Configuration - Optimizing Linux for AMD EPYC™ 7002 Series Processors with SUSE Linux Enterprise - 15 SP1 - Overview of the AMD EPYC* 7002 Series Processors and tuning of - computational-intensive workloads on SUSE Linux Enterprise Server 15 SP1. - - SLES - AMD EPYC* + Optimizing SLES 15 SP1 for AMD EPYC™ 7002 processors + Overview of the AMD EPYC* 7002 Series Processors and tuning of + computational-intensive workloads on SUSE Linux Enterprise Server 15 SP1 + Optimizing SLES 15 SP1 for AMD EPYC™ 7002 processors + + SUSE Linux Enterprise Server - SUSE Linux Enterprise 15 SP1 - AMD EPYC™ 7002 Series Processors - 2019-11-13 + SUSE Linux Enterprise 15 SP1 + AMD EPYC™ 7002 Series Processors + - + - - Mel - Gorman - - - Senior Kernel Engineer - SUSE - + + Mel + Gorman + + + Senior Kernel Engineer + SUSE + - - Matt - Fleming - - - Senior Performance Engineer - SUSE - + + Matt + Fleming + + + Senior Performance Engineer + SUSE + - - - Dario - Faggioli - - - Software Engineer Virtualization Specialist - SUSE - - - - - Martin - Jambor - - - Tool Chain Developer - SUSE - - - - - Brent - Hollingsworth - - - Engineering Manager - AMD - - - - + @@ -114,8 +111,16 @@ + + + + 2019-11-13 + + + + + - 2019-11-13 The document at hand provides an overview of the AMD EPYC* 7002 Series Processors and @@ -123,15 +128,14 @@ SP1. - Disclaimer: - Documents published as part of the SUSE Best Practices series have been contributed voluntarily - by SUSE employees and third parties. They are meant to serve as examples of how particular - actions can be performed. They have been compiled with utmost attention to detail. However, - this does not guarantee complete accuracy. SUSE cannot verify that actions described in these - documents do what is claimed or whether actions described have unintended consequences. - SUSE LLC, its affiliates, the authors, and the translators may not be held liable for possible errors - or the consequences thereof. - + Disclaimer: Documents published as part of the SUSE Best + Practices series have been contributed voluntarily by SUSE employees and third parties. They + are meant to serve as examples of how particular actions can be performed. They have been + compiled with utmost attention to detail. However, this does not guarantee complete + accuracy. SUSE cannot verify that actions described in these documents do what is claimed or + whether actions described have unintended consequences. SUSE LLC, its affiliates, the + authors, and the translators may not be held liable for possible errors or the consequences + thereof. @@ -2523,8 +2527,8 @@ dmesg |grep SEV Identical conclusions than with the single threaded STREAM workload can be drawn: proper and complete tuning allows a VM running on an AMD EPYC 7002 Series Processor server - to achieve host memory bandwidth. Not pinning vCPUs and memory or not providing the VM with a - sensible virtual topology leads to performance drops, as compared to the host, ranging + to achieve host memory bandwidth. 
Not pinning vCPUs and memory or not providing the VM with + a sensible virtual topology leads to performance drops, as compared to the host, ranging between -50% and -60%. With full tuning applied, the throughput of the VM is around -1% the one of the host. @@ -3139,7 +3143,7 @@ dmesg |grep SEV - + diff --git a/xml/MAIN-SBP-AMD-EPYC-3-SLES15SP2.xml b/xml/MAIN-SBP-AMD-EPYC-3-SLES15SP2.xml index c1bfc5090..4e3d61505 100644 --- a/xml/MAIN-SBP-AMD-EPYC-3-SLES15SP2.xml +++ b/xml/MAIN-SBP-AMD-EPYC-3-SLES15SP2.xml @@ -6,14 +6,13 @@ ]>
+ xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" xmlns:its="http://www.w3.org/2005/11/its" + xml:id="art-sbp-amdepyc3-sles15sp2" xml:lang="en"> Optimizing Linux for AMD EPYC™ 7003 Series Processors with SUSE Linux Enterprise 15 SP2 - SUSE Linux Enterprise Server - 15 SP2 + https://github.com/SUSE/suse-best-practices/issues/new @@ -23,27 +22,23 @@ https://github.com/SUSE/suse-best-practices/edit/main/xml/ - - SUSE Best Practices - - + Best Practices + Tuning & Performance - + Configuration - Optimizing Linux for AMD EPYC™ 7003 Series Processors with SUSE Linux Enterprise 15 - SP2 - Overview of the AMD EPYC* 7003 Series Processors and tuning of - computational-intensive workloads on SUSE Linux Enterprise Server 15 SP2. - - SLES - AMD EPYC* + Optimizing SLES 15 SP2 for AMD EPYC™ 7003 processors + Overview of the AMD EPYC* 7003 Series Processors and tuning of + computational-intensive workloads on SUSE Linux Enterprise Server 15 SP2. + Optimizing SLES 15 SP2 for AMD EPYC™ 7003 processors + + SUSE Linux Enterprise Server - SUSE Linux Enterprise 15 SP2 - AMD EPYC™ 7003 Series Processors - 2021-03-16 + SUSE Linux Enterprise 15 SP2 + AMD EPYC™ 7003 Series Processors @@ -56,7 +51,7 @@ SUSE - + Dario Faggioli @@ -105,7 +100,15 @@ - 2021-03-16 + + + 2021-03-16 + + + + + + @@ -114,15 +117,14 @@ SP2. - Disclaimer: - Documents published as part of the SUSE Best Practices series have been contributed voluntarily - by SUSE employees and third parties. They are meant to serve as examples of how particular - actions can be performed. They have been compiled with utmost attention to detail. However, - this does not guarantee complete accuracy. SUSE cannot verify that actions described in these - documents do what is claimed or whether actions described have unintended consequences. - SUSE LLC, its affiliates, the authors, and the translators may not be held liable for possible errors - or the consequences thereof. - + Disclaimer: Documents published as part of the SUSE Best + Practices series have been contributed voluntarily by SUSE employees and third parties. They + are meant to serve as examples of how particular actions can be performed. They have been + compiled with utmost attention to detail. However, this does not guarantee complete + accuracy. SUSE cannot verify that actions described in these documents do what is claimed or + whether actions described have unintended consequences. SUSE LLC, its affiliates, the + authors, and the translators may not be held liable for possible errors or the consequences + thereof. diff --git a/xml/MAIN-SBP-AMD-EPYC-4-SLES15SP4.xml b/xml/MAIN-SBP-AMD-EPYC-4-SLES15SP4.xml index 0830c62c3..c2dfac3ca 100644 --- a/xml/MAIN-SBP-AMD-EPYC-4-SLES15SP4.xml +++ b/xml/MAIN-SBP-AMD-EPYC-4-SLES15SP4.xml @@ -6,14 +6,13 @@ ]>
+ xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" xmlns:its="http://www.w3.org/2005/11/its" + xml:id="art-sbp-amdepyc4-sles15sp4" xml:lang="en"> Optimizing Linux for AMD EPYC™ 9004 Series Processors with SUSE Linux Enterprise 15 SP4 - SUSE Linux Enterprise Server - 15 SP4 + https://github.com/SUSE/suse-best-practices/issues/new @@ -22,28 +21,26 @@ https://github.com/SUSE/suse-best-practices/edit/main/xml/ - - SUSE Best Practices - - + + Best Practices + Tuning & Performance - + Configuration - Optimizing Linux for AMD EPYC™ 9004 Series Processors with SUSE Linux Enterprise - 15 SP4 - Overview of the AMD EPYC* 9004 Series Processors and tuning of - computational-intensive workloads on SUSE Linux Enterprise Server 15 SP4. - - SLES - AMD EPYC* + Optimizing SLES 15 SP4 for AMD EPYC™ 9004 processors + Overview of the AMD EPYC* 9004 Series Processors and tuning of + computational-intensive workloads on SUSE Linux Enterprise Server 15 SP4. + Optimizing SLES 15 SP4 for AMD EPYC™ 9004 processors + + SUSE Linux Enterprise Server - - SUSE Linux Enterprise 15 SP4 - AMD EPYC™ 9004 Series Processors - 2023-06-20 - + + SUSE Linux Enterprise 15 SP4 + AMD EPYC™ 9004 Series Processors + + @@ -55,16 +52,6 @@ SUSE - Martin @@ -84,15 +71,15 @@ Engineering Manager AMD - + - - + + @@ -106,25 +93,32 @@ --> - 2023-06-20 + + + 2023-06-20 + + Latest version of AMD EPYC processors + + + + - The document at hand provides an overview of both the AMD EPYC™ 9004 Series - Classic and AMD EPYC™ 9004 Series Dense Processors. It details how some - computational-intensive workloads can be tuned on SUSE Linux Enterprise Server - 15 SP4. - - Disclaimer: - Documents published as part of - the SUSE Best Practices series have been contributed voluntarily by SUSE employees - and third parties. They are meant to serve as examples of how particular actions - can be performed. They have been compiled with utmost attention to detail. However, - this does not guarantee complete accuracy. SUSE cannot verify that actions described - in these documents do what is claimed or whether actions described have unintended - consequences. SUSE LLC, its affiliates, the authors, and the translators may not - be held liable for possible errors or the consequences thereof. + The document at hand provides an overview of both the AMD EPYC™ 9004 Series Classic and + AMD EPYC™ 9004 Series Dense Processors. It details how some computational-intensive + workloads can be tuned on SUSE Linux Enterprise Server 15 SP4. + + + Disclaimer: + Documents published as part of the SUSE Best Practices series have been contributed + voluntarily by SUSE employees and third parties. They are meant to serve as examples of how + particular actions can be performed. They have been compiled with utmost attention to + detail. However, this does not guarantee complete accuracy. SUSE cannot verify that actions + described in these documents do what is claimed or whether actions described have unintended + consequences. SUSE LLC, its affiliates, the authors, and the translators may not be held + liable for possible errors or the consequences thereof. @@ -132,17 +126,16 @@ Overview - The AMD EPYC 9004 Series Processor is the latest generation of the AMD64 - System-on-Chip (SoC) processor family. It is based on the Zen 4 microarchitecture - introduced in 2022, supporting up to 96 cores (192 threads) and 12 memory channels - per socket. At the time of writing, 1-socket and 2-socket models are expected - to be available from Original Equipment Manufacturers (OEMs) in 2023. 
In 2023, a - new AMD EPYC 9004 Series Dense Processor was launched which is based on a similar - architecture to the AMD EPYC 9004 Series Classic Processor supporting up to 128 - cores (256 threads). This document provides an overview of the AMD EPYC 9004 Series - Classic Processor and how computational-intensive workloads can be tuned on SUSE - Linux Enterprise Server 15 SP4. Additional details about the AMD EPYC 9004 Series - Dense Processor are provided where appropriate. + The AMD EPYC 9004 Series Processor is the latest generation of the AMD64 System-on-Chip + (SoC) processor family. It is based on the Zen 4 microarchitecture introduced in 2022, + supporting up to 96 cores (192 threads) and 12 memory channels per socket. At the time of + writing, 1-socket and 2-socket models are expected to be available from Original Equipment + Manufacturers (OEMs) in 2023. In 2023, a new AMD EPYC 9004 Series Dense Processor was launched + which is based on a similar architecture to the AMD EPYC 9004 Series Classic Processor + supporting up to 128 cores (256 threads). This document provides an overview of the AMD EPYC + 9004 Series Classic Processor and how computational-intensive workloads can be tuned on SUSE + Linux Enterprise Server 15 SP4. Additional details about the AMD EPYC 9004 Series Dense + Processor are provided where appropriate. @@ -150,99 +143,93 @@ AMD EPYC 9004 Series Classic Processor architecture Symmetric multiprocessing (SMP) systems are those that - contain two or more physical processing cores. Each core may have two threads if Symmetric - multithreading (SMT) is enabled, with some resources being shared between SMT siblings. To - minimize access latencies, multiple layers of caches are used with each level being larger but - with higher access costs. Cores may share different levels of cache which should be considered - when tuning for a workload. + contain two or more physical processing cores. Each core may have two threads if Symmetric multithreading (SMT) is enabled, with some resources + being shared between SMT siblings. To minimize access latencies, multiple layers of caches are + used with each level being larger but with higher access costs. Cores may share different + levels of cache which should be considered when tuning for a workload. Historically, a single socket contained several cores sharing a hierarchy of caches and memory channels and multiple sockets were connected via a memory interconnect. Modern configurations may have multiple dies as a Multi-Chip Module (MCM) with one set of interconnects within the socket and a separate interconnect - for each socket. This means that some CPUs and memory are faster to access - than others depending on the distance. This should be considered when tuning - for Non-Uniform Memory Architecture (NUMA) as all memory - accesses may not reference local memory incurring a variable access penalty. - - The 4th Generation AMD EPYC Processor has an MCM design with up to thirteen dies - on each package. From a topology point of view, this is significantly different to the - 1st Generation AMD EPYC Processor design. However, it is similar to the 3rd Generation AMD EPYC - Processor other than the increase in die count. One die is a central IO die through - which all off-chip communication passes through. The basic building block of a compute - die is an eight-core Core CompleX (CCX) with its own L1-L3 cache hierarchy. 
Similar to - the 3rd Generation AMD EPYC Processor, one Core Complex Die (CCD) consists of one CCX - connected via an Infinity Link to the IO die, as opposed to two CCXs used in the 2nd - Generation AMD EPYC Processor. This allows direct communication within a CCD instead - of using the IO link maintaining reduced communication and memory access latency. - A 96-core 4th Generation AMD EPYC Processor socket therefore consists of 12 CCDs - consisting of 12 CCXs (containing 8 cores each) or 96 cores in total (192 threads - with SMP enabled) with one additional IO die for 13 dies in total. This is a large - increase in both the core count and number of memory channels relative to the 3rd + for each socket. This means that some CPUs and memory are faster to access than others + depending on the distance. This should be considered when tuning for Non-Uniform Memory Architecture (NUMA) as all memory accesses may + not reference local memory incurring a variable access penalty. + + The 4th Generation AMD EPYC Processor has an MCM design with up to thirteen dies on each + package. From a topology point of view, this is significantly different to the 1st Generation + AMD EPYC Processor design. However, it is similar to the 3rd Generation AMD EPYC Processor + other than the increase in die count. One die is a central IO die through which all off-chip + communication passes through. The basic building block of a compute die is an eight-core Core + CompleX (CCX) with its own L1-L3 cache hierarchy. Similar to the 3rd Generation AMD EPYC + Processor, one Core Complex Die (CCD) consists of one CCX connected via an Infinity Link to + the IO die, as opposed to two CCXs used in the 2nd Generation AMD EPYC Processor. This allows + direct communication within a CCD instead of using the IO link maintaining reduced + communication and memory access latency. A 96-core 4th Generation AMD EPYC Processor socket + therefore consists of 12 CCDs consisting of 12 CCXs (containing 8 cores each) or 96 cores in + total (192 threads with SMP enabled) with one additional IO die for 13 dies in total. This is + a large increase in both the core count and number of memory channels relative to the 3rd Generation AMD EPYC Processor. - Both the 3rd and 4th Generation AMD EPYC Processors potentially have a larger - L3 cache. In a standard configuration, a 4th Generation AMD EPYC Processor has - 32MB L3 cache. Some CPU chips may also include an AMD V-Cache expansion that can - triple the size of the L3 cache. This potentially provides a major performance boost - to applications as more active data can be stored in low-latency cache. The exact - performance impact is variable, but any memory-intensive workload should benefit from - having a lower average memory access latency because of a larger cache. + Both the 3rd and 4th Generation AMD EPYC Processors potentially have a larger L3 cache. In + a standard configuration, a 4th Generation AMD EPYC Processor has 32MB L3 cache. Some CPU + chips may also include an AMD V-Cache expansion that can triple the size of the L3 cache. This + potentially provides a major performance boost to applications as more active data can be + stored in low-latency cache. The exact performance impact is variable, but any + memory-intensive workload should benefit from having a lower average memory access latency + because of a larger cache. Communication between the chip and memory happens via the IO die. 
Each CCD has one - dedicated Infinity Fabric link to the die and one memory channel per CCD located on the - die. The practical consequence of this architecture versus the 1st Generation AMD EPYC - Processor is that the topology is simpler. The first generation had separate memory - channels per die and links between dies giving two levels of NUMA distance within - a single socket and a third distance when communicating between sockets. This meant - that a two-socket machine for EPYC had 4 NUMA nodes (3 levels of NUMA distance). The - 2nd Generation AMD EPYC Processor has only 2 NUMA nodes (2 levels of NUMA distance) - which makes it easier to tune and optimize. The NUMA distances are the same for the - 3rd and 4th Generation AMD EPYC Processors. - - The IO die has a total of 12 memory controllers supporting - DDR5 Dual Inline Memory Modules (DIMMs) with the - maximum supported speed expected to be DDR-5200 at the time of writing. This implies - a peak channel bandwidth of 40.6 GB/sec or 487.2 GB/sec total throughput across a - socket. The exact bandwidth depends on the DIMMs selected, the number of memory channels - populated, how cache is used and the efficiency of the application. Where possible, - all memory channels should have a DIMM installed to maximize memory bandwidth. - - While the topologies and basic layout is similar between the 3rd and 4th - Generation AMD EPYC Processors, there are several micro-architectural - differences. The Instructions Per Cycle (IPC) has - improved by 13% on average across a selected range of workloads, although the exact - improvement is workload-dependent. The improvements are due to a variety of factors - including a larger L2 cache, improvements in branch prediction, the execution engine, - the front-end fetching/decoding of instructions and additional instructions such - as supporting AVX-512. The degree to which these changes affect performance varies between - applications. - - Power management on the links is careful to minimize the amount of power - required. If the links are idle, the power may be used to boost the frequency of - individual cores. Hence, minimizing access is not only important from a memory - access latency point of view, but it also has an impact on the speed of individual - cores. - - There are 128 IO lanes supporting PCIe Gen 5.0 per socket. Lanes can be - used as Infinity links, PCI Express links or SATA links with a limit of 32 SATA - links. The exact number of PCIe 4.0, PCIe 5.0 and configuration links vary by chip and - motherboard. This allows very large IO configurations and a high degree of flexibility, - given that either IO bandwidth or the bandwidth between sockets can be optimized, - depending on the OEM requirements. The most likely configuration is that the number - of PCIe links will be the same for 1- and 2-socket machines, given that some lanes per - socket will be used for inter-socket communication. While some links must be used - for inter-socket communication, adding a socket does not compromise the number - of available IO channels. The exact configuration used depends on the platform. + dedicated Infinity Fabric link to the die and one memory channel per CCD located on the die. + The practical consequence of this architecture versus the 1st Generation AMD EPYC Processor is + that the topology is simpler. The first generation had separate memory channels per die and + links between dies giving two levels of NUMA distance within a single socket and a third + distance when communicating between sockets. 
This meant that a two-socket machine for EPYC had + 4 NUMA nodes (3 levels of NUMA distance). The 2nd Generation AMD EPYC Processor has only 2 + NUMA nodes (2 levels of NUMA distance) which makes it easier to tune and optimize. The NUMA + distances are the same for the 3rd and 4th Generation AMD EPYC Processors. + + The IO die has a total of 12 memory controllers supporting DDR5 Dual Inline Memory Modules (DIMMs) with the maximum supported speed expected to + be DDR-5200 at the time of writing. This implies a peak channel bandwidth of 40.6 GB/sec or + 487.2 GB/sec total throughput across a socket. The exact bandwidth depends on the DIMMs + selected, the number of memory channels populated, how cache is used and the efficiency of the + application. Where possible, all memory channels should have a DIMM installed to maximize + memory bandwidth. + + While the topologies and basic layout is similar between the 3rd and 4th Generation AMD + EPYC Processors, there are several micro-architectural differences. The Instructions Per Cycle (IPC) has improved by 13% on average across + a selected range of workloads, although the exact improvement is workload-dependent. The + improvements are due to a variety of factors including a larger L2 cache, improvements in + branch prediction, the execution engine, the front-end fetching/decoding of instructions and + additional instructions such as supporting AVX-512. The degree to which these changes affect + performance varies between applications. + + Power management on the links is careful to minimize the amount of power required. If the + links are idle, the power may be used to boost the frequency of individual cores. Hence, + minimizing access is not only important from a memory access latency point of view, but it + also has an impact on the speed of individual cores. + + There are 128 IO lanes supporting PCIe Gen 5.0 per socket. Lanes can be used as Infinity + links, PCI Express links or SATA links with a limit of 32 SATA links. The exact number of PCIe + 4.0, PCIe 5.0 and configuration links vary by chip and motherboard. This allows very large IO + configurations and a high degree of flexibility, given that either IO bandwidth or the + bandwidth between sockets can be optimized, depending on the OEM requirements. The most likely + configuration is that the number of PCIe links will be the same for 1- and 2-socket machines, + given that some lanes per socket will be used for inter-socket communication. While some links + must be used for inter-socket communication, adding a socket does not compromise the number of + available IO channels. The exact configuration used depends on the platform. AMD EPYC 9004 Series Classic Processor topology - below shows the topology of an example machine - with a fully populated memory configuration generated by the lstopo - tool. + below shows the topology of an example machine with a + fully populated memory configuration generated by the lstopo tool.
AMD EPYC 9004 Series Classic Processor Topology @@ -257,22 +244,22 @@
This tool is part of the hwloc package. The two packages - correspond to each socket. The CCXs consisting of 8 cores (16 threads) each should - be clear, as each CCX has one L3 cache and each socket has 12 CCXs resulting in 96 - cores (192 threads). Not obvious are the links to the IO die, but the IO die should - be taken into account when splitting a workload to optimize bandwidth to memory. In - this example, the IO channels are not heavily used, but the focus will be on CPU and - memory-intensive loads. If optimizing for IO, it is recommended that, where possible, - the workload is located on the nodes local to the IO channel. + correspond to each socket. The CCXs consisting of 8 cores (16 threads) each should be clear, + as each CCX has one L3 cache and each socket has 12 CCXs resulting in 96 cores (192 threads). + Not obvious are the links to the IO die, but the IO die should be taken into account when + splitting a workload to optimize bandwidth to memory. In this example, the IO channels are not + heavily used, but the focus will be on CPU and memory-intensive loads. If optimizing for IO, + it is recommended that, where possible, the workload is located on the nodes local to the IO + channel. The computer output below shows a conventional view of the topology using the - numactl tool which is slightly edited for clarity. The CPU IDs - that map to each node are reported on the node X cpus: lines. They note - the NUMA distances on the table at the bottom of the computer output. Node 0 and node - 1 are a distance of 32 apart as they are on separate sockets. The distance is not - a guarantee of the access latency, it is an estimate of the relative difference. The - general interpretation of this distance would suggest that a remote node is 3.2 times - longer than a local memory access but the actual latency cost can be different. + numactl tool which is slightly edited for clarity. The CPU IDs that map + to each node are reported on the node X cpus: lines. They note the NUMA + distances on the table at the bottom of the computer output. Node 0 and node 1 are a distance + of 32 apart as they are on separate sockets. The distance is not a guarantee of the access + latency, it is an estimate of the relative difference. The general interpretation of this + distance would suggest that a remote node is 3.2 times longer than a local memory access but + the actual latency cost can be different. epyc:~ # numactl --hardware node 0 cpus: 0 .. 95 192 .. 287 @@ -287,19 +274,17 @@ node 0 1 1: 32 10 - Note that the two sockets displayed are masking some details. There - are multiple CCDs and multiple channels meaning that there are slight differences - in access latency even to local memory. If an application is so sensitive to - latency that it needs to be aware of the precise relative distances, then the - Nodes Per Socket (NPS) value can be adjusted - in the BIOS. If adjusted, numactl will show additional nodes - and the relative distances between them. + Note that the two sockets displayed are masking some details. There are multiple CCDs and + multiple channels meaning that there are slight differences in access latency even to + local memory. If an application is so sensitive to latency that it needs to + be aware of the precise relative distances, then the Nodes Per Socket + (NPS) value can be adjusted in the BIOS. If adjusted, numactl + will show additional nodes and the relative distances between them. - Finally, the cache topology can be discovered in a variety of fashions. 
In - addition to lstopo which can provide the information, the - level, size and ID of CPUs that share cache can be identified from the files under - /sys/devices/system/cpu/cpuN/cache. - + Finally, the cache topology can be discovered in a variety of fashions. In addition to + lstopo which can provide the information, the level, size and ID of CPUs + that share cache can be identified from the files under + /sys/devices/system/cpu/cpuN/cache.
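+
+      For example, the level, size and set of CPUs sharing a given cache can be read directly from
+      those files. The CPU and index numbers below are placeholders and need to be adapted to the
+      system under examination; on current x86-64 systems, index3 usually corresponds to the L3
+      cache.
+
+# List the cache index directories available to CPU 0
+epyc:~ # ls /sys/devices/system/cpu/cpu0/cache
+# Report the level, size and sharing CPUs of the cache behind index3 (typically L3)
+epyc:~ # cat /sys/devices/system/cpu/cpu0/cache/index3/level
+epyc:~ # cat /sys/devices/system/cpu/cpu0/cache/index3/size
+epyc:~ # cat /sys/devices/system/cpu/cpu0/cache/index3/shared_cpu_list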
@@ -307,39 +292,36 @@ node 0 1 AMD EPYC 9004 Series Dense Processor The AMD EPYC 9004 Series Dense Processor launched in 2023. While the fundamental - microarchitecture is based on the Zen 4 compute core, there are some - important differences between it and the AMD EPYC 9004 Series Classic Processors. Both - processors are socket-compatible, have the same number of memory channels and the - same number of I/O lanes. This means that the processors may be used interchangeably - on the same platform with the same limitation that dual-socket configurations must - use identical processors. Both processors use the same Instruction Set - Architecture (ISA). This means that code optimized for one processor will run - without modification on the other. - - Despite the compatible ISA, the processors are physically different using a - manufacturing process focused on increased density for both the CPU core and the - physical cache. The L1 and L2 caches have the same capacity. The L3 cache, however, is half - the capacity of the AMD EPYC 9004 Series Classic Processor with the space used for - additional CCDs. The basic CCX structure for both the AMD EPYC 9004 Series Dense and - 9004 Series Classic processor is similar but each CCD for the AMD EPYC 9004 Series - Dense has 2 CCXs per CCD instead of 1. While the AMD EPYC 9004 Series Classic has 12 - CCDs with 1 CCX each within a socket, the AMD EPYC 9004 Series Dense processor has - 8 CCDs, each with 2 CCXs. This increases the maximum number of its cores per socket from 96 - cores to 128. Finally, the Thermal Design Points (TDPs) differ - for the AMD EPYC 9004 Series Dense processor, with different frequency scaling limits - and generally a lower peak frequency. While each individual core may achieve less peak - performance than the AMD EPYC 9004 Series Classic Processor, the total peak compute - throughput available is higher due to the increased number of cores. - - The intended use case and workloads determine which processor is superior. The - key advantage of the AMD EPYC 9004 Series Dense Processor is packing more cores within - the same socket. This may benefit Cloud or HyperScale environments in that more - containers or virtual machines can use uncontested CPUs for their workloads within - the same physical machine. As a result, physical space in data centers can potentially - be reduced. It may also benefit some HPC workloads that are primarily CPU and memory bound. - For example, some HPC workloads scale to the number of available cores working on data sets that - are too large to fit into a typical cache. For such workloads, the AMD EPYC 9004 Series Dense - Processor may be ideal. + microarchitecture is based on the Zen 4 compute core, there are some important + differences between it and the AMD EPYC 9004 Series Classic Processors. Both processors are + socket-compatible, have the same number of memory channels and the same number of I/O lanes. + This means that the processors may be used interchangeably on the same platform with the same + limitation that dual-socket configurations must use identical processors. Both processors use + the same Instruction Set Architecture (ISA). This means that code + optimized for one processor will run without modification on the other. + + Despite the compatible ISA, the processors are physically different using a manufacturing + process focused on increased density for both the CPU core and the physical cache. The L1 and + L2 caches have the same capacity. 
The L3 cache, however, is half the capacity of the AMD EPYC + 9004 Series Classic Processor with the space used for additional CCDs. The basic CCX structure + for both the AMD EPYC 9004 Series Dense and 9004 Series Classic processor is similar but each + CCD for the AMD EPYC 9004 Series Dense has 2 CCXs per CCD instead of 1. While the AMD EPYC + 9004 Series Classic has 12 CCDs with 1 CCX each within a socket, the AMD EPYC 9004 Series + Dense processor has 8 CCDs, each with 2 CCXs. This increases the maximum number of its cores + per socket from 96 cores to 128. Finally, the Thermal Design Points + (TDPs) differ for the AMD EPYC 9004 Series Dense processor, with different + frequency scaling limits and generally a lower peak frequency. While each individual core may + achieve less peak performance than the AMD EPYC 9004 Series Classic Processor, the total peak + compute throughput available is higher due to the increased number of cores. + + The intended use case and workloads determine which processor is superior. The key + advantage of the AMD EPYC 9004 Series Dense Processor is packing more cores within the same + socket. This may benefit Cloud or HyperScale environments in that more containers or virtual + machines can use uncontested CPUs for their workloads within the same physical machine. As a + result, physical space in data centers can potentially be reduced. It may also benefit some + HPC workloads that are primarily CPU and memory bound. For example, some HPC workloads scale + to the number of available cores working on data sets that are too large to fit into a typical + cache. For such workloads, the AMD EPYC 9004 Series Dense Processor may be ideal. @@ -390,32 +372,32 @@ node 0 1 memory. More importantly, interleaving reduces the probability that the operating system (OS) will need to reclaim any data belonging to a large task. - Further improvements can be made to access latencies by binding a workload to - a single CCD within a node. As L3 caches are shared within a CCD on both the 3rd - and 4th Generation AMD EPYC Processors, binding a workload to a CCD avoids L3 cache - misses caused by workload migration. This is an important difference from the - 2nd Generation AMD EPYC Processor which favored binding within a CCX. + Further improvements can be made to access latencies by binding a workload to a single CCD + within a node. As L3 caches are shared within a CCD on both the 3rd and 4th Generation AMD + EPYC Processors, binding a workload to a CCD avoids L3 cache misses caused by workload + migration. This is an important difference from the 2nd Generation AMD EPYC Processor which + favored binding within a CCX. - In most respects, the guidance for optimal bindings for cache and nodes remains - the same between the 3rd and 4th Generation AMD EPYC Processors. However, with SUSE - Linux Enterprise 15 SP4, the necessity to bind specifically to the L3 cache for - optimal performance is relaxed. The CPU scheduler in SUSE Linux Enterprise 15 SP4 has - superior knowledge of the cache topology of all generations of the AMD EPYC Processors - and how to balance load between CPU caches, NUMA nodes and memory channels. + In most respects, the guidance for optimal bindings for cache and nodes remains the same + between the 3rd and 4th Generation AMD EPYC Processors. However, with SUSE Linux Enterprise 15 + SP4, the necessity to bind specifically to the L3 cache for optimal performance is relaxed. 
+ The CPU scheduler in SUSE Linux Enterprise 15 SP4 has superior knowledge of the cache topology + of all generations of the AMD EPYC Processors and how to balance load between CPU caches, NUMA + nodes and memory channels. - - CPU Scheduler Awareness of Cache Topology + + CPU Scheduler Awareness of Cache Topology - With SUSE Linux Enterprise 15 SP4 having superior knowledge of the CPU cache - topology and how to balance load, tuning specifically has a smaller impact to performance - for a given workload. This is not a limitation of the - operating system. It is a side-effect of the baseline performance being improved on - AMD EPYC Processors in general. + With SUSE Linux Enterprise 15 SP4 having superior knowledge of the CPU cache topology + and how to balance load, tuning specifically has a smaller impact to performance for a given + workload. This is not a limitation of the operating + system. It is a side-effect of the baseline performance being improved on AMD EPYC + Processors in general. - + - See examples below on how taskset and numactl can - be used to start commands bound to different CPUs depending on the topology. + See examples below on how taskset and numactl can be + used to start commands bound to different CPUs depending on the topology. # Run a command bound to CPU 1 epyc:~ # taskset -c 1 [command] @@ -453,34 +435,34 @@ epyc:~ # taskset -c `cat /sys/devices/system/cpu/cpu1/cache/index3/shared_cpu_li There are three major hazards to consider with CPU binding. - The first to watch for is remote memory nodes being used when the process is not - allowed to run on CPUs local to that node. The scenarios when this can occur - are outside the scope of this paper. However, a common reason is an IO-bound thread - communicating with a kernel IO thread on a remote node bound to the IO controller. - In such a setup, the data buffers managed by the application are stored in remote memory - incurring an access cost for the IO. + The first to watch for is remote memory nodes being used when the process is not allowed + to run on CPUs local to that node. The scenarios when this can occur are outside the scope + of this paper. However, a common reason is an IO-bound thread communicating with a kernel IO + thread on a remote node bound to the IO controller. In such a setup, the data buffers + managed by the application are stored in remote memory incurring an access cost for the + IO. While tasks may be bound to CPUs, the resources they are accessing, such as network or storage devices, may not have interrupts routed locally. irqbalance generally makes good decisions. But in cases where the network or IO is extremely - high-performance or the application has very low latency requirements, it may be necessary to - disable irqbalance using systemctl. When that is done, - the IRQs for the target device need to be routed manually to CPUs local to the target + high-performance or the application has very low latency requirements, it may be necessary + to disable irqbalance using systemctl. When that is + done, the IRQs for the target device need to be routed manually to CPUs local to the target workload for optimal performance. - The second is that guides about CPU binding tend to focus on binding to a - single CPU. This is not always optimal when the task communicates with other threads, - as fixed bindings potentially miss an opportunity for the processes to use idle - CPUs sharing a common cache. 
This is particularly true when dispatching IO, be it - to disk or a network interface, where a task may benefit from being able to migrate - close to the related threads. It also applies to pipeline-based communicating threads - for a computational workload. Hence, focus initially on binding to CPUs sharing L3 - cache. Then consider whether to bind based on an L1/L2 cache or a single CPU using the - primary metric of the workload to establish whether the tuning is appropriate. + The second is that guides about CPU binding tend to focus on binding to a single CPU. + This is not always optimal when the task communicates with other threads, as fixed bindings + potentially miss an opportunity for the processes to use idle CPUs sharing a common cache. + This is particularly true when dispatching IO, be it to disk or a network interface, where a + task may benefit from being able to migrate close to the related threads. It also applies to + pipeline-based communicating threads for a computational workload. Hence, focus initially on + binding to CPUs sharing L3 cache. Then consider whether to bind based on an L1/L2 cache or a + single CPU using the primary metric of the workload to establish whether the tuning is + appropriate. - The final hazard is similar: if many tasks are bound to a smaller set of CPUs, - then the subset of CPUs could be oversaturated even though there is spare CPU - capacity available. + The final hazard is similar: if many tasks are bound to a smaller set of CPUs, then the + subset of CPUs could be oversaturated even though there is spare CPU capacity + available. @@ -499,14 +481,14 @@ epyc:~ # taskset -c `cat /sys/devices/system/cpu/cpu1/cache/index3/shared_cpu_li disabled. But when a single CPUset is created, there is a second layer of checks against scheduler and memory policies. - Similarly, memcg can be used to limit the amount of memory that can be - used by a set of processes. When the limits are exceeded, the memory will be reclaimed - by tasks within memcg directly without interfering with any other tasks. + Similarly, memcg can be used to limit the amount of memory that can + be used by a set of processes. When the limits are exceeded, the memory will be reclaimed by + tasks within memcg directly without interfering with any other tasks. This is ideal for ensuring there is no inference between two or more sets of tasks. Similar to CPUsets, there is some management overhead incurred. This means, if tasks can simply be - isolated on a NUMA boundary, then this is preferred from a performance perspective. The major - hazard is that, if the limits are exceeded, then the processes directly stall to reclaim the - memory which can incur significant latencies. + isolated on a NUMA boundary, then this is preferred from a performance perspective. The + major hazard is that, if the limits are exceeded, then the processes directly stall to + reclaim the memory which can incur significant latencies. @@ -530,45 +512,43 @@ epyc:~ # taskset -c `cat /sys/devices/system/cpu/cpu1/cache/index3/shared_cpu_li controller are designed to take advantage of parallel IO submission. These devices typically support a large number of submit and receive queues, which are tied to <emphasis role="italic" >MSI-X</emphasis> interrupts. Ideally, these devices should provide as many MSI-X vectors as - there are CPUs in the system. To achieve the best performance, each MSI-X vector should be assigned - to an individual CPU.</para> + there are CPUs in the system. 
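+
+      As a minimal sketch of both mechanisms, the commands below create a control group that
+      confines its tasks to the CPUs and memory of node 0 and caps their memory usage. They assume
+      the unified cgroup v2 hierarchy is mounted at /sys/fs/cgroup; the group name, CPU list and
+      memory limit are examples only and must be adapted to the topology reported by
+      numactl --hardware and to the workload. On systemd-managed systems, the same effect can be
+      achieved through systemd-run or a slice unit.
+
+# Enable the cpuset and memory controllers for child groups
+epyc:~ # echo "+cpuset +memory" > /sys/fs/cgroup/cgroup.subtree_control
+# Create an example group restricted to the CPUs and memory of node 0
+epyc:~ # mkdir /sys/fs/cgroup/example
+epyc:~ # echo 0-95,192-287 > /sys/fs/cgroup/example/cpuset.cpus
+epyc:~ # echo 0 > /sys/fs/cgroup/example/cpuset.mems
+# Cap the memory available to the group (example value)
+epyc:~ # echo 64G > /sys/fs/cgroup/example/memory.max
+# Move the current shell into the group so that tasks started from it inherit the limits
+epyc:~ # echo $$ > /sys/fs/cgroup/example/cgroup.procs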
To achieve the best performance, each MSI-X vector should be + assigned to an individual CPU.</para> </sect1> <sect1 xml:id="sec-auto-numa-balancing"> <title>Automatic NUMA balancing - Automatic NUMA Balancing (NUMAB) is a feature that identifies and relocates - pages that are being accessed remotely for applications that are not NUMA-aware. There - are cases where it is impractical or impossible to specify policies. In such cases, the - balancing should be sufficient for throughput-sensitive workloads, but on occasion, - NUMAB may be considered hazardous as it incurs a cost. Under ideal conditions, - an application is NUMA aware and uses memory policies to control what memory is - accessed and NUMAB simply ignores such regions. However, even if an application does - not use memory policies, it is possible that the application still accesses mostly - local memory and NUMA adds overhead confirming that accesses are local which is an - unnecessary cost. For latency-sensitive workloads, the sampling for NUMA balancing may - be too unpredictable and would prefer to incur the remote access cost or interleave - memory instead of using NUMA. The final corner case where NUMA balancing is a hazard - happens when the number of runnable tasks always exceeds the number of CPUs in a - single node. In this case, the load balancer (and potentially affine wakes) may - pull tasks away from the preferred node as identified by Automatic NUMA balancing - resulting in excessive sampling and migrations. - - If the workloads can be manually optimized with policies, then consider disabling automatic - NUMA balancing by specifying numa_balancing=disable on the kernel - command line or via sysctl kernel.numa_balancing. The same applies if - it is known that the application is mostly accessing local memory. + Automatic NUMA Balancing (NUMAB) is a feature that identifies and relocates pages that are + being accessed remotely for applications that are not NUMA-aware. There are cases where it is + impractical or impossible to specify policies. In such cases, the balancing should be + sufficient for throughput-sensitive workloads, but on occasion, NUMAB may be considered + hazardous as it incurs a cost. Under ideal conditions, an application is NUMA aware and uses + memory policies to control what memory is accessed and NUMAB simply ignores such regions. + However, even if an application does not use memory policies, it is possible that the + application still accesses mostly local memory and NUMA adds overhead confirming that accesses + are local which is an unnecessary cost. For latency-sensitive workloads, the sampling for NUMA + balancing may be too unpredictable and would prefer to incur the remote access cost or + interleave memory instead of using NUMA. The final corner case where NUMA balancing is a + hazard happens when the number of runnable tasks always exceeds the number of CPUs in a single + node. In this case, the load balancer (and potentially affine wakes) may pull tasks away from + the preferred node as identified by Automatic NUMA balancing resulting in excessive sampling + and migrations. + + If the workloads can be manually optimized with policies, then consider disabling + automatic NUMA balancing by specifying numa_balancing=disable on the kernel + command line or via sysctl kernel.numa_balancing. The same applies if it is + known that the application is mostly accessing local memory. 
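+
+      For example, the current state can be queried and the feature disabled at runtime as shown
+      below. A runtime change via sysctl does not survive a reboot unless it is also placed in a
+      configuration file; the file name used here is only an example.
+
+# Check whether Automatic NUMA Balancing is enabled (1) or disabled (0)
+epyc:~ # sysctl kernel.numa_balancing
+# Disable it for the running system
+epyc:~ # sysctl -w kernel.numa_balancing=0
+# Persist the setting across reboots
+epyc:~ # echo "kernel.numa_balancing = 0" > /etc/sysctl.d/99-numa-balancing.conf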
Changes to Automatic NUMA Balancing - While a disconnect between CPU Scheduler and NUMA Balancing placement decisions - still potentially exists in SUSE Linux Enterprise 15 SP4 when the machine is heavily - overloaded, the impact is much reduced relative to previous releases for most - scenarios. The placement decisions made by the CPU Scheduler and NUMA Balancing - are now coupled. Situations where the CPU scheduler and NUMA Balancing make - opposing decisions are relatively rare. + While a disconnect between CPU Scheduler and NUMA Balancing placement decisions still + potentially exists in SUSE Linux Enterprise 15 SP4 when the machine is heavily overloaded, + the impact is much reduced relative to previous releases for most scenarios. The placement + decisions made by the CPU Scheduler and NUMA Balancing are now coupled. Situations where the + CPU scheduler and NUMA Balancing make opposing decisions are relatively rare. @@ -578,21 +558,20 @@ epyc:~ # taskset -c `cat /sys/devices/system/cpu/cpu1/cache/index3/shared_cpu_li Evaluating workloads - The first and foremost step when evaluating how a workload should be tuned is - to establish a primary metric such as latency, throughput, operations per second - or elapsed time. When each tuning step is considered or applied, it is critical - that the primary metric be examined before conducting any further analysis. This is to avoid - intensive focus on a relatively wrong bottleneck. Make sure that the metric is measured - multiple times to ensure that the result is reproducible and reliable within reasonable - boundaries. When that is established, analyze how the workload is using different system - resources to determine what area should be the focus. The focus in this paper is on how - CPU and memory is used. But other evaluations may need to consider the IO subsystem, - network subsystem, system call interfaces, external libraries, etc. The methodologies - that can be employed to conduct this are outside the scope of this paper. But the book - Systems Performance: Enterprise and the Cloud by Brendan Gregg (see http://www.brendangregg.com/systems-performance-2nd-edition-book.html) - is a recommended primer on the subject. + The first and foremost step when evaluating how a workload should be tuned is to establish + a primary metric such as latency, throughput, operations per second or elapsed time. When each + tuning step is considered or applied, it is critical that the primary metric be examined + before conducting any further analysis. This is to avoid intensive focus on a relatively wrong + bottleneck. Make sure that the metric is measured multiple times to ensure that the result is + reproducible and reliable within reasonable boundaries. When that is established, analyze how + the workload is using different system resources to determine what area should be the focus. + The focus in this paper is on how CPU and memory is used. But other evaluations may need to + consider the IO subsystem, network subsystem, system call interfaces, external libraries, etc. + The methodologies that can be employed to conduct this are outside the scope of this paper. + But the book Systems Performance: Enterprise and the Cloud by Brendan Gregg + (see http://www.brendangregg.com/systems-performance-2nd-edition-book.html) is a + recommended primer on the subject. 
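+
+      Where elapsed time is the primary metric, repeated measurements with basic statistics can be
+      gathered as sketched below; the repeat count and the command are placeholders.
+
+# Run the workload five times and report the average elapsed time and its variation
+epyc:~ # perf stat --null -r 5 -- [command]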
CPU utilization and saturation @@ -602,8 +581,8 @@ epyc:~ # taskset -c `cat /sys/devices/system/cpu/cpu1/cache/index3/shared_cpu_li pidstat commands can be used to sample the number of threads in a system. Typically, pidstat yields more useful information with the important exception of the run state. A system may have many threads, but if they are idle, - they are not contributing to utilization. The mpstat command can - report the utilization of each CPU in the system. + they are not contributing to utilization. The mpstat command can report + the utilization of each CPU in the system. High utilization of a small subset of CPUs may be indicative of a single-threaded workload that is pushing the CPU to the limits and may indicate a bottleneck. Conversely, @@ -612,92 +591,90 @@ epyc:~ # taskset -c `cat /sys/devices/system/cpu/cpu1/cache/index3/shared_cpu_li workload that can run on a subset of CPUs to reduce latencies because of either migrations or remote accesses. When utilization is high, it is important to determine if the system could be saturated. The vmstat tool reports the number of runnable tasks - waiting for a CPU in the r column where any value over 1 indicates that wakeup - latencies may be incurred. While the exact wakeup latency can be calculated using trace - points, knowing that there are tasks queued is an important step. If a system is saturated, - it may be possible to tune the workload to use fewer threads. - - Overall, the initial intent should be to use CPUs from as few NUMA nodes as - possible to reduce access latency. However, there are exceptions. The AMD EPYC 9004 - Series Processor has a large number of high-speed memory channels to main memory, so - consider the workload thread activity. If they are cooperating threads or sharing - data, isolate them on as few nodes as possible to minimize cross-node memory - accesses. If the threads are completely independent with no shared data, it may - be best to isolate them on a subset of CPUs from each node. This is to maximize - the number of available memory channels and throughput to main memory. For some - computational workloads, it may be possible to use hybrid models such as MPI for - parallelization across nodes and OpenMP for threads within nodes. - - - Updating tuning for AMD EPYC 9004 Series Processor - - It is expected that tuning based on the AMD EPYC 7003 Series Processor - will also usually perform optimally on AMD EPYC 9004 series. The main - consideration is to account for potential differences in L3 cache sizes because of - AMD V-Cache if workloads are tuned specifically for cache size. Also, keep in mind - that CPU bindings based on caches may potentially be relaxed on SUSE Linux Enterprise - 15 SP4. - + waiting for a CPU in the r column where any value over 1 indicates that + wakeup latencies may be incurred. While the exact wakeup latency can be calculated using + trace points, knowing that there are tasks queued is an important step. If a system is + saturated, it may be possible to tune the workload to use fewer threads. + + Overall, the initial intent should be to use CPUs from as few NUMA nodes as possible to + reduce access latency. However, there are exceptions. The AMD EPYC 9004 Series Processor has + a large number of high-speed memory channels to main memory, so consider the workload thread + activity. If they are cooperating threads or sharing data, isolate them on as few nodes as + possible to minimize cross-node memory accesses. 
If the threads are completely independent + with no shared data, it may be best to isolate them on a subset of CPUs from each node. This + is to maximize the number of available memory channels and throughput to main memory. For + some computational workloads, it may be possible to use hybrid models such as MPI for + parallelization across nodes and OpenMP for threads within nodes. + + + Updating tuning for AMD EPYC 9004 Series Processor + + It is expected that tuning based on the AMD EPYC 7003 Series Processor will also + usually perform optimally on AMD EPYC 9004 series. The main consideration is to account + for potential differences in L3 cache sizes because of AMD V-Cache if workloads are tuned + specifically for cache size. Also, keep in mind that CPU bindings based on caches may + potentially be relaxed on SUSE Linux Enterprise 15 SP4. + Transparent Huge Pages - Huge pages are a feature that can improve performance in many cases. This is - achieved by reducing the number of page faults, the cost of translating virtual - addresses to physical addresses because of fewer layers in the page table and - being able to cache translations for a larger portion of memory. Transparent Huge Pages (THP) is supported for private - anonymous memory that automatically backs mappings with huge pages where anonymous - memory could be allocated as heap, malloc(), - mmap(MAP_ANONYMOUS), etc. There is also support for using THP - pages backed by tmpfs which can be configured at mount time - using the huge= mount option. While the THP feature has - existed for a long time, it has evolved significantly. + Huge pages are a feature that can improve performance in many cases. This is achieved by + reducing the number of page faults, the cost of translating virtual addresses to physical + addresses because of fewer layers in the page table and being able to cache translations for + a larger portion of memory. Transparent Huge Pages (THP) + is supported for private anonymous memory that automatically backs mappings with huge pages + where anonymous memory could be allocated as heap, + malloc(), mmap(MAP_ANONYMOUS), etc. There is also + support for using THP pages backed by tmpfs which can be configured at + mount time using the huge= mount option. While the THP feature has + existed for a long time, it has evolved significantly. Many tuning guides recommend disabling THP because of problems with early - implementations. Specifically, when the machine was running for long enough, the - use of THP could incur severe latencies and could aggressively reclaim memory in - certain circumstances. These problems were resolved by the time SUSE Linux - Enterprise Server 15 SP2 was released, and this is still the case for SUSE Linux - Enterprise Server 15 SP4. This means there are no good grounds for automatically - disabling THP because of severe latency issues without measuring the impact. However, - there are exceptions that are worth considering for specific workloads. + implementations. Specifically, when the machine was running for long enough, the use of THP + could incur severe latencies and could aggressively reclaim memory in certain circumstances. + These problems were resolved by the time SUSE Linux Enterprise Server 15 SP2 was released, + and this is still the case for SUSE Linux Enterprise Server 15 SP4. This means there are no + good grounds for automatically disabling THP because of severe latency issues without + measuring the impact. 
However, there are exceptions that are worth considering for specific + workloads. Some high-end in-memory databases and other applications aggressively use - mprotect() to ensure that unprivileged data is never leaked. If - these protections are at the base page granularity, then there may be many THP splits - and rebuilds that incur overhead. It can be identified if this is a potential - problem by using strace or perf trace to - detect the frequency and granularity of the system call. If they are high-frequency, - consider disabling THP. It can also be sometimes inferred from observing the - thp_split and thp_collapse_alloc counters - in /proc/vmstat. - - Workloads that sparsely address large mappings may have a higher memory - footprint when using THP. This could result in premature reclaim or fallback to - remote nodes. An example would be HPC workloads operating on large sparse matrices. If - memory usage is much higher than expected, compare memory usage with and without - THP to decide if the trade-off is not worthwhile. This may be critical on AMD EPYC - 7003 and 9004 Series Processor given that any spillover will congest the Infinity - links and potentially cause cores to run at a lower frequency. + mprotect() to ensure that unprivileged data is never leaked. If these + protections are at the base page granularity, then there may be many THP splits and rebuilds + that incur overhead. It can be identified if this is a potential problem by using + strace or perf trace to detect the frequency and + granularity of the system call. If they are high-frequency, consider disabling THP. It can + also be sometimes inferred from observing the thp_split and + thp_collapse_alloc counters in + /proc/vmstat. + + Workloads that sparsely address large mappings may have a higher memory footprint when + using THP. This could result in premature reclaim or fallback to remote nodes. An example + would be HPC workloads operating on large sparse matrices. If memory usage is much higher + than expected, compare memory usage with and without THP to decide if the trade-off is not + worthwhile. This may be critical on AMD EPYC 7003 and 9004 Series Processor given that any + spillover will congest the Infinity links and potentially cause cores to run at a lower + frequency. Sparsely addressed memory This is specific to sparsely addressed memory. A secondary hint for this case may be - that the application primarily uses large mappings with a much higher - Virtual Size (VSZ, see ) than Resident Set Size (RSS). - Applications which densely address memory benefit from the use of THP by achieving greater - bandwidth to memory. + that the application primarily uses large mappings with a much higher Virtual Size (VSZ, see ) than Resident Set + Size (RSS). Applications which densely address memory benefit from the use of + THP by achieving greater bandwidth to memory. - Parallelized workloads that operate on shared buffers with thread counts - exceeding the number of available CPUs on a single node may experience a slowdown - with THP if the granularity of partitioning is not aligned to the huge page. The - problem is that if a large shared buffer is partitioned on a 4K boundary, then false - sharing may occur whereby one thread accesses a huge page locally and other threads - access it remotely. If this situation is encountered, the granularity of sharing should - be increased to the THP size. But if that is not possible, disabling THP is an option. 
+ Parallelized workloads that operate on shared buffers with thread counts exceeding the + number of available CPUs on a single node may experience a slowdown with THP if the + granularity of partitioning is not aligned to the huge page. The problem is that if a large + shared buffer is partitioned on a 4K boundary, then false sharing may occur whereby one + thread accesses a huge page locally and other threads access it remotely. If this situation + is encountered, the granularity of sharing should be increased to the THP size. But if that + is not possible, disabling THP is an option. Applications that are extremely latency-sensitive or must always perform in a deterministic fashion can be hindered by THP. While there are fewer faults, the time for @@ -714,11 +691,11 @@ epyc:~ # taskset -c `cat /sys/devices/system/cpu/cpu1/cache/index3/shared_cpu_li This will still allow THP to be used opportunistically while avoiding stalls when calling malloc() or mmap(). - THP can be disabled. To do so, specify - transparent_hugepage=disable on the kernel command line, - at runtime via /sys/kernel/mm/transparent_hugepage/enabled - or on a per-process basis by using a wrapper to execute the workload that calls - prctl(PR_SET_THP_DISABLE). + THP can be disabled. To do so, specify transparent_hugepage=disable + on the kernel command line, at runtime via + /sys/kernel/mm/transparent_hugepage/enabled or on a per-process basis + by using a wrapper to execute the workload that calls + prctl(PR_SET_THP_DISABLE). @@ -728,7 +705,7 @@ epyc:~ # taskset -c `cat /sys/devices/system/cpu/cpu1/cache/index3/shared_cpu_li Assuming an application is mostly CPU- or memory-bound, it is useful to determine if the footprint is primarily in user space or kernel space. This gives a hint where tuning should be focused. The percentage of CPU time can be measured on a coarse-grained fashion using - vmstat or a fine-grained fashion using mpstat. If an + vmstat or a fine-grained fashion using mpstat. If an application is mostly spending time in user space, then the focus should be on tuning the application itself. If the application is spending time in the kernel, then it should be determined which subsystem dominates. The strace or perf @@ -742,41 +719,39 @@ epyc:~ # taskset -c `cat /sys/devices/system/cpu/cpu1/cache/index3/shared_cpu_li Memory utilization and saturation - The traditional means of measuring memory utilization of a workload is to - examine the Virtual Size (VSZ) and - Resident Set Size (RSS). This can be done by - using either the ps or pidstat tool. - This is a reasonable first step but is potentially misleading when shared memory - is used and multiple processes are examined. VSZ is simply a measure of memory space reservation and - is not necessarily used. RSS may be double accounted if it is a shared segment - between multiple processes. The file /proc/pid/maps can be - used to identify all segments used and whether they are private or shared. The file - /proc/pid/smaps and /proc/pid/smaps_rollup - reveals more detailed information including the Proportional - Set Size (PSS). PSS is an estimate of RSS except it is divided - between the number of processes mapping that segment, which can give a more - accurate estimate of utilization. Note that the smaps and - smaps_rollup files are very expensive to read and should not be - monitored at a high frequency. This is especially the case if workloads are using large amounts of - address space, many threads or both. 
Finally, the Working Set - Size (WSS) is the amount of active memory required to complete computations - during an arbitrary phase of a program's execution. It is not a value that can be - trivially measured. But conceptually it is useful as the interaction between WSS - relative to available memory affects memory residency and page fault rates. + The traditional means of measuring memory utilization of a workload is to examine the + Virtual Size (VSZ) and Resident + Set Size (RSS). This can be done by using either the ps or + pidstat tool. This is a reasonable first step but is potentially + misleading when shared memory is used and multiple processes are examined. VSZ is simply a + measure of memory space reservation and is not necessarily used. RSS may be double accounted + if it is a shared segment between multiple processes. The file + /proc/pid/maps can be used to identify all segments used and whether + they are private or shared. The file /proc/pid/smaps and + /proc/pid/smaps_rollup reveals more detailed information including + the Proportional Set Size (PSS). PSS is an estimate of + RSS except it is divided between the number of processes mapping that segment, which can + give a more accurate estimate of utilization. Note that the smaps and + smaps_rollup files are very expensive to read and should not be + monitored at a high frequency. This is especially the case if workloads are using large + amounts of address space, many threads or both. Finally, the Working + Set Size (WSS) is the amount of active memory required to complete computations + during an arbitrary phase of a program's execution. It is not a value that can be trivially + measured. But conceptually it is useful as the interaction between WSS relative to available + memory affects memory residency and page fault rates. On NUMA systems, the first saturation point is a node overflow when the - local policy is in effect. Given no binding of memory, when a node is - filled, a remote node’s memory will be used transparently and background reclaim - will take place on the local node. Two consequences of this are that remote access - penalties will be used and old memory from the local node will be reclaimed. If the - WSS of the application exceeds the size of a local node, then paging and re-faults - may be incurred. + local policy is in effect. Given no binding of memory, when a node is + filled, a remote node’s memory will be used transparently and background reclaim will take + place on the local node. Two consequences of this are that remote access penalties will be + used and old memory from the local node will be reclaimed. If the WSS of the application + exceeds the size of a local node, then paging and re-faults may be incurred. The first item to identify is whether a remote node overflow occurred, which is accounted for in /proc/vmstat as the numa_hit, - numa_miss, numa_foreign, - numa_interleave, numa_local and - numa_other counters: + numa_miss, numa_foreign, + numa_interleave, numa_local and numa_other + counters: @@ -790,8 +765,8 @@ epyc:~ # taskset -c `cat /sys/devices/system/cpu/cpu1/cache/index3/shared_cpu_li numa_foreign is rarely useful but is accounted against a node - that was preferred. It is a subtle distinction from numa_miss that is rarely - useful. + that was preferred. It is a subtle distinction from numa_miss that is + rarely useful. 
numa_interleave is incremented when an interleave policy was used @@ -808,14 +783,14 @@ epyc:~ # taskset -c `cat /sys/devices/system/cpu/cpu1/cache/index3/shared_cpu_li For the local memory policy, the numa_hit and - numa_miss counters are the most important to pay attention - to. An application that is allocating memory that starts incrementing the - numa_miss implies that the first level of saturation has - been reached. If monitoring the proc is undesirable, then the - numastat provides the same information. If this is observed on the - AMD EPYC 9004 Series Processor, it may be valuable to bind the application to nodes - that represent dies on a single socket. If the ratio of hits to misses is close to 1, - consider an evaluation of the interleave policy to avoid unnecessary reclaim. + numa_miss counters are the most important to pay attention to. An + application that is allocating memory that starts incrementing the + numa_miss implies that the first level of saturation has been reached. + If monitoring the proc is undesirable, then the + numastat provides the same information. If this is observed on the AMD + EPYC 9004 Series Processor, it may be valuable to bind the application to nodes that + represent dies on a single socket. If the ratio of hits to misses is close to 1, consider an + evaluation of the interleave policy to avoid unnecessary reclaim. NUMA statistics @@ -830,52 +805,50 @@ epyc:~ # taskset -c `cat /sys/devices/system/cpu/cpu1/cache/index3/shared_cpu_li Performance Management Unit to detect. - When the first saturation point is reached, reclaim will be active. This - can be observed by monitoring the pgscan_kswapd and - pgsteal_kswapd counters in /proc/vmstat. If - this is matched with an increase in major faults or minor faults, then it may - be indicative of severe thrashing. In this case, the interleave policy should be - considered. An ideal tuning option is to identify if shared memory is the source - of the usage. If this is the case, then interleave the shared memory segments. This - can be done in some circumstances using numactl or by modifying - the application directly. - - More severe saturation is observed if the pgscan_direct - and pgsteal_direct counters are also increasing. These counters indicate - that the application is stalling while memory is being reclaimed. If the application - was bound to individual nodes, increasing the number of available nodes will - alleviate the pressure. If the application is unbound, it indicates that the WSS - of the workload exceeds all available memory. It can only be alleviated by tuning - the application to use less memory or increasing the amount of RAM available. - - A more generalized view of resource pressure for CPU, memory and IO can be - measured using the kernel Pressure - Stall Information feature enabled with the command - line psi=1. When enabled, proc files under - /proc/pressure show if some or all active tasks were stalled - recently contending on a resource. This information is not always available in - production. But if the information is available, the memory pressure information may be used to guide - whether a deeper analysis is necessary and which resource is the bottleneck. - - As before, whether to use memory nodes from one socket or two sockets depends - on the application. If the individual processes are independent, either socket - can be used. Where possible, keep communicating processes on the same socket - to maximize memory throughput while minimizing the socket interconnect traffic. 
+ When the first saturation point is reached, reclaim will be active. This can be observed + by monitoring the pgscan_kswapd and pgsteal_kswapd + counters in /proc/vmstat. If this is matched with an increase in major + faults or minor faults, then it may be indicative of severe thrashing. In this case, the + interleave policy should be considered. An ideal tuning option is to identify if shared + memory is the source of the usage. If this is the case, then interleave the shared memory + segments. This can be done in some circumstances using numactl or by + modifying the application directly. + + More severe saturation is observed if the pgscan_direct and + pgsteal_direct counters are also increasing. These counters indicate + that the application is stalling while memory is being reclaimed. If the application was + bound to individual nodes, increasing the number of available nodes will alleviate the + pressure. If the application is unbound, it indicates that the WSS of the workload exceeds + all available memory. It can only be alleviated by tuning the application to use less memory + or increasing the amount of RAM available. + + A more generalized view of resource pressure for CPU, memory and IO can be measured + using the kernel Pressure Stall Information feature + enabled with the command line psi=1. When enabled, proc files under + /proc/pressure show if some or all active tasks were stalled recently + contending on a resource. This information is not always available in production. But if the + information is available, the memory pressure information may be used to guide whether a + deeper analysis is necessary and which resource is the bottleneck. + + As before, whether to use memory nodes from one socket or two sockets depends on the + application. If the individual processes are independent, either socket can be used. Where + possible, keep communicating processes on the same socket to maximize memory throughput + while minimizing the socket interconnect traffic. Other resources - The analysis of other resources is outside the scope of this paper. However, - a common scenario is that an application is IO-bound. A superficial check can be made - using the vmstat tool. This tool checks what percentage of CPU time - is spent idle combined with the number of processes that are blocked and the values in - the bi and bo - columns. Similarly, if PSI is enabled, then the IO pressure file will show whether - some or all active tasks are losing time because of lack of resources. Further analysis - is required to determine if an application is IO rather than CPU- or memory-bound. - However, this is a sufficient check to start with. + The analysis of other resources is outside the scope of this paper. However, a common + scenario is that an application is IO-bound. A superficial check can be made using the + vmstat tool. This tool checks what percentage of CPU time is spent idle + combined with the number of processes that are blocked and the values in the bi and bo columns. Similarly, + if PSI is enabled, then the IO pressure file will show whether some or all active tasks are + losing time because of lack of resources. Further analysis is required to determine if an + application is IO rather than CPU- or memory-bound. However, this is a sufficient check to + start with. @@ -883,70 +856,68 @@ epyc:~ # taskset -c `cat /sys/devices/system/cpu/cpu1/cache/index3/shared_cpu_li Power management - Modern CPUs balance power consumption and performance through Performance States (P-States). 
Low utilization workloads may - use lower P-States to conserve power while still achieving acceptable performance. When - a CPU is idle, lower power idle states (C-States) can - be selected to further conserve power. However, this comes with higher exit latencies - when lower power states are selected. It is further complicated by the fact that, - if individual cores are idle and running at low power, the additional power can be - used to boost the performance of active cores. This means this scenario is not a - straightforward balance between power consumption and performance. More complexity - is added on the AMD EPYC 7003 and 9004 Series Processors whereby spare power may be - used to boost either cores or the Infinity links. - - The 4th Generation AMD EPYC Processor provides SenseMI. This technology, among other capabilities, enables CPUs to - make adjustments to voltage and frequency depending on the historical state of the - CPU. There is a latency penalty when switching P-States, but the AMD EPYC 9004 - Series Processor is capable of making fine-grained adjustments to reduce the likelihood - that the latency is a bottleneck. On SUSE Linux Enterprise - Server, the AMD EPYC 9004 Series Processor uses the acpi_cpufreq - driver. This allows P-states to be configured to match requested performance. However, - this is limited in terms of the full capabilities of the hardware. It cannot boost - the frequency beyond the maximum stated frequencies, and if a target is specified, - then the highest frequency below the target will be used. A special case is if the - governor is set to performance. In this situation - the hardware will use the highest available frequency in an attempt to work quickly - and then return to idle. + Modern CPUs balance power consumption and performance through Performance States (P-States). Low utilization workloads may use lower P-States + to conserve power while still achieving acceptable performance. When a CPU is idle, lower + power idle states (C-States) can be selected to further + conserve power. However, this comes with higher exit latencies when lower power states are + selected. It is further complicated by the fact that, if individual cores are idle and running + at low power, the additional power can be used to boost the performance of active cores. This + means this scenario is not a straightforward balance between power consumption and + performance. More complexity is added on the AMD EPYC 7003 and 9004 Series Processors whereby + spare power may be used to boost either cores or the Infinity links. + + The 4th Generation AMD EPYC Processor provides SenseMI. + This technology, among other capabilities, enables CPUs to make adjustments to voltage and + frequency depending on the historical state of the CPU. There is a latency penalty when + switching P-States, but the AMD EPYC 9004 Series Processor is capable of making fine-grained + adjustments to reduce the likelihood that the latency is a bottleneck. On SUSE Linux + Enterprise Server, the AMD EPYC 9004 Series Processor uses the acpi_cpufreq + driver. This allows P-states to be configured to match requested performance. However, this is + limited in terms of the full capabilities of the hardware. It cannot boost the frequency + beyond the maximum stated frequencies, and if a target is specified, then the highest + frequency below the target will be used. A special case is if the governor is set to performance. 
In this situation the hardware will use the highest + available frequency in an attempt to work quickly and then return to idle. What should be determined is whether power management is likely to be a factor for a - workload. A single thread workload that is CPU-bound is likely to run at the highest - frequency on a single core. Lastly, a workload that does not communicate heavily - with other processes and is mostly CPU-bound is unlikely to experience any side - effects because of power management. The exceptions are when load balancing moves - tasks away from active CPUs if there is a compute imbalance between NUMA nodes or - the machine is heavily overloaded. + workload. A single thread workload that is CPU-bound is likely to run at the highest frequency + on a single core. Lastly, a workload that does not communicate heavily with other processes + and is mostly CPU-bound is unlikely to experience any side effects because of power + management. The exceptions are when load balancing moves tasks away from active CPUs if there + is a compute imbalance between NUMA nodes or the machine is heavily overloaded. The workloads that are most likely to be affected by power management are those that: - - - synchronously communicate between multiple threads. - - idle frequently. - - have low CPU utilization overall. - - - - It will be further compounded if the threads are sensitive to wakeup latency. - - Power management is critical, not only for power savings, but because power saved - from idling inactive cores can be used to boost the performance of active cores. On - the other side, low utilization tasks may take longer to complete if the task is not - active long enough for the CPU to run at a high frequency. In some cases, problems - can be avoided by configuring a workload to use the minimum number of CPUs necessary - for its active tasks. Deciding that means monitoring the power state of CPUs. - - The P-State and C-State of each CPU can be examined using the - turbostat utility. The computer output below shows an example, - slightly edited to fit the page, where a workload is busy on CPU 0 and other - workloads are idle. A useful exercise is to start a workload and monitor the output - of turbostat paying close attention to CPUs that have moderate - utilization and running at a lower frequency. If the workload is latency-sensitive, - it is grounds for either minimizing the number of CPUs available to the workload or - configuring power management. + + + + synchronously communicate between multiple threads. + + + idle frequently. + + + have low CPU utilization overall. + + + + It will be further compounded if the threads are sensitive to wakeup latency. + + Power management is critical, not only for power savings, but because power saved from + idling inactive cores can be used to boost the performance of active cores. On the other side, + low utilization tasks may take longer to complete if the task is not active long enough for + the CPU to run at a high frequency. In some cases, problems can be avoided by configuring a + workload to use the minimum number of CPUs necessary for its active tasks. Deciding that means + monitoring the power state of CPUs. + + The P-State and C-State of each CPU can be examined using the turbostat + utility. The computer output below shows an example, slightly edited to fit the page, where a + workload is busy on CPU 0 and other workloads are idle. 
A useful exercise is to start a + workload and monitor the output of turbostat paying close attention to CPUs + that have moderate utilization and running at a lower frequency. If the workload is + latency-sensitive, it is grounds for either minimizing the number of CPUs available to the + workload or configuring power management. Pac. Die Core CPU Avg_M Busy% Bzy_M TSC_M IRQ POLL C1 C2 POLL% C1% C2% @@ -959,9 +930,8 @@ Pac. Die Core CPU Avg_M Busy% Bzy_M TSC_M IRQ POLL C1 C2 POLL% C1% C2% 0 0 2 194 78 2.22 3500 1200 0.92 84 0 0 29 0.00 0.00 - If tuning CPU frequency management is appropriate, the following actions can be - taken to set the management policy to performance using the cpupower - utility: + If tuning CPU frequency management is appropriate, the following actions can be taken to + set the management policy to performance using the cpupower utility: epyc:~# cpupower frequency-set -g performance Setting cpu: 0 @@ -969,36 +939,35 @@ Setting cpu: 1 Setting cpu: 2 ... - Persisting it across reboots can be done via a local init - script, via udev or via one-shot systemd - service file if necessary. Note that turbostat - will still show that idling CPUs use a low frequency. The impact of the policy is - that the highest P-State will be used as soon as possible when the CPU is active. In - some cases, a latency bottleneck will occur because of a CPU exiting idle. If this is - identified on the AMD EPYC 9004 Series Processor, restrict the C-state by specifying - processor.max_cstate=2 if lower P-States exist on the kernel command - line. This will prevent CPUs from entering lower C-states. The availability of - P-states can be determined with cpupower idle-info. It is expected - on the AMD EPYC 9004 Series Processor that the exit latency from C1 is very low. But - by allowing C2, it reduces interference from the idle loop injecting micro-operations - into the pipeline and should be the best state overall. It is also possible to set - the max idle state on individual cores using cpupower idle-set. If - SMT is enabled, the idle state should be set on both siblings. + Persisting it across reboots can be done via a local init script, via + udev or via one-shot systemd service file if + necessary. Note that turbostat will still show that idling CPUs use a low + frequency. The impact of the policy is that the highest P-State will be used as soon as + possible when the CPU is active. In some cases, a latency bottleneck will occur because of a + CPU exiting idle. If this is identified on the AMD EPYC 9004 Series Processor, restrict the + C-state by specifying processor.max_cstate=2 if lower P-States exist on the + kernel command line. This will prevent CPUs from entering lower C-states. The availability of + P-states can be determined with cpupower idle-info. It is expected on the + AMD EPYC 9004 Series Processor that the exit latency from C1 is very low. But by allowing C2, + it reduces interference from the idle loop injecting micro-operations into the pipeline and + should be the best state overall. It is also possible to set the max idle state on individual + cores using cpupower idle-set. If SMT is enabled, the idle state should be + set on both siblings. Security mitigation - On occasion, a security fix is applied to a distribution that has a performance - impact. The most notable example is Meltdown - and multiple variants of Spectre but includes - others such as ForeShadow (L1TF). 
The AMD EPYC 9004 Series Processor is immune to the - Meltdown variant and page table isolation is never active. However, it is vulnerable - to a subset of Spectre variants although retbleed - is a notable exception. The following table lists all security vulnerabilities that - affect the 4th Generation AMD EPYC Processor. In addition, it specifies which mitigations - are enabled by default for SUSE Linux Enterprise Server 15 SP4. + On occasion, a security fix is applied to a distribution that has a performance impact. + The most notable example is Meltdown and multiple variants + of Spectre but includes others such as ForeShadow (L1TF). + The AMD EPYC 9004 Series Processor is immune to the Meltdown variant and page table isolation + is never active. However, it is vulnerable to a subset of Spectre variants although retbleed is a notable exception. The following table lists all + security vulnerabilities that affect the 4th Generation AMD EPYC Processor. In addition, it + specifies which mitigations are enabled by default for SUSE Linux Enterprise Server 15 + SP4. Security mitigations for AMD EPYC 9004 Series Processors @@ -1139,65 +1108,64 @@ Setting cpu: 2
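Which of these mitigations are actually in effect on a given installation can be verified at run time through sysfs; the exact set of files and messages depends on the CPU and kernel version:

# each file reports the state of one vulnerability class for the running kernel
epyc:~ # grep -r . /sys/devices/system/cpu/vulnerabilities/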
- If it can be guaranteed that the server is in a trusted environment running - only known code that is not malicious, the mitigations=off - parameter can be specified on the kernel command line. This option disables all - security mitigations and may improve performance in some cases. However, at the - time of writing and in most cases, the gain on an AMD EPYC 9004 Series Processor is - marginal when compared to other CPUs. Evaluate carefully whether the gain is worth - the risk and if unsure, leave the mitigations enabled. + If it can be guaranteed that the server is in a trusted environment running only known + code that is not malicious, the mitigations=off parameter can be + specified on the kernel command line. This option disables all security mitigations and may + improve performance in some cases. However, at the time of writing and in most cases, the gain + on an AMD EPYC 9004 Series Processor is marginal when compared to other CPUs. Evaluate + carefully whether the gain is worth the risk and if unsure, leave the mitigations + enabled.
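If, after such an evaluation, the parameter is to be used anyway, it is typically added through the boot loader configuration. The following sketch assumes a GRUB 2 based installation and direct editing of /etc/default/grub:

# append mitigations=off to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then:
epyc:~ # grub2-mkconfig -o /boot/grub2/grub.cfg
epyc:~ # reboot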
Hardware-based profiling - The AMD EPYC 9004 Series Processor has extensive Performance Monitoring - Unit (PMU) capabilities. Advanced monitoring of a workload can be conducted via the - perf. The command supports a range of hardware events including - cycles, L1 cache access/misses, TLB access/misses, retired branch instructions and - mispredicted branches. To identify what subsystem may be worth tuning in the OS, - the most useful invocation is perf record -a -e cycles sleep 30. This - captures 30 seconds of data for the entire system. You can also call perf - record -e cycles command to gather a profile of a given workload. Specific - information on the OS can be gathered through tracepoints or creating probe points - with perf or trace-cmd. But the details on - how to conduct such analyses are beyond the scope of this paper. + The AMD EPYC 9004 Series Processor has extensive Performance + Monitoring Unit (PMU) capabilities. Advanced monitoring of a workload can be + conducted via the perf. The command supports a range of hardware events + including cycles, L1 cache access/misses, TLB access/misses, retired branch instructions and + mispredicted branches. To identify what subsystem may be worth tuning in the OS, the most + useful invocation is perf record -a -e cycles sleep 30. This captures 30 + seconds of data for the entire system. You can also call perf record -e cycles + command to gather a profile of a given workload. Specific information on the OS + can be gathered through tracepoints or creating probe points with perf or + trace-cmd. But the details on how to conduct such analyses are beyond the + scope of this paper. Compiler selection - SUSE Linux Enterprise ships with multiple versions of GCC. SUSE Linux Enterprise - 15 SP4 service packs ship with GCC 7 which at the time of writing is - GCC 7.5.0 with the package version 7-3.9.1. The - intention is to avoid unintended consequences when porting code that may affect - the building and operation of applications. The GCC 7 development - originally started in 2016, with a branch created in 2017 and GCC 7.5 - released in 2019. This means that the system compiler has no awareness of the AMD - EPYC 7002 or later Series processors. - - Fortunately, the add-on Developer Tools Module includes - additional compilers with the latest version currently based on GCC - 11.2.1. This compiler is capable of generating optimized code targeted at - the 3rd Generation AMD EPYC Processor using the znver3 target. - It also provides additional support for OpenMP 5.0. Unlike the system - compiler, the major version of GCC shipped with the Developer Tools Module can change - during the lifetime of the product. It is expected that GCC 12 will - be included in future releases for generating optimized code for the 4th Generation - AMD EPYC Processor. Unfortunately, at the time of writing, there is not a version of - GCC available optimized for the AMD EPYC 9004 Series Processor - specifically. + SUSE Linux Enterprise ships with multiple versions of GCC. SUSE Linux Enterprise 15 SP4 + service packs ship with GCC 7 which at the time of writing is GCC + 7.5.0 with the package version 7-3.9.1. The intention is to + avoid unintended consequences when porting code that may affect the building and operation of + applications. The GCC 7 development originally started in 2016, with a + branch created in 2017 and GCC 7.5 released in 2019. This means that the + system compiler has no awareness of the AMD EPYC 7002 or later Series processors. 
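Whether a compiler accepts a given -march target can be checked with a trivial test file. The snippet below is illustrative only; gcc-11 refers to the Development Tools Module compiler introduced in the next paragraph:

epyc:~ # echo 'int main(void) { return 0; }' > t.c
# the GCC 7 system compiler is expected to reject the newer target
epyc:~ # gcc -march=znver3 -c t.c
# a newer compiler from the Development Tools Module accepts it
epyc:~ # gcc-11 -march=znver3 -c t.c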
+ + Fortunately, the add-on Developer Tools Module includes additional + compilers with the latest version currently based on GCC 11.2.1. This + compiler is capable of generating optimized code targeted at the 3rd Generation AMD EPYC + Processor using the znver3 target. It also provides additional support + for OpenMP 5.0. Unlike the system compiler, the major version of GCC + shipped with the Developer Tools Module can change during the lifetime of the product. It is + expected that GCC 12 will be included in future releases for generating + optimized code for the 4th Generation AMD EPYC Processor. Unfortunately, at the time of + writing, there is not a version of GCC available optimized for the AMD EPYC 9004 Series + Processor specifically. The OS packages are built against a generic target. However, where applications and benchmarks can be rebuilt from source, the minimum option should be - -march=znver3 for GCC 11 and later versions - of GCC. + -march=znver3 for GCC 11 and later versions of + GCC. - Further information on how to install the Developer Tools Module - and how to build optimized versions of applications can be found in the guide - Advanced optimization and new capabilities of GCC 11. + Further information on how to install the Developer Tools Module and + how to build optimized versions of applications can be found in the guide Advanced optimization and new capabilities of GCC 11. @@ -1205,27 +1173,26 @@ Setting cpu: 2 Candidate workloads The workloads that will benefit most from the 4th Generation AMD EPYC Processor - architecture are those that can be parallelized and are either memory or IO-bound. This - is particularly true for workloads that are NUMA friendly. They can - be parallelized, and each thread can operate independently for most of the - workload's lifetime. For memory-bound workloads, the primary benefit will be taking - advantage of the high bandwidth available on each channel. For IO-bound workloads, - the primary benefit will be realized when there are multiple storage devices, each - of which is connected to the node local to a task issuing IO. + architecture are those that can be parallelized and are either memory or IO-bound. This is + particularly true for workloads that are NUMA friendly. They can be + parallelized, and each thread can operate independently for most of the workload's lifetime. + For memory-bound workloads, the primary benefit will be taking advantage of the high bandwidth + available on each channel. For IO-bound workloads, the primary benefit will be realized when + there are multiple storage devices, each of which is connected to the node local to a task + issuing IO. Test setup - The following sections will demonstrate how an OpenMP and MPI workload can - be configured and tuned on an AMD EPYC 9004 Series Processor reference platform. The - system has two processors, each with 96 cores and SMT enabled for a total of 192 - cores (384 logical CPUs). The peak bandwidth available to the machine depends on - the type of DIMMs installed and how the DIMM slots are populated. Note that the - peak theoretical transfer speed is rarely reached in practice, given that it can - be affected by the mix of reads/writes and the location and temporal proximity of - memory locations accessed. + The following sections will demonstrate how an OpenMP and MPI workload can be configured + and tuned on an AMD EPYC 9004 Series Processor reference platform. 
The system has two + processors, each with 96 cores and SMT enabled for a total of 192 cores (384 logical CPUs). + The peak bandwidth available to the machine depends on the type of DIMMs installed and how + the DIMM slots are populated. Note that the peak theoretical transfer speed is rarely + reached in practice, given that it can be affected by the mix of reads/writes and the + location and temporal proximity of memory locations accessed. - @@ -1311,24 +1278,23 @@ Setting cpu: 2 Test workload: STREAM - STREAM is a memory bandwidth benchmark - created by Dr. John D. McCalpin from the University of Virginia (for more - information, see https://www.cs.virginia.edu/stream/). It can be used to measure bandwidth - of each cache level and bandwidth to main memory while calculating four basic - vector operations. Each operation can exhibit different throughputs to main memory - depending on the locations and type of access. - - The benchmark was configured to run both single-threaded and parallelized with - OpenMP to take advantage of multiple memory controllers. The array of elements for - the benchmark was set at 268,435,456 elements at compile time so that each array - was 2048MB in size for a total memory footprint of approximately 6144 MB. The size - was selected in line with the recommendation from STREAM that the array sizes be - at least 4 times the total size of L3 cache available in the system. Pay special - attention to the exact size of the L3 cache if V-Cache is present. An array-size - offset was used so that the separate arrays for each parallelized thread would - not share a Transparent Huge Page. The reason is that NUMA balancing may - choose to migrate shared pages leading to some distortion of the results. + STREAM is a memory bandwidth benchmark created by Dr. + John D. McCalpin from the University of Virginia (for more information, see https://www.cs.virginia.edu/stream/). It can be used to measure bandwidth of each + cache level and bandwidth to main memory while calculating four basic vector operations. + Each operation can exhibit different throughputs to main memory depending on the locations + and type of access. + + The benchmark was configured to run both single-threaded and parallelized with OpenMP to + take advantage of multiple memory controllers. The array of elements for the benchmark was + set at 268,435,456 elements at compile time so that each array was 2048MB in size for a + total memory footprint of approximately 6144 MB. The size was selected in line with the + recommendation from STREAM that the array sizes be at least 4 times the total size of L3 + cache available in the system. Pay special attention to the exact size of the L3 cache if + V-Cache is present. An array-size offset was used so that the separate arrays for each + parallelized thread would not share a Transparent Huge Page. The reason is that NUMA + balancing may choose to migrate shared pages leading to some distortion of the results. Test workload: STREAM @@ -1381,16 +1347,15 @@ Setting cpu: 2
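As a rough sketch, a build and run along these lines could look as follows. The compiler invocation, the OFFSET value and the thread count are illustrative and are not the exact command line used for the published results:

# 268,435,456 elements per array; OFFSET is an arbitrary non-zero example value
epyc:~ # gcc-11 -Ofast -march=znver3 -fopenmp \
           -DSTREAM_ARRAY_SIZE=268435456 -DOFFSET=512 stream.c -o stream
# one OpenMP thread per L3 cache on this system, spread across the caches
epyc:~ # OMP_NUM_THREADS=24 OMP_PROC_BIND=spread ./stream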
- The march=znver3 is a reflection of the compiler - available in SUSE Linux Enterprise 15 SP4 at the time of writing. - It should be checked if a later GCC version is - available in the Developer Tools Module that supported - march=znver4. The number of openMP threads was selected - to have at least one thread running for every memory channel by having one thread - per L3 cache available. The OMP_PROC_BIND parameter spreads - the threads such that one thread is bound to each available dedicated L3 cache to - maximize available bandwidth. This can be verified using perf, - as illustrated below with slight editing for formatting and clarity. + The march=znver3 is a reflection of the compiler available in + SUSE Linux Enterprise 15 SP4 at the time of writing. It should be checked if a later + GCC version is available in the Developer Tools + Module that supported march=znver4. The number of openMP + threads was selected to have at least one thread running for every memory channel by having + one thread per L3 cache available. The OMP_PROC_BIND parameter + spreads the threads such that one thread is bound to each available dedicated L3 cache to + maximize available bandwidth. This can be verified using perf, as + illustrated below with slight editing for formatting and clarity. epyc:~ # perf record -e sched:sched_migrate_task ./stream epyc:~ # perf script @@ -1405,53 +1370,49 @@ epyc:~ # perf script ... - Several options were considered for the test system that were unnecessary - for STREAM running on the AMD EPYC 9004 Series Processor but may be useful in other - situations. STREAM performance can be limited if a load/store instruction stalls to - fetch data. CPUs may automatically prefetch data based on historical behavior but it - is not guaranteed. In limited cases, depending on the CPU and workload, this may be - addressed by specifying -fprefetch-loop-arrays and depending - on whether the workload is store-intensive, -mprefetchwt1. In - the test system using AMD EPYC 9004 Series Processors, explicit prefetching did - not help and was omitted. This is because an explicitly scheduled prefetch may - disable a CPU's predictive algorithms and degrade performance. Similarly, for - some workloads branch mispredictions can be a major problem, and in some cases - breach mispredictions can be offset against I-Cache pressure by specifying - -funroll-loops. In the case of STREAM on the test - system, the CPU accurately predicted the branches rendering the unrolling of - loops unnecessary. For math-intensive workloads it can be beneficial to link the - application with -lmvec depending on the application. In the - case of STREAM, the workload did not use significant math-based operations and so - this option was not used. Some styles of code blocks and loops can also be optimized - to use vectored operations by specifying -ftree-vectorize and - explicitly adding support for CPU features such as -mavx2. In - all cases, STREAM does not benefit as its operations are very basic. But it should - be considered on an application-by-application basis and when building support - libraries such as numerical libraries. In all cases, experimentation is recommended - but caution advised. This holds particularly true when considering options like prefetch that may - have been advisable on much older CPUs or completely different workloads. Such - options are not universally beneficial or always suitable for modern CPUs such as - the AMD EPYC 9004 Series Processors. 
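For experimentation with the options named above, a trial build can be as simple as the line below. As stated, none of these flags improved STREAM on the test system, so this is only a starting point for other applications:

# add the optional flags one at a time and measure; do not assume they help
epyc:~ # gcc-11 -Ofast -march=znver3 -fopenmp -funroll-loops \
           -fprefetch-loop-arrays stream.c -o stream-exp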
- - In the case of STREAM running on the AMD EPYC 9004 Series Processor, - it was sufficient to enable -Ofast. This includes the - -O3 optimizations to enable vectorization. In addition, - it gives some leeway for optimizations that increase the code size with additional - options for fast math that may not be standards-compliant. - - For OpenMP, the SPREAD option was used to spread the - load across L3 caches. OpenMP has a variety of different placement options - if manually tuning placement. But careful attention should be paid to - OMP_PLACES, given the importance of the L3 Cache topology in - AMD EPYC 9004 Series Processors, if the operating system does not automatically - place tasks appropriately. At the time of writing, it is not possible to - specify l3cache as a place similar to what MPI has. In - this case, the topology will need to be examined either with library support - such as hwloc, directly via the sysfs - or manually. While it is possible to guess via the CPUID, it is not recommended - as CPUs may be offlined or the enumeration may vary between platforms because of BIOS - implementations. An example specification of places based on L3 cache for the test - system is: + Several options were considered for the test system that were unnecessary for STREAM + running on the AMD EPYC 9004 Series Processor but may be useful in other situations. STREAM + performance can be limited if a load/store instruction stalls to fetch data. CPUs may + automatically prefetch data based on historical behavior but it is not guaranteed. In + limited cases, depending on the CPU and workload, this may be addressed by specifying + -fprefetch-loop-arrays and depending on whether the workload is + store-intensive, -mprefetchwt1. In the test system using AMD EPYC + 9004 Series Processors, explicit prefetching did not help and was omitted. This is because + an explicitly scheduled prefetch may disable a CPU's predictive algorithms and degrade + performance. Similarly, for some workloads branch mispredictions can be a major problem, and + in some cases breach mispredictions can be offset against I-Cache pressure by specifying + -funroll-loops. In the case of STREAM on the test system, the CPU + accurately predicted the branches rendering the unrolling of loops unnecessary. For + math-intensive workloads it can be beneficial to link the application with + -lmvec depending on the application. In the case of STREAM, the + workload did not use significant math-based operations and so this option was not used. Some + styles of code blocks and loops can also be optimized to use vectored operations by + specifying -ftree-vectorize and explicitly adding support for CPU + features such as -mavx2. In all cases, STREAM does not benefit as its + operations are very basic. But it should be considered on an application-by-application + basis and when building support libraries such as numerical libraries. In all cases, + experimentation is recommended but caution advised. This holds particularly true when + considering options like prefetch that may have been advisable on much older CPUs or + completely different workloads. Such options are not universally beneficial or always + suitable for modern CPUs such as the AMD EPYC 9004 Series Processors. + + In the case of STREAM running on the AMD EPYC 9004 Series Processor, it was sufficient + to enable -Ofast. This includes the -O3 + optimizations to enable vectorization. 
In addition, it gives some leeway for optimizations + that increase the code size with additional options for fast math that may not be + standards-compliant. + + For OpenMP, the SPREAD option was used to spread the load across + L3 caches. OpenMP has a variety of different placement options if manually tuning placement. + But careful attention should be paid to OMP_PLACES, given the + importance of the L3 Cache topology in AMD EPYC 9004 Series Processors, if the operating + system does not automatically place tasks appropriately. At the time of writing, it is not + possible to specify l3cache as a place similar to what MPI has. In + this case, the topology will need to be examined either with library support such as + hwloc, directly via the sysfs or manually. While + it is possible to guess via the CPUID, it is not recommended as CPUs may be offlined or the + enumeration may vary between platforms because of BIOS implementations. An example + specification of places based on L3 cache for the test system is: {0:8,192:8}, {8:8,200:8}, {16:8,208:8}, {24:8,216:8}, {32:8,224:8}, {40:8,232:8}, @@ -1460,15 +1421,14 @@ epyc:~ # perf script {144:8,336:8}, {152:8,344:8}, {160:8,352:8}, {168:8,360:8}, {176:8,368:8}, {184:8,376:8} - shows the reported bandwidth for the - single and parallelized case. The single-threaded bandwidth for the basic Copy - vector operation on a single core was 50.3 GB/sec. This is higher than the - theoretical maximum of a single DIMM, but the IO die may interleave accesses, and - caching effects and prefetch still apply. The total throughput for each parallel - operation ranged from 1259 GB/sec to 1587 GB/sec depending on the type of operation - and how efficiently memory bandwidth was used. This is very roughly scaling with the - number of memory channels available on the machine with caching effects accounting - for results exceeding theoretical maximums. + shows the reported bandwidth for the single and + parallelized case. The single-threaded bandwidth for the basic Copy vector operation on a + single core was 50.3 GB/sec. This is higher than the theoretical maximum of a single DIMM, + but the IO die may interleave accesses, and caching effects and prefetch still apply. The + total throughput for each parallel operation ranged from 1259 GB/sec to 1587 GB/sec + depending on the type of operation and how efficiently memory bandwidth was used. This is + very roughly scaling with the number of memory channels available on the machine with + caching effects accounting for results exceeding theoretical maximums. STREAM scores @@ -1513,11 +1473,11 @@ epyc:~ # perf script detail the tuning selected for this workload. The most basic step is setting the CPU governor to performance although - it is not mandatory. This can address issues with short-lived or mobile tasks failing to - run long enough for a higher P-State to be selected even though the workload is very - throughput-sensitive. The migration cost parameter is set to reduce the frequency in which - the load balancer will move an individual task. The minimum granularity is adjusted to reduce - overscheduling effects. + it is not mandatory. This can address issues with short-lived or mobile tasks failing to run + long enough for a higher P-State to be selected even though the workload is very + throughput-sensitive. The migration cost parameter is set to reduce the frequency in which + the load balancer will move an individual task. The minimum granularity is adjusted to + reduce overscheduling effects. 
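Where these scheduler tunables are set depends on the kernel version; the sketch below uses the debugfs interface of recent kernels, and the values shown are placeholders rather than the settings used for the published results:

epyc:~ # cpupower frequency-set -g performance
# older kernels expose these as sysctls under /proc/sys/kernel/ instead
epyc:~ # echo 5000000  > /sys/kernel/debug/sched/migration_cost_ns
epyc:~ # echo 10000000 > /sys/kernel/debug/sched/min_granularity_ns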
Depending on the computational kernel used, the workload may require a power-of-two number or a square number of processes to be used. However, note that using all available @@ -1528,8 +1488,8 @@ epyc:~ # perf script available. These factors should be carefully considered when tuning for parallelized workloads in general and MPI workloads in particular. - In the specific case of testing NPB on the System Under Test, there was usually a limited - advantage to limiting the number of CPUs used. For the In the specific case of testing NPB on the System Under Test, there was usually a + limited advantage to limiting the number of CPUs used. For the Embarrassingly Parallel (EP) load in particular, it benefits from using all available CPUs. Hence, the default configuration used all available CPUs (384) which is both a power-of-two and square number of CPUs because it was a sensible starting point. However, @@ -1544,20 +1504,20 @@ epyc:~ # perf script universal good choice for optimizing a workload for a platform. Thus, experimentation and validation of tuning parameters is vital. - The basic compilation flags simply turned on all safe optimizations. The - tuned flags used -Ofast which can be unsafe for some - mathematical workloads but generated the correct output for NPB. The other - flags used the optimal instructions available on the distribution compiler and - vectorized some operations. GCC 11 is more strict in terms - of matching types in Fortran. Depending on the version of NPB used, it may - be necessary to specify the -fallow-argument-mismatch - or -fallow-invalid-boz to compile unless the source code - is modified. + The basic compilation flags simply turned on all safe optimizations. The tuned flags + used -Ofast which can be unsafe for some mathematical workloads but + generated the correct output for NPB. The other flags used the optimal instructions + available on the distribution compiler and vectorized some operations. GCC + 11 is more strict in terms of matching types in Fortran. Depending on the + version of NPB used, it may be necessary to specify the + -fallow-argument-mismatch or + -fallow-invalid-boz to compile unless the source code is + modified. As NPB uses shared files, an XFS partition was used for the temporary files. It is, however, only used for mapping shared files and is not a critical path for the benchmark and - no IO tuning is necessary. In some cases, with MPI applications, it will be possible to use a - tmpfs partition for OpenMPI. This avoids unnecessary IO assuming the + no IO tuning is necessary. In some cases, with MPI applications, it will be possible to use + a tmpfs partition for OpenMPI. This avoids unnecessary IO assuming the increased physical memory usage does not cause the application to be paged out. @@ -1667,8 +1627,8 @@ epyc:~ # perf script
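When the number of ranks is limited, the mapping can be expressed directly on the mpirun command line. The sketch below assumes Open MPI 4.x and uses an NPB-style binary name purely as a placeholder:

# one MPI rank per L3 cache, bound to that cache
epyc:~ # mpirun --map-by ppr:1:l3cache --bind-to l3cache ./bt.D.x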
- shows the time, as reported by - the benchmark, for each of the kernels to complete. + shows the time, as reported by the benchmark, + for each of the kernels to complete.
NAS MPI Results @@ -1683,29 +1643,26 @@ epyc:~ # perf script
- The gcc-7-default test used the system - compiler, all available CPUs, basic compilation options and the - performance governor. The second test gcc-11-default used an alternative compiler. gcc-11-tuned used additional compilation options, and - bound tasks to L3 caches gaining between 1.89% and 43.4% performance on average - relative to gcc-7-default. Even taking into - account that ep.D is an outlier because of the fact it is - an embarrassingly parallel workload, the range of improvements is between 1.89% - and 11.65%. The final test selective used - processes that either used all CPUs, avoided heavy overloaded or limited processes - to one per L3 cache, showing additional gains between 0.7% and 15.56% depending on - the computational kernel. - - In some cases, it will be necessary to compile an application that can run - on different CPUs. In such cases, -march=znver3 - may not be suitable if it generates binaries that are incompatible with other - vendors. In such cases, it is possible to specify the ISA-specific options that are - cross-compatible with many x86-based CPUs such as -mavx2, - -mfma, -msse2 or - msse4a while favoring optimal code generation for AMD - EPYC 9004 Series Processors with -mtune=znver3. This can be - used to strike a balance between excellent performance on a single CPU and great + The gcc-7-default test used the system compiler, all + available CPUs, basic compilation options and the performance + governor. The second test gcc-11-default used an + alternative compiler. gcc-11-tuned used additional + compilation options, and bound tasks to L3 caches gaining between 1.89% and 43.4% + performance on average relative to gcc-7-default. Even + taking into account that ep.D is an outlier because of the fact it is + an embarrassingly parallel workload, the range of improvements is between 1.89% and 11.65%. + The final test selective used processes that either used + all CPUs, avoided heavy overloaded or limited processes to one per L3 cache, showing + additional gains between 0.7% and 15.56% depending on the computational kernel. + + In some cases, it will be necessary to compile an application that can run on different + CPUs. In such cases, -march=znver3 may not be suitable if it + generates binaries that are incompatible with other vendors. In such cases, it is possible + to specify the ISA-specific options that are cross-compatible with many x86-based CPUs such + as -mavx2, -mfma, + -msse2 or msse4a while favoring optimal code + generation for AMD EPYC 9004 Series Processors with -mtune=znver3. + This can be used to strike a balance between excellent performance on a single CPU and great performance on multiple CPUs.
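Such a compromise build could look like the line below; it merely illustrates the flag combination discussed here, and app.c is a placeholder:

# runs on any x86-64 CPU providing AVX2 and FMA, while scheduling code for Zen 3
epyc:~ # gcc-11 -O3 -mavx2 -mfma -mtune=znver3 -o app app.c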
@@ -1714,38 +1671,36 @@ epyc:~ # perf script Tuning AMD EPYC 9004 Series Dense - As the AMD EPYC 9004 Series Classic and AMD EPYC Series Dense are ISA-compatible, - no code tuning or compiler setting changes should be necessary. For Cloud environments, - partitioning or any binding of Virtual CPUs to Physical CPUs may need to be adjusted - to account for the increased number of cores. The additional cores may also allow - additional containers or virtual machines to be hosted on the same physical machine - without CPU contention. Similarly, the degree of parallelization for HPC workloads - may need to be adjusted. In cases where the workload is tuned based on the number of - CCD's, adjustments may be necessary for the increased number of CCDs. An exception are cases - where the workload already hits scaling limits. While tuning based on the different - number of CCXs is possible, it should only be necessary for applications with very - strict latency requirements. As the size of the cache is halved, partitioning based - on cache sizes may also need to be adjusted. In some cases, where workloads are tuned - based on the output of tools like hwloc partitioning - - may adjust automatically but any static partitioning should be re-examined. - - When configuring workloads for either the AMD EPYC 9004 Series Classic or the AMD - EPYC 9004 Series Dense, the most important task is to set expectations. While super-linear - scaling is possible, it should not be expected. It may be possible to achieve super-linear - scaling in Cloud Environments for the number of instances hosted without performance - loss if individual containers or virtual machines are not utilising 100% of CPU. However, - it should be planned carefully and tested. This would be particularly true in cases - where multiple instances are hosted that have different times of day or year for active - phases. The normal expectation is a best case of 33% gain for CPU-intensive workloads - due to the increased number of cores. But sub-linear scaling is common due to resource - contention. Contention between SMT siblings, memory bandwidth, memory availability, - memory interconnects, thread communication overhead or peripheral devices may prevent - perfect linear scaling even for perfectly parallelized applications. Similarly, not all - applications can scale perfectly. It is possible for performance to plateau and even - regress as the degree of parallelization increases. Testing for parallelized workloads - using NAS indicated that actual gains were between 4% and 21% for most workloads that - did not have an inherent scaling limitation. + As the AMD EPYC 9004 Series Classic and AMD EPYC Series Dense are ISA-compatible, no code + tuning or compiler setting changes should be necessary. For Cloud environments, partitioning + or any binding of Virtual CPUs to Physical CPUs may need to be adjusted to account for the + increased number of cores. The additional cores may also allow additional containers or + virtual machines to be hosted on the same physical machine without CPU contention. Similarly, + the degree of parallelization for HPC workloads may need to be adjusted. In cases where the + workload is tuned based on the number of CCD's, adjustments may be necessary for the increased + number of CCDs. An exception are cases where the workload already hits scaling limits. While + tuning based on the different number of CCXs is possible, it should only be necessary for + applications with very strict latency requirements. 
As the size of the cache is halved, + partitioning based on cache sizes may also need to be adjusted. In some cases, where workloads + are tuned based on the output of tools like hwloc partitioning + may adjust + automatically but any static partitioning should be re-examined. + + When configuring workloads for either the AMD EPYC 9004 Series Classic or the AMD EPYC + 9004 Series Dense, the most important task is to set expectations. While super-linear scaling + is possible, it should not be expected. It may be possible to achieve super-linear scaling in + Cloud Environments for the number of instances hosted without performance loss if individual + containers or virtual machines are not utilising 100% of CPU. However, it should be planned + carefully and tested. This would be particularly true in cases where multiple instances are + hosted that have different times of day or year for active phases. The normal expectation is a + best case of 33% gain for CPU-intensive workloads due to the increased number of cores. But + sub-linear scaling is common due to resource contention. Contention between SMT siblings, + memory bandwidth, memory availability, memory interconnects, thread communication overhead or + peripheral devices may prevent perfect linear scaling even for perfectly parallelized + applications. Similarly, not all applications can scale perfectly. It is possible for + performance to plateau and even regress as the degree of parallelization increases. Testing + for parallelized workloads using NAS indicated that actual gains were between 4% and 21% for + most workloads that did not have an inherent scaling limitation. @@ -1753,24 +1708,22 @@ epyc:~ # perf script Conclusion The introduction of the AMD EPYC 9004 Series Classic and AMD EPYC 9004 Series Dense - Processors continues to push the boundaries of what is possible for memory and - IO-bound workloads with much higher bandwidth and available number of channels. A - properly configured and tuned workload can exceed the performance of many contemporary - off-the-shelf solutions even when fully customized. The symmetric and balanced nature - of the machine makes the task of tuning a workload considerably easier, given that - each partition can have symmetric performance. And this is a property that can turn - out particularly handy in virtualization, as each virtual machine can be assigned - to each one of the partitions. - - With SUSE Linux Enterprise 15 SP4, all the tools to monitor and tune a workload - are readily available. You can extract the maximum performance and - reliability running your applications on the 4th Generation AMD EPYC Processor - platform. + Processors continues to push the boundaries of what is possible for memory and IO-bound + workloads with much higher bandwidth and available number of channels. A properly configured + and tuned workload can exceed the performance of many contemporary off-the-shelf solutions + even when fully customized. The symmetric and balanced nature of the machine makes the task of + tuning a workload considerably easier, given that each partition can have symmetric + performance. And this is a property that can turn out particularly handy in virtualization, as + each virtual machine can be assigned to each one of the partitions. + + With SUSE Linux Enterprise 15 SP4, all the tools to monitor and tune a workload are + readily available. You can extract the maximum performance and reliability running your + applications on the 4th Generation AMD EPYC Processor platform.
diff --git a/xml/MAIN-SBP-AMD-EPYC-SLES12SP3.xml b/xml/MAIN-SBP-AMD-EPYC-SLES12SP3.xml index c3962a6c7..dd059af8b 100644 --- a/xml/MAIN-SBP-AMD-EPYC-SLES12SP3.xml +++ b/xml/MAIN-SBP-AMD-EPYC-SLES12SP3.xml @@ -8,14 +8,12 @@
Optimizing Linux for AMD EPYC with SUSE Linux Enterprise 12 SP3 - - SUSE Linux Enterprise Server - 12 SP3 https://github.com/SUSE/suse-best-practices/issues/new @@ -24,26 +22,24 @@ https://github.com/SUSE/suse-best-practices/edit/main/xml/ - SUSE Best Practices - - + Best Practices + Tuning & Performance - + Configuration - Optimizing Linux for AMD EPYC with SUSE Linux Enterprise 12 + Optimizing Linux for AMD EPYC with SUSE Linux Enterprise 12 SP3 - Overview of the AMD EPYC* Series Processors and tuning of - computational-intensive workloads on SUSE Linux Enterprise Server 12 SP3. - - SLES - AMD EPYC* + Overview of the AMD EPYC* Series Processors and tuning of + computational-intensive workloads on SUSE Linux Enterprise Server 12 SP3. + Optimizing SLES 12 SP3 for AMD EPYC™ 7002 processors + + SUSE Linux Enterprise Server - SUSE Linux Enterprise 12 SP3 - AMD EPYC™ Series Processors - 2018-06-18 + SUSE Linux Enterprise 12 SP3 + AMD EPYC™ Series Processors @@ -114,16 +110,24 @@ + + + + 2018-06-18 + + + + + + - 2018-06-18 The document at hand provides an overview of the AMD* EPYC* architecture and how computational-intensive workloads can be tuned on SUSE Linux Enterprise Server 12 SP3. - - Disclaimer: + Disclaimer: Documents published as part of the SUSE Best Practices series have been contributed voluntarily by SUSE employees and third parties. They are meant to serve as examples of how particular actions can be performed. They have been compiled with utmost attention to detail. However, diff --git a/xml/MAIN-SBP-CSP-UpdateInfra.xml b/xml/MAIN-SBP-CSP-UpdateInfra.xml index f4da6bd38..1416855d2 100644 --- a/xml/MAIN-SBP-CSP-UpdateInfra.xml +++ b/xml/MAIN-SBP-CSP-UpdateInfra.xml @@ -5,16 +5,13 @@ %entity; ]>
+ xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" xmlns:its="http://www.w3.org/2005/11/its" + xml:id="art-sbp-csp-update-infra" xml:lang="en"> SUSE Update Infrastructure Setup Guide for Cloud Service Providers - - SUSE Linux Enterprise Server 15 SP3 or later, Repository Mirroring Tool - - + https://github.com/SUSE/suse-best-practices/issues/new @@ -24,28 +21,26 @@ - SUSE Best Practices - - - Cloud - Systems Management - - - Configuration - Installation - Subscription Management + Best Practices + + Cloud - SUSE Update Infrastructure - Describes the setup of the recommended infrastructure for providing - SUSE Linux Enterprise Server as on-demand offerings in a cloud environment. - - SLES - RMT + + Configuration + Installation + Subscription Management - - - SUSE Linux Enterprise Server - Repository Mirroring Tool + SUSE Update Infrastructure + Describes the setup of the recommended infrastructure for providing SUSE + Linux Enterprise Server as on-demand offerings in a cloud environment + Setup guide for cloud service providers + + SUSE Linux Enterprise Server + SUSE Linux Enterprise + + + SUSE Linux Enterprise Server + Repository Mirroring Tool @@ -87,22 +82,21 @@ - 2024-06-12 - - + + This guide describes the setup of the recommended infrastructure for offering &sls; diff --git a/xml/MAIN-SBP-DRBD.xml b/xml/MAIN-SBP-DRBD.xml index 9aa8bec50..3dd658010 100644 --- a/xml/MAIN-SBP-DRBD.xml +++ b/xml/MAIN-SBP-DRBD.xml @@ -5,14 +5,13 @@ %entity; ]>
+ xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" + xmlns:its="http://www.w3.org/2005/11/its" xml:id="art-sbp-geo-drbd" xml:lang="en"> Data Replication Across Geo Clusters via DRBD Included with SUSE Linux Enterprise High Availability - - SUSE Linux Enterprise High Availability - 12 SP1, SP2 + https://github.com/SUSE/suse-best-practices/issues/new @@ -22,95 +21,99 @@ - SUSE Best Practices - - + Best Practices + High Availability - + Configuration Installation - Data Replication Across Geo Clusters via DRBD - This technical guide describes the setup of a geo cluster using DRBD as delivered - with SUSE Linux Enterprise High Availability. - - SUSE Linux Enterprise High Availability + Data Replication Across Geo Clusters via DRBD + Describes the setup of a geo cluster using + DRBD as delivered with SUSE Linux Enterprise High Availability + Data replication across geo clusters via DRBD + + SUSE Linux Enterprise High Availability + SUSE Linux Enterprise High Availability - 2016-11-08 - - SUSE Linux Enterprise High Availability 12 SP1, SP2 - DRBD 8, 9 - 2016-11-08 - SUSE Linux Enterprise High Availability 12 SP1, SP2 - DRBD 8, 9 - + SUSE Linux Enterprise High Availability 12 SP1, SP2 + DRBD 8, 9 + + + + + Matt + Kereczman + + + Cluster Engineer + LINBIT + + - - Matt - Kereczman - - - Cluster Engineer - LINBIT - + + Philipp + Marek + + + Senior Software Developer + LINBIT + - - Philipp - Marek - - - Senior Software Developer - LINBIT - + + Kristoffer + Grönlund + + + Architect High Availability + SUSE + - - - Kristoffer - Grönlund - - - Architect High Availability - SUSE - - - - + - - + + - + - + - - + + + + + + 2016-11-08 + + + + + - November 8, 2016 This technical guide describes the setup of a geo cluster using DRBD as delivered - with the SUSE Linux Enterprise High Availability. + with the SUSE Linux Enterprise High Availability. - Disclaimer: - Documents published as part of the SUSE Best Practices series have been contributed voluntarily - by SUSE employees and third parties. They are meant to serve as examples of how particular - actions can be performed. They have been compiled with utmost attention to detail. However, - this does not guarantee complete accuracy. SUSE cannot verify that actions described in these - documents do what is claimed or whether actions described have unintended consequences. - SUSE LLC, its affiliates, the authors, and the translators may not be held liable for possible errors - or the consequences thereof. - + Disclaimer: Documents published as part of the + SUSE Best Practices series have been contributed voluntarily by SUSE employees and + third parties. They are meant to serve as examples of how particular actions can be + performed. They have been compiled with utmost attention to detail. However, this + does not guarantee complete accuracy. SUSE cannot verify that actions described in + these documents do what is claimed or whether actions described have unintended + consequences. SUSE LLC, its affiliates, the authors, and the translators may not be + held liable for possible errors or the consequences thereof. @@ -144,13 +147,13 @@ About SUSE Linux Enterprise High Availability - SUSE Linux Enterprise High Availability is an integrated suite of open - source clustering technologies that enables you to implement highly available - physical and virtual Linux clusters, and to eliminate single points of failure. It - ensures the availability and manageability of critical networked resources including - data, applications, and services. 
Thus, it helps you maintain business continuity, - protect data integrity, and reduce unplanned downtime for your mission-critical - Linux workloads. + SUSE Linux Enterprise High Availability is an integrated suite of open source + clustering technologies that enables you to implement highly available physical and + virtual Linux clusters, and to eliminate single points of failure. It ensures the + availability and manageability of critical networked resources including data, + applications, and services. Thus, it helps you maintain business continuity, protect + data integrity, and reduce unplanned downtime for your mission-critical Linux + workloads. SUSE Linux Enterprise High Availability ships with essential monitoring, messaging, and cluster resource management functionality (supporting failover, @@ -491,8 +494,9 @@ >stacked-on-top-of above, and a proxy { ... } section inside of resource. See LINBIT's DRBD Proxy guide at https://downloads.linbit.com/ for more details - regarding configuring DRBD Proxy. + xlink:href="https://downloads.linbit.com/" + >https://downloads.linbit.com/ for more details regarding configuring + DRBD Proxy. @@ -640,9 +644,9 @@ ticket = "ticket-nfs" As described in the SUSE Multi-Site Cluster Documentation at https://documentation.suse.com/sle-ha-geo/12-SP4/, there - should be an order constraint that makes sure that Booth can fetch the - ticket before trying to start the service. + >https://documentation.suse.com/sle-ha-geo/12-SP4/, there should + be an order constraint that makes sure that Booth can fetch the ticket + before trying to start the service. @@ -725,9 +729,9 @@ ticket: ticket-nfs, leader: 192.168.201.100, expires: 2016-04-14 07:50:48 Further Documentation - SUSE Linux Enterprise High Availability - Guide: A comprehensive documentation about nearly every part of the Linux - Cluster stack. See SUSE Linux Enterprise High Availability Guide: A + comprehensive documentation about nearly every part of the Linux Cluster stack. See + https://documentation.suse.com/sle-ha/11-SP4/html/SLE-ha-all/book-sleha.html . diff --git a/xml/MAIN-SBP-GCC-10.xml b/xml/MAIN-SBP-GCC-10.xml index 2651f1b47..d9d4de2ac 100644 --- a/xml/MAIN-SBP-GCC-10.xml +++ b/xml/MAIN-SBP-GCC-10.xml @@ -6,14 +6,12 @@ ]>
+ xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" xmlns:its="http://www.w3.org/2005/11/its" + xml:id="art-sbp-gcc10-sle15" xml:lang="en"> Advanced Optimization and New Capabilities of GCC 10 - Development Tools Module, SUSE Linux Enterprise - 15 SP2 https://github.com/SUSE/suse-best-practices/issues/new @@ -22,25 +20,23 @@ https://github.com/SUSE/suse-best-practices/edit/main/xml/ - SUSE Best Practices - - - Tuning & Performance - Developer Tools + Best Practices + + Development Tools> - - Configuration + + Configuration - Advanced Optimization and New Capabilities of GCC 10 - Overview of GCC 10 and compilation optimization options for - applications - - SLES + Advanced Optimization and New Capabilities of GCC 10 + Overview of GCC 10 and compilation optimization options for + applications + Advanced optimization and new capabilities of GCC 10 + + SUSE Linux Enterprise Server - 2021-03-12 - - SUSE Linux Enterprise Server 15 SP2 - Development Tools Module + + SUSE Linux Enterprise Server 15 SP2 + Development Tools Module @@ -118,17 +114,25 @@ - - - - - - - - + + + + + + + + - 2021-03-12 + + + 2021-03-12 + + + + + + The document at hand provides an overview of GCC 10 as the current Development Tools @@ -139,15 +143,13 @@ Firefox for a generic x86_64 machine. - Disclaimer: - Documents published as part of the SUSE Best Practices series have been contributed voluntarily - by SUSE employees and third parties. They are meant to serve as examples of how particular - actions can be performed. They have been compiled with utmost attention to detail. However, - this does not guarantee complete accuracy. SUSE cannot verify that actions described in these - documents do what is claimed or whether actions described have unintended consequences. - SUSE LLC, its affiliates, the authors, and the translators may not be held liable for possible errors - or the consequences thereof. - + Disclaimer: Documents published as part of the SUSE Best + Practices series have been contributed voluntarily by SUSE employees and third parties. They are + meant to serve as examples of how particular actions can be performed. They have been compiled + with utmost attention to detail. However, this does not guarantee complete accuracy. SUSE cannot + verify that actions described in these documents do what is claimed or whether actions described + have unintended consequences. SUSE LLC, its affiliates, the authors, and the translators may not + be held liable for possible errors or the consequences thereof. @@ -230,9 +232,9 @@ warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. conversions Proposal P0067R5 - have extra limitations. Most of C++20 features are - implemented in GCC 10 as experimental features. Try them out with appropriate caution. Most - notably, Modules + have extra limitations. Most of C++20 features are implemented in + GCC 10 as experimental features. Try them out with appropriate caution. Most notably, Modules Proposals P1103R3, P1766R1, P1811R0, P1703R1, P1874R1, P1979R0, P1779R3, P1857R3, P2115R0 and P1815R2 @@ -582,13 +584,13 @@ S | Name | Summary site. Bear in mind that although almost all optimizing compilers have the concept of optimization - levels and their optimization levels often have the same names as those in GCC, they do - not necessarily mean to make the same trade-offs. Famously, GCC's -Os - optimizes for size much more aggressively than LLVM/Clang's level with the same name. 
Therefore, - it often produces slower code; the more equivalent option in Clang is -Oz - which GCC does not have. Similarly, -O2 can have different meanings for - different compilers. For example, the difference between -O2 and - -O3 is much bigger in GCC than in LLVM/Clang. + levels and their optimization levels often have the same names as those in GCC, they do not + necessarily mean to make the same trade-offs. Famously, GCC's -Os optimizes + for size much more aggressively than LLVM/Clang's level with the same name. Therefore, it often + produces slower code; the more equivalent option in Clang is -Oz which GCC + does not have. Similarly, -O2 can have different meanings for different + compilers. For example, the difference between -O2 and -O3 + is much bigger in GCC than in LLVM/Clang. Changing the optimization level with <command>cmake</command> @@ -810,15 +812,15 @@ int foo_v1 (void) which parts of a program are the hot ones is difficult, and even sophisticated estimation algorithms implemented in GCC are no good match for a measurement. - If you do not mind adding an extra level of complexity to the build system of your project, you - can make such measurement part of the process. The makefile - (or any other) build script needs to compile it twice. The first time it needs to compile with - the -fprofile-generate option and then execute the first binary in one or - multiple train runs during which it will save information - about the behavior of the program to special files. Afterward, the project needs to be rebuilt - again, this time with the -fprofile-use option which instructs the compiler to - look for the files with the measurements and use them when making optimization decisions, a - process called Profile-Guided Optimization (PGO). + If you do not mind adding an extra level of complexity to the build system of your project, + you can make such measurement part of the process. The makefile (or any other) build script needs to compile it twice. The first time it + needs to compile with the -fprofile-generate option and then execute the first + binary in one or multiple train runs during which it will save + information about the behavior of the program to special files. Afterward, the project needs to + be rebuilt again, this time with the -fprofile-use option which instructs the + compiler to look for the files with the measurements and use them when making optimization + decisions, a process called Profile-Guided Optimization (PGO). It is important that the train exhibits the same characteristics as the real workload. Unless you use the option -fprofile-partial-training in the second build, it @@ -1039,8 +1041,8 @@ int foo_v1 (void) again illustrates the overall effect on the whole suite and the - benchmarks that benefit the most. All omitted benchmarks had comparable runtimes regardless - of the mode of compilation, except for 521.wrf_r where the PGO profiling data + benchmarks that benefit the most. All omitted benchmarks had comparable runtimes regardless of + the mode of compilation, except for 521.wrf_r where the PGO profiling data seem to be damaged in the build process, resulting in 13% slowdown with PGO (see GCC bug 90364). This means that the increase by 6.4% of the geometric mean with both PGO and LTO actually does not @@ -1262,8 +1264,8 @@ int foo_v1 (void) where ICC achieves 174% compared to itself without LTO. 
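In build terms, the two-pass flow described above boils down to the following sketch; the source file and the training input are placeholders:

# pass 1: instrumented build, then a representative training run (writes *.gcda files)
gcc -O2 -fprofile-generate -o app app.c
./app train-input.dat
# pass 2: rebuild using the collected profile
gcc -O2 -fprofile-use -o app app.c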
548.exchange2_r is the only Fortran benchmark in the integer suite and its only hot function contains a recursive call in a deep loop nest which poses a problem for many loop optimizers. Furthermore, it can be - made faster by inter-procedural constant propagation and cloning if performed to an extent - that would typically be excessive. When GCC 10 is instructed to do that with the parameters + made faster by inter-procedural constant propagation and cloning if performed to an extent that + would typically be excessive. When GCC 10 is instructed to do that with the parameters --param ipa-cp-eval-threshold=1 --param ipa-cp-unit-growth=80, it achieves a score on par with ICC with LTO, even without LTO. We do not expect users to compile with such options, and are working to enable the transformation with only -Ofast in the diff --git a/xml/MAIN-SBP-GCC-11.xml b/xml/MAIN-SBP-GCC-11.xml index d4970efd6..2f1b24940 100644 --- a/xml/MAIN-SBP-GCC-11.xml +++ b/xml/MAIN-SBP-GCC-11.xml @@ -11,13 +11,12 @@
+ xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" xmlns:its="http://www.w3.org/2005/11/its" + xml:id="art-sbp-gcc11-sle15" xml:lang="en"> Advanced Optimization and New Capabilities of GCC 11 - Development Tools Module, SUSE Linux Enterprise - 15 SP3 https://github.com/SUSE/suse-best-practices/issues/new @@ -26,114 +25,119 @@ https://github.com/SUSE/suse-best-practices/blob/master/xml/ - SUSE Best Practices - - - Tuning & Performance - Developer Tools + Best Practices + + Development Tools - - Configuration + + Configuration - Advanced Optimization and New Capabilities of GCC 11 - Overview of GCC 11 and compilation optimization options for - applications - - SLES + Advanced Optimization and New Capabilities of GCC 11 + Overview of GCC 11 and compilation optimization options for + applications + Advanced optimization and new capabilities of GCC 11 + + SUSE Linux Enterprise Server - 2022-03-15 - - SUSE Linux Enterprise Server 15 SP3 - Development Tools Module - + SUSE Linux Enterprise Server 15 SP3 + Development Tools Module + + - - Martin - Jambor - - - Toolchain Developer - SUSE - + + Martin + Jambor + + + Toolchain Developer + SUSE + - - Jan - Hubička - - - Toolchain Developer - SUSE - + + Jan + Hubička + + + Toolchain Developer + SUSE + - - Richard - Biener - - - Toolchain Developer - SUSE - + + Richard + Biener + + + Toolchain Developer + SUSE + - - Martin - Liška - - - Toolchain Developer - SUSE - + + Martin + Liška + + + Toolchain Developer + SUSE + - - Michael - Matz - - - Toolchain Team Lead - SUSE - + + Michael + Matz + + + Toolchain Team Lead + SUSE + - - Brent - Hollingsworth - - - Engineering Manager - AMD - + + Brent + Hollingsworth + + + Engineering Manager + AMD + - - + - - - - - - - - - - + + + + + + + + + + + + + 2022-03-15 + + + + + - 2022-03-15 The document at hand provides an overview of GCC 11 as the current Development Tools @@ -369,10 +373,11 @@ warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. inconsistent with C++ and may incur speed and code size penalties. Users compiling C++ sources should also be aware that - g++ 11 defaults to -std=gnu++17, the C++17 standard with - GNU extensions, instead of -std=gnu++14. Moreover, some C++ Standard Library - headers have been changed to no longer include other headers that they do not depend on. You may - need to explicitly include <limits>, + g++ 11 defaults to -std=gnu++17, the + C++17 standard with GNU extensions, instead of -std=gnu++14. Moreover, some C++ Standard Library headers have + been changed to no longer include other headers that they do not depend on. You may need to + explicitly include <limits>, <memory>, <utility> or <thread>. @@ -426,12 +431,12 @@ Successfully registered system If you prefer to use YaST, the procedure is also straightforward. Run YaST as root and go to the Add-On Products menu in the - Software section. If Development Tools Module is among the - listed installed modules, you already have the module activated and can proceed with installing - individual compiler packages. If not, click the Add button, - select Select Extensions and Modules from Registration - Server, and YaST will guide you through a simple procedure to add - the module. + Software section. If Development Tools Module + is among the listed installed modules, you already have the module activated and can proceed + with installing individual compiler packages. 
If not, click the Add button, select Select Extensions and Modules from + Registration Server, and YaST will guide you through a simple + procedure to add the module. When you have the Development Tools Module installed, you can verify that the GCC 11 @@ -1583,12 +1588,13 @@ int foo_v1 (void) using the Mozilla Treeherder infrastructure with the optimization levels and modes most discussed in this document. Again, you can see that LTO can reduce the code size to an extent that more than offsets the difference between -⁠O3 and - -⁠O2. Note that, since a big portion of Firefox is written in Rust - and the whole program analysis is limited to the parts written in C++, the - LTO benefits are smaller than the typical case, in terms of both size and performance. Work on - the Rust GCC front-end has started only recently but we hope that we will overcome this - limitation. Nevertheless, as demonstrated throughout this case study, LTO combined with PGO is - by far the best option, not only in code size comparison but also in any other measurement. + -⁠O2. Note that, since a big portion of Firefox is written in + Rust and the whole program analysis is limited to the parts written in + C++, the LTO benefits are smaller than the typical case, in terms of both + size and performance. Work on the Rust GCC front-end has started only + recently but we hope that we will overcome this limitation. Nevertheless, as demonstrated + throughout this case study, LTO combined with PGO is by far the best option, not only in code + size comparison but also in any other measurement.
Code size (smaller is better) of Firefox binaries built with GCC 11.2 with different @@ -1605,31 +1611,28 @@ int foo_v1 (void) <para> When not employing neither LTO nor PGO, the performance difference between <literal>-⁠O2</literal> and <literal>-⁠O3</literal> is visible but modest - in the <emphasis role="italic">tp5o</emphasis> benchmark results, somewhat more pronounced looking at singletons benchmark, but - barely visible in the measure of responsiveness. Our data indicate that the use of LTO regresses - the responsiveness benchmark by 2-3% at both <literal>-⁠O2</literal> and - <literal>-⁠O3</literal>. The size of the performance drop is close to the noise level - and so difficult to investigate but we believe it takes place because the code growth limits, - when applied on the entire binary, prevent useful inlining which is allowed when growth limits - are applied to individual compilation units.</para> - - <para>LTO improves performance slightly in all the other - benchmarks. The gain is small but it should be assessed together with the code size LTO brings - about. The real speed-up comes only when PGO is added into the formula, leading to performance - gain of 9% in the responsiveness test and over 17% in all other benchmarks. This observation - holds for both the data measured using the Talos and Perfherder systems (<xref - linkend="fig-gcc11-ff-levels-lto-pgo-perf" xrefstyle="template:figure %n"/>) and speedometer - results we obtained manually on an AMD Ryzen 7 5800X 8-Core Processor (<xref - linkend="fig-gcc11-ff-levels-lto-pgo-speedo" xrefstyle="template:figure %n"/>). This is - especially remarkable when you consider that the binary is more than 20% smaller than a simple - <literal>-⁠O2</literal> build. - + in the <emphasis role="italic">tp5o</emphasis> benchmark results, somewhat more pronounced + looking at singletons benchmark, but barely visible in the measure of responsiveness. Our data + indicate that the use of LTO regresses the responsiveness benchmark by 2-3% at both + <literal>-⁠O2</literal> and <literal>-⁠O3</literal>. The size of the + performance drop is close to the noise level and so difficult to investigate but we believe it + takes place because the code growth limits, when applied on the entire binary, prevent useful + inlining which is allowed when growth limits are applied to individual compilation units.</para> + + <para>LTO improves performance slightly in all the other benchmarks. The gain is small but it + should be assessed together with the code size LTO brings about. The real speed-up comes only + when PGO is added into the formula, leading to performance gain of 9% in the responsiveness test + and over 17% in all other benchmarks. This observation holds for both the data measured using + the Talos and Perfherder systems (<xref linkend="fig-gcc11-ff-levels-lto-pgo-perf" + xrefstyle="template:figure %n"/>) and speedometer results we obtained manually on an AMD Ryzen + 7 5800X 8-Core Processor (<xref linkend="fig-gcc11-ff-levels-lto-pgo-speedo" + xrefstyle="template:figure %n"/>). This is especially remarkable when you consider that the + binary is more than 20% smaller than a simple <literal>-⁠O2</literal> build. <!-- POSTPONED Note that such aggressive size shrinking comes at a modest cost which we will discuss in section <xref linkend="sec-gcc10-ff-pgo-notes" xrefstyle="template:section %n"/>. 
--> - </para> <figure xml:id="fig-gcc11-ff-levels-lto-pgo-perf"> @@ -1672,9 +1675,9 @@ int foo_v1 (void) optimization level and classic compilation method, that is not with LTO nor PGO. Unfortunately, our attempts to build a modern Firefox with them and the old compiler have failed. The sizes of the binary produced by both compilers are very similar but the one created with GCC 10 has - always performed noticeably better. In the <emphasis role="italic">tp5o</emphasis> responsiveness benchmark, the simple - <literal>-⁠O2</literal> build was 10% faster (see <xref - linkend="fig-gcc11-ff-vs7-perf" xrefstyle="template:figure %n"/>). </para> + always performed noticeably better. In the <emphasis role="italic">tp5o</emphasis> + responsiveness benchmark, the simple <literal>-⁠O2</literal> build was 10% faster (see + <xref linkend="fig-gcc11-ff-vs7-perf" xrefstyle="template:figure %n"/>). </para> <figure xml:id="fig-gcc11-ff-vs7-perf"> <title>Runtime performance (bigger is better) of Firefox built with GCC 7.5 and 11.2, running on @@ -1727,14 +1730,14 @@ int foo_v1 (void) </figure> <para> Runtime comparisons measured on Talos can be found in <xref - linkend="fig-gcc11-ff-vsclang-perf" xrefstyle="template:figure %n"/>. In the <emphasis role="italic">tp5o</emphasis> benchmark, - the GCC with the help of both PGO and LTO manages to produce code that is 10% quicker, the - performance using other compilation methods was comparable. In the responsiveness measurement, - it was Clang that was 6% faster when using both PGO and LTO. Like in the previous case, other - respective compilation methods of the two compilers performed similarly, except for the LTO - regression discussed in <xref linkend="sec-gcc11-ff-levels-lto-pgo" - xrefstyle="template:section %n"/>. In the singletons benchmark, GCC was always distinctly - faster. </para> + linkend="fig-gcc11-ff-vsclang-perf" xrefstyle="template:figure %n"/>. In the <emphasis + role="italic">tp5o</emphasis> benchmark, the GCC with the help of both PGO and LTO manages to + produce code that is 10% quicker, the performance using other compilation methods was + comparable. In the responsiveness measurement, it was Clang that was 6% faster when using both + PGO and LTO. Like in the previous case, other respective compilation methods of the two + compilers performed similarly, except for the LTO regression discussed in <xref + linkend="sec-gcc11-ff-levels-lto-pgo" xrefstyle="template:section %n"/>. In the singletons + benchmark, GCC was always distinctly faster. 
</para> <figure xml:id="fig-gcc11-ff-vsclang-perf"> <title>Runtime performance (bigger is better) of Firefox with GCC 11.2 and Clang 11, running on diff --git a/xml/MAIN-SBP-GCC-12.xml b/xml/MAIN-SBP-GCC-12.xml index 62905c5ed..9daad08d5 100644 --- a/xml/MAIN-SBP-GCC-12.xml +++ b/xml/MAIN-SBP-GCC-12.xml @@ -11,13 +11,12 @@ <article role="sbp" xmlns="http://docbook.org/ns/docbook" xmlns:xi="http://www.w3.org/2001/XInclude" - xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" xml:id="art-sbp-gcc12-sle15" xml:lang="en"> + xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" xmlns:its="http://www.w3.org/2005/11/its" + xml:id="art-sbp-gcc12-sle15" xml:lang="en"> <info> <title>Advanced Optimization and New Capabilities of GCC 12 - Development Tools Module, SUSE Linux Enterprise - 15 SP4 https://github.com/SUSE/suse-best-practices/issues/new @@ -26,103 +25,110 @@ https://github.com/SUSE/suse-best-practices/blob/master/xml/ - SUSE Best Practices - - - Tuning & Performance - Developer Tools + Best Practices + + Development Tools - - Configuration + + Configuration - Advanced Optimization and New Capabilities of GCC 12 - Overview of GCC 12 and compilation optimization options for - applications - - SLES + Advanced Optimization and New Capabilities of GCC 12 + Overview of GCC 12 and compilation optimization options for + applications + Advanced optimization and new capabilities of GCC 12 + + SUSE Linux Enterprise Server + SUSE Linux Enterprise Server + SUSE Linux Enterprise Server - 2023-06-15 - SUSE Linux Enterprise Server 15 SP4 - Development Tools Module + SUSE Linux Enterprise Server 15 SP4 and later + Development Tools Module - + - - Martin - Jambor - - - Toolchain Team Lead - SUSE - + + Martin + Jambor + + + Toolchain Team Lead + SUSE + - - Jan - Hubička - - - Toolchain Developer - SUSE - + + Jan + Hubička + + + Toolchain Developer + SUSE + - - Richard - Biener - - - Toolchain Developer - SUSE - + + Richard + Biener + + + Toolchain Developer + SUSE + - - Michael - Matz - - - Toolchain Developer - SUSE - + + Michael + Matz + + + Toolchain Developer + SUSE + - - Brent - Hollingsworth - - - Engineering Manager - AMD - + + Brent + Hollingsworth + + + Engineering Manager + AMD + - - + - - - - - - - - - - - - - 2023-06-15 + + + + + + + + + + + + + + 2023-06-15 + + + + + + The document at hand provides an overview of GCC 12.3 as the current Development Tools @@ -131,9 +137,9 @@ role="strong">Profile Guided Optimization (PGO). Their effects are demonstrated by compiling the SPEC CPU benchmark suite for AMD EPYC 9004 Series Processors. - - + Disclaimer: Documents published as part of the SUSE Best Practices series have been contributed voluntarily by SUSE employees and third parties. They are @@ -149,11 +155,11 @@ Overview The first release of the GNU Compiler Collection (GCC) with the major version 12, GCC 12.1, - took place in May 2022. Later that month, the entire openSUSE Tumbleweed Linux - distribution was rebuilt with it and shipped to users. GCC 12.2, with fixes to over 71 bugs, was - released in August of the same year. Subsequently, it has replaced the compiler in the SUSE Linux - Enterprise (SLE) Development Tools Module. GCC 12.3 followed in May 2023. Apart from further - bug fixes, it also introduced support for Zen 4 based CPUs. GCC 12 comes with many new features, such as + took place in May 2022. Later that month, the entire openSUSE Tumbleweed Linux distribution was + rebuilt with it and shipped to users. 
GCC 12.2, with fixes to over 71 bugs, was released in + August of the same year. Subsequently, it has replaced the compiler in the SUSE Linux Enterprise + (SLE) Development Tools Module. GCC 12.3 followed in May 2023. Apart from further bug fixes, it + also introduced support for Zen 4 based CPUs. GCC 12 comes with many new features, such as implementing parts of the most recent versions of specifications of various languages (especially C2X, C++20, C++23) and their extensions (OpenMP, OpenACC), supporting new capabilities of a wide range of computer architectures and @@ -166,7 +172,7 @@ Optimization (LTO) and Profile Guided Optimization (PGO) builds. We also detail their effects when building a set of well-known CPU intensive benchmarks. Finally, we look at how these perform on AMD Zen 4 based EPYC 9004 Series - Processors. + Processors. @@ -195,19 +201,19 @@ warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. At the time of writing this document, the compiler included in the Development Tools Module is GCC 12.3. Nevertheless, it is important to stress that, unlike the system compiler, the major - version of the most recent GCC from the module will change a few months after the upstream release of - GCC 13.2 (which is planned for summer 2023), GCC 14.2 (summer 2024) and so forth. Note that only the - most recent compiler in the Development Tools Module is supported at any time, except for a six - months overlap period after an upgrade happened. Developers on a SUSE Linux Enterprise Server 15 - system therefore have always access to two supported GCC versions: the almost unchanging system - compiler and the most recent compiler from the Development Tools Module. + version of the most recent GCC from the module will change a few months after the upstream + release of GCC 13.2 (which is planned for summer 2023), GCC 14.2 (summer 2024) and so forth. Note + that only the most recent compiler in the Development Tools Module is supported at any time, + except for a six months overlap period after an upgrade happened. Developers on a SUSE Linux + Enterprise Server 15 system therefore have always access to two supported GCC versions: the + almost unchanging system compiler and the most recent compiler from the Development Tools Module. Programs and libraries built with the compiler from the Development Tools Module can run on computers running SUSE Linux Enterprise Server 15 which do not have the module installed. All necessary runtime libraries are available from the main repositories of the operating system - itself, and new ones are added through the standard update mechanism. In this document, we - use the term GCC 12 as synonym for any minor version of the major version 12 and GCC 12.3, to - refer to specifically that version. In practice they should be interchangeable except when we discuss + itself, and new ones are added through the standard update mechanism. In this document, we use + the term GCC 12 as synonym for any minor version of the major version 12 and GCC 12.3, to refer + to specifically that version. In practice they should be interchangeable except when we discuss targeting AMD Zen 4 based CPUs which is only supported in 12.3 and newer versions. @@ -229,12 +235,12 @@ warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. Code using <literal>C++17</literal> features Code using C++17 features should always be compiled with the compiler from the Development Tools Module. 
Linking two objects, such as an application and a shared - library, which both use C++17, where one was built with g++ 8 - or earlier and the other with g++ 9 or later, is particularly dangerous. This is - because C++ STL objects instantiated by the experimental code may provide - implementation and even ABI that is different from what the mature implementation expects and - vice versa. Issues caused by such a mismatch are difficult to predict and may include silent - data corruption. + library, which both use C++17, where one was built with + g++ 8 or earlier and the other with g++ 9 or later, is + particularly dangerous. This is because C++ STL objects instantiated by the + experimental code may provide implementation and even ABI that is different from what the + mature implementation expects and vice versa. Issues caused by such a mismatch are difficult to + predict and may include silent data corruption. Most of C++20 features are implemented in GCC 12 as experimental @@ -248,10 +254,10 @@ warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. Proposal P0912R5 are also implemented but require that the source file is compiled with the - -⁠fcoroutines switch. GCC 12 also experimentally implements many - C++23 features. If you are interested in the implementation - status of any particular C++ feature in the compiler or the standard library, - consult the following pages: + -⁠fcoroutines switch. GCC 12 also experimentally implements many + C++23 features. If you are interested in the implementation status of any + particular C++ feature in the compiler or the standard library, consult the + following pages: @@ -283,13 +289,13 @@ warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. GCC 7. Such a list of processors would be too large to be displayed here. Nevertheless, in we specifically look at optimizing code for an AMD EPYC 9004 Series Processor which is based on AMD Zen 4 cores. The system - compiler does not know this kind of core and therefore cannot optimize for it. On the - other hand, Zen 4 support has been backported to GCC 12.3 and thus it can often produce + compiler does not know this kind of core and therefore cannot optimize for it. On + the other hand, Zen 4 support has been backported to GCC 12.3 and thus it can often produce significantly faster code for it. - Finally, the general optimization pipeline of the compiler has also significantly - improved over the years. To find out more about improvements in versions of GCC 8 through 12, - visit their respective changes pages: + Finally, the general optimization pipeline of the compiler has also significantly improved + over the years. To find out more about improvements in versions of GCC 8 through 12, visit their + respective changes pages: @@ -369,11 +375,11 @@ warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. inconsistent with C++ and may incur speed and code size penalties. Users compiling C++ sources should also be aware that - g++ version 11 and later default to -std=gnu++17, - the C++17 standard with GNU extensions, instead of - -std=gnu++14. Moreover, some C++ Standard Library - headers have been changed to no longer include other headers that they do not depend on. You may - need to explicitly include <limits>, + g++ version 11 and later default to -std=gnu++17, the + C++17 standard with GNU extensions, instead of + -std=gnu++14. 
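The dialect choice can be pinned explicitly in build scripts instead of relying on that default. The following minimal sketch is an illustration only; the file name, the g++-11 invocation and the flags are assumptions, not taken from the guide:
<screen>
// dialect_demo.cc -- hypothetical example file, not part of the original guide
// Pin the language standard explicitly instead of relying on the g++ 11
// default of -std=gnu++17, for example:
//   g++-11 -std=gnu++14 -O2 dialect_demo.cc -o dialect_demo   # stay on the C++14 dialect
//   g++-11 -std=gnu++17 -O2 dialect_demo.cc -o dialect_demo   # opt into C++17 deliberately
#include &lt;cstdio&gt;
#include &lt;limits&gt;    // included explicitly rather than via another header
#include &lt;utility&gt;   // same here

int main()
{
    std::pair&lt;int, int&gt; range(std::numeric_limits&lt;int&gt;::min(),
                              std::numeric_limits&lt;int&gt;::max());
    std::printf("int range: %d .. %d\n", range.first, range.second);
    return 0;
}
</screen>
Either invocation builds the same source; the point is that the standard version is selected deliberately rather than changing silently when the compiler from the Development Tools Module is upgraded.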
Moreover, some C++ Standard Library headers have + been changed to no longer include other headers that they do not depend on. You may need to + explicitly include <limits>, <memory>, <utility> or <thread>. @@ -390,8 +396,8 @@ warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. Similar to other modules and extensions for SUSE Linux Enterprise Server 15, you can activate the Development Tools Module using either the command line tool - SUSEConnect or the YaST setup and configuration - tool. To use the former, carry out the following steps: + SUSEConnect or the YaST setup and configuration tool. To + use the former, carry out the following steps: @@ -427,12 +433,12 @@ Successfully registered system If you prefer to use YaST, the procedure is also straightforward. Run YaST as root and go to the Add-On Products menu in the - Software section. If Development Tools Module is among the - listed installed modules, you already have the module activated and can proceed with installing - individual compiler packages. If not, click the Add button, - select Select Extensions and Modules from Registration - Server, and YaST will guide you through a simple procedure to add - the module. + Software section. If Development Tools Module + is among the listed installed modules, you already have the module activated and can proceed + with installing individual compiler packages. If not, click the Add button, select Select Extensions and Modules from + Registration Server, and YaST will guide you through a simple + procedure to add the module. When you have the Development Tools Module installed, you can verify that the GCC 12 @@ -490,10 +496,10 @@ S | Name | Summary Newer compilers on openSUSE Leap 15.4 - The community distribution openSUSE Leap 15.4 shares the base packages with SUSE - Linux Enterprise Server 15 SP4. The system compiler on systems running openSUSE Leap 15.4 is - also GCC 7.5. There is no Development Tools Module for the community distribution available, - but a newer compiler is provided. Simply install the packages gcc12, + The community distribution openSUSE Leap 15.4 shares the base packages with SUSE Linux + Enterprise Server 15 SP4. The system compiler on systems running openSUSE Leap 15.4 is also GCC + 7.5. There is no Development Tools Module for the community distribution available, but a newer + compiler is provided. Simply install the packages gcc12, gcc12-c++, gcc12-fortran, and the like. @@ -514,8 +520,8 @@ S | Name | Summary This means it is usually accompanied with the command line switch -g so that debug information is emitted. As no optimizations take place, no information is lost because of it. No variables are optimized away, the compiler only inlines functions with special attributes - that require it, and so forth. As a consequence, the debugger can almost always find everything it - searches for in the running program and report on its state very well. On the other hand, the + that require it, and so forth. As a consequence, the debugger can almost always find everything + it searches for in the running program and report on its state very well. On the other hand, the resulting code is big and slow. Thus this optimization level should not be used for release builds. @@ -526,32 +532,32 @@ S | Name | Summary compilation can take longer. Note that neither -⁠O2 nor -⁠O3 imply anything about the precision and semantics of floating-point operations. 
Even at the optimization level -⁠O3 GCC - implements math operations and functions so that they follow the respective IEEE and/or ISO - rules When the rounding mode is set to the default round-to-nearest (look up - -⁠frounding-⁠math in the manual). - with the exception of allowing floating-point expression contraction, for example when fusing an - addition and a multiplication into one operationSee - documentation of -⁠ffp-⁠contract.. This - often means that the compiled programs run markedly slower than necessary if such strict - adherence is not required. The command line switch -⁠ffast-math is a - common way to relax rules governing floating-point operations. It is out of scope of this - document to provide a list of the fine-grained options it enables and their meaning. However, if - your software crunches floating-point numbers and its runtime is a priority, you can look them up - in the GCC manual and review what semantics of floating-point operations you need. + implements math operations and functions so that they follow the respective IEEE and/or ISO rules + When the rounding mode is set to the default round-to-nearest (look up + -⁠frounding-⁠math in the manual). + with the exception of allowing floating-point expression contraction, for example + when fusing an addition and a multiplication into one operation + See documentation of -⁠ffp-⁠contract. + . This often means that the compiled programs run markedly slower than necessary if + such strict adherence is not required. The command line switch + -⁠ffast-math is a common way to relax rules governing + floating-point operations. It is out of scope of this document to provide a list of the + fine-grained options it enables and their meaning. However, if your software crunches + floating-point numbers and its runtime is a priority, you can look them up in the GCC manual and + review what semantics of floating-point operations you need. The most aggressive optimization level is -⁠Ofast which does imply -⁠ffast-math along with a few options that disregard strict standard compliance. In GCC 12 this level also means the optimizers may introduce data races when moving memory stores which may not be safe for multithreaded applications and disregards the - possibility of ELF symbol interposition happening at runtime. Additionally, the - Fortran compiler can take advantage of associativity of math operations even across parentheses - and convert big memory allocations on the heap to allocations on stack. The last mentioned - transformation may cause the code to violate maximum stack size allowed by - ulimit which is then reported to the user as a segmentation fault. We often - use level -⁠Ofast to build benchmarks. It is a shorthand for the - options on top of -⁠O3 which often make them run faster. Most - benchmarks are intentionally written in a way that they run correctly even when these - rules are relaxed. + possibility of ELF symbol interposition happening at runtime. Additionally, the Fortran compiler + can take advantage of associativity of math operations even across parentheses and convert big + memory allocations on the heap to allocations on stack. The last mentioned transformation may + cause the code to violate maximum stack size allowed by ulimit which is then + reported to the user as a segmentation fault. We often use level + -⁠Ofast to build benchmarks. It is a shorthand for the options on + top of -⁠O3 which often make them run faster. 
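As a small, hedged illustration of what such relaxed rules can change (the file name and values below are assumptions), the reduction in this sketch is evaluated in strict IEEE order at -O3, while -Ofast, which implies -ffast-math, lets the compiler reassociate and vectorize it, so the printed sum may differ in its last bits:
<screen>
// fastmath_demo.cc -- illustrative example, not from the original article
//   g++ -O3 fastmath_demo.cc -o strict      # IEEE-conforming evaluation order
//   g++ -Ofast fastmath_demo.cc -o relaxed  # -Ofast implies -ffast-math
#include &lt;cstddef&gt;
#include &lt;cstdio&gt;
#include &lt;vector&gt;

int main()
{
    std::vector&lt;double&gt; v(1 &lt;&lt; 20);
    for (std::size_t i = 0; i &lt; v.size(); ++i)
        v[i] = 1.0 / double(i + 1);   // harmonic-series terms of very different magnitude

    double sum = 0.0;
    for (double x : v)                // sequential reduction; -ffast-math may reorder it
        sum += x;

    std::printf("sum = %.17g\n", sum);
    return 0;
}
</screen>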
Most benchmarks are + intentionally written in a way that they run correctly even when these rules are relaxed. If you feed the compiler with huge machine-generated input, especially if individual functions happen to be extremely large, the compile time can become an issue even when using @@ -593,18 +599,19 @@ S | Name | Summary therefore be a challenging task but usually is still somewhat possible. The complete list of optimization and other command line switches is available in the - compiler manual. The manual is provided in the info format in the package gcc12-info or - online at the GCC project Web - site. + compiler manual. The manual is provided in the info format in the package + gcc12-info or online at the GCC project Web site. Bear in mind that although almost all optimizing compilers have the concept of optimization levels and their optimization levels often have the same names as those in GCC, they do not necessarily mean to make the same trade-offs. Famously, GCC's -⁠Os optimizes for size much more aggressively than LLVM/Clang's level with the same name. Therefore, it often produces slower code; the more equivalent option in Clang is - -⁠Oz. Similarly, -⁠O2 can have different meanings - for different compilers. For example, the difference between -⁠O2 and - -⁠O3 is much bigger in GCC than in LLVM/Clang. + -⁠Oz. Similarly, -⁠O2 can have + different meanings for different compilers. For example, the difference between + -⁠O2 and -⁠O3 is much bigger in GCC + than in LLVM/Clang. Changing the optimization level with <command>cmake</command> @@ -633,12 +640,11 @@ S | Name | Summary instruction set extensions, you can specify it on the command line. Their complete list is available in the manual, but the most prominent one is -⁠march which lets you select a CPU model to generate code for. For example, if you know that your program will - only be executed on AMD EPYC 9004 Series Processors based on AMD Zen 4 cores or - processors that are compatible with it, you can instruct GCC to take advantage of all the - instructions the CPU supports with option -⁠march=znver4. Note that - on SUSE Linux Enterprise Server 15, the system compiler does not know this particular value of - the switch; you need to use GCC 12 from the Development Tools Module to optimize code for these - processors. + only be executed on AMD EPYC 9004 Series Processors based on AMD Zen 4 cores or processors that + are compatible with it, you can instruct GCC to take advantage of all the instructions the CPU + supports with option -⁠march=znver4. Note that on SUSE Linux + Enterprise Server 15, the system compiler does not know this particular value of the switch; you + need to use GCC 12 from the Development Tools Module to optimize code for these processors. To run the program on the machine on which you are compiling it, you can have the compiler auto-detect the target CPU model for you with the option @@ -649,16 +655,16 @@ S | Name | Summary process 512 bit ones. Again, the easy solution is to use the compiler from the Development Tools Module when targeting recent processors. - - Running 32-bit code - SUSE Linux Enterprise Server does not support compilation of 32-bit applications, it - only offers runtime support for 32-bit binaries. To do so, you will need 32-bit - libraries your binary depends on which likely include at least glibc which can be found in - package glibc-32bit. See chapter 20 - (32-bit and 64-bit applications in a 64-bit system environment) of the Administration - Guide for more information. 
- + + Running 32-bit code + SUSE Linux Enterprise Server does not support compilation of 32-bit applications, it only + offers runtime support for 32-bit binaries. To do so, you will need 32-bit libraries your binary + depends on which likely include at least glibc which can be found in package + glibc-32bit. See chapter 20 + (32-bit and 64-bit applications in a 64-bit system environment) of the Administration + Guide for more information. + @@ -668,9 +674,9 @@ S | Name | Summary outlines the classic mode of operation of a compiler and a linker. Pieces of a program are compiled and optimized in chunks - defined by the user called compilation units to produce so-called object files. These object files already - contain binary machine instructions and are combined together by a linker. Because the - linker works at such low level, it cannot perform much optimization and the division of the + defined by the user called compilation units to produce so-called object files. These object + files already contain binary machine instructions and are combined together by a linker. Because + the linker works at such low level, it cannot perform much optimization and the division of the program into compilation units thus presents a profound barrier to optimization.
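A minimal two-file sketch makes this barrier concrete (file names and commands are illustrative assumptions). In the classic build, the definition of scale() is invisible while unit1.cc is compiled, so the call cannot be inlined; with -flto on both the compile and the link steps, the optimizers see the whole program:
<screen>
// ---- unit1.cc ----
#include &lt;cstdio&gt;
int scale(int x);                  // defined in the other compilation unit
int main()
{
    int total = 0;
    for (int i = 0; i &lt; 1000; ++i)
        total += scale(i);         // cross-unit inlining candidate under LTO
    std::printf("%d\n", total);
    return 0;
}
// ---- unit2.cc ----
int scale(int x)
{
    return 3 * x + 1;
}
// Classic build: the linker only sees machine code, so no inlining across files
//   g++ -O2 -c unit1.cc &amp;&amp; g++ -O2 -c unit2.cc &amp;&amp; g++ -O2 unit1.o unit2.o
// LTO build: intermediate representation is kept until link time
//   g++ -O2 -flto -c unit1.cc &amp;&amp; g++ -O2 -flto -c unit2.cc &amp;&amp; g++ -O2 -flto unit1.o unit2.o
</screen>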
@@ -785,11 +791,11 @@ S | Name | Summary from their uses and subsequently fail the final linking step. To build such projects with LTO, the assembler snippets defining symbols must be placed into a separate assembler source file so that they only participate in the final linking step. Global register - variables are not supported by LTO, and programs either must not use this feature or be built the - traditional way. It is also possible to exclude some compilation units from LTO (simply by + variables are not supported by LTO, and programs either must not use this feature or be built + the traditional way. It is also possible to exclude some compilation units from LTO (simply by compiling them without -⁠flto or appending - -⁠fno-⁠lto to the compilation command line), while the rest of the - program can still benefit from using this feature. + -⁠fno-⁠lto to the compilation command line), while the + rest of the program can still benefit from using this feature. Another notable limitation of LTO is that it does not support symbol versioning implemented with special inline assembly snippets (as opposed to a @@ -853,8 +859,8 @@ int foo_v1 (void) build, it needs to exercise the code that is also the most frequently executed in real use, otherwise it will be optimized for size and PGO would make more harm than good. With the option, GCC reverts to guessing properties of portions of the projects not exercised in the train run, as - if they were compiled without profile feedback. This however also means that this code will - not perform better or shrink as much as one would expect from a PGO build. + if they were compiled without profile feedback. This however also means that this code will not + perform better or shrink as much as one would expect from a PGO build. On the other hand, train runs do not need to be a perfect simulation of the real workload. For example, even though a test suite should not be a very good train run in theory because it @@ -867,11 +873,10 @@ int foo_v1 (void) -⁠fprofile-use so that GCC uses heuristics to correct or smooth out such inconsistencies instead of emitting an error. - Profile-Guided Optimization can be combined and is complimentary to Link Time - Optimization. While LTO expands what the compiler can do, PGO informs it about which parts of - the program are the important ones and should be focused on. The case study in the following - section shows how the two techniques work with each other on a well-known set of - benchmarks. + Profile-Guided Optimization can be combined and is complimentary to Link Time Optimization. + While LTO expands what the compiler can do, PGO informs it about which parts of the program are + the important ones and should be focused on. The case study in the following section shows how + the two techniques work with each other on a well-known set of benchmarks. @@ -882,9 +887,9 @@ int foo_v1 (void) non-profit corporation that publishes a variety of industry standard benchmarks to evaluate performance and other characteristics of computer systems. Its latest suite of CPU intensive workloads, SPEC CPU 2017, is often used to compare compilers and how well they optimize code with - different settings. This is because the included benchmarks are well known and represent a wide variety of - computation-heavy programs. The following section highlights selected results of a GCC 12 evaluation using - the suite. + different settings. 
This is because the included benchmarks are well known and represent a wide + variety of computation-heavy programs. The following section highlights selected results of a GCC + 12 evaluation using the suite. Note that when we use SPEC to perform compiler comparisons, we are lenient toward some official SPEC rules which system manufacturers need to observe to claim an official score for @@ -960,17 +965,17 @@ int foo_v1 (void) overall performance effect on the whole integer benchmark suite as captured by the geometric mean of all individual benchmark rates. The relative uplift is no longer as remarkable as with the previous versions of GCC because GCC 12 can conservatively vectorize code in - 525.x264_r also at plain -⁠O2. As a consequence, - the benchmark, which in practice is usually compiled with -⁠O3, runs 37% - faster than when compiled with GCC 11 and the same optimization level. Nevertheless, it still - benefits from the more advanced modes of compilation a lot, together with several other - benchmarks which are derived from programs that are typically compiled with - -⁠O2. This is illustrated in . + 525.x264_r also at plain -⁠O2. As a + consequence, the benchmark, which in practice is usually compiled with + -⁠O3, runs 37% faster than when compiled with GCC 11 and the same + optimization level. Nevertheless, it still benefits from the more advanced modes of compilation + a lot, together with several other benchmarks which are derived from programs that are typically + compiled with -⁠O2. This is illustrated in .
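Referring back to the profile-guided optimization workflow described earlier, the following sketch shows the instrument/train/re-optimize cycle behind those more advanced modes of compilation; the file name, the training input and the biased branch are assumptions made for illustration:
<screen>
// pgo_demo.cc -- illustrative only; the biased branch stands in for a realistic workload
//   g++ -O2 -flto -fprofile-generate pgo_demo.cc -o pgo_demo   # instrumented build
//   ./pgo_demo 2000000                                         # train run, writes .gcda profile data
//   g++ -O2 -flto -fprofile-use pgo_demo.cc -o pgo_demo        # optimized rebuild using the profile
#include &lt;cstdio&gt;
#include &lt;cstdlib&gt;

static long work(long n)
{
    long hot = 0, cold = 0;
    for (long i = 0; i &lt; n; ++i) {
        if (i % 100 != 0)          // taken ~99% of the time; recorded in the profile
            hot += i;
        else
            cold -= i;
    }
    return hot + cold;
}

int main(int argc, char **argv)
{
    long n = (argc &gt; 1) ? std::atol(argv[1]) : 1000000;
    std::printf("%ld\n", work(n));
    return 0;
}
</screen>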
- Runtime performance (bigger is better) of individual integer benchmarks built with - GCC 12.3 and -⁠O2 + Runtime performance (bigger is better) of individual integer benchmarks built with GCC + 12.3 and -⁠O2 @@ -985,12 +990,12 @@ int foo_v1 (void) shows another important advantage of LTO and PGO which is significant reduction of the size of the binaries (measured without debug info). Note that it does not depict that the size of benchmark - 548.exchange2_r grew to 290% and 200% of the original size when built with + 548.exchange2_r grew to 290% and 200% of the original size when built with PGO or both PGO and LTO respectively, which looks huge but the growth is from a particularly small base. It is the only Fortran benchmark in the integer suite and, most importantly, the size penalty is offset by significant speed-up, making the trade-off reasonable. For completeness, we show this result in + xrefstyle="template:figure %n"/>
@@ -1022,17 +1027,17 @@ int foo_v1 (void) The runtime benefits and binary size savings are also substantial when using the - optimization level -⁠Ofast and option - -⁠march=native to allow the compiler to take full advantage of all - instructions that the AMD EPYC 9654 Processor supports. -⁠Ofast and option + -⁠march=native to allow the compiler to take full advantage of all + instructions that the AMD EPYC 9654 Processor supports. shows the - respective geometric means, and shows how rates improve for individual benchmarks. - Moreover, even though optimization levels -⁠O3 and - -⁠Ofast are permitted to be relaxed about the final binary size, PGO - and especially LTO can bring it nicely down at these levels, too. - depicts - the relative binary sizes of all integer benchmarks. + respective geometric means, and shows how rates improve for individual benchmarks. Moreover, + even though optimization levels -⁠O3 and + -⁠Ofast are permitted to be relaxed about the final binary size, + PGO and especially LTO can bring it nicely down at these levels, too. depicts the + relative binary sizes of all integer benchmarks.
Overall performance (bigger is better) of SPEC INTrate 2017 built with GCC 12.3 using @@ -1076,17 +1081,16 @@ int foo_v1 (void) <para>Many of the SPEC 2017 floating-point benchmarks measure how well a given system can optimize and execute a handful of number crunching loops. They often come from performance sensitive programs written with traditional compilation method in mind. Consequently there are - fewer cross-module dependencies, identifying hot paths is less crucial and the overall effect - of LTO and PGO suite only improves by 5% (see <xref - linkend="fig-gcc12-specfp-ofast-pgolto-perf-indiv" xrefstyle="template:figure %n"/>). + fewer cross-module dependencies, identifying hot paths is less crucial and the overall effect of + LTO and PGO suite only improves by 5% (see <xref + linkend="fig-gcc12-specfp-ofast-pgolto-perf-indiv" xrefstyle="template:figure %n"/>). Nevertheless, there are important cases when these modes of compilation also bring about - significant performance increases. <xref - linkend="fig-gcc12-specfp-ofast-pgolto-perf-indiv" xrefstyle="template:Figure %n"/> shows the - effect of these methods on individual benchmarks when compiled at - <literal>-⁠Ofast</literal> and targeting the full ISA of the AMD EPYC 9654 - Processor. Furthermore, binary size savings of PGO and LTO are sometimes even bigger than + significant performance increases. <xref linkend="fig-gcc12-specfp-ofast-pgolto-perf-indiv" + xrefstyle="template:Figure %n"/> shows the effect of these methods on individual benchmarks + when compiled at <literal>-⁠Ofast</literal> and targeting the full ISA of the AMD EPYC + 9654 Processor. Furthermore, binary size savings of PGO and LTO are sometimes even bigger than those achieved on integer benchmarks, as can be seen on <xref - linkend="fig-gcc12-specfp-ofast-pgolto-size" xrefstyle="template:figure %n"/></para> + linkend="fig-gcc12-specfp-ofast-pgolto-size" xrefstyle="template:figure %n"/></para> <figure xml:id="fig-gcc12-specfp-ofast-pgolto-geomean"> <title>Overall performance (bigger is better) of SPEC FPrate 2017 built with GCC 12.3 and @@ -1142,11 +1146,12 @@ int foo_v1 (void) <literal>-⁠Ofast</literal> and <literal>-⁠march=native</literal>. Note that the latter option means that both compilers differ in their CPU targets because GCC 7.5 does not know the Zen 4 core. This in turn means that in large part the optimization benefits presented - here exist because the old compiler only issues 128bit (AVX2) vector operations whereas the newer one - can take full advantage of AVX512. Nevertheless, be aware that simply using wider vectors everywhere - often backfires. GCC has made substantial advancements over the recent years to avoid such issues, - both in its vectorizer and other optimizers. It is therefore much better placed to use the extra - vector width appropriately and produce code which utilizes the processor better in general. </para> + here exist because the old compiler only issues 128bit (AVX2) vector operations whereas the + newer one can take full advantage of AVX512. Nevertheless, be aware that simply using wider + vectors everywhere often backfires. GCC has made substantial advancements over the recent years + to avoid such issues, both in its vectorizer and other optimizers. It is therefore much better + placed to use the extra vector width appropriately and produce code which utilizes the processor + better in general. 
</para> <figure xml:id="fig-gcc12-specint-ofast-vs7-geomean"> <title>Overall performance (bigger is better) of SPEC INTrate 2017 built with GCC 7.5 and 12.3 @@ -1171,8 +1176,8 @@ int foo_v1 (void) not surprising it has improved a lot. <literal>531.deepsjeng_r</literal> is faster chiefly because it can emit better code for <emphasis role="italic">count trailing zeros</emphasis> (CTZ) operation which it performs frequently. Finally, modern GCC can optimize - <literal>548.exchange2_r</literal> particularly well by specializing different invocations of the - hottest recursive function and it also clearly shows in the picture.</para> + <literal>548.exchange2_r</literal> particularly well by specializing different invocations of + the hottest recursive function and it also clearly shows in the picture.</para> <figure xml:id="fig-gcc12-specint-ofast-vs7-indiv"> <title>Runtime performance (bigger is better) of selected integer benchmarks built with GCC 7.5 @@ -1187,13 +1192,14 @@ int foo_v1 (void) </mediaobject> </figure> - <para> Floating-point computations tend to particularly benefit from vectorization advancements. - Thus it should be no surprise that the FPrate benchmarks also improve substantially when - compiled with GCC 12.3, which also emits AVX512 instructions for a Zen 4 based CPU. The overall - boost is shown in <xref linkend="fig-gcc12-specfp-ofast-vs7-geomean" xrefstyle="template:figure - %n"/> whereas <xref linkend="fig-gcc12-specfp-ofast-vs7-indiv" xrefstyle="template:figure %n"/> - provides a detailed look at which benchmarks contributed most to the overall score - difference. </para> + <para> Floating-point computations tend to particularly benefit from vectorization advancements. + Thus it should be no surprise that the FPrate benchmarks also improve substantially when + compiled with GCC 12.3, which also emits AVX512 instructions for a Zen 4 based CPU. The overall + boost is shown in <xref linkend="fig-gcc12-specfp-ofast-vs7-geomean" + xrefstyle="template:figure + %n"/> whereas <xref linkend="fig-gcc12-specfp-ofast-vs7-indiv" + xrefstyle="template:figure %n"/> provides a detailed look at which benchmarks contributed most + to the overall score difference. </para> <figure xml:id="fig-gcc12-specfp-ofast-vs7-geomean"> <title>Overall performance (bigger is better) of SPEC FPrate 2017 built with GCC 7.5 and 12.3 @@ -1231,19 +1237,20 @@ int foo_v1 (void) on the table. This section uses the SPEC FPrate 2017 test suite to illustrate how much performance that might be. </para> - <para> We have built the benchmarking suite using optimization level - <literal>-⁠O3</literal>, LTO (though without PGO) and - <literal>-⁠march=native</literal> to target the native ISA of our AMD EPYC 9654 Processor. - Then we compared its runtime score against the suite built with these options and - <literal>-⁠ffast-math</literal>. As you can see in <xref - linkend="fig-gcc12-specfp-o3-fastmath-geomean" xrefstyle="template:figure %n"/>, the geometric + <para> We have built the benchmarking suite using optimization level + <literal>-⁠O3</literal>, LTO (though without PGO) and + <literal>-⁠march=native</literal> to target the native ISA of our AMD EPYC 9654 + Processor. Then we compared its runtime score against the suite built with these options and + <literal>-⁠ffast-math</literal>. As you can see in <xref + linkend="fig-gcc12-specfp-o3-fastmath-geomean" xrefstyle="template:figure %n"/>, the geometric mean grew by over 13%. 
But a quick look at <xref linkend="fig-gcc12-specfp-o3-fastmath-indiv" - xrefstyle="template:figure %n"/> will tell you that there are four benchmarks with scores which + xrefstyle="template:figure %n"/> will tell you that there are four benchmarks with scores which improved by more than 20% and that of <literal>510.parest_r</literal> grew by over 76%. </para> <figure xml:id="fig-gcc12-specfp-o3-fastmath-geomean"> <title>Overall performance (bigger is better) of SPEC FPrate 2017 built with GCC 12.3 and - -⁠O3 -⁠flto -⁠march=native, without and with -⁠ffast-math + -⁠O3 -⁠flto -⁠march=native, without and with + -⁠ffast-math @@ -1273,20 +1280,20 @@ int foo_v1 (void) Comparison with other compilers The toolchain team at SUSE regularly uses the SPEC CPU 2017 suite to compare the - optimization capabilities of GCC with other compilers, mainly LLVM/Clang and ICC and ICX from - Intel. In the final section of this case study we will share how the Development Module compiler - stands compared to these competitors on SUSE Linux Enterprise Server 15 SP4. Before we start, we - should emphasize that the comparison has been carried out by people who have much better - knowledge of GCC than of the other compilers and are not unbiased. Also, keep in - mind that everything we explained previously about how we carry out the measurements and patch - the benchmarks also applies to this section. On the other hand, the results often guide our own - work and therefore we strive to be accurate. - - LLVM/Clang 16.0 now comes with a new Fortran front-end called - flang-new which is capable of compiling SPEC, but we were not able to - successfully run 527.cam4_r benchmark compiled with it and LTO. Comparison - with LLVM in this report is therefore incomplete but for the first time we were able to include - the rest of the benchmarks using Fortran in our comparison with LLVM/Clang. + optimization capabilities of GCC with other compilers, mainly LLVM/Clang and ICC and ICX from + Intel. In the final section of this case study we will share how the Development Module compiler + stands compared to these competitors on SUSE Linux Enterprise Server 15 SP4. Before we start, we + should emphasize that the comparison has been carried out by people who have much better + knowledge of GCC than of the other compilers and are not unbiased. Also, keep in + mind that everything we explained previously about how we carry out the measurements and patch + the benchmarks also applies to this section. On the other hand, the results often guide our own + work and therefore we strive to be accurate. + + LLVM/Clang 16.0 now comes with a new Fortran front-end called flang-new + which is capable of compiling SPEC, but we were not able to successfully run + 527.cam4_r benchmark compiled with it and LTO. Comparison with LLVM in this + report is therefore incomplete but for the first time we were able to include the rest of the + benchmarks using Fortran in our comparison with LLVM/Clang. We have built the clang and clang++ compilers from sources obtained from the official git repository (tag llvmorg-16.0.1), used @@ -1308,13 +1315,14 @@ int foo_v1 (void)
- shows - that the geometric mean of the whole SPEC INTrate 2017 suite is quite substantially better when - the benchmarks are compiled with GCC. To be fair, a disproportionate amount of the difference is - because GNU Fortran can optimize 548.exchange2_r much better than LLVM. - Given that the LLVM Fortran front-end is very new and the optimization opportunities in this - particular benchmark are quite specific, the result may not be important for many users. - + + shows that the geometric mean of the whole SPEC INTrate 2017 suite is quite substantially better + when the benchmarks are compiled with GCC. To be fair, a disproportionate amount of the + difference is because GNU Fortran can optimize 548.exchange2_r much better + than LLVM. Given that the LLVM Fortran front-end is very new and the optimization opportunities + in this particular benchmark are quite specific, the result may not be important for many + users. +
Runtime performance (bigger is better) of 548.exchange2_r benchmark built with Clang 16 and GCC 12.3 @@ -1327,7 +1335,7 @@ int foo_v1 (void)
Runtime performance (bigger is better) of C/C++ integer benchmarks built with Clang 16 and GCC 12.3 @@ -1342,23 +1350,22 @@ int foo_v1 (void)
- shows - relative rates of integer benchmarks written in C/C++ and the compilers perform fairly - similarly there. GCC wins by a large margin on 500.perlbench_r but loses - significantly when compiling 525.x264_r. This is because the compiler - chooses a vectorizing factor that is too large for the important loops in this video encoder. - It is possible to mitigate the problem using compiler option - -⁠mprefer-⁠vector-⁠width=128, with which it is again - competitive, as you can see in . This problem is being actively worked on by the upstream - GCC community. We plan to use masked vectorized epilogues to minimize the fallout of - choosing a large vectorizing factor for the principal vector loop. Note that PGO can - substantially help in this case too. - + shows + relative rates of integer benchmarks written in C/C++ and the compilers perform fairly similarly + there. GCC wins by a large margin on 500.perlbench_r but loses significantly + when compiling 525.x264_r. This is because the compiler chooses a vectorizing + factor that is too large for the important loops in this video encoder. It is possible to + mitigate the problem using compiler option + -⁠mprefer-⁠vector-⁠width=128, with which it is + again competitive, as you can see in . This problem is being actively worked on by the upstream GCC + community. We plan to use masked vectorized epilogues to minimize the fallout of choosing a + large vectorizing factor for the principal vector loop. Note that PGO can substantially help in + this case too.
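As a hedged sketch of where that mitigation would be applied (the kernel, file name and trip count are assumptions, not code from the benchmark), a loop like the one below is auto-vectorized, and -mprefer-vector-width=128 caps the vector width the compiler prefers for it:
<screen>
// vecwidth_demo.cc -- illustrative stand-in for a kernel with short trip counts
//   g++ -O3 -march=znver4 vecwidth_demo.cc -o wide                             # may prefer 512-bit vectors
//   g++ -O3 -march=znver4 -mprefer-vector-width=128 vecwidth_demo.cc -o narrow
#include &lt;cstdio&gt;

void saxpy(float a, const float *x, float *y, int n)
{
    for (int i = 0; i &lt; n; ++i)    // auto-vectorized; preferred width depends on the flags
        y[i] = a * x[i] + y[i];
}

int main()
{
    float x[40], y[40];
    for (int i = 0; i &lt; 40; ++i) { x[i] = float(i); y[i] = 1.0f; }
    saxpy(2.0f, x, y, 40);         // short trip count, the situation discussed in the text
    std::printf("%f\n", y[39]);
    return 0;
}
</screen>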
- Runtime performance (bigger is better) of 525.x264_r benchmark built with Clang 16 - and with GCC 12.3 using -mprefer-vector-width=128 + Runtime performance (bigger is better) of 525.x264_r benchmark built with Clang 16 and + with GCC 12.3 using -mprefer-vector-width=128 @@ -1370,24 +1377,23 @@ int foo_v1 (void)
Because we were not able to successfully run 527.cam4_r benchmark - compiled with LLVM with LTO, we have excluded the benchmark in our comparison of geometric mean - of SPEC FPrate 2017 suite depicted in . The floating point benchmark suite contains many more Fortran - benchmarks. It can be seen that GCC has advantage in having a mature optimization pipeline - for this language as well, especially when compiling 503.bwaves_r, - 510.parest_r, 549.fotonik3d_r, - 554.roms_r (see ) and the already mentioned 527.cam4_r (see - ). The - comparison also shows that the performance of 538.imagick_r when compiled - with GCC 12.3 is substantially smaller. This is caused by store-to-load - forwarding stall issues, which can be mitigated by relaxing inlining limits, - something that GCC 13 does automatically. - + compiled with LLVM with LTO, we have excluded the benchmark in our comparison of geometric mean + of SPEC FPrate 2017 suite depicted in . The floating point benchmark suite contains many more Fortran + benchmarks. It can be seen that GCC has advantage in having a mature optimization pipeline for + this language as well, especially when compiling 503.bwaves_r, + 510.parest_r, 549.fotonik3d_r, + 554.roms_r (see ) and the already mentioned 527.cam4_r (see + ). The + comparison also shows that the performance of 538.imagick_r when compiled + with GCC 12.3 is substantially smaller. This is caused by store-to-load + forwarding stall issues, which can be mitigated by relaxing inlining limits, + something that GCC 13 does automatically.
Overall performance (bigger is better) of SPEC FPrate 2017 excluding 527.cam4_r built - with Clang 16 and GCC 12.3 + with Clang 16 and GCC 12.3 @@ -1397,10 +1403,10 @@ int foo_v1 (void)
- Runtime performance (bigger is better) of floating point benchmarks built with - Clang 16 and GCC 12.3 + Runtime performance (bigger is better) of floating point benchmarks built with Clang 16 + and GCC 12.3 @@ -1413,7 +1419,7 @@ int foo_v1 (void)
Runtime performance (bigger is better) of 527.cam4_r benchmark built with Clang 16 and - GCC 12.3 + GCC 12.3 @@ -1423,23 +1429,23 @@ int foo_v1 (void)
- - + + Even though ICC is not intended as a compiler for AMD processors, it is known for its - high-level optimization capabilities, especially when it comes to vectorization. Therefore we - have traditionally included it our comparisons of compilers. Recently, however, Intel has - decided to abandon this compiler and is directing its users toward ICX, a new one built on top - of LLVM. This year we have therefore included not just ICC 2021.9.0 (20230302) but also ICX - 2023.1.0 in our comparison. To keep the amount of presented data in the rest of this - section reasonable, we only compare binaries built with -⁠Ofast and - LTO. We have simply passed -⁠march=native GCC and ICX. On the other - hand, we have used -⁠march=core-avx2 option to specify the target ISA - for the old ICC because it is unclear which option is the most appropriate for AMD EPYC 9654 - Processor. This puts this compiler at a disadvantage because it can only emit AVX256 - instructions while the other two can, and GCC does, make use of AVX512. We believe that the - comparison is still useful as ICC serves mainly as a base and the focus now shifts to ICX but - keep this in mind when looking at the results below. + high-level optimization capabilities, especially when it comes to vectorization. Therefore we + have traditionally included it our comparisons of compilers. Recently, however, Intel has + decided to abandon this compiler and is directing its users toward ICX, a new one built on top + of LLVM. This year we have therefore included not just ICC 2021.9.0 (20230302) but also ICX + 2023.1.0 in our comparison. To keep the amount of presented data in the rest of this section + reasonable, we only compare binaries built with -⁠Ofast and LTO. We + have simply passed -⁠march=native GCC and ICX. On the other hand, + we have used -⁠march=core-avx2 option to specify the target ISA for + the old ICC because it is unclear which option is the most appropriate for AMD EPYC 9654 + Processor. This puts this compiler at a disadvantage because it can only emit AVX256 + instructions while the other two can, and GCC does, make use of AVX512. We believe that the + comparison is still useful as ICC serves mainly as a base and the focus now shifts to ICX but + keep this in mind when looking at the results below.
Overall performance (bigger is better) of SPEC INTrate 2017 built with ICC 2021.9.0, ICX @@ -1455,18 +1461,19 @@ int foo_v1 (void) </figure> <para> - <xref linkend="fig-gcc12-specint-ofast-vsicc-geomean" xrefstyle="template:Figure %n"/> shows - that the new ICX compiler takes the lead in overall SPEC INTrate assessment. The results of - individual benchmarks however quickly show that the majority of the lead is due to one - benchmark, <literal>525.x264_r</literal>, and for the same reasons we outlined when discussing - LLVM/Clang results. GCC picks too large vectorizing factor and the mitigation is again using - <literal>-⁠mprefer-⁠vector-⁠width=128</literal> which leads to a much narrower gap (see <xref - linkend="fig-gcc12-specint-ofast-vsicc-x264_128" xrefstyle="template:figure %n"/>). When - looking at the other benchmarks, GCC achieves comparable results.</para> + <xref linkend="fig-gcc12-specint-ofast-vsicc-geomean" xrefstyle="template:Figure %n"/> shows + that the new ICX compiler takes the lead in overall SPEC INTrate assessment. The results of + individual benchmarks however quickly show that the majority of the lead is due to one + benchmark, <literal>525.x264_r</literal>, and for the same reasons we outlined when discussing + LLVM/Clang results. GCC picks too large vectorizing factor and the mitigation is again using + <literal>-⁠mprefer-⁠vector-⁠width=128</literal> which leads to a + much narrower gap (see <xref linkend="fig-gcc12-specint-ofast-vsicc-x264_128" + xrefstyle="template:figure %n"/>). When looking at the other benchmarks, GCC achieves + comparable results.</para> <figure xml:id="fig-gcc12-specint-ofast-vsicc-indiv"> <title>Runtime performance (bigger is better) of individual integer benchmarks built with ICC - 2021.9.0, ICX 2023.1.0 and GCC 12.3 + 2021.9.0, ICX 2023.1.0 and GCC 12.3 @@ -1478,8 +1485,8 @@ int foo_v1 (void)
- Runtime performance (bigger is better) of 525.x264_r benchmark built with ICC - 2021.9.0, ICX 2023.1.0 and with GCC 12.3 using -mprefer-vector-width=128 + Runtime performance (bigger is better) of 525.x264_r benchmark built with ICC 2021.9.0, + ICX 2023.1.0 and with GCC 12.3 using -mprefer-vector-width=128 @@ -1491,14 +1498,14 @@ int foo_v1 (void)
Comparison with ICX on SPEC FPrate suite has been hampered by the fact that again there is - a benchmark which did not run correctly, this time it was 521.wrf_r. - Therefore we have calculated the geometric means of rates for excluding - it. - + a benchmark which did not run correctly, this time it was 521.wrf_r. + Therefore we have calculated the geometric means of rates for excluding + it. +
- Overall performance (bigger is better) of SPEC FPrate 2017 excluding 521.wrf_r built - with ICC 2021.9.0, ICX 2023.1.0 and GCC 12.3 + Overall performance (bigger is better) of SPEC FPrate 2017 excluding 521.wrf_r built with + ICC 2021.9.0, ICX 2023.1.0 and GCC 12.3 @@ -1510,20 +1517,20 @@ int foo_v1 (void)
While GCC achieves the best geometric mean, it is important to look at individual results - too. The overall picture is mixed (see ), as each of the three compilers managed to be the fastest in at - least one benchmark. We do not know the reason for rather poor performance of ICX on - 554.roms_r. But we have seen a similar issue with the compiler on an Intel - Cascadelake server machine too, so it is not a consequence of using an Intel compiler on an AMD - platform. For completeness, 521.wrf_r results for ICC and ICX are provided - in . In - conclusion, GCC manages to perform consistently and competitively against these high-performance - compilers. + too. The overall picture is mixed (see ), as each of the three compilers managed to be the fastest in + at least one benchmark. We do not know the reason for rather poor performance of ICX on + 554.roms_r. But we have seen a similar issue with the compiler on an Intel + Cascadelake server machine too, so it is not a consequence of using an Intel compiler on an AMD + platform. For completeness, 521.wrf_r results for ICC and ICX are provided in + . In + conclusion, GCC manages to perform consistently and competitively against these high-performance + compilers.
- Runtime performance (bigger is better) of individual floating point benchmarks built - with ICC 2021.9.0, ICX 2023.1.0 and GCC 12.3 + Runtime performance (bigger is better) of individual floating point benchmarks built with + ICC 2021.9.0, ICX 2023.1.0 and GCC 12.3 @@ -1536,7 +1543,7 @@ int foo_v1 (void)
Runtime performance (bigger is better) of 521.wrf_r built with ICC 2021.9.0 and GCC - 12.3 + 12.3 diff --git a/xml/MAIN-SBP-HANAonKVM-SLES12SP2.xml b/xml/MAIN-SBP-HANAonKVM-SLES12SP2.xml index 90643fcd2..47ce804fd 100644 --- a/xml/MAIN-SBP-HANAonKVM-SLES12SP2.xml +++ b/xml/MAIN-SBP-HANAonKVM-SLES12SP2.xml @@ -5,20 +5,15 @@ %entity; ]> - -
+ xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" xmlns:its="http://www.w3.org/2005/11/its" + xml:id="art-sbp-hanaonkvm-sles12sp2" xml:lang="en"> SUSE Best Practices for SAP HANA on KVM SUSE Linux Enterprise Server for SAP Applications 12 SP2 - SUSE Linux Enterprise Server for SAP Applications - 12 SP2 + https://github.com/SUSE/suse-best-practices/issues/new @@ -28,25 +23,23 @@ - SUSE Best Practices - - - SAP - Virtualization + Best Practices + + SAP - - Configuration - Virtualization + + Configuration + Virtualization - SUSE Best Practices for SAP HANA on KVM - Describes how to configure SUSE Linux Enterprise Server for SAP Applications 12 SP2 - with KVM to run SAP HANA for use in production environments - - SLES for SAP + SUSE Best Practices for SAP HANA on KVM + Describes how to configure SUSE Linux Enterprise Server for SAP + Applications 12 SP2 with KVM to run SAP HANA for use in production environments + Configuring SLES for SAP Applications with KVM and HANA + + SUSE Linux Enterprise Server for SAP Applications - 2019-08-19 - SUSE Linux Enterprise Server for SAP Applications 12 SP2 + SUSE Linux Enterprise Server for SAP Applications 12 SP2 @@ -82,8 +75,16 @@ + + + + 2019-08-19 + + + + + - 2019-08-19 @@ -92,18 +93,17 @@ environments. Configurations which are not set up according to this best practice guide are considered as unsupported by SAP for production workloads. While this document is not compulsory for non-production SAP HANA workloads, it may - still be useful to help ensure optimal performance in such scenarios. + still be useful to help ensure optimal performance in such scenarios. - Disclaimer: - Documents published as part of the SUSE Best Practices series have been contributed voluntarily - by SUSE employees and third parties. They are meant to serve as examples of how particular - actions can be performed. They have been compiled with utmost attention to detail. However, - this does not guarantee complete accuracy. SUSE cannot verify that actions described in these - documents do what is claimed or whether actions described have unintended consequences. - SUSE LLC, its affiliates, the authors, and the translators may not be held liable for possible errors - or the consequences thereof. - + Disclaimer: Documents published as part of the SUSE Best + Practices series have been contributed voluntarily by SUSE employees and third parties. They + are meant to serve as examples of how particular actions can be performed. They have been + compiled with utmost attention to detail. However, this does not guarantee complete + accuracy. SUSE cannot verify that actions described in these documents do what is claimed or + whether actions described have unintended consequences. SUSE LLC, its affiliates, the + authors, and the translators may not be held liable for possible errors or the consequences + thereof. @@ -1206,8 +1206,7 @@ lstopo-no-graphics Install SUSE Linux Enterprise Server for SAP Applications Inside the Guest VM Refer to the SUSE Guide SUSE Linux Enterprise Server for SAP Applications 12 - SP2 (). + SP2 (). @@ -2110,9 +2109,9 @@ or other application using the libvirt API. Report Documentation Bug To report errors or suggest enhancements for a certain document, use the Report Documentation Bug feature at the right side of - each section in the online documentation. Provide a concise description of the problem - and refer to the respective section number and page (or URL). 
+ role="strong">Report Documentation Bug feature at the right side of each + section in the online documentation. Provide a concise description of the problem and + refer to the respective section number and page (or URL). diff --git a/xml/MAIN-SBP-KMP-Manual-SLE12SP2.xml b/xml/MAIN-SBP-KMP-Manual-SLE12SP2.xml index 33df5188e..fe83b0c2e 100644 --- a/xml/MAIN-SBP-KMP-Manual-SLE12SP2.xml +++ b/xml/MAIN-SBP-KMP-Manual-SLE12SP2.xml @@ -8,14 +8,13 @@
+ xmlns:its="http://www.w3.org/2005/11/its" + xml:id="art-sbp-kmp-manual-12-15" xml:lang="en"> Kernel Module Packages Manual SUSE Linux Enterprise 12 SP2 or later and SUSE Linux Enterprise 15 - - SUSE Linux Enterprise - 12 SP2+, 15 + https://github.com/SUSE/suse-best-practices/issues/new @@ -25,26 +24,32 @@ https://github.com/SUSE/suse-best-practices/edit/main/xml/ - - SUSE Best Practices - - + Best Practices + Systems Management - + Packaging - Kernel Module Packages Manual - The document specifies the requirements for RPM packages that contain kernel modules, - and describes the processes surrounding those packages. - - SLES - SLES + Kernel Module Packages Manual + The document specifies the requirements for RPM packages that contain kernel modules, + and describes the processes surrounding those packages. + Describes requirements for RPM kernel module packages + + SUSE Linux Enterprise + SUSE Linux Enterprise + SUSE Linux Enterprise + SUSE Linux Enterprise + SUSE Linux Enterprise + SUSE Linux Enterprise + SUSE Linux Enterprise + SUSE Linux Enterprise + SUSE Linux Enterprise + SUSE Linux Enterprise - 2022-02-15 - SUSE Linux Enterprise Server 12 SP2 and later - SUSE Linux Enterprise Server 15 + SUSE Linux Enterprise Server 12 SP2 and later + SUSE Linux Enterprise Server 15 @@ -57,17 +62,7 @@ SUSE - - SUSE Linux Enterprise Server - 11, 12 GA, 12 SP1 + https://github.com/SUSE/suse-best-practices/issues/new @@ -24,25 +23,24 @@ https://github.com/SUSE/suse-best-practices/edit/main/xml/ - SUSE Best Practices - - + Best Practices + Systems Management - + Packaging - Kernel Module Packages Manual - The document specifies the requirements for RPM packages that contain kernel modules, - and describes the processes surrounding those packages. - - SLES - SLES + Kernel Module Packages Manual + The document specifies the requirements for RPM packages that contain kernel modules, + and describes the processes surrounding those packages. + Describes requirements for RPM kernel module packages + + SUSE Linux Enterprise + SUSE Linux Enterprise - 2022-02-15 - SUSE Linux Enterprise Server 11 - SUSE Linux Enterprise Server 12 GA and SP1 + SUSE Linux Enterprise Server 11 + SUSE Linux Enterprise Server 12 GA and SP1 @@ -55,17 +53,7 @@ SUSE - - SUSE Linux Enterprise Server for z Systems and LinuxONE - 12 SP2 - - + https://github.com/SUSE/suse-best-practices/issues/new - Migrating from KVM for IBM z Systems to KVM integrated with SUSE Linux Enterprise - Server + Migrating from KVM for IBM z Systems to KVM integrated with SUSE Linux + Enterprise Server https://github.com/SUSE/suse-best-practices/edit/main/xml/ - SUSE Best Practices - - + Best Practices + Virtualization - - Migration + + Migration - Migrating from KVM for IBM z Systems - How to migrate a virtual machine from KVM for IBM z Systems to KVM delivered - with SUSE Linux Enterprise Server. - - SLES - - 2017-05-20 - - KVM for IBM z Systems - SUSE Linux Enterprise Server 12 SP2 + Migrating from KVM for IBM z Systems + How to migrate a virtual machine from KVM for IBM z Systems to KVM + delivered with SUSE Linux Enterprise Server. 
+ Migrating a VM from IBM Z to KVM on SLES + + SUSE Linux Enterprise Server + + + SUSE Linux Enterprise Server for IBM Z and LinuxONE 12 SP2 - + - - Mike - Friesenegger - - - Technology Strategist Alliances and Integrated Systems - SUSE - + + Mike + Friesenegger + + + Technology Strategist Alliances and Integrated Systems + SUSE + - - + - - + + - + - + - - - - 2017-05-20 - + + + + + + 2017-05-20 + + + + + + + On March 7, 2017, IBM announced the end of support for their product KVM for IBM z Systems. To learn more, read the official IBM Withdrawal Announcement at - Disclaimer: - Documents published as part of the SUSE Best Practices series have been contributed voluntarily - by SUSE employees and third parties. They are meant to serve as examples of how particular - actions can be performed. They have been compiled with utmost attention to detail. However, - this does not guarantee complete accuracy. SUSE cannot verify that actions described in these - documents do what is claimed or whether actions described have unintended consequences. - SUSE LLC, its affiliates, the authors, and the translators may not be held liable for possible errors - or the consequences thereof. - + Disclaimer: Documents published as part of the + SUSE Best Practices series have been contributed voluntarily by SUSE employees and + third parties. They are meant to serve as examples of how particular actions can be + performed. They have been compiled with utmost attention to detail. However, this + does not guarantee complete accuracy. SUSE cannot verify that actions described in + these documents do what is claimed or whether actions described have unintended + consequences. SUSE LLC, its affiliates, the authors, and the translators may not be + held liable for possible errors or the consequences thereof. @@ -123,9 +116,10 @@ Installing SUSE Linux Enterprise Server for z Systems and LinuxONE as KVM Host - Like KVM for IBM z Systems, KVM integrated with SUSE Linux Enterprise Server for z Systems and LinuxONE is - supported when installed into an LPAR. The section Installation on IBM z - Systems of the SUSE Linux Enterprise Server Deployment Guide at Like KVM for IBM z Systems, KVM integrated with SUSE Linux Enterprise Server for z + Systems and LinuxONE is supported when installed into an LPAR. The section + Installation on IBM z Systems of the SUSE Linux Enterprise Server + Deployment Guide at provides information regarding supported hardware and memory requirements for installing into an LPAR. The section Overview of an LPAR Installation at @@ -191,42 +185,41 @@ Using the Command Line to Create and Manage Instances - virsh and virt-install are - the command line utilities to manage virtual machines. These utilities are installed - as part of the KVM virtualization host and tools pattern. The section - Virtualization Console Tools at virsh and virt-install are the command line + utilities to manage virtual machines. These utilities are installed as part of the + KVM virtualization host and tools pattern. The section Virtualization Console + Tools at provides an introduction to these utilities. - The virsh and - virsh-installtools can be used both from an SSH session, - or in a terminal/console session when working via Virtual Network Computing (VNC). + The virsh and virsh-installtools can be used + both from an SSH session, or in a terminal/console session when working via Virtual + Network Computing (VNC). - Connect via SSH to log in to the KVM host to use virsh - and virt-install. 
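As a minimal illustration of that workflow (the host name kvmhost and the guest name are placeholders), a first session on the KVM host might look like this:

# Log in to the KVM host, then check which guests are defined and running
ssh root@kvmhost
virsh list --all
virsh dominfo <guest-name>

New guests can then be created with virt-install; moving an existing guest, including its disk image and domain definition, is covered later in this document.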
+ Connect via SSH to log in to the KVM host to use virsh and + virt-install. Managing Virtual Machines Using a GUI - Virtual Machine Manager (virt-manager) is the GUI tool - used to manage virtual machines. This utility is installed as part of the KVM - virtualization host and tools pattern. The section Virtualization GUI - Tools of the Virtualization Guide at Virtual Machine Manager (virt-manager) is the GUI tool used to + manage virtual machines. This utility is installed as part of the KVM virtualization + host and tools pattern. The section Virtualization GUI Tools of the + Virtualization Guide at provides an introduction. - To access the virt-manager GUI, use X11 forwarding via - SSH or vncviewer. X11 forwarding is a good option if the connection (local or - metropolitan area network) speed between the client and the KVM host has relatively low latency. Use - vncviewer if you find that X11 redirection is too - slow. Perform the following actions to enable access to - virt-manager. + To access the virt-manager GUI, use X11 forwarding via SSH or + vncviewer. X11 forwarding is a good option if the connection (local or metropolitan + area network) speed between the client and the KVM host has relatively low latency. + Use vncviewer if you find that X11 redirection is too slow. + Perform the following actions to enable access to + virt-manager. - Both X11 forwarding via SSH and - vncviewer + Both X11 forwarding via SSH and vncviewer Install the suggested additional patterns listed above using YaST @@ -251,7 +244,8 @@ Administration (VNC) - Select Open Port in Firewall if the firewall was enabled during the installation + Select Open Port in Firewall if the firewall was + enabled during the installation Select Allow Remote Administration Without Session @@ -299,13 +293,13 @@ - Moving an Existing KVM for IBM z Systems Virtual Machine to KVM in SUSE Linux Enterprise - Server + Moving an Existing KVM for IBM z Systems Virtual Machine to KVM in SUSE Linux + Enterprise Server - The following section explains how to move a KVM for IBM z Systems virtual machine to KVM - integrated with SUSE Linux Enterprise Server. A VM was created on KVM for IBM z Systems - following the section 3.7 of the IBM Redbook Getting Started with KVM for IBM z - Systems at The following section explains how to move a KVM for IBM z Systems virtual machine to + KVM integrated with SUSE Linux Enterprise Server. A VM was created on KVM for IBM z + Systems following the section 3.7 of the IBM Redbook Getting Started with KVM for + IBM z Systems at . This procedure assumes that the KVM for IBM z Systems VM has a virtual disk file and is attached to a virtual network defined using libvirt. Below is the XML definition of the diff --git a/xml/MAIN-SBP-Multi-PXE-Install.xml b/xml/MAIN-SBP-Multi-PXE-Install.xml index 6fb297579..a0068acf1 100644 --- a/xml/MAIN-SBP-Multi-PXE-Install.xml +++ b/xml/MAIN-SBP-Multi-PXE-Install.xml @@ -4,16 +4,15 @@ %entity; ]> +
+ xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" + xmlns:its="http://www.w3.org/2005/11/its" xml:id="art-sbp-multi-pxe" xml:lang="en"> How to Set Up a Multi-PXE Installation Server - - SUSE Linux Enterprise Server - + https://github.com/SUSE/suse-best-practices/issues/new @@ -22,67 +21,64 @@ https://github.com/SUSE/suse-best-practices/edit/main/xml/ - SUSE Best Practices - - + Best Practices + Systems Management - + Installation Deployment - How to Set Up a Multi-PXE Installation Server - How to set up a multi-architecture PXE environment - for the installation of SLES on x86_64 and ARMv8 platforms with both BIOS and EFI - - SLES + How to Set Up a Multi-PXE Installation Server + How to set up a multi-architecture PXE environment for the + installation of SLES on x86_64 and ARMv8 platforms with both BIOS and EFI + Setting up a multi-PXE installation server with SLES + + SUSE Linux Enterprise Server + SUSE Linux Enterprise Server + SUSE Linux Enterprise Server - 2018-09-25 - SUSE Linux Enterprise Server 12 + SUSE Linux Enterprise Server 12 - + - - David - Byte - - - Senior Technology Strategist - SUSE - - - - + - - + + - + - + - - - - 2018-09-25 - + + + + + + 2018-09-25 + + + + + @@ -95,15 +91,14 @@ environments. - Disclaimer: - Documents published as part of the SUSE Best Practices series have been contributed voluntarily - by SUSE employees and third parties. They are meant to serve as examples of how particular - actions can be performed. They have been compiled with utmost attention to detail. However, - this does not guarantee complete accuracy. SUSE cannot verify that actions described in these - documents do what is claimed or whether actions described have unintended consequences. - SUSE LLC, its affiliates, the authors, and the translators may not be held liable for possible errors - or the consequences thereof. - + Disclaimer: Documents published as part of the + SUSE Best Practices series have been contributed voluntarily by SUSE employees and + third parties. They are meant to serve as examples of how particular actions can be + performed. They have been compiled with utmost attention to detail. However, this + does not guarantee complete accuracy. SUSE cannot verify that actions described in + these documents do what is claimed or whether actions described have unintended + consequences. SUSE LLC, its affiliates, the authors, and the translators may not be + held liable for possible errors or the consequences thereof. diff --git a/xml/MAIN-SBP-OracleWeblogic-SLES12SP3.xml b/xml/MAIN-SBP-OracleWeblogic-SLES12SP3.xml index db03f67b7..672a44740 100644 --- a/xml/MAIN-SBP-OracleWeblogic-SLES12SP3.xml +++ b/xml/MAIN-SBP-OracleWeblogic-SLES12SP3.xml @@ -4,17 +4,18 @@ %entity; ]> +
+ xmlns:its="http://www.w3.org/2005/11/its" + xml:id="art-sbp-weblogic-sles12sp3" xml:lang="en"> Oracle WebLogic Server 12cR2 on SUSE Linux Enterprise Server 12 SP3 Installation Guide for x86-64 Architectures - SUSE Linux Enterprise Server - 12 SP3 + https://github.com/SUSE/suse-best-practices/issues/new @@ -24,26 +25,25 @@ https://github.com/SUSE/suse-best-practices/edit/main/xml/ - SUSE Best Practices - Best Practices - + Best Practices + 3rd party - + Installation Integration + Web - Oracle WebLogic Server 12cR2 on SUSE Linux Enterprise Server 12 SP3 - This document provides the details for installing - Oracle WebLogic Server 12cR2 on SUSE Linux Enterprise Server 12 SP3. - - SLES - Oracle WebLogic Server + Oracle WebLogic Server 12cR2 on SUSE Linux Enterprise Server 12 SP3 + This document provides the details for installing + Oracle WebLogic Server 12cR2 on SUSE Linux Enterprise Server 12 SP3. + Installing Oracle WebLogic Server 12cR2 on SLES 12 SP3 + + SUSE Linux Enterprise Server - 2018-01-24 - SUSE Linux Enterprise Server 12 SP3 - Oracle Fusion Middleware 12c + SUSE Linux Enterprise Server 12 SP3 + Oracle Fusion Middleware 12c @@ -84,8 +84,16 @@ + + + + 2018-01-24 + + + + + - 2018-01-24 Oracle WebLogic Server 12c R2 is a reliable application diff --git a/xml/MAIN-SBP-Quilting-OSC.xml b/xml/MAIN-SBP-Quilting-OSC.xml index 2425bf481..9594ff198 100644 --- a/xml/MAIN-SBP-Quilting-OSC.xml +++ b/xml/MAIN-SBP-Quilting-OSC.xml @@ -4,17 +4,17 @@ %entity; ]> +
How to Modify a Package in Open Build Service Quilting with OSC - SUSE Linux Enterprise Server - - + https://github.com/SUSE/suse-best-practices/issues/new @@ -23,39 +23,31 @@ https://github.com/SUSE/suse-best-practices/edit/main/xml/ - SUSE Best Practices - - + Best Practices + Systems Management - Developer Tools - + Packaging - How to Modify a Package in Open Build Service - This document leads you through the process of + How to Modify a Package in Open Build Service + This document leads you through the process of modifying a software package in the Open Build Service using the osc command and quilt tools. - - SLE - OBS + Modifying software packages in OBS with osc and quilt + + SUSE Linux Enterprise + SUSE Linux Enterprise + SUSE Linux Enterprise + SUSE Linux Enterprise + SUSE Linux Enterprise + SUSE Linux Enterprise - 2018-03-13 - Open Build Service - SUSE Linux Enterprise + Open Build Service + SUSE Linux Enterprise - Josef @@ -85,9 +77,15 @@ - + + + 2018-03-13 - + + + + + diff --git a/xml/MAIN-SBP-RPM-Packaging.xml b/xml/MAIN-SBP-RPM-Packaging.xml index f7297bc14..7760bbc04 100644 --- a/xml/MAIN-SBP-RPM-Packaging.xml +++ b/xml/MAIN-SBP-RPM-Packaging.xml @@ -4,15 +4,14 @@ %entity; ]> +
+ xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" + xmlns:its="http://www.w3.org/2005/11/its" xml:id="art-sbp-rpm-packaging" xml:lang="en"> + Introduction to RPM Packaging - - - SUSE Linux Enterprise, openSUSE - + https://github.com/SUSE/suse-best-practices/issues/new @@ -22,84 +21,97 @@ - SUSE Best Practices - - + Best Practices + Systems Management - + Packaging - Introduction to RPM Packaging - This document describes in detail how to create an RPM package on SUSE-based - systems. - - SLE - openSUSE + Introduction to RPM Packaging + This document describes in detail how to create an RPM package on + SUSE-based systems. + Creating RPM packages for SUSE systems + + SUSE Linux Enterprise Server + SUSE Linux Enterprise Server + SUSE Linux Enterprise Server + SUSE Linux Enterprise Server + SUSE Linux Enterprise Server + SUSE Linux Enterprise Server + SUSE Linux Enterprise Server + SUSE Linux Enterprise Server + SUSE Linux Enterprise Server - 2017-05-29 - SUSE Linux Enterprise - openSUSE + SUSE Linux Enterprise + openSUSE - + - - Duncan - Mac-Vicar Prett - - - Director Data Center Management - SUSE - + + Duncan + Mac-Vicar Prett + + + Director Data Center Management + SUSE + - + - - + + - + - + - - + + + + + + 2017-05-29 + + + + + - 2017-05-29 - In general, a pre-built, open source application is called a package and bundles all the binary, data, and - configuration files required to run the application. A package also includes all the - steps required to deploy the application on a system, typically in the form of a - script. The script might generate data, start and stop system services, or - manipulate files and directories. A script might also perform operations to upgrade - existing software to a new version. + In general, a pre-built, open source application is called a + package and bundles all the binary, data, and configuration + files required to run the application. A package also includes all the steps + required to deploy the application on a system, typically in the form of a script. + The script might generate data, start and stop system services, or manipulate files + and directories. A script might also perform operations to upgrade existing software + to a new version. Because each operating system has its idiosyncrasies, a package is typically tailored to a specific system. Moreover, each operating system provides its own - package manager, a special utility to add and - remove packages from the system. SUSE-based systems – openSUSE and SUSE Linux - Enterprise - use the RPM Package Manager. The package manager precludes partial and - faulty installations and uninstalls by adding and removing the files - in a package atomically. The package manager also maintains a manifest of all - packages installed on the system and can validate the existence of prerequisites and + package manager, a special utility to add and remove + packages from the system. SUSE-based systems – openSUSE and SUSE Linux Enterprise - + use the RPM Package Manager. The package manager precludes partial and faulty + installations and uninstalls by adding and removing the files in a + package atomically. The package manager also maintains a manifest of all packages + installed on the system and can validate the existence of prerequisites and co-requisites beforehand. This document describes in detail how to create an RPM package on SUSE-based - systems. + systems. 
- Disclaimer: - Documents published as part of the SUSE Best Practices series have been contributed voluntarily - by SUSE employees and third parties. They are meant to serve as examples of how particular - actions can be performed. They have been compiled with utmost attention to detail. However, - this does not guarantee complete accuracy. SUSE cannot verify that actions described in these - documents do what is claimed or whether actions described have unintended consequences. - SUSE LLC, its affiliates, the authors, and the translators may not be held liable for possible errors - or the consequences thereof. - + Disclaimer: Documents published as part of the + SUSE Best Practices series have been contributed voluntarily by SUSE employees and + third parties. They are meant to serve as examples of how particular actions can be + performed. They have been compiled with utmost attention to detail. However, this + does not guarantee complete accuracy. SUSE cannot verify that actions described in + these documents do what is claimed or whether actions described have unintended + consequences. SUSE LLC, its affiliates, the authors, and the translators may not be + held liable for possible errors or the consequences thereof. @@ -113,11 +125,11 @@ On some platforms, applications are self-contained into a directory. This means installing an application is simply adding a directory, and uninstalling the application is simply removing this directory. - Linux systems tend to share as much of their components as possible. Partly this is the case - because of some advantages of this philosophy. But mainly it happens because of the fact - that in the Linux ecosystem, the whole universe is built by the same entity, except for - a few 3rd party applications. This makes it easy to assume that a library is available - for all applications to consume. + Linux systems tend to share as much of their components as possible. Partly this is + the case because of some advantages of this philosophy. But mainly it happens because of + the fact that in the Linux ecosystem, the whole universe is built by the same entity, + except for a few 3rd party applications. This makes it easy to assume that a library is + available for all applications to consume. In a MacOS system, only the core comes from a single vendor, and all applications are provided by third party suppliers. It is therefore harder to make assumptions, and they tend to ship their own version of any depending component, with the exception of @@ -172,8 +184,7 @@ - As an example, the metadata for rsync look as - follows: + As an example, the metadata for rsync look as follows: $ rpm -qpi rsync-3.1.2-1.5.x86_64.rpm @@ -271,9 +282,9 @@ rsync(x86-64) = 3.1.2-1.5 $ rpm -q --provides rsync - The rpm tool will not help you if the dependencies of - the package are not met at installation time. It will then refuse to install the - package to avoid having the system in an inconsistent state. + The rpm tool will not help you if the dependencies of the + package are not met at installation time. It will then refuse to install the package + to avoid having the system in an inconsistent state. Features like automatically finding the required packages and retrieving them, are implemented in higher-level tools like zypper. @@ -391,10 +402,10 @@ error while loading shared libraries: libsound.so.7: cannot open shared object f Working with Packages - For daily system administration and maintenance, the rpm - tool does not suffice. 
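Before turning to higher-level tooling, it can still be instructive to inspect a package file's dependency metadata by hand. A quick sketch, reusing the rsync package file from the example above:

# What the package file needs at install time ...
rpm -qp --requires rsync-3.1.2-1.5.x86_64.rpm
# ... and what it offers to other packages
rpm -qp --provides rsync-3.1.2-1.5.x86_64.rpm

Every entry reported by --requires must be satisfied by the --provides list of some installed package; automating exactly this bookkeeping is the job of the solver described below.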
You will quickly fall into what is commonly called the - “dependency hell”. This means you download packages manually to quickly satisfy a - dependency, but then you realize the new package implicates another dependency. + For daily system administration and maintenance, the rpm tool does + not suffice. You will quickly fall into what is commonly called the “dependency hell”. + This means you download packages manually to quickly satisfy a dependency, but then you + realize the new package implicates another dependency. This problem is taken care of by a tool that implements a solver. The solver considers: @@ -439,11 +450,11 @@ error while loading shared libraries: libsound.so.7: cannot open shared object f In SUSE systems, this functionality is implemented by the ZYpp (see )library, which also includes a command-line tool called - zypper. While tools like YaST (see ) also interact - with ZYpp, on the console you will likely interact - with zypper. The command + />)library, which also includes a command-line tool called zypper. + While tools like YaST (see ) also interact with ZYpp, on the console you will likely interact with + zypper. The command $ zypper install rsync-3.1.2-1.5.x86_64.rpm @@ -559,9 +570,9 @@ error while loading shared libraries: libsound.so.7: cannot open shared object f you will get an error at retrieval time. The list of repositories of the system is kept in - /etc/zypp/repos.d. zypper - provides most of repository operations in a safer way than trying to update - these files manually. + /etc/zypp/repos.d. zypper provides + most of repository operations in a safer way than trying to update these files + manually. During refresh, metadata is cached locally at /var/cache/zypp/raw and converted to an efficient @@ -587,10 +598,10 @@ error while loading shared libraries: libsound.so.7: cannot open shared object f Services can be installed remote (like SCC), or locally, via a plug-in, on the system. The package manager asks the plug-in for a list of repositories. It is up to the plug-in to build this list. This is normally used for integration with - other systems. For example, the connectivity between - zypper and Spacewalk respective SUSE Manager (see - was - originally implemented using a local plug-in. + other systems. For example, the connectivity between zypper + and Spacewalk respective SUSE Manager (see was originally + implemented using a local plug-in. @@ -598,9 +609,8 @@ error while loading shared libraries: libsound.so.7: cannot open shared object f Repository sources If you are using SUSE Linux Enterprise, your repositories will appear after - the SUSEConnect tool registers your product against - the SUSE Customer Center at . + the SUSEConnect tool registers your product against the SUSE + Customer Center at . If you are using openSUSE, the default installation will set up the base and update the repositories. Additionally, there is a lot of content published by @@ -847,9 +857,9 @@ touch %{buildroot}%{_datadir}/%{name}/CONTENT Building with rpmbuild - You can build a package with the rpmbuild tool. It - requires the spec file to be in a specific location. You can tweak the standard - configuration to search spec files in the current directory: + You can build a package with the rpmbuild tool. It requires the + spec file to be in a specific location. 
You can tweak the standard configuration to + search spec files in the current directory: $ cat ~/.rpmmacros %topdir /space/packages @@ -896,9 +906,9 @@ Executing(%clean): /bin/sh -e /var/tmp/rpm-tmp.0xLGri assembling the package from existing content. You could build your application in Jenkins, take the built artifacts and use the spec file to package it. - However, where rpm excels is that you can build the - application in the spec file itself, and use the distribution and dependencies to - set up the build environment. + However, where rpm excels is that you can build the application + in the spec file itself, and use the distribution and dependencies to set up the + build environment. A common use case to illustrate this is the typical Linux application built with configure && make && make @@ -920,8 +930,8 @@ Executing(%clean): /bin/sh -e /var/tmp/rpm-tmp.0xLGri $(rpm --eval '%configure')). The package cannot build if some libraries are not present. A C compiler is there, - but the basic build tools (make) are not available. That - is what BuildRequires are for. They define what + but the basic build tools (make) are not available. That is what + BuildRequires are for. They define what packages are needed for building - but not necessarily at runtime. On the other hand, the original oracle-instantclient-sqlplus @@ -1033,10 +1043,10 @@ libncurses6-6.0-19.1.x86_64 xlink:href="http://openbuildservice.org/"/> allows to build packages for multiple distributions and architectures. Visit the Materials section of the Web site (see ) for a deeper - introduction. For the package you are building, you can get an account at the - openSUSE Build Service instance. Go to - your Home Project, and click Create New Package. + xlink:href="http://openbuildservice.org/help/"/>) for a deeper introduction. + For the package you are building, you can get an account at the openSUSE Build Service instance. Go to your + Home Project, and click Create New Package. Upload the spec file and sources. After that you need to configure some target distributions for your home project. That can be one base distribution, or another project. This shows the @@ -1074,8 +1084,8 @@ libncurses6-6.0-19.1.x86_64 Using the Open Build Service locally - With the osc tool you can checkout packages from - the Open Build Service, make changes to them and resubmit them. + With the osc tool you can checkout packages from the Open + Build Service, make changes to them and resubmit them. $ osc co home:dmacvicar gqlplus A home:dmacvicar @@ -1108,8 +1118,8 @@ $ osc build openSUSE_Leap_42.2 These checks are very detailed. But this is the only way to ensure quality and consistency when a product is assembled from thousands of sources by hundreds of contributors. - The spec-cleaner tool can help you keeping your spec - file in shape: + The spec-cleaner tool can help you keeping your spec file in + shape: $ spec-cleaner -i gqlplus.spec diff --git a/xml/MAIN-SBP-SAP-AzureSolutionTemplates.xml b/xml/MAIN-SBP-SAP-AzureSolutionTemplates.xml index 989bec9ad..26bb15438 100644 --- a/xml/MAIN-SBP-SAP-AzureSolutionTemplates.xml +++ b/xml/MAIN-SBP-SAP-AzureSolutionTemplates.xml @@ -13,15 +13,15 @@
+ xmlns:its="http://www.w3.org/2005/11/its" + xml:id="art-sbp-sap-azure-templates" xml:lang="en"> SUSE and Microsoft Solution Templates for SAP Applications Simplified Deployment on Microsoft Azure - SUSE Linux Enterprise Server for SAP Applications - 12 + https://github.com/SUSE/suse-best-practices/issues/new @@ -31,26 +31,27 @@ https://github.com/SUSE/suse-best-practices/edit/main/xml/ - SUSE Best Practices - - + Best Practices + SAP - Cloud - + Automation Deployment + Cloud - SUSE and Microsoft Solution Templates for SAP Applications - Predefined solution templates for SAP Applications from - SUSE and Microsoft for simplified deployment on Microsoft Azure. - - SLES for SAP + SUSE and Microsoft Solution Templates for SAP Applications + Predefined solution templates for SAP Applications from + SUSE and Microsoft for simplified deployment on Microsoft Azure. + Predefined solution templates SLES for SAP on Azure + + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications + SUSE Linux Enterprise Server for SAP Applications - 2018-05-08 - SUSE Linux Enterprise Server for SAP Applications 12 - Microsoft Azure + SUSE Linux Enterprise Server for SAP Applications 12 + Microsoft Azure @@ -63,17 +64,7 @@ SUSE - - + Best Practices + Systems Management - + Upgrade & Update Deployment - Performing Major Version Upgrades of SUSE Linux Enterprise - How to perform an upgrade of a SLE system to - a new major version in environments which do not allow for standard booting of the installation - - SLE + Performing Major Version Upgrades of SUSE Linux Enterprise + How to perform an upgrade of a SLE system to a new major version in + environments which do not allow for standard booting of the installation + Non-standard upgrades to a major SLE version + + SUSE Linux Enterprise + SUSE Linux Enterprise + SUSE Linux Enterprise + SUSE Linux Enterprise + SUSE Linux Enterprise + SUSE Linux Enterprise + SUSE Linux Enterprise + SUSE Linux Enterprise + SUSE Linux Enterprise - 2023-10-31 - SUSE Linux Enterprise 11 and newer + SUSE Linux Enterprise 11 and newer - + - - Jiri - Srain - - - Project Manager Engineering - SUSE - - - - - - - - - - - - - - - - - 2023-10-31 + + + + + + + + + + + + + + + + 2023-10-31 + + + + + - This guide shows how to perform an upgrade of a SUSE Linux Enterprise - system to a new major version in environments which do not allow for standard booting of the - installation, neither via the local boot media, nor via PXE boot. + This guide shows how to perform an upgrade of a SUSE Linux Enterprise system to a new major + version in environments which do not allow for standard booting of the installation, neither via + the local boot media, nor via PXE boot. - Disclaimer: - Documents published as part of the SUSE Best Practices series have been contributed voluntarily - by SUSE employees and third parties. They are meant to serve as examples of how particular - actions can be performed. They have been compiled with utmost attention to detail. However, - this does not guarantee complete accuracy. SUSE cannot verify that actions described in these - documents do what is claimed or whether actions described have unintended consequences. - SUSE LLC, its affiliates, the authors, and the translators may not be held liable for possible errors - or the consequences thereof. - + Disclaimer: Documents published as part of the SUSE Best + Practices series have been contributed voluntarily by SUSE employees and third parties. 
They are + meant to serve as examples of how particular actions can be performed. They have been compiled + with utmost attention to detail. However, this does not guarantee complete accuracy. SUSE cannot + verify that actions described in these documents do what is claimed or whether actions described + have unintended consequences. SUSE LLC, its affiliates, the authors, and the translators may not + be held liable for possible errors or the consequences thereof. @@ -179,8 +181,8 @@ The installation kernel and initrd must be made available in the boot area of the respective hardware architecture. For example, for legacy booting of AMD64 or Intel - 64 systems, this is the /boot directory. The required disk - space is approximately 100 MB. + 64 systems, this is the /boot directory. The required disk space is + approximately 100 MB. @@ -467,7 +469,7 @@ install=http://<server-IP>/repo/SUSE/Products/SLE-Product-SLES/15/x86_64/p - + diff --git a/xml/MAIN-SBP-SLE15-Custom-Installation-Medium.xml b/xml/MAIN-SBP-SLE15-Custom-Installation-Medium.xml index 5331e0bf6..24c099692 100644 --- a/xml/MAIN-SBP-SLE15-Custom-Installation-Medium.xml +++ b/xml/MAIN-SBP-SLE15-Custom-Installation-Medium.xml @@ -11,14 +11,12 @@
+ xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" xmlns:its="http://www.w3.org/2005/11/its" + xml:id="art-sbp-custom-install-medium" xml:lang="en"> Creating a Custom Installation Medium for SUSE Linux Enterprise 15 - - SUSE Linux Enterprise - 15 + https://github.com/SUSE/suse-best-practices/issues/new @@ -27,79 +25,82 @@ https://github.com/SUSE/suse-best-practices/edit/main/xml/ - SUSE Best Practices - - + Best Practices + Systems Management - + Installation - Creating a Custom Installation Medium for SUSE Linux Enterprise 15 - How to create one single custom installation media for SUSE Linux Enterprise 15 - - SLE + Creating a Custom Installation Medium for SUSE Linux + Enterprise 15 + How to create one single custom installation media + for SUSE Linux Enterprise 15 + Creating a custom installation media for SLE + 15 + + SUSE Linux Enterprise + SUSE Linux Enterprise + SUSE Linux Enterprise + SUSE Linux Enterprise + SUSE Linux Enterprise + SUSE Linux Enterprise - 2019-01-14 - SUSE Linux Enterprise 15 + SUSE Linux Enterprise 15 - + - - Jiri - Srain - - - Project Manager Engineering - SUSE - - - - - - - - - - - - - - - + + + + + + + + + + + + + + + + 2019-01-14 + + + + + - 2019-01-14 This document provides guidance on how to create one single custom installation media for - SUSE Linux Enterprise 15. + SUSE Linux Enterprise 15. - Disclaimer: - Documents published as part of the SUSE Best Practices series have been contributed voluntarily - by SUSE employees and third parties. They are meant to serve as examples of how particular - actions can be performed. They have been compiled with utmost attention to detail. However, - this does not guarantee complete accuracy. SUSE cannot verify that actions described in these - documents do what is claimed or whether actions described have unintended consequences. - SUSE LLC, its affiliates, the authors, and the translators may not be held liable for possible errors - or the consequences thereof. - + Disclaimer: Documents published as part of the SUSE Best + Practices series have been contributed voluntarily by SUSE employees and third parties. They are + meant to serve as examples of how particular actions can be performed. They have been compiled + with utmost attention to detail. However, this does not guarantee complete accuracy. SUSE cannot + verify that actions described in these documents do what is claimed or whether actions described + have unintended consequences. SUSE LLC, its affiliates, the authors, and the translators may not + be held liable for possible errors or the consequences thereof. @@ -234,11 +235,13 @@ Append new lines to the end of the file /media.1/products on the - medium. For each of the modules, add one line in the following format: - + medium. For each of the modules, add one line in the following format: + + <Path on media> <Name of the module> - + + @@ -333,8 +336,8 @@ - - + + diff --git a/xml/MAIN-SBP-SLES-MFAD.xml b/xml/MAIN-SBP-SLES-MFAD.xml index 661a8c2e7..21621b5e0 100644 --- a/xml/MAIN-SBP-SLES-MFAD.xml +++ b/xml/MAIN-SBP-SLES-MFAD.xml @@ -8,15 +8,14 @@
Joining a Microsoft Azure Active Directory Domain Services Managed Domain with SUSE Linux Enterprise Server - - SUSE Linux Enterprise Server, Microsoft Azure - + https://github.com/SUSE/suse-best-practices/issues/new @@ -26,27 +25,28 @@ https://github.com/SUSE/suse-best-practices/edit/main/xml/ - SUSE Best Practices - - - Systems Management + Best Practices + 3rd Party - + Authentication Integration Configuration + Cloud - Joining a Microsoft Azure Active Directory Domain Services Managed Domain - How to use Azure AD Domain Services as a managed service in Microsoft Azure - to enable NTLM, Kerberos, and LDAP capabilities with SLES - - SLES + Joining a Microsoft Azure Active Directory Domain Services Managed Domain + How to use Azure AD Domain Services as a managed service in Microsoft Azure + to enable NTLM, Kerberos, and LDAP capabilities with SLES + Using Azure AD Domain Services with SLES on Azure + + SUSE Linux Enterprise Server + SUSE Linux Enterprise Server + SUSE Linux Enterprise Server - 2018-01-23 - SUSE Linux Enterprise Server - Microsoft Azure Active Directory Domain Services + SUSE Linux Enterprise Server 12 + Microsoft Azure Active Directory Domain Services @@ -59,17 +59,7 @@ Microsoft - - - @@ -131,8 +85,15 @@ - - 2018-01-23 + + + + 2018-01-23 + + + + + diff --git a/xml/MAIN-SBP-SLSA.xml b/xml/MAIN-SBP-SLSA.xml index 4392ba626..41d965a9b 100644 --- a/xml/MAIN-SBP-SLSA.xml +++ b/xml/MAIN-SBP-SLSA.xml @@ -6,101 +6,92 @@ ]>
+ xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" xmlns:its="http://www.w3.org/2005/11/its" + xml:id="art-sbp-slsa" xml:lang="en"> SLSA: Securing the Software Supply Chain - All SUSE Products - https://github.com/SUSE/suse-best-practices/issues/new - SLSA: Securing the Software Supply Chain/> - + SLSA: Securing the Software Supply Chain/> https://github.com/SUSE/suse-best-practices/edit/main/xml/ - SUSE Best Practices - - + Best Practices + Security - + Compliance - Auditing + Vulnerability + Auditing - SLSA: Securing the Software Supply Chain - How SUSE, as a long-time champion and expert of software supply chain security, - prepares for SLSA L4 compliance - + SLSA: Securing the Software Supply Chain + How SUSE, as a long-time champion and expert of software supply chain + security, prepares for SLSA L4 compliance + Securing the SUSE software supply chain for SLSA L4 + + + All SUSE Products - All SUSE Products - - + - - Marcus - Meissner - - - Distinguished Engineer Solutions Security - SUSE - + + Marcus + Meissner + + + Distinguished Engineer Solutions Security + SUSE + - - Jana - Jaeger - - - Project Manager Technical Documentation - SUSE - + + Jana + Jaeger + + + Project Manager Technical Documentation + SUSE + - - - - - - - - - - - - - - + - 2022-06-02 + + + + + + + + + + + + + + 2022-06-02 + + + + + - - This document details how SUSE, as a long-time champion and expert of software supply chain security, - prepares for SLSA L4 compliance. + + This document details how SUSE, as a long-time champion and expert of software supply + chain security, prepares for SLSA L4 compliance. Disclaimer: This document is part of the SUSE Best Practices @@ -373,8 +364,8 @@ Building and build system The next part of the integrity chain is the actual build process that turns sources to binaries. The entire build process must be secured against any kind of unknown or outside - influence to avoid possible tampering with the builds. Builds must be reproducible to - allow verification and checking of build results. + influence to avoid possible tampering with the builds. Builds must be reproducible to allow + verification and checking of build results. SLSA4 Build (process) requirements and SUSE's OBS @@ -607,12 +598,12 @@ SUSE's processes The OBS build system tracks all used binaries for each build and can reproduce the build environment of any released binary. The binaries used are also referenced in in-toto - provenance files and made available together with the sources starting from SUSE Linux Enterprise 15 SP4 - builds. Older builds may have missing binaries because of the nature of the bootstrapping process. - It is notable that the SUSE Linux Enterprise 15 code base is not enforcing binary identical reproducibility - yet. Instead, builds are compared and known good differences are accepted (for example time - stamps or build host name). This validation is done by the code in the build-compare - package. + provenance files and made available together with the sources starting from SUSE Linux + Enterprise 15 SP4 builds. Older builds may have missing binaries because of the nature of + the bootstrapping process. It is notable that the SUSE Linux Enterprise 15 code base is + not enforcing binary identical reproducibility yet. Instead, builds are compared and known + good differences are accepted (for example time stamps or build host name). This + validation is done by the code in the build-compare package. 
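To make the distinction concrete: bit-identical reproducibility would mean that two independent rebuilds of the same sources yield byte-for-byte equal binaries, which could be checked as naively as in the sketch below (package name and directory layout are hypothetical). build-compare instead normalizes the known acceptable differences, such as time stamps, before comparing.

# Naive check for bit-identical reproducibility of two independent rebuilds
sha256sum build-a/RPMS/x86_64/example-1.0-1.x86_64.rpm \
          build-b/RPMS/x86_64/example-1.0-1.x86_64.rpm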
@@ -938,8 +929,8 @@ SLSA L4 requirement - Identify the entry point of the build definition used to drive the build (for example the - source repo the configuration was taken from). + Identify the entry point of the build definition used to drive the build (for + example the source repo the configuration was taken from). diff --git a/xml/MAIN-SBP-SUMA-on-IBM-PowerVM.xml b/xml/MAIN-SBP-SUMA-on-IBM-PowerVM.xml index 27aab83bf..c243424eb 100644 --- a/xml/MAIN-SBP-SUMA-on-IBM-PowerVM.xml +++ b/xml/MAIN-SBP-SUMA-on-IBM-PowerVM.xml @@ -6,93 +6,95 @@ ]>
+ xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" xmlns:its="http://www.w3.org/2005/11/its" + xml:id="art-sbp-suma-ibmpowervm" xml:lang="en"> Deploying SUSE Linux Enterprise Products with SUSE Manager on IBM PowerVM - - SUSE Manager - 3.1+ + https://github.com/SUSE/suse-best-practices/issues/new - Deploying SUSE Linux Enterprise Products with SUSE Manager on IBM PowerVM + Deploying SUSE Linux Enterprise Products with SUSE Manager on IBM + PowerVM https://github.com/SUSE/suse-best-practices/edit/main/xml/ - SUSE Best Practices - - + Best Practices + Systems Management - + Deployment Installation Configuration - Deploying SUSE Linux Enterprise Products with SUSE Manager on IBM PowerVM - Overview of how to deploy SUSE Linux Enterprise products with SUSE Manager - on IBM Power Systems - - SUMA + Deploying SUSE Linux Enterprise Products with SUSE Manager on IBM + PowerVM + Overview of how to deploy SUSE Linux Enterprise products with SUSE + Manager on IBM Power Systems + Deploying SLE products with SUMA on IBM Power Systems + + SUSE Manager + SUSE Manager + SUSE Manager + SUSE Manager + SUSE Manager + SUSE Manager - 2018-08-14 - SUSE Manager 3.1 and later + SUSE Manager - + - - Olivier - Van Rompuy - - - Senior System Engineer and Technical Consultant - IRIS - + + Olivier + Van Rompuy + + + Senior System Engineer and Technical Consultant + IRIS + - - - - - - - - - - - - - - - - - - - - - - - 2018-08-14 + + + + + + + + + + + + + + + + + + + + + + + 2018-08-14 + + + + + + The document at hand provides an overview of how to deploy SUSE Linux Enterprise @@ -102,15 +104,14 @@ IBM PowerVM LPARs, including Autoinstallation, AutoYaST and Netboot Integration. - Disclaimer: - Documents published as part of the SUSE Best Practices series have been contributed voluntarily - by SUSE employees and third parties. They are meant to serve as examples of how particular - actions can be performed. They have been compiled with utmost attention to detail. However, - this does not guarantee complete accuracy. SUSE cannot verify that actions described in these - documents do what is claimed or whether actions described have unintended consequences. - SUSE LLC, its affiliates, the authors, and the translators may not be held liable for possible errors - or the consequences thereof. - + Disclaimer: Documents published as part of the SUSE Best + Practices series have been contributed voluntarily by SUSE employees and third parties. They + are meant to serve as examples of how particular actions can be performed. They have been + compiled with utmost attention to detail. However, this does not guarantee complete + accuracy. SUSE cannot verify that actions described in these documents do what is claimed or + whether actions described have unintended consequences. SUSE LLC, its affiliates, the + authors, and the translators may not be held liable for possible errors or the consequences + thereof. @@ -157,8 +158,8 @@ for IBM POWER). Collect your registration codes from the SCC portal site at https://scc.suse.com. - Now choose to add the SUSE Manager Server extension as shown on the screen below, and enter - Next: + Now choose to add the SUSE Manager Server extension as shown on the screen below, and + enter Next:
YaST Installation - Extensions and Module Selection @@ -779,12 +780,12 @@ Synchronize the base channel : In the specific setup at hand, the lifecycle phases are limited to dev and prod (test has been removed). -vi ~/.spacewalk-manage-channel-lifecycle/settings.conf + vi ~/.spacewalk-manage-channel-lifecycle/settings.conf phases = dev, prod exclude channels = - This can be customized as required, which means you can add and remove phases at this stage - of the procedure. + This can be customized as required, which means you can add and remove phases at this + stage of the procedure. Generate the dev channels by promoting the SUSE channels. The same command is used to fully synchronize the dev channels with the online @@ -1988,7 +1989,8 @@ zypper ref -y;zypper -n patch -l -y;zypper -n patch -l -y;zypper -n up -l -y
- Now you can create an autoinstallation profile. Click Upload Kickstart/Autoyast File: + Now you can create an autoinstallation profile. Click Upload + Kickstart/Autoyast File:
SUSE Manager Web UI - Button Upload Kickstart/Autoyast File @@ -2004,7 +2006,7 @@ zypper ref -y;zypper -n patch -l -y;zypper -n patch -l -y;zypper -n up -l -y The screen below opens. Provide the required details and an AutoYaST script: -
+
SUSE Manager Web UI - Create Autoinstallation Profile @@ -2537,8 +2539,8 @@ cp -r /root/grub2 /srv/tftpboot/boot/ SUSE Manager documentation: https://documentation.suse.com/suma/3.2/ + xlink:href="https://documentation.suse.com/suma/3.2/" + >https://documentation.suse.com/suma/3.2/ IBM Knowledge Center: - + diff --git a/xml/MAIN-SBP-SUSE-oem-identification.xml b/xml/MAIN-SBP-SUSE-oem-identification.xml index 442f5a0c7..de3338c6d 100644 --- a/xml/MAIN-SBP-SUSE-oem-identification.xml +++ b/xml/MAIN-SBP-SUSE-oem-identification.xml @@ -6,13 +6,12 @@ ]>
+ xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" xmlns:its="http://www.w3.org/2005/11/its" + xml:id="art-sbp-oem-identification" xml:lang="en"> SUSE OEM Identification - SUSE Linux Enterprise - 12, 15 + https://github.com/SUSE/suse-best-practices/issues/new @@ -22,68 +21,71 @@ - SUSE Best Practices - - + Best Practices + 3rd Party - + Integration Implementation - SUSE OEM Identification - This document provides guidance how to identify a - SUSE Linux Enterprise-based OEM system - - SLE + SUSE OEM Identification + This document provides guidance how to identify a SUSE Linux + Enterprise-based OEM system + How to identify a SLE-based OEM system + + SUSE Linux Enterprise + SUSE Linux Enterprise + SUSE Linux Enterprise + SUSE Linux Enterprise + SUSE Linux Enterprise + SUSE Linux Enterprise + SUSE Linux Enterprise + SUSE Linux Enterprise - 2022-08-15 - SUSE Linux Enterprise 12 and 15 - + SUSE Linux Enterprise 12 and 15 - + + - - Daniel - Rahn - - - Product Manager - SUSE - + + Daniel + Rahn + + + Product Manager + SUSE + - - - - - - - - - - - - - - - + + + + + + + + + + + + + + + 2022-08-15 - + + + + + + When SUSE Linux Enterprise family products are being bundled with a system or integrated diff --git a/xml/MAIN-SBP-SUSE-security-report-2021.xml b/xml/MAIN-SBP-SUSE-security-report-2021.xml index d61157e84..a189096ed 100644 --- a/xml/MAIN-SBP-SUSE-security-report-2021.xml +++ b/xml/MAIN-SBP-SUSE-security-report-2021.xml @@ -6,13 +6,12 @@ ]>
+ xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" xmlns:its="http://www.w3.org/2005/11/its" + xml:id="art-sbp-suse-sec-report-21" xml:lang="en"> SUSE Solution Security Risk Report 2021 - All SUSE Products - + https://github.com/SUSE/suse-best-practices/issues/new @@ -21,66 +20,62 @@ https://github.com/SUSE/suse-best-practices/edit/main/xml/ - SUSE Best Practices - - + Best Practices + Security - - Vulnerability + + Vulnerability Auditing - SUSE Solution Security Risk Report 2021 - Summary of all security vulnerabilities which affected SUSE products - in calendar year 2021 - + SUSE Solution Security Risk Report 2021 + Summary of all security vulnerabilities which affected SUSE products in + calendar year 2021 + Summary of security issues affecting SUSE in 2021 + + + All SUSE Products - + - - Stoyan - Manolov - - - Head of Solution Security - SUSE - + + Stoyan + Manolov + + + Head of Solution Security + SUSE + - - + - - - - - - - - - - - - 2022-04-27 + + + + + + + + + + + + + + 2022-04-27 + + + + + @@ -152,26 +147,26 @@ Background Software provides security features (such as authentication methods, encryption, intrusion - prevention and detection, backup and others). However, it can also contain errors (such as design flaws, - programming errors, and even backdoors) that often turn out to be relevant for the system's - security. The SUSE Security Team's task is to addresses all of these aspects of software + prevention and detection, backup and others). However, it can also contain errors (such as design + flaws, programming errors, and even backdoors) that often turn out to be relevant for the + system's security. The SUSE Security Team's task is to addresses all of these aspects of software security, with the understanding that security in software is a challenge that never ends. Software security cannot be understood a state taken at some certain point in time; it is a process that must be filled with professional expertise and permanent development, both on software and on skills. The resulting evolution is what has given open source software, Linux and SUSE its excellent reputation for security. - A modern Linux operating system, such as SUSE Linux Enterprise Server for enterprise use - or the openSUSE community distribution for home use, features a rich set of security programs and + A modern Linux operating system, such as SUSE Linux Enterprise Server for enterprise use or + the openSUSE community distribution for home use, features a rich set of security programs and functions that range from access controls, intrusion prevention and detection, flexible and trustworthy authentication mechanisms, encryption for files and network connections, file integrity checking utilities, to network analysis tools and monitoring/logging utilities for your system. To complement this, there are advanced tools that help you to securely configure and administer your system, and to securely download and install update packages. These utilities are standard in SUSE products. The update packages fix security bugs that have been found after your - product has been released. The security features of your Linux system are waiting for you to explore - them. SUSE encourages our customers to take advantage of them to further improve the level of - privacy and security that is built into every system by default. + product has been released. The security features of your Linux system are waiting for you to + explore them. 
SUSE encourages our customers to take advantage of them to further improve the + level of privacy and security that is built into every system by default. Programs are usually written by humans, and humans make mistakes. By consequence, all software can contain errors. Some of these errors appear as instabilities (the software or the @@ -189,15 +184,15 @@ The SUSE Solution Security team is responsible for handling all SUSE product-related security incidents. In that team, clear and well-defined roles are assigned for tracking new - incidents and coordinating needed updates. The team works with all SUSE engineering - software specialists. + incidents and coordinating needed updates. The team works with all SUSE engineering software + specialists. We use multiple sources to understand security incidents. These sources include the Mitre - and NVD Common Vulnerabilities and Exposures (CVE) databases, various security mailing lists (OSS security, Linux distros, distros, - bugtraq, and full-disclosure), direct reports, and other Linux vendors databases. We are also - part of various pre-notification mailing lists for software components, like Xen, Samba, X.ORG. - Confidential pre-notifications about vulnerabilities will be treated according to established - responsible disclosure procedures. + and NVD Common Vulnerabilities and Exposures (CVE) databases, various security mailing lists (OSS + security, Linux distros, distros, bugtraq, and full-disclosure), direct reports, and other Linux + vendors databases. We are also part of various pre-notification mailing lists for software + components, like Xen, Samba, X.ORG. Confidential pre-notifications about vulnerabilities will be + treated according to established responsible disclosure procedures. @@ -207,19 +202,19 @@ We rate the severity of incidents with two different systems, a simplified rating system and the Common Vulnerability Scoring System (CVSS) v3.1 scoring system. The CVSS is an open framework for communicating the characteristics and severity of software vulnerabilities. It is being - developed by the US-based non-profit organization FIRST.org: Its main goal is to assign the - right score to a vulnerability to help security administrators prioritize responses and resources - to specific threats. CVSS v3.1 scoring consists of three metric groups: Base, Temporal, and + developed by the US-based non-profit organization FIRST.org: Its main goal is to assign the right + score to a vulnerability to help security administrators prioritize responses and resources to + specific threats. CVSS v3.1 scoring consists of three metric groups: Base, Temporal, and Environmental. The Base group represents the intrinsic qualities of a vulnerability that are constant over time and across user environments. The Temporal group reflects the characteristics - of a vulnerability that change over time. The Environmental group represents the - characteristics of a vulnerability that are unique to a user's environment. The Base metrics - produce a score ranging from 0 to 10, which can then be modified by scoring the Temporal and - Environmental metrics. A CVSS score is also represented as a vector string, a compressed textual - representation of the values used to derive the score. Today, SUSE uses the Base score - methodology to evaluate vulnerabilities throughout the support lifecycle of our products. SUSE - keeps the right to adjust the final score of the vulnerability as more details become known and - available throughout the analysis. 
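As an illustration of such a vector string: a vulnerability that is exploitable over the network, needs no privileges or user interaction, and fully compromises confidentiality, integrity and availability would be encoded as shown below (a generic example, not tied to any specific CVE discussed in this report); its Base metrics yield a score of 9.8 (Critical).

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H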
The most current CVSS resources can be found at . The CVSS v3.1 calculator used by SUSE can be found at . The framework measures the severity of a given vulnerability, not the associated risk alone. The scoring of
@@ -230,16 +225,17 @@
The security incidents are tracked in our own workflow system, technical details are tracked in the SUSE bug-tracking system, and the updated software package is built, processed,
- and published by our internal Open Build System. Internal Service Level Agreements (SLAs) corresponding to the severity
- rating are monitored and reviewed regularly. Our packagers backport the required security fixes
- to our version of the software. To protect the stability of our customer setups, we only rarely
- do minor version upgrades. After receiving fixes for the affected software, four eye reviews
- cross-check the source patches. A number of automated checks verify source and binary
- compatibility and the completeness of patch meta information. They also check whether patches can be
- installed without problems. Dedicated QA teams provide integration, bugfix, and regression
- testing for all updates before they are released to our customers. After the release of an
- update, automated processes publish the updates, update notices, and cross reference information
- on our CVE index pages and machine-readable OVAL and CVRF XML information.
+ and published by our internal Open Build System. Internal Service Level Agreements
+ (SLAs) corresponding to the severity rating are monitored and reviewed regularly. Our packagers
+ backport the required security fixes to our version of the software. To protect the stability of
+ our customer setups, we only rarely do minor version upgrades. After receiving fixes for the
+ affected software, four-eye reviews cross-check the source patches. A number of automated checks
+ verify source and binary compatibility and the completeness of patch meta information. They also
+ check whether patches can be installed without problems. Dedicated QA teams provide integration,
+ bugfix, and regression testing for all updates before they are released to our customers. After
+ the release of an update, automated processes publish the updates, update notices, and cross
+ reference information on our CVE index pages and machine-readable OVAL and CVRF XML
+ information.
The objective of this report is to provide a summary of all security vulnerabilities which affected SUSE products in calendar year 2021. We will go into details on the high impact
@@ -316,11 +312,11 @@
class, occasionally newer versions are introduced to a released version of an enterprise product line.
- Sometimes also for other types of packages the choice is made to introduce a new version rather
- than a backport. This is done when producing a backport is not economically feasible or when
- there is a very relevant technical reason to introduce the newer version.
+ Sometimes also for other types of packages the choice is made to introduce a new version
+ rather than a backport. This is done when producing a backport is not economically feasible or
+ when there is a very relevant technical reason to introduce the newer version.
- 
+ 
Major security vulnerabilities in 2021
@@ -348,10 +344,10 @@
does not use Zipkin and is not affected by the vulnerability.
The vulnerability does not affect SUSE Manager, as it is using at most log4j 1.2.x, which
- is not affected. One component of SUSE OpenStack Cloud (storm) embeds log4j 2.x, which
- immediately received the required update.
The SUSE NeuVector product is not affected by this - vulnerability, but its security scanner functionality has now added support for scanning your - containers, see the NeuVector log4j2 page. + is not affected. One component of SUSE OpenStack Cloud (storm) embeds log4j 2.x, + which immediately received the required update. The SUSE NeuVector product is not affected by + this vulnerability, but its security scanner functionality has now added support for scanning + your containers, see the NeuVector log4j2 page. A much less severe similar vulnerability was discovered in older log4j 1.2.x versions via the JMS interface. This JMS functionality is not default enabled, administrators must have @@ -520,7 +516,8 @@ Solution - Fixes have been provided for all affected and supported SUSE products. For more details, check the CVE Web page referenced below. + Fixes have been provided for all affected and supported SUSE products. For more details, + check the CVE Web page referenced below. References @@ -557,10 +554,10 @@ CVE-2020-28243: A privilege escalation is possible on a SaltStack minion when an - unprivileged user can create files in any non-blacklisted directory via a command - injection in a processes' name. Simply ending a file with (deleted) and keeping - a file handle open to it is enough to trigger the exploit whenever a restart check is - triggered from a SaltStack master. + unprivileged user can create files in any non-blacklisted directory via a command injection in + a processes' name. Simply ending a file with (deleted) and keeping a file + handle open to it is enough to trigger the exploit whenever a restart check is triggered from + a SaltStack master. CVE-2020-28972: In SaltStack Salt v2015.8.0 through v3002.2, authentication to vCenter, @@ -766,8 +763,8 @@ Software and hardware vendors are closely collaborating to ensure that sophisticated attackers cannot reinstall old versions of GRUB2. Over time, vendors are going to update - cryptographic keys in the BIOS for new computers, and to provide so-called DBX Exclusion - List updates for existing computers. These can prevent systems that are not patched and old + cryptographic keys in the BIOS for new computers, and to provide so-called DBX Exclusion List + updates for existing computers. These can prevent systems that are not patched and old installation media from starting. Make sure you have installed all relevant boot loader and operating system updates for BootHole before installing a BIOS or DBX Exclusion List update to ensure continuity. @@ -827,9 +824,8 @@ FRAGATTACKS - several WLAN vulnerabilities Security Researcher Mathy Vanhoef discovered various attacks against Wi-Fi (802.11) stacks - and against the Wi-Fi standard related to Wi-Fi fragments. This vulnerability is documented - on the Web site and is called - FRAGATTACKS. + and against the Wi-Fi standard related to Wi-Fi fragments. This vulnerability is documented on + the Web site and is called FRAGATTACKS. This set of vulnerabilities can allow local attackers in Wi-Fi range to inject traffic even in encrypted Wi-Fi networks, or get access to information of other users in the same Wi-Fi @@ -1064,8 +1060,8 @@ Security Evaluation is an international standard (ISO/IEC 15408), recognized by 26 countries (CCRA) worldwide. Details regarding SUSE’s full Common Criteria Part 3 conformant EAL 4 augmented by ALC_FLR.3 Systematic Flaw Remediation certification are listed at - . 
+ xlink:href="https://www.bsi.bund.de/SharedDocs/Zertifikate_CC/CC/Betriebssysteme/1151.html?nn=513260"
+ /> .
On January 29th 2021, the Defense Information Systems Agency (DISA) released the SUSE Linux Enterprise Server 15 Security Technical Implementation Guide (STIG). Details regarding STIG
diff --git a/xml/MAIN-SBP-SUSE-security-report-2022.xml b/xml/MAIN-SBP-SUSE-security-report-2022.xml
index 83871af79..563ce612f 100644
--- a/xml/MAIN-SBP-SUSE-security-report-2022.xml
+++ b/xml/MAIN-SBP-SUSE-security-report-2022.xml
@@ -6,13 +6,12 @@
]>
+ xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" xmlns:its="http://www.w3.org/2005/11/its" + xml:id="art-sbp-suse-sec-report-22" xml:lang="en"> SUSE Solution Security Risk Report 2022 - All SUSE Products - + https://github.com/SUSE/suse-best-practices/issues/new @@ -22,25 +21,24 @@ - SUSE Best Practices - - + Best Practices + Security - - Vulnerability + + Vulnerability Auditing - SUSE Solution Security Risk Report 2022 - Summary of all security vulnerabilities which affected SUSE products - in calendar year 2022 - + SUSE Solution Security Risk Report 2022 + Summary of all security vulnerabilities which affected SUSE products in + calendar year 2022 + Summary of security issues affecting SUSE in 2022 + + + All SUSE Products - All SUSE Products - @@ -52,17 +50,7 @@ SUSE - + https://github.com/SUSE/suse-best-practices/issues/new @@ -22,24 +22,23 @@ - SUSE Best Practices - - + Best Practices + Security - + Vulnerability Auditing - SUSE Solution Security Risk Report 2023 - Summary of all security vulnerabilities which affected SUSE products + SUSE Solution Security Risk Report 2023 + Summary of all security vulnerabilities which affected SUSE products in calendar year 2023 - SUSE Security Report 2023 + SUSE Security Report 2023 - All SUSE Products + All SUSE Products @@ -52,22 +51,12 @@ SUSE - + @@ -81,16 +70,15 @@ - + - 2024-05-27 diff --git a/xml/MAIN-SBP-Spectre-Meltdown-L1TF.xml b/xml/MAIN-SBP-Spectre-Meltdown-L1TF.xml index 19621fa37..620d24995 100644 --- a/xml/MAIN-SBP-Spectre-Meltdown-L1TF.xml +++ b/xml/MAIN-SBP-Spectre-Meltdown-L1TF.xml @@ -11,125 +11,137 @@
+ xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" xmlns:its="http://www.w3.org/2005/11/its" + xml:id="art-sbp-spectre-meltdown" xml:lang="en"> System Performance Implications of Meltdown, Spectre, and L1TF Vulnerabilities in SUSE-based Products - All SUSE Products - + https://github.com/SUSE/suse-best-practices/issues/new - System Performance Implications of Meltdown, Spectre, and L1TF Vulnerabilities + System Performance Implications of Meltdown, Spectre, and L1TF + Vulnerabilities https://github.com/SUSE/suse-best-practices/edit/main/xml/ - SUSE Best Practices - - + Best Practices + Security - Tuning & Performance - + Vulnerability Auditing Monitoring - SLSA: Securing the Software Supply Chain - Information about released mitigations for Meltdown, Spectre, and L1 - Terminal Fault (L1TF) in SUSE Linux Enterprise-based products - - SLE + Performance implications of Meltdown, Spectre, and L1TF + Information about released mitigations for Meltdown, Spectre, and L1 + Terminal Fault (L1TF) in SUSE Linux Enterprise-based products + Meltdown, Spectre, L1TF and their impact on SLE + + SUSE Linux Enterprise + SUSE Linux Enterprise + SUSE Linux Enterprise + SUSE Linux Enterprise + SUSE Linux Enterprise + SUSE Linux Enterprise + SUSE Linux Enterprise - 2019-01-17 - SUSE Linux Enterprise - + SUSE Linux Enterprise + - + + + + Sheilagh + Morlan + + + Manager Software Engineering + SUSE + + + + + Bryan + Stephenson + + + SUSE OpenStack Cloud Security Engineer + SUSE + + - - Sheilagh - Morlan - - - Manager Software Engineering - SUSE - + + T.R. + Bosworth + + + Senior Product Manager SUSE OpenStack Cloud + SUSE + - - Bryan - Stephenson - - - SUSE OpenStack Cloud Security Engineer - SUSE - + + Jiri + Kosina + + + Director SUSE Labs Core + SUSE + - - - T.R. - Bosworth - - - Senior Product Manager SUSE OpenStack Cloud - SUSE - - - - - Jiri - Kosina - - - Director SUSE Labs Core - SUSE - - - - - Vojtech - Pavlik - - - VP SUSE Labs - SUSE - - - - - Olaf - Kirch - - - VP SUSE Linux Enterprise - SUSE - - - - - - - - - - - - - - - + + + + + + + + + + + + + + + + 2019-01-17 + + + + + - 2019-01-17 This document provides information about released mitigations for Meltdown, Spectre, and L1 @@ -137,15 +149,13 @@ to help customers evaluate how best to performance test their deployments. - Disclaimer: - Documents published as part of the SUSE Best Practices series have been contributed voluntarily - by SUSE employees and third parties. They are meant to serve as examples of how particular - actions can be performed. They have been compiled with utmost attention to detail. However, - this does not guarantee complete accuracy. SUSE cannot verify that actions described in these - documents do what is claimed or whether actions described have unintended consequences. - SUSE LLC, its affiliates, the authors, and the translators may not be held liable for possible errors - or the consequences thereof. - + Disclaimer: Documents published as part of the SUSE Best + Practices series have been contributed voluntarily by SUSE employees and third parties. They are + meant to serve as examples of how particular actions can be performed. They have been compiled + with utmost attention to detail. However, this does not guarantee complete accuracy. SUSE cannot + verify that actions described in these documents do what is claimed or whether actions described + have unintended consequences. SUSE LLC, its affiliates, the authors, and the translators may not + be held liable for possible errors or the consequences thereof. 
@@ -501,7 +511,7 @@ - + diff --git a/xml/MAIN-SBP-intelsupport.xml b/xml/MAIN-SBP-intelsupport.xml index c538600e7..318303335 100644 --- a/xml/MAIN-SBP-intelsupport.xml +++ b/xml/MAIN-SBP-intelsupport.xml @@ -4,85 +4,85 @@ %entity; ]> +
+ xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" + xmlns:its="http://www.w3.org/2005/11/its" xml:id="art-intel-support" xml:lang="en"> + SUSE Linux Enterprise Server - Support for Intel Server Platforms YES Certified Products - - SUSE Linux Enterprise Server - 12 + https://github.com/SUSE/suse-best-practices/issues/new - SUSE Linux Enterprise Server - Support for Intel Server Platforms + SUSE Linux Enterprise Server - Support for Intel Server + Platforms https://github.com/SUSE/suse-best-practices/edit/main/xml/ - SUSE Best Practices - - + Best Practices + 3rd Party - - Certification - Maintenance + + Certification + Maintenance + Integration - SUSE Linux Enterprise Server - Support for Intel Server Platforms - Explanation of which versions of SUSE Linux Enterprise Server are supported by a specific - Intel* microarchitecture. - - SLES + SUSE Linux Enterprise Server - Support for Intel Server Platforms + Explanation of which versions of SUSE Linux Enterprise Server are + supported by a specific Intel* microarchitecture. + What Intel microarchitecture supports which SLE version + + SUSE Linux Enterprise Server + SUSE Linux Enterprise Server + SUSE Linux Enterprise Server - 2016-07-28 - - SUSE Linux Enterprise Server 12 + SUSE Linux Enterprise Server 12 - + + - - Scott - Bahling - - - Senior Technical Account Manager - SUSE - - - - + + - - + + - + - + - - + + - 2016-07-28 + + + 2016-07-28 + + + + + @@ -93,15 +93,14 @@ correct information. - Disclaimer: - Documents published as part of the SUSE Best Practices series have been contributed voluntarily - by SUSE employees and third parties. They are meant to serve as examples of how particular - actions can be performed. They have been compiled with utmost attention to detail. However, - this does not guarantee complete accuracy. SUSE cannot verify that actions described in these - documents do what is claimed or whether actions described have unintended consequences. - SUSE LLC, its affiliates, the authors, and the translators may not be held liable for possible errors - or the consequences thereof. - + Disclaimer: Documents published as part of the + SUSE Best Practices series have been contributed voluntarily by SUSE employees and + third parties. They are meant to serve as examples of how particular actions can be + performed. They have been compiled with utmost attention to detail. However, this + does not guarantee complete accuracy. SUSE cannot verify that actions described in + these documents do what is claimed or whether actions described have unintended + consequences. SUSE LLC, its affiliates, the authors, and the translators may not be + held liable for possible errors or the consequences thereof. @@ -202,10 +201,10 @@ Most people refer to the basic I/O chipset or Platform Controller Hub (PCH) for Intel based systems as the chipset, or in our case for the document - at hand the Haswell chipset. These chipsets may require updates to the - kernel or other operating system subsystems for proper support. In the HP and IBM - examples given, these are two different chipsets: HP x240 utilizing Intel C610, IBM - x3250 utilizing Intel C226. Because the two systems make use of two different + at hand the Haswell chipset. These chipsets may require updates to + the kernel or other operating system subsystems for proper support. In the HP and + IBM examples given, these are two different chipsets: HP x240 utilizing Intel C610, + IBM x3250 utilizing Intel C226. 
Because the two systems make use of two different chipsets, knowing that the HP system is supported does not allow for any statement about the support status of the IBM system. @@ -286,16 +285,15 @@ The two main goals of the YES Certified Program are: - - Help customers easily identify and purchase hardware solutions that have - been tested for compatibility and are supported in a SUSE environment. - - - - Help Hardware Vendors deliver and market solutions that work well and are - easily supported in a SUSE environment. - - + + Help customers easily identify and purchase hardware solutions that have been + tested for compatibility and are supported in a SUSE environment. + + + Help Hardware Vendors deliver and market solutions that work well and are + easily supported in a SUSE environment. + + YES Certified is for hardware applications. Because these products interact directly with an operating system, SUSE has developed rigorous compatibility tests to ensure very diff --git a/xml/MAIN-SBP-oraclerac.xml b/xml/MAIN-SBP-oraclerac.xml index facced6e9..19f3edc5c 100644 --- a/xml/MAIN-SBP-oraclerac.xml +++ b/xml/MAIN-SBP-oraclerac.xml @@ -4,15 +4,15 @@ %entity; ]> +
+ xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" + xmlns:its="http://www.w3.org/2005/11/its" xml:id="art-sbp-oraclerac" xml:lang="en"> - Oracle RAC on SUSE Linux Enterprise Server 12 SP1 + + Oracle RAC on SUSE Linux Enterprise Server 12 SP1 Installation Guide for x86-64 Architectures - - SUSE Linux Enterprise Server - 12 SP1 + https://github.com/SUSE/suse-best-practices/issues/new @@ -21,68 +21,75 @@ https://github.com/SUSE/suse-best-practices/edit/main/xml/ - SUSE Best Practices - - - 3rd party + Best Practices + + 3rd party - + Installation Integration + Clustering - Oracle RAC on SUSE Linux Enterprise Server 12 SP1 - This document provides the details for installing - Oracle RAC 12.1.0.2.0 on SUSE Linux Enterprise Server 12 SP1. - - SLES - Oracle RAC + Oracle RAC on SUSE Linux Enterprise Server 12 SP1 + This document provides the details for installing Oracle RAC + 12.1.0.2.0 on SUSE Linux Enterprise Server 12 SP1. + Installing Oracle RAC on SLES 12 SP1 + + SUSE Linux Enterprise Server - 2017-01-23 - - SUSE Linux Enterprise Server 12 SP1 - Oracle RAC 12.1.0.2.0 - + SUSE Linux Enterprise Server 12 SP1 + Oracle RAC 12.1.0.2.0 + + - - Chen - Wu - - - ISV Technical Manager - SUSE - + + Chen + Wu + + + ISV Technical Manager + SUSE + - - Arun - Singh - - - ISV Technical Manager - SUSE - + + Arun + Singh + + + ISV Technical Manager + SUSE + - - + - - + + - + - + - - + + + + + + 2017-01-23 + + + + + - 2017-01-23 In database computing, Oracle Real Application Clusters (RAC) provides software @@ -97,25 +104,24 @@ Enterprise Server 12 OS. - Disclaimer: - Documents published as part of the SUSE Best Practices series have been contributed voluntarily - by SUSE employees and third parties. They are meant to serve as examples of how particular - actions can be performed. They have been compiled with utmost attention to detail. However, - this does not guarantee complete accuracy. SUSE cannot verify that actions described in these - documents do what is claimed or whether actions described have unintended consequences. - SUSE LLC, its affiliates, the authors, and the translators may not be held liable for possible errors - or the consequences thereof. - + Disclaimer: Documents published as part of the + SUSE Best Practices series have been contributed voluntarily by SUSE employees and + third parties. They are meant to serve as examples of how particular actions can be + performed. They have been compiled with utmost attention to detail. However, this + does not guarantee complete accuracy. SUSE cannot verify that actions described in + these documents do what is claimed or whether actions described have unintended + consequences. SUSE LLC, its affiliates, the authors, and the translators may not be + held liable for possible errors or the consequences thereof. - + + Introduction This document provides the details for installing Oracle RAC 12.1.0.2.0 on SUSE Linux Enterprise Server 12. For the example at hand, the x86_64 version of both the Oracle Database 12c Enterprise and SUSE Linux Enterprise Server is used. Similar steps apply to other platforms (x86, IA64, etc.). If you encounter any problem or have general - questions, submit your query to - suse-oracle@listx.novell.com. + questions, submit your query to suse-oracle@listx.novell.com. The official Oracle product documentation is available at: @@ -203,8 +209,8 @@ Prerequisites Installing &sls; 12 - Install &sls; 12 on each node of the cluster. 
To do so, follow the instructions in - the official &sls; documentation at Install &sls; 12 on each node of the cluster. To do so, follow the + instructions in the official &sls; documentation at @@ -298,7 +304,8 @@ >Next to continue. - Proceed to Installation Type. + Proceed to Installation + Type.
Installation Type @@ -317,7 +324,8 @@ >Next to continue. - Proceed to Product Languages. + Proceed to Product + Languages.
Product Languages @@ -373,8 +381,8 @@
- Provide the list of nodes with their public host name and virtual - host name, then click Next to + Provide the list of nodes with their public host name and virtual host + name, then click Next to continue.
@@ -592,8 +600,8 @@
Perform the prerequisite check as shown on the screen above. Click - Fix&Check Again to recheck - the system. + Fix&Check Again to + recheck the system.
Prerequisite Checks 2 @@ -943,7 +951,8 @@ Located 3 voting disk(s). click Next to continue. - Proceed to Installation Option. + Proceed to Installation + Option.
Installation Option @@ -962,7 +971,7 @@ Located 3 voting disk(s). Proceed to Grid Installation - Options. + Options.
Grid Installation Options @@ -1000,7 +1009,8 @@ Located 3 voting disk(s). >Next to continue. - Proceed to Product Languages. + Proceed to Product + Languages.
Product Languages @@ -1297,7 +1307,8 @@ Now product-specific root actions will be performed. then click Next to continue. - Define the Database Template. + Define the Database + Template.
Database Template @@ -1341,7 +1352,8 @@ Now product-specific root actions will be performed. - Proceed to Database Placement. + Proceed to Database + Placement.
Database Placement @@ -1360,7 +1372,8 @@ Now product-specific root actions will be performed. continue. - Specify the Management Options. + Specify the Management + Options.
Management Options @@ -1378,7 +1391,8 @@ Now product-specific root actions will be performed. role="strong">Next to continue. - Proceed to Database Credentials. + Proceed to Database + Credentials.
Database Credentials @@ -1396,7 +1410,8 @@ Now product-specific root actions will be performed. Next to continue. - Proceed to Storage Locations. + Proceed to Storage + Locations.
Storage Locations @@ -1428,13 +1443,13 @@ Now product-specific root actions will be performed.
- According to your needs, specify whether to add the schemas to - your database. Click Next to + According to your needs, specify whether to add the schemas to your + database. Click Next to continue.
Proceed to Initialization - Parameters. + Parameters.
Initialization Parameters @@ -1663,18 +1678,18 @@ ora.susedb.db Verify the availability of your Oracle - Enterprise Manager. + Enterprise Manager.
Oracle Enterprise Manager - + - +
diff --git a/xml/MAIN-SBP-oracleweblogic.xml b/xml/MAIN-SBP-oracleweblogic.xml index 622c74719..b6d9e9077 100644 --- a/xml/MAIN-SBP-oracleweblogic.xml +++ b/xml/MAIN-SBP-oracleweblogic.xml @@ -4,45 +4,44 @@ %entity; ]> +
+ xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" + xmlns:its="http://www.w3.org/2005/11/its" xml:id="art-sbp-weblogic-sles12sp1" xml:lang="en"> Oracle WebLogic Server 12cR2 on SUSE Linux Enterprise Server 12 SP1 Installation Guide for x86-64 Architectures - - SUSE Linux Enterprise Server - 12 SP1 https://github.com/SUSE/suse-best-practices/issues/new - Oracle WebLogic Server 12cR2 on SUSE Linux Enterprise Server 12 SP1 + Oracle WebLogic Server 12cR2 on SUSE Linux Enterprise Server 12 SP1 + https://github.com/SUSE/suse-best-practices/edit/main/xml/ - SUSE Best Practices - - - 3rd party + Best Practices + + 3rd party - + Installation Integration + Web - Oracle WebLogic Server 12cR2 on SUSE Linux Enterprise Server 12 SP1 - This document provides the details for installing - Oracle WebLogic Server 12cR2 on SUSE Linux Enterprise Server 12 SP1. - - SLES - Oracle WebLogic Server + Oracle WebLogic Server 12cR2 on SUSE Linux Enterprise Server 12 + SP1 + This document provides the details for installing Oracle WebLogic + Server 12cR2 on SUSE Linux Enterprise Server 12 SP1. + Installing Oracle WebLogic Server 12cR2 on SLES 12 + + SUSE Linux Enterprise Server - 2017-01-23 - SUSE Linux Enterprise Server 12 SP1 - Oracle WebLogic Server 12cR2 + SUSE Linux Enterprise Server 12 SP1 + Oracle WebLogic Server 12cR2 @@ -74,17 +73,25 @@ - - - - - - - - + + + + + + + + + + + + 2017-01-23 + + + + + - 2017-01-23 Oracle WebLogic Server 12c R2 is a reliable application server for building and @@ -101,15 +108,14 @@ WebLogic Server 12cR2 on SUSE Linux Enterprise Server 12 SP1. - Disclaimer: - Documents published as part of the SUSE Best Practices series have been contributed voluntarily - by SUSE employees and third parties. They are meant to serve as examples of how particular - actions can be performed. They have been compiled with utmost attention to detail. However, - this does not guarantee complete accuracy. SUSE cannot verify that actions described in these - documents do what is claimed or whether actions described have unintended consequences. - SUSE LLC, its affiliates, the authors, and the translators may not be held liable for possible errors - or the consequences thereof. - + Disclaimer: Documents published as part of the + SUSE Best Practices series have been contributed voluntarily by SUSE employees and + third parties. They are meant to serve as examples of how particular actions can be + performed. They have been compiled with utmost attention to detail. However, this + does not guarantee complete accuracy. SUSE cannot verify that actions described in + these documents do what is claimed or whether actions described have unintended + consequences. SUSE LLC, its affiliates, the authors, and the translators may not be + held liable for possible errors or the consequences thereof. @@ -260,7 +266,7 @@ In YaST, select the patterns you need. Make sure you select the patterns and packages required to run Oracle products (for example - orarun). + orarun).
Installation of &sls;- 2 @@ -833,7 +839,8 @@ Oracle WebLogic Server Administration Console - + Performance Analysis, Tuning and Tools on SUSE Linux Enterprise Products - - SUSE Linux Enterprise - - 12, 15 + https://github.com/SUSE/suse-best-practices/issues/new @@ -25,23 +23,28 @@ https://github.com/SUSE/suse-best-practices/edit/main/xml/ - SUSE Best Practices - - + Best Practices + Tuning & Performance - + Configuration - Performance Analysis, Tuning and Tools on SUSE Linux + Performance Analysis, Tuning and Tools on SUSE Linux Enterprise Products - How to configure and tune a SUSE Linux Enterprise-based - system to get the best possible performance out of it - - SLE + How to configure and tune a SUSE Linux Enterprise-based + system to get the best possible performance out of it + Getting best possible performance from SLE products + + SUSE Linux Enterprise + SUSE Linux Enterprise + SUSE Linux Enterprise + SUSE Linux Enterprise + SUSE Linux Enterprise + SUSE Linux Enterprise - 2019-08-19 - SUSE Linux Enterprise + + SUSE Linux Enterprise 12 and 15 @@ -54,17 +57,7 @@ SUSE - - + Best Practices + Storage - Tuning & Performance - + Implementation Configuration - Rook Best Practices for Running Ceph on Kubernetes - Overview of best practices and tested patterns of using Rook v1.3 - to manage a Ceph Octopus cluster running in Kubernetes - + Rook Best Practices for Running Ceph on Kubernetes + Overview of best practices and tested patterns of using + Rook v1.3 to manage a Ceph Octopus cluster running in Kubernetes + Managing a Ceph Octopus cluster in Kubernetes with Rook + - Ceph Octopus v15 - Rook v1.3 - Kubernetes 1.17 + Ceph Octopus v15 + Rook v1.3 + Kubernetes 1.17 - + - - Blaine - Gardner - - - Senior Software Developer - SUSE - + + Blaine + Gardner + + + Senior Software Developer + SUSE + - - Alexandra - Settle - - - Senior Information Developer - SUSE - + + Alexandra + Settle + + + Senior Information Developer + SUSE + - - - - - - - - - - - - - 2020-05-27 + + + + + + + + + + + + + + + + 2020-05-27 + + + + + + - The document at hand provides an overview of the best practices and tested - patterns of using Rook v1.3 to manage your Ceph Octopus cluster running in - Kubernetes. + The document at hand provides an overview of the best practices and + tested patterns of using Rook v1.3 to manage your Ceph Octopus + cluster running in Kubernetes. - Disclaimer: - Documents published as part of the SUSE Best Practices series have been contributed voluntarily - by SUSE employees and third parties. They are meant to serve as examples of how particular - actions can be performed. They have been compiled with utmost attention to detail. However, - this does not guarantee complete accuracy. SUSE cannot verify that actions described in these - documents do what is claimed or whether actions described have unintended consequences. - SUSE LLC, its affiliates, the authors, and the translators may not be held liable for possible errors - or the consequences thereof. - + Disclaimer: Documents published + as part of the SUSE Best Practices series have been contributed + voluntarily by SUSE employees and third parties. They are meant to + serve as examples of how particular actions can be performed. They + have been compiled with utmost attention to detail. However, this + does not guarantee complete accuracy. SUSE cannot verify that + actions described in these documents do what is claimed or whether + actions described have unintended consequences. 
SUSE LLC, its + affiliates, the authors, and the translators may not be held liable + for possible errors or the consequences thereof. @@ -108,26 +118,30 @@ Overview - Ceph and Kubernetes are both complex tools and harmonizing the interactions between - the two can be daunting. This is especially true for users who are new to operating - either system, prompting questions such as: + Ceph and Kubernetes are both complex tools and harmonizing the interactions + between the two can be daunting. This is especially true for users who are + new to operating either system, prompting questions such as: How can I restrict Ceph to a portion of my nodes? - Can I set Kubernetes CPU or RAM limits for my Ceph daemons? + Can I set Kubernetes CPU or RAM limits for my Ceph daemons? + - What are some ways to get better performance from my cluster? + What are some ways to get better performance from my cluster? + - This document covers tested patterns and best practices to answer these questions and - more. Our examples will help you configure and manage your Ceph cluster running in - Kubernetes to meet your needs. The following examples and advice are based on Ceph - Octopus (v15) with Rook v1.3 running in a Kubernetes 1.17 cluster. - This is a moderately advanced topic, so basic experience with Rook is recommended. - Before you begin, ensure you have the following requisite knowledge: + This document covers tested patterns and best practices to answer these + questions and more. Our examples will help you configure and manage your + Ceph cluster running in Kubernetes to meet your needs. The following + examples and advice are based on Ceph Octopus (v15) with Rook v1.3 running + in a Kubernetes 1.17 cluster. + This is a moderately advanced topic, so basic experience with Rook is + recommended. Before you begin, ensure you have the following requisite + knowledge: Basics of Kubernetes @@ -168,38 +182,43 @@ Ceph components and daemons, basic Ceph configuration - Rook basics and how to install Rook-Ceph. For more information see + Rook basics and how to install Rook-Ceph. For more + information see - In places, we will give examples that describe an imaginary data center. This data - center is hypothetical, and it will focus on the Ceph- and Rook-centric elements and - ignore user applications. - Our example data center has two rooms for data storage. A properly fault tolerant - Ceph cluster should have at least three monitor (MON) nodes. These should be spread - across fault-tolerant rooms if possible. The example will have a separate failure domain - for the third monitor node. As such, our hypothetical data center has two rooms and one - failure domain, with the following configuration: + In places, we will give examples that describe an imaginary data center. This + data center is hypothetical, and it will focus on the Ceph- and Rook-centric + elements and ignore user applications. + Our example data center has two rooms for data storage. A properly fault + tolerant Ceph cluster should have at least three monitor (MON) nodes. These + should be spread across fault-tolerant rooms if possible. The example will + have a separate failure domain for the third monitor node. As such, our + hypothetical data center has two rooms and one failure domain, with the + following configuration: - The failure domain is small and can only to be used for the third Ceph MON; - it does not have space for storage nodes. 
+ The failure domain is small and can only be used for the
+ third Ceph MON; it does not have space for storage nodes.
+
- Eight OSD nodes provide a good amount of data safety without requiring too
- many nodes.
+ Eight OSD nodes provide a good amount of data safety without
+ requiring too many nodes.
- These eight nodes should be equally separated — four to each data center
- room.
+ These eight nodes should be equally separated — four to each
+ data center room.
- The four nodes are separated in each room into two racks.
+ The four nodes are separated in each room into two racks.
+
- In the event of a MON node failure, ensure that you can run MONs on each rack
- for failure scenarios.
+ In the event of a MON node failure, ensure that you can run
+ MONs on each rack for failure scenarios.
The data center looks as follows:
@@ -216,125 +235,141 @@
- Now we will dig a little deeper and talk about the actual disks used for Rook and - Ceph storage. To ensure we are following known Ceph best practices for this data center - setup, ensure that MON storage goes on the SSDs. Because each rack should be able to run - a Ceph MON, one of the nodes in each rack will have an SSD that is usable for MON - storage. Additionally, all nodes in all racks (except in the failure domain) will have - disks for OSD storage. This will look like the following: + Now we will dig a little deeper and talk about the actual disks used for Rook + and Ceph storage. To ensure we are following known Ceph best practices for + this data center setup, ensure that MON storage goes on the SSDs. Because + each rack should be able to run a Ceph MON, one of the nodes in each rack + will have an SSD that is usable for MON storage. Additionally, all nodes in + all racks (except in the failure domain) will have disks for OSD storage. + This will look like the following:
Example Datacenter - Disks - + - +
Diagrams - Refer to these diagrams when we discuss the example data center below. + Refer to these diagrams when we discuss the example data center + below. Introduction - Ceph and Kubernetes both have their own well-known and established best practices. - Rook bridges the gap between Ceph and Kubernetes, putting it in a unique domain with its - own best practices to follow. This document specifically covers best practice for - running Ceph on Kubernetes with Rook. Because Rook augments on top of Kubernetes, it has - different ways of meeting Ceph and Kubernetes best practices. This is in comparison to - the bare metal version of each. Out of the box, Rook is predominantly a default Ceph - cluster. The Ceph cluster needs tuning to meet user workloads, and Rook does not absolve - the user from planning out their production storage cluster beforehand. - For the purpose of this document, we will consider two simplified use cases to help - us make informed decisions about Rook and Ceph: + Ceph and Kubernetes both have their own well-known and established best + practices. Rook bridges the gap between Ceph and Kubernetes, putting it in a + unique domain with its own best practices to follow. This document + specifically covers best practice for running Ceph on Kubernetes with Rook. + Because Rook augments on top of Kubernetes, it has different ways of meeting + Ceph and Kubernetes best practices. This is in comparison to the bare metal + version of each. Out of the box, Rook is predominantly a default Ceph + cluster. The Ceph cluster needs tuning to meet user workloads, and Rook does + not absolve the user from planning out their production storage cluster + beforehand. + For the purpose of this document, we will consider two simplified use cases + to help us make informed decisions about Rook and Ceph: - Co-located: User applications co-exist on nodes running Ceph + Co-located: User applications co-exist on nodes running Ceph + - Disaggregated: Ceph nodes are totally separated from user applications - + Disaggregated: Ceph nodes are totally separated from user + applications General Best Practices - This chapter provides an outline of a series of generalized recommendations for best - practices: + This chapter provides an outline of a series of generalized recommendations + for best practices: - Ceph monitors are more stable on fast storage (SSD-class or better) according - to Ceph best practices. In Rook, this means that the - dataDirHostPath location in the - cluster.yaml should be backed by SSD or better on MON - nodes. + Ceph monitors are more stable on fast storage (SSD-class or + better) according to Ceph best practices. In Rook, this + means that the dataDirHostPath location + in the cluster.yaml should be backed by + SSD or better on MON nodes. - Raise the Rook log level to for initial deployment and - for upgrades, as it will help with debugging problems that are more likely to - occur at those times. + Raise the Rook log level to for + initial deployment and for upgrades, as it will help with + debugging problems that are more likely to occur at those + times. Ensure that the ROOK_LOG_LEVEL in - operator.yaml equals . + operator.yaml equals + . - The Kubernetes CSI driver is the preferred default but ensure that in - operator.yaml the - remains set to - . This is because the FlexVolume driver is in - sustaining mode, is not getting non-priority bug fixes, and will soon be - deprecated. 
+ The Kubernetes CSI driver is the preferred default but ensure + that in operator.yaml the + remains set + to . This is because the FlexVolume + driver is in sustaining mode, is not getting non-priority + bug fixes, and will soon be deprecated. - Ceph’s placement group (PG) auto-scaler module makes it unnecessary to - manually manage PGs. We recommend you always set this to - , unless you have some need to manage PGs manually. - In cluster.yaml, enable the + Ceph’s placement group (PG) auto-scaler module makes it + unnecessary to manually manage PGs. We recommend you always + set this to , unless you have some + need to manage PGs manually. In + cluster.yaml, enable the pg_autoscaler MGR module. - Rook has the capability to auto-remove Deployments for OSDs which are kicked - out of a Ceph cluster. This is enabled by: - removeOSDsIfOutAndSafeToRemove: true. This means there is - less user OSD maintenance and no need to delete Deployments for OSDs that have - been kicked out. Rook will automatically clean up the cluster by removing OSD - Pods if the OSDs are no longer holding Ceph data. However, keep in mind that - this can reduce the visibility of failures from Kubernetes Pod and Pod - Controller views. You can optionally set - removeOSDsIfOutAndSafeToRemove to - if need be, such as if a Kubernetes administrator wants to see disk failures via - a Pod overview. + Rook has the capability to auto-remove Deployments for OSDs + which are kicked out of a Ceph cluster. This is enabled by: + removeOSDsIfOutAndSafeToRemove: + true. This means there is less user OSD + maintenance and no need to delete Deployments for OSDs that + have been kicked out. Rook will automatically clean up the + cluster by removing OSD Pods if the OSDs are no longer + holding Ceph data. However, keep in mind that this can + reduce the visibility of failures from Kubernetes Pod and + Pod Controller views. You can optionally set + removeOSDsIfOutAndSafeToRemove to + if need be, such as if a + Kubernetes administrator wants to see disk failures via a + Pod overview. - Configure Ceph using the central configuration database when possible. This - allows for more runtime configuration flexibility. Do this using the - ceph config set commands from Rook's toolbox. Only use + Configure Ceph using the central configuration database when + possible. This allows for more runtime configuration + flexibility. Do this using the ceph config + set commands from Rook's toolbox. Only use Rook’s provided ceph.conf to override - ConfigMap when it is required. + ConfigMap when it is required. + Limiting Ceph to Specific Nodes - One of the more common setups you may want for your Rook-Ceph cluster is to limit - Ceph to a specific set of nodes. Even for co-located use cases, you could have valid - reasons why you must not (or do not want to) use some nodes for storage. This is - applicable for both co-located and disaggregated use cases. To limit Ceph to specific - nodes, we can Label Kubernetes Nodes and configure Rook to have Affinity (as a hard - preference). - Label the desired storage nodes with storage-node=true. To run - Rook and ceph daemons on labeled nodes, we will configure Rook Affinities in both the - Rook Operator manifest (operator.yaml) and the Ceph cluster - manifest (cluster.yaml). + One of the more common setups you may want for your Rook-Ceph cluster is to + limit Ceph to a specific set of nodes. Even for co-located use cases, you + could have valid reasons why you must not (or do not want to) use some nodes + for storage. 
This is applicable for both co-located and disaggregated use + cases. To limit Ceph to specific nodes, we can Label Kubernetes Nodes and + configure Rook to have Affinity (as a hard preference). + Label the desired storage nodes with storage-node=true. To + run Rook and ceph daemons on labeled nodes, we will configure Rook + Affinities in both the Rook Operator manifest + (operator.yaml) and the Ceph cluster manifest + (cluster.yaml). operator.yaml @@ -345,9 +380,10 @@ AGENT_NODE_AFFINITY:“storage-node=true” DISCOVER_AGENT_NODE_AFFINITY:“storage-node=true” - For Rook daemons and the CSI driver daemons, adjust the Operator manifest. The CSI - Provisioner is best started on the same nodes as the other Ceph daemons. As above, add - affinity for all storage nodes in cluster.yaml: + For Rook daemons and the CSI driver daemons, adjust the Operator manifest. The + CSI Provisioner is best started on the same nodes as the other Ceph daemons. + As above, add affinity for all storage nodes in + cluster.yaml: placement: @@ -365,15 +401,16 @@ placement: Segregating Ceph From User Applications - You could also have reason to totally separate Rook and Ceph nodes from application - nodes. This falls under the disaggregated use-case, and it is a more traditional way to - deploy storage. In this case, we still need to as described in the section above, and we - also need some additional settings. - To segregate Ceph from user applications, we will also label all non-storage nodes - with storage-node=false. The CSI plugin pods must run where user - applications run and not where Rook or Ceph pods are run. Add a CSI plugin Affinity for - all non-storage nodes in the Rook operator configuration. + You could also have reason to totally separate Rook and Ceph nodes from + application nodes. This falls under the disaggregated use-case, and it is a + more traditional way to deploy storage. In this case, we still need to as described in the section + above, and we also need some additional settings. + To segregate Ceph from user applications, we will also label all non-storage + nodes with storage-node=false. The CSI plugin pods must + run where user applications run and not where Rook or Ceph pods are run. Add + a CSI plugin Affinity for all non-storage nodes in the Rook operator + configuration. CSI_PLUGIN_NODE_AFFINITY:"storage-node=false" @@ -381,8 +418,9 @@ CSI_PLUGIN_NODE_AFFINITY:"storage-node=false" In addition to that, we will set Kubernetes Node Taints and configure Rook Tolerations. For example, Taint the storage nodes with - storage-node=true:NoSchedule and then add the Tolerations below - to the Rook operator in operator.yaml: + storage-node=true:NoSchedule and then add the + Tolerations below to the Rook operator in + operator.yaml: AGENT_TOLERATIONS:| @@ -402,7 +440,8 @@ CSI_PROVISIONER_TOLERATIONS:| operator:Exists - Also add a Toleration for all Ceph daemon Pods in cluster.yaml: + Also add a Toleration for all Ceph daemon Pods in + cluster.yaml: placement: @@ -415,14 +454,16 @@ placement: Setting Ceph CRUSH Map via Kubernetes Node Labels - A feature that was implemented early in Rook’s development is to set Ceph’s CRUSH map - via Kubernetes Node labels. For our example data center, we recommend labelling Nodes - with , , and . - As a note, Rook will only set a CRUSH map on initial creation for each OSD associated - with the node. It will not alter the CRUSH map if labels are modified later. Therefore, - modifying the CRUSH location of an OSD after Rook has created it must be done manually. 
- For example, in our hypothetical data center, labeling nodes will look like the - following: + A feature that was implemented early in Rook’s development is to set Ceph’s + CRUSH map via Kubernetes Node labels. For our example data center, we + recommend labelling Nodes with , , + and . + As a note, Rook will only set a CRUSH map on initial creation for each OSD + associated with the node. It will not alter the CRUSH map if labels are + modified later. Therefore, modifying the CRUSH location of an OSD after Rook + has created it must be done manually. + For example, in our hypothetical data center, labeling nodes will look like + the following: # -- room-1 -- @@ -474,68 +515,80 @@ kubectl label node node-f-1-1 topology.rook.io/chassis=node-f-1-1 Ceph MONS Using the hypothetical data center described in the , this section will look at planning the nodes - where Ceph daemons are going to run. - Ceph MON scheduling is one of the more detailed, and more important, things to - understand about maintaining a healthy Ceph cluster. The goals we will target in - this section can be summarized as: Avoid risky co-location scenarios, but - allow them if there are no other options, to still have as much redundancy as - possible. + linkend="sec-rook-bp-overview"/>, this section will look at + planning the nodes where Ceph daemons are going to run. + Ceph MON scheduling is one of the more detailed, and more important, + things to understand about maintaining a healthy Ceph cluster. The + goals we will target in this section can be summarized as: + Avoid risky co-location scenarios, but allow them if + there are no other options, to still have as much redundancy + as possible. This can lead us to the following specific goals: - Allow MONs to be in the same room if a room is unavailable. + Allow MONs to be in the same room if a room is + unavailable. - Allow MONs to be in the same rack if no other racks in the room are - available. + Allow MONs to be in the same rack if no other racks + in the room are available. - Allow MONs to be on the same host only if no other hosts are available. - We must allow this specifically in the cluster configuration - cluster.yaml by setting - . + Allow MONs to be on the same host only if no other + hosts are available. + We must allow this specifically in the cluster + configuration cluster.yaml by + setting . - This cannot be set to for clusters using host - networking. + This cannot be set to + for clusters using host networking. Topology Labels - We recommend using the same topology labels used for informing the CRUSH map - here for convenience. + We recommend using the same topology labels used for + informing the CRUSH map here for convenience. - Because of our MON SSD availability, in our hypothetical data center, we only - want monitors to be able to run where shown below in green. We need to plan for - monitors to fail over, and so we will make two nodes explicitly available for this - scenario. In our example, we want any node with a MON SSD to be a MON failover - location in emergencies, for maximum cluster health. This is highlighted in orange - below. This will give us the most redundancy under failure conditions. + Because of our MON SSD availability, in our hypothetical data center, + we only want monitors to be able to run where shown below in green. + We need to plan for monitors to fail over, and so we will make two + nodes explicitly available for this scenario. 
In our example, we + want any node with a MON SSD to be a MON failover location in + emergencies, for maximum cluster health. This is highlighted in + orange below. This will give us the most redundancy under failure + conditions.
Example Datacenter - MON Failover - + - +
- To implement this in Rook, ensure that Rook will only schedule MONs on nodes with - MON SSDs. There is a required Affinity for those nodes, which can be accomplished by - applying a label to nodes with SSDs for Ceph - MONs. Note that the MON section’s nodeAffinity takes precedence - over the all section’s nodeAffinity. Make sure - that you re-specify the rules from the all section to ensure Ceph MONs maintain - affinity only for storage nodes. + To implement this in Rook, ensure that Rook will only schedule MONs + on nodes with MON SSDs. There is a required Affinity for those + nodes, which can be accomplished by applying a + label to nodes with SSDs + for Ceph MONs. Note that the MON section’s + nodeAffinity takes precedence over the + all section’s + nodeAffinity. Make sure that you + re-specify the rules from the all section to ensure Ceph MONs + maintain affinity only for storage nodes. nodeAffinity:​ @@ -553,17 +606,19 @@ nodeAffinity:​ -"true" - We want to schedule MONs so they are spread across failure domains whenever - possible. We will accomplish this by applying Anti-affinity between MON pods. Rook - labels all MON pods app=rook-ceph-mon, and that is what will be - used to spread the monitors apart. There is one rule for rooms, and one for racks if - a room is down. We want to ensure a higher weight is given to riskier co-location - scenarios: - We do not recommend running MONs on the same node unless absolutely necessary. - Rook automatically applies an Anti-affinity with medium-level weight. However, this - might not be appropriate for all scenarios. For our scenario, we only want - node-level co-location in the worst of failure scenarios, so we want to apply the - highest weight Anti-affinity for nodes. + We want to schedule MONs so they are spread across failure domains + whenever possible. We will accomplish this by applying Anti-affinity + between MON pods. Rook labels all MON pods + app=rook-ceph-mon, and that is what will + be used to spread the monitors apart. There is one rule for rooms, + and one for racks if a room is down. We want to ensure a higher + weight is given to riskier co-location scenarios: + We do not recommend running MONs on the same node unless absolutely + necessary. Rook automatically applies an Anti-affinity with + medium-level weight. However, this might not be appropriate for all + scenarios. For our scenario, we only want node-level co-location in + the worst of failure scenarios, so we want to apply the highest + weight Anti-affinity for nodes. cluster.yaml: @@ -593,28 +648,33 @@ placement: - If hostNetworking is enabled, you cannot co-locate MONs, - because the ports will collide on nodes. To enforce this, if host networking is - enabled, Rook will automatically set a - requiredDuringSchedulingIgnoredDuringExecution Pod - Anti-affinity rule. + If hostNetworking is enabled, you cannot + co-locate MONs, because the ports will collide on nodes. To + enforce this, if host networking is enabled, Rook will + automatically set a + requiredDuringSchedulingIgnoredDuringExecution + Pod Anti-affinity rule.
Ceph OSDS - There is a lot of planning that goes into the placement of monitors, and this is - also true for OSDs. Fortunately, because the planning is already done with the - monitors and because we have discussed the methods, it is quite a bit easier to plan - for the OSDs. + There is a lot of planning that goes into the placement of monitors, + and this is also true for OSDs. Fortunately, because the planning is + already done with the monitors and because we have discussed the + methods, it is quite a bit easier to plan for the OSDs.
Example Datacenter - OSD Placement - + - +
@@ -623,29 +683,33 @@ placement: - Apply Kubernetes Node labels and tell Rook to look for those labels. - Specify in the cluster.yaml - storage:useAllNodes true and specify - osd - nodeAffinity using ceph-osd=true label - using the same Affinity methods we used for MONs. + Apply Kubernetes Node labels and tell Rook to look + for those labels. Specify in the + cluster.yaml + storage:useAllNodes true and + specify osd + nodeAffinity using + ceph-osd=true label using the + same Affinity methods we used for MONs. - Specify node names in the CephCluster definition - (cluster.yaml) individually in - storage:nodes. + Specify node names in the + CephCluster definition + (cluster.yaml) individually + in storage:nodes. - Choosing which option to use depends on your desired management strategy. There - is no single strategy we would recommend over any other. + Choosing which option to use depends on your desired management + strategy. There is no single strategy we would recommend over any + other.
Other Ceph Daemons - Placing the other Ceph daemons follows the same logic and methods as MONs and - OSDs: MGR, MDS, RGW, NFS-Ganesha, and RBD mirroring daemons can all be placed as - desired. For more information, see Placing the other Ceph daemons follows the same logic and methods as + MONs and OSDs: MGR, MDS, RGW, NFS-Ganesha, and RBD mirroring daemons + can all be placed as desired. For more information, see @@ -654,38 +718,42 @@ placement:
Hardware Resource Requirements and Requests - Kubernetes can watch the system resources available on nodes and can help schedule - applications—such as the Ceph daemons—automatically. Kubernetes uses Resource Requests - to do this. For Rook, we are notably concerned about Kubernetes' scheduling of Ceph - daemons. + Kubernetes can watch the system resources available on nodes and can help + schedule applications—such as the Ceph daemons—automatically. Kubernetes + uses Resource Requests to do this. For Rook, we are notably concerned about + Kubernetes' scheduling of Ceph daemons. Kubernetes has two Resource Request types: Requests and - Limits. Requests govern scheduling, and - Limits instruct Kubernetes to kill and restart application Pods - when they are over-consuming given Limits. + Limits. Requests govern + scheduling, and Limits instruct Kubernetes to kill and + restart application Pods when they are over-consuming given + Limits. When there are Ceph hardware requirements, treat those requirements as - Requests, not Limits. This is because all - Ceph daemons are critical for storage, and it is best to never set Resource - Limits for Ceph Pods. If Ceph Daemons are over-consuming - Requests, there is likely a failure scenario happening. In a - failure scenario, killing a daemon beyond a Limit is likely to make - an already bad situation worse. This could create a “thundering herds” situation where + Requests, not Limits. This + is because all Ceph daemons are critical for storage, and it is best to + never set Resource Limits for Ceph Pods. If Ceph + Daemons are over-consuming Requests, there is likely a + failure scenario happening. In a failure scenario, killing a daemon beyond a + Limit is likely to make an already bad + situation worse. This could create a “thundering herds” situation where failures synchronize and magnify. - Generally, storage is given minimum resource guarantees, and other applications - should be limited so as not to interfere. This guideline already applies to bare-metal - storage deployments, not only for Kubernetes. - As you read on, it is important to note that all recommendations can be affected by - how Ceph daemons are configured. For example, any configuration regarding caching. Keep - in mind that individual configurations are out of scope for this document. + Generally, storage is given minimum resource guarantees, and other + applications should be limited so as not to interfere. This guideline + already applies to bare-metal storage deployments, not only for Kubernetes. + As you read on, it is important to note that all recommendations can be + affected by how Ceph daemons are configured. For example, any configuration + regarding caching. Keep in mind that individual configurations are out of + scope for this document. Resource Requests - MON/MGR - Resource Requests for MONs and MGRs are straightforward. MONs try to keep memory - usage to around 1 GB — however, that can expand under failure scenarios. We - recommend 4 GB RAM and 4 CPU cores. - Recommendations for MGR nodes are harder to make, since enabling more modules - means higher usage. We recommend starting with 2 GB RAM and 2 CPU cores for MGRs. It - is a good idea to look at the actual usage for deployments and do not forget to - consider usage during failure scenarios. + Resource Requests for MONs and MGRs are straightforward. MONs try to + keep memory usage to around 1 GB — however, that can expand under + failure scenarios. We recommend 4 GB RAM and 4 CPU cores. 
+ Recommendations for MGR nodes are harder to make, since enabling more + modules means higher usage. We recommend starting with 2 GB RAM and + 2 CPU cores for MGRs. It is a good idea to look at the actual usage + for deployments and do not forget to consider usage during failure + scenarios. MONs: @@ -695,24 +763,28 @@ placement: - Request 4GB RAM (2.5GB minimum) + Request 4GB RAM (2.5GB minimum) + MGR: - Memory will grow the more MGR modules are enabled + Memory will grow the more MGR modules are enabled + - Request 2 GB RAM and 2 CPU cores + Request 2 GB RAM and 2 CPU + cores Resource Requests - OSD CPU - Recommendations and calculations for OSD CPU are straightforward. + Recommendations and calculations for OSD CPU are + straightforward. Hardware recommendations: @@ -729,42 +801,49 @@ placement: Examples: - 8 HDDS journaled to SSD – 10 cores / 8 OSDs = 1.25 cores per OSD + 8 HDDs journaled to SSD – 10 cores / 8 OSDs = 1.25 + cores per OSD - 6 SSDs without journals – 12 cores / 6 OSDs = 2 cores per OSD + 6 SSDs without journals – 12 cores / 6 OSDs = 2 cores + per OSD - 8 SSDs journaled to NVMe – 20 cores / 8 OSDs = 2.5 cores per OSD + 8 SSDs journaled to NVMe – 20 cores / 8 OSDs = 2.5 + cores per OSD - Note that resources are applied cluster-wide to all OSDs. If a cluster contains - multiple OSD types, you must use the highest Requests for the whole cluster. For the - examples below, a mixture of HDDs journaled to SSD and SSDs without journals would - necessitate a Request for 2 cores. + Note that resources are applied cluster-wide to all OSDs. If a + cluster contains multiple OSD types, you must use the highest + Requests for the whole cluster. For the examples above, a mixture of + HDDs journaled to SSD and SSDs without journals would necessitate a + Request for 2 cores. Resource Requests - OSD RAM - There are node hardware recommendations for OSD RAM usage, and this needs to be - translated to RAM requests on a per-OSD basis. The node-level recommendation below - describes osd_memory_target. This is a Ceph configuration that is - described in detail further on. + There are node hardware recommendations for OSD RAM usage, and this + needs to be translated to RAM requests on a per-OSD basis. The + node-level recommendation below describes + osd_memory_target. This is a Ceph + configuration that is described in detail further on. Total RAM required = [number of OSDs] x (1 GB + osd_memory_target) + 16 GB - Ceph OSDs will attempt to keep heap memory usage under a designated target size - set via the osd_memory_target configuration option. Ceph’s - default osd_memory_target is 4GB, and we do not recommend - decreasing the osd_memory_target below 4GB. You may wish to - increase this value to improve overall Ceph read performance by allowing the OSDs to - use more RAM. While the total amount of heap memory mapped by the process should - stay close to this target, there is no guarantee that the kernel will actually + Ceph OSDs will attempt to keep heap memory usage under a designated + target size set via the osd_memory_target + configuration option. Ceph’s default + osd_memory_target is 4GB, and we do not + recommend decreasing the osd_memory_target below + 4GB. You may wish to increase this value to improve overall Ceph + read performance by allowing the OSDs to use more RAM. While the + total amount of heap memory mapped by the process should stay close + to this target, there is no guarantee that the kernel will actually reclaim memory that has been unmapped.
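Because the OSD CPU Request applies cluster-wide, the mixed HDD/SSD example above would be expressed as a single value. The fragment below is an assumed sketch of the relevant part of the CephCluster spec, not part of the patch.

# Sketch only: cluster-wide OSD CPU Request for a mixed cluster
# (HDDs journaled to SSD need 1.25 cores per OSD, plain SSDs need 2 cores per OSD;
#  the Request applies to every OSD, so the higher value wins)
spec:
  resources:
    osd:
      requests:
        cpu: "2"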
- For example, a node hosting 8 OSDs, memory Requests would be - calculated as such: + For example, for a node hosting 8 OSDs, memory + Requests would be calculated as follows: 8 OSDs x (1GB + 4GB) + 16GB = 56GB per node @@ -772,24 +851,28 @@ Total RAM required = [number of OSDs] x (1 GB + osd_memory_target) + 16 GB 56GB / 8 OSDs = 7GB - Ceph has a feature that allows it to set osd_memory_target - automatically when a Rook OSD Resource Request is set. However, Ceph sets this value - 1:1 and does not leave overhead for waiting for the kernel to - free memory. Therefore, we recommend setting osd_memory_target in - Ceph explicitly, even if you wish to use the default value. Set Rook’s OSD resource - requests accordingly and to a higher value than osd_memory_target - by at least an additional 1GB. This is so Kubernetes does not schedule more - applications or Ceph daemons onto a node than the node is likely to have RAM - available for. - OSD RAM Resource Requests come with the same cluster-wide - Resource Requests note as for OSD CPU. Use the highest - Requests for a cluster consisting of multiple different - configurations of OSDs. + Ceph has a feature that allows it to set + osd_memory_target automatically when a + Rook OSD Resource Request is set. However, Ceph sets this value + 1:1 and does not leave overhead for + waiting for the kernel to free memory. Therefore, we recommend + setting osd_memory_target in Ceph explicitly, + even if you wish to use the default value. Set Rook’s OSD resource + requests accordingly and to a higher value than + osd_memory_target by at least an + additional 1GB. This is so Kubernetes does not schedule more + applications or Ceph daemons onto a node than the node is likely to + have RAM available for. + OSD RAM Resource Requests come with the same + cluster-wide Resource Requests note as for OSD + CPU. Use the highest Requests for a cluster + consisting of multiple different configurations of OSDs. Resource Requests - Gateways - For gateways, the best recommendation is to always tune your workload and daemon - configurations. However, we do recommend the following initial configurations: + For gateways, the best recommendation is to always tune your workload + and daemon configurations. However, we do recommend the following + initial configurations: RGWs: @@ -797,15 +880,16 @@ Total RAM required = [number of OSDs] x (1 GB + osd_memory_target) + 16 GB 6-8 CPU cores - 64 GB RAM (32 GB minimum – may only apply to older "civetweb" protocol) - + 64 GB RAM (32 GB minimum – may only apply to older + "civetweb" front end) - The numbers below for RGW assume a lot of clients connecting. Thus they might - not be the best for your scenario. The RAM usage should be lower for the newer - “beast” protocol as opposed to the older “civetweb” protocol. + The numbers given here for RGW assume a lot of clients connecting. + Thus they might not be the best for your scenario. The RAM + usage should be lower for the newer “beast” front end as + opposed to the older “civetweb” front end. @@ -827,8 +911,8 @@ Total RAM required = [number of OSDs] x (1 GB + osd_memory_target) + 16 GB 6-8 CPU cores (untested, high estimate) - 4GB RAM for default settings (settings hardcoded in Rook presently) - + 4GB RAM for default settings (settings hardcoded in + Rook presently) @@ -840,32 +924,34 @@ Total RAM required = [number of OSDs] x (1 GB + osd_memory_target) + 16 GB low-hanging-fruit recommendations. - Not all of these will be right for all clusters or workloads.
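The two steps described above, setting osd_memory_target explicitly and requesting more RAM per OSD Pod than the target, can be sketched as follows. This assumes the rook-config-override ConfigMap in the default rook-ceph namespace is used for central Ceph configuration and reuses the 7 GB per-OSD figure from the worked example; it is not part of the patch.

# Sketch only: pin osd_memory_target to its 4 GiB default explicitly
apiVersion: v1
kind: ConfigMap
metadata:
  name: rook-config-override
  namespace: rook-ceph          # assumes the default Rook namespace
data:
  config: |
    [osd]
    osd_memory_target = 4294967296   # 4 GiB
---
# CephCluster fragment (other required fields omitted): request more RAM per OSD
# than osd_memory_target, here 56 GB per 8-OSD node / 8 OSDs = 7 GB
spec:
  resources:
    osd:
      requests:
        memory: "7Gi"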
Always performance - test and use your best judgment. + Not all of these will be right for all clusters or workloads. Always + performance test and use your best judgment. - You can gain performance by using a CNI plugin with an accelerated network - stack. For example, Cilium uses eBPF to improve performance over some other CNI - plugins. + You can gain performance by using a CNI plugin with an + accelerated network stack. For example, Cilium uses eBPF to + improve performance over some other CNI plugins. - Enable host networking to improve network performance. Notably, this provides - lower, more stable latency. This does, however, step outside of Kubernetes’ - network security domain. In cluster.yaml set + Enable host networking to improve network performance. + Notably, this provides lower, more stable latency. This + does, however, step outside of Kubernetes’ network security + domain. In cluster.yaml set network:provider:host. - Use jumbo frames for your networking. This can be applied to both host - networking and the CNI plugin. + Use jumbo frames for your networking. This can be applied to + both host networking and the CNI plugin. - For performance-sensitive deployments, ensure Ceph OSDs always get the - performance they need by not allowing other Ceph daemons or user applications to - run on OSD nodes. Co-locating MONs and MGRs with OSDs can still be done fairly - safely as long as there are enough hardware resources to also include monitors - and managers. + For performance-sensitive deployments, ensure Ceph OSDs + always get the performance they need by not allowing other + Ceph daemons or user applications to run on OSD nodes. + Co-locating MONs and MGRs with OSDs can still be done fairly + safely as long as there are enough hardware resources to + also include monitors and managers. diff --git a/xml/MAIN-SBP-rpiquick-SLES12SP3.xml b/xml/MAIN-SBP-rpiquick-SLES12SP3.xml index a7b001b87..0770314ae 100644 --- a/xml/MAIN-SBP-rpiquick-SLES12SP3.xml +++ b/xml/MAIN-SBP-rpiquick-SLES12SP3.xml @@ -11,15 +11,13 @@
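Relating to the performance notes above (and unrelated to the Raspberry Pi metadata hunks that follow), the host networking recommendation maps to a small cluster.yaml fragment. This is an assumed sketch, not part of the patch.

# Sketch only: enable host networking in cluster.yaml
spec:
  network:
    provider: host   # lower, more stable latency, but outside Kubernetes' network policy domain
# Jumbo frames (for example MTU 9000) are configured on the host NICs or in the
# CNI plugin, not in this manifest.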
- - Introduction to &productname; for ARM on the &rpi; - - &productname; for ARM 12 SP3 + https://github.com/SUSE/suse-best-practices/issues/new @@ -29,29 +27,25 @@ https://github.com/SUSE/suse-best-practices/edit/main/xml/ - - SUSE Best Practices - - + Best Practices + Systems Management - 3rd Party - + Installation Configuration + Integration - Introduction to SUSE Linux Enterprise Server for ARM on the Raspberry Pi - This document provides an overview of - SUSE Linux Enterprise Server for ARM on the Raspberry Pi platform and - guides through the setup procedure. - - SLES - RasPi + Introduction to SUSE Linux Enterprise Server for ARM on the Raspberry Pi + This document provides an overview of + SUSE Linux Enterprise Server for ARM on the Raspberry Pi and guides through the setup procedure. + Setting up SLES for ARM on Raspberry Pi + + SUSE Linux Enterprise Server - 2018-04-09 - SUSE Linux Enterprise Server 12 SP3 - Raspberry Pi 3 Model B + SUSE Linux Enterprise Server 12 SP3 + Raspberry Pi 3 Model B @@ -93,8 +87,16 @@ - 2018-04-09 - + + + 2018-04-09 + + + + + + + This guide contains an overview of &productname; for ARM diff --git a/xml/MAIN-SBP-susemanager.xml b/xml/MAIN-SBP-susemanager.xml index cf96a5793..1f5870dea 100644 --- a/xml/MAIN-SBP-susemanager.xml +++ b/xml/MAIN-SBP-susemanager.xml @@ -4,17 +4,18 @@ %entity; ]> +
+ Advanced Patch Lifecycle Management with SUSE Manager Methods and approaches for managing updates in multi-landscape environments - - SUSE Manager - + https://github.com/SUSE/suse-best-practices/issues/new @@ -24,25 +25,28 @@ https://github.com/SUSE/suse-best-practices/edit/main/xml/ - - SUSE Best Practices - - + Best Practices + Systems Management - + Upgrade & Update + Configuration - Advanced Patch Lifecycle Management with SUSE Manager - How to set up and configure a SUSE Manager - implementation to enable companies in the delivery of often requested features - - SUMA + Advanced Patch Lifecycle Management with SUSE Manager + How to set up and configure a SUSE Manager + implementation to enable companies in the delivery of often requested features + Advanced patch lifecycle management with SUMA + + SUSE Manager + SUSE Manager + SUSE Manager + SUSE Manager + SUSE Manager + SUSE Manager - 2018-07-11 - SUSE Manager - + SUSE Manager @@ -55,17 +59,7 @@ SUSE -