Merge branch 'main' into update_cloud_resources
frosty-geek authored Oct 8, 2024
2 parents d760989 + 6e915da commit c8145fb
Showing 13 changed files with 557 additions and 196 deletions.
9 changes: 5 additions & 4 deletions .github/workflows/build.yml
@@ -22,7 +22,8 @@ jobs:
key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
restore-keys: |
${{ runner.os }}-node-
- name: Install dependencies
run: npm install
- name: build page
run: npm run build
- name: Install dependencies and build page
run: |
npm ci
npm run build
1 change: 1 addition & 0 deletions .gitignore
@@ -19,6 +19,7 @@
/standards/*/*.md
/standards/*/*.mdx
/standards/scs-*.yaml
/user-docs/application-examples

# Dependencies
node_modules
2 changes: 1 addition & 1 deletion .markdownlint-cli2.jsonc
@@ -55,6 +55,6 @@
"markdownlint-rule-search-replace",
"markdownlint-rule-relative-links"
],
"ignores": ["node_modules", ".github", "docs"],
"ignores": ["node_modules", ".github", "docs", "standards"],
"globs": ["**/*.{md}"]
}
4 changes: 2 additions & 2 deletions community/contribute/adding-docs-guide.md
@@ -23,15 +23,15 @@ Your repository containing the documentation has to...

The documentation files have to be in markdown format and...

- comply with the [SCS licensing guidelines](https://github.com/SovereignCloudStack/docs/blob/main/community/github/dco-and-licenses.md)
- comply with the [SCS licensing guidelines](https://github.com/SovereignCloudStack/docs/blob/main/community/license-considerations.md)
- match our
- [markdown file structure guideline](https://github.com/SovereignCloudStack/docs/blob/main/community/contribute/doc-files-structure-guide.md)
- linting Rules
- [styleguide](https://github.com/SovereignCloudStack/docs/blob/main/community/contribute/styleguide.md)

### Step 2 – Adding your repo to the docs.json

File a Pull Request within the [docs-page](https://github.com/SovereignCloudStack/docs-page) repository and add your repo to the docs.package.json:
File a Pull Request within the [docs](https://github.com/SovereignCloudStack/docs) repository and add your repo to the docs.package.json:

```json
[
6 changes: 6 additions & 0 deletions docs.package.json
@@ -135,5 +135,11 @@
"source": ["documentation/overview.md"],
"target": "docs/turnkey-solution",
"label": ""
},
{
"repo": "SovereignCloudStack/opendesk-on-scs",
"source": "docs/*",
"target": "user-docs/application-examples",
"label": "opendesk-on-scs"
}
]
161 changes: 161 additions & 0 deletions docs/02-iaas/deployment-examples/artcodix/index.mdx
@@ -0,0 +1,161 @@
# artcodix

## Preface

This document describes a possible environment setup for a pre-production or minimal production deployment.
Hardware requirements can vary greatly between environments, so this guide is neither a hardware sizing guide
nor a description of the optimal service placement for every setup. It is intended as a starting point for a
hardware-based deployment of the SCS IaaS reference implementation based on OSISM.

## Node type definitions

### Control Node

A control node runs all or most of the OpenStack services responsible for the API services and their corresponding
runtimes. These nodes are necessary for users to interact with the cloud and to keep the cloud in a managed state.
However, these nodes usually do **not** run user virtual machines.
It is therefore advisable to replicate the control nodes; three nodes are a good starting point for a Raft quorum.
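
The three-node recommendation follows from majority-quorum arithmetic. As a small sketch (the function name is illustrative, not part of any SCS tooling):

```python
# A Raft-style quorum needs a strict majority of voting members.
# A cluster of n control nodes therefore survives (n - 1) // 2
# simultaneous node failures.
def tolerated_failures(n: int) -> int:
    """Failures a majority-quorum cluster of n members can tolerate."""
    return (n - 1) // 2

for n in (1, 2, 3, 5):
    print(f"{n} node(s): quorum = {n // 2 + 1}, tolerates {tolerated_failures(n)} failure(s)")
```

Note that two nodes tolerate no more failures than one, which is why three is the smallest useful replicated control plane.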

### Compute Node (HCI/no HCI)

#### Not Hyperconverged Infrastructure (no HCI)

Non-HCI compute nodes exclusively run user virtual machines. They run no API services, no storage daemons
and no network routers, apart from the network infrastructure necessary to connect virtual machines.

#### Hyperconverged Infrastructure (HCI)

HCI nodes generally run at least user virtual machines and storage daemons. It is possible to place networking services
on them as well, but that is not considered good practice.

#### HCI vs. no HCI

Whether to use HCI nodes is in general not an easy question. For a getting-started (pre-production/smallest possible production)
environment, however, it is the most cost-efficient option. We will therefore continue with HCI nodes (compute + storage).

### Storage Node

A dedicated storage node runs only storage daemons. This can be necessary in larger deployments to protect the storage daemons from
resource starvation caused by user workloads.

Not used in this setup.

### Network Node

A dedicated network node runs the routing infrastructure that connects user virtual machines with provider/external
networks. In larger deployments, dedicated network nodes can improve scaling and network performance.

Not used in this setup.

## Nodes in this deployment example

As mentioned before, we run three dedicated control nodes. To be able to fully test an OpenStack environment, it is
recommended to run three compute nodes (HCI) as well, although technically a setup can run with just one compute node.
See the following chapter (Use cases and validation) for more information.

### Use cases and validation

The setup described allows for the following use cases / test cases:

- Highly available control plane
- Control plane failure tolerance test (Database, RabbitMQ, Ceph Mons, Routers)
- Highly available user virtual clusters (e.g. Kubernetes clusters)
- Compute host failure simulation
- Host aggregates / compute node grouping
- Host based storage replication (instead of OSD based)
- Fully replicated storage / storage high availability test

### Control Node

#### General requirements

The control nodes do not run any user workloads, so they are usually not sized as large as the compute nodes.
Relevant metrics for control nodes are:

- Fast and sufficiently large disks. At least SATA SSDs are recommended; NVMe will greatly improve overall responsiveness.
- A rather large amount of memory to hold the caches for databases and queues.
- Average CPU performance, with a good compromise between core count and clock speed. This is
  the least important requirement on the list.

#### Hardware recommendation

The following server specs are just a starting point and can greatly vary between environments.

Example:
3x Dell R630/R640/R650 1U server

- Dual 8-core 3.0 GHz Intel/AMD
- 128 GB RAM
- 2x 3.84 TB NVMe in (software) RAID 1
- 2x dual-port 10/25/40 Gbit SFP+/QSFP network cards

### Compute Node (HCI)

The compute nodes in this scenario run all the user virtual workloads **and** the storage infrastructure. To make sure
we don't starve these nodes, they should be of decent size.

> This setup takes local storage tests into consideration. The SCS standards require certain flavors with very fast disk speed
> to house customer Kubernetes control planes (etcd). These speeds are usually not achievable with shared storage. If you don't
> intend to test this scenario, you can skip the NVMe disks.

#### Hardware recommendation

The following server specs are just a starting point and can greatly vary between environments. The sizing of the nodes needs to fit
the expected workloads (customer VMs).

Example:
3x Dell R730(xd)/R740(xd)/R750(xd)
or
3x Supermicro

- Dual 16-core 2.8 GHz Intel/AMD
- 512 GB RAM
- 2x 3.84 TB NVMe in (software) RAID 1 if you want to have local storage available (optional)

For hyperconverged Ceph OSDs:

- 4x 10 TB HDD -> this leads to ~30 TB of available HDD storage (optional)
- 4x 7.68 TB SSD -> this leads to ~25 TB of available SSD storage (optional)
- 2x dual-port 10/25/40 Gbit SFP+/QSFP network cards
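
The usable-capacity figures above can be reproduced with a quick estimate, assuming three-way Ceph replication and a safe fill ratio of about 80% (both values are assumptions, not taken from this guide):

```python
def usable_capacity_tb(nodes: int, osds_per_node: int, osd_size_tb: float,
                       replicas: int = 3, fill_ratio: float = 0.8) -> float:
    """Rough usable Ceph capacity: raw space divided by the replica count,
    derated by how full the cluster may safely run."""
    raw_tb = nodes * osds_per_node * osd_size_tb
    return raw_tb / replicas * fill_ratio

print(f"HDD pool: ~{usable_capacity_tb(3, 4, 10.0):.0f} TB usable")  # ~32 TB
print(f"SSD pool: ~{usable_capacity_tb(3, 4, 7.68):.0f} TB usable")  # ~25 TB
```

With host-based replication across three nodes, this also matches the "fully replicated storage" test case above: each node holds one complete copy.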

## Network

The network infrastructure can vary a lot from setup to setup. This guide does not intend to define the best networking solution
for every cluster, but rather to present two possible scenarios.

### Scenario A: Not recommended for production

The smallest possible setup is a single VLAN-enabled switch connected to one interface on every node. OpenStack
recommends multiple isolated networks, but at a minimum the following should be split:

- Out of Band network
- Management networks
- Storage backend network
- Public / External network for virtual machines

If there is only one switch, these networks should all be defined as separate VLANs. One of the networks can run in the untagged
default VLAN 1.

### Scenario B: Minimum recommended setup for small production environments

The recommended setup uses two stacked switches connected in a LAG and at least three different physical network ports on each node.

- Physical Network 1: VLANs for Public / External network for virtual machines, Management networks
- Physical Network 2: Storage backend network
- Physical Network 3: Out of Band network

### Network adapters

The out of band network usually does not need much bandwidth. Most modern servers come with 1 Gbit/s adapters, which are sufficient.
For small test clusters, 1 Gbit/s may also be sufficient for the other two physical networks.
For a minimum production cluster, the following is recommended:

- Out of Band network: 1 Gbit/s
- VLANs for Public / External network for virtual machines, Management networks: 10 / 25 Gbit/s
- Storage backend network: 10 / 25 / 40 Gbit/s

Whether you need higher throughput for your storage backend depends on your expected storage load. The faster the network,
the faster storage data can be replicated between nodes, which usually improves performance and fault recovery.
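
To get a feel for these numbers, here is a back-of-the-envelope estimate of re-replication time; the 70% line-rate efficiency factor is an assumption, not a measured value:

```python
def rebalance_hours(data_tb: float, link_gbit: float, efficiency: float = 0.7) -> float:
    """Hours to copy data_tb over a link_gbit Gbit/s link, assuming only a
    fraction of line rate is achieved in practice (1 TB = 10^12 bytes)."""
    bytes_per_second = link_gbit * 1e9 / 8 * efficiency
    return data_tb * 1e12 / bytes_per_second / 3600

# Re-replicating the contents of a failed 10 TB OSD:
for gbit in (10, 25, 40):
    print(f"{gbit} Gbit/s -> {rebalance_hours(10, gbit):.1f} h")
```

On a 10 Gbit/s backend this takes roughly three hours; 25 or 40 Gbit/s shortens the window during which the cluster runs with reduced redundancy accordingly.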

## How to continue

After implementing the recommended deployment example hardware, you can continue with the [deployment guide](https://docs.scs.community/docs/iaas/guides/deploy-guide/).
30 changes: 26 additions & 4 deletions docusaurus.config.js
@@ -9,7 +9,7 @@ const config = {
tagline: 'Documentation and Community Platform for the Sovereign Cloud Stack',
url: 'https://docs.scs.community',
baseUrl: '/',
onBrokenLinks: 'throw',
onBrokenLinks: 'warn',
onBrokenMarkdownLinks: 'warn',
favicon: 'img/favicon.ico',
markdown: {
@@ -81,6 +81,16 @@
// ... other options
}
],
[
'@docusaurus/plugin-content-docs',
{
id: 'user-docs',
path: 'user-docs',
routeBasePath: 'user-docs',
sidebarPath: require.resolve('./sidebarsUserDocs.js')
// ... other options
}
],
[
'@docusaurus/plugin-content-docs',
{
@@ -104,7 +114,7 @@
'Documentation and Community Platform for the Sovereign Cloud Stack'
}
],
image: 'img/summit-social.png',
image: 'img/scs-og-basic.png',
navbar: {
title: '',
logo: {
@@ -120,6 +130,11 @@
label: 'For Contributors',
position: 'left'
},
{
to: '/user-docs',
label: 'For Users',
position: 'left'
},
{ to: '/community', label: 'Community', position: 'left' },
{ to: '/docs/faq', label: 'FAQ', position: 'left' },
{
@@ -194,12 +209,19 @@
// @ts-ignore
({
hashed: true,
docsDir: ['docs', 'community', 'standards', 'contributor-docs'],
docsDir: [
'docs',
'community',
'standards',
'contributor-docs',
'user-docs'
],
docsRouteBasePath: [
'docs',
'community',
'standards',
'contributor-docs'
'contributor-docs',
'user-docs'
]
})
]
