
Commit 20df101

Change CPUs to Graviton3
Change EC2 instance type from m6g.2xlarge to m7g.2xlarge

Closes #1040

Signed-off-by: Pedro Ruivo <[email protected]>
pruivo committed Nov 14, 2024
1 parent 76838fc · commit 20df101
Showing 7 changed files with 9 additions and 9 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/README.md
@@ -14,7 +14,7 @@
2. Click on Run workflow button
3. Fill in the form and click on Run workflow button
1. Name of the cluster - the name of the cluster that will be later used for other workflows. Default value is `gh-${{ github.repository_owner }}`, this results in `gh-<owner of fork>`.
-2. Instance type for compute nodes - see [AWS EC2 instance types](https://aws.amazon.com/ec2/instance-types/). Default value is `m6g.2xlarge`.
+2. Instance type for compute nodes - see [AWS EC2 instance types](https://aws.amazon.com/ec2/instance-types/). Default value is `m7g.2xlarge`.
3. Deploy to multiple availability zones in the region - if checked, the cluster will be deployed to multiple availability zones in the region. Default value is `false`.
4. Number of worker nodes to provision - number of compute nodes in the cluster. Default value is `2`.
4. Wait for the workflow to finish.
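
The same provisioning can also be triggered from a terminal with the GitHub CLI instead of the web UI. A minimal sketch, assuming the workflow file `rosa-cluster-create.yml` shown below; `computeMachineType` is the only input name confirmed by this change, the remaining names and values are placeholders:

```bash
# Hypothetical gh CLI invocation of the cluster-create workflow.
# Only computeMachineType is confirmed by this diff; the other inputs are illustrative.
gh workflow run rosa-cluster-create.yml \
  -R <owner>/keycloak-benchmark \
  -f clusterName=gh-myfork \
  -f computeMachineType=m7g.2xlarge
```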
4 changes: 2 additions & 2 deletions .github/workflows/rosa-cluster-create.yml
@@ -11,7 +11,7 @@ on:
type: string
computeMachineType:
description: 'Instance type for the compute nodes'
-default: 'm6g.2xlarge'
+default: 'm7g.2xlarge'
type: string
availabilityZones:
description: 'Availability zones to deploy to'
@@ -35,7 +35,7 @@ on:
default: 10.0.0.0/24
computeMachineType:
description: 'Instance type for the compute nodes'
-default: 'm6g.2xlarge'
+default: 'm7g.2xlarge'
type: string
availabilityZones:
description: 'Availability zones to deploy to'
@@ -14,7 +14,7 @@ Collecting the CPU usage for refreshing a token is currently performed manually
This setup is run https://github.com/keycloak/keycloak-benchmark/blob/main/.github/workflows/rosa-cluster-auto-provision-on-schedule.yml[daily on a GitHub action schedule]:

* OpenShift 4.15.x deployed on AWS via ROSA with two AWS availability zones in AWS one region.
-* Machinepool with `m6g.2xlarge` instances.
+* Machinepool with `m7g.2xlarge` instances.
* Keycloak 25 release candidate build deployed with Operator and 3 pods in each site as an active/passive setup, and Infinispan connecting the two sites.
* Default user password hashing with Argon2 and 5 hash iterations and minimum memory size 7 MiB https://cheatsheetseries.owasp.org/cheatsheets/Password_Storage_Cheat_Sheet.html#argon2id[as recommended by OWASP].
* Database seeded with 100,000 users and 100,000 clients.
@@ -36,7 +36,7 @@ After the installation process is finished, it creates a new admin user.
CLUSTER_NAME=rosa-kcb
VERSION=4.13.8
REGION=eu-central-1
-COMPUTE_MACHINE_TYPE=m6g.2xlarge
+COMPUTE_MACHINE_TYPE=m7g.2xlarge
MULTI_AZ=false
REPLICAS=3
----
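
As an illustration of how these variables are typically consumed, the sketch below feeds them into a standard `rosa create cluster` invocation. This is not the repository's install script; the flags are the stock `rosa` CLI ones, and the STS/auto mode choice is an assumption.

[source,bash]
----
# Illustrative only: drive a ROSA install from the variables above.
rosa create cluster --sts --mode auto --yes \
  --cluster-name "${CLUSTER_NAME}" \
  --region "${REGION}" \
  --version "${VERSION}" \
  --compute-machine-type "${COMPUTE_MACHINE_TYPE}" \
  --replicas "${REPLICAS}" \
  $([ "${MULTI_AZ}" = "true" ] && echo "--multi-az")
----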
@@ -85,7 +85,7 @@ The above installation script creates an admin user automatically but in case th
== Scaling the cluster's nodes on demand

The standard setup of nodes might be too small for running a load test, at the same time using a different instance type and rebuilding the cluster takes a lot of time (about 45 minutes).
-To scale the cluster on demand, the standard setup has a machine pool named `scaling` with instances of type `m6g.2xlarge` which is auto-scaled based on the current demand from 4 to 15 instances.
+To scale the cluster on demand, the standard setup has a machine pool named `scaling` with instances of type `m7g.2xlarge` which is auto-scaled based on the current demand from 4 to 15 instances.
However, auto-scaling of worker nodes is quite time-consuming as nodes are scaled one by one.

To use different instance types, use `rosa create machinepool` to create additional machine pools
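
For example, a hypothetical extra pool with a larger Graviton3 instance type could look like the sketch below; the pool name, instance size, and replica bounds are placeholders, only the flag names mirror the existing `scaling` pool.

[source,bash]
----
# Hypothetical additional machine pool; adjust name, type and replica bounds.
rosa create machinepool -c "${CLUSTER_NAME}" \
  --name scaling-large \
  --instance-type m7g.4xlarge \
  --enable-autoscaling --min-replicas 1 --max-replicas 10 \
  --autorepair
----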
2 changes: 1 addition & 1 deletion provision/aws/rds/aurora_common.sh
@@ -5,7 +5,7 @@ export AURORA_CLUSTER=${AURORA_CLUSTER:-"keycloak"}
export AURORA_ENGINE=${AURORA_ENGINE:-"aurora-postgresql"}
export AURORA_ENGINE_VERSION=${AURORA_ENGINE_VERSION:-"16.1"}
export AURORA_INSTANCES=${AURORA_INSTANCES:-"1"}
-export AURORA_INSTANCE_CLASS=${AURORA_INSTANCE_CLASS:-"db.r6g.xlarge"}
+export AURORA_INSTANCE_CLASS=${AURORA_INSTANCE_CLASS:-"db.r7g.xlarge"}
export AURORA_PASSWORD=${AURORA_PASSWORD:-"secret99"}
export AURORA_REGION=${AURORA_REGION}
export AURORA_SECURITY_GROUP_NAME=${AURORA_SECURITY_GROUP_NAME:-"${AURORA_CLUSTER}-security-group"}
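
Since every setting uses the `${VAR:-default}` pattern, the defaults can be overridden by exporting the variables before the Aurora provisioning scripts source this file. A small sketch; the values are examples and the script to run afterwards is not part of this diff:

```bash
# Illustrative overrides; variable names come from aurora_common.sh above.
export AURORA_REGION=eu-central-1
export AURORA_INSTANCE_CLASS=db.r7g.2xlarge   # larger Graviton3 class than the default
export AURORA_INSTANCES=2
```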
2 changes: 1 addition & 1 deletion provision/aws/rosa_create_cluster.sh
@@ -63,7 +63,7 @@ fi

SCALING_MACHINE_POOL=$(rosa list machinepools -c "${CLUSTER_NAME}" -o json | jq -r '.[] | select(.id == "scaling") | .id')
if [[ "${SCALING_MACHINE_POOL}" != "scaling" ]]; then
-rosa create machinepool -c "${CLUSTER_NAME}" --instance-type "${COMPUTE_MACHINE_TYPE:-m6g.2xlarge}" --max-replicas 15 --min-replicas 1 --name scaling --enable-autoscaling --autorepair
+rosa create machinepool -c "${CLUSTER_NAME}" --instance-type "${COMPUTE_MACHINE_TYPE:-m7g.2xlarge}" --max-replicas 15 --min-replicas 1 --name scaling --enable-autoscaling --autorepair
fi

cd ${SCRIPT_DIR}
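
The same listing command the script uses for its idempotency check can be reused interactively to confirm the pool was created; field names in the JSON output may vary between ROSA versions:

```bash
# Confirm the auto-scaling machine pool exists; mirrors the check in the script above.
rosa list machinepools -c "${CLUSTER_NAME}" -o json | jq -r '.[].id'
```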
2 changes: 1 addition & 1 deletion provision/opentofu/modules/rosa/hcp/variables.tf
@@ -61,7 +61,7 @@ variable "openshift_version" {

variable "instance_type" {
type = string
default = "m6g.2xlarge"
default = "m7g.2xlarge"
nullable = false
}
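
When planning or applying with OpenTofu, this default can be overridden on the command line; a sketch, assuming `instance_type` is exposed wherever `tofu` is invoked rather than only inside this module:

```bash
# Hypothetical override of the module's instance_type default.
tofu plan -var 'instance_type=m7g.4xlarge'
```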

