This module provides a simple way to run load tests created with JMeter, Locust, k6, or Taurus (bzt) on AWS as IaaS.
module "loadtest-distribuited" {
source = "marcosborges/loadtest-distribuited/aws"
name = "nome-da-implantacao"
executor = "jmeter"
loadtest_dir_source = "examples/plan/"
nodes_size = 2
loadtest_entrypoint = "jmeter -n -t jmeter/basic.jmx -R \"{NODES_IPS}\" -l /var/logs/loadtest -e -o /var/www/html -Dnashorn.args=--no-deprecation-warning -Dserver.rmi.ssl.disable=true "
subnet_id = data.aws_subnet.current.id
}
data "aws_subnet" "current" {
filter {
name = "tag:Name"
values = ["my-subnet-name"]
}
}
For basic use, you only need to specify which network will be used, where your test plan scripts are located, and the number of nodes required to generate the desired load. The same setup works with Taurus (bzt):
module "loadtest-distribuited" {
source = "marcosborges/loadtest-distribuited/aws"
name = "nome-da-implantacao"
executor = "jmeter"
loadtest_dir_source = "examples/plan/"
nodes_size = 2
loadtest_entrypoint = "bzt -q -o execution.0.distributed=\"{NODES_IPS}\" taurus/basic.yml"
subnet_id = data.aws_subnet.current.id
}
data "aws_subnet" "current" {
filter {
name = "tag:Name"
values = ["my-subnet-name"]
}
}
module "loadtest-distribuited" {
source = "marcosborges/loadtest-distribuited/aws"
name = "nome-da-implantacao"
nodes_size = 2
executor = "locust"
loadtest_dir_source = "examples/plan/"
locust_plan_filename = "basic.py"
loadtest_entrypoint = <<-EOT
nohup locust \
-f ${var.locust_plan_filename} \
--web-port=8080 \
--expect-workers=${var.node_size} \
--master > locust-leader.out 2>&1 &
EOT
node_custom_entrypoint = <<-EOT
nohup locust \
-f ${var.locust_plan_filename} \
--worker \
--master-host={LEADER_IP} > locust-worker.out 2>&1 &
EOT
subnet_id = data.aws_subnet.current.id
}
data "aws_subnet" "current" {
filter {
name = "tag:Name"
values = ["my-subnet-name"]
}
}
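Note that `{NODES_IPS}` and `{LEADER_IP}` are literal placeholders, not Terraform interpolations: the module substitutes them at execution time with the addresses of the provisioned nodes and of the leader, respectively.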
The module also provides advanced settings, shown together in the example below:

- The contents of a data mass file can be automatically split between the load nodes.
- The SSH key used for remote access can be exported.
- A pre-configured, customized image (AMI) can be used.
- Many instance provisioning parameters can be customized: tags, monitoring, public IP, security group, and so on.
module "loadtest" {
source = "marcosborges/loadtest-distribuited/aws"
name = "nome-da-implantacao"
executor = "bzt"
loadtest_dir_source = "examples/plan/"
loadtest_dir_destination = "/loadtest"
loadtest_entrypoint = "bzt -q -o execution.0.distributed=\"{NODES_IPS}\" taurus/basic.yml"
nodes_size = 3
subnet_id = data.aws_subnet.current.id
#AUTO SPLIT
split_data_mass_between_nodes = {
enable = true
data_mass_filenames = [
"data/users.csv"
]
}
#EXPORT SSH KEY
ssh_export_pem = true
#CUSTOMIZE IMAGE
leader_ami_id = data.aws_ami.my_image.id
nodes_ami_id = data.aws_ami.my_image.id
#CUSTOMIZE TAGS
leader_tags = {
"Name" = "nome-da-implantacao-leader",
"Owner": "nome-do-proprietario",
"Environment": "producao",
"Role": "leader"
}
nodes_tags = {
"Name": "nome-da-implantacao",
"Owner": "nome-do-proprietario",
"Environment": "producao",
"Role": "node"
}
tags = {
"Name": "nome-da-implantacao",
"Owner": "nome-do-proprietario",
"Environment": "producao"
}
# SETUP INSTANCE SIZE
leader_instance_type = "t2.medium"
nodes_instance_type = "t2.medium"
# SETUP JVM PARAMETERS
leader_jvm_args = " -Xms12g -Xmx80g -XX:MaxMetaspaceSize=512m -XX:+UseG1GC -XX:MaxGCPauseMillis=100 -XX:G1ReservePercent=20 "
nodes_jvm_args = " -Xms12g -Xmx80g -XX:MaxMetaspaceSize=512m -XX:+UseG1GC -XX:MaxGCPauseMillis=100 -XX:G1ReservePercent=20 "
# DISABLE AUTO SETUP
auto_setup = false
# SET JMETER VERSION. WORK ONLY WHEN AUTO-SETUP IS TRUE
jmeter_version = "5.4.1"
# ASSOCIATE PUBLIC IP
leader_associate_public_ip_address = true
nodes_associate_public_ip_address = true
# ENABLE MONITORING
leader_monitoring = true
nodes_monitoring = true
# SETUP SSH USERNAME
ssh_user = "ec2-user"
# SETUP ALLOWEDs CIDRS FOR SSH ACCESS
ssh_cidr_ingress_block = ["0.0.0.0/0"]
}
data "aws_subnet" "current" {
filter {
name = "tag:Name"
values = ["my-subnet-name"]
}
}
data "aws_ami" "my_image" {
most_recent = true
filter {
name = "owner-alias"
values = ["amazon"]
}
filter {
name = "name"
values = ["amzn2-ami-hvm*"]
}
}
The C5 instance family is a good choice for load testing.
Model | vCPU | Mem (GiB) | Storage | Network Bandwidth (Gbps) | EBS Bandwidth (Gbps) |
---|---|---|---|---|---|
c5n.large | 2 | 5.25 | EBS | 25 | 4.75 |
c5n.xlarge | 4 | 10.5 | EBS | 25 | 4.75 |
c5n.2xlarge | 8 | 21 | EBS | 25 | 4.75 |
c5n.4xlarge | 16 | 42 | EBS | 25 | 4.75 |
c5n.9xlarge | 36 | 96 | EBS | 50 | 9.5 |
c5n.18xlarge | 72 | 192 | EBS | 100 | 19 |
c5n.metal | 72 | 192 | EBS | 100 | 19 |
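To use them, point the instance type inputs at the desired model. A short sketch (the other required arguments are omitted, and `c5n.xlarge` is just an illustrative size):

```hcl
module "loadtest" {
  source = "marcosborges/loadtest-distribuited/aws"
  # ... name, loadtest_dir_source, nodes_size, subnet_id, etc. as in the examples above ...

  leader_instance_type = "c5n.xlarge"
  nodes_instance_type  = "c5n.xlarge"
}
```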
The following requirements are needed by this module:

Name | Version |
---|---|
terraform | >= 0.13.1 |
aws | >= 3.63 |

The following providers are used by this module:

Name | Version |
---|---|
aws | >= 3.63 |
null | n/a |
tls | n/a |
No nested modules are used.
The following resources are created by this module:

Name | Type |
---|---|
aws_iam_instance_profile.loadtest | resource |
aws_iam_role.loadtest | resource |
aws_instance.leader | resource |
aws_instance.nodes | resource |
aws_key_pair.loadtest | resource |
aws_security_group.loadtest | resource |
null_resource.executor | resource |
null_resource.key_pair_exporter | resource |
null_resource.publish_split_data | resource |
null_resource.split_data | resource |
tls_private_key.loadtest | resource |
aws_ami.amazon_linux_2 | data source |
aws_subnet.current | data source |
aws_vpc.current | data source |
The following input variables are supported:

Name | Description | Type | Default | Required |
---|---|---|---|---|
auto_execute | Execute the load test once the leader and nodes are available | bool | true | no |
auto_setup | Install and configure Amazon Linux 2 instances with JMeter and Taurus | bool | true | no |
executor | Executor of the load test | string | "jmeter" | no |
jmeter_version | JMeter version | string | "5.4.1" | no |
leader_ami_id | ID of the AMI for the leader | string | "" | no |
leader_associate_public_ip_address | Associate a public IP address to the leader | bool | true | no |
leader_custom_setup_base64 | Custom bash script, base64-encoded, to set up the leader | string | "" | no |
leader_instance_type | Instance type of the cluster leader | string | "t2.medium" | no |
leader_jvm_args | JVM_ARGS for the leader | string | " -Xms2g -Xmx2g -XX:MaxMetaspaceSize=256m -XX:+UseG1GC -XX:MaxGCPauseMillis=100 -XX:G1ReservePercent=20 " | no |
leader_monitoring | Enable monitoring for the leader | bool | true | no |
leader_tags | Tags of the cluster leader | map | {} | no |
loadtest_dir_destination | Path to the destination load test directory | string | "/loadtest" | no |
loadtest_dir_source | Path to the source load test directory | string | n/a | yes |
loadtest_entrypoint | Entrypoint command for the load test | string | "bzt -q -o execution.0.distributed=\"{NODES_IPS}\" *.yml" | no |
name | Name of the provision | string | n/a | yes |
nodes_ami_id | ID of the AMI for the nodes | string | "" | no |
nodes_associate_public_ip_address | Associate a public IP address to the nodes | bool | true | no |
nodes_custom_setup_base64 | Custom bash script, base64-encoded, to set up the nodes | string | "" | no |
nodes_instance_type | Instance type of the cluster nodes | string | "t2.medium" | no |
nodes_jvm_args | JVM_ARGS for the nodes | string | "-Xms2g -Xmx2g -XX:MaxMetaspaceSize=256m -XX:+UseG1GC -XX:MaxGCPauseMillis=100 -XX:G1ReservePercent=20 -Dnashorn.args=--no-deprecation-warning -XX:+HeapDumpOnOutOfMemoryError " | no |
nodes_monitoring | Enable monitoring for the nodes | bool | true | no |
nodes_size | Total number of nodes in the cluster | number | 2 | no |
nodes_tags | Tags of the cluster nodes | map | {} | no |
region | Name of the region | string | "us-east-1" | no |
split_data_mass_between_nodes | Split a data mass file between the nodes | object({...}) | {...} | no |
ssh_cidr_ingress_blocks | CIDR blocks allowed SSH access to the leader | list | [...] | no |
ssh_export_pem | Export the SSH key used for remote access | bool | false | no |
ssh_user | SSH user for the leader | string | "ec2-user" | no |
subnet_id | ID of the subnet | string | n/a | yes |
tags | Common tags | map | {} | no |
taurus_version | Taurus version | string | "1.16.0" | no |
web_cidr_ingress_blocks | CIDR blocks allowed web access to the leader | list | [...] | no |
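As an illustration of the custom setup inputs above: when `auto_setup` is false, bootstrap scripts can be passed base64-encoded. A minimal sketch, assuming hypothetical `setup-leader.sh` and `setup-nodes.sh` files next to your configuration:

```hcl
module "loadtest" {
  source = "marcosborges/loadtest-distribuited/aws"
  # ... other required arguments as in the examples above ...

  # skip the module's own provisioning and run custom scripts instead
  auto_setup                 = false
  leader_custom_setup_base64 = base64encode(file("${path.module}/setup-leader.sh"))
  nodes_custom_setup_base64  = base64encode(file("${path.module}/setup-nodes.sh"))
}
```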
The following outputs are exported:

Name | Description |
---|---|
leader_private_ip | The private IP address of the leader server instance. |
leader_public_ip | The public IP address of the leader server instance. |
nodes_private_ip | The private IP addresses of the node instances. |
nodes_public_ip | The public IP addresses of the node instances. |
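These can be re-exported from the root module to make them visible after `terraform apply`, for example:

```hcl
output "leader_public_ip" {
  description = "Public IP address of the load test leader"
  value       = module.loadtest.leader_public_ip
}

output "nodes_private_ip" {
  description = "Private IP addresses of the load test nodes"
  value       = module.loadtest.nodes_private_ip
}
```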