This repository has been archived by the owner on Jan 25, 2023. It is now read-only.

Enhancement to support ACL bootstrapping #221

Open · wants to merge 27 commits into master

Commits (27)
9b14799
The test README file has been updated to include commands to download…
yardbirdsax Apr 9, 2021
d2334af
The methods for testing the Consul cluster have been updated to allow…
yardbirdsax Apr 12, 2021
23bb6d8
The consul-iam-policies module has been updated to create a policy fo…
yardbirdsax Apr 16, 2021
bcffdeb
The consul-cluster module now accepts an optional variable indicating …
yardbirdsax Apr 16, 2021
1b0a462
Multiple test methods have been refactored to support testing a clust…
yardbirdsax Apr 16, 2021
54b68d1
The example Packer file now includes the installation of the 'bash-co…
yardbirdsax Apr 16, 2021
d11727e
Functions to read and write ACL tokens have been added to the script …
yardbirdsax Apr 19, 2021
9ed23ad
The newly created functions have been moved to a common.sh file, whic…
yardbirdsax Apr 19, 2021
92ec301
A typo in the Packer configuration file has been corrected.
yardbirdsax Apr 19, 2021
3c53ead
The consul-cluster module now passes the cluster name to the IAM Poli…
yardbirdsax Apr 19, 2021
7aa990c
The IAM Policy module now correctly creates the policy allowing clust…
yardbirdsax Apr 19, 2021
815376a
The function to write ACL tokens has been fixed to provide the '--ty…
yardbirdsax Apr 19, 2021
60234ee
The run-consul script now correctly inserts the ACL configuration whe…
yardbirdsax Apr 19, 2021
580e035
The run-consul script now calculates a rally point instance if 'enabl…
yardbirdsax Apr 19, 2021
6d4ac74
The ACL example has been updated so the user-data scripts call the ru…
yardbirdsax Apr 19, 2021
cf28737
The install-consul script has been updated to include copying the 'co…
yardbirdsax Apr 19, 2021
7d387c2
The run-consul script now generates a root ACL token upon start-up an…
yardbirdsax Apr 20, 2021
254313f
The run-consul command now checks whether the bootstrap token already…
yardbirdsax Apr 20, 2021
53a4fd5
The run-consul script now creates agent tokens and sets the local age…
yardbirdsax Apr 21, 2021
9478600
The run-consul script will now only perform ACL bootstrap activities …
yardbirdsax Apr 27, 2021
5561dd2
The tests for Consul clients have been updated to ignore ACL configur…
yardbirdsax Apr 28, 2021
4a93734
Documentation for the ACL enabled example and the run-consul script h…
yardbirdsax Apr 28, 2021
c958768
The install-consul script now also installs Git as a required depende…
yardbirdsax Apr 28, 2021
4656f31
Moved install of bash-commons to install-consul
yardbirdsax Apr 28, 2021
e7dfaae
Tweak of token generation for agents to use newer method
yardbirdsax May 7, 2021
9e3c70b
Refactor of ACL storage logic
yardbirdsax May 10, 2021
9c30efe
Correction of agent policy template
yardbirdsax May 10, 2021
16 changes: 16 additions & 0 deletions examples/example-with-acl/README.md
@@ -0,0 +1,16 @@
# Consul cluster with ACL example

This folder contains a set of Terraform manifests for deploying a Consul cluster in AWS with [ACLs](https://www.consul.io/docs/security/acl) enabled. The root bootstrap token is stored in an [AWS Systems Manager Parameter](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html) so that other nodes can retrieve it and create agent tokens for themselves.

The end result of this example should be a cluster of 3 Consul servers and 3 Consul clients, all running on individual EC2 instances.
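
Once the cluster is up, you can inspect the stored bootstrap token with the AWS CLI. The snippet below is a minimal sketch: the parameter name is an illustrative assumption, since the actual name is set by the `run-consul` script rather than shown in this diff.

```bash
# Illustrative only: read the ACL bootstrap token that the servers stored in
# SSM Parameter Store. The parameter name here is an assumed placeholder.
aws ssm get-parameter \
  --name "/consul/acl/bootstrap-token" \
  --with-decryption \
  --query 'Parameter.Value' \
  --output text
```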

## Quick start

To deploy a Consul cluster with ACLs enabled (a command-line sketch follows this list):

1. Create a new AMI using the Packer manifest in the [`examples/consul-ami`](../consul-ami) directory. Make note of the resulting AMI ID as you will need that for step 3.
1. Modify `main.tf` to add your provider credentials, VPC/subnet IDs if you need to, etc.
1. Modify `variables.tf` to customize the cluster. At a minimum you will want to supply the AMI ID from the image built in step 1.
1. Run `terraform init`.
1. Run `terraform apply`.
1. `ssh` into one of the boxes and make sure all nodes correctly discover each other (by running `consul members` for example).
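
For reference, here is a minimal command-line sketch of the steps above. The AMI ID, key pair name, and SSH user are placeholders you will need to replace with your own values.

```bash
# Build the AMI from the example Packer template and note the AMI ID it prints
cd examples/consul-ami
packer build consul.json

# Deploy the cluster, supplying the AMI ID (and any other variables you changed)
cd ../example-with-acl
terraform init
terraform apply -var 'ami_id=ami-0123456789abcdef0' -var 'ssh_key_name=my-key'

# SSH to one of the instances and check that all nodes discovered each other
ssh -i ~/.ssh/my-key.pem ubuntu@<server-public-ip>
consul members
```
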
158 changes: 158 additions & 0 deletions examples/example-with-acl/main.tf
@@ -0,0 +1,158 @@
# ---------------------------------------------------------------------------------------------------------------------
# DEPLOY A CONSUL CLUSTER IN AWS
# These templates show an example of how to use the consul-cluster module to deploy Consul in AWS. We deploy two Auto
# Scaling Groups (ASGs): one with a small number of Consul server nodes and one with a larger number of Consul client
# nodes. Note that these templates assume that the AMI you provide via the ami_id input variable is built from
# the examples/consul-ami/consul.json Packer template.
# ---------------------------------------------------------------------------------------------------------------------

# ----------------------------------------------------------------------------------------------------------------------
# REQUIRE A SPECIFIC TERRAFORM VERSION OR HIGHER
# ----------------------------------------------------------------------------------------------------------------------
terraform {
# This module is now only being tested with Terraform 0.14.x. However, to make upgrading easier, we are setting
# 0.12.26 as the minimum version, as that version added support for required_providers with source URLs, making it
# forwards compatible with 0.14.x code.
required_version = ">= 0.12.26"
}

# ---------------------------------------------------------------------------------------------------------------------
# DEPLOY THE CONSUL SERVER NODES
# ---------------------------------------------------------------------------------------------------------------------

module "consul_servers" {
# When using these modules in your own templates, you will need to use a Git URL with a ref attribute that pins you
# to a specific version of the modules, such as the following example:
# source = "git::[email protected]:hashicorp/terraform-aws-consul.git//modules/consul-cluster?ref=v0.0.1"
source = "../../modules/consul-cluster"

cluster_name = "${var.cluster_name}-server"
cluster_size = var.num_servers
instance_type = "t2.micro"
spot_price = var.spot_price

# The EC2 Instances will use these tags to automatically discover each other and form a cluster
cluster_tag_key = var.cluster_tag_key
cluster_tag_value = var.cluster_name

ami_id = var.ami_id
user_data = data.template_file.user_data_server.rendered

vpc_id = data.aws_vpc.default.id
subnet_ids = data.aws_subnet_ids.default.ids

# TODO: Add variable enable_acl

# To make testing easier, we allow Consul and SSH requests from any IP address here but in a production
# deployment, we strongly recommend you limit this to the IP address ranges of known, trusted servers inside your VPC.
allowed_ssh_cidr_blocks = ["0.0.0.0/0"]

allowed_inbound_cidr_blocks = ["0.0.0.0/0"]
ssh_key_name = var.ssh_key_name
acl_store_type = var.acl_store_type

tags = [
{
key = "Environment"
value = "development"
propagate_at_launch = true
}
]
}

# ---------------------------------------------------------------------------------------------------------------------
# THE USER DATA SCRIPT THAT WILL RUN ON EACH CONSUL SERVER EC2 INSTANCE WHEN IT'S BOOTING
# This script will configure and start Consul
# ---------------------------------------------------------------------------------------------------------------------

data "template_file" "user_data_server" {
template = file("${path.module}/user-data-server.sh")

vars = {
cluster_tag_key = var.cluster_tag_key
cluster_tag_value = var.cluster_name
enable_gossip_encryption = var.enable_gossip_encryption
gossip_encryption_key = var.gossip_encryption_key
enable_rpc_encryption = var.enable_rpc_encryption
ca_path = var.ca_path
cert_file_path = var.cert_file_path
key_file_path = var.key_file_path
# TODO Add enable_acl
}
}

# ---------------------------------------------------------------------------------------------------------------------
# DEPLOY THE CONSUL CLIENT NODES
# Note that you do not have to use the consul-cluster module to deploy your clients. We do so simply because it
# provides a convenient way to deploy an Auto Scaling Group with the necessary IAM and security group permissions for
# Consul, but feel free to deploy those clients however you choose (e.g. a single EC2 Instance, a Docker cluster, etc).
# ---------------------------------------------------------------------------------------------------------------------

module "consul_clients" {
# When using these modules in your own templates, you will need to use a Git URL with a ref attribute that pins you
# to a specific version of the modules, such as the following example:
# source = "git::[email protected]:hashicorp/terraform-aws-consul.git//modules/consul-cluster?ref=v0.0.1"
source = "../../modules/consul-cluster"

cluster_name = "${var.cluster_name}-client"
cluster_size = var.num_clients
instance_type = "t2.micro"
spot_price = var.spot_price

cluster_tag_key = "consul-clients"
cluster_tag_value = var.cluster_name

ami_id = var.ami_id
user_data = data.template_file.user_data_client.rendered

vpc_id = data.aws_vpc.default.id
subnet_ids = data.aws_subnet_ids.default.ids

# To make testing easier, we allow Consul and SSH requests from any IP address here but in a production
# deployment, we strongly recommend you limit this to the IP address ranges of known, trusted servers inside your VPC.
allowed_ssh_cidr_blocks = ["0.0.0.0/0"]

allowed_inbound_cidr_blocks = ["0.0.0.0/0"]
ssh_key_name = var.ssh_key_name

acl_store_type = var.acl_store_type
}

# ---------------------------------------------------------------------------------------------------------------------
# THE USER DATA SCRIPT THAT WILL RUN ON EACH CONSUL CLIENT EC2 INSTANCE WHEN IT'S BOOTING
# This script will configure and start Consul
# ---------------------------------------------------------------------------------------------------------------------

data "template_file" "user_data_client" {
template = file("${path.module}/user-data-client.sh")

vars = {
cluster_tag_key = var.cluster_tag_key
cluster_tag_value = var.cluster_name
enable_gossip_encryption = var.enable_gossip_encryption
gossip_encryption_key = var.gossip_encryption_key
enable_rpc_encryption = var.enable_rpc_encryption
ca_path = var.ca_path
cert_file_path = var.cert_file_path
key_file_path = var.key_file_path
# TODO Add enable_acl variable
}
}

# ---------------------------------------------------------------------------------------------------------------------
# DEPLOY CONSUL IN THE DEFAULT VPC AND SUBNETS
# Using the default VPC and subnets makes this example easy to run and test, but it means Consul is accessible from the
# public Internet. For a production deployment, we strongly recommend deploying into a custom VPC with private subnets.
# ---------------------------------------------------------------------------------------------------------------------

data "aws_vpc" "default" {
default = var.vpc_id == null ? true : false
id = var.vpc_id
}

data "aws_subnet_ids" "default" {
vpc_id = data.aws_vpc.default.id
}

data "aws_region" "current" {
}
59 changes: 59 additions & 0 deletions examples/example-with-acl/outputs.tf
@@ -0,0 +1,59 @@
output "num_servers" {
value = module.consul_servers.cluster_size
}

output "asg_name_servers" {
value = module.consul_servers.asg_name
}

output "launch_config_name_servers" {
value = module.consul_servers.launch_config_name
}

output "iam_role_arn_servers" {
value = module.consul_servers.iam_role_arn
}

output "iam_role_id_servers" {
value = module.consul_servers.iam_role_id
}

output "security_group_id_servers" {
value = module.consul_servers.security_group_id
}

output "num_clients" {
value = module.consul_clients.cluster_size
}

output "asg_name_clients" {
value = module.consul_clients.asg_name
}

output "launch_config_name_clients" {
value = module.consul_clients.launch_config_name
}

output "iam_role_arn_clients" {
value = module.consul_clients.iam_role_arn
}

output "iam_role_id_clients" {
value = module.consul_clients.iam_role_id
}

output "security_group_id_clients" {
value = module.consul_clients.security_group_id
}

output "aws_region" {
value = data.aws_region.current.name
}

output "consul_servers_cluster_tag_key" {
value = module.consul_servers.cluster_tag_key
}

output "consul_servers_cluster_tag_value" {
value = module.consul_servers.cluster_tag_value
}
28 changes: 28 additions & 0 deletions examples/example-with-acl/user-data-client.sh
@@ -0,0 +1,28 @@
#!/bin/bash
# This script is meant to be run in the User Data of each EC2 Instance while it's booting. The script uses the
# run-consul script to configure and start Consul in client mode. Note that this script assumes it's running in an AMI
# built from the Packer template in examples/consul-ami/consul.json.

set -e

# Send the log output from this script to user-data.log, syslog, and the console
# From: https://alestic.com/2010/12/ec2-user-data-output/
exec > >(tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1

# These variables are passed in via Terraform template interpolation
if [[ "${enable_gossip_encryption}" == "true" && ! -z "${gossip_encryption_key}" ]]; then
# Note that setting the encryption key in plain text here means that it will be readable from the Terraform state file
# and/or the EC2 API/console. We're doing this for simplicity, but in a real production environment you should pass an
# encrypted key to Terraform and decrypt it before passing it to run-consul with something like KMS.
gossip_encryption_configuration="--enable-gossip-encryption --gossip-encryption-key ${gossip_encryption_key}"
fi

if [[ "${enable_rpc_encryption}" == "true" && ! -z "${ca_path}" && ! -z "${cert_file_path}" && ! -z "${key_file_path}" ]]; then
rpc_encryption_configuration="--enable-rpc-encryption --ca-path ${ca_path} --cert-file-path ${cert_file_path} --key-file-path ${key_file_path}"
fi

# TODO: Make enabling ACL configurable via a template variable; it is currently hard-coded on the run-consul command below

/opt/consul/bin/run-consul --client --cluster-tag-key "${cluster_tag_key}" --cluster-tag-value "${cluster_tag_value}" $gossip_encryption_configuration $rpc_encryption_configuration --enable-acl --acl-storage-type ssm

# You could add commands to boot your other apps here
26 changes: 26 additions & 0 deletions examples/example-with-acl/user-data-server.sh
@@ -0,0 +1,26 @@
#!/bin/bash
# This script is meant to be run in the User Data of each EC2 Instance while it's booting. The script uses the
# run-consul script to configure and start Consul in server mode. Note that this script assumes it's running in an AMI
# built from the Packer template in examples/consul-ami/consul.json.

set -e

# Send the log output from this script to user-data.log, syslog, and the console
# From: https://alestic.com/2010/12/ec2-user-data-output/
exec > >(tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1

# These variables are passed in via Terraform template interpolation
if [[ "${enable_gossip_encryption}" == "true" && ! -z "${gossip_encryption_key}" ]]; then
# Note that setting the encryption key in plain text here means that it will be readable from the Terraform state file
# and/or the EC2 API/console. We're doing this for simplicity, but in a real production environment you should pass an
# encrypted key to Terraform and decrypt it before passing it to run-consul with something like KMS.
gossip_encryption_configuration="--enable-gossip-encryption --gossip-encryption-key ${gossip_encryption_key}"
fi

if [[ "${enable_rpc_encryption}" == "true" && ! -z "${ca_path}" && ! -z "${cert_file_path}" && ! -z "${key_file_path}" ]]; then
rpc_encryption_configuration="--enable-rpc-encryption --ca-path ${ca_path} --cert-file-path ${cert_file_path} --key-file-path ${key_file_path}"
fi

# TODO: Make enabling ACL configurable via a template variable; it is currently hard-coded on the run-consul command below

/opt/consul/bin/run-consul --server --cluster-tag-key "${cluster_tag_key}" --cluster-tag-value "${cluster_tag_value}" $gossip_encryption_configuration $rpc_encryption_configuration --enable-acl --acl-storage-type ssm
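
The `--enable-acl --acl-storage-type ssm` flags on the `run-consul` calls above drive the ACL bootstrap this PR adds. Conceptually, the per-node flow looks roughly like the sketch below; the SSM parameter name, the policy rules, and the use of `jq` are illustrative assumptions rather than the exact logic inside `run-consul`.

```bash
#!/bin/bash
# Illustrative sketch of the per-node ACL bootstrap flow, not the actual run-consul code.
set -e

# 1. Fetch the bootstrap token the servers stored in SSM (assumed parameter name).
bootstrap_token=$(aws ssm get-parameter \
  --name "/consul/acl/bootstrap-token" \
  --with-decryption \
  --query 'Parameter.Value' --output text)

node_name=$(hostname)

# 2. Create a policy that lets this node update its own registration.
cat > /tmp/agent-policy.hcl <<EOF
node "$node_name" {
  policy = "write"
}
EOF
consul acl policy create -name "$node_name-agent" \
  -rules @/tmp/agent-policy.hcl -token "$bootstrap_token"

# 3. Create an agent token bound to that policy and hand it to the local agent.
agent_token=$(consul acl token create -policy-name "$node_name-agent" \
  -format=json -token "$bootstrap_token" | jq -r '.SecretID')
consul acl set-agent-token -token "$bootstrap_token" agent "$agent_token"
```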