Assuming you already have an Amazon AWS account, we will need additional binaries for the AWS CLI, terraform, kubectl, and aws-iam-authenticator.
The article is structured in 5 parts:
- Initial tooling setup: AWS CLI, kubectl, and terraform
- Creating a terraform IAM account with access keys and an access policy
- Creating back-end storage for the tfstate file in AWS S3
- Creating a Kubernetes cluster on AWS EKS and RDS with PostgreSQL
- Working with Kubernetes via kubectl in EKS
Assuming you already have an AWS account, with the AWS CLI installed and configured for your user account, we start with the additional binaries: terraform and kubectl.
On macOS:
curl -o terraform_0.11.7_darwin_amd64.zip \
https://releases.hashicorp.com/terraform/0.11.7/terraform_0.11.7_darwin_amd64.zip
unzip terraform_0.11.7_darwin_amd64.zip -d /usr/local/bin/
On Linux:
curl https://releases.hashicorp.com/terraform/0.11.7/terraform_0.11.7_linux_amd64.zip > \
terraform_0.11.7_linux_amd64.zip
unzip terraform_0.11.7_linux_amd64.zip -d /usr/local/bin/
Verify terraform version 0.11.7 or higher is installed:
terraform version
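If the install succeeded, the output should look roughly like this (exact build details may differ):
Terraform v0.11.7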
Next, install kubectl. On macOS:
curl -o kubectl https://storage.googleapis.com/kubernetes-release/release/v1.11.0/bin/darwin/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/
On Linux:
wget https://storage.googleapis.com/kubernetes-release/release/v1.11.0/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/
kubectl version --client
aws-iam-authenticator is a tool developed by the Heptio team that allows kubectl to authenticate against EKS using AWS IAM credentials.
On macOS:
curl -o aws-iam-authenticator \
https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-07-26/bin/darwin/amd64/aws-iam-authenticator
chmod +x ./aws-iam-authenticator
cp ./aws-iam-authenticator $HOME/bin/aws-iam-authenticator && export PATH=$HOME/bin:$PATH
On Linux:
curl -o aws-iam-authenticator \
https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-07-26/bin/linux/amd64/aws-iam-authenticator
chmod +x ./aws-iam-authenticator
cp ./aws-iam-authenticator $HOME/.local/bin/aws-iam-authenticator && export PATH=$HOME/.local/bin:$PATH
aws-iam-authenticator help
Before configuring the AWS CLI, note that EKS is at this time only available in US East (N. Virginia) and US West (Oregon). In the example below we will be using US West (Oregon), "us-west-2".
aws configure
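The prompts will look something like this (the keys shown are AWS's documentation placeholders; use your own):
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFnKWENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-west-2
Default output format [None]: json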
The first step is to set up a terraform admin account in AWS IAM:
aws iam create-user --user-name terraform
NOTE: For a production or even a proper testing account you may want to tighten up and restrict access for the terraform IAM user.
aws iam attach-user-policy --user-name terraform --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
NOTE: The Access Key and Secret Access Key returned by the next command will be used by terraform to manage the infrastructure deployment.
aws iam create-access-key --user-name terraform
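The command returns the credentials as JSON; save them, as the secret is shown only once (values below are placeholders):
{
    "AccessKey": {
        "UserName": "terraform",
        "AccessKeyId": "AKIAIOSFODNN7EXAMPLE",
        "Status": "Active",
        "SecretAccessKey": "wJalrXUtnFnKWENG/bPxRfiCYEXAMPLEKEY",
        "CreateDate": "2018-08-01T12:00:00Z"
    }
}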
Once the terraform IAM account has been created we can proceed to the next step: creating a dedicated bucket to keep the terraform state files.
NOTE: Change the name of the bucket; bucket names must be unique across all AWS S3 buckets.
aws s3 mb s3://terra-state-bucket --region us-west-2
aws s3api put-bucket-versioning --bucket terra-state-bucket --versioning-configuration Status=Enabled
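For reference, a minimal backend.tf pointing terraform at this bucket could look like the sketch below; the key name is an assumption, so adjust it to your layout:
terraform {
  backend "s3" {
    bucket = "terra-state-bucket"
    key    = "terraform.tfstate"
    region = "us-west-2"
  }
}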
Now we can move on to creating the new infrastructure, EKS and RDS, with terraform.
.
├── backend.tf
├── eks
│   ├── eks_cluster
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   └── variables.tf
│   ├── eks_iam_roles
│   │   ├── main.tf
│   │   └── outputs.tf
│   ├── eks_node
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   ├── userdata.tpl
│   │   └── variables.tf
│   └── eks_sec_group
│       ├── main.tf
│       ├── outputs.tf
│       └── variables.tf
├── main.tf
├── network
│   ├── route
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   └── variables.tf
│   ├── sec_group
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   └── variables.tf
│   ├── subnets
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   └── variables.tf
│   └── vpc
│       ├── main.tf
│       ├── outputs.tf
│       └── variables.tf
├── outputs.tf
├── rds
│   ├── main.tf
│   ├── outputs.tf
│   └── variables.tf
├── README.md
├── terraform.tfvars
├── variables.tf
└── yaml
    ├── eks-admin-cluster-role-binding.yaml
    └── eks-admin-service-account.yaml
We will use terraform modules to keep our code clean and organized. Terraform will run 2 separate environments, dev and prod, from the same sources; the only difference in this case is the number of worker nodes for Kubernetes.
# Specify the provider and access details
provider "aws" {
  access_key = "${var.access_key}"
  secret_key = "${var.secret_key}"
  region     = "${var.aws_region}"
}

## Network
# Create VPC
module "vpc" {
  source           = "./network/vpc"
  eks_cluster_name = "${var.eks_cluster_name}"
  cidr_block       = "${var.cidr_block}"
}

# Create Subnets
module "subnets" {
  source           = "./network/subnets"
  eks_cluster_name = "${var.eks_cluster_name}"
  vpc_id           = "${module.vpc.vpc_id}"
  vpc_cidr_block   = "${module.vpc.vpc_cidr_block}"
}

# Configure Routes
module "route" {
  source              = "./network/route"
  main_route_table_id = "${module.vpc.main_route_table_id}"
  gw_id               = "${module.vpc.gw_id}"

  subnets = [
    "${module.subnets.subnets}",
  ]
}
module "eks_iam_roles" {
source = "./eks/eks_iam_roles"
}
module "eks_sec_group" {
source = "./eks/eks_sec_group"
eks_cluster_name = "${var.eks_cluster_name}"
vpc_id = "${module.vpc.vpc_id}"
}
module "eks_cluster" {
source = "./eks/eks_cluster"
eks_cluster_name = "${var.eks_cluster_name}"
iam_cluster_arn = "${module.eks_iam_roles.iam_cluster_arn}"
iam_node_arn = "${module.eks_iam_roles.iam_node_arn}"
subnets = [
"${module.subnets.subnets}",
]
security_group_cluster = "${module.eks_sec_group.security_group_cluster}"
}
module "eks_node" {
source = "./eks/eks_node"
eks_cluster_name = "${var.eks_cluster_name}"
eks_certificate_authority = "${module.eks_cluster.eks_certificate_authority}"
eks_endpoint = "${module.eks_cluster.eks_endpoint}"
iam_instance_profile = "${module.eks_iam_roles.iam_instance_profile}"
security_group_node = "${module.eks_sec_group.security_group_node}"
subnets = [
"${module.subnets.subnets}",
]
}
module "sec_group_rds" {
source = "./network/sec_group"
vpc_id = "${module.vpc.vpc_id}"
vpc_cidr_block = "${module.vpc.vpc_cidr_block}"
}
module "rds" {
source = "./rds"
subnets = [
"${module.subnets.subnets}",
]
sec_grp_rds = "${module.sec_group_rds.sec_grp_rds}"
identifier = "${var.identifier}"
storage_type = "${var.storage_type}"
allocated_storage = "${var.allocated_storage}"
db_engine = "${var.db_engine}"
engine_version = "${var.engine_version}"
instance_class = "${var.instance_class}"
db_username = "${var.db_username}"
db_password = "${var.db_password}"
sec_grp_rds = "${module.sec_group_rds.sec_grp_rds}"
}
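As an illustration of what lives inside the modules, eks/eks_cluster/main.tf boils down to a single aws_eks_cluster resource along these lines (a sketch; the resource name is assumed, the variable names follow the module inputs above):
resource "aws_eks_cluster" "eks" {
  name     = "${var.eks_cluster_name}"
  role_arn = "${var.iam_cluster_arn}"

  vpc_config {
    security_group_ids = ["${var.security_group_cluster}"]
    subnet_ids         = ["${var.subnets}"]
  }
}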
The terraform modules will create:
- VPC
- Subnets
- Routes
- IAM roles for the master and nodes
- Security groups ("firewalls") to allow the master and nodes to communicate
- EKS cluster
- An Auto Scaling Group that will create nodes to be added to the cluster
- Security group for RDS
- RDS with PostgreSQL
NOTE: It is very important to keep the tags; if the tags are not specified, nodes will not be able to join the cluster.
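For illustration, the critical tag on the node Auto Scaling Group in the eks_node module looks roughly like this (the resource name is a sketch; the subnets carry a matching kubernetes.io/cluster tag):
resource "aws_autoscaling_group" "eks_node" {
  # ... launch configuration, subnets, sizing ...

  # EKS matches this tag against the cluster name during node registration
  tag {
    key                 = "kubernetes.io/cluster/${var.eks_cluster_name}"
    value               = "owned"
    propagate_at_launch = true
  }
}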
cd into the project folder and create workspaces for dev and prod:
terraform init
terraform workspace new dev
terraform workspace list
terraform workspace select dev
Before we can start, we need to update the variables and add the DB password to terraform.tfvars:
echo 'db_password = "Your_DB_Passwd."' >> terraform.tfvars
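A complete terraform.tfvars might look roughly like this, based on the variables the root module references (all values are examples):
access_key        = "AKIAIOSFODNN7EXAMPLE"
secret_key        = "wJalrXUtnFnKWENG/bPxRfiCYEXAMPLEKEY"
aws_region        = "us-west-2"
eks_cluster_name  = "eks-demo"
cidr_block        = "10.0.0.0/16"
identifier        = "eks-demo-rds"
storage_type      = "gp2"
allocated_storage = "20"
db_engine         = "postgres"
engine_version    = "9.6.8"
instance_class    = "db.t2.micro"
db_username       = "dbadmin"
db_password       = "Your_DB_Passwd."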
terraform get -update
terraform plan
NOTE: Building the complete infrastructure may take more than 10 minutes.
terraform apply
aws ec2 describe-instances --output table
In order to use kubectl with EKS we need to set up a new AWS CLI profile.
NOTE: You will need the access and secret keys from terraform.tfvars.
cat terraform.tfvars
aws configure --profile terraform
export AWS_PROFILE=terraform
The terraform configuration outputs a configuration file for kubectl:
terraform output kubeconfig
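The generated file follows the usual EKS pattern: an exec section that calls aws-iam-authenticator to obtain a token, roughly like this (abbreviated sketch; server address and names will be your own):
apiVersion: v1
clusters:
- cluster:
    server: <EKS endpoint>
    certificate-authority-data: <base64-encoded CA>
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args: ["token", "-i", "<cluster name>"]
Save it and add it to your KUBECONFIG: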
terraform output kubeconfig > ~/.kube/config-devel
export KUBECONFIG=$KUBECONFIG:~/.kube/config-devel
kubectl get namespaces
kubectl get services
terraform output config_map_aws_auth > yaml/config_map_aws_auth.yaml
kubectl apply -f yaml/config_map_aws_auth.yaml
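The aws-auth ConfigMap is what allows the worker nodes to register: it maps the node IAM role into the system:bootstrappers and system:nodes groups, typically like this (role ARN abbreviated):
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::<account id>:role/<node role name>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
Once it is applied, the nodes should start joining: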
kubectl get nodes
Deploy the Kubernetes Dashboard
kubectl apply -f \
https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
kubectl apply -f \
https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/heapster.yaml
kubectl apply -f \
https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/influxdb.yaml
kubectl apply -f \
https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/rbac/heapster-rbac.yaml
kubectl apply -f yaml/eks-admin-service-account.yaml
kubectl apply -f yaml/eks-admin-cluster-role-binding.yaml
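The two manifests follow the standard eks-admin example from the EKS documentation: a ServiceAccount in kube-system and a ClusterRoleBinding granting it cluster-admin; roughly:
# yaml/eks-admin-service-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: eks-admin
  namespace: kube-system
---
# yaml/eks-admin-cluster-role-binding.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: eks-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: eks-admin
  namespace: kube-system
With the service account in place, retrieve its login token: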
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep eks-admin | awk '{print $1}')
kubectl proxy
NOTE: Open the link with a web browser to access the dashboard endpoint: http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
NOTE: Choose Token and paste the output from the previous command into the Token field.
When you are done, tear down the infrastructure:
terraform destroy -auto-approve
Switch back to the default profile and remove the state bucket:
export AWS_PROFILE=default
aws s3 rm s3://terra-state-bucket --recursive
aws s3api put-bucket-versioning --bucket terra-state-bucket --versioning-configuration Status=Suspended
aws s3api delete-objects --bucket terra-state-bucket --delete \
"$(aws s3api list-object-versions --bucket terra-state-bucket | \
jq '{Objects: [.Versions[] | {Key:.Key, VersionId : .VersionId}], Quiet: false}')"
aws s3 rb s3://terra-state-bucket --force
Finally, remove the terraform IAM user:
aws iam detach-user-policy --user-name terraform --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
aws iam list-access-keys --user-name terraform --query 'AccessKeyMetadata[*].{ID:AccessKeyId}' --output text
aws iam delete-access-key --user-name terraform --access-key-id OUT_KEY
aws iam delete-user --user-name terraform