The AWS EKS Terraform module creates AWS EKS (Kubernetes) resources using Terraform.
Bobcares responds to all inquiries, no matter how big or small, as part of our AWS Support services.
Let’s take a look at how our Support team broke down the AWS EKS Terraform module.
AWS EKS Terraform Module
A Terraform module is a directory containing a collection of standard configuration files. Terraform modules contain groups of resources dedicated to a single task, reducing the amount of code we need to write for similar infrastructure components.
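For instance, here is a minimal sketch of what calling a module looks like; the local module path and the instance_count input are hypothetical examples, not part of this guide's configuration:
# Call a local module that groups related resources into one reusable unit.
# "./modules/web-server" and its "instance_count" input are hypothetical.
module "web_server" {
  source = "./modules/web-server"

  instance_count = 2
}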
Step-by-Step Guide to Deploying Our First Cluster with Terraform and EKS
- Firstly, make a directory for the project, such as terraform-eks.
- Then, use this command to create an SSH key pair in the directory:
ssh-keygen -t rsa -f ./eks-key
- We’ll now create several Terraform files to hold the different resource configurations. The first file will be provider.tf.
- Once we create the file, we’ll add these lines of code to it:
provider "aws" {
  version = "~> 2.57.0"
  region  = "us-east-1"
}
- Then, make a file called cluster.tf. This will include our virtual network, cluster, and node pool modules. To begin, we’ll create a locals block with a variable for the cluster name that we can use across modules:
locals {
  cluster_name = "my-eks-cluster"
}
- We’ll then use Fairwinds’ AWS VPC module to set up the cluster’s network. Please note that the module is hardcoded to use a /16 CIDR block and /21 CIDR subnets.
module "vpc" { source = "git::https://git@github.com/reactiveops/terraform-vpc.git?ref=v5.0.1" aws_region = "us-east-1" az_count = 3 aws_azs = "us-east-1a, us-east-1b, us-east-1c" global_tags = { "kubernetes.io/cluster/${local.cluster_name}" = "shared" } }
- Finally, we’ll add a module for the cluster itself, using a community-supported Terraform AWS module:
module "eks" { source = "git::https://github.com/terraform-aws-modules/terraform-aws-eks.git?ref=v12.1.0" cluster_name = local.cluster_name vpc_id = module.vpc.aws_vpc_id subnets = module.vpc.aws_subnet_private_prod_ids node_groups = { eks_nodes = { desired_capacity = 3 max_capacity = 3 min_capaicty = 3 instance_type = "t2.small" } } manage_aws_auth = false }
- Once the cluster.tf file is complete, run terraform init to initialize Terraform. Terraform will create a directory called .terraform, into which it downloads each module source declared in cluster.tf. Initialization also pulls in any providers these modules require; in this example, it downloads the aws provider. If configured, Terraform will also set up the backend for storing the state file.
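For instance, here is a minimal sketch of an S3 backend configuration that terraform init would set up; the bucket name and state key are hypothetical:
terraform {
  backend "s3" {
    bucket = "my-terraform-state-bucket"       # hypothetical bucket name
    key    = "terraform-eks/terraform.tfstate" # hypothetical state file path
    region = "us-east-1"
  }
}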
- Run terraform plan after Terraform has been successfully initialized to see what will be created.
- Terraform will plan to add a network, a subnetwork (for pods and services), an EKS cluster, and a managed node group, totaling 59 resources.
- After the plan has been validated, run terraform apply to apply the changes. As a final validation step, Terraform will re-output the plan and ask for confirmation before applying it. This step takes about 15-20 minutes to complete.
- Then, in the terminal, run the following command to configure access to the cluster:
aws eks --region us-east-1 update-kubeconfig --name my-eks-cluster
- Finally, run kubectl get nodes to see our cluster’s three worker nodes.
Conclusion
To sum up, our Support team broke down the AWS EKS Terraform module and walked through deploying a first cluster with Terraform and EKS.