Wondering how to use multiple CIDR ranges with EKS? We can help you.
Here, at Bobcares, we assist our customers with several AWS queries as part of our AWS Support Services.
Today, let us see how our Support Techs do this task.
How to use multiple CIDR ranges with EKS?
Firstly, confirm the following:
- A running Amazon EKS cluster
- Access to the AWS Command Line Interface (AWS CLI), version 1.16.284 or later
- AWS Identity and Access Management (IAM) permissions to manage an Amazon VPC
- kubectl with permissions to create custom resources and edit the DaemonSet
- jq installed on your system (available from the jq website)
- A Unix-based system with a Bash shell
Let us now see the steps our Support Techs follow to perform this task.
Firstly, set up your VPC. Then, configure the CNI plugin to use the new CIDR range.
Add additional CIDR ranges to expand your VPC network
1. Firstly, find your VPCs.
If your VPC has a Name tag, then run the following command to find your VPC ID:
VPC_ID=$(aws ec2 describe-vpcs --filters Name=tag:Name,Values=yourVPCName | jq -r '.Vpcs[].VpcId')
If your VPC doesn’t have a Name tag, then run the following command to list all the VPCs in your AWS Region:
aws ec2 describe-vpcs --filters | jq -r '.Vpcs[].VpcId'
2. To assign your VPC ID to the VPC_ID variable, run the following command:
export VPC_ID=vpc-xxxxxxxxxxxx
3. To associate an additional CIDR block in the 100.64.0.0/16 range with the VPC, run the following command:
aws ec2 associate-vpc-cidr-block --vpc-id $VPC_ID --cidr-block 100.64.0.0/16
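As an optional sanity check (not part of the original steps), you can confirm that the new CIDR block is associated with the VPC by inspecting its CIDR block associations:
aws ec2 describe-vpcs --vpc-ids $VPC_ID --query 'Vpcs[].CidrBlockAssociationSet[].{CIDR:CidrBlock,State:CidrBlockState.State}' --output table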
Create subnets with a new CIDR range
1. To list all the Availability Zones in your AWS Region, run the following command:
aws ec2 describe-availability-zones --region us-east-1 --query 'AvailabilityZones[*].ZoneName'
Do not forget to replace us-east-1 with your AWS Region.
2. Choose the Availability Zones where you want to add the subnets, and then assign those Availability Zones to variables. For example:
export AZ1=us-east-1a
export AZ2=us-east-1b
export AZ3=us-east-1c
Note: You can add more Availability Zones by creating more variables.
3. To create new subnets under the VPC with the new CIDR range, run the following commands:
CUST_SNET1=$(aws ec2 create-subnet --cidr-block 100.64.0.0/19 --vpc-id $VPC_ID --availability-zone $AZ1 | jq -r .Subnet.SubnetId)
CUST_SNET2=$(aws ec2 create-subnet --cidr-block 100.64.32.0/19 --vpc-id $VPC_ID --availability-zone $AZ2 | jq -r .Subnet.SubnetId)
CUST_SNET3=$(aws ec2 create-subnet --cidr-block 100.64.64.0/19 --vpc-id $VPC_ID --availability-zone $AZ3 | jq -r .Subnet.SubnetId)
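Optionally, to confirm that the new subnets were created in the expected Availability Zones with CIDR blocks from the 100.64.0.0/16 range, you can describe them:
aws ec2 describe-subnets --subnet-ids $CUST_SNET1 $CUST_SNET2 $CUST_SNET3 --query 'Subnets[].{ID:SubnetId,CIDR:CidrBlock,AZ:AvailabilityZone}' --output table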
Tag the new subnets
For clusters running on Kubernetes 1.18 and earlier, you must tag all subnets so that Amazon EKS can discover the subnets.
1. Add a name tag for your subnets by setting a key-value pair. For example:
aws ec2 create-tags --resources $CUST_SNET1 --tags Key=Name,Value=SubnetA
aws ec2 create-tags --resources $CUST_SNET2 --tags Key=Name,Value=SubnetB
aws ec2 create-tags --resources $CUST_SNET3 --tags Key=Name,Value=SubnetC
2. For clusters running on Kubernetes 1.18 and earlier, tag the subnets so that Amazon EKS can discover them. For example:
aws ec2 create-tags --resources $CUST_SNET1 --tags Key=kubernetes.io/cluster/yourClusterName,Value=shared
aws ec2 create-tags --resources $CUST_SNET2 --tags Key=kubernetes.io/cluster/yourClusterName,Value=shared
aws ec2 create-tags --resources $CUST_SNET3 --tags Key=kubernetes.io/cluster/yourClusterName,Value=shared
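As an optional check, you can list the tags on the new subnets to verify that both the Name tag and the kubernetes.io/cluster tag were applied:
aws ec2 describe-tags --filters Name=resource-id,Values=$CUST_SNET1,$CUST_SNET2,$CUST_SNET3 --query 'Tags[].{Resource:ResourceId,Key:Key,Value:Value}' --output table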
Associate your new subnet to a route table
1. To list all the route tables under the VPC, run the following command:
aws ec2 describe-route-tables --filters Name=vpc-id,Values=$VPC_ID | jq -r '.RouteTables[].RouteTableId'
2. Choose the route table that you want to associate with your subnets, and then export it to a variable.
Replace rtb-xxxxxxxxx with the route table ID from step 1:
export RTASSOC_ID=rtb-xxxxxxxxx
3. Associate the route table to all new subnets. For example:
aws ec2 associate-route-table --route-table-id $RTASSOC_ID --subnet-id $CUST_SNET1
aws ec2 associate-route-table --route-table-id $RTASSOC_ID --subnet-id $CUST_SNET2
aws ec2 associate-route-table --route-table-id $RTASSOC_ID --subnet-id $CUST_SNET3
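Optionally, verify that all three subnets now appear under the route table’s associations:
aws ec2 describe-route-tables --route-table-ids $RTASSOC_ID --query 'RouteTables[].Associations[].{Subnet:SubnetId,RouteTable:RouteTableId}' --output table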
Configure the CNI plugin to use the new CIDR range
1. To verify that you have the latest version of the CNI plugin, run the following command:
kubectl describe daemonset aws-node --namespace kube-system | grep Image | cut -d "/" -f 2
If your version of the CNI plugin is earlier than 1.5.3, then run the following command to update to the latest version:
kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/master/config/v1.5/aws-k8s-cni.yaml
2. To enable custom network configuration for the CNI plugin, run the following command:
kubectl set env daemonset aws-node -n kube-system AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=true
3. To add the ENIConfig label for identifying your worker nodes, run the following command:
kubectl set env daemonset aws-node -n kube-system ENI_CONFIG_LABEL_DEF=failure-domain.beta.kubernetes.io/zone
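To confirm that both environment variables are now set on the aws-node DaemonSet (an optional check), you can list them:
kubectl set env daemonset aws-node -n kube-system --list | grep -E 'AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG|ENI_CONFIG_LABEL_DEF'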
4. To install the ENIConfig custom resource definition (CRD), run the following command:
cat << EOF | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: eniconfigs.crd.k8s.amazonaws.com
spec:
  scope: Cluster
  group: crd.k8s.amazonaws.com
  version: v1alpha1
  names:
    plural: eniconfigs
    singular: eniconfig
    kind: ENIConfig
EOF
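Once applied, you can optionally confirm that the CRD is registered in the cluster:
kubectl get crd eniconfigs.crd.k8s.amazonaws.com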
5. To create an ENIConfig custom resource for all subnets and Availability Zones, run the following commands:
cat <<EOF | kubectl apply -f -
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: $AZ1
spec:
  subnet: $CUST_SNET1
EOF
cat <<EOF | kubectl apply -f -
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: $AZ2
spec:
  subnet: $CUST_SNET2
EOF
cat <<EOF | kubectl apply -f -
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: $AZ3
spec:
  subnet: $CUST_SNET3
EOF
Note: The ENIConfig name must match the Availability Zone of your worker nodes.
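Optionally, to confirm that one ENIConfig exists per Availability Zone and that each points at the correct subnet, you can list and inspect them:
kubectl get eniconfigs.crd.k8s.amazonaws.com
kubectl describe eniconfig $AZ1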
6. Launch the new worker nodes, and then terminate the old worker nodes.
Note: Completing step 6 allows the CNI plugin (ipamd) to allocate IP addresses from the new CIDR range on the new worker nodes.
If you’re using custom networking, then the primary network interface isn’t used for pod placement. In this scenario, you must first update max-pods using the following formula:
maxPods = (number of interfaces - 1) * (max IPv4 addresses per interface - 1) + 2
Then, update the user data of your self-managed nodes to pass the new max-pods value. For the BootstrapArguments field, enter the following:
--use-max-pods false --kubelet-extra-args '--max-pods=<20>'
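As an illustration only (assuming an m5.large worker node, which supports 3 network interfaces with 10 IPv4 addresses each), the formula works out as follows:
ENIS=3              # network interfaces on an m5.large (assumed instance type)
IPS_PER_ENI=10      # IPv4 addresses per interface on an m5.large
MAX_PODS=$(( (ENIS - 1) * (IPS_PER_ENI - 1) + 2 ))
echo $MAX_PODS      # prints 20, the value to pass via --max-pods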
If you created a new group of self-managed nodes, then you must update the ENIConfig resources. Update each ENIConfig with the security group that your CloudFormation stack created, along with the new subnets. For example:
cat <<EOF | kubectl apply -f -
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: $AZ1
spec:
  securityGroups:
    - sg-xxxxxxxxxxxx
  subnet: $CUST_SNET1
EOF
cat <<EOF | kubectl apply -f -
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: $AZ2
spec:
  securityGroups:
    - sg-xxxxxxxxxxxx
  subnet: $CUST_SNET2
EOF
cat <<EOF | kubectl apply -f -
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: $AZ3
spec:
  securityGroups:
    - sg-xxxxxxxxxxxx
  subnet: $CUST_SNET3
EOF
Note: If you’re using managed node groups, then update the cluster security groups. Replace sg-xxxxxxxxxxxx with your cluster security group.
7. To test the configuration by launching pods, run the following commands:
kubectl run nginx --image nginx --replicas 10
kubectl get pods -o wide
Now, you see that ten new pods are added and scheduled on the new worker nodes, with IP addresses assigned from the new CIDR range.
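To confirm that the pods received IP addresses from the 100.64.0.0/16 range (an optional check), you can filter the output:
kubectl get pods -o wide | grep '100.64.'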
[Need help with the resolution? We’d be happy to help you out]
Conclusion
In short, we saw how our Support Techs use multiple CIDR ranges with EKS.