Wondering how to automate the configuration of an HTTP proxy for EKS worker nodes? As luck would have it, our Support Engineers are well-versed in queries like this and plenty more.
Configuring an HTTP proxy can be a tad tedious, but having Bobcares by your side makes the task a lot easier. We have split the process into two parts to keep things simple.
Learn to automate configuration of HTTP proxy for EKS worker nodes
There are a few things to take care of before you automate the configuration of an HTTP proxy for EKS worker nodes. Our Support Team will take you through each step in detail.
Part 1: Automate configuration of HTTP proxy for EKS worker nodes
- First, find the cluster’s IP CIDR block with the following command:
$ kubectl get service kubernetes -o jsonpath='{.spec.clusterIP}'; echo
This command returns either 172.20.0.1 or 10.100.0.1, which indicates that the cluster’s IP CIDR block is either 172.20.0.0/16 or 10.100.0.0/16, respectively.
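Alternatively, on newer clusters you can read the service CIDR straight from the EKS API. This is an optional sketch; my-cluster and us-east-1 are placeholders for your own cluster name and Region:
$ aws eks describe-cluster --name my-cluster --region us-east-1 --query 'cluster.kubernetesNetworkConfig.serviceIpv4Cidr' --output text
The command prints the service IPv4 CIDR (for example, 10.100.0.0/16); older clusters that do not expose kubernetesNetworkConfig may return None, in which case rely on the kubectl check above.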
- Next, we will create a ConfigMap file named proxy-env-vars-config.yaml based on the previous step’s output. If the IP in the output falls in the 172.20.x.x range, structure your ConfigMap as seen below:
apiVersion: v1
kind: ConfigMap
metadata:
  name: proxy-environment-variables
  namespace: kube-system
data:
  HTTP_PROXY: http://customer.proxy.host:proxy_port
  HTTPS_PROXY: http://customer.proxy.host:proxy_port
  NO_PROXY: 172.20.0.0/16,localhost,127.0.0.1,VPC_CIDR_RANGE,169.254.169.254,.internal,s3.amazonaws.com,.s3.us-east-1.amazonaws.com,api.ecr.us-east-1.amazonaws.com,dkr.ecr.us-east-1.amazonaws.com,ec2.us-east-1.amazonaws.com
However, if the IP in the output belongs to the 10.100.x.x range, structure your ConfigMap as seen below:
apiVersion: v1
kind: ConfigMap
metadata:
  name: proxy-environment-variables
  namespace: kube-system
data:
  HTTP_PROXY: http://customer.proxy.host:proxy_port
  HTTPS_PROXY: http://customer.proxy.host:proxy_port
  NO_PROXY: 10.100.0.0/16,localhost,127.0.0.1,VPC_CIDR_RANGE,169.254.169.254,.internal,s3.amazonaws.com,.s3.us-east-1.amazonaws.com,api.ecr.us-east-1.amazonaws.com,dkr.ecr.us-east-1.amazonaws.com,ec2.us-east-1.amazonaws.com
Our Support Engineers would like to remind you to replace VPC_CIDR_RANGE with the IPv4 CIDR block of the cluster’s VPC.
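If you are not sure of the VPC’s CIDR block, the following sketch looks it up with the AWS CLI. The cluster name my-cluster and the Region us-east-1 are placeholders; substitute your own values:
$ VPC_ID=$(aws eks describe-cluster --name my-cluster --region us-east-1 --query 'cluster.resourcesVpcConfig.vpcId' --output text)
$ aws ec2 describe-vpcs --vpc-ids "$VPC_ID" --region us-east-1 --query 'Vpcs[].CidrBlock' --output text
The first command returns the VPC attached to the cluster, and the second prints that VPC’s IPv4 CIDR block, which goes in place of VPC_CIDR_RANGE.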
If your plan is to build an Amazon EKS cluster with private subnets, private API server endpoint access, and no internet access, we recommend creating and adding endpoints for Amazon Elastic Container Registry (Amazon ECR), Amazon Elastic Compute Cloud (Amazon EC2), Amazon Simple Storage Service (Amazon S3), and Amazon Virtual Private Cloud (Amazon VPC).
For instance, you can create the following endpoints: api.ecr.us-east-1.amazonaws.com, ec2.us-east-1.amazonaws.com, s3.amazonaws.com, dkr.ecr.us-east-1.amazonaws.com, and s3.us-east-1.amazonaws.com.
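As a rough illustration of what creating such endpoints looks like, here is a hedged AWS CLI sketch for an ECR API interface endpoint and an S3 gateway endpoint in us-east-1. The VPC ID, subnet ID, security group ID, and route table ID are placeholders you must replace, and your cluster may need further endpoints beyond these two:
$ aws ec2 create-vpc-endpoint --vpc-id vpc-0123456789abcdef0 --vpc-endpoint-type Interface --service-name com.amazonaws.us-east-1.ecr.api --subnet-ids subnet-0123456789abcdef0 --security-group-ids sg-0123456789abcdef0 --region us-east-1
$ aws ec2 create-vpc-endpoint --vpc-id vpc-0123456789abcdef0 --vpc-endpoint-type Gateway --service-name com.amazonaws.us-east-1.s3 --route-table-ids rtb-0123456789abcdef0 --region us-east-1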
Another key point to note is that the public endpoint subdomain has to be added to the NO_PROXY variable. For instance, add the .s3.us-east-1.amazonaws.com domain for Amazon Simple Storage Service in the us-east-1 AWS Region.
- After that, verify that the NO_PROXY variable in configmap/proxy-environment-variables also includes the Kubernetes cluster IP address space. For instance, 10.100.0.0/16 is used in the previous ConfigMap example when the IP range starts with 10.100.x.x.
- Next, you need to apply the ConfigMap with the following command:
$ kubectl apply -f /path/to/yaml/proxy-env-vars-config.yaml
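To confirm the ConfigMap was created as expected, you can optionally print it back out. This is just a sanity check, not a required step:
$ kubectl get configmap proxy-environment-variables -n kube-system -o yaml
Check that NO_PROXY in the output contains both the cluster’s service CIDR and the VPC CIDR you substituted earlier.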
Part 2: Automate configuration of HTTP proxy for EKS worker nodes
- Next, we will configure the kubelet and the Docker daemon by injecting user data into the worker nodes. For instance:
Content-Type: multipart/mixed; boundary="==BOUNDARY=="
MIME-Version: 1.0

--==BOUNDARY==
Content-Type: text/cloud-boothook; charset="us-ascii"

#Set the proxy hostname and port
PROXY="proxy.local:3128"
MAC=$(curl -s http://169.254.169.254/latest/meta-data/mac/)
VPC_CIDR=$(curl -s http://169.254.169.254/latest/meta-data/network/interfaces/macs/$MAC/vpc-ipv4-cidr-blocks | xargs | tr ' ' ',')

#Create the docker systemd directory
mkdir -p /etc/systemd/system/docker.service.d

#Configure yum to use the proxy
cloud-init-per instance yum_proxy_config cat << EOF >> /etc/yum.conf
proxy=http://$PROXY
EOF

#Set the proxy for future processes, and use as an include file
cloud-init-per instance proxy_config cat << EOF >> /etc/environment
http_proxy=http://$PROXY
https_proxy=http://$PROXY
HTTP_PROXY=http://$PROXY
HTTPS_PROXY=http://$PROXY
no_proxy=$VPC_CIDR,localhost,127.0.0.1,169.254.169.254,.internal,s3.amazonaws.com,.s3.us-east-1.amazonaws.com,api.ecr.us-east-1.amazonaws.com,dkr.ecr.us-east-1.amazonaws.com,ec2.us-east-1.amazonaws.com
NO_PROXY=$VPC_CIDR,localhost,127.0.0.1,169.254.169.254,.internal,s3.amazonaws.com,.s3.us-east-1.amazonaws.com,api.ecr.us-east-1.amazonaws.com,dkr.ecr.us-east-1.amazonaws.com,ec2.us-east-1.amazonaws.com
EOF

#Configure docker with the proxy
cloud-init-per instance docker_proxy_config tee <<EOF /etc/systemd/system/docker.service.d/proxy.conf >/dev/null
[Service]
EnvironmentFile=/etc/environment
EOF

#Configure the kubelet with the proxy
cloud-init-per instance kubelet_proxy_config tee <<EOF /etc/systemd/system/kubelet.service.d/proxy.conf >/dev/null
[Service]
EnvironmentFile=/etc/environment
EOF

#Reload the daemon and restart docker to reflect the proxy configuration at instance launch
cloud-init-per instance reload_daemon systemctl daemon-reload
cloud-init-per instance enable_docker systemctl enable --now --no-block docker

--==BOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"

#!/bin/bash
set -o xtrace

#Set the proxy variables before running the bootstrap.sh script
set -a
source /etc/environment

/etc/eks/bootstrap.sh ${ClusterName} ${BootstrapArguments}

# Use cfn-signal only if the node is created through an AWS CloudFormation stack and needs to signal back
# to an AWS CloudFormation resource (CFN_RESOURCE_LOGICAL_NAME) that waits for a signal from this EC2
# instance to progress through either:
# - CreationPolicy https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-creationpolicy.html
# - UpdatePolicy https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-updatepolicy.html
# cfn-signal signals back to AWS CloudFormation over HTTPS, so set the proxy for an HTTPS connection to AWS CloudFormation
/opt/aws/bin/cfn-signal --exit-code $? \
  --stack ${AWS::StackName} \
  --resource CFN_RESOURCE_LOGICAL_NAME \
  --region ${AWS::Region} \
  --https-proxy $HTTPS_PROXY

--==BOUNDARY==--
Remember to update or create docker, kubelet, and yum configuration files before you start the Docker daemon and kubelet.
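Once a node boots with this user data, you can SSH in and spot-check that the drop-in files were written. This is an optional sanity check, and the file paths assume the layout used in the user data above:
$ cat /etc/systemd/system/docker.service.d/proxy.conf
$ cat /etc/systemd/system/kubelet.service.d/proxy.conf
$ systemctl show docker --property=EnvironmentFiles
The last command should list /etc/environment, confirming that the Docker daemon is loading the proxy variables.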
- After that, we will update the aws-node and kube-proxy pods as seen below:
$ kubectl patch -n kube-system -p '{ "spec": {"template": { "spec": { "containers": [ { "name": "aws-node", "envFrom": [ { "configMapRef": {"name": "proxy-environment-variables"} } ] } ] } } } }' daemonset aws-node
$ kubectl patch -n kube-system -p '{ "spec": {"template": { "spec": { "containers": [ { "name": "kube-proxy", "envFrom": [ { "configMapRef": {"name": "proxy-environment-variables"} } ] } ] } } } }' daemonset kube-proxy
If the ConfigMap has changed, apply the updates and then set the ConfigMap in the pods. For instance:
$ kubectl set env daemonset/kube-proxy --namespace=kube-system --from=configmap/proxy-environment-variables --containers='*'
$ kubectl set env daemonset/aws-node --namespace=kube-system --from=configmap/proxy-environment-variables --containers='*'
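After patching or updating the environment, the DaemonSets roll out new pods. You can optionally watch the rollout to make sure it completes:
$ kubectl rollout status daemonset/aws-node -n kube-system
$ kubectl rollout status daemonset/kube-proxy -n kube-system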
Our Support Engineers would like to remind you to reapply all YAML modifications when these objects are upgraded. In order to update a ConfigMap to the default value, you can use the eksctl utils update-aws-node or eksctl utils update-kube-proxy commands.
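As a hedged example of those eksctl commands, with my-cluster as a placeholder cluster name (remember to re-add the proxy ConfigMap reference afterwards, since this resets the add-on manifests):
$ eksctl utils update-kube-proxy --cluster my-cluster --approve
$ eksctl utils update-aws-node --cluster my-cluster --approve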
Furthermore, the cluster’s behavior can become unpredictable if the proxy loses connectivity. To prevent this, run the proxy behind a load balancer or a service discovery namespace.
- Finally, verify that the proxy variables are used in the aws-node and kube-proxy pods with the following command:
$ kubectl describe pod kube-proxy-xxxx -n kube-system
You will get a result similar to this:
HTTPS_PROXY: <set to the key 'HTTPS_PROXY' of config map 'proxy-environment-variables'> Optional: false
HTTP_PROXY: <set to the key 'HTTP_PROXY' of config map 'proxy-environment-variables'> Optional: false
NO_PROXY: <set to the key 'NO_PROXY' of config map 'proxy-environment-variables'> Optional: false
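If you prefer to check the live environment inside a pod instead, a quick alternative is to exec into one of the kube-proxy pods. The pod name below is a placeholder, so list the pods first:
$ kubectl get pods -n kube-system -l k8s-app=kube-proxy
$ kubectl exec -n kube-system kube-proxy-xxxxx -- env | grep -i proxy
The output should include the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY values sourced from the ConfigMap.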
Conclusion
In short, the Support Team at Bobcares demonstrated how to automate the configuration of an HTTP proxy for EKS worker nodes.