Bobcares

wesupport

CLIENT AREA | Call Us 1-800-383-5193


Expose Kubernetes services running on EKS cluster

by | Sep 3, 2021

To expose Kubernetes services running on an EKS cluster, we first create a sample application.

Here, at Bobcares, we assist our customers with several AWS queries as part of our AWS Support Services.

Today, let us see how our Support Techs do this.

Expose Kubernetes services running on EKS cluster

We then apply the ClusterIP, NodePort, and LoadBalancer Kubernetes ServiceTypes to the sample application.

  • Create a sample application

1. Initially, we define and apply a deployment file.

For example, the below command creates the file nginx-deployment.yaml, which defines a Deployment whose ReplicaSet spins up two nginx pods.

cat <<EOF > nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
EOF

2. Then we create the deployment:

kubectl apply -f nginx-deployment.yaml

3. To verify if the pods are running and have their own internal IP addresses, we run:

kubectl get pods -l 'app=nginx' -o wide | awk '{print $1" "$3" "$6}' | column -t

The output will look like this:

NAME                               STATUS   IP
nginx-deployment-574b87c764-hcxdg  Running  192.168.20.8
nginx-deployment-574b87c764-xsn9s  Running  192.168.53.240
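To see which columns the awk pipeline above selects without needing a live cluster, here is a sketch that runs the same field selection against a canned copy of `kubectl get pods -o wide` output (the pod name and IP are the example values from above):

```shell
# Canned sample of `kubectl get pods -o wide` output (header + one pod).
sample='NAME READY STATUS RESTARTS AGE IP NODE
nginx-deployment-574b87c764-hcxdg 1/1 Running 0 10s 192.168.20.8 ip-192-168-20-8'

# Same pipeline as in step 3: keep fields 1 (NAME), 3 (STATUS), 6 (IP),
# then align the columns for readability.
pods=$(printf '%s\n' "$sample" | awk '{print $1" "$3" "$6}' | column -t)
printf '%s\n' "$pods"
```

The awk program simply prints three whitespace-separated fields; `column -t` turns them back into an aligned table.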
  • Create a ClusterIP service

1. Firstly, we create the file clusterip.yaml and set type to ClusterIP.

For example:

cat <<EOF > clusterip.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-cluster-ip
spec:
  type: ClusterIP
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
EOF

2. After that, we create the ClusterIP object in Kubernetes.

Create the object and apply the clusterip.yaml file using the declarative command:

kubectl create -f clusterip.yaml

Output:

service/nginx-service-cluster-ip created

-or-

Expose a deployment of ClusterIP type using the imperative command:

kubectl expose deployment nginx-deployment --type=ClusterIP --name=nginx-service-cluster-ip

Output:

service/nginx-service-cluster-ip exposed
  • Create a NodePort service

1. To do so, we create the file, nodeport.yaml, and then set type to NodePort.

For example:

cat <<EOF > nodeport.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-nodeport
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
EOF
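When no node port is specified, Kubernetes assigns one from the 30000-32767 range. If a fixed port is needed, it can be pinned in the manifest; as a sketch, with 30080 being a hypothetical value:

```yaml
# Optional: pin the node port instead of letting Kubernetes pick one.
# 30080 is a hypothetical example; it must fall within 30000-32767.
spec:
  type: NodePort
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 30080
```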

2. Then we create the NodePort object in Kubernetes.

Create the object and apply the nodeport.yaml file using the declarative command:

kubectl create -f nodeport.yaml

-or-

Alternatively, expose the deployment as a NodePort service using the imperative command:

kubectl expose deployment nginx-deployment --type=NodePort --name=nginx-service-nodeport

Output:

service/nginx-service-nodeport exposed

3. Then we need to get information about the nginx-service:

kubectl get service/nginx-service-nodeport

Output:

NAME                    TYPE      CLUSTER-IP      EXTERNAL-IP  PORT(S)       AGE
nginx-service-nodeport  NodePort  10.100.106.151  <none>       80:30994/TCP  27s

4. Suppose the node is in a public subnet and is reachable from the internet. Then we check the node’s public IP address:

kubectl get nodes -o wide | awk '{print $1" "$2" "$7}' | column -t

Output:

NAME                                      STATUS  EXTERNAL-IP
ip-xx-x-x-xxx.eu-west-1.compute.internal  Ready   1.1.1.1
ip-xx-x-x-xxx.eu-west-1.compute.internal  Ready   2.2.2.2

-or-

On the other hand, suppose the node is in a private subnet and is reachable only inside or through a VPC. Then we check the node’s private IP address:

kubectl get nodes -o wide | awk '{print $1" "$2" "$6}' | column -t

Output:

NAME                                      STATUS  INTERNAL-IP
ip-xx-x-x-xxx.eu-west-1.compute.internal  Ready   xx.x.x.xxx
ip-xx-x-x-xxx.eu-west-1.compute.internal  Ready   xx.x.x.xxx
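A client reaches a NodePort service at a node IP plus the assigned node port. As a sketch, composing that URL from the example values in the outputs above (substitute your own node IP and port; note the node's security group must also allow inbound traffic on the node port):

```shell
# Example values taken from the outputs above; replace with your own.
NODE_IP="1.1.1.1"    # external IP from `kubectl get nodes -o wide`
NODE_PORT="30994"    # node port from `kubectl get service/nginx-service-nodeport`

# The URL a client would use to reach the NodePort service.
URL="http://${NODE_IP}:${NODE_PORT}"
echo "$URL"
```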

5. Eventually, we delete the NodePort service:

kubectl delete service nginx-service-nodeport

Output:

service "nginx-service-nodeport" deleted
  • Create a LoadBalancer service

1. To do so, we create the file loadbalancer.yaml and set type to LoadBalancer.

For example:

cat <<EOF > loadbalancer.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-loadbalancer
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
EOF

2. After that, we need to apply the loadbalancer.yaml file:

kubectl create -f loadbalancer.yaml

Output:

service/nginx-service-loadbalancer created

-or-

Alternatively, we expose a deployment of LoadBalancer type using the imperative command:

kubectl expose deployment nginx-deployment --type=LoadBalancer --name=nginx-service-loadbalancer

Output:

service/nginx-service-loadbalancer exposed

3. Eventually, we get information about nginx-service:

kubectl get service/nginx-service-loadbalancer | awk '{print $1" "$2" "$4" "$5}' | column -t

Output:

NAME                        TYPE          EXTERNAL-IP                        PORT(S)
nginx-service-loadbalancer  LoadBalancer  *****.eu-west-1.elb.amazonaws.com  80:30039/TCP

4. Once done, we need to verify that we can access the load balancer externally:

curl -s *****.eu-west-1.elb.amazonaws.com:80 | grep title

We will receive a “Welcome to nginx!” output between the HTML title tags.
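As a sketch of what that grep matches, here is the same filter run against a canned copy of the nginx welcome page's head instead of a live load balancer:

```shell
# Canned snippet of the default nginx welcome page.
page='<html>
<head>
<title>Welcome to nginx!</title>
</head>'

# `grep title` keeps only the line containing the title tags.
title=$(printf '%s\n' "$page" | grep title)
echo "$title"
```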

5. Later, we delete the LoadBalancer service:

kubectl delete service nginx-service-loadbalancer

Output:

service "nginx-service-loadbalancer" deleted

6. To create an NLB with instance targets, we add the following annotation to the service manifest (under metadata.annotations):

service.beta.kubernetes.io/aws-load-balancer-type: nlb

-or-

To create an NLB with IP targets, we deploy the AWS Load Balancer Controller and then create a load balancer that uses IP targets.
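As a sketch of the IP-target variant, the Service manifest might carry annotations like the following; these annotation keys belong to the AWS Load Balancer Controller, so verify them against the controller version in use:

```yaml
# Sketch: NLB with IP targets, handled by the AWS Load Balancer Controller
# (the controller must already be installed in the cluster).
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-loadbalancer
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
```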


Conclusion

In short, we saw how our Support Techs expose Kubernetes services running on an EKS cluster.
