
Pod status in Amazon EKS

Jun 23, 2021

Wondering how to check pod status in Amazon EKS? We can help you.

Here, at Bobcares, we assist our customers with several AWS queries as part of our AWS Support Services.

Today, let us see the steps followed by our Support Techs to check pod status.

 

How to check pod status in Amazon EKS?

Sometimes, Amazon EKS pods running on Amazon EC2 instances or on a managed node group get stuck.

Let us see the steps followed by our Support Techs to check its status and get the pod running.

Find out the status of your pod

  • Firstly, run the below command to get the information from the events history of your pod:
$ kubectl describe pod YOUR_POD_NAME
  • Based on the status of your pod, complete the steps in one of the following sections: Your pod is in the Pending state, Your pod is in the Waiting state, or Your pod is in the CrashLoopBackOff state.
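
As an optional quick check, you can also list the pod to see its high-level status (Pending, Waiting, CrashLoopBackOff, and so on) directly; the -o wide flag simply adds node and IP columns:

$ kubectl get pod YOUR_POD_NAME -o wide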

Now, let us see what action to take in each case.

Your pod is in the Pending state

Pods in the Pending state can’t be scheduled onto a node.

Your pod could be in the Pending state because of insufficient resources on the available worker nodes, or because you've defined an occupied hostPort on the pod.

If you have insufficient resources on the available worker nodes, then delete unnecessary pods, or add more worker nodes.
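
To confirm whether the worker nodes really are short on CPU or memory, one option is to look at each node's allocated resources; the grep below is just a convenience to trim the output:

$ kubectl describe nodes | grep -A 7 "Allocated resources"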

If you’re defining a hostPort for your pod, then consider the following:

  • There are a limited number of places that a pod can be scheduled when you bind a pod to a hostPort.
  • Don’t specify a hostPort unless it’s necessary, because the hostIP, hostPort, and protocol combination must be unique.
  • If you must specify a hostPort, then you can schedule at most as many pods as there are worker nodes.
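
For illustration, a hypothetical pod manifest like the one below binds port 8080 on whichever node it lands on, so only one such pod can be scheduled per node; the name and image are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: hostport-example            # hypothetical pod name
spec:
  containers:
  - name: web
    image: nginx:1.21               # hypothetical image
    ports:
    - containerPort: 80
      hostPort: 8080                # occupies port 8080 on the node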

Your pod is in the Waiting state

Your pod can be in the Waiting state because of an incorrect Docker image or repository name, a lack of permissions, or because the image doesn't exist.

If you have the incorrect Docker image or repository name, then complete the following:

  • Firstly, confirm that the image and repository names are correct by logging in to Docker Hub, Amazon ECR, or another container image repository.
  • Then, compare the repository or image from the repository with the repository or image name specified in the pod specification.
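
A quick way to read the image name straight out of the pod specification for this comparison (replace the placeholder with your pod name) is:

$ kubectl get pod YOUR_POD_NAME -o jsonpath='{.spec.containers[*].image}'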

If the image doesn’t exist or you lack permissions, then complete the following:

  • Firstly, verify that the image specified is available in the repository and that the correct permissions are configured to allow the image to be pulled.
  • To confirm that image pull is possible and to rule out general networking and repository permission issues, manually pull the image from the Amazon EKS worker nodes with Docker:
$ docker pull yourImageURI:yourImageTag
  • To verify that the image exists, check that both the image and tag are present in either Docker Hub or Amazon ECR.
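
If the image lives in Amazon ECR, you can also check for it from the AWS CLI; the repository name and tag below are placeholders:

$ aws ecr describe-images --repository-name YOUR_REPOSITORY --image-ids imageTag=yourImageTag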

 

Your pod is in the CrashLoopBackOff state

Pods stuck in CrashLoopBackOff are starting, crashing, starting again, and then crashing again repeatedly.

If you receive the “Back-Off restarting failed container” output message, then your container probably exited soon after Kubernetes started the container.

To look for errors in the logs of the current pod, run the following command:

$ kubectl logs YOUR_POD_NAME

To look for errors in the logs of the previous pod that crashed, run the following command:

$ kubectl logs --previous YOUR_POD_NAME

If the liveness probe isn't returning a successful status, verify that it is configured correctly for the application.
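
For reference, a liveness probe in the container spec generally looks something like the sketch below; the path, port, and timings here are hypothetical and must match your application:

    livenessProbe:
      httpGet:
        path: /healthz            # hypothetical health endpoint
        port: 8080
      initialDelaySeconds: 10     # give the application time to start
      periodSeconds: 5            # probe every 5 seconds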

If the pod is still stuck after completing the steps in the previous sections, try the following steps:
  • Firstly, to confirm that worker nodes exist in the cluster and are in Ready status, run the following command:
$ kubectl get nodes

The output should look similar to the following:

NAME                                          STATUS   ROLES    AGE   VERSION
ip-192-168-6-51.us-east-2.compute.internal    Ready    <none>   25d   v1.14.6-eks-5047ed
ip-192-168-86-33.us-east-2.compute.internal   Ready    <none>   25d   v1.14.6-eks-5047ed

If the nodes are not in the cluster, add worker nodes.

  • To check the version of the Kubernetes cluster, run the following command:
$ kubectl version --short

The output should look similar to the following:

Client Version: v1.14.6-eks-5047ed
Server Version: v1.14.9-eks-c0eccc
  • To check the version of the Kubernetes worker node, run the following command:
$ kubectl get node -o custom-columns=NAME:.metadata.name,VERSION:.status.nodeInfo.kubeletVersion

The output should look similar to the following:

NAME                                          VERSION
ip-192-168-6-51.us-east-2.compute.internal    v1.14.6-eks-5047ed
ip-192-168-86-33.us-east-2.compute.internal   v1.14.6-eks-5047ed
  • Based on the output from the previous two commands, confirm that the Kubernetes server version for the cluster matches the version of the worker nodes within an acceptable version skew.

If the cluster and worker node versions are incompatible, create a new node group with eksctl or with AWS CloudFormation (self-managed nodes).

–or–

Create a new managed node group (Kubernetes: v1.14, platform: eks.3 and above) using a compatible Kubernetes version. Then, delete the node group with the incompatible Kubernetes version.
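
As a rough sketch, creating a replacement node group with eksctl looks like the following; the cluster name, node group name, instance type, and node count are placeholders to adjust for your environment:

$ eksctl create nodegroup --cluster YOUR_CLUSTER_NAME --name new-nodegroup --node-type t3.medium --nodes 2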

  • To confirm that the Kubernetes control plane can communicate with the worker nodes, verify firewall rules against the recommended rules in Amazon EKS Security Group Considerations, and then verify that the nodes are in Ready status.

 

[Need help with an AWS issue? We are here to help you.]

 

Conclusion

Today, we discussed the steps our Support Techs follow to check the pod status in Amazon EKS.

