
Kubernetes: Pods remain in Pending phase

Nov 28, 2021

Stuck with Kubernetes: Pods remain in Pending phase error? We can help you.

As part of our Server Management Services, we assist our customers with several Kubernetes queries.

Today, let us see how our Support Techs proceed to resolve it.

How to resolve the Kubernetes: Pods remain in Pending phase error?

A typical error will look as shown below:

Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedSync 47m (x27 over 3h) kubelet error determining status: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Warning FailedCreatePodSandBox 7m56s (x38 over 3h23m) kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "xxx": operation timeout: context deadline exceeded
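Before digging in, it can help to confirm which pods are actually stuck. A quick way, assuming a reasonably recent kubectl, is to list pods in the Pending phase across all namespaces:

$ kubectl get pods --all-namespaces --field-selector=status.phase=Pending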

First, let us see the troubleshooting steps followed by our Support Techs.

1. Firstly, investigate the failing pod

Check the logs of the pod:

$ kubectl logs pod-xxx

Check the events of the pod:

$ kubectl describe pod pod-xxx
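If the event list is long, you can also pull just the events that reference this pod. This is an optional sketch; pod-xxx and namespace-xxx are placeholders from the example above:

$ kubectl get events -n namespace-xxx --field-selector involvedObject.name=pod-xxx --sort-by='.lastTimestamp'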

2. Then, investigate the node the pod is meant to be scheduled on

Describe the pod to see which node it is meant to be running on:

$ kubectl describe pod pod-xxx

The output will start with something like this; look for the “Node:” part:

Name: pod-xxx
Namespace: namespace-xxx
Priority: 0
Node: node-xxx
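If you only need the node name, a jsonpath query gives it directly (again, pod-xxx and namespace-xxx are placeholders):

$ kubectl get pod pod-xxx -n namespace-xxx -o jsonpath='{.spec.nodeName}'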

3. Next, investigate the node

Check the resources of the nodes:

$ kubectl top nodes

Check the events of the node on which the pod was meant to be scheduled:

$ kubectl describe node node-xxx
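In the describe output, pay attention to the node conditions (Ready, MemoryPressure, DiskPressure, PIDPressure). As a rough sketch, a jsonpath query can summarise them:

$ kubectl get node node-xxx -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'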

Now, let us see the steps our Support Techs follow to resolve it:

1. Create Extra Node

You may need to create a new node before draining the problematic one.

Check with kubectl top nodes whether the remaining nodes have enough spare capacity to schedule the pods that will be evicted.

If they do not, add an extra node before you drain.

In our case, we were using a managed GKE cluster, so we added a new node via the console.
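If you prefer the command line over the console, resizing the node pool with gcloud looks roughly like this; CLUSTER_NAME, POOL_NAME, ZONE, and the node count are placeholders for your own values:

$ gcloud container clusters resize CLUSTER_NAME --node-pool POOL_NAME --num-nodes 4 --zone ZONE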

2. Drain Problematic Node

Once you are sure there is enough capacity among the remaining nodes to schedule the pods on the problematic node, you can go ahead and drain it.

$ kubectl drain node-xxx
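Note that a plain drain will refuse to evict DaemonSet-managed pods and pods using emptyDir volumes. If you hit those errors, the usual flags look like this (on older kubectl versions the second flag is --delete-local-data):

$ kubectl drain node-xxx --ignore-daemonsets --delete-emptydir-data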

3. Delete Problematic Node

Check that all scheduled pods have been drained off the node:

$ kubectl get nodes
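kubectl get nodes should now show the drained node as SchedulingDisabled. To double-check that no workload pods are left on it (node-xxx is a placeholder), you can filter pods by node:

$ kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=node-xxx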

Once done, you can delete the node:

$ kubectl delete node node-xxx

[Stuck in between? We’d be glad to assist you]

Conclusion

In short, today we saw the steps our Support Techs follow to resolve the Kubernetes: Pods remain in Pending phase error.

