Bobcares

wesupport

CLIENT AREA | Call Us 1-800-383-5193


Troubleshoot Kubernetes Errors with Linode – How to Do It

by | Jun 1, 2021

Wondering how to troubleshoot Kubernetes errors with Linode? We can help you.

As part of our Server Management Services, we assist our customers with several Kubernetes queries.

Today, let us see the steps followed by our Support Techs.

 

Troubleshoot Kubernetes Errors with Linode

 

Today, let us discuss some of the tools our Support Techs use when troubleshooting.

To troubleshoot issues with your cluster, directly view the logs generated by Kubernetes components.

 

kubectl get

  • Use the get command to list different kinds of resources in your cluster (nodes, pods, services, etc). The output will show the status for each resource returned.

For example, this output shows that a Pod is in the CrashLoopBackOff status, which means we need to investigate it further.

kubectl get pods
NAME        READY   STATUS             RESTARTS   AGE
ghost-0     0/1     CrashLoopBackOff   34         2h
mariadb-0   1/1     Running            0          2h
  • Use the --namespace flag to show resources in a certain namespace:
# Show pods in the `kube-system` namespace
kubectl get pods --namespace kube-system
  • Use the -o flag to return the resources as YAML or JSON. The Kubernetes API’s complete description for the returned resources will be shown:
# Get pods as YAML API objects
kubectl get pods -o yaml
  • Sort the returned resources with the --sort-by flag:
# Sort by name
kubectl get pods --sort-by=.metadata.name
  • Use the --selector or -l flag to get resources that match a label. This is useful for finding all Pods for a given service:
# Get pods which match the app=ghost selector
kubectl get pods -l app=ghost
  • Use the --field-selector flag to return resources which match different resource fields:
# Get all pods that are Pending
kubectl get pods --field-selector status.phase=Pending
# Get all pods that are not in the kube-system namespace
kubectl get pods --field-selector metadata.namespace!=kube-system
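Beyond these flags, the get command can shape its output further. A sketch, assuming a working cluster context; the jsonpath template is illustrative:

```shell
# Show extra columns (node name, pod IP) in the listing
kubectl get pods -o wide

# Print one "name<TAB>node" line per pod, using a jsonpath template
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.nodeName}{"\n"}{end}'
```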

 

kubectl describe

  • Use the describe command to return a detailed report of the state of one or more resources in your cluster. Pass a resource type to the describe command to get a report for each of those resources:

kubectl describe nodes

  • Pass the name of a resource to get a report for just that object:

kubectl describe pods ghost-0

  • You can also use the --selector (-l) flag to filter the returned resources, as with the get command.
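The Events section at the end of describe output is often the quickest clue to a failing Pod; you can also list events directly. A sketch, assuming a reachable cluster:

```shell
# List recent events, oldest first, to spot scheduling or image-pull failures
kubectl get events --sort-by=.metadata.creationTimestamp

# Limit events to a single namespace
kubectl get events --namespace kube-system
```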

 

kubectl logs

  • Use the logs command to print logs collected by a Pod:
kubectl logs mariadb-0
  • Use the --selector (-l) flag to print logs from all Pods that match a selector:
kubectl logs -l app=ghost
  • If a Pod’s container was killed and restarted, you can view the previous container’s logs with the --previous or -p flag:
kubectl logs -p ghost-0
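When a Pod logs heavily, it can help to limit or stream the output. A sketch, reusing the mariadb-0 Pod from above:

```shell
# Print only the last 20 lines of the log
kubectl logs --tail=20 mariadb-0

# Stream new log lines as they arrive (Ctrl+C to stop)
kubectl logs -f mariadb-0
```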

 

kubectl exec

  • You can run arbitrary commands on a Pod’s container by passing them to kubectl’s exec command:
kubectl exec mariadb-0 -- ps aux

The full syntax for the command is:

kubectl exec ${POD_NAME} -c ${CONTAINER_NAME} -- ${CMD} ${ARG1} ${ARG2} … ${ARGN}

The -c flag is optional, and is only needed when the specified Pod is running more than one container.

  • It is possible to run an interactive shell on an existing pod/container. Pass the -it flags to exec and run the shell:
kubectl exec -it mariadb-0 -- /bin/bash

Enter exit to leave this shell when you are finished.
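If the image ships no shell, one-off commands can still be run non-interactively; the sketches below assume the container image includes standard utilities like env and df:

```shell
# Inspect the container's environment variables
kubectl exec mariadb-0 -- env

# Check disk usage inside the container
kubectl exec mariadb-0 -- df -h
```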

 

Viewing Master and Worker Logs

If the Kubernetes API server isn’t working normally, then you may not be able to use kubectl to troubleshoot.

When this happens, or if you are experiencing other more fundamental issues with your cluster, you can instead log directly into your nodes and view the logs present on your filesystem.

 

Non-systemd systems

If your nodes do not run systemd, the location of logs on your master nodes should be:

/var/log/kube-apiserver.log – API server

/var/log/kube-scheduler.log – Scheduler

/var/log/kube-controller-manager.log – Replication controller manager
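A sketch of reading these files directly, assuming the paths above exist on your master node:

```shell
# Follow the API server log as new lines arrive
tail -f /var/log/kube-apiserver.log

# Search the scheduler log for errors (case-insensitive)
grep -i error /var/log/kube-scheduler.log
```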

 

Systemd systems

If your nodes run systemd, you can access the logs that kubelet generates with journalctl:

journalctl --unit kubelet
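journalctl can also narrow the time window or follow the unit live; for example:

```shell
# Only show kubelet entries from the last hour
journalctl --unit kubelet --since "1 hour ago"

# Follow new kubelet entries as they arrive
journalctl --unit kubelet --follow
```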

 

Examples to Troubleshoot Kubernetes Errors with Linode

Let us see some of the examples our Support Techs mention.

Viewing the Wrong Cluster

If your kubectl commands are not returning the resources and information you expect, then your client may be assigned to the wrong cluster context.

  • To view all of the cluster contexts on your system, run:
kubectl config get-contexts
  • To switch to another context, run:
kubectl config use-context ${CLUSTER_NAME}
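Before running further commands, you can double-check which cluster kubectl is pointed at:

```shell
# Print only the context kubectl is currently using
kubectl config current-context

# Confirm which control plane the current context talks to
kubectl cluster-info
```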

 

Can’t Provision Cluster Nodes

If you are not able to create new nodes in your cluster, you may see an error message similar to:

Error creating a Linode Instance: [400] Account Limit reached. Please open a support ticket.

This is a reference to the total number of Linode resources that can exist on your account.

  • To create new Linode instances for your cluster, you will need to either remove other instances on your account, or request a limit increase.
  • To request a limit increase, contact Linode Support.

 

Insufficient CPU or Memory

If one of your Pods requests more memory or CPU than is available on your worker nodes, then one of these scenarios may happen:

  • The Pod will remain in the Pending state, because the scheduler cannot find a node to run it on. This will be visible when running kubectl get pods.
  • The Pod may continually crash. For example, the Ghost Pod specified by Ghost’s Helm chart will show the following error in its logs when not enough memory is available:
kubectl logs ghost --tail=5
SystemError

Message: You are recommended to have at least 150 MB of memory available for smooth operation. It looks like you have ~58 MB available.
  • If your cluster has insufficient resources for a new Pod, you will need to do one or more of the following:
  1. Reduce the number of other pods/deployments/applications running on your cluster.
  2. Add a new worker node or nodes to your cluster.
  3. Add a new Node Pool with access to more resources and migrate the workload to the new pool.
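As an alternative to resizing the cluster, you can also right-size the workload with explicit resource requests and limits. A sketch using kubectl set resources; the deployment name and values are illustrative:

```shell
# Set requests (what the scheduler reserves) and limits (hard caps)
# on a hypothetical "ghost" deployment
kubectl set resources deployment ghost \
  --requests=cpu=250m,memory=256Mi \
  --limits=cpu=500m,memory=512Mi
```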

 

[Need help to troubleshoot Kubernetes errors? We can help you.]

Conclusion

In short, today we saw some of the tools our Support Techs use to troubleshoot Kubernetes errors with Linode.

