
nodeSelector in Kubernetes: Explained

Jul 20, 2022

nodeSelector is the simplest recommended form of node selection constraint in Kubernetes.

Bobcares answers all questions, no matter the size, as part of our Server Management services.

Let us take a look at nodeSelector in Kubernetes in detail.

nodeSelector in Kubernetes


The simplest recommended form of node selection constraint is nodeSelector. A user can add the nodeSelector field to their Pod specification and define the node labels they want the target node to have. Kubernetes then schedules the Pod only onto nodes that have every label the user specifies.

Kubernetes administrators typically don't have to select a node to schedule their Pods on. Instead, the Kubernetes scheduler chooses an appropriate node for each Pod. Automatic node selection prevents users from choosing unfit nodes or nodes with insufficient resources.

The Kubernetes scheduler picks a suitable node by comparing the node's CPU and RAM capacity against the Pod's resource requests. For each of these resource types, it ensures that the sum of the resource requests of all containers scheduled onto the node stays below the node's capacity. This mechanism ensures that Pods land only on nodes with enough free resources.
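As a rough illustration of what the scheduler compares, a Pod declares its needs through resource requests in its spec. The Pod name, image, and numbers below are illustrative assumptions, not values from this article's cluster:

apiVersion: v1
kind: Pod
metadata:
  name: request-demo               # hypothetical Pod name
spec:
  containers:
  - name: app
    image: nginx                   # illustrative image
    resources:
      requests:
        cpu: "500m"                # the scheduler only considers nodes with at least 0.5 CPU unreserved
        memory: "256Mi"            # ...and at least 256 MiB not already requested by other Pods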

Pods to land on a specific node

There are times, however, when the user wants their Pods to land on a specific node. For example:

  1. The Pod(s) must land on a machine with an SSD attached.
  2. The user wishes to co-locate Pods on a specific machine or machines in the same availability zone.
  3. As these services are highly dependent on each other, the user wishes to co-locate a Pod from one Service with a Pod from another Service on the same node. For example, you might want to run a web server alongside an in-memory cache store like Memcached (see the example below).

A number of primitives in Kubernetes address these scenarios:

  1. nodeSelector — A simple Pod scheduling feature that lets the user schedule a Pod onto a node whose labels match the nodeSelector labels the user specifies.
  2. Node Affinity — An improved version of nodeSelector, introduced as alpha in Kubernetes 1.2. It provides a more expressive syntax for controlling how Pods are scheduled to specific nodes (see the sketch after this list).
  3. Inter-Pod Affinity — Enables co-location by scheduling Pods onto nodes where specific Pods are already running (also covered in the sketch below).
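To make the second and third primitives concrete, here is a hedged sketch of a single Pod spec that uses both. The labels (disktype=ssd, app=store), the image, and the Pod name are illustrative assumptions, not values taken from the cluster used later in this article:

apiVersion: v1
kind: Pod
metadata:
  name: web-server                 # hypothetical Pod name
spec:
  containers:
  - name: web
    image: nginx                   # illustrative image
  affinity:
    # Node Affinity: a more expressive replacement for nodeSelector.
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In           # operators such as In, NotIn, and Exists go beyond plain equality
            values:
            - ssd
    # Inter-Pod Affinity: co-locate with Pods labeled app=store (e.g. a Memcached cache).
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - store
        topologyKey: kubernetes.io/hostname   # "same node"; a zone label here would mean "same zone"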

As previously stated, nodeSelector is an early Kubernetes feature designed for manual Pod scheduling. nodeSelector allows a Pod to be scheduled only on nodes whose labels are identical to the labels in the nodeSelector. These are key-value pairs defined within the PodSpec.

Applying nodeSelector to the Pod

There are several steps to applying a nodeSelector to a Pod. First, the user must assign a label to a node for the nodeSelector to reference later. To obtain the names of the cluster's nodes along with their labels, execute:

kubectl get nodes --show-labels
NAME    STATUS  ROLES                     AGE  VERSION  LABELS
host01  Ready   controlplane,etcd,worker  61d  v1.10.5  beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=host01,node-role.kubernetes.io/controlplane=true,node-role.kubernetes.io/etcd=true,node-role.kubernetes.io/worker=true
host02  Ready   etcd,worker               61d  v1.10.5  beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=host02,node-role.kubernetes.io/etcd=true,node-role.kubernetes.io/worker=true
host03  Ready   etcd,worker               61d  v1.10.5  beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=host03,node-role.kubernetes.io/etcd=true,node-role.kubernetes.io/worker=true

The cluster has three nodes, as shown: host01, host02, and host03. Next, choose the node to label. For example, to add a new label with the key disktype and the value ssd to the host02 node, which has SSD storage, execute:

kubectl label nodes host02 disktype=ssd
node “host02” labeled
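The same kubectl label command can also change or remove a label later on, which is useful if the labeling scheme evolves. Two hedged variants (the nvme value is purely illustrative):

kubectl label nodes host02 disktype=nvme --overwrite   # replace the value of an existing label
kubectl label nodes host02 disktype-                   # a trailing dash removes the label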

These commands follow the general format kubectl label nodes <node-name> <label-key>=<label-value>. Finally, run the following to verify that the new label is in place:

kubectl get nodes --show-labels
NAME    STATUS  ROLES                     AGE  VERSION  LABELS
host01  Ready   controlplane,etcd,worker  61d  v1.10.5  beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=host01,node-role.kubernetes.io/controlplane=true,node-role.kubernetes.io/etcd=true,node-role.kubernetes.io/worker=true
host02  Ready   etcd,worker               61d  v1.10.5  beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=host02,disktype=ssd,node-role.kubernetes.io/etcd=true,node-role.kubernetes.io/worker=true
host03  Ready   etcd,worker               61d  v1.10.5  beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=host03,node-role.kubernetes.io/etcd=true,node-role.kubernetes.io/worker=true

host02 now carries the new label disktype=ssd. This view shows the labels of every node at once; to inspect a single node, the user can also run:

kubectl describe node "host02"
Name:   host02
Roles:  node
Labels: beta.kubernetes.io/arch=amd64,
        beta.kubernetes.io/os=linux,
        kubernetes.io/hostname=host02,
        disktype=ssd,
        node-role.kubernetes.io/etcd=true,
        node-role.kubernetes.io/worker=true

Along with the newly added disktype=ssd label, the user can see labels such as beta.kubernetes.io/arch and kubernetes.io/hostname. These are standard labels that come with every Kubernetes node; they record details such as the node's architecture, operating system, instance type, region and zone, and hostname:

kubernetes.io/hostname
failure-domain.beta.kubernetes.io/zone
failure-domain.beta.kubernetes.io/region
beta.kubernetes.io/instance-type
beta.kubernetes.io/os
beta.kubernetes.io/arch
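Both the custom disktype label and these built-in labels can be used as label selectors to filter node listings, which is a quick way to double-check labeling work. For example:

kubectl get nodes -l disktype=ssd                  # only nodes carrying the custom label
kubectl get nodes -l beta.kubernetes.io/os=linux   # filtering on a built-in label works the same way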

Assigning a Pod to the node

To assign a Pod to the node with the newly added label, the user must specify a nodeSelector field in the PodSpec. A user’s manifest could look something like this:

apiVersion: v1
kind: Pod
metadata:
  name: httpd
  labels:
    env: prod
spec:
  containers:
  - name: httpd
    image: httpd
    imagePullPolicy: IfNotPresent
  nodeSelector:
    disktype: ssd

In this case, the user adds the spec.nodeSelector field to the PodSpec with the label disktype: ssd, which is identical to the node's label. Save the configuration to test-pod.yaml and run:

kubectl create -f test-pod.yaml

Once this command completes, the httpd Pod will be scheduled onto the node with the disktype=ssd label. The user can confirm this by running kubectl get pods -o wide and inspecting the NODE column the Pod was assigned to.

NAME                       READY  STATUS   RESTARTS  IP           NODE
pod-test-657c7bccfd-n4jc8  2/2    Running  0         172.17.0.7   host03
httpd                      2/2    Running  0         172.17.0.21  host02
........
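Had no node carried a matching label, the Pod would instead remain in the Pending state, and describing it would surface a FailedScheduling event. The output below is an abridged, illustrative sketch of what that typically looks like on a three-node cluster, not output captured from this one:

kubectl describe pod httpd
...
Events:
  Type     Reason            Message
  ----     ------            -------
  Warning  FailedScheduling  0/3 nodes are available: 3 node(s) didn't match node selector.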

[Need assistance with similar queries? We are here to help]

Conclusion

To conclude, nodeSelector is a simple feature for Pod scheduling, but it has its limitations. With node affinity, introduced as alpha in Kubernetes 1.2, Kubernetes users now have a more flexible and expressive mechanism for Pod scheduling that can do more than nodeSelector.
