Wondering how to use NodeLocal DNSCache in Kubernetes clusters? We can help you.
At Bobcares, we offer solutions for every query, big and small, as a part of our Server Management Service.
Let’s take a look at how our Support Team uses NodeLocal DNSCache.
How to use NodeLocal DNSCache in Kubernetes clusters?
NodeLocal DNSCache improves Cluster DNS performance by running a DNS caching agent on cluster nodes as a DaemonSet.
In today’s architecture, Pods in ClusterFirst DNS mode reach out to a kube-dns serviceIP for DNS queries.
This is translated to a kube-dns/CoreDNS endpoint via iptables rules added by kube-proxy.
With this new architecture, Pods will reach out to the DNS caching agent running on the same node, thereby avoiding iptables DNAT rules and connection tracking.
The local caching agent will query the kube-dns service for cache misses of cluster hostnames.
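For instance, a Pod running in ClusterFirst DNS mode typically has an /etc/resolv.conf similar to the following, where the nameserver is the kube-dns service IP (10.96.0.10 and the search domains here are only illustrative; your cluster may use different values):
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5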
How to Configure?
The local listen IP address for NodeLocal DNSCache can be any address that is guaranteed not to collide with any existing IP in your cluster.
It’s recommended to use an address with a link-local scope, for example, from the 169.254.0.0/16 range.
This feature can be enabled using the following steps:
- Firstly, prepare a manifest similar to the sample nodelocaldns.yaml and save it as nodelocaldns.yaml.
- If using IPv6, the CoreDNS configuration file needs to enclose all the IPv6 addresses in square brackets if used in IP:Port format.
- If you are using the sample manifest from the previous point, this requires modifying configuration line L70 of the manifest like this:
health [__PILLAR__LOCAL__DNS__]:8080
- Then, substitute the variables in the manifest with the right values:
kubedns=$(kubectl get svc kube-dns -n kube-system -o jsonpath='{.spec.clusterIP}')
domain=<cluster-domain>
localdns=<node-local-address>
<cluster-domain> is “cluster.local” by default. <node-local-address> is the local listen IP address chosen for NodeLocal DNSCache.
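For example, on a typical kubeadm-based cluster the variables might resolve to values like the following (illustrative only; always use the values from your own cluster):
kubedns=10.96.0.10
domain=cluster.local
localdns=169.254.20.10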
- If kube-proxy is running in IPTABLES mode:
sed -i "s/__PILLAR__LOCAL__DNS__/$localdns/g; s/__PILLAR__DNS__DOMAIN__/$domain/g; s/__PILLAR__DNS__SERVER__/$kubedns/g" nodelocaldns.yaml
__PILLAR__CLUSTER__DNS__ and __PILLAR__UPSTREAM__SERVERS__ will be populated by the node-local-dns pods. In this mode, node-local-dns pods listen on both the kube-dns service IP as well as <node-local-address>, so pods can look up DNS records using either IP address, as the quick check below shows.
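As a quick check (a hedged example; the image name and the addresses are only illustrations, any image that ships dig will do), you can query both addresses from a test Pod and expect the same answer:
kubectl run -it --rm dns-test --image=tutum/dnsutils --restart=Never -- dig +short @169.254.20.10 kubernetes.default.svc.cluster.local
kubectl run -it --rm dns-test --image=tutum/dnsutils --restart=Never -- dig +short @10.96.0.10 kubernetes.default.svc.cluster.local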
- If kube-proxy is running in IPVS mode:
sed -i "s/__PILLAR__LOCAL__DNS__/$localdns/g; s/__PILLAR__DNS__DOMAIN__/$domain/g; s/,__PILLAR__DNS__SERVER__//g; s/__PILLAR__CLUSTER__DNS__/$kubedns/g" nodelocaldns.yaml
- Then, run kubectl create -f nodelocaldns.yaml
- If using kube-proxy in IPVS mode, the --cluster-dns flag passed to kubelet needs to be modified to use the <node-local-address> that NodeLocal DNSCache is listening on (see the sketch below). Otherwise, there is no need to modify the value of the --cluster-dns flag, since NodeLocal DNSCache listens on both the kube-dns service IP as well as <node-local-address>.
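A minimal sketch of that kubelet change, assuming <node-local-address> is 169.254.20.10 and the kubelet reads a KubeletConfiguration file (adapt this to however kubelet is configured in your cluster):
# excerpt from the kubelet configuration file, e.g. /var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
clusterDNS:
- 169.254.20.10
# or, if kubelet is launched with command-line flags: --cluster-dns=169.254.20.10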
- Once enabled, node-local-dns Pods will run in the kube-system namespace on each of the cluster nodes.
This Pod runs CoreDNS in cache mode, so all CoreDNS metrics exposed by the different plugins will be available on a per-node basis.
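For example, once the DaemonSet is up you can confirm the Pods and peek at the cache metrics (the label selector and the 9253 metrics port below come from the sample manifest and are assumptions; adjust them if your manifest differs):
kubectl get pods -n kube-system -l k8s-app=node-local-dns -o wide
# run on a cluster node; 9253 is the Prometheus port from the sample Corefile
curl -s http://localhost:9253/metrics | grep coredns_cache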
You can disable this feature by removing the DaemonSet, using kubectl delete -f <manifest>.
You should also revert any changes you made to the kubelet configuration.
StubDomains and Upstream server Configuration
StubDomains and upstream servers specified in the kube-dns ConfigMap in the kube-system namespace are automatically picked up by node-local-dns pods.
The node-local-dns ConfigMap can also be modified directly with the stubDomain configuration in the Corefile format.
Some cloud providers might not allow modifying node-local-dns ConfigMap directly.
In those cases, the kube-dns ConfigMap can be updated.
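As an illustration (the domain and server addresses below are placeholders, not recommendations), a kube-dns ConfigMap carrying a stub domain and custom upstream servers could look roughly like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  stubDomains: |
    {"example.internal": ["10.10.0.53"]}
  upstreamNameservers: |
    ["8.8.8.8", "1.1.1.1"]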
Setting memory limits
node-local-dns pods use memory for storing cache entries and processing queries.
Since they do not watch Kubernetes objects, the cluster size and the number of Services/Endpoints do not directly affect memory usage.
Memory usage is influenced by the DNS query pattern. According to the CoreDNS docs:
The default cache size is 10000 entries, which uses about 30 MB when completely filled.
This would be the memory usage for each server block (if the cache gets completely filled).
Memory usage can be reduced by specifying smaller cache sizes.
The number of concurrent queries is linked to the memory demand, because each extra goroutine used for handling a query requires an amount of memory.
You can set an upper limit using the max_concurrent option in the forward plugin.
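A hedged sketch of these knobs, assuming the cache size is tuned in the node-local-dns Corefile, the concurrency cap in its forward block, and the memory limit on the DaemonSet container (all values here are illustrative starting points, not recommendations):
# fragment of the cluster.local server block in the node-local-dns Corefile
cluster.local:53 {
    cache {
        success 5000
        denial 2500
    }
    forward . __PILLAR__CLUSTER__DNS__ {
        max_concurrent 1000
    }
}
# container resources in the node-local-dns DaemonSet spec
resources:
  limits:
    memory: 50Mi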
If a node-local-dns pod attempts to use more memory than is available, the operating system may shut down that pod’s container.
If this happens, the container that is terminated (“OOMKilled”) does not clean up the custom packet filtering rules that it previously added during startup.
The node-local-dns container should get restarted, but this will lead to a brief DNS downtime each time the container fails: the packet filtering rules direct DNS queries to a local Pod that is unhealthy.
You can determine a suitable memory limit by running node-local-dns pods without a limit and measuring the peak usage.
You can also set up and use a VerticalPodAutoscaler in recommender mode, and then check its recommendations.
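If you go the VerticalPodAutoscaler route, a minimal recommender-mode sketch might look like this (this assumes the VPA components are already installed in your cluster; updateMode: "Off" makes the VPA only compute recommendations without evicting Pods):
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: node-local-dns
  namespace: kube-system
spec:
  targetRef:
    apiVersion: apps/v1
    kind: DaemonSet
    name: node-local-dns
  updatePolicy:
    updateMode: "Off"
You can then check the recommendations with kubectl describe vpa node-local-dns -n kube-system.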
[Looking for a solution to another query? We are just a click away.]
Conclusion
In brief, our skilled Support Engineers at Bobcares demonstrated how to use NodeLocal DNSCache in Kubernetes clusters.