Bobcares

wesupport

CLIENT AREA | Call Us 1-800-383-5193

Need help?

Our experts have had an average response time of 11.7 minutes in August 2021 to fix urgent issues.

We will keep your servers stable, secure, and fast at all times for one fixed price.

Enable Cluster Autoscaler for DigitalOcean Kubernetes Cluster

by | Sep 24, 2021

Wondering how to enable Cluster Autoscaler for DigitalOcean Kubernetes Cluster? We can help you.

As part of our Server Management Services, we assist our customers with several Kubernetes queries.

Today, let us see how our Support techs perform this task.

How to enable Cluster Autoscaler for DigitalOcean Kubernetes Cluster?

First and foremost, DigitalOcean Kubernetes (DOKS) is a managed Kubernetes service that lets you deploy Kubernetes clusters without the complexities of handling the control plane and containerized infrastructure.

Clusters are compatible with standard Kubernetes toolchains and integrate natively with DigitalOcean Load Balancers and block storage volumes.

DigitalOcean Kubernetes provides a Cluster Autoscaler (CA) that automatically adjusts the size of a Kubernetes cluster by adding or removing nodes based on the cluster’s capacity to schedule pods.

You can enable autoscaling with minimum and maximum cluster sizes using either the DigitalOcean Control Panel or doctl, the DigitalOcean command-line tool.

Enable Cluster Autoscaler

Using the DigitalOcean Control Panel

To enable autoscaling on an existing node pool:

Firstly, navigate to your cluster in the Kubernetes section of the control panel.

Then, click on the Nodes tab.

Next, click on the three dots to reveal the option to resize the node pool manually or enable autoscaling.

Then select Resize or Autoscale, and a modal window will pop up asking for configuration details.

After selecting Autoscale, you can set the following options for the node pool:

Minimum Nodes: Determines the smallest size the cluster will be allowed to “scale down” to; must be no less than 1 and no greater than Maximum Nodes.

Maximum Nodes: Determines the largest size the cluster will be allowed to “scale up” to.

The upper limit is constrained by the Droplet limit on your account, which is 25 by default, and the number of Droplets already running, which subtracts from that limit.

You can request to have your Droplet limit increased.
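To make the upper limit concrete, here is a small illustration of how the ceiling for Maximum Nodes is derived. The figures below are hypothetical examples; the real values come from your account's Droplet limit and your current usage.

```shell
# Hypothetical figures for illustration only
DROPLET_LIMIT=25        # default account Droplet limit
DROPLETS_RUNNING=5      # Droplets already running (example value)

# The effective ceiling for Maximum Nodes is the limit minus running Droplets
MAX_NODES_CEILING=$((DROPLET_LIMIT - DROPLETS_RUNNING))
echo "Maximum Nodes can be at most: $MAX_NODES_CEILING"   # prints 20
```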

Using doctl

You can use doctl to enable cluster autoscaling on any node pool.

You’ll need to provide three specific configuration values:

auto-scale: Specifies that autoscaling should be enabled
min-nodes: Determines the smallest size the cluster will be allowed to “scale down” to; must be no less than 1 and no greater than max-nodes
max-nodes: Determines the largest size the cluster will be allowed to “scale up” to.

The upper limit is constrained by the Droplet limit on your account, which is 25 by default, and the number of Droplets already running, which subtracts from that limit.

As before, you can request to have your Droplet limit increased.

Next, you can apply autoscaling to a node pool at cluster creation time by passing a semicolon-delimited string of node pool options.

doctl kubernetes cluster create mycluster --node-pool "name=mypool;auto-scale=true;min-nodes=1;max-nodes=10"

You can also configure new node pools to have autoscaling enabled at creation time.

doctl kubernetes cluster node-pool create mycluster mypool --auto-scale --min-nodes 1 --max-nodes 10

If your cluster is already running, you can enable autoscaling on any existing node pool.

doctl kubernetes cluster node-pool update mycluster mypool --auto-scale --min-nodes 1 --max-nodes 10
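After enabling autoscaling, you may want to confirm the node pool's settings, or turn autoscaling off again. A brief sketch using doctl (the names mycluster and mypool are the placeholder cluster and pool names from the examples above; note that disabling uses the boolean flag form `--auto-scale=false`, which is how doctl's boolean flags are typically unset):

```shell
# Inspect the node pool to confirm its auto-scale, min, and max settings
doctl kubernetes cluster node-pool get mycluster mypool

# Turn autoscaling off again by setting the boolean flag to false
doctl kubernetes cluster node-pool update mycluster mypool --auto-scale=false
```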

[Need help with similar queries? We’d be happy to assist you]

Conclusion

In short, we saw how our Support Techs enable the Cluster Autoscaler for a DigitalOcean Kubernetes cluster, both from the Control Panel and with doctl.

PREVENT YOUR SERVER FROM CRASHING!

Never again lose customers to poor server speed! Let us help you.

Our server experts will monitor & maintain your server 24/7 so that it remains lightning fast and secure.

GET STARTED
