
Add OSD Node To Ceph Cluster | A 7-Step Method

Nov 30, 2022

Let’s look into the steps to add an OSD node to the Ceph cluster. At Bobcares, with our Server Management Services, we can handle your Ceph cluster issues.

How to add an OSD node to the Ceph cluster?

We can add Object Storage Daemon (OSD) nodes whenever we want to expand a Ceph cluster. The steps to add an OSD node are as follows:

  1. Verifying Ceph version
  2. Setting up flags
  3. Adding new nodes
  4. Running the device-aliasing playbook to generate the host.yml file for the new OSD nodes
  5. Running the core playbook to add new OSD nodes
  6. Unsetting the flags
  7. Verification

Let’s look into the details of each step.

Verifying Ceph version

1. Firstly, we must make sure that all the nodes run the same Ceph version. We can check with the commands below:

CentOS command: rpm -qa | grep ceph
Ubuntu command: apt list --installed | grep ceph
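
For example, to compare versions across all nodes in one pass, a minimal sketch (OSD1, OSD2, and OSD3 are hypothetical hostnames):

# Print the installed Ceph version on each node; hostnames are examples only
for host in OSD1 OSD2 OSD3; do
    echo -n "$host: "
    ssh root@"$host" ceph --version
done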

Setting up flags

1. Now we have to set the maintenance flags noout, norecover, and norebalance using the commands below:

ceph osd set norebalance 
ceph osd set noout 
ceph osd set norecover

2. Then check the cluster status to ensure the flags are set:

ceph -s
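
The flags also appear in the OSD map, so a quick way to confirm all three at once is:

# The "flags" line should list noout, norecover, and norebalance
ceph osd dump | grep flags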

Adding new nodes

1. First, add the IPs and hostnames of the new OSD node(s) to the /etc/hosts file:

vim /etc/hosts
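
For illustration, the new entry might look like the following; the IP address and the hostname OSD3 are placeholder values:

# /etc/hosts - append one line per new OSD node (example values only)
192.168.1.103   OSD3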

2. Then set up passwordless SSH access to the new node(s).

ssh-copy-id root@OSD3
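
We can then confirm that key-based login works before moving on:

# Should print the remote hostname without prompting for a password
ssh root@OSD3 hostname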

3. Now we must add the new OSD to the Ansible hosts file. Only add the new OSD under the “osds” section; leave the existing OSD nodes unchanged.

vim /usr/share/ceph-ansible/hosts
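
As a sketch, the “osds” group might end up like this, where OSD3 is the hypothetical new node appended after the existing entries:

# Only the new node is appended; OSD1 and OSD2 were already in the cluster
[osds]
OSD1
OSD2
OSD3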

4. To check that the server can reach the new OSD(s), issue the following commands, making sure that the OSD(s) being added to the cluster respond to the ping.

cd /usr/share/ceph-ansible
ansible -i hosts -m ping osds
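
Each reachable node replies with “pong” from Ansible’s ping module, along these lines (output abridged):

OSD3 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}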

Running the device-aliasing playbook to generate the host.yml file for the new OSD nodes

1. Once we make sure the new OSDs can be pinged, we can run the device-aliasing playbook:

cd /usr/share/ceph-ansible
ansible-playbook -i hosts device-alias.yml

2. This creates a .yml file for each new OSD node in the /usr/share/ceph-ansible/host_vars/ directory.

3. If we have SSD journal drives, we can use cat on these files to check that the playbook discovered all of the OSD disks and did not list the journal drives as storage drives. If it did, simply delete those lines or comment them out.
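
As a purely illustrative sketch of such a check, assuming a new node named OSD3 and typical device names (the real file generated by the playbook may use different aliases):

cat /usr/share/ceph-ansible/host_vars/OSD3.yml

# Hypothetical contents: /dev/sdc is an SSD journal drive, so it is
# commented out to keep it from being used as a storage drive
devices:
  - /dev/sda
  - /dev/sdb
#  - /dev/sdc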

Running the core playbook to add new OSD nodes

We can use the below commands to run the core playbook. The new nodes will be part of the cluster once the playbook completes.

cd /usr/share/ceph-ansible
ansible-playbook -i hosts core.yml --limit osds

Unsetting the flags

To begin the backfill procedure, make sure to unset the flags that were previously set.

ceph osd unset noout
ceph osd unset norecover
ceph osd unset norebalance
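
We can then keep an eye on recovery and backfill progress, for example:

# Refresh the cluster status every five seconds while backfill runs
watch -n 5 ceph -s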

Verification

Running ceph -s should now show the new total of OSDs, including the nodes just added to the cluster.
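
To see exactly where the new OSDs landed, ceph osd tree lists every OSD grouped under its host:

# The new node should appear as a host bucket with its OSDs marked "up"
ceph osd tree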


Conclusion

In short, this article explains the 7-step method our Support team uses to add OSD nodes to a Ceph cluster.
