Gluster volume heal – To sync files in a replicated volume
Do you know how Gluster recovers data on a brick that went offline? It does this using the gluster volume heal command.
So, let’s have a detailed look into the self-healing process initiated by Gluster.
At Bobcares, we often get requests to manage GlusterFS, as a part of our Server Management Services.
Today, let’s see how our Support Engineers do this.
Self-healing of a volume in Gluster
Gluster is an open-source distributed file system. It is highly scalable as it allows the rapid provisioning of additional storage as required.
In GlusterFS, the basic unit of storage is a brick. Sometimes a brick in a replicated volume goes offline.
Any data a user updates during that time reaches only the online bricks. Once the offline brick comes back, the missed changes need to be copied to it as well. That is, the brick needs to be healed. Earlier, we had to manually trigger the task of bringing bricks online and syncing the files.
But now a proactive self-heal daemon does this. It automatically initiates healing at regular intervals. Let’s have a more detailed look into this volume heal process.
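To confirm that the self-heal daemon is actually running for a volume, its status can be checked. A minimal sketch, assuming a volume named myvol (a placeholder; replace it with the real volume name). The exact output layout can vary between Gluster versions:

```shell
# Show the per-brick process status for the volume; the self-heal
# daemon appears as "Self-heal Daemon" in the process list.
# "myvol" is a placeholder volume name.
gluster volume status myvol

# Filter the output down to just the self-heal daemon entries.
gluster volume status myvol | grep -i 'self-heal'
```

If the daemon shows as offline, healing will not be triggered automatically until it is brought back up.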
How we use the gluster volume heal command
Whenever bricks go offline, we bring them back online. Then, to list the files in a volume that need healing, we check its info.
For this, our Support Engineers use the command,
gluster volume heal <VOLNAME> info
This lists all the files that need healing. Basically, there are two cases: files that are in split-brain, and files that simply need healing. Both are specified in the list.
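As a sketch of how these two cases are inspected (myvol is a placeholder volume name; the output format may differ between Gluster versions):

```shell
# List all entries that still need healing, per brick.
gluster volume heal myvol info

# List only the entries that are in split-brain and may need
# manual resolution.
gluster volume heal myvol info split-brain
```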
Then to trigger healing only on the required files, we use the command,
gluster volume heal <VOLNAME>
This heals only the files that require healing. To heal all the files in the volume, we use the command,
gluster volume heal <VOLNAME> full
After triggering the heal, the command shows a success message. Hence, the files are properly synced now.
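Putting the steps above together, a typical heal workflow looks roughly like this. A sketch assuming a volume named myvol; using gluster volume start with force is one common way to restart offline brick processes:

```shell
# 1. Restart any offline brick processes for the volume.
#    "myvol" is a placeholder volume name.
gluster volume start myvol force

# 2. Trigger healing of the files that need it.
gluster volume heal myvol

# 3. Verify that no entries remain to be healed.
gluster volume heal myvol info
```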
[Still, having trouble in managing GlusterFS? – We can help you.]
In short, we had a detailed look at gluster volume heal, including the cases reported by gluster volume heal info and the commands our Support Engineers use to sync files in a replicated volume.