Wondering why some files end up in the split-brain condition in Gluster?
Usually, users notice this when they get an input/output error while accessing the affected files. Split-brain typically occurs when bricks in a replicated volume go offline due to network failures while writes continue on the remaining bricks.
At Bobcares, we often get requests to fix split-brain conditions in GlusterFS as part of our Server Management Services.
Today, let’s see how our Support Engineers heal files in split-brain.
Split-brain condition in Gluster
Gluster is a highly scalable distributed file system. For high availability and reliability, we use replicated volumes, which keep a copy of each file on every brick in the replica set.
But when the network fails, some bricks go offline while writes continue on the others. The copies of a file then become inconsistent; that is, the same file on different bricks holds mismatching data. This situation is known as split-brain.
Here Gluster cannot tell which copy in the replica is the correct one. Basically, there are three types of split-brain:
- Data split-brain: The contents of the file differ across the bricks.
- Metadata split-brain: The metadata of the file, such as permissions or ownership, differs across the bricks.
- Entry/GFID split-brain: The GFID of the file differs across the bricks. This type cannot be healed automatically.
How to heal this condition in Gluster?
Initially, we check the files that need healing. For this, we use the command,
gluster volume heal <VOLNAME> info
The output lists, brick by brick, the entries that need healing.
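As an illustrative sketch, on a hypothetical two-brick replica volume named test, the output may look similar to the following (the hostnames, brick paths, and GFIDs here are placeholders):

gluster volume heal test info
Brick server1:/gfs/brick-1
<gfid:aaca219f-0e25-4576-8689-3e8fdcb5db3d>
/dir/file1
Number of entries: 2

Brick server2:/gfs/brick-2
/dir/file1
/dir/file2
Number of entries: 2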
We can also list only the files that are in the split-brain condition. Here we use the command,
gluster volume heal <VOLNAME> info split-brain
Now let’s see how our Support Engineers heal the data and metadata split-brain.
1. Fixing split-brain using Gluster CLI
So far, we saw how our Support Engineers identify files in split-brain. Next, let’s see the various policies we use to fix it via the CLI.
With a bigger source file
For per-file healing, when we know the copy with the bigger size is the correct one, we use it as the source. Then we use the command,
gluster volume heal <VOLNAME> split-brain bigger-file <FILE>
This command heals the file by taking the brick holding the bigger copy as the source.
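For instance, assuming a volume named test and an affected file /dir/file1 (both placeholder names), the invocation would be,

gluster volume heal test split-brain bigger-file /dir/file1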
Source files with latest mtime
Alternatively, we can take the copy with the latest modification time as the source. And here we use the command,
gluster volume heal <VOLNAME> split-brain latest-mtime <FILE>
Here, mtime denotes the file’s last modification time.
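With the same placeholder names as above, this would be,

gluster volume heal test split-brain latest-mtime /dir/file1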
Using one brick in replica as the source for one file
In this case, we use one brick as the source to heal one file. And the command we use is,
gluster volume heal <VOLNAME> split-brain source-brick <HOSTNAME:BRICKNAME> <FILE>
Here, <HOSTNAME:BRICKNAME> is the brick whose copy serves as the source.
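For example, to heal /dir/file1 using the copy on a hypothetical brick server1:/gfs/brick-1,

gluster volume heal test split-brain source-brick server1:/gfs/brick-1 /dir/file1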
Using one brick in replica as the source for all files
If many files are in the split-brain condition, we can use one brick of the replica as the source for all of them. For this, we use the command,
gluster volume heal <VOLNAME> split-brain source-brick <HOSTNAME:BRICKNAME>
Here, the chosen brick serves as the source for healing all the files in split-brain.
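Continuing with the same placeholder names, the invocation would be,

gluster volume heal test split-brain source-brick server1:/gfs/brick-1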
2. Fixing split-brain from the mount point
To fix split-brain from the mount point, our Support Engineers use the getfattr and setfattr commands. Here we consider a volume and work with its files directly from a client mount.
Now to obtain the details on files in split-brain, we use the command,
getfattr -n replica.split-brain-status <path-to-file>
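As an illustrative example, for a placeholder file file1 on a volume named test, the output might resemble,

getfattr -n replica.split-brain-status file1
# file: file1
replica.split-brain-status="data-split-brain:no    metadata-split-brain:yes    Choices:test-client-0,test-client-1"

The Choices field lists the client names we can pass as the split-brain choice in the next step.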
Then we choose one brick’s copy to inspect the file in split-brain. The command to do this is,
setfattr -n replica.split-brain-choice -v "choiceX" <path-to-file>
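For instance, with the placeholder choice test-client-0 taken from the status output above,

setfattr -n replica.split-brain-choice -v test-client-0 file1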
That is, we temporarily pin the file to one brick’s copy so we can inspect its actual contents from the mount. Once we settle on the correct choice, we heal the file using the command,
setfattr -n replica.split-brain-heal-finalize -v <heal-choice> <path-to-file>
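Continuing the example, if the copy served by test-client-0 turns out to be the good one,

setfattr -n replica.split-brain-heal-finalize -v test-client-0 file1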
Hence this resolves the metadata and data split-brain on files.
3. Manually fixing the split-brain
Let’s see how our Support Engineers do this manually. Firstly, we identify the paths of the affected files from the heal info output.
Next, we close any application that uses the file. Then we select the correct copy by analyzing the changelog extended attributes on each brick,
getfattr -d -m . -e hex <file-path-on-brick>
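For instance, on a hypothetical brick path (the volume name, paths, and values here are placeholders), the output may look like this,

getfattr -d -m . -e hex /gfs/brick-1/dir/file1
# file: gfs/brick-1/dir/file1
trusted.afr.test-client-0=0x000000000000000000000000
trusted.afr.test-client-1=0x000000020000000000000000
trusted.gfid=0x307a5c9efddd4e7c96e94fd4bcdcbd1b

Each trusted.afr.<volume>-client-<n> value encodes three counters of 8 hex digits each: pending data, metadata, and entry operations. A non-zero counter means this brick accuses the copy on the other brick of needing heal; in split-brain, the copies on both bricks accuse each other.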
Then we reset the relevant changelog attribute to resolve the split-brain, using the command,
setfattr -n <attribute-name> -v <attribute-value> <file-path-on-brick>
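As a sketch with the placeholder names above: suppose we decide to keep the copy on server1’s brick, and test-client-0 corresponds to that brick. The accusation against it is recorded on the other brick’s copy, so we clear trusted.afr.test-client-0 there,

setfattr -n trusted.afr.test-client-0 -v 0x000000000000000000000000 /gfs/brick-2/dir/file1

With the accusation cleared, self-heal treats the copy on server1 as the source.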
Finally, we trigger self-healing using the command,
ls -l <file-path-on-gluster-mount>
Hence it heals the files.
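For example, with the volume mounted at a placeholder path /mnt/test,

ls -l /mnt/test/dir/file1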
How do we avoid this situation?
To reduce the chance of the split-brain condition, our Support Engineers recommend using replica 3 volumes or arbiter volumes. This is because both support the client quorum option.
Client quorum is a feature of the Automatic File Replication (AFR) module. It prevents split-brain in the I/O path of replicated and distributed-replicated volumes.
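For instance, a minimal sketch of creating an arbiter volume, assuming placeholder hostnames and brick paths; the third brick stores only file names and metadata and acts as a tie-breaker:

gluster volume create test replica 3 arbiter 1 server1:/gfs/brick-1 server2:/gfs/brick-2 server3:/gfs/arbiter-brick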
[Still having trouble fixing split-brain in GlusterFS? We can help you.]
Conclusion
In short, the Gluster split-brain condition means the same file on different bricks of a volume holds mismatching data. We saw the various ways our Support Engineers fix this situation.