Removal of mdadm RAID devices is quite easy. It involves nine quick steps.
As part of our Server Management Services, we assist our customers with several mdadm queries.
Today, let us see how our Support Techs remove an mdadm RAID device.
Removal of mdadm RAID Devices
In order to remove the mdadm RAID Devices, our Support Techs recommend the following steps:
An Overview:
- Step 1: Check the RAID details
- Step 2: Unmount and Remove All Filesystems
- Step 3: Determine mdadm RAID Devices
- Step 4: Stop mdadm RAID Device
- Step 5: Remove mdadm RAID Device
- Step 6: Remove the Superblocks
- Step 7: Verify RAID Device Was Removed
- Step 8: Overwrite Disks With Random Data
- Step 9: Change the Number of Devices for the RAID Array
- FAQs
Step 1: Check the RAID details
To begin with, we have to check the RAID details and analyze information about the device. We can check our mdadm RAID device details with the following command:
$ mdadm --detail /dev/md1
We are trying to remove the RAID array /dev/md1 in this example.
The output will display details about the /dev/md1 RAID array. This lets us confirm whether the RAID is in good condition for any action.
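For example, the output will look similar to this (trimmed here; the dates, sizes, and device names are illustrative):
/dev/md1:
           Version : 1.2
        Raid Level : raid1
        Array Size : 2929555446 (2.73 TiB 3.00 TB)
      Raid Devices : 2
     Total Devices : 2
             State : clean

    Number   Major   Minor   RaidDevice State
       0       8       65        0      active sync   /dev/sde1
       1       8       81        1      active sync   /dev/sdf1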
Step 2: Unmount and Remove All Filesystems
We need to make sure all filesystems have been unmounted. For that, we use umount. It also ensures we have exclusive access to the disk.
umount /dev/md1
If the array hosts a logical volume, we can remove it with this command:
$ lvremove /dev/test1/install-images
The essence of unmounting active filesystems in an mdadm RAID you want to remove is to prevent file corruption or irrecoverable data loss.
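As a quick sanity check, we can confirm that nothing on the array is still mounted. findmnt prints nothing once the device is no longer in use:
findmnt -S /dev/md1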
Step 3: Determine mdadm RAID Devices
Here, we will identify the exact drives used in the RAID array. Once we run the command below, remember to note down the member drives of the mdadm RAID we want to remove.
cat /proc/mdstat
This will give us an output of the drives in the RAID array.
For example, the output will be similar to this:
md1 : active raid1 sdf1[1] sde1[0]
      2929555446 blocks super 1.2 [2/2] [UU]
      bitmap: 0/22 pages [0KB], 65534KB chunk

md0 : active raid10 sda1[3] sdd1[1] sdb1[0] sdc1[2]
      976502774 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]

unused devices: <none>
Now, we need to note the disks that are part of the RAID group.
We will need these names in Steps 5 and 6.
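Alternatively, lsblk shows the same membership as a tree, with each array listed under its member partitions:
lsblk -o NAME,TYPE,SIZE,MOUNTPOINT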
Step 4: Stop mdadm RAID Device
Our next step is stopping the RAID before removing a device.
mdadm --stop /dev/md1
For example, the output will look similar to this:
mdadm: stopped /dev/md1
Although this command stops the RAID array from working, do not remove any drive from the computer yet. Wait until we go through the remaining steps before removing a physical drive from the host machine.
Additionally, we have to edit the mdadm configuration file to remove references to the RAID array we just stopped. This reduces the chances of data loss.
Open the configuration file in an editor and delete the reference to the RAID device we just stopped:
$ nano /etc/mdadm/mdadm.conf
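The entry to delete is the ARRAY line for /dev/md1, which will look something like this (the name and UUID here are placeholders):
ARRAY /dev/md1 metadata=1.2 name=server:1 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx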
Step 5: Remove mdadm RAID Device
Now, run this command to remove a drive from the RAID device (note that mdadm expects the array name before the member drive):
$ mdadm /dev/md1 --fail /dev/sdf1 --remove /dev/sdf1
Keep in mind that --fail and --remove operate on a running array, so removing individual members this way must be done before stopping the array in Step 4.
This command removes the specified drive from the RAID array.
In case we want to remove all the drives, run this command:
mdadm --remove /dev/md1
In case we run into the following error at this stage, move on to Step 6:
mdadm: error opening md1: No such file or directory
Step 6: Remove the Superblocks
When you remove drives from an mdadm RAID array, it’s important to manually clear their superblocks to avoid future issues. The superblock helps mdadm manage RAID configurations, and if not removed, the drive may be mistakenly re-added to an array, causing errors or data loss.
Run the following command for each drive we removed, substituting its device name:
$ mdadm --zero-superblock /dev/sdf1
If we don’t clear the superblock, mdadm may try to automatically add the drive back into a RAID array, leading to fatal storage errors and potential data loss. Always remove the superblock before reusing any drive from a RAID array.
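If --zero-superblock fails with an error about being unable to open the drive for writing, wipefs is a useful alternative; with the -a flag it wipes the RAID signature along with any partition or filesystem markers on the device:
$ wipefs -a /dev/sdf1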
Step 7: Verify RAID Device Was Removed
To verify if we have removed the RAID device, run this command:
cat /proc/mdstat
The output should differ from what was displayed the first time we ran this command before removing the drive.
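We can also query the device directly; once the array is gone, mdadm reports something like this:
$ mdadm --detail /dev/md1
mdadm: cannot open /dev/md1: No such file or directory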
Furthermore, to prevent problems in the future, we have to delete the mount point from /etc/fstab and remove the RAID configuration from /etc/mdadm/mdadm.conf.
We can remove the mount point from the /etc/fstab file as seen here:
sudo sed -i '/\/dev\/md1/d' /etc/fstab
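A quick check confirms the entry is gone; this command should print nothing:
grep md1 /etc/fstab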
The RAID configuration in /etc/mdadm/mdadm.conf must also be removed to prevent the RAID from being automatically assembled. If no other arrays are defined in the file, we can simply delete it:
sudo rm /etc/mdadm/mdadm.conf
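On Debian and Ubuntu systems, a copy of the mdadm configuration is embedded in the initramfs, so regenerate it after editing or deleting the file:
sudo update-initramfs -u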
By following these steps, you can safely remove an mdadm RAID device and ensure that it is no longer active or interfering with other disk operations.
Step 8: Overwrite Disks With Random Data
If we want to overwrite the data on a drive removed from an mdadm RAID array, we can use the dd command. However, this will erase any previous data on the drive.
dd if=/dev/urandom bs=4096 of=/dev/sdf1
On current releases such as Ubuntu 22.04+ and Debian 11+, /dev/random no longer blocks once the kernel has been seeded, so it can be used interchangeably with /dev/urandom here.
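Overwriting a large drive can take hours. Adding status=progress to the same command prints a running byte count, and a larger block size such as 4M usually speeds things up:
dd if=/dev/urandom of=/dev/sdf1 bs=4M status=progress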
Step 9: Change the Number of Devices for the RAID Array
Optionally, we can adjust the number of drives in an existing RAID after adding or removing a drive. However, this is a risky move. Any typo in the command could lead to irreversible data loss, so proceed with caution.
If we don’t update the number of drives in the array, a removed drive will always be marked as “missing” on system restarts. For example, if we originally had four drives in our RAID and removed one (leaving three), use the following command to reconfigure the array:
mdadm --grow /dev/md1 --raid-devices=3
Because this operation depends on the level and layout of the specific array, recreating the array from scratch with three drives may be the safer option.
To add a new drive to the array, connect the drive, add it with "--add", and then reconfigure the device count with "--grow":
mdadm --add /dev/md1 /dev/sdc # /dev/sdc is the new drive
mdadm --grow /dev/md1 --raid-devices=4
Remember to always back up data before attempting this. This command won’t allow us to switch RAID levels (e.g., from RAID 1 to RAID 0 or RAID 5). To change RAID levels, we must stop, unmount, and delete the existing RAID, then create a new one.
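Growing or shrinking an array triggers a reshape that can take a long time; its progress can be monitored in /proc/mdstat:
watch cat /proc/mdstat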
FAQs
How do I rebuild a RAID 5 after a drive failure?
First, identify the failed drive via the RAID controller. Replace it with a new or spare drive. The controller will automatically begin rebuilding the array using data from the remaining drives. Once complete, the RAID 5 array will be restored.
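With a software RAID managed by mdadm, the equivalent rebuild can be started by hand once the replacement disk is in place (the device names here are hypothetical):
mdadm /dev/md0 --add /dev/sdb1 # the rebuild starts automatically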
How do I disable RAID options?
Enter the system BIOS, navigate to the RAID configuration menu, select “Disable,” then save and exit. Use arrow keys to move through options, Enter to select, and Esc to return to the main menu.
Can you remove a drive from a RAID?
Yes, but only from RAID types that support redundancy (like RAID 1 or 5). Use mdadm to safely remove the drive. Removing a drive from RAID 0 will result in complete data loss.
Can you use RAID with one drive?
Yes, RAID 1 can operate with a single drive. It’s often used in disaster recovery setups to ensure high availability until a second disk is added for mirroring.
Conclusion
To conclude, we saw how our Support Techs remove mdadm RAID devices.