High availability is the buzzword nowadays. Nobody tolerates downtime, whether it is the owner of a website, a server, or a data center. The challenge is to offer the least possible downtime, and that is exactly what “high availability” is about.
DRBD stands for “Distributed Replicated Block Device” and is used for building high availability (HA) clusters. It achieves this by mirroring (replicating) a block device over the network. DRBD can be thought of as a network-based RAID 1 setup.
In RAID 1, a drive has its data duplicated on two different drives using a RAID controller. DRBD works similarly: the local block device holds the data to be replicated, and the data is then written to another host’s block device. The difference is that DRBD replicates over the network and can replicate data to more than two nodes.
In simple words, DRBD is a Linux kernel module that implements a distributed storage system, by which you can share two or more block devices (holding data or a file system).
DRBD works with two or more servers, each of which is called a node. The node that has read/write access to the data (also called the DRBD data) is known as the primary node. The node to which the data is replicated is referred to as the secondary node. If there are numerous secondary nodes in the high availability cluster, it is referred to as a DRBD cluster.
In a nutshell, DRBD takes the data, writes it to the local disk, and sends it to the other nodes. This local disk can be a physical disk partition, a partition from a volume group, a RAID device, or any other block device. This block device holds the data to be replicated.
DRBD can also be used along with the LVM2 Logical Volume Manager.
DRBD Requirements
Setting up the OS for DRBD
- The host names for the DRBD nodes should be unique and correct.
- Each DRBD node should have a unique IP address.
- There should be an unused disk or disk partition to store DRBD data to be replicated.
- The disk or the partition should not have a file system on it. These two disk requirements apply to each DRBD node.
- The partitions on each node should preferably be identical in size.
- Make sure that the kernel-devel packages and header files are installed for building kernel modules for DRBD.
Check for the following Packages and Tools
Note: Make sure that you upgrade the kernel to the latest stable version.
- Kernel header files
- Kernel source files
- gcc
- gcc-c++
- glib2
- glib-devel
- glib2-devel
- flex
- bison
- kernel-smp
- kernel-smp-devel
- pkg-config
- ncurses-devel
- rpm-devel
- rpm-build
- net-snmp-devel
- lm_sensors-devel
- perl-DBI
- python-devel
- libselinux-devel
- bzip2-devel
Other Optional Packages
If you need secure communication between the DRBD nodes, install the following packages as well:
- openssl
- openssl-devel
- gnutls
- gnutls-devel
- libgcrypt
- libgcrypt-devel
Install these via yum or up2date depending on your Linux distro.
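For example, on a yum-based distro most of the build dependencies could be pulled in with one command (package names as in the lists above; adjust to what your distro actually ships):
# yum install gcc gcc-c++ glib2-devel flex bison ncurses-devel rpm-build net-snmp-devel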
Installing DRBD From Source
Since DRBD is a Linux kernel module, it can only be used with Linux distros.
- Download the source from any mirror location or from http://oss.linbit.com/drbd/
- Follow the instructions in the INSTALL file and choose the method that suits your distribution.
- Once you have successfully built DRBD, test loading the DRBD module using insmod and verify using lsmod.
- If it is successfully loaded, remove it using rmmod. A sample sequence for these steps is sketched below.
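A minimal sketch of that build-and-test cycle, assuming a downloaded tarball named drbd-x.y.z.tar.gz (the version and the module path are placeholders):
# tar xzf drbd-x.y.z.tar.gz
# cd drbd-x.y.z
# make && make install
# insmod drbd/drbd.ko
# lsmod | grep drbd
# rmmod drbd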
Configuring DRBD Primary Node
Configuring DRBD Service
The main configuration file of DRBD is /etc/drbd.conf. It contains the definitions of the DRBD devices and resources, the synchronization rate, and other parameters.
Synchronization
Set the synchronization rate between the two nodes according to the network connection speed, since DRBD replication takes place over the network.
Gigabit Ethernet supports a synchronization rate of up to 125 MB/s, and 100 Mb/s Ethernet up to 12 MB/s.
In /etc/drbd.conf, set the synchronization rate as follows:
syncer { rate SR; }
where SR is the synchronization rate.
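For example, to cap synchronization at roughly 30 MB/s (an illustrative value, not a recommendation for your hardware):
syncer { rate 30M; }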
Password Protecting DRBD Data
It is possible to set up authentication for the DRBD nodes so that only hosts that have this shared secret can join the DRBD node group.
The format is as follows:
cram-hmac-alg "sha1" shared-secret "password";
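In the DRBD 8.x configuration syntax these directives belong in the resource's net section; a sketch with a placeholder secret:
net {
  cram-hmac-alg "sha1";
  shared-secret "password";
}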
DRBD Configuration File
A basic drbd.conf file should have the following information:
- device -> path of the logical block device that will be created by DRBD.
- disk -> the block device that will be used to store the data.
- address -> the IP address and port number of the host that will hold this DRBD device.
- meta-disk -> the location where the metadata about the DRBD device will be stored.
When meta-disk is set to internal, DRBD stores the metadata on the same physical block device that holds the data.
A sample configuration file is given below:
resource drbd0 {
  protocol=X
  fsck-cmd=fsck.ext2 -p -y
  on drbd-master {
    device=/dev/nbx
    disk=/dev/hdax
    address=x.x.x.x
    port=x
    meta-disk internal;
  }
  on drbd-slave {
    device=/dev/nby
    disk=/dev/hday
    address=y.y.y.y
    port=y
    meta-disk internal;
  }
}
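The sample above follows an older configuration style. In the DRBD 8.x syntax the same resource would look roughly like this (host names, backing disks and addresses remain placeholders; 7789 is merely a commonly used DRBD port):
resource drbd0 {
  protocol C;
  on drbd-master {
    device    /dev/drbd0;
    disk      /dev/hdax;
    address   x.x.x.x:7789;
    meta-disk internal;
  }
  on drbd-slave {
    device    /dev/drbd0;
    disk      /dev/hday;
    address   y.y.y.y:7789;
    meta-disk internal;
  }
}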
Mounting a DRBD Logical Block
First, we will create a file system on the DRBD block device (named /dev/nbx here; on newer DRBD versions the device appears as /dev/drbd0):
# mkfs.ext3 /dev/nbx
Then mount the file system on a mount point:
# mount /dev/drbd0 /mnt/drbd
Now you can copy the files to be replicated to this mount point. Restart the drbd service on both the master and the slave.
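On init-script based systems this restart would typically be (assuming the DRBD init script is installed under the name drbd):
# service drbd restart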
Configuring DRBD Secondary Node
The configuration settings on the DRBD slave are the same as on the master. The only difference is that there is no need to create a file system on the slave, as the data is transferred from the master.
To get the exact settings, you can even copy the drbd.conf file from the master to the slave node(s). The only things to be changed are the IP address, the port (if it differs), the device, etc.
Now create metadata on the underlying disk device using the drbdadm command:
# drbdadm create-md all
Then restart the drbd service.
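After the restart, the master must also be promoted to primary before it can serve data. On DRBD 8.x this is done with drbdadm; for the very first synchronization older 8.x tools expect the --overwrite-data-of-peer flag (drbdadm -- --overwrite-data-of-peer primary all), while later versions use drbdadm primary --force:
# drbdadm primary all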
You can look at the /proc/drbd virtual file to check whether the master and slave nodes are syncing.
This file shows the status and state of the DRBD nodes.
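To view it:
# cat /proc/drbd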
Status Codes in /proc/drbd File
- cs -> connection state
- st -> node state (local/remote)
- ld -> local data consistency
- ds -> data consistency
- ns -> network send
- nr -> network receive
- dw -> disk write
- dr -> disk read
- pe -> pending (waiting for ack)
- ua -> unack'd (still need to send ack)
- al -> activity log write count
DRBD Protocols
- Protocol ‘A’ -> a write operation is complete as soon as the data is written to the local disk and handed off to the network (asynchronous replication).
- Protocol ‘B’ -> a write operation is complete as soon as the data is written to the local disk and a reception acknowledgment has arrived from the peer (semi-synchronous).
- Protocol ‘C’ -> a write operation is complete only when a write acknowledgment arrives, i.e. the data has reached the peer’s disk as well (synchronous).
You can specify the protocol you need while configuring the DRBD block. The preferred and recommended protocol is C, as it is the only protocol that ensures the consistency of the local and remote physical storage.
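The protocol is set per resource in drbd.conf; for example, following the newer syntax shown earlier:
resource drbd0 {
  protocol C;
  # device, disk, address and meta-disk sections as above
}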