Bobcares


How to set up Percona bootstrap in an XtraDB cluster

Mar 30, 2016

In an earlier post, we discussed how Percona XtraDB Cluster (PXC) can be used to achieve database high availability. PXC uses master-master replication, which makes each server in the cluster capable of making changes to the database. In such a system, data update conflicts are possible if servers are not in sync. Percona bootstrapping is a way to avoid conflicts when starting up a cluster.

What is Percona bootstrapping?

To explain bootstrapping, it is important to know why it is needed.

Why is bootstrapping needed?

I’ll explain with an example.

Imagine a Percona cluster of 3 servers starting up. If all three are brought online simultaneously, each server can receive new updates to the database. Now, 3 servers will have 3 different versions of the database. How can it be decided which database should be chosen as the “right” one?

No matter which database version is chosen as the right one, the updates made to the other two will be lost. How can this problem be solved?

How does Percona solve data conflicts?

The solution implemented by Galera (the clustering system Percona adopted) is to bring the nodes of a cluster online one by one. Initially, only one server is booted up. At this point there can be no conflict, since all reads and writes are served from just this one database.

Then the second server is booted up, but before it can start serving queries, it must sync with the first server, a process known as State Snapshot Transfer (SST). Once the sync completes, the two servers can start executing queries in parallel.

The same is repeated for the rest of the database nodes. This process is known as bootstrapping.

When is Percona bootstrapping required?

Bootstrapping is required when a PXC system is initially deployed, but more commonly, it is required after a cluster shuts down. Situations such as network errors, power failures, or firewall issues can leave the nodes of a cluster unable to talk to one another. Percona then uses a mechanism called “quorum” to avoid database corruption.

To illustrate with an example, let’s say a network issue prevents the servers in a 3-node cluster from talking to each other. Each server then sees that it doesn’t have a “majority” of the cluster on its side, and shuts down to prevent further updates to the database. In such cases, the cluster needs to be booted up node by node to bring all servers online in a consistent way.
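The “majority” rule above is simple arithmetic: a partition of the cluster stays Primary only if it holds strictly more than half of the nodes. A minimal sketch of that decision (the node counts here are illustrative, not Galera’s actual implementation, which also supports weighted votes):

```shell
# Quorum sketch: a partition must hold a strict majority of the cluster.
nodes=3        # total nodes in the cluster
reachable=1    # nodes this server can still see (itself included)
quorum=$(( nodes / 2 + 1 ))   # majority of 3 is 2

if [ "$reachable" -ge "$quorum" ]; then
    echo "Primary component: keep serving queries"
else
    echo "Non-Primary: stop accepting writes"
fi
# prints: Non-Primary: stop accepting writes
```

The same arithmetic shows why a 2-node cluster is fragile: if one node crashes, the survivor sees only half the cluster (1 of 2), which is not a majority, and it loses quorum.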

[ Want to avoid downtime due to database server failures? Here’s how to setup Percona high availability. ]

How to set up Percona bootstrapping on a fresh installation

Let’s take a look at how bootstrapping is done on a fresh Percona XtraDB Cluster.

For this, I’ll use the example of a 3-node PXC system. PXC can be installed on CentOS servers using the command:

yum install http://www.percona.com/downloads/percona-release/percona-release-0.0-1.i386.rpm http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm socat Percona-XtraDB-Cluster-56

The next step is to setup one server as the “primary”.

1. Configuring primary node

Edit /etc/my.cnf in one server (called “node 1”), and add the following entries. The comments explain the purpose of each entry.

 [mysqld]
 datadir=/var/lib/mysql
 socket=/var/lib/mysql/mysql.sock
 user=mysql
 # Specifying the path to galera library for replication
 wsrep_provider=/usr/lib/libgalera_smm.so
 # Cluster initialization configuration for initial startup – mention IP addresses of nodes in the cluster
 wsrep_cluster_address=gcomm://172.17.9.36,172.17.9.37,172.17.9.38
 # Mention IP address of self
 wsrep_node_address=172.17.9.36
 # Method to sync the states between the nodes in the cluster
 wsrep_sst_method=rsync
 # Specify the storage engine and log format
 binlog_format=ROW
 default_storage_engine=InnoDB
 innodb_autoinc_lock_mode=2

Now this server (node 1) is ready to be started up as “primary” – also known as bootstrapping.

2. Bootstrapping the primary node

After editing my.cnf, the primary node is started in bootstrap mode, which designates it as the primary to which the other nodes will sync.

Use this command:

# service mysql bootstrap-pxc
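If the bootstrap succeeds, node 1 forms a one-node Primary cluster on its own. Before joining the other nodes, this can be verified from the MySQL prompt (session sketch; a cluster size of 1 is what’s expected at this stage):

```
mysql> show status like 'wsrep_cluster_size';
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 1     |
+--------------------+-------+
```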

3. Configuring the other 2 nodes in the cluster

To bring the other 2 nodes online, first copy the /etc/my.cnf from “node 1” to the other servers, then update the “wsrep_node_address” setting on each to that server’s own address.
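The edit on each joining node is a one-line change. A minimal sketch of that step (the file path and IP addresses follow the example configuration above; in practice the file would first be copied over with scp):

```shell
# Demonstrate the edit on a local copy of node 1's config.
# Only the wsrep_node_address line differs between nodes.
cat > /tmp/my.cnf.node2 <<'EOF'
wsrep_node_address=172.17.9.36
EOF

# On node 2 (172.17.9.37), point wsrep_node_address at itself.
sed -i 's/^wsrep_node_address=.*/wsrep_node_address=172.17.9.37/' /tmp/my.cnf.node2

grep wsrep_node_address /tmp/my.cnf.node2
# prints: wsrep_node_address=172.17.9.37
```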

Then the servers can be started with the command:

# service mysql start

Once all nodes are started, the cluster would be online. The status of the cluster can be checked on any node as shown here:

mysql> show status like 'wsrep%';
 +---------------------------+----------------------------------------------------+
 | Variable_name             | Value                                              |
 +---------------------------+----------------------------------------------------+
 | wsrep_local_state_uuid    | fd28bf16-e205-11e5-b1f7-1fcad8d42d44               |
 | wsrep_local_state_comment | Synced                                             |
 | wsrep_incoming_addresses  | 172.17.9.36:3306,172.17.9.38:3306,172.17.9.37:3306 |
 | wsrep_evs_state           | OPERATIONAL                                        |
 | wsrep_cluster_conf_id     | 11                                                 |
 | wsrep_cluster_size        | 3                                                  |
 | wsrep_cluster_state_uuid  | fd28bf16-e205-11e5-b1f7-1fcad8d42d44               |
 | wsrep_cluster_status      | Primary                                            |
 | wsrep_connected           | ON                                                 |
 | wsrep_ready               | ON                                                 |
 +---------------------------+----------------------------------------------------+
 58 rows in set (0.07 sec)

[ Setting up a cluster is only half the job. To obtain performance gains, you’ll need a good load balancing solution. Here’s how you can set up a good Percona load balancing system. ]

How to recover from a cluster crash using Percona bootstrap

When all nodes in a cluster have crashed for some reason, the primary for bootstrapping must be chosen with care. The node with the most advanced data should be chosen as the primary, to avoid data loss.

This node can be found by executing the following command on every node:

mysql> show status like 'wsrep_last_committed';
+----------------------+--------+
| Variable_name        | Value  |
+----------------------+--------+
| wsrep_last_committed | 236129 |
+----------------------+--------+
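With three nodes the comparison is easy to do by eye; for a larger cluster, the same decision can be sketched as follows (the IP addresses and seqno values here are made up for illustration — in practice each pair comes from running the query above on one node):

```shell
# "node seqno" pairs as collected from wsrep_last_committed on each
# node; sort numerically descending on the seqno and keep the top node.
printf '%s\n' \
  '172.17.9.36 236129' \
  '172.17.9.37 236131' \
  '172.17.9.38 236130' \
  | sort -k2,2nr | head -n1 | awk '{print $1}'
# prints: 172.17.9.37
```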

Choose the node with the highest number as the primary. Start that server with the command:

# service mysql start --wsrep-new-cluster

Once the primary is online, the other nodes can be started one by one with the command:

# service mysql start

[ Server crashes can be catastrophic. You’ll need backups if the cluster cannot be recovered. Here’s how you can backup your Percona database using XtraBackup. ]

Summary

Percona bootstrap is critical to maintaining database consistency. Here we’ve explained how bootstrapping should be done on a fresh PXC installation and on a crashed cluster.

Bobcares helps business websites of all sizes achieve world-class performance and uptime, using tried and tested website architectures. If you’d like to know how to make your website more reliable, we’d be happy to talk to you.

 
