
MySQL High Availability on the Pacemaker cluster stack

Brian Hellman, Florian Haas


Table of Contents
1. Introduction
2. Installing Required Software
   2.1. Requirements for any Pacemaker/MySQL configuration
   2.2. Requirements for any DRBD-based MySQL configuration
   2.3. Post-installation setup
3. Initial Configuration
   3.1. Configuring a DRBD resource
   3.2. Creating a Filesystem
   3.3. Creating a Database
4. Setting up cluster resources
   4.1. Prerequisites
   4.2. Basic configuration
   4.3. Adding network connectivity monitoring
5. Using your MySQL database
   5.1. Securing the MySQL installation
   5.2. Importing data
   5.3. Accessing your database
6. Failure modes
   6.1. Node failure
   6.2. Storage subsystem failure
   6.3. MySQL daemon failure
   6.4. Network failure
7. Special Considerations
   7.1. MySQL Storage Engine Recommendations
   7.2. InnoDB Buffer Pool Size

1. Introduction
This Technical Guide describes an approach for designing and implementing high availability for MySQL databases,
managed by the Pacemaker cluster stack.

There are several options to achieve high availability for MySQL with this stack, all of which may be combined:

• Shared Storage cluster. This deployment type relies on a single, shared data "silo" which holds the data files
associated with the MySQL database. This option, while creating redundancy at the server level, relies on a
single instance of data which itself is typically not highly available. Clusters of this type may use a fibre channel
or iSCSI based storage area network (SAN). When properly configured, Shared Storage clusters guarantee
transaction integrity even across fail-over.
• DRBD based Shared-Nothing cluster. This cluster type makes use of local, directly attached storage whose
content is synchronously replicated between cluster nodes. This adds an additional layer of redundancy in that
MySQL’s data storage is available on more than one node. Like Shared Storage clusters, DRBD based clusters
guarantee transaction integrity across fail-over.
• MySQL Replication based Shared-Nothing cluster. This cluster type makes use of local, directly attached
storage and uses MySQL replication to propagate database modifications across the cluster. Like DRBD, this
adds redundancy in that MySQL’s data storage is available on more than one node. Unlike DRBD, MySQL
replication can deal with multiple slaves for a single master, is asynchronous, and is hence highly suitable for
scale-out. MySQL Replication, however, does not make any guarantees about not losing updates on fail-over.

This Technical Guide describes the DRBD based Shared-Nothing option. This approach has advantages over the Shared
Storage based one:

• In a Shared Storage cluster, while there exists server redundancy and the cluster can always tolerate the failure
of a cluster node, the Shared Storage itself is often not redundant. Thus, as soon as the cluster loses access to
its data — which may or may not involve data destruction — the cluster as a whole is out of service. By
contrast, in a DRBD based Shared-Nothing cluster, every node has access to its own replica of the data. Thus
DRBD provides redundancy at both the data and the node levels.
• As a consequence, DRBD based Shared-Nothing clusters may be deployed, as a whole, across separate fire
areas, separate floors, or separate buildings. By contrast, a Shared Storage cluster typically deploys its nodes
over several such compartments, but does not do so for its data storage.

[Figure: DRBD in the Linux I/O stack — on each node, the service, page cache, file system, raw device, I/O scheduler, and disk driver; the two nodes replicate via their network stacks and NIC drivers.]

Shared-Nothing clusters can also be achieved by using storage replication mechanisms other than
DRBD. Such synchronous replication solutions, however, are typically proprietary and strongly
coupled to a specific set of storage hardware. DRBD is entirely software based, open source, and
hardware agnostic.


2. Installing Required Software


The Pacemaker cluster stack is available on the following platforms:

• Any platform supported by the LINBIT® Cluster Stack™ (includes SLES, RHEL, Debian, Ubuntu)
• SUSE Linux Enterprise Server 11 High Availability Extension
• Red Hat Enterprise Linux 6 High Availability Add On (as a Technology Preview)
• Debian GNU/Linux 6 and up
• Ubuntu 10.04 and up

You may be required to install packages other than the ones explicitly mentioned here, due to
package dependencies. However, when using a package management utility such as aptitude, yum,
or zypper, these dependencies should be taken care of for you automatically.

2.1. Requirements for any Pacemaker/MySQL configuration


In order to create a highly available MySQL service, you will need to install the following software packages:

• MySQL is an open source relational database management system (RDBMS). Most distributions split the MySQL
client and server into two separate packages, typically named mysql-client and mysql-server, or similar. This
technical guide requires that you install one of the following MySQL versions: 5.0, 5.1 or 5.5.
• Pacemaker is a cluster resource management framework which you will use to automatically start, stop,
monitor, and migrate resources. Distributions typically bundle Pacemaker in a package simply named
pacemaker. This technical guide assumes that you are using at least Pacemaker 1.0.9.
• Heartbeat is one of the cluster messaging layers that Pacemaker is capable of using. In distributions, the
Heartbeat package is usually named heartbeat. This guide assumes at least Heartbeat version 3.0.3.

The other Pacemaker supported cluster messaging layer is named Corosync, which you may use in
place of Heartbeat. Corosync is the only supported Pacemaker messaging layer in Red Hat
Enterprise Linux 6 and SUSE Linux Enterprise Server 11; other distributions ship both Corosync and
Heartbeat. For the sake of simplicity, this technical guide presents the solution on only one
messaging layer, which is Heartbeat.

2.2. Requirements for any DRBD-based MySQL configuration


If your MySQL service is meant to be DRBD based, you will need to install the following software packages (in addition
to the ones specified in the previous section):

• DRBD is a kernel block-level synchronous replication facility which serves as an important shared-nothing
cluster building block. If your distribution ships with a 2.6.33 or newer Linux kernel, you will only need to install
the DRBD userspace management utilities (typically named drbd-utils, drbd8-utils, or similar). If it does not,
most distributions ship a separate DRBD kernel module package (drbd8-module, kmod-drbd8, or similar), which
you will have to install along with the userspace tools.
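
On a Debian or Ubuntu system, the complete package set might be installed as follows. This is a hedged sketch: package names vary between distributions and releases, and on kernels older than 2.6.33 you would add the separate kernel module package mentioned above.

# Debian/Ubuntu example; adjust package names for your distribution
apt-get install mysql-server mysql-client pacemaker heartbeat drbd8-utils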

2.3. Post-installation setup


After you have installed the required packages, you should take care of a few settings applying to your boot process,
using your distribution’s preferred utility for doing so. Typically you can use

• chkconfig --list to list enabled services, and chkconfig <service> on/off to enable or disable a service, or
• rcconf --list to list enabled services, and rcconf --on/--off <service> to enable or disable a service, or
• insserv --default/--remove <service> to enable or disable a service.


Make sure that:

• Heartbeat does start automatically on system boot. This will also start Pacemaker.
• the drbd init script does not start automatically on system boot. Pacemaker takes care of all DRBD related
functionality; the init script is not needed for this purpose.
• the mysql (or mysqld) init script does not start automatically on system boot. Pacemaker takes care of all
MySQL related functionality; the init script is not needed for this purpose.
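
As a hedged sketch, on a chkconfig based distribution these boot-time settings might be applied as follows (service names vary; the DRBD script may be named drbd8, and the MySQL script mysqld):

chkconfig heartbeat on   # Heartbeat, and with it Pacemaker, starts at boot
chkconfig drbd off       # DRBD is managed by Pacemaker, not by the init script
chkconfig mysql off      # MySQL is managed by Pacemaker, not by the init script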

3. Initial Configuration
This section describes the initial configuration of a highly available MySQL database in the context of the Pacemaker
stack.

Note that the configuration described here will work for MySQL versions 5.x and higher.

3.1. Configuring a DRBD resource


First, it is necessary to configure a DRBD resource to hold your data. This resource will act as the backing device of a
filesystem to be created later. The DRBD resource uses the name mysql.

It is highly recommended that you put your resource configuration in a file that resides in the /etc/drbd.d directory,
whose name is identical to that of the resource — hence, /etc/drbd.d/mysql. Its contents should look similar to the
example below.

Based on your system configuration, you will be required to modify some of the configuration
parameters stated in this example. Specifically, the disk and address options should be adapted to
fit your setup. Please refer to the relevant section in the DRBD User’s Guide for details on
configuring DRBD resources.

resource mysql {
  device    /dev/drbd0;
  disk      /dev/sda1;
  meta-disk internal;
  disk {
    on-io-error detach;
  }
  on alice {
    address 10.0.42.1:7790;
  }
  on bob {
    address 10.0.42.2:7790;
  }
}

After you have created this resource (and copied the configuration file to the other DRBD node), you must proceed
with initializing and synchronizing the resource, as specified in the DRBD User’s Guide, Section "Configuring DRBD".
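
As a hedged sketch, those initialization steps roughly amount to the following, assuming DRBD 8.3 command syntax (on DRBD 8.4, the last command is drbdadm primary --force mysql):

drbdadm create-md mysql   # initialize the resource's metadata; run on both nodes
drbdadm up mysql          # attach and connect the resource; run on both nodes
drbdadm -- --overwrite-data-of-peer primary mysql   # start the initial sync; run on ONE node only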

3.2. Creating a Filesystem


Once the DRBD resource has been created and activated, you can create a filesystem on the new device. This example
assumes ext4 as the filesystem type:


drbdadm primary mysql
mkfs -t ext4 /dev/drbd/by-res/mysql

It is not necessary to repeat mkfs on the peer node.

3.3. Creating a Database


After the filesystem has been created, you may populate it with a fresh MySQL database. In doing so, it is
recommended to mount the device to a temporary mount point:

mount /dev/drbd/by-res/mysql /mnt

Next, you can install a fresh MySQL database with the mysql_install_db command:

mysql_install_db --datadir=/mnt --user=mysql

When this is done, your preparation is complete, and you may unmount the temporarily mounted filesystem:

umount /mnt

It is not necessary to repeat these steps on the peer node.

4. Setting up cluster resources


This section assumes you are about to configure a highly available MySQL database with the following configuration
parameters:

• The DRBD resource to be used as your database storage area is named mysql, and it manages the device
/dev/drbd0.
• The DRBD device holds an ext4 filesystem which is to be mounted to /var/lib/mysql (the default MySQL data
directory).

• The MySQL database is to utilize that filesystem, and listen on a dedicated cluster IP address, 10.9.9.180.
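
A minimal /etc/my.cnf matching these parameters might look as follows. This is a hedged sketch: the bind address is also passed to mysqld by the cluster resource configured below, and your distribution's packaged defaults may add further settings.

[mysqld]
datadir = /var/lib/mysql
socket = /var/run/mysqld/mysql.sock
pid-file = /var/run/mysqld/mysql.pid
bind-address = 10.9.9.180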

4.1. Prerequisites
The following sections assume that you have a working Heartbeat/Pacemaker configuration, with two cluster nodes
forming a running Pacemaker cluster. In order to create such a configuration, please refer to the relevant section in the
Linux-HA User’s Guide.
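
For reference, a minimal ha.cf for such a two-node Heartbeat cluster might resemble the sketch below. The node names, the communication interface, and the accompanying /etc/ha.d/authkeys configuration are assumptions that must match your environment.

autojoin none
bcast eth0
node alice
node bob
crm respawn   # start the Pacemaker cluster resource manager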

4.2. Basic configuration


In order to create the appropriate cluster resources, open the crm configuration shell as root and issue the following
commands:


crm(live)# configure
crm(live)configure# property stonith-enabled=false
crm(live)configure# property no-quorum-policy=ignore
crm(live)configure# rsc_defaults resource-stickiness=200
crm(live)configure# primitive p_drbd_mysql ocf:linbit:drbd \
    params drbd_resource="mysql" \
    op monitor interval="29s" role="Master" \
    op monitor interval="31s" role="Slave"
crm(live)configure# ms ms_drbd_mysql p_drbd_mysql \
    meta master-max="1" master-node-max="1" \
    clone-max="2" clone-node-max="1" \
    notify="true"
crm(live)configure# primitive p_fs_mysql ocf:heartbeat:Filesystem \
    params device="/dev/drbd/by-res/mysql" \
    directory="/var/lib/mysql" \
    fstype="ext4"
crm(live)configure# primitive p_ip_mysql ocf:heartbeat:IPaddr2 \
    params ip="10.9.9.180" \
    cidr_netmask="24" \
    nic="eth0"
crm(live)configure# primitive p_mysql ocf:heartbeat:mysql \
    params binary="/usr/sbin/mysqld" \
    config="/etc/my.cnf" \
    datadir="/var/lib/mysql" \
    pid="/var/run/mysqld/mysql.pid" \
    socket="/var/run/mysqld/mysql.sock" \
    additional_parameters="--bind-address=10.9.9.180" \
    op start timeout=120s \
    op stop timeout=120s \
    op monitor interval=20s timeout=30s

You must set appropriate shutdown and startup timeouts based on your database utilization and
expected workload. Failure to do so will cause Pacemaker to prematurely consider operations
timed out and initiate recovery operations.

crm(live)configure# group g_mysql \
    p_fs_mysql p_ip_mysql p_mysql
crm(live)configure# colocation c_mysql_on_drbd \
    inf: g_mysql ms_drbd_mysql:Master
crm(live)configure# order o_drbd_before_mysql \
    inf: ms_drbd_mysql:promote g_mysql:start
crm(live)configure# commit
crm(live)configure# exit

Once this configuration has been committed, Pacemaker will

• Start DRBD on both cluster nodes,
• Select one node for promotion to the DRBD Primary role,
• Mount the filesystem, configure the cluster IP address, and start the MySQL server on the same node,
• Commence resource monitoring.
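
You can verify the resulting state with the cluster monitor, for example:

crm_mon -1   # one-shot cluster status; omit -1 for a continuously updating view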


4.3. Adding network connectivity monitoring


Finally, you may configure Pacemaker to monitor the upstream network and ensure that MySQL runs only on nodes
that have connectivity to clients. In order to do so, pick one or more IP addresses that the cluster node can expect to
always be accessible, such as the subnet’s default gateway, a core switch, or similar. The example below uses the
addresses 10.9.9.254 and 10.9.0.1.

Then, add ping resources as follows:

crm(live)# configure
crm(live)configure# primitive p_ping ocf:pacemaker:ping \
    params name="ping" \
    multiplier="1000" \
    host_list="10.9.9.254 10.9.0.1" \
    op monitor interval="15s" timeout="60s" \
    op start timeout="60s"
crm(live)configure# clone cl_ping p_ping \
    meta interleave="true"

Finally, add a location constraint to tie the Master role of your DRBD resource to a node with upstream network
connectivity:

crm(live)configure# location l_drbd_master_on_ping ms_drbd_mysql \
    rule $role="Master" \
    -inf: not_defined ping or ping number:lte 0
crm(live)configure# commit
crm(live)configure# exit

Once these changes have been made, Pacemaker will

• Monitor both upstream IP addresses from both cluster nodes,
• Periodically update a node attribute for each node with a value corresponding to the number of reachable upstream hosts,
• Move resources away from any node that loses connectivity to both upstream IP addresses.
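
The ping attribute driving this behavior can be inspected per node; as a hedged example, with both upstream hosts reachable its value is 2000 (reachable hosts times the configured multiplier of 1000):

crm_mon -1 -A   # -A additionally displays node attributes such as ping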

5. Using your MySQL database


After you have configured and started your highly available database, it is ready to be used.

5.1. Securing the MySQL installation


You should first secure your database against unauthorized access.

An initial recommended step to do so is to run the mysql_secure_installation utility. This utility allows you to set a
MySQL password for the root user, disallow any root logins over the network, and also drop the test database:

$ mysql_secure_installation

NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MySQL
SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!

In order to log into MySQL to secure it, we'll need the current
password for the root user. If you've just installed MySQL, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none):


OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MySQL
root user without the proper authorisation.

You already have a root password set, so you can safely answer 'n'.

Change the root password? [Y/n]


New password:
Re-enter new password:

Password updated successfully!


Reloading privilege tables..
... Success!

By default, a MySQL installation has an anonymous user, allowing anyone


to log into MySQL without having to have a user account created for
them. This is intended only for testing, and to make the installation
go a bit smoother. You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n]


... Success!

Normally, root should only be allowed to connect from 'localhost'. This


ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n]


... Success!

By default, MySQL comes with a database named 'test' that anyone can
access. This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n]


- Dropping test database...
... Success!
- Removing privileges on test database...
... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n]


... Success!

Cleaning up...

All done! If you've completed all of the above steps, your MySQL
installation should now be secure.

Thanks for using MySQL!

Database security is a complex and involved matter. Your security requirements may call for
configuration steps far beyond running mysql_secure_installation. Consult with a MySQL expert
for details.

5.2. Importing data


After installing, configuring and securing MySQL, you can set up a new database. You may do so with an installation
script, import an existing database dump, or execute any other configuration steps your application may require.
Typically, those steps would also include the configuration of a database user.
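
For instance, a database and user matching the access examples in the next section might be created as follows; this is a hedged sketch, with 'secret' standing in for a real password:

CREATE DATABASE example;
GRANT ALL PRIVILEGES ON example.* TO 'dbuser'@'%' IDENTIFIED BY 'secret';
FLUSH PRIVILEGES;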

The example below assumes an import of a compressed database dump created with the mysqldump utility:

bzcat dbdump.sql.bz2 | mysql -u root -p


Enter password:

5.3. Accessing your database


Assuming your database configuration allows a user named dbuser to access a database named example from any
host, you would, on a single-node installation, normally access this database via the MySQL UNIX socket:

$ mysql -u dbuser example


Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 53

From a remote client, assuming the database is running on host alice, you would access your MySQL installation as
follows:

$ mysql -h alice -u dbuser example


Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 53

However, if your database is a highly available one managed by Pacemaker, then it is vital that clients connect to it only
via the virtual, floating cluster IP address, as in the example below:

$ mysql -h 10.9.9.180 -u dbuser example


Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 53

When you do so, your client connections will always connect to the "correct" node. That is to say, they will always
connect to whichever node currently holds the DRBD resource in the Primary role, has mounted the file system,
advertises the cluster IP address, and runs the MySQL server daemon.


6. Failure modes
This section highlights specific failure modes and the cluster’s reaction to them.

6.1. Node failure


In case one of the cluster nodes suffers an outage, the cluster shifts all resources to the other node. Since DRBD
provides a synchronous replica of all MySQL data to the other node, MySQL can continue to serve the database
contents from there.

When a node fails, Heartbeat declares the node dead after the deadtime (as defined in ha.cf) has expired. Once this
happens, Pacemaker moves the node from the Online to the Offline state, and starts the affected resources on the
surviving peer node.

Details of node failure are also explained in the DRBD User’s Guide.

Node failure does entail MySQL database recovery on the node taking over the service. See MySQL Storage Engine
Recommendations and InnoDB Buffer Pool Size for important considerations applying to database recovery.
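
To exercise this failover path in a controlled way, you can place a node into standby rather than waiting for a real outage. A hedged sketch using the crm shell, with alice standing in for one of your node names:

crm node standby alice   # all resources migrate to the peer node
crm node online alice    # bring the node back into the cluster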

6.2. Storage subsystem failure


In case the storage subsystem backing a DRBD-enabled node fails, DRBD transparently detaches from its backing
device, and continues to serve data, over the DRBD replication link, from its peer node. Details of this functionality are
explained in the DRBD User’s Guide.

6.3. MySQL daemon failure


In case of a failure of the MySQL server daemon (such as an unexpected server shutdown, a segmentation fault, or
similar), the monitor operation for the p_mysql resource detects that failure and restarts the daemon.

6.4. Network failure


In case of loss of upstream connectivity, Pacemaker will automatically move resources away from nodes with failed
network links. This requires that ping monitoring is set up as explained in Adding network connectivity monitoring.

In case the DRBD replication link fails, DRBD just continues to serve data from the Primary node, and resynchronizes
the DRBD resource automatically, as soon as network connectivity is restored. Details of this functionality are
explained in the DRBD User’s Guide.
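
On DRBD 8, the state of the replication link and the progress of any resynchronization can be checked on either node:

cat /proc/drbd   # shows connection state (cs:), roles (ro:), disk states (ds:), and resync progress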

7. Special Considerations
In a highly available database on Pacemaker, a few considerations apply that do not exist on standalone databases. This
section highlights some of these considerations.

7.1. MySQL Storage Engine Recommendations


Highly available systems are, by definition, designed to gracefully recover from a hard server failure. In database
applications, this means that the database must support transactions, and be crash safe. In MySQL, the MyISAM
storage engine does not fulfill these requirements, and should thus be avoided.

Highly available MySQL installations should always employ the InnoDB storage engine. This can be set to the storage
engine default with the following entry in the MySQL configuration file, /etc/my.cnf:


[mysqld]
default-storage-engine = InnoDB

Existing tables using MyISAM can be converted to the InnoDB format with the following SQL command:

ALTER TABLE <tablename> ENGINE=InnoDB;
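
To find tables that still use MyISAM, a query along the following lines may help; note that the mysql system schema legitimately contains MyISAM tables in MySQL 5.x and should be left alone:

SELECT table_schema, table_name
FROM information_schema.tables
WHERE engine = 'MyISAM'
  AND table_schema NOT IN ('mysql', 'information_schema');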

7.2. InnoDB Buffer Pool Size


MySQL performance tuning guides often call for selecting a large InnoDB buffer pool size (typically around 80% of the
available physical memory on the machine). While this reduces I/O load and is generally a sound approach on a stand-
alone machine, it does have its drawbacks on a highly available system.

A large buffer pool increases InnoDB recovery time after a hard server failure, such as a node crash or forced failover.
A properly configured InnoDB database will eventually recover from such a condition, but possibly after a lengthy
recovery process — potentially on the order of hours. This may lead to extended and unexpected system outages.

It is thus often necessary to accept somewhat reduced performance on the highly available MySQL system by
selecting a smaller buffer pool, to ensure proper failover times in return. Proper values for this setting vary greatly
based on both hardware and application load, and users should always consult with a MySQL high availability expert to
select a good value.

The InnoDB buffer pool size is set in the MySQL configuration file, /etc/my.cnf:

[mysqld]
innodb-buffer-pool-size = <value>
