This post is a guide on how to create a MySQL cluster on the Ubuntu 12.04 LTS server distribution. I will not go through the basic Ubuntu installation, as that can be considered a prerequisite for creating a MySQL cluster. I will try to cover every aspect of the installation process and touch on some parts regarding testing. There is another excellent guide behind the following link which has pretty much the same information and helped me get through the installation process (LINK).
The cluster can be built just as easily on physical hardware as in a virtual environment. Obviously, if you do it in a virtual environment, you can create snapshots of the VMs themselves to minimize the damage if something goes wrong.
Please do note that MySQL clusters have certain limitations compared to a single-server MySQL configuration. Before venturing on with creating a MySQL cluster, do check the following page here. Personally, I would recommend creating a PostgreSQL cluster.
1. SQL Cluster VMs
To create a fully functional cluster you need a minimum of three VMs: one virtual machine to act as the cluster management node and two to act as database nodes. If one of the DB nodes goes offline, the cluster can still work and the node will synchronize once it is back up. The management node can also become inaccessible, but that will most likely cause a “split-brain” issue with inconsistent data between the two working DB nodes. In my example I will include a second management node in the cluster to give you an example of how it works.
In this tutorial I will walk you through installing a total of 5 VMs: two are management nodes, two are DB nodes and one is a MySQL proxy. It is possible to add more nodes to the cluster later on, and the single MySQL proxy can be made Highly Available (HA), but that is not covered in this post.
All virtual machines are in the same 172.17.0.0/24 network, but the load-balancer could as well have a connection to the outside network. This will vary from your network configuration and should be treated as a simple example network. Before you continue, you should have all VM’s working and have at least local network connectivity working.
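To keep the examples readable, the rest of this guide uses the addresses below. The management and proxy hostnames appear later in the guide; the DB node names are my own convention and are assumptions, so adjust them to your environment. An /etc/hosts fragment for each VM could look like:

```
172.17.0.10  SQL-MGMT-1
172.17.0.11  SQL-MGMT-2
172.17.0.20  SQL-DB-1
172.17.0.21  SQL-DB-2
172.17.0.5   mysql-proxy
```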
2. Installing the management nodes
First off we need to install the management nodes of the MySQL cluster. We will start with SQL-MGMT-1, and once it is completed, the same procedure should be carried out on the second management host. If you prefer to have only one management VM, just leave the second one out of the configuration and you are good to go.
Before you continue, go to http://www.mysql.com/downloads/cluster/#downloads and verify which version is the current one. This guide has been made for MySQL cluster version 7.2.10 and might vary for later versions. Change your download links accordingly!
First we want to download the MySQL cluster packages to a good location on the management host. Navigate to the /usr/src directory and create a mysql-mgm directory.
mkdir /usr/src/mysql-mgm
cd /usr/src/mysql-mgm
After that is done, download the latest tarball from the MySQL download site and extract the package.
wget http://cdn.mysql.com/Downloads/MySQL-Cluster-7.2/mysql-cluster-gpl-7.2.10-linux2.6-x86_64.tar.gz
tar xvfz mysql-cluster-gpl-7.2.10-linux2.6-x86_64.tar.gz
Navigate to the extracted folder and move the binaries.
cd mysql-cluster-gpl-7.2.10-linux2.6-x86_64
cp bin/ndb_mgm /usr/bin
cp bin/ndb_mgmd /usr/bin
Change the permissions for the copied binaries and optionally remove the downloaded source.
chmod 755 /usr/bin/ndb_mg*
cd /usr/src
rm -rf /usr/src/mysql-mgm
Next, we must create the cluster configuration file in /var/lib/mysql-cluster/ named config.ini
The folder doesn’t exist yet, so you need to create it:

mkdir /var/lib/mysql-cluster
After that, the config.ini file can be created with your favourite text editor.
[NDBD DEFAULT]
NoOfReplicas=2
DataMemory=80M
IndexMemory=18M

[MYSQLD DEFAULT]

[NDB_MGMD DEFAULT]
DataDir=/var/lib/mysql-cluster

[TCP DEFAULT]

# Section for the cluster management nodes
[NDB_MGMD]
NodeId=1
# IP address of the first management node (this system)
HostName=172.17.0.10

[NDB_MGMD]
NodeId=2
# IP address of the second management node
HostName=172.17.0.11

# Section for the storage nodes
[NDBD]
# IP address of the first storage node
HostName=172.17.0.20
DataDir=/var/lib/mysql-cluster

[NDBD]
# IP address of the second storage node
HostName=172.17.0.21
DataDir=/var/lib/mysql-cluster

# one [MYSQLD] per storage node
[MYSQLD]
[MYSQLD]
Note that all hosts are defined at this stage, even though we are only installing the first one. Also note that the management nodes need the NodeId specified, whereas the NDBD nodes do not.
!Note! If you copy the lines directly, the double hyphen "--" may be pasted as a single long dash and won't work; you need to type it out.
Once this is done you can start the management node with the following command:
ndb_mgmd -f /var/lib/mysql-cluster/config.ini --configdir=/var/lib/mysql-cluster/
Once that is done, you can automate the start procedure by entering an entry to the init.d with the following commands:
echo "ndb_mgmd -f /var/lib/mysql-cluster/config.ini --configdir=/var/lib/mysql-cluster/" > /etc/init.d/ndb_mgmd
chmod 755 /etc/init.d/ndb_mgmd
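The one-line echo produces a script without a shebang. If you prefer a slightly more complete wrapper, a minimal sketch using the same paths as this guide could be:

```shell
# Write a minimal start wrapper for ndb_mgmd (sketch only; no stop/status handling).
mkdir -p /etc/init.d  # already present on Ubuntu
cat > /etc/init.d/ndb_mgmd <<'EOF'
#!/bin/sh
# Start the MySQL Cluster management daemon with this guide's config paths.
exec ndb_mgmd -f /var/lib/mysql-cluster/config.ini --configdir=/var/lib/mysql-cluster/
EOF
chmod 755 /etc/init.d/ndb_mgmd
```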
Once everything is working, follow the same procedure on the second management node using the same configuration. Do not change the node IDs in the configuration file!
You can verify that both management nodes are operational by entering the management console (just type ndb_mgm in a terminal) and typing show once inside the utility. At this point both the NDBD nodes and the MySQL nodes are still disconnected.
3. The database nodes
Creating the DB nodes is fairly similar to creating the management nodes. First off, let's start by creating a mysql group and adding a user to it.
groupadd mysql
useradd -g mysql mysql
Navigate to /usr/local, download the same compressed file as on the management nodes, and extract it.
cd /usr/local/
wget http://cdn.mysql.com/Downloads/MySQL-Cluster-7.2/mysql-cluster-gpl-7.2.10-linux2.6-x86_64.tar.gz
tar xvfz mysql-cluster-gpl-7.2.10-linux2.6-x86_64.tar.gz
Create a symbolic link named mysql pointing to the extracted folder (it will be used by the DB cluster later, so do not delete it!). Once the symlink is in place you can install the database.
ln -s mysql-cluster-gpl-7.2.10-linux2.6-x86_64 mysql
cd mysql
scripts/mysql_install_db --user=mysql
Change the ownership to the newly created mysql group, and give the mysql user ownership of the data directory.
chown -R root:mysql .
chown -R mysql data
Like on the management nodes, we want the database engine to start automatically, so we need to create the command in init.d.
cp support-files/mysql.server /etc/init.d/
chmod 755 /etc/init.d/mysql.server
Lastly, copy the bin files to /usr/bin to keep everything neat and create a symlink to keep references intact.
cd /usr/local/mysql/bin
mv * /usr/bin
cd ../
rm -fr /usr/local/mysql/bin
ln -s /usr/bin /usr/local/mysql/bin
The MySQL configuration file does not exist at first, so we need to create it ourselves. The file is located in /etc/ and is named my.cnf. Open your favorite text editor and add the following lines:
[mysqld]
ndbcluster
# IP addresses of the cluster management nodes
ndb-connectstring=172.17.0.10,172.17.0.11

[mysql_cluster]
# IP addresses of the cluster management nodes
ndb-connectstring=172.17.0.10,172.17.0.11
Note that both management nodes are entered, separated by a comma. If you only have one management node, just remove the second one from the list. Once the my.cnf file has been saved, we need to create the data folder for MySQL:

mkdir /var/lib/mysql-cluster
Once that is done, we just need to initialize the cluster and start the service. The initialization needs to be done only when you start the node for the first time, or when the /var/lib/mysql-cluster/config.ini file has been changed on the management node(s).
cd /var/lib/mysql-cluster
ndbd --initial
/etc/init.d/mysql.server start
After this, secure the MySQL installation by running the mysql_secure_installation script (it was moved to /usr/bin along with the other binaries):

mysql_secure_installation
And lastly we need ndb to start automatically:
echo "ndbd" > /etc/init.d/ndbd
chmod 755 /etc/init.d/ndbd
Once everything is done, you’ve completed the first node. Follow the steps on the second database node to get it up and running.
4. Testing and Verification
Once everything is completed, it’s time to verify our configuration and see that everything is working as intended. To do this, we need to verify that all nodes are visible and connected on one of the management nodes. Open the ndb management shell by typing ndb_mgm in a terminal and type show. This time you should see that the database nodes have been populated in the output.
root@SQL-MGMT-Node1:~# ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=3    @172.17.0.20  (mysql-5.5.29 ndb-7.2.10, Nodegroup: 0, Master)
id=4    @172.17.0.21  (mysql-5.5.29 ndb-7.2.10, Nodegroup: 0)

[ndb_mgmd(MGM)] 2 node(s)
id=1    @172.17.0.10  (mysql-5.5.29 ndb-7.2.10)
id=2    @172.17.0.11  (mysql-5.5.29 ndb-7.2.10)

[mysqld(API)]   2 node(s)
id=5    @172.17.0.21  (mysql-5.5.29 ndb-7.2.10)
id=6    @172.17.0.20  (mysql-5.5.29 ndb-7.2.10)
If you see similar output, you are good to go and can try out some basic SQL commands. Move to the first DB node in the cluster, log in to the SQL database, and create a new database and table to verify replication. Note that when creating a table, its engine has to be NDBCLUSTER. If you use InnoDB, for example, the data will NOT be replicated between the cluster nodes. There are some drawbacks to using NDBCLUSTER as your engine, which are documented on the MySQL website.
mysql -u root -p

CREATE DATABASE mysqlclustertest;
USE mysqlclustertest;
CREATE TABLE testtable (i INT) ENGINE=NDBCLUSTER;
INSERT INTO testtable (i) VALUES (1);
SELECT * FROM testtable;
Connect to the second database node and let's see if we get the same output.
mysql -u root -p

USE mysqlclustertest;
SELECT * FROM testtable;
You should see the same output as on the first node when doing the select statement. Now if you insert a new entry to the table, it will be replicated back to the first node.
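To check replication in the other direction, you can insert a row on the second node and read it back on the first. A short sketch using the example table from above (run each part in the respective node's SQL shell):

```sql
-- On the second DB node:
INSERT INTO mysqlclustertest.testtable (i) VALUES (2);

-- Then on the first DB node; both rows should be visible:
SELECT * FROM mysqlclustertest.testtable;
```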
For additional testing, please see section 5 of the guide linked at the beginning of this post.
5. Installing the MySQL proxy

The last part of our guide is to install the load-balancer to get some additional use out of the MySQL cluster. The load-balancer can be something other than mysql-proxy, but mysql-proxy is easy and simple to install and does its job. You could use something like pfSense if you'd like to.
root@mysql-proxy:~# apt-get install mysql-proxy
root@mysql-proxy:~# mkdir /etc/mysql-proxy
root@mysql-proxy:~# cd /etc/mysql-proxy
root@mysql-proxy:/etc/mysql-proxy# nano mysql-proxy.conf
And add the following to your mysql-proxy.conf
[mysql-proxy]
daemon = true
proxy-address = 172.17.0.5:3306
proxy-skip-profiling = true
keepalive = true
event-threads = 50
pid-file = /var/run/mysql-proxy.pid
log-file = /var/log/mysql-proxy.log
log-level = debug
proxy-backend-addresses = 172.17.0.20:3306,172.17.0.21:3306
proxy-lua-script = /usr/lib/mysql-proxy/lua/proxy/balance.lua
For automatic start with additional options create the following file /etc/default/mysql-proxy
ENABLED="true"
OPTIONS="--defaults-file=/etc/mysql-proxy/mysql-proxy.conf --plugins=proxy"
After that you can start the mysql-proxy by invoking: /etc/init.d/mysql-proxy start (the script also accepts stop and status).
Once this is completed, you should be able to connect to the MySQL servers through the proxy address. Remember that for this to work, you will need to create a new user that is allowed to connect from the relevant subnet. You will also need to add a bind-address entry to the my.cnf file on the MySQL servers.
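As an example, the my.cnf addition on the first DB node might look like the fragment below; the address is the node's own IP from this guide's example network, so substitute your own:

```ini
[mysqld]
# Listen on the node's own address so the proxy can reach it
bind-address = 172.17.0.20
```

Use 172.17.0.21 on the second node, and restart mysql.server after editing.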
SQL users are not replicated, so the same user has to be added to each database node individually. Once logged in to the DB node's SQL shell, execute the following:
CREATE USER 'newuser'@'172.17.0.%' IDENTIFIED BY 'password';
FLUSH PRIVILEGES;
SELECT * FROM mysql.user;
Change newuser, the IP and the password as per your configuration needs. The % in the IP address acts as a wildcard, so the whole subnet is allowed to connect remotely to this DB node. Remember to flush privileges after the user has been added, and remember to add the same user with the same configuration to all other database nodes in the cluster.