It isn’t necessarily obvious how to set up a Cluster on Linux; this post attempts to show how to get a simple Cluster up and running. For simplicity, all of the nodes will run on a single host – a subsequent post will take the next step of moving some of them to a second host. As with my Windows post, the Cluster will contain the following nodes:
- 1 Management node (ndb_mgmd)
- 2 Data nodes (ndbd)
- 3 MySQL Server (API) nodes (mysqld)
Downloading and installing
Browse to the MySQL Cluster Linux download page at mysql.com, download the correct version (32- or 64-bit) and store it in the desired directory (in my case, /home/billy/mysql), then extract it and rename the new folder to something easier to work with:
```
[billy@ws1 mysql]$ tar xvf mysql-cluster-gpl-7.0.6-linux-x86_64-glibc23.tar.gz
[billy@ws1 mysql]$ mv mysql-cluster-gpl-7.0.6-linux-x86_64-glibc23 7_0_6
```
Create 3 data folders (one for each of the MySQL API – mysqld – processes) and set up the files that they will need to run correctly:
```
[billy@ws1 mysql]$ cd 7_0_6/data
[billy@ws1 data]$ mkdir data1 data2 data3
[billy@ws1 data]$ mkdir data1/mysql data1/test data2/mysql data2/test data3/mysql data3/test
[billy@ws1 data]$ cd ..
[billy@ws1 7_0_6]$ scripts/mysql_install_db --basedir=/home/billy/mysql/7_0_6 --datadir=/home/billy/mysql/7_0_6/data/data1
[billy@ws1 7_0_6]$ scripts/mysql_install_db --basedir=/home/billy/mysql/7_0_6 --datadir=/home/billy/mysql/7_0_6/data/data2
[billy@ws1 7_0_6]$ scripts/mysql_install_db --basedir=/home/billy/mysql/7_0_6 --datadir=/home/billy/mysql/7_0_6/data/data3
```
Configure and run the Cluster
Create a sub-directory called “conf” and create the following 4 files there:
config.ini
```
[ndbd default]
noofreplicas=2

[ndbd]
hostname=localhost
id=2

[ndbd]
hostname=localhost
id=3

[ndb_mgmd]
id=1
hostname=localhost

[mysqld]
id=4
hostname=localhost

[mysqld]
id=5
hostname=localhost

[mysqld]
id=6
hostname=localhost
```
my.1.conf
```
[mysqld]
ndb-nodeid=4
ndbcluster
datadir=/home/billy/mysql/7_0_6/data/data1
basedir=/home/billy/mysql/7_0_6
port=3306
server-id=1
log-bin
```
my.2.conf
```
[mysqld]
ndb-nodeid=5
ndbcluster
datadir=/home/billy/mysql/7_0_6/data/data2
basedir=/home/billy/mysql/7_0_6
port=3307
server-id=2
log-bin
```
my.3.conf
```
[mysqld]
ndb-nodeid=6
ndbcluster
datadir=/home/billy/mysql/7_0_6/data/data3
basedir=/home/billy/mysql/7_0_6
port=3308
server-id=3
log-bin
```
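As an aside, the three my.X.conf files differ only in their node id, data directory, port and server-id, so if you prefer you can generate them with a small shell loop rather than typing them out. This is just a convenience sketch, assuming the same /home/billy/mysql/7_0_6 layout used above:

```shell
#!/bin/sh
# Generate conf/my.1.conf .. conf/my.3.conf.
# Each mysqld gets its own NDB node id (4-6), data directory,
# port (3306-3308) and server-id (1-3).
BASEDIR=/home/billy/mysql/7_0_6
mkdir -p conf
for i in 1 2 3; do
  cat > conf/my.$i.conf <<EOF
[mysqld]
ndb-nodeid=$((i + 3))
ndbcluster
datadir=$BASEDIR/data/data$i
basedir=$BASEDIR
port=$((3305 + i))
server-id=$i
log-bin
EOF
done
```

Running the script once produces the same three files shown above.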
Those files configure the nodes that make up the Cluster. From a command prompt window, launch the management node:
```
[billy@ws1 7_0_6]$ bin/ndb_mgmd --initial -f conf/config.ini --configdir=/home/billy/mysql/7_0_6/conf
2009-06-17 13:00:08 [MgmSrvr] INFO     -- NDB Cluster Management Server. mysql-5.1.34 ndb-7.0.6
2009-06-17 13:00:08 [MgmSrvr] INFO     -- Reading cluster configuration from 'conf/config.ini'
```
Check that the management node is up and running:
```
[billy@ws1 7_0_6]$ bin/ndb_mgm
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2 (not connected, accepting connect from localhost)
id=3 (not connected, accepting connect from localhost)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @localhost  (mysql-5.1.34 ndb-7.0.6)

[mysqld(API)]   3 node(s)
id=4 (not connected, accepting connect from localhost)
id=5 (not connected, accepting connect from localhost)
id=6 (not connected, accepting connect from localhost)

ndb_mgm> quit
```
and then start the 2 data nodes (ndbd) and 3 MySQL Server (API) nodes (mysqld), and then check that they’re all up and running:
```
[billy@ws1 7_0_6]$ bin/ndbd --initial -c localhost:1186
2009-06-17 13:05:47 [ndbd] INFO     -- Configuration fetched from 'localhost:1186', generation: 1
[billy@ws1 7_0_6]$ bin/ndbd --initial -c localhost:1186
2009-06-17 13:05:51 [ndbd] INFO     -- Configuration fetched from 'localhost:1186', generation: 1
[billy@ws1 7_0_6]$ bin/mysqld --defaults-file=conf/my.1.conf &
[billy@ws1 7_0_6]$ bin/mysqld --defaults-file=conf/my.2.conf &
[billy@ws1 7_0_6]$ bin/mysqld --defaults-file=conf/my.3.conf &
[billy@ws1 7_0_6]$ bin/ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2    @127.0.0.1  (mysql-5.1.34 ndb-7.0.6, Nodegroup: 0, Master)
id=3    @127.0.0.1  (mysql-5.1.34 ndb-7.0.6, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @127.0.0.1  (mysql-5.1.34 ndb-7.0.6)

[mysqld(API)]   3 node(s)
id=4    @127.0.0.1  (mysql-5.1.34 ndb-7.0.6)
id=5    @127.0.0.1  (mysql-5.1.34 ndb-7.0.6)
id=6    @127.0.0.1  (mysql-5.1.34 ndb-7.0.6)

ndb_mgm> quit
```
Using the Cluster
There are now 3 API nodes/MySQL Servers/mysqlds running, all accessing the same data. Each of those nodes can be reached with the mysql client using the port that was configured in its my.X.conf file. For example, we can access the first of those nodes (node 4) in the following way:
```
[billy@ws1 7_0_6]$ bin/mysql -h localhost -P 3306
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 4
Server version: 5.1.34-ndb-7.0.6-cluster-gpl-log MySQL Cluster Server (GPL)

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> use test;
Database changed
mysql> create table assets (name varchar(30) not null primary key,
    -> value int) engine=ndb;
090617 13:21:36 [Note] NDB Binlog: CREATE TABLE Event: REPL$test/assets
090617 13:21:36 [Note] NDB Binlog: logging ./test/assets (UPDATED,USE_WRITE)
090617 13:21:37 [Note] NDB Binlog: DISCOVER TABLE Event: REPL$test/assets
090617 13:21:37 [Note] NDB Binlog: DISCOVER TABLE Event: REPL$test/assets
090617 13:21:37 [Note] NDB Binlog: logging ./test/assets (UPDATED,USE_WRITE)
090617 13:21:37 [Note] NDB Binlog: logging ./test/assets (UPDATED,USE_WRITE)
Query OK, 0 rows affected (0.99 sec)

mysql> insert into assets values ('Car','1900');
Query OK, 1 row affected (0.03 sec)

mysql> select * from assets;
+------+-------+
| name | value |
+------+-------+
| Car  |  1900 |
+------+-------+
1 row in set (0.00 sec)

mysql> quit
Bye
```
Note that because this table uses the ndb (MySQL Cluster) storage engine, the data is actually held in the data nodes rather than in the SQL node, and so we can access the exact same data from either of the other 2 SQL nodes:
```
[billy@ws1 7_0_6]$ bin/mysql -h localhost -P 3307
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 5
Server version: 5.1.34-ndb-7.0.6-cluster-gpl-log MySQL Cluster Server (GPL)

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> use test;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> select * from assets;
+------+-------+
| name | value |
+------+-------+
| Car  |  1900 |
+------+-------+
1 row in set (0.00 sec)

mysql> quit
Bye
```
Your next steps
This is a very simple, contrived setup – in any sensible deployment, the nodes would be spread across multiple physical hosts in the interests of performance and redundancy (take a look at the new article, Deploying MySQL Cluster over multiple hosts, to see how to do that). You’d also set several more variables in the configuration files in order to size and tune your Cluster.
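As one illustration of that tuning, the [ndbd default] section of config.ini is where per-data-node sizing parameters such as DataMemory (memory for table rows) and IndexMemory (memory for hash indexes) go. The values below are arbitrary examples, not recommendations; they need to be derived from your own data volumes and workload:

```ini
# Illustrative sizing only - derive real values from your own data set
[ndbd default]
noofreplicas=2
DataMemory=512M    # per data node, holds table rows
IndexMemory=64M    # per data node, holds hash (primary key/unique) indexes
```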