Table of Contents
- 4.1 Installing MySQL Cluster on Linux
- 4.2 Installing MySQL Cluster on Windows
- 4.3 Initial Configuration of MySQL Cluster
- 4.4 Initial Startup of MySQL Cluster
- 4.5 MySQL Cluster Example with Tables and Data
- 4.6 Safe Shutdown and Restart of MySQL Cluster
- 4.7 Upgrading and Downgrading MySQL Cluster
This section describes the basics for planning, installing, configuring, and running a MySQL Cluster. Whereas the examples in Chapter 5, Configuration of MySQL Cluster NDB 6.1-7.1 provide more in-depth information on a variety of clustering options and configuration, the result of following the guidelines and procedures outlined here should be a usable MySQL Cluster which meets the minimum requirements for availability and safeguarding of data.
For information about upgrading or downgrading a MySQL Cluster between release versions, see Section 4.7, “Upgrading and Downgrading MySQL Cluster”.
This section covers hardware and software requirements; networking issues; installation of MySQL Cluster; configuration issues; starting, stopping, and restarting the cluster; loading of a sample database; and performing queries.
Assumptions. The following sections make a number of assumptions regarding the cluster's physical and network configuration. These assumptions are discussed in the next few paragraphs.
Cluster nodes and host computers. The cluster consists of four nodes, each on a separate host computer, and each with a fixed network address on a typical Ethernet network as shown here:
| Node                   | IP Address   |
|------------------------|--------------|
| Management node (mgmd) | 192.168.0.10 |
| SQL node (mysqld)      | 192.168.0.20 |
| Data node "A" (ndbd)   | 192.168.0.30 |
| Data node "B" (ndbd)   | 192.168.0.40 |
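The configuration file itself is the subject of Section 4.3; as a preview, a minimal config.ini (read by the management server) for this four-node layout might look like the following sketch. The memory values and data directory paths shown here are illustrative placeholders only and should be adjusted for your own installation:

```ini
# config.ini -- minimal sketch for the four-node example cluster
[ndbd default]
NoOfReplicas=2                   # two data nodes, two replicas
DataMemory=80M                   # small values, for illustration only
IndexMemory=18M

[ndb_mgmd]
HostName=192.168.0.10            # management node
DataDir=/var/lib/mysql-cluster   # placeholder path

[ndbd]
HostName=192.168.0.30            # data node "A"
DataDir=/usr/local/mysql/data    # placeholder path

[ndbd]
HostName=192.168.0.40            # data node "B"
DataDir=/usr/local/mysql/data    # placeholder path

[mysqld]
HostName=192.168.0.20            # SQL node
```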
In the interest of simplicity (and reliability), this How-To uses only numeric IP addresses. However, if DNS resolution is available on your network, it is possible to use host names in lieu of IP addresses in configuring Cluster. Alternatively, you can use the hosts file (typically /etc/hosts for Linux and other Unix-like operating systems, C:\WINDOWS\system32\drivers\etc\hosts on Windows, or your operating system's equivalent) for providing a means to do host lookup if such is available.
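For example, in the absence of DNS, each of the four hosts could carry identical entries such as the following in its hosts file. The host names here are hypothetical examples; the addresses are the ones assumed above:

```
# /etc/hosts -- identical entries on all four cluster hosts (example names)
192.168.0.10   ndb_mgmd
192.168.0.20   mysqld1
192.168.0.30   ndbd_a
192.168.0.40   ndbd_b
```

Keeping the file identical on every host avoids the asymmetric-resolution problem described next.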
Potential hosts file issues.
A common problem when trying to use host names for Cluster nodes
arises because of the way in which some operating systems
(including some Linux distributions) set up the system's own
host name in the /etc/hosts file during
installation. Consider two machines with the host names
ndb1 and ndb2, both in the
cluster network domain. Red Hat Linux
(including some derivatives such as CentOS and Fedora) places the
following entries in these machines' /etc/hosts files:

    # ndb1 /etc/hosts:
    127.0.0.1   ndb1.cluster ndb1 localhost.localdomain localhost

    # ndb2 /etc/hosts:
    127.0.0.1   ndb2.cluster ndb2 localhost.localdomain localhost

SUSE Linux (including OpenSUSE) places these entries in the
machines' /etc/hosts files:

    # ndb1 /etc/hosts:
    127.0.0.1   localhost
    127.0.0.2   ndb1.cluster ndb1

    # ndb2 /etc/hosts:
    127.0.0.1   localhost
    127.0.0.2   ndb2.cluster ndb2

In both instances, ndb1 routes
ndb1.cluster to a loopback IP address, but gets a
public IP address from DNS for ndb2.cluster,
while ndb2 routes ndb2.cluster
to a loopback address and obtains a public address for
ndb1.cluster. The result is that each data node
connects to the management server, but cannot tell when any other
data nodes have connected, and so the data nodes appear to hang
while starting.

Caution: You cannot mix
localhost and other host names
or IP addresses in
config.ini. For these
reasons, the solution in such cases (other than to use IP
addresses for all config.ini HostName
entries) is to remove the fully qualified host names from
/etc/hosts and use these in
config.ini for all cluster hosts.
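One quick way to spot this misconfiguration is to check, on each host, whether a given cluster host name resolves to a loopback address. A minimal sketch in Python (the function name is ours; on an affected Red Hat or SUSE host, passing the machine's own fully qualified name, such as the hypothetical ndb1.cluster above, would return True):

```python
import socket

def resolves_to_loopback(hostname):
    """Return True if hostname resolves to an IPv4 loopback address (127.x.x.x)."""
    try:
        addr = socket.gethostbyname(hostname)
    except socket.gaierror:
        return False  # name does not resolve at all
    return addr.startswith("127.")

# "localhost" is loopback by definition; a host's own FQDN should NOT be.
print(resolves_to_loopback("localhost"))
```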
Host computer type. Each host computer in our installation scenario is an Intel-based desktop PC running a supported operating system installed to disk in a standard configuration, and running no unnecessary services. The core operating system with standard TCP/IP networking capabilities should be sufficient. For the sake of simplicity, we also assume that the file systems on all hosts are set up identically. In the event that they are not, you should adapt these instructions accordingly.
Network hardware. We assume that standard 100 Mbps or 1 gigabit Ethernet cards are installed on each machine, along with the proper drivers for the cards, and that all four hosts are connected through a standard-issue Ethernet networking appliance such as a switch. (All machines should use network cards with the same throughput; that is, all four machines in the cluster should have 100 Mbps cards, or all four should have 1 Gbps cards.) MySQL Cluster works in a 100 Mbps network; however, gigabit Ethernet provides better performance.
MySQL Cluster is not intended for use in a network for which throughput is less than 100 Mbps or which experiences a high degree of latency. For this reason (among others), attempting to run a MySQL Cluster over a wide area network such as the Internet is not likely to be successful, and is not supported in production.
Sample data. We use the world database, which is available for download from the MySQL Web site (see http://dev.mysql.com/doc/index-other.html). We assume that each machine has sufficient memory for running the operating system, required MySQL Cluster processes, and (on the data nodes) storing the database.
For general information about installing MySQL, see Installing and Upgrading MySQL. For information about installation of MySQL Cluster on Linux and other Unix-like operating systems, see Section 4.1, “Installing MySQL Cluster on Linux”. For information about installation of MySQL Cluster on Windows operating systems, see Section 4.2, “Installing MySQL Cluster on Windows”.
For general information about MySQL Cluster hardware, software, and networking requirements, see Section 3.3, “MySQL Cluster Hardware, Software, and Networking Requirements”.