MySQL Utilities 1.5 Manual


8.7.4.3 Configure DRBD

If your nodes do not already have an empty partition to dedicate to DRBD, create one. If you are using a virtual machine, you can add a new storage device to it. The details of creating partitions are beyond the scope of this guide.

This partition is used as a resource that is managed (and synchronized between the nodes) by DRBD. For DRBD to do this, a new configuration file (in this case called clusterdb_res.res) must be created in the /etc/drbd.d/ directory; the contents should look similar to this:

resource clusterdb_res {
  protocol C;
  handlers {
    pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
    pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
    local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
    fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
  }
  startup {
    degr-wfc-timeout 120;    # 2 minutes
    outdated-wfc-timeout 2;  # 2 seconds
  }
  disk {
    on-io-error detach;
  }
  net {
    cram-hmac-alg "sha1";
    shared-secret "clusterdb";
    after-sb-0pri disconnect;
    after-sb-1pri disconnect;
    after-sb-2pri disconnect;
    rr-conflict disconnect;
  }
  syncer {
    rate 10M;
    al-extents 257;
    on-no-data-accessible io-error;
  }
  on host1 {
    device /dev/drbd0;
    disk /dev/sdb;
    address 192.168.1.101:7788;
    flexible-meta-disk internal;
  }
  on host2 {
    device /dev/drbd0;
    disk /dev/sdb;
    address 192.168.1.102:7788;
    meta-disk internal;
  }
}

The IP addresses and disk locations should be specific to the hosts that the cluster is using. In this example the device that DRBD creates is located at /dev/drbd0; it is this device that DRBD hands back and forth between the hosts. This resource configuration file should be copied to the same location on the second host:

[root@host1]# scp clusterdb_res.res host2:/etc/drbd.d/

The configuration file shown above uses the DRBD 8.3 syntax. Although DRBD 8.4 is the newer version, some distributions still ship DRBD 8.3. If you have installed DRBD 8.4, there is no need to change anything, because DRBD 8.4 also understands the DRBD 8.3 configuration syntax.
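If you prefer to express the same settings in native DRBD 8.4 syntax, the deprecated syncer section maps roughly as follows. This is a sketch based on the DRBD 8.4 option layout; consult drbd.conf(5) on your system before relying on it:

```
resource clusterdb_res {
  options {
    on-no-data-accessible io-error;   # moved out of syncer in 8.4
  }
  disk {
    on-io-error detach;
    resync-rate 10M;                  # 8.3: syncer { rate 10M; }
    al-extents 257;                   # 8.3: syncer { al-extents 257; }
  }
  # the handlers, startup, net, and on host sections keep the same syntax
}
```

The flexible-meta-disk keyword is likewise deprecated in 8.4; meta-disk internal covers both hosts in this example.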

Before starting the DRBD daemon, metadata must be created for the new resource (clusterdb_res) on each host using the following command:

[root@host1]# drbdadm create-md clusterdb_res

[root@host2]# drbdadm create-md clusterdb_res
    

It is now possible to start the DRBD daemon on each host:

[root@host1]# /etc/init.d/drbd start

[root@host2]# /etc/init.d/drbd start
    

At this point the DRBD service is running on both hosts but neither host is the "primary" and so the resource (block device) cannot be accessed on either host; this can be confirmed by querying the status of the service:

[root@host1]# /etc/init.d/drbd status

[root@host2]# /etc/init.d/drbd status

In order to create the file system (and go on to store useful data in it), one of the hosts must be made the primary for the clusterdb_res resource, so execute the following on host1:

[root@host1]# drbdadm -- --overwrite-data-of-peer primary all
[root@host1]# /etc/init.d/drbd status

The status output also shows the progress of the block-level syncing of the device from the new primary (host1) to the secondary (host2). This initial sync can take some time but it should not be necessary to wait for it to complete in order to complete the other steps.
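If you only want the progress figure, the resync percentage can be extracted from /proc/drbd. The snippet below is a sketch that parses a sample line in the /proc/drbd format; on a live node, replace the sample with the output of cat /proc/drbd:

```shell
# Sketch: pull the resync percentage out of /proc/drbd style output.
# 'sample' is an illustrative line; on a real node use: cat /proc/drbd
sample="	[=====>..............] sync'ed: 31.4% (12345/17920)M"
pct=$(printf '%s\n' "$sample" | grep -o '[0-9.]*%' | head -n1)
echo "resync progress: $pct"
# prints: resync progress: 31.4%
```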

Now that the device is available on host1, it is possible to create a file system on it:

[root@host1]# mkfs -t ext4 /dev/drbd0
Note

The above does not need to be repeated on the second host, as DRBD handles the synchronization of the raw disk data.

In order for the DRBD file system to be mounted, the /var/lib/mysql_drbd directory should be created on both hosts:

[root@host1]# mkdir /var/lib/mysql_drbd
[root@host1]# chown mysql /var/lib/mysql_drbd
[root@host1]# chgrp mysql /var/lib/mysql_drbd

[root@host2]# mkdir /var/lib/mysql_drbd
[root@host2]# chown mysql /var/lib/mysql_drbd
[root@host2]# chgrp mysql /var/lib/mysql_drbd
    

On the currently active DRBD host only, temporarily mount the DRBD file system:

[root@host1]# mount /dev/drbd0 /var/lib/mysql_drbd
