
C.5. LOCK_GULM, SLM External, and GNBD

This example configures a cluster with three GFS nodes and two GFS file systems. In addition to the three GFS nodes, the example requires one node to run a LOCK_GULM server and one node to act as a GNBD server, for a total of five nodes.

This section provides the following information about the example:

Section C.5.1. Key Characteristics
Section C.5.2. Kernel Modules Loaded
Section C.5.3. Setup Process

C.5.1. Key Characteristics

This example configuration has the following key characteristics:

    GFS nodes: n01, n02, and n03
    Lock server node: lcksrv, running a single external LOCK_GULM server
    GNBD server node: gnbdsrv
    Locking protocol: LOCK_GULM (lock_gulm)
    Fencing: an APC MasterSwitch network power switch (fence_apc)
    File systems: two GFS file systems, gfs01 and gfs02, each with three journals

The following tables summarize the node, fencing, and storage details:

Host Name    IP Address    Login Name    Password
apc          10.0.1.10     apc           apc

Table C-15. APC MasterSwitch Information

Host Name    IP Address    APC Port Number
n01          10.0.1.1      1
n02          10.0.1.2      2
n03          10.0.1.3      3

Table C-16. GFS Node Information

Host Name    IP Address    APC Port Number
lcksrv       10.0.1.4      4

Table C-17. Lock Server Node Information

Host Name    IP Address    APC Port Number
gnbdsrv      10.0.1.5      5

Table C-18. GNBD Server Node Information

Major    Minor    #Blocks     Name
8        16       8388608     sda
8        17       8001        sda1
8        18       8377897     sda2
8        32       8388608     sdb
8        33       8388608     sdb1

Table C-19. Storage Device Information

Notes

The storage needs to be visible only to the GNBD server node. The GNBD server makes the storage visible to the GFS cluster nodes through the GNBD protocol.

For shared storage devices to be visible to the nodes, it may be necessary to load an appropriate device driver. If the shared storage devices are not visible on each node, confirm that the device driver is loaded and that it loaded without errors.

The small partition (/dev/sda1) is used to store the cluster configuration information. The two remaining partitions (/dev/sda2 and /dev/sdb1) are used for the GFS file systems.

You can display the storage device information at each node in your GFS cluster by running the following command: cat /proc/partitions. Depending on the hardware configuration of the GFS nodes, the names of the devices may be different on each node. If the output of the cat /proc/partitions command shows only entire disk devices (for example, /dev/sda instead of /dev/sda1), then the storage devices have not been partitioned. If you need to partition a device, use the fdisk command.
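
For reference, on the GNBD server node in this example the output should resemble the following (the values correspond to Table C-19; the exact formatting and any additional entries depend on the kernel and hardware):

gnbdsrv# cat /proc/partitions
major minor  #blocks  name

   8    16   8388608  sda
   8    17      8001  sda1
   8    18   8377897  sda2
   8    32   8388608  sdb
   8    33   8388608  sdb1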

C.5.2. Kernel Modules Loaded

Each node must have the following kernel modules loaded:
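
To check which of the relevant modules are currently loaded on a node, you can use lsmod. The module names in the pattern below are assumptions based on the components used in this example (pool, GNBD, and the LOCK_GULM locking system) and may differ between GFS releases:

n01# lsmod | grep -E 'gfs|pool|gnbd|lock_gulm|lock_harness'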

C.5.3. Setup Process

The setup process for this example consists of the following steps:

  1. Create and export GNBD devices.

    Create and export a GNBD device for the storage on the GNBD server (gnbdsrv) to be used for the GFS file systems and the CCA device. In the following example, gfs01 is the GNBD device used for the pool of the first GFS file system, gfs02 is the device used for the pool of the second GFS file system, and cca is the device used for the CCA device. The -c option exports the devices in cached mode.

    gnbdsrv# gnbd_export -e cca -d /dev/sda1 -c
    gnbdsrv# gnbd_export -e gfs01 -d /dev/sda2 -c
    gnbdsrv# gnbd_export -e gfs02 -d /dev/sdb1 -c

    Caution

    The GNBD server should not attempt to use the cached devices it exports — either directly or by importing them. Doing so can cause cache coherency problems.

  2. Import GNBD devices on all GFS nodes and the lock server node.

    Use gnbd_import to import the GNBD devices from the GNBD server (gnbdsrv):

    n01# gnbd_import -i gnbdsrv
    n02# gnbd_import -i gnbdsrv
    n03# gnbd_import -i gnbdsrv
    lcksrv# gnbd_import -i gnbdsrv
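
    To confirm that the imports succeeded on a node, list the resulting device nodes. The expected entries correspond to the three devices exported in step 1 and match the /dev/gnbd/ paths used by the pool configuration files in the next step:

    n01# ls /dev/gnbd/
    cca  gfs01  gfs02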
  3. Create pool configurations for the two file systems.

    Create a pool configuration file for each file system's pool: pool_gfs01 for the first file system and pool_gfs02 for the second file system. The two files, named pool_gfs01.cf and pool_gfs02.cf in this example, should look like the following:

    pool_gfs01.cf:

    poolname pool_gfs01
    subpools 1
    subpool 0 0 1
    pooldevice 0 0 /dev/gnbd/gfs01

    pool_gfs02.cf:

    poolname pool_gfs02
    subpools 1
    subpool 0 0 1
    pooldevice 0 0 /dev/gnbd/gfs02
  4. Create a pool configuration for the CCS data.

    Create a pool configuration file for the pool that will be used for CCS data. The pool does not need to be very large. The name of the pool is alpha_cca (the name of the cluster, alpha, followed by _cca). The file, named alpha_cca.cf in this example, should look like the following:

    poolname alpha_cca
    subpools 1
    subpool 0 0 1
    pooldevice 0 0 /dev/gnbd/cca
  5. Create the pools using the pool_tool command.

    Note

    This operation must take place on a GNBD client node (one of the nodes that imported the GNBD devices), because the pool configuration files refer to the imported /dev/gnbd/ devices.

    Use the pool_tool command to create all the pools as follows:

    n01# pool_tool -c pool_gfs01.cf pool_gfs02.cf alpha_cca.cf
    Pool label written successfully from pool_gfs01.cf
    Pool label written successfully from pool_gfs02.cf
    Pool label written successfully from alpha_cca.cf
  6. Activate the pools on all nodes.

    Note

    This step must be performed every time a node is rebooted. If it is not, the pool devices will not be accessible.

    Activate the pools using the pool_assemble -a command for each node as follows:

    n01# pool_assemble -a  <-- Activate pools
    alpha_cca assembled 
    pool_gfs01 assembled 
    pool_gfs02 assembled
    
    n02# pool_assemble -a  <-- Activate pools
    alpha_cca assembled 
    pool_gfs01 assembled 
    pool_gfs02 assembled
    
    n03# pool_assemble -a  <-- Activate pools
    alpha_cca assembled
    pool_gfs01 assembled 
    pool_gfs02 assembled
    
    lcksrv# pool_assemble -a  <-- Activate pools
    alpha_cca assembled
    pool_gfs01 assembled 
    pool_gfs02 assembled
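
    To confirm that the pools are active on a node, you can list the pool device nodes; the /dev/pool/ paths shown are the ones used by the remaining steps:

    n01# ls /dev/pool/
    alpha_cca  pool_gfs01  pool_gfs02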
  7. Create CCS files.

    1. Create a directory called /root/alpha on node n01 as follows:

      n01# mkdir /root/alpha
      n01# cd /root/alpha
    2. Create the cluster.ccs file. This file contains the name of the cluster and the names of the nodes on which the LOCK_GULM server is run. The file should look like the following:

      cluster { 
         name = "alpha" 
         lock_gulm { 
            servers = ["lcksrv"] 
         } 
      }
    3. Create the nodes.ccs file. This file contains the name of each node, its IP address, and node-specific I/O fencing parameters; the APC port numbers correspond to those listed in Tables C-16, C-17, and C-18. The file should look like the following:

      nodes { 
         n01 { 
            ip_interfaces { 
               eth0 = "10.0.1.1"
            } 
            fence { 
               power { 
                  apc { 
                  port = 1 
                  } 
               } 
            } 
         }
         n02 { 
            ip_interfaces { 
               eth0 = "10.0.1.2"
            }
            fence { 
               power { 
                  apc { 
                  port = 2
                  }
               }
            }
         }
         n03 {
            ip_interfaces {
               eth0 = "10.0.1.3"
            }
            fence { 
               power { 
                  apc { 
                  port = 3 
                  } 
               } 
            } 
         } 
         lcksrv {
            ip_interfaces {
               eth0 = "10.0.1.4"
            }
            fence { 
               power { 
                  apc { 
                  port = 4
                  } 
               } 
            } 
         } 
         gnbdsrv {
            ip_interfaces {
               eth0 = "10.0.1.5"
            }
            fence { 
               power { 
                  apc { 
                  port = 5
                  } 
               } 
            } 
         } 
      }
    4. Create the fence.ccs file. This file describes the fencing devices used by the GFS cluster. The device named apc is the APC MasterSwitch from Table C-15 and is referenced by the fence sections in nodes.ccs. The file should look like the following:

      fence_devices { 
         apc { 
            agent = "fence_apc" 
            ipaddr = "10.0.1.10" 
            login = "apc" 
            passwd = "apc" 
         } 
      }
  8. Create the CCS archive on the CCA device.

    Note

    This step needs to be performed only once, from a single node. It should not be repeated each time the cluster is restarted.

    Use the ccs_tool command to create the archive from the CCS configuration files:

    n01# ccs_tool create /root/alpha /dev/pool/alpha_cca
    Initializing device for first time use... done.
  9. Start the CCS daemon (ccsd) on all the nodes.

    Note

    This step must be performed each time the cluster is rebooted.

    The CCA device must be specified when starting ccsd.

    n01# ccsd -d /dev/pool/alpha_cca
    
    n02# ccsd -d /dev/pool/alpha_cca
    
    n03# ccsd -d /dev/pool/alpha_cca
    
    lcksrv# ccsd -d /dev/pool/alpha_cca
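
    If you want to confirm that ccsd is running on a node before continuing, a generic process check is sufficient; for example:

    n01# ps -C ccsd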
  10. At each node, start lock_gulmd. The node named in cluster.ccs (lcksrv) runs as the LOCK_GULM server; the GFS nodes run lock_gulmd as clients of that server. For example:

    n01# lock_gulmd
    
    lcksrv# lock_gulmd
  11. Create the GFS file systems.

    Create the first file system on pool_gfs01 and the second on pool_gfs02. The names of the two file systems are gfs01 and gfs02, respectively, as shown in the example. The -j 3 option creates three journals, one for each GFS node that will mount the file system:

    n01# gfs_mkfs -p lock_gulm -t alpha:gfs01 -j 3 /dev/pool/pool_gfs01
    Device:             /dev/pool/pool_gfs01
    Blocksize:          4096
    Filesystem Size:    1963216
    Journals:           3
    Resource Groups:    30
    Locking Protocol:   lock_gulm
    Lock Table:         alpha:gfs01

    Syncing...
    All Done

    n01# gfs_mkfs -p lock_gulm -t alpha:gfs02 -j 3 /dev/pool/pool_gfs02
    Device:             /dev/pool/pool_gfs02
    Blocksize:          4096
    Filesystem Size:    1963416
    Journals:           3
    Resource Groups:    30
    Locking Protocol:   lock_gulm
    Lock Table:         alpha:gfs02

    Syncing...
    All Done

  12. Mount the GFS file systems on all the nodes.

    Mount points /gfs01 and /gfs02 are used on each node:

    n01# mount -t gfs /dev/pool/pool_gfs01 /gfs01 
    n01# mount -t gfs /dev/pool/pool_gfs02 /gfs02
    
    n02# mount -t gfs /dev/pool/pool_gfs01 /gfs01
    n02# mount -t gfs /dev/pool/pool_gfs02 /gfs02
    
    n03# mount -t gfs /dev/pool/pool_gfs01 /gfs01
    n03# mount -t gfs /dev/pool/pool_gfs02 /gfs02
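
    To confirm that both file systems are mounted on a node, you can list the mounted GFS file systems with a standard mount query; for example:

    n01# mount -t gfs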