This example sets up a single node mounting two GFS file systems. Only a single node is required because the file systems will not be mounted in cluster mode.
This section describes the example's key characteristics, the prerequisites for setting it up, and the setup process.
This example configuration has the following key characteristics:
Number of GFS nodes — 1. Refer to Table C-20 for node information.
Locking protocol — LOCK_NOLOCK.
Number of shared storage devices — 1. One direct-attached storage device is used. Refer to Table C-21 for storage device information.
Number of file systems — 2.
File system names — gfs01 and gfs02.
File system mounting — The GFS node mounts the two file systems.
Table C-21. Storage Device Information
For storage to be visible to the node, it may be necessary to load an appropriate device driver. If the storage is not visible on the node, confirm that the device driver is loaded and that it loaded without errors.
The two partitions (/dev/sda1 and /dev/sdb1) are used for the GFS file systems.
You can display the storage device information at each node in your GFS cluster by running the following command:

cat /proc/partitions

Depending on the hardware configuration of the GFS nodes, the device names may differ from node to node. If the output of the cat /proc/partitions command shows only whole-disk devices (for example, /dev/sda instead of /dev/sda1), the storage devices have not been partitioned. If you need to partition a device, use the fdisk command.
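The partition check described above can be scripted. The following sketch runs the same test against a captured /proc/partitions listing (the listing and block counts here are illustrative, not taken from the example hardware): a disk needs partitioning if it has no numbered entry such as sda1.

```shell
# Captured /proc/partitions output (illustrative values).
partitions='major minor  #blocks  name
   8     0  8388608 sda
   8     1  8388607 sda1
   8    16  8388608 sdb'

# A disk is partitioned if a numbered entry (e.g. sda1) exists for it.
has_partition() {
    printf '%s\n' "$partitions" | grep -qE "[[:space:]]${1}[0-9]+\$"
}

has_partition sda && echo "sda is partitioned"
has_partition sdb || echo "sdb is unpartitioned; run fdisk /dev/sdb"
```

On a live node, the same test would read /proc/partitions directly instead of the captured listing.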
Each node must have the following kernel modules loaded:

pool.o
lock_harness.o
lock_nolock.o
gfs.o
The setup process for this example consists of the following steps:
Create pool configurations for the two file systems.
Create pool configuration files for each file system's pool: pool_gfs01 for the first file system, and pool_gfs02 for the second file system. The two files should look like the following:
poolname pool_gfs01
subpools 1
subpool 0 0 1
pooldevice 0 0 /dev/sda1
poolname pool_gfs02
subpools 1
subpool 0 0 1
pooldevice 0 0 /dev/sdb1
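The two configuration files can be written with here-documents rather than an editor; a sketch that produces exactly the contents shown above (file names follow the example):

```shell
# Write the pool configuration file for the first file system's pool.
cat > pool_gfs01.cf <<'EOF'
poolname pool_gfs01
subpools 1
subpool 0 0 1
pooldevice 0 0 /dev/sda1
EOF

# Write the pool configuration file for the second file system's pool.
cat > pool_gfs02.cf <<'EOF'
poolname pool_gfs02
subpools 1
subpool 0 0 1
pooldevice 0 0 /dev/sdb1
EOF
```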
Use the pool_tool command to create all the pools as follows:
n01# pool_tool -c pool_gfs01.cf pool_gfs02.cf
Pool label written successfully from pool_gfs01.cf
Pool label written successfully from pool_gfs02.cf
Activate the pools.
This step must be performed every time a node is rebooted. If it is not, the pool devices will not be accessible.
Activate the pools using the pool_assemble -a command as follows:
n01# pool_assemble -a
pool_gfs01 assembled
pool_gfs02 assembled
Create the CCS Archive.
Create a directory called /root/alpha on node n01 as follows:
n01# mkdir /root/alpha
n01# cd /root/alpha
Create the CCS Archive on the CCA Device.
This step only needs to be done once. It should not be performed every time the cluster is restarted.
Use the ccs_tool command to create the archive from the CCS configuration files:
n01# ccs_tool create /root/alpha /dev/pool/alpha_cca
Initializing device for first time use... done.
Start the CCS daemon (ccsd).
This step must be performed each time the node is rebooted.
The CCA device must be specified when starting ccsd.
n01# ccsd -d /dev/pool/alpha_cca
Create the GFS file systems.
Create the first file system on pool_gfs01 and the second on pool_gfs02. The names of the two file systems are gfs01 and gfs02, respectively, as shown in the example:
n01# gfs_mkfs -p lock_nolock -j 1 /dev/pool/pool_gfs01
Device: /dev/pool/pool_gfs01
Blocksize: 4096
Filesystem Size: 1963216
Journals: 1
Resource Groups: 30
Locking Protocol: lock_nolock
Lock Table:
Syncing...
All Done

n01# gfs_mkfs -p lock_nolock -j 1 /dev/pool/pool_gfs02
Device: /dev/pool/pool_gfs02
Blocksize: 4096
Filesystem Size: 1963416
Journals: 1
Resource Groups: 30
Locking Protocol: lock_nolock
Lock Table:
Syncing...
All Done
Mount the GFS file systems on the node.
Mount points /gfs01 and /gfs02 are used on the node:
n01# mount -t gfs /dev/pool/pool_gfs01 /gfs01
n01# mount -t gfs /dev/pool/pool_gfs02 /gfs02
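The mounts can also be recorded in /etc/fstab. A sketch, not from the example itself: noauto is assumed here because the pools must be assembled and ccsd started before the file systems can be mounted, so the mounts cannot run unconditionally at boot.

```
/dev/pool/pool_gfs01   /gfs01   gfs   noauto   0 0
/dev/pool/pool_gfs02   /gfs02   gfs   noauto   0 0
```

With these entries in place, the file systems are still mounted after pool_assemble and ccsd have run, but the commands shorten to mount /gfs01 and mount /gfs02.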