
8.2. LOCK_GULM

RLM and SLM are both implemented by the LOCK_GULM system.

LOCK_GULM is based on a central server daemon that manages lock and cluster state for all GFS/LOCK_GULM file systems in the cluster. In the case of RLM, multiple servers can be run redundantly on multiple nodes. If the master server fails, another "hot-standby" server takes over.

The LOCK_GULM server daemon is called lock_gulmd. The kernel module for GFS nodes using LOCK_GULM is called lock_gulm.o. The lock protocol (LockProto) as specified when creating a GFS/LOCK_GULM file system is called lock_gulm (lower case, with no .o extension).
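For example, a GFS file system that uses LOCK_GULM could be created by passing lock_gulm as the lock protocol to gfs_mkfs; the cluster name, file system name, journal count, and block device shown here are placeholders:

gfs_mkfs -p lock_gulm -t ClusterName:FSName -j NumberJournals BlockDevice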

Note

You can use the lock_gulmd init.d script included with GFS to automate starting and stopping lock_gulmd. For more information about GFS init.d scripts, refer to Chapter 12 Using GFS init.d Scripts.
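For example, on a system using the standard init.d layout, the script can typically be invoked through the service command; refer to Chapter 12 for the exact procedure:

service lock_gulmd start
service lock_gulmd stop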

8.2.1. Selection of LOCK_GULM Servers

The nodes selected to run the lock_gulmd server are specified in the cluster.ccs configuration file (cluster.ccs:cluster/lock_gulm/servers). Refer to Section 6.5 Creating the cluster.ccs File.

For optimal performance, lock_gulmd servers should be run on dedicated nodes; however, they can also be run on nodes using GFS. All nodes, including those only running lock_gulmd, must be listed in the nodes.ccs configuration file (nodes.ccs:nodes).
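As an illustration, a cluster.ccs file that designates three nodes as lock_gulmd servers might contain an entry of the following form; the cluster and node names are placeholders, and Section 6.5 describes the authoritative syntax:

cluster {
        name = "alpha"
        lock_gulm {
                servers = ["n01", "n02", "n03"]
        }
}

Each node named in the servers list must also have an entry in nodes.ccs:nodes.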

8.2.2. Number of LOCK_GULM Servers

You can use just one lock_gulmd server; however, if it fails, the entire cluster that depends on it must be reset. For that reason, you can run multiple instances of the lock_gulmd server daemon on multiple nodes for redundancy. The redundant servers allow the cluster to continue running if the master lock_gulmd server fails.

Over half of the lock_gulmd servers on the nodes listed in the cluster.ccs file (cluster.ccs:cluster/lock_gulm/servers) must be operating to process locking requests from GFS nodes. That quorum requirement prevents split groups of servers from forming independent clusters, which would lead to file system corruption.

For example, if there are three lock_gulmd servers listed in the cluster.ccs configuration file, two of those three lock_gulmd servers (a quorum) must be running for the cluster to operate.

A lock_gulmd server can rejoin existing servers if it fails and is restarted.

When running redundant lock_gulmd servers, the minimum number of nodes required is three; the maximum number of nodes is five.
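Applying the over-half rule to the allowed server counts gives the following quorum requirements:

Servers listed in cluster.ccs    Servers that must be running
3                                2
4                                3
5                                3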

8.2.3. Starting LOCK_GULM Servers

If no lock_gulmd servers are running in the cluster, take care before starting them: verify that no GFS nodes are hung from a previous instance of the cluster. If any GFS nodes are hung, reset them before starting the lock_gulmd servers; doing so prevents file system corruption. Also make sure that all nodes running lock_gulmd can communicate over the network; that is, there is no network partition.

The lock_gulmd server is started with no command line options.
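For example, to start the servers by hand, simply run the daemon on each node listed in cluster.ccs:cluster/lock_gulm/servers:

lock_gulmd

Alternatively, use the init.d script described in the note above.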

8.2.4. Fencing and LOCK_GULM

Cluster state is managed in the lock_gulmd server. When GFS nodes or server nodes fail, the lock_gulmd server initiates a fence operation for each failed node and waits for the fence to complete before proceeding with recovery.

The master lock_gulmd server fences failed nodes by calling the fence_node command with the name of the failed node. That command looks up fencing configuration in CCS to carry out the fence operation.
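For example, if a node named n02 (a hypothetical name) failed, the master server would in effect invoke:

fence_node n02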

When using RLM, you must use a fencing method that shuts down and reboots the failed node; fencing methods that do not reboot the node cannot be used with RLM.

8.2.5. Shutting Down a LOCK_GULM Server

Before shutting down a node running a LOCK_GULM server, lock_gulmd should be terminated using the gulm_tool command. If lock_gulmd is not properly stopped, the LOCK_GULM server may be fenced by the remaining LOCK_GULM servers.

Caution

Shutting down one of multiple redundant LOCK_GULM servers may result in suspension of cluster operation if the number of remaining servers is half or less of the total number of servers listed in the cluster.ccs file (cluster.ccs:cluster/lock_gulm/servers).

8.2.5.1. Usage

gulm_tool shutdown IPAddress

IPAddress

Specifies the IP address or hostname of the node running the instance of lock_gulmd to be terminated.
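For example, to stop the lock_gulmd instance running on a node named n03 (a hypothetical name):

gulm_tool shutdown n03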