
1.2. Performance, Scalability, and Economy

You can deploy GFS in a variety of configurations to suit your needs for performance, scalability, and economy. For superior performance and scalability, you can deploy GFS in a cluster that is connected directly to a SAN. For more economical needs, you can deploy GFS in a cluster that is connected over a LAN to servers that use the GFS VersaPlex architecture. The VersaPlex architecture allows a GFS cluster to connect to servers that present block-level storage over an Ethernet LAN. It is implemented with GNBD (Global Network Block Device), a software layer that runs on network nodes connected to direct-attached storage or to storage in a SAN and exports a block interface from those nodes to the GFS cluster.
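
As a minimal sketch of how GNBD presents block-level storage, the commands below export a device from a storage node and import it on a GFS node. The gnbd_serv, gnbd_export, and gnbd_import utilities are part of the GNBD software; the device path, export name, and server hostname (gnbd_server1) are placeholders for this example.

  # On the GNBD server node (the node connected to the storage) --
  # start the GNBD server daemon, then export a block device under
  # an export name of your choice.
  gnbd_serv
  gnbd_export -v -e gfsdata -d /dev/sdb1

  # On each GFS node -- import the GNBDs exported by that server.
  # The imported device appears as /dev/gnbd/gfsdata.
  gnbd_import -v -i gnbd_server1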

You can configure GNBD servers for GNBD multipath. GNBD multipath allows you to configure multiple GNBD server nodes with redundant paths between the GNBD server nodes and storage devices. The GNBD servers, in turn, present multiple storage paths to GFS nodes via redundant GNBDs. With GNBD multipath, if a GNBD server node becomes unavailable, another GNBD server node can provide GFS nodes with access to storage devices.
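
The server side of such a configuration might look like the following sketch, which assumes two GNBD server nodes (gnbd_serverA and gnbd_serverB, hypothetical names) that both have a path to the same SAN logical unit. The exports are left uncached (the -c caching option is omitted), which GNBD multipath generally requires; how the redundant imports are combined on the GFS nodes depends on your release and is not shown here.

  # On gnbd_serverA -- export the shared SAN device, uncached.
  gnbd_serv
  gnbd_export -v -e gfsdata_a -d /dev/sdc1

  # On gnbd_serverB -- export the same SAN device through its own
  # path, also uncached, under a second export name.
  gnbd_serv
  gnbd_export -v -e gfsdata_b -d /dev/sdc1

  # On each GFS node -- import from both servers so that storage
  # remains reachable if one GNBD server node becomes unavailable.
  gnbd_import -v -i gnbd_serverA
  gnbd_import -v -i gnbd_serverB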

The following sections provide examples of how GFS can be deployed to suit your needs for performance, scalability, and economy:

Section 1.2.1 Superior Performance and Scalability

Section 1.2.2 Performance, Scalability, Moderate Price

Section 1.2.3 Economy and Performance

Note

The deployment examples in this chapter reflect basic configurations; your needs might require a combination of configurations shown in the examples. Also, the examples show GNBD (Global Network Block Device) as the method of implementing the VersaPlex architecture.

1.2.1. Superior Performance and Scalability

You can obtain the highest shared-file performance when applications access storage directly. The GFS SAN configuration in Figure 1-1 provides superior file performance for shared files and file systems. Linux applications run directly on GFS clustered application nodes. Without file protocols or storage servers to slow data access, performance is similar to that of individual Linux servers with direct-connect storage; yet, each GFS application node has equal access to all data files. GFS supports over 300 GFS application nodes.

Figure 1-1. GFS with a SAN
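
As an illustration of this configuration, the following sketch creates a GFS file system directly on a SAN device that every application node can see, then mounts it on each node. The lock protocol, the cluster and file system names (alpha:gfs1), the journal count, the device path, and the mount point are all placeholder values for this example.

  # On one node only -- make a GFS file system on the shared SAN
  # device, with one journal per node that will mount it.
  gfs_mkfs -p lock_dlm -t alpha:gfs1 -j 8 /dev/sdb1

  # On every GFS application node -- mount the shared file system.
  mount -t gfs /dev/sdb1 /mnt/gfs1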

1.2.2. Performance, Scalability, Moderate Price

Multiple Linux client applications on a LAN can share the same SAN-based data as shown in Figure 1-2. SAN block storage is presented to network clients as block storage devices by GNBD servers. From the perspective of a client application, storage is accessed as if it were directly attached to the server in which the application is running. Stored data is actually on the SAN. Storage devices and data can be equally shared by network client applications. File locking and sharing functions are handled by GFS for each network client.
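
From the client side, this might look like the following sketch, which assumes a GNBD server named gnbd_server1 exporting a SAN device under the export name gfsdata, with a GFS file system already created on it; all names and paths are placeholders.

  # On each network client -- import the GNBD and mount the shared
  # GFS file system; GFS handles file locking and sharing among the
  # clients.
  gnbd_import -v -i gnbd_server1
  mount -t gfs /dev/gnbd/gfsdata /mnt/gfsdata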

Note

Clients implementing ext2 and ext3 file systems can be configured to access their own dedicated slice of SAN storage.
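
For example, a client that does not run GFS could place an ordinary ext3 file system on a GNBD export reserved for it alone (a sketch with hypothetical names; the slice must not be shared with any other client):

  # On the non-GFS client -- import its dedicated GNBD and use it as
  # local, single-node storage with ext3.
  gnbd_import -v -i gnbd_server1
  mkfs.ext3 /dev/gnbd/client1_slice
  mount -t ext3 /dev/gnbd/client1_slice /mnt/client1_slice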

Combined with failover software and redundant devices, GFS with VersaPlex (implemented with GNBD, as shown in Figure 1-2) and a SAN can provide fully automatic application and device failover.

Figure 1-2. GFS and GNBD with a SAN

1.2.3. Economy and Performance

Figure 1-3 shows how Linux client applications can take advantage of an existing Ethernet topology to gain shared access to all block storage devices. Client data files and file systems can be shared with GFS on each client. Application and device failover can be fully automated with mirroring, failover software, and redundant devices.

Figure 1-3. GFS and GNBD with Direct-Attached Storage
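
A rough sketch of this configuration follows. It assumes two GNBD server nodes (gnbd_serverA and gnbd_serverB, hypothetical names), each exporting a direct-attached disk, and it uses CLVM on the GFS nodes to group the imported GNBDs into a single logical volume; the mirroring and failover software mentioned above would be layered on top of this and are not shown. All names, device paths, and the journal count are placeholders.

  # On gnbd_serverA -- export its direct-attached disk.
  gnbd_serv
  gnbd_export -v -e diskA -d /dev/sda3

  # On gnbd_serverB -- likewise, under a second export name.
  gnbd_serv
  gnbd_export -v -e diskB -d /dev/sda3

  # On every GFS node -- import both exports.
  gnbd_import -v -i gnbd_serverA
  gnbd_import -v -i gnbd_serverB

  # On one GFS node only -- group the imported GNBDs into a clustered
  # logical volume and create a GFS file system on it.
  pvcreate /dev/gnbd/diskA /dev/gnbd/diskB
  vgcreate vg_gfs /dev/gnbd/diskA /dev/gnbd/diskB
  lvcreate -l 100%FREE -n lv_gfs vg_gfs
  gfs_mkfs -p lock_dlm -t alpha:gfs2 -j 8 /dev/vg_gfs/lv_gfs

  # On every GFS node -- mount the shared file system.
  mount -t gfs /dev/vg_gfs/lv_gfs /mnt/gfs2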