Friday, June 17, 2011

Basic Idea on GFS file system

■ Requirement : GFS file system
■ OS Environment : Linux[RHEL, Centos]
■ Application: gfs
■ Resolution :

      Global File System (GFS) is a shared-disk file system for Linux computer clusters. Because every node accesses the same storage directly, it maximizes the benefits of clustering while minimizing storage costs.

It lets you do the following:

  • Greatly simplify your data infrastructure
  • Install and patch applications once, for the entire cluster
  • Reduce the need for redundant copies of data
  • Simplify backup and disaster recovery tasks
  • Maximize use of storage resources and minimize your storage costs
  • Manage your storage capacity as a whole rather than by partition
  • Decrease your overall storage needs by reducing data duplication
  • Scale clusters seamlessly, adding storage or servers on the fly
  • Avoid partitioning storage with complicated techniques
  • Add servers simply by mounting them to a common file system
  • Achieve maximum application uptime

         While a GFS file system may be used outside of LVM, Red Hat supports only GFS file systems created on a CLVM logical volume. CLVM is a cluster-wide implementation of LVM, enabled by the CLVM daemon clvmd, which manages LVM logical volumes in a Red Hat Cluster Suite cluster. The daemon makes it possible to use LVM2 to manage logical volumes across a cluster, allowing all nodes in the cluster to share them.
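As a hedged sketch of creating such a clustered logical volume (clvmd must already be running; the shared-disk device /dev/sdb, the names vg01/lvol0, and the size are illustrative):

$ pvcreate /dev/sdb                 # initialize the shared disk for LVM
$ vgcreate -c y vg01 /dev/sdb       # -c y marks the volume group as clustered (CLVM)
$ lvcreate -n lvol0 -L 50G vg01     # logical volume that will hold the GFS file system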

        GULM (Grand Unified Lock Manager) is not supported in Red Hat Enterprise Linux 5. If your GFS file systems use the GULM lock manager, you must convert the file systems to use the DLM lock manager. This is a two-part process.

* While running Red Hat Enterprise Linux 4, convert your GFS file systems to use the DLM lock manager.
* Upgrade your operating system to Red Hat Enterprise Linux 5, converting the lock manager to DLM when you do.
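One part of the conversion is rewriting the lock protocol stored in the file system superblock with gfs_tool; a minimal sketch, assuming the file system is unmounted on every node (device and mount point are illustrative):

$ umount /mydata1                              # must be unmounted on all nodes first
$ gfs_tool sb /dev/vg01/lvol0 proto lock_dlm   # set the superblock lock protocol to DLM

The full procedure, including the accompanying cluster configuration changes, is covered in the Red Hat upgrade documentation.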

"GFS with a SAN" provides superior file performance for shared files and file systems. Linux applications run directly on GFS nodes. Without file protocols or storage servers to slow data access, performance is similar to that of individual Linux servers with directly connected storage, yet each GFS application node has equal access to all data files. GFS supports up to 125 GFS nodes.

GFS Software Components:

  • gfs.ko : Kernel module that implements the GFS file system; it is loaded on every GFS cluster node.
  • lock_dlm.ko : A lock module that implements DLM locking for GFS. It plugs into the lock harness, lock_harness.ko, and communicates with the DLM lock manager in Red Hat Cluster Suite.
  • lock_nolock.ko : A lock module for use when GFS is used as a local file system only. It plugs into the lock harness, lock_harness.ko, and provides local locking.
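To see which of these modules are loaded on a running node (a hedged one-liner; the modules load automatically when a GFS file system is mounted):

$ lsmod | egrep 'gfs|lock_'    # shows gfs, lock_harness, and the active lock module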

            The system clocks on GFS nodes must be within a few minutes of each other to prevent unnecessary inode time-stamp updating, which severely impacts cluster performance. Run ntpd on every node, synchronized to a common time server, to keep the clocks accurate.
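A minimal sketch of enabling NTP on a RHEL/CentOS node (the time-server name is illustrative; use your site's time source):

$ yum install ntp                                # provides the ntpd daemon
$ echo "server pool.ntp.org" >> /etc/ntp.conf    # illustrative time source
$ chkconfig ntpd on                              # start ntpd at every boot
$ service ntpd start                             # start it now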

Initial Setup Tasks:

  1. Setting up logical volumes
  2. Making a GFS file system
  3. Mounting file systems

A) (On the shared disk): Create GFS file systems on the logical volumes created in Step 1. Choose a unique name for each file system. You can use either of the following formats to create a clustered GFS file system:

$ gfs_mkfs -p lock_dlm -t ClusterName:FSName -j NumberJournals BlockDevice
$ mkfs -t gfs -p lock_dlm -t ClusterName:FSName -j NumberJournals BlockDevice

OR

$ gfs_mkfs -p LockProtoName -t LockTableName -j NumberJournals BlockDevice
$ mkfs -t gfs -p LockProtoName -t LockTableName -j NumberJournals BlockDevice

B) At each node, mount the GFS file systems, as shown below.

Command usage:

$ mount BlockDevice MountPoint
$ mount -o acl BlockDevice MountPoint

The -o acl mount option allows manipulation of file ACLs. If a file system is mounted without the -o acl option, users can view ACLs (with getfacl) but cannot set them (with setfacl).
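A brief hedged example of the difference (device, mount point, file, and user name are all illustrative):

$ mount -o acl /dev/vg01/lvol0 /mydata1
$ touch /mydata1/shared.txt
$ setfacl -m u:alice:rw /mydata1/shared.txt   # grant user alice read/write via ACL
$ getfacl /mydata1/shared.txt                 # viewing ACLs works even without -o acl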

NOTE : 

        Make sure that you are very familiar with using the LockProtoName and LockTableName parameters. Improper use of the LockProtoName and LockTableName parameters may cause file system or lock space corruption.

LockProtoName:

Specifies the name of the locking protocol to use. The lock protocol for a cluster is lock_dlm. The lock protocol when GFS is acting as a local file system (one node only) is lock_nolock.

LockTableName:

This parameter is specified for a GFS file system in a cluster configuration. It has two parts separated by a colon (no spaces), as follows: ClusterName:FSName

* ClusterName, the name of the Red Hat cluster for which the GFS file system is being created.
* FSName, the file system name, can be 1 to 16 characters long, and the name must be unique among all file systems in the cluster.

NumberJournals:

       Specifies the number of journals to be created by the gfs_mkfs command. One journal is required for each node that mounts the file system. (More journals than are needed can be specified at creation time to allow for future expansion.)

$ gfs_mkfs -p lock_dlm -t alpha:mydata1 -j 8 /dev/vg01/lvol0
$ gfs_mkfs -p lock_dlm -t alpha:mydata2 -j 8 /dev/vg01/lvol1
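These examples create file systems mydata1 and mydata2 in cluster alpha, each with eight journals. If you later need journals for additional nodes, they can be added with gfs_jadd; a hedged sketch, assuming the underlying logical volume is first extended to provide free space beyond the file system (sizes and paths are illustrative):

$ lvextend -L +2G /dev/vg01/lvol0   # gfs_jadd uses space beyond the existing file system
$ gfs_jadd -j 2 /mydata1            # add two journals to the mounted file system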

     Before you can mount a GFS file system, the file system must exist, the volume where it resides must be activated, and the supporting clustering and locking systems must be started. After those requirements have been met, you can mount the GFS file system as you would any Linux file system.

EXAMPLE: mount /dev/vg01/lvol0 /mydata1
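As a hedged sketch of that sequence on a RHEL 5 cluster node (service names are from Red Hat Cluster Suite; the fstab entry is illustrative):

$ service cman start               # cluster membership and DLM locking
$ service clvmd start              # activate clustered LVM volumes
$ mount /dev/vg01/lvol0 /mydata1   # mount the GFS file system

To mount at boot, add a line such as "/dev/vg01/lvol0 /mydata1 gfs defaults,acl 0 0" to /etc/fstab and enable the gfs init script with "chkconfig gfs on".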

Displaying GFS Tunable Parameters: gfs_tool gettune MountPoint
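For example (a hedged sketch; demote_secs is one commonly tuned GFS parameter, and the value shown is only illustrative):

$ gfs_tool gettune /mydata1                   # list current tunable parameters
$ gfs_tool settune /mydata1 demote_secs 200   # example: demote unused locks sooner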

2 comments:

  1. Dear sir, I have a GFS partition in my network and I would like to mount it in Windows 2012 to back up the data from it. This partition is on SAN storage; I can see the partition in Windows, but I cannot mount it in the Windows Disk Manager. Do you know how?

  2. Hi,

    What parameters need to be configured to get better performance from GFS2?

    Ben
