Thursday, June 30, 2011

How to make bridge over VLAN?

■ Requirement : create bridge over vlan
■ OS Environment : Linux[RHEL, Centos]
■ Application:vlan, bridge
■ Implementation Steps :

Bridging over VLANs :  By constructing a bridge between a "normal" and a "VLAN" Ethernet interface, the Linux machine adds and removes the VLAN headers on behalf of any other device(s) plugged into the "normal" card.

           It takes a slight modification of the procedures above. For this example, let's presume we have an Ethernet interface eth0 connected to the network where a VLAN id 2 is present, and we have a device or devices on eth1 that need to be bridged into that VLAN 2.

Go ahead and first construct the VLAN interface like we did before (copy ifcfg-eth#, change DEVICE, add VLAN=yes), except also remove the BOOTPROTO, IPADDR, NETMASK, and GATEWAY lines if present. Add a line BRIDGE=br2 (or a different named bridge device of your choice).

1. edit /etc/sysconfig/network-scripts/ifcfg-eth0.2 (connected to VLAN2)

DEVICE=eth0.2
VLAN=yes
TYPE=Ethernet
HWADDR=##:##:##:##:##:##
ONBOOT=yes
BRIDGE=br2

2. edit /etc/sysconfig/network-scripts/ifcfg-eth1

DEVICE=eth1
TYPE=Ethernet
HWADDR=##:##:##:##:##:##
ONBOOT=yes
BRIDGE=br2

3. edit /etc/sysconfig/network-scripts/ifcfg-br2

DEVICE=br2
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.2.24
NETMASK=255.255.255.0
GATEWAY=192.168.2.254
DELAY=0
STP=off

4. service network restart. 
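Once the bridge is up, its state can be checked with `brctl show` and /proc/net/vlan/config. As a sketch of reading the latter, here is an awk one-liner run against a hypothetical sample of that file's layout (the exact columns come from the 8021q module and may vary by kernel; on a live system read the real file instead):

```shell
# Hypothetical sample of /proc/net/vlan/config (layout assumed).
cat <<'EOF' > /tmp/vlan_config_sample
VLAN Dev name    | VLAN ID
Name-Type: VLAN_NAME_TYPE_RAW_PLUS_VID_NO_PAD
eth0.2         | 2  | eth0
EOF

# Print "vlan-interface vlan-id parent" for each configured VLAN device.
awk -F'|' 'NR > 2 { gsub(/ /, ""); print $1, $2, $3 }' /tmp/vlan_config_sample
```

On the machine configured above this should list eth0.2 mapped to VLAN id 2 on parent eth0.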


How to assign NIC in vlan?

■ Requirement : Assign NIC in vlan
■ OS Environment : Linux[RHEL, Centos]
■ Application:vlan
■ Implementation Steps :

Why VLAN : to create a virtual LAN, i.e. group some computers out of the physical LAN without adding switches/routers. It only needs software (it can also be done in hardware).

# yum install vconfig

1. Go to /etc/sysconfig/network-scripts and decide which eth# device you are going to add a VLAN to. The VLAN device runs alongside (in parallel to, at the same time as) the original eth# device, so there is no need to change your existing configuration. Copy /etc/sysconfig/network-scripts/ifcfg-eth0 to /etc/sysconfig/network-scripts/ifcfg-eth0.2 and edit it, adding VLAN=yes, like :

DEVICE=eth0.2
VLAN=yes
TYPE=Ethernet
HWADDR=##:##:##:##:##:##
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.1.24
GATEWAY=192.168.1.254

Note : the VLAN id here is 2 for eth0

2. Repeat for other NICs, assigning the proper VLAN ID (contact your network admin for VLAN details).
3. Restart network service. 
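The ifcfg files above can also be exercised by hand. As a dry-run sketch, this function only prints the commands that would create the VLAN device, in both the legacy vconfig syntax and the newer ip(8) form; run the printed lines as root to actually apply them:

```shell
# Dry run: print, don't execute, the commands that create VLAN $vid on $dev.
make_vlan_cmds() {
    dev=$1
    vid=$2
    echo "vconfig add $dev $vid"
    echo "ip link add link $dev name $dev.$vid type vlan id $vid"
}

make_vlan_cmds eth0 2
```

Interfaces created this way by hand are not persistent; the ifcfg files remain the way to survive a reboot.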

How to configure a network bridge on a linux machine?

Setup bridge network :

Why bridge : to virtually connect multiple Ethernet interfaces into one (virtual) interface; for example, a wireless and a physical Ethernet interface can be bridged together for communication.

Install required package :

#yum install bridge-utils

Say the machine has two NICs, eth0 and eth1. Change their configuration like this :

1. vi /etc/sysconfig/network-scripts/ifcfg-eth0

---
DEVICE=eth0
TYPE=Ethernet
HWADDR=##:##:##:##:##:##
ONBOOT=yes
BRIDGE=br0
---

2. vi /etc/sysconfig/network-scripts/ifcfg-eth1

---
DEVICE=eth1
TYPE=Ethernet
HWADDR=##:##:##:##:##:##
ONBOOT=yes
BRIDGE=br0
---

3. create a file ifcfg-br0 for the bridge device br0. vi /etc/sysconfig/network-scripts/ifcfg-br0 (note that the IP address is assigned here, on the bridge, not on the member NICs)

For static :

---
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.1.24
NETMASK=255.255.255.0
GATEWAY=192.168.1.254
DELAY=0
STP=off
---

For DHCP :

---
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=dhcp
DELAY=0
STP=off
---

4. service network restart
5. test : brctl show
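The same setup can be driven manually with brctl. As a dry-run sketch, this only prints the brctl commands implied by the ifcfg files above; run the printed lines as root (and then assign the IP to br0):

```shell
# Dry run: print the brctl commands for a bridge $1 enslaving the
# remaining arguments as member interfaces.
make_bridge_cmds() {
    br=$1
    shift
    echo "brctl addbr $br"
    for nic in "$@"; do
        echo "brctl addif $br $nic"
    done
}

make_bridge_cmds br0 eth0 eth1
```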


Thursday, June 23, 2011

How to scan newly added LUN using rescan-scsi-bus.sh ?

■ Requirement : scan LUN through HBA
■ OS Environment : Linux[RHEL 5.4 and later]
■ Application: scsi
■ Implementation Steps :

      It is suggested NOT to rescan the existing LUNs: I/O operations may still be in flight on them, and rescanning them can corrupt the file system. Instead, scan only the newly added device or storage. Once the LUN is added, the HBA detects the device, and you can then scan that not-yet-visible LUN on the HBA. For example :

The script rescan-scsi-bus.sh ships with RHEL (in the sg3_utils package).

Following command can be used :

$ rescan-scsi-bus.sh --hosts=1 --luns=2

Note : I assume that on host 1/or on HBA 1, lun 2 doesn't exist.

For more details please get help from :

$ rescan-scsi-bus.sh --help
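On systems without the script, the same rescan can be triggered through sysfs. This sketch only prints the command for the given host number (the "- - -" wildcards mean all channels, targets and LUNs); run the printed line as root:

```shell
# Dry run: print the sysfs command that rescans all channels/targets/LUNs
# on SCSI host $1 ("- - -" is the channel/target/LUN wildcard triple).
make_rescan_cmd() {
    host=$1
    echo "echo '- - -' > /sys/class/scsi_host/host$host/scan"
}

make_rescan_cmd 1
```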


Friday, June 17, 2011

Basic Idea on GFS file system

■ Requirement : GFS file system
■ OS Environment : Linux[RHEL, Centos]
■ Application: gfs
■ Resolution :

      Global File System (GFS) is a shared disk file system for Linux computer clusters. It can maximize the benefits of clustering and minimize the costs.

It does following :

  • Greatly simplify your data infrastructure
  •  Install and patch applications once, for the entire cluster
  •  Reduce the need for redundant copies of data
  •  Simplify back-up and disaster recovery tasks
  •  Maximize use of storage resources and minimize your storage costs
  •  Manage your storage capacity as a whole vs. by partition
  •  Decrease your overall storage needs by reducing data duplication
  •  Scale clusters seamlessly, adding storage or servers on the fly
  •  No more partitioning storage with complicated techniques
  •  Add servers simply by mounting them to a common file system
  •  Achieve maximum application uptime

         While a GFS file system may be used outside of LVM, Red Hat supports only GFS file systems that are created on a CLVM logical volume. CLVM is a cluster-wide implementation of LVM, enabled by the CLVM daemon clvmd, which manages LVM logical volumes in a Red Hat Cluster Suite cluster. The daemon makes it possible to use LVM2 to manage logical volumes across a cluster, allowing all nodes in the cluster to share the logical volumes.

        GULM (Grand Unified Lock Manager) is not supported in Red Hat Enterprise Linux 5. If your GFS file systems use the GULM lock manager, you must convert the file systems to use the DLM lock manager. This is a two-part process.

* While running Red Hat Enterprise Linux 4, convert your GFS file systems to use the DLM lock manager.
* Upgrade your operating system to Red Hat Enterprise Linux 5, converting the lock manager to DLM when you do.

“GFS with a SAN” provides superior file performance for shared files and file systems. Linux applications run directly on GFS nodes. Without file protocols or storage servers to slow data access, performance is similar to individual Linux servers with directly connected storage; yet, each GFS application node has equal access to all data files. GFS supports up to 125 GFS nodes.

GFS Software Components :

  • gfs.ko : Kernel module that implements the GFS file system and is loaded on GFS cluster nodes.
  • lock_dlm.ko : A lock module that implements DLM locking for GFS. It plugs into the lock harness, lock_harness.ko and communicates with the DLM lock manager in Red Hat Cluster Suite.
  • lock_nolock.ko : A lock module for use when GFS is used as a local file system only. It plugs into the lock harness, lock_harness.ko and provides local locking.

            The system clocks in GFS nodes must be within a few minutes of each other to prevent unnecessary inode time-stamp updating, which severely impacts cluster performance. Use ntpd to keep accurate time against a time server.

Initial setup tasks :

  • 1. Setting up logical volumes
  • 2. Making a GFS file system
  • 3. Mounting file systems

A) (On the shared disk) : Create GFS file systems on the logical volumes created in step 1, choosing a unique name for each file system. You can use either of the following formats to create a clustered GFS file system :

$ gfs_mkfs -p lock_dlm -t ClusterName:FSName -j NumberJournals BlockDevice
$ mkfs -t gfs -p lock_dlm -t ClusterName:FSName -j NumberJournals BlockDevice

OR, in general terms :

$ gfs_mkfs -p LockProtoName -t LockTableName -j NumberJournals BlockDevice
$ mkfs -t gfs -p LockProtoName -t LockTableName -j NumberJournals BlockDevice

B) On each node, mount the GFS file systems.

Command usage:

$ mount BlockDevice MountPoint
$ mount -o acl BlockDevice MountPoint

The -o acl mount option allows manipulating file ACLs. If a file system is mounted without the -o acl mount option, users are allowed to view ACLs (with getfacl), but are not allowed to set them (with setfacl).

NOTE : 

        Make sure that you are very familiar with using the LockProtoName and LockTableName parameters. Improper use of the LockProtoName and LockTableName parameters may cause file system or lock space corruption.

LockProtoName :

Specifies the name of the locking protocol to use. The lock protocol for a cluster is lock_dlm. The lock protocol when GFS is acting as a local file system (one node only) is lock_nolock.

LockTableName :

This parameter is specified for a GFS file system in a cluster configuration. It has two parts separated by a colon (no spaces) as follows : ClusterName:FSName

* ClusterName, the name of the Red Hat cluster for which the GFS file system is being created.
* FSName, the file system name, can be 1 to 16 characters long, and the name must be unique among all file systems in the cluster.

NumberJournals:

       Specifies the number of journals to be created by the gfs_mkfs command. One journal is required for each node that mounts the file system. (More journals than are needed can be specified at creation time to allow for future expansion.)

$ gfs_mkfs -p lock_dlm -t alpha:mydata1 -j 8 /dev/vg01/lvol0
$ gfs_mkfs -p lock_dlm -t alpha:mydata2 -j 8 /dev/vg01/lvol1
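The -j value in the examples above follows from the cluster size: one journal per mounting node, optionally plus spares for future expansion. A small sketch (the names alpha/mydata1 are taken from the examples; the node and spare counts are assumptions for illustration):

```shell
# Build a gfs_mkfs invocation: one journal per node plus spare journals
# reserved for nodes added later (both counts are illustrative assumptions).
cluster=alpha
fsname=mydata1
nodes=3
spares=1
journals=$((nodes + spares))

echo "gfs_mkfs -p lock_dlm -t $cluster:$fsname -j $journals /dev/vg01/lvol0"
```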

     Before you can mount a GFS file system, the file system must exist , the volume where the file system exists must be activated, and the supporting clustering and locking systems must be started. After those requirements have been met, you can mount the GFS file system as you would any Linux file system.

EXAMPLE : mount /dev/vg01/lvol0 /mydata1

Displaying GFS Tunable Parameters : gfs_tool gettune MountPoint

Thursday, June 16, 2011

How to list out number of files inside each directory?

■ Requirement : How to list out number of files inside each directory
■ OS Environment : Linux[RHEL, Centos]
■ Application: bash
■ Implementation Steps :

$ cat ./find_large_small_files.sh

#!/bin/bash
# Print, for each directory under the current path, how many entries it holds.
# find already prints directory names, so no temp file is needed; ls -A
# counts entries without the "total" line and the . / .. pseudo-entries.
find . -type d | while read -r dir
do
    echo "Directory $dir has following no of files : $(ls -A "$dir" | wc -l)"
done

Monday, June 13, 2011

How to enable debugging log level in postfix?

■ Requirement : enable debugging in postfix
■ OS Environment : Linux[RHEL, Centos]
■ Application: postfix
■ Implementation Steps : 

1. Add the following lines to the /etc/postfix/main.cf file :

debug_peer_list = example.com
debug_peer_level = 2

NOTE: Here example.com is the domain for which log messages will be displayed.

2. Enable verbose logging by appending -v -v in /etc/postfix/master.cf, like :

smtp unix - - n - - smtp -v -v
smtp inet n - n - - smtpd -v -v

3. Restart postfix and check /var/log/maillog :

$ service postfix restart
$ tailf /var/log/maillog
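With debug_peer_list set, the verbose session lines for that peer show up in the maillog. A grep sketch against a hypothetical log excerpt (the real line format varies with your syslog configuration; on a live box grep /var/log/maillog instead):

```shell
# Hypothetical maillog excerpt written out for demonstration.
cat <<'EOF' > /tmp/maillog_sample
Jun 13 10:00:01 mail postfix/smtpd[1234]: connect from mx.example.com[192.0.2.1]
Jun 13 10:00:02 mail postfix/smtpd[1234]: > mx.example.com[192.0.2.1]: 220 mail ESMTP
Jun 13 10:00:05 mail postfix/smtpd[1234]: disconnect from mx.example.com[192.0.2.1]
EOF

# Show only the debug peer's traffic.
grep 'example\.com' /tmp/maillog_sample
```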



Thursday, June 9, 2011

How to use smartctl to probe SATA disk?

■ Requirement : Use of smartctl to probe SATA
■ OS Environment : Linux[RHEL, Centos]
■ Application: smartctl
■ Resolution  : 

        smartctl is a hardware diagnosis tool for Linux. Historically the smartctl tool couldn't properly read SMART parameters from SATA devices; recent development of the libata package makes the data readable, but you should pass the "ata" option while probing such hardware. So the exact command is :

$ smartctl -d ata -A /dev/sdb 

To enable SMART on the SATA device, use the following command :

$ smartctl -s on -d ata /dev/sdb

Now run overall-health self-assessment test:

$ smartctl -d ata -H /dev/sdb

You can read more data from hard disk by typing following command:

$ smartctl -d ata -a /dev/sdb
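The health verdict from `smartctl -d ata -H` can be extracted in scripts. A sketch against a hypothetical excerpt of that output (the exact wording can vary between smartctl versions):

```shell
# Hypothetical excerpt of `smartctl -H` output saved for parsing.
cat <<'EOF' > /tmp/smart_sample
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
EOF

# Pull out just the verdict (PASSED or FAILED).
awk -F': ' '/self-assessment/ { print $2 }' /tmp/smart_sample
```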