Sunday, May 9, 2010

Log message analysis of Qmail :

You can find the path of the mail log in /etc/syslog.conf. If it is not defined there, the path (on a Plesk server) is /usr/local/psa/var/log/maillog. To watch deliveries live, run tail -f /usr/local/psa/var/log/maillog
and send a test message :

Here's a typical log sequence for a message sent to a remote system from the local system:

1 @4000000038c3eeb027f41c7c new msg 93869
2 @4000000038c3eeb027f6b0a4 info msg 93869: bytes 2343 from <dave@sill.org> qp 18695 uid 49491
3 @4000000038c3eeb02877ee94 starting delivery 2392: msg 93869 to remote lwq@w3.to
4 @4000000038c3eeb0287b55ac status: local 0/10 remote 1/20
5 @4000000038c3eeb104a13804 delivery 2392: success: 209.85.127.177_accepted_message.
/Remote_host_said:_250_CAA01516_Message_accepted_for_delivery/
6 @4000000038c3eeb104a4492c status: local 0/10 remote 0/20
7 @4000000038c3eeb104a6ecf4 end msg 93869

Line 1 indicates that qmail has received a new message, and its queue ID is 93869. The queue ID is the i-node number of the /var/qmail/queue/mess/NN/ file--the queue file that contains the message. The queue ID is guaranteed to be unique as long as the message remains in the queue.

Line 2 says that the message is from dave@sill.org and is 2343 bytes.

Line 3 says qmail-remote is starting to deliver the message to lwq@w3.to, and it's assigning the ID 2392 to the delivery.

Line 4 says 0 local deliveries and 1 remote delivery are pending.

Line 5 says delivery 2392 is complete and successful, and it returns the remote server's response, which often contains information the remote mail administrator would find helpful in tracking a delivery. In this case, the "CAA01516" is the remote system's delivery ID.

Line 6 says 0 local deliveries and 0 remote deliveries are pending, i.e., the delivery is complete.

Line 7 says that the message has been delivered completely and removed from the queue. At this point, the queue ID, 93869, is reusable for another delivery.
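Since every line for a given message carries its queue ID, you can reconstruct a message's life cycle with a simple grep. A minimal sketch (queue ID 93869 is from the example above; if the log uses TAI64N timestamps like that example, daemontools' tai64nlocal makes them human-readable):

========================================================
grep "msg 93869" /usr/local/psa/var/log/maillog
grep "msg 93869" /usr/local/psa/var/log/maillog | tai64nlocal //decode @4000... timestamps
========================================================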

Saturday, May 8, 2010

Directory structure of Qmail :-

[root@ds1130 var]# tree qmail/ | grep '[a-zA-Z]'
qmail/
|-- alias
|-- bin
| |-- autoresponder
| |-- bouncesaying
| |-- cmd5checkpw
| |-- condredirect
| |-- datemail
| |-- elq
| |-- except
| |-- forward
| |-- maildir2mbox
| |-- maildirmake
| |-- maildirwatch
| |-- mailsubj
| |-- matchup
| |-- mm_wrapper
| |-- pinq
| |-- predate
| |-- preline
| |-- qail
| |-- qbiff
| |-- qmail-clean
| |-- qmail-dk
| |-- qmail-getpw
| |-- qmail-inject
| |-- qmail-local
| |-- qmail-local.moved
| |-- qmail-local.plesk
| |-- qmail-lspawn
| |-- qmail-newmrh
| |-- qmail-newu
| |-- qmail-pop3d
| |-- qmail-popup
| |-- qmail-pw2u
| |-- qmail-qmqpc
| |-- qmail-qmqpd
| |-- qmail-qmtpd
| |-- qmail-qread
| |-- qmail-qstat
| |-- qmail-queue
| |-- qmail-queue.drweb
| |-- qmail-queue.moved
| |-- qmail-queue.origin
| |-- qmail-queue.plesk
| |-- qmail-remote
| |-- qmail-remote.moved
| |-- qmail-remote.plesk
| |-- qmail-rspawn
| |-- qmail-send
| |-- qmail-showctl
| |-- qmail-smtpd
| |-- qmail-start
| |-- qmail-tcpok
| |-- qmail-tcpto
| |-- qreceipt
| |-- qsmhook
| |-- relaylock
| |-- sendmail
| |-- smtp_auth
| |-- splogger
| |-- tcp-env
| `-- true
|-- boot
| |-- binm1
| |-- binm1+df
| |-- binm2
| |-- binm2+df
| |-- binm3
| |-- binm3+df
| |-- home
| |-- home+df
| |-- proc
| `-- proc+df
|-- control
| |-- clientcert.pem -> /var/qmail/control/servercert.pem
| |-- defaultdelivery
| |-- dhparam1024.pem
| |-- dhparam512.pem
| |-- locals
| |-- me
| |-- rcpthosts
| |-- rejectnonexist
| |-- rsa512.pem
| |-- servercert.pem
| |-- smtpplugins
| |-- spfrules
| `-- virtualdomains
|-- handlers
| |-- before-local
| |-- before-queue
| |-- before-remote
| |-- info
| `-- spool
|-- mailnames
| `-- energy.com.pa
| `-- alex
| |-- @attachments
| `-- Maildir
| |-- cur
| |-- maildirsize
| |-- new
| `-- tmp
|-- plugins
| `-- chkrcptto
|-- queue
| |-- bounce
| |-- info
| |-- intd
| |-- local
| |-- lock
| | |-- sendmutex
| | |-- tcpto
| | `-- trigger
| |-- mess
| |-- pid
| |-- remote
| `-- todo
`-- users
|-- assign
|-- cdb
`-- poppasswd
168 directories, 2089 files
[root@ds1130 var]

How to install Qmail?

1. Qmail
qmail is a secure, reliable, efficient, simple message transfer agent. It is meant as a replacement for the entire sendmail-binmail system on typical Internet-connected UNIX hosts.

Secure: Security isn't just a goal, but an absolute requirement. Mail delivery is critical for users; it cannot be turned off, so it must be completely secure.

Reliable: qmail's straight-paper-path philosophy guarantees that a message, once accepted into the system, will never be lost. qmail also supports maildir, a new, super-reliable user mailbox format. Maildirs, unlike mbox files and mh folders, won't be corrupted if the system crashes during delivery. Even better, not only can a user safely read his mail over NFS, but any number of NFS clients can deliver mail to him at the same time.

Efficient: On a Pentium under BSD/OS, qmail can easily sustain 200000 local messages per day---that's separate messages injected and delivered to mailboxes in a real test! Although remote deliveries are inherently limited by the slowness of DNS and SMTP, qmail overlaps 20 simultaneous deliveries by default, so it zooms quickly through mailing lists.

Simple: qmail is vastly smaller than any other Internet MTA. Some reasons why:

(1) Other MTAs have separate forwarding, aliasing, and mailing list mechanisms. qmail has one simple forwarding mechanism that lets users handle their own mailing lists.
(2) Other MTAs offer a spectrum of delivery modes, from fast+unsafe to slow+queued. qmail- send is instantly triggered by new items in the queue, so the qmail system has just one delivery mode: fast+queued.
(3) Other MTAs include, in effect, a specialized version of inetd that watches the load average. qmail's design inherently limits the machine load, so qmail-smtpd can safely run from your system's inetd.

Replacement for sendmail: qmail supports host and user masquerading, full host hiding, virtual domains, null clients, list-owner rewriting, relay control, double-bounce recording, arbitrary RFC 822 address lists, cross-host mailing list loop detection, per-recipient checkpointing, downed host backoffs, independent message retry schedules, etc. In short, it's up to speed on modern MTA features. qmail also includes a drop-in ``sendmail'' wrapper so that it will be used transparently by your current UAs.

2. Required packages

There are four packages needed for this qmail install.

2.1 netqmail-1.06.tar.gz
qmail is a secure, reliable, efficient, simple message transfer agent. It is designed for typical Internet-connected UNIX hosts. As of October 2001, qmail is the second most common SMTP server on the Internet, and has by far the fastest growth of any SMTP server.

2.2 ucspi-tcp-0.88.tar.gz
It is a tool similar to inetd. ucspi-tcp's tcpserver listens on port 25 and spawns qmail-smtpd for each connection. ucspi-tcp stands for UNIX Client-Server Program Interface for TCP.

2.3 daemontools-0.76.tar.gz
daemontools is a set of tools to manage and monitor daemons on Linux/UNIX. qmail uses it to supervise the qmail daemons.

2.4 checkpassword-0.90.tar.gz
checkpassword provides a simple, uniform password-checking interface to all root applications. It is suitable for use by applications such as login, ftpd, and pop3d.
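checkpassword reads the login, the password, and a timestamp as NUL-terminated strings on file descriptor 3 and, on success, runs the program given in its arguments as that user. A minimal sketch of a manual test (run as root; the user name, password, and timestamp are placeholders):

==========================================================
# feed "user\0password\0timestamp\0" to checkpassword on fd 3;
# it runs `id` as that user if the password is correct
printf 'user\0password\0Y123456\0' | /bin/checkpassword id 3<&0
echo $?   # 0 = password accepted, non-zero = rejected
==========================================================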

3. Qmail Install

3.1 Get the files

Download files and place them into the /usr/local/src directory. This document refers to that directory for install procedures.

========================================================
cd /usr/local/src
wget http://www.qmail.org/netqmail-1.06.tar.gz
wget http://cr.yp.to/ucspi-tcp/ucspi-tcp-0.88.tar.gz
wget http://cr.yp.to/daemontools/daemontools-0.76.tar.gz
wget http://cr.yp.to/checkpwd/checkpassword-0.90.tar.gz
=========================================================

Now create /package directory and move daemontools-0.76.tar.gz to /package.

=========================================================
mkdir /package
mv -iv /usr/local/src/daemontools-0.76.tar.gz /package
=========================================================

3.2 Create users and groups

Run the following commands one by one to create the required users and groups.

==============================================
groupadd nofiles
useradd -g nofiles -d /var/qmail qmaild
useradd -g nofiles -d /var/qmail qmaill
useradd -g nofiles -d /var/qmail qmailp
useradd -g nofiles -d /var/qmail/alias alias
groupadd qmail
useradd -g qmail -d /var/qmail qmailq
useradd -g qmail -d /var/qmail qmailr
useradd -g qmail -d /var/qmail qmails
==============================================

3.3 Compile & Install

Untar the Qmail source

============================
cd /usr/local/src
tar -xzvf netqmail-1.06.tar.gz
===========================

Compile the source

===================================
cd /usr/local/src/netqmail-1.06
make setup check
===================================

4. Configure Qmail

4.1 Post Installation setup

Post-installation configuration can be done by running the following script. Note that ./config relies on DNS to resolve this machine's hostname; if that lookup fails, use ./config-fast instead (see the sketch below).

=============
./config
==============
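A sketch of the config-fast alternative documented in Life with qmail; replace mail.example.com with this server's fully qualified hostname:

=============
./config-fast mail.example.com
==============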

4.2 Configure Qmail aliases.

Create a user named "adminmails" to receive all administrator emails.

================================================
useradd adminmails
cd ~alias
echo "adminmails" > .qmail-postmaster
echo "adminmails" > .qmail-mailer-daemon
echo "adminmails" > .qmail-root
echo "adminmails" > .qmail-abuse
chmod 644 ~alias/.qmail*
==============================================

Create Maildir for "adminmails" user

========================================
su - adminmails
/var/qmail/bin/maildirmake ~/Maildir
========================================


4.3 Configure Qmail to use Maildir

Now we need to configure qmail to use the Maildir Format.

Create "/var/qmail/rc" with following contents.

====================================================================================

#!/bin/sh

# Using stdout for logging
# Using control/defaultdelivery from qmail-local to deliver messages by default

exec env - PATH="/var/qmail/bin:$PATH" \
qmail-start "`cat /var/qmail/control/defaultdelivery`"

=====================================================================================

Make "/var/qmail/rc" executable

============================

chmod 755 /var/qmail/rc

============================

Create "/var/qmail/control/defaultdelivery" file.

=====================================================

echo ./Maildir/ >/var/qmail/control/defaultdelivery

=====================================================

4.4 Replace Sendmail binaries

======================================================
chmod 0 /usr/lib/sendmail ;
chmod 0 /usr/sbin/sendmail ;
mv /usr/lib/sendmail /usr/lib/sendmail.bak ;
mv /usr/sbin/sendmail /usr/sbin/sendmail.bak ;
ln -s /var/qmail/bin/sendmail /usr/lib/sendmail ;
ln -s /var/qmail/bin/sendmail /usr/sbin/sendmail
=======================================================

5. Install ucspi-tcp

Untar the ucspi-tcp source.

=============================================================
cd /usr/local/src/
tar -xzvf ucspi-tcp-0.88.tar.gz
==============================================================

Patch ucspi-tcp with the "ucspi-tcp-0.88.errno.patch" provided with netqmail.

==============================================================================
cd ucspi-tcp-0.88
patch < /usr/local/src/netqmail-1.06/other-patches/ucspi-tcp-0.88.errno.patch
===============================================================================

Install ucspi-tcp.

========================
make
make setup check
=========================

6. Install checkpassword

Untar checkpassword source.

=========================================
cd /usr/local/src
tar -xzvf checkpassword-0.90.tar.gz
=========================================

Patch checkpassword with the "checkpassword-0.90.errno.patch" provided with netqmail.

================================================================
cd checkpassword-0.90
patch < /usr/local/src/netqmail-1.06/other-patches/checkpassword-0.90.errno.patch
================================================================

Install checkpassword.

==================================
make ;
make setup check
==================================

7. Install daemontools

Untar the daemontools source

=========================================
cd /package
tar -xzvf daemontools-0.76.tar.gz
=========================================

Patch daemontools with the "daemontools-0.76.errno.patch" provided with netqmail.

=========================================================================
cd /package/admin/daemontools-0.76/src
patch < /usr/local/src/netqmail-1.06/other-patches/daemontools-0.76.errno.patch
=========================================================================

Install daemontools

====================
cd ..
package/install
====================

8. Qmail Startup script

The "qmailctl" script is used as startup script for qmail.

8.1 Download qmailctl

===========================================================
cd /var/qmail/bin/
wget http://lifewithqmail.org/qmailctl-script-dt70
===========================================================

8.2 Setup qmailctl

========================================
mv -iv qmailctl-script-dt70 qmailctl
chmod 755 /var/qmail/bin/qmailctl
ln -s /var/qmail/bin/qmailctl /usr/bin
========================================

8.3 Modify qmailctl for qmail-pop3d

Add following lines to qmailctl's "start" section.

========================================================================
if svok /service/qmail-pop3d ; then
    svc -u /service/qmail-pop3d /service/qmail-pop3d/log
else
    echo qmail-pop3d supervise not running
fi
========================================================================

Add following lines to qmailctl's "stop" section.

======================================================================
echo " qmail-pop3d"
svc -d /service/qmail-pop3d /service/qmail-pop3d/log
======================================================================

Add following lines to qmailctl's "stat" section.

=======================================
svstat /service/qmail-pop3d
svstat /service/qmail-pop3d/log
=======================================

Add the following lines to qmailctl's "pause" section.

=======================================
echo "Pausing qmail-pop3d"
svc -p /service/qmail-pop3d
=======================================

Add following lines to qmailctl's "cont" section.

=======================================
echo "Continuing qmail-pop3d"
svc -c /service/qmail-pop3d
=======================================

Add following lines to qmailctl's "restart" section.

=========================================================
echo "* Restarting qmail-pop3d."
svc -t /service/qmail-pop3d /service/qmail-pop3d/log
=========================================================


9. Setup qmail-send & qmail-smtpd

9.1 Create supervise script directories for qmail daemons

Create supervise directories for qmail-send, qmail-smtpd & qmail-pop3d.

======================================================
mkdir -p /var/qmail/supervise/qmail-send/log
mkdir -p /var/qmail/supervise/qmail-smtpd/log
mkdir -p /var/qmail/supervise/qmail-pop3d/log
======================================================

9.2 Create supervise script for qmail-send

Create supervise script for qmail-send with name "/var/qmail/supervise/qmail-send/run".

The file should have following contents.

====================
#!/bin/sh
exec /var/qmail/rc
====================

9.3 qmail-send log daemon supervise script

Create qmail-send log daemon supervise script with name "/var/qmail/supervise/qmail-send/log/run".

The script should have following contents

======================================================================================
#!/bin/sh
exec /usr/local/bin/setuidgid qmaill /usr/local/bin/multilog t /var/log/qmail
======================================================================================

9.4 qmail-smtpd daemon supervise script

Create qmail-smtpd daemon supervise script with name "/var/qmail/supervise/qmail-smtpd/run".

The script should have following contents

=========================================================================================
#!/bin/sh

QMAILDUID=`id -u qmaild`
NOFILESGID=`id -g qmaild`
MAXSMTPD=`cat /var/qmail/control/concurrencyincoming`
LOCAL=`head -1 /var/qmail/control/me`

if [ -z "$QMAILDUID" -o -z "$NOFILESGID" -o -z "$MAXSMTPD" -o -z "$LOCAL" ]; then
    echo QMAILDUID, NOFILESGID, MAXSMTPD, or LOCAL is unset in
    echo /var/qmail/supervise/qmail-smtpd/run
    exit 1
fi

if [ ! -f /var/qmail/control/rcpthosts ]; then
    echo "No /var/qmail/control/rcpthosts!"
    echo "Refusing to start SMTP listener because it'll create an open relay"
    exit 1
fi

exec /usr/local/bin/softlimit -m 9000000 \
    /usr/local/bin/tcpserver -v -R -l "$LOCAL" -x /etc/tcp.smtp.cdb -c "$MAXSMTPD" \
    -u "$QMAILDUID" -g "$NOFILESGID" 0 smtp /var/qmail/bin/qmail-smtpd 2>&1
==========================================================================================

Create the concurrencyincoming control file.

======================================================
echo 20 > /var/qmail/control/concurrencyincoming
chmod 644 /var/qmail/control/concurrencyincoming
======================================================
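The run script above points tcpserver at /etc/tcp.smtp.cdb, which this guide never creates. Following Life with qmail, allow relaying from localhost and build the database (the cdb target of the qmailctl script installed in section 8 runs tcprules for you):

======================================================
echo '127.:allow,RELAYCLIENT=""' >> /etc/tcp.smtp
qmailctl cdb
======================================================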

9.5 qmail-smtpd log daemon supervise script

Create qmail-smtpd log daemon supervise script with name "/var/qmail/supervise/qmail-smtpd/log/run".

The script should have following contents

========================================================================================
#!/bin/sh
exec /usr/local/bin/setuidgid qmaill /usr/local/bin/multilog t /var/log/qmail/smtpd
========================================================================================

9.6 qmail-pop3d daemon supervise script

Create qmail-pop3d daemon supervise script with name "/var/qmail/supervise/qmail-pop3d/run" .

The script should have the following contents.

=================================================================================
#!/bin/sh
exec /usr/local/bin/softlimit -m 9000000 \
/usr/local/bin/tcpserver -v -R -H -l 0 0 110 /var/qmail/bin/qmail-popup \
FQDN /bin/checkpassword /var/qmail/bin/qmail-pop3d Maildir 2>&1
=================================================================================

Please replace FQDN with fully qualified domain name of the POP server
E.g: pop.example.com

9.7 qmail-pop3d log daemon supervise script

Create qmail-pop3d log daemon supervise script with name "/var/qmail/supervise/qmail-pop3d/log/run".

The script should have following contents

====================================================================
#!/bin/sh
exec /usr/local/bin/setuidgid qmaill /usr/local/bin/multilog t \
/var/log/qmail/pop3d
====================================================================

9.8 Create the log directories and add execute permissions on the run scripts.

=====================================================
mkdir -p /var/log/qmail/smtpd
mkdir /var/log/qmail/pop3d

chown qmaill /var/log/qmail
chown qmaill /var/log/qmail/smtpd
chown qmaill /var/log/qmail/pop3d

chmod 755 /var/qmail/supervise/qmail-send/run
chmod 755 /var/qmail/supervise/qmail-send/log/run

chmod 755 /var/qmail/supervise/qmail-smtpd/run
chmod 755 /var/qmail/supervise/qmail-smtpd/log/run

chmod 755 /var/qmail/supervise/qmail-pop3d/run
chmod 755 /var/qmail/supervise/qmail-pop3d/log/run
======================================================

10. Create soft link for the daemons in /service folder

10.1 Add qmail-send to /service folder

=================================================================
ln -s /var/qmail/supervise/qmail-send /service/qmail-send
=================================================================

10.2 Add qmail-smtpd to /service folder

===================================================================
ln -s /var/qmail/supervise/qmail-smtpd /service/qmail-smtpd
===================================================================

10.3 Add qmail-pop3d in /service folder.

=====================================================================
ln -s /var/qmail/supervise/qmail-pop3d /service/qmail-pop3d
=====================================================================

Note 1: The /service directory is created when daemontools is installed.

Note 2: The qmail system will start automatically shortly after these links are created.

If you don't want it running now, do: qmailctl stop
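Once the links are in place, supervise should pick the services up within a few seconds. To verify, check that every service reports "up" with an increasing uptime:

===========================================================
qmailctl stat
===========================================================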



Reference
1. http://tac-au.com/howto/qmail-mini-HOWTO.txt
2. http://www.lifewithqmail.org/lwq.html

How to configure alias, forwarder, virtual domain etc in Qmail?

Example parameters :

Server : titanic_mail.com
Local domain : disco.com
Remote domain : gmail.com or ureka.com
Remote IP : 209.85.231.83 or 82.98.86.164
Account : test123@disco.com
Account : testremote@ureka.com

1. Forwarding e-mail to another host :
a. Go to the /var/qmail/control directory.
b. Add the local domain to /var/qmail/control/rcpthosts, e.g.: disco.com
c. Now add a route to /var/qmail/control/smtproutes, e.g.
disco.com:ureka.com //forwards the domain's mail to that remote server
d. A route may also point at an IP, optionally with a port: disco.com:209.85.231.83 or disco.com:209.85.231.83:1027
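A minimal sketch of adding such a route from the shell (the domain, relay, and port are the example values above):

=============================================
echo "disco.com:ureka.com:1027" >> /var/qmail/control/smtproutes //relay disco.com's mail to ureka.com, port 1027
=============================================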

2. Users and alias users :-

Qmail uses a different method of handling users and aliases than do other SMTP servers, such as sendmail. In sendmail, all system aliases are defined in the /etc/aliases file. In Qmail, these aliases are defined in "dot-qmail" files, which are found in the /var/qmail/alias directory for system-wide aliases or in the user's home directory for user-specific aliases.

So, here are the steps :
a. cd /var/qmail/alias
touch .qmail-test123
b. Put a line like the following in that hidden file, just for e-mail forwarding:
&testremote@ureka.com or &test@disco.com
c. To deliver e-mail to local users, you can inject the e-mail directly into a mailbox file or a maildir directory. Inside .qmail-test123, put one of these lines:

/var/qmail/mailnames/disco.com/test/Mailbox.test //for mailbox
/var/qmail/mailnames/disco.com/test/Maildir/ //for maildir
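A dot-qmail file may hold several delivery instructions, one per line, and lines starting with # are comments. A sketch of what /var/qmail/alias/.qmail-test123 could look like with the example values above:

====================================================
# forward a copy off-site
&testremote@ureka.com
# and keep a local copy (the trailing / marks a maildir)
/var/qmail/mailnames/disco.com/test/Maildir/
====================================================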

3. Mailing list :
a. Create an address like qmail-list@disco.com
b. Create a dot-qmail file, e.g. .qmail-list, in /var/qmail/alias
c. Put all the subscriber addresses in this file, like :
&test123
&bubby@disco.com
&example@disco.com

4. Aliases that run programs :
First, you must define the alias file in the same way that you would for forwarding e-mail to another address, mailbox, or maildir. Use a pipe (|) to begin the command line of the program that you want to call. For example, suppose e-mail to auto@mydomain.com must run the program /usr/bin/myprogram. Create the /var/qmail/alias/.qmail-auto alias file and include the following line in it:

|/usr/bin/myprogram

5. Virtual domains :
Suppose the domain disco123.com is not hosted on titanic_mail.com, but its DNS zone exists and its MX records point at this server; you can then set up forwarding from that domain to any existing domain.
Qmail also supports virtual domains. If your system (mydomain.com) wishes to accept e-mail for virtual.com, then add the following line to your /var/qmail/control/rcpthosts file:
virtual.com

Then add the domain to the /var/qmail/control/virtualdomains file as well.
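Entries in virtualdomains have the form domain:prepend; qmail-send prepends that string to the local part when routing, and it only reads the file at startup. A sketch (the user name "virtualuser" is a placeholder):

=====================================================
echo "virtual.com:virtualuser" >> /var/qmail/control/virtualdomains //info@virtual.com is then handled by ~virtualuser/.qmail-info
/etc/init.d/qmail restart //qmail-send must re-read virtualdomains
=====================================================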


Qmail : How does it work ?

Qmail programs & configuration files
Qmail works using about 15 small programs and a fairly large number of configuration files. At first this may look confusing, but once you are familiar with these programs and configuration files, qmail administration becomes much easier.

Qmail programs

Qmail continuously runs 5 daemons. The remaining programs are launched by these 5 daemons as and when required.

The 5 daemons are:

1. qmail-send
2. qmail-lspawn
3. qmail-rspawn
4. qmail-clean
5. tcpserver

How Qmail works

1. Email arrival in Qmail

Mail arrives in Qmail in two different ways.

(i) Locally injected emails.

There is a program called sendmail that comes with qmail. It is a program that mimics functionality of legacy sendmail, its arguments are also similar. sendmail accepts the local email and passes it to qmail-inject.

(ii) Remote emails arrived via SMTP

* tcpserver listens for incoming connections on the SMTP port.
* upon a new SMTP connection, qmail-smtpd is launched.
* qmail-smtpd receives emails via SMTP.



2. Queuing emails

* qmail-inject & qmail-smtpd pass received emails to qmail-queue.
* qmail-queue places emails in the folder /var/qmail/queue/todo
* qmail-queue adds necessary headers to emails
* Then, it notifies qmail-send about newly queued emails.

3. Processing queued emails

* qmail-send takes the message out of /var/qmail/queue/todo folder
* qmail-send checks the recipient address of the email
* If the recipient address is local, the email is passed to qmail-lspawn
* If the recipient address is remote, email is passed to qmail-rspawn

4. Email delivery to local and remote recipients

* qmail-lspawn passes email to qmail-local
* qmail-local delivers email to local email address
* qmail-rspawn passes email to qmail-remote
* qmail-remote connects to remote mail server and delivers email to remote email address

5. Cleaning queue after delivering emails

* Once all messages are delivered, qmail-send notifies qmail-clean
* qmail-clean removes the delivered emails from the queue
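You can inspect the queue at each of these stages with the tools that ship with qmail (both appear in the bin/ directory listing earlier):

========================================
/var/qmail/bin/qmail-qstat //number of messages in the queue
/var/qmail/bin/qmail-qread //sender and recipients of each queued message
========================================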

Qmail configuration files

Qmail configuration files are located in the folder /var/qmail/control.

1. badmailfrom
All "from addresses" which are blacklisted.

2. bouncefrom
It is the bounce email from address. Usually it is "mailer-daemon".

3. bouncehost
It is host name of server

4. concurrencyincoming
Maximum number of simultaneous incoming SMTP connections allowed.

5. concurrencylocal
Maximum number of simultaneous local deliveries

6. concurrencyremote
Maximum number of simultaneous remote deliveries

7. defaultdomain
Default domain name of server

8. defaulthost
Host name of server

9. databytes
Maximum number of bytes in message (0=no limit)

10. doublebouncehost
It is the bounce email from address. Usually it is "mailer-daemon".

11. doublebounceto
It is the bounce email to address. Usually it is "postmaster".

12. helohost
It is the host name used in SMTP HELO command

13. idhost
It is the host name of the server, used when creating Message-IDs.

14. localiphost
The host name substituted for the local IP address when an address contains an IP literal.

15. locals
List of all local domains.

16. me
It is the hostname of the server.

17. morercpthosts
Only about 50 domains should be kept in rcpthosts; the remaining domains go in morercpthosts (compile it with qmail-newmrh).

18. queuelifetime
It is the number of seconds an email can remain in the queue.

19. rcpthosts
Domains of all locally hosted email addresses.

20. smtpgreeting
It is the SMTP greeting message used by the mail server.

21. timeoutconnect
Time in seconds the server waits for an SMTP connection.

22. timeoutremote
Time in seconds the server waits for the remote server.

23. timeoutsmtpd
Time in seconds the server waits for the SMTP client.

24. virtualdomains
List of all virtual domains.
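All of these are plain text files, so changing a setting is a one-line echo. A sketch using two of the files above (the values are arbitrary examples; restart qmail afterwards so the daemons re-read them):

echo 10485760 > /var/qmail/control/databytes //limit messages to 10 MB (0 = no limit)
echo 259200 > /var/qmail/control/queuelifetime //bounce after 3 days instead of the 7-day default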

Wednesday, May 5, 2010

Make swap partition

1. First, create an empty file which will serve as a swap file by issuing the following command:
dd if=/dev/zero of=/swap bs=1024 count=1048576
where /swap is the desired name of the swap file, and count=1048576 sets the size to 1024 MB swap (for 512 MB use 524288 respectively).


2. Set up a Linux swap area with:
mkswap /swap
Old versions of mkswap needed the size of the swap file/partition, but with newer ones it’s better not to specify it at all since a small typo can erase the whole disk.


3. It’s wise to set the permissions as follows:
chmod 0600 /swap


4. The last thing – add the new swap file to /etc/fstab:
/swap swap swap defaults,noatime 0 0
This way it will be loaded automatically on boot.


5. To enable the new swap space immediately, issue:
swapon -a
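To confirm that the new swap area is active:
swapon -s //lists active swap areas; /swap should appear
free -m //the Swap total should have grown by ~1024 MB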

Cannot lock Container

Check the lock file and remove the stale PID :

vi /vz/lock/434.lk
ps aux | grep vz //check whether vzquota is still running; if so, kill the process

MLAT/cpulimit/cpuunits

http://wiki.openvz.org/User_Guide/Managing_Resources#Managing_CPU_Share //Referred

Resource control VPS :

The root cause of this problem was a high "MLAT" value in the container
(as shown by vzstat). This was happening due to this resource setting
of the VPS: cpulimit = 50% & cpus = 4. I have now changed the cpulimit
to 400%. After some research, I found that cpulimit works on a PER-CPU
basis. So, for 1 CPU, cpulimit = 100% means "fully use 1 CPU". But
when you have 4 CPUs, you must specify cpulimit = 400% to "fully use
ALL 4 CPUs".

So, if there are 16 CPUs, cpulimit = 1600%, i.e. CPULIMIT="1600" means "fully use
ALL 16 CPUs".

=====
Processors at HW Node : 4 (2000.304 MHz each, 1024 KB cache, Dual-Core AMD), 7 GB RAM, 1 GB swap
No of processors at each CT : 1 (each CT RAM = 1 GB)
So, each CT should have CPULIMIT="100"; for 4 processors it'll be 400
BURST_CPULIMIT="0" //for each CT

=========

[root@node3 scripts]#for i in `vzlist | awk '{print $1 }'`; do grep CPULIMIT /etc/vz/conf/$i.conf; done //command to check CPUlimit of the container.

Tune CTs :

[root@node3 conf]# for i in `vzlist -a| awk '{print $1 }'`; do sed -i -e 's/CPULIMIT="0"/CPULIMIT="1600"/g' /etc/vz/conf/$i.conf; done
[root@node3 conf]# for i in `vzlist -a| awk '{print $1 }'`; do sed -i 's/BURST_CPULIMIT="1600"/BURST_CPULIMIT="0"/g' /etc/vz/conf/$i.conf; done

[root@node3 conf]#for i in `vzlist -a|grep stopped | awk '{print $1 }'`; do sed -i -e 's/CPUS="0"/CPUS="16"/g' /etc/vz/conf/$i.conf; done

[root@node4 log]# for i in `vzlist -a|awk '{print $1 }'`; do sed -i -e 's/CPUS="2"/CPUS="16"/g' /etc/vz/conf/$i.conf; done


check cpuunits in node and containers :

vzcpucheck and

for i in `vzlist | awk '{print $1 }'`; do grep CPUUNITS /etc/vz/conf/$i.conf; done

=========
In the following example, Container 102 is guaranteed to receive about 2% of the CPU time even if the Hardware Node is fully used, or in other words, if the current CPU utilization equals the power of the Node. Besides, CT 102 will not receive more than 4% of the CPU time even if the CPU is not fully loaded:

# vzctl set 102 --cpuunits 1500 --cpulimit 4 --save
Configuring Number of CPUs Inside Container : # vzctl set 101 --cpus 2 --save

========

Primary parameters :

avnumproc >> The average number of processes and threads. V
numproc >> The maximal number of processes and threads the CT may create. V
numtcpsock >> The number of TCP sockets (PF_INET family, SOCK_STREAM type). This parameter limits the number of TCP connections and, thus, the number of clients the server application can handle in parallel. V
numothersock >> The number of sockets other than TCP ones. Local (UNIX-domain) sockets are used for communications inside the system. UDP sockets are used, for example, for Domain Name Service (DNS) queries. UDP and other sockets may also be used in some very specialized applications (SNMP agents and others). V
vmguarpages >> The memory allocation guarantee, in pages (one page is 4 Kb). CT applications are guaranteed to be able to allocate additional memory so long as the amount of memory accounted as privvmpages (see the auxiliary parameters) does not exceed the configured barrier of the vmguarpages parameter. Above the barrier, additional memory allocation is not guaranteed and may fail in case of overall memory shortage. V

============

What values is the CT currently using?

vzctl exec 101 cat /proc/user_beancounters

Monitoring Memory Consumption
You can monitor a number of memory parameters for the whole Hardware Node and for particular Containers with the help of the vzmemcheck utility. For example:
# vzmemcheck -v -A //in megabytes

Automatic adjustment of Container

FIXME: vzcfgvalidate -r|-i

-r Enter the correction mode to fix the wrong resource configuration.
-i Enter the interactive correction mode.

Validating Container Configuration (VPS) : -

# vzcfgvalidate /etc/vz/conf/101.conf

The utility checks constraints on the resource management parameters and displays all the constraint violations found. There can be three levels of violation severity:
Recommendation >> This is a suggestion, which is not critical for Container or Hardware Node operations. The configuration is valid in general; however, if the system has enough memory, it is better to increase the settings as advised.
----------
Warning >>A constraint is not satisfied, and the configuration is invalid. The Container applications may not have optimal performance or may fail in an ungraceful way.
----------
Error >> An important constraint is not satisfied, and the configuration is invalid. The Container applications have increased chances to fail unexpectedly, to be terminated, or to hang.
----------

Monitoring Memory Consumption :

You can monitor a number of memory parameters for the whole Hardware Node and for particular Containers with the help of the vzmemcheck utility. For example:

# vzmemcheck -v

Monitor resources usage of VPS

1. Monitoring System Resources Consumption :

It is possible to check the system resource control parameter statistics from within a Container. The primary use of these statistics is to understand which particular resource has limits that prevent an application from starting. Moreover, these statistics report the current and maximal resource consumption for the running Container. This information can be obtained from the /proc/user_beancounters file.

command #vzctl exec 101 cat /proc/user_beancounters


The failcnt column displays the number of unsuccessful attempts to allocate a particular resource. If this value increases after an application fails to start, then the corresponding resource limit is in effect lower than is needed by the application.

The held column displays the current resource usage, and the maxheld column – the maximal value of the resource consumption for the last accounting period. The meaning of the barrier and limit columns depends on the parameter and is explained in the UBC guide.

Inside a CT, the /proc/user_beancounters file displays the information on the given CT only, whereas from the Hardware Node this file displays the information on all the CTs.
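To spot the offending resource quickly, print only the rows whose failcnt (the last column) is non-zero. A minimal sketch, assuming the standard /proc/user_beancounters layout with two header lines and failcnt as the last column:

# vzctl exec 101 awk 'NR > 2 && $NF > 0' /proc/user_beancounters //show only beancounters with failed allocations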

Manage VPS resources :

7. Managing Container CPU resources :-
The current section explains the CPU resource parameters that you can configure and monitor for each Container.

The table below provides the name and the description for the CPU parameters. The File column indicates whether the parameter is defined in the OpenVZ global configuration file (G) or in the CT configuration files (V).

Parameter >> Description >> File
======================================================
A) ve0cpuunits >> Determines the minimal guaranteed share of the CPU time Container 0 (the Hardware Node itself) will receive. It is recommended to set the value of this parameter to 5-10% of the power of the Hardware Node (the command to show the power of the node is vzcpucheck). After the Node is up and running, you can redefine the amount of CPU time allocated to the Node by using the --cpuunits parameter with the vzctl set 0 command. >> G

B) cpuunits >> This is a positive integer number that determines the minimal guaranteed share of the CPU time the corresponding Container will receive. >> V

C) cpulimit >> This is a positive number indicating the CPU time in per cent the corresponding CT is not allowed to exceed. >> V

D) cpus >> The number of CPUs to be used to handle the processes running inside the corresponding Container. >> V

==========


8. Managing CPU Share : -The OpenVZ CPU resource control utilities allow you to guarantee any Container the amount of CPU time this Container receives. The Container can consume more than the guaranteed value if there are no other Containers competing for the CPU and the cpulimit parameter is not defined.

Note: The CPU time shares and limits are calculated on the basis of a one-second period. Thus, for example, if a Container is not allowed to receive more than 50% of the CPU time, it will be able to receive no more than half a second each second.

To get a view of the optimal share to be assigned to a Container, check the current Hardware Node CPU utilization:


# vzcpucheck
Current CPU utilization: 5166
Power of the node: 73072.5

The output of this command displays the total number of so-called CPU units consumed by all running Containers and Hardware Node processes (here 5166). "Power of the node" is the maximum number of CPU units this Node can provide (here 73072.5).
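The guaranteed share is simply cpuunits divided by the node's power: a CT with cpuunits = 1500 on this node gets 1500 / 73072.5 ≈ 0.02, i.e. about 2% of the CPU time, which is where the "about 2%" figure in the example below comes from.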

In the following example, CT 102 is guaranteed about 2% of the CPU time, and besides, it will not receive more than 4% of the CPU time even if the CPU is not fully loaded:

# vzctl set 102 --cpuunits 1500 --cpulimit 4 --save
Saved parameters for CT 102
# vzctl start 102
Starting CT …
CT is mounted
Adding IP address(es): 192.168.1.102
CT start in progress…
# vzcpucheck
Current CPU utilization: 6667
Power of the node: 73072.5

To set the --cpuunits parameter for the Hardware Node, you should indicate 0 as the Container ID (e.g. vzctl set 0 --cpuunits 5000 --save).

==============================


9. Configuring Number of CPUs Inside Container : -


If your Hardware Node has more than one physical processor installed, you can control the number of CPUs which will be used to handle the processes running inside separate Containers. By default, a Container is allowed to consume the CPU time of all processors on the Hardware Node, i.e. any process inside any Container can be executed on any processor on the Node. However, you can modify the number of physical CPUs which will be simultaneously available to a Container using the --cpus option of the vzctl set command. For example, if your Hardware Node has 4 physical processors installed, i.e. any Container on the Node can make use of these 4 processors, you can set the processes inside Container 101 to be run on 2 CPUs only by issuing the following command:

# vzctl set 101 --cpus 2 --save

Note: The number of CPUs to be set for a Container must not exceed the number of physical CPUs installed on the Hardware Node. In this case the 'physical CPUs' notation designates the number of CPUs the OpenVZ kernel is aware of (you can view this CPU number using the cat /proc/cpuinfo command on the Hardware Node).

=============================

10 : Managing System Parameters :

The resources a Container may allocate are defined by the system resource control parameters. These parameters can be subdivided into the following categories: primary, secondary, and auxiliary parameters. The primary parameters are the start point for creating a Container configuration from scratch. The secondary parameters are dependent on the primary ones and are calculated from them according to a set of constraints. The auxiliary parameters help improve fault isolation among applications in one and the same Container and the way applications handle errors and consume resources. They also help enforce administrative policies on Containers by limiting the resources required by an application and preventing the application from running in the Container.

Listed below are all the system resource control parameters. The parameters starting with «num» are measured in integers. The parameters ending in «buf» or «size» are measured in bytes. The parameters containing «pages» in their names are measured in 4096-byte pages (IA32 architecture). The File column indicates that all the system parameters are defined in the corresponding CT configuration files (V).

==
Primary parameters Parameter Description File
avnumproc >> The average number of processes and threads. V
numproc >> The maximal number of processes and threads the CT may create. V
numtcpsock >> The number of TCP sockets (PF_INET family, SOCK_STREAM type). This parameter limits the number of TCP connections and, thus, the number of clients the server application can handle in parallel. V
numothersock >> The number of sockets other than TCP ones. Local (UNIX-domain) sockets are used for communications inside the system. UDP sockets are used, for example, for Domain Name Service (DNS) queries. UDP and other sockets may also be used in some very specialized applications (SNMP agents and others). V
vmguarpages >> The memory allocation guarantee, in pages (one page is 4 Kb). CT applications are guaranteed to be able to allocate additional memory so long as the amount of memory accounted as privvmpages (see the auxiliary parameters) does not exceed the configured barrier of the vmguarpages parameter. Above the barrier, additional memory allocation is not guaranteed and may fail in case of overall memory shortage. V
------------------------

Secondary parameters Parameter Description File
kmemsize >> The size of unswappable kernel memory allocated for the internal kernel structures for the processes of a particular CT. V
tcpsndbuf >> The total size of send buffers for TCP sockets, i.e. the amount of kernel memory allocated for the data sent from an application to a TCP socket, but not acknowledged by the remote side yet. V
tcprcvbuf >> The total size of receive buffers for TCP sockets, i.e. the amount of kernel memory allocated for the data received from the remote side, but not read by the local application yet. V
othersockbuf >> The total size of UNIX-domain socket buffers, UDP, and other datagram protocol send buffers. V
dgramrcvbuf >> The total size of receive buffers of UDP and other datagram protocols. V
oomguarpages >> The out-of-memory guarantee, in pages (one page is 4 Kb). Any CT process will not be killed even in case of heavy memory shortage if the current memory consumption (including both physical memory and swap) does not reach the oomguarpages barrier. V

---------------------

Auxiliary parameters Parameter Description File
lockedpages >> The memory not allowed to be swapped out (locked with the mlock() system call), in pages. V
shmpages >> The total size of shared memory (including IPC, shared anonymous mappings and tmpfs objects) allocated by the processes of a particular CT, in pages. V
privvmpages >> The size of private (or potentially private) memory allocated by an application. The memory that is always shared among different applications is not included in this resource parameter. V
numfile >>The number of files opened by all CT processes. V
numflock >>The number of file locks created by all CT processes. V
numpty >>The number of pseudo-terminals, such as an ssh session, the screen or xterm applications, etc. V
numsiginfo >> The number of siginfo structures (essentially, this parameter limits the size of the signal delivery queue). V
dcachesize >> The total size of dentry and inode structures locked in the memory. V
physpages >> The total size of RAM used by the CT processes. This is an accounting-only parameter currently. It shows the usage of RAM by the CT. For the memory pages used by several different CTs (mappings of shared libraries, for example), only the corresponding fraction of a page is charged to each CT. The sum of the physpages usage for all CTs corresponds to the total number of pages used in the system by all the accounted users. V
numiptent >> The number of IP packet filtering entries. V

You can edit any of these parameters in the /etc/vz/conf/CTID.conf file of the corresponding Container by means of your favorite text editor (for example, vi or emacs), or by running the vzctl set command. For example:

# vzctl set 101 --kmemsize 2211840:2359296 --save
Saved parameters for CT 101

======================

Virtuozzo commands

[root@test /]#vzctl start/stop/restart 101 // start the VPS 101
[root@test /]#vzctl enter 101 // enter into 101 VPS
[root@101 /]#exit // log out from 101 VPS
[root@test /]#vzlist // display the list of running VPSs
[root@test /]#vzlist -a // display the list of all VPSs
[root@test /]#vzcalc -v 101 // show resource usage of a VPS
[root@test /]#vzctl exec 101 df -m // execute a command inside the VPS (in this case 'df -m')
[root@test /]#vzyum 101 -y install package // install a package using yum on the VPS
[root@test /]#vzrpm 101 -Uvh package // install a package using rpm on the VPS
[root@test /]#vzstat // real-time monitor of node and container resource usage

Saturday, May 1, 2010

Increase connection speed of FTP server

Create /etc/proftpd.include2 with the following details (IdentLookups off and UseReverseDNS off are what speed up connection setup, since they skip the ident query and the reverse DNS lookup):

IdentLookups off
UseReverseDNS off
Quotas on
AllowStoreRestart on
AllowRetrieveRestart on
TimeoutIdle 1800

Enter the following line in the proftpd.conf :

Include /etc/proftpd.include2

Restart the FTP service :
/etc/init.d/proftpd restart

Add IP and route

Commands :
[root@morgan]# ifconfig eth0 192.168.99.14 netmask 255.255.255.0 up
[root@morgan]# route add default gw 192.168.99.254 //gateway add

Adding a static route with route :
[root@morgan]# route add -net 192.168.98.0 netmask 255.255.255.0 gw 192.168.99.1

Removing a static network route and adding a static host route :
[root@morgan]# route del -net 192.168.98.0 netmask 255.255.255.0 gw 192.168.99.1
[root@morgan]# route add -net 192.168.98.42 netmask 255.255.255.255 gw 192.168.99.1 //host route written as a /32 network
[root@morgan]# route add -host 192.168.98.42 gw 192.168.99.1 //equivalent host route
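Routes added with ifconfig/route are lost on reboot. On Red Hat-style systems (an assumption; other distributions use different files) they can be made persistent in the interface's route file:

[root@morgan]# echo "192.168.98.0/24 via 192.168.99.1" >> /etc/sysconfig/network-scripts/route-eth0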

Plesk Config & Binary file

Linux :

===========================================================
# Plesk tree
PRODUCT_ROOT_D - /usr/local/psa
==================================================
# Directory of SysV-like Plesk initscripts
PRODUCT_RC_D - /etc/init.d

/etc/psa/psa.conf // common config file
/etc/psa/.psa.shadow // admin password file
/var/log // common log file
/usr/local/psa/admin/logs //plesk log file
================================================

# Directory for config files
PRODUCT_ETC_D - /usr/local/psa/etc
================================================

# Virtual hosts directory
HTTPD_VHOSTS_D - /var/www/vhosts
/var/www/vhosts/<domain>/ contains: anon_ftp, cgi-bin, conf, error_docs, httpdocs, httpsdocs, pd, private, statistics-->/logs (domain logs), subdomains, web

Domain based httpd.conf file:- /var/www/vhosts/<domain>/conf/httpd.include
Doc root : /var/www/vhosts/<domain>/httpdocs
===============================================

# Apache configuration files directory
HTTPD_CONF_D - /etc/httpd/conf/httpd.conf //server based
/var/www/vhosts/<domain>/conf/httpd.include // domain based
------------------------------------------------------------------------------------------------------------
# Apache include files directory
HTTPD_INCLUDE_D - /etc/httpd/conf.d/mailman.conf , perl.conf, python.conf , webalizer.conf, zz010_psa_httpd.conf, fcgid.conf, manual.conf , php.conf, ssl.conf, welcome.conf
------------------------------------------------------------------------------------------------------------
# Apache binary files directory
HTTPD_BIN_D - /usr/bin
--------------------------------------------------------------------------------------------------------------
#Apache log files directory
HTTPD_LOG_D - /var/log/httpd
-----------------------------------------------------------------------------------------------------------
#apache startup script
HTTPD_SERVICE httpd
/etc/init.d/httpd [start|stop|restart|status]


================================================

# Qmail directory
QMAIL_ROOT_D /var/qmail

# Location of qmail maildirs
QMAIL_MAILNAMES_D /var/qmail/mailnames/
/var/qmail/mailnames/<domain>/<mail account>/Maildir/{new,cur,tmp}
# Path to rblsmtpd
RBLSMTPD /usr/sbin/rblsmtpd

# Courier-IMAP
COURIER_IMAP_ROOT_D /

/etc/init.d/qmail [status,start,stop,restart] //service control

================================================
# Proftpd
FTPD_CONF /etc/proftpd.conf
FTPD_CONF_INC /etc/proftpd.include
FTPD_BIN_D /usr/bin
FTPD_VAR_D /var/run/proftpd
FTPD_SCOREBOARD /var/run/proftpd/scoreboard

Log file :- /var/log/xferlog
Service control :- /etc/init.d/proftpd [start,stop,status,restart]
================================================
# Bind
NAMED_RUN_ROOT_D /var/named/run-root/etc/named.conf
/var/named/run-root/var/ // db record

Service control :- /etc/init.d/named [status,start,stop,restart]
log:- /var/log/messages
===============================================

# Webalizer
WEB_STAT /usr/bin/webalizer
===============================================
# Logrotate
LOGROTATE /usr/local/psa/logrotate/sbin/logrotate
===============================================
# MySQL
MYSQL_VAR_D /var/lib/mysql
MYSQL_BIN_D /usr/bin

Service control:- /etc/rc.d/init.d/mysqld [start,stop,restart,status]
log :- /var/log/messages
===============================================
# PostgreSQL
PGSQL_DATA_D /var/lib/pgsql/data
PGSQL_BIN_D /usr/bin
===============================================
# Backups directory
DUMP_D /var/lib/psa/dumps
===============================================
# Mailman directories
MAILMAN_ROOT_D /usr/lib/mailman
MAILMAN_VAR_D /var/lib/mailman
===============================================
# Python binary
PYTHON_BIN /usr/bin/python2.3

# Tomcat root directory
CATALINA_HOME /usr/share/tomcat5

# DrWeb
DRWEB_ROOT_D /opt/drweb
DRWEB_ETC_D /etc/drweb

# GnuPG binary
GPG_BIN /usr/bin/gpg

# Tar binary
TAR_BIN /bin/tar
===============================================
# Curl certificates
CURL_CA_BUNDLE_FILE /usr/share/curl/curl-ca-bundle.crt
=========================================================
# AWStats
AWSTATS_ETC_D /etc/awstats
AWSTATS_BIN_D /var/www/cgi-bin/awstats
AWSTATS_TOOLS_D /usr/share/awstats
AWSTATS_DOC_D /var/www/html/awstats
===============================================
# openssl binary
OPENSSL_BIN /usr/bin/openssl

LIB_SSL_PATH /lib/libssl.so
LIB_CRYPTO_PATH /lib/libcrypto.so

CLIENT_PHP_BIN /usr/local/psa/bin/php-cli
---------------------------------------------------------------------------------------------------------------------
Plesk SpamAssassin (spamd) :-
/usr/local/psa/admin/bin/spamd --status
/usr/local/psa/admin/bin/spamd --stop
/usr/local/psa/admin/bin/spamd --start
------------------------------------------------------------------------------------------------------

Exim command cheat sheet

URL : http://bradthemad.org/tech/notes/exim_cheatsheet.php

spam check on linux server

Check whether spamming is occurring, i.e. whether spam is being sent from the server :

node #ps -C exim -fH ewww|awk '{for(i=1;i<=40;i++){print $i}}'|sort|uniq -c|grep PWD|sort -n

node # grep "cwd=" /var/log/exim_mainlog|awk '{for(i=1;i<=10;i++){print $i}}'|sort|uniq -c|grep cwd|sort -n

======

Check on qmail server :

Find the queue IDs :
1. for i in `/var/qmail/bin/qmail-qread | awk '{print $6}'|cut -d# -f2`; do find /var/qmail/queue -iname $i; done > test
2. for i in `cat test`; do grep -irl "GoodFaith Proposal" $i; done //prints the queue files which contain that subject
3. Sort the IDs, delete them from the qmail queue (see the sketch below), and inform the customer.
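Never remove queue files while qmail is running. A minimal sketch of step 3, assuming step 2's output was redirected into a file named "spamfiles" (a hypothetical name):

======
/etc/init.d/qmail stop //stop qmail before touching the queue
for i in `cat spamfiles`; do rm -v $i; done
/etc/init.d/qmail start
======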

If there are too many messages in the queue, try to find out where the spam is coming from. If the mail is being sent by an authorized user, but not from a PHP script, you can find out which user sent most of the messages with the following command:
# cat /usr/local/psa/var/log/maillog |grep -I smtp_auth |grep -I user |awk '{print $11}' |sort |uniq -c |sort -n

Guide :

http://download1.parallels.com/Plesk/PPP9/Doc/en-US/plesk-9.3-unix-advanced-administration-guide/index.htm?fileName=61674.htm



List iptable rules

List iptable rules:
iptables -n -L (-n prevents slow reverse DNS lookup)

Reject all from an IP Address:
iptables -A INPUT -s 136.xxx.xxx.xxx -d 136.xxx.xxx.xxx -j REJECT

Allow in SSH:
iptables -A INPUT -d 136.xxx.xxx.xxx -p tcp --dport 22 -j ACCEPT

If Logging - Insert a Separate Line *BEFORE* the ACCEPT / REJECT / DROP
iptables -A INPUT -d 136.xxx.xxx.xxx -p tcp --dport 3306 -j LOG
iptables -A INPUT -d 136.xxx.xxx.xxx -p tcp --dport 3306 -j ACCEPT
Block All:
iptables -A INPUT -j REJECT

===============
Remove / Delete an individual / single iptables rule
iptables -D INPUT -s 127.0.0.1 -p tcp --dport 111 -j ACCEPT
// -D = delete the matching rule. If you don't know the exact syntax of the rule to delete, do the following:
iptables -L --line-numbers
//note the number of the line holding the rule you wish to delete
iptables -D INPUT 4
//format = iptables -D CHAIN Rule_No

======================
Saving ALL IPTABLES Rules
It seems that the method for saving & loading iptables rules via /etc/init.d/iptables load|save active|inactive does not save NAT rules.
The command for saving iptables rules manually is:
root:~# iptables-save > rules-saved
There is also a command called iptables-restore, which reads rules from stdin:
root:~# iptables-restore < rules-saved

Virtuozzo Command Line Utilities

Virtuozzo Command Line Utilities :
The table below contains the full list of Virtuozzo command-line utilities.
General utilities are intended for performing day-to-day maintenance tasks:
vzctl
Utility to control Containers.
vzlist
Utility to view a list of Containers existing on the Node with additional information.
vzquota
Utility to control Virtuozzo Containers disk quotas.
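
As a quick illustration of the general utilities (the CT ID 101 is hypothetical):

# vzlist -a              // list all Containers, running and stopped
# vzctl start 101        // start CT 101
# vzctl exec 101 uptime  // run a command inside the running CT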

Licensing utilities allow you to install a new license, view the license state, and generate a license request for a new license:
vzlicview
Utility to display the Virtuozzo license status and parameters.
vzlicload
Utility to manage Virtuozzo licenses on the Hardware Node.
vzlicupdate
Utility to activate the Virtuozzo Containers installation, update the Virtuozzo licenses installed on the Hardware Node, or transfer the Virtuozzo license from the Source Node to the Destination Node.

Container migration tools allow you to migrate Containers between Hardware Nodes or within one Hardware Node:
vzmigrate
Utility for migrating Containers from one Hardware Node to another.
vzmlocal
Utility for locally cloning or moving Containers.
vzp2v
Utility to migrate a physical server to a Container on the Node.
vzv2p
Utility to migrate a Container to a physical server.
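
For example, migrating a Container to another node is a single vzmigrate command (the hostname and CT ID below are hypothetical):

# vzmigrate node2.example.com 101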

Container backup utilities allow you to back up and restore the Container private areas, configuration files, action scripts, and quota information:
vzbackup
Utility to back up Containers.
vzrestore
Utility to restore backed up Containers.
vzabackup
Utility to back up Hardware Nodes and their Containers. Unlike vzbackup, this utility requires the Parallels Agent software to function.
vzarestore
Utility to restore backed-up Hardware Nodes and Containers. Unlike vzrestore, this utility requires the Parallels Agent software to function.

Template management tools allow template creation and maintenance, and the installation of applications into a Container:
vzpkg
Utility to manage OS and application EZ templates either inside your Containers or on the Hardware Node itself.
vzmktmpl
Utility to create OS and application EZ templates.
vzveconvert
Utility to convert Containers based on Virtuozzo standard templates to EZ template-based Containers.
vzpkgproxy
Utility to create caching proxy servers for handling OS and application EZ templates.
vzrhnproxy
Utility to create RHN proxy servers for handling the packages included in the RHEL 4 and RHEL 5 OS EZ templates.
vzpkgls
Utility to get a list of templates available on the Hardware Node and in Containers.
vzpkginfo
Utility to get the information on any template installed on the Hardware Node.
vzpkgcreat
Create a new package set from binary RPM or DEB files.
vzpkgadd
Utility to add a new template to a Container.
vzpkglink
Utility to replace real files inside a Container with symlinks to these very files on the Node.
vzpkgrm
Utility to remove a template from a Container.
vzpkgcache
Update a set of preinstalled Container archives after new template installation.
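
As a small illustration, listing the templates installed in a Container and adding another application template (the CT ID and template name below are hypothetical):

# vzpkg list 101
# vzpkg install 101 mysql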

Supplementary tools perform a number of miscellaneous tasks in the Hardware Node and Container context:
vzup2date
Utility to update your Virtuozzo software and templates.
vzup2date-mirror
Utility to create local mirrors of the Virtuozzo official repository.
vzfsutil
Utility for the VZFS optimization and consistency checking.
vzcache
Utility to gain extra disk space by caching files that are identical across different Containers.
vzsveinstall
Utility to create the Service Container on the Hardware Node.
vzsveupgrade
Utility to update the packages inside the Service Container.
vzps and vztop
Utilities working as the standard ps and top utilities, with Container-related functionality added.
vzsetxinetd
Utility to switch some services between standalone and xinetd-dependent modes.
vzdqcheck
Utility to print the current file space usage from the quota's point of view.
vzdqdump and vzdqload
Utilities to dump the Container user/group quota limits and grace times from the kernel or the quota file or for loading them to a quota file.
vznetstat
Utility that prints network traffic usage statistics by Container.
vzcpucheck
Utility for checking CPU utilization by Containers.
vzmemcheck
Utility for checking the current memory parameters of the Hardware Node and Containers.
vzcalc
Utility to calculate resource usage by a Container.
vzcheckovr
Utility to check the current system overcommitment and safety of the total resource control settings.
vzstat
Utility to monitor the Hardware Node and Container resources consumption in real time.
vzpid
Utility that prints the ID of the Container a given process belongs to.
vzsplit
Utility to generate a sample Container configuration file, "splitting" the Hardware Node into equal parts.
vzcfgscale
Utility to scale the Container configuration.
vzcfgvalidate
Utility to validate Container configuration file correctness.
vzcfgconvert
Utility to convert Virtuozzo 2.0.2 Container configuration files to Virtuozzo 2.5.x format.
vzstatrep
Utility to analyze the logs collected by vzlmond and to generate statistics reports based on these logs (in text and graphical form).
vzreport
Utility to draw up a problem report and to automatically send it to the Parallels support team.
vzhwcalc
Utility to scan the main resources on any Linux server and create a file recording this information.
vznetcfg
Utility to manage network devices on the Hardware Node.
vzmtemplate
Utility to migrate the installed OS and application templates from one Hardware Node to another.
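
A couple of these supplementary tools in action (the PID below is hypothetical):

# vzpid 12345    // print the ID of the Container that process 12345 belongs to
# vzcpucheck     // summarize the CPUUNITS committed on this node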

Virtuozzo Configuration Files :

There are a number of files responsible for the Virtuozzo system configuration. Most of them are located in the /etc directory on the Hardware Node; however, some configuration files are stored in the /etc directory inside the Service Container, on the Backup Node, inside a Container, or on a dedicated server. Whenever a configuration file is located somewhere other than the Hardware Node, we point out the exact location (the Service Container, etc.) where it can be found.

A list of configuration files is presented in the table below:
/etc/vz/vz.conf
The Virtuozzo global configuration file. This file keeps system-wide settings, affecting Container and Virtuozzo template default location, global network settings and so on.
/etc/vz/conf/CTID.conf
The private configuration file owned by the Container numbered CTID. The file keeps Container-specific settings – its resource management parameters, the location of its private area, its IP address, and so on.
/etc/vz/conf/ve-NAME.conf.sample
Sample files containing a number of default Container configurations, which may be used as a reference for Container creation. The following samples are shipped with Virtuozzo: basic, cpanel, confixx, slm.plesk, slm.256MB, slm.512MB, slm.1024MB, slm.2048MB. You may also create new samples customized for your own needs (see the vzctl create example after this list).
/etc/vz/conf/dists/DISTRIBUTION.conf
(the distribution scripts themselves live under /etc/sysconfig/vz-scripts/dists/scripts)
The configuration files used to determine what scripts are to be run when performing certain operations in the Container context (e.g. adding a new IP address to the Container). These scripts are different from the Virtuozzo action scripts and depend on the Linux version the given Container is running.
/etc/sysconfig/vzsve
The configuration file used for the Service Container creation by vzsveinstall.
/etc/sysconfig/vzagent/
Parallels Agent configuration files.
/etc/vz/conf/networks_classes
The definition of network classes, used by traffic shaping and bandwidth management in Virtuozzo.
/etc/sysconfig/vzup2date/vzup2date.conf
This file specifies the default connection parameters for the vzup2date utility.
//.conf
This configuration file specifies the default connection parameters for the vzup2date-mirror utility. It should be located on the computer where you are planning to run vzup2date-mirror.
/etc/cron.d/vereboot
The configuration file for the cron daemon. Using this file, Virtuozzo emulates the "reboot" command working inside a Container.
/etc/vzvpn/vzvpn.conf
The configuration file used to define the parameters for establishing a private secure channel to the Parallels support team server.
/etc/vzreport.conf
The configuration file used to define the parameters for sending your problem report to the Parallels support team.
/etc/sysctl.conf
Kernel parameters. Virtuozzo adjusts a number of kernel sysctl parameters and modifies the default /etc/sysctl.conf file (see the sysctl example after this list).
/etc/vzredirect.d/*.conf
These files define the offline management modes for controlling Containers by Container administrators.
/etc/vzlmond.conf
This configuration file defines the parameters used by the vzlmond daemon to collect information on the main Hardware Node resources consumption.
/etc/vzstat.conf
The file lists the warning and/or error levels for a number of resource control parameters. If a parameter hits the warning or error value, the vzstat utility will display this parameter in yellow or red.
/etc/vzstatrep.conf
This configuration file is located on the Monitor Node and used by the vzstatrep utility when generating statistic reports and graphics on the Hardware Node resource consumption and sending these reports to the Node administrator.
/etc/vzbackup.conf
The global configuration file residing on the Backup Node and determining the global Container backup settings.
/etc/vz/pkgproxy/rhn.conf
The Red Hat Network (RHN) Proxy Server configuration file used by the vzrhnproxy utility when setting up the RHN Proxy Server. This file can be located on any computer where the vzrhnproxy package is installed.
/etc/vzpkgproxy/vzpkgproxy.conf
This configuration file is used by the vzpkgproxy utility when creating special caching proxy servers for OS and application EZ templates. The file can be located on any computer where the vzpkgproxy package is installed.
/etc/vztt/vztt.conf
This configuration file is used by the vzpkg utility when managing OS and application EZ templates.
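
Two of the files above lend themselves to short examples. First, to base a new Container on one of the shipped sample configurations, pass the sample name to vzctl create (the CT ID and OS template name below are hypothetical):

# vzctl create 101 --ostemplate centos-5-x86 --config basic
// --config basic makes vzctl pick up /etc/vz/conf/ve-basic.conf.sample

Second, a sketch of the kind of kernel parameters commonly set in /etc/sysctl.conf on OpenVZ/Virtuozzo Hardware Nodes (illustrative, not an exhaustive or authoritative list):

net.ipv4.ip_forward = 1                     # needed for venet networking
net.ipv4.conf.default.proxy_arp = 0
net.ipv4.conf.all.rp_filter = 1             # source route verification
kernel.sysrq = 1                            # magic SysRq key for console debugging
net.ipv4.conf.default.send_redirects = 1
net.ipv4.conf.all.send_redirects = 0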

===========