
    How to Fix the Bridge Problem in CentOS/RHEL 5.4 for KVM

    Dated: 05-Nov-2009

    When we use KVM in CentOS 5.4, we will notice that there is no bridge set up to allow virtual guests to connect directly to the local network.

    We need a few simple steps to fix it.

    As we are using libvirt, we need to follow the steps below.

    Step 1: Create the bridge script at /etc/sysconfig/network-scripts/ifcfg-br0

    [root@babar /root]# vi /etc/sysconfig/network-scripts/ifcfg-br0

    DEVICE=br0
    BOOTPROTO=static
    TYPE=Bridge
    IPADDR=192.168.0.100
    NETMASK=255.255.255.0
    ONBOOT=yes
    NM_CONTROLLED=no

    Save & exit

    As you can see, I use static IP config.
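    As a side note, if your LAN hands out addresses via DHCP, the bridge can take its address dynamically instead; a sketch of the same file with DHCP (assuming a DHCP server serves this segment):

```shell
# /etc/sysconfig/network-scripts/ifcfg-br0 -- DHCP variant (sketch)
DEVICE=br0
TYPE=Bridge
BOOTPROTO=dhcp
ONBOOT=yes
NM_CONTROLLED=no
```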

    Step 2: hook the physical NIC up to the bridge and remove its IP config in /etc/sysconfig/network-scripts/ifcfg-eth1 (here the bridged NIC is eth1, as the ifconfig output below shows; substitute eth0 if that is the interface you want to bridge)

    [root@babar /root]# vi /etc/sysconfig/network-scripts/ifcfg-eth1
    DEVICE=eth1
    BOOTPROTO=none
    ONBOOT=yes
    BRIDGE=br0
    NM_CONTROLLED=no
    TYPE=Ethernet

    Restart the network service (or reboot the system), and the bridge will be active. Now when we create a new virtual machine with virt-manager, we can select to have it hooked up directly to the physical network.
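    For reference, the restart and a quick bridge check might look like this (brctl comes from the bridge-utils package; a sketch, interface names follow my setup):

```shell
# Apply the new interface configuration
service network restart
# Confirm the bridge exists and has the physical NIC enslaved
brctl show
```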

    [root@babar ~]# ifconfig
    br0 Link encap:Ethernet HWaddr xx:xx:xx:xx:xx:xx
    inet addr:192.168.0.100 Bcast:192.168.0.255 Mask:255.255.255.0
    inet6 addr: fe80::215:17ff:febd:c94d/64 Scope:Link
    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
    RX packets:168 errors:0 dropped:0 overruns:0 frame:0
    TX packets:33 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:0
    RX bytes:11475 (11.2 KiB) TX bytes:9580 (9.3 KiB)

    eth0 Link encap:Ethernet HWaddr 00:xx:xx:xx:xx:xx
    inet addr:192.168.1.253 Bcast:192.168.1.255 Mask:255.255.255.0
    inet6 addr: fe80::215:17ff:febd:c94c/64 Scope:Link
    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
    RX packets:3240 errors:0 dropped:0 overruns:0 frame:0
    TX packets:5286 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:1000
    RX bytes:417544 (407.7 KiB) TX bytes:5574477 (5.3 MiB)
    Memory:b1a20000-b1a40000

    eth1 Link encap:Ethernet HWaddr xx:xx:xx:xx:xx:xx
    inet6 addr: fe80::215:17ff:febd:c94d/64 Scope:Link
    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
    RX packets:9113 errors:0 dropped:0 overruns:0 frame:0
    TX packets:122 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:1000
    RX bytes:755212 (737.5 KiB) TX bytes:27911 (27.2 KiB)
    Memory:b1a00000-b1a20000

    lo Link encap:Local Loopback
    inet addr:127.0.0.1 Mask:255.0.0.0
    inet6 addr: ::1/128 Scope:Host
    UP LOOPBACK RUNNING MTU:16436 Metric:1
    RX packets:969 errors:0 dropped:0 overruns:0 frame:0
    TX packets:969 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:0
    RX bytes:2744681 (2.6 MiB) TX bytes:2744681 (2.6 MiB)

    virbr0 Link encap:Ethernet HWaddr 00:00:00:00:00:00
    inet addr:192.168.122.1 Bcast:192.168.122.255 Mask:255.255.255.0
    inet6 addr: fe80::200:ff:fe00:0/64 Scope:Link
    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
    RX packets:0 errors:0 dropped:0 overruns:0 frame:0
    TX packets:52 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:0
    RX bytes:0 (0.0 b) TX bytes:9535 (9.3 KiB)

    vnet0 Link encap:Ethernet HWaddr A2:F7:06:6D:C1:2F
    inet6 addr: fe80::a0f7:6ff:fe6d:c12f/64 Scope:Link
    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
    RX packets:57 errors:0 dropped:0 overruns:0 frame:0
    TX packets:1035 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:500
    RX bytes:9782 (9.5 KiB) TX bytes:208709 (203.8 KiB)

    Now it is working fine.
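    A guest attached directly to the bridge can also be created from the command line; a hedged sketch (guest name, disk path, and ISO path are placeholders for illustration):

```shell
# Create a KVM guest wired straight to br0 (adjust names/paths to your setup)
virt-install --name guest1 --ram 512 \
  --disk path=/var/lib/libvirt/images/guest1.img,size=8 \
  --network bridge:br0 \
  --vnc --cdrom /var/lib/libvirt/images/CentOS-5.4-i386.iso
```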

  • How to Install and configure Web Server Load Balancing cluster on RedHat 5.x / CentOS 5.x /6.x Using LVS, Heart Beat with Highly Available MySQL Database server using DRBD and Heart Beat


    by Babar Zahoor
    Coordinated by Muhammad Farrukh Siddque (LPIC)
    Special thanks to
    Mr. Muhammad Kamran Azeem http://www.Wbitt.com
    http://www.LinuxUrduCBTs.com
    Linux Load Balancer Urdu CBT using Piranha, Pulse, ipvsadm, and Highly Available MySQL using DRBD & Heartbeat.

    6-node load balancing cluster setup:
    Two nodes for LVS (Piranha, Pulse, nanny, ipvsadm) as load balancers.
    Two nodes for web servers (can be more, according to your requirements).
    Two nodes for the MySQL database server using DRBD & Heartbeat for a highly available MySQL database.

    3 types of load balancer clustering:
    1. NAT
    2. Direct Routing
    3. Tunneling
  • High Availability Squid Web Cache Cluster with DRBD Heartbeat by Babar Zahoor

    High Availability Linux Cluster Setup using  DRBD  and Heart Beat on CentOS 5.x 6.x /RHEL 5.x 6.X/ Fedora

    #### This How-To accompanies my video on High Availability Squid Cache using DRBD and Heartbeat ####

    OS CentOS 5.3 on both machines.

    We will set up transparent Squid on a High Availability cluster.

    The packages are available in the CentOS extras repository.

    Our Scenario

    We have two servers

    baber 192.168.1.50 Primary server

    farrukh 192.168.1.60 Secondary server

    Set up IP-to-name resolution ## we don't have DNS, so we need this step ##

    Basic Setup Configuration

    [root@baber ~]# vim /etc/hosts
    192.168.1.50 baber
    192.168.1.60 farrukh
    wq!
    [root@baber ~]# ping baber
    PING baber (192.168.1.50) 56(84) bytes of data.
    64 bytes from baber (192.168.1.50): icmp_seq=1 ttl=64 time=4.15 ms
    64 bytes from baber (192.168.1.50): icmp_seq=2 ttl=64 time=0.126 ms
    64 bytes from baber (192.168.1.50): icmp_seq=3 ttl=64 time=1.88 ms
    [1]+ Stopped ping baber
    [root@baber ~]# ping farrukh
    PING farrukh (192.168.1.60) 56(84) bytes of data.
    64 bytes from farrukh (192.168.1.60): icmp_seq=1 ttl=64 time=1.32 ms
    64 bytes from farrukh (192.168.1.60): icmp_seq=2 ttl=64 time=0.523 ms
    64 bytes from farrukh (192.168.1.60): icmp_seq=3 ttl=64 time=1.79 ms
    [2]+ Stopped ping farrukh
    
    
    [root@baber ~]#
    [root@baber ~]# scp /etc/hosts 192.168.1.60:/etc/hosts

    Before moving on, stop unwanted services on both servers:

    [root@baber ~]# /etc/init.d/sendmail stop
    [root@baber ~]# chkconfig --level 235 sendmail off
    [root@baber ~]# iptables -F
    [root@baber ~]# service iptables save
    [root@farrukh ~]# /etc/init.d/sendmail stop
    [root@farrukh ~]# chkconfig --level 235 sendmail off
    [root@farrukh ~]# iptables -F
    [root@farrukh ~]# service iptables save
    [root@baber ~]# rpm -qa | grep ntp
     ntp-4.2.2p1-9.el5.centos.1
    
    

    Then we need to open the NTP server configuration file.

    [root@baber ~]# vi /etc/ntp.conf
    # Permit time synchronization with our time source, but do not
    # permit the source to query or modify the service on this system.
    restrict default kod nomodify notrap nopeer noquery
    # Permit all access over the loopback interface. This could
    # be tightened as well, but to do so would effect some of
    # the administrative functions.
    restrict 127.0.0.1
    # Hosts on local network are less restricted.
    #restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
    # Use public servers from the pool.ntp.org project.
    # Please consider joining the pool (http://www.pool.ntp.org/join.html).
    ### Edited By Babar Zahoor Jun 16 2009 ###
    #server 0.centos.pool.ntp.org
    #server 1.centos.pool.ntp.org
    #server 2.centos.pool.ntp.org
    #broadcast 192.168.1.255 key 42 # broadcast server
    #broadcastclient # broadcast client
    #broadcast 224.0.1.1 key 42 # multicast server
    #multicastclient 224.0.1.1 # multicast client
    #manycastserver 239.255.254.254 # manycast server
    #manycastclient 239.255.254.254 key 42 # manycast client
    # Undisciplined Local Clock. This is a fake driver intended for backup
    # and when no outside source of synchronized time is available.
    ########## for server use this and on clients comment this and use server serverIP ##################
    server 127.127.1.0 # local clock
    #fudge 127.127.1.0 stratum 10
    # Drift file. Put this in a directory which the daemon can write to.
    # No symbolic links allowed, either, since the daemon updates the file
    # by creating a temporary in the same directory and then rename()’ing
    # it to the file.
    # driftfile /var/lib/ntp/drift
    # Key file containing the keys and key identifiers used when operating
    # with symmetric key cryptography.
    # Specify the key identifiers which are trusted.
    # trustedkey 4 8 42
    # Specify the key identifier to use with the ntpdc utility.
    # requestkey 8
    # Specify the key identifier to use with the ntpq utility.
    #controlkey 8
    keys /etc/ntp/keys
    wq!
    
    [root@baber ~]#
    [root@baber ~]# /etc/init.d/ntpd start
    [root@baber ~]# chkconfig --level 235 ntpd on
    
    [root@farrukh ~]# vim /etc/ntp.conf
    # Permit time synchronization with our time source, but do not
    # permit the source to query or modify the service on this system.
    restrict default kod nomodify notrap nopeer noquery
    # Permit all access over the loopback interface. This could
    # be tightened as well, but to do so would effect some of
    # the administrative functions.
    #restrict 127.0.0.1
    #restrict -6 ::1
    # Hosts on local network are less restricted.
    #restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
    # Use public servers from the pool.ntp.org project.
    # Please consider joining the pool (http://www.pool.ntp.org/join.html).
    server 192.168.1.50 ### add this line on second server ###
    #server 0.centos.pool.ntp.org
    #server 1.centos.pool.ntp.org
    #server 2.centos.pool.ntp.org
    #broadcast 192.168.1.255 key 42 # broadcast server
    #broadcastclient # broadcast client
    #broadcast 224.0.1.1 key 42 # multicast server
    #multicastclient 224.0.1.1 # multicast client
    #manycastserver 239.255.254.254 # manycast server
    #manycastclient 239.255.254.254 key 42 # manycast client
    # Undisciplined Local Clock. This is a fake driver intended for backup
    # and when no outside source of synchronized time is available.
    #server 127.127.1.0 # local clock ##### #####
    #fudge 127.127.1.0 stratum 10
    # Drift file. Put this in a directory which the daemon can write to.
    # No symbolic links allowed, either, since the daemon updates the file
    # by creating a temporary in the same directory and then rename()’ing
    # it to the file.
    driftfile /var/lib/ntp/drift
    # Key file containing the keys and key identifiers used when operating
    # with symmetric key cryptography.
    keys /etc/ntp/keys
    # Specify the key identifiers which are trusted.
    #trustedkey 4 8 42
    # Specify the key identifier to use with the ntpdc utility.
    #requestkey 8
    # Specify the key identifier to use with the ntpq utility.
    #controlkey 8
    wq!
    
    [root@farrukh ~]# /etc/init.d/ntpd start
    [root@farrukh ~]# chkconfig --level 235 ntpd on
    [root@farrukh ~]# ntpdate -u 192.168.1.50
    [root@farrukh ~]# watch ntpq -p -n
    [root@baber ~]# watch ntpq -p -n

    PARTITION SETUP on both servers

    Set up identical partitions on both servers with fdisk.

    We have ~4 GB disks (/dev/sdb) on both servers, as the fdisk output below shows.

    Partition setup for the cluster servers: we need to create an LVM partition.

    [root@baber ~]# fdisk -l
    [root@baber ~]# fdisk /dev/sdb
    
    [root@farrukh ~]# fdisk /dev/sdb
    Command (m for help): m
    Command action
    a toggle a bootable flag
    b edit bsd disklabel
    c toggle the dos compatibility flag
    d delete a partition
    l list known partition types
    m print this menu
    n add a new partition
    o create a new empty DOS partition table
    p print the partition table
    q quit without saving changes
    s create a new empty Sun disklabel
    t change a partition’s system id
    u change display/entry units
    v verify the partition table
    w write table to disk and exit
    x extra functionality (experts only)
    Command (m for help): p
    Disk /dev/sdb: 4294 MB, 4294967296 bytes
    255 heads, 63 sectors/track, 522 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdb1 1 522 4192933+ 8e Linux LVM
    Command (m for help): d
    Selected partition 1
    Command (m for help): n
    Command action
    e extended
    p primary partition (1-4)
    p
    Partition number (1-4): 1
    First cylinder (1-522, default 1):
    Using default value 1
    Last cylinder or +size or +sizeM or +sizeK (1-522, default 522): +4000M
    Command (m for help): p
    Disk /dev/sdb: 4294 MB, 4294967296 bytes
    255 heads, 63 sectors/track, 522 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdb1 1 487 3911796 83 Linux
    Command (m for help): t
    Selected partition 1
    Hex code (type L to list codes): 8e
    Changed system type of partition 1 to 8e (Linux LVM)
    Command (m for help): p
    Disk /dev/sdb: 4294 MB, 4294967296 bytes
    255 heads, 63 sectors/track, 522 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdb1 1 487 3911796 8e Linux LVM
    Command (m for help):
    Command (m for help): w
    
    [root@baber ~]# partprobe

    Create the physical volume for LVM (the second step of the LVM setup):

    [root@baber ~]# pvcreate /dev/sdb1

    Create the volume group:

    [root@baber ~]# vgcreate vgdrbd /dev/sdb1

    Create the logical volume (allocate all of the volume group's free space):

    [root@baber ~]# lvcreate -n lvdrbd -l 100%FREE vgdrbd

    Note: create the LVM setup identically on both servers.
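    To double-check that both nodes ended up identical, the LVM stack can be inspected; a quick sketch of the verification commands:

```shell
# Show physical volumes, volume groups, and the logical volume in vgdrbd
pvs
vgs
lvs vgdrbd
```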

    Please add these three values to /etc/sysctl.conf:

    [root@baber ~]#vi /etc/sysctl.conf
    net.ipv4.conf.eth0.arp_ignore = 1
    net.ipv4.conf.all.arp_announce = 2
    net.ipv4.conf.eth0.arp_announce = 2
    save & quit
    [root@baber ~]# sysctl -p
    net.ipv4.ip_forward = 0
    net.ipv4.conf.default.rp_filter = 1
    net.ipv4.conf.eth0.arp_ignore = 1
    net.ipv4.conf.all.arp_announce = 2
    net.ipv4.conf.eth0.arp_announce = 2
    net.ipv4.conf.default.accept_source_route = 0
    kernel.sysrq = 0
    kernel.core_uses_pid = 1
    net.ipv4.tcp_syncookies = 1
    kernel.msgmnb = 65536
    kernel.msgmax = 65536
    kernel.shmmax = 4294967295
    kernel.shmall = 268435456
    [root@baber ~]#

    DRBD Setup

    Please install the drbd82 and kmod-drbd82 RPMs using yum.

    [root@baber ~]#yum install -y drbd82 kmod-drbd82
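    Since kmod-drbd82 ships a kernel module, it is worth confirming that the installed packages match the running kernel before loading the module; a quick check might be:

```shell
# The kmod package must match the running kernel release
rpm -q drbd82 kmod-drbd82
uname -r
```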

    open /etc/drbd.conf

    [root@baber ~]#vim /etc/drbd.conf
    global {
    usage-count yes;
    }
    common {
    syncer { rate 10M; }
    }
    resource r0 {
    protocol C;
    handlers {
    pri-on-incon-degr "echo o > /proc/sysrq-trigger ; halt -f";
    pri-lost-after-sb "echo o > /proc/sysrq-trigger ; halt -f";
    local-io-error "echo o > /proc/sysrq-trigger ; halt -f";
    outdate-peer "/usr/lib/heartbeat/drbd-peer-outdater -t 5";
    }
    startup {
    }
    disk {
    on-io-error detach;
    }
    net {
    after-sb-0pri disconnect;
    after-sb-1pri disconnect;
    after-sb-2pri disconnect;
    rr-conflict disconnect;
    }
    syncer {
    rate 10M;
    al-extents 257;
    }
    on baber {
    device /dev/drbd0;
    disk /dev/vgdrbd/lvdrbd;
    address 192.168.1.50:7788;
    meta-disk internal;
    }
    on farrukh {
    device /dev/drbd0;
    disk /dev/vgdrbd/lvdrbd;
    address 192.168.1.60:7788;
    meta-disk internal;
    }
    }
    wq!
    [root@baber ~]#
    [root@baber ~]# scp /etc/drbd.conf farrukh:/etc/drbd.conf

    We need to load the DRBD kernel module on both nodes:

    [root@baber ~]# modprobe drbd
    [root@baber ~]# echo "modprobe drbd" >> /etc/rc.local
    
    [root@farrukh ~]# modprobe drbd
    [root@farrukh ~]# echo "modprobe drbd" >> /etc/rc.local

    ##### run this on both servers ######

    [root@baber ~]#drbdadm create-md r0
    [root@farrukh ~]#drbdadm create-md r0
    [root@baber ~]#drbdadm attach r0
    [root@farrukh ~]#drbdadm attach r0
    [root@baber ~]#drbdadm syncer r0
    [root@farrukh ~]#drbdadm syncer r0
    [root@baber ~]#drbdadm connect r0
    [root@farrukh ~]#drbdadm connect r0

    On Primary Node only

    [root@baber ~]#drbdadm -- --overwrite-data-of-peer primary r0

    On both Nodes:

    [root@baber ~]#drbdadm up all
    [root@farrukh ~]#drbdadm up all

    On Primary Node only

    [root@baber ~]#drbdadm primary all #### on node one only ####
    [root@baber ~]#watch cat /proc/drbd

    only on baber ########## Primary Node ########

    [root@baber ~]#mkfs.ext3 /dev/drbd0
    [root@baber ~]#mkdir /data/
    [root@baber ~]#mount /dev/drbd0 /data/
    [root@baber ~]#
    [root@baber ~]# df -hk
    Filesystem 1K-blocks Used Available Use% Mounted on
    /dev/mapper/VolGroup00-LogVol00
    5967432 2625468 3033948 47% /
    /dev/sda1 101086 12074 83793 13% /boot
    tmpfs 257720 0 257720 0% /dev/shm
    /dev/drbd0 4031516 107600 3719128 3% /data
    [root@baber ~]#

    On farrukh ####### Secondary Node #######

    [root@farrukh ~]#mkdir /data

    Heartbeat Setup:

    Install the heartbeat packages using yum.

    Note: an Internet connection is required, or configure a local yum repository that includes the extras channel.

    [root@baber ~]#yum install -y heartbeat heartbeat-pils heartbeat-stonith heartbeat-devel
    
    ## Create this file and copy this text ##
    [root@baber ~]#vim /etc/ha.d/ha.cf 
    logfacility local0
    keepalive 2
    #deadtime 30 # USE THIS!!!
    deadtime 10
    # heartbeat link (a dedicated interface is the better option)
    bcast eth0 ####### We can use eth1 instead of eth0, it's the better option ########
    #serial /dev/ttyS0
    baud 19200
    auto_failback on ################## fail back to the primary node when it recovers #################
    node baber
    node farrukh
    save & quit.
    Server Baber  (Primary Node)
    [root@baber ~]#vi /etc/ha.d/haresources
    baber IPaddr::192.168.1.190/24/eth0 drbddisk::r0 Filesystem::/dev/drbd0::/data::ext3 squid
    wq!
    Server farrukh: Secondary Node (the haresources file must be identical on both nodes, so it also names baber as the preferred node)
    [root@farrukh ~]#vi /etc/ha.d/haresources
    baber IPaddr::192.168.1.190/24/eth0 drbddisk::r0 Filesystem::/dev/drbd0::/data::ext3 squid
    wq!
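    For clarity, the single haresources line chains the resources that move together on failover; heartbeat starts them left to right and stops them right to left:

```shell
# baber                                preferred (primary) node name
# IPaddr::192.168.1.190/24/eth0        bring up the cluster VIP on eth0
# drbddisk::r0                         promote DRBD resource r0 to Primary
# Filesystem::/dev/drbd0::/data::ext3  mount the replicated device on /data
# squid                                finally start the squid init script
```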
    
    On Both Servers:
    [root@baber ~]#vi /etc/ha.d/authkeys
    auth 3
    3 md5 redhat ######### Use a long random string as the password #########
    On both nodes:
    [root@baber ~]#chmod 600 /etc/ha.d/authkeys
    [root@baber ~]#scp /etc/ha.d/authkeys farrukh:/etc/ha.d/authkeys
    [root@baber ~]#chkconfig --level 235 heartbeat on

    Note: if you have a problem mounting /dev/drbd0 on /data, run these commands to check the status; if drbddisk is stopped, start it:

    [root@baber ~]#/etc/ha.d/resource.d/drbddisk r0 status
    [root@baber ~]#/etc/ha.d/resource.d/drbddisk r0 start
    [root@baber ~]#/etc/ha.d/resource.d/drbddisk r0 restart
    
    [root@baber data]# service drbd status
    drbd driver loaded OK; device status:
    version: 8.0.13 (api:86/proto:86)
    GIT-hash: ee3ad77563d2e87171a3da17cc002ddfd1677dbe build by buildsvn@c5-i386-build, 2008-10-02 13:31:44
    m:res cs st ds p mounted fstype
    0:r0 Connected Primary/Secondary UpToDate/UpToDate C /data ext3
    We can see that the servers are in Primary/Secondary state and working well, with the /data directory mounted.

    To force a takeover of resources from one node to the other:

    [root@baber ~]#/usr/lib/heartbeat/hb_takeover

    Transparent Squid configuration on both servers:

    [root@baber ~]#vim /etc/sysctl.conf
    # Controls IP packet forwarding
    net.ipv4.ip_forward = 1 #### If it is 0 make it 1 for packet forwarding ####
    wq!
    
    [root@baber ~]#scp /etc/sysctl.conf farrukh:/etc/sysctl.conf
    [root@baber ~]#sysctl -p
    
    [root@farrukh ~]# sysctl -p
    [root@baber ~]#yum install -y squid
    [root@baber ~]#vim /etc/squid/squid.conf
    Search for these options using / and edit as required:
    http_port 3128 transparent
    acl our_networks src 192.168.1.0/24 192.168.2.0/24
    http_access allow our_networks
    cache_dir ufs /data/squid 1000 32 256 ##### cache directories must be at /data/squid #####
    visible_hostname squid.ha-cluster.com
    wq!
    [root@baber ~]# cd /data
    [root@baber ~]# mkdir squid
    [root@baber ~]# chown squid:squid squid

    Note: This is required only on the primary server, i.e. baber.
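    Before squid can serve from the new cache_dir, the cache directory structure normally has to be initialized once (on the primary, while /data is mounted there); the usual command is:

```shell
# Build the swap directories defined by cache_dir in squid.conf
squid -z
```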

    [root@baber ~]#scp /etc/squid/squid.conf farrukh:/etc/squid/squid.conf
    [root@baber ~]#iptables -F
    [root@baber ~]#iptables -t nat -A PREROUTING -p tcp -i eth0 --dport 80 -j REDIRECT --to-port 3128
    [root@baber ~]#iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
    [root@baber ~]#service iptables save
    [root@farrukh ~]#iptables -F
    [root@farrukh ~]#iptables -t nat -A PREROUTING -p tcp -i eth0 --dport 80 -j REDIRECT --to-port 3128
    [root@farrukh ~]#iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
    [root@farrukh ~]#service iptables save

    On both servers

    [root@baber ~]#/etc/init.d/heartbeat start
    [root@baber ~]#ifconfig
    [root@baber ~]#tail -f /var/log/squid/access.log
    [root@farrukh ~]#/etc/init.d/heartbeat start
    [root@farrukh ~]#ifconfig

    Note: We must use the VIP/service IP defined in heartbeat (i.e. 192.168.1.190) as the default gateway on client machines for accessing the Internet transparently.
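    On a LAN client, pointing the default route at the cluster VIP might look like this (a sketch using the net-tools route command of CentOS-era clients):

```shell
# Use the heartbeat VIP as the default gateway on a client machine
route add default gw 192.168.1.190
```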

    ALHAMDULILLAH We have Done it………….

    Published by:
  • High Availability Linux Cluster for SQUID Proxy using DRBD and HeartBeat on CentOS /RHEL / Fedora by Babar Zahoor

    High Availability Linux Cluster Setup using DRBD and Heart Beat on CentOS 5.x 6.x /RHEL 5.x 6.X/ Fedora

    #### This How To belongs to My video on High Availability Squid Cache using DRBD and HeartBeat ####

    OS CentOS 5.3 on both machines.redhat-logo1

    We will setup for Transparent squid on High Availability Cluster.

    Packages are available on CentOS extras repository.

    Our Scenario

    We have two servers

    baber 192.168.1.50 Primary server

    farrukh 192.168.1.60 Secondry server

    Setup for ip to name resolve ## we don’t have DNS we need this step ##

    Basic Setup Configuration.

    [root@baber ~]# vim /etc/hosts
     192.168.1.50 baber
     192.168.1.60 farrukh
     save & exit
     [root@baber ~]# ping baber
     PING baber (192.168.1.50) 56(84) bytes of data.
     64 bytes from baber (192.168.1.50): icmp_seq=1 ttl=64 time=4.15 ms
     64 bytes from baber (192.168.1.50): icmp_seq=2 ttl=64 time=0.126 ms
     64 bytes from baber (192.168.1.50): icmp_seq=3 ttl=64 time=1.88 ms
     [1]+ Stopped ping baber
     [root@baber ~]# ping farrukh
     PING farrukh (192.168.1.60) 56(84) bytes of data.
     64 bytes from farrukh (192.168.1.60): icmp_seq=1 ttl=64 time=1.32 ms
     64 bytes from farrukh (192.168.1.60): icmp_seq=2 ttl=64 time=0.523 ms
     64 bytes from farrukh (192.168.1.60): icmp_seq=3 ttl=64 time=1.79 ms
     [2]+ Stopped ping farrukh
    [root@baber ~]#
    [root@baber ~]# scp /etc/hosts 192.168.1.60:/etc/hosts

    On Node1 servers:

    Please before going to next step, stop unwanted services on both servers

    [root@baber ~]# /etc/init/sendmail stop
     [root@baber ~]# chkconfig --level 235 sendmail off
     [root@baber ~]# iptables -F
     [root@baber ~]#service iptables save
     [root@farrukh ~]# /etc/init/sendmail stop
     [root@farrukh ~]# chkconfig --level 235 sendmail off
     [root@farrukh ~]# iptables -F
     [root@farrukh ~]#service iptables save
     [root@baber ~]# rpm -qa | grep ntp
     ntp-4.2.2p1-9.el5.centos.1
     [root@baber ~]#

    Then we need to open ntp server configuration file.

    [root@baber ~]#vi /etc/ntp.conf
     # Permit time synchronization with our time source, but do not
     # permit the source to query or modify the service on this system.
     restrict default kod nomodify notrap nopeer noquery
     # Permit all access over the loopback interface. This could
     # be tightened as well, but to do so would effect some of
     # the administrative functions.
     restrict 127.0.0.1
     # Hosts on local network are less restricted.
     #restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
     # Use public servers from the pool.ntp.org project.
     # Please consider joining the pool (http://www.pool.ntp.org/join.html).
     ### Edited By Babar Zahoor Jun 16 2009 ###
     #server 0.centos.pool.ntp.org
     #server 1.centos.pool.ntp.org
     #server 2.centos.pool.ntp.org
     #broadcast 192.168.1.255 key 42 # broadcast server
     #broadcastclient # broadcast client
     #broadcast 224.0.1.1 key 42 # multicast server
     #multicastclient 224.0.1.1 # multicast client
     #manycastserver 239.255.254.254 # manycast server
     #manycastclient 239.255.254.254 key 42 # manycast client
     # Undisciplined Local Clock. This is a fake driver intended for backup
     # and when no outside source of synchronized time is available.
     ########## for server use this and on clients comment this and use server serverIP ##################
     server 127.127.1.0 # local clock
     #fudge 127.127.1.0 stratum 10
     # Drift file. Put this in a directory which the daemon can write to.
     # No symbolic links allowed, either, since the daemon updates the file
     # by creating a temporary in the same directory and then rename()'ing
     # it to the file.
     # driftfile /var/lib/ntp/drift
     # Key file containing the keys and key identifiers used when operating
     # with symmetric key cryptography.
     # Specify the key identifiers which are trusted.
     # trustedkey 4 8 42
     # Specify the key identifier to use with the ntpdc utility.
     # requestkey 8
     # Specify the key identifier to use with the ntpq utility.
     #controlkey 8
     keys /etc/ntp/keys
     save quit.
    
     [root@baber ~]#
     [root@baber ~]# /etc/init.d/ntpd start
     [root@baber ~]# chkconfig --level 235 ntpd on
     [root@farrukh ~]# vim ntp.conf
     # Permit time synchronization with our time source, but do not
     # permit the source to query or modify the service on this system.
     restrict default kod nomodify notrap nopeer noquery
     # Permit all access over the loopback interface. This could
     # be tightened as well, but to do so would effect some of
     # the administrative functions.
     #restrict 127.0.0.1
     #estrict -6 ::1
     # Hosts on local network are less restricted.
     #restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
     # Use public servers from the pool.ntp.org project.
     # Please consider joining the pool (http://www.pool.ntp.org/join.html).
     server 192.168.1.50 ### add this line on second server ###
     #server 0.centos.pool.ntp.org
     #server 1.centos.pool.ntp.org
     #server 2.centos.pool.ntp.org
     #broadcast 192.168.1.255 key 42 # broadcast server
     #broadcastclient # broadcast client
     #broadcast 224.0.1.1 key 42 # multicast server
     #multicastclient 224.0.1.1 # multicast client
     #manycastserver 239.255.254.254 # manycast server
     #manycastclient 239.255.254.254 key 42 # manycast client
     # Undisciplined Local Clock. This is a fake driver intended for backup
     # and when no outside source of synchronized time is available.
     #server 127.127.1.0 # local clock ##### #####
     #fudge 127.127.1.0 stratum 10
     # Drift file. Put this in a directory which the daemon can write to.
     # No symbolic links allowed, either, since the daemon updates the file
     # by creating a temporary in the same directory and then rename()'ing
     # it to the file.
     driftfile /var/lib/ntp/drift
     # Key file containing the keys and key identifiers used when operating
     # with symmetric key cryptography.
     keys /etc/ntp/keys
     # Specify the key identifiers which are trusted.
     #trustedkey 4 8 42
     # Specify the key identifier to use with the ntpdc utility.
     #requestkey 8
     # Specify the key identifier to use with the ntpq utility.
     #controlkey 8
     save & exit
     [root@farrukh ~]# /etc/init.d/ntpd start
     [root@farrukh ~]# chkconfig --level 235 ntpd on
     [root@farrukh ~]# ntpdate -u 192.168.1.50
     [root@farrukh ~]# watch ntpq -p -n
     [root@baber ~]# watch ntpq -p -n

    PARTITION SETUP On Both Servers.

    Partition setup on both server identical same with fdisk here we have 3GB disks on both servers, here we will setup partition for HA Cluster Servers. We need to create LVM partitions on both machines, we will explain one server named as farrukh.

    [root@baber ~]# fdisk -l
    [root@baber ~]# fdisk /dev/sdb
    [root@baber ~]# fdisk /dev/sd
    sda sda1 sda2 sdb sdb1
    
    
    [root@farrukh ~]# fdisk /dev/sdb
     Command (m for help): m
     Command action
     a toggle a bootable flag
     b edit bsd disklabel
     c toggle the dos compatibility flag
     d delete a partition
     l list known partition types
     m print this menu
     n add a new partition
     o create a new empty DOS partition table
     p print the partition table
     q quit without saving changes
     s create a new empty Sun disklabel
     t change a partition's system id
     u change display/entry units
     v verify the partition table
     w write table to disk and exit
     x extra functionality (experts only)
     Command (m for help): p
     Disk /dev/sdb: 4294 MB, 4294967296 bytes
     255 heads, 63 sectors/track, 522 cylinders
     Units = cylinders of 16065 * 512 = 8225280 bytes
     Device Boot Start End Blocks Id System
     /dev/sdb1 1 522 4192933+ 8e Linux LVM
     Command (m for help): d
     Selected partition 1
     Command (m for help): n
     Command action
     e extended
     p primary partition (1-4)
     p
     Partition number (1-4): 1
     First cylinder (1-522, default 1):
     Using default value 1
     Last cylinder or +size or +sizeM or +sizeK (1-522, default 522): +4000M
     Command (m for help): p
     Disk /dev/sdb: 4294 MB, 4294967296 bytes
     255 heads, 63 sectors/track, 522 cylinders
     Units = cylinders of 16065 * 512 = 8225280 bytes
     Device Boot Start End Blocks Id System
     /dev/sdb1 1 487 3911796 83 Linux
     Command (m for help): t
     Selected partition 1
     Hex code (type L to list codes): 8e
     Changed system type of partition 1 to 8e (Linux LVM)
     Command (m for help): p
     Disk /dev/sdb: 4294 MB, 4294967296 bytes
     255 heads, 63 sectors/track, 522 cylinders
     Units = cylinders of 16065 * 512 = 8225280 bytes
     Device Boot Start End Blocks Id System
     /dev/sdb1 1 487 3911796 8e Linux LVM
     Command (m for help):
     Command (m for help): w
     [root@baber ~]# partprobe

    New Create Physical Volume for LVM this is second step for LVM partition

    [root@farrukh ~]# pvcreat /dev/sdb1
     Create Volume Group with this command
     [root@farrukh ~]# vgcreate vgdrbd /dev/sdb1
     Create Logical volume partition
     [root@farrukh ~]# lvcreate -n lvdrbd /dev/mapper/vgdrbd -L +4000M

    Note: Create LVM on Both servers identical same ……………….

    Note:Please also add these three values in sysctl.conf

    [root@baber ~]#vi /etc/sysctl.conf
     net.ipv4.conf.eth0.arp_ignore = 1
     net.ipv4.conf.all.arp_announce = 2
     net.ipv4.conf.eth0.arp_announce = 2
     save & quit
     [root@baber ~]# sysctl -p
     net.ipv4.ip_forward = 0
     net.ipv4.conf.default.rp_filter = 1
     net.ipv4.conf.eth0.arp_ignore = 1
     net.ipv4.conf.all.arp_announce = 2
     net.ipv4.conf.eth0.arp_announce = 2
     net.ipv4.conf.default.accept_source_route = 0
     kernel.sysrq = 0
     kernel.core_uses_pid = 1
     net.ipv4.tcp_syncookies = 1
     kernel.msgmnb = 65536
     kernel.msgmax = 65536
     kernel.shmmax = 4294967295
     kernel.shmall = 268435456
     [root@baber ~]#
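To double-check that the three ARP tunables actually took effect, read them back with `sysctl -n` (which prints just the value). A small sketch; it falls back to "unavailable" when a key does not exist, e.g. when eth0 is absent:

```shell
# Read back each ARP tunable set above; prints one name=value per line.
report=""
for key in net.ipv4.conf.eth0.arp_ignore \
           net.ipv4.conf.all.arp_announce \
           net.ipv4.conf.eth0.arp_announce; do
  val=$(sysctl -n "$key" 2>/dev/null || echo unavailable)
  report="$report$key=$val
"
done
printf '%s' "$report"
```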

    DRBD Setup

    Now install drbd82 & kmod-drbd82 (or the latest available package RPMs) using yum on both servers:

    [root@baber ~]#yum install -y drbd82 kmod-drbd82

    Now open /etc/drbd.conf in a text editor (I am using vim here):

    [root@baber ~]#vim /etc/drbd.conf
     global {
     usage-count yes;
     }
     common {
     syncer { rate 10M; }
     }
     resource r0 {
     protocol C;
     handlers {
     pri-on-incon-degr "echo o > /proc/sysrq-trigger ; halt -f";
     pri-lost-after-sb "echo o > /proc/sysrq-trigger ; halt -f";
     local-io-error "echo o > /proc/sysrq-trigger ; halt -f";
     outdate-peer "/usr/lib/heartbeat/drbd-peer-outdater -t 5";
     }
     startup {
     }
     disk {
     on-io-error detach;
     }
     net {
     after-sb-0pri disconnect;
     after-sb-1pri disconnect;
     after-sb-2pri disconnect;
     rr-conflict disconnect;
     }
     syncer {
     rate 10M;
     al-extents 257;
     }
     #### adjust the section below to match your HA server setup ####
     on baber {
     device /dev/drbd0;
     disk /dev/vgdrbd/lvdrbd;
     address 192.168.1.50:7788;
     meta-disk internal;
     }
     on farrukh {
     device /dev/drbd0;
     disk /dev/vgdrbd/lvdrbd;
     address 192.168.1.60:7788;
     meta-disk internal;
     }
     }
     Save & exit.
     [root@baber ~]#
     [root@baber ~]# scp /etc/drbd.conf farrukh:/etc/drbd.conf
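DRBD expects /etc/drbd.conf to be byte-identical on both nodes, so it is worth verifying after the scp. A minimal sketch of the check; the /tmp copies below are placeholders standing in for the local file and the peer's copy (in practice, fetch the peer's file with scp first):

```shell
# Placeholder copies standing in for the local config and the peer's copy.
printf 'resource r0 { protocol C; }\n' > /tmp/drbd.conf.local
printf 'resource r0 { protocol C; }\n' > /tmp/drbd.conf.peer
if cmp -s /tmp/drbd.conf.local /tmp/drbd.conf.peer; then
  echo "configs identical"
else
  echo "configs differ" >&2
fi
```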
    
     Now load the DRBD kernel module on both nodes, and add it to rc.local so it loads at boot:

    [root@baber ~]# modprobe drbd
     [root@baber ~]# echo "modprobe drbd" >> /etc/rc.local
     [root@farrukh ~]# modprobe drbd
     [root@farrukh ~]# echo "modprobe drbd" >> /etc/rc.local
     ##### Please run these command on both servers ######
    [root@baber ~]#drbdadm create-md r0
    [root@farrukh ~]#drbdadm create-md r0
    [root@baber ~]#drbdadm attach r0
    [root@farrukh ~]#drbdadm attach r0
    [root@baber ~]#drbdadm syncer r0
    [root@farrukh ~]#drbdadm syncer r0
    [root@baber ~]#drbdadm connect r0
    [root@farrukh ~]#drbdadm connect r0
    
    Run the command below on the Primary node only:
    [root@baber ~]#drbdadm -- --overwrite-data-of-peer primary r0

    Now run the commands below on both nodes:

    [root@baber ~]#drbdadm up all
    [root@farrukh ~]#drbdadm up all
    
    Run the command below on the Primary node only:
    [root@baber ~]#drbdadm -- primary all
    [root@baber ~]#watch cat /proc/drbd
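While the initial sync runs, /proc/drbd shows the connection state (cs:) and the disk states (ds:). A small parsing sketch, fed a sample 8.0-style status line instead of the live file:

```shell
# Sample status line mimicking /proc/drbd during the initial sync;
# on a live node, read the real file instead of this variable.
line="0: cs:SyncSource st:Primary/Secondary ds:UpToDate/Inconsistent C r---"
cs=$(echo "$line" | sed -n 's|.*cs:\([A-Za-z]*\).*|\1|p')
ds=$(echo "$line" | sed -n 's|.*ds:\([A-Za-z/]*\).*|\1|p')
echo "connection=$cs disk=$ds"
```

When the sync finishes, ds: should read UpToDate/UpToDate on both nodes.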

    Run this command on the Primary node only:

    [root@baber ~]#mkfs.ext3 /dev/drbd0
    [root@baber ~]#mkdir /data/
    [root@baber ~]#mount /dev/drbd0 /data/
    [root@baber ~]#
    [root@baber ~]# df -hk
    Filesystem 1K-blocks Used Available Use% Mounted on
    /dev/mapper/VolGroup00-LogVol00
    5967432 2625468 3033948 47% /
    /dev/sda1 101086 12074 83793 13% /boot
    tmpfs 257720 0 257720 0% /dev/shm
    /dev/drbd0 4031516 107600 3719128 3% /data
    [root@baber ~]#
    
    
     Run this command on the Secondary node only; there is no need to mount the partition here.
    [root@farrukh ~]#mkdir /data

    Heartbeat Setup:

    Install the heartbeat packages using yum. An Internet connection is required, or you can configure a local yum repository (including the extras channel) that both nodes can reach.

     

    [root@baber ~]#yum install -y heartbeat heartbeat-pils heartbeat-stonith heartbeat-devel

    Now we will set up the heartbeat configuration file. If the file does not exist, create it and copy in the text below.

    [root@baber ~]#vim /etc/ha.d/ha.cf ## Create this file and copy this text ##
     logfacility local0
     keepalive 2
     #deadtime 30 # USE THIS!!!
     deadtime 10
     # we use two heartbeat links, eth2 and serial 0
     bcast eth0 ####### a dedicated link such as eth1 is a better option ########
     #serial /dev/ttyS0
     baud 19200
     auto_failback on ###### resources fail back to the preferred node when it returns ######
     node baber
     node farrukh
     save & quit.
     The haresources file must be identical on both nodes. Configure it first on the primary server "baber":
    [root@baber ~]#vi /etc/ha.d/haresources
     baber IPaddr::192.168.1.190/24/eth0 drbddisk::r0 Filesystem::/dev/drbd0::/data::ext3 squid
     Now copy the exact same line (with "baber" as the preferred node) into the haresources file on the secondary server farrukh:
    [root@farrukh ~]#vi /etc/ha.d/haresources
    baber IPaddr::192.168.1.190/24/eth0 drbddisk::r0 Filesystem::/dev/drbd0::/data::ext3 squid
     The configuration below authorizes the nodes to access each other's resources; set it on both servers:
    [root@baber ~]#vi /etc/ha.d/authkeys
     auth 3
     3 md5 redhat ######### use a long random string as the password #########
     Again on both Nodes
    [root@baber ~]#chmod 600 /etc/ha.d/authkeys
     [root@baber ~]#scp /etc/ha.d/authkeys farrukh:/etc/ha.d/authkeys
     [root@baber ~]#chkconfig --level 235 heartbeat on
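The "redhat" secret above is only a placeholder. A hedged sketch for generating a stronger authkeys secret using standard tools (dd, md5sum, awk); it prints a ready-to-paste authkeys stanza:

```shell
# Generate a random 32-hex-char secret and print an authkeys stanza.
secret=$(dd if=/dev/urandom bs=512 count=1 2>/dev/null | md5sum | awk '{print $1}')
printf 'auth 3\n3 md5 %s\n' "$secret"
```

Remember to copy the resulting file to both nodes and keep it mode 600, as shown above.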

    Note: If you have a problem mounting /dev/drbd0 on /data, run these commands to check the status; if the drbddisk resource is stopped, start it.

    [root@baber ~]#/etc/ha.d/resource.d/drbddisk r0 status
     [root@baber ~]#/etc/ha.d/resource.d/drbddisk r0 start
     [root@baber ~]#/etc/ha.d/resource.d/drbddisk r0 restart
    [root@baber data]# service drbd status
     drbd driver loaded OK; device status:
     version: 8.0.13 (api:86/proto:86)
     GIT-hash: ee3ad77563d2e87171a3da17cc002ddfd1677dbe build by buildsvn@c5-i386-build, 2008-10-02 13:31:44
     m:res cs st ds p mounted fstype
     0:r0 Connected Primary/Secondary UpToDate/UpToDate C /data ext3
    We can see that the servers are in the Primary/Secondary state and working well, with the /data directory mounted. To force a takeover from node1 to node2:
    [root@baber ~]#/usr/lib/heartbeat/hb_takeover
    Now configure the service on both servers; we are setting up Squid as a transparent proxy.
    [root@baber ~]#vim /etc/sysctl.conf
    # Controls IP packet forwarding
    net.ipv4.ip_forward = 1 #### If it is 0 make it 1 for packet forwarding ####
    save it
    Then
    [root@baber ~]#scp /etc/sysctl.conf farrukh:/etc/sysctl.conf
    [root@baber ~]#sysctl -p
    [root@farrukh ~]# sysctl -p
    [root@baber ~]#yum install -y squid
    [root@baber ~]#vim /etc/squid/squid.conf
    Search for these options (using / in vim) and edit them as required:
    http_port 3128 transparent
    acl our_networks src 192.168.1.0/24 192.168.2.0/24
    http_access allow our_networks
    cache_dir ufs /data/squid 1000 32 256 ##### cache directories must be at /data/squid #####
    visible_hostname squid.ha-cluster.com
    save & exit
    [root@baber ~]# cd /data
    [root@baber data]# mkdir squid
    [root@baber data]# chown squid:squid squid
    [root@baber data]# squid -z ##### initialize the cache directories defined in squid.conf #####
    Note: This setup is required only on the primary server, i.e. baber
    [root@baber ~]#scp /etc/squid/squid.conf farrukh:/etc/squid/squid.conf
    [root@baber ~]#iptables -F
    [root@baber ~]#iptables -t nat -A PREROUTING -p tcp -i eth0 --dport 80 -j REDIRECT --to-port 3128
    [root@baber ~]#iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
    [root@baber ~]#service iptables save
    [root@farrukh ~]#iptables -F
    [root@farrukh ~]#iptables -t nat -A PREROUTING -p tcp -i eth0 --dport 80 -j REDIRECT --to-port 3128
    [root@farrukh ~]#iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
    [root@farrukh ~]#service iptables save
    On both servers
    [root@baber ~]#/etc/init.d/heartbeat start
    [root@baber ~]#ifconfig
    [root@baber ~]#tail -f /var/log/squid/access.log
    [root@farrukh ~]#/etc/init.d/heartbeat start
    [root@farrukh ~]#ifconfig
    Note: Clients must use the VIP/service IP defined in heartbeat, i.e. 192.168.1.190, as their default gateway to access the Internet transparently.
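For a quick check after a failover, it helps to script against the VIP rather than hard-coding it twice. A sketch that extracts the VIP from a haresources-style entry; the entry string below mirrors this article's configuration, and a monitoring script would then ping the result:

```shell
# Parse the service IP out of a heartbeat haresources entry.
entry="baber IPaddr::192.168.1.190/24/eth0 drbddisk::r0 Filesystem::/dev/drbd0::/data::ext3 squid"
vip=$(echo "$entry" | sed -n 's|.*IPaddr::\([0-9.]*\)/.*|\1|p')
echo "VIP=$vip"
```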

    ALHAMDULILLAH, we have done it!

    Published by: