CommuniGate Pro
Version 6.3
 

Cluster Load Balancers

The CommuniGate Pro Cluster architecture allows a load balancer to direct any connection to any working Server, eliminating the need for complex and unreliable "high-level" load balancer solutions. Inexpensive Layer 4 switches can be used to handle the traffic.

Additionally, a CommuniGate Pro Dynamic Cluster can manage software load balancers, such as the Linux "ipvs" kernel module. The CommuniGate Pro Cluster collects information about the working Servers belonging to one or several "balancer groups", learns which Servers can be used as Load Balancers, selects one Load Balancer for each group, informs it about all active Servers in that group, and reassigns the Load Balancer duties to a different Server if the selected one goes down.



DSR (Direct Server Response) or DR (Direct Routing)

DSR/DR is the preferred load-balancing method for larger installations. When this method is used, each Server is configured to have the shared VIP (Virtual IP) addresses as its local IP addresses. This allows each Server to receive all packets directed to the VIP addresses and to send responses directly to the clients, using the VIP as the "source" address.
The Servers should not respond to ARP requests for these VIP addresses. Instead, the load balancer responds to these requests, so all incoming packets directed to the VIP addresses are delivered to the load balancer, which redirects them to the Servers. When redirecting these incoming packets, the load balancer sends them directly to the Server MAC address without changing the packet destination address, which remains the VIP address.

Note: Because MAC addresses are used to redirect incoming packets, the Load Balancer and all balanced Servers (usually - CommuniGate Pro Cluster frontends) must be connected to the same network segment; there should be no router between the Load Balancer and those Servers.

To use the DSR method, create an "alias" for the loopback network interface on each Frontend Server. The standard address for the loopback interface is 127.0.0.1; create an additional alias with the VIP address and the 255.255.255.255 network mask:

Solaris
ifconfig lo0:1 plumb
ifconfig lo0:1 VIP netmask 255.255.255.255 up
To make this configuration permanent, create the file /etc/hostname.lo0:1 with the VIP address in it.
FreeBSD
To change the configuration permanently, add the following line to the /etc/rc.conf file:
ifconfig_lo0_alias0="inet VIP netmask 255.255.255.255"
Linux
ifconfig lo:0 VIP netmask 255.255.255.255 up
or
ip address add VIP/32 dev lo
To make this configuration permanent, create the file /etc/sysconfig/network-scripts/ifcfg-lo:0:
DEVICE=lo:0
IPADDR=VIP
NETMASK=255.255.255.255
ONBOOT=yes

Make sure that the kernel is configured to avoid ARP advertising for this lo interface (so the VIP address is not linked to any Frontend Server in ARP tables). Depending on the Linux kernel version, the following lines should be added to the /etc/sysctl.conf file:

# ARP: reply only if the target IP address is
# a local address configured on the incoming interface
net.ipv4.conf.all.arp_ignore = 1
#
# When an arp request is received on eth0, only respond
# if that address is configured on eth0.
net.ipv4.conf.eth0.arp_ignore = 1
#
# Enable configuration of arp_announce option
net.ipv4.conf.all.arp_announce = 2
# When making an ARP request sent through eth0, always use an address
# that is configured on eth0 as the source address of the ARP request.
net.ipv4.conf.eth0.arp_announce = 2
#
# Repeat for eth1, eth2 (if exist)
#net.ipv4.conf.eth1.arp_ignore = 1
#net.ipv4.conf.eth1.arp_announce = 2
#net.ipv4.conf.eth2.arp_ignore = 1
#net.ipv4.conf.eth2.arp_announce = 2
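To activate these settings without a reboot, they can be applied with sysctl. This is a sketch: on distributions using systemd, a drop-in file under /etc/sysctl.d/ may be preferred to editing /etc/sysctl.conf directly.

```shell
# Load the settings from /etc/sysctl.conf into the running kernel:
sysctl -p /etc/sysctl.conf
# Verify the ARP-related values:
sysctl net.ipv4.conf.all.arp_ignore net.ipv4.conf.all.arp_announce
```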

If you plan to have many VIPs, or if you plan to use CommuniGate Pro Load Balancing with the Linux built-in ipvs load balancer, do not create /etc/sysconfig/network-scripts/ifcfg-lo:n files.
Create the /etc/sysconfig/vipaddrs configuration file instead, and put all VIP addresses into it, as individual addresses or subnetworks, one per line. For example:

# single addresses
72.20.112.45
72.20.112.46
# a subnetwork
72.20.112.48/29

Note: lines starting with the # symbol are ignored; they can be used as comments.

Note: subnetwork masks must be 24 bits or longer.

Create the following configuration scripts:
 


/etc/sysconfig/network-scripts/ifvip-utils
#!/bin/bash
#
# /etc/sysconfig/network-scripts/ifvip-utils
#
VIPADDRFILE="/etc/sysconfig/vipaddrs"

VIPLIST=""           # list of VIP masks: xxx.yy.zz.tt/mm where mm should be >= 24

for xVIP in `cat $VIPADDRFILE | grep -v '^#'`; do
  if [[ $xVIP != */* ]]; then xVIP=$xVIP/32; fi
  if (( ${xVIP##*/} < 24)); then
    echo "Incorrect mask: $xVIP" >&2 ; exit 1;
  fi
  VIPLIST="$VIPLIST$xVIP "
done

CURRENT=`ip address show dev lo | egrep '^ +inet [0-9]+\.[0-9]+\.[0-9]+\.[0-9]+\/32 .*$' | sed -r 's/ +inet ([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+).*/\1/' `

function contains() {
  local x;
  for x in $1; do
    if [[ $x == $2 ]]; then return 0; fi
  done
  return 1
}

 


/etc/sysconfig/network-scripts/ifup-lo
#!/bin/bash
#
# /etc/sysconfig/network-scripts/ifup-lo
#
/etc/sysconfig/network-scripts/ifup-eth ${1} ${2}
#
# Bring up all addresses listed in the VIPADDRFILE file, as lo aliases
#
. /etc/sysconfig/network-scripts/ifvip-utils

for xVIP in $VIPLIST; do
  xIP=${xVIP%/*}       # xx.xx.xx.yy/mm -> xx.xx.xx.yy
  xIP0=${xIP%.*}       # xx.xx.xx.yy/mm -> xx.xx.xx
  xIP1=${xIP##*.}      # xx.xx.xx.yy/mm -> yy
  xMask=$(( 2 ** (32 - ${xVIP##*/}) ))
  for (( index=0; index<$xMask; index++ )); do
    thisIP=$xIP0.$((xIP1 + index))
    if ! contains "$CURRENT" "$thisIP"; then
      ip address add $thisIP/32 dev lo
    fi
  done
done

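The address-expansion arithmetic in the loop above can be checked in isolation. The following stand-alone sketch (the expand_vip helper name is hypothetical) prints every address covered by a VIP block; because the masks must be 24 bits or longer, the expansion never has to cross the last octet:

```shell
# Expand a VIP block (e.g. 72.20.112.48/29) into its individual addresses.
expand_vip() {
  local xIP=${1%/*} index       # xx.xx.xx.yy/mm -> xx.xx.xx.yy
  local xIP0=${xIP%.*}          # -> xx.xx.xx
  local xIP1=${xIP##*.}         # -> yy
  local count=$(( 2 ** (32 - ${1##*/}) ))   # /29 -> 2^(32-29) = 8 addresses
  for (( index=0; index<count; index++ )); do
    echo "$xIP0.$((xIP1 + index))"
  done
}

expand_vip 72.20.112.48/29   # prints 72.20.112.48 through 72.20.112.55
```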
 


/etc/sysconfig/network-scripts/ifdown-lo
#!/bin/bash
#
# /etc/sysconfig/network-scripts/ifdown-lo
#
# Bring down all addresses listed in the VIPADDRFILE file
#
. /etc/sysconfig/network-scripts/ifvip-utils

for xVIP in $VIPLIST; do
  xIP=${xVIP%/*}       # xx.xx.xx.yy/mm -> xx.xx.xx.yy
  xIP0=${xIP%.*}       # xx.xx.xx.yy/mm -> xx.xx.xx
  xIP1=${xIP##*.}      # xx.xx.xx.yy/mm -> yy
  xMask=$(( 2 ** (32 - ${xVIP##*/}) ))
  for (( index=0; index<$xMask; index++ )); do
    thisIP=$xIP0.$((xIP1 + index))
    if contains "$CURRENT" "$thisIP"; then
      ip address delete $thisIP/32 dev lo
    fi
  done
done

/etc/sysconfig/network-scripts/ifdown-eth ${1} ${2}
Other OS
Consult your OS vendor's documentation.

Note: when a network "alias" is created, open the General Info page in the CommuniGate Pro WebAdmin Settings realm, and click the Refresh button to let the Server detect the newly added IP address.

The DSR method is transparent for all TCP-based services (including SIP over TCP/TLS); no additional CommuniGate Pro Server configuration is required: when a TCP connection is accepted on a local VIP address, outgoing packets for that connection always use the same VIP address as the source address.

To use the DSR method for SIP UDP, update the CommuniGate Pro frontend Server configuration:

  • use the WebAdmin Interface to open the Settings realm. Open the SIP receiving page in the Real-Time section
  • follow the UDP Listener link to open the Listener page
  • by default, the SIP UDP Listener has one socket: it listens on "all addresses", on port 5060.
  • change this socket configuration by changing the "all addresses" value to the VIP value (the VIP address should be present in the selection menu).
  • click the Update button
  • create an additional socket to receive incoming packets on port 5060, "all addresses", and click the Update button
Now that you have two sockets - the first one for VIP:5060, the second one for "all addresses":5060 - the frontend Server can use the first socket when it needs to send packets with the VIP source address.
Repeat this configuration change for all "balanced" Servers.

Pinging

Load Balancers usually send some requests to servers in their "balanced pools". A lack of response tells the Load Balancer to remove the server from the pool and to distribute incoming requests among the remaining servers in that pool.

With SIP Farming switched on, the Load Balancer's own requests can be relayed to other servers in the SIP Farm, and the responses will come from those servers. This may cause the Load Balancer to decide that the server it sent the request to is down, and to exclude that server from the pool.
To avoid this problem, use the following SIP request for Load Balancer "pinging":

OPTIONS sip:aaa.bbb.ccc.ddd:5060 SIP/2.0
Route: <sip:aaa.bbb.ccc.ddd:5060;lr>
other SIP packet fields
where aaa.bbb.ccc.ddd is the IP address of the CommuniGate Pro Server being tested.

These packets are processed by the aaa.bbb.ccc.ddd Server, which generates responses and sends them back to the Load Balancer (or other testing device).
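Such a probe can also be generated manually from a shell, e.g. for troubleshooting. The following sketch builds a minimal request of this form; the sip_ping_request helper name and all header values other than the Request-URI and Route header are illustrative placeholders, not required values:

```shell
# Build a minimal SIP OPTIONS "ping" request for the server at $1.
# The Via/From/To/Call-ID values below are placeholders for illustration.
sip_ping_request() {
  printf 'OPTIONS sip:%s:5060 SIP/2.0\r\n' "$1"
  printf 'Route: <sip:%s:5060;lr>\r\n' "$1"
  printf 'Via: SIP/2.0/UDP 64.173.55.176:5060;branch=z9hG4bK-ping1\r\n'
  printf 'From: <sip:ping@64.173.55.176>;tag=ping1\r\n'
  printf 'To: <sip:%s:5060>\r\n' "$1"
  printf 'Call-ID: ping-1@64.173.55.176\r\n'
  printf 'CSeq: 1 OPTIONS\r\nMax-Forwards: 70\r\nContent-Length: 0\r\n\r\n'
}

# Example: send the probe over UDP with netcat and show the response:
#   sip_ping_request 64.173.55.180 | nc -u -w 3 64.173.55.180 5060
```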


Sample Balancer Configurations

Sample configuration:

  • Router at 64.173.55.161 (netmask 255.255.255.224), DNS server at 64.173.55.167.
  • 4 frontend Servers (fe5, fe6, fe7, fe8) with "real" IP addresses 64.173.55.{180,181,182,183}
  • inter-Cluster network 192.168.10.xxx, with frontend "cluster" addresses 192.168.10.{5,6,7,8}
  • load balancer with 64.173.55.164 address (VIP address).
  • a loopback interface on each frontend Server has an alias configured for the 64.173.55.164 address.
The multi-IP no-NAT RTP method is used.

The CommuniGate Pro configuration (WebAdmin Settings realm):

  • the Network->LAN IP->Cluster-wide page: WAN IPv4 Address: 64.173.55.164
  • the Network->LAN IP->Server-wide page (on each frontend Server): WAN IPv4 Address: 64.173.55.{180,181,182,183}
  • the RealTime->SIP->Receiving->UDP Listener page (on each frontend Server): {port 5060, address:64.173.55.164} and {port: 5060, address: all addresses;}

A "no-NAT" configuration with "normal" load balancing for POP and IMAP, and "DSR" load balancing for SIP (UDP/TCP), SMTP, and HTTP User (port 8100).

The Load Balancer configuration:

Foundry ServerIron® (64.173.55.176 is its service address)
Startup configuration:
!
server predictor round-robin
!
server real fe5 64.173.55.180
 port pop3
 port pop3 keepalive
 port imap4
 port imap4 keepalive
 port 5060
 port 5060 keepalive
 port smtp
 port smtp keepalive
 port 8100
 port 8100 keepalive
!
server real fe6 64.173.55.181
 port pop3
 port pop3 keepalive
 port imap4
 port imap4 keepalive
 port 5060
 port 5060 keepalive
 port smtp
 port smtp keepalive
 port 8100
 port 8100 keepalive
!
server real fe7 64.173.55.182
 port pop3
 port pop3 keepalive
 port imap4
 port imap4 keepalive
 port 5060
 port 5060 keepalive
 port smtp
 port smtp keepalive
 port 8100
 port 8100 keepalive
!
server real fe8 64.173.55.183
 port pop3
 port pop3 keepalive
 port imap4
 port imap4 keepalive
 port 5060
 port 5060 keepalive
 port smtp
 port smtp keepalive
 port 8100
 port 8100 keepalive
!
!
server virtual vip1 64.173.55.164
 predictor round-robin
 port pop3
 port imap4
 port 5060
 port 5060 dsr
 port smtp
 port smtp dsr
 port 8100
 port 8100  dsr
 bind pop3  fe5 pop3  fe6 pop3  fe7 pop3  fe8 pop3
 bind imap4 fe5 imap4 fe6 imap4 fe7 imap4 fe8 imap4
 bind 5060  fe8 5060  fe7 5060  fe6 5060  fe5 5060
 bind smtp  fe8 smtp  fe7 smtp  fe6 smtp  fe5 smtp
 bind 8100  fe5 8100  fe6 8100  fe7 8100  fe8 8100
!
ip address 64.173.55.176 255.255.255.224
ip default-gateway 64.173.55.161
ip dns server-address 64.173.55.167
ip mu act
end
Note: you should NOT use the port 5060 sip-switch, port sip sip-proxy-server, or other "smart" (application-level) Load Balancer features.
Alteon/Nortel AD3® (64.173.55.176 is its service address, hardware port 1 is used for up-link, ports 5-8 connect frontend Servers)
script start "Alteon AD3" 4  /**** DO NOT EDIT THIS LINE!
/* Configuration dump taken 21:06:57 Mon Apr  9, 2007
/* Version 10.0.33.4,  Base MAC address 00:60:cf:41:f5:20
/c/sys
        tnet ena
        smtp "mail.communigatepro.ru"
        mnet 64.173.55.160
        mmask 255.255.255.224
/c/sys/user
        admpw "ffe90d3859680828b6a4e6f39ad8abdace262413d5fe6d181d2d199b1aac22a6"
/c/ip/if 1
        ena
        addr 64.173.55.176
        mask 255.255.255.224
        broad 64.173.55.191
/c/ip/gw 1
        ena
        addr 64.173.55.161
/c/ip/dns
        prima 64.173.55.167
/c/sys/ntp
        on
        dlight ena
        server 64.173.55.167
/c/slb
        on
/c/slb/real 5
        ena
        rip 64.173.55.180
        addport 110
        addport 143
        addport 5060
        addport 25
        addport 8100
        submac ena
/c/slb/real 6
        ena
        rip 64.173.55.181
        addport 110
        addport 143
        addport 5060
        addport 25
        addport 8100
        submac ena
/c/slb/real 7
        ena
        rip 64.173.55.182
        addport 110
        addport 143
        addport 5060
        addport 25
        addport 8100
        submac ena
/c/slb/real 8
        ena
        rip 64.173.55.183
        addport 110
        addport 143
        addport 5060
        addport 25
        addport 8100
        submac ena
/c/slb/group 1
        add 5
        add 6
        add 7
        add 8
        name "all-services"
/c/slb/port 1
        client ena
/c/slb/port 5
        server ena
/c/slb/port 6
        server ena
/c/slb/port 7
        server ena
/c/slb/port 8
        server ena
/c/slb/virt 1
        ena
        vip 64.173.55.164
/c/slb/virt 1/service pop3
        group 1
/c/slb/virt 1/service imap4
        group 1
/c/slb/virt 1/service 5060
        group 1
        udp enabled
        udp stateless
        nonat ena
/c/slb/virt 1/service smtp
        group 1
        nonat ena
/c/slb/virt 1/service 8100
        group 1
        nonat ena
/
script end  /**** DO NOT EDIT THIS LINE!
F5 Big-IP® (64.173.55.176 is its service address)
Use the nPath Routing feature for SIP UDP/TCP traffic. nPath Routing is the F5 Networks, Inc. term for the Direct Server Response method.
Because the F5 BIG-IP is not a switch, you must use the DSR (nPath Routing) method for all services.
bigip_base.conf:
vlan external {
   tag 4093
   interfaces
      1.1
      1.2
}
stp instance 0 {
   vlans external
   interfaces
      1.1
         external path cost 20K
         internal path cost 20K
      1.2
         external path cost 20K
         internal path cost 20K
}
self allow {
   default
      udp snmp
      proto ospf
      tcp https
      udp domain
      tcp domain
      tcp ssh
}
self 64.173.55.176 {
   netmask 255.255.255.224
   vlan external
   allow all
}

bigip.conf:
partition Common {
   description "Repository for system objects and shared objects."
}
route default inet {
   gateway 64.173.55.161
}
monitor MySMTP {
   defaults from smtp
   dest *:smtp
   debug "no"
}
profile fastL4 CGS_fastL4 {
   defaults from fastL4
   idle timeout 60
   tcp handshake timeout 15
   tcp close timeout 60
   loose initiation disable
   loose close enable
   software syncookie disable
}
pool Frontends {
   monitor all MySMTP and gateway_icmp
   members
      64.173.55.180:any
      64.173.55.181:any
      64.173.55.182:any
      64.173.55.183:any
}
node * monitor MySMTP

bigip_local.conf:
virtual address 64.173.55.164 {
   floating disable
   unit 0
}
virtual External {
   translate address disable
   pool Frontends
   destination 64.173.55.164:any
   profiles CGS_fastL4
}

Outgoing TCP Connections

When VIP addresses are assigned to CommuniGate Pro Domains, you may want to configure the CommuniGate Pro modules to initiate outgoing TCP connections using these VIP addresses as source IP addresses. If you do so, the response TCP packets will be directed to the Load Balancer, which should be configured to direct them to the proper Cluster Member - the CommuniGate Pro Server that initiated the TCP connection.

For each Cluster Member that can initiate TCP connections (usually the frontend servers), select a port range for outgoing connections. These ranges should not intersect. For example, select the port range 33000-33999 for the first Cluster Member, 34000-34999 for the second Cluster Member, etc.
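The numbering scheme above can be expressed as a small helper. This is a sketch: the member_port_range name, the 33000 base, and the 1000-port block size are assumptions matching the example; adjust them for your own allocation.

```shell
# Print the outgoing TCP port range for Cluster Member number $1
# (member 1 -> 33000-33999, member 2 -> 34000-34999, and so on),
# so the ranges of different members never intersect.
member_port_range() {
  local low=$(( 33000 + ($1 - 1) * 1000 ))
  echo "${low}-$(( low + 999 ))"
}

member_port_range 3   # prints 35000-35999
```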

Make sure that the server OS is configured so that the selected port range is outside of the OS "ephemeral port" range. For example, the following command can be used to check the Linux OS "ephemeral port" range:

[prompt]# cat /proc/sys/net/ipv4/ip_local_port_range
32768 61000
[prompt]#
and the following command can be used to change the Linux OS "ephemeral port" range:
[prompt]# echo "50000 61000" >/proc/sys/net/ipv4/ip_local_port_range
[prompt]# cat /proc/sys/net/ipv4/ip_local_port_range
50000 61000
[prompt]#
To make these changes permanent, add the following line to the Linux /etc/sysctl.conf file:
net.ipv4.ip_local_port_range = 50000 61000

For each of these Cluster members, open the Network settings in the WebAdmin Settings realm, and specify the selected TCP port range. Disable the Use for Media Proxy only option to make the CommuniGate Pro Server software use the selected port range for all outgoing TCP connections with a predefined source address.

Configure the Load Balancer: all packets coming to VIP address(es) and to any port in the selected port range should be directed to the corresponding Cluster Member.


Software Load Balancer

The CommuniGate Pro Dynamic Cluster can be used to control software load balancers (such as Linux IPVS), running on the same systems as the Cluster members.

Select the cluster members to distribute the incoming traffic to. In a frontend-backend configuration, you would usually use all or some of the frontend servers for that.

Make sure that all selected cluster members have the VIP addresses configured as "loopback aliases" (see above).
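A quick sanity check can confirm this on each member. The following sketch (the check_vips helper name is hypothetical) compares the single-address entries of a vipaddrs-style file against a saved "ip address show dev lo" dump; subnetwork entries are skipped for brevity:

```shell
# Report single-address VIPs from a vipaddrs-style file ($1) that are
# missing from a captured "ip address show dev lo" dump ($2).
check_vips() {
  local vip
  while read -r vip; do
    case $vip in \#*|'') continue ;; */*) continue ;; esac  # skip comments, blanks, subnets
    if grep -qF " inet ${vip}/32 " "$2"; then
      echo "OK $vip"
    else
      echo "MISSING $vip"
    fi
  done < "$1"
}

# Typical use on a frontend:
#   ip address show dev lo > /tmp/lo.dump
#   check_vips /etc/sysconfig/vipaddrs /tmp/lo.dump
```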

Use the WebAdmin Interface to open the Cluster page in the Settings realm and select the Load Balancer group A for all selected servers:

Load Balancer Group: Balancer Weight:
Balancer Weight
Use this setting to specify the relative weight of this server in the Load Balancer Group. The higher this value, the larger the share of incoming TCP connections and UDP packets directed to this server.

All or some of the selected servers should be equipped with a software load balancer, and they should have an "External Load Balancer" Helper application configured. This application should implement the Load Balancer Helper protocol.

External Load Balancer
Log Level: Program Path:
Time-out: Auto-Restart:

As soon as the first Load Balancer Helper application starts on some Cluster Member, the Cluster Controller activates that Helper, making it direct all incoming traffic to its Cluster Member and distribute that traffic to all active Cluster Members in its Load Balancer Group.

If the Cluster Member running the active Load Balancer fails or is switched into the "non-ready" state, the Cluster Controller activates some other Load Balancer member in that group (if it can find one).

Linux IPVS

The CommuniGate Pro Linux package includes the Services/IPVSHelper.sh shell application that can be used to control the IPVS software load balancer.

The application expects that the VIP addresses are stored in the /etc/sysconfig/vipaddrs file, and that the local interface (lo) aliases for these addresses have been created (see above).

Specify $Services/IPVSHelper.sh with the desired parameters as the External Load Balancer "Program Path", and start it by selecting the Helper checkbox.
The following parameters are supported:

-p number
persistence: all connections from the same IP address will be directed to the same Cluster Member if they arrive within number seconds of the existing connection. Specify 0 (zero) to switch persistence off. The default value is 15 seconds.
-i interface
the Ethernet interface used to receive packets addressed to the VIP address(es). The default value is eth0.
-s number
the "syncID" value used to synchronize connection tables on the active load balancer and other Cluster Members that can become load balancers. The default value is 0.
-t number
the time-out (in seconds) for reading a command sent by the CommuniGate Pro Server. The default value is 15.
-f filePath
the file system path for the file containing the list of the VIP addresses. The default value is /etc/sysconfig/vipaddrs
-r number
the relative weight of the active load balancer in the Load Balancer group. All other group members have the weight of 100. The default value is 100.
-m
if this parameter is specified, the Helper application does not execute the actual shell commands; instead it copies the commands it would execute to its standard output, so they are recorded in the CommuniGate Pro System Log.
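For example, a "Program Path" value for a site that wants 30-second persistence on interface eth1 and a doubled weight for the active balancer host might look like this. This is a sketch using only the parameters listed above; the values are assumptions to be adjusted for your installation.

```shell
$Services/IPVSHelper.sh -p 30 -i eth1 -s 1 -r 200
```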

Note: Linux kernel 3.5.3-1 or later is recommended. When an earlier version is used, all TCP connections made through the active Load Balancer are dropped when a different server becomes the active Load Balancer.

Note: If a Cluster Member has the External Balancer Helper application switched on and then it is switched off, some active connections may be broken. If you do not plan to switch the Helper application back on, restart the ipvsadm service or switch it off completely.


CommuniGate Pro Guide. Copyright © 2024, AO SBK