HAProxy With Keepalived For Fail Over ~ Tue, 14 Oct 2014 02:19:15 +0000
There are plenty of articles on the web that detail setting up HAProxy and Keepalived to create a load balancer with failover support. So why write another one? There are a couple of points that I didn't find clearly addressed in my research:
- How do the load balanced IP addresses get managed?
- What ports need to be open in the load balancers' firewalls?
Instead of simply answering those questions and making you learn the other details elsewhere, I will answer them in a full write-up of a simple scenario:
- We have services foo.example.com, bar.example.com, and baz.example.com
- foo.example.com has a "public" IP 10.0.0.1 and is provided by private IPs 192.168.1.5 and 192.168.1.6
- bar.example.com = 10.0.0.2 provided by 192.168.2.5 and 192.168.2.6
- baz.example.com = 10.0.0.3 provided by 192.168.3.5 and 192.168.3.6
We're going to set up one instance of HAProxy to load balance all three services in Virtual Machine 1 (VM1), and a second instance in VM2 to provide a fail over for VM1 (either could just as well be real hardware).
Note to Red Hat users: I have a repository available that makes building HAProxy and Keepalived RPMs trivial. You can reach it at https://github.com/jsumners/failover-lb.
HAProxy
I'm not going to spend much time on the HAProxy configuration. There are many, many well-documented options for HAProxy, and your actual scenario may extend beyond this simple one. It suffices to say that the following configuration will look for any traffic coming into the currently active VM (VM1 or VM2) on the public IPs 10.0.0.1, 10.0.0.2, and 10.0.0.3 on port 80. It will then proxy said traffic to one of the private servers according to the round robin load balancing algorithm and/or a cookie denoting the destination server.
global
log 127.0.0.1 local0 debug
maxconn 4096
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
user haproxy
group haproxy
daemon
#debug
#quiet
stats socket /var/lib/haproxy/stats mode 600 level admin
defaults
log global
mode http
option httplog
option dontlognull
option http-server-close
option forwardfor except 127.0.0.0/8
option redispatch
retries 3
maxconn 3000
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
listen foo-http
bind 10.0.0.1:80
balance roundrobin
cookie foo-server-id insert indirect nocache preserve maxidle 1h maxlife 1h domain .foo.example.com
# proxy to port 80 and set foo-server-id cookie to "foo1"
server foo1-http 192.168.1.5 cookie foo1
# proxy to port 8080 and set foo-server-id cookie to "foo2"
server foo2-http 192.168.1.6:8080 cookie foo2
listen bar-http
bind 10.0.0.2:80
balance roundrobin
cookie bar-server-id insert indirect nocache preserve maxidle 1h maxlife 1h domain .bar.example.com
server bar1-http 192.168.2.5 cookie bar1
server bar2-http 192.168.2.6 cookie bar2
listen baz-http
bind 10.0.0.3:80
balance roundrobin
cookie baz-server-id insert indirect nocache preserve maxidle 1h maxlife 2h domain .baz.example.com
server baz1-http 192.168.3.5:8080 cookie baz1
server baz2-http 192.168.3.6:8080 cookie baz2
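Before handing this off to Keepalived, it's worth confirming the file actually parses; HAProxy can check a configuration without starting. A quick sketch (the config path is an assumption; the pidfile matches the "global" section above):

```shell
# Syntax-check the configuration without starting the proxy.
haproxy -c -f /etc/haproxy/haproxy.cfg

# Graceful reload after edits: start a new process and tell the old
# one (via -sf) to finish its current connections and then exit.
haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid \
    -sf $(cat /var/run/haproxy.pid)
```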
Keepalived
Keepalived will be managing the fail over duties for us. It does this by:
- Running a script to check whether a managed service is running
- Assigning the public IP addresses for our services to whichever node is currently the VRRP master
- Monitoring the status of a sister Keepalived process via VRRP
For our scenario, the Keepalived configuration would be:
vrrp_script chk_haproxy {
script "killall -0 haproxy" # verify the pid existence
interval 2 # check every 2 seconds
weight 2 # add 2 points of prio if OK
}
vrrp_instance VI_1 {
interface eth0 # physical interface that is connected to the network
state MASTER # MASTER on VM1, BACKUP on VM2
virtual_router_id 51
priority 101 # 101 on master, 100 on backup
virtual_ipaddress {
10.0.0.1 # foo.example.com
10.0.0.2 # bar.example.com
10.0.0.3 # baz.example.com
}
track_script {
chk_haproxy
}
}
Note that this is the only place on the system where we declare that the IPs 10.0.0.1, 10.0.0.2, and 10.0.0.3 will be managed by the kernel. We don't declare them in /etc/network/interfaces (Debian), /etc/sysconfig/network-scripts/ifcfg-eth0:{1,2,3} (Red Hat), or any other configuration file. When Keepalived launches, it will register the IP addresses with the kernel on the physical interface eth0.
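One side effect of this arrangement: while VM2 is in the BACKUP state it does not hold the public IPs, so HAProxy on VM2 cannot bind 10.0.0.x:80. If HAProxy refuses to start on the backup node, allowing the kernel to bind non-local addresses is the usual fix (a sketch; run on both VMs as root):

```shell
# Allow processes to bind() addresses not currently assigned to any
# interface, so HAProxy can start on the BACKUP node too.
sysctl -w net.ipv4.ip_nonlocal_bind=1

# Persist the setting across reboots.
echo 'net.ipv4.ip_nonlocal_bind = 1' >> /etc/sysctl.conf
```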
Also note that we are configuring Keepalived to verify that HAProxy is running by checking for an HAProxy process identifier every two seconds. When the check succeeds, the node's priority is raised by the script's weight of 2. If VM1's check fails, VM1 falls back to its base priority while VM2 (whose check still passes) keeps the bonus, so VM2 ends up with the higher priority and takes over as master (at which point the IPs on VM1 will be released).
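The check itself is easy to try by hand: signal 0 is never actually delivered, but the exit status reveals whether a matching process exists. The same probe with the shell's kill builtin, aimed at the current shell's own PID so it always succeeds:

```shell
# Signal 0 delivers nothing; the exit status alone tells us whether
# the target PID exists. "$$" is this shell's own PID, so the check
# succeeds and the message is printed.
if kill -0 "$$"; then
    echo "process is alive"
fi
```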
Finally, for the VRRP advertisements to get through, we simply need to make sure VM1 and VM2 accept VRRP multicast traffic (advertisements are sent to the multicast group 224.0.0.18):
$ iptables -A INPUT -p vrrp -d 224.0.0.18/32 -j ACCEPT
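That covers VRRP, but it is only half the answer to the firewall question from the introduction: the proxied service traffic has to be allowed in as well. Assuming a default-deny INPUT chain, rules along these lines (addresses and port taken from our scenario) complete the picture on both VMs:

```shell
# Accept client traffic destined for each public service IP on port 80.
iptables -A INPUT -p tcp -d 10.0.0.1 --dport 80 -j ACCEPT
iptables -A INPUT -p tcp -d 10.0.0.2 --dport 80 -j ACCEPT
iptables -A INPUT -p tcp -d 10.0.0.3 --dport 80 -j ACCEPT
```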