Monday, 27 October 2014

Highly Available L7 Load Balancing for Exchange 2013 with HAProxy – Part 6

Highly Available L7 Load Balancing for Exchange 2013 with HAProxy – Part 1 - Introduction and lab description
Highly Available L7 Load Balancing for Exchange 2013 with HAProxy – Part 2 - Deploy and configure the PKI infrastructure
Highly Available L7 Load Balancing for Exchange 2013 with HAProxy – Part 3 - Configure and test the Exchange 2013 Client Access role
Highly Available L7 Load Balancing for Exchange 2013 with HAProxy – Part 4 - Install CentOS 7
Highly Available L7 Load Balancing for Exchange 2013 with HAProxy – Part 5 - Install and configure HAProxy
Highly Available L7 Load Balancing for Exchange 2013 with HAProxy – Part 6 - Make HAProxy highly available (this page)
Highly Available L7 Load Balancing for Exchange 2013 with HAProxy – Part 7 - Demo



In part 5 we installed and fully configured HAProxy. Technically we would be good to go, but we take it one step further: we want our HAProxy servers to be highly available.

In this part we will install and configure keepalived and will make HAProxy highly available. Part 6 is organised into the following sections:

  • Install and configure keepalived.
  • Testing keepalived.

Install and Configure keepalived

We log on to lab-hap01 via Putty. By default we'll download the source tarball to root's home directory:

cd ~
wget http://www.keepalived.org/software/keepalived-1.2.13.tar.gz













We then uncompress the tarball, change to the uncompressed directory, configure the installation, compile the program, and install it. Lots of fast scrolling output, so no screenshots. Here are the commands:

tar -zxvf keepalived-1.2.13.tar.gz
cd keepalived-1.2.13
./configure
make
make install
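
A quick sanity check that the build and install went through: with the default ./configure prefix the keepalived binary lands under /usr/local/sbin, so we can simply ask it for its version (adjust the path if you changed the prefix):

/usr/local/sbin/keepalived -v
# should report the keepalived 1.2.13 version string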

We need to tell the kernel to allow binding to non-local addresses, so that HAProxy and keepalived can bind to the virtual IP even on the node that doesn't currently hold it. We open the /etc/sysctl.conf file and add the following line:

net.ipv4.ip_nonlocal_bind=1
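
The new setting is picked up automatically at the next boot; to apply it immediately and confirm the value, we can reload the sysctl settings from the file:

sysctl -p                           # re-read /etc/sysctl.conf
sysctl net.ipv4.ip_nonlocal_bind    # should now report the value 1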







We create the /etc/keepalived directory and then the keepalived.conf file inside it (vi will create and write the file when we save it)...

mkdir /etc/keepalived
vi /etc/keepalived/keepalived.conf

...and add the following content:

global_defs {
  notification_email {
    administrator@digitalbrain.com.au
  }
  notification_email_from lab-hap01@digitalbrain.com.au
  smtp_server 10.30.1.11
  smtp_connect_timeout 30
}

vrrp_script check_haproxy {
  script "killall -0 haproxy"
  interval 2
  weight 2
}

vrrp_instance VI_1 {
  interface ens160
  state MASTER
  virtual_router_id 10
  priority 101
  virtual_ipaddress {
    10.30.1.15
  }
  track_script {
    check_haproxy
  }
  smtp_alert
}
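
A quick note on the check_haproxy block: killall -0 doesn't actually kill anything. Signal 0 only tests whether a process with that name exists, so the command exits with 0 while haproxy is running and with a non-zero code when it isn't, which is exactly what keepalived's track_script needs. You can test it manually once haproxy is up:

killall -0 haproxy; echo $?
# 0 = haproxy is running, the check passes and the node keeps its +2 weight
# 1 = haproxy is not running, the check fails and the other node can take over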


In your lab, replace the interface name in the interface ens160 line with your server's interface name, for example interface eth0. If you are not sure what your interface name is, run ifconfig on your server:








Also, if you still remember from Part 1, the HAProxy virtual IP in my lab is 10.30.1.15. In yours, replace the virtual_ipaddress value with one that’s valid in your environment.

Our keepalived solution also sends SMTP (email) notifications when something happens. In your implementation, change the recipient in the notification_email directive, and change the sender address on the notification_email_from line to a hostname@yourdomain value that's valid for your environment. The hostname is the host part of the computer's FQDN; technically the sender can be anything you like, but naming it after the sending node makes it obvious which server a notification came from.

Due to a coding issue in keepalived which returns a blank host name under certain conditions, we need to add the following line to the /etc/hosts file, otherwise email notifications will fail:

10.30.1.13   lab-hap01.localdomain
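
A quick way to confirm that the new entry resolves as expected (run on lab-hap01):

getent hosts lab-hap01.localdomain
# expected output: 10.30.1.13   lab-hap01.localdomain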



It is important that we add the FQDN of the server, and not just the hostname.

For those interested, I found in my lab that the gethostbyname(name.nodename) function in /root/keepalived-1.2.13/lib/utils.c (remember that we extracted the sources to /root/keepalived-1.2.13) returns NULL, so keepalived greets Exchange with HELO (null). Exchange doesn't know who (null) is, so it drops the connection and the SMTP notifications fail.









I also want to point out that in my lab the SMTP server is a single point of failure: email notifications go to the IP address of a single server rather than to a clustered/highly available SMTP endpoint. In real life I would send notifications to a system that is always up and not affected by the failure of a single mail server.

For additional monitoring coverage, SNMP support can also be compiled into keepalived and integrated with your enterprise monitoring system of choice. Not in this lab.

We now make the keepalived daemon start automatically:

cp /usr/local/etc/rc.d/init.d/keepalived /etc/init.d/
chmod +x /etc/init.d/keepalived
chkconfig keepalived on
cp /usr/local/etc/sysconfig/keepalived /etc/sysconfig/






What these commands do:
  • cp - copies the keepalived init script from the default installation location to /etc/init.d. More on /etc/init.d here.
  • chmod - makes the script executable. More on chmod here.
  • chkconfig - enables the keepalived service to run at startup. More on chkconfig here.
  • cp - copies the default keepalived configuration file to /etc/sysconfig/. The /etc/sysconfig directory contains system configuration files, including our keepalived configuration file. For more click here.
The default daemon line in /etc/init.d/keepalived looks like this:

daemon keepalived ${KEEPALIVE_OPTIONS}

Open /etc/init.d/keepalived in your favorite text editor and change the daemon line as follows so that keepalived can actually start:

daemon /usr/local/sbin/keepalived ${KEEPALIVED_OPTIONS}














Our CentOS 7 minimal install doesn't include killall. We need it for the check_haproxy script in our keepalived configuration, which tests whether the haproxy process is running. We install it as part of the psmisc package:

yum install psmisc -y

Also, by default, the CentOS 7 firewall blocks VRRP traffic, and VRRP is essential for keepalived to function. We allow VRRP traffic with the following command – read more about it here:

firewall-cmd --permanent --add-rich-rule='rule protocol value="vrrp" accept'
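
The --permanent rule only becomes active once the firewall is reloaded; the reboot below takes care of that, but if you want to apply and verify it straight away:

firewall-cmd --reload
firewall-cmd --list-rich-rules
# should list: rule protocol value="vrrp" accept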

Now we restart our server:

shutdown -r now

Log back on as root and run these commands to do a basic health check (a small combined sketch follows the list):
  • service keepalived status – confirms that the service is running.
  • cat /var/log/messages | grep VRRP_Instance – confirms that keepalived started in MASTER mode.
  • ip a | grep "inet 10" – shows the virtual IP bound to our ens160 interface on lab-hap01...
  • ping 10.30.1.15 (run it on another machine, e.g. LAB-WS01) – ...and confirms that it is communicating on the VIP.
  • firewall-cmd --list-rich-rules – confirms that our firewall rule survived the restart.
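
For convenience, the local checks can be bundled into one quick pass (a small sketch of a hypothetical helper; the ping from LAB-WS01 still has to be run separately):

#!/bin/bash
# health-check.sh - run the local keepalived health checks in one go
service keepalived status                          # is the daemon up?
grep VRRP_Instance /var/log/messages | tail -n 5   # last few VRRP state changes
ip a | grep "inet 10"                              # is the VIP bound to this node?
firewall-cmd --list-rich-rules                     # did our VRRP rule survive?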

Awesome, we are looking good!

Now we repeat these steps on lab-hap02, with a couple of important differences (the changed lines are summarised after this list):
  1. In the /etc/keepalived/keepalived.conf file we change the priority to a value lower than the master's, for instance 100.
  2. While still in the /etc/keepalived/keepalived.conf file, we also change the notification_email_from line to lab-hap02@digitalbrain.com.au.
  3. This is an obvious one, but we need to ensure it doesn't slip through the cracks: in the /etc/hosts file we enter the correct FQDN and IP address for lab-hap02.
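
Put together, the lab-hap02 copy of /etc/keepalived/keepalived.conf differs from lab-hap01's in just two lines (everything else stays the same; these are my lab values, so adjust for yours):

notification_email_from lab-hap02@digitalbrain.com.au    # in the global_defs block
priority 100                                             # in the vrrp_instance block

And in /etc/hosts we add lab-hap02's own IP address followed by its FQDN (lab-hap02.localdomain in my lab).
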
When it’s all done and lab-hap02 has been rebooted, we repeat the same tests:

  • service keepalived status – confirms that the service is running.
  • cat /var/log/messages | grep VRRP_Instance – confirms that keepalived started in BACKUP mode. The server entered the BACKUP state because it received a higher-priority advert and removed the VIP from its network card; the VIP is supposed to live on the MASTER.
  • ip a | grep "inet 10" – confirms that the virtual IP is NOT bound to the ens160 interface on lab-hap02, because lab-hap02 is the BACKUP node.
  • firewall-cmd --list-rich-rules – confirms that our firewall rule survived the restart.

We skipped the ping test because the VIP is bound to lab-hap01, so it wouldn't tell us anything about lab-hap02.

Testing keepalived

Time for some HA testing. To recap:
  • haproxy is running on both lab-hap01 and lab-hap02.
  • keepalived is running on both lab-hap01 and lab-hap02.
  • lab-hap01 is the MASTER and lab-hap02 is the BACKUP.
  • lab-hap01 holds the VIP.

Let’s confirm. On lab-hap01:

ps -A | grep haproxy
ps -A | grep keepalived
ip a | grep "inet 10"









Same check on lab-hap02:









On lab-hap01 we stop haproxy and we check its IP addresses:

systemctl stop haproxy.service
ip a | grep "inet 10"






Then we confirm that lab-hap01 is no longer the MASTER (expected, since the VIP is no longer bound to its network card):

cat /var/log/messages | grep VRRP_Instance






On lab-hap02 we confirm that the VIP has been bound to the NIC:

ip a | grep "inet 10"






Then we confirm that lab-hap02 is now the new MASTER:

cat /var/log/messages | grep VRRP_Instance








Up to this point we confirmed that stopping haproxy on lab-hap01 was correctly detected and the VIP has been transferred to lab-hap02. Therefore, if we point our Exchange DNS records to the VIP, continued service is assured.
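
If you would rather watch the transition happen in real time than dig through the log afterwards, a live tail of the messages log on both nodes works nicely while you stop and start haproxy (a quick sketch):

tail -f /var/log/messages | grep -i vrrp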

Now we start the haproxy service on lab-hap01 and check the IP address:

systemctl start haproxy.service
ip a | grep "inet 10"






Checking the IP address on lab-hap02 shows that the VIP has been removed from it:






And last, we want to know how long it takes for the VIP to fail over once a service failure is detected. For this we kick off a continuous PING to the VIP from LAB-WS02:

ping 10.30.1.15 -t

Then we stop the haproxy service on lab-hap01 and watch how many pings are lost while service failure is detected and the VIP is moved to lab-hap02:

systemctl stop haproxy.service

Finally we start the haproxy service on lab-hap01 and, again, we watch the pings:

systemctl start haproxy.service

The screenshot shows that failover is virtually instantaneous, with only one ping lost during service failover:












Impressive!

In this part we installed, configured and tested keepalived, the bit which makes HAProxy highly available, on both HAProxy servers. Technically we've almost reached the end of our journey, with only one last step left: confirm that client access actually works, traffic is load balanced, and service level failure is correctly detected and handled.

In part 7, our last part, we will test various client access methods and we'll confirm that load balancing, error detection and high availability actually work from a client's perspective too.



Highly Available L7 Load Balancing for Exchange 2013 with HAProxy – Part 1 - Introduction and lab description
Highly Available L7 Load Balancing for Exchange 2013 with HAProxy – Part 2 - Deploy and configure the PKI infrastructure
Highly Available L7 Load Balancing for Exchange 2013 with HAProxy – Part 3 - Configure and test the Exchange 2013 Client Access role
Highly Available L7 Load Balancing for Exchange 2013 with HAProxy – Part 4 - Install CentOS 7
Highly Available L7 Load Balancing for Exchange 2013 with HAProxy – Part 5 - Install and configure HAProxy
Highly Available L7 Load Balancing for Exchange 2013 with HAProxy – Part 6 - Make HAProxy highly available (this page)
Highly Available L7 Load Balancing for Exchange 2013 with HAProxy – Part 7 - Demo

Highly Available L7 Load Balancing for Exchange 2013 with HAProxy – Part 4

Highly Available L7 Load Balancing for Exchange 2013 with HAProxy – Part 1 - Introduction and lab description
Highly Available L7 Load Balancing for Exchange 2013 with HAProxy – Part 2 - Deploy and configure the PKI infrastructure
Highly Available L7 Load Balancing for Exchange 2013 with HAProxy – Part 3 - Configure and test the Exchange 2013 Client Access role
Highly Available L7 Load Balancing for Exchange 2013 with HAProxy – Part 4 - Install CentOS 7 (this page)
Highly Available L7 Load Balancing for Exchange 2013 with HAProxy – Part 5 - Install and configure HAProxy
Highly Available L7 Load Balancing for Exchange 2013 with HAProxy – Part 6 - Make HAProxy highly available
Highly Available L7 Load Balancing for Exchange 2013 with HAProxy – Part 7 - Demo



In Part 3 we configured our Exchange servers’ CAS role, issued and installed the SAN certificate, and performed a basic client access validation using OWA.

In Part 4 we will start building our HAProxy systems. This part will focus on the installation and configuration of the operating system. I chose CentOS 7. If you feel like you are up to the task, use any Linux distro you are comfortable with, but you will have to adapt the commands, and possibly some of the procedures. If you want to keep it simple and just follow my post, then download the minimal installation ISO of CentOS 7 from http://mirror.anl.gov/pub/centos/7.0.1406/isos/x86_64/CentOS-7.0-1406-x86_64-Minimal.iso.

We create a virtual machine and connect the ISO to the virtual CD-ROM. Stating the obvious: we must ensure that the virtual CD is our first boot option. When our VM boots, we select Install CentOS 7.
















Then we press Enter to begin the installation.













Select your language. Mine is English, Australia.
















Click on Date & Time to configure the time zone.
















Mine is Sydney, Australia.
















Click on INSTALLATION DESTINATION to select where CentOS 7 will be installed.
















Nothing to change: accept the default settings and click Done in the top-left corner. Even though we didn't change anything, we still had to instruct the installer to use the defaults – the “Begin Installation” button in the screenshot above is greyed out, unlike in the screenshot two steps further down.
















Click NETWORK & HOSTNAME to configure networking.
















In the Hostname field type a name for this server. Mine is lab-hap01.localdomain.

IMPORTANT: Do NOT use a single label name such as lab-hap01!

In the top-right corner, notice that the network card is in a disconnected state. That’s alright for now. Click Configure.
















On the General tab, make sure that the network is started automatically and that all users are allowed to use the connection.


















On the IPv4 Settings tab, set the configuration method to Manual, then configure the IP address, netmask, gateway and DNS. Make sure that the DNS server is pointing to LAB-DC01 (10.30.1.10 in my lab) as that is the DNS server holding all the DNS records which make things work. Click Save.


















Click “ON” to turn networking on. The state of the network card will change to Connected. Click Done.

















Click Begin Installation.
















While the files are being installed, click ROOT PASSWORD.

















Type a password for the root user. Mine is a generic, weak password that I always use in my lab. I am not concerned about it as long as it works – it’s only a lab. In real life, I would pick a strong password.
















I use P@ssw0rd all across my lab. It is a dictionary word, known to be commonly used in labs. It is weak, therefore I need to click Done for the second time to indicate to the system that I think I know what I am doing.
















We aren’t creating any users, so I didn’t bother going into the USER CREATION section. Meanwhile the server has finished installing, so click Reboot.
















Once rebooted, we log on as root/P@ssw0rd.












Time to install some supporting software.

IMPORTANT: The lab *must* have Internet connectivity. Otherwise things will fail and you will have to find alternative ways to download all the required software and transfer them to the CentOS 7 box. In my article I assume that Internet connectivity is available.

Since this is a minimal installation, we need to install a couple of things that will enable us to make it work. Bear in mind that Linux is CaSE sENsitiVe, so type the commands exactly as I typed them. Sorry, no screenshots: most commands scroll off the screen really quickly, so there is no point clogging this post with screenshots of fast-scrolling output.

Install wget. We need it to download stuff:

yum install wget -y

Install the development tools. We need them to compile stuff from source:

yum groupinstall "Development Tools" -y

The same goes for the FTP client. Although it isn’t strictly necessary because we aren’t using it in this lab, it is good to have it anyway. You can skip this step if you like.

yum install ftp -y

Also, I always find it handy to be able to locate files quickly. While not strictly necessary, I make extensive use of the locate command, so let's install it (a quick usage example follows the install command below). Side note: if you have just installed a package and want to find where its files ended up, you must run updatedb first before locate can find them, or wait until the scheduled updatedb task runs. More on updatedb here. updatedb is installed along with locate as part of the mlocate package.

yum install mlocate -y
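
A quick usage example once mlocate is in place (the file name database has to be built at least once before locate finds anything):

updatedb                 # build/refresh the file name database
locate sshd_config       # list files whose path contains the search term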

Then we need openssl. This is mandatory because HAProxy will work as an SSL bridge, and we also need the development package to compile the sources. While openssl is already installed, it is good to update it anyway:

yum install openssl -y
yum install openssl-devel -y
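
To confirm which OpenSSL version we ended up with after the update:

openssl version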

We need the openssl perl module to manipulate certificates:

yum install openssl-perl.x86_64 -y

Zlib is another mandatory package. However, you'll find that by the time we need it, it will have been installed along with other packages as a dependency, so we needn't worry about it.

Then I want the traditional network tools (notice “want” as opposed to “need”). The CentOS 7 minimal installation comes without ifconfig; the ip command is already there, and I can (and will, in this lab) use it as an alternative, but I love ifconfig, so I also run this:

yum install net-tools -y

Now I have ifconfig too.

It is time now to make things a bit easier for the administrator (me). My intention is to use Putty, but I have no low-privileged user. All I have is root, and by default root access via SSH is disabled. I am going to give remote SSH access to root so that I can benefit from the enhanced interface offered by Putty. As I said repeatedly, this is a test lab so it’s OK to bend the rules. It is NOT best practice. Don’t do it in production!

Now that the big red flashing warning is out, let’s do it anyway. But before we make any changes, let’s check whether we really need to.

Download Putty. It is a stand-alone application, no installation required.

Run Putty.exe. Enter the Linux server’s IP address, select SSH, and click Open.




















We are presented with a security alert. Expected and completely normal. Click Yes.
















Enter the root credentials. Access denied. Bummer! We DO need to relax security! Not in production though – find another way.






To relax security, we'll access the CentOS 7 console the way we did when we installed the operating system. Open the /etc/ssh/sshd_config file. I prefer to use vi, but you can use nano or another text editor if you like.




Uncomment the PermitRootLogin yes line and save the file.
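
If you prefer a one-liner to interactive editing, sed can make the same change (a sketch; it assumes the line is still in its default commented-out form, and it never hurts to back the file up first):

cp /etc/ssh/sshd_config /etc/ssh/sshd_config.bak
sed -i 's/^#PermitRootLogin yes/PermitRootLogin yes/' /etc/ssh/sshd_config
grep ^PermitRootLogin /etc/ssh/sshd_config    # confirm the change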







Finally type the following command to restart the sshd service:

/bin/systemctl restart sshd.service
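
To confirm that sshd came back up cleanly after the restart:

systemctl status sshd.service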




Test Putty access again – and we are cooking with gas!







Now that easy access is sorted, we'll need an easy way to transfer files to/from the Linux servers. I settled on WinScp, a free, Windows-based file transfer utility with a GUI. It's versatile, and every admin should have it in their toolbox. Download the portable version and unzip it to your preferred location. Any time you need to transfer a file, just run the executable; no installation required. CentOS 7 requires no additional configuration either – it allows WinScp access out of the box, even for root.

That’s it. Now we need to repeat the same procedure for our second HAProxy box, lab-hap02. Whether you go through these steps from scratch again or clone lab-hap01 and reconfigure the clone as lab-hap02 is entirely up to you. I chose to build it from scratch just to confirm that the instructions in this article are easy to follow, accurate, and that they work. However, before you move on, you should have lab-hap02 up and running in a configuration similar to lab-hap01's.

In this part we have done the groundwork for installing HAProxy: installed the operating system and the required tools that will allow us to deploy and configure HAProxy easily.

In the next part we’ll download and install the latest available HAProxy source files. We’ll compile the source, install HAProxy, enable it as a service, and create the configuration file. Additionally, we’ll prepare the certificates – yes, the dreaded certificates are part of our life now, so we must master them, like it or not. It’s not rocket science, don’t freak out.

Stay tuned for the next part.



Highly Available L7 Load Balancing for Exchange 2013 with HAProxy – Part 1 - Introduction and lab description
Highly Available L7 Load Balancing for Exchange 2013 with HAProxy – Part 2 - Deploy and configure the PKI infrastructure
Highly Available L7 Load Balancing for Exchange 2013 with HAProxy – Part 3 - Configure and test the Exchange 2013 Client Access role
Highly Available L7 Load Balancing for Exchange 2013 with HAProxy – Part 4 - Install CentOS 7 (this page)
Highly Available L7 Load Balancing for Exchange 2013 with HAProxy – Part 5 - Install and configure HAProxy
Highly Available L7 Load Balancing for Exchange 2013 with HAProxy – Part 6 - Make HAProxy highly available
Highly Available L7 Load Balancing for Exchange 2013 with HAProxy – Part 7 - Demo