Auto Scaling with HAProxy

In my last post I showed you how to set up Auto Scaling with Elastic Load Balancing. In this post I will show you how to use Amazon’s Auto Scaling with an instance running HAProxy, and have HAProxy automatically update its configuration file when your setup scales up and down.

The Architecture

For the rest of this post I am going to assume that we have two EC2 images. The first image is our load balancer running HAProxy; it is set up to forward incoming traffic to our second image, the application image. The application image will be configured with Amazon’s Auto Scaling to scale up and down depending on the load of the instances. Let’s assume our load balancer image has an AMI of ami-10000000 and our application image has an AMI of ami-20000000.

Auto Scaling Setup

The setup for Auto Scaling will be pretty similar to what we did in the previous post. First we will create a launch config:

as-create-launch-config auto-scaling-test --image-id ami-20000000 --instance-type c1.medium

Then we will create our auto scaling group:

as-create-auto-scaling-group auto-scaling-test --availability-zones us-east-1a --launch-configuration auto-scaling-test --max-size 4 --min-size 2

You will notice we ran this command without specifying a load balancer. Since we are using HAProxy we do not need one. Finally we will create our trigger for the scaling:

as-create-or-update-trigger auto-scaling-test --auto-scaling-group auto-scaling-test --measure CPUUtilization --statistic Average --period 60 --breach-duration 120 --lower-threshold 30 --lower-breach-increment "=-1" --upper-threshold 60 --upper-breach-increment 2

Once you have executed these commands, you will have two new application instances running. This presents us with a problem: in order to send traffic to these instances we need to update the HAProxy config file so that it knows where to direct the traffic.

Updating HAProxy

In the following example I am going to show you how, with a simple script, we can watch for instances being launched or terminated and update HAProxy accordingly. I will be using S3, PHP and the Amazon EC2 PHP Library to do this. You can rewrite this code in any programming language you prefer; the key is understanding what is going on.

In order to identify the running application instances we will need to know the AMI of the application image. We know our application image has an AMI of ami-20000000. We could hard-code this value in our script, but if we ever rebuilt the application image we would have to update the script, which would in turn force us to rebuild our HAProxy instance. Not a lot of fun. What I like to do is store the AMI in S3. That way, if I ever update my application image, I can just change a small file in S3 and have my load balancer pick up the change. Let’s assume I stored the AMI of the application image in a file called ami-application and uploaded it to an S3 bucket so that it can be found at http://stefans-test-ami-bucket.s3.amazonaws.com/ami-application.
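
For example, one way to publish that file is with s3cmd (this assumes you have s3cmd installed and configured; any S3 client will do). The file needs to be publicly readable since the script fetches it over plain HTTP:

echo "ami-20000000" > ami-application
s3cmd put --acl-public ami-application s3://stefans-test-ami-bucket/ami-application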

The Script

Basically what our script is going to do is the following:

  • Get the AMI from S3 of our application image
  • Get a list of all running instances from Amazon, and log which ones match our application image
  • Get a default config file for HAProxy
  • Append the IP addresses of the running application instances to the config file
  • Compare the new config file to the old config file; if they are the same no action is needed, and if they differ, replace haproxy.cfg and reload HAProxy

Our default config file for HAProxy will essentially be the full config without any server directives in the backend section. For example, our default config file could look something like:

global
        log 127.0.0.1   local0
        log 127.0.0.1   local1 notice
        maxconn 50000
        user haproxy
        group haproxy
        daemon
        chroot /var/chroot/haproxy

defaults
        log     global
        mode    tcp
        option  httplog
        option  forwardfor
        retries 2
        redispatch
        maxconn 50000
        timeout connect 10000
        timeout client  30000
        timeout server  60000
        stats uri /ha_stats
        stats realm Global\ statistics
        stats auth myusername:mypassword

frontend www *:80
        maxconn 40000
        mode http
        default_backend www_farm

backend www_farm
        mode http
        balance roundrobin

Now that we have our default config file, all we have to do to generate our HAProxy config is append a server line to the end of it for each running application instance. Let’s look at the PHP code to do that:

<?php
// setup our include path, this is needed since the script is run from cron
ini_set("include_path", ".:../:./include:../include:/path/to/this/script/update-haproxy");

// including the Amazon EC2 PHP Library
require_once("Amazon/EC2/Client.php");

// include the config file containing your AWS Access Key and Secret
include_once ('.config.inc.php');

// location of AMI of the application image
$ami_location = 'http://stefans-test-ami-bucket.s3.amazonaws.com/ami-application';
$ami_id = chop(file_get_contents($ami_location));


// connect to Amazon and pull a list of all instances
$service = new Amazon_EC2_Client(AWS_ACCESS_KEY_ID,
                                 AWS_SECRET_ACCESS_KEY);

// an empty request (no filters) asks for all instances; we match on AMI and state below
$request = array();
$response = $service->describeInstances($request);

$describeInstancesResult = $response->getDescribeInstancesResult();
$reservationList = $describeInstancesResult->getReservation();


// loop the list of running instances and match those that have an AMI of the application image
$hosts = array();
foreach ($reservationList as $reservation) {
        $runningInstanceList = $reservation->getRunningInstance();
        foreach ($runningInstanceList as $runningInstance) {
                $ami = $runningInstance->getImageId();

                $state = $runningInstance->getInstanceState();

                if ($ami == $ami_id && $state->getName() == 'running') {

                        $dns_name = $runningInstance->getPublicDnsName();

                        $app_ip = gethostbyname($dns_name);

                        $hosts[] = $app_ip;
                }
        }
}

// get our default HAProxy configuration file
$haproxy_cfg = file_get_contents("/share/etc/.default-haproxy.cfg");

// append a server line for each running application instance
foreach ($hosts as $i => $ip) {
        $haproxy_cfg .= "\n        server server".$i." ".$ip.":80 maxconn 250 check";
}
// test if the configs differ
$current_cfg = file_get_contents("/path/to/haproxy.cfg");
if ($current_cfg == $haproxy_cfg) {
        echo "everything is good, configs are the same.\n";
}
else {
        echo "file out of date, updating.\n";
        file_put_contents('/path/to/this/script/.latest-haproxy.cfg', $haproxy_cfg);
        system("cp /path/to/this/script/.latest-haproxy.cfg /path/to/haproxy.cfg");
        system("/etc/init.d/haproxy reload");
}
?>

I think this script is pretty self-explanatory, and does what we outlined as our goals above. If you ran the script from the command line your HAProxy config file would get updated and HAProxy would be reloaded.
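
For example, with two application instances running at 10.1.2.3 and 10.1.2.4 (hypothetical addresses), the generated config would end with:

backend www_farm
        mode http
        balance roundrobin
        server server0 10.1.2.3:80 maxconn 250 check
        server server1 10.1.2.4:80 maxconn 250 check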

Now that we have our working script, the last thing we need to do is set it up to run on cron. I find running it every 2-5 minutes is sufficient to keep your config up to date as the group scales.
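
For example, a crontab entry like the following would run the script every five minutes (the script filename and log path here are just placeholders; adjust them to wherever you keep the script):

*/5 * * * * php /path/to/this/script/update-haproxy/update-haproxy.php >> /var/log/update-haproxy.log 2>&1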

One of the nice benefits of having this script is that it makes it easy to pre-scale your solution if you know you are going to receive a big traffic spike. All you have to do is launch as many new instances of your application image as you need and the script will handle the HAProxy setup for you.
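
As a rough sketch, assuming you have the EC2 API tools installed and configured, launching three extra application instances ahead of a spike could look like this; the script will pick them up on its next run:

ec2-run-instances ami-20000000 --instance-type c1.medium -n 3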

Conclusion

With under 100 lines of code and a few tools we were able to set up a script that keeps the HAProxy config file up to date with your running application instances. This lets us use HAProxy instead of Elastic Load Balancing while still getting all the benefits of Auto Scaling. Lastly, note that this script is just an example and should be tailored to your own environment as you see fit. It is also best to store the script and the default HAProxy config on an EBS volume if possible, as it will save you from having to rebuild your instance.

Auto Scaling with Elastic Load Balancing

Along with Elastic Load Balancing, which I showed you how to set up in my previous post, Amazon provides the ability to auto scale your instances. Auto Scaling groups allow you to set up groups of instances that will scale up and down depending on triggers you create. For example, you can set up a scaling group to always have 2 instances in it, and to add another server if the CPU utilization of the servers grows over a certain threshold. This is extremely helpful when you receive unexpected traffic and are unable to react in time to add new instances. The beauty of Auto Scaling in conjunction with Elastic Load Balancing is that new instances are automatically registered with the load balancer you provide.

Creating an Auto Scaling Launch Config

The first step in setting up Auto Scaling is to create a launch config. The launch config determines what EC2 image and instance size (small, medium, etc.) will be used when a new instance is started for your Auto Scaling group. To create a launch config you use the as-create-launch-config command. For example, to create a new launch config called auto-scaling-test that launches the image ami-12345678 at size c1.medium you would run the following command:


as-create-launch-config auto-scaling-test --image-id ami-12345678 --instance-type c1.medium
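
To confirm the launch config was created you can list your launch configs (this assumes the Auto Scaling command line tools are installed and configured):

as-describe-launch-configs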

Creating an Auto Scaling Group

The next step in enabling Auto Scaling is to set up an Auto Scaling group. An Auto Scaling group tells Amazon what availability zones you want your instances created in, the minimum and maximum number of instances to run, and which launch config to use. To create an Auto Scaling group you use the as-create-auto-scaling-group command. For example, to create a new group named auto-scaling-test in the availability zone us-east-1a, with a minimum of 2 instances and a maximum of 4, using our newly created launch config, you would run:


as-create-auto-scaling-group auto-scaling-test --availability-zones us-east-1a --launch-configuration auto-scaling-test --max-size 4 --min-size 2

When this command is executed, 2 new instances will be created as per the launch config. The as-create-auto-scaling-group command can also be linked to a load balancer. So if we wanted this group to use the load balancer we created in the previous article, we would run:


as-create-auto-scaling-group auto-scaling-test --availability-zones us-east-1a --launch-configuration auto-scaling-test --max-size 4 --min-size 2 --load-balancers test-balancer

After execution this would set up 2 new instances as per the launch config, and register those instances with the load balancer test-balancer.

Creating Auto Scaling Triggers

Triggers are used by Auto Scaling to determine whether to launch or terminate instances within an Auto Scaling group. To set up a trigger you use the as-create-or-update-trigger command. Here is an example using the Auto Scaling group we created earlier:


as-create-or-update-trigger auto-scaling-test --auto-scaling-group auto-scaling-test --measure CPUUtilization --statistic Average --period 60 --breach-duration 120 --lower-threshold 30 --lower-breach-increment "=-1" --upper-threshold 60 --upper-breach-increment 2

Let’s walk through what this command is doing. It creates a new trigger called auto-scaling-test that operates on the Auto Scaling group called auto-scaling-test. It measures the average CPU utilization of the instances in the group every 60 seconds. If the CPU utilization stays above 60% for 120 seconds, 2 new instances are launched. Conversely, if the CPU utilization stays below 30% for 120 seconds, 1 instance is terminated. Remember that the trigger will never scale the group below the minimum or above the maximum number of instances defined in the Auto Scaling group.
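
If you want to verify the trigger was created, you can describe the triggers for the group (again assuming the Auto Scaling command line tools are installed):

as-describe-triggers --auto-scaling-group auto-scaling-test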

Shutting Down an Auto Scaling Group

Shutting down an Auto Scaling group can be a bit tricky at first, as you cannot delete an Auto Scaling group until all of its instances are terminated or deregistered from the group. The best way to tear down an Auto Scaling group, its triggers and its launch config is to do the following:

  • Delete all triggers
  • Update the Auto Scaling group to have a minimum and maximum number of instances of 0
  • Wait for the instances registered with the Auto Scaling group to be terminated
  • Delete the Auto Scaling group
  • Delete the launch config

To do this with the examples we used above we would issue the following commands:


as-delete-trigger auto-scaling-test --auto-scaling-group auto-scaling-test

as-update-auto-scaling-group auto-scaling-test --min-size 0 --max-size 0
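
Before deleting the group, wait for Auto Scaling to terminate the remaining instances. You can check on its progress by describing the group:

as-describe-auto-scaling-groups auto-scaling-test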

as-delete-auto-scaling-group auto-scaling-test

as-delete-launch-config auto-scaling-test

With those 4 commands you can easily delete your Auto Scaling group as well as any launch configs or triggers that are associated with it.

Conclusion

Auto Scaling provides an easy and efficient way to grow and shrink your hosting solution based on your current traffic. When you are hit with unexpected traffic, Auto Scaling provides a failsafe by automatically launching new instances and scaling up your solution to meet the new load. When the traffic subsides, Auto Scaling scales your solution back down so that you are not wasting money by running more instances than you require.

One thing to note is that if you know beforehand that you will be receiving a traffic spike at a specific time, it may be more beneficial to launch new instances manually before the spike. This will save your system from getting hammered before Auto Scaling launches new instances to cope with the additional load. If you rely on Auto Scaling alone in this scenario you could see many requests at the start of the heavy traffic time out or fail, as the minimum number of instances likely won’t be able to handle the load.