Load Balancing with Amazon Web Services

In my previous post I talked about load balancing with HAProxy. In this post I am going to discuss load balancing with Amazon Web Services, using Elastic Load Balancing.

Elastic Load Balancing vs HAProxy

Elastic Load Balancing (ELB) was introduced by Amazon Web Services in May of 2009. It distributes traffic across multiple EC2 instances and scales up as your traffic increases. This all sounds like a perfect solution for your load balancing; however, it does present a few issues. The rate at which ELB scales up is often a little slow. This means that if your website typically does not get a lot of traffic and then receives a giant spike, it will likely take the load balancer some time to catch up to the new demand. If your traffic increases are more gradual, however, ELB appears to scale up much more smoothly. For more information on tests of Elastic Load Balancing's performance, see this article:

http://agiletesting.blogspot.com/2010/04/load-balancing-in-cloud-whitepaper-by.html

HAProxy, on the other hand, handles large traffic spikes very well. In addition, HAProxy gives you the power to fine-tune your load balancing, along with additional features such as custom error pages, real-time statistics, and ACLs (a minimal configuration is sketched below). However, not everyone requires these features, so you will need to decide for yourself whether you want the easy solution or the more robust route. I will describe how to set up Elastic Load Balancing, as well as how to auto scale with ELB or by using HAProxy.
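To give a sense of what that fine-tuning looks like, here is a minimal sketch of an haproxy.cfg that routes requests with an ACL and exposes the statistics page. The backend names, server addresses, ports, and URL paths are assumptions purely for illustration:

# Illustrative haproxy.cfg sketch -- server names, addresses, and paths are made up
defaults
    mode http
    timeout connect 5s
    timeout client 30s
    timeout server 30s

frontend http-in
    bind *:80
    # ACL example: send anything under /blog to its own backend
    acl is_blog path_beg /blog
    use_backend blog-servers if is_blog
    default_backend web-servers

backend web-servers
    balance roundrobin
    server web1 10.0.0.1:80 check
    server web2 10.0.0.2:80 check

backend blog-servers
    server blog1 10.0.0.3:80 check

listen stats
    bind *:8080
    # Real time statistics page
    stats enable
    stats uri /haproxy-stats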

Setting Up Elastic Load Balancing

Setting up Elastic Load Balancing is as easy as running a single command using the Amazon ELB API Tools. To create a new load balancer you run the elb-create-lb command, specifying the name of the new load balancer, the availability zones you want to use, the incoming port and the protocol to use for it, and the port the traffic should be sent to on the instance. For example, to create a new load balancer called test-balancer in the us-east-1a zone that listens on port 80 for HTTP traffic and sends it to port 80 on our instances, we would run:


elb-create-lb test-balancer --availability-zones us-east-1a --listener "lb-port=80,instance-port=80,protocol=http"

Executing this command will return the DNS name of the new load balancer that was created. You will point traffic at this DNS name so that it gets distributed by your load balancer. Unfortunately, the recommended way to point this traffic is via a DNS CNAME. This is a problem because you cannot use a CNAME on a root domain. For example, I would not be able to point kloppmagic.ca to the load balancer, but I could point www.kloppmagic.ca. Using a subdomain appears to be the common workaround. The other option is to look up the IP address behind the DNS name returned from the command and point your records at it directly; with this approach, however, you will have to constantly monitor Amazon's DNS name, as the load balancers do not use static IPs. A rough sketch of both options is shown below.
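As a sketch of both options (the load balancer's DNS name below is made up for illustration), you could add a CNAME record for a subdomain, or resolve the name yourself, keeping in mind that the returned addresses can change at any time:

# CNAME record for a subdomain, in BIND zone file syntax
www.kloppmagic.ca.    300    IN    CNAME    test-balancer-1234567890.us-east-1.elb.amazonaws.com.

# Look up the current IP addresses behind the load balancer's DNS name
dig +short test-balancer-1234567890.us-east-1.elb.amazonaws.com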

Once you have the load balancer set up, you will need to associate running EC2 instances with it. To do this you run the elb-register-instances-with-lb command, which registers one or more instances with a specific load balancer. For example, if you had two instances with instance IDs i-123456 and i-123457 that you wanted to associate with the test-balancer load balancer, you would run:


elb-register-instances-with-lb test-balancer --instances i-123456,i-123457

In the same manner, you can use the elb-deregister-instances-from-lb command to remove instances from a given load balancer. Finally, if you wish to view all the load balancers you currently have set up, you can use the elb-describe-lbs command, as shown below.
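For example, to remove the i-123457 instance from the test-balancer load balancer and then list the load balancers in your account (reusing the example names from above), you could run:

elb-deregister-instances-from-lb test-balancer --instances i-123457
elb-describe-lbs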

Ultimately, it really is that easy to set up load balancing with Amazon Web Services. If you do not require all the features of HAProxy and want something quick and easy to set up, Elastic Load Balancing cannot be beat.