Linear vs Stick Load Balancing.
For many companies, the use of a Load Balancer isn't well understood. This article looks at the use of Source NAT to minimise the impact of deploying a load balancer in an existing corporate network. I call this deploying your Load Balancer in "Stick Mode" (following the NAT-on-a-Stick model), and it minimises the changes needed in your network.
This diagram outlines what I call the linear load balancing design:
People who are new to load balancing tend to use this model. It works well enough and it meets the primary requirements for routing and symmetric packet flow. One of the key factors in load balancer design is that any flow that is load balanced must return through the load balancer. A load balancer is usually a capable router, but a very expensive use of resources.
IP Address Modification
What happens when an IP packet passes through a Load Balancer? A simple load balancing VIP will simply modify the destination address. That is, the Virtual IP on the load balancer is the destination address used by the client. When the IP flow hits the load balancer, it will examine the IP header, run through its configuration to select a Real Server, rewrite the header and then forward the packet. Aside from selecting one of many possible servers/destinations, this is Destination NAT. The server therefore sees the IP source as the client, and must be able to route the return packet back to the client via the Load Balancer, which undoes the NAT and returns the packet to the client.
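The rewrite-and-undo behaviour above can be sketched in a few lines of Python. This is a toy model of Destination NAT, not real load balancer code; the VIP, client, and server addresses are invented for illustration:

```python
import random

VIP = "203.0.113.10"                        # virtual IP the client connects to (example)
REAL_SERVERS = ["10.0.0.11", "10.0.0.12"]   # pool of real servers (invented addresses)

# Connection table so return traffic can be un-NATted
nat_table = {}

def forward(src, dst):
    """Client -> VIP: pick a real server and rewrite only the destination."""
    server = random.choice(REAL_SERVERS)
    nat_table[(server, src)] = dst          # remember the VIP for the return path
    return src, server                      # source is untouched: the server sees the client

def reverse(src, dst):
    """Server -> client: undo the NAT so the client sees the VIP replying."""
    vip = nat_table[(src, dst)]
    return vip, dst

# One request and its reply
req_src, req_dst = forward("198.51.100.7", VIP)
rep_src, rep_dst = reverse(req_dst, req_src)
print(req_dst in REAL_SERVERS)   # True  - a real server was selected
print(rep_src == VIP)            # True  - the reply appears to come from the VIP
```

Note that the server only ever sees the client's real address as the source, which is exactly why the return traffic must route back through the load balancer for the NAT to be undone.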
Note that a full load balancing design does a lot more than simple NAT, but it’s a useful concept for basic visualisation.
Therefore you need to add some routing to your network so that the return traffic from the servers passes back through the load balancer.
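On a Linux server, for example, this could be as simple as pointing the default route at the load balancer's server-side address (the addresses here are invented for illustration; your LB's inside interface will differ):

```shell
# Send all return traffic via the load balancer's inside interface
# (10.0.0.1 is an example address for the LB's server-side VLAN)
ip route replace default via 10.0.0.1 dev eth0

# Verify the default route
ip route show default
```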
Problem with Linear Deployment
There are a number of problems with this type of design.
The biggest problem is that the Load Balancer becomes part of the network infrastructure, and must be able to handle all the traffic that flows to and from the servers. Given that Load Balancers (especially F5 Load Balancers) are expensive, this is not a good design. Consider what happens when you want to back up your servers? That's a lot of traffic that could impact the performance of your load balancer.
The second problem is that your servers must be physically in the right location for the Ethernet connection to the Load Balancer. In most data centres, this requires specific planning for space and power, which isn't always possible or easily done. For example, a server may already be live and unable to move, while the Load Balancing VLANs are on another switch.
The third problem is that the Load Balancer will get blamed for every problem the servers might experience. Of course this is silly, but the server admins will be convinced that the LB is somehow magically affecting AD replication and the like. Which then leads to this bad idea:
Some people might attempt to set up something like the above by adding an L3 function in front of the server and using policy routing, or even policy-based NAT, to send non-balanced traffic by a different path. Obviously, trying to do this with plain routing will end up in a simple routing loop.
Use a second interface on the server
This is the most common solution to the "Load Balancer as Router" problem.
This scenario causes major problems, since the server is now effectively a router. That is, the routing table on the server decides which interface is used for each flow, so a static route is added on the server sending all "backup destinations" out of the secondary interface. This is obviously a support nightmare, since different packets go different ways according to which source or destination address is in use. Of course, any problems will still be blamed on the network, because Server Admins often don't understand routing very well (which is fair enough, I say).
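The effect of that static route can be sketched with Python's `ipaddress` module. This is a toy longest-prefix-match lookup over an invented server routing table, showing how backup traffic leaves via the second NIC while everything else goes via the first:

```python
import ipaddress

# Invented server routing table: (prefix, egress interface)
routes = [
    (ipaddress.ip_network("0.0.0.0/0"), "eth0"),      # default: via the load balancer
    (ipaddress.ip_network("10.200.0.0/16"), "eth1"),  # static route: backup network
]

def egress_interface(dst):
    """Longest-prefix match: the most specific matching route wins, as on a real host."""
    addr = ipaddress.ip_address(dst)
    matches = [(net, iface) for net, iface in routes if addr in net]
    _, iface = max(matches, key=lambda m: m[0].prefixlen)
    return iface

print(egress_interface("192.0.2.50"))   # eth0 - normal traffic goes through the LB
print(egress_interface("10.200.5.9"))   # eth1 - backup traffic bypasses the LB
```

Two flows from the same server now take entirely different paths depending on the destination address alone, which is exactly the troubleshooting trap described above.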
Other Posts in This Series
- Cisco ACE - Enterprise Load Balancing on a Stick using Source NAT - Part 3 (14th February 2011)
- Cisco ACE - Enterprise Load Balancing on a Stick using Source NAT - Part 2 (9th February 2011)
- Cisco ACE - Enterprise Load Balancing on a Stick using Source NAT - Part 1 (8th February 2011)