I’ve been working on a design for a client who is building a web application to operate at scale. Before I finalise the design, I’m looking for review and comments from a wide range of people. Therefore I present an outline of a segmented front end load balancer design here and ask for your comments and questions to help flush out problems or find anything that I have missed. The key requirements:
- CapEx Light – The business is a startup and needs to keep capital spending within practical limits. Spend where necessary but avoid it where possible.
- Scale Fast – the business plan calls for viral growth and needs Internet bandwidth that can scale exponentially in the first two years.
- Risk Light – some technology risk is acceptable but must be balanced with trusted suppliers and deep support.
- Web Only – The company is web only and runs a single application in the data centre across approximately 50 racks.
- Application – there is only one application, developed and managed for this project by the client. This isn’t an enterprise data centre where no one knows what might arrive in the future, so a niche design that matches the application is a practical solution. Also, the application may be changed (somewhat) to match the infrastructure requirements where costs can be justified and operating risk reduced.
- Self Build – the application is to be hosted in a managed data centre.
- Security – security isn’t a top-line issue but must be addressed throughout the design.
- Branding – I haven’t mentioned many names here; the final vendor and product decisions have not been made. Not that it matters at this stage, as the products and vendors are interchangeable in this architecture. In fact, the appliances could be replaced by open source alternatives.
Load Balancer Design
In previous networks, I’ve experienced many problems with load balancer performance and operation.
- It is really, really expensive to buy a load balancer that can handle 40 Gigabits or more of traffic.
- Buying a redundant load balancer is a significant waste of working capital. The redundant unit doesn’t create revenue; it only manages risk, at the cost of the added complexity of HA functions.
- Buying 4 x 10Gbps load balancers costs 50% less than one pair of 40Gbps load balancers, with similar capabilities and better reliability.
- Configuring a single load balancer can be quite risky – one mistake and the entire site can go down.
- Upgrading a single load balancer in an HA pair is a nerve-wracking experience. Faulty code can cause serious business loss, and it’s an “all or nothing” upgrade with very little risk reduction (other than buying even more expensive load balancers).
To address these concerns, I’m looking for a design that scales horizontally instead of vertically. The diagram below shows 3 Load Balancer units as standalone units. Each Load Balancer is connected to a single ISP so that the load of each ISP connection is shared as part of the total.
I would expect to develop scripts/automation to manage operational tasks across multiple devices. Since each device will have different VLAN IDs, IP addressing, routing and so on, some level of automation will be needed.
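As a sketch of what that automation might look like, per-device configs can be rendered from a shared template plus a small inventory. The device names, addresses and template syntax below are hypothetical illustrations, not any vendor’s actual CLI; in practice the inventory would come from an IPAM or source-of-truth system.

```python
from string import Template

# Hypothetical per-device parameters (documentation address ranges);
# real values would come from an inventory file or IPAM system.
DEVICES = [
    {"name": "lb1", "vlan_id": 101, "vip": "203.0.113.10", "gateway": "203.0.113.1"},
    {"name": "lb2", "vlan_id": 102, "vip": "198.51.100.10", "gateway": "198.51.100.1"},
    {"name": "lb3", "vlan_id": 103, "vip": "192.0.2.10", "gateway": "192.0.2.1"},
]

# Minimal shared config template; the syntax is illustrative only.
CONFIG = Template(
    "hostname $name\n"
    "vlan $vlan_id\n"
    "virtual-server $vip\n"
    "default-gateway $gateway\n"
)

def render_configs(devices):
    """Render one config per device from the shared template."""
    return {d["name"]: CONFIG.substitute(d) for d in devices}

if __name__ == "__main__":
    for name, cfg in render_configs(DEVICES).items():
        print(f"--- {name} ---\n{cfg}")
```

The point is that the per-device differences (VLAN ID, VIP, gateway) live in data, not in hand-edited configs, so adding a fourth load balancer is one more inventory entry rather than another risky manual change.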
Product Selection – The final selection was between A10 Networks and the Brocade load balancer (formerly Riverbed). The market leader, F5 Networks, was too expensive after preliminary pricing (300-500% more).
Internet Connectivity and ISP Resilience
A web application has several different requirements for its Internet connectivity:
- multiple Internet connections for bandwidth
- multiple Internet connections for provider redundancy
- multiple connections give the client a stronger negotiating position with providers
- latency of user sessions will be improved by reducing the path distance to the user (requires DNS Load Balancing). Note: IP Anycast could be used, but it is complex to monitor its performance and operation.
IPv4 Address Exhaustion
Another major concern is IPv4 address exhaustion. Acquiring a public IPv4 allocation of even a /24 has become almost impossible. It is possible to purchase an IPv4 range from a broker, but the costs are very high. A lack of IPv4 addresses could restrict business growth, since the company is predicting exponential growth.
It’s possible to ask each provider to allocate a smaller range of IP addresses per connection, typically a /28. This avoids the requirement to own IP addresses. Those IPv4 addresses would not belong to the company, which means that the network design must be independent of the IP addressing.
In a large data centre/co-lo facility, access to bandwidth is reasonably straightforward. Both 1GbE and 10GbE circuits are readily available, but it remains desirable to load balance across different bandwidths. One use case is when a global product gains significant traction in a specific geography, where it may be practical to purchase a sub-rate service (some number of gigabits per second) on a 10GbE bearer from a specific provider to improve customer experience.
BGP and Front End Router Avoidance
In previous reviews of this design, engineers have highlighted the lack of front end routers with BGP for load balancing. When using BGP-enabled routers for Internet paths, control over the “client to front end” path is close to zero: the path from client to server is determined by the BGP routing protocol and pathways in the Internet, not by you.
Secondly, the capital cost of large physical routers to connect 100Gbps of Internet bandwidth is an enormous up-front cost for zero business benefit (it’s not an investment, just a cost).
In this case, each load balancer is the “internet router”. This reduces the component count, simplifies the operation and scales horizontally as more bandwidth is added. Instead of purchasing one pair of very large hardware routers, I can consider using software OR hardware load balancers according to link speed. For certain load balancer vendors, you can run virtual instances on hardware in this mode.
Some people recommend the use of Cisco IOS routers running the Performance Routing (PfR) feature to monitor packet and network quality from the Internet. While PfR is a good solution, the complexity of configuration and operation is a serious drawback. Attempting to train a 24-hour help desk function on PfR is a serious challenge that requires a large and ongoing training investment. Discussions with colleagues and personal experience suggest Cisco PfR has been a poor experience due to bugs, and a cost was associated with this risk.
Overall, the use of DNS Load Balancing for a web application provides much greater control over the user-to-web-server flow. It also allows for multiple data centre designs in the longer term. Avoiding the capital cost is a significant business benefit.
There are other options for controlling Internet path selection. Companies like ThousandEyes can provide visibility, and Noction has a monitoring and BGP route modification product. I haven’t yet discounted using these solutions.
Product Selection – the product short list was pre-owned Cisco hardware routers such as the C6500 or C7200 because of the simplicity of the connectivity. In the end, we did not purchase any routers because they were provided by the carriers as part of the service.
DNS Load Balancing over Multiple ISPs
DNS Load Balancing works by responding to DNS A record queries with different IP addresses for individual client lookups. A DNS query will typically be answered with two or more IP addresses so that the client can fail over between them. Thus, User 1 would get VIP1, User 2 might get VIP3, and so on across the available paths.
A DNS Load Balancing function uses an algorithm to select the IP addresses allocated. The DNS response can return multiple IP addresses to the client, so a DNS query could return the addresses for VIP1 and VIP3 such that, in the event that one pathway goes down, the client can rotate to the second connection. I’ve seen DNS queries for Google return up to 12 different IP addresses, and while I need to do some testing, I’m currently thinking that newer desktop, tablet and smartphone clients have improved DNS capabilities.
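The rotation described above can be sketched in a few lines. This is a minimal illustration of the idea, not a real nameserver: the VIP addresses are examples from documentation ranges, and a production DNS load balancer would make this decision inside the nameserver with health checks feeding the algorithm.

```python
import itertools

# Illustrative VIPs, one per ISP-facing load balancer (RFC 5737 ranges).
VIPS = ["203.0.113.10", "198.51.100.10", "192.0.2.10"]

_rotation = itertools.cycle(range(len(VIPS)))

def answer_query():
    """Answer one DNS A-record query: return a primary VIP plus the
    next VIP as a failover candidate, rotating the primary so that
    successive clients land on different ISP connections."""
    i = next(_rotation)
    return [VIPS[i], VIPS[(i + 1) % len(VIPS)]]
```

The first query is answered with VIP1 and VIP2, the second with VIP2 and VIP3, and so on; if a client’s first address is unreachable, it retries the second, which sits behind a different ISP connection.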
Managed DNS Services
At this stage, the recommendation is to use a cloud service for hosting and managing DNS. The current DDoS attacks on DNS infrastructure mean that hosting your own DNS is impractical. Preventing DNS floods and DNS reflection attacks is a serious effort that could impact the core business of running a web application. Therefore, DNS is expected to be outsourced.
If you aren’t familiar, check out the Akamai Terra product brief on load balancing:
Customers can select from four service variants of Global Traffic Management:
- Failover directs requests to an alternate location when there is a failure at the primary site. The Failover solution can be used across disparate network carriers.
- IP Intelligence directs requests to the closest data center based on geographic or IP rules.
- Weighted lets customers configure a fixed percentage of DNS requests to be routed to one of multiple specified data centers.
- Performance directs requests as determined by multivariate policy rules featuring percentage-based load balancing and load feedback, allowing for server-specific load balancing within a data center.
- Performance Plus enhances the Performance variant by deploying Akamai network agents at each point of origin to enable a datacenter-to-Internet viewpoint, allowing for more precise decisions when directing end user requests.
As should be clear, this is much more granular control than can be achieved with BGP for a mid-sized content provider.
The “Traditional Front End” design is a better design for enterprises that need simple connectivity, redundancy and static IP addressing for services such as VPN concentrators. It’s not a good design for web services because it lacks granular control over flows.
User Packet Pathways
The result now is that USER1 is connected to VIP1. The load balancer then analyses the available servers in the web pool and selects one.
The VIPs on the load balancer are configured to use a SNAT pool.
Hat Tip: Working at scale means that the size of the SNAT pool is a major concern. Each source IP offers roughly 64,000 usable TCP ports, so a million concurrent sessions requires at least 16 addresses; 32 IP addresses in the SNAT pool gives comfortable headroom to ensure TCP port availability.
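The arithmetic behind that pool size, as a minimal sketch. The 64,000 usable ports per IP is an assumption here; the exact ephemeral port range varies by load balancer platform.

```python
import math

# Assumed usable source ports per SNAT IP; the real ephemeral port
# range varies by platform (often around 64k minus reserved ports).
PORTS_PER_IP = 64_000

def snat_pool_size(concurrent_sessions, headroom=2.0):
    """Minimum number of SNAT IPs for a session count, multiplied by
    a safety factor so the pool never runs close to port exhaustion."""
    return math.ceil(concurrent_sessions * headroom / PORTS_PER_IP)
```

One million concurrent sessions needs 16 addresses at the bare minimum, or 32 with a 2x safety factor.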
Securing the Default Route
In my view, one of the most significant weaknesses in the web front end is compromise, escalation and exfiltration. An external party will attempt to compromise a server and then look to gain privilege escalation. The best way to restrict a privilege escalation attack is to tightly control the outbound traffic flows. Web servers rarely need to make outbound connections to hosts on the Internet, so outbound traffic should be blocked by default.
In this design, the source NAT translation on the load balancers leaves the option of a separate pathway for outbound Internet connections. We are considering a mid-range Fortinet firewall, for simple logging and control, or a pre-owned Cisco ASA. The security value is derived from “deny any outbound”, not from any “clever & fancy security”.
Keeping it simple makes it easy to operate and lowers the operational costs.
The only outbound traffic should be to explicitly named update servers, email hosts and DNS resolvers, plus general administration traffic. This is a very useful way to enforce SSH gateways in the server cluster, and the firewall can be configured to prevent SSH tunnels so as to control cowboy developers.
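A sketch of how that allow-list might be generated. The destinations and the ACL syntax are hypothetical and vendor-neutral; a real deployment would use the Fortinet or ASA policy language, but the structure is the same: a short list of named permits ending in an explicit deny.

```python
# Hypothetical allow-list of outbound destinations (documentation
# address ranges); everything else is denied by the final rule.
ALLOWED_OUTBOUND = [
    ("update-servers", "192.0.2.50", "tcp", 443),
    ("smtp-relay", "198.51.100.25", "tcp", 25),
    ("dns-resolvers", "203.0.113.53", "udp", 53),
]

def build_outbound_acl(entries):
    """Emit permit rules for each named destination, ending with an
    explicit logged deny-any so the default is always 'blocked'."""
    lines = [
        f"permit {proto} any host {host} eq {port}  ! {name}"
        for name, host, proto, port in entries
    ]
    lines.append("deny ip any any log")
    return lines
```

Generating the policy from a list also gives the 24-hour help desk one obvious place to see, and review, everything the servers are allowed to reach.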
The “IP Router” is, of course, some sort of Ethernet switching infrastructure with L3 routing. Servers are connected to the switch core in the well-known ways. The most likely solution here will target an L2/L3 ECMP implementation in the long term but will probably use an MLAG implementation in the short term, as the number of ports will start small and grow large. MLAG is popular for avoiding the capital cost of chassis switches (big money). The Ethernet/L3 network does need to support a distributed storage system, so East/West bandwidth and latency must be excellent; otherwise the LAN is straightforward and simple enough.
Your Feedback Requested
I’m looking for feedback on this design. A good design is one that has had wide review and input, been taken apart, put back together, changed, modified and updated. Obviously I’m confident enough to publish the design here, but I’d like to take questions and discuss any areas that I might have missed.
I look forward to seeing your comments.
Other Posts in A Series On The Same Topic
- A Segmented Front End Web Network Architecture (15th September 2015)