Protect Against Network Failures
It is no surprise that whenever there is an outage in a public or private cloud, organizations lose business, face the wrath of angry customers and take a hit on their brands. The effects of an outage can be quite damaging; a 10-hour network outage at hosting company Peak Web ultimately led to its bankruptcy.
Any enterprise is vulnerable to a crippling outage like the recent major AWS outage, for two primary reasons: increasing complexity and an accelerating rate of change. These factors put too much stress on human administrators, who have no way to ensure that their everyday actions do not cause unintended outages.
Advances in computer science, predictive algorithms and availability of massive compute capacity at a reasonable price-point allow the emergence of new approaches and solutions to guarantee system resilience, uptime, availability and disaster recovery. It is important for data-center administrators to take advantage of new and advanced techniques whenever possible.
Telcos and carriers achieve a robust architecture by ensuring reliability in every component. Every network device in this type of architecture is compliant with very stringent Network Equipment Building System (NEBS) standards. NEBS-compliant devices are capable of withstanding extreme environmental conditions. The standard requires testing for fire resistance, seismic stability, electromagnetic shielding, humidity, noise and more.
Cloud providers take a completely different approach to reliability. They build many smaller systems using inexpensive components that fail more often than the NEBS-compliant systems used by telcos. These systems are then grouped into “atomic units” of small failure domains – typically one data-center rack. This approach gives a smaller “blast radius” when things go wrong. Hundreds of thousands of such atomic units are deployed in large data centers, an approach that enables massive scale-out.
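The arithmetic behind the “blast radius” idea can be sketched as follows. This is an illustrative model, not data from any provider: it assumes atomic units fail independently and simply computes what fraction of capacity a single-unit failure takes out.

```python
# Sketch: why many small failure domains shrink the "blast radius".
# Assumes each atomic unit (e.g., one rack) fails independently;
# the capacities and unit counts below are illustrative.

def blast_radius(total_capacity: float, num_units: int) -> float:
    """Fraction of total capacity lost when a single atomic unit fails."""
    return (total_capacity / num_units) / total_capacity

# One monolithic, highly engineered system: losing it loses everything.
print(blast_radius(1000.0, 1))    # → 1.0 (100% of capacity)

# The same capacity split across 500 commodity racks:
print(blast_radius(1000.0, 500))  # → 0.002 (0.2% of capacity)
```

The trade-off is that individual commodity components fail more often, so the software layer above must route around failed units automatically.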
Formal verification can be used to perform an exhaustive mathematical analysis of the entire network’s state against a set of user intentions in real time, without emulation and without requiring a replica of the network. Users can evaluate a broad range of factors, such as network-wide reachability, quality issues, loops, configuration inconsistencies and more. Mathematical modeling can allow “what-if” scenario analyses of proposed changes; such modeling would have prevented a 2011 Amazon outage caused by a router configuration error.
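A toy illustration of the “what-if” idea follows. Real formal-verification tools build a mathematical model of forwarding behavior directly from device configurations; this sketch instead hand-writes a tiny topology (all names are made up) and checks reachability before and after a proposed change.

```python
# Minimal sketch of a reachability check over a modeled network.
# The topology and node names ("edge", "core1", ...) are assumptions
# for illustration; a real tool derives the model from device configs.
from collections import deque

def reachable(links: dict, src: str, dst: str) -> bool:
    """Breadth-first search over a directed link map {node: set_of_next_hops}."""
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in links.get(node, set()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

links = {"edge": {"core1"}, "core1": {"core2"}, "core2": {"leaf"}}
print(reachable(links, "edge", "leaf"))  # → True

# "What-if" analysis: evaluate a proposed change on the model, not the network.
proposed = {k: v - {"core2"} for k, v in links.items()}
print(reachable(proposed, "edge", "leaf"))  # → False: change would cause an outage
```

Because the check runs against a model, the misconfiguration is caught before it ever touches production, which is exactly the class of error behind the 2011 Amazon incident the article mentions.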
Similarly, it is possible to continuously test and verify every tiny change to the network configuration with continuous-testing tools. This is a drastic departure from the traditional approach of batching a large number of changes into a service window, which can be risky and disruptive.
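The per-change workflow can be sketched as a gate that accepts a small configuration change only if every invariant still holds. The configuration format and the invariants below are illustrative assumptions, not any particular vendor's schema.

```python
# Sketch of per-change verification: each small change is validated
# against a set of invariants before it is accepted. The config keys
# and invariant checks are illustrative assumptions.

def verify_and_apply(config: dict, change: dict, invariants) -> dict:
    """Return the updated config if all invariants hold; otherwise reject."""
    candidate = {**config, **change}
    failures = [name for name, check in invariants if not check(candidate)]
    if failures:
        raise ValueError(f"change rejected, failed invariants: {failures}")
    return candidate

invariants = [
    ("jumbo_frames_on", lambda c: c.get("mtu", 1500) >= 9000),
    ("bgp_enabled",     lambda c: c.get("bgp", False)),
]

config = {"mtu": 9000, "bgp": True}
config = verify_and_apply(config, {"mtu": 9216}, invariants)  # accepted
try:
    verify_and_apply(config, {"mtu": 1500}, invariants)       # rejected
except ValueError as err:
    print(err)
```

Verifying each tiny change in isolation also makes the faulty change trivial to identify and roll back, in contrast to a service window where dozens of changes land at once.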
By Milind Kulkarni