Recent Major Cloud Outages
It is no surprise that whenever there is an outage in a public or private cloud, organizations lose business, face the wrath of angry customers and take a hit to their brands. The effects of an outage can be quite damaging; a 10-hour network outage at hosting company Peak Web ultimately led to its bankruptcy.
Causes of Outages
Any enterprise is vulnerable to a crippling outage similar to the recent major AWS outage for two primary reasons: increasing complexity and an accelerating rate of change. These factors put too much stress on human administrators, who have no way of ensuring that their everyday actions do not cause unintended outages.
Five Possible Solutions
Advances in computer science, predictive algorithms and the availability of massive compute capacity at a reasonable price point have enabled new approaches to system resilience, uptime, availability and disaster recovery. It is important for data-center administrators to take advantage of these advanced techniques whenever possible.
- Architectural Approach: This is the most fundamental choice in data-center architecture. A robust, available, resilient data center can be built with two seemingly different architectures.
Telcos and carriers achieve a robust architecture by ensuring reliability in every component. Every network device in this type of architecture is compliant with very stringent Network Equipment Building System (NEBS) standards. NEBS-compliant devices are capable of withstanding extreme environmental conditions. The standard requires testing for fire resistance, seismic stability, electromagnetic shielding, humidity, noise and more.
Cloud providers take a completely different approach to reliability. They build many smaller systems using inexpensive components that fail more often than the NEBS-compliant systems used by telcos. These systems are then grouped into “atomic units” of small failure domains – typically in one data-center rack. This approach gives a smaller “blast radius” when things go wrong. Hundreds of thousands of such atomic units are deployed in large data centers, an approach that enables a massive scale out.
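The cloud providers' bet is ultimately a probability argument: many independent, cheap units can collectively be more available than one very reliable unit. A minimal sketch of that arithmetic (the availability figures below are illustrative assumptions, not numbers from the article):

```python
# Compare one highly reliable NEBS-grade unit against several cheaper
# replicated units. Availability numbers here are assumed for illustration.

def replicated_availability(unit_availability: float, replicas: int) -> float:
    """Probability that at least one of `replicas` independent units is up."""
    return 1 - (1 - unit_availability) ** replicas

NEBS_UNIT = 0.9999    # a single, expensive, highly reliable unit
CHEAP_UNIT = 0.99     # a cheaper unit that fails far more often

# Three independent "two nines" units already beat a single "four nines" unit:
three_cheap = replicated_availability(CHEAP_UNIT, 3)   # ~0.999999 ("six nines")
assert three_cheap > NEBS_UNIT
```

The same logic drives the "atomic unit" design: as long as failure domains are small and independent, adding more of them drives the overall availability up while keeping the blast radius of any single failure fixed.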
- Active Fault Injection: Many cloud providers deploy this technique. The philosophy is that “the best defense is to fail often.” A team of people is chartered to actively inject faults into the system every single day and to create negative scenarios by forcing ungraceful system shutdowns, physically unplugging network connectivity, shutting down power in the data center or even simulating application-level attacks. This approach forces the dev-ops team to fine tune their software and processes. The Chaos Monkey tool from Netflix, which terminates application VMs randomly, is an example of this approach.
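The core of a Chaos-Monkey-style tool is very small: pick a running instance at random and kill it. A minimal sketch (the instance list and `terminate` callback are hypothetical; a real tool would call a cloud provider's API):

```python
import random

def inject_fault(instances, terminate, probability=0.1, rng=random):
    """Chaos-Monkey-style sketch: with the given probability, pick one
    running instance at random and terminate it. Returns the victim,
    or None if no fault was injected this round."""
    if instances and rng.random() < probability:
        victim = rng.choice(instances)
        terminate(victim)   # hypothetical callback, e.g. a cloud API call
        return victim
    return None
```

Run on a schedule (Netflix runs Chaos Monkey during business hours, when engineers are around to respond), this forces every service to tolerate instance loss as a routine event rather than an emergency.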
- Formal Verification: Formal verification methods, by definition, ensure integrity, safety and security of the end-to-end system. Such methods have been used in aerospace, airline and semiconductor systems for decades. With advances in computing, it is now possible to bring formal verification to the networking layer of IT infrastructure, using it to build a mathematical model of the entire network.
Formal verification can be used to perform an exhaustive mathematical analysis of the entire network's state against a set of user intentions in real time, without emulation and without requiring a replica of the network. Users can evaluate a broad range of properties, such as network-wide reachability, quality issues, loops and configuration inconsistencies. Mathematical modeling also allows "what-if" analysis of proposed changes before they are deployed; such modeling would have caught the router configuration error behind the 2011 Amazon outage.
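As a greatly simplified illustration of the idea, the network can be modeled as a directed graph of forwarding rules and an intent ("A must reach B") checked by exhaustively exploring every path in the model rather than sending test traffic. The topology and device names below are assumptions for the sketch; a real verifier derives the model from actual device configurations and forwarding tables:

```python
from collections import deque

# Toy network model: device -> list of next hops (assumed topology).
topology = {
    "edge-1": ["core-1"],
    "core-1": ["core-2"],
    "core-2": ["edge-2"],
    "edge-2": [],
}

def reachable(graph, src, dst):
    """Exhaustively explore every forwarding path from src in the model."""
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Intent: traffic entering edge-1 must be able to reach edge-2.
assert reachable(topology, "edge-1", "edge-2")

# "What-if": check the intent against a proposed change before deploying it.
proposed = dict(topology, **{"core-1": []})         # change drops core-1's route
assert not reachable(proposed, "edge-1", "edge-2")  # change would violate intent
```

The check runs entirely on the model, which is why no emulation or network replica is needed; the same model can be queried for loops, inconsistencies and other intents.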
- Continuous Testing: This approach is an extension of the continuous integration (CI) and continuous delivery (CD) commonly employed with cloud applications. Developers of cloud-native applications (e.g., Facebook, Amazon Shopping, Netflix) typically make hundreds of tiny improvements to their software in a single day using CI/CD. The end user rarely notices each tiny change, but over time they add up to significant improvement.
Similarly, it is possible to continuously test and verify every tiny change in the network configuration with continuous testing tools. This is a drastic departure from the traditional approach of making a large number of changes in a single service window, which can be risky and disruptive.
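In practice this means treating each small configuration change like a code commit: a validation check runs before the change is pushed, and the change is rejected if it would violate intended state. A minimal sketch of such a per-change test (device names and the intended-state table are assumptions for illustration):

```python
# Per-change validation in the spirit of CI/CD for network configuration.
# The intended-state table below is an assumed example.
INTENDED_VLANS = {
    "switch-1": {10, 20},
    "switch-2": {10, 20, 30},
}

def validate_change(device, proposed_vlans):
    """Run before each tiny config push: the proposed VLAN set must still
    cover the device's intended VLANs. Returns (ok, missing_vlans)."""
    missing = INTENDED_VLANS[device] - set(proposed_vlans)
    return (not missing, missing)

ok, missing = validate_change("switch-1", [10, 20, 99])   # additive change
assert ok

ok, missing = validate_change("switch-2", [10, 20])       # would drop VLAN 30
assert not ok and missing == {30}
```

Because each change is tiny and tested in isolation, a failed check pinpoints exactly which change broke intent, instead of forcing operators to untangle a service window's worth of simultaneous edits.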
- Automation: In a 2016 survey of 315 network professionals conducted by Dimensional Research, 97 percent indicated that human error leads to outages, and 45 percent said such outages are frequent. This problem can be mitigated by automating configuration and troubleshooting as much as possible. However, automation is a double-edged sword: it is itself software, and an error in automation code is replicated quickly and across a much broader "blast zone," as happened in Amazon's February 2017 outage, when an error in an automation script caused it to take down more servers than intended. Even automation tools need some human input – commands, parameters or higher-level configuration – and any human error will be magnified by automation.
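One common mitigation is to build a blast-radius guardrail into the automation itself, so that no single command can remove more capacity than a preset cap, no matter what a human types. A minimal sketch (the 10 percent threshold and host naming are assumptions, not anything Amazon has published):

```python
# Guardrail sketch for an automation script: cap how much of the fleet a
# single command may remove. The 10% threshold is an assumed policy.
MAX_REMOVAL_FRACTION = 0.10

def safe_decommission(fleet, requested):
    """Remove the requested hosts from the fleet, but refuse any request
    that exceeds the blast-radius cap, forcing manual review instead."""
    limit = int(len(fleet) * MAX_REMOVAL_FRACTION)
    if len(requested) > limit:
        raise ValueError(
            f"refusing to remove {len(requested)} of {len(fleet)} hosts "
            f"(limit {limit}); manual review required")
    doomed = set(requested)
    return [host for host in fleet if host not in doomed]
```

A mistyped parameter then fails loudly at the guardrail instead of silently draining a large share of capacity, which is exactly the failure mode described in Amazon's post-mortem of the February 2017 incident.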
By Milind Kulkarni, VP product management, Veriflow
Milind Kulkarni is the vice president of product management at Veriflow. Prior to joining Veriflow, Milind shaped networking and server products and go-to-market strategy for Oracle Cloud and Engineered Systems. Prior to Oracle, Milind held product management, product marketing, business development, and engineering roles at Brocade, Cisco, and Center for Development of Advanced Computing (C-DAC).