Five Things Organizations Can Do To Protect Against Network Failures


It is no surprise that whenever there is an outage in a public or private cloud, organizations lose business, face the wrath of angry customers and take a hit on their brands. The effects of an outage can be quite damaging; a 10-hour network outage at hosting company Peak Web ultimately led to its bankruptcy.

Causes of Outages

Any enterprise is vulnerable to a crippling outage like the recent major AWS outage for two primary reasons: increasing complexity and a rising rate of change. These factors put too much stress on human administrators, who have no way of ensuring that their everyday actions do not cause unintended outages.

Five Possible Solutions

Advances in computer science and predictive algorithms, combined with the availability of massive compute capacity at a reasonable price point, are enabling new approaches and solutions for system resilience, uptime, availability and disaster recovery. Data-center administrators should take advantage of these new and advanced techniques whenever possible.

  1. Architectural Approach: This is the most fundamental choice in data-center architecture. A robust, available, resilient data center can be built with two seemingly different architectures.

Telcos and carriers achieve a robust architecture by ensuring reliability in every component. Every network device in this type of architecture is compliant with very stringent Network Equipment Building System (NEBS) standards. NEBS-compliant devices are capable of withstanding extreme environmental conditions. The standard requires testing for fire resistance, seismic stability, electromagnetic shielding, humidity, noise and more.

Cloud providers take a completely different approach to reliability. They build many smaller systems using inexpensive components that fail more often than the NEBS-compliant systems used by telcos. These systems are then grouped into “atomic units” of small failure domains – typically in one data-center rack. This approach gives a smaller “blast radius” when things go wrong. Hundreds of thousands of such atomic units are deployed in large data centers, an approach that enables a massive scale out.
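The scale-out trade-off described above is easy to quantify. The sketch below uses illustrative fleet sizes, not figures from any particular provider, to show why many small atomic units shrink the blast radius of any single failure:

```python
# Hypothetical numbers to illustrate the "blast radius" trade-off:
# one large failure domain vs. many small atomic units (racks).

def blast_radius(total_units: int, failed_units: int) -> float:
    """Fraction of overall capacity lost when `failed_units` fail."""
    return failed_units / total_units

# One monolithic system: losing it means losing everything.
print(blast_radius(total_units=1, failed_units=1))        # 1.0 (100% outage)

# 100,000 rack-sized atomic units: one rack failure is negligible.
print(blast_radius(total_units=100_000, failed_units=1))  # 1e-05 (0.001%)
```

The same arithmetic explains why cloud providers can tolerate cheap, failure-prone components: each failure is contained within a tiny fraction of total capacity.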

  2. Active Fault Injection: Many cloud providers deploy this technique. The philosophy is that “the best defense is to fail often.” A team of people is chartered to actively inject faults into the system every single day and to create negative scenarios by forcing ungraceful system shutdowns, physically unplugging network connectivity, shutting down power in the data center or even simulating application-level attacks. This approach forces the DevOps team to fine-tune their software and processes. The Chaos Monkey tool from Netflix, which terminates application VMs randomly, is an example of this approach.
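As a minimal illustration of the idea – not Netflix's actual implementation, and with hypothetical instance names – a fault-injection run can be as simple as randomly terminating one running instance from an inventory:

```python
import random

def pick_victim(instances, rng=random):
    """Randomly select one running instance to terminate."""
    candidates = [i for i in instances if i["state"] == "running"]
    return rng.choice(candidates) if candidates else None

def inject_fault(instances, terminate, rng=random):
    """Terminate one random instance, forcing the system to tolerate the loss."""
    victim = pick_victim(instances, rng)
    if victim is not None:
        terminate(victim)            # in practice: a cloud provider API call
        victim["state"] = "terminated"
    return victim

# Usage with a stubbed terminate action:
fleet = [{"id": f"vm-{n}", "state": "running"} for n in range(5)]
killed = inject_fault(fleet, terminate=lambda v: print("killing", v["id"]))
```

Running this on a schedule, against production, is what turns fault injection from an occasional drill into a daily forcing function.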
  3. Formal Verification: Formal verification methods, by definition, ensure integrity, safety and security of the end-to-end system. Such methods have been used in aerospace, airline and semiconductor systems for decades. With advances in computing, it is now possible to bring formal verification to the networking layer of IT infrastructure, using it to build a mathematical model of the entire network.

Formal verification can be used to perform an exhaustive mathematical analysis of the entire network’s state against a set of user intentions in real time, without emulation and without requiring a replica of the network. Users can evaluate a broad range of factors, such as network-wide reachability, quality issues, loops, configuration inconsistencies and more. Mathematical modeling can allow “what-if” scenario analyses of proposed changes; such modeling would have prevented a 2011 Amazon outage caused by a router configuration error.
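Production verification tools build symbolic models of actual forwarding behavior; as a deliberately simplified stand-in, the sketch below checks a set of reachability intents exhaustively over a toy topology graph (the node names and intents are invented for illustration):

```python
from collections import deque

def reachable(links, src, dst):
    """Breadth-first search: can src reach dst over the given directed links?"""
    adj = {}
    for a, b in links:
        adj.setdefault(a, set()).add(b)
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

def verify_intents(links, intents):
    """Return every (src, dst) intent the current topology violates."""
    return [(s, d) for s, d in intents if not reachable(links, s, d)]

links = [("web", "core"), ("core", "db")]
intents = [("web", "db"), ("db", "web")]   # the second intent will fail
print(verify_intents(links, intents))      # [('db', 'web')]
```

A "what-if" analysis is then just running the same check against a proposed topology or configuration before it touches the real network.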

  4. Continuous Testing: This approach is an extension of the continuous integration (CI) and continuous delivery (CD) practices commonly employed with cloud applications. Developers of cloud-native applications (e.g., Facebook, Amazon Shopping, Netflix) typically make hundreds of tiny improvements to their software in a single day using CI/CD. The end user rarely notices these tiny changes, but over a longer period of time, they add up to a significant improvement.

Similarly, it is possible to continuously test and verify every tiny change in the network configuration with continuous testing tools. This is a drastic departure from the traditional approach of making a large number of changes in a single service window, which can be too risky and disruptive.
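A continuous-testing gate for network changes can be sketched as a set of invariants evaluated against every proposed configuration before it is applied. The invariants and field names below are illustrative assumptions, not any vendor's schema:

```python
# Illustrative invariants a change must satisfy before being applied.
INVARIANTS = [
    ("mtu in range",  lambda c: 576 <= c.get("mtu", 1500) <= 9216),
    ("vlan assigned", lambda c: c.get("vlan") is not None),
]

def check_change(config: dict):
    """Return the names of invariants the proposed config violates."""
    return [name for name, ok in INVARIANTS if not ok(config)]

good = {"mtu": 1500, "vlan": 10}
bad  = {"mtu": 12000}                # oversized MTU, no VLAN assigned
print(check_change(good))            # []
print(check_change(bad))             # ['mtu in range', 'vlan assigned']
```

Because each change is tiny, a failed check pinpoints exactly which edit introduced the violation – the same property that makes CI/CD effective for application code.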

  5. Automation: In a 2016 survey of 315 network professionals conducted by Dimensional Research, 97 percent indicated that human error leads to outages, and 45 percent said those outages are frequent. This problem can be mitigated by automating configuration and troubleshooting as much as possible. However, automation is a double-edged sword because it is carried out by a software program: if there is an error in the automation code, problems are replicated quickly and across a much broader “blast radius,” as happened in Amazon’s February 2017 outage, in which an error in an automation script took down more servers than intended. Even automation tools need some human input – commands, parameters or higher-level configuration – and any human error will be magnified by automation.
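One way to limit this risk is a blast-radius guard inside the automation itself. The sketch below (with illustrative names and thresholds, not Amazon's actual safeguard) refuses any request that would remove more than a configured fraction of the fleet, no matter what the operator typed:

```python
class BlastRadiusExceeded(Exception):
    """Raised when a request would affect too much of the fleet at once."""

def remove_capacity(fleet: list, requested: int, max_fraction: float = 0.1):
    """Remove `requested` servers, but never more than `max_fraction` of fleet."""
    limit = int(len(fleet) * max_fraction)
    if requested > limit:
        raise BlastRadiusExceeded(
            f"refusing to remove {requested} of {len(fleet)} servers "
            f"(limit {limit})")
    return fleet[requested:]          # the surviving servers

fleet = [f"srv-{n}" for n in range(100)]
fleet = remove_capacity(fleet, 5)     # fine: 5 <= 10% of 100
# remove_capacity(fleet, 50)          # raises BlastRadiusExceeded
```

The guard does not eliminate human error – a bad parameter still gets through – but it caps how much damage any single mistaken command can do.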

By Milind Kulkarni
