Five Things Organizations Can Do To Protect Against Network Failures

Milind Kulkarni

It is no surprise that whenever there is an outage in a public or private cloud, organizations lose business, face the wrath of angry customers and take a hit to their brands. The effects of an outage can be severe: a 10-hour network outage at hosting company Peak Web ultimately led to its bankruptcy.

Causes of Outages

Any enterprise is vulnerable to a crippling outage similar to the recent major AWS outage for two primary reasons: increasing complexity and an accelerating rate of change. These factors put too much stress on human administrators, who have no way of ensuring that their everyday actions do not cause unintended outages.

Five Possible Solutions

Advances in computer science and predictive algorithms, together with the availability of massive compute capacity at a reasonable price point, have enabled new approaches to guaranteeing system resilience, uptime, availability and disaster recovery. It is important for data-center administrators to take advantage of these new techniques whenever possible.

  1. Architectural Approach: This is the most fundamental choice in data-center architecture. A robust, available, resilient data center can be built with two seemingly different architectures.

Telcos and carriers achieve a robust architecture by ensuring reliability in every component. Every network device in this type of architecture is compliant with very stringent Network Equipment Building System (NEBS) standards. NEBS-compliant devices are capable of withstanding extreme environmental conditions. The standard requires testing for fire resistance, seismic stability, electromagnetic shielding, humidity, noise and more.

Cloud providers take a completely different approach to reliability. They build many smaller systems using inexpensive components that fail more often than the NEBS-compliant systems used by telcos. These systems are then grouped into “atomic units” of small failure domains – typically one data-center rack. This approach gives a smaller “blast radius” when things go wrong. Hundreds of thousands of such atomic units are deployed in large data centers, enabling massive scale-out.

  2. Active Fault Injection: Many cloud providers deploy this technique. The philosophy is that “the best defense is to fail often.” A team of people is chartered to actively inject faults into the system every single day and to create negative scenarios by forcing ungraceful system shutdowns, physically unplugging network connectivity, shutting down power in the data center or even simulating application-level attacks. This approach forces the DevOps team to fine-tune their software and processes. The Chaos Monkey tool from Netflix, which terminates application VMs randomly, is an example of this approach.
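The core idea of random termination can be sketched in a few lines of Python. This is a minimal illustration, not Netflix's actual Chaos Monkey; the VM names and the `chaos_terminate` helper are hypothetical, and a real tool would call a cloud provider's API to perform the terminations.

```python
import random

# Hypothetical inventory of application VM identifiers; in practice this
# list would come from the cloud provider's inventory API.
vms = ["vm-01", "vm-02", "vm-03", "vm-04", "vm-05"]

def chaos_terminate(instances, kill_probability=0.2, rng=random):
    """Randomly select instances to terminate, Chaos-Monkey style.

    Returns the instances chosen for termination; a production tool
    would also enforce schedules, opt-outs and blast-radius limits.
    """
    return [vm for vm in instances if rng.random() < kill_probability]

victims = chaos_terminate(vms)
survivors = [vm for vm in vms if vm not in victims]
```

Running a routine like this every day forces applications to tolerate instance loss as a normal event rather than an emergency.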
  3. Formal Verification: Formal verification methods, by definition, ensure integrity, safety and security of the end-to-end system. Such methods have been used in aerospace, airline and semiconductor systems for decades. With advances in computing, it is now possible to bring formal verification to the networking layer of IT infrastructure, using it to build a mathematical model of the entire network.

Formal verification can be used to perform an exhaustive mathematical analysis of the entire network’s state against a set of user intentions in real time, without emulation and without requiring a replica of the network. Users can evaluate a broad range of factors, such as network-wide reachability, quality issues, loops, configuration inconsistencies and more. Mathematical modeling can allow “what-if” scenario analyses of proposed changes; such modeling would have prevented a 2011 Amazon outage caused by a router configuration error.
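To make the idea concrete, here is a deliberately tiny sketch of intent checking over a network model. Real verification tools analyze forwarding tables, ACLs and all packet header spaces exhaustively; this toy version reduces the network to a graph of permitted links (the topology and device names are invented for illustration) and checks each reachability intent against it.

```python
from collections import deque

# Toy network model: adjacency list of permitted forwarding links.
topology = {
    "edge-a": ["core-1"],
    "edge-b": ["core-1", "core-2"],
    "core-1": ["fw"],
    "core-2": ["fw"],
    "fw": ["db"],
    "db": [],
}

def reachable(graph, src, dst):
    """Breadth-first search: can traffic from src ever reach dst?"""
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# User intents: (source, destination, expected reachability).
intents = [
    ("edge-a", "db", True),   # app servers must reach the database
    ("edge-b", "db", True),
    ("db", "edge-a", False),  # the database must not initiate connections out
]

violations = [i for i in intents if reachable(topology, i[0], i[1]) != i[2]]
```

A "what-if" analysis amounts to applying a proposed change to the model first and re-running the same intent checks; a misconfiguration that breaks reachability shows up in `violations` before it ever touches production.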

  4. Continuous Testing: This approach is an extension of the continuous integration (CI) and continuous delivery (CD) commonly employed with cloud applications. Developers of cloud-native applications (e.g., Facebook, Amazon Shopping, Netflix) typically make hundreds of tiny improvements to their software in a single day using CI/CD. The end user rarely notices these tiny changes, but over a longer period of time, they add up to a significant improvement.

Similarly, it is possible to continuously test and verify every tiny change in the network configuration with continuous testing tools. This is a drastic departure from the traditional approach of making a large number of changes in a single service window, which can be risky and disruptive.
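The pattern is the same as application CI: apply one small change to a candidate configuration, run invariant checks, and only roll out if they pass. The sketch below assumes an invented config format and invented invariants (`mtu`, `mgmt_acl`); a real pipeline would validate actual device configs against many more rules.

```python
def apply_change(config, key, value):
    """Return a new candidate config with one small change applied."""
    candidate = dict(config)
    candidate[key] = value
    return candidate

def verify(config):
    """Invariant checks run after every change, CI/CD style."""
    errors = []
    if config.get("mtu", 0) < 1280:
        errors.append("MTU below IPv6 minimum")
    if not config.get("mgmt_acl"):
        errors.append("management ACL missing")
    return errors

baseline = {"mtu": 9000, "mgmt_acl": ["10.0.0.0/8"]}

# A tiny, safe change passes verification and can be rolled out.
problems = verify(apply_change(baseline, "mtu", 1500))

# A risky change is caught before it reaches production.
bad = verify(apply_change(baseline, "mtu", 100))
```

Because each change is tiny and verified in isolation, a failing check pinpoints exactly which change broke which invariant, instead of leaving operators to untangle a service window's worth of edits.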

  5. Automation: In a 2016 Dimensional Research survey of 315 network professionals, 97 percent indicated that human error leads to outages, and 45 percent said those outages are frequent. This problem can be mitigated by automating configuration and troubleshooting as much as possible. However, automation is a double-edged sword because it is carried out by a software program: if there is an error in the automation code, problems replicate quickly and across a much broader “blast zone,” as happened in Amazon’s February 2017 outage, in which an error in an automation script took down more servers than intended. Even automation tools need some human input – commands, parameters or higher-level configuration – and any human error will be magnified by automation.
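One practical safeguard against this failure mode is a blast-radius cap inside the automation itself: refuse any action that would touch more than a fixed fraction of the fleet, no matter what the operator typed. The following is a hedged sketch of that idea (the `plan_removals` function, server names and 10 percent threshold are all illustrative), not a description of how Amazon's tooling works.

```python
def plan_removals(servers, to_remove, max_fraction=0.1):
    """Refuse any automated action that exceeds the blast-radius cap."""
    requested = [s for s in servers if s in set(to_remove)]
    if len(requested) > max_fraction * len(servers):
        raise RuntimeError(
            f"refusing to remove {len(requested)} of {len(servers)} servers: "
            f"exceeds {max_fraction:.0%} safety limit"
        )
    return requested

fleet = [f"srv-{i:02d}" for i in range(100)]

# A small, sane request is allowed through.
ok = plan_removals(fleet, ["srv-01", "srv-02"])

# A fat-fingered request to remove half the fleet is blocked.
try:
    plan_removals(fleet, fleet[:50])
except RuntimeError as exc:
    blocked = str(exc)
```

A guard like this does not eliminate human error, but it converts a fleet-wide outage into a loud, recoverable refusal.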
