Disaster Recovery – A Thing Of The Past!


Ok, ok. I understand most of you are saying that disaster recovery (DR) is still a critical part of running any kind of operation. After all, we need to secure our future operations in case of disaster. Sure, that is still the case, but things are changing fast.

Two things are forcing us to look at disaster recovery differently across the board. On the one hand, the sheer volume of data is rapidly becoming unmanageable. On the other hand, very few customer-facing services are not considered mission critical and expected to deliver 100% uptime. As a leading IaaS provider, we know that the person running their first e-commerce offering with zero income feels the loss just as badly as the large company that might be losing millions per hour while down. The feeling is the same no matter what stage your business is in: we all feel it is critical to be up and running.

The first time we realized that DR in its traditional sense would not work was when we set up OpenStack Swift across 5 nodes in 3 geographically spread data centers, intended for volumes in the petabyte range. It really comes down to one thing: recovery time. Sure, we have handled large volumes for many years, but we are fast approaching the point where the time to get everything back becomes too long for traditional DR to be viable. The downtime in a full disaster would simply be too great. Even within the same data center, should you truly need to copy a few petabytes from one set of hardware to another, it will take time. Too long? Most likely. While there are ways to divide and conquer large data sets, it is time to think differently.
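To make the recovery-time problem concrete, here is a minimal back-of-the-envelope sketch. The 10 Gbps link and the 70% efficiency factor are illustrative assumptions, not measurements from our own setup:

```python
# Rough restore-time estimate: how long does it take to pull data back
# over a single inter-DC link? (Illustrative assumptions, not measured figures.)

def restore_time_hours(data_tb: float, link_gbps: float, efficiency: float = 0.7) -> float:
    """Estimate hours needed to copy data_tb terabytes over a link_gbps link.

    efficiency discounts protocol overhead, contention and retries.
    """
    bits = data_tb * 1e12 * 8                      # terabytes -> bits
    seconds = bits / (link_gbps * 1e9 * efficiency)
    return seconds / 3600

for volume_tb in (100, 1_000, 3_000):              # 100 TB, 1 PB, 3 PB
    print(f"{volume_tb:>5} TB over 10 Gbps: ~{restore_time_hours(volume_tb, 10):.0f} h")
```

Even with these optimistic numbers, 100 TB takes over a day and a single petabyte takes close to two weeks to move, before you even start rebuilding anything on top of the data.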


Data Points

As the volume of data goes in one direction, the acceptable downtime goes the other direction: down. The solution? Multiple data centers that allow for live-live service, with contained restore points kept locally in each data center.

Logical errors, you say? From time to time, human error will force us to restore from an older version. No doubt restore points are a must regardless of how you build your service. They can often be kept for more contained parts of each solution, and they can be stored locally in the same DC, where local networks allow for far greater speed. With a live-live solution running across two or more data centers, you can also do maintenance with much less risk of having to take the service down.
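As a hedged illustration of the live-live idea, here is a minimal sketch of routing requests across two data centers. The data center names, endpoints and the /healthz path are hypothetical placeholders:

```python
# A minimal live-live routing sketch: spread traffic over every healthy
# data center and fail over automatically when one stops answering.
# The endpoints and the /healthz path are hypothetical placeholders.
import random
import urllib.request

DATA_CENTERS = {
    "dc-one": "https://dc1.example.com",
    "dc-two": "https://dc2.example.com",
}

def healthy(base_url: str) -> bool:
    """A data center counts as healthy if its health endpoint answers quickly."""
    try:
        with urllib.request.urlopen(f"{base_url}/healthz", timeout=2) as resp:
            return resp.status == 200
    except OSError:
        return False

def pick_endpoint() -> str:
    """Pick a healthy data center at random; raise if none are available."""
    candidates = [url for url in DATA_CENTERS.values() if healthy(url)]
    if not candidates:
        raise RuntimeError("No healthy data center available")
    return random.choice(candidates)
```

In practice this decision usually lives in a load balancer or DNS layer rather than in application code, but the principle is the same: every data center serves traffic all the time, so losing one becomes a capacity event rather than a disaster.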

Ask yourself: all that data I am shipping to a different data center for DR, when did I last run a full-scale test to see how long it would take to restore 100 TB or more? Is it not time to go live-live over multiple data centers for all of your critical services? If you are running your services in the cloud, schedule full tests and make sure your contingency plans are up to date, because when disaster strikes they are what stands between you and a true business disaster.
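If you want to turn those full-scale tests into a habit, the simplest drill is to time the real restore path end to end. A sketch, where ./run_restore is a placeholder for whatever tooling actually performs the restore in your environment:

```python
# Time a full restore drill end to end. ./run_restore is a hypothetical
# placeholder for the real restore tooling.
import subprocess
import time

def timed_restore(command: list[str]) -> float:
    """Run a restore command to completion and return the elapsed time in hours."""
    start = time.monotonic()
    subprocess.run(command, check=True)
    return (time.monotonic() - start) / 3600

if __name__ == "__main__":
    hours = timed_restore(["./run_restore", "--target", "staging"])
    print(f"Full restore finished in ~{hours:.1f} h")
```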

By Johan Christenson
