Why Reliability Is The Buzzword For Cloud In 2014
Any discussion of cloud adoption ultimately boils down to two important concerns: data security and cloud reliability. A study conducted by Tata Consultancy Services some time ago revealed some interesting insights: while customers in Europe and Asia-Pacific saw data security as the most important factor when picking a cloud vendor, their counterparts in the US and Latin America valued reliability over security. This picture may well have changed drastically over the past year in the wake of the NSA revelations.
While concerns about data security have certainly made businesses jittery about migrating to the cloud, it is on reliability that the doubts refuse to go away. On security, it is relatively easy for a cloud vendor to demonstrate its infrastructural capabilities: has your provider deployed the necessary standards to make data hack-proof and tamper-proof? Are proper firewalls and intrusion detection mechanisms in place? If the answer to these questions is 'yes', you can rest assured that your data is safe and secure on your cloud provider's servers.
The same, however, cannot be said of reliability. Can your vendor promise a 100%, or even a 99.5%, uptime guarantee for the next year? Are you absolutely sure your services will not go down tomorrow? Contracts often tie downtime to financial compensation, so while you may receive monetary credits for unforeseen outages, none of that ensures your customers will be able to access your service with 100% reliability. Cloud technology will continue to face questions until these concerns are put to rest.
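To put those percentages in perspective, a quick back-of-the-envelope calculation (a generic illustration, not tied to any particular vendor's SLA terms) shows how much annual downtime a given uptime guarantee actually permits:

```python
# How much downtime per year does an uptime guarantee actually allow?
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

def allowed_downtime_minutes(uptime_pct: float) -> float:
    """Minutes of downtime per year permitted by a given uptime percentage."""
    return MINUTES_PER_YEAR * (1 - uptime_pct / 100)

for pct in (99.5, 99.9, 100.0):
    print(f"{pct}% uptime allows about "
          f"{allowed_downtime_minutes(pct):.0f} minutes of downtime per year")
```

Even a seemingly strong 99.5% guarantee leaves room for roughly 44 hours of outage over a year, which is why the gap between "99.5%" and "100%" matters so much to customers.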
This is exactly why I believe 2014 could be the year of cloud reliability. We are already seeing signs of this happening. Late last year, IBM took the wraps off a 'cloud of clouds' toolkit-based service called InterCloud Storage that could go a long way towards ensuring service reliability. Put simply, InterCloud Storage lets customers store their data in a multi-vendor setup, so that the data remains available from an alternate server even if the primary vendor is facing downtime. In essence, IBM's patent-pending technology decouples the vendor's performance guarantee from the availability of any single server. With this, a 100% uptime assurance could actually become a reality!
Another interesting approach is being built by Microsoft. A recent patent application from the company reveals its work on a new performance-based pricing system for the cloud. Today, customers pay for cloud services without any assurance of how reliable the network will be; they pay the same price for an hour of service regardless of how much downtime they may face in that period. Microsoft's technology would let vendors charge customers based on performance metrics like uptime and I/O rate, instead of merely billing for the duration of consumption.
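The idea can be sketched in a few lines. The function below is purely illustrative: it is my simplified reading of performance-based billing, not the mechanism described in Microsoft's patent application, which covers richer metrics than uptime alone. All names and rates here are hypothetical:

```python
def performance_adjusted_charge(hours: float, rate_per_hour: float,
                                measured_uptime_pct: float) -> float:
    """Hypothetical bill: the customer pays only for the fraction of the
    billing period in which the service was actually up.

    Illustrative sketch only; a real scheme could also weight in metrics
    such as I/O rate or latency.
    """
    return hours * rate_per_hour * (measured_uptime_pct / 100)

# 720 hours (one month) at a hypothetical $0.10/hour, with 99% measured uptime:
bill = performance_adjusted_charge(720, 0.10, 99.0)
# Flat duration-based billing would charge for all 720 hours regardless of
# downtime; the performance-adjusted bill is reduced in proportion to it.
```

The contrast with today's model is the point: under duration-based billing the two customers pay the same, while under performance-based billing the one who suffered downtime pays less.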
The technologies announced by IBM and Microsoft could go a long way towards securing the future of the cloud. Cloud is already seeing terrific growth in industries that have traditionally relied on on-premise solutions. According to Mary Ellen Power, Vice President of Marketing at Silanis, a company that provides electronic signature technology to the US Army among other organizations, cloud-based solutions in their sector have lately been growing at 50% a year.
For businesses serving customers in regulated industries (banking, insurance, military, etc.), reliability and security are of utmost importance. Technologies like InterCloud and performance-based pricing would help assure the reliability of their solutions and, in turn, further the proliferation of the cloud over the next few years. What are your thoughts?
By Anand Srinivasan,
Anand is a writer and technology consultant based in Bangalore, India. He may be reached at email@example.com