Answers To Gartner’s Six Cloud Computing Risks

Cloud computing has been the subject of ever-increasing hype, and anything exposed to that much publicity attracts criticism, whether constructive or destructive. Gartner, an information technology research and advisory firm, published a report identifying several crucial risks in the cloud computing industry.

Below are the six risks highlighted by the Gartner report, each with a response.

1: Privileged User Access: This risk concerns who handles an organization's data in the cloud. Interestingly, most cloud datacenters operate with very few people on site; operations are largely automated, with software managing other software and data. An organization's own on-premises datacenter, by contrast, may be staffed by untrustworthy or unreliable employees. Precisely because automated processes, rather than people, look after the data, data in the cloud can be more secure than data kept in the organization's own hands.

2: Regulatory Compliance: This risk concerns the certifications and regulations that apply to a cloud service. The answer here is that it is in a cloud provider's own interest to obtain as many relevant certifications as it can. Since Gartner's report was published back in 2008, the prominent cloud providers have in fact acquired certifications for their services and datacenters.

3: Data Location: Organizations see it as a big issue if their data ends up somewhere beyond their control. Taking a step back, though, picture an employee walking out of the office with a laptop holding critical data; the chances are high that the laptop could be snatched. Seen that way, the location risk is much greater when data is not stored in the cloud at all. An intelligent response is to use multiple cloud services and store different portions of the data in different clouds, so that no single location holds everything (a small code sketch of this scattering idea appears after the six risks below).

4: Data Segregation: This risk deals with keeping one customer's data from mixing with anyone else's. Once again, the answer is automation: today's cloud services rely on highly automated systems that reduce the chances of data loss, or of data mixing between customers, to nearly zero.

5: Data Recovery: This risk is that customers might not be able to get their data back. In principle, if data is mission-critical, the organization should keep two or even three independent backups of it (a sketch of such a multi-destination backup also appears below). Just as importantly, an organization cannot blame a cloud service for a logical failure: if the organization deletes its own files, it cannot hold the cloud responsible for the loss.

6: Long-term Viability: This risk asks whether the cloud provider will stay in business for the long haul. There are two facets here. The first, as mentioned before, is that an organization should keep its mission-critical data backed up with other cloud services or in-house datacenters. The second concerns the continuity of the business service itself: a cloud provider can usually sustain a higher level of service continuity than a business could achieve on its own, particularly a small business. And looking at the cloud giants of today, they do not appear to be heading into difficulty anytime soon.
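
To make the data-scattering idea from risk 3 concrete, here is a minimal sketch in Python. It splits a payload into chunks and spreads them round-robin across several providers, so no single provider holds the whole dataset. The Provider class and its put/get methods are hypothetical stand-ins for real storage SDKs, and in practice each chunk should also be encrypted before upload.

```python
from dataclasses import dataclass, field


@dataclass
class Provider:
    """Hypothetical cloud store; a dict stands in for a real bucket."""
    name: str
    bucket: dict = field(default_factory=dict)

    def put(self, key: str, blob: bytes) -> None:
        self.bucket[key] = blob

    def get(self, key: str) -> bytes:
        return self.bucket[key]


def scatter(data: bytes, providers: list, chunk_size: int = 1024) -> list:
    """Round-robin the chunks so no provider ever sees the full payload."""
    locations = []
    for n, i in enumerate(range(0, len(data), chunk_size)):
        provider = providers[n % len(providers)]
        key = f"chunk-{n:06d}"
        provider.put(key, data[i:i + chunk_size])
        locations.append((provider, key))
    return locations


def gather(locations: list) -> bytes:
    """Reassemble the original payload from its scattered chunks."""
    return b"".join(provider.get(key) for provider, key in locations)


clouds = [Provider("cloud-a"), Provider("cloud-b"), Provider("cloud-c")]
payload = b"critical records " * 300
spots = scatter(payload, clouds)
assert gather(spots) == payload  # round trip succeeds
print({c.name: len(c.bucket) for c in clouds})  # chunks held per provider
```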

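Similarly, for the backup advice under risk 5, here is a minimal sketch that writes one snapshot to several independent destinations and verifies every copy by checksum. The local temp directories merely stand in for separate clouds or an in-house store; no real cloud API is used.

```python
import hashlib
import tempfile
from pathlib import Path


def backup(snapshot: bytes, destinations: list) -> str:
    """Write the same snapshot to every destination; return its checksum."""
    digest = hashlib.sha256(snapshot).hexdigest()
    for dest in destinations:
        dest.mkdir(parents=True, exist_ok=True)
        (dest / f"{digest}.bak").write_bytes(snapshot)
    return digest


def verify(digest: str, destinations: list) -> bool:
    """True only if every copy still matches the original checksum."""
    for dest in destinations:
        copy = dest / f"{digest}.bak"
        if not copy.exists():
            return False
        if hashlib.sha256(copy.read_bytes()).hexdigest() != digest:
            return False
    return True


root = Path(tempfile.mkdtemp())
targets = [root / "cloud-a", root / "cloud-b", root / "on-premises"]
checksum = backup(b"quarterly ledger", targets)
print("all copies intact:", verify(checksum, targets))
```
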
Going even further and looking from a different angle, there could come a point where cloud providers have grown so large that it would not be in governments' interests to let them fail, much like the banking sector of today.

By Haris Smith

Responses to Answers To Gartner’s Six Cloud Computing Risks

  1. One huge risk that Gartner does not talk about is outages. There have been several outages in the recent past where businesses were impacted and there was nothing the business could do.

    • @anantadya
      If one looks at public clouds like AWS as pure utility companies (which, over time, they will surely become), then, like any utility company, be it water, gas, electricity or compute, the provider will suffer an outage of some description; it is almost completely unavoidable. The challenge comes when your core business relies on the services of a cloud provider and that provider has an issue that takes your site down, which is why you have to have a secondary provider as part of a DR strategy.
       
      Simply relying on one provider is just crazy, as the AWS outages over the last two weeks show. Even if your application is designed to scale elastically and you have built redundancy into your host's site, that means nothing if the site itself goes down. When that happens, it is like losing your rudder in the middle of the ocean in the middle of a storm.
       
      In the “bricks and mortar” world, companies think nothing of having a secondary generator for their own office building, or another telco provider with extra lines in the event of an outage. It is about time their physical-world DR strategy was expanded to encompass their virtual world as well (a minimal failover sketch follows).
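
      To make that concrete, here is a minimal failover sketch in Python: probe the primary provider's health endpoint and fall back to the secondary when it stops answering. The URLs are placeholders, not real services, and a production DR setup would use DNS or load-balancer failover rather than a simple probe like this.

      ```python
      import urllib.request

      # Priority-ordered health endpoints; both URLs are placeholders.
      PROVIDERS = [
          "https://primary.example.com/health",    # main cloud provider
          "https://secondary.example.com/health",  # DR provider
      ]


      def healthy(url: str, timeout: float = 2.0) -> bool:
          """A provider counts as up if its health endpoint returns HTTP 200."""
          try:
              with urllib.request.urlopen(url, timeout=timeout) as resp:
                  return resp.status == 200
          except OSError:  # covers URLError, timeouts, connection resets
              return False


      def active_provider():
          """Return the first provider that answers, in priority order."""
          for url in PROVIDERS:
              if healthy(url):
                  return url
          return None  # both down: time to page a human


      print("routing traffic via:", active_provider())
      ```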