Is Performance Still An Issue In The Cloud?

The initial promise of cloud generated a lot of excitement, particularly in the test and development worlds. It was easy to use and just as easy to dismiss. That excitement gave way to disappointment as early adopters discovered that most, if not all, of the familiar old problems around administration, networks and performance applied to the cloud just as much as to dedicated equipment. With most first generation cloud platforms adopting iSCSI-based storage, performance stood out as a particular issue, and it became accepted opinion that cloud could never outpace dedicated equipment. Is that still the case? A third party benchmark comparing seven leading cloud platforms against a dedicated server sheds some interesting light on the discussion.

As cloud moves from being a bleeding edge technology to a more commonplace service tool, it's still common to find IT professionals assuming that cloud performance simply cannot match that of dedicated hardware. Early encounters with cloud platforms left many with sub-optimal experiences, and there is a widespread view in the market that high-IOPS applications are best kept in-house.

Are your four cores the same as mine?

To understand the origins of this belief, remember that the cloud was created as a tool for testing and development. As its adoption spread, and excitement over its potential grew, developers and then businesses put more and more demands on their cloud environments.

This led to a natural, if unfortunate, dynamic at the commodity end of the market. With a focus on expanding profit margins and controlling expenditure, many businesses decided to trim costs around the biggest single expense of a cloud platform: the back end. The short-sighted decision to save money with cheap storage systems led, predictably, to subpar performance, which in turn led to some of the more publicized outages of recent years.

Another factor is a lack of standardization. A recent study that benchmarked the performance of major players in the IaaS space discovered a wide variance behind identical specifications. For example, for a common instance type of 4 cores/16GB RAM, performance can vary between providers by as much as 50 percent. This makes some specifications misleading to the point of being meaningless: if one platform runs at only 50 percent of the rate of its neighbor, twice as many resources must be provisioned to do the same work.
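
To make the arithmetic concrete, here is a minimal sketch in Python. The numbers (a hypothetical workload sized at 8 instances, and a second provider benchmarking at 50 percent relative throughput) are illustrative assumptions, not figures from the study.

    import math

    def instances_needed(baseline_instances, relative_performance):
        # Instances required on a slower platform to match the baseline capacity.
        return math.ceil(baseline_instances / relative_performance)

    print(instances_needed(8, 1.0))   # fastest platform: 8 instances
    print(instances_needed(8, 0.5))   # platform at 50 percent: 16 instances, twice the bill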

In another example of unreliability, commodity cloud players with iSCSI in the back end have an extra Ethernet hop in their storage path that inevitably slows performance, so applications that require high IOPS do not run smoothly on those platforms. This is the classic trade-off of price versus performance.
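
To see why one extra hop matters, consider the latency arithmetic for a synchronous workload: at queue depth 1, each I/O must complete before the next is issued, so achievable IOPS is simply the inverse of per-operation latency. The sketch below uses assumed latency figures purely for illustration.

    # IOPS at queue depth 1 = 1000 ms / latency per I/O in ms.
    base_latency_ms = 0.5   # assumed service time of direct-attached storage
    hop_latency_ms = 0.3    # assumed extra round trip added by the Ethernet hop

    for name, latency in [("direct-attached", base_latency_ms),
                          ("iSCSI over Ethernet", base_latency_ms + hop_latency_ms)]:
        print(f"{name}: ~{1000 / latency:.0f} IOPS per outstanding request")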

All of which means that potential buyers must do thorough research on cloud platforms to understand what they will actually deliver. A detailed analysis of a platform's technologies is essential before making a sizeable investment. It is a pity that so few cloud providers share the details of their infrastructure with end users, or allow them to audit their platforms. Keeping commercial secrets may also keep embarrassing details hidden, but it leaves IT professionals relying on the rumor mill to decide where to host their applications.
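
In the absence of published specifications, measuring is the only dependable option. Below is a rough sketch of a random-read IOPS probe in Python; the file name, sizes and duration are assumptions, and a purpose-built benchmarking tool such as fio will produce far more trustworthy numbers, but it illustrates the kind of test worth running on any candidate platform before committing.

    import os
    import random
    import time

    PATH = "testfile.bin"            # hypothetical scratch file on the volume under test
    FILE_SIZE = 256 * 1024 * 1024    # 256 MiB; ideally larger than RAM so the page
                                     # cache doesn't mask the storage back end
    BLOCK = 4096                     # 4 KiB, the block size IOPS figures usually quote
    DURATION = 10                    # seconds to sample

    if not os.path.exists(PATH):
        with open(PATH, "wb") as f:
            f.write(os.urandom(FILE_SIZE))

    blocks = FILE_SIZE // BLOCK
    fd = os.open(PATH, os.O_RDONLY)
    ops = 0
    deadline = time.time() + DURATION
    while time.time() < deadline:
        # One random 4 KiB read per iteration, queue depth 1.
        os.pread(fd, BLOCK, random.randrange(blocks) * BLOCK)
        ops += 1
    os.close(fd)
    print(f"~{ops / DURATION:.0f} random-read IOPS")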

You get what you pay for

Given this background of cloud performance issues, some IT pros might be surprised to hear that cloud platforms can outperform dedicated servers. But they can, and there is even third party data proving that shared technologies can compete with and even outclass dedicated hardware. The platforms simply have to be built with performance in mind and managed correctly. Of course, it is still true that on several platforms, if performance is an issue, dedicated can be the better option. But why don't all cloud providers offer competitive performance?

The answer is roadblocks. The two most common obstacles work in tandem: expense and the relentless race to the bottom. When providers like Amazon and Google prioritize offering low-cost services, they must cut costs elsewhere to fund those offerings, and those cuts often mean a failure to invest in the proven technologies needed to deliver high performance. As a result, users eager to find an economical platform will often experience weak performance.

To “re-brand” cloud environments as reliable, speedy and secure, providers must invest the capital necessary to build an optimal, high-quality platform. Only then will they deliver the performance their customers deserve. This puts cloud providers who have already built out low-cost storage in a bind. Should they rip out their existing infrastructure and replace it with high-end technologies such as Fibre Channel? The disruption would be prohibitive, and the cost would surely have to be passed on to the user; when a customer can leave with little or no notice, that would risk the business. So it is unlikely that we will see a wholesale rebuild of any platform soon.

Is it game over for dedicated?

Inevitably there will be applications that do not run well in the cloud. For instance, some proprietary big data applications more or less have to run on dedicated servers. Customers also like to stick with familiar habits and suppliers, which will keep dedicated around for some time; look at how many mainframes are still deployed. But for the most part, the choice is obvious. Just take a look at the latest round of financial results from hosting providers: the numbers paint a picture of a flat or barely growing dedicated hosting customer base and revenues, while cloud revenues and momentum grow inexorably.

By Daniel Beazer

Daniel Beazer has an extensive history of research and strategy with hosting and cloud organizations. As director of strategy at FireHost, he oversees interactions with enterprise and strategic customers. In this role, he identifies pain points that are unique to high-level customers and utilizes his significant knowledge of cloud computing and hosting to help them.
