May 30, 2013

Is Performance Still An Issue In The Cloud?

By Daniel Price


The initial promise of cloud generated a lot of excitement, particularly in the test and development worlds. It was easy to use and just as easy to dismiss. That excitement gave way to disappointment as early adopters discovered that most, if not all, of the familiar old problems around administration, networks and performance applied to the cloud just as much as to dedicated hardware. With most first-generation cloud platforms adopting iSCSI-based storage, performance stood out as a particular issue, and it became accepted opinion that cloud could never outpace dedicated equipment. Is that still the case? A third-party benchmark test pitting seven leading cloud platforms against a dedicated server sheds some interesting light on the discussion.

As cloud moves from being a bleeding-edge technology to a more commonplace service tool, it is still common to find IT professionals assuming that cloud performance simply cannot match that of dedicated hardware. Early encounters with cloud platforms left many users with sub-optimal experiences, and there is a widespread view in the market that high-IOPS applications are best left in-house.

Are your four cores the same as mine?

To understand the origins of this belief, remember that the cloud was created as a tool for testing and development. As its adoption spread, and excitement over its potential grew, developers and then businesses put more and more demands on their cloud environments.

This led to a natural, if unfortunate, dynamic at the commodity end of the market. With a focus on expanding profit margins and controlling expenditure, many businesses decided to trim costs around the biggest single expense of a cloud platform: the back end. The short-sighted decision to save money by using cheap storage systems led, predictably, to subpar performance, which in turn led to some of the more publicized outages of recent years.

Another factor is a lack of standardization. A recent study that benchmarked the performance of major players in the IaaS space discovered a wide variance in specifications. For example, for a common instance type of 4 cores/16GB, the variance between one provider and another can be as much as 50 percent. This means some specifications are misleading to the point of being meaningless: if one platform performs at only 50 percent of the rate of its neighbor, then twice as many resources must be provisioned to do the same work, as the sketch below illustrates.
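As a back-of-envelope illustration (the benchmark scores and throughput target below are hypothetical, not figures from the study), a few lines of Python show how that gap translates into capacity:

import math

# Hypothetical relative benchmark scores for the same nominal 4-core/16GB instance
scores = {"provider_a": 100, "provider_b": 50}  # provider_b runs at half the speed

target_throughput = 400  # arbitrary units of work the application needs

for provider, score in scores.items():
    instances_needed = math.ceil(target_throughput / score)
    print(f"{provider}: {instances_needed} x 4-core/16GB instances required")

Provider A needs 4 instances while provider B needs 8: the nominally identical, and often cheaper, instance can end up costing more once the performance gap is priced in.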

In another example of unreliability, commodity cloud players with iSCSI in the back end have an extra Ethernet hop in their infrastructure that inevitably slows down performance. As a result, applications that require high IOPS do not run smoothly on those platforms, and buyers are left with the classic trade-off of price versus performance. A rough calculation shows why the extra hop matters.
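A minimal sketch, using assumed latencies rather than measured figures from any particular platform: every I/O pays the round-trip cost of the extra hop, and for a queue-depth-1 workload the achievable IOPS is roughly the inverse of the per-I/O latency.

# Assumed, illustrative latencies -- not measurements of any real provider.
base_latency_s = 0.0005   # 0.5 ms to reach the storage back end directly
hop_latency_s = 0.0005    # extra 0.5 ms added by the Ethernet/iSCSI hop

for label, latency in [("without extra hop", base_latency_s),
                       ("with extra hop", base_latency_s + hop_latency_s)]:
    iops = 1 / latency    # queue depth 1: one I/O completes before the next starts
    print(f"{label}: ~{iops:,.0f} IOPS")

Doubling the per-I/O latency halves single-threaded IOPS. Deeper queues can hide some of that cost, but latency-sensitive, high-IOPS applications feel it directly.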

All of which means that potential buyers must do thorough research on cloud platforms to understand what they will actually deliver. A detailed analysis of a platform's technologies is essential before making a sizeable investment. It is a pity that so few cloud providers share the details of their infrastructure with end users, or allow them to audit their platforms. Commercial secrets may be kept and embarrassing details hidden, but it means that IT professionals have to rely on the rumor mill when deciding where to host their applications.

You get what you pay for

Given that background of cloud performance issues, some IT pros might be surprised to hear that cloud platforms can outperform dedicated servers. But they can, and there is even third-party data proving that shared technologies can compete with and even outclass dedicated hardware. The platforms simply have to be built with performance in mind and managed correctly. Of course, it is still true that on several platforms, if performance is an issue, dedicated can be the better option. So why don't all cloud providers offer competitive performance?

The answer is roadblocks. The two most common obstacles work in tandem: expense and the relentless race to the bottom. When providers like Amazon and Google prioritize offering low-cost services, they must cut costs elsewhere to enable those offerings, and those cuts often mean a failure to invest in the proven technologies needed to deliver high performance. As a result, users eager to find an economical platform will often experience weak performance.

To “re-brand” cloud environments as reliable, speedy and secure, providers must invest the capital necessary to build an optimal, high-quality platform. Only then will they deliver the performance their customers deserve. This puts cloud providers who have already built out low-cost storage in a bind. Should they rip out their existing infrastructure and replace it with high-end technologies such as Fibre Channel? The disruption would be prohibitive, and the cost would surely have to be passed on to the user; with customers able to leave at little or no notice, that would put the business at risk. So it is unlikely that we will see a wholesale rebuild of any platform soon.

Is it game over for dedicated?

Inevitably there will be applications that do not run well in the cloud. Some proprietary big data applications, for instance, more or less have to be run on dedicated servers. Customers also like to stick with familiar habits and suppliers, which will keep dedicated around for some time; look at how many mainframes are still deployed. But for the most part the choice is obvious. Just take a look at the latest round of financial results from hosting providers: the numbers paint a picture of a flat or barely growing dedicated hosting customer base and revenues, while cloud revenues and momentum grow inexorably.

By Daniel Beazer

Daniel Beazer has an extensive history of research and strategy with hosting and cloud organizations. As director of strategy at FireHost, he oversees interactions with enterprise and strategic customers, identifying the pain points unique to high-level customers and drawing on his significant knowledge of cloud computing and hosting to help them.

Daniel Price

Daniel is a Manchester-born UK native who has abandoned cold and wet Northern Europe and currently lives on the Caribbean coast of Mexico. A former financial consultant, he now divides his time between writing articles for several industry-leading tech, sports and travel sites (including CloudTweaks.com and MakeUseOf.com) and looking after his three dogs.