May 30, 2013

Is Performance Still An Issue In The Cloud?

By Daniel Price


The initial promise of cloud generated a lot of excitement, particularly in the test and development worlds. It was easy to use and just as easy to dismiss. That excitement gave way to disappointment as early adopters discovered that most, if not all, of the familiar old problems around administration, networks and performance applied to the cloud just as much as to dedicated equipment. With most first-generation cloud platforms adopting iSCSI-based storage, performance stood out as a particular issue, and it became accepted opinion that cloud could never outpace dedicated equipment. Is that still the case? A third-party benchmark test of seven leading cloud platforms against a dedicated server sheds some interesting light on the discussion.

As cloud moves from being a bleeding-edge technology to a more commonplace service tool, it is still common to find IT professionals assuming that cloud performance simply cannot match that of dedicated hardware. Early, sub-optimal experiences with cloud platforms have left their mark, and there is a widespread view in the market that high-IOPS applications are best left in-house.

Are your four cores the same as mine?

To understand the origins of this belief, remember that the cloud was created as a tool for testing and development. As its adoption spread, and excitement over its potential grew, developers and then businesses put more and more demands on their cloud environments.

This led to a natural, if unfortunate, dynamic at the commodity end of the market. With a focus on expanding profit margins and controlling expenditure, many businesses decided to trim costs around the single biggest expense of a cloud platform: the back end. The short-sighted decision to save money with cheap storage systems led predictably to subpar performance, which in turn led to some of the more publicized outages of recent years.

Another factor is a lack of standardization. A recent study that benchmarked the performance of major players in the IaaS space found wide variance behind identical specifications. For example, with a common instance type of 4 cores/16GB, performance can vary between one provider and another by as much as 50 percent. This makes some specifications misleading to the point of being meaningless: if one platform performs at only 50 percent of the rate of its neighbor, then twice as many resources must be provisioned to do the same work.
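The provisioning arithmetic above can be made concrete with a short sketch. The function and the throughput figures here are purely illustrative, not taken from the benchmark study the article cites:

```python
# Hypothetical illustration: if a provider's 4-core/16GB instance delivers
# only a fraction of a reference platform's throughput, more instances
# must be provisioned to reach the same aggregate capacity.
import math

def instances_needed(target_throughput: float,
                     reference_throughput: float,
                     relative_performance: float) -> int:
    """Instances required when each delivers only relative_performance
    times the reference instance's throughput."""
    effective = reference_throughput * relative_performance
    return math.ceil(target_throughput / effective)

# A platform running at 50 percent of its neighbor's rate needs
# twice the instances for the same workload.
print(instances_needed(1000, 100, 1.0))  # 10 on the reference platform
print(instances_needed(1000, 100, 0.5))  # 20 on the slower platform
```

Doubling the instance count roughly doubles the bill, which is why a nominally cheaper provider can end up more expensive in practice.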

In another example of unreliability, commodity cloud players with iSCSI in the back end have an Ethernet hop in their infrastructure that inevitably slows down performance. As a result, applications that require high IOPS do not run smoothly on those platforms. The result is the classic trade-off of price versus performance.
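One way to get a rough feel for a platform's synchronous write latency, the figure that the extra Ethernet hop inflates, is a simple micro-benchmark. This is only a minimal sketch in the spirit of dedicated tools like fio; a single short run on a temporary file is nowhere near a rigorous IOPS test:

```python
# Minimal sketch: time fsync'd small writes to estimate synchronous
# write operations per second on whatever storage backs the filesystem.
import os
import time
import tempfile

def rough_write_ops_per_sec(block_size: int = 4096, ops: int = 100) -> float:
    """Measure fsync'd block_size writes per second on local storage."""
    buf = os.urandom(block_size)
    with tempfile.NamedTemporaryFile() as f:
        start = time.perf_counter()
        for _ in range(ops):
            f.write(buf)
            f.flush()
            os.fsync(f.fileno())  # force each write through to storage
        elapsed = time.perf_counter() - start
    return ops / elapsed

if __name__ == "__main__":
    print(f"~{rough_write_ops_per_sec():.0f} synchronous 4 KB writes/sec")
```

Running the same probe on a dedicated server and on a cloud instance with networked iSCSI storage would make the latency penalty of that extra hop visible.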

All of which means that potential buyers must research cloud platforms thoroughly to understand what they will actually deliver. A detailed analysis of a platform's technologies is essential before making a sizeable investment. It is a pity that so few cloud providers share the details of their infrastructure with end users, or allow them to audit their platforms. Commercial secrets may stay kept and embarrassing details hidden, but it means that IT buyers have to rely on the rumor mill when deciding where to host their applications.

You get what you pay for

Given this background, some IT pros might be surprised to hear that cloud platforms can outperform dedicated servers. But they can, and there is even third-party data proving that shared technologies can compete with and even outclass dedicated hardware. The platforms simply have to be built with performance in mind and managed correctly. Of course, it is still true that on several platforms, if performance is an issue, dedicated can be the better option. So why don't all cloud providers offer competitive performance?

The answer is roadblocks. The two most common obstacles work in tandem: expense and the relentless race to the bottom. When providers like Amazon and Google prioritize low-cost services, they must cut costs elsewhere to enable those offerings, and those cuts often mean a failure to invest in the proven technologies needed to deliver high performance. As a result, users eager to find an economical platform will often experience weak performance.

To “re-brand” cloud environments as reliable, speedy and secure, providers must invest the capital necessary to build an optimal, high-quality platform. Only then will they deliver the performance their customers deserve. This puts cloud providers that have already built out low-cost storage in a bind. Should they rip out their existing infrastructure and replace it with high-end technologies such as Fibre Channel? The disruption would be prohibitive, and the cost would surely have to be passed on to the user; when a customer can leave with little or no notice, that would put the business at risk. So we are unlikely to see a wholesale rebuild of a platform any time soon.

Is it game over for dedicated?

Inevitably, there will be applications that do not run well in the cloud. For instance, some proprietary big data applications more or less have to run on dedicated servers. Customers also like to stick with familiar habits and suppliers, which will keep dedicated around for some time; look at how many mainframes are still deployed. But for the most part, the choice is obvious. Just take a look at the latest round of financial results from hosting providers. The numbers paint a picture of a flat or barely growing dedicated hosting customer base and revenues, while cloud revenues and momentum grow inexorably.

By Daniel Beazer

Daniel Beazer has an extensive history of research and strategy with hosting and cloud organizations. As director of strategy at FireHost, he oversees interactions with enterprise and strategic customers. In this role, he identifies pain points that are unique to high-level customers and uses his significant knowledge of cloud computing and hosting to help them.

Daniel Price

Daniel is a Manchester-born UK native who has abandoned cold and wet Northern Europe and currently lives on the Caribbean coast of Mexico. A former financial consultant, he now balances his time between writing articles for several industry-leading tech, sports, and travel sites (including CloudTweaks.com and MakeUseOf.com) and looking after his three dogs.

© 2024 CloudTweaks. All rights reserved.