How Amazon Brainwashed Us All (and Joyent Too)
When you enjoy a first-mover advantage in a new market as Amazon has for the last 7 years in the public cloud, you get to dictate the terms of the initial conversation (Think Henry Ford “You can have any color so long as it’s black”). That doesn’t mean we all have to keep listening, though, allowing them to brainwash everyone into thinking about cloud exclusively in a way that plays to their advantages. Instead of challenging Amazon to stick to the fundamentals of flexibility that supposedly necessitated the public cloud in the first place, companies like Joyent are following along…
Why cloud in the first place?
(Diagram: traditional capacity planning vs. cloud. Image source – thanks, http://www.chades.net)
This is the diagram that justified public cloud, showing how a classic on-premise solution forced you into capital expense based on predictions of future capacity. With cloud, we were told, you don’t have to make predictions that are ultimately doomed to fail anyway, causing either an overcapacity spending nightmare or an undercapacity situation that limits the business. It’s supposed to be about flexibility.
At least, until it’s inconvenient for Amazon to be flexible.
The deal you REALLY make with Amazon
When you sign up for AWS, the deal you’re really making is that you’ll agree to the following:
- I’ll let you dictate to me what sizes my VMs can be. Instead of letting me pick how many CPU cores and how much RAM I need for my workload, I’ll choose from among 18 sizes you pick for me and pay for resources I don’t need. After all, those cookie-cutter sizes make the multi-tenancy density easier and more profitable for you so I’ll do my part by paying for things I won’t use.
- I can’t possibly expect consistent performance from my VMs. Is that fair to you, really? Instead of holding you accountable for any quality, I’ll design around it with a deployment strategy that launches five VMs, runs performance benchmarks on each, and keeps the one good one.
- If I need to scale, I’ll do so horizontally. Why even consider a vertical scaling option? All my apps were designed with an on-premise solution in mind, where we could add memory whenever we wanted to. I’m sure something intended for a single machine will run on multiple VMs just fine as my demand grows. What could possibly go wrong?
- To get the best pricing, I’ll predict how much resource I expect to need and pay you a large upfront fee. I don’t even have to recoup all of it if my predictions were wrong. I’d just like to reserve that price for resources I might not use.
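The second bullet above describes a real workaround pattern: launch several identical VMs, benchmark each, keep the fastest, and terminate the rest. A minimal sketch of that selection logic follows; `launch_vm` and `benchmark` are hypothetical stand-ins for actual provisioning and benchmarking calls, stubbed here so the logic is runnable.

```python
import random

def launch_vm(i):
    """Stub: pretend to provision a VM and return its identifier."""
    return f"vm-{i}"

def benchmark(vm_id):
    """Stub: pretend to measure throughput (e.g. MB/s). On shared
    hardware, identical instance types can vary widely in practice,
    which is the variance this strategy works around."""
    random.seed(vm_id)  # deterministic per-VM, for the example only
    return random.uniform(50, 100)

def pick_best_vm(count=5):
    """Launch `count` VMs, benchmark each, and keep the fastest."""
    vms = [launch_vm(i) for i in range(count)]
    scores = {vm: benchmark(vm) for vm in vms}
    best = max(scores, key=scores.get)
    # Real code would terminate every VM except `best` here,
    # paying for five launches just to keep one.
    return best, scores[best]

best_vm, best_score = pick_best_vm()
print(best_vm, round(best_score, 1))
```

The point of the sketch is the cost baked into the pattern: the consumer pays for five launches and four teardowns to get the consistent performance the provider could have delivered in the first place.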
Wait a minute, what about that last one? Isn’t cloud supposed to be flexible resources on-demand instead of making a prediction on capacity need that will ultimately be incorrect? So how come the best AWS pricing comes with that exact same model?
Don’t ask Joyent, they just did the same exact thing.
A hypnotized market
The conversation around public cloud is starting to resemble that old Jon Lovitz SNL skit about the hypnotist with a Broadway show (Amazing Alexander, thanks Hulu), where too many people keep repeating the same lines Amazon feeds them over and over again. Amazon’s business model is to buy commodity hardware at insane volume direct from manufacturers and then sell a notion of flexibility that isn’t what it could be. It’s better than the old on-premise world, but when you overhear someone at a trade show debating the merits of an m1.xlarge vs. an m2.xlarge vs. an m2.2xlarge instead of just how many CPU cores and how much RAM they actually need, there’s room for improvement.
So when Joyent announced support for reserved pricing in a model very similar to what Amazon is doing, not only were they chasing a competitor whose economies of scale they probably can’t match, they were also sending a dangerous signal to the marketplace that the status quo is just fine.
Worldwide IT spend is estimated to be around $4 trillion and public cloud spend only $4 billion. What are the 99.9% waiting for? Something better.
Price/Performance > Price
There’s growing sentiment that price/performance should be a part of the purchase decision. Value with anything, including public cloud, is derived from a combination of factors and getting started with a performance characterization of public cloud providers has never been easier. Third party cloud benchmarking and performance reports, like those available from Cloud Spectator, can provide a guide for narrowing choices among IaaS vendors before running your own application-specific tests. Only then, and when considering flexibility factors, can you truly judge the total cost of ownership for a cloud solution.
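To make the price/performance idea concrete, here is an illustrative sketch of the kind of comparison a benchmarking report enables. The vendor names, hourly prices, and benchmark scores below are made-up placeholders, not real data from any provider.

```python
# Hypothetical offers: price per hour and a benchmark score
# (e.g. from a third-party report). Placeholder numbers only.
offers = {
    "vendor_a": {"price_per_hour": 0.48, "benchmark_score": 1200},
    "vendor_b": {"price_per_hour": 0.36, "benchmark_score": 700},
}

def perf_per_dollar(offer):
    # Higher is better: benchmark units delivered per dollar per hour.
    return offer["benchmark_score"] / offer["price_per_hour"]

ranked = sorted(offers, key=lambda v: perf_per_dollar(offers[v]),
                reverse=True)
print(ranked[0])  # → vendor_a
```

Note that vendor_b is cheaper per hour but loses on price/performance: 700 / 0.36 ≈ 1944 benchmark units per dollar versus vendor_a’s 1200 / 0.48 = 2500. Judging on the price column alone would have picked the worse value.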
The bottom line is that Amazon got to define the way the market thinks about public cloud functionality by going first, but that doesn’t mean they get to own the definition in perpetuity. As Rackspace CEO Lew Moorman recently said, “When public cloud came out, and you could suddenly provision a server in a minute when it used to take 3 months, those were intoxicating advances. . . you get drunk on them, but when things settle in there are tradeoffs.” As a consumer, you owe it to yourself to explore what those tradeoffs are and where the choices Amazon offers fall short.
By Pete Johnson,
Senior Director of Cloud Platform Evangelism, ProfitBricks
After a 19-year career with HP that included a 6-year stint running Enterprise Architecture for HP.com as well as being a founding member of HP’s public cloud efforts, Pete Johnson joined ProfitBricks in February 2013 as Senior Director of Cloud Platform Evangelism. Known as @nerdguru on Twitter, Pete is active in social media, trade shows, and meetups to raise awareness of Cloud Computing 2.0 from ProfitBricks.