The Tolly Group Report: How Dimension Data beat out some big players to help keep your data up to date
(Update Revision: Initial Post, August 30th)
The next time you visit a busy commercial website – one, for example, that promotes products, sells them, ships them and generates buzz and conversation about them – spare a thought for the billions of bits of data running around behind the scenes to make the site’s videos, promos and “buy now” catalogues work smoothly, reliably and securely. Much of the infrastructure behind sites like these comes to you courtesy of a few organizations that have recognized the need for a more cohesive approach to the collection and redistribution of data in the cloud, built on a “network-centric” rather than a “best effort” architecture.
The technological wizardry behind complex websites tends to go unnoticed by the average consumer; at least until something goes wrong, at which point the great “fail whale” emerges to spoil the fun.
The cloud is growing by leaps and bounds, but a great deal of its infrastructure is built on existing components – often a hodgepodge of servers and programs assembled from elements that were not designed to scale to the degree, or with the versatility, now required. “Cloud” may sit at the top of every CIO’s agenda, but, according to Gartner Research, it still forms a relatively small portion of the $3.7 trillion IT industry.
This means we are still in the early days of the cloud as a primary technology. It has a way to go to emerge as a platform for more than just testing and development, and to become the place for hosting mission-critical data applications.
Enter the Tolly Group.
The Tolly Group was founded in 1989 to provide hands-on evaluation and certification of IT products and services. In a study conducted in May 2013, Tolly researchers tested the cloud performance of four major providers – Amazon, Rackspace, IBM and Dimension Data – across four areas: CPU, RAM, storage and network performance. Their findings exposed the price and performance limitations of today’s “commodity” or “best effort” clouds, which rely on traditional, server-centric architectures. The report found that among these four big players, the network-centric approach used by Dimension Data’s enterprise-class cloud helped lower cost and risk and accelerate the migration of mission-critical applications to the cloud.
Keao Caindec, CMO of the Cloud Solutions Business for Dimension Data, was pleased – but not surprised – by the results of Tolly’s stringent testing. He points out that the report tells an interesting story: not all clouds are created equal, and there are big differences between providers. This, he believes, will force end users to look more critically at the underlying performance of any provider they choose to do business with.
As an example, Caindec points out that when someone buys a router, switch or server, those pieces come with specs. Such specs don’t yet exist broadly in the cloud world. In many cases, he says, clouds were developed as low-cost compute platforms – a best effort. That is no longer enough. A provider must demonstrate far more reliability in terms of speed, security and scalability – for example, in how an application scales either up or out. When scaling up, a provider must be able to add more power to a single cloud server. When scaling out, it must be able to easily add more instances. He notes that clients in a growth phase must be careful about scaling up, since such expansions may not deliver the desired level of increased performance.
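Caindec’s scale-up/scale-out distinction can be made concrete with a little capacity arithmetic. The numbers below are hypothetical, not drawn from the Tolly report; they simply sketch why scaling out means adding instances to cover demand, while scaling up eventually hits a ceiling on what one server can become.

```python
import math

def servers_needed_scale_out(total_rps: float, rps_per_instance: float) -> int:
    """Scaling out: add identical instances until demand is covered."""
    return math.ceil(total_rps / rps_per_instance)

def can_scale_up(needed_rps: float, base_rps: float, max_multiplier: float) -> bool:
    """Scaling up: a single server can only grow so far before it maxes out."""
    return needed_rps <= base_rps * max_multiplier

# Illustrative load: 10,000 requests/sec; each instance handles 1,200
print(servers_needed_scale_out(10_000, 1_200))  # 9 instances
print(can_scale_up(10_000, 1_200, 4))           # False: one box can't grow enough
```

The asymmetry is the point Caindec makes: scale-out growth is open-ended, while scale-up gains are bounded and may not deliver the expected performance.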
Caindec points to some specific types of work that Dimension Data does with its high-profile clients: “We help them with their websites by leveraging the public cloud for testing and development. This allows granular configuration of the server, which means that each server is configured with as much storage/power as is needed.” Customers, he points out, often need to make sure they are not buying too much of one resource. A database application, for example, needs lots of memory – perhaps 32 GB on a server – but not necessarily a lot of computing power. Dimension Data, he says, takes care to help clients configure the exact amount of resources necessary, allowing them to save money by not over-provisioning.
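The over-provisioning problem Caindec describes can be sketched in a few lines. The instance shapes and prices below are invented for illustration – they are not Dimension Data’s catalog – but they show how a memory-heavy database app pays for idle CPUs under fixed bundles, while granular configuration charges only for what is actually used.

```python
# Hypothetical fixed instance tiers: (vCPUs, GB RAM) -> $/hour
FIXED_BUNDLES = {
    (4, 16): 0.20,
    (8, 32): 0.40,   # memory-heavy apps must also pay for 8 vCPUs
    (16, 64): 0.80,
}

def cheapest_bundle(cpus_needed: int, ram_needed: int):
    """Pick the cheapest fixed bundle that satisfies both requirements."""
    fits = [(price, shape) for shape, price in FIXED_BUNDLES.items()
            if shape[0] >= cpus_needed and shape[1] >= ram_needed]
    return min(fits)

def granular_price(cpus: int, ram_gb: int, per_cpu=0.03, per_gb=0.008) -> float:
    """Granular configuration: pay only for the resources the app uses."""
    return cpus * per_cpu + ram_gb * per_gb

# Database app: heavy on memory (32 GB), light on compute (2 vCPUs)
price, shape = cheapest_bundle(2, 32)
print(shape, price)           # (8, 32) at $0.40/hr: 6 of those vCPUs sit idle
print(granular_price(2, 32))  # 0.316 ($/hr) for exactly what the app needs
```

Under these made-up rates, the fixed bundle costs $0.40/hr while the right-sized configuration costs about $0.32/hr – the gap Caindec calls over-provisioning.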
Caindec finds the Tolly study eye-opening primarily because it raises the question: are low-cost clouds really low cost? “If the model is more best effort and because of that you have to run more servers, are you being as economical as you could?” For the most part, he points out, cloud providers’ prices are similar, but performance levels vary far more dramatically. In other words, “You may not be saving all the money you could. You may find a lower cost per hour, but in a larger environment, especially when running thousands of servers, this does not become economic.”
Caindec points out that at this point in IT history a great deal is still not well understood, and there are not many statistics. He hopes that IT managers and CTOs everywhere will be able to obtain more granular insights from the full Tolly report – such as the fact that more memory does not mean applications will run better or deliver better throughput. “If you scale up the size of the server, the server runs faster, but requires higher throughput to reach other servers.” Companies, he says, must be careful to benchmark their own applications. It is not necessary to hire a high-profile testing firm like Tolly to do this; testing tools are publicly available, and he strongly advises making such testing and awareness standard practice.
By Steve Prentice