Gartner has recently predicted that by 2020, a corporate “no-cloud” policy will be as rare as a “no-internet” policy is today. CIOs will increasingly leverage a multitude of cloud computing providers across the entire IT stack to enable a huge variety of use cases and meet the requirements of their business unit peers. Indeed, the tides are shifting toward a “cloud-first” or even “cloud-only” policy...

Marc Wilczek

Need for Speed: Towards Satiating End Consumer Cloud Speed Craving


With every passing day, end consumers are conditioned to expect technology at large to become faster in terms of offered data rate, superior in terms of quality of service, and cheaper in terms of aggregate cost.

In essence, the modern-day consumer is in a constant state of technological rush – a distinctive trait that affects every facet of business operations and returns. Not long ago, users began to expect web-based content to be delivered the instant a system powered up. The advent of optical fiber networks made that feat attainable; after all, optical fiber meant reaching out to the world at the speed of light. On the shoulders of this giant came the concept of cloud computing, and intrinsic to it was a mounting end-user expectation of seamless cloud platform access, free of spatial or temporal bounds of any sort.

It is worth noting that incorporating optical fiber as the core physical-layer element is not, by itself, enough to make hosted cloud applications function at lightning speed. For that, the cloud architecture itself must be worked out. The end-user's geographical location and the underlying transport technology, in particular, strongly influence the cloud download speed actually experienced. Guaranteeing fast access to cloud-based information in general, and business-critical data in particular, therefore deserves ample consideration.

The primary aspect to consider is, quite simply, distance. Before two computing machines can exchange data reliably, they must synchronize through a three-way handshake, and while a file is being pulled down from the cloud, the end-user's system acknowledges each block of data it receives correctly. The sender cannot push new data indefinitely without these acknowledgements, so every extra kilometer adds waiting time. Logically, then, shorter distances between cloud hosts and end-users are preferable to longer ones: the idea is to free computers from idle time spent waiting for reception confirmations and put them to the actual task of data transit. In simple terms, a person located in New Delhi, India might find it extremely difficult to reach connectivity speeds of 11-14 Mbps with a cloud hosted in New York, whereas the rate could easily have reached 35 Mbps had the same cloud been hosted in, say, Dubai.
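The acknowledgement-waiting effect described above can be sketched with the classic window bound: a sender can have at most one window of unacknowledged data in flight per round trip, so throughput is capped at window size divided by round-trip time. A minimal illustration in Python; the 64 KB window and the RTT figures are illustrative assumptions, not measurements from the article:

```python
# Throughput is bounded by how much unacknowledged data can be
# "in flight" per round trip: throughput <= window_size / RTT.

def max_throughput_mbps(window_bytes: float, rtt_seconds: float) -> float:
    """Upper bound on throughput, in megabits per second."""
    return (window_bytes * 8) / rtt_seconds / 1e6

# Assumption: a classic 64 KB TCP receive window.
WINDOW = 64 * 1024  # bytes

# Assumed RTTs: a distant cloud (~200 ms) vs a nearby one (~50 ms).
far = max_throughput_mbps(WINDOW, 0.200)   # distant host
near = max_throughput_mbps(WINDOW, 0.050)  # nearby host

print(f"Far cloud:  {far:.1f} Mbps")   # roughly 2.6 Mbps
print(f"Near cloud: {near:.1f} Mbps")  # roughly 10.5 Mbps
```

Modern TCP stacks mitigate this with window scaling, but the dependence on round-trip time remains: halving the distance to the cloud roughly doubles the achievable rate for a fixed window.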

In addition to hosting clouds (ideally multiple) in close proximity to end-users, another important facet is routing traffic over the shortest possible paths, which makes perfect sense: the shorter the distance, the lower the latency and the greater the pull-down speed.
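The latency stakes of route length follow directly from physics: light propagates through fiber at roughly two-thirds of its vacuum speed, about 200,000 km/s. A rough sketch of the minimum round-trip delay for the city pairs mentioned earlier; the route lengths are ballpark great-circle figures assumed for illustration, not actual cable paths:

```python
# Assumption: effective propagation speed in optical fiber,
# roughly 2/3 of the vacuum speed of light.
FIBER_KM_PER_S = 200_000

def round_trip_ms(route_km: float) -> float:
    """Minimum round-trip propagation delay over a fiber route, in ms."""
    return 2 * route_km / FIBER_KM_PER_S * 1000

# Ballpark route lengths (illustrative only).
print(f"New Delhi - New York: {round_trip_ms(11750):.0f} ms")
print(f"New Delhi - Dubai:    {round_trip_ms(2200):.0f} ms")
```

Even before queuing and routing overhead, the transcontinental path carries a round-trip floor several times higher than the regional one, which is why shorter routes translate so directly into faster cloud access.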

Yet another move bound to guarantee high-speed access is incorporating the cloud within the network itself. Once the network is morphed into the cloud computing backbone, it can serve the required information from the nearest point, providing near-instant access.

Cloud computing, augmented with carefully tailored intelligent network additions, can fully satisfy the escalating craving for extremely fast access to remote storage and data processing services. As data rates improve, the pieces of the cloud technology puzzle fit together just perfectly.

By Humayun Shahid


With degrees in Communication Systems Engineering and Signal Processing, Humayun currently works as a lecturer at Pakistan's leading engineering university. The author has an inclination towards incorporating quality user experience design in smartphone and web applications.