The Cloud: Focusing On Cloud Performance – Part 2
Companies are usually advised to conduct a workload analysis exercise before deciding to move a process to the cloud and choosing a provider. For vendors it is, again, critical to understand the loads of the companies they wish to serve in both business and computational terms (such as number of records, size distributions, and CPU and memory consumption). While this may sound obvious, the outcome of a study conducted by the IEEE explains why it needs to be articulated. The study, Performance Analysis of Cloud Computing Services for Many-Tasks Scientific Computing, published in November 2010, found that four major commercial cloud computing services fell short, by an order of magnitude, of the performance needed to be useful to the scientific community. It is understandable that the cloud was not designed for scientific workloads in the first place; yet nothing prevented cloud services from being touted as viable alternatives to grids and clusters for that community. This study, while seemingly unconnected to industry, still carries a relevant message for potential cloud service vendors: the cloud should be designed to cater to the workload. For related reasons, it may make sense to build clouds for specific tasks or industries.

Another observation of the study that should not be lost on us is that virtualization and resource time-sharing add significant performance overheads. Such overheads should be thoroughly assessed at the providing data center. Current practice does not accomplish more workload processing with less hardware; on the contrary, because of the overheads of virtualization, more hardware is required than would be needed if individual companies did the same work on hardware installed on their own premises. Savings can still result, however, from sharing hardware that, in individual installations, would sit idle outside peak loads.
The Cisco Global Cloud Index provides forecasts of IP traffic and guidance for vendors and for communities that depend on networks. Such forecasts are invaluable for both cloud and network providers. They appear to be based on the applications currently moving to the cloud, even though the Cisco forecast does project that by 2015 about 50 percent of all workloads will be handled by cloud data centers. Sizable workloads, such as order fulfillment for a major retailer or an OSS application for a telecom provider, could easily throw off such forecasts when added to the cloud. The study makes a very useful point: as more applications move to the cloud, traffic that would have stayed within a data center moves onto the Internet. The redundancy and availability features so often demanded of the cloud will likewise cause more traffic to be routed through the Internet, and transaction-intensive online workloads demand substantial network bandwidth. For these reasons and many more, it is important for a cloud provider to quantitatively assess the traffic its offering would generate. The provider therefore needs to work with network providers, share those assessments, and ensure that network bandwidth and capacity are available at the required scale before the offering goes live. This might be dismissed as superfluous in light of techniques meant to allow elastic IP capacity, but it remains a necessary exercise: it quantifies the task at hand in network terms and establishes whether network providers can match that size, with or without such techniques.
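The quantitative traffic assessment described above can start as back-of-the-envelope arithmetic. The sketch below estimates sustained bandwidth for a transaction-heavy workload; the transaction rate, payload size, and redundancy factor are all hypothetical placeholders, not Cisco figures.

```python
def required_mbps(tx_per_sec: float, bytes_per_tx: float,
                  redundancy_factor: float = 2.0) -> float:
    """Estimate sustained bandwidth in Mbps for a transaction workload.

    redundancy_factor is a crude stand-in for replication and
    availability traffic that the cloud routes over the Internet
    rather than keeping inside a single data center.
    """
    bits_per_sec = tx_per_sec * bytes_per_tx * 8 * redundancy_factor
    return bits_per_sec / 1_000_000

# Hypothetical peak: 5,000 transactions/s at 4 KB each, replicated once.
peak = required_mbps(5_000, 4_096)
print(f"Estimated peak bandwidth: {peak:.2f} Mbps")
```

Even a crude figure like this gives the cloud provider something concrete to put in front of a network provider when negotiating capacity, long before the offering goes live.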
The performance-related promises the cloud has to offer should not slacken subscribing companies’ interest in understanding their own performance requirements thoroughly and completely. While the onus of performance has indeed shifted to the vendor, a firm grip on one’s own requirements is still needed. Traditionally, understanding the performance requirements of client companies has been a weak link between clients and vendors; in cloud environments this link could snap altogether. Peak business loads and volumes, average usage patterns, expectations such as application response times, and operational constraints (such as the time windows available for batch runs) should be known to both parties. Well-understood requirements not only help define meaningful SLAs between the client and provider but, when shared early, help the provider design robust offerings.
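Turning expectations such as response times into something an SLA can state usually means working with percentiles rather than averages. The sketch below derives a p95 latency from measured samples and checks it against a target; the samples and the 500 ms target are hypothetical illustrations, not values from the article.

```python
# Hypothetical response-time samples (milliseconds) gathered from
# application monitoring during a representative business day.
from statistics import quantiles

response_ms = [120, 180, 95, 240, 310, 150, 480, 200, 175, 260,
               130, 90, 410, 220, 160, 300, 140, 190, 250, 110]

# p95: the latency 95 percent of requests stay under.
p95 = quantiles(response_ms, n=100)[94]
sla_target_ms = 500  # hypothetical contractual target

print(f"observed p95: {p95:.1f} ms (target {sla_target_ms} ms)")
print("SLA met" if p95 <= sla_target_ms else "SLA breached")
```

Knowing such figures before signing lets both parties agree on targets the workload can actually meet, rather than discovering the gap in production.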
By Suri Chitti
Suri Chitti (firstname.lastname@example.org) is a software and technology management professional. He has many years of experience and insight gained across different industrial domains, working for different stakeholders such as client companies, vendors, and consultancies, and has contributed to many aspects of software production, including functional analysis, testing, design, and management.