Why All Those New Google / Amazon Data Centers Won’t Really Go To Waste – Cloud Computing’s First Supercomputer

As the market leaders of the rapidly growing Cloud Computing industry, both Google and Amazon are steadily increasing the size of their data centers. Critics, however, ask what will happen if or when demand for all that capacity falls. Unlike the rest of us, who consume Google and Amazon Cloud services elastically, scaling up and down as our needs change, the providers themselves own the underlying hardware and cannot be nearly as flexible. Should public Cloud demand ever drop, though, Amazon has already demonstrated another use for its growing data centers: Cloud Computing’s first supercomputer.

By stringing together a cluster of 30,000 processing cores, Amazon’s EC2, or Elastic Compute Cloud, managed to reach rank 42 on the TOP500 list of the world’s supercomputers. Granted, with a score of 240 trillion calculations per second it is nowhere near first in performance, but it is by no means an average performer either. The main point is that it is available to anyone, unlike the typical supercomputer cluster, which is built with a dedicated purpose in mind and therefore offers rather limited access (and even longer waiting lines).
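To put those figures in perspective, here is a quick back-of-the-envelope calculation using only the numbers quoted above. It assumes the work is split evenly across all the cores, which real benchmark runs never quite achieve, so treat it as a rough sketch rather than a measured result:

    # Rough per-core throughput, using only the figures quoted in this article.
    # Assumes a perfectly even split across cores - a simplification, since
    # real benchmark runs never scale this cleanly.
    total_flops = 240e12   # 240 trillion calculations per second
    cores = 30_000         # size of the EC2 cluster described above
    per_core = total_flops / cores
    print(f"~{per_core / 1e9:.0f} billion calculations per second per core")
    # -> ~8 billion calculations per second per core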

Amazon proved the point by running an actual paid-for supercomputing job at a mere $1,279 an hour. That may sound like a lot to some people, but anyone who has built a dedicated supercomputer cluster will be shaking their head in disbelief (and probably regret) at the millions of dollars it costs to create one, let alone keep it running. What boggles the mind even further is that Amazon did this while running all of its other Cloud services at the same time.
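As a rough, hypothetical comparison, the rental model only starts to look expensive after a very long stretch of continuous use. The $5 million build cost below is an illustrative assumption for a dedicated cluster, not a figure from Amazon; the hourly rate is the one quoted above:

    # Hypothetical break-even sketch: hourly rental vs. a dedicated cluster.
    # The $5M build cost is an assumed, illustrative figure; the hourly rate
    # is the one quoted in this article. Power, staff and upgrades for the
    # dedicated cluster are ignored, which flatters the dedicated option.
    hourly_rate = 1279              # USD per hour for the EC2 run described above
    assumed_build_cost = 5_000_000  # USD, illustrative assumption only
    break_even_hours = assumed_build_cost / hourly_rate
    print(f"Break-even after ~{break_even_hours:,.0f} hours "
          f"(~{break_even_hours / 24:.0f} days of non-stop use)")
    # -> Break-even after ~3,909 hours (~163 days of non-stop use)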

While there are more supercomputers now than ever before, the need for supercomputing has grown just as quickly. More and more scientists require supercomputer simulations for the best results in fields such as DNA sequencing and molecular dynamics. And these days it is not just scientists who need one: as big data continues to grow, so does the demand for supercomputers to perform global financial risk analysis or to render the latest in 3D entertainment.

This means that if a few thousand processing cores sit unused during the lean months, Amazon or Google can simply offer a supercomputing deal to everyone currently waiting in line for a turn on a dedicated cluster. Bear in mind that since this is only the first Cloud Computing based supercomputing run, there are bound to be plenty of areas in which it can be optimized further. With Amazon’s EC2 already achieving a very impressive rank of 42, it isn’t too much of a stretch to expect the next Cloud Computing based supercomputer to place in the top 10.

By Muz Ismail
