
Storing Massive Files On The Cloud

On December 9th, 2010, Amazon announced in a blog post an increase in the maximum size of
object that can be stored on the Amazon Simple Storage Service (S3). Before the increase, the largest
object that could be stored was 5GB – anything larger had to be split into chunks and reassembled
within an application, using an intermediate server or on the client device. Now, the limit has been
increased to 5TB, an increase of approximately 1,000 times.

This new limit allows the storage of large databases, video files or scientific data, but to make use of
it, any object larger than 5GB must be uploaded using Amazon’s new Multipart Upload API. The original
object is split into smaller parts, which are uploaded separately and then stitched together on
the Amazon S3 side. Objects can then be worked on, either as a whole or in part, on the Amazon
Elastic Compute Cloud (EC2) before the finished product is stored back on S3.
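As a rough sketch of how the multipart flow works in practice (illustrated here with the modern boto3 Python SDK rather than the API available at the time of writing; the bucket and file names are hypothetical), the client initiates the upload, sends each chunk as a numbered part, and then asks S3 to assemble the parts into one object:

    import boto3

    s3 = boto3.client("s3")
    bucket, key = "my-example-bucket", "huge-dataset.bin"  # hypothetical names
    part_size = 100 * 1024 * 1024  # 100MB chunks; every part except the last must be at least 5MB

    # 1. Initiate the multipart upload and keep the upload ID.
    upload_id = s3.create_multipart_upload(Bucket=bucket, Key=key)["UploadId"]

    # 2. Upload the file in numbered parts, collecting the ETag of each part.
    parts = []
    with open("huge-dataset.bin", "rb") as f:
        part_number = 1
        while True:
            chunk = f.read(part_size)
            if not chunk:
                break
            resp = s3.upload_part(Bucket=bucket, Key=key, PartNumber=part_number,
                                  UploadId=upload_id, Body=chunk)
            parts.append({"PartNumber": part_number, "ETag": resp["ETag"]})
            part_number += 1

    # 3. Ask S3 to stitch the uploaded parts together into a single object.
    s3.complete_multipart_upload(Bucket=bucket, Key=key, UploadId=upload_id,
                                 MultipartUpload={"Parts": parts})

A useful side effect of this flow is that parts can be uploaded in parallel, and a failed part can be retried on its own rather than restarting the whole transfer.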

Data stored on Amazon S3 is designed for 99.99% annual availability and to withstand the
concurrent loss of data in two facilities, so your data should be accessible whenever you need it.
For less critical data, Amazon’s Reduced Redundancy Storage (RRS) offers a cheaper way of
storing your non-critical data by replicating it across fewer storage facilities. There is an associated
reduction in data durability, but the quoted availability is still 99.99%.
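Choosing RRS is simply a per-object storage-class setting made at upload time. A minimal sketch, again assuming the boto3 SDK and hypothetical bucket and file names:

    import boto3

    s3 = boto3.client("s3")

    # Upload a non-critical object under Reduced Redundancy Storage by
    # setting the storage class on the request.
    with open("render-cache.bin", "rb") as f:
        s3.put_object(Bucket="my-example-bucket", Key="render-cache.bin",
                      Body=f, StorageClass="REDUCED_REDUNDANCY")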

OpenStack has also announced that it is working on increasing the size of files that can be
stored on its open-source private cloud system, and it is expected that a 5TB file size limit will be
available with the “Bexar” release of OpenStack in early 2011.

A file size of 5TB is a massive amount of data, but it is clear that cloud storage limits are going to
increase even further in the near future. It is only a matter of time before the maximum object size is
in the petabyte or even exabyte range.

The only question then is exactly how you can generate enough data to need that amount of storage!

Source: Amazon Web Services Blog.

By CloudTweaks


