Leveraging a Virtualized Data Center to Improve Business Agility – Conclusion


Read Part 1, Part 2…

Virtualized Data Center – Keeping it Simple

Early designs of cloud computing focused on blade servers paired with an independent Storage Area Network (SAN). This blueprint consolidated CPU and memory into dense blade server configurations connected to large SANs via several high-speed networks (typically a combination of Fibre Channel and 10 Gigabit Ethernet). It is the typical blueprint delivered by traditional, off-the-shelf, pre-built virtualization infrastructure, especially in enterprise private cloud configurations.

More recently, hardware vendors have been shipping modular commodity hardware in dense configurations known as hyperscale computing. The most noticeable difference is the availability of hard drives, or solid state drives (SSDs), within the modules themselves. This gives the virtualization server and its VMs access to very fast persistent storage and eliminates the need for an expensive SAN to provide storage in the cloud. The hyperscale model not only dramatically changes the price/performance profile of cloud computing but, because it is modular, also allows you to build redundancy into the configuration for an assumed-failure architecture. For the mCloud solution, this architecture gives the implementation team more flexibility by affording a “Lego block”-like model of combining compute nodes and storage nodes into optimal units within a VLAN grouping of a deployment. A large resource pool of compute and storage can thus be managed as individually controlled subsets of the data center infrastructure.
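
The “Lego block” idea can be sketched in a few lines. This is a purely illustrative model, assuming nothing about any actual mCloud API; the function and field names are hypothetical.

```python
# Hypothetical sketch: combine commodity compute and storage nodes into one
# independently managed deployment unit tied to a VLAN. All names here are
# illustrative, not part of any real management interface.

def build_unit(vlan_id, compute_nodes, storage_nodes):
    """Pair compute and storage modules into a single controlled unit."""
    return {
        "vlan": vlan_id,
        "compute": list(compute_nodes),
        "storage": list(storage_nodes),
    }

# One unit of the larger resource pool, scoped to VLAN 101.
unit = build_unit(101, ["compute-1", "compute-2"], ["storage-1"])
```

Each unit is a self-contained block: adding capacity means stamping out another unit rather than re-architecting a shared SAN.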

A hyperscale architecture is a synergistic infrastructure for SOA: it, too, builds on simple commodity components. Hyperscale architectures remove expensive system management components and instead focus on what matters to the cloud: compute power and high-density storage.

Simple architectures are easy to scale. In other words, an architecture that builds system management and other resiliency features into the infrastructure to achieve high availability will be harder to scale, due to its complexity, than an architecture of simpler commodity components that offloads failover to the application.

The hyperscale model makes it easy and cost-effective to create a dynamic infrastructure because its low-cost, easily replaceable components are simple to acquire and can be located either in your data center or in remote sites. In contrast, an architecture that puts the responsibility for HA in the infrastructure is much more complex and harder to scale.

Using this approach in a massively scalable system, IT operators have reportedly waited for many disks (even up to 100) to fail before scheduling a mass replacement, making maintenance more predictable as well.
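
The batched-replacement policy amounts to a simple threshold check. The sketch below is a minimal illustration, assuming the application layer already tolerates individual disk loss; the threshold value and names are hypothetical.

```python
# Hypothetical sketch: defer disk replacement until enough disks have failed
# to justify one mass swap, trading immediacy for predictable maintenance.

FAILURE_THRESHOLD = 100  # illustrative figure from the reported practice

def schedule_mass_replacement(failed_disks):
    """Return True once the failed-disk count justifies a bulk replacement."""
    return len(failed_disks) >= FAILURE_THRESHOLD

# 42 failed disks: keep running; application-level failover covers the gap.
schedule_mass_replacement([f"disk-{i}" for i in range(42)])
```

Because failover lives in the application, a failed disk is an accounting entry rather than an emergency.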

Enterprises require application availability, performance, scale, and a good price. If you’re trying to remain competitive today, your philosophy must assume that application availability is the primary concern for your business. And you will need the underlying infrastructure that allows your well-architected applications to be highly available, scalable, and performant.

Businesses and their developers are realizing that in order to take advantage of cloud, their applications need to be based on a Service Oriented Architecture (SOA). SOA facilitates scalability and high availability (HA) because the services which comprise an SOA application can be easily deployed across the cloud. Each service performs a specific function, provides a standard and well-understood interface and, therefore, is easily replicated and deployed. If a service fails, there is typically an identical service that can transparently support the user request (e.g., clustered web servers). If any of these services fail, they can be easily restarted, either locally or remotely (in the event of a disaster).
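
The transparent-failover behavior described above can be sketched as a caller retrying identical service replicas. This is a toy illustration of the pattern, not any specific framework; the function names are hypothetical stand-ins for networked service calls.

```python
# Hypothetical sketch of SOA-style transparent failover: because replicas of a
# service are identical and well-understood, any healthy one can serve the
# request, and the caller never learns that one of them failed.

def call_with_failover(replicas, request):
    """Try each identical replica in turn until one handles the request."""
    for replica in replicas:
        try:
            return replica(request)
        except ConnectionError:
            continue  # this replica is down; fall through to an identical one
    raise RuntimeError("all replicas failed")

def healthy_replica(req):
    return f"handled:{req}"

def crashed_replica(req):
    raise ConnectionError("replica down")

# The first replica has failed, but the user request still succeeds.
call_with_failover([crashed_replica, healthy_replica], "GET /")
```

In a real deployment the replicas would sit behind a load balancer or service registry, but the principle is the same: identical, restartable services make failure invisible to the user.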

Well-written applications can take advantage of the innovative, streamlined, high-performing, and scalable architecture of hyperscale clouds. Hosted private clouds built on hyperscale hardware and leveraging open source aim to provide a converged architecture (from software services down to hardware components) in which everything is easy to troubleshoot and easily replaceable with minimal disruption.

With the micro-datacenter design, failure of the hardware is decoupled from the failure of the application. If your application is designed to take advantage of the geographically dispersed architecture, your users will not be aware of hardware failures because the application is still running elsewhere. Similarly, if your application requires more resources, Dynamic Resource Scaling allows your application to burst transparently from the user’s perspective.
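
At its core, the bursting behavior is a placement decision: fill local capacity first, overflow to remote nodes. The sketch below is an illustrative model only; the capacity figure and names are hypothetical, not part of any Dynamic Resource Scaling API.

```python
# Hypothetical sketch of Dynamic Resource Scaling: workload that exceeds the
# local node group's capacity "bursts" to remote capacity, transparently from
# the user's perspective. Numbers and names are illustrative.

LOCAL_CAPACITY = 8  # VMs the local node group can host

def place_workload(requested_vms):
    """Split a workload between local capacity and remote burst capacity."""
    local = min(requested_vms, LOCAL_CAPACITY)
    burst = requested_vms - local
    return {"local": local, "burst": burst}

# A request for 12 VMs: 8 run locally, 4 burst to a remote site.
place_workload(12)
```

The user sees one application; the placement of its VMs across sites is an infrastructure detail.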

Conclusion

By abstracting the function of computation from the physical platform on which computations run, virtual machines (VMs) provide incredible flexibility for raw information processing. Close on the heels of compute virtualization came storage virtualization, which provides similar flexibility. Dynamic Resource Scaling (DRS) technology, amplified by Carrier Ethernet Exchanges, provides high levels of location transparency, high availability, security, and reliability. In fact, by leveraging hosted private clouds with DRS, an entire data center can be incrementally defined by software and temporarily deployed. One could say a hosted private cloud combined with dynamic resource scaling creates a secure and dynamic “burst-able data center.”

Applications with high security and integration constraints, and which IT organizations previously found difficult to deploy in burst-able environments, are now candidates for deployment in on-demand scalable environments made possible by DRS. By using DRS, enterprises have the ability to scale the key components of the data center (compute, storage, and networking) in a public cloud-like manner (on-demand, OpEx model), yet retain the benefits of private cloud control (security, ease of integration).

Furthermore, in addition to the elasticity, privacy, and cost savings, hyperscale architecture affords enterprises new possibilities for disaster mitigation and business continuity. Having multiple, geographically dispersed nodes gives you the ability to fail over across regions.

The end result is a quantum leap in business agility and competitiveness.

By Winston Damarillo, CEO and Co-founder of Morphlabs

Winston is a proven serial entrepreneur with a track record of building successful technology start-ups. Prior to his entrepreneurial endeavors, Winston was among the highest performing venture capital professionals at Intel, having led the majority of his investments to either a successful IPO or a profitable corporate acquisition. In addition to leading Morphlabs, Winston is also involved in several organizations that are focused on combining the expertise of a broad range of thought leaders with advanced technology to drive global innovation and growth.
