
Leveraging a Virtualized Data Center to Improve Business Agility – Conclusion

Virtualized Data Center

Early designs of cloud computing focused on blade servers paired with an independent Storage Area Network (SAN) architecture. This blueprint consolidated CPU and memory into dense blade server configurations connected to large SANs over several high-speed networks, typically a combination of Fibre Channel and 10 Gigabit Ethernet. This has been the typical blueprint delivered by traditional off-the-shelf, pre-built virtualization infrastructure, especially in enterprise private cloud configurations.

More recently, hardware vendors have been shipping modular commodity hardware in dense configurations known as hyperscale computing. The most noticeable difference is the availability of hard drives or solid-state drives (SSDs) within the modules themselves. This gives the virtualization server and its VMs access to very fast persistent storage and eliminates the need for an expensive SAN to provide storage in the cloud. The hyperscale model not only dramatically changes the price/performance equation of cloud computing but, because it is modular, also allows you to build redundancy into a configuration designed around assumed failure. For the mCloud solution, this architecture gives the implementation team more flexibility by affording a “Lego block”-like model of combining compute nodes and storage nodes into optimal units within a VLAN grouping of a deployment. A large resource pool of compute and storage can thus be managed as individually controlled subsets of the data center infrastructure.
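
To make the “Lego block” model concrete, here is a minimal Python sketch of grouping compute and storage nodes into VLAN-scoped deployment units. The Node and DeploymentUnit types, hostnames, and capacity figures are hypothetical illustrations, not part of the mCloud product.

```python
# A minimal sketch of the "Lego block" model: compute and storage nodes are
# combined into deployment units, each scoped to its own VLAN. All names
# (Node, DeploymentUnit, the hostnames) are hypothetical illustrations.
from dataclasses import dataclass, field

@dataclass
class Node:
    hostname: str
    role: str              # "compute" or "storage"
    cpus: int = 0
    tb_storage: float = 0.0

@dataclass
class DeploymentUnit:
    vlan_id: int
    nodes: list[Node] = field(default_factory=list)

    def add(self, node: Node) -> None:
        self.nodes.append(node)

    def capacity(self) -> tuple[int, float]:
        """Total CPUs and TB of storage in this VLAN-scoped unit."""
        return (sum(n.cpus for n in self.nodes if n.role == "compute"),
                sum(n.tb_storage for n in self.nodes if n.role == "storage"))

# Carve a large resource pool into an individually controlled subset:
unit = DeploymentUnit(vlan_id=110)
unit.add(Node("compute-01", "compute", cpus=32))
unit.add(Node("compute-02", "compute", cpus=32))
unit.add(Node("storage-01", "storage", tb_storage=48.0))
print(unit.capacity())  # -> (64, 48.0)
```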

A hyperscale architecture is a synergistic infrastructure for SOA; it, too, builds on simple, commodity functions. Hyperscale architectures remove expensive system-management components and instead focus on what matters to the cloud: compute power and high-density storage.

Simple architectures are easy to scale. In other words, an architecture that builds system management and other resiliency features into the infrastructure to achieve high availability will be more difficult to scale, due to its complexity, than one composed of simpler commodity components that offloads failover to the application.

The hyperscale model makes it easy and cost-effective to create a dynamic infrastructure because its low-cost components are simple to acquire and replace, whether they sit in your own data center or in remote locations. In contrast, an architecture that puts the responsibility for HA in the infrastructure is much more complex and harder to scale.

Using this approach in a massively scalable system, IT operators have reportedly waited for many disks (even up to 100) to fail before scheduling a mass replacement, thereby making maintenance more predictable as well.

Enterprises require application availability, performance, scale, and a good price. If you’re trying to remain competitive today, your philosophy must assume that application availability is the primary concern for your business. And you will need the underlying infrastructure that allows your well-architected applications to be highly available, scalable, and performant.

Businesses and their developers are realizing that to take advantage of the cloud, their applications need to be based on a Service Oriented Architecture (SOA). SOA facilitates scalability and high availability (HA) because the services that comprise an SOA application can be easily deployed across the cloud. Each service performs a specific function and provides a standard, well-understood interface, so it is easily replicated and deployed. If a service fails, an identical service can typically support the user request transparently (e.g., clustered web servers), and failed services can be easily restarted, either locally or remotely (in the event of a disaster).
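
As an illustration of offloading failover to the application, below is a minimal Python sketch of a client that transparently retries a request against identical service replicas. The endpoint URLs are hypothetical, and this retry loop is only one possible approach, not a prescribed SOA implementation.

```python
# A minimal sketch of application-level failover across identical service
# replicas: if one replica fails, the request is transparently retried
# against the next. The replica URLs are hypothetical placeholders.
import urllib.request
import urllib.error

REPLICAS = [
    "http://web-01.example.com/health",
    "http://web-02.example.com/health",
    "http://web-03.example.com/health",
]

def call_service(replicas: list[str], timeout: float = 2.0) -> bytes:
    """Try each identical replica in turn; the caller never sees a
    single-node failure unless every replica is down."""
    last_error = None
    for url in replicas:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except (urllib.error.URLError, TimeoutError) as exc:
            last_error = exc   # this replica failed; try the next one
    raise RuntimeError("all replicas failed") from last_error

# Usage: body = call_service(REPLICAS)
```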

Well-written applications can take advantage of the innovative, streamlined, high-performing, and scalable architecture of hyperscale clouds. Hosted private clouds built on hyperscale hardware and leveraging open source software aim to provide a converged architecture, from software services down to hardware components, in which everything is easy to troubleshoot and can be replaced with minimal disruption.

With the micro-datacenter design, hardware failure is decoupled from application failure. If your application is designed to take advantage of the geographically dispersed architecture, your users will not notice hardware failures because the application is still running elsewhere. Similarly, if your application requires more resources, Dynamic Resource Scaling allows it to burst transparently from the user’s perspective.
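
The following is a minimal sketch of the kind of threshold-based decision loop a Dynamic Resource Scaling system might run; the function name, thresholds, and node counts are illustrative assumptions, not an actual DRS API.

```python
# A minimal sketch of a Dynamic Resource Scaling decision loop: when
# observed per-node load exceeds a high-water mark, capacity is added (a
# "burst"); when it falls below a low-water mark, capacity is released.
# All thresholds and limits here are hypothetical.
def scale_decision(load: float, nodes: int,
                   high: float = 0.80, low: float = 0.30,
                   min_nodes: int = 2, max_nodes: int = 16) -> int:
    """Return the desired node count for the current per-node load."""
    if load > high and nodes < max_nodes:
        return nodes + 1   # burst: add a node before users notice
    if load < low and nodes > min_nodes:
        return nodes - 1   # contract: release idle capacity
    return nodes

nodes = 2
for load in (0.55, 0.85, 0.90, 0.40, 0.20):
    nodes = scale_decision(load, nodes)
    print(f"load={load:.2f} -> nodes={nodes}")
```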

Conclusion

By abstracting the function of computation from the physical platform on which computations run, virtual machines (VMs) provide incredible flexibility for raw information processing. Close on the heels of compute virtualization came storage virtualization, which provides similar flexibility. Dynamic Resource Scaling technology, amplified by Carrier Ethernet Exchanges, provides high levels of location transparency, availability, security, and reliability. In fact, by leveraging hosted private clouds with DRS, an entire data center can be incrementally defined in software and temporarily deployed. One could say a hosted private cloud combined with dynamic resource scaling creates a secure and dynamic “burstable data center.”

Applications with high security and integration constraints, which IT organizations previously found difficult to deploy in burstable environments, are now candidates for deployment in the on-demand, scalable environments made possible by DRS. By using DRS, enterprises can scale the key components of the data center (compute, storage, and networking) in a public cloud-like manner (on-demand, OpEx model), yet retain the benefits of private cloud control (security, ease of integration).

Furthermore, in addition to elasticity, privacy, and cost savings, hyperscale architecture affords enterprises new possibilities for disaster mitigation and business continuity. Multiple, geographically dispersed nodes give you the ability to fail over across regions.

The end result is a quantum leap in business agility and competitiveness.

By Winston Damarillo

CEO and Co-founder of Morphlabs

Winston is a proven serial entrepreneur with a track record of building successful technology start-ups. Prior to his entrepreneurial endeavors, Winston was among the highest performing venture capital professionals at Intel, having led the majority of his investments to either a successful IPO or a profitable corporate acquisition. In addition to leading Morphlabs, Winston is also involved in several organizations that are focused on combining the expertise of a broad range of thought leaders with advanced technology to drive global innovation and growth.

