March 1, 2012

Leveraging a Virtualized Data Center to Improve Business Agility

By Winston Damarillo

Improve Business Agility – Part 2

Read Part 1…

The Micro-Data Center and Converged Cloud

A cloud can be built in many different ways from the myriad services available today. Fundamentally, though, the goal is to take the most price-performant hardware and networking, bound together by software, and dynamically compose usable, stable, and secure computing resources. Anyone who has gone through the procurement and evaluation process in the past year understands how difficult that has become: services address varying portions of the stack, leaving a maddening mix-and-match challenge with no single vendor responsible for the resulting cloud architecture.

Designed to mitigate these issues, converged infrastructure solutions, such as VCE’s vBlock, NetApp’s Flexpod, and Morphlabs’ mCloud DCU, compose compute, storage, and networking into one dynamic cloud system.

A dedicated converged infrastructure solution employs a “share-nothing” architecture. Enterprises deploying on clouds, public or private, have many concerns, chief among them price/performance, security, efficiency, and quality of service (QoS). A “share-nothing” architecture, in which an enterprise runs its cloud on dedicated hardware, alleviates most of these concerns: an enterprise that does not share infrastructure is guaranteed QoS and can remain compliant with regulations such as HIPAA or PCI.

From a service provider perspective, a dedicated converged infrastructure solution offers an innovative enterprise business model in the increasingly crowded IaaS market. One service provider can serve multiple enterprise clients securely while still offering all of the benefits of a typical public cloud, including scalability, elasticity, and on-demand capacity.

An enterprise can subscribe to a dedicated converged infrastructure solution, either behind its own firewall or remotely from a service provider, as a hosted private cloud. Critically, dedicated converged infrastructure solutions that include Dynamic Resource Scaling can expand capacity by provisioning additional compute or storage nodes as needed.

Performance will always be an issue on shared hardware, such as in a public cloud. The networking itself, especially over the Internet, is part of the performance problem. This has prompted a move to the more modular hyperscale computing architecture being delivered today. For further performance gains, SSD-based options are proliferating, though the mCloud DCU is the first cloud solution to employ SSDs in compute as well as storage. With this architecture as the basis for consuming cloud resources, the question becomes how to optimize utilization and performance.

Cloud Bursting and Carrier Ethernet

Dynamic Resource Scaling (DRS) is designed to scale hosted private clouds while mitigating the security, integration, compatibility, control, and complexity concerns associated with bursting to the public cloud.

DRS provides cloud bursting capability to a Virtual Private Cloud (VPC). With DRS, your applications burst to excess capacity that is exclusively dedicated to your environment, and DRS provides access to remote storage in addition to remote compute resources. According to Forrester Consulting’s The Next Maturity Step for Cloud Computing Management, “cloud-using companies are starting to accept cloud bursting as a means to help further reduce costs for their cloud environments and increase efficiency. The dynamic combination of external cloud resources with spare capacity on-premises is a key strategy to achieve this goal.”

The major difference between cloud bursting and Dynamic Resource Scaling is that DRS bursts a private cloud to a pool of dedicated resources and adds them to that private cloud, unlike the typical cloud bursting hybrid of private to public cloud. Dedicated hardware guarantees quality of service, performance, and security.
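
That difference can be sketched in a few lines of Python. Everything below is illustrative: the PrivateCloud, Node, reserve_pool, and public_provider objects and their methods are hypothetical stand-ins, not mCloud or DRS APIs. The sketch only shows that DRS draws from an unshared reserve and folds the new node into the existing private cloud, while classic bursting rents shared public capacity that sits outside it.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    node_id: str
    dedicated: bool          # True when the hardware serves a single tenant

@dataclass
class PrivateCloud:
    nodes: List[Node] = field(default_factory=list)

    def attach(self, node: Node) -> None:
        # The node is managed with the same portal, tools, and policies
        # as the rest of the private cloud.
        self.nodes.append(node)

def classic_burst(public_provider) -> Node:
    # Typical cloud bursting: rent shared, multi-tenant capacity from a
    # public provider; it sits outside the private cloud's QoS and
    # security domain.
    vm_id = public_provider.rent_vm()
    return Node(vm_id, dedicated=False)

def drs_scale(cloud: PrivateCloud, reserve_pool) -> Node:
    # DRS-style scaling: allocate a node from a dedicated, unshared
    # reserve (local or remote) and fold it into the private cloud itself.
    node = Node(reserve_pool.allocate(), dedicated=True)
    cloud.attach(node)
    return node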

Spare compute and storage farms can be accessed both locally and remotely.

Users can monitor their resource usage from the user interface and add compute capacity when needed, as specified by their site policy. Application traffic is load balanced by the virtual load balancer, which routes traffic to the additional VMs.
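
As a rough illustration of how such a site policy might behave, the minimal sketch below assumes a hypothetical cloud client exposing utilization, VM-count, and provisioning calls, plus a virtual load balancer with a register method; none of these names are actual mCloud interfaces.

SCALE_UP_THRESHOLD = 0.80   # add capacity when average CPU exceeds 80%
MAX_VMS = 20                # ceiling set by the site policy

def enforce_site_policy(cloud, load_balancer) -> None:
    # 'cloud' and 'load_balancer' stand in for the portal's underlying
    # services; the method names are illustrative only.
    utilization = cloud.average_cpu_utilization()
    if utilization > SCALE_UP_THRESHOLD and cloud.vm_count() < MAX_VMS:
        vm = cloud.provision_vm(flavor="standard")   # DRS draws on dedicated capacity
        load_balancer.register(vm)                   # traffic now routes to the new VM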

The benefits of the Dynamic Resource Scaling topology are:

Addressable Capacity – Capacity can be accessed remotely over Ethernet

Fault Tolerance – Potential for cross-regional fail-over increases fault tolerance

Uniformity – Unlike cloud bursting, where you burst to the public cloud, with mCloud DRS you are utilizing excess capacity from mCloud resources. The user interface, tools, and infrastructure are therefore the same as your private cloud, giving you one portal to manage and scale your private cloud on demand

Security – Enterprises burst to private, unshared resources, so data is more secure

Flexibility – With mCloud DRS, Tier-1 applications with security and integration constraints can utilize bursting capability, whereas typical cloud bursting is often limited to non-secure, less integration-intensive applications

In traditional cloud bursting, enterprises with a private cloud burst by adding virtual machines or storage from a public cloud, perhaps storing data in more than one datacenter and introducing vulnerabilities at the network and geographic levels. DRS essentially expands the virtual private cloud (a micro-datacenter) into a software-defined virtualized datacenter. With the entire infrastructure virtualized at the datacenter level, networking efficiencies can be gained on top of utilization and true private cloud elasticity, leading to savings for both operators and consumers.

Service providers globally are leveraging their relationships with each other to increase the benefits of hosted or virtual private clouds. This is shown in Figure 4, as well as in VMware’s vCloud Data Center model, in which service providers deploying VMware can share capacity.

If an enterprise in Los Angeles requires more compute or storage capacity, it can be provided from excess local capacity or from excess global capacity. This increases hardware utilization, gaining maximum efficiency for the service provider, and allows high-performance cross-regional failover options.
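
A provider’s placement decision might reduce to something like the following sketch, assuming hypothetical capacity-pool objects that report available nodes and link latency; the logic simply prefers local spare capacity and falls back to the nearest partner region.

def source_capacity(request_size, local_pool, partner_pools):
    # Prefer spare local capacity; otherwise fall back to the closest
    # partner region reachable over a high-speed link. Pool objects and
    # their methods are hypothetical.
    if local_pool.available_nodes() >= request_size:
        return local_pool                              # lowest latency, best utilization
    for pool in sorted(partner_pools, key=lambda p: p.latency_ms):
        if pool.available_nodes() >= request_size:
            return pool                                # cross-regional capacity / failover
    raise RuntimeError("no excess capacity available in the federation")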

Efficient and cost-effective long-distance bursting and sharing of resources requires high-speed, low-latency connections. Carrier Ethernet is a new technology that has allowed providers to achieve this.

Carrier Ethernet Exchanges allow Ethernet networks to exchange data over telecom networks, providing end-to-end Ethernet service. Because only one protocol (Ethernet) is involved, data integrity and efficiency increase. This gives the cloud user access to their remote, private cloud resources over a private network connection or Layer 2 VPN. Remote access to public cloud resources is still provided over the internet.

The advantages of Carrier Ethernet over the public internet are that cloud traffic is secure and controlled, and that performance is both better and more predictable.

Winston Damarillo

Winston Damarillo is the CEO and Co-founder of Morphlabs.

Winston is a proven serial entrepreneur with a track record of building successful technology start-ups. Prior to his entrepreneurial endeavors, Winston was among the highest performing venture capital professionals at Intel, having led the majority of his investments to either a successful IPO or a profitable corporate acquisition. In addition to leading Morphlabs, Winston is also involved in several organizations that are focused on combining the expertise of a broad range of thought leaders with advanced technology to drive global innovation and growth.