Leveraging a Virtualized Data Center to Improve Business Agility

Improve Business Agility – Part 2


The Micro-Data Center and Converged Cloud

It is possible to build a cloud from the myriad services available today in many different ways. Fundamentally, however, the goal is to take the most price-performant hardware and networking and bind them with software to dynamically compose usable, stable, and secure computing resources. If you have been through the procurement and evaluation process in the past year, you will understand how difficult that has become: available services address variable portions of the stack, leaving a maddening mix-and-match challenge with no single vendor responsible for the resulting cloud architecture.

Designed to mitigate these issues, converged infrastructure solutions, such as VCE’s Vblock, NetApp’s FlexPod, and Morphlabs’ mCloud DCU, compose compute, storage, and networking into one dynamic cloud system.

A dedicated converged infrastructure solution employs a “share-nothing” architecture. Enterprises deploying on clouds, public or private, face several concerns, chief among them price/performance, security, efficiency, and quality of service (QoS). A “share-nothing” architecture, in which an enterprise runs its cloud on dedicated hardware, alleviates most of these concerns: when not sharing infrastructure, an enterprise is guaranteed QoS and can remain compliant with regulations such as HIPAA or PCI DSS.

From a service provider perspective, a dedicated converged infrastructure solution offers an innovative Enterprise business model in the increasingly crowded IaaS market. One service provider can service multiple enterprise clients securely while still offering all of the benefits of a typical public cloud, including scalability, elasticity and on-demand capacity.

An enterprise can subscribe to a dedicated converged infrastructure solution, either behind its own firewall or remotely from a Service Provider, as a hosted private cloud. Critically, dedicated converged infrastructure solutions that include Dynamic Resource Scaling can expand capacity by provisioning additional Compute or Storage Nodes, as needed.

Performance will always be an issue on shared hardware, such as in a public cloud. The networking itself, especially over the Internet, is part of the performance problem. This has prompted a move to the more modular hyperscale computing architecture being delivered today. For further performance gains, SSD-based options are proliferating, though the mCloud DCU is the first cloud solution to employ SSDs in compute as well as storage. With this architecture as the basis for consuming cloud resources, the question becomes how we can optimize utilization and performance.

Cloud Bursting and Carrier Ethernet

Dynamic Resource Scaling (DRS) technology is designed to scale hosted private clouds while mitigating the security, integration, compatibility, control, and complexity concerns associated with bursting to the public cloud.

DRS provides cloud bursting capability to a Virtual Private Cloud (VPC). With DRS, your applications burst to excess capacity that is exclusively dedicated to your environment, and DRS provides access to remote storage in addition to remote compute resources. According to Forrester Consulting’s report The Next Maturity Step for Cloud Computing Management, “cloud-using companies are starting to accept cloud bursting as a means to help further reduce costs for their cloud environments and increase efficiency. The dynamic combination of external cloud resources with spare capacity on-premises is a key strategy to achieve this goal.”

The major difference between cloud bursting and Dynamic Resource Scaling is that DRS bursts a private cloud to a pool of dedicated resources, adding them to your private cloud, rather than forming the typical private-to-public hybrid. Having dedicated hardware guarantees quality of service, performance, and security.

Spare Compute and Storage farms can be accessible both locally and remotely.

Users can monitor their resource usage from the user interface and add compute capacity when needed, as specified by their site policy. Application traffic is load-balanced by the virtual load balancer, which routes traffic to the additional VMs.
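A site policy of this kind can be thought of as a simple threshold-based control loop. The sketch below illustrates the idea, assuming a hypothetical DRS-style controller; the class names, thresholds, and action strings are illustrative, not an actual mCloud interface.

```python
# Hypothetical sketch of a site-policy-driven scaling check for a DRS-style
# controller. All names and thresholds are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SitePolicy:
    scale_up_at: float = 0.80    # burst when mean utilization exceeds 80%
    scale_down_at: float = 0.30  # release burst nodes below 30%
    max_burst_nodes: int = 4     # cap on nodes drawn from the dedicated spare farm

@dataclass
class PrivateCloud:
    utilization: List[float] = field(default_factory=list)  # per-node load, 0..1
    burst_nodes: int = 0         # nodes currently borrowed from the spare pool

def scaling_decision(cloud: PrivateCloud, policy: SitePolicy) -> str:
    """Return the action the controller would take this monitoring cycle."""
    mean_load = sum(cloud.utilization) / len(cloud.utilization)
    if mean_load > policy.scale_up_at and cloud.burst_nodes < policy.max_burst_nodes:
        return "provision"   # add a dedicated compute node from the spare farm
    if mean_load < policy.scale_down_at and cloud.burst_nodes > 0:
        return "release"     # return a node to the spare pool
    return "hold"

cloud = PrivateCloud(utilization=[0.9, 0.85, 0.95])
print(scaling_decision(cloud, SitePolicy()))  # provision
```

Once a node is provisioned, the virtual load balancer described above would begin routing traffic to the VMs started on it; deprovisioning reverses the process.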

The benefits of the Dynamic Resource Scaling topology are:

Addressable Capacity – Capacity can be accessed remotely over Ethernet

Fault Tolerance – Potential for cross-regional fail-over increases fault tolerance

Uniformity – Unlike cloud bursting where you burst to the public cloud, with mCloud DRS you are utilizing excess capacity from mCloud resources. Therefore, the user interface, tools, and infrastructure are the same as your private cloud, giving you one portal to manage and scale your private cloud on-demand

Security – Enterprises burst to private, unshared resources, so data is more secure

Flexibility – With mCloud DRS, Tier-1 applications with security and integration constraints can utilize bursting capability, whereas typical cloud bursting is often limited to non-secure, less integration intensive applications

In traditional cloud bursting, enterprises with a private cloud burst by adding virtual machines or storage from a public cloud, perhaps storing data in more than one datacenter and thereby introducing vulnerabilities at the network and geographic levels. DRS essentially expands the virtual private cloud (a micro-datacenter) into a software-defined virtualized datacenter. With the entire infrastructure virtualized at the datacenter level, networking efficiencies can be gained on top of utilization and true private cloud elasticity, yielding savings for both operators and consumers.

Service Providers globally are leveraging their relationships with each other to increase the benefits of Hosted or Virtual Private Cloud. This is shown in Figure 4, as well as in VMware’s vCloud Data Center model, in which service providers deploying VMware can share capacity.

If an enterprise in Los Angeles requires more compute or storage capacity, it can be provided from excess local capacity or from excess global capacity. This has the benefit of increasing utilization of hardware to gain maximum efficiency for the Service Provider and allowing high-performance cross-regional failover options.
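The local-first, global-fallback selection described above can be sketched as a small placement routine. The pool names, latencies, and fields below are hypothetical assumptions, not data from any real provider.

```python
# Illustrative sketch of choosing a spare-capacity pool for a bursting
# request: prefer local excess capacity, fall back to remote regions
# reachable over low-latency links. All figures are hypothetical.
pools = [
    {"region": "los-angeles", "free_nodes": 0,  "latency_ms": 1},
    {"region": "san-jose",    "free_nodes": 6,  "latency_ms": 9},
    {"region": "tokyo",       "free_nodes": 20, "latency_ms": 105},
]

def pick_pool(pools, nodes_needed):
    """Lowest-latency pool with enough free dedicated nodes, or None."""
    candidates = [p for p in pools if p["free_nodes"] >= nodes_needed]
    return min(candidates, key=lambda p: p["latency_ms"], default=None)

print(pick_pool(pools, 4)["region"])  # san-jose
```

Because local capacity (Los Angeles here) is the lowest-latency pool, it wins whenever it has free nodes; otherwise the request spills to the nearest region with spare capacity, which is exactly the cross-regional failover property described above.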

For efficient and cost-effective long-distance bursting and sharing of resources, high-speed, low-latency connections are required. Carrier Ethernet is a technology that has allowed providers to achieve this.

Carrier Ethernet Exchanges allow Ethernet networks to exchange data over telecom networks, providing end-to-end Ethernet service. Requiring only one protocol (Ethernet) increases data integrity and efficiency, and gives cloud users access to their remote, private cloud resources over a private network connection or Layer 2 VPN. Remote access to public cloud resources is still provided over the Internet.

The advantages of Carrier Ethernet over the public Internet are that cloud traffic is secure and controlled, and performance is both better and more predictable.

By Winston Damarillo
