Winston Damarillo

Leveraging a Virtualized Data Center to Improve Business Agility

Everyone knows that the age of cloud computing is here. The bigger question, on which even the leading minds of the technology revolution can’t agree, is what its impact will be and how best to apply this approach to computing resources and business optimization.

Many enterprises looking for the benefits of public cloud-style scaling are deterred by increasingly complex compliance and regulatory requirements. Morphlabs has its origins in open source disruption and has been working with cloud computing technology for the past five years. During our own evolution, we have had front-row seats to the trials and tribulations that enterprise IT organizations face with today’s options.

Enterprises deploying private clouds have been searching for the ability to expand their cloud resources while keeping their core infrastructure private. This is known as cloud bursting. Dynamic Resource Scaling (DRS), a proprietary form of cloud bursting, enables elastic allocation of dedicated physical resources, thus retaining security and control – a previously unavailable feature in private cloud deployments. Enterprises need to scale compute and storage capacity either locally or via Carrier Ethernet to remote resources. Service providers can offer hosted private cloud and Dynamic Infrastructure Services (DIS) – the Forrester-defined private equivalent of IaaS – with cloud bursting capabilities while retaining competitive public cloud pricing for their customers.

The ability to elastically expand and contract compute and storage across physical locations is foundational to the virtualized data center. Furthermore, implementing a modular hyperscale architecture, like that of Amazon, Google, and Facebook, maximizes the cost savings and simplifies configuration. When this modular blueprint is deployed on high-performance hyperscale hardware, virtualized resource pricing rivals that of Amazon Web Services. And by leveraging dedicated hardware to deliver cloud to the enterprise, this solution guarantees higher Quality of Service in addition to increased security.

This article will focus on the principles of dynamic resource scaling – cloud bursting on dedicated hardware – as an example of how to approach and leverage the optimized Virtual Private Data Center to improve enterprise business agility.

The State of the Market – Public vs. Private

Most enterprises are either already using or are interested in moving to cloud-based solutions. However, many ask, “What is the most efficient, cost effective, and secure way to utilize the cloud?” Before answering that question, it’s important to know what capabilities are cited as being most desirable to enterprises.

Forrester research has found that on-demand capacity and scalability are the most important reasons for purchasing cloud-based Infrastructure as a Service (IaaS). This finding coincided with a shift in focus from bottom-line cost improvement to top-line business performance goals. The same research shows that 41% of enterprises require on-demand scaling.

Cloud infrastructure is more than just software – it’s the integration of best-of-breed hardware and software. When you spend even a short time on the front lines of cloud deployment, it is impossible to ignore the challenges faced by IT organizations stretched for time and lacking the expertise needed to research, architect, and compose highly complex cloud systems that are reliable and secure enough for enterprise use.

Cloud Bursting Today

Before discussing the potential impact of technology like Dynamic Resource Scaling, let’s review the concept on which it improves. From a technical standpoint, cloud bursting is an application deployment model in which an application primarily runs in a private cloud, then, due to load demands, expands to consume external resources which are typically located in a public cloud. The need for additional capacity is often seasonal. For example, a flower shop might require more web application capacity for Valentine’s Day, or an enterprise might need additional compute power to run periodic business analytics. After the peak demand subsides, the additional resources are released back to the original pool of resources.
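The burst-then-release lifecycle described above can be sketched as a simple control loop. This is an illustrative model only: the thresholds, the capacity units, and the class itself are assumptions made for the sketch, not part of any real DRS or cloud provider API.

```python
# Hypothetical sketch of the cloud-bursting decision loop: burst when the
# private pool is nearly saturated, release external capacity after the
# peak subsides. All names and thresholds are illustrative assumptions.

class BurstController:
    def __init__(self, private_capacity, burst_threshold=0.8, release_threshold=0.5):
        self.private_capacity = private_capacity    # VMs available on-premises
        self.burst_threshold = burst_threshold      # utilization that triggers a burst
        self.release_threshold = release_threshold  # utilization at which burst VMs are released
        self.burst_vms = 0                          # VMs currently held externally

    def reconcile(self, demand):
        """Given current demand (in VMs), decide how many external VMs to hold."""
        utilization = demand / self.private_capacity
        if utilization > self.burst_threshold:
            # Burst: cover whatever the private pool cannot absorb.
            self.burst_vms = max(0, demand - self.private_capacity)
        elif utilization < self.release_threshold:
            # Peak demand has subsided: hand resources back to the shared pool.
            self.burst_vms = 0
        return self.burst_vms
```

For the flower-shop example, a Valentine’s Day spike to 12 VMs against a 10-VM private pool would rent 2 external VMs, and the controller would release them once demand falls back below half of private capacity.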

From a business standpoint, cloud bursting is an alternative way to fulfill user experience requirements or committed service level agreements (SLAs) without purchasing and owning resources to meet peak demands.

Underlying cloud bursting is a hybrid cloud architecture. A hybrid cloud is a deployment model in which two or more separate clouds are bound together. Often, a hybrid cloud is a private cloud accessing one or more public clouds with, preferably, a secure network connection between the private and the public resources.

Advantages

Cloud bursting provides the following benefits:

  • Pay-as-you-go – Only pay for spare capacity when needed
  • On-Demand – Scale when needed
  • Self-service – User controls the provisioning for application bursting requests
  • Diversity – Supports a variety of IT and business use cases
  • Flexibility – Run application components with less security requirements in the public cloud and keep secured application components and data in the private cloud
  • Fault-tolerance – Run an application in multiple locations to increase redundancy and reduce downtime

Challenges

Although cloud bursting has the potential to save money and provide a consistent level of performance, there can be significant challenges to overcome:

  • Application components running on bursted (i.e., public) resources need access to secure data
  • Applications must be designed to scale or they cannot take advantage of cloud bursting; retrofitting applications to be able to burst can be time-consuming and expensive
  • Database access and placement are issues: if a database contains secure data, it sits behind the firewall, so an application running on public resources cannot reach the data it needs
  • Depending on where you burst to, the platform may not be compatible with the platform on which you developed and tested the application
  • Security and regulatory compliance
  • A hybrid environment is architecturally more complex, possibly involving different APIs, different policies, and unfamiliar user interfaces and tools
  • Applications must be load balanced across the additional virtual machines to fully utilize the extra capacity
  • IT organization has less control over the computing infrastructure when using external resources

Forrester surveyed companies using cloud solutions. The survey found that companies using cloud bursting classify their workloads to determine which are best able to take advantage of bursting. Each enterprise uses its own method of classification, but here is an example from the report:

  • Productive workloads of back-office data and processes, such as financial applications or customer-related transactions. These need to remain on-premises.
  • Productive workloads of front-office data and processes, such as customer relationship management (CRM). These could go to a cloud provider with high privacy levels.
  • Development, test, and simulation environments, which contain no customer data and are not subject to compliance regulations. Thus, they can operate on any public infrastructure.
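A classification scheme like the example above can be expressed as a small decision function. The attribute names and placement tiers below are assumptions chosen to mirror the three categories in the report, not a standard taxonomy.

```python
# Toy version of the workload-classification scheme above: map a workload's
# attributes to a placement tier. Field names and tier labels are
# illustrative assumptions.

def classify_workload(contains_customer_data, is_back_office, is_production):
    if not is_production:
        # Dev/test/simulation: no customer data, no compliance constraints.
        return "any-public-cloud"
    if is_back_office:
        # Financial applications, customer transactions: stay on-premises.
        return "on-premises"
    if contains_customer_data:
        # Front-office workloads such as CRM: a high-privacy provider.
        return "high-privacy-provider"
    return "any-public-cloud"
```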

To avoid security and integration issues, organizations tend to not burst applications that require sensitive data or that integrate with other apps residing behind the firewall. This limits business agility and impacts competitiveness, leaving a massive need in the industry for a way to address secure bursting.

Dynamic Resource Scaling technology is the first of its kind to make it possible to burst with increased security and control using expansion by increments of dedicated physical hardware, even for applications with tight security and integration constraints. This allows organizations to leverage cloud bursting for achieving business agility and competitive advantages.

Virtualized Data Center

Early designs of cloud computing focused on blades with an independent Storage Area Network (SAN) architecture. This blueprint consolidated the CPU and memory into dense blade server configurations connected via several high-speed networks (typically a combination of Fibre Channel and 10 GbE) to large Storage Area Networks. This has been the typical blueprint delivered by traditional off-the-shelf, pre-built virtualization infrastructure, especially in enterprise private cloud configurations.

More recently, hardware vendors have been shipping modular commodity hardware in dense configurations known as hyperscale computing. The most noticeable difference is the availability of hard drives, or solid state drives (SSDs), within the modules. This gives the virtualization server and the VMs access to very fast persistent storage and eliminates the need for an expensive SAN to provide storage in the cloud. The hyperscale model not only dramatically changes the price/performance model of cloud computing but, because it is modular, it also allows you to build redundancies into the configuration for an assumed-failure architecture.

For the mCloud solution, this architecture also gives the implementation team more flexibility by affording a “Lego block” model of combining compute nodes and storage nodes into optimal units within a VLAN grouping of a deployment. This allows a large resource pool of compute and storage to be managed as individually controlled subsets of the data center infrastructure.
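The “Lego block” model described above, pairing compute and storage nodes into deployable units each isolated in its own VLAN, can be sketched as follows. The node names, unit shape, and VLAN numbering are assumptions for illustration, not the mCloud implementation.

```python
# Sketch of the "Lego block" idea: pair compute and storage nodes into
# units, each assigned its own VLAN so it is an individually controlled
# subset of the data center. All names and numbers are illustrative.

def build_units(compute_nodes, storage_nodes, base_vlan=100):
    """Zip compute and storage nodes into per-VLAN deployment units."""
    units = []
    for i, (compute, storage) in enumerate(zip(compute_nodes, storage_nodes)):
        units.append({
            "vlan": base_vlan + i,   # one VLAN per unit for isolation
            "compute": compute,
            "storage": storage,
        })
    return units
```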

A hyperscale architecture is a synergistic infrastructure for SOA. It too uses the idea of simple, commodity functions. Hyperscale architectures have removed expensive system management components and, instead, focus on what matters to the cloud, which is compute power and high density storage.

Simple architectures are easy to scale. In other words, an architecture that builds system management and other resiliency features into the infrastructure in order to achieve high availability will be more difficult to scale, due to complexity, than an architecture with simpler commodity components that offloads failover to the application.

The hyperscale model makes it easy and cost effective to create a dynamic infrastructure because its low-cost, easily replaceable components can be located either in your data center or in remote locations, and are easy to acquire and replace. In contrast, an architecture that puts the responsibility for HA in the infrastructure is much more complex and harder to scale.

Using this approach in a massively scalable system, it has been reported that IT operators wait for many disks (even up to 100) to fail before scheduling a mass replacement, thereby making maintenance more predictable as well.

Enterprises require application availability, performance, scale, and a good price. If you’re trying to remain competitive today, your philosophy must assume that application availability is the primary concern for your business. And you will need the underlying infrastructure that allows your well-architected applications to be highly available, scalable, and performant.

Businesses and their developers are realizing that in order to take advantage of cloud, their applications need to be based on a Service Oriented Architecture (SOA). SOA facilitates scalability and high availability (HA) because the services which comprise an SOA application can be easily deployed across the cloud. Each service performs a specific function, provides a standard and well-understood interface and, therefore, is easily replicated and deployed. If a service fails, there is typically an identical service that can transparently support the user request (e.g., clustered web servers). If any of these services fail, they can be easily restarted, either locally or remotely (in the event of a disaster).
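The transparent failover behavior described above, where an identical replica serves the request if one service instance fails, can be sketched with plain callables standing in for service endpoints. The function name and error handling are assumptions for the sketch.

```python
# Minimal sketch of SOA-style failover: try each identical replica of a
# service in turn, so a single failed instance is invisible to the user.
# Replica callables stand in for real service endpoints.

def call_with_failover(replicas, request):
    """Return the first successful replica response; raise only if all fail."""
    last_error = None
    for replica in replicas:
        try:
            return replica(request)
        except Exception as err:
            last_error = err   # this replica is down; try the next one
    raise RuntimeError("all replicas failed") from last_error
```

This mirrors the clustered web server example: a request routed to a dead node is simply retried against a live, identical node.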

Well-written applications can take advantage of the innovative, streamlined, high-performing, and scalable architecture of hyperscale clouds. Hosted private clouds built on hyperscale hardware and leveraging open source aim to provide a converged architecture (from software services to hardware components) in which everything is easy to troubleshoot and easily replaceable with minimal disruption.

With the micro-datacenter design, failure of the hardware is decoupled from the failure of the application. If your application is designed to take advantage of the geographically dispersed architecture, your users will not be aware of hardware failures because the application is still running elsewhere. Similarly, if your application requires more resources, Dynamic Resource Scaling allows your application to burst transparently from the user’s perspective.

Conclusion

By abstracting the function of computation from the physical platform on which computations run, virtual machines (VMs) provided incredible flexibility for raw information processing. Close on the heels of compute virtualization came storage virtualization, which provided similar levels of flexibility. Dynamic Resource Scaling technology, amplified by Carrier Ethernet Exchanges, provides high levels of location transparency, high availability, security, and reliability. In fact, by leveraging hosted private clouds with DRS, an entire data center can be incrementally defined by software and temporarily deployed. One could say a hosted private cloud combined with dynamic resource scaling creates a secure and dynamic “burst-able data center.”

Applications with high security and integration constraints, and which IT organizations previously found difficult to deploy in burst-able environments, are now candidates for deployment in on-demand scalable environments made possible by DRS. By using DRS, enterprises have the ability to scale the key components of the data center (compute, storage, and networking) in a public cloud-like manner (on-demand, OpEx model), yet retain the benefits of private cloud control (security, ease of integration).

Furthermore, in addition to the elasticity, privacy, and cost savings, hyperscale architecture affords enterprises new possibilities for disaster mitigation and business continuity. Having multiple, geographically dispersed nodes gives you the ability to fail over across regions.

The end result is a quantum leap in business agility and competitiveness.

By Winston Damarillo

Winston Damarillo is the CEO and Co-founder of Morphlabs.

Winston is a proven serial entrepreneur with a track record of building successful technology start-ups. Prior to his entrepreneurial endeavors, Winston was among the highest performing venture capital professionals at Intel, having led the majority of his investments to either a successful IPO or a profitable corporate acquisition. In addition to leading Morphlabs, Winston is also involved in several organizations that are focused on combining the expertise of a broad range of thought leaders with advanced technology to drive global innovation and growth.
