
Winston Damarillo

Leveraging a Virtualized Data Center to Improve Business Agility


Everyone knows that the age of cloud computing is here. The bigger question, on which even the greatest minds of the technology revolution can’t agree, is what the impact will be and how best to apply this approach to computing resources and business optimization.

Many enterprises looking for the benefits of public cloud-style scaling are deterred by increasingly complex compliance and regulatory requirements. Morphlabs has its origins in open source disruption and has been working with cloud computing technology for the past five years. During our own evolution, we have had front-row seats to the trials and tribulations that enterprise IT organizations face when confronted with today’s options.

Enterprises deploying private clouds have been searching for the ability to expand their cloud resources while keeping their core infrastructure private. This is known as cloud bursting. Dynamic Resource Scaling (DRS), a proprietary form of cloud bursting, is a technology that enables elastic allocation of dedicated physical resources, thus retaining security and control – a previously unavailable feature in private cloud deployments. Enterprises need to scale compute and storage capacity either locally or via Carrier Ethernet to remote resources. Service Providers can offer hosted private cloud and Dynamic Infrastructure Services (DIS) – the Forrester-defined private equivalent of IaaS – with cloud bursting capabilities while retaining competitive public cloud pricing for their customers.

The ability to elastically expand and contract compute and storage across physical locations is foundational to the virtualized data center. Furthermore, implementing a modular hyperscale architecture, like that of Amazon, Google, and Facebook, maximizes cost savings and simplifies configuration. By deploying this modular blueprint on high-performance hyperscale hardware, virtualized resource pricing rivals Amazon Web Services offerings. And by leveraging dedicated hardware to deliver cloud to the enterprise, this solution guarantees higher Quality of Service in addition to increased security.

This article will focus on the principles of dynamic resource scaling – cloud bursting on dedicated hardware – as an example of how to approach and leverage the optimized Virtual Private Data Center to improve enterprise business agility.

The State of the Market – Public vs. Private

Most enterprises are either already using or are interested in moving to cloud-based solutions. However, many ask, “What is the most efficient, cost effective, and secure way to utilize the cloud?” Before answering that question, it’s important to know what capabilities are cited as being most desirable to enterprises.

Forrester research has found that on-demand capacity and scalability are the most important reasons for purchasing cloud-based Infrastructure as a Service (IaaS). This coincided with a shift in focus from bottom-line cost improvement to top-line business performance goals. Forrester research shows that 41% of enterprises require on-demand scaling.

Cloud infrastructure is more than just software – it’s the integration of best-of-breed hardware and software. When you spend even a short time on the front lines of cloud deployment, it is impossible to ignore the challenges faced by IT organizations stretched for time and lacking the expertise needed to research, architect, and compose highly complex cloud systems that are reliable and secure enough for enterprise use.

Cloud Bursting Today

Before discussing the potential impact of technology like Dynamic Resource Scaling, let’s review the concept on which it improves. From a technical standpoint, cloud bursting is an application deployment model in which an application primarily runs in a private cloud, then, due to load demands, expands to consume external resources which are typically located in a public cloud. The need for additional capacity is often seasonal. For example, a flower shop might require more web application capacity for Valentine’s Day, or an enterprise might need additional compute power to run periodic business analytics. After the peak demand subsides, the additional resources are released back to the original pool of resources.
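The bursting model above can be sketched in a few lines of code. This is a minimal, hypothetical simulation – the class name, capacity figures, and demand numbers are illustrative, not from any real product API:

```python
# Minimal sketch of the cloud-bursting model described above.
# A fixed private pool serves baseline load; demand beyond that
# "bursts" to external (public cloud) resources, which are released
# again once the peak subsides.

class BurstingPool:
    def __init__(self, private_capacity):
        self.private_capacity = private_capacity  # dedicated private VMs
        self.burst_vms = 0                        # VMs rented externally

    def scale_to(self, demand):
        """Serve `demand` VMs: fill private capacity first, burst the rest."""
        self.burst_vms = max(0, demand - self.private_capacity)
        return self.burst_vms

pool = BurstingPool(private_capacity=10)
assert pool.scale_to(8) == 0    # normal load: private cloud handles it all
assert pool.scale_to(25) == 15  # seasonal peak: 15 VMs burst to public cloud
assert pool.scale_to(6) == 0    # peak subsides: bursted resources released
```

The flower-shop example maps directly onto the second call: capacity expands only for the Valentine’s Day peak, then contracts back to the private pool.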

From a business standpoint, cloud bursting is an alternative way to fulfill user experience requirements or committed service level agreements (SLAs) without purchasing and owning resources to meet peak demands.

Underlying cloud bursting is a hybrid cloud architecture. A hybrid cloud is a deployment model in which two or more separate clouds are bound together. Often, a hybrid cloud is a private cloud accessing one or more public clouds with, preferably, a secure network connection between the private and the public resources.

Advantages

Cloud bursting provides the following benefits:

  • Pay-as-you-go – Only pay for spare capacity when needed
  • On-Demand – Scale when needed
  • Self-service – User controls the provisioning for application bursting requests
  • Diversity – Supports a variety of IT and business use cases
  • Flexibility – Run application components with lower security requirements in the public cloud and keep security-sensitive application components and data in the private cloud
  • Fault-tolerance – Run an application in multiple locations to increase redundancy and reduce downtime

Challenges

Although cloud bursting has the potential to save money and provide a consistent level of performance, there can be significant challenges to overcome:

  • Application components running on bursted (i.e., public) resources need access to secure data
  • Applications must be designed to scale or they cannot take advantage of cloud bursting; retrofitting applications to be able to burst can be time-consuming and expensive
  • Database access and placement are issues. If a database contains sensitive data, it resides behind the firewall, so an application running on public resources has no access to the data it needs
  • Depending on where you burst to, the platform may not be compatible with the platform on which you developed and tested the application
  • Security and regulatory compliance
  • A hybrid environment is more complex architecturally, possibly involving different APIs, different policies, and unfamiliar user interfaces and tools
  • Load balancing applications to the additional virtual machines to fully utilize the additional capacity
  • The IT organization has less control over the computing infrastructure when using external resources

Forrester surveyed companies using cloud solutions. The survey found that companies using cloud bursting classify their workloads to determine which are best able to take advantage of bursting. Each enterprise uses their own method of classification, but here is an example from the report:

  • Productive workloads of back-office data and processes, such as financial applications or customer-related transactions. These need to remain on-premises.
  • Productive workloads of front-office data and processes, such as customer relationship management (CRM). These could go to a cloud provider with high privacy levels.
  • Development, test, and simulation environments, which contain no customer data and are not subject to compliance regulations. Thus, they can operate on any public infrastructure.
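The three-tier classification above is easy to encode as a simple placement rule. This is only a sketch of the idea – the field names and rules are hypothetical, and as the report notes, each enterprise defines its own scheme:

```python
# Illustrative encoding of the three workload classes above:
# back-office data stays on-premises, front-office customer data may go
# to a high-privacy provider, and dev/test can run anywhere.

def placement(workload):
    if workload["back_office"]:          # financials, customer transactions
        return "on-premises"
    if workload["customer_data"]:        # front-office, e.g. CRM
        return "high-privacy cloud provider"
    return "any public infrastructure"   # dev, test, simulation

ledger = {"back_office": True, "customer_data": True}
crm = {"back_office": False, "customer_data": True}
devtest = {"back_office": False, "customer_data": False}

assert placement(ledger) == "on-premises"
assert placement(crm) == "high-privacy cloud provider"
assert placement(devtest) == "any public infrastructure"
```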

To avoid security and integration issues, organizations tend to not burst applications that require sensitive data or that integrate with other apps residing behind the firewall. This limits business agility and impacts competitiveness, leaving a massive need in the industry for a way to address secure bursting.

Dynamic Resource Scaling technology is the first of its kind to make it possible to burst with increased security and control using expansion by increments of dedicated physical hardware, even for applications with tight security and integration constraints. This allows organizations to leverage cloud bursting for achieving business agility and competitive advantages.

Virtualized Data Center

Early designs of cloud computing focused on blades with an independent Storage Area Network (SAN) architecture. This blueprint consolidated the CPU and memory into dense blade server configurations connected via several high-speed networks (typically a combination of Fibre Channel and 10 Gigabit Ethernet) to large Storage Area Networks. This has been the typical blueprint delivered by traditional off-the-shelf, pre-built virtualization infrastructure, especially in enterprise private cloud configurations.

More recently, hardware vendors have been shipping modular commodity hardware in dense configurations known as hyperscale computing. The most noticeable difference is the availability of hard drives, or solid state drives (SSDs), within the modules. This gives the virtualization server and the VMs access to very fast persistent storage and eliminates the need for an expensive SAN to provide storage in the cloud. The hyperscale model not only dramatically changes the price/performance model of cloud computing but, because it is modular, also allows you to build redundancies into the configuration for an assumed-failure architecture. For the mCloud solution, this architecture also gives the implementation team more flexibility by affording a “Lego block”-like model of combining compute nodes and storage nodes into optimal units within a VLAN grouping of a deployment. This allows a large resource pool of compute and storage to be managed as individually controlled subsets of the data center infrastructure.

A hyperscale architecture is a synergistic infrastructure for SOA. It too uses the idea of simple, commodity functions. Hyperscale architectures have removed expensive system management components and, instead, focus on what matters to the cloud, which is compute power and high density storage.

Simple architectures are easy to scale. In other words, an architecture that builds system management and other resiliency features into the infrastructure in order to achieve high availability will be more difficult to scale, due to complexity, than an architecture of simpler commodity components that offloads failover to the application.

The hyperscale model makes it easy and cost effective to create a dynamic infrastructure because of low-cost, easily replaceable components, which can be located either in your data center or in remote places. The components are easy to acquire and replace. In contrast, an architecture that puts the responsibility for HA in the infrastructure, is much more complex and harder to scale.

Using this approach, in a massively scalable system, it has been reported that IT operators wait for many disks (even up to 100) to fail before scheduling a mass replacement, thereby making maintenance more predictable as well.

Enterprises require application availability, performance, scale, and a good price. If you’re trying to remain competitive today, your philosophy must assume that application availability is the primary concern for your business. And you will need the underlying infrastructure that allows your well-architected applications to be highly available, scalable, and performant.

Businesses and their developers are realizing that in order to take advantage of cloud, their applications need to be based on a Service Oriented Architecture (SOA). SOA facilitates scalability and high availability (HA) because the services which comprise an SOA application can be easily deployed across the cloud. Each service performs a specific function, provides a standard and well-understood interface and, therefore, is easily replicated and deployed. If a service fails, there is typically an identical service that can transparently support the user request (e.g., clustered web servers). If any of these services fail, they can be easily restarted, either locally or remotely (in the event of a disaster).
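The failover behavior described above – an identical replica transparently absorbing a request when one service instance fails – can be sketched as a simple client-side retry loop. The function and replica names here are hypothetical, a minimal illustration of the pattern rather than any particular framework:

```python
# Sketch of SOA failover: identical service replicas behind a client
# that transparently retries the next replica when one is down.

def call_with_failover(replicas, request):
    """Try each replica in turn; a single failure is invisible to the caller."""
    for replica in replicas:
        try:
            return replica(request)
        except ConnectionError:
            continue  # this replica is down; try the next one
    raise RuntimeError("all replicas failed")

def down(request):
    raise ConnectionError("service unavailable")

def healthy(request):
    return f"handled: {request}"

# The first replica fails, but the user request still succeeds.
assert call_with_failover([down, healthy], "GET /") == "handled: GET /"
```

This is the same logic a clustered web server tier applies at the load-balancer level: because each service exposes a standard interface, any replica can serve the request.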

Well-written applications can take advantage of the innovative, streamlined, high-performing, and scalable architecture of hyperscale clouds. Hosted private clouds built on hyperscale hardware and leveraging open source aim to provide a converged architecture (from software services down to hardware components) in which everything is easy to troubleshoot and easily replaceable with minimal disruption.

With the micro-datacenter design, failure of the hardware is decoupled from the failure of the application. If your application is designed to take advantage of the geographically dispersed architecture, your users will not be aware of hardware failures because the application is still running elsewhere. Similarly, if your application requires more resources, Dynamic Resource Scaling allows your application to burst transparently from the user’s perspective.

Conclusion

By abstracting the function of computation from the physical platform on which computations run, virtual machines (VMs) provided incredible flexibility for raw information processing. Close on the heels of compute virtualization came storage virtualization, which provided similar levels of flexibility. Dynamic Resource Scaling technology, amplified by Carrier Ethernet Exchanges, provides high levels of location transparency, high availability, security, and reliability. In fact, by leveraging Hosted Private Clouds with DRS, an entire data center can be incrementally defined by software and temporarily deployed. One could say a hosted private cloud combined with dynamic resource scaling creates a secure and dynamic “burst-able data center.”

Applications with high security and integration constraints, and which IT organizations previously found difficult to deploy in burst-able environments, are now candidates for deployment in on-demand scalable environments made possible by DRS. By using DRS, enterprises have the ability to scale the key components of the data center (compute, storage, and networking) in a public cloud-like manner (on-demand, OpEx model), yet retain the benefits of private cloud control (security, ease of integration).

Furthermore, in addition to the elasticity, privacy, and cost savings, hyperscale architecture affords enterprises new possibilities for disaster mitigation and business continuity. Having multiple, geographically dispersed nodes gives you the ability to fail over across regions.

The end result is a quantum leap in business agility and competitiveness.

By Winston Damarillo


Winston Damarillo is the CEO and Co-founder of Morphlabs.

Winston is a proven serial entrepreneur with a track record of building successful technology start-ups. Prior to his entrepreneurial endeavors, Winston was among the highest performing venture capital professionals at Intel, having led the majority of his investments to either a successful IPO or a profitable corporate acquisition. In addition to leading Morphlabs, Winston is also involved in several organizations that are focused on combining the expertise of a broad range of thought leaders with advanced technology to drive global innovation and growth.
