Leveraging a Virtualized Data Center to Improve Business Agility – Conclusion

Virtualized Data Center

Early cloud computing designs centered on blade servers paired with an independent Storage Area Network (SAN). This blueprint consolidated CPU and memory into dense blade server configurations connected to large SANs over several high-speed networks (typically a combination of Fibre Channel and 10 Gigabit Ethernet). It has been the typical blueprint of traditional, off-the-shelf, pre-built virtualization infrastructure, especially in enterprise private cloud configurations.

More recently, hardware vendors have been shipping modular commodity hardware in dense configurations known as hyperscale computing. The most noticeable difference is the availability of hard drives or solid-state drives (SSDs) within the modules themselves. This gives the virtualization server and its VMs access to very fast persistent storage and eliminates the need for an expensive SAN to provide storage in the cloud. The hyperscale model not only dramatically changes the price/performance equation of cloud computing but, because it is modular, also lets you build redundancy into the configuration for an assumed-failure architecture. For the mCloud solution, this architecture also gives the implementation team more flexibility by affording a "Lego block"-style model of combining compute nodes and storage nodes into optimal units within a VLAN grouping of a deployment. This allows a large resource pool of compute and storage to be managed as individually controlled subsets of the data center infrastructure.
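To make the "Lego block" model concrete, here is a minimal sketch that groups compute and storage nodes into a VLAN-scoped deployment unit. The class and field names are hypothetical illustrations, not part of any actual mCloud API.

```python
# A minimal sketch of the "Lego block" grouping idea: compute and storage
# nodes are combined into independently managed deployment units, each
# scoped to its own VLAN. All names here are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Node:
    hostname: str
    role: str          # "compute" or "storage"
    local_ssds: int    # hyperscale nodes carry their own fast storage

@dataclass
class DeploymentUnit:
    vlan_id: int
    nodes: list[Node] = field(default_factory=list)

    def add(self, node: Node) -> None:
        self.nodes.append(node)

    def capacity(self) -> dict:
        return {
            "compute_nodes": sum(1 for n in self.nodes if n.role == "compute"),
            "storage_nodes": sum(1 for n in self.nodes if n.role == "storage"),
        }

# Carve a controlled subset of the data center out of the larger pool.
unit = DeploymentUnit(vlan_id=120)
unit.add(Node("c-01", "compute", local_ssds=4))
unit.add(Node("c-02", "compute", local_ssds=4))
unit.add(Node("s-01", "storage", local_ssds=12))
print(unit.capacity())  # {'compute_nodes': 2, 'storage_nodes': 1}
```

Each unit can then be provisioned, monitored, and retired independently of the rest of the pool, which is what makes the modular model manageable at scale.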

A hyperscale architecture is a natural complement to SOA: both are built from simple, commodity building blocks. Hyperscale architectures strip out expensive system management components and instead focus on what matters to the cloud: compute power and high-density storage.

Simple architectures are easy to scale. An architecture that embeds system management and other resiliency features in the infrastructure to achieve high availability will, because of that complexity, be more difficult to scale than one built from simpler commodity components that offloads failover to the application.

The hyperscale model makes it easy and cost-effective to create a dynamic infrastructure because its low-cost, easily replaceable components can be located in your own data center or in remote sites, and are simple to acquire and swap out. An architecture that places responsibility for HA in the infrastructure, by contrast, is far more complex and harder to scale.
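As a rough illustration of offloading failover to the application, the sketch below tries each replica of a service in turn and simply skips nodes that have failed. The endpoint addresses are hypothetical placeholders.

```python
# A minimal sketch of application-level failover: rather than relying on
# resilient (and expensive) infrastructure, the client walks a list of
# commodity replicas and uses the first one that answers.
# The endpoint addresses below are hypothetical.

import urllib.request
import urllib.error

REPLICAS = [
    "http://10.0.1.10:8080/data",
    "http://10.0.1.11:8080/data",
    "http://10.0.2.10:8080/data",   # replica in another module/rack
]

def fetch_from_any(urls, timeout=2.0):
    """Return the first successful response; failed nodes are skipped."""
    last_error = None
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except (urllib.error.URLError, TimeoutError) as exc:
            last_error = exc        # assume this node is down; try the next
    raise RuntimeError(f"all replicas failed: {last_error}")

# payload = fetch_from_any(REPLICAS)
```

Because the retry logic lives in the application, the underlying hardware can remain cheap and replaceable; losing a node costs one skipped request, not an outage.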

Using this approach in a massively scalable system, IT operators have reportedly waited for many disks (even up to 100) to fail before scheduling a mass replacement, which also makes maintenance more predictable.

Enterprises require application availability, performance, scale, and competitive pricing. To remain competitive today, your philosophy must treat application availability as the primary concern for your business, and you will need underlying infrastructure that allows your well-architected applications to be highly available, scalable, and performant.

Businesses and their developers are realizing that to take advantage of the cloud, their applications need to be based on a Service-Oriented Architecture (SOA). SOA facilitates scalability and high availability (HA) because the services that make up an SOA application can be easily deployed across the cloud. Each service performs a specific function and provides a standard, well-understood interface, so it is easily replicated and deployed. If a service fails, an identical service can typically support the user request transparently (e.g., clustered web servers), and failed services can be easily restarted, either locally or remotely (in the event of a disaster).
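A toy sketch of the "restart a failed service" idea appears below: a watchdog loop relaunches any service process that exits. The service names and commands are hypothetical, and a production system would use a real supervisor or orchestrator instead.

```python
# A minimal sketch of restarting failed SOA services: a watchdog polls
# each stateless service process and relaunches it when it exits. The
# same loop could just as easily trigger a restart on a remote node.
# Service names and commands are hypothetical.

import subprocess
import time

SERVICES = {
    "catalog":  ["python", "catalog_service.py"],
    "checkout": ["python", "checkout_service.py"],
}

processes = {name: subprocess.Popen(cmd) for name, cmd in SERVICES.items()}

while True:
    for name, proc in processes.items():
        if proc.poll() is not None:          # the process has exited
            print(f"{name} exited; restarting")
            processes[name] = subprocess.Popen(SERVICES[name])
    time.sleep(5)
```

The key property is that each service is stateless behind a standard interface, so restarting or replicating it requires no coordination with the rest of the application.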

Well-written applications can take advantage of the innovative, streamlined, high-performing, and scalable architecture of hyperscale clouds. Hosted private clouds built on hyperscale hardware and leveraging open source software aim to provide a converged architecture (from software services down to hardware components) in which everything is easy to troubleshoot and can be replaced with minimal disruption.

With the micro-datacenter design, failure of the hardware is decoupled from the failure of the application. If your application is designed to take advantage of the geographically dispersed architecture, your users will not be aware of hardware failures because the application is still running elsewhere. Similarly, if your application requires more resources, Dynamic Resource Scaling allows your application to burst transparently from the user’s perspective.
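As a back-of-the-envelope illustration of the bursting logic, the sketch below sizes a node pool from measured load. The numbers and the sizing rule are assumptions for illustration, not a description of how Dynamic Resource Scaling is actually implemented.

```python
# A minimal sketch of the Dynamic Resource Scaling idea: when measured
# load grows, additional capacity is requested (a "burst") and released
# again when load subsides. The sizing rule below is an assumption.

def desired_nodes(current_load: float, capacity_per_node: float,
                  min_nodes: int = 2, headroom: float = 0.25) -> int:
    """Size the pool so utilization stays below (1 - headroom)."""
    needed = current_load / (capacity_per_node * (1.0 - headroom))
    return max(min_nodes, int(needed) + 1)

# e.g. 900 req/s of load, 200 req/s per node, 25% headroom -> 7 nodes
print(desired_nodes(current_load=900, capacity_per_node=200))
```

Because the adjustment happens in the platform, the user sees only a responsive application, never the provisioning underneath.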

Conclusion

By abstracting the function of computation from the physical platform on which computations run, virtual machines (VMs) provided incredible flexibility for raw information processing. Close on the heels of compute virtualization came storage virtualization, which provided similar flexibility. Dynamic Resource Scaling technology, amplified by Carrier Ethernet Exchanges, provides high levels of location transparency, availability, security, and reliability. In fact, by leveraging hosted private clouds with DRS, an entire data center can be incrementally defined in software and deployed temporarily. One could say a hosted private cloud combined with dynamic resource scaling creates a secure and dynamic "burst-able data center."

Applications with high security and integration constraints, which IT organizations previously found difficult to deploy in burst-able environments, are now candidates for the on-demand, scalable environments made possible by DRS. With DRS, enterprises can scale the key components of the data center (compute, storage, and networking) in a public cloud-like manner (on demand, on an OpEx model), while retaining the benefits of private cloud control (security, ease of integration).

Beyond elasticity, privacy, and cost savings, hyperscale architecture also affords enterprises new possibilities for disaster mitigation and business continuity: multiple, geographically dispersed nodes give you the ability to fail over across regions.
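To illustrate cross-region failover in miniature, the sketch below routes traffic to the first region with a fresh heartbeat. The region names, the TTL, and the hard-coded heartbeat data are all hypothetical stand-ins for a real monitoring feed.

```python
# A minimal sketch of region failover based on heartbeats: each region
# reports a last-seen timestamp, and traffic is directed to the first
# region in priority order whose heartbeat is still fresh.

import time

HEARTBEAT_TTL = 30.0  # seconds before a region is presumed down

# In practice these would come from monitoring; here they are hard-coded.
last_heartbeat = {
    "us-east": time.time() - 120.0,  # stale: region presumed failed
    "us-west": time.time() - 5.0,    # fresh
}

def pick_region(priority: list[str]) -> str:
    now = time.time()
    for region in priority:
        if now - last_heartbeat.get(region, 0.0) < HEARTBEAT_TTL:
            return region
    raise RuntimeError("no healthy region available")

print(pick_region(["us-east", "us-west"]))  # -> "us-west" after failover
```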

The end result is a quantum leap in business agility and competitiveness.

By Winston Damarillo

CEO and Co-founder of Morphlabs

Winston is a proven serial entrepreneur with a track record of building successful technology start-ups. Prior to his entrepreneurial endeavors, Winston was among the highest performing venture capital professionals at Intel, having led the majority of his investments to either a successful IPO or a profitable corporate acquisition. In addition to leading Morphlabs, Winston is also involved in several organizations that are focused on combining the expertise of a broad range of thought leaders with advanced technology to drive global innovation and growth.

