The Competitive Cloud Data Center

The Competitive Cloud

The corporate data center was long the de facto vehicle for all application deployment across an enterprise. Whether it reported to Real Estate, Finance, or IT, this arrangement served both data centers and their users well: it allowed the organization to invest in specialized skills and facilities while providing the enterprise with reliable, trusted services.

The first major shift in this structure came with the success of Salesforce.com in the early 2000s. Salesforce removed a significant hurdle for its key buyer, the VP of Sales, by delivering the application “off premise”. VPs of Sales typically have no strong relationship with their IT organization, so the hosted offering let them authorize the purchase largely on their own. Fast forward to today, and nearly every function, from HR to Finance to ERP, is offered “as a Service”, further pressuring the corporate data center.


A recent Gartner report estimated that 60% of workloads run on-prem today, a share expected to drop to 33% by 2021 and 20% by 2024. All major software companies, from Microsoft to IBM to Oracle, are focused on offering their software as a service, and indeed report this growth as a key metric in their financial results. Microsoft, in particular, noted “that by FY ’19 we’ll have two-thirds of commercial office in Office 365, and Exchange will be north of that, 70 percent.” These applications have traditionally been among the largest enterprise workloads, so the trend is unmistakable.

There are many reasons enterprise data centers will still be required: location, security, financial considerations, and regulatory requirements, to name a few. Certain organizations will continue to use them as strategic, differentiating assets. Increasingly, however, enterprise data centers will need to justify themselves against alternatives ranging from Applications-as-a-Service to Amazon Web Services to colocation providers like Digital Realty and Equinix.

The market has accepted that most organizations will settle on a hybrid mix of facilities, combining on-prem with cloud or colocation options. An important inflection point occurs in this journey: the moment an organization extends its footprint beyond its in-house data centers, it must immediately address a host of new issues, including:

  • Management by SLA: Availability will always be key, but when the facilities are not its own, an organization must rely on the provider’s SLAs and escalation procedures. This is usually one of the biggest leaps in the journey to colocation, as many procedures and scenarios must be rethought and redefined, now with an outside party.
  • Application placement: Now that there are options, where does an application run best? On-prem? In a colo? In AWS? What are the right metrics to manage this placement: cost, security, availability? How dynamic should workload placement be? This is a nascent area that many organizations overlook; vendors, from startups to established players, are investing heavily in tools and intelligence to assist.
  • DevOps: Organizations have developed detailed DevOps procedures; when a platform such as AWS is brought into the mix, those procedures need to be reworked and tailored specifically for that platform.
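The placement question above can be made concrete with a weighted scorecard. The sketch below is purely illustrative: every venue, weight, and score is a hypothetical assumption, not data from any provider. It also includes a composite-availability helper, which shows why provider SLAs compound: serially dependent components multiply, so two 99.9% SLAs yield less than 99.9% end to end.

```python
# Hypothetical workload-placement scorecard. All venues, weights, and
# per-metric scores are illustrative assumptions for this sketch only.

def composite_availability(slas):
    """Availability of serially dependent components (all must be up)."""
    result = 1.0
    for a in slas:
        result *= a
    return result

def score_placement(venue, weights):
    """Weighted sum across the chosen metrics (higher is better)."""
    return sum(weights[m] * venue[m] for m in weights)

# Normalized 0..1 scores per metric -- hypothetical, for illustration.
venues = {
    "on-prem": {"cost": 0.6, "security": 0.9, "availability": 0.995},
    "colo":    {"cost": 0.7, "security": 0.8, "availability": 0.999},
    "aws":     {"cost": 0.8, "security": 0.7, "availability": 0.9995},
}
# The weights encode the organization's priorities; changing them
# changes the answer, which is exactly the point of the exercise.
weights = {"cost": 0.4, "security": 0.3, "availability": 0.3}

best = max(venues, key=lambda v: score_placement(venues[v], weights))
```

In practice the metrics, weights, and scores would come from the organization’s own cost models and security assessments; the value of the exercise is forcing those drivers to be stated explicitly and consistently.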

The trend towards hybrid clouds is unmistakable, with commensurate benefits, but an organization must balance and justify this hybridization. One of the best examples of this consideration process comes from the U.S. Federal Government in its Data Center Optimization Initiative (https://datacenters.cio.gov/). DCOI is a federal mandate that “requires agencies to develop and report on data center strategies to

  • Consolidate inefficient infrastructure,
  • Optimize existing facilities,
  • Improve security posture,
  • Achieve cost savings,
  • Transition to more efficient infrastructure, such as cloud services and inter-agency shared services.”

Like numerous organizations, the U.S. Federal Government has a cloud-first policy, but it has instituted a rigorous reporting and oversight process to prudently manage its data center footprint and computing strategy. Other organizations, in both the public and private sectors, should consider similar processes and transparency as they weigh their hybrid cloud-computing options.

Existing enterprise data centers represent significant assets; how should their role in the overall cloud computing fabric be determined? Availability, cost, and security will always be the dominant factors for any physical computing topology, with agility a recent addition. Define the key metrics and drivers transparently and consistently for the overall environment, and the best set of options will present itself.

By Enzo Greco

