IBM Delivers New Services To Help Clients Move Enterprise Applications To The Cloud

Financial, healthcare, government and electronics clients tap IBM SmartCloud Enterprise+; IBM SmartCloud for SAP applications is now available globally

ARMONK, N.Y. – 29 January 2013: IBM (NYSE: IBM) today announced global availability, across five continents, for its cloud service based on its industry-leading sourcing business to host SAP® applications and other core operations, plus a new center opening in Spain. Clients can now turn to cloud computing for enterprise applications, reducing the overall cost of IT and expanding online access while they invest in innovative analytics, social business and mobile computing.

Many organizations are eager to leverage the economic advantages of cloud computing to run their critical applications on the cloud. Their applications require deep technical expertise, around-the-clock customer service, tight security and ongoing maintenance — features typically found in IT sourcing arrangements but not in the “one-size-fits-all” model of self-service clouds.

To address this, IBM developed an infrastructure-as-a-service cloud built on decades of hosting experience gained as the world’s largest provider of IT sourcing services, with more than 1,000 clients. Called IBM SmartCloud Enterprise+ (SCE+), the service combines the best features of sourcing (high service level agreements, security and reliability) with the best features of cloud (elasticity and subscription-based pricing).

This service offers the same level of assurance normally associated with a hosted service to make sure clients can always access their core applications for ERP, CRM, analytics, social business and mobile computing from the cloud. The new service goes beyond the infrastructure offered as a service with typical public clouds. With this cloud service, IBM also helps manage patch updates and identity management, improving security, which analysts often cite as an inhibitor to cloud adoption.

“This is a logical evolution of IBM’s sourcing business that gives us an advantage both in our services relationships and the cloud market as we define a new enterprise-grade cloud today,” said Jim Comfort, general manager of IBM SmartCloud Services. “Our clients want sophisticated, economical cloud-based services that provide the same quality and service level as a private, hosted IT environment. With that assurance, they can focus more on driving business value from their data and operations, and less on managing their IT.”

An Enterprise-Grade Cloud Service—SmartCloud for SAP Applications

IBM today announced that SmartCloud for SAP applications, an enterprise service unique to IBM, is available globally.

As customers expand their use of SAP applications to more business processes, such as marketing campaigns based on Big Data, they often will benefit from more systems and greater management. Operating and managing IT environments running SAP solutions requires an advanced infrastructure and strong SAP operational skills.

IBM SmartCloud for SAP applications automates and standardizes provisioning of IT environments, and can accelerate service delivery with expert certified staff. The SmartCloud service for SAP applications delivers 99.7 percent availability, based on a global delivery model that supports cloud-based systems around the clock. This service is available for SAP Business Suite software and the SAP BusinessObjects™ solution portfolio as an enterprise-class, fully managed Platform-as-a-Service (PaaS) offering for running SAP solutions in a production environment.

“IBM’s new cloud service for SAP applications exemplifies our two companies’ work together over the last 40 years in delivering enterprise value to thousands of clients,” said Dr. Vishal Sikka, member of the SAP Executive Board, Technology and Innovation. “Cloud computing is helping our clients transform their IT infrastructures and businesses. We are confident that our partnership with IBM, using their SmartCloud platform and our business applications, will help drive differentiated value to clients around the globe.”

In addition, IBM is marrying its Global Business Services deep expertise, tools and processes with SmartCloud for SAP applications to deliver LifeCycle as a Service. This can transform implementations of SAP applications end to end—from sandbox to production. With this service, IBM takes responsibility and control of the SAP applications and provides management, including software patching of SAP solutions as well as support for the underlying operating system, database and middleware.

Clients may set up their SAP solutions development and test operations on IBM’s public cloud service—SmartCloud Enterprise. Then those SAP applications can be transitioned to the SCE+ platform for production to further assure higher availability of the operations.

Client wins

IBM has clients in finance, manufacturing, telecommunications, electronics, government and healthcare using SCE+.

For example, IBM SCE+ is the cloud platform powering the Philips Smart TV platform for Internet services, which delivers greater interactive services to millions of TV viewers in more than 30 countries in Europe, as well as Brazil and Argentina.

“We needed a cloud computing environment resilient enough to support unexpected demands at any given time, when millions of TV viewers access a variety of services on our network,” said Albert Mombarg, head of Philips Smart TV at TP Vision, a joint venture between Philips and TV manufacturer TPV. “IBM SmartCloud Enterprise+ provides an economical, flexible way to create new services for our viewers, and we expect it to transform the way we deliver Philips Smart TV and drive ongoing business innovation.”

Healthcare is also well suited to SCE+. Summit Health, a health care management company, is tapping IBM SCE+ to support the company’s growth plans around health care management and proactive wellness programs. The Generalitat de Cataluña, a regional government in Spain, plans to use SCE+ in a new IBM cloud datacenter in Spain to improve its healthcare system and share resources among its universities and town halls.

Details on SmartCloud Enterprise+

SCE+ is offered from IBM’s cloud centers in Japan, Brazil, Canada, France, Australia, the U.S. and Germany, giving clients broad geographic choice over where their data resides. IBM also announced today the opening of its first cloud center in Spain, located in Barcelona, to serve clients worldwide; it will be operational by mid-2013.

The SCE+ environment can have service levels that guarantee availability for each single OS-instance from 98.5 percent up to 99.9 percent.
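For a concrete sense of what those service levels mean, here is a quick sketch (illustrative, not from IBM’s materials) converting an availability percentage into the downtime it permits over a year:

```python
# Rough sketch: translate an availability SLA into allowed downtime.
# The tiers below mirror the 98.5-99.9 percent range cited for SCE+.

def allowed_downtime_hours(availability_pct, hours_in_period=24 * 365):
    """Return the maximum downtime (in hours) that a given availability
    percentage permits over the period (default: one year)."""
    return hours_in_period * (1 - availability_pct / 100)

for tier in (98.5, 99.7, 99.9):
    print(f"{tier}% -> {allowed_downtime_hours(tier):.1f} hours/year")
```

The jump from 98.5 to 99.9 percent shrinks the permitted outage window from roughly 131 hours a year to under 9, which is why the per-instance SLA tier matters so much when pricing a contract.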

New also is IBM Migration Services for SmartCloud Enterprise+, which helps clients migrate to the cloud more quickly and cost-effectively by determining which workloads are best suited to the SmartCloud Enterprise+ environment. Standardized and automation-assisted, IBM Migration Services are economically priced and aim to deliver ROI in 6 to 18 months.

About IBM Cloud Computing

IBM has helped thousands of clients adopt cloud models and manages millions of cloud-based transactions every day. With cloud, IBM helps clients rethink their IT and reinvent their business. IBM assists clients in areas as diverse as banking, communications, healthcare and government to build their own clouds or securely tap into IBM cloud-based business and infrastructure services. IBM is unique in bringing together key cloud technologies, deep process knowledge, a broad portfolio of cloud solutions, and a network of global delivery centers. For more information about cloud offerings from IBM, visit http://www.ibm.com/smartcloud. Follow us on Twitter at http://www.twitter.com/ibmcloud and on our blog at http://www.thoughtsoncloud.com.

Source: IBM

Whitepaper: Enterprise Cloud Development (ECD) Platforms

Achieving Competitive Differentiation Through Agility

With cloud-based agile software development, organizations can respond to fast-changing needs at the speed their business demands.

Organizations are finding that a cloud-based platform for agile software development provides the speed and flexibility they need to respond to opportunities and market changes. But this will not happen overnight. To support cloud adoption, enterprise cloud development (ECD) platforms can provide a secure path for managing development and deployment in a hybrid cloud environment. This research report highlights how organizations are implementing and benefiting from cloud-based agile software development.

In today’s competitive business landscape, a convergence of game-changing trends is placing new value on immediacy. With rampant social media use, mobile device proliferation and an exponential increase in data, “now” is the only time that matters.

Companies are responding by gradually and selectively moving data and applications into the cloud (which enables elastic and instant application provisioning and deployment) and by embedding agile concepts and practices into their corporate cultures and software development processes. Using agile practices, developers can quickly and easily collaborate on applications that enable workers to gather, access, analyze and act immediately on information drawn from widespread and disparate sources.

Consider the following:

▪ Business users and consumers expect to instantly access and intuitively interact with information via digitally connected mobile devices.

▪ Vital information and services are contained on multiple platforms, and companies must develop applications that support them all. “Cloud, mobile, Web, legacy systems—it’s what we call multimodal development, and it’s changing the way companies develop and deploy software,” says Melinda Ballou,  program director for Application Life-Cycle Management and Executive Strategies research at IDC.

▪ Data is available from device sensors, social media sites and corporate legacy systems.

▪ Companies need applications to help them quickly find and analyze pertinent information to support rapid decision-making and execution.

▪ A shaky economy has emphasized the need to find new growth opportunities within the existing business.

Read The Full Report

Cloud Infographic: CIOs & BIG DATA: What Your IT Team Wants You To Know

CIOs And Big Data

The purpose of Big Data is to supply companies with actionable information on any variety of aspects. But this is proving to be far more difficult than it looks.

Studies have shown that more than half of Big Data projects are left uncompleted, with 58% of them citing inaccurate scope as a cause. This hints at a widespread shortcoming of Big Data project management.

In fact, among the most often reported reasons for project failure is a lack of expertise in data analysis, from both a correlative and a contextual standpoint. Reports show that data processing, management and analysis are all difficult in every phase of a project, with IT teams citing each of those reasons more than 40 percent of the time.

However, failures in Big Data projects may not lie solely in faulty project management. In a recent survey, a staggering 80 percent of Big Data’s biggest challenges were attributed to a lack of appropriate talent. The field’s relative infancy is making it hard for Big Data enterprises to find the staff needed to see projects through to fruition. The result is underutilized data and missed project goals.

IT teams are quickly recognizing a chasm between executives and the frontline staffers whose job it is to apply the findings from Big Data. In the end, Big Data may not be the anticipated cure-all for 21st century business management. It is only as good as the system that runs it.

Infographic Source: Infochimps.com

How Can Cloud IDEs Save Your Time?: Build and Deploy – Part 2

A cloud IDE uses cloud resources on demand to make the development process more productive.

  • You can save a minute or so through reduced IDE boot time. It might look insignificant, but multiply those minutes by your team size. Add the seconds you save through faster builds and deploys, and you get an extra coffee break every day.
  • With an IDE installed on a local machine, your productivity depends largely on the specs of your laptop or PC. A cloud IDE builds projects on powerful servers, which in most cases gains you a few extra seconds per build, adding up to a few minutes a day just by building projects in the cloud. Sounds promising, doesn’t it? Besides, you can code, build and deploy projects even on a travel-stained laptop with mediocre specs.
  • What do you think of an IDE that anticipates your actions? Auto-saving, pre-compiling and pre-deploying intermediate incremental files make an almost instantaneous coding experience possible. Exo IDE, for instance, supports the JRebel plugin, which lets you change your code, save the project and update the app without redeploying it. In other words, the cloud IDE gets ready to compile and deploy changes before you even request those actions. The build server, editor and a testing VM are hosted side by side and configured to automate as many functions as possible, which makes for a pretty powerful combo. It is not perfect yet and there is much to work on, but that is exactly where most industry experts see the biggest benefit of cloud IDEs over conventional desktop environments, which limit the productivity of dev teams. Yes, the process of updating apps, i.e. implementing changes, does not differ much from how it happens with a conventional IDE. Yet once changes are committed, the project is built on a powerful cloud server, which saves you at least a few seconds. Some PaaS providers, for example OpenShift, choose to build projects on their side, while the cloud IDE just pushes changes.

Corporate benefits 

Developers are often wary of storing code on a third-party server rather than on a local machine. Yet, as a collaborative tool, a cloud IDE offers easy control over sensitive code. Many companies want, or even need, to trace code copies and monitor the performance of mid-sized and big development teams, i.e. find out how much code has been written. Having one tenant per project is the easiest and fastest way to set up one centralized system, and it frees up the time normally spent on administrative control.

Social integration 

Coding with colleagues is joyous, especially if you code in the cloud. There is no need to push code to repositories just to let someone have a look at it and fix that annoying bug that has been a real pain in the neck. Using a cloud IDE, you can invite collaborators directly from your workspace, as mentioned above. Not only are cloud IDEs productive, they are also socially friendly. Here’s how the social invitation feature is realized in Exo IDE:

Collaborate on your projects, brainstorm new ideas and edit your code in the cloud. Social integration and collaborative features are undoubtedly the core strengths of cloud IDEs. Isn’t it great to code together with a fellow programmer who’s located on the other side of the continent?

One hour of extra coding guaranteed

All in all, it is really possible to save an hour or more during a normal 6-8 hour working day just by using a cloud IDE in the development process. Typically, a coder may make about 30 iterations in a single hour, so shaving seconds off each edit-compile-debug-deploy cycle results in 1-2 hours of extra coding time daily. Multiply this figure by the number of dev team members, and you get pretty amazing results.
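The arithmetic behind that estimate can be checked with a quick sketch; the seconds saved per cycle, cycles per hour and active coding hours below are illustrative assumptions, not measured figures:

```python
# Back-of-the-envelope check of the time-savings claim above.
# All inputs are assumed, illustrative values.

def daily_time_saved_minutes(sec_saved_per_cycle, cycles_per_hour=30,
                             coding_hours_per_day=6):
    """Total minutes saved per day from shaving seconds off each
    edit-compile-debug-deploy cycle."""
    total_seconds = sec_saved_per_cycle * cycles_per_hour * coding_hours_per_day
    return total_seconds / 60

# Saving 20-40 seconds per cycle at 30 cycles/hour over 6 coding hours:
print(daily_time_saved_minutes(20))  # 60.0 minutes -> about 1 hour
print(daily_time_saved_minutes(40))  # 120.0 minutes -> about 2 hours
```

So the 1-2 hour figure holds if each cycle really does get 20-40 seconds faster, which is the part worth measuring for your own team.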

Conclusion

It would not be fair to claim that cloud IDEs totally oust conventional development environments like Eclipse. Moreover, offline IDEs are still competitive offerings in the market. For now, web-based IDEs do not have the full range of features modern developers may need, but they hold enormous potential for skyrocketing development in 2013. Cloud9, Koding and Exo IDE are moving at immense speed, adding new features each month. More productivity, more code-editing features, better infrastructure and more PaaS deployment possibilities: this is what we should expect from the major cloud IDE players this year.

 By Eugene Ivantsov

How Can Cloud IDEs Save Your Time? Part 1

Cloud computing is definitely among the most dynamically developing industries in IT. It’s also definitely the one that promises to surprise the global IT community with new technologies, tools and services in 2013. Cloud hosting, document management and data storage have become common and ordinary even for non IT-savvy people. If you can store your docs and host your apps in the cloud, why can’t you code in your browser?

That’s exactly the idea that came to enthusiasts a few years ago. As a result, the first web-based IDEs emerged to revolutionize the market and the development process at large. As with the majority of new cool things, cloud IDEs did not seem that cool to many developers, who stood by offline IDEs and conventional development, testing and code-sharing practices. So, why are cloud IDEs cool, and how can they improve software development productivity?

Here are a few key points.

Cloud IDE: Basics

The model of a typical web-based IDE is quite simple. Users access their cloud workspaces through a website. In their cloud IDE accounts, they can use the IDE’s own resources and services, for example to run and debug apps in the cloud, use a code assistant and so on. There are also external resources, i.e. third-party services such as AWS or Google App Engine. Cloud IDE users can deploy to a PaaS, then update and manage apps directly from their cloud workspaces.

Projects are hosted on cloud IDE servers (with 256-bit encryption protection, for example). Traditionally, web-based development environments provide free access for all users; private projects, however, might be fee-based, depending on the cloud vendor.

Getting started and sharing your projects

Developers spend quite some time configuring environments for coding and testing applications. That’s certainly not a big deal for one developer with 1-2 PCs, yet when it comes to big teams of developers, it takes time and money. What if one tenant were created and all settings and properties enclosed in one URL that you can share with the team? A project is created with a particular development, VM and testing environment, as well as Git and PaaS settings. Once a developer receives an invitation and accepts it, he or she can collaborate on the project right away! No developer likes the hours spent, when joining or starting a new project, on configuring the environment and installing all the necessary tools (the toolbox may be really huge in some cases). Onboarding in a cloud IDE is as easy as joining a group on Facebook: a few clicks take you to a fully configured workspace.

5 Minutes to Create and Deploy an App?

With a cloud IDE, you need no more than 5 minutes to create a simple Hello World app (say, in Java or Python) and deploy it to Google App Engine or CloudFoundry, while the same process with Eclipse can take up to 4-5 hours (downloads, installation, configuring settings, etc.). Isn’t that a lot just for a trial run? Check out the video below to see how easy it is to create a simple app and deploy it to GAE with a cloud IDE. Can Eclipse perform faster, even once everything is downloaded, configured and fine-tuned?

The answer is quite obvious.
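For a sense of how little code such a trial app involves, here is a minimal Python WSGI Hello World; the platform packaging (an app.yaml for Google App Engine, a manifest for CloudFoundry) is platform-specific and omitted, so this is just the portable application core:

```python
# Minimal WSGI "Hello World" of the sort described above.
from wsgiref.simple_server import make_server

def application(environ, start_response):
    # Every request gets the same plain-text greeting.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello, World!"]

# To try it locally before deploying to a PaaS:
# make_server("", 8080, application).serve_forever()
```

The app itself is a dozen lines; the hours with a desktop IDE go into everything around it, which is exactly the setup a cloud IDE pre-configures for you.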

Arduous discussions can be had over whether to go with an IDE on your machine or use a web-based one, but a simple demo will show you the real mettle of a cloud-based IDE. Of course, this is not to say that offline IDEs, and Eclipse in particular, have no advantages at all. Let’s be frank: Eclipse still rocks, especially when it comes to JUnit testing (a convenient graphical interface), numerous plugins and, of course, its open-source nature. The flexibility of Eclipse is what makes it incredibly popular among a huge coding community worldwide, and that multitude of adepts is itself a factor behind its continued popularity.

Read Part 2 Tomorrow: Instant build, commit and deploy

Guest Post By Eugene Ivantsov

Security In The Cloud – Maintaining A Secure Environment

One of the most prevalent points brought up by skeptics of cloud computing is the integrity of security for such systems. Different reservations are held against different models of cloud computing, in particular public clouds. The mere fact that public clouds host environments for multiple organizations, and further accommodate multiple tenants for each group, gives the perception that information stored on such systems may be accessible to anyone.

The effectiveness of a cloud system’s security relies on several factors. First and foremost, the infrastructure upon which it is built will reflect the overall security capabilities of the system. The platform, or operating system, that sits on top of the underlying hardware is used to restrict access to records and other services that regulate the operation of the system. This determines how effectively the system can stop malicious administrators, as well as other users with legitimate access who may intend to harm a business digitally. For example, file auditing, a feature that has been readily available since the inception of Windows Server 2003, is a great resource both as a preventative security measure, like a visible security camera, and as a tool for review should something go awry.

Mostly, though, the security of the system is the responsibility of the end user. This is where a few key concepts come into play. Educating staff is the most effective way to ensure that guidelines are followed, thereby instilling secure practices for a cloud environment.

The following are some of the most important aspects to creating and maintaining a secure environment.

  • Be smart with credentials. When creating an account with just about any web service, you are generally required to create a strong password. This means the password should be at least eight characters long and contain a combination of upper case letters, lower case letters, numbers, and special characters. The password(s) used should also be completely unique. This will inhibit anyone from guessing the password and prevent password-cracking software from easily gaining access.
  • Regularly back up data. Data back-up procedures should not be pushed to the wayside, even if the infrastructure of the cloud is fully redundant. There are still times when even the most seasoned IT professional will encounter a problem that leaves him shaking his head in confusion. Furthermore, accidents happen. A backup of information from the system to another location will help prevent catastrophe should some mishap on a “perfect” system wipe out critical data.
  • Keep up with workstations and mobile devices. More than likely, a hacker is going to take the path of least resistance. To gain access to a system, it is a lot easier to extract information from an auxiliary component of the network than to attack the network’s infrastructure directly. Workstations, especially Windows machines, should always be updated with current security patches. Antivirus software and firewalls are important to help prevent malevolent applications from accessing residual information on the computer that could allow for entry. Moreover, unused networking hardware, such as Bluetooth, should remain disabled. The fewer conduits available for attack, the less likely an attack is to happen.
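The credential guideline above can be sketched as a simple checker. The policy encoded here (eight characters, four character classes) follows the bullet; a real service would apply its own rules on top:

```python
import re

def is_strong_password(pw):
    """Check the guideline above: at least eight characters, with
    upper case, lower case, digits, and special characters."""
    return (len(pw) >= 8
            and re.search(r"[A-Z]", pw) is not None
            and re.search(r"[a-z]", pw) is not None
            and re.search(r"[0-9]", pw) is not None
            and re.search(r"[^A-Za-z0-9]", pw) is not None)

print(is_strong_password("password"))    # False: one character class only
print(is_strong_password("C1oud!Safe"))  # True: meets all four criteria
```

Note that such a checker enforces composition, not uniqueness; reuse across services is the one rule no validator can see.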

By Deney Dentel

Deney serves as CEO of Nordisk Systems, Inc. Nordisk Systems specializes in various IT services, providing the best solutions for your business in cloud computing, virtualization, backup and recovery, and managed services.

Disaster Recovery In The Cloud, Or DRaaS: Revisited

The idea of offering Disaster Recovery services has been around as long as SunGard or IBM BCRS (Business Continuity & Resiliency Services). Disclaimer: I worked for the company that became IBM Information Protection Services in 2008, a part of BCRS.

It seems inevitable that Cloud Computing and Cloud Storage should have an impact on the kinds of solutions that small, medium and large companies would find attractive and would fit their requirements. Those cloud-based DR services are not taking the world by storm, however. Why is that?

Cloud infrastructure seems perfectly suited for economical DR solutions, yet I would bet that none of the people reading this blog has found a reasonable selection of cloud-based DR services in the market. That is not to say that there aren’t DR “As a Service” companies, but the offerings are limited. Again, why is that?

Much like Cloud Computing in general, the recent emergence of enabling technologies was preceded by a relatively long period of commercial product development. In other words, virtualization of computing resources promised “cloud” long before we could actually make it work commercially. I use the term “we” loosely. Seriously, GreenPages announced a cloud-centric solutions approach more than a year before vCloud Director was even released. Why? We saw the potential, but we had to watch for, evaluate, and observe real-world performance in the emerging commercial implementations of self-service computing tools in a virtualized datacenter marketplace. We are now doing the same thing in the evolving solutions marketplace around derivative applications such as DR and archiving.

I looked into helping put together a DR solution leveraging cloud computing and cloud storage offered by one of our technology partners that provides IaaS (Infrastructure as a Service). I had operational and engineering support from all parties in this project and we ran into a couple of significant obstacles that do not seem to be resolved in the industry.

Bottom line:

  1. A DR solution in the cloud, involving recovering virtual servers in a cloud computing infrastructure, requires administrative access to the storage as well as the virtual computing environment (like being in vCenter).
  2. Equally important, if the solution involves recovering data from backups, is the requirement for a high-speed, low-latency (I call this “back-end”) connection between the cloud storage where the backups are kept and the cloud computing environment. At last check (a couple of months ago), this was only present in Amazon, and you pay extra for that connection. I also call this “locality.”
  3. The Service Provider needs the operational workflow to do this. Everything I worked out with our IaaS partners was a manual process that went way outside normal workflow and ticketing. The interfaces for the customer to access computing and storage were separate and radically different. You couldn’t even see the capacity you consumed in cloud storage without opening a ticket. From the SP side, notification of DR tasks they would need to do, required by the customer, didn’t exist. When you get to billing, forget it. Everyone admitted that this was not planned for at all in the cloud computing and operational support design.

Let me break this down:

  • Cloud Computing typically has high speed storage to host the guest servers.
  • Cloud Storage typically has “slow” storage, on separate systems and sometimes separate locations from a cloud computing infrastructure. This is true with most IaaS providers, although some Amazon sites have S3 and EC2 in the same building and they built a network to connect them (LOCALITY).
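A rough transfer-time estimate shows why that back-end connection (locality) matters so much. The backup size and link speeds below are illustrative round numbers, not any provider’s figures:

```python
# Estimated time to pull a backup set from cloud storage into the
# compute environment at different link speeds (decimal units).

def transfer_hours(data_gb, link_mbps):
    """Hours to move data_gb gigabytes over a link of link_mbps megabits/s."""
    megabits = data_gb * 8 * 1000  # GB -> megabits
    return megabits / link_mbps / 3600

data_gb = 5000  # a modest 5 TB backup set
for label, mbps in [("WAN, 100 Mbps", 100), ("Back-end LAN, 10 Gbps", 10000)]:
    print(f"{label}: {transfer_hours(data_gb, mbps):.1f} hours")
```

At WAN speeds the restore takes on the order of days, which is exactly the showstopper described below; with a back-end LAN-speed connection it drops to about an hour.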

Scenario 1: Recovering virtual machines and data from backup images

Scenario 2: Replication based on virtual server-based tools (e.g. Veeam Backup & Replication) or host-based replication

Scenario 3: SRM, array or host replication

Scenario 1: Backup Recovery. I worked hard on this with a partner. This is how it would go:

  1. Back up VMs at customer site; send backup or copy of it to cloud storage.
  2. Set up a cloud computing account with an AD server and a backup server.
  3. Connect the backup server to the cloud storage backup repository (first problem)
    • Unless the cloud computing system has a back end connection at LAN speed to the cloud storage, this is a showstopper. It would take days to do this without a high degree of locality.
    • The provider’s proposed solutions, when asked about this:
      • Open a trouble ticket to have the backups dumped to USB drives, shipped or carried to the cloud computing area and connected into the customer workspace. Yikes.
      • We will build a back end connection where we have both cloud storage and cloud computing in the same building—not possible in every location, so the “access anywhere” part of a cloud wouldn’t apply.

  4. Restore the data to the cloud computing environment (second problem)

    • What is the “restore target”? If the DR site were a typical hosted or colo site, the customer backup server would have the connection and authorization to recover the guest server images to the datastores, and the ability to create additional datastores. In vCenter, the Veeam server would have the vCenter credentials and access to the vCenter storage plugins to provision the datastores as needed and to start up the VMs after restoring/importing the files. In a Cloud Computing service, your backup server does NOT have that connection or authorization.
    • How can the customer backup server get the rights to import VMs directly into the virtual VMware cluster? The process to provision VMs in most cloud computing environments is to use your templates, their templates, or “upload” an OVF or other type of file format. This won’t work with a backup product such as Veeam or CommVault.

  5. Recover the restored images as running VMs in the cloud computing environment (third problem), tied to item #4.

    • Administrative access to provision datastores on the fly and to turn on and configure the machines is not there. The customer (or GreenPages) doesn’t own the multitenant architecture.
    • The use of vCloud Director ought to be an enabler, but the storage plugins, and rights to import into storage, don’t really exist for vCloud. Networking changes need to be accounted for and scripted if possible.

Scenario 2: Replication by VM. This has cost issues more than anything else.

    • If you want to replicate directly into a cloud, you will need to provision the VMs and pay for their resources as if they were “hot.” It would be nice if there were a lower “DR tier” for pricing: if the VMs are for DR, you don’t get charged full rates until you turn them on and use them for production.
      • How do you negotiate that?
      •  How does the SP know when they get turned on?
      • How does this fit into their billing cycle?
    • If it is treated as a hot site (or warm), then the cost of the DR site equals that of production until you solve these issues.
    • Networking is an issue, too, since you don’t want to turn that on until you declare a disaster.
      • Does the SP allow you to turn up networking without a ticket?
      • How do you handle DNS updates if your external access depends on root server DNS records being updated—really short TTL? Yikes, again.
    • Host-based replication (e.g. WANsync, VMware)—you need a host you can replicate to. Your own host. The issues are cost and scalability.
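The pricing gap behind Scenario 2 can be sketched with made-up rates: full “hot” billing for idle standby VMs versus a hypothetical discounted DR tier that only charges full price for hours actually failed over. All rates and counts here are invented for illustration:

```python
# Sketch of the DR-tier pricing idea discussed above. Rates are
# hypothetical examples, not any provider's published pricing.

def monthly_dr_cost(vm_count, hot_rate, dr_rate, hours_failed_over,
                    hours_in_month=730):
    """Monthly cost: standby hours billed at dr_rate, failed-over
    hours billed at the full hot_rate, per VM."""
    standby = vm_count * dr_rate * (hours_in_month - hours_failed_over)
    active = vm_count * hot_rate * hours_failed_over
    return standby + active

# 50 DR VMs, no disaster this month:
hot = monthly_dr_cost(50, hot_rate=0.10, dr_rate=0.10, hours_failed_over=0)
tier = monthly_dr_cost(50, hot_rate=0.10, dr_rate=0.02, hours_failed_over=0)
print(f"Billed at hot rates: ${hot:,.2f}/month")   # duplicate-environment cost
print(f"With a DR tier:      ${tier:,.2f}/month")
```

The open questions in the bullets above (how the SP knows when VMs turn on, and how that enters the billing cycle) are exactly what would drive the `hours_failed_over` input in practice.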

Scenario 3: SRM (VMware Site Recovery Manager). This should be baked into any serious DR solution from a carrier or service provider, but many of the same issues apply.

    • SRM based on array replication has complications. Technically, the provider can solve this by putting (for example) EMC VPLEX and RecoverPoint appliances at every customer production site, so that you can replicate from dissimilar storage to the SP’s datacenter. But they need to set up this many-to-one relationship on arrays that are part of the cloud computing solution, or at least of a DR cloud computing cluster. Most SPs don’t have this. Other brands and technologies can do this, but the basic configuration challenge remains—many-to-one replication into a multi-tenant storage array.
    • SRM based on VMware host-based (vSphere) replication has administrative access issues as well. SRM at the DR site has to either accommodate multi-tenancy, or each customer gets their own SRM target. You also need a host target. Do you rent it all the time? You have to, since you can’t do that on demand in a multi-tenant environment. Cost and scalability, again!
    • Either way, now the big red button gets pushed. Now what?
      • All the protection groups exist on storage and in cloud computing. You are now paying for a duplicate environment in the cloud, not an economically sustainable approach unless you have a “DR Tier” of pricing (see Scenario 2).
      • All the SRM scripts kick in—VMs are coming up in order in protection groups, IP addresses and DNS are being updated, CPU loads and network traffic climb…what is the impact of all this?
      • How does that button get pushed? Does the SP need to push it? Can the customer do it?

These are the main issues as I see them, and there is still more to it. Using vCloud Director is not the same as using vCenter. Everything I’ve described was designed for a vCenter-managed system, not a multi-tenant system with fenced-in rights and networks and shared storage infrastructure. The APIs are not there, and if they were, imagine the chaos and impact of random DR tests hitting production cloud computing systems, unmanaged and uncontrolled by the service provider. What if a real disaster hit New England, and a hundred customers needed to spin up all their VMs within a few hours? They aren’t all in one datacenter, but if one provider that set this up had dozens of such customers, that is a huge hit. Providers need to hold all that capacity in reserve, or syndicate it the way IBM and SunGard do. That is the equivalent of thin-provisioning your datacenter.
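The "thin-provisioning your datacenter" point can be quantified with a toy oversubscription calculation: if declarations were independent events, a binomial tail gives the chance that more customers declare at once than the provider reserved capacity for. Every number below is invented, and the independence assumption is exactly what a regional disaster breaks.

```python
import math

# Toy oversubscription model for syndicated DR capacity ("thin-provisioning
# your datacenter"). Customer counts, capacity, and probabilities are invented.

CUSTOMERS = 100       # tenants sharing the DR capacity pool
RESERVED_SLOTS = 10   # simultaneous full-site recoveries the provider can host
P_DECLARE = 0.02      # chance any one customer declares in a given period

def p_overcommit(n=CUSTOMERS, k=RESERVED_SLOTS, p=P_DECLARE):
    """Probability that more than k of n customers declare at once.

    Binomial tail, which assumes independent declarations -- the assumption a
    regional event (like the New England example above) invalidates, since
    correlated declarations make this number explode.
    """
    return sum(math.comb(n, j) * p**j * (1 - p)**(n - j)
               for j in range(k + 1, n + 1))

print(f"P(demand exceeds reserve): {p_overcommit():.6f}")
```

Under independence the provider looks comfortably safe with a small reserve; the business risk is that one shared disaster turns a hundred independent 2% events into one correlated 100% event.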

This conversation, like many I’ve had in the last two years, ends somewhat unsatisfactorily with the conclusion that there is no clear solution—today. The journey to discovering or designing a DRaaS is important, and it needs to be documented, as we have done here in this blog and in other presentations and meetings. The industry will overcome these obstacles, but the customer must remain informed and persistent. The goal of an economically sustainable DRaaS solution can only be achieved by market pressure and creative vendors. We will do our part by being your vigilant and dedicated cloud services broker and solution services provider.

By Randy Weis
