
Outsourcing Services – Risks And Advantages

Outsourcing non-core activities is a key concern for every modern IT company. But as IT companies weigh the considerable appeal of outsourcing IT services, they must balance the risks and issues against the potential for labor arbitrage (the economic phenomenon in which jobs move to countries where labor and the cost of doing business are cheaper).

The main advantages of outsourcing IT services are:
  • Save money: Organizations that outsource IT services, whether offshore or through a nearby service provider, convert fixed costs into variable ones, freeing up capital for use in other areas.
  • Focus on core operations: Outsourcing IT services lets you concentrate on your company's core competencies.
  • Level the playing field: Outsourcing IT systems and services gives small firms access to IT resources comparable to those of large organizations.


Buyers of outsourcing services must take risk seriously, and the results of that assessment are often written into the outsourcing contract. Because many IT companies outsource their infrastructure to cloud suppliers, the Service Level Agreement (SLA) is an essential topic. According to IBM, a Service Level Agreement “defines how the consumer will use the services and how the provider will deliver them”. I recommend reading further on this topic.

The potential risks of outsourcing IT services are:

  • Questionable accessibility: Companies that depend on an outside service run the risk of downtime during critical system failures, leading to a possible decline in productivity. Providers usually guarantee 99.9% uptime, which keeps this risk small (see the downtime sketch after this list).
  • Loss of personal touch: An in-house system manager becomes familiar with the system he or she manages and can therefore deliver results more efficiently, rapidly and professionally. IT outsourcing on its own can't provide a personal touch that comes close to that of an in-house IT professional. The solution is to treat the outsourcing provider as an extension of your own team.
  • Questionable security protocols: Companies must examine whether the outsourcing provider uses security procedures as strong as their own. This is especially important when working with offshore providers. While these often have outstanding security standards, there is always the risk that one of the outsourcing company's employees will breach security. Be careful when you negotiate the Service Level Agreement with the provider, and make sure you understand exactly how your applications and data will be protected and which policies the IT services provider follows to maintain the security of your data.
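To make the 99.9% figure concrete, it helps to translate an SLA's availability percentage into an allowed-downtime budget before signing. The short Python sketch below does only that arithmetic; the percentages and reporting periods are illustrative examples, not figures from any particular provider's contract.

```python
# Translate an SLA availability percentage into an allowed-downtime budget.
# The percentages and reporting periods below are illustrative examples only.

PERIODS_HOURS = {"per day": 24, "per month": 30 * 24, "per year": 365 * 24}

def downtime_budget(availability_pct: float) -> dict:
    """Return the allowed downtime in minutes for each reporting period."""
    unavailable_fraction = 1 - availability_pct / 100.0
    return {period: hours * 60 * unavailable_fraction
            for period, hours in PERIODS_HOURS.items()}

if __name__ == "__main__":
    for sla in (99.0, 99.9, 99.99):
        budget = downtime_budget(sla)
        print(f"{sla}% uptime -> " + ", ".join(
            f"{minutes:.1f} min {period}" for period, minutes in budget.items()))
```

At 99.9%, for example, this works out to roughly 43 minutes of permissible downtime per 30-day month, which is the kind of number worth negotiating explicitly in the SLA rather than leaving as a marketing figure.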

Before deciding to move to the cloud, every manager should weigh the risks and advantages for their company and make the most appropriate decision for the organization.

By Rick Blaisdell

Cloud Infographic: Moving To The Cloud

Moving To The Cloud

Here are the pros of moving your business to the cloud:

Setup cost is almost zero. When setting up your business for cloud computing, the expenses you would have incurred had the system been set up in-house are non-existent. There are no hardware purchases for servers and no overspending on the man-hours it would take to set everything up. Service providers may even waive installation fees.

Great scalability, which eases expansion. By their nature, cloud services scale from the very small needs of home businesses to huge corporate demands. Because of the pay-per-use model, clients pay only for what they need and can scale up whenever their needs grow.

Infographic Source: YorkshireCloud

The Problem With Cloud Services And Downtime

Cloud Services And Downtime

One of the problems you may run into when working in the cloud is that some cloud services cannot be counted on for nonstop service. Certain cheaper services may go down and become inaccessible at points during the day. You will definitely want to avoid these services, because that defeats the purpose of the cloud.

If you are going to be doing a lot of your projects in the cloud, then you need to make sure that the cloud service you are using is actually reliable. Even the top cloud services on the market can experience some problems from time to time. Amazon’s cloud services recently had a failure that affected many of the businesses that are associated with them. This kind of downtime can be a huge problem if you are not properly prepared.

It’s always a good idea to back up your data on a few different cloud services to make sure that you don’t lose any important documents or files. You never want to leave yourself vulnerable to any kind of data problem. Even if you are not working in the cloud, you should always have your data backed up in two different locations. Some people actually use the cloud for nothing more than backup services. Although you get more benefits from the cloud when you use all of its features, it is not a bad idea to simply use it to back up your files from time to time.

Certain cloud services come with added backup options, and these should be among your top candidates. Safety and security should still be your top priorities while you are working in the cloud. There is no point in even signing up for cloud services if everything is not going to be secure and backed up in two other locations.
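As a rough illustration of the “backed up in two other locations” advice, the sketch below copies a file to two independent backup directories and verifies each copy with a checksum. The paths are placeholders; in practice each destination would be a different cloud provider's storage, reached through that provider's own client or mount.

```python
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in 1 MB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def back_up(source: Path, destinations: list[Path]) -> None:
    """Copy `source` to every destination and confirm each copy matches."""
    original = sha256(source)
    for dest_dir in destinations:
        dest_dir.mkdir(parents=True, exist_ok=True)
        copy = dest_dir / source.name
        shutil.copy2(source, copy)
        if sha256(copy) != original:
            raise RuntimeError(f"Backup to {copy} is corrupt")

if __name__ == "__main__":
    # Placeholder paths standing in for two separate cloud backup mounts.
    back_up(Path("important.docx"),
            [Path("/mnt/backup-cloud-a"), Path("/mnt/backup-cloud-b")])
```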

At the end of the day, the top cloud services will give you high-quality service that will not cause your business any problems. Stick with the cloud companies that other businesses use, because these are the ones that have proven reliable over the long term. The cloud can definitely improve your business productivity overall, but it will do more harm than good if you sign up for a service that is unreliable and unable to keep its servers up and running.

By Kyle Torpey

Cloud Infographic: Who Will Acquire RIM?

Mobile Cloud Computing (MCC) is the next logical evolution for cloud computing.

Though mobile technology is still evolving, especially the operating systems, it is doing so at a very rapid pace. Most modern mobile devices now sport capable web browsers, mainly due to advances in mobile operating systems by Google, Microsoft, and Apple. One company that is clearly not advancing, let alone keeping up with the rest of the pack, is Research In Motion. At this point speculation surrounds the company, and one of the main areas of focus is acquisition.

Included is an infographic provided by Firmex which outlines some of the large cloud vendors and carriers that could come into play regarding the possible acquisition of RIM.

Infographic provided by: Firmex

Optimize Your Infrastructure: From Hand-Built To Mass-Production

If you’ve been reading this blog, you’ll know that I write a lot about cloud and cloud technologies, specifically around optimizing IT infrastructures and transitioning them from traditional management methodologies and ideals toward dynamic, cloud-based methodologies. Recently, in conversations with customers as well as my colleagues and peers within the industry, it has become increasingly clear that the public, at least the subset I deal with, is simply fed up with the massive amount of hype surrounding cloud. Everyone is using it as a selling point and has attached so many different meanings to it that it has become meaningless…white noise that just hums in the background and adds no value to the conversation. To try to cut through that background noise, I’m going to cast the conversation in a way that is a lot less buzzy and a little more specific to what people know and are familiar with. Let’s talk about cars (ha ha, again)…and how Henry Ford revolutionized the automobile industry.

First, let’s be clear that Henry Ford did not invent the automobile; he invented a way to make automobiles affordable to the common man or, as he put it, the “great multitude.” After the Model A, he realized he’d need a more efficient way to mass produce cars in order to lower the price while keeping them at the same level of quality they were known for. He looked at other industries and found four principles that would further his goal: interchangeable parts, continuous flow, division of labor, and reducing wasted effort. Ford put these principles into play gradually over five years, fine-tuning and testing as he went along. In 1913, they came together in the first moving assembly line ever used for large-scale manufacturing. Ford produced cars at a record-breaking rate…and each one that rolled off the production line was virtually identical to the one before and after.

Now let’s see how the same principles (of mass production) can revolutionize the IT Infrastructure as they did the automobile industry…and also let’s be clear that I am not calling this cloud, or dynamic datacenter or whatever the buzz-du-jour is, I am simply calling it an Optimized Infrastructure because that is what it is…an IT infrastructure that produces the highest quality IT products and services in the most efficient manner and at the lowest cost.

Interchangeable Parts

Henry Ford achieved significant efficiency by using interchangeable parts, which meant making the individual pieces of the car the same every time. That way any valve would fit any engine, any steering wheel would fit any chassis. The efficiencies to be gained had already been proven in the assembly of standardized photography equipment pioneered by George Eastman in 1892. This meant improving the machinery and cutting tools used to make the parts. But once the machines were adjusted, a low-skilled laborer could operate them, replacing the skilled craftsperson who formerly made the parts by hand.

In a traditional “Hand-Built” IT infrastructure, skilled engineers are essentially building servers—physical and virtual—and other IT assets from scratch, typically reusing very little with each build. They may have a “golden image” for the OS, but they then build multiple images based on the purpose of the server, its language or the geographic location of the division or department it is meant to serve. They might layer on different software stacks with particularly configured applications, or install each application one after another. These assets are then configured by hand using run books, build lists, etc., then tested by hand, which means it takes time and skilled effort, and there are still unacceptable numbers of errors, failures and expensive rework.

By significantly updating and improving the tools used (i.e. virtualization, configuration and change management, software distribution, etc.), the final state of IT assets can be standardized, the way they are built can be standardized, and the processes used to build them can be standardized…such that building any asset becomes a clear and repeatable process of connecting different parts together; these interchangeable parts can be used over and over and over again to produce virtually identical copies of the assets at much lower costs.
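As a minimal sketch of the “interchangeable parts” idea, assume a home-grown provisioning script rather than any particular configuration-management product: server builds are assembled from a small catalog of reusable, versioned parts instead of being hand-crafted each time. The part names and values (`base_os`, `web_stack`, and so on) are invented for illustration.

```python
# Illustrative only: a catalog of reusable "parts" from which server builds
# are assembled, rather than hand-building each server from scratch.
PARTS = {
    "base_os":    {"image": "corp-linux-golden-1.4", "hardening": "cis-baseline"},
    "web_stack":  {"packages": ["nginx", "openssl"], "ports": [80, 443]},
    "monitoring": {"agent": "metrics-agent", "interval_seconds": 60},
}

def build_server(name: str, part_names: list[str]) -> dict:
    """Compose a server definition from standardized parts."""
    spec = {"name": name, "parts": {}}
    for part in part_names:
        spec["parts"][part] = PARTS[part]   # the same part, every build
    return spec

# Every web server built this way is virtually identical to the last one.
web01 = build_server("web01", ["base_os", "web_stack", "monitoring"])
web02 = build_server("web02", ["base_os", "web_stack", "monitoring"])
assert web01["parts"] == web02["parts"]
```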

Division of Labor

Once Ford standardized his parts and tools, he needed to divide up how things were done in order to be more efficient. He needed to figure out which process should be done first, so he divided the labor by breaking the assembly of the Model T into 84 distinct steps. Each worker was trained to do just one of these steps, and the steps were always performed in the exact same order.

The Optimized Infrastructure relies on the same principle of dividing up the effort (of defining, creating, managing and ultimately retiring each IT asset) so that only the most relevant technology, tool or sometimes, yes, human, does the work. As can be seen in later sections, these “tools” (people, process or technology components) are then aligned in the most efficient manner such that it dramatically lowers the cost of running the system as well as guarantees that each specific work effort can be optimized individually, irrespective of the system as a whole.

Continuous Flow

To improve efficiency even more, and lower the cost even further, Ford needed the assembly line to be arranged so that as one task was finished, another began, with minimum time spent in set-up (set-up is always a negative production value). Ford was inspired by the meat-packing houses of Chicago and a grain mill conveyor belt he had seen. If he brought the work to the workers, they spent less time moving about. He adapted the Chicago meat-packers’ overhead trolley to auto production by installing the first automatic conveyor belt.

In an Optimized Infrastructure, this conveyor belt (assembly line) consists of individual process steps (automation) that are “brought to the worker” (each specific technological component responsible for that process step; see Division of Labor) in a well-defined pattern (workflow), and then each workflow is arranged in a well-controlled manner (orchestration), because it is no longer human workers doing those commodity IT activities (well, in 99.99% of the cases) but the system itself, leveraging virtualization, fungible resource pools and high levels of standardization, among other things. This is the infrastructure assembly line and is how IT assets are mass produced…each identical and of the same high quality at the same low cost.
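To make the assembly-line analogy concrete, here is a hedged sketch of that orchestration: each step is a small single-purpose function (division of labor), and the workflow runs them back to back with no manual hand-off between them (continuous flow). The step names are invented; a real environment would call its virtualization, configuration-management and testing tools at each stage.

```python
# Each step is one single-purpose "worker"; the workflow is the conveyor belt.
def allocate_resources(asset: dict) -> dict:
    asset["vm"] = f"vm-{asset['name']}"    # stand-in for a hypervisor call
    return asset

def apply_configuration(asset: dict) -> dict:
    asset["config"] = "golden-profile-1.4" # stand-in for configuration management
    return asset

def run_smoke_tests(asset: dict) -> dict:
    asset["tested"] = True                 # stand-in for automated testing
    return asset

def register_asset(asset: dict) -> dict:
    asset["registered"] = True             # stand-in for an inventory update
    return asset

WORKFLOW = [allocate_resources, apply_configuration, run_smoke_tests, register_asset]

def orchestrate(name: str) -> dict:
    """Run every step in order, with no manual set-up between steps."""
    asset = {"name": name}
    for step in WORKFLOW:
        asset = step(asset)
    return asset

print(orchestrate("web03"))
```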

Reducing Wasted Effort

As a final principle, Ford called in Frederick Winslow Taylor, the creator of “scientific management,” to do time and motion studies to determine the exact speed at which the work should proceed and the exact motions workers should use to accomplish their tasks, thereby reducing wasted effort. In an Optimized Infrastructure, this is done through continuous process improvement (CPI), but CPI cannot be done correctly unless you are monitoring the performance details of all the processes and the performance of the system as a whole, and documenting the results on a constant basis. This requires an infrastructure-wide management and monitoring strategy which, as you’ve probably guessed, is what Frederick Taylor was doing in the Ford plant in the early 1900s.
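Continuous process improvement only works if each step is actually measured. The sketch below shows one generic way to do that, under the same assumptions as the previous example: wrap each workflow step so its duration is recorded on every run, then let the slowest step point to the wasted effort. The wrapped step and its timings are illustrative; a real deployment would feed these numbers into its monitoring system.

```python
import time
from statistics import mean

# Durations collected per step; in practice these feed a monitoring system.
timings: dict[str, list[float]] = {}

def timed(step):
    """Wrap a workflow step so its duration is recorded on every run."""
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = step(*args, **kwargs)
        timings.setdefault(step.__name__, []).append(time.perf_counter() - start)
        return result
    return wrapper

@timed
def apply_configuration(asset: dict) -> dict:
    time.sleep(0.01)   # stand-in for real configuration work
    return asset

for _ in range(5):
    apply_configuration({"name": "web04"})

for name, samples in timings.items():
    print(f"{name}: mean {mean(samples) * 1000:.1f} ms over {len(samples)} runs")
```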

Whatever You Call It…

From the start, the Model T was less expensive than most other hand-built cars because of expert engineering practices, but it was still not attainable for the “great multitude” as Ford had promised the world. He realized he’d need a more efficient way to produce the car in order to lower the price, and by using the four principles of interchangeable parts, continuous flow, division of labor, and reducing wasted effort, in 1915 he was able to drop the price of the Model T from $850 to $290 and, in that year, he sold 1 million cars.

Whether you prefer to call it cloud, or dynamic datacenter, or the Great Spedini’s Presto-Chango Cave of Magic Data doesn’t really matter…the fact is that those four principles listed above can be used along with the tools, technologies and operational methodologies that exist today—which are not rocket science or bleeding edge—to revolutionize your IT Infrastructure and stop hand-building your IT assets (employing your smartest and best workers to do so) and start mass producing those assets to lower your cost, increase your quality and, ultimately, significantly increase the value of your infrastructure.

With an Optimized Infrastructure of automated tools and processes where standardized/interchangeable parts are constantly reused based on a well-designed and efficiently orchestrated workflow that is monitored end-to-end, you too can make IT affordable for the “great multitude” in your organization.

By Trevor Williamson

Cloud Infographic: How To Avoid Problems For Your Business With Cloud Storage

How To Avoid Problems For Your Business

Most people know that cloud storage comes with some risks, but it can also be a savior for your business if you use it in the correct manner. Many people simply store all of their data in a cloud service and hope for the best as they continue to use that cloud on a daily basis. You should make sure that all of your data is backed up on a regular basis when you use the cloud, because there are a number of things that could go wrong while you are in the cloud. Anything from an intrusion to a loss of service could cost your business money in the long run.

Whenever you are storing information in the cloud, you need to make sure that information is backed up somewhere else. Some people use the cloud as their backup location, which is sometimes a useful practice. If cloud storage is going to be the place you access more than anything else, then you need some private servers and a few other cloud networks where you can back up the data in your main cloud on a regular basis.

Cloud storage comes with many benefits in the short term, but you are also putting yourself at risk of long-term problems. You can avoid these problems with the right security measures and a contingency plan in case something goes wrong with your main database in the cloud. You should have at least two backup clouds that you can turn to quickly when something goes wrong in your main cloud. You should also think about backing up your files locally, so you have one part of your storage that cannot be accessed by the outside world.

Although you may think that security is your biggest concern when it comes to cloud storage, it is not – that honor belongs to human error. Many companies have faced a problem where an employee has accidentally deleted a large number of files, so there needs to be a way to get that data back quickly. Not only do you want to make sure that you do not lose any files, but you also need to make sure that you can keep your business running as if nothing terrible has happened behind the scenes. This is where the backup cloud comes in to save the day.
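Because the risk highlighted here is deletion rather than intrusion, retention matters as much as replication: a backup that simply mirrors the live data will mirror the deletion too. The sketch below keeps timestamped copies so an earlier version can be restored quickly; the backup path and retention count are placeholders, not a recommendation for any particular product.

```python
import shutil
import time
from pathlib import Path

BACKUP_ROOT = Path("/mnt/backup-cloud")   # placeholder for a cloud backup mount
KEEP_VERSIONS = 5                         # illustrative retention policy

def snapshot(source: Path) -> Path:
    """Store a timestamped copy of `source` and prune the oldest versions."""
    versions_dir = BACKUP_ROOT / source.name
    versions_dir.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    copy = versions_dir / f"{stamp}-{source.name}"
    shutil.copy2(source, copy)
    for old in sorted(versions_dir.iterdir())[:-KEEP_VERSIONS]:
        old.unlink()                      # keep storage bounded
    return copy

def restore_latest(name: str, target: Path) -> None:
    """Bring back the newest snapshot after an accidental deletion."""
    versions_dir = BACKUP_ROOT / name
    latest = sorted(versions_dir.iterdir())[-1]
    shutil.copy2(latest, target)
```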

(Infographic Source: Infographics Mania)

By Kyle Torpey

Q&A With Rob Fox: On-Premise Data, aka “Cloud Cache”

On-Premise Data, aka “Cloud Cache”

We caught up with Rob Fox, Senior Director of Software Development for Liaison Technologies, about the growing need for businesses and consumers to store and access data in the cloud as quickly as if it were locally stored.

Why are businesses and consumers moving away from on-premise data storage to cloud storage?

Consumers are the early adopters of cloud data storage. For years, they’ve been storing and sharing vast numbers of photos in the cloud with services like Shutterfly and Snapfish, and even Facebook. Newer services like Apple’s iCloud store and sync data, photos, videos and music, and there are a host of cloud-based computer back-up services for individual PCs. Many of these services have been driven by the explosion of mobile computing, which has been enabled by coupling with cloud computing.

Up until recently, the cloud was primarily thought of as a place to store backup data. This has changed significantly over the past 18 months. With the explosion of mobile applications, Big Data and improved bandwidth, the traditional walls around data have dissolved. In the case of mobile computing, resources such as disk space are limited. In the case of Big Data, organizations simply cannot afford to store copious amounts of data on local hardware. Part of the issue isn’t just the size of the data, but the fact that elastic storage provisioning models in the cloud make it easy to right-size storage and pay for only what you need – something you simply cannot do on-premise. If you look at how digital music, social media and online e-Commerce function in 2012, you see that it makes sense for Big Data to exist in the cloud.

What challenges do businesses face when storing Big Data in the cloud?

The challenge of storing Big Data in the cloud is for businesses to be able to access it as quickly as if it were stored on-premise. For years, we’ve been butting up against Moore’s Law, making faster computers and improving access; now we have moved the focus to where we want to store information, but the challenges are the same. Look at Hadoop (built on HDFS) and related storage technologies, or consumer applications that sit on top of these technologies like Spotify. They try to process data locally, or as if it were local, hence the cloud cache. The trick is to make it seem like the data is local when it is not, and that’s why we need a cloud cache: store small amounts locally, using techniques similar to those of traditional computing.

What’s the best way to implement cloud caching so that it behaves like on-premise caching?

I remember studying memory caching techniques in my computer architecture course in college, learning how memory is organized and about overall caching strategies. The Level 1 (L1), or primary, cache is considered the fastest form of data storage. The L1 cache sits directly on the processor (CPU) and is limited in size to data that is accessed often or considered critical for quick access.

With data living somewhere else, applications and services that require real-time high availability and low latency can be a real challenge. The solution is exactly the same as the L1 cache concept – so, more specifically, I predict that on-premise storage will simply become a form of high-speed cache. Systems will only store a small subset of Big Data locally. I’m already seeing this with many cloud-hosted audio services that stream MRU (most recently used) or MFU (most frequently used) datasets to local devices for fast access. What is interesting in this model is the ability to access data even when cloud access is not available (think of your mobile device in airplane mode).

I have no doubt that at some point, on-premise storage will simply be considered a “cloud cache.” Don’t be surprised if storage on a LAN is treated as the L1 cache and geographically proximal intermediary cloud storage as an L2 cache, before requests finally reach the true source of the data, which, by the way, is probably already federated across many data stores optimized for this kind of access. Regardless of how the cache is eventually constructed, it’s a good mental exercise.
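A minimal sketch of the “cloud cache” idea, not Liaison's implementation: keep a small, bounded set of recently used objects on-premise and fall back to the slower cloud store only on a miss. It is a generic least-recently-used cache, and `fetch_from_cloud` is a stand-in for whatever remote store is actually in play.

```python
from collections import OrderedDict

class CloudCache:
    """Small on-premise LRU cache in front of a remote (cloud) store."""

    def __init__(self, fetch_from_cloud, capacity: int = 128):
        self._fetch = fetch_from_cloud   # callable: key -> data (slow path)
        self._capacity = capacity        # bounded, like an L1 cache
        self._local = OrderedDict()      # most recently used data kept locally

    def get(self, key):
        if key in self._local:           # cache hit: serve at local speed
            self._local.move_to_end(key)
            return self._local[key]
        data = self._fetch(key)          # cache miss: go out to the cloud
        self._local[key] = data
        if len(self._local) > self._capacity:
            self._local.popitem(last=False)   # evict the least recently used item
        return data

# Usage with a stand-in remote store:
remote = {"track-42": b"audio bytes ..."}
cache = CloudCache(fetch_from_cloud=remote.__getitem__, capacity=2)
cache.get("track-42")   # first access goes to the "cloud"
cache.get("track-42")   # second access is served from the local cache
```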

By Robert Fox

Robert Fox is the Senior Director of Software Development at Liaison Technologies, a global provider of secure cloud-based integration and data management services and solutions based in Atlanta. An original contributor to the ebXML 1.0 specification, the former Chair of Marketing and Business Development for ASC ANSI X12, and a co-founder and co-chair of the Connectivity Caucus, he can be reached at

Does Microsoft Windows Azure Threaten vCloud?

It appears that Microsoft is desperate to become the OS for cloud computing. At its Worldwide Partner Conference on July 10th, the company announced a white-label version of Windows Azure targeted at web hosts currently running Windows Server. This can be viewed as a challenge to VMware, which is trying to push its vCloud agenda and has in recent years been spanning both cloud-based and on-premise deployments.

In the past, Microsoft has received flak for not having a legitimate hybrid cloud strategy, but it appears to be back on the right track with this announcement. The new offering is known as the Service Management Portal and is presently in Community Technology Preview. With it, Microsoft’s partners can offer their customers an infrastructure similar to Windows Azure without actually using Microsoft’s cloud. This is achieved through the standardized management portal as well as through extensible APIs, which allow developers to link their applications to the hosts’ specialized services, to on-premise resources, or to Windows Azure itself.

According to Sinclair Schuller, CEO and founder of Microsoft partner Apprenda, the move is smart and shows the company working to develop its cloud-computing footprint. He said, “As Microsoft is expanding their toolkit they’re trying to make sure it’s not a disjointed experience.” Users might wish to use multiple cloud services, but they want a single interface to interact with them.

Apprenda provides an on-premise platform for .NET applications and is now a partner for Microsoft’s latest offering. If a web host deploys an Apprenda instance internally and plugs it into the portal, customers can use Apprenda’s auto-scaling and PaaS capabilities directly from the Service Management Portal.

The Service Management Portal may prove to be Microsoft’s most prominent weapon in its battle against virtualization king VMware to become the cloud’s operating system. The vCloud Datacenter Services program and the vCloud Director management software gave VMware a head start, but Microsoft now has an answer to both. Microsoft, however, still needs to mature considerably, particularly if it wishes to contend for enterprise workloads.

Microsoft did something similar a few years back with the Windows Azure Appliance, but it never worked out. That strategy was confined to a few big partners, including Dell, Fujitsu, and HP, and apparently Fujitsu was the only partner that followed through.

Microsoft is adding insult to injury by planning to grow an ecosystem of service-provider partners with a new program that will help them shift from VMware’s hypervisor to Microsoft’s Hyper-V.

By Doug Hamilton
