Category Archives: Technology

5 Tips For Building A High Growth IT Platform

Building and maintaining today’s enterprise computing platforms is far more challenging than it used to be. The competitive, fast-moving nature of business requires a corporate network capable of meeting a company’s ever-changing needs and requirements. For IT this poses difficult challenges, as it’s common for a fast-moving business to generate a stream of requests that leaves IT behind, unless the in-house IT pros have strategized and constructed a computing platform built for growth and flexibility.

So how do you build a high growth computing platform? There are surely many thoughts and ideas on how to do this, but I want to focus on five objectives that, if followed, will put you on the correct path and integrate your IT group with the business.


1 – A noisy platform doesn’t scale.

The computing environment needs to be able to scale up quickly in response to the plethora of business requests. To scale effectively and efficiently, you need a quiet, stable network platform. A noisy network cannot scale: as it grows, its issues and instabilities grow with it, and it doesn’t take long for the amplification of these issues to become a roadblock, ultimately leading to unacceptable downtime and outages. So it is very important to build a stable network driven by clear processes and procedures. When issues arise, nothing less than root cause analysis should be accepted, with post-mortem meetings that address and correct the original problem area. Solutions that do not name the root cause should not be accepted; if you do not address the problem at its root, the issue is guaranteed to recur and cause more downtime, user dissatisfaction and troubleshooting time. IT pros can avoid these uncomfortable situations by addressing the problem correctly on its first occurrence.

In addition to root cause analysis, a good change control process is key to maintaining a stable environment. All environments need ongoing maintenance to remain stable, and the number one cause of downtime is typically human error. A systematic, well-documented change control process can greatly reduce human error and increase network uptime.

2 – Embrace the Cloud where applicable.

When it comes to building stable and scalable networks, most cloud-enabled Managed Service Providers know how to do it right, as they should, because that is their business. So why not leverage an IaaS Managed Service Provider to host your systems and provide storage and backup? You can still manage your systems and applications, but consider offloading the mundane tasks of worrying about server hardware, system backups, provisioning storage and other complex and costly chores.

Leveraging the Cloud also provides a platform on which you can quickly scale up or down in an efficient and economical manner as the business requires. Scaling in the Cloud is much easier than the traditional method of putting together a requisition form, obtaining the necessary approvals, sending it to accounting, waiting for a purchase order, ordering and waiting 30-60 days for delivery, and then another 30-60 days to get the equipment installed and tested. With the Cloud, servers and storage can be provisioned in minutes.
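To make that concrete, here is a minimal sketch of provisioning a server through an IaaS API, using AWS with the boto3 Python library. The region, AMI ID and instance type are placeholder assumptions, not recommendations.

import boto3

# Create an EC2 client; region and credentials come from your AWS config.
ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch a single instance from a placeholder machine image.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI ID
    InstanceType="t3.medium",
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]

# Block until the instance is running: typically minutes, not the
# 30-60 day hardware procurement cycle described above.
ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
print(f"Provisioned {instance_id}")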


3 – Focus on saying “Yes.”

Many companies and executive teams perceive IT as a roadblock that stops progress and the achievement of business goals. The best way to erase this perception and demonstrate the value of IT to the business is by saying “yes” when the business needs IT services. Transform the process of interacting with IT into a positive and productive encounter and, in turn, IT will move into a strategic position. In order to say “yes,” IT must have a game plan for taking on any request thrown at it. This requires the flexible, scalable and dynamic infrastructure I’ve been discussing. IT must remain open-minded and leverage the appropriate resources for each need, whether internal, external, public cloud, private cloud or SaaS. The focus must be on achieving the business need in a cost-effective and efficient way, so that shadow IT does not decide to go around you.

4 – Speed to execution trumps control.

Tying into the previous point is the idea of speed to execution. When it comes to a project, speed to execution is more important than who controls the solution. For decades, IT has designed, implemented and managed the majority of solutions for its users. With the emergence of the Cloud and cloud-based services, the question of where a solution resides and who controls it must be secondary to speed. Today’s businesses are in constant competition, and speed to implement is often the difference between success and failure. If a solution can be implemented in the Cloud more quickly than internally, with similar cost and effectiveness, then it should be considered. IT cannot place its need to control and manage a solution internally above the ability to quickly have a solution provisioned externally. Using a strategic mindset, IT pros must understand that the main objective is to meet the business needs in the best manner possible.

5 – Reduce the cost of “keeping the lights on.”

Keeping a computing platform stable and ready for growth takes constant investment, yet industry figures tell us that 80% or more of the IT budget can go toward maintenance costs. This does not leave much to spend on new technologies or services that keep the computing platform ready for growth in an effective and efficient manner. IT needs to find creative, more effective ways to perform its tasks and lower these maintenance costs in order to free up budget for growth. One way to lower maintenance costs is to evaluate IaaS for server hosting, storage and backup needs. IaaS can usually lower your operating cost for servers, storage and backup by 20-25% while giving your team time back to focus on working with the business. There are also many cost-effective SaaS solutions that can lower your software budget, as well as the services required to deploy, support and upgrade these software suites.
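As a back-of-envelope illustration of that budget shift, here is the arithmetic on a hypothetical $1M annual budget; all figures are illustrative assumptions, not benchmarks.

total_budget = 1_000_000          # hypothetical annual IT budget ($)
maintenance_share = 0.80          # "keeping the lights on"
iaas_saving_rate = 0.25           # upper end of the 20-25% estimate

maintenance = total_budget * maintenance_share          # $800,000
saving = maintenance * iaas_saving_rate                 # $200,000
growth_budget = (total_budget - maintenance) + saving   # $400,000

# The growth budget doubles, from $200,000 to $400,000.
print(f"Freed for growth projects: ${saving:,.0f}")
print(f"New growth budget: ${growth_budget:,.0f}")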

In conclusion, building a computing platform for growth not only benefits the business but also elevates the IT department within the company. IT can become a player at the C-level table and help set future direction once it is perceived as a value-add to the business rather than a gatekeeper. Providing value starts with enabling the business to achieve its goals and growth objectives, and that is best done by building a stable, scalable and agile computing platform. I hope these tips help set you in that direction.

(Image Source: Shutterstock)

By Marc Malizia

Risks Of Virtualization In Public And Private Clouds

Risks Of Virtualization

Server virtualization is one of the cornerstone technologies of the cloud. The ability to create multiple virtual servers that can run any operating system on a single physical server has a lot of advantages.

These advantages can appear in public, Infrastructure as a Service (IaaS), as well as in private clouds.

However, it also brings some risks.


Virtualization reduces the number of physical servers required for a given workload, which brings cost benefits. It also allows for more flexible sizing of computer resources such as CPU and memory. This in turn tends to speed up development projects, even without automatic provisioning. Virtualization can even increase the security of IT because it is easier to set up the right network access controls between machines.

So, to get the real benefits, steer clear of the risks. The US National Institute of Standards and Technology (NIST) has written a fairly extensive overview of these risks in Special Publication 800-125, on which this article is partly based.

Let us first get some of the important concepts straight. The host is the machine on which the hypervisor runs. The hypervisor is the piece of software that does the actual virtualization. The guests, then, are the virtual machines on top of the hypervisor, each of which runs its own operating system.

The hypervisors are controlled through what is called the ‘management plane’, a web console or similar that can manage the hypervisors remotely. This is a good deal more convenient than walking up to the individual servers. It is also a lot more convenient for remote hackers, so it is good practice to control access to the management plane. That might involve using two-factor authentication (such as smart cards) and giving individual administrators only the access needed for their tasks.
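As a sketch of what a second factor can look like, here is a minimal time-based one-time password (TOTP) check using the pyotp Python library. A real management plane would rely on the vendor’s own 2FA integration; this only illustrates the mechanism.

import pyotp

# Per-administrator secret, generated once at enrolment and stored securely.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print("Enrol this secret in an authenticator app:", secret)

# At login, the password alone is not enough: require the current code.
code = input("One-time code: ")
if totp.verify(code):
    print("Second factor OK: grant management-plane session")
else:
    print("Rejected: deny access")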

An often mentioned risk of virtualization is the so-called ‘guest escape’, where one virtual machine accesses or breaks into its neighbor on the same host. This could happen through a buggy hypervisor or insecure network connections on the host machine. The hypervisor is a piece of software like any other; in fact, it is often based on a scaled-down version of Linux, and any Linux vulnerability could affect it. And if you control the hypervisor, you control not just one system but potentially the entire cloud. So it is of the highest importance that you are absolutely certain you run the right version of the hypervisor. You should be very sure of where it came from (its provenance), and you should be able to patch or update it immediately.
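A simple provenance check is to verify a hypervisor image against the checksum the vendor publishes before deploying it. A minimal sketch in Python; the file path and expected digest are hypothetical placeholders.

import hashlib

EXPECTED_SHA256 = "0d2c..."  # hypothetical digest from the vendor's release notes

def sha256_of(path: str) -> str:
    """Stream the file in 1 MB chunks and return its hex SHA-256 digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

digest = sha256_of("/srv/images/hypervisor.iso")  # hypothetical path
if digest != EXPECTED_SHA256:
    raise SystemExit("Checksum mismatch: do not deploy this image")
print("Image matches the published checksum")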

Network Design

Related to this is the need for good network design. The network should provide real isolation of every guest, so that no guest can see traffic belonging to other guests, or traffic to the host.


An intrinsic risk of server virtualization is so-called ‘resource abuse’, where one guest (or tenant) overuses the physical resources, thereby starving the other guests of the resources required to run their workloads. This is also called the ‘noisy neighbor’ problem. Addressing it can require a number of things. The hypervisor might be able to limit a guest’s overuse, but in the end somebody should be thinking about how to avoid putting too many guests on a single host. That is a tough balance to strike: too few guests means you are not saving enough money; too many guests means you risk performance issues.
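One way to make that balance explicit is a simple admission check before placing a new guest on a host. The sketch below uses illustrative capacity figures and overcommit ratios, not recommendations.

HOST_CPUS, HOST_RAM_GB = 32, 256
CPU_OVERCOMMIT, RAM_OVERCOMMIT = 4.0, 1.0   # e.g. 4:1 vCPU, no RAM overcommit

def fits(host_guests, new_guest):
    """Would adding new_guest keep the host within its overcommit ceiling?"""
    vcpus = sum(g["vcpus"] for g in host_guests) + new_guest["vcpus"]
    ram = sum(g["ram_gb"] for g in host_guests) + new_guest["ram_gb"]
    return (vcpus <= HOST_CPUS * CPU_OVERCOMMIT
            and ram <= HOST_RAM_GB * RAM_OVERCOMMIT)

guests = [{"vcpus": 8, "ram_gb": 32}] * 7
print(fits(guests, {"vcpus": 8, "ram_gb": 32}))   # True: RAM exactly at the limit
print(fits(guests, {"vcpus": 8, "ram_gb": 64}))   # False: RAM would be exhausted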

In the real world, there are typically a lot of virtual servers that are identical. They run from the same ‘image’, and each virtual server is then called an instance of that image, or instance for short.

With virtual servers it becomes easy to clone, snapshot, replicate, start and stop images. This has advantages, but it also creates new risks. It can lead to an enormous sprawl of server images that need to be stored somewhere, which becomes hard to manage and represents a security risk. For example, how do you know that a dormant image restarted after a long time is still up to date and patched? I heard a firsthand story of an image that got rootkitted by a hacker.

So the least you should do is run your anti-malware, antivirus and version checks on images that are not in use as well. Even when you work with a public IaaS provider, you are still responsible for patching the guest images.
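A minimal sketch of such a check: flag every stored image whose last patch date is older than a chosen window, whether or not it has been booted since. The catalog fields are hypothetical; adapt them to however your image store records metadata.

from datetime import datetime, timedelta

MAX_PATCH_AGE = timedelta(days=30)

# Hypothetical image catalog; real metadata would come from your image store.
image_catalog = [
    {"name": "web-frontend-v3", "last_patched": datetime(2015, 6, 20)},
    {"name": "batch-worker-v1", "last_patched": datetime(2014, 11, 5)},
]

now = datetime(2015, 7, 1)  # illustrative "today"
for image in image_catalog:
    if now - image["last_patched"] > MAX_PATCH_AGE:
        print(f"STALE: {image['name']} must be re-patched and rescanned before use")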

In summary, server virtualization brings new power to IT professionals. But as the saying goes, with great power comes great responsibility.

(Image source: Shutterstock)

By Peter HJ van Eijk

OpenStack Interoperability – Dawn Of A New Era?

The Interoperability Challenge!

OpenStack has always had interoperability as one of its unique selling points. Simply put, you can use OpenStack on-premise, and what you develop will also work in other OpenStack environments. Open APIs and open source are the common denominator. Until now, however, it has been an elusive feature, really a dream that many talk about but few have truly implemented at scale. It has not been mature enough, and the few that have managed to interoperate between clouds have spent a lot of time getting there. The industry often says: build a private cloud, then scale out into the public cloud. While there are ways to do this, few have truly gotten it to work fluidly and efficiently. It is not just a technical question but equally a management one. Governance around what can run where is a tricky balancing act and requires solid, clear policies, not just technical capability. No vendor lock-in sounds great, but is it a reality? There are, of course, always degrees of lock-in.

Connect And Build


The reasons why you would want to connect two clouds, whether private or public, are many. Scaling out from private to public brings true scalability, and some workloads simply fit better in a different environment. Maybe it is as simple as requiring a different price point? The same applies between two public clouds: from not putting all your eggs in one basket to actually utilizing different features or price points, or simply meeting compliance requirements.

Enter OpenStack Kilo, a milestone for the promise of true interoperability in the OpenStack community. While DefCore was created after the summit in Hong Kong in 2013, it is only now that the community is starting to put the critical pieces together. In the latest OpenStack interoperability press release, issued during the summit in Vancouver, 32 companies signed up to adhere to the guidelines, ensuring it is not just software but also companies and people behind the promise of true interoperability. The Kilo release allows for seamless federation between private and public clouds, and interest is high among both vendors and customers.
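To illustrate what that interoperability buys you in practice, here is a minimal sketch using today’s openstacksdk Python library to run the same code against two OpenStack clouds. The names ‘private’ and ‘public’ are assumed entries in your clouds.yaml configuration.

import openstack

# The same client code works against any conforming OpenStack cloud.
for cloud_name in ("private", "public"):
    conn = openstack.connect(cloud=cloud_name)
    servers = list(conn.compute.servers())
    print(f"{cloud_name}: {len(servers)} servers running")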

OpenStack has shown the way, and it is now up to the vendors to do more than sign up and adhere to a set of guidelines; for true interoperability, cloud vendors will have to take it further than that. Tristan Goode, CEO of Aptira, gave a presentation in 2012 outlining exactly what we have today, and it took three more years to get to basic interoperability. I think it will take about the same amount of time for public cloud vendors to clear the governance and other management issues so that customers can easily move workloads between vendors. It also forces strict control of upgrades and functionality to make the experience as smooth as we would all like it to be.

Universal Connections


2015 is the year we go from the dream of interoperability to actually planning to move workloads fluidly between cloud vendors, and between private and public clouds. Private-to-public will come first, but I am sure we will soon see public-to-public as well. We are not there yet, but we are seeing the light, and the future ahead for OpenStack is bright: it will finally allow for true “no vendor lock-in”.

While I think OpenStack already has a lot to offer any company today, its interoperability is what will make it truly unique, and it is an immense opportunity for both vendors and customers. As we move our businesses further and further into the cloud, it is important that we also have ways out of it, or at least the ability to move.

(Image Source: Shutterstock)

By Johan Christenson

Do You Know What Amazon Web Services (AWS) Is? Most Still Don’t

Do You Know What AWS Is?

Amazon Web Services, or AWS, has been a key component of the Internet for almost a decade. Amazon founder Jeff Bezos launched the platform in 2006, and it has grown from one of the first fledgling cloud computing platforms into what today accounts for nearly 27% of the $40 billion (and rapidly growing) IaaS cloud infrastructure sector, serving 600 government agencies and more than one million customers spread across almost every country in the world.

Despite all the success of AWS, perhaps the most surprising statistic is how few people actually understand it, or for that matter even know what it is. While this probably has something to do with the behind-the-scenes nature of the service, it is still somewhat shocking that more than two-thirds of Americans have never even heard of AWS.

As the cloud grows in significance, and platforms like AWS become increasingly important to their clients, having at least a foundational understanding of the Internet architecture on which they are based is worthwhile.

As one of the leading providers of AWS training courses, Udemy has gone to great lengths to make AWS accessible to people of every skill level. In an effort to increase understanding among the two-thirds of Americans who have no knowledge whatsoever of the cloud hosting service, Udemy created an easy-to-understand infographic beginner’s guide to Amazon Web Services. The infographic describes the basic history of AWS, why it is important, and its place in the overall technology sector.


By Caroline Gilbert / Udemy

Data Science – Brain Tracking, Jobs And Communities

The Exciting World Of Data Science

Data Science is an exciting field that provides tons of new opportunities, from Data Scientist job growth and brain tracking to Data Science community startups such as Kaggle. There is a growing fascination with this area. You may also note that it is becoming increasingly difficult to visit a company job page and not see a callout for Data Scientists. They are in extremely high demand at the moment, and this pattern is expected to continue.

To add to the mix, we’ve come across a very comprehensive infographic by DataCamp called Data Science Wars, which provides a breakdown of the programming languages R and Python.


Cloud Services Make Your Work Efficient With Server Consolidation

Cloud Services Make Your Work Efficient

Server consolidation is an approach to the efficient use of computer server resources in order to reduce the total number of servers or server locations that an organization requires. The practice developed in response to server sprawl, a situation in which multiple under-utilized servers take up more space and consume more resources than their workload can justify.

In terms of Big Data, there is no cure-all or single approach that allows organizations to find value in the staggering volume of information being gathered. Accordingly, our team of editors presents this guide as a way of helping you determine how you anticipate using and analyzing big data, and of selecting the proper infrastructure components to support those efforts.

Studies confirm that cloud services are cost-efficient:


According to Tony Iams, Senior Analyst at D.H. Brown Associates Inc. in Port Chester, NY, servers in many organizations typically run at 15-20% of their capacity, which may not be a sustainable ratio in the current economic environment. Organizations are increasingly turning to server consolidation as one means of cutting unnecessary costs and maximizing return on investment (ROI) in the data center. Of 518 respondents in a Gartner Group research study, 6% had conducted a server consolidation project, 61% were currently conducting one, and 28% were planning to do so in the immediate future.

Although consolidation can substantially increase the efficient use of server resources, it may also result in complex configurations of data, applications, and servers that can be confusing for the average user. To alleviate this problem, server virtualization may be used to mask the details of server resources from users while optimizing resource sharing. Another approach to server consolidation is the use of blade servers to maximize the efficient use of space.

Server consolidation refers to the use of one physical server to accommodate one or more server applications or user instances. It makes it possible to share a server’s compute resources among multiple applications and services simultaneously, and it is chiefly used to reduce the number of servers an organization needs.

The server consolidation process:


The essential objective behind server consolidation is to consume the bulk of a server’s available resources while reducing the capital and operational costs associated with multiple servers. Generally, only 15-30 percent of a physical server’s overall capacity is used. With server consolidation, the utilization rate can be increased to well over 80 percent. Server consolidation works on the principles of server virtualization, where one or more virtual servers reside on a physical server.
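The arithmetic behind those utilization figures is straightforward. A small illustrative calculation, with assumed numbers:

import math

physical_servers = 100
current_utilization = 0.20    # midpoint of the 15-30% range above
target_utilization = 0.80

total_load = physical_servers * current_utilization        # 20 host-equivalents
hosts_needed = math.ceil(total_load / target_utilization)  # 25 hosts

print(f"{physical_servers} hosts at {current_utilization:.0%} consolidate "
      f"to {hosts_needed} hosts at {target_utilization:.0%}")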

Server consolidation uses a multi-tenant architecture in which all the installed and hosted virtual servers share a processor, storage, memory and other I/O and network resources. However, each virtual server has its own operating system, applications and internal services.


Server consolidation brings many benefits to cloud users, enabling them to get work done quickly and with considerably less effort. A good server architecture is required to get the project started. Each virtual server has different operating requirements, and it’s important to review them all in order for the cloud system to work efficiently.

(Image Source: Shutterstock)

By Deney Dental

The Concept Of Securing IoT To Secure Your Building

Securing IoT

Ah, security. It is the dulcet tone of a symphony that we play over and over in the IT world. IoT (Internet of Things) and the myriad of connected devices allow us some intriguing security options. For example, with a mesh array of sensors you could effectively have a user identify herself repeatedly without ever prompting her more than once. By using an array of sensors, you could determine an employee’s identity many times before she has even entered your facility, or, for that matter, before she has put her badge in the badge reader to enter the building.

Facial recognition using IoT video sensors is a quick and easy solution: I see you, I look you up, I know who you are, and I let you through to the next array of sensors. But I can also measure your voice, which has a unique timbre, analyze your particular speech patterns, and measure your gait. The very way you walk can be used not only to determine that you are who you say you are, but also to verify that you should be where you are.
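One way to picture combining these signals is a naive log-odds fusion of per-sensor match probabilities, a sketch assuming independent sensors; the scores and threshold are purely illustrative.

import math

def fused_confidence(scores):
    """Combine independent per-sensor match probabilities via log-odds."""
    log_odds = sum(math.log(p / (1 - p)) for p in scores)
    return 1 / (1 + math.exp(-log_odds))

# Hypothetical match scores from three sensor arrays.
readings = {"face": 0.90, "voice": 0.80, "gait": 0.75}
confidence = fused_confidence(readings.values())

print(f"Fused confidence: {confidence:.3f}")   # ~0.991
if confidence > 0.99:
    print("Admit without prompting the user again")
else:
    print("Fall back to the badge reader")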


You can deploy these sensors in places people wouldn’t suspect. Parking lots make great places to observe someone’s gait. You can run further security checks as people stand and wait to get their bags checked or to pick up visitor badges, recording and identifying their voices. You could even use the unique identifier that is a cellular phone’s IMEI to help determine whether someone is who they say they are. You can steal someone’s phone, but not their voice and gait; even the most practiced mimic is off just enough that we can tell the difference.

Previously, I wrote about the internet of illness. The end game in that scenario involves your employer scanning you as you ‘badge’ into your workplace, or greeting you at the visitor entrance, and then essentially telling you to go home. You are sick, the system tells you; come back when you are well. So you take a sick day or work from home, and the infection stops with you. Instead of walking into the office and infecting 200 people, you stay home and infect your cat. The same can be applied to people entering a building.

These sensors allow you to have good solid security at your entrances quickly. Over time, organizations can share data about people that visit them. The US Government and other governments could sell security information about the gait, voice, and other attributes that can be quickly captured. Imagine security of the day after tomorrow. You are asked to step into a booth; the booth has a camera and a microphone. You are asked to say three sentences. A single ding and the door opens and you pass through a metal detector. The booth finds no weapons, no threat, and that you are who you say you are.


(Image Source: Shutterstock)

Ah, security. In cases where there are other issues and concerns, such as bombs and similar risks, you could create artificial barriers along the walkway to your building and add bomb and chemical sensors to the solution as well. Expense, however, is always the issue in security: you can’t spend so much money on it that you bankrupt the company. Likewise, you can’t spend so much time and energy that the organization can’t get any work done because it takes half an hour to get from your car to your desk (even though you only travel a total of 10 feet).

IoT devices will quickly add a great deal of information for the people staffing visitor desks. They will allow organizations to increase the safety of their workers by reducing the number of ways weapons or dangerous devices can get into the building. If someone does get past the entrance, you can lock them in a stairwell or shut off the elevators, let them know that you know they have a gun or knife, and tell them the authorities have been notified. The application of security for and of people is a great use of IoT sensors and devices, and it can be utilized effectively while still managing costs. It will greatly increase the overall security of your building and your people. You can have people log into the system over and over again, without them having to sign in more than once!

By Scott Andersen

Growth Of The European Cloud Market

The European Cloud Market

We’ve written about the international cloud markets a number of times on CloudTweaks and it makes for some interesting discussion. A number of our 12/12 contributors have touched on some of the key points as to why there are such cloud adoption challenges.

Roger Strukhoff discusses: “There are significant issues of data sovereignty and security entangled in distributed cloud infrastructures that cross international borders, to be sure. But inter-governmental organizations from the European Union (EU) to the Association of Southeast Asian Nations (ASEAN) to the East African Community (EAC) and many more are stocked with serious-minded people working to address and solve the political issues so that the technology may flow and improve the lives of their people…

Gary Gould continues: “Businesses within the EU are not adopting cloud-based systems as quickly as expected. Do data protection laws and restrictions play a role in Europe’s slow uptake? Despite the fact that cloud computing now plays a vital role in the development of our businesses, use of the Cloud amongst businesses in the European Union did not increase in 2014…

While Johan Christenson has an interesting take: “The competitive mentality in the US poses a stark contrast. Researchers even identified a gene in those that once emigrated from Europe to the US, which allows for higher tolerance of risk. Starting a company in the US is a given. Most Europeans are still asking what job they should get, not what company they will start. Europe also does not have the force from the society to be as competitive. Equally most European struggle with diverse markets and different languages which overall does not help…” 


To help shed some additional light on the international cloud market, we’ve come across the interesting and helpful BSA Global Cloud Computing Scorecard. The scorecard ranks 24 countries based on seven policy categories that measure each country’s preparedness to support the growth of cloud computing.



See the full list here.

As for some of the good news, the European cloud market is expected to see positive growth over the next four years in public, private and hosted private cloud services. Included is an infographic provided by IDC titled “The Wonderful World Of The Cloud”.

