Category Archives: SaaS

Powering The Internet Of Things

The Internet of Things is designed on the premise that sensors can be embedded in everyday objects to help monitor and track them. The scope of this is huge – for example, the sensors could monitor and track everything from the structural integrity of bridges and buildings to the health of your heart. Unfortunately, one of the biggest stumbling blocks to widespread adoption at the moment is finding a way to cheaply and easily power these devices and thus enable them to connect to the internet.

Luckily, engineers at the University of Washington have a potential solution. They have designed a new system which uses radio frequency signals as a power source and reuses existing WiFi infrastructure to provide the connectivity. The technology is called WiFi Backscatter and is believed to be the first of its kind.

Building on previous research that showed how low-powered devices could run without batteries or cords by harvesting power from radio, TV and wireless signals, the new design goes further by connecting the individual devices to the internet. Previously this wasn't possible: the difficulty in providing WiFi connectivity was that even traditional low-power WiFi consumes significantly more power than can be harvested from wireless signals. This has been solved by developing an ultra-low-power tag prototype with the antenna and circuitry required to talk to laptops and smartphones.

“If Internet of Things devices are going to take off, we must provide connectivity to the potentially billions of battery-free devices that will be embedded in everyday objects”, said Shyam Gollakota, a professor in the University of Washington’s Computer Science and Engineering department. “We now have the ability to enable WiFi connectivity for devices while consuming orders of magnitude less power than what WiFi typically requires”.

The tags on the new ultra-low-power prototype work by scanning for WiFi signals moving between the router and a laptop or smartphone. Data is encoded by either reflecting or not reflecting the WiFi router's signal – thus slightly changing the signal itself. WiFi-enabled devices detect these minuscule changes and thus receive data from the tag.
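To illustrate the encoding idea, here is a toy Python sketch of my own (a simplification, not the UW team's actual implementation): the tag toggles its reflection to nudge the received signal strength by a tiny delta, and the receiver decodes by thresholding. The signal levels, delta and noise figures are invented for illustration.

```python
import random

random.seed(1)

def tag_encode(bits, baseline=-40.0, delta=0.5, noise=0.05):
    """Toy model: the tag reflects (1) or absorbs (0) the router's
    signal, nudging received signal strength (dBm) by a tiny delta."""
    return [baseline + (delta if b else 0.0) + random.gauss(0, noise)
            for b in bits]

def receiver_decode(samples, baseline=-40.0, delta=0.5):
    """Decode by thresholding halfway between the two expected levels."""
    threshold = baseline + delta / 2
    return [1 if s > threshold else 0 for s in samples]

bits = [1, 0, 1, 1, 0, 0, 1, 0]
decoded = receiver_decode(tag_encode(bits))
print(decoded)  # matches the original bits while the noise stays small
```

The real system must pick these tiny changes out from among all the other reflections in a room, which is why pattern detection (as Smith describes below) is the hard part.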

“You might think, how could this possibly work when you have a low-power device making such a tiny change in the wireless signal? But the point is, if you’re looking for specific patterns, you can find it among all the other Wi-Fi reflections in an environment”, said Joshua Smith, another University of Washington professor who works in the same department as Gollakota.

The technology has so far communicated with a WiFi device at a rate of 1 kilobit per second with two metres between the devices, though the team expects to extend the range to twenty metres. Patents have been filed.

By Daniel Price

Powering Up The Cloud

Battery life has long been the biggest problem for almost all smartphone users, but manufacturers have rarely been seen to accommodate users' needs in this department. A high-end smartphone these days won't last you a day under moderate to heavy use. Smartphone users' need for more battery power has allowed charging-case and external-battery manufacturers to flourish over the years.

Lithium is considered the prime metal for batteries because it offers the most potential: it is lightweight and has the highest energy density. Simply put, with lithium you get more power per unit of volume and weight.


Nowadays, researchers are constantly looking for ways to decrease the workload on a smartphone in order to increase its battery life. It's a new way to tackle the problem of a small smartphone battery. We can use the Amazon cloud as an example; it can store personal data and perform computations on that data. So the main question we can ask ourselves is: can offloading mobile applications into the cloud save us energy? The answer is a complex one, especially since mobile devices have limited energy budgets and the outcome depends on the data transfer speed of the wireless connection.

Everything boils down to efficiency. If a mobile application uses too much energy in the cloud, running it there is not feasible. Increased energy consumption means shorter battery life, and since battery life is of key importance to smartphone users, this is not efficient. To make computation offloading more attractive, applications would need to be developed in such a way that they run more efficiently within the cloud network than on the mobile device itself.

There are obvious concerns about the privacy and security of such data transfers, not to mention that these mobile applications can only be used when we're connected to a high-speed network, preferably 4G/LTE or Wi-Fi. In areas with no coverage from either of these networks, the applications would simply be unavailable. And if the data transfer speed of these networks decreases, the smartphone would ultimately use more power to transfer the data, defeating the original purpose of offloading, since overall power consumption would increase.
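The trade-off described above can be sketched as a hypothetical first-order energy model (the function name and all numbers are illustrative assumptions, not measured figures): offloading saves energy roughly when the radio energy needed to ship the data is less than the energy to compute the result locally.

```python
def offload_saves_energy(cycles, cycles_per_sec, p_compute_w,
                         data_bytes, bandwidth_bps, p_radio_w):
    """First-order comparison in joules: local compute energy vs.
    radio energy to transfer the application's data to the cloud."""
    e_local = (cycles / cycles_per_sec) * p_compute_w
    e_transfer = (data_bytes * 8 / bandwidth_bps) * p_radio_w
    return e_transfer < e_local

# Heavy computation, small payload: offloading wins.
print(offload_saves_energy(5e9, 1e9, 0.9, 100_000, 5e6, 1.3))
# Light computation, large payload: the radio costs more than computing.
print(offload_saves_energy(1e8, 1e9, 0.9, 10_000_000, 5e6, 1.3))
```

Note how the result flips as bandwidth shrinks: at lower transfer speeds the radio stays on longer, which is exactly why a slow connection can make offloading counterproductive.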

We can ultimately conclude that even though there are many ways to save battery life on a mobile device, many of them come built in from the manufacturer. Cloud computing can potentially save a lot of energy for mobile users and erase the need for high-tech batteries. Not all applications would save energy when migrated to the cloud; some would do the opposite. In the end, the viability of offloading depends on the privacy, security and reliability of the data connection: if a connection offers all these qualities, mobile offloading could become common and a good share of mobile battery life could be saved.

By Margaret Evans

Salesforce Service Cloud: Air Traffic Control For Your Customer

One of the greatest benefits of the increasingly reliable and ubiquitous state of cloud technology is the removal of business silos and the consolidation of information flow, both in-house and on the road. This is of particular importance to the many different types of professionals whose work involves customer relationship management (CRM). Very often opportunities are lost due to client data being unavailable at the moment it is needed, the right solution lost in a pile of electronic bookmarks, or the fact that people simply cannot directly connect, resulting in telephone tag, delays and a marked drop in customer satisfaction.

Numerous organizations have striven to improve the CRM experience, both for the customer and for company agents and representatives, but few have attained the degree of seamless execution that is apparent in the new Salesforce Service Cloud, from San Francisco-based Salesforce: an application that succeeds in pulling together all of the disparate elements of CRM into a single dashboard – which also happens to be whichever dashboard you happen to be sitting in front of: in your office, in your car, or on your smart device.

When a company has the reputation that Salesforce enjoys – as an organization that has established itself as a leader and an innovator in numerous sales and client management techniques – the expectations for an impressive rollout are high, and Salesforce delivers.

“The Salesforce Service Cloud offers a platform that essentially transforms the traditional contact center into an engagement center, fulfilling the mandate of the modern social media culture to not simply deliver information, but to fully involve the client,” says Michael Chou, Director of Product Management for Salesforce.

It achieves this in a number of ways, including an essential one-touch approach to integrating customer histories and tasks into a single location, and then augmenting that by enabling additional inbound information such as Facebook likes or Twitter feeds to add to the profile. This same social media connection ensures that when customers are contacted, it is on the platform they prefer, whether that is traditional phone or email, or a preferred social media channel.

Adding to the growing benefits of crowdsourcing, collaboration and wikis, the Salesforce Service Cloud places high priority on “swarming,” in which additional agents can be brought in online, to fully address the specific needs of a customer, to provide access to case histories and solutions and to escalate immediately when necessary.

Real-time knowledge about a customer is provided by a 360-degree view (items such as product purchase history, service/help requests, latest tweets, or Facebook interests) which can be displayed on a multi-monitor setup, to allow a CRM agent to deliver a type of air-traffic control for every individual.

A degree of added coolness is delivered through its dynamic knowledge base structure. Salesforce Service Cloud encourages and then taps into conversation communities in which customers interact and converse with each other as well as with company reps, discussing experiences, sharing stories and offering specific solutions. These are then collected and added to a knowledge base alongside any relevant data located through Google-style web searches, to create a dynamic and easy-to-use wiki.

Naturally, this technique is KCS certified.

Although Salesforce is far from the only fish in the CRM sea, they are certainly a big fish, and they have not let their size or relative age slow them down. The Salesforce Service Cloud platform, which has been developed and refined over a number of years, demonstrates a degree of agility, innovation and pragmatic adoption of new age technologies that appear to offer a powerful solution to the double-edged sword of CRM: clients who expect but do not receive satisfactory service, and agents who are hamstrung by time, resources and caseload. “With the Salesforce Service Cloud, we are finally in a place where problems, solutions, customers and agents can meet and communicate in one place, and drive the business relationship forward, which is the goal of every company,” said Chou.


Additional information about the Salesforce Service Cloud, including demos and trial offers, is available.

Post Sponsored By Salesforce 

By Steve Prentice

Endurance – A New Way To Measure Churn

Having been in the Hosting industry for over 15 years, I am always intrigued by the IPO of any Hosting company. You can always learn more and gain new insights into a business from its Prospectus & Roadshow presentations. I closely watched Endurance's presentations, and their claim of “1% Churn” got my attention. They repeated that claim on their Q4-2013 earnings call while defining MRR retention at 99%, and they kept that statement in their Q1-2014 results. Endurance International Group Holdings, Inc. (Nasdaq: EIGI) is a leading provider of cloud-based platform solutions designed to help small and medium-sized businesses succeed online.

Of the 50+ companies bought over the past years, a few were players in the Hosting market for many years and struggled with churn throughout those years. Included among them are HostCentric, Hostmonster, Bizland, Homestead, eHost and ApolloHosting.

I wondered what the Endurance team did in order to reduce the industry-inherited churn in these companies.

Before I jump into my analysis of Endurance and churn numbers, I would like to refresh your memory about the churn of SaaS companies in general and Hosting ones in particular. See, for example, my previous articles. As demonstrated, churn is always an issue in any subscription-based business. There are, however, situations in which churn is to be expected and is more or less bearable, while in others it is like a cancer to the SaaS organism.

Endurance 1% Churn Model

Before I begin, I would like to present the following disclaimer: as I did not speak to the Endurance team nor to any analysts who cover them, the analysis below is based on my own 15 years of experience in the industry and on the material they chose to share with their investors during their roadshow. Therefore, I might be completely wrong.

MRR Explained

Let’s assume that in January 2013, a SaaS company had 100 new customers join one of its Hosting brands. Assuming each customer signs up for a subscription on a $10/month plan, we would see an MRR (Monthly Recurring Revenue) of: 100 customers × $10/mo. = $1,000 MRR.

Recognized Revenues

Recognized revenues in a SaaS company refer to revenues for services actually rendered to customers. In a Hosting company, where the service is a monthly hosting service, over the course of each month the customer receives the service he paid for, and the company can recognize the resulting revenues for that month.

Assuming all customers pay either on a monthly basis or upfront for a yearly plan, the monthly recognized revenues would equal the MRR (for monthly paying customers the company would recognize all of their monthly payment, while for yearly plans it would recognize 1/12 of the payment for each month of service). $1,000 MRR = $1,000 Recognized revenues.

However, in our case, January 2013 was the 1st month those 100 customers subscribed to the service. In such a case, those customers would join throughout the month, as the sales & marketing activities are spread over the month.

For the sake of this calculation we assume an even spread over the month. Therefore, on average, each customer receives 15 days of service out of the month. In that case, the company would recognize only 50% of each customer's monthly payment (deferring the remaining amount to the following month). The collection would be: 100 customers × $10 each = $1,000

However, recognized revenues would be only 50% of that, as service was delivered for 50% of the month on average: $1,000 × 50% = $500 Recognized Revenues
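The arithmetic above can be sketched in Python (a hypothetical helper of my own, not Endurance's actual model), splitting a month's collections into MRR, recognized revenue and the deferred remainder:

```python
def recognized_revenue(new_customers, price, existing_customers=0):
    """New customers join evenly through the month, so on average they
    receive (and the company recognizes) half a month of service; the
    other half of their payment is deferred to the following month."""
    mrr = (existing_customers + new_customers) * price
    recognized = existing_customers * price + new_customers * price * 0.5
    deferred = new_customers * price * 0.5
    return mrr, recognized, deferred

# January: 100 new customers at $10/month.
print(recognized_revenue(100, 10))                       # (1000, 500.0, 500.0)
# February: the same 100 customers, now served for a full month.
print(recognized_revenue(0, 10, existing_customers=100)) # (1000, 1000, 0.0)
```

The February call shows the doubling effect discussed below: MRR is unchanged, but recognized revenue jumps from $500 to $1,000 once the cohort is served for a whole month.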

Impact on Recognized Revenues

On the following month, assuming they would keep all of their customers, the service will be rendered for a full month and therefore: $1,000 MRR = $1,000 Recognized Revenues

This means that the contribution of the customers who joined in January 2013 to the aggregated recognized revenues would double during February 2013, and would remain the same for the following months, assuming none of the customers leave the service.

Ongoing Model

Now, to make this concrete, let's look at an outlook over several months while pinpointing the cohorts:


We can see here that the total user base is growing while our cohorts are churning. Clearly, when enough time passes that the number of users joining equals the number leaving, we will be in the classical “churn trap.” None of this is new to us, but let's look at the Rec. Rev. model Endurance is using:

Recognized Revenue Model



  1. They would recognize only 50% of the MRR the new customers generate (15 days of service a month, on average).
  2. 20 customers canceled that month, but on average they still received service for 15 days.
  3. The same goes for the month with 10 customers leaving.

What we can see here is that by using the Rev. Rec. guidelines we are “flattening” the impact of churn on revenues (not on the MRR): revenues of new customers are deferred to the following month, while both the revenues and the deferred revenues of customers who leave that month are recognized.
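A minimal sketch of this flattening effect, assuming the mid-month averages described above (the function and the cohort numbers are illustrative, not Endurance's actual figures): full-month customers recognize 100% of their payment, while joiners and leavers each recognize roughly 50%.

```python
def month_recognized(retained_full_month, new, churned, price=10):
    """Recognized revenue for one month under the mid-month model:
    retained customers contribute a full month; new and churned
    customers each contribute half a month on average."""
    return (retained_full_month * price
            + new * price * 0.5
            + churned * price * 0.5)

# A month with 80 retained customers, 50 joiners and 20 leavers:
# 80*$10 + 50*$5 + 20*$5 = $1,150 recognized,
# even though end-of-month MRR is only (80+50)*$10 = $1,300.
print(month_recognized(80, 50, 20))
```

Because churned customers still contribute half a month of recognized (and deferred) revenue in the month they leave, the revenue line reacts to churn more gently than the MRR line does, which is the crux of the argument here.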


I learned a few things from this exercise:

  1.  The Endurance team used a model in their analysis which is not common industry practice, while using industry terminology.
  2.  Their business suffers from the same issues other ‘hosters’ suffer from, and most probably at the same level.
  3.  Revenue recognition guidelines may need to be modified for SaaS subscription companies, as MRR would better reflect their upcoming financial outlook.

In summary, Churn has always been a key performance indicator of a Hosting company. No hosting company can avoid it, and they all continue to struggle to minimize it.
However, churn should be addressed through product, operational and service measures – not by the finance department.


Roy Saar is a partner at Mangrove Capital Partners, a bold but patient venture capital firm helping innovative entrepreneurs start and grow global, disruptive companies. He was involved in the launch of Wix and was also the founder of Sphera Technologies (sold to Parallels in 2007), one of the very first software platforms for SaaS providers. Roy sits on the boards of Wix, WalkMe and RFcell.

Cloud Infographic: Saving Means Sacrificing

One of the biggest deciding factors in selecting a cloud service provider is cost. Companies want to save money, and saving money is not only a good thing – but a great thing! Unfortunately, low prices will in most cases be reflected in the quality of the services you pay for, which can be greatly detrimental to your business. Low prices can mean hindered site performance, inexperienced support, increased security concerns and a whole slew of other issues that can, and will, arise. You must look at the whole picture and budget accordingly.

Included is an infographic provided by Cloudamize which covers some of the Myths vs Facts related to Cloud computing.


Pinup: Pertino Brings Cloud Networks To Small Businesses

Computing hardware is expensive. No matter what type of virtual activities you are involved in, shelling out money for the hardware side of things has always been part of the process of getting things working successfully. Having powerful software is definitely a good thing; however, without the hardware to run it on, it is about as useful as a trapdoor on a lifeboat.


However, cloud startup Pertino is set to change all of that.

Pertino was first launched in September 2011 and is located in Los Gatos, CA. It was founded by Craig Elliott, a former Apple executive; Scott Hankins, former Director of Engineering at Blue Coat Systems; Andrew Mastracci, architect; and Michael Cartsonis, who has since left for Contrast Security. Pertino has raised close to $30 million from venture firms such as Norwest Venture Partners and Lightspeed Venture Partners.

As Craig Elliott, CEO of Pertino Networks, puts it “The cloud and ‘as-a-service’ delivery models have transformed the IT landscape from a computing, storage and application perspective while, at the same time, the network paradigm has changed little. As a result, networks that once enabled businesses now constrain them when it comes to harnessing the disruptive capabilities and economies of the cloud. Pertino Networks was founded to radically simplify and alter the economics of business networks by bringing them into the cloud era.”

Pertino’s services are geared primarily towards smaller businesses that do not have the resources or manpower to develop and maintain the network they need with a hardware solution. Chief areas where this is seen are connectivity, security, performance and management:


Pertino delivers fast, LAN-quality connections between all of the devices in your company. Not only are these connections completely hardware free, they also require no advanced configuration to set up. These connections are also fully equipped with 256-bit encryption, which ensures that all your data stays safe, no matter where it is stored.


In addition to the aforementioned encryption, Pertino also includes several other layers of protection. One example of these layers is authentication at the device and user levels, which safeguards your network from access by unauthorized devices and people.


Pertino uses only top-tier data centers, providing the same level of performance whether your employees are local or in another part of the world. Pertino also identifies where the accessing employee is and uses the closest data center to ensure no degradation of performance. Traffic itself is also optimized, which locks data into the shortest and clearest paths to and from the Pertino network.


Pertino’s cloud management console allows you complete access and control over your entire network, all in one easy-to-use location. No matter what type of activities you need to complete, such as update policies, manage users or just monitor performance, it can easily be performed by using the Pertino interface.

Having the capabilities of a powerful network is an excellent way for a small business to develop and grow their company in as short a time as possible. By using Pertino’s services, your company can enjoy the stellar growth possibilities all companies strive for.

By Joe Pellicone

Cooks And The Cloud

Bouillabaisse is a French dish that loosely translates to fish stew. Well made, it is one of the best dinners you will ever have. Poorly made, and you will spend the night wishing you hadn’t stopped by the midnight street Bouillabaisse stand. Bouillabaisse is also a fantastic metaphor for cloud computing. Cloud Bouillabaisse is cloud soup. First off, it is market driven (fresh fish = good bouillabaisse) as the cloud is (cost, cost, cost). The time taken in preparation and acquisition can greatly impact the soup you serve (fish or cloud). That, to me, is the key for cloud solutions: the time spent preparing the stew is as important as how you serve it. Transitioning that to cloud solutions, it's about planning, preparing and migrating your solutions to their new cloud home.


So we start with the stock. In this case the first step is the CSP you choose. I once met a chef in Paris who went to the fresh market every day to get the ingredients for his Bouillabaisse. The same isn't possible for cloud service providers today – someday, maybe, but not today. We can, however, evaluate the provider and its overall capacity to host our solution. Noisy neighbors, like three-day-old fish, don't often make a good Bouillabaisse.

Do we have the ingredients we need in our pantry?

Our initial assumption is that we have already picked the Cloud Service Provider (CSP) and that providers are able to meet our technology needs (servers and connectivity) as we start our Cloud Bouillabaisse. Frankly the CSP is a broth for our soup – bad broth equals bad soup so choose wisely. Now we carefully prepare everything we are adding. This includes planning the following “ingredients”:
  • Security: what does the provider have today, and what additional things do we need?

  • Migration: has the CSP done this before? It isn't bad if they haven't; it just changes how you cook a little. Instead of sampling at various times you now have to sample all the time. More work, but again we are aiming for a great stew here.
  • Migration: if the CSP hasn't done this before, go get a partner who has – or a partner you trust to make sure that as you are sampling, they are continuing to stir your wonderful Cloud Bouillabaisse.
  • Cost: did we mention it has to be cheaper than the solution we are running in our data centers today? The nature of stew is not always using the best and most tender cuts of fish, simply that you cook them slowly for a long time breaking them down and making them more appetizing. We can’t break down our cloud provider by boiling them for hours, so we have to start off with the shared cost model of cloud reducing our price from day one.

Does the CSP offer the security our solution requires?

This one has been bouncing around cloud solutions for years: “The cloud is not secure.” The reality is that the cloud can in fact be secure. Bouillabaisse is as much a process as it is a dinner – from making the stock from fish parts to cutting the vegetables, it's as much how you do it as what you do. The stringent nature of FedRAMP and its requirements around monitoring what is happening in a solution end up being game changers. Adding security monitoring to a FedRAMP-cleared solution isn't horribly hard, and expanding the operational framework to include both security monitoring and FedRAMP creates a stronger overall solution. That said, it is critical when considering cloud solutions that you evaluate the security capabilities and offerings of your CSP carefully. Simply put, it isn't enough to taste the Bouillabaisse from time to time; we have to make sure no one else can get into our kitchen and ruin it.

Making a fine Bouillabaisse and building a cloud solution have a lot of things in common. While I have yet to end up with Cloud on my shirt during lunch there are many other common components. Pick the right ingredients, make sure your process works and in the end serve the solution with proper garnishment. I think a fine Cloud Pumpernickel would be perfect with my Cloud Bouillabaisse.

By Scott Andersen

The Future Of Cloud Based ERP

What does the future look like for ERP in the cloud? It’s a question that everyone involved in this business asks, given that the general changeover from on-premise systems to cloud computing is currently happening more slowly than many had anticipated. But according to some industry watchers, the pace will continue to pick up, and the expectation is that over the next few years investment in public IT cloud computing services will approach $100 billion, with an annual growth rate of approximately 25 percent. Most of that is due to companies choosing to shift more and more of their core operations to the cloud.

The future of cloud-based ERP resides in a combination of forces: a recognition of the true safety and security of the cloud; the economic and functional practicalities of replacing licensed software such as Microsoft Excel with externally managed and maintained SaaS applications; the appeal of both two-tier and multi-tenancy ERP setups; and the general bandwagon effect.

Organizations such as Gartner, Deloitte, Accenture and others routinely poll their clients, asking them the reasons behind any major reluctance to move to the cloud, and in general CIOs and CFOs respond that the two primary sticking points are a lack of full familiarity as to cloud offerings, and a constant fear of a breach of security.

Jean Gea, Director of Product Marketing at Acumatica, states that these are two issues that are relatively easy for ERP and cloud hosting providers such as Acumatica to handle through increased communication with customers, delivered alongside one simple fact: when a cloud provider's main stock-in-trade is providing cloud access, the onus is on them to ensure security as a fundamental component of doing business.

Further acceptance of cloud-based ERP will continue as improvements in virtualization, combined with the sharing of resources (called multi-tenancy), drive down the cost to ERP vendors of maintaining servers as well as code – savings that can be passed along to the customer. While many companies still choose to keep their ERP solutions in-house (on-premise), or have started to export some data to a cloud as a hybrid/two-tier solution, they see the ongoing costs of purchasing and upgrading software as an increasingly unnecessary annoyance.

Further, it is also significant to note that many legacy ERP systems lack the scalability to support ongoing changes in compliance regulation, which has great impact not only in numbers-based industries such as accounting and finance, but also in engineering and manufacturing, where ISO and related certification standards continue to evolve. Cloud-based SaaS is in a much better position to move with these changes and to pass them on to the customer inexpensively, while still adhering to any regulations regarding the location of stored data.

The bandwagon effect: this is the age of the agile corporation. Companies that rest on plans and procedures that take days, weeks, or months to get started quickly lose out to those who have the mindset and the leadership to react to market demands more quickly and effectively. This is the essence of virtualization. Fewer silos, fewer legacy systems, faster time to market and reduced bureaucracy.

According to Louis Columbus, writing in Forbes and quoting a Gartner study, “SaaS-based manufacturing and distribution software will increase from 22% in 2013 to 45% by 2023. […] The catalyst for much of this growth will be two-tier ERP system adoption.”

Another key area of innovation that will drive the future of cloud ERP is analytics, especially when used in tandem with mobile computers (tablets/smartphones) that allow for instant updates on operations, as well as electronic paperwork and related company news. “This is something that Acumatica does extremely well,” says Gea. “By optimizing ERP for mobile and touch-based computers, our clients can be on one side of the globe and oversee essential resource management priorities with ease and confidence.” She adds, “As people shake off the mindset that to be in serious business you have to have hard, heavy machines in your own dedicated rooms, and replace that with the freedom of movement that cloud symbolizes, we will see more and more companies continue to shift more and more of their core operations there. As with any economy of scale, the benefits continue to increase as the collective participant group expands.”

Additional information about Acumatica’s cloud ERP services including demos and pricing are available at

Post Sponsored By Acumatica

By Steve Prentice

CloudTweaks Comics
Cloud Infographic: Security And DDoS

Cloud Infographic: Security And DDoS

Security, Security, Security!! Get use to it as we’ll be hearing more and more of this in the coming years. Collaborative security efforts from around the world must start as sometimes it feels there is a sense of Fait Accompli, that it’s simply too late to feel safe in this digital age. We may not…

Timeline of the Massive DDoS DYN Attacks

Timeline of the Massive DDoS DYN Attacks

DYN DDOS Timeline This morning at 7am ET a DDoS attack was launched at Dyn (the site is still down at the minute), an Internet infrastructure company whose headquarters are in New Hampshire. So far the attack has come in 2 waves, the first at 11.10 UTC and the second at around 16.00 UTC. So…

The Conflict Of Net Neutrality And DDoS-Attacks!

The Conflict Of Net Neutrality And DDoS-Attacks!

The Conflict Of Net Neutrality And DDoS-Attacks! So we are all cheering as the FCC last week made the right choice in upholding the principle of net neutrality! For the general public it is a given that an ISP should be allowed to charge for bandwidth and Internet access but never to block or somehow…

A New CCTV Nightmare: Botnets And DDoS attacks

A New CCTV Nightmare: Botnets And DDoS attacks

Botnets and DDoS Attacks There’s just so much that seems as though it could go wrong with closed-circuit television cameras, a.k.a. video surveillance. With an ever-increasing number of digital eyes on the average person at all times, people can hardly be blamed for feeling like they’re one misfortune away from joining the ranks of Don’t…

The DDoS That Came Through IoT: A New Era For Cyber Crime

The DDoS That Came Through IoT: A New Era For Cyber Crime

A New Era for Cyber Crime Last September, the website of a well-known security journalist was hit by a massive DDoS attack. The site’s host stated it was the largest attack of that type they had ever seen. Rather than originating at an identifiable location, the attack seemed to come from everywhere, and it seemed…

Cloud Infographic – DDoS attacks, unauthorized access and false alarms

DDoS attacks, unauthorized access and false alarms Ahead of DDoS attacks, unauthorized access and false alarms, malware is the most common incident that security teams reported responding to in 2014, according to a recent survey from SANS Institute and late-stage security startup AlienVault. The average cost of a data breach? $3.5 million, or $145 per sensitive…

Reuters News: Powerful DDoS Knocks Out Several Large Scale Websites

DDoS Knocks Out Several Websites Cyber attacks targeting the internet infrastructure provider Dyn disrupted service on major sites such as Twitter and Spotify on Friday, mainly affecting users on the U.S. East Coast. It was not immediately clear who was responsible. Officials told Reuters that the U.S. Department of Homeland Security and the Federal Bureau…

Maintaining Network Performance And Security In Hybrid Cloud Environments

Hybrid Cloud Environments After several years of steady cloud adoption in the enterprise, an interesting trend has emerged: More companies are retaining their existing, on-premise IT infrastructures while also embracing the latest cloud technologies. In fact, IDC predicts markets for such hybrid cloud environments will grow from the over $25 billion global market we saw…

Are Cloud Solutions Secure Enough Out-of-the-box?

Out-of-the-box Cloud Solutions Although people may argue that data is not safe in the cloud because using cloud infrastructure requires trusting another party to look after mission-critical data, cloud services are actually more secure than legacy systems. In fact, a recent study on the state of cloud security in the enterprise market revealed that…

The Cloud Is Not Enough! Why Businesses Need Hybrid Solutions

Why Businesses Need Hybrid Solutions Running a cloud server is no longer the novel trend it once was. Now, the cloud is a necessary data tier that allows employees to access vital company data and maintain productivity from anywhere in the world. But it isn’t a perfect system — security and performance issues can quickly…

How The CFAA Ruling Affects Individuals And Password-Sharing

Individuals and Password-Sharing With the 1980s came the explosion of computing. In 1980, the Commodore ushered in the advent of home computing. Time magazine declared 1982 “The Year of the Computer.” By 1983, there were an estimated 10 million personal computers in the United States alone. As soon as computers became popular, the federal government…

3 Keys To Keeping Your Online Data Accessible

Online Data Data storage is often a real headache for businesses. Additionally, the shift to the cloud in response to storage challenges has caused security teams to struggle to reorient, leaving 49 percent of organizations doubting their experts’ ability to adapt. Even so, decision makers should not put off moving from old legacy systems to…

Part 1 – Connected Vehicles: Paving The Way For IoT On Wheels

Connected Vehicles From cars to combines, the IoT market potential of connected vehicles is so expansive that it will even eclipse that of the mobile phone. Connected personal vehicles will be the final link in a fully connected IoT ecosystem. This is an incredibly important moment to capitalize on given how much time people spend…

Micro-segmentation – Protecting Advanced Threats Within The Perimeter

Micro-segmentation Changing with the times is frequently overlooked when it comes to data center security. The technology powering today’s networks has become increasingly dynamic, but most data center admins still employ archaic security measures to protect their network. These traditional security methods just don’t stand a chance against today’s sophisticated attacks. That hasn’t stopped organizations…
