Category Archives: Big Data

The Cancer Moonshot: Collaboration Is Key

Cancer Moonshot

In his final State of the Union address in January 2016, President Obama announced a new American “moonshot” effort: finding a cure for cancer. The term “moonshot” comes from one of America’s greatest achievements, the moon landing. If the scientific community can achieve that kind of feat, then surely it can rally around the goal of finding a cure for cancer. President Obama put his second-in-command, Vice President Joe Biden, in charge of “Mission Control” for the cancer moonshot efforts.

Though this is certainly an ambitious undertaking, what’s encouraging is that the project isn’t starting from scratch. Researchers and clinicians have already made remarkable progress in the forms of research, clinical trials, drug development and more. There already have been many masterful achievements that propel this effort to its goal. For example, the successful mapping of the human genome nearly two decades ago provided a tremendous jumping-off point for customized cancer treatments and potential cures. But in order to land this moonshot, there must be significant innovation in how all of these stakeholders communicate, collaborate and share important information.


(Image Source: Shutterstock)

Silo-breaking: as vital as funding?

Two of the biggest challenges of this project are to provide increased funding to the strategic participants, and to increase collaboration and information sharing among the numerous research teams and clinicians all around the world. Vice President Biden has said that he wants to “break down silos and bring all the cancer fighters together—to work together, share information, and end cancer as we know it.” The goal is to double the pace of progress, or as he put it: “to make a decade’s worth of advances in five years.”

Those of us in the cloud computing community are especially invested in the efforts to increase coordination, eliminate silos and open up access to information. That can only be done by improving and innovating on technology solutions, so that storing and managing data doesn’t kill productivity. Let’s consider some of the issues that will affect which underlying technologies can be utilized to further drive collaboration and support access to information.

Protecting massive amounts of private data

A project of this magnitude will have massive amounts of data, generated by a multitude of sources. These large data sets must use common data elements (data descriptors, or metadata) to ensure that researchers are comparing apples to apples. Toward this end, the National Cancer Institute (NCI) has developed the Common Data Elements (CDE) to serve as a controlled vocabulary of data descriptors for cancer research. The CDE will help facilitate data interchange and interoperability between cancer research centers.
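
As a simple illustration of what a controlled vocabulary buys researchers, here is a minimal sketch that checks a record against a small, hypothetical set of data elements (placeholder names, not actual CDE identifiers) and reports anything that would block cross-center data interchange.

```python
# A minimal sketch of checking records against a controlled vocabulary, in the
# spirit of the NCI Common Data Elements. The element names and rules below are
# hypothetical placeholders, not actual CDE identifiers.
CONTROLLED_VOCABULARY = {
    "patient_age_at_diagnosis": int,
    "primary_tumor_site": str,
    "tnm_stage": str,
}

def validate_record(record: dict) -> list:
    """Return problems that would block cross-center data interchange."""
    problems = []
    for field, value in record.items():
        expected_type = CONTROLLED_VOCABULARY.get(field)
        if expected_type is None:
            problems.append(f"'{field}' is not a recognized data element")
        elif not isinstance(value, expected_type):
            problems.append(f"'{field}' should be of type {expected_type.__name__}")
    return problems

# A record using a local, nonstandard field name is flagged before it is shared.
print(validate_record({"patient_age_at_diagnosis": 61, "tumour_location": "lung"}))
```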

Big data and learning algorithms will enable researchers to identify patterns and anomalies that, for instance, may help to identify patients who can benefit from standard treatments, or whose tumors require a different approach. Given that these large data sets will contain highly personal patient health information that can’t be anonymized, they will need to be protected with the strongest measures for data privacy to protect patients’ rights and to maintain HIPAA compliance.
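
For a flavor of the pattern-and-anomaly analysis described above, the sketch below runs an off-the-shelf anomaly detector (scikit-learn’s IsolationForest) over a randomly generated stand-in for a de-identified feature matrix; real studies would use carefully curated clinical and genomic features and far more rigorous modeling.

```python
# A minimal sketch of flagging unusual patient profiles with scikit-learn.
# The feature matrix below is random stand-in data, not real patient information.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))        # 500 hypothetical patients, 8 de-identified features

model = IsolationForest(contamination=0.02, random_state=0).fit(X)
labels = model.predict(X)            # -1 = anomalous profile, 1 = typical profile

# Profiles flagged here might warrant a different treatment approach and closer review.
flagged = np.flatnonzero(labels == -1)
print(f"{len(flagged)} of {len(X)} profiles flagged for review")
```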

Preserving data integrity through controlled access


Of course, data integrity is of paramount concern. The data and other forms of information will come from numerous sources, and technology solutions will be needed to ensure that it maintains its consistency—that it isn’t inappropriately accessed and changed or corrupted. This means that access control to research information is critical. Yes, the project aims to increase sharing of data, but it needs to be shared with the right people in the right ways. Much of the information will be in documents, not databases, which means access control, version control, and document retractions and expirations are important features for the underlying collaboration technology. And all of this must be done while maintaining strict HIPAA compliance and patient privacy.
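
A minimal sketch of the kind of controlled-access checks described above: role-based access, share expiration, and retraction evaluated before a document is served. The class, roles, and dates are illustrative assumptions, not any particular product’s model.

```python
# Illustrative only: a tiny model of role-based access with expiration and retraction.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class SharedDocument:
    doc_id: str
    allowed_roles: set
    expires_at: datetime
    retracted: bool = False

def can_access(doc: SharedDocument, user_role: str) -> bool:
    now = datetime.now(timezone.utc)
    if doc.retracted or now >= doc.expires_at:
        return False                       # retracted or expired shares are closed everywhere
    return user_role in doc.allowed_roles  # the right people, in the right ways

trial_protocol = SharedDocument(
    doc_id="protocol-007",
    allowed_roles={"principal_investigator", "clinical_reviewer"},
    expires_at=datetime(2017, 1, 1, tzinfo=timezone.utc),
)
print(can_access(trial_protocol, "clinical_reviewer"))  # True until expiry or retraction, False afterwards
```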

Setting content free to get work done

The time spent on collaborating and sharing information has ballooned by 50 percent or more over the last two decades, according to Harvard Business Review. Too much time is wasted trying to piece together disconnected information among team members who are scattered across the globe, leaving little time for actual work to get done.

Teams need virtual workspaces built for specific business and clinical research processes. Think in terms of flowing content across the extended ecosystem, instead of just improving systems of record behind the firewall. To take on this initiative, the clinical research community requires what some industry analysts call “systems of engagement,” meaning information only comes to life when it is put to use and acted upon. But many technologies fail to account for specific use cases (such as global clinical research) or the security and compliance needs of information in motion (such as confidential patient data).

In this race to exterminate cancer, the first challenge that must be resolved is to control the flow of information across the complete content lifecycle — even after external sharing — while also setting that information free so those who access it can increase productivity. Solving the collaboration challenge will ultimately allow researchers to remain focused on the important work of the cancer moonshot initiative.

The countdown is on…

By Daren Glenister

So How Will Big Data Impact Our Lives?

The Big Data Impact

Last week we took a closer look at a few of the differences between Business Analytics vs Data Science. Clearly there is a great deal of interest in this field, especially when leading analyst research firm IDC predicts that big data revenues may reach nearly $187 billion worldwide by 2019.

TechCrunch recently spoke with Matt Turck, who said: “We are entering the most exciting time for big data.” In 2010, only 2.5% of the Series A market was committed to big data. Today, the sector amounts to more than 7.5% of total venture investments.

So how will big data impact our lives? The infographic included below takes a closer look.


How Formal Verification Can Thwart Change-Induced Network Outages and Breaches

How Formal Verification Can Thwart Breaches

Formal verification is not a new concept. In a nutshell, the process uses sophisticated math to prove or disprove whether a system achieves its desired functional specifications. It is employed by organizations that build products that absolutely cannot fail. One of the reasons NASA rovers are still roaming Mars years after their intended lifespan is because the correctness of their software was mathematically verified before deployment. Similar trusted 24/7/365 technology is embedded into mission-critical airplane flight controls, medical devices, and military defense systems that are too important to malfunction.

Recently, a team of computer science professors and Ph.D. students in the EnterpriseWorks incubator at the University of Illinois at Urbana-Champaign (UIUC) discovered how that same methodology can be applied to bulletproof today’s most complex networks to help prevent change-induced outages and breaches. Mathematical network verification is long overdue. Even if only 2% of modifications to a network’s configuration result in a change-induced outage or vulnerability, let’s put that in perspective: would you board an airplane if you knew that two out of 100 planes could fall out of the sky? Of course not. Why should we expect less from our networks? With so much sensitive data at stake, networks have to be as trustworthy as mission-critical systems and infrastructure.

Why Networks Fail


Four key factors have made network infrastructure particularly vulnerable to breaches and outages. First, when you factor in the cloud, virtualization, the move to software-defined networks (SDN), and mobile and IoT devices, you’ll quickly see that networks have become incredibly complex to manage. Second, every network change leaves an opening for something to possibly be misconfigured. By some estimates, operators of large enterprise or service provider networks make approximately 1,000 changes per month. Third, humans are a constant and unpredictable factor. Gartner analyst Donna Scott has noted that “80 percent of unplanned downtime is caused by people and process issues, including poor change management practices, while the remainder is caused by technology failures and disasters.”

And finally, there is poor policy management. According to the consulting firm Protiviti, one out of three enterprises and service providers lacks policies for IT, information security and data encryption, while 71 percent lack critical knowledge of which policies to institute to mitigate vulnerabilities and disruption. The fact is, most enterprises do not know what their network actually looks like at the deep infrastructure level, whether it is operating as it should be, and what vulnerabilities are lurking in the network. As a result, making any change to the network – even a day-to-day modification like changing access control rules or adding a device – is a time-consuming, manual and risky process. And broad architectural changes such as moving to a hybrid cloud or deploying SDN can be daunting projects.

How Formal Verification Works

Formal verification tries to predict the future: will my design work when I deploy it in the field? Will it always work, no matter what unexpected inputs or attacks are thrown at it? For example, a software application designer might want to know that her code will never crash or that it will never execute certain functions without an authorized login. These are simple, practical questions – but answering them is computationally challenging because of the enormous number of possible ways code may be executed, depending on the inputs and environments it interacts with. Decades of research advances in formal verification algorithms have led to the ability to rigorously answer such questions for certain kinds of software and critical systems.
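
As a toy illustration of the kind of question formal verification answers, the sketch below uses the open-source Z3 SMT solver (one of many verification tools, and not necessarily what the teams described here used) to prove that, under a simple model of an application’s logic, a privileged function can never run without an authorized login: the solver searches every possible input for a counterexample and finds none.

```python
# A minimal sketch with the Z3 SMT solver (pip install z3-solver). The program
# model below is invented for illustration, not drawn from any real application.
from z3 import Bools, Solver, Implies, And, Not, unsat

authorized, requested, privileged_ran = Bools("authorized requested privileged_ran")

# Model of the application's logic: the privileged function runs only when a
# request arrives AND the caller has an authorized login.
program = privileged_ran == And(authorized, requested)

# Property to verify: the privileged function never runs without authorization.
safety = Implies(privileged_ran, authorized)

solver = Solver()
solver.add(program, Not(safety))  # search for an execution that violates the property
print("property holds for all inputs:", solver.check() == unsat)  # True: no counterexample
```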


(Image Source: Shutterstock)

Can we understand the behavior of complex enterprise networks with the same mathematical rigor? In a network, we want to understand whether the design meets network-wide data-flow policy: Is my network correctly segmented to protect against lateral movement by attackers inside my perimeter? Will my web services be available even after a router interface failure? Today, these questions are addressed through manual spot-checks, auditing and change-review boards that might take weeks to provide answers, all of which still leave plenty of room for error. In contrast, mathematical network verification reasons automatically about the possible behaviors of the network.

Achieving that analysis required several innovations:

  • First, it required rigorous understanding of devices’ data-flow behavior, down to the data plane instructions, which are the “machine code” of the network.
  • Second, it required sophisticated, novel reasoning algorithms and compact data structures that can efficiently explore the exponentially large number of possible packets that may be injected at thousands or tens of thousands of devices and ports in the network.

Using mathematical network verification can help enterprises prevent the outages and breaches that lead to astronomical losses, both informational and financial. Unlike techniques such as penetration testing and traffic analysis, mathematical network verification performs exhaustive analysis of all possible data-flow behavior in the network, before it happens – before vulnerabilities can be exploited, and without waiting for users to experience outages. If there is a network policy violation, verification will find it and provide a precise identification of the vulnerability and how to fix the flaw. The underlying technology allows for millisecond-level analysis of security policies, enabling real-time alerting and policy enforcement, and can provide mathematical evidence that the network is correct, giving organizations the confidence to implement changes to their infrastructure.
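
To make the idea concrete, here is a toy sketch of the kind of exhaustive check a network verifier performs: it enumerates every device a class of packets can reach from a given ingress and tests the result against a segmentation policy. The topology and forwarding rules are invented, and real verifiers reason over full packet header spaces with far more sophisticated algorithms and compact data structures.

```python
# A toy data plane: each device maps destination prefixes to a next hop
# (None = deliver locally). Topology and rules are invented for illustration.
from ipaddress import ip_network

FORWARDING = {
    "guest_edge": [(ip_network("10.0.0.0/8"), "core")],
    "core":       [(ip_network("10.1.0.0/16"), "finance"),
                   (ip_network("10.2.0.0/16"), "guest_lan")],
    "finance":    [(ip_network("10.1.0.0/16"), None)],
    "guest_lan":  [(ip_network("10.2.0.0/16"), None)],
}

def reachable_devices(ingress, dst_prefix):
    """All devices that packets toward dst_prefix can traverse from ingress."""
    seen, frontier = set(), [ingress]
    while frontier:
        device = frontier.pop()
        if device in seen:
            continue
        seen.add(device)
        for prefix, next_hop in FORWARDING.get(device, []):
            if next_hop and dst_prefix.subnet_of(prefix):
                frontier.append(next_hop)
    return seen

# Policy: traffic entering at the guest edge must never reach the finance segment.
violation = "finance" in reachable_devices("guest_edge", ip_network("10.1.0.0/16"))
print("guest-to-finance segmentation violated:", violation)  # True: the /8 route leaks traffic
```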

Real-time Situational Awareness

Depending on how an organization applies mathematical network verification to its network, the technology can maintain real-time situational awareness of a network’s data-plane state (the lowest and most foundational information in a network device), develop a flow model of the entire network, and perform a rigorous analysis of security policies in real time to detect vulnerabilities and policy violations.

In a world where a network intrusion can result in billion-dollar losses, brand damage and outages of critical functionality, mathematical network verification represents a significant step towards improving overall network health and preventing network outages and breaches. It is the only technology available today capable of providing rigorous, real-time analysis at the deep data-plane level of a complex network, and it produces a level of confidence not attainable by other approaches.

Mathematical network verification is a unique technology that has already demonstrated success in multiple Fortune 500 and government networks. We will likely see quick adoption of the technology when organizations discover how quickly and dramatically it improves network security and resilience, making change-induced breaches and outages a thing of the past.

By Dr. Brighten Godfrey

Don’t Be Intimidated By Data Governance

Data Governance

Data governance, the understanding of an organization’s raw data, is an area IT departments have historically viewed as a lose-lose proposition. Doing nothing means organizations run the risk of data loss, data breaches and data anarchy – no control, no oversight – the Wild West, with IT just hoping for the best. On the flip side, opening the hood and checking whether policies or regulations are being respected means you have to do something about it: define new policies and enforce them everywhere you have data, a months-long process that involves people’s time, money, developing new skills, external experts and a lot of pain for IT.

Content Governance is about understanding the unknowns when it comes to your content and answering questions: What do we know about our content? Is it encrypted and stored at the right location? Are the right people accessing it? Is this file subject to regulations, or is it confidential and protected accordingly? What will be the business impact if it is accidentally deleted? It’s a holistic look at the sum of what is produced by your business.


The reality is that all business content needs to be governed. In my interactions with customers, Chief Information Officers (CIOs), Chief Data Officers (CDOs) and other peers, several recurring themes or myths come up. Here’s a look at them, along with some suggestions for tackling the issues:

1. A content governance solution is long and costly to install. We need experts.

Historically this has been the case with on-premises solutions that require a lot of servers: the more data there was to analyze, the more servers you needed. In addition, running perpetual-license software meant organizations needed costly professional services experts to deploy it. By the time an organization was ready to update its content governance policies, IT would usually find siloed, inconsistent and outdated systems – a patchwork solution that had been ignored.

This problem is solved with a software-as-a-service (SaaS) content governance solution. No additional hardware, no experts needed. It’s like a whole team of data scientists working for you in the cloud.

2. There is no one-size-fits-all solution; I need many content governance solutions to cover all my content repositories

This one is actually not a myth. To date, most solutions are very specific, working only for email, or only for Microsoft SharePoint, and so on. These solutions do not play nice across repositories and leave IT in a bind where there are specialty solutions for each type of repository and no single source of truth.

The fix here is to look for a solution that is open and supports a broad variety of repositories; when it does not natively support the one you need, ensure it offers a software development kit (SDK) so you or your content repository vendor can integrate with it. This type of solution removes the silo effect and creates a single source of truth in an agnostic way, allowing policies to be customized and applied across the board all at once.

3. For my cloud content, the only option is to rely on the vendor’s native content governance solution

It’s true that the majority of data governance solutions are mainly focused on on-premises content. The cloud is also relatively new, and there is no “cloud standard” for content. As a result, a lot of cloud content applications have their own governance tool. The problem is that these solutions are limited (you can’t be an expert at everything), and for every app you need to use a different tool, limiting your visibility and control and making the quality of your content governance dependent on each provider.

The fix here is to abstract the content layer (where the data lives) from the governance logic (how the data is managed) so that policies can be applied to any content and their implementation is translated for each repository. Such an overlay solution will have to be hybrid and unify views across repositories with a consistent level of control.
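
A minimal sketch of that separation: one governance rule applied uniformly through an abstract repository interface. The connector classes and the encryption rule below are hypothetical stand-ins for the SDK-based integrations and policies a real deployment would use.

```python
# Illustrative only: the governance logic never touches repository-specific APIs.
from abc import ABC, abstractmethod

class Repository(ABC):
    @abstractmethod
    def list_files(self) -> list: ...
    @abstractmethod
    def quarantine(self, file_id: str) -> None: ...

class SharePointConnector(Repository):
    def list_files(self):   # stubbed metadata; a real connector would call the repository's API
        return [{"id": "sp-1", "name": "patients.xlsx", "encrypted": False}]
    def quarantine(self, file_id):
        print(f"SharePoint: quarantined {file_id}")

class CloudDriveConnector(Repository):
    def list_files(self):
        return [{"id": "cd-9", "name": "roadmap.pdf", "encrypted": True}]
    def quarantine(self, file_id):
        print(f"Cloud drive: quarantined {file_id}")

def enforce_encryption_policy(repos):
    """Apply one governance rule, regardless of where the content lives."""
    for repo in repos:
        for f in repo.list_files():
            if not f["encrypted"]:
                repo.quarantine(f["id"])

enforce_encryption_policy([SharePointConnector(), CloudDriveConnector()])
```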

4. Most content governance solutions are biased and encourage an upsell of their own back-up, archiving or disaster recovery solutions

Historically, content governance has been an adjacent market for data storage vendors with archiving, disaster recovery or back-up solutions, leaving customers wondering whether a classification that told them a huge amount of their data was mission-critical and had to be duplicated was really a self-fulfilling prophecy designed to cross-sell solutions. How can you trust a content governance recommendation when interests are not aligned?

Going back to the theme of openness, the way to bust this myth is to look for a vendor that is content repository and data storage agnostic, independent and open. This lets you separate the recommendation from the vendor and from the action taken on it: whatever the recommendation for managing your content, ensure the vendor you select can work agnostically across repositories.

5. If my organization deploys a content governance solution, the usability and productivity of users will be impacted

Introducing content governance means monitoring all activities around your content and making the best decision on how to protect it. Solutions designed with IT in mind often ignore usability for non-IT employees. If it takes a knowledge worker more time to look for content on a repository, or even to access it, there will be dissatisfaction and lost business productivity – and users will be driven towards so-called ‘shadow IT’.

The key to avoiding this is to select a vendor that has collaboration expertise and understands content workflow – how information moves throughout the enterprise. The goal should be to control the content not the apps or the users.

At its core, a content governance solution allows businesses to understand more about their data, create and enforce policies to govern it and use this information for business insight and decision-making. The goal is to be open, collaborative and supportive of multiple repositories while still meeting contractual and regulatory requirements. To be more productive, users will select their preferred apps regardless of attempts by IT to control them. Instead, IT should focus on the content and how it is being accessed by these users (not the apps).

The Win-Win: users can interact across repositories in an agnostic way and IT can sleep at night.

By Isabelle Guis, Chief Strategy Officer, Egnyte

Isabelle Guis is the Chief Marketing and Strategy Officer at Egnyte, overseeing all global marketing as well as product and go-to-market strategies. She previously served as EMC’s vice president of Marketing for the Public Cloud Solutions Group and Enterprise Storage Division, driving cloud buyer and service provider segmentations, as well as messaging, product positioning and go-to-market strategies for the company’s core storage solutions.

Isabelle has also held leadership positions at Avaya, Big Switch Networks, Cisco Systems, and Nortel Networks. She holds a Master of Science in Electrical Engineering and Computer Science from Supelec (France), and an MBA from Harvard Business School.

Business Analytics Vs Data Science

Big Data Continues To Grow

Big Data continues to be a much-discussed topic of interest, and for good reason. According to a recent report from International Data Corporation (IDC), “worldwide revenues for big data and business analytics will grow from nearly $122 billion in 2015 to more than $187 billion in 2019, an increase of more than 50% over the five-year forecast period. The new Spending Guide expands on IDC’s previous forecasts by offering greater revenue detail by technology, industry, and geography.”

This is very good news for businesses and investors involved in this growing industry. Anyone looking to break into this market as a career will likely find the infographic below useful.



Educating Our Future Data Scientists

Future Data Scientists

Data scientists are among the many sought-after IT professionals currently enjoying their pick of positions thanks to a general shortage of IT staff. Universities and colleges have created and continue to develop master’s programs attempting to address this skills shortage, and there has additionally been a reliance on boot camps to help fill the gaps. However, undergraduate programs tailored to big data have been somewhat lacking in the past.

With the Department of Labor’s forecast of 25% growth in data jobs by 2018, it’s no surprise that US colleges and universities are changing tactics and developing undergraduate programs that address big data skills requirements. The good news is there is growing interest in this field, which means a number of excellent publications and influencers are starting to cover this topic.

Universities & Colleges with Data Science Undergrad Programs


(Image Source: Shutterstock)

College of Charleston

The College of Charleston offers its own undergraduate program in data science; students complete internships at tech giants before choosing from 14 degree disciplines, which include accounting, biomechanics, CRM, economics, exercise physiology, finance, geoinformatics, molecular biology, organismal biology, physics and astronomy, psychology, sociology, and supply chain management.


DePaul University

The Bachelor of Arts in Decision Analytics provided by DePaul teaches students practical applications of big data, including the ethical collection and securing of data, analysis and communication of findings, and the development of solutions to data problems.

Drexel University

Starting in the fall of 2016, this Bachelor of Science program in Data Science provides students with a well-rounded data education including the analysis and meaningful use of data, determining a business’s data needs, and securing data.

University of Iowa

This Bachelor’s Degree in Business Analytics and Information Systems claims a 100% placement rate and provides two degree tracks. The business analytics track revolves around improving a business’s data strategy and building new processes, while the information systems track focuses on managing technologies that collect, store, and secure data.

Ohio State University

The data analytics program offered by this university provides an interdisciplinary major: a bachelor of science degree from the College of Arts and Sciences, offered through partnerships with the Fisher College of Business, the College of Engineering, and the College of Medicine. Students are able to specialize in biomedical analytics, business analytics, or computational analytics, covering a range of industries requiring data scientists.

University of San Francisco

Providing majors in data science focusing on computer science and mathematics, the University of San Francisco program promises students a wealth of mathematical, computational, and statistical skills. Students choose between the three streams of mathematical data science, computational data science, and economic data science and courses range from linear algebra to microeconomics to programming.

University of Wisconsin

Through its undergraduate degree in Data Science and Predictive Analytics, University of Wisconsin students receive not only an education in data science but also instruction in business and the application of big data concepts. Skills taught include the mining and collection of data, analysis of data, and the creation of data visualizations, preparing students for work in fields across finance, marketing, economics, management, and more.

Worcester Polytechnic Institute

The program at Worcester Polytechnic Institute earns students both a Bachelor of Science Degree in Data Science as well as a master’s degree. With research across almost every industry, students will find few limitations to their big data education.

Free Big Data Programs


Of course, formal education programs are not the only providers of big data skills. A number of online courses have emerged to help job seekers beef up their big data and analytics skills, covering topics such as machine learning, Hadoop, and various programming languages. Udemy offers Big Data Basics: Hadoop, MapReduce, Hive, Pig, & Spark, aimed at beginners interested in the tech foundations of the big data ecosystem, and for slightly more advanced students, the Hadoop Starter Kit offers access to a multi-node Hadoop training cluster. Introduction to Spark, R Basics – R Programming Language Introduction, and Python for Beginners with Examples provide primers for some of the different skills data scientists need, while MIT’s Artificial Intelligence course teaches students to develop intelligent systems. These and many other short courses provide an excellent starting point for those interested in data science, while the programs being drawn up by top colleges and universities are advancing quickly to meet industry skills needs.

By Jennifer Klostermann

Leveraging IoT & Open Source Tools

IoT and Data Growth

Though the data regarding connected devices is anything but cohesive, a broad overview of IoT stats affords a clear picture of how quickly our world is becoming a connected ecosystem: In 1984, approximately 1,000 devices were connected to the Internet; in 2015, Gartner predicted 4.9 billion connected things would be in use; and by 2020 analysts expect we’ll have somewhere between 26 and 50 billion connected devices globally. Said Padmasree Warrior, Chief Technology and Strategy Officer at Cisco, “In 1984, there were 1,000 connected devices. That number rose up to reach a million devices in 1992 and reached a billion devices in 2008. Our estimates say… that we will have roughly 50 billion connected devices by the year 2020.”

What’s Connected?



Of course, we’re well past the days when ‘connected’ referred only to your computer or mobile phone. Connected devices today include many household gadgets such as heating, lighting, and refrigerators; personal wearables including smart watches and clothing; and industrial equipment such as shipping pallets and automation gear. Innovators are already dreaming up the next big thing, and in the future, we can expect smart couches that keep you warm in winter, smart crockery that tracks what you’re eating, and smart toothbrushes helping fight gum disease. IoT is being implemented in the running of businesses and product manufacturing, as well as in new designs and concepts generated by these firms, and according to Vision Mobile, 91% of IoT developers are using open source technology in these IoT projects.

IoT & Open Source Tools

With data from 3,700 IoT developers across 150 countries, Vision Mobile found that eight out of ten IoT developers use open source whenever they can, and six out of ten contribute to open source projects. The cost (free) of these open source tools tends to be the leading driver behind their use, but developers also point to open source tools providing the best support along with the best technology thanks to constant improvements and peer-to-peer support in the open source community. Open source technology is also considered a valuable method for improving skills and learning new technologies.

Oliver Pauzet, VP of Market Strategy at Sierra Wireless, additionally points out that “closed, proprietary systems can make interoperability difficult.” Inter-brand connection is thus another challenge open source technology addresses, enabling the devices of different developers to communicate. Pauzet also points to the necessity of creating and employing industry standards that will encourage interoperability for greater choice and flexibility. This would mean developers could use cross-brand devices in the development of specific solutions, promising greater innovation along with cost efficiency. Finding an open source license that is “business-friendly,” along with industrial-grade components released as an open standard, is Pauzet’s tip for quickly taking IoT concepts from prototype to mass deployment. Says Pauzet, “The fact that so much of the integration, testing, and validation work is already done, they no longer have to invest big money when the time comes to expand on a global scale.”
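
As a small illustration of how an open messaging standard lets devices from different vendors interoperate, the sketch below publishes and consumes a JSON sensor reading over MQTT. The broker address, topic, and payload fields are assumptions for the example, and it uses the paho-mqtt 1.x client API (version 2.x adds a required CallbackAPIVersion argument to mqtt.Client()).

```python
# Illustrative only: any vendor's device can join in, as long as it speaks MQTT + JSON.
import json
import paho.mqtt.client as mqtt  # pip install "paho-mqtt<2"

BROKER = "localhost"                       # hypothetical broker address
TOPIC = "sensors/warehouse/temperature"    # hypothetical topic

def on_message(client, userdata, message):
    reading = json.loads(message.payload)
    print(f"{reading['device_id']}: {reading['celsius']} °C")

subscriber = mqtt.Client()
subscriber.on_message = on_message
subscriber.connect(BROKER)
subscriber.subscribe(TOPIC)
subscriber.loop_start()                    # handle incoming messages in the background

publisher = mqtt.Client()                  # could just as well be a different vendor's device
publisher.connect(BROKER)
publisher.publish(TOPIC, json.dumps({"device_id": "thermo-42", "celsius": 21.5}))
```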

Open Source Support

Farnell element14 is calling its recently announced mangOH Green Open Hardware IoT Platform the first “all-in-one Hardware, Software and Cloud-based solution for Industrial IoT applications.” Because it allows developers rapid testing and prototyping of ideas, IoT solutions can purportedly be taken to market within weeks, and the platform is compatible with other open source initiatives including Linear Technology Dust Networks and Texas Instruments ZigBee, NXP thread module, Wi-Fi and Bluetooth module.

The range of developers making use of and creating open source tools and solutions is extensive; Postscapes Internet of Things Awards 2015/16 takes a look at some of the best IoT open source projects. Projects nominated include platforms for building IoT gateways as well as interaction with everyday objects, CNC farming machinery, tools for the generation of HTML and mobile apps for IoT, and more. Postscapes believes the open source movement is “championing openness, transparency, and the power of collaborative development”; the range of quality open source IoT projects is the proof.

By Jennifer Klostermann

University of Wisconsin – Bridging Together VR and Big Data in Future Courses

Virtual Reality Tools Could Bring Big Data to Life for University of Wisconsin Data Science Students

MADISON, WI –(Marketwired – May 16, 2016) – Soon, students in University of Wisconsin data science courses might be putting on virtual reality headsets and bracing themselves for a spectacular ride through mountains of data. The courses are part of the online Master of Data Science degree program offered by UW-Extension in collaboration with six UW campuses.

Ryan Martinez, an instructional designer for UW-Extension, sees both a need and an opportunity to make Big Data come to life in a way that can dramatically change people’s behavior. His proposal to use Oculus virtual reality technology in the online UW Master of Data Science curriculum earned Martinez a highly coveted spot at the Oculus Launch Pad Boot Camp at Facebook headquarters in Menlo Park, California, on May 21.

Oculus VR will provide feedback and mentorship to Martinez and 100 other pioneers joining him at the Oculus Launch Pad Boot Camp. Following the forum, Oculus will determine which concepts will earn scholarships ranging from $5,000 – $50,000, distributed with the objective of helping leaders realize their ideas.

“If data scientists could more quickly analyze data that’s already available, they could make faster decisions, act sooner, and potentially even save lives,” says Martinez. “All someone needs is a phone and a VR headset. What we can do to bring the data from virtual reality into real life — it’s incredible.”

“We’re proud to continue to lead the way with new technologies and practices in higher education,” says David Schejbal, dean, UW-Extension’s Continuing Education Division. “Ryan’s concept may potentially lead to innovative tools we could offer our data science students, and is an interesting option they probably haven’t even considered.”

About the University of Wisconsin-Extension Continuing Education Division

The University of Wisconsin Master of Data Science joins a growing list of online degree and certificate programs offered in collaboration with UW-Extension and UW System campus partners, including the existing bachelor’s degree in Health and Wellness Management; bachelor’s, master’s, and certificate programs in Sustainable Management; a bachelor’s degree in Health Information Management and Technology; bachelor of science in Nursing (RN to BSN) programs; and nine additional degree and certificate programs offered in the self-paced, competency-based UW Flexible Option format.
