Five Things Organizations Can Do To Protect Against Network Failures

It is no surprise that whenever there is an outage in a public or private cloud, organizations lose business, face the wrath of angry customers and take a hit on their brands. The effects of an outage can be quite damaging; a 10-hour network outage at hosting company Peak Web ultimately led to its bankruptcy.

Causes of Outages

Any enterprise is vulnerable to a crippling outage like the recent major AWS outage, for two primary reasons: increasing complexity and an accelerating rate of change. These factors put too much stress on human administrators, who have no way of ensuring that their everyday actions do not cause unintended outages.

Five Possible Solutions

Advances in computer science and predictive algorithms, together with the availability of massive compute capacity at a reasonable price point, have enabled new approaches to guaranteeing system resilience, uptime, availability and disaster recovery. Data-center administrators should take advantage of these newer techniques whenever possible.

  1. Architectural Approach: This is the most fundamental choice in data-center architecture. A robust, available, resilient data center can be built with two seemingly different architectures.

Telcos and carriers achieve a robust architecture by ensuring reliability in every component. Every network device in this type of architecture is compliant with very stringent Network Equipment Building System (NEBS) standards. NEBS-compliant devices are capable of withstanding extreme environmental conditions. The standard requires testing for fire resistance, seismic stability, electromagnetic shielding, humidity, noise and more.

Cloud providers take a completely different approach to reliability. They build many smaller systems using inexpensive components that fail more often than the NEBS-compliant systems used by telcos. These systems are then grouped into “atomic units” of small failure domains – typically one data-center rack. This approach gives a smaller “blast radius” when things go wrong. Hundreds of thousands of such atomic units are deployed in large data centers, an approach that enables massive scale-out.
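To make the trade-off concrete, here is a minimal, purely illustrative Python sketch comparing how much capacity is lost when a single failure domain goes down in a monolithic deployment versus a scale-out deployment of many atomic units. The unit counts are assumptions chosen only for the example.

```python
# Illustrative comparison of failure "blast radius": one large system
# versus many small failure domains (e.g., one rack per atomic unit).
# All figures below are assumed for the sake of the example.

def capacity_lost(total_units: int, failed_units: int) -> float:
    """Fraction of overall capacity lost when some units fail."""
    return failed_units / total_units

# A monolithic deployment: a single failure takes out everything.
print(capacity_lost(total_units=1, failed_units=1))        # 1.0 -> 100% outage

# A scale-out deployment of 100,000 atomic units (racks):
# losing an entire rack removes only a tiny slice of capacity.
print(capacity_lost(total_units=100_000, failed_units=1))  # 0.00001 -> 0.001% outage
```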

  2. Active Fault Injection: Many cloud providers deploy this technique. The philosophy is that “the best defense is to fail often.” A team of people is chartered to actively inject faults into the system every single day and to create negative scenarios by forcing ungraceful system shutdowns, physically unplugging network connectivity, shutting down power in the data center or even simulating application-level attacks. This approach forces the DevOps team to fine-tune their software and processes. The Chaos Monkey tool from Netflix, which terminates application VMs at random, is an example of this approach.
  3. Formal Verification: Formal verification methods, by definition, ensure integrity, safety and security of the end-to-end system. Such methods have been used in aerospace, airline and semiconductor systems for decades. With advances in computing, it is now possible to bring formal verification to the networking layer of IT infrastructure, using it to build a mathematical model of the entire network.

Formal verification can be used to perform an exhaustive mathematical analysis of the entire network’s state against a set of user intentions in real time, without emulation and without requiring a replica of the network. Users can evaluate a broad range of factors, such as network-wide reachability, quality issues, loops, configuration inconsistencies and more. Mathematical modeling can allow “what-if” scenario analyses of proposed changes; such modeling would have prevented a 2011 Amazon outage caused by a router configuration error.
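As a rough, purely illustrative sketch of the idea (not a description of any particular verification product), the Python below models a hypothetical topology as a graph and exhaustively checks a reachability intent against it. A production verifier reasons over forwarding rules and packet headers rather than raw links; all device names and the deliberately missing link here are invented.

```python
# A toy sketch of intent checking on a network model: build a graph of
# devices and links, then verify a reachability intent exhaustively.
# The topology, device names, and intent below are hypothetical.
from collections import deque

links = {                       # adjacency list of the modeled network
    "edge-1": ["core-1"],
    "core-1": ["edge-1", "core-2"],
    "core-2": ["core-1"],       # note: link core-2 -> dmz-fw is missing
    "dmz-fw": ["web-1"],
    "web-1":  ["dmz-fw"],
}

def reachable(src: str, dst: str) -> bool:
    """Breadth-first search over the modeled topology."""
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in links.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Intent: every edge device must be able to reach the web tier.
intents = [("edge-1", "web-1")]
for src, dst in intents:
    status = "OK" if reachable(src, dst) else "VIOLATION"
    print(f"{src} -> {dst}: {status}")  # prints VIOLATION: the missing link is caught
```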

  4. Continuous Testing: This approach is an extension of the continuous integration (CI) and continuous delivery (CD) practices commonly employed with cloud applications. Developers of cloud-native applications (e.g., Facebook, Amazon Shopping, Netflix) typically make hundreds of tiny improvements to their software in a single day using CI/CD. End users rarely notice these tiny changes, but over a longer period of time they add up to significant improvement.

Similarly, continuous testing tools make it possible to test and verify every tiny change to the network configuration. This is a drastic departure from the traditional approach of making a large number of changes during a service window, which can be risky and disruptive.
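A minimal sketch of what such a gate might look like follows, assuming a hypothetical configuration-change format and two invented validation rules; in a real pipeline the check would run automatically on every proposed change before it is applied.

```python
# A minimal sketch of a continuous-testing gate: every proposed change to a
# (hypothetical) device configuration is validated before it is applied.
# The change format and the rules here are invented for illustration.

proposed_change = {
    "device": "core-1",
    "interface": "Ethernet1",
    "mtu": 9000,
    "vlan": 4097,          # invalid on purpose: VLAN IDs stop at 4094
}

def validate(change: dict) -> list[str]:
    """Return a list of rule violations for one proposed change."""
    errors = []
    if not 64 <= change["mtu"] <= 9216:
        errors.append(f"MTU {change['mtu']} out of range")
    if not 1 <= change["vlan"] <= 4094:
        errors.append(f"VLAN {change['vlan']} is not a valid VLAN ID")
    return errors

violations = validate(proposed_change)
if violations:
    # In a CI/CD pipeline this would fail the build and block the change.
    raise SystemExit("Change rejected: " + "; ".join(violations))
print("Change passed validation; safe to schedule deployment.")
```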

  5. Automation: In a 2016 survey of 315 network professionals conducted by Dimensional Research, 97 percent indicated that human error leads to outages, and 45 percent said those outages are frequent. This problem can be mitigated by automating configuration and troubleshooting as much as possible. However, automation is a double-edged sword, because the automation itself is a software program: if there is an error in the automation code, problems are replicated quickly and across a much broader “blast zone,” as happened in Amazon’s February 2017 outage, in which an error in an automation script caused it to take down more servers than intended. Even automation tools need some human input – commands, parameters or higher-level configuration – and any human error will be magnified by automation.
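One mitigation is to build guardrails into the automation itself. The sketch below is a hypothetical example of a “blast radius” limit that refuses to apply a change to more devices than an operator-chosen threshold; the device names and the limit are assumptions made for illustration, not part of any particular tool.

```python
# A sketch of a "blast radius" guard for an automation script: refuse to
# proceed if a single run would touch more devices than a configured limit.
# Device names and the limit are assumptions for illustration.

MAX_DEVICES_PER_RUN = 5          # safety limit chosen by the operator

def apply_change(devices: list[str], change: str) -> None:
    if len(devices) > MAX_DEVICES_PER_RUN:
        raise RuntimeError(
            f"Refusing to touch {len(devices)} devices "
            f"(limit is {MAX_DEVICES_PER_RUN}); split the change into batches."
        )
    for device in devices:
        # A real tool would push the change here (e.g., via SSH or an API).
        print(f"Applying '{change}' to {device}")

# A typo or a bad inventory query could select far more devices than intended;
# the guard turns that mistake into a loud failure instead of a wide outage.
apply_change([f"leaf-{i}" for i in range(1, 4)], "update NTP servers")
```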

By Milind Kulkarni

Milind Kulkarni is the vice president of product management at Veriflow. Prior to joining Veriflow, Milind shaped networking and server products and go-to-market strategy for Oracle Cloud and Engineered Systems. Prior to Oracle, Milind held product management, product marketing, business development, and engineering roles at Brocade, Cisco, and Center for Development of Advanced Computing (C-DAC).