Microsoft has released an updated version of Microsoft Cognitive Toolkit, a system for deep learning that is used to speed advances in areas such as speech and image recognition and search relevance on CPUs and NVIDIA® GPUs.
The toolkit, previously known as CNTK, was initially developed by computer scientists at Microsoft who wanted a tool to do their own research more quickly and effectively. It quickly moved beyond speech and morphed into an offering that customers including a leading international appliance maker and Microsoft’s flagship product groups depend on for a wide variety of deep learning tasks.
“We’ve taken it from a research tool to something that works in a production setting,” said Frank Seide, a principal researcher at Microsoft Artificial Intelligence and Research and a key architect of Microsoft Cognitive Toolkit.
The latest version of the toolkit, which is available on GitHub via an open source license, includes new functionality that lets developers use Python or C++ programming languages in working with the toolkit. With the new version, researchers also can do a type of artificial intelligence work called reinforcement learning.
Finally, the toolkit delivers better performance than previous versions. It’s also faster than other toolkits, especially when working on big datasets across multiple machines. That kind of large-scale deployment is necessary for the multi-GPU deep learning needed to develop consumer products and professional offerings…
Blockchain, also known as Distributed Ledger Technology (DLT), is the innovative technology behind Bitcoin. The impact of Bitcoin has been tremendous, and, as with any revolutionary technology, it was initially met with awe and apprehension. Since its open source release back in 2009, Bitcoin has become a transformative force in the global payments system, establishing itself without the aid or support of the traditional financial infrastructure. While initial usage saw huge success in black markets, Bitcoin defied the odds, and the blockchain technology spawned other cryptocurrencies, exchanges, commercial ventures, alliances, consortiums, investments, and uptake by governments, merchants, and financial services worldwide.
On August 12, the World Economic Forum (WEF) published a report on the future of the financial infrastructure, and in particular on the transformative role that blockchain technology is set to play. Notably, it analyzes the technology’s impact on the financial services industry and how it can provide more transparency and security. Potential use cases are examined, including insurance, deposits and lending, capital raising, investment management, and market provisioning. The report also looks at the current challenges to widespread implementation of blockchain, many of which will require international legal frameworks, harmonized regulatory environments, and global standardization efforts.
DLT is already having a serious impact on the financial services industry. The WEF report states that 80% of banks will initiate a DLT project by next year, and more than $1.4 billion has already been invested in the technology in the past three years. More than that, governments and law firms are seriously staking their claim in advancing the technology. Law firm Steptoe & Johnson LLP recently announced the expansion of its Blockchain Team into a multidisciplinary practice involving FinTech, financial services, regulatory, and law enforcement knowledge. The firm is also one of the founders of the Blockchain Alliance, a coalition of blockchain companies and law enforcement and regulatory agencies, alongside the U.S. Chamber of Digital Commerce and Coin Center. This expansion is an endorsement of the potential of DLT, within and eventually beyond financial services.
The possible applications of blockchain are already being explored in numerous new sectors: energy, transportation, intellectual property, regulation and compliance, international trade, law enforcement, and government affairs, among many others. Ethereum is one blockchain endeavor that features smart contract functionality. The distributed computing platform provides a decentralized virtual machine to execute peer-to-peer contracts using the Ether cryptocurrency. The Ether Hack Camp is launching a four-week hackathon in November 2016 for DLT using Ether. Currently, the Camp is asking developers to propose ideas to the public; registered fans will vote on them, and those selected will be able to take part in the hackathon. The ideas can already be seen online and are vast and varied, ranging from academic publishing without journals and music licensing reform to decentralized ISPs, voting on the blockchain, alternative dispute resolution, and rural land registries. The idea winning first place in November will collect $50,000 USD.
IBM is one of the most dynamic forerunners currently pushing DLT for the IoT. The firm just announced it is investing $200 million in blockchain technology to drive forward its Watson IoT efforts. The firm is opening a new office in Germany, which will serve as the headquarters for new blockchain initiatives. The investment is part of the $3 billion that IBM pledged to develop Watson’s cognitive computing for the IoT. The goal of the new investment is to enable companies to share IoT data in a private blockchain. A commercial implementation is already underway with Finnish company Kouvola Innovation, which wants to integrate its capabilities into the IBM Blockchain and link devices to track, monitor, and report on shipping container status and location, optimize packing, and manage the transfer of shipments.
IBM is working hard to align its IoT, AI and blockchain technologies through Watson. The new headquarters in Germany will be home to Cognitive IoT Collaboratories for researchers, developers and engineers.
Many of IBM’s current projects are built on Fabric, the open source codebase of the Hyperledger Project, a consortium founded by the Linux Foundation in which IBM is a significant contributor alongside Cisco and Intel. IBM pushed its involvement even further with the June launch of its New York-based Bluemix Garage. The idea is to give developers and researchers the opportunity to use IBM Cloud APIs and blockchain technologies to drive cognitive, IoT, unstructured data, and social media technology innovation. Just one month after the launch, IBM announced a cloud service for companies running blockchain technology. The cloud service is underpinned by IBM’s LinuxONE technology, which is specifically designed to meet the security requirements of critical sectors, such as financial, healthcare, and government.
The potential for DLT is certainly broad and rather long-term, but the engagement by the financial services industry is a testament to that potential. While FinTech remains the big focus for blockchain technologies, its success will drive the use of DLT in other areas. The promise of blockchain is to deliver accountability and transparency, although that promise could be significantly undermined if announcements such as Accenture’s ‘editable’ blockchain become a reality. While banks may welcome the feature, it would be a serious blow not only to the integrity, but also to the security of blockchain technology.
Prefatory Note: Over the past six weeks, we took NVIDIA’s developer conference on a world tour. The GPU Technology Conference (GTC) was started in 2009 to foster a new approach to high performance computing using massively parallel processing GPUs. GTC has become the epicenter of GPU deep learning — the new computing model that sparked the big bang of modern AI. It’s no secret that AI is spreading like wildfire. The number of GPU deep learning developers has leapt 25 times in just two years. Some 1,500 AI startups have cropped up. This explosive growth has fueled demand for GTCs all over the world. So far, we’ve held events in Beijing, Taipei, Amsterdam, Tokyo, Seoul and Melbourne. Washington is set for this week and Mumbai next month. I kicked off four of the GTCs. Here’s a summary of what I talked about, what I learned and what I see in the near future as AI, the next wave in computing, revolutionizes one industry after another.
A New Era of Computing
Intelligent machines powered by AI computers that can learn, reason and interact with people are no longer science fiction. Today, a self-driving car powered by AI can meander through a country road at night and find its way. An AI-powered robot can learn motor skills through trial and error. This is truly an extraordinary time. In my three decades in the computer industry, none has held more potential, or been more fun. The era of AI has begun.
Our industry drives large-scale industrial and societal change. As computing evolves, new companies form, new products are built, our lives change. Looking back at the past couple of waves of computing, each was underpinned by a revolutionary computing model, a new architecture that expanded both the capabilities and reach of computing.
In 1995, the PC-Internet era was sparked by the convergence of low-cost microprocessors (CPUs), a standard operating system (Windows 95), and a new portal to a world of information (Yahoo!). The PC-Internet era brought the power of computing to about a billion people and realized Microsoft’s vision to put “a computer on every desk and in every home.” A decade later, the iPhone put “an Internet communications” device in our pockets. Coupled with the launch of Amazon’s AWS, the Mobile-Cloud era was born. A world of apps entered our daily lives and some 3 billion people enjoyed the freedom that mobile computing afforded.
Today, we stand at the beginning of the next era, the AI computing era, ignited by a new computing model, GPU deep learning. This new model — where deep neural networks are trained to recognize patterns from massive amounts of data — has proven to be “unreasonably” effective at solving some of the most complex problems in computer science. In this era, software writes itself and machines learn. Soon, hundreds of billions of devices will be infused with intelligence. AI will revolutionize every industry.
GPU Deep Learning “Big Bang”
Why now? As I wrote in an earlier post (“Accelerating AI with GPUs: A New Computing Model”), 2012 was a landmark year for AI. Alex Krizhevsky of the University of Toronto created a deep neural network that automatically learned to recognize images from 1 million examples. With just several days of training on two NVIDIA GTX 580 GPUs, “AlexNet” won that year’s ImageNet competition, beating all of the expert-engineered algorithms that had been honed for decades. That same year, recognizing that the larger the network, or the bigger the brain, the more it can learn, Stanford’s Andrew Ng and NVIDIA Research teamed up to develop a method for training networks using large-scale GPU-computing systems.
The world took notice. AI researchers everywhere turned to GPU deep learning. Baidu, Google, Facebook and Microsoft were the first companies to adopt it for pattern recognition. By 2015, they started to achieve “superhuman” results — a computer can now recognize images better than we can. In the area of speech recognition, Microsoft Research used GPU deep learning to achieve a historic milestone by reaching “human parity” in conversational speech.
Image recognition and speech recognition — GPU deep learning has provided the foundation for machines to learn, perceive, reason and solve problems. The GPU started out as the engine for simulating human imagination, conjuring up the amazing virtual worlds of video games and Hollywood films. Now, NVIDIA’s GPU runs deep learning algorithms, simulating human intelligence, and acts as the brain of computers, robots and self-driving cars that can perceive and understand the world. Just as human imagination and intelligence are linked, computer graphics and artificial intelligence come together in our architecture. Two modes of the human brain, two modes of the GPU. This may explain why NVIDIA GPUs are used broadly for deep learning, and NVIDIA is increasingly known as “the AI computing company.”
An End-to-End Platform for a New Computing Model
As a new computing model, GPU deep learning is changing how software is developed and how it runs. In the past, software engineers crafted programs and meticulously coded algorithms. Now, algorithms learn from tons of real-world examples — software writes itself. Programming is about coding instruction. Deep learning is about creating and training neural networks. The network can then be deployed in a data center to infer, predict and classify from new data presented to it. Networks can also be deployed into intelligent devices like cameras, cars and robots to understand the world. With new experiences, new data is collected to further train and refine the neural network. Learnings from billions of devices make all the devices on the network more intelligent. Neural networks will reap the benefits of both the exponential advance of GPU processing and large network effects — that is, they will get smarter at a pace way faster than Moore’s Law.
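The contrast between coding instructions and training from examples can be shown in a few lines of plain Python. This is a framework-free toy, not any particular toolkit’s API: a single-weight “network” learns the rule y = 2x from examples via gradient descent, instead of having that rule coded by hand.

```python
# Instead of hand-coding the rule y = 2x, we let a one-parameter
# "network" learn it from data -- software writing itself, in miniature.

examples = [(x, 2.0 * x) for x in range(1, 6)]  # training data: (input, target)

w = 0.0    # the single learnable weight, starting with no knowledge
lr = 0.01  # learning rate

for epoch in range(200):
    for x, target in examples:
        pred = w * x                    # forward pass
        grad = 2 * (pred - target) * x  # gradient of squared error w.r.t. w
        w -= lr * grad                  # gradient-descent update

print(round(w, 2))  # → 2.0, the rule recovered purely from examples
```

Once trained, the same learned parameter can be "deployed" to predict on new inputs it never saw, which is exactly the train-then-infer split the paragraph above describes.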
Whereas the old computing model is “instruction processing” intensive, this new computing model requires massive “data processing.” To advance every aspect of AI, we’re building an end-to-end AI computing platform — one architecture that spans training, inference and the billions of intelligent devices that are coming our way.
Let’s start with training. Our new Pascal GPU is a $2 billion investment and the work of several thousand engineers over three years. It is the first GPU optimized for deep learning. Compared with the Kepler GPU that Alex Krizhevsky used in his paper, Pascal can train networks that are 65 times larger, or train them 65 times faster. A single computer of eight Pascal GPUs connected by NVIDIA NVLink, the highest-throughput interconnect ever created, can train a network faster than 250 traditional servers.
Soon, the tens of billions of internet queries made each day will require AI, which means that each query will require billions more math operations. The total load on cloud services will be enormous to ensure real-time responsiveness. For faster data center inference performance, we announced the Tesla P40 and P4 GPUs. P40 accelerates data center inference throughput by 40 times. P4 requires only 50 watts and is designed to accelerate 1U OCP servers, typical of hyperscale data centers. Software is a vital part of NVIDIA’s deep learning platform. For training, we have CUDA and cuDNN. For inferencing, we announced TensorRT, an optimizing inferencing engine. TensorRT improves performance without compromising accuracy by fusing operations within a layer and across layers, pruning low-contribution weights, reducing precision to FP16 or INT8, and many other techniques.
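One of the TensorRT techniques named above, reducing precision to INT8, can be illustrated with a toy symmetric quantization in plain Python. This is only a sketch of the general idea; TensorRT’s actual INT8 calibration is considerably more sophisticated, and the weight values here are invented for illustration.

```python
# Map real-valued weights to 8-bit integers via a per-tensor scale,
# trading a small, bounded rounding error for much cheaper arithmetic.

weights = [0.91, -0.42, 0.07, -1.30, 0.55]

scale = max(abs(w) for w in weights) / 127.0      # largest weight maps to +/-127

quantized = [round(w / scale) for w in weights]   # what is stored: 8-bit ints
dequantized = [q * scale for q in quantized]      # what inference effectively sees

max_err = max(abs(w - d) for w, d in zip(weights, dequantized))
print(quantized)        # → [89, -41, 7, -127, 54]
print(max_err < scale)  # → True: error is bounded by the quantization step
```

The key property is that the error never exceeds half a quantization step per weight, which is why well-calibrated INT8 inference loses little accuracy while roughly quartering memory traffic versus FP32.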
Someday, billions of intelligent devices will take advantage of deep learning to perform seemingly intelligent tasks. Drones will autonomously navigate through a warehouse, find an item and pick it up. Portable medical instruments will use AI to diagnose blood samples onsite. Intelligent cameras will learn to alert us only to the circumstances that we care about. We created an energy-efficient AI supercomputer, Jetson TX1, for such intelligent IoT devices. A credit card-sized module, Jetson TX1 can reach 1 TeraFLOP FP16 performance using just 10 watts. It’s the same architecture as our most powerful GPUs and can run all the same software.
In short, we offer an end-to-end AI computing platform — from GPU to deep learning software and algorithms, from training systems to in-car AI computers, from cloud to data center to PC to robots. NVIDIA’s AI computing platform is everywhere.
AI Computing for Every Industry
Our end-to-end platform is the first step to ensuring that every industry can tap into AI. The global ecosystem for NVIDIA GPU deep learning has scaled out rapidly. Breakthrough results triggered a race to adopt AI for consumer internet services — search, recognition, recommendations, translation and more. Cloud service providers, from Alibaba and Amazon to IBM and Microsoft, make the NVIDIA GPU deep learning platform available to companies large and small. The world’s largest enterprise technology companies have configured servers based on NVIDIA GPUs. We were pleased to highlight strategic announcements along our GTC tour to address major industries:
AI Transportation: At $10 trillion, transportation is a massive industry that AI can transform. Autonomous vehicles can reduce accidents, improve the productivity of trucking and taxi services, and enable new mobility services. We announced that both Baidu and TomTom selected NVIDIA DRIVE PX 2 for self-driving cars. With each, we’re building an open “cloud-to-car” platform that includes an HD map, AI algorithms and an AI supercomputer.
Driving is a learned behavior that we perform as second nature, yet one that is impossible to program a computer to perform directly. Autonomous driving requires every aspect of AI — perception of the surroundings, reasoning to determine the conditions of the environment, planning the best course of action, and continuously learning to improve our understanding of the vast and diverse world. The wide spectrum of autonomous driving requires an open, scalable architecture — from highway hands-free cruising, to autonomous drive-to-destination, to fully autonomous shuttles with no drivers.
NVIDIA DRIVE PX 2 is a scalable architecture that can span the entire range of AI for autonomous driving. At GTC, we announced DRIVE PX 2 AutoCruise, designed for highway autonomous driving with continuous localization and mapping. We also released DriveWorks Alpha 1, our OS for self-driving cars that covers every aspect of autonomous driving — detection, localization, planning and action.
We bring all of our capabilities together into our own self-driving car, NVIDIA BB8. Here’s a little video:
NVIDIA is focused on innovation at the intersection of visual processing, AI and high performance computing — a unique combination at the heart of intelligent and autonomous machines. For the first time, we have AI algorithms that will make self-driving cars and autonomous robots possible. But they require a real-time, cost-effective computing platform.
At GTC, we introduced Xavier, the most ambitious single-chip computer we have ever undertaken — the world’s first AI supercomputer chip. Xavier is 7 billion transistors — more complex than the most advanced server-class CPU. Miraculously, Xavier has the equivalent horsepower of DRIVE PX 2 launched at CES earlier this year — 20 trillion operations per second of deep learning performance — at just 20 watts. As Forbes noted, we doubled down on self-driving cars with Xavier.
AI Enterprise: IBM, which sees a $2 trillion opportunity in cognitive computing, announced a new POWER8 and NVIDIA Tesla P100 server designed to bring AI to the enterprise. On the software side, SAP announced that it has received two of the first NVIDIA DGX-1 supercomputers and is actively building machine learning enterprise solutions for its 320,000 customers in 190 countries.
AI City: There will be 1 billion cameras in the world in 2020. Hikvision, the world leader in surveillance systems, is using AI to help make our cities safer. It uses DGX-1 for network training and has built a breakthrough server, called “Blade,” based on 16 Jetson TX1 processors. Blade requires 1/20 the space and 1/10 the power of the 21 CPU-based servers of equivalent performance.
AI Factory: There are 2 billion industrial robots worldwide. Japan is the epicenter of robotics innovation. At GTC, we announced that FANUC, the Japan-based industrial robotics giant, will build the factory of the future on the NVIDIA AI platform, from end to end. Its deep neural network will be trained with NVIDIA GPUs, GPU-powered FANUC Fog units will drive a group of robots and allow them to learn together, and each robot will have an embedded GPU to perform real-time AI. MIT Tech Review wrote about it in its story “Japanese Robotics Giant Gives Its Arms Some Brains.”
The Next Phase of Every Industry: GPU deep learning is inspiring a new wave of startups — 1,500+ around the world — in healthcare, fintech, automotive, consumer web applications and more. Drive.ai, which was recently licensed to test its vehicles on California roads, is tackling the challenge of self-driving cars by applying deep learning to the full driving stack. Preferred Networks, the Japan-based developer of the Chainer framework, is developing deep learning solutions for IoT. Benevolent.ai, based in London and one of the first recipients of DGX-1, is using deep learning for drug discovery to tackle diseases like Parkinson’s, Alzheimer’s and rare cancers. According to CB Insights, funding for AI startups hit over $1 billion in the second quarter, an all-time high.
The explosion of startups is yet another indicator of AI’s sweep across industries. As Fortune recently wrote, deep learning will “transform corporate America.”
AI for Everyone
AI can solve problems that seemed well beyond our reach just a few years back. From real-world data, computers can learn to recognize patterns too complex, too massive or too subtle for hand-crafted software or even humans. With GPU deep learning, this computing model is now practical and can be applied to solve challenges in the world’s largest industries. Self-driving cars will transform the $10 trillion transportation industry. In healthcare, doctors will use AI to detect disease at the earliest possible moment, to understand the human genome to tackle cancer, or to learn from the massive volume of medical data and research to recommend the best treatments. And AI will usher in the 4th industrial revolution — after steam, mass production and automation — intelligent robotics will drive a new wave of productivity improvements and enable mass consumer customization. AI will touch everyone. The era of AI is here.
“LeEco officially launched its disruptive ecosystem model in the U.S., which breaks boundaries between screens to seamlessly deliver content and services on a wide array of connected smart devices – including smartphones, TVs, smart bikes, virtual reality and electric self-driving vehicles”
It is really tough to overstate the vast scale and ambition of LeEco. The basic concept is quite simple: they want to create an ecosystem of content and devices that work together seamlessly, connecting you with all of your devices. Today they unveiled a brand new smartphone, smart TV, VR headset, Android-powered smart bike, and autonomous electric car (all in a day’s work).
“No other company in the world can do this. Not Apple, not Samsung, Amazon, Google, or Tesla,” boasted Danny Bowman, Chief Revenue Officer.
Every month, LeEco’s online video streaming service, le.com, garners 730 million unique visitors (more than double the population of the USA), and they are here to take on the US market. This has been touted as a rivalry with Netflix, but really it is a challenge to Western tech giants like Apple, Google, and Amazon. With the User Planning to User (UP2U™) program, LeEco promises an integrated cross-platform system, built by and for the users – “With UP2U, you are LeEco”.
LeEco are focused on the idea that the user is the foundation of everything. They pioneer a user-first philosophy aimed at a seamless experience that unites all devices, driven by their vertically integrated EUI, which incorporates user, hardware, software and content, breaking down barriers between devices and operating systems for a truly integrated experience.
LeEco’s Ecosystem User Interface (EUI) aims to unify their ecosystem with two core principles: breaking device boundaries and putting content at the heart of the experience. In the real world this means you can cast content from your phone to your car with a simple swipe or receive notifications from your Smart Bike to your TV. EUI allows you to move your experience from one device to another; your content will always be available at a touch of a button, regardless of which device you are using.
The Ecosystem incorporates these devices along with Le Cloud (the cloud-based backbone powering LeEco’s multiple screens, smart devices and content), Le Vision Pictures and Le Vision Entertainment (one of China’s three largest film studios, currently producing The Great Wall starring Matt Damon), Le Music (LeEco’s online live-streaming music platform and production company), Le Sports (China’s leading Internet-based eco-sports company) and Le TV (the television arm of LeEco).
At the heart of LeEco, and all this incredible integration, is what has driven the company from the very beginning: content. For content in the U.S., LeEco has partnered with top providers including Lionsgate, MGM, Showtime, Vice Media, Awesomeness TV, and A+E, with more being added continually. Combined with the power of content creation via Le Vision Entertainment, I have no doubt that they will soon come to rival Netflix as one of the best streaming services in the world.
LeEco has the potential to truly revolutionise integrated tech and smart homes, taking on giants like Apple and Google in creating integrated and intuitive content and services across a wide range of devices. This is different, though: no other company provides such a wide range of cross-platform and device integration. With a competitive price structure (a 43-inch Eco Smart TV with a 3-month free EcoPass membership costs $649) aimed at offering their service to a mass audience, they will force Apple, Google and others to compete not only technologically, but on value for money as well. Time will tell whether LeEco will have the same success in the US that they enjoyed in China, but I would back them all the way.
Over the past few years, cloud computing has been evolving at a rapid rate and is becoming the norm in today’s software solutions. Forrester believes that cloud computing will be a $191 billion market by 2020. According to the 2016 State of Cloud Survey conducted by RightScale, 96% of its respondents are using the cloud, with more enterprise workloads shifting towards public and private clouds. Adoption of both hybrid cloud and DevOps has risen as well.
The AI-Cloud Landscape
So where could the cloud computing market be headed next? Could the next wave of cloud computing involve artificial intelligence? It certainly appears that way. In a market that is primarily dominated by four major companies – Google, Microsoft, Amazon, and IBM – AI could possibly disrupt the current dynamic.
In the past few years, there has been a surge of investment in AI capabilities in cloud platforms. The big four (Google, Microsoft, Amazon and IBM) are making huge strides in the AI world. Microsoft currently offers more than twenty cognitive services, covering tasks such as language comprehension and image analysis. Last year, Amazon’s cloud division added an AI service that lets people add analytical and predictive capabilities to their applications.
The current AI-cloud landscape can essentially be categorized into two groups: AI cloud services and cloud machine learning platforms.
AI cloud services, the first group, expose pre-built capabilities, such as the cognitive APIs described above, that applications can call directly. Cloud machine learning platforms, on the other side, support building custom models. Machine learning is a method of data analysis that automates analytical model building, enabling computers to find patterns and areas of importance automatically. Azure Machine Learning and AWS Machine Learning are examples of cloud machine learning platforms.
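What “automated analytical model building” means can be seen in miniature with ordinary least squares: from raw (x, y) observations alone, the code below derives a predictive model with no hand-written rules. The data points are invented for illustration; cloud machine learning platforms automate this kind of fitting at far larger scale and with far richer models.

```python
# Fit a straight-line model y = slope * x + intercept to raw observations
# using the closed-form ordinary-least-squares formulas.

data = [(1, 2.1), (2, 4.0), (3, 6.2), (4, 7.9), (5, 10.1)]

n = len(data)
sx = sum(x for x, _ in data)
sy = sum(y for _, y in data)
sxx = sum(x * x for x, _ in data)
sxy = sum(x * y for x, y in data)

slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

def predict(x):
    return slope * x + intercept

print(round(slope, 2), round(intercept, 2))  # → 1.99 0.09
```

The "model" here is just two numbers derived from the data, but the workflow — feed in observations, get back a predictor — is the same one that cloud platforms wrap in managed infrastructure.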
IBM and Google Making Waves
Recently IBM and Google have been making news in the AI realm, reflecting a shift within the tech industry towards deep learning. Just last month, IBM unveiled Project DataWorks, reportedly an industry first: a cloud-based data and analytics platform that can integrate different types of data and enable AI-powered decision making. The platform provides an environment for collaboration between business users and data professionals. Using technologies like Pixiedust and Brunel, users can create data visualizations with minimal coding, allowing everyone in the business to gain insights at a glance.
Earlier this month at an event in San Francisco, Google unveiled a family of cloud computing services which would allow any developer or business to use machine learning technologies that fuel some of Google’s most powerful services. This move is an attempt by Google to get a bigger foothold in the cloud computing market.
According to Sundar Pichai, chief executive of Google, computing is evolving from a mobile-first to an AI-first world. So what would a next-generation AI-first cloud look like? Simply put, it would be one built around AI capabilities. In the upcoming years, we could see AI playing a key role in improving core cloud services such as computing and storage. The next wave of cloud computing platforms could also see integrations between AI and the existing catalog of cloud services, such as PaaS or SaaS.
It remains to be seen whether AI can disrupt the current cloud computing market, but it will definitely influence and inspire a new wave of cloud computing platforms.
“Every minute, we are seeing about half a million attack attempts that are happening in cyberspace.” – Derek Manky, Fortinet global security strategist
PricewaterhouseCoopers has predicted that cyber security will be one of the top risks facing financial institutions over the next five years. The firm points to a number of risk factors, such as the rapid growth of the Internet of Things, increased use of mobile technology, and cross-border data exchange, that will contribute to this ever-growing problem.
Gartner has estimated that by 2020, the number of connected devices will jump from around 6.4 billion to more than 20 billion. In other words, there will be between two and three connected devices for every human being on the planet. Derek Manky of Fortinet told CNBC that “The largest we’ve seen to date is about 15 million infected machines controlled by one network with an attack surface of 20 billion devices. Certainly that number can easily spike to 50 million or more”. So in a world where cyber security seems almost unattainable, is it still possible for you, or for large companies, to remain protected?
According to Cross Domain Solutions, “comprehensive security is possible by making all security data accessible and automating security procedures”, which allows threats to be dealt with in real time. They suggest an approach focused on data confidentiality, data integrity, and the authenticity of users and data placeholders. Although theoretically possible, this is unlikely to provide total cyber security in practical situations.
The expansion and widespread adoption of the Internet of Things (IoT) has become the most pressing cyber security issue of the last five years. Smartphones, smart watches, smart TVs and smart homes, amongst other devices, have exponentially increased the attack surface available to hackers. This, combined with the problems of perimeter security in cloud-based services, the sheer volume of data collected by IoT devices, and the lack of security on many modern IoT devices, means that complete cyber security (for businesses or individuals) will become increasingly difficult. In a move that shocked the world earlier this year, hackers made off with tens of millions of dollars from Bangladesh’s central bank by using malware to gain access to accounts. Cyber security is a very real issue for any business that has valuable information or assets stored digitally.
It has been suggested that we should focus on risk-reduction strategies built on formulas such as cyber risk = threats × vulnerabilities × consequences; because the factors multiply, reducing any one of them to zero eliminates the risk entirely. In practice, however, the Common Vulnerabilities and Exposures list has more than 50,000 recorded vulnerabilities (with more added every hour), so it is almost impossible to ensure your network can withstand an incessant wall of hackers trying to get in. James Lewis, a cybersecurity expert at the Washington DC-based Center for Strategic and International Studies (CSIS), commented recently that businesses need to stop worrying about preventing intruders from accessing their networks and should instead concentrate on minimising the damage intruders cause when they do gain access. According to the Cisco 2015 Annual Security Report, “Security is no longer a question of if a network will be compromised. Every network will, at some point, be compromised”.
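The multiplicative risk formula above is easy to make concrete. The sketch below uses hypothetical scores (threats and vulnerabilities on arbitrary 0-10 scales, consequences in dollars) to show why shrinking any single factor, including the post-intrusion damage that defenders are urged to focus on, reduces overall risk proportionally.

```python
# cyber_risk = threats * vulnerabilities * consequences
# Because the factors multiply, halving any one factor halves the product,
# and driving any factor to zero eliminates the risk entirely.

def cyber_risk(threats, vulnerabilities, consequences):
    return threats * vulnerabilities * consequences

baseline  = cyber_risk(threats=8, vulnerabilities=6, consequences=1_000_000)
hardened  = cyber_risk(threats=8, vulnerabilities=3, consequences=1_000_000)
contained = cyber_risk(threats=8, vulnerabilities=6, consequences=100_000)

print(baseline)   # → 48000000
print(hardened)   # → 24000000: halving vulnerabilities halves risk
print(contained)  # → 4800000: a tenfold cut in consequences cuts risk tenfold
print(cyber_risk(8, 0, 1_000_000))  # → 0: any factor at zero removes all risk
```

Since eliminating vulnerabilities outright is infeasible, the model also shows why containing consequences is an equally valid lever.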
Fortunately for the tech world, the same capabilities that make networks more vulnerable can help to strengthen defences as well. Financial institutions are able to utilise big data analytics to monitor for covert threats, helping them to identify evolving external and internal security risks and react much more quickly. Whilst total cyber security may not be practically possible, the technology exists for businesses to be as security conscious as they want to be. Both consumers and businesses should be treating cyber security as a top priority.
Last September, the website of a well-known security journalist was hit by a massive DDoS attack. The site’s host stated it was the largest attack of that type they had ever seen. Rather than originating at an identifiable location, the attack seemed to come from everywhere, driven by a botnet that included IoT-connected devices such as digital cameras. The attack was unprecedented in both scale and method, and a stark warning about the future of cyber warfare.
The attack was so large and relentless that the journalist’s site had to be taken down temporarily. The exercise of fending off the attack and then repairing and rebuilding was extremely expensive. Given that the target was a writer and expert on online security and cybercrime, the attack was not only highly destructive but also symbolic: a warning to security specialists everywhere that the war has changed.
“PC users have become a little more sophisticated with regard to security in recent years,” Sellards says. “They used to be the prime target when creating a botnet and launching DDoS attacks because they rarely patched their systems and browser configuration settings were lax by default. However, with automatic upgrades and an increased use of personal firewalls and security apps, PCs have become a little more of a challenge to penetrate. Attackers almost always take the path of least resistance.”
Consequently, IoT devices have become the new playground. They are the new generation of connected machines that use default passwords, hard-coded passwords, and inadequate patching. The rush to make everything IoT-compatible and affordable leaves little time or incentive for manufacturers to build in sophisticated security layers. In addition, there is an innocence factor at play. Who would ever suspect their digital camera, fitness tracker or smart thermostat of being an accomplice to cybercrime?
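The default-password problem is also the easiest one to audit for. Here is a minimal defensive sketch that flags devices in an inventory still using factory-default credentials; the device records and the credential list are hypothetical examples, not drawn from any real product database:

```python
# Known factory-default (user, password) pairs - illustrative only.
DEFAULT_CREDENTIALS = {
    ("admin", "admin"),
    ("root", "root"),
    ("admin", "1234"),
}

def flag_default_credentials(devices):
    """Return the devices whose (user, password) pair matches a known default."""
    return [d for d in devices if (d["user"], d["password"]) in DEFAULT_CREDENTIALS]

inventory = [
    {"name": "ip-camera-01", "user": "admin", "password": "admin"},
    {"name": "thermostat-02", "user": "ops", "password": "s7#kQ!x"},
]

for device in flag_default_credentials(inventory):
    print(f"{device['name']} is still using factory-default credentials")
```

An audit like this catches exactly the class of device that botnets recruit; hard-coded passwords, by contrast, cannot be fixed by the owner at all, only by the manufacturer.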
Sellards points out that one of the most interesting aspects of the attack was that GRE (the Generic Routing Encapsulation protocol) was used instead of the amplification techniques seen in most DDoS attacks. This represents a change in tactics, specifically designed to take advantage of the high-bandwidth internet connections that IP-based video cameras use.
These developments have experts like Sellards worried, given the huge – and growing – number of IoT devices that form part of the nation’s critical infrastructure. “If default and hardcoded passwords can be compromised to install malware that launches DDoS attacks, they can also be compromised to launch more nefarious attacks with significantly higher consequences,” he says. It shows that IoT installations are insecure, unhardened, and exposed to the internet without firewall filtering. “All best business practices we’ve spent decades developing have gone right out the window.”
IoT in general represents a fascinating new chapter in convenience and communication for businesses and consumers alike. But as all security experts already know, the bad guys never rest. The way in which they discovered and exploited both the weaknesses and the built-in features of IoT shows a creativity and dedication that must never be ignored. Thus the value of a CCSP having a seat at the executive table has just increased exponentially.
For more on the CCSP certification from (ISC)2, please visit their website. Sponsored by (ISC)2.
Today the sharing economy is spreading across the entire economy (check out this infographic via Near Me if you don’t believe me). Praised by some as “the future of market capitalism” and lambasted by others as “the desperate economy”, it is a pseudo-socialist answer to the monopolies that have sprung up across the global economy. The latest industry to be taken on by the sharing economy is financial services, in a vein of start-ups now known as Fintech.
Fintech and the sharing economy are conceptually intertwined: decentralisation is at the core of what each is trying to do. Where platforms like YouTube cut publishing and publicity costs to almost nothing, Uber and Airbnb did the same for their respective industries; now Fintech is following suit. In a financial context this has focused on the decentralisation of asset ownership, social payments, crowdfunding, and the growth of peer-to-peer lending and insurance.
Development of new technology, such as the blockchain, has created excellent opportunities for cost reduction and efficiency in money transfers and payments. By reducing fees, small digital payments have become increasingly viable and cost effective, opening the door to an entire market of new products and services built around smaller, more granular consumption. The vast majority of consumer financial services are driven by the need to conduct peer-to-peer transactions, and that is unlikely to change. What is beginning to change is the platforms and institutions that facilitate those transactions: we are moving from large banks towards a broader ecosystem of banks and Fintech companies, with some commentators suggesting that banks could be eliminated altogether (though that is some way down the line).
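The cost-reduction argument rests on the blockchain’s core mechanism: each block embeds a hash of its predecessor, so a shared ledger can be verified by anyone without a central clearing house. A minimal sketch of that linkage (illustrative only, with made-up payment records):

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 digest over the block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, payments):
    """Link a new block to the chain by embedding the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "payments": payments})
    return chain

chain = []
append_block(chain, [{"from": "alice", "to": "bob", "amount": 5}])
append_block(chain, [{"from": "bob", "to": "carol", "amount": 2}])

# Tampering with an earlier block breaks the link to its successor,
# which is what makes the ledger cheap to verify and hard to forge.
assert chain[1]["prev_hash"] == block_hash(chain[0])
```

Real systems add consensus, signatures and fees on top, but this hash-linking is the property that lets participants trust the ledger without paying an intermediary to do so.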
(Image Source: PricewaterhouseCoopers)
The sharing economy and Fintech have already made massive inroads into certain areas of the financial services industry, garnered huge investments and are forecast to experience astronomical year-on-year growth. PwC (PricewaterhouseCoopers) has predicted that the sharing economy market, in fields such as P2P lending, crowdfunding, automobiles, housing, media and manpower sourcing, will grow from $15 billion in 2013 to $335 billion by 2025. These figures may seem foolishly optimistic, but you only have to look at the successes that Fintech start-ups have enjoyed thus far, and the scale at which they could impact the economy – look at Kickstarter (and crowdfunding in general). Since 2014, Kickstarter has funded approximately 20,000 projects, attracted 780,000 backers and generated approximately US$1.5 billion in investment capital. And that is just Kickstarter: the global crowdfunding economy was worth $34.4 billion in 2015 and is expected to surpass venture capital investment in 2016 (which averages around $45 billion).
Yet that is just one area in which the sharing economy has begun to disrupt traditional financial markets and institutions. With Apple Pay already redefining digital payments, Apple has now filed a patent application for “person-to-person payments using electronic devices” that could allow iPhone users to transfer money between friends and family more easily – imagine AirDrop for money. This has the potential to further commoditise retail banking: instead of using high-cost bankers to broker the connection between two parties, technology can allow us to make that connection more cheaply and efficiently. The sharing economy has also demonstrated its ability to revolutionise more rural or less developed economies. For example, M-PESA in Kenya handles deposits and payments using customers’ phones and a network of agents. According to a recent report, the service is now used by 90% of the adult population in the country; the Economist has even declared Kenya the world leader in mobile money, thanks to M-PESA.
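The common thread between Apple’s patent and M-PESA is disintermediation: value moves directly between two accounts with no banker brokering the transfer. A toy balance ledger shows the idea; all names and methods here are hypothetical, and this is not any provider’s real interface:

```python
# Minimal sketch of a peer-to-peer balance ledger, in the spirit of
# person-to-person payment services. Entirely illustrative.
class Ledger:
    def __init__(self):
        self.balances = {}

    def deposit(self, account, amount):
        self.balances[account] = self.balances.get(account, 0) + amount

    def transfer(self, sender, receiver, amount):
        """Move funds directly between two accounts, with no intermediary."""
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient funds")
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount

ledger = Ledger()
ledger.deposit("amina", 100)
ledger.transfer("amina", "joseph", 40)
print(ledger.balances)  # {'amina': 60, 'joseph': 40}
```

Everything a real service adds – agents, phone numbers as account identifiers, fraud checks – sits around this simple core, which is why the marginal cost of each transfer can be driven so low.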
Much like many other aspects of the sharing economy, Fintech platforms have the potential to overtake the entire financial services industry. However, traditional banking is unlikely to die quite as easily as some may hope; we are much more likely to see banks and Fintech start-ups working together in an intertwined and decentralised financial ecosystem. The sharing economy is here to stay, and for Fintech that means more innovation and more investment… Sounds alright to me.