Ajay Malik

Explainable Intelligence Part 1 – XAI, the third wave of AI

Explainable Intelligence

Artificial Intelligence (AI) has been democratized into our everyday lives. Tractica forecasts that global artificial intelligence software market revenue will grow from around 9.5 billion US dollars in 2018 to 118.6 billion by 2025.

Gartner predicts that by 2020, AI technologies will be virtually pervasive in almost every new software product and service. It also predicts that the business value created by AI will reach $3.9T in 2022.

We are becoming accustomed to AI making decisions for us in our daily life, from product and movie recommendations on Netflix and Amazon to friend suggestions on Facebook and tailored advertisements on Google search result pages.

Netflix uses AI for personalization of movie recommendations, auto-generation and personalization of thumbnails/artwork, location scouting for movie production (pre-production), movie editing (post-production), streaming quality, and a lot more (Business Insider).

AI plays a huge role in Amazon’s recommendation engine, which generates 35% of the company’s revenue, as well as in Amazon Go hardware, which combines color and depth cameras, weight sensors, and algorithms.

As of last year, 23% of North American enterprises had machine learning embedded in at least one company function, as did 19% of enterprises in developing markets (including China) and 21% in Europe. Most businesses are considering adding AI capabilities to their systems for a specific business value proposition.

By 2021, the term AI will no longer be considered a differentiator in marketing tech provider solutions.

Although AI is powerful in terms of results and predictions, AI algorithms suffer from opacity: it is difficult to get insight into how they work internally. And there are times when we need to know why a system made a specific decision.

In March 2018, an autonomous vehicle operated by Uber hit and killed a woman in Tempe, Ariz., as she was walking her bicycle across the street.

So what happened? The car’s computer system had spotted Ms. Herzberg, who was not in a crosswalk, six seconds before impact, but classified her first as an unrecognized object, then as another vehicle, and finally as a bicycle. The self-driving software decided not to take any action after the car’s sensors detected the pedestrian. Uber’s autonomous mode disables Volvo’s factory-installed automatic emergency braking system, according to the US National Transportation Safety Board’s preliminary report on the accident. Uber suspended testing of self-driving vehicles after the crash. In December, the vehicles returned to public roads, though at reduced speeds and in less-challenging environments.

“This product is a piece of sh**” wrote a doctor at Florida’s Jupiter Hospital regarding IBM’s flagship AI program Watson, according to internal documents obtained by Stat. It recommended ‘unsafe and incorrect’ cancer treatments.

In 2013 IBM developed Watson’s first commercial application for cancer treatment recommendation, and the company secured a number of key partnerships with hospitals and research centers over the past five years. But Watson AI Health has not impressed doctors. Some complained it gave wrong recommendations on cancer treatments that could cause severe and even fatal consequences. In February 2017, Forbes reported that MD Anderson had “benched” the Watson for Oncology project.

Amazon HR reportedly used AI-enabled recruiting software between 2014 and 2017 to help review resumes and make recommendations. The software was, however, found to be more favorable to male applicants.

The software reportedly downgraded resumes that contained the word “women” or implied the applicant was female, for example, because she had attended a women’s college. Amazon has since abandoned the software.

AI is a journey, and none of these failures suggests that we should do to Artificial Intelligence what the Luddites tried to do to machinery.

Although Artificial Intelligence has captured the imagination of the world since its inception in 1956, at a historic conference at Dartmouth, it was long stuck in a discovery phase of hand-crafted, custom-engineered AI applications. Researchers worked out how to solve a particular problem and then did traditional coding. A key characteristic these applications shared was no ability to learn and poor handling of uncertainty. It is deep learning systems, starting in the early 2010s and aided by huge amounts of training data and massive computational power, that have shown us substantive glimpses of the power and applications of AI. The period up to 2010 essentially represents the “First Wave” of AI, with deep learning starting the “Second Wave”.

Deep learning is an architecture modeled loosely on the human brain. It makes use of neural networks that consist of thousands or even millions of nodes (neurons) that are densely interconnected and organized into multiple layers. An individual node might be connected to several nodes in the layer beneath it, from which it receives data, and several nodes in the layer above it, to which it sends data. The essential concept is that a “weight” is assigned to each incoming connection of a node; the node computes the sum of the weighted inputs from all connected nodes beneath it, and then uses a threshold to decide whether or not to pass data on to its outgoing connections to the nodes in the layer above it (akin to the “firing of a neuron”).
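The weighted-sum-and-threshold behavior of a single node can be sketched in a few lines of Python. This is a simplified sketch: real deep networks typically use smooth activation functions such as ReLU or sigmoid rather than a hard threshold, and the weights below are invented for illustration.

```python
def neuron(inputs, weights, threshold):
    """A single node: weighted sum of incoming data, then 'fire'
    (output 1.0) only if the sum crosses the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1.0 if total >= threshold else 0.0

# A node receiving data from three nodes in the layer beneath it:
output = neuron([0.5, 0.9, 0.1], [0.4, 0.7, -0.2], threshold=0.5)
```

Here the weighted sum is 0.81, which crosses the 0.5 threshold, so the node fires and passes data to the layer above.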

For training, huge amounts of labeled data are fed into the neural network. An object recognition system, for instance, might be fed thousands of labeled images of cats, dogs, digits, and so on, and it would find visual patterns in the images that consistently correlate with particular labels. During training, the weights and thresholds are continually adjusted until training data with the same labels consistently yield similar outputs.
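The training idea above can be sketched with the classic perceptron learning rule on a toy labeled dataset (the logical AND function, chosen here just for illustration). Real deep networks adjust millions of weights with gradient descent and backpropagation, but the core loop is the same: nudge the weights until inputs with the same label consistently yield the same output.

```python
def predict(x, weights, bias):
    # A single thresholded neuron: weighted sum, then fire or not.
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias >= 0 else 0

def train(samples, labels, epochs=20, lr=0.1):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            error = target - predict(x, weights, bias)
            # Adjust each weight in proportion to its input and the error.
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]  # labels for logical AND
weights, bias = train(samples, labels)
```

After training, `predict` reproduces the labels for all four inputs; the “knowledge” lives entirely in the learned weights and bias.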

You have “learned” to mathematically recognize incredibly subtle patterns within mountains of data.

Using this “pattern recognition,” you can now “provide” an answer, a decision, or a prediction for input data that you have never seen before.

However, this AI based on deep learning still has a long way to go to approach human-level learning, thinking, and problem-solving ability. It is a great tool but is unlike the human brain in how it learns. These algorithms suffer from opacity and do not provide reasoning. They require mountains of data to train. They have no common sense, conceptual learning, creativity, planning, cross-domain thinking, self-awareness, or emotions. Deep learning is basically a data-driven optimizer for a very specific, vertical, single task.

Deep learning is still a very weak form of intelligence.

Until we figure out the engineering path to general intelligence capabilities, the “fourth wave”, the focus of AI is shifting to opening the black box.

We are now entering the “third wave” of AI, in which AI systems will become capable of explaining the reasoning behind every decision they make. The AI systems themselves will construct models that explain how they work.

XAI is all about improving trust in AI-based systems. At one end, it brings fairness, accountability, and transparency front and center in AI; at the other, it enables us to control and continuously improve our AI systems.
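One simple flavor of this idea can be sketched in a few lines: treat the model as a black box and explain a single prediction by perturbing each input feature and measuring how much the output moves. The toy “model” and the loan-style features below are invented purely for illustration; this is just one of many XAI techniques, not a complete method.

```python
def black_box_model(features):
    # Stand-in for an opaque model: secretly a fixed linear scorer.
    weights = {"income": 0.6, "debt": -0.8, "age": 0.1}
    return sum(weights[name] * value for name, value in features.items())

def explain(model, features, delta=1.0):
    """Per-feature sensitivity: how much the model's output changes
    when one feature is nudged by `delta`, all else held fixed."""
    base = model(features)
    attributions = {}
    for name in features:
        perturbed = dict(features)
        perturbed[name] += delta
        attributions[name] = model(perturbed) - base
    return attributions

applicant = {"income": 5.0, "debt": 2.0, "age": 3.5}
attributions = explain(black_box_model, applicant)
```

For this linear toy model, each attribution recovers that feature’s hidden weight (up to floating-point error), so a rejected applicant could be told, for example, that debt was the feature pulling the score down.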

Control

To me, this third wave, XAI, is the sine qua non for AI to continue making steady progress without disruption. Over the last year and a half, I have spent a lot of time learning about XAI. In this series of articles, I will share what I have learned about the strategy, approach, scope, and techniques of XAI.

By Ajay Malik

Ex-Google, Ajay Malik is CTO of Lunera, building a fog cloud by embedding a compute platform within the end caps of bulbs and tube lights. With over 25 years of executive engineering leadership and entrepreneurial experience in delivering award-winning innovative products, Ajay joined Lunera from Google, where he was head of architecture and engineering for the worldwide corporate networking business. Prior to Google, Ajay was senior vice president of engineering and products at Meru Networks, where he led the transformation of the company's technology, resulting in its acquisition by Fortinet. Ajay has also held executive leadership positions at Hewlett-Packard, Cisco, and Motorola. Ajay has 80+ patents pending/approved. Ajay is the author of "RTLS for Dummies" and "Artificial Intelligence for Wireless Networking."