November 9, 2016

Ringing The Alarm Bells – Preparing For The Potential Dark Future of A.I.

By Josh Hamilton

The Future of A.I.

On Friday 21st October, the world witnessed the largest cyber-attack in history. The attack set a new precedent for the size, scale, and potential threat of cyber-attacks: the Mirai botnet was used to hijack IoT (Internet of Things) devices and repurpose them into a massive coordinated DDoS attack. Traffic was registered at a massive 1.2Tbps, and, rather worryingly, there has been speculation that the attackers could have drawn on up to five times as many devices. Many people have been quick to declare this the beginning of a new era in cyber-crime, and Elon Musk has speculated that it is “only a matter of time before AI is used to do this”. But how at risk are we?

Musk has also postulated that as A.I. gets better and smarter, we could see it used to optimise attacks on internet infrastructure – attacks like the one we saw last month. He has highlighted that (in his eyes) the internet is “particularly susceptible” to something called a “gradient descent algorithm”. Gradient descent is a mathematical process that takes a complex function and iteratively steps downhill towards its minimum, homing in on the best solution it can find – something that A.I.s are already incredibly good at, since it is integral to machine learning. The worry is that this same optimisation process could be turned towards fine-tuning digital weaponry and launching devastating IoT attacks. Ultimately, it could lead to A.I. vs A.I. cyber-warfare on a scale that we can only imagine at the moment.
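
To make “gradient descent” a little less abstract, here is a minimal sketch in Python – my own illustration, not anything Musk referenced. It minimises a simple two-variable function by repeatedly stepping against the gradient, which is exactly the optimisation loop at the heart of machine learning:

```python
# A minimal sketch of gradient descent (illustrative only): find the minimum
# of f(x, y) = (x - 3)^2 + (y + 1)^2 by stepping against the gradient.

def grad_f(x, y):
    # Partial derivatives of f with respect to x and y.
    return 2 * (x - 3), 2 * (y + 1)

x, y = 0.0, 0.0        # arbitrary starting point
learning_rate = 0.1    # how far to step each iteration

for _ in range(100):
    gx, gy = grad_f(x, y)
    x -= learning_rate * gx
    y -= learning_rate * gy

print(f"Converged near x={x:.3f}, y={y:.3f}")  # approaches (3, -1)
```

The same loop, pointed at some measure of “attack effectiveness” instead of a toy function, is what makes the optimisation-as-weapon scenario plausible.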

All this comes as we stand on the cusp of an A.I. revolution; the idea of A.I. no longer seems some far-off concept, it is within our grasp. Yet there are many parties warning of the vast dangers of A.I., including some of the greatest minds on earth. Bill Gates, Elon Musk, and Stephen Hawking all fear what A.I. could mean for the human race as a whole. Hawking has warned that the moment robots gain the ability to build and redesign themselves, they will continue to do so faster and faster, spelling the end for the human race as the dominant species on the planet:

“Once humans develop artificial intelligence, it will take off on its own and redesign itself at an ever-increasing rate… Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.” Similarly, Elon Musk made headlines speaking at the MIT Aeronautics and Astronautics department’s Centennial Symposium in October 2014, when he compared creating artificial intelligence to summoning a demon:

“I think we should be very careful about artificial intelligence. If I were to guess what our biggest existential threat is, it’s probably that. So we need to be very careful with artificial intelligence… With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like – yeah, he’s sure he can control the demon. Didn’t work out.”

Bill Gates, while noting what A.I. and automation can do for us, has also called for great caution. In a recent AMA on Reddit he questioned why some people aren’t as worried as he is:

“First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”

Those of you who watch Westworld (HBO) will have seen this fear in its earliest form: one of the A.I. hosts comes to understand that it is controlled by a computer, and blackmails two engineers into making it much smarter than it “should be” – a terrifying and chilling moment, perhaps because it could soon be reality. The technology is already entering mainstream culture: Ross, the A.I. lawyer, has just been hired by a US law firm; Google’s A.I. has already pushed the limits of machine creativity by producing abstract works of art; and that same A.I. has now created its very own form of encryption.

With all of these technologies becoming more and more prevalent in our society, it is key that we understand the risks of what we are doing, and take proper precautions. When the people who have led the tech revolution are warning of its risks, you know it is time to listen.

What Are We Doing About It? 

In a way, the creation of artificial intelligence has become this century’s nuclear power. Yes, nuclear power could help fuel the world far more sustainably than fossil fuels, but once the technology exists, the threat of nuclear war is ever-present – and this risk-versus-reward trade-off is mirrored in the race to create artificial intelligence.

In his book Humans Need Not Apply, author Jerry Kaplan suggests that humanity’s future may lie inside a zoo run by “synthetic intelligences”. He suggests that rather than enslave us, A.I.s are much more likely to keep us on some sort of reserve and give us very little reason to want to leave. Because of the horrifying nature of scenarios like these, so often associated with the creation of A.I., scientists have long grappled with how to contain an out-of-control A.I.

Isaac Asimov proposed three laws of robotics in his 1942 science fiction short story Runaround, and many have suggested they could be the answer to preventing an A.I. uprising or takeover (the same three laws that so conspicuously failed to prevent one in I, Robot…).

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

However, these laws were designed to be flawed and deliberately vague – otherwise Asimov’s books wouldn’t have made for great reading. The stories explore the imperfections, loopholes, and ambiguities inherent in these laws, and ironically they have only taught us how not to deal with A.I. But are there ways to protect ourselves? Or are we all doomed to live in a human zoo?

The UN have recently been discussing a ban on the use of autonomous weapons, in an attempt to head off the prospect of A.I. vs A.I. warfare, and have declared that humans must always have meaningful control over machines. Yet that doesn’t ultimately protect us; the UN have notoriously little enforcement power, so it is up to science to provide a solution!

A.I. Fears

Many leaders in the fields of A.I. and deep learning have come together to sign an open letter, addressed to the scientific community and the wider public, to confront the fears associated with artificial intelligence. The letter weighs the huge benefits that A.I. could provide against the great dangers that come hand in hand with them, and the overwhelming message that emerges is one of caution:

“The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence… We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do.”

The letter has been signed by a wide spectrum of scientists and industry leaders from across the world, including Elon Musk, Stephen Hawking, Google’s director of research Peter Norvig, the co-founders of DeepMind (the British A.I. company purchased by Google in January 2014), MIT professors, and experts from technology’s biggest corporations, including IBM’s Watson team and Microsoft Research. This type of global collaboration and initiative is key to maintaining control of our creations.

This group are not the only people concerned about the implications of creating artificial intelligence; many are actively working on practical solutions. Perhaps the most famous is Google’s work on an A.I. “kill switch”. Developers at DeepMind, Google’s artificial intelligence division, have collaborated with Oxford University researchers to develop a way for humans to keep the upper hand over super-intelligent computers. According to DeepMind’s Laurent Orseau and Stuart Armstrong of Oxford’s Future of Humanity Institute, humanity might need some sort of “big red button” to stop an A.I. from carrying out a “harmful sequence of actions” – in other words, humans hold the ultimate trump card against a rogue A.I.
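
To give a flavour of the idea, the sketch below is my own toy illustration – not the actual DeepMind/Oxford algorithm, which concerns safely interruptible reinforcement learners. It shows an agent loop wired to an external “big red button”: every step checks the interruption flag, and the button never feeds into the agent’s reward, so the agent gains nothing by learning to resist it:

```python
# Toy sketch of a "big red button" (not the DeepMind/FHI method): an agent
# loop that checks an external interruption flag before acting. Crucially,
# the interruption never affects the agent's reward, so the agent has no
# incentive to learn to disable or avoid the button.
import random

interrupted = False  # the "big red button", flipped by a human operator

def choose_action(state):
    # Placeholder policy; a real agent would learn this from experience.
    return random.choice(["left", "right", "wait"])

def environment_step(state, action):
    # Placeholder world model: advance the state, return a reward.
    return state + 1, 0.0

state, total_reward = 0, 0.0
while not interrupted:
    action = choose_action(state)
    state, reward = environment_step(state, action)
    total_reward += reward       # the button's state is never part of this
    if state >= 10:              # pretend a human operator saw trouble here
        interrupted = True       # ...and pressed the button

print(f"Agent halted at state {state} with total reward {total_reward}")
```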

The most open and positively reviewed approach to this problem has been the transparent, collaborative research fostered by the Future of Life Institute (FLI). David Parkes, an FLI researcher and Harvard professor, has been attempting to teach A.I. agents to compromise, helping systems of A.I.s to reason and work together. This research will only become more important as computing power develops.

The best way to ensure that A.I. doesn’t lead to the end of humanity is to create an open and collaborative environment for research and development. Google, Facebook, and Microsoft all have researchers exploring machine learning and artificial intelligence techniques, and researchers at OpenAI worry that the currently open nature of this research will close off as the findings grow in value. Humanity must remain united on this problem, or else, much like the development of nuclear weapons, this could spell the end of the world.

The Brighter Side of A.I.

I wanted to get away from the doom and gloom of A.I. as the end of the world; ultimately it is this sort of fear that could drive secrecy and a dangerous lack of transparency among researchers. So, to counter the problems and dangers associated with A.I., I wanted to explore some of the most inventive applications of artificial intelligence that are already having an impact on our lives.

A.I. Lawyer

Ross, the first A.I. lawyer, has been designed and built on top of IBM’s cognitive computer Watson to work with Baker & Hostetler’s bankruptcy practice (currently a team of 50 lawyers). It has been built to read and understand language, suggest hypotheses when asked a question, and research case law and precedent to back up its conclusions. Ross also incorporates deep learning, allowing it to gain speed and knowledge the more you interact with it.

Rather than relying on researchers and experts to dig up obscure precedents, Ross can read through the entire body of case law to help you get the most accurate information more quickly and efficiently. Ross can even monitor ongoing cases and new court decisions that may affect the verdict of your case! As if we didn’t already have enough lawyers…
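
As a rough illustration of what searching “the entire body of case law” involves under the hood, here is a minimal precedent-retrieval sketch – my own toy example with invented case summaries, not Ross’s actual Watson-based pipeline. It ranks past cases by textual similarity to a plain-language question:

```python
# A minimal sketch of precedent retrieval (illustrative only, NOT Ross's
# pipeline): rank case summaries by textual similarity to a question.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical, heavily abbreviated case summaries.
cases = [
    "Debtor sought Chapter 11 protection after defaulting on secured loans.",
    "Court held that preferential transfers within 90 days may be avoided.",
    "Trustee challenged fraudulent conveyance of assets before bankruptcy.",
]

question = "Can a trustee recover payments made shortly before bankruptcy?"

vectorizer = TfidfVectorizer(stop_words="english")
case_vectors = vectorizer.fit_transform(cases)
question_vector = vectorizer.transform([question])

# Rank cases by cosine similarity to the question.
scores = cosine_similarity(question_vector, case_vectors)[0]
for score, case in sorted(zip(scores, cases), reverse=True):
    print(f"{score:.2f}  {case}")
```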

A.I. Personal DJ

SoundHound is a music and artificial intelligence company that is attempting to merge the two into a brand new speaker – the Hurricane Speaker. The speaker combines a voice-controlled personal DJ/assistant, music recognition software (which lets you sing a tune to it for identification), and a vast music collection from which to draw.

The speaker will be capable of selecting music based on your mood, creating personalised playlists with its Predictive Analysis Library (PAL) algorithm, as well as providing updates on weather and sports, setting alarms, and generally helping to organise your life.

A.I. Doctor

ResApp Health is an Australian “digital healthcare solutions” company that has been working on an app to diagnose respiratory conditions using only a smartphone’s microphone (acting like a stethoscope). The app applies deep learning algorithms to analyse cough sounds in an attempt to identify conditions such as pneumonia, asthma, bronchiolitis, and chronic obstructive pulmonary disease (COPD).
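
A heavily simplified sketch of that kind of pipeline might look like the following – my own toy example trained on synthetic signals, not ResApp’s clinical models. The idea is the same: turn a recording into spectral features, then train a classifier on labelled examples:

```python
# Toy sketch of audio-based diagnosis (illustrative only, synthetic data):
# convert a signal into crude spectral features, then fit a classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def spectral_features(signal, n_bands=8):
    """Crude features: mean FFT magnitude in n_bands frequency bands."""
    spectrum = np.abs(np.fft.rfft(signal))
    return np.array([band.mean() for band in np.array_split(spectrum, n_bands)])

def fake_cough(high_freq):
    # Synthetic stand-in: "unhealthy" coughs carry more high-frequency energy.
    t = np.linspace(0, 1, 2048)
    freq = 800 if high_freq else 200
    return np.sin(2 * np.pi * freq * t) + 0.3 * rng.standard_normal(t.size)

y = np.array([i % 2 for i in range(100)])  # 1 = "pneumonia-like" (synthetic)
X = np.array([spectral_features(fake_cough(high_freq=bool(label))) for label in y])

model = LogisticRegression(max_iter=1000).fit(X, y)
print("Training accuracy:", model.score(X, y))
```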

But this is not an isolated use of machine learning in medicine. Enlitic is using Google’s open-source deep learning technology to build an A.I. capable of diagnosing conditions and suggesting treatment, helping doctors tackle medical issues much like a complex data problem. As a test, Enlitic ran its algorithm on lung CT scans in an attempt to detect potentially cancerous growths, comparing its results against a panel of the world’s top radiologists. Enlitic beat the panel comprehensively, successfully diagnosing every case of cancer where the panel missed 7%, while misdiagnosing 19% fewer cases than the human experts. Survival rates for cancer improve dramatically the earlier it is detected! As a bonus, Enlitic also helps doctors by surfacing similar cases and analysing trends that would be impossible for a single doctor to see or consider.

A.I. Journalist’s Aid

With the Associated Press already using automation to cover minor league baseball games, it was only a matter of time before A.I. grew into a larger part of journalism. The next step in that growth comes via JUICE, a project funded by Google’s Digital News Initiative, described as a tool to help journalists “discover and explore new creative angles on stories they write”. JUICE is being designed as an add-on to Google Docs; it uses A.I. to analyse what you have written and find creative, productive angles from which to approach the article or story. It is connected to around 470 news sites and automatically runs what its creators call “creative searches” to pull up relevant articles, cartoons, and multimedia that could be useful to the story. The project is aimed at improving the quality of journalism and helping writers find new ways of approaching their work. The system has been trialled successfully with journalism students and is expected to be more widely available at some point next year!
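
It is not hard to imagine the core of a “creative search”. JUICE’s actual pipeline is not public, so the sketch below is purely hypothetical: extract the salient terms from a draft, then use them to surface related material from an indexed corpus:

```python
# A purely hypothetical sketch of a "creative search": pull salient keywords
# from a draft and match them against an indexed corpus of articles.
# JUICE's real pipeline is not public; the corpus and titles are invented.
import re
from collections import Counter

STOP_WORDS = {"the", "a", "an", "of", "to", "and", "in", "on", "is",
              "that", "for", "into", "it", "was"}

def keywords(text, k=4):
    """Return the k most frequent non-stop-words in the text."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOP_WORDS)
    return [word for word, _ in counts.most_common(k)]

# Invented stand-in for JUICE's index of ~470 news sites.
corpus = {
    "Botnets and the new face of cyber-crime": "mirai botnet ddos iot attack",
    "How machine learning optimises everything": "gradient descent ai learning",
    "Local politics roundup": "council election budget vote",
}

draft = "The Mirai botnet turned IoT devices into a weapon for a DDoS attack."

terms = keywords(draft)
suggestions = [title for title, tags in corpus.items()
               if any(term in tags.split() for term in terms)]
print("Suggested angles:", suggestions)
```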

Although artificial intelligence can seem like an incredibly scary prospect, it is a tool that has been, and can continue to be, used to improve many people’s lives and to aid the progression of society. However, a great deal of caution is required in the pursuit of this technology. We cannot allow complacency of the same magnitude that we have shown towards nuclear power, climate change, and cyber security.


Josh Hamilton

Josh Hamilton is an aspiring journalist and writer who has written for a number of publications covering cloud computing, fintech, and legaltech. Josh has a Bachelor’s Degree in Political Law from Queen’s University in Belfast. His studies included Politics of Sustainable Development, European Law, Modern Political Theory, and Law of Ethics.