How Can We Use Artificial Intelligence When We Can’t Handle Real Intelligence?


Artificial Versus Real Intelligence

In this article we will be discussing the pitfalls of societal disillusionment with facts, and how this trend may prove troubling for the oncoming artificial intelligence revolution. Dr Edward Maibach has frequently lamented the “huge disconnect between the actual science and what the American public believes” about climate change, but this is not an isolated issue. Even on issues like healthcare, the public are often misinformed about the facts of the situation. How can we expect the general populace (or the politicians who represent them) to understand a threat of such complexity and magnitude when there is so little knowledge about issues that affect our day-to-day lives?

This is where the societal gap becomes a problem: the general public has very little awareness of the technology, and the huge corporations like Google and Apple who are likely to build this AI will hold an unchallenged monopoly. Unless this unparalleled power is immediately pledged to the betterment of humanity, the economic landscape could be decimated, with power and wealth flowing to those who control the technology. They would quickly become overlords of humanity.

I spoke to Michael Bell at Albion Research, who noted the differing attitudes towards human drivers and self-driving cars. He commented:

“The problem is we have far more sympathy for human error rather than computer error. If a self-driving car is in one accident, it often means a recall of every single car using that technology.”

This is an interesting observation: technology is quite clearly held to a higher standard than humanity. Does that mean the public expect more from AI? Or does it simply highlight the fears most people have about putting their lives in the hands of a machine? For Michael Bell, this fear comes partly from a lack of understanding of how the technology functions. It is easy to see the input to or output from any machine; what is difficult is understanding how an AI processes information and arrives at a conclusion.

Bell noted that “when you get onto the self-taught stuff it isn’t always clear what the machine has learnt… I think there is always going to be that risk.”
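Bell’s point can be made concrete with a toy example. The sketch below (a hand-rolled perceptron, purely illustrative) trains a single artificial neuron on the logical AND function: the inputs and outputs are trivial to inspect, but the learned weights are just numbers the training loop settled on, with no self-evident explanation, and that opacity only grows with model size.

```python
import random

random.seed(0)  # deterministic for the sake of the example

# Training data for logical AND: ((input pair), target)
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

# Start from random weights and bias -- the "untrained" state.
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b = random.uniform(-1, 1)

def predict(x):
    """Fire (1) if the weighted sum of inputs crosses zero, else 0."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Perceptron learning rule: nudge weights toward reducing each error.
for _ in range(50):
    for x, target in data:
        err = target - predict(x)
        w[0] += 0.1 * err * x[0]
        w[1] += 0.1 * err * x[1]
        b += 0.1 * err

print([predict(x) for x, _ in data])  # → [0, 0, 0, 1], matching the targets
print(w, b)  # the "knowledge" -- raw numbers that explain nothing by themselves
```

The behaviour is correct and fully observable from the outside, yet the final weights carry no human-readable account of *why* the machine answers as it does. Scale this from three parameters to billions and Bell’s worry about self-taught systems becomes clear.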

Manhattan Project For Artificial Intelligence

In our “post-truth” world, with politics in its current hyper-partisan state and the public at large seemingly rejecting fact and logic, I find it hard to foresee a scenario in which politics can be set aside in order to listen to the top minds in the field. To thoroughly explore and deal with this issue, Sam Harris has actually proposed founding a ‘Manhattan Project’ for AI. Rather than a secret project, however, it would be an open and collaborative effort between the best of the scientific community and governments around the world. That said, such a project would require vast resources and co-operation between government and the scientific community, something that is unlikely in our current political climate.

According to the majority of the scientific community, and even the US Department of Defence, the greatest threat to humanity is climate change. Yet time and again the issue has been politicized and muddied by special interests and lobbyists. Last month, the US House of Representatives Committee on Science, Space, and Technology tweeted a misleading article about climate change published by Breitbart (the alt-right “news” outlet). How can we trust the same people to properly regulate, or even understand, AI and its implications comprehensively and objectively?

The tech community are doing an admirable job of trying to police themselves on this issue, with groups like OpenAI (one of Elon Musk’s lesser-known projects) promoting collaboration across the industry to prevent secrecy, and ultimately conflict. They even managed to get the notoriously secretive Apple to start publishing AI research. Unfortunately, governments around the world have thus far been slow to act on regulating AI development, though government interference wouldn’t necessarily be the best policy either.

Barack Obama recently commissioned two reports on AI, which urged two key principles:

  1. AI needs to augment humanity instead of replacing it,
  2. AI needs to be ethical, and there must be an equal opportunity for everyone to develop these systems.

But these are only baby steps, ones that may not be properly followed up by the incoming regime.

Until humanity can truly work as one species to tackle the global issues we are going to face, I don’t think we will be truly ready for AI. These principles also fail to address the potential for abuse of the technology, whether economic, military, or social. The world might never be ready for AI, but we had better start preparing anyway.

We Can’t Handle Real Intelligence

Artificial Intelligence is no longer coming; it is here. 2017 will see deep learning and AI technology applied across nearly every industry on the planet. It can be applied to study noise on bridges to look for failures or weak points, to help doctors detect cancers or tumors, or even to help you get approved for a loan or mortgage. Every major tech company on the planet is already researching and/or utilizing Artificial Intelligence as they strive for better, faster, and smarter data analytics and processing. AI and deep learning are an inevitable part of our future, one that we will have to grasp and understand fully, or we run the horrifying risk of human annihilation.

The dangers of AI are far from unrealistic: six decades of dystopian science-fiction writing have seen those fears codified and explored in nearly every conceivable scenario. With breakthroughs in the technology occurring faster and faster, some of the world’s most influential minds have urged extreme caution in the development and application of AI. Stephen Hawking, Elon Musk, and Bill Gates are but a few of the many thousands in the tech sector who believe the dangers to humanity are very real, and that we should therefore proceed with the utmost care and attention to detail. Yet, for me, there is a missing element in the warnings that surround the advent of Artificial Intelligence, a truth that we may soon have to confront. Can we as a society truly handle AI? Are we capable of truly grasping or understanding something with infinitely more power or knowledge than any one human has ever been blessed with?

Provided below is an infographic discovered via Futurism:

[Infographic: Artificial Versus Real Intelligence]

Post-Truth

Oxford Dictionaries named “post-truth” its word of the year, and it is hard to imagine a word or phrase that could better sum up 2016. The Brexit campaign was dominated by blatant lies from both sides, President-Elect Trump managed to lie and insult his way to victory (with a little help from Russia, WikiLeaks, and James Comey), and fake news became the hottest way to spread misinformation. The most shocking part of the lies and fake news was the public’s inability to distinguish between lies and facts.

The rise of misinformation in what is supposed to be the age of information is an alarming trend, and many people have been quick to place the blame on sites like Facebook and Twitter for the fake-news epidemic. But there is a much more obvious factor to blame: our critical thinking skills. How can we as a species be expected to navigate the internet age without the ability to use reason and logic to reach our own conclusions? People generally don’t question sources or fact-check the news themselves; they listen for what they want to hear. Hyper-partisanship and identity politics have led us to a place where facts have become blurred with opinion, and the social media echo chambers we have all built for ourselves only perpetuate these problems.

Tech Bubble

Another interesting side of the AI issue is the vast gulf appearing between the tech community and the rest of the world. Places like Silicon Valley are hurtling so fast into the future that they have arguably become out of touch with the rest of the world. They inhabit a completely separate, and to many of us alien, society, concerning themselves with disrupting and reshaping the world, often with the best of ideals. The sharing economy, for example, has allowed many people to supplement their income, or to access much cheaper and often localized services.

The biggest companies in the tech sector are all racing towards AI; it is already being deployed across almost every industry, in ways that most people don’t even realise. Sam Harris, an American author and neuroscientist, has talked extensively about the problems of creating an AI. At some point we will create a true artificial intelligence; technological progress is not going to simply halt, so the result is inevitable. Even if we only made it as smart as a researcher at MIT, electronic processing runs roughly a million times faster than biological processing, so in a single week it could perform around 20,000 years of human-level intellectual work. There is simply no way for us to truly understand how a being like that would behave. Herein lies the problem: we have seemingly rejected human intelligence, so how can we possibly deal with an artificial intelligence, which will have a much harder time understanding (or even caring about) our behaviour as a species?
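The arithmetic behind that “20,000 years in a week” claim is a straightforward back-of-the-envelope conversion. A minimal sketch (the million-fold speedup is Harris’s illustrative figure, not a measured constant):

```python
# Back-of-the-envelope check of the "20,000 years of thought in a week" claim.
# Assumption (Harris's illustrative figure): electronic processing is roughly
# one million times faster than biological processing.

speedup = 1_000_000   # machine "thought" speed relative to human thought
weeks_per_year = 52

# One wall-clock week of machine time equals `speedup` weeks of human-level
# work; convert those weeks into years.
years_per_week = speedup / weeks_per_year

print(f"{years_per_week:,.0f} years of work per wall-clock week")  # ~19,231
```

Roughly 19,000 years of human-level work per week, which rounds to the 20,000-year figure quoted above.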

By Josh Hamilton
