February 27, 2017

The Power of Recurrent Neural Networks

By Josh Hamilton

Recurrent Neural Networks

A recurrent neural network (RNN) is a type of neural network in which connections between units form a directed cycle. First developed in the 1980s, RNNs have gained new popularity as AI and machine learning spread across industries. They are particularly well suited to problems like handwriting and image recognition, which require a sequential understanding of data, because an RNN maintains an internal state that carries information about previous inputs. The technology has already been used in Natural Language Processing (NLP) to build language models: one such program was trained on a huge number of Shakespeare's poems, and after training it could generate its own "Shakespearean" poems that were almost indistinguishable from the originals!
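The directed cycle and the carried-over state can be shown in a few lines. Below is a minimal sketch of a single step of a vanilla RNN, assuming one-hot character inputs; the weight names (`W_xh`, `W_hh`, `W_hy`) and sizes are illustrative placeholders, not from any particular library or paper.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, hidden_size = 27, 16   # e.g. 26 letters + space

W_xh = rng.normal(0, 0.01, (hidden_size, vocab_size))   # input -> hidden
W_hh = rng.normal(0, 0.01, (hidden_size, hidden_size))  # hidden -> hidden (the cycle)
W_hy = rng.normal(0, 0.01, (vocab_size, hidden_size))   # hidden -> output

def rnn_step(x, h_prev):
    """One time step: the new hidden state mixes the current input with
    the previous hidden state, so earlier characters influence the
    prediction for the next one."""
    h = np.tanh(W_xh @ x + W_hh @ h_prev)
    y = W_hy @ h                       # unnormalised scores over the vocabulary
    return h, y

h = np.zeros(hidden_size)
for ch in [0, 1, 2]:                   # feed three one-hot encoded characters
    x = np.zeros(vocab_size)
    x[ch] = 1.0
    h, y = rnn_step(x, h)

print(h.shape, y.shape)                # (16,) (27,)
```

It is this recycled hidden state `h`, fed back in at every step, that lets a trained character-level model pick up the rhythms of Shakespearean verse.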

A similar deep learning algorithm has now been built and successfully tested that can describe an input image in a single sentence. This is a significant step forward in understanding how machine learning algorithms arrive at specific conclusions: the problem scientists and engineers have been grappling with is that an AI (or any computer) typically cannot explain why it arrived at the results it produced. Scientists at DARPA have been grappling with this problem for a while, but a recent paper has taken image recognition to the next level.

After images were fed into an RNN, the algorithm was eventually able to produce a description of each image, including details that are not always obvious to machine learning algorithms, such as the act of throwing a ball and the concepts of 'over' and 'under', which are notoriously difficult to infer from a 2D image. However, this pattern and image recognition is only the beginning of what the technology can do: an even more recent paper has seen it careen forward at an incredible rate.
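The common recipe behind such captioning systems is an encoder-decoder pairing: a convolutional network summarises the image as a feature vector, which conditions an RNN that emits the caption one word at a time. The sketch below shows only that data flow; all weights are random placeholders (so the "caption" it prints is noise), and the tiny vocabulary is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
feat_size, hidden, vocab = 32, 32, 10
words = ["<end>", "a", "ball", "dog", "over", "under",
         "throws", "the", "person", "catches"]

image_features = rng.normal(size=feat_size)   # stand-in for a CNN's output

W_hh = rng.normal(0, 0.1, (hidden, hidden))
W_xh = rng.normal(0, 0.1, (hidden, vocab))
W_hy = rng.normal(0, 0.1, (vocab, hidden))

h = np.tanh(image_features)                   # the image initialises the RNN state
x = np.zeros(vocab)
x[0] = 1.0                                    # start/end token
caption = []
for _ in range(5):                            # greedy decoding, capped length
    h = np.tanh(W_xh @ x + W_hh @ h)
    word_id = int(np.argmax(W_hy @ h))        # pick the highest-scoring word
    if word_id == 0:                          # <end> token stops the sentence
        break
    caption.append(words[word_id])
    x = np.zeros(vocab)
    x[word_id] = 1.0                          # feed the chosen word back in

print(caption)
```

Because each emitted word is fed back into the recurrent state, the model can keep relational details like 'over' and 'under' consistent across the whole sentence.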

By using the feedback loops built into RNNs, sentences have been fed into a neural network and brand-new images synthesized from those descriptions. One network creates millions of candidate images from the description, while a discriminator network determines whether they match it. As training progresses the images become increasingly accurate and refined, the two networks working against each other to constantly improve.
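The two-network arrangement described above is the adversarial (GAN-style) setup used in text-to-image work. Real systems use deep convolutional networks; in this sketch both networks are reduced to single linear layers over toy vectors, purely to show how the generator and discriminator are wired together. All names and sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
text_dim, noise_dim, img_dim = 8, 4, 16

G = rng.normal(0, 0.1, (img_dim, text_dim + noise_dim))  # generator weights
D = rng.normal(0, 0.1, img_dim + text_dim)               # discriminator weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def generate(text):
    z = rng.normal(size=noise_dim)            # noise makes each sample different
    return G @ np.concatenate([text, z])      # "image" conditioned on the text

def discriminate(img, text):
    # Score for: this image is real AND matches the description.
    return float(sigmoid(D @ np.concatenate([img, text])))

text = rng.normal(size=text_dim)              # stand-in for an encoded sentence
fake = generate(text)
score = discriminate(fake, text)
print(fake.shape, score)
```

In training, the generator's weights are updated to push `score` towards 1 while the discriminator's are updated to push it towards 0 for fakes; it is this tug-of-war that gradually sharpens the synthesized images.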

In a two-stage version of this process the results were mind-blowing, with the synthesized images going from blurred outlines to much higher-quality pictures. While this technology is rather young and untested, the potential is almost unfathomable: what are currently 256×256 images could soon be HD pictures or animations. Eventually we could see entire novels illustrated or animated by this sort of algorithm, or even entire films played out simply by feeding in a detailed script and allowing the algorithm to do all the hard work.
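The two-stage idea can be sketched as a pipeline, assuming a StackGAN-like design: stage one paints a coarse low-resolution image from the text, and stage two upsamples it to 256×256 while adding text-conditioned detail. The weights here are random placeholders; only the resolutions and the data flow are the point.

```python
import numpy as np

rng = np.random.default_rng(3)
text = rng.normal(size=8)                 # stand-in for an encoded sentence

def stage_one(text):
    # Coarse layout only: a blurry 64x64 image conditioned on the text.
    return rng.normal(size=(64, 64)) + text.mean()

def stage_two(coarse, text):
    # Upsample 4x (nearest neighbour via np.kron), then add a
    # text-conditioned residual, mimicking the refinement generator.
    upsampled = np.kron(coarse, np.ones((4, 4)))
    detail = rng.normal(scale=0.1, size=upsampled.shape) * text.std()
    return upsampled + detail

coarse = stage_one(text)
refined = stage_two(coarse, text)
print(coarse.shape, refined.shape)        # (64, 64) (256, 256)
```

Splitting the job this way lets the first network worry only about layout and the second only about texture, which is why the outputs jump from blurred outlines to recognisable pictures.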

One of the more abstract uses of deep learning

Google has, predictably, already been wrestling with the technology to produce synthesized images for Street View. They have used RNN algorithms to produce new images by giving the algorithm views of a scene from two other perspectives and asking it to fill in the gaps with a brand-new intermediate image. The idea is to create a seamless rolling movie in Street View, rather than the disjointed experience users currently get when trying to explore streets on the other side of the world. So far they have met with relative success, although, being Google, they have lamented the poor resolution of some of the more detailed aspects of each image.

This is one of the more abstract uses of deep learning, but it really shows us the wondrous untapped potential of this technology. It is going to infiltrate and revolutionise every single part of society, in ways that we can't even begin to imagine. For now, we can just gaze in wonder at the power of a few lines of code in a recurrent neural network.

Josh Hamilton

Josh Hamilton is an aspiring journalist and writer who has written for a number of publications covering Cloud computing, Fintech and Legaltech. Josh has a Bachelor's Degree in Political Law from Queen's University in Belfast. His studies included the Politics of Sustainable Development, European Law, Modern Political Theory and the Law of Ethics.