The Power of Recurrent Neural Networks

Recurrent Neural Networks

A recurrent neural network (RNN) is a type of neural network in which connections between units form a directed cycle. They were first developed in the 1980s, but have gained new popularity as AI and machine learning spread into every industry and process. RNNs are particularly good at tasks like handwriting and image recognition, which require a sequential understanding of data, because an RNN can maintain information about previous inputs. This technology has already been built on top of Natural Language Processing (NLP) to produce language models: one such program was trained on a huge number of Shakespeare’s poems, and after training it could generate its own “Shakespearean” poems that were hard to distinguish from the originals!
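To make the idea of “maintaining information about previous inputs” concrete, here is a minimal sketch of a character-level RNN language model of the kind used in the Shakespeare experiment. It uses PyTorch purely for illustration; the model, corpus and training settings are placeholder assumptions, not the actual system from that work.

```python
import torch
import torch.nn as nn

class CharRNN(nn.Module):
    def __init__(self, vocab_size, hidden_size=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        # The recurrent layer's hidden state carries information
        # about everything the network has read so far.
        self.rnn = nn.RNN(hidden_size, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, vocab_size)

    def forward(self, x, hidden=None):
        out, hidden = self.rnn(self.embed(x), hidden)
        return self.head(out), hidden

# Placeholder corpus; the real experiment used a large body of Shakespeare.
text = "Shall I compare thee to a summer's day? Thou art more lovely..."
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
ids = torch.tensor([stoi[c] for c in text]).unsqueeze(0)

model = CharRNN(len(chars))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Train by predicting the next character at every position.
for step in range(200):  # a real run needs far more data and steps
    logits, _ = model(ids[:, :-1])
    loss = loss_fn(logits.reshape(-1, len(chars)), ids[:, 1:].reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Generate new text one character at a time, feeding each guess back in.
itos = {i: c for c, i in stoi.items()}
x, hidden, generated = ids[:, :1], None, []
for _ in range(100):
    logits, hidden = model(x, hidden)
    x = torch.multinomial(torch.softmax(logits[:, -1], dim=-1), 1)
    generated.append(itos[x.item()])
print("".join(generated))
```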

A similar deep learning algorithm has now been written and successfully tested to describe an input image in a single sentence. This is a significant step forward in understanding how machine learning algorithms arrive at specific conclusions. The problem that scientists and engineers have been grappling with is the inability of AI (or any computer) to explain why it produced a particular set of results or conclusions. Researchers at DARPA have been working on this for a while, but a recent paper has taken image recognition to the next level.

After images were fed into an RNN, the algorithm was eventually able to produce a description of each one, including details that are not always obvious to machine learning algorithms, such as the act of throwing a ball, or the concepts of ‘over’ and ‘under’ (notoriously difficult to ascertain from a 2D image). However, this kind of pattern and image recognition is only the beginning of what the technology can do; an even more recent paper has pushed it forward at an incredible rate.
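As a rough illustration of how a captioning system like this fits together, the sketch below pairs a small convolutional encoder with an RNN decoder that emits one word at a time. The architecture, layer sizes and vocabulary are assumptions made for demonstration, not the model from the papers discussed above.

```python
import torch
import torch.nn as nn

class CaptionModel(nn.Module):
    def __init__(self, vocab_size, feat_size=512, hidden_size=512):
        super().__init__()
        # Stand-in CNN encoder; real systems use a large pretrained network.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_size),
        )
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.decoder = nn.LSTM(hidden_size, hidden_size, batch_first=True)
        self.init_h = nn.Linear(feat_size, hidden_size)
        self.head = nn.Linear(hidden_size, vocab_size)

    def forward(self, images, captions):
        feats = self.encoder(images)                      # (B, feat_size)
        h0 = torch.tanh(self.init_h(feats)).unsqueeze(0)  # image conditions the RNN state
        c0 = torch.zeros_like(h0)
        out, _ = self.decoder(self.embed(captions), (h0, c0))
        return self.head(out)                             # next-word logits per step

model = CaptionModel(vocab_size=1000)
images = torch.rand(2, 3, 64, 64)
captions = torch.randint(0, 1000, (2, 12))
logits = model(images, captions)   # shape (2, 12, 1000)
```

In practice the encoder is a pretrained image network and the decoder is trained on pairs of images and human-written captions, predicting each word of the caption in turn.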

By using the feedback loops built into RNNs, sentences have been fed into a neural network and brand new images have been synthesized from those descriptions. One network generates candidate images from the description, while a discriminator network judges whether they match it. As training progresses, the two networks push against each other, and the images become steadily more accurate and refined.
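The generator/discriminator loop described above can be sketched in a few dozen lines. The tiny fully connected networks and random stand-in data below are assumptions made for the sake of a runnable example; real text-to-image GANs are vastly larger and condition on learned sentence embeddings.

```python
import torch
import torch.nn as nn

TEXT_DIM, NOISE_DIM, IMG_PIXELS = 128, 100, 64 * 64 * 3

# Generator: text embedding + noise -> a flattened image.
generator = nn.Sequential(
    nn.Linear(TEXT_DIM + NOISE_DIM, 512), nn.ReLU(),
    nn.Linear(512, IMG_PIXELS), nn.Tanh(),
)
# Discriminator: text embedding + image -> "does this image match the text?"
discriminator = nn.Sequential(
    nn.Linear(TEXT_DIM + IMG_PIXELS, 512), nn.LeakyReLU(0.2),
    nn.Linear(512, 1),
)

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(text_emb, real_imgs):
    noise = torch.randn(text_emb.size(0), NOISE_DIM)
    fake_imgs = generator(torch.cat([text_emb, noise], dim=1))

    # Discriminator: real image with matching text -> 1, generated image -> 0.
    d_real = discriminator(torch.cat([text_emb, real_imgs], dim=1))
    d_fake = discriminator(torch.cat([text_emb, fake_imgs.detach()], dim=1))
    d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator: try to fool the discriminator into scoring its images as real.
    g_score = discriminator(torch.cat([text_emb, fake_imgs], dim=1))
    g_loss = bce(g_score, torch.ones_like(g_score))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()

# One step on random stand-in data.
train_step(torch.randn(4, TEXT_DIM), torch.rand(4, IMG_PIXELS) * 2 - 1)
```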

In a two-stage version of this process the results were mind-blowing, with the synthesized images going from blurred outlines to much higher-quality pictures. While this technology is young and largely untested, the potential is almost unfathomable: what are currently 256×256 images could soon be HD pictures or animations. Eventually we could see entire novels illustrated or animated by this sort of algorithm, or even entire films played out simply by feeding in a detailed script and letting the algorithm do the hard work.

One of the more abstract uses of deep learning

Google have, predictably, already been experimenting with the technology to produce synthesized images for Google Street View. They have used RNN-based algorithms to produce new images by giving the network views of a scene from two other perspectives and asking it to fill in the gaps with a brand new image. The idea is to create a seamless rolling movie in Street View, rather than the disjointed experience users currently get when trying to explore streets on the other side of the world. So far they have met with relative success, although, being Google, they have lamented the poor resolution of some of the more detailed parts of each image.
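Google have not published enough detail here to reproduce their system, but the basic idea of filling in the gap between two views can be sketched as a small network that takes two neighbouring images and is trained to predict the frame in between. Everything in this snippet, including the architecture, sizes and loss, is an illustrative assumption rather than their actual method.

```python
import torch
import torch.nn as nn

class ViewInterpolator(nn.Module):
    def __init__(self):
        super().__init__()
        # Two RGB views stacked on the channel axis give 6 input channels.
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, left_view, right_view):
        return self.net(torch.cat([left_view, right_view], dim=1))

model = ViewInterpolator()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-in data: two neighbouring street-level views and the true middle frame.
left, right, middle = (torch.rand(1, 3, 128, 128) for _ in range(3))

predicted = model(left, right)
loss = nn.functional.l1_loss(predicted, middle)  # train to match the real middle view
loss.backward()
optimizer.step()
```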

This is one of the more abstract uses of deep learning, but it really shows the wondrous untapped potential of this technology. It is going to infiltrate and revolutionise every single part of society, in ways that we can’t even begin to imagine. For now, we can just gaze in wonder at the power of a few lines of code in a recurrent neural network.

By Josh Hamilton
