Google can see cats
The unofficial star of the Internet is the cat. Cat videos and Grumpy Cat are recurring fixtures of the running joke about how easily time can be wasted online. Google has now reinjected some legitimacy into this cultural meme by using cats as a flagship for its new image recognition technology, which uses artificial-intelligence software to interpret what is going on in a photograph and then produce a descriptive caption.
Although such a task seems relatively easy for a human, separating out the shapes and colors in a picture and placing them in context requires an enormous amount of processing power: the software must determine not only what the objects in a picture represent, but also why they are there at all.
The Cat Connection
The cat connection dates back to 2012, when a joint Google/Stanford team showed millions of images from YouTube videos to a computer, which then taught itself to recognize the cats in those images.
According to Google’s own blog, the model for image recognition comes from adapting language-processing applications, such as those that convert a French phrase into a vector representation that can then be translated into German. Vector representation is described in a separate Google blog as follows: “[The computer] understands that Paris and France are related the same way Berlin and Germany are (capital and country), and not the same way Madrid and Italy are. [From this] it can learn the concept of capital cities, just by reading lots of news articles — with no human supervision.” This vector representation is then blended with image-processing software to determine, in essence, what is a cat and what is not.
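The capital-and-country relationship Google describes can be sketched in a few lines of code. The vectors below are invented toy values chosen so the analogy works out; real systems learn vectors with hundreds of dimensions from large text corpora, but the arithmetic is the same: the word nearest to (Paris − France + Germany) should be Berlin.

```python
import math

# Toy 3-dimensional word vectors, invented for illustration only.
# In a real model these would be learned from text, not hand-picked.
vectors = {
    "Paris":   [2.0, 4.0, 1.0],
    "France":  [1.0, 1.0, 1.0],
    "Berlin":  [3.0, 5.0, 2.0],
    "Germany": [2.0, 2.0, 2.0],
    "Madrid":  [0.5, 3.0, 0.5],
    "Italy":   [4.0, 0.5, 3.0],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means the vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def analogy(a, b, c):
    """Answer 'a is to b as c is to ?' by finding the word whose
    vector is closest to (b - a + c)."""
    target = [vb - va + vc
              for va, vb, vc in zip(vectors[a], vectors[b], vectors[c])]
    candidates = {w: v for w, v in vectors.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cosine(candidates[w], target))

print(analogy("France", "Paris", "Germany"))  # Berlin
```

With these toy values, subtracting the France vector from the Paris vector isolates a rough "capital of" offset; adding that offset to Germany lands on Berlin rather than Madrid or Italy, which is the relationship the quoted blog post describes.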
Google concedes that the process is as yet far from perfect. Many photographs are misinterpreted, from mildly to wildly, as the computers struggle to connect what they see with what actually occurs in the human world.
However, as the image in its November 17, 2014 blog post shows, “Two pizzas sitting on top of a stove top oven” is a fair assessment of what is going on in that picture.
The potential for image recognition of this sort is limitless. Some immediate uses range from describing images to the visually impaired to faster and more accurate placement on Google Maps by reading house numbers.
By Steve Prentice
Steve Prentice is a project manager, writer, speaker and expert on productivity in the workplace, specifically the juncture where people and technology intersect. He is a senior writer for CloudTweaks.