(Image: Google identifying cats. Credit: Wikimedia Commons)

As intelligent as computers continue to get, it's still a lot of work for them to perform tasks many humans do on a regular basis, like, say, enjoying cat videos on YouTube. In an attempt to bridge that gap, scientists from Google's X laboratory created a simulated brain by wiring together 16,000 computer processors and letting the system browse the Internet, learning facts about the world as it went. And the simulated brain successfully found YouTube's cats.

For the study, the Google research team took 10 million randomly selected YouTube videos and fed their thumbnails into the machine. The machine identified a pattern (there seemed to be a lot of these furry things) and, through trial and error, successfully taught itself to recognize cats. As Google fellow Jeff Dean told the New York Times: "It basically invented the idea of a cat."
The team never gave it any hints; the machine simply made enough associations across the millions of images to put the idea together on its own. The research team was led by Stanford University computer scientist Andrew Y. Ng and Google fellow Jeff Dean. The "brain" assembled a dreamlike digital image of a cat by using a hierarchy of memory locations to cull features from its exposure to the millions of images. Presented with new digital images, Google's brain looked for cats. Biologists suggest that in the human brain, individual neurons learn to detect significant objects; the software neural network closely mirrored this behavior, turning out to be a "cybernetic cousin" of what takes place in the human visual cortex.
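One way to produce such a "dreamlike" picture from a trained network is to ask what input most strongly excites a high-level neuron. A minimal sketch of that idea for a single linear neuron follows; the weights here are random stand-ins for trained values, not anything from Google's actual system:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical learned weights of one high-level "cat neuron",
# defined over a flattened 8x8 image patch. Random stand-ins only.
w = rng.normal(size=64)

def activation(x):
    """Linear response of the neuron to an input patch."""
    return w @ x

# Among all inputs of fixed (unit) norm, the one that maximally
# excites a linear neuron is its own normalized weight vector --
# visualizing it shows what the neuron has learned to detect.
optimal = w / np.linalg.norm(w)

# Any other unit-norm input activates the neuron no more strongly
# (Cauchy-Schwarz inequality).
other = rng.normal(size=64)
other /= np.linalg.norm(other)
print(activation(optimal) >= activation(other))  # True
```

For the deep, nonlinear network in the actual study, the same question has no closed-form answer and is tackled by numerical optimization over the input, but the principle is the one shown here.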
“We would like to understand if it is possible to learn a face detector using only unlabeled images downloaded from the Internet,” said the authors, describing the purpose at the outset of their research. “Contrary to what appears to be a widely held negative belief, our experimental results reveal that it is possible to achieve a face detector via only unlabeled data. Control experiments show that the feature detector is robust not only to translation but also to scaling and 3D rotation,” they said.
Their work in self-teaching machines exemplifies scientific interest in what clusters of computers can now achieve in learning systems. According to Ng, the idea is that "You throw a ton of data at the algorithm and you let the data speak and have the software automatically learn from the data." At the same time, he is reluctant to suggest that what scientists are achieving exactly mirrors the human brain, since computing capacity is still dwarfed by the number of connections in the brain. "A loose and frankly awful analogy is that our numerical parameters correspond to synapses," he said.
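The "let the data speak" approach can be illustrated with an autoencoder, a network trained only to reconstruct its input, so its features emerge without any labels. The toy sketch below uses random data and a single hidden layer; it is an illustration of the general technique, not the sparse, billion-parameter architecture used in the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy unlabeled "images": 200 flattened 8x8 patches drawn from two
# blurry prototypes, standing in for the 10 million thumbnails.
prototypes = rng.normal(size=(2, 64))
data = prototypes[rng.integers(0, 2, 200)] + 0.1 * rng.normal(size=(200, 64))

# One-hidden-layer autoencoder: compress 64 -> 8 -> 64. No labels
# anywhere; the only training signal is reconstruction error.
W1 = 0.1 * rng.normal(size=(64, 8))
W2 = 0.1 * rng.normal(size=(8, 64))

def forward(x):
    h = np.tanh(x @ W1)   # learned features ("neurons")
    return h, h @ W2      # reconstruction of the input

def loss(x):
    _, recon = forward(x)
    return np.mean((recon - x) ** 2)

lr = 0.5
initial = loss(data)
for _ in range(300):
    h, recon = forward(data)
    err = 2 * (recon - data) / data.size   # d(loss)/d(recon)
    gW2 = h.T @ err
    gh = err @ W2.T * (1 - h ** 2)         # backprop through tanh
    gW1 = data.T @ gh
    W1 -= lr * gW1
    W2 -= lr * gW2

final = loss(data)
print(final < initial)  # reconstruction improves with no labels
```

After training, the hidden units have organized themselves around whatever structure the data happens to contain, which is the sense in which the software "automatically learns from the data."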