Self-Taught AI Shows Similarities to How the Brain Works

For a decade now, many of the most impressive artificial intelligence systems have been taught using a huge inventory of labeled data. An image might be labeled “tabby cat” or “tiger cat,” for example, to “train” an artificial neural network to correctly distinguish a tabby from a tiger. The strategy has been both spectacularly successful and woefully deficient. Such “supervised” training requires data laboriously labeled by humans, and the neural networks often take shortcuts, learning to associate the labels with minimal and sometimes superficial information.

For example, a neural network might use the presence of grass to recognize a photo of a cow, because cows are typically photographed in fields. “We are raising a generation of algorithms that are like undergrads who didn’t come to class the whole semester and then the night before the final, they’re cramming,” said Alexei Efros, a computer scientist at the University of California, Berkeley. “They don’t really learn the material, but they do well on the test.”

Flawed Supervision

Brain models inspired by artificial neural networks came of age about 10 years ago, around the same time that a neural network named AlexNet revolutionized the task of classifying unknown images. That network, like all neural networks, was made of layers of artificial neurons, computational units that form connections to one another that can vary in strength, or “weight.” If a neural network fails to classify an image correctly, the learning algorithm updates the weights of the connections between the neurons to make that misclassification less likely in the next round of training. The algorithm repeats this process many times with all the training images, tweaking weights, until the network’s error rate is acceptably low.
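That weight-update cycle is the standard supervised training loop. As a rough illustration only, here is a minimal sketch in PyTorch; the toy two-layer classifier, the random stand-in "images" and labels, and the hyperparameters are all placeholders chosen for this example, not anything described in the article.

```python
import torch
from torch import nn

# Placeholder stand-ins for a labeled dataset: 256 fake "images" of 32x32 pixels,
# each tagged with one of two classes (think "tabby cat" vs. "tiger cat").
images = torch.randn(256, 32 * 32)
labels = torch.randint(0, 2, (256,))

# A small network: layers of artificial "neurons" joined by weighted connections.
model = nn.Sequential(nn.Linear(32 * 32, 64), nn.ReLU(), nn.Linear(64, 2))
loss_fn = nn.CrossEntropyLoss()                          # penalizes misclassifications
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(20):                  # repeat the process many times
    logits = model(images)               # the network's current guesses
    loss = loss_fn(logits, labels)       # how wrong those guesses are
    optimizer.zero_grad()
    loss.backward()                      # how each weight contributed to the error
    optimizer.step()                     # nudge weights so the mistakes become less likely
    # training stops once the error rate is acceptably low
```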

Self-Supervised Brains

In systems such as this, some neuroscientists see echoes of how we learn. “I think there’s no doubt that 90% of what the brain does is self-supervised learning,” said Blake Richards, a computational neuroscientist at McGill University and Mila, the Quebec Artificial Intelligence Institute. Biological brains are thought to be continually predicting, say, an object’s future location as it moves, or the next word in a sentence, just as a self-supervised learning algorithm attempts to predict the gap in an image or a segment of text. And brains learn from their mistakes on their own, too — only a small part of our brain’s feedback comes from an external source saying, essentially, “wrong answer.”
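To make the contrast with supervised training concrete, here is a minimal sketch of that fill-in-the-gap idea: the "label" at each step is simply a token hidden from the model's own input, so no human annotation is needed. The vocabulary size, the small GRU model, and the random placeholder "sentences" below are illustrative assumptions, not a method described in the article.

```python
import torch
from torch import nn

# Self-supervised setup: the training signal comes from the data itself.
vocab_size, seq_len, hidden = 50, 8, 32
sequences = torch.randint(0, vocab_size, (512, seq_len))   # placeholder "sentences"

embed = nn.Embedding(vocab_size + 1, hidden)   # extra index serves as the [MASK] token
encoder = nn.GRU(hidden, hidden, batch_first=True)
predict = nn.Linear(hidden, vocab_size)
loss_fn = nn.CrossEntropyLoss()
params = list(embed.parameters()) + list(encoder.parameters()) + list(predict.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3)

MASK = vocab_size
rows = torch.arange(sequences.size(0))
for step in range(100):
    pos = torch.randint(0, seq_len, (sequences.size(0),))   # which token to hide
    targets = sequences[rows, pos]                          # the hidden "answer"
    masked = sequences.clone()
    masked[rows, pos] = MASK                                # blank out the gap

    states, _ = encoder(embed(masked))
    logits = predict(states[rows, pos])                     # guess the missing token
    loss = loss_fn(logits, targets)    # the error signal is generated from the data itself
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The loop has the same shape as the supervised one above; the only difference is where the "right answer" comes from, which is the point the article is making.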
