We’ve all heard that Geoff Hinton recently and publicly left Google Brain to raise awareness of A.I. risk. Lost in that hoopla is the specific moment when he changed his mind about AI, and what that moment implies. But first, let’s establish what Geoff Hinton has been trying to figure out in the first place.
Geoff Hinton has spent decades trying to understand the human brain, and artificial neural networks turned out to be a promising tool for studying how it learns. While relatively little progress has been made on how the human brain learns, an artificial learning algorithm called Backpropagation has driven incredible progress in Machine Learning, and now Large Language Models, over the last 20 years.
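Since backpropagation carries so much of the weight in that story, a minimal sketch may help make it concrete. The code below trains a tiny one-hidden-layer network on XOR in NumPy; the network size, data, and learning rate are illustrative choices of mine, not anything from Hinton’s work.

```python
# A minimal sketch of backpropagation: a one-hidden-layer network with
# sigmoid activations and squared-error loss, trained on XOR.
# All hyperparameters here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: XOR, a classic task no single-layer network can learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(scale=1.0, size=(2, 4))  # input -> hidden weights
b1 = np.zeros((1, 4))
W2 = rng.normal(scale=1.0, size=(4, 1))  # hidden -> output weights
b2 = np.zeros((1, 1))
lr = 1.0

for step in range(5000):
    # Forward pass: compute activations layer by layer.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the error gradient from the output back
    # toward the input, applying the chain rule at each layer.
    d_out = (out - y) * out * (1 - out)   # grad at output pre-activation
    d_h = (d_out @ W2.T) * h * (1 - h)    # grad at hidden pre-activation

    # Gradient descent: nudge each weight against its gradient.
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```

The whole algorithm is those two passes: a forward pass to compute predictions, then a backward pass that assigns each weight its share of the blame for the error.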
Until now, it has been widely accepted in the AI community that humans learn far more efficiently than AI. The classic example is that human brains need far fewer samples of any given training input to produce good predictions. Indeed, this is the case for learning the alphabet or reading comprehension: it has been estimated that toddlers need somewhere between hundreds and a few thousand exposures to letters in different contexts. In comparison, the benchmark MNIST training set contains 60,000 images of handwritten digits. This gap has traditionally been taken as evidence that human brains are better at generalizing from rich, multisensory input.