Canada gained a new Nobel laureate Tuesday when Geoffrey Hinton, a University of Toronto computer scientist, was awarded the Nobel Prize in Physics for foundational discoveries in machine learning that underpin today’s artificial intelligence applications — advances he himself has warned carry dangerous risks for society.
“I’m flabbergasted,” Hinton said when reached by phone by the Nobel committee Tuesday. Hinton shares the prize with Princeton University researcher John Hopfield, who also made early breakthroughs; the two will split the award of 11 million Swedish kronor (about $1.45 million Canadian) equally.
Hinton toiled for years in relative obscurity in a field few thought would yield results, subsisting on modest Canadian grants, until he and two students published a breakthrough in 2012. The trio were soon snapped up by Google, where Hinton worked for many years, splitting his time between the tech giant’s California headquarters and his home in Toronto’s Annex neighbourhood.
Last year, however, Hinton quit Google in order to speak more freely about what he sees as the growing dangers of artificial intelligence. The news sent shock waves across the industry and the broader public, launching the dialogue about “AI safety” into the mainstream.
Here’s everything you need to know about the Nobel winner.
A family of scientists
Hinton was born in the U.K. to a family of accomplished scientists: His paternal great-great-grandparents were the mathematician Mary Everest Boole and the logician George Boole, whose invention of Boolean algebra underpins modern computing. His father was an entomologist who described many new species of beetles, and his mother was a math teacher. He recalls her telling him it was OK if he didn’t get a PhD, in a tone that clearly suggested it was not.
Hinton switched majors several times as an undergraduate, trying to find a subject that would allow him to understand the human brain, and by extension the human mind. He found philosophy, physiology and psychology all unequal to the task, though he eventually graduated from Cambridge University with a degree from the psychology department.
As a PhD student, his adviser allowed him to indulge his interests in the then-unpopular “neural networks,” algorithmic models designed to mimic the structure of the human brain. As the committee that awards the Nobel Prizes noted, however, “some discouraging theoretical results caused many researchers to suspect that these neural networks would never be of any real use.” That didn’t seem to deter Hinton, who earned his doctorate in 1978 from Edinburgh University.
A pull to Canada
After getting his PhD, Hinton bumped around various American universities. But most of the funding for artificial intelligence research at the time came from the U.S. Defense Department, and Hinton was deeply concerned about the technology being utilized for weapons on the battlefield.
As a result, he came to Canada in 1987, both to avoid entangling his research with the U.S. military and because he was attracted by a position that offered the maximum amount of time to pursue basic research. Hinton and other AI pioneers from this time also credit the support of CIFAR, a Canadian-based research organization that nurtured research into the field starting in that decade.
The Royal Swedish Academy of Sciences, which awards the Nobel Prize, cited Hopfield and Hinton’s work in the 1980s that established the foundations for artificial neural networks. But after a burst of enthusiasm in this period tied to these breakthroughs, interest receded again when the systems faltered on more difficult tasks, ushering in an “AI winter.”
Hinton, toiling away in his modest office near College and McCaul Streets on the U of T campus, was unfazed by these struggles in real-world applications.
“There is a lot of pressure to make things more applied; I think it’s a big mistake,” Hinton told the Star in 2015. “In the long run, curiosity-driven research just works better,” he said.
“Real breakthroughs come from people focusing on what they’re excited about.”
Breakthroughs
Hinton and his graduate students began publishing research in the 2010s that sparked renewed interest in the field. The big breakthrough came in 2012, when Hinton and two of his students, Alex Krizhevsky and Ilya Sutskever, entered an image recognition contest. The neural network they built was so good at correctly identifying images — a person, a cat — that it far outperformed the previous best algorithms, taking a big stride towards the accuracy of humans.
Google snapped up the trio of U of T researchers, paying millions to acquire the company they formed. For the next decade, Hinton would spend part of the year at the company’s Mountain View, Calif., headquarters, working on a team then called Google Brain, where the company’s research on artificial intelligence was underway.
That decade saw a kind of arms race for artificial intelligence talent among the biggest tech giants, which hired heavily from the then-small pool of researchers. Many of Hinton’s former students now work in the upper ranks of companies like Meta and Apple. The neural networks Hinton and his colleagues developed now underpin many of the computer tasks we take for granted, like the ability to identify which rare species of wildflower you’ve just photographed on your phone, or to automatically translate speech from a different language.
Ilya Sutskever, the former graduate student of Hinton’s originally hired by Google, left the company after three years to co-found OpenAI. In 2022, the company launched ChatGPT, an artificial intelligence chatbot that immediately stirred both huge interest and concern for its humanlike conversational abilities.
Warnings
Hinton had always been concerned about the marriage of artificial intelligence and military weapons, and had believed that the technology risked replacing human jobs with bots and algorithms. But he thought these risks would not pose a real threat to society for many decades, because the algorithms were still not smart enough.
That all changed with what he saw as improvements in AI’s capacity to mimic or even outperform human thinking. In May 2023, Hinton said he was leaving Google in order to speak more freely about what he saw as these serious risks.
Hinton has enumerated many concerns about the risks of AI since then, from enhanced surveillance to existential threats to humanity. At this year’s Collision Conference in Toronto, he told an audience: “As I left Google, I figured I could just warn … that in the long run, these things could get smarter than us and might go rogue. That’s not science fiction … That’s real.”
Google, OpenAI, and other major players in the field insist that they care deeply about AI safety. But Hinton and others have argued that governments need to institute better guardrails on the technology, and that we as a society need to have deeper conversations about these risks.
The Academy cited these risks when awarding the Nobel on Tuesday.
“While machine learning has enormous benefits, its rapid development has also raised concerns about our future,” Ellen Moons, a member of the Nobel committee at the Royal Swedish Academy of Sciences, said.
“Collectively, humans carry the responsibility for using this new technology in a safe and ethical way for the greatest benefit of humankind.”
With files from The Canadian Press