Ride the Lightning
Cybersecurity and Future of Law Practice Blog
by Sharon D. Nelson, Esq., President of Sensei Enterprises, Inc.
The Godfather of AI is Now Frightened by the Technology He Helped Build
May 3, 2023
On May 2, the MIT Technology Review published an article detailing why Geoffrey Hinton, the Godfather of AI, is having second thoughts about the AI he helped build.
Hinton is a pioneer of deep learning who helped develop some of the most important techniques at the heart of modern artificial intelligence. After a decade at Google, he has left the company to focus on new concerns he has about AI.
Taken aback by the capabilities of new large language models like GPT-4, Hinton seeks to raise public awareness of the serious risks that he now believes may accompany the technology.
The 75-year-old computer scientist, who was a joint recipient with Yann LeCun and Yoshua Bengio of the 2018 Turing Award for his work on deep learning, says he is ready to shift gears.
Hinton wants to spend his time on what he describes as “more philosophical work.” That work will focus on what he sees as a very real danger: that AI will turn out to be a disaster.
Now that he is no longer with Google, he can speak his mind freely. He adds, “There’s a lot of good things about Google that I want to say, and they’re much more credible if I’m not at Google anymore.”
Hinton says that the new generation of large language models—especially GPT-4, which OpenAI released in March—made him realize that machines are going to be a lot smarter than he thought, and he is concerned about the consequences.
Here’s my favorite Hinton quote: “These things are totally different from us. Sometimes I think it’s as if aliens had landed and people haven’t realized because they speak very good English.”
Hinton is best known for his work on a technique called backpropagation, which he proposed (with a pair of colleagues) in the 1980s. This is the algorithm that allows machines to learn. It underpins almost all neural networks today, from computer vision systems to large language models.
Hinton worked on neural networks, software abstractions of brains in which neurons and the connections between them are represented by code. By changing how those neurons are connected—changing the numbers used to represent them—the neural network can be rewired on the fly. In other words, it can be made to learn.
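For readers who want to see that idea in code, here is a minimal sketch (my own toy illustration, not Hinton's actual models) of a single artificial neuron whose connection weights are nudged by backpropagation-style gradient updates until it learns a simple pattern:

```python
# A toy single-neuron "network" trained by gradient descent, the core idea
# behind backpropagation. Purely illustrative; real deep-learning systems
# have billions of such weights arranged in many layers.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Training data: learn the logical OR function from examples.
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w1, w2, bias = 0.0, 0.0, 0.0   # the "connections", represented as numbers
lr = 1.0                        # learning rate

for epoch in range(1000):
    for (x1, x2), target in examples:
        out = sigmoid(w1 * x1 + w2 * x2 + bias)
        err = out - target                # how wrong the neuron was
        grad = err * out * (1 - out)      # gradient of the squared error
        # "Rewire" the network by adjusting the numbers that represent
        # its connections -- this is the learning step.
        w1 -= lr * grad * x1
        w2 -= lr * grad * x2
        bias -= lr * grad

for (x1, x2), target in examples:
    print((x1, x2), round(sigmoid(w1 * x1 + w2 * x2 + bias), 2), "expected", target)
```

After training, the printed outputs sit close to the expected 0s and 1s: the network "learned" OR purely by repeatedly adjusting the numbers that represent its connections.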
Large language models are made from massive neural networks with vast numbers of connections. But they are tiny compared with the brain. “Our brains have 100 trillion connections,” says Hinton. “Large language models have up to half a trillion, a trillion at most. Yet GPT-4 knows hundreds of times more than any one person does. So maybe it’s actually got a much better learning algorithm than us.”
Compared with brains, neural networks are widely believed to be bad at learning: it takes vast amounts of data and energy to train them. Brains, on the other hand, pick up new ideas and skills quickly, using a fraction of the energy that neural networks do.
“People seemed to have some kind of magic,” says Hinton. “Well, the bottom falls out of that argument as soon as you take one of these large language models and train it to do something new. It can learn new tasks extremely quickly.”
Hinton is talking about “few-shot learning,” in which pretrained neural networks, such as large language models, can be trained to do something new given just a few examples. For example, he notes that some of these language models can string a series of logical statements together into an argument even though they were never trained to do so directly.
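To make “few-shot learning” concrete, here is a minimal sketch of a few-shot prompt (my own toy example, not one from the article). The pretrained model is “taught” a brand-new task purely from a couple of demonstrations, with no retraining of its weights at all:

```python
# An illustrative few-shot prompt. The two example pairs are the "few
# shots"; the model infers the task and completes the final line.
prompt = """Rewrite each sentence in formal legal English.

Informal: The deal fell through.
Formal: The transaction was not consummated.

Informal: He skipped the hearing.
Formal: He failed to appear at the hearing.

Informal: They split the money.
Formal:"""

# Sent to a large language model, this prompt typically elicits something
# like "The proceeds were divided among the parties." -- a task the model
# was never explicitly trained to perform.
print(prompt)
```

That is the sense in which Hinton says these systems "learn new tasks extremely quickly": no gradient updates, just a handful of examples in the prompt.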
Compare a pretrained large language model with a human on the speed of learning a task like that, he says, and the human’s edge vanishes.
What about the fact that large language models make so much stuff up? Called “hallucinations” by AI researchers (though Hinton prefers the term “confabulations,” because it’s the correct term in psychology), these errors are often seen as a fatal flaw in the technology. The tendency to generate them makes chatbots untrustworthy and, many argue, shows that these models have no true understanding of what they say.
Hinton has an answer for that: bullshitting is a feature, not a bug. “People always confabulate,” he says. Half-truths and misremembered details are hallmarks of human conversation: “Confabulation is a signature of human memory. These models are doing something just like people.”
The difference is that humans usually confabulate more or less correctly, says Hinton. To Hinton, making stuff up isn’t the problem. Computers just need a bit more practice.
Remarkable, I thought when reading that part of the article. And I’ve never heard it explained this way.
We also expect computers to be either right or wrong—not something in between. “We don’t expect them to blather the way people do,” says Hinton. “When a computer does that, we think it made a mistake. But when a person does that, that’s just the way people work. The problem is most people have a hopelessly wrong view of how people work.”
This is also fascinating: “If you or I learn something and want to transfer that knowledge to someone else, we can’t just send them a copy,” he says. “But I can have 10,000 neural networks, each having their own experiences, and any of them can share what they learn instantly. That’s a huge difference. It’s as if there were 10,000 of us, and as soon as one person learns something, all of us know it.”
Hinton now thinks there are two types of intelligence in the world: animal brains and neural networks. “It’s a completely different form of intelligence,” he says. “A new and better form of intelligence.”
Not everyone shares that opinion, of course.
Hinton fears that these tools could figure out ways to manipulate or kill humans who aren’t prepared for the new technology.
“I have suddenly switched my views on whether these things are going to be more intelligent than us. I think they’re very close to it now and they will be much more intelligent than us in the future,” he says. “How do we survive that?”
He is especially worried that people could harness these tools to tilt the scales of things like elections and wars.
“Look, here’s one way it could all go wrong,” he says. “We know that a lot of the people who want to use these tools are bad actors like Putin or DeSantis. They want to use them for winning wars or manipulating electorates.”
“Don’t think for a moment that Putin wouldn’t make hyper-intelligent robots with the goal of killing Ukrainians,” he says. “He wouldn’t hesitate. And if you want them to be good at it, you don’t want to micromanage them—you want them to figure out how to do it.”
Yann LeCun, Meta’s chief AI scientist, does not share Hinton’s fears. “There is no question that machines will become smarter than humans—in all domains in which humans are smart—in the future,” says LeCun. “It’s a question of when and how, not a question of if.”
But he takes a totally different view on where things go from there. “I believe that intelligent machines will usher in a new renaissance for humanity, a new era of enlightenment,” says LeCun. “I completely disagree with the idea that machines will dominate humans simply because they are smarter, let alone destroy humans.”
“Even within the human species, the smartest among us are not the ones who are the most dominating,” says LeCun. “And the most dominating are definitely not the smartest. We have numerous examples of that in politics and business.”
One of Hinton’s priorities is to work with leaders in the technology industry to see if they can come together and agree on what the risks are and what to do about them. He thinks the international ban on chemical weapons might be one model of how to go about curbing the development and use of dangerous AI. “It wasn’t foolproof, but on the whole people don’t use chemical weapons,” he says.
A fascinating article. At the end of the day, I share Hinton’s fears – and I’m usually an optimist!
Sharon D. Nelson, Esq., President, Sensei Enterprises, Inc.
3975 University Drive, Suite 225, Fairfax, VA 22030
Phone: 703-359-0700
Digital Forensics/Cybersecurity/Information Technology
https://senseient.com
https://twitter.com/sharonnelsonesq
https://www.linkedin.com/in/sharondnelson
https://amazon.com/author/sharonnelson