A groundbreaking innovator in the field of artificial intelligence (AI) is sounding the alarm over the dangers posed by the technology for which his work laid the foundation.
Geoffrey Hinton, the British computer scientist who has been called the “Godfather of AI,” recently left his position as a vice president and engineering fellow at Google so he could join the dozens of other experts in the field speaking out about the threats and risks of AI.
“It is hard to see how you can prevent the bad actors from using it for bad things,” Hinton, 75, told The New York Times in an interview.
Following the launch of OpenAI’s latest version of its GPT chatbot in March, other AI professionals signed an open letter, written by the nonprofit Future of Life Institute, warning that the technology poses “profound risks to society and humanity.”
Hinton, like the letter’s signatories, said he finds the recent advancements in AI to be “scary” and worries about what they might mean for the future—particularly now that Microsoft has incorporated the technology into its Bing search engine.
With Google now rushing to do the same, Hinton noted that the race between Big Tech companies to develop more powerful AI could easily spin out of control.
One particular facet of AI technology that concerns the computer scientist is its ability to generate fake images, video, and text so convincing that the average person will “not be able to know what is true anymore.”
He also warned that, in the future, AI could potentially replace humans in the workplace and be used to create fully autonomous weapons.
“The idea that this stuff could actually get smarter than people—a few people believed that,” Hinton said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
Hinton is primarily known for his role in the development of deep learning, a form of machine learning that trains computers to process data like the human brain. That work was integral to the development of AI, but in retrospect, Hinton said he regretted his role in the process.
“I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” he said. Hinton notified Google last month that he was leaving the company after more than a decade.
On May 1, he clarified that the reason for his departure was solely to distance Google from his statements and had nothing to do with the company’s approach to AI. “I left so that I could talk about the dangers of AI without considering how this impacts Google,” he wrote in a tweet. “Google has acted very responsibly.”
In a statement provided to The New York Times, Jeff Dean, Google senior vice president of research and AI, said: “We remain committed to a responsible approach to AI. We’re continually learning to understand emerging risks while also innovating boldly.”
Despite Google’s assurances, others have been critical of the company’s methods. In a recent interview with Fox News, Tesla CEO Elon Musk—who also co-founded OpenAI—said he felt that Larry Page, Google co-founder, was not taking the risks of AI seriously.
“He really seemed to want digital superintelligence, basically digital God, as soon as possible,” Musk said, referencing conversations he has had with Page on the matter.
“He’s made many public statements over the years that the whole goal of Google is what’s called AGI, artificial general intelligence, or artificial superintelligence,” he noted. “I agree with him that there’s great potential for good, but there’s also potential for bad.”
Musk, who signed the Future of Life Institute’s letter, has been outspoken about his concerns with AI in general, holding that it poses a serious risk to human civilization.
“AI is perhaps more dangerous than, say, mismanaged aircraft design, or production maintenance, or bad car production, in the sense that it is, it has the potential—however small one may regard that probability, but it is nontrivial—it has the potential of civilizational destruction,” he told Fox News.
Another fear Musk revealed is the worry that AI is being trained in political correctness, which he maintained is just a form of deception and “saying untruthful things.”
Despite those concerns—or perhaps because of them—the tech billionaire has also expressed interest in developing his own “truth-seeking” AI that would be trained to care about humanity: “We want pro-human,” he said. “Make the future good for the humans. Because we’re humans.”