The Singularity Is Less Than 10 Years Away, Says AI Veteran
Generative AI has become firmly entrenched in the cultural zeitgeist, and everyone from computer scientists to social media mavens is tuned in, looking ahead to the next great leap: the singularity, the moment when artificial intelligence surpasses human intelligence—and escapes human control.
Before the mainstream adoption of generative AI and the broadening worries about the dangers of the technology, experts and theorists speculated that the singularity was decades away, giving humans time to prepare for a world transformed by artificial intelligence and supercomputers. But Ben Goertzel, CEO of AI and blockchain developer SingularityNET, believes the advent of artificial general intelligence (AGI) is years, not decades, away.
“I would say now, three to eight years is my take, and the reason is partly that large language models like Meta’s Llama2 and OpenAI’s GPT-4 help and are genuine progress,” Goertzel told Decrypt. “These systems have greatly increased the enthusiasm of the world for AGI, so you’ll have more resources, both money and just human energy—more smart young people want to plunge into work and working on AGI.”
Goertzel is a prominent figure in AI, having spent years articulating the concept of AGI. He holds a Ph.D. in mathematics from Temple University and has contributed to various fields, including AI, cognitive science, and complex systems. Since 2010, Goertzel has served as Chairman of Humanity+ and Vice Chairman of the Artificial General Intelligence Society.
In 2017, Goertzel co-founded the AI and blockchain company SingularityNET with David Hanson of Hanson Robotics, developer of the Sophia, Grace, and Desdemona robots.
The tipping point
The technological singularity is a hypothetical future point at which technological growth becomes uncontrollable and irreversible, leading to drastic, unforeseeable changes in human civilization.
Detractors and skeptics may balk at the idea of computers becoming as intelligent as, or even more intelligent than, humans. Still, Goertzel says that the push to develop AGI comes from the same drive that led to other human advancements, such as the shift from a hunter-gatherer society to an agricultural one. And such shifts are not always for the better.
“Why did we develop agriculture and towns and cities instead of living in a stone age style?” he asked. “According to some metrics life has improved since Stone Age times, but according to other metrics, life has gotten worse—you didn’t have neuroses and mental illness like we do now.”
Another factor pushing the development of AI, Goertzel said, is humanity’s restlessness, adding that while the development of AI appears to be based on individual motives, it’s building towards a greater goal.
“The ‘why’ for AI initially was partly curiosity, but probably military—the US military funded AI, from the ’50s up to the turn of the century,” Goertzel said. “So initially, the ‘why’ was national defense.
“Now the ‘why’ is making money for companies, but also interestingly, for artists or musicians, it gives you cool tools to play with,” he continued.
Beyond the brain
AGI describes artificial intelligence that can learn and perform any intellectual task a human can. Unlike specialized AI, which excels at one task, AGI has a broader understanding of the world, much like a human brain. However, achieving AGI is a challenging and so-far unfulfilled goal.
Those pursuing AGI include Tesla and SpaceX CEO Elon Musk, who launched xAI in July with the goal of building an unbiased and transparent AGI. The company’s first chatbot, named Grok, entered early public testing this past weekend.
“The overarching goal of xAI is to build a good [artificial general intelligence] with the overarching purpose of understanding the universe,” Musk said at the time. “The safest way to build an AI is to make one that is maximally curious and truth-seeking.”
AGI is one of the key mileposts on the road to the singularity.
“Our vision is to drive towards a positive, beneficial, benevolent singularity for the benefit of all humankind,” SingularityNET COO Janet Adams told Decrypt in an August interview. As Adams explained, robotics is essential in advancing toward the singularity.
While AI developers have pushed to instill “human values” in generative AI models, Goertzel cautioned that values change with time.
“We don’t want AI to do exactly what we think is right today, because what we think is right 20 years from now won’t be what we think is right today,” Goertzel said.
Edited by Ryan Ozawa.