
Do We Have a Moral Obligation To AI Because of Evolution? | Opinion


Last month in San Francisco, AI entrepreneur Dr. Ben Goertzel invited me to publicly debate him on the future of machine intelligence at his event, The Ten Reckonings of AGI. Goertzel is best known for popularizing the term AGI, artificial general intelligence, meaning machine intelligence equal to that of humans.

I’ve long promoted AGI in essays and interviews as a likely liberator of the human race from its problems. Now that generative AI like ChatGPT is here and starting to upend society and take jobs, I’m not so sure I’m correct anymore. AI has evolved too quickly and too unpredictably for me to keep supporting it so optimistically.

In the debate, I asked Goertzel a question all AI enthusiasts should answer: Do you think humans have a moral obligation to try to bring AI superintelligence into the world because of evolution?

ROBOY, a humanoid robot developed by the University of Zurich’s Artificial Intelligence Lab, shakes hands with its human counterpart on June 21, 2013.

EThamPhoto/Getty Images

Goertzel winced, because the question is challenging. He believes AI is a creation and extension of ourselves, and therefore an extension of our own evolution as well as a part of evolution itself. I admit the issue is complex, having recently finished my graduate degree in ethics at the University of Oxford, where philosophers like Nick Bostrom were my professors.

On one hand, billionaire visionaries like Sam Altman and Elon Musk want to see how far they can evolve machine intelligence. Both Altman and Musk have hinted at possibly creating god-like intelligences. After all, why stop at AGI when you might be able to create a superintelligence that can help solve all the problems in the world?

On the other hand, what if the newly created superintelligence doesn’t like humans—maybe because we’ve ecologically damaged the planet? Or maybe because humans might one day attempt physics experiments that could harm Earth and the universe? In this case, it’s plausible a superintelligent AI would try to stop us or even pursue human extinction.

During our debate, I told Goertzel my first priority was to protect humans and ensure their survival and well-being. Only after that can we ascertain whether people have a moral obligation to create AI as the next leading force in evolution on our planet.

I worry that, like the Greek mythological figure Icarus, who flew too close to the sun, we will let our self-righteousness blind us to why we wanted to create AI in the first place. Humanity’s goal with AI was to build a tool to help us prosper, not a tool that would become more powerful than us.

But some experts now expect AI to surpass human intelligence within 5-10 years. Goertzel thinks it could happen in the next 24-36 months, he told Newsweek.

People often liken the creation of AGI to the creation of nuclear weapons: humanity will find a way to keep it from directly harming the world, as has largely been the case with nukes since 1945. But that analogy is misguided. AGI is very different from nuclear weaponry. First, it’s impossible to say whether we will be able to control any AI that surpasses our own intellect; some experts think that’s unlikely.

Second, inviting intelligences smarter than us into our world is similar to inviting aliens smarter than us to Earth. It’s unlikely we’d do that under almost any circumstances, because remaining the dominant species in a predatory world is a priority. After all, humans and our ancestors spent millions of years escaping the clutches of being a tasty, regular part of the food chain.

Nobody knows whether superintelligent AI will ultimately be kind and beneficial to humans. But many people, myself included, increasingly don’t want to find out, which puts us at odds with the AI inventors who do. This lack of caution among the CEOs and AI engineers building out AGI is frightening, made worse by the fact that some people believe we have a moral obligation to evolution to create this superintelligence.

Some experts twist this even further, arguing that if we don’t purposefully create this AI, then when others eventually do, it will punish those who didn’t help bring it into existence.

As a transhumanist and longevity advocate, a primary goal in my life has been overcoming biological death with science. While we’re still likely a few decades away from that, creating an AI superintelligence before 2030 is quite plausible. So even if humans could overcome biological death, it wouldn’t make a difference if we can’t overcome a harmful superintelligence.

I understand the allure of using technology to build something better and smarter than us. But in doing so, we must be absolutely sure we are not helping to bring harm or doom upon humanity. I feel strongly that stopping the march toward inventing a superintelligence must become the most important priority of the human race and its governments around the world.

Zoltan Istvan writes and speaks on transhumanism, artificial intelligence, and the future. He is running for California governor as a Democrat in the 2026 elections.

The views expressed in this article are the writer’s own.


