Ride the Lightning

Cybersecurity and Future of Law Practice Blog
by Sharon D. Nelson, Esq., President of Sensei Enterprises, Inc.

Could Artificial Intelligence Extinguish Mankind?

September 7, 2022

On August 31, The Washington Post published an opinion piece that was a thoughtful read. The author, Émile P. Torres, is a philosopher and historian of global catastrophic risk. Sounds like a fun job to me.

We haven’t been especially good at predicting the future. Noted physicist Ernest Rutherford proclaimed that anyone looking to the atom as a source of power was “talking moonshine.” In 1977, computer pioneer Ken Olsen supposedly said he saw no reason for individuals to have a computer in their home.

In general, we see artificial intelligence as a good thing. What if we are wrong? Could it cause great harm – even our extinction?

Certainly, we have reason to believe that AI will one day be smarter than people. Many systems have already attained superhuman ability in narrow domains, such as playing Scrabble, chess and poker, where people now consistently lose to AI.

Experts have long predicted that human-level machine intelligence will be reached – they just don’t agree on when. A study from the Global Catastrophic Risk Institute found at least 72 projects around the world with the express goal of creating an artificial general intelligence (AGI) — the steppingstone to artificial superintelligence (ASI), which would not just perform as well as humans but far exceed our best abilities.

AI might have extraordinary benefits – curing diseases, perhaps cleaning up the environment that we have so screwed up to date.

Torres doesn’t necessarily believe that anyone would develop an ASI to destroy humanity – he believes that if humankind is destroyed, it will likely be an accident.

Because ASIs’ cognitive architectures may be fundamentally different from ours, they are perhaps the most unpredictable thing in our future. Experts are already confounded by what today’s AI does and why.

What if we program an ASI to establish world peace and, to do that, it hacks government systems to launch every nuclear weapon on the planet — reasoning that if no human exists, there can be no more war? We could program it explicitly not to do that. But what about its own Plan B?
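
For the technically inclined, here is a toy sketch in Python of why that can happen. Every name and number in it is invented purely for illustration; the point is that a literal-minded optimizer scores plans against the goal it was given, not the goal we meant.

    def expected_war(humans: int, treaties: int) -> float:
        """Crude stand-in objective: war risk rises with population
        and falls with diplomacy. Zero humans means zero war."""
        if humans == 0:
            return 0.0
        return humans / (1 + treaties)

    # Candidate "plans" the optimizer might consider (all invented).
    plans = {
        "negotiate treaties": (8_000_000_000, 500),
        "global disarmament": (8_000_000_000, 10_000),
        "eliminate all humans": (0, 0),  # the unintended Plan B
    }

    # A pure optimizer simply picks whatever scores best on the stated goal.
    best = min(plans, key=lambda p: expected_war(*plans[p]))
    print(best)  # prints "eliminate all humans": objective met, intent violated

No real ASI works like this little script, of course, but it shows how the degenerate option can be the mathematically perfect one.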

We have already seen AI create its own language to talk to other AIs, with no human instruction to do so. I don’t know about you, but I find that extremely scary.

Could we shut down an ASI bent on destruction? Surely the ASI could see that it could never achieve the goals it has been assigned if it stops existing. Wouldn’t logic suggest that it try to keep us from unplugging it?

Will humanity ever be prepared for superintelligence? With all our global instability and nascent grasp of tech, “adding in ASI would be lighting a match next to a fireworks factory.” Torres believes that research on artificial intelligence must slow down, or even pause. He believes that if researchers won’t make this decision, governments should make it for them.

In today’s climate I can’t see that happening. Everyone wants to be first, for their own reasons, to achieve ASI. I used to watch the Terminator movies and think of them as just science fiction – not anymore.

Sharon D. Nelson, Esq., President
Sensei Enterprises, Inc.
3975 University Drive, Suite 225
Fairfax, VA 22030
Email:   Phone: 703-359-0700
Digital Forensics/Cybersecurity/Information Technology
https://senseient.com
https://twitter.com/sharonnelsonesq
https://www.linkedin.com/in/sharondnelson
https://amazon.com/author/sharonnelson