A scientist says AI could kill us in about 200 years

Plenty of time to prep.

Popular Mechanics

Updated: 6:12 AM PDT Apr 21, 2024

In a provocative new paper, an English scientist wonders whether so-called “artificial intelligence” (AI) is one reason why we’ve never found other intelligent beings in the universe. The Fermi Paradox captures this idea: in a universe so vast and old, how can it be that no other civilizations are sending us radio signals? Could it be that, once civilizations develop AI, it’s a short ride into oblivion for most of them? A civilization-ending barrier of this kind is called a Great Filter, and AI is one of the most popular candidates for one. Could the inevitable development of AI by technological civilizations explain the improbable Great Silence we hear from the universe?

Michael Garrett is a radio astronomer at the University of Manchester and the director of the Jodrell Bank Centre for Astrophysics, with extensive involvement in the Search for Extraterrestrial Intelligence (SETI). Basically, while his research interests are eclectic, he’s a highly qualified and credentialed version of the people in TV shows or movies who are listening to the universe to hear signs of other civilizations. In this paper, which was peer-reviewed and published in the journal of the International Academy of Astronautics, he compares theories about artificial superintelligence against concrete observations using radio astronomy.

Garrett explains in the paper that scientists grow more and more uneasy the longer we go without hearing any sign of other intelligent life. “This ‘Great Silence’ presents something of a paradox when juxtaposed with other astronomical findings that imply the universe is hospitable to the emergence of intelligent life,” he writes. “The concept of a ‘great filter’ is often employed – this is a universal barrier and insurmountable challenge that prevents the widespread emergence of intelligent life.”

There are countless potential Great Filters, from climate extinction to (gulp!) a pernicious global pandemic. Any number of events could stop a global civilization from going multi-planetary. For people who take Great Filter theories especially seriously, settling humans on Mars or the moon represents a way to reduce the risk. (It will be a long time, if ever, before we have the technology to make those settlements sustainable and independent.) The longer we remain only on Earth, the thinking goes, the more likely a Great Filter event is to wipe us out.

Today, AI is not capable of anything close to human intelligence. But, Garrett writes, it is already doing work that people previously didn’t believe computers could do. If this trajectory leads to so-called general artificial intelligence (GAI), a key distinction meaning a system that can reason and synthesize ideas in a genuinely human way, paired with enormous computing power, we could really be in trouble. And in this paper, Garrett follows a chain of hypothetical ideas to one possible conclusion. How long would it take for a civilization to be wiped out by its own unregulated GAI?

Unfortunately, in Garrett’s scenario, it takes just 100 to 200 years. Coding and developing AI is a single-minded project, driven and accelerated by data and processing power, he explains, compared with the messy, multidomain work of space travel and settlement. We see this split today in the flow of researchers into computing fields compared with the shortage in the life sciences. Every day on Twitter, loud billionaires talk about how great and important it is to settle on Mars, but we still don’t know how humans will even survive the journey without being shredded by cosmic radiation. Pay no attention to that man behind the curtain.

There are some big caveats, or at least things to keep in mind, about this research. Garrett steps through a series of specific hypothetical scenarios, and he relies on some huge assumptions. He assumes there is life elsewhere in the Milky Way and that AI and GAI are “natural developments” of these civilizations. He also leans on the already hypothetical Drake Equation, a way to estimate the number of communicating civilizations in our galaxy, several of whose variables we have no concrete handle on.
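
For readers who haven’t seen it, the Drake Equation is just a chain of rates and fractions multiplied together. The sketch below is a minimal illustration of that structure in Python; every input value is a placeholder invented for demonstration, not a figure from Garrett’s paper or from any survey.

    # Illustrative sketch of the Drake Equation: N = R* x fp x ne x fl x fi x fc x L.
    # All numbers here are made up for demonstration only.
    def drake_equation(r_star, f_p, n_e, f_l, f_i, f_c, lifetime_years):
        """Expected number of detectable civilizations in the galaxy."""
        return r_star * f_p * n_e * f_l * f_i * f_c * lifetime_years

    # r_star: star-formation rate (stars/year); f_p: fraction of stars with planets;
    # n_e: habitable planets per such system; f_l, f_i, f_c: fractions that develop
    # life, intelligence, and detectable technology; lifetime_years: how long a
    # civilization remains detectable.
    n = drake_equation(r_star=2.0, f_p=0.5, n_e=1.0, f_l=0.1,
                       f_i=0.01, f_c=0.1, lifetime_years=1000)
    print(n)  # ~0.1 with these made-up inputs

The only point of the sketch is that the final factor, the lifetime a civilization remains detectable, scales the whole estimate directly.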

The mixed hypothetical argument arrives at one strong conclusion, though: the need for heavy and continuous regulation of AI. Garrett points out that the nations of Earth are already in a productivity race, afraid of losing ground if they pause to regulate. Some misguided futurists also have the odd idea that they can subvert GAI by simply developing a morally good one faster in order to control the bad one, an argument that makes no sense.

In Garrett’s model, these civilizations get just a couple hundred years in their AI eras before they blip off the map. Across galactic distances and the very long course of cosmic time, such tiny windows amount to almost nothing; they effectively reduce to zero, which, he says, fits with SETI’s success rate so far: 0 percent. “Without practical regulation, there is every reason to believe that AI could represent a major threat to the future course of not only our technical civilization but all technical civilizations,” Garrett said.
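
To see why such a short window matters, one can hold the other Drake-style factors fixed and vary only the lifetime term. The comparison below reuses the same invented placeholder factors from the earlier sketch; it is only meant to show how a roughly 200-year communicative window crushes the expected number of detectable civilizations, not to reproduce Garrett’s calculation.

    # Same made-up per-year factors as in the earlier sketch; only the detectable
    # lifetime L changes between the two cases.
    per_year_rate = 2.0 * 0.5 * 1.0 * 0.1 * 0.01 * 0.1  # new detectable civilizations per year (illustrative)

    print(per_year_rate * 200)      # ~0.02 civilizations detectable at any moment (AI-limited window)
    print(per_year_rate * 100_000)  # ~10 civilizations detectable at any moment (long-lived civilizations)

With a window that short, a sky that looks empty is exactly what a search like SETI would expect to find.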
