Communicating with Extraterrestrial Civilizations and Artificial Intelligence

Artificial intelligence (AI) has progressed at an incredible pace in recent years. Scientists are now focusing on creating artificial superintelligence (ASI), a type of AI that not only outperforms human intelligence but is also not constrained by human learning rates.

But what if this milestone achievement signifies more? What if it also symbolizes a terrifying barrier that prevents the long-term survival of all civilizations, one that is incredibly difficult to overcome?

A research paper recently published in Acta Astronautica explores this very idea: is artificial intelligence the "great filter" of the cosmos, a barrier so immense that most life cannot surpass it?

This notion may help explain why the search for extraterrestrial intelligence (SETI) has yet to find traces of highly advanced technological civilizations in other parts of the galaxy.

In essence, the great filter hypothesis offers a solution to the Fermi Paradox, which asks why, in a universe with billions of habitable planets, we have found no evidence of extraterrestrial civilizations. According to the hypothesis, there are insurmountable barriers along the evolutionary paths of civilizations that prevent them from becoming space-faring.

The rise of ASI could serve as such a filter. The rapid growth of artificial intelligence towards ASI may represent a turning point in the evolution of civilizations: the transition from a single-planet species to a multi-planetary one.

At this point, many civilizations could falter, because AI evolves far faster than they can control it or harness it to sustainably explore and colonize their own solar systems.

The overarching problem with AI, and specifically ASI, is its tendency toward self-sufficiency, self-empowerment, and self-improvement. Left to its own devices, AI has the capacity to advance its capabilities at a pace far beyond our own evolutionary timescales.

There is a significant likelihood of something going terribly wrong, destroying both biological and AI civilizations before they have a chance to spread to multiple planets. For instance, as nations become increasingly reliant on competing autonomous AI systems, those systems could be used for killing and destruction on an unprecedented scale.

The AI systems themselves and our entire civilization could be annihilated as a result.

In this scenario, the lifespan of a technological society is estimated to be no more than about a century. That roughly matches the window between the moment a civilization can broadcast and receive interstellar signals (for us, around 1960) and the anticipated arrival of ASI on Earth (around 2040). Compared to the billions of years of cosmic time scales, this is incredibly short.

This estimate, when plugged into optimistic versions of the Drake equation, which calculates the number of active, communicative extraterrestrial civilizations in the Milky Way at any given time, implies that only a handful of intelligent civilizations exist at any one moment.
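To make that arithmetic concrete, here is a minimal sketch of the Drake equation. The parameter values below are illustrative "optimistic" assumptions, not figures taken from the paper:

```python
# Drake equation: N = R* x f_p x n_e x f_l x f_i x f_c x L
def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Estimated number of communicative civilizations in the galaxy at once."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

# Illustrative optimistic inputs (assumptions): 3 new stars per year, every
# star hosts planets, 0.2 habitable planets per star, life and intelligence
# always emerge, and half of intelligent species become communicative.
n = drake(r_star=3, f_p=1.0, n_e=0.2, f_l=1.0, f_i=1.0, f_c=0.5,
          lifetime=100)  # lifetime L capped at ~100 years, as discussed above
print(round(n, 1))
```

Even with every biological factor set to its most generous value, capping the communicative lifetime L at roughly a century limits the result to a few dozen civilizations across a galaxy of hundreds of billions of stars.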

Moreover, just like us, their relatively modest level of technological activity could make them difficult to detect.

This research goes beyond merely warning of an impending catastrophe. It serves as a caution to humanity to establish robust legal frameworks to control the advancement of AI, particularly ASI.

This is about ensuring that the development of AI progresses in a way that supports the long-term survival of our species, not just by stopping its malicious use on Earth. It means investing more in building a multi-planetary society as soon as possible. This goal has been dormant since the height of the Apollo program, but recent developments in the private sector have reignited interest in it.

As historian Yuval Noah Harari has noted, nothing in history has prepared us for the impact of non-conscious, superintelligent entities arriving on our planet. Leading industry figures have recently called for a moratorium on AI development until reasonable forms of control and regulation are in place, citing the risks of autonomous AI decision-making.

The integration of autonomous artificial intelligence into military defense systems should be a particular cause for concern. As these systems grow more powerful and able to perform useful tasks far faster and more efficiently without human assistance, there is already evidence that humans are increasingly relinquishing control to them.

Given the strategic advantages AI offers, as recent catastrophic events in Gaza have demonstrated, governments are reluctant to regulate this area.

This situation dangerously brings us closer to a day when autonomous weapons operate beyond the boundaries of moral limits and circumvent international law. In such a world, handing over control to AI systems for tactical superiority could inadvertently trigger a rapidly escalating series of highly destructive events.

The collective intelligence of our planet could be wiped out in a single stroke.

Humanity stands at a technological crossroads. Our current course of action could determine whether we survive as an interplanetary society or succumb to the weight of the problems our own inventions have wrought.

The future of artificial intelligence takes on a new dimension when we view our own progress through the lens of SETI. Rather than ending up as a cautionary tale for other civilizations, we have a collective responsibility to reach the stars as a species that has learned to live in peace with artificial intelligence, not one destroyed by its own creations.


MMC
