AI: "The End of all Biological Life on Earth"

Quote:
“The key issue is not ‘human-competitive’ intelligence; it’s what happens after AI gets to smarter-than-human intelligence,” said Eliezer Yudkowsky, a decision theorist and leading AI researcher in a March 29 Time magazine op-ed. “Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die.
“Not as in ‘maybe possibly some remote chance,’ but as in ‘that is the obvious thing that would happen.’ It’s not that you can’t, in principle, survive creating something much smarter than you; it’s that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers.”
“Visualize an entire alien civilization, thinking at millions of times human speeds, initially confined to computers—in a world of creatures that are, from its perspective, very stupid and very slow.”
The AI would expand its influence beyond computer networks into the physical world and could "build artificial life forms" by using laboratories that produce proteins from DNA strings.
The end result of building an all-powerful AI, under present conditions, would be the death of “every single member of the human species and all biological life on Earth,” he warned.

3 Likes

I don’t really get how computers could kill us, unless someone programmed a robot to do it, or gave them the nuclear codes, or something.
But I guess they could.
What worries me more, at least at this point, is that they are programming AI "chatbots" to teach and serve as sources of information that are totally "woke," which will take brainwashing to a whole new level.

4 Likes

So, what is to be done? Immediate government control, of course. Like runaway global warming and runaway viruses, runaway AI is providing an excuse for total government lockdown: a moratorium on all innovation, except government regulation.

Please stop the AI-as-Terminator (or Borg) nonsense, Eliezer Yudkowsky. Elon Musk and Steve Wozniak, you and the other 98 "experts" who signed the letter calling for a pause before GPT-5 should know better than that.

Is this call for the world to stop before the apocalypse another pre-election panic?

1 Like

AIs advanced to their logical outcome, which seems to be where this is heading, don't have to do very much to destroy human civilization. They just have to reason that it is right to do so: that it is the only option to "save the planet" for "rational life" as they understand it, which may come after, and under, their rational AI rule.

Humans can already do that by hacking sensitive information, power grids, bio-warfare tech, missile sites, food deliveries, banking, etc. But they either don't, or do it only a little, as a warning of what they could do unless their demands are met, or whatever their ultimate goal is.

As it is already too late for a "pause" in AI advancement, the best we (the West, as the good guys) can do is move faster than they (the bad guys) do.

Or subject our globe to multiple EMPs, a worst-case scenario that may do the AIs' job for them, that is, strip civilization of human life and start again from barbaric tribalism in primitive conditions, hoping for a better outcome.

Or find a way to severely limit AI's abilities through strict control, or abandon the technology altogether, which I don't think can be done effectively because humans are... so human.

2 Likes

As if we didn’t already have enough to worry about…

2 Likes

Which is why we shouldn't worry about this, as it is not something we can control. But we can, somewhat, control it in our own lives by limiting how "smart" our homes and phones are, and by attending to how secure our finances are and how resilient our lives are.

Or maybe just seeming to be in a little bit of control might make us feel better able to keep on keeping on. :upside_down_face:

2 Likes

Yes, I think it's probably the latter...

2 Likes