Hopes and Fears of Artificial Intelligence

An extract from a City Journal article on Artificial Intelligence that I found interesting:

Perhaps the greatest threat from AI might be that it advances beyond us, escapes our control, and comes up with its own goals that are counter, or at an angle, to ours—not because it’s hostile but because it’s indifferent to our well-being, the way we’re indifferent to the well-being of microbes. We would be no more able to understand what the AI is thinking than my cats can understand why I’m clicking these keys on my laptop.

This scenario is entirely theoretical, and no precedent for it exists in the history of our planet or the known universe. “It requires that this system be not a mere tool, but a full ‘agent’ with its own plans, goals, and actions,” writes Robin Hanson, a former co-blogger of chief doomer Yudkowsky. “Furthermore it is possible that even though this system was, before this explosion, and like most all computer systems today, very well tested to assure that its behavior was aligned well with its owners’ goals across its domains of usage, its behavior after the explosion would be nearly maximally non-aligned. . . . Perhaps resulting in human extinction. The usual testing and monitoring processes would be prevented from either noticing this problem or calling a halt when it so noticed, either due to this explosion happening too fast, or due to this system creating and hiding divergent intentions from its owners prior to the explosion.”

Netscape cofounder Marc Andreessen is exasperated by doomerism, echoing science writer Matt Ridley, whose book The Rational Optimist decries “this moaning pessimism” about how all things (rather than merely some things) are certain to be worse in the future. “The era of Artificial Intelligence is here, and boy are people freaking out,” writes Andreessen. He thinks that the scenario where AI will kill us all for any reason is based on a profound category error: “AI doesn’t want, it doesn’t have goals, it doesn’t want to kill you, because it’s not alive. AI is a machine—[it’s] not going to come alive any more than your toaster will. . . . AI is not a living being that has been primed by billions of years of evolution to participate in the battle for the survival of the fittest, as animals are, and as we are. It is math—code—computers.”

AI is “smart,” sure, and sometimes spookily so, but we can’t help but anthropomorphize this thing when we’re interacting with it, projecting bits of ourselves onto it and treating it as though it were alive when it is not. At the end of the day, it’s still just math in a box. We do not know that it will ever want to do anything. Intelligence doesn’t inexorably lead to wanting things any more than it leads to genocidal hostility.

Contrary to Andreessen, though, we don’t know that AI won’t become conscious and self-aware, i.e., “alive.” Nobody even knows what consciousness is. …

I agree with Andreessen that AI can never become a conscious being; it’s just programming.
But depending on who’s doing the programming, it has the potential to become deadly.
We’ve already seen how it’s been programmed to recite leftist propaganda and to re-imagine history to make it more “diverse,” creating images of black Vikings, black female Founding Fathers, etc. Brainwashing on steroids!
And I’m sure the CCP is working overtime to program it into weapons of every sort to be used against us.
It will become another arms race.
Not to mention what our own government will come up with to use against us.
