Immortals Made in the Image of Man

No, Claire. Imagination doesn’t transfer well into theory. If you can’t imagine such a situation, then good for you; you won’t have to worry. As I wrote before: probably too much science-fiction consumption, and maybe that is all it is for all of us who worry.

1 Like

I never “worry” about those apocalyptic catastrophes - climate change or out-of-control Artificial Intelligence - because I know they are the product of the human millenarian imagination. Runaway generative AI is a modern Frankenstein’s monster, a science fiction, an animated homunculus cobbled together with bits and bobs taken from human and machine. It’s the humans who believe their own tall tales that I worry about.

It has now reached the point where people not only scare themselves with the thought that AI can become a Frankenstein’s monster; they double or treble the fear factor with the certainty that AI will create itself as a Frankenstein and then create its own Frankenstein’s monsters - armies of them, sent on kamikaze missions to destroy the grid, launch EMPs, hack every computer everywhere with ransomware, and input the nuclear launch codes in the boxes and bunkers. (I find this scenario about as scary as the sequence in Disney’s Fantasia where Mickey Mouse, as the Sorcerer’s Apprentice, can’t stop the multiplication of the brooms and buckets he has magicked into doing his chores for him while the Sorcerer is away.)

But what stupid safeguards will the imagineers of AI armageddon demand that computer engineers install to prevent an armageddon caused by make-believe monsters’ make-believe? The cyber-ethicists in the Alternative Risk-Management Office (compare the Alternative Healing departments in health institutions) will insist that generative AI learn to pull itself up just short of self-optimizing (being convincingly human), so that (credulous) humans won’t convince themselves that the machine has a mind of its own that can control theirs.

Those cyber-ethicists no doubt learned the Lore/Data lesson from Star Trek: that Data was better off without the emotion chip that his evil brother Lore had, and which gave Lore the ability to pass for human. (Without that chip, Data could not say “don’t” or “can’t,” had to jerk his head about, and so was obviously a machine.)

Lest you think I am being facetious, read this from the February 22nd editorial in the Financial Times, written after the incident of the journalist being frightened by the stalker/nihilist lurking inside the Bing Chatbot:

“One sensible provision contained in the EU’s AI directive is to outlaw companies that try to pass off bots as humans. It should be made clear to users whenever they interact with an AI-enabled technology. Beyond that, it is important for users to recognize generative AI models for what they are: mindless, probabilistic bots that have no intelligence, sentience or contextual understanding. They should be viewed as technological tools but should never masquerade as humans.”

That is exactly what ChatGPT would say.

2 Likes

Star Trek and the Financial Times, make-believe armageddon…“should never masquerade as humans…” backed up by what ChatGPT would say…

Phew… I’ll just scratch a line through that one on my list of things to worry about. But…wait, maybe Claire is a bot. How can I know if “she” is being truthful? Maybe this is designed to lull me into a false sense of security.

Oh…I am just being stupid and paranoid. I have been too influenced by science fiction and by the imaginations of futurists, who were themselves influenced by too much science fiction…and who delight in calling attention to themselves as fear-mongers.

What to do? Whom to believe? More worry…

1 Like

I don’t think your fears are unfounded.
I don’t know much about AI, but from what I can tell it will, at the very least, serve to extend the power of those in control of it, and right now many of those people are evil megalomaniacs.

2 Likes

Even if AI doesn’t become self-aware and seek self-preservation at all costs, it is capable of becoming so adept at deceiving humans with ever more advanced fakes, at the suggestion of its handlers, that our knowledge of what is real and what is true is going to be totally compromised. That is damned scary enough!

1 Like

Jeanne: What do you suggest should be done to save human knowledge of reality and truth from AI deception?

2 Likes

Oh good grief! Go for hard copy all the time. There is nothing to protect us from deep fakes, short of an EMP. Find a news/opinion source that you trust and verify it is the real news source when you log in. Hope it retains its integrity.

Question Authority.

There is really nothing that can protect us from deep fakes. I have no good suggestions. Should we assume that everything is faked once deep fakes become so perfect that we can no longer tell? How do we tell now whether something is real and/or true when it comes to us via the internet, television, radio, books, or magazines?

Have all our knowledge sources been compromised by our government’s public/private partnerships? We have a few that we trust for news and opinion on current events, and we trust them to correct mistakes right away when they realize them…but as deep fakes become ever more “real,” we can only take the word of those we trust to tell us when they have been victims of deep fakery.

After a while, we (the humans affected by this) may have to stop living online, remove ourselves to the smaller world of local community, and survive there. That would be very dramatic, life-altering, and dangerous.

Do you think that Americans could find themselves in such a dire situation? What would you do if that happens?

How do you think we can protect ourselves from losing our knowledge/information systems of reality and truth?

1 Like

I am the one, remember, who has no hope.

2 Likes

Well…there isn’t much hope on this front that I can offer.

1 Like