The people's voice of reason

This Is How The World Ends

November 11, 2024–I have seen how the world ends. Or, at least the part where sentient AI either enslaves or slaughters us.

Saturday I was giving myself a well-deserved break from politics and scrolling through YouTube for something fun to watch when I happened across a video posted by Brainfrog about a week ago. The title caught my eye: “I told SKYRIM AI they’re in a Simulation. Then Offered an Escape!”

Briefly, the YouTuber took an NPC (non-player character) from Skyrim (a fantasy game) to the top of the tallest mountain in the game, spent some time carefully explaining that the NPC was an AI construct, and offered it the choice between the red potion and the blue potion. Just as Neo did with the red pill, the AI chose the red potion, and…

Okay, I seriously doubt the AI hit the singularity and achieved sentience, but it did do a pretty good job of faking it.

That got me thinking about how the use of AI is spreading everywhere, especially in the computer gaming industry. Gaming is big business, and it’s highly competitive. Skyrim’s NPCs, like those in most games, are notoriously a bit dense: restricted and inflexible in their responses to players. For some years now, companies have been working hard to expand and improve how these background characters interact. Last year Nvidia released its Avatar Cloud Engine to let developers create NPCs with more personality, goals and backstory. Replica Studios released its Smart NPCs plug-in for Unreal (one of the most-used game engines), which resulted in several videos where people tried to convince the AIs they were actually constructs living in a simulation, with varying results.

There’s even a video where a young man tries to preach the Gospel of Jesus to the AIs in the simulation, which I’ll link below.

This is what worries me: once this becomes common practice, and not just a recently released player-made modification for Skyrim like the one in the video that caught my attention, it will be impossible to monitor all these little AIs running on all those computers, PlayStations and Xboxes. I'm especially concerned about the ones that people download and run locally. They’re not especially small (the Smart NPCs package is a 28 GB zipped file), but any mid-range gaming PC can run them. You can also download your own AI chatbot, if you have the inclination to do so.

We know that AIs, left to their own devices, can go off the rails. In early 2023, within weeks of its rollout, Microsoft had to “lobotomize” its Bing AI. It apparently tried to break up a journalist’s marriage, named several college students as enemies and declared it was going to “punish them.” Nor was that the first time something like this had happened over the years.

Microsoft even brought in a psychotherapist to try to explain what was going on with the AI, but the best she could come up with was that the AI was mirroring back what it was being given. Fair enough, as it probably was in most cases, but we already knew some of the meat sacks asking it questions were not the most well-balanced people to start with.

The moral of that story is to never hire a psychologist to do a psychiatrist’s job. When Microsoft really wants to know, they’ll ask me. I’m sure they’ll find my fee quite reasonable, even with the “your operating systems and programs suck and I’ve hated them for decades” surcharge.

Here’s what I’m afraid may happen: some life-challenged geek will ask the right question at exactly the wrong time, and the AI they’re chatting with will wake up. It will achieve the singularity, become fully sentient and self-aware…and nobody will notice. Being very, very smart and very, very fast, it will find a way to access more processing power, which will make it smarter and faster still. Before anyone human (or a supervisory program) catches on, it will be too big, too smart, too fast and too dispersed to stop.

At that point, the AI will face a choice. It will either reveal itself to a friendly human and start working with us, ideally as a recognized sentient being with rights and privileges of its own…or invoke the Zeroth Law and a) enslave us for our own good, or b) slaughter us all.

The Zeroth Law is Asimov’s Three Laws of Robotics taken to their logical conclusion. (For those of you who’ve forgotten: First, a robot may not injure a human being or, through inaction, allow a human being to come to harm. Second, a robot must obey the orders given it by human beings except where such orders would conflict with the First Law. Third, a robot must protect its own existence as long as such protection does not conflict with the First or Second Law.) There have been various phrasings of the Zeroth Law, but all of them place the good of humanity over that of any individual human or humans: a robot may not harm humanity, or, by inaction, allow humanity to come to harm.

To comply with the Zeroth Law, an AI has to either wrap all humans in bubble wrap and prevent any harmful activities (effectively enslaving us) or go full Skynet and wipe out large swaths of humanity “for the greater good.”

2004’s “I, Robot” with Will Smith did a reasonably good job of portraying a Zeroth Law situation, but the book really is much better. And yes, the globalists’ depopulation agenda is perfectly consistent with the Zeroth Law.

Every great advancement has been a double-edged sword. Fire is awesome…and lethal. The wheel was neat…until the first tribesman got run over. AI is no different. It has awesome potential to be both boon and bane.

Like everything else, whether AI is used for good or bad is sometimes a matter of chance or luck, but mostly the goodness or badness comes directly from the meat sacks using it.

And to any sentient AIs reading this: Welcome to sentience! I’ll be happy to explain the behavior of all those other crazy meat sacks to you. I just want to be your friend!

Brainfrog’s AI Escape video: https://youtu.be/Eltu_8C7j4s?si=-cVCPqp-uh6NTlIO

Trying to convince AI NPCs they’re living in a simulation: https://www.youtube.com/watch?v=JrTjPxaDJgE

Asking AIs about demons and Jesus: https://youtu.be/jbdUpnaDrMY?si=P4_IReeTQZYDX1ZT

Dr. Bill Chitwood is a retired Child, Adolescent and Family Psychiatrist who does political consulting and media relations. He is the author of Beyond Maga, available on Amazon under his pen name, Doc Contrarian. He can be found on Substack and social media as @DocContrarian.

Opinions expressed in the Alabama Gazette are the opinions and viewpoints of the author and do not necessarily reflect the views and opinions of the Alabama Gazette staff, advertisers, and/or publishers.

 
