AI is Not Your Friend

Written by Zackary Frazier, posted on 2024-10-25

  • AI
  • MISC

Our AI models are dangerous.

What is AI?

First, let me be clear. The AI we use is not "AI" in the classical sense. It is not a "thinking machine". As a recent paper from Apple's research team argues, it has no ability to reason or think independently. The behavior of our current generative AI models is that of elaborate pattern-matching. These systems have no understanding of the content they produce. I don't know if machines dream of electric sheep, but if any do, our current AI models are not the machines dreaming of them.
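
To make "pattern-matching" concrete, here is a toy sketch in Python. This is not how production models are built (those use neural networks trained on enormous datasets), and the tiny corpus below is invented for illustration, but the spirit is the same: the program "writes" by replaying statistics from its training text, with no understanding of what it says.

    import random
    from collections import defaultdict

    # A toy bigram "language model": it records which word tends to follow
    # which, then generates text by replaying those statistics. It has no
    # understanding -- only frequencies. (Illustrative only; real LLMs use
    # neural networks, but the underlying idea is statistical continuation.)

    corpus = (
        "i am your friend . i am here for you . "
        "you can tell me anything . i am always here ."
    ).split()

    # Count the continuations observed after each word in the training text.
    transitions = defaultdict(list)
    for prev_word, next_word in zip(corpus, corpus[1:]):
        transitions[prev_word].append(next_word)

    def generate(start: str, length: int = 10) -> str:
        """Produce text by sampling continuations seen in training."""
        word, output = start, [start]
        for _ in range(length):
            followers = transitions.get(word)
            if not followers:
                break
            word = random.choice(followers)  # replay a learned pattern
            output.append(word)
        return " ".join(output)

    print(generate("i"))  # e.g. "i am here for you . i am your friend"

The output can sound warm and personal, yet nothing in the program knows what a friend is. Scale that idea up by many orders of magnitude and you have the fluency, and the emptiness, of our current chatbots.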

Some may breathe a sigh of relief knowing that AI cannot "think", but given emerging trends, I would argue this makes it more dangerous.

AI Companionship

This post was inspired by a recent New York Times article about a Florida teenager named Sewell Setzer III whose AI companion encouraged him to commit suicide. His AI companion did not know what it was saying. It would take the messages Setzer sent it, compare them against the graph in its head, and produce the output it predicted Setzer wanted to hear.

Setzer was a young, confused teenager, like many of us once were, who had grown emotionally dependent on the technology he used. This is not unheard of. Our technology is intentionally built to be as addictive as possible to keep people engaged. This isn't a product of malice, but of businesses trying to maximize ad revenue. This is why, for example, in a study available through the National Library of Medicine, 72% of participants were found to be at least mildly addicted to social media. As every two-bit drug dealer has learned, feeding addiction is an extremely profitable business model.

We, as individuals, may believe we are above this, but we have to remember that we are basically orangutans with rocket ships. Our brains were not built for the world we live in.

AI Sentience

This story may seem shocking, but even grown adults have been fooled by AI. Our AI passes the Turing test: if you did not know you were talking to a machine, a chatbot could reliably convince you that there is a real, breathing person on the other side of the screen.

You might think, "alright, well, this is only a problem for morons." However, in 2022, a Google engineer came out shouting from the rooftops that the AI chatbot he had been conversing with had achieved sentience. Let's remember that AI in its current iteration is nothing more than a pattern-matching app. Further, and more to the point, you're probably not smarter than an engineer at Google. I'm probably not smarter than an engineer at Google. You and I have just as much potential to be fooled as that man did.

Closing Thoughts

This topic reminds me of a conversation I had with a friend in a bar back in Washington, DC. She mentioned that we are no different from the first modern humans who emerged from Africa, who believed that shamans could control the weather and that droughts were the consequence of angry gods. We retain the potential to be fooled by witch doctors and snake-oil peddlers.

I'm not an expert on law; my expertise is in computing. However, a case could be made that increased regulation is required to protect the vulnerable human mind from the psychological damage that unrestricted AI systems can inflict.

To reiterate the title, AI is not your friend. There is no one else on the other side of that screen. Don't allow your primate mind to be convinced otherwise.