UTAH TECH UNIVERSITY'S STUDENT NEWS SOURCE | April 20, 2024

AI poses real, terrifying threat to human species

Doug Griswold illustration of the artificial brains tech giants are racing to develop. (Bay Area News Group/MCT)

It’s easy to underestimate the dangers of artificial intelligence. Sci-fi films featuring naked Austrian terminators and murderous robots crushing human civilization have made the whole idea seem fanciful, fun and a tad sexy.

But in reality, AI is a tangible, unregulated invention that could quickly spiral out of control, threatening the continued survival of the human species. Without proper regulation and oversight, and without fail-safes and design philosophies that ensure AIs share human goals and values, these systems could eventually destroy us.

And yet, one of the biggest problems with the danger super-intelligent AI poses is that it is not taken seriously.

Author and neuroscientist Sam Harris, in his TED talk about the potential dangers of AI, said, “Rather than being scared, most of you will find what I’m talking about is actually kind of cool.”

I was initially guilty of this. Who doesn’t get a kick out of imagining bipedal, human-like robots carrying out a robotic apocalypse? It seems unbelievable. But the threat is much more grounded and already visible today, even though AI has yet to surpass human creativity and inventiveness. AI algorithms choose which articles we see on social media websites, which videos are recommended to us on platforms like YouTube and Netflix, and which ads we see when surfing the web.

These seemingly mundane tasks could be tailored and controlled by people wanting to do anything from influencing elections and public opinion to pushing their own products and services at the expense of others without access to equivalent AI resources. And if a simple algorithm can prove that powerful, imagine a super-intelligent AI in control of national defense, government spending or party planning.

Many AI experts and researchers estimate that AI will surpass human intelligence in roughly 50 years. It may seem like a problem for future generations to worry about, but that milestone is not as far off as it sounds; it may take me another 50 years just to finally graduate college.

Our biggest fear surrounding AIs and machines is not that our food processors will suddenly want to turn people into puree, or that toaster ovens will actively commit arson to snuff out their human overlords; those fears are likely only to manifest themselves in my nightmares. Rather, it’s that humans and super-intelligent AI will simply have different objectives, and the “slightest difference between their goals and our own could destroy us,” Harris said.

Harris said only the destruction of civilization would prevent us from continuing to improve our technologies, including AI.

“And at a certain point we will build machines that are smarter than we are,” he said. “And once we have machines that are smarter than we are, they will begin to improve themselves.”

Entrepreneur and inventor Elon Musk said, “I’m really quite close to the cutting edge of AI, and it scares the hell out of me. It’s capable of vastly more than anyone knows, and the rate of improvement is exponential.”

The “rate of improvement” refers to an AI’s potential to learn and research at a pace that dwarfs that of any human scientist or thinker. Eventually, we may be unable to keep up with, or curb, the advancement of AI.

Harris compared the differences between super-intelligent AI and humans to the differences between humans and ants.

“Whenever [ants’] presence seriously conflicts with one of our goals… we annihilate them without a qualm,” he said.

Philosopher and technologist Nick Bostrom, in his TED talk on the potential dangers of AI, said, “Once there is superintelligence, the fate of humanity may depend on what the superintelligence does.” Because super-intelligent AI, or machine intelligence, will be able to improve itself and invent things on its own, “machine intelligence is the last invention humanity will ever have to make,” Bostrom said.

The danger, then, is that the AI will not share our values and will invent and do things contrary to the well-being of the human species. The example, told slightly differently by AI researchers and experts the world over, goes something like this:

We create an AI whose sole objective is to help students graduate college. At first, it may provide tutoring to students in need, help tailor their class schedules to their biological clocks, and motivate them with messages of encouragement and inspiration. But as the AI becomes more powerful and intelligent, it may decide it needs to take over all colleges, imposing extremely strict admission requirements on prospective students and firing tenured professors to replace them with ultra-efficient, robotic lecturers that sign any last-chance agreements presented to them and run on pennies’ worth of electricity a day.

“Mark my words: AI is far more dangerous than nukes,” Musk said. “So why do we have no regulatory oversight? It’s insane.”

As DSU students, and future (and current) members of a technocratic society, be wary of the grave possibilities an unregulated AI-driven future can bring, and do your best to battle past the mental imagery of rogue home appliances and bipedal battle bots. You are in a unique position to choose your field and shape your future. Consider a career that helps alleviate the problems that will burden tomorrow’s society, or speak out against our reckless abandon in developing AI.