When we discuss the future of technology, the conversation often veers toward dystopian sci-fi scenarios. The possibility of robots running amok comes up both as a joke and as a real fear, depending on who you’re talking to. Personally, I’ve watched enough of Netflix’s Black Mirror to recognize that technological advancement could pose legitimate threats to the human condition.
While most of Black Mirror takes place in hypothetical future societies, the storylines draw on our real experiences and real technological possibilities. Nothing about the show seems too outlandish or exaggerated, and that’s what makes it so unsettling. It seems entirely feasible that, by 2030, we’ll walk around with ocular implants that let us pull up people’s social media feeds in our mind’s eye, and that social status will be determined by likes and followers, amalgamated into a kind of “social credit score” that works like the credit score reimagined. Nor does it seem too far-fetched that someone might invent a device that attaches to the skull and extracts people’s memories, or that AI inventions could develop a “hive mind” and collectively turn on the people they’re supposed to aid.
This all might sound very cynical, but I prefer the term ‘cautious.’ Elon Musk, widely regarded as one of the most influential technological minds of our time, has bluntly stated, “Mark my words—AI is far more dangerous than nukes.”
Whether you like him or not is beside the point. You have to admit, that’s a significant statement coming from the founder of Neuralink, whose mission is “to design a fully implantable, cosmetically invisible brain-computer interface to let you control a computer or mobile device anywhere you go” (as described on Neuralink’s website).
I don’t know about you, but I am not down for merging man with machine, especially when the man spearheading research into brain-computer interfaces is explicitly warning us about the dangers. His warning that AI is more perilous than nuclear warfare is troubling, and puzzling as well. What could be more dangerous than wiping out every living thing on the surface of the planet?
I believe the answer is spiritual. Think about how much of our energy is sapped by our cell phones, laptops, and TVs. A great portion of our thoughts is dictated by what we watch and read through the addictive glow of our screens. Our technology already has a powerful grip on us, intellectually and spiritually. And AI, which is advancing rapidly, gives technology the ability to make itself exponentially more capable. It’s like putting a nuclear microchip in everyone’s brain and giving the microchip the ability to decide when it detonates.
Starting at the tip of the iceberg, AI could quickly eradicate thousands, if not millions, of human jobs. Jobs we once thought quintessentially human and safe from technological takeover, like those of artists and writers, are now on the chopping block. We simply can’t match the efficiency of AI programs like Midjourney and ChatGPT. The scary part isn’t that they’re fast; it’s that the quality of their writing samples and artwork is actually pretty good.
We already have devices and apps that perform most of our daily tasks with unprecedented ease and speed, and we’ve reached a critical crossroads. For as long as humans have existed, we have considered ourselves the supreme living species, but we’re now in a position where technology could transcend us. It stores our personal information along with the entire catalogue of documented human knowledge, and we give it tremendous amounts of attention. Now we’re giving it uncannily human-looking bodies and the ability to teach itself.
Playing with Snapchat’s new AI feature or asking ChatGPT for a writing sample seems harmless. In an immediate sense, it might be. But my concerns lie in the future — and in what might happen if we allow technology to override human ingenuity.
The human spirit is indescribably valuable, as are our thoughts and creative endeavors. We should not fork them over to the sentient machine. Even if it takes more time and effort, we should preserve our ability to use human brain power to complete a project. So, even if there’s no harm in asking Snapchat AI a few silly questions, I won’t do it. I am exercising my human decisiveness to boycott AI.