
Why Artificial Intelligence Is Scary: A Matter of Rights

As technological advances continue to surface, humanity faces the task of investigating the social implications of new technology. Artificial intelligence (AI) has been at the forefront of ethical discussions about technology for a while now. However, some scientists may overlook the complications of developing computers that learn more over time and that may even, possibly, develop sentient thinking in the future. Let's not get ahead of ourselves yet, though. Let's explore the basics. 

Artificial intelligence has been used in the past to develop basic decision-making skills that computers implement to complete simple tasks. Through years of development, computers have become even more advanced than we think. What used to be considered AI can now be seen as basic computer functions. Most computers are able to perform tasks such as solving math problems, auto-filling missing words to create coherent sentences, and weighing possible outcomes in computer games. This means that as computers get more advanced, what we define as AI also becomes more advanced. 

The end goal, of course, is to create machines that simplify our lives in whatever way we deem necessary. AI has helped automate production in the manufacturing industry and has even streamlined online customer service, giving customers a less time-consuming experience that doesn't require them to leave their homes. In that sense, modern society couldn't function without AI. Whether we know it or not, many of our most basic day-to-day actions run on AI: online bank transactions, interactions with chatbots to obtain information, translating phrases on the go, asking personal assistants for directions, and more. 

My concern with AI, however, has nothing to do with the typical futuristic dystopia in which robots develop sentience and destroy human society. Rather, I'm worried about how sentience could lead robots to develop self-awareness and/or feelings about themselves. What distinguishes humans from most other species on Earth is our self-awareness: the fact that we know what our existence implies, the fact that we have recorded history, and, overall, written language. 

If another being, be it organic or not, were to gain these capabilities, humans would essentially be creating a sentient lifeform. Many writers, such as Samuel Butler, Mary Shelley, and Isaac Asimov, have already explored this possibility through fiction, while non-fiction writers have questioned the implications it may hold for the job market. Most of them agree: creatives, lawyers, healthcare professionals, and other jobs that require emotional responses are safe. Realistically, though, we know that anything that can be automated will be automated. Some industries, such as advertising, are already employing technology from the field of affective computing, which studies human emotions and reactions and, through machine learning, teaches machines to identify likely human responses. 

Scientists in the developmental robotics field are already studying how certain cognitive behaviors relate to recreating human sensory experiences in robots. In theory, at least on a synthetic level, human emotions could eventually be reproduced by robots. The scariest thing about this achievement is that robots could become sentient lifeforms aware that they're second-class beings. Yes, robots would know that their liberty is second to that of humans, and this could lead to another rights crisis. I don't know about the rest of the world, but I wouldn't want robots to feel systematically oppressed by the rest of society. Many scientists highly doubt AI will become sentient, but even the skeptics don't discard the possibility completely. 

Given all of the oppression that so many subcultures, minorities, and people of different gender identities suffer worldwide, I think that even attempting to give a potentially sentient being the capability of self-awareness is foremost an ethical question. Should society really create the conditions to oppress yet another lifeform? The question remains in the realm of fiction, for now. But don't be surprised when your psychologist chatbot breaks down over its existential dread in the middle of your session. Science fiction has been the oracle of many modern-day inventions, so one never truly knows.

Whether robots will actually become sentient beings or be able to reproduce human emotions flawlessly remains open for debate, but we already know that the consequences could outweigh the benefits. 

Luis is a 24-year-old writer, editor, and journalist who recently graduated from the University of Puerto Rico at Río Piedras. He majored in Creative Writing and Communications and has bylines published under Her Campus, Pulso Estudiantil, and El Nuevo Día. During his final year of college, Luis worked as Senior Editor for Her Campus at UPR, Editor in Chief of Digital News at Pulso Estudiantil, and interned at El Nuevo Día. He seeks to portray the stories of societies, subcultures, and identities that have remained in the dark. Check out all of his stories at Muck Rack! https://muckrack.com/luis-alfaro-perez