This article is written by a student writer from the Her Campus at ODU chapter.

Nicki Minaj, Billie Eilish and J Balvin are just three of the more than 200 artists who signed a letter speaking out against the use of artificial intelligence to develop art. The letter, featured on Medium and written "by artists, for artists," signals the exhaustion and indignation felt by creators. Citing AI companies that are "employing AI to sabotage creativity and undermine artists, songwriters, musicians and rights holders," it aims to raise public awareness and safeguard these creators' livelihoods. While the letter acknowledges the massive potential of artificial intelligence to "advance creativity," it also calls for stopping AI from unfairly stealing works, modifying them, using them for training without permission or payment, and churning out cheap art.

Earlier this year, TikTok users were in a frenzy over the impending removal of all Universal Music Group (UMG) music from the platform. In an open letter, UMG stated that TikTok is purposefully allowing, accepting and encouraging the use of AI-modified audio and video.

Music is a core aspect of TikTok, as the letter states, and TikTok's proposal to pay songwriters a "fraction" of what similar content platforms pay was met with much distaste by UMG.

“TikTok’s tactics are obvious: Use its platform power to hurt vulnerable artists and try to intimidate us into conceding to a bad deal that undervalues music and shortchanges artists and songwriters as well as their fans.”

In response, TikTok published a statement on X, formerly known as Twitter, accusing UMG of "[putting] their own greed above the interests of their artists and songwriters" and calling UMG's letter a "false narrative."

OpenAI, creator of ChatGPT, recently announced the development of Voice Engine, a tool designed to replicate the human voice. The company cites the tool's benefits: needing only a small sample to generate a realistic, natural and emotive voice, it can provide reading assistance for students and translate speech with the correct accent, which is useful in foreign markets for consumers, workers and everyday use. It can also give patients their voice back when they lose it, an essential tool for those with speech conditions.

OpenAI acknowledges the risks of the program and has not yet released the tool publicly. The company is currently working with the U.S. government, international governments and the private sector, testing only with partners who agree not to impersonate anyone without their consent or legal right. All generated audio must be marked as artificial and carry a watermark that lets any AI-generated audio be traced back to Voice Engine. OpenAI also recommends phasing out voice-based security measures, such as a person controlling access to a bank vault with their voice, and suggests wider education about and adoption of AI-detection techniques.
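OpenAI has not published how its watermark actually works, but the general idea of a traceable audio watermark can be sketched in a few lines. The example below is purely illustrative (the engine ID "VE-01" and the least-significant-bit scheme are invented for this sketch, not OpenAI's method): it hides a provenance tag in the lowest bit of each 16-bit audio sample, where the change is effectively inaudible, and reads it back out later.

```python
# Toy illustration of a traceable audio watermark (NOT OpenAI's actual method):
# hide a provenance ID in the least significant bit of each audio sample.

def embed_watermark(samples: list[int], tag: str) -> list[int]:
    """Overwrite each sample's lowest bit with one bit of the tag."""
    bits = [int(b) for byte in tag.encode() for b in f"{byte:08b}"]
    out = samples[:]
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # clear the LSB, then set it to the tag bit
    return out

def extract_watermark(samples: list[int], tag_len: int) -> str:
    """Read tag_len bytes back out of the sample LSBs."""
    bits = [s & 1 for s in samples[: tag_len * 8]]
    data = bytes(
        int("".join(map(str, bits[i : i + 8])), 2) for i in range(0, len(bits), 8)
    )
    return data.decode()

audio = list(range(1000, 1200))           # stand-in for real PCM samples
marked = embed_watermark(audio, "VE-01")  # "VE-01" is a made-up engine ID
print(extract_watermark(marked, 5))       # prints "VE-01"
```

Real schemes are far more robust (they must survive compression and re-recording), but the principle is the same: every generated clip carries an invisible tag pointing back to the tool that made it.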

The fact is, artificial voice engineering is incredibly risky: a Federal Trade Commission report stated that consumers lost over $3 million to impersonation scams in 2023. Scammers prey on fear, posing as a close family member or friend who urgently needs money, and use similar speech patterns, voice modulation and emotion to convince victims that they really are the person they claim to be.

The FTC issued a challenge, with $35,000 in prizes, to create detectors for AI-generated voices, and awarded three winners: AI Detect, OriginStory and DeFAKE, each using a unique technological approach to detection. OriginStory may be the most novel idea, rooted in the nature of the human voice: it embeds into the audio the "biosignals" a person creates in their throat and mouth while speaking, proving the recording is a genuine human creation.

Detection is not the only way; prevention must be coupled with these technologies to better safeguard artists' rights. The Biden administration has acknowledged numerous risks of AI, as reflected in the executive order I've previously covered; however, art was not among the areas it addressed.

To protect artists, Tennessee has added voice to its existing law protecting name, image and likeness. The ELVIS Act is designed to protect against AI recordings and bans the modification or unconsented replication of a person's likeness, including their voice, in any published material.

“This incredible result once again shows that when the music community stands together, there’s nothing we can’t do. We applaud Tennessee’s swift and thoughtful bipartisan leadership against unconsented AI deepfakes and voice clones and look forward to additional states and the U.S. Congress moving quickly to protect the unique humanity and individuality of all Americans.” – Mitch Glazier, Recording Industry Association of America (RIAA) Chairman & CEO

While we recognize that the trend of modified audio and AI-generated fakes of our favorite celebrities singing random off-brand jingles is very entertaining, the fact remains that we are exploiting their likeness by using unauthorized generations of their voices. Rather than treating a voice as something to try on ("Oh, I can sound like Ariana Grande today by using this voice modulator"), why not consider that these are real people with a personal right to their privacy, voice and image? In 2023, an AI-generated deepfake of Trump rapping about being arrested, attacking his opponents and promising to finish his wall reached more than 3 million views. Even a former president wasn't safe from our memes and unconsented use of his voice, so how safe is the average artist?

Lately, artificial intelligence seems to be disrupting all kinds of art industries.

AI models are trained on real, human-made art without permission. That training then lets the software pump out artificial graphics and images that almost (but not quite) look real.

However, for every technological problem there is a technological solution. Nightshade, a tool developed at the University of Chicago, acts as a "poison pill" against AI models that train on artwork without permission. Normally, image models learn from metadata, the descriptive text paired with a picture. The example NPR cites is a dog: the word "dog" appears in the metadata for a picture of a dog, allowing the model to learn what the picture depicts.

Nightshade exploits this process to stop a model from associating the words with the image. It makes subtle changes to the image so the model can no longer link it to the correct word: the image of the dog still looks like a dog to a human viewer, but to the model it looks like a cat.
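As a toy illustration of that idea (not Nightshade's actual algorithm, which perturbs images in pixel space), imagine a model that represents each concept as the average feature vector of its training images. Poisoned samples that are still labeled "dog" but carry cat-like features drag the learned notion of "dog" toward "cat":

```python
# Toy illustration of the Nightshade concept (not the real algorithm):
# poisoned samples keep the label "dog" but carry cat-like features,
# shifting what the model thinks "dog" means.

def centroid(points):
    """The model's 'concept': the average feature vector for a label."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

clean_dogs = [(0.0, 0.1), (0.2, -0.1), (-0.1, 0.0)]    # features of real dog images
poisoned   = [(9.9, 10.0)] * 9                         # labeled "dog", but cat-like features

print(centroid(clean_dogs))             # ≈ (0.03, 0.0): "dog" still means dog
print(centroid(clean_dogs + poisoned))  # ≈ (7.43, 7.5): "dog" has drifted toward cat
```

Enough poisoned samples in a scraped training set, and a prompt for "dog" starts producing cat-like output, which is exactly the deterrent Nightshade's creators intend.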

Class-action lawsuits are ongoing against Stability AI and many other companies for using images found online, without compensation, to train their models.

Sebastian Smee, art critic at The Washington Post, has a different take on the subject, writing:

“Instead of thinking of generated art as a doomsday development, think of it as something to be curious about.”

He believes that the more people engage with AI and prompt it to create new and exciting images, the faster everyone will get bored of it, comparing the cycle to NFTs: they were meant to be scarce digital art, but overproduction and overuse drained the concept of public interest. He also cites human psychology: the more digital and intangible art becomes, the more people will desire a tangible form of art.

“I saw this with my own eyes at the Venice Biennale. I saw it at Art Basel Miami. I see it every week in museums and galleries. Physical art pulses and glows before our screen-addled eyes with a kind of talismanic intensity. So, if you’re an artist who makes sculpture, oil paintings, ceramics or textiles, if you’re into printmaking, watercolors or immersive, physical installations, you have nothing to fear.”

Smee wants us to see AI as an interesting new development instead of an enemy: an era like the Renaissance, Romanticism or many others throughout history that are now looked upon with admiration and curiosity.

Animator Hayao Miyazaki’s response to a presentation on AI-generated animation, however, captured what many artists feel on the subject.

Appalled, Miyazaki replied, “I strongly feel that this is an insult to life itself.”

It can be debated whether AI output is a deserving form of art. What is undeniable is that the exploitation of artists and the unfair use of their works is a real problem. As a society, we must support real creators and their efforts, and stop patronizing software companies that continue to hurt our artists.

Hi! I’m a junior at ODU, majoring in business administration. I love writing, painting and letting my creativity shine! I would love to work in event management one day.