
Who Gets to Speak When AI Takes the Stand?

Carly Newberry Student Contributor, University of Northern Colorado
This article is written by a student writer from the Her Campus at UNCO chapter and does not reflect the views of Her Campus.

Fake Voices, Real Consequences

Artificial intelligence no longer lives in the uncanny valley. Just a year ago, AI-generated images still looked… off. Limbs twisted, fingers miscounted, a smile that refused to touch the eyes. Only the most unfamiliar scrollers, usually older and less tech-literate, fell for them. Gradually, then suddenly, that reality changed.

Google DeepMind’s Veo 3 now floods social media with AI-generated videos that look like they came straight from a Hollywood studio. The software simulates real-world physics, builds soundtracks, and even generates convincing dialogue.


Now that AI has improved this dramatically, we are forced to ask ourselves a series of complicated questions: Can I trust the information I’m seeing on the screen? Am I being influenced to buy this product by a person or an AI?

And maybe more terrifyingly: in a world where AI can emotionally manipulate us through content it doesn’t understand, are we still choosing how we interpret an event, or is the algorithm choosing for us, in someone else’s profit interest?

These are questions we must face in deciding whether to watch what ABC’s Law&Crime Network has done with the Sean “Diddy” Combs trial.

What Happens When We Apply AI to Courtrooms?

Combs is on trial facing federal charges of racketeering and sex trafficking. Because cameras aren’t allowed in federal courtrooms, Law&Crime used AI to recreate scenes from the trial based on official transcripts.

“Since the public can’t see or hear what’s happening in federal court firsthand, we’re using cutting-edge AI tools to bring these important proceedings to life,” said Law&Crime President Rachel Stockman in a Law&Crime article. “This is a pivotal moment in both popular culture and justice, and our goal is to provide accurate, transparent access to what’s actually being said in that courtroom.”

Yet that framing misses a critical distinction: there’s a vast difference between “bringing proceedings to life” and artificially simulating them. Court transcripts only tell part of the story. Human communication is layered with inflection, pauses, discomfort, physical presence. Even if body language analysis is pseudoscientific, we still intuit meaning when we see it in action. We still feel when someone is scared, or angry, or holding back tears.

Unless a court reporter describes every blink and flinch in the transcript—and they don’t—there is no way to translate that experience authentically to video. There’s no real way to match someone’s tone or cadence unless it’s recorded. So while the AI recreation might look polished, it does not, and cannot, represent the full truth.

And some people are pointing it out. One YouTube commenter noted a visual inconsistency: “Her nails keep changing from no manicure to full set,” they wrote. It’s a seemingly minor detail, but one that is easily verifiable on a first watch, and a reminder that the AI is inventing what it shows.

I will give ABC credit for watermarking the video as “AI GENERATED” and for not depicting the witness’s face, but that did not stop them from trying to mimic her cadence and hand and upper-body movements. This leads me to ask: is audio recording permitted in the courtroom to ensure accuracy?

The answer is no. That’s why, in their courtside reporting of the trial, the Law&Crime Network reads aloud from the testimony instead of playing an audio clip.

But when storytelling is outsourced to AI, it doesn’t just remove a survivor’s voice—it distorts who they are. It turns lived experience into spectacle, often without consent, often without care. While many of the witnesses chose to be there, that choice becomes irrelevant when it’s taken from them in the media. The context gets flattened. Their trauma is no longer theirs to hold; it’s ours to consume.

Public Victimhood 

The moment a victim becomes visible, they’re judged: too emotional, too calm, too messy, too composed. They stop being people and start being proof. Proof of pain. Or worse, proof of nothing.

Christine Blasey Ford’s memoir One Way Back makes this excruciatingly clear. The real trauma wasn’t just the assault—it was becoming a public survivor. Ford didn’t ask to be a symbol; she just wanted to tell the truth. Doing so unraveled her privacy, her peace, and, for a time, her very sense of self. A PR rep told her bluntly after her testimony, “You can never be anything else now. You can never be different than you were on that day.”

To the general public, Ford is the woman who accused Brett Kavanaugh first, and a professor or a mother second. She is forever inextricably linked to that act. And for what? He still got the job. 

Discussing the book, Ford said that writing it “was as if I had crawled out of a cave only to walk back in and tell everyone else who was still hiding inside, ‘Don’t go out there. It’s not worth it.’”

These are not empowerment stories but terrifying warnings: stories of what it costs to speak out in a country that protects power more than it protects truth. And that’s just one of the risks of letting AI reshape the narrative. Who gets to tell the story? And, more importantly, who gets erased in the process?

Cassie Ventura Deserved to Be Heard, Not Replicated

In November 2023, Cassie Ventura filed a civil lawsuit against Sean “Diddy” Combs alleging a decade-long history of abuse, including rape and physical assault. The suit was settled the following day for $20 million. That lawsuit, along with the release of the infamous hotel footage, served as a catalyst for other allegations against Combs, leading to his indictment on criminal charges.

But like Christine Blasey Ford, Ventura will never be seen the same way again. We expect a perfect victim, and then get mad when no such thing exists. No matter what else she does, her identity is now pinned to that testimony—forever linked to the violence she endured, and the man she named. And now, potentially, to a fake version of her story. A version rendered by AI. Stripped of her voice, her tone, her presence. A version she never approved.

Did anyone ask if Ventura wanted her voice cloned? Her gestures mimicked? Her silence filled with scripts and software? No. To networks like Law&Crime, Cassie Ventura is no longer a person. She’s content.

I’m a senior majoring in English: writing, editing, and publishing, with a focus on persuasion, politics, visual rhetoric, and humor. In a world where meaning feels mass-produced, I’m trying to move the needle. If not on public discourse, at least on my sewing machine.