[TW: Mentions of war and violence]
AI is being integrated into every part of our daily lives. Social media algorithms rely on AI to personalize the content we see, cars with advanced driver-assistance systems use it to interpret sensor and camera data, and even our Netflix recommendations come from AI algorithms. As these technologies develop, their potential applications expand.
The Museum of Science in Boston highlights two ethical concerns in its “Exploring AI: Making the Invisible Visible” exhibit: the use of AI in predictive policing and AI-operated drone warfare.
Predictive Policing
A map hangs on the wall in the “Exploring AI” exhibit, with a small plaque beside it that reads: “Predicting crime: AI software can help police departments predict where crime is likely to occur.” Below, in brackets, the plaque offers a word of caution: “Some argue that this software reinforces racial bias and unfairly flags neighborhoods as high risk.”
Predictive policing is not a new concept.
Analyzing historical crime data and police activity to preemptively deploy officers to high-risk areas has been a common practice for decades. In 2010, the National Institute of Justice convened its first symposium to discuss the framework and impact of predictive policing. Now, police departments are using AI to perform this data analysis.
In a 2024 brief, the NAACP outlined the potential risks of AI use in predictive policing, highlighting concerns that it perpetuates racial bias. Because these models are trained on historical crime data, they inherit the bias embedded in that data: as the brief notes, “the Black community is disproportionately negatively impacted in the criminal justice system due to targeted over-policing and discriminatory criminal laws.” When AI makes predictions from that record, it reproduces the same over-policing and discrimination.
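To see how that feedback loop works, here is a minimal, hypothetical sketch in Python (the two areas, rates, and numbers are all invented for illustration; this is not any real department's software):

```python
# Toy simulation of the feedback loop the NAACP brief describes:
# two areas with IDENTICAL true crime rates, where patrols follow the
# historical record, and the record only reflects where patrols went.

TRUE_INCIDENTS = 300    # actual yearly incidents in EACH area (identical)
TOTAL_PATROLS = 100
DETECTION_RATE = 0.01   # fraction of incidents one patrol unit records

# Biased seed data: area "A" was historically over-policed, so its
# record starts inflated even though the underlying rates are equal.
recorded = {"A": 50, "B": 20}

for year in range(1, 6):
    total = sum(recorded.values())
    # "Predictive" step: patrols follow each area's share of past records.
    patrols = {area: TOTAL_PATROLS * n / total for area, n in recorded.items()}
    # Recording step: logged incidents scale with patrol presence,
    # not with the (equal) true crime rate.
    for area in recorded:
        recorded[area] += round(TRUE_INCIDENTS * DETECTION_RATE * patrols[area])
    print(f"year {year}: patrol share A={patrols['A']:.0f}%, records={recorded}")
```

In this toy model, both areas have exactly the same true crime rate, yet area A absorbs roughly 70% of patrols every year and its record stays about two and a half times larger, because the model only ever learns from its own output.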
The NAACP lists five recommendations to make the use of AI in law enforcement more ethical and equitable:
- Implement rigorous oversight: Have independent oversight bodies monitor AI use to ensure fairness.
- Mandate transparency and accountability: Require law enforcement to make all tools, sources, and assessments publicly available.
- Promote community engagement: Involve community members in decision-making.
- Ban the use of biased data: Prohibit the use of biased historical crime data in AI algorithms.
- Establish legal frameworks: Enact legislation to regulate AI in policing.
Drone Warfare
Right below the predictive policing plaque in the Museum of Science’s AI exhibit hangs another. It reads: “Making lethal decisions: Drones with the ability to kill without human oversight have reportedly already been used, although it is unclear what the results of their use have been.”
This June, UN News published an article calling for the regulation of Lethal Autonomous Weapons (LAWs) following months of drone warfare conducted by the Russian military in the Kherson region of Ukraine. More than 150 civilians were killed in weaponized drone attacks, which a UN inquiry found constituted crimes against humanity. While the debate surrounding LAWs has been ongoing since 2013, the Russia-Ukraine conflict has heightened concerns about the use of autonomous weapons in combat.
LAWs are a military technology that can autonomously identify and kill human targets based on algorithmic data, without further input from human operators. To clarify, these are not drones going off on their own and deciding for themselves whom to target; they are drones pre-programmed with targeting algorithms and sensor technology. They operate within defined constraints and target descriptions, but they require no human involvement beyond their programming.
The UN deems the use of LAWs unethical. AI makes mistakes: wheelchairs can be flagged as weapons, and facial recognition technology is flawed. Furthermore, who takes accountability when war crimes are committed? The machine? The military? The company that manufactured it? UN Secretary-General António Guterres argues that machine killing can’t be ethical because machines don’t understand the value of human life or apply moral judgment, and he has called for “clear regulations and prohibitions on such systems by 2026.”
These autonomous approaches to policing and warfare are ethically questionable at best, yet development in both fields continues. How far are we willing to go in the name of technological innovation?