Oversight as a Solution to Institutional Bias Harms
As AI grows into a social asset that replicates the idea of a collective brain, the expanding body of information it draws on increases the potential for bias.
Thought of simply as a collection of information, AI poses no harm on its own; it can even be a great asset to society. The risk arises when that information is made available to the systems that drive productivity: without proper oversight, AI-powered systems are prone to institutional prejudice, and the potential for harmful outcomes becomes real.
In an attempt to warn of such ethical concerns, Karen Mills points to redlining as an example of institutionalized prejudice (Pazzanese). Redlining was the practice of denying mortgages to residents of certain neighborhoods, disproportionately those in marginalized communities. Mills warns that if allowed to go unchecked, AI's algorithmic practices could lead to new forms of institutionalized discrimination.
It may seem clear that a world without prejudice would be better, yet tribalism still shapes how groups of humans organize themselves, less now than a few centuries ago but still present. Some aspects of tribalism serve as mechanisms of cultural and social identity; others shade into racism and prejudice, especially if AI comes to show preference for a specific group of people.
A recent example of AI prejudice comes from Rite Aid, which implemented facial recognition AI to improve security efficiency. The company "failed to take reasonable measures to prevent harm to consumers in its use of facial recognition technology (FRT) that falsely tagged consumers... particularly women and people of color, as shoplifters" (Federal Trade Commission).
It is not clear how the system formed the associations that led to women and people of color being tagged more often, but it is easy to see that prejudice was applied. The Rite Aid case illustrates how the unsupervised implementation of artificial intelligence can harm people, and how easily unregulated AI-powered systems can be deployed, producing harmful outcomes not only for individuals but for entire groups.
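To make the kind of disparity described in the FTC action concrete, the short Python sketch below is a purely hypothetical illustration, not drawn from the FTC report or from any Rite Aid data; the group labels and records are invented. It shows how an auditor might compare false-positive "shoplifter" match rates across demographic groups, the signal an oversight body would look for.

    # Hypothetical audit sketch: compare how often innocent people in each
    # demographic group are falsely flagged by a facial recognition system.
    # All records below are invented for illustration only.
    from collections import defaultdict

    # Each record: (group, flagged_by_system, actually_shoplifting)
    records = [
        ("group_a", True, False), ("group_a", False, False), ("group_a", True, True),
        ("group_b", True, False), ("group_b", True, False), ("group_b", False, False),
    ]

    false_flags = defaultdict(int)   # innocent people flagged, per group
    innocent = defaultdict(int)      # all innocent people, per group

    for group, flagged, shoplifting in records:
        if not shoplifting:
            innocent[group] += 1
            if flagged:
                false_flags[group] += 1

    for group in sorted(innocent):
        rate = false_flags[group] / innocent[group]
        print(f"{group}: false-positive rate = {rate:.0%}")

A persistent gap between these rates across groups is precisely the kind of evidence that oversight, rather than the system's operator, is positioned to detect and act on.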
The lack of transparency in AI systems such as the one Rite Aid used for security makes it hard for individuals to understand how decisions are made. For the same reason, people wrongly targeted by AI have little chance of appealing its decisions. That alone shows the potential for social harm that a poor implementation of AI may cause, and it serves as a reminder that human beings should remain involved in these processes, or at least serve as the last point of contact, supplying the ethical judgment that AI lacks.
Because AI is broadly treated as a private product governed by the corporations that build it, the behavior of those companies shapes AI deeply: the resources devoted to developing and regulating these technologies go toward practices that optimize output or profitability.
Regard for communities is then excluded from a process optimized for profitability, allowing bias to be carried over, especially against already marginalized communities, and making ethical oversight a dire need.
In Justice: What's the Right Thing to Do?, Michael J. Sandel introduces the utilitarian John Stuart Mill's distinction between higher and lower pleasures as a framework for interpreting morals. Sandel describes an experiment he conducted in his classroom, offering his students three options: a World Wrestling Entertainment (WWE) fight, a soliloquy by Shakespeare, and an excerpt from The Simpsons. Asked which they enjoyed and engaged with the most, most students answered The Simpsons. Asked which they thought had the highest moral worth, most answered the Shakespeare soliloquy.
In this example, Sandel shows that humans often choose what they find enjoyable over what they deem morally worthy. The process of sorting behaviors into higher and lower reflects deep cultural and social experience, with people's beliefs serving as the lenses through which behaviors are judged.
Engaging the same tension between enjoyment and moral acceptability, the Brazilian comedian Fernando Caruso illustrates a similar idea in a short comedy video (Multishow). Each time his joke might offend someone, a yellow warning light goes off, and he is forced to redirect the joke.
The video illustrates the social conventions that check potentially harmful behavior. In the end, Caruso delivers a comedic performance, not because of the joke itself but because of his constant struggle to tell it.
Different as they are, both examples illustrate the steps by which ethical boundaries form. Sandel shows that moral formation comes from internalized judgments, while Caruso humorously depicts societal rebuke, the moral feedback that checks those preconceptions; together, the two balance each other to shape socially acceptable behavior.
The utilitarian John Stuart Mill, in his book On Liberty, argues that "people should be free to do whatever they want, provided they do no harm to others" (Sandel 49). Mill recognizes that actions harmful to others are unethical, a position generally accepted even by those who oppose his utilitarian point of view.
It has been established that individuals do not always engage in the behaviors they consider most morally worthy, which creates the need for ethical boundaries. Ethics, however, is not an exact science. Once established, its boundaries keep shifting; it is an ever-changing field that reflects the cultural and social ideas of groups of people and marks the line between acceptable and unacceptable behavior.
Humans adapt through social convention, debate, and the reevaluation of norms. AI, on the other hand, is fixed to the data it was trained on and repeats the biases of that data; only oversight of its operation and deployment can make AI conform to society's ethical boundaries and conventions.
If human-created information is what feeds AI's knowledge, embodying the ideal of a collective brain, should AI not be expected to carry the same biases as the individuals who created that information?
That is where AI's problems begin: it does carry those biases, and, as Sandel suggests, the preconceptions that form ethical boundaries are internal to individuals and groups of people, so those biases and prejudices can permeate and propagate through AI systems.
As Caruso shows, the gap between unethical practices and ethical boundaries does not close without an agent of change; in his sketch, the warning light plays that role by reacting to his actions. In the case of AI, oversight organizations would serve the same function, supervising its implementation and upholding what communities deem morally right.
The creation of regulatory bodies may seem burdensome and, at times, even impossible, yet substantial progress has already taken place. Peter Engelke, in AI, Society, and Governance, points to the European Union's (EU) effort to create a commission that developed the "Ethics Guidelines for Trustworthy AI". This group of AI experts drafted a set of fundamental rights, including "individual freedom, human dignity, democracy, and the rule of law" (Engelke).
The progress made in the European Union is important, but compared to the rate at which AI is being developed and deployed, it is, for lack of a better word, slow. The EU's regulatory efforts, moreover, apply only to EU countries. While other major players in the AI field, such as the United States and China, have made progress of their own, the need for better coordination, both nationally and internationally, has yet to be met.
These initiatives need not only a strong beginning but also consistent effort. Because ethical practices change over time, so too must the processes and parameters within which AI operates. Given the intensive work and financial cost required, their progress lags far behind the advance of AI.
It may be argued that the benefits of a society free from these harmful outcomes outweigh the cost. As the European Commission makes clear: "The AI Act ensures that Europeans can trust what AI has to offer... certain AI systems create risks that we must address to avoid undesirable outcomes" (European Commission). These undesirable outcomes may come at a price greater than we are willing to pay.
The biggest challenge, unsurprisingly, is also the proposed solution: developing international oversight of AI. As mentioned before, cultural norms play a significant role in the development of morals and ethics.
While the European Union has made great progress with its AI Act, many other countries lack even basic ethical guidelines for AI development. AI, however, operates across borders and bypasses cultural norms. Because AI services are delivered online, corporations can base their operations in countries with more lenient regulations and lower costs, sidestepping stronger ethical standards elsewhere.
Without an international agreement, AI developed in one country may deeply affect others. The difficulty of ensuring consistency in how these systems function underscores the need for such an agreement to prevent bias and eventual harm from institutionalized AI.
Creating such an initiative requires three demanding steps: research, applicability, and enforcement. Research curates data, sets parameters, and classifies data over time, with regular revision for bias. Applicability develops and implements processes, regionally and internationally, for overseeing AI deployment. Enforcement ensures that refusal to abide by these practices incurs penalties. Only then can AI be safely overseen as the powerful tool that it is.
Coordinating such an initiative seems challenging, but oversight could also take shape as a set of guidelines issued through existing institutions such as the United Nations (UN), avoiding unnecessary coordination hurdles, with each UN member enforcing those guidelines in its own country.
It can also be argued that such initiatives would work against innovation and slow progress, especially in AI, where innovation moves astonishingly fast. From that perspective, oversight may be seen as discouraging private initiative in AI development. Even so, checkpoint mechanisms must be applied throughout AI's implementation.
This framing places AI and innovation on one side and regulation on the other, but that line of thinking tends to prioritize speed over safety and profit over ethics.
As Mills mentioned, the lack of ethical boundaries could lead to long-lasting institutionalized harm, and that logic applies to AI as well. The development of such tools should not come at the expense of safety. If anything, both should advance together, ensuring not only that AI can be developed but that it is developed within boundaries and evolves in the right direction.
Works Cited
Bullard, Robert D. “Environmental Justice in the 21st Century: Race Still Matters.” Phylon, vol. 49, no. 3/4, 2001, pp. 151–171. JSTOR, www.jstor.org/stable/3132626.
Engelke, Peter. AI, Society, and Governance: An Introduction. Atlantic Council, 2020. JSTOR, www.jstor.org/stable/resrep29327.
European Commission. Regulatory Framework Proposal on Artificial Intelligence. Digital Strategy, 2024, digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai. Accessed 4 June 2025.
Federal Trade Commission. AI and the Risk of Consumer Harm. 30 Jan. 2025, www.ftc.gov/policy/advocacy-research/tech-at-ftc/2025/01/ai-risk-consumer-harm/.
Multishow. “Piada Proibida - Fernando Caruso - Estranha Mente - Humor Multishow.” YouTube, 18 Mar. 2016, www.youtube.com/watch?v=ylAOB3V3fAQ.
Paul, Kari. “Tenant Screening Company SafeRent Sued over Use of Biased AI.” The Guardian, 14 Dec. 2024, www.theguardian.com/technology/2024/dec/14/saferent-ai-tenant-screening-lawsuit/.
Pazzanese, Christina. “Great Promise but Potential for Peril.” Harvard Gazette, 26 Oct. 2020, news.harvard.edu/gazette/story/2020/10/ethical-concerns-mount-as-ai-takes-bigger-decision-making-role/.
Sandel, Michael J. Justice: What’s the Right Thing to Do? Farrar, Straus and Giroux, 2009.