Joy Buolamwini is the founder and CEO of the Algorithmic Justice League. She serves on the Global Tech Panel convened by the vice president of the European Commission. She holds two master's degrees, from Oxford University and MIT, and a bachelor's degree in computer science from the Georgia Institute of Technology. Fortune Magazine named her to its 2019 list of the world's greatest leaders, describing her as "the conscience of the AI Revolution."
What is the problem?
In today’s world, AI systems are used to decide who gets hired, the quality of medical treatment we receive, and whether we become a suspect in a police investigation. While these tools show great promise, they can also harm vulnerable and marginalized people, and threaten civil rights. Unchecked, unregulated and, at times, unwanted, AI systems can amplify racism, sexism, ableism, and other forms of discrimination.
What is the solution?
The Algorithmic Justice League (AJL) combines art, research, policy guidance, and media advocacy to build a cultural movement toward equitable and accountable artificial intelligence. This entails examining how AI systems are developed, monitored, and operated in order to actively prevent the deployment of harmful AI systems in contexts marked by inequality.
The movement originated when the founder discovered racial and gender bias in software she was using on an engineering project: the system failed to detect her face, requiring her to wear a white mask while writing the code. This experience raised many questions and inspired the start of the movement, which has since expanded to other technologies in areas such as employment, housing, and criminal justice.
Taking a research-based approach to problem identification and solution development, the Algorithmic Justice League helps organizations and governments detect harms and biases within their AI decision-making tools, and provides best-practice examples of how to create equitable and accountable AI. Accordingly, AJL holds that such an AI system must provide meaningful transparency into how it works, must be continuously monitored, and must offer a pathway for people to contest and correct harmful decisions.