We live in a high-tech era. With the rapid progress of technology, artificial intelligence (AI) has spread into scenarios as varied as court adjudication and employment recruitment. AI has taken on important responsibilities across industries and has the potential to make great contributions to society. Considering the role of technologies like AI, Mesthene suggests that technology itself is neither good nor evil but a neutral tool, a means of accomplishing a task whose effects depend on what people do with it. However, there have been cases where seemingly “neutral” algorithms produce discriminatory results for marginalized racial and gender groups. “I think if this technology goes wrong, it can go quite wrong… We want to work with the government to prevent that from happening,” said Sam Altman, CEO of OpenAI, the company behind ChatGPT, at a May 2023 US Senate hearing. Addressing concerns around AI ethics has become an issue that global society cannot ignore.
Regulation is still catching up with the development of AI technology, but the apparent threat of algorithmic bias and other AI ethics issues has reinforced the urgency of regulatory responses around fairness, transparency, and privacy protection. Algorithmic bias is already visible on social media, for example: a Guardian investigation found that “AI tools used in social media platforms rate photos of women as more sexually-suggestive than men,” and right-leaning political tweets have been found to receive more amplification through Twitter’s recommendation algorithm. In response, governments around the world have begun addressing the development of AI and its ethical issues with different regulatory frameworks.
In the United States, the AI industry has experienced rapid development without strong regulatory controls. One example of the prioritization of development over regulation is the US government’s 2020 Guidance for Regulation of Artificial Intelligence Applications, which aimed to reduce barriers to AI applications and promote innovation. However, cases of algorithmic discrimination have sounded the alarm on the risks of using algorithms for decision-making without oversight. In 2016, ProPublica reported that Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), an algorithmic tool used in US courts to assess recidivism risk, produced discriminatory results for Black defendants: Black defendants who did not re-offend were nearly twice as likely as white defendants to be mislabeled as high-risk, while white defendants who went on to re-offend were more often mislabeled as low-risk. In 2018, Reuters reported that Amazon’s algorithmic recruiting engine had taught itself to prefer male candidates, penalizing resumes that referenced women. Cases like these drew attention to the need for regulation of algorithmic discrimination and AI ethics more broadly.
In response to algorithmic discrimination in recruiting, New York City passed Local Law 144 in 2021 (with enforcement beginning in July 2023), which requires companies that use automated hiring tools to disclose their use and conduct bias audits. Although there are concerns over the law and its implementation, it will likely influence national regulatory responses, signaling progress on this issue. Nationally, the White House published the Blueprint for an AI Bill of Rights in October 2022, which includes a section on “Algorithmic Discrimination Protections.” According to the proposal, automated systems should be designed to prevent algorithmic discrimination on the basis of protected classifications, and developers must take proactive measures to ensure equity, disparity testing, and clear organizational oversight. While Local Law 144 points to the need for transparency and algorithmic auditing, the Blueprint examines algorithmic discrimination more holistically, attending to the distinct roles of the designers, developers, and deployers of automated systems.
In Singapore, the Monetary Authority of Singapore (MAS) has developed Veritas, a comprehensive framework for regulating AI ethics in the financial sector, as part of Singapore’s National AI Strategy. In 2018, MAS set out the FEAT principles of fairness, ethics, accountability, and transparency in the use of AI, which Veritas incorporates. The framework provides detailed guidance on what fairness means in different use cases, laying the groundwork for the development of AI fairness standards in other industries. By taking AI ethics seriously, MAS aims to build trust in Singapore’s fintech ecosystem, attracting trade and AI development while fostering innovation.
The European Union has also made progress on AI ethics regulation. In April 2021, the European Commission proposed the Artificial Intelligence Act, setting out uniform regulatory rules for AI and aiming to limit the potential risks and adverse effects of AI development at the level of EU law. On algorithmic discrimination, the proposal notes that high-quality data is crucial to the performance of AI systems, especially those involving model training, to ensure safe and non-discriminatory use. Like Singapore’s goal of promoting AI development through regulation, the European Union is advancing its AI ethics framework to strengthen AI innovation and make Europe a trusted global AI hub. The responses of both governing entities point to the need for a global regulatory framework.
Beyond government regulations that scrutinize how AI technology is used, scientists are also investigating solutions from a technical perspective. Addressing the discriminatory effects of AI requires understanding the algorithms and systems well enough to prevent or correct algorithmic discrimination. A computer can be understood as a machine that performs three operations: input, processing, and output. Correspondingly, there are three stages at which the technical problems behind algorithmic bias can be addressed.
The first is the pre-processing stage, where bias enters the system through the biased use of data or through biases reflected in the data itself. To address this, researchers process the training data, correcting the dataset where inequities and biases exist, and then train the algorithm on the transformed dataset. Existing data pre-processing techniques include “suppression of the sensitive attribute, massaging the dataset by changing class labels, and reweighing or resampling the data to remove discrimination without relabeling instances.” This is by far the most widely used approach.
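To make the reweighing idea concrete, here is a minimal sketch in Python (the toy arrays and function name are hypothetical, for illustration only): each (group, label) cell is weighted so that, under the weights, the label looks statistically independent of the sensitive attribute, in the spirit of the reweighing technique quoted above.

```python
import numpy as np

# Hypothetical toy data: s is a binary sensitive attribute, y is the
# class label. Both arrays are invented for illustration.
s = np.array([0, 0, 0, 0, 1, 1, 1, 1])
y = np.array([1, 1, 1, 0, 1, 0, 0, 0])

def reweigh(s, y):
    """Weight each (group, label) cell so that, under the weights,
    the label is statistically independent of the sensitive attribute."""
    n = len(y)
    weights = np.empty(n)
    for group in np.unique(s):
        for label in np.unique(y):
            mask = (s == group) & (y == label)
            # Expected count of this cell if s and y were independent,
            # divided by the observed count of the cell.
            expected = (s == group).sum() * (y == label).sum() / n
            weights[mask] = expected / mask.sum()
    return weights

w = reweigh(s, y)
# These weights can then be passed to any learner that accepts
# per-sample weights, e.g. scikit-learn's fit(X, y, sample_weight=w).
print(np.round(w, 2))
```

In this toy example, the disadvantaged group’s positive instances are up-weighted and its negative instances down-weighted (and vice versa for the other group), so the trained model no longer inherits the dataset’s skew.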
The second focuses on in-processing, which applies fairness treatment during model training itself. By adding fairness-related regularization terms or constraints, such as a “prejudice remover” regularizer, to the model’s optimization objective, the algorithm can be trained to balance predictive accuracy against fairness.
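As an illustrative sketch of in-processing (the toy data is hypothetical, and the penalty shown is a simple demographic-parity term standing in for a full prejudice-remover regularizer), a logistic regression can be trained with a fairness penalty added to its loss:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: X are features, s is a binary sensitive
# attribute, y is the label (all invented for illustration).
n = 200
X = rng.normal(size=(n, 3))
s = rng.integers(0, 2, size=n)
y = (X[:, 0] + 0.5 * s + rng.normal(scale=0.5, size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_grad(w, lam):
    """Logistic loss plus a demographic-parity penalty: the squared gap
    between the two groups' mean predicted scores."""
    p = sigmoid(X @ w)
    logloss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    gap = p[s == 1].mean() - p[s == 0].mean()
    dp = p * (1 - p)                          # derivative of the sigmoid
    grad_logloss = X.T @ (p - y) / n
    grad_gap = X[s == 1].T @ dp[s == 1] / (s == 1).sum() \
             - X[s == 0].T @ dp[s == 0] / (s == 0).sum()
    return logloss + lam * gap ** 2, grad_logloss + 2 * lam * gap * grad_gap

w = np.zeros(3)
for _ in range(2000):                         # plain gradient descent
    _, grad = loss_and_grad(w, lam=5.0)
    w -= 0.5 * grad

p = sigmoid(X @ w)
print("mean-score gap between groups:", abs(p[s == 1].mean() - p[s == 0].mean()))
```

The hyperparameter lam controls the trade-off: a larger value shrinks the between-group score gap at some cost to raw predictive accuracy.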
Finally, bias can be addressed in the post-processing phase, where the output of the algorithm is corrected. This approach is usually applied to scenarios where the input data and training process are black boxes. However, it faces two problems: the sheer volume of outputs to be processed and the resulting cost, and the difficulty of ensuring that the corrected results are actually fairer. The path to reducing algorithmic discrimination by correcting outputs thus still awaits further refinement by scientists.
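A minimal sketch of output correction follows (the scores are hypothetical, and equal selection rates is just one possible fairness target among several): the model is treated as a black box, and each group receives its own decision threshold so that positive rates match.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical black-box scores for two groups; only the outputs are
# visible, not the data or the training process.
n = 300
scores = rng.uniform(size=n)
group = rng.integers(0, 2, size=n)
scores[group == 1] *= 0.8       # simulate a score skew against group 1

target_rate = 0.3               # desired share of positive decisions

# Give each group its own threshold: the (1 - target_rate) quantile of
# that group's scores, so both groups get the same positive rate.
thresholds = {g: np.quantile(scores[group == g], 1 - target_rate)
              for g in (0, 1)}
decisions = scores >= np.array([thresholds[g] for g in group])

for g in (0, 1):
    print(f"group {g}: positive rate = {decisions[group == g].mean():.2f}")
```

The sketch also illustrates the second problem noted above: equalizing selection rates says nothing about whether the selected individuals are the right ones, so verifying that the corrected output is genuinely fairer requires ground-truth outcomes the black-box setting may not provide.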
As global society faces pressing questions around AI ethics, the response requires combining regulatory policy with technical solutions. However, questions like “Does TikTok use WiFi,” asked by Congressman Hudson at a 2023 hearing, suggest that many legislators lack the technical knowledge necessary to legislate technology and tackle problems like algorithmic discrimination. Bridging this knowledge gap requires collaboration between lawmakers and technology developers and experts. So does global collaboration: as governments around the world pay more attention to algorithmic bias and AI, they can learn from one another’s regulatory frameworks and work together toward ethical AI.