This post by Dieuwertje Luitse is part of our series “Global Digital Cultures in times of COVID-19”, written by students of the research master Media Studies at the University of Amsterdam.
Artificial Intelligence (AI) ethics is on the rise. As is increasingly clear, big data and algorithmic systems risk reproducing bias and inequality against historically disadvantaged communities (Hoffman, 2019). Such criticism focuses on the lack of ethics, fairness, responsibility, accountability and diversity in the AI field. In response, public institutions including the United Nations are issuing principles and guidelines around the ethics of AI. Recent years have seen major tech corporations such as Google, Microsoft and Amazon launch AI ethics boards and create teams dedicated to guiding the development of ethical and responsible AI technologies. These ethics teams sometimes work together with university researchers to address the critiques and challenges involved in the production and deployment of AI systems. This approach to AI ethics can, however, be criticized for defining solutions to “the problem space very narrowly” (Whittaker et al., 2018) and requires further critical inquiry.
For example, Google’s DeepMind research scientist Shakir Mohamed recently published a paper on decolonizing AI to strengthen research ethics in the field (2020). This article formed the basis of a lecture at the TechAide AI4Good conference, which asked: how do we build a new type of critical and technical practice of AI for good? Drawing on a concept by information scientist Philip Agre, Mohamed and colleagues (2020) write that a Critical Technical Practice (CTP) of AI involves the “practitioner’s ability to ask critical questions of technology research and design.” The aim is to imagine alternative realities, to question who is developing AI and where this development is taking place, and to examine the role of culture and power embedded in AI systems. But how do we define what ‘being critical’ means in the case of AI and its development? How do we ‘imagine alternative realities’ during the current COVID-19 pandemic? And is ‘asking critical questions’ enough?
AI for Good Does Not Equal Ethical AI
Questions of ethics in computational practices are certainly not new. In 1996, Batya Friedman and Helen Nissenbaum discussed the issue of bias in computational systems and argued that practitioners must not only scrutinize system design specifications, but also “couple this scrutiny with a good understanding of relevant biases out in the world” (343), even when these systems are created with ‘good intentions’. More recently, critical data studies scholar Linnet Taylor (2016) pointed to the use of public-good rhetoric in the context of big data, discussing the ethical limitations of projects that:
- make use of big data and analytics for public-good (e.g., projects for public institutions, NGOs and marginalized groups);
- acknowledge the value of big data in national statistics, informing emergencies and their interventions.
Corporate AI ethics initiatives also seem to rely on such rhetorical arguments of “doing good” with AI. DeepMind, for example, claims to aspire to improve medical research, and this narrative has only intensified during the current pandemic. In August 2020, the company stated that it had used its AlphaFold deep-learning system to predict protein structures associated with COVID-19, helping healthcare researchers better understand how the virus functions without having to go through the months-long (non-computational) structure-determination process. In a similar vein, Amazon researchers recently released an ‘open-source’ COVID-19 Simulator and Machine Learning Toolkit for predicting the spread of the virus. However, the ethics of such corporate contributions have been questioned because of the opaque and unprotected use of large amounts of public healthcare data and the little information these companies have shared about the workings of their systems. This has made it difficult for researchers and policymakers to investigate and regulate such applications, and in turn allows the companies to avoid public accountability for the potential harm their systems may produce. For these reasons, the use of AI for good does not necessarily equal ethical or responsible uses of AI. The latter concepts are far more complex than the singular way in which companies frame the issue, focusing on the objective rather than the ethics of the system itself.
Issues with AI ethics within the context of COVID-19 are no different from the ethical problems plaguing AI before the pandemic. To contain the spread of the virus, governments around the world have rapidly adopted AI technologies, particularly for the screening, tracking and prediction of cases. In the United States, this has caused severe problems for marginalized communities, who are systematically underrepresented in the data processed in automated clinical trials (D’Ignazio & Klein, 2020). Meanwhile, Chinese authorities are working with private companies, in fields ranging from facial recognition to social media, to predict outbreaks, offer medical consultations, and distribute food and medicine (Wayne W. Wang, 2020). This has raised concerns that the government might use this tech-led pandemic response as an opportunity to permanently implement AI technologies in different social contexts. Professor of economics K. Sudhir describes how the Indian government is increasingly relying on the country’s biometric digital identity program, Aadhaar, to automatically assign and distribute public food, healthcare and subsidies, aiming for “speed and efficiency without leakage through intermediaries.” Yet again, this AI-for-good approach is not infallible. According to Reethika Khera (IIT Delhi) (Amrute et al., 2020), an increasing number of Indian citizens are denied access to corona tests or relief because they lack an Aadhaar identification number or because their registered address is outdated. Such issues show how AI-based technologies do not necessarily support marginalized communities during the COVID-19 pandemic.
Taken together, the problems with AI systems reveal what is needed from the public institutions and corporations involved in ethical AI initiatives. The key is to engage ethically and responsibly with the data and algorithmic techniques at hand and to act upon them to reduce the systematic inequalities produced by AI systems. Merely asking critical questions and providing AI ethics guidelines does not suffice. For Data & Society researchers Jacob Metcalf, Emanuel Moss, and danah boyd (2019), this requires an improved mode of “doing ethics” that assumes social, economic and legal responsibility in the creation of technical systems. Instead of following current techno-solutionist approaches to AI ethics at speed, these new technical systems should be created along the lines of fundamental and historically embedded collective social and political values.
Alternative Realities or Refusal?
To develop this proposed mode of “doing ethics” (Metcalf, Moss and boyd, 2019), considering alternative realities might be a starting point for examining the values embedded in AI systems. Focusing on the case of COVID-19, counterdata initiatives such as Data for Black Lives are currently filling the coronavirus data gaps identified by D’Ignazio and Klein (2020). Similarly, data activism scholars Stefania Milan and Emiliano Treré (2020) discuss non-western grassroots initiatives that have organized to increase the visibility of the problems marginalized communities face and to improve their conditions during the pandemic. By providing data and information about non-white and/or non-western individuals, such initiatives add to the representation of marginalized communities. Following this line of thinking, this information can help AI practitioners and policymakers imagine realities and matters of public concern (Milan, 2020) that exist outside their own imaginative faculty.
However, returning to Friedman and Nissenbaum’s statement (1996), the development of new ethical modes for AI requires more than a systematic correction of data gaps. It also demands that practitioners be made aware, through training, of preexisting social and technical inequalities (Friedman and Nissenbaum, 1996), potentially focusing less on matters of efficiency and scale and more on public ethics and values.
New modes of deploying ethics in the field of AI may also reveal that in some cases AI-driven solutions are simply not preferable and should instead be refused. One example is facial recognition software, which is increasingly criticised for its use in law enforcement and its potential for human rights violations. The case for refusal was also made by Kate Crawford in conversation with Alondra Nelson during the AoIR 2020 conference keynote, in which they discussed their dissatisfaction with current good or ethical AI movements. Arguing that “resignation is the flip side of the politics of refusal,” Crawford proposed that resistance against the implementation of AI systems across the public sector seems necessary to confront current social issues with the technology (Crawford & Nelson, 2020). But what are the values that generate this kind of technological refusal? How could refusal become part of a Critical Technical Practice of AI, beyond the embedding of alternative realities in system design? And what would it mean for technological applications to be refused by their developers, whose work is driven by the market logics of tech corporations and often detached from the work of AI ethics teams?
The consequences of such “acts of refusal” recently became visible when Google’s AI ethics department triggered controversy by forcing out Timnit Gebru, its ethical AI co-lead. Gebru had refused to retract her name from a paper that turned a critical eye on the development of large language models such as BERT (Google) and GPT-3 (OpenAI). According to AI reporter Karen Hao (2020), the controversy sparked new debates about growing “corporate influence over AI” and the power of tech companies to limit AI ethics research, even when that research is conducted by their own employees. This example, again, emphasizes the complexity of current problems with AI ethics and the need for improved and comprehensive approaches to address them.
Dieuwertje Luitse is a Research Master student in Media Studies (New Media & Digital Culture) at the University of Amsterdam, with a professional background in graphic design and media arts. Her research interests mainly focus on the political economy of platforms and the (historical) development of Artificial Intelligence systems in relation to their social and political implications. (Twitter: @DLuitse)