This post by Dieuwertje Luitse is part of our series “Global Digital Cultures in times of COVID-19”, written by students of the Research Master’s in Media Studies at the University of Amsterdam.
Artificial Intelligence (AI) ethics are on the rise. As is increasingly clear, big data and algorithmic systems risk reproducing bias and inequalities towards historically disadvantaged communities (Hoffmann, 2019). Such criticism focuses on the lack of ethics, fairness, responsibility, accountability and diversity in the AI field. In response, public institutions including the United Nations are issuing principles and guidelines around the ethics of AI. Recent years have also seen major tech corporations such as Google, Microsoft and Amazon launch AI ethics boards and create teams dedicated to guiding the development of ethical and responsible AI technologies. These ethics teams sometimes work together with university researchers to address the critiques and challenges involved in the production and deployment of AI systems. This approach to AI ethics can, however, be criticized for defining “the problem space very narrowly” (Whittaker et al., 2018), and it requires further critical inquiry.
For example, Shakir Mohamed, a research scientist at Google’s DeepMind, recently published a paper on decolonizing AI to strengthen research ethics in the field (Mohamed, Png and Isaac, 2020). This article formed the basis of a lecture at the TechAide AI4Good conference, which asked: how do we build a new type of critical and technical practice of AI for good? Drawing on a concept by information scientist Philip Agre, Mohamed and colleagues (2020) write that a Critical Technical Practice (CTP) of AI involves the “practitioner’s ability to ask critical questions of technology research and design.” The aim is to imagine alternative realities, to question who is developing AI and where this development is taking place, and to examine the role of culture and power embedded in AI systems. But how do we define what ‘being critical’ means in the case of AI and its development? How can we ‘imagine alternative realities’ during the current COVID-19 pandemic? And is ‘asking critical questions’ enough?
AI for Good Does Not Equal Ethical AI
Questions of ethics in computational practices are certainly not new. In 1996, Batya Friedman and Helen Nissenbaum discussed the issue of bias in computational systems and argued that practitioners must not only scrutinize system design specifications, but also “couple this scrutiny with a good understanding of relevant biases out in the world” (343), even when these systems are created with ‘good intentions’. More recently, critical data studies scholar Linnet Taylor (2016) pointed to the use of public-good rhetoric in the context of big data, discussing the ethical limitations of projects that:
- make use of big data and analytics for the public good (e.g., projects for public institutions, NGOs and marginalized groups);
- acknowledge the value of big data for national statistics and for informing emergencies and their interventions.
Corporate AI ethics initiatives also seem to rely on such rhetorical arguments of “doing good” with AI. DeepMind, for example, claims to aspire to improve medical research, and this narrative has only intensified during the current pandemic. In August 2020, the company stated that it had used its AlphaFold deep-learning system to predict protein structures associated with COVID-19 in order to help healthcare researchers better understand how the virus functions, without having to go through the months-long (non-computational) structure-determination process. In a similar vein, Amazon researchers recently released an ‘open-source’ COVID-19 Simulator and Machine Learning Toolkit for predicting the spread of the virus. However, the ethics of such corporate contributions have been questioned because of the opaque and unprotected use of large amounts of public healthcare data and the little information these companies have shared about the workings of their systems. This has made it difficult for researchers and policymakers to investigate and regulate such applications, which in turn allows the companies to avoid being held publicly accountable for the potential harm their systems may produce. It is for these reasons that the use of AI for good does not necessarily equal the ethical or responsible use of AI. The latter concepts are far more complex than the singular corporate framing of the issue, which focuses on the system’s objective rather than its ethics.
Issues with AI ethics in the context of COVID-19 are no different from the ethical problems that plagued AI before the pandemic. To contain the spread of the virus, governments around the world have rapidly adopted AI technologies, particularly for the screening, tracking and prediction of cases. In the United States, this has caused severe problems for marginalized communities, who are systematically underrepresented in the data processed in automated clinical trials (D’Ignazio & Klein, 2020). Meanwhile, Chinese authorities are working with private companies, from facial recognition firms to social media platforms, to predict outbreaks, offer medical consultations, and distribute food and medicine (Wang, 2020). This has raised concerns that the government might use its tech-led pandemic response as an opportunity to permanently implement AI technologies across different social contexts. Professor of economics K. Sudhir describes how the Indian government is increasingly relying on the country’s biometric digital identity program, Aadhaar, to automatically assign and distribute public food, healthcare and subsidies, aiming for “speed and efficiency without leakage through intermediaries.” Yet again, this AI-for-good approach is not infallible. According to Reetika Khera of IIT Delhi (Amrute et al., 2020), a growing number of Indian citizens are denied access to COVID-19 tests or relief because they lack an Aadhaar identification number or because their registered address is outdated. Such issues show how AI-based technologies do not necessarily support marginalized communities during the COVID-19 pandemic.
Taken together, these problems with AI systems reveal what is needed from the public institutions and corporations involved in ethical AI initiatives. The key is to engage ethically and responsibly with the data and algorithmic techniques at hand, and to act upon them to reduce the systematic inequalities that AI systems produce. Merely asking critical questions and providing AI ethics guidelines does not suffice. For Data & Society researchers Jacob Metcalf, Emmanuel Moss, and danah boyd (2019), this requires an improved mode of “doing ethics” that takes social, economic and legal responsibility for the creation of technical systems. Instead of following current techno-solutionist approaches to AI ethics at speed, these new technical systems should be created along the lines of fundamental, historically embedded, collective social and political values.
Alternative Realities or Refusal?
To develop this proposed mode of “doing ethics” (Metcalf, Moss and boyd, 2019), considering alternative realities might be a starting point for examining the values embedded in AI systems. In the case of COVID-19, counterdata initiatives such as Data for Black Lives are currently filling the coronavirus data gaps identified by D’Ignazio and Klein (2020). Similarly, data activism scholars Stefania Milan and Emiliano Treré (2020) discuss non-western grassroots initiatives that have organized to increase the visibility of the problems marginalized communities face and to improve their conditions during the pandemic. By providing data and information about non-white and/or non-western individuals, such initiatives add to the representation of marginalized communities. Following this line of thinking, this information can help AI practitioners and policymakers imagine realities and matters of public concern (Milan, 2020) that exist outside their own imaginative faculty.
However, returning to Friedman and Nissenbaum’s statement (1996), the development of new ethical modes for AI requires more than a systematic correction of data gaps. It also demands that practitioners become aware, through training, of preexisting social and technical inequalities (Friedman and Nissenbaum, 1996), and that they potentially focus less on matters of efficiency and scale and more on public ethics and values.
New modes for deploying ethics in the field of AI may also reveal that in some cases AI-driven solutions are simply not preferable and should instead be refused. One example is facial recognition software, whose use in law enforcement is increasingly criticised for its potential for human rights violations. The case for refusal was also made by Kate Crawford in conversation with Alondra Nelson during the AoIR 2020 conference keynote, in which they discussed their dissatisfaction with current good or ethical AI movements. Arguing that “resignation is the flip side of the politics of refusal,” Crawford proposed that resistance against the implementation of AI systems across the public sector seems necessary to confront the current social issues with the technology (Crawford & Nelson, 2020). But what are the values that generate this kind of technological refusal? How could refusal become part of the Critical Technical Practice of AI, beyond the embedding of alternative realities in system design? And what would it mean for technological applications to be refused by their developers, whose work is driven by the market logics of tech corporations and often detached from the work of AI ethics teams?
The consequences of such “acts of refusal” recently became visible when Google triggered controversy by forcing out Timnit Gebru, co-lead of its Ethical AI team. Gebru had refused a demand to retract her name from a paper that turned a critical eye on the development of large language models such as BERT (Google) and GPT-3 (OpenAI). According to AI reporter Karen Hao (2020), the controversy sparked new debates about growing “corporate influence over AI” and the power of tech companies to limit AI ethics research, even when that research is conducted by their own employees. This example, again, emphasizes the complexity of current problems with AI ethics and calls for improved and comprehensive approaches to address them.
Bio
Dieuwertje Luitse is a Research Master student in Media Studies (New Media & Digital Culture) at the University of Amsterdam, with a professional background in graphic design and media arts. Her research interests mainly focus on the political economy of platforms and the (historical) development of Artificial Intelligence systems in relation to their social and political implications. (Twitter: @DLuitse)
References
Amnesty International. (2020, June 11). Calls for Ban on the Use of Facial Recognition Technology for Mass Surveillance. Amnesty International. Retrieved from https://www.amnesty.org/en/latest/research/2020/06/amnesty-international-calls-for-ban-on-the-use-of-facial-recognition-technology-for-mass-surveillance/.
Amrute, S., Khera, R. & Willems, A. (2020). Aadhaar and the Creation of Barriers to Welfare. Interactions, 27(6): 76–79. https://doi.org/10.1145/3428949.
Bao, H. (2020). “Anti-domestic violence little vaccine”: A Wuhan-based feminist activist campaign during COVID-19. Interface: A Journal for and About Social Movements. https://www.interfacejournal.net/wp-content/uploads/2020/05/Bao.pdf.
Crawford, K. & Nelson, A. (2020). The Politics of AI after COVID-19. AoIR 2020 Keynote. Retrieved from https://aoir.org/aoir2020/aoir2020keynote_plenary/.
Crawford, K., Dobbe, R., Dryer, T., Fried, G., Green, B., Kaziunas, E., Kak, A., Mathur, V., McElroy, E., Sánchez, A.N., Raji, D., Rankin, J.L., Richardson, R., Schultz, J., West, S.M. & Whittaker, M. (2019). AI Now Report 2019. AI Now Institute, New York University.
Data for Black Lives. (2020). Action: COVID-19 open data by state. Retrieved from: http://d4bl.org/action.html.
Deb, T., Roy, A., Genc, S., Slater, N., Mallya, S., Kass-Hout, T.A. & Hanumaiah, V. (2020, October 30). Introducing the COVID-19 Simulator and Machine Learning Toolkit for Predicting COVID-19 Spread. AWS Machine Learning Blog. Retrieved from https://aws.amazon.com/blogs/machine-learning/introducing-the-covid-19-simulator-and-machine-learning-toolkit-for-predicting-covid-19-spread/.
Dean, J. (2019, June 28). Responsible AI: Putting our principles into action. Google [Blog]. Retrieved from https://blog.google/technology/ai/responsible-ai-principles/.
D’Ignazio, C. & Klein, L.F. (2020). Seven intersectional feminist principles for equitable and actionable COVID-19 data. Big Data & Society, 7(2): 1–6. https://doi.org/10.1177/2053951720942544.
Featherstone, L. (2018, May 4). How Big Data Is ‘Automating Inequality’. New York Times. Retrieved from https://www.nytimes.com/2018/05/04/books/review/automating-inequality-virginia-eubanks.html.
Friedman, B. & Nissenbaum, H. (1996). Bias in Computer Systems. ACM Transactions on Information Systems, 14(3): 330–347.
Gupta, S. (2020, December 23). Coders’ dilemmas: The challenge of developing unbiased algorithms. Feminist Approaches to Labour Collectives (FemLab.co) [Blog]. Retrieved from https://femlab.co/2020/12/23/coders-dilemmas-the-challenge-of-developing-unbiased-algorithms/.
Hanna, A. & Whittaker, M. (2020, December 31). Timnit Gebru’s Exit From Google Exposes a Crisis in AI. Wired. Retrieved from https://www.wired.com/story/timnit-gebru-exit-google-exposes-crisis-in-ai/.
Hao, K. (2020, December 16). “I started crying”: Inside Timnit Gebru’s last days at Google—and what happens next. MIT Technology Review. Retrieved from https://www.technologyreview.com/2020/12/16/1014634/google-ai-ethics-lead-timnit-gebru-tells-story.
Hoffmann, A.L. (2019). Where fairness fails: data, algorithms, and the limits of antidiscrimination discourse. Information, Communication & Society, 22(7): 900–915.
Jobin, A., Ienca, M. & Vayena, E. (2019). Artificial Intelligence: the global landscape of ethics guidelines. Nature Machine Intelligence, 1(9), 389–399. https://doi.org/10.1038/s42256-019-0088-2.
Jumper, J., Tunyasuvunakool, K., Kohli, P., Hassabis, D. & the AlphaFold Team. (2020, August 4). Computational predictions of protein structures associated with COVID-19, Version 3. DeepMind. Retrieved from https://deepmind.com/research/open-source/computational-predictions-of-protein-structures-associated-with-COVID-19.
Knight, W. (2017, July 12). Biased Algorithms Are Everywhere, and No One Seems to Care. MIT Technology Review. Retrieved from https://www.technologyreview.com/2017/07/12/150510/biased-algorithms-are-everywhere-and-no-one-seems-to-care/.
Larochelle, H. (2020, October 29). TechAide AI4Good 2020 – Shakir Mohamed: Imaginations of Good, Missions for Change [Video]. YouTube. Retrieved from https://www.youtube.com/watch?v=AG31OtrAigM&feature=emb_logo.
Metcalf, J., Moss, E., & boyd, danah (2019). Owning Ethics: Corporate Logics, Silicon Valley, and the Institutionalization of Ethics. Social Research: An International Quarterly, 86(2): 449–476. https://www.muse.jhu.edu/article/732185.
Microsoft. (n.d.). Responsible AI. Microsoft. Retrieved from https://www.microsoft.com/en-us/ai/responsible-ai?activetab=pivot1:primaryr6.
Milan, S. (2020). Techno-solutionism and the standard human in the making of the COVID-19 pandemic. Big Data & Society, 7(2): 1–7.
Milan, S., & Treré, E. (2020). The Rise of the Data Poor: The COVID-19 Pandemic Seen From the Margins. Social Media + Society. https://doi.org/10.1177/2056305120948233.
Mohamed, S., Png, M.T. & Isaac, W. (2020). Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence. Philosophy & Technology, 33: 659–684. https://doi.org/10.1007/s13347-020-00405-8.
Paul, K. (2019, April 17). ‘Disastrous’ lack of diversity in AI industry perpetuates bias, study finds. The Guardian. Retrieved from https://www.theguardian.com/technology/2019/apr/16/artificial-intelligence-lack-diversity-new-york-university-study.
Powles, J. & Hodson, H. (2017). Google DeepMind and healthcare in an age of algorithms. Health and Technology, 7: 351–367. https://doi.org/10.1007/s12553-017-0179-1.
Shead, S. (2019, March 27). Google Announced An AI Advisory Council, But The Mysterious AI Ethics Board Remains A Secret. Forbes. Retrieved from https://www.forbes.com/sites/samshead/2019/03/27/google-announced-an-ai-council-but-the-mysterious-ai-ethics-board-remains-a-secret/?sh=643b279b614a.
Singh, M. (2020, December 17). Google workers demand reinstatement and apology for fired Black AI ethics researcher. The Guardian. Retrieved from https://www.theguardian.com/technology/2020/dec/16/google-timnit-gebru-fired-letter-reinstated-diversity.
Sudhir, K. (2020, April 8). Can Big Data Fight a Pandemic? Yale School of Management [Yale Insights, Blog]. Retrieved from https://insights.som.yale.edu/insights/can-big-data-fight-pandemic.
Taylor, L. (2016). The ethics of big data as a public good: which public? Whose good? Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374(2083): 20160126. https://doi.org/10.1098/rsta.2016.0126.
The Wire. (2017). It’s time to disentangle the complex Aadhaar debate. The Wire. Retrieved from https://thewire.in/government/aadharprivacy-analysis.
United Nations. (2020, December 30). Bias, racism and lies: facing up to the unwanted consequences of AI [UN News]. United Nations. Retrieved from https://news.un.org/en/story/2020/12/1080192.
Wang, W.W. (pseudonym). (2020). China: Digital collectivism in a global state of emergency. In Taylor, L. et al. (eds.), Data Justice and COVID-19: Global Perspectives (pp. 114–119). Meatspace Press. https://archive.org/download/data-justice-and-covid-19/Data_Justice_and_COVID-19.pdf.
Whittaker, M., Crawford, K., Dobbe, R., Fried, G., Kaziunas, E., Mathur, V., Myers West, S., Richardson, R. & Schwartz, O. (2018). AI Now Report 2018. AI Now Institute, New York University. https://ainowinstitute.org/AI_Now_2018_Report.pdf.
Women’s Forum for the Economy & Society. (2020, June 8). Timnit Gebru of Google talking about Artificial Intelligence & facial recognition. | Women’s Forum [YouTube Video]. YouTube. Retrieved from https://www.youtube.com/watch?v=uYlR3OIUQx4&feature=youtu.be.