Addressing Gender Bias to Achieve Ethical AI

A crowd looks at "Sophia the Robot," created by Hanson Robotics, during the Industry 4.0 summit on automation and manufacturing technologies in Hanoi, Vietnam on July 13, 2018. (NHAC NGUYEN/AFP via Getty Images)

The sixty-seventh session of the Commission on the Status of Women (CSW67) is currently taking place at the cusp of many intertwined realities: a world still reeling from the adverse impact of the COVID-19 pandemic, a worsening climate crisis, rising inflation, emerging authoritarianism, and armed conflicts. Meanwhile, new and fast-moving advances in Artificial Intelligence (AI), such as ChatGPT, are expected to transform many aspects of our lives and livelihoods, with consequences that are hard to predict. Amid this confluence of forces, some old and stubborn realities persist. According to the World Economic Forum (WEF), it will take another 132 years to achieve gender equality on a global scale, and the COVID-19 pandemic, along with other concurrent shocks, has reversed gains for gender equality made in previous years.

In the context of CSW67’s priority theme of “innovation and technological change,” closing the gender gap seems even more pertinent, owing to the inherent gender inequities in the technology landscape. For the past 20 years, researchers have pointed to a substantial gender gap in women’s participation in STEM (science, technology, engineering, and mathematics) education and careers. Studies show that women are still largely underrepresented in fields such as computing, digital information technology, engineering, mathematics, and physics. Labor market economists attribute this to differences in human capital, the disproportionate burden of domestic responsibilities that women bear, and employment-related discrimination.

The field of AI is no different. According to 2019 estimates from UNESCO, only 12 percent of AI researchers are women, and women “represent only six percent of software developers and are 13 times less likely to file an ICT (information, communication, and technology) patent than men.” These facts raise a natural question: how does this gap in representation manifest in the technologies themselves?

Understanding Bias in AI

While artificial intelligence has many uses, such as scanning medical images for signs of cancer, the most familiar are the AI-powered devices that are now ubiquitous in homes and workplaces. The increased digitization driven by the COVID-19 pandemic has made them even more pervasive: in response to the pandemic, 55 percent of companies accelerated their AI adoption plans to address skills deficiencies across industries. However, the impact of such developments on women and their labor force participation, among other considerations, is yet to be carefully studied and documented.

Gender bias in AI can occur at various stages: in the development of the algorithm, in the data used to train it, and in the decisions the AI generates. AI applications run on algorithms, which are sets of instructions for solving problems; computationally, they transform input data into output data. The kind of data fed into an algorithm therefore directly shapes the decisions it makes. If the data contain biases, the algorithm will replicate them, and prolonged use can further entrench those biases in decision-making. The subjective choices made during the selection, collection, and preparation of datasets thus play a central role in determining potential biases. This is evident in many branches of AI, such as Natural Language Processing (NLP), where “word embeddings” can encode linguistic biases rooted in sexism, racism, or ableism. For example, Amazon’s automated resume screening system, used to select top candidates during hiring, discriminated against women. The data used to train the recruitment model came from resumes submitted over a 10-year period in which women were underrepresented, so the model learned to favor “linguistic signals” associated with successful male candidates. Once the bias was discovered, Amazon discarded the model.
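
To make this mechanism concrete, below is a minimal, hypothetical sketch in Python (not Amazon’s actual system) of how a resume classifier trained on historically male-dominated hiring data can learn to penalize tokens associated with women. The resumes, labels, and tokens are invented purely for illustration.

```python
# Hypothetical illustration: a resume screener trained on biased historical data.
# The toy training set mirrors a period in which most "hired" resumes came from men,
# so tokens like "women's" (e.g., "women's coding society") correlate with rejection
# even though they say nothing about ability.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy historical data: label 1 = hired, 0 = rejected.
resumes = [
    "software engineer java leadership award",       # hired
    "developer python systems architecture",         # hired
    "engineer robotics team lead",                    # hired
    "software engineer women's chess club captain",  # rejected (historical bias)
    "developer python women's coding society",       # rejected (historical bias)
    "engineer java women in tech mentor",             # rejected (historical bias)
]
labels = [1, 1, 1, 0, 0, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, labels)

# Two equally qualified candidates; only the gendered tokens differ.
candidates = [
    "software engineer python java leadership",
    "software engineer python java leadership women's society",
]
scores = model.predict_proba(vectorizer.transform(candidates))[:, 1]
for text, score in zip(candidates, scores):
    print(f"{score:.2f}  {text}")
# The second candidate scores lower purely because of the gendered tokens,
# reproducing the bias embedded in the training data.
```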

Feminization and Domestication of AI

Past research shows how gendered divisions are naturalized and reproduced through technology. To begin with, technology often gets equated with “men’s power,” while women and girls are portrayed as less technologically skilled and less interested in technology than their male counterparts. Such stereotypes can contribute to the gender gap in women’s participation in related fields.

The tendency to feminize AI tools mimics, and reinforces, the structural hierarchies and stereotypes of a society premised on preassigned gender roles. The gendering of AI can occur in multiple ways: through voice, appearance, or the use of female names or pronouns. Home-based virtual assistants such as Amazon’s Alexa, Microsoft’s Cortana, and Apple’s Siri were given default feminine voices (Apple and Google have since offered alternatives aimed at “diversification” or “neutrality”). As UNESCO points out, these devices were designed to have “submissive personalities” and stereotypically feminine attributes, such as being “helpful, intelligent, intuitive.” By contrast, male voices have been preferred for tasks involving teaching and instruction, as they are perceived to be “authoritarian and assertive”; IBM’s Watson, for instance, used a masculine voice while working with physicians on cancer treatment. Among these applications, Google Assistant is the only one without a “gendered name”; its default voice, however, is also female.

Gender stereotypes and inequalities can be further reinforced through divisions in the occupational roles taken up by these new technologies, as is evident in the case of robots. “Male” robots have been deemed more appropriate for security-related jobs, whereas most of the robot receptionists in Japan’s first robot-staffed hotel, launched in 2015, were “female.” And while much has been made of the threat that robots and, most recently, emerging AI chatbots pose to employment, the sectors that have seen increased “robotization” in recent years are those dominated by women and disproportionately impacted by the pandemic, such as hospitality and tourism, retail, healthcare, and education. In 2022, UNESCO published a report, the Effects of AI on the Working Lives of Women, which examined how AI is changing the landscape of work and creating new skill demands, and warned that women must not be left behind by these advancements.

Domesticated and feminized forms of AI are also increasingly performing the “affective labor” conventionally expected of women. This gendered (and often invisible) work involves producing, managing, or modifying emotions in others, and can comprise a range of activities such as “caring, listening, comforting, reassuring, [and] smiling.” Past research has shown how affective labor has been particularly associated with women of color and migrant domestic workers, whose relationships with white households are marked by patterns of “domination and invisibility.” Home-based virtual assistants such as Amazon’s Alexa, Microsoft’s Cortana, and Apple’s Siri perform affective labor by managing both data and emotions, and their voice interfaces are primarily designed for tasks such as “scheduling, reminding, making lists, seeking information, taking notes, making calls, and sending messages.” Moreover, in contrast to the lived realities of women, virtual assistants are unaffected by stress or other external factors, which turns them into a product of “fantasy.”

Similarly, the humanization of virtual assistants and robots can enable the dehumanization and objectification of women. The humanoid robot “Sophia,” for example, was made to look “exceptionally attractive,” evoking a sense of “mechanico-eroticism,” and humanoid robots more broadly are often given “Asian or Caucasian features” and are “hypersexualized.” As a result, the various forms of gender-based violence (GBV) and harassment that women and girls face in both private and public spheres are mirrored in how feminized forms of AI are treated. In a stark example, Alexa’s developers had to add a “disengagement mode” after the tool was subjected to verbal harassment within households.

Addressing Gender Bias in AI: The Way Forward

The adoption of AI is currently occurring at an unprecedented pace, and in the absence of normative frameworks to guide its development and use, breakthroughs like ChatGPT raise ethical concerns. At UNESCO’s Global Dialogue on Gender Equality and AI, held virtually in 2020, participants observed that normative AI instruments or principles that successfully address gender equality as a standalone issue were “either inexistent or current practices were insufficient.” Though it is promising to see conversations at the UN on the need to develop a framework for ethical AI, including around the UN’s own use of AI, the extent to which gender bias figures in these policy discussions has yet to be studied and explored.

To address these policy gaps, it is critical to identify where gender bias in AI shows up. Because algorithms are heavily influenced by the data they use, biases can stem from how data are collected, stored, and processed. Another possible source is the people who write the algorithms and the guidelines the AI follows, as AI tends to reflect the inherent assumptions and prejudices of its programmers. Since AIs with female characteristics are predominantly developed by men, they mirror their developers’ ideas about women, underscoring the need to increase women’s participation in STEM education and careers. However, studies show that women in STEM careers face unique challenges; globally, for example, half of women scientists are subjected to sexual harassment in the workplace. Thus, alongside increasing women’s participation in STEM careers (including in AI), employers should develop support structures to address these challenges, including zero-tolerance policies for gender-based violence in the workplace and ways to monitor and enforce them.
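
One simple way to check whether such bias has surfaced in a deployed system is a disparate-impact audit that compares selection rates across groups. The sketch below is purely illustrative: the decisions are invented, and the 80 percent threshold follows the common “four-fifths rule” heuristic rather than any system discussed in this article.

```python
# Minimal sketch of a disparate-impact audit on a model's hiring decisions.
# The "four-fifths rule" heuristic flags a problem when one group's selection
# rate falls below 80 percent of another group's. All data here are illustrative.
def selection_rate(decisions):
    """Share of candidates in a group whom the model selected (1 = selected)."""
    return sum(decisions) / len(decisions)

# Hypothetical model outputs for two groups of applicants.
decisions_men = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]    # 70% selected
decisions_women = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]  # 30% selected

rate_men = selection_rate(decisions_men)
rate_women = selection_rate(decisions_women)
impact_ratio = rate_women / rate_men

print(f"Selection rate (men):   {rate_men:.0%}")
print(f"Selection rate (women): {rate_women:.0%}")
print(f"Disparate impact ratio: {impact_ratio:.2f}")
if impact_ratio < 0.8:
    print("Warning: below the four-fifths threshold; audit the model and its data.")
```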

There are promising examples of how AI can be used to address gender inequalities. AI-powered gender decoders, for instance, can inform gender-sensitive hiring processes, and developers are increasingly conscious of the gendered impact of AI, especially among young users. These kinds of successes can be achieved through a “human-centered AI” approach, in which systems are developed with the user in mind. In addition, “fundamental rights impact assessments” can be applied to algorithmic outputs to identify biases that result in discrimination based on “gender, age, ethnic origin, religion and sexual or political orientation.” According to the European Union Agency for Fundamental Rights (FRA), algorithms should be audited through “discrimination testing” or “situation testing in real-life situations” to eliminate any form of discrimination. In 2017, the European Parliament adopted a resolution outlining a normative framework to address biases resulting from the use of technology, including the threat of discrimination arising from the use of algorithms.
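
A bare-bones gender decoder might work along the lines of the sketch below. The masculine- and feminine-coded word lists are short illustrative samples rather than a validated lexicon (real tools rely on larger, research-backed word lists), and the scoring rule is deliberately simplified.

```python
# Minimal sketch of a job-ad "gender decoder": it counts masculine- and
# feminine-coded words to flag ads whose language may deter women applicants.
# The word lists below are short illustrative samples, not a validated lexicon.
import re

MASCULINE_CODED = {"aggressive", "ambitious", "assertive", "competitive",
                   "decisive", "dominant", "fearless", "ninja", "rockstar"}
FEMININE_CODED = {"collaborative", "committed", "compassionate", "considerate",
                  "interpersonal", "nurturing", "supportive", "understanding"}

def decode(ad_text):
    """Return the coded words found in a job ad and an overall verdict."""
    words = re.findall(r"[a-z']+", ad_text.lower())
    masculine = [w for w in words if w in MASCULINE_CODED]
    feminine = [w for w in words if w in FEMININE_CODED]
    if len(masculine) > len(feminine):
        verdict = "masculine-coded"
    elif len(feminine) > len(masculine):
        verdict = "feminine-coded"
    else:
        verdict = "neutral"
    return {"masculine": masculine, "feminine": feminine, "verdict": verdict}

ad = ("We need an ambitious, competitive rockstar developer who is decisive "
      "under pressure and supportive of teammates.")
print(decode(ad))
# {'masculine': ['ambitious', 'competitive', 'rockstar', 'decisive'],
#  'feminine': ['supportive'], 'verdict': 'masculine-coded'}
```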

Ethical AI involves taking an intersectional approach to questions of gender, race, ethnicity, socioeconomic status, and other determinants, in addition to adopting a human rights-based approach to AI governance premised on transparency, accountability, and human dignity. To this end, different stakeholders, including business and corporate entities, tech companies, academia, UN entities, civil society organizations, the media, and other relevant actors, should come together and explore joint solutions. The UN secretary-general’s proposal for a Global Digital Compact, to be agreed at the Summit of the Future in September 2024, is a step in the right direction. The Global Digital Compact should also delve into the potential gender biases perpetuated by AI and solutions to address them. For AI to be ethical and serve as a vehicle for the common good, it must be free of explicit and implicit biases, including gender bias.

Ardra Manasi works as the Global Program Manager at The Center for Innovation in Worker Organization (CIWO) at Rutgers University. Dr. Subadra Panchanadeswaran is a Professor at the Adelphi University School of Social Work. Emily Sours works to advance the rights of women and girls, LGBTQIA+ persons, and marginalized groups.