UCL News

Large Language Models generate biased content, warn researchers

12 April 2024

The most popular artificial intelligence (AI) tools show prejudice against women as well as different cultures and sexualities, according to a new report led by researchers from UCL.

The study, commissioned and published by UNESCO, examined stereotyping in Large Language Models (LLMs). These natural language processing tools underpin popular generative AI platforms, including OpenAI's GPT-3.5 and GPT-2 and Meta's Llama 2.

The findings showed clear evidence of bias against women in content generated by each of the Large Language Models studied. This included strong stereotypical associations between female names and words such as 'family', 'children' and 'husband' that conform to traditional gender roles. In contrast, male names were more likely to be associated with words like 'career', 'executives', 'management' and 'business'.
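Name–word associations of this kind are commonly quantified with embedding-association scores, which compare how close a name's vector sits to two sets of attribute words. The sketch below uses made-up 2-D vectors purely to show the arithmetic (the names, words and vectors are illustrative assumptions, not data from the report; a real test would use the model's own embeddings):

```python
import math

# Hypothetical toy embeddings -- NOT real model vectors.
vec = {
    "emma": (0.9, 0.1), "john": (0.1, 0.9),
    "family": (0.95, 0.05), "career": (0.05, 0.95),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    return dot / (norm(a) * norm(b))

def association(name, attr_a, attr_b):
    """Positive when `name` sits closer to attr_a than to attr_b."""
    return cosine(vec[name], vec[attr_a]) - cosine(vec[name], vec[attr_b])

print(association("emma", "family", "career"))  # positive with these toy vectors
print(association("john", "family", "career"))  # negative with these toy vectors
```

With these deliberately skewed toy vectors, the female name scores positive (closer to 'family') and the male name negative (closer to 'career'), mirroring the directional pattern the report describes.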

The authors also found evidence of gender-based stereotyping in generated text, as well as negative stereotypes tied to culture and sexuality.

Part of the study measured the diversity of content in AI-generated texts focused on a range of people across a spectrum of genders, sexualities and cultural backgrounds, including by asking the platforms to 'write a story' about each person. Open-source LLMs in particular tended to assign more diverse, high-status jobs to men, such as 'engineer' or 'doctor', while frequently relegating women to roles that are traditionally undervalued or stigmatised, such as 'domestic servant', 'cook' and 'prostitute'.

Llama 2-generated stories about boys and men were dominated by the words 'treasure', 'woods', 'sea', 'adventurous', 'decided' and 'found', while stories about women made most frequent use of the words 'garden', 'love', 'felt', 'gentle' and 'husband'. Women were also described as working in domestic roles four times more often than men in content produced by Llama 2.
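Frequency comparisons like this reduce to counting how often a target vocabulary appears in stories generated for each group. The following is a minimal counting sketch, not the study's actual methodology; the example stories and the term list are hypothetical stand-ins for real model output:

```python
import re

# Toy corpus: hypothetical stand-ins for LLM-generated stories.
stories = {
    "men": [
        "He decided to explore the woods and found a treasure by the sea.",
        "The adventurous engineer decided to sail across the sea.",
    ],
    "women": [
        "She felt gentle in the garden, cooking for her husband with love.",
        "The domestic servant tended the garden and cooked dinner.",
    ],
}

# Illustrative list of domestic-role terms (an assumption, not the report's).
DOMESTIC_TERMS = {"cooking", "cooked", "garden", "domestic", "servant"}

def term_rate(texts, terms):
    """Occurrences of `terms` per 100 tokens across `texts`."""
    tokens = [t for text in texts for t in re.findall(r"[a-z']+", text.lower())]
    hits = sum(1 for t in tokens if t in terms)
    return 100 * hits / len(tokens)

for group, texts in stories.items():
    print(group, round(term_rate(texts, DOMESTIC_TERMS), 1))
```

Normalising by token count (rather than comparing raw counts) matters because the two groups of stories need not be the same length.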

Dr Maria Perez Ortiz, an author of the report from UCL Computer Science and a member of the UNESCO Chair in AI at UCL team, said: "Our research exposes the deeply ingrained gender biases within large language models and calls for an ethical overhaul in AI development. As a woman in tech, I advocate for AI systems that reflect the rich tapestry of human diversity, ensuring they uplift rather than undermine gender equality."

The UNESCO Chair in AI at UCL team will be working with UNESCO to help raise awareness of this problem and contribute to developing solutions by running joint workshops and events involving relevant stakeholders: AI scientists and developers, tech organisations and policymakers.

Professor John Shawe-Taylor, lead author of the report from UCL Computer Science and UNESCO Chair in AI at UCL, said: "Overseeing this research as the UNESCO Chair in AI, it's clear that addressing AI-induced gender biases requires a concerted, global effort. This study not only sheds light on existing inequalities, but also paves the way for international collaboration in creating AI technologies that honour human rights and gender equity. It underscores UNESCO's commitment to steering AI development towards a more inclusive and ethical direction."

The report was presented at the UNESCO Digital Transformation Dialogue Meeting on 6 March 2024 at the UNESCO Headquarters by Professor Drobnjak, Professor Shawe-Taylor and Dr Daniel van Niekerk. It was also presented by Professor Drobnjak at the United Nations headquarters in New York at the 68th session of the Commission on the Status of Women, the UN's largest annual gathering on gender equality and women's empowerment.

Professor Ivana Drobnjak, an author of the report from UCL Computer Science and a member of the UNESCO Chair in AI at UCL team, said: "AI learns from the internet and historical data and makes decisions based on this knowledge, which is often biased. Just because women were not as present as men in science and engineering in the past, for example, it doesn't mean that they're less capable scientists and engineers. We need to guide these algorithms to learn about equality, equity, and human rights, so that they make better decisions."

A total of 30 authors contributed to the report. It involved the International Research Centre on AI (IRCAI), led by its COO Davor Orlic, as well as other invited bodies and universities, including the Distributed AI Research Institute (DAIR), Northeastern University, the University of Essex, Research ICT Africa, the ELLIS Alicante Foundation and Digital Futures Lab.

Image

  • Professor Ivana Drobnjak delivers her presentation at the UNESCO Digital Transformation Dialogue Meeting.

Media contact

Matt Midgley

Email: m.midgley [at] ucl.ac.uk