UCL Computer Science event discusses how to achieve responsible AI

20 August 2024

Academics, students, industry guests and the public gathered for a forum on ethical artificial intelligence. The event highlighted the risks of AI and how it must work for the betterment of society.

Mehrnoosh Sadrzadeh speaking at the UCL Responsible AI event for the Festival of Engineering

Artificial intelligence is playing an increasingly influential role in our lives. We use AI to discover information, create content, and guide our decision-making. In healthcare settings, machine learning systems are saving lives. The education sector is poised for widescale adoption of AI technology. Artificial intelligence tools are steering government policy.

As generative AI has become mainstream, how are we ensuring that the artificial intelligence driving it is ethical, trustworthy and fair? How can we guarantee it is transparent and inclusive, and that our data is protected?

The UCL Computer Science fringe event at the Festival of Engineering was titled 'What does responsible AI mean to you and how can we achieve it?'. Through a series of presentations and three-minute 'lightning talks', academics, industry figures and students introduced their research and opinions about AI.

Audience members attentively listening at the UCL Responsible AI event

The strands of responsible AI

Sharing their specific insights and expertise, the speakers covered the concept of responsible AI from many angles.

Public engagement in AI development

Edwin Colyer of Scientia Scripta emphasised the importance of engaging the public in the research and development of AI systems. Edwin's co-creation pilot project with Professor Crockett and the Turing Institute used juries made up of members of the public. This approach has encouraged developers to prioritise diversity and inclusion and has increased trust in AI.

AI and environmental stewardship

Professor Julia Manning called for humane healthtech, where the use of AI is underpinned by environmental stewardship, trust, education and a holistic approach. Julia is an Honorary Professor of Practice at UCL Computer Science and President of the Digital Health Council at the Royal Society of Medicine.

Gender and cultural bias in AI

Professor Ivana Drobnjak of UCL Computer Science shared the findings of a report commissioned by UNESCO and presented at the United Nations' Commission on the Status of Women. 'Bias against women and girls in large language models' brought together studies showing that commonly used generative AI tools are prejudiced against women and against people of different cultures and sexualities.

Ivana Drobnjak speaking at the UCL Responsible AI event

Altruism in AI design

How do we design moral agents? This was the question posed by Professor Mirco Musolesi from the Machine Intelligence Lab at UCL Computer Science. Mirco argued for altruism and collaborative multi-agent systems. He also discussed responsible artificial agents against the backdrop of governments' growing use of AI in sectors such as defence.

Risks of AI in education

Professor Wayne Holmes of the IOE (UCL's Faculty of Education and Society) spoke of the dangers of using AI in education. He busted myths and asserted that AI is opening the door to the commercialisation of education in schools. He claimed that, so far, there is no evidence at scale for AI's effectiveness, safety or impact in the classroom. Wayne also explained how generative AI models mimic human-like responses rather than demonstrate understanding or creativity and, consequently, ignore marginalised voices and quash innovation.

The black box problem in AI

Professor Bob Coecke is the Chief Scientist at Quantinuum and Emeritus Professor at Wolfson College, Oxford University. As many AI models are 'black box' systems, Bob spoke about Quantinuum's approach to making these systems interpretable using a mathematical framework called category theory. Greater transparency will help us gauge how ethical AI systems are.

The threat of deepfakes

With generative AI's ever more sophisticated capabilities to create images, audio, video and text, the issue of deepfakes comes to the fore. Professor Lewis Griffin of UCL Computer Science showed that whilst the benign use of generative AI brings benefits, it has also been used to create extremely harmful content. As a society, we need to constantly refine our ability to assess the truthfulness of media.

Lewis Griffin speaking at the UCL Responsible AI event

An ongoing discussion

The overall conclusion was that while we can gain much from AI, there are many vital ethical considerations. We need continued debate and a collaborative approach between universities, industry and governments.

Steve Hailes, Head of Department for UCL Computer Science, closed the event. He called for universities, as beacons of education, to become game changers when it comes to responsible AI. Steve continued: "We are in an era where generative AI is potentially an incredibly powerful tool and potentially also an incredibly dangerous thing. This is the first event where we have had these open discussions about what works, what doesn't work, and the opportunities and the dangers. I'd like to see us continue these conversations into the future."

Professor Mehrnoosh Sadrzadeh of UCL Computer Science, academic chair of the Department's Equity, Diversity and Inclusion (EDI) Committee, planned the event. Matt Grech Sollars and Maria Perez Ortiz from UCL Computer Science's Responsible AI Working Group were also involved. Staff from UCL Computer Science's professional services team and a group of PhD students helped run the event.