Understanding why people mistrust AI in education

Professor Mutlu Cukurova (IOE, UCL's Faculty of Education and Society) used funding from the UCL-Cornell Global Strategic Collaboration Awards (GSCA) to investigate how people respond to AI tutoring.

Professor Mutlu Cukurova from IOE, UCL's Faculty of Education and Society

24 May 2024

The use of AI technologies is exploding, and AI-generated content is now woven into many aspects of life. With the education sector under pressure, AI could play an important role in supporting teachers and the delivery of high-quality education. But AI technologies that get good results when rigorously tested often fail when they are released for use in the real world. Evidence suggests that a key reason for this is that people inherently mistrust AI-generated content and suggestions. Understanding this mistrust of AI outputs, and why AI-powered solutions underperform in real life, will be crucial to supporting society with technology in the future.

In response to this, Professor Mutlu Cukurova wanted to partner with Dr Rene Kizilcec from Cornell University, as they share a research interest in this emerging area. After their successful application to the UCL-Cornell Global Strategic Collaboration Awards (GSCA), they were able to run an experiment to test their theories.

Why is AI in education not working as well as it should?

"In our field, we develop many AI technologies that might help address educational challenges," Mutlu said. "We use co-design and co-construction methodologies. We test and evaluate solutions in schools or education settings. And we get good results, so we release them into the world. And then they don't generate the same level of impact long term. This is really disappointing, and we need to understand why."

Mutlu believes there could be many reasons for this. Sometimes practitioners don't have enough knowledge or confidence to use AI tools as intended. Sometimes leaders don't fully support the adoption of these technologies and are sceptical about how beneficial they might be in an education setting. There are also issues with the wider ecosystem, including the technical infrastructure, policy-making landscape, pedagogical culture, and practices in schools. And there is the question of how much trust people place in the suggestions and recommendations that come from these technologies. Trust has not been looked into in depth, particularly in engineering contexts, and Mutlu wanted to interrogate it further.

"There is a line of work that looks at the concept of algorithm aversion," Mutlu explained. "This is the idea that people erroneously avoid algorithms after seeing them make one mistake. It essentially shows that people have an embedded mistrust of algorithmic suggestions, also called machine suggestions or AI suggestions, compared with human suggestions. We've seen this in the financial decision-making space. There's strong evidence that algorithmic recommendations in financial decisions are more reliable and more effective. But people tend to believe human recommendations when making financial decisions more than they believe algorithmic recommendations. That mismatch was the inspiration for this line of work in education."

Mutlu and Rene set up an educational tutoring experiment. The purpose was to see whether people's judgement of credibility was lower when they thought they were interacting with an AI tutor versus a human tutor. They added a third condition: a human tutor getting help from AI. The actual content delivered to participants was identical across the AI tutor, the human tutor, and the human tutor aided by AI. They recruited 600 people to take part via an online recruitment platform, and used surveys, measures of the learning experience, and judgement tools to understand participants' reactions.

However, they did not find any statistically significant difference in the perceived credibility of the AI tutor, the human tutor, or the human tutor getting help from AI. Mutlu believes this was an issue with the experiment rather than an accurate indication of people's perceptions. Because participants were paid via an online recruitment platform, the researchers don't believe they were fully engaged. As a check, participants were asked to recall which condition they had been in (AI tutor, human tutor, human tutor aided by AI), and a high proportion answered incorrectly.

Homing in on the issues

Next, the researchers want to secure funding to re-run the experiment, this time recruiting students in real educational settings. According to Mutlu, the mistrust of AI in education is a pressing issue that needs answers, for the sake of the education system as a whole. "There are very significant problems in our education ecosystem," he said. "Teachers are leaving their jobs mainly due to workload and wellbeing. If this carries on, in the near future we'll struggle to actually deliver the same amount of education that we are delivering today. These technologies can help. For instance, giving quality feedback to students could take less time. If we can find ways of generating content, feedback and recommendations that help teachers become more efficient and better educators, we could deliver at least the same quality of education, but perhaps even better."

Although the experiment run with this funding did not generate the anticipated results, the opportunity to collaborate has been extremely valuable. "This small but structured funding helped me identify and collaborate with a peer at another top institution," Mutlu explained. "Our idea could not yet be worked up into a large-scale funding proposal because it's just emerging. We need to generate some evidence of the viability and feasibility of this kind of research first. This funding has also established an important connection for me with Cornell University. Through the course of this work, we've had additional conversations which have led to other discussions about papers and PhD supervisions. It has been invaluable."

