Ethics in Artificial Intelligence in Education: Who Cares?

The OU’s Dr Wayne Holmes, a lecturer in Learning Sciences and Innovation in the Institute of Educational Technology, discusses the impact of Artificial Intelligence and why ethics is crucial to its adoption in education.

Whether students and academics welcome it or not, Artificial Intelligence is increasingly being deployed in universities around the world, and will significantly shape the future of university education.

Companies such as Facebook, Amazon and Google are investing millions of dollars in developing AI in education (AIED) products, and by 2024 the global AIED market is expected to be worth around £4.5 billion.

However, the ethical consequences of AIED, such as ‘adaptive’ or ‘personalised’ learning systems, are rarely fully considered. In fact, most AIED research, development, and deployment have taken place in what is essentially a moral vacuum: virtually no research has been undertaken, no guidelines have been provided, no policies have been developed, and no regulations have been enacted to address the specific ethical issues raised by the use of AI in education.

It’s not just about data

To begin with, concerns exist about the large volumes of data collected to support AIED: how can we be sure that the data is accurate, who owns and who controls the data, and how is student privacy maintained? However, while data raises major ethical concerns, AIED ethics cannot be reduced to questions about data. Other major concerns include the potential for bias (incorporated into AIED algorithms and computational approaches) and the fact that decisions made by AI’s deep neural networks cannot easily be inspected. Yet together these are only the ‘known unknowns’. What about the ‘unknown unknowns’, the AIED ethical issues that have yet to be even identified?

The ethical use of algorithms

As a first step towards addressing the ethics of AIED, the Open University’s openAIED research group led a workshop at the 2018 Artificial Intelligence in Education international conference, involving researchers from around the world. Participants drew on empirical work to consider ethical issues such as systematic biases in machine-learned student models, impenetrable black-box algorithms, and the ethics of AI-driven college course recommendations.

The Open University is also prioritising the ethics of AIED as it investigates the development and deployment of AI technologies across the university, beginning with AI chatbots to provide additional (24/7) support for OU students and staff.

Whether anyone likes it or not, AI has quietly entered the university campus, but little attention has been paid to the ethics. To give just one example, what happens if a student is subjected to a biased set of algorithms that unfairly and incorrectly lowers their assessment results?

Dr Wayne Holmes is a Lecturer in Learning Sciences and Innovation at the Open University’s Institute of Educational Technology. He leads the OU’s openAIED research group and is academic lead for the OU’s AI Working Group. He has recently published a report on AIED, Technology-enhanced Personalised Learning: Untangling the Evidence, commissioned by Robert Bosch Stiftung.

 

About the Author

Christine is a manager in the Media Relations team within the Marcomms Unit at the OU with an extensive background in media and PR. A former national BBC journalist, sub-editor and news editor, she also has a grounding in regional newspapers. Her PR experience includes working in-house as press officer in the busy Marcomms unit at the Zoological Society of London. At the OU, Christine covers widening access in HE, corporate news and campaigns, as well as stories from the Faculty of Arts and Social Sciences. She has just completed an MA in Philosophy with the OU.
