
The ethical and moral senses exercised by AI in teaching.

Our ethical and moral senses will be exercised – exponentially so – by the use of AI in teacher-to-child encounters and engagements.

Some industries are highly vulnerable to AI and automation – and for good reason. Apart from increased efficiency, AI offers to take on the menial and routine tasks that few people enjoy doing. As a greater proportion of the population grows older, the care industry – a prime candidate for such automation – will only increase in size. Many of us already rely on Alexa or Siri to order up music, make a drink, heat up a meal or tell the washing machine how to wash our clothes.

Yet AI still cannot supply the creative nuance that produced the music in the first place, or simulate the human intimacy of having dinner in company – these still require flesh, blood and consciousness, at least for the foreseeable future. While AI can provide efficient and even intelligent solutions to human problems, it is still a very long way from feeling pain, joy, love or anger.

Alan Turing, the genius of early computer design and wartime hero of the Enigma code-breaking endeavour, defined the threshold of machine ‘intelligence’ as the point at which a computer could carry on ‘an extended conversation with a human being’. By that measure, teachers are still far ahead of computers in providing both techne (the skills and knowledge of education) and phronesis (the practical wisdom and character that is the outcome of an education).

Machines are still a very long way from bonding, trusting, understanding emotion, modulating tone, making eye contact, reading body language and sharing vulnerabilities – all of which happen a thousand times a day when a teacher and a pupil are in face-to-face contact with each other.

AI replicates what social media is doing to the quality of our social interaction – it distances us across time and space; it outsources the personal responsibility that comes with physical presence. When that happens, morality is unlearned and our ethical compass is disorientated.

The danger of too much AI in teaching is therefore both an ethical and a moral issue. Even if AI could be creative to the extent that humans can, would we, should we, want it? If an AI teacher could teach larger classes, answer more questions, correct more mistakes, demonstrate a wider range of techniques, mark more exercise books and do all the other ‘technical’ things human teachers do, would we, should we, want that?

As teachers, we are not yet confronted by the ethical questions of AI, but we will be – and when we are, we should be prepared for the questions it raises, particularly those where the teacher’s duty of care serves the child’s basic needs.

This is an extract from Alan Newland’s book:

‘Becoming a Teacher – the legal, ethical and moral implications of entering society’s most fundamental profession’

– available now from Crown House with a 20% discount with the code ‘becoming20’ - or from Amazon
