In ELM’s “I argue” series, Hugo-Henrik Hachem from Linköping University questions the emphasis on promoting critical thinking through learner-centred approaches to artificial intelligence (AI) literacy. He argues for an AI-centred pedagogy that shifts the pedagogical focus from students’ levels of thinking about AI (critical or naïve) to their attention and motivation to study AI.
Critical thinking, as an educational goal, has become a buzzword in policy documents and research on AI literacy and AI in education. Amidst growing concerns about intellectual dependence on AI, AI experts and teachers continually emphasise the importance of promoting critical thinking in learners through learner-centred pedagogies about AI. This type of pedagogy is often based on Socratic dialogue, involving a (critical but humble) teacher who wants to raise their learners’ naïve thinking about AI to a ‘critical’ level. In a learner-centred pedagogy, teachers pose questions to guide students towards a predefined set of conclusions about AI that the teacher already knows and endorses as critical thinking.
However, calls for critical thinking as a goal of AI literacy embed strong elements of intellectual oppression, as summarised below.
In calling for critical thinking in learners, the underlying assumption is that teachers are already critical thinkers about AI and that learners are naïve about it: a very questionable zero-sum formulation, especially when it is challenging to measure or evaluate critical thinking with causal certainty. Critical thinking is an essentially contested concept and can be defined in various ways. Many of these definitions are circular, for example, ‘critical thinking is the person’s critical problem-solving ability’, and convey little.
The assumption that ‘teachers are critical and students are not’ unintentionally creates two levels of intelligence. It is unclear when this intellectual inequality between teachers and students will be fully addressed in the classroom. Will it be addressed only once students have reached the same conclusions about socio-technical knowledge as their teachers, and use AI in the way experts recommend? Since teachers will always know more, this inequality is likely to remain the status quo indefinitely.
Critical thinking, in learner-centred pedagogies, is also connected to the well-known concept of Socratic dialogue, which has been criticised for pedagogical inefficiency, classroom intimidation, and epistemic problems that lead to stultification and conformity rather than free thinking. If we evaluate critical or free thinking against preset criteria, does this thinking remain free?
AS AN ALTERNATIVE, I propose an AI-centred pedagogy inspired by Jacques Rancière’s concept of intellectual emancipation. In this type of pedagogy, which focuses on attention and motivation rather than levels of thinking, the meeting of ‘ignorant’ teachers with ‘emancipated’ students is a precondition for AI literacy, not an outcome. An ignorant teacher holds that their mastery as a teacher does not rest on superior knowledge; rather, they teach by pretending not to know. Only then can they teach emancipated students who believe in their own intellectual independence and in the redundancy of teachers’ explications.
In this pedagogy, the focus is on the subject of study (AI), not on learners or teachers. Attention must be paid to what is being studied, or the thing, and that is where the authority of knowledge actually resides.
Ignorant teacher → AI: the object of study ← Emancipated student
This AI-centred pedagogy is built on the following features:
- Intelligence is universal and indivisible into critical and naïve.
- Teachers and students are assumed to be intellectually equal, but have differing levels of motivation and have invested different levels of effort in studying AI.
- ‘Ignorant’ teachers shift their role from guiding students to reach preset conclusions that students lack about AI, to accompanying them on their individual journey of discovering AI knowledge embedded in a thing.
- This thing can be a book, an online course (MOOC), or any resource containing knowledge about AI.
- Authority lies in the object itself, which was initially created by an intelligence and can be understood by another equal intelligence, without needing redundant explanation from a teacher.
- Emancipated students progress by deepening their understanding of AI, rather than towards what the teacher already knows about AI.
- Ignorant teachers’ and emancipated students’ understandings of AI-related knowledge (both process and outcome) ideally need not intersect, even as both orbit the same body of knowledge.
- This is possible because ‘ignorant’ teachers validate the students’ will and efforts spent to learn, rather than measuring their thinking levels about AI.
- They do this by asking questions they themselves ‘do not know’ the answers to.
This AI-centred pedagogy enables a non-hierarchical public pedagogy about AI. It can benefit learners in many settings, particularly in adult and community education, though it is more difficult to apply in formal educational settings. AI-centred pedagogy offers a way to counter the unintended intellectual oppression embedded in learner-centred pedagogy about AI.
Given the above, this is a plea for so-called ‘critical thinkers’ about AI to pay attention to how they position themselves in relation to their students. It is lamentable, but one can hypothesise that students who submit their intelligence to teachers would also readily submit it to AI tools, leading to intellectual dependence on AI.
IN SUMMARY,
- Critical thinking as an educational goal in AI literacy risks reinforcing intellectual oppression toward students.
- An AI-centred pedagogy, inspired by Jacques Rancière’s concept of intellectual emancipation, offers a meaningful alternative.
- Genuine intellectual emancipation is a precondition for learning, not an outcome of it.
The “I argue” series of columns features texts written by researchers, each presenting a well-argued statement on a topic of their research.
Hugo-Henrik Hachem is a postdoctoral researcher at the Reasoning and Learning Lab (ReaL) at Linköping University in Sweden. He specialises in the sociology and philosophy of adult education. Currently, as a member of the Vinnova-funded ADAPT project, he investigates the intersection of lifelong learning, higher education and artificial intelligence, particularly in the context of upskilling and reskilling industry employees in Sweden. His research focuses on the systemic barriers that prevent industry workers from pursuing competence development in advanced digital skills.