In the first part, we reviewed the promises of ultra-personalized content and guidance made by AI applied to education, in light of cognitive science. In this second part, we invite you to discover other promises: promoting the right to make mistakes, motivation, and self-regulated learning. We will close with the vigilance still required and the questions that remain largely open, on which our limited perspective still calls for caution (1).
A learning context to promote the right to make mistakes and motivation?
In addition to tailoring feedback to learner progress, AI-powered learning environments typically make no value judgments (until proven otherwise!). This a priori objective view of learning performance can create a more neutral environment, where learners feel comfortable making mistakes without inhibition and take more risks in testing several solutions to a problem.
A recent study on the subject showed that programming students' sense of self-efficacy could be improved by iterating with ChatGPT during a coding exercise, which also promoted their intrinsic motivation. In other words, AI-powered tools could be beneficial by focusing learners' attention on their errors and then guiding them toward a new attempt, while limiting the social pressure to succeed in front of trainers or other learners.
This question remains open and without a tangible answer, but being able to detect errors more quickly during a learning task, and to iterate with AI to improve one's answers in a context that reduces the fear of failing, could have two benefits. The first is that by directing feedback toward the learner's zone of proximal development, the phenomenon known as learned helplessness (loss of self-confidence and the belief that one cannot succeed, caused by repeated failures or a lack of positive results) is reduced, and the feeling of being competent increases. The second, very complementary, benefit is that iterations with AI can develop what researcher Carol Dweck calls the growth mindset (2). This mindset is characterized by the belief that intelligence is not fixed: everything can be learned effectively by selecting appropriate strategies, by persevering, and above all by using error as a lever for progress rather than a brake.
These two hypotheses still remain to be demonstrated, and the results could have interesting practical implications for combining the use of AI with continuing-education initiatives. We can imagine AI as a gateway allowing learners who are still new to a subject to grope, fail, and iterate without pressure from others; then, once they have built some confidence with the key concepts, they can jump into the deep end by putting them into practice directly in context. An expert or teacher can then provide the essential human support, with relevant in-depth instruction, possibly informed by the information the AI gathered during its interactions with the learner.
There are still some shadows in this picture. AI can have a positive effect on learners with moderate motivation, but faced with lower motivation and too high a level of challenge, AI support seems to reach its limits. This can be explained by the absence of detailed explanations in feedback, of positive reinforcement, of personalized encouragement, or even of social interaction. Yet these elements are essential, not only for retaining knowledge but also for motivating learners to persevere in the face of challenges (3).
Facilitating self-regulated learning, under certain conditions
A more indirect effect that intelligent AI-based systems can have, and one harder to measure, would be to provide recommendations on what, how, and when to learn. This capacity for planning and monitoring one's learning (known as "self-regulation") develops over time and rests on knowledge of genuinely effective learning methods and tools, but also on the degree of autonomy and the resources available to learners.
How could this "metacognitive" process (the representations an individual has of their own knowledge and of how they construct and use it) be supported by the power of AI? By helping learners express their learning objectives, find and use available resources, design a personalized action plan to improve their skills, and monitor their progress over time. All these ingredients would enrich the learning experience while training learners to self-regulate and truly "learn how to learn". But the AI itself still needs to have "learned how to teach" using evidence-based levers. Indeed, if the assistance offered to learners rests on ignorance or erroneous representations of the learning process and the means to optimize it, the consequences will be more harmful than beneficial.
Additionally, a recent study on using AI to practice giving written feedback suggests that students tended to rely on AI assistance rather than actively learn from it (4). Indeed, as soon as the AI was no longer there to guide them, the quality of their practical work declined, as if a relationship of dependence had formed instead of fostering student autonomy. The research team therefore warns of the need to strike a balance between AI assistance and the promotion of learner autonomy, so that learners remain the "pilot of the plane" and develop the soft skills crucial for continuous adaptation.
AI put to the test of learning: many challenges to be overcome through experimentation
AI-based learning technologies offer promising prospects for strengthening the acquisition of knowledge and know-how. However, their integration must be carefully considered so that they genuinely serve pedagogy and training (5). The beneficial effects mentioned above rest on hypotheses and preliminary studies: AI still has everything to prove, so let's not move too fast! (6)
To promote evidence-based use, it is imperative to adopt an experimental approach in order to evaluate how AI can truly support the learning process without hindering the learner's active posture, collaborative learning, or the relationship between educators and learners, whose ins and outs are now well established in the scientific literature.
The main limitations of AI-based systems stem from the nature of the model at the heart of their operation: it must not be forgotten that these models are approximations of reality. When important ingredients of learning are not taken into account by the model, the resulting adaptability is limited, and the support provided to the learning process becomes much less reliable, as does its applicability across contexts. Indeed, carefully taking into account a learner's context and feelings in order to identify moments conducive to learning remains something humans do far better.
Finally, the integration of AI into education and training raises ethical questions: the detection of plagiarism; the biases it can feed (among others, perpetuating discrimination and educational inequalities, or relying on stereotypes (7)); the socio-economic consequences it risks generating; and free will in the process of creating content and the place given to teaching professionals. To achieve the objective of developing AI systems technically and pedagogically capable of contributing to the transmission of quality knowledge, collaboration between multiple stakeholders is more than essential (8).
Written by Alice Latimier, PhD in Cognitive Psychology
References
1. Chen, X., Xie, H., Zou, D., & Hwang, G. J. (2020). Application and theory gaps during the rise of artificial intelligence in education. Computers and Education: Artificial Intelligence, 1, 100002.
2. Ng, B. (2018). The neuroscience of growth mindset and intrinsic motivation. Brain Sciences, 8(2), 20.
3. Vu, T., Magis-Weinberg, L., Jansen, B. R., van Atteveldt, N., Janssen, T. W., Lee, N. C., … & Meeter, M. (2022). Motivation-achievement cycles in learning: A literature review and research agenda. Educational Psychology Review, 34(1), 39-71.
4. Darvishi, A., Khosravi, H., Sadiq, S., Gašević, D., & Siemens, G. (2024). Impact of AI assistance on student agency. Computers & Education, 210, 104967.
5. DNE-TN2 (September 12, 2024). AI competency frameworks for teachers and students (UNESCO). Education, digital and research. Accessed September 12, 2024 at
6. Celik, I., Dindar, M., Muukkonen, H., & Järvelä, S. (2022). The promises and challenges of artificial intelligence for teachers: A systematic review of research. TechTrends, 66(4), 616-630.
7. Bellamy, R. K., Dey, K., Hind, M., Hoffman, S. C., Houde, S., Kannan, K., … & Mojsilovic, A. (2018). AI Fairness 360: an extensible toolkit for detecting, understanding, and mitigating unwanted algorithmic bias. arXiv preprint arXiv:1810.01943.
8. Farrokhnia, M., Banihashem, S. K., Noroozi, O., & Wals, A. (2024). A SWOT analysis of ChatGPT: Implications for educational practice and research. Innovations in Education and Teaching International, 61(3), 460-474.
Source: www.usinenouvelle.com