AI in Education: Questioning the Algorithmic Super Teacher 

By Erika Reinkendorf and Ivanna C. Lesizza 

Imagine walking into a classroom where every lesson, every question, and every bit of feedback came not from a teacher but from an algorithm. How different would your education have been? 

With the arrival of rapidly evolving technologies such as generative artificial intelligence (AI), countless individuals and industries have rushed to adopt this advancement in their daily lives for purposes ranging from boosting business productivity to improving social and even personal well-being. The education sector is not far behind, and it is clear that, whether we like it or not, AI is rapidly reshaping how we teach and learn. Some use this tool to personalize learning experiences, others to reduce teachers’ workloads, and some even use it to optimize educational management processes. 

The reality is that AI has the potential to generate easily scalable solutions to the current challenges of the sector. In this sense, we understand the need to “ride the wave” of artificial intelligence. Yet, as witnesses to the shortcomings of educational systems in our beloved Latin American countries, we ask ourselves how we can use these tools to challenge, rather than reinforce, the status quo in a world where inequalities are widening and environmental crises are deepening. Do we want to promote tools that simply mirror the dynamics of systems already on the verge of failure? In this article, we explore the nuances of AI in education and invite you to join us in discussing the doors it opens, both hopeful and alarming. 

The Promise of AI: Personalization, Efficiency, and Access

As you might have already witnessed in your own lives, AI has the potential to simplify a myriad of tasks and, when combined with innovative thinking, it can even pave the way to creative solutions to complex problems. Intelligent tutoring systems (ITS), for example, use AI to follow a student’s progress in real time, diagnose where they are strong or struggling, and adjust tasks and feedback accordingly (Özer, 2024). 

Instead of moving the whole class forward at the same pace, an intelligent tutor can choose or even generate problems that match each learner’s level, update its “picture” of that student as they work, and offer hints or explanations at the moment they are most needed. 

Personalised Learning and Smarter Pedagogy

In heterogeneous classrooms where students arrive with very different backgrounds, learning paces, and styles, this kind of adaptive support can help education adapt to students rather than forcing them to squeeze into a rigid, standardized model. Additionally, through translation and content-generation tools, AI can open windows to the world. Students can access materials from other countries in their own language and encounter contexts, debates, and voices that would otherwise remain out of reach. The quality of this access, however, depends on our own capacities and intentions as users of these models. 

AI can also offer support to teachers. From drafting innovative lesson plans and exercises to generating quizzes and automated feedback on assignments, AI tools can take over some of the repetitive preparation that often happens late at night or between classes. Reducing this workload frees up time and energy for what only humans can do: listening to students, building trust, paying extra attention to those with special needs, and responding to the social and emotional dimensions of learning. Beyond this scope, some tools can analyze participation patterns or assessment data and give teachers feedback on student engagement and the effectiveness of their strategies, offering specific ideas to refine their pedagogical practice over time (Molina et al., 2024). 

At a broader level, AI can help schools and systems function more efficiently. Algorithms can support decisions about how to allocate resources (where to place teachers, how to plan class sizes, or when to acquire new learning materials) by analyzing data that would be too tedious to examine otherwise. Early-warning systems can flag when a student might be at risk of dropping out, giving schools the chance to intervene before it’s too late. Automating routine administrative tasks, from generating report cards to sending informative messages to families, can also make institutions more agile and free staff to focus on the human side of education. Considering all these possibilities reveals a brighter side of AI. One where all parts of the education system can let go of mechanical tasks and begin approaching education with curiosity and intent. However, as with all things in life, there are challenges and obstacles to overcome if we truly want to create a system that works with us rather than for us. 

Data, Power, and the Politics of Surveillance

If the promises of AI in education sound exciting, the risks should make us reflect for a moment before we jump into the water too enthusiastically. A first fault line has to do with data, privacy, and who ultimately controls the information gathered from students and teachers. When highly sensitive data about learning histories, behaviours, and even inferred emotions is repurposed for non-educational goals (targeted advertising, commercial recommendation algorithms, broader profiling, etc.), the risk is no longer just “leaked data,” but a slow narrowing of people’s lifelong freedom of choice. A student’s mistakes, preferences, and patterns at age 12 could, in subtle ways, shape what content they see, what opportunities they are nudged towards, and which opportunities quietly remain closed decades later. On top of this, algorithms trained on biased data can reproduce and amplify existing discrimination and inequalities, especially along the lines of class, race, gender, or geography. In unequal societies, this means that AI does not arrive in a neutral field; it lands in landscapes already marked by exclusion and, when deployed carelessly, may deepen the very patterns it claims to fix. 

Accuracy, Dependency, and Cognitive Risks

Even when we focus narrowly on learning outcomes, the picture is far from straightforward. The quality of AI tutoring is uneven. Recent analyses of chatbots used for education, such as Khanmigo, show that they can make elementary mathematical errors and provide incorrect or misleading explanations to students (Riley, 2024; Christodoulou, 2024). If learners come to trust these systems as infallible, there is the potential risk that misunderstandings are reinforced rather than corrected. This is why many researchers insist that AI-based tutoring systems must be developed in close collaboration with educators and cognitive scientists, grounded in what we actually know about human learning. When we outsource too much of the learning process to AI (letting it plan the work, solve the problems, summarize the texts), students lose valuable opportunities to wrestle with uncertainty, build persistence, and practice critical thinking, problem-solving, and research skills (Kasneci et al., 2023; Mhlanga, 2023; Shiri, 2023; Sok & Heng, 2023). The tools are not inherently harmful, but using them uncritically or as replacements rather than supports risks turning active learners into passive prompt creators. 

Academic Integrity and the Crisis of Trust

The rise of generative AI has also shaken our ideas about academic integrity and trust in the classroom. A recent survey by the Center for Democracy & Technology (2025) reports that 59% of teachers in the United States believe their students are already using generative AI tools for schoolwork, and 68% say they have used AI-detection tools to check assignments. These detectors can produce false positives that wrongly flag human-written work as AI-generated, with serious consequences for students who may be unjustly accused of cheating. An overreliance on such tools risks creating an atmosphere of suspicion in which students feel they are constantly under surveillance. This dynamic undermines the trust that is essential to any meaningful teacher–student relationship. Faced with this scenario, simply “banning AI” or “hunting down AI texts” seems like a dead end. Instead, researchers like Villasenor (2023) argue for rethinking pedagogy and assessment by designing tasks that demand higher-order thinking, critical analysis, and original synthesis, encouraging more in-class writing where understanding must be demonstrated in real time, and incorporating multimedia and personal reflection elements that are harder to outsource to a chatbot. At the same time, schools need to invest in reflection with students and in conversations about the responsible use of AI, including its ethical implications and the importance of academic integrity. We should be aiming for students to be equipped, informed, and accountable users of these tools. 

The Digital Divide 4.0: Who Benefits?

Finally, inequalities in access to AI systems and disparities in digital literacy mean that privileged sectors of society are in a position to benefit most from these technologies (Cotton et al., 2023; Grassini, 2023). This divide affects teachers and students alike (Rudolph et al., 2023): some schools can experiment with AI-powered platforms, while others struggle with basic connectivity, device availability, or even staff and teacher preparedness to implement them. If we enthusiastically integrate AI into curricula and policies without addressing these gaps, we risk adding yet another layer of exclusion on top of existing ones, where elite institutions enjoy sophisticated, personalized systems while under-resourced public schools are left further behind. In that sense, the question is not only whether AI works, but for whom, under what conditions, and at what cost. 

Beyond the Algorithmic Super Teacher: A Democratic Responsibility

Taken together, these concerns invite a more precautionary, critical approach to AI in education. This article is neither for nor against the use of this tool, but rather acknowledges its existence and urges educators, students, and policymakers alike to rethink our approach to this technology and to integrate it into our educational practices in an ethical and sustainable manner. The point is not to reject these technologies outright, but to resist treating them as neutral tools or inevitable solutions. Instead, we are called to ask the uncomfortable questions: Who benefits from the efficiencies and data created by these tools? How are core human capacities such as curiosity, critical thinking, and solidarity being shaped by these technologies? What happens to those who are not in the room when AI is being designed, regulated, and deployed? And, finally, do we want to promote technology that enhances education to meet the needs of a capitalist system, or to empower humanity to transform its own history? Only by keeping these questions at the center can we hope to use AI in ways that support more just, democratic, and sustainable educational futures, rather than reinforcing the dynamics of a world already showing signs of collapse. 

Disclaimer: Because we are accountable writers, teachers, and learners, we let you know that we used AI to help organize our ideas and write this article.

References

  1. Center for Democracy & Technology. (2025). Hand in hand: Schools’ embrace of AI connected to increased risks to students. https://cdt.org/wp-content/uploads/2025/10/CDT-2025-Hand-in-Hand-Polling-111225-accessible.pdf 
  2. Christodoulou, D. (2024, May 2). Will AI revolutionise education? Engelsberg Ideas. https://engelsbergideas.com/essays/will-ai-revolutionise-education/ 
  3. Cotton, D. R., Cotton, P. A., & Shipway, J. R. (2023). Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innovations in Education and Teaching International, 1–12. https://doi.org/10.1080/14703297.2023.2190148 
  4. Grassini, S. (2023). Shaping the future of education: Exploring the potential and consequences of AI and ChatGPT in educational settings. Education Sciences, 13, 692. 
  5. Kasneci, E., Sessler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., … & Kasneci, G. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 103, 102274. https://doi.org/10.1016/j.lindif.2023.102274 
  6. Mhlanga, D. (2023). Open AI in education, the responsible and ethical use of ChatGPT towards lifelong learning. SSRN. https://doi.org/10.2139/ssrn.4354422 
  7. Molina, E., Cobo, C., Pineda, J., & Rovner, H. (2024). Digital innovations in education: What you need to know. The World Bank. https://www-worldbank-org.translate.goog/en/region/lac/publication/innovaciones-digitales-para-la-educacion-en-america-latina?_x_tr_sl=en&_x_tr_tl=es&_x_tr_hl=es&_x_tr_pto=tc 
  8. Özer, M. (2024). Potential benefits and risks of artificial intelligence in education. Bartın University Journal of Faculty of Education, 13(2). https://doi.org/10.14686/buefad.1416087 
  9. Riley, B. (2024, May 2). Generative AI in education: Another mindless mistake? Education Next. https://www.educationnext.org/generative-ai-in-education-another-mindless-mistake/ 
  10. Rudolph, J., Tan, S., & Tan, S. C. (2023). ChatGPT: Bullshit spewer or the end of traditional assessments in higher education? Journal of Applied Learning & Teaching, 6(1), 1–22. https://doi.org/10.37074/jalt.2023.6.1.9 
  11. Shiri, A. (2023). ChatGPT and academic integrity. Information Matters, 3(2), 1–5. https://doi.org/10.2139/ssrn.4360052 
  12. Sok, S., & Heng, K. (2023). ChatGPT for education and research: A review of benefits and risks. Cambodian Journal of Educational Research, 3(1), 110–121. https://doi.org/10.2139/ssrn.4378735 
  13. Villasenor, J. (2023, February 10). How ChatGPT can improve education, not threaten it. Scientific American. https://www.scientificamerican.com/article/how-chatgpt-can-improve-education-not-threaten-it/ 
Erika Reinkendorf

Erika is an economist with over eight years of experience managing educational projects in non-profit organizations and collaborating with the public sector in Peru. She is recognized for her commitment to the sustainable development of vulnerable communities. Her work is driven by a strong interest in the transformative potential of artificial intelligence in education, the use of film arts as a pedagogical tool for critical reflection and storytelling, and outdoor education as a pathway to reconnect learning with nature and well-being.


Camila Lesizza

Camila is a sustainability and agroecology professional with experience coordinating intersectional projects and supporting learning programs that invite students to connect both to nature and the community around them. She’s especially interested in the use of sustainable, ancestral agricultural practices as catalysts for peace, social cohesion and autonomous rural development. She’s known for her warm, organized approach to helping groups turn big ideas into clear decisions and real action.
