The Korea Times

We got AI in education all wrong: Here’s the fix

Chyung Eun-ju

For decades, Korea’s top universities have relied on a familiar system for judging academic excellence: high-stakes exams, tightly supervised assignments, and essays meant to showcase a student’s individual mastery. Academic integrity, long protected by strict rules and intense competition, was assumed to be easily verifiable.

Now, generative artificial intelligence (AI) tools like ChatGPT, Gemini, and Claude can produce research papers, essay answers, and even personalized reflections in seconds — often more polished than a student’s own writing. A first-year student can ask a chatbot to analyze a poem or summarize a week’s lecture and receive a convincing, submission-ready response almost instantly. This raises a fundamental challenge: Why struggle to reason through a concept when AI can deliver a passable answer without understanding?

Joel Cho

Recent reports show that even Korea’s top universities are struggling to adapt. Students at Seoul National University, Korea University and Yonsei University were found to have used AI tools like ChatGPT to cheat on exams, exposing the limits of traditional monitoring. Over 90 percent of college students with AI experience reported using it on assignments, highlighting how deeply these tools have become integrated into higher education. Park Joo-ho, a professor of education at Hanyang University, explains, “AI is a tool for retaining and organizing information, so we can no longer evaluate students on those skills. Instead, assessment should focus on creativity, something AI cannot replicate.” While some students rely on AI responsibly, misuse during exams raises broader concerns about trust, fairness and the role of human judgment in education.

Some U.S. universities are already experimenting with ways to integrate AI responsibly into learning, offering potential models for Korea. At Duke University, for example, students and faculty are piloting a secure AI interface, “DukeGPT,” which provides access to AI-assisted study tools. Faculty are also adapting assessments, relying more on in-class assignments and oral exams to ensure that students demonstrate understanding rather than simply submitting AI-generated answers.

Similarly, Northeastern University partnered with Anthropic to pilot AI tools like Claude to help students with study guides, quizzes, and research, emphasizing AI as a learning aid rather than a shortcut. Meanwhile, elite institutions such as Harvard and MIT now require students to submit an AI usage disclosure when generative AI is involved in assignments, documenting prompts and edits to maintain transparency and integrity.

These examples show a trend toward AI-resilient assessments — methods that evaluate reasoning, creativity and problem-solving rather than rote outputs — mirroring calls from Korean educators like Park Joo-ho to prioritize higher-order cognitive skills in an AI-driven era.

A similar transformation is happening outside the classroom. In professional Baduk training, or Go as it’s known in English, AI programs now provide real-time guidance on optimal moves, allowing players to refine strategies much faster than with traditional methods. Heo Young-rak, a Baduk national team player, notes, “I think about 70–80 percent of my studying is done through artificial intelligence. These days, it’s hard to imagine studying without AI.” Coaches report that AI has significantly boosted players’ performance, accelerating skill development and strategic thinking. Just like in Baduk, there’s no doubt that AI in education can augment learning — but the challenge lies in ensuring that students’ own reasoning and creativity aren’t replaced entirely by AI’s output.

Park Joo-ho emphasizes, “In the era of AI, I believe that … students should be allowed to use AI as a learning tool, but the assessment should measure their own higher-order cognitive abilities.” To make creativity-based assessment feasible, he adds, “In order for university education to properly evaluate students’ creativity in investigating real-world problems, we must move away from large lecture-based classes and knowledge-delivery formats. Classrooms should focus on individualized instruction, with tailored feedback for each student. Achieving this requires dramatically smaller class sizes, a significant increase in faculty, and expanded investment in both human and physical resources for higher education.”

The rapid integration of generative AI is reshaping education in ways that extend far beyond isolated cheating scandals. It poses a growing risk that individuals — and not just students — will bypass the very intellectual processes education is meant to develop, in formal and informal settings alike.

The AI tools now at everyone's fingertips cannot teach critical thinking or analytical reasoning. These essential skills are built through struggle: an individual forms ideas, tests assumptions, actively researches and refines understanding. Increasingly, AI is used to skip these formative steps; in other words, there is less and less of the struggle on which intellectual development depends.

These observations feed a growing concern that some students are outsourcing their thought processes to AI entirely. The easier path now is to learn how to craft prompts for an AI system rather than how to reason through a complex problem. Students may still achieve acceptable academic results this way, but at the cost of developing strong cognitive foundations.

AI should enhance education, not replace the fundamental act of thinking. Without appropriate safeguards, AI becomes a shortcut that steers students away from grappling with academic material. The initiatives some institutions have taken to redesign assessments and curb blind reliance on AI are steps in the right direction.

We also believe that formal AI literacy education could be an effective safeguard against AI-reliant learning — teaching students not only how to use generative tools to enhance their studies, but also how to question those tools' output and understand their limitations.

Ultimately, safeguarding academic formation in the age of AI requires adopting a deliberate cultural message: think first, use AI second. Undoubtedly, AI will remain a powerful tool in education, but it must be integrated with caution so that it strengthens, rather than erodes, students’ capacity to reason.

Chyung Eun-ju (ejchyung@snu.ac.kr) is a tech research associate at Donghyun ASP. She earned both her bachelor's in business and master's in marketing from Seoul National University. Joel Cho (joelywcho@gmail.com) is a practicing lawyer specializing in IP and digital law.