AI in classrooms needs better teaching, not policing


It’s eleven forty-seven at night. A second-year college student in Mumbai clicks ‘submit’ on a 1,200-word paper. She started it at 11:10. The grammar is perfect, the tone is intellectual, and the structure is clean. She tells herself she has studied, and she sleeps well. By next week, however, many students like her may not even remember what the assignment was about, because they never really learned anything. Welcome to the new normal on campuses from Chennai to Chandigarh: instead of cheating in the old-fashioned way, students quietly swap out their hardest thinking for a machine and mistake speed for command of the subject.

This is certainly dangerous for the higher education system. In our country, grades often determine how far a student can move ahead. Students juggle packed schedules, long commutes, part-time jobs, family expectations, and never-ending cycles of exam preparation. Wanting ‘quick and clean’ isn’t a flaw; it’s a normal reaction to a high-pressure environment. There is another reason too: many colleges still reward polished final answers over visible reasoning. That makes it easy for students to look like they’re doing well when they’re really not.

Now, a caveat: this isn’t a tirade against technology. It is an appeal to protect the slow, hard part of learning that converts facts into knowledge, knowledge into understanding, and understanding into judgment.

Don’t eliminate the mental effort

Tools such as ChatGPT, Gemini or Claude are indeed helpful, especially for students who need help with language, structure, or simply getting unstuck: those with ‘starter trouble’. The problem starts when AI goes from being a tool for learning to a tool for finishing. Three peer-reviewed studies from 2025 warn us about that shift.

An article in Frontiers in Psychology in November 2025 called it a “cognitive paradox”: AI can make learning easier by removing unneeded friction (such as formatting problems and first-draft anxiety). But overuse of AI or LLM tools can also erode the mental effort and initiative that lead to long-lasting knowledge.

The authors of this November 2025 study use Cognitive Load Theory and Bloom’s taxonomy to argue that if students outsource the higher-order work of analysis, evaluation, and creation, the most distinctly human parts of learning, then college learning shrinks into finishing low-level tasks. The study also reports experimental patterns that should make any teacher think twice: students who used AI solved more practice problems correctly, but their grasp of concepts got worse; and continued exposure led to memory loss in controlled settings. In other words, AI can make students seem smarter in the near term while slowly breaking down the mental models that form the bulwark of lasting knowledge.

Solution paralysis

A second study, published in the ASSA Journal, is more direct and more disturbing, because it puts numbers on what teachers already know intuitively. In a study that combined surveys and interviews, heavy AI users scored 17.3 percentage points lower on critical thinking and showed 22% worse memory retention. That is what happens when learners offload their cognitive work to a machine. The interviews surfaced a phrase that captures what many professors are seeing: “solution paralysis”, the feeling of being unable to start, decide, or move forward without AI assistance. The same study found that humanities students showed the biggest drops in cognitive ability. That is a serious problem, because the humanities and social sciences are precisely where people learn to think critically, build arguments, draft laws, and shape policy.

A peer-reviewed paper from August 2025 on the impact of “excessive AI tool usage” points to one of the central paradoxes of educational technology: tools implemented to facilitate learning can, when over-utilised, damage the very skills they are meant to build.

Taken together, the evidence points to a fundamental reality: the difficulty is not that students use AI; it is that many engage with it without intentional thought. And our existing assessment methods and frameworks frequently allow this to pass as “learning”.

Remedies such as AI receipts

So what can teachers actually do about it, especially in college, where classes can be large and grading is already a heavy load?

Techno-policing is not the answer. The better responses are changes in teaching that make thinking visible and make using AI without thinking intellectually pointless.

One useful step is the “AI receipt”: students have to write down where they used AI, what they asked for, what they got, and what they changed. This isn’t about shaming anyone; it’s about making students take responsibility. When they have to explain how they use AI, they stop thinking of it as magic and start treating it as a choice with consequences. The “AI receipt” also helps teachers see whether a student is thinking about thinking: monitoring, understanding, and regulating their own cognitive processes, such as planning and evaluating alternatives. This is meta-cognition, and it is exactly what erodes when students delegate those capabilities to AI tools.

Another effective remedy is a “before-AI” requirement: students must first write a rough paragraph in their own words about what they believe, what they don’t understand, and what they think the answer may be. Only after that may they turn to AI. This anchors the student’s own thinking first, so AI becomes a comparison tool rather than a replacement.

Teachers are also pairing take-home assignments with low-stakes, in-class assessments such as a two-minute oral defence, a quick “explain your argument”, or a short application problem. This doesn’t punish the use of AI; it makes understanding necessary. It also counters “solution paralysis”: students learn that they need to think clearly even when the tool is not in their hands.

Then there are “error-hunting” activities, where students are given an AI-generated solution seeded with faults, weak reasoning, bogus citations, superficial generalisations, and have to find and fix them. This changes the relationship between the student and the tool. AI is no longer the authority to follow; it is the draft to question. It teaches the kind of judgment that substitutive use tends to weaken.

Human source assignments

Colleges in India have another advantage that AI can’t fake: real, local, lived context. Requiring at least one “human source” in an assignment forces students to connect ideas to reality. It could be an interview with a local business owner, an observation of a classroom, a neighbourhood case, a campus dataset, or a field note. AI can write, but it cannot stand in for that kind of direct intellectual responsibility.

None of this needs extensive training or costly software. What it needs is a change of mindset: stop setting assignments that reward polished output and start setting projects that reward reasoning, reasoning that can be traced.

Step back and think, and we realise that a lot of what we do now teaches students to use AI without thinking. When we set predictable prompts, accept generic arguments, and grade mostly for fluency, we are building an assessment system that AI can easily pass. Students see this faster than institutions do. If we want students to stop outsourcing their thinking, we must stop rewarding tasks that can be done without it.

India is moving quickly toward an industry shaped by AI. But can we afford an AI-shaped education that makes students sound smart without teaching them how to be smart when it matters? The true measure of learning is not a student’s ability to complete an assignment at midnight, but their capacity to navigate complexity, in the classroom or in life, without assistance. If college becomes a place where people merely learn to write quickly, we will end up with a generation that knows how to answer questions but is not very good at making decisions. The answer is not to outlaw AI; it is to make thinking necessary again, and to create classrooms where the mind, not the machine, gets the grade.

(K. Ramachandran is a former journalist and now an education strategist. He writes and comments on trends in higher education.)



