To use or not is no longer the question. From IITs to DU, universities are fighting unethical AI use

IIT-Delhi Director Rangan Banerjee told ThePrint that these were just the “first tranche” of recommendations and the guidelines were being further updated. “The idea is not just to create fixed rules, but to set a framework that evolves with time. AI is not static—it’s fast evolving, and so the way we deal with it in education also has to evolve.”

Banerjee said there was no clear-cut answer to how much AI should be allowed in academic work. “It’s not like you can say ‘30 percent AI is okay.’ Every context is different.”

As such, he said, faculty need to be mindful when designing assignments or setting question papers. “They need to ask themselves—can this be easily answered using AI? If yes, then how do I change the question? Many faculty are now trying out prompts with AI tools, then adjusting their assessments accordingly. That kind of innovation is essential because AI is not going away.”

The use of AI tools in educational institutions has become a global issue. With such tools now easily accessible, the question is no longer whether they should be used, but how they can be used to elevate teaching, learning, and assessment without undermining creativity or academic honesty.

However, experts caution that there is no universally agreed-upon definition of what constitutes the “right balance” when it comes to the use of AI in academics. John J. Kennedy, former dean of the School of Arts and Humanities at Christ University, Bengaluru, emphasised the need for India to adopt a more structured and forward-looking approach.

“This means crafting clear, actionable guidelines that empower both students and faculty, not by imposing blanket bans, but by fostering ethical, adaptable frameworks that evolve with the technology. Outright prohibition of AI is neither practical nor sustainable in the long run,” he told ThePrint.

For now, in the absence of institution-wide guidelines, faculty members are left grappling with the issue at an individual level, relying on tools like Turnitin and Paperpal for plagiarism and AI detection.


Lack of ethics awareness

According to IIT-Delhi’s survey of 427 students and 88 faculty members last year, every student surveyed reported using generative AI tools for writing assistance. Additionally, the report, accessed by ThePrint, found that 62 percent used them for coding, 58 percent for idea generation, and 52 percent for exam and quiz preparation.

The initial report by the committee on AI use said faculty members fear that extensive use of these tools could undermine the development of critical thinking and analytical skills among students, as it may encourage “shortcuts” over genuine engagement with complex concepts.

This, however, is not just an IIT-Delhi issue. Concerns are also mounting over the increasing use of AI-generated content in academic research.

Antara Chakrabarty, a PhD scholar at South Asian University (SAU) who regularly mentors students on their research proposals, voiced her frustration over such AI use on the social media platform X last week.

Chakrabarty, who is pursuing a PhD in Sociology, said it is acceptable for scholars to use AI for assistance, but not to generate entire research proposals. She recounted to ThePrint an incident in which a student sent her a proposal fully written by ChatGPT, asking for suggestions. She asked him to rewrite it, but he never responded. Two months later, he told her he had been admitted to a PhD programme using the same proposal. “It’s a very troubling trend,” she said.

In November last year, a postgraduate law student sued OP Jindal Global University for failing him over allegations that 88 percent of his coursework was AI-generated. The case was later disposed of after the university issued fresh transcripts following a re-examination.

Faculty members at various institutes say students are unaware of the ethical issues involved in using AI-generated content. The IIT-Delhi survey report, for instance, revealed that 52 percent of students did not view the use of generative AI tools as raising professional ethical concerns.

Suman Chakrabarty, professor in the Department of Mechanical Engineering at IIT-Kharagpur, said it becomes an ethical issue especially when a student’s work is 100 percent AI-generated.

“Even if the AI solution is fully correct, it’s supposed to be an independent effort by the student. This is a fundamental ethical concern and bad practice, which could hurt the student in their professional life.” The first issue, he said, is the loss of foundational knowledge; the second is ethics; and the third is the “subtle but real risk that many AI-generated answers include small, hidden mistakes”.

“Especially in scientific subjects, AI might confidently present wrong or ambiguous content—and I’ve seen that happen in the case of my own students,” he told ThePrint.

But teachers also emphasised the difficulty of ascertaining that AI had been used to put together a project. Tanvir Aeijaz, associate professor of political science at Ramjas College, Delhi University, said, “At the research level, there are still many software tools to detect AI usage and plagiarism. But at the college and undergraduate levels, the problem is different. Teachers have to evaluate hundreds of assignments, and it becomes practically impossible to check each one thoroughly.”

“Just imagine — in a single class, there might be around 150 students. That means 150 assignments, each typically five pages long in hard copy format. Even if a teacher suspects the use of AI, it’s nearly impossible to detect,” Aeijaz added.

Another political science professor from a prominent Delhi University college who did not wish to be named told ThePrint that last year, at least 20 students submitted nearly identical assignments on political theory. “It was clearly done using ChatGPT. I had to cancel the assignment and hold a class test instead. It’s getting harder to use creative forms of assessment.”

Surajit Mazumdar, a professor at JNU’s Centre for Economic Studies and Planning, noted that, as a result of this AI use, many professors are now moving away from take-home assignments.

“Earlier, students could work at their own pace. But with AI tools so easily available, we’re forced to rely more on exams. The flexibility of creative assessments has been restricted.”


Measures to curb unethical use

Officials from several higher education institutions told ThePrint that they have implemented various measures to curb the unethical use of AI in teaching and learning, beyond just relying on software to detect AI-generated content.

For instance, the Birla Institute of Technology and Science (BITS), Pilani, has banned the use of mobile phones during exams; students can have their course registration cancelled if they are found merely in possession of one.

“Before exams, which are mostly open-book at BITS, students must submit a handwritten declaration stating they are not carrying a phone. These steps reflect a zero-tolerance policy towards cheating using AI-enabled devices,” BITS group Vice-Chancellor V. Ramgopal Rao told ThePrint.

He noted that assignments have largely lost relevance, as most students either submit AI output as is or, at best, tweak it slightly. “As a result, BITS faculty no longer rely on assignments for evaluation and instead conduct oral exams (viva voce) to assess real understanding.”

Professor Chakrabarty has also tried an innovative approach with his students. “Last semester, I included ChatGPT-generated answers in a model question paper and asked students to critically analyse those answers—identifying what was correct, what was wrong, and how to improve them.”

He further said, “I want to simulate real-world situations in exams. In real life, we don’t work in a confined room without help—we solve problems using whatever resources are available. So, in that exam, I gave them full internet access and told them to evaluate and improve ChatGPT’s answers. The goal is to foster critical thinking. If we want to thrive in the AI era, we need to shift the way we ask questions.”

The Indian Institute of Management (IIM), Sambalpur, has shifted towards in-class activities over take-home projects. “IIM-Sambalpur also conducts case discussion assessments using Artificial Intelligence, where every student comes prepared and gets personalised feedback on areas to work on. This encourages healthy class discussions, where everyone is motivated and prepared to participate, and develops critical skills in the process,” Institute Director Mahadeo Jaiswal told ThePrint.

Tanvir Aeijaz said that he, along with many other professors at Delhi University, has completely stopped accepting typed assignments. “Most teachers now require handwritten submissions, and we prefer this method because we’re better equipped to review handwritten work. Also, that way, students will at least read the topic while writing it out. However, this is by no means a foolproof solution.”

The IIT-Delhi guidelines also emphasised the need to re-evaluate current assessment methods for assignments and exams, prioritising approaches that promote critical thinking, original analysis, and the application of knowledge in ways that AI tools cannot easily replicate.

Need for policy intervention

However, several faculty members and students told ThePrint that, if used correctly and within limits, AI tools can benefit teaching and learning.

For instance, AI can save researchers a significant amount of time.

BITS Pilani’s Vice-Chancellor emphasised that AI can dramatically reduce the time required to complete research and PhD-level tasks, such as literature reviews.

“What once took six months can now be completed in just two hours,” he said, calling it a “huge boon” for researchers.

Students also agree.

Shalini Maheshwari, a PhD scholar at Banaras Hindu University, said, “It can assist with data analysis, routine tasks, and discovering new insights. However, its effectiveness depends on the quality of the tools and the researchers’ familiarity with them. As AI continues to improve, its adoption in research will likely increase, though its usefulness varies by field.”

Students have also been using AI tools to overcome language barriers. A postgraduate student at Delhi University, requesting anonymity, told ThePrint, “Coming from the Hindi heartland, my English speaking and writing skills are minimal. To compete with other students in the class, I often use AI to help me write assignments in English.”

However, experts and educators emphasised that for AI to be used ethically in academia, it needs to be properly regulated.

For instance, former Christ University professor Kennedy explained, “In academics, AI can also be useful for things like referencing or summarising texts. A student might not have the time to read multiple novels, but with AI, they can get a summary of them. AI can help with these tasks. The key issue arises when students generate content using AI and then claim it as their own original work. That’s where ethical concerns come into play.”

Experts further said that working intelligently with AI to maximise its benefits is a vital skill for the future.

Saikat Majumdar, professor and head of the department of English at Ashoka University, and author of the forthcoming book ‘Open Intelligence: Education between Art and Artificial’, said the real challenge is the growing outsourcing of human thought and creativity to machines.

“The challenge is the inevitable outsourcing of human thought and creativity to AI, which will render more and more human faculties redundant, as technology has done in the past—but this will happen on a wider and more pervasive scale than ever before. Therefore, professors will have to enhance their own AI training so as to be able to easily catch inferior collaboration with AI by students.”

Calling for a multi-stakeholder approach to formulating guidelines for the use of AI in academia, IIM-Raipur Director Ram Kumar Kakani stressed the importance of involving government, industry, academic leaders, and AI experts in developing effective AI-in-education policies.

“AI systems should be based on proven educational methodologies and aligned with clearly defined learning objectives,” he said.

“Regular audits of these systems and their data handling processes are essential to identify potential privacy risks and safeguard user data. A phased implementation strategy, beginning with pilot projects, will ensure adaptability and help make AI a responsible enabler of learning and innovation in Indian academia.”

(Edited by Sanya Mathur)

