
Artificial intelligence has become a regular fixture in university classrooms, libraries and student residences. Students use AI to summarize papers, translate text and even brainstorm ideas for assignments. Educators are investigating AI as a tool for supporting research and academic writing. But alongside these opportunities comes a growing set of risks.
A new survey shows the scale of concern in Chinese universities. Students and faculty alike are encountering misleading or fraudulent material produced by AI tools. Some reported that AI tools boosted their productivity, but worry that they may grow more reliant on AI over time, cross ethical lines, or, at worst, lose independent thought and agency. These results underscore the urgent need to rethink AI’s place in academic life.
High Use but Growing Skepticism Across Campuses
According to the Beijing-based consultancy MyCOS, nearly 80 percent of students and teachers have experienced misleading AI-generated content. This is known as “hallucination,” where AI outputs appear credible but are factually wrong.
The survey, conducted between July 8 and 21, covered 2,971 students and 1,073 faculty members. Results showed that more than 65 percent of students have faced disputes linked to AI-generated work, underscoring the seriousness of the issue.
Students tend to cross-check outputs more than teachers. About 42.7 percent of students verify content using multiple AI tools, compared with 32 percent of faculty. Both groups, however, rely heavily on authoritative sources to validate AI outputs.
Rising Fears of Academic Misconduct
Apprehension about academic dishonesty involving AI is widespread. More than 85 percent of students worried that plagiarism, fake references or fabricated data could undermine academic integrity.
Most universities have policies governing the use of AI tools, but students expressed low confidence in them: nearly 42 percent of respondents rated existing policies as “partially effective”, and over 11 percent considered them outdated. The gap between technology and institutional regulation is widening.
How Students Use AI in Daily Learning
For many students, AI is both a tool and a temptation. Gao Xin, a first-year graduate student at Fudan University, said she uses AI daily for literature searches, translation, and structural suggestions in writing. She values its role in simplifying complex papers but avoids depending on it for original academic work.
She shared that AI sometimes generates references to papers that do not exist on CNKI, a leading Chinese academic database. Because of this, Gao now relies on AI mainly for keyword suggestions and translations. She worries that overuse could make her lose independent thinking skills or even unintentionally violate university rules.
Balancing Guidance and Responsibility
At Tsinghua University, student Qiu Letao said academic ethics courses touch on AI usage. However, there is no unified standard across departments or a specific limit on how much AI-generated content is acceptable.
Qiu uses AI daily to brainstorm and organize her coursework. Yet she avoids AI-generated summaries in search engines, choosing instead to consult academic databases or reliable news outlets. She believes that AI content often lacks proper context and sources.
Her method of writing involves building a strong outline first, then generating small sections of content at a time. This approach allows her to remain in control and avoid risks of plagiarism or overdependence.
Universities Struggling to Keep Up with AI
While institutions attempt to regulate AI use, universities remain behind the curve of the technology. Although more than 60 percent of teachers and students confirm that a policy exists, doubt about those policies’ actual effectiveness indicates that universities need to catch up quickly.
There are responsible students, however, like Qiu and Gao, who show accountability by cross-checking facts and verifying AI-generated material. The absence of clearly established limits leaves everyone unsure what “acceptable use” of AI even means. Universities need more substantive AI-focused policies that spell out how to balance innovation with the preservation of academic integrity.
Looking Ahead: Responsible AI Use in Education
The widespread adoption of AI across Chinese universities shows that institutions recognize AI will transform how knowledge is created, accessed, and verified. Students and faculty alike are experiencing the benefits while remaining aware of the challenges: misinformation, academic dishonesty, and overdependence on AI.
As AI technologies continue to improve, the debate over their challenges to traditional teaching and learning in higher education will only intensify. Universities and colleges need to adapt academic policies to refine definitions of academic misconduct, clarify expectations around AI, and encourage critical, active thinking so that AI ultimately serves as an academic accelerator rather than a decelerator.