AI Ethics in Scholarship: Plagiarism, Bias, and the Evolution of Academic Integrity
In Kazakhstan's universities, a quiet revolution is underway - one driven not by politics or economic change, but by artificial intelligence.
By Michael Jones
From writing centres and classrooms to dormitories, students are turning to AI tools such as ChatGPT, Grammarly, and QuillBot for help with writing assignments. While some use them for basic editing or idea generation, others rely on AI to draft entire essays.
Gone are the days of arguing over whether AI belongs in an academic environment; the discussion has shifted to how it can be used most effectively and ethically.
AI promises to enhance academic life in Kazakhstan by helping multilingual learners navigate writing requirements in Kazakh, Russian, and English with instant, customized feedback. Left unchecked, however, its use raises serious ethical concerns, chief among them plagiarism and biased output.
Plagiarism takes on new meaning in the age of AI. Traditionally, it means copying another person's work without giving credit, but with AI the lines blur. Students who let AI produce an essay and submit it without revision risk committing plagiarism; instructors, meanwhile, grapple with harder cases, such as whether exceptions should be made for AI help with structure and transitions.
Faced with these challenges, universities in Kazakhstan must strike a balance between academic integrity and technological advancement. Crafting nuanced academic integrity policies that encourage transparency, fairness, and critical thinking is essential. Each student must still navigate the shades of grey, but global institutions such as UNESCO can help by offering clear guidelines and citation practices for AI-generated content.
Rather than simply penalizing students, universities should foster a change in academic culture. Students need to understand the importance of originality and authorship while learning to use AI as a tool that supports critical thinking and writing rather than replaces them.
Bias is another significant issue in AI. Although many believe AI is neutral due to its algorithmic foundations, AI models are trained on datasets that predominantly come from English and Western sources. In practice, this means AI reproduces Western cultural, linguistic, and ideological assumptions and can reinforce Anglo-American scholarly practices over local systems of knowledge. To address this, universities should promote more ethnic diversity in AI training data and adjust coursework content to encourage the development of a local or regional academic voice.
With its unique multilingual and multicultural environment, Kazakhstan has the potential to lead in creating AI policies attuned to local realities - by reforming academic integrity policies, investing in faculty and student training, and hosting workshops on ethical AI use.
The author is Michael Jones, a writing and communications instructor at the School of Social Science and Humanities, Nazarbayev University (Astana).
