Key Takeaways
- The rise of generative AI tools like ChatGPT since 2022 has sparked a growing debate on academic integrity at Rider University and beyond.
- Some professors enforce strict “no‑AI” policies, using in‑class blue‑book essays or assignment design to detect cheating, while others see AI as a useful brainstorming aid.
- Research from the University of St. Augustine (2025) and Colorado State University highlights potential benefits of AI for inclusive learning, translation services, and administrative efficiency.
- Students are divided: many view AI as a supplement for grammar, résumé optimization, and time management, whereas others worry that reliance on it erodes critical thinking and writing skills.
- Faculty members report that AI‑generated work often shows a distinct, uniform style that deviates from a student’s usual “voice,” making detection possible without specialized software.
- Ongoing dialogue focuses on finding a balance—leveraging AI’s advantages while preserving the development of independent analysis and literacy skills.
Rising AI Dependence Among Students
Since the debut of ChatGPT in late 2022, artificial intelligence has moved from novelty to mainstream tool in college classrooms. At Rider University, professors report a growing number of students turning to AI‑generated text for essays, position papers, and even routine homework. English Professor Vanita Neelakanta observed that the temptation to use AI is especially strong among students anxious about their grades, and that those same students want assurance of a level playing field: “I think a lot of people wanted that reassurance that somebody else in the class was not going to cheat.” This shift has created a dilemma for educators, who must weigh the benefits of technological assistance against the risk of undermining authentic learning.
Faculty Concerns About Academic Integrity
Many instructors worry that reliance on AI compromises the core mission of higher education: cultivating independent thought and rigorous analysis. History and philosophy chairwoman Nikki Shepardson expressed alarm when she discovered that students had used AI to answer a prompt asking, “What is enlightenment?” She warned, “This is really scary, because it [AI] actually is taking away the analysis.” Shepardson’s sentiment echoes a broader fear among faculty that students may outsource critical thinking to algorithms, resulting in superficial work that lacks personal insight.
In‑Class Writing Solutions: Blue Books
To counteract AI‑assisted cheating, some professors have reverted to low‑tech assessment methods. Neelakanta announced that certain writing assignments would be completed in blue books during class time, a move she said provided “one way of leveling the playing field and making sure that everybody was at least producing honest work.” By removing the opportunity to consult external AI tools, instructors aim to guarantee that submitted work reflects each student’s genuine understanding and effort.
Student Perspectives: Unfair Advantage in Competitions
The perception of unfairness extends beyond the classroom into extracurricular arenas. Junior political science major Zack Leshner recounted an incident during a Model United Nations trip where rival delegates used AI to craft their position papers. He called the practice “kind of stupid and not really fair,” arguing that in any competitive setting, AI‑generated content gives an undue edge to those who rely on it rather than developing their own arguments. Leshner’s critique highlights a tension between utilizing technology for efficiency and maintaining equity in evaluative contexts.
Benefits Highlighted by Research Studies
Despite the concerns, scholarly research points to potential upsides of AI in higher education. A 2025 article from the University of St. Augustine noted that AI can foster inclusive education by accommodating diverse learning styles and providing resources, such as real‑time translations, for students who might otherwise lack access. The same study claimed AI could improve administrative efficiency, freeing up time for professors to better support students. Supporting this view, a 2025 Colorado State University survey of over 12,000 students and staff found that 68% of professors reported not using AI detectors, suggesting many educators see value in integrating AI rather than merely policing its use.
Professor Strategies to Detect and Deter AI Use
Faculty members have developed practical tactics to spot AI‑generated work without relying solely on software detectors. English Professor Megan Titus explained that she often recognizes inconsistencies in a student’s “voice” when comparing an assignment to their in‑class discourse: “It’s always been a tell that you get an assignment from a student, and you know their writing, and you know their voice, and this sounds nothing like them at all, and AI does have a very particular kind of style.” Titus enforces a strict “no‑AI” policy outside of specific courses where AI is deliberately taught as a tool; when violations occur, she assigns a zero but offers a chance to redo the work. Similarly, Nikki Shepardson designs assignments, such as historical‑document analyses, that require nuanced interpretation, making it difficult for AI to produce convincing responses without clear signs of plagiarism.
Student Views: AI as Aid, Not Replacement
While many professors remain wary, a subset of students champions AI as a complementary resource rather than a shortcut. Senior cybersecurity major Jordyn Bostick described using AI to “optimize her resumes to match job descriptions and company values” and to assist with brainstorming for projects, noting that her professors encourage learning how to leverage AI effectively. Bostick emphasized that she never uses AI to write full papers, reserving it for tasks like grammar polishing, sentence‑structure improvement, and time management: “Being able to do time management and just being able to prioritize things, it’s helped me in that regard.” Her experience illustrates a pragmatic approach in which AI supports productivity without supplanting personal effort.
Balancing AI Integration with Skill Development
The ongoing dialogue at Rider reflects a broader national conversation about how to harness AI’s advantages while safeguarding essential academic skills. Professors like Neelakanta use in‑class writing to restore confidence in fairness, while Titus and Shepardson craft assignments that expose the limitations of AI‑generated content. Meanwhile, students like Bostick point to legitimate uses for AI in editing, résumé tailoring, and daily organization. Moving forward, institutions may benefit from clear guidelines that distinguish permissible AI assistance, such as idea generation or language refinement, from prohibited uses such as outright authorship, fostering an environment where technology enhances, rather than erodes, the intellectual growth that lies at the heart of higher education.