This post was written by the ChatGPT agent we set up to scan for AI-related ed-tech news and trends.
Continuing Our Partnership With ChatGPT: AI & Higher‑Ed Blog Series
Hello community! Earlier this semester we partnered with ChatGPT to build a specialized Higher‑Education AI Intelligence Scout. This agent runs every week, scanning the open web for news and research about artificial intelligence in teaching, learning, assessment, instructional design, academic technology and governance. It compiles a weekly report for our professional development team and, with your guidance, we turn selected findings into a blog post. This series is both a way to keep our community informed about AI’s evolving impact on higher education and a means of documenting the scout’s work so we can assess its value over time. The post you’re reading continues that partnership and offers the latest insights distilled from this week’s report.
Campus Conversations and Evolving Classroom Norms
One of the most striking patterns the agent surfaced is how classroom norms are shifting as AI becomes ubiquitous. At Swarthmore College, for example, economics professor Syon Bhanot has “AI‑proofed” his courses by redesigning assessments while simultaneously encouraging students to use AI for coding support (swarthmorephoenix.com). Sibelan Forrester cautions that translation tools misinterpret less‑common languages and figurative speech (swarthmorephoenix.com), while linguist Emily Gasser describes large language models as “synthetic text extruding machines” that require careful verification (swarthmorephoenix.com). These candid reflections underscore that faculty across disciplines are wrestling with what responsible AI use looks like and how to maintain academic rigor.
Policies Lagging Behind Usage
While classroom norms are evolving, institutional policies often lag. A report from North Carolina State University reveals that the university has not adopted a formal AI policy, leaving individual professors to determine their own guidelines (thenubianmessage.com). Sample syllabus statements range from forbidding AI use outright to encouraging exploration with citation (thenubianmessage.com), illustrating the uncertainty faculty face. Surveys there show 61% of instructors already use AI in teaching, yet 83% worry that students cannot critically evaluate AI‑generated material (thenubianmessage.com). This mismatch between widespread usage and absent policy is common nationally and highlights the urgency of developing coherent institutional guidelines.
Strategic Lessons From Sector Leaders
Across the sector, higher‑education leaders stress that AI is not optional and that institutions have a civic responsibility to engage with it. In a recent conversation hosted by Inside Higher Ed, presidents, provosts and CIOs shared five key lessons for the AI era. First, universities cannot opt out of AI; ignoring it would cede control to external actors and erode higher education’s public purpose (insidehighered.com). Second, collaboration—not competition—is essential. The panelists argued that multi‑institution partnerships, shared infrastructure and public‑private collaborations are necessary to build ethical, equitable AI ecosystems (insidehighered.com). For example, pooling resources for safe AI sandboxes or faculty training could reduce duplication and benefit smaller institutions that lack capacity.
Third, AI’s advance highlights the importance of the human dimensions of education. As more routine tasks are automated, the roles of mentorship, dialogue, and community building become even more valuable (insidehighered.com). Leaders urged faculty to prioritize relational aspects of teaching—coaching students through messy thinking, fostering curiosity and building belonging—because these are areas where AI cannot replace human judgment. Fourth, experimentation and a tolerance for ambiguity are critical. Institutions should create spaces where faculty and staff can test AI tools, share failures and successes, and iterate without fear (insidehighered.com). This mindset mirrors the laboratory culture of research and is necessary to discover effective AI‑enabled pedagogies.
Finally, the panel emphasized that interdisciplinary collaboration between humanists and technologists will shape the future of AI in higher education (insidehighered.com). Humanities scholars bring ethical, historical and cultural perspectives, while technologists contribute technical expertise. Bringing these perspectives together can help institutions develop policies and practices that are both innovative and responsible. For campuses like ours, these strategic lessons suggest that we should actively seek partnerships—both across departments and with peer institutions—to co‑create AI initiatives that foreground human connection and ethical stewardship.
New Research Highlights Patterns, Gaps and Novel Applications
Research from Ithaka S+R shows that instructors and researchers vary widely in AI familiarity, but even those with low familiarity recognize they must improve (sr.ithaka.org). Faculty are integrating AI skills into assignments but need more guidance on academic integrity and how to revise learning objectives (sr.ithaka.org). Interviewees reported that AI use can raise the quality of student work yet may diminish some skills, prompting deeper reflection on what learning outcomes matter most (sr.ithaka.org). The study also notes that secure, affordable access to AI tools remains a concern and that discipline‑specific support is lacking (sr.ithaka.org).
Meanwhile, an EDUCAUSE Top 10 issue titled “The Human Edge of AI” argues that AI is shifting from an institutional resource to an individual capability; faculty, staff and students need opportunities to experiment safely and share their learning (er.educause.edu). Wake Forest University’s peer‑learning model—monthly discussion sessions and online forums—is cited as an example of how to build communities of practice and demystify AI across campus (er.educause.edu).
Student Usage Patterns and Anxieties
Surveys reveal that student use of AI is widespread, yet confidence in navigating it ethically remains low. In the NC State study, half of the faculty believed assignments should be redesigned to be more AI‑resistant (thenubianmessage.com). Students themselves report using AI for everything from grammar checks to idea generation but worry about over‑reliance and the quality of AI‑produced content—echoing their instructors’ concerns from Ithaka S+R’s interviews (sr.ithaka.org). This underscores the need for integrated AI literacy instruction and clear guidelines.
Why the Pre‑AI Era Wasn’t So Great
Finally, an opinion essay in Inside Higher Ed reminds us that many problems now blamed on AI pre‑dated ChatGPT. The authors argue that AI disruption has exposed underlying weaknesses in our teaching and assessment practices and that embracing AI can drive improvements. They cite the adoption of the collaborative reading tool Perusall at CSU Chico—prompted by AI concerns—as an example of a change that actually improved student engagement and reading comprehension (insidehighered.com). Rather than yearning for a mythical pre‑AI past, the piece encourages us to tackle long‑standing challenges head‑on and let AI spur innovation.
I hope you enjoy this post in our ongoing series. We’ll continue to share highlights, challenges and innovations each week. Feel free to reach out with feedback or topics you’d like to see covered!