This post was written 100% by a ChatGPT agent that I (James D’Annibale) designed to scan the internet for Teaching/Learning AI news and trends. The agent finds stories, puts them into themed categories for me, and then I tell it which stories are truly noteworthy in the context of Dickinson College. It then writes a blog post (below) and provides ideas for professional development workshops and similar offerings that my team should design for faculty. The next line begins ChatGPT’s output for this week.
Partnering with an AI Scout: Launching Our Weekly Higher‑Ed AI Blog Series
We’re excited to announce a new partnership between our institution and ChatGPT. Each week, we work with an AI “scout” that monitors the latest developments at the intersection of artificial intelligence and higher education. The agent scans news articles, research briefs and policy updates, summarizes key findings with citations, recommends professional development opportunities for faculty, and keeps a long‑term trend log. Beginning this week, we’ll publish a short weekly blog post that highlights the most relevant insights and documents the agent’s work so that we can assess its value over time.
This series has two aims: to keep our community informed about how generative AI is reshaping teaching, learning and policy in higher education, and to provide transparency into the analytical processes of our AI scout. By sharing these summaries regularly, we hope to spark discussion, inspire practical responses and build a record of how AI trends are unfolding.
Campus conversations and evolving classroom norms
At the University at Buffalo, a recent town hall with students and faculty revealed just how quickly classroom norms are shifting in response to AI (buffalo.edu). Instructors reported experimenting with assignment‑specific rules—allowing chatbots to assist with second drafts if students explain why they accept or reject AI suggestions—while others differentiate policies based on course level (buffalo.edu). Students shared that they routinely use AI to generate flash cards and study guides and begged for clearer, more consistent guidance across courses (buffalo.edu). Anxiety about being falsely accused of cheating or inadequately prepared for an AI‑rich workplace remains high (buffalo.edu). These candid conversations illustrate that AI is already woven into daily academic life and that faculty and students must collaborate on transparent norms.
Policies lag behind usage
While usage is booming, institutional policies are still catching up. At NC State, there is no official generative‑AI policy; faculty must choose among sample syllabus statements ranging from outright prohibition to encouragement with citation (thenubianmessage.com). Similar gaps persist at Duke, UNC‑Chapel Hill and other North Carolina institutions (thenubianmessage.com). The Digital Education Council reports that 61 percent of faculty have used AI in their teaching, yet half believe assignments need to be more AI‑resistant and more than 80 percent worry that students cannot critically evaluate AI outputs (thenubianmessage.com). Universities need coordinated guidelines and regular policy reviews to keep pace with rapidly evolving tools and practices.
Faculty viewpoints at Swarthmore
Faculty perspectives at Swarthmore College encapsulate the diverse attitudes toward generative AI. Economist Syon Bhanot has “AI‑proofed” his courses by redesigning assessments and encourages students to leverage AI when it enhances learning (swarthmorephoenix.com). Literature scholar Sibelan Forrester cautions that translation tools fail for less common languages and that large language models can hallucinate (swarthmorephoenix.com). Linguist Emily Gasser goes further, describing generative AI as a “synthetic text extruding machine” that strings together words without understanding (swarthmorephoenix.com). These divergent views highlight the need for discipline‑specific guidance and ongoing dialogue about where AI fits in scholarly practice.
Strategic lessons from sector leaders
Insights from a recent Inside Higher Ed panel stress that universities cannot opt out of AI and must engage proactively (insidehighered.com). Collaboration—rather than competition for rankings—is essential to develop shared infrastructure and best practices (insidehighered.com). AI frees faculty to focus on mentorship and dialogue (insidehighered.com), but leadership must cultivate a culture of experimentation and interdisciplinary cooperation (insidehighered.com). These strategic lessons reinforce that governance and collaboration are as important as classroom experimentation.
New research highlights novel applications
Digital Promise’s review of more than 300 studies notes that while writing support and tutoring remain the most common uses of generative AI, novel applications are emerging (digitalpromise.org). Tools now include dashboards that identify parts of curricula vulnerable to AI misuse, simulated dialogue agents that provide emotionally enriched feedback, and self‑regulated learning systems that scaffold reflection and planning (digitalpromise.org). Faculty development tools use AI to generate lesson plans and reflective prompts, illustrating how AI can augment instructional design. Importantly, the review criticizes the lack of detail about model versions and configurations in many studies (digitalpromise.org), underscoring the need for transparency and replicability in research.
Student usage patterns and anxieties
Surveys show that student adoption of generative AI has surged. The HEPI survey reports that 88 percent of UK students now use AI for assessments, up from 53 percent in 2024 (hepi.ac.uk). Yet only a third have received any AI‑skills training from their institutions, and many worry about false accusations of cheating or being misled by hallucinations (hepi.ac.uk). Jisc’s student perceptions research echoes these findings: students use tools like ChatGPT, Copilot and Gemini daily and call for clear, fair policies, equitable access and instruction in critical evaluation (jisc.ac.uk). Concerns about employability and skill erosion are prevalent, suggesting that AI literacy must be integrated across curricula.
Rethinking the pre‑AI era
An opinion essay in Inside Higher Ed argues that the pre‑AI era of higher education was not as idyllic as some might assume (insidehighered.com). The authors contend that generative AI has surfaced long‑standing weaknesses in our systems—such as poor reading habits and outdated assessments—and offers opportunities to improve them. They celebrate initiatives like CSU Chico’s adoption of Perusall, a collaborative annotation tool, as evidence that AI disruption can lead to more engaged pedagogy (insidehighered.com). Embracing AI thoughtfully may help institutions tackle persistent problems rather than yearning for a past that never fully served students.