One Educator’s Perspective: Approaching AI in Teaching and Learning with Intention
By James D’Annibale, Director of Academic Technology at Dickinson College
The views expressed here are personal and do not represent the official position of Dickinson College or the Academic Technology Department.
Updated 8/28/25
Introduction
As educators continue to grapple with the implications of generative AI, I’ve spent more than the past year thinking deeply about what this technology means for teaching and learning. This document reflects my current thinking, shaped by conversations with colleagues, real-world experimentation, reading, watching, and listening to various resources, and my background in education and instructional design. It offers a set of core principles and practical examples meant to guide how we might approach AI in higher ed with intentionality, transparency, and care. This isn’t meant to be comprehensive; I have many supporting guides, presentations, and examples I’d be happy to share or present if you, the reader, are interested in learning more. I really appreciate the time and effort it takes to read this, and I hope to converse with you about what you love, hate, or are confused about once you get to the end of this post/piece/not-sure-what-to-call-this (ChatGPT wanted to call it a manifesto).
Belief Statement
I believe that generative AI, when used thoughtfully and transparently, can significantly enhance teaching and learning in higher education and other educational settings. I also believe that it introduces real risks and valid concerns. Navigating these complexities requires more than policy. It demands pedagogy. It demands curiosity, humility, and a willingness to reimagine not only our assignments, but also the reasons we assign them in the first place.
While my professional title might imply I lead with technology, I assure you I am an educator first. My role is not to push any particular technology, but to help faculty and students consider how technologies (AI in this case) might support or transform their goals. And sometimes, the most appropriate conclusion is that it’s not the right fit for a particular context or course. But we cannot know that until we understand what is possible. There are certainly big-picture issues that need to be worked out regarding generative AI (as with other technologies like airline travel, automobiles, iPhones, etc.) when it comes to the environment, labor practices, and more. I do certainly wish for macro-level change to be positive, and I can help effect such change with my vote at the ballot box, my letters to government officials, and my voice when I meet with corporate leaders in the technology industry. But in my role as the Director of Academic Technology, my job is to help the academic community use (or determine when not to use) technology in distinct learning situations. As such, in my role I zoom in on individual courses, assignments, and the like. It is an objective fact, one I can demonstrate, that generative AI affords educators and students the opportunity to create the types of truly transformative learning experiences that educators have been craving for decades, if not centuries. Therefore, I wouldn’t be doing my job if I didn’t provide the community with the knowledge and tools that I’m capable of providing.
Core Principles
Utility vs. Risk Is My Preferred Framework for Thinking Through the Downsides of Technology
Every technology—from cars to calculators—requires us to weigh benefits against potential harms. AI is no different. Institutions and individuals must make this calculation for themselves, and the answer may vary by context. But ignoring the utility side of this scale and only focusing on the risks is not an approach I personally think anyone should take.
Among the risks worth acknowledging are the environmental toll of large-scale computing, labor conditions for workers supporting the AI industry (particularly in under-resourced regions), and the use of copyrighted works in training data without permission or compensation. These issues matter, and they deserve scrutiny and action.
Solving these problems is outside the scope of my work and expertise. I call on individuals and institutions concerned about these harms to advocate with their governments and other appropriate bodies for oversight and industry reform. I also call on well-resourced universities with the means to host more sustainable, yet effective, generative AI environments to open those systems up to smaller schools like mine, so that we can partner with institutions that share our missions rather than with for-profit corporations. Finally on this front, I would like to call attention to the idea that there is a big difference between the effects of individual use and corporate use. I recently read an article (How much energy does AI really use? The answer is surprising – and a little complicated) that did a good job of explaining how generative AI use by individuals affects electricity demand compared to the industry as a whole. One section of the article proposes that, when it comes to my individual effect on the environment, I do far more by avoiding flights where I can than by drastically reducing how much I use ChatGPT. For me personally, as a novice in the discipline of environmental science, that perspective makes sense, but I acknowledge that others may view it differently.
While we wait for large-scale regulation, reform, and the like, my role is to help faculty and students evaluate how these concerns factor into their own decisions, balancing utility with risk in ways that are responsible, transparent, and, most importantly for me, pedagogically sound.
Pedagogical Transparency Builds Trust and Equity
Students deserve to understand the goals behind their assignments, and that was true long before OpenAI rolled out ChatGPT. Students also deserve to understand what role AI (or even help from a human, like a tutor or friend) may play, and why certain uses are or aren’t permitted. Being clear about expectations and, more importantly, about what students are actually meant to learn helps prevent both confusion and misconduct. It also empowers students to reflect critically on their own learning.
I played football from 4th grade through college, and I find the mindset of an athlete instructive here. Athletes put great effort into practices and workouts because they KNOW what it does for them. They get stronger, faster, and better at their sport. The result is that they score more points, win the race, make the tackle, evade the linebacker, etc. They can feel themselves getting stronger, and they can easily make the connection between the workouts and practices and the gains they’ve made.
As educators, we’ve known for decades that students (and all people) learn better when they are motivated to do the work and think deeply about it. As an Education major in college, I learned in my very first education class that we ought not assume our students are intrinsically motivated. Instead, we need to very intentionally explain what the learning is and what it means for students’ futures.
Working at a liberal arts college, I hear a lot about (and personally believe in) the benefits of the liberal arts approach and curriculum, critical thinking, cross-disciplinary exploration, etc. I’m simply proposing that we take the words and ideas we already believe in and explain them directly to students in the classroom, assignment by assignment. I’m sure this happens in a lot of our classes, but we owe it to our students to do this across every course and every assignment, especially the big ones.
AI Should Never Replace Complex Thinking
When I teach students or faculty how to use AI, I do not offer what I believe to be shortcuts that undercut learning. I do not encourage delegation of reasoning. Instead, I use AI to scaffold metacognition, facilitate structured inquiry, and support inclusive learning environments. I propose that, when used well and appropriately, AI promotes deeper thinking. It does not substitute for it.
Unfortunately, most of the studies I’ve read regarding learning with AI focus on very shallow uses of AI that, of course, shortcut learning. This fall (2025) I am going to work with a group of Media Center student employees to analyze such studies and test my gut feeling that most of the “AI makes us dumber” research relies on very shallow uses of AI. I hope in the future to see more published studies regarding more complex uses of AI.
Disciplinary Context Matters
AI use in a business negotiation role-play might be generative and authentic; in a chemistry lab report, it might be inappropriate or even dangerous. Good teaching requires good judgment. My job is to support faculty as they make those judgments based on their expertise and values. At the same time, it is incumbent upon faculty to possess intimate knowledge of their learning objectives and to understand how curriculum is written and instructional design decisions are made. We cannot have a productive conversation about whether or how AI fits into a particular course or assignment unless both parties come to the table with pedagogical clarity. While my role is to support and guide these conversations, their impact is greatest when faculty have spent time reflecting on their learning goals and how their assignments serve those goals. This is not a critique—it’s an invitation. When faculty bring their disciplinary insight and teaching instincts to the conversation, and I bring my experience with instructional design and emerging tools, we can make meaningful, student-centered decisions together.
It’s okay to opt out—but let’s do it from a place of pedagogical clarity, and let’s ensure our students understand the rationale behind that choice.
AI Unlocks the Customization Educators Have Wanted for Decades
For decades, educators and students have expressed a desire to make learning more personalized, responsive, and adaptive to the needs of individual students. Generative AI finally offers a practical, scalable means of achieving that goal in many circumstances. From generating differentiated reading passages and questions to customizing practice scenarios or coaching strategies, AI gives educators new tools to honor the diversity of student needs, abilities, and aspirations. Crucially, this is not about automating education, but about empowering educators to reach students more effectively—meeting them where they are, and helping them grow from there.
Inclusive Design Must Be a Priority
Students enter our classrooms with wildly different levels of preparation, privilege, and (dis)ability. I personally (and I believe the Dickinson community does too) believe in the value of diversity, equity, and inclusion. I believe that as educators, we have an ethical obligation to help students overcome barriers, many of which they face through no fault of their own. AI can be a powerful support for students who come from inherently disadvantaged communities and students with cognitive disabilities, language barriers, or executive functioning challenges—but only if we use it intentionally. Equity isn’t automatic. It’s designed.
On a personal level, I live with narcolepsy and post-concussion syndrome, both of which limit the amount of cognitive energy I can spend on, and focus I can assign to, any given task. AI has helped me tremendously by allowing me to reserve that energy for what matters most—evaluating learning technologies, developing innovative instructional strategies, leading my team, and supporting faculty and students. Instead of exhausting myself figuring out the perfect way to structure a paragraph, I can focus on the ideas that paragraph is meant to express (much more on this in a section below). That kind of support, when reasonable and appropriate, should be available to every learner who faces such barriers.
Faculty Autonomy and Shared Responsibility Can Coexist
I respect that each instructor owns their syllabus and has academic freedom to choose course materials, craft assignments, implement teaching strategies, etc. At the same time, students benefit when there is some shared language and guidance across courses. I believe institutions should provide clear language and optional resources—not mandates—to help faculty make informed, student-centered decisions about AI. A cultural expectation of pedagogical transparency within the institution can be an example of this.
Practices and Examples
Designing Transparent Assignments
Drawing from research on transparency in teaching (Winkelmes, M. (2013). Transparency in Teaching: Faculty Share Data and Improve Students’ Learning. Liberal Education 99(2)), I developed a set of guidance to help faculty explain to students which uses of AI are appropriate or inappropriate. Below is a sample of the types of questions this guidance prompts faculty to think about and explain:
- What are the learning objectives?
- What work must be student-generated, and why? How does it impact their long-term learning?
- What kind of AI (or even human) help is allowed at each stage?
- How should students disclose their use?
- What support or checkpoints will faculty provide along the way?
This guidance is currently under review. When completed, it will be linked here so the reader of this document can review it in full.
Supporting Syllabus Clarity Through GPT
To help faculty articulate their AI policies clearly, I created a custom GPT that walks them through a structured decision process and generates a syllabus statement. The GPT doesn’t tell them what to do; it asks thoughtful questions, then drafts a statement based on their own beliefs and rules. Many faculty told me this process led to more thoughtful AI policies than they would have written alone.
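To make the shape of that question-then-draft process concrete, here is a minimal sketch of a similar chat loop written with the OpenAI Python SDK. It is an illustration only, not the actual custom GPT: the instruction text, the question list, the model name ("gpt-4o"), and the turn() helper are all my own assumptions for the example.

```python
# A minimal sketch (not the actual custom GPT) of a question-then-draft workflow
# using the OpenAI Python SDK. Instruction text, model name, and question list
# are illustrative assumptions, not the real configuration.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

INSTRUCTIONS = (
    "You help a faculty member write an AI syllabus statement. "
    "Ask one question at a time about learning objectives, which work must be "
    "student-generated, what AI or human help is allowed at each stage, and how "
    "students should disclose AI use. Do not impose a policy; only after all "
    "questions are answered, draft a statement reflecting the instructor's own answers."
)

messages = [{"role": "system", "content": INSTRUCTIONS}]

def turn(user_text: str) -> str:
    """Send one instructor reply and return the assistant's next question or draft."""
    messages.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply

print(turn("I'm ready to work on my AI policy for an intro statistics course."))
```

The real work is in the instructions: they tell the model to ask before drafting, which is what keeps the faculty member’s own beliefs and rules at the center of the resulting statement.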
Modeling AI as a Learning Partner
One of my most impactful tools is the Reading Help GPT, which doesn’t summarize texts or give answers; in fact, it’s designed to wag its figurative finger at students when they ask for such shortcuts. Instead, it asks guided reading questions designed to build comprehension and critical thinking. Students respond in their own words, get affirming feedback, and are encouraged to reflect and go deeper. This is especially helpful for students who didn’t arrive at college with strong academic reading skills. I plan on having more of these “bots with guardrails” created in the coming year and welcome collaborators.
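For readers curious what such guardrails might look like, here is a hypothetical sketch of the kind of instructions a bot like this could be given. It is not the real Reading Help GPT’s configuration; the constant name and wording are my own, and text like this could stand in for the INSTRUCTIONS string in the earlier sketch or be pasted into a GPT builder’s instructions field.

```python
# Illustrative guardrail instructions (a hypothetical sketch, not the real Reading Help GPT).
# Could replace INSTRUCTIONS in the earlier sketch or serve as GPT-builder instructions.
READING_HELP_INSTRUCTIONS = (
    "You are a reading coach. Never summarize the assigned text and never answer "
    "comprehension questions for the student. If asked for a summary or an answer, "
    "politely decline and redirect. Instead, ask one guided question at a time about "
    "the passage the student is reading, respond to their answer in an encouraging way, "
    "and follow up with a question that pushes them to go deeper in their own words."
)
```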
Enhancing Practice-Based Learning with AI
AI makes it possible to simulate real-world scenarios with unprecedented fidelity. A student in a business class can negotiate with an AI that acts like a seasoned executive. A future educator can role-play a parent-teacher conference. A political science student can engage in dynamic policy debate. These applications promote transfer of learning—from theory to practice—and are fully customizable to reflect course objectives.
Writing With, Not Through, AI
I used to dread writing. I felt physically and cognitively exhausted by writing, and that made the whole process inaccessible. I’m sure there are better words to use to properly express what my disabilities do to me. Perhaps it will help to explain that before my diagnoses and treatment, I’d often get home from work, crash onto the couch, and not wake up again until the morning. In meetings at work I’d be fighting to stay alert and able to contribute. The first day I took the medicine prescribed after my diagnoses, the only word I could use to describe how I felt was “awake.” Because of my disabilities, writing requires more energy than I often have, and that often resulted in doing the bare minimum. However, the first time I applied my AI-supported writing methodology to a long report, everything changed. I was immersed. I iterated. I thought harder and more joyfully than I ever had before. That experience confirmed for me what a powerful tool AI can be when used ethically and reflectively.
My AI-supported writing methodology essentially involves creating a document that I throw my words onto in an unstructured way. I use the document to “talk” through what I want to do, how I’d like to use information I’ve found (without AI), how I’d like things to be organized, how information from various sources ought to be used together, and so on. I’m sure there’s a more English-professor way to say all of this, but I imagine it’s very similar to how someone works with a ghost writer.
When I think of someone like the great Nelson Mandela working with his ghost writer, I imagine the two of them sitting at a table, with Mandela talking through all the things he wanted to have in his book and how he wanted to organize it all. I envision all of these pictures, memories, ideas, etc. popping out of Mandela’s head like comic-strip speech bubbles. I envision the ghost writer (in my mind it’s a ghost from Mario holding a pen and paper) taking all of those speech bubbles and throwing them into the ghost’s notebook, where the speech bubbles turn into wonderful prose. That’s essentially what ChatGPT does for me.
It’s important to note, though, that my approach only works because I already learned how to write. I know how to analyze, structure, and express ideas at the level required of my profession. What AI offers me is not a substitute for learning or thinking—it is an accommodation to help me overcome the barrier of my brain literally being unable to do what the rest of you take so much joy in. It helps me preserve my limited cognitive energy for the work that truly matters (to my job anyway). It allows me to focus on thinking rather than being drained by grammar, structure, formatting or phrasing. And that, I believe, is a kind of equity we should strive to extend to all learners who need it and for whom it is appropriate.
A colleague from the English department recently explained to me that he sees writing with AI the way someone might view translating with AI. If I write something in English and then have ChatGPT translate it into French, I can likely rest assured the words are accurately represented. However, since I don’t know French, I cannot go through it and ensure that the nuance I intended in English is still represented in the French translation. That really made a lot of sense to me. This explanation led me to remove writing with AI from the course I teach about AI, because I can’t be certain that my students know the metaphorical French. However, I will explain to my students in that class how I use AI to write, and that I view it as okay because I’m 37 years old and have a lot of education and experience that leads me to feel comfortable saying that I do know the metaphorical French and always take the time and effort to verify that my intended nuance is still present.
Closing Reflections
Generative AI will not destroy education. Nor will it save it. It will, however, require that we HUMANS reshape education, because AI is not going away unless something massive changes across the industry.
Different disciplines and educators will make different decisions about how to integrate or restrict AI in their work, and that’s appropriate. But the common ground we all must occupy is a commitment to clarifying our learning objectives and understanding how they can be practiced and assessed in a world where students have access to AI.
Just as with the introduction of many other technologies, we must differentiate between learning that should occur without AI, learning that becomes more feasible or impactful with AI, and learning that is only truly possible because of AI. At the same time, we must also be honest about the fact that certain assessments—like the historical example of assessing a student’s ability to use the slide rule—may now be obsolete. We need to ask ourselves whether some of our legacy assessments are still the most effective and inspiring ways to reflect and support the skills we want students to develop—because if they aren’t, this moment gives us the opportunity to imagine something better.
As educators, we must ask: will we resist that change, or engage with great hopes for the future? I choose the latter. I do not claim to have all the answers—especially not about environmental impact, copyright law, or the makeup of training datasets. But I do believe in dialogue. I believe deeply in sound pedagogy—as both a discipline and a practice that centers human learning, growth, and empowerment. And I believe that students deserve the best of our curiosity, not just our caution.
Let’s help students build the skills they need not only to use AI, but also to question it. Let’s model what it means to think, to learn, and to create in an age of powerful tools. And let’s do it together.
I thank you for reading my thoughts on AI within education, and I invite you to meet with me to discuss anything you loved, hated, or were confused about.
ChatGPT was used in the development of this document to assist with organization, structure, tone, and clarity—but the ideas, experiences, and arguments are entirely my own.