I ran across a Twitter discussion that sort of made me feel called out. Mike Edwards, someone I remember and respect from my early days of academic blogging, says:
> I’ve seen plenty of academics proposing LLM policy here on Twitter. I have yet to see a single testable hypothesis about LLMs & teaching writing based on methodical classroom observation. I’m wondering about the impulse to do policy before any rigorous data-gathering whatsoever.
I absolutely started with policy, and Mike is not the first person to question that choice. Yet, I would still choose to start there because the policies are a place for me to start getting my head around what I expect of the students. From there, I will have to spend a lot more time working on what I think the students need from me. That means I will spend the next few weeks working on developing assignments, lessons, lectures, teaching strategies, or whatever it takes. Once the fall semester kicks off, there will be a lot of trial and error, and I will need to be prepared to regroup whenever it all falls apart, which it will no matter how prepared I am…because that’s what happens when we adopt new assignments in response to new technologies. We figure it out as we go.
For now, I think it is useful to come up with a list of my own current assumptions about what the students will need from me:
- Students will need to be taught how to recognize AI-generated writing when they see it. This means more than one thing. First, they need to understand that if they use Grammarly or some other online tool to tweak their essays, they are indeed using AI. That may seem obvious to some, but it isn’t always obvious to first-year students. Next, they need to increase their awareness of the ways in which the flow of information around them is being produced by and/or affected by AI.
- Students will need to be introduced to ethical issues related to the use of AI-generated writing. For me that will take the shape of requiring citations and helping them see the connections between using sources and using technological tools. It means telling them that because neither one is their own voice, made up of their own thinking process and their own unique way of expressing ideas, both need to be used ethically and in moderation.
- Students need to be taught to question and investigate any information that comes to them through AI and to see this process as part of their own information literacy.
- Students will need, perhaps more than ever, to focus on developing a strong sense of their own academic voice. They will need to know why it matters that they have a voice that goes beyond simple compliance with the rules of academic writing style.
- Students will need to be in classrooms that are realistic about the fact that AI is here in a way that means their future careers will be changed as a result. As tempting as it is to simply ban any and all use of AI-generated writing, it is much more important to help students start thinking critically about the line between their own production of content and that of the machines they employ along the way.
- Students will need to feel that they have a stake in the process of how AI writing tools are approached, that their concerns are heard and addressed, that they can proceed confidently in their own learning process without being fearful of crossing lines that don’t make much sense to them.
- Students will need instructors who are trying to balance the academic integrity of making sure students do their own work with the academic integrity of making sure students have all of the tools they need to succeed.
These are just a few thoughts I’ve had today. Ask me next week, and I might have other thoughts. I have a long way to go to really feel prepared for the next school year, and I know I won’t be fully prepared when it starts. That’s just where academia is right now. We’re all trying to make sure we don’t drown ourselves or our students in the next few months or years while we adapt.
Meanwhile, I would like to give a shout-out to Drew Loewe. The document he shared in the Twitter thread mentioned previously looks very useful.
Back to the question of whether starting with policy is good pedagogy, I see it like this. If AI factors into the way in which I grade assignments at all, I need policies so that the students are clear on what that means. I also need to teach my students what they need to know in order to meet the expectations of my policies. I can’t justify a grade if I haven’t done both. I teach in a community college where I’m required to give a certain number of grades. The need to have a plan for addressing AI hit me before the spring semester was over, in a way I was completely unprepared for. The fall semester will be more of the same if I don’t start putting something in the way of policies and procedures in writing.
Policies made sense to me as a place to start, but without a whole lot of other things happening alongside them, they are pretty meaningless in terms of pedagogy. They are just a few more lines in a syllabus no one reads. I don’t expect them to do any heavy lifting beyond being a place to point to if students who do not follow through with doing the work of the class question their grades.
How, when, where, and why I might penalize for something in a grade is a necessary thing for me to know and for me to communicate to students. It’s not where I think my focus should be, though. Coming up with ways to answer all of my own questions about what my students will need from me before the next semester starts is where my focus is right now, and that’s looking pretty daunting at the moment.
I’ve spent more than 30 years telling students that writing is a recursive process; I think it is time to remind myself that teaching is as well. I fully expect to work really hard to create things for the fall that I will completely redo for the spring, and that is sort of what’s making life interesting right now.