Thinking Through ChatGPT and AI in the Creative Institution
Like many of you, I have some questions about Artificial Intelligence (AI) chatbots, and what’s generally being called “ChatGPT” (even though that’s a bit like calling tissues “Kleenex”). There are already many AI text-generation applications available, and I’ve only sampled a small selection: ChatGPT itself, and an early-access version of Microsoft Bing’s chatbot. When using these chatbots, I find it’s really easy to focus on the ways they make mistakes and provide inaccurate information. Yesterday, for example, I asked ChatGPT for a menu plan, followed by a grocery list and recipes, but somewhere along the way it forgot what was on its own menu; the menu, grocery list, and recipes were all for slightly different meals.
This kind of mistake is similar to the ones that come up in discussions of how to “catch” students using chatbots for assignments. But I have to keep steering myself away from cataloguing where chatbots fail: they are going to get better, and focusing on what they can’t do will only prevent us from looking at what they can and will do. We are an institution that prides itself on experimentation and innovation. I think we are well situated to be a small but important part of the evolution of AI, both in education and in the creative process. What follows are my current questions, in no particular order, about chatbots and ECUAD.
Why are we focused on what it can’t (yet) do rather than what it can (and will) do?
What skills will students need to effectively use AI as a learning and research tool?
How can we teach students to read AI-generated text effectively?
What are our responsibilities to students to prepare them for a career, and a life, that will include AI?
How can we teach students the skills to recognize when AI is correct, incorrect, or reflecting bias and other forms of inequity?
Are there ways that chatbots and AI might contribute to equity, making certain forms of language and communication more accessible? Or will AI reinforce hegemonic modes of communication? Or both?
If ECUAD students come here because they have something to say (and I truly believe that to be the case), in what situations are they likely to use ChatGPT? What about those situations doesn’t interest them enough to use their own voice?
How will I use this in my own research? To organize data? In my own writing?
How soon will I be able to put large enough data sets into a chatbot to do meaningful archival digging and sorting of statistics?
What does ChatGPT look like for using other languages? Can I feed it my primary sources in one language and have it summarize them, or organize data sets, in another? (I tried this, and yes, but with limitations.)
Have you tried it for your research?
Have you tried it on your own assignments?
Does using a chatbot for an assignment necessarily mean the student hasn’t done the intellectual and critical thinking, or the creative process, that you wanted them to do?
Is it possible that AI can help students use their voice more clearly? That it can help them do tasks that are secondary to their field in order to concentrate on creative or intellectual engagement?
What do the students think? How are they interacting with chatbots and AI?
Are we more bothered by the fact that AI seems to have the potential to “think” for students, or “write” for students?
What can we learn from the imaginings of AI in sci-fi and popular culture? (I’m particularly stuck on why Geordi was always manually tweaking the Enterprise shields when the computer surely could have made those calculations more quickly and accurately.)
What literacies and skills do our students at ECUAD need to master? How will these change in light of AI in the coming years?
Are our most recent graduates the ones who will be most disadvantaged, entering a world with AI before it has been integrated into their education?
What changes will we need to make, personally and institutionally, to realize the potential of AI in teaching?
What’s at stake if we don’t adapt to AI? For ourselves, for ECUAD as an institution, and for our students?
How quickly will our current anxiety over AI be forgotten? (We’ve all pretty much embraced an Orwellian world of constant screen time and surveillance, even if reluctantly.) In what ways will we learn to live with AI in education, creative process, and the kitchen?