I think there is enormous potential in ChatGPT for English Language Learners, special education students with writing accommodations, disabled students who use assistive speech-to-text technology, and others.
If they can use the tool to get a first draft done, they can spend their time editing and updating the default text. This seems like it would benefit most learners. Since the tool already exists, it's not really worth fighting; we have to find ways to use it appropriately. We will want students editing for voice, consistency, verifiable sources, personal experience and culture, etc.
We will still need students to make sure the text contains sourced, credible information. This could allow for more detailed conversations around media literacy, since they don't know where the AI is pulling information from. It is up to them to provide the documentation, style, and format.
What is it we want our learners to be able to do? If something can be done by a program, do we need to reassess the value of that skill?
In the NROC Developmental English materials there are assessments that provide students with text, paragraphs, and information; it is up to them to put it all together into something engaging and coherent. That has been a common activity in English courses for many years, and this AI tool will allow for more of it. We can then have learners focus on how to insert their own opinions and culture into the responses. We want them to take that next step and make the writing sound like them.
Teaching students how to use these platforms, rather than relying on them for a finished product, seems more reasonable than trying to block them from being used.
I can see assignments where:
- teachers post an AI-generated version and have students correct it;
- students are directed to use the tool and then critique the response they are given, essentially grading the AI (if they can do that, they have the skills necessary to create their own);
- students provide the sources for the material presented in the AI output.
Where I have seen AI fail is when it is asked to take a position on something: it will often take a softer, both-sides approach. That is often how I spot students who use it in their submissions. The AI will also sometimes refer to itself as a machine in the response, and if students aren't proofreading for that, we know they aren't reviewing what they submit.
I would argue that this is another situation where we can make the case for using OER over vendor materials, and for a focus on culturally relevant materials. We can edit, update, and make our own versions of that material, which allows us to ask questions in a way that can't easily be answered by an AI. If we are continually asking learners to connect with the material and express themselves in their responses, AI can't do that from their point of view. Sure, it might give them a head start, but what's wrong with that?
We have previously discussed desire paths and the problems inherent in trying to change a behavior people are going to engage in anyway. This seems like one of those cases where we are trying to hold back a wave. We waste a lot of time and energy trying to address behaviors that the world will soon come to accept. Finding ways to redirect the tool and use it for our own purposes seems more realistic than trying to ban it.
Image credit: mikemacmarketing, via www.vpnsrus.com, CC BY 2.0