Agentic and Equitable Educational Development For a Potential Postplagiarism Era

By Maha Bali, American University in Cairo

This is a slightly updated and paraphrased version of an earlier post on Maha Bali’s blog: https://blog.mahabali.me/educational-technology-2/are-we-approaching-a-postplagiarism-era/

Many of us in the field of education have been contemplating the potential impact of the latest Artificial Intelligence technologies on the future of education: how we teach, how our students learn, and what skills they will need in the future work environment. In parallel, those of us in educational development roles are pondering how best to support educators in their learning journey around AI and its impact on education and academic integrity.

Sarah Elaine Eaton, a prominent researcher in the field of academic integrity who has been studying the intersections of Artificial Intelligence and academic integrity, recently blogged about the 6 Tenets of the Postplagiarism Era. These are her updated thoughts on a term she first used in her 2021 book, Plagiarism in Higher Education: Tackling Tough Topics in Academic Integrity.

Already in her 2021 book, written and released before ChatGPT captured the world’s attention, Eaton was challenging what plagiarism means. Plagiarism is about more than avoiding copy/paste (which is what a tool like Turnitin makes it seem like to students); what we should be focusing on instead is a more positive approach to teaching learners how to reference respectfully, in order to acknowledge how others have inspired our thinking and influenced our work, right? In her book, Eaton challenges the nitty-gritty approach that treats counting copied words and phrases without quotation marks as the central concern of plagiarism. Although many treat plagiarism as if it were a kind of “theft” of others’ ideas or words, Eaton shows that many have resisted this framing, and that plagiarism is a moral issue, a political issue, and a teaching and learning issue. Knowing that we live in uncertain times and an ever-changing global environment, “it is important to expand our ideas about what constitutes plagiarism so that we can ultimately help our students focus on learning in ethical ways in changing contexts” (Eaton, 2021, p. 22). In what follows, I relay each of Eaton’s six tenets and respond with my perspective on them.

1. Hybrid Human-AI Writing Will Become Normal

Eaton blogs that a combination of AI and human writing will become so prevalent and normalized that “trying to determine where the human ends and where the artificial intelligence begins is pointless and futile”. I feel like it may be too early to tell, and I also don’t know if this is something we would want to simply accept, or if we might wish to resist it in certain contexts, even if it tends towards such a norm.

To begin with, though, we need to acknowledge that every thought and idea we have and express is influenced by interaction with others, whether they be human or non-human others. Even before we were prompting an AI text generator to write things for us, we were using search engines to help us find things and often being influenced by that output without necessarily taking it verbatim, and honestly, often without acknowledging our sources. We can sometimes absorb an idea and store it in our memory, then bring it back up later without remembering where we first got it from or who influenced us.

In a technology-related example, people tend to be surprised when I cite Twitter conversations in my academic papers (such as this one). I think that kind of practice is about the spirit of attribution and respect for others’ work, even when the “source” is not usually considered “academic” – because I feel that otherwise there would be exploitation, right? When people post their thoughts on Twitter, do they expect that anything they say can and will be used by others, without their permission and without attribution? Have any of us really read Twitter’s terms and conditions?

I am concerned about the use of AI text generators because they will almost always produce an original combination of words that appears not to be “lifted” or “stolen” verbatim from somewhere. However, those same text generators got those ideas from the loads of data they learned from, and they are not attributing where these ideas came from, nor citing them in any way. At the moment, ChatGPT rarely, if ever, produces the original references even if you ask it to. Other tools like Perplexity.ai and you.com/chat will provide references, though. Perplexity’s seem to be the most accurate, whereas you.com/chat sometimes does this well, but not consistently. It is difficult for me to understand why ChatGPT, the more advanced of these in terms of the quality of its writing and its ability to respond to complex and nuanced prompts, cannot or does not provide a little transparency or explainability as to where it is getting some of its ideas from. Transparent machine learning is difficult to achieve, but Perplexity is doing it, so it is possible, right? Providing the references would be useful both for verification purposes and as a form of respect for the sources that influence us. So even as we encourage students to be transparent with us about how they have used AI in their writing, we need to also urge the AI platforms to be transparent with users about where they are getting their content for any given prompt.

I also feel that our relationship with AI will become entangled and complex, such that it will be difficult to determine where our agency begins and where the machine’s influence on us is stronger. But I sense that it is important for students’ metacognition that they remain aware of how they have integrated AI into their thinking. This is why I suggest an approach of “transparency”: not just students disclosing where they got some ideas or text from, in the sense of attribution, but also reflecting on where they used AI in their process, why it was helpful, or how its output needed tweaking. Perhaps in a future where this is common practice, the self-reflection may become less important, or at least as learners get older and enter the job market, they won’t need to do it. But in the learning phases, at least, I feel it is important to keep this self-reflection in order to help learners develop a literacy of when, where, how, and why to use AI.

So my position here is: Hybrid Human-AI learning may become more common, and it is up to us to determine how much we want to reflect on the impact of that on our agency.

2. Human Creativity is Enhanced

Eaton suggests in her blogpost that “Human creativity is enhanced, not threatened by artificial intelligence. Humans can be inspired and inspire others. Humans may even be inspired by artificial intelligence, but our ability to imagine, inspire, and create remains boundless and inexhaustible.” This sentiment reminds me of Ross et al.’s The Manifesto for Teaching Online, where they welcome their robot friends.

However, I think it is worth considering the ways in which AI might limit our creativity as well as enhance it. I don’t think AI threatens creativity, though that is an impulse many have. I wonder what good analogies for this might be, and I guess visual art is a good one here.

As a young person, I used to draw really well on paper. I’m not as good at computer graphics work, but tools like Canva can help me, and could do so even before Canva was using AI. Being able to give an AI tool like Midjourney or DALL-E some text and get an image of something I imagine, instead of having to create it myself, is extremely useful and much quicker than it would have ever taken me on my own. I don’t think the result is better than what a graphic designer could get out of such a tool, or possibly even produce without it, but if I’m doing something small, like creating a header image for a blogpost, why not, right? And graphic designers themselves can save time, I guess, by using some amount of AI to get them started and then refining the output. I have heard from my students that such tools exist.

I also know there are musicians who merge their own creativity with music-generating AI, or who use it in some way to help them along in the process.

It is worth recognizing that education systems themselves have stifled human imagination and creativity, and that there are many human and social threats to human flourishing, regardless of AI, so AI is not the only or main threat to this.

What concerns me is the way overreliance on AI might lead to some laziness about trying to do something yourself first, and how the first output from an AI, e.g. a suggested outline for a paper, might limit you or stop you from taking longer to think something through and come up with things the AI would not have produced because they weren’t in its training set, you know? I’m also concerned about how culturally biased the training sets must be, and how that might skew the outputs toward certain dominant cultural norms. This has historically been the case with all AI, with hazardous consequences.


Having said this, the entire knowledge ecosystem, and education as well (textbooks, teaching, scholarship, research, etc.), is also culturally skewed. So in that sense, AI is just a mirror of all of that, only faster to access and easier to use. Could we potentially create AI that end users can retrain to learn from diverse or local cultures?

This brings me to my next point: I think we need to consider the value of taking our time with things in order to think deeply about them. One of the things I don’t like about online “recommendation engines” is that I lose the value of getting lost in the stacks and stumbling upon things that interest me unexpectedly, or of taking the time to develop judgment over what I might want to read and find.

But again – I would NOT replace my library’s online catalog or Google Scholar with the card catalog system that required me to run all over the library to find a book, and that took me hours (OK, minutes, once I got better at it) just to decide which books or articles I wanted. Would tools like typeset.io, which summarize research articles for us and answer our questions about them, start limiting our capacity to read deeply and critically, or will they help us read more things faster in order to do more complex things with them? I think we should consider both of those dimensions and intentionally choose how we approach this.

So my position here is that, yes, human imagination is boundless, and we need to continually reflect on the ways AI may skew the direction of our imagination, and be intentional about how we incorporate it into our own works of creative expression.

3. Language Barriers Disappear

I find myself vehemently disagreeing with this one, though I understand the wish for it to be true.

I love what automatic translation can do to break down barriers and enable people to communicate with culturally different others. That Zoom can now do the auto-transcribe+translate magic trick is just phenomenal.

I love imagining the intercultural dialogue that might become possible with multilinguistic conversations – because right now, not being able to speak multiple languages means some people are consistently excluded from communicating with people in other cultures, right?

However, and we know this happens often, we need to remember all the horrible misunderstandings that can arise from linguistic and cultural nuance. Translating a text and interpreting meaning are two very different things. You need a deep understanding of culture and context in order to “get it right,” and AI has not been known to be phenomenal at either of those things.

Now, to be fair, even human translation is complex and not politically neutral; it exerts violence through the layer of interpretation brought by the translator and their positionality. The danger here is that machine translation may be treated as neutral, when we don’t even know exactly how it is doing its work. We cannot interrogate it and ask how and why it gave a particular interpretation.

Remember that Zoom magic I mentioned earlier? Someone recently told me that because Zoom now allows anyone to select their language of interpretation, some international learners had been relying on the translated captions, and they once reported a teacher for having said something offensive, which turned out to be a mistake in the translation. I can see this happening frequently.

The key caution here is that we still need to remember what was translated by the machine, and that the interpretation may be inaccurate, incorrectly nuanced, or politically non-neutral. We also need to recognize the situations where translation must be done responsibly and where machine translation will not work. Please don’t use it for the Palestinian-Israeli conflict, for example, where human translation has already caused so much strife.

I am also concerned that overreliance on translation allows us to continue producing content in predominantly Western/European languages and expect the rest of the world to translate (I mean, that’s already the case) rather than nurturing other languages – and I am concerned about what gets lost about how thinking is influenced by language and culture when we focus on translating words not thoughts and cultures and values. Does that make sense?

4. Humans Can Relinquish Control, but Not Responsibility

Sarah writes in the blogpost that “Although humans can relinquish control [to AI], they do not relinquish responsibility for what is written.” She is talking about the producers of AI tools as well as the end users being responsible for how they use the output, verify it, etc.

Is using a tool equivalent to “relinquishing control”? Is searching via Google or using a calculator a form of “relinquishing control”? Is using a car instead of walking “relinquishing control”? I guess we trust the car to work as we direct it to, and then we control what we can in order to make it do what we want it to do. With a calculator, we assume it will calculate correctly based on correct input from us. But neither of these is like an AI text generator, because these tools are mostly predictable: like simple physics or math, they should produce predictable outputs, and the same thing should happen each time you give them the same input. It’s easy(ish) to figure out when something is going wrong. If you use the steering wheel to go left and the car goes right, you’ll know right away something is wrong.

Text-generating tools, on the other hand, are not just automating processes; they are automating the generation of content and ideas. They are capable of producing new and original content each time, and of course we steer them with the prompts we write, don’t we? I don’t know, though, that that is relinquishing control any more than we relinquish control by Google-searching something. Of course, we know that Google search is not neutral, that there is an algorithm behind it, but this does not always impact how we search. I think if we’re constantly aware of this, it’s not relinquishing control, but I also think there are unconscious ways this works on us, such that we’re not aware of what kinds of control we’re relinquishing. I’m still thinking about this one. I feel like entangled pedagogy might be a helpful lens for looking at this as dynamic, relational, contextual, and complex.

5. Attribution Remains Important

I agree with Sarah’s blogpost that “Humans learn in community with one another, even when they are learning alone.” I wonder how this fits with #1 above, and whether or how she imagines AI being cited when she suggests that hybrid writing won’t differentiate where the human ends and the AI-generated text begins. I think the blurring of lines does not necessarily mean the lines don’t exist; making whatever nuance we can explicit, at least in the short term, is helpful.

I wonder if a more positive framing of this moves away from plagiarism and postplagiarism to something more like neo-attribution?

6. Historical Definitions of Plagiarism No Longer Apply

This statement I mostly agree with: “Historical definitions of plagiarism will not be rewritten because of artificial intelligence; they will be transcended. Policy definitions can – and must – adapt.”

I actually think that very specific language is probably needed in policies in order to be comprehensible to learners, but the more specific the language, the more important it will be to constantly update it. For example, our local policy tends to emphasize not taking text or ideas from another human being without attribution, but this doesn’t cover AI, at least in the letter of the policy. Obviously what it really meant to say was “any work that is not your own words or ideas,” but it had not factored QuillBot and ChatGPT into that equation. It is also negative language, about what not to do, rather than positive language encouraging attribution. I particularly love Juan David Gutiérrez’s very clear guidelines for students, how he differentiates low-risk AI from high-risk AI, and then how he addresses high-risk AI in terms of informed, transparent, ethical, and responsible use. Notice the positive language, rather than punitive language.

I do think (and Sarah expands on this in her book) that the teaching of academic integrity needs to spend much more time on the WHY of citation and referencing, the ethics of acknowledging how others have influenced us, and how that is both respectful to them and a sign of the credibility of our own work, rather than on the mechanics of citation. Tools like Turnitin.com overemphasize the mechanics of one type of plagiarism and push the real reasons why citation matters, and the ways of referencing ethically, into the backseat.

I do want to add that, on top of what Sarah mentions above, I feel a “critical AI literacy” will be needed: one that ensures learners are aware of the privacy implications of using AI tools and of their limitations, even as they improve, and perhaps even one that explicitly acknowledges the unethical practices and human sacrifices made in the process of creating them.

It is also worth bringing up what Seymour Papert was trying to achieve with his work – to remind us that when using technology in education, we need to make sure that the child controls the computer, rather than what most educational technologies do, which is all about the machine controlling the child. With AI, it becomes important to critically reflect on what agentic, socially just, intentional, transparent and responsible AI development and use might look like, and then think about how we might refer to our uses of it in a neo-attribution era.

7. Agentic and Equitable Educational Development For a Postplagiarism Era

According to Czerniewicz (2021), educational development departments can take on roles that are focused more on responding to educators’ needs, on visioning future directions in pedagogy, or on relaying and activating institutional policies. In the area of AI, institutional responses have varied; in my own institution, the direction has been to create space for educators to set their own policies and guidelines for students, while also opening up opportunities to learn in community about AI and centering a value of moving away from a “catch and punish” mentality toward students. The approach my center took combines all three of the roles Czerniewicz mentions, in response to the uncertainty of how AI might impact education.

In order to promote ownership, agency and equity in our faculty development approach (as recommended by Bali & Caines, 2018), we (the Center for Learning and Teaching at the American University in Cairo) offered multiple pathways:

  1. Before communicating with the entire community, we met with small groups of faculty to gauge their needs, interests, and early perspectives on AI. This helped set the tone for the first few learning opportunities.
  2. We sent emails with useful resources on how to talk to students about AI and what to place on the syllabus, whether people wanted to be more permissive or more restrictive.
  3. We offered multiple approaches to learning about AI, from dialogic community conversations, to hands-on workshops using AI tools, to assignment redesign, to discussing the impact on academic integrity. In all workshops, there were breakout room options for beginners who had no experience with AI text generators and for those with more experience who wanted a more advanced conversation. A short video demo was also sent ahead of workshops so everyone could arrive with at least a basic understanding.
  4. We offered local opportunities, but many also joined global opportunities to learn about AI, whether through Equity Unbound (such as this and this) or other international communities (such as this).
  5. The majority of local PD was conducted online to be more equitable given educators’ busy lives, but anyone was welcome to come in for one-on-one consultations in person.
  6. We continually kept in touch with educators and collected resources and perspectives from them in order to share with others, including this first article with a faculty member’s perspective on AI. Another document was started to crowdsource a larger group of faculty members’ perspectives and practices for introducing AI in class, and this will be published shortly.
  7. ChatGPT is not officially available in Egypt, but there are workarounds. To promote equity, given that not all faculty had the digital fluency to figure this out on their own, faculty were given instructions on how to do the workarounds and were also offered alternative text generator tools to experiment with.
  8. Educators with more experience were continually invited to help facilitate discussions and offer resources to others during workshops.
  9. Providing multiple opportunities over a roughly one-month period helped create space for more “epochal” Transformative Learning in community over time.

It is worth noting that many respected scholars believe the hype around ChatGPT is overblown (Chomsky), that the privacy implications are of concern (Caines; Gilliard and Rorabaugh), and that the ethical violations by OpenAI in the process of creating ChatGPT should give us pause (Caines; Gilliard and Rorabaugh; Lanclos and Phipps). Developing a critical AI literacy will become more and more important before we can fully engage with the idea of a postplagiarism era. In all of the articles linked earlier, the concern is not with plagiarism, but with the more serious harms AI might cause in the process of its creation, and with how it might harm how we think and behave in the process of using it. This should give us pause, especially if we head into a postplagiarism era. The more entangled we become with the AI we use, the more we need to ask critical questions before, during, and after.

What is your stance on AI and the possibility of a postplagiarism era? What do you think the role of educational developers should be?

Acknowledgements

The author would like to thank the editor for the invitation to contribute and the two peer reviewers for their suggestions which helped enrich this piece and expand it beyond its initial first draft. Thank you also to Sarah Elaine Eaton for encouraging critical conversation around her initial ideas.
