On November 30th, 2022, San Francisco-based startup OpenAI, which has a close relationship with Microsoft, launched ChatGPT, an online AI app.
It’s part of a new generation of AI systems that can converse, generate readable text on demand and even produce novel images and video based on what they’ve learned from a vast database of digital books, online writings and other media. The app can create believable answers for millions of questions that a user might enter.
It has led to some hand-wringing among educators around the world, and prompted alarming articles about how AI apps like this spell the end of traditional assessment. Many schools are looking to ban AI apps like ChatGPT, because written assessments can easily be plagiarised by students. Other institutions are calling for a return to pen and paper.
So, what is the solution for educators? Should we fight new technology with more new technology? Should we abandon technology and go back to classrooms and pens?
We believe the answer is neither.
It can be tempting to think that the answer to this new technology is more technology.
A student has already built an app that reportedly detects text written by ChatGPT with 98% accuracy.
Anti-plagiarism software provider Turnitin says that, for now, ChatGPT's answers should be easily identifiable both by teachers and by Turnitin's software. ChatGPT makes lots of factual errors, and its language model tends to generate linear sentences and pick broad, obvious words, instead of the occasionally narrower vocabulary that a student would select. This creates signals that could be detectable by Turnitin and other anti-plagiarism tools.
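To illustrate the idea of such signals, here is a minimal, hypothetical sketch in Python. It is emphatically not how Turnitin or any commercial detector works; it simply computes two crude stylometric measures that relate to the signals described above: uniformity of sentence length (AI prose tends to be more "linear") and lexical variety (AI prose tends toward broad, common words). The function name and thresholds are our own invention for illustration only.

```python
import re
import statistics


def stylometric_signals(text: str) -> dict:
    """Compute two toy signals sometimes associated with AI-generated prose.

    - Low standard deviation of sentence length suggests uniform,
      "linear" sentences.
    - A low type-token ratio (unique words / total words) suggests a
      broad, repetitive vocabulary.
    """
    # Split into rough sentences and words.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-zA-Z']+", text.lower())

    sentence_lengths = [len(re.findall(r"[a-zA-Z']+", s)) for s in sentences]

    # Variation in sentence length ("burstiness" of the prose).
    length_stdev = (
        statistics.pstdev(sentence_lengths) if len(sentence_lengths) > 1 else 0.0
    )

    # Vocabulary variety: unique words as a fraction of all words.
    type_token_ratio = len(set(words)) / len(words) if words else 0.0

    return {
        "sentence_length_stdev": round(length_stdev, 2),
        "type_token_ratio": round(type_token_ratio, 2),
    }


sample = "The cat sat. The dog ran far away today. Yes!"
print(stylometric_signals(sample))
```

Real detectors use far richer models than this, but the sketch shows why purely statistical signals are fragile: a student who varies sentence length and word choice (or a future AI model that does) would evade them, which is exactly the arms-race problem discussed below.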
Other VET LMS software providers have spruiked their video response technology, which gives students a limited time-frame to answer questions by video, and anti-plagiarism software.
However, it's only a matter of time before someone figures out a way to beat the anti-cheating technology, triggering a software arms race as ever more tools are introduced to combat fraud. Is this really the answer?
Other educators have advocated abandoning elearning for assessment and returning to paper and pens. This kind of assessment is also unrealistic, because in 2023 assessments often require students to demonstrate competence in using technology.
Here at Edubytes we believe the solution lies somewhere in between. Use technology to deliver a better learning experience, and give the trainer/assessor more time to focus on what only a human does best: inspiring their students, and assessing what is going on in their heads.
Good instructional design will help beat AI. Annie Chechitelli, Turnitin's chief product officer, says that educators should focus on putting learning material in the context of current events, because AI algorithms are fed on old data and cannot evaluate what is currently happening in front of them.
In practical terms this means that instructional designers should concentrate on creating assessments that must be done by students in the workplace, here and now, and not just rely on regurgitated information being spat out to answer questions.
Here at Edubytes, that is exactly what we specialise in; creating realistic assessments and scenarios that students can do in their workplace, or simulated work environment.
It will still be a long time before humans are inspired by software, or before AI has the capability to think critically, let alone to evaluate whether humans are thinking critically.
We also believe that even though an RTO is using state-of-the-art software and instructional design, it does not minimise the role of the trainer. Technology is a tool; it does not and should not replace the role of the trainer in engaging with the student, inspiring students to achieve their best, and analysing whether they really understand why and what they are doing. To the best of our knowledge no student has been inspired by elearning software; they are inspired by the content presented and created by other human beings. Many students credit their trainer as being the best part of their learning experience, and motivation for them to achieve the best in their profession. At the end of the day, people connect best with other people, not with software.
Something you will never hear a student say: “That software really inspired me to achieve my best!”
If students are tempted to use AI apps, the trainer and assessor's ability to apply critical thinking, ask clarifying questions and refrain from making assumptions will be important in:
- Helping them better understand the information and responses provided
- Evaluating the accuracy and relevance of the information provided
- Clarifying any ambiguities or inconsistencies in the responses
- Preventing them from jumping to conclusions or interpreting the information in a biased or inaccurate way
- Making more informed assessment decisions and avoiding potential misunderstandings or errors.
The importance of self-reflection as an assessment tool
Another tool that Edubytes assessments use is self-reflection.
Edubytes assessments include verbal questions which must be asked by the trainer, answered verbally by the student, and interpreted by the trainer/assessor. These questions are open-ended; they invite the student to share their experience in performing the task or solving the problem.
Is more technology the answer?
Some instructional designers are already experimenting with using AI to create learning content. This does look like a positive way for AI to be used to create educational tools. However, this too is still limited, and the risk is that incorrect content will be generated, because AI does not have the latest information. For example, if you asked AI to create assessments for building professionals, AI would not be aware that the National Construction Code has had a major update and that Australia will be using a new NCC for building design from 2023 onwards. Any content generated by AI on the current NCC will quickly be out-of-date.
Additionally, our experience has shown that computer marked assessments are only useful for teaching underpinning knowledge. They do not effectively assess whether a student understands why they are doing something, or how the knowledge relates to their role as an employee or apprentice. Only quality summative assessment that is trainer marked can really help the trainer identify whether the student has retained the information and applied it in their role as an employee.
In fact, there has lately been a push-back against technology by many people.
Silicon Valley’s brightest minds are preventing their own kids from looking at screens, and in the wake of Facebook’s massive (and creepy) data breaches, millions of people have told Mark Zuckerberg and Sheryl Sandberg where they can stick it. Meanwhile, vinyl is hot again, typewriters are cool, and writer Catherine Price made a splash with her recent book How to Break Up With Your Phone.
In his article published in the European Journal of Education, Research, Development and Policy, entitled The future of AI and education: Some cautionary notes, Neil Selwyn makes the following excellent points:
“One dominant set of values that continues to shape debate around AI derives from what might be termed technicist perspectives—i.e., values that tend to underpin the work of software developers, AI researchers and others aligned with computer science. As Birhane et al. (2021) argue, within these professional technicist circles, AI projects tend to be driven primarily by a narrow set of values and concerns over improving technical performance, efficiency and/or generalisability of systems—often shaped by researchers’ previous work and understandings, or else the perceived novelty of the application. The emphasis here is very much on what computer scientist Bettina Berendt (2019) describes as a problem-solving mindset and ambitions to push boundaries of what is technically possible (rather than what is socially desirable).
“Here, we are beginning to see a burgeoning push-back against the prevailing AI logics of capture, control and prediction, and what is perceived to be a general shift toward the mechanisation of human capability (e.g., Felix, 2020). Picking up on decades-old debates over the nature of teaching as an art and craft (rather than a science), such arguments highlight the deeply relational nature of teaching and learning, and the fact that “even human motivations and basic reasoning capabilities fundamentally arise out of social interactions rather than as individual decision-making capabilities” (Siddarth et al., 2021, n.p). As Chris Gilliard puts it: “Good teaching is a combination of art and skill and experience. I’m of the firm belief that no amount of data capture is going to be able to reproduce that. [However] this is apparently a fringe belief in educational technology circles” (Gilliard, 2021, p. 267).”
Here at Edubytes, we are educators first, and technology is a tool, not our product. We believe in empowering trainers to do what only they do best: inspire, motivate, and help students learn. Good-quality instructional design and assessment beats AI, every time.