How Chatbots Work and Their Ethical Concerns in Assessment Completion
June 3rd, 2024
The emergence of large language models has drastically changed how humans interact with computers and has encouraged every industry to incorporate and push the boundaries of such technologies. While sectors such as e-commerce, healthcare, and travel have benefited from chatbots, others, such as education, are still having difficulty adapting to them. This is due to the ethical concerns chatbot usage raises, such as cheating in assessments. Fortunately, there are solutions that mitigate these effects and show that chatbot incorporation in education can be ethical, especially for assessment completion. This essay will discuss the underlying technologies that bring chatbots to life, primarily artificial intelligence and natural language processing. It will also cover the various ethical considerations surrounding assessment completion in higher education, along with practical solutions.
Artificial intelligence, commonly referred to as AI, is a complex technology that allows computers to simulate human intelligence and is one of the major components of any chatbot (IBM, 2024). AI has been around since the 1950s; however, it only began receiving mass public attention after the release of GPT-3 in mid-2020 (Dale, 2021). Modern conversational chatbots are built on machine learning (ML) approaches rather than older pattern-based approaches (Adamopoulou et al., 2020). In an ML approach, the context of the dialogue is understood, and responses are uniquely generated. For example, if a student asks a chatbot for help regarding an assessment, the bot may reply in several ways depending on how the question was phrased and on the previous chat history. This is achieved by first training the chatbot on extremely large collections of text called datasets. An example of this is OpenAI using publicly available data on the internet to train the models behind ChatGPT (OpenAI, 2024). Once a model is adequately trained and tested, it can start interacting with humans using natural language processing techniques.
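The idea that a reply depends on both the current question and the previous chat history can be illustrated with a minimal sketch. This is purely illustrative: the keyword rules and replies below are invented for demonstration, whereas a real chatbot generates responses with a trained neural network, not hand-written conditions.

```python
# Toy illustration: the same question yields different replies
# depending on accumulated chat history (hypothetical logic,
# not a real language model).

def reply(question: str, history: list[str]) -> str:
    """Pick a response using both the question and prior context."""
    context = " ".join(history).lower()
    if "assessment" in question.lower():
        if "deadline" in context:
            # Prior messages mentioned a deadline, so tailor the reply.
            return "Since you mentioned a deadline, start with the marking criteria."
        return "Could you share the assessment question you need help with?"
    return "Tell me more about what you are working on."

history: list[str] = []
print(reply("Can you help with my assessment?", history))  # generic follow-up
history.append("My deadline is Friday")
print(reply("Can you help with my assessment?", history))  # context-aware reply
```

The same question produces two different answers because the second call can see the earlier message; trained models achieve this effect statistically rather than through explicit rules.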
However, computers cannot interact with humans in a natural manner on their own. Chatbots make use of another branch of AI known as natural language processing (NLP), which helps the chatbot understand the meaning of the human input and produce a human-like response (IBM, 2024). NLP does so by first analyzing the user's input and picking out information that helps it make sense of the language itself, not just the individual words (Adamopoulou et al., 2020). For instance, if a student tells a chatbot that they are happy because they got a high mark in their Computer Science exam, the NLP system will detect positive emotion from the word "happy," and it will also pick up "Computer Science" as the name of the subject. Secondly, chatbots use NLP to respond to the user in a way that is easily understandable and personalized. After analyzing the input, the NLP model selects related data from what it was trained on, forms sentences from it, and then checks for errors (Coursera, 2024). Once this is done, the chatbot provides a response in a natural way. The interaction is then saved in a vector database, where data is stored as numerical vectors (embeddings) rather than raw text. This makes it easier for machine learning models to remember previous input and retrieve data based on context (Microsoft, 2024).
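Vector-based retrieval can be sketched in a few lines. The three-dimensional "embeddings" below are made-up numbers for illustration; real systems use learned embeddings with hundreds of dimensions, but the retrieval principle, finding the stored entry whose vector is most similar to the query's vector, is the same.

```python
# Minimal sketch of vector-store retrieval: past interactions are
# embedded as numerical vectors and fetched by cosine similarity.
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical store: text mapped to invented 3-d embeddings.
store = {
    "I got a high mark in my Computer Science exam": [0.9, 0.1, 0.2],
    "My flight to Suva was delayed": [0.1, 0.8, 0.3],
}

# Pretend embedding of the query "How did my CS exam go?"
query_vec = [0.85, 0.15, 0.25]
best = max(store, key=lambda text: cosine(store[text], query_vec))
print(best)  # retrieves the exam-related memory
```

Because similarity is computed over the embeddings rather than exact words, the exam-related memory is retrieved even though the query shares few words with it, which is what lets a chatbot recall context from earlier conversations.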
This continual learning and generative nature of chatbots offers multiple benefits and use cases, especially in the education domain. Firstly, chatbots provide personalized learning for students (Fuchs, 2023). Take, for example, remote students or students who prefer to work alone: those who do not have immediate access to a lecturer or tutor can turn to chatbots for help in fully understanding an assessment question or for initial research on a topic. In addition, chatbots are available 24/7 and can be accessed from anywhere in the world, providing students with on-demand support and enhancing accessibility. Course content can also be delivered to students through chatbots via an online platform (Okonkwo et al., 2021). A great example of this is the University of the South Pacific's "Sem 0 GPT", an AI assistant that improves students' learning experience by giving them the freedom to study from their own homes. This system helped nearly 3,000 students and attended to almost 700 queries related to course content (Commonwealth of Learning, 2024).
On the contrary, the same generative nature of chatbots poses a serious problem in education, especially when it comes to completing assessments. One of the major ethical concerns is cheating during assessment, which undermines its integrity (Kooli, 2023). Students can input an assessment question, and chatbots are highly likely to generate the correct answer, regardless of the question's domain. In addition, there is a concern relating to the accuracy of responses (Fuchs, 2023). Chatbots may generate incorrect responses if they are trained on poor-quality or biased data. Such training data can lead to scenarios where the chatbot hallucinates, that is, gives senseless responses because it has learned an incorrect pattern (IBM, 2024). Moreover, using chatbots in such a manner will affect the development of students' critical thinking skills. They may become passive learners (Fuchs, 2023) who, instead of evaluating the quality and accuracy of information, blindly accept what is given to them. These, along with other ethical concerns, are the reason the education sector is still having trouble adapting to chatbots.
In spite of this, there are practical solutions that universities can implement to counter these ethical issues. For the issue of cheating, institutions can support students by providing them with resources on how to make proper use of chatbots (Fuchs, 2023). A free, compulsory course for first-year students that covers the use cases and limitations of chatbots would help create a solid foundation for later years. Research shows that students are likely to embrace chatbots as learning tools if they are aware of the potential advantages, trust them, and understand their usefulness (Ayanwale et al., 2024). Implementing such a course will promote the ethical and responsible usage of chatbots. As for the risk to students' critical thinking, universities can introduce assessments that incorporate practical and interactive tasks requiring students to be creative (Kooli, 2023). This encourages assessments to be designed in a way that makes it difficult for students to use chatbots to obtain answers directly. The only way to use chatbots in this scenario would be to understand the assessment better, which, as previously mentioned, can be of great help if a lecturer is not available. Such assessments include case studies, debates, e-portfolios, interviews, reflective writing, and multimedia presentations. Universities such as the University of London (Weale, 2023) and eight leading Australian universities (Cassidy, 2023) have taken this approach in response to chatbots. While this adoption has not yet yielded statistical results, the implementations themselves show that higher education is making an effort to preserve academic integrity.
If universities take such measures and students are able to use chatbots in a way that supports learning rather than replacing their critical thinking, the usage of chatbots to help students complete assessments can be regarded as ethical. The purpose of an assessment is to measure a student's learning and achievement while preparing them for participation in work and community activities (Spiller, 2021). The responsible usage of chatbots in assessment completion will not only preserve this main purpose but also further demonstrate that it is ethical to utilize chatbots for such use cases.
To conclude, the rise of chatbots has presented various ethical concerns in higher education, hindering the rate at which institutions can adapt to them. However, the existence of practical solutions to address those issues shows that universities can create win-win situations for themselves, their staff, and their students. Providing resources on chatbot usage and implementing innovative assessments helps incorporate chatbots into assessments while preserving the purpose of assessment. This, in turn, allows the usage of chatbots in assessment completion to be seen as ethical. Chatbots are a modern and advancing technology, and the earlier universities adapt to them, the better it will be for everyone, as chatbots are here to stay.
Adamopoulou, E., & Moussiades, L. (2020). Chatbots: History, technology, and applications. Machine Learning with Applications.
Ayanwale, M., & Molefi, R. (2024). Exploring intention of undergraduate students to embrace chatbots: from the vantage point of Lesotho.
Cassidy, C. (2023). Australian universities to return to ‘pen and paper’ exams after students caught using AI to write essays. Source
Commonwealth of Learning. (2024). USP enhanced its Semester Zero programme with GPT-powered AI support. Source
Coursera. (2024). What Is Natural Language Generation? Source
Dale, R. (2021). GPT-3: What’s it good for? Natural Language Engineering.
Fuchs, K. (2023). Exploring the opportunities and challenges of NLP models in higher education: is Chat GPT a blessing or a curse?
IBM. (2024). What are AI hallucinations? Source
IBM. (2024). What is a neural network? Source
IBM. (2024). What is artificial intelligence (AI)? Source
IBM. (2024). What is natural language processing (NLP)? Source
Kooli, C. (2023). Chatbots in Education and Research: A Critical Examination of Ethical Implications and Solutions.
Microsoft. (2024). What is a vector database? Source
Okonkwo, C., & Ade-Ibijola, A. (2021). Chatbots applications in education: A systematic review. Computers and Education: Artificial Intelligence.
OpenAI. (2024). How ChatGPT and our language models are developed. Source
Spiller, D. (2021). Assessment: Purpose and Principles. Source
Weale, S. (2023). Lecturers urged to review assessments in UK amid concerns over new AI tool. Source