Development of Natural Language Processing-Based Descriptive Answer Evaluation Platform (Gradescriptive)

Keywords: Natural Language Processing (NLP), Descriptive Answer, Computer-Based Test, Large Language Models, Embeddings.

August 30, 2024

The manual evaluation of descriptive answers comes with inherent problems: the stressful nature of the task, the subjectivity of the grading process, and the delayed delivery of results. This research developed a computer-based test platform that uses Natural Language Processing (NLP) as a transformative solution for evaluating descriptive-answer examinations. The motivation for this project is the slow turnaround times, potential bias, and limited scalability of the manual method of evaluating descriptive answers. Leveraging a state-of-the-art large language model, the MERN (MongoDB, Express.js, React.js and Node.js) stack, and Cascading Style Sheets (CSS), a system was developed that analyzes student responses against criteria such as textual semantic similarity, keyword matching, and answer length. The project delivers timely and accurate feedback, alleviating students' anxieties and uncertainties about their performance, and it showed that descriptive questions can assess critical thinking, problem-solving, and creativity in ways that objective tests cannot. Meanwhile, lecturers are relieved of the immense stress associated with traditional manual grading, fostering a more positive and productive learning environment.
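The three grading criteria named above can be combined into a single weighted score. The sketch below is a minimal illustration, not the paper's implementation: it substitutes term-frequency cosine similarity for the large language model's embedding similarity, and the keyword list, length penalty, and weights (0.5/0.3/0.2) are all illustrative assumptions.

```python
# Hypothetical sketch of the grading criteria described in the abstract:
# semantic similarity, keyword matching, and answer length.
# NOTE: the real platform uses an LLM's embeddings for similarity; plain
# term-frequency vectors stand in for them here so the example is runnable.
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

def cosine_similarity(a_tokens, b_tokens):
    """Cosine similarity between two term-frequency vectors (stand-in for embeddings)."""
    va, vb = Counter(a_tokens), Counter(b_tokens)
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def keyword_score(answer_tokens, keywords):
    """Fraction of the required keywords that appear in the answer."""
    if not keywords:
        return 0.0
    present = sum(1 for k in keywords if k in answer_tokens)
    return present / len(keywords)

def length_score(answer_tokens, model_tokens):
    """Penalize answers much shorter than the model answer; never reward padding."""
    if not model_tokens:
        return 0.0
    return min(1.0, len(answer_tokens) / len(model_tokens))

def grade(student_answer, model_answer, keywords, weights=(0.5, 0.3, 0.2)):
    """Weighted combination of the three criteria; weights are illustrative."""
    s, m = tokenize(student_answer), tokenize(model_answer)
    w_sim, w_kw, w_len = weights
    return (w_sim * cosine_similarity(s, m)
            + w_kw * keyword_score(set(s), keywords)
            + w_len * length_score(s, m))
```

For example, `grade("Photosynthesis converts light energy into chemical energy in plants.", model_answer, ["photosynthesis", "light", "energy"])` returns a value between 0 and 1 that a platform could scale to the question's mark allocation.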