Development of Natural Language Processing-Based Descriptive Answer Evaluation Platform (Gradescriptive)
Manual evaluation of descriptive answers is inherently problematic: the task is stressful, the grading process is subjective, and results are delivered slowly. This research involved the development of a computer-based test platform that uses Natural Language Processing (NLP) as a transformative solution for evaluating descriptive answer examinations. The project was motivated by the slow turnaround times, potential bias, and limited scalability of the manual method of evaluating descriptive answers. Leveraging a state-of-the-art large language model, the MERN (MongoDB, Express.js, React.js and Node.js) stack, and Cascading Style Sheets (CSS), a system was developed that analyzes student responses against criteria such as textual semantic similarity, keyword matching, and answer length. The platform delivers timely and accurate feedback, alleviating students' anxieties and uncertainties about their performance, and the project showed that descriptive questions, unlike objective tests, can evaluate students' critical thinking, problem-solving, and creativity. Meanwhile, lecturers are relieved of the immense stress associated with traditional manual grading, fostering a more positive and productive learning environment.
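To illustrate how the three stated criteria could combine into a single mark, the sketch below scores an answer in plain Node.js. It is a simplified illustration only: the actual system uses a large language model for semantic similarity, whereas here a token-overlap (Jaccard) score stands in so the example stays self-contained, and all weights, thresholds, and function names are hypothetical.

```javascript
// Tokenize into lowercase word tokens.
function tokenize(text) {
  return text.toLowerCase().match(/[a-z]+/g) || [];
}

// Stand-in for semantic similarity: Jaccard overlap of token sets.
// (The paper's system uses an LLM for this step.)
function similarityScore(studentAnswer, modelAnswer) {
  const a = new Set(tokenize(studentAnswer));
  const b = new Set(tokenize(modelAnswer));
  const inter = [...a].filter((t) => b.has(t)).length;
  const union = new Set([...a, ...b]).size;
  return union === 0 ? 0 : inter / union;
}

// Fraction of required keywords present in the student's answer.
function keywordScore(studentAnswer, keywords) {
  const tokens = new Set(tokenize(studentAnswer));
  const hits = keywords.filter((k) => tokens.has(k.toLowerCase())).length;
  return keywords.length === 0 ? 0 : hits / keywords.length;
}

// Length criterion: ratio of answer length to an expected word count, capped at 1.
function lengthScore(studentAnswer, expectedWords) {
  return Math.min(tokenize(studentAnswer).length / expectedWords, 1);
}

// Weighted combination of the three criteria (weights are hypothetical).
function gradeAnswer(studentAnswer, modelAnswer, keywords, expectedWords) {
  const score =
    0.5 * similarityScore(studentAnswer, modelAnswer) +
    0.3 * keywordScore(studentAnswer, keywords) +
    0.2 * lengthScore(studentAnswer, expectedWords);
  return Math.round(score * 100); // percentage mark
}
```

In practice the weights would be tuned against lecturer-assigned grades, and the Jaccard stand-in would be replaced by the LLM's semantic similarity judgment.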