Transformers Based Automated Short Answer Grading with Contrastive Learning for Indonesian Language

Mukti, Aldo Arya Saka and Alfarozi, Syukron Abu Ishaq and Kusumawardani, Sri Suning (2023) Transformers Based Automated Short Answer Grading with Contrastive Learning for Indonesian Language. In: 2023 15th International Conference on Information Technology and Electrical Engineering (ICITEE), 26-27 October 2023, Chiang Mai, Thailand.


Abstract

The rapid development of technology has impacted various sectors, including education. These developments have enabled e-Learning to thrive, especially during the Covid-19 pandemic. Evaluating student performance and understanding in e-Learning is typically done through quizzes. However, these evaluations, especially essay grading, still require manual effort, which can lead to exhaustion and introduce bias and inconsistency into the scoring process. To address this issue, one possible solution is to develop an automated short-answer grading system. This research explores a large language model that has a general understanding of language, which is then subjected to a finetuning process. Specifically, this study employs the BERT model with a contrastive learning method to develop an automated short-answer scoring system and compares its performance with similar systems. The model is composed of two components: the model body, which utilizes a BERT variant, and the model head, which employs logistic regression. The model body is structured in a siamese architecture. The results demonstrate an improvement in the performance of the BERT model with contrastive learning. Compared to the pretrained BERT and BERT with cosine-similarity finetuning, the reduction in prediction MAE is 21.72% and 9.90%, respectively, while for the RMSE metric it is 17.79% and 13.80%. The transformers-based model with contrastive learning achieves 0.191 MAE and 0.231 RMSE. These findings indicate the potential of using the contrastive learning method in transformers models to develop an automated short-answer scoring system.
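The abstract describes a siamese model body whose paired embeddings are trained with a contrastive objective. The exact loss used in the paper is not stated here; a minimal sketch of one common formulation (the margin-based pairwise contrastive loss), with toy vectors standing in for the siamese BERT outputs of a student answer and a reference answer, might look like:

```python
import numpy as np

def contrastive_loss(emb_a, emb_b, labels, margin=2.0):
    """Margin-based pairwise contrastive loss (one common formulation).

    Similar pairs (label 1) are pulled together by penalizing their
    distance; dissimilar pairs (label 0) are pushed apart until they
    are at least `margin` away from each other.
    """
    d = np.linalg.norm(emb_a - emb_b, axis=1)  # Euclidean distance per pair
    loss = labels * d**2 + (1 - labels) * np.maximum(0.0, margin - d) ** 2
    return loss.mean()

# Toy 2-d embeddings standing in for siamese BERT sentence vectors.
answers = np.array([[1.0, 0.0], [0.0, 1.0]])
references = np.array([[1.0, 0.0], [1.0, 0.0]])  # first pair identical, second not
labels = np.array([1.0, 0.0])  # 1 = matching answer pair, 0 = non-matching

loss = contrastive_loss(answers, references, labels)
```

In this sketch the identical similar pair contributes zero loss, while the dissimilar pair is penalized because its distance is still inside the margin; the `margin` value and 2-d embeddings are illustrative assumptions, not values from the paper.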

Item Type: Conference or Workshop Item (Paper)
Additional Information: Library Dosen
Uncontrolled Keywords: contrastive learning, transformers, automated short answer grading, e-Learning, large language model
Subjects: T Technology > T Technology (General)
Divisions: Faculty of Engineering > Electronics Engineering Department
Depositing User: Rita Yulianti Yulianti
Date Deposited: 26 Jul 2024 07:55
Last Modified: 26 Jul 2024 07:55
URI: https://ir.lib.ugm.ac.id/id/eprint/130
