Manuscript received March 3, 2024; revised May 30, 2024; accepted August 2, 2024; published October 15, 2024
Abstract—The steady growth in the number of candidates sitting examinations and competitive exams has prompted the responsible bodies and institutions to reconsider their assessment approaches, an evolution driven largely by the growing prevalence of Artificial Intelligence (AI) and continuous technological advances. The primary aim of this research is to evaluate the effectiveness of Multiple-Choice Tests (MCTs) enhanced by AI through comprehensive item analysis followed by score regeneration focused on the most discriminating items, thereby strengthening assessment accuracy. MCTs have become the predominant format, offering a practical and efficient solution for rigorously assessing large numbers of candidates. The success of this method hinges on the quality of its items; ensuring the validity of such exams therefore relies heavily on statistical analysis to select relevant items that are balanced in difficulty and discriminatory power. Given the challenges of performing this analysis during test development, a new approach using score regeneration through an AI tool is proposed. It is based on a posterior statistical analysis of candidate performance, with scores adjusted by eliminating the least discriminating items. The research sample was purposively selected and consists of computer science trainee teachers spread over the last three academic years. To validate the approach, a comparative study was conducted using Student's t-test and the Spearman correlation coefficient on the grades obtained each year in algorithmics and programming training modules. The results show that incorporating this score regeneration phase considerably improves the credibility of MCT-based assessments, providing a solid foundation for educational decision-making, and they affirm the research objective by demonstrating that AI-enhanced MCTs offer a reliable and valid method for large-scale candidate assessment.
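The score-regeneration idea described in the abstract can be sketched in a few lines of code. This is a minimal illustration, not the authors' exact procedure: it assumes item discrimination is measured with a corrected point-biserial correlation (each item against the total of the remaining items) and uses an illustrative cut-off of 0.20 below which items are dropped before scores are recomputed.

```python
# Hedged sketch of post-item-analysis score regeneration.
# Assumptions (not from the paper): discrimination = corrected
# point-biserial index; items below a 0.20 threshold are removed.

def point_biserial(item_scores, rest_scores):
    """Correlation between a 0/1 item and scores on the remaining items."""
    n = len(item_scores)
    mean_i = sum(item_scores) / n
    mean_r = sum(rest_scores) / n
    cov = sum((i - mean_i) * (r - mean_r)
              for i, r in zip(item_scores, rest_scores)) / n
    var_i = sum((i - mean_i) ** 2 for i in item_scores) / n
    var_r = sum((r - mean_r) ** 2 for r in rest_scores) / n
    if var_i == 0 or var_r == 0:
        return 0.0  # constant item or constant rest-scores: no discrimination
    return cov / (var_i ** 0.5 * var_r ** 0.5)

def regenerate_scores(responses, threshold=0.20):
    """responses: one list of 0/1 item results per candidate.
    Returns (indices of retained items, regenerated scores)."""
    n_items = len(responses[0])
    totals = [sum(c) for c in responses]
    kept = []
    for j in range(n_items):
        item = [c[j] for c in responses]
        rest = [t - i for t, i in zip(totals, item)]  # corrected total
        if point_biserial(item, rest) >= threshold:
            kept.append(j)
    scores = [sum(c[j] for j in kept) for c in responses]
    return kept, scores

# Toy run: item 2 is answered correctly by everyone, so it cannot
# discriminate and is removed before scores are regenerated.
responses = [[1, 1, 1], [1, 0, 1], [0, 0, 1], [0, 0, 1]]
kept, scores = regenerate_scores(responses)
print(kept, scores)  # → [0, 1] [2, 1, 0, 0]
```

The adjusted scores would then be compared with reference grades (here, the algorithmics and programming module grades) using Student's t-test and the Spearman coefficient, as the abstract describes.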
Keywords—assessments, item analysis, Artificial Intelligence (AI), competitive exams, Multiple-Choice Tests (MCTs)
Cite: Najoua Hrich, Mohamed Azekri, Charafeddin Elhaddouchi, and Mohamed Khaldi, "Enhancing Educational Assessments: Score Regeneration through Post-Item Analysis with Artificial Intelligence," International Journal of Information and Education Technology, vol. 14, no. 10, pp. 1414-1420, 2024.