Abstract
Automatic code graders, also called Programming Online Judges (OJs), can support students and instructors in introductory programming courses (CS1). Using OJs in CS1, instructors select problems to compose assignment lists, whereas students submit their code solutions and receive instantaneous feedback. Whilst this process reduces the instructors’ workload in evaluating students’ code, selecting problems to compose assignments remains arduous. Recently, recommender systems have been proposed in the literature to support OJ users. Nonetheless, there is a lack of recommenders fitted for CS1 courses, and the ones found in the literature have not been assessed by the target users in a real educational scenario. Notably, hybrid human/AI systems are claimed to potentially surpass either humans or AI acting alone. In this study, we adapted and evaluated a state-of-the-art hybrid human/AI recommender to support CS1 instructors in selecting problems to compose variations of CS1 assignments. We compared data-driven measures (e.g., time students take to solve problems, number of logical lines of code, and hit rate) extracted from student logs whilst solving programming problems from assignments created by instructors alone versus assignments created in collaboration with an adaptation of a cutting-edge hybrid human/AI method. Analysing the experimental and control conditions with multi-level regressions, we observed that the problems provided by the recommender were comparable with human-selected problems on all data-driven measures tested.
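The abstract reports comparing the experimental and control conditions with multi-level regressions over data-driven measures such as time-to-solve. The snippet below is only an illustrative sketch of such a model in Python with statsmodels; the file name, column names, and model form are assumptions for illustration, not the paper’s actual analysis code.

```python
# A minimal sketch (not the authors' pipeline) of a multi-level regression
# comparing conditions. Columns are hypothetical: each row is one student/problem
# log entry with the student identifier, the condition
# (0 = instructor-selected assignment, 1 = recommender-assisted),
# and an outcome such as the time taken to solve the problem.
import pandas as pd
import statsmodels.formula.api as smf

logs = pd.read_csv("submission_logs.csv")  # hypothetical log export

# Random intercepts per student capture the nested (multi-level) structure of
# repeated measures; the fixed effect of `condition` estimates the difference
# between recommender-assisted and instructor-only assignments.
model = smf.mixedlm("time_to_solve ~ condition", data=logs, groups=logs["student_id"])
result = model.fit()
print(result.summary())
```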
Notes
1. https://icpc.global/.
2.
3.
4. During the pandemic, the course was suspended for a while and, after one year, it was reoffered remotely instead of face to face. For this reason, we do not use data from the pandemic period, since it was collected under different educational conditions.
5. Notice that the first nearest neighbour of a given TP is itself, which is why we start i from 2 (see the sketch below).
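Note 5 describes a detail of nearest-neighbour retrieval: when a problem that is already in the index is used as the query, its closest neighbour is itself, so useful neighbours start from the second position. The sketch below illustrates this behaviour with scikit-learn; the feature matrix and variable names are hypothetical and do not reflect the paper’s implementation.

```python
# Minimal illustration of note 5: when querying neighbours of a point that is
# already in the index, position 0 is the point itself, so recommendations
# start from position 1 (i.e., "i from 2" in 1-based terms).
import numpy as np
from sklearn.neighbors import NearestNeighbors

X = np.random.rand(50, 8)   # hypothetical problem feature vectors
k = 5                        # number of problems to recommend

nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
distances, indices = nn.kneighbors(X[[0]])  # query an indexed problem

assert indices[0, 0] == 0    # the closest neighbour is the problem itself
recommended = indices[0, 1:] # skip it and keep the next k neighbours
print(recommended)
```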
Acknowledgements
This research, carried out within the scope of the Samsung-UFAM Project for Education and Research (SUPER), according to Article 39 of Decree nº 10.521/2020, was funded by Samsung Electronics of Amazonia Ltda., under the terms of Federal Law nº 8.387/1991 through agreement 001/2020, signed with UFAM and FAEPI, Brazil. This study was financed in part by Conselho Nacional de Desenvolvimento Científico e Tecnológico - Brasil - CNPq (Process 308513/2020-7) and Fundação de Amparo à Pesquisa do Estado do Amazonas - FAPEAM (Process 01.02.016301.02770/2021-63). This study was also financed in part by Acuity Insights under the Alo Grant program.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Pereira, F.D. et al. (2023). Evaluation of a Hybrid AI-Human Recommender for CS1 Instructors in a Real Educational Scenario. In: Viberg, O., Jivet, I., Muñoz-Merino, P., Perifanou, M., Papathoma, T. (eds) Responsive and Sustainable Educational Futures. EC-TEL 2023. Lecture Notes in Computer Science, vol 14200. Springer, Cham. https://doi.org/10.1007/978-3-031-42682-7_21
DOI: https://doi.org/10.1007/978-3-031-42682-7_21
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-42681-0
Online ISBN: 978-3-031-42682-7
eBook Packages: Computer Science, Computer Science (R0)