Authors
Sébastien Combéfis¹˒² and Guillaume de Moffarts², ¹ECAM Brussels Engineering School, Belgium and ²CSITEd ASBL, Belgium
Abstract
Automatic assessment of code, in particular to support education, is an important feature included, at least to some extent, in several Learning Management Systems (LMS). Several kinds of assessments can be designed, such as exercises asking learners to “fill in the following code”, “write a function that…”, or “correct the bug in the following program”, for example. One difficulty for instructors is creating such programming exercises, particularly when they are somewhat complex. Indeed, instructors need to write the statement of the exercise, work out the solution, and provide all the additional information the platform needs to grade the assessment. Another difficulty occurs when instructors want to reuse their exercises on another LMS platform. Since there is no standard way to define and describe a coding exercise yet, instructors have to re-encode their exercises for the other LMS. This paper presents a tool that can automatically generate programming exercises, from a single description, that can be solved in several programming languages. The generated exercises can be automatically graded by the same platform, which provides intelligent feedback to its users to support their learning. This paper focuses on and details unit testing-based exercises and provides insights into new kinds of exercises that the platform could generate in the future, with some additional development.
Keywords
Code Grader, Programming Assessment, Code Exercise Generation, Computer Science Education