
Partially-automated individualized assessment: evaluation and implementation

Manchester Maths Education Seminar, University of Manchester (11/12/2023).

Abstract:
Different assessment approaches have different advantages and limitations, and are appropriate for different assessment goals. In a partially-automated assessment, questions are set by an automated question generator, completed by students, and marked as if they were a non-automated piece of coursework. This can preserve validity compared with an open-ended piece of coursework, because students' responses are not limited by computer input or timed exam conditions. At the same time, randomisation can reduce the risk of academic misconduct through copying and collusion.
The method is trialled and evaluated via implementation in a final-year module intended to develop students' graduate skills, including group work and real-world problem-solving. Individual work alongside a group project aimed to assess individual contribution to the learning outcomes. The deeper, open-ended nature of the task suited neither timed examination conditions nor automated marking, but the similarity of the individual and group tasks meant the risk of plagiarism was high. Evaluation took three forms: a second-marker experiment, to test reliability and assess validity; student feedback, to examine student views, particularly about plagiarism and individualised assessment; and a comparison of marks, to investigate plagiarism.
Implementation in various other contexts will be described, a demonstration assessment will be written, and the question of whether partially-automated assessment can be used to assess individual contributions to group projects will be discussed.