Fig 2 - uploaded by Rabia Maqsood
Comparison of feedback seek vs. no-seek per confidence-outcome category

Source publication
Presentation
Full paper presented at the 2018 International Technology Enhanced Assessment Conference (TEA2018), Amsterdam, Netherlands

Citations

... This additional "confidence" measure, combined with the correctness of a student's response (correct or incorrect), yields four confidence-outcome categories; following [30,53], these are: high confidence-correct response (HCCR), low confidence-correct response (LCCR), high confidence-wrong response (HCWR), and low confidence-wrong response (LCWR). These distinct categories capture a discrepancy between students' expected and actual performance, a gap that can be addressed through corrective information offered to students via feedback in a computer-based assessment system [37]. These confidence-outcome categories are defined in terms of the varied knowledge regions introduced by Hunt [30]. ...
... As intuition suggests, previous results in [37] showed that students' feedback-seeking behavior is positively correlated with wrong answers, regardless of the confidence level with which they were given. Therefore, the classification scheme proposed by Maqsood et al. [38] considers feedback-seeking behavior only for incorrect responses to differentiate between students' engagement and disengagement during assessment. ...
... In Table 1, the first two rows contain answers belonging to HCCR and LCCR, which represent students' correct responses given with high and low confidence, respectively. As mentioned earlier, feedback-seeking actions show no correlation with correct responses at either confidence level [37]; therefore, a single engagement behavioral pattern is defined for each of these categories, namely "High Knowledge" (HK) and "Less Knowledge" (LK). ...
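The categorisation described in these excerpts is a simple mapping from (confidence, correctness) to a category label, with feedback seeking differentiating behaviors only for wrong answers. A minimal sketch, assuming illustrative function and label names (these are not the authors' code):

```python
# Hedged sketch of the confidence-outcome categorisation and engagement
# labelling described in the cited excerpts. Names are placeholders.

def confidence_outcome_category(high_confidence: bool, correct: bool) -> str:
    """Map a response's confidence level and correctness to one of the
    four categories: HCCR, LCCR, HCWR, LCWR."""
    confidence = "HC" if high_confidence else "LC"
    outcome = "CR" if correct else "WR"
    return confidence + outcome

def engagement_pattern(high_confidence: bool, correct: bool,
                       sought_feedback: bool) -> str:
    """Correct responses map directly to 'HK'/'LK'; for incorrect
    responses, feedback seeking differentiates engaged from disengaged
    behaviour (the seek/no-seek labels here are placeholders)."""
    if correct:
        return "HK" if high_confidence else "LK"
    # Only wrong answers are differentiated by feedback seeking.
    return "seek-feedback" if sought_feedback else "no-seek"

print(confidence_outcome_category(True, False))  # HCWR
```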
Article
Students' engagement reflects their level of involvement in an ongoing learning process, which can be estimated through their interactions with a computer-based learning or assessment system. A prerequisite for stimulating student engagement is an approximate representation model for comprehending students' varied (dis)engagement behaviors. In this paper, we use model-based clustering for this purpose, generating K mixture Markov models to group students' traces containing their (dis)engagement behavioral patterns. To prevent the Expectation–Maximization (EM) algorithm from getting stuck in a local maximum, we also introduce a K-means-based initialization method named K-EM. We performed experiments on two real datasets using three variants of the EM algorithm (the original EM, emEM, and K-EM) as well as non-mixture baseline models for both datasets. The proposed K-EM shows very promising results and achieves a significant performance difference over the other approaches, particularly on Dataset1. Hence, we suggest performing further experiments on larger datasets to validate our method. Additionally, visualizing the resultant clusters as first-order Markov chains reveals very useful insights into the (dis)engagement behaviors exhibited by the students. We conclude the paper with a discussion of the usefulness of our approach, its limitations, and potential extensions of this work.
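The K-means-based initialization the abstract describes can be sketched as follows: represent each student trace by its empirical transition profile, cluster those profiles with K-means, then derive initial mixture weights and per-component transition matrices for EM. This is a minimal illustration under those assumptions, not the authors' K-EM implementation:

```python
import numpy as np

# Hedged sketch of a K-means-based initialisation for a mixture of
# first-order Markov chains ("K-EM"-style). Details are assumptions.

def transition_features(trace, n_states):
    """Row-normalised transition matrix of one trace, flattened.
    Unvisited states get a uniform row."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(trace[:-1], trace[1:]):
        counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    probs = np.divide(counts, row_sums,
                      out=np.full_like(counts, 1.0 / n_states),
                      where=row_sums > 0)
    return probs.ravel()

def kmeans(X, k, n_iter=50, seed=0):
    """Plain Lloyd's algorithm; returns a cluster label per row of X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def k_em_init(traces, n_states, k):
    """Cluster traces by transition profile, then derive initial mixture
    weights and component transition matrices to seed EM with."""
    X = np.vstack([transition_features(t, n_states) for t in traces])
    labels = kmeans(X, k)
    weights = np.bincount(labels, minlength=k) / len(traces)
    transitions = np.stack([
        X[labels == j].mean(axis=0).reshape(n_states, n_states)
        if np.any(labels == j)
        else np.full((n_states, n_states), 1.0 / n_states)
        for j in range(k)
    ])
    # Renormalise rows so each component is a valid stochastic matrix.
    transitions /= transitions.sum(axis=2, keepdims=True)
    return weights, transitions
```

EM would then start from these weights and transition matrices instead of random values, which is the abstract's stated remedy for EM converging to a poor local maximum.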
Conference Paper
Considering the usefulness of monitoring students' responses to available task-level feedback in confidence-based assessment, in this paper we introduce a novel approach to classify students' problem-solving activities into various engagement and disengagement behaviors and study their occurrences over complete learning sessions. By clustering these sessions, we obtained four distinct groups that vary both in the students' (dis)engagement behaviors and in their quantitative performance scores in confidence-based assessment. Moreover, a qualitative analysis shows that high- and low-performing students (determined by their final course scores) relate differently to the obtained clusters. Based on these findings, we argue that investigating students' engagement by observing traces of problem-solving activities is promising and opens new avenues of research. Our approach is also more generic, as it does not rely on expert-defined time limits, which are usually determined by analyzing data from the students who participated in the experimental study.