Prediction of Students' Self-confidence Using Multimodal Features in an Experiential Nurse Training Environment

Caleb Vatral1, Madison Lee2, Clayton Cohn1, Eduardo Davalos1, Daniel Levin2, and Gautam Biswas1

1 Institute for Software Integrated Systems, Vanderbilt University, Nashville, TN, USA
caleb.m.vatral@vanderbilt.edu
2 Peabody College, Vanderbilt University, Nashville, TN, USA
Abstract. Simulation-based experiential learning environments used in nurse training programs offer numerous advantages, including the opportunity for students to increase their self-confidence through deliberate repeated practice in a safe and controlled environment. However, measuring and monitoring students' self-confidence is challenging due to its subjective nature. In this work, we show that students' self-confidence can be predicted using multimodal data collected from the training environment. By extracting features from student eye gaze and speech patterns and combining them as inputs into a single regression model, we show that students' self-rated confidence can be predicted with high accuracy. Such predictive models may be utilized as part of a larger assessment framework designed to give instructors additional tools to support and improve student learning and patient outcomes.

Keywords: Experiential Learning · Simulation-based Training · Multimodal Learning Analytics (MMLA) · Self Confidence · Machine Learning
1 Introduction
In recent years, experiential learning has gained popularity as an effective approach to training for specialized skills, especially in nursing and healthcare. Experiential learning emphasizes hands-on experiences and reflection [3]. In nursing education, experiential learning has seen application through simulation-based training programs. These nursing simulations use high-fidelity manikins to expose students to realistic patient scenarios in a safe and repeatable environment.
Simulation-based experiential learning environments have many advantages. For example, they provide students opportunities to increase their confidence through deliberate repeated practice in a safe environment [4]. Self-confidence is a critical component of an effective nursing curriculum: it influences students' engagement, motivation, and overall performance, directly impacting patient outcomes [5]. However, measuring and monitoring self-confidence is challenging because it has multiple interpretations; it can be measured as a personality trait or as a metacognitive process [1].

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
N. Wang et al. (Eds.): AIED 2023, CCIS 1831, pp. 266–271, 2023.
https://doi.org/10.1007/978-3-031-36336-8_41
In this paper, we propose a novel approach to predicting students' metacognitive self-confidence in an experiential nurse training environment by combining information from student eye gaze and speech patterns. We develop predictive models of students' self-rated confidence in their simulations, which can contribute to the development of new methods for assessing and enhancing metacognitive self-confidence. This has implications for developing data-driven performance monitoring systems that could be used by students and instructors to improve learning outcomes and better characterize student readiness.
2 Background
Previous work has shown that careful consideration must be made when measuring students' self-confidence to ensure that the correct construct is being measured. Burns et al. [1] showed that self-confidence can be broken down into a spectrum between an online metacognitive judgment and a personality trait, based on how it is measured. Metacognitive self-confidence is linked to cognitive and metacognitive processes and is typically measured online as a post-task question, e.g., "How confident are you that your answers/actions are correct?" or "How confident are you that you were successful in completing your assigned task?" Personality-trait self-confidence, on the other hand, is linked to personal experience and emotional tendencies and tends to be less related to specific task performance [1]. In our study, we measure the metacognitive aspects of self-confidence by having students rate their confidence as part of an individual performance rating after they review and reflect on a video of their training exercise (see Sect. 3.2). Because of the task-specific nature of this question, the measurement can be interpreted as students' metacognitive self-confidence. Therefore, when building our predictive models, we used students' self-reported confidence as the ground truth for their metacognitive self-confidence (see Sects. 3.3 and 4).
3 Methods
3.1 Experiential Nursing Simulation
Student nurses trained in a simulated hospital room containing standard medical equipment and a manikin patient simulator. Students entered the room and performed routine evaluations of the manikin patient, and then performed relevant prescribed treatments based on their evaluation. For more details on the simulation environment, see [7]. All students provided their informed consent to collect video and audio data as they performed their training activities, and some students volunteered to wear Tobii 3 eye-tracking glasses. In this paper, we analyze the data from the 14 students who used eye-tracking glasses.
3.2 Individual Guided Reflection Debriefing
After participating in their instructional simulations, students were given the opportunity to engage in guided reflection designed to promote metacognitive reflection on their performance. Initially, we showed the students their own egocentric eye-tracking footage from the simulation in which they participated. After this, the students re-watched this footage while identifying meaningful event units by pressing a key when they detected a transition from one event to another [10]. Students then reviewed the marked events repeatedly and answered six reflection questions based on each event. One of these questions evaluated teamwork, asking students to rate "To what degree were you working individually versus as a team during this event segment?" on a Likert scale from 1 to 5; this rating is used later in this paper for feature selection. After answering the questions for each event segment, to conclude the reflection, the students were asked to reflect on the entire simulation experience. They were given a 10-point scale and asked, "Please rate YOURSELF on the following measures:" engagement, confidence, patient safety, positive patient outcomes, and scenario objective completion. This paper's main focus is predicting the "Confidence" item in this overall assessment.
3.3 Machine Learning Modeling
We analyzed students’ captured eye gaze and speech behavior as an indicator
of their overall confidence in the simulation. Using the multimodal eye gaze and
speech data collected from the students as features and students’ responses to
the guided self-reflection as a ground truth for their confidence, we trained a
regression model to predict students’ self-rated confidence.
We initially developed 27 features derived from the eye gaze and speech data.
For each of the students’ event segments, we computed these 27 features from
the observed data. These initial features were selected in a somewhat post-hoc
fashion, partially based on previous work with similar nursing student data [7],
and partially based on the features which were easily available from the sensor
systems. Because of this post-hoc strategy, not all of these features may be
relevant to the prediction of students’ self-confidence, so further refinement of
the feature set through feature selection processes was necessary.
We performed feature selection by building a mixed-effects linear model to measure the fixed effects of the features on self-confidence while controlling for participants. However, in the guided reflection, students only rated their metacognitive confidence for the overall simulation, not for each event segment. So, we utilized a proxy target variable instead. Utilizing the relationship between teamwork and self-confidence [7], we built the mixed-effects model with students' self-rated teamwork in each segment as the target variable and measured the fixed effects between each of the features and students' self-rated teamwork.
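As a rough illustration of this feature-selection step, participant-level offsets can be removed by within-student centering before estimating a feature's slope on teamwork. The paper fit a proper mixed-effects model; this stdlib sketch only approximates a random intercept per student, and the data and helper names below are invented for illustration:

```python
from statistics import mean

def center_within_groups(rows):
    """Subtract each student's own means from their feature and
    teamwork values, removing participant-level offsets (a rough
    stand-in for the random intercept of a mixed-effects model)."""
    by_student = {}
    for student, x, y in rows:
        by_student.setdefault(student, []).append((x, y))
    centered = []
    for pairs in by_student.values():
        mx = mean(x for x, _ in pairs)
        my = mean(y for _, y in pairs)
        centered += [(x - mx, y - my) for x, y in pairs]
    return centered

def slope(centered):
    """OLS slope of centered teamwork on the centered feature."""
    num = sum(x * y for x, y in centered)
    den = sum(x * x for x, _ in centered)
    return num / den

# (student, per-event feature value, self-rated teamwork) -- made up
rows = [("s1", 4.0, 2), ("s1", 5.0, 3), ("s2", 3.0, 4), ("s2", 6.0, 5)]
b = slope(center_within_groups(rows))
```

In the study, a feature would be retained when its fixed effect was statistically significant; the sketch omits the significance test.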
Twelve features, shown in Table 1, showed statistically significant effects on teamwork in our feature selection model (p < 0.05). Seven features were produced automatically by the Tobii 3 glasses. One additional eye gaze feature, PersonGaze, was computed by the researchers by measuring the overlap between the Tobii gaze coordinates and any person-class bounding box produced by the YoloV5L object detection model. The other four features, computed using a trained deep-learning model on sections of the students' speech audio, measured emotional valence, arousal, and dominance of student speech [7,8].

Table 1. The 12 sequence features extracted from eye gaze and speech data used in the final regression model

Feature              Description
PersonGaze           Percentage of time spent looking at another person
AvgSacHz             Average number of saccades per second
MinSacAmp            Minimum amplitude over all saccades
AvgSacAmp            Average amplitude over all saccades
AvgSacPeakVel        Average peak velocity over all saccades
StdSacPeakVel        Standard deviation of peak velocity over all saccades
AvgFixHz             Average number of fixations per second
AvgFixPupilDiameter  Average pupil diameter during fixations
MinValence           Minimum emotional speech valence
MaxArousal           Maximum emotional speech arousal
AvgArousal           Average emotional speech arousal
MaxDominance         Maximum emotional speech dominance
Having selected these 12 features, we then returned to the task of predicting metacognitive self-confidence. However, these 12 features are computed for each event, and different students segmented events in different ways. Since our goal was to predict self-confidence over the entire simulation, we formulated the regression as a sequence-to-one problem. While several techniques can be used to perform sequence-to-one regression, due to the small sample size of this study we chose to extract basic statistics of the feature sequences to use as the final input features of the regression. For each student's sequence of the 12 features previously identified, we extracted the minimum, maximum, mean, and standard deviation as features to describe the sequence. These four statistical features were calculated for each of the 12 sequence features, leading to an overall 48-dimensional input feature vector for the final regression.
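This sequence-to-one reduction can be sketched as follows. The feature names match Table 1, but the per-event values below are invented for illustration:

```python
from statistics import mean, pstdev

def summarize_sequence(values):
    """Collapse a variable-length per-event feature sequence into
    four fixed statistics: min, max, mean, and standard deviation."""
    return [min(values), max(values), mean(values), pstdev(values)]

def build_input_vector(event_features):
    """event_features maps each sequence feature to its per-event
    values; with the study's 12 features, the flat output vector is
    48-dimensional (12 features x 4 statistics)."""
    vec = []
    for name in sorted(event_features):  # fixed feature ordering
        vec.extend(summarize_sequence(event_features[name]))
    return vec

# two example features with three events each (values are made up)
events = {"AvgSacAmp": [4.2, 5.1, 3.8], "AvgFixHz": [2.0, 2.4, 1.9]}
vec = build_input_vector(events)
assert len(vec) == 4 * len(events)
```

Because each student contributes one fixed-length vector regardless of how many events they marked, the differing segmentations no longer matter to the regressor.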
4 Results
For the regression of students' self-confidence scores, because of the small sample size and class imbalance, we used Gradient Boosted Regression Trees with leave-one-out cross-validation. For evaluation, we examined the average root mean squared error (RMSE) and the coefficient of determination (R²) compared to the students' self-reflections. The model achieved 0.53 ± 0.17 RMSE and R² = 0.81. Considering the range of prediction and other limitations, this performance represents a fairly high level of accuracy, which could be informative in a variety of ways.
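The leave-one-out evaluation protocol can be sketched dependency-free with a stand-in mean predictor in place of Gradient Boosted Regression Trees (all data below are illustrative, not the study's):

```python
from math import sqrt
from statistics import mean

def loocv_rmse_r2(X, y, fit, predict):
    """Leave-one-out CV: hold out each student once, train on the
    rest, and score all held-out predictions together."""
    preds = []
    for i in range(len(y)):
        model = fit(X[:i] + X[i+1:], y[:i] + y[i+1:])
        preds.append(predict(model, X[i]))
    ss_res = sum((p - t) ** 2 for p, t in zip(preds, y))
    ss_tot = sum((t - mean(y)) ** 2 for t in y)
    rmse = sqrt(ss_res / len(y))
    return rmse, 1 - ss_res / ss_tot

# stand-in predictor: always predict the training mean (the paper
# used gradient boosted trees; this only demonstrates the protocol)
fit = lambda X, y: mean(y)
predict = lambda model, x: model
y = [7.0, 8.0, 6.0, 9.0, 7.5]
rmse, r2 = loocv_rmse_r2([[0]] * 5, y, fit, predict)
```

Note that R² computed against held-out predictions can be negative for a weak predictor, which is expected behavior for this protocol rather than a bug.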
To explore the model further, we performed a local explainable AI feature contribution analysis using the Decision Contribution method [2]. We found 5 unique feature ranking patterns that covered all 14 students. Most notably, all 5 rankings had the same top-ranked feature: the minimum of AvgSacAmp, which accounted for significantly more of the decision than any of the other features, scoring an absolute sum of decision contributions of 11.99. This was much greater than even the second-highest-ranked feature, which scored 0.65. However, re-running the regression with only the minimum of AvgSacAmp feature yielded 1.07 ± 0.16 RMSE and R² = 0.58, suggesting that while they contributed less individually, the other features still contributed significantly to the overall model performance.
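The ranking step can be illustrated by summing absolute per-prediction contributions for each feature. This is only a sketch of the aggregation, not the Decision Contribution method itself, and the contribution values below are invented (chosen so the top feature's sum matches the 11.99 reported above):

```python
def rank_by_contribution(contribs):
    """contribs: one dict per prediction, mapping feature name to
    its signed decision contribution; features are ranked by the
    sum of absolute contributions across all predictions."""
    totals = {}
    for per_prediction in contribs:
        for feat, c in per_prediction.items():
            totals[feat] = totals.get(feat, 0.0) + abs(c)
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# hypothetical per-student contributions (two students shown)
contribs = [{"Min_AvgSacAmp": 5.0, "Max_AvgFixHz": 0.3},
            {"Min_AvgSacAmp": -6.99, "Max_AvgFixHz": 0.35}]
ranking = rank_by_contribution(contribs)
top_feature, top_score = ranking[0]
```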
5 Discussion
The analysis presented here was fairly exploratory in nature, given the small
sample size and initial post-hoc feature selection methodology. However, the
preliminary results suggest several important implications and should be used to
drive future research on multimodal prediction of metacognitive self-confidence.
5.1 Saccade Behavior
Saccade behavior appears to be very important in the predictive model's ability to determine students' self-confidence, suggesting that saccade behavior, and its associated cognitive processes, are related to metacognitive self-confidence in some way; 4 out of the 5 top-ranked features were derived from saccade behavior. Extending this, we find a moderate positive Spearman rank correlation between minimum average saccade amplitude and self-confidence (0.40 ≤ ρ ≤ 0.92, n = 14, with Fisher z-score transformation). In other words, larger average saccade amplitudes are linked to higher self-confidence. Prior work has shown relationships between higher-amplitude saccades and goal-directed ideation behavior [9]. Since these simulations tasked students with identifying an unknown problem and coming up with a solution, it is very likely that more confident students spent more time in goal-directed ideation to come up with problem solutions as compared to their peers. However, further work should focus on identifying this relationship more concretely.
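A stdlib sketch of this correlation analysis, assuming the reported interval is a Fisher z-transformed confidence bound around the sample ρ (the input data here are illustrative, not the study's, and tie handling is omitted):

```python
from math import sqrt, atanh, tanh

def spearman_rho(x, y):
    """Spearman rank correlation via the rank-difference formula
    (assumes no ties, which keeps the sketch simple)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank + 1
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

def fisher_ci(rho, n, z=1.96):
    """Approximate 95% CI for a correlation via Fisher's z."""
    zr = atanh(rho)
    se = 1 / sqrt(n - 3)
    return tanh(zr - z * se), tanh(zr + z * se)

# toy amplitude/confidence pairs, not the study's measurements
rho = spearman_rho([1, 2, 3, 4, 5], [1, 3, 2, 5, 4])
lo, hi = fisher_ci(rho, n=5)
```

The wide interval even for a strong sample ρ shows why, at n = 14, the paper can only bound the correlation rather than pin it down.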
5.2 Implications for Instructors
The model presented here also represents a data-driven, objective method for instructors to examine and evaluate students' metacognitive self-confidence. With further development, this kind of evaluation could allow instructors to provide more in-depth debriefing and targeted interventions to improve self-confidence, especially for students who have low confidence. Extending this idea, the work is a small step toward a more holistic objective assessment of performance. By aiding instructors' evaluations with data-driven assessments, bias and errors in subjective judgment can be reduced, and the burden of assessment on instructors can be lessened. While self-confidence is only one measure that such data-driven assessments would generate, this work helps to illustrate the longer-term goal and demonstrate that such assessments can be made with multimodal data.
6 Conclusions
In this paper, we showed how multimodal data can be leveraged to model students' self-rated metacognitive confidence scores that are connected to their ability to make metacognitive judgments of their performance. Some limitations of the current study include the small sample size for training the model, as well as the lack of demographic data. In order to show the generality of the methods, future work should repeat this modeling with more students, including students from different populations. Since this model combines self-report with objective measurement, such larger populations would present an excellent opportunity to study diversity and inclusion issues in nursing education. Additionally, future work should apply predictive modeling to other performance concepts, which would allow for a more holistic automated assessment of nurse performance.
References

1. Burns, K.M., Burns, N.R., Ward, L.: Confidence - more a personality or ability trait? It depends on how it is measured: a comparison of young and older adults. Front. Psychol. 7, 518 (2016)
2. Delgado-Panadero, A., Hernández-Lorca, B., García-Ordás, M.T., Benítez-Andrades, J.A.: Implementing local-explainability in gradient boosting trees: feature contribution. Inform. Sci. 589, 199–212 (2022)
3. Durlach, P.J., Lesgold, A.M.: Adaptive Technologies for Training and Education. Cambridge University Press (2012)
4. Labrague, L.J., McEnroe-Petitte, D.M., Bowling, A.M., Nwafor, C.E., Tsaras, K.: High-fidelity simulation and nursing students' anxiety and self-confidence: a systematic review. In: Nursing Forum, vol. 54, pp. 358–368. Wiley (2019)
5. Lundberg, K.M.: Promoting self-confidence in clinical nursing students. Nurse Educ. 33(2), 86–89 (2008)
6. Vatral, C., Biswas, G., Cohn, C., Davalos, E., Mohammed, N.: Using the DiCoT framework for integrated multimodal analysis in mixed-reality training environments. Front. Artif. Intell. 5, 941825 (2022)
7. Vatral, C., et al.: A tale of two nurses: studying groupwork in nurse training by analyzing taskwork roles, social interactions, and self-efficacy. In: 2023 International Conference on Computer Supported Collaborative Learning (2023, in press)
8. Wagner, J., et al.: Dawn of the transformer era in speech emotion recognition: closing the valence gap (2022). https://doi.org/10.48550/ARXIV.2203.07378
9. Walcher, S., Körner, C., Benedek, M.: Looking for ideas: eye behavior during goal-directed internally focused cognition. Conscious. Cogn. 53, 165–175 (2017)
10. Zacks, J.M., Swallow, K.M.: Event segmentation. Curr. Dir. Psychol. Sci. 16(2), 80–84 (2007)