Effects of Computer-Based Training in Computer Hardware Servicing on Students' Academic Performance

Abstract

This study determined the effects of computer-based training in computer hardware servicing with a pedagogical agent named “DAC: The Builder” on the academic performance of computing students. Fifty-six university students (30 students in the control group, 26 students in the experimental group) participated in a two-week experiment. The majority of the experimental group exhibited gaming behavior but subsequently reduced it after DAC intervention. The data collected in this study showed that the null hypothesis stating that there is no significant difference in the pretest and posttest scores of the experimental group can be rejected. Moreover, the hands-on posttest scores of both groups had significant differences. This study demonstrates that returning students to the lecture when they exhibit gaming the system behavior is an effective tool for discouraging this behavior. The use of the DAC is therefore recommended for students taking up computer hardware servicing. Implications and recommendations were also discussed.
DOI: 10.4018/IJTESSS.317410

Volume 12 • Issue 1
Copyright © 2022, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.
*Corresponding Author


Rex Perez Bringula, University of the East, Philippines*
https://orcid.org/0000-0002-1789-9601
John Vincent T. Canseco, University of the East, Philippines
Patricia Louise J. Durolfo, University of the East, Philippines
Lance Christian A. Villanueva, University of the East, Philippines
Gabriel M. Caraos, University of the East, Philippines


Keywords: Computer Hardware Assembly, Gaming the System, Hardware Servicing Skill, Tutoring System

Computer hardware servicing is a technical skill where students have to learn computer set building,
computer troubleshooting, software installation, system configuration, and computer maintenance (De
Jesus, 2019). From basic secondary school to computer-related courses in tertiary education, computer hardware servicing instruction is a fundamental component of computer education (Hsu & Hwang, 2014). However, there are challenges to learning the course. The difficulty experienced by students in assembling a computer is due not only to a lack of practice but also to insufficient assistance and

materials (Hwang et al., 2011). For example, to understand the functions of a motherboard, students
need to see a fully functional motherboard. The ideal teaching method for the subject is to allow
the students to use a functional motherboard. However, it will be highly impractical to dismantle
a working computer to show the motherboard. Moreover, providing individualized feedback to all
students will be very tedious and time-consuming (Botarleanu et al., 2018).
One way to address these issues is to employ computer-based training (CBT) software
(subsequently referred to as software) for a computer hardware servicing system (De Jesus, 2019).
However, prior work (e.g., De Jesus, 2019) did not include interventions when students are gaming
the system (GTS) (a deliberate behavior to exploit the system to achieve correct responses rather than
learning the materials; Baker et al., 2008) and assistance from a pedagogical agent. To address these
gaps, this study was conceived. This study developed software for computer hardware servicing for
computing students (Information Technology, Computer Science, and Information Systems) with a
pedagogical agent capable of detecting the GTS. Specifically, the study aims to answer the following
research questions (RQ). 1) What is the software utilization of the students in the experimental group
in terms of the number of lectures taken, time spent on the hands-on activities, number of hands-on
errors, time spent gaming the system, and lesson where GTS was observed? 2) What are the hardware
servicing academic performances of the students in the control and experimental groups in terms of
pretest scores, posttest scores, time spent on the hands-on activities, and number of hands-on errors?
3) Is there a significant difference between the academic performances of the students in terms of
pretest scores, posttest scores, time spent on the hands-on exercises, and number of hands-on errors
in the control and experimental groups?
The following null hypotheses were tested in this study:
H0a: There is no significant difference in the pretest scores of the experimental and control group.
H0b: There is no significant difference in the posttest scores of the experimental and control group.
H0c: There is no significant difference in the time spent on the hands-on activities of the experimental
and control group.
H0d: There is no significant difference in the number of hands-on errors committed of the experimental
and control group.
H0e: There is no significant difference in the pretest and posttest scores of the students in the control group.
H0f: There is no significant difference in the pretest and posttest scores of the students in the
experimental group.


Computer-based training (CBT) is a methodology for providing systematic, structured learning
(Bedwell & Salas, 2010). Practitioners and students have relied on CBT (Bedwell & Salas, 2010)
since it is an effective educational tool (Oduma et al., 2019). CBT is an evolving field. Researchers
in this field are continuously developing CBT software with the intention of improving the students’
academic performance. For instance, CBT was employed to learn languages. Ecalle et al. (2020)
used CBT programs to stimulate learning to read in French for new immigrant children. Two groups
of students used different CBT software programs. The first group had just started to learn French,
while the second group could already identify a few French words. The experiments showed that
there was a significant effect on phonemic awareness in the first group, while there was a significant
effect on word reading in the second group. CBT for language acquisition is also beneficial for the
older population. In a recent similar study, Klimova (2021) conducted a mini-review of the benefits
of CBT for foreign language training in healthy older people. Klimova (2021) disclosed that CBT
for foreign language acquisition was indeed helpful for older individuals.

CBT was also employed in mathematics learning. For example, Mousa and Molnár (2020)
determined whether CBT in math improves the inductive reasoning of 9 to 11-year-old children. Their
study found evidence to support the conclusion that the experimental group (those who underwent
CBT) had higher posttest scores than the control group. In the recent study of Zwart et al. (2021),
they utilized CBT for training nursing students in professional duties that included mathematical tasks
associated with medication processes. The CBT system included mathematical medication scenarios
and basic arithmetic exercises that could support mathematical medication learning. Data gathered
from 118 participants showed that the CBT improved the mathematical memorization of all students.
De Jesus (2019) conducted a similar study on computer hardware servicing. This is the only
study that is closely related to this current study. De Jesus (2019) developed a CBT named “Computer
Hardware Servicing and Maintenance Trainer” (CHSM Trainer). The CHSM Trainer reduced the time spent practicing interfaces and troubleshooting. The software received a very satisfactory subjective evaluation from the students. However, the software could neither detect gaming behaviors nor offer the functionalities of a pedagogical agent.

Pedagogical agents (PA) are virtual characters that facilitate instruction (Bringula et al., 2018;
Lane & Schroeder, 2022). There is a growing body of research that reports the impact of PA on
students’ learning and behavior. There are studies that report both positive (Bringula et al., 2018;
Mohammadhasani et al., 2018) and inconclusive (Li et al., 2016) effects of PA. Nonetheless, a
recent systematic literature review agreed that PA had a significant effect on students’ learning
(Martha & Santoso, 2019).
The ability of the PA to provide real-time and personalized feedback contributed to its
effectiveness. Feedback is a computer-generated message that could assist or correct a student during
a learning process (Bimba et al., 2017; Bringula et al., 2017). It may provide textual, gesture, voice,
or facial responses (Bringula et al., 2020; Dinçer & Doğanay, 2017; Kim & Baylor 2016). The study
of Dinçer and Doğanay (2017) utilized four different PAs with either audible or textual feedback in
teaching students about MS Excel. The four PAs were Tuna (with an appearance of a boy), Ada (a
grown-up female), Ali (a grown-up male), and Zipzip (a robot). Students could choose which PAs
and feedback they liked. It was revealed that designs with agents had positive effects on learners’
motivation, academic success, and cognitive load.
In a recent study, Bringula et al. (2020) investigated the impact of two versions of PAs of intelligent
tutoring systems on the mathematics performance of the students. The first version only provided
textual feedback and a neutral synthetic facial expression (SFE). The second version also provided
textual feedback but included other SFEs (happy, surprise, and sadness). Students who utilized the
second version had higher mathematics performance than those students who utilized the first version.

Gaming the system (GTS) is a student’s deliberate attempt “to succeed in an educational task by
systematically taking advantage of properties and regularities in the system used to complete that
task, rather than by thinking through the material” (Baker, Mitrović, & Mathews, 2010, p. 267). One
form of GTS is guessing (Walonoski & Heffernan, 2006). Different strategies were employed in the
software to prevent guessing. These strategies include delaying hint requests (Price et al., 2017), using
a pedagogical agent to remind students not to game the software (Baker et al., 2006), allowing to see
the gaming behaviors of other students (Verginis et al., 2011), providing textual feedback (Arroyo et
al., 2007; Bringula et al., 2018; Roll et al., 2007; Walonoski & Heffernan, 2006), using response time as an indicator of guessing (Guo et al., 2016), and informing the students that the software is aware of the students’ behavior (Nunes et al., 2016). Non-awarding of points was also suggested (Kraemer et al., 2012). All except the first strategy were found to reduce gaming.

Students exhibit GTS for various reasons. Some students game the system because they want to see the reaction of the PA (Rodrigo et al., 2012). Other students were genuinely stuck on the activity
(Beck & Rodrigo, 2014). In a classroom setting, teachers or tutors provide interventions to help
students move forward with the lesson. One of these strategies is to repeat the lecture. For example,
Bringula et al. (2020) reported that videos were the preferred teaching materials since students
could repeat the lectures.


The students utilized computer-based training software accessible through a local area network. The
software has a pedagogical agent named “DAC (Disassemble and Construct): The Builder” (simply
referred to as DAC) capable of conducting hardware servicing training. The software had lessons and
assessments (e.g., hands-on activities, quizzes, and examinations). It covered two lessons (Desktop
Assembly and Disassembly and Troubleshooting) of the course syllabus.
The two lessons contained 16 lectures. The pedagogical agent delivered the lecture through
text and images. The first lecture was about the parts of the system unit and its functions (Figure 1).
Following the lecture, a 25-item randomized quiz related to the lecture was given to assess the student’s
comprehension. The student must have at least 13 points to proceed to the next lecture. If the score
was not satisfactory, the student would repeat the lecture, and the next module would remain locked.
For the rest of the lectures, DAC taught the students how to assemble and troubleshoot computers.
At the end of every lecture, students took part in a hands-on activity. The students could take the
lessons and activities at their own pace.
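The lecture-gating rule described above (a score of at least 13 on the 25-item quiz unlocks the next lecture; otherwise the student repeats the lecture and the next module stays locked) can be sketched as follows. This is an illustrative reconstruction; the function and variable names are assumptions, not the system's actual implementation.

```python
# Hypothetical sketch of the lecture-gating rule: a student needs at least
# 13 of 25 quiz points to proceed; otherwise the next module remains locked.
PASSING_SCORE = 13

def grade_quiz(answers: list[bool]) -> int:
    """Count correct responses on the 25-item randomized quiz."""
    return sum(answers)

def next_step(score: int, current_lecture: int) -> tuple[str, int]:
    """Decide whether the student proceeds to the next lecture or repeats."""
    if score >= PASSING_SCORE:
        return ("proceed", current_lecture + 1)   # next module unlocked
    return ("repeat", current_lecture)            # repeat lecture; next module locked
```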
DAC assisted the students through the lectures, activities, and examinations. DAC provided
textual feedback and displayed neutral and happy facial expressions. It can conduct tutorials/lectures,
provide hints, detect gaming, reprimand students, and recommend topics (Figure 1 and Figure 2).
If DAC detected gaming, it would prompt the students about their behavior and redirect them back
to the lesson. A student was deemed gaming the system when an individual made three consecutive
mistakes within 15 seconds. It was assumed that students exhibited GTS because they were stuck in the activity (Beck & Rodrigo, 2014). Students would only be allowed to retake the exercise after the lesson (Figure 3). This strategy was based on the study of Bringula et al. (2020).

Figure 1. DAC: The Builder conducting a lecture
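The GTS-detection rule described above (three consecutive mistakes within 15 seconds) can be sketched as follows. This is an illustrative reconstruction under the stated rule, not the system's actual code; all names are assumptions, and the input is presumed to contain only consecutive mistakes (a correct attempt would reset the record).

```python
# Hypothetical sketch of the GTS-detection rule: a student is flagged when
# three consecutive mistakes fall within a 15-second window.
GTS_MISTAKES = 3
GTS_WINDOW_SECONDS = 15.0

def detect_gts(mistake_times: list[float]) -> bool:
    """mistake_times: timestamps (in seconds) of consecutive mistakes, in order."""
    if len(mistake_times) < GTS_MISTAKES:
        return False
    # Slide over every run of three consecutive mistakes and check the window.
    for i in range(len(mistake_times) - GTS_MISTAKES + 1):
        if mistake_times[i + GTS_MISTAKES - 1] - mistake_times[i] <= GTS_WINDOW_SECONDS:
            return True
    return False
```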


This experimental study utilized the quasi-experimental pretest-posttest control group design (Figure
4). Computing students from one university in the Philippines participated in the study. There were
two classes in the hardware servicing course. Only one teacher handled the two classes. All students
in the two classes participated in the study. The two classes had a total of 73 students. However, only
56 students completed the experiment.
Figure 2. DAC: The Builder assisting the student to build a virtual computer
Figure 3. DAC re-directs the student to the tutorial after it detected GTS

A class section was randomly assigned (R) either to an experimental or control group. The
experimental group consisted of 26 students, while the control group consisted of 30 students (Figure
4). The average age of the participants in both groups was 20 years. The majority of the participants
were male in both groups: 22 male participants in the experimental group and 19 in the control
group. Most of the students who participated in the experimental group were third-year students (n
= 19), while second-year students (n = 15) were in the control group.
Both sets of students took a pretest (O) before the intervention period (X). The whole experiment
lasted for two weeks. The intervention period lasted for four non-consecutive days (i.e., two class
sessions within a week, and each session lasted for 1.5 hours). Afterward, a posttest (O) was
administered to both groups.

The academic performances of the students were measured through objective tests and hands-on activities.
The objective part was composed of the pretest and posttest. Both tests were incorporated into the system
to facilitate the randomization of items. Initially, the tests included a 39-item isomorphic multiple-choice
test. The content of the tests was about the general parts of the computer and the installation process.
The teacher of the course helped the researchers develop the items for the tests. Then, it was pilot-tested
with 21 students who were not part of the study. The pilot testers were students of another Information
Technology course where hardware servicing was embedded in the syllabus. Unclear instructions and
vague sentences were deleted from the tests. The final tests contained 38 items.
The students in both groups had already taken the initial topic (i.e., the theoretical part) of the
syllabus when the study was conducted. Both groups were given a pretest before the intervention.
The pretest was conducted a day before the intervention period. Afterward, the students in the
experimental group utilized the software during their class hours. This is the intervention period. Each
intervention period lasted for 1.5 hours. Students could repeat the lecture as desired. For the control
group, the students were also taught in the laboratory (i.e., the laboratory served as the classroom)
for the same duration. Students in the control group were also taught the same course contents for
two weeks. After the teacher’s lecture, students took part in hands-on activities and quizzes in DAC.
Finally, the students in both groups took the posttest. The posttest was given on the last day of the
experiment. Students were given an hour to finish each test. The experiment lasted for two weeks.
The second part of the research instrument entailed hands-on activities. In these activities,
students were asked to set up a virtual computer unit (Figure 3). Students have to complete the set-
up within an hour. There were no points associated with this activity. Instead, students’ completion
times and the number of errors they committed were recorded. If DAC detected that a student was gaming the system, it informed the student of the gaming behavior and redirected them to the lesson (Figure 3). The number of lectures taken, time spent gaming the system, and activities where
GTS was observed were also logged in the system.
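The utilization measures logged by the system, as described above, could be represented by a per-student record along the following lines. This is a hypothetical sketch; the field names are assumptions, not the study's actual schema.

```python
# Hypothetical per-student log record for the utilization measures described
# in the text; field names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class StudentLog:
    student_id: str
    lectures_taken: int = 0
    hands_on_seconds: float = 0.0    # time spent on hands-on activities
    hands_on_errors: int = 0
    gts_seconds: float = 0.0         # time spent gaming the system
    gts_activities: list[int] = field(default_factory=list)  # activities where GTS was observed

log = StudentLog(student_id="S01")
log.gts_activities.append(13)   # e.g., GTS detected during activity 13
log.hands_on_errors += 3
```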
Figure 4. Randomized Pretest-Posttest Control Group Design


The study utilized descriptive statistics such as sums, means, and standard deviations. Mann-Whitney
U and Wilcoxon Signed rank tests were employed to determine significant differences in the hardware
servicing performances of the participants in the control and experimental groups. A 0.05 level of
significance was adopted to determine the reliability of the findings.
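The two nonparametric tests named above can be illustrated with SciPy, run on made-up example scores (not the study's data): the Mann-Whitney U test compares the two independent groups, and the Wilcoxon signed-rank test compares paired pretest and posttest scores.

```python
# Illustrative use of the study's two tests on invented example scores.
from scipy.stats import mannwhitneyu, wilcoxon

# Independent groups: two-sided Mann-Whitney U at the 0.05 level.
control_posttest = [25, 28, 30, 27, 26, 29, 31, 24, 28, 27]
experimental_posttest = [27, 29, 31, 28, 30, 26, 32, 29, 28, 30]
u_stat, u_p = mannwhitneyu(experimental_posttest, control_posttest,
                           alternative="two-sided")

# Paired scores: Wilcoxon signed-rank test on pretest vs. posttest.
pretest = [20, 22, 19, 24, 21, 23, 18, 25, 20, 22]
posttest = [26, 27, 25, 29, 28, 26, 24, 30, 27, 28]
w_stat, w_p = wilcoxon(pretest, posttest)

print(f"Mann-Whitney U = {u_stat:.1f}, p = {u_p:.3f}")
print(f"Wilcoxon W = {w_stat:.1f}, p = {w_p:.3f}")
```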


Table 1 shows the students’ software utilization. The experimental group took part in more hands-on
exercises than the control group. Despite taking more hands-on exercises, the experimental group
spent less time completing the hands-on exercises than the control group. The experimental and
control groups took, on average, 9.12 and 13.62 minutes to complete a hands-on activity. This finding
conforms to the study of De Jesus (2019), which found that students who utilized the software tended
to finish the activities quickly. Furthermore, the former had fewer errors in the hands-on exercises.
The majority of the students in the experimental group took 13 lectures.
During the activities, on average, students in the experimental group would exhibit GTS every
4.03 seconds. This means that the students were committing three mistakes within 4.03 seconds. The
majority of the GTS was observed during the 13th activity. Initially, more than 50% (n = 15) of the
students exhibited GTS. After learning that they would return to the lecture, the number of students
who exhibited GTS was reduced to 4. However, three students exhibited gaming three times.


Table 2 shows the Mann-Whitney U test on the hardware servicing performance of the students between
the two groups. The experimental group (M = 25.8) had lower pretest scores than the control group
(M = 27). The control group’s mean and sum of ranks of pretest scores (M = 30.35; s = 910.50) are
higher than the experimental group’s (M = 26.37; s = 685.50). However, the differences in the ranks
are not significant (U = 334.5; p > 0.05). The posttest scores and mean ranks between the groups
were almost equal. It can be expected that the difference between the rank of the posttest scores was
Table 1. Software Utilization

Software Utilization                    Experimental Group (n = 26)     Control Group (n = 30)
Average Number of Hands-on Completed    11                              7
Average Time Spent on Hands-on          546.97 seconds (9.12 minutes)   817.30 seconds (13.62 minutes)
Number of Hands-on Errors               69.88                           102.6
Average Time Exhibited GTS              4.03 seconds                    -
Activity where GTS was observed         Activity 13                     -
Average Number of Activities Taken      13                              -
First time gaming                       15 students                     -
Second time gaming                      4 students                      -
Third time gaming                       3 students                      -

not significantly different (U = 372.0, p > 0.05). The first (H0a) and second (H0b) null hypotheses
are both accepted.
The Mann-Whitney U test confirmed that there is a significant difference between the software
utilization of the control and experimental groups. The experimental group spent less time (U =
234.00, p < 0.05) and had fewer errors committed during the activities (U = 247.00, p < 0.05). Hence,
the third (H0c) and fourth (H0d) null hypotheses are both rejected.
Meanwhile, the Wilcoxon signed-rank tests were conducted on the pretest and posttest scores
of the groups (Table 3). In the control group, there is almost an equal number of negative (n = 14)
and positive (n = 15) ranks. Moreover, the mean negative rank is 12.71 and the mean positive rank
is 17.13. The sum of the positive ranks (s = 257.00) is higher than the sum of the negative ranks (s
= 178.00). However, the difference between the mean ranks of the pretest and posttest scores in the
control group is not significant (Z = 0.855, p > 0.05). Therefore, the null hypothesis stating that (H0e)
there is no significant difference between the pretest and posttest scores of the control group is accepted.
In the experimental group, there are more positive ranks (n = 20) than negative ranks (n = 5).
Consequently, the mean positive rank (M = 13.98) is higher than the mean negative rank (M = 9.10).
The sum of the ranks further shows the discrepancy between the positive (s = 279.50) and negative
(s = 45.50) ranks. The difference between the mean rank of the scores was found to be significant (Z
= -3.194, p < 0.05). Hence, the posttest scores are higher than the pretest scores of the experimental
Table 2. Mann-Whitney U Test on the Hardware Servicing Performance of the Students in the Experimental (n = 26) and Control (n = 30) Groups

Test                                Group          Mean        Mean Rank (M)   Sum of Ranks (s)   U        p-value
Pretest                             Experimental   25.8        26.37           685.50             334.5    0.361
                                    Control        27.0        30.35           910.50
Posttest                            Experimental   27.9        27.81           723.00             372.0    0.766
                                    Control        28.0        29.10           873.00
Time Spent on Hands-on Activities   Experimental   9.12 min    25.50           585.00             234.0    0.010
                                    Control        13.62 min   33.70           1011.00
Hands-on Errors                     Experimental   69.88       23.00           598.00             247.00   0.019
                                    Control        102.6       33.27           998.00
Table 3. Wilcoxon Signed-Rank Tests on the Hardware Servicing Performance of the Students between their Pretest and Posttest Scores

Test                 Rank       n    Mean Rank (M)   Sum of Ranks (s)   Z        p-value
Post_Con – Pre_Con   Negative   14   12.71           178.00             0.855    0.392
                     Positive   15   17.13           257.00
                     Ties       1
Post_Exp – Pre_Exp   Negative   5    9.10            45.50              -3.194   0.001
                     Positive   20   13.98           279.50
                     Ties       1

group. Consequently, the null hypothesis stating that (H0f) there is no significant difference between
the pretest and posttest scores of the experimental group is rejected.

This study determined the impact of a CBT on the computer hardware servicing skills of college
students. Towards this goal, the academic performances of the students in the experimental and
control groups were compared. Moreover, the software utilization of the experimental group was
investigated. The experimental group had better software utilization than the control group in terms
of the average number of hands-on activities completed, average time spent, and number of hands-
on errors committed. The experimental group was able to cover more hands-on activities than the
control group. The experimental group also exhibited their knowledge more correctly and quickly than
the control group. These findings agree with the study by De Jesus (2019). The favorable software
utilization of the experimental group can be attributed to the students’ familiarity with the software. The software is indeed able to assist the students in learning computer hardware servicing at their own pace. Nevertheless, despite the lack of familiarity with the system, the control group was able to
complete seven hands-on activities.
Consistent with the literature, students in this study also exhibited GTS behavior. In the context
of this study, students attempted to fit the parts of a computer into the different computer slots.
GTS was exhibited within 4.03 seconds. As a result, students responded to the activities passively. The majority of the GTS was logged in the 13th activity. Perhaps students were attempting to finish all the lessons quickly.
In the first case, more than half of the students displayed GTS. After the intervention of DAC,
there was a significant reduction in GTS. Therefore, the combination of returning the students to the lecture, textual feedback, and a neutral facial expression is an effective way to discourage this behavior. However, there were still three students who persisted in their GTS behavior. It is unclear why these students continued this behavior despite taking more time to re-learn the lesson. Future research
is necessary to shed light on this phenomenon.
The Mann-Whitney U test on the pretest scores of both groups showed no significant difference.
Thus, the prior knowledge of the students in the course is similar. In other words, when the study was
conducted, they had the same levels of understanding of the lesson. At the end of the intervention
period, the differences in their posttest scores were not statistically significant. This means students
in the traditional lecture setting and the experimental setting could not outperform each other.
The Wilcoxon signed-rank tests provided another insight into the group’s academic performance in
hardware servicing. For the control group, the rank of the pretest and posttest scores was not statistically
different. This finding suggests traditional lectures could not increase the students’ scores to a large
extent. Meanwhile, the experimental group had a different result. Students who used the software had the potential to significantly improve their grades. However, as shown in the previous statistical test,
the scores of the students in the experimental group did not exceed the scores of the control group.

This study contributed to the existing threads of discussion on preventing GTS and on the field of
CBT in general. In prior studies, preventing GTS was focused on reprimanding or reminding students
about their usage behavior (Arroyo et al., 2007; Baker et al., 2006; Nunes et al., 2016; Roll et al.,
2007; Walonoski & Heffernan, 2006). While these strategies have been proven effective in reducing
GTS, they may lack pedagogical value. In this current study, the response of the PA was based on the
assumption that students exhibit GTS because of a lack of skills. Consistent with the study of Chen
et al. (2012), students need to repeat the lesson as a more definitive course of action. The gaming

behavior intervention employed in this study, as shown in the findings, significantly reduced the
number of students who exhibited this behavior.
Furthermore, this study offers practical implications. Considering the positive outcomes of the
experiment, the use of the software is encouraged. The software may also be utilized as supplemental
material for students. Specifically, at-risk, struggling, or absentee students may use the software to
catch up with the course content. CBT researchers may also consider redirecting the students to their
lessons as a way to deter GTS.

This study determined the students’ utilization of a CBT software named “DAC” and its impact
on their academic performance. The experimental group had a more favorable use of the software
compared to the control group. However, this is mainly attributed to familiarity with the system. The
experimental group exhibited GTS. This behavior was significantly reduced after DAC intervention.
Hence, redirecting the students to retake the lesson is an effective way to deter GTS.
The study did not find evidence to reject the first, second, and fifth null hypotheses. However,
the third, fourth, and sixth hypotheses were rejected. Three conclusions can be derived from this
finding. First, students can learn both in traditional and experimental settings. Second, the students
in both conditions could not outperform each other. In other words, after each intervention, it can
be expected that their scores will be the same. Lastly, the software can assist students in catching up
with their peers.
Despite the promising results, there are several limitations in the study that are worth further
investigation. Every intervention has limitations, and the strategy employed in this study is no
exception. It is still unclear whether students will find ways to avoid detection of their GTS behavior. The incorporation
of other intervention strategies in the system is suggested to determine the relative impact of these
strategies. Lastly, the software only covered the hardware servicing of desktop computers. Thus,
laptop servicing may be incorporated into future research.


Arroyo, I., Ferguson, K., Johns, J., Dragon, T., Meheranian, H., Fisher, D., & Woolf, B. P. (2007). Repairing
disengagement with non-invasive interventions. Artificial Intelligence in Education, 2007, 195–202.
Baker, R., Walonoski, J., Heffernan, N., Roll, I., Corbett, A., & Koedinger, K. (2008). Why students engage in “gaming
the system” behavior in interactive learning environments. Journal of Interactive Learning Research, 19(2), 185–224.
Baker, R. S., Corbett, A. T., Koedinger, K. R., Evenson, S., Roll, I., Wagner, A. Z., & Beck, J. E. (2006).
Adapting to when students game an intelligent tutoring system. In International conference on intelligent tutoring
systems (pp. 392-401). Springer. doi:10.1007/11774303_39
Baker, R. S., Mitrović, A., & Mathews, M. (2010). Detecting gaming the system in constraint-based tutors.
In International Conference on User Modeling, Adaptation, and Personalization (pp. 267-278). Springer.
doi:10.1007/978-3-642-13470-8_25
Beck, J., & Rodrigo, M. M. T. (2014). Understanding wheel spinning in the context of affective factors. In
International conference on intelligent tutoring systems (pp. 162-167). Springer. doi:10.1007/978-3-319-07221-0_20
Bedwell, W. L., & Salas, E. (2010). Computer‐based training: Capitalizing on lessons learned. International
Journal of Training and Development, 14(3), 239–249.
Bimba, A. T., Idris, N., Al-Hunaiyyan, A., Mahmud, R. B., & Shuib, N. L. B. M. (2017). Adaptive
feedback in computer-based learning environments: A review. Adaptive Behavior, 25(5), 217–234.
doi:10.1177/1059712317727590
Botarleanu, R. M., Dascalu, M., Sirbu, M. D., Crossley, S. A., & Trausan-Matu, S. (2018). ReadME–Generating
personalized feedback for essay writing using the ReaderBench framework. In H. Knoche, E. Popescu, & A.
Cartelli (Eds), Conference on Smart Learning Ecosystems and Regional Development (pp. 133-145). Springer.
Bringula, R., De Leon, J. S., Rayala, K. J., Pascual, B. A., & Sendino, K. (2017). Effects of different types of feedback
of a mobile-assisted learning application and motivation towards mathematics learning on students’ mathematics
performance. International Journal of Web Information Systems, 13(3), 241–259. doi:10.1108/IJWIS-03-2017-0017
Bringula, R., Fosgate, I. C., Yorobe, J. L., & Garcia, N. P. (2020). Exploring the Sequences of Synthetic Facial
Expressions and Type of Problems Solved in a Personal Instructing Agent using Lag Sequential Analysis. In
2020 IEEE International Conference on Teaching, Assessment, and Learning for Engineering (TALE) (pp. 764-
769). IEEE. doi:10.1109/TALE48869.2020.9368492
Bringula, R. P., Fosgate, I. C. O. Jr, Garcia, N. P. R., & Yorobe, J. L. M. (2018). Effects of pedagogical agents
on students’ mathematics performance: A comparison between two versions. Journal of Educational Computing
Research, 56(5), 701–722. doi:10.1177/0735633117722494
De JesusA. N. B. (2019). Computer hardware servicing and maintenance trainer. https://ssrn.com/
abstract=3448885
Dinçer, S., & Doğanay, A. (2017). The effects of multiple-pedagogical agents on learners’ academic success,
motivation, and cognitive load. Computers & Education, 111, 74–100. doi:10.1016/j.compedu.2017.04.005
Ecalle, J., Vidalenc, J. L., & Magnan, A. (2020). Computer-based Training Programs to Stimulate Learning to
Read in French for Newcomer Migrant Children: A Pilot Study. Journal of Educational Cultural and Psychological
Studies, (22), 23–47. doi:10.7358/ecps-2020-022-ecal
Guo, H., Rios, J. A., Haberman, S., Liu, O. L., Wang, J., & Paek, I. (2016). A new procedure for detection of
students’ rapid guessing responses using response time. Applied Measurement in Education, 29(3), 173–183.
doi:10.1080/08957347.2016.1171766
Hsu, C. K., & Hwang, G. J. (2014). A context-aware ubiquitous learning approach for providing instant learning
support in personal computer assembly activities. Interactive Learning Environments, 22(6), 687–703. doi:10
.1080/10494820.2012.745425
Hwang, G. J., Wu, C. H., Tseng, J. C. R., & Huang, I. (2011). Development of a ubiquitous learning platform
based on a real-time help-seeking mechanism. British Journal of Educational Technology, 42(6), 992–1002.
doi:10.1111/j.1467-8535.2010.01123.x

Volume 12 • Issue 1
12
Kim, Y., & Baylor, A. L. (2016). Research-based design of pedagogical agent roles: A review, progress, and
recommendations. International Journal of Artificial Intelligence in Education, 26(1), 160–169. doi:10.1007/
s40593-015-0055-y
Klimova, B. (2021). Are There Any Cognitive Benefits of Computer-Based Foreign Language Training for
Healthy Elderly People?–A Mini-Review. Frontiers in Psychology, 11, 573287. doi:10.3389/fpsyg.2020.573287
PMID:33584410
Kraemer, E. E., Davies, S. C., Arndt, K. J., & Hunley, S. (2012). A comparison of the Mystery Motivator and the
Get’Em On Task interventions for off‐task behaviors. Psychology in the Schools, 49(2), 163–175. doi:10.1002/
pits.20627
Lane, H. C., & Schroeder, N. L. (2022). Pedagogical agents. In B. Lugrin, C. Pelachaud, & D. Traum (Eds.), The
Handbook on Socially Interactive Agents: 20 years of Research on Embodied Conversational Agents, Intelligent
Virtual Agents, and Social Robotics Volume 2: Interactivity, Platforms, Application (pp. 307-330). Association
of Computing Machinery. doi:10.1145/3563659.3563669
Li, J., Kizilcec, R., Bailenson, J., & Ju, W. (2016). Social robots and virtual agents as lecturers for video instruction.
Computers in Human Behavior, 55, 1222–1230. doi:10.1016/j.chb.2015.04.005
Martha, A. S. D., & Santoso, H. B. (2019). The design and impact of the pedagogical agent: A systematic
literature review. Journal of Educators Online, 16(1), n1. doi:10.9743/jeo.2019.16.1.8
Mohammadhasani, N., Fardanesh, H., Hatami, J., Mozayani, N., & Fabio, R. A. (2018). The pedagogical agent
enhances mathematics learning in ADHD students. Education and Information Technologies, 23(6), 2299–2308.
doi:10.1007/s10639-018-9710-x
Mousa, M., & Molnár, G. (2020). Computer-based training in math improves inductive reasoning of 9-to 11-year-
old children. Thinking Skills and Creativity, 37, 100687. doi:10.1016/j.tsc.2020.100687
Nunes, T. M., Bittencourt, I. I., Isotani, S., & Jaques, P. A. (2016). Discouraging gaming the system through
interventions of an animated pedagogical agent. In European Conference on Technology Enhanced Learning
(pp. 139-151). Springer. doi:10.1007/978-3-319-45153-4_11
Oduma, C. A., Onyema, L. N., & Akiti, N. (2019). E-learning platforms in business education for skill acquisition.
[NIGJBED. Nigerian Journal of Business Education, 6(2), 104–112.
Price, T. W., Zhi, R., & Barnes, T. (2017, June). Hint generation under uncertainty: The effect of hint quality
on help-seeking behavior. In E. André, R. Baker, X. Hu, M. Rodrigo, & B. du Boulay (Eds.), International
conference on artificial intelligence in education (pp. 311-322). Springer. doi:10.1007/978-3-319-61425-0_26
Rodrigo, M. M. T., Baker, R. S., Agapito, J., Nabos, J., Repalam, M. C., Reyes, S. S., & San Pedro, M. O. C.
(2012). The effects of an interactive software agent on student affective dynamics while using; an intelligent
tutoring system. IEEE Transactions on Affective Computing, 3(2), 224–236. doi:10.1109/T-AFFC.2011.41
Roll, I., Aleven, V., McLaren, B. M., & Koedinger, K. R. (2007, June). Can Help-Seeking Be Tutored? Searching
for the Secret Sauce of Metacognitive Tutoring. Artificial Intelligence in Education, 2007, 203–210.
Verginis, I., Gouli, E., Gogoulou, A., & Grigoriadou, M. (2011). Guiding learners into re-engagement through
the SCALE environment: An empirical study. IEEE Transactions on Learning Technologies, 4(3), 275–290.
doi:10.1109/TLT.2011.20
Walonoski, J. A., & Heffernan, N. T. (2006). Detection and analysis of off-task gaming behavior in intelligent
tutoring systems. In International Conference on Intelligent Tutoring Systems (pp. 382-391). Springer.
doi:10.1007/11774303_38
Zwart, D. P., Goei, S. L., Noroozi, O., & Van Luit, J. E. (2021). The effects of computer-based virtual learning
environments on nursing students’ mathematical learning in medication processes. Research and Practice in
Technology Enhanced Learning, 16(1), 1–21. doi:10.1186/s41039-021-00147-x

Rex P. Bringula is a professor at the University of the East (UE) College of Computer Studies and Systems. He received his BS Computer Science degree from UE as a Department of Science and Technology scholar, and his Master's in Information Technology and Ph.D. in Technology Management from the Technological University of the Philippines. He is active in conducting school- and government-funded research projects and in participating in local and international conferences. His research interests include computer science/IT education, affective computing, Internet studies, cyber-behavior, web usability, and environmental issues.
John Vincent Canseco graduated from the University of the East, Manila, Philippines.
Patricia Louise J. Durolfo was a student at the University of the East, Manila, Philippines.
Lance Christian Villanueva was a student at the University of the East, Manila, Philippines.
Gabriel M. Caraos was a student at the University of the East, Manila, Philippines.