The Effect of Student Model on Learning
Adrian MARIES, Amruth KUMAR
Ramapo College of New Jersey
{amaries, amruth}@ramapo.edu
Abstract
Our goal in this study was to compare the
effectiveness of displaying the open student model as a
set of skillometers versus concept maps. The data
suggests that concept maps are significantly more
effective than a set of skillometers when answering
questions that require synthesizing an overview of the
topic.
1. Introduction
The constructivist theory of learning argues that we
gain knowledge by building upon already existing
knowledge, that we learn new concepts by integrating
them with concepts we already know. Constructing
concept maps makes students analyze the structure of
their own knowledge, which helps them assimilate the
new information [9]. Several studies have shown that
concept maps help students learn. One study found that
creating concept maps helps students understand and
retain the material presented in class [10,11]. Another
study found that studying the already worked-out
concept map was more effective than generating the
concept map from scratch, which was in turn more
effective than generating the concept map from just a
list of concepts or from a list that was already arranged
spatially [12].
Several studies have shown the benefits of open
student models. Making the student model available to
students can make them aware of their own knowledge,
or lack of it, and, in turn, improve their learning [3,4].
One survey shows that students want to have access to
their learner models [1]. A student model presented in
a tabular format is difficult to understand [6]. Therefore,
visualization of data is a critical part of the open
student model [7]. Concept maps, with their ability to
reveal the relationships among concepts, have been
proposed as a mechanism to present the learner model
[1,2,5,8].
In our software tutors, we present the open student
model as a concept map. We wanted to find out
whether using the concept map to present the open
student model conferred any benefits over using other
techniques for visualizing the student model. In this
paper, we will describe an experiment that we
conducted to answer this question, analyze the
collected data and present our results.
2. The Tutor and the Protocol
We used a software tutor on arithmetic expressions
for our evaluation (www.problets.org). It presents
problems on evaluating arithmetic expressions in
C/C++/Java/C# (e.g., 5 + 4 % 8), and asks the student to
evaluate them step-by-step, i.e., one operator at a time.
Once the student has entered his/her answer, the tutor
provides delayed feedback – it lists how many steps the
student solved correctly. It displays the correct evaluation
of the expression using under-braces and intermediate
results. In addition, it prints a text explanation for each
step, such as “16 / 5 returns 3. Since both the operands
are integers, integer division is performed. Any fraction
in the result is discarded.” Since the student’s attempt is
displayed in the left panel and the correct evaluation is
displayed in the right panel simultaneously, the student
can compare the two solutions.
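The step-by-step evaluation and per-step explanation described above can be sketched in code. The following is a minimal illustration, not the tutor's actual implementation: it applies one C++ binary arithmetic operator at a time, using C++ semantics (truncating integer division, dividend-signed remainder), and emits feedback text in the style quoted above.

```python
import math

# A minimal sketch, NOT the actual tutor's implementation, of applying
# one C++ arithmetic operator at a time with a textual explanation.
def cpp_apply(left, op, right):
    """Apply one C++ binary arithmetic operator. When both operands are
    integers, '/' truncates toward zero and '%' takes the dividend's sign."""
    both_ints = isinstance(left, int) and isinstance(right, int)
    if op == '+':
        return left + right
    if op == '-':
        return left - right
    if op == '*':
        return left * right
    if op == '/':
        return math.trunc(left / right) if both_ints else left / right
    if op == '%':
        return left - right * math.trunc(left / right)
    raise ValueError(f"unknown operator {op}")

def explain(left, op, right):
    """Return (result, explanation) in the style of the tutor's feedback."""
    result = cpp_apply(left, op, right)
    text = f"{left} {op} {right} returns {result}."
    if op == '/' and isinstance(left, int) and isinstance(right, int):
        text += (" Since both the operands are integers, integer division"
                 " is performed. Any fraction in the result is discarded.")
    return result, text

# Evaluating 5 + 4 % 8 one operator at a time (% binds tighter than +):
r1, note1 = explain(4, '%', 8)   # 4 % 8 returns 4
r2, note2 = explain(5, '+', r1)  # 5 + 4 returns 9
print(note1)
print(note2)
```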
In fall 2006 and spring 2007, we used the arithmetic
expressions tutor to evaluate whether using the concept
map as the student model conferred any additional
benefits. The subjects were students in Psychology
courses who used the tutor over the web, on their own
time, as part of the experiential learning requirements
of a Psychology course.
We used a controlled study and the traditional pre-
test-practice-post-test protocol:
1. First, the subjects answered an online pre-test
consisting of 9 multiple-choice questions, 6 of
which were related to arithmetic expressions and 3
unrelated.
2. The subjects worked with the software tutor for 15
minutes solving expression evaluation problems
and reading the feedback. After solving each
problem, the subjects were shown their open
student model. For the test group, the model was
shown as a taxonomic concept map (domain
concepts are nodes, links are is-a and part-of
relations, and the map is an and-or tree), with their
percentage completion of each concept graphically
displayed in the corresponding node (See Figure
1). For the control group, the model was shown
using skillometers (a series of progress bars
graphically showing the completion percentage of
each concept) (See Figure 2).
3. Finally, the subjects answered an online post-test
consisting of the same questions as on the pre-test.
The questions on the pre-test and post-test were:
1. How many arithmetic operators are available in
C++ programming language? 1/2/3/4/5/6. The
correct answer was 5.
2. Pick ALL the operators among the following that
are C++ arithmetic operators (check all that
apply): <, *, !, /, %, ^. The answer was *, / and %.
3. How many of the following numbers are prime: 2,
3, 4, 5, 6, 7, 8, 9, 10. The answer was 4.
4. Which of the following C++ operators have
'integer' and 'real' types (check all that apply)? -
<=, >, &&, +, /, ^. The answer was /.
5. For which of the following C++ operators is
'Dividing by Zero' an issue (check all that apply)?
- >=, ||, %, ^, !. The answer was %.
6. What is the sum of the internal angles of a
triangle? 90/180/270/360/450/600. The answer
was 180.
7. To how many C++ arithmetic operators does the
issue of 'Precedence' apply? – none/only
one/all/only two/only three. The answer was all.
8. Pick all the issues that apply to all the C++
arithmetic operators (check all that apply) –
Coercion, Correct evaluation, Error, Associativity,
Real. The answer was Correct evaluation and
Associativity.
9. Which of the following are types of operators in
C++ (check all that apply)? – Relational,
Abstraction, Repetition, Logical, Selection,
Assignment. The answer was Relational, Logical
and Assignment.
Note that almost all the questions had multiple correct
answering options and the student was asked to select
all those options. Questions 3, 6 and 9 are not related
to arithmetic expressions, and were meant to serve as
control questions for each subject. Answers to
questions 1, 2, 4, 5, 7, and 8 cannot be synthesized
without an overview of the domain of arithmetic
operators, since these questions are not about any
particular operator, but rather about groups of
operators.
During the problem-solving session, the tutor never
explicitly provided the answers to any of these
questions, but the answers to questions 1, 2, 4, 5, 7, and 8
were evident from examination of the open student
models. The spatial organization of the concept map
made these answers more obvious than the list
organization of the skillometers. Take, for example,
question 2: Pick ALL the operators among the
following that are C++ arithmetic operators (check all
that apply): <, *, !, /, %, ^. A quick look at the open
student model displayed as a concept map, Figure 1,
will show that the five C++ operators are the children
of the root node, namely +, -, *, / and %. Even though
this information is available in the student model
displayed as skillometers as well, Figure 2, it is more
difficult to find. Not only does one have to look at most
topics in the skillometers, but the notation is also less
straightforward.
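The contrast can be made concrete with a small data sketch. The concept names and structure below are illustrative, not the tutor's exact model: the taxonomic map stores operators as children of the root, so the answer to question 2 is read directly off the tree, whereas the flattened skillometer labels must be scanned one by one.

```python
# An illustrative fragment of the taxonomic student model (concept names
# and structure are hypothetical, not the tutor's exact model). Nodes are
# concepts joined by is-a/part-of links.
concept_map = {
    "Arithmetic operators": {
        "+": ["Precedence", "Associativity", "Correct evaluation"],
        "-": ["Precedence", "Associativity", "Correct evaluation"],
        "*": ["Precedence", "Associativity", "Correct evaluation"],
        "/": ["Precedence", "Correct evaluation", "Dividing by Zero"],
        "%": ["Precedence", "Correct evaluation", "Dividing by Zero"],
    }
}

# Question 2 reads directly off the tree: the arithmetic operators are
# simply the children of the root node.
root = concept_map["Arithmetic operators"]
arithmetic_ops = set(root)                     # {'+', '-', '*', '/', '%'}
candidates = ["<", "*", "!", "/", "%", "^"]
answer = [c for c in candidates if c in arithmetic_ops]
print(answer)                                  # ['*', '/', '%']

# The same facts flattened into skillometer labels lose the hierarchy:
# answering now requires scanning every bar's label.
skillometer_labels = [f"{op}: {issue}"
                      for op, issues in root.items() for issue in issues]
answer_from_bars = [c for c in candidates
                    if any(lbl.startswith(c + ":") for lbl in skillometer_labels)]
assert answer_from_bars == answer
```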
4. Data Analysis
50 students participated in the evaluation: 32 in
the test group and 18 in the control group. We used
two grading schemes:
1. In the regular grading scheme, if a problem had n
answering options, m of which were correct, the
student received 1/m points for each correct
answer. E.g., question 8 has 5 options, 2 of which
are correct; if a student's answer includes one
incorrect option and the two correct ones, they
receive full credit.
2. In the negative grading scheme, students were
penalized for guessing. If a problem had n
answering options, m of which were correct, the
student received 1/m points for each correct
answer and lost 1/(n – m) points for each incorrect
answer. E.g., if a student gives the same answer
as before to question 8, they only receive 2*1/2 –
1/3 ≈ 0.67 points.

Figure 1: Open Student Model Displayed as a Concept Map
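The two schemes translate directly into code. A sketch, modeling an answer as the set of options a student checked:

```python
# A direct transcription of the two grading schemes. An answer is the set
# of options a student checked; n options in total, m of them correct.
def regular_score(selected, correct):
    m = len(correct)
    return sum(1 / m for opt in selected if opt in correct)

def negative_score(selected, correct, n):
    m = len(correct)
    penalty = sum(1 / (n - m) for opt in selected if opt not in correct)
    return regular_score(selected, correct) - penalty

# The paper's example for question 8 (5 options, 2 correct): checking both
# correct options plus one incorrect one earns full credit under regular
# grading, but only 2*1/2 - 1/3, roughly 0.67, under negative grading.
correct = {"Correct evaluation", "Associativity"}
chosen = {"Correct evaluation", "Associativity", "Coercion"}
print(regular_score(chosen, correct))                # 1.0
print(round(negative_score(chosen, correct, 5), 2))  # 0.67
```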
Aggregate of all the Questions. First, we considered
the aggregate of all the questions for each student. We
conducted a 2 X 2 mixed factor ANOVA analysis of
the aggregate scores with pre-post as the within-
subjects factor and treatment (skillometers versus
concept maps) as the between-subjects factor. Using
regular grading, we found a significant main effect for
time (pre versus post-test) [F(1,48) = 7.728, p = 0.034]
- students scored significantly higher on the post-test
(4.879 points) than on the pre-test (4.158 points).
There was no significant main effect for the treatment,
or significant interaction between pre-post and
treatment. We again found a significant main effect for
time using negative grading [F(1,48) = 17.417, p =
0.000] - students scored significantly more on the post-
test (3.993 points) than on the pre-test (2.822 points).
There was no significant interaction between pre-post
and treatment.
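For a two-group pretest-posttest design such as this one, the pre-post x treatment interaction in a 2 X 2 mixed ANOVA is mathematically equivalent to an independent-samples t-test on the gain scores (F = t squared). A minimal sketch of that computation, using made-up gain scores rather than the study's data:

```python
import statistics as st

def gain_score_t(g1, g2):
    """Pooled-variance independent-samples t on gain (post minus pre)
    scores; t squared equals the interaction F of the 2 x 2 mixed ANOVA."""
    n1, n2 = len(g1), len(g2)
    sp2 = ((n1 - 1) * st.variance(g1) + (n2 - 1) * st.variance(g2)) / (n1 + n2 - 2)
    t = (st.mean(g1) - st.mean(g2)) / (sp2 * (1 / n1 + 1 / n2)) ** 0.5
    return t, n1 + n2 - 2  # t statistic and degrees of freedom

# Hypothetical gain scores for six students per group (NOT the study's data).
concept_map_gains = [1.2, 0.8, 1.5, 0.4, 1.1, 0.9]
skillometer_gains = [0.3, -0.2, 0.5, 0.1, 0.0, 0.4]
t, df = gain_score_t(concept_map_gains, skillometer_gains)
print(f"interaction F(1,{df}) = {t ** 2:.3f}")
```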
Related and Unrelated Questions. We did a 2 X 2 X
2 mixed factor ANOVA analysis with pre-post scores
and related-unrelated questions as within-subjects
factors and treatment (skillometer versus concept map)
as between-subjects factor. We found that:
1. There was a significant main effect for related (ave
2.076) versus unrelated questions (ave 1.410)
[F(1,48) = 16.023, p = 0.000].
2. There was a significant main effect for pre-test (ave
1.478) versus post-test (ave 2.008) [F(1,48) =
17.417, p = 0.000] - a clear pre-post increase.
3. There was a significant interaction between related
versus unrelated questions and pre versus post-test
[F(1,48) = 24.769, p = 0.000]: the average
increased from 1.525 on the pre-test to 2.627 on
the post-test for related questions, and decreased
from 1.430 to 1.390 for unrelated questions.
We did not observe any other significant interaction.
Related Questions. Next, we repeated the 2 X 2 mixed
factor ANOVA analysis for the aggregate scores on
related questions only. Once again, there was a
significant main effect for time using regular grading
[F(1,48) = 7.833, p = 0.007] – students scored
significantly more points (3.235) on the post-test than
on the pre-test (2.448). We found the same results with
negative grading [F(1,48) = 23.289, p = 0.000] –
students scored significantly more points (2.654) on
the post-test than on the pre-test (1.417). The
interaction between pre-post and treatment was
marginally significant whether we used regular grading
[F(1,48) = 3.925, p = 0.053] or negative grading
[F(1,48) = 3.476, p = 0.068].
Unrelated Questions. We did not find any significant
main effect for time using regular grading [F(1,48) =
0.401, p = 0.53] or negative grading [F(1,48) = 0.252,
p = 0.618].
Easy, Intermediate and Hard Related Questions.
Next, we repeated the above analysis for easy (1 and
2), intermediate (4 and 5) and hard (7 and 8) questions
considered together. The criterion we used to divide
the problems into the three categories was the
likelihood of finding the answer by taking a quick look
at the open student model. Question 1 (How many
arithmetic operators are available in C++ programming
language?), for example, can be answered fairly easily.
A quick look at the concept map will allow us to see
that there are 5 second-level nodes that have operator
names. To answer the hard questions, we need to
carefully inspect the student model. In order to find the
answer to question 7 (To how many C++ arithmetic
operators does the issue of ‘Precedence’ apply?), we
have to look at all third-level nodes and scan for nodes
with the specified name, a task that requires more than
just a quick glance at the concept map. Intermediate
questions are somewhere in between, harder than what
we call easy questions, but easier to answer than the
hard ones.
Figure 2: Open Student Model Displayed
as Skillometers
For easy questions, there was no main effect for
time or treatment and no significant interaction
between the two.
For intermediate questions, there was a significant
main effect for time [F(1, 48) = 17.936, p = 0.000] –
the average score improved from 0.66 on the pre-test to
1.22 on the post-test. The effect for treatment was
marginally significant [F(1,48) = 3.0, p = 0.09] – the
test group scored lower than the control group on the
pre-test (0.515 versus 0.941). The
interaction between time and treatment was not
significant.
For hard questions, there was no significant main
effect for time or treatment, but the interaction between
the two was significant [F(1,48) = 4.147, p = 0.047]:
whereas the control group score decreased from pre-
test to post-test (from 0.794 to 0.588), the test group
score increased from pre-test to post-test (0.646 to
0.894). We have summarized the pre-post change in
scores on the three types of questions, and the
statistical significance of the control-test group
difference in table 1.
Table 1: Analysis of Easy, Intermediate and
Hard Questions

Pre-post change              Easy    Interm.   Hard
Non-negative grading
  Skillometers   Avg.        0.041    0.353   -0.206
                 St. Dev.    0.644    0.786    0.751
  Concept Map    Avg.        0.183    0.667    0.248
                 St. Dev.    0.774    0.816    0.743
  Between-subjects p-value   0.495    0.196    0.051
Negative grading
  Skillometers   Avg.        0.471    0.459   -0.254
                 St. Dev.    0.736    0.783    0.523
  Concept Map    Avg.        0.576    0.776    0.175
                 St. Dev.    0.868    0.717    0.672
  Between-subjects p-value   0.657    0.174    0.017
Hard Questions. The analysis on questions 7 and 8,
taken separately, revealed the following results. For
question 7, using regular grading, we did not find a
significant main effect for either time or treatment.
Similarly, there was no main effect for time or
treatment for question 8, but the interaction between
the two was marginally significant [F(1, 48) = 2.988, p
= 0.090]. While the average for the control group on
this question dropped from 0.215 to 0.079, the average
for the test group increased from 0.147 to 0.171.
However, using negative grading, we found a
marginally significant difference between the control
and test groups on question 8, as shown in Table 2.
Table 2: Analysis of Questions 7 and 8

Pre-post change              Question 7   Question 8
Non-negative grading
  Skillometers   Avg.          -0.118       -0.088
                 St. Dev.       0.600        0.476
  Concept Map    Avg.           0.152        0.096
                 St. Dev.       0.566        0.411
  Between-subjects p-value      0.136        0.185
Negative grading
  Skillometers   Avg.          -0.118       -0.136
                 St. Dev.       0.600        0.270
  Concept Map    Avg.           0.152        0.024
                 St. Dev.       0.566        0.329
  Between-subjects p-value      0.136        0.073
Question 8 is one of the five multiple-answer
questions the students had to answer. These are the only
type of question on which guessing has a reasonable
chance of improving the score. There is no significant
difference between the two groups under regular
grading because guessing can yield the correct
answer, and, since question 8 is hard, many students
resorted to guessing. Choosing two correct options and
two incorrect ones (out of a total of two correct and
three incorrect options), for example, will give students
full credit under regular grading while giving them
only 0.33 points under the negative grading scheme.
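The guessing incentive can be quantified by enumerating possible answers to question 8 under both schemes. A sketch with stand-in option indices (options 0 and 1 playing the role of the two correct options):

```python
from itertools import combinations

# Question 8 modeled abstractly: 5 options, of which options 0 and 1
# stand in for the two correct ones.
OPTIONS, CORRECT = range(5), {0, 1}
n, m = 5, 2

def regular(sel):
    return sum(1 / m for o in sel if o in CORRECT)

def negative(sel):
    return regular(sel) - sum(1 / (n - m) for o in sel if o not in CORRECT)

# The paper's example: two correct picks plus two incorrect ones.
sel = {0, 1, 2, 3}
print(regular(sel))             # 1.0 -- full credit despite two wrong picks
print(round(negative(sel), 2))  # 0.33 -- guessing is penalized

# Expected score for a student who checks 4 of the 5 options at random:
# regular grading rewards the strategy (expected score 0.8), while negative
# grading cancels the gain (expected score 0).
guesses = list(combinations(OPTIONS, 4))
exp_regular = sum(regular(g) for g in guesses) / len(guesses)
exp_negative = sum(negative(g) for g in guesses) / len(guesses)
```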
5. Discussion
We found a significant improvement from pre-test
to post-test on questions that are related to arithmetic
expressions, whether regular grading or negative
grading was used. On the other hand, the scores on the
questions unrelated to arithmetic operators did not
improve with time. This means that the improvement
on the related questions did not occur by chance. It
shows that going through the tutor helped students
better answer the questions related to arithmetic
expressions.
We argue that the pre/post-test improvement is not
due to the problems the students solved working with
the tutor, but due to the open student model. Recall that
on most questions, students were asked to select all the
applicable options, e.g., all the arithmetic operators.
Given this overview nature of the questions, one needs
an overview of the topic to glean the correct answer.
Such an overview is provided by the open student
model, whether it is presented as a set of skillometers
or as a concept map. The concept map version of the
open student model has an advantage over skillometers
in that it clarifies the is-a/part-of relationships among the
concepts. Alternatively, students may have constructed
an overview answer by choosing only the operators on
which they solved problems, but this explanation
contravenes Occam’s razor.
The difference between control and test groups
grows statistically more significant as we go from easy
to intermediate to hard questions for both grading
schemes (Table 1). Clearly, as the difficulty of the
questions increases, the gap between the ease of
answering the questions using the two different means
of displaying the student model widens. While it is
relatively straightforward to answer the easy questions
using either of the two ways of displaying the open
student model, it is harder to answer the hard questions
using skillometers than the concept map. In other
words, students are less likely to implicitly learn the
relationships among concepts using a set of
skillometers than using concept maps.
The difference between control and test groups on
question 8 is more significant with negative grading
than with regular grading (Table 2). Clearly, at least
some of the students guessed at least some of the
answers; some of these guesses were correct, and
others were incorrect. By penalizing guessing, negative
grading brought the differences between control and
test groups into sharper focus.
Our goal in this study was to compare the
effectiveness of displaying the open student model as a
set of skillometers versus concept maps. The data
suggests that concept maps are significantly more
effective than a set of skillometers when answering
questions that require synthesizing an overview of the
topic.
6. References
[1] Bull S., 2004. Supporting learning with open learner
models. Proceedings of fourth Hellenic Conference on
Information and Communication Technologies in Education,
Athens, Greece, pp. 47-61.
[2] Bull S., Mangat M., Mabbott A., Abu Issa A.S., Marsh J.,
2005. Reactions to inspectable learner models: seven year
olds to University students. Proceedings of Workshop on
Learner Modelling for Reflection, International Conference
on Artificial Intelligence in Education, pp. 1-10.
[3] Dimitrova, V., Brna, P., and Self, J. A. 2002. The Design
and Implementation of a Graphical Communication Medium
for Interactive Open Learner Modelling. In Proceedings of
the 6th International Conference on Intelligent Tutoring
Systems (June 02). S. A. Cerri, G. Gouardères, and F.
Paraguaçu, Eds. LNCS vol. 2363. Springer-Verlag, London,
pp. 432-441.
[4] Hauser, S., Nückles, M., and Renkl, A. 2006. Supporting
concept mapping for learning from text. In Proceedings of
the 7th international Conference on Learning Sciences
(Bloomington, Indiana, June 27 - July 01, 2006).
International Conference on Learning Sciences, pp. 243-249.
[5] Kay, J. 1997. Learner Know Thyself: Student Models to
Give Learner Control and Responsibility, in Z. Halim, T.
Ottomann & Z. Razak (eds), Proceedings of International
Conference on Computers in Education (AACE), pp.17-24.
[6] Mabbott A., Bull S., 2004. Alternative views on
knowledge: presentation of open learner models. Seventh
International Conference of Intelligent Tutoring Systems.
Springer, Berlin, Heidelberg, pp. 689-698.
[7] Marshall, B. B., Chen, H., Shen, R., and Fox, E. A. 2006.
Moving digital libraries into the student learning space: The
GetSmart experience. J. Educ. Resour. Comput. 6, 1 (Mar.
2006), 2.
[8] Marshall, B., Zhang, Y., Chen, H., Lally, A., Shen, R.,
Fox, E., and Cassel, L. N. 2003. Convergence of knowledge
management and E-learning: the GetSmart experience. In
Proceedings of the 3rd ACM/IEEE-CS Joint Conference on
Digital Libraries (Houston, Texas, May 03). pp. 135-146.
[9] Mazza, R. and Dimitrova, V. 2004. Visualising student
tracking data to support instructors in web-based distance
education. In Proceedings of the 13th International World
Wide Web Conference, (New York, NY, USA, May 04).
WWW Alt. '04. ACM Press, New York, NY, pp. 154-161.
[10] Mazza R. and Milani C. Exploring Usage Analysis in
Learning Systems: Gaining Insights From Visualisations. In:
Workshop on Usage analysis in learning systems. 12th
International Conference on Artificial Intelligence in
Education (AIED 2005). Amsterdam, July 05. pp. 65-72.
[11] Willis, C. L. and Miertschin, S. L. 2005. Mind tools for
enhancing thinking and learning skills. In Proceedings of the
6th Conference on information Technology Education
(Newark, NJ, USA, Oct. 05). SIGITE '05. pp. 249-254.
[12] Zapata-Rivera, J. D. and Greer, J., 2004. Interacting with
inspectable Bayesian student models. International Journal of
Artificial Intelligence in Education, vol. 14(2). pp. 127-163.
This paper presents a novel approach of exploring usage analysis in learning systems by means of graphical representations. Learning systems collect large amounts of student data that can be used by instructors to become aware of what is happening in distance learning classes. Instead of being processed with techniques from user modelling, data is displayed "as it is". Techniques from Information Visualization show how useful insights can be gained from graphical representations. A system called GISMO illustrates the proposed approach. By presenting graphical representations, GISMO allows the user to visualize data from courses collected in real settings. We will show how using graphical representations of student tracking data enables instructors to identify tendencies in their classes, or to quickly discover individuals who need special attention.
Conference Paper
Full-text available
Our work explores an interactive open learner modelling (IOLM) approach where a learner is provided with the means to inspect and discuss the learner model. This paper presents the design and implementation of a communication medium for IOLM. We justify an approach of inspecting and discussing the learner model in a graphical manner using conceptual graphs. Based on an empirical study, we draw design recommendations, which are taken into account in the implementation of the communication medium in STyLE-OLM, an IOLM system in a terminological domain. The potential and improvements of the medium are discussed on the basis of a study with STyLE-OLM.
Conference Paper
Full-text available
This paper describes a study in which individual learner models were built for students and presented to them with a choice of view. Students found it useful, and not confusing, to be shown multiple representations of their knowledge, and individuals exhibited different preferences for which view they favoured. No link was established between these preferences and the students' learning styles. We describe the implications of these results for intelligent tutoring systems where interaction with the open learner model is individualised.
Conference Paper
Full-text available
This paper presents a novel approach to using web log data generated by course management systems (CMS) to help instructors become aware of what is happening in distance learning classes. Specifically, techniques from Information Visualization are used to graphically render complex, multidimensional student tracking data collected by CMS. A system, called CourseVis, illustrates the proposed approach. Graphical representations from the use of CourseVis to visualise data from a Java on-line distance course run with WebCT are presented. Findings from the evaluation of CourseVis are presented, and it is argued that CourseVis can help teachers become aware of some social, behavioural, and cognitive aspects related to distance learners. Using graphical representations of student tracking data, instructors can identify tendencies in their classes, or quickly discover individuals who need special attention.
Article
Full-text available
Inspectable student models focus on the idea of letting students and teachers interact with the representation of the student that the system maintains. Both humans and the system can benefit from this interaction. By externalizing the student model and making it an object for inspection, several representational and interaction issues arise. This paper presents ViSMod (Visualization of Bayesian Student Models), an integrated tool to visualize and inspect distributed Bayesian student models. Using ViSMod, students and teachers can understand, explore, inspect, and modify Bayesian student models. ViSMod offers a practical tool that helps students and teachers to engage in negotiated assessment processes. Student models in ViSMod follow the Bayesian belief net backbone structure proposed by Reye (1996), which describes both cognitive and social aspects of the student. In addition, we report on a usability study of the ViSMod tool and an exploratory study focused on the effects of employing various levels of guidance and support in the way students interact with inspectable Bayesian student models.
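To give a flavour of the kind of belief-net student model that tools like ViSMod visualise, the sketch below shows a single skill node whose probability of mastery is revised by Bayes' rule after each observed answer. This is a hypothetical minimal illustration, not ViSMod's implementation, and the slip/guess parameters are illustrative values, not taken from the paper:

```python
def update_mastery(p_mastery, answered_correctly, slip=0.1, guess=0.2):
    """One Bayesian update of P(skill mastered) given an observed answer.
    slip  = P(incorrect answer | skill mastered)
    guess = P(correct answer | skill not mastered)"""
    if answered_correctly:
        likelihood_mastered = 1 - slip    # correct despite possible slip
        likelihood_unmastered = guess     # correct only by guessing
    else:
        likelihood_mastered = slip
        likelihood_unmastered = 1 - guess
    numerator = likelihood_mastered * p_mastery
    denominator = numerator + likelihood_unmastered * (1 - p_mastery)
    return numerator / denominator

# Starting from an uninformed prior, two correct answers followed by
# one incorrect answer shift the estimated mastery probability:
p = 0.5
for outcome in [True, True, False]:
    p = update_mastery(p, outcome)
print(round(p, 3))
```

An inspectable model displays such node probabilities to the learner; a negotiated-assessment process, as described in the abstract above, additionally lets the learner challenge and adjust them.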
Article
This paper examines student reactions to inspectable learner models. We look at a simple example for children as young as 7, and university students using more complex inspectable learner models - one with multiple views on the model data, and one that can be opened to peers and instructors. We provide a descriptive account of student perceptions of their learner models, to complement the more formal data available elsewhere. This information is useful to those considering opening learner models in their systems, as it provides greater insight into individual student attitudes, which is important when supporting individual learning.
Article
We describe an approach to student modelling where the student can delve into the student model. We describe how this can support learning on several levels. It serves as a basis for planning learning goals, improving communication between the teaching system and the learner, and as an aid to reflection about learning. There is a growing group of researchers working with accessible and understandable student models. We outline the motivations for such approaches. This paper presents the underlying philosophy for the design of our student model and shows the type of model we have built. We discuss problems of scrutability we have encountered and the interesting research agenda they define.
Conference Paper
This study examines concept mapping as a follow-up study strategy for learning from text. Based on a task analysis of the sub-tasks learners must accomplish during mapping, we developed the following support measures: Participants (N = 102) either generated a map (1) from scratch (map-generation), (2) from a list of concepts (concepts-provided), (3) from spatially arranged concepts (concepts-arranged), or, alternatively, they (4) studied a worked-out map (worked-out map). The control-group (5) did not engage in mapping. Presenting a worked-out map enhanced learning most effectively. However, constructing a map from scratch was almost equally helpful. In contrast, students in the half-structured conditions (2 and 3) performed no better than the control condition. We concluded that both studying a worked-out map and generating one's own map allowed learners to devote attention to important parts of the learning contents. Half-structured mapping, in contrast, narrowed attention to specific aspects in a dysfunctional way.
Conference Paper
As early adopters of an emerging technology, the Tablet PC (TPC), certain University of Houston Information Systems Technology faculty began to integrate TPCs into the undergraduate curriculum in Fall 2003. Classroom experiences revealed the tool as particularly engaging to Information Systems Technology students. Thus, the authors now believe that TPCs have great potential to improve critical thinking skills of Information Systems Technology students if activities can be developed that capitalize on the inherent capability of the TPC to support visualization. We have just begun a formal investigation of the effectiveness of the TPC as an instructional tool that facilitates the development of critical thinking and learning skills in undergraduate Information Systems Technology students. The investigation extends previous work on the effectiveness of mind maps for improving critical thinking and problem solving skills by combining the visual learning technique of mindmapping with the emerging technology of the TPC and pen-enabled mindmapping software. The research question to be answered is: Do critical thinking skills of Information Systems Technology students improve when mindmapping activities are incorporated into the classroom and delivered via technology? In this paper, we first provide background information on the development of critical thinking and learning skills, the role of visual learning in the development of critical thinking and learning skills, and visual learning tools and techniques such as semantic networks, concept mapping, and mind mapping. We then describe the scope of the project we are undertaking and provide initial results of development efforts to create instructional modules and activities focused on mindmaps.