
Automated Assessment of Students’ Conceptual Understanding: Supporting Students and Teachers Using Data from an Interactive Textbook

Toby Dragon
Department of Computer Science
Ithaca College
Ithaca, USA
tdragon@ithaca.edu
Carrie Lindeman
Department of Computer Science
Ithaca College
Ithaca, USA
clindem1@ithaca.edu
Abstract: Online, interactive textbooks utilizing multimedia
are continually gaining popularity in computer science courses.
We present a system for automated analysis that can harness the
power of these textbook and practice systems to provide
information about high-level conceptual understanding to
educators. The system presents a visualization using data logged
by an interactive textbook to provide numeric estimates of a
student's knowledge of course concepts. This information can be
used to support teachers and individual students. The basis of
our system is a Concept Graph, an artifact representing the
concepts to be taught during the course, and their interrelations.
We describe the manner in which our system uses these Concept
Graphs to provide useful information to educators and students.
Keywords: Computer Science education; Intelligent Tutoring
Systems (ITS); Expert Knowledge Base (EKB); E-Learning;
Automated Assessment
I. INTRODUCTION
Online, interactive textbooks utilizing multimedia are
gaining popularity in computer science education, e.g., [1, 2].
These systems provide the benefit of presenting information
using text, video, and interactive components that allow
students to practice while learning. The increase in out-of-
classroom instruction can potentially allow for a blended
learning environment, where students learn many of the basic
principles from the interactive textbook and practice
environment. Educators can then focus more class time on
collaborative work, discussion, and more challenging
subjects. However, the educator gains less information about
their students' grasp of the core concepts that are learned at
home.
Often, these online interactive systems are accompanied
by a set of tools that allow educators to see the submitted
answers to questions and aggregated information about the
class as a whole. However, the systems rarely offer the
educator any indication of students’ understanding of high-
level concepts, as the systems do not generally include any
type of information that would facilitate that process.
We present a system that offers information about
students’ understanding of high-level concepts based on their
work within an online interactive system. The system allows
educators to specify the important concepts, their
interrelation, and their relation to the exercises that students
complete. In this way, we combine the educator’s
understanding of the course material with the data from the
online practice environment. Utilizing open-source projects,
we developed a tool to provide feedback and assess
understanding. This tool supports educators in their attempts
to help individual students and can potentially help students
directly.
We now present a scenario to offer the reader a glimpse
into the potential uses of the tool (Section II). We then present
the underlying technology that provides the functionality
described (Section III) and some trade-offs involved in
automation vs. manual creation of the necessary artifacts
(Section IV). We conclude by describing our plans for using
and improving the system (Section V).
II. ASSESSING AND SUPPORTING STUDENTS
Here we describe a hypothetical scenario to demonstrate
our system’s potential. This scenario describes the expected
context, manner of use, and productive outcomes of the
system, allowing the reader to see the purpose of the tool and
the manner in which the tool supports assessment of student
performance.
Imagine that the instructor of an Introductory Computer
Science course notices that her student, Jeffrey, appears to be
struggling with the course. She decides to consult our
automated textbook assessment tool to see if she can identify
any specific misconceptions. She looks over the visualization
of Jeffrey’s understanding (Figure 1).
She first notices a problem with Jeffrey's answer to a
particular question involving concepts from multiple topics,
including function definitions, lists, loops, and if statements.
Jeffrey's classwork seems to demonstrate problems in all of
these areas. However, the graph of his understanding indicates
that function definitions, loops, and if statements are all
relatively strong. She notes that he has a
very poor rating for his understanding of lists, due to his
incorrect answers on practice questions and the fact that he
has not viewed any of the videos on the topic. When Jeffrey comes to office
hours for help, she quickly identifies his misconceptions about
lists, and this allows him to solve future complex problems
employing lists and other skills.
This basic scenario describes one simple use case of our
assessment tool and demonstrates a practical application for
an educator. The tool provides the ability to quickly see an
overview of the concepts presented in a given course, along
with an assessment of a given student's understanding
of those concepts, based on real data from student
performance. We now describe in detail the technology and
the content development necessary to make this tool function
as described in this scenario. We can then discuss the
additional benefits and alternative uses of this system.
III. UNDERLYING TECHNOLOGY
Figure 1 represents a partial graph of certain concepts
covered in an Introductory Computer Science course and
their interrelations. Our system requires such a graph of the
concepts covering the entire course in question. We refer to
this graph as a Concept Graph, which is a type of Expert
Knowledge Base (EKB). EKBs are a common tool used in
Intelligent Tutoring Systems (ITSs) research [3]. We use this
Concept Graph to automatically analyze the information
collected from an online textbook to produce assessment of
high-level concepts based on low-level information about
student performance collected from the learning
environment. We produce a student model, specifically an
overlay model [3] of this EKB, which assigns each concept a
score from -1 to 1 estimating the student's understanding of that concept. This
high-level information can be used by educators to quickly
and easily offer pertinent feedback to students, as described
in the scenario above. There have been other attempts at
similar assessment techniques [4, 5], but the EKBs are
controlled entirely by the development teams. We offer a
system where the EKB is dynamic and can vary from teacher
to teacher based on preference.
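To make the idea of an overlay model concrete, the sketch below annotates a shared Concept Graph with per-student scores. The dict-based encoding and all names are purely illustrative; they are not the system's actual data structures.

```python
# A minimal sketch of an overlay student model: the expert Concept Graph
# (shared by all students) annotated with a per-student score in [-1, 1].
# Representation and names are illustrative only.
concept_graph = {                  # parent concept -> child concepts ("is a part of")
    "If Statement": ["Boolean Expression"],
    "Expressions": ["Boolean Expression"],
    "Boolean Expression": [],
}

student_model = {                  # concept -> estimated understanding in [-1, 1]
    "If Statement": 0.6,
    "Expressions": 0.2,
    "Boolean Expression": -0.4,    # negative: likely misunderstanding
}
```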
We now present the details of our system to clarify the
inner workings of its assessment mechanism. We must
consider the online learning tool from which we collect low-
level student performance information, the structure and
content of the Concept Graph, and their interconnection.
Then we can consider the analysis system that aggregates
data and offers assessment of student performance based on
the relevant Concept Graph.
A. The Learning Environment
The learning environment used by the students is an
online, interactive textbook system called Runestone
Interactive [1, 6]. This system presents multimedia for
instruction (text and video) interleaved with practice exercises
(Figure 2). Exercises include multiple choice questions, drag
and drop questions, a live Python debugging tool, and text
areas where Python code can be written and executed.
Specifically, we are focused on the flagship textbook of
this system "How to Think Like a Computer Scientist," which
presents introductory programming in Python. This textbook
system is used in our curriculum to promote blended learning,
where students work independently online as well as in the
classroom. Students are expected to read and complete
exercises that introduce a topic for homework. Topics are then
covered in more detail during class time, and finally students
return to the online system to complete more challenging
exercises for practice and to read and watch videos to help
cement ideas.
The content of the textbook is defined in text files,
including links to video files. Developing and changing
content is technologically simple. The system logs all user
actions to a database and provides these data in XML format
upon request. We use these data as the input to our automated
analysis system.

Figure 1. The information shared with the instructor as she examines a specific student assessment.

Figure 2. A screenshot of the learner's view of the Runestone Textbook, showing text and multiple-choice questions.
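The exact log schema is not reproduced here; the sketch below merely illustrates the kind of input the analysis consumes. The element and attribute names are hypothetical, not Runestone's actual format.

```python
import xml.etree.ElementTree as ET

def read_exercise_results(xml_path):
    """Parse logged actions into a (student, exercise) -> score map in [-1, 1].
    Element/attribute names are illustrative, not Runestone's real schema."""
    results = {}
    for action in ET.parse(xml_path).getroot().iter("action"):
        student = action.get("student")
        exercise = action.get("exercise")
        correct = action.get("correct") == "true"
        results[(student, exercise)] = 1.0 if correct else -1.0
    return results
```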
B. The Concept Graph
The purpose of the Concept Graph is to make explicit all
concepts that are taught in class, as well as the relationships
between these concepts. These concepts could be at any level
of granularity that the educator chooses. The end product is a
Directed Acyclic Graph (DAG) that provides the system with
a domain model [3], a model of the subject matter to be taught.
Figure 3 shows the current version of our Concept Graph,
designed for our Introductory Computer Science course in
Python. Nodes in the graph are concepts taught in the
course, and edges represent roughly the relationship “is a part
of.” For example, the node for If Statement has an edge
connected to Boolean Expression because understanding of
boolean expressions is a part of understanding if statements.
Nodes without parents are generally high-level, abstract
concepts, and nodes without children are generally low-level,
concrete concepts. The graph spans from concrete to
abstract because the system assesses a student's
understanding of an abstract concept based on their estimated
understanding of lower-level concepts. These estimates
originate from actual student performance data provided by
the textbook. The means by which the textbook data is
connected to the Concept Graph are discussed in Section III.C.
Note that we refer to this artifact as the “current” version
of the Concept Graph. The creation of such an artifact is
obviously subjective and prone to debate, even within a single
team or by a single author. The concepts can be at any level of
granularity, and the decision of when such a graph is “correct”
or “complete” is difficult. Much research has been done in the
area of concept inventories to solidify a specific validated set
of concepts for specific courses (e.g., [7]) and this work can
be built upon to create these graphs. However, every educator
must make decisions about the specific content, challenge,
and organization when deciding the specifics of a given
course. Therefore, our Concept Graph is not tied to one
specific concept inventory. Instead, it is a living structure that
can be iteratively improved with ease. The graph itself is
defined in a text file (in JSON format) by a list of nodes and a
list of edges, and is therefore easily editable. The real
test of a "correct" graph is its usefulness when an
educator employs the system in the realistic setting of their
class.
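The paper states only that the file contains a list of nodes and a list of edges; as an illustration (the field names are our guess, not the exact schema), such a file and the code to load it might look like this:

```python
import json

# Hypothetical graph file contents; real field names may differ.
EXAMPLE_GRAPH = """
{
  "nodes": ["Expressions", "If Statement", "Boolean Expression"],
  "edges": [["If Statement", "Boolean Expression"],
            ["Expressions", "Boolean Expression"]]
}
"""

def load_concept_graph(text):
    """Build a parent -> children adjacency map from the JSON definition."""
    data = json.loads(text)
    children = {name: [] for name in data["nodes"]}
    for parent, child in data["edges"]:
        children[parent].append(child)
    return children

print(load_concept_graph(EXAMPLE_GRAPH))
# {'Expressions': ['Boolean Expression'],
#  'If Statement': ['Boolean Expression'],
#  'Boolean Expression': []}
```

Note that Boolean Expression appears as a child of two parents, exactly the kind of multi-parent relationship that distinguishes a hand-crafted Concept Graph from a tree (see Section IV).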
Considering this complex authoring task, any educator
might be (rightly) concerned about the effort involved in
creating the graph and the ambiguity of its form. To address
this concern and offer a simple example solution, a default
graph can be automatically built from the textbook source files
of any text authored in the Runestone system. A textbook is
inherently structured into chapters and subsections, which can
be used to create the graph structure (Figure 4).
Figure 3. The Concept Graph for the subject Introductory Computer Science.

Figure 4. A portion of the automatically generated Concept Graph from the Runestone Textbook.
This automation offers default functionality of the
assessment system without the need for further authoring work.
The default structure can then also be modified by the authors
to suit their needs. The trade-offs of using an automatically
constructed Concept Graph versus editing or constructing
one’s own Concept Graph are discussed in Section IV.
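As a rough illustration of this default construction (the real parsing of the Runestone book source is more involved), a chapter/subsection outline can be turned directly into a tree-shaped Concept Graph whose leaves are automatically linked to their exercises:

```python
def graph_from_toc(toc):
    """Derive a tree-shaped Concept Graph from a chapter/subsection outline.
    Returns a parent -> children map plus the exercises auto-linked to each
    leaf (subsection) node. A sketch only; the real book-source parsing differs."""
    children, linked = {"Book": []}, {}
    for chapter, subsections in toc.items():
        children["Book"].append(chapter)
        children[chapter] = list(subsections)
        for subsection, exercise_ids in subsections.items():
            children[subsection] = []
            linked[subsection] = list(exercise_ids)
    return children, linked

# Hypothetical outline fragment with invented exercise identifiers.
toc = {"Selection": {"Boolean Values": ["q-3-1", "q-3-2"],
                     "Chained Conditionals": ["q-3-7"]}}
children, linked = graph_from_toc(toc)
```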
C. Connecting Concept Graph to Data
As previously stated, the exercises and video content in
the learning environment are organized by text files, and the
data from these video interactions and exercises are recorded
in a database and retrievable in an XML format. These data
must now be linked to nodes in our Concept Graph to assess
the student’s understanding of high-level concepts. This
connection can be made in two different ways.
The first and simplest way is to rely on the automated
Concept Graph created from the textbook source files. When
parsing the source files of subsections, the questions and
videos within those subsections can automatically be linked
to the corresponding nodes. In this way, a fully functional
Concept Graph including connections to the data source can
be automatically created.
When a Concept Graph differs from the textbook source,
the connection from exercises to the nodes in the graph must
be specified. This is accomplished by tagging the exercises in
the text source with the names of concepts from the Concept
Graph. Authoring is then the rather intuitive task of tagging
each exercise or video in the text with the concepts from the
graph that are relevant to it. The exercises
are generally connected with lower-level concepts, but the
system allows for connections to any node in the graph. For
example, videos or questions that speak broadly about a topic
might be connected directly to that high-level topic. Again,
the trade-offs between automation and manual authoring are
discussed in Section IV.
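A minimal sketch of such tagging follows; the item identifiers and the mapping format are illustrative only, since the actual Runestone directive syntax is not shown here.

```python
# Hypothetical concept tags an author might attach to items in the book source.
EXERCISE_TAGS = {
    "q-lists-04": ["Lists"],
    "video-iteration-intro": ["Loops", "Lists"],   # one item may evidence several concepts
    "q-overview-01": ["Program Structure"],        # broad items may link to high-level nodes
}

def exercises_for_concept(concept, tags=EXERCISE_TAGS):
    """Invert the tagging so each Concept Graph node knows its linked evidence."""
    return [item for item, concepts in tags.items() if concept in concepts]
```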
D. The Automated Assessment System
Now that the data source, the Concept Graph, and their
connection are established, we consider the manner in which
we use this structure to offer automated assessment. We
employed, altered, and extended open-source software
originally developed for an unrelated project, the Metafora
Project [8]. This software was developed to conduct
automated analysis across many different computer-based
learning tools [9], and therefore provides a basis for
collecting and analyzing individual actions in XML format
[10]. It offers a web-based framework (built on Google Web
Toolkit, GWT [11]) in which analysis can be coded in Java
on the server-side and all available data can be viewed,
filtered, and explored in a web client. These data range from
the low-level actions by the student to high-level analysis
performed by the system.
Algorithmically, the system reads the Concept Graph
structure from either JSON or the book source and creates a
representation in memory that includes leaf nodes for the
specified exercises related to the graph (as described in
Section III.C). The system collects data from the book about
a given student’s use of video and performance on exercises
and populates the leaf nodes with these data. The system then
uses a post-order traversal of the Concept Graph to aggregate
actual data in a bottom-up fashion from the lowest levels of
the graph to all nodes in the graph. Currently, these
calculations are weighted averages. Specifically, the value of
each node is the average of the assessments connected
directly to the node along with the value of all child nodes,
weighted by the amount of assessments connected to each
node.
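The sketch below captures this weighting scheme; it is an illustration of the calculation described in the text, not the authors' actual Java implementation.

```python
def aggregate(node, children, evidence):
    """Bottom-up (post-order) weighted average over the Concept Graph.
    `evidence` maps a node to the scores (in [-1, 1]) of directly linked
    exercises and videos; returns (score, weight), where weight counts the
    assessments contributing to the node's subtree."""
    direct = evidence.get(node, [])
    total, weight = sum(direct), len(direct)
    for child in children.get(node, []):
        child_score, child_weight = aggregate(child, children, evidence)
        total += child_score * child_weight
        weight += child_weight
    return (total / weight if weight else 0.0), weight
```

Note that this naive recursion would revisit a node once per parent in a true DAG; memoizing per-node results avoids recomputation and double counting.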
Our team is working to apply different statistical
techniques to improve these calculations by taking a data-driven
approach to calculating both the appropriate weight for a
given data point and the appropriate weight for the edges in
the graph. We are analyzing response patterns and question
characteristics using item response theory [12] to investigate
difficulty and discrimination of individual questions. We are
also employing factor analysis [13] to determine how to best
group questions to represent appropriate latent traits. These
tools together will help inform decisions about the weights of
the leaves and edges in our graph, as well as offer
information that can be used to improve the structure of the
graph. Details of this process are beyond the scope of this
paper and will be included in future publications.
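As a reminder of the kind of model involved, the standard two-parameter logistic (2PL) IRT model expresses the probability of a correct response in terms of item difficulty and discrimination; how such estimates feed into our weights is exactly the deferred detail.

```python
import math

def p_correct(theta, difficulty, discrimination):
    """Two-parameter logistic (2PL) item response model:
    P(correct | theta) = 1 / (1 + exp(-a * (theta - b))),
    where theta is student ability, b is item difficulty, a is discrimination."""
    return 1.0 / (1.0 + math.exp(-discrimination * (theta - difficulty)))
```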
This bottom-up calculation will provide the user with
estimates of any higher-level concepts about which students
have already answered questions. However, we can also use
the graph structure to estimate students' understanding of
concepts about which we have not directly collected
information. After the bottom-up computation, we can then
employ a top-down, pre-order traversal of the DAG that uses
estimates of higher-level concepts to predict understanding of
lower-level concepts about which data has not yet been
collected. While these predicted scores could be less accurate
since there is no direct evidence for them, they are potentially
quite useful, which will be discussed in Section V.
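One simple way to realize this top-down pass is to let any node without direct or child evidence inherit its parent's estimate; the paper does not fix the exact prediction rule, so straight inheritance is shown here only as the simplest case.

```python
def predict_missing(node, children, scores, weights, inherited=None):
    """Top-down (pre-order) pass: a node with no evidence inherits an
    estimate from above. `scores`/`weights` come from the bottom-up pass."""
    if weights.get(node, 0) == 0 and inherited is not None:
        scores[node] = inherited           # predicted, not observed
    for child in children.get(node, []):
        predict_missing(child, children, scores, weights, scores.get(node))
```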
These calculations result in a numerical value for each
node, which we then convert into a color code to be presented
to the educator. Color codes simplify the overwhelming
amount of data available to the educator. Such a simplified
interface has been shown to allow teachers to identify needed
information in a short amount of time [14]. Using Google
Charts [15] tools, we display a chart where each node is
labeled with its concept title, has a colored border to represent
the node score, and can be clicked to display all related data,
including the source of these color codes.
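The cut-offs behind the color codes are not stated here; a hypothetical mapping from the [-1, 1] score to a traffic-light border color might look like the following.

```python
def color_for(score):
    """Map a node's score in [-1, 1] to a border color for the chart.
    The tool's actual thresholds are not stated; these are illustrative."""
    if score is None:
        return "gray"        # no data collected yet
    if score < -0.33:
        return "red"         # likely weak understanding or misconception
    if score < 0.33:
        return "yellow"
    return "green"
```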
Taking a holistic view of the system, one can now see the
manner in which this tool utilizes the textbook system,
student data, and the defined Concept Graph to offer the
behavior described in the scenario presented in Section II.
This system provides educators a powerful tool to peruse the
estimated understanding of any given student based on their
work within the online textbook for the class and to drill
down into that estimate to understand the source of the
estimation.
IV. AUTHORING VERSUS AUTOMATION
Throughout Section III we explained the manner in which
the Concept Graph and its connections to individual questions
could be created in two different fashions: either automatically
or manually. It is worth noting again that an author need not
choose only one or the other, but can begin with an automated
Concept Graph and edit it to suit their needs.
We now discuss this decision in more detail to highlight
an interesting underlying implication of our system: the
contrast between a graph and a tree structure.
A key difference between a hand-crafted Concept Graph
and one that is automatically generated from the book Table
of Contents (TOC) is that this auto-generated graph will
always be a tree structure (each node having only one parent),
as seen in Figure 4. No question is featured in multiple
subsections and no subsection is featured in multiple chapters.
In contrast, the nodes in a handcrafted Concept Graph might
have multiple parents. For example, in Figure 3, the node for
Boolean Expression has both Expressions and If Statements as
parent nodes. This difference in structure has potentially
profound effects on the resulting calculations, as it can reveal
a learner’s progress over time. The node for Boolean
Expression may have a score that fluctuates based on the
learner’s current work. The learner may not understand
boolean expressions at first, but gain confidence when
working with if statements. In a handcrafted Concept Graph
the progression is recognized in the visualization because
many different sections of the text might be related to Boolean
Expressions. In the TOC Concept Graph, there is only one
section of the book that affects this node, yielding a more
static picture of the learner's performance. It should be noted that there is
potential for more complex Concept Graphs to be
automatically generated (e.g., from structured content other
than TOC, or from keyword recognition within text). This
type of functionality would integrate well with our system, but
is not our current focus.
Our automated analysis system was designed to handle
any DAG, so calculations for assessment can function
regardless of whether the graph is a tree or not. However, a
clean and understandable automated display of a DAG is a
much more difficult task than automated display of a tree
structure. Off-the-shelf free software (in our case Google
Charts) offers useful web visualization of tree structures, but
we have not found similar software for displaying a DAG. We
considered the idea of manual creation of a DAG
visualization, but this would not be easily modifiable and is
certainly not appropriate during the initial stage of rapid,
iterative experimentation and development of the DAG.
We have mitigated this display problem by creating a
function that converts a DAG into a tree structure. This works
by duplicating any node that has multiple parents, once for
each parent. This tree structure is used only for display
purposes, and allows organized visualization of our Concept Graphs (Figure 5).
As can be seen in the figure, this tree is useful, but can
easily become large and repetitive, making this solution less
than ideal. Better software for DAG visualization is an area of
active investigation.
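A compact version of the conversion just described is sketched below; it illustrates the duplication idea, not the project's code.

```python
def dag_to_tree(node, children):
    """Expand a DAG into a display tree: every time a node is reached from a
    different parent, it (and its whole subtree) is duplicated. Used only
    for visualization, which is why the result can grow large and repetitive."""
    return {"name": node,
            "children": [dag_to_tree(child, children)
                         for child in children.get(node, [])]}
```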
V. CONCLUSIONS AND FUTURE WORK
Our system utilizes an online, interactive textbook to offer
high-level assessment of student understanding. The specific
examples demonstrate the ways in which this system can aid
educators in assisting individual students and improving the
overall course content.
We have demonstrated the software and given instructors
some time to experiment with the tool in a hypothetical
scenario and offer their feedback. Seven computer science
professors worked with the tool for roughly thirty
minutes each, during and after which they offered their
thoughts and guidance on development. Overall, instructors
were excited by the visual way that they could quickly analyze
an individual student's strengths and weaknesses. The largest
concern was the overwhelming amount of information, which
led to several recommendations for improvement. We plan to
make small adjustments to address some of these suggestions,
and to use the system with professors' real classroom data in
the coming semester.
Figure 5. A subsection of our EKB Concept Graph converted to a tree and displayed in our assessment tool.
While we appreciate the tool in its current state, we also
see this work as the foundation of a much larger system that
can provide further support for Computer Science education.
Future directions include increasing the system’s ability to
understand different types of exercises, providing automated
intervention, and creating dynamic collaborative groups based
on students' current understanding. We explain each of these
briefly, to give the reader a sense of the overall potential of
this system.
A clear next step for the project is to expand the sources
of information about student performance. Our current system
is limited in the types of exercises from which it can glean
information. The first version of the system relies mainly on
video views and question types that can be graded automatically
(e.g., multiple choice and drag-and-drop). However, the
textbook also has interactive code windows and programming
exercises that are currently hand graded by the instructor. We
are aware of the body of research in automated assessment of
code snippets and full programs (e.g., [16]). Ideally, we plan
to integrate this type of software with our system. Receiving
assessment on these more open-ended tasks has the potential
to drastically improve our estimates of a student’s conceptual
and procedural understanding.
Beyond collecting more data to improve estimates, our
system also has the potential to provide automated adaptation
and/or feedback based on these estimates. This would bring
our system fully into the realm of ITS. As described above,
the first version of our system relies on the instructor to
provide intervention. A future goal is to automate certain types
of intervention, easing the load on the educator and helping
students when an educator is not readily available. The system
could directly interact with students, offering suggestions of
content to review and exercises to practice. The Concept
Graph already contains the necessary mapping to provide
these suggestions. The system identifies problem areas for a
student (labeled in red on their Concept Graph). The next step
is to traverse the graph to reach appropriate leaf nodes, which
represent content that the student should review. The greater
task at hand is to create a user interface that integrates with the
online textbook to present these suggestions in a useful and
intuitive manner. In a similar fashion, we could consider
macro-adaptation, where the book’s content is altered to
present only the information for which the user is most
prepared. A large body of research in such prior work is
available to leverage (e.g., [17]).
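Returning to the intervention mechanism described above, a hedged sketch of the traversal from a weak (red) concept down to reviewable leaf content follows; the threshold and traversal rule are illustrative only.

```python
def review_suggestions(concept, children, scores, threshold=-0.33):
    """From a weak concept, walk down to the weak leaf nodes, which
    correspond to concrete content (sections, videos, exercises) the
    student could review. Threshold and rule are illustrative."""
    kids = children.get(concept, [])
    if not kids:
        return [concept]                      # leaf: content to revisit
    suggestions = []
    for child in kids:
        if scores.get(child, 0.0) <= threshold:
            suggestions.extend(review_suggestions(child, children, scores, threshold))
    return suggestions
```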
As a final direction of planned future work, we also see
potential for our system in the realm of Computer Supported
Collaborative Learning (CSCL). Particularly, we recognize
the importance and challenge of group formation [18], and we
see the potential to offer interesting results in the subfield of
dynamic group formation. Specifically, we plan to use the
information available from the Concept Graphs of different
students to pair students that could work productively together
[19]. We can also use the specific portion of the Concept
Graph that prompted the pairing to suggest exercises that
would be productive for the pair. This grouping concept is an
active area of research for our team, and we plan to pilot test
pairings within the year.
REFERENCES
[1] Runestone Interactive. http://runestoneinteractive.org/ accessed
8/11/2017.
[2] ZyBooks. http://www.zybooks.com/ accessed 8/11/2017.
[3] Woolf, B. P.: Building intelligent interactive tutors: Student-centered
strategies for revolutionizing e-learning. Morgan Kaufmann. (2010)
[4] Butz, C. J., Hua, S., & Maguire, R. B.: A web-based bayesian
intelligent tutoring system for computer programming. Web
Intelligence and Agent Systems: An International Journal, 4(1), 77-97
(2006)
[5] Sosnovsky, S., & Brusilovsky, P. (2015). Evaluation of topic-based
adaptation and student modeling in QuizGuide. User Modeling and
User-Adapted Interaction, 25(4), 371-424.
[6] Miller, B. N., & Ranum, D. L.: Beyond PDF and ePub: toward an
interactive textbook. In: Proceedings of the 17th ACM annual
conference on Innovation and technology in computer science
education, pp. 150-155. ACM. (2012)
[7] Goldman, K., Gross, P., Heeren, C., Herman, G., Kaczmarczyk, L.,
Loui, M. C., & Zilles, C. (2008). Identifying important and difficult
concepts in introductory computing courses using a delphi process.
ACM SIGCSE Bulletin, 40(1), 256-260.
[8] The Metafora Project. http://www.metafora-project.org/ accessed
8/11/2017.
[9] Dragon, T., Mavrikis, M., McLaren, B. M., Harrer, A., Kynigos, C.,
Wegerif, R., & Yang, Y. (2013). Metafora: A web-based platform for
learning to learn together in science and mathematics. IEEE
Transactions on Learning Technologies, 6(3), 197-207.
[10] Dragon, T., McLaren, B. M., Mavrikis, M., & Geraniou, E. (2011,
July). Scaffolding collaborative learning opportunities: integrating
microworld use and argumentation. In International Conference on
User Modeling, Adaptation, and Personalization (pp. 18-30). Springer
Berlin Heidelberg.
[11] Google Web Toolkit. http://www.gwtproject.org/ accessed 8/11/2017
[12] Embretson, S. E., & Reise, S. (2000). Psychometric methods: Item
response theory for psychologists. Mahwah, N.J: Lawrence Erlbaum
Associates, Publishers.
[13] Everitt, B., & Hothorn, T. (2011). An introduction to applied
multivariate analysis with R. New York: Springer.
[14] Mavrikis, M., Gutierrez-Santos, S., & Poulovassilis, A. (2016). Design
and evaluation of teacher assistance tools for exploratory learning
environments. In Proceedings of the Sixth International Conference on
Learning Analytics & Knowledge (LAK’16). ACM, pp. 168-172. ISBN
9781450341905.
[15] Google Charts. https://developers.google.com/chart/ accessed
8/11/2017
[16] Pieterse, V. (2013, April). Automated assessment of programming
assignments. In Proceedings of the 3rd Computer Science Education
Research Conference on Computer Science Education Research (pp.
45-56). Open Universiteit, Heerlen.
[17] Hsiao, I. H., Sosnovsky, S., & Brusilovsky, P. (2010). Guiding students
to the right questions: adaptive navigation support in an E-Learning
system for Java programming. Journal of Computer Assisted Learning,
26(4), 270-283.
[18] Dillenbourg, P. (2002). Over-scripting CSCL: The risks of blending
collaborative learning with instructional design. Three worlds of CSCL.
Can we support CSCL?, 61-91.
[19] Dragon, T., Lindeman, C., Wormsley, C., & Lesnefsky, D. (2016).
Better than Random: Can We Use Automated Assessment to Form
Productive Groups on the Fly? In the 13th International Conference on
Intelligent Tutoring Systems (ITS-2016), Workshop on Intelligent
Support for Learning in Groups (ISLG).