Collazos, C. A., Guerrero, L. A., Pino, J. A., Renzi, S., Klobas, J., Ortega, M., Redondo, M. A., & Bravo, C. (2007). Evaluating
Collaborative Learning Processes using System-based Measurement. Educational Technology & Society, 10 (3), 257-274.
Evaluating Collaborative Learning Processes using System-based Measurement
César A. Collazos1, Luis A. Guerrero2, José A. Pino2, Stefano Renzi3, Jane Klobas4,
Manuel Ortega5, Miguel A. Redondo5 and Crescencio Bravo5
1IDIS Research Group, Department of Systems, FIET, University of Cauca, Colombia // ccollazo@unicauca.edu.co
2Department of Computer Science, Universidad de Chile, Chile // luguerre@dcc.uchile.cl // jpino@dcc.uchile.cl
3IMQ Institute of Quantitative Methods, Bocconi University, Milano, Italy // stefano.renzi@unibocconi.it
4UWA Business School, University of Western Australia, Australia // jane.klobas@uwa.edu.au
5Department of Information Technologies and Systems, University of Castilla, Spain // Manuel.Ortega@uclm.es //
Miguel.Redondo@uclm.es // Crescencio.Bravo@uclm.es
ABSTRACT
Much of the research on collaborative work focuses on the quality of the group outcome as a measure of
success. There is less research on the collaboration process itself, but an understanding of the process should
help to improve both the process and the outcomes of collaboration. Understanding and analyzing collaborative
learning processes requires a fine-grained analysis of group interaction in the context of learning goals. Taking
into account the relationships among tasks, products and collaboration, this paper presents a set of measures designed to evaluate the collaborative learning process. We emphasise direct system-based measures, based on data produced by a collaborative learning system during the collaboration process, and suggest that these measures can be enhanced by also considering participants’ perceptions of the process.
Keywords
Evaluating collaborative learning processes, CSCL, Collaboration processes, Group interaction
Introduction
Research on collaborative learning was concerned, initially, with the role of the individual in the group, then later
with understanding the group itself, comparing the effectiveness of collaborative learning with individual learning
(Dillenbourg et al., 1995). A number of independent variables have been identified and widely studied, including
group size, group composition, the nature and the objectives of the task, the media and communication channels, the
interaction between peers, the reward system and sex differences, among others (Adams et al. 1996; Dillenbourg et
al., 1995; Slavin, 1991; Underwood et al., 1990). An alternative approach is to study collaboration processes (Barros
et al., 1999, Brna et al., 1997). Indeed, it has been argued that understanding the process of collaboration is necessary
to understand the value of collaborative learning (Muhlenbrock et al., 1999). The work reported in this paper
concerns collaboration processes in computer-supported collaborative learning (CSCL).
Collaboration is “the mutual engagement of participants in a coordinated effort to solve a problem together”
(Roschelle et al., 1991). Research on collaboration processes in CSCL is difficult because it is hard to measure
collaboration for a number of reasons. These include:
• Collaborative learning technologies must go beyond generic groupware applications, and even the basic technology is not yet well developed (Stahl, 2002a).
• CSCL technology is difficult to assess because it must be used by groups, not individuals (Muhlenbrock, 1998).
• System-based measures of collaborative interactions tend to lose the collaborative content (Stahl, 2002b).
• Effective collaborative learning depends on subtle social factors and pedagogical structuring, not just simple tasks and technologies (Dillenbourg, 1999).
A number of different theoretical and methodological approaches have been taken to deal with these problems.
Barros and Verdejo (Barros et al., 1999) analyzed students’ online newsgroup conversations and computed values for
initiative, creativity, elaboration and conformity. Inaba & Okamoto (Inaba et al., 1997) implemented a system that
used a finite state machine to determine the level of coordination taking into account the flow of conversation of the
group participants. Muhlenbrock and Hoppe (Muhlenbrock, 1999) developed a framework and system for
determining conflicts in focus setting as well as initiative shifts in collaborative sessions on problem solving.
Constantino-González et al. (Constantino-González et al., 2001) developed a system which identifies learning
opportunities based on studying differences among problem solutions and tracking levels of participation. The ICALTS project identified indicators of students’ interactions at the meta-cognitive level which might enable them to
self-regulate or to assess their activity (ICALTS, 2004). Using activity theory as a theoretical framework, Barros et
al. (Barros et al., 2001) developed a model to find “representational mechanisms for relating and integrating the
collaborative learning elements present in real practical environments”. Martínez et al. (Martinez et al., 2002)
adopted a situated learning perspective. They defined a model that integrated group context and learning style. Soller
& Lesgold (Soller et al., 1999) developed an approach to analyze collaborative learning using hidden Markov
models, drawing on ethnomethodology (Garfinkel, 1967) and conversational analysis (Sacks, 1992). Drawing on the
ideas in many of these earlier studies, Collazos et al. (Collazos et al., 2007) developed a mechanism which includes
activities that provide the opportunity for students to examine a collaborative task from various perspectives so as to
make choices and reflect on their learning both individually and socially. Their model is based on tracing all the
activities performed during a collaborative activity, affording review much as a video artifact does through pauses, stops, and seeks in the stream of recorded actions. Despite this wealth of studies, there has been a lack of attention to systematic
evaluation of the quality of the collaboration process and definition of measures that might apply across different
applications.
Several researchers emphasize the quality of the group outcome as a criterion for the success of collaborative
learning. Typically, evaluation of collaborative learning has been made by means of examinations or tests to
determine how much students have learned. That is to say, a quantitative evaluation of the quality of the outcome is
made. Some techniques of collaborative learning use this strategy (e.g. “Student Team Learning” (Soller et al.,
2000), “Group Investigation” (Sharan et al., 1990), “Structural Approach” (Kagan, 1990) and “Learning Together”
(Johnson et al., 1975)). This approach focuses on the intellectual product of the learning process rather than on the
process itself (Linn et al., 1992). However, not all group learning is collaborative. It is common to find groups with divisive conflicts and power struggles; members who sit quietly and do not participate in the discussions; one member who does all the work while the others talk about unrelated subjects; or a more talented member who comes up with all the answers, dictates to the group, or works separately, ignoring the other group members. While supporting individual learning requires an understanding of the individual thought process,
supporting group learning requires an understanding of the process of collaborative learning (Soller et al., 1999). The
designer of a collaborative learning activity needs therefore to design an activity that requires collaboration, i.e., so
that the success of one person is bound up with the success of others (Collazos et al., 2001). This relationship is
referred to as positive interdependence. Investigators have developed different ways to structure positive
interdependence in software tools based on the interface design to ensure that students think “we” instead of “me”
(Collazos et al., 2003a).
Because of the very complex interactions that occur in truly collaborative systems, where learning occurs through
interaction among group members, understanding and analyzing the collaborative learning process requires analysis
of group interaction in the context of learning goals. These goals may include both learning the subject matter
(“collaborating to learn”) and learning how to effectively manage the interaction (“learning to collaborate”) (Soller et
al., 2000). It is the second of these aspects of collaborative learning that is perhaps hardest to understand in detail.
This is learning that is not merely accomplished interactionally, but is actually constituted of the interactions among
participants (Suthers, 2005; Stahl, 2006). Therefore, when we evaluate a CSCL system, it is important not only to evaluate the various mechanisms the software tool provides to help people learn through collaborative applications, but also to include elements which allow evaluation of how people carry out a collaborative activity, taking into account their attitude towards collaboration. Following Garfinkel, Koschmann et
al., (Koschmann et al., 2005) argue for the study of methods of building meaning: “how participants in [instructional]
settings actually go about doing learning”. In addition to understanding how the cognitive processes of participants
are influenced by social interaction, we need to understand how learning events themselves take place during
interactions among participants. Thus, we note that additional work is needed to understand the process of
collaboration. This knowledge could be applied to develop computational methods for determining how to best
support and assist the collaborative learning process (Collazos et al., 2003b). Our paper addresses this challenge.
We begin by breaking down the collaborative learning process into stages. This allows us to identify indicators that
can be used to evaluate collaborative learning during that part of the process where students are learning through
interaction. It also allows us to focus on what might be the outcomes of learning to collaborate during the process.
We then describe some software tools we have developed to analyze interactions, and show how the indicators can be used to evaluate the collaboration process for students using those tools. Finally, we discuss the benefits of the
proposed approach, draw conclusions and identify opportunities for further work.
Stages of the collaborative learning process
Our interest is in designed collaborative learning processes, i.e. those processes that are designed by a facilitator in
order to provide an environment for collaborative learning, rather than processes in which collaborative learning
might occur spontaneously. Such a collaborative learning process is typically composed of several tasks that are
developed by the cognitive mediator or facilitator and other tasks that are completed by the group of learners.
We divide the collaborative learning process into three phases according to its temporal execution: pre-process, in-
process and post-process. Pre-process tasks are mainly coordination and strategy definition activities and post-
process tasks are mainly work evaluation activities. Both the pre-process and post-process phases are typically
accomplished entirely by the facilitator. On the other hand, the tasks accomplished during the in-process phase will
be performed mainly by the learners (group members). This is where the interactions of the collaborative learning
process take place. Our main goal is to evaluate this stage.
Drawing on Johnson & Johnson (Adams et al., 1996; Johnson et al., 1995), we can identify the tasks involved in the
in-process stage of a collaborative learning process. Tasks completed by the learners are: application of strategies
such as positive interdependence toward achievement of the goal, intra-group cooperation, reviewing success criteria
for completion of the activity, monitoring, providing help and reporting. There are three facilitator tasks: providing
help, intervention in case of problems and providing feedback.
Collaborative learning process indicators
Guerrero et al. (2000) developed an Index of Collaboration, measured as the simple average of scores on indicators
that measured the learner tasks identified by Johnson & Johnson (Adams et al., 1996; Johnson et al., 1995). In this
paper, we will develop a refinement of that Index of Collaboration. Four indicators will measure the following
activities: use of strategies, intra-group cooperation, reviewing success criteria and monitoring. A fifth indicator is
based on the performance of the group. All these indicators can be measured directly from data collected by the
system as students participate in CSCL activities. In addition to these system-based measures of the collaborative
learning process, we propose some additional measures of students’ learning to collaborate based on participants’
responses to the process.
Before we describe the indicators in detail, it is necessary to describe the collaborative environments from which
metrics for estimation of the system-based indicators were gathered. In the next section, we therefore describe some
software tools which we have developed to study the in-process stage of the collaborative learning process.
Software tools
We developed software tools to analyze the quality of the collaboration process for small groups working
synchronously toward each of the two learning goals identified in the introduction to this paper: learning to
collaborate and collaborating to learn. Four tools were used to study learning to collaborate and two tools were used
to study collaborating to learn. Each of the tools is described in turn.
Chase the Cheese
For the first tool, we chose a small case in which a group of persons have to do some learning in order to complete a
joint task. The task is a game of the labyrinth type.
The game –called Chase the Cheese– is played by four persons, each with a computer. The computers are physically
distant and the only communication allowed is computer-mediated. All actions taken by the participants are recorded
for analysis and players are made aware of that.
Players are given very few details about the game. The majority of the game’s rules must be discovered by the
participants while playing. They also have to develop joint strategies to succeed. In our studies, each person played
the game only once.
Figure 1 shows the game interface. To the left, there are four quadrants. The goal of the game is to move the mouse
(1) to its cheese (2). Each quadrant has a coordinator –one of the players– permitted to move the mouse with the
arrows (4). The other participants –collaborators– can help the coordinator by sending messages which are seen at
the right-hand side of the screen (10). Each player has two predefined roles: coordinator (only one per quadrant and
randomly assigned) or collaborator (the three remaining players).
Figure 1. Chase-the-Cheese game interface
The game challenges the coordinator of the quadrant in which the mouse is currently located, because there are obstacles to the
mouse movements. Most of the obstacles are invisible to the quadrant coordinator, but visible to one of the other
players. In each quadrant there are two types of obstacles through which the mouse cannot pass: general obstacles or
grids (6) and colored obstacles (7). This is one of the features of the game which must be discovered by the players.
The players must develop a shared strategy to communicate an obstacle’s location to the coordinator of the current
quadrant. No message broadcasting is allowed, so players have to choose one receiver for each message they send
(9). Since each participant has a partial view of the labyrinth, they must interact with their peers to solve the problem.
In order to communicate with them, each player has a dialogue box (8) from which they can send messages to each
of the others explicitly (one at a time) through a set of buttons associated with the color of the destination (9). For
example, in Figure 1, a player can send messages to the other players with blue, red and green colors. Since each
player is associated with a color, their quadrant shows the corresponding color (5).
When starting to move the mouse, the coordinator has an individual score (11) of 100 points. Whenever the mouse
hits an obstacle, this score is decreased by 10 points. The coordinator has to lead the mouse to the cheese (in the case
of the last quadrant) or to a traffic light (3), where the mouse passes to another quadrant and the player’s role is
switched to collaborator while the coordinator’s role is assigned to the next player (clockwise). When this event
occurs, the individual score is added to the total score of the group (12). Both scores, partial and total, are hidden. If
players want to see them, they must pass the mouse over the corresponding icon displaying the score for two
seconds. If any of the individual scores reaches a value below or equal to 0, the group loses the game. The ultimate
goal of the game is to take the mouse to the cheese and do it with a high total score (the highest score is 400 points).
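These scoring rules lend themselves to a compact restatement. The following Python sketch captures them as described above; the game itself was not published as code, so the class and method names are ours.

```python
# A minimal sketch of the Chase-the-Cheese scoring rules described above;
# all names are illustrative, not part of the original tool.

class ChaseTheCheeseScore:
    def __init__(self):
        self.individual = 100   # each coordinator starts a quadrant with 100 points
        self.total = 0          # hidden group score; the maximum is 400 (4 x 100)
        self.lost = False

    def hit_obstacle(self):
        """Every collision with an obstacle costs the coordinator 10 points."""
        self.individual -= 10
        if self.individual <= 0:
            self.lost = True    # an individual score at or below 0 loses the game

    def complete_quadrant(self):
        """At a traffic light (or the cheese), the individual score is added to
        the group total and the next coordinator starts afresh."""
        self.total += self.individual
        self.individual = 100
```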
TeamQuest
TeamQuest is another labyrinth with obstacles, similar to Chase the Cheese but with some refinements (Collazos et
al., 2004b). The screen has three well-defined areas: game, communication and information (Figure 2). The game
area has four quadrants, each one assigned to a player who has the “doer” role; the other players are collaborators for
that quadrant. Each player is identified with a role image and name which appear on the screen. In a quadrant, the
doer must move an avatar from the initial position to the “cave” that allows them to enter the next quadrant. On the
way, the doer must circumvent all obstacles and traps in the map (which are not visible to all players). In addition,
the doer must pick an item useful to reach the final destination.
In TeamQuest, the user interface has many elements showing awareness: the doer’s icon, score bars, items which
were picked up in each quadrant, etc. The need to collect objects on the way means the players of a team must reach
a goal by satisfying sub goals in each of the game’s stages. In order to reach the final goal it is necessary to pass
through every quadrant avoiding all the obstacles, i.e., if a person is not able to pass his/her quadrant, then it will be
impossible to continue and thus the whole group will not reach the goal.
Figure 2. TeamQuest user interface (in Spanish)
MemoNet
This game is loosely based on the classic “Memorize Game”, where the goal is to find an equal pair from several
covered cards. This is repeated successively until there are no covered cards remaining. In the case of MemoNet, the
idea is that four people try to find four equal cards from an initial set of ten different cards. The user interface is
shown in Figure 3.
All players have the same set of cards but ordered in different ways. A person draws one card each time so they need
to collaborate in order to solve the problem. A card is removed when all four players have found it. The game
continues until all cards are uncovered and removed. The game is played in a distributed fashion, with
communication allowed through a chat tool (Collazos et al., 2004a).
ColorWay
A fourth game designed to study learning to collaborate is ColorWay. This game has a 6 x 4 board of colored squares
with obstacles (Figure 4). Players can see their own obstacles (with their own color). Each player has a token with
his or her color, and this token can progress from the lower row to a target located on the upper row. The player can
move the token using the arrows and back buttons, and only through gray squares that are not currently occupied by another token. Other tokens further restrict movement: no token can go to row n if there is a token in row n-2. As in MemoNet, communication is through a chat tool. The game is designed so that there is only one way to arrange the tokens; therefore the players need to communicate in order to win (Collazos et al., 2004a).
Figure 3. MemoNet user interface (in Spanish)
Figure 4. ColorWay user interface (in Spanish)
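The ColorWay movement rule can also be stated compactly in code. The sketch below is a hedged reading of the description above, assuming rows are numbered 0 (starting row) to 5 (target row); the function and parameter names are ours, not part of the original tool.

```python
# A sketch of the ColorWay movement constraints: a token may only enter an
# unoccupied gray square, and no token may enter row n while any token
# occupies row n - 2. Row numbering direction is an assumption.

def legal_move(target_row: int, is_gray: bool, occupied: bool,
               occupied_rows: set[int]) -> bool:
    """Return True if a token may move onto the given square."""
    if not is_gray or occupied:
        return False
    return (target_row - 2) not in occupied_rows
```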
CCCuento
CCCuento helps a group to collaboratively write stories. Four participants work on four stories at the same time.
Each story has four phases: introduction, body A, body B and conclusion. Each member must write a different
section of every story. In a first stage every participant writes the introduction to one of the stories. In the second
stage, every participant writes the first part of the development of a different story (body A) (Guerrero et al., 2003).
Then, they continue working until they finish all the stories. The group members may edit the parts they were
responsible for at any time during the project (Figure 5).
Figure 5. CCCuento user interface (in Spanish)
DomoSim-TPC
DomoSim-TPC supports collaborative learning of domotical design (also known as house automation or intelligent
building design) by students working at a distance (Redondo et al., 2006a; Bravo et al., 2006a). Domotical design
aims at designing a set of elements that, once installed, interconnected and automatically controlled at home, release the user from the routine of intervening in everyday actions. The aim is to provide optimized control over comfort, energy consumption, security and communications. DomoSim-TPC supports a collaborative PBL (problem-based
learning) approach (Koschmann et al., 1996). Students are given a domotical design problem which they must solve
by working collaboratively. Each student works at their own computer at a distance from the others.
Figure 6. User interface of DomoSim-TPC design workspace (in Spanish)
The system is organized in different shared workspaces, each one for carrying out a specific task (planning, design or
simulation). Figure 6 is a screenshot of the DomoSim-TPC design shared workspace (Bravo et al., 2006b). It
contains tools for building models (designs), discussion and awareness. The work surface contains a house plan on
which a set of operators has been inserted. On the left side of the window the domotical operator toolbars can be
seen, and on the right is the drawing toolbar.
The discussion tool provides communication and coordination support. It consists of a Guided Chat and a Decision-
Making tool. The awareness tool maintains a set of tele-pointers which show in which part of the model building
area the users are working; a list of interactions which shows the actions taken by each user; and a panel containing
the users’ pictures, their names and their state (for example, editing, selecting, linking, simulating, designing,
drawing, communicating). Each user has a unique color which is used to highlight their name and state and to
identify their tele-pointer. In the central part of Figure 6, we can see the tele-pointer of the student who is collaborating with the student whose interface is shown. As a user performs an action, it is shown immediately to the
group members in the shared workspace. The action is recorded in the list of interactions and, optionally, the system
will beep to capture the user’s attention. All this allows users to know, for example, what the other students are
doing, where they are, even what they may be likely to do next.
Measurement of indicators
As Jermann et al. (Jermann et al., 2001) note, measurement of system-based indicators begins with a data collection
phase which involves observing and recording online interactions. Typically, users’ interactions are logged and
stored by the CSCL system. This raw data can be analyzed and summarized to provide simple interaction metrics.
Indicators are higher level scores calculated from the metrics. In this section, we describe what raw data was
gathered in our experiments and how it was aggregated into metrics and then indicators. We will conclude the
section with some notes on measurement of additional user response variables.
Data collection
In order to analyze collaborative activity it is necessary to collect information about the collaboration process,
recording the participants, the actions performed, the messages sent and received, and the time of each action.
All the applications we developed include a mechanism to gather information. In TeamQuest, for example, we
implemented a structured chat-style user interface through which the group conversation is held. The application
records every message sent by any member of the group. Along with each message, it records the time of occurrence,
sender, addressee and current quadrant (the mouse location: X and Y position) when the message was sent. Figure 7
shows an example of the information gathered by the application. In addition, the log records the partial scores and
total score by quadrant. The tool also registers the start and finish time of the game, the time spent in each quadrant,
and the number of times each player looked at the partial and total scores by quadrant.
Figure 7. TeamQuest interaction log
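To illustrate, the following sketch shows the kind of record such a log might hold for each chat message, based on the fields listed above. The field names and types are our assumptions, not the actual schema of the TeamQuest logger.

```python
# A sketch of one logged chat message in a TeamQuest-style log; field names
# are illustrative assumptions based on the description above.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class LoggedMessage:
    timestamp: datetime  # time of occurrence
    sender: str          # player who sent the message
    addressee: str       # explicit receiver (broadcasting is not allowed)
    quadrant: int        # quadrant in play when the message was sent
    avatar_x: int        # current X position of the mouse/avatar
    avatar_y: int        # current Y position of the mouse/avatar
    text: str            # message content
    category: str = ""   # message category, coded later (see Table 1 below)
```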
Metrics
In order to estimate each of the indicators, we first define some performance metrics. Metrics such as time, length of
turn, and other countable events, are directly measurable and can often be automatically collected (Drury et al.,
1999). The following table of metrics includes the observable data elements that were identified from our
experiments as useful indicators of system and group performance. For each metric, Table 1 presents its meaning and, where helpful, an example of how it appears in practice.
Table 1. Metrics

Number of Errors: total number of errors made during the collaborative activity.
Solution to the problem: whether the group is able to solve the problematic situation (Yes/No).
Movements: total number of mouse or pointer movements.
Queries: total queries to the scores (actions performed over the score icons).
Explicit use of strategy: whether a strategy for the problem solution is outlined explicitly (Yes/No).
Maintain strategy: whether the defined strategy is used throughout the activity.
Communicate strategy: negotiating, reaching consensus on and disseminating information about the strategy.
Strategy messages: total number of messages that propose guidelines to reach the group goal. Example: "Let's label the columns with letters and the rows with numbers."
Work strategy messages: total number of messages that help the coordinator of the activity to make the most suitable decisions; typically sentences in the present tense that inform the group about the current state of the group task. Example: "Stop, there is an obstacle in B3."
Coordination strategy messages: total number of messages corresponding to activities whose main purpose is to regulate the dynamics of the process; typically characterized by prescribed future actions. Example: "I will move six squares to the right."
Work messages: total number of messages received by the coordinator of the activity.
Coordination messages: total number of messages sent by the coordinator of the activity.
Success criteria review messages: total number of messages that review the boundaries, guidelines and roles of the group activity.
Lateral messages: total number of messages, such as social messages, that are not focused on the solution of the problem. Example: "Come on, hurry up, I'm hungry!"
Total messages: total number of messages received and sent by the group during the activity.
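As an illustration of how such metrics can be aggregated automatically, the sketch below tallies coded messages into the message counts of Table 1. It assumes each logged message (see the record sketch above) carries a category label assigned by hand or by a classification rule; the shorthand dictionary keys are ours.

```python
# A sketch of aggregating coded messages into the Table 1 message counts.

from collections import Counter

def message_metrics(messages, coordinator):
    """Tally the Table 1 message metrics for one phase of play, given the
    name of the player currently acting as coordinator."""
    counts = Counter()
    for m in messages:
        counts["total"] += 1                 # total messages
        counts[m.category] += 1              # "strategy", "work_strategy",
                                             # "coordination_strategy",
                                             # "success_criteria_review", "lateral"
        if m.addressee == coordinator:
            counts["work"] += 1              # messages received by the coordinator
        if m.sender == coordinator:
            counts["coordination"] += 1      # messages sent by the coordinator
    return counts
```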
The system-based process indicators
We identified five system-based indicators of the success of the collaborative learning process earlier in this paper: use of
strategies, intra-group cooperation, reviewing success criteria, monitoring and the performance of the group. Having
introduced the metrics, we can now describe these indicators and how they can be estimated and applied.
Use of strategies
The first indicator tries to capture the ability of the group members to generate, communicate and consistently use a
strategy to jointly solve the problem. According to Johnson & Johnson in (Adams et al., 1996), to use a strategy is
“to produce a single product or put in place an assessment system where rewards are based on individual scores and
on the average for the group as a whole”.
In our collaborations, group members are forced to closely interact with peers since each player has only a partial
view of the game (e.g., obstacles in the labyrinth games) or the solutions. Therefore, successful completion requires a
strict positive interdependence of goals. If the group is able to complete the task, we can say its members have built a
shared understanding of the underlying problem (Dillenbourg et al., 1995).
For example, in Chase the Cheese, the coordinator does not have all the information needed to move the mouse in
their quadrant without hitting an obstacle, so they need timely assistance from their collaborators. According to
Fussell (Fussell et al., 1998), discussion of the strategy to solve a problem helps group members to construct a shared
view or mental model of their goals and the tasks that must be executed. This mental model can improve
coordination because each member knows how their task fits into global team goals.
In DomoSim-TPC, this aspect is explicitly considered. The students solve the problems with the help of specialized
tools. To successfully develop a design, they need to organize and distribute their work, drawing up a resolution
strategy that divides the problem into sub-problems (Redondo et al., 2006b).
Measurement of use of strategies is related to the software environment in which it is used. In general, however,
measurement should consider both the outcome of the collaboration and the nature of the strategy applied. In our
experiments, outcome could be measured simply as success or failure in solving the problem. Strategy is more
complex, and consists of elements of a) the quality of the technique or strategy actually used to solve the problem, b)
explicit definition of a strategy, c) consistency or maintenance of the strategy throughout the collaboration, and d)
communication of the strategy among group members.
Having identified the elements of strategy, we needed to consider how to measure and combine them. We sought a
method that would be simple to understand and apply but powerful enough to distinguish between the performance
of different groups. Measurement was based on the metrics introduced in Table 1. All elements were scored from 0
to 1. The most complex element to measure was the quality of the strategy used. This was measured as the mean
movement, error and time to solution scores where each of these scores was a value from 0 to 1 based on the
performance of each group relative to the performance of the worst group.
Using a data driven approach, we experimented with different weightings for each of the five components of strategy
(solution of the problem, quality of strategy, explicit use of strategy, maintenance, and communication) with groups
playing Chase the Cheese and Team Quest. We assigned the strategy elements a weight four times larger than the
one assigned to solution of the problem. This weighting reflects the emphasis of our first indicator (CI1) on the use
of strategy; the outcome of use, although important, should not dominate the score. Thus, in calculating CI1, the
strategy applied had a weight of 80% and success had a weight of 20%. After experimentation, the 80% available for
strategy applied included a small score (5%) to represent the actual strategy used. This weight reflects the fact that
many different strategies can be used in practice to solve a given problem, but permits us to penalize groups that use
an unusually large number of movements, have a large number of errors and/or use a large amount of time, even if
they have other elements of strategy in place. The remaining 75 percentage points were allocated as 20% if the group
explicitly outlined a strategy, 25% to the group’s ability to maintain the chosen strategy throughout the process, and
30% for strategy communication.
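A compact restatement of this weighting may help. The sketch below computes CI1 from the five component scores; the normalization of raw counts against the worst group is our reading of the text, since the paper does not give the exact formula.

```python
# A sketch of the CI1 calculation with the weights described above. Each
# input is a score in [0, 1]; `quality` is the mean of the movement, error
# and time-to-solution scores.

def relative_score(value: float, worst: float) -> float:
    """One plausible normalization of a raw count (movements, errors, time)
    against the worst group: 0 raw units score 1.0, the worst group's value
    scores 0.0. The paper's exact formula is not given."""
    return max(0.0, 1.0 - value / worst) if worst else 1.0

def ci1(solved: float, quality: float, explicit: float,
        maintained: float, communicated: float) -> float:
    """Outcome (solution) weighs 20%; the strategy elements share the other 80%."""
    return (0.20 * solved +        # solution of the problem
            0.05 * quality +       # quality of the strategy actually used
            0.20 * explicit +      # strategy explicitly outlined
            0.25 * maintained +    # strategy maintained throughout the activity
            0.30 * communicated)   # strategy negotiated and communicated
```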
This set of weights produced scores that could be used to compare groups meaningfully. For example, the group with
the highest score in one test of Chase the Cheese scored 0.75 (out of a maximum of 1), with quite high scores (but
not the highest) on all indicators except communication. Indeed, this group performed consistently and moderately
well throughout the game, but with better negotiation and communication of strategy could have performed better.
On the other hand, the group with the lowest score performed consistently, but moderately badly, throughout the
game and did not reach a solution (Collazos et al., 2002).
Intra-group cooperation
This indicator corresponds to the application of collaborative strategies during the process of group work. If each
group member is able to understand how their task is related to the team’s global goals, then members can anticipate
actions. This requires less coordination effort. In the games, this indicator also includes measures related to the
messages every player requires from their peers to reach their partial goal when acting as a coordinator. In
DomoSim-TPC, group members need to communicate, exchanging information in relation to the domain,
coordinating their actions and making decisions by coming to agreements in order to solve the problem (Bravo et al.,
2006b).
A good application of collaborative strategies should be observed as efficient and fluid communication among
members of the group. Good communication, in turn, means few, precise and timely messages. This component of
the indicator was therefore measured as
CI2 = 1 - (Work strategy messages / Work messages)
Providing help is represented by the number of supporting messages from peers. Technically, this measure may be
computed as the ratio between the number of work messages and the total number of messages generated by the
group.
Reviewing success criteria
This indicator measures the degree of involvement of the group members in reviewing boundaries, guidelines and
roles during the group activity. It may include summarizing the outcome of the last task, assigning action items to
members of the group, and noting dates or times for expected completion of assignments.
In TeamQuest, for example, the success or failure of the group is related to achievement of partial and global goals.
This is reflected in the partial and global scores obtained. This indicator should also take into account the number
of messages concerned with the reviewing mentioned above. It reflects interest in individual and collective
performance. CI3 is then computed from the total number of messages that review the boundaries, guidelines and
roles of the group activity. It is calculated as
CI3 = 1 - (Reviewing messages / Total Messages)
Scores can range between 0 and 1, where 1 is the highest score. In DomoSim-TPC, for example, this indicator is related to
the correctness, validity and other characteristics of the design models built by groups of students. In this system, the
relationship between success and strategies can be analyzed by studying the design plans and models that the
students built.
Monitoring
This indicator measures regulatory activity. The objective is to measure the extent to which the group maintains the
chosen strategies to solve the problem, keeping focused on the goals and the success criteria. If a player does not
sustain the expected behavior, the group will not reach the common goal. In this sense, our fourth collaboration
indicator (CI4) is related to the number of coordination messages (i.e. messages in which the coordinator requests
coordination information from collaborators) where fewer messages means good coordination. CI4 is calculated as
CI4 = 1 - (Coordination strategy messages / Coordination messages)
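Since CI2, CI3 and CI4 are all ratios of message counts, they can be computed directly from the metrics of Table 1. The sketch below does so using the counts produced by the tally sketch above; the guard against an empty denominator is our addition.

```python
# A sketch of the three ratio indicators, computed from Table 1 message counts.

def ratio_indicator(part: int, whole: int) -> float:
    return 1.0 - part / whole if whole else 0.0

def ci2(counts) -> float:  # intra-group cooperation
    return ratio_indicator(counts["work_strategy"], counts["work"])

def ci3(counts) -> float:  # reviewing success criteria
    return ratio_indicator(counts["success_criteria_review"], counts["total"])

def ci4(counts) -> float:  # monitoring
    return ratio_indicator(counts["coordination_strategy"], counts["coordination"])
```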
Performance
Baeza-Yates and Pino (Baeza et al., 1997; Baeza et al., 2006) made a proposal for the formal evaluation of
collaborative work taking into account three aspects: Quality (how good is the result of collaborative work), Time
(total elapsed time while working) and Work (total amount of work done). In our experiments, Quality can be
measured by three factors: errors made by the group, solution of the problem, and movements of the mouse. Work is
measured by the number of messages sent by group members. In the games, the software records the play-time
between the first event (movement of the mouse or message sent by any player) and when the group reaches the goal
(e.g., cheese) or loses the game (a partial score goes down to zero). In this view, the “best” group does the work
faster. We scored each of Quality, Work, and Time on a scale of 0 to 1 where 0 is the worst possible performance
and 1 is the best possible performance. The performance indicator, CI5, was measured as the mean score on these
three aspects.
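A sketch of this calculation follows, assuming each aspect has already been scaled to [0, 1] and that Quality is itself the mean of the error, solution and movement scores; this sub-averaging is our reading of the description above.

```python
# A sketch of the performance indicator CI5: the mean of Quality, Time and
# Work, each scaled so that 0 is the worst and 1 the best possible performance.

def ci5(errors: float, solution: float, movements: float,
        time: float, work: float) -> float:
    quality = (errors + solution + movements) / 3.0  # the three Quality factors
    return (quality + time + work) / 3.0
```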
Summary of system-based indicators
The indicators are summarized in Table 2.
Table 2. Summary of indicators

CI1: weighted combination of outcome and strategy scores: Solution of the problem 20%; Quality of strategy 5%; Explicit use of strategy 20%; Maintenance of strategy 25%; Communication of strategy 30%.
CI2 = 1 - (Work strategy messages / Work messages)
CI3 = 1 - (Reviewing messages / Total messages)
CI4 = 1 - (Coordination strategy messages / Coordination messages)
CI5: mean of Quality (few errors, solution of the problem, few movements), Time (total elapsed time while working) and Work (total messages).
We tested these indicators with 11 diverse groups playing Chase the Cheese (Collazos et al., 2002). Table 3 shows
that the indicators allow us to identify groups that perform consistently well or badly, while providing enough
discrimination to distinguish the strengths and weaknesses of collaboration in each of the groups.
Table 3. Results of using the indicators in Chase the Cheese

Group      Strategy   Cooperation  Reviewing  Monitoring  Performance
           CI1        CI2          CI3        CI4         CI5
Group 0    0.69       0.69         0.2        0.75        0.65
Group 1    0.31       0.71         0.2        0.80        0.57
Group 2    0.68       0.62         0.2        0.80        0.69
Group 3    0.48       0.61         0.5        0.74        0.63
Group 4    0.71       0.74         0.8        0.78        0.66
Group 5    0.75       0.84         1          0.86        0.61
Group 6    0.71       0.72         1          0.85        0.52
Group 7    0.47       0.80         0.2        0.80        0.53
Group 8    0.27       0.75         0.2        0.82        0.54
Group 9    0.28       0.75         0.2        0.81        0.54
Group 10   0.48       0.80         0.2        0.83        0.53
Other indicators of learning to collaborate
We have defined a set of system-based indicators that will permit us to evaluate students’ learning to collaborate.
Other indicators have been developed for specific projects, e.g., a method based on genetic algorithms to measure the
relationship between process (level of collaboration) and product (correct solution) in activities carried out using
DomoSim-TPC (Molina et al., 2006). All of these indicators rely on data generated by the CSCL system. They do
not consider, however, the users’ perceptions and psychological responses to participation. We have therefore
included in our evaluation aspects related to community psychology and to social and educational psychology. We
defined meta-response variables that represent the personal development of individuals which psychologists believe
can result from effective participation in collaborative learning processes (Francescato et al., 2006). These variables
include cognitive empowerment, self-efficacy as a learner, self-efficacy for computer use, attitudes to computer-supported learning, and attitudes to collaborative work. Full detail of these measurements is provided in other works
(Klobas et al., 2000; Klobas et al., 2002), but as an example, we discuss one of them, Attitudes to collaborative work,
here.
To measure Attitudes to collaborative work, four items were adapted from the Grasha-Reichmann descriptions of the
collaborative learning style (Hruska-Riechmann et al., 1982). Typical items were "I like to work with other students"
and "Group work is a waste of time". They were measured on a 4-point scale with values of 0 (Never), 1 (Rarely), 2
(Sometimes), 3 (Often). Cronbach’s alpha was satisfactory (above .7 in all administrations) and the scale was
additive. Responses to the 4 items were summed to give an attitude to collaborative work rating on a scale of 0-12.
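As an illustration, a sketch of scoring this scale follows. Reverse-scoring negatively worded items such as "Group work is a waste of time" is our assumption; the paper states only that the scale was additive with a 0-12 total.

```python
# A sketch of scoring the four-item attitude scale. Each item is answered on
# the 0 (Never) to 3 (Often) scale; reverse-scoring of negative items is an
# assumption, not stated in the paper.

def attitude_to_collaborative_work(responses: dict[str, int],
                                   negative_items=("waste_of_time",)) -> int:
    total = 0
    for item, value in responses.items():
        total += (3 - value) if item in negative_items else value
    return total
```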
Discussion
Typically, evaluation of collaborative learning has relied on examinations or tests to determine how much students have learned. The model proposed in this paper, by contrast, is based on measuring collaborative learning processes. We
have derived indicators of the quality of student work during the in-process phase of the collaborative learning
process from a prominent model of collaborative learning (Adams et al., 1996) and shown how these indicators can
be calculated from data recorded during the collaboration. Use of these indicators can provide insight into the
collaboration process. Based on the results, evaluators can identify problematic situations and plan new strategies in
order to improve collaborative learning.
System-based indicators can be used to monitor the learning process while a collaborative activity is proceeding.
They may be used to alert an instructor to the need to intervene in the process. In such cases, it is necessary to design
the collaboration so that the instructor knows how to intervene in order to improve the process (Katz, 1999). It is
necessary for the teacher to monitor not only the activities of a particular student but also the activities of his or her peers, to encourage interactions that could influence individual learning and the development of collaborative skills, such as giving and receiving help, giving and receiving feedback, and identifying and resolving conflicts and disagreements (Dillenbourg et al., 1995; Johnson et al., 1992; Webb et al., 1996). Interventions can be difficult to identify if they
are managed in a manual way, especially when the teacher is working with several groups of students in the same
class at the same time. Our model of evaluation allows the teacher to observe the interaction at an aggregated level
and to be alerted to opportunities to intervene.
Monitoring can help a teacher to identify which aspects of the group process are most problematic. This is an extremely important aspect of any evaluation mechanism because it provides the opportunity to determine how to remedy the shortcomings of the group process detected from analysis of collaborative interactions. Ideally,
monitoring will help not only to find the weaknesses of the group –a difficult task in itself– but also, with the aid of
the computer, to overcome those weaknesses. It is possible to include in software tools mechanisms both to evaluate
the collaborative learning process and to improve it. In a series of preliminary experiments in CSCL environments, it
has been observed that groups with little experience in collaborative work do not understand, use or adopt
cooperation strategies well (Collazos et al., 2002). Establishing common goals is an important component of strategy
since actions cannot be interpreted without referring to the shared goals, and reciprocally, goal discrepancies are
often revealed through disagreements on action (Dillenbourg et al., 1999). Members of a group not only develop
shared goals by negotiating them, but they also become mutually aware of these goals. TeamQuest includes a
discussion environment that group members can use during a break. Breaks may be taken at any time during play.
They provide opportunities for analysis of the work done, thus allowing the definition and reinforcement of common
goals.
Monitoring can also help CSCL environment designers to improve their designs. For example, Collazos et al. have
developed a mechanism called Negotiation Table, in which one widget supports discussions within the learning
group and another supports monitoring of the tasks done by the group (Collazos et al., 2003b). These widgets are
intended to improve the strategic aspect of group work. The information registered by the indicators permits the
designers to modify or incorporate new mechanisms according to the information revealed by monitoring the
collaborative activity.
The indicators we have developed permit evaluators to identify some weakness in the collaboration process in order
to design strategies to better support it. As we mentioned in the first part of this article, however, measurement of collaboration is not an easy task. Although we have only briefly mentioned variables that correspond to students’
personal development as a result of participation, we believe these measures are also important. They can give
evaluators insights into the collaborative process performed and the perception of each user with respect to their
participation within the group. They could, for example, be incorporated at the end of a collaborative game using a
brief questionnaire or open ended question to gather students’ perceptions of their experience. Analysis of
participants’ answers could be used to determine if the group is able to self-evaluate or if they were able to construct a shared understanding of the problem. Such additional indicators could be compared with the system-based measures
to understand which aspects of participation and interaction in the CSCL process are associated with users’
perceptions and personal development.
In this project we did not develop indicators of social aspects of participation because the activities we constructed for this initial test of the approach were designed for small groups and could be completed in a short time period. For real-life collaborative learning, however, we recommend evaluating the social aspect as well as the aspects of the collaboration process measured here.
It is also important to note that collaborative learning processes are influenced by the personal style and individual
behavior of every member of the group. In our collaborative games, for example, we noticed that group members
behaved and communicated in consistent ways regardless of the role they were playing, coordinator or participant.
Although our indicators measure the collaboration process within a group, it is also possible to observe the individual
contributions of every member in any group (Constantino-Gonzalez et al., 2000). This would permit a more specific
analysis of interaction (movements and messages) to evaluate the performance of every group member in their own
group.
Conclusions and further work
Understanding the collaborative process of learning in groups is an interesting research field. In the case of
collaborative activities, performing a task well implies not only having the skills to execute the task, but also
collaborating well with teammates to do it. This complexity offers opportunities to develop tools and techniques for
improving collaboration.
In this paper we have presented a set of indicators and software tools that have allowed us to experiment in the
evaluation of collaborative work, and in particular, to study the collaborative processes that occur during
collaborative learning. To evaluate collaborative processes, we proposed five system-based indicators and some
indicators based on participants’ psychological responses to the process of participation. We do not claim these are
the only or the best indicators that could be developed to this end, but rather that they provide a direction to pursue in
understanding and evaluating the process of CSCL. Nonetheless, these indicators were able to provide some insight
into the collaborative work done by groups in our experiments. The system-based indicators can be used to detect
group weaknesses in the collaborative learning process and to propose mechanisms to improve them. In this way,
they can be used for both formative evaluation while students are collaborating using CSCL and summative
evaluation once collaboration is complete. The meta-response indicators can be used for summative evaluation of the
process.
Further work is needed to study the influence of many variables we did not isolate in our experimentation. Such
variables include: genre or subject of the collaborative task, age, culture, homogeneous vs. heterogeneous groups,
etc. Other experiments could also be made by changing the CSCL environments. Changes might include allowing
broadcast messages or allowing the group to slightly modify the rules of a game (e.g., forcing the coordinator to
receive all messages from members before enabling moves). Additionally, refinements might be made to the system-
based indicators and experiments conducted to identify how the indicators behave when used to alert teachers and
group members to aspects of the collaborative process that can be improved. Finally, the results of process tests can
be compared with traditional tests of the success of CSCL to confirm that improvements in the CSCL process
translate into improvements in the outcomes of CSCL.
Acknowledgments
This work was partially supported by Colombian Colciencias Projects No. 4128-14-18008 and No. 030-2005, and
Cicyt Project TEN2004-08000-C03.
References
Adams, D., & Hamm, M. (1996). Cooperative Learning: Critical Thinking and Collaboration across The
Curriculum (2nd Ed.), Springfield, IL: Charles Thomas Publisher.
Baeza-Yates, R., & Pino, J.A. (2006). Towards Formal Evaluation of Collaborative Work. Information Research,
11(4), Retrieved June 7, 2007, from http://informationr.net/ir/11-4/paper271.html.
Baeza-Yates, R., & Pino, J.A. (1997). A First Step to Formally Evaluate Collaborative Work. Paper presented at the
ACM International Conference on Supporting Group Work, November 16-19, 1997, Phoenix, AZ, USA.
Barros, B., & Verdejo, M. F. (1999). An Approach to Analyse Collaboration when Shared Structured Workspaces
are used for Carrying out Group Learning Processes. Paper presented at the International Conference in Artificial
Intelligence in Education (AIED’99), July 18-23, 1999, Le Mans, France.
Barros, M., Mizoguchi, R., & Verdejo, M. F. (2001). A platform for collaboration analysis in CSCL. An ontological
approach. Paper presented at the Artificial Intelligence in Education Conference, July 9-13, 2001, Los Angeles,
USA.
Bravo, C., Redondo, M. A., Ortega, M., & Verdejo, M. F. (2006a). Collaborative Distributed Environments for
Learning Design Tasks by Means of Modelling and Simulation. Journal of Networks and Computer Applications,
29(4), 321-342.
Bravo, C., Redondo, M. A., Ortega, M., & Verdejo, M. F. (2006b). Collaborative environments for the learning of
design: A model and a case study in Domotics. Computers and Education, 46(2), 152-173.
Brna P., & Burton M. (1997). Roles, Goals and Effective Collaboration. In Okamoto, T. and Dillenbourg, P. (Eds.),
Proceedings of Workshop on Collaborative Learning/Working Support Systems, Kobe, Japan, 3-10.
Collazos C., Guerrero, L. A., & Vergara, A. (2001). Aprendizaje Colaborativo: un cambio en el rol del profesor.
Memorias del III Congreso de Educación Superior en Computación, Jornadas Chilenas de Ciencias de la
Computación, pp. 10-20, Punta Arenas, Chile (in Spanish).
Collazos, C., Guerrero, L. A., Pino, J., & Ochoa, S. (2002). Evaluating Collaborative Learning Processes. Lecture Notes in Computer Science, 2440, 203-221.
Collazos, C., Guerrero, L. A., Pino, J., & Ochoa, S. (2003a). Collaborative Scenarios to promote positive
interdependence among group members. Lecture Notes in Computer Science, 2806, 356-370.
Collazos, C., Guerrero, L. A., Pino, J., & Ochoa, S. (2003b). Improving the use of strategies in Computer-Supported
Collaborative Processes. Lecture Notes in Computer Science, 2806, 247-260.
Collazos, C., Guerrero, L. A., & Pino, J. A. (2004a). Computational Design Principles to Support the Monitoring of
Collaborative Learning Processes. Journal of Advanced Technology for Learning, 1(3), 174-180.
Collazos, C., Guerrero, L. A., Pino, J., & Ochoa, S. (2004b). A Method for Evaluating Computer-Supported
Collaborative Learning Processes. International Journal of Computer Applications in Technology 19(3/4), 151-161.
Collazos, C., Ortega, M., Bravo, C., & Redondo, M. (2007). Experiences in Tracing CSCL Processes. In Nedjah, N.;
Mourelle, L.d.M.; Borges, M.N.; de Almeida, N.N. (Eds.), Intelligent Educational Machines: Methodologies and
experiences (Chapter 5), Berlin/Heidelberg: Springer.
Constantino-González, M., & Suthers, D. (2000). A Coached Collaborative Learning Environment for Entity-
Relationship Modeling. Lecture Notes In Computer Science, 1839, 325-333.
Constantino-González M., & Suthers, D. (2001). Coaching Web-based Collaborative Learning based on Problem
Solution Differences and Participation. In J.D. Moore, C.L. Redfield & W. L. Johnson (Eds.), Proceedings AI-ED
2001, Amsterdam: IOS Press, 176-187.
Dillenbourg, P., Baker, M., Blake, A., & O’Malley, C. (1995). The Evolution of Research on Collaborative Learning.
In E. Spada & P. Reiman (Eds.), Learning in Humans and Machine: Towards an interdisciplinary learning science,
Oxford: Elsevier, 189-211.
Dillenbourg, P. (1999). What do you mean by collaborative learning? In P. Dillenbourg (Ed.), Collaborative-
Learning: Cognitive and Computational Approaches, Oxford: Elsevier, 1-19.
Drury, J., Damianos, L., Fanderclai, T., Kurtz, J., Hirschman, L., & Linton, F. (1999). Methodology for Evaluation of
Collaborative Systems, Retrieved June 7, 2007, from http://zing.ncsl.nist.gov/nist-icv/documents/methodv4.htm.
Francescato, D., Porcelli, R., Mebane, M., Cudetta, M., Klobas, J., & Renzi, P. (2006). Evaluation of the efficacy of
collaborative learning in face to face and computer supported university contexts. Computers in Human Behavior,
22(2), 163-176.
Fussell, S., Kraut, R., Lerch, F., Scherlis, W., McNally, M., & Cadiz, J. (1998). Coordination, Overload and Team
Performance: Effects of Team Communication Strategies. Paper presented at the CSCW'98 conference, November
14-18, 1998, Seattle, Washington, USA.
Garfinkel, H. (1967). Studies in Ethnomethodology, Englewood Cliffs, NJ: Prentice Hall.
Guerrero, L. A., Alarcón, R., Collazos, C., Pino, J., & Fuller, D. (2000). Evaluating Cooperation in Group Work.
Paper presented at the 6th International Workshop on Groupware, October 18-20, 2000, Madeira, Portugal.
Guerrero, L. A., Mejias, B., Collazos, C., Pino, J. A., & Ochoa, S. (2003). Collaborative Learning and Creative
Writing. Paper presented at the First Latin American Web Congress, November 10-12, 2003, Santiago, Chile.
Hruska-Riechmann, S., & Grasha, A. F. (1982). The Grasha-Riechmann student learning style scales. In J. Keefe
(Ed.), Student learning styles and brain behavior, Reston, VA: National Association of Secondary School Principals,
81-86.
ICALTS (2004). State of the Art: Interaction Analysis Indicators. Retrieved June 7, 2007, from
http://www.rhodes.aegean.gr/LTEE/KALEIDOSCOPE-ICALTS/Publications/D1%20State%20of%20the%20Art
%20Version_1_3%20ICALTS_Kal%20NoE.pdf.
Inaba, A., & Okamoto, T. (1997). The Intelligent Discussion Coordinating System for Effective Collaborative
Learning. Proceedings of the IV Collaborative Learning Workshop in the 8th World Conference on Artificial
Intelligence in Education, Kobe, Japan, 175-182.
Jermann, P., Soller, A., & Muhlenbrock, M. (2001). From Mirroring to Guiding: A Review of State of the Art Technology for Supporting Collaborative Learning. Paper presented at the First European Conference on Computer-Supported Collaborative Learning (Euro-CSCL 2001), March 22-24, 2001, Maastricht, The Netherlands.
Johnson, D., & Johnson, R. (1975). Learning Together and Alone: Cooperation, Competition and Individualization, Englewood Cliffs, NJ: Prentice Hall.
Johnson, D., Johnson, R., & Holubec, E. (1992). Advanced cooperative learning, Edina, MN: Interaction Books.
Johnson, D., & Johnson, R. (1995). My mediation notebook (3rd Ed.), Edina, MN: Interaction Book Company.
Kagan, S. (1990). The Structural Approach to Cooperative Learning. Educational Leadership, 47(4), 12-15.
Katz, S. (1999). The Cognitive Skill of Coaching Collaboration. Paper presented at the CSCL’99, December 11-12,
1999, Palo Alto, California.
Klobas, J., & Renzi S. (2000). Students' psychological responses to a course supported by collaborative learning
technologies: Measurement and preliminary results. The Graduate School of Management Discussion Papers Series,
2000-2, Nedlands, Australia: The Graduate School of Management, The University of Western Australia.
Klobas, J., Renzi, S., Francescato, D., & Renzi, P. (2002). Meta-Response to Online Learning. Ricerche di
Psicologia, 25(1), 239-259.
Koschmann, T., Kelson, A. C., Feltovich, P. J., & Barrows, H. (1996). Computer-Supported Problem-Based
Learning: A Principled Approach to the Use of Computers in Collaborative Learning. In T. Koschmann (Ed.), CSCL:
Theory & Practice in an Emerging Paradigm, Mahwah, NJ: Lawrence Erlbaum, 83-124.
Koschmann, T., Stahl, G., & Zemel, A. (2005). The video analyst's manifesto (or the implications of Garfinkel's
policies for the development of a program of video analytic research within the learning sciences), Retrieved June 7,
2007, from http://edaff.siumed.edu/tk/manifesto.pdf.
Linn, M. C., & Clancy, M. J. (1992). The Case for Case Studies of Programming Problems. Communications of the
ACM, 35(3), 121-132.
Martínez, A., Dimitriadis, Y., Rubia, B., Gómez, E., Garrachón, I., & Marcos, J. (2002). Studying social aspects of
computer-supported collaboration with a mixed evaluation approach. In G. Stahl (Ed.), Computer Support for
Collaborative Learning: Foundations for a CSCL Community, Hillsdale, NJ: Lawrence Erlbaum, 631-632.
Molina, A. I., Duque, R., Redondo, M. A., Bravo, C., & Ortega, M. (2006). Applying machine learning techniques
for the analysis of activities in CSCL environments based on argumentative discussion. In L. Panizo, L. Sánchez, B. Fernández & M. Llamas (Eds.), Proceedings of the SIIE'06, León, Spain: University of León, 214-221.
Muhlenbrock, M., & Hoppe, U. (1998). Constructive and collaborative learning environments: What functions are
left for user modeling and intelligent support? Paper presented at the ECAI-98, August 23-28, 1998, Brighton, UK.
Muhlenbrock, M., & Hoppe, U. (1999). Computer Supported Interaction Analysis of Group Problem Solving. Paper
presented at the CSCL’99, December 11-12, 1999, Palo Alto, California.
Redondo, M. A., & Bravo, C. (2006a). DomoSim-TPC: Collaborative Problem Solving to Support the Learning of
Domotical Design. Computer Applications in Engineering Education, 14(1), 9-19.
Redondo, M. A., Bravo, C., Ortega, M., & Verdejo, M. F. (2006b). Providing adaptation and guidance for design learning by problem solving: The DomoSim-TPC approach. Computers and Education, 48(4), 642-657.
Roschelle, J., & Teasley, S. (1991). The construction of shared knowledge in collaborative problem solving. In C.
O’Malley (Ed.), Computer Supported Collaborative Learning, Berlin, Germany: Springer, 67-97.
Sacks, H. (1992). Lectures on Conversation, Oxford, UK: Blackwell.
Sharan, Y., & Sharan, S. (1990). Group Investigation Expands Cooperative Learning. Educational Leadership, 47(4), 17-21.
Slavin, R. (1991). Synthesis of Research on Cooperative Learning. Educational Leadership, 48(5), 71-82.
Soller, A., Lesgold, A., Linton, F., & Goodman, B. (1999). What Makes Peer Interaction Effective? Modeling Effective Communication in an Intelligent CSCL. Paper presented at the 1999 AAAI Fall Symposium: Psychological Models of Communication in Collaborative Systems, November 5-7, 1999, North Falmouth, Massachusetts, USA.
Soller, A., & Lesgold, A. (2000). Knowledge acquisition for adaptive collaborative learning environments. Proceedings of the AAAI Fall Symposium: Learning How to Do Things, Cambridge: MIT Press, 57-64. Retrieved June 7, 2007, from http://www.cscl-research.com/Dr/documents/Soller-Lesgold-AAAI-00.ps.
Stahl, G. (2002a). Groupware Goes to School. In J. Haake & J. Pino (Eds.), Groupware: Design, Implementation and
Use, Berlin, Germany: Springer, 1-24.
Stahl, G. (2002b). Rediscovering CSCL. In T. Koschmann, R. Hall, & N. Miyake (Eds.), CSCL2: Carrying Forward
the Conversation, Hillsdale, NJ: Lawrence Erlbaum, 169-181.
Stahl, G. (2006). Group cognition: Computer support for building collaborative knowledge, Cambridge, MA: MIT
Press.
Suthers, D. (2005). Technology affordances for intersubjective learning: A thematic agenda for CSCL. Paper presented at the International Conference on Computer Support for Collaborative Learning (CSCL 2005), May 30 – June 4, 2005, Taipei, Taiwan.
Underwood, G., McCaffrey, M., & Underwood, J. (1990). Gender Differences in a Cooperative Computer-based Language Task. Educational Research, 32(1), 44-49.
Webb, N., & Palincsar, A. S. (1996). Group processes in the classroom. In D. C. Berliner & R. C. Calfee (Eds.),
Handbook of educational psychology, New York, NY: Macmillan Library Reference, 841-873.