A Meta-Analysis of Three Types of Interaction
Treatments in Distance Education
Robert M. Bernard, Philip C. Abrami, Eugene Borokhovski,
C. Anne Wade, Rana M. Tamim, Michael A. Surkes, and
Edward Clement Bethel
Concordia University, Montreal, Quebec, Canada
This meta-analysis of the experimental literature of distance education (DE)
compares different types of interaction treatments (ITs) with other DE instruc-
tional treatments. ITs are the instructional and/or media conditions designed
into DE courses, which are intended to facilitate student–student (SS), student–
teacher (ST), or student–content (SC) interactions. Seventy-four DE versus
DE studies that contained at least one IT are included in the meta-analysis,
which yield 74 achievement effects. The effect size valences are structured so
that the IT or the stronger IT (i.e., in the case of two ITs) serves as the experimental
condition and the other treatment as the control condition. Effects are
categorized as SS, ST, or SC. After adjustment for methodological quality, the
overall weighted average effect size for achievement is 0.38 and is heteroge-
neous. Overall, the results support the importance of the three types of ITs
and strength of ITs is found to be associated with increasing achievement
outcomes. A strong association is found between strength and achievement
for asynchronous DE courses compared to courses containing mediated syn-
chronous or face-to-face interaction. The results are interpreted in terms of
increased cognitive engagement that is presumed to be promoted by strength-
ening ITs in DE courses.
Keywords: distance education, meta-analysis, student interaction, interaction
treatment.
Introduction
Review of DE Research
There have been many discussions of how distance education (DE) is similar to
and different from face-to-face forms of educational experience. It is not surprising
that the distance that separates the activities of teaching and learning, as well as the
media that are required to bridge that gap, are among the most commonly cited.
Before the dawn of the electronic and then the digital revolutions, it was the postal
service that provided the mediating role. It is no wonder that DE (then called
“correspondence education”) was considered to be a slow and, by some, a
Review of Educational Research
September 2009, Vol. 79, No. 3, pp. 1243–1289
DOI: 10.3102/0034654309333844
© 2009 AERA. http://rer.aera.net
second-rate way of educating and being educated (e.g., Thompson, 1990). This
type of education started to change in the 1980s as access to digital media provided
communication functionality, facilitating more immediate contact between stu-
dents and instructors. In the 1990s, the Internet and high-speed access began to
affect DE courses, bringing them closer to the mainstream of educational practice
(Peters, 2003), so that today, applications of online and Web-based DE abound.
Evidence of the widespread application of DE includes the rise in dedicated virtual
high schools and level of choice between DE and classroom instruction (CI) that
many universities now offer their students.
Much of the research from the 1980s onward focused on establishing the com-
parative equivalence of DE and face-to-face instruction. Bernard, Abrami, Lou,
Borokhovski, Wade, et al. (2004) explained this phenomenon:
It is arguably the case that these comparisons are necessary for policymakers,
designers, researchers, and adopters to be certain of the relative value of
innovation. Questions about relative effectiveness are important, both in
the early stages of development and as a field matures, to summarize the
nature and extent of the impact on important outcomes, giving credibility to
change and helping to focus it. (pp. 379–380)
In 1999, Thomas L. Russell declared, based on a collection of 355 comparative
studies, that DE and CI were not significantly different from one another in terms
of achievement and satisfaction. Bernard, Abrami, Lou, Borokhovski, Wade, et al.
(2004) criticized his nonsystematic vote count methodology, but he is not alone in
the quest to determine the comparative effectiveness of DE and CI.
Since the year 2000, a small cottage industry has emerged in the DE research
community, in which meta-analysis is used as an analytical tool to synthesize the
comparative literature of CI and DE (otherwise referred to as online learning, Web-
based learning, networked learning, etc.; Allen, Bourhis, Burrell, & Mabry, 2002;
Allen et al., 2004; Bernard, Abrami, Lou, Borokhovski, Wade, et al., 2004;
Cavanaugh, 2001; Cavanaugh, Gillan, Kromrey, Hess, & Blomeyer, 2004; Jahng,
Krug, & Zhang, 2007; Lou, Bernard, & Abrami, 2006; Machtmes & Asher, 2000;
Shachar & Neumann, 2003; Sitzmann, Kraiger, Stewart, & Wisher, 2006; Williams,
2006; Ungerleider & Burns, 2003; Zhao, Lei, Yan, Lai, & Tan, 2005).
In a previous meta-analysis, Bernard, Abrami, Lou, Borokhovski, Wade, et al.
(2004) examined 232 studies (yielding 699 independent effect sizes) dated from
1985 to 2002, in which DE was compared to CI on measures of achievement,
attitudes, and course completion. They separated studies into asynchronous (i.e.,
mostly correspondence and online courses) and synchronous DE (i.e., mostly tele-
conferencing and satellite-based delivery) and found different results for the two
patterns across the three classes of measures, although asynchronous studies were
not compared directly with synchronous studies. Table 1 is a summary of these
results. Asynchronous DE courses compared more favorably with CI courses on
achievement and attitudes than synchronous DE courses did but had a bigger
problem with course completion.
So generally speaking, what have we learned from all of this synthesis activity
and the primary research efforts that preceded it?
1. We have learned that DE can be much better and also much worse than CI
(i.e., wide variability in effect sizes) based on measured educational out-
comes and that some pedagogical features of DE design are related to
increased student achievement.
2. We have learned from Phipps and Merisotis (1999) and Bernard, Abrami,
Lou, and Borokhovski (2004) that the research methodologies typically used
to assess this phenomenon are woefully inadequate and poorly reported.
Fundamental confounds associated with different media, different pedago-
gies, different learning environments, and so forth, mean that causal infer-
ences about the conditions of design, pedagogy, and technology use are
nearly impossible to make with any certainty.
3. We have learned that the very nature of the question (How does DE compare
to CI?) impedes our ability to discover what makes DE effective or ineffec-
tive, because the question is cast as a contrast between such starkly different
forms for achieving the same end. For example, in DE versus CI studies,
delivery method is often confounded with instructional design, in which the
DE condition has instructional design features not present in the classroom
control condition and vice versa. This does not mean that we know nothing
about designing good DE; it is just that we have not learned it from classroom-
comparison reviews.
Comparisons of DE versus DE should provide a better way of examining the
effects of various instructional treatments. Ideally, a review of this type should
include studies in which instructional and other treatments are administered on a
roughly equal footing so that confounding problems, so easily identifiable in DE
versus CI comparative studies, are reduced. Clark (2000) argued this point by saying,
“All evaluations [of DE] should explicitly investigate the relative benefits
of two different but compatible types of DE technologies found in every DE
program” (p. 4).
Although timely and potentially revealing, technical aspects of a between-DE
quantitative synthesis are not as straightforward as DE versus CI meta-analyses,
in which the control condition (the CI group) can be identified unambiguously.
TABLE 1
Summary of average effect sizes for Bernard, Abrami, Lou, Borokhovski, Wade,
et al. (2004)

                                     Outcome measures
DE pattern      Achievement            Attitudes              Course completion rate
Asynchronous    g+ = 0.05 (k = 174)*   g+ = –0.00 (k = 71)    g+ = –0.09* (k = 53)
Synchronous     g+ = –0.10 (k = 92)*   g+ = –0.19 (k = 83)*   g+ = 0.01 (k = 17)

Note. All means were significantly heterogeneous except for Synchronous Course Completion.
*p < .05.
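The g+ values in Table 1 are weighted average effect sizes, reported along with whether they are heterogeneous. As a minimal sketch of how such a mean and its heterogeneity statistic are typically computed in a fixed-effect synthesis (the effect sizes and variances below are invented for illustration, not data from the studies reviewed):

```python
def weighted_mean_effect(effects, variances):
    """Inverse-variance weighted mean effect size and Cochran's Q.

    Large Q relative to k - 1 degrees of freedom indicates that the
    effects are heterogeneous, as reported for most means in Table 1.
    """
    weights = [1.0 / v for v in variances]
    g_plus = sum(w * g for w, g in zip(weights, effects)) / sum(weights)
    q = sum(w * (g - g_plus) ** 2 for w, g in zip(weights, effects))
    return g_plus, q

# Hypothetical Hedges' g values and their sampling variances
effects = [0.42, 0.10, 0.65, -0.05]
variances = [0.04, 0.09, 0.02, 0.05]
g_plus, q = weighted_mean_effect(effects, variances)
```

Studies with smaller sampling variance (usually larger samples) get proportionally more weight, which is why g+ need not equal the simple average of the effect sizes.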
With a wide range of treatments being compared, there is often no obvious control
condition; therefore, the +/0/– valence of the effect size is in doubt. When the
control condition is clearly identifiable, positive (+) effects indicate that the
treatment has outperformed the control and negative (–) effects indicate that
the control has outperformed the treatment. Part of our work in completing the
current meta-analysis has been to establish a rational and revealing way of deter-
mining the +/0/– valence of each calculated effect size. This meta-analysis is an
examination of the literature of empirical studies in which different instructional
treatments are contrasted. We hope that this review will further our understanding,
not of whether DE is effective compared to its alternatives, but specifically how
the design and implementation of DE can be optimized to improve student learning
and satisfaction.
Interaction in DE
We searched the theoretical literature of DE for constructs that would, first,
broadly encompass a large number of studies and, second, provide relevant con-
trasts to determine the valence of the effect sizes. Although there are many interest-
ing perspectives, three emerged as potentially useful dimensions for enabling
comparisons between treatment conditions: (a) student interaction in DE (e.g.,
Moore, 1989); (b) student autonomy (Daniel & Marquis, 1979; Moore, 1973; Moore
& Kearsley, 2005); and (c) technological functionality (Moore & Kearsley, 2005).
Given its alleged importance in DE and previous findings by Lou et al. (2006)
about its predictive qualities in connection with achievement, we chose student
interaction as the basis for effect size coding and as the structure within which
analyses would be conducted and results interpreted.
The DE literature is largely univocal about the importance of interaction
(Anderson, 2003a, 2003b; Bates, 1990; Daniel & Marquis, 1979, 1988; Fulford &
Zhang, 1993; Jaspers, 1991; Juler, 1990; Laurillard, 1997; Lou et al., 2006; Moore,
1989; Muirhead, 2001a, 2001b; Sims, 1999; Sutton, 2001; Wagner, 1994). This is
because of the integral role that interaction between students, teachers, and content
is presumed to play in all of formal education (e.g., Chickering & Gamson, 1987;
Garrison & Shale, 1990) and because interaction was largely absent during so
much of the early history of DE (Nipper, 1989).
Although interaction is not explicit in all definitions of DE (e.g., Keegan, 1996),
it is an integral part of some. For example, the U.S. Distance Learning Association
states, “distance education refers specifically to learning activities within a K–12,
higher education, or professional continuing education environment where inter-
action is an integral component” (Holden & Westfall, 2006, p. 9).
Some of the original thinking about interaction in DE focused mainly on
human–human interaction. Daniel and Marquis (1988) defined interaction “in a
restrictive manner to cover only those activities where the student is in two-way
contact with another person (or persons)” (p. 339). Later, Wagner’s (1994) broader
and somewhat more abstract and technical definition characterized interaction as
“reciprocal events that require at least two objects and two actions. Interactions
occur when these objects and events mutually influence one another” (p. 8).
Thurmond and Wombach (2004) described the content-driven goal of interaction
in DE as “the learner’s engagement with the course content, other learners, the
instructor, and the technological medium used in the course. . . . Ultimately, the
goal of interaction is to increase understanding of the course content or mastery of
the defined goals” (p. 4).
Some definitions of interaction (Beard & Harper, 2002; Crawford, 1999;
Wagner, 1994) refer to the social purpose and processes of interaction, particularly
in regard to student–student (SS) and student–teacher (ST) interaction. Gilbert and
Moore (1989) distinguish between instructional interactivity and social interactiv-
ity, and Yacci (2000) acknowledges that the affective benefits of interactivity are
less well understood than the content benefits but that there is evidence that inter-
actions in an online classroom provide social presence and satisfaction. These
social aspects probably would not register on measures of achievement but might
on measures of attitude and course satisfaction.
Holden and Westfall (2006) and a number of other researchers make two addi-
tional points that are important in a discussion of interaction and DE. One is the
difference that exists between asynchronous DE, mediated synchronous DE, and
mixed DE (i.e., DE plus CI, also called blended and hybrid forms of DE). Mediated
synchronous and blended DE contain natural conditions for interaction, espe-
cially between the student and teacher and often among students. Asynchronous
DE may or may not contain capacities for text-based and/or voice-based and video-
based synchronous communication (e.g., MSN, Skype), but these facilities must
be built into the design of the technology applications available to students and
teachers.
The second point made by Holden and Westfall (2006) distinguishes interaction
that is asymmetrical from interaction that is symmetrical. Asymmetrical interac-
tion, like reading a textbook or watching a videotaped lecture, is considered by
Holden and Westfall to involve one-way communication. By contrast, symmetrical
interaction is equally balanced between the parties involved, like having a tele-
phone conversation, having an audio or video chat, or participating in an e-mail
discussion forum. By this description, a face-to-face lecture and discussion is both
synchronous and symmetrical, and listening to a taped lecture or discussion is
asynchronous and asymmetrical. However, synchronous, asynchronous, and
blended patterns often do contain elements of both symmetrical and asymmetrical
interaction. Anderson (2003a) remarked that one-way mediated communication
(e.g., an instructor’s recorded message) can sometimes supplant the need for two-way
communication, often at lower cost, if information giving is the primary goal.
Types of Interaction
An interaction is commonly understood to describe actions among individuals
but is extended here to include individual interactions with curricular content.
Moore (1989) distinguished among three forms of interaction in DE: (a) SS inter-
action, (b) ST interaction, and (c) student–content (SC) interaction.
SS interaction refers to interaction among individual students or among stu-
dents working in small groups (Moore, 1989). In correspondence courses, this
interaction is often absent; in fact, correspondence students may not even be aware
that other students are taking the same course. In later generations of DE, including
two-way videoconferencing and Web-based courses, SS interaction can be syn-
chronous, as in videoconferencing and chatting, or asynchronous, as in discussion
boards or e-mail messaging. With DE becoming popular in mainstream education
with on-campus students, SS interaction may also include face-to-face contact.
According to social theories of learning and distributed cognition (Salomon, 2000),
SS interaction is desirable both for cognitive purposes and motivational support,
and indeed, is at the heart of notions about constructivist learning environments in
DE (e.g., Kanuka & Anderson, 1999).
Student–instructor interaction traditionally focused on classroom-based dia-
logue between students and the instructor. According to Moore (1989), during ST
interaction, the instructor seeks “to stimulate or at least maintain the student’s
interest in what is to be taught, to motivate the student to learn, to enhance and
maintain the learner’s interest, including self-direction and self-motivation” (p. 2).
In DE environments, student–instructor interaction may be synchronous through
telephone calls, videoconferencing, and chats, or asynchronous through corre-
spondence, e-mail, and discussion boards. Face-to-face interaction between stu-
dent and instructors is also possible in some DE environments and when DE is
blended with face-to-face classroom environments. According to Moore (1989;
Moore & Kearsley, 2005) and several other DE theorists (e.g., Anderson, 2003b;
Holmberg, 2003), ST interaction may be directed toward providing motivational
and emotional support, activities that may register on attitude instruments more
than measures of achievement.
SC interaction refers to students interacting with the subject matter under study
to construct meaning, relate it to personal knowledge, and apply it to problem solv-
ing. Moore (1989) described SC interaction as “the process of intellectually inter-
acting with the content that results in changes in the learner’s understanding, the
learner’s perspective, or the cognitive structures of the learner’s mind” (p. 2).
Presumably, SC interaction also encompasses the development of mental and
physical skills. SC interaction may include reading informational texts, using
study guides, watching videos, interacting with computer-based multimedia, using
simulations, or using cognitive support software (e.g., statistical software), as well
as searching for information, completing assignments, and working on projects.
In discussing the continuing evolution and cross-fertilization of these seem-
ingly distinct modes of interaction, Anderson (2003a) states, “due to increasing
computational power . . . and storage capacity of computers . . . there is pressure
and opportunity to transform ST and SS interaction into enhanced forms of student–
content interaction” (p. 3). There is some question, though, as to how such progress
would affect the traditional relationships and bonds that have come to be valued
by students and teachers alike.
More recently, Anderson (2003a) expanded the three types of interaction in DE
to include instructor–instructor interaction, instructor–content interaction, and
content–content interaction. Although these interactions may be important in a
larger DE context, they are not often reported in DE studies that focus on individual
DE courses. Therefore, these interactions are beyond the scope of this research.
Strength of Interaction Treatments (ITs)
An important distinction lies between the actual behaviors constituting the three
types of interaction, which for research purposes may be observed or measured,
and the conditions or environments that are designed and arranged by teachers to
encourage such behaviors. We refer to the latter as “interaction treatments” to
distinguish them from the actual interaction behaviors that are intended to arise
from them. The behaviors themselves are seldom described in research reports in
sufficient depth or with sufficient precision to be studied in a systematic review.
Because ITs represent the levels of the independent variable in most studies, they
are usually described in detail and can be studied.
Moore (1989) encouraged distance educators to “organize programs to ensure
maximum effectiveness of each type of interaction, and ensure they provide the
type of interaction most suitable for various teaching tasks of different subject
areas, and for learners at different stages of development” (p. 5). More recently,
Anderson (2003a) has provided a slightly different and more nuanced take on the
provision of interactivity in DE, referred to by him as an “equivalency theorem.”
Equivalency, he says, suggests that different combinations of ITs can be provided
in different strengths and/or not at all to provide students with experiences that are
essentially equivalent, resulting in similar educational outcomes. He argues that
Deep and meaningful formal learning is supported as long as one of the three
forms of interaction (student–teacher; student–student; student–content) is at
a high level. The other two may be offered at minimal levels, or even elimi-
nated without degrading the educational experience. . . . High levels of more
than one of these three modes will likely provide a more satisfying educational
experience, though these experiences may not be as cost or time effective as
less interactive learning sequences. (p. 4)
Anderson (2003a) goes on to describe the combinations of instructional treat-
ments and technologies that can promote each of the three forms of interaction,
specifying among other things the levels (i.e., strength) of each type of interaction
that can be expected to be present or developed in different DE patterns. We used
his descriptions to validate our own procedures for rating the strength of interac-
tion patterns present in each of the studies that we included in the meta-analysis.
Purpose of the Meta-Analysis
This meta-analysis examines evidence of the effects of three types of ITs in DE
research studies in relation to achievement outcomes. Another purpose that evolved
out of Anderson’s (2003a) hypotheses is to investigate combinations of ITs to
determine if there are differences in their potential to affect achievement. Finally,
asynchronous forms of DE are examined independently. Not only were asynchro-
nous DE studies the most common single pattern (compared to synchronous and
mixed forms) but also asynchronous DE is judged to be the DE pattern most in
need of special consideration for the provision of the three forms of interaction.
Research Questions
1. What are the effects of the three kinds of interaction (SS, ST, and SC) on
achievement?
2. Does more overall IT strength promote better achievement?
3. Do increases in treatment strength of any of the three different forms of
interaction result in better levels of achievement?
4. Which combinations of SS, ST, and SC interaction most affect achievement?
5. Are there differences among synchronous, asynchronous, and mixed forms
of DE in terms of achievement?
6. What is the relationship between treatment strength and effect size for
achievement outcomes in asynchronous only DE studies?
Method
Meta-Analysis
Unlike some systematic reviews that are purely exploratory, this meta-analysis
is designed to examine specific questions related to the instructional conditions
that affect student interaction in DE: students’ interaction with other students,
with teachers, and with the content they are studying. We are particularly inter-
ested in the strength of the treatments that have the potential to influence learning
and to foster more positive student satisfaction with DE courses. To do this, we had
to make a series of judgments about the types and strengths of the ITs we were
examining. In this section, we describe the general procedures as well as how we
made various judgments relating to the general purposes of the meta-analysis.
Inclusion and Exclusion Criteria
The following criteria were used to define the set of studies to be included in
the meta-analysis:
1. A comparison between two DE conditions (i.e., where teaching and learn-
ing were separated through synchronous or asynchronous means), either
on the basis of pedagogical differences (e.g., feedback and discussion vs.
no feedback and discussion) or technological differences (e.g., satellite,
TV, radio broadcast vs. telephone and e-mail), was required.
2. DE applications with some face-to-face meetings (less than 50%) were
included. In these instances of “mixed or blended learning,” the dependent
measure (e.g., grades, test scores) had to encompass the entire course, not
just the DE or face-to-face segments separately.
3. A reported measure of achievement outcomes was required in the experi-
mental and the control condition.
4. Sufficient data for effect size calculation (e.g., means and standard devia-
tions) or estimation (e.g., p < .05), the reporting of sample sizes so that a
standard error of effect size could be calculated, and the explicit direction
of the effect (i.e., +/–) was required.
5. Only whole courses were included. Programs composed of more than one
course, in which data were aggregated over a collection of courses, were
excluded. We included outcome measures that reflected individual courses
rather than whole programs.
6. A report of the same or closely equivalent achievement measures for each
condition (i.e., outcome measure compatibility) was required.
7. An identifiable grade or age level of learner was required. All levels of
learners from young children in kindergarten to adults were included.
8. The studies could come from publicly available scholarly articles, book
chapters, technical reports, dissertations, or presentations at scholarly
meetings.
9. The inclusive dates of the studies were from January 1985 to December
2006. The start year (1985) was chosen because it marked the beginning of
the “digital age,” when the forerunners of the Internet were available and
e-mail could be used to communicate, representing the first alternative to
the postal service DE (Peters, 2003).
10. Courses that were not institutionally based (e.g., home study) were
excluded.
11. Only interventions that lasted 15 or more hours were included. Shorter
interventions were considered lacking in the external validity necessary to
generalize the findings to typical DE courses.
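Criterion 4 requires statistics sufficient to compute a standardized effect size. As a hedged illustration (this excerpt does not specify the authors' exact formulas), Hedges' g and its standard error are commonly derived from group means, standard deviations, and sample sizes as follows:

```python
import math

def hedges_g(m_exp, sd_exp, n_exp, m_ctl, sd_ctl, n_ctl):
    """Standardized mean difference (Hedges' g) with small-sample
    correction, plus its standard error.

    A positive g means the experimental (IT) condition outperformed
    the control, matching the valence convention described in the text.
    """
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n_exp - 1) * sd_exp**2 + (n_ctl - 1) * sd_ctl**2)
                   / (n_exp + n_ctl - 2))
    d = (m_exp - m_ctl) / sp
    # Hedges' correction for small-sample bias in Cohen's d
    j = 1 - 3 / (4 * (n_exp + n_ctl) - 9)
    g = j * d
    # Standard error, needed to weight effects and compute confidence bounds
    se = math.sqrt((n_exp + n_ctl) / (n_exp * n_ctl)
                   + g**2 / (2 * (n_exp + n_ctl)))
    return g, se

# Hypothetical course grades: IT group mean 80 (SD 10, n 30)
# versus control mean 75 (SD 10, n 30)
g, se = hedges_g(80, 10, 30, 75, 10, 30)
```

The standard error is what makes the sample-size reporting requirement in criterion 4 necessary: without n for each condition, an effect size cannot be weighted in the synthesis.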
Data Sources and Search Strategies
The studies used in this meta-analysis were located through a comprehensive
search of publicly available literature from January of 1985 through December of
2006. The following retrieval tools were used:
1. Electronic searches were performed on the following databases: ABI/Inform
Global (ProQuest), Academic Search Premier (EBSCO), CBCA Education
(ProQuest), Communication Abstracts (CSA), Digital Dissertations and
Theses (ProQuest), ED/ITlib, Education Abstracts (Wilson), ERIC
(Webspirs), FRANCIS (CSA), Medline (Pubmed), PsycINFO (EBSCO),
and Sociological Abstracts (CSA). Other databases included Australian
Education Index, Australian Policy Online, British Education Index, Education-
line, EdResearch Online, International Centre for Distance Learning Literature
Database, and Intute: Social Sciences.
2. Web searches were performed using the Google search engine.
3. Manual searches were performed in relevant journals, including American
Journal of Distance Education; Canadian Journal of Learning and
Technology; Distance Education; Educational Technology and Society;
International Journal of Instructional Technology & Distance Learning;
International Review of Research in Open and Distance Learning; Journal
of Distance Education; Journal of Interactive Media in Education; Journal
of Interactive Online Learning; Journal of Learning Design; Journal of
Technology, Learning and Assessment; Language Learning and Technology;
Open Learning; and Turkish Online Journal of Distance Education.
The reference lists of several earlier reviews were consulted, including Moore
(1989); Moore and Thompson (1990); Russell (1999); Machtmes and Asher
(2000); Cavanaugh (2001); Allen, Bourhis, Burrell, and Mabry (2002); Olson and
Wisher (2002); Shachar and Neumann (2003); Bernard, Abrami, Lou, Borokhovski,
Wade, et al. (2004); and Cavanaugh et al. (2004).
Branching searches were performed from the reference lists of many of the
studies located in earlier stages of the review.
Although search strategies varied depending on the retrieval tool used, search
terms generally included “distance education,” “distance learning,” “open learning,”
“virtual university,” “virtual classroom,” “online learning,” “Web-based learning,”
“electronic learning,” “elearn*,” “blended,” “hybrid,” “Internet,” “computer-mediated
communication,” “computer conferenc*,” “video conferenc*,” OR “computer-assisted
instruction.” These terms were combined with (student* OR learn* OR teach*
OR classroom*) and, in some cases, depending on the nature of the database, with
(achievement OR attitude* OR outcome* OR satisfaction OR perception*). Please contact
the first author to receive the original search strategies.
Study Selection
After judging the abstracts of more than 6,000 manuscripts that were found
through searches, we retrieved 1,034 as full texts and examined them for inclu-
sion. The interrater agreement (Cohen’s kappa) for this step was 0.74 (r = 0.70,
p < .001).
Each of the full-text manuscripts retrieved was read independently by two
researchers and rated on a 1 to 5 scale for possible inclusion, using the inclusion
and exclusion criteria previously described. The interrater correlation for this step
was 0.61 (p < .001). The two ratings for each study were combined to form a sum.
Studies with a sum of 5 out of 10 or more were given further consideration for
inclusion. In all, 190 studies met all inclusion criteria except for the “duration of
treatment” criterion.
We decided to exclude 116 short duration studies, because most were conducted
as part of a larger course, in laboratory settings, or otherwise under conditions that
could not be generalized to normal DE settings (i.e., Criterion 11). This left a total
of 74 studies for analysis. We estimated the overall interrater agreement (Cohen’s
kappa) for all steps in the selection process to range from 0.70 to 0.81.
The 74 studies were sorted into one of three categories of interaction: SS inter-
action, ST interaction, and SC interaction. Then the levels of the independent vari-
able in each study were examined and assigned as either treatment or control. This
distinction determined the valence of the effect sizes that were extracted from each
study. The categorization of studies is described more completely below.
Categorization procedure. Once selected, studies were categorized by the most
prevalent interaction type contained in the independent variable (i.e., SS, ST, or
SC) and the type(s) of outcome measure(s) present. Studies appeared in only one
interaction category so that the categories were orthogonal and could be compared.
Two trained coders, working independently, categorized the studies and then
resolved any conflicts through discussion. The initial interrater agreement (Cohen’s
kappa) was 0.71.
Effect sizes were extracted from the studies. The 74 studies yielded 74 effect
sizes for achievement outcomes and 44 for attitude outcomes. Table 2 shows the
number of effect sizes that were distributed across the three categories of ITs for
achievement and attitude.
After studies (effect sizes) were categorized, the levels of the independent vari-
able were scrutinized to determine which condition was experimental and which
was control. This was done by independent coders who compared the conditions
to determine which was the IT condition. This judgment had two elements: (a) which
level of the independent variable possessed the greatest potential for active engage-
ment by students and (b) which level of the independent variable encouraged more
two-way interaction (i.e., symmetrical interaction). Examples of these decisions
are presented in the next section. It should be noted that the magnitude of interac-
tion was not considered in assigning conditions or in calculating effect sizes.
Rather, instructional treatments were judged on their capacity to elicit or activate
interactive behavior in students. The table in Appendix A presents the designation
of treatment and control (pedagogy and media treatments) for all of the studies in
the meta-analysis.
Examples of categorization decisions
To illustrate how the studies were categorized according to their potential for
interactivity, four examples have been included here. In each case, we describe the
two conditions that were compared, how we determined which was the treatment
and which was the control, and the direction of the effect size. In these examples,
Group A is considered to be the most interactive treatment and Group B the least
interactive control.
Example 1. Beyth-Marom and Saporta (2002) compared two DE methods. One
group (Group A) of social science undergraduate students studying basic research
methods experienced seven satellite TV tutorials (two-way audio and one-way
video so that students could see and hear an instructor who could hear them),
whereas a second set of students (Group B) attended three satellite TV tutorials and
received four (asynchronous) videotape cassettes to be viewed at their conve-
nience. The fully synchronous group (Group A) was designated as the experimen-
tal group for ST interaction.
Example 2. Gulikers, Bastiaens, and Martens (2005) tested “authentic learning
environment” technology by comparing two types of simulation exercises pro-
vided to undergraduate students. These students were asked to conduct an analysis
of a virtual bus company that was having problems with a high incidence of
employees taking sick leave. The task was presented in two forms; one version
(Group B) provided all the necessary data on a Web site without multimedia ani-
mation, whereas the "authentic" condition (Group A) required the students to
approach animated employees, conduct simulated interviews, and receive advice
from an electronic advisor. We considered Group A to be high in “interaction with
content” and designated it the experimental condition and designated Group B as
the control condition.
Example 3. Hansen (2000) compared two approaches to course orientation applied
to an Introduction to Microcomputers course for undergraduate students. The entire
course was conducted at a distance through Web-based delivery of materials and
e-mail, supplemented by an orientation session, phone calls, and face-to-face
meetings between students and the instructor. The independent variable was the extent of the orientation session; the extended orientation group (Group A) received additional information and handouts and had reminder postcards mailed to them during the first 3 weeks of the term. The second group (Group B) did not receive the extended orientation. We assigned the achievement effect size to ST interaction and designated the extended orientation students (Group A) as the experimental group.

TABLE 2
Number and percentage of effect sizes at the categorization stage

                                 Achievement          Attitude
Interaction treatment
categories                       k     Percentage     k     Percentage
Student–student IT effects       10    13.5           6     13.6
Student–teacher IT effects       44    59.5           30    68.2
Student–content IT effects       20    27.0           8     18.2
Total effects                    74    100.0          44    100.0
Example 4. Bell, Hudson, and Heinan (2004) provided two methods of online learn-
ing to physician assistant students in a medical terminology course. Both versions
of the course used the same materials, but some students worked independently on
the Web (Group B), whereas others (Group A) received 12 case studies in an online
conference setting, which they then discussed through the use of asynchronous
messaging. We applied the comparative outcomes to SS interaction, counting the
case-based discussion participants (Group A) as the experimental group.
Effect size extraction
Calculated effect sizes (i.e., Cohen’s d) representing the standardized difference
between the mean of the most interactive condition (treatment) and the mean of the
least interactive condition (control) were converted to Hedges’ g to correct for
small sample bias. Independent effect sizes were extracted from each included
study.
Interrater reliability (i.e., Cohen’s kappa) of the effect size calculation was 0.93.
Effect sizes, standard errors, and sample sizes were entered into Comprehensive
Meta-Analysis™ 2.0 (Borenstein, Hedges, Higgins, & Rothstein, 2005) for the
main analyses. The accuracy of the analysis was verified by several investigators.
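The d-to-g conversion described above can be sketched as follows. The function names are illustrative; the correction factor J = 1 - 3/(4(nT + nC - 2) - 1) is the standard Hedges small-sample adjustment applied to a pooled-SD Cohen's d:

```python
import math

def cohens_d(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardized mean difference (treatment minus control) over the pooled SD."""
    pooled_sd = math.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2)
                          / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

def hedges_g(d, n_t, n_c):
    """Apply Hedges' small-sample correction factor J to Cohen's d."""
    j = 1 - 3 / (4 * (n_t + n_c - 2) - 1)
    return j * d
```

Because J is always slightly below 1, g shrinks d toward zero, with the shrinkage vanishing as samples grow.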
Coding Interaction Strength
After studies were categorized, it was apparent that the relative strength of the
treatment (i.e., the difference between the experimental and the control) was not
the same across studies in each category. Furthermore, it was also recognized that
many studies, even though they had been placed in a dominant category (e.g.,
student–student interaction), contained elements of either or both of the other two
interaction categories. As a result, we decided to create a coding scheme that took
into consideration both of these factors.
Procedure for coding the relative strength of each IT
We evaluated the level of the independent variable in each study and decided
the relative strength of the treatment by comparing the quality and/or quantity of
interaction in the experimental group to the quality and/or quantity of interaction
in the control group. If the two treatments both had high levels of interaction on SS
interaction, for instance, then relative strength was coded as 0 because the relative
strength of the treatment, taking both conditions into account, was considered to
be minimal. If, on the other hand, the experimental group had a high level of inter-
action and the control group had a low level of interaction, the relative strength was
coded as 2 (high). If the groups were moderately different in terms of interaction,
a relative strength of 1 (moderate) was assigned to that difference.
We wanted to validate the results of our procedure for estimating treatment
strength, so we used Anderson’s (2003a) descriptions of the predicted strengths of
pedagogy and technology treatments for correspondence education, DE via audio
and video conferencing, and Web-based courses to re-evaluate our original strength
ratings. Because Anderson’s category descriptions are more general than the spe-
cific details of pedagogy and technology provided in studies, some overestima-
tions and underestimations resulted. However, many were exact matches. Three
coders independently judged the strength of the interactions. SS, ST, and SC coding produced significant (p < .05) interrater agreement rs of .77, .83, and .71, respectively.
Procedure for coding the cumulative strength of the three ITs
After coding each interaction dimension separately, we needed to develop a
scale that combined overall strength across the three interaction dimensions. We
used the following coding scheme:
1 means “Minimal” (any combination of 0s and 1s);
2 means “Moderate” (any combination of 0s and 1s + one 2); and
3 means “High” (any combination with more than one 2).
Procedures for coding strengths of pairs of ITs
We wanted to assess each of three combinations of the categories of interaction
strength to answer the question, “Which combinations of the three ITs are related to
increasing average effect size?” Again, we used the relative strength ratings, this time
for each pair of ITs: SS + ST, SS + SC, and ST + SC. The combination ratings were
constructed in the following way:
0 means “Equal” (0 and 0);
1 means “Low” (any combination of 0s and 1s);
2 means “Moderate” (a 2 with either a 0 or 1);
3 means “High” (both 2s).
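The two coding schemes above can be expressed directly. This is a minimal sketch, assuming each IT dimension has already been rated 0, 1, or 2 for relative strength as described earlier (the function names are ours):

```python
def cumulative_strength(ss, st, sc):
    """Overall strength across the three ITs from per-dimension ratings (0, 1, 2)."""
    twos = [ss, st, sc].count(2)
    if twos == 0:
        return 1  # "Minimal": any combination of 0s and 1s
    if twos == 1:
        return 2  # "Moderate": 0s and 1s plus one 2
    return 3      # "High": any combination with more than one 2

def pair_strength(a, b):
    """Combined strength rating for a pair of ITs (e.g., SS + ST)."""
    if a == 0 and b == 0:
        return 0  # "Equal"
    if 2 not in (a, b):
        return 1  # "Low": any combination of 0s and 1s
    if a == 2 and b == 2:
        return 3  # "High": both 2s
    return 2      # "Moderate": a 2 with either a 0 or 1
```

For example, ratings of (2, 0, 1) on SS, ST, and SC yield a cumulative strength of 2 ("Moderate"), and the SS + ST pair (2, 0) yields a pair strength of 2.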
Coding Other Study Features
The complete coding sheet is in Appendix B. Other study features were coded
to reflect methodological quality, demographics (age and subject matter), and DE
mode (synchronous, asynchronous, and mixed).
Methodological Quality
The methodological quality of the studies included in a meta-analysis is of great
concern because the quality of the studies may affect the veracity of the conclu-
sions that can be drawn regarding the phenomenon under consideration. Studies
that contain design flaws, either because of inattention by the researcher or logisti-
cal or practical circumstances, reduce the attribution of causality by failing to
control for alternative explanations to the research hypothesis. Because studies of
educational phenomena are often conducted in field settings rather than in labora-
tories, they are often judged to be low in internal validity.
Four approaches to dealing with methodological quality are common in the
literature of meta-analysis. The first alternative is to use methodological quality as
an inclusion criterion so that only the highest quality studies are included in the
review (What Works Clearinghouse, 2006). The second alternative is to treat
methodological quality as a study feature, classify included studies by quality
criteria, and report their findings separately (Lipsey & Wilson, 2001). The third
alternative is to weigh studies by methodological quality, giving more weight to
some studies and less to others (Lipsey & Wilson, 2001). And the fourth alternative
is to treat methodological quality as a predictor of effect size by removing its influ-
ence from the collection of evidence (Hedges & Olkin, 1985). All of these approaches
have their advantages and their disadvantages.
In the first two approaches, low-quality studies are eliminated from consider-
ation entirely or segregated and interpreted separately. The biggest problem here
is that in a field such as education in which there are few high-quality studies, the
meta-analyst may be forced to conclude that there are too few studies for interpre-
tation and suspend investigation until more are available. In the third and fourth
approaches, the effects of lower quality studies are reduced but not eliminated.
Also, it is not clear how these approaches would aid the analyst in estimating the
average effect size after adjustment for methodological quality was applied. A fifth
approach, favored by us (Abrami & Bernard, 2008), uses an adjustment procedure
to bring the average effect size of lower quality studies up to that of the average
effect size of the highest quality studies (see Results section for more descrip-
tion). The within-class variability is left intact so that subsequent moderator vari-
able analysis (i.e., regression or ANOVA) can proceed in the normal fashion. One
of the big advantages of this approach is that all studies for which an effect size can
be calculated are left in the study, thus improving the power of subsequent tests to
find differences. In the current review, we used five coded methodological quality
study features to form a judgment about study quality on a scale from 6 to 18 (i.e.,
research design was double weighted and some items had more than two levels).
Results
Outlier Analysis and Publication Bias
One study with a sample size of over 32,000 was removed because of its dom-
inating effect on the weighted average effect size. “One study removed” analysis
revealed that all of the remaining average effects fell within the 95% confidence
interval of the overall adjusted average effect size so that all were considered
within a reasonable range around the average effect. In addition, removal of three
high-influence effect sizes failed to improve the fit of the distribution, so based on
these two factors, all 74 effect sizes for achievement were retained.
An examination of the funnel plot (standard error by effect size) for achieve-
ment effects revealed a nearly symmetrical distribution around the average effect.
No imputed effects were required for symmetry. Classic Fail-Safe N revealed that
44 additional studies would be required to nullify the effect of the achievement
data. Taken together, no obvious publication bias was revealed.
Adjusting for Methodological Quality
In this study, we coded five study features related to methodological quality:
(a) research design quality, (b) instrument quality, (c) statistical quality, (d) teacher
equivalence (same/different teacher), and (e) material equivalence (same/different
materials).
Abrami and Bernard (2008) outlined four steps, expressed as questions, for dealing
with the methodological quality of studies in a meta-analysis. The four steps that we
followed in conducting this meta-analysis were related to the following questions:
Step 1: Are the effect sizes homogeneous?
Step 2: Does study quality explain the heterogeneity?
Step 3: Which qualities of studies matter?
Step 4: How do we deal with the differences?
We now describe how we implemented these steps to adjust achievement outcomes
according to the methodological quality of the studies in the meta-analysis.
Achievement outcomes. We found that the overall unadjusted average effect size of 0.10 was significantly different from zero, z(73) = 3.52, p < .001, and significantly heterogeneous, QT(73) = 209.86, p < .001. We then looked to see whether the variability might be explained by methodological quality. The scores on the methodological quality scale for these achievement data ranged from 6 to 14 and produced the frequency distribution of scale categories shown in Table 3. The scale categories significantly explained effect size, QB(8) = 30.30, p < .001, and, treated as an ordinal scale in regression, significantly predicted effect size (βRegression[1, 73] = .08, p < .001, QRegression = 23.50, p < .001). Based on this analysis, we decided to classify the scale into three larger categories of methodological quality.
In deciding how to classify the effect sizes, we took three factors into consider-
ation: (a) approximately equal intervals of the scale; (b) similar average effect
sizes within categories, resulting in homogeneous or nearly homogeneous catego-
ries; and, if possible, (c) roughly proportionate relative frequencies within catego-
ries. Based on these factors, we decided to create three intervals (categories): low
methodological quality = 6 to 8 (k = 11), moderate methodological quality = 9 to
12 (k = 52), and high methodological quality = 13 to 15 (k = 11). The final split
represents a compromise among the three criteria previously described.
Following our procedures, post hoc tests of homogeneity were conducted
between the categories, and the low and moderate categories were found to be
equal and both were significantly different from the high methodological quality
category. The difference between the average of the low and moderate categories
and the high category average was 0.35. This value was then added as a constant to each effect size in the low and moderate categories. The inverse variance weights were recalculated, based on the adjusted standard errors, and the three categories were again compared.

TABLE 3
Methodological quality scale and categories for achievement outcomes

Scale    g+       Relative frequency    Category names    Frequency (relative %)    g+ (category)
6        –0.52    1                     Low               k = 11 (14.86%)           –0.001
7        –0.60    3
8        –0.79    7
9        –0.04    14                    Moderate          k = 52 (70.28%)           0.05
10       –0.08    18
11       0.07     14
12       –0.00    6
13       0.54     6                     High              k = 11 (14.86%)           0.39
14       0.39     5
Total    0.10     74 (100.00)                                                       0.10
Table 4 shows the unadjusted and adjusted mean effect sizes for each of the
three categories of methodological quality. The left side of the table shows the
unadjusted weighted mean effect size for each category of methodological quality.
The right side of Table 4 shows the outcomes of the adjustment procedure. Note that the weighted mean effect sizes for the low and moderate categories are only approximately equal to that of the high category, because a single constant was applied to studies in the two lower categories.
roughly equal category means, the variability within categories was left nearly
intact (Q = 187.10 vs. 182.60). This approach allowed us to estimate and then
model the effects attributable to the best quality studies while exploring the vari-
ability of the entire distribution.
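The adjustment just described can be sketched as a small routine. This is a hypothetical illustration of the Abrami and Bernard (2008) procedure, not their actual software: a constant equal to the gap between the high-quality category mean and the lower-category mean is added to every lower-category effect size, with inverse-variance weights (w = 1/SE²) used for all means:

```python
def inv_var_mean(effects, ses):
    """Fixed-effect weighted mean effect size with inverse-variance weights."""
    weights = [1 / se ** 2 for se in ses]
    return sum(w * g for w, g in zip(weights, effects)) / sum(weights)

def adjust_for_quality(effects, ses, quality, high_label="high"):
    """Raise lower-quality effects by a constant so their weighted mean matches
    the highest-quality category; within-category variability is preserved."""
    high = [(g, s) for g, s, q in zip(effects, ses, quality) if q == high_label]
    low = [(g, s) for g, s, q in zip(effects, ses, quality) if q != high_label]
    constant = (inv_var_mean(*zip(*high)) - inv_var_mean(*zip(*low)))
    return [g + constant if q != high_label else g
            for g, q in zip(effects, quality)]
```

Because the same constant is added to every lower-category effect, deviations around each category mean, and hence the within-class Q, are left essentially intact.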
Attitude outcomes. Attitude outcomes were analyzed using the same procedure based
on methodological quality. The adjusted average ES of +0.09 was heterogeneous. Major
results for the attitude data with respect to each research question are briefly summarized
at the end of the Results section. For the purposes of this article, we concentrate on the
achievement results, but detailed results of the attitude data are available by request.
Primary Analysis: Demographics
The publication demographics of the studies were examined through analysis
of variance across categories of publication date. The achievement data did not
differ across categories of publication date.
There is one positive outcome of this analysis, which we think bodes well for
future research in DE. About 68% of achievement studies fell between 2000 and
2006. This does not necessarily suggest a shift away from CI/DE comparison studies,
but it does indicate that more DE treatment comparisons are becoming available. The
research-based study of DE may well benefit substantially from this apparent shift.
TABLE 4
Categories of methodological quality for achievement and statistics for
unadjusted and adjusted effect sizes
Unadjusted effect sizes Adjusted effect sizes
Methodology k g+ SE Q g+(adj) SE Q
Low 11 –0.00 0.07 23.06 0.34 0.07 22.91
Moderate 52 0.05 0.03 133.86 0.39 0.03 129.56
High 11 0.39 0.07 30.18 0.39 0.07 30.13
Within-class 187.10 182.60
Between-class* 22.76** 0.36
Total 74 0.10 0.03 209.86 0.38 0.03 182.96
*χ
2
crit(2) = 5.99. **p < .0001.
Educational level. There were only two prominent levels of education for the
achievement data. For achievement outcomes, studies of undergraduate students
(75.7%) predominated, followed by studies of graduate students (21.6%). The
domination of postsecondary studies in this sample unfortunately reduces the gen-
eralizability of the results that follow. We hope that the future efforts of primary
researchers will produce a sufficient corpus of studies for meta-analysis, at all
levels of the educational spectrum.
Primary Results: Substantive Research Questions
1. What are the effects of the three kinds of interaction (SS, ST, and SC)
on achievement?
The adjusted weighted means for each class of IT differed significantly (QB = 7.05, p < .05). Post hoc tests revealed that the SS and SC means were both significantly larger than ST and that SS was not different from SC. All three categories were moderately and significantly variable (see Table 5).
These results suggest that ST ITs are less effective, possibly more difficult to implement consistently, or provide less added value than either SS or SC ITs.

TABLE 5
Weighted average achievement effect sizes for categories of interaction

Interaction categories    k     g+(adj.)    SE
Student–student           10    0.49        0.08
Student–teacher           44    0.32        0.04
Student–content           20    0.46        0.05
Total                     74    0.38        0.03
(Q) Between-class*              7.05**

*χ²crit(2) = 5.99. **p < .05.

2. Does more overall IT strength promote better achievement?
This question addresses Anderson's (2003a) hypotheses about the strength of treatments. The between-class comparison involving categories of treatment strength was significant (see Table 6), and post hoc comparisons revealed that both moderate- and high-strength ITs outperformed low-strength ITs and that high-strength ITs were not significantly different from moderate-strength ITs. In addition, the linear association between treatment strength and effect size was significant (βRegression[1, 73] = .09, p = .01, QRegression = 6.66, p = .01). This result seems to support Anderson's prediction.
These results suggest that increasing the strength of IT affects achievement.

TABLE 6
Categories of overall interaction strength for achievement outcomes

Interaction strength    k     g+(adj.)    SE
Low strength            31    0.25        0.04
Moderate strength       28    0.55        0.05
High strength           15    0.36        0.06
Total                   74    0.38        0.03
(Q) Between-class*            23.80**

*χ²crit(2) = 5.99. **p < .01.
3. Do increases in treatment strength of any of the three different forms
of interaction result in better levels of achievement?
This is a more nuanced question about the strength of the three ITs. It examines
the relative strength of each IT separately.
Only the SC category produced both significant between-class differences and a positive linear relationship with effect size. Table 7 shows the average effect size for each rating on the strength scale for each interaction category and the between-class Q statistics for achievement. Student–content produced a significant regression effect (βRegression[1, 73] = .13, p = .001, QRegression = 12.50, p = .001). Post hoc analysis indicated that high-strength ITs outperformed both the low and moderate ITs. These results suggest that stronger SC ITs provide achievement advantages over weaker SC ITs.

TABLE 7
Weighted average effect sizes for categories of treatment strength

Student–content interaction
Interaction strength    k     g+(adj.)    SE
Low strength            35    0.32        0.04
Moderate strength       27    0.33        0.05
High strength           12    0.60        0.06
Total                   74    0.39        0.03
(Q) Between-class*            17.36**

*χ²crit(2) = 5.99. **p < .01.
4. Which combinations of SS, ST, and SC interaction most affect achievement outcomes?
This question asks whether certain combinations of SS, ST, and SC produce better achievement outcomes than others. The analyses produced significant results (see Table 8) for the combinations SS + SC (βRegression[1, 73] = .15, p < .001, QRegression = 12.39, p < .001) and ST + SC (βRegression[1, 73] = .16, p = .001, QRegression = 10.24, p = .001). The combination SS + ST was not significant.

TABLE 8
Combinations of interaction categories for achievement outcomes

                              SS + SC                   ST + SC
Levels of
treatment strength      k     g+(adj.)    SE      k     g+(adj.)    SE
Equal (0)               11    0.17        0.09    4     0.40        0.15
Low (1)                 34    0.33        0.04    32    0.28        0.04
Moderate (2)            29    0.48        0.04    38    0.49        0.04
Total                   74    0.38        0.03    74    0.38        0.03
(Q) Between-class*            12.40**                   13.94**

*χ²crit(2) = 5.99. **p < .01.
5. Are there differences among synchronous, asynchronous,
and mixed forms of DE in terms of achievement?
In this study, we found three distinct patterns: synchronous, asynchronous, and
mixed DE studies. Mixed is also referred to in the literature as blended and hybrid
and describes courses that are a combination of DE and face-to-face instruction.
From the separate designations of the mode of the treatment and the mode of the
control, we constructed three categories of direct or pure comparisons between like
modes (i.e., synchronous vs. synchronous, asynchronous vs. asynchronous, mixed
vs. mixed). Out of the 74 effect sizes for achievement, 49 remained in this analysis
when unlike comparisons were removed.
On average, all experimental conditions were significantly better than their
control conditions, but the between-class test of synchronous, asynchronous, and
mixed studies was not significant (see Table 9). The “best of both worlds” predic-
tion for mixed courses is not borne out statistically in these results (i.e., there is no
advantage for mixed courses), but the low number of effect sizes suggests that this
is an area of DE in need of research attention.
TABLE 9
Comparisons among synchronous, asynchronous, and mixed patterns of DE

DE mode             k     g+(adj.)    SE
Synchronous DE      5     0.38        0.11
Asynchronous DE     37    0.39        0.04
Mixed DE            7     0.50        0.09
Total               49    0.41        0.03
(Q) Between-class*        1.40

*χ²crit(2) = 5.99.

6. What is the relationship between treatment strength and effect size for achievement in asynchronous-only DE studies?
It is arguable that interaction is most crucial, and perhaps most difficult to achieve, in DE courses that are entirely asynchronous. These are DE courses that have no face-to-face or synchronous component and are most like the Internet and Web-based courses that have become so popular. We were interested in knowing if studies of this DE mode were different from studies containing synchronous or face-to-face components. Table 10 shows the weighted regression analysis of the asynchronous studies (k = 37) and other patterns (referred to as "not asynchronous")
in which synchronous communication or face-to-face interaction was present
(k = 37). The “not asynchronous” condition contains the same synchronous and
mixed studies shown in Table 9, plus all other combinations where unlike modes
were compared (e.g., asynchronous vs. synchronous). General treatment strength
(i.e., overall strength without reference to particular interaction patterns) was a
significant predictor of effect size for the “asynchronous only” studies but not for
the other category. However, both categories of studies produced a significant
linear relationship between SC strength and effect size.
When asynchronous studies were examined in terms of IT combinations, the
same pattern emerged that was observed earlier for the entire collection. These are
the 37 effect sizes shown in Table 9 that had a weighted average effect size of 0.39.
The results of both regression analyses indicated strong relationships between
combinations of ITs (i.e., SS + SC and ST + SC).
TABLE 10
Weighted regression analysis between treatment strength and effect size for asynchronous and "not asynchronous" distance education studies

                                      Asynchronous only     Not asynchronous
Model                                 β        SE           β        SE
General treatment strength
  Slope                               0.25*    0.06         0.02     0.05
  (Q) Regression                      16.36*                0.12
Student–content treatment strength
  Slope                               0.17*    0.07         0.15*    0.07
  (Q) Regression                      6.91*                 3.88*

Note. k = 37 (df = 1, 35).
*p < .05.

Summary of the Findings
The findings of this meta-analysis are summarized in Table 11.

Discussion
General Considerations
As an emerging educational practice matures—gaining its own raison d'être, its own clientele, its own methodologies, and its own infrastructures—the need diminishes for it to be justified through comparisons with its more established alternatives. This seemingly is the case for DE. It is arguable, then, that the next form of progress to advance theory and practice will be made as researchers begin to examine how instructional and technological treatments differ between DE conditions, not between DE and CI. Only comparisons between DE treatments can provide direct evidence of "what works" in DE, and only through research syntheses of this literature can we make broad statements that will hold up across many types of courses, learners, and curricula.
In this meta-analysis, we have attempted to synthesize the existing and some-
what scant literature of DE versus DE studies, within a commonly understood
framework of ITs. We believe that in doing this, we have made a substantive con-
tribution to progress in DE and successfully developed a prototype for future
syntheses of the DE research literature. A range of researchable issues, including
independence versus dependence and the role and function of technology, can also
be investigated across competing and relatively comparable treatments.
In this meta-analysis, we have identified one of the critical challenges associated
with comparing different instructional treatments rigorously—namely, determining
the meaning of an effect size. If this meaning is unclear or inconsistent, nothing that
makes sense can emerge from the exercise. Consider the following example. In
some ways, student interaction and student autonomy are contradictory concepts
(e.g., Daniel & Marquis, 1979), yet both are integral to the theoretical literature of
DE. In certain instances, an effect size calculated on the basis of high versus low
interaction would have the opposite sign of one calculated for high versus low
independence. Both are correct, but their meaning is different, so studies of student
interaction cannot be synthesized with studies of student autonomy. Therefore,
maintaining a consistent distinction between the experimental and control
conditions is of critical concern to the success of a meta-analysis of this type.
Moreover, establishing mechanisms for verifying the reliability and conceptual
validity of the many judgments that must be made may be even more important here
than in a conventional meta-analysis.

TABLE 11
Summary of the findings for achievement and attitude outcomes for each research question

Question 1: Categories of interaction (SS, ST, SC)
  Achievement: All categories > 0.25; SS and SC > ST
  Attitudes: SS > ST and SC
Question 2: Overall strength of interaction categories
  Achievement: Increase in strength for moderate and high strength over low strength; regression is significant
  Attitudes: Increase in strength for moderate over low strength; not enough data to evaluate high strength, and regression is not significant
Question 3: Strength of individual categories
  Achievement: Increase in strength for high over low and moderate for student–content only
  Attitudes: Increase in strength for moderate over low for student–content only
Question 4: Combinations of categories
  Achievement: Increasing relationship between strength and effect size for SS + SC and ST + SC
  Attitudes: No finding
Question 5: Asynchronous vs. synchronous vs. mixed distance education
  Achievement: No difference among types of distance education
  Attitudes: Synchronous and asynchronous greater than mixed distance education
Question 6: Interaction in asynchronous distance education only (achievement outcomes)
  Achievement: Strength of SC affects outcomes in asynchronous settings more than in other settings
  Attitudes: N/A

Note. SS = student–student interaction; ST = student–teacher interaction; SC = student–content interaction.
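To make the sign issue concrete, the sketch below shows how the same two groups yield effect sizes of opposite sign depending on which is designated "experimental." The scores are invented, and `cohens_d` is a generic pooled-standard-deviation formula used for illustration, not the exact estimator employed in the review:

```python
import statistics

def cohens_d(experimental, control):
    """Standardized mean difference (Cohen's d) using a pooled SD.

    A positive value favors whichever group is passed as `experimental`,
    so the experimental/control designation fixes the sign of the effect.
    """
    n1, n2 = len(experimental), len(control)
    s1, s2 = statistics.variance(experimental), statistics.variance(control)
    pooled_sd = (((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(experimental) - statistics.mean(control)) / pooled_sd

# Hypothetical achievement scores for a high-interaction condition and a
# high-autonomy (low-interaction) condition.
high_interaction = [78, 82, 75, 88, 80]
high_autonomy = [72, 70, 77, 74, 69]

# Framing 1: interaction is the "experimental" condition.
d_interaction = cohens_d(high_interaction, high_autonomy)
# Framing 2: autonomy is the "experimental" condition.
d_autonomy = cohens_d(high_autonomy, high_interaction)

# Identical comparison, opposite signs: the two framings cannot be pooled
# into one synthesis without first imposing a consistent valence.
assert abs(d_interaction + d_autonomy) < 1e-9
```

The two values are exact negatives of one another, which is why the review fixes the valence so that the (stronger) IT always serves as the experimental condition.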
Substantive Research Outcomes
The major conclusion from this review is that designing ITs into DE courses,
whether to increase interaction with the material to be learned, with the course
instructor, or with peers, positively affects student learning. The adjusted average
effect of 0.38 represents a moderate and significant advantage for ITs over alterna-
tive instructional treatments, including less prominent ITs. We can only speculate
on the internal mental processes that these ITs foster, but we believe that an increase
in cognitive engagement and meaningfulness may be the result of different amounts
and types of interactivity induced through the presence of ITs.
When divided according to Moore’s (1989) categories of ITs, it appears that
there is support for the importance of all three: SS, ST, and SC. However, we found
a difference in effectiveness for SS and SC over ITs promoting ST interaction.
We did not code the studies for the quality or quantity of interactions that actu-
ally occurred in the experimental and control groups. Such information is typically
not available in reports of research as measures of treatment fidelity, although
occasionally they do appear as the dependent variable of a study. As a result, we
know only about the relative differences in the affordances provided by ITs, not
how students actually responded to them. Richard Clark (R. E. Clark, personal
communication, October 12, 2007) made a point about the actual strength of the
treatment being indeterminate in much of educational technology research.
We are therefore left wondering what we would find if measures of student activ-
ity data were available, making it possible to connect induced student activity and
interaction to measures of achievement. It may be that the presence of ITs functioned
in exactly the way it was intended, by activating student interactivity, so that our
estimates of the effects of interactivity are fairly accurate. But just because opportuni-
ties for interaction or collaboration were offered to students does not mean that stu-
dents availed themselves of them, or if they did interact or collaborate, that they did
so effectively. The latter case is the more likely event, so the achievement effects
resulting from actual interactivity may be underestimated in our review. All we really
know is that something in the nature of the distinction between treatments and con-
trols affected achievement outcomes. Not understanding the underlying phenomenon
is a problem that is encountered in research on interventions of all sorts in education
and limits our ability to develop strong theories that can propel further development.
Larreamendy-Joerns and Leinhardt (2006) pointed out another possibility:
Although online learning environments that allow for social interaction con-
stitute a remarkable advance, they should not be construed as inevitably
conducive to learning, solely because student–student and student–instructor
exchanges take place. Nor should they be understood as obviously
consistent with a vision of knowledge as practice or with efforts to nurture
communities of practice. (p. 591)
According to this view, activity itself may not be the active ingredient, particularly when
it comes to SS interaction. To make things more complicated, Fulford and Zhang
(1993) found that students’ perception of interaction was a better predictor of
course satisfaction than their actual measured interaction. Following this line of
reasoning, providing the potential for interaction may be an important design con-
sideration, even if students do not actually avail themselves of this potential. It
seems unlikely, however, that perceived interaction would have any effect on
achievement.
Some of the most interesting results from this meta-analysis involve Anderson’s
(2003a) notion of the strength of ITs. Implicit in his arguments about strength is
that DE-learning environments should be designed with a consideration for the
importance of the three forms of interaction (i.e., creating the conditions for them
to occur) and that their strength is related to both effective learning and the satis-
faction that students express. Anderson goes so far as to predict that bolstering
the impact of at least one of the three interaction types, even while the others are
kept at a minimal level (or, he says, eliminated altogether), has an effect on “deep
and meaningful formal learning” (p. 4).
Through our strength coding, we were able to investigate these claims. First, we
found a significant linear relationship between IT strength and effect size for
achievement outcomes. Furthermore, when the actual categories of strength were
investigated through ANOVA, we found strong support for Anderson’s (2003a)
hypothesis about achievement. Both high and moderate levels of treatment strength
were better than low levels.
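The strength-by-effect-size regression referred to here can be illustrated with a simple fixed-effect meta-regression using inverse-variance weights. The numbers below are made up for demonstration and are not the effect sizes synthesized in this review:

```python
def weighted_regression(strength, effect_size, variance):
    """Inverse-variance weighted least-squares slope and intercept,
    mirroring a simple meta-regression of effect size on IT strength."""
    w = [1.0 / v for v in variance]  # inverse-variance study weights
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, strength)) / sw
    ybar = sum(wi * yi for wi, yi in zip(w, effect_size)) / sw
    sxy = sum(wi * (xi - xbar) * (yi - ybar)
              for wi, xi, yi in zip(w, strength, effect_size))
    sxx = sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, strength))
    slope = sxy / sxx
    return slope, ybar - slope * xbar

# Hypothetical studies: strength coded 1 = low, 2 = moderate, 3 = high.
strength = [1, 1, 2, 2, 3, 3]
effect = [0.10, 0.20, 0.35, 0.40, 0.55, 0.60]
var = [0.02, 0.03, 0.02, 0.04, 0.03, 0.02]

slope, intercept = weighted_regression(strength, effect, var)
# A positive slope means effect sizes rise as coded IT strength increases.
```

Under these invented data the slope is positive, corresponding to the significant linear relationship between treatment strength and achievement reported above.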
When we looked at the strength of each category of IT separately, only strength-
ening SC interaction was related to increasing effect size. In essence, this suggests
that when students are given stronger versus weaker course design features to help
them engage in the content, it makes a substantial difference in terms of achieve-
ment. We also found SC ITs implicated when they were set in combinations with
the other ITs. SS + SC and ST + SC produced significant linear effects as well as
significant differences between levels of strength.
We went on to inquire about the nature of different patterns of DE—asynchronous,
synchronous, and mixed—that had not been compared directly in previous work.
In regard to their relationship to ITs, we found them to be about equal on measures
of achievement.
In a final set of analyses, we found that the relationship between the strength of
ITs and achievement held for asynchronous DE courses but did not hold for “not
asynchronous” courses. It seems logical that courses lacking either mediated
synchronous interaction or direct face-to-face interaction would benefit most from
enhanced interactive capabilities.
The results of this meta-analysis do not provide recipes for improving the
design of DE courses. Results are based on such a wide variety of instructional
treatments that it would be difficult to argue for one over another. On the flip
side, however, this variability means that the results generalize across many
DE courses and course characteristics. As well, differences between treatments
were, by necessity, judged in relative rather than absolute terms, making it
difficult to rate with precision the expected outcomes associated with any
given instructional or technology treatment. However, the results provide a
blueprint for strategically examining individual courses in terms of the kinds
of ITs that are made available to students. From these results, it seems that a
designer’s first consideration should be to provide strong associations with the
content, unless content acquisition is not the primary goal of the instruction.
Although we were not able to examine more refined questions regarding con-
tent ITs, it makes sense that those involving more overt student activity would
be preferred over more passive forms. A second determination could be made
as to the desirability of stronger SS or ST connections. Because technical
capacities for facilitating human–human interaction seem to be constantly
improving, it is likely that we can expect noticeable improvements in all forms
of interaction that involve collaboration, discussion, and feedback.
Strengthening all three forms of interaction seems ideal, but Anderson (2003a)
points out that this may exceed the availability of human and technical
resources needed for cost-effectiveness.
We can now conclude with some degree of confidence that the availability
of ITs is related to increased learning in DE and that stronger treatments are
more effective than weaker treatments. We cannot conclude from the avail-
able evidence in this research synthesis exactly how the interactivity that is
presumed to underlie such treatments increases learning. We speculate that
there are underlying cognitive and motivational mechanisms that are indi-
vidually or collectively responsible for this link, but another generation of DE
studies is needed to make these processes understood. We invite the commu-
nity of DE theoreticians and primary researchers to think about and further
explore the cognitive and attitudinal mechanisms that link ITs to better stu-
dent performance and satisfaction. This includes developing, adapting, and
testing emerging technologies and instructional approaches to enhance SS,
ST, and SC interaction and suggesting how available resources can be used
more effectively.
Ideas for future DE research and development. Increasing the quantity of interac-
tion may lead to enhanced learning and satisfaction, but increasing the quality of
such interactions, especially in terms of cognitive engagement and meaningful-
ness, may be of greater importance. This is an issue that we have been unable to
address directly in this meta-analysis, so it seems appropriate to discuss it in terms
of research and development work in progress. There appear to be at least two
ways to foster increases in the quality of interactions: instructional design and
software design.
Instructional designs that foster higher quality interactions focus on course fea-
tures that promote high-quality learning activities. For example, cooperative learn-
ing structures may help ensure high-quality SS interactions by using positive
interdependence among the learners as well as individual accountability to ensure
cognitive engagement and meaningfulness. Similarly, designing effective course
strategies for problem-based (Bernard, Rojo de Rubalcava, & St-Pierre, 2000) and
guided discovery (Brown & Campione, 1994) forms of DE may promote the
quality of SC interactions. Finally, the quality of ST interactions may be increased
by ensuring that content interactions focus on comprehension and higher order
thinking skills rather than activities that deal with lower level factual information,
procedural details of a course, or assessment issues.
A range of knowledge tools may also be used to promote better quality DE
interactivity. For example, Mayer (2001) and others have shown that interactive
multimedia can lead to improved learner performance compared to text-only con-
ditions. Multimodal representations and dynamic representations, especially in
mathematics and sciences, may help make complex concepts more understand-
able; more efficiently learned; better retained; and more readily recalled, applied,
and transferred.
It is arguable that the range of established tools currently available to edu-
cators, such as Learning Management Systems technologies, has yet to be
developed sufficiently or examined systematically by the community of DE
developers and researchers for their capacity to activate interactive behavior.
We encourage more and better quality research along these lines, as well as
increased research activity in elementary and secondary school applications
of DE.
Research whose time is past. To reiterate, a subsidiary yet compelling message
arising from this review is that little more can be gained through comparisons
between DE and CI. Although in our previous review of 103 DE versus CI stud-
ies (Lou et al., 2006), we were able to detect the importance of the three kinds of
interaction in DE, we could not have made sense out of the relationship among
the three types or tested specific claims about interaction in the way that the cur-
rent review has. If there is any further traction to be gained by conducting DE
versus CI studies, it is through more refined investigations of how specific
instructional methodologies that have proven effective in CI environments such
as cooperative learning (Johnson, Johnson, & Stanne, 2000) can be adapted for
DE. As well, classroom instructors may gain equally from understanding how
proven DE practices can successfully be adapted for their use.
APPENDIX A
Treatment and control designations for studies in the meta-analysis
Studies that contain both achievement and attitude data are marked with an asterisk.

Achievement data

Student–student interaction

*Bell, Hudson, & Heinan (2004)
  Pedagogy (Experimental): Collaborative case study discussions
  Pedagogy (Control): Web-based tutorial
  Technology (Experimental): Blackboard learning content management system (LCMS)
  Technology (Control): Blackboard LCMS

Brewer & Klein (2004)
  Pedagogy (Experimental): Role-plus-reward interdependent groups
  Pedagogy (Control): No structured interdependence groups
  Technology (Experimental): Asynchronous Web-based: Outlook Express
  Technology (Control): Asynchronous Web-based: Outlook Express

*Britton (1992)
  Pedagogy (Experimental): Student group attending common lecture broadcast
  Pedagogy (Control): Individuals attending lecture broadcast at personal workstations
  Technology (Experimental): Two-way audio, one-way video with one screen
  Technology (Control): Two-way audio, one-way video with individual screens

*Cheng (1991)
  Pedagogy (Experimental): Self-paced learning with computer-based interaction
  Pedagogy (Control): Self-paced learning with print-based material
  Technology (Experimental): Asynchronous CMC (computer-mediated communication) with phone-in option
  Technology (Control): Correspondence with phone-in option

*Jung, Choi, Lim, & Leem (2002)
  Pedagogy (Experimental): Collaborative interaction: instructor-initiated group discussions
  Pedagogy (Control): Academic interaction: consultation with instructor when needed
  Technology (Experimental): Asynchronous Web-based instruction
  Technology (Control): Asynchronous Web-based instruction
*Romanov & Nevgi (2006)
  Pedagogy (Experimental): Unrestricted WebCT use with discussion forums and student–student message system
  Pedagogy (Control): Restricted WebCT with regular e-mail access to teacher
  Technology (Experimental): Unrestricted use of WebCT LCMS
  Technology (Control): Restricted use of WebCT LCMS with regular e-mail access

Ruksasuk (2000)
  Pedagogy (Experimental): Social and instructional interaction
  Pedagogy (Control): Instructional interaction
  Technology (Experimental): Web-based instruction with e-mail, chat groups, bulletin boards, hyperlinks, and FAQ
  Technology (Control): Web-based instruction with hyperlinks, FAQ, and e-mail with teacher

*Skylar (2004)
  Pedagogy (Experimental): Online access to PowerPoint notes, lecture notes, digital videos, textbook, and discussion board
  Pedagogy (Control): CD-ROM–based PowerPoint notes, lecture notes, digital videos, and textbook
  Technology (Experimental): Online class: WebCT LCMS
  Technology (Control): Class-in-a-box: CD-ROM

Tuckman (2007)
  Pedagogy (Experimental): Scaffolded distance learning: collaboration and peer and instructor coaching
  Pedagogy (Control): Traditional distance learning: with asynchronous teacher interaction when needed
  Technology (Experimental): Web-based instruction with study skills support groups and to-do checklists
  Technology (Control): Web-based instruction with asynchronous communication via the Internet

Student–teacher interaction

Annetta (2003)
  Pedagogy (Experimental): Presentation and discussion with two-way audio video
  Pedagogy (Control): Scheduled video presentations
  Technology (Experimental): Live: two-way audio-video conferencing
  Technology (Control): Video: videotape-based instruction

*Banks (2004)
  Pedagogy (Experimental): Blended using asynchronous Internet-based instruction with instructor-led live classroom instruction
  Pedagogy (Control): Asynchronous Internet-based with asynchronous discussions
  Technology (Experimental): Asynchronous Internet-based instruction
  Technology (Control): Asynchronous Internet-based instruction
*Beare (1989)
  Pedagogy (Experimental): Telelecture: presentation through two-way audio conference
  Pedagogy (Control): Video-assisted independent study: asynchronous video (and some face-to-face)
  Technology (Experimental): Synchronous two-way audio conference
  Technology (Control): Video-assisted independent study: videotape

*Benson (2005)
  Pedagogy (Experimental): Hybrid learning environment: combination of online and face-to-face sessions
  Pedagogy (Control): Online learning environment: online sessions only
  Technology (Experimental): Web-based instruction
  Technology (Control): Web-based instruction

Bernard & Lundgren-Cayrol (2001)
  Pedagogy (Experimental): High moderator intervention: instructor-led discussions and activities
  Pedagogy (Control): Low moderator intervention: passive instructor participation
  Technology (Experimental): Online collaborative environment
  Technology (Control): Online collaborative environment

Bernard & Naidu (1992)
  Pedagogy (Experimental): Correspondence with teacher feedback
  Pedagogy (Control): Correspondence without teacher feedback
  Technology (Experimental): Correspondence
  Technology (Control): Correspondence

Beyth-Marom & Saporta (2002)
  Pedagogy (Experimental): All seven sessions offered via synchronous lecture presentation and discussion
  Pedagogy (Control): Four sessions of synchronous lecture presentations and discussions and three videotaped sessions
  Technology (Experimental): Satellite-based synchronous video tutorials
  Technology (Control): Combination of satellite-based synchronous video tutorials with asynchronous videotapes

*Caldwell (2006)
  Pedagogy (Experimental): Web-based instruction with instructor-facilitated face-to-face lab sessions
  Pedagogy (Control): Purely Web-based instruction
  Technology (Experimental): Blackboard 6.0 LCMS
  Technology (Control): Blackboard 6.0 LCMS

Callahan, Givens, & Bly (1998)
  Pedagogy (Experimental): Live lecture presentation and discussion
  Pedagogy (Control): Video-recorded lecture presentation
  Technology (Experimental): CU-SeeMe: synchronous Internet-based TV broadcast
  Technology (Control): Videotape-based delivery
Campbell, Gibson, Hall, Richards, & Callery (2008)
  Pedagogy (Experimental): Web-based instruction with supplemental face-to-face discussion
  Pedagogy (Control): Web-based instruction with supplemental online discussion
  Technology (Experimental): WebCT LCMS
  Technology (Control): WebCT LCMS

*Cargile Cook (2000)
  Pedagogy (Experimental): Interactive Web-based delivery with chat rooms and bulletin board access
  Pedagogy (Control): Presentational Web-based delivery (no bulletin board and chat room)
  Technology (Experimental): Unrestricted use of WebCT LCMS
  Technology (Control): Restricted use of WebCT LCMS

*Chen & Shaw (2006)
  Pedagogy (Experimental): Deductive/inductive predominantly teacher-centered instruction
  Pedagogy (Control): Deductive/inductive predominantly student-centered instruction
  Technology (Experimental): Synchronous Webcam-delivered instruction
  Technology (Control): Asynchronous scripted online instruction

Cifuentes & Hughey (2003)
  Pedagogy (Experimental): Blended: computer-mediated and face-to-face discussions
  Pedagogy (Control): Asynchronous computer-mediated discussions
  Technology (Experimental): FirstClass: computer communication software
  Technology (Control): FirstClass: computer communication software

*Daig (2005)
  Pedagogy (Experimental): 12-week course
  Pedagogy (Control): 6-week course
  Technology (Experimental): Blackboard LCMS
  Technology (Control): Blackboard LCMS

Davis (1996)
  Pedagogy (Experimental): Lecture presentation and discussion
  Pedagogy (Control): Lecture presentation and discussion
  Technology (Experimental): Satellite live broadcast with an audio conferencing system
  Technology (Control): Audio-graphic teleconferencing system with still-frame graphics

*Frith & Kee (2003)
  Pedagogy (Experimental): Mixed conversation: collaborative online instructor-supported communication
  Pedagogy (Control): Internal conversation: individual study with instructor support when needed
  Technology (Experimental): WebCT LCMS with access to online chats
  Technology (Control): WebCT LCMS

Gallie (2005)
  Pedagogy (Experimental): Presentation and discussions
  Pedagogy (Control): Presentation with instructor support when needed
  Technology (Experimental): Blackboard LCMS with access to discussion boards
  Technology (Control): WebCT LCMS with e-mail options only
*Grimes, Krehbiel, Nielson, & Niss (1989)
  Pedagogy (Experimental): Lecture presentation supplemented with live interactive televised sessions with instructor
  Pedagogy (Control): Lecture presentation with instructor support over the telephone when needed
  Technology (Experimental): Live broadcast with access to televised interactive sessions with instructor
  Technology (Control): Videotaped delivery and correspondence

Hansen (2000)
  Pedagogy (Experimental): Self-learning with extended orientation session
  Pedagogy (Control): Self-learning with short orientation session
  Technology (Experimental): Web-based instruction
  Technology (Control): Web-based instruction

Holmberg & Schuemer (1989)
  Pedagogy (Experimental): Short assignments for every course unit
  Pedagogy (Control): Short assignments for every 4 course units
  Technology (Experimental): Correspondence: printed material with audiotapes
  Technology (Control): Correspondence: printed material with audiotapes

Holmberg & Schuemer (1989)
  Pedagogy (Experimental): Medium-length assignments for every course unit
  Pedagogy (Control): Medium-length assignments for every 2 course units
  Technology (Experimental): Correspondence: printed material with audiotapes
  Technology (Control): Correspondence: printed material with audiotapes

Holmberg & Schuemer (1989)
  Pedagogy (Experimental): Short assignments every 2 course units
  Pedagogy (Control): Short assignments every 4 course units
  Technology (Experimental): Correspondence: printed material with audiotapes
  Technology (Control): Correspondence: printed material with audiotapes

*Huett (2006)
  Pedagogy (Experimental): Computer simulations with confidence-building tactics and confidence-enhancing e-mails
  Pedagogy (Control): Computer simulations
  Technology (Experimental): SAM 2003 (Skill Assessment Manager software) and WebCT LCMS
  Technology (Control): SAM 2003 (Skill Assessment Manager software)

Karr, Weck, Sunal, & Cook (2003)
  Pedagogy (Experimental): Blended DE: online and face-to-face delivery modes combined at various stages of the course
  Pedagogy (Control): Online course delivery with no face-to-face
  Technology (Experimental): Web-based with access to electronic discussion board
  Technology (Control): Web-based with access to electronic discussion board

*Katz (2000)
  Pedagogy (Experimental): Live lecture presentation and discussion
  Pedagogy (Control): Lecture presentation and asynchronous interaction with the teacher
  Technology (Experimental): Picture-Tel: synchronous interactive video system
  Technology (Control): Asynchronous interactive Internet-based
Kirschner, van den Brink, & Meester (1991)
  Pedagogy (Experimental): Correspondence with feedback to students’ essays
  Pedagogy (Control): Correspondence with feedback to students’ essays
  Technology (Experimental): Correspondence with audiotaped feedback
  Technology (Control): Correspondence with written feedback

Libler (1991)
  Pedagogy (Experimental): Live lecture broadcast with certified teacher as facilitator on site
  Pedagogy (Control): Live lecture broadcast with no facilitator
  Technology (Experimental): Indiana Higher Education Telecommunication System interactive TV network
  Technology (Control): Indiana Higher Education Telecommunication System interactive TV network

Lilja (2001)
  Pedagogy (Experimental): Live lecture presentation and discussion
  Pedagogy (Control): Online self-learning with accompanying study guide
  Technology (Experimental): University Industry Television for Education: live interactive TV broadcast
  Technology (Control): Internet-based instruction

Lim, Morris, & Kupritz (2006)
  Pedagogy (Experimental): Blended: online self-learning with face-to-face classroom instruction
  Pedagogy (Control): Online self-learning
  Technology (Experimental): Online learning environment
  Technology (Control): Online learning environment

Ostiguy & Haffer (2001)
  Pedagogy (Experimental): Live lecture broadcast: teacher-paced instruction
  Pedagogy (Control): Self-paced asynchronous Web-based learning
  Technology (Experimental): Live interactive TV
  Technology (Control): Conferencing On the Web: Web-based asynchronous discussion software

Paulsen, Higgins, Miller, Strawser, & Boone (1998)
  Pedagogy (Experimental): Live lecture broadcast and discussion
  Pedagogy (Control): Asynchronous videotape-based small-group learning
  Technology (Experimental): Live interactive TV
  Technology (Control): Videotape-based delivery

Rovai (2001)
  Pedagogy (Experimental): Monthly face-to-face group meetings
  Pedagogy (Control): Annual residencies face-to-face group meetings
  Technology (Experimental): Internet-based asynchronous learning network (ALN)
  Technology (Control): Internet-based ALN
*Stanley (2006)
  Pedagogy (Experimental): Teacher-graded assignments
  Pedagogy (Control): Automated quizzes
  Technology (Experimental): Online learning environment
  Technology (Control): Online learning environment

Walker & Donaldson (1989)
  Pedagogy (Experimental): Two-way audio and graphics lecture presentation and discussion
  Pedagogy (Control): Asynchronous videotape-based self-learning
  Technology (Experimental): AT&T Gemini 100 Electronic Blackboard LCMS
  Technology (Control): Videotape delivery

*Williams (2005)
  Pedagogy (Experimental): Online self-learning with tutor available on demand
  Pedagogy (Control): Online self-learning with no tutor available
  Technology (Experimental): Web-based computer-mediated communication
  Technology (Control): Web-based computer-mediated communication

*Wise, Chang, Duffy, & Del Valle (2004)
  Pedagogy (Experimental): High social presence learning environment
  Pedagogy (Control): Low social presence learning environment
  Technology (Experimental): Web-based computer-mediated communication
  Technology (Control): Web-based computer-mediated communication

*Worley (1991)
  Pedagogy (Experimental): Live interactive instruction
  Pedagogy (Control): Live interactive instruction
  Technology (Experimental): Two-way video and two-way audio
  Technology (Control): One-way video and two-way audio

*Yang (2002)
  Pedagogy (Experimental): Structured discussion
  Pedagogy (Control): Unstructured discussion
  Technology (Experimental): Web-based bulletin boards
  Technology (Control): Web-based bulletin boards

*Zhang (2004)
  Pedagogy (Experimental): Teacher-structured and moderated online collaboration
  Pedagogy (Control): Peer-controlled online collaboration
  Technology (Experimental): Cyberstats: comprehensive Web-based courseware
  Technology (Control): Cyberstats: comprehensive Web-based courseware

Zion, Michalsky, & Mevarech (2005)
  Pedagogy (Experimental): Web-based instruction with metacognitive and motivational scaffolding
  Pedagogy (Control): Web-based instruction with no scaffolding
  Technology (Experimental): Internet-based ALN
  Technology (Control): Internet-based ALN

Student–content interaction

*Alavi, Marakas, & Yoo (2002)
  Pedagogy (Experimental): Distributed learning with advanced options for content manipulation
  Pedagogy (Control): Distributed learning with limited options for content manipulation
  Technology (Experimental): Online learning environment with enhanced group support system technology
  Technology (Control): Online learning environment
Anderton (2005)
  Pedagogy (Experimental): Online instruction with no goal planning and strategy monitoring and evaluation forms
  Pedagogy (Control): Online instruction with the use of goal planning and strategy monitoring and evaluation forms
  Technology (Experimental): Online learning environment
  Technology (Control): Online learning environment

Bernard & Naidu (1992)
  Pedagogy (Experimental): Correspondence instruction employing concept mapping
  Pedagogy (Control): Correspondence instruction employing post-questioning
  Technology (Experimental): Correspondence
  Technology (Control): Correspondence

Cameron (2003)
  Pedagogy (Experimental): Online self-learning
  Pedagogy (Control): Online self-learning
  Technology (Experimental): Network simulation software
  Technology (Control): Static network diagramming software

Collins (2000)
  Pedagogy (Experimental): Online self-study with access to a Web-based forum
  Pedagogy (Control): Self-study with printed material
  Technology (Experimental): Web-based learning environment
  Technology (Control): Correspondence

Gulikers, Bastiaens, & Martens (2005)
  Pedagogy (Experimental): Authentic learning environment: self-learning employing simulations
  Pedagogy (Control): Nonauthentic learning environment: self-learning with no simulations
  Technology (Experimental): Buiten Dienst: Web-based learning environment
  Technology (Control): Web-based learning environment

Holmberg & Schuemer (1989)
  Pedagogy (Experimental): Long assignments for every course unit
  Pedagogy (Control): Medium-length assignments for every course unit
  Technology (Experimental): Correspondence: printed material with audiotapes
  Technology (Control): Correspondence: printed material with audiotapes

Holmberg & Schuemer (1989)
  Pedagogy (Experimental): Medium-length assignment every 2 course units
  Pedagogy (Control): Short assignments every 2 course units
  Technology (Experimental): Correspondence: printed material with audiotapes
  Technology (Control): Correspondence: printed material with audiotapes

Holmberg & Schuemer (1989)
  Pedagogy (Experimental): Long assignments for every course unit
  Pedagogy (Control): Short assignments for every course unit
  Technology (Experimental): Correspondence: printed material with audiotapes
  Technology (Control): Correspondence: printed material with audiotapes

*Jordaan (1987)
  Pedagogy (Experimental): Personalized instruction: detailed study guide
  Pedagogy (Control): Traditional: basic study guide
  Technology (Experimental): Correspondence
  Technology (Control): Correspondence
*Kohlmeier, McConathy, Cooksey Lindell, & Zeisel (2003)
  Pedagogy (Experimental): Full-scale content: self-learning with completion of all lessons and case studies
  Pedagogy (Control): Tailored content: self-learning with completion of selected lessons and case studies based on entry level
  Technology (Experimental): Computer-based learning environment
  Technology (Control): Computer-based learning environment

Lei, Winn, Scott, & Farr (2005)
  Pedagogy (Experimental): Self-learning with two course CDs
  Pedagogy (Control): Self-learning with one course CD
  Technology (Experimental): Computer-assisted instruction: CD-ROM
  Technology (Control): Computer-assisted instruction: CD-ROM

McKethan, Kernodle, Brantz, & Fischer (2003)
  Pedagogy (Experimental): Video presentation supplemented with text description
  Pedagogy (Control): Video presentation
  Technology (Experimental): Computer-assisted instruction
  Technology (Control): Computer-assisted instruction

*Miller & Pilcher (2002)
  Pedagogy (Experimental): Self-learning with the supplement of videotaped study guide
  Pedagogy (Control): Self-learning without the study guide supplement
  Technology (Experimental): Variety of DE learning environments
  Technology (Control): Variety of DE learning environments

Moshinskie (1997)
  Pedagogy (Experimental): Live lecture presentation with content manipulation
  Pedagogy (Control): Live lecture presentation and discussion
  Technology (Experimental): Two-way audio/graphics instruction
  Technology (Control): Two-way audio/video instruction

Schroeder (2006)
  Pedagogy (Experimental): Multimedia-enhanced instruction
  Pedagogy (Control): Textbook-based self-learning
  Technology (Experimental): DE with printed and multimedia materials
  Technology (Control): DE with printed materials

Smith (1993)
  Pedagogy (Experimental): Interactive self-learning
  Pedagogy (Control): Passive self-learning
  Technology (Experimental): Computer-assisted instruction: interactive videodisc
  Technology (Control): Computer-assisted instruction: noninteractive videodisc

*Wallace, Grinnell, Carey, & Carey (2006)
  Pedagogy (Experimental): High structure, high dialogue, low transactional distance learning condition
  Pedagogy (Control): Low structure, low dialogue, high transactional distance learning condition
  Technology (Experimental): Web-based instruction
  Technology (Control): Web-based instruction
Attitude data

Student–teacher interaction

*Warrick (2005)
  Pedagogy (Experimental): Facilitated group: small-group online instruction with one facilitator per group
  Pedagogy (Control): Mentored group: online instruction with individualized mentor’s support
  Technology (Experimental): Blackboard LCMS
  Technology (Control): Web-based learning environment

Allen (1995)
  Pedagogy (Experimental): Live interactive TV instruction
  Pedagogy (Control): Self-study with videotaped instruction
  Technology (Experimental): Intercampus Interactive Telecommunication System (IITS): interactive TV
  Technology (Control): Quality University Extended Site Telecourses: videotaped instruction

Benbunan-Fich & Hiltz (2003)
  Pedagogy (Experimental): Collaborative online learning with face-to-face communication
  Pedagogy (Control): Collaborative online learning without face-to-face communication
  Technology (Experimental): Computer Mediated Communication collaborative learning environment
  Technology (Control): Computer Mediated Communication collaborative learning environment

Benke (2001)
  Pedagogy (Experimental): Online instruction with high teacher immediacy
  Pedagogy (Control): Online instruction with low teacher immediacy
  Technology (Experimental): Compressed Video Distance Education: telecommunication system with two-way audio and compressed two-way video
  Technology (Control): Compressed Video Distance Education: telecommunication system with two-way audio and compressed two-way video

Bore (2005)
  Pedagogy (Experimental): Live video conferencing
  Pedagogy (Control): Asynchronous Web-based instruction
  Technology (Experimental): Two-way videoconferencing
  Technology (Control): Web-based instruction

Boverie et al. (1997)
  Pedagogy (Experimental): Live interactive TV instruction
  Pedagogy (Control): In-class videotaped instruction
  Technology (Experimental): Interactive TV, one-way video, two-way audio
  Technology (Control): Videotaped instruction

Kuo (2005)
  Pedagogy (Experimental): Video conferencing
  Pedagogy (Control): Web-based instruction
  Technology (Experimental): Video conference–based delivery
  Technology (Control): Web-based course management courseware
Kurtz, Sagee, & Getz-Lengerman (2003)
  Pedagogy (Experimental): Web-based interactive instruction with face-to-face communication
  Pedagogy (Control): Web-based interactive instruction without face-to-face communication
  Technology (Experimental): Web-based learning environment
  Technology (Control): Web-based learning environment

Student–content interaction

Boucher & Barron (1986)
  Pedagogy (Experimental): Computer-marked course with elaborate prescriptive feedback
  Pedagogy (Control): Computer-marked course with minimal feedback
  Technology (Experimental): Computer-assisted correspondence course
  Technology (Control): Computer-assisted correspondence course

Little, Passmore, & Schullo (2006)
  Pedagogy (Experimental): Additional Elluminate Live! (synchronous software) orientation session
  Pedagogy (Control): No additional Elluminate Live! (synchronous software) orientation session
  Technology (Experimental): Blackboard Learning System: synchronous Web-based (Voice over Internet Protocol) learning environment
  Technology (Control): Blackboard Learning System: synchronous (Voice over Internet Protocol) learning environment

Murdock (2000)
  Pedagogy (Experimental): Mastery study: computer-based instruction with extra remediation tasks
  Pedagogy (Control): No treatment (control): computer-based instruction without extra remediation tasks
  Technology (Experimental): Computer-based and printed tutorial with e-mail correspondence
  Technology (Control): Computer-based and printed tutorial with e-mail correspondence

Suh (2004)
  Pedagogy (Experimental): Synchronous interactive TV instruction
  Pedagogy (Control): Asynchronous self-paced, Web-based instruction
  Technology (Experimental): Iowa Communication Network: interactive video system
  Technology (Control): Web course: Internet-based instruction

Sweeney-Dillon (2003)
  Pedagogy (Experimental): Synchronous live interaction
  Pedagogy (Control): Asynchronous Web-based instruction with online discussion forum
  Technology (Experimental): Synchronous two-way audio-visual instruction
  Technology (Control): Asynchronous Web-based instruction
2009
at CONCORDIA UNIV LIBRARY on September 29,http://rer.aera.netDownloaded from
1279
APPENDIX B
Codebook of study features
1. Age
a) Elementary (K–6)
b) Secondary (7–12)
c) Postsecondary
2. Subject matter
a) Math (including statistics and algebra)
b) Language (including language arts and second language learning)
c) Physical and natural science (including biology, physics, chemistry, geology)
d) Social sciences (including history, sociology, geography)
e) Psychology
f) Philosophy
g) Computer science (information technology)
h) Education
i) Health sciences (including medicine, environmental health, and nursing)
j) Business (including economics and management)
k) Engineering
l) Others (specify)
m) Missing 999
3. Research design
a) Pre-experimental
b) Quasi-experimental
c) True experimental
4. Student–student interaction
a) Experimental > Control
b) Experimental = Control
c) Experimental < Control
d) Can’t specify 999
5. Student–teacher interaction
a) Experimental > Control
b) Experimental = Control
c) Experimental < Control
d) Can’t specify 999
6. Student–content interaction
a) Experimental > Control
b) Experimental = Control
c) Experimental < Control
d) Can’t specify 999
7. Distance education mode experimental
a) Synchronous
b) Asynchronous
c) Mixed
8. Distance education mode control
a) Synchronous
b) Asynchronous
c) Mixed
9. Technology use in two groups
a) Same technology
b) Different technology
10. Pedagogy use in two groups
a) Same pedagogy
b) Different pedagogy
11. Technology used in experimental (descriptive)
12. Technology used in control (descriptive)
13. Pedagogy used in experimental (descriptive)
14. Pedagogy used in control (descriptive)
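The codebook above amounts to a data-entry scheme: each of the 74 studies is coded on a fixed set of categorical features, with 999 marking a value that cannot be specified. As a minimal sketch of how such a coding form might be represented and checked programmatically (the dict layout, field names such as `ss_interaction`, and the example codes attributed to Suh (2004) are illustrative assumptions, not the authors' actual coding instrument or codes), one could write:

```python
# Illustrative representation of the Appendix B codebook (items 1, 3-10).
# The descriptive items (11-14) and subject matter (item 2) are omitted
# for brevity; "999" stands in for "can't specify", as in the codebook.
CODEBOOK = {
    "age": ["elementary", "secondary", "postsecondary"],
    "research_design": ["pre-experimental", "quasi-experimental", "true experimental"],
    # Items 4-6: relative strength of each interaction treatment,
    # experimental vs. control group.
    "ss_interaction": ["E>C", "E=C", "E<C", "999"],
    "st_interaction": ["E>C", "E=C", "E<C", "999"],
    "sc_interaction": ["E>C", "E=C", "E<C", "999"],
    "de_mode_experimental": ["synchronous", "asynchronous", "mixed"],
    "de_mode_control": ["synchronous", "asynchronous", "mixed"],
    "technology_same": ["same technology", "different technology"],
    "pedagogy_same": ["same pedagogy", "different pedagogy"],
}

def validate(study: dict) -> list:
    """Return a list of coding errors (empty if every feature has a known category)."""
    errors = []
    for feature, categories in CODEBOOK.items():
        value = study.get(feature)
        if value not in categories:
            errors.append(f"{feature}: {value!r} not in {categories}")
    return errors

# Hypothetical coded record, loosely modeled on the Suh (2004) row in
# Appendix A (synchronous interactive TV vs. asynchronous Web instruction);
# the interaction codes here are guesses for illustration only.
suh_2004 = {
    "age": "postsecondary",
    "research_design": "quasi-experimental",
    "ss_interaction": "999",
    "st_interaction": "999",
    "sc_interaction": "E>C",
    "de_mode_experimental": "synchronous",
    "de_mode_control": "asynchronous",
    "technology_same": "different technology",
    "pedagogy_same": "different pedagogy",
}

print(validate(suh_2004))  # prints [] -- the record uses only known categories
```

A validation pass like this makes inter-coder agreement checks straightforward: two coders' records can be compared feature by feature only after both pass `validate`.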
Note
This study was supported by grants from the Social Sciences and Humanities
Research Council of Canada and the Fonds Québécois de la Recherche sur la Société
et la Culture to Robert Bernard and Philip Abrami. The authors express appreciation
to Drs. Richard E. Clark, Steven M. Ross, Terry Anderson, Richard A. Schwier, and
Som Naidu for their contributions to the conceptualization of this research. Thanks
also to Dr. Gary M. Boyd for critical comments to earlier drafts, to two anonymous
RER reviewers for their constructive criticisms, and to Lucie A. Ranger and Katherine
Hanz for help with the manuscript. An earlier version of this article was presented in
a presidential session at the Association for Educational Communication and
Technology Annual Convention, Anaheim, CA, on October 27, 2007. Please send
correspondence regarding this article to Robert M. Bernard, Centre for the Study of
Learning and Performance, LB-583-3, Department of Education, Concordia
University, 1455 de Maisonneuve Blvd. W., Montreal, Quebec, Canada H3G 1M8;
e-mail: bernard@education.concordia.ca.
References
References marked with an asterisk are studies in the meta-analysis.
Abrami, P. C., & Bernard, R. M. (2008). Statistical control vs. classification of study
quality in meta-analysis. Unpublished manuscript, Concordia University.
*Alavi, M., Marakas, G. M., & Yoo, Y. (2002). A comparative study of distributed
learning environments on learning outcomes. Information Systems Research, 13(4),
404–415.
*Allen, B. A. (1995). Measurement of factors related to student and faculty satisfaction
with video-based and interactive television courses in distance learning (distance
education). Unpublished doctoral dissertation, University of Alabama.
Allen, M., Bourhis, J., Burrell, N., & Mabry, E. (2002). Comparing student satisfac-
tion with distance education to traditional classrooms in higher education: A meta-
analysis. American Journal of Distance Education, 16(2), 83–97.
Allen, M., Mabry, E., Mattrey, M., Bourhis, J., Titsworth, S., & Burrell, N. (2004).
Evaluating the effectiveness of distance learning: A comparison using meta-analysis.
Journal of Communication, 54, 402–420.
Anderson, T. (2003a). Getting the mix right again: An updated and theoretical rationale
for interaction. International Review of Research in Open and Distance Learning,
4(2), 9–14.
Anderson, T. (2003b). Modes of interaction in distance education: Recent develop-
ments and research questions. In M. Moore (Ed.), Handbook of distance education
(pp. 129–144). Mahwah, NJ: Lawrence Erlbaum.
*Anderton, E. K. (2005). An evaluation of strategies to promote self-regulated learn-
ing in pre-service teachers in an online class. Unpublished doctoral dissertation,
University of South Alabama.
*Annetta, L. A. (2003). A comparative study of three distance education strategies
on the learning and attitudes of elementary school teachers participating in a
professional development project. Unpublished doctoral dissertation, University of
Missouri–Saint Louis.
*Banks, L. V. (2004). Brick, click, or brick and click: A comparative study on the effec-
tiveness of content delivery modalities for working adults. Unpublished doctoral
dissertation, Touro University International.
Bates, A. (1990, September). Interactivity as a criterion for media selection in distance
education. Paper presented at the Annual Conference of the Asian Association of
Open Universities, Jakarta, Indonesia.
Beard, L. A., & Harper, C. (2002). Student perceptions of online versus on campus
instruction. Education, 122, 658–663.
*Beare, P. L. (1989). The comparative effectiveness of videotape, audiotape, and
telelecture in delivering continuing teacher education. American Journal of Distance
Education, 3(2), 57–66.
*Bell, P. D., Hudson, S., & Heinan, M. (2004). Effect of teaching/learning methodol-
ogy on effectiveness of a Web-based medical terminology course? International
Journal of Instructional Technology & Distance Learning, 1(4). Retrieved July 24,
2007, from http://www.itdl.org/Journal/Apr_04/article06.htm
*Benbunan-Fich, R., & Hiltz, S. R. (2003). Mediators of the effectiveness of online
courses. IEEE Transactions on Professional Communication, 46(4), 298–312.
*Benke, D. A. (2001). The relationship between teacher immediacy behaviors and stu-
dent continued enrollment and satisfaction in a compressed-video distance education
classroom. Unpublished doctoral dissertation, University of Northern Colorado.
*Benson, D. S. (2005). Comparison of learning style and other characteristics of site-based,
hybrid and online students. Unpublished doctoral dissertation, Arizona State University.
*Bernard, R. M., & Lundgren-Cayrol, K. (2001). Computer conferencing: An environ-
ment for collaborative project-based learning in distance education. Educational
Research and Evaluation, 7(2), 241–261.
*Bernard, R. M., & Naidu, S. (1992). Post-questioning, concept mapping and feed-
back: A distance education field experiment. British Journal of Educational Technology,
23(1), 48–60.
Bernard, R. M., Abrami, P. C., Lou, Y., & Borokhovski, E. (2004). A methodological
morass? How we can improve the quality of quantitative research in distance educa-
tion. Distance Education, 25(2), 176–198.
Bernard, R. M., Abrami, P. C., Lou, Y., Borokhovski, E., Wade, A., Wozney, L., et al.
(2004). How does distance education compare with classroom instruction? A meta-
analysis of the empirical literature. Review of Educational Research, 74(3), 379–439.
Bernard, R. M., Rojo de Rubalcava, B., & St-Pierre, D. (2000). Collaborative online
distance learning: Issues for future practice and research. Distance Education, 21(2),
260–277.
*Beyth-Marom, R., & Saporta, K. (2002). Satellite-based synchronous tutorials vs. satellite-
based asynchronous videocassettes: Factors affecting students’ attitudes and
choices. Norfolk, VA: Association for the Advancement of Computing in Education.
*Bore, J. C. (2005). Distance education in the preparation of special education person-
nel: An examination of videoconferencing and Web-based instruction. Unpublished
doctoral dissertation, University of North Texas.
Borenstein, M., Hedges, L. V., Higgins, J., & Rothstein, H. (2005). Comprehensive
meta-analysis (Version 2). Englewood, NJ: Biostat.
*Boucher, T. A., & Barron, M. H. (1986). The effects of computer-based marking on
completion rates and student achievement for students taking a secondary-level dis-
tance education course. Distance Education, 7(2), 275–280.
*Boverie, P., Murrell, W. G., Lowe, C. A., Zittle, R. H., Zittle, F., & Gunawardena, C. N.
(1997). Live vs. taped: New perspectives in satellite-based programming for pri-
mary grades. Albuquerque: University of New Mexico, College of Education,
Organizational Learning and Instructional Technologies (ERIC Document No.
ED407939).
*Brewer, S. A., & Klein, J. D. (2004, October). Small group learning in an online
asynchronous environment. Paper presented at the 27th meeting of the Association
for Educational Communications and Technology, Chicago.
*Britton, O. L. (1992). Interactive distance education in higher education and the
impact of delivery styles on student perceptions. Unpublished doctoral dissertation,
Wayne State University.
Brown, A. L., & Campione, J. C. (1994). Guided discovery in a community of learners.
In K. McGilly (Ed.), Classroom lessons: Integrating cognitive theory and classroom
practice (pp. 229–270). Cambridge, MA: MIT Press.
*Caldwell, E. R. (2006). A comparative study of three instructional modalities in a
computer programming course: Traditional instruction, Web-based instruction, and
online instruction. Unpublished doctoral dissertation, University of North Carolina
at Greensboro.
*Callahan, A. L., Givens, P. E., & Bly, R. (1998, June). Distance education moves into
the 21st century: A comparison of delivery methods. Paper presented at the American
Society for Engineering Education Annual Conference and Exposition, Seattle, WA.
*Cameron, B. H. (2003). Effectiveness of simulation in a hybrid and online networking
course. Quarterly Review of Distance Education, 4(1), 51–55.
*Campbell, M., Gibson, W., Hall, A., Richards, D., & Callery, P. (2008). Online vs.
face-to-face discussion in a Web-based research methods course for postgraduate
nursing students: A quasi-experimental study. International Journal of Nursing
Studies, 45(5), 750–759.
*Cargile Cook, K. (2000). Online technical communication: Pedagogy, instructional
design, and student satisfaction in Internet-based distance education. Unpublished
doctoral dissertation, Texas Tech University.
Cavanaugh, C. S. (2001). The effectiveness of interactive distance education technolo-
gies in K-12 learning: A meta-analysis. International Journal of Educational
Telecommunications, 7, 73–88.
Cavanaugh, C. S., Gillan, K. J., Kromrey, J., Hess, M., & Blomeyer, R. (2004). The
effects of distance education on K-12 student outcomes: A meta-analysis. Naperville,
IL: Learning Point Associates.
*Chen, C. C., & Shaw, R. S. (2006). Online synchronous vs. asynchronous software
training through the behavioral modeling approach: A longitudinal field experiment.
International Journal of Distance Education Technologies, 4(4), 88–102.
*Cheng, H. C. (1991). Comparison of performance and attitude in traditional and com-
puter conferencing classes. American Journal of Distance Education, 5(3), 51–64.
Chickering, A. W., & Gamson, Z. F. (1987). Seven principles for good practice in
undergraduate education. AAHE Bulletin, 39(7), 3–6.
*Cifuentes, L., & Hughey, J. (2003). The interactive effects of computer conferencing
and multiple intelligences on expository writing. The Quarterly Review of Distance
Education, 4(1), 15–30.
Clark, R. E. (2000). Evaluating distance education: Strategies and cautions. Quarterly
Review of Distance Education, 1, 3–16.
*Collins, M. (2000). Comparing Web, correspondence and lecture versions of a second-
year non-major biology course. British Journal of Educational Technology, 31(1),
21–27.
Crawford, M. W. (1999). Students’ perceptions of the interpersonal communication
courses offered through distance education (Doctoral dissertation, Ohio University,
1999). Dissertation Abstracts International, 60(05), 1469 (UMI No. 9929303).
*Daig, B. (2005). Student performance in e-learning courses: The impact of course
duration on learning outcomes. Unpublished doctoral dissertation, Touro University
International.
Daniel, J., & Marquis, C. (1979). Interaction and independence: Getting the mixture
right. Teaching at a Distance, 15, 25–44.
Daniel, J., & Marquis, C. (1988). Interaction and independence: Getting the mix right.
In D. Sewart, D. Keegan, & B. Holmberg (Eds.), Distance education: International
perspectives (pp. 339–359). London: Routledge.
*Davis, J. L. (1996). Computer-assisted distance learning, part II: Examination per-
formance of students on & off campus. Journal of Engineering Education, 85(1),
77–82.
*Frith, K. H., & Kee, C. C. (2003). The effect of communication on nursing student
outcomes in a Web-based course. Journal of Nursing Education, 42(8), 350–358.
Fulford, C. P., & Zhang, S. (1993). Perceptions of interaction: The critical predictor in
distance education. American Journal of Distance Education, 7(3), 8–21.
*Gallie, K. (2005). Student perceptions as distance learners in internet-based courses.
Studies in Learning, Evaluation, Innovation and Development, 2(3), 69–76.
Garrison, D. R., & Shale, D. (1990). A new framework and perspective. In D. R. Garrison
& D. Shale (Eds.), Education at a distance: From issues to practice (pp. 123–133).
Malabar, FL: Krieger.
Gilbert, L., & Moore, D. R. (1998). Building interactivity into Web courses: Tools for
social and instructional interaction. Educational Technology, 38(3), 29–35.
*Grimes, P. W., Krehbiel, T. L., Nielson, J. E., & Niss, J. F. (1989). The effectiveness
of “economics U$A” on learning and attitudes. Journal of Economic Education,
20(2), 139–152.
*Gulikers, J. T. M., Bastiaens, T. J., & Martens, R. L. (2005). The surplus value of an
authentic learning environment. Computers in Human Behavior, 21(3), 509–521.
*Hansen, B. A. (2000). Increasing person-environment fit as a function to increase
adult learner success rates in distance education. Unpublished doctoral dissertation,
University of Wyoming.
Hedges, L. V., & Olkin, I. (1985). Statistical methods for meta-analysis. Orlando, FL:
Academic Press.
Holden, J. T., & Westfall, P. J.-L. (2006). An instructional media selection guide for
distance learning. Boston: United States Distance Learning Association.
Holmberg, B. (2003). A theory of distance education based on empathy. In M. G. Moore
(Ed.), Handbook of distance education (pp. 79–86). Mahwah, NJ: Lawrence Erlbaum.
*Holmberg, B., & Schuemer, R. (1989). Tutoring frequency in distance education: An
empirical study of the impact of various frequencies of assignment submission. In
B. Holmberg (Ed.), Mediated communication as a component of distance education
(pp. 45–80). Hagen, West Germany: Zentrales Institut für Fernstudienforschung,
FernUniversität–Gesamthochschule.
*Huett, J. B. (2006). The effects of ARCS-based confidence strategies on learner con-
fidence and performance in distance education. Unpublished doctoral dissertation,
University of North Texas.
Jahng, N., Krug, D., & Zhang, Z. (2007). Student achievement in online education
compared to face-to-face education. European Journal of Open, Distance and
E-Learning. Retrieved February 21, 2007, from http://www.eurodl.org/materials/
contrib/2007/Jahng_Krug_Zhang.htm
Jaspers, F. (1991). Interactivity or instruction? A reaction to Merrill. Educational
Technology, 31(3), 21–24.
Johnson, D. W., Johnson, R. T., & Stanne, M. E. (2000). Cooperative learning meth-
ods: A meta-analysis. Minneapolis: University of Minnesota Press.
*Jordaan, W. (1987). The effectiveness of personalised instruction in distance educa-
tion: An empirical study. Pretoria: University of South Africa. (ERIC Document No.
ED293398)
Juler, P. (1990). Promoting interaction, maintaining independence: Swallowing the
mixture. Open Learning, 5(2), 24–33.
*Jung, I., Choi, S., Lim, C., & Leem, J. (2002). Effects of different types of interaction
on learning achievement, satisfaction and participation in web-based instruction.
Innovations in Education and Teaching International, 39(2), 153–162.
Kanuka, H., & Anderson, T. (1999). Using constructivism in technology-mediated
learning: Constructing order out of the chaos in the literature. Radical Pedagogy,
1(2). Retrieved May 27, 2007, from http://radicalpedagogy.icaap.org/content/
issue1_2/02kanuka1_2.html
*Karr, C. L., Weck, B., Sunal, D. W., & Cook, T. M. (2003). Analysis of the effective-
ness of online learning in a graduate engineering math course. Journal of Interactive
Online Learning, 1(3). Retrieved July 24, 2007, from www.ncolr.org/jiol/
archieves/2003/winter/3/ms02023_Karr
*Katz, Y. J. (2000). The comparative suitability of three ICT distance learning method-
ologies for college level instruction. Educational Media International, 37(1), 25–30.
Keegan, D. (1996). Foundations of distance education (3rd ed.). London: Routledge.
*Kirschner, P. A., van den Brink, H., & Meester, M. (1991). Audiotape feedback for
essays in distance education. Innovative Higher Education, 15(2), 185–195.
*Kohlmeier, M., McConathy, W. J., Cooksey Lindell, K., & Zeisel, S. H. (2003).
Adapting the contents of computer-based instruction based on knowledge tests
maintains effectiveness of nutrition education. The American Journal of Clinical
Nutrition, 77(Suppl. 4), 1025–1027.
*Kuo, M. M. (2005). A comparison of traditional videoconference-based, and Web-based
learning environments (ERIC Document Reproduction Service No. ED492707).
*Kurtz, G., Sagee, R., & Getz-Lengerman, R. (2003). Alternative online pedagogical
models with identical contents: A comparison of two university-level courses.
Journal of Interactive Online Learning, 2(1). Retrieved July 24, 2007, from http://
www.ncolr.org/jiol/archives/2003/summer/2/index.asp
Larreamendy-Joerns, J., & Leinhardt, G. (2006). Going the distance with online educa-
tion. Review of Educational Research, 76(4), 567–605.
Laurillard, D. (1997). Rethinking university teaching: A framework for the effective use
of educational technology. London: Routledge.
*Lei, L. W., Winn, W., Scott, C., & Farr, A. (2005). Evaluation of computer-assisted
instruction in histology: Effect of interaction on learning outcome. Anatomical
Record Part B, New Anatomist, 284(1), 28–34.
*Libler, R. W. (1991). A study of the effectiveness of interactive television as the pri-
mary mode of instruction in selected high school physics classes. Unpublished doc-
toral dissertation, Ball State University.
*Lilja, D. J. (2001). Comparing instructional delivery methods for teaching computer
systems performance analysis. IEEE Transactions on Education, 44(1), 35–40.
*Lim, D. H., Morris, M. L., & Kupritz, V. W. (2006, February). Online vs. blended learning:
Differences in instructional outcomes and learner satisfaction. Paper presented at the
Academy of Human Resource Development International Conference, Columbus, OH.
Lipsey, M. W., & Wilson, D. B. (2001). Practical meta-analysis. Thousand Oaks, CA:
Sage.
*Little, B. B., Passmore, D., & Schullo, S. (2006). Using synchronous software in
Web-based nursing courses. Computers, Informatics, Nursing, 24(6), 317–325.
Lou, Y., Bernard, R. M., & Abrami, P. C. (2006). Media and pedagogy in undergradu-
ate distance education: A theory-based meta-analysis of empirical literature.
Educational Technology Research & Development, 54(2), 141–176.
Machtmes, K., & Asher, J. W. (2000). A meta-analysis of the effectiveness of telecourses
in distance education. American Journal of Distance Education, 14(1), 27–46.
Mayer, R. (2001). Multimedia learning. Cambridge, UK: Cambridge University Press.
*McKethan, R. N., Kernodle, M. W., Brantz, D., & Fischer, J. (2003). Qualitative
analysis of the overhand throw by undergraduates in education using a distance
learning computer program. Perceptual and Motor Skills, 97(3), 979–989.
*Miller, G., & Pilcher, C. L. (2002). Can selected learning strategies influence the suc-
cess of adult distance learners in agriculture? Journal of Agricultural Education,
43(2), 34–43.
Moore, M. G. (1973). Towards a theory of independent learning and teaching. Journal
of Higher Education, 44(9), 661–679.
Moore, M. G. (1989). Three types of interaction. American Journal of Distance
Education, 3(2), 1–6.
Moore, M. G., & Kearsley, G. (2005). Distance education: A systems view (2nd ed.).
Belmont, CA: Thompson/Wadsworth.
Moore, M. G., & Thompson, M. M. (1990). The effects of distance learning: A sum-
mary of the literature (Research Monograph No. 2). University Park: Pennsylvania
State University, American Center for the Study of Distance Education. (ERIC
Document Reproduction No. ED330321)
*Moshinskie, J. F. (1997). The effects of using constructivist learning models when
delivering electronic distance education (EDE) courses: A perspective study. Journal
of Instruction Delivery Systems, 11(1), 14–20.
Muirhead, B. (2001a). Enhancing social interaction in computer-mediated distance
education. USDLA Journal, 15(4). Retrieved October 4, 2005, from http://www.usdla
.org/html/journal/APR01_Issue/article02.html
Muirhead, B. (2001b). Interactivity research studies. Educational Technology &
Society, 4(3). Retrieved October 4, 2005, from http://ifets.ieee.org/periodical/
vol_3_2001/muirhead.html
*Murdock, K. (2000). Management of procrastination in distance education courses
using features of Keller’s personalized system of instruction. Unpublished doctoral
dissertation, University of South Florida.
Nipper, S. (1989). Third generation distance learning and computer conferencing. In
R. Mason & A. Kaye (Eds.), Mindweave: Communication, computers and distance
education (pp. 63–73). Oxford, UK: Pergamon.
Olson, T. M., & Wisher, R. A. (2002). The effectiveness of Web-based instruction: An
initial inquiry. International Review of Research in Open and Distance Learning,
3(2). Retrieved October 8, 2006, from http://www.irrodl.org/index.php/irrodl/
article/view/103/561
*Ostiguy, N., & Haffer, A. (2001). Assessing differences in instructional methods:
Uncovering how students learn best. Journal of College Science Teaching, 30(6),
370–374.
*Paulsen, K. J., Higgins, K., Miller, S. P., Strawser, S., & Boone, R. (1998). Delivering
instruction via interactive television and videotape: Student achievement and satis-
faction. Journal of Special Education Technology, 13(4), 59–77.
Peters, O. (2003). Learning with new media in distance education. In M. G. Moore (Ed.),
Handbook of distance education (pp. 87–112). Mahwah, NJ: Lawrence Erlbaum.
Phipps, R., & Merisotis, J. (1999). What’s the difference? A review of contemporary
research on the effectiveness of distance learning in higher education. Washington,
DC: Institute for Higher Education Policy.
*Romanov, K., & Nevgi, A. (2006). Learning outcomes in medical informatics:
Comparison of a WebCT course with ordinary web site learning material.
International Journal of Medical Informatics, 75(2), 156–162.
*Rovai, A. P. (2001). Classroom community at a distance: A comparative analysis
of two ALN-based university programs. Internet and Higher Education, 4(2),
105–118.
*Ruksasuk, N. (2000). Effects of learning styles and participatory interaction modes
on achievement of Thai students involved in Web-based instruction in library
and information science distance education. Unpublished doctoral dissertation,
University of Pittsburgh.
Russell, T. L. (1999). The no significant difference phenomenon. Chapel Hill: Office
of Instructional Telecommunications, North Carolina State University.
Salmon, G. (2000). E-moderating: The key to teaching and learning online. London:
Kogan Page.
*Schroeder, B. A. (2006). Multimedia-enhanced instruction in online learning envi-
ronments. Unpublished doctoral dissertation, Boise State University.
Shachar, M., & Neumann, Y. (2003). Differences between traditional and distance
education academic performances: A meta-analytical approach. International
Review of Research in Open and Distance Learning. Retrieved May 5, 2005, from
http://www.irrodl.org/content/v4.2/shacharneumann.html
Sims, R. (1999). Interactivity on stage: Strategies for learner-designer communication.
Australian Journal of Educational Technology, 15(3), 257–272. Retrieved May 5,
2005, from http://www.ascilite.org.au/ajet/ajet15/sims.html
Sitzmann, T., Kraiger, K., Stewart, D., & Wisher, R. (2006). The comparative effective-
ness of Web-based and classroom instruction: A meta-analysis. Personnel
Psychology, 59(3), 623–664.
*Skylar, A. A. (2004). Distance education: An exploration of alternative methods and
types of instructional media in teacher education. Unpublished doctoral dissertation,
University of Nevada, Las Vegas.
*Smith, J. J. (1993). The SPICE project: Comparing passive to interactive approaches
in a video-based course. T.H.E. Journal, 21(1), 62–66.
*Stanley, O. L. (2006). A comparison of learning outcomes by ‘in-course’ evaluation
techniques for an on-line course in a controlled environment. Journal of Educators
Online, 3(2), 1–16.
*Suh, Y. M. (2004). Factors related to student satisfaction with Web and interactive
television courses. Unpublished doctoral dissertation, University of Iowa.
Sutton, L. A. (2001). The principle of vicarious interaction in computer-mediated com-
munications. International Journal of Educational Telecommunications, 7(3),
223–242. Retrieved March 14, 2005, from http://www.aace.org/dl/files/IJET/
IJET73223.pdf
*Sweeney-Dillon, M. T. (2003). Participant experience of distance education: Critical
success factors. Unpublished doctoral dissertation, Indiana University.
Thompson, G. (1990). How can correspondence-based distance education be improved?
A survey of attitudes of students who are not well disposed towards correspondence
study. Journal of Distance Education/Revue de l’Enseignement à Distance, 5(1).
Retrieved April 20, 2007, from http://cade.athabascau.ca/vol5.1/11_thompson.html
Thurmond, V. A., & Wambach, K. (2004). Understanding interactions in distance edu-
cation: A review of the literature. International Journal of Instructional Technology
and Distance Learning, 1(1). Retrieved April 20, 2007, from http://itdl.org/journal/
Jan_04/article02.htm
*Tuckman, B. W. (2007). The effect of motivational scaffolding on procrastinators’
distance learning outcomes. Computers & Education, 49(2), 414–422.
Ungerleider, C., & Burns, T. (2003). A systematic review of the effectiveness and effi-
ciency of networked ICT in education: A state of the field report to the Council of
Ministers of Education, Canada and Industry Canada. Retrieved April 19, 2007,
from http://www.cmec.ca/stats/SystematicReview2003.en.pdf
Wagner, E. D. (1994). In support of a functional definition of interaction. The American
Journal of Distance Education, 8(2), 6–29.
*Walker, B. M., & Donaldson, J. F. (1989). Continuing engineering education by
electronic blackboard and videotape: A comparison of on-campus and off-campus
student performance. IEEE Transactions on Education, 32(4), 443–447.
*Wallace, T., Grinnell, L., Carey, L., & Carey, J. (2006). Maximizing learning from
rehearsal activity in Web-based distance learning. Journal of Interactive Learning
Research, 17(3), 319–327.
*Warrick, W. R. (2005). The learner and the expert mentor, learners and a facilitator,
peer-facilitated learning: A comparison of three online learning designs. Unpublished
doctoral dissertation, George Mason University.
What Works Clearinghouse. (2006). What works clearinghouse study design classifi-
cation (Technical Working Paper). Washington, DC: U.S. Department of Education.
Retrieved May 2, 2008 from http://ies.ed.gov/ncee/wwc/twp.asp
*Williams, P. B. (2005). On-demand tutoring in distance education: Intrinsically-
motivated, scalable interpersonal interaction to improve achievement, completion,
and satisfaction. Unpublished doctoral dissertation, Brigham Young University.
Williams, S. L. (2006). The effectiveness of distance education in allied health science
programs: A meta-analysis of outcomes. American Journal of Distance Education,
20(3), 127–141.
*Wise, A., Chang, J., Duffy, T., & Del Valle, R. (2004). The effects of teacher social
presence on student satisfaction, engagement, and learning. Journal of Educational
Computing Research, 31(3), 247–271.
*Worley, E. N. (1991). Compressed digital video instructional delivery: A study of
student achievement, student attitude, and instructor attitude. Dissertation Abstracts
International, 53(3-A), 710.
Yacci, M. (2000). Interactivity demystified: A structural definition for online learning
and intelligent CBT. Educational Technology. Retrieved May 2, 2008, from http://
www.it.rit.edu/~may/interactiv8.pdf
*Yang, Y.-T. C. (2002). Use of structured Web-based bulletin board discussions with
Socratic questioning to enhance students’ critical thinking skills in distance educa-
tion. West Lafayette, IN: Purdue University.
*Zhang, K. (2004). Effects of peer-controlled or externally structured and moderated
online collaboration on group problem solving processes and related individual
attitudes in well-structured and ill-structured small group problem solving in a
hybrid course. Unpublished doctoral dissertation, Pennsylvania State University.
Zhao, Y., Lei, J., Yan, B., Lai, C., & Tan, S. (2005). A practical analysis of research on
the effectiveness of distance education. Teachers College Record, 107(8).
Retrieved February 22, 2007, from http://www.blackwell-synergy.com/doi/
abs/10.1111/j.1467-9620.2005.00544.x
*Zion, M., Michalsky, T., & Mevarech, Z. R. (2005). The effects of metacognitive
instruction embedded within an asynchronous learning network on scientific inquiry
skills. International Journal of Science Education, 27(8), 957–983.
Authors
ROBERT M. BERNARD is a professor of education at Concordia University and a member of
the Centre for the Study of Learning and Performance, LB-581, 1455 de Maisonneuve Blvd.
West, Montreal, Quebec, Canada H3G 1M8; e-mail: bernard@education.concordia.ca. His
research interests are in educational technology and distance education. His methodological
expertise is in statistics, research design, and meta-analysis.
PHILIP C. ABRAMI is professor, research chair, and director of the Centre for the Study of
Learning and Performance, Concordia University, LB-581, 1455 de Maisonneuve Blvd.
West, Montreal, Quebec, Canada H3G 1M8; e-mail: abrami@education.concordia.ca.
His research interests include educational technology, social psychology of education,
and research synthesis.
EUGENE BOROKHOVSKI holds a PhD in experimental psychology and is a postdoctoral
fellow and systematic review projects coordinator at the Centre for the Study of Learning
and Performance, Concordia University, LB-581, 1455 de Maisonneuve Blvd. West,
Montreal, Quebec, Canada H3G 1M8; e-mail: eborokhovski@education.concordia.ca.
His research interests include cognitive and educational psychology, language
acquisition, and the methodology of systematic reviews, meta-analyses in particular.
Downloaded from http://rer.aera.net at CONCORDIA UNIV LIBRARY on September 29, 2009
C. ANNE WADE (MLIS) is a manager and information specialist at the Centre for the Study
of Learning and Performance, Concordia University, LB-581, 1455 de Maisonneuve Blvd.
West, Montreal, Quebec, Canada H3G 1M8; e-mail: anne.wade@education.concordia.ca. Her
expertise is in information literacy, information storage and retrieval, and research strate-
gies. She has worked and taught extensively in the field of information sciences for 20
years.
RANA M. TAMIM is a recent PhD graduate in educational technology at Concordia
University, LB-581, 1455 de Maisonneuve Blvd. West, Montreal, Quebec, Canada H3G
1M8; e-mail: rana.tamim@education.concordia.ca. Her research interests focus on the
role of computer technology in facilitating learning, as well as science education, gen-
erative learning strategies, and meaningful learning.
MICHAEL A. SURKES is a recent PhD graduate in educational technology at Concordia
University, LB-581, 1455 de Maisonneuve Blvd. West, Montreal, Quebec, Canada H3G
1M8; e-mail: surkes@education.concordia.ca. His academic qualifications include
degrees in physiological psychology, experimental psychology, and philosophy, and his
research concerns meta-cognition, meta-motivation, and the development of complex
and coherent conceptual frameworks.
EDWARD CLEMENT BETHEL is a PhD candidate in educational technology at Concordia
University, LB-581, 1455 de Maisonneuve Blvd. West, Montreal, Quebec, Canada H3G
1M8; e-mail: e_bethel@education.concordia.ca. His research interests include one-to-one
laptop computing programs, learning objects and learning design, human cognition and
multimedia learning, cognitive load theory, and research synthesis.
... effective learning (Bernard et al., 2009). Ways to improve student-student interactions should be explored further. ...
... Research shows that structured interactions with educative value have positive effects on student learning. Such interactions include cooperative learning activities, team projects, and discussion forums (Bernard et al., 2009). ...
Article
The swift transition to remote learning in response to the COVID‐19 pandemic presented substantial challenges for faculty at many universities. This study explores faculty perceptions that influenced their satisfaction with the transition to online classes during the COVID-19 pandemic. Challenges related to teaching online classes and the supportive environment provided by the institution had the strongest effects. Other important factors were comfort with online teaching and perceived quality of interactions with students. Teaching experience, tenure status, and other demographic variables had no significant effect on satisfaction with the transition to online teaching. Applications of lessons learned to future emergencies, as well as to online and blended classes, are discussed.
... For instance, the work of Xie, H., et al., [15] on adaptive learning environments underscores the potential of technology in creating customized educational experiences. Moreover, Bernard, R.M., et al., [16] demonstrate the efficacy of technology-mediated adaptive learning in improving student performance across various subjects, indicating its versatility. ...
Conference Paper
Full-text available
This study investigates how LLMs, specifically GPT-3.5 and GPT-4, can develop tailored questions for Grade 9 math, aligning with active learning principles. By utilizing an iterative method, these models adjust questions based on difficulty and content, responding to feedback from a simulated 'student' model. A novel aspect of the research involved using GPT-4 as a 'teacher' to create complex questions, with GPT-3.5 as the 'student' responding to these challenges. This setup mirrors active learning, promoting deeper engagement. The findings demonstrate GPT-4's superior ability to generate precise, challenging questions and notable improvements in GPT-3.5's ability to handle more complex problems after receiving instruction from GPT-4. These results underscore the potential of LLMs to mimic and enhance active learning scenarios, offering a promising path for AI in customized education. This research contributes to understanding how AI can support personalized learning experiences, highlighting the need for further exploration in various educational contexts.
... The effectiveness of virtual learning in mathematics achievement depends on various factors, including the quality of instructional design (Means et al., 2009; Puentedura, 2011), teacher-student interaction (Bernard et al., 2009), technological infrastructure (Watson et al., 2013), and students' access to technology and internet connectivity (Becker et al., 2017). Effective implementation of virtual learning strategies, along with appropriate support and monitoring, can contribute to improved mathematics achievement for students in virtual learning environments. ...
Article
Learning has taken on a new dimension in the present era. Virtual learning has become increasingly popular in recent years due to technological advancement. Instruction has moved from a didactic, teacher-centered pedagogy, in which knowledge is transmitted from instructor to learner, to virtual, mobile, blended/hybrid, and e-learning modes. Learning now reflects a transformative approach to teaching that goes beyond the acquisition of knowledge and skills to create profound changes in individuals' beliefs, attitudes, values, and behavior. To obtain the skills required for learning, learners should have access to learning that is flexible and can occur anywhere, anytime through virtual instruction. The study emphasizes the ubiquitous (u-learning) paradigm, synchronous virtual learning in particular, and teachers' role in mathematics teaching and learning. This research has contributed valuable insights into how synchronous virtual learning can improve mathematics teaching and learning, support diverse learners, and inform instructional practices in digital environments. The implication of using synchronous virtual learning is to promote active engagement, social interaction, and peer collaboration, enhancing student comprehension and problem-solving skills. Since learning cannot be limited to didactic instruction, virtual learning should be considered for effective mathematics learning.
... The main forms of classroom interaction are student-student interaction, teacher-student interaction, and student-content interaction (Borokhovski et al., 2016). Among them, student-student interactions are considered to be at the center of classroom interactions and the most important type of interaction affecting student achievement (Bernard et al., 2009; Hu et al., 2023). Therefore, it is important to analyze student-student dialogues in the classroom. ...
Article
Full-text available
Although traditional analytical techniques can characterize learners' internal cognitive abilities indirectly, they have difficulty presenting learners' developmental characteristics and changes comprehensively and dynamically. Epistemic network analysis, on the other hand, is an emerging educational research method that integrates qualitative and quantitative analyses and can visually and dynamically characterize the developmental trajectory of students' interactive dialogues. Therefore, this study used epistemic network analysis to dynamically track students' interactive dialogues. It was found that (1) the use of interactive dialogues differed between the intervention and control groups: students in the intervention group used more exploratory dialogues, whereas students in the control group used more cumulative dialogues; (2) viewed over time, the interactive dialogues of the control group did not develop significantly, while those of the intervention group developed faster in the early stage and more slowly later; and (3) the conversation patterns of students in the excellent group centered on exploratory conversations, mainly reasoning and reflection, and most students in the excellent group came from the intervention group, with the reverse holding for the control group.
Article
Full-text available
The research investigated the effectiveness of using an e-training environment in developing Capstone teaching skills among STEM teachers. To achieve the aim of the research, the researchers applied a one-group quasi-experimental design and utilized three instruments: 1) a list of the Capstone teaching skills; 2) a pre-post achievement test developed by the researchers and implemented before and after applying the training content; and 3) an observation card for observing acquisition of the Capstone teaching skills, covering two main fields (the EDP field and the process management field) comprising eight (8) main skills subdivided into sixty-four (64) subskills required of STEM teachers. The participants (N = 27) were chosen randomly at Obour STEM School, Cairo Governorate, Egypt.
The researchers relied on pre- and post-testing procedures applied to the research group. The pre-test, comprising 70 questions covering the 64 subskills, was administered after trainees' responses to a 68-question training-needs survey were collected, and the training needs were then determined from the pre-test results. Over one month, the training was held online with Microsoft Teams as the main platform; at the end of the training sessions, the trainees took the post-test, which was reapplied after another month. The research thus investigated the effectiveness of using the e-training environment based on the SOLE technique (the independent variable) in developing Capstone teaching skills (the dependent variable) among STEM teachers. Quantitative results showed a statistically significant difference at the level of (α ≤ 0.05) between the pre- and post-achievement tests of the experimental group in developing Capstone teaching achievement of STEM teachers, in favor of the post-test, and a statistically significant difference at the level of (α ≤ 0.05) between the pre- and post-observation cards of the experimental group in developing Capstone teaching skills of STEM teachers, in favor of the post-test. The researchers therefore recommended employing the e-training environment based on the SOLE technique in developing Capstone teaching skills among STEM teachers in Egypt.
Article
A course on computer systems performance analysis has been adapted for several different distance education delivery options, including an interactive television system, face-to-face presentation at a satellite campus, and delivery over the Internet to independent study students. Of the 122 students who enrolled in this graduate-level course for a grade over the three-year period analyzed, half were nontraditional students who never set foot on campus. These remote students have a substantially higher drop-out rate than the traditional on-campus students and frequently indicate a strong preference for face-to-face instruction in a traditional classroom setting. Nevertheless, due to significant differences in the characteristics of the two student groups, the remote students typically earn higher final course grades than the on-campus students. While there is strong demand for delivery of this type of advanced course to remote students, more still needs to be done to effectively engage these students in the learning process.
Article
This paper describes a collaborative effort between faculty in the College of Engineering and the College of Education at the University of Alabama. A graduate course in engineering mathematics called Partial Differential Equations was developed, then taught in the Spring 2002 term to 26 onsite students and 14 off-campus students. The students in the class were divided into three test groups: (1) traditional mode of delivery only, (2) online delivery only, and (3) a mixture of traditional and online delivery. In addition, the performance of the students taking the class was compared to that of a previous semester's students who took the class via the traditional mode of delivery. Results indicate that the mode of delivery had little effect on student performance.
Article
The research presented here has a double objective: comparing two undergraduate courses on research methods at Bar-Ilan University (BIU) in Israel, and examining students' attitudes toward the subject matter, as well as their attitudes toward incorporating online learning into the learning process. The subject matter in both courses - one in the School of Education and the other in the Department of Political Science - was almost identical. Each of the first two authors of this paper taught one of the courses. The pedagogical online model of the courses is different; while the education course is categorized as fully online with no required class meetings and predetermined content occupies most of the course, the political science course uses the wrap-around model, combining a class setting, online interaction, and discussions with predetermined content. Students' attitudes were examined twice, at the beginning of the courses and at their end. Research findings reveal significant differences between the courses and between the two points in time. One possible explanation of these findings is based on processes of instructor-student interaction.
Article
The word interactive, when used to describe computer based learning resources, has tended to imply better experiences, more active learning, and enhanced interest and motivation. But despite the investment in productions to date, this interactive condition has not been consistently realised. Although the surge in internet based communications and collaborative learning activities has extended the opportunities for human-human communication, the complexity of learner-computer interactivity has yet to be fully unravelled. This paper examines the relationship between the independent learner and computer based learning resources, which continue to be integral to educational delivery, especially in the training sector. To place interactivity in context, the first part of the discussion focuses on the major dimensions of interactivity and the different ways they have been characterised in computer based learning environments. These dimensions demonstrate the many ways that interactivity can be interpreted and the critical role that design and development plays in creating effective interactive encounters. The second part of the paper reviews the way storytelling structures and narrative have been promoted as effective strategies for enhancing comprehension and engagement in computer based learning applications. The way in which interactivity and narrative are linked becomes critical to achieving this outcome. Extending the use of a narrative within interactive media to include elements of performance and theatre, the third part of the discussion proposes that by conceptualising the learner as actor, a form of learner-designer communication can be established. Integrating this approach with elements of conversational and communication theory provides a context in which the learner-computer interface is transcended by that of learner and designer.
Enabling this form of communication with the independent learner is suggested as a means to enhance computer based learning environments.