Personalized Versus Collective Instructor
Feedback in the Online Courseroom:
Does Type of Feedback Affect Student Satisfaction,
Academic Performance and Perceived
Connectedness With the Instructor?
TARA GALLIEN
Northwestern State University, USA
tarag@nsula.edu
JODY OOMEN-EARLY
Texas Woman’s University, USA
joomen@mail.twu.edu
The demand for online learning has never been greater. For
faculty, teaching in the virtual classroom requires a new set of
skills and practices. Online instructors must prepare for the
increased written communication demands that accompany
online education, such as the large amount of time needed to
respond to student inquiries and to provide feedback on
assignments. The purpose of this comparative study was to
determine if there were significant differences in student sat-
isfaction, performance, and perceived “connectedness” to the
instructor when the instructor used collective versus person-
alized feedback. In addition, this study examined two other
variables: the time required to deliver the type of feedback as
well as students’ prior experience with online learning. Four
online health courses were randomly assigned to one of two
treatment groups: personalized or collective. Students in the
personalized group received individual feedback from the
instructor on each assignment. Students in the collective
group received a collective feedback document from the
instructor that summarized overall class performance, ways
to improve, and student perspectives. Data were collected
using an online survey and course evaluations. The findings
revealed that students who received personalized feedback
were more satisfied and performed academically better than
students who received only collective feedback. Furthermore,
prior online experience moderately predicted student satisfaction
and performance. However, no significant findings were
found between treatment groups and the variable of perceived
connectedness. This study serves as a springboard for future
studies examining how instructor-student interaction affects
student learning and satisfaction within the online classroom.

International Jl. on E-Learning (2008) 7(3), 463-476
At the university level, administrative pressure to compete in the distance
education market has led to increased demands on faculty to develop and
deliver courses online. For many faculty, the transition from the traditional
classroom to the online environment has not been an easy one, especially for
faculty who received minimal or no training prior to online course develop-
ment and delivery (Lazarus, 2003). A common practice among unprepared
and poorly trained faculty is to repackage an existing course as “online.”
However, this practice has met with criticism from both teachers and stu-
dents, and has caused many faculty to question the pedagogical soundness
of online instruction (DeBard & Guidera, 2000).
Indeed, teaching and learning in an environment separated by space and
time require a new set of skills and practices for both teacher and learner.
There are many new challenges faced by instructors. Faculty who teach
online must make major adaptations to their traditional ways of teaching.
They must implement pedagogically sound online activities and learn to
motivate students whom they have never met face-to-face (Palloff & Pratt,
1999, 2003). These adaptations require course redesign, which can be very
time-consuming. Furthermore, teachers must also spend large amounts of
time responding to student inquiries and assignments. The issue of time com-
mitment in terms of faculty workload deserves further discussion because of
its potential to impact learning outcomes.
The online environment is unique in that it involves people in ways that
other distance education formats (e.g., correspondence, compressed video,
etc.) do not. Much of this uniqueness lies in the dynamic nature of commu-
nication that takes place between the online instructor and students, and stu-
dents and their classmates in the asynchronous learning environment. In par-
ticular, online learning and teaching involves a shift from oral to written
communication (Picciano, 2002). This shift away from oral dialogue and
nonverbal cues toward a dependence on written communication poses sev-
eral interesting challenges for instructors who have spent a majority of their
teaching careers in the traditional face-to-face classroom (Picciano).
One of these challenges has been the commitment of time to read and
respond to extensive amounts of dialogue and written assignments. In the tra-
ditional class, for example, instructions, concepts, and feedback can be orally
communicated in a relatively short period of time; however, in the online
class, instructors must concisely communicate their thoughts in written words
so every student clearly understands what is being communicated. This is not
always an easy task, depending on the instructor’s writing skills, typing
abilities, and computer capabilities. The time commitment associated with this
form of communication can be quite extensive (Hara & Kling, 2000).
In a recent study, researchers reported that 76% of instructors believed
they spent more time preparing and delivering an online course compared
with a traditional face-to-face course (McKenzie, Mims, Bennett, & Waugh,
2000). These findings are consistent with several other studies that explore
the time commitment associated with teaching from a distance (Clark, 1993;
Schifter, 2000). Much of what is reported is primarily anecdotal and not
empirically based. Few studies have attempted to quantify the amount of
time needed to teach online courses (DiBiase, 2004; Lazarus, 2003), and
even fewer have specifically examined the amount of time it takes to com-
municate and provide feedback to students on assignments.
To counterbalance the anecdotal data analyzed in earlier studies of facul-
ty workload in the online environment (i.e., issues related to time commit-
ment), Lazarus (2003) and DiBiase (2004) kept detailed records to measure
the amount of time the instructor/researcher used to conduct online classes.
While these studies may provide evidence pointing to the most time-con-
suming aspects of teaching online, they do not mention the specific peda-
gogical methods utilized in conducting these time-consuming activities.
That is, what was the nature of the communication? What type of feedback
was provided to students about their discussions or responses to an assign-
ment? A more detailed examination of instructor feedback follows to better
understand its role in promoting student learning.
INSTRUCTOR FEEDBACK
In simplest terms, educational feedback is “any message generated in
response to a learner’s action” (Mason & Bruning, 1999, p. 1). In the tradi-
tional classroom, instructor feedback is relayed to students in a variety of
ways. During in-class discussions, for example, immediate oral feedback is
provided by the instructor and serves to identify misconceptions and incor-
rect responses, or to provide additional information to clarify ideas and con-
cepts. This feedback is most often presented collectively. That is, all students
hear what the instructor is saying. On examinations, instructors can provide
students with correct answers and provide feedback, either verbally or in
writing, explaining or clarifying misunderstandings. Again, the feedback can
be given individually or presented collectively (i.e., the instructor reviews
the exam with the entire class in one session). On assignments, instructors
often correct errors (e.g., spelling or grammatical errors) and/or write notes
or messages to students that provide two types of information: verification
(i.e., whether an answer is correct or incorrect) or elaboration (i.e., provides
cues to guide learners toward a correct answer; Kulhavy & Stock, 1989;
Mason & Bruning, 1999). In most cases, this information is presented at the
individual level, not collectively. Clearly, a large portion of feedback in this
environment is oral, which means the instructor spends less time formulat-
ing written responses. In the online environment, however, feedback is pro-
vided solely in written form (Palloff & Pratt, 1999, 2003).
One might assume, therefore, that preparing this written communication
consumes a large part of the online instructor’s time. However, little
research has empirically explored whether different forms of feedback can
reduce instructor workload and effectively promote student learning.
STUDENT SATISFACTION AND “CONNECTEDNESS”
Several recent studies have explored student perceptions of online learn-
ing, including levels of satisfaction and perceived “desire for connected-
ness” or community (Cashion & Palmieri, 2002; Richardson & Swan, 2003;
Woods, 2002). Empirical evidence suggests that a “high degree” of faculty-
student interaction is important to foster a sense of student-instructor “con-
nectedness” (relationship) and satisfaction.
An electronic search for information related to the concept of “connect-
edness” in the online environment yielded few studies. A European research
partner of the Massachusetts Institute of Technology (MIT) Media Lab,
called the Human Connectedness research group, is currently exploring the
topic of human relationships and how they are mediated by technology
(Media Lab Europe, n.d.). The research group’s mission is to find ways to
foster a sense of presence and awareness that enhances a sense of commu-
nity and togetherness through the use of technology (Media Lab Europe).
The concepts being studied by the Human Connectedness group,
although not specific to online education, are similar to the highly studied
constructs of “social presence,” “interaction,” “community,” and “immedia-
cy behaviors” in the online environment. These concepts and the newly con-
ceived concept of student-instructor connectedness recognized in this study
are all closely related.
SOCIAL PRESENCE
Social presence refers to the “degree of salience of the other person in the
(mediated) interaction and the consequent salience of the interpersonal rela-
tionships” (as cited in Richardson & Swan, 2003, p. 70). In other words,
social presence refers to the degree that individuals perceive others to be real
in the online environment. According to researchers, this perception of
human presence is essential for students and faculty to develop personal
relationships and communities online (Richardson & Swan). Studies indi-
cate that social presence appears to have a positive influence on student sat-
isfaction and performance (Newberry, 2001; Richardson & Swan). Accord-
ing to Hackman and Walker (1990), “social presence is influenced by the
delivery modes utilized for specific communication functions” (p. 198). To
increase social presence, strengthen relationships, and build online commu-
nities, it is imperative for there to be interaction between students and
instructors (Woods & Baker, 2004). Interaction will also lead to positive
communication behaviors such as teacher immediacy (Woods & Baker).
TEACHER IMMEDIACY
Teacher immediacy behaviors enhance closeness by reducing the psy-
chological distance (i.e., perceived distance) between individuals (Hackman
& Walker, 1990). Some of the behaviors used by teachers to produce imme-
diacy and build a sense of psychological closeness include verbal encour-
agement, praising, asking questions, using humor, and self-disclosure
(Hackman & Walker; Woods & Baker, 2004). In addition, Hackman and
Walker found that teachers who engaged in other immediacy behaviors such
as providing individualized feedback on papers and assignments and
encouraging teacher-instructor communication were viewed more favorably.
Furthermore, Richardson and Swan (2003) reported that students’ over-
all perception of social presence, as measured by a modified version of the
Social Presence Scale, accounted for 35% of the variability in students’
overall satisfaction with the instructor. In this study, however, the social
presence construct was found to be closely related to the construct of
instructor satisfaction and so readers should be cautious in their interpreta-
tion of the results. Qualitative data collected from the open-ended questions
at the end of the survey supported this possibility. In response to questions
related to overall satisfaction, students related their overall satisfaction with
the instructor to the instructor’s involvement with them in terms of guidance
and support with course material, and the feedback they received on class
assignments. This suggests that instructors’ immediacy behaviors and level
of interaction are important in determining students’ level of satisfaction.
Interaction
In another study, Vrasidas and McIsaac (1999) examined the nature of
interaction in an online course from the perspectives of the teacher and the
students. The authors concluded that students need continuous, frequent sup-
port, and feedback, especially from instructors. Similarly, the Sloan Consor-
tium (2002), in the document titled Five Pillars of Quality Online Education
stated, “online environments that effectively facilitate high levels of interaction
and collaboration among learners typically result in successful online
programs” (p. 5). As instructors communicate with their students, they may
opt to use “collective” feedback (messages addressed and sent to the entire
group) or tailored feedback (individual messages sent to each student).
Community
The sense of community is another important factor in online courses, and
is a strong predictor of student satisfaction and learning (Woods & Ebersole,
2003). Research examining online communities has focused on social net-
works and the relationships that occur within these networks (Harris & Muir-
head, 2004). Unfortunately, much of the research examining the concept of
community has occurred in the nononline setting. Nevertheless, the same
qualities of community observed in a traditional setting should be fostered in
the online environment (e.g., interaction, shared interests, social presence, cul-
ture, relationships, and support; Harris & Muirhead). Rovai (2002) developed
an instrument called the Classroom Community Scale to measure sense of
classroom community in an online learning environment. He concluded that
the instrument yielded two subscales: community connectedness and learning.
Following analysis of the concepts of social presence, interaction, teacher
immediacy, and community, the researcher in the current study recognized
the operation of another construct – the concept of connectedness with the
instructor. Within this context, connectedness refers to a person’s sense of
belonging or presence, feelings of support, and level of communica-
tion/interaction with the instructor. Students who perceive a sense of
connectedness with their instructor may be more likely to feel satisfied and
perform well in their online courses, but this possibility has yet to be
addressed by research. Furthermore,
at this time it is unclear how different forms of instructor feedback influence
students’ perceived connectedness, satisfaction and overall performance.
SUMMARY
Purpose of the Study
The purpose of this comparative study was twofold: (a) to determine if
there were significant differences in students’ academic performance, satis-
faction, and perceived “connectedness” scores when the instructor provided
collective versus tailored feedback; and (b) to determine if the amount of
time necessary to provide personalized feedback differed significantly from
the amount of time needed to disseminate collective feedback.
Sample
Participants who comprised the convenience sample were undergraduate
students enrolled in four different online health education courses. To
achieve an adequate number of participants in each treatment group, stu-
dents from two universities were asked to participate: a university in central
Louisiana and a university in northern Texas. The initial enrollment per class
was approximately 25 to 30 students, with the exception of one course,
which had fewer than 20 students. Due to attrition, the mean number of stu-
dents in each course dropped to 21 students by the end of the semester. A
total of 84 students comprised the final sample.
Methods
Prior to the onset of the study, participants were notified that they had been
chosen to participate in a research study and were given the option to partic-
ipate in the research. To reduce reactivity, the participants were not informed
of the research questions but were informed that they were part of a research
study relating to teaching strategies and e-learning. Since all four courses
were elective courses at both universities, participants were given the option
to choose another online health course if they did not wish to participate. Fol-
lowing informed consent, the participants were randomly assigned to one of
two treatment groups: personalized or collective. Students in the personalized
group received individualized feedback from the instructor on each assign-
ment. Students in the collective group received a collective feedback docu-
ment from the instructor on each assignment as well. The document summa-
rized aspects of the assignment from the various student perspectives and
provided suggestions based on the group’s overall execution of the assigned
task. With the exception of content, each course was designed and taught
using the same instructional design and teaching strategies. The number of
assignments was the same for all four courses. This was done to help equal-
ize the amount of feedback students received in each treatment group.
The instructor used a taxonomy of feedback proposed by Blignaut and
Trollip (2003) to construct the personalized messages (i.e., feedback, see
next paragraph). The purpose of the taxonomy was to ensure that the instruc-
tor was being consistent with the type of feedback she was providing to stu-
dents assigned to the personalized feedback treatment group. To establish
inter-rater reliability, an independent reviewer used the same taxonomy to
code samples of the instructor’s personalized feedback. Again, this was done
to assess whether the instructor was being consistent with the feedback she
was providing to the personalized treatment group. Inter-rater reliability was
established, as the instructor’s and independent reviewer’s coding of the
feedback messages did not differ significantly, and there was a strong corre-
lation between the coders’ classifications (r = .98).
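The reported coder agreement can be sketched as a Pearson correlation between the two coders' category assignments. The numeric codes and sample messages below are hypothetical (the article does not reproduce the coded feedback), and for purely categorical codes a chance-corrected index such as Cohen's kappa would be the more conventional choice:

```python
import numpy as np

def interrater_r(coder_a, coder_b):
    """Pearson correlation between two coders' numeric feedback codes."""
    return float(np.corrcoef(coder_a, coder_b)[0, 1])

# Hypothetical coding of ten sampled feedback messages:
# 1 = corrective, 2 = informative, 3 = Socratic (per Blignaut & Trollip).
instructor = [1, 2, 2, 3, 1, 2, 3, 3, 1, 2]
reviewer   = [1, 2, 2, 3, 1, 2, 3, 2, 1, 2]
print(round(interrater_r(instructor, reviewer), 2))
```

A correlation near 1 indicates the two coders classified the sampled messages almost identically, which is how the study interpreted its r = .98.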
The categories provided by Blignaut and Trollip (2003) are as follows:
1. Corrective Feedback: feedback that corrects the content of a stu-
dent’s answer to an assignment (e.g., While your definition of epidemiology
is correct, your answer to the second half of the question
was not clear. Please read the question again and provide specific
examples with your explanation that explain why you think epi-
demiology is such an important discipline for health education.)
2. Informative Feedback: feedback that comments on a student’s
answer to an assignment from a “content perspective” (e.g., This is
a good answer – not only did you explain the different levels of pre-
vention, but you provided examples and personal experience to
explain why health behavior might be difficult to change at one
level of prevention versus another.)
3. Socratic Feedback: feedback that asks “reflective questions” about
the student’s answer to an assignment (e.g., Now that you have
explained the Health Field Concept in detail, reflect on your own
experiences: which element of this concept do you feel is most relat-
ed to a person’s health status? Why?)
Collective feedback was constructed after the instructor read all the stu-
dents’ responses to an assignment. In the collective feedback document, the
instructor discussed each aspect of the assignment – pointing out responses
that were well written and informative, identifying and correcting common
errors, clarifying misunderstandings, and making suggestions for improving
responses on future assignments. The document was usually 1 1/2 pages to
2 pages in length and single-spaced. Collective feedback was posted to the
discussion board as well as emailed to each student.
The instructor used self-monitoring to document the amount of time
involved in each grading session. The start and stop times were recorded on the
students’ assignments and then transferred to a logbook for final computation.
Each grading session consisted of the amount of time it took to (a) grade assign-
ments, (b) prepare feedback, and (c) send feedback. However, only the time it
took to prepare and disseminate feedback was used in the final analysis.
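The self-monitoring procedure described above, recording start and stop times per grading session and tabulating the totals, can be sketched as follows. The log entries are hypothetical illustrations; the study's actual log sheets are not reproduced in the article:

```python
from datetime import datetime

def session_minutes(start, stop, fmt="%H:%M"):
    """Minutes elapsed in one grading session (same-day start/stop assumed)."""
    return (datetime.strptime(stop, fmt) - datetime.strptime(start, fmt)).seconds / 60

# Hypothetical log entries: (start, stop) for feedback preparation and
# dissemination only, since grading time was excluded from the final analysis.
log = [("09:00", "10:10"), ("14:30", "15:45"), ("19:05", "20:00")]
total = sum(session_minutes(s, e) for s, e in log)
print(f"mean minutes per session: {total / len(log):.1f}")
```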
At the completion of the course, students were asked to participate in a
survey designed to assess their level of satisfaction and perceived sense of
connectedness with the instructor. The survey also contained four open-ended
questions, which asked participants to describe aspects of the course that may
have contributed to their sense of belonging or connectedness with the
instructor, their overall satisfaction with the course and the type of feedback
they received on course assignments, and their thoughts about factors outside
of the course that may have affected their overall performance in the course.
Due to the nature of the study (i.e., anonymous reporting), individual grades
were not obtained from participants. To report overall performance, the aver-
age percentage for each course was calculated using the final scores from
each of the 84 students participating in the study. These four scores were
entered into SPSS for each of the 71 participants who took the survey.
The final survey data were analyzed using t-tests for independent samples
to compare the mean scores on student overall performance, satisfaction, and
perceived connectedness to the instructor between the two treatment groups.
The t-test for independent samples was the most appropriate statistical test to
use because there were only two treatment groups and the participants were
only tested once on each of the aforementioned variables. The open-ended
questions were categorized based on common themes that emerged during
analysis. The researcher explored these themes in relation to the research
questions posed in the study. Finally, the researcher determined the amount
of time it took the instructor to provide each type of feedback, as documented
in the instructor’s log sheets. Data were then tabulated to
determine the mean hours it took to provide each form of feedback.
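The independent-samples comparison described above can be sketched with SciPy. The scores below are hypothetical stand-ins, since the article does not reproduce individual student data; `equal_var=False` selects Welch's unequal-variance form of the t-test, which is what produces fractional degrees of freedom like the t(58.81) reported in the findings:

```python
import numpy as np
from scipy import stats

# Hypothetical per-student final scores (proportions) for each group.
personalized = np.array([0.92, 0.88, 0.85, 0.81, 0.90, 0.84, 0.87, 0.79, 0.86, 0.83])
collective = np.array([0.80, 0.74, 0.77, 0.71, 0.79, 0.75, 0.73, 0.78])

# equal_var=False runs Welch's t-test, appropriate when the two groups may
# have unequal variances and unequal sizes, as in this study.
t_stat, p_value = stats.ttest_ind(personalized, collective, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

With each participant measured only once and just two groups to compare, this test matches the design; a paired test or ANOVA would not.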
FINDINGS
Results of Quantitative Analysis
A summary of statistical findings is as follows:
1. Students who received personalized feedback (M = .85, SD = .02)
on average performed better than students who received collective
feedback (M = .77, SD = .03). The test was significant, t(58.81) =
12.03, p < .05 (Table 1).
2. Students who received personalized feedback (M = .17, SD = .63)
on average were more satisfied with the course and the feedback
they received than students who received collective feedback (M =
-.19, SD = .81). The test was significant, t(69) = 2.12, p = .04
(Table 2).
3. Students who received personalized feedback on average did not
perceive themselves to be more connected to their instructor than
students who received collective feedback (see Table 3).
Table 1
Average Performance Scores for Survey Items Between Individuals Who
Received Personalized (N= 39) or Collective (N= 32) Feedback
Survey item             t        p     Mean    SD
Overall Performance   12.03*    .00
  Personalized                          .85    .02
  Collective                            .77    .03
Note: * p < .001
4. The final research question discussed in this article relates to the
amount of time it took the instructor to prepare the different forms
of feedback for each treatment group. Based on the findings, collective
feedback was the least time-consuming for the instructor to prepare.
On average, it took three hours and 15 minutes to construct and send
personalized feedback (per course) versus one hour and 43 minutes
to prepare and disseminate collective feedback (Figure 1).

Table 2
Average Overall Satisfaction and Satisfaction Scores for Survey Items Between
Individuals Who Received Personalized (N = 39) or Collective (N = 32) Feedback

Survey item             t       p     Mean    SD
Overall Satisfaction   2.12*   .04
  Personalized                         .17    .63
  Collective                          -.19    .81
Note: * p < .05

Table 3
Average Connectedness Scores for Survey Items Between Individuals Who
Received Personalized (N = 39) or Collective (N = 31) Feedback

Survey item      t      p     Mean    SD
Connectedness   1.46   .15
  Personalized                 .12    .72
  Collective                  -.15    .82

[Figure 1. Mean hours to prepare and disseminate feedback per treatment group]

Results of Qualitative Analysis

Students were also asked to provide qualitative feedback in relation to the
course and instruction. When students were asked to describe the
aspects of the course that contributed to their satisfaction, students from the
personalized treatment group were more inclined to describe aspects of the
course that led to their satisfaction compared to students from the collective
feedback group. Unexpectedly, both groups commented on the two-week
module design of the course rather than the feedback they received on
assignments as a factor leading to their satisfaction. In fact, only two stu-
dents made reference to feedback in their response to this question.
When students were asked about their sense of connectedness with the
instructor or feelings of isolation, students from the personalized group report-
ed only factors related to connectedness; whereas students from the collective
group reported aspects that led both to feelings of connectedness and to isola-
tion. Communication (through chats, email, message forum, and phone) was
the most common response category related to connectedness. Apparently, stu-
dents from both treatment groups associated feelings of connectedness with the
type and frequency of communication they experienced with the instructor.
Students were also asked to comment on the frequency and quality of the
feedback they received from the instructor. Forty-four participants answered
this question: 19 from the personalized feedback group and 25 from the col-
lective feedback group. Based on the results, twice as many students (from
both treatment groups) stated they received prompt feedback compared to
students who reported slow feedback. Also, approximately 30% (n= 13) of
the students from both groups rated the quality of instructor feedback as
“high quality” or “good.” However, more students from the collective feed-
back group (48% vs 16%) reported dissatisfaction with the frequency and
quality of feedback they received (Table 4).
Table 4
Comments on Frequency and Quality of Instructor Feedback: Major Categories
and Frequencies of Responses per Treatment Group (N = 44)

Category                                                   Personalized   Collective
Frequency of feedback:
  Prompt                                                        5              3
  Slow                                                          3              1
Quality of feedback:
  High quality/good                                             7              6
  Did not give adequate justification for point deductions      0              3
  Did not give personal/individual feedback                     0              2
  Liked receiving feedback via email                            0              2
Note: Only items with more than one response were included in the table. Some
students had more than one response.
Finally, students were also asked to describe factors outside of the course
that may have affected their overall performance. Nearly 30% (n=13) of the
students who responded to this question stated there were no factors outside
of the course that affected their performance. For the remaining majority,
however, family and work commitments were most commonly reported.
Conclusion
Research examining the interaction between student and instructor in
online learning is still in its infancy. Based on the findings of this study, stu-
dents who received personalized feedback from the instructor on assign-
ments were significantly more satisfied and performed academically better
than students who received collective feedback. The quantitative findings
suggest that students perform better and are more satisfied with their online
course when they receive personalized attention (in the form of feedback)
from the instructor. However, qualitatively, the students related their level of
satisfaction to the design of the course and the availability (i.e., presence) of
the instructor to respond quickly to questions and concerns about the class
rather than to the feedback they received on their assignments. This infor-
mation would not have been available to the researcher had the open-ended
questions not been included on the survey. Therefore, this mixed-method
data collection approach was effective in capturing this phenomenon. This
study further underscores the benefit of using both qualitative and quantita-
tive data collecting techniques to develop a more “complete” picture of the
factors that affect student satisfaction in online courses.
Although the concept of connectedness did not emerge as a discrete vari-
able in this study, it is worthy of further attention as a variable in future stud-
ies that relates to student-faculty relationships. Palloff and Pratt (2003)
noted that student-faculty contact is one of the most important factors in
determining student motivation and involvement. Future studies should clar-
ify what constitutes “connectedness,” as this term is operationalized differ-
ently in existing studies.
This study supports the need for continued research in e-learning and
serves as a starting point for future studies which investigate how student-
instructor interaction impacts academic success and satisfaction in the
online classroom. Most certainly, with time and the continuation of research,
the body of knowledge pertaining to online education will grow, as will our
understanding of what constitutes quality online teaching.
References
Blignaut, S., & Trollip, S. (2003, June). A taxonomy for faculty participation in asynchronous
online discussions [Electronic version]. Proceedings of the World Conference on Educational
Multimedia, Hypermedia and Telecommunications 2003, (pp. 2043 – 2050), Honolulu, HI.
Cashion, J., & Palmieri, P. (2002). The secret is the teacher: The learners’ view of online
learning. Leabrook, Australia: National Centre for Vocational Education Research. (ERIC Doc-
ument Reproduction Service No. ED475001)
Clark, T. (1993). Attitudes of higher education faculty toward distance education: A national
survey. The American Journal of Distance Education, 7, 19-33.
DeBard, R., & Guidera, S. (2000). Adapting asynchronous communications to meet the seven
principles. Journal of Educational Technology Systems, 28, 219-230.
DiBiase, D. (2004, April). The impact of increasing enrollment on faculty workload and student
satisfaction [Electronic Version]. Journal of Asynchronous Learning Networks, 8, 45-60.
Hackman, M. Z., & Walker, K. B. (1990). Instructional communication in the televised
classroom: The effects of system design and teacher immediacy on student learning and sat-
isfaction. Communication Education, 39, 196-206.
Hara, N., & Kling, R. (2000). Student distress in a web-based distance education course
[Electronic version]. Information, Communication & Society, 3, 557-579.
Harris, R., & Muirhead, A. (2004, April 7). Online learning community research – Some
influences of theory methods. Paper presented at the Networked Learning Conference 2004,
Lancaster University, Lancaster, UK. Paper retrieved June 7, 2004, from
http://www.networkedlearningconference.org.uk/past/nlc2004/proceedings/symposia/symposium7/harris_muirhead.htm
Kulhavy, R. W., & Stock, W. A. (1989). Feedback in written instruction: The place of response
certitude. Educational Psychology Review, 1, 279-308.
Lazarus, B. D. (2003, September). Teaching courses online: How much time does it take?
[Electronic version]. Journal of Asynchronous Learning Networks, 7(3), 47-54.
Mason, B. J., & Bruning, R. (1999). Providing feedback in computer-based instruction: What the
research tells us. Retrieved March 5, 2005, from University of Nebraska-Lincoln, Center for
Instructional Innovation Web Site: http://dwb.unl.edu/Edit/MB/MasonBruning.html
McKenzie, B., Mims, N., Bennett, E., & Waugh, M. (2000, Winter). Needs, concerns, and
practices of online instructors. Online Journal of Distance Learning Administration, III.
Retrieved October 4, 2003, from http://www.westga.edu/~distance/ojdla/browsearticles.php
Media Lab Europe. (n.d.). Human connectedness. Retrieved June 2, 2004, from
http://web.media.mit.edu/~stefan/hc/mission/
Newberry, B. (2001). Raising student social presence in online classes. (ERIC Document
Reproduction Service No. ED466611)
Palloff, R. M., & Pratt, K. (1999). Building learning communities in cyberspace: Effective
strategies for the online classroom. San Francisco: Jossey-Bass.
Palloff, R. M., & Pratt, K. (2003). The virtual student: A profile and guide to working with online
learners. San Francisco: Jossey-Bass.
Picciano, A. (2002). Beyond student perceptions: Issues of interaction, presence, and
performance in an online course [Electronic version]. Journal of Asynchronous Learning Net-
works, 6(1), 21-40.
Rovai, A. (2002). Development of an instrument to measure classroom community. The Internet
and Higher Education, 5, 197-211.
Richardson, J. C., & Swan, K. (2003). Examining social presence in online courses in relation to
students’ perceived learning and satisfaction [Electronic version]. Journal of Asynchronous
Learning Networks, 7, 68-88.
Schifter, C. C. (2000). Faculty participation in asynchronous learning networks: A case study of
motivating and inhibiting factors [Electronic version]. Journal of Asynchronous Learning Net-
works, 4, 10-23.
Sloan Consortium. (2002). The Sloan Consortium report to the nation: Five pillars of quality online
education. Retrieved February 6, 2005, from http://www.sloan-c.org/effective/pillarreport1.pdf
Vrasidas, C., & McIsaac, M. S. (1999). Factors influencing interaction in an online course.
Journal of Distance Education, 13, 22-36.
Woods, R. H. (2002). How much communication is enough in online courses? Exploring the
relationship between frequency of instructor-initiated personal email and learners’ percep-
tions of and participation in online learning. International Journal of Instructional Media, 29,
377-394.
Woods, R. H., & Baker, J. D. (2004). Interaction and immediacy in online learning. International
Review of Research in Open and Distance Learning, 5. Retrieved March 21, 2005, from
http://www.irrodl.org/index.php/irrodl/article/view/186/801
Woods, R., & Ebersole, S. (2003). Using non-subject-matter-specific discussion board to build
connectedness in online learning. The American Journal of Distance Education, 17, 99-117.