Personalized Versus Collective Instructor
Feedback in the Online Courseroom:
Does Type of Feedback Affect Student Satisfaction,
Academic Performance and Perceived
Connectedness With the Instructor?
TARA GALLIEN
Northwestern State University, USA
tarag@nsula.edu
JODY OOMEN-EARLY
Texas Woman’s University, USA
joomen@mail.twu.edu
The demand for online learning has never been greater. For
faculty, teaching in the virtual classroom requires a new set of
skills and practices. Online instructors must prepare for the
increased written communication demands that accompany
online education, such as the large amount of time needed to
respond to student inquiries and to provide feedback on
assignments. The purpose of this comparative study was to
determine if there were significant differences in student sat-
isfaction, performance, and perceived “connectedness” to the
instructor when the instructor used collective versus person-
alized feedback. In addition, this study examined two other
variables: the time required to deliver the type of feedback as
well as students’ prior experience with online learning. Four
online health courses were randomly assigned to one of two
treatment groups: personalized or collective. Students in the
personalized group received individual feedback from the
instructor on each assignment. Students in the collective
group received a collective feedback document from the
instructor that summarized overall class performance, ways
to improve, and student perspectives. Data were collected
using an online survey and course evaluations. The findings
revealed that students who received personalized feedback
were more satisfied and performed academically better than
students who received only collective feedback. Furthermore,
prior online experience moderately predicted student satisfac-
tion and performance. However, no significant findings were
found between treatment groups and the variable of perceived
connectedness. This study serves as a springboard for future
studies examining how instructor-student interaction affects
student learning and satisfaction within the online classroom.

International Jl. on E-Learning (2008) 7(3), 463-476
At the university level, administrative pressure to compete in the distance
education market has led to increased demands on faculty to develop and
deliver courses online. For many faculty, the transition from the traditional
classroom to the online environment has not been an easy one, especially for
faculty who received minimal or no training prior to online course develop-
ment and delivery (Lazarus, 2003). A common practice among unprepared
and poorly trained faculty is to repackage an existing course as “online.”
However, this practice has met with criticism from both teachers and stu-
dents, and has caused many faculty to question the pedagogical soundness
of online instruction (DeBard & Guidera, 2000).
Indeed, teaching and learning in an environment separated by space and
time require a new set of skills and practices for both teacher and learner.
There are many new challenges faced by instructors. Faculty who teach
online must make major adaptations to their traditional ways of teaching.
They must implement pedagogically sound online activities and learn to
motivate students whom they have never met face-to-face (Palloff & Pratt,
1999, 2003). These adaptations require course redesign, which can be very
time-consuming. Furthermore, teachers must also spend large amounts of
time responding to student inquiries and assignments. The issue of time commitment
in terms of faculty workload deserves further discussion because of
its potential to impact learning outcomes.
The online environment is unique in that it involves people in ways that
other distance education formats (e.g., correspondence, compressed video,
etc.) do not. Much of this uniqueness lies in the dynamic nature of commu-
nication that takes place between the online instructor and students, and stu-
dents and their classmates in the asynchronous learning environment. In par-
ticular, online learning and teaching involves a shift from oral to written
communication (Picciano, 2002). This shift away from oral dialogue and
nonverbal cues toward a dependence on written communication poses sev-
eral interesting challenges for instructors who have spent a majority of their
teaching careers in the traditional face-to-face classroom (Picciano).
One of these challenges has been the commitment of time to read and
respond to extensive amounts of dialogue and written assignments. In the tra-
ditional class, for example, instructions, concepts, and feedback can be oral-
ly communicated in a relatively short period of time; however, in the online
class, instructors must concisely communicate their thoughts in written words
so every student clearly understands what is being communicated. This is not
always an easy task, depending on the instructor’s writing skills, typing abilities,
and computer capabilities. The time commitment associated with this
form of communication can be quite extensive (Hara & Kling, 2000).
In a recent study, researchers reported that 76% of instructors believed
they spent more time preparing and delivering an online course compared
with a traditional face-to-face course (McKenzie, Mims, Bennett, & Waugh,
2000). These findings are consistent with several other studies that explore
the time commitment associated with teaching from a distance (Clark, 1993;
Schifter, 2000). Much of what is reported is primarily anecdotal and not
empirically based. Few studies have attempted to quantify the amount of
time needed to teach online courses (DiBiase, 2004; Lazarus, 2003), and
even fewer have specifically examined the amount of time it takes to com-
municate and provide feedback to students on assignments.
To counterbalance the anecdotal data analyzed in earlier studies of facul-
ty workload in the online environment (i.e., issues related to time commit-
ment), Lazarus (2003) and DiBiase (2004) kept detailed records to measure
the amount of time the instructor/researcher used to conduct online classes.
While these studies may provide evidence pointing to the most time-con-
suming aspects of teaching online, they do not mention the specific peda-
gogical methods utilized in conducting these time-consuming activities.
That is, what was the nature of the communication? What type of feedback
was provided to students about their discussions or responses to an assign-
ment? A more detailed examination of instructor feedback follows to better
understand its role in promoting student learning.
INSTRUCTOR FEEDBACK
In simplest terms, educational feedback is “any message generated in
response to a learner’s action” (Mason & Bruning, 1999, p. 1). In the tradi-
tional classroom, instructor feedback is relayed to students in a variety of
ways. During in-class discussions, for example, immediate oral feedback is
provided by the instructor and serves to identify misconceptions and incor-
rect responses, or to provide additional information to clarify ideas and con-
cepts. This feedback is most often presented collectively. That is, all students
hear what the instructor is saying. On examinations, instructors can provide
students with correct answers and provide feedback, either verbally or in
writing, explaining or clarifying misunderstandings. Again, the feedback can
be given individually or presented collectively (i.e., the instructor reviews
the exam with the entire class in one session). On assignments, instructors
often correct errors (e.g., spelling or grammatical errors) and/or write notes
or messages to students that provide two types of information: verification
(i.e., whether an answer is correct or incorrect) or elaboration (i.e., provides
cues to guide learners toward a correct answer; Kulhavy & Stock, 1989;
Mason & Bruning, 1999). In most cases, this information is presented at the
individual level, not collectively. Clearly, a large portion of feedback in this
environment is oral, which means the instructor spends less time formulat-
ing written responses. In the online environment, however, feedback is pro-
vided solely in written form (Palloff & Pratt, 1999, 2003).
One might assume, therefore, that preparing this written communication
consumes a large part of the online instructor’s time. However, little
research has empirically explored whether different forms of feedback can
reduce instructor workload and effectively promote student learning.
STUDENT SATISFACTION AND “CONNECTEDNESS”
Several recent studies have explored student perceptions of online learn-
ing, including levels of satisfaction and perceived “desire for connected-
ness” or community (Cashion & Palmieri, 2002; Richardson & Swan, 2003;
Woods, 2002). Empirical evidence suggests that a “high degree” of faculty-
student interaction is important to foster a sense of student-instructor “con-
nectedness” (relationship) and satisfaction.
An electronic search for information related to the concept of “connect-
edness” in the online environment yielded few studies. A European research
partner of the Massachusetts Institute of Technology (MIT) Media Lab,
called the Human Connectedness research group, is currently exploring the
topic of human relationships and how they are mediated by technology
(Media Lab Europe, n.d.). The research group’s mission is to find ways to
foster a sense of presence and awareness that enhances a sense of commu-
nity and togetherness through the use of technology (Media Lab Europe).
The concepts being studied by the Human Connectedness group,
although not specific to online education, are similar to the highly studied
constructs of “social presence,” “interaction,” “community,” and “immedia-
cy behaviors” in the online environment. These concepts and the newly con-
ceived concept of student-instructor connectedness recognized in this study
are all closely related.
SOCIAL PRESENCE
Social presence refers to the “degree of salience of the other person in the
(mediated) interaction and the consequent salience of the interpersonal rela-
tionships” (as cited in Richardson & Swan, 2003, p. 70). In other words,
social presence refers to the degree that individuals perceive others to be real
in the online environment. According to researchers, this perception of
human presence is essential for students and faculty to develop personal
relationships and communities online (Richardson & Swan). Studies indi-
cate that social presence appears to have a positive influence on student sat-
isfaction and performance (Newberry, 2001; Richardson & Swan). Accord-
ing to Hackman and Walker (1990), “social presence is influenced by the
delivery modes utilized for specific communication functions” (p. 198). To
increase social presence, strengthen relationships, and build online commu-
nities, it is imperative for there to be interaction between students and
instructors (Woods & Baker, 2004). Interaction will also lead to positive
communication behaviors such as teacher immediacy (Woods & Baker).
TEACHER IMMEDIACY
Teacher immediacy behaviors enhance closeness by reducing the psy-
chological distance (i.e., perceived distance) between individuals (Hackman
& Walker, 1990). Some of the behaviors used by teachers to produce imme-
diacy and build a sense of psychological closeness include verbal encour-
agement, praising, asking questions, using humor, and self-disclosure
(Hackman & Walker; Woods & Baker, 2004). In addition, Hackman and
Walker found that teachers who engaged in other immediacy behaviors such
as providing individualized feedback on papers and assignments and
encouraging student-instructor communication were viewed more favorably.
Furthermore, Richardson and Swan (2003) reported that students’ over-
all perception of social presence, as measured by a modified version of the
Social Presence Scale, accounted for 35% of the variability in students’
overall satisfaction with the instructor. In this study, however, the social
presence construct was found to be closely related to the construct of
instructor satisfaction and so readers should be cautious in their interpreta-
tion of the results. Qualitative data collected from the open-ended questions
at the end of the survey supported this possibility. In response to questions
related to overall satisfaction, students related their overall satisfaction with
the instructor to the instructor’s involvement with them in terms of guidance
and support with course material, and the feedback they received on class
assignments. This suggests that instructors’ immediacy behaviors and level
of interaction are important in determining students’ level of satisfaction.
INTERACTION
In another study, Vrasidas and McIsaac (1999) examined the nature of
interaction in an online course from the perspectives of the teacher and the
students. The authors concluded that students need continuous, frequent sup-
port, and feedback, especially from instructors. Similarly, the Sloan Consor-
tium (2002), in the document titled Five Pillars of Quality Online Education
stated, “online environments that effectively facilitate high levels of interac-
tion and collaboration among learners typically result in successful online
programs” (p. 5). As instructors communicate with their students, they may
opt to use “collective” feedback (messages addressed and sent to the entire
group) or tailored feedback (individual messages sent to each student).
COMMUNITY
The sense of community is another important factor in online courses, and
is a strong predictor of student satisfaction and learning (Woods & Ebersole,
2003). Research examining online communities has focused on social net-
works and the relationships that occur within these networks (Harris & Muir-
head, 2004). Unfortunately, much of the research examining the concept of
community has occurred in the non-online setting. Nevertheless, the same
qualities of community observed in a traditional setting should be fostered in
the online environment (e.g., interaction, shared interests, social presence, cul-
ture, relationships, and support; Harris & Muirhead). Rovai (2002) developed
an instrument called the Classroom Community Scale to measure sense of
classroom community in an online learning environment. He concluded that
the instrument yielded two subscales: community connectedness and learning.
Following analysis of the concepts of social presence, interaction, teacher
immediacy, and community, the researcher in the current study recognized
the operation of another construct – the concept of connectedness with the
instructor. Within this context, connectedness refers to a person’s sense of
belonging or presence, feelings of support, and level of communica-
tion/interaction with the instructor. Students who perceive a sense of con-
nectedness with their instructor may be more likely to feel satisfied and perform well
in their online courses, but this has yet to be tested empirically. Furthermore,
at this time it is unclear how different forms of instructor feedback influence
students’ perceived connectedness, satisfaction and overall performance.
SUMMARY
Purpose of the Study
The purpose of this comparative study was twofold: (a) to determine if
there were significant differences in students’ academic performance, satis-
faction, and perceived “connectedness” scores when the instructor provided
collective versus tailored feedback; and (b) to determine if the amount of
time necessary to provide personalized feedback differed significantly from
the amount of time needed to disseminate collective feedback.
Sample
Participants who comprised the convenience sample were undergraduate
students enrolled in four different online health education courses. To
achieve an adequate number of participants in each treatment group, stu-
dents from two universities were asked to participate: a university in central
Louisiana and a university in northern Texas. The initial enrollment per class
was approximately 25 to 30 students, with the exception of one course,
which had fewer than 20 students. Due to attrition, the mean number of stu-
dents in each course dropped to 21 students by the end of the semester. A
total of 84 students comprised the final sample.
Methods
Prior to the onset of the study, participants were notified that they had been
chosen to participate in a research study and were given the option to partic-
ipate in the research. To reduce reactivity, the participants were not informed
of the research questions but were informed that they were part of a research
study relating to teaching strategies and e-learning. Since all four courses
were elective courses at both universities, participants were given the option
to choose another online health course if they did not wish to participate. Fol-
lowing informed consent, the participants were randomly assigned to one of
two treatment groups: personalized or collective. Students in the personalized
group received individualized feedback from the instructor on each assign-
ment. Students in the collective group received a collective feedback docu-
ment from the instructor on each assignment as well. The document summa-
rized aspects of the assignment from the various student perspectives and
provided suggestions based on the group’s overall execution of the assigned
task. With the exception of content, each course was designed and taught
using the same instructional design and teaching strategies. The number of
assignments was the same for all four courses. This was done to help equal-
ize the amount of feedback students received in each treatment group.
The instructor used a taxonomy of feedback proposed by Blignaut and
Trollip (2003) to construct the personalized messages (i.e., feedback; see
next paragraph). The purpose of the taxonomy was to ensure that the instruc-
tor was being consistent with the type of feedback she was providing to stu-
dents assigned to the personalized feedback treatment group. To establish
inter-rater reliability, an independent reviewer used the same taxonomy to
code samples of the instructor’s personalized feedback. Again, this was done
to assess whether the instructor was being consistent with the feedback she
was providing to the personalized treatment group. Inter-rater reliability was
established, as the instructor’s and independent reviewer’s coding of the
feedback messages did not differ significantly, and there was a strong corre-
lation between the coders’ classifications (r = .98).
The categories provided by Blignaut and Trollip (2003) are as follows:
1. Corrective Feedback: feedback that corrects the content of a stu-
dent’s answer to an assignment (e.g., While your definition of epi-
demiology is correct, your answer to the second half of the question
was not clear. Please read the question again and provide specific
examples with your explanation that explain why you think epi-
demiology is such an important discipline for health education.)
2. Informative Feedback: feedback that comments on a student’s
answer to an assignment from a “content perspective” (e.g., This is
a good answer – not only did you explain the different levels of pre-
vention, but you provided examples and personal experience to
explain why health behavior might be difficult to change at one
level of prevention versus another.)
3. Socratic Feedback: feedback that asks “reflective questions” about
the student’s answer to an assignment (e.g., Now that you have
explained the Health Field Concept in detail, reflect on your own
experiences: which element of this concept do you feel is most relat-
ed to a person’s health status? Why?)
Collective feedback was constructed after the instructor read all the stu-
dents’ responses to an assignment. In the collective feedback document, the
instructor discussed each aspect of the assignment – pointing out responses
that were well written and informative, identifying and correcting common
errors, clarifying misunderstandings, and making suggestions for improving
responses on future assignments. The document was usually 1 1/2 pages to
2 pages in length and single-spaced. Collective feedback was posted to the
discussion board as well as emailed to each student.
The instructor used self-monitoring to document the amount of time
involved in each grading session. The start and stop times were recorded on the
students’ assignments and then transferred to a logbook for final computation.
Each grading session consisted of the amount of time it took to (a) grade assign-
ments, (b) prepare feedback, and (c) send feedback. However, only the time it
took to prepare and disseminate feedback was used in the final analysis.
At the completion of the course, students were asked to participate in a
survey designed to assess their level of satisfaction and perceived sense of
connectedness with the instructor. The survey also contained four open-ended
questions, which asked participants to describe aspects of the course that may
have contributed to their sense of belonging or connectedness with the
instructor, their overall satisfaction with the course and the type of feedback
they received on course assignments, and their thoughts about factors outside
of the course that may have affected their overall performance in the course.
Due to the nature of the study (i.e., anonymous reporting), individual grades
were not obtained from participants. To report overall performance, the aver-
age percentage for each course was calculated using the final scores from
each of the 84 students participating in the study. The corresponding course
average was then entered into SPSS for each of the 71 participants who took the survey.
The final survey data were analyzed using t-tests for independent samples
to compare the mean scores on student overall performance, satisfaction, and
perceived connectedness to the instructor between the two treatment groups.
The t-test for independent samples was the most appropriate statistical test to
use because there were only two treatment groups and the participants were
only tested once on each of the aforementioned variables. The open-ended
questions were categorized based on common themes that emerged during
analysis. The researcher explored these themes in relation to the research
questions posed in the study. Finally, the researcher examined the amount of
time it took the instructor to provide each type of feedback, as documented
in the instructor’s log sheets. Data were then tabulated to
determine the mean hours it took to provide each form of feedback.
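The independent-samples t-test described above can be sketched in a few lines. The score lists here are hypothetical placeholders, not the study’s data, and the sketch uses Welch’s correction (unequal variances), consistent with the fractional degrees of freedom the study reports.

```python
# Independent-samples t-test sketch with Welch's correction (equal_var=False),
# the form SPSS reports when equal variances are not assumed.
# The final-score proportions below are hypothetical, not the study's data.
from scipy import stats

personalized = [0.88, 0.84, 0.86, 0.83, 0.87, 0.85, 0.84, 0.86]
collective   = [0.79, 0.74, 0.77, 0.80, 0.75, 0.76, 0.78, 0.77]

t, p = stats.ttest_ind(personalized, collective, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")  # p < .05 would indicate a significant difference
```

Each participant contributes one score and belongs to exactly one group, which is why the independent-samples form of the test, rather than a paired test, is appropriate here.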
FINDINGS
Results of Quantitative Analysis
A summary of statistical findings is as follows:
1. Students who received personalized feedback (M = .85, SD = .02)
on average performed better than students who received collective
feedback (M = .77, SD = .03). The test was significant, t(58.81) =
12.03, p < .001 (Table 1).
2. Students who received personalized feedback (M = .17, SD = .63)
on average were more satisfied with the course and the feedback
they received than students who received collective feedback (M =
-.19, SD = .81). The test was significant, t(69) = 2.12, p = .04
(Table 2).
3. Students who received personalized feedback on average did not
perceive themselves to be more connected to their instructor than
students who received collective feedback. (See Table 3)
Table 1
Average Performance Scores for Survey Items Between Individuals Who
Received Personalized (N = 39) or Collective (N = 32) Feedback

Survey item              t        p     Mean   SD
Overall Performance      12.03*   .00
  Personalized                          .85    .02
  Collective                            .77    .03

Note: *p < .001
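Because Table 1 reports only summary statistics, readers can approximately re-derive the Welch test from those values with SciPy. Since the means and standard deviations are rounded to two decimals, the recomputed statistic will not exactly match the published t(58.81) = 12.03; the point of the sketch is the method, not the exact numbers.

```python
# Approximate re-derivation of the Table 1 Welch t-test from the
# published (rounded) summary statistics.
from scipy.stats import ttest_ind_from_stats

t, p = ttest_ind_from_stats(
    mean1=0.85, std1=0.02, nobs1=39,   # personalized group
    mean2=0.77, std2=0.03, nobs2=32,   # collective group
    equal_var=False,                   # Welch's correction
)
print(f"t = {t:.2f}, p = {p:.2e}")  # close to, but not identical to, the published value
```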
4. The final research question discussed in this article relates to the
amount of time it took the instructor to prepare the different forms
of feedback for each treatment group. Based on the findings, collective
feedback was the least time-consuming for the instructor to prepare.
On average, it took three hours and 15 minutes to construct and send
personalized feedback (per course) versus one hour and 43 minutes to
prepare and disseminate collective feedback (Figure 1).

Table 2
Average Overall Satisfaction Scores for Survey Items Between Individuals
Who Received Personalized (N = 39) or Collective (N = 32) Feedback

Survey item             t       p     Mean   SD
Overall Satisfaction    2.12*   .04
  Personalized                        .17    .63
  Collective                         -.19    .81

Note: *p < .05

Table 3
Average Connectedness Scores for Survey Items Between Individuals
Who Received Personalized (N = 39) or Collective (N = 31) Feedback

Survey item      t      p     Mean   SD
Connectedness    1.46   .15
  Personalized                .12    .72
  Collective                 -.15    .82

Figure 1. Mean hours to prepare and disseminate feedback per treatment group

Results of Qualitative Analysis

Students were also asked to provide qualitative feedback in relation to the
course and instruction. When students were asked to describe the
aspects of the course that contributed to their satisfaction, students from the
personalized treatment group were more inclined to describe aspects of the
course that led to their satisfaction compared to students from the collective
feedback group. Unexpectedly, both groups commented on the two-week
module design of the course rather than the feedback they received on
assignments as a factor leading to their satisfaction. In fact, only two stu-
dents made reference to feedback in their response to this question.
When students were asked about their sense of connectedness with the
instructor or feelings of isolation, students from the personalized group report-
ed only factors related to connectedness; whereas students from the collective
group reported aspects that led both to feelings of connectedness and to isola-
tion. Communication (through chats, email, message forum, and phone) was
the most common response category related to connectedness. Apparently, stu-
dents from both treatment groups associated feelings of connectedness with the
type and frequency of communication they experienced with the instructor.
Students were also asked to comment on the frequency and quality of the
feedback they received from the instructor. Forty-four participants answered
this question: 19 from the personalized feedback group and 25 from the col-
lective feedback group. Based on the results, twice as many students (from
both treatment groups) stated they received prompt feedback compared to
students who reported slow feedback. Also, approximately 30% (n = 13) of
the students from both groups rated the quality of instructor feedback as
“high quality” or “good.” However, more students from the collective feed-
back group (48% vs 16%) reported dissatisfaction with the frequency and
quality of feedback they received (Table 4).
Table 4
Comments on Frequency and Quality of Instructor Feedback: Major
Categories and Frequencies of Responses per Treatment Group (N = 44)

Category                                                    Personalized Group   Collective Group
Frequency of feedback:
  Prompt                                                    5                    3
  Slow                                                      3                    1
Quality of feedback:
  High quality/good                                         7                    6
  Did not give adequate justification for point deductions  0                    3
  Did not give personal/individual feedback                 0                    2
  Liked receiving feedback via email                        0                    2

Note: Only items with more than one response were included in the table. Some students had more
than one response.
Finally, students were also asked to describe factors outside of the course
that may have affected their overall performance. Nearly 30% (n = 13) of the
students who responded to this question stated there were no factors outside
of the course that affected their performance. For the remaining majority,
however, family and work commitments were most commonly reported.
Conclusion
Research examining the interaction between student and instructor in
online learning is still in its infancy. Based on the findings of this study, stu-
dents who received personalized feedback from the instructor on assign-
ments were significantly more satisfied and performed academically better
than students who received collective feedback. The quantitative findings
suggest that students perform better and are more satisfied with their online
course when they receive personalized attention (in the form of feedback)
from the instructor. However, qualitatively, the students related their level of
satisfaction to the design of the course and the availability (i.e., presence) of
the instructor to respond quickly to questions and concerns about the class
rather than to the feedback they received on their assignments. This infor-
mation would not have been available to the researcher had the open-ended
questions not been included on the survey. Therefore, this mixed-method
data collection approach was effective in capturing this phenomenon. This
study further underscores the benefit of using both qualitative and quantita-
tive data collecting techniques to develop a more “complete” picture of the
factors that affect student satisfaction in online courses.
Although the concept of connectedness did not emerge as a discrete vari-
able in this study, it is worthy of further attention as a variable in future stud-
ies that relate to student-faculty relationships. Palloff and Pratt (2003)
noted that student-faculty contact is one of the most important factors in
determining student motivation and involvement. Future studies should clar-
ify what constitutes “connectedness,” as this term is operationalized differ-
ently in existing studies.
This study supports the need for continued research in e-learning and
serves as a starting point for future studies that investigate how student-instructor interaction impacts academic success and satisfaction in the
online classroom. With time and continued research, the body of knowledge
pertaining to online education will grow, as will our understanding of what
constitutes quality online teaching.
References
Blignaut, S., & Trollip, S. (2003, June). A taxonomy for faculty participation in asynchronous
online discussions [Electronic version]. Proceedings of the World Conference on Educational
Multimedia, Hypermedia and Telecommunications 2003 (pp. 2043-2050). Honolulu, HI.
474 Gallien and Oomen-Early
Cashion, J., & Palmieri, P. (2002). The secret is the teacher: The learners’ view of online
learning. Leabrook, Australia: National Centre for Vocational Education Research. (ERIC Doc-
ument Reproduction Service No. ED475001)
Clark, T. (1993). Attitudes of higher education faculty toward distance education: A national
survey. The American Journal of Distance Education, 7, 19-33.
DeBard, R., & Guidera, S. (2000). Adapting asynchronous communications to meet the seven
principles. Journal of Educational Technology Systems, 28, 219-230.
DiBiase, D. (2004, April). The impact of increasing enrollment on faculty workload and student
satisfaction [Electronic Version]. Journal of Asynchronous Learning Networks, 8, 45-60.
Hackman, M. Z., & Walker, K. B. (1990). Instructional communication in the televised
classroom: The effects of system design and teacher immediacy on student learning and sat-
isfaction. Communication Education, 39, 196-206.
Hara, N., & Kling, R. (2000). Student distress in a web-based distance education course
[Electronic version]. Information, Communication & Society, 3, 557-579.
Harris, R., & Muirhead, A. (2004, April 7). Online learning community research – Some
influences of theory and methods. Paper presented at the Networked Learning Conference 2004,
Lancaster University, Lancaster, UK. Paper retrieved June 7, 2004, from
http://www.networkedlearningconference.org.uk/past/nlc2004/proceedings/symposia/symp
osium7/harris_muirhead.htm
Kulhavy, R. W., & Stock, W. A. (1989). Feedback in written instruction: The place of response
certitude. Educational Psychology Review, 1, 279-308.
Lazarus, B. D. (2003, September). Teaching courses online: How much time does it take?
[Electronic version]. Journal of Asynchronous Learning Networks, 7(3), 47-54.
Mason, B. J., & Bruning, R. (1999). Providing feedback in computer-based instruction: What the
research tells us. Retrieved March 5, 2005, from University of Nebraska-Lincoln, Center for
Instructional Innovation Web Site: http://dwb.unl.edu/Edit/MB/MasonBruning.html
McKenzie, B., Mims, N., Bennett, E., & Waugh, M. (2000, Winter). Needs, concerns, and
practices of online instructors. Online Journal of Distance Learning Administration, III.
Retrieved October 4, 2003, from http://www.westga.edu/~distance/ojdla/browsearticles.php
Media Lab Europe. (n.d.). Human connectedness. Retrieved June 2, 2004, from
http://web.media.mit.edu/~stefan/hc/mission/
Newberry, B. (2001). Raising student social presence in online classes. (ERIC Document
Reproduction Service No. ED466611)
Palloff, R. M., & Pratt, K. (1999). Building learning communities in cyberspace: Effective
strategies for the online classroom. San Francisco: Jossey-Bass.
Palloff, R. M., & Pratt, K. (2003). The virtual student: A profile and guide to working with online
learners. San Francisco: Jossey-Bass.
Picciano, A. (2002). Beyond student perceptions: Issues of interaction, presence, and
performance in an online course [Electronic version]. Journal of Asynchronous Learning Net-
works, 6(1), 21-40.
Richardson, J. C., & Swan, K. (2003). Examining social presence in online courses in relation to
students’ perceived learning and satisfaction [Electronic version]. Journal of Asynchronous
Learning Networks, 7, 68-88.
Rovai, A. (2002). Development of an instrument to measure classroom community. The Internet
and Higher Education, 5, 197-211.
Schifter, C. C. (2000). Faculty participation in asynchronous learning networks: A case study of
motivating and inhibiting factors [Electronic version]. Journal of Asynchronous Learning Net-
works, 4, 10-23.
Sloan Consortium. (2002). The Sloan Consortium report to the nation: Five pillars of quality online
education. Retrieved February 6, 2005, from http://www.sloan-c.org/effective/pillarreport1.pdf
Vrasidas, C., & McIsaac, M. S. (1999). Factors influencing interaction in an online course.
Journal of Distance Education, 13, 22-36.
Woods, R. H. (2002). How much communication is enough in online courses? Exploring the
relationship between frequency of instructor-initiated personal email and learners’ percep-
tions of and participation in online learning. International Journal of Instructional Media, 29,
377-394.
Woods, R. H., & Baker, J. D. (2004). Interaction and immediacy in online learning. International
Review of Research in Open and Distance Learning, 5. Retrieved March 21, 2005, from
http://www.irrodl.org/index.php/irrodl/article/view/186/801
Woods, R., & Ebersole, S. (2003). Using non-subject-matter-specific discussion boards to build
connectedness in online learning. The American Journal of Distance Education, 17, 99-117.
... Virtual office hours help instructors and students follow up on in-class desk crits to deepen learning and maintain momentum. Students feel socially present when they receive regular, prompt, and personalized instructor feedback that carries conversations forward, deepens learning, and maintains momentum (Dyer et al., 2018; Gallien & Oomen-Early, 2008). ...
... Lack of interaction often leads to poor student engagement and low student satisfaction (Kuo et al., 2014). Interaction in online learning can often be translated into student engagement in learning activities, which has a positive impact on satisfaction with learning (Gallien & Oomen-Early, 2008; Kim & Kim, 2021). In this study, international students in Chinese universities experienced a higher level of academic engagement after receiving timely responses from teachers and classmates. ...
Article
The aim of this study was to investigate the effect of interactive immediacy on online learning satisfaction, and the mediating effect of learning interest and academic engagement on the relationship between interactive immediacy and online learning satisfaction. A total of 2,221 international students in Chinese universities participated in the questionnaire survey. Coefficient omega (ω) and confirmatory factor analysis (CFA) were used to test reliability and validity. Structural equation modeling (SEM) was used to analyze the effects of interactive immediacy, learning interest, and academic engagement on the online learning satisfaction of international students in Chinese universities, as well as the mediating effects of learning interest and academic engagement. The results showed that interactive immediacy did not directly influence the online learning satisfaction of international students in Chinese universities. Learning interest and academic engagement played a complete mediating role between interactive immediacy and online learning satisfaction. Meanwhile, interactive immediacy not only affected the online learning satisfaction of international students in Chinese universities through learning interest and academic engagement respectively, but also indirectly affected online learning satisfaction through the chain mediating effect of learning interest and academic engagement. Bootstrap results showed that the mediating effects in the model were significant. The findings of this study explored the underlying mechanism of international students’ online learning satisfaction in Chinese universities, providing an empirical basis for universities and teachers to improve the effectiveness of online teaching and to integrate online teaching with traditional classroom teaching.
... The researcher aims to apply audio feedback to students’ online assessments to determine its effects on their academic motivation. Students’ satisfaction, performance, and feeling of community in an online setup are all influenced by their contacts with their instructors, including the sort and frequency of feedback they get on assignments and course content (Gallien & Oomen-Early, 2008). The lack of feedback can increase students’ learning gaps, as most subjects limit the learning goals for a specific grade level to what the DepEd recommends using the Most Essential Learning Competencies (MELCs). ...
Article
Much research has revealed that feedback has a major impact on students’ learning and is the single most effective moderator of achievement enhancement. Hence, this study aimed to determine the effects of giving audio feedback in an online assessment on students’ academic motivation. The participants in this study were 49 senior high school students who took the same online assessments. A mixed-methods research design was utilized in this study. The results showed that there was no significant difference in academic motivation between the students before and after applying the audio feedback in their online assessments, and students were indicative of high academic motivation throughout the experiment. However, the students’ experiences revealed the following factors: (1) Student Emotional Engagement; (2) Comprehensive Understanding through Audio Feedback; (3) Students’ Receptiveness in Audio Feedback; (4) Utilization of Feedback in Improving Students’ Work; and (5) Improved Students’ Motivation. Overall, even though the audio feedback did not greatly affect the students’ academic motivation, it still created a better experience in providing feedback to the students, as it is used for their improvement on their online assessments. This study has broad implications for the online learning environment and creates new paths that teachers may take in giving feedback on their online assessments to make it personalized and comprehensible for the improvement of students’ work.
... For example, the study results showed that the students who received personal feedback were more satisfied and performed better than students who received only group feedback. [41] In the present study, students were more satisfied with receiving written feedback than verbal feedback. According to the students, the written feedback they received was more consistent with the standards of providing feedback, and this should have led to their greater satisfaction with the written feedback. ...
Article
BACKGROUND Although feedback has a major impact on teaching and learning, the type and way of providing it can have diverse effects. The purpose of this study was to compare the effect of two types of feedback, verbal and written, on nursing students’ performance quality and satisfaction. MATERIALS AND METHODS This experimental study, carried out in 2019, had a crossover design. The participants included 30 bachelor of science in nursing students at Shushtar Faculty of Medical Sciences, who were assigned to two groups of 15. The first group received verbal feedback first and then written, during basic nursing skills training. The second group received written feedback first and then verbal. At the end of each half of the training sessions, students’ performance and their satisfaction were assessed by researcher-made observational checklists and a satisfaction questionnaire, respectively. Data were analyzed using SPSS 16 software with Chi-square and paired t tests. A significance level of < 0.01 was considered. RESULTS The mean scores of students’ performance in the verbal and written feedback stages were 15.7 ± 2.5 and 17.7 ± 2.3, respectively. Written feedback was more effective on students’ performance (P = 0.001), and students were more satisfied with this type of feedback (P = 0.001). CONCLUSIONS According to the present study, using written feedback can improve the quality of students’ performance and is associated with high satisfaction. Thus, professors should pay more attention to feedback in their educational processes and use different types of feedback, especially written feedback, in line with the context.
... Effective feedback is a fundamental skill for faculty, as it guides student development (Leibold & Schwarz, 2015). Students who received personalized feedback had greater academic improvement than students who did not, and reported feeling more fulfilled in their learning experience (Gallien & Oomen-Early, 2008). Implementing feedback may be an important indicator of the instructor’s involvement in the course, as students look at feedback as a gauge of instructor involvement, which leads students to believe the learning experience to be more successful (Garrett Dikkers, Whiteside, & Lewis, 2013). ...
Article
While potential teaching activities in the online classroom are unlimited, an instructor’s teaching time is not. As such, it is essential that online instructors prioritize limited time to instructional strategies that have the greatest impact on student learning. A survey of 413 faculty and 2386 students examined faculty and student perceptions about instructional components or strategies that have greatest impact on student learning in the online classroom. Findings revealed significant differences in faculty and student perceptions with faculty giving the highest value ratings to non-instructor generated content and students prioritizing text-based instructional content (regardless of source). Overall, faculty tended to place more value on instructional components compared to students. Students rated faculty interaction and feedback as the most valuable component of their online learning experience. Findings explore how institutions can utilize teaching supplements to support faculty’s desire to provide content so that instructional time can focus on interaction and feedback.
... Many open online courses use typical automated assessment methods, which makes it impractical to assess the open-ended, skill-based work that is integral to fields of creative education like design [3]. Donald A. Schön [4], through his 'reflection-in-action' theory, established that the studio method is the standard classroom model for design education [5] [6]. The design studio plays a key pedagogical role where one can view, examine and critique others' work [7]. ...
Article
Peer and self-assessment open opportunities to scale assessments in online classrooms. This article reports our experiences of using AsPeer, a peer assessment system, with two iterations of a university online class. We observed that peer grades correlated highly with staff-assigned grades. It was recorded that the peer grade of all student submissions within range of the instructor grade averaged 21.0%, and that within the next two ranges averaged 49.0%. We performed three experiments to improve the accuracy of peer grading. First, we observed grading bias and introduced a data-driven feedback mechanism to inform peers of it. Students aided by feedback were mindful and performed grading with better accuracy. Second, we observed that the rubric lacked efficiency in translating intent to students. Simplified guiding questions improved assessment accuracy for 89% of students. Third, we encouraged peers to provide personalized qualitative feedback along with ratings. We provided them with feedback snippets that addressed common issues. 64% of students responded that the snippets helped them to look critically at submissions before rating.
Chapter
Universal design for learning (UDL) has become the most widely used framework for creating access to education for students of all abilities, especially in kindergarten through 12th-grade school systems. Research and utilization have begun to gain attention nationwide at the undergraduate level on higher education campuses. The current research on undergraduate education is promising, but there needs to be more examination of UDL at the graduate level, specifically in graduate online education. This case study discusses revisions to a graduate online course that fit within the UDL guidelines. The author discusses teaching strategies implemented in the course and supports their effectiveness through confidential student evaluations and comments. One purpose of this case study is to provide a list of strategies that other instructors can quickly adopt in their courses. A second purpose is to provide support for more formal research into the utilization of UDL in graduate online education.
Article
Connecting with students has been shown to increase motivation, satisfaction, and perceived learning while decreasing anxiety. Connecting with students in an online or distance education environment can prove difficult. This study examined perceptions of higher education students who were enrolled in various modalities (e.g., hybrid, online asynchronous, and synchronous) during the COVID-19 pandemic in the United States of America. The study found that a high perception of instructor connectedness in the asynchronous classes resulted in lower anxiety levels for students. Four themes emerged from the results: the importance of instructor empathy; sociability; feedback; and course organisation. These helped students to connect to their instructor—thus reducing anxiety.
Article
This study examines the nature of interaction in an online course from both teacher and student perspectives. Major components of a conceptual framework to identify interaction were identified. Data analysis suggested that the structure of the course, class size, feedback, and prior experience with computer‐mediated communication all influenced interaction. Results of the study reconceptualize interaction as a theoretical construct and emphasize the importance of socially constructed meanings from the participants’ perspectives.
Article
Two potentially conflicting forces are currently impacting higher education. On the one hand, there is the relentless advancement of educational technology as a means of delivering, enhancing, and otherwise becoming an integral part of the teaching-learning process. On the other, there is continuing concern over the economy and quality of education being provided to college students. We propose that asynchronous communication through the use of e-mail, course Web pages, and the Internet can be adapted in such a way as to not only meet the seven principles of effective teaching but to enhance student outcomes. Without sensitive adaptation, however, such computer technology can actually detract from the educational process. Research findings and the resulting strategies for successful implementation of asynchronous communication in the classroom will be provided.
Article
Detailed daily records of instructor effort in an established asynchronous online course over a three and one-half year period are analyzed. Student satisfaction data acquired from course evaluation surveys over the same period are also examined. In response to a three-fold increase in enrollment over the period, instructors realized a twelve percent gain in efficiency. Contrary to expectations, a modest economy of scale was achieved with no discernible decrease in student satisfaction.
Article
The authors employed multiple data-collection procedures to determine which of four personal (non-subject-matter-specific) discussion folders would be used most frequently by online learners in two online courses, and which would be rated more favorably and considered more effective than other folders. The folders were studied for the way in which they (1) helped build a positive faculty-student relationship, (2) helped build positive relationships with fellow students, (3) helped foster a greater sense of community online, and (4) contributed to a higher degree of satisfaction with the overall learning experience. Overall, the use of the four personal discussion folders contributed most to building a more positive faculty-student relationship, followed by a greater sense of community.
Article
The present study was designed to investigate the effects of conveyance system design and social presence, in the form of teacher immediacy behavior, on perceived student learning and satisfaction in the televised classroom. Results indicate that system design and teacher immediacy behavior strongly impact student learning and satisfaction. System variables such as interactivity and clear audio and video transmission positively influenced perceived learning and satisfaction. Further, instructors who engaged in immediate behaviors such as encouraging involvement, offering individual feedback, maintaining relaxed body posture and using vocal variety were viewed more favorably.
Article
While the delivery of on-line instructional courses in higher education institutions is flourishing, it is the faculty who play the key role in successful implementation (Betts, 1998; Rockwell, Schauer, Fritz, & Marx, 1999; Willis, 1994; Wilson, 1998). Limited research has shown that a number of circumstances influence whether or not faculty choose to teach on-line. Since faculty are pivotal to the success of on-line instruction, this study explored their backgrounds, concerns, and on-line teaching practices. The information provided will update decision-makers on the current needs and concerns of on-line instructors so that an effective distance-learning program can be fostered. The purpose of the study was to identify: (a) the factors that influence faculty participation in distance education; (b) the hours of training received prior to delivering an on-line class; (c) the tools used in the on-line class; (d) the percent of class time on-line during the first on-line class and in fall 1999; (e) the number of hours spent in preparation each week for on-line coursework; (f) the number of hours spent interacting with students on-line each week; (g) the time spent preparing and delivering an on-line course compared to face-to-face courses; (h) how much more time, if any, was spent preparing and delivering an on-line course each week compared to face-to-face courses; (i) the optimal on-line class size; (j) the number of times instructors met face-to-face with students and whether these meetings were helpful; (k) whether both face-to-face and on-line classes were taught in the same term and which was preferred; and (l) what institutions can do to further assist in the effective delivery of on-line courses.
Article
Studies show that temporal factors like workload and lack of release time inhibit faculty participation in developing and teaching online courses; however, few studies exist to gauge the time commitment. This longitudinal case study, presented at the Seventh Annual Sloan-C International Conference on ALN, examined the amount of time needed to teach three asynchronous online courses at The University of Michigan-Dearborn from Winter 1999 through Winter 2000. Twenty-five students were enrolled in each course. Self-monitoring was used to measure the amount of time required to complete the following activities: 1) reading and responding to emails; 2) reading, participating in, and grading 10 online discussions; and 3) grading 15 assignments. Using a stopwatch, the investigator timed and recorded the number of minutes needed for each activity. Also, all messages and assignments were archived and frequency counts were recorded. The weekly, mean number of minutes and assignments was entered on line graphs for analysis. The data showed that teaching each online course required 3 to 7 hours per week, with the greatest number of emails and amount of time required during the first and last 2-weeks of the semesters. Participation in and grading of the discussions took the greatest amount of time and remained steady across the semester. However unlike many live courses, the students participated more in the discussions than the instructor did. The number of assignments that were submitted each week steadily increased over each semester. This case study indicates that the time needed to teach online courses falls within the range of reasonable expectations for teaching either live or online courses and represents the beginning of this area of inquiry. Consequently, additional studies are needed with a variety of instructors across a variety of courses and disciplines to further pinpoint faculty time commitment.