Measuring Service Quality:
SERVQUAL vs. SERVPERF Scales
Sanjay K Jain and Garima Gupta

Executive Summary

Quality has come to be recognized as a strategic tool for attaining operational efficiency
and improved business performance. This is true for both the goods and services sectors.
However, the problem with management of service quality in service firms is that quality
is not easily identifiable and measurable due to inherent characteristics of services which
make them different from goods. Various definitions of the term ‘service quality’ have
been proposed in the past and, based on different definitions, different scales for
measuring service quality have been put forward. SERVQUAL and SERVPERF constitute
two major service quality measurement scales. Consensus, however, continues to elude
researchers as to which one is superior.
An ideal service quality scale is one that is not only psychometrically sound but is also
diagnostically robust enough to provide insights to the managers for corrective actions
in the event of quality shortfalls. Empirical studies evaluating validity, reliability, and
methodological soundness of service quality scales clearly point to the superiority of the
SERVPERF scale. The diagnostic ability of the scales, however, has not been explicitly
explicated and empirically verified in the past.
The present study aims at filling this void in service quality literature. It assesses the
diagnostic power of the two service quality scales. Validity and methodological
soundness of these scales have also been probed in the Indian context — an aspect which
has so far remained neglected due to preoccupation of the past studies with service
industries in the developed world.
Using data collected through a survey of consumers of fast food restaurants in Delhi,
the study finds the SERVPERF scale to provide a more convergent and discriminant valid
explanation of the service quality construct. However, the scale is found deficient in
its diagnostic power. It is the SERVQUAL scale which outperforms the SERVPERF scale
by virtue of possessing higher diagnostic power to pinpoint areas for managerial
interventions in the event of service quality shortfalls.
The major managerial implications of the study are:
• Because of its psychometric soundness and greater instrument parsimoniousness,
one should employ the SERVPERF scale for assessing overall service quality of a
firm. The SERVPERF scale should also be the preferred research instrument when
one is interested in undertaking service quality comparisons across service
industries.
• On the other hand, when the research objective is to identify areas relating to
service quality shortfalls for possible intervention by the managers, the SERVQUAL
scale needs to be preferred because of its superior diagnostic power.
However, one serious problem with the SERVQUAL scale is that it entails a gigantic
data collection task. Employing a lengthy questionnaire, one is required to collect data
about consumers’ expectations as well as perceptions of a firm’s performance on each
of the 22 service quality scale attributes.
Addition of importance weights can further add to the diagnostic power of the
SERVQUAL scale, but the choice needs to be weighed against the additional task of data
collection. Collecting data on importance scores relating to each of the 22 service attributes
is indeed a major deterrent. However, alternative, less tedious approaches, discussed to-
wards the end of the paper, can be employed by the researchers to obviate the data col-
lection task.
KEY WORDS
Service Quality
Measurement of Service Quality
Service Quality Scale
Scale Validity and Reliability
Diagnostic Ability of Scale
Jain, Sanjay K. & Gupta, Garima (2004), Measuring service quality: SERVQUAL vs. SERVPERF scales, Vikalpa: The Journal of Decision Makers,
29(2), 25-37.
Quality has come to be recognized as a strategic
tool for attaining operational efficiency and
improved business performance (Anderson and
Zeithaml, 1984; Babakus and Boller, 1992; Garvin, 1983;
Phillips, Chang and Buzzell, 1983). This is true for the
services sector too. Several authors have discussed the
unique importance of quality to service firms (e.g.,
Normann, 1984; Shaw, 1978) and have demonstrated its
positive relationship with profits, increased market share,
return on investment, customer satisfaction, and future
purchase intentions (Anderson, Fornell and Lehmann
1994; Boulding et al., 1993; Buzzell and Gale, 1987; Rust
and Oliver, 1994). One obvious conclusion of these studies
is that firms with superior quality products outperform
those marketing inferior quality products.
Notwithstanding the recognized importance of
service quality, there have been methodological issues
and application problems with regard to its operation-
alization. Quality in the context of service industries
has been conceptualized differently and based on dif-
ferent conceptualizations, alternative scales have been
proposed for service quality measurement (see, for
instance, Brady, Cronin and Brand, 2002; Cronin and
Taylor, 1992, 1994; Dabholkar, Shepherd and Thorpe,
2000; Parasuraman, Zeithaml and Berry, 1985, 1988).
Despite considerable work undertaken in the area, there
is no consensus yet as to which one of the measurement
scales is robust enough for measuring and comparing
service quality. One major problem with past studies
has been their preoccupation with assessing psychometric and methodological soundness
of service scales, that too in the context of service industries in the developed
countries. Virtually no empirical efforts have been made to evaluate the diagnostic ability of the
scales in providing managerial insights for corrective
actions in the event of quality shortfalls. Furthermore,
little work has been done to examine the applicability
of these scales to the service industries in developing
countries.
This paper, therefore, is an attempt to fill this existing
void in the services quality literature. Based on a survey
of consumers of fast food restaurants in Delhi, this paper
assesses the diagnostic usefulness as well as the psycho-
metric and methodological soundness of the two widely
advocated service quality scales, viz., SERVQUAL and
SERVPERF.
SERVICE QUALITY: CONCEPTUALIZATION
AND OPERATIONALIZATION
Quality has been defined differently by different au-
thors. Some prominent definitions include ‘conformance
to requirements’ (Crosby, 1984), ‘fitness for use’ (Juran,
1988) or ‘one that satisfies the customer’ (Eiglier and
Langeard, 1987). As per the Japanese production phi-
losophy, quality implies ‘zero defects’ in the firm’s
offerings.
Though initial efforts in defining and measuring
service quality emanated largely from the goods sector,
a solid foundation for research work in the area was laid
down in the mid-eighties by Parasuraman, Zeithaml and
Berry (1985). They were amongst the earliest researchers
to emphatically point out that the concept of quality
prevalent in the goods sector is not extendable to the
services sector. Being inherently and essentially intan-
gible, heterogeneous, perishable, and entailing simulta-
neity and inseparability of production and consump-
tion, services require a distinct framework for quality
explication and measurement. As against the goods sector
where tangible cues exist to enable consumers to eva-
luate product quality, quality in the service context is
explicated in terms of parameters that largely come
under the domain of ‘experience’ and ‘credence’ prop-
erties and are as such difficult to measure and evaluate
(Parasuraman, Zeithaml and Berry, 1985; Zeithaml and
Bitner, 2001).
One major contribution of Parasuraman, Zeithaml
and Berry (1988) was to provide a terse definition of
service quality. They defined service quality as ‘a global
judgment, or attitude, relating to the superiority of the
service’, and explicated it as involving evaluations of the
outcome (i.e., what the customer actually receives from
service) and process of service act (i.e., the manner in which
service is delivered). In line with the propositions put
forward by Gronroos (1982) and Smith and Houston
(1982), Parasuraman, Zeithaml and Berry (1985, 1988)
posited and operationalized service quality as a differ-
ence between consumer expectations of ‘what they want’
and their perceptions of ‘what they get.’ Based on this
conceptualization and operationalization, they proposed
a service quality measurement scale called ‘SERVQUAL.’
The SERVQUAL scale constitutes an important land-
mark in the service quality literature and has been
extensively applied in different service settings.
Over time, a few variants of the scale have also been
proposed. The ‘SERVPERF’ scale is one such scale that
has been put forward by Cronin and Taylor (1992) in
the early nineties. Numerous studies have been under-
taken to assess the relative superiority of the two scales, but consensus continues to
elude researchers as to which one is a better
scale. The following two sections provide an overview
of the operationalization and methodological issues
concerning these two scales.
SERVQUAL Scale
The foundation for the SERVQUAL scale is the gap
model proposed by Parasuraman, Zeithaml and Berry
(1985, 1988). With roots in disconfirmation paradigm,1
the gap model maintains that satisfaction is related to
the size and direction of disconfirmation of a person’s
experience vis-à-vis his/her initial expectations (Church-
ill and Surprenant, 1982; Parasuraman, Zeithaml and
Berry, 1985; Smith and Houston, 1982). As a gap or
difference between customer ‘expectations’ and ‘percep-
tions,’ service quality is viewed as lying along a con-
tinuum ranging from ‘ideal quality’ to ‘totally unaccept-
able quality,’ with some points along the continuum
representing satisfactory quality. Parasuraman, Zeith-
aml and Berry (1988) held that when perceived or experienced service is less than
expected service, it implies less than satisfactory service quality. But, when perceived
service exceeds expected service, the obvious inference is that service quality is more
than satisfactory.
Parasuraman, Zeithaml and Berry (1988) posited that
while a negative discrepancy between perceptions and
expectations — a ‘performance-gap’ as they call it —
causes dissatisfaction, a positive discrepancy leads to
consumer delight.
Based on their empirical work, they identified a set
of 22 variables/items tapping five different dimensions
of service quality construct.2 Since they operationalized
service quality as being a gap between customer’s ex-
pectations and perceptions of performance on these
variables, their service quality measurement scale is
comprised of a total of 44 items (22 for expectations and
22 for perceptions). Customers’ responses to their ex-
pectations and perceptions are obtained on a 7-point
Likert scale and are compared to arrive at (P-E) gap
scores. The higher (more positive) the perception minus
expectation score, the higher is perceived to be the level
of service quality. In an equation form, their operation-
alization of service quality can be expressed as follows:
$SQ_i = \sum_{j=1}^{k} (P_{ij} - E_{ij})$                (1)

where: SQ_i = perceived service quality of individual ‘i’
       k    = number of service attributes/items
       P_ij = perception of individual ‘i’ with respect to performance of a service firm on attribute ‘j’
       E_ij = service quality expectation for attribute ‘j’ that is the relevant norm for individual ‘i’
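As an illustration of equation (1), the following minimal sketch computes per-respondent SERVQUAL scores from hypothetical perception and expectation matrices; the array names, sample size, and 5-point responses are illustrative assumptions, not data from the study:

```python
import numpy as np

# Hypothetical survey data: rows = respondents, columns = the 22 scale items.
# Responses are on a 1-5 Likert scale here; the original SERVQUAL instrument uses 1-7.
rng = np.random.default_rng(0)
perceptions = rng.integers(1, 6, size=(200, 22)).astype(float)   # P_ij
expectations = rng.integers(1, 6, size=(200, 22)).astype(float)  # E_ij

# Equation (1): SQ_i = sum_j (P_ij - E_ij), one gap-based score per respondent.
servqual_scores = (perceptions - expectations).sum(axis=1)

# Item-level mean gaps (P - E) are the kind of figures later reported for diagnosis.
item_gaps = (perceptions - expectations).mean(axis=0)
print(servqual_scores[:5], item_gaps[:5])
```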
The importance of Parasuraman, Zeithaml and
Berry’s (1988) scale is evident by its application in a
number of empirical studies across varied service set-
tings (Brown and Swartz, 1989; Carman, 1990; Kassim
and Bojei, 2002; Lewis, 1987, 1991; Pitt, Gosthuizen and
Morris, 1992; Witkowski and Wolfinbarger, 2002; Young,
Cunningham and Lee, 1994). Despite its extensive ap-
plication, the SERVQUAL scale has been criticized on
various conceptual and operational grounds. Some major
objections against the scale relate to use of (P-E) gap
scores, length of the questionnaire, predictive power of
the instrument, and validity of the five-dimension struc-
ture (e.g., Babakus and Boller, 1992; Cronin and Taylor,
1992; Dabholkar, Shepherd and Thorpe, 2000; Teas, 1993,
1994). Since this paper does not purport to examine
dimensionality issue, we shall confine ourselves to a
discussion of only the first three problem areas.
Several issues have been raised with regard to use
of (P-E) gap scores, i.e., disconfirmation model. Most
studies have found a poor fit between service quality
as measured through Parasuraman, Zeithaml and Ber-
ry’s (1988) scale and the overall service quality measured
directly through a single-item scale (e.g., Babakus and
Boller, 1992; Babakus and Mangold, 1989; Carman, 1990;
Finn and Lamb, 1991; Spreng and Singh, 1993). Though
the use of gap scores is intuitively appealing and con-
ceptually sensible, the ability of these scores to provide
additional information beyond that already contained
in the perception component of service quality scale is
under doubt (Babakus and Boller, 1992; Iacobucci,
Grayson and Ostrom, 1994). Pointing to conceptual,
theoretical, and measurement problems associated with
the disconfirmation model, Teas (1993, 1994) observed
that a (P-E) gap of magnitude ‘-1’ can be produced in
six ways: P=1, E=2; P=2, E=3; P=3, E=4; P=4, E=5; P=5,
E=6 and P=6, E=7 and these tied gaps cannot be con-
strued as implying equal perceived service quality
shortfalls. In a similar vein, the empirical study by Peter,
Churchill and Brown (1993) found difference scores being
beset with psychometric problems and, therefore, cau-
tioned against the use of (P-E) scores.
Validity of (P-E) measurement framework has also
come under attack due to problems with the conceptu-
alization and measurement of expectation component of
the SERVQUAL scale. While perception (P) is definable
and measurable in a straightforward manner as the consumer’s belief about the service
as experienced, expectation
(E) is subject to multiple interpretations and as such has
been operationalized differently by different authors/
researchers (e.g., Babakus and Inhofe, 1991; Brown and
Swartz, 1989; Dabholkar et al., 2000; Gronroos, 1990;
Teas, 1993, 1994). Initially, Parasuraman, Zeithaml and
Berry (1985, 1988) defined expectation close on the lines
of Miller (1977) as ‘desires or wants of consumers,’ i.e.,
what they feel a service provider should offer rather than
would offer. This conceptualization was based on the
reasoning that the term ‘expectation’ has been used
differently in service quality literature than in the cus-
tomer satisfaction literature where it is defined as a
prediction of future events, i.e., what customers feel a
service provider would offer. Parasuraman, Berry and
Zeithaml (1990) labelled this ‘should be’ expectation as
‘normative expectation,’ and posited it as being similar
to ‘ideal expectation’ (Zeithaml and Parasuraman, 1991).
Later, realizing the problem with this interpretation,
they themselves proposed a revised expectation (E*)
measure, i.e., what the customer would expect from
‘excellent’ service (Parasuraman, Zeithaml and Berry,
1994).
It is because of the vagueness of the expectation
concept that some researchers like Babakus and Boller
(1992), Bolton and Drew (1991a), Brown, Churchill and
Peter (1993), and Carman (1990) stressed the need for
developing a methodologically more precise scale. The
SERVPERF scale — developed by Cronin and Taylor
(1992) — is one of the important variants of the SERV-
QUAL scale. For, being based on the perception com-
ponent alone, it has been conceptually and methodolog-
ically posited as a better scale than the SERVQUAL scale
which has its origin in disconfirmation paradigm.
SERVPERF Scale
Cronin and Taylor (1992) were amongst the researchers
who levelled maximum attack on the SERVQUAL scale.
They questioned the conceptual basis of the SERVQUAL
scale and found it confusing with service satisfaction.
They, therefore, opined that expectation (E) component
of SERVQUAL be discarded and instead performance
(P) component alone be used. They proposed what is
referred to as the ‘SERVPERF’ scale. Besides theoretical
arguments, Cronin and Taylor (1992) provided empir-
ical evidence across four industries (namely banks, pest
control, dry cleaning, and fast food) to corroborate the
superiority of their ‘performance-only’ instrument over
disconfirmation-based SERVQUAL scale.
Being a variant of the SERVQUAL scale and con-
taining perceived performance component alone, ‘per-
formance only’ scale is comprised of only 22 items. A
higher perceived performance implies higher service
quality. In equation form, it can be expressed as:
$SQ_i = \sum_{j=1}^{k} P_{ij}$                (2)

where: SQ_i = perceived service quality of individual ‘i’
       k    = number of attributes/items
       P_ij = perception of individual ‘i’ with respect to performance of a service firm on attribute ‘j’
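A corresponding sketch for equation (2), under the same illustrative assumptions as before; only the perception matrix is needed:

```python
import numpy as np

rng = np.random.default_rng(0)
perceptions = rng.integers(1, 6, size=(200, 22)).astype(float)  # hypothetical P_ij

# Equation (2): SQ_i = sum_j P_ij, the performance-only (SERVPERF) score.
servperf_scores = perceptions.sum(axis=1)

# Per-item means are often more interpretable for diagnosis and comparison.
servperf_item_means = perceptions.mean(axis=0)
print(servperf_scores[:5], servperf_item_means[:5])
```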
Methodologically, the SERVPERF scale represents
marked improvement over the SERVQUAL scale. Not
only is the scale more efficient in reducing the number
of items to be measured by 50 per cent, it has also been
empirically found superior to the SERVQUAL scale for
being able to explain greater variance in the overall
service quality measured through the use of single-item
scale. This explains the considerable support that has
emerged over time in favour of the SERVPERF scale
(Babakus and Boller, 1992; Bolton and Drew, 1991b;
Boulding et al., 1993; Churchill and Surprenant, 1982;
Gotlieb, Grewal and Brown, 1994; Hartline and Ferrell,
1996; Mazis, Antola and Klippel, 1975; Woodruff, Ca-
dotte and Jenkins, 1983). Though still lagging behind the
SERVQUAL scale in application, researchers have in-
creasingly started making use of the performance-only
measure of service quality (Andaleeb and Basu, 1994;
Babakus and Boller, 1992; Boulding et al., 1993; Brady
et al., 2002; Cronin et al., 2000; Cronin and Taylor, 1992,
1994). Also when applied in conjunction with the SERV-
QUAL scale, the SERVPERF measure has outperformed
the SERVQUAL scale (Babakus and Boller, 1992; Brady,
Cronin and Brand, 2002; Cronin and Taylor, 1992;
Dabholkar et al., 2000). Seeing its superiority, even
Zeithaml (one of the founders of the SERVQUAL scale)
in a recent study observed that “…Our results are in-
compatible with both the one-dimensional view of ex-
pectations and the gap formation for service quality.
Instead, we find that perceived quality is directly influ-
enced only by perceptions (of performance)” (Boulding
et al., 1993). This admittance cogently lends a testimony
to the superiority of the SERVPERF scale.
Service Quality Measurement:
Unweighted and Weighted Paradigms
The significance of various quality attributes used in the
service quality scales can considerably differ across
different types of services and service customers. Secu-
rity, for instance, might be a prime determinant of quality
for bank customers but may not mean much to customers
of a beauty parlour. Since service quality attributes are
not expected to be equally important across service
industries, it has been suggested to include importance
weights in the service quality measurement scales (Cronin
and Taylor, 1992; Parasuraman, Zeithaml and Berry,
1985, 1988; Parasuraman, Berry and Zeithaml, 1991;
Zeithaml, Parasuraman and Berry, 1990). While the
unweighted measures of the SERVQUAL and the
SERVPERF scales have been described above vide equa-
tions (1) and (2), the weighted versions of the SERV-
QUAL and the SERVPERF scales as proposed by Cronin
and Taylor (1992) are as follows:
$SQ_i = \sum_{j=1}^{k} I_{ij} (P_{ij} - E_{ij})$                (3)

$SQ_i = \sum_{j=1}^{k} I_{ij} P_{ij}$                (4)

where: I_ij is the weighting factor, i.e., importance of attribute ‘j’ to an individual ‘i.’
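The weighted versions in equations (3) and (4) additionally require an importance matrix; a sketch with illustrative data (the variable names and the 4-point importance scale are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
P = rng.integers(1, 6, size=(200, 22)).astype(float)  # hypothetical perceptions (1-5)
E = rng.integers(1, 6, size=(200, 22)).astype(float)  # hypothetical expectations (1-5)
I = rng.integers(1, 5, size=(200, 22)).astype(float)  # hypothetical importance ratings (1-4)

weighted_servqual = (I * (P - E)).sum(axis=1)  # equation (3)
weighted_servperf = (I * P).sum(axis=1)        # equation (4)
print(weighted_servqual[:3], weighted_servperf[:3])
```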
Though, on theoretical grounds, addition of weights
makes sense (Bolton and Drew, 1991a), not much im-
provement in the measurement potency of either scale
has been reported after inclusion of importance weights.
Between weighted versions of two scales, weighted
SERVPERF scale has been theoretically posited to be
superior to weighted SERVQUAL scale (Bolton and Drew,
1991a).
As pointed out earlier, one major problem with the
past studies has been their preoccupation with assess-
ment of psychometric and methodological soundness of
the two scales. The diagnostic ability of the scales has
not been explicitly explicated and empirically investi-
gated. The psychometric and methodological aspects of
a scale are no doubt important considerations but one
cannot overlook the assessment of the diagnostic power
of the scales. From the strategy formulation point of
view, it is rather the diagnostic ability of the scale that
can help managers in ascertaining where the quality
shortfalls prevail and what possibly can be done to close
down the gaps.
METHODOLOGY
The present study is an attempt to make a comparative
assessment of the SERVQUAL and the SERVPERF scales
in the Indian context in terms of their validity, ability
to explain variance in the overall service quality, power
to distinguish among service objects/firms, parsimony
in data collection, and, more importantly, their diagnos-
tic ability to provide insights for managerial interven-
tions in case of quality shortfalls. Data for making com-
parisons among the unweighted and weighted versions
of the two scales were collected through a survey of the
consumers of the fast food restaurants in Delhi. The fast
food restaurants were chosen due to their growing
familiarity and popularity with the respondents under
study. Another reason was that the fast food restaurant
services fall mid way on the ‘pure goods - pure service’
continuum (Kotler, 2003); the extremes are seldom found in actual service businesses.
For ensuring a greater gen-
eralizability of service quality scales, it was considered
desirable to select a service offering that is comprised
of both the good (i.e., food) and service (i.e., preparation
and delivery of food) components. Eight fast food res-
taurants (Nirulas, Wimpy, Dominos, McDonald, Pizza
Hut, Haldiram, Bikanerwala, and Rameshwar) rated as
more familiar and patronized restaurants in different
parts of Delhi in the pilot survey were selected.
Using the personal survey method, 300 students
and lecturers of different colleges and departments of
the University of Delhi spread all over the city of Delhi
were approached. The field work was done during
December 2001-March 2002. After repeated follow-ups,
only 200 duly filled-in questionnaires could be collected
constituting a 67 per cent response rate. The sample was
deliberately restricted to students and lecturers of Delhi
University and was equally divided between these two
groups. The idea underlying the selection of these two
categories of respondents was their easy accessibility.
Quota sampling was employed for selecting respon-
dents from these two groups. Each respondent was asked
to give information about two restaurants — one ‘most
frequently visited’ and one ‘least frequently visited.’ At
the analysis stage, collected data were pooled together
thus constituting a total of 400 responses.
Parasuraman, Zeithaml and Berry’s (1988) 22-item
SERVQUAL instrument was employed for collecting the
data regarding the respondents’ expectations, percep-
tions, and importance weights of various service at-
tributes. Wherever required, slight modifications in the
wording of scale items were made to make the question-
naire understandable to the surveyed respondents. Some
of the items were negatively worded to avoid the prob-
lem of routine ticking of items by the respondents. In
addition to the above mentioned 66 scale items (22 each
for expectations, perceptions, and importance rating),
the questionnaire included items relating to overall
quality, overall satisfaction, and behavioural intentions
of the consumers. These items were included to assess
the validity of the multi-item service quality scales used
in this study. The single-item direct measures of overall
service quality, namely, ‘overall quality of these restau-
rants is excellent’ and overall satisfaction, namely, ‘over-
all I feel satisfied with the services provided’ were used.
Cronin and Taylor (1992) have used similar measures
for assessing validity of multi-item service quality scales.
Behavioural intentions were measured with the help of
a 3-item scale as suggested by Zeithaml and Parasur-
aman (1996) and later used by Brady and Robertson
(2001) and Brady, Cronin and Brand (2002).3
Excepting importance weights and behavioural
items, responses to all the scale items were obtained on
a 5-point Likert scale ranging from ‘5’ for ‘strongly agree’
to ‘1’ for ‘strongly disagree.’ A 4-point Likert scale
anchored on ‘4’ for ‘very important’ and ‘1’ for ‘not
important’ was used for measuring importance weights
of each item. Responses to behavioural intention items
were obtained using a 5-point Likert scale ranging from
‘1’ for ‘very low’ to ‘5’ for ‘very high.’
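For concreteness, one way to organize such responses for analysis is a flat table with one row per respondent-restaurant evaluation; the sketch below uses randomly generated values and illustrative column names, not the study’s data:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
n = 400  # 200 respondents x 2 restaurant evaluations, as pooled in the study

cols = (
    [f"E{j}" for j in range(1, 23)]      # 22 expectation items (1-5)
    + [f"P{j}" for j in range(1, 23)]    # 22 perception items (1-5)
    + [f"I{j}" for j in range(1, 23)]    # 22 importance ratings (1-4)
)
data = pd.DataFrame(
    np.column_stack([
        rng.integers(1, 6, size=(n, 44)),   # E and P items on 1-5
        rng.integers(1, 5, size=(n, 22)),   # I items on 1-4
    ]),
    columns=cols,
)
# Single-item validity measures and the 3-item behavioural intentions scale.
for extra in ["overall_quality", "overall_satisfaction", "bi1", "bi2", "bi3"]:
    data[extra] = rng.integers(1, 6, size=n)

print(data.shape)  # (400, 71): 66 scale items + 5 validity/intention items
```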
FINDINGS AND DISCUSSION
Validity of Alternative Measurement Scales
As suggested by Churchill (1979), convergent and dis-
criminant validity of the four measurement scales was assessed by computing correlation
coefficients for dif-
ferent pairs of scales. The results are summarized in
Table 1. The presence of a high correlation between
alternate measures of service quality is a pointer to the
convergent validity of all the four scales. The SERVPERF
scale is, however, found having a stronger correlation
with other similar measures, viz., SERVQUAL and
importance weighted service quality measures.
A higher correlation found between two different
measures of the same variable than that found between
the measure of a variable and other variable implies the
presence of discriminant validity (Churchill, 1979) in
respect of all the four multi-item service quality scales.
Once again, it is the SERVPERF scale which is found
possessing the highest discriminant validity.
SERVPERF is, thus, found providing a more con-
vergent as well as discriminant valid explanation of
service quality.
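The convergent/discriminant validity check amounts to inspecting a correlation matrix of the composite scale scores and the single-item measures; a sketch with simulated scores (column names and data are illustrative only):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 400
P = rng.integers(1, 6, size=(n, 22)).astype(float)
E = rng.integers(1, 6, size=(n, 22)).astype(float)
I = rng.integers(1, 5, size=(n, 22)).astype(float)

scores = pd.DataFrame({
    "SERVQUAL (P-E)": (P - E).sum(axis=1),
    "SERVPERF (P)": P.sum(axis=1),
    "Weighted SERVQUAL I(P-E)": (I * (P - E)).sum(axis=1),
    "Weighted SERVPERF I(P)": (I * P).sum(axis=1),
    "Overall service quality": rng.integers(1, 6, size=n),
    "Overall satisfaction": rng.integers(1, 6, size=n),
})

# Convergent validity: high correlations among the alternative scale scores.
# Discriminant validity: those correlations exceed each scale's correlation
# with the other constructs (Churchill, 1979).
print(scores.corr().round(3))
```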
Explanatory Power of Alternative
Measurement Scales
The ability of a scale to explain the variation in the
overall service quality (measured directly through a
single-item scale) was assessed by regressing respond-
ents’ perceptions of overall service quality on its corre-
sponding multi-item service quality scale. Adjusted R2
values reported in Table 2 clearly point to the superiority
of SERVPERF scale for being able to explain greater proportion of variance (0.294) in the overall service quality than is the case with other scales.
Table 1: Alternate Service Quality Scales and Other Measures — Correlation Coefficients
                              SERVQUAL   SERVPERF   Weighted          Weighted         Overall    Overall        Behavioural
                              (P-E)      (P)        SERVQUAL I(P-E)   SERVPERF I(P)    Service    Satisfaction   Intentions
                                                                                       Quality
SERVQUAL (P-E)                1.000
SERVPERF (P)                  .735       -
Weighted SERVQUAL I(P-E)      .995       .767       -
Weighted SERVPERF I(P)        .759       .993       .772              -
Overall service quality       .416       .544       .399              .531             -
Overall satisfaction          .420       .557       .425              .554             .724       -
Behavioural intentions        .293       .440       .308              .459             .570       .528           1.000
Addition of importance
weights is not able to enhance the explanatory power
of the SERVPERF and the SERVQUAL scales. The results
of the present study are quite in conformity with those
of Cronin and Taylor (1992) who also found addition of
importance weight not improving the predictive ability
of either scale.
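The explanatory-power comparison regresses the single-item overall quality rating on each composite scale score and compares adjusted R2; a sketch with simulated data (statsmodels is assumed to be available, and the data generation is purely illustrative):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 400
P = rng.integers(1, 6, size=(n, 22)).astype(float)
E = rng.integers(1, 6, size=(n, 22)).astype(float)
servperf = P.sum(axis=1)
servqual = (P - E).sum(axis=1)
# Simulated single-item overall quality rating, loosely tied to perceptions.
overall = 0.03 * servperf + rng.normal(scale=1.0, size=n)

# Fit one simple regression per scale and compare adjusted R-squared.
for name, x in [("SERVQUAL (P-E)", servqual), ("SERVPERF (P)", servperf)]:
    fit = sm.OLS(overall, sm.add_constant(x)).fit()
    print(f"{name}: adj. R2 = {fit.rsquared_adj:.3f}")
```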
Discriminatory Power of Alternative
Measurement Scales
One basic use of a service quality scale is to gain insight
as to where a particular service firm stands vis-à-vis
others in the market. The scale that can best differentiate
among service firms obviously represents a better choice.
Mean quality scores for each restaurant were computed
and compared with the help of ANOVA technique to
delve into the discriminatory power of alternative meas-
urement scales. The results presented in Table 3 show
significant differences (p < .000) existing among mean
service quality scores for each of the alternate scales. The
results are quite in line with those obtained by using
single-item measures of service quality. The results thus
establish the ability of all the four scales to be able to
discriminate among the objects (i.e., restaurants), and as
such imply that any one of the scales can be used for
making quality comparisons across service firms.
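The discriminatory-power test is a one-way ANOVA of scale scores across the eight restaurants; a sketch with simulated groups (scipy is assumed to be available, and the group means are invented for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Simulated per-respondent SERVPERF scores for eight restaurants
# (50 evaluations each), with slightly different group means.
groups = [rng.normal(loc=3.2 + 0.08 * g, scale=0.5, size=50) for g in range(8)]

# One-way ANOVA: can the scale distinguish mean quality across the restaurants?
f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```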
Parsimony in Data Collection
Often, ease of data collection is a major consideration
governing the choice of measurement scales for studies
in the business context. When examined from this per-
spective, the unweighted performance-only scale turns
out to be the best choice as it requires much less infor-
mational input than required by the other scales. While
the SERVQUAL and weighted service quality scales
(both SERVQUAL and the SERVPERF) require data on
customer perceptions as well as customer expectations
and/or importance perceptions also, the performance-
only measure requires data on customers’ perceptions
alone, thus considerably obviating the data collection
task. While the number of items for which data are
required is only 22 for the SERVPERF scale, it is 44 and
66 for the SERVQUAL and the weighted SERVQUAL
scales respectively (Table 4). Besides making the ques-
tionnaire lengthy and compounding data editing and
coding tasks, the requirement of additional data can take its toll on the response rate
too. This study is a case in
point. Seeing a lengthy questionnaire, many respon-
dents hesitated to fill it up and returned it on the spot.
Diagnostic Ability of Scales in Providing
Insights for Managerial Intervention
and Strategy Formulation
A major reason underlying the use of a multi-item scale
vis-à-vis its single-item counterpart is its ability to pro-
vide information about the attributes where a given firm
is deficient in providing service quality and thus needs
to evolve strategies to remove such quality shortfalls
with a view to enhance customer satisfaction in future.
When judged from this perspective, all the four service
quality scales, being multi-item scales, appear capable
of performing the task.
Table 2: Explanatory Power of Alternative Service Scales — Regression Results

Measurement Scale (Independent Variable)    R2      Adjusted R2
SERVQUAL (P-E)                              .173    .171
SERVPERF (P)                                .296    .294
Weighted SERVQUAL I(P-E)                    .159    .156
Weighted SERVPERF I(P)                      .282    .280

Note: Dependent variable = Overall service quality.
Table 3: Discriminatory Power of Alternate Scales — ANOVA Results

Restaurant                      SERVPERF   Weighted          SERVQUAL   Weighted           Overall
                                (P)        SERVPERF I(P)     (P-E)      SERVQUAL I(P-E)    Service Quality
Nirulas                         3.63       3.67              -0.28      -0.31              4.04
Wimpy’s                         3.41       3.44              -0.64      -0.58              3.46
Dominos                         3.40       3.50              -0.45      -0.41              3.52
McDonalds                       3.72       3.78              -0.21      -0.20              4.23
Pizza Hut                       3.64       3.72              -0.24      -0.29              4.00
Haldiram                        3.55       3.51              -0.40      -0.50              3.72
Bikanerwala                     3.38       3.41              -0.57      -0.62              3.65
Rameshwar                       3.19       3.30              -0.58      -0.58              3.19
Overall mean                    3.55       3.61              -0.37      -0.38              3.86
F-value (significance level)    6.60       5.31              4.25       3.40               6.77
                                (p<.000)   (p<.000)          (p<.000)   (p<.002)           (p<.000)
But, unfortunately, the scales differ considerably in terms of the areas identified for
improvement as well as the order in which the identified
areas need to be taken up for quality improvement. This
asymmetrical power of the four scales can be probed into
by taking up four typical service attributes, namely, use
of up-to-date equipment and technology, prompt res-
ponse, accuracy of records, and convenience of operat-
ing hours as being tapped in the study vide scale items
1, 11, 13, and 22 respectively. The performance of a
restaurant (name disguised) on these four scale items
is reported in Table 5.
An analysis of Table 5 reveals the following find-
ings. When measured with the help of ‘performance-
only’ (i.e., SERVPERF) scale, scores in column 3 show
that the restaurant is providing quality in respect of
service items 1, 13, and 22. The mean scores in the range
of 3.31 to 3.97 for these items are a pointer to this in-
ference. The consumers appear indifferent to the pro-
vision of service quality in respect of item 11. However,
when compared with maximum possible attainable value
of 5 on a 5-point scale, the restaurant under consider-
ation seems deficient in respect of all the four service
areas (column 5) implying managerial intervention in
all these areas. In the event of time and resource con-
straints, however, the management needs to prioritize
quality deficient areas. This can be done in two ways:
either on the basis of magnitude of performance scores
(scores lower in magnitude pointing to higher priority
for intervention) or on the basis of magnitude of the
implied gap scores between perceived performance (P)
and maximally attainable score of 5 (with higher gaps
implying immediate interventions). Judged either way, the
service areas in the descending order of intervention
urgency are 11, 22, 13, and 1 (see columns 3 and 5). The
management can pick up one or a few areas for man-
agerial intervention depending upon the availability of
time and financial resources at its disposal. If importance
scores are also taken into account as is the case with the
weighted SERVPERF scale, the order of priority gets
changed to 11, 13, 22, and 1.
In the case of the SERVQUAL scale requiring com-
parison of customers’ perceptions of service perform-
ance (P) with their expectations (E), the areas with zero
or positive gaps imply either customer satisfaction or
delight with the service provision and as such do not
call for any managerial intervention. But, in the areas
where gaps are negative, the management needs to do
something urgently for improving the quality. When
viewed from this perspective, only three service areas,
namely, 13, 11, and 1 having negative gaps, call for
managerial intervention and in that order as determined
by the magnitude of gap scores shown in column 9 of
Table 5. Taking into account the importance scores also
as is the case with the weighted SERVQUAL scale, order
of priority areas gets changed to 11, 13, and 1 (see column
10).
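The differing priority orderings can be reproduced directly from the item-level means; the sketch below uses the four items of Table 5 (the weighted products are recomputed from the reported means rather than copied from the table):

```python
import pandas as pd

# Item-level means for the disguised restaurant (values from Table 5).
items = pd.DataFrame({
    "item": [1, 11, 13, 22],
    "P": [3.97, 3.08, 3.51, 3.31],   # mean perceived performance
    "E": [4.37, 3.57, 4.05, 3.04],   # mean expectation
    "I": [4.28, 4.09, 3.67, 4.05],   # mean importance
}).set_index("item")

items["gap_PE"] = items["P"] - items["E"]             # SERVQUAL gap
items["weighted_P"] = items["I"] * items["P"]         # weighted SERVPERF
items["weighted_gap"] = items["I"] * items["gap_PE"]  # weighted SERVQUAL

# SERVPERF: every item below the scale maximum (5) is a candidate; lowest P first.
print("SERVPERF order:", list(items.sort_values("P").index))
# Weighted SERVPERF: lowest importance-weighted performance first.
print("Weighted SERVPERF order:", list(items.sort_values("weighted_P").index))
# SERVQUAL: only negative (P-E) gaps call for intervention; most negative first.
neg = items[items["gap_PE"] < 0]
print("SERVQUAL order:", list(neg.sort_values("gap_PE").index))
print("Weighted SERVQUAL order:", list(neg.sort_values("weighted_gap").index))
```

Run as is, this reproduces the orderings discussed above: 11, 22, 13, 1 (SERVPERF); 11, 13, 22, 1 (weighted SERVPERF); 13, 11, 1 (SERVQUAL); and 11, 13, 1 (weighted SERVQUAL).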
We thus find that though all the four multi-item
scales possess diagnostic power to suggest areas for
managerial actions, the four scales differ considerably
in terms of areas suggested as well as the order in which
the actions in the identified areas are called for.
Table 4: Number of Items Contained in Service Quality Measurement Scales

Scale                         Number of Items
SERVQUAL (P-E)                44
SERVPERF (P)                  22
Weighted SERVQUAL I(P-E)      66
Weighted SERVPERF I(P)        44
Table 5: Areas Suggested for Quality Improvement by Alternate Service Quality Scales

Columns: (1) Scale item; (2) Item description; (3) Performance (P) or SERVPERF score; (4) Maximum score; (5) Gap (P-M); (6) Importance score (I); (7) I(P) or weighted SERVPERF score; (8) Expectation score (E); (9) Gap (P-E) or SERVQUAL; (10) I(P-E) or weighted SERVQUAL score.

(1)   (2)                                     (3)    (4)    (5)     (6)    (7)     (8)    (9)     (10)
1.    Use of up-to-date equipment
      and technology                          3.97   5.00   -1.03   4.28   16.99   4.37   -0.40   -1.71
11.   Prompt response                         3.08   5.00   -1.92   4.09   12.60   3.57   -0.49   -6.01
13.   Accuracy of records                     3.51   5.00   -1.49   3.67   12.88   4.05   -0.54   -1.98
22.   Operating hours convenient to all       3.31   5.00   -1.69   4.05   13.37   3.04   +0.27   +1.09

Action areas in order of priority:
      by (3) SERVPERF score:           11, 22, 13, 1
      by (5) gap (P-M):                11, 22, 13, 1
      by (7) weighted SERVPERF:        11, 13, 22, 1
      by (9) SERVQUAL (P-E):           13, 11, 1
      by (10) weighted SERVQUAL:       11, 13, 1

Note: Customer expectations, perceptions, and importance for each service quality item were measured on a 5-point Likert scale ranging
from 5 for ‘strongly agree’ to 1 for ‘strongly disagree.’
The moot point, therefore, is to determine which scale provides
a more pragmatic and managerially useful diagnosis.
From a closer perusal of the data provided in Table 5,
it may be observed that the problem of different areas
and different ordering suggested by the four scales is
coming up basically due to different reference points
used explicitly or implicitly for computing the service
quality shortfalls. While it is the maximally attainable
score of 5 on a 5-point scale that presumably is serving
as a reference point in the case of the SERVPERF scale,
it is customer expectation for each of the service area
that is acting as a yardstick under the SERVQUAL scale.
Ideally speaking, the management should strive for at-
taining the maximally attainable performance level (a score of 5 in the case of a
5-point scale) in all those
service areas where the performance level is less than
5. This is exactly what the SERVPERF scale-based anal-
ysis purports to do. However, this is tenable only under
situations when there are no time and resource con-
straints and it can be assumed that all the areas are
equally important to customers and they want maximal-
ly possible quality level in respect of each of the service
attributes. But, in a situation where the management
works under resource constraints (this usually is the
case) and consumers do not attach equal importance to the maximum possible service
quality provision in every area, the man-
agement needs to identify areas which are more critical
from the consumers’ point of view and call for imme-
diate attention. This is exactly what the SERVQUAL
scale does by pointing to areas where firm’s performance
is below the customers’ expectations.
Between the two scales, therefore, the SERVQUAL
scale stands to provide a more pragmatic diagnosis of
the service quality provision than the SERVPERF scale.4
So long as perceived performance equals or exceeds
customer expectations for a service attribute, the SERV-
QUAL scale does not point to managerial intervention
despite performance level in respect to that attribute
falling short of the maximally attainable service quality
score. Service area 22 is a case in point. As per the
SERVPERF scale, this is also a fitting area for managerial
intervention because the perceived performance level in
respect of this attribute is far less than the maximally
attainable value of 5. This, however, is not the case with
the SERVQUAL scale. Since the customer perceptions
of a restaurant’s performance are above their expecta-
tion level, there seems to be no ostensible justification
in further trying to improve the performance in this area.
The customers are already getting more than their ex-
pectations; any attempt to further improve the perform-
ance in this area might drain the restaurant owner of
the resources needed for improvement in other critical
areas. Any such effort, moreover, is unlikely to add to
the customers’ delight as the customers themselves are
not desirous of having more of this service attribute as
revealed by their mean expectation score which is much
lower than the ideally and maximally attainable score
of 5.
If importance scores are also taken into consider-
ation, the weighted versions of both the scales provide
much more useful insights than those provided by the
unweighted counterparts. Be it the SERVQUAL or the
SERVPERF scale, the inclusion of weights does represent
improvement over the unweighted measures. By incor-
porating the customer perceptions of the importance of
different service attributes in the analysis, the weighted
service quality scales are able to more precisely direct
managerial attention to deficient areas which are more
critical from the customers’ viewpoint and as such need
to be urgently attended to. It may, furthermore, be
observed that between the weighted versions of the
SERVPERF and the SERVQUAL scales, the weighted
SERVQUAL scale is far superior in its diagnos-
tic power. This scale takes into account not only the
magnitude of customer defined service quality gaps but
also the importance weights that customers assign to
different service attributes, thus pointing to such service
quality shortfalls as are crucial to a firm’s success in the
market and deserve immediate managerial intervention.
CONCLUSIONS, IMPLICATIONS, AND
DIRECTIONS FOR FUTURE RESEARCH
A highly contentious issue examined in this paper re-
lates to the operationalization of service quality con-
struct. A review of extant literature points to SERV-
QUAL and SERVPERF as being the two most widely
advocated and applied service quality scales. Notwith-
standing a number of researches undertaken in the field,
it is not yet clear as to which one of the two scales is
a better measure of service quality. Since the focus of
the past studies has been on an assessment of the psy-
chometric and methodological soundness alone of the
service quality scales — and that too in the context of
the developed world — this study represents a pioneer-
ing effort towards evaluating the methodological sound-
ness as well as the diagnostic power of the two scales
in the context of a developing country — India. A survey
of the consumers of the fast food restaurants in Delhi
was carried out to gather the necessary information. The
unweighted as well as the weighted versions of the
SERVQUAL and the SERVPERF scales were compara-
tively assessed in terms of their convergent and discri-
minant validity, ability to explain variation in the overall
service quality, ease in data collection, capacity to dis-
tinguish restaurants on quality dimension, and diagnos-
tic capability of providing directions for managerial
interventions in the event of service quality shortfalls.
So far as the assessment of various scales on the first
three parameters is concerned, the unweighted perform-
ance-only measure (i.e., the SERVPERF scale) emerges
as a better choice. It is found capable of providing a more
convergent and discriminant valid explanation of serv-
ice quality construct. It also turns out to be the most
parsimonious measure of service quality and is capable
of explaining greater proportion of variance present in
the overall service quality measured through a single-
item scale.
The addition of importance weights, however, does
not result in a higher validity and explanatory power
of the unweighted SERVQUAL and SERVPERF scales.
These findings are quite in conformity with those of
earlier studies recommending the use of unweighted
perception-only scores (e.g., Bolton and Drew, 1991b;
Boulding et al., 1993; Churchill and Surprenant, 1982;
Cronin, Brady and Hult, 2000; Cronin and Taylor, 1992).
When examined from the point of view of the power
of various scales to discriminate among the objects (i.e.,
restaurants in the present case), all the four scales stand
at par in performing the job. But in terms of diagnostic
ability, it is the SERVQUAL scale that emerges as a clear
winner. The SERVPERF scale, notwithstanding its su-
periority in other respects, turns out to be a poor choice.
For, being based on an implied comparison with the
maximally attainable scores, it suggests intervention
even in areas where the firm’s performance level is
already up to customer’s expectations. The incorpora-
tion of expectation scores provides richer information
than that provided by the perception-only scores thus
adding to the diagnostic power of the service quality
scale. Even the developers of performance-only scale
were cognizant of this fact and did not suggest that it
is unnecessary to measure customer expectations in serv-
ice quality research (Cronin and Taylor, 1992).
From a diagnostic perspective, therefore, (P-E) scale
constitutes a better choice. Since it entails a direct com-
parison of performance perceptions with customer ex-
pectations, it provides a more pragmatic diagnosis of
service quality shortfalls. Especially in the event of time
and resource constraints, the SERVQUAL scale is able
to direct managerial attention to service areas which are
critically deficient from the customers’ viewpoint and
require immediate attention. No doubt, the SERVQUAL
scale entails greater data collection work, but it can be
eased by employing direct rather than computed
expectation disconfirmation measures. This can be done
by asking customers to directly report about the extent
they feel a given firm has performed in comparison to
their expectations in respect of each service attribute
rather than asking them to report their perception and
expectation scores separately as is required under the
SERVQUAL scale (for a further discussion on this aspect,
see Dabholkar, Shepherd and Thorpe, 2000).
The addition of importance weights further adds to
the diagnostic power of the SERVQUAL scale. Though
the inclusion of weights improves the diagnostic ability
of even the SERVPERF scale, the scale continues to suffer
from its generic weakness of directing managerial atten-
tion to such service areas which are not at all deficient
in the customer’s perception.
In overall terms, we thus find that while the
SERVPERF scale is a more convergent and discriminant
valid explanation of the service construct, possesses
greater power to explain variations in the overall service
quality scores, and is also a more parsimonious data
collection instrument, it is the SERVQUAL scale which
entails superior diagnostic power to pinpoint areas for
managerial intervention. The obvious managerial impli-
cation emanating from the study findings is that when
one is interested simply in assessing the overall service
quality of a firm or making quality comparisons across
service industries, one can employ the SERVPERF scale
because of its psychometric soundness and instrument
parsimoniousness. However, when one is interested in
identifying the areas of a firm’s service quality shortfalls
for managerial interventions, one should prefer the
SERVQUAL scale because of its superior diagnostic
power.
No doubt, the use of the weighted SERVQUAL scale
is the most appropriate alternative from the point of
view of the diagnostic ability of various scales, yet a final
decision in this respect needs to be weighed against the
gigantic task of information collection. Following Cro-
nin and Taylor’s (1992) approach, one is required to collect
information on importance weights for all the 22 scale
items thus considerably increasing the length of the
survey instrument. However, alternative approaches do
exist that can be employed to overcome this problem.
One possible alternative is to collect information
about the importance weights at the service dimension
rather than the individual service level. This can be
accomplished by first doing a pilot survey of the re-
spondents using 44 SERVQUAL scale items and then
performing a factor analysis on the collected data for
identifying service dimensions. Once the service dimen-
sions are identified, a final survey of all the sample
respondents can be done for seeking information in
respect of the 44 scale items as well as for the importance
weights for each of the service quality dimensions iden-
tified during the pilot survey stage. Addition of one
more question seeking importance information will only
slightly increase the questionnaire size. The importance
information so gathered can then be used for prioritizing
the quality deficient service areas for managerial intervention.
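A sketch of this pilot-stage dimension identification, using an exploratory factor analysis on hypothetical pilot responses; the use of scikit-learn, the data, and the five-factor solution are assumptions for illustration (five is the number of dimensions Parasuraman, Zeithaml and Berry report):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(5)
# Hypothetical pilot responses to the 22 perception items (rows = respondents).
pilot = rng.integers(1, 6, size=(80, 22)).astype(float)

# Extract factors to identify candidate service quality dimensions.
fa = FactorAnalysis(n_components=5, rotation="varimax", random_state=0)
fa.fit(pilot)

# Assign each item to the factor on which it loads most heavily.
loadings = fa.components_            # shape: (5 factors, 22 items)
item_to_dim = abs(loadings).argmax(axis=0)
print(item_to_dim)                   # dimension index for each of the 22 items
```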
Alternatively, one can employ the approach
adopted by Parasuraman, Zeithaml and Berry (1988).
Instead of directly collecting information from the re-
spondents, they derived importance weights by regress-
ing overall quality perception scores on the SERVQUAL
scores for each of the dimensions identified through the
use of factor analysis on the data collected vide 44 scale
items. Irrespective of the approach used, the data col-
lection task will be much simpler than required as per
the approach employed by Cronin and Taylor (1992) for
gathering data in connection with the weighted SERV-
QUAL scale.
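The regression route can be sketched as follows, assuming dimension-level composite scores have already been formed; the data and coefficient values here are simulated purely for illustration:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 400
# Hypothetical dimension-level scores (e.g., means of the items loading on each
# of five dimensions) and a simulated single-item overall quality rating.
dims = rng.normal(size=(n, 5))
overall = dims @ np.array([0.4, 0.3, 0.1, 0.15, 0.05]) + rng.normal(scale=0.5, size=n)

# Regress overall quality on the dimension scores; the coefficients then act as
# derived importance weights for the dimensions.
model = sm.OLS(overall, sm.add_constant(dims)).fit()
weights = model.params[1:]           # drop the intercept
print(weights / weights.sum())       # normalized derived importance weights
```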
Though the study brings to the fore interesting
findings, it will not be out of place to mention here some
of its limitations. A single service setting with a few
restaurants under investigation and a small database of
only 400 observations preclude much of the generali-
zability of the study findings. Studies of similar kind
with larger sample sizes need to be replicated in differ-
ent service industries in different countries — especially
in the developing ones — to ascertain applicability and
superiority of the alternate service quality scales.
Dimensionality, though an important consideration
from the point of view of both the validity and reliability
assessment, has not been investigated in this paper due
to space limitations. It is nonetheless an important issue
in itself and needs to be thoroughly examined before
coming to a final judgment about the superiority of the
service quality scales. It is quite possible that the con-
clusions of the present study might change if the dimen-
sionality angle is incorporated into the analysis. Studies
in future may delve into this aspect.
One final caveat relates to the limited power of both
the unweighted and the weighted versions of the SERV-
QUAL and the SERVPERF scales to explain variations
present in the overall service quality scores assessed
through the use of a single-item scale. This casts doubts
on the applicability of multi-item service quality scales
as propounded and tested in the developed countries
to the service industries in a developing country like India.
Though regressing overall service quality scores on
service quality dimensions might somewhat improve the
explanatory power of these scales, we do not expect any
appreciable improvement in the results. The poor explan-
atory power of the scales in the present study might have
arisen either due to methodological considerations such
as the use of a smaller sample or a 5-point rather than
a 7-point Likert scale employed by the developers of
service quality scales in their studies or else — as is more
likely to be the case — the problem has arisen due to
the inappropriateness of items contained in the service
quality scales under investigation in the context of the
developing countries. Both these aspects need to be
thoroughly examined in future researches so as to be able
to arrive at a psychometrically as well as managerially
more useful service quality scale for use in the service
industries of the developing countries.
ENDNOTES
1. Customer satisfaction with services or perception of
service quality can be viewed as confirmation or dis-
confirmation of customer expectations of a service offer.
The proponents of the gap model have based their
researches on disconfirmation paradigm which main-
tains that satisfaction is related to the size and direction
of the disconfirmation experience where disconfirma-
tion is related to the person’s initial expectations. For
further discussion, see Churchill and Surprenant, 1982
and Parasuraman, Zeithaml and Berry, 1985.
2. A factor analysis of 22 scale items led Parasuraman,
Zeithaml and Berry (1988) to conclude that consumers
use five dimensions for evaluating service quality. The
five dimensions identified by them included tangibility,
reliability, responsiveness, assurance, and empathy.
3. The scale items used in this connection were: “The
probability that I will use their facilities again,” “The
likelihood that I would recommend the restaurants to
a friend,” and “If I had to eat in a fast food restaurant
again, the chance that I would make the same choice.”
4. Even though a high correlation (r=0.747) existed be-
tween (P-M) and (P-E) gap scores, the former cannot
be used as a substitute for the latter as on a case by case
basis, it can point to initiating actions even in such areas
which do not need any managerial intervention based
on (P-E) scores.
REFERENCES
Andaleeb, S S and Basu, A K (1994). “Technical Complexity
and Consumer Knowledge as Moderators of Service
Quality Evaluation in the Automobile Service Indus-
try,” Journal of Retailing, 70(4), 367-81.
Anderson, E W, Fornell, C and Lehmann, D R (1994).
“Customer Satisfaction, Market Share and Profitability:
Findings from Sweden,” Journal of Marketing, 58(3), 53-
66.
Anderson, C and Zeithaml, C P (1984). “Stage of the Pro-
duct Life Cycle, Business Strategy, and Business Per-
formance,” Academy of Management Journal, 27 (March),
5-24.
Babakus, E and Boller, G W (1992). “An Empirical Assess-
ment of the Servqual Scale,” Journal of Business Research,
24(3), 253-68.
Babakus, E and Mangold, W G (1989). “Adapting the Serv-
qual Scale to Hospital Services: An Empirical Investi-
gation,” Health Service Research, 26(6), 767-80.
Babakus, E and Inhofe, M (1991). “The Role of Expectations
and Attribute Importance in the Measurement of Serv-
ice Quality” in Gilly M C (ed.), Proceedings of the Summer
Educator’s Conference, Chicago, IL: American Marketing
Association, 142-44.
Bolton, R N and Drew, J H (1991a). “A Multistage Model
of Customer’s Assessment of Service Quality and Va-
lue,” Journal of Consumer Research, 17(March), 375-85.
Bolton, R N and Drew, J H (1991b). “A Longitudinal Ana-
lysis of the Impact of Service Changes on Customer
Attitudes,” Journal of Marketing, 55(January), 1-9.
Boulding, W; Kalra, A, Staelin, R and Zeithaml, V A (1993).
“A Dynamic Process Model of Service Quality: From
Expectations to Behavioral Intentions,” Journal of Mar-
keting Research, 30(February), 7-27.
Brady, M K and Robertson, C J (2001). “Searching for a
Consensus on the Antecedent Role of Service Quality
and Satisfaction: An Exploratory Cross-National Study,”
Journal of Business Research, 51(1) 53-60.
Brady, M K, Cronin, J and Brand, R R (2002). “Perfor-
mance–Only Measurement of Service Quality: A Rep-
lication and Extension,” Journal of Business Research,
55(1), 17-31.
Brown, T J, Churchill, G A and Peter, J P (1993). “Improving
the Measurement of Service Quality,” Journal of Retail-
ing, 69(1), 127-39.
Brown, S W and Swartz, T A (1989). “A Gap Analysis of
Professional Service Quality,” Journal of Marketing, 53
(April), 92-98.
Buzzell, R D and Gale, B T (1987). The PIMS Principles, New
York: The Free Press.
Carman, J M (1990). “Consumer Perceptions of Service
Quality: An Assessment of the SERVQUAL Dimensions,”
Journal of Retailing, 66(1), 33-35.
Churchill, G A (1979). “A Paradigm for Developing Better
Measures of Marketing Constructs,” Journal of Market-
ing Research, 16 (February), 64-73.
Churchill, G A and Surprenant, C (1982). “An Investigation
into the Determinants of Customer Satisfaction,” Journal
of Marketing Research, 19(November), 491-504.
Cronin, J and Taylor, S A (1992). “Measuring Service Quality:
A Reexamination and Extension,” Journal of Marketing,
56(July), 55-67.
Cronin, J and Taylor, S A (1994). “SERVPERF versus SERV-
QUAL: Reconciling Performance-based and Perceptions–
Minus–Expectations Measurement of Service Quality,”
Journal of Marketing, 58(January), 125-31.
Cronin, J, Brady, M K and Hult, T M (2000). “Assessing
the Effects of Quality, Value and Customer Satisfaction
on Consumer Behavioral Intentions in Service Environ-
ments,” Journal of Retailing, 76(2), 193-218.
Crosby, P B (1984). Paper presented to the “Bureau de
Commerce,” Montreal, Canada (Unpublished), Novem-
ber.
Dabholkar, P A, Shepherd, D C and Thorpe, D I (2000).
“A Comprehensive Framework for Service Quality: An
Investigation of Critical, Conceptual and Measurement
Issues through a Longitudinal Study,” Journal of Retail-
ing, 76(2), 139-73.
Eiglier, P and Langeard, E (1987). Servuction, Le Marketing
des Services, Paris: McGraw-Hill.
Finn, D W and Lamb, C W (1991). “An Evaluation of the SERVQUAL Scale in a Retailing Setting,” in Holman, R and Solomon, M R (eds.), Advances in Consumer Research, Provo, UT: Association for Consumer Research, 480-93.
Garvin, D A (1983). “Quality on the Line,” Harvard Business
Review, 61(September-October), 65-73.
Gotlieb, J B, Grewal, D and Brown, S W (1994). “Consumer
Satisfaction and Perceived Quality: Complementary or
Divergent Constructs,” Journal of Applied Psychology,
79(6), 875-85.
Gronroos, C (1982). Strategic Management and Marketing in
the Service Sector. Finland: Swedish School of Economics
and Business Administration.
Gronroos, C (1990). Service Management and Marketing:
Managing the Moments of Truth in Service Competition.
Mass: Lexington Books.
Hartline, M D and Ferrell, O C (1996). “The Management of Customer Contact Service Employees: An Empirical Investigation,” Journal of Marketing, 60(October), 52-70.
Iacobucci, D, Grayson, K A and Ostrom, A L (1994). “The
Calculus of Service Quality and Customer Satisfaction:
Theoretical and Empirical Differentiation and Integra-
tion,” in Swartz, T A; Bowen, D H and Brown, S W (eds.),
Advances in Services Marketing and Management, Green-
wich, CT: JAI Press, 1-67.
Juran, J M (1988). Juran on Planning for Quality. New York:
The Free Press.
Kassim, N M and Bojei, J (2002). “Service Quality: Gaps
in the Telemarketing Industry,” Journal of Business Re-
search, 55(11), 845-52.
Kotler, P (2003). Marketing Management, New Delhi: Prentice Hall of India.
Lewis, R C (1987). “The Measurement of Gaps in the Quality
of Hotel Service,” International Journal of Hospitality
Management, 6(2), 83-88.
Lewis, B (1991). “Service Quality: An International Comparison of Bank Customers’ Expectations and Perceptions,” Journal of Marketing Management, 7(1), 47-62.
Mazis, M B, Ahtola, O T and Klippel, R E (1975). “A Comparison of Four Multi-Attribute Models in the Prediction of Consumer Attitudes,” Journal of Consumer Research, 2(June), 38-52.
Miller, J A (1977). “Exploring Satisfaction, Modifying
Modes, Eliciting Expectations, Posing Problems, and
Making Meaningful Measurements,” in Hunt, K (ed.),
Conceptualization and Measurement of Consumer Satisfac-
tion and Dissatisfaction, Cambridge, MA: Marketing
Science Institute, 72-91.
Normann, R (1984). Service Management. New York: Wiley.
Parasuraman, A, Berry, L L and Zeithaml, V A (1990). “Guidelines for Conducting Service Quality Research,” Marketing Research, 2(4), 34-44.
Parasuraman, A, Berry, L L and Zeithaml, V A (1991).
“Refinement and Reassessment of the SERVQUAL
Scale,” Journal of Retailing, 67(4), 420-50.
Parasuraman, A, Zeithaml, V A and Berry, L L (1985). “A
Conceptual Model of Service Quality and Its Implica-
tions for Future Research,” Journal of Marketing, 49 (Fall),
41-50.
Parasuraman, A, Zeithaml, V A and Berry, L L (1988).
“SERVQUAL: A Multiple Item Scale for Measuring
Consumer Perceptions of Service Quality,” Journal of
Retailing, 64(1), 12-40.
Parasuraman, A, Zeithaml, V A and Berry, L L (1994).
“Reassessment of Expectations as a Comparison Stand-
ard in Measuring Service Quality: Implications for
Further Research,” Journal of Marketing, 58(January),
111-24.
Peter, J P, Churchill, G A and Brown, T J (1993). “Caution
in the Use of Difference Scores in Consumer Research,”
Journal of Consumer Research, 19(March), 655-62.
Phillips, L W, Chang, D R and Buzzell, R D (1983). “Product Quality, Cost Position and Business Performance: A Test of Some Key Hypotheses,” Journal of Marketing, 47(Spring), 26-43.
Pitt, L F, Oosthuizen P and Morris, M H (1992). Service
Quality in a High Tech Industrial Market: An Application
of SERVQUAL, Chicago: American Management Asso-
ciation.
Rust, R T and Oliver, R L (1994). Service Quality: New Directions in Theory and Practice, New York: Sage Publications.
Shaw, J (1978). The Quality - Productivity Connection, New
York: Van Nostrand.
Smith, R A and Houston, M J (1982). “Script-based Eva-
luations of Satisfaction with Services,” in Berry, L,
Shostack, G and Upah, G (eds.), Emerging Perspectives
on Services Marketing, Chicago: American Marketing
Association, 59-62.
Spreng, R A and Singh, A K (1993). “An Empirical Assessment of the SERVQUAL Scale and the Relationship between Service Quality and Satisfaction,” in Cravens, D W and Dickson, P R (eds.), Enhancing Knowledge Development in Marketing, Chicago, IL: American Marketing Association, 1-6.
Teas, K R (1993). “Expectations, Performance Evaluation, and Consumers’ Perceptions of Quality,” Journal of Marketing, 57(October), 18-34.
Teas, K R (1994). “Expectations as a Comparison Standard
in Measuring Service Quality: An Assessment of Reas-
sessment,” Journal of Marketing, 58(January), 132-39.
Witkowski, T H and Wolfinbarger, M F (2002). “Compar-
ative Service Quality: German and American Ratings
across Service Settings,” Journal of Business Research, 55
(11), 875-81.
Woodruff, R B, Cadotte, E R and Jenkins, R L (1983).
“Modelling Consumer Satisfaction Processes Using
Experience-based Norms,” Journal of Marketing Research,
20(August), 296-304.
Young, C, Cunningham, L and Lee, M (1994). “Assessing
Service Quality as an Effective Management Tool: The
Case of the Airline Industry,” Journal of Marketing Theory
and Practice, 2(Spring), 76-96.
Zeithaml, V A, Parasuraman, A and Berry, L L (1990). Delivering Quality Service: Balancing Customer Perceptions and Expectations, New York: The Free Press.
Zeithaml, V A and Parasuraman, A (1991). “The Nature and Determinants of Customer Expectations of Service,” Marketing Science Institute Working Paper No. 91-113, Cambridge, MA: Marketing Science Institute.
Zeithaml, V A, Berry, L L and Parasuraman, A (1996). “The Behavioral Consequences of Service Quality,” Journal of Marketing, 60(April), 31-46.
Zeithaml, V A and Bitner, M J (2001). Services Marketing: Integrating Customer Focus Across the Firm, 2nd Edition, Boston: Tata McGraw-Hill.
Sanjay K Jain is Professor of Marketing and International Business
in the Department of Commerce, Delhi School of Economics,
University of Delhi, Delhi. His areas of teaching and research
include marketing, services marketing, international marketing,
and marketing research. He is the author of the book titled Export
Marketing Strategies and Performance: A Study of Indian Textiles
published in two volumes. He has published more than 70 research papers in reputed journals, including Journal of Global Marketing, Malaysian Journal of Small and Medium Enterprises, Vikalpa, and Business Analyst, and has also presented papers at various national and international conferences.
e-mail: skjaindse@vsnl.net
Garima Gupta is a Lecturer in Commerce at Kamla Nehru College, University of Delhi, Delhi. She is currently pursuing her
doctoral study in the Department of Commerce, Delhi School of
Economics, University of Delhi, Delhi.
e-mail: garimagupta77@yahoo.co.in