Quality in Higher Education, Vol. 7, No. 1, 2001
Measuring Quality and Performance in Higher Education
MAUREEN TAM
Teaching and Learning Centre, Lingnan University, Tuen Mun, Hong Kong, China
ABSTRACT The main argument of this paper emanates from an understanding that 'quality' is a highly contested concept with multiple meanings for people who conceive of higher education and quality differently. This paper attempts to analyse ways of thinking about higher education and quality; to consider their relevance to the measurement of the performance of universities and colleges; and to explore their implications for the selection of criteria, approaches and methods for the assurance of quality in higher education. The paper also investigates various models for measuring quality in higher education, considers their value and discusses both their shortcomings and their contributions to the assessment of higher education institutions. These models include the simple 'production model', which depicts a direct relationship between inputs and outputs; the 'value-added approach', which measures the gain made by students between entering and leaving higher education; and the 'total quality experience approach', which aims to capture the entire learning experience undergone by students during their years in universities or colleges.
Conceptions of Higher Education and Quality
'What counts as quality is contested' (Barnett, 1994, p. 68). Quality may mean different things to different people, who therefore demand different quality outcomes and different methods of assessing quality. Harvey and Green (1993) describe quality as a 'relative concept'. It is relative to the stakeholders in higher education:

Quality is relative to the user of the term and the circumstances in which it is invoked. It means different things to different people, indeed the same person may adopt different conceptualisations at different moments. This raises the issue of whose quality? (Harvey & Green, 1993, p. 10)
There are a variety of stakeholders in higher education, including students, employers, teaching and non-teaching staff, government and its funding agencies, accreditors, validators, auditors and assessors (including professional bodies) (Burrows & Harvey, 1992). Each of these stakeholders has a different view on quality, influenced by his or her own interest in higher education.
For example, to the committed scholar the quality of higher education is its ability to produce a steady flow of people with high intelligence and commitment to learning who will continue the process of transmission and advancement of knowledge. To the government a high-quality system is one that produces trained scientists, engineers, architects, doctors and so on in numbers judged to be required by society. To an industrialist a high-quality educational institution may be one that turns out graduates with wide-ranging, flexible minds, readily able to acquire skills and adapt to new methods and needs (Reynolds, 1990).
Each of these views represents a valid expectation of higher education and of its quality. The measurements thus required and the standards to be applied will surely be different for each of these notions of quality.
This idea resonates with what Barnett (1994) conceives as a threefold connection between different conceptions of higher education, different approaches to quality, and the identification of different outcome measures (which Barnett terms performance indicators, or PIs). Behind the various notions of what constitutes quality there lies, whether explicitly formed or tacitly held, a view as to the ends that higher education should serve. In turn, these prior conceptions will generate different methodologies for evaluating quality and, in particular, will call for alternative sets of outcome measures (PIs).
Barnett (1994) illustrates this interconnectedness between conceptions, approaches and outcomes in the context of four dominant contemporary conceptions of higher education. When higher education is conceived as the production of highly qualified manpower, graduates are seen as products whose career earnings and employment relate to the quality of the education they have received. When higher education is likened to training for a research career, the PIs become the research output of staff and students and the input measures of their research ability. The third conception is higher education as the efficient management of teaching provision. On this view, the PIs are efficiency indicators, such as completion rates, unit costs, student–staff ratios and other financial data. Finally, when higher education is conceived as a matter of extending life chances, the focus is on the participation rate or the percentage growth of students from under-represented backgrounds, including mature students, part-time students and disabled students.
These are four different, if overlapping, conceptions of the purposes of higher education. Each has its own definition of quality and a distinctive set of PIs associated with it. Common to all four conceptions is a view of higher education as a 'black box'. None of them focuses on, or indicates an interest in, the educational process or the quality of the learning achieved by the student. They ignore what goes on in the 'black box' and focus chiefly on inputs and outputs.
Barnett (1994) later contrasts these four conceptions with another four conceptions of higher education which focus, this time, on the quality of the student experience. The first conception is about exposing students to, or initiating them into, the process and experience of pursuing knowledge. The second is related to the development of students' autonomy and integrity. The third values the cultivation of students' general intellectual abilities, so that they can form perspectives and vision beyond the confines of a single discipline. The final conception of higher education is about the development of critical reason.
These four conceptions, unlike the previous four, do not lend themselves easily to evaluation by numerical quality measures such as PIs. The complexity and quality of the educational process and the student experience will not be readily captured by any form of objective measure using numbers and scores. Hence, the usefulness of performance indicators that focus primarily on inputs and outputs is very much in doubt.
Quality and Quality Measurement in Higher Education
In a similar vein, Harvey and Green (1993) conceive quality as a multifaceted notion which is value-laden in nature. Each stakeholder in higher education sees quality and its outcomes differently, resulting in a host of methods and approaches adopted to measure quality in the light in which each sees it.
There are widely differing conceptualisations of quality in use (Schuller, 1991). But Harvey and Green, in their discussion of the relationship between quality and standards in
higher education, identify five perceptions or notions of quality: quality as exceptional (linked with excellence and elitism), as perfection or consistency, as fitness for purpose, as value for money, and as transformative (interpreted as 'the enhancement and empowerment of students or the development of new knowledge') (Harvey, 1995; see also Harvey et al., 1992). Each of these notions of quality has implications for the methods and approaches used to measure the desirable outcomes that emanate from it.
There are problems raised by this pluralistic view of quality and its measurement:

• Who should define the purposes of higher education? Should it be the government, the students, the employers of students, the managers of institutions or the academic professionals?
• How would the conflicting views about higher education and quality be resolved in judging the quality of an institution? Who would determine the priorities? (Green, 1994, p. 15)
Barnett (1994) describes the quality debate between different groups of actors in higher education as a 'power struggle', in which each group fights for its voice to be heard and taken into account when assessments of quality are undertaken. Each of the different voices is valid and deserves serious exploration in its own right, but none can be the only legitimate voice to be heard. The challenge for any kind of performance evaluation is therefore to be framed so as to permit the equal expression of legitimate voices, even though they may conflict or compete in some ways.
As a result of the diversity in views about quality and higher education, a variety of
systems and approaches have been developed for monitoring quality of different kinds
and at different levels, displaying varied emphases and priorities. These monitoring
systems include the following.
Quality control is a system for checking whether the products produced or services provided have reached pre-defined standards. Quality is usually inspected at the end of the production process, and the inspection is undertaken by someone external to the workforce. The main problem with this approach to quality measurement in higher education is that it is done in isolation, ignoring the fact that the overall quality of a university must be the concern of everyone who works there (Frazer, 1992).
Quality assurance is a system based on the premise that everyone in an organisation has a responsibility for maintaining and enhancing the quality of the product or service. Put in the university context, quality assurance requires a whole-institution approach to a complete transformation to quality, involving top-level commitment followed by substantial and comprehensive re-education of all personnel (Chaffee & Sherr, 1992). The transformation requires time, effort and the willingness of everyone in the institution to change to a culture which is quality-driven and ever-improving.
When compared with the quality control system, quality assurance represents a more comprehensive approach to assessing and monitoring quality in higher education. Quality assurance requires not just the detection of defects, as in quality control, but also their prevention. It requires the commitment of everyone in the institution to an organisational culture that prizes quality and relentlessly improves in search of perfection. This, however, is very difficult to achieve and often remains a goal or philosophy that universities aspire to approach rather than fully attain.
Quality audit is a means of checking that relevant systems and structures within an institution support its key teaching mission, and of ensuring that provision is at or beyond
a satisfactory level of quality. A quality audit can be conducted either internally or externally. Audit checks that the university system does what it says it is going to do, and that it has written, documented evidence to prove it. The major criticism of audits is that they offer no more than a snapshot of an institution (Pearce, 1995). Educationists generally find audit distasteful (shallow, undemanding), since either the evidence of conformance to processes and procedures is there or it is not; there is no argument about it (Green, 1994).
Quality assessment is a means of assessing the quality of what is actually provided by institutions (Pearce, 1995). Green (1994) adds that quality assessment involves the judgement of performance against criteria, either internally or externally. This gives rise to a potential source of conflict, precisely because quality criteria for education are so difficult to agree (Keefe, 1992). Another potential problem with quality assessment is that it is usually intended to be mission-sensitive (Pearce, 1995). It examines the quality of educational provision against the expressed aspirations of the individual institution. If an institution has high aspirations, quality is to be measured against this yardstick. That might make it more difficult for such a university to succeed than for another which set itself lower aspirations. Taken to absurdity, a university which aspired to produce rubbish, and succeeded, would be of higher quality than a university which claimed intellectual excellence but narrowly failed (Pearce, 1995).
The indicator systems approach to evaluating universities compares performance across a range of indicators (Johnes & Taylor, 1990). There are several characteristics associated with performance indicators. First, a performance indicator should have a monitoring function. It can be defined as 'an item of information collected at regular intervals to track the performance of a system' (Fitz-Gibbon, 1990). Second, an indicator is usually quantitative (Cuenin, 1986). Third, performance indicators are objective-related; they are 'statements, usually quantified, on resources employed and achievements secured in areas relevant to the particular objectives of the enterprise' (CVCP/UGC, 1986).
The development of PIs in higher education can be traced back to manufacturing
industry and relates to the way in which inputs are transformed into outputs (Johnes &
Taylor, 1990). Put in the university context, the theory examines the relationship between
the outputs that universities aim to achieve and the inputs they need to produce those
outputs.
According to Johnes and Taylor (1990), if universities are to be evaluated, it is therefore
necessary to acquire information about:
1. the outputs which universities aim to produce;
2. the inputs which universities need to produce these outputs;
3. quantitative measurements of each university’s inputs and outputs;
4. the technical relationship between inputs and outputs.
Such emphasis on the link between inputs and outputs emanates from a political motive of comparing institutions to estimate what each university could have produced with the inputs available to it. This intention was made very explicit in one of the CNAA discussion papers (CNAA, 1990): among the various reasons for the development of PIs are the intentions to 'increase accountability' and to 'raise questions about planning intentions and assist in the deployment of resources'.
It is therefore apt for Johnes and Taylor (1990) to conclude that the purpose of
attempting to measure the technical relationship between inputs and outputs in the
university sector is actually to provide a benchmark against which each university can be
compared.
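To make the indicator-systems logic concrete, the short sketch below assembles a few efficiency indicators of the kind listed above (completion rate, unit cost, student–staff ratio) for two fictitious institutions and places them side by side as a crude benchmark. All figures, field names and the particular indicator formulas are illustrative assumptions for this sketch; they are not drawn from Johnes and Taylor (1990) or from the CVCP/UGC definitions.

```python
# Illustrative sketch only: hypothetical figures and naive indicator formulas,
# not the official CVCP/UGC performance indicator definitions.

universities = {
    "University A": {"entrants": 2000, "graduates": 1700,
                     "teaching_spend": 30_000_000, "staff": 400, "students": 6000},
    "University B": {"entrants": 1500, "graduates": 1100,
                     "teaching_spend": 18_000_000, "staff": 220, "students": 4800},
}

def efficiency_indicators(u: dict) -> dict:
    """Quantify inputs and outputs as three simple efficiency indicators."""
    return {
        "completion_rate": u["graduates"] / u["entrants"],   # output per unit of intake
        "unit_cost": u["teaching_spend"] / u["students"],    # input (spend) per student
        "student_staff_ratio": u["students"] / u["staff"],
    }

# Benchmarking here amounts to comparing the same indicators across institutions.
for name, data in universities.items():
    print(name, {k: round(v, 2) for k, v in efficiency_indicators(data).items()})
```

Note that nothing in such a comparison says anything about the educational process itself; it is precisely this 'black box' character of input–output indicators that the following paragraphs criticise.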
Despite its promises for greater accountability and benchmarking between institutions,
this production model of quality assessment does not quite apply to higher education, since universities produce more than one output. Moreover, many of the outputs are different in kind and are difficult or impossible to measure in monetary or even in physical units (Cave et al., 1988).
Johnes and Taylor (1990) identify a further problem with the application of the production model in the university sector. Inputs are often used to produce more than one output, and there is no obvious way of attributing specific inputs to specific outputs. The key difficulty with the input–output link is that the assumption of a link is reasonable when a homogeneous product is being produced; but when the outputs of higher education differ substantially in kind and quality, it becomes difficult to substantiate the link between inputs and outputs in the way it can be in the mechanistic world of production.
Further confounding the discussion of inputs and outputs in higher education is the fact that many outputs of universities are not amenable to quantitative measurement. Outputs such as 'cultivating the talents of students and disseminating cultural values' are common objectives of universities that do not lend themselves easily to quantitative representation.
This becomes a particular problem when process variables are to be included in the link between the outputs and inputs of higher education. Many process variables, such as teaching and curriculum effectiveness, are very difficult to measure and may not show a direct link between inputs and outputs.
Further, neither input nor output indicators can comment on the quality of the student experience in higher education. If higher education is seen as a developmental process of increasing the intellectual maturity and personal growth of students, it is difficult to see how performance indicators and input–output analysis can be of any help.
What can be concluded up to this point is that higher education is a process of causing
student learning and development, which is not amenable to any kind of simple input and
output analysis. The idea that institutions of higher education are founded on processes of
causing growth and development of students in a holistic sense, incorporating not just
intellectual growth, but social, emotional and cultural development as well, warrants
attention to the measurement of quality as a kind of 'transformation' (Harvey &
1993).
Quality as Transformation
The idea that higher education is about educational processes and the development of the minds and hearts of students resonates with the transformative view of quality espoused in the following quote:

The transformative view of quality is rooted in the notion of 'qualitative change', a fundamental change of form … Transformation is not restricted to apparent or physical transformation but also includes cognitive transcendence. (Harvey & Green, 1993, p. 24)
In addition to cognitive transcendence, Caul (1993) aptly adds that higher education does not just enhance students' intellectual capacity, but can also 'literally transform self-image, equip the individual with more skills, build on the basis of the knowledge that the individual had before arrival; change attitudes and assumptions' (Caul, 1993, p. 597). In this light, quality as transformation implies a change in students in all respects as a result of the higher education they receive.
There is other, similar terminology to describe the change in students' development caused by higher education, including 'growth' and 'impact' (Astin, 1985). All these terms imply that, to be considered excellent and to display quality in its provision, a university must bring about positive change in students in both cognitive and non-cognitive dimensions.
Hence, the performance evaluation of higher education should incorporate a consideration of the impact of the institution on its students. In the words of Alexander Astin:

Its basic premise is that true excellence lies in the institution's ability to affect its students … to make a positive difference in their lives. The most excellent institutions are … those that have the greatest impact … on the student's knowledge and personal development. (Astin, 1985, pp. 60–61)
Such an institutional-impact approach to the monitoring and evaluation of the performance of universities has, as a result, called upon a number of quality measurement methodologies that aim to capture the positive influence on students, or the 'value added' to them, as they pass through the system of higher education.
One of these methods is the popular 'value-added' approach of trying to measure the pre- and post-difference in students at different points in time:

Value-added education examines changes in students' performance over time. Students are assessed for entering competencies and then reassessed following the completion of appropriate courses or experiences. Differences between the initial and subsequent measures are then used as evidence of institutional impact. (McMillan, 1988, p. 564)
There is no doubt that the value-added approach to quality measurement is an advance on input–output analysis and its associated performance indicators. Compared with the simple input–output measure, the value-added method is more appealing because it tries to correct for differences in the quality of student input by measuring the competencies of students on entry to the university and subtracting these from their abilities on emerging at graduation:

The idea of measuring the value added to students is related to a shift from the traditional concept of quality as exceptional towards relative and transformative notions. (Harvey, 1995, p. 6)
The basic argument underlying the value-added approach is that true quality resides in the institution's ability to affect its students favourably, to make a positive difference in their intellectual and personal development. (Astin, 1982, p. 11)

Hence, what counts as quality is the contribution of higher education to the change in students.
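As a minimal illustration of the pre- and post-measurement logic described by McMillan (1988), the sketch below subtracts hypothetical entry scores from exit scores to produce per-student gain scores and a crude institutional average. The scores, the common scale they assume and the simple averaging are assumptions of this sketch, not a validated value-added methodology.

```python
# Illustrative sketch only: hypothetical scores and a naive gain-score calculation.

entry_scores = {"student_1": 52.0, "student_2": 61.0, "student_3": 47.0}
exit_scores = {"student_1": 68.0, "student_2": 70.0, "student_3": 66.0}

# Value added per student: competence measured at exit minus competence at entry.
gains = {s: exit_scores[s] - entry_scores[s] for s in entry_scores}

# Institutional 'impact' is then often summarised as the average gain.
average_gain = sum(gains.values()) / len(gains)

print(gains)         # per-student gain scores
print(average_gain)  # crude institution-level value-added estimate
```

The weakness discussed in the next paragraph is already visible in this arithmetic: the subtraction only makes sense if entry and exit scores sit on the same stable scale, which is exactly the assumption Barnett (1994) questions.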
Despite its many promises of better quality comparisons between institutions through the availability of gain scores and impact data, the value-added approach to performance assessment in higher education is fraught with problems. The fundamental problem is that value-added analysis assumes a stable relationship between students' performance at the points of entry and exit (Barnett, 1994). However, the purpose of higher education is to provide students with a new order of experience, to equip them with new frameworks of thought and action (Barnett, 1992). Hence, the assumption that there is a necessary relationship between students' attainments on entry and those at the point of exit is unsound.
Measurement of Total Quality Experience
In his criticism of performance indicators, Barnett (1994) comments that PIs can only tell us about past performance. In themselves, they cannot provide insight into the future or even suggest ways in which things ought to be modified or improved (Barnett, 1994, p. 76). This criticism also applies to the value-added and institutional-impact approaches to quality assessment in higher education, because they report mainly on the change already made in students and provide pre- and post-data to shed light on the institutional influence that has already taken place.
Value-added research and institutional-impact studies provide useful information about student differences over a period of time, but they cannot adequately explain what might have caused such differences, owing to the many technical difficulties just outlined. Further, both value-added and institutional-impact evaluations fail to get to the heart of the quality of the student experience per se. Their focus is still very much on the institutional aspect of quality rather than on what higher education is chiefly about. In higher education it is the student who primarily does the achieving. The institutional dimension of higher education, though a necessary dimension, should be subsidiary to the student dimension (Barnett, 1992).
Students are a necessary part of the concept of higher education; the role of institutions is simply to provide the conditions most favourable to promoting quality learning in students. Therefore, at the forefront of any consideration of quality in higher education should be the improvement of the student experience (Barnett, 1992).
Studies that investigate the numerous aspects of the student experience in higher education contribute to our knowledge of quality learning and of the institutional conditions required to promote quality learning in students. Research on the quality of the student experience requires an array of methods, including both quantitative and qualitative measures, to shed light on the experience per se and on the factors associated with particular aspects of it. These methods may involve measures of student achievement or standardised tests administered before and after students receive higher education, records of their involvement in certain courses or curricular choices, and other sources of information such as student interviews and surveys and the opinions of faculty and resident personnel. The causes of behavioural change in the institutional setting are complex and multidimensional, and if only one method of collecting data is used it is likely that conclusions based on the results will be oversimplified and misleading.
Conclusion
The discussion so far of quality in higher education and its measurement is premised on two important considerations: that the central activity of higher education is that of maximising the student's educational development; and that the continuing effort to maximise student learning and development remains the primary goal of universities and should be the focus of any concern with quality in higher education and its measurement.
Any measurement of quality and any performance evaluation in higher education that falls short of this centrality of the student's experience is bound to be peripheral, failing to provide information about how students find the experience and how much they are learning and progressing, both intellectually and emotionally, throughout their years in university.
There are contested views over quality and its measurement which inform the preferences of different stakeholders in higher education. To understand quality it is necessary
to recognise that it has contradictory meanings that can lead to different assessment
methods, and thus different practical outcomes.
References
ASTIN, A.W., 1982, 'Why not try some new ways of measuring quality?', Educational Record, Spring, pp. 10–15.
ASTIN, A.W., 1985, Achieving Educational Excellence (San Francisco, Jossey-Bass).
BARNETT, R., 1992, Improving Higher Education: Total quality care (Buckingham, SRHE/Open University Press).
BARNETT, R., 1994, 'The idea of quality: voicing the educational', in DOHERTY, G.D. (Ed.) Developing Quality Systems in Higher Education (London, Routledge).
BURROWS, A. & HARVEY, L., 1992, 'Defining quality in higher education: the stakeholder approach', paper to the AETT Conference on Quality in Education, University of York, 6–8 April.
CAUL, B., 1993, Value-Added: The personal development of students in higher education (Belfast, December Publications).
CAVE, M., HANNEY, S., HENKEL, M. & KOGAN, M., 1988, The Use of Performance Indicators in Higher Education: The challenge of the quality movement, 3rd edn (London, Jessica Kingsley).
CHAFFEE, E.E. & SHERR, L.A., 1992, Quality: Transforming postsecondary education, ASHE-ERIC Higher Education Report No. 3 (Washington, DC, George Washington University, School of Education and Human Development).
COMMITTEE OF VICE-CHANCELLORS AND PRINCIPALS OF THE UNIVERSITIES OF THE UNITED KINGDOM AND UNIVERSITY GRANTS COMMITTEE (CVCP/UGC), 1986, Performance Indicators in Universities: A first statement by a joint CVCP/UGC Working Group (London, CVCP).
COUNCIL FOR NATIONAL ACADEMIC AWARDS (CNAA), 1990, Performance Indicators and Quality Assurance, Information Services Discussion Paper 4, June (London, CNAA).
CUENIN, S., 1986, 'International study of the development of performance indicators in higher education', paper presented at OECD, IMHE Project, Special Topic Workshop.
FITZ-GIBBON, C., 1996, Monitoring Education: Indicators, quality and effectiveness (London, Cassell).
FRAZER, M., 1992, 'Quality assurance in higher education', in CRAFT, A. (Ed.) Quality Assessment in Higher Education: Proceedings of an international conference in Hong Kong, 1991 (London, Falmer Press).
GREEN, D., 1994, 'What is quality in higher education? Concepts, policy and practices', in GREEN, D. (Ed.) What is Quality in Higher Education? (Buckingham, SRHE & Open University Press).
HARVEY, L., 1995, 'Editorial', Quality in Higher Education, 1(1), pp. 5–12.
HARVEY, L. & GREEN, D., 1993, 'Defining quality', Assessment & Evaluation in Higher Education, 18(1), pp. 9–34.
HARVEY, L., BURROWS, A. & GREEN, D., 1992, Criteria of Quality. Quality in Higher Education Project (Birmingham, University of Central England in Birmingham).
JOHNES, J. & TAYLOR, J., 1990, Performance Indicators in Higher Education (Buckingham, SRHE & Open University Press).
KEEFE, T., 1992, 'The quality is strained', Times Higher Education Supplement, 11 December.
MCMILLAN, J.H., 1988, 'Beyond value-added education: improvement is not enough', Journal of Higher Education, 59(5), pp. 564–579.
PEARCE, R.A., 1995, Maintaining the Quality of University Education (Buckingham, University of Buckingham).
REYNOLDS, P.A., 1990, 'Is an external examiner system an adequate guarantee of academic standards?', in LODER, C.P.J. (Ed.) Quality Assurance and Accountability in Higher Education (London, Kogan Page).
SCHULLER, T. (Ed.), 1991, The Future of Higher Education (Milton Keynes, SRHE & Open University Press).