Adding Value to Performance Measurement by Using
System Dynamics and Multicriteria Analysis
Santos S, Belton V and Howick S
Research Paper No. 2001/19
Sérgio P. Santos¹ is a postgraduate doctoral student at the Management Science
Department; Valerie Belton is a Professor at the Management Science Department;
and Susan Howick is a Lecturer at the Management Science Department, Strathclyde
Business School, Glasgow.
Abstract
The design, implementation and use of adequate performance measurement and
management frameworks can play an important role if organisations are to succeed
in an increasingly complex, interdependent and changing world. Yet, despite
widespread recognition of the importance of performance assessment, there are some
issues that require further study if measurement systems are to be effective in
supporting the decision making process. This article argues that System Dynamics
and Multicriteria Decision Analysis can address some of these issues and ultimately
contribute to improving organisational performance. To support this claim, several
problems that cause performance measurement systems to fall short of their potential
are outlined, and a discussion of how System Dynamics and Multicriteria Analysis
can help organisations overcome these problems is presented.
Key words: Performance Measurement and Management, System Dynamics,
Multicriteria Decision Analysis, Integrating Methods.
Research News
Join our email list to receive details of when new research papers are published and
the quarterly departmental newsletter. To subscribe send a blank email to
managementscience-subscribe@egroups.com.
Details of our research papers can be found at
www.managementscience.org/papers.asp. Management Science, University of
Strathclyde, Graham Hills Building, 40 George Street, Glasgow, Scotland. Email:
mgtsci@mansci.strath.ac.uk Tel: +44 (0)141 548 3613 Fax: +44 (0)141 552 6686
E-mail address: sergio@mansci.strath.ac.uk
¹ Financial support for the research being carried out by the author has been gratefully received
from the Fundação para a Ciência e a Tecnologia, under grant SFRH/BD/823/2000.
1 Introduction
The environment within which most organisations operate is changing rapidly. Organisations
failing to adapt and respond to the complexity of the new environment tend to experience,
sooner or later, survival problems. In this climate of change, the development,
implementation and use of adequate performance measurement and management
frameworks is one of the major challenges confronting organisations and can play an
important role in their success.
In an attempt to address some of the criticisms of traditional systems presented by several
authors (see, for example, Kaplan (1983), Lynch and Cross (1991), Banks and Wheelwright
(1979), Fitzgerald and Moon (1996), Turney and Andersen (1989)), and to deal with a rapidly
changing environment, several performance measurement systems (PMS) have been
proposed in the last decade. The Balanced Scorecard (Kaplan and Norton, 1992), the
Performance Pyramid (Lynch and Cross, 1991) and the Results and Determinants
Framework (Fitzgerald et al. 1991) are only a few examples.
However, in spite of the availability of various approaches to developing PMS, it is
recognised that some issues deserve further research if measurement systems are to
succeed in supporting the decision making process and in improving organisational performance. The
key argument presented in this paper is that the use of System Dynamics (SD) and
Multicriteria Decision Analysis (MCDA) either individually or in an integrated way can provide
a useful framework in which to explore some of these issues, and consequently, to better
understand and ultimately improve organisational performance.
The paper is structured as follows. In Section 2 we discuss why PMS frequently fail
to support the decision making process, focusing particularly on issues where SD and
MCDA can be helpful. In Section 3 we outline the strengths of these two approaches to show
how their use can bring new insights to inform and support performance measurement and
management. An illustrative example from the health care sector is presented. Finally, in
Section 4 we conclude with some closing remarks.
2 Performance Measurement - Some Emerging Issues
Any review of the literature illustrates that performance measurement is a field attracting
considerable attention, and that some issues require further study if measurement systems
are to succeed in supporting the decision making process and in improving organisational
performance.
Several reasons may be put forward to explain why many efforts to improve
performance have not met with great success. Taking a holistic view of the field of
performance measurement, we can conclude that much of the work has focused on
designing measures and measurement systems, with less concern for the other stages of the
performance measurement and management process. However, once a measurement
system has been developed it has to be implemented and used. Both the implementation of
measurement systems and their use to manage organisational performance appear to be
areas in which progress has been limited to date (Neely (1999), Bourne et al. (2000)).
There is, indeed, evidence that the implementation of measurement systems is not a
straightforward task (Dumond, 1994). Neely et al. (2000) argue that this is mainly due to fear,
politics and subversion. Given that SD and MCDA have individually proved their potential to
inform and support decision making, acting as vehicles for reaching consensus, ownership
and commitment among decision makers, we believe that their effective use in the context of
performance measurement can facilitate the implementation of measurement systems.
Once the implementation phase has taken place, measurement systems have to be
used to manage organisational performance. Existing PMS tend to provide a large and
complex amount of information about the performance of the organisation and about
whether corrective actions are required. However, these systems provide decision makers
neither with tools to help them understand, organise and use such information, for example
to identify the causes of poor performance, nor with tools to help them evaluate and
select appropriate corrective actions. One of the most common complaints
made by practitioners is that PMS provide too much data and too little analysis (Neely et al.,
1995). Given the limited information processing capabilities of the human brain, and given
that, in most cases, understanding the causes of poor performance and determining
the proper action plan for performance improvement require detailed analysis of the structure
of the problem under study and the consideration of trade-offs, we believe that SD and
MCDA can also play a major role in these phases.
However, we believe that the support offered by these approaches extends beyond the
implementation stage and beyond a better analysis and use of the information resulting from
measurement. In particular, it is our belief that SD and MCDA can also provide very valuable
insights to address some weaknesses in the design of performance measurement systems.
For example, it is recognised that the identification of factors affecting performance and the
understanding of their relationships is an important step in PMS design. However, it is also
recognised that much more has to be done on this topic (Neely (1999), Flapper et al. (1996),
Bititci and Turner (2000)).
On the one hand, it is well known that unless the process of identifying appropriate measures
is understood, performance measurement frameworks will be of little practical value (Neely et
al., 2000). Despite this, practice shows that several organisations create performance
measures on an ad hoc basis (Flapper et al., 1996). On the other hand, it is recognised that
there are frequently conflicting performance measures, and that trade-offs among these
measures are therefore inevitable. Some of the best known performance measurement
frameworks (for example, the Balanced Scorecard, the Results and Determinants Framework
and the Performance Pyramid) emphasise the need for measurement systems to make explicit
the trade-offs between the various performance measures, but are vague about how to deal
with these trade-offs. In these frameworks, trade-offs are implicitly made through the selection
of a balanced or multidimensional set of performance measures, but suggestions on how to
make the trade-offs explicit in practice are not offered. We believe that the use of an
appropriate MCDA procedure can be helpful in this context. The need for additional research
dealing with the trade-offs among performance measures is, indeed, recognised in the
literature (see, for example, Ittner and Larcker, 1998).
We believe that both the identification of appropriate performance measures and the explicit
consideration of trade-offs between them can be significantly assisted if the relationships
among measures are mapped and understood. However, it is curious to note that with a few
exceptions (for example, Kaplan and Norton acknowledge the inter-relationships), little
consideration is given in the literature to the relationships between performance measures.
Trying to identify factors affecting performance and their relationships, Suwignjo et al. (2000)
used cognitive maps. However, we believe that cognitive maps alone do not allow
participants to fully understand the interconnections between these factors due to the
existence of non-linear interactions between them, delays, feedback loops and other
elements that give rise to dynamic complexity. To deal with the dynamic complexity inherent
in social systems and to infer dynamic behaviour, quantitative simulation is required (Senge
(1990), Sterman (1989a, 1989b)). Consequently, we believe that the use of qualitative
diagrams (for example, causal loop diagrams) and their translation into simulation models
using the SD approach can enrich the analysis and provide very useful insights for the
design of measurement systems.
In summary, it is our belief that if performance measurement is to lead to enduring and
continuous performance improvement, then the different stages of the performance
measurement and management process (design of measurement systems, their
implementation, analysis and use) should form a continuous loop (Figure 1). Moreover, we
also believe that the use of SD and MCDA can bring new insights to inform and support the
different stages of this process, helping decision makers to close the loop.
Fig. 1 The life cycle of the performance measurement and management process:
DESIGN (what, when and how to measure?) → MEASURE (what happened?) →
ANALYSE (why did it happen?) → PLAN (what-if … which actions to adopt?)
It is important to note that the process of performance measurement and management
should be iterative and not a linear sequence of steps. This is indicated by the arrows in the
centre of the diagram. Additionally, measurement systems should be dynamic, evolving over
time. Since the organisation’s environmental conditions are constantly changing, and new
strategies need to be developed to cope with these changes, the system proposed must be
regularly monitored and updated. This is, however, an issue on which further research efforts
are required. Most organisations have only static performance measurement systems, and
much of the work currently ongoing in the field of performance measurement is static
in orientation (Bititci and Turner (2000), Neely (1999), Suwignjo et al. (2000), Waggoner et
al. (1999)). Given that performance measures, as well as their importance to stakeholders,
change over time, measurement systems need to be sensitive to these changes. We
believe that the SD and MCDA approaches allow decision makers to systematically review
and update the measurement system, taking these changes into consideration. On the one
hand, SD models can help decision makers gain insights into the system's behaviour over
time, which may prove very valuable in reviewing and updating the PMS. On the other hand,
the use of an MCDA procedure allows decision makers to review and reprioritise the weights
for each performance measure, reflecting how important a performance measure is to the
decision maker at a given moment in time.
3 Adding Value to Performance Measurement
In this section we give a brief overview of SD and MCDA and build on the strengths of
these approaches to show how their use can bring new insights to inform and support
performance measurement and management. In particular, we discuss how these
approaches can support the design, implementation, analysis and use of measurement
systems.
3.1 System Dynamics
System dynamics was conceived and developed in the late 1950s and early 1960s at the
Massachusetts Institute of Technology by Jay Forrester. Indeed, the advent of SD is
generally considered to be the publication of Forrester's pioneering book, Industrial
Dynamics in 1961. Since then, significant advances have been made, and a cursory
examination of the literature indicates that the number of organisations using SD models for
the development of both strategic and operational policies is growing rapidly. An overview of
SD can be found, for example, in Forrester (1961), Richardson and Pugh (1981) and
Sterman (2000).
SD models are frequently developed and used to represent, analyse and explain the
dynamics of complex systems. The dynamics or behaviour of a system is defined by its
structure and the interactions of its parts. The main goal of SD is to understand, through the
use of qualitative and quantitative models, how this behaviour is produced, and to use this
understanding to predict the consequences over time of policy changes on the system.
Although SD models can help decision makers enhance their understanding of system
behaviour over time, they do not concern themselves with the explicit evaluation of
this behaviour. That is, a pattern of behaviour is frequently presented as preferable to
another based only on the modeller's intuition (Gardiner and Ford, 1980). Some effort has
been devoted since the early 1980s to the optimisation of system dynamics models
(see, for example, Coyle 1985). Despite this, it is recognised that evaluating, and choosing
between, alternative courses of action is not a straightforward task. In the context of SD, as in
many others, the decision maker is confronted with a large and complex amount of
information, usually of a conflicting nature and reflecting multiple interests. Consequently, the
use of an appropriate MCDA approach can be very valuable in assisting decision makers to
organise such information in order to identify a preferred course of action (see, for example,
Belton 1985).
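As an illustration of the kind of quantitative SD model discussed above, the following sketch simulates a single stock of patients in hospital, fed by an admission inflow and drained by a discharge outflow, using simple Euler integration. The structure and all parameter values (bed capacity, referral rate, duration of treatment) are hypothetical, chosen only to show the mechanics, and are not drawn from any of the models cited in this paper.

```python
# A minimal system dynamics sketch: one stock (patients in hospital) with an
# admission inflow and a discharge outflow, integrated with Euler's method.
# Structure and parameter values are hypothetical, for illustration only.

def simulate(weeks=52, dt=0.25,
             capacity=500.0,             # beds available (assumed)
             duration_of_treatment=2.0,  # average stay in weeks (assumed)
             referrals_per_week=240.0):  # demand for admission (assumed)
    patients = 400.0       # initial stock of patients in hospital
    waiting_list = 100.0   # initial stock of patients waiting
    history = []
    for _ in range(int(weeks / dt)):
        discharges = patients / duration_of_treatment        # outflow, per week
        spare_beds = max(capacity - patients, 0.0)
        # inflow: limited by the waiting list and by bed availability
        admissions = min(waiting_list / dt, spare_beds / dt + discharges)
        patients += (admissions - discharges) * dt           # Euler integration
        waiting_list += (referrals_per_week - admissions) * dt
        waiting_list = max(waiting_list, 0.0)
        history.append((patients, waiting_list))
    return history

final_patients, final_waiting = simulate()[-1]
print("patients in hospital after a year: %.0f" % final_patients)
print("patients on the waiting list:      %.0f" % final_waiting)
```

Even this toy structure reproduces the goal-seeking behaviour typical of a balancing loop: the stock of patients settles at the level where admissions balance discharges.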
3.2 Multiple Criteria Decision Aid
MCDA is now some 30 years old and is an important area of Operations Research/Management
Science. Since the first session devoted to multicriteria analysis at a scientific congress,
organised by Roy during the 7th Mathematical Programming Symposium held in
The Hague in 1969, the field of MCDA has seen remarkable growth. On the one hand, important
theoretical results have been achieved leading to the development of several multicriteria
methods. On the other hand, the number of real world applications documented in the
literature is increasing considerably. A synthesis of the main streams of thought in this field
can be found in Belton and Stewart (2001) or Mollaghasemi and Edwards (1997).
MCDA is designed to take explicit account of multiple, and usually conflicting, objectives in
supporting the decision process. In this way, MCDA methodologies can help decision
makers to learn about the problems they face, and consequently to make better informed
and justifiable choices. This is a view shared by many prominent researchers in the field (see
for example, Belton (1990), French (1988), Goodwin and Wright (1998) and von Winterfeldt
and Edwards (1986)).
That is, just as one of the principal benefits arising from the use of an SD model
is to enable the decision maker to gain a greater understanding of the system of interest, one
of the main advantages of using an MCDA approach is the learning which occurs
about the problem faced and the alternative courses of action. Furthermore, the use of an
MCDA approach enables the decision maker to develop an explicit evaluation process,
which might be used to justify and explain to others why a particular option was selected
(Belton (1990) and Goodwin and Wright (1998)).
3.3 Using SD and MCDA to Support the Performance Measurement and Management
Process
The purpose of this section is to show that SD and MCDA, used individually or in an
integrated way, can provide very useful insights in support of the performance
measurement and management process. In what follows, using a simple illustrative
example from the health care sector, we discuss how this support can be realised.
In particular, we describe how SD and MCDA can assist and add value to each of the
stages in the life cycle of the performance measurement and management process
diagrammed in Figure 1.
3.3.1 Design
It is widely accepted that effective performance measurement systems should provide
decision-makers with information about the degree to which organisational objectives are
achieved and how well the organisation is performing its tasks. To get this information, an
appropriate set of performance measures is required. However, the issue of which
performance measures a given organisation should adopt is not a straightforward one.
Although the design of performance measures has been widely discussed in the literature
(for an exhaustive review of the literature see Neely et al. 1997), there is no consensus
concerning the best way to develop performance measures. It is however recognised by
several authors that PMS should align performance measures with the strategic objectives of
the organisation (see for example, Globerson (1985), Kaplan and Norton (1992) and Lynch
and Cross (1991)). In this way, it is assured that the system will provide information on
whether these strategic objectives are being successfully implemented or not, and
additionally, it is assured that if corrective actions are required, measures consistent with
these objectives will be adopted. Yet, despite this, it is recognised that several organisations
develop performance measures on an ad hoc basis, without taking into consideration the
relationships between measures. It is also recognised that, even when more structured
frameworks for performance measurement are adopted, little guidance is provided on how
appropriate measures can be identified (Neely et al. 2000) and on how to capture a holistic
view of the system being assessed (Sloper et al. 1999).
We believe that one way to assure this alignment is to start with the organisation’s overall
objective or strategy (which is usually too broad for managers to evaluate how well it is being
achieved), and to decompose it to a level where it can be easily assessed, that is, into
performance measures.
Using the example of Hospital Trusts, let us suppose that the strategic orientation is 'to
promote effective delivery of high quality services'. We must recognise, however, that such
an objective is too broad for managers to evaluate how well it is being achieved by Hospital
Trusts. Therefore, in order to evaluate its achievement we need to break it down into lower-
level objectives. We know, for example, that in order to achieve higher levels of performance
with respect to the 'effective delivery of high quality services', Hospital Trusts must ensure
fairer provision of services, improved value for money, better health and so on (see, for
example, Department of Health (1997) and NHS Executive (1999)). However, it is important
to note that although the identification of these factors or dimensions constitutes an
important step in the performance assessment process, they do not immediately provide a
workable framework for detailed evaluation of the performance of Hospital Trusts. To
effectively and thoroughly assess, for example, whether a Hospital Trust is improving the
health of the population, whether it is providing clinically effective health care, or whether it is
ensuring that people's ability to obtain health care is related to their needs, these dimensions
should be broken down further. For example, to assess the 'efficiency' of a Trust, that is, the
way in which a Trust uses its resources to achieve value for money, several performance
measures might be defined. Day surgery rates, length of stay in hospital, unit costs and
labour productivity are only a few examples.
Several tools or facilitative processes can be used to foster creative thinking in order to
identify these performance measures. We can anticipate, for example, that the use of Post-
Its, complemented with qualitative maps to structure the ideas generated, may be very
valuable at this stage (Figure 2). Causal Loop Diagrams (CLDs) appear to be an effective
tool for structuring, in a more formal way, the ideas or performance measures which have
emerged from the use of Post-Its. CLDs are an important tool for representing the feedback
structure of systems. Later in this section the details of how this is accomplished are
presented. Sterman (2000) presents some guidelines for drawing CLDs.
Fig. 2 Qualitative Map
Given that in most cases a wide range of performance measures is generated, it can
be necessary and worthwhile to bring some structure to this list of measures. The use of a
hierarchy, or performance measures' tree as we will refer to it (Figure 3), can help in
structuring these measures and can be equally useful in forming the basis of a multi-attribute
value function analysis.
Fig. 3 Performance Measures' Tree (Level 1: the overall objective 'effective delivery of high
quality services'; Level 2: the performance dimensions of efficiency, patient experience, fair
access and health outcomes; Level 3: performance measures such as the day case rate,
duration of treatment in hospital, patients with operations cancelled for non-medical reasons,
patients' waiting time, admission rates, patients in the waiting list, readmissions and
avoidable deaths)
It is important to emphasise that the generation of a proper set of performance measures is
an iterative process which should not finish with the design of a 'first' performance measures'
tree. Notice that, while this tree provides information about the links between performance
measures and performance dimensions, and between these dimensions and overall system
performance, it neither shows how performance measures interact with one another nor
provides significant insights into possible intervention or leverage points. The use of both
qualitative and quantitative SD modelling (Wolstenholme, 1990) can play a very valuable role
in fostering this understanding.
For example, the use of CLDs can play a fundamental role in this phase for several reasons.
The CLD in Figure 4 is based on a model originally developed by Coyle (1984) and
illustrates some of these reasons. First, it gives a clear picture of the different
elements of the problem and the interconnectedness between them (cause and effect,
feedback loops, delays and so on). For example, the performance measure 'waiting time' is
considered a proxy measure of the patient/carer experience with Hospital Trusts. Lower
values reflect a better experience for patients and, consequently, a higher satisfaction level.
Figure 4 shows how this measure interacts with others. For example, an increase in the
admission rate will tend to lead to a decrease in the time that patients have to wait until they
are seen by a doctor. Conversely, an increase in the size of the waiting list should result in
an increase in the waiting time, other factors remaining constant. Notice that the use of CLDs
makes it possible to identify feedback loops, and it is the interaction between these loops that
determines the dynamics of the system. The use of CLDs also allows the identification of
intervention points or policy levers that can be used to control the performance of Hospital
Trusts. For example, to increase the likelihood of achieving the desired level for the
performance measure stated previously, a possible course of action is to reduce the duration
of treatment. As can be seen in Figure 4, reducing the length of stay of patients in hospital
would lead to an increase in the discharge rate and to a decrease in the number of patients
in hospital, allowing higher admission rates and, ultimately, shorter average waiting times.
However, Figure 4 also shows that while shortening the duration of treatment might lead to
shorter waiting times, it also increases the likelihood of inappropriate discharges. It is,
therefore, likely that some of these patients will be readmitted for further treatment,
generating a future increase in the waiting list. Furthermore, while the NHS may in part
control the performance of certain variables, there are others whose control is beyond
Hospital Trusts. The CLD shows that many factors, some of which are outside the direct
control of hospitals, such as GP referrals or the capacity in community care, have an
important bearing on the performance achieved by a particular hospital. That is, the CLD
reveals, on its own, that to improve performance in the health care sector a thorough
understanding of the problem under study is necessary. Control and co-ordination of a
variety of activities, carried out by various organisational groups, is required (Ballantine et
al., 1998). This section also intends to show that this control and co-ordination can be made
much easier if participants benefit from the insights provided by SD simulation modelling.
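The trade-off just described can also be explored quantitatively. The sketch below is a hypothetical stock-flow experiment, not Coyle's model itself: it assumes, purely for illustration, that cutting the duration of treatment below two weeks raises the fraction of discharges that return as readmissions, and then compares the waiting list reached under different treatment durations.

```python
# What-if sketch of the trade-off above: shortening the duration of treatment
# raises throughput, but is assumed here to raise the fraction of
# inappropriate discharges that return as readmissions. The linear
# readmission rule and all parameter values are hypothetical.

def waiting_list_after(duration, weeks=52, dt=0.25,
                       capacity=500.0, referrals_per_week=240.0):
    # assumed: readmissions grow as treatment is cut below 2 weeks
    readmission_fraction = max(0.0, 0.30 * (2.0 - duration))
    patients, waiting_list = 400.0, 100.0
    for _ in range(int(weeks / dt)):
        discharges = patients / duration
        spare_beds = max(capacity - patients, 0.0)
        admissions = min(waiting_list / dt, spare_beds / dt + discharges)
        # readmissions feed back into demand for admission
        demand = referrals_per_week + readmission_fraction * discharges
        patients += (admissions - discharges) * dt
        waiting_list = max(waiting_list + (demand - admissions) * dt, 0.0)
    return waiting_list

for duration in (2.0, 1.5, 1.0):
    print("duration %.1f weeks -> waiting list %.0f"
          % (duration, waiting_list_after(duration)))
```

In this toy parameterisation the shorter treatments end the year with longer waiting lists: the reinforcing readmission loop eventually outweighs the balancing effect of faster discharges, which is exactly the dynamic the CLD suggests but cannot, on its own, quantify.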
Fig. 4 Causal Loop Diagram
The process of identifying and structuring performance measures in this way offers several
advantages. First, it ensures that performance measures are designed in line with the
strategic orientation of Hospital Trusts. As a result, these measures, directly or indirectly,
provide information on whether the strategy defined in stage one is being successfully
implemented and, additionally, encourage behaviours consistent with this strategy (Neely,
1999). Moreover, it ensures that if corrective actions are required, measures consistent
with these objectives will be adopted. Second, going through the process helps to clarify
people's thinking on the subject and on their objectives. This leads to a clearer
understanding of what should be measured, why and how, and provides insights for better
decisions. Third, it provides the basis of a multi-attribute value function analysis that may be
carried out not only to assess how well a hospital trust is performing, but also to support the
decision making process if policy options need to be analysed and evaluated.
[Figure 4 depicts a causal loop diagram linking waiting time, capacity utilisation, discharge
rate, admission rate, waiting list, patients in hospital, hospital capacity, duration of treatment,
medical opinion, readmissions, readmissions fraction, new referrals and community care
capacity through positive and negative causal links.]
Having identified the performance dimensions and measures which the decision makers
consider relevant to evaluating organisational performance, the next step is to set targets
and to find out how well the organisation is achieving them. These targets can be set in
different ways. One possibility, which we believe is useful in the case of Hospital Trusts, is
to establish targets based on a range of 'acceptable performance'. The upper limits of this
range may reflect, for example, industry benchmarks, and the lower limits may represent the
worst tolerable performance for each measure (see Table 1). It is important to emphasise
that both the publication of 'league tables' for some hospital performance measures and the
increased understanding obtained through the development and use of SD models can
assist decision-makers in setting these boundaries.
Table 1 Performance Targets

Performance Measures          | Admission | Patients in  | Day Case | Duration of | Avoidable
                              | Rates     | Waiting List | Rate     | Treatment   | Deaths
------------------------------|-----------|--------------|----------|-------------|----------
Best attainable performance   |           |              |          |             |
Achieved performance          |           |              |          |             |
Worst tolerable performance   |           |              |          |             |
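As a sketch of how such targets might be operationalised, the function below maps achieved performance linearly onto a 0-100 scale anchored at the worst tolerable (0) and best attainable (100) levels; the anchor figures used in the examples are invented purely for illustration.

```python
# Target-based scoring sketch: map achieved performance onto a 0-100 scale
# anchored at the worst tolerable (0) and best attainable (100) levels.
# All anchor figures below are hypothetical, for illustration only.

def score(achieved, worst_tolerable, best_attainable):
    """Linear 0-100 score; works whether higher or lower values are better,
    because the two anchors simply swap for 'lower is better' measures."""
    value = 100.0 * (achieved - worst_tolerable) / (best_attainable - worst_tolerable)
    return max(0.0, min(100.0, value))  # clip to the acceptable range

# 'day case rate' (%): higher is better (assumed anchors 50 and 80)
print(score(achieved=68, worst_tolerable=50, best_attainable=80))  # -> 60.0
# 'duration of treatment' (days): lower is better (assumed anchors 10 and 4)
print(score(achieved=6, worst_tolerable=10, best_attainable=4))    # approx. 66.7
```

A linear mapping is only the simplest choice; as noted below, non-linear value functions can be used instead where equal steps in a measure are not equally valuable.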
3.3.2 Measurement
Once the targets have been defined, it is possible to assess how well the hospital trust is
performing against each individual measure, as well as to gain an overall view of the
performance of the hospital under consideration by aggregating these measures into a few
dimensions of performance or into a single indicator of overall performance. It is, however,
very likely that the set of measures identified will be composed of multiple and
heterogeneous measures of performance which cannot easily be reduced to a single
dimension. Given that MCDA approaches have proved their potential in integrating multiple
heterogeneous measures into a single or a few key indicators of overall performance, we
believe that their use is well suited to the present context.

The procedure we propose for carrying out this analysis (that is, to quantify the factors in
performance and to arrive at an indicator of overall performance) makes use of a
hierarchical, weighted additive value function and is supported by the software
VISA (Visual Interactive Sensitivity Analysis). VISA is a multicriteria decision support
system based on a multi-attribute value function. Belton and Vickers (1990) provide an
overview of the use of a simple multi-attribute value function incorporating VISA. Keeney
and Raiffa (1976) and von Winterfeldt and Edwards (1986) explain the multi-attribute
value function in detail.
Notice that at this stage we have already identified the performance measures and
corresponding targets which the decision makers consider relevant to evaluating the
performance of the Hospital Trust. Therefore, the next step is to find out how well the
Hospital Trust performs on each of the lowest-level measures in the tree (Figure 5).
There are many possible ways to evaluate, or score, the performance of the hospital with respect to the performance measures defined. One possible procedure is to compare the actual performance of the hospital trust against the targets defined in Table 1 and score it on a normalised 0-100 global scale, on which the 0 and 100 points are defined by the worst tolerable and best attainable levels for each performance measure. The scoring can be carried out through direct rating or by using value functions. Having scored the performance of the hospital trust with respect to all the measures at level 3 of the performance measures tree, the next stage is to weight those measures to reflect their relative importance to the performance dimensions at level 2. As with scoring, there are many possible ways of weighting performance measures. Whichever procedure is adopted, it is important that the range over which each measure is assessed is taken into account when assigning importance weights. The weights for the higher-level measures (performance dimensions) in the value tree are found by summing the appropriate lower-level weights. These weights can be assessed either by direct comparison of the performance dimensions at level 2 or by selective comparisons of performance measures at level 3. Once these weights are defined, we are in a position to find out how well the hospital trust performs in each performance dimension and how it performs overall. This is done by using a hierarchical weighted value function.
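As an illustration, the scoring and hierarchical aggregation just described can be sketched in a few lines of code. The measures, targets and weights below are hypothetical, and the linear rescaling stands in for whatever value functions the decision makers agree on; this is a minimal sketch, not the VISA implementation itself.

```python
# Minimal sketch of a hierarchical weighted additive value function.
# Measures, targets and weights are hypothetical, not taken from the case study.

def score(actual, worst, best):
    """Linearly rescale a raw performance level to a 0-100 value score,
    where `worst` is the worst tolerable and `best` the best attainable level."""
    s = 100 * (actual - worst) / (best - worst)
    return max(0.0, min(100.0, s))

# Level 3 measures: (value score, weight within the parent dimension).
measures = {
    "Efficiency": {
        "bed_occupancy_%": (score(82, 70, 95), 0.6),
        "day_case_rate_%": (score(55, 40, 75), 0.4),
    },
    "Patient outcomes": {
        "readmission_rate_%": (score(9, 15, 4), 0.5),   # lower is better: worst > best
        "waiting_time_weeks": (score(14, 26, 4), 0.5),
    },
}

# Level 2 weights for the performance dimensions.
dimension_weights = {"Efficiency": 0.4, "Patient outcomes": 0.6}

# Aggregate level 3 into dimension scores, then dimensions into an overall score.
dimension_scores = {
    dim: sum(s * w for s, w in vals.values()) for dim, vals in measures.items()
}
overall = sum(dimension_scores[d] * dimension_weights[d] for d in dimension_scores)
print(dimension_scores, round(overall, 1))
```

The same structure extends directly to deeper trees: each node's score is the weighted sum of its children, exactly as the weights for higher-level measures are obtained by summing lower-level weights.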
The Alternative Window in Figure 5 shows the actual performance of the hospital trust
against the different performance measures. These performances are converted, within the
VISA model, into value scores on a 0 to 100 scale (Figure 5A). Each of the performance measures (Figure 5B) and performance dimensions (Figure 5C) is weighted to reflect trade-offs acceptable to the individual or group of decision makers from whom the weights are elicited. The profile graphs show how the hospital performs against each measure and dimension: each performance measure (or dimension) is represented by a vertical line, and the performance of the hospital trust is illustrated by the point at which the line depicting its performance crosses the performance measure's (or dimension's) line.
Fig. 5 VISA Analysis
Measuring the hospital's performance in this way offers several advantages. First, as can be seen in Figure 5, scores are directly related to the nature of each measure and, where appropriate, they reflect non-linearities in the value scales that may exist for some measures. Second, as stressed previously, MCDA approaches allow heterogeneous measures to be integrated into a single or a few key indicators of overall performance. Third, a thorough sensitivity analysis can be carried out to explore how robust the overall score obtained by the Hospital Trust is to changes in the inputs to the model, particularly to changes in priorities and values.
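The kind of weight sensitivity analysis referred to here can be illustrated as follows: a minimal one-way analysis in which one dimension's weight is varied while the remaining weights are rescaled proportionally. The dimension scores and base weights are hypothetical.

```python
# Sketch of one-way sensitivity analysis on a dimension weight.
# Dimension value scores (0-100) and base weights are hypothetical.
dim_scores = {"Efficiency": 46.0, "Patient outcomes": 54.5, "Access": 61.0}
base_weights = {"Efficiency": 0.3, "Patient outcomes": 0.45, "Access": 0.25}

def overall(weights, scores):
    """Weighted additive overall score."""
    return sum(weights[d] * scores[d] for d in scores)

def vary_weight(dim, new_w, weights):
    """Set `dim`'s weight to new_w; rescale the others so weights still sum to 1."""
    rest = 1.0 - weights[dim]
    return {d: (new_w if d == dim else w * (1.0 - new_w) / rest)
            for d, w in weights.items()}

# How does the overall score respond as the weight on Efficiency varies?
for w in (0.0, 0.2, 0.4, 0.6, 0.8, 1.0):
    print(w, round(overall(vary_weight("Efficiency", w, base_weights), dim_scores), 1))
```

Plotting the overall score against the varied weight, as VISA does interactively, shows at a glance whether the hospital's assessment is robust over the plausible range of that weight.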
3.3.3 Analysis
It is important to note that the procedures carried out so far using VISA allow us not only to look individually at the scores on each of the performance measures, but also to evaluate how the hospital scores in each of the performance dimensions and over all the performance measures. This information is very valuable for assessing how well the hospital is performing. By scoring and reporting results, decision makers can identify where performance has been strong and where improvement is required. But this information is of little or no help in driving hospital trusts if it is seen as an end in itself. That is, although these scores allow decision makers to know how the hospital compares with the defined targets on a range of measures, and therefore to know what is working well and what is not, this does not provide a strong basis from which to manage effectively for improvement. To be effective, a performance measurement and management system should support decision making, informing decision makers, among other things, about the causes of poor performance and about which actions to implement to obtain effective and appropriate change.
Many PMS have fallen short of their potential because they fail to provide decision makers with the understanding and support necessary to do this. However, identifying the causes of problems and developing appropriate solutions is frequently a difficult process for the unaided decision maker.
The focus on the causal structure of problems and the search for leverage points in the system are among the strengths that make SD an appropriate approach for fostering understanding of the processes underlying performance generation and for identifying the factors most likely to lead to change. The use of qualitative SD based on causal loop diagrams (CLDs) (see Figure 4) and quantitative SD based on computer simulation (see next section) can therefore be a very valuable exercise to help decision makers gain a greater understanding of how the organisation is performing and why. Notice that SD modelling not only helps to explain what has happened, but can also provide very valuable insights about what might be about to happen next.
3.3.4 Planning
A simulation model can play a vital role in testing and comparing alternative actions to improve a system's performance. In some cases, if participants do not have access to a simulation model with which to test and design policy actions, there is a danger that the selected policies will worsen the problem instead of correcting it. In other cases, even if a given situation improves as a result of adopting a given course of action, the improvement may be only temporary and, consequently, other policies could have been more effective. For example, Wolstenholme (1999) demonstrates through the use of SD modelling that an increase in hospital bed capacity is not the most effective solution for reducing total patient waiting times. He illustrates that, when bed capacity is increased, although more patients are admitted to hospital, the effect is temporary. As soon as the new capacity is full, the number of patients in hospital wards stabilises and the pre-hospital waiting time increases again. Furthermore, counter-intuitive behaviour may occur as a consequence of this type of policy. For example, the additional bed capacity introduced can stimulate more demand for hospital treatment or, at least, encourage more GP referrals to hospital. To understand the dynamic complexity inherent in these situations, an SD simulation model is required. By developing and running such a model, participants can understand the stock and flow structure of systems and observe the changes that occur over time in the variables of interest. Stock and flow diagrams are usually used as a basis for developing the simulation model. As with CLDs, stock and flow diagrams show relationships among variables. However, unlike CLDs, stock and flow diagrams distinguish between different types of variables: stocks, flows and information.
Fig. 6 Stock and flow diagram
As Figure 6 illustrates, SD uses a particular notation for stock and flow diagrams. Stocks (also known as levels or state variables) are accumulated quantities describing the condition or state of the system; stocks would remain, and be measurable, even if all the flows in the system were frozen at a moment in time. Flows (alternatively called rates) are the changes to the stocks that occur over a period of time. Systems consist of networks of stocks and flows linked by information feedback from the stocks to the flows. As can be seen in Figure 6, information links pass the values of other variables to auxiliary variables. Auxiliary variables contain calculations based on other variables, and they are often defined to make the model easier to understand. The cloud-like symbols mark the boundaries of the model: they represent the sources from which flows arise and the sinks into which they vanish when these lie outside the boundary of the system being modelled.
The entire structure of a system can be represented using these types of icons and by defining the connections between them. The stock and flow diagram in Figure 6, for example, constitutes a simplified representation of the way in which patients flow from the community into the NHS Hospital Trusts and back into the community. From the stock and flow diagram we can also see how different performance measures interact. However, one of the major advantages of this diagram is that it can be used as a basis for developing an SD simulation model, which makes it possible to represent the system's internal structure explicitly; this structure is often the underlying source of the problem. Thus, by finding and modifying this system structure, we are able to improve organisational performance in the most effective way. Lebas (1995) suggests that understanding the processes underlying performance is the only way to define the measures that lead to appropriate actions. According to Lebas, only when we understand which step in the process is defective can effective corrective action be designed.
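To indicate how such a diagram becomes a running model, the following sketch simulates a much-reduced patient-flow structure by Euler integration, the numerical scheme underlying most SD packages. The parameter values are hypothetical, and the formulation (for example, the capacity-constrained admission rate) is our own simplification, not the model used in the case study.

```python
# Euler simulation of a simplified patient-flow stock and flow structure.
# Parameters and equations are hypothetical, a reduced version of Figure 6.
dt = 0.25                      # time step (months)
horizon = 36                   # months simulated

waiting_list, in_hospital, in_community = 200.0, 90.0, 400.0   # stocks
hospital_capacity = 100.0      # beds
duration_of_treatment = 1.0    # average stay in hospital (months)
community_stay = 6.0           # average stay in community care (months)
fraction_readmitted = 0.15
new_referrals = 80.0           # patients/month

for _ in range(int(horizon / dt)):
    # Flows are computed from the stocks (the information links in the diagram).
    discharge = in_hospital / duration_of_treatment
    free_beds = max(0.0, hospital_capacity - in_hospital)
    admission = min(waiting_list / dt, free_beds / dt + discharge)
    leaving = in_community / community_stay
    readmission = fraction_readmitted * leaving

    # Each stock accumulates its net flow over the interval dt.
    waiting_list += (new_referrals + readmission - admission) * dt
    in_hospital += (admission - discharge) * dt
    in_community += (discharge - leaving) * dt

waiting_time = waiting_list / admission   # rough indicator (months)
print(round(waiting_list), round(in_hospital), round(waiting_time, 2))
```

Even this toy version reproduces the qualitative behaviour discussed above: while the hospital is full, admissions are capped by discharges, the waiting list drains only slowly, and readmissions from the growing community-care stock feed back into the waiting list.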
[The diagram in Figure 6 comprises the variables Waiting_List, In_Hospital, In_Community_Care, New_Referrals, Admission_Rate, Discharge_Rate, Readmission_Rate, Community_Care_Leaving_Rate, Hospital_Capacity, Duration_of_Treatment, Fraction_Readmissions and Waiting_Time.]
Using SD modelling, several alternative actions can be simulated and their impact on the performance measures of the system tested. For example, alternative plans or schemes for the allocation of resources between the many stages of the patient flow process can be tested using the simulation model. However, as stressed previously, the selection of the ‘best’ action plan is not straightforward. Stakeholders have different and often conflicting objectives and, as a result, trade-offs must be made. Given that the decision maker is confronted with a large and complex amount of information, and given that it is likely that none of the alternative courses of action will optimise all performance measures, we believe that the use of an appropriate MCDA approach can be very valuable in assisting the decision process.
Suppose, for instance, that the results of measurement indicate that the hospital trust is performing poorly with respect to the size of waiting lists for elective surgery. How can the hospital reduce waiting lists? This issue clearly involves multiple, conflicting objectives, and it is very likely that there will be alternative courses of action or strategies to be considered. For example, to reduce the size of inpatient waiting lists, one possible action is to reduce the length of treatment in hospital. However, this would increase the likelihood of inappropriate discharges and, consequently, the number of re-admissions. That is, to improve one performance measure (for example, the size of waiting lists) we have to sacrifice the performance of another (for example, the effectiveness of treatment). Figure 7 shows two hypothetical policy alternatives and some of the impacts that are to be evaluated.
[Figure 7: time-series plots of the size of the waiting list (per 1,000 head of population) and of the readmission rate (%) over 36 months under policy alternatives A1 and A2.]
Fig. 7 Size of waiting list and readmission rates under policies 1 and 2
As we can see, although both alternatives might produce an improvement in the size of the waiting list, they might increase the number of re-admissions. It very often happens that none of the alternative courses of action under consideration is able to optimise all
performance measures, given that some of them are in conflict. In order to decide which policy alternative is better, an MCDA procedure can be applied. In this case, the information resulting from running the simulation model can be passed to the multicriteria model and explicitly evaluated. That is, the alternatives to be considered for analysis and evaluation by the MCDA approach are the different plans of action suggested by those with expert knowledge in the area, and the criteria for the evaluation of these plans may initially be taken to be the performance measures presented in Table 1. The use of the SD and MCDA approaches at this stage offers two obvious advantages. On the one hand, SD modelling allows decision makers to verify, through the use of ‘microworlds’ and consequently in a risk-free environment, the effect of different actions on a system’s performance measures over time. On the other hand, using MCDA, decision makers can develop an explicit evaluation process for these actions.
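The hand-over from simulation to multicriteria evaluation can be sketched as follows. The end-of-horizon indicator values for the two policies are hypothetical, standing in for results read off simulation runs such as those in Figure 7, and the weights are purely illustrative.

```python
# Sketch of passing simulated outcomes to a simple additive MCDA evaluation.
# Indicator values for the two policies are hypothetical, standing in for
# end-of-horizon results taken from the SD simulation runs.

# Per criterion: (simulated value, worst tolerable, best attainable).
outcomes = {
    "Policy A1": {"waiting_list_size": (14.0, 30.0, 10.0),
                  "readmission_rate_%": (16.0, 20.0, 5.0)},
    "Policy A2": {"waiting_list_size": (18.0, 30.0, 10.0),
                  "readmission_rate_%": (9.0, 20.0, 5.0)},
}
weights = {"waiting_list_size": 0.55, "readmission_rate_%": 0.45}

def value(actual, worst, best):
    """Rescale a simulated outcome to a 0-100 value score."""
    return max(0.0, min(100.0, 100 * (actual - worst) / (best - worst)))

scores = {p: sum(weights[c] * value(*outcomes[p][c]) for c in weights)
          for p in outcomes}
best_policy = max(scores, key=scores.get)
print(scores, best_policy)
```

Note how the trade-off becomes explicit: A1 does better on the waiting list, A2 on readmissions, and which policy scores higher overall depends on the weights the decision makers are prepared to accept.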
4 Closing remarks
Improving the performance of an organisation is not a straightforward task. It is frequently a complex and poorly defined problem whose solution often requires a process of organisational learning, enabling decision makers to change the way they think and act and, consequently, to make more effective use of the available information.
It is therefore our belief that approaches which allow decision makers to identify and understand the causes of poor performance, to understand the implications of alternative courses of action before they become operational, and which help them to evaluate and eventually select appropriate corrective actions, can provide very valuable insights in support of the process of performance measurement and management.
SD and MCDA are two modelling approaches which have individually proved their potential to inform and support decision making. This paper has sought to demonstrate that there is also a clear potential for these approaches to be employed in support of performance measurement and management. We believe that the integration of these approaches, bringing together their complementary strengths, can provide a valuable tool for understanding and informing decisions about organisational performance. However, we also believe that if this integration is to be implemented successfully, some associated technical and conceptual problems have to be addressed (see, for example, Andersen and Rohrbaugh 1992). Investigating, theoretically and empirically, the effects of integrating SD and MCDA in the context of performance measurement, and how to integrate these two approaches in the most efficient and effective ways, is one of the goals of the research being carried out by the authors at the University of Strathclyde. Forthcoming papers will document this ongoing research.
Some issues which prevent organisations from getting the most from their PMS were discussed in this paper. While the use of SD and MCDA may not be the solution to all of these issues, we believe it brings new insights to inform and support the different stages of the performance measurement and management process. In particular, we believe it is worth consideration for several reasons:
- it allows the design of a measurement system aligned with the strategic objectives of the organisation;
- the factors affecting performance and their interrelationships can be explicitly identified;
- it provides a way of creating a consistent and integrated set of performance measures;
- it offers a powerful framework for analysing the ways in which changes in a system's performance occur;
- trade-offs between the different performance measures and dimensions are explicitly addressed;
- it empowers and involves individuals;
- finally, and as a consequence of all the previous reasons, these approaches provide powerful tools for organisational learning.
REFERENCES
Andersen, D. F. and J. Rohrbaugh (1992), “Some Conceptual and Technical Problems in
Integrating Models of Judgement with Simulation Models”, IEEE Transactions on
Systems, Man, and Cybernetics, 22(1), 21-34.
Ballantine, J., S. Brignall and S. Modell (1998), “Performance measurement and
management in public health services: a comparison of U.K. and Swedish practice”,
Management Accounting Research, 9, 71-94.
Banks, R. L. and S. C. Wheelwright (1979), “Operations versus strategy: trading tomorrow for today”, Harvard Business Review, May-June, 112-120.
Belton, V. (1985), "The Use of a Simple Multiple-Criteria Model to Assist in Selection from a
Shortlist", Journal of the Operational Research Society, 36, 265-274.
Belton, V. (1990), "Multiple criteria decision analysis: Practically the only way to choose", in
Hendry, L. and R. Eglese (Eds.), Operational Research Tutorial Papers, Operational
Research Society, Birmingham, 53-101.
Belton, V. and T. J. Stewart (2001), Multiple Criteria Decision Analysis: An Integrated
Approach, Kluwer Academic Publishers, Boston.
Belton, V. and S. Vickers (1990), "Use of a Simple Multi-Attribute Value Function
Incorporating Visual Interactive Sensitivity Analysis for Multiple Criteria Decision
Making", in Bana e Costa, C.A. (Ed.), Readings in Multiple Criteria Decision Aid,
Springer-Verlag, Berlin, 319-334.
Bititci, U. S. and T. Turner (2000), “Dynamics of performance measurement systems”,
International Journal of Operations & Production Management, 20(6), 692-704.
Bourne, M., J. Mills, M. Wilcox, A. Neely and K. Platts (2000), “Designing, implementing and
updating performance measurement systems”, International Journal of Operations &
Production Management, 20(7), 754-771.
Coyle, R. G. (1984), “A systems approach to the management of a hospital for short-term patients”, Socio-Economic Planning Sciences, 18(4), 219-226.
Coyle, R. G. (1985), “The use of optimization methods for policy design in a system
dynamics model”, System Dynamics Review, 1(1), 81-91.
Department of Health (1997), “The New NHS: Modern, Dependable”, London: DoH. (Available on http://www.open.gov.uk/doh/newnhs/newnhs.htm).
Dumond, E.J. (1994), “Making best use of performance measures and information”,
International Journal of Operations & Production Management, 14(9), 16-31.
Fitzgerald, L. and P. Moon (1996), Performance Measurement in Service Industries: Making it Work, CIMA.
Fitzgerald, L., R. Johnston, S. Brignall, R. Silvestro and C. Voss (1991), Performance
Measurement in Service Business, CIMA, London.
Flapper, S. D., L. Fortuin and P. P. Stoop (1996), “Towards consistent performance
management systems”, International Journal of Operations & Production Management,
16(7), 27-37.
Forrester, J. W. (1961), Industrial Dynamics, MIT Press, Cambridge, Massachusetts.
French, S. (1988), Decision Theory: An Introduction to the Mathematics of Rationality, Ellis
Horwood, Chichester.
Gardiner, P. C. and A. Ford (1980), “Which Policy Run is Best, and Who Says So?”, in
System Dynamics: TIMS Studies in the Management Sciences, Legasto, A. A., J. W.
Forrester and J. M. Lyneis (Eds.), North-Holland, Amsterdam, 14, 241-257.
Globerson, S. (1985), "Issues in developing a performance criteria system for an organization", International Journal of Production Research, 23(4), 639-646.
Goodwin, P. and G. Wright (1998), Decision Analysis for Management Judgement, John Wiley & Sons, Chichester, Chapter 2.
Ittner, C. D. and D. F. Larcker (1998), “Innovations in Performance Measurement: Trends
and Research Implications”, Journal of Management Accounting Research, 10, 205-238.
Kaplan, R. S. (1983), “Measuring Manufacturing Performance: A New Challenge for
Managerial Accounting Research”, The Accounting Review, LVIII(4), 686-705.
Kaplan, R. S. and D. P. Norton (1992), “The Balanced Scorecard – Measures That Drive Performance”, Harvard Business Review, Jan/Feb, 71-79.
Keeney, R. L. and H. Raiffa (1976), Decisions with Multiple Objectives: Preferences and
Value Tradeoffs, Cambridge University Press, United Kingdom.
Lebas, M. J. (1995), “Performance measurement and performance management”,
International Journal of Production Economics, 41, 23-35.
Lynch, R. L. and K. F. Cross (1991), Measure Up! The Essential Guide to Measuring Business Performance, Mandarin, London.
Mollaghasemi, M. and J. P. Edwards (1997), Making Multiple-Objective Decisions -
Technical Briefing, IEEE Computer Society Press, California.
Neely, A. (1999), “The performance measurement revolution: why now and what next?”,
International Journal of Operations & Production Management, 19(2), 205-228.
Neely, A., H. Richards, J. Mills, K. Platts and M. Bourne (1997), "Designing performance
measures: a structured approach", International Journal of Operations & Production
Management, 17(11), 1131-1152.
Neely, A., J. Mills, K. Platts, H. Richards, M. Gregory, M. Bourne and M. Kennerly (2000),
"Performance measurement system design: developing and testing a process-based
approach", International Journal of Operations & Production Management, 20(10),
1119-1145.
Neely, A., M. Gregory and K. Platts (1995), “Performance measurement system design: A literature review and research agenda”, International Journal of Operations & Production Management, 15(4), 80-116.
NHS Executive (1999), “Quality and Performance in the NHS: High Level Performance
Indicators”, London: NHS Executive. (Available on
http://www.doh.gov.uk/indicat/nhslpi.htm).
Richardson, G. P. and A. L. Pugh III (1981), Introduction to System Dynamics Modeling with
DYNAMO, Productivity Press, Cambridge, Massachusetts.
Senge, P. M. (1990), The Fifth Discipline: The Art & Practice of the Learning Organization,
Doubleday Currency, New York.
Sloper, P., K.T. Linard and D. Paterson (1999), "Towards a Dynamic Feedback Framework
for Public Sector Performance Management", International System Dynamics & ANZSYS
Conference.
Sterman, J. D. (1989a), “Misperceptions of Feedback in Dynamic Decision Making”,
Organizational Behavior and Human Decision Processes, 43(3), 301-335.
Sterman, J. D. (1989b), “Modeling Managerial Behavior: Misperceptions of Feedback in a
Dynamic Decision Making Experiment”, Management Science, 35(3), 321-339.
Sterman, J.D. (2000), Business Dynamics: Systems Thinking and Modeling for a Complex World, McGraw-Hill, London.
Suwignjo, P., U. S. Bititci and A. S. Carrie (2000), “Quantitative models for performance
measurement system”, International Journal of Production Economics, 64, 231-241.
Turney, P. B. and B. Anderson (1989), “Accounting for Continuous Improvement”, Sloan
Management Review, Winter, 37-47.
Von Winterfeldt, D. and W. Edwards (1986), Decision Analysis and Behavioral Research.
Cambridge University Press.
Waggoner, D. B., A. D. Neely and M. P. Kennerley (1999), “The forces that shape
organisational performance measurement systems: An interdisciplinary review”,
International Journal of Production Economics, 60/61, 53-60.
Wolstenholme, E.F. (1990), System Enquiry: A System Dynamics Approach, John Wiley & Sons, Chichester.
Wolstenholme, E.F. (1999), "A patient flow perspective of U.K. Health Services: Exploring
the case for new "intermediate care" initiatives", System Dynamics Review, 15(3), 253-
271.
... The effective use of performance information to improve operations is a complex task that can be difficult without the support of appropriate tools. This suggests that they be encouraged to understand the real causes of poor performance and determine the appropriate action plan, along with a detailed analysis of the structure of the problem under study and consideration of trade-offs, and to circumvent the lack of ability to effectively process all the information needed to develop and implement more coherent and well-informed action plans by decision makers (Santos et al. 2002). ...
... The impact profile demonstrates competitive performance in all three criteria (orange band). According to Santos et al. (2002) the performance evaluation model should provide an understanding of the real causes of poor performance and thereby determine an appropriate action plan. The model developed by means of the MCDA-C allows, by means of the constructed scale and the reference levels, to identify the real cause of the poor performance. ...
Article
Full-text available
The performance evaluation models proposed in the scientific literature to support the decision-making process in the context of sustainability in Higher Education Institutions (HEIs) present gaps with respect to the design process. In relation, to the management of environmental education in HEIs, there is an absence of decision support models. In this context, the objective of the research is to build a model for evaluating the performance of environmental education for an undergraduate course at a public university. It is a case study, with data collection through interviews with the Course Coordinator, complemented by questionnaires and documental analysis. The intervention instrument used was the Multicriteria Methodology for Decision Aiding-Constructivist (MCDA-C). The main results were explored showing the process of building a performance evaluation model, considering the singularity of the context, the flexibility in the elaboration process and interactivity with different stakeholders. Additionally, efforts were focused on the presentation of the final assessment model, demonstrating the potential of the MCDA-C methodology as a practical tool to support the decision-making process, and on the discussion of the model developed in relation to the literature reviewed. The model built allows the decision maker to understand the environmental education intertwined with the course, to assess the current situation and the desired end state, as well as the necessary actions for its management. In addition to the constructivist perspective, the model meets the Stakeholder Theory; explains the advantages, using participatory approach methodologies and performance indicators have characteristics of a functional system.
... To rank order options based on their scores against criteria Belton and Stewart (2002); Salo and Hämäläinen (2010); Phillips (2007) Reagan- Cirincione et al. (1991); Santos et al. (2004;; Brans et al. (1998) As mentioned, after getting into contact with a client, one of the first tasks of the analyst is to agree on a goal and outline of the process. There is a danger here that the initial problem formulation of the contact client restrict the goal of the modelling project. ...
... The rank order can then be tested for its sensitivity to changes in scores, weights or aggregation method. Santos et al. (2004; compare MCDA and system dynamics. One obvious difference is that the first offers a static view and the latter a dynamic perspective. ...
Conference Paper
Full-text available
System dynamics projects in organizational contexts are impacted by power and politics. Case studies show how decision makers' interests influence both the modelling process as well as the implementation of recommendations. The system dynamics literature offers little in terms of conceptual understanding or empirical research focused on the impact of power. This paper discusses definitions of power, and its interaction with rationality and consensus. This is contrasted with system dynamics based organisational interventions. System dynamicists are encouraged to speak truth to power. They place themselves in the role of scientists, building decision makers' conceptual understanding and helping to identify policies that improve overall system functioning. An alternative theoretical perspective emphasizes the wide availability and accessibility of model-based decision support. The motivated information processing approach adds that sharing and processing information is motivated not only by finding the best solution for the group or organisation, but also by self interest. System dynamics lacks an understanding of the competent proself motivated decision maker. The focus on overall system functioning places individual interests and stakeholder relations in the background. To more directly capture these elements of power, a range of intervention methods complementary to system dynamics is described. This paper may help to ground system dynamics interventions in relevant literature and rethink practice.
... Systems thinking is a natural candidate for such an effort. It is a way of investigating the behavior of systems over time [13] using a top-down approach to represent them and reveal insights into how potential strategies could drive their functions. For that reason, it has been applied to industries and organizations to investigate how digital transformation and sustainable development can be achieved. ...
Article
Full-text available
The discourse surrounding digital transformation (DT) and sustainable development (SD) is pervasive in contemporary business and organizational operations, with both processes considered indispensable for sustainability. The success or failure of these endeavors hinges significantly on factors such as the behavior and skill sets of individuals within organizations. Thus, the purpose of the paper is twofold: to investigate the perceptions of organizations on digital transformation and sustainable development with regards to skills and education, and, secondly, to use the insights from these perceptions as a starting point for the use of systems thinking as a tool that could assist in achieving these states. To achieve the objective, a research effort was conducted that included desktop research, interviews with experts, and the development of a survey that was disseminated across Europe with questions on digital transformation and sustainable development. Finally, a general causal loop diagram was designed, illustrating the processes of digital transformation and sustainable development within organizations from a top-down view. The study reveals commonalities between DT and SD, recognizing both processes as advantageous with shared deficiencies in specific skill sets. It highlights a synergistic relationship between initiating DT and fostering SD activities. Furthermore, the research underscores the temporal aspects of these processes, acknowledging delayed positive effects and immediate implementation costs that challenge decision-makers to balance long-term benefits with short-term viability. In conclusion, the exploration emphasizes the dynamic nature of DT and SD, urging continual attention to the evolving landscape and the imperative for a shared understanding within organizational contexts.
... Identified conflicting objectives (Fig. 7) originate in different scopes of city functions. As mentioned above, CACC tends to focus on concrete spatially explicit adaptation measures, ignoring wider impacts on socioeconomic systems, while assessments of SUD consider the city and its functions as a whole (Santos et al., 2002). Further, it becomes clear that SUD ignores vulnerability assessments, and most indicators do not cover the vulnerability dimensions addressed by CACC. ...
Article
Full-text available
Current adaptation responses to sea-level rise tend to focus on protecting existing infrastructure resulting in unsustainable adaptation pathways. At the same time, urban development compromises a city's adaptive capacity if the climate risk component is ignored. While fighting for the same space, these two domains are currently widely analyzed separately. This paper develops a framework for integrating sustainability assessments of sustainable urban development (SUD) and coastal adaptation to climate change (CACC). Through a systematic literature review, we collected more than 2,700 indicators for SUD and 1,800 indicators for CACC. The indicators occurring most frequently are extracted and structured into frameworks. The study highlights the differences and similarities between the two frameworks. We further identify complementary and conflicting objectives that can advance or inhibit the effective integration of SUD and CACC. CACC tends to focus on assessing specific adaptation measures and their immediate impact on the city's vulnerability, ignoring wider impacts on socioeconomic systems. SUD considers the city and its functions as a whole but ignores vulnerability assessments across urban subsystems. We develop a combined framework for sustainability assessment that may serve as a basis for both qualitative and quantitative integrated studies under the paradigm of sustainable adaptation.
Article
Purpose This paper aims to design a hybrid model of knowledge-based performance management system (KBPMS) for facilitating Lean Six-Sigma (L6s) application to increase contractor productivity without compromising human safety in Indonesian upstream oil field operations that manage ageing and life extension (ALE) facilities. Design/methodology/approach The research design applies a pragmatic paradigm by employing action research strategy with qualitative-quantitative methodology involving 385 of 1,533 workers. The KBPMS-L6s conceptual framework is developed and enriched with the Analytical Hierarchy Process (AHP) to prioritize fit-for-purpose Key Performance Indicators. The application of L6s with Human Performance Modes analysis is used to provide a statistical baseline approach for pre-assessment of the contractor’s organizational capabilities. A comprehensive literature review is given for the main pillars of the contextual framework. Findings The KBPMS-L6s concept has given an improved hierarchy for strategic and operational levels to achieve a performance benchmark to manage ALE facilities in Indonesian upstream oil field operations. To increase quality management practices in managing ALE facilities, the L6s application requires an assessment of the organizational capability of contractors and an analysis of Human Performance Modes (HPM) to identify levels of construction workers’ productivity based on human competency and safety awareness that have never been done in this field. Research limitations/implications The action research will only focus on the contractors’ productivity and safety performances that are managed by infrastructure maintenance programs for managing integrity of ALE facilities in Indonesian upstream of oil field operations. Future research could go toward validating this approach in other sectors. 
Practical implications: This paper discusses the implications of developing the hybrid KBPMS-L6s enriched with AHP methodology and the application of HPM analysis to achieve a 14% reduction in inefficient working time, a 28% reduction in supervision costs, a 15% reduction in schedule completion delays, and a 78% reduction in safety incident rates across Total Recordable Incident Rate (TRIR), Days Away Restricted or Job Transfer (DART) and Motor Vehicle Crash (MVC), as evidence of achieving fit-for-purpose KPIs safer, better, faster, and at lower cost.
Social implications: This paper does not discuss social implications.
Originality/value: This paper demonstrates a novel use of a knowledge-based system with the integration of AHP and HPM analysis to develop a hybrid KBPMS-L6s concept that increases contractor productivity without compromising human safety performance while implementing an ALE facility infrastructure maintenance program in upstream oil field operations.
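The AHP step mentioned in the abstract derives priority weights for KPIs from pairwise comparison judgments. A minimal sketch of that computation is below; the three KPIs and the judgment values are hypothetical illustrations, not taken from the paper.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three KPIs
# (productivity, safety, cost), using Saaty's 1-9 scale.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

# Priority weights: principal right eigenvector, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = eigvecs[:, k].real
weights = weights / weights.sum()

# Consistency ratio: CR = ((lambda_max - n) / (n - 1)) / RI,
# with random index RI = 0.58 for n = 3.
n = A.shape[0]
lambda_max = eigvals.real[k]
cr = (lambda_max - n) / (n - 1) / 0.58

print(weights)   # descending priorities for the three KPIs
print(cr < 0.1)  # Saaty's rule of thumb: CR below 0.1 is acceptable
```

In practice the comparison matrix would be elicited from the subject-matter experts, and judgments with a consistency ratio above 0.1 would be revisited before the weights are used.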
Article
Purpose: The purpose of this research is to present an integrated methodological framework to aid the performance stewardship of management institutions according to their strategies, based on a holistic evaluation encompassing social, economic and environmental dimensions.
Design/methodology/approach: A Mamdani fuzzy inference system (FIS) approach was adopted to design the quantitative models with respect to balanced scorecard (BSC) perspectives to demonstrate dynamic capability. Individual models were developed for each perspective of the BSC using Mamdani FIS. Data was collected from subject matter experts in management education.
Findings: The proposed methodology is able to successfully compute the scores for each perspective. Effective placement, the teaching-learning process, faculty development and systematic feedback from stakeholders were found to be the key drivers of revenue generation. The model is validated, as the results were well accepted by the head of the institution after implementation.
Research limitations/implications: The model resulting from this study will assist the institution to cyclically assess its performance, thus enabling continuous improvement. The strategy map provides the causality of the objectives across the four perspectives to help practitioners strategize better. This study also contributes to the literature on the BSC as well as to the applications of multi-criteria decision-making (MCDM) techniques.
Originality/value: The Mamdani FIS integrated BSC model is a significant contribution to the academia of management education for quantitatively computing the performance of institutions. This quantified model reduces the ambiguity for practitioners in deciding the performance levels for each metric and the priorities of metrics.
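A Mamdani FIS of the kind described maps crisp inputs through fuzzy rules to a defuzzified score. The hand-rolled sketch below illustrates the mechanics for one BSC perspective; the inputs (stakeholder feedback, placement), membership functions and rules are invented for illustration and are not the paper's model.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function over universe x."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Output universe: a perspective score on 0-100.
x = np.linspace(0.0, 100.0, 1001)
score_low  = tri(x, -1.0, 0.0, 50.0)
score_med  = tri(x, 25.0, 50.0, 75.0)
score_high = tri(x, 50.0, 100.0, 101.0)

def grade(v):
    """Memberships of a crisp input in [0, 1] to {low, med, high}."""
    u = np.array([v], dtype=float)
    return (tri(u, -0.5, 0.0, 0.5)[0],
            tri(u, 0.0, 0.5, 1.0)[0],
            tri(u, 0.5, 1.0, 1.5)[0])

def mamdani(feedback, placement):
    """Min for AND, max for OR, min-implication on each consequent,
    max-aggregation, then centroid defuzzification."""
    f_lo, f_md, f_hi = grade(feedback)
    p_lo, p_md, p_hi = grade(placement)
    agg = np.maximum.reduce([
        np.minimum(min(f_lo, p_lo), score_low),   # IF both low   THEN score low
        np.minimum(max(f_md, p_md), score_med),   # IF either med THEN score medium
        np.minimum(min(f_hi, p_hi), score_high),  # IF both high  THEN score high
    ])
    return float((x * agg).sum() / agg.sum())

print(round(mamdani(0.9, 0.9), 1))  # high inputs give a score above the 50 midpoint
```

A full implementation would add one rule base per BSC perspective and elicit the membership functions from the subject-matter experts rather than fixing them by hand.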
Article
The paper explores the potential role of Machine learning (ML) in supporting the development of a company’s Performance Management System (PMS). In more detail, it investigates the capability of ML to moderate the complexity related to the identification of the business value drivers (methodological complexity) and the related measures (analytical complexity). A second objective is the analysis of the main issues arising in applying ML to performance management. The research, developed through an action research design, shows that ML can moderate complexity by (1) reducing the subjectivity in the identification of the business value drivers; (2) accounting for cause-effect relationships between business value drivers and performance; (3) balancing managerial interpretability vs. predictivity of the approach. It also shows that the realisation of such benefits requires a combined understanding of the ML techniques and of the performance management model of the company, in order to frame and validate the algorithm in light of the context in which the organisation operates. The paper contributes to the literature analysing the role of business analytics in the field of performance management and provides new insights into the potential benefits of introducing an ML-based PMS and the issues to consider to increase its effectiveness.
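The trade-off the abstract describes between interpretability and predictivity can be illustrated at the interpretable end of the spectrum: a standardized linear model whose absolute coefficients rank candidate value drivers. The driver names, data and coefficients below are synthetic, invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic monthly observations for three hypothetical value drivers.
n = 200
drivers = rng.normal(size=(n, 3))           # columns: delivery, defects, training
performance = (3.0 * drivers[:, 0]          # strong driver
               + 0.5 * drivers[:, 1]        # weak driver
               + 0.0 * drivers[:, 2]        # irrelevant driver
               + rng.normal(scale=0.5, size=n))

# Standardize the features, then fit OLS; the absolute standardized
# coefficients give a simple, interpretable driver ranking.
Z = (drivers - drivers.mean(0)) / drivers.std(0)
X = np.column_stack([np.ones(n), Z])
coef, *_ = np.linalg.lstsq(X, performance, rcond=None)
importance = np.abs(coef[1:])

names = ["on_time_delivery", "defect_rate", "training_hours"]
for name, w in sorted(zip(names, importance), key=lambda t: -t[1]):
    print(f"{name}: {w:.2f}")
```

A more predictive but less interpretable alternative would replace the linear model with a tree ensemble and read driver relevance from its feature importances; the paper's point is precisely that this choice must be framed by the company's performance management model.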
Chapter
Problems with the performance of U.S. manufacturing firms have become obvious in recent years. Japanese and Western European manufacturers are able to produce higher quality goods with fewer workers and lower inventory levels than comparable U.S. firms. The ability of foreign firms to become more efficient producers has gone largely unnoticed in the education and research programs of many U.S. business schools. A much greater commitment to understanding the factors critical to the success of manufacturing firms is needed. While an understanding of the determinants of successful manufacturing performance will require contributions from many disciplines, accounting can play a critical role in this effort. Accounting researchers can attempt to develop non-financial measures of manufacturing performance, such as productivity, quality, and inventory costs. Measures of product leadership, manufacturing flexibility, and delivery performance could be developed for firms bringing new products to the marketplace. Expanded performance measures are also necessary for capital budgeting procedures and to monitor production using the new technology of flexible manufacturing systems. A particular challenge is to de-emphasize the current focus of senior managers on simple, aggregate, short-term financial measures and to develop indicators that are more consistent with long-term competitiveness and profitability.