Human Factors in
Requirements Engineering
a method for improving requirements processes
for the development of dependable systems
A thesis submitted to Lancaster University for the degree of
Ph.D. in Computer Science
October 1999
Stephen Alexandre Viller B.Sc., M.Sc.
Computing Department
Human Factors in Requirements Engineering: a method for improving requirements
processes for the development of dependable systems
A thesis submitted to Lancaster University for the degree of Ph.D. in Computer Science
Stephen Alexandre Viller B.Sc., M.Sc. October 1999
Abstract
This thesis presents a novel integration of a broad body of work in a variety of human
science communities into a method for understanding and improving requirements
processes, particularly for the development of safety-critical systems. The work
contributes to a broader understanding of human error than exists in any one of the
contributing communities alone, and brings this understanding to bear upon the
endeavour of improving software processes in general and requirements engineering
(RE) processes in particular, which are especially prone to human error. The work is
multidisciplinary in nature, and makes direct contributions to several areas of computer
science, combining issues of requirements, human factors, safety, and process
improvement. In doing so, the work:
• Contributes to Human–Computer Interaction (HCI) research on safety and human error.
• Informs RE methods which address issues of safety.
• Extends safety considerations to encompass the human factors issues of RE.
• Extends considerations of process improvement to RE.
The thesis takes a highly practical, applied view of requirements and safety-critical
systems, and pragmatically applies findings from a diverse human
sciences literature. The method developed in the thesis (called PERE, for Process
Evaluation in RE) was evaluated through application in a number of industrial settings.
At the heart of PERE is the Human Factors Checklist, which encapsulates the human
sciences literature in the form of vulnerabilities to error, and potential defences which may
be put in place against them. In concert with a more mechanistic form of process
analysis, PERE embodies a multi-perspective, or viewpoint-oriented, approach to
understanding and improving RE processes.
Declaration
I declare that the work presented in this thesis is, to the best of my knowledge and
belief, original and my own work. The material has not been submitted, either in whole
or part, for a degree at this or any other university.
Stephen A. Viller
Acknowledgements
When a thesis takes as long to complete as this one has done, there are inevitably many,
many people to thank for their help, encouragement and support along the way.
First of all, thanks to Tom Rodden, my supervisor, for managing to convince me that
this work is good enough, always making time when it was necessary, for keeping up the
nagging when others might have given up on the lost cause, and for recruiting others to
nag when he wasn’t around.
Thanks also to John Bowers, who was a central source of inspiration during the
formulation of the thesis, and provided academic guidance and supervision for the
human factors aspects of the work in particular. This thesis would not look the way it
does without his involvement at various points during its development.
I would also like to thank the other REAIMS people at Aerospatiale, Adelard, RWTÜV,
and GEC Alsthom, and fellow Lancaster REAIMSers Ian Sommerville and Pete Sawyer,
all of whom provided feedback and helped to shape this work in a number of ways.
Lancaster has proved to be the ideal setting for me to conduct this work, and the biggest
factor in this is the people I have worked and socialised with. I would like to thank the
various people I’ve worked alongside over the years in CSEG and in the nebulous
grouping that is the Centre for Research in CSCW. You all know who you are. A special
mention is due to Gareth Smith, longest-suffering of the various people I have shared an
office with in my time here (of which there have been many), and to John Mariani, for
being John Mariani.
A number of people have been consistent sources of friendship and support throughout
my academic career. Particular thanks are due to Eevi Beck, James Pycock, Richard
Bentley, Keith Smith, Karen Long, and Martin Lea. I would also like to thank my
parents and the rest of my family for their continued support and interest, even when
what I was doing didn’t make much sense to them.
Leaving the best until last, thank you Abigail. For still being there, for listening, for
believing that it really would finish, for putting up with me when I couldn’t do anything
else because of my thesis, for being a partner in every sense, and for giving birth to
Helen, who will hopefully get to know her dad a bit better now that he’s finished his
PhD.
Contents
Abstract ............................................................................................................................ii
Declaration......................................................................................................................iii
Acknowledgements.........................................................................................................iv
Contents...........................................................................................................................v
List of figures.................................................................................................................xiii
List of tables ................................................................................................................... xv
1. Introduction............................................................................................................1
1.1 Improving the RE process for safety-critical systems....................................................................................2
1.2 A human-factors informed approach.................................................................................................................. 4
1.2.1 Novel Contributions..........................................................................................................................5
1.2.1.1 The focus of process improvement effort on the RE process...........................5
1.2.1.2 The application of human factors research to software process
improvement....................................................................................................................5
1.2.1.3 Human factors checklist................................................................................................6
1.2.1.4 The combination of approaches contributing to PERE.........................................6
1.3 Thesis Structure.......................................................................................................................................................6
2. Improving the Requirements Engineering Process .................................................9
2.1 Motivation...................................................................................................................................................................9
2.2 Requirements Engineering.................................................................................................................................12
2.2.1 Definitions of RE..............................................................................................................................12
2.2.2 RE in context: The software process.........................................................................................15
2.2.3 Approaches to RE............................................................................................................................18
2.2.3.1 Functional approaches..................................................................................................18
2.2.3.2 Object-oriented approaches.......................................................................................20
2.2.3.3 Participatory Design......................................................................................................22
2.2.3.4 User-centred approaches............................................................................................23
2.2.3.5 Ethnographically informed design............................................................................27
2.2.3.6 Formal methods.............................................................................................................31
2.2.3.7 Summary..........................................................................................................................31
2.3 Problems in RE........................................................................................................................................................33
2.4 RE for safety-critical systems ..............................................................................................................................34
2.4.1 Standards for developing safety-critical systems...................................................................35
2.4.2 Capability maturity models............................................................................................................37
2.4.2.1 The SEI CMM.................................................................................................................38
2.4.2.2 Other models..................................................................................................................40
2.4.3 Safety-critical software development in practice....................................................................41
2.5 Conclusions..............................................................................................................................................................43
3. Human activities in Requirements Engineering ....................................................45
3.1 Errors in Individual Activity...................................................................................47
3.1.1 Slips and Lapses...............................................................................................................................49
3.1.1.1 Recognition failures......................................................................................................50
(i) Misidentification................................................................................................50
(ii) Non-detection....................................................................................................50
(iii) False positives....................................................................................................51
3.1.1.2 Attentional failures........................................................................................................51
(i) Inattention slips................................................................................................51
(ii) Slips through over-attention...........................................................................51
3.1.1.3 Memory failures.............................................................................................................52
(i) Forgetting intentions........................................................................................52
(ii) Forgetting or misremembering preceding actions......................................52
(iii) Encoding failures...............................................................................................52
(iv) Retrieval failures.................................................................................................52
(v) Reconstructive memory errors.......................................................................52
3.1.1.4 Selection failures............................................................................................................53
(i) Multiple side-steps............................................................................................53
(ii) Misordering.........................................................................................................53
(iii) Blending actions from two current plans .....................................................53
(iv) Carry-overs..........................................................................................................53
(v) Reversals..............................................................................................................53
3.1.1.5 Summary of Slips and Lapses.....................................................................................53
3.1.2 Mistakes.............................................................................................................................................55
3.1.2.1 Rule-based mistakes....................................................................................................55
(i) Misapplication of good rules...........................................................................55
(ii) The application of bad rules............................................................................55
3.1.2.2 Knowledge-based mistakes.......................................................................................56
(i) Availability biases...............................................................................................56
(ii) Frequency and similarity biases.....................................................................56
(iii) Confirmation biases..........................................................................................56
(iv) Over-confidence.................................................................................................56
(v) Inappropriate exploration of the problem space........................................56
(vi) Attending and forgetting in complex problem spaces...............................57
(vii) Bounded rationality and satisficing ...............................................................57
(viii) Problem simplification through halo effects................................................57
(ix) Control illusions and attribution errors.......................................................57
(x) Hindsight biases and the ‘I-knew-it-all-along-effect’...................................57
3.1.2.3 Summary of Mistakes...................................................................................................58
3.1.3 Human Errors: A Provisional Summary.....................................................................58
3.1.4 Violations............................................................................................................................................59
3.1.4.1 Violations and Safe Operating Procedures.............................................................61
3.1.4.2 Classifying Violations.....................................................................................................61
(i) Skill-based violations........................................................................................62
(ii) Rule-based violations........................................................................................62
(iii) Knowledge-based violations............................................................................62
3.1.4.3 Summary of Violations..................................................................................................63
3.1.5 Summary of Errors in Individual Activity....................................................................................63
3.2 Group Performance Failures and Process Losses........................................................................................63
3.2.1 Social Facilitation and Inhibition...................................................................................................65
3.2.2 Performance in Interacting Groups............................................................................................66
3.2.2.1 Additive tasks..................................................................................................................66
(i) Motivational losses...........................................................................................67
(ii) Coordination losses .........................................................................................68
3.2.2.2 Compensatory tasks.....................................................................................................68
3.2.2.3 Disjunctive tasks............................................................................................................68
3.2.2.4 Conjunctive tasks...........................................................................................................69
3.2.2.5 Discretionary tasks........................................................................................................69
3.2.2.6 Summary of group performance failures................................................................69
3.2.3 Group Leadership............................................................................................................................70
3.2.4 Conformity and Consensus: Normative and Informational Influence.............................72
3.2.5 Innovation: Minority Influence....................................................................................................73
3.2.6 Group Decision Making: The Risky Shift, Group Polarisation and
Groupthink........................................................................................................................................74
3.2.7 Summary of Group Coordination Failures, Process Losses and Related
Sources of Error................................................................................................................................76
3.3 Organisational Problems and Failures ..............................................................................................................77
3.3.1 Latent Organisational Failures......................................................................................................77
3.3.2 Typologies of Organisations.........................................................................................................80
3.3.3 Normal Accidents Vs. High Reliability Organisations............................................................82
3.3.4 Summary of Organisational Problems and Failures...............................................................87
3.4 Summary and Conclusions ..................................................................................................................................88
4. Applying Human Factors Research to Requirements Engineering Processes.........92
4.1 Applying the human factors research to the RE process............................................................................93
4.1.1 Errors due to individual action......................................................................................................93
4.1.1.1 Slips and Lapses.............................................................................................................95
Recognition Failures.......................................................................................................95
Attentional Failures.........................................................................................................95
Memory Failures..............................................................................................................96
Selection Failures.............................................................................................................96
4.1.1.2 Mistakes...........................................................................................................................96
Rule-based Mistakes.......................................................................................................96
Knowledge-based Mistakes...........................................................................................97
4.1.1.3 Violations..........................................................................................................................98
4.1.2 Group Process Losses and Related Phenomena................................................................101
4.1.3 Organisational Failures in Requirements Engineering......................................................103
4.1.4 Summary.........................................................................................................................................105
4.2 Process improvement possibilities in RE.....................................................................................................105
4.2.1 Document design.........................................................................................................................106
4.2.2 Notations and representations.................................................................................................108
4.2.3 Meeting procedures....................................................................................................................111
4.2.4 Preventative measures at the organisational level..............................................................113
4.3 Summary................................................................................................................................................................116
5. The Human Factors Checklist .............................................................................117
5.1 The Human Factors Checklist.........................................................................................................................117
5.1.1 Individual Errors—Slips & Lapses.............................................................................................120
5.1.2 Individual Errors—Rule-Based Mistakes................................................................................121
5.1.3 Individual Errors—Knowledge-Based Mistakes..................................................................121
5.1.4 Individual Errors—Violations ......................................................................................................123
5.1.5 Problems with Group work.......................................................................................................124
5.1.6 Problems of an Organisational Nature....................................................................................126
5.1.7 Problems with Document Design...........................................................................................127
5.1.8 Problems with Notations, Representations, and Diagrams.............................128
5.2 Summary................................................................................................................................................................129
6. Designing PERE...................................................................................................130
6.1 Utilising human factors research in practice................................................................................................ 130
6.2 PERE: A Tool Approach..................................................................................................................................... 132
6.2.1 A formative assessment of the Human Factors Checklist...............................................133
6.2.2 Familiarity with the process under consideration................................................................134
6.2.3 Reducing the number of questions........................................................................................135
6.2.4 Describing processes..................................................................................................................136
6.2.5 Different viewpoints on processes.........................................................................................137
6.2.6 Step by step guidance.................................................................................................................138
6.2.7 Summary.........................................................................................................................................138
6.3 An overview of PERE..........................................................................................................................................139
6.4 Summary................................................................................................................................................................141
7. PERE in Detail: Implementation of the Method..................................................143
7.1 Context of use......................................................................................................................................................143
7.1.1 What is the purpose of PERE’s application?...........................................................................144
7.1.2 Who will be responsible?............................................................................................................145
7.1.3 How mature is the process to be examined?.......................................................................146
7.2 The PERE analysis process...............................................................................................................................146
7.2.1 Overview.........................................................................................................................................147
7.2.1.1 Process capture and modelling...............................................................................148
7.2.1.2 Coordination between the viewpoints.................................................................149
7.2.2 Mechanistic viewpoint.................................................................................................................149
7.2.2.1 Basis of the mechanistic viewpoint.......................................................................150
7.2.2.2 The Mechanistic viewpoint process......................................................................155
Iteration in the mechanistic viewpoint.....................................................................157
7.2.2.3 Deliverables..................................................................................................................157
Process model................................................................................................................157
PERE Component Table (PCT)..................................................................................159
PERE Vulnerability Table (PVT)..................................................................................159
7.2.2.4 Summary.......................................................................................................................160
7.2.3 Human factors viewpoint...........................................................................................................160
7.2.3.1 Basis of the human factors viewpoint...................................................................161
7.2.3.2 The human factors viewpoint process.................................................................164
7.2.3.3 Deliverable....................................................................................................................165
PERE Human Factors Table (HFT)...........................................................................166
7.2.3.4 Summary.......................................................................................................................167
7.3 Modifying PERE ...................................................................................................................................................168
7.3.1 Strategies for Specialising PERE...............................................................................................168
7.3.2 Adapting PERE...............................................................................................................................168
7.3.2.1 Specialisation by Process Domain..........................................................................169
7.3.2.2 Specialisation by Distribution of Function...........................................................169
7.3.2.3 Specialisation by Process Complexity...................................................................169
7.3.2.4 Specialisation by Availability of resources............................................................170
7.3.3 Simplifying PERE...........................................................................................................................170
7.3.3.1 Conducting a ‘One-Shot’ Application of PERE...................................................170
7.3.3.2 Simplifying the Mechanistic Viewpoint................................................................170
7.3.3.3 Simplifying the Human Factors Viewpoint..........................................................172
7.3.4 Developing PERE..........................................................................................................................173
7.3.4.1 Specialising and Adding to the Mechanistic Analysis Classes........................173
7.3.4.2 Specialising and Adding to the Human Factors Analysis Classes.................173
7.3.5 Improving PERE............................................................................................................................173
7.3.5.1 Reviewing the Application of PERE.......................................................................174
7.3.5.2 Reviewing the Utility of PERE.................................................................................174
7.3.5.3 Applying PERE Reflexively.......................................................................................174
7.4 Summary................................................................................................................................................................175
8. PERE in Use: Evaluation of the Method...............................................................176
8.1 The Motivation for Assessment .....................................................................................................................176
8.2 Evaluation during the development of PERE ..............................................................................................177
8.2.1 A First Application of the Human Factors Checklist...........................................................177
8.2.1.1 Aerospatiale’s MERE process..................................................................................177
8.2.1.2 Experience of Applying the Human Factors Checklist to MERE..................179
8.2.1.3 Aerospatiale’s Experience Applying the Human Factors Checklist to
MERE..............................................................................................................................181
8.2.2 Summary of PERE's formative evaluation..............................................................................182
8.3 Evaluation following the development of PERE......................................................................................... 182
8.3.1 PERE analysis of Aerospatiale’s MERE process....................................................................182
8.3.1.1 Organisational aspects of MERE.............................................................................182
The origins of MERE......................................................................................................183
Fitting MERE within existing processes....................................................................183
The Distinction between Generalists and Specialists............................................183
8.3.1.2 Experience of applying PERE to MERE.................................................................184
8.3.1.3 Summary of PERE Analysis of MERE....................................................................185
Mechanistic viewpoint..................................................................................................186
Human factors viewpoint............................................................................................187
8.3.2 PERE Applied to the standards production process...........................................................189
8.3.2.1 Why choose the standards process?......................................................................189
8.3.2.2 Description of the standards process...................................................................190
8.3.2.3 Experience of applying PERE to the standards process...................................191
8.3.2.4 Summary of PERE’s application to the standards process..............................193
8.3.3 Comparison of PERE to ISO9000............................................................................................193
8.3.4 Summary of PERE’s summative evaluation...........................................................................194
8.4 Conclusions...........................................................................................................................................................194
9. Conclusions .........................................................................................................196
9.1 Objectives of the work ......................................................................................................................................197
9.1.1 Focus on the RE process............................................................................................................197
9.1.2 Inform improvement of the RE process from a human factors perspective..............198
9.1.3 Provide a structured framework for applying the findings...............................................199
9.1.4 Evaluate the method on an ‘industry strength’ process....................................................199
9.2 Novel Characteristics.......................................................................................................................................... 199
9.2.1.1 The focus of process improvement effort on the RE process......................200
9.2.1.2 The application of human factors research to software process improvement.....................200
9.2.1.3 Human factors checklist...........................................................................................200
9.2.1.4 The combination of approaches contributing to PERE....................................201
9.3 Future Work.......................................................................................................................................................... 201
9.3.1 Tool support................................................................................................................................... 202
9.3.2 Contribute towards safety case/standards, etc....................................................................202
9.3.3 Specialisation and adaptation of PERE....................................................................................202
9.4 Final remarks ........................................................................................................................................................203
10. References...........................................................................................................204
Appendix A PERE Human Factors Checklist .........................................................A-1
A.1 Individual Errors—Slips & Lapses ....................................................................................................................A-2
A.2 Individual Errors—Rule-Based Mistakes.......................................................................................................A-3
A.3 Individual Errors—Knowledge-Based Mistakes..........................................................................................A-3
A.4 Individual Errors—Violations..............................................................................................................................A-5
A.5 Problems with Group work...............................................................................................................................A-6
A.6 Problems of an Organisational Nature ...........................................................................................................A-8
A.7 Problems with Document Design..................................................................................................................A-9
A.8 Problems with Notations, Representations, and Diagrams.................................................................. A-10
Appendix B PERE User Manual............................................................................ B-1
B.1 Process Capture for PERE..................................................................................................................................B-1
B.1.1 Techniques from Within the REAIMS Project.........................................................................B-1
B.1.2 Process Capture through PREview-PV.......................................................................................B-2
B.1.3 Process Capture through MERE..................................................................................................B-3
B.1.4 Other Techniques for Process Capture.....................................................................................B-4
B.2 The PERE Mechanistic Viewpoint...................................................................................................................B-5
B.2.1 Introduction.......................................................................................................................................B-5
B.2.2 Overview of the Mechanistic analysis.........................................................................................B-6
B.2.2.1 Select relevant processes...........................................................................................B-7
B.2.2.2 Obtain process documentation.................................................................................B-8
B.2.2.3 Specify process structure and working materials................................................B-8
Identify components....................................................................................................B-8
Classify components....................................................................................................B-8
Identify interconnections and working materials...................................................B-8
B.2.2.4 Specify components.....................................................................................................B-8
B.2.2.5 Identify vulnerabilities..................................................................................................B-8
B.2.2.6 Review vulnerabilities..................................................................................................B-9
B.2.2.7 Output to human factors analysis.............................................................................B-9
B.2.3 Identifying Components by Iterative Deepening..................................................................B-9
B.2.4 Basic Component Classes..........................................................................................................B-11
B.2.4.1 Transduce.....................................................................................................................B-12
B.2.4.2 Process..........................................................................................................................B-12
B.2.4.3 Channel.........................................................................................................................B-12
B.2.4.4 Store...............................................................................................................................B-12
B.2.4.5 Control...........................................................................................................................B-12
B.2.5 Component Specification...........................................................................................................B-13
B.2.5.1 Class...............................................................................................................................B-13
B.2.5.2 Interfaces and Working Materials..........................................................................B-13
B.2.5.3 (Optional) State..........................................................................................................B-13
B.2.5.4 Invariant.........................................................................................................................B-13
B.2.5.5 (Optional) Preconditions and Resources............................................................B-13
B.2.5.6 (Optional) External Control.....................................................................................B-14
B.2.6 Interconnection and Working Materials.................................................................B-14
B.2.7 Representing the Results of Analysis.....................................................................................B-15
B.2.8 Vulnerability Identification..........................................................................................................B-18
B.2.9 Basic Component Classes and Basic Vulnerability Classes...............................................B-18
B.2.9.1 Transduce Component Vulnerabilities................................................................B-18
B.2.9.2 Process Component Vulnerabilities......................................................................B-19
B.2.9.3 Channel Component Vulnerabilities.....................................................................B-19
B.2.9.4 Store Component Vulnerabilities..........................................................................B-19
B.2.9.5 Control Component Vulnerabilities......................................................................B-19
B.2.10 Component Attribute Vulnerabilities......................................................................................B-19
B.2.10.1 Interface Vulnerabilities............................................................................................B-19
B.2.10.2 State Vulnerabilities...................................................................................................B-20
B.2.10.3 Invariant Vulnerabilities.............................................................................................B-20
B.2.10.4 Pre-condition and Resource Vulnerabilities........................................................B-20
B.2.10.5 External Control Vulnerabilities..............................................................................B-20
B.2.10.6 Specialising attribute vulnerability classes...........................................................B-20
B.2.11 Vulnerability Review.....................................................................................................................B-21
B.2.12 Proposing Defences.....................................................................................................................B-22
B.2.12.1 Redundancy.................................................................................................................B-22
B.2.12.2 Feedback......................................................................................................................B-23
B.2.12.3 Checking.......................................................................................................................B-23
B.2.12.4 Specialising for component class...........................................................................B-23
B.2.12.5 Component attribute defences.............................................................................B-23
B.2.13 Vulnerability Review and Probabilistic Risk Assessment...................................................B-24
B.2.14 Iterating the Mechanistic Analysis and Feeding into the Human Factors Analysis Component.....................B-25
B.3 The PERE Human Factors Viewpoint..........................................................................................................B-26
B.3.1 Introduction....................................................................................................................................B-26
B.3.2 Human activity within components.........................................................................................B-27
B.3.2.1 Individual activities .....................................................................................................B-28
B.3.2.2 Group activities...........................................................................................................B-29
B.3.3 Interconnections and working materials................................................................................B-30
B.3.3.1 Materials, documents and representations........................................................B-31
B.3.3.2 Human and organisational connections...............................................................B-32
B.3.4 Organisational context.................................................................................................................B-32
B.3.5 Human activity outwith the process model: Violations......................................................B-34
B.3.6 Vulnerabilities and Defences.....................................................................................................B-34
B.3.7 Documenting the Human Factors Analysis...........................................................................B-35
B.3.8 Summary.......................................................................................................................................... B-35
B.4 Modifying PERE.................................................................................................................................................B-36
B.4.1 Strategies for Specialising PERE...............................................................................................B-36
B.4.2 Adapting PERE...............................................................................................................................B-37
B.4.2.1 Specialisation by Process Domain.........................................................................B-37
B.4.2.2 Specialisation by Distribution of Function...........................................................B-37
B.4.2.3 Specialisation by Process Complexity...................................................................B-38
B.4.2.4 Specialisation by Availability of resources............................................................B-38
B.4.3 Simplifying PERE...........................................................................................................................B-38
B.4.3.1 Conducting a ‘One-Shot’ Application of PERE...................................................B-39
B.4.3.2 Simplifying the Mechanistic Viewpoint................................................................B-39
B.4.3.3 Simplifying the Human Factors Viewpoint.........................................................B-39
B.4.4 Developing PERE..........................................................................................................................B-41
B.4.4.1 Specialising and Adding to the Mechanistic Analysis Classes.......................B-41
B.4.4.2 Specialising and Adding to the Human Factors Analysis Classes.................B-42
B.4.5 Improving PERE.............................................................................................................B-42
B.4.5.1 Reviewing the Application of PERE......................................................................B-42
B.4.5.2 Reviewing the Utility of PERE................................................................................B-43
B.4.5.3 Applying PERE Reflexively.......................................................................................B-43
Appendix C Results of applying PERE to MERE................................................... C-1
C.1 PERE Component Table (PCT) for The Aerospatiale MERE Process.................................................C-4
C.2 PERE Vulnerability Table (PWT) for The Aerospatiale MERE process............................................. C-12
C.3 PERE Human Factors Table (PHT) for The Aerospatiale MERE process.......................................C-18
List of figures
Figure 1.1: Overview of the thesis...........................................................................................................................................7
Figure 2.1: Activities during the requirements phase (from Davis, 1993, p. 21) ....................................................13
Figure 2.2: A waterfall model of software development.................................................................................................15
Figure 2.3: Boehm’s spiral model of software development (from Boehm, 1988, p. 64)...................................16
Figure 2.4: ‘V’ model of software development (as defined in IEC-1508-3, 1997).................................................17
Figure 2.5: Structured Analysis process (from Bansler and Bødker, 1993, p. 168)................................................19
Figure 2.6: Rational’s Objectory process for object-oriented design (from Rational, 1997)................................21
Figure 2.7: Workflow in Objectory’s requirements capture component (from Rational, 1997).........................22
Figure 2.8: A HCI prototyping cycle (from Bannon, Bowers, Carstensen, Hughes, Kutti, Pycock, Rodden, Schmidt, Shapiro, Sharrock and Viller, 1993, p. 87).....................25
Figure 2.9: The task-artifact cycle (from Carroll et al., 1991, p. 80)............................................................................26
Figure 2.10: The task-artifact framework for HCI (from Carroll et al., 1991, p. 83) .............................................26
Figure 2.11: IEC 1508 software safety lifecycle (from IEC-1508-3, 1997)...............................................................37
Figure 2.12: The five levels in the SEI Capability Maturity Model, (from Paulk et al., 1995, p. 16)...................39
Figure 3.1: Generic error modelling system (GEMS) (from Reason, 1990, p. 64)................................................48
Figure 3.2: Components of planned action.........................................................................................................................49
Figure 3.3: Taxonomy of slips and lapses (errors due to individual skill-based activity)........................................54
Figure 3.4: Taxonomy of human factors contributing to errors in RE due to rule-, and knowledge-based individual activity.....................58
Figure 3.5: Taxonomy of human factors contributing to errors in RE due to violations.........................................63
Figure 3.6: Taxonomy of human factors contributing to errors in RE due to group activity..................................76
Figure 3.7: Active and latent failures and their contribution to the breakdown of complex systems (from Reason, 1990, p. 202).....................78
Figure 3.8: The development of a system failure (from Turner, 1992, p. 193).......................................................78
Figure 3.9: Taxonomy of human factors contributing to errors in RE due to organisational activity...................88
Figure 4.1: Classification of individual errors.......................................................................................................................94
Figure 5.1: An excerpt from the human factors checklist............................................................................................119
Figure 6.1: Overview of PERE...............................................................................................................................................140
Figure 7.1: Overview of PERE...............................................................................................................................................148
Figure 7.2: The ALARP principle (from Bloomfield, Bowers, Jones, Sommerville and Viller, 1995).............154
Figure 7.3: Simplified model of the PERE Mechanistic Viewpoint process............................................................155
Figure 7.4: Process analysis through selective iterative deepening..........................................................................158
Figure 7.5: SADT-like notation for process models in PERE......................................................................................159
Figure 7.6: An excerpt from the human factors checklist............................................................................................162
Figure 7.7: Excerpt from Key Figure 2 (see appendix A for full checklist and appendix B for key figures).....................163
Figure 7.8: The overall structure of PERE’s human factors analysis method..........................................................165
Figure 8.1: Simplified MERE process.................................................................................................................................179
Figure 8.2: A ‘circulation-percolation’ view of the standards process (from Emmet, 1996).............................191
List of tables
Table 3.1: Distinctions between skill-, rule-, and knowledge-based errors (from Reason, 1990, p. 62).....................59
Table 3.2: Comparison of errors versus violations (based on Reason, 1990)...........................................................60
Table 3.3: Skill-, rule-, knowledge-based violations (based on Reason, 1990)........................................................61
Table 3.4: A summary of Steiner’s typology of tasks (Steiner, 1972; Steiner, 1976) (from Wilke and VanKnippenberg, 1988, p. 325).....................67
Table 3.5: Group performance of groups working on various types of tasks (Forsyth, 1983; Steiner, 1972; Steiner, 1976) (from Wilke and VanKnippenberg, 1988, p. 332).....................69
Table 3.6: Complex vs. Linear systems (from Perrow, 1984, p. 88)...........................................................................80
Table 3.7: Tight and loose coupling tendencies (from Perrow, 1984, p. 96)...........................................................81
Table 3.8: Centralisation/Decentralisation of authority relevant to crises (from Perrow, 1984, p. 332).....................82
Table 3.9: Competing perspectives on safety with hazardous technologies (from Sagan, 1993, p. 46).....................85
Table 4.1: Vulnerabilities and defences regarding documents in RE........................................................................109
Table 4.2: Vulnerabilities and defences regarding notations and representations in RE...................................111
Table 4.3: Vulnerabilities and defences regarding meeting procedures in RE......................................................114
Table 4.4: Vulnerabilities and defences regarding RE at an organisational level....................................................115
Table 7.1: Passing process information between viewpoints.....................................................................................150
Table 7.2: Basic component classes in the mechanistic viewpoint...........................................................................151
Table 7.3: Component attributes........................................................................................................................................152
Table 7.4: Component attribute vulnerabilities...............................................................................................................153
Table 7.5: The PERE Mechanistic viewpoint process stages......................................................................................156
Table 7.6: PERE Component Table Headings.................................................................................................................159
Table 7.7: PERE Vulnerability Table Headings.................................................................................................................160
Table 7.8: PERE human factors analysis process questions.........................................................................................166
Table 7.9: PERE Human Factors Table headings............................................................................................................167
Table 7.10: Simplifying PERE’s mechanistic viewpoint.................................................................................................171
Table 7.11: Simplifying PERE’s human factors viewpoint............................................................................................172
Table 8.1: Human factors vulnerabilities without existing defences in MERE........................................................187
Table 8.2: Proposed defences with secondary vulnerabilities....................................................................................188
1. Introduction
The developers of safety-critical systems are concerned with the design and development
of technology which, should it go wrong, could lead to loss of life, serious injury, or
serious harm to the environment, property, and so on. It is becoming increasingly
common for such systems to rely upon computers for some or all of their control
functions, and consequently their safe operation. For example, the latest generation of
passenger aircraft, including the Airbus Industrie A320 and Boeing’s 777, are controlled
by the air crew via computers. Pilots no longer operate the flight controls directly.
Rather, their input is interpreted by a computer, which acts as an intermediary between
pilot and aircraft, and this type of aircraft has consequently become known as ‘fly-by-
wire’. Failures of these systems1 (referring not only to the technology, but to the
human–computer system as a whole) can have catastrophic consequences (Mellor, 1994;
Neumann, 1995, pp. 45-46). Where catastrophe is averted, it is often only because the
human operators (the pilots in this case) have employed their problem-solving and crisis
management capabilities to the full (Wright, Pocock and Fields, 1998).
Safety-critical systems are developed using largely the same techniques as any other
computer-based system. In fact, they are only distinguished from any other type of
system by the nature of their application domain and the consequences of their failure. It
follows that the main distinguishing characteristic of safety-critical development
processes, as far as the software is concerned, is the degree of validation and verification
activities which take place in order to ensure that the resulting computer-based system
functions safely and as expected.
1 The term system can have a number of meanings in this context. In its broadest sense, it is the
bringing together of people, hardware, and software to accomplish a particular task or objective. It
may also be taken to mean the combination of hardware and software subsystems designed for a
particular purpose, treating the human component as external to the 'system'. For the purposes of this
thesis, the term system, and consequently safety-critical system, will be used in the narrower sense of
the software system alone. Where a broader sense of system is intended, it will be made clear in the
text.
The focus of this thesis is on improving the design process for safety-critical systems such
as those in fly-by-wire aircraft, railway signalling systems, nuclear power plant control
rooms, and any other safety-critical application. In particular, the focus is on the earliest
stage of the design process, which is concerned with developing a requirements
specification, based upon the design brief, perceived needs, understanding of the
application domain, etc.
The thesis takes a highly practical and applied view of requirements and safety-critical
systems. This perspective reflects the broader academic and industrial partnership
underpinning the research work. The majority of the research reported within this thesis
took place as part of the REAIMS project2 (Viller and Sawyer, 1995), a European
research project bringing together researchers from safety-critical, human factors, and
software engineering backgrounds. A unique aspect of the project was its close
partnership with users within safety-critical development environments, which offered
the opportunity to develop the research concepts and mechanisms in collaboration with
the industrial users of these methods.
The remainder of this chapter provides further context to the thesis, describes the
approach taken, outlines the novel contributions, and presents the structure of the
remainder of the thesis.
1.1 Improving the RE process for safety-critical systems
Safety-critical systems are computer-based systems which, if they fail to function as
intended, could lead to human fatality, injury, or damage to the environment or
property. Such systems are becoming increasingly common, and their spread is leading
to greater concern for safety in the domains where they are used. They differ from other
computer-based systems only in that the consequences of failure are that much greater.
An implication of this is that the development process for safety-critical systems is
largely the same as for any other computer-based system, the differences being largely
concerned with measures to ensure that there are no faults in the final product, or that
any remaining faults are known about and contained. The onus is therefore on
developers of safety-critical systems to undertake the extra work necessary to ensure,
and to be able to assure, that their systems will operate as desired, in a safe manner. The
outcome is that the development process for safety-critical systems is subject to extra
scrutiny, over and above that deemed necessary for non-dependable systems.
The need for closer inspection of how safety-critical systems in particular are developed,
has led to a focus on software process improvement, which is concerned with the whole
of the development process, or software lifecycle (Paulk, Weber, Curtis and Chrissis,
1995; Zahran, 1998). It is now widely recognised that, of all the phases in software
2 Requirements Engineering Adaptation and Improvement Strategies for Safety and Dependability,
http://www.comp.lancs.ac.uk/computing/research/cseg/projects/reaims/
development, requirements engineering (RE) is the most problematic (Kelly and Sherif,
1992; Keutzer, 1991; Lutz, 1993; McConnell, 1993; Sheldon, Kavi, Tausworthe, Yu,
Brettschneider and Everett, 1992; Weinberg, 1997). Despite this, process improvement
efforts are targeted, in the main, at later stages of the development process, where
metrics are easier to define and quantify. The potential for improvement of RE
processes, however, is much greater given the increased likelihood of problems existing,
and is the reason for this thesis’ focus on the RE process.
RE involves the development of a specification for a system based upon a set of
perceived needs and an understanding of the application domain. Kotonya and
Sommerville define it thus:
Requirements engineering is a relatively new term which has been invented to cover all of the activities
involved in discovering, documenting, and maintaining a set of requirements for a computer-based
system. The use of the term ‘engineering’ implies that systematic and repeatable techniques should be
used to ensure that system requirements are complete, consistent, relevant, etc. (Kotonya and
Sommerville, 1998, p. 8)
They also define an RE process as:
a structured set of activities which are followed to derive, validate and maintain a systems
requirements document. Process activities include requirements elicitation, requirements analysis and
negotiation and requirements validation. (Kotonya and Sommerville, 1998, p. 9)
Because RE is the earliest stage in the software development process, errors in RE can propagate throughout the rest of the development, and lead to systems being delivered
which do not meet the actual requirements. The cost of rectifying such errors so late in
the development process is, of course, much greater than it would be if the errors were
prevented, or identified and corrected at source. Errors in requirements are also,
however, much harder to detect than errors at other stages in the development process,
where intermediate outputs can be validated against their specifications. The
‘specifications’ in these terms for RE are the real-world requirements, needs, desires,
goals, etc. of the organisation purchasing the software, of the people who will ultimately
use it, and other stakeholders in the final system. Understanding these ‘real-world’
requirements is the work of requirements engineers, and is essentially about
understanding the domain, and the work being supported within it. In this sense, RE is
revealed as an intellectual and human intensive process, involving communication with,
and understanding of, domain experts, users, and other stakeholders.
All of the above leads to a number of conclusions, and implications for the work
reported in this thesis.
Experience tells us that a large proportion of errors in software development
originate in the RE process.
Errors in requirements are harder to identify, so they persist for longer, and as a
result are more costly to rectify.
Preventing errors is a potentially more successful approach than relying on
detection.
RE is a human intensive process, so is likely to be vulnerable to errors that
originate in the psychological and sociological nature of human action and
interaction.
The RE process has been largely ignored by the process improvement community.
When developing safety-critical systems, it is even more important to ensure that
the requirements for a system are free from errors.
Consequently, this thesis proposes a process improvement approach for RE in safety-
critical systems development that is based on an understanding of human factors which
may lead to errors and failures. The following section describes the approach that is
taken in the thesis.
1.2 A human-factors informed approach
Central to the approach taken in this thesis is the recognition that RE is an inherently
human process, which relies upon the collaboration between a number of individuals in
various organisational contexts (supplier, procurer, user) in order to progress. This
remains the case despite the appearance of technology to support the RE process in the
form of tools for documenting, cataloguing and tracing requirements
(Quality Systems & Software, 1999; Rational Software Corporation, 1999). In the light
of this, there is a strong case to inform any approach to improving the RE process with
an understanding of the nature of human interaction, how it may fail, and how failures
might be avoided. There is a wealth of literature in the fields of psychology, sociology,
organisational behaviour, and elsewhere that is directly relevant to this endeavour. What
is needed is an approach to collecting this work together and presenting it in such a
manner as to make it available for the purpose of improving RE processes. This is
precisely what this thesis sets out to achieve.
The task of collecting together all of the relevant human factors literature on failures is
non-trivial. It requires a multi-disciplinary approach to reviewing the literature, which
itself requires significant effort to develop an understanding of human activity from a
number of perspectives. Having collected all the relevant psychological and social
scientific research together, it must then be structured and presented in such a way as to
make it useful and usable for improving RE processes.
This thesis collects the human factors information together into a Human Factors
Checklist. This is structured according to the nature of the activity concerned, be it at an
individual, social, or organisational level. On its own, the checklist is not particularly usable
due to the great quantity of diverse information contained therein. For this reason, two
further devices are employed in order to assist with application of the checklist. First,
guidance is provided for navigating the checklist, relying upon the fact that
characteristics of certain situations will make sections of the checklist not applicable.
This knowledge allows the checklist to be ‘pruned’ in order to reduce the number of
items which must be considered for a given process component. Second, a mechanistic
approach to process analysis, based upon an established hazard analysis technique, is
provided alongside the human factors checklist. This mechanistic analysis considers the
process as if it is carried out by a machine, and looks for vulnerabilities in the process
related to its structure and the nature of materials being handled. A side effect of this
analysis is a model of the process broken down into components and sub-components.
This facilitates further pruning of the human factors analysis where, for example,
components can be seen to involve no human activity.
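As an illustration, the pruning of the checklist can be sketched in a few lines of code. This is purely a sketch to make the idea concrete: the section names, component attributes, and pruning rules below are hypothetical, and are not taken from PERE itself.

```python
# Illustrative sketch only: the attribute names and pruning rules are
# hypothetical, chosen to show the idea of 'pruning' the checklist,
# not taken from the PERE method itself.
from dataclasses import dataclass, field


@dataclass
class ChecklistSection:
    name: str
    level: str            # 'individual', 'social', or 'organisational'


@dataclass
class ProcessComponent:
    name: str
    involves_humans: bool = True     # flag from the mechanistic analysis
    involves_teamwork: bool = False
    sub_components: list = field(default_factory=list)


CHECKLIST = [
    ChecklistSection("Slips and lapses", "individual"),
    ChecklistSection("Group decision-making", "social"),
    ChecklistSection("Safety culture", "organisational"),
]


def prune(component: ProcessComponent) -> list:
    """Return only the checklist sections worth applying to a component."""
    if not component.involves_humans:
        return []  # fully automated component: no human factors analysis needed
    sections = list(CHECKLIST)
    if not component.involves_teamwork:
        # characteristics of the situation rule out the social sections
        sections = [s for s in sections if s.level != "social"]
    return sections


automated = ProcessComponent("Requirements database backup", involves_humans=False)
review = ProcessComponent("Requirements review meeting", involves_teamwork=True)

print([s.name for s in prune(automated)])   # []
print([s.name for s in prune(review)])
```

Here, a fully automated component receives no human factors analysis at all, while a component involving no teamwork is spared the social sections of the checklist.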
Together, the mechanistic and human factors analyses comprise the PERE approach (for
Process Evaluation in Requirements Engineering). The development of PERE3
involved a number of novel contributions to the field of requirements engineering.
These are summarised below.
1.2.1 Novel Contributions
A great deal of what is novel about the work in this thesis is the bringing together of
previously disparate fields of research. All of this is underpinned, however, by the initial
assumption that the software development process in general, and the RE process in
particular, should be considered to be safety-critical when the products being developed
are themselves safety-critical. The acceptance of this position provides the motivation
for embarking upon what could be considered a costly exercise of analysing processes for
vulnerabilities related to human activity. Having taken this position, the thesis makes
the following novel contributions to the field.
1.2.1.1 The focus of process improvement effort on the RE process
Existing approaches to software process improvement concentrate on phases of
development which follow on after the requirements specification has been established.
Only recently have authors started to focus improvement efforts specifically on the RE
process (Sommerville and Sawyer, 1997a). PERE is novel amongst existing techniques
for process improvement in its focus on the RE process in particular.
1.2.1.2 The application of human factors research to software process improvement
Human factors are now recognised as an important aspect of the design of the human
interface for computer-based systems, but have made little impact so far where the
systems are being operated in safety-critical settings. PERE represents an effort to take
these concerns one step further by setting out to use human factors research to improve
the processes by which such systems are developed4.
3 PERE was developed as part of the European Community funded REAIMS (Requirements
Engineering Adaptation and Improvement for Safety and Dependability) project.
4 A leading practitioner in research on human error has remarked that the topic of this thesis occupies "doubly virgin territory".
1.2.1.3 Human factors checklist
The Human Factors Checklist, which is the core of the PERE method, collects together
a large and diverse literature on failures in human activity in order to make it applicable
to improving RE processes. The checklist is structured according to three broad
classifications of human activity, namely: individual, social, and organisational. Individual
factors arise out of cognitive psychology, and draw greatly upon work within the field of
Human Error. Social factors are drawn from social psychology and sociology, paying
particular attention to how people behave and perform when working in teams.
Organisational factors emanate from sociology and organisational and political studies,
drawing in particular upon studies of how organisations may or may not function
reliably in hazardous environments.
1.2.1.4 The combination of approaches contributing to PERE
PERE also represents a novel bringing together of disparate approaches to understanding and improving processes, incorporating them into a framework which allows the different findings to be used alongside each other. The mechanistic analysis is
based upon an established approach to hazard analysis from the process industry. The
human factors analysis, embodied in the Human Factors Checklist, brings together
research from a number of human sciences. PERE presents the two elements of the
analysis as viewpoints, which allows them to be treated alongside each other, and also to
contribute to a broader viewpoint oriented process improvement effort if desired.
1.3 Thesis Structure
This thesis is structured to make the work accessible to as broad a community of researchers as possible. The structure aims to highlight the novel aspects of the work while also reviewing the disparate areas of research underpinning its central contribution, and broadly follows the underlying philosophy and approach of the work. The core of the work is the development of a taxonomy-based approach to human factors within safety-critical systems development. This taxonomy is based on an extensive analysis of both the requirements engineering and human factors literature. The analyses of these two areas are combined in the development of a Human Factors Checklist, which is supported by a methodological framework developed to allow the checklist to be applied in practice. This broad arrangement of the thesis is shown diagrammatically in Figure 1.1.
Chapters 2 & 3 provide an extensive survey of the literature and material that underpins
the checklist in Chapter 5. Chapter 2 considers the nature of requirements engineering,
different approaches to RE, the problems that exist in RE, and how RE for safety-
critical systems differs from ‘mainstream’ RE. It demonstrates the significance of the RE
process in software development, that it is an inherently human-intensive process, and that existing approaches to process improvement largely ignore RE and pay even less attention to human issues. Chapter 3 then presents a large and diverse review of the
human factors literature from three broad perspectives on human activity: as individuals,
in social groups, and as part of an organisation. The result of this review is a generic
taxonomy of human errors, classified according to the three perspectives. Readers familiar
with the areas covered in Chapters 2 and 3 may wish to move on to the core contributions of the thesis in Chapter 4 after checking the generic taxonomy presented at the end of Chapter 3.
Figure 1.1: Overview of the thesis. The error taxonomy extracted in Chapters 2 & 3 is applied to requirements engineering in Chapter 4; the resulting Human Factors Checklist (Chapter 5, repeated in Appendix A) is supported by a method framework, PERE, developed in Chapters 6 & 7 (user manual in Appendix B); PERE is evaluated on industrial RE processes in Chapter 8, with applications to MERE (see Appendix C) and a standards process.
The core conceptual development of the checklist is covered in Chapter 4. This chapter
focuses on the central work of the thesis on improving RE processes, and considers how
well the taxonomy fits the needs of RE process improvement. In this chapter the
taxonomy is specialised to include errors related to core RE activities such as document
design, notations and representations, meeting procedures, and organisational activities.
8
This checklist work provides the basis of the Human Factors Checklist used as the core
of the PERE method.
The checklist itself is presented in Chapter 5. This describes how the Human Factors
Checklist is developed from the taxonomy and provides a listing of the checklist in full.
Each potential error type is expressed as a vulnerability to error, and one or more possible
defences are suggested.
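To make this structure concrete, an entry pairing a vulnerability with its defences might be represented as follows. This is an illustrative sketch only: the field names and the example entry are hypothetical, paraphrasing the vulnerability/defence form described above rather than quoting the checklist.

```python
# Hypothetical sketch of the vulnerability/defence structure of a
# checklist entry; the example content is paraphrased, not quoted.
from dataclasses import dataclass


@dataclass
class ChecklistEntry:
    classification: str   # 'individual', 'social', or 'organisational'
    vulnerability: str    # how the activity concerned may fail
    defences: list        # one or more suggested defences


entry = ChecklistEntry(
    classification="individual",
    vulnerability="A requirements engineer misreads a notation under time pressure.",
    defences=[
        "Use notations familiar to all readers of the document.",
        "Review requirements documents independently of their authors.",
    ],
)


def render(e: ChecklistEntry) -> str:
    """Lay an entry out as it might appear in a printed checklist."""
    lines = [f"[{e.classification}] Vulnerability: {e.vulnerability}"]
    lines += [f"  Defence: {d}" for d in e.defences]
    return "\n".join(lines)


print(render(entry))
```

The point of the sketch is simply that every vulnerability carries at least one defence, so an analyst working through the checklist is never left with a problem and no candidate remedy.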
The checklist is complemented by a description of the PERE method in Chapters 6 & 7.
Chapter 6 outlines the requirements for the PERE method based upon an initial
application of the checklist to an industrial requirements process. Based upon this
feedback, Chapter 7 presents the detailed design of PERE, including description of the
mechanistic and human factors viewpoints, and guidance for their application. The
chapter concludes with a consideration of how PERE can be modified in a number of different ways, such as adaptation to suit specific domains, simplification of the analysis, development of the approach, and reflexive application to improve the method itself.
In Chapter 8 the evaluation of PERE is illustrated by a detailed application to an
industrial RE process in a safety-critical setting. Although this evaluation is presented at the end of the thesis, it is worth stressing that the assessment and evaluation of PERE
through the application to an existing industrial requirements process called MERE (see
Appendix C) was extremely formative in nature. Essentially the development of PERE was
continually informed through the experiences of its application in the development of the MERE
process. This formative approach to evaluation reflects the practical and applied nature of
the developed method and exploited the extensive access available to a real world
commercial requirements engineering context.
Finally, Chapter 9 reflects on the work presented in the thesis, examines its contribution and how well it meets the stated objectives, and considers directions in which the work could be taken in the future.
2. Improving the Requirements
Engineering Process
This thesis is concerned with developing a method which can be used to propose
improvements to RE processes, especially those used in the development of safety-
critical systems. Before developing this method, it is necessary to ask what is required of it.
This chapter, therefore, aims to provide the motivation for embarking on such an
activity. To do this, the chapter addresses the following questions:
why is this thesis concerned with improving RE?
what is requirements engineering?
what problems exist in RE?
what, if anything, is different about RE for safety-critical systems?
These questions shape the content of the following sections. First, the motivation for
undertaking the work reported in this chapter, and in the thesis as a whole, is addressed.
Second, the field of requirements engineering (RE) is reviewed, including a number of
different approaches to RE, in order to determine what is understood to constitute the
RE process. Third, problems which exist in the RE process are identified, drawing upon
empirical studies in the literature. This leads to consideration of the issues particular to
engineering requirements for safety-critical systems, which are reviewed, including
standards and frameworks for process improvement.
2.1 Motivation
Safety-critical systems development is concerned with the design and development of technology which, should it go wrong, could lead to loss of life, serious injury, or serious harm to
the environment, property, and so on. Examples of such systems are ubiquitous in
modern society, from forms of mass transit such as aircraft, rail networks, and passenger
ferries, through power plants and heavy industry, to medical electronics applications
such as patient monitoring and resuscitation machines. In the past, such systems have
relied on the well understood properties of mechanical and electrical components for
their safety. As technology has advanced in more recent times, more and more safety-
critical systems are coming to rely on computer-based systems in order to operate in a
safe manner. The use of computer technology has facilitated great improvements in the
systems concerned, so, for example, more aircraft can be packed into the same airspace,
or more complex power station designs can be operated. The use of computer-based
technology also allows for a greater degree of flexibility in operation than was possible
with electronic and mechanical systems. These advances have been made, however, at the cost of introducing further complexity into the control systems, accompanied by a
reduction in the predictability of how these systems might fail. Furthermore, as capacity
is increased—more passengers carried, margins reduced, etc.—the repercussions of
failure become potentially more catastrophic.
Consequently, there exists a great need to ensure that, where computers are employed in
safety-critical settings, it should be possible to assure that they will operate as intended,
and that if they fail they do so in a safe manner. It is imperative that safety-critical
computer-based systems function as designed, and that the design reflects precisely what
is needed. This in turn pushes back concerns with the safe operation of a system to its
design, and the process by which this is arrived at. Thus, the focus broadens from a
consideration of the system in use, to the process by which the system comes into
existence. In particular, this thesis considers the early formative stages of the process
where the key decisions are made.
Existing studies of errors in the software development process identify the early stages
concerned with the elicitation and expression of requirements as being the most
problematic (Kelly and Sherif, 1992; Keutzer, 1991; Lutz, 1993; McConnell, 1993;
Sheldon et al., 1992; Weinberg, 1997). Errors in requirements are serious for a number
of reasons:
they are harder to detect;
undetected, they remain dormant until much later in the development process;
the longer they remain undetected, the more rework becomes necessary to rectify
them.
As a result of these factors, errors in requirements are the most costly to correct.
Requirements Engineering (RE) is the discipline which is concerned with this early stage
in systems development, the requirements process, in which a requirements specification is
developed from a set of (real or perceived) needs, along with various constraints and
other factors. It is necessary to consider how this process is managed in order to improve
safety.
Software process models have emerged as a means of controlling the development of
software systems. A model takes the form of a number of stages covering activities such as requirements specification, coding, testing, and so on. Probably the best known and
most widely used model is known as the waterfall model of system development, where
the process is seen as a cascade from one stage to the next, with deliverables being
passed forward at each stage. Some variants of the waterfall model include feedback
paths to allow for rework due to errors found at different stages. The waterfall model
and its variants attract similar criticisms for the poor way in which they depict the
iterative nature of software development, and the way they encourage an ‘over the wall’
attitude to the development process.
The notion of iteration has been incorporated into a range of more modern process
models. These include exploratory prototyping and evolutionary models such as Boehm’s
(1988) spiral model. All of these iterative models tend to consider the uncovering of
errors as part of the process. The cost of incorporating iteration, however, is often the
loss of traceability and measurement as the rapid changes inherent to an iterative model
increase the cost of reporting.
The main thrust to address problems in the software development process comes from
efforts in software process improvement. The most widely used approach arising from
this field is the Capability Maturity Model for software (CMM)5 developed by the
Software Engineering Institute (SEI) at Carnegie-Mellon University (Humphrey, 1988;
Paulk, Curtis, Chrissis and Weber, 1993). This model defines a number of levels of
maturity for software development processes, along with criteria that must be met by a
particular process in order to be assessed at that level. Techniques such as this focus on the process as it is documented, and encourage the creation of process documentation where it does not already exist, as a step towards making processes repeatable.
Where the products of the systems development process are safety-critical, then the
costs of faults in the resulting systems are potentially even greater. Further, the
importance of developing software that functions as intended is also heightened, which
in turn necessitates a set of requirements that accurately reflect the actual needs,
constraints, and so on for the system being developed. In other words, the development
process should itself be considered to be safety-critical, and in particular so should the
requirements process.
One of the features that distinguishes the requirements process from other stages of
systems development is that it is highly human intensive. Whilst tool support is
becoming more common, very little of the requirements process is automated. Successful
RE depends upon a great deal of social interaction within project teams, between
analysts and domain experts, with end-users, and so forth. Human activity at this level is
not sufficiently addressed by existing process improvement methods.
Human reliability techniques exist which have been used to assess potential human
problems with various safety-critical systems (Bell and Swain, 1985; Bello and
Colombari, 1980; Embrey, 1987). Unfortunately, their focus tends to be on human
operators of hazardous equipment (e.g. nuclear power plant control rooms) rather than
on the work of designers of such equipment. They are also typically concerned with
5 The term CMM is used throughout this thesis to refer to the Capability Maturity Model for software
(as distinct from other variants of the CMM).
cognitive issues at the interface between human and machine, rather than some of the
social issues that are likely to become important where several people are working
together.
For these reasons, this thesis is concerned with the development of a human-centred
process improvement method for the requirements engineering process for safety-critical
systems development. The development of this method represents the novel bringing
together of a range of research traditions. The combination of these techniques extends both the consideration of how the software process is managed and the scope of safety concerns. This forms the core of the contribution of this thesis. The next chapter examines literature from the human sciences which will form the basis of this thesis' method. The
remainder of this chapter returns to the issues covered above in more detail. Section 2.2
reviews the field of Requirements Engineering (RE), covering where it is situated in the
systems development lifecycle, and different approaches to RE. Section 2.3 presents the
problems which exist in RE, providing a focus for later work in the thesis. Section 2.4
examines the nature of RE for safety-critical systems, how it is different from RE for
non-critical systems, and what techniques have been employed in the field.
2.2 Requirements Engineering
Requirements Engineering is a relatively recent term which denotes the set of activities
that are associated with the elicitation, specification, and management of requirements
for a computer-based system. The RE process is the first stage in the development of a
software system, and it feeds into the broader software development process as a whole.
What actually constitutes the RE process, however, is open to some debate. This section
presents definitions of the term Requirements Engineering from the literature, and then
describes the broader context of the software process in which RE exists. This is
followed by a more detailed description of the RE process, including different
approaches to RE.
2.2.1 Definitions of RE
Davis (1993) argues that requirements engineering can be seen as being composed of
two sets of work activities (see Figure 2.1). On the one hand, requirements engineers
undertake ‘problem analysis’ where they engage in brainstorming, interviewing clients or
conducting focus group sessions with them, meeting with other members of the
requirements team and so forth. In problem analysis, the requirements engineer will
attempt to understand the application domain in as rich a manner as possible,
appreciating the complexity of the problem. On the other hand, requirements engineers
are engaged in ‘product description’ as they assemble information, experiment with
notations, diagrams or pseudo-code, or author various documents related to the
requirements process. These two drivers can be thought of as acting against each other.
In product description, the requirements engineer is attempting to ‘congeal’ the rich and
variegated knowledge and appreciation she may have concerning the application domain
into more concise and inevitably simplified expressions of the issues. While problem
analysis may expand the area of the ‘problem space’ under consideration, product
description narrows it down as the requirements engineer moves towards a specification
document. An extended requirements process may iterate between product description
and problem analysis many, many times although the general drift will be towards
product description and specification as time unfolds.
Figure 2.1: Activities during the requirements phase (from Davis, 1993, p. 21)6. Problem analysis (understanding the problem, expanding information, delineating and refining constraints, and trading off between conflicting constraints) expands from the seed idea; product description (consistency checking and congealing) narrows towards a consistent and complete SRS (Software Requirements Specification) and a relatively complete understanding of requirements.
Before considering how to improve requirements engineering, it is worth reflecting on
what is currently understood by RE in the literature. Macaulay (1996) uses the
following definition from Pohl (1993) to drive her consideration of what RE consists
of:
Requirements Engineering can be defined as the systematic process of developing requirements
through an iterative co-operative process of analysing the problem, documenting the resulting
observations in a variety of representation formats, and checking the accuracy of the understanding
gained. (Macaulay, 1996, p. 5)
6 Clouds not to scale!
Macaulay subsequently focuses on a number of key phrases within the above definition
in order to tease out what is implied about the nature of RE. In particular, RE is a
systematic, iterative, and human-intensive process.
The fact that RE is human-intensive should not come as a surprise. The process is largely
concerned with one set of people (the requirements engineers) coming to an
understanding of the work performed by another set of people (the users, or some
similar set of people). This understanding is then communicated, in the form of a specification of a computer-based system intended to solve some problem in the users' domain, to a third set of people (the customers or procurers). This process, and
its systematic nature, is reinforced by Sommerville (1996), who defines the RE process
in terms of the activities that constitute it. Four principal stages are described:
Feasibility study which is primarily concerned with determining whether
development of the proposed system is cost effective from a business perspective.
The resulting Feasibility Report will inform the decision on whether to continue
with more detailed consideration of the proposed system.
Requirements analysis which is the process of developing the requirements for
the proposed system through various means such as observation, discussion,
modelling, and developing prototypes. This stage may result in a number of
models of the proposed system, which will contribute to the Requirements
Document.
Requirements definition which is the process by which a document defining
the set of requirements for the proposed system is created. The document is
couched in terms of what the customer wants, and is written so as to be
understood by customers and end-users.
Requirements specification in which a detailed, precise description of the
proposed system is produced. This will become the basis for the contract between
developer and customer.
In a more recent work, Sommerville & Sawyer (1997a) provide the following definition:
Requirements engineering is a relatively new term which has been invented to cover all of the activities
involved in discovering, documenting, and maintaining a set of requirements for a computer-based
system. The use of the term ‘engineering’ implies that systematic and repeatable techniques should be
used to ensure that system requirements are complete, consistent, relevant, etc. (Sommerville and
Sawyer, 1997a, p. 5)
Once again, the systematic nature of the endeavour is reinforced here. The need to be
systematic arises from concerns with reducing errors, as well as introducing a level of
repeatability to the process. These are fundamentally the same concerns as basic
approaches to process improvement (defining a process and making it repeatable), and
are indicative of RE’s relative immaturity in the software engineering community. The
following section describes the software process as a whole in greater detail, in order to
provide further context for this chapter’s consideration of RE.
2.2.2 RE in context: The software process
Any large scale complex system will need to be built under the familiar resource
constraints of time, personnel and finance. As such, it will need to be a managed process
involving some division of labour among possibly changing personnel of varying
specialisms. The traditional approach to managing this complexity has been to use an
abstract model of the activities involved in software development. A range of
approaches to software process models exist. Two general approaches can be considered
as representing the different trends in software process: the waterfall model, which has a
strong sequential flavour, and the more iterative spiral model.
The waterfall model (see Figure 2.2) is an approach to system building that specifies the
process as a series of well-defined steps. It evolved from general systems theory and from
the planning and logistic work inspired by Operations Research during the Second
World War (Agresti, 1986). The idea of the ‘life-cycle’ is intended to capture the overall
process of system development, from the inception of the idea through to the eventual
elimination of the system from service. In significant respects, and as its origins suggest,
software system development is treated as an instance of project organisation and
management. Crucial to the model is the importance attached to the early phase of
determining system requirements for the design, and holding off implementation of these
in software until late in the whole process. There have been a number of variants
proposed since the original version appeared in the early 1970s, though all follow much
the same logic.
Figure 2.2: A waterfall model of software development, showing four stages in
sequence: requirements analysis and definition; system and software design;
implementation and unit testing; integration and system testing.
The Waterfall model (and/or the methods of structured analysis that are often closely
associated with it) is one which is, notionally at least, in very wide use in industrial
practice. It has not lacked its critics and a wide variety of alternative models have been
formulated. The majority of these have focused on introducing some notion of iteration
into the process. The most notable examples of these include the whirlpool model
(Belady and Lehman, 1976), rapid prototyping (Lim and Long, 1992), and the spiral
model. The spiral model was developed by Boehm (1988) to combine the strengths of
the classic Waterfall model with those of the evolutionary ‘prototyping’ approach,
linking these with ‘risk analysis’ procedures (see Figure 2.3).
The main feature of the spiral model is the inclusion of risk analysis stages in each cycle
of the development. Each cycle can follow a different process model, according to the
identified risks, e.g. if interface or requirements are a major risk, then a rapid
prototyping model might be used. Alternatively, if performance and user interface risks
have been resolved, and development risks dominate, then a more traditional waterfall
phase can be followed (modified to suit the incremental development). In this way, the
outstanding risks in the development of a product become the primary driving force
behind the development process, and are used to determine the most appropriate (i.e.
lowest-risk) approach to be taken.
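Boehm does not prescribe code for this, but the risk-driven selection of a process model for each cycle can be sketched as follows (the risk categories, scores and decision rules here are hypothetical illustrations, not drawn from Boehm, 1988):

```python
# Illustrative sketch: each spiral cycle begins with a risk analysis, and
# the dominant outstanding risk determines the process model followed for
# that cycle. Risk names, scores and rules are hypothetical.

def choose_cycle_model(risks):
    """Pick a process model for the next spiral cycle from a dict
    mapping risk categories to scores (0.0 to 1.0)."""
    dominant = max(risks, key=risks.get)
    if dominant in ("requirements", "user_interface"):
        # Poorly understood requirements or UI: prototype to resolve them.
        return "rapid prototyping"
    if dominant == "development":
        # Requirements and UI risks resolved; development risk dominates.
        return "waterfall (modified for incremental development)"
    return "evolutionary development"

# Early cycle, where requirements are the major risk.
cycle_1 = choose_cycle_model(
    {"requirements": 0.8, "user_interface": 0.5, "development": 0.3})
# Later cycle, where development risk dominates.
cycle_2 = choose_cycle_model(
    {"requirements": 0.1, "user_interface": 0.1, "development": 0.7})
print(cycle_1, "|", cycle_2)
```

The point of the sketch is only that the outstanding risks, not a fixed sequence of stages, drive the choice of approach in each cycle.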
Figure 2.3: Boehm’s spiral model of software development (from Boehm, 1988, p. 64).
Development spirals outwards through four repeated quadrants: determine objectives,
alternatives and constraints; evaluate alternatives and identify and resolve risks;
develop and verify the next-level product; and plan the next phases. Successive
cycles move from the concept of operation and requirements, through risk analyses,
prototypes, simulations, models and benchmarks, to design, coding, testing and
implementation, with cumulative cost growing as progress is made through the steps.
One of the prime motivators for this thesis is the concern with the development of
safety-critical systems. Whilst the activities involved in developing software for safety-
critical domains are largely the same as for any other system, extra care must be taken to
ensure that errors are not introduced into the products of the process. Companies
developing safety-critical systems tend to treat safety requirements separately from the
rest of the product requirements. Safety requirements will typically be assigned Safety
Integrity Levels (SILs) as described in the Draft IEC Standard 1508 (IEC-1508-3,
1997).7 This standard specifies a development process for safety-critical systems which is
essentially based on the V model of development. The V model is so named because of
the way in which the process model is typically represented (see Figure 2.4). The V
shape is a result of the inclusion of explicit testing activities, and the criteria for them, in
the process. On the left-hand-side of the V, moving from requirements to
implementation, each stage produces the criteria by which the output(s) of the stage can
be tested. The right-hand-side of the V, from implementation to introduction of the
system, consists of a series of verification and validation activities which test the
software against the aforementioned criteria.
Figure 2.4: ‘V’ model of software development (as defined in IEC-1508-3, 1997)8.
The left-hand side of the V descends from the E/E/PES safety requirements
specification and E/E/PES architecture, through the software safety requirements
specification, software architecture, software system design and module design, to
coding; the right-hand side ascends through module testing, integration testing
(module), integration testing (components, sub-systems and onto PES) and
validation testing, to the validated software deliverable.
Developers of safety-critical systems usually report that they use either the waterfall or
V model (Sommerville and Viller, 1997) primarily because of the need to be able to
audit the process and deliver certain documentation at particular milestones (which are
tied in to the stages of the process model). In some cases, the reported use of these
‘standard’ models is in tension with the actual process followed, but discrepancies
between actual and reported processes are tolerated because of the need to report on
progress in a way which can be audited and assessed for safety issues. Given the
recognised problems with the waterfall model, and problems which are bound to exist
when a different process is followed to that which is reported, the importance of
examining these development processes for problems—and the suggestion of solutions
to them—cannot be overstated.
7 This draft standard has now been agreed as a full international standard IEC 61508.
8 See section 2.4.1 for more details on IEC 1508.
Despite the differences between these competing models, they share in common the
notion of system development moving on from an initial stage of exploration about the
application domain, leading to some form of specification of a system that is intended to
fulfil some desired function, and finally on to implementation and testing. Whether each
activity is returned to several times, or (at least on paper) completed in a single pass, the
nature of the activity is the same, and the product is always a system based upon the
specification of system requirements. It is the process leading to this specification that is
the focus of RE, and the work reported in this thesis. The following section presents a
number of approaches to software development that incorporate this process.
2.2.3 Approaches to RE
Even if practitioners largely agree on the overall structure of the RE process, there is no
single agreed approach to arriving at the end-product. Some approaches have been in
wide use in industry, whilst others have been more restricted to research labs. This
section presents a broad cross-section of the different approaches and philosophies to
system development that have emerged since the introduction of the waterfall model.
The purpose of this review is not to reach any conclusions regarding the appropriateness
of the various methods or approaches, nor to decide which method is the best for any
given system development. Rather, the intent is to present the approaches as a
characterisation of the work which requirements engineers engage in, and to draw out a
number of common features which can be seen as typical of the activity of software
development in general, and requirements engineering in particular.
2.2.3.1 Functional approaches
Functional approaches to RE are typified by Structured Analysis, in its various forms
(DeMarco, 1978; Yourdon, 1989). Chronologically, structured analysis was developed
following on from structured programming and structured design, and in these terms can
be seen as the result of a design philosophy being applied to successively earlier stages of
systems development. Earlier versions of structured analysis were heavily influenced by
the same motivating factors which gave rise to the waterfall model, and the process
embodies a similar sequential and transformational approach.
First of all, a current physical model is developed, which describes the system as it is
currently implemented. From this, a current logical model is derived, which models the
current system concepts, and relational dependencies between them. This model is
expressed in abstract terms, without reference to the physical means by which the
system is implemented. The new logical model is developed from this, which represents the
functionality of the system to be developed in a similar abstract way. Finally, the new
physical model is constructed, based upon the new logical model, and this forms the basis
upon which the new system is implemented (see Figure 2.5).
Modern Structured Analysis (Yourdon, 1989) advocates the omission of any modelling
of the current system if at all possible. The reason given for this is that users can become
disillusioned with the length of time and effort required to produce these models which
do not result in anything tangible to demonstrate what the new system will be like, and
the project may be cancelled as a result. Instead, the analyst should embark upon the
modelling of the new system, termed the essential model, as soon as possible. The essential
model is meant to describe what the system is to do, rather than any implementation of
how it might be achieved. This mirrors the traditional distinction in RE that requirements
should be expressed in a way that is independent of any particular technological
solution.
Figure 2.5: Structured Analysis process (from Bansler and Bødker, 1993, p. 168).
The current physical model is analysed to derive its logical equivalent, the
current logical model; from this the new logical model is produced and the
man-machine interface established, leading to the new physical model, from which
the new system is implemented.
Nevertheless, the essence of both traditional and modern structured analysis is to model
the system as a series of functions, transforming data from input to output. The primary
notation used for this modelling is the data flow diagram (DFD). Modelling with DFDs
requires the requirements engineer to take a process orientation, with data taking a
secondary role—processes are assumed to be what remain static, with data being more
transitory. In structured analysis, people are effectively treated as data, appearing as
sources or sinks in models. For structured analysis, it is not an issue whether a function
is carried out by a person or a machine, just that a particular transformation takes place
on a set of data. This perspective can be seen to also apply to the way in which analysts
are encouraged to work with users during the RE process. DeMarco (1978) talks about
co-opting users in order to render them less of a problem later on:
Urge the user to help you ‘debug’ the model [of the new system] while it is still on paper. [...] This is
not the time to ask the user to be lenient. On the contrary, he ought to be allowed some whims while
the system is still made only of paper. Now is the time to co-opt him, to make him feel that, by
imposing his modifications on the model, he is establishing the eventual shape of the system. This will
give him a sizable measure of responsibility for the system when it is delivered, and will make him feel
every deficiency is at least partly his fault. That frame of mind makes him doubly helpful during
analysis and more than normally docile at acceptance time. (DeMarco, 1978, pp. 263–264)
Yourdon (1989) recognises that it is unfortunately common for analysts to “assume that
all users are blithering idiots when it comes to the subject of computers.” (p. 48) and
exhorts analysts to avoid “the common mistake of treating them as a subhuman form of
life” (p. 48). However, in categorising types of user that will be encountered according to
their level of experience, Yourdon seems to encourage just this for a particular class of
user.
A second type of user is the one I like to call “the cocky novice,” the person who has been involved
in one or two systems development projects, or (even worse) the user who has a personal computer
and who has written one or two (ugh) BASIC programs. This user often claims to know exactly what
he or she wants the system to do and is prone to point out all the mistakes that the systems analyst
made on the last project.” (Yourdon, 1989, p. 49, original emphasis and parentheses)
So, in structured analysis, users are largely to be kept at arm’s length, and manipulated in
order to maximise the analyst’s objectives in arriving at the specification of the system
that will be built. Understandably, this perspective has come under criticism from a
number of sources. Two prominent alternative approaches which take a very different
view on the analyst-user relationship are Participatory Design, and User-Centred Design,
which are considered later in this chapter. The following section, however, reviews an
approach to RE which differs from structured analysis in the primary perspective
adopted in the models developed, taking objects (data) rather than operations (process)
as the fundamental modelling unit.
2.2.3.2 Object-oriented approaches
The major difference between object-oriented and functional approaches to systems
development is in the choice made over what it is about systems that should be of
primary concern for requirements engineers—objects or operations—and consequently
what becomes the main focus for models of the system. In functional approaches, it is
the processes that are modelled primarily, with the data they manipulate seen as
secondary. In object-oriented approaches, the opposite view is taken, whereby objects
(the data in the work place) are the focus of the modelling, and the operations
performed on the objects (the processes followed in the work place) are secondary.
Advocates of object-oriented approaches argue that this is more ‘natural’, that as
humans we are accustomed to classifying the world around us as objects, with
relationships grouping them into composite objects, and classifying them into similar
types.
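The contrast in primary modelling unit can be illustrated with a deliberately small sketch (the library-loan example and all names in it are invented for illustration):

```python
# Illustrative contrast between the two perspectives, using an invented
# library-loan example.

# Functional view: the process is primary; the data it transforms
# (plain records) is secondary.
def check_out(book, member):
    """Transform input data (a book and a member) into output data."""
    return {"book": book["title"], "member": member["name"], "status": "on loan"}

# Object-oriented view: the objects are primary; the operation is
# secondary, attached to the object it concerns.
class Book:
    def __init__(self, title):
        self.title = title
        self.status = "on shelf"

    def check_out(self, member_name):
        self.status = "on loan to " + member_name

loan = check_out({"title": "Ulysses"}, {"name": "Ada"})
book = Book("Ulysses")
book.check_out("Ada")
print(loan["status"], "|", book.status)
```

In the first view the transformation is the stable unit and the records flow through it; in the second the object persists and the operation belongs to it.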
The history of object-oriented analysis has similarities with structured analysis, in that its
roots are in object-oriented design, and object-oriented programming before that. Just as
with the models produced to describe systems, object-oriented software requires the
programmer to take a different perspective to that used with functional languages. As
object-oriented programming languages have become more popular, so this has become a
justification for switching to object-oriented design and subsequently analysis, in order
to avoid having to switch from a process to a data orientation within the development
process for a single system.
Initially, just as was the case for structured analysis, there were a number of exponents
of object-oriented analysis, each proposing their own process and notation (Booch,
1994; Coad and Yourdon, 1990; Jacobson, Christerson, Jonsson and Övergaard, 1992;
Rumbaugh, Blaha, Premerlani, Eddy and Lorensen, 1991). Very recently, however,
there has been a great deal of effort to bring the various approaches together to form a
single, unified method. This effort has centred around the Rational Corporation9, to
which the three main protagonists are now affiliated, namely Grady Booch, James
9 http://www.rational.com/
Rumbaugh, and Ivar Jacobson, and it has culminated in the development of the Unified
Modelling Language (UML, see Fowler and Scott, 1997)10. UML is rapidly becoming an
industry standard, recently having been accepted by the Object Management Group
(OMG) as the standard for exchanging models between tools. As a notation, UML is
process-independent, although Rational recommend the use of their Objectory process,
based upon Jacobson et al.’s (1992; 1995) original work. This process is iterative, being
influenced by Boehm’s (1988) spiral model, although it is presented in rather a different
way (see Figure 2.6). The process is described in terms of two dimensions: one of them
dynamic, in terms of the phases of construction (Inception, Elaboration, Construction,
Transition); and the other static, consisting of the components which make up the
content of the process (Requirements Capture, Analysis & Design, Implementation,
Test, Management, Environment, Deployment). Each of the phases consists of a number
of iterations, and finishes with a major project milestone. Depending on the phase of
development, each iteration of the process will involve varying proportions of the
different process and supporting components.
Figure 2.6: Rational’s Objectory process for object-oriented design (from Rational,
1997). The process is organised along time into four phases (Inception,
Elaboration, Construction, Transition), each consisting of a number of iterations,
and along content into process components (requirements capture, analysis & design,
implementation, test) and supporting components (management, environment,
deployment).
One distinguishing feature of the Objectory process is that it is use case driven. Use cases
are an approach to modelling the functionality of systems that is both object oriented,
and user-centred. The former makes them fit well with an overall object oriented
approach to system development (although it is claimed that they can also be used with
other approaches), and the latter means that they can be shown to users in order to
validate the models.
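A use case is, in essence, a structured description of an interaction between an actor and the proposed system. The following sketch shows one plausible way of representing such a description as a record; the field names and the cash-withdrawal example are invented for illustration, and do not reproduce Jacobson’s or Rational’s notation:

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch of a use case as a structured record. The fields
# and the example content are invented; they are not Jacobson's or
# Rational's notation.
@dataclass
class UseCase:
    name: str
    actor: str
    goal: str
    steps: List[str] = field(default_factory=list)

withdraw = UseCase(
    name="Withdraw cash",
    actor="Bank customer",
    goal="Obtain cash from own account",
    steps=[
        "Customer inserts card and enters PIN",
        "System verifies PIN and presents options",
        "Customer selects an amount",
        "System dispenses cash and returns the card",
    ],
)
print(withdraw.actor, "->", withdraw.name, f"({len(withdraw.steps)} steps)")
```

Because the description is couched in the actor’s terms rather than the system’s internals, a record of this kind can be read back to users for validation.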
As far as the nature of object oriented requirements engineering is concerned, advocates
of object oriented techniques claim that modelling the world in terms of objects is more
‘natural’ than functions. According to them we see the world around us in terms of
things, which we classify both in terms of structure and taxonomy (Coad and Yourdon,
10 The specification of UML is available from Rational’s web site at http://www.rational.com/UML/
1990). It is unusual, however, for a software engineer to have only experienced or been
trained to use object oriented techniques, and the transition from functional to object
oriented analysis is not straightforward. The intellectual work in object oriented RE can
be quite taxing as a consequence.
The workflow in the requirements capture phase of Objectory, as characterised by its
developers, is presented in Figure 2.7. This entails a great deal of contact with the users
in developing an understanding of the workplace and a common vocabulary for the use
case models to be expressed in. For a small development, all the roles may be fulfilled by
the same requirements engineer, but for larger projects, there will be a degree of
negotiation required between different members of a development team where selection
and prioritisation of use cases for modelling is concerned. Just as for structured analysis,
therefore, the process involves the coordination of a number of roles, and for larger
projects these roles will be performed by different people.
Figure 2.7: Workflow in Objectory’s requirements capture component (from Rational,
1997). The workflow comprises finding use cases and actors, capturing a common
vocabulary, prioritising use cases, describing and structuring the use-case model,
describing individual use cases, and reviewing the use-case model, with the work
allocated to architect, use-case specifier and requirements reviewer roles.
This section has so far dealt with functional and object oriented approaches to RE, both
of which are concerned largely with the way in which the system requirements are
modelled in terms of processes and data. The following section focuses on Participatory
Design, which places greater emphasis on the involvement of, and cooperation with,
users in the design process as a means of ensuring a more reliable match between user
needs and the requirements specification that is produced.
2.2.3.3 Participatory Design
Participatory Design (PD) is firmly rooted in what has become known as the
Scandinavian tradition of information systems development. In contrast with structured
analysis, PD has a political commitment to involve users directly in the design process,
empowering them and enhancing their skills, job satisfaction, etc., rather than degrading
them. One of the ways that PD practitioners have attempted to achieve this is through
close cooperation with trade unions, and direct collaboration between designers and
users. There is no single approach that can be characterised as PD, with various
practitioners evolving the techniques over three decades (see, for example: Bødker,
1991; Bødker, Ehn, Kammersgaard, Kyng and Sundblad, 1987; Carlsson, Ehn, Erlander,
Perby and Sandberg, 1978; Nygaard and Bergo, 1975; Schuler and Namioka, 1993).
According to Ehn and Kyng (1987), writing about their Collective Resource Approach,
the key concepts of PD include:
The desire to enhance worker control over technology.
The centrality of the labour process.
The need to build on the tacit knowledge and skills of labour.
The use of computers to augment worker skills, not replace them.
The utility of the tool rather than the machine metaphor for the computer.
The poverty of formal models and abstractions of the work process.
The need and desire for ‘user’ involvement in the design process.
The need to develop concrete representations of designs—prototypes.
The awareness of systems design as a political and social, as well as technical
process.
Recently, PD has become more popular elsewhere in Europe, and also in North America
(see Muller and Kuhn, 1993), where very little exists of the social and political
infrastructure that made PD’s original aims attainable in Scandinavia. This, coupled with
a shift in Scandinavian society towards a more Western European outlook, has seen
PD’s perspective change from one of workplace democracy to more effective design
(see, e.g. Greenbaum and Kyng, 1991), with a focus on cooperation with, and the
involvement of, a variety of stakeholders in the design process.
Analysis in PD takes place in a variety of ways, including: Future Workshops (Kensing
and Madsen, 1991), which are convened in order to focus users and designers on
possible alternative futures for the workplace under study; Cooperative Prototyping
(Bødker and Grønbæk, 1991), which involves users in the generation of prototype
systems, rather than simply in their evaluation when they are demonstrated; and Design
Mock-ups (Ehn and Kyng, 1991), in which possible designs for the system are
constructed from a variety of materials in order to generate ideas and obtain feedback
from users. The emphasis in all the different approaches is on direct contact with, and
involvement of, users throughout the design process.
PD is not the only effort to orient the design process towards the users who are
intended to benefit from its products. The following section presents approaches to
requirements engineering that have been grouped under the term ‘user centred’.
2.2.3.4 User-centred approaches
The field of Human–Computer Interaction (HCI) (Dix, Finlay, Abowd and Beale, 1993;
Preece, Rogers, Sharp, Benyon, Holland and Carey, 1994) is concerned primarily with
the design of systems that match the needs and capabilities of the people who operate
them. The most visible beneficiary of this research is the design of the human–computer
interface itself. A considerable amount of effort has also been directed towards the
design of interactive systems in general, i.e. bringing concerns about the needs of users to
bear on the whole design of systems, rather than merely on the user interface. This
approach to systems design has its roots in the cognitive sciences, and has become
known as User Centred Design (see, e.g. Norman and Draper, 1986).
User-centred techniques are not characterised by a unified approach to design. In
contrast to other groupings in this section, user-centred approaches are much less of a
coherent categorisation, and more of a loose grouping of techniques which have evolved
from similar concerns and theoretical considerations. In fact, two broad groups of
approaches from the HCI tradition—rapid prototyping and task analysis—are in some
senses contradictory (see below). As far as a common theme exists, it is that they share a
common theoretical perspective on the design of systems based upon an understanding
of users and their capabilities. This theoretical perspective is informed by the cognitive
sciences in general, and cognitive psychology in particular.
The use of rapid prototyping in system design is not unique to user-centred approaches.
In contrast to PD, however, where users are directly involved in the production of the
prototypes, rapid prototyping in user-centred design follows a more traditional
approach. It is the systems designers (be they user-interface specialists or not) who
develop the prototype(s) for evaluation by end users. There follows a period of
reflection on the feedback obtained, which leads to the development and evaluation of a
new version of the prototype. This process repeats until designers and users are happy
with the version of the system as embodied in the prototype, and this then feeds into the
design and production of the delivered system. A typical HCI prototyping cycle is
presented in Figure 2.8.
Rapid prototyping offers several advantages to the developer of a system, as it allows
users and developers to cheaply explore design alternatives, and also helps to shape the
users’ expectations of the final system. Conversely, the process can be criticised for its
seemingly haphazard nature, which at its worst could be characterised as ‘hacking’ the
interface by trial and error. Another criticism of rapid prototyping is that little or no
understanding of users’ goals, knowledge, etc. is built up when prototyping. This means
that incorrect decisions could be made when addressing a problem with the interface,
because the developer misunderstands the reason for the problem (see Dix et al., 1993,
pp. 179-80). A more structured approach to RE and systems development from the HCI
community, based on psychological understanding and models of users, comes in the
form of task analysis.
Figure 2.8: An HCI prototyping cycle (from Bannon, Bowers, Carstensen, Hughes,
Kuutti, Pycock, Rodden, Schmidt, Shapiro, Sharrock and Viller, 1993, p. 87).
Pre-design research and survey lead to a first design; repeated cycles of
evaluation (observing interaction and interpreting symptoms) and refinement
(proposing and implementing changes) then produce successive prototypes, with key
requirements, costs to users and ‘deep’ problems feeding each new specification.
The term task analysis covers not one, but many methods for understanding the tasks
performed by users (Diaper, 1989a; Johnson, 1992). Task analysis involves breaking
down the tasks to be performed by a user into their components. This may be done
hierarchically, as in HTA (Shepherd, 1989), according to the knowledge required in
order to complete the task, as in TAKD (Diaper, 1989b), or even in terms of the objects
that a user interacts with, as in ATOM (Walsh, 1989). One feature of task analysis
which distinguishes all of the approaches from other, seemingly similar, analysis
techniques, is that the analysis is not restricted to activity or objects that will ultimately
be a part of, or be modelled in, the system under development. Task analysis is
concerned with modelling everything of relevance to the user in performing their tasks.
In some cases, the approach ultimately leads to the classification of tasks according to
whether they will be performed by the user, the computer, or a combination of human
and machine.
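The hierarchical decomposition that such an analysis produces can be sketched as a simple task tree, including the eventual allocation of tasks to user, computer, or both (the borrowing example and this representation are invented for illustration, not drawn from Shepherd, 1989):

```python
# Illustrative sketch of a hierarchical task analysis as a tree of
# tasks and subtasks. The task content is an invented example.
class Task:
    def __init__(self, name, subtasks=None, performed_by="user"):
        self.name = name
        self.subtasks = subtasks or []
        self.performed_by = performed_by  # 'user', 'computer', or 'both'

    def leaves(self):
        """Return the bottom-level tasks, depth first."""
        if not self.subtasks:
            return [self]
        result = []
        for sub in self.subtasks:
            result.extend(sub.leaves())
        return result

borrow = Task("Borrow a book", [
    Task("Locate book", [
        Task("Search catalogue", performed_by="both"),
        Task("Fetch book from shelf"),
    ]),
    Task("Check book out", performed_by="both"),
])
print([t.name for t in borrow.leaves()])
```

Note that the tree models everything the user does, including tasks (such as fetching the book) that will never appear in the system itself, which is the distinguishing feature of task analysis mentioned above.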
In work which brings together the two strands of HCI-related work above, Carroll
(1991) has proposed that the development of systems follows what he has termed the
task-artifact cycle (see Figure 2.9).
Carroll states that, if we look at how technology is developed:
“Design and development proceed chiefly by emulation of prior art; deduction from scientific principles
has always played a minor role. Moreover, details of both the design and its context are critical
determinants of viability. A “major” innovation typically depends on a variety of more modest
innovations, craft techniques, and even happenstance. Finally, design is an iterative process. Major
innovations rarely if ever emerge full-blown; they are incrementally synthesized over time. Every
emulation, every modest detail alters, in a sense creates, the context for subsequent invention” (Carroll
et al., 1991, pp. 75-76)
Figure 2.9: The task-artifact cycle (from Carroll et al., 1991, p. 80), in which
tasks set requirements for artifacts, and artifacts in turn open up possibilities
for new tasks.
So the design of technological artefacts is essentially iterative and evolutionary, with
scientific principle playing a minor role. With the task-artifact cycle, Carroll has
attempted to inform an iterative approach to design with a deeper, psychological,
understanding of the users who will ultimately use the artefact(s) being developed. This
approach provides a defence against the claims that rapid prototyping is weak on
understanding the underlying reasons behind good or bad interfaces, and that it
contributes little to any cumulative theory of good design.
In fact, what Carroll proposes is a task-artifact framework for HCI (see Figure 2.10) which
incorporates two existing techniques into the task-artifact cycle, namely scenario-based
design (Campbell, 1992; Carroll, 1995) and design rationale (Conklin and Begeman,
1988; Maclean, Young and Moran, 1989). The former is used to “express tasks as
artifacts” by envisioning possible designs as scenarios, and the latter is used to
“understand artifacts as tasks” by presenting the rationale for their design.
Figure 2.10: The task-artifact framework for HCI (from Carroll et al., 1991,
p. 83). The psychology of tasks is linked to artifacts in situations of use
through scenario-based design representation; artifacts are linked back to an
understanding of tasks through design rationale via claims extraction.
Another way in which HCI research has become systematised in system development is
through efforts to incorporate HCI-informed techniques into more structured
development methods. Methods such as USTM (Macaulay, 1993; Macaulay, Fowler,
Kirby and Hutt, 1990) and HUFIT (Catterall, 1990; Taylor, 1990) attempt to bridge the
gap between task analysis and structured methods. They achieve this by utilising the
techniques in workshops involving a variety of stakeholders in the proposed system, and
by incorporating human-factors research into the method’s toolset. These methods
emphasise working in multidisciplinary teams, involving users where possible.
One criticism of development techniques that are inspired by, and have arisen out of,
HCI research is that they are driven by the development of theory, and largely cognitive
theory at that. With theory as a primary motivator in HCI, studies often consist of
attempts to test and develop new and better theory, taking place in artificial settings
with users recruited to perform artificial tasks. The following section considers a highly
contrasting approach to influencing systems design.
2.2.3.5 Ethnographically informed design
Ethnography, or to be more accurate, ethnomethodological ethnography, is an approach
to the study of work which is a highly distinctive branch of sociology (Button, 1991;
Sharrock and Anderson, 1986). Ethnomethodology’s approach to understanding human
activity is different to other human sciences because it eschews theorising about it.
Rather, ethnomethodological accounts of human activity are based upon detailed
descriptions of the activity that are the result of spending prolonged periods as a
‘participant observer’ in the setting where the activity takes place. In taking this
approach, ethnography avoids the problems associated with the artificiality of
laboratory-based study, and produces accounts that are worded in terms that are readily
understood by the participants being studied. In particular, what ethnography offers the
design process over other techniques is detailed accounts of how work is accomplished
in practice, rather than how it may be specified, or how workers might report their
actions in an interview.
The rise in popularity of ethnography as an approach to RE is relatively recent (Bannon
et al., 1993; Goguen, 1993; Goguen and Linde, 1993; Jirotka and Goguen, 1994),
although it’s use as a technique to inform the design of technology is slightly longer
established (Suchman, 1987). Its popularity is largely due to the concerns within the
field of Computer Supported Cooperative Work (CSCW) (Baecker, 1993; Greif, 1988)
for the social nature of work, and the need to understand it in order to develop systems
that will successfully support it (Bentley, Hughes, Randall, Rodden, Sawyer, Shapiro and
Sommerville, 1992; Harper, Lamming and Newman, 1992; Heath and Luff, 1992;
Rouncefield, Viller, Hughes and Rodden, 1995). The benefit that ethnographic studies
have brought to the field of CSCW has largely taken the form of improved
understanding of the way in which work is socially organised, and how, for example,
seemingly mundane tasks can play a vital role in the successful accomplishment of the
work under study.
A number of criticisms have been made of ethnography, however, concerning its use as a
method of requirements elicitation. These have largely been of a practical nature, related
to the manner in which ethnography is conducted for more traditional purposes. In
essence, the criticisms aimed at ethnography as a method for RE take the form of:
ethnography is typically a lengthy process, taking several months, or even longer
in some cases. RE simply cannot afford to make use of a technique that takes so
long to produce results;
communicating the results of ethnographic studies to the design process is not
straightforward;
language and culture barriers exist between sociologists and technologists;
it is difficult to draw abstract lessons in the form of design principles from a
technique that is concerned with the concrete detail of a particular situation;
the success of an ethnographic study is dependent upon the skills of the individual
fieldworker.
Whilst many of the above comments and criticisms are true of ethnography in its ‘pure’
sense, there have been a number of developments in recent times that attempt to address
these issues in a number of ways.
Work in the COMIC project11 examined how the role of ethnography could be modified
in order to make it more suitable for use in the design process. This led to a number of
different scenarios of ethnography in system design (Hughes, King, Rodden and
Andersen, 1994):
Concurrent ethnography. This is where ethnographic study is performed
alongside the development of a prototype of the system. Regular debriefing
meetings are held between ethnographers and designers in order to provide
domain information for the prototype development, and for the prototype
developer to raise questions which are used to focus subsequent periods of study.
The prototype which results from this process has been directly influenced by
ethnographic study.
Quick and dirty ethnography. The main distinction between this form of
ethnography and concurrent ethnography is the relative scale of the development
in comparison to the ethnographic study. In cases where the study has to take
place in a large workplace (in terms of physical size, or number of people
involved), it would be impractical to expect an ethnography to cover the whole
of the workplace. However, brief, focused studies of particular aspects of the
workplace can lead to great insights into specific parts of the work to be
supported, and are useful for providing a scope for the work of the designers.
This scoping is provided as a document which results from a number of
debriefing meetings interspersed with brief focus studies.
Evaluative ethnography. This can be viewed as a more focused version of
quick and dirty ethnography, in that it is used to perform a ‘sanity check’ on an
existing design, prior to its implementation. A study of this type can be used to
ensure that the social organisation that currently exists in a workplace will not be
11 http://www.comp.lancs.ac.uk/computing/research/cseg/comic/
harmed by the introduction of a new system, or that the models of the work
process that are embodied in the design sufficiently reflect what happens in
practice.
Re-examination of previous studies. In some instances, it may be possible to
gain useful insights into the social organisation of a particular workplace through
the examination of existing ethnographies of similar settings. This is not
straightforward, as each study will have been performed for a particular purpose.
However, where general, infrastructural features of cooperative work are of
interest, the ability to turn to a corpus of existing studies is potentially highly
useful.
These different approaches are all aimed at making use of ethnographers (i.e. the people)
in the design process. They address some of the issues raised above, but still suffer from
the fact that the ethnographers must communicate their findings to the designers in the
projects. They must do this in such a manner that their findings are understandable, are not distorted significantly by the two groups' differing perspectives, and are useful for design. Subsequent work has addressed some of these issues by examining
ways in which the results of ethnographic studies can be presented in the design process
so as to be more readily understood and relevant for design.
There are a number of ways in which ethnographic studies have been used to influence
the design process. In some cases, it is simply that studies have taken place that have
raised a number of issues for the research community to consider. In others, the
ethnographers are directly involved in the design process, and present reports on their
studies during debriefing meetings with the designers (Bentley et al., 1992). In still
others, the ethnographers are engaged in modifying the ways in which their reports are
presented to the design process in order to bring out the specific details of the study that
are most useful for design (Hughes, O’Brien, Rodden and Rouncefield, 1997; Hughes,
O’Brien, Rodden, Rouncefield and Sommerville, 1995). The latter two positions
correspond to the first two ways in which the relationship between ethnographers and
designers can operate, according to a recent paper by Button & Dourish (1996).
In their paper, which is essentially concerned with addressing how ethnography can
contribute to the design process, Button and Dourish present three possibilities for how
the ethnographer-designer relationship could operate:
Learning from the ethnomethodologist. As in concurrent ethnography, for
example, ethnographers undertake their study in the workplace, and write this
up. It is through the participation of ethnographers in the design process,
however, that the information is transferred to the designers. Ethnographers may
act as proxies for the users in the field, or for the workplace itself, and can
provide feedback to the designers in response to suggestions for changes to the
system under consideration. In this case, the ethnographer is engaged directly in
the process of informing the design, and in Button and Dourish's
terms, the locus of the ethnomethodology (what makes the analysis an
ethnomethodological one) is in the ethnographer’s head.
Learning from ethnomethodological accounts. In this approach,
ethnographers perform a study of a workplace, and write it up as before. This
time, however, it is the account—that which has been written up—that is passed
on to the design process. This is similar to the presentation viewpoints (Hughes et
al., 1995) and framework (Hughes et al., 1997), where an ethnographic study is
presented in a manner intended for consumption by designers. The locus of
ethnomethodology, in this case, is in the account.
Learning from ethnomethodology. Finally, in this approach, the design
process is informed by ethnomethodology itself. Rather than relying on
ethnographers to present the results of a study, either themselves or through an
account of it, this approach requires the design process itself to be directly
influenced by ethnography. In this case, according to Button and Dourish, the
design process itself is in some way an ethnomethodological one.
Recent work at Lancaster in the Coherence project (Viller and Sommerville, 1999a;
Viller and Sommerville, 1999b; Viller and Sommerville, in press) attempts to work from
this third perspective by developing a systematic method of social analysis for systems
design. Building upon the work on presenting ethnographic accounts to the design
process, Coherence integrates social viewpoints into an existing viewpoint-oriented
approach to requirements engineering called PREview (Sommerville and Sawyer,
1997b; Sommerville, Sawyer and Viller, 1998). The Coherence approach is intended to
be applied by software engineers, who do not necessarily (and probably do not) have any
formal training in sociology. By integrating common findings in ethnographic studies into
the viewpoints, Coherence sensitises the analyst to social issues that may be pertinent to
the system being developed. Coherence also attempts to bridge the gap to more
traditional design approaches by linking through to object oriented analysis, via use case
models.
It can be seen from the above that the relationship between ethnography and design is
still developing, and has not really been taken up yet by industry, although more recent
efforts are aimed at making ethnographically informed design more accessible to analysts
in industry. The benefits of ethnography are particularly apparent in any system which
entails coordinating the efforts of several people. What ethnography offers is a rich and
detailed description of how the work is currently done in practice, and what any new
system should be wary about perturbing. These benefits come at a cost, which is why
various attempts have been made to modify the approach and gear it more towards the
design process.
The different ways that the requirements process can be influenced by ethnography (cf.
Button and Dourish's work) lead to different models of the work performed by a
requirements engineer. On the one hand, it may involve working in collaboration with
ethnographers on a particular design, or at least with written accounts produced by
them. This involves developing an understanding between ethnographers and
requirements engineers of each others’ vocabulary and cultural differences. On the other
hand, where the process is directly influenced by ethnography, requirements engineers
need to develop an understanding of the social aspects of the workplace by themselves,
in order to inform the design of the system.
2.2.3.6 Formal methods
The final approach to RE considered here is particularly pertinent given this thesis’ focus
on processes for safety-critical systems development. Formal methods have their
strongest following in the safety-critical systems field, where the ability to prove
mathematically that a system matches its specification is particularly appealing. Formal
methods are increasingly becoming mandatory in safety-critical development, as
indicated in standards such as the UK Ministry of Defence’s standard for procurement
of safety-critical software (Ministry of Defence, 1997a; Ministry of Defence, 1997b).
Formal specifications are expressed in a language whose vocabulary, syntax, and
semantics are formally defined (Sommerville, 1996). They can be classified into two
broad categories where the methods are based upon:
model-based notations such as Z (Spivey, 1988) and VDM (Jones, 1986); or
process algebras such as CSP (Hoare, 1985) and CCS (Milner, 1980).
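To give a flavour of the model-based style, a deliberately simplified, hypothetical Z-like schema for a temperature trip function might be written as follows (the state variables and the invariant are invented for illustration, not drawn from any real system or standard):

```latex
% Hypothetical Z-like schema: the trip signal must be raised
% whenever the measured temperature exceeds its limit.
\[
\begin{array}{|l}
\textit{TripMonitor} \\
\hline
temperature, limit : \mathbb{N} \\
trip : \{\mathit{on}, \mathit{off}\} \\
\hline
temperature > limit \Rightarrow trip = \mathit{on}
\end{array}
\]
```

A model-based specification of this kind describes the system state and the invariants it must maintain; a process algebra such as CSP would instead describe the permissible sequences of events.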
The advantages of adopting a formal specification approach include: the rigour of
following formal methods can lead to a deeper understanding of the software
requirements and design; formal specifications, being expressed in a mathematical
notation, are open to analysis with mathematical techniques to prove consistency and
completeness; and formal specifications may be processed automatically by software
tools.
These advantages are frequently outweighed by the perceived difficulty of understanding,
and the overhead of adopting, formal methods. Whilst these perceptions may be largely unfounded,
formal methods are at a disadvantage where validating specifications with end-users is
concerned as it requires the users to understand the notation being used to represent
their domain.
Whilst formal methods are weak in terms of supporting the analyst in the earlier stages
of RE, they come into their own in the later stages of analysis leading to specification.
Formal transformation approaches to the development process take advantage of the
precision of the notations used in order to treat systems development as a series of
mathematical transformations from formal specification to final executable program.
Verification at each step in the process makes it possible to assert reliably that the final
system functions exactly as denoted in the formal specification. With automated tool
support for these transformations, a great deal of human error can be avoided. Of course, this does not
avoid the possibility that the specification does not describe exactly what is required of
the software.
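This limitation can be illustrated with a small sketch. The predicate and function below are hypothetical examples, not part of any formal method's toolset: verification establishes that an implementation satisfies its specification, but says nothing about whether the specification captures what is actually required.

```python
# Illustrative sketch only: a "specification" expressed as a Python
# predicate, checked against an implementation by sampling inputs.
# Formal verification would prove the property for *all* inputs.

def spec_abs(x: int, result: int) -> bool:
    """Specification: result is non-negative and has the magnitude of x."""
    return result >= 0 and (result == x or result == -x)

def implementation_abs(x: int) -> int:
    """Candidate implementation of the specified function."""
    return x if x >= 0 else -x

# The implementation conforms to the specification on these samples...
assert all(spec_abs(x, implementation_abs(x)) for x in range(-100, 101))

# ...but if the specification itself is wrong (e.g. the procurer in
# fact wanted values saturated at some bound), conformance to it
# demonstrates nothing about fitness for purpose.
```

The point of the sketch is precisely the caveat in the text: a verified chain of transformations transfers trust from program to specification, so any error in the specification survives intact.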
2.2.3.7 Summary
The purpose of this review was to present a variety of approaches to RE, in order to
arrive at a characterisation of the work engaged in by requirements engineers. This was
not an exhaustive survey. For example, quality-related approaches such as QFD (Haag,
Raja and Schkade, 1996; Sullivan, 1986), or viewpoint-based approaches (Kotonya and
Sommerville, 1992) are not covered here. Nevertheless, a broad cross section of
approaches to RE has been reviewed. It is interesting to note that, despite the diversity
of perspectives and approaches to the endeavour, the characteristics of the work
involved for requirements engineers are largely the same. Common factors to be found in
all of the above approaches include:
Domain knowledge First and foremost, requirements engineers need to build
up an understanding of the domain in which the system being developed is to
operate. This requires a great deal of technical knowledge about the domain to be
collated together, whether this be about nuclear physics, banking procedures, or
safety legislation. Requirements engineers’ understanding of the domain in which
they are working must be good enough not only to develop a complete and
consistent set of requirements for the procured system, but also to talk
authoritatively about the domain to other members of the development team,
and to users or procurers about the proposed system.
Method Even the most informal approach to RE involves the requirements
engineer following or applying some sort of method. This method may range from
detailed, step-by-step guidance on what to do at which point of the
process, as in many functional approaches, to a set of sensitivities and approaches
to the understanding of the domain under examination, as in ethnographically
informed approaches. In all cases, successful requirements engineers can be
seen to be skilfully applying training they have received on how to follow the
method.
Notation The RE process is essentially about communication, and one of the key
functions performed is to communicate a set of needs and an understanding of
the domain from the relevant sources of information to the design, and those
who will implement it. Various notations have been developed for this purpose,
including: mathematically precise formal methods; various special purpose
modelling notations used in functional and object-oriented methods; hierarchical
representations of task structure in task analysis; and so on. The lowest common
denominator is, of course, natural language, which is used extensively by
ethnographically informed approaches, although even here particular
combinations of words take on precise and specific meanings in a given context.
One of the skills that a requirements engineer must often apply is the translation
of information from one representation to another, especially where, for
example, models are being explained to users for the purpose of validation.
People Requirements engineers engage in interaction with other people
throughout the RE process. At the outset, they must negotiate with the procurers
of the system in order to understand what the organisational goals are that system
development is driven by. Current users must be observed, interviewed, etc. in
order to understand how the work is currently achieved. Improved understanding
of the application domain is furthered through consultation with domain experts.
Finally, engineers engaged in other activities in the development process, e.g.
testing, implementation, technical authoring, etc., will also at various instances
need to discuss aspects of the system with the requirements engineer. Each of
these scenarios throughout the development process involves the requirements
engineer in interpersonal contact with a variety of people with a number of
different and contrasting backgrounds. Social and cultural issues will have to be
negotiated in various ways, depending upon who is being dealt with. In
particular, requirements engineers will often function as an intermediary between
users and domain experts on the one hand, and developers on the other, with the
task of translating between the two.
Documentation Notational issues notwithstanding, requirements engineers must
still be able to describe the current system, needs for improvement, details of the
domain, and so on, in a clear and concise way. The primary vehicle for such
descriptions will be the requirements specification document. Regardless of the
method or approach being followed, the requirements engineer will need to
produce significant amounts of text, not only for the purpose of communicating
design issues to others, but also frequently to act as a contract between procurer
and developer.
This section has shown that RE can be considered, with little regard for exactly which
method or approach is being followed, as ‘intellectual’ office work. That is to say, the
work that requirements engineers are engaged in does not differ a great deal from other
office work that involves a degree of technical knowledge, communication skills, and
ultimately, reading and writing. This is not to belittle what is involved in RE. Rather, it
is to move closer to an understanding of the nature of the work that is RE, in order to
better understand how failures could occur, and consequently how to avoid them. The
following section examines the problems within RE in more detail.
2.3 Problems in RE
It is widely acknowledged that the requirements analysis phase of systems development
is a prominent source of potential errors in design (Brooks Jr., 1987; Kelly and Sherif,
1992; Keutzer, 1991; Lutz, 1993; McConnell, 1993; Sheldon et al., 1992;
The Standish Group, 1995; Weinberg, 1997). It is also widely believed that
requirements errors are hard to detect and expensive to rectify later (Boehm, 1976).
Additionally, work on the design and use of safety-critical systems is converging on the
conclusion that ‘latent failures’ at the design and development stage are often factors as
important as operator error in the aetiology of accidents and critical incidents. For
example, in a study undertaken by the DARTS project (Baldwin and Ayres, 1993) of
four teams developing a safety system for the nuclear industry, as many as 56% of
requirements errors were not detected until development and validation phases.
Furthermore, nearly half of these undetected requirements errors were classified as
highly significant errors that would lead to the safety system “failing to trip on demand”.
According to another smaller survey, 63% of reported errors in a safety-critical project
were traced back to problems with requirements.
This suggests that the processes by which the requirements for systems are identified
should come under scrutiny with respect to their safety. Within the general requirements
engineering literature, Davis (1993) argues that many errors are made in the
requirements engineering phases of systems development which often remain latent.
Basili and Weiss (1981) substantiate this view with an analysis which revealed the
following breakdown of errors:
Non-clerical errors (77%) and within these:
- 49% incorrect fact
- 31% omission
- 13% inconsistency
- 5% ambiguity
- 2% misplaced requirement (wrong section)
Clerical errors (23%)
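The two levels of this breakdown can be combined to express each error type as a share of all reported errors. The following is a simple arithmetic illustration; the category names and percentages are those reported by Basili and Weiss:

```python
# Illustrative arithmetic only: combining the two levels of the
# Basili and Weiss (1981) breakdown so that each non-clerical error
# type is expressed as a share of *all* reported errors.

NON_CLERICAL = 0.77  # proportion of all errors that were non-clerical

# proportions *within* the non-clerical category
within_non_clerical = {
    "incorrect fact": 0.49,
    "omission": 0.31,
    "inconsistency": 0.13,
    "ambiguity": 0.05,
    "misplaced requirement": 0.02,
}

# proportion of all reported errors, per type
overall = {kind: NON_CLERICAL * p for kind, p in within_non_clerical.items()}

for kind, p in overall.items():
    print(f"{kind}: {p:.1%} of all errors")
# e.g. incorrect facts alone account for roughly 38% of all errors,
# substantially more than the whole clerical category (23%)
```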
On the basis of this, Basili and Weiss (and Davis following them) argue that
requirements errors tend to be substantive rather than more trivial clerical (i.e. typing)
errors. While this point is well made, the classification scheme offered by Basili and
Weiss and other writers (e.g. Lutz, 1993) does not necessarily help us identify the
possible moments within the requirements process where errors are likely to be made
nor identify the factors likely to promote or inhibit error. This is because such
classification schemes are post hoc in the sense that the terms used to describe the errors
can only be used once the errors have been detected. One can only classify something as
a ‘misplaced requirement’ once it has been discovered in the wrong place. In contrast,
this thesis wishes to focus on the development of processes to reduce the opportunities
for errors to occur. In other words, the approach taken in this thesis is to develop a
means by which errors in RE can be avoided, based upon the characterisation of RE
activity developed in section 2.2. Before this, however, the following section returns to
what is one of the main motivations for the work in this thesis—RE for safety-critical
systems development—in order to examine what, if anything, changes for RE when the
application domain is safety-critical.
2.4 RE for safety-critical systems
The main difference between the focus of this thesis—RE for safety-critical
systems—and RE more generally is the severity of the consequences when an incorrect
requirement leads to a fault in the system. When considered in these terms, it can be
seen that the work of a requirements engineer will be largely the same as for
development in other domains, but the process that they follow must be more rigorous.
In other words, the costs of failure, be they human, financial, or environmental, are such
that extra effort is warranted in order to ensure that the system does not fail. In this
section, we wish to reflect on the peculiar nature of RE for safety-critical systems.
The RE process for safety-critical systems will follow the same steps as for any other
system development, but extra measures must be undertaken to prevent errors
occurring, or at least to detect them if they do occur. The extra activities undertaken to
ensure safety will obviously incur extra costs, which may be deemed too expensive for
the components of the system that are not considered to be safety-critical. Further,
before a new safety-critical system can be operated, its developers must be able to show
that it is indeed safe. This is usually achieved by preparing a safety case, which is a
document that details all of the safety critical aspects of the system, and demonstrates
the measures taken to ensure safe operation. For these reasons, RE for safety-critical
systems typically involves the separation of safety requirements from the rest of the
requirements, so that the extra effort can be targeted on where it is most needed.
Briefly, then, RE for safety-critical systems is differentiated from RE for non-critical
systems by the following:
safety-critical requirements are separated out and dealt with more rigorously;
extra effort to prove correctness is targeted at the safety-critical requirements;
there is an onus on the developer organisation to demonstrate that the
development of the system, and therefore its requirements, has been done in a
manner that ensures safe operation.
The two main developments in software and requirements engineering that aim to
address these points are the adoption of standards that specify how development should
proceed, tools and techniques to be used, etc.; and assessment of software processes
against a model of developer organisations’ capability and maturity.
2.4.1 Standards for developing safety-critical systems
A number of standards have been developed to specify, amongst other things, the
process that should be followed by organisations developing safety-critical software. The
defence industry in particular has given rise to such standards, including interim defence
standards from the UK Ministry of Defence on the procurement of safety-critical
systems (Ministry of Defence, 1997a; Ministry of Defence, 1997b); on performing hazard
analyses of programmable electronic equipment (Ministry of Defence, 1996c;
Ministry of Defence, 1996d); and guidelines on performing HAZOP studies (Kletz,
1992) on equipment containing a programmable electronic component
(Ministry of Defence, 1996a; Ministry of Defence, 1996b).12
More recently, the IEC (the International Electrotechnical Commission) has released a
standard that is aimed at the Functional safety of electrical/electronic/programmable electronic
safety-related systems. The standard is split into the following seven parts:
12 Programmable Electronic Systems (PES) is a phrase often used in defence and safety-critical systems
domains to broadly describe any systems that contain software, be they computer-based systems,
embedded systems, or PLC (Programmable Logic Control) controlled devices.
Part 1: General requirements;
Part 2: Requirements for electrical/electronic/programmable electronic systems;
Part 3: Software requirements;
Part 4: Definitions and abbreviations;
Part 5: Examples of methods for the determination of safety integrity levels;
Part 6: Guidelines on the application of parts 2 and 3;
Part 7: Bibliography of techniques and measures.
Part 3 of the standard is particularly relevant for this thesis, being concerned with
software requirements (IEC-61508-3, 1999). Part 3 specifies a standard process for
software development (see Figure 2.4) and situates this in a broader software safety
lifecycle (see Figure 2.11), which in turn fits within the overall system safety lifecycle.
The core of the standard is concerned with proposing a risk-based approach to the
determination of safety integrity levels (SILs) for the various safety-critical components of
programmable electronic systems (PESs) in a safety-related system. A number of
quantitative and qualitative approaches are described in part 5, aimed at determining the
level of risk associated with various system components, which in turn leads to SILs
being assigned. The SIL is then used to dictate the measures that must be taken for each
system component to ensure that the system does not exceed a tolerable level of risk.
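The risk-based logic can be sketched as follows. Note that this is a hypothetical illustration only: the consequence and frequency categories, the additive risk score, and the mapping to SILs are invented for the example, and are far cruder than the quantitative and qualitative methods actually described in Part 5 of the standard.

```python
# Hypothetical sketch of a qualitative, risk-graph style SIL
# assignment, loosely inspired by IEC 61508 Part 5. The categories
# and the mapping below are illustrative assumptions, not the
# standard's own tables.

def assign_sil(consequence: int, frequency: int) -> int:
    """Map a risk estimate to a safety integrity level (1..4).

    consequence: 1 (minor injury) .. 4 (many fatalities)
    frequency:   1 (rare demand)  .. 4 (frequent demand)
    """
    risk = consequence + frequency  # crude additive risk score
    if risk <= 3:
        return 1
    if risk <= 5:
        return 2
    if risk <= 7:
        return 3
    return 4

# a high-consequence, frequently demanded safety function attracts
# the highest integrity level, and hence the most rigorous measures
print(assign_sil(consequence=4, frequency=4))  # prints 4
```

Whatever method is used, the assigned SIL then dictates the rigour of the measures applied to that component, as described in the text.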
Given this thesis’ focus on human factors, it is also interesting to note the presence of a
section on the competency of persons in Part 1 of the standard. Annex B of Part 1
recommends a number of factors for consideration when assessing individuals for their
competency to undertake activities in the development of safety-critical systems. The
guidelines relate to the training, technical knowledge, experience, and qualifications of
all staff involved in any safety lifecycle. For safety lifecycle activities, these factors for
each individual should be appropriate, assessed, and documented, for a given
application/domain. When assessing for appropriate experience, knowledge, etc., the
degree of competency required should be related to a number of features of the
particular application, including: the consequences of failure, the safety integrity levels,
the novelty of the design, and the relevance of previous experience and qualifications.
[Diagram: the software safety lifecycle, comprising safety functions requirements specification and safety integrity requirements specification (fed from the overall safety requirements), software validation planning, software design & development, PE integration (hardware/software), software operation & maintenance procedures, and software safety validation, leading out to overall installation & commissioning and overall operation & maintenance.]
Figure 2.11: IEC 1508 software safety lifecycle13 (from IEC-1508-3, 1997)
In its draft form, the standard was difficult to apply, and incomplete in places. Now
that it is complete, organisations wishing to develop safety-critical systems will increasingly
be obliged to be certified against the standard in order to satisfy potential
purchasers or procurers that they are capable of developing software systems that are
safe. The notion of capability was adopted by the Software Engineering Institute (SEI)
at Carnegie Mellon University when developing a model of software development to be
used for assessing the quality of an organisation’s development process. The following
section turns to this view on software development.
2.4.2 Capability maturity models
In contrast to approaches described elsewhere in this chapter, the focus of research into
software process maturity has been on the management of software processes, rather
than on the technologies or methods used. The main work in this area has been
13 The V-shaped software lifecycle presented in figure 2.4 fits in box 3.
conducted at the Software Engineering Institute (SEI) at Carnegie Mellon University,
which has led to a number of other initiatives around the world. It should be noted that
this field of research does not explicitly address safety-critical issues. It is nevertheless
relevant to examine approaches to software process improvement as it can be argued
that organisations intending to develop safety-critical software should be concerned
with improving their development processes. Furthermore, this thesis can be viewed as a
contribution to improving RE processes within the broader field of software process
improvement.
2.4.2.1 The SEI CMM
The first notable work in the area was carried out by Watts Humphrey at the SEI
(Humphrey, 1988). This was subsequently expanded upon in Humphrey’s book Managing
the Software Process (Humphrey, 1989), and ultimately became known as the SEI
Capability Maturity Model for software (CMM) (Paulk et al., 1993; Paulk et al., 1995).
The origins of the SEI work lay in the US Department of Defense’s concern with
improving the quality of software delivered by contractors. Associated with the CMM
are techniques for assessing software processes against the model, and evaluating the
software capability of an organisation. Armed with this, the Department of Defense
were in a position to assess the relative capabilities of organisations bidding for
contracts, and award contracts based upon such evaluations.
The CMM consists of five levels of maturity, ranging from initial, through to optimising
(see Figure 2.12). They are characterised as follows:
1. Initial. The software process is characterized as ad hoc, and occasionally even chaotic. Few
processes are defined, and success depends on individual effort.
2. Repeatable. Basic project management processes are established to track cost, schedule,
and functionality. The necessary process discipline is in place to repeat earlier successes on
projects with similar applications.
3. Defined. The software process for both management and engineering activities is
documented, standardized, and integrated into a standard software process for the
organisation. All projects use an approved, tailored version of the organisation's standard
software process for developing and maintaining software.
4. Managed. Detailed measures of the software process and product quality are collected.
Both the software process and products are quantitatively understood and controlled.
5. Optimizing. Continuous process improvement is enabled by quantitative feedback from
the process and from piloting innovative ideas and technologies. (Paulk et al., 1995, pp. 15-17)
The model is decomposed at each level (except for level 1) into several key process areas.
Each key process area aims to satisfy a number of goals by addressing various key
practices. These in turn are organised at each level into five common features. These are
attributes of the organisation and its processes which indicate whether the
implementation and institutionalisation of a key process area is effective, repeatable and
lasting. Each level of the CMM functions as the foundation of the next level above it,
such that the satisfaction of level 4 key process areas depends upon having already
satisfied levels 2 and 3, for example.
[Figure: a staircase of five maturity levels, Initial (1), Repeatable (2), Defined (3), Managed (4), and Optimizing (5), with the transitions between them labelled 'disciplined process', 'standard, consistent process', 'predictable process', and 'continuously improving process'.]
Figure 2.12: The five levels in the SEI Capability Maturity Model (from Paulk et al., 1995, p. 16)
The CMM, in common with many other process capability models, is used in two
different ways. Software Process Assessments are voluntary, are initiated by the organisation
concerned, and are performed by SEI-trained staff in collaboration with key personnel
from the organisation being assessed. The results of a process assessment are kept
confidential to the organisation, and are used as a basis for planned process
improvement. Software Capability Evaluations are compulsory, are instigated by the
Defense Department, and are performed by government employees14. The results of a
software capability evaluation (SCE) are public, and are used to determine the capability
of an organisation to produce software for Defense Department contracts. Despite these
differences, the process of conducting a process assessment or SCE is largely identical,
and involves answering 101 yes/no questions (85 of which are graded) distributed
between the five maturity levels in the CMM. The responses to these questions are used
to determine how well the organisation’s processes conform to the key process areas at
the different levels, and to ultimately arrive at a set of recommendations for
improvements (for a process assessment) and a maturity rating (for an SCE). The tight
coupling between the two uses of the CMM means that process assessments are typically
performed as a precursor to an SCE, with a view to maximising the maturity level of the
organisation.
14 For contractors who wish to bid for US government contracts, particularly with the Defense
Department.
A number of criticisms exist of the CMM, SCEs, and the way in which the assignment of
process maturity levels is arrived at in particular (Bollinger and McGowan, 1991). For
example, due to the way in which the SEI questionnaire responses are graded, it is
possible to be graded as level 1, even though all the criteria for level 2 and above are
met, simply because not all of the level 1 questions are satisfied.
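The grading criticism can be made concrete with a small sketch of a staged rating rule. The question counts and the exact grading scheme here are illustrative assumptions, not the SEI's actual algorithm:

```python
# Hypothetical sketch of a staged maturity rating, illustrating the
# grading anomaly: the rating is the highest level for which every
# graded question at that level AND all levels below it is satisfied,
# so a single unsatisfied low-level question caps the rating regardless
# of how the organisation scores higher up. Question counts are invented.

def maturity_rating(answers):
    """answers: dict mapping maturity level (2..5) to a list of yes/no booleans."""
    rating = 1  # in this sketch, level 1 (initial) is the default floor
    for level in sorted(answers):
        if all(answers[level]):
            rating = level
        else:
            break  # a failed level blocks all levels above it
    return rating

# An organisation satisfying everything at levels 3-5 but missing one
# level 2 item is still rated at level 1:
answers = {2: [True, True, False], 3: [True] * 4, 4: [True] * 3, 5: [True] * 2}
print(maturity_rating(answers))  # -> 1
```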
The CMM is not particularly strong where RE is concerned, having only one level 2 key
process area devoted to Requirements Management. This has two high-level goals, and 12
key practices which aim to satisfy them. These are all relatively abstract, and effectively
reduce to following an effective change control process for software requirements.
Human factors are explicitly excluded from the CMM's notion of software process
improvement.
2.4.2.2 Other models
Since the CMM was first released, a number of other process improvement approaches
have emerged. Most of these are based upon the CMM, but have been adapted to suit
different cultures and domains. For example, BOOTSTRAP (Kuvaja, Simila, Krzanik,
Bicego, Soukkonen and Koch, 1994) was developed for use in the European
Community, and Trillium (Trillium, 1994) for developers of embedded software
systems, particularly in the telecommunications industry. Most notably, a joint ISO/IEC
initiative led to the formation of the SPICE project (Software Process Improvement
and Capability dEtermination) (El Emam, Drouin and Melo, 1997) which was
mandated to develop a new ISO/IEC standard (15504) for Software Process
Assessment (Zahran, 1998). The SPICE project took the SEI CMM, and a number of
other existing software process improvement models and techniques as its starting point,
and set out to develop a process assessment method that combines the best of all the
available techniques. Similar to the CMM, ISO/IEC 15504 can be used in a number of
ways:
Capability determination mode: to assess the capability of a potential software
supplier (similar to a CMM SCE).
Process improvement mode: to help an organisation improve its own software
process.
Self assessment mode: to allow an organisation to determine its own ability to
undertake a new project.
At the centre of ISO/IEC 15504 is the Reference Model, which is structured into two
dimensions. The process dimension classifies software development processes into five
process categories, each of which consist of a number of processes, each one described in
terms of its purpose, and its expected outcome. The five process categories are: Customer-
supplier (CUS); Engineering (ENG); Support (SUP); Management (MAN); and
Organisation (ORG). The capability dimension expresses software process capability in
terms of process attributes that are grouped into six capability levels (0..5) in a similar
manner to the CMM. Each level is described in terms of its main characteristics, and the
attributes used for measuring capability. Where the capability dimension differs from the
CMM is that it is applied to a process, rather than the organisation executing it. It is also
possible to assess several instances of the same process, and produce an aggregate
evaluation from this.
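As a rough sketch, the two dimensions of the Reference Model can be pictured as follows; the aggregation rule over process instances is an assumption for illustration, not taken from the standard:

```python
# Illustrative sketch (not the standard's actual rating scheme) of the
# two dimensions in the ISO/IEC 15504 Reference Model: processes are
# grouped into the five categories named in the text, and capability
# (levels 0..5) is assessed per process instance, not per organisation.
from statistics import mean

PROCESS_CATEGORIES = ["CUS", "ENG", "SUP", "MAN", "ORG"]

def aggregate_capability(instance_ratings):
    """instance_ratings: capability levels (0..5) observed for several
    instances of the same process. The aggregation rule used here
    (rounding the mean) is an assumption for illustration only."""
    return round(mean(instance_ratings))

# e.g. three assessed instances of ENG.2 (Develop software requirements)
print(aggregate_capability([2, 3, 3]))  # -> 3
```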
Requirements issues are dealt with in more detail than in the CMM, with one customer-
supplier process (CUS.2 - Manage customer needs) and two engineering processes
(ENG.1 - Develop system requirements and design; and ENG.2 - Develop software
requirements) devoted to RE issues. While the CMM is not concerned with human
issues at all, in ISO/IEC 15504 they are given consideration within one of the
organisation processes (ORG.4 - Provide skilled human resources), which is concerned
with the capabilities of the people employed to carry out the process. This is very similar
to the treatment given to human issues in IEC 1508, which is concerned with the
competence of people engaged in the development of safety-critical software. Just as
with IEC 1508, however, the consideration of human factors does not extend beyond
this view of recruiting and training staff with appropriate skills and knowledge, towards
a consideration of how well suited people are in general to the tasks that they have to
perform.
2.4.3 Safety-critical software development in practice
Finally, in considering what differentiates RE for safety-critical systems from RE in
general, this section turns to a review of the development processes employed by a
number of safety-critical systems projects (Sommerville and Viller, 1997). The review
covered four projects funded under the UK DTI/EPSRC Safety-Critical Systems
Programme, with the purpose of comparing the process models employed on them with
each other and with the process implicit in the IEC 1508 draft standard (see
section 2.4.1).
The projects, and the process models they advocate, were first reviewed and
described individually. They were then compared across a number of dimensions:
Documentation. How good is the documentation associated with the process?
Lifecycle. What extent of the software lifecycle is covered by the process?
Model. Which process model (e.g. Waterfall, V, etc.) is followed?
Usability. How easy is it to follow the process?
Safety techniques. Which approaches to safety analysis (if any) are incorporated into
the process?
Process safety. How safe is the process itself?
Configuration management. Does the process include an approach to configuration
management?
Actors/roles/responsibilities. Does the process allocate any particular roles, or assign
responsibilities to be fulfilled?
Deliverables. What deliverables are generated by the process, over and above what
one would expect for a typical (non safety-critical) process?
Standards. Does the process require or recommend that particular standards are
followed?
Tool support. Does any form of tool support exist for the process?
Terminology. How does the terminology used to describe the process compare with
that used in IEC Draft Standard 1508?
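The structure of such a review can be pictured as a project-by-dimension matrix; a hypothetical sketch (project names and entries invented):

```python
# Hypothetical sketch of the review's comparison structure: each project
# is scored against each dimension. Project names and entries are
# invented for illustration; the real review (Sommerville and Viller,
# 1997) covered four DTI/EPSRC-funded projects.
comparison = {
    "Project A": {"Model": "V", "Tool support": "partial", "Standards": "IEC 1508"},
    "Project B": {"Model": "Waterfall", "Tool support": "none", "Standards": "in-house"},
}

def by_dimension(comparison, dimension):
    """Collect one dimension's entries across all projects."""
    return {project: row.get(dimension) for project, row in comparison.items()}

print(by_dimension(comparison, "Model"))  # -> {'Project A': 'V', 'Project B': 'Waterfall'}
```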
The review gave rise to a number of conclusions regarding the state of safety-critical
systems development processes, as embodied by the sample of four projects, and a
number of other observations were made by participants on the projects at the
workshop in which the review was presented. Concerning the actual process followed,
there was a widespread adoption of the V-shaped variant of the waterfall model, with
different degrees of validation and verification built in. This was despite a recognition
that spiral processes offer better risk management and, as a result, improved quality in
both the requirements and the system developed. The main reason for this seems to be the
perceived difficulties in auditing spiral processes as opposed to those following a
waterfall model. Managers certainly preferred the security of the waterfall model,
although representatives of some of the projects indicated that they would probably
follow a spiral model, whilst reporting and auditing it as if it were a waterfall model.
There were diametrically opposed views regarding the place of safety analysis in the
development process for safety-critical systems. There were those who believed that it
should be integrated as a part of the development process. To these people, safety was
‘just’ another quality of the system to be developed, albeit one which they must get
right, and as such safety requirements did not warrant separate treatment. Conversely,
others believed that the importance of safety analysis and safety requirements made it
imperative that they were focused upon separately, with a separate safety requirements
analysis process which runs in parallel with the ‘normal’ system requirements process.
Both viewpoints have their own merits, and the correct solution for a given organisation
is probably dependent upon the context in which the organisation operates, and in
which the development takes place. The latter view, however, is the closest to that
which is recommended by the IEC Draft Standard 1508, which separates out safety
requirements for components with safety integrity levels above a particular value, and
directs extra effort to ensure that they are met.
In summary, it can be seen that some efforts have been made to increase the reliability of
the RE process for safety-critical systems, but it is still largely the same process and
therefore vulnerable to the same failures as any other RE process for systems
development in any other domain. The basic characterisation of RE as intellectual office
work, proposed at the end of section 2.2, applies equally here when the domain, or the
system being developed, is safety-critical.
2.5 Conclusions
This chapter has been concerned with establishing the motivation for the development
in this thesis of an approach to improving the RE process for safety-critical systems. To
do this, a number of questions were addressed, regarding the nature of RE, the problems
that exist in RE, and what differences exist when developing requirements for safety-
critical systems. These questions were addressed through reviews of three broad areas of
the literature.
First, RE was reviewed in terms of how it has been defined, and then how it fits
into the broader context of the software process, and the
different models of the process that exist. Finally, different approaches to RE were
reviewed, focusing in particular upon the nature of the work undertaken by
requirements engineers. Despite the broad cross-section of approaches covered in this
section, it was found that RE remains essentially the same regardless of the technique or
process followed, as far as the tasks performed by requirements engineers are concerned.
The outcome of this section was to characterise RE as ‘intellectual office work’.
Second, the problems in RE were examined, paying particular attention to empirical
findings in the literature. Literature in this area is sparse, but that which does exist
points to RE as being a major source of errors in the software process, and faults in the
systems produced as a result. The problematic nature of RE is compounded by the
length of time for which errors in requirements tend to go unnoticed. All too frequently,
errors that occur in RE are not identified until the product is in much later stages of
development, or even once it has been released and is in use.
Overall, the motivation established by these reviews can be viewed in terms of three broad considerations:
RE is a crucial stage in systems development, upon which the rest of the software
process depends. Errors in requirements tend to propagate to the rest of the
development process, are harder to identify, and are more costly to rectify than
errors made during other stages of development. When developing safety-critical
systems, the development process itself, and therefore the RE process, is also
safety-critical.
RE is an inherently human-intensive process. It involves the development of a set of
requirements for a computer-based system that must be arrived at via interaction
between requirements engineers and domain experts. Regardless of the method
followed, RE ultimately consists of what might be characterised as ‘intellectual
office work’.
RE is poorly covered by process improvement. Existing approaches to process
improvement have little or no focus on the RE process, and issues related to
human factors in the development process receive poorer treatment still.
The following chapter considers the inherently human activities involved in RE as a
source of error. In particular, Reason’s (1990) taxonomy is used as a structuring
mechanism for considering the potential sources of error in the RE process. This
individual perspective is then broadened to consider social group and organisational
issues. In addition to providing a review of the literature on error, the migration of this
work to the RE process in itself represents a novel contribution of this thesis (Viller,
Bowers and Rodden, 1999).
3. Human activities in
Requirements Engineering
Chapter 2 introduced and reviewed the field of requirements engineering (RE), the early
phase of systems development. RE is concerned with understanding the problem to be
addressed by, and ultimately determining the requirements for, computer-based systems
in a variety of settings. The RE process for safety-critical systems was highlighted as an
area which merits closer inspection. The consequences of errors in requirements for
safety-critical systems cannot be thought of in terms of financial cost alone, but must
also be considered in terms of the risk of causing harm to people, property, or the
environment. This in turn means that, to a limit15, any measures which can be taken to
prevent errors occurring in the process will be worthwhile.
The purpose of this chapter is to contribute to the development of a method of process
improvement aimed at the RE process for safety-critical systems. The contribution takes
the form of a review of a number of fields of research in the human sciences which have
produced findings which are relevant to the human endeavours which constitute the RE
process. The result of performing this review is a classification of human errors and
other human factors which are relevant to understanding how human-related failures
occur in RE, and how such failures can be mitigated. The findings as reviewed in this
chapter and developed in the following chapter will collectively form the basis of the
process improvement method which is the core of this thesis.
In order to develop safety-critical systems, the process by which they are developed must
also be considered to be safety-critical. If not, there is a risk that systematic or random
errors may be introduced to successive generations of products, or across different
product ranges. This focus on process improvement for safety-critical system
development is a novel feature of this thesis. Other approaches to the reduction of
errors in safety-critical systems focus on the product itself, rather than the process by
which it is produced.
15 The ALARP (As Low As Reasonably Practicable) principle describes a way of analysing the perceived
risk of an incident occurring against the cost of preventing it. The ALARP region is the zone between
the point at which any preventative measures are deemed to be worthwhile regardless of cost, and the
point at which the perceived risk (i.e. the combination of likelihood and consequence of an incident
occurring) is so low that costly measures are impractical (see chapter 6 for a more detailed description
of ALARP).
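The ALARP principle described in footnote 15 amounts to comparing a perceived risk against two thresholds; a minimal numeric sketch, with invented threshold values:

```python
# Minimal sketch of the ALARP principle: perceived risk is treated as
# likelihood x consequence and compared against two thresholds. The
# numeric thresholds here are invented for illustration; real ALARP
# judgements are qualitative and domain-specific (see chapter 6).
INTOLERABLE = 1e-3   # above this, risk must be reduced regardless of cost
NEGLIGIBLE = 1e-6    # below this, costly measures are impractical

def alarp_region(likelihood, consequence):
    risk = likelihood * consequence  # perceived risk
    if risk >= INTOLERABLE:
        return "intolerable"
    if risk <= NEGLIGIBLE:
        return "broadly acceptable"
    return "ALARP"  # reduce further only where reasonably practicable

print(alarp_region(1e-4, 0.5))  # -> ALARP
```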
The RE process for any non-trivial system will necessarily involve a number of
individuals working both together and in isolation on a variety of manual and computer
supported processes. Rarely (and almost never in a commercial context) is requirements
engineering engaged in solely by an isolated individual working alone. Overwhelmingly,
requirements are elicited, analysed and documented by teams of engineers working as
part of a larger development team. Accordingly, the contribution of each individual
needs to be coordinated with other team members as the requirements engineering
process unfolds. Even when individuals do work alone (e.g. when one person assumes
sole responsibility for drafting a requirements document), their contributions are
oriented towards the contributions of others and specifically designed to mesh with
them. Furthermore, in commercial contexts, relations that a supplier organisation has
with clients will always be borne in mind in some way while, for example, requirements
documents are being developed for them.
These are intended as simple, non-controversial observations but it is worth emphasising
them: requirements engineering is a form of cooperative work, involving the
coordination of individual work within a team or group of engineers working in an
organisational context. Further, the human-intensive nature of the RE process means that
many of the errors which are attributed to this phase of systems development are of a
human nature. If improvements to the RE process are to be proposed, therefore, it is
important to first of all develop a good understanding of the human activities which are
inherent in it.
Following the above observations, this chapter approaches the understanding of human
activity in RE from three perspectives, all of which are relevant at different points in
the process. The first of these is from an individual perspective, concerned with the
actions and tasks of requirements engineers working in isolation, or on individual tasks
which contribute to a larger team or organisational goal. The source of research in this
field is cognitive psychology, and more specifically work under the banner of Human
Error, which has primarily approached problems of errors in high risk activities from a
cognitive standpoint. Broadening this perspective leads to the consideration of people
working not in isolation, but together in groups or teams, that is to say in social
contexts. Research into how people perform in groups has a long tradition in
psychology, and findings from this perspective come primarily from social psychology,
but also from sociology. Having broadened the perspective once, it would be wrong to
stop there when the enquiry can take in a further, wider reaching concern. This is the
way in which individuals and groups, teams, or other units function within organisations
and the wider society at large. Here, the relevant research originates in sociology and in
organisational and political studies. The purpose of studying the literature in these three
areas is to inform, and provide the basis for, a human-centred approach to process
improvement for requirements engineering.
This chapter reviews a variety of literature regarding human activity and the errors
which are inherent in it. The background to this chapter is eclectic, drawing on cognitive
and social psychology, sociology, and organisational and political studies. Each area
contributes to the construction of a single taxonomy from the three perspectives
outlined above. The taxonomy is concerned with factors which may contribute to
failures in human-intensive processes, and forms the basis of the process improvement
method developed in this thesis. The review is necessarily detailed in order to develop a
tool which can sufficiently address the complexity of human processes when required to
do so.
In the remainder of this chapter, the following section first of all reviews work on
human error which comes from a cognitive psychological background and is relevant to
individual human activity. In section 3.2, a broad range of social psychological research
is reviewed which is concerned with the performance of individuals in social settings.
Third and finally, sociological and organisational accounts of safe operations are
reviewed in section 3.3.
3.1 Errors in Individual activity
The previous section characterised requirements engineering as the work of individuals
in groups and teams within an organisational context. This section focuses on the first of
these three perspectives on human factors in RE—people working as individuals—and
on research which addresses human errors in such situations. Cognitive psychology is
devoted to the understanding of how we perform as individuals, and is the source of the
findings presented here. In particular, a branch of cognitive psychology has developed
which has specifically examined human failures in high risk endeavours such as nuclear
power plant control rooms. This field of research is known as human error.
In human factors work on human error, an important distinction is usually made
between slips and lapses on the one hand and mistakes on the other. The distinction
hinges on recognising two different ways in which planned action can fail:
The plan may be adequate but the actions associated with realising or executing
the plan do not go as intended. A slip occurs when an action is incorrectly
performed or when some similar or related action is performed instead. A lapse
occurs when an action which should be performed is omitted.
The actions go entirely as planned but the plan itself is faulty in that it does not
achieve its desired outcome. These failures are mistakes.
Mistakes, then, are typically errors of plan formulation, while slips and lapses are
typically errors of plan execution. This distinction is often held to be important because
slips and lapses are believed to have different origins and be influenced by different
factors than mistakes. In the terms of Rasmussen (1983), slips and lapses are to be
understood in terms of the exercise of pre-existing skills, while mistakes relate to the
inappropriate application of rules or prior knowledge. Accordingly, strategies which might
be undertaken to remedy mistakes may not be effective in alleviating the risk of slips and
lapses. On this argument, recognising different types of error is important to formulating
appropriate strategies for anticipating and preventing error (Reason, 1990).
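The slip/lapse/mistake distinction above amounts to a two-question decision procedure; a minimal sketch:

```python
# Sketch of the slip/lapse/mistake distinction described above: a
# mistake is a failure of plan formulation; slips and lapses are
# failures of plan execution (a lapse being an omitted action, a slip
# an incorrectly performed or substituted one).

def classify_error(plan_adequate, action_omitted):
    if not plan_adequate:
        return "mistake"      # actions went as planned, but the plan was faulty
    if action_omitted:
        return "lapse"        # a required action was left out
    return "slip"             # an action was performed incorrectly

print(classify_error(plan_adequate=True, action_omitted=False))  # -> slip
```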
This skill-, rule-, and knowledge-based distinction was taken up by Reason (1990)16 and
Figure 3.1 illustrates the relationship between these three ‘levels’ of action, and how
Reason suggests they interact in his proposed Generic Error Modelling System (GEMS).
[Figure: the GEMS decision flow. Routine actions in a familiar environment proceed at the skill-based level (slips and lapses), punctuated by attentional checks on the progress of action. When a check reveals a problem, the actor considers local state information at the rule-based level (RB mistakes) and, if the pattern is familiar, applies a stored rule of the form IF (situation) THEN (action). If no familiar pattern is found, control passes to the knowledge-based level (KB mistakes): the actor reverts to a mental model of the problem space, analyses more abstract relations between structure and function, infers a diagnosis, formulates and applies corrective actions, and observes the results, iterating (and seeking higher level analogies) until the goal state is reached.]
Figure 3.1: Generic error modelling system (GEMS) (from Reason, 1990, p. 64)
16 Whilst Reason's more recent (1997) work still refers to this framework, he focuses much more on
organisational factors which contribute to accidents (see also section 3.3.1).
So, according to Reason, routine activity in a familiar environment progresses with an
occasional attentional check to make sure that everything is OK—that action is
proceeding according to plan. Errors at this ‘level’ are slips and lapses, which are dealt
with in the next section. The section after that deals with mistakes, which can
occur at the rule- or knowledge-based levels. Activity here is oriented towards problem
solving, and the level at which this takes place depends upon the individual’s familiarity
with the problem at hand. That these errors can occur in any human activity makes it
relevant to this thesis where the concern is to reduce systematic errors in what is a highly
human intensive process. The remainder of this section reviews the different forms of
human error, the result of which will be a classification of error types from an individual
perspective. This classification ultimately forms the basis of this thesis’ process
improvement method.
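The control flow that Reason proposes in GEMS (Figure 3.1) can be sketched as a small dispatch function. The predicate and handler functions are placeholders for cognitive processes, not part of Reason's model:

```python
# Illustrative sketch of the GEMS control flow (after Reason, 1990):
# routine skill-based action with attentional checks; on a problem,
# rule-based pattern matching is tried first, then knowledge-based
# reasoning about a mental model. The predicate functions are
# placeholders for cognitive processes, supplied by the caller.

def gems(attentional_check_ok, pattern_familiar, apply_rule, reason_from_model):
    if attentional_check_ok():
        return "skill-based: routine action continues"   # slips/lapses arise here
    # a problem has been detected: consider local state information
    if pattern_familiar():
        return apply_rule()          # IF (situation) THEN (action); RB mistakes arise here
    return reason_from_model()       # analyse the problem space; KB mistakes arise here

result = gems(
    attentional_check_ok=lambda: False,
    pattern_familiar=lambda: True,
    apply_rule=lambda: "rule-based: stored rule applied",
    reason_from_model=lambda: "knowledge-based: diagnosis inferred",
)
print(result)  # -> rule-based: stored rule applied
```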
What is interesting to note here is the use of plans, in their cognitive psychological sense,
and their parallel with plans at the larger scale which are being treated here in terms of
processes in general, and the RE process in particular. The plans which are checked
against for ‘correct’ progress during skill-based activity in the model in Figure 3.1, and
the rules which are turned to when solving problems during rule-based and knowledge-
based activity are analogous to the procedures and guidance for activity which are
central to any method for achieving any task. So, for example, when this section later
considers rule-based mistakes, it can be seen that the errors which this type of activity
is vulnerable to also apply to the process models which typically guide the requirements
engineer in their work. Further, as requirements engineers become more skilled at
performing the activities as set out in the process which is being followed, then they
become vulnerable to the slips and lapses as outlined in the following section.
3.1.1 Slips and Lapses
Slips and lapses are errors which result from some failure in the execution and/or storage stage
of an action sequence, regardless of whether or not the plan which guided them was adequate to
achieve its object. (Reason, 1990, p. 9)
Slips and lapses often occur during performance of routine familiar tasks in their usual
environments. This means that the more skill people develop in performing a particular
task, the more vulnerable they become to making slips and lapses during execution of
that task. Slips and lapses are very typically associated with some form of attentional
distraction: while performing a routine task, one is distracted by something else and slips
up. Alternatively, they can be provoked by unexpected change in the otherwise familiar
environment. The classification of slips and lapses can be facilitated by analysing the
components involved in the performance of intentional action. We can consider planned
action in terms of four components (see Figure 3.2):
Recognition → Attention → Memory → Selection → Action
Figure 3.2: Components of planned action
identifying and recognising the objects which are to be acted upon and the
opportunities which are appropriate for action
attending to how the action is unfolding particularly at critical choice points
remembering
- the plan that is being followed
- where exactly one is in the plan
- the actions and their correct order and the objects the actions are to be
performed on/with
selecting the correct actions at the correct time.
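As a sketch, the failure modes elaborated in the following subsections can be tabulated against these four components (the entries under 'selection' are illustrative assumptions; the rest follow the text):

```python
# Sketch tabulating the four components of planned action against the
# failure types discussed in sections 3.1.1.1-3.1.1.3. The 'selection'
# entries are invented for illustration; the grouping is a
# simplification of the text, not Reason's own table.
FAILURE_MODES = {
    "recognition": ["misidentification", "non-detection", "false positive"],
    "attention": ["inattention slip", "over-attention slip"],
    "memory": ["encoding failure", "storage failure", "access failure"],
    "selection": ["wrong action selected", "action mistimed"],
}

def failures_for(component):
    return FAILURE_MODES.get(component, [])

print(failures_for("attention"))  # -> ['inattention slip', 'over-attention slip']
```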
Typically, and especially for highly familiar routines, these processes do not command
conscious awareness. People recognise opportunities for action, attend to critical choice
points, remember the plan being followed and select the action to perform without
deliberating on each of these processes. From the actors’ perspective, it seems that they
just act. Indeed, sometimes errors are made precisely when people do attend to what is
otherwise routinely executed.
Analysing action in this way can enable further different types of execution failure to be
distinguished (cf. Reason, 1990). The following sections address failures within each of
the four components of planned action identified above.
3.1.1.1 Recognition failures.
These lead to slips which involve the incorrect identification or non-identification of
details important to the plan.
(I) MISIDENTIFICATION
Misidentification is typically caused by confusion between correct and incorrect objects
arising through their similarity. For example: if two buttons (one marked stop, one
marked start) are placed close together on a panel and are very similar in shape, size and
colour, then one might well expect slips to arise due to the misidentification of one
button caused by its similarity to another. Confusion due to similarity can be
exacerbated by poor signal-to-noise ratios in the operating environment and by habitual
expectations and needs.
(II) NON-DETECTION
Non-detection (or omissions or false negatives) can be due to many reasons. For example:
operator fatigue, interruption and misleading expectancies can all lead to failures due to
non-detection. Additionally, sometimes correctly detecting one potential failure can lead
to another in close proximity being missed.
(III) FALSE POSITIVES
False positives (wrongly identifying problems which are not actually present) can also lead
to action slips. Often safety-critical processes are designed so as to be relatively tolerant
of false positives but this is not always the case, especially where corrective actions
undertaken on the basis of a false positive identification have considerable costs in their
own right, such as the emergency shut-down of a nuclear reactor.
3.1.1.2 Attentional failures.
Slips can arise both through insufficiently attending to how planned action is progressing
and through over attending. Failures due to inattention constitute the most common
source of error in this category but quite familiar classes of error can arise when one
attends to something at moments where it is better to just execute a highly practised
sequence (e.g. concentrating on one’s precise leg movements is not advised when
walking down stairs!).
(I) INATTENTION SLIPS
While these can take many forms, they are almost all due to attention being captured by
some other detail of the situation or some change in it. This kind of distraction or pre-
occupation can commonly manifest itself as:
Branching slips where two different outcomes have identical initial actions but the
unintended outcome is achieved. Typically, in these circumstances, the
unintended actions comprise the stronger, more commonly and recently executed
routines.
Overshoots and undershoots where the actions either continue beyond their intended
stopping point or end prematurely.
Omissions following interruptions where some event or detail external to the plan
leads to a failure to check the plan’s progress.
Unawareness that the plan is inappropriate where a plan is entered into even if it has
already been carried out or if its outcome has already been achieved.
(II)SLIPS THROUGH OVER-ATTENTION.
Although these errors are perhaps less familiar, people are still prone to make them.
Interestingly, they can often occur in an attempt to compensate for an error (or a ‘near-
miss’) due to earlier inattention. Two sub-classes can be identified:
Mistimed checks where the progress of some action sequence is checked upon but
at an inappropriate moment. For example, action sequences may be attended to
but, incorrectly, actors think that they are further (or less far) advanced than they
in fact are. If the action sequence has progressed further than the actor believes,
then an action may be inappropriately repeated. If the action sequence is not as
far advanced as the actor considers it to be, then crucial steps may be omitted.
Disrupting well practised actions where inappropriate attention can disrupt an action
normally performed automatically and without conscious awareness (e.g. thinking
about the letters one is hitting can disrupt normally fluent speed typing).
3.1.1.3 Memory failures
Memory failures typically lead to lapses—the omission of some action important to a
planned action sequence. Most theories of human remembering distinguish between
three stages in the remembering process: (1) encoding something into memory; (2)
storing and retaining the information in an appropriate form; and (3) accessing this
information and retrieving it when it is required (Baddeley, 1997). Memory failures can
be due to any one of these three aspects of memory or to some combination. In actual
cases, it is often difficult to distinguish between different explanations of a lapse. If
something is forgotten, it is often equally explicable as a retrieval or as an encoding
failure. Some commonly occurring errors due to memory failures include:
(I)FORGETTING INTENTIONS
Where the plan itself might be forgotten in total or where some actions within it might
have been forgotten.
(II)FORGETTING OR MISREMEMBERING PRECEDING ACTIONS
Where the actions prior to the current action are forgotten or incorrectly recalled,
leading perhaps to repetition or omission.
(III)ENCODING FAILURES
Where, say, some distracting activity at the time some important information is
presented leads to the actor failing to adequately attend to and encode the information.
(IV)RETRIEVAL FAILURES
Where some information is available in memory but cannot be accessed, perhaps because
the environment does not itself give the actor enough information (or ‘memory cues’) to
assist appropriate retrieval. There is considerable psychological evidence to suggest that
the similarity between the environment in which information was initially encoded to
the environment in which it is retrieved greatly influences the accessibility of
information in memory.
(V)RECONSTRUCTIVE MEMORY ERRORS
Where, in the absence of directly retrieving details of past actions or events, what is
believed to have occurred in the past is guessed at or otherwise reconstructed. There
is considerable psychological evidence to suggest that people often engage in
reconstructing the past in the light of their general knowledge. For example, knowing
that one typically visits a certain restaurant on a Saturday, one may assume that this was
the case on a particular date, even if no specific memory of activities on that occasion
exists. Sometimes error-prone memory reconstructions can be as subjectively vivid and
as confidently believed in as memories for actually occurring events.
3.1.1.4 Selection failures
Even if opportunities for action have been correctly identified, the actor is attending
appropriately to unfolding events, and plans and prior actions are appropriately
remembered, it is still possible to select the incorrect action from the range of
alternatives available. These failures often occur when an actor has to engage in several
different planned sequences simultaneously. The following sub-classes can be
identified:
(I)MULTIPLE SIDE-STEPS
Where in pursuing an initial plan, one gets diverted into another, and maybe yet another,
leading to an incorrect action being performed when one eventually returns to the initial
plan.
(II)MISORDERING
Where correct actions are performed but in an incorrect order.
(III)BLENDING ACTIONS FROM TWO CURRENT PLANS
Where elements from one plan migrate or are transposed to another one.
(IV)CARRY-OVERS
Where an action from one plan which has just or is about to terminate is carried over to
the next plan that the actor has just started or is about to engage in.
(V)REVERSALS
Where an action sequence is incorrectly reversed. These can occur when a plan enters a
state which could equally arise in the plan which brings about the opposite outcome.
For example: one intends to take off one’s shoes. The laces are undone, the shoe is
picked up and then put back on again. Taking off one’s shoes is (of course) reversed by
putting the shoes back on. The starting state for the one plan is the end state for the
other. This makes it possible for a slip involving a selection failure of the reversal type to
occur.
3.1.1.5 Summary of Slips and Lapses
This concludes the section on slips and lapses in individual human activity. Depending
upon which phase of planned action is being performed at a particular time, slips and
lapses can take the form of errors in: recognition of situational factors; attention to
progress in the process; memory of the plan and its state of execution; or selection of an
appropriate action. The result of this section is the following classification of human
error types in skilled activity (Figure 3.3):
1. Slips and lapses
1.1 Recognition failures
- 1.1.1 Misidentification
- 1.1.2 Non-detection
- 1.1.3 False positives
1.2 Attentional failures
- 1.2.1 Inattention slips
1.2.1.1 Branching slips
1.2.1.2 Overshoots and undershoots
1.2.1.3 Omissions following interruptions
1.2.1.4 Unawareness that the plan is inappropriate
- 1.2.2 Slips through over-attention
1.2.2.1 Mistimed checks
1.2.2.2 Disrupting well practised actions
1.3 Memory failures
- 1.3.1 Forgetting intentions
- 1.3.2 Forgetting or misremembering preceding actions
- 1.3.3 Encoding failures
- 1.3.4 Retrieval failures
- 1.3.5 Reconstructive memory errors
1.4 Selection failures
- 1.4.1 Multiple side-steps
- 1.4.2 Misordering
- 1.4.3 Blending actions from two current plans
- 1.4.4 Carry-overs
- 1.4.5 Reversals
Figure 3.3: Taxonomy of slips and lapses (errors due to individual skill-based activity)
This classification forms the first part of a hierarchy of error types which is added to
over the remainder of this chapter.
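As a purely illustrative aid (not part of the thesis's own method), the hierarchy in Figure 3.3 can be captured as a nested data structure, which makes it straightforward to enumerate or count the error types programmatically. The sketch below is in Python; the category and leaf names are taken directly from the figure, while the `leaves` helper is a hypothetical convenience function:

```python
# Figure 3.3 as a nested dictionary: each key is a failure category,
# each value is either a sub-taxonomy (dict) or a list of leaf types.
SLIPS_AND_LAPSES = {
    "Recognition failures": [
        "Misidentification", "Non-detection", "False positives"],
    "Attentional failures": {
        "Inattention slips": [
            "Branching slips", "Overshoots and undershoots",
            "Omissions following interruptions",
            "Unawareness that the plan is inappropriate"],
        "Slips through over-attention": [
            "Mistimed checks", "Disrupting well practised actions"],
    },
    "Memory failures": [
        "Forgetting intentions",
        "Forgetting or misremembering preceding actions",
        "Encoding failures", "Retrieval failures",
        "Reconstructive memory errors"],
    "Selection failures": [
        "Multiple side-steps", "Misordering",
        "Blending actions from two current plans",
        "Carry-overs", "Reversals"],
}

def leaves(taxonomy):
    """Yield every leaf error type in the taxonomy, depth first."""
    for value in taxonomy.values():
        if isinstance(value, dict):
            yield from leaves(value)
        else:
            yield from value

# Figure 3.3 lists 19 distinct slip and lapse types across four categories.
assert len(list(leaves(SLIPS_AND_LAPSES))) == 19
```

The same nested-dictionary shape can be extended with the mistake and violation branches added later in the chapter, so that the full hierarchy remains traversable with the one helper.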
What distinguishes slips and lapses from other forms of human error is the nature of the
task being performed. The more routine and familiar the activity—the more we operate
in skill-based behaviour—the more prone we are to making errors of these types. This has
implications for the design and improvement of processes which have or may have a
routine or repetitive element.
When conditions are less familiar or routine, skill-based behaviour becomes less of a
factor, and we must turn to rule- or knowledge-based behaviour in order to complete tasks
of increasing degrees of novelty. Human errors in these situations cease to be labelled
slips or lapses, but are termed mistakes. The following section reviews the types of
mistake encountered in rule-based or knowledge-based activity.
3.1.2 Mistakes
Mistakes may be defined as deficiencies or failures in the judgmental and/or inferential processes
involved in the selection of an objective or in the specification of the means to achieve it,
irrespective of whether or not the actions directed by this decision-scheme run according to plan.
(Reason, 1990, p. 9)
In contrast to slips and lapses, mistakes occur in the formulation and construction of
plans, rather than in their execution. It is commonplace (cf. Rasmussen, 1983; Reason,
1990) to distinguish two kinds of mistake: rule-based and knowledge-based mistakes.
Rule-based mistakes involve the application of ‘pre-packaged solutions’ or acting in
accordance with some rule of practice which is inappropriate to the current situation. In
contrast, knowledge-based mistakes occur when there is no ‘rule’ or existing solution to
apply (or misapply). Rather, knowledge-based mistakes occur when an actor’s general
knowledge is called upon in the formulation of new plans or action sequences and
various characteristics of how humans use their general knowledge lead to mistakes of
various sorts. The occurrence of knowledge-based mistakes is notoriously hard to
predict. In this section, mistakes are analysed into these two classes and then further
subclasses are identified.
3.1.2.1 Rule-based mistakes.
Rule-based working is best characterised as a process of tackling familiar problems
where a person must first decide on a classification for the problem they aim to solve,
followed by the selection of a solution. This process is vulnerable at two main points: on
classification of the situation, and on selecting the solution. Failure at these two points
gives rise to two sub-classes of error:
(I)MISAPPLICATION OF GOOD RULES
Where some procedure or problem-solving rule which has proven useful in the past is
applied to a new problem situation where it turns out to be inappropriate. This class of
mistake can often arise due to a failure to distinguish between appropriate and
inappropriate conditions for the application of the rule.
(II)THE APPLICATION OF BAD RULES
Where some procedure or problem-solving rule that is sub-optimal is nevertheless
persisted with because, say, its applicability has not been tested across the entire range of
possibilities or because past mistakes have not been discovered.
Rule-based mistakes, therefore, are particularly likely to occur when someone is engaged
in problem-solving activity in a relatively familiar domain, where previous strategies are
applied in order to achieve their objectives. When previous solutions are not applicable,
the individual must turn to knowledge-based procedures, and then becomes vulnerable
to knowledge-based mistakes.
3.1.2.2 Knowledge-based mistakes.
When one has to construct new solutions to problems and formulate wholly new plans
for action without recourse to existing rule-based solutions, one is subject to various
biases which have been documented extensively in psychological literature over the last
25 years (Kahneman, Slovic and Tversky, 1982). These include:
(I)AVAILABILITY BIASES
Where ideas for solving problems occur to one on the basis of past experience and use of
the ideas, rather than in terms of their direct relevance to the current problem or
evidence relevant to it. That is, one may be biased to adopt solutions which come to
mind easily, which are easily available to thought.
(II)FREQUENCY AND SIMILARITY BIASES
Where the most easily available solutions are ones which have been most frequently
adopted in the past (but which may not be appropriate) or which have been employed in
situations similar in many details to the current one (but where it turns out that the
details which are different are the ones that really matter). These biases relate to how
information is stored and retrieved from memory. We are biased by how often we have
encountered something and by the nature of the similarity of the current situation to
past ones.
(III)CONFIRMATION BIASES
Where evidence is selectively sought to confirm, rather than disconfirm, currently
existing belief states or the utility of potential solutions to problems. Confirmation
biases can work with availability biases to create the situation where people will
formulate an inappropriate though readily available solution to a problem and maintain
it in spite of contradictory evidence. Hunches may be wrongly pursued and early
warnings ignored.
(IV)OVER-CONFIDENCE
Where a preferred solution is given greater confidence than its adequacy might warrant.
In some studies, psychologists have found that people use confirming evidence to
enhance their confidence but do not reciprocally use disconfirming evidence to lower
their confidence in their preferred solution to a problem. Accordingly, the confidence
with which a person holds a particular belief is not always a reliable guide to its likely
usefulness or truth.
(V)INAPPROPRIATE EXPLORATION OF THE PROBLEM SPACE
Where the range of possible solutions to a problem is explored by scrutinising either a
few solutions very deeply or many solutions in a shallow fashion. In the former case, the
best solution might be missed, yet many reasons assembled for tolerating a poor one. In
the latter case, the best solution might be encountered but not explored in enough depth
to distinguish it from poorer ones. This source of error may be particularly acute when
the problem space is very large, that is, when there are many problems to solve and/or
many possible solutions per problem.
(VI)ATTENDING AND FORGETTING IN COMPLEX PROBLEM SPACES
Where the range of issues to be considered is very large, it is likely that only a few can be
attended to at any one moment. Human limitations on attentional capacity will mean
that some considerations may get ignored and/or forgotten at a later moment when they
become relevant.
(VII)BOUNDED RATIONALITY AND SATISFICING
Again, when problems are complex and the range of alternative solutions large, people
often make do with suboptimal solutions if they are for most purposes good enough.
Simon (1976) refers to this tendency as ‘satisficing’ and it arises because humans, far
from being completely rational in their decision making, exhibit ‘bounded rationality’.
(VIII)PROBLEM SIMPLIFICATION THROUGH HALO EFFECTS
Where people over-generalise judgements from one consideration to another in order to
cope with a complex situation. For example, a solution found wanting in one respect
may be judged to be imperfect in others, even if there is no direct evidence to this effect.
That is, one judgement has a ‘halo’ around it which biases other judgements.
(IX)CONTROL ILLUSIONS AND ATTRIBUTION ERRORS
People typically underestimate the influence of chance factors and other considerations
outside of their direct control. That is, people often feel to be more in control of
complex events and processes than they actually are. This bias extends also to accounting
for past events and the actions of others. People tend to attribute events to enduring
characteristics of individuals rather than to environmental contingencies, even when the
latter explanation might be more appropriate.
(X)HINDSIGHT BIASES AND THE ‘I-KNEW-IT-ALL-ALONG’ EFFECT
When considering past events, people often perceive the outcomes of those events as
more unavoidable than they seemed at the time. Even participants in events can
sometimes exaggerate what they knew at the time so as to present themselves as
knowing the outcome ‘all along’. This is not so much because people deceive themselves
but more due to how they organise information about complex events. The outcome of
a complex situation may often be used to reorganise people’s memories of the
antecedent events. In particular, knowledge of the outcome can sometimes be used to
simplify what people need to remember about an event. Hindsight biases, then, may be
less to do with self-deception and more to do with the simplifying tendencies of
memory.
3.1.2.3 Summary of Mistakes
This concludes the section on mistakes in individual human activity. Mistakes can be
considered to occur while engaged in either rule-based or knowledge-based work. The
distinction between the two is related to the familiarity of the individual concerned
with the problem being addressed. Errors in both types of activity depend upon the
individual selecting the wrong actions based on their understanding of the problem, how
they have solved similar problems in the past, and so on.
This section adds the following error types to the existing classification presented in
Figure 3.3 at the end of section 3.1.1 (Figure 3.4):
2. Mistakes
2.1 Rule-based mistakes
- 2.1.1 Misapplication of good rules
- 2.1.2 Application of bad rules
2.2 Knowledge-based mistakes
- 2.2.1 Availability biases
- 2.2.2 Frequency and similarity biases
- 2.2.3 Confirmation biases
- 2.2.4 Over-confidence
- 2.2.5 Inappropriate exploration of the problem space
- 2.2.6 Attending and forgetting in complex problem spaces
- 2.2.7 Bounded rationality and satisficing
- 2.2.8 Problem simplification through halo effects
- 2.2.9 Control illusions and attribution errors
- 2.2.10 Hindsight biases and the ‘I-knew-it-all-along’ effect
Figure 3.4: Taxonomy of human factors contributing to errors in RE due to rule- and knowledge-based
individual activity
3.1.3 Human Errors: A Provisional Summary
This section has presented a review of the field of human error, largely the work of
Reason, which is concerned with the cognitive mechanisms which lead to errors in
human activity. The work is rooted in cognitive psychology, which takes an individual
perspective on the understanding of failures in human-intensive processes. The work is
best summarised in Reason’s own terms, as in Table 3.1 below.
This concludes the review of human errors due to cognitive factors, i.e. relating to the
work of individuals. So far, this chapter has developed a classification of human errors
in terms of three different levels of cognitive activity, namely skill-based, rule-based, and
knowledge-based. Each level of activity implies a different relationship between the
individual concerned, and their current activity. This, in turn, leads to different kinds of
vulnerability to error, according to the nature of the individual activity. The resultant
hierarchy of error types, when considered together with the nature of an activity of
interest, provides a means of narrowing down the search for the errors which a
particular activity may be vulnerable to, along with an indication of the kind of
measures which can be implemented to defend against them.
Table 3.1: Distinctions between skill-, rule-, and knowledge-based errors (from Reason, 1990, p. 62)
Type of activity
- Skill-based: routine actions.
- Rule- and knowledge-based: problem-solving activities.
Focus of attention
- Skill-based: on something other than the task in hand.
- Rule- and knowledge-based: directed at problem-related issues.
Control mode
- Skill- and rule-based: mainly by automatic processors (schemata in the skill-based case, stored rules in the rule-based case).
- Knowledge-based: limited, conscious processes.
Predictability of error types
- Skill- and rule-based: largely predictable “strong-but-wrong” errors (actions in the skill-based case, rules in the rule-based case).
- Knowledge-based: variable.
Ratio of error to opportunity for error
- Skill- and rule-based: though absolute numbers may be high, these constitute a small proportion of the total number of opportunities for error.
- Knowledge-based: absolute numbers small, but opportunity ratio high.
Influence of situational factors
- Skill- and rule-based: low to moderate; intrinsic factors (frequency of prior use) likely to exert the dominant influence.
- Knowledge-based: extrinsic factors likely to dominate.
Ease of detection
- Skill-based: detection usually fairly rapid and effective.
- Rule- and knowledge-based: difficult, and often only achieved through external intervention.
Relationship to change
- Skill-based: knowledge of change not accessed at proper time.
- Rule-based: when and how anticipated change will occur unknown.
- Knowledge-based: changes not prepared for or anticipated.
This chapter’s consideration of individual human activity has not, however, quite
concluded yet. The literature also identifies a further class of ‘errors’ where actions do
not follow the specified plan or procedure. These are distinguished from what has gone
before by the complicity of the actor(s) concerned. Whereas slips, lapses, and mistakes
are generally taken to be inadvertent, violations are usually deliberate deviations from the
plan. The following section reviews this area of research in the human error community.
3.1.4 Violations
Violations are deviations from safe operating processes, practices, procedures, standards
or rules. Deviations can be deliberate (breaching rules for safe practice when knowing
that such rules exist) or erroneous (acting against the recommendations of a rule without
being aware of the existence of such a rule). Of these two classes of violation, deliberate
violations have been most studied by psychologists and human factors researchers.
However, the research on violations is still small in comparison with what is known
about slips, lapses and rule and knowledge-based mistakes.
According to Reason (1990), deliberate violations differ from the errors covered so far
in a number of respects. These are summarised in Table 3.2.
Table 3.2: Comparison of errors versus violations (based on Reason, 1990)
- Errors are mainly informational in origin (incorrect or incomplete information leads to error); violations are mainly motivational in origin (certain attitudes, social norms or an organisational culture encourages violation).
- Errors are unintended; violations are typically deliberate.
- Errors can be explained in terms of individual information-processing characteristics; violations have to be understood in relation to the social context.
- Errors can often be remedied by improving the relevant information; violations can only be remedied by changing attitudes, social norms or organisational culture.
According to Reason (1990), then, violations have a different origin from errors, and
require different remedies and management strategies.
There can be many different motivations for deliberate violation. Violations are not
necessarily due simply to the wilful negligence of operators, though this sometimes can
be the case. Violations also relate to organisational issues such as:
The nature of the workplace.
- A poor, badly maintained workplace may not only promote errors, it may also
give workers the impression that, as the organisation does not care, they need
not.
The quality of tools and equipment.
- Inappropriate or poor quality tools may encourage workers to use them in a
way which they were not designed for.
Whether supervisors and managers turn a ‘blind eye’ to violations to get the job
done.
- Some jobs may be very difficult to do by strictly following safe operating
procedures and under such circumstances ‘cutting corners’ may become a way
of life.
The quality of the processes, rules, regulations and operating procedures.
- Operating procedures may be poorly formulated, out of date or themselves
erroneous, in which case workers may be strongly motivated to violate them.
The organisation’s overall safety culture.
- An organisation which does not encourage an awareness of safety issues may
implicitly encourage deliberate violation.
3.1.4.1 Violations and Safe Operating Procedures
Ironically, violations often arise when a relatively mature system is used. A relatively
mature system may have, in the past, endured a number of accidents or incidents, each
of which have provoked further restrictions on the actions of operators or users.
Additionally, as experience in the use of the system accumulates, the number of
procedural changes may also accumulate leading to a situation where operators become
more and more restricted as time goes by.
This is not necessarily the case, it must be emphasised, but it is quite typical for a critical
incident to provoke further restriction on operators rather than a full system redesign.
Thus, the scope of permitted action can decrease with each incident or new system
version until it is narrower than those actions required to get the job done. Under such
circumstances, workers may have to violate to be able to do their work. Ironically, then,
increasing experience with a system, if this results in increasing prohibitions of unsafe
acts, can sometimes lead to circumstances when violations are more and not less likely
(Westrum, 1991). It is important, then, to avoid procedural over-specification in
response to incidents, accidents and increasing experience as the history of a system
unfolds.
3.1.4.2 Classifying Violations
This thesis has followed Rasmussen (1983) in arguing that human actions can be carried
out at three levels:
the skill-based level
- where routine actions are carried out in highly practised tasks;
the rule-based level
- where procedural, if/then rules embodying our pre-packaged knowledge are
consulted if we need to modify our pre-packaged behaviour;
the knowledge-based level
- where our general knowledge of the world is consulted if no pre-packaged
rules are available to us.
Reason (1990) suggests on the basis of a number of studies that this three level
framework can also be used to classify violations. The remainder of this section explores
this classification in more detail. Table 3.3 summarises this approach:
Table 3.3: Skill-, rule-, knowledge-based violations (based on Reason, 1990)
- Skill-based performance; error type: slips and lapses; violation type: routine and optimising violations.
- Rule-based performance; error type: rule-based mistakes; violation type: situational violations and ‘misventions’.
- Knowledge-based performance; error type: knowledge-based mistakes; violation type: exceptional violations.
(I)SKILL-BASED VIOLATIONS
These are where some aspect of safe operating practice is violated by the skilled, routine
performance of workers. That is, their routinely used skills are violational. Such routine
violations include corner-cutting and making-do. They can be especially prominent in an
environment which is relatively indifferent to safety issues. Skilled, yet violatory,
practices can easily develop unchecked under these circumstances, especially as the skills
of operators (and other workers) are rarely explicitly analysed in operating procedures
beyond general exhortations to perform with care. It is also important to note that skill-
based violations, because the exercise of a skill is largely carried out automatically, often
occur without the awareness of those who commit them. Within the class of skill-based
violations are optimising violations. These occur in the performance of some routine task
where the actor will optimise how the task is done in non-functional ways. For
example, on long haul air flights, pilots have been known to engage in risky manoeuvres
to alleviate the boredom. These are violations of standard operating procedure engaged
in to ‘optimise’ what would otherwise be a boring task.
(II)RULE-BASED VIOLATIONS
As remarked above, a lengthy history of the usage of a system does not necessarily
improve its safety as increasingly prohibitive procedures can sometimes increase the
likelihood of violation just to get the job done. Violations of this sort can be termed
situational violations as they typically involve breaking restrictive procedures in the light of
particular situational exigencies. Situational violations often occur with the conscious
awareness of actors. They know they are breaking procedures because their scope for
action in a particular situation is so constrained that they almost have to in order to
perform at all. Situational violations can sometimes be remedied by changing situational
constraints, for example, by introducing more appropriate tools or improving working
conditions. Situational violations tend to be deliberate acts carried out in the belief that
they will not result in bad consequences. However, of course, such beliefs can be false in
which case the violation will also consist of a rule-based mistake. Reason (1990) refers
to such blends of violation and mistake as a ‘misvention’ (for mistaken circumvention).
(III)KNOWLEDGE-BASED VIOLATIONS
In a psychological sense, the performance of a skilled activity is typically routine and
automatic. When routine skills are not available, pre-packaged procedures that are
available to the actor may be scrutinised for their relevance and usefulness to the task.
Where no such pre-packaged procedures are involved, the actor’s general knowledge of
the world has to be recruited and a more problem solving mode of thought is entered
into. This latter form of action—action that is knowledge-based—is typically, therefore,
engaged in when circumstances are exceptional and unfamiliar for the actor. Thus,
violations at this level can be classified as exceptional violations. They are deliberate
violations entered into because the circumstances are exceptional. Exceptional
violations can involve inappropriate action when a rare but trained-for situation occurs
or when there is a very improbable combination of more familiar circumstances.
3.1.4.3 Summary of Violations
This concludes the consideration of violations by individuals of plans and procedures. It
has already been noted that violations may well become more likely as the maturity of a
process increases and more restrictions are imposed on how the process should be
followed as a consequence. It is also worth noting, in a similar vein, that violations often
take place in order to make the process work, and may be in keeping with the spirit in
which the process is intended to be carried out. Violations, therefore, are not
necessarily as problematic in themselves as their categorisation in the literature suggests,
but may well be indicative of problems in the specification of the process which they are
circumventing.
As with the previous sections on slips, lapses, and mistakes, this section has also
uncovered further types of error in individual activity, and these are summarised below
(Figure 3.5):
3. Violations
3.1 Routine and optimising violations
3.2 Situational violations and ‘misventions’
3.3 Exceptional violations
Figure 3.5: Taxonomy of human factors contributing to errors in RE due to violations
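Purely as an illustrative sketch (the mapping itself comes from Tables 3.2 and 3.3, after Reason, 1990, and is not an implementation proposed by the thesis), the correspondence between performance levels, error types and violation types can be expressed as a small lookup. The `classify` helper and its `deliberate` flag are hypothetical names introduced only for this example, reflecting the errors-are-unintended/violations-are-deliberate distinction of Table 3.2:

```python
# Table 3.3 as a lookup: performance level -> (error type, violation type).
PERFORMANCE_LEVELS = {
    "skill-based": ("slips and lapses",
                    "routine and optimising violations"),
    "rule-based": ("rule-based mistakes",
                   "situational violations and 'misventions'"),
    "knowledge-based": ("knowledge-based mistakes",
                        "exceptional violations"),
}

def classify(level, deliberate):
    """Return the error or violation class for a given performance level,
    depending on whether the deviation was deliberate (a violation) or
    inadvertent (an error), per Table 3.2's distinction."""
    error, violation = PERFORMANCE_LEVELS[level]
    return violation if deliberate else error

# An inadvertent deviation during routine, skill-based work is a slip/lapse:
assert classify("skill-based", deliberate=False) == "slips and lapses"
```

A lookup of this shape could serve as a first-pass checklist when auditing a process incident: establish the performance level and whether the deviation was deliberate, then consult the corresponding branch of the taxonomy for candidate causes.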
3.1.5 Summary of Errors in Individual Activity
This section has considered the role of the individual in originating errors, which have
been classified in terms of skill-based slips and lapses, and rule-based and knowledge-
based mistakes. These have been examined principally in terms of their relation to a plan
(or process) and their originating factors have been briefly considered. Violations,
deliberate deviations from the process, have also been examined. These occur for a host
of reasons, many of which are social in origin. Any RE process improvement technique,
and especially one explicitly focusing on the human factors component of processes,
must incorporate facilities to minimise individual errors. However, consideration needs
to move beyond that of errors in individual activity in order to examine the social
process which is central to the development of any real system. The following section,
therefore, considers group processes, and their potential for error.
3.2 Group Performance Failures and Process Losses
Many work processes are group activities. That is, they involve the participation of
several individuals acting and interacting together. The management and use of safety-
critical systems is almost invariably something done by a group or team of individuals.
Even under those circumstances where individuals have quite clear roles and
responsibilities, the actions they perform in those roles have to be coordinated with the
actions performed by others in other roles for the whole team’s work to be an effective
and dependable process. These remarks are generally true of most advanced industrial
production processes—few are the responsibility from start to finish of only one
individual—and certainly true of working with safety-critical systems. Furthermore, the
design and development of safety-critical systems, including the engineering of their
requirements, are practically never undertaken by single individuals.
Errors which arise through the ineffective or inappropriate coordination of individuals
in groups will be termed coordination failures. Related to coordination failures, the work
which has been conducted by social psychologists on group process losses will be analysed.
Interacting in a group can sometimes lead to individuals making fewer contributions to
the solution of some problem than they may have made working individually. Such
losses may, under certain circumstances, lead to coordination failures, where, for
example, group losses entail the omission of some critical action or contribution to the
group’s work. These remarks mean that how individuals coordinate their actions in
group and team activities is crucial to understanding the origins and development of
certain kinds of errors and failures in design and use. Just as for errors in individual
activity, the RE processes for safety critical system development must be designed so as
to protect against failures of this nature.
In the remainder of this section, these issues of group performance are analysed under
various headings:
the facilitation or inhibition of individual task performance by the presence of
others (supervisors or an audience);
the performance of interacting social groups and how this relates to individual
performance;
how factors such as leadership, status, and expertise can affect group interaction
and influence group decision making.
The purpose of proceeding through the subject matter in this manner is to build up
further classifications of vulnerabilities to error in human processes in much the same
way as in section 3.1 which focused on individual activity. The main difference in this
section is that the sources of the research and findings which arise out of it are much
more diverse. The outcome for this thesis, however, is much the same, and takes the
form of a hierarchy of error and failure types which is built up section-by-section.
Each of the following sections visits a particular focus of research in social psychological
studies of social settings and group work in particular. These are: social facilitation and
inhibition; performance in interacting groups; group leadership; conformity and
consensus; minority influence; and group decision making. In each case, one or more
candidates are suggested for addition to a classification of potential problems with group
work as was the case for individual work in the previous section. The final summary
section brings all these candidates together.
3.2.1 Social Facilitation and Inhibition
The mere presence of others can often affect performance even on individual tasks. That
is, an individual’s performance can be affected even if there is no interaction taking
place between the task-performing individual and others present. Paradoxically, these
effects can be either facilitatory (presence of others leads to better performance) or
inhibitory (presence of others leads to worse performance). Travis (1925) found that, on
a simple manual task, people improved their performance in the presence of an audience.
On the other hand, Pessin (1933) found that it took people longer to learn a list of items
when facing an audience than when practising alone. It is likely that the presence of
others improves performance on easy, well-learned tasks (that is, those which are
conducted at the skill level) whereas social inhibition occurs when subjects are engaged
in difficult or unfamiliar tasks which are not (yet) well learned (that is, those tasks
which are more likely to be conducted at the rule and knowledge-based levels)17.
While this resolution of the ‘paradox’ that both social facilitation and inhibition can be
found is reasonably robust, there is considerable disagreement amongst social
psychologists as to just why social facilitation occurs at the skill level but inhibition
occurs at the other levels. The presence of others can have a variety of effects. Others
can:
serve as distracting stimuli;
lead the task performer to worry that their performance might be explicitly or
implicitly evaluated by the others;
cause the task performer to become aroused and anxious;
and so forth.
Any full theoretical explanation of social facilitation and inhibition effects would have
to integrate both arousal/motivational effects and information processing or problem
solving effects of the presence of others (for details of such a theory, see Paulus, 1989).
Findings of the sort documented here should lead, for example, to consideration of
questions surrounding when and where the direct supervision of workers is appropriate.
Direct supervision, even if it is founded in the aim of monitoring and checking for errors
in performance, can sometimes lead to performance losses if the task in question
implicates the use of one’s general knowledge in problem solving activity or if the task is
being conducted using pre-packaged proceduralised rules. On the other hand, the
performance of a routine, skilled task may even be improved by the presence of a
supervisor, though whether the dedication of a supervisor to monitoring an individual
performing a routine task is a worthwhile use of another individual’s time may turn out to
be debatable.
17 See Zajonc (1965) for further details and Manstead and Semin (1980) for the connection of these
phenomena to distinctions between automatic (skill-based) and consciously controlled (rule and
knowledge-based) performance.
This analysis suggests an important class of errors or failures which is the first in the
social and group performance category:
Errors or failures due to social facilitation or inhibition.
3.2.2 Performance in Interacting Groups
How does the performance of groups of people jointly engaged in the performance of
some task relate to the performance of the individuals comprising them? Will a group
outperform the best individual within that group (in which case conducting the task in a
group situation has benefits) or will a group lead to performance worse than this (in
which case the task would be better performed by either the most able individual alone
or by a ‘nominal group’ of people working independently from whom the best solution
is drawn)? Again, at first glance, the research presents a contradictory picture. There are
circumstances where groups outperform the level that the best individual within them is
capable of achieving alone. On the other hand, there are circumstances where individuals
working alone, when their performance levels are appropriately summed, outperform
groups.
Steiner (1972; 1976) convincingly argues that the relationship between group and
individual performance, and hence whether there are significant group productivity
losses, depends upon the kind of task being performed. To give a simple example, a team
of people building a house will obviously complete it faster than an individual working
alone. Furthermore, a building team where the jobs that each does are matched to their
abilities will complete the job faster than individuals of identical abilities. On the other
hand, it is likely that the fastest relay race team will be made up of the four fastest
runners. A single slow runner will impact directly on the whole team’s performance.
Thus, in many activities, the performance of the least able member is critical to group
success and cannot be compensated for by the good performance of more able members.
Table 3.4 summarises Steiner’s classification of tasks.
Steiner and others who have followed him (e.g. Wilke and Van Knippenberg, 1988)
have shown that by classifying tasks into the different ways in which individual inputs
are combined in group performance, one can make predictions about the nature and
extent of group performance losses. These results are summarised in the following
sections:
3.2.2.1 Additive tasks
For these, a group will always outperform any single individual comprising the group
because, by definition, group performance equals the sum of individual performance
levels. However, it was discovered over one hundred years ago by Ringelmann (see
Wilke and Van Knippenberg, 1988) that, while this is true, the contribution that
individuals make as part of a team is often less than the contribution they would make if
acting alone—a phenomenon known as ‘social loafing’ (for a substantial recent review,
see Karau and Williams, 1993). Indeed, individual contributions often go down as a
function of increasing group size. Thus, although the actual productivity of the group
exceeds the productivity of the best member, the potential productivity is higher still.
These losses from potential productivity are of two main sorts:
Table 3.4: A summary of Steiner’s typology of tasks (Steiner, 1972; Steiner, 1976) (from Wilke and
Van Knippenberg, 1988, p. 325)

Can the task be broken down into sub-components, or is division of the task inappropriate?
- Sub-tasks can be identified → Divisible. Examples: playing a football game, building a house, preparing a six-course meal.
- No sub-tasks exist → Unitary. Examples: pulling on a rope, reading a book, solving a mathematics problem.

Which is more important: quantity produced or quality of performance?
- Quantity → Maximising. Examples: generating many ideas, lifting the greatest weight, scoring the most runs.
- Quality → Optimising. Examples: generating the best idea, getting the right answer, solving a mathematics problem.

How are individual inputs related to the group’s product?
- Individual inputs are added together → Additive. Examples: pulling a rope, stuffing envelopes, shovelling snow.
- Group product is average of individual judgements → Compensatory. Examples: averaging individuals’ estimates of the number of beans in a jar, the weight of an object, or room temperature.
- Group selects product from pool of individual members’ judgements → Disjunctive. Examples: questions involving ‘yes-no, either-or’ answers, such as mathematics problems, puzzles, and choices between options.
- All group members contribute to the product → Conjunctive. Examples: climbing a mountain, eating a meal, relay races, soldiers marching in file.
- Group can decide how individual inputs relate to group product → Discretionary. Examples: deciding to shovel snow together, opting to vote on the best answer to a mathematics problem, letting the leader answer a question.
(I) MOTIVATIONAL LOSSES
As part of a team, individuals need not be motivated to perform as well as they would
when performing individually. One can be a ‘free-rider’.
(II) COORDINATION LOSSES
Some of each individual’s activity has to be devoted to coordinating their efforts with
others rather than the direct performance of the task itself.
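The gap between potential and actual productivity on additive tasks can be illustrated with a small numerical sketch. The per-member loss factor below is an invented assumption, chosen only to reproduce the qualitative Ringelmann pattern, not a figure from the literature:

```python
# Hypothetical sketch of the Ringelmann effect on an additive task.
# Each member could contribute 100 units working alone (the potential),
# but effective per-member contribution declines as group size grows,
# standing in for motivational and coordination losses. The loss factor
# is invented for illustration.

def group_output(n, solo_output=100.0, loss_per_member=0.05):
    """Actual output of an n-person group on an additive task."""
    effort_fraction = max(0.0, 1.0 - loss_per_member * (n - 1))
    return n * solo_output * effort_fraction

for n in (1, 2, 4, 8):
    potential = n * 100.0
    print(n, potential, group_output(n))
```

Even with loafing, the group's actual output exceeds any single member's; the loss is measured against the higher potential output.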
3.2.2.2 Compensatory tasks
Consider a task in which a group of individuals are each making estimates of the number
of beans in a jar or of the weight of an object. Shaw (1981) argues that the bulk of
evidence for tasks of this sort indicates that the statistical average of a group of people
making such estimates is more reliable than the judgements of most of the individuals
making up the group. That is, the overestimates of some cancel out the underestimates
of others. Steiner (1972; 1976), however, suggests that this conclusion should be taken
with some care as it is not always possible in daily life to statistically average in simple
ways. Equally, one is not always acquainted with the skills and biases of the individuals
making up the group so one cannot be sure whether the variation in their biases will be
likely to lead to overestimates cancelling out underestimates.
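The error-cancellation argument can be made concrete with a small simulation. The jar count, the spread of estimates, and the assumption that individual errors are unbiased and independent are all hypothetical choices made for illustration; Steiner's caveat is precisely that real groups may violate them:

```python
# Sketch of a compensatory task: averaging independent, unbiased estimates.
# With unbiased estimators, the mean of the group's estimates is usually
# closer to the truth than most individual estimates. All numbers here
# are invented for illustration.
import random

random.seed(1)
TRUE_COUNT = 500                      # "beans in the jar"
estimates = [random.gauss(TRUE_COUNT, 80) for _ in range(20)]

group_estimate = sum(estimates) / len(estimates)
group_error = abs(group_estimate - TRUE_COUNT)

# How many individuals beat the group average?
better_individuals = sum(1 for e in estimates
                         if abs(e - TRUE_COUNT) < group_error)
print(group_error, better_individuals)
```

If the estimators were systematically biased in the same direction, the cancellation would fail, which is the essence of Steiner's reservation.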
3.2.2.3 Disjunctive tasks
These are tasks where the group’s solution or overall outcome will be a selection from
the individually proposed solutions or individually contributed performances. Many
problem solving tasks are of this sort, where a single option (or a range of options which
is less in number than all those proposed by the group) must ultimately be selected. Very
often, not surprisingly, group performance in disjunctive tasks will equal the best
performance of the individuals who make up the group. Shaw (1932) explains this result
by noting that groups have the opportunity to correct the errors and reject the incorrect
suggestions made by individuals. However, further work has made the picture more
complex for, on the one hand, it is not always the case that a group member does
propose the best solution to a problem and, on the other hand, it is not always the case
that a group happens to adopt the best solution even if it is proposed by one of its
members. Much depends on whether the best solution is recognised as such by the group
members. This may only be possible for certain kinds of problem or task. Tasks where
optimal solutions are easily recognised as such are known as eureka tasks. In contrast, for
non-eureka tasks, it is quite possible that a correct solution will not be proposed or an
incorrect solution will amass support from the group members. The critical aspects for
group success in disjunctive tasks appear to be (Steiner, 1972; Thomas and Fink, 1961):
Potential performance and member expertise. Do group members possess
the right expertise for solving the problem?
- if there is not at least one competent member the group is unlikely to succeed.
Motivation. Do group members, possessing the correct solution, actually
propose it?
- if a low status member happens upon the solution, they may feel unable to
express it.
Coordination. Do correct solutions elicit more support than incorrect ones so
that they emerge as the group’s overall solution?
- if a low status member or one held to be inexpert does express the solution, it
might be resisted by other group members or ignored;
- in some problem domains, the overhead of the task of convincing others that
a solution is correct may be too high for correct yet non-obvious solutions to
be adopted.
3.2.2.4 Conjunctive tasks
In these it is necessary that every member contributes, lest the group fail. In a relay race
team, it takes only one runner to drop the baton or run out of
their lane for the whole team to be disqualified. Conjunctive tasks, where group success
depends upon the least proficient member, are more likely to fail with increasing group
size, as the probability that the group will contain at least one member who does not
contribute increases the more members there are. This is particularly true for unitary
group tasks. However, many tasks in everyday life are divisible. That is, different
members can adopt different sub-tasks. If the sub-tasks have a relative degree of
independence in their execution then the effects of failure of one sub-task (allocated to
one individual or a subgroup) may not be catastrophic for the whole enterprise.
Additionally, in divisible conjunctive tasks, if the competencies of the individuals match
the sub-tasks they are engaged in, then the potential productivity of the group can rise
above the productivity of the least able member.
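The size effect for unitary conjunctive tasks follows from elementary probability. In the sketch below, the assumption that members fail independently with a common 5% probability is hypothetical, chosen purely for illustration:

```python
# Unitary conjunctive task: the group fails if ANY member fails.
# If each member independently fails with probability p, the group
# failure probability is 1 - (1 - p)^n, which grows with group size n.

def group_failure_probability(n, p=0.05):
    return 1.0 - (1.0 - p) ** n

for n in (1, 4, 10, 20):
    print(n, round(group_failure_probability(n), 3))
```

The same arithmetic shows why dividing the task into loosely coupled sub-tasks helps: a sub-task failure need not propagate to the whole enterprise.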
3.2.2.5 Discretionary tasks
A discretionary task is one in which the members of the group themselves decide upon
what kind of task it is, how the task is to be performed, and how individual inputs are
to be coordinated into a group outcome. Discretionary tasks, if this can be put so
paradoxically, are by definition ill-defined. They are likely to resolve into one of the
forms of task already discussed in which case one can expect the corresponding
performance levels to be achieved and losses (if any) encountered. However, this very
process of resolving the task and its conduct is itself an overhead to the performance of
the task in which coordination losses are likely to become critical.
The results and analysis above are summarised in Table 3.5.
3.2.2.6 Summary of group performance failures
Just as Steiner’s system can be used to classify tasks, this scheme as presented in Table
3.5 can be used to classify group performance failures and errors in such tasks. Thus, one
can refer to additive task errors, compensatory task errors and so forth as errors made in
additive and compensatory tasks. In addition, this task-based classification scheme can
be complemented with one based on some of the social psychological phenomena noted.
This gives rise to the following additions to the classification of social and group
performance failures:
Failures and errors due to inappropriate human resources in the group
- e.g. there is no adequately competent group member
Socio-motivational errors and failures
- including free-rider and related problems
Group coordination errors and failures
- e.g. where the overhead of coordinating a group militates against its
performance or the performance of each or some of its members
Status related errors and failures
- e.g. where the status of a group member militates against their contributions
being taken seriously, or where the group places undue trust in an ‘expert’
Group planning and management errors and failures
- e.g. when groups inappropriately break down tasks to sub-tasks and
inappropriately allocate individuals to sub-tasks
Table 3.5: Group performance of groups working on various types of tasks (Forsyth, 1983; Steiner, 1972;
Steiner, 1976) (from Wilke and Van Knippenberg, 1988, p. 332)

Additive: better than best. The group out-performs the best individual member.
Compensatory: better than most. The group out-performs a substantial number of group members.
Disjunctive (eureka): equal to the best. Group performance matches the performance of the best member.
Disjunctive (non-eureka): less than best. Group performance can match that of the best member, but often falls short.
Conjunctive (unitary): equal to the worst. Group performance matches the performance of the worst member.
Conjunctive (divisible with matching): better than the worst. If sub-tasks are properly matched to the ability of members, group performance can reach high levels.
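The productivity baselines in Table 3.5 can be sketched as simple combination rules mapping individual scores to a predicted group product. This is a toy model only: it ignores motivational and coordination losses, treats the disjunctive rule as always selecting the best contribution (the eureka case), and the scores themselves are arbitrary:

```python
# Toy encoding of Steiner's combination rules (cf. Table 3.5).
# Each rule maps individual performance scores to a predicted group
# product; real groups also suffer motivational and coordination
# losses, which this sketch deliberately omits.

COMBINATION_RULES = {
    "additive":     sum,                           # better than the best member
    "compensatory": lambda xs: sum(xs) / len(xs),  # better than most members
    "disjunctive":  max,                           # at best, equal to the best
    "conjunctive":  min,                           # equal to the worst
}

def predicted_group_product(task_type, scores):
    return COMBINATION_RULES[task_type](scores)

scores = [3, 5, 8]
for task in COMBINATION_RULES:
    print(task, predicted_group_product(task, scores))
```

The non-eureka disjunctive and divisible conjunctive rows of Table 3.5 resist such a simple rule, which is exactly where the social phenomena discussed below enter.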
3.2.3 Group Leadership
There is a considerable social psychological literature on the topic of leadership in small
groups. This literature covers many different aspects of the issue but perhaps the most
relevant for current purposes concerns the leadership factors which are likely to promote
the productivity or success of a group in the performance of the group’s task.
One approach has been to study experimental groups who do not have a leader initially
to see under what circumstances a leader might emerge and what characteristics that
leader may have. In an early series of observational studies of groups in interaction, Bales
and Slater (1955) suggest that a specialisation often occurs in groups between
‘socio-emotional specialists’ who are oriented towards the solidarity of the group and who
resolve tensions in the group and so forth, and ‘task specialists’ who are more concerned
with the execution of the task itself than the group’s internal social dynamics. Bales and
Slater (1955) argue that group leaders rarely embody both aspects and that effective
groups often have two ‘leaders’—one concerned with the socio-emotional aspects of the
group, one concerned with its effective task performance. This conclusion was borne
out by Likert (1967) in studies which suggested that effective leaders manifested one of
(and rather rarely both) employee centred behaviour and production centred behaviour.
Steiner (1976) argues that employee centred (or socio-emotional) behaviour is necessary
to ensure that the unrealised productivity of the group is kept to a minimum (e.g. by
encouraging all group members to participate) and that production (or task) centred
behaviour is necessary to ensure that the group’s potential productivity is as high as it
can be. Thus, according to this literature, there seem to exist two leadership styles, each
with its own impact on group performance and effective groups often need an
appropriate balance between the two.
Of course, it is not necessary for groups to have a leader. Indeed, some groups may
perform better without an explicit leader and may not have the need for a leader to
emerge. There seem to be a number of critical factors of relevance to whether groups
have a need for a leader.
size—Hemphill (1961) argues that effective performance in a large group is
often dependent on the group having a leader to coordinate various specialised
subgroups and facilitate overall decisions.
availability—the group should have someone at their disposal who has had
relevant leadership experience.
the value of success—success must be important to the group for it to seem
worthwhile to install a leader.
Finally, Rutte and Wilke (1984) suggest that group leadership may be important in
resolving the ‘free-rider problem’ (that is, the possibility that some group members may
contribute unequally and ‘ride’ on the contributions of others), particularly when task
success is in danger.
This analysis leads to a number of potential sources of group failure and error if leaders
are inappropriately or ineffectively installed into groups. For example:
Errors or failures due to inappropriate leadership style (or balance between styles)
- e.g. a group lacks a ‘task sensitive’ leader
Errors or failures due to inappropriate leadership skills
- e.g. the person appointed leader does not have appropriate experience
Errors or failures due to the excessive influence of the leader
- e.g. a high status leader who does not encourage contrary opinions to emerge.
3.2.4 Conformity and Consensus: Normative and Informational Influence
Sooner or later in working together as a group, group members will become aware of the
opinions or contributions of others. Specifically, they may often become aware of
whether they are in the majority or minority of group opinions. Indeed, periodically,
many groups explicitly assess the level of agreement within the group through formal
voting, a ‘show of hands’ or other means. When group members become aware of the
overall position of the group, how does this influence their views? Do individuals within
groups come to realign their views in accordance with the majority view (‘majority
influence’) or do they retain their private views in spite of the majority view? Equally,
under what circumstances can minority opinion come to influence the views of the
majority?
Certainly, the existence of a majority position can influence the views of minority
individuals. Classic social psychological experiments by Sherif and Sherif (1953) and
Asch (1951) are often claimed to demonstrate just that. In Asch’s work, for example, a
series of lines of varying lengths is shown to a group of people. One pair of the lines is
identical in length, and the group members’ task is to say which two they are. Asch
arranged the experiment so that the majority of the group members are ‘confederates’ of
the experimenter who are instructed to give a consistent, yet incorrect response. Asch
found that as many as 37% of people, when confronted with a clearly incorrect majority
opinion, nevertheless fell in line with the majority view.
To further analyse why people conform in such settings, social psychologists often make
a distinction between informational and normative influence. People may be influenced
in their opinions by the information provided by others, what their opinions are and the
reasons they give for them. In contrast, people may also be influenced by normative
reasons to conform with the views of others. For example, an individual may wish to
avoid being disliked and so agree with a majority view to promote the chances that they
will be popular within the group. Alternatively, an individual of low status within a
group may change their view to match that of the majority if that majority contains high
status members. Both of these are examples of normative influence in action.
Thus, consensus in a group can come about through the combination of two factors: the
normative and informational influence that different group members have on one
another. A number of studies have been conducted to try and tease apart the effects of
normative and informational influence to gain an impression of when each factor is most
potent. This work can only be crudely summarised here (for more details, see
Van Avermaet, 1988):
Normative influence is heightened by, for example:
- rewarding conformity itself
- increasing the interdependence of group members on each other
- insisting that opinions and contributions to the group are made public by
being spoken aloud rather than written down privately or anonymously
- informing the group that it will be compared with other groups.
Informational influence is greater, for example:
- for group members who are perceived as being competent in the task domain
- as sources of information become more reliable (e.g. improvements in viewing
conditions)
- as the majority increases (but only if the majority are seen to be acting
independently and are not merely repeating the same reasons for the majority
opinion)
- when the range of opinion within the group increases (that is, if the majority
is not unanimous).
The distinction between normative and informational influence again suggests two
classes of process error or failure which may occur as groups interact in the pursuance of
some task:
Errors or failures due to conformity arising from inappropriate normative
influence
- e.g. when an incorrect judgement of a high status member commands
influence because others respect that status
Errors or failures due to conformity arising from inappropriate informational
influence
- e.g. when the judgement of one member is based on false evidence or is
misunderstood by another group member (at least some of these errors may
arise due to slips, lapses or mistakes being made within the group).
3.2.5 Innovation: Minority Influence
Of course, the existence of a majority opinion or a subsequent overall group consensus is
no guarantee in itself of the correctness or worth of the opinion. Indeed, the main
problem with conforming to majority opinion is that important minority views may be
ignored. As one can imagine, there is much that a group leader or facilitator can do to
prevent minority opinions being passed over. A study by Maier and Solem (1952)
suggests that leadership style is an important determinant here of whether minority
opinions can come to have influence. A group leader who merely monitored the
procedures and agenda followed by the group did not help minority opinions find their
voice while a leadership style which encouraged a more even group participation did
allow effective minority opinions to emerge. Under these circumstances, encouraging
uniform participation went some way to ensuring that all opinions and not just the
majority one were given equivalent discussion. Note that this strategy might also have
assisted in ameliorating process losses due to the free-rider problem (see above).
In principle, of course, it must be possible for a minority to influence majority opinion or
otherwise change and innovation would be impossible. However, it is equally clear that,
due to both normative and informational factors, a majority is hard to displace.
Moscovici (1976) argues that minority opinion can alter majority views provided that
the minority adhere to a behavioural style in which they propose a clear position and hold
firmly to it. Of particular importance to this behavioural style is the consistency with
which the minority defend and advocate their position. This consistency is made up of
two components: intra-individual consistency over time (individuals will not waver
within themselves in their views) and inter-individual consistency over time (individuals
will not waver between themselves in their views). Note that it is important for minorities
to sustain their positions consistently over the long term. This contrasts with the effects of
majority influence which can be immediate.
Provided the conditions noted above hold, there is a chance for the minority to
influence the majority. However, it has been shown that if these very same strategies are used
in turn by the majority against a consistent minority, the effect of the minority can
swiftly disappear (Doms and Van Avermaet, 1985).
Interpretations of exactly how minority influence takes place vary (see Maass and Clark,
1984). However, it is clear according to this literature that, if an adequate range of
opinions are to be considered within a group, strategies must be found for permitting
minority opinions to emerge. Without this, a further class of group process error or
failure may emerge:
Errors or failures due to the exclusion of minority opinion.
3.2.6 Group Decision Making: The Risky Shift, Group Polarisation and
Groupthink
Consider a decision making task in which a group has to resolve on a course of action.
Consider further that each course of action has a set of risks and probabilities attached
to it. Naturally, this is a very common situation in the design of safety-critical systems or
in the engineering of requirements for them. Under such circumstances, what are the
relations between the views of individual group members (the course of action they as
individuals would decide upon) and an overall group decision? In a famous experimental
study, Kogan and Wallach (1964) showed that groups seem to be more tolerant of risks
than the individuals composing them. That is, the course of action resolved upon by the
group was more risky in general than the decisions that the individuals would have
tended to make in isolation. This phenomenon is often known as the risky shift.
However, since this early work, it has been shown that groups are not always ‘riskier’
than the individuals comprising them. Quite often groups can be more cautious than
individuals and the risky shift is not the general phenomenon it might have at first
appeared to be. What seems to be important is the initial level of opinion within a
group. If the individuals initially favour moderately risky strategies, then the group will
adopt yet more risky options. However, if the individuals who comprise the group
initially favour moderately safe options, then the group decision is likely to be even less
risky. That is, the group ‘shifts’ further in the direction already favoured. Myers (1982)
terms this phenomenon group polarisation. Group polarisation seems to occur in a wide
variety of contexts (Lamm and Myers, 1978).
The previous discussion would suggest two reasons why group polarisation can occur.
First, group polarisation may occur for normative reasons as each group member
compares their views with other members’ positions. Members—on realising the overall
group norm—may come to adopt more extreme positions to align themselves more fully
with the direction of the group’s thinking. Alternatively, group polarisation may occur
due to processes of informational influence. Group interaction will yield a number of
arguments, most of which are in support of the position already favoured by the group.
Group discussion therefore will tend to increase the amount of support that the overall
group position will have. In the light of this, members may take even more extreme
positions as they will be encountering arguments for their view which they had not
heard before. Group polarisation, on this view, becomes a matter of mutual persuasion.
Clearly, both informational and normative influence can operate in explaining group
polarisation and Isenberg (1986) explicitly argues that any plausible theory of group
polarisation should combine these two factors.
So-called groupthink (Janis, 1972) is an extreme case of group polarisation. Janis (1972)
described a number of cases of military and political decision making (most notably the
decision making of the Kennedy administration leading to the ‘Bay of Pigs’ invasion in
1961) in which group polarisation takes extreme forms. Groupthink occurs when
already like-minded individuals form a highly cohesive group and mutually
reinforce one another in a course of action which may well turn out to be unwise, in
spite of the group’s extreme conviction. Janis argues that the following antecedent conditions
make groupthink possible:
the decision making group is highly cohesive;
the group is isolated from alternative sources of information;
the group’s leader clearly favours a particular option.
If these conditions are met, then groupthink will be characterised by discussions in
which the group develops:
an illusion of its own invulnerability;
a tendency to mutually rationalise actions which are in line with the proposed
option; and
a tendency to ignore or discount inconsistent evidence and arguments.
Groupthink and less extreme forms of group polarisation, then, constitute another
possible source of error or failure in group or team work:
Errors or failures due to group polarisation and groupthink.
3.2.7 Summary of Group Coordination Failures, Process Losses and Related
Sources of Error
In this section, a variety of contributions to social psychology have been reviewed which
seem to be relevant to understanding the origins of errors and failure in group processes.
In so doing, the following classes of error have been isolated which are gathered together
here in Figure 3.6 in the same way and for the same purpose as the classification of
individual errors presented at the end of section 3.1. Naturally, this is not an exhaustive
classification but it does point to some of the details of group processes which one
should be sensitive to in scrutinising group activity for potential sites of error.
4. Group Coordination Failures and Process Losses
4.1 Errors or failures due to social facilitation or inhibition.
4.2 Failures and errors due to inappropriate human resources in the group.
4.3 Socio-motivational errors and failures.
4.4 Group coordination errors and failures.
4.5 Status-related errors and failures.
4.6 Group planning and management errors and failures.
4.7 Group Leadership
- 4.7.1 Errors or failures due to inappropriate leadership style (or balance between styles).
- 4.7.2 Errors or failures due to inappropriate leadership skills.
- 4.7.3 Errors or failures due to the excessive influence of the leader.
4.8 Premature consensus and the exclusion of minority opinion
- 4.8.1 Errors or failures due to conformity arising from inappropriate normative influence.
- 4.8.2 Errors or failures due to conformity arising from inappropriate informational influence.
- 4.8.3 Errors or failures due to the exclusion of minority opinion.
4.9 Errors or failures due to group polarisation and groupthink.
Figure 3.6: Taxonomy of human factors contributing to errors in RE due to group activity
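A hierarchical classification of this kind can also be represented as a simple nested data structure if one wanted to use it as a machine-readable review checklist. The sketch below is purely illustrative (the structure and helper function are not part of the thesis's method); it encodes a fragment of Figure 3.6 and walks it to enumerate the leaf-level error classes:

```python
# A fragment of the Figure 3.6 taxonomy as a nested dictionary. Entry 4.7
# shows how a sub-divided item (with children) can be represented.
GROUP_ERROR_TAXONOMY = {
    "4.1": "Errors or failures due to social facilitation or inhibition",
    "4.6": "Group planning and management errors and failures",
    "4.7": {
        "label": "Group leadership",
        "children": {
            "4.7.1": "Inappropriate leadership style (or balance between styles)",
            "4.7.3": "Excessive influence of the leader",
        },
    },
    "4.9": "Errors or failures due to group polarisation and groupthink",
}

def leaf_items(taxonomy):
    """Yield (code, description) pairs for every leaf entry in the taxonomy."""
    for code, entry in taxonomy.items():
        if isinstance(entry, dict):
            # Internal node: recurse into its children.
            yield from leaf_items(entry["children"])
        else:
            yield code, entry

for code, description in leaf_items(GROUP_ERROR_TAXONOMY):
    print(code, description)
```

A structure like this could drive a simple checklist tool that prompts an analyst with each leaf item in turn while scrutinising group activity for potential sites of error.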
This concludes this chapter’s treatment of problems and vulnerabilities due to working
in groups, and brings the analysis to the point where one further and final broadening of
perspective is required. So far, the chapter has considered the work of individuals and
the vulnerabilities to error in solitary activity. Then, in the preceding section,
consideration was given to how humans, when engaged in working as a group or team,
are subject to a different set of vulnerabilities to error and sub-optimal performance.
This broadening from individual to group must be taken one step further to now
consider how organisations function, and the nature of vulnerabilities to error in
organisations (which in turn consist of groups of individuals engaged in a variety of
activities of diverse types). The following section considers problems in organisations,
once again with a view to building on the classification of vulnerabilities to error which
has been developed throughout the rest of this chapter.
3.3 Organisational Problems and Failures
The third broad area of research which is turned to in this chapter is that relating to
work at an organisational level, and the errors and failures which organisations are
vulnerable to. Previous sections have characterised the work of requirements engineers
as a combination of individual and cooperative work. One justification for opening up
the coverage to the social psychology of groups was that individual activity does not
occur in isolation, and that for much of the RE process human activity is predominantly
oriented towards the work of others. The same argument applies here, in that the
various individuals and their groupings in project teams and so forth all exist within
some organisational setting, and all their activity pertains to some organisational goal or
other. As such, an understanding of organisations, how they are made up, and how they
function, is extremely relevant for the purposes of this thesis.
The remainder of this section considers three perspectives on the ways in which
organisations have been found to function in hazardous situations, and how they can
contribute to failures, but also to their avoidance. First of all, organisational failures are
viewed as ‘accidents waiting to happen’ in the work on latent organisational failures.
The following section presents a classification scheme for organisations in terms of the
degree of interactive complexity and tightness of coupling between components. On the
basis of this classification, it is suggested that, for some types of organisation, accidents
are inevitable and should therefore be considered normal. Finally, this school of thought
is contrasted with work from a number of researchers who contest that organisations
can be highly reliable in hazardous settings provided a number of recommendations are
adhered to.
3.3.1 Latent Organisational Failures
In this section, the importance of organisational factors in understanding the origins of
error and failure is turned to. An increasing amount of work in the field of accident
and error analysis is concentrating on the factors that can be attributed to failures at an
organisational level. Reason (1990; 1992), in particular, has coined the term latent
organisational failures to describe the errors resulting from organisational factors which
may remain dormant for some time before combining with one or more other factors in
the cause of an accident. These latent failures frequently take the form of fallible
decisions taken high up in the organisation hierarchy, and are so named because of the
likelihood that they will remain unnoticed for some time before being transmitted
through the organisation’s various levels to combine with a triggering event or active
failure (unsafe act) to breach the system’s safety defences and cause an accident (see
Figure 3.7).
[Diagram: successive organisational levels (fallible decisions; line management deficiencies; psychological precursors of unsafe acts; unsafe acts; inadequate defences), through which latent failures propagate until, in interaction with local events and a limited window of accident opportunity, active and latent failures combine to breach the defences and produce an accident.]
Figure 3.7: Active and latent failures and their contribution to the breakdown of complex systems (from
Reason, 1990, p. 202).
Turner (1992) also proposes that system failures develop over a period of time, and are
usually due to a number of factors, rather than a single catastrophic event. According to
Turner, the development of a system failure is typified by the sequence in Figure 3.8:
1. Situation ‘notionally normal’
2. Incubation period
3. Trigger event
4. Onset
5. Rescue and salvage
6. Full cultural readjustment
Figure 3.8: The development of a system failure (from Turner, 1992, p. 193)
From the outset, when the system is ‘notionally normal’, the situation gradually and
invisibly deteriorates during the incubation period until some trigger event occurs. At
this point, the various factors combine to become the onset of the system
failure. Subsequently, the event is recovered from by way of immediate rescue and
salvage, and eventual readjustment, the purpose of which is to understand why the
failure occurred and to try to prevent similar events taking place in the future. Turner
(1992) sets out the predisposing features which will typically interact in the incubation
period, during which time the system is a “disaster waiting to happen”, as follows:
Organisational rigidities of perception and belief
Decoy phenomena which distract attention from genuine hazards
A range of many types of information and communication difficulties associated with the ill-
structured problem which eventually generates the accident. Such ambiguities, noise and
confusion are frequently complicated by unexpected elements injected into the situation by
‘strangers’ who are unfamiliar with the system, most frequently members of the public, and
by additional surprises arising from unanticipated aspects of the ‘site’ or of the technical
system involved
Failure to comply with existing safety regulations
A variety of modes of minimizing or disregarding emergent danger, especially in the final
stages of the incubation period (Turner, 1992, p. 194, original emphasis)
Reason (1992) identifies ten organisational failure types and three latent workplace
factors, which can combine in many ways (3 × 10 = 30 pairings) to produce the various
situations that are the “early warning signs” of an accident. The workplace factors are:
violation-producing conditions, such as unfamiliarity with the task, a poor human-system
interface, and irreversibility of errors;
error-producing conditions, including lack of an organisational safety culture,
management/staff conflict, and poor supervision and checking; and
inadequate defences, where human and technical elements fail to deal with an
accident in terms of protection, detection, warning, recovery, containment, or
escape.
The organisational failure types that these factors can combine with are:
incompatible goals;
organisational deficiencies;
inadequate communications;
poor planning and scheduling;
inadequate control and monitoring;
design failures;
unsuitable materials;
poor operating procedures;
inadequate maintenance; and
poor training.
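The combinatorics behind Reason's "early warning signs" can be made concrete with a short sketch. This is an illustration of the arithmetic only (the list entries are taken from the text above, but the pairing-as-product interpretation is an assumption):

```python
from itertools import product

# Reason's (1992) three latent workplace factors.
workplace_factors = [
    "violation-producing conditions",
    "error-producing conditions",
    "inadequate defences",
]

# Reason's (1992) ten organisational failure types.
failure_types = [
    "incompatible goals", "organisational deficiencies",
    "inadequate communications", "poor planning and scheduling",
    "inadequate control and monitoring", "design failures",
    "unsuitable materials", "poor operating procedures",
    "inadequate maintenance", "poor training",
]

# Each pairing of a failure type with a workplace factor is one potential
# "early warning sign" configuration: 10 x 3 = 30 in all.
warning_signs = list(product(failure_types, workplace_factors))
print(len(warning_signs))  # 30
```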
This work naturally leads to consideration of the types of organisation that are error-
prone as a means of identifying potential problem areas that need to be addressed.
3.3.2 Typologies of Organisations
Reason (1992) proposes a seven-point organisation rating scale, which can be used to
classify the safety culture that exists within an organisation. This scale, based on how
organisations react to hazards, ranges from pathological, where safety practices are at the
bare minimum, through to generative-proactive, where the organisation is constantly
striving to improve safety measures. Reason adopts medical terminology, and talks about
organisational safety ‘health indicators’ and how safety research has so far concentrated
on performance and warning indicators that are concerned with the immediate to
medium term picture of ‘health’. It is Reason’s contention that progress in understanding
what makes a safe organisation will be made only by further study of high reliability
organisations, and by using global ‘health’ indicators that give a more predictive and long
term view of organisational safety.
Perrow (1984) classifies organisations according to their interactions, which may be linear
or complex, and their coupling, which may be loose or tight. The classifications are
summarised in Table 3.6 and Table 3.7 respectively:
Perrow then classifies organisations according to where they fall in the two-dimensional
categorisation of linear-complex interactions versus tight-loose coupling. He uses this
classification when considering whether authority in an organisation should be
centralised or decentralised in order to reduce the risk of accidents. It can be seen from
this (see Table 3.8) that tightly coupled, complex interactions produce incompatible
demands on the organisation. Tight coupling requires authority to be centralised, whilst
complex interactions require decentralised authority. It is this class of organisations that
Perrow believes to be especially vulnerable to system accidents. In fact, he argues that
accidents are inevitable in tightly coupled, interactively complex systems, and to this
extent can be considered ‘normal’. Perrow’s argument is picked up by Mellor (1994) and
is supported with a number of cases where Mellor argues that the use of computers in
any system will increase both the interactive complexity and the degree of coupling, and
therefore make the occurrence of normal accidents more likely.
Table 3.6: Complex vs. Linear systems (from Perrow, 1984, p. 88)
Complex Systems | Linear Systems
Tight spacing of equipment | Equipment spread out
Proximate production steps | Segregated production steps
Many common-mode connections of components not in production sequence | Common-mode connections limited to power supply and environment
Limited isolation of failed components | Easy isolation of failed components
Personnel specialisation limits awareness of interdependencies | Less personnel specialisation
Limited substitution of supplies and materials | Extensive substitution of supplies and materials
Unfamiliar or unintended feedback loops | Few unfamiliar or unintended feedback loops
Many control parameters with potential interactions | Control parameters few, direct, and segregated
Indirect or inferential information sources | Direct, on-line information sources
Limited understanding of some processes (associated with transformation processes) | Extensive understanding of all processes (typically fabrication or assembly processes)
Summary terms:
Proximity | Spatial segregation
Common-mode connections | Dedicated connections
Interconnected subsystems | Segregated subsystems
Limited substitutions | Easy substitutions
Feedback loops | Few feedback loops
Multiple and interacting controls | Single-purpose, segregated controls
Indirect information | Direct information
Limited understanding | Extensive understanding
Table 3.7: Tight and loose coupling tendencies (from Perrow, 1984, p. 96)
Tight Coupling | Loose Coupling
Delays in processing not possible | Processing delays possible
Invariant sequences | Order of sequences may be changed
Only one method to achieve goal | Alternative methods available
Little slack possible in supplies, equipment, personnel | Slack in resources possible
Buffers and redundancies are designed-in, deliberate | Buffers and redundancies fortuitously available
Substitutions of supplies, equipment, personnel limited and designed-in | Substitutions fortuitously available
Table 3.8: Centralisation/Decentralisation of authority relevant to crises (from Perrow, 1984, p. 332)
Tight coupling, linear interactions: CENTRALISATION for tight coupling; centralisation compatible with linear interactions (expected, visible). E.g. dams, power grids, some continuous processing, rail and marine transport.
Tight coupling, complex interactions: CENTRALISATION to cope with tight coupling (unquestioned obedience, immediate response), but DECENTRALISATION to cope with unplanned interactions of failures (careful, slow search by those closest to subsystems). Demands are incompatible. E.g. nuclear plants, weapons, DNA, chemical plants, aircraft, space missions.
Loose coupling, linear interactions: CENTRALISATION or DECENTRALISATION possible; few complex interactions, so component failure accidents can be handled from above or below. Tastes of elites and tradition determine structure. E.g. most manufacturing, trade schools, single-goal agencies (motor vehicles, post office).
Loose coupling, complex interactions: DECENTRALISATION for complex interactions desirable; DECENTRALISATION for loose coupling also desirable (allows people to devise indigenous substitutions and alternative paths), since system accidents are possible. E.g. mining, R&D firms, multi-goal agencies (welfare, DOE, OMB), universities.
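Perrow's two-dimensional scheme amounts to a small lookup from an organisation's (coupling, interactions) classification to his recommended authority structure. The sketch below paraphrases Table 3.8 in code; the function name and the string labels are illustrative choices, not Perrow's own notation:

```python
# Perrow's (1984) 2x2 classification of organisations, mapped to his view
# of where decision-making authority should sit in a crisis.
# Keys: (coupling, interactions); values: recommended authority structure.
AUTHORITY = {
    ("tight", "linear"): "centralised",
    ("tight", "complex"): "incompatible demands: centralised AND decentralised",
    ("loose", "linear"): "either centralised or decentralised",
    ("loose", "complex"): "decentralised",
}

def authority_for(coupling: str, interactions: str) -> str:
    """Return the recommended authority structure for an organisation."""
    return AUTHORITY[(coupling, interactions)]

# The problematic quadrant: tightly coupled, interactively complex systems,
# where Perrow argues accidents are 'normal'.
print(authority_for("tight", "complex"))
```

The tight/complex cell is the one that carries no single recommendation, which is precisely Perrow's point: such systems face incompatible structural demands.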
3.3.3 Normal Accidents vs. High Reliability Organisations
Sagan (1993) portrays the normal accident approach of Perrow and others as being a
somewhat pessimistic view, and contrasts it with the more optimistic18 work of a
number of researchers whom he groups under the ‘High Reliability Theory’ school of
thought. These researchers have examined systems such as U.S. Navy aircraft carriers,
the American Federal Aviation Administration’s (FAA) air traffic control system,
nuclear power plants, and the human body’s immune system amongst others (see, for
example, La Porte and Consolini, 1991; Marone and Woodhouse, 1986; Roberts, 1989;
Wildavsky, 1988). The systems they have studied all display high levels of reliability, and
the researchers believe that this can be explained by a number of common features
which they have discovered in the organisations concerned. In particular, the studies
have pointed to four critical causal factors that they believe if satisfied will lead directly
to highly reliable operations, even for organisations working with hazardous
technologies. These factors are expanded upon below19:
Organisation leadership prioritises safety. Two reasons are given for why it
is important that the political elite and leaders of the organisation should place a
great emphasis on the importance of safety. First, the need for high levels of
redundancy and constant operational training requires a great financial
18 Although some of the high reliability theorists object to this optimistic label (see Sagan, 1993, p. 47).
19 See chapter 1 of (Sagan, 1993) for a good review of high reliability theory, normal accident theory, and
a comparison of the two.
commitment. Therefore, if the political authorities and organisational leadership
concerned are willing to devote considerable resources to safety, then accidents
will be less likely. The example is given that U.S. Congress has never reduced the
level of budget requested by the FAA for air traffic control. Second, the
leadership must see safety as a priority in order to be able to transmit this to the
rest of the organisation and in turn lead to the development of a strong
organisational culture of safety. This behaviour was observed in the commander
of an aircraft carrier, who was seen to repeatedly “lay down the culture of the
organisation” in briefing new crewmen on safety procedures and instructing them
to never break the ship’s rules “unless safety is at stake”.
High levels of redundancy exist in personnel and technology. In the
words of one of the high reliability theorists, “duplication is a substitute for
perfect parts” (Bendor, 1985), and redundancy is seen as a must in the quest to
build “reliable systems from unreliable parts”. There are two types of redundancy
which can be employed:
- Duplication. Two (or more) different units are dedicated to performing the
same function. The studies of aircraft carrier operations have highlighted the
importance of both technical and personnel duplication. For example, there
may be backup computers, antennae, and so on; and two people are often
assigned to perform the same task, such as checking the arresting gear settings
on the carrier prior to each landing.
- Overlap. More than one unit has the same functional area in common. For
example, different officers may be assigned the same duties, whilst their
overall responsibilities may differ, thus allowing each one to cross-check the
other’s work. Also on the aircraft carriers, each time a plane lands there is a
constant stream of commands, verifications, and so on that is broadcast over
multiple channels such that any incorrect detail stands more chance of being
picked up before it becomes a problem.
Decentralised authority, continuous training, and strong organisational
culture of safety are encouraged. These three factors are seen as relieving
some of the pressure created by individual failures such that redundant systems
are not over stressed:
- Decentralisation. In order to allow for those closest to problems to respond
rapidly and appropriately to any situation as it develops, a high degree of
decentralised authority for decision-making is required. Surprisingly, this is
evident even in operations on U.S. aircraft carriers which would at first
glance appear to be structured in a highly centralised and hierarchical manner.
For example, higher ranking officers have been observed to defer to the
technical judgement of lower ranking crew, and each and every crew member
regardless of rank is obligated to halt carrier operations if they believe that to
continue would lead to an accident.
- Continuous operations and training. Organisations are more likely to relax
vigilance and become complacent when the conditions of operation become
stable and routine, leading in turn to carelessness and error. For example,
accidents in air traffic control are more likely under light traffic conditions
when controllers are likely to be less vigilant. To combat this, the high
reliability theorists propose that organisations should employ a continuous
process of on-the-job training, challenging workloads, and frequent, realistic
simulations of emergencies. A good example of this is the constant flight-training
mode employed by aircraft carriers at sea.
- Culture of reliability. In a stable operating environment, an organisation can rely
upon standard operational rules and procedures for maintaining reliability
because the actions performed and decisions made by operational staff will
fall within a predictable set. This is not usually the case for organisations that
are working with hazardous technologies, where staff must react rapidly to an
unpredictable environment in an appropriate manner. Developing a reliability
or safety culture at all levels of the organisation through recruitment,
socialisation, and training of staff is seen as a way of achieving this degree of
assurance that staff will respond to dangerous situations in the appropriate
manner.
Organisational learning takes place through trial-and-error, simulation,
and imagination. It is of great importance that an organisation is capable of
learning over time, if it is to achieve a highly reliable status. This trial-and-error
process must work such that both safety- and danger-inducing activities are
recognised as such, and that operating procedures are adjusted in order to increase
the operational level of safety. High reliability theorists cite the changes in
working practices in nuclear power plant control rooms after the Three Mile
Island incident, and point to how many of the safety procedures in place on U.S.
aircraft carriers were introduced following crashes or deck fires. Obviously, it is
imperative that organisations learn from such serious incidents, and from lesser
ones as well, but it would be most unwise for an organisation—especially one
working with hazardous technology—to court disaster for the benefit of a
potential learning experience. For this reason two supplementary strategies for
improving organisational learning are proposed:
- Simulations. Rather than waiting for an accident to happen, the organisation
can simulate a possible scenario, and use this both as a training exercise for
the staff, as well as allowing procedures to be altered in the light of this
experience. This is routine practice in both the nuclear and aerospace
industries, where simulations are used to provide operators and pilots with
the experience of trial-and-error learning, without the serious consequences.
- Imagination. In addition to simulating accident scenarios using dedicated
simulators or operational equipment, it is also possible to envisage hazardous
events and their consequences with pen and paper or more sophisticated
tools. This is where risk or hazard analysis fits in, or any such method which is
used to anticipate possible operator or design errors, with safety consultants
analysing the potential for errors in existing procedures, and proposing
solutions to these problems.
The high reliability theorists believe that organisations working with hazardous
technologies can operate safely through good management and organisational design
which apply the above factors. This is in contrast to the normal accident school of
thought which states that accidents are inevitable in such organisations, which are by
definition highly complex and tightly coupled. Sagan (1993) applies the two schools of
thought to the problem of safety with nuclear weapons operations, possibly the most
hazardous of hazardous technologies, in order to test the assumptions that the two
theories are based on and how well their predictions fit with reality. Table 3.9 provides
a summary of the contradictory assumptions, statements, and predictions of the two
schools of thought.
Table 3.9: Competing perspectives on safety with hazardous technologies (from Sagan, 1993, p. 46)
High Reliability Theory | Normal Accidents Theory
Accidents can be prevented through good organisational design and management. | Accidents are inevitable in complex and tightly coupled systems.
Safety is the priority organisational objective. | Safety is one of a number of competing objectives.
Redundancy enhances safety: duplication and overlap can make “a reliable system out of unreliable parts.” | Redundancy often causes accidents: it increases interactive complexity and opaqueness and encourages risk-taking.
Decentralised decision-making is needed to permit prompt and flexible field-level responses to surprises. | Organisational contradiction: decentralisation is needed for complexity, but centralisation is needed for tightly coupled systems.
A “culture of reliability” will enhance safety by encouraging uniform and appropriate responses by field-level operators. | A military model of intense discipline, socialisation, and isolation is incompatible with democratic values.
Continuous operations, training, and simulations can create and maintain high reliability operations. | Organisations cannot train for unimagined, highly dangerous, or politically unpalatable operations.
Trial and error learning from accidents can be effective, and can be supplemented by anticipation and simulations. | Denial of responsibility, faulty reporting, and reconstruction of history cripples learning efforts.
At the end of Sagan’s inquiry into the applicability of the two sets of theory, he answers
the question that he asks himself at the beginning of the book: “Which theoretical
perspective proved to be most helpful in understanding the history of nuclear weapons
safety?” (Sagan, 1993, p. 252). Whilst acknowledging the useful insights provided by the
high reliability perspective, he found much stronger support for the pessimistic views of
Perrow and others with the normal accidents approach. Not only this, but based on the
historical data about nuclear weapons operations in the U.S.A., he extends Perrow’s
pessimism with four further issues that contribute to the causes of accidents in high
technology systems. In brief, these are:
The dark side of discipline. Both high reliability theorists and normal
accidents theorists agree that a strong organisational culture—with high degrees
of socialisation, discipline, and isolation from the rest of society—can lead to
greater safety when working with hazardous technologies. Goffman (1961) refers
to such organisations as “total institutions” and the aircraft carriers of the high
reliability theorists’ case studies are fine examples of this. Perrow, however,
questions whether it is either possible or desirable for civilian organisations to be
run on such a strict military model. Sagan cites several examples that point to
such a culture leading to “excessive loyalty and secrecy, disdain for outside
expertise, and in some cases even cover-ups of safety problems, in order to
protect the reputation of the institution” (Sagan, 1993, p. 254).
Conflicting interests. Whilst organisational leaders may place a great priority
on achieving safe operations, they will also have many other, potentially
competing interests, some of which may take priority.
Constraints on learning. Organisational learning is constrained by political and
social pressures to portray a certain image of the organisation to the outside
world. What shocked Sagan was to find that not only would this lead to false
reporting to the press and so on, but that the invented or altered stories would
come to be believed by their creators. What he found was “...not just a further
piece of evidence showing how difficult it is for large organisations to learn from
success. These cases show something more disturbing: the resourcefulness with
which committed individuals and organisations can turn the experience of failure
into the memory of success.” (Sagan, 1993, pp. 257-258).
The measure of safety. Finally, Sagan warns against believing the story-teller,
especially when the story is about the teller, and a little bit too good to be true.
He criticises the high-reliability theorists for relying upon accounts of safety in
U.S. Navy aircraft carrier operations which have been produced by the Navy
themselves, and urges those studying organisations working with hazardous
technologies to exercise scepticism when dealing with such accounts.
It should be clear from the foregoing discussion that the debates about human error,
latent failure and reliability in an organisational context are highly complex and
unresolved. For this reason, it would be misleading and, indeed, dangerous to simplify
the literature by siding with any single position or organisational theory out of those
reviewed. Rather, the organisational aspects of human factors need to be understood on
a case-by-case basis. The high reliability theorists select cases which strongly suggest the
credibility of their approach to organisational safety, at least for those cases (and settings
very similar to the ones they have studied). On the other hand, the ‘normal accidents’
view gains credibility from its own, different, cases and offers a plausible account of
how accidents can indeed become a ‘normal’ feature of certain settings and are likely to
remain so. Sagan’s work indicates that for the particular setting he studied a normal accidents
view may be the more plausible. This does not rule out the possibility that, for other
kinds of settings, the recommendations of the high reliability theorists might be more
useful.
At the organisational level, safety can be regarded as an issue subject to critical
organisational dilemmas. For example, to heighten the reliability of a process against
failure, one may feel (as recommended by the high reliability theorists) tempted to
introduce redundancy into the process. However, as observed from the normal accidents
perspective, this might heighten organisational complexity. Redundancy, then, is an
organisational dilemma and, in the abstract, one can argue the case for introducing
redundancy both ways. While this may be true in the abstract, in specific cases, it may
become quite clear whether introducing redundancy is an effective defence or merely a
source of ‘secondary vulnerability’. However, this issue can only be resolved in the light
of specific knowledge of the application domain and through continual monitoring of
the effectiveness of in-service changes to processes.
The complexity of issues at the organisational level and how problems of one sort (e.g.
redundancy) often have implications for problems of another sort (e.g. organisational
complexity) means that potential process improvements need to be studied in the light
of the full range of organisational context considerations. At the organisational level,
there are no easy answers.
3.3.4 Summary of Organisational Problems and Failures
This section has examined the relationship between organisations and accidents, and
how the nature of organisations can contribute to or mitigate errors and accidents, and
indeed how accidents may be inevitable in some types of organisation. In contrast to the
two other main areas of research reviewed in this chapter, there is less of a consensus
about the contributing factors to errors at the organisational level, and what might be
done to counter them. Instead, any lessons from this literature need to be applied with
greater reference to the situation at hand. Nevertheless, it is still possible to draw out
some organisational vulnerabilities which can be added to the classification which has
been built up over this chapter.
This concludes this chapter’s review of various human sciences for findings and theories
relevant to developing a process improvement method for RE. The complete
classification of organisational vulnerabilities is presented in Figure 3.9 below. It
is worth reiterating at this stage that there is no sense of theoretical or
methodological ‘purity’ in this classification. It has been built up with a very practical
purpose in mind, and its success stands or falls on how well the resulting method can be
applied in practice, and not on how well the various theories underpinning the findings
made use of here might fit together (or not, as the case may be).
5. Organisational problems and failures
5.1 ‘Single points of failure’ exist where a mistake by an individual can lead
directly to a failure or hazardous condition
5.2 Errors and failures propagate through the process
5.3 Wide fluctuations in workload
5.4 Reporting procedures and hierarchy of decision-making authority
prevent rapid response to problems as they arise
5.5 Working practices allowed to ‘slip’ into unsafe modes
5.6 Failure to comply with existing safety regulations or develop new
safety procedures
5.7 Recurrent failures of a similar nature
5.8 Potential safety hazards are allowed to pass unrecorded
5.9 Organisational rigidities of perception and belief
5.10 Significance of vulnerability is minimised
5.11 There exists tight coupling of processes within a complex production
system
5.12 Process varies from project to project (ad-hoc)
Figure 3.9: Taxonomy of human factors contributing to errors in RE due to organisational activity
3.4 Summary and Conclusions
This chapter has presented a review of human factors literature relevant to the study of
errors made by humans when working as individuals, in groups, and within an
organisational context. This review was necessarily wide-ranging, covering a varied and
disparate collection of sources. The result of this review is a number of generic error
types which are applicable to different types of work activity. The three perspectives are
independent in the sense that the research reviewed here has arisen from radically
different traditions in the human sciences, yet they are also related due to their shared
focus on human activity, and how it can fall short of what is desired.
The introduction to the chapter presented this thesis’ view that the three perspectives
are also interrelated in the sense that RE consists of individuals performing a variety of
activities, some of which are conducted in isolation, but many in group or team settings,
and all are in some way oriented to other stakeholders in the process. Further, this
activity all takes place in some organisational context. This context may change
according to, amongst other things, the relationship between the developers and
procurers, and on the nature of the application domain. Of particular interest to this
thesis is the development of safety-critical systems, in particular the RE process for
safety-critical system development, which must itself be considered to be a safety-critical
process. Consequently, a reasonable degree of effort must be directed to protecting the
process against failure of any nature.
The result of this chapter’s review is a classification of human error types from the three
perspectives. This has been developed through the chapter and presented as a textual
hierarchy at the end of each main section. The taxonomy is presented here in its entirety
for the first time:
1. Slips and lapses
1.1 Recognition failures
- 1.1.1 Misidentification
- 1.1.2 Non-detection
- 1.1.3 False positives
1.2 Attentional failures
- 1.2.1 Inattention slips
- 1.2.1.1 Branching slips
- 1.2.1.2 Overshoots and undershoots
- 1.2.1.3 Omissions following interruptions
- 1.2.1.4 Unawareness that the plan is inappropriate
- 1.2.2 Slips through over-attention
- 1.2.2.1 Mistimed checks
- 1.2.2.2 Disrupting well practised actions
1.3 Memory failures
- 1.3.1 Forgetting intentions
- 1.3.2 Forgetting or misremembering preceding actions
- 1.3.3 Encoding failures
- 1.3.4 Retrieval failures
- 1.3.5 Reconstructive memory errors
1.4 Selection failures
- 1.4.1 Multiple side-steps
- 1.4.2 Misordering
- 1.4.3 Blending actions from two current plans
- 1.4.4 Carry-overs
- 1.4.5 Reversals
2. Mistakes
2.1 Rule-based mistakes
- 2.1.1 Misapplication of good rules
- 2.1.2 Application of bad rules
2.2 Knowledge-based mistakes
- 2.2.1 Availability biases
- 2.2.2 Frequency and similarity biases
- 2.2.3 Confirmation biases
- 2.2.4 Over-confidence
- 2.2.5 Inappropriate exploration of the problem space
- 2.2.6 Attending and forgetting in complex problem spaces
- 2.2.7 Bounded rationality and satisficing
- 2.2.8 Problem simplification through halo effects
- 2.2.9 Control illusions and attribution errors
- 2.2.10 Hindsight biases and the ‘I-knew-it-all-along’ effect
3. Violations
3.1 Routine and optimising violations
3.2 Situational violations and ‘misventions’
3.3 Exceptional violations
4. Group Coordination Failures and Process Losses
4.1 Errors or failures due to social facilitation or inhibition
4.2 Failures and errors due to inappropriate human resources in the group
4.3 Socio-motivational errors and failures
4.4 Group coordination errors and failures
4.5 Status-related errors and failures
4.6 Group planning and management errors and failures
4.7 Group Leadership
- 4.7.1 Errors or failures due to inappropriate leadership style (or balance
between styles)
- 4.7.2 Errors or failures due to inappropriate leadership skills
- 4.7.3 Errors or failures due to the excessive influence of the leader
4.8 Premature consensus and the exclusion of minority opinion
- 4.8.1 Errors or failures due to conformity arising from inappropriate
normative influence
- 4.8.2 Errors or failures due to conformity arising from inappropriate
informational influence
- 4.8.3 Errors or failures due to the exclusion of minority opinion
4.9 Errors or failures due to group polarisation and groupthink
5. Organisational problems and failures
5.1 ‘Single points of failure’ exist where a mistake by an individual can lead directly
to a failure or hazardous condition
5.2 Errors and failures propagate through the process
5.3 Wide fluctuations in workload
5.4 Reporting procedures and hierarchy of decision-making authority prevent
rapid response to problems as they arise
5.5 Working practices allowed to ‘slip’ into unsafe modes
5.6 Failure to comply with existing safety regulations or develop new safety
procedures
5.7 Recurrent failures of a similar nature
5.8 Potential safety hazards are allowed to pass unrecorded
5.9 Organisational rigidities of perception and belief
5.10 Significance of vulnerability is minimised
5.11 There exists tight coupling of processes within a complex production system
5.12 Process varies from project to project (ad-hoc)
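As an aside, the hierarchical numbering used throughout this taxonomy lends itself to a machine-readable encoding: the classification could be held as nested data and flattened into a numbered checklist for use during a process review. The following Python sketch is purely illustrative; it encodes only a small fragment of the taxonomy, and the data structure and function names are hypothetical rather than part of the method developed in this thesis.

```python
# Illustrative sketch (not part of the thesis's method): a fragment of the
# error taxonomy as nested data, flattened into a numbered review checklist.
# Each entry maps a section number to a (label, children) pair.

TAXONOMY = {
    "1": ("Slips and lapses", {
        "1.1": ("Recognition failures", {
            "1.1.1": ("Misidentification", {}),
            "1.1.2": ("Non-detection", {}),
            "1.1.3": ("False positives", {}),
        }),
    }),
    "3": ("Violations", {
        "3.1": ("Routine and optimising violations", {}),
        "3.2": ("Situational violations and 'misventions'", {}),
        "3.3": ("Exceptional violations", {}),
    }),
}

def flatten(tree):
    """Depth-first walk yielding (number, label) pairs in document order."""
    for number, (label, children) in tree.items():
        yield number, label
        yield from flatten(children)

checklist = list(flatten(TAXONOMY))
for number, label in checklist:
    print(number, label)
```

Because the walk is depth-first and Python dictionaries preserve insertion order, the flattened checklist reads in exactly the order the printed taxonomy does.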
Chapter 2 reviewed a number of process improvement techniques which may be
targeted at the RE process, and found that there was little if any focus on the reduction
of human error in the methods available and in use. The information presented in this
chapter is, therefore, exactly what was found to be lacking in the process improvement
approaches reviewed in Chapter 2. What is needed now is a systematic means of
applying this work towards the improvement of RE processes. The classification of
human error types presented above feeds directly into the process improvement method
developed in the remainder of this thesis.
The next chapter progresses this work by considering how well the generic error types
presented here do actually apply to RE activities, and proposes a model for
encapsulating and applying the findings as part of a process improvement effort. In so
doing, the foundations are considered for the method at the core of this thesis, which is
presented in Chapter 6.
4. Applying Human Factors Research
to Requirements Engineering Processes
Chapter 3 examined the sources of errors in human activity. This involved a review of a
large amount of research from a diverse selection of sources in the human, social, and
organisational literature. It is clear from this review that everyday human activity is
vulnerable to a number of errors which may, under certain circumstances, have
potentially harmful consequences. Further, a great deal of what is understood by the
term Requirements Engineering consists of different configurations of everyday human
activity, such as reading, writing, interacting with others, and so on. Consequently, the
RE process is itself vulnerable to many of the types of error highlighted thus far. It is not
enough, however, merely to be convinced of this, and further convincing may well be
necessary in any case. For these findings to be of any use to RE, they must be presented
in a way that allows them to be used to improve the RE process, and for this to be
achieved in a systematic way. This is the aim of the thesis—to provide a systematic
means of applying findings from human factors research to the improvement of RE
processes in particular.
This chapter is concerned with verifying that the vulnerabilities to error uncovered in
the review do indeed apply to RE processes. This is itself a novel activity as no work has
previously been undertaken in this area, and there is consequently a lack of reports on
the RE process which present the work in a way that would allow this verification to be
performed directly. Therefore, this chapter provides further supporting evidence to the
suitability of the findings’ application to RE process improvement through revisiting the
various classes of error and considering how they might apply to RE. Empirical studies
of RE and the system development process are used wherever possible to support this
approach. It also provides the intellectual basis for the development of the method
presented in Chapter 6.
The justification for making use of the human factors research in Chapter 3 for the
purpose of improving RE processes has already been stated in terms of the everyday
nature of the activities engaged in by requirements engineers. This chapter shows that
this is indeed the case, and that examining RE processes in terms of the human error
types reviewed in Chapter 3 can provide insights into how RE can be vulnerable to
human error. The previous chapter’s review, however, said very little about the specifics
of RE which distinguish it from other everyday human activity such as driving a car, for
example. This chapter therefore also examines a number of concerns which are more
specific to RE. In particular, these are related to the working materials employed by
requirements engineers—documents and notations—and typical working practices
employed at project and organisational levels.
This chapter consists of two main sections. Section 4.1 revisits the generic classification
of human errors developed from the literature in Chapter 3, and examines the relevance
to RE for each area of the hierarchical classification. Section 4.2 examines issues more
specific to RE, and suggests additions to the error classification as a result.
4.1 Applying the human factors research to the RE process
The review in Chapter 3 produced a number of error types in terms of individual, social,
and organisational activity. Unfortunately, none of the work concerned itself directly
with the activities engaged in by requirements engineers in the process of developing a
set of system requirements. Whilst this thesis has argued that RE is inherently a social, as
well as technical, endeavour, which is performed within an organisational context, it is
not enough merely to restate the point here. Rather, this section revisits the error
classification developed in the last chapter, and considers each error class in terms of the
activities engaged in by requirements engineers, supporting this with empirical evidence
wherever possible.
4.1.1 Errors due to individual action
The classification scheme used in this thesis to describe errors in individual work is based
on the classification of faults developed primarily by Reason (1990), from the work of
Rasmussen (1983) and others, and is presented once more in Figure 4.1. The next few
sections consider each of the principal forms of fault identified and highlighted in the
diagram in terms of the RE process.
1. Slips and lapses
1.1 Recognition failures
- 1.1.1 Misidentification
- 1.1.2 Non-detection
- 1.1.3 False positives
1.2 Attentional failures
- 1.2.1 Inattention slips
- 1.2.1.1 Branching slips
- 1.2.1.2 Overshoots and undershoots
- 1.2.1.3 Omissions following interruptions
- 1.2.1.4 Unawareness that the plan is inappropriate
- 1.2.2 Slips through over-attention
- 1.2.2.1 Mistimed checks
- 1.2.2.2 Disrupting well practised actions
1.3 Memory failures
- 1.3.1 Forgetting intentions
- 1.3.2 Forgetting or misremembering preceding actions
- 1.3.3 Encoding failures
- 1.3.4 Retrieval failures
- 1.3.5 Reconstructive memory errors
1.4 Selection failures
- 1.4.1 Multiple side-steps
- 1.4.2 Misordering
- 1.4.3 Blending actions from two current plans
- 1.4.4 Carry-overs
- 1.4.5 Reversals
2. Mistakes
2.1 Rule-based mistakes
- 2.1.1 Misapplication of good rules
- 2.1.2 Application of bad rules
2.2 Knowledge-based mistakes
- 2.2.1 Availability biases
- 2.2.2 Frequency and similarity biases
- 2.2.3 Confirmation biases
- 2.2.4 Over-confidence
- 2.2.5 Inappropriate exploration of the problem space
- 2.2.6 Attending and forgetting in complex problem spaces
- 2.2.7 Bounded rationality and satisficing
- 2.2.8 Problem simplification through halo effects
- 2.2.9 Control illusions and attribution errors
- 2.2.10 Hindsight biases and the ‘I-knew-it-all-along’ effect
3. Violations
3.1 Routine and optimising violations
3.2 Situational violations and ‘misventions’
3.3 Exceptional violations
Figure 4.1: Classification of individual errors
4.1.1.1 Slips and Lapses
Slips and lapses are errors where the requirements process is considered to be correct but
the means by which the process is instantiated by engineers results in errors; that is,
they are a product of the way in which engineers carry out the task. Most of the errors
classified by Lutz (1993) are of these sorts. Additionally, slips
and lapses which occur in the use of any routine skill which nevertheless impacts upon
the requirements process are relevant here. To give a relevant but mundane example,
much of the work of preparing requirements documents involves the highly routine skill
of typing (e.g. at a word processor). Under these circumstances, one can imagine the
requirements process to be impacted by slips and lapses which lead to typing errors.
Indeed, 23% of the requirements errors classified by Basili and Weiss (1981) are ‘clerical’
in this sense. Depending on the nature of the notations and language used in a
requirements document, detecting a significant clerical error may not be easy.
Due to the generic nature of the psychological processes concerned here, it seems
reasonable to conjecture that most slips and lapses in the requirements process are not
specific to requirements engineering but will be the result of errors in the conduct of
mundane tasks involving everyday skills (typing, reading, filing and so forth). In any case,
the specific form that these errors would take in the requirements process is unknown as
adequately detailed studies of the work of requirements engineers have not been carried
out in ways which would reveal them. Nevertheless, examples of these errors are likely
to emerge. Following the earlier analysis in section 3.1.1, four principal sources of error
are considered: recognition, attentional, memory and selection failures:
RECOGNITION FAILURES
This source of error is manifested when a requirements engineer misidentifies a
particular requirement or class of requirements. This may occur, for example, when two
objects are very similarly described or notated but when, in fact, they are different in
significant features. Non-detections (or omissions) are very likely to plague quality
review processes, especially when requirements documents are very large. While a
requirements engineer may be highly practised in the use and interpretation of such
standard representations as dataflow diagrams, object interaction diagrams or viewpoint
definition tables, this is not enough to guarantee that recognition failures will not occur.
As a simple illustrative example, consider a requirements engineer involved in
constructing a dataflow diagram for some sub-system. A recognition error would involve
the output being taken from one transformation and directed erroneously as a result of
misrecognising an appropriate destination.
ATTENTIONAL FAILURES
This class of error results directly from either inattention or over attention in the
carrying out of action. Again, such failures are likely to occur in mundane routine
activities like reading and writing.
Consider again a requirements engineer constructing a dataflow diagram. Where the
number of transformations becomes large and there may be several similar ones,
attending to one area of the diagram may result in a failure in another, or cross-checking a
transformation with its data dictionary entry may lead to losing the place in the diagram.
MEMORY FAILURES
This class of error results from a failure to remember the correct sequence of actions.
This can result in omissions, where particular activities are not carried out, or in
repetition errors. For example, in converting a draft requirements document into some
standard document style, the particular place the author is at in the editing task is
misremembered leading to an incorrect layout or format being used for some critical
detail.
SELECTION FAILURES
These errors occur when the wrong course of action is selected. Consider the case where
a requirements engineer is checking the consistency of a requirements document. This by
necessity involves pursuing multiple cross references. Along the way, some other issues
in the document capture the engineer’s attention leading to errors when the original task
is returned to.
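The pursuit of cross references described here is mechanical enough that part of it could be automated, reducing the scope for selection slips of this kind. As a purely illustrative sketch, a hypothetical checker might flag references to requirement identifiers that are never defined; the `REQ-n` identifier convention and the function name below are invented for this example, not drawn from any method discussed in this thesis.

```python
import re

# Hypothetical sketch: find cross references to requirements that are never
# defined. Requirements are assumed (for this example only) to be defined as
# "REQ-n:" at the start of a line and cited in the form "see REQ-n".

def dangling_references(text):
    defined = set(re.findall(r"^(REQ-\d+):", text, flags=re.MULTILINE))
    cited = set(re.findall(r"see (REQ-\d+)", text))
    return sorted(cited - defined)

document = """\
REQ-1: The pump shall stop on low pressure (see REQ-2).
REQ-2: Low pressure is defined as below 0.5 bar.
REQ-3: Alarms shall follow the convention in see REQ-9.
"""

print(dangling_references(document))  # prints: ['REQ-9']
```

A checker of this sort catches only dangling identifiers, of course; the engineer must still judge whether each resolved reference points to the intended requirement.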
4.1.1.2 Mistakes
The assumption underpinning the characterisation of slips and lapses was that errors
were a product of a wrongly instantiated but otherwise correct plan. In contrast,
mistakes result from errors within the plan of action. In terms of requirements
engineering, while slips and lapses are errors resulting from the manner in which a
fundamentally sound requirements process is carried out, mistakes signal errors in the
requirements process itself. Two forms of mistakes are of particular interest: rule-based
mistakes where errors result from the misapplication of previous rules; and knowledge-
based mistakes where errors result from the need to act outwith previous rule-based
procedures.
RULE-BASED MISTAKES
Rule-based mistakes result from either the application of bad rules or the erroneous
application of previously sound rules. This form of mistake may result from an
inappropriate emphasis within the requirements process of standards and guidelines (see
Dorfman and Thayer, 1990 for a review of standards, etc.). An emphasis on previous
solutions provides significant opportunity for errors based on the misapplication of
particular standards or methods. Clearly, rule-based mistakes are a prominent potential
source of error when re-using previous requirements experience. For example, in
describing the principles underpinning a requirements re-use tool, Ryan and Mathews
(1993) give the following account of the requirements process:
A software analyst working in a particular problem domain gradually acquires a set of generic
design plans [more accurately procedures or rules in our terms] which are used to solve recurring
problems. Analysis of a particular problem is guided by the designer attempting to fill in missing
details for what is felt to be the nearest generic plan. (Ryan and Mathews, 1993, p. 113)
This characterisation of the requirements process as being heavily influenced by previous
experiences highlights the central role of memory in the process. This offers significant
possibilities for mistakes as previous requirements solutions are misremembered or
applied inappropriately.
There are many different requirements methods which can be followed (see Chapter 2).
Choosing a particular method is a problematic endeavour and once committed to a
particular method it is notoriously difficult for organisations to change. Methods
incorporate a commitment to equipment and training. As a result organisations are often
tempted to ‘stretch’ methods to situations for which they are not best suited. This
‘stretching’ of methods may result in rule-based mistakes as incorrect requirements
processes are put in place.
An additional source of rule-based mistakes may arise when a requirements engineer
schooled in one method begins to adopt a new one. It is quite possible that the
proceduralised knowledge acquired in the use of the previous method leads to rule-based
errors if it is incautiously applied again in the new context.
KNOWLEDGE-BASED MISTAKES
Mistakes of this form arise from the need to develop a new plan of action. Within
requirements engineering this is manifest in the need to develop requirements for
systems that the engineers have limited or no previous knowledge of. In these cases,
engineers often need to develop a new requirements process to support and co-ordinate
their work. For example, in describing object oriented analysis, Coad and Yourdon
(1990) state:
In an overall approach OOA consists of five major steps
1) Identifying Objects
2) Identifying Structures
3) Defining Subjects
4) Defining Attributes (and Instance Connections)
5) Defining Services (and Message Connections) (Coad and Yourdon, 1990)
The first two of these steps are centrally concerned with the requirements engineer’s
ability to apply previous experience in order to identify both objects and structures.
Especially in unfamiliar domains, this will involve the requirements engineer in
considerable knowledge-based exploration of possible solutions. One would anticipate
the biases which were documented in Chapter 3 to appear under these circumstances.
A source of error in these cases can be an inappropriate exploration of the problem at
hand due to the ready availability of a previous solution. For example, in the
development of the Computer Aided Despatch (CAD) system for the London
Ambulance Service (Page, Williams and Boyd, 1993), the health authority’s procurement
process dictated that a supplier who offered a previous solution be chosen. One result of
this procurement decision was that limited exploration of the problem domain itself was
undertaken and so inappropriate requirements emerged and the system failed. The LAS
example is interesting in that many aspects of the requirements process were
circumvented because of commercial pressures and policy constraints on how the health
authority could tender for business.
4.1.1.3 Violations
In contrast with slips, lapses, and mistakes, where plans are deviated from in error,
violations are concerned with situations where actors deliberately deviate from a given
plan. It is worth stressing again that it is unclear whether violations in the software
process are actually a vice or a virtue. Software process models are by necessity abstract
and often incomplete. Engineers working within the process often undertake activities
that deviate from the process. However, it should be noted at this point that many of
these violations are required to repair gaps in the process and often act as a source of
quality as well as faults. As Reason (1990) acknowledges, violations which repair a
faulty, unsafe plan are violations in name only. However, it is possible to go further.
‘Violations’ are often required to execute an overly prescriptive plan. In these cases, the
‘violators’ are often attempting to be true to the spirit of the plan rather than to its
restrictive detail.
Skill-based violations occur when routine practice is violated by the skilled, routine
performance of workers. These often occur when the needs and professional activities of
the engineers do not fit easily with overly restrictive process models. An example of this
form of violation is reported in a study of a process which formed part of the
development of an aircraft (Rodden, King, Hughes and Sommerville, 1994). Consider
the process of software testing in such a development project. Testing is a key factor in
ensuring the eventual safety of the developed system and yet Rodden et al’s (1994) study
presents an example where the process was subject to skill-based violations.
Officially, the testing process required code to be formally released for testing, and for
System Problem Reports (SPRs) to be raised against this code. These formal reports
were then logged and developers were actioned to carry out the modifications necessary
to correct the reported problem. When it came to formal release, the number of SPRs
was used as an indication of ‘how good the work is’. That is, the workers on different
sub-systems had an interest in minimising the number of SPRs that were generated. To
solve this problem, the code was tested informally by means of “engineers’ tests”. Even
though these tests were not a proper part of the defined software process and could not
be taken as any formal proof of the quality of the code, their results were generally
accepted amongst the engineers because they acknowledged each other to be
trustworthy and competent professional engineers. While this—in strictness—might be
deemed a skill-based violation, it is questionable whether it should be condemned as
such. Indeed, managerial attempts which were made to alter this practice created
problems in their own right.
In terms of the management of the process studied, it was quite apparent that one of the
fundamental problems was that the SPRs were “out of control”. The response in this
case was understandably to create pressures for effectively and speedily dealing with
SPRs to improve ‘quality control’ by the adoption of a set of managed procedures.
Attempts of this sort to speed up processes have immediate consequences for
accountability, however. Thus, where informal processes are habitually used as a first
‘quality’ test, the security of engineers in their own practices is undermined by these
external demands, precisely because their decisions are formed out of a tension between
the need to have addressed actions promptly and their habitual use of informal methods
of testing. In other words, the professional character of their work is compromised by
the pressing need to ‘sort out’ a whole gamut of problems in a timely way. It is not
unusual to encounter engineers who are taken aback by vigorous injunctions to complete
work in this context:
“Something’s been actioned for you to do, and you haven’t been able to do it, and you’re getting
told to ‘get your arse into gear.’ ” (Rodden et al., 1994, p. 62)
This is not to say that pressures for action and control are not necessary for effective
product development, for there is every indication that they often are, but the tension
created represents a way in which uncertainties are generated. The implication is that a
strong impedance to the acceptance of a process is the limited consideration of the
substantial professional practices and skills used by software engineers. As a result,
violations from the process are almost inevitable as engineers seek to undertake their
work in line with their keen sense of professional skill.
Rule-based violations involve breaking restrictive procedures in the light of particular
situational exigencies. In the case of requirements, many of the violations of this form
result directly from management pressure. For example, in their study of software
design, Bansler and Bødker (1993) highlight the ways in which structured design is
abandoned when the commercial demand becomes sufficiently pressing. Similar accounts
have been given in the manufacturing industry. Anderson, Button and Sharrock (1993)
describe the work of a development team working on the hardware and software of a
project called ‘Centaur’ which was concerned to produce an ‘add on’ high capacity
feeder for an existing photocopier. Centaur was established to meet the perceived
demand for such a product from a niche market (US educational libraries). The Centaur
team were put under intense time pressure to deliver the enhanced photocopier in time
to meet the seasonal purchasing times of the libraries. Additionally, there were problems
in adequately staffing the project and ensuring an appropriate ‘skill mix’. Finally, it was
imperative to keep unit manufacturing costs down and in line with what was
appropriate to ask from an educational market.
Amongst other strategies, the Centaur team managed these constraints by:
improvising on the formal product delivery and development procedures they
would normally observe by:
- informalising the (normally formal) project review,
- exploiting the company’s ‘black economy’,
- creatively accounting the unit manufacturing cost;
working around normal work practices;
revising the design requirements from within the design process by:
- amending customer requirements,
- assuming solutions will become available even if there was no sign of them,
- (when under exceptional time pressure) waiving a requirement which had
previously been taken as an immutable constraint.
Clearly, these are radical measures but, in essence if not in detail, they may be typical of
real-world industrial design and development when working under time and cost
constraints. Not all of these strategies are likely to be shared with safety-critical design
where time and cost constraints are differently experienced (see Tierney, 1993),
although Perrow (1984) repeatedly laid some of the blame for a variety of safety-related
incidents and accidents on ‘production pressure’. Nevertheless, a study of this sort points
to the importance of understanding how design and development take place in actual
practice rather than in theory and how requirements are interpreted in the light of real
exigencies and constraints.
As already argued, violations may not be in themselves problematic. However, they may
become so when particular management styles are adopted. The software process studied
in the aerospace industry (Rodden et al., 1994) can be returned to in order to bring this
point out. The process studied was obviously concerned with the development of a
software product where errors are potentially life threatening. Consequently, software
development focused on the need to develop safe software and much of the focus on
strict process management is intended to reinforce the development of safe software
systems. The approach adopted is prescriptive and detailed in nature.
However, overly prescriptive processes and management styles also serve to
problematise the ‘normal’ “ad-hocing” that engineers do when problems arise. All
organisational life involves ‘cutting corners’, informal ‘bending of rules’ and so forth. In
most instances, project managers are aware that such work goes on, if not in detail, and
allow it precisely because it is a means by which the work can be done and might not be
done otherwise. However, with routine day to day work, what is allowable can change
because of the adopted process, and particular informal strategies can become very
different when procedures are strictly enforced.
Problems can be created when engineers do the sorts of thing they have always done,
but where strictly following procedures means that such practices have consequences
outside their own domain. Thus, in the case of Rodden et al’s (1994) study, when code
changes were made within a sub-system, the need for strict overall monitoring across the
process meant that the formal notification of such changes was strongly encouraged.
Thus, the perceived need for an overview across the development process compromises
a practice that is endemic in development. It is not unreasonable to view such practices
as ‘trustworthy’ in that local and unofficial use of some form of informal “engineer’s
testing” is a perfectly normal practice, and indeed necessary. However, overly
prescriptive adoption of a process can create a climate of uncertainty for engineers
wherein the limits of their rule bending become more difficult to identify and
consequently acceptance of the process itself is threatened. Thus, the issue of violation
became problematic not just because violations took place but because engineers were
unsure of the extent to which they were allowed to take place.
4.1.2 Group Process Losses and Related Phenomena
Many aspects of the RE process involve group or team activities: meeting with clients,
group brainstorming, project team meetings and so forth. One can imagine, then, that
group process phenomena are likely to be a prominent source of faults in requirements
analysis.
Curtis, Krasner and Iscoe (1988) argue that, in the 17 software development teams they
studied, early requirements phases were often dominated by a small coalition of
individuals, sometimes just one person, who took control of the project’s direction.
The dominant coalition usually consisted of people who knew about the application
domain or who had some other relevant experience. Under these circumstances, then, it
is important to monitor for errors due to the emergence of such coalitions or group
leaders. Curtis et al. (1988) report that these initial coalitions are rarely opposed when all
project team members are drawn from the same corporation. That is, these coalitions or
leaders frequently become a source of majority influence. On the other hand, when the
project team consists of representatives from several different organisations, competing
coalitions are more likely to emerge with a variety of differing opinions. Curtis et al.
argue that the existence of a variety of coalitions may well promote deeper scrutiny of
the design issues than would result from fixation on the views of a single dominant coalition. Of
course, much of this depends on the specific dynamics of the teams in question but
Curtis et al.’s work nevertheless points to the importance of group phenomena in systems
development.
Some approaches to RE have specifically designed into them special purpose group
sessions. For example, Macaulay (1993) describes User Skills and Task Match (USTM), a
method which emphasises the importance of identifying the stakeholders involved in the
requirements process and conducting cooperative sessions in which all are involved. She
reports on the development of requirements for a new control room within an
electricity transmission company and identifies four principal stakeholders:
1) Strategic thinkers who have a long term financial interest in the success of the proposed
system. […]
2) Computer Specialists who would ultimately be responsible for the design and development
of the proposed system. […]
3) Control Room Engineers who would ultimately be users of the proposed system. […]
4) Managers of the Control Rooms who would ultimately be responsible for the introduction of
any proposed system. […] (Macaulay, 1993, p. 179)
Macaulay warns explicitly of the group process problems that can result from an over-
or under-emphasis of input from different stakeholders. To combat this, she stresses the
importance of the role of a facilitator in the USTM method who, amongst other things,
ensures that all stakeholders contribute. Viller (1993) expands upon this point and
considers the need to support group facilitation in CSCW systems in general, and those
designed to support RE in particular.
In the design of safety-critical systems, specifically designed focus group or group
brainstorming sessions are often conducted to assess the likelihood that various hazards
might occur. One well known method of this sort is Hazops (Kletz, 1992) which was
first developed to assess the hazards and operability risks relevant to the design of
chemical plant, though recently Hazops has been extended to applications in general
software development.
Brainstorming techniques date back to Osborn (1957) and typically involve an emphasis
on producing as many ideas as possible on the topic relevant to the brainstorming
session. Participants are usually instructed not to be critical of ideas but to state any that
come to mind, no matter how wild they might think them to be. It is frequently
emphasised that the quantity of ideas is what matters, and participants are
encouraged to build on the ideas of others. In spite of these aims, almost all studies of
brainstorming have indicated that brainstorming groups show productivity losses of the
sorts documented in Chapter 3 (see, for example, Diehl and Stroebe, 1987).
Additionally, when the quality of ideas has been assessed, brainstorming groups yield
quality comparable to (but no better than) the summed contributions of the same number
of individuals working in isolation.
As one might expect, many explanations have been offered for this. Indeed, as this is yet
another example of the general phenomenon of group productivity loss, one can imagine
that all of the reasons mentioned in Chapter 3 (free riders and so forth) can be argued
for brainstorming too. Recently, however, a number of studies have turned to the issue
of how such losses can be ameliorated so that the advantages of brainstorming might re-
emerge.
Harkins and Jackson (1985) have shown that brainstorming performance losses are
mitigated by introducing an expectancy that individual contributions will be evaluated.
Similarly, Paulus and Dzindolet (1993) have shown that brainstorming groups can
perform to a comparable level to the same number of individuals working in isolation if
the groups are informed of the performance levels that isolated individuals attain.
Diehl and Stroebe (1991) found that brainstorming group losses were ameliorated by
simply giving the groups more time than was given to isolated individuals. Groups of
four outperformed the levels of four individuals summed together if the groups were
given four times as much time. Diehl and Stroebe also found that supplying participants
with personal notepads can facilitate group performance, if the group as a whole are
expecting their performance to be evaluated. In contrast, providing individuals with
personal notepads inhibits group performance if each individual thinks that their own
individual contribution is going to be evaluated.
However, Diehl and Stroebe (1991) point out that allowing a group of four individuals
four times as much time as isolated individuals working alone can hardly be construed as
a procedure which will enhance overall productivity! Equally, the effects of introducing
notepads (and other procedural manipulations) tend to (at best) make groups equal to
the same number of individuals working separately. Accordingly, Diehl and Stroebe
conclude with the recommendation that groups should not be used for idea generation and that
techniques which systematically combine the ideas of individuals working alone are
likely to be more effective. However, it must be emphasised that the majority of
research sceptical of the performance of brainstorming groups consists of laboratory
based experiments using abstract, non-work-like tasks. It remains to be seen whether
one can convincingly document either brainstorming group productivity losses or gains
in more realistic scenarios. Furthermore, the productivity of brainstorming groups may
be only one criterion for their usefulness, though admittedly an important one. The
opportunity for participation that they offer may sometimes provide a means for
underwriting an organisation’s concern to involve its clients in the requirements analysis
process.
It is important to emphasise once more, however, that in the production of this thesis,
no adequately detailed study of group interaction in the requirements analysis process
has been found that would enable one to identify the specific factors in these
settings which contribute to various group process errors or losses.
4.1.3 Organisational Failures in Requirements Engineering
Curtis et al. (1988) point out that the organisational cost of acquiring expertise in a
particular application domain can be considerable. In their study of 17 development
teams, estimates varied from six months to a year for how long it takes to get a new
“project assignee” adequately competent in an application area. One can imagine that
this cost may, under certain circumstances, lead to inadequately skilled or trained people
being assigned to projects and being expected to come ‘up to speed on the job’ (as in the
Centaur case discussed above).
Additionally, Curtis et al. found that technically skilled individuals in organisations often
tend to be promoted to management posts, typically weakening the technical knowledge
base of organisations. The technical strengthening of management that such promotions
suggest may not actually turn out to be a real gain if managers cannot actively participate
in the work of project teams and are only able to monitor them. Equally, the
technical expertise of a manager may turn out to be inappropriate if it is based on
experience which has been surpassed by recent developments. Ironically, then, the more
experienced managers may become at management, the less skilled they may become in
the technical domain of the projects they are expected to manage. It is no wonder,
perhaps, that Curtis et al. found that many managers reported being unclear about their
role and out of contact with the work of the software engineers who comprise their
project teams.
In addition to organisational problems due to the ‘thinly spread’ nature of knowledge of
the application domain, Curtis et al.’s study points to problems arising from the
conflicting and fluctuating status that requirements can have in real development
settings. In multi-organisation, collaborative projects, the participating organisations can
differ among themselves over which requirements are more important and which less so. Projects
involving multiple customers can also lead to similar problems. Some of the projects
studied by Curtis et al. allowed customers direct access to members of the design team.
This sometimes led to requirements being changed without the completion of the
official change review procedure. One can imagine that such phenomena may make it
very difficult to develop a common understanding of just what the requirements are.
Within an organisation, there are likely to be a number of social groups with different
perspectives on the development of products and the requirements for them. Naturally,
one cannot guarantee that these different perspectives will be complementary. They may
often conflict. Indeed, maintaining a distinctive perspective on what the requirements
for a product are may be important in demonstrating the professional worth of a
particular group.
In the development of commercial products, internal company groups (such as
marketing) can often act as or on behalf of customers and introduce a further source of
conflicting requirements. A major overhead for design teams is often this process of
resolving tensions between different organisational groups which might not only add to
the work they have to do but also lead to uneasy compromises in requirements
specification. Sommerville (1996) summarises some of these points:
Large systems usually have a diverse user community. Different users have different requirements
and priorities. These may be conflicting or contradictory. The final system requirements are
inevitably a compromise between them.
The people who pay for a system and the users of a system are rarely the same people. System
customers impose requirements because of organizational and budgetary constraints. These may
conflict with end-user requirements. (Sommerville, 1996, p. 66)
Organisational structures, in addition to being a source of conflict over requirements,
can often militate against problems in requirements analysis being effectively solved.
Again, the study by Curtis et al. (1988) is a useful source of examples. The computer
professionals they interviewed often reported that established formal procedures
were a poor means of discovering emerging requirements (and other) problems. It was
vital for most projects to have a rich network of informal, personal contacts alongside
the formal project structures. Large project teams often incorporated many different
personnel from different organisational groupings with different management lines and
reporting relationships. Thus, the organisational structures may not facilitate passing on
of relevant information. Even when a project resided within a single organisational
entity, often different phases of a project would be undertaken by different teams. In
particular, the requirements team may have at best only a partial overlap of personnel
with the development, delivery and maintenance teams. Consulting documents was
often the only means for different teams to communicate with each other, especially if
one team had been disbanded and its personnel moved on to other projects in the
interim. However, time pressure frequently led to inadequate documentation.
Severe problems and considerable potential for error could be found when crucial
documentation was inadequate and the original relevant personnel no longer available.
In addition to communication and coordination problems within organisations, Curtis et
al. document problems deriving from inter-organisational relations. There were often
organisational boundaries which inhibited communication between software engineers
(including requirements engineers) and client organisations. Contact between designers
and users was often indirect when it needed to be direct. When there was direct contact,
it was often not of a form which facilitated the work of the development company. For
example, in none of the 17 projects studied by Curtis et al. was there a single point of
customer contact. This again led to conflicting requirements emerging and contradictory
advice being obtained from different individuals within customer organisations.
Curtis et al.’s work, then, provides clear evidence of several of the ‘organisational
pathogens’ which Reason and others have claimed can contribute to latent failures in
design. The projects they studied when taken together provided evidence of
incompatible organisational goals, inadequate communications, inadequate control and
monitoring, poor planning and scheduling, and poor training in the application domain.
4.1.4 Summary
This section has returned to the classification of errors developed through the review in
Chapter 3, this time applying the error types to consideration of the actions performed in
the process of requirements engineering. This has reaffirmed the conviction that the RE
process can indeed be treated as, in essence, a combination of quite generic forms of
human action.
The error classification is, however, open to the criticism that it is too generic. Whilst
the error types can be shown to be applicable to RE processes, the same could be said of
any human endeavour. So, whilst this section has succeeded in confirming that the
generic error types in the classification of Chapter 3 do apply to the everyday work of
requirements engineers, further work is required to make the classification more specific
to RE.
The following section examines more closely what is specific to the activities performed
by requirements engineers and the materials they work with.
4.2 Process improvement possibilities in RE
Having revisited the main classes of errors highlighted in Chapter 3, and considered their
impact on typical RE processes, the next step in designing a process improvement
method is to propose improvements based upon these error classes. A large class of
improvements will obviously take the form of avoiding the conditions under which the
RE process may be vulnerable to the errors covered. It is also likely that other
improvements will be tied to human issues more specific to the work of requirements
engineers. Reason (1990) suggests that there
is a mapping between the type of error and where efforts should be focused to put
preventative measures in place. According to Reason (1990), slips, lapses and mistakes
can be ameliorated by improving the design of how information is presented to
individuals, while violations and latent failures suggest the necessity of organisational
measures. This separation between individual/informational redesign on the one hand
and organisational redesign on the other may be a little too ‘clean’, however. It is quite
possible that organisational phenomena relate to how information is presented and vice
versa. Accordingly, suggestions for process improvement in this thesis which do not take
the form of avoiding a particular human factors vulnerability are structured differently:
in terms of the principal activities which engage requirements engineers in their work,
or which are important tools for that work.
The remainder of this section addresses four different aspects of what could be
considered to be the ‘everyday’ work of requirements engineers. The first two are
concerned with some of the activities and artefacts more specific to RE, whilst the final
two revisit two of the areas of research reviewed in Chapter 3. The areas covered are:
the design of requirements documents;
the use of notations, diagrams and other forms of representation in the
requirements process;
the function and conduct of meetings; and
activity at the organisational level.
For each area of vulnerability, various possible sources of error are discussed, along with
possible defences to them. They are summarised in tables at the end of each section. The
vulnerabilities and defences proposed here will supplement those which resulted from
the review work in Chapter 3. Together, they form the basis of the process improvement
technique being developed in this thesis.
4.2.1 Document design
As already suggested, much of the work of requirements engineers involves the
production and processing of documents. The first recommendation, therefore, is that
the design of all requirements-related documents, and the means by which they are
produced, should be audited for safety issues. These documents are from the start
designed in order to be read by specific readers (customers, other engineers, managers,
etc.). That documents are designed with their recipients in mind militates against any
clean separation between documents as devices for presenting information to individuals
and documents as organisational records. Accordingly, examining the rationales for
requirements document design is doubly important.
Requirements documents are often produced to standard formats. This will inevitably
create a tension between the observance of the standard and staying true to the specific
application domain at hand. For example, omission errors may well occur because some
details have no obvious place in the prescribed format. This is not to suggest
that document standards should be abandoned, just that their design and use should be
carefully scrutinised. For example, Davis (1993) notes that some NASA document
standards require separate requirements specification and delivery planning sections.
This, he suggests, means that how the phased delivery of a system might relate to the
satisfaction of requirements is not always made clear. This is a particular example of a
general point. The distance between sections, and the structure of the document as a
whole, should reflect not only the conceptual separation of its contents but also the
demands of readability. For example, Davis suggests that it
would be better to explicitly include marginal notations next to each requirement giving
brief delivery information rather than separate these in the document. Careful document
design may well minimise the occurrence of skill-based slips and lapses in the work of
those who read and write them.
To give some further examples of how document standards might relate to errors, the
US Department of Defense has been developing standards since 1978 which allow
authors to annotate the role of different data items with respect to the software (MIL-
STD-961D, 1995). For example, double slashes before and after are used to enclose the
names of outputs, single slashes around inputs. This convention makes it very easy to
make typing and word processing errors, especially through the injudicious use of global
search and replace commands!
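The fragility of such conventions can be illustrated with a small sketch. The following Python function is purely hypothetical tool support, not part of any standard: it checks whether the slash delimiters on a line still pair up after an edit, catching the kind of damage an injudicious global search-and-replace can cause.

```python
def slash_delimiters_balanced(line):
    """Crude check for MIL-STD-961D-style annotations: //NAME// encloses
    output items, /NAME/ encloses input items, so on any well-formed line
    both kinds of marker occur in pairs.

    This is a simplification for illustration; real prose containing stray
    slashes (e.g. 'and/or') would need a smarter parser.
    """
    doubles = line.count("//")                    # '//' markers must pair up
    singles = line.replace("//", "").count("/")   # remaining '/' markers too
    return doubles % 2 == 0 and singles % 2 == 0
```

A well-formed line such as "The //ALTITUDE// item is computed from /AIR_DATA/." passes the check, whereas a bulk replacement that rewrites AIR_DATA as AIR/DATA leaves an odd number of single slashes and is flagged. A check of this kind could be run routinely after any mass edit of a specification.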
Documents may have a number of different functions and it is sometimes important that
one should not be confused for another. For example, in discussing the role of
requirements in a large design project, Curtis et al. (1988) highlight the way in which the
requirements document became problematic as it became a proposal for particular
actions rather than a statement of requirements.
They knew [the requirements were] inaccurate. They were trying to competitively win… so the
requirements document looked a lot like a proposal. It was not adequate in any fashion to design
from. (Curtis et al., 1988, p. 1277)
Requirements documents can often have a dual status as an agreement between
customers and vendors on the one hand while also being a specification of the work to
be undertaken on the other. In addition to this tension, requirements documents may
need to serve as a basis for a contract between a procurer and a software supplier while
also describing a system so that it can be understood by potential users. As Sommerville
(1996) argues, generally, users prefer a higher level of abstract description than that
which can be provided in a specification which is sufficiently detailed to act as a
contract. Furthermore, it is probably impossible to construct a detailed specification
without some design activity, further blurring the distinction between requirements
and design specification. To alleviate this, Sommerville (1996) proposes that a number
of distinct documents should be authored:
• A requirements definition is a statement, in a natural language plus diagrams, of what services
the system is expected to provide and the constraints under which it must operate. It is
generated using customer-supplied information.
• A requirements specification is a structured document which sets out the system services in
detail. This document, which is sometimes called a functional specification, should be
precise. It may serve as a contract between the system buyer and software developer.
• A software specification is an abstract description of the software which is a basis for design
and implementation. This specification may add further detail to the requirements
specification. (Sommerville, 1996, pp. 64-65)
While the motive for suggesting that different functions should be embodied in separate
documents is well founded, this solution could create problems of its own concerning
coordination across the different documents. Seeing the relations between requirements
definition, requirements specification and software specification may be harder if they
are embodied in separate documents with, perhaps, separate authors and readers.
To summarise, then, the following general responses to vulnerabilities associated with
document design and production processes are suggested:
Carefully review document standards and whether they promote the readability
and error-free production of documents
Carefully scrutinise any special textual conventions used
Consider the introduction of technologies (e.g. hypertext-based systems) which
can promote dynamic cross referencing between documents.
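As a sketch of the kind of tool support envisaged here, the following Python fragment checks that every requirement identifier cited across a set of documents is actually defined in one of them. It is purely illustrative: the REQ-nnn naming convention, the plain-text file layout, and the rule that a definition begins a line are all assumptions, not drawn from any particular method or standard.

```python
import re
from pathlib import Path

# Assumed convention: requirement identifiers look like REQ-001, REQ-042, ...
REQ_ID = re.compile(r"REQ-\d+")

def dangling_references(doc_dir):
    """Return requirement ids that are cited somewhere in doc_dir's
    text files but never defined. Assumed convention: a definition is
    a line beginning with the id; any other occurrence is a citation."""
    defined, cited = set(), set()
    for path in Path(doc_dir).glob("*.txt"):
        for line in path.read_text().splitlines():
            for req in REQ_ID.findall(line):
                if line.lstrip().startswith(req):
                    defined.add(req)
                else:
                    cited.add(req)
    return sorted(cited - defined)
```

Even such a crude check addresses the coordination problem directly: as requirements definitions, specifications and delivery plans diverge into separate documents with separate authors, broken cross-references can be detected mechanically rather than left to a reader's vigilance.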
A number of more specific vulnerabilities relating to working with documents can be
drawn out of the discussion in this section. For each one, an appropriate defence must
be proposed. These vulnerabilities and defences are presented in Table 4.1.
4.2.2 Notations and representations
A characteristic feature of many of the documents involved in requirements analysis is
that they use notations and various forms of visual representation (function tables,
dataflow diagrams etc.). Indeed, the use of notations and diagrams, etc. extends beyond
documents. Manipulating formulae and drawing diagrams are standard means for
supporting everyday technical work. However, as is clear from the literature on the
human factors aspects of programming languages (Hoc, Green, Samurçay and Gilmore,
1990, esp. Part II), not all notational schemes are equivalent in their learnability or ease
of use. Furthermore, notations are usually designed with a specific purpose in mind and,
while they may be adequate to those purposes, it may be very hard to use them for other
purposes. For example, data flow diagrams may not help an object oriented software
designer capture a class hierarchy. This is for the simple reason that the first mode of
representation is specifically designed to model the flow of data while the second is not.
Inappropriate use of forms of notation may well lead to error. For
example, a notation employed to represent something it was not designed to represent
may well prove semantically ambiguous, not to mention hard to use.
However, users of notations often tend to be advocates of them and well motivated to
push the notations beyond their limits. It is important, then, that the limits of notations
are documented and well understood.
Table 4.1: Vulnerabilities and defences regarding documents in RE

Vulnerability: Document standards do not support the needs of certain projects, leading
to e.g. omission of relevant detail.
Defence: The rationales for document standards should be scrutinised, especially with
respect to how the standards might allow the recognition of project-specific
idiosyncrasies.

Vulnerability: Document standards impose a strict ordering of sections, regardless of
their relationship to each other for a particular system or project.
Defence: The structure of documents should be assessed both for how it supports
readability and how it reflects the document’s conceptual organisation.

Vulnerability: Textual notations are used in documents to provide additional
information regarding the status or category of entities contained therein.
Defence: Textual conventions should be examined with respect to how vulnerable they
are to error.

Vulnerability: Requirements specifications are used for several purposes, e.g. serving
both as a contract between procurer and supplier and as a description of the system to
be developed.
Defences: Separate the requirements specification into different documents for specific
audiences and purposes. Be sensitive to how documents serve different functions and
are intended for different readers. Tool support could facilitate multiple views onto the
same document, thereby maintaining consistency between views.

Vulnerability: Documentation pertaining to a particular system is spread across a
number of documents, which must be read together.
Defences: As the number of documents increases, it is important to introduce effective
strategies for coordination across documents. Consider use of hypertext documents to
facilitate following of cross-references and so on.
The appropriate response to these observations is surely to consider the use of a number
of notations together rather than hope for The Notation which would provide perfectly
adequate representations in all cases. Indeed, using a repertoire of notations and other
modes of representation is what is often done in practice. However, this itself can lead
to further problems. For one thing, becoming adept in the usage of a notation often
requires some investment of time and resources, quite possibly in the form of explicit
training. For another, it may not always be clear how one notation translates into
another. It is important then that a project team embodies amongst its members
mutually overlapping expertise in different modes of representation so that the potential
errors of one party can be checked by another and so that questions of how one notation
might translate into another can be addressed in a generally informed way.
Diagrams and other means of visualising systems and their requirements should be
scrutinised for features which might increase the chance of error. A number of brief
points can be made about diagrams and the form of thinking they can encourage.
Diagrams which contain bounded objects (circles, squares and so forth) may
encourage the reader to think that the entities represented are more self-
contained and distinct than they are.
Things may not be represented diagrammatically if they cannot be visualised or
bounded appropriately. Thus, awkward problems may remain literally invisible.
Diagrams usually constrain the set of relations which can be drawn, lest the
diagram become a tangle of relations of different sorts. However, a tangle of
interdependencies may be more realistic than a simplified tree.
Diagrams can encourage a simplified view of the relations of concepts to their
context. Objects are separated not only from each other, they ‘stand out’ from
their background.
Diagrams and other means of visualisation can substitute for exposition. Under
some circumstances, ‘a picture saves a thousand words’ may turn out to be a false
economy.
Diagrams do not always make it possible to represent alternatives without
redrawing the whole diagram. Ambiguity and uncertainty can again become
literally invisible.
Proximity effects: The eye may take two objects to be related simply because they
are close together on the page.
Centre/periphery, top/bottom and left/right biases: The eye may be drawn
inappropriately to objects at the centre (or top or on the left) because of pre-
existing perceptual (or reading-related) biases. This may give an exaggerated
importance to objects seen at certain locations.
Diagrams can be highly transportable. They can be cut and pasted with ease from
one document to another. Yet redrawing a diagram can be somewhat effortful.
This may encourage the reuse of diagrams in new and potentially inappropriate
contexts.
These are just some possibilities. It should be emphasised that this is not to suggest that
forms of visual representation (or formal notation for that matter) should be abandoned
in favour of only using natural language descriptions. As Sommerville (1996) emphasises,
the use of natural language in the requirements process can have its own problems.
Rather, a sensitivity to the features of notations and diagrams which may lead to error is
urged, so that an awareness of safety-related issues can extend to the use of these
devices too.
Following the above discussion, the following general points can be made with respect
to vulnerabilities due to the use of notations and representations in RE:
Carefully review the rationale behind notation conventions and representational
formalisms
Especially consider whether the notations do in fact offer advantages over
ordinary language descriptions
Ensure that notations etc. are commonly understood across development teams.
Also, a number of more specific vulnerabilities can be extracted, and once again each
vulnerability is addressed by one or more proposed defences. These are presented in
Table 4.2 below:
Table 4.2: Vulnerabilities and defences regarding notations and representations in RE

Vulnerability: Diagrams often contain only bounded objects (circles, squares, etc.).
Defence: Allow for less distinct or self-contained entities to be expressed in a less
formal notation.

Vulnerability: The set of relations that can be drawn in a diagram or otherwise
formalised is restricted in order to reduce complexity (e.g. only simple trees).
Defence: Allow for less constrained expression of relationships between entities and
consider ‘open-ended’, extensible formalisms.

Vulnerability: Diagrams and other formalisms are used in documents as a substitute for
exposition.
Defence: Include both exposition and diagrams/formalisms in documents.

Vulnerability: Proximity effects (i.e. two objects in a diagram may be taken to be related
to each other merely because they are close to each other on the page).
Defences: Make explicit any conventions used to indicate relationships between entities
in diagrams. Construct alternative views onto diagrams to prevent particular views being
treated as the ‘only’ views.

Vulnerability: Centre/periphery, top/bottom, and left/right biases (i.e. pre-existing
perceptual or reading-related biases may lead to attaching inappropriate importance to
entities because of their position in a diagram).
Defences: Make explicit any conventions used to indicate relative importance of entities
in diagrams. Construct alternative views onto diagrams, as above.

Vulnerability: Diagrams and other formalisms are copied and pasted from one
document to another, possibly in inappropriate contexts.
Defence: Include the source of diagrams and formalisms if pasted from elsewhere, in
order to assist checking that they are being used appropriately.
4.2.3 Meeting procedures
The review in Chapter 3 emphasised the extensive social psychological literature on
group process phenomena. While much of this literature is based on laboratory
experiments and hence is often criticised for lacking ‘real world’ validity, nevertheless, it
does point to the importance of considering group dynamics in any social activity. In
requirements engineering, this should lead to attention to the specific features of
meetings and other group activities. It has already been argued that group brainstorming,
though widespread, may well not yield the benefits it is often believed to yield. A similar
analysis might be applied to any meeting situation within the requirements process. One
needs to be sensitive to questions of:
how the agenda is set and what relation it has to the conduct of the meeting (e.g.
do people feel constrained by how the meeting is conducted?);
whether the meeting has a chair or group leader and what this person’s
responsibilities are (e.g. it may well be inappropriate to assume that a project
manager should chair each meeting as that person’s skills may lie elsewhere);
whether the meeting requires a facilitator to help promote an equality of
participation or an adequately extensive consideration of varied arguments;
how majority and minority opinions are managed and how the products of the
meeting are recorded and otherwise documented (e.g. a common minute-taking
style is to note down only agreed upon actions and points; this may mean that
significant variations of opinion simply do not get recorded);
and so forth (it is straightforward to derive a preliminary checklist of
considerations from the list of group process and related errors in Chapter 3).
Of particular importance is how meetings with clients and users are managed. It is
commonplace to argue that regular meetings and reviews with clients are important for
the requirements process (see Sommerville, 1996). Naturally, this cannot be denied. But
it is important to consider how such meetings are managed. It is commonplace for
system suppliers to engage in a ‘walkthrough’ of the requirements documents and
demonstrate any supporting prototype applications similarly. While, naturally, supplier-
organisations wish to display themselves in a good light, overly ‘stage managing’ a
demonstration session can give users and clients an inappropriate impression of the
supplier’s understanding of the system or of their ability to deliver, perhaps leading to
areas of significant difference being glossed over and an over-confidence that the system
to be delivered will satisfy all user or client requirements. Furthermore, the overly ‘slick’ presentation of a prototype can give clients and users the impression that what is being displayed is sufficiently fault-free to be construed as a first version.
For these and related reasons, some developers prefer to demonstrate prototypes in a
non-computational medium. For example, Ehn and Kyng (1991) report on the use of ‘cardboard prototypes’ in meetings between developers and users as a means to encourage the more active participation of users in design, while avoiding presenting a prototype in a form which could ever be mistaken for a first version.
This chapter has argued that it is important to scrutinise the nature of documents and
the dynamics of meetings in the requirements process. It is also important to be aware of
the relation that documents can have to meetings. For example, documents can often
‘drive’ meetings (as in the case of a ‘walkthrough’ of a requirements document as
mentioned above). The textual, sequential order of a document may not match the
ordering of interaction surrounding that document, still less the order in which issues
occur to individuals participating in the meeting. On the other hand, documents can
often report on the results of meetings and group discussions where decisions are taken.
However, sometimes the process by which the results are obtained can be lost. Why this
might sometimes matter can be illustrated with a brief discussion of viewpoints methods
of requirements analysis.
Most viewpoint methods (like many other requirements analysis methods, for that matter) emphasise the importance of obtaining consensus and consistency between different viewpoints and between different participants in the requirements analysis process (clients, vendors, users, etc.). However, the literature on group processes has
indicated at least two means by which consensus in a group can come about. Consensus
can emerge as a result of informational or normative influence acting on the minority
voices within a group. Clearly, then, the existence of consensus within a group is no
guarantee that this situation has been brought about by all participants equally (and
only) considering the evidence and arguments put before them. The alignment of
opinion within a group could equally reflect the operation of processes of normative
influence, for example, as group leaders have an exaggerated influence or as group
members converge on consensus opinions to maintain an intact group (or indeed
corporate) identity. An agreed upon document may itself, then, be an unreliable guide to
the processes which have gone into its construction and, in particular, an unreliable
reflection on the degree to which a range of opinions, evidence and arguments have been
considered.
This is not to argue that viewpoints methods are inappropriate. Indeed, far from it.
Viewpoints methods build in to the requirements process an explicit recognition that
requirements are often disputed and contradictory affairs. From the discussion in section
4.1 of the organisational actualities of system development, this realism has a lot to
commend it. The point being made here, however, is that the way in which the
differences between different viewpoints are managed is important to scrutinise.
Resolving viewpoint differences is important both to develop coherent documents
which can serve as the basis of contractual arrangements and to ensure that system
components do not conflict. However, this should not be taken as an argument for
prematurely resolving viewpoints or for resolving them for the wrong reasons or through
inappropriate means. Indeed, it is quite possible that—under certain
circumstances—maintaining a range of (not necessarily consistent) viewpoints might
promote safety, through keeping a range of design alternatives and safety considerations
open.
The following general point can be made as a result of the above discussion:
It is important to consider how meetings and reviews with clients are managed
Also, a number of specific points can be made regarding meeting procedures and how
meetings are conducted in RE, presented below in Table 4.3.
4.2.4 Preventative measures at the organisational level
Turner (1992) identifies one of the major features of ‘high reliability organisations’ as
their treatment of accidents and near-misses as opportunities for learning. Such
organisations are characterised by their ability to create and maintain an open learning
system in which such incidents allow the organisation to increase its knowledge about
safety issues, rather than apportion blame. Reason would refer to such organisations as ‘generative-proactive’. The U.S. Aviation Safety Reporting System (ASRS), which allows airline pilots to report failures and near-misses anonymously, is a good example of using accidents or near-misses as opportunities for organisational learning. Indeed, this system became widely adopted in the USA only after a crash occurred less than one month after a near-miss at the very same location. The airline involved in the near-miss had its own reporting system in place, and the circumstances of the incident were reported to all of its own air-crews. Unfortunately, the plane that crashed was operated by a different carrier (Hardy, 1990).
Table 4.3: Vulnerabilities and defences regarding meeting procedures in RE
Vulnerability: Overly ‘stage managing’ a demonstration session can give users and clients an inappropriate impression of the supplier’s understanding of the system, of their ability to deliver, or of the closeness of the prototype to being complete
Defences: Be sensitive to how sessions with clients and/or users are managed, taking care to set realistic expectations for what is being delivered; consider demonstrating prototypes in a non-computational medium

Vulnerability: The means by which the results of meetings (decisions made, etc.) are reported may hide the process through which they were arrived at, which may be important for traceability
Defence: Consider introducing meeting reporting mechanisms which record the alternative options that were dismissed, along with the reasons why

Vulnerability: Striving for consensus in meetings can lead to alternative or dissident opinions being ignored
Defence: It may be more appropriate to keep a range of viewpoints, design alternatives and safety considerations open, even if conflicts exist
Turner (1992) proposes four general characteristics as initial points for a good safety
culture in an organisation:
...the establishment of a caring organisational response to the consequences of actions and
policies; a commitment to this response at all levels, especially the most senior, together with an
avoidance of over-rigid attitudes to safety; provision of feedback from incidents within the
system to practitioners; and the establishment of comprehensive and generally endorsed rules and
norms for handling safety problems, supported in a flexible and non-punitive manner. (Turner,
1992, p. 199)
In addition to the general promotion of a safety culture, it is necessary to analyse more specifically how the work of a requirements analysis team relates to the organisational context in which it is embedded. For example, Curtis et al.’s (1988) work suggests that communication channels and reporting relationships, relations between different stakeholders and professional groups, and so forth have to be scrutinised carefully. These and other organisational phenomena are likely to be crucial in the design of safety-critical systems.
For example, many methods for the design of safety-critical systems involve the
distinction of different roles and responsibilities, in other words a division of labour.
Dividing up the labour is often done with safety in mind, so that everyone is clear about
everyone else’s responsibilities as well as their own. However, overly rigid and
formalised divisions of labour can militate against an overlap of expertise that might
contribute to a protective redundancy in the work. Rigid divisions of labour—while
making it clear who is accountable for what—might make it harder for workers to
check for errors in the activities of others.
In the design of safety-critical systems, other well intentioned devices for project
management may come to have a different organisational significance from the one
initially intended. For example, checklists of activities may be introduced with the
intention of ensuring that the work is done and done completely (no crucial activity is
omitted) but, in a real organisational setting, they may also be used to check progress
against target deadlines. These two functions can sometimes contradict each other. An
activity might be completed quickly and inadequately merely so that it appears as
‘ticked-off’ and ‘on target’ on an activities checklist. Similarly, records of the number of
faults reported and remedied should be used with care lest faults are not made visible if
it is thought that a team’s progress will be measured by them.
On the basis of this work, the following general suggestions for requirements process
improvement at an organisational level can be made:
actively maintain a flexible treatment of safety issues in requirements analysis and
software development;
encourage full commitment to a safety culture at all management levels and
amongst all stakeholders in requirements analysis and software development; and
introduce communication networks within organisations which allow safety-related information to disseminate throughout those groups who have a stake in the design of the system in question.
Table 4.4 below presents further specific points which can be drawn from the preceding
discussion:
Table 4.4: Vulnerabilities and defences regarding RE at an organisational level
Vulnerability: Overly rigid and formalised divisions of labour can militate against an overlap of expertise that might contribute to a protective redundancy in the work
Defence: Set up projects in which responsibilities are shared without being diffused

Vulnerability: Measures aimed at punishing those responsible for incidents and accidents are likely to lead to near-misses being covered up
Defence: Introduce means for disseminating safety-related information throughout the organisation, including recognised means for reporting requirements ‘near-misses’ without this becoming a means for the assignment of blame (e.g. consider anonymous reporting mechanisms)
4.3 Summary
The main purpose of this chapter was to verify that the classification of human errors
produced from Chapter 3’s review of the literature is indeed applicable to RE processes.
The degree to which this is the case hinges on the assumption that RE processes are essentially made up of basic units of work—reading, writing, meetings, etc.—and are therefore vulnerable to the same types of errors found in the psychological and sociological literature.
Section 4.1 returned to the classification of errors, this time applying the error types to the actions performed in the process of requirements engineering. For each category of error in the classification, corresponding examples of failure were found in the RE literature. This reaffirms the thesis’s conviction that the RE process can indeed be treated as, essentially, a combination of quite generic forms of human action, and in turn confirms that the error classification constructed in Chapter 3 is a good starting point for indicating potential sites of error in requirements processes.
The error classification is, however, open to the criticism that it is too generic. Whilst
the error types can be shown to be applicable to RE processes, the same could be said of
any human endeavour. So, whilst this chapter has succeeded in confirming that the
generic error types in the classification of Chapter 3 do apply to the everyday work of
requirements engineers, further work is required to make the classification more specific
to RE.
Subsequently, section 4.2 broadened the error classification to consider in particular
some aspects specific to RE. This examined such things as the notations and
documentation conventions used in RE, the use of meeting techniques for certain
purposes, and organisational considerations in RE. This gave rise to a further set of potential errors, as well as improvement suggestions to guard against them. This collected body of vulnerabilities and associated defences needed to be applicable in industrial settings, especially in safety-critical domains; achieving this involves working with a great volume of quite specialised material, pulled together from a variety of sources.
The collated set of vulnerabilities and defences elaborated within this chapter provides the core for the development of the method presented in the following chapter.
Essentially, the method focuses on exploiting a checklist approach where the majority of
vulnerabilities outlined in the last two chapters are presented to users of the method in
order to encourage the development of appropriate defences.
5. The Human Factors Checklist
The previous chapter built upon the substantive review of human factors undertaken in
Chapter 3 by considering how well the taxonomy to emerge from Chapter 3 met the
actual needs of RE process improvement. In particular, the taxonomy developed from
existing work on human factors was extended to include errors related to core RE activities such as document design, notations and representations, meeting procedures, and organisational activities.
The focus on RE oriented activities such as documentation conventions outlined in the
previous chapter gave rise to a set of RE related potential sources of error. The inclusion
of these additional error sources significantly extended the taxonomy presented in
Chapter 3. Broadly speaking, the categorisation reported in the previous chapter has
gathered together a wide range of RE related vulnerabilities, and improvement
suggestions to guard against them. This collected set of paired vulnerabilities and
associated defences provides the core of a checklist that underpins the development of
the method presented in the following chapter. The method relies on a checklist
approach to protection against vulnerabilities where the majority of vulnerabilities
outlined in the last two chapters are presented to users of the method in order to
encourage the development of appropriate defences. Before consideration of the
development of the PERE method in the following chapter, it is worth reflecting on the
Human Factors Checklist.
5.1 The Human Factors Checklist
This chapter presents the PERE Human Factors Checklist in full. Essentially, the
checklist outlined in this chapter consists of a number of tables, for each of the different
categories of vulnerabilities to error outlined in the previous chapters. As well as
providing the basis of the method outlined in the following chapter the checklist
provides an index into the relevant literature reviewed in Chapters 3 and 4. The
indexing properties of the checklist are significant in its application within the method
outlined in the next chapter as they allow users to access supporting material and more
complete accounts that underpin the suggested defences.
The purpose of this checklist is to support process improvement analysts in their
examination of RE processes with a view to detecting potential sites for error, and
suggesting changes to the process to avoid or counteract them. The approach, therefore,
is one of identifying vulnerabilities in the process, and proposing defences against them. It is
important that the checklist is not seen as a series of check-boxes which must each be
completed, and which can be followed algorithmically. For each process under scrutiny,
many (if not most) of the vulnerabilities will not apply. Analysts will, to a large extent, have to depend upon their own skill and judgement to determine which vulnerabilities apply to a given process. Some judgements, however, will be more straightforward than others. For example, when studying the process by which a technical author creates documentation for a system, it is unlikely that the process will be vulnerable to group performance losses (unless the documentation is developed by a team!).
The checklist is structured to reflect Chapter 3’s classification of errors, along with the
contributions from Chapter 4. This leads to the following structure:
Individual activity
- Skill-based
- Rule-based
- Knowledge-based
Social group activity
Organisational activity
Violations
Document design
Notations and representations
In each case, the checklist consists of four columns, as in the annotated excerpt from the
checklist section regarding vulnerabilities of group activity, in Figure 5.1. The two main
columns in the checklist present a particular entry in the error classification in terms of
the error which a process may be vulnerable to, and suggestions for defences which may
be implemented to guard against the vulnerability. For much of the checklist, the
defence which corresponds to a given vulnerability takes the form of suggesting how it
might be avoided. In some cases, particularly those arising from the review of
organisational issues, the literature actually presented the defences (such as the
prioritisation of safety at all levels of the organisation). Here, it was necessary to ‘work backwards’ from the defence in order to present the vulnerabilities which the defence guards against.
The final column is important because of the need to ground the vulnerabilities and
defences in the literature. To reiterate an earlier point, the nature of the checklist is not
that of an algorithm which can be followed for any process. Analysts must use their
skills to apply the findings artfully, and in so doing may well need to consult source
material to ensure that a particular vulnerability is relevant for the process under
examination. For this reason, the link back to this thesis’ review, and thence to the
original literature, is an important means to ensure that the checklist is applied sensibly.
In order to be useful in practice, the checklist has to be released along with the review
sections it refers to.
As an aid to abstracting away from the detailed vulnerabilities and defences, each section of the checklist has a summary heading (in italic script), with higher-level entries to enable more rapid determination of the relevance of the category of vulnerability in question.
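The four-column structure of the checklist also lends itself to a simple machine-readable encoding, which could support tool-assisted filtering of entries by category. The following Python sketch is purely illustrative: the type name ChecklistEntry, its field names, and the relevant() helper are hypothetical and are not part of the checklist or of PERE itself.

```python
from dataclasses import dataclass

@dataclass
class ChecklistEntry:
    """One row of the human factors checklist (hypothetical encoding)."""
    ref: str            # unique reference number, e.g. "5.1"
    category: str       # top-level classification section, e.g. "Social group activity"
    vulnerability: str  # the vulnerability to error
    defences: list      # one or more possible defences against the vulnerability
    sources: list       # section number(s) of this thesis where the vulnerability is discussed

# An example entry, transcribed from the excerpt shown in Figure 5.1.
entry = ChecklistEntry(
    ref="5.1",
    category="Social group activity",
    vulnerability="Social facilitation and inhibition",
    defences=[
        "Consider whether direct supervision is appropriate for skill-based tasks",
        "Prefer indirect, deferred error-checking for knowledge-based tasks",
    ],
    sources=["3.2.1"],
)

def relevant(entries, categories):
    """Return only the entries in the categories an analyst judges applicable."""
    return [e for e in entries if e.category in categories]

assert relevant([entry], {"Social group activity"}) == [entry]
```

Such an encoding would preserve the indexing property of the checklist, since each entry carries its source section references back into the review of Chapters 3 and 4.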
Ref: 5.1
Vulnerability: Social facilitation and inhibition (i.e. the degree and direction in which individual performance of a task is affected by being observed)
Defences: Consider whether the introduction of direct supervision of activity is appropriate, and whether its gains outweigh any potential performance losses, as might be the case for skill-based tasks; tend not to employ direct supervision (and prefer more indirect, deferred error-checking) for more knowledge-based tasks
Source: 3.2.1

[In the figure, annotations point out the four columns of the entry: the unique reference number; the vulnerability to error; the possible defence against the vulnerability; and the reference to the section number(s) in this thesis where the vulnerability is discussed.]

Figure 5.1: An excerpt from the human factors checklist
In the remainder of this chapter we shall present the different tables making up the
checklist in terms of the categories used in the previous chapters.
5.1.1 Individual Errors—Slips & Lapses
Ref: 1
Vulnerability: Slips and lapses in familiar skilled activities
Defences: Check products of work; consider introducing supervision and document review if none exists; consider redesign of how information is presented and of individuals’ work environments; consider introduction of appropriate tool support and/or redesign of current working tools
Source: 3.1.1

Ref: 1.1
Vulnerability: Recognition and attentional failures (e.g. errors in use of standard, familiar notations and errors in pursuing multiple cross-references in a requirements document)
Defences: Design presentation of information to maximise ease of recognition; design notations to allow significant differences to be highlighted; avoid naming conventions which lead to groups of names that are ambiguous or may be easily confused; minimise worker fatigue and interruptions through monitoring and/or redesigning work hours, routines and work environments
Sources: 3.1.1.1, 3.1.1.2

Ref: 1.2
Vulnerability: Memory failures (e.g. omitting some familiar action or incorrectly reconstructing the history of a requirement)
Defences: Consider introduction of tool support for requirements traceability and/or ‘memory prostheses’ (including video and audio recordings of significant meetings, etc.); if none exist currently, provide means by which team members can support each other’s remembering; enable details of original contexts (e.g. when a requirement was first expressed) to be captured and retrieved
Source: 3.1.1.3

Ref: 1.3
Vulnerability: Selection failures (e.g. selecting the wrong course of action when engaged in several tasks simultaneously)
Defences: Simplify the work setting so that workers confront simultaneous tasks more rarely; redesign technologies so that, after interruption, workers are easily able to return to their work at the point they were at beforehand
Source: 3.1.1.4
5.1.2 Individual Errors—Rule-Based Mistakes
Ref: 2
Vulnerability: Rule-based mistakes, where ‘pre-packaged’ solutions or rules of practice are inappropriately applied
Defences: Training in appropriate solutions; continual revision of training in light of changes or innovations in the work domain; dissemination of expertise and experience; (if appropriate) introduce checking, supervision or review if none exists currently
Source: 3.1.2.1

Ref: 2.1
Vulnerability: Misapplying good rules (i.e. following a course of action that is known to work in certain contexts but that does not work in this one)
Defences: Test the appropriateness of existing rules or procedures in new situations under experimental or simulated conditions; (where appropriate) design how information is presented so that workers can easily recognise conditions for which an appropriate rule exists
Source: 3.1.2.1 (i)

Ref: 2.2
Vulnerability: Applying bad rules (i.e. following a course of action that has been shown not to work, or that used to work but no longer does so)
Defences: Avoid persistence of sub-optimal rules by testing over a wide range of conditions through (e.g.) simulations; disseminate ‘near-miss’ and ‘critical incident’ data throughout the organisation
Source: 3.1.2.1 (ii)
5.1.3 Individual Errors—Knowledge-Based Mistakes
Ref: 3
Vulnerability: Knowledge-based mistakes, where new solutions to problems have to be devised without recourse to pre-packaged ones
Defences: Design group processes to include ‘critiquing’ and reflection on how solutions are arrived at (e.g. through video or audio tape analysis or other means of experience capture); introduce checking and review of products of work, with particular scrutiny of possible sites of knowledge-related bias
Source: 3.1.2.2

Ref: 3.1
Vulnerability: Availability biases (i.e. solutions are chosen because they come to mind easily)
Defences: Carefully scrutinise the adoption of a solution on the basis of prominent past experience; focus in detail on what distinguishes the new situation from past situations which come to mind easily
Source: 3.1.2.2 (i)

Ref: 3.2
Vulnerability: Frequency and similarity biases (i.e. solutions are chosen because they are similar or have been used often before)
Defences: Carefully scrutinise the adoption of a solution merely because similar solutions have been adopted before; focus on the uniqueness of the current situation
Source: 3.1.2.2 (ii)

Ref: 3.3
Vulnerability: Over-confidence and confirmation biases (i.e. solutions are chosen solely because of the confidence with which they are believed to succeed, or because only evidence confirming their success is sought)
Defences: Actively seek out information to disconfirm the currently favoured solution by (if necessary) assigning ‘critics’ to it who do not necessarily favour the same solution; do not favour solutions merely on the basis of the confidence their advocates have in them
Sources: 3.1.2.2 (iii), 3.1.2.2 (iv)

Ref: 3.4
Vulnerability: Inappropriate exploration of the problem space, bounded rationality and satisficing (i.e. making do with what appears to be an appropriate solution, ignoring the possibility of better solutions which may result from alternative lines of enquiry)
Defences: Especially when the range of options is large, ensure that the whole problem space is adequately ‘scoped’ initially so as to minimise the chances of a good solution being missed; do not prematurely allocate resources to the deep exploration of a possible solution until the whole problem space has been ‘scoped’
Sources: 3.1.2.2 (v), 3.1.2.2 (vii)

Ref: 3.5
Vulnerability: Attending and forgetting in complex problem spaces (i.e. losing one’s way because of the complexity of the problem)
Defences: Consider appropriate tool support and other techniques for aiding memory and attention (see §1.2 above)
Source: 3.1.2.2 (vi)

Ref: 3.6
Vulnerability: Problem simplification through (i) halo effects, (ii) control/attribution errors and (iii) hindsight biases
Defences: As above, counteract these effects through critiquing solutions by (i) concentrating on the specificities of new situations, (ii) ensuring that factors outside direct human control are adequately recognised, and (iii) recognising how things could have been otherwise (i.e. documenting the significant branching- and choice-points) in past critical events
Sources: 3.1.2.2 (viii), 3.1.2.2 (ix), 3.1.2.2 (x)
5.1.4 Individual Errors—Violations
Ref: 4
Vulnerability: Violations of established operating procedures
Defences: Survey whether established procedures are unnecessarily prescriptive (perhaps due to restrictions accumulating as a result of past incidents) and, if so, consider their redesign; carefully distinguish between harmful violations and those necessary to get the job done at all
Source: 3.1.4

Ref: 4.1
Vulnerability: Routine violations (i.e. violations that are habitual and performed regularly as part of the process)
Defences: Carefully assess the effects of such violations to see if they are in fact harmful; check to see if they arise through inappropriate management styles and/or processes which overly regulate the conduct of workers, allowing inadequate skilful or professional discretion (if so, it may be appropriate to review operating procedures); alternatively, if violations arise through an inadequate emphasis on safety, introduce a strong safety culture
Source: 3.1.4.2 (i)

Ref: 4.2
Vulnerability: Optimising violations
Defences: As above, and also attend particularly to processes which themselves motivate risky optimisation by (for example) being too boring to perform otherwise
Source: 3.1.4.2 (i)

Ref: 4.3
Vulnerability: Situational violations
Defences: As above
Source: 3.1.4.2 (ii)

Ref: 4.4
Vulnerability: Exceptional violations
Defences: Ensure the practical usefulness of training even for rare situations through (as appropriate) imagination, simulation or drills; encourage the envisionment of unusual combinations of circumstances and responses to them as part of training
Source: 3.1.4.2 (iii)
5.1.5 Problems with Group work
Ref Vulnerability Possible Defence Source
5 Group coordination
failures and process
losses
Scrutinise the internal dynamics of
meetings, presentations, brainstorming
and other group interaction situations and
review procedures for them paying
particular attention to the factors listed
below
Employ (where appropriate) video and
other recording facilities to enhance the
possibility of such reflection
3.2
5. 1 Social facilitation and
inhibition
(i.e. the degree and
direction in which
individual performance of a
task is affected by being
observed)
Consider whether the introduction of
direct supervision of activity is appropriate,
whether its gains outweigh any potential
performance losses as might be the case
for skill-based tasks
Tend not to employ direct supervision (and
prefer more indirect, deferred error
checking) for more knowledge-based
tasks.
3.2.1
5.2 Inappropriate human
resources
Ensure that teams and groups are
selected with skills, knowledge and
experience which match the task at hand
Do not merely select personnel who
happen to be available.
3.2.2
5.3 Socio-motivational
problems
When ‘free-rider’ problems become acute,
consider introducing procedures for group
interaction to specifically address them
(e.g. the introduction of a group ‘facilitator’
or a ‘leader’ focusing on the balance
between contributors)
3.2.2
5.4 Group coordination
problems
Consider dedicating personnel to
monitoring coordination or undertaking
coordination tasks
Ensure that all proposals for solutions to
problems receive adequate critiquing and
other examination (cf. group facilitation
above)
Technological support (e.g. email and
other applications) may be appropriate to
alleviate coordination problems for
distributed work groups
3.2.2
5.5 Status related problems Ensure that experts are not ‘over-believed’
by critiquing all opinions
Ensure that novices and low-status group
members have the opportunities and
resources to confidently contribute
3.2.2
5.6 Group planning and
management problems
Ensure that the breakdown of tasks to sub-
tasks and the allocation of personnel to task
is conducted appropriately
3.2.2
5.7 Inappropriate leadership
style, skills and influence
Ensure that group leaders and others in a
management role have relevant experience
and knowledge of the problem domain
Ensure that they do not exclude minority
opinion or inappropriately bias solutions,
and balance ‘task-centred’ (making sure the
job’s done) with ‘socio-centred’ (e.g.
making sure all team members contribute
appropriately) leadership
3.2.3
5.8 Premature consensus and
the exclusion of minority
opinion
(i.e. the reaching of a
decision too early, before
all appropriate options
have been fully explored,
maybe because they were
only proposed or
supported by a minority of
the team)
Preserve, record and periodically review
minority opinions (e.g. courses of action not
favoured by the group as a whole)
Make resources available for the
development of minority opinions and the
critical exchange between minority and
majority opinions
3.2.4
3.2.5
5.9 Group polarisation
problems (e.g. risky shifts
and groupthink)
Counteract groupthink by composing
groups from varied backgrounds and skill-
bases
Encourage access to sources of information
which might disconfirm existing opinions
Discourage dogmatism on the part of group
leaders or high status members
Especially review the composition and
dynamics of groups who seem to be
heading for increasingly risky decisions
3.2.6
5.1.6 Problems of an Organisational Nature
Ref Vulnerability Possible Defence Source
6 Organisational
vulnerabilities
Review organisation structure,
communication, culture and learning to
uncover how sensitive the organisation is to
safety and reliability issues
Consider the introduction of new
organisational forms, patterns of reporting,
a strong safety culture and explicit
mechanisms for experience capture to
promote fault tolerance and prevent error
propagation.
3.3
6.1 ‘Single points of failure’
exist where a mistake by
an individual can lead
directly to a failure or
hazardous condition
Higher levels of redundancy should exist in
personnel and/or technology (e.g. through
overlapping expertise and sharing of
responsibilities)
3.3.1 to 3.3.3 [20]
6.2 Errors and failures
propagate through the
process
Introduce appropriate supervision and
checking
ditto
6.3 Wide fluctuations in
workload
If the fluctuations can be anticipated, then
consider allocating staff flexibly to coincide
with high loads
ditto
6.4 Reporting procedures and
hierarchy of decision-
making authority prevents
rapid response to
problems as they arise
Decentralise authority to speed
responsiveness
ditto
6.5 Working practices allowed
to ‘slip’ into unsafe modes
Emphasise continuous training and a strong
safety and reliability culture
ditto
6.6 Failure to comply with
existing safety regulations
or develop new safety
procedures
Promote a strong safety and reliability
culture at all organisational levels
ditto
6.7 Recurrent failures of a
similar nature
Promote organisational learning ditto
6.8 Potential safety hazards
are allowed to pass
unrecorded
Implement reporting and disseminating
accounts of ‘near-misses’ and other safety
information
ditto
6.9 Organisational rigidities of
perception and belief
Introduce reviews and critiquing groups with
the responsibility of exploring the basis of
existing organisational practice
ditto
[20] In Chapter 3, it is argued that vulnerabilities at the organisational level have a character which requires
one to consider a large range of interacting factors. For this reason, analysts are recommended to
become acquainted with all of sections 3.3.1, 3.3.2, 3.3.3 and 4.2.3, no matter what the specific
organisational vulnerability is that has been identified.
6.10 Significance of
vulnerability is minimised
Promote a strong safety and reliability
culture at all organisational levels
ditto
6.11 There exists tight coupling
of processes within a
complex production
system
Redesign processes to loosen coupling
and simplify production wherever possible
ditto
6.12 Process varies from
project to project (ad-hoc)
Adopt a specified and documented process ditto
5.1.7 Problems with Document Design
Ref. Vulnerability Possible Defence Source
7 Document Design Carefully review document standards and
whether they promote the readability and
error-free production of documents
Carefully scrutinise any special textual
conventions used
Consider the introduction of technologies
(e.g. hypertext-based systems) which can
promote dynamic cross referencing
between documents.
4.3.1
7.1 Document standards do
not support the needs of
certain projects, leading to
e.g. omission of relevant
detail
The rationales for document standards
should be scrutinised, especially with
respect to how the standards might allow
the recognition of project-specific
idiosyncrasies.
4.3.1
7.2 Document standards
impose a strict ordering of
sections, regardless of
their relationship to each
other for a particular
system or project
The structure of documents should be
assessed both for how it supports
readability and how it reflects the
document’s conceptual organisation.
4.3.1
7.3 Textual notations are used
in documents to provide
additional information
regarding the status or
category of entities
contained therein
Textual conventions should be examined
with respect to how vulnerable they are to
error.
4.3.1
7.4 Requirements
specifications are used for
several purposes, e.g.
serves both as contract
between procurer and
supplier, and as
description of the system
to be developed
Separate the requirements specification
into different documents for specific
audiences and purposes.
Be sensitive to how documents serve
different functions and are intended for
different readers
Tool support could facilitate multiple views
onto the same document, thereby
maintaining consistency between views.
4.3.1
7.5 Documentation pertaining
to a particular system is
spread across a number of
documents, which must
be read together.
As the number of documents increases, it is
important to introduce effective strategies
for coordination across documents
Consider use of hypertext documents to
facilitate following cross-references and so
on.
4.3.1
5.1.8 Problems with Notations, Representations, and Diagrams
Ref Vulnerability Possible Defence Source
8 Notations, representations
and diagrams
Carefully review the rationale behind
notation conventions and
representational formalisms
Especially consider whether the notations
do in fact offer advantages over ordinary
language descriptions
Ensure that notations etc. are commonly
understood across development teams.
4.3.2
8.1 Diagrams often contain
only bounded objects
(circles, squares, etc.)
Allow for less distinct or self-contained
entities to be expressed in a less formal
notation
4.3.2
8.2 The set of relations that
can be drawn in a diagram
or otherwise formalised are
restricted in order to
reduce complexity (e.g.
only simple trees).
Allow for less constrained expression of
relationships between entities and
consider ‘open-ended’, extensible
formalisms
4.3.2
8.3 Diagrams and other
formalisms are used in
documents as a substitute
for exposition.
Include both exposition and
diagrams/formalisms in documents
4.3.2
8.4 Proximity effects
(i.e. two objects in a
diagram may be taken to
be related to each other
merely because they are
close to each other on the
page)
Make explicit any conventions used to
indicate relationships between entities in
diagrams
Construct alternate views onto diagrams
to prevent particular views being treated
as the ‘only’ views.
4.3.2
8.5 Centre/periphery,
top/bottom, and left/right
biases
(i.e. pre-existing
perceptual or reading-
related biases may lead to
attaching inappropriate
importance to entities
because of their position
in a diagram)
Make explicit any conventions used to
indicate relative importance of entities in
diagrams
Construct alternative views onto diagrams
as above
4.3.2
8.6 Diagrams and other
formalisms are copied and
pasted from one
document to another,
possibly in inappropriate
contexts
Include the source of diagrams and
formalisms if pasted from elsewhere, in
order to assist checking that they are
being used appropriately.
4.3.2
5.2 Summary
This chapter has presented the Human Factors Checklist underpinning the PERE
method in full. This checklist collates the areas of vulnerability due to human factors
within the RE process and the possible defences that may be developed against these
vulnerabilities. The general philosophy of the Human Factors Checklist is that its users
will be sensitised to the sources of potential errors within the RE processes they are
adopting and will undertake a series of defensive actions to protect against these. This
general approach to the use and application of the checklist presented in this chapter
provides the basis for the PERE method presented in the following chapter.
6. Designing PERE
This chapter presents the design of PERE (Process Engineering in Requirements
Engineering), a process improvement method which aims to address the main problems
posed in the previous chapters. These were:
the lack of process improvement methods for requirements processes with any
focus on the potential for human-related problems (Chapter 2); and
the overwhelming volume of research findings which can be applied, albeit with
difficulty, to the RE process (Chapter 3).
The novelty of this work essentially consists of addressing the second problem in order
to help solve the first, and in doing so make a large and varied body of research findings
available to the RE community.
Whilst this thesis groups the findings reviewed in Chapter 3 under the convenient
banner of ‘Human Factors’, it would be misleading to suggest that they fit so easily
together. Not only are the findings diverse, they are also voluminous, yet they are
obviously pertinent to the activity of requirements engineers. The work of this chapter,
therefore, is to provide a mechanism by which the findings of Chapter 3 can be applied
methodically to requirements processes. The route chosen here is to build upon the
checklist outlined in the previous chapter that categorised the types of human errors and
failures highlighted in Chapters 3 and 4 as vulnerabilities in human-intensive processes and
paired them with proposed defences which can mitigate their occurrence and/or effects.
6.1 Utilising human factors research in practice
Chapter 4 made some suggestions for RE process improvement based upon the activities
performed by requirements engineers. Together with the categories of error reviewed in
Chapter 3, they form the conceptual core of the thesis. As they stand, however, they are
unlikely to be of use to industry for a number of reasons. First and foremost is the sheer
volume of information contained in the review of the literature, and the specialised
nature of this information. Bearing in mind the weakness of process improvement
techniques in the area of human factors, as remarked upon in Chapter 2, it is very
unlikely that any process improvement analyst will be familiar with a large amount of
the research covered in Chapter 3. This section, therefore, considers some possible
approaches to how this can be addressed so that the findings can be applied to real RE
processes. The aim of the section is to propose an approach which can be followed to
ensure successful application of the process improvement suggestions.
Perhaps the most obvious approach to coping with the vast quantity of research findings
reviewed in Chapter 3 would be to recruit process improvement analysts from people
with a human or social science background. This has the advantage that the potential
analysts would be familiar with the literature, and so would be in a position to employ
their existing skills and knowledge in the field of software process improvement. It is
already common for software developers to employ human factors specialists to work on
human interface development, and sociologists are increasingly turned to in CSCW
system development (see, for example, Bentley et al., 1992; Bowers, O’Brien and
Pycock, 1996; Fitzpatrick, Kaplan and Mansfield, 1996; Heath and Luff, 1992; Hughes,
Sommerville, Bentley and Randall, 1993; Luff and Heath, 1998; Martin, Bowers and
Wastell, 1997; Pycock, Palfreyman, Allanson and Button, 1998; Sommerville, Rodden,
Sawyer and Bentley, 1992). So it could be seen as a natural progression to involve
human scientists in process improvement efforts where they can search for human
related problems in the current process and scrutinise improvement proposals for any
potential human related issues. However, a human factors specialist is unlikely to be a
specialist in more than one field, e.g. they may be a sociologist or a cognitive psychologist.
Whilst people do exist with backgrounds in multiple human sciences, they are still quite
rare and it is highly unlikely that there would be enough people with such a background
for such a strategy to be commercially viable. This being the case, any human factors
specialists employed to fulfil such a role would likely still face the problem of
familiarising themselves with a large body of literature. Furthermore, cognitive
psychologists, for example, might have significant problems with adopting some of the
theoretical positions of sociologists, and vice versa. It may well be the case that some of
the theoretical differences between different human sciences are irreconcilable for
anyone allied too closely to any single discipline. In reviewing the human factors
literature, this thesis has taken the approach of a ‘methodological magpie’. It has sought
out any findings which may prove useful when considering the work undertaken by
individuals and teams engaged in requirements engineering. It may well be the case that
some of the findings reviewed in Chapter 3 are not directly compatible with each other.
Nevertheless, this thesis’ approach is to determine the applicability of the findings when
considering actual processes, with all the specific details and intricacies which this
implies. Only then would it be possible to say that one theoretical standpoint is more
appropriate than another, for that particular situation. What is needed, therefore, are
individual analysts who can apply these findings as appropriate, without the theoretical
‘baggage’ that a training in a particular discipline might entail. This is not to discount the
benefit of involving human factors specialists in process improvement. It is merely to
state that for their involvement to be useful to the process improvement process itself,
a particular type of human factors specialist is required, one who is prepared to
suspend some of their, possibly closely held, beliefs regarding the methodological ‘purity’
of some of the literature which has been reviewed here.
So, bearing in mind that specialists who are ready and willing to put all their theoretical
differences to one side may well be quite rare, a different approach might be more
appropriate. For example, it would be possible to recruit process improvement analysts
from more conventional sources, and then give them training in the appropriate human
sciences to provide them with the requisite skills to apply the findings from the
literature. This approach would still suffer from the need to familiarise the analysts with
a large body of research, but their distance from the original work may make it easier for
the findings to be applied successfully. Furthermore, organisations already undertaking
process improvement are more likely to adopt this work if it is possible for it to be
integrated with their existing analysis methods. This does not, however, solve the
problem of the sheer quantity of information which must be worked with. This will be a
problem regardless of the background and training of the analyst, and requires a tool—in
the broadest sense of the word—to support analysts in their work.
6.2 PERE: A Tool Approach
In suggesting that a tool is required, it is important to be wary of following a route
which will lead to an overly constrained method of working for the analyst. The
literature on violations suggests that any such tool would itself be vulnerable to this sort
of error, and would possibly therefore do more harm than good. What is needed is a
more lightweight approach, which can support the work of the analyst, without getting
in the way of the work itself. Checklists such as the one proposed in Chapter 5 have
been used for this purpose in a variety of situations, in systems design methods such as
HUFIT (Taylor, 1990), and ETHICS (Mumford, 1986). The use of checklists in this
thesis is not proposed lightly. The previous chapters have already alluded to potential
problems where the ‘ticking-off’ of items in a checklist becomes an end in itself,
attaining greater importance for the user than the task which the checklist is designed to
support. For this reason, it is necessary to consider the way the checklist is used so as
to defend against this type of misuse. This involves complementing the checklist
outlined in the previous chapter with a method and guidance for how to use it.
guidance for how to use it. Furthermore, there are potential problems regarding the
purpose of the checklist—the encapsulation of a large and diverse set of information to
be used in different contexts at a later time. To guard against problems of ‘ontological
drift’ (Robinson and Bannon, 1991), it is necessary for the checklist to ground any
recommendations it makes for process improvement in the context from which they
originate.
The checklist outlined in the previous chapter is designed to support the application of
the human factors literature in such a way that, when an analyst is studying a particular
process, it can be used as an aide-mémoire to ensure that the findings relevant to the
process have been covered adequately. At the same time, it facilitates further
examination of the source data in situations where it is unclear to what extent a
particular entry is applicable to the process under examination. The checklist cannot be
used as a substitute for training in (at least some of) the relevant literature, and requires
a degree of skill and artfulness in its application, as not all headings are relevant for every
process. The following section introduces the checklist, and describes its design and
structure.
As we have already stated, the checklist is not enough on its own, and also requires the
design of a method in order to guide the analyst through its application. This hypothesis
was reinforced in an initial trial of the checklist which was conducted at Aerospatiale,
one of the REAIMS project industrial partners, as a means of developing the design of
the method. As it stands, the checklist developed in the previous chapter is unlikely to
be successful for a number of reasons. These concerns must be addressed by any method
designed to support the application of the checklist. The concerns are:
gaining familiarity with the process being improved;
reducing the number of questions which must be asked of the process;
adopting a means for describing processes;
recognising different viewpoints on processes; and
the provision of guidance for following the method.
6.2.1 A formative assessment of the Human Factors Checklist
The development of PERE was essentially through the formative assessment of the
Human Factors Checklist in a real world context. This formative assessment was initially
undertaken by fieldworkers based at Lancaster University and started in December
1994. The chosen site of study was one of the other REAIMS partners, Aerospatiale.
The fieldsite was provided in advance with the Human Factors checklist and asked to
apply it to one of their own Requirements Engineering Processes. The use of the
checklist by Aerospatiale was complemented by a parallel application of the checklist by
the visiting fieldworkers. The use of the checklist by its developer provided a useful
counterbalance to the experiences of those in Aerospatiale, who used the checklist for
approximately a month before being visited. These debriefing visits took the form of a
series of semi-structured interviews, complemented by a general observation of their design
and manufacturing process over a period of days. The data from the semi-structured
interviews and the field observations were later analysed over a two-week period at
Lancaster University. Rather than highlighting shortcomings and omissions in the
content of the checklist, the analyses focused on its use, and on the issues and problems
encountered in applying the Human Factors Checklist.
This trial raised a number of key concerns with the initial formulation of the human
factors checklist. The Aerospatiale process that was the subject of the trial, called MERE
(Memorisation of Experience in RE), was being developed to address the problem that
experience gained in the design of aircraft was not being remembered and carried
forward from one design to the next. As we have stated in the overview of the thesis
provided in Chapter 1, access to the MERE process provided the opportunity for a
strongly formative approach to evaluation, with the on-going assessment of PERE in
practical use being used to drive its further refinement.
The focused debriefing used in the trial application allowed significant reflection and
offered new suggestions for method development. This arrangement reflects the broader
REAIMS project structure and the emphasis on synergistic partnerships as a means of
undertaking multidisciplinary research. The partnership between the researcher
developing the checklist and method and the industrial partner applying the method
provided an opportunity for the checklist to be exercised on a ‘real’ process. In addition,
the use of the checklist by safety/reliability engineers at Aerospatiale rather than its
developer also afforded an opportunity to find out how easy the checklist was to
understand and apply. Furthermore, the Aerospatiale staff were interested to find out
more about the human factors issues encapsulated in the checklist, and the detailed
output from the checklist’s application to MERE provided them with useful input to
subsequent refinement of the MERE process itself. More detail on the MERE process is
given in Chapter 8, which returns to this trial application and a more thorough
evaluation of PERE applied to MERE at the end of their respective developments.
The following sections each present one of the above concerns, and in each case the
discussion is supported with excerpts from the initial trial field work (Bowers and Viller, 1994),
presented in separate panels in the text. This early application of the human factors
checklist was instrumental in the development of the PERE method presented in this
thesis. In particular, it generated a number of requirements which PERE had to meet.
The requirements on PERE resulting from the trial are presented at the end of each
section.
6.2.2 Familiarity with the process under consideration
One of the major factors affecting the initial application of the human
factors checklist at Aerospatiale centred around the difficulty and length of
time involved in becoming familiar enough with the process under
examination in order to be able to apply the checklist. Much of the time
spent at Aerospatiale involved questioning the Safety/Reliability Engineers
about details of the process which had not been understood correctly from
the documentation provided in advance.
Any process improvement activity will first of all be primarily concerned with
understanding the nature of the process to be improved. A method which assists in this
process is likely to be easier to apply, and hopefully more successful for this reason. The
human factors checklist makes the assumption that some sort of understanding of the
process under consideration already exists. PERE must therefore provide a means of
attaining this understanding.
Depending upon the resources available, a variety of techniques might be used to lead
into the application of the human factors checklist. For example, ethnographic field
studies have been used successfully to produce rich descriptions of processes in a variety
of domains (Anderson et al., 1993; Bentley et al., 1992; Blythin, Rouncefield and
Hughes, 1997; Bowers and Pycock, 1996; Harper et al., 1992; Harper, 1991; Heath,
Jirotka, Luff and Hindmarsh, 1993; Heath and Luff, 1992; Heath and Luff, 1996; Luff
and Heath, 1993; Rodden et al., 1994). The advantage of using such approaches is their
grounding in the day-to-day actuality of ‘what goes on’ in the work place. However, a
great deal of effort is involved in performing such studies, effort which may not be
available. Furthermore, it is arguable as to just how methodical ethnography is, at least
when compared with structured analysis techniques. It can be seen, nevertheless, that
any technique for describing processes will inevitably involve a degree of familiarisation
which is necessary in order to describe the process accurately. Therefore, it would also
be possible to turn to existing process improvement methods such as those reviewed in
Chapter 2 which will include their own techniques for describing processes.
Already existing process models, where they are available, could be used to accelerate
process understanding. Used in conjunction with short field studies, existing models
could be validated against ‘real life’ to ensure that the recorded processes mirror what
actually takes place.
For PERE to be successful at making the information contained within the human
factors checklist readily applicable to RE processes, it must first tackle this problem.
What is needed is a methodical approach to understanding processes which can be used
as a precursor to application of the human factors checklist.
Requirement for PERE
1 PERE must incorporate a methodical approach to the
understanding of processes which can be used prior to the
human factors checklist.
6.2.3 Reducing the number of questions
The Aerospatiale Safety/Reliability engineers who took part in the trial of
PERE were very concerned with what they saw as a combinatorial explosion
of potential questions to be asked of their processes. They went as far as
to suggest that if any method required an engineer to ask more than
three or four questions regarding a part of a process, it would soon be
dropped as too great an overhead. In fact, the problem experienced by
the Aerospatiale engineers was largely due to their unfamiliarity with the
literature upon which the checklist is based. This meant that they felt they
had to consider every potential vulnerability for each process or sub-
process they studied. Experience with the human factors literature behind
the checklist made it possible to decide quite easily whether it made
sense to consider whether a process or set of processes were subject to a
particular vulnerability.
For a checklist approach of the type proposed here to achieve a broad coverage of the
potential vulnerabilities to error, it necessarily involves a checklist containing a large
number of questions. To an analyst new to the approach, this may prove to be very
daunting, and much time and effort could be wasted in attempting to apply the checklist
in as thorough a manner as possible. Not all entries, however, will be relevant for all
process components. On this basis, it should be possible to provide guidance to the
analyst to allow for ‘pruning’ of the number of questions which must be asked of a given
part of the process.
The need here, therefore, is for some means of guiding the analyst through the process of
deciding on the relevance of particular questions or sections of the checklist for
particular processes or groups of processes. There is obviously also an issue here with
regard to the scalability of PERE. The bigger the process, the more components it is
made up of, and the more potential process vulnerabilities which must be considered.
Requirement for PERE
2 The method must provide a means of navigating through the
checklist so that the number of questions to be asked of a
process is limited to only those which are relevant.
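The pruning of checklist questions discussed above can be illustrated in outline. The following sketch is purely hypothetical and forms no part of PERE itself: the entry contents echo the tables earlier in this chapter, but the category tags and the particular representation are illustrative assumptions about how an analyst's tool support might limit the questions asked of a given process component.

```python
# Hypothetical sketch: checklist entries tagged with the kinds of process
# component they apply to, so that only relevant questions are asked.
from dataclasses import dataclass

@dataclass
class ChecklistEntry:
    ref: str               # e.g. "5.3", as in the tables above
    vulnerability: str
    defence: str
    source: str            # section of the literature review (Chapters 3/4)
    categories: frozenset  # component kinds the entry applies to (assumed tags)

CHECKLIST = [
    ChecklistEntry("5.3", "Socio-motivational problems ('free riders')",
                   "Introduce a group facilitator or leader", "3.2.2",
                   frozenset({"group"})),
    ChecklistEntry("7.1", "Document standards omit relevant detail",
                   "Scrutinise the rationales for document standards", "4.3.1",
                   frozenset({"document"})),
]

def relevant_entries(component_categories):
    """Prune the checklist to entries relevant to one process component."""
    return [e for e in CHECKLIST
            if e.categories & set(component_categories)]

# Only the group-work question is asked of a group-based component:
refs = [e.ref for e in relevant_entries({"group"})]
```

On this representation, a component tagged only as group work yields a single question (ref 5.3), rather than the full checklist, which is the effect Requirement 2 asks for.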
6.2.4 Describing processes
The process studied at Aerospatiale had been documented in an SADT-
like notation. This was effective both within the organisation, as well as for
communicating the nature of the process for the application of the human
factors checklist. In fact, the problems experienced with understanding the
process were almost all related to understanding the organisational
context within which the process operated.
It is not enough for PERE to be successful at applying the human factors literature, and
in the course of its application uncovering process components which might lead to
errors in the overall requirements process. PERE must also be effective at
communicating its findings in such a way that is useful to the people who must act upon
its findings, and furthermore that the results of a PERE analysis may be reused at a later
date, if required.
What this means is that PERE must adopt a notation for describing processes. This will
not only serve the above purposes related to communicating the results of analyses to
those not directly concerned with the application of PERE. There is also a need for
some way to describe processes within PERE itself. This allows for both the study of
the structure of the process for vulnerabilities, as well as for communication purposes
between different stages of PERE analysis.
The selection of this process modelling notation should not place any restrictions on the
ways in which PERE can be used. Indeed, it may not be necessary to specify only one
notation as the candidate for the job. The notation adopted for use in PERE should be
sufficient to describe the types of processes typically encountered in RE. Several such
notations are available from the fields of software engineering and process improvement.
Requirement for PERE
3 PERE should adopt an existing, simple, default notation for
describing processes, but also be able to work with other
notations where necessary (for compatibility with existing
models, for example).
6.2.5 Different viewpoints on processes
It was interesting to note that the engineering background of the
Aerospatiale staff led them to focus upon different types of vulnerabilities
of the process, some of which were not of types anticipated by the
checklist; in particular, issues relating to how process components were
interconnected.
Just as the requirements for a system may be considered from a variety of viewpoints, so
may the processes by which the system is developed (Sommerville, Kotonya, Viller and
Sawyer, 1995). For example, one viewpoint on how a process is carried out will be
embodied in an organisation’s manuals and other process documentation (how the
process ‘should’ be followed). Conversely, the individuals directly involved in
performing the process may well have their own ‘workarounds’ or particular ways of
getting the work done. It is not unreasonable to assume that these two perspectives onto
the same process may well differ in several respects. This is not to say that either one or
the other is ‘correct’, although different people with different stakes in the process may
well wish to convince the analyst that this is the case. Rather, it is necessary to recognise
that different perspectives can, and probably do, exist, and should therefore be
accounted for in some way.
This thesis adopts this perspective in recognising two viewpoints in particular onto the
requirements process. This is not to say that there are no other possible viewpoints, but
that these two provide a sufficiently detailed description of a process, in terms of its
configuration and its human component in particular, for the purpose of analysing and
proposing process improvements.
The first viewpoint considers the process as if it is executed by a machine, i.e. human
factors are not taken into account. This analysis is concerned with the breakdown of the
whole process into components and sub-components, their interrelationships, and how
the process is enacted. This viewpoint, therefore, is interested in the process in purely
mechanistic terms, regardless of whether it is carried out by humans or machines. It can
be used to build up a model of the process if none exists already, or to verify the
accuracy of any existing models.
The second viewpoint is concerned with the human aspects of the process, its
components, and sub-components, etc., and how they might affect performance of the
process. This viewpoint utilises the process model developed in the mechanistic
viewpoint in order to prune the amount of analysis required. Chapter 7 describes the
viewpoints in further detail, and how they can be put to work in a method for
requirements process improvement.
Requirement for PERE
4 PERE should recognise that several perspectives or viewpoints
may exist onto the process under examination, and should
analyse processes in terms of two viewpoints in particular:
mechanistic and human factors.
6.2.6 Step by step guidance
The Aerospatiale engineers felt overwhelmed by the size of the checklist.
They felt that, even if training had been provided (none had been in their
case), there would still be a need for guidance on its application for
analysts who had not been involved in its development.
One of the main motivations behind the development of the PERE method is the need
to encapsulate the human factors checklist within a framework which will enable
systematic application of the findings from the human factors literature, by analysts who
do not necessarily have that particular background or training. It is hardly surprising,
therefore, that there is a need for structured guidance for analysts applying PERE.
This guidance should take such a form that relatively naïve users feel that ‘their hands
are held’ through the process of following PERE. It should, however, also be
appropriate for more experienced users, who need guidance more as an aide-mémoire
during an application of PERE. This in turn means that a balance must be struck
between providing sufficient guidance for those less familiar with the human factors
literature on the one hand, and producing a method which can be followed by more
expert users without the guidance hindering their progress on the other.
Requirement for PERE
5 PERE must provide flexible guidance for users of the method
such that inexperienced analysts are led through its use,
whereas those with more experience can turn to it as a resource
for their analysis.
6.2.7 Summary
The concerns with the human factors checklist discussed above have been addressed
through the development of the PERE method in which the checklist is encapsulated.
PERE addresses the concerns (and requirements arising from them) in the following
ways:
the introduction of a mechanistic viewpoint. This helps in a number of ways:
- mechanistic analysis leads to an improved understanding of processes prior to
application of the human factors checklist (requirement 1);
- building up a model in terms of components and sub-components allows for
pruning of the number of questions required to be asked of the process as a
whole (requirement 2);
- application of the mechanistic analysis also leads to the development of a
process model, to assist with communication of results both within the
method and to stakeholders who have an interest in the results (requirement
3);
- the mechanistic analysis is iterative and structured according to the flow of
the process under consideration (requirement 5);
the mechanistic and human factors analyses can be treated easily as two separate
viewpoints, allowing for quite different perspectives to be taken on the same
process components (requirement 4);
a layered approach to application of the human factors checklist allows for a
more stepwise approach to human factors analysis, and will bring with it the
opportunity to reduce the number of questions which must be asked of each
process component (requirement 5).
PERE is described in detail in Chapter 7 and the user manual in Appendix B. The
following section presents an overview of the design of PERE, with the method
consisting of two distinct viewpoints, one which considers the process under
consideration mechanistically, and the other which pays particular attention to the role
played by humans in carrying out the process. These two viewpoints are integrated, and
the result of applying the method is a set of recommendations for improvements to the
process.
6.3 An overview of PERE
This section presents an overview of PERE, which is presented in detail in the next
chapter. In seeking to improve the requirements engineering process the method is
concerned in practice with a mixture of manual and computer supported processes
involving individual as well as group activity in an organisational context. Accordingly,
PERE incorporates two viewpoints onto the analysis process:
I) An analysis of the process in ‘mechanistic’ terms as a number of interconnected
process components. For this, classical safety analysis techniques are adapted for a
consideration of the requirements process. This viewpoint is particularly attuned
to detecting vulnerabilities in processes which have an origin in the technologies
used to support the process or in its formal, ‘mechanistic’ aspects. This analysis is
referred to as the mechanistic viewpoint.
II) An application of human factors and social sciences inspired analysis to assess
vulnerabilities and defences at an individual, group and organisational level using
the results of the classical mechanistic analysis to scope the investigation. This
viewpoint, naturally, is particularly attuned to identifying vulnerabilities and
suggesting defences where human error may be a prominent source of process
faults. This analysis is referred to as the human factors viewpoint.
By offering two viewpoints onto the analysis of processes and the detection of
vulnerabilities, PERE supports enhanced defences over methods which might adopt
only a traditional safety analysis on the one hand or only an analysis in human factors
terms on the other. PERE has also been structured so as to permit the application of
human factors knowledge in a systematic fashion, one of the major aims of this thesis.
PERE is designed to be applicable in a very flexible way in a variety of organisational
contexts. The method can be specialised in a number of systematic ways and initiated for
a variety of different purposes (see Appendix B). PERE’s basic method and the kinds of
processes and human factors issues it recognises can be adapted and extended with use
so as to identify vulnerabilities and suggest defences which are sensitive to the domain of
application and responsive to changing engineering practice or professional experience.
A schematic overview of the PERE analysis process is given below in Figure 6.1.
Amongst other things, this diagram highlights the multi-viewpoint nature of this work.
This is over and above the dual viewpoints which guide PERE analysis.
[Figure: schematic overview of the PERE analysis process. Inputs: purpose of process
evaluation (organisational goals), process models, results of empirical study, and other
process analyses. The Mechanistic Viewpoint develops a model (components, connections,
invariant, working material) and conducts a weakness review (consequences, likelihood,
barriers). The Human Factors Viewpoint analyses components (individual and group errors),
analyses connections (materials, human and organisational), describes the organisational
context, and considers violations. Outputs: identified weaknesses, suggested protection,
and a clarified understanding of the process. Multiple viewpoints are denoted throughout.]
Figure 6.1: Overview of PERE
6.4 Summary
This chapter has presented the approach taken in this thesis to the design of PERE, a
combined engineering and human factors approach to the safety analysis of RE
processes. The aim of PERE is to make the large body of human factors research
reviewed in Chapter 3 available to process improvement specialists.
Three options were considered for how to make this body of work useful to companies
developing safety-critical systems—and especially their requirements. Two of the
options concerned the skills, background and training of the people who carry out the
process improvement process. However, even if an all-round expert on the human
factors literature used in this thesis existed (and such people are likely to be few and
far between), they would very likely still require support in making use of the
information. Therefore, the third option was pursued: encapsulating the findings of
Chapters 3 and 4 into a structured checklist of human factors vulnerabilities and
defences which can be used to prompt for factors which should be addressed during a
process improvement effort.
As part of its development, the checklist was given a brief trial in industry at
Aerospatiale, a partner in the REAIMS project. This led to a number of findings
regarding the checklist’s usability. The main problems found were related to the sheer
volume of information contained in the checklist, the potentially enormous number of
questions which would have to be asked of a process during analysis unfamiliarity with
the literature, and with the process under analysis. The trial gave rise to a number of
requirements for PERE, namely:
PERE must incorporate a methodical approach to the understanding of processes
which can be used prior to the human factors checklist.
The method must provide a means of navigating through the checklist so that the
number of questions to be asked of a process are limited to only those which are
relevant.
PERE should adopt an existing, simple, default notation for describing processes,
but also be able to work with other notations where necessary (for compatibility
with existing models, for example).
PERE should recognise that several perspectives or viewpoints may exist onto
the process under examination, and should analyse processes in terms of two
viewpoints in particular: mechanistic and human factors.
PERE must provide flexible guidance for users of the method such that
inexperienced analysts are led through its use, whereas those with more
experience can turn to it as a resource for their analysis.
This chapter, therefore, has shown that the checklist can be used on RE processes, but
that problems exist in its application, especially by non human factors experts. Even
for experts, there were problems with applying the checklist to a process before
sufficient familiarity with that process had been developed. It is therefore necessary
to develop some means of assistance which removes the need for a great depth of
human factors knowledge in order to apply the checklist, and which builds in the
opportunity to gain familiarity with the process under examination.
This chapter has presented a high level design for the PERE method of process
improvement for requirements engineering. The method consists of two viewpoints
onto the process under study. The first viewpoint considers the process from a
mechanistic perspective, ignoring the nature of the agent (i.e. human or machine)
performing the tasks. This viewpoint is concerned with how components and sub-
components of the process are connected, what might happen if particular components
fail, and so on. A model of the process is also produced as a result of the mechanistic
viewpoint. The second viewpoint considers the human aspects of the process
components identified by the mechanistic viewpoint, and is concerned with the
potential for error where human activity is the cause.
The core of the method is based upon the findings from human sciences pertaining to
errors, as reviewed in Chapter 3. These findings are encapsulated in a human factors
checklist, which consists of a table of vulnerabilities, and possible defences associated
with them. It is structured into sections according to the nature of the human activity,
i.e. individual, group, organisational, and also according to the ‘working material’
concerned, i.e. documents, notations and representations. The main aim of producing
the method is to create a systematic means of applying the human factors checklist to
requirements processes.
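The checklist structure just described, vulnerabilities paired with possible defences and organised by the level of human activity and by the working material concerned, can be sketched as a simple data structure. The sketch below is illustrative only: the class, field, and entry names are assumptions for exposition, not items or terminology taken from the actual checklist.

```python
from dataclasses import dataclass

# Axes along which the checklist described above is structured.
ACTIVITY_LEVELS = ("individual", "group", "organisational")
WORKING_MATERIALS = ("documents", "notations", "representations")

@dataclass
class ChecklistEntry:
    vulnerability: str
    defences: list   # possible defences associated with the vulnerability
    level: str       # one of ACTIVITY_LEVELS
    material: str    # one of WORKING_MATERIALS

def entries_for(checklist, level=None, material=None):
    """Prune the checklist to only those sections relevant to a component."""
    return [e for e in checklist
            if (level is None or e.level == level)
            and (material is None or e.material == material)]

# Placeholder entries, purely to show the shape of the table.
checklist = [
    ChecklistEntry("slip during manual transcription",
                   ["automate transfer", "independent check"],
                   "individual", "documents"),
    ChecklistEntry("groupthink in review meetings",
                   ["appoint a devil's advocate"],
                   "group", "representations"),
]

relevant = entries_for(checklist, level="individual")
```

A sectioned structure of this kind is what makes the systematic pruning of questions, discussed earlier in the chapter, possible: only the sections matching a component's activity level and working material need be consulted.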
The method addresses several generic issues which had also been experienced during an
early trial use of the original version of the human factors checklist. These were: how to
support gaining familiarity with the process under examination; how to restrict the
number of questions which must be asked of a process to a manageable number; the
adoption of a notation for modelling and describing processes; the need to understand
processes from more than one perspective; and the need to provide effective guidance
for users of the method.
The contribution of this chapter is a method for RE process improvement which is
grounded in a broad and varied body of literature from the human sciences. The method
is a lightweight approach to incorporating an appreciation of vulnerabilities to human
error which is not restricted to one particular theoretical standpoint. The following
chapter presents a detailed description of how the method is implemented.
7. PERE in Detail: Implementation of
the Method
This chapter provides the detailed description of the PERE method, the core of which
is based upon the preceding work in this thesis on the integration of human factors
research into the endeavour of process improvement for requirements engineering
processes. The previous chapter provided a high level design for the method, which
consists of the following features:
a highly iterative approach to analysis;
analysis conducted from two distinct perspectives, or viewpoints;
a common process model shared between the viewpoints; and
step-by-step guidance to assist in its application.
The following sections expand upon the previous chapter’s description in order to
provide a detailed account of the method’s two viewpoints in terms of their
foundations, the process followed, and the deliverables produced by the process. Before
this, section 7.1 considers aspects related to the context in which PERE is applied.
Section 7.2 then presents the description of the two PERE viewpoints, and finally
section 7.3 discusses ways in which PERE can be modified to suit different purposes.
This chapter should be read in conjunction with Appendix B, which provides a more
detailed user manual for the method. What is presented here is essentially an abridged
version of the method details presented in Appendix B, as a means of understanding the
discussions of use in the following chapters.
7.1 Context of use
Before providing a detailed description of PERE, this section raises a number of
concerns and questions regarding the context in which PERE might be applied. These
are factors which must be borne in mind especially when coming to use PERE on a
particular process, for a particular purpose, but also when reading about the method, for
the purpose of building up an understanding of it.
The nature and the results of applying a process improvement method will be highly
dependent upon a number of factors external to its design. For example, for a given
process:
there may be several potential purposes for applying the method;
the people involved with applying the method, and interpreting and acting upon
its outcome, will have an impact on the method’s success; and
the maturity of the process under examination—how well it is documented, the
extent to which it is repeatable, and so on—will have an effect on the nature of
the application itself, and of the types of vulnerabilities uncovered.
This section considers the above factors in turn. Each should be attended to before an
application of PERE is embarked upon.
7.1.1 What is the purpose of PERE’s application?
An organisation may decide to enter into process assessment and improvement for a
number of reasons. It may be that they wish to do so as part of a wider improvement
strategy, or in response to one or more recognised problems, or maybe to be able to
demonstrate to potential purchasers the quality of the processes followed. Whatever the
reason, it is important to be explicit about it at the outset. Failure to do so could have
harmful consequences for the success of the improvement exercise. For example,
employees may see a PERE evaluation as a threat to their continued employment,
especially with its emphasis on errors due to individual and group working. If such a
situation were allowed to develop unhindered it is likely that any improvements
proposed as a result of the PERE analysis would be resisted by the employees as they
would feel mistrustful about the motivation behind them.
Whilst PERE has been developed with the RE process for safety-critical system
development in mind, it should be noted that its applicability is much broader than this.
PERE can be applied to any development process where dependability is important.
Dependability can be considered from a number of perspectives—typically reliability,
availability, maintainability, security and safety (RAMSS). Any one of these could be the
focus for PERE analysis, and it is important that the purpose of the PERE evaluation in
these terms is known at the outset as it will impact which parts of the process are
examined in the greatest detail, and also the nature of any defences and improvements
that may be proposed. Also, a PERE analysis could be used to highlight where processes
are more dependable than necessary—in whichever sense of dependability is currently in
effect—and could therefore be open to efficiency and cost-saving improvements.
Obviously, such improvements are of a totally different nature to those where safety is
paramount.
If the PERE analysis is in response to certain recognised problems with the
organisation’s processes, this should also be noted as it will allow the PERE analyst to
focus their efforts on the process components concerned and ignore those which do not
impact the stated problems. Such pruning of the analysis effort can lead to savings in the
cost and time taken for the evaluation to lead to useful process improvements. The
associated risk with such pruning of the analysis is, of course, that some important
aspects of the process might be ignored.
Section 7.3 is also relevant to these concerns, as it identifies a number of ways in which
PERE may be specialised in order to meet a variety of objectives. The PERE analyst
should be aware before the analysis commences whether any such factors should be
considered.
7.1.2 Who will be responsible?
The person who applies PERE in a given organisation will typically be an external
consultant who may be contracted to undertake a number of process improvement
strategies by the organisation concerned. This person is not, however, ultimately
responsible for the success or failure of the proposed improvements. This responsibility
will typically lie with the individual within the organisation who has requested the
analysis to take place. This is not to say that the PERE analyst has no part to play in
ensuring that their proposals are implemented effectively. Indeed, the manner in which
the analysis is conducted and the results presented is likely to have a great effect on the
opinions which those impacted by the proposed changes have about them, and the
likelihood that they will be accepted. Nevertheless, a major factor in the successful
implementation of improvements proposed as a result of a PERE analysis will be the
degree to which senior figures in the organisation are seen to support them. It is
therefore important for the PERE analyst to ensure from the outset, wherever possible,
that those people in the organisation with the relevant status have a stake in the success
of the analysis.
Not only will support from the organisational hierarchy assist in successful
implementation of the results of a PERE analysis, it will also help in gaining acceptance
of the need for such an exercise while it is taking place. It is important that the people
who are involved in the process, and therefore most closely affected by the results of the
evaluation, be made aware of the purpose of the analysis and motivated to fully
cooperate with the analyst. This cooperation would be hindered if doubts about job
security and satisfaction were allowed to become established. It is therefore doubly
important that the prior planning and announcement of the PERE analysis are
conducted in the best way to encourage participation by those directly involved in the
process. Further, it would be helpful to allocate responsibility for the PERE analysis to
one individual (or more, depending on the size of the analysis) so that anyone with
questions or comments, or requiring reassurance, knows to whom they can turn.
The PERE analyst, as mentioned already, is likely to be from outside of the organisation
under examination. There will be several points during the analysis, however, where
detailed knowledge of the process and application domain will be required. The means
by which this detailed information is obtained are largely at the discretion of the analyst,
in consultation with whoever is responsible for PERE within the organisation. A
number of techniques are available for the analyst to utilise, including observation,
interviews, and so on (see Appendix B). Some parts of the process may be best
performed in group settings, and others individually. The intention here is not to
stipulate which parts of the PERE analysis should be performed under which conditions,
merely to raise the issue that there may be different means by which one may conduct
the analysis, depending upon the nature of the organisation and the individuals
concerned.
7.1.3 How mature is the process to be examined?
A PERE analysis is not an exercise to be entered into lightly, and is unlikely to yield
useful results for processes of low maturity (e.g. “chaotic” processes in SEI (Paulk et al.,
1993) terminology). Before a process can be examined for its vulnerabilities in the form
of a PERE analysis, it should at least be repeatable across a number of projects, and
preferably be documented to some extent. Obviously, these are simple proposals which
would be put forward by any process improvement effort and are likely to return
benefits to the organisation in a relatively short time. No particular degree of process
maturity has been specified as a pre-requisite for PERE analysis (at least, not in terms of
SEI levels or such like schemes). However, it is likely that organisations which would
expect to use PERE as part of a process improvement strategy will have sought, or be
moving towards certification to ISO9000, or be at least equivalent to level 2
(repeatable) in the SEI Capability Maturity Model.
7.2 The PERE analysis process
Chapter 6 presented a high level description of the PERE method. In doing so, it
introduced the main features of the method, namely that it consists of two intermeshed
analysis techniques, each of which approaches the process under study from a different
perspective or viewpoint. The first viewpoint views processes in a mechanistic way, as if
they are enacted by machines. This ignores the part played by people in the process,
other than in terms of inputs, processes, outputs and interconnections between
processes. The second viewpoint is explicitly concerned with the human aspects of the
process, and where vulnerabilities to human error might lie. Together, the two
viewpoints produce a description of the process, along with a list of vulnerabilities, their
causes, and possible defences which could be put in place against them.
This section provides a more extensive and detailed description than presented thus far.
It starts by giving an overview of the analysis process, including issues of capturing
process information and coordination between the two viewpoints. Subsequently the
two viewpoints which combine together to make up PERE are described in detail. For
each viewpoint, the basis of analysis techniques used is described, along with the process
followed in application of the viewpoint, and the deliverable(s) produced as a result of
application.
7.2.1 Overview
As already stated in Chapter 6, PERE brings together two very different viewpoints to
the study of RE processes. The first viewpoint is inspired by Hazops, the classical safety
analysis technique, and it consequently takes a traditional mechanistic view of the
process under study. The other viewpoint is based upon this thesis’ review and
subsequent work on human factors in requirements engineering, and the vulnerabilities
which exist to human error due to various causes. To facilitate the coordination of these
two viewpoints, they both take a process-oriented approach to analysis. This means that
models can be shared between the two viewpoints, allowing common points of
reference; that recommendations for improvement are made directly in terms of the
process under study; and that PERE as a whole shares a common vocabulary based upon
the process.
Process descriptions in PERE are built up in terms of process components, the
interconnections between them, and the working materials which are passed along the
interconnections. This is based upon the mechanistic viewpoint analysis, and the link
back to hazard analyses in the chemical process industry is apparent. This basis, however,
is adopted throughout PERE, and means that both viewpoints can refer to the process
in common terms.
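The shared process description outlined above, components, the interconnections between them, and the working materials passed along those interconnections, can be sketched as a minimal data structure. The class and field names below are illustrative assumptions, not notation prescribed by PERE.

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    """A process component, possibly broken down into sub-components."""
    name: str
    sub_components: list = field(default_factory=list)

@dataclass
class Interconnection:
    """A link between two components, carrying a working material."""
    source: "Component"
    target: "Component"
    working_material: str  # e.g. a document passed between activities

@dataclass
class ProcessModel:
    components: list
    interconnections: list

    def materials_into(self, component):
        """Working materials flowing into a given component."""
        return [i.working_material for i in self.interconnections
                if i.target is component]

# A toy two-component requirements process, for illustration only.
elicit = Component("elicit requirements")
review = Component("review requirements")
model = ProcessModel(
    components=[elicit, review],
    interconnections=[
        Interconnection(elicit, review, "draft requirements document"),
    ],
)
```

Because both viewpoints refer to the same components, interconnections, and working materials, a structure of this kind gives them the common points of reference described above.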
The schematic overview of PERE presented at the end of Chapter 6 is repeated below
as Figure 7.1. It can be seen from this that PERE recognises the possibility of multiple
viewpoints throughout the process. What this means is that, where different
perspectives are apparent, such as in the way that the process should be followed, or the
way in which components are connected, they are recorded to allow for subsequent
examination. The reason for making this point is that, whilst it is important to recognise
the existence of differing viewpoints, it is also vital that these differences are not hidden
through attempts to ‘resolve’ them into one coherent description. The danger in doing
this is that at some later stage, for example when an incident occurs leading to an
examination into its causes, such a resolution effort will cover up points in the process
which may be vulnerable to deviations, or mistakes due to misunderstanding, for
example.
Figure 7.1 also displays an abstract representation of the two viewpoints, the main
activities involved, highlights major inputs and outputs, and coordination between the
viewpoints. The remainder of this section considers how PERE obtains information
about the process(es) under study, and how information is passed between the two
viewpoints.
[Figure: schematic overview of the PERE analysis process, repeated from Figure 6.1.
Inputs: purpose of process evaluation (organizational goals), process models, results of
empirical study, and other process analyses. The Mechanistic Viewpoint develops a model
(components, connections, invariant, working material) and conducts a vulnerability
review (consequences, likelihood, defences). The Human Factors Viewpoint analyses
components (individual and group errors), analyses connections (materials, human and
organizational), describes the organizational context, and considers violations. Outputs:
identified weaknesses, suggested protection, and a clarified understanding of the process.
Multiple viewpoints are denoted throughout.]
Figure 7.1: Overview of PERE
7.2.1.1 Process capture and modelling
The comments in section 7.1.3 regarding the degree of process maturity in the
organisation under examination imply that the organisation’s processes are expected to
already be modelled to a certain extent. If this is true, then the organisation is already
likely to be familiar with certain techniques for capturing and representing process
information. If, however, no such models exist, then it will be necessary to create them
as an early part of PERE analysis. One possible means of achieving this is provided
within the mechanistic component of PERE, but this is not intended to discount other
techniques available. For example, if the organisation is embarking on a broader process
improvement strategy, and making use of another process analysis technique, then this
can be used within PERE as the basis for subsequent analysis.
This being the case, the modelling notation or formalism in use may well differ from
those employed within PERE. This should not be seen as a problem as PERE’s tables
can be built up independently from available process information. If the organisation has
a preferred technique, then this can be used provided that the detail of the process
components, working materials, interconnections and so forth can be adequately
depicted using it.
Once more, the motivation here is to ensure that PERE is as flexible as possible in terms
of fitting in with existing practices and other contextual factors, whilst at the same time
providing sufficient resources to the analyst in order to enable completion of their tasks.
In summary, PERE interworks with process capture methods and techniques in the
following ways:
process models produced as a result of any analysis activity may be utilised by
PERE;
if the organisation has a preferred technique for modelling processes, then this can
be used provided that the detail of the process components, working materials,
interconnections and so forth can be adequately depicted using it.
Appendix B gives consideration to some of these issues in more detail.
7.2.1.2 Coordination between the viewpoints
As will become apparent in the following sections, PERE analysis is inherently iterative,
and focused upon critical process components that are particularly vulnerable to errors,
or for which the consequences of error would be particularly serious. Further, there
exists a relationship between the two viewpoints such that, especially in situations
where the process has not yet been documented sufficiently, the human factors
viewpoint is dependent upon the mechanistic viewpoint for the creation of process
models to work with. One of the effects which this could have on the process of
analysis is that potential insights into the human aspects of the process must be delayed
until a process model has been completed. Given that PERE is particularly concerned
with the RE process for safety-critical systems development, it would be unwise to
suggest cutting corners. Nevertheless, it should not be necessary to have to wait for a
completed mechanistic analysis before human factors analysis can take place. For this
reason, there are a number of possible means of achieving coordination between the two
viewpoints.
Given the iterative and modular process by which the mechanistic viewpoint builds up
models and understanding of the process, this can be taken advantage of in order to pass
information over to the human factors viewpoint at opportune moments. This can be
described in terms of the level of granularity of the process model at which information
is made available. Possible points at which process information can be passed between
viewpoints are summarised in these terms below in Table 7.1.
The following sections now present the detailed description of the two viewpoints
which combine to make up PERE, starting with the mechanistic viewpoint.
7.2.2 Mechanistic viewpoint
The need for the PERE mechanistic viewpoint arose as a result of wanting to
encapsulate the human factors checklist into a systematic method for improving RE
processes. Advantage was taken of expertise within one of the REAIMS project partners
in the field of safety analysis, and collaboration with them led to the development of the
technique for analysing processes mechanistically presented here. The technique was
subsequently refined as a result of experience with application of PERE at Aerospatiale,
and in order to improve the way in which the two viewpoints intermesh.
Table 7.1: Passing process information between viewpoints

complete: The mechanistic analysis is conducted in total before passing a complete
and detailed process model to the human factors viewpoint.

iteration: A process model is passed to the human factors viewpoint on completion
of each level of iteration of the mechanistic analysis.

significant component: A process model of a significant component of the process
is passed to the human factors viewpoint, either when completed in detail, or when
complete at a particular level of iteration.

sub-component: The information regarding a single sub-component can be passed to
the human factors viewpoint if it is seen as particularly significant and also
human intensive.
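The hand-over decision summarised in Table 7.1 can be sketched in code. This is a minimal illustration only, not part of PERE itself; the function name and its boolean arguments are assumptions made for the example.

```python
from enum import Enum

class Granularity(Enum):
    """Points at which the mechanistic viewpoint may hand information
    over to the human factors viewpoint (after Table 7.1)."""
    COMPLETE = "complete"
    ITERATION = "iteration"
    SIGNIFICANT_COMPONENT = "significant component"
    SUB_COMPONENT = "sub-component"

def hand_over_now(granularity, significant=False, human_intensive=False):
    """Hypothetical helper: COMPLETE, ITERATION and SIGNIFICANT_COMPONENT
    hand-overs occur at their scheduled points in the analysis, whereas a
    single sub-component is passed early only when it is both particularly
    significant and human intensive."""
    if granularity is Granularity.SUB_COMPONENT:
        return significant and human_intensive
    return True
```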
The remainder of this section describes the mechanistic viewpoint in terms of its
underlying concepts, the process by which they are applied in analysing a process,
and the deliverables produced as a result.
7.2.2.1 Basis of the mechanistic viewpoint
The origins of the mechanistic viewpoint’s analysis technique lie in the Hazops method
for hazard analysis of chemical processes. Hence, it is worded in terms of process
components, interconnections between them, and the working materials which are
passed along the interconnections. In the chemical process industry, these might be
valves or storage tanks, pipes, and various fluids respectively. The terms nevertheless
transfer quite well to RE processes where meetings, network connections, and
documents combine to make up the process, for example. The purpose of the analysis in
this case is the systematic building up of a process model which can subsequently be
analysed for hazardous conditions, or in other words vulnerabilities to error. The
grounding in hazard analysis in the chemical process industry provides the mechanistic
viewpoint with a vocabulary of failures, their likelihood, and consequences thereof,
which can be adapted to refer to RE processes in a systematic way.
The model which is built up by the mechanistic viewpoint is based on principles of
abstraction and modularity. This allows the model to be constructed in a top-down
manner, hiding detail irrelevant to the current level of analysis, and concentrating
separately upon significant sub-processes if deemed necessary. This also allows the
analyst to rapidly focus upon process components which are anticipated to be
particularly vulnerable to error, or critical for the safe functioning of the process as a
whole. The ability to focus effort in this manner could prove to be significant where
resources for application of PERE are very restricted in terms of either time or finances.
Whilst being a process-oriented analysis technique, the mechanistic viewpoint also
makes use of object-oriented notions. As process components are identified and entered
into the model, they are organised into a hierarchical classification according to their
process type. This classification has a single root class of component, which is specialised
into five basic subclasses—transduce, process, channel, store, and control. The choice of
these five basic classes has its roots in the Hazops approach to analysis of chemical
processes, and is designed to provide coverage of all process component types. The
definitions of these classes, along with interpretation for RE processes, where the
working materials will be forms of representation (text, notations, etc.) are given in
Table 7.2 below.
Table 7.2: Basic component classes in the mechanistic viewpoint

transduce
Definition: conversion from one form of energy to another (e.g. a thermocouple,
hydro-electric generator, AC/DC converter, electrical heater).
Applied to RE: a change of the (physical) form of the representation (transcription,
recording, scanning, compiling software, printing out), not a change in the semantics
of the representation.

process
Definition: conversion of working material(s) from one form to another (e.g. chemical
reaction, metal casting, paper cutting, image processing).
Applied to RE: transformation of the representation into a new one of the same
physical form, typically with an associated change in semantic content (editing
documents, summarising reports, testing software, analysing recordings, annotating
print-outs).

channel
Definition: transfer of working material from one function to another (e.g. a pipe,
an electrical conductor).
Applied to RE: the means by which representations are transported from one
destination (a sender) to another (a recipient) (e.g. telecommunications links,
networks supporting electronic messaging, physical paper-mail systems, etc.).

store
Definition: holds the working material for an extended period of time (e.g. a storage
tank or a warehouse).
Applied to RE: holds the working material for an extended period of time (e.g.
archives, libraries, filing cabinets, computerised databases, file servers, local
file stores, etc.).

control
Definition: includes a means to control or determine the flow of working material
from one function to another, or directs how the functioning of another component
takes place (e.g. a valve or a switch).
Applied to RE: manages how the processing of information takes place; switches the
responsibility of document production from one process-component to another; or
controls the means of storage of documents or files, etc. Control includes the
‘executive control functions’ involved in switching the flow of physical materials
and forms of representation from one set of components to another.
These five basic classes are themselves subsequently specialised into further sub-classes as
appropriate and necessary for the application domain. So, for example, a database would
be of class store, communication networks of type channel, and so on. This leads to the
creation of a hierarchy of classes which is specialised for a given application domain.
This component class hierarchy can be used as a resource for subsequent applications of
PERE, and over time a PERE analyst can build up a repertoire of component class
hierarchies as a result of their experience with using PERE.
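The component class hierarchy described above lends itself to a direct object-oriented sketch. The following is illustrative only; the specialised classes below the five basic ones (Database, CommunicationNetwork) and the class_path helper are assumptions made for the example, not PERE artefacts.

```python
class Component:
    """Single root class of the component hierarchy."""

# The five basic subclasses (Table 7.2).
class Transduce(Component): pass
class Process(Component): pass
class Channel(Component): pass
class Store(Component): pass
class Control(Component): pass

# Hypothetical domain specialisations of the kind built up over
# repeated applications of PERE.
class Database(Store): pass
class CommunicationNetwork(Channel): pass

def class_path(cls):
    """Return the 'path' from a component class up to the root, as
    recorded in the class attribute of the PERE Component Table."""
    path = []
    while cls is not Component:
        path.append(cls.__name__)
        cls = cls.__bases__[0]
    path.append(Component.__name__)
    return path
```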
Each class also has a number of attributes, which are used to describe a component’s
behaviour and properties. The attributes for each component are: class, interfaces and
working material, possible states, its invariant mode of functioning, the pre-conditions
and resources necessary for the component to function, and the external control the
component is subject to. These are described in Table 7.3 below.
Table 7.3: Component attributes

class: The class of the component together with the ‘path’ in the class hierarchy
from the component to the root class, thereby defining any further properties the
component might inherit.

interfaces: A listing of the interfaces through which the component receives and
passes on working materials, together with a specification of the type of working
material(s) passed in and out.

state (optional): The set of internal states of the component that affect its
behaviour and functioning (e.g. temperature, pressure, level, available buffer
storage, processing mode and so forth).

invariant: The fixed relationships between the interfaces (how the component’s input
relates to its output, i.e. the expected mode of functioning of the component). The
invariant identifies what the component should do.

preconditions (optional): The required conditions for normal operation (i.e.
conditions which must be true if the invariant is to hold). This includes resources
(raw materials, tools, the availability of human resources and so forth) that the
component requires for its mode of functioning to be satisfied.

control (optional): The relationships between interfaces specified in the invariant
which can be influenced by external control sources.
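A component specification of the kind summarised in Table 7.3 might be represented as a simple record. This is a sketch, not part of PERE: the field names mirror the table’s attribute headings, and the example component is hypothetical.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ComponentSpec:
    """One component specification; field names mirror Table 7.3.
    state, preconditions and control are the optional attributes."""
    name: str
    cls: str                    # class, with its path in the hierarchy
    interfaces: List[str]       # interfaces and working materials
    invariant: str              # what the component should do
    state: Optional[str] = None
    preconditions: Optional[str] = None
    control: Optional[str] = None

# A hypothetical store component from an RE process.
archive = ComponentSpec(
    name="document archive",
    cls="archive -> store -> component",
    interfaces=["in: documents of type x", "out: retrieved documents"],
    invariant="all documents of type x are archived",
)
```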
The real benefit of classifying and specifying process components in the above manner
becomes apparent when coming to consider vulnerabilities and defences for each
component of the model. PERE specifies generic vulnerabilities and defences based
upon the class of component and its specification in terms of component attributes.
Component class generic vulnerabilities allow for the rapid generation of potential
vulnerabilities for whole sections of the process which share a common basic component
class. Further, just as the five basic classes may be specialised, so may their generic
vulnerabilities. So, for example, a channel basic class may be subject to the basic
vulnerability of “wrong capacity”, while a pipe (a specialisation of channel) may be
subject to the vulnerability of “diameter too small” (a specialisation for pipes of “wrong
capacity”). Just as with the component classes themselves, the associated vulnerability
classes are also extensible on the basis of the experience of using PERE as well as
drawing on the known existence of vulnerabilities to particular classes of components
(e.g. as collected from critical incident records).
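The inheritance of generic vulnerabilities down the class hierarchy, as in the channel/pipe example, can be sketched as a lookup that walks from a class up towards the root. The dictionaries and function below are illustrative assumptions made for the example, not PERE artefacts.

```python
# Hypothetical tables of generic vulnerabilities and class parents;
# in PERE both grow with experience of applying the method.
GENERIC_VULNERABILITIES = {
    "channel": ["wrong capacity", "connection not made"],
    "pipe": ["diameter too small"],  # specialises "wrong capacity"
}
PARENT = {"channel": None, "pipe": "channel"}

def vulnerabilities_for(component_class):
    """Collect generic vulnerabilities for a class and all of its
    ancestors, walking up towards the root of the hierarchy."""
    collected = []
    cls = component_class
    while cls is not None:
        collected.extend(GENERIC_VULNERABILITIES.get(cls, []))
        cls = PARENT.get(cls)
    return collected
```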
The specification of each process component in terms of its attributes provides a further
framework to assist in the identification of vulnerabilities. PERE provides heuristics for
generating potential vulnerabilities for a process component, given its specification.
These are summarised in Table 7.4 below.
Table 7.4: Component attribute vulnerabilities

interfaces:
Interconnections: connection not actually made, wrong connection made, interface
does not allow the passage of work materials in/out of the component, interfaces
and materials are incompatible, inconsistent protocols or formats are employed.
Working materials: materials are of the wrong sort, are absent, or are prone to
degradation, contamination or damage in transit.

state (optional): Too much/too little, too big/too small, too many/too few,
over-range/under-range with respect to the component state.

invariant:
Violations of the invariant functioning of the component: negate, requantify, or
re-scope expressions in the invariant. E.g., if a store component specifies “all
documents of type x are archived”, potential vulnerabilities in the process are
that the documents are not archived (negation), that only some of them are
(requantification), or that documents of type y are archived (re-scoping).
Problematic behaviours consistent with the invariant: i.e. when the invariant
function is satisfied but hazardous conditions still emerge, e.g. reverse flow down
an intact pipe. Generally the result of the correct invariant function of a
component combining with failures elsewhere in the component (e.g. an interface or
control failure) or elsewhere in the process.

preconditions (optional): Negating, requantifying or re-scoping an expression of
the precondition (see invariant). Resource vulnerabilities can be identified by
considering the non-availability of necessary resources, the presence of an
incorrect resource, the presence of too much/too little of a required resource, etc.

control (optional): Control connections (probably with some other component of type
‘control’) are not satisfied in some way, either because the interconnection fails
or is not made, or because the control signals/messages are of the wrong type or
are inappropriately interpreted, etc.
Once again, the vulnerabilities generated for component attributes can be specialised in
the same manner as the components themselves.
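The negate/requantify/re-scope heuristics for invariants in Table 7.4 can be illustrated with a purely textual sketch over the archiving example. A real analysis would interpret the invariant’s expressions rather than rewrite their surface form; everything here, including the parameter names, is an assumption made for illustration.

```python
def mutate_invariant(invariant, in_scope="type x", out_of_scope="type y"):
    """Generate candidate vulnerability descriptions from an invariant
    by negating, requantifying and re-scoping it (Table 7.4)."""
    _, rest = invariant.split(" ", 1)  # drop the quantifier, e.g. "all"
    return {
        "negation": "not the case that " + invariant,
        "requantification": "only some " + rest,
        "re-scoping": invariant.replace(in_scope, out_of_scope),
    }

candidates = mutate_invariant("all documents of type x are archived")
```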
The final part of the mechanistic viewpoint is the way in which it addresses the
vulnerabilities once they have been identified through the proposal of defences against
them. In line with traditional safety techniques, PERE’s mechanistic viewpoint uses the
concept of risk associated with vulnerabilities identified. They are reviewed to ascertain
the risk associated with their occurrence in terms of the likelihood and consequences of
the failure actually taking place. This assessment of risk is useful in determining which
defences should be implemented when following the ALARP principle. ALARP stands
for As Low As Reasonably Practicable, and refers to the extent to which measures are
introduced to reduce the risks in a given system or process. Figure 7.2 presents a figure
to explain this further.
Figure 7.2 divides risk levels into regions. At the highest levels, risk is
intolerable and cannot be justified on any grounds. Below this lies the ALARP
region, within which risk is tolerable only if risk reduction is impracticable or
if its cost is grossly disproportionate to the improvement gained and, towards the
bottom of the region, tolerable if the cost of reduction would exceed the
improvement gained. Below the ALARP region is the negligible risk level, at which
there is no need for detailed working to demonstrate ALARP.
Figure 7.2: The ALARP principle (from Bloomfield, Bowers, Jones, Sommerville and Viller, 1995)
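The ALARP decision can be sketched as a simple classification over a risk level. This is an illustrative sketch only: representing risk as a single number in [0, 1] and the two thresholds are assumptions, not part of PERE; in practice both are set per system and per regulatory regime.

```python
def alarp_region(risk, intolerable=0.8, negligible=0.1):
    """Place a risk level into one of the regions of Figure 7.2.
    Thresholds and the numeric risk scale are assumed for the sketch."""
    if risk >= intolerable:
        return "intolerable"      # cannot be justified on any grounds
    if risk <= negligible:
        return "negligible"       # no detailed ALARP working needed
    return "ALARP region"         # reduce unless cost is grossly
                                  # disproportionate to the gain
```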
The basis of PERE’s mechanistic component, therefore, consists of the following:
Modelling the processes in terms of components and subcomponents, and their
interconnections and the working material which is passed along them.
Classification of process components according to one of five basic component
classes (process, transduce, store, channel, control), and specialising this, creating
a hierarchy of component classes.
Specification of each individual component in terms of five component attributes
(class, state, invariant, preconditions, external control).
Analysis of the process model for vulnerabilities to error, with guidance from
generic vulnerabilities for each class and attribute.
Proposal of defences against the identified vulnerabilities, with an assessment of
the risk associated with them.
In order to use these techniques systematically, PERE defines a process for the
mechanistic viewpoint, which is described in the following section.
7.2.2.2 The Mechanistic viewpoint process
This section is concerned with describing the process by which the model and analysis
described above are built up by PERE’s mechanistic viewpoint. There are six main
stages in the PERE mechanistic viewpoint process, namely:
Select relevant processes
Obtain process documentation
Specify process structure and working materials
Specify components
Identify vulnerabilities
Review vulnerabilities
Figure 7.3 presents a simple model of how these stages fit together, and also includes
one further component which concerns the passing of models and other information over
to the human factors viewpoint for further analysis. This can happen at a variety of stages,
as was described in section 7.2.1.2. Table 7.5 expands on this presentation of the
process, with brief descriptions of the stages and the questions addressed by each one.
Figure 7.3 depicts the six stages as numbered boxes (1 select relevant processes;
2 obtain process documentation; 3 specify process structure and working materials;
4 specify components; 5 identify vulnerabilities; 6 review vulnerabilities),
together with a seventh box, output to the human factors viewpoint, connected to
several of the stages.
Figure 7.3: Simplified model of the PERE Mechanistic Viewpoint process
Table 7.5: The PERE Mechanistic viewpoint process stages

Select relevant processes
Which parts of the lifecycle are to be analysed?
Processes are selected according to the customer needs and organisational goals for
the analysis.

Obtain process documentation
Is there sufficient documentation for the selected processes? If not, further
information is collected using whichever process capture techniques are most
appropriate (or preferred) to add to the existing documentation; if so, proceed to
the next stage.

Specify process structure and working materials
The PERE analyst can make use of any techniques they are familiar with (e.g. those
associated with ISO9000) to assist with capturing the necessary process information
in order to perform this step, which consists of the following three activities:
Identify components (What are the components in the process?) Components are
identified and named from the process documentation.
Classify components (What classes are the identified components?) Components are
classified into specialisations of the generic component classes (transduce,
process, channel, store, control), generating new ones where no existing class is
suitable.
Identify interconnections and working materials (How are the components connected
together and what are the materials passed between them?) The input and output
components are identified for each component, and the different classes of working
material passed between components are identified.

Specify components
What are the behaviour and properties of each component?
A row of the PERE Component Table (PCT, see §7.2.2.3) is completed for each
component, defining its attributes. The specification may be output to the human
factors viewpoint if required (see §7.2.1.2).

Identify vulnerabilities
What could go wrong with each component?
Vulnerabilities are identified according to the class and specification of the
component. Each class and sub-class of component may have generic vulnerabilities
associated with it, and similarly for each component attribute. The specification
may be output to the human factors viewpoint if required (see §7.2.1.2).

Review vulnerabilities
What are the likelihood and consequences of each of the identified vulnerabilities,
and how could they be defended against?
A row of the PERE Vulnerability Table (PVT, see §7.2.2.3) is completed for each
vulnerability, defining its class, likelihood, consequence, possible defences, and
possible secondary vulnerabilities.

Output to human factors viewpoint
Did the last iteration of vulnerability identification and review generate no
further vulnerabilities? If not, reiterate from Identify components; the current
process model may be passed to the human factors viewpoint if there is a sufficient
number of components which could be vulnerable to human errors, and the
specifications of single process components may also be passed on if they are
vulnerable to human errors. If so, pass the complete process model, consisting of
the fully completed PCT and PVT, to the human factors viewpoint.
ITERATION IN THE MECHANISTIC VIEWPOINT
One of the ways in which PERE allows the amount of analysis required by the
mechanistic viewpoint to be limited is by only identifying process components to a
sufficient level of detail and no more. The mechanism by which this is governed is called
iterative deepening. What this means in practice is that on each iteration of the analysis
components are classified down to a particular level, starting on the first iteration at the
level of basic components (process, store, transduce, channel, control). On subsequent
iterations, only those components which are suspected to be vulnerable to errors are
analysed in further detail. In this way, process analysis proceeds in a directed manner,
focusing upon the most vulnerable components. The process of iterative deepening is
illustrated in Figure 7.4.
Further information and guidance on the mechanistic analysis process, the component
classes and attributes, and iterative deepening can be found in the PERE user guide in
Appendix B.
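Selective iterative deepening can be sketched as a recursion that decomposes only those components flagged as vulnerable. The function below is an illustration under assumed interfaces (sub_components returning a component’s parts, is_vulnerable flagging suspects); it is not PERE’s actual procedure.

```python
def analyse(component, sub_components, is_vulnerable, depth=0, max_depth=3):
    """Sketch of selective iterative deepening (Figure 7.4): record a
    component, and decompose further only components suspected to be
    vulnerable, down to a chosen maximum depth."""
    visited = [(component, depth)]
    if depth < max_depth and is_vulnerable(component):
        for sub in sub_components(component):
            visited.extend(analyse(sub, sub_components, is_vulnerable,
                                   depth + 1, max_depth))
    return visited
```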
7.2.2.3 Deliverables
There are three main deliverables produced by the mechanistic viewpoint: the process
model, the PERE Component Table (PCT), and the PERE Vulnerability Table (PVT).
This section describes the preparation and format of these deliverables.
PROCESS MODEL
PERE does not preclude the use of existing process models, but where they do not
exist, or where their accuracy is in doubt, the mechanistic viewpoint can produce a
model as a result of following the process described in section 7.2.2.2. The model makes
use of an SADT-like notation, which is presented in Figure 7.5. Depending on the level
of abstraction, the model can be presented in more or less detail as appropriate. For
example, a top-level model could omit features such as interface labels and the
specification of the state and invariant as their inclusion would probably lead to too
much clutter.
Figure 7.4 shows Process A decomposed level by level: components A.1 to A.8 at the
component level; sub-components such as A.2.1 to A.2.7 and A.7.1 to A.7.3 below
them; and sub-sub-components such as A.2.3.1 to A.2.3.3 and A.7.2.1 to A.7.2.5
below those. Marked components indicate that a critical vulnerability has been
identified which requires deeper analysis, and only these branches are decomposed
further.
Figure 7.4: Process analysis through selective iterative deepening
Figure 7.5 shows a single component in the notation: a box specifying the
component’s invariant and (optional) state, with inputs and outputs of working
material passing through labelled interfaces (A to E), pre-conditions and resources
supplied to the component, and external control acting upon it.
Figure 7.5: SADT-like notation for process models in PERE
In fact, this graphical model is not intended to stand on its own as a
representation of the process. Rather, it is a graphical depiction of the complete
process model, which exists in the form of the PERE Component Table (PCT).
PERE COMPONENT TABLE (PCT)
The PERE Component Table is the authoritative model of the process under scrutiny by
PERE. As such, it is the PCT which should be turned to for points of clarification, and
where all modifications and updates to the model should be recorded. The table records,
for each process component, its classification in terms of the component class hierarchy,
its specification in terms of the component attribute headings introduced in Table 7.3,
and one additional entry—Source—which records the origin of the information in that
particular row of the table. This could be the name of an interviewee, the method used
to obtain the information (e.g. observation), a reference to documentation containing it,
and so on. The headings of the table are presented in Table 7.6.
Table 7.6: PERE Component Table Headings
Component name | Class | Interfaces and working materials | State (optional) |
Invariant | Pre-conditions and resources (optional) | External control (optional) |
Source
PERE VULNERABILITY TABLE (PVT)
The PERE Vulnerability Table records the results of the Vulnerability Identification and
Review processes. For each vulnerability identified, the PVT lists its classification in
terms of the generic vulnerabilities associated with component classes, its likelihood and
consequence (which combined form a measure of its risk), possible defences against it,
and finally any secondary vulnerabilities. These are taken to be possible vulnerabilities
which might be associated with the introduction of the suggested defences. This is a
notion which arose through development of the human factors viewpoint, where the
variety of possible perspectives on a particular problem can lead to the suggestion of
defences which themselves have vulnerabilities from a different perspective. The extra
column was included in the PVT to account for the same phenomenon in the mechanistic
viewpoint. The most likely secondary vulnerability is the introduction of further
complexity to the process, and the problems associated with complex processes. Table
7.7 presents the headings of the PVT.
Table 7.7: PERE Vulnerability Table Headings
Vulnerability | Class | Likelihood | Consequence | Possible defences |
Possible secondary vulnerabilities
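A row of the PVT, together with the combination of likelihood and consequence into a risk measure, might be sketched as follows. The three-level ordinal scale and the multiplication are assumptions made for the example; PERE itself does not prescribe a numeric scheme, and the example row is hypothetical.

```python
from dataclasses import dataclass
from typing import List

LEVELS = {"low": 1, "medium": 2, "high": 3}  # assumed ordinal scale

@dataclass
class VulnerabilityRow:
    """One row of the PERE Vulnerability Table (Table 7.7)."""
    vulnerability: str
    cls: str
    likelihood: str
    consequence: str
    possible_defences: List[str]
    secondary_vulnerabilities: List[str]

    def risk(self):
        """Likelihood and consequence combined as a crude risk measure."""
        return LEVELS[self.likelihood] * LEVELS[self.consequence]

row = VulnerabilityRow(
    vulnerability="documents of type x are not archived",
    cls="store: invariant negation",
    likelihood="medium",
    consequence="high",
    possible_defences=["periodic audit of the archive"],
    secondary_vulnerabilities=["added process complexity"],
)
```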
7.2.2.4 Summary
This section has described the mechanistic viewpoint of PERE in terms of the
techniques underpinning the viewpoint, and their source. It also described the process
followed in applying the viewpoint, and finally the deliverables produced as a result.
The mechanistic viewpoint of PERE was developed from the Hazops approach to
hazard and risk analysis in the chemical process industry. The analysis is based upon
systematically analysing a process into its components and sub-components with their
associated attributes, and then identifying vulnerabilities in the resulting model. Once
vulnerabilities have been identified, their risk is assessed in terms of likelihood and
consequence, and defences are proposed accordingly. The process is highly iterative, and
makes use of the mechanism called iterative deepening in order to focus the analysis on
the most critical process components. Component descriptions may be passed to the
human factors viewpoint at various stages during the mechanistic analysis, in order to
allow early examination of human-intensive components from the other perspective.
Three deliverables are produced by the mechanistic viewpoint: a graphical and tabular
model of the process, and the analysis of possible vulnerabilities, along with their risk of
occurrence, and possible defences against them.
This concludes the description of the mechanistic viewpoint. The following section
describes the human factors viewpoint in a similar manner.
7.2.3 Human factors viewpoint
The human factors viewpoint encapsulates the main contribution of this thesis—the
bringing together of a large and diverse body of literature from the human sciences into a
methodical means of being applied to the improvement of requirements engineering
processes. This initially took the form of the human factors checklist (see Chapter 5 for
the checklist in full), but it became apparent that the checklist on its own was not
enough. Consequently, there was a need to provide more structure to the application of
the checklist, and more support for applying the work to RE processes. The result of
this was the development of PERE: the mechanistic viewpoint provides a more
systematic means of building up an understanding, and a model, of the processes
being studied, while the human factors viewpoint builds in guidance for applying
the human factors checklist.
Just as the previous section described the mechanistic viewpoint in terms of its
underlying concepts, the process by which they are applied, and the deliverables
produced as a result, so the remainder of this section does the same for the human
factors viewpoint.
7.2.3.1 Basis of the human factors viewpoint
Unlike the mechanistic viewpoint, the human factors viewpoint is not based on an
existing approach to the analysis of processes. Rather, it is the culmination of this thesis’
work which precedes this chapter. Its foundations lie in research undertaken in a variety
of human sciences, which was reviewed in Chapter 3. This work was assessed for its
suitability for application to RE processes in Chapter 4, and subsequently developed into
the human factors checklist which was introduced in Chapter 5. Chapter 5 outlined a
number of issues and concerns to be taken account of in the development of a
method—PERE—which would enable the findings contained within the checklist to be
applied to RE processes in a systematic manner. In particular, where the human factors
checklist was concerned, a problem existed with respect to the potentially very large
number of questions it would require the analyst to ask of any non-trivial process. The
following statement was made regarding how PERE will address this problem:
a layered approach to application of the human factors checklist will allow for a
more stepwise approach to human factors analysis, and will bring with it the
opportunity to reduce the number of questions which must be asked of each
process component.
The main aim of the human factors viewpoint, therefore, is to provide a framework
within which the human factors checklist can be applied in such a way that the analyst is
provided with guidance as to which parts of the checklist are the most relevant to the
process component under consideration.
The human factors viewpoint utilises the model of the process under study which was
developed by the mechanistic viewpoint. This provides the analyst with a ready-made
set of process components, interconnections, and working materials to consider from a
human factors perspective. Alternatively, the level of granularity at which process
models are passed from the other viewpoint may be after intermediate iterations of the
mechanistic analysis, or even specifications of single components. Nevertheless, the
principle is the same in that the ‘working material’ for the human factors viewpoint is
process models to varying degrees of completeness as output by the mechanistic
viewpoint.
Once provided with the specifications of processes and/or process components, the
human factors viewpoint proceeds to examine them for vulnerabilities proposed in the
human factors checklist. To recap, the checklist structures human factors vulnerabilities
into a number of headings. These headings are related to the theoretical discipline which
the findings are supported by, and in two cases the nature of the working materials of
requirements engineers. The checklist is broken down into six main headings:
errors in the work of individuals
violations by individuals
problems with working in social groups
problems due to organisational factors
problems related to the use of documents; and
problems related to the use of notations and representations
The first of these headings is further broken down into:
skill-based slips and lapses
rule-based mistakes; and
knowledge-based mistakes.
Each of the resulting eight checklist headings considers a number of possible
vulnerabilities which may exist in a process and which can be characterised in terms
of individual, social, or organisational activity of people. Each entry in the checklist
consists of a reference number, a description of the vulnerability, suggestions for possible
defences against the vulnerability, and a reference to the section number in Chapter 3 of
this thesis where the vulnerability, and its theoretical foundations, are discussed. An
example entry from the checklist is reproduced below in Figure 7.6.
5.1 Social facilitation and inhibition (i.e. the degree and direction in which
individual performance of a task is affected by being observed).
Possible defences: consider whether the introduction of direct supervision of
activity is appropriate, and whether its gains outweigh any potential performance
losses, as might be the case for skill-based tasks. Tend not to employ direct
supervision (and prefer more indirect, deferred error-checking) for more
knowledge-based tasks.
Discussed in section 3.2.1.
The annotations in the figure identify the four parts of an entry: the unique
reference number, the vulnerability to error, the possible defence(s) against the
vulnerability, and the reference to the section number(s) in this thesis where the
vulnerability is discussed.
Figure 7.6: An excerpt from the human factors checklist
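The four-part structure of a checklist entry annotated in Figure 7.6 can be captured as a simple record. The field names are assumptions made for illustration; the example reuses the content of entry 5.1 in abbreviated form.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ChecklistEntry:
    """The four parts of a human factors checklist entry (Figure 7.6)."""
    ref: str             # unique reference number
    vulnerability: str   # the vulnerability to error
    defences: List[str]  # possible defence(s) against the vulnerability
    discussed_in: str    # thesis section(s) discussing the vulnerability

entry_5_1 = ChecklistEntry(
    ref="5.1",
    vulnerability="Social facilitation and inhibition",
    defences=[
        "Consider whether direct supervision of activity is appropriate",
        "Prefer indirect, deferred error-checking for knowledge-based tasks",
    ],
    discussed_in="3.2.1",
)
```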
The layered approach of the human factors viewpoint, alluded to earlier in this section,
is achieved in two levels. At the top level, a number of questions have been defined in
order to guide the analyst through the process of applying the checklist, and also to
facilitate the ‘pruning’ of the number of questions which must be asked of each process
component by ensuring that only relevant questions are directed to each component. For
example, it does not make sense to enquire about social group issues concerned with a
requirements engineer working in isolation. At the second level, a number of ‘key
figures’ have been developed which group some related top level questions together.
These exist to guide the analyst through particular parts of the checklist concerned with
human errors. Three of these key figures exist, one each for individual errors, group
process problems, and organisational issues. Each one provides a set of higher level
questions which direct the analyst to particular entries in the checklist, and from there
to the detail of the findings upon which the checklist is based. An excerpt from Key
Figure 2 (social group activities) is given in Figure 7.7 below. The full list of top level
questions is provided in Table 7.8 in the section describing the human factors
viewpoint process below.
Top level question: How are the contributions of the group members and the overall
products of the group evaluated?
Generic vulnerability class: Apprehension about the methods and consequences of
evaluation.
Generic defences: Match supervision to type of task, and provide means for all
opinions to be critiqued regardless of status.
Reference to specific items in the checklist: For further detail, see PERE Human
Factors checklist items 5.1 & 5.5.
Figure 7.7: excerpt from Key Figure 2 (see appendix A for full checklist and appendix B for key figures)
The vulnerabilities and defences listed in the checklist are generic, and therefore must be
treated in a similar manner to the generic class descriptions, vulnerabilities, and defences
of the mechanistic viewpoint. The particular context of the process under consideration
will have a bearing upon how applicable relevant entries in the checklist are, and
whether suggestions for defences will need to be specialised for the particular setting.
Just as with the component class hierarchy in the mechanistic viewpoint, each section of
the human factors viewpoint may be extended in the light of experience with the
application of PERE. Therefore, any specialised use of the checklist should be
considered for potential addition to the checklist, along with appropriate qualification
(see also section 7.3 on specialising PERE).
Again in a similar manner to the mechanistic viewpoint, consideration of which defences
to implement is assisted through consideration of the risk of any vulnerabilities
identified, in terms of their likelihood and consequences.
In summary, the basis of the human factors viewpoint consists of the following:
Utilisation of the process model output by the mechanistic viewpoint.
Application of research findings in the human sciences with a bearing on human
error through their encapsulation in a human factors checklist.
A layered approach to navigating the human factors checklist in order to reduce
the overhead of asking all questions of all process components.
Consideration of the human activity in each process component in terms of
individual, social, and organisational work, and of the working materials
processed.
Analysis of the activity and working materials for generic vulnerabilities as
recorded in the relevant sections of the human factors checklist.
Proposal of defences against the identified vulnerabilities, with an assessment of
the risk associated with them.
The following section presents the process by which the human factors viewpoint
satisfies the above concerns.
7.2.3.2 The human factors viewpoint process
The previous section introduced the basic features of the process to be followed in the
human factors viewpoint. Essentially, the process is driven by the process model derived
by the mechanistic viewpoint. It may be that the mechanistic viewpoint highlights
particular process components for early human factors analysis, or that some degree of
prioritisation takes place when the process model is handed over between viewpoints.
Nevertheless, the process consists of analysing each process component, in whichever
order deemed appropriate, according to the headings in the human factors checklist.
Navigation of the checklist is facilitated by asking a number of questions of each
component and using the responses to direct the analysis according to the process in
Figure 7.8.
[Figure 7.8 depicts a decision tree. Its root, 'Hazardous conditions due to human
factors' (Q0), branches into Errors and Violations. The Errors branch subdivides into:
Component problems (Q1), leading either to Individual activities (Q2; Key Figure 1:
skills, rules, knowledge) or to Group activities (Q3-6; Key Figure 2: resources,
norms, performance, evaluation); Component connection problems (Q7), covering human
and organisational connections; Problems with working materials (Q8), covering
documents, notations and representations; and Organisational context problems
(Q9-12; Key Figure 3: structure, communication, culture, learning). Each leaf refers
(Refs 1-8) into the PERE Human Factors Checklist.]
Figure 7.8: The overall structure of PERE’s human factors analysis method
In Figure 7.8, arcs descending from boxes are sometimes labelled (e.g. Q1). When this is
the case, there is a question to be asked (e.g. Question 1) which may govern the choice
to be made between arcs (and hence the next factor to consider). When an arc descends
into a rounded box labelled with a question number, the rounded box contains links
between the questions and the PERE human factors checklist. These links are depicted
in Key Figures 1, 2 and 3, which are included in Appendix B. If an arc is not labelled,
the issues summarised in the next box(es) should be regarded as obligatory. Table 7.8
below presents the full list of questions which are referred to in Figure 7.8.
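The pruning performed by the top level questions can be illustrated, very roughly, as a dispatch from answers to the relevant key figure. The function below is a hypothetical sketch whose name and signature are inventions for illustration; the method itself is defined by Figure 7.8 and Table 7.8, not by this code.

```python
def relevant_key_figure(individual: bool, organisational: bool = False) -> str:
    """Select which key figure to consult for a process component.

    Q1 asks whether a component is principally characterised by individual
    or group activity; organisational context problems (Q9-12) route to
    Key Figure 3. Illustrative sketch only, not part of PERE's definition.
    """
    if organisational:
        return "Key Figure 3: structure, communication, culture, learning"
    if individual:
        return "Key Figure 1: skills, rules, knowledge"
    return "Key Figure 2: resources, norms, performance, evaluation"
```

A dispatch of this kind captures how the layered questions ensure that only relevant checklist sections are consulted for each component, rather than asking every question of every component.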
7.2.3.3 Deliverable
As the human factors viewpoint shares the same process model as the mechanistic
viewpoint, there is no need to repeat the process model deliverables in diagrammatic or
tabular form. The only deliverable resulting from the human factors viewpoint is
therefore the description of the vulnerabilities identified from a human perspective—the
PERE Human Factors Table (PHT).
Table 7.8: PERE human factors analysis process questions
Q0: Is it suspected that this process component, its connections or its working materials
can be vulnerable to error and/or violation?
Q1: Is the component principally characterised by individual or group activity?
Q2: Is the component principally characterised by skill-based, rule-based or knowledge-
based activity?
Q3: (resources) What are the available resources for the group to fulfil its function?
Q4: (norms) How is the function of the group presented to group members and what are
the norms (specifications of what the group and its members should do) that govern
the activity of the group in executing this function?
Q5: (performance) How are the contributions of group members produced and
coordinated?
Q6: (evaluation) How are the contributions of the group members and the overall
products of the group (decisions, jointly authored documents or whatever)
evaluated?
Q7: What are the means used for interconnecting components?
Q8: What are the documents (and other forms of representation) which constitute the
working materials of the process?
Q9: (structure) What is the overall formal structure of the organisation? Is it hierarchical,
bureaucratic, heterarchical, structured on an ad hoc basis and so forth?
Q10: (communication) What are the channels and patterns of communication as they exist
in the organisation over and above those provided by formal structural reporting
relationships?
Q11: (culture) What form does the organisational culture take particularly with respect to
safety issues? For example, does a specific ‘safety culture’ exist?
Q12: (learning) How is organisational learning promoted in the organisation?
Q13: Might the process encourage or require violation?
PERE HUMAN FACTORS TABLE (PHT)
In much the same way as the PVT represents the results of the mechanistic analysis, so
the PERE Human Factors Table represents the output from the human factors
viewpoint. The main headings are virtually identical to the PVT (cf. Table 7.7) with the
one exception being that rather than identifying the vulnerability in terms of its position
in a class hierarchy, the reference from the human factors checklist is used instead. A
reference to the checklist is also used to identify any possible secondary vulnerabilities
which may be associated with proposed defences. The other main difference between
the PHT and PVT is that the PHT is split into four sections. The first—Problems with process
components—is the result of applying the checklist in the manner described in the
previous section on each process component in the process model. The second—Problems
with connections and working materials—lists vulnerabilities relating to how components of
the process are linked together. The third—Organisational context problems—presents
vulnerabilities at an organisational level. Finally, the fourth section—Violations—lists
separately the features of the process which may lead to or encourage violations of the
process as represented in the model by individuals following it. Table 7.9 presents the
headings for the PHT.
Table 7.9: PERE Human Factors Table headings
Columns: Vulnerability (checklist ref.) | Analysis | Likelihood | Consequence |
Possible defences | Possible secondary vulnerabilities (checklist ref.)
Row sections: Problems with Process Components; Problems with Connections and
Working Materials; Organisational Context Problems; Violations
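For illustration, the four sections and column headings of the PHT could be represented as follows. This is a hypothetical sketch only; the helper names new_pht and add_row are inventions of this example and do not appear in PERE.

```python
# The four sections of the PHT, as described in the text above.
PHT_SECTIONS = (
    "Problems with Process Components",
    "Problems with Connections and Working Materials",
    "Organisational Context Problems",
    "Violations",
)

def new_pht() -> dict:
    """Create an empty PERE Human Factors Table (illustrative sketch)."""
    return {section: [] for section in PHT_SECTIONS}

def add_row(pht, section, checklist_ref, analysis, likelihood, consequence,
            defences, secondary_refs=()):
    """Append one vulnerability row under the given PHT section."""
    pht[section].append({
        "vulnerability (checklist ref.)": checklist_ref,
        "analysis": analysis,
        "likelihood": likelihood,
        "consequence": consequence,
        "possible defences": list(defences),
        "possible secondary vulnerabilities (checklist ref.)": list(secondary_refs),
    })
```

Note how the checklist reference replaces the class-hierarchy position used in the PVT, both for the vulnerability itself and for any secondary vulnerabilities associated with proposed defences.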
7.2.3.4 Summary
This section has described the human factors viewpoint of PERE in terms of its
theoretical foundations, the checklist which encapsulates the findings from human
factors research, the process which guides the application of the checklist, and the
deliverable which is produced as a result of the analysis.
The bulk of the novelty of the work in this thesis lies in bringing a large number
of research findings from diverse human sciences to bear on the analysis of RE
processes with a view to their improvement. In these terms, PERE’s human factors
checklist is the culmination of the process of making the research findings available to
RE process improvement, and PERE as a whole exists to structure and manage the way
in which the findings are applied. The human factors viewpoint relies upon the
mechanistic viewpoint to build up an accurate representation (model) of the process
under analysis. Once it has a model to work with, the human factors viewpoint
systematically considers each component of the process for vulnerabilities to error from
a variety of perspectives. The results of this analysis are presented in the PERE Human
Factors Table (PHT).
This concludes the description of PERE. The following section considers the ways in
which PERE as described thus far can be altered.
7.3 Modifying PERE
This thesis so far has presented the essential concepts underlying PERE and a detailed
description of its mechanistic and human factors viewpoints. However, it is
unreasonable to suppose that PERE will be or should be applied in an entirely uniform
way across all organisations and across all process types. Organisations wishing to apply
PERE will vary in terms of the domain of their work (chemical or electronics industry,
office and factory-production work) and hence the kinds of processes they wish to
evaluate with PERE. Additionally, the purposes and expected benefits of PERE can
vary on an application-by-application basis. Accordingly, this section provides guidance
for the tailoring or specialisation of PERE to particular organisational, process and
application contexts.
7.3.1 Strategies for Specialising PERE
PERE can be specialised in a variety of ways. Four main strategies can be identified for
specialising PERE:
Adapting PERE. This concerns the adaptation of PERE as a function of process
type and of the kinds of process information which may have been captured.
Simplifying PERE. This concerns the ways in which the application of PERE
might be simplified and targeted if, say, resources for PERE are limited or only
particular kinds of process problems are of interest.
Developing PERE. It is intended that PERE can be incrementally added to in a
variety of ways to enhance its organisational utility and domain-relevance.
Improving PERE. It is intended that PERE itself should be subject to process
improvement criteria much like the processes it studies.
The next four sections expand on these strategies giving more precise examples of
components of PERE which can be specialised. It is not supposed that the strategies for
specialisation that are listed here are a final definitive set. Indeed, an organisation may
find its own ways to specialise PERE not anticipated here. This is further discussed as
one aspect of the improvement of PERE.
7.3.2 Adapting PERE
An application of PERE will involve taking the generic PERE process and moulding it
to the specific business and industrial context that the processes to be analysed are
situated in. Naturally, this raises some difficulties as we cannot be sure that PERE will
be equally applicable or intelligible in all contexts of use. This section provides some
guidance as to how PERE might be applied in a manner which recognises the variability
of different organisations and process-types.
7.3.2.1 Specialisation by Process Domain
It is not imagined that a general typology of all processes can be defined—or at least not
one that gives specific enough guidance on the details of how PERE should be tailored
to specific processes. This has the implication that finding out how to apply PERE to a
particular process will inevitably be part of its application, at least whilst an analyst builds
up application experience. PERE is, however, open and extensible. In particular, PERE
allows its users to add to the network of classes and the human factors checklist items in
ways which are idiomatic for the process and industrial sector being worked with.
Rather than stipulate any generic typology of processes, PERE recognises that, especially
initially, the artfulness of PERE analysts themselves will be an important aspect of
PERE’s use and usefulness. The task of the PERE analyst at these stages will also be
aided by using any existing data sources, survey results, process-component vocabularies
or whatever which are in use in an organisation. These (or the most relevant or effective
of them) can be used to ‘seed’ the extension of PERE’s classes and checklist items. (For
more guidance on revising the classes and checklist items, see below.)
7.3.2.2 Specialisation by Distribution of Function
Although it is not believed that an adequate generic typology of all processes can be
found, there are a number of simple distinctions between process classes which might be
of use in guiding PERE specialisation. For example, processes can be seen to vary in
terms of how they distribute function between humans and machines. Many
‘knowledge-intensive’ processes are principally conducted by human beings in
interaction with each other and through documents. A process of this sort would place a
priority on the human factors viewpoint within PERE and (perhaps) upon checklist
items relevant to documents and reading and writing them. Conversely, a process with a
high degree of automation may be most relevantly analysed by focusing on the
mechanistic viewpoint of PERE.
7.3.2.3 Specialisation by Process Complexity
Processes can also be seen to vary in terms of their complexity. Of course, the PERE
analyst must be careful of any superficial judgement of process complexity as even the
most apparently simple processes can involve much fine detail on analysis. Nevertheless,
initial assessments of process complexity can be used to specialise the application of
PERE. For example, simple processes may not require an extensive analysis with
iterative deepening or with detailed component classification. A process which is not
subject to extensive interconnections or complex interactions between components (e.g.
a process which is more ‘serial’ and ‘linear’ in nature) may not need a separate analysis of
its interconnections over and above an analysis of its process components. The
complexity of the process may also impact upon the human factors analysis. A complex
process, for example, may involve specific tasks to manage the complexity (e.g. review
meetings). These could be specifically targeted in the human factors analysis.
7.3.2.4 Specialisation by Availability of resources
The resources available to an application of PERE may well vary between one
organisation and another, or even between applications within the same organisation.
This has implications for the way in which a PERE analysis is scheduled to take place.
For example, a well-resourced application of PERE which has several people available
to undertake the analysis would be able to partition the tasks such that some may be
performed in parallel. This is one reason why the different forms of communication
between mechanistic and human factors viewpoints may be tailored to suit a particular
application.
Of course, it is also possible that resources may be too scarce for a full application
of PERE as described in section 7.2 (and also in B.2 and B.3). In such cases, the
organisation may still wish to apply a simplified version of PERE in order to obtain a
reduced yet nevertheless useful set of improvements within the scope of the application.
The following section covers the various ways in which PERE might be simplified in
order to achieve this.
7.3.3 Simplifying PERE
PERE, like any other activity in an organisation, requires the commitment of resources.
As discussed in section 4 above, it is unwise to underestimate the resources required for
an effective application of PERE. In the light of considerations of the costs and benefits
of PERE, it may be decided that a simplified or more focused application of PERE is
more appropriate than the ‘full’ description of PERE given in this document would suggest.
This section reviews the main junctures in the PERE process where an application
might be simplified.
7.3.3.1 Conducting a ‘One-Shot’ Application of PERE
Although PERE has been characterised as involving a potentially unlimited number of
iterations between the mechanistic and human factors viewpoints, much can often be
gained from a ‘one-shot’ application of PERE with just a single analysis from each
viewpoint. This may be especially useful in the early stages of PERE application to a
process to decide whether further application (and iteration) of PERE is justified.
Appendix C presents an example of such an application.
7.3.3.2 Simplifying the Mechanistic Viewpoint
The analysis from the mechanistic viewpoint can be simplified in various ways and
several of these are discussed in Table 7.10. It is worth noting though that, under some
circumstances, it might be possible to dispense with the mechanistic analysis altogether.
For example, the process may already have been analysed in terms very closely
related to those that PERE uses. Indeed, as PERE employs mechanistic analysis
concepts which have drawn on their use in methods like Hazops, it may not be
surprising if a prior analysis as part of, say, Hazops has already yielded a usable analysis.
Furthermore, it is possible that the use of one of the other REAIMS modules may have
already given a rich enough process analysis for the human factors analysis to take place.
For example, a prior use of PREview-PV might enable one to dispense with much of
the mechanistic analysis or to focus the analysis more closely.
Table 7.10: Simplifying PERE’s mechanistic viewpoint

Focusing on Specific Components and Materials. The mechanistic analysis in PERE can
also be simplified by focusing on specific components, working materials and
inter-connections, perhaps on the basis of existing incident data. If a particular
component has shown itself to be a likely source of faults, then that component could
form the main focus of mechanistic analysis. A possible strategy then would be to
iterate the analysis to greater depth for that component while also increasing the
breadth of the analysis by taking in similar components or those which the fault-prone
component is immediately connected to. In short, when there is reason to target the
analysis on particular components (or materials or inter-connections), a strategy of
iterative deepening and broadening may be preferred (cf. section 7.2.2.2). If an
organisation has information available which would enable the analysis to be targeted,
then this information should be used. It is also likely that information to focus the
analysis will naturally emerge as part of process capture. For example, people
interviewed may have informed the PERE analyst of the critical process components and
so forth.

Classifying Components Abstractly. The mechanistic analysis can be simplified by
analysing components to the more abstract levels of the network of component classes
and not seeking to classify a component in terms of the ‘lowest’ possible, most
detailed level. At the limit this would involve merely noting which of the five basic
classes the component belongs to. While this would permit only the most generic
vulnerabilities to be identified in vulnerability review and in subsequent human
factors analysis, it may nevertheless allow important defences to be implemented
swiftly.

Analysing Processes without Redundancy. PERE’s mechanistic viewpoint involves a degree
of redundancy. For example, component inter-connections are identified implicitly
within the components in terms of ‘interfaces’ as well as explicitly in their own
right. This redundancy allows cross-checking and a degree of robustness in the
analysis. However, it does add to the amount of analysis that needs to be done. It may
be decided to eliminate a separate consideration of aspects of the process already
analysed. For example, having analysed the process components, it may be decided not
to undertake a separate analysis of inter-connections. If this simplification is
undertaken, it must be recognised that certain kinds of faults (e.g. those arising
through the interaction of components) might be missed and that PERE itself is losing
a degree of defence against error. These matters should be periodically reviewed (see
below).

Prioritising Vulnerability Identification and Review. It may not be necessary to
review all of the vulnerabilities that are exposed by the PERE mechanistic viewpoint.
Some may be trivial. Others may be too costly to eliminate, or may already be
adequately protected against. Additionally, there may be other sources of information
(e.g. incident data) which can enable the analyst to identify the most plausible
vulnerabilities more effectively. The purposes of the application of PERE can also be
used to prioritise vulnerability identification and review. Vulnerabilities which
impact upon process reliability will clearly be more important than those which impact
upon process availability if reliability is the goal of the application of PERE.
Finally, as PERE’s PVT involves the separate identification of likelihood and severity
of consequence, vulnerabilities can be ordered to prioritise review by likelihood or
review by consequence, again depending upon the overall purpose of the PERE
application.

Using a Simplified PCT and PVT. Simplifying analysis and/or vulnerability review
enables simplified versions of the PCT and PVT to be used. For example, if it is
decided that the purpose of PERE is to identify working materials whose
vulnerabilities are the most consequential to the process, then rows and columns for
materials and consequences are the most important to fill out. Also, if a ‘one-shot’
application of PERE is undertaken, then the PCT and PVT will have a lesser role in
coordinating the analysis iteration-by-iteration and a greater role in expressing the
outcomes of analysis.
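The ordering of vulnerabilities for review by likelihood, by consequence, or by their combination can be sketched as a simple sort. This is a minimal illustration assuming simple numeric scales, which PERE itself does not prescribe; the function prioritise is an invention of this example.

```python
def prioritise(vulnerabilities, by="risk"):
    """Order vulnerabilities for review, most important first (sketch only).

    Each vulnerability is a dict with numeric 'likelihood' and 'consequence'
    fields; 'risk' orders by their product, reflecting the combined
    consideration of likelihood and consequence described in the text.
    """
    keys = {
        "likelihood": lambda v: v["likelihood"],
        "consequence": lambda v: v["consequence"],
        "risk": lambda v: v["likelihood"] * v["consequence"],
    }
    return sorted(vulnerabilities, key=keys[by], reverse=True)

# Hypothetical example entries on an assumed 1-5 scale:
vulns = [
    {"ref": "A", "likelihood": 3, "consequence": 1},
    {"ref": "B", "likelihood": 1, "consequence": 5},
]
```

Whether review is ordered by likelihood, by consequence, or by combined risk would depend, as noted above, on the overall purpose of the PERE application.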
7.3.3.3 Simplifying the Human Factors Viewpoint
The human factors viewpoint of PERE can also be simplified in a variety of ways. It is
also important to emphasise that simplifications to the mechanistic viewpoint have
consequences for the human factors viewpoint, as it is the mechanistic analysis’ results
which serve as input to the human factors analysis. In this way, the human factors
analysis will ‘inherit’ many of the simplifications made to the mechanistic analysis (e.g.
those made by application of PRA techniques). This may be enough to make the human
factors analysis manageable in many applications, especially as the human factors analysis
has been designed to give step-by-step guidance which ‘filters’ the number of human
factors questions which need to be asked of any component (as shown in Figure 7.8). In
Table 7.11 we identify further sites for the simplification of the human factors
viewpoint.
Table 7.11: Simplifying PERE’s human factors viewpoint

Focusing on Specific Error and Violation Types. It may be decided to focus the PERE
analysis on specific error and/or violation types. For example, it may be known that
process violations have become a common source of critical incidents in a particular
process. In such a case, the analysis could be focused on violations, and the
checklist items devoted to violations and defences against them become of paramount
significance.

Focusing on Specific Problem Types. Figure 3.4 shows the different types of problems
which might arise in a process as identified in PERE’s human factors component.
Incident data or other forms of intelligence (e.g. that gathered through an
application of PREview-PV or MERE) may enable the PERE analyst to focus on, say,
organisational context problems rather than those due to individual activities. Such
simplifications are equivalent to following only some of the links in Figure B.4.

Analysing Generic Vulnerabilities and Defences. PERE’s human factors viewpoint
presents human factors vulnerabilities and defences in a ‘layered’ fashion. For
example, the analyst can consult Key Figures 1, 2 and 3 for the definition of the most
generic vulnerabilities and defences associated respectively with individual
activities, group activities and the organisational context. A more detailed layer of
analysis and suggested defence can be found by consulting the human factors checklist.
The checklist itself can be consulted at two levels. In this way, the human factors
viewpoint can be simplified, for example, by consulting just the more generic ‘layers’
and only proceeding to lower levels if the generic analysis seems plausible.

Using the Human Factors Checklist Selectively. The checklist can be used selectively
in another way. For most of the vulnerabilities, a number of different ‘glosses’ are
given and a number of different defences suggested. Depending on the process domain,
some of these glosses on a vulnerability may be irrelevant. For example,
vulnerabilities associated with document work may be of limited relevance in a factory
production-line context. Furthermore, other sources of data may enable the analyst to
prioritise specific items on the checklist or specific ‘glosses’ within a given item.

Prioritising the Consideration of Vulnerability and Defence Types. Just as the overall
purpose of the PERE application and so forth can be used to prioritise the
vulnerability review from the mechanistic viewpoint, the same considerations can
simplify the examination of human factors vulnerabilities and possible defences.

Using a Simplified PHT. Just as the mechanistic viewpoint’s PCT and PVT can be
simplified in response to simplifications of that component, so the PHT can be
simplified to reflect a simplified human factors analysis.
7.3.4 Developing PERE
It is recognised that PERE can (indeed, should) be developed so as to more closely
match its domain of application in an organisation. This is principally done by
incrementally adding to the analysis classes used in its two viewpoints. However, the
sustained application of PERE in a particular organisation may require other forms of
development, for example the translation of its checklists and other supporting materials
(including this document for that matter) into a national language other than English or
into forms of language which are more idiomatic for the industry in question.
7.3.4.1 Specialising and Adding to the Mechanistic Analysis Classes
The mechanistic viewpoint can be principally extended by adding domain specific classes
and vulnerabilities to those outlined in this document (e.g. the five basic component
classes). It is anticipated that the basic component classes are indeed generic and can be
equally applied to many different domains of work or forms of industrial production. In
that case, extending the classes will involve adding specialisations to the basic
classes and those descended from them. However, it may turn out that a new component
class emerges which requires a new basic or ‘root’ class. Nevertheless, PERE analysts are
urged to use the network of classes which has emerged as most useful for the industry in
question as this not only facilitates the analysis process but also allows earlier results to
be reused and stored.
7.3.4.2 Specialising and Adding to the Human Factors Analysis Classes
The human factors analysis classes, the checklist items and the key questions which guide
human factors analysis can all be incrementally added to. PERE adopts a layered
approach to the presentation of the checklist which should be followed where possible.
The checklist (Appendix A) has adopted a numbering scheme to facilitate reference and
additions to it. The human factors viewpoint’s ‘layered’ approach is designed to be less
strict than the mechanistic analysis’ network of classes. The fundamental purpose of
layering the checklist is to support the simplification strategy outlined above. Additions
to the checklist should be mindful of this. It is anticipated that the human factors
checklist will change considerably with PERE application as, for example, the abstract
references to activities and documents made in its items need to be concretised by
reference to specific domain tasks and document-types found in the processes which are
analysed.
7.3.5 Improving PERE
A fundamental goal of PERE is to aid the improvement of the processes it analyses.
However, it is recognised that for PERE to reliably accomplish this, PERE itself must
be dependable and analysable in terms of its reliability, availability, maintainability,
security and safety (RAMSS). Thus, the improvement of PERE itself should be
considered as part of its specialisation. This section discusses how this can be promoted.
7.3.5.1 Reviewing the Application of PERE
The application of PERE itself should be periodically reviewed, both within a particular
application to a single process and across its applications within a given domain or
organisation. In particular, attention should be given to how PERE is being applied,
whether it is being applied ‘in full’ or in an appropriately specialised fashion, by whom it
is being applied and whether these personnel are appropriate, and so forth. Different
strategies for specialisation should be reviewed as outlined above and, when an
‘incorrect’ application of PERE has been detected, decisions should be taken as to
whether this arises through error or whether a new strategy for PERE specialisation is
required.
Organisations are encouraged to develop their own PERE Application Guide which gives
PERE analysts guidance on the appropriate strategies for PERE specialisation in the
organisation’s business domain. This Guide would then be the reference for periodic
PERE application review.
7.3.5.2 Reviewing the Utility of PERE
In addition to monitoring how PERE is applied in an organisation, it is necessary to
obtain assessments of the usefulness of PERE itself in detecting vulnerabilities and
suggesting effective defences. The usefulness of PERE needs to be assessed in reference
to the purposes of any particular application. It would not be appropriate to assess
PERE’s utility against criteria of cost savings if it was being applied with an exclusive
interest in safety in mind. The usefulness of PERE can be assessed informally (e.g.
through assessing the subjective impressions of PERE analysts and persons involved in
the processes studied by PERE) and also formally (e.g. if quantitative before-and-after
analyses of, say, incident data can be reliably made). It would be unreasonable to assess
the usefulness of all of PERE’s suggested defences, but it would be essential to the
continued worth of PERE to an organisation that its application was yielding some
overall gains.
Information on PERE’s utility should be fed back into the review of PERE application
described above at both generic domain levels as well as (if possible) in terms of PERE’s
effectiveness within a single application case. This would enable an organisation to
decide whether, for example, the commitment of more resources to PERE is justified in
the light of its usefulness or whether new application strategies should be considered and
so forth.
7.3.5.3 Applying PERE Reflexively
It is possible to apply PERE reflexively and it is suggested that this should form part of
any periodic application review. That is, as PERE is concerned with different kinds of
vulnerabilities and defences against them, it is possible to apply PERE to itself. In
particular, PERE as it is defined within an organisation’s own PERE Application Guide
should be reviewed in this fashion as this will embody the most detailed descriptions of
the PERE process as it is instantiated in a particular organisation. Such an application of
PERE should have benefits beyond simply its own direct improvement. If organisation members see a method applied evenly both to themselves and to those whose business is to evaluate processes, a culture of cooperation is more likely to be fostered. The ‘reflexive self-review’ of PERE can help in this.
7.4 Summary
This chapter has presented a detailed description of the PERE method of requirements
process improvement. In doing so, we have focused on the means by which an analyst
may use PERE. The principal reason for this synopsis is to provide a minimal grounding
in the much more extensive method description presented in Appendix B. In the
following chapter, we wish to turn to the application and use of the method as
described in Appendix B. The application of the method reported in the following
chapter represents a migration from an academic environment to the ‘real world’
context within which industrial methods are put to work.
The central feature of the following chapter is the experiences of applying PERE to an
industrial process being developed within Aerospatiale. PERE was exploited as a means
of analysing, understanding, and improving a process called MERE (Memorisation of
Experience in Requirements Engineering). The MERE process focused on developing
techniques and approaches to improving organisational learning within RE. This learning
was considered essential to many of the safety issues involved in the development of a
family of aircraft. The ability to access an actual process within the context of a
development organisation allows the opportunity to reflect on the applicability of
PERE outwith the research lab and situated within a user organisation.
8. PERE in Use: Evaluation of the
Method
So far, this thesis has described the development of PERE, a method for evaluating
processes for vulnerabilities to error and proposing defences against them, with a
particular focus on failures that originate in human activity. This chapter shifts the focus
from development to how PERE has been evaluated. It is worth stressing, however, that development and evaluation proceeded in close partnership: a strongly formative approach to evaluation drove the continuous refinement of the prototype checklists and method reported in Chapters 5, 6 and 7.
8.1 The Motivation for Assessment
In considering the evaluation of PERE, two broad approaches have been adopted, each
seeking to support the development of PERE and the understandings gained from its
application. A formative evaluation was used as part of the overall prototypical
development of the checklist. This was complemented by a more summative assessment
which sought to gain a broader understanding of the acceptability and utility of the
method when applied in practice. In the case of both formative and summative
approaches to assessment, evaluation has taken the form of applying the method to real-
world requirements processes. This sought to uncover both how effective the checklist
and method are, and also how easy they are to use by those other than the developers.
Information on the use of PERE played a major part in its formation and refinement,
which was described in detail in Chapters 6 and 7. Basically, the design of PERE was
informed by the results of formative evaluation of components of the method as they
were being developed. Initial prototypes were used in the field; the developers, in partnership with members of the field site, would reflect upon this use; and subsequent debriefing meetings would focus on the redesign of PERE. This formative evaluation of
PERE was complemented by a much more summative approach to assessment. This
took the form of using PERE on a series of industrial processes following its
development. Some of this evaluation was carried out by third parties not directly
involved in the development of PERE. These two complementary approaches to
evaluation structure the presentation in the remainder of this chapter.
It is worth stressing that the evaluation presented here is highly situated in nature.
Consequently, it provides a set of experiences drawn from an industrial organisation
rather than an experimental setting. One consequence of this is that assessment has a
strong qualitative, rather than quantitative flavour, with an emphasis on conveying
experiences of use rather than some form of measured summative assessment.
8.2 Evaluation during the development of PERE
The initial focus in the development of PERE was on the development of the human
factors checklist (see Chapter 5 and Appendix A), based upon the review of research in
the human sciences concerned with human error and related process losses (see Chapter
3). Before taking the development further into building a method around the checklist,
it was necessary to explore its utility by applying it to a real requirements process, and
using the feedback to inform the subsequent development of the checklist and
ultimately the PERE method itself. This evaluation was reported on briefly in Chapter
6, and is now returned to in more detail with the focus on the checklist's evaluation
rather than its development.
8.2.1 A First Application of the Human Factors Checklist
This first version of the human factors checklist was applied within the REAIMS
project at Aerospatiale—one of the partner sites—on a process under development in
the company for the generation of design requirements based upon reports of
experience. This section provides a brief description of the process, and then provides
feedback from this initial experience of applying the human factors checklist. The
material contained in this section results from the application of the human factors
checklist to the Aerospatiale process, and a subsequent meeting with senior safety and
reliability engineers at Aerospatiale to discuss the results and obtain feedback from them
regarding their own experiences with applying the checklist.
8.2.1.1 Aerospatiale’s MERE process
Aerospatiale are France’s major aerospace manufacturer, and are a leading partner in the
Airbus Industrie consortium, a European partnership building commercial jet aircraft. As
such, they produce a variety of civil and military aircraft, of diverse sizes and technology.
In particular, they are involved in the development of an increasing amount of computer
hardware and software technology as newer designs become more and more
technologically advanced. This is especially true for the class of aircraft known as ‘fly by
wire’, typified by the Airbus A320, where the pilots’ manipulation of the aircraft’s
control surfaces is mediated by (several) computer systems. Obviously, safety is a very
important issue for companies like Aerospatiale, and they devote a great deal of effort to
ensuring that all their products are as safe as possible. With this in mind, they have
recently implemented a process to systematise the way that the design of new products is
influenced by reports of experience with (and incidents involving) current designs. The
process is known as MERE, for Memorisation of Experience in Requirements
Engineering. Before MERE was implemented, Aerospatiale found that errors which had
been rectified for existing designs were being repeated when it came to new designs.
This was largely due to the loss of experience in design teams because of staff moving
elsewhere within the company, retiring, leaving for other organisations, and so on.
MERE consists of two separate processes. The first runs constantly and is concerned
with the generation and maintenance of a database of what Aerospatiale term
‘Rules/Recommendations’, or R/Rs. This is a two-stage process. The first stage, called
Experience Fact Collection, Coding and Recording, collects what are termed
‘Experience Facts’ from a variety of sources. The sources may be as diverse as incident
reports following an accident, passenger complaints passed on from an operator (airline),
or reports from maintenance and operations staff within the company. These facts are
then coded into a standard scheme, and stored in a centralised database. These stored
facts are then used in the following stage—R/R Elaboration and Validation—to generate
design rules based upon experience. The validation process includes two separate
committees to determine the technical as well as strategic validity of the generated R/Rs
before they are marked as ‘Applicable’ and are ready to be used in new design projects.
These committees are known as the Technical Validation Committee (TVC) and ‘Wise
Men’ Committee (WMC) respectively.
The other main process is concerned with applying relevant R/Rs to new design projects.
There will be several instances of the application process, one for each design project.
This is also a two-stage affair: the first stage—Application of R/Rs in
Design—selects relevant R/Rs for a design project, and the second stage—Verification
of Application—ensures that all the R/Rs have been correctly applied and, if not, why
not. If suitable justification for deviating from the R/Rs is given, this can lead to the
R/Rs concerned being reformulated. Figure 8.1 displays the MERE process in this
simplified form. A more detailed process model can be found in Appendix C.
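The R/R life-cycle just described (elaboration, then technical validation by the TVC, then strategic validation by the WMC before an R/R becomes ‘Applicable’) can be pictured as a simple state machine. The following sketch is purely illustrative: the class and method names are invented here and are not part of MERE or its documentation.

```python
from dataclasses import dataclass
from enum import Enum, auto

class RRState(Enum):
    ELABORATED = auto()
    TECHNICALLY_VALIDATED = auto()  # passed the Technical Validation Committee (TVC)
    APPLICABLE = auto()             # passed the 'Wise Men' Committee (WMC)

@dataclass
class ExperienceFact:
    source: str  # e.g. an incident report or an operator complaint
    text: str

@dataclass
class RuleRecommendation:
    facts: list               # the experience facts that gave rise to this R/R
    text: str
    state: RRState = RRState.ELABORATED

    def validate(self, committee: str) -> None:
        """Advance the R/R through validation: the TVC first, then the WMC."""
        if committee == "TVC" and self.state is RRState.ELABORATED:
            self.state = RRState.TECHNICALLY_VALIDATED
        elif committee == "WMC" and self.state is RRState.TECHNICALLY_VALIDATED:
            self.state = RRState.APPLICABLE
        else:
            raise ValueError(
                f"{committee} cannot validate an R/R in state {self.state.name}")
```

Encoding the ordering of the two committees as explicit state transitions captures the point that strategic validation only operates on technically validated R/Rs.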
MERE, as described above, was used to evaluate the human factors checklist for its
suitability for applying to safety-critical requirements processes. Whilst MERE may not
be a requirements engineering process in the traditional sense, it still involves engineers
undertaking a number of activities typically engaged in by requirements engineers, and
fits well with the characterisation of RE developed in Chapter 2. The following sections
present the feedback resulting from this trial application. First of all, the results of
applying the checklist by its developers are reported. Following this, feedback is included
from the safety and reliability engineers at Aerospatiale, who also tried to apply the
checklist to the MERE process (which they themselves had designed).
[Figure omitted: the diagram shows incidents feeding experience fact collection, coding and recording into an Experience Facts database; R/R elaboration and R/R validation (technical, by the TVC; strategic, by the WMC) feeding an R/R database; and R/R application during design followed by verification of application, supported by a project database.]
Figure 8.1: Simplified MERE process
8.2.1.2 Experience of Applying the Human Factors Checklist to MERE
A number of difficulties were experienced when the original checklist was applied by its
developers to MERE, despite them being very familiar with the human factors literature
encapsulated within it. They centred on the following points:
The human factors work deriving from cognitive psychological approaches, which
concentrates on slips and lapses due to alleged problems in human planning, was
especially problematic. It largely concentrates on safety issues ‘at the sharp end’ of
traditional industrial production and does not provide clear guidance for the
design of processes.
It was necessary to have a deeper understanding of MERE in order to know how
to apply much of the checklist at all. The human factors checklist in its original
form is hard to apply ‘from cold’.
Nevertheless, it was possible to use it to raise questions about MERE and to point to
some issues of process design and make some suggestions. These included:
The possibility of tape-recording meetings or critical parts of them to improve
traceability and alleviate some memory errors.
The possibility of introducing simulation activities to MERE.
The case-based approach to retrieval of relevant R/Rs for application in design,
and the coupling of (generic) R/Rs with (specific) experience facts does much to
anticipate availability and other knowledge-based biases.
However, when R/Rs are retrieved from the MERE database, adequate
attention should be drawn to the unique features of the new experience fact as
well as to those features shared with other R/Rs.
Additionally, it may be sensible that retrieval from the MERE database should
involve the retrieval of dissimilar rules as well as those ‘close by’. This will assist
users in scoping the whole problem space and give them information as to how
well ‘calibrated’ their search queries are.
It is important that MERE supports not only ‘serial traceability’ (i.e. tracing the
series of versions a R/R has gone through, together with the experience facts that
initially gave rise to it) but also what might be termed ‘lateral traceability’ (e.g.
the alternative formulations of an R/R which have been proposed but not
adopted) so that one can see the significant ‘branching’ and ‘choice points’ in the
history of an R/R as well as its ‘backwards chronology’. This will help avoid so-
called hindsight biases as well as giving a valuable resource should R/Rs need
revision. It is also a richer notion of traceability than that followed in much of
requirements engineering.
As R/Rs accumulate (and more and more of an engineer’s practice may be
governed by a verified MERE R/R), it will be necessary to be watchful for
violations. A senior Aerospatiale safety engineer pointed out that the
analysis of new experience facts is closely tied to the retrieval of existing R/Rs
from the database and that this may provide an internal barrier against the
unnecessary proliferation of R/Rs. It is important to monitor MERE when it is
applied to see if this hope is fulfilled.
In terms of group process issues, there is a case for introducing ‘devil’s advocates’
and facilitators to meetings.
Steps should be taken within MERE to preserve minority opinions. In some
respects this is already accounted for as all alternative interpretations are retained
in MERE. Specific moments might be designed into meetings to allow minority
opinions to be raised and discussed.
The consideration of the organisational aspects of dependable processes as
covered in the checklist led to consideration of the issue of training in particular.
It is an open issue just how much training MERE itself will require. It seems that
the role of MERE correspondent (the member of each division who has
responsibility for coordinating the application of MERE) will require some
assistance, but even here MERE may be able to capitalise on existing skills within
the organisation.
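The distinction drawn above between ‘serial’ and ‘lateral’ traceability can be illustrated with a minimal version-history structure, in which each adopted version keeps a link to its predecessor and to the alternative formulations that were proposed but not adopted. All names here are invented for illustration and do not appear in MERE.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class RRVersion:
    text: str
    # Serial link: the previous adopted version of this R/R.
    parent: Optional["RRVersion"] = None
    # Lateral links: formulations proposed at this choice point but not adopted.
    alternatives: List["RRVersion"] = field(default_factory=list)

def serial_trace(version: RRVersion) -> List[str]:
    """The 'backwards chronology' of an R/R: each adopted version in turn."""
    chain: List[str] = []
    current: Optional[RRVersion] = version
    while current is not None:
        chain.append(current.text)
        current = current.parent
    return chain

def lateral_trace(version: RRVersion) -> List[str]:
    """The alternative formulations considered but rejected at this point."""
    return [alt.text for alt in version.alternatives]
```

Keeping the rejected alternatives attached to each choice point is what allows later reviewers to see why a formulation was adopted, which is the hindsight-bias defence suggested above.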
In its current form, MERE in many respects embodies much of what one would wish to
recommend in the design of a safe process. For example, it is a method of experience
capture and organisational learning and both of these are important for promoting safety
in organisations (see Chapter 3). In MERE, an R/R, its experience fact(s) and the
‘means of compliance’ all circulate together after elaboration. This approach does much
to ensure that R/Rs are well understood and applied and that many so-called
knowledge-based or rule-based mistakes may be circumvented thereby. Furthermore,
MERE is already designed so that many professions are represented on its various
committees. This should do much to prevent the establishment of groupthink in MERE.
Organisationally, MERE does include much redundancy and cross-checking and mixes
decentralisation and centralisation in an interesting way. However, MERE—in this
regard—especially capitalises on the safety culture and organisational form of
Aerospatiale itself. The implementation of MERE in a different organisational context
may well require more specific organisational aspects of safety to be attended to for
MERE to become a dependable process.
8.2.1.3 Aerospatiale’s Experience Applying the Human Factors Checklist to MERE
While some difficulties were experienced in applying the human factors checklist to
MERE (difficulties which in any case prompted revision of the checklist), it was possible to
use it to point to issues which would not otherwise have arisen in discussion with the
Aerospatiale engineers. However, the Aerospatiale engineers found the checklist very
difficult to apply. Their difficulties centred on two issues:
The checklist draws attention to many points and is consequently long. If they
were to ask every question of every sub-process, they would end up asking
thousands of questions, which might well not be the most effective way to
organise their thinking about human factors issues!
They encountered difficulties applying some of the human factors work which
seemed to have an origin in the study of individual tasks in traditional industrial
production and manufacturing settings.
The first point had not proved problematic for the checklist’s developer, because a
somewhat different strategy had been adopted from that employed by the
Aerospatiale engineers. Rather than apply every issue to every sub-process, the checklist
was surveyed and it was determined whether there were sub-processes to which the
issue highlighted in the checklist obviously applied. However (and this is an important
point), this would also have led to encountering the thousands of questions problem
were it not for a familiarity with the human factors literature. This made some points
obviously apply to some sub-processes and obviously not to others. This is not claiming
that the people at Aerospatiale are less intelligent! Rather, they are not so well practised
in applying the literature that the human factors checklist is based on. Reciprocally, they
are far more practised in issues in the aircraft industry. The tables would be turned were
the checklist’s developer confronted by a checklist concerning some aspect of aircraft manufacture.
What is clear from this discussion is that having just one way in to the human factors
checklist is not appropriate. A checklist like this is of use to remind human factors
specialists of the issues they need to consider. On the other hand, people from other
backgrounds and with other interests would benefit from a simplified (even at the
expense of abstraction) series of questions to ask of any process or sub-process. The
Aerospatiale engineers advised that 3 or 4 would be the maximum number of questions
that they could exhaustively ask of every sub-process. Not only would this reduce the
effort of applying the checklist, it would also take it closer to the current techniques
used in safety analysis of hardware or software which they are familiar with, and
therefore fit in better with their present working practices.
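The contrast between the two application strategies discussed above, asking every checklist question of every sub-process versus asking only those questions an analyst judges relevant, can be made concrete with a small sketch. The function names and the predicate are hypothetical, introduced only for this illustration.

```python
from typing import Callable, List, Tuple

def exhaustive(subprocesses: List[str], checklist: List[str]) -> List[Tuple[str, str]]:
    """Ask every checklist question of every sub-process (the strategy the
    Aerospatiale engineers attempted): the question count grows multiplicatively."""
    return [(sp, q) for sp in subprocesses for q in checklist]

def targeted(subprocesses: List[str], checklist: List[str],
             applies: Callable[[str, str], bool]) -> List[Tuple[str, str]]:
    """Ask a question only where judgement (modelled here as an explicit
    predicate) says it plausibly applies: the developer's strategy."""
    return [(sp, q) for sp in subprocesses for q in checklist if applies(sp, q)]
```

With, say, 10 sub-processes and 300 checklist items, the exhaustive strategy yields 3,000 questions, which illustrates the ‘thousands of questions’ problem the Aerospatiale engineers encountered; the targeted strategy depends entirely on the quality of the predicate, which is exactly the familiarity with the human factors literature discussed above.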
8.2.2 Summary of PERE's formative evaluation
The application of the human factors checklist by myself and the Aerospatiale engineers
provided useful feedback for the further development of what ultimately became the
PERE method. Two main features of the method were developed as a direct result of
this evaluation stage:
Step-by-step guidance to assist with navigation of the human factors checklist,
especially for people less familiar with the human factors literature it is derived
from and which it refers to.
Analysis from an alternative, mechanistic, viewpoint in order to develop a process
model and deepen the understanding of the process under examination.
The resulting method was evaluated summatively following on from its development.
8.3 Evaluation following the development of PERE
In the development of interactive systems, summative evaluation is recognised as being
of less value than formative evaluation (Newman and Lamming, 1995) for the obvious
reason that once the development has ceased there is less opportunity for the results of
the evaluation to inform the design. Nevertheless, it is still useful to exercise the method
in its entirety in its final form. For PERE, this evaluation took the form of two
applications of the method on two different processes. The first was a repeat application
on Aerospatiale’s MERE process, and the second was carried out by a third party
focusing on the process by which international standards are developed.
8.3.1 PERE analysis of Aerospatiale’s MERE process
The MERE process, described above, is another member of the REAIMS family of
modules. Its main aim is to improve an organisation’s requirements process by
systematising the process of learning from experience in such a way that incidents are
reported and subsequently lead to requirements being generated. These requirements
(Rules/Recommendations or R/Rs in MERE terminology) are developed to cover
related incidents and should prevent their recurrence. The MERE process defines the
life-cycle of these R/Rs from collecting incident data through elaboration and validation
to application and verification.
8.3.1.1 Organisational aspects of MERE
It became apparent during the earlier work with MERE that the organisational context
in which it has been developed (Aerospatiale) has influenced its design in a number of
ways. Additionally, various key terms in MERE have a clear sense within Aerospatiale
but might seem somewhat opaque to members of other organisations.
Understanding these issues became important in making sense of MERE.
THE ORIGINS OF MERE
MERE has been developed within Aerospatiale in response to clear and expressed needs
within that organisation. It is important to appreciate that the arguments embodied in
MERE are not merely arguments from the point of view of principle but have a clear
practical origin in the organisation. For example, Aerospatiale experienced difficulties
with transferring expertise across projects when turnover of personnel, whether through
reassignment or, for example, retirement, was high. Aerospatiale already
had a database recording details of incident reports, but these records did not provide sufficient
detail to be of any real use in the design process. Hence, there was a tendency to produce
solutions to make particular aircraft work, but lacking the generality to be applicable in
the design of a new aircraft. Projects at Aerospatiale were often concluded with
retrospective ‘synthesis’ review documents. However, these—in part because
they were written at the end of projects—were an inadequate means of passing on
experience. Additionally, people at the outset of a new project were often deterred by
the prospect of having to read several end of project reviews from earlier projects as a
means to acquire the past expertise. Accordingly, Aerospatiale personnel see MERE as
an approach to process improvement of just the right kind to capture and efficiently
transmit expertise and experience.
FITTING MERE WITHIN EXISTING PROCESSES
Not only did MERE arise out of clear needs within Aerospatiale to improve the
memorisation and reuse of experience, it was also designed to fit in with existing
working practices and organisational structures. For example, the term
rules/recommendations was already familiar to designers, who work to such rules for
implementation of designs (in fact, it is envisaged that MERE R/Rs will migrate over to
design or implementation R/Rs as they become longer established). Also, the role of
MERE correspondent is reflected in the way that information regarding computing and
information technology issues is disseminated throughout the organisation via a
computer correspondent in each service/profession. The similarity between these two
roles is such that computer correspondents may well take on the MERE correspondent
responsibilities as well.
THE DISTINCTION BETWEEN GENERALISTS AND SPECIALISTS
MERE documentation makes much of the distinction between ‘generalists’ and
‘specialists’. It is important to realise that these terms have an exact sense in
Aerospatiale. On the basis of the details reported in the documentation (Branet and
Durstewitz, 1994), it is easy to misunderstand the relationship between generalists and
specialists as a hierarchical one. That is, that generalists are in some sense in a higher
organisational position where they can oversee a variety of service-providing specialists.
This is a false view of MERE, though one which is easy to fall into. In Aerospatiale,
generalists and specialists are two ‘dimensions’ of a matrix management organisation. In
this way, generalists and specialists are not necessarily distinguished in terms of
organisational hierarchy and status. Indeed, this is understood to be rarely the case.
Rather, they are distinguished in terms of their different views onto problems which
may well be held in common. For example, a generalist department, such as safety, will
be concerned with several specialist departments at different stages of a design, e.g.
hydraulics, avionics, and so on. An individual within a generalist department will be
concerned with one specialism at any one time, but they will be moved around between
specialisms as a matter of course in order to build up their overall expertise. This matrix
management philosophy—with the consequence that problems, incidents and so forth
might come to the attention of two groups of people—introduces a requisite organisational
redundancy into the application of MERE which should promote its dependable
application.
8.3.1.2 Experience of applying PERE to MERE
PERE was applied to MERE towards the end of both of their respective development
efforts as part of the evaluation of both modules. The PERE analysis of MERE
reported here is mainly based upon process documentation distributed within the
REAIMS project (Branet and Durstewitz, 1994; Branet and Durstewitz, 1995)
supplemented by information gathered from discussion with the developers of MERE.
As shown below, PERE detected a number of possible vulnerabilities in MERE and the
defences which MERE does or could provide are discussed. However, it must be
emphasised that a full application of MERE has not been witnessed. Accordingly, details
about how MERE is realised in practice could not inform this PERE analysis. In
particular, this means that the PERE analysis could not point out human factors
problems which may only be known from a study of the usage of MERE. This
constraint especially impacted upon the analysis of the documents (e.g. Experience
Record Sheets) that support the MERE process. How the design of these relates to their
usage (and any potential vulnerabilities therein) can only be properly understood by
following MERE through an entire process cycle.
It is also to be noted that the MERE process at Aerospatiale depends upon other,
established processes in R/R application and verification. As these, strictly speaking, are
external to MERE itself and not specified in MERE documentation, such process
components were noted but not analysed in depth.
The analysis produced a top-level process model for MERE, developed from the MERE
documentation, a completed PCT and PWT derived from mechanistic viewpoint
analysis, followed by a completed PHT derived from human factors viewpoint analysis.
These are presented in full in Appendix C. For clarity, the analysis presented in Appendix C
is the result of a single iteration of PERE’s two viewpoints. PERE table entries where
further iterations of PERE would need to be done are explicitly noted. As just a single
iteration is presented, the suggested defences remain generic. The exact details of a
defence to be actually implemented could only be derived after further deepening of the
PERE analysis. Finally, it is to be noted that the human factors analysis concentrated on
possible sources of human error in the definition of process components in MERE. The
analysis did not reveal any significant opportunities for any violations other than
harmless ones.
8.3.1.3 Summary of PERE Analysis of MERE
Appendix C presents the full results of a single iteration of PERE analysing
Aerospatiale’s implementation of MERE. As, for clarity’s sake, only a single iteration is
shown, the analyses and suggested defences remain at a fairly generic level of detail.
More precise recommendations and suggestions could be derived with further iterations.
Even so, a number of points came to light on analysis.
(1) The PERE mechanistic viewpoint on MERE (summarised in the PCT and PWT)
revealed only a few sites of significant vulnerability that are not already anticipated
in the design of MERE. This is doubtless because MERE has been designed with the
explicit concern of developing a dependable process. Thus, it manifests many
feedback loops and interrelations between process components which encourage
error checking and reduce the likelihood that errors, if they do occur, will propagate
through the whole MERE process.
(2) The sites of possible vulnerability that have been found in the mechanistic analysis
are largely concentrated on process components to do with the ‘Wise Men
Committee’ (WMC) and concern possibilities that, for example, erroneously
validated R/Rs might be passed on to application in design. Accordingly, processes
should be considered whereby erroneous R/Rs which pass through the WMC or
mistakes of recording of WMC’s decisions might be trapped. For example, feeding
back the summary of results of the WMC to the WMC may help trap some of these
possible sources of error (e.g. giving the WMC print outs from FichEyes—the tool
developed to manage and interact with the R/R databases).
(3) The human factors viewpoint (summarised in the PHT) points out a number of
possible sites of error. However, again, most of these will not be severe as MERE
itself incorporates checking mechanisms and the organisational context at
Aerospatiale is one of a strong safety culture with a matrix management structure
which supports dependability at the organisational level.
(4) The main suggestions which the PERE analysis makes are devoted to the design of
meeting procedures to ensure that the TVC (Technical Validation Committee) and
the WMC do not yield biased, inaccurate or inappropriate decisions. The analysis also
identifies the possible need for training in aspects of the MERE process, especially in
the procedures associated with recording of experience facts. Finally, the analysis
points to MERE management itself as a potential site of failure. For the most part,
MERE is a self-managing process in that the output of one component directly
initiates action in the next component(s) it is connected to. However, it is possible
for the process to halt or for errors to propagate where components are connected
by or require proactive MERE management activity (e.g. judging that a sufficiently
large ‘batch’ of technically validated R/Rs has accumulated to justify the
convocation of a meeting of the WMC). Such vulnerabilities can be counteracted by
making MERE management the explicit responsibility of a team of individuals,
rather than the responsibility of a single MERE manager. It is understood that this is
the way that Aerospatiale are implementing MERE. Accordingly, this will do much
to eliminate the likelihood of single point failures which can originate in process
management itself. Nevertheless, it is important that the management of MERE be
given explicit consideration for its dependability in any implementation of it.
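The argument for team rather than single-manager responsibility can be given a crude quantitative gloss. Assuming (strongly) that each of n responsible individuals independently misses a required management action with probability p, the action is lost only if all of them miss it. The function names below are invented for this illustration.

```python
def single_manager_failure(p: float) -> float:
    """Probability a management action is missed with one responsible manager."""
    return p

def team_failure(p: float, n: int) -> float:
    """Probability the action is missed when n members are each responsible:
    all n must miss it. Assumes independent failures, a strong assumption."""
    return p ** n
```

For example, with p = 0.1 a three-person team misses the action with probability 0.001 rather than 0.1. Real failures are rarely independent, so this overstates the benefit, but it illustrates why distributing responsibility reduces single-point failures in process management.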
8.3.1.4 Summary of PERE’s application to MERE
In contrast with the previous application to MERE, which only focussed on the human
factors checklist, this application also exercised the components which were added to
PERE as a result of the formative evaluation. Whilst it was necessary to do this in order
to evaluate the whole method, this thesis’ focus is on the human factors viewpoint in
particular, and so the experience of applying this part of the method is of particular
interest.
MECHANISTIC VIEWPOINT
Following the mechanistic viewpoint was relatively straightforward, and was assisted
by the detailed documentation available for the MERE process. The high quality of this
documentation was related to the fact that MERE itself was in the process of being
created and refined, and great effort had been made to produce adequate documentation
with which to market the process to other organisations. Whilst the novelty of the process
meant that there was little or no information regarding actual use of MERE, it also
meant that deviation of practice from documented procedures was not an issue
here. The application of PERE to longer-established processes would have to determine
how accurately the process documentation reflects actual practice.
The process model (see Figure C.1) was produced without difficulty from the MERE
documentation, and the PERE Component Table (PCT) was derived from this. A
number of issues arose out of the creation of the PCT, largely related to the mechanistic
viewpoint’s origins in the chemical process industry:
deciding between a component class being of type process or transduce was not
always clear, and at times felt contrived. For example, does validation involve a
physical change in the working material—from R/Rs to validated R/Rs—or are
they processed and altered semantically?
it was difficult to enter a state for the human-intensive processes in particular,
which made up a large proportion of the process.
the nature of the MERE process meant that control was largely manifest via the
passing of working material from one component to the next. Components that
explicitly control the process were therefore the exception.
The creation of the PERE Vulnerabilities Table (PVT) from the PCT was
straightforward. Generic vulnerability types were used for each of the components based
on their classification, and a likelihood of the failure occurring was assigned. This was
largely based on common-sense judgement, grounded in knowledge of the components,
the design of MERE, and the organisational context of its use within Aerospatiale.
The likelihood of occurrence was used to filter out which components required further
analysis in the PVT in terms of their consequences.
On reflection and subsequent consultation with partners on the REAIMS project, it was
felt that this ordering of the consequence analysis would be better if reversed. Rather
than filtering on likelihood, the consequence of a vulnerability is usually used to
determine whether further analysis is required. Vulnerabilities with inconsequential
outcomes can therefore be dismissed at first, before studying the more dangerous
vulnerabilities in more detail. Likelihood is then used to focus the analysis on serious
vulnerabilities which may realistically occur. Steps are then taken to either remove the
hazard, mitigate its consequences, reduce its frequency of occurrence, or ultimately
provide emergency and recovery procedures should it occur (Bloomfield, 1995).
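The reversed ordering suggested above can be sketched as a simple triage routine. The record fields, scale values, and default thresholds below are hypothetical illustrations of the logic, not part of PERE's or the REAIMS project's specification:

```python
from dataclasses import dataclass

# Hypothetical ordinal scales, least to most severe/likely (illustration only).
CONSEQUENCE_ORDER = ["negligible", "marginal", "serious", "catastrophic"]
LIKELIHOOD_ORDER = ["implausible", "possible", "probable"]

@dataclass
class Vulnerability:
    component: str        # e.g. "2.2.1.2 Level 1 validation"
    description: str
    consequence: str      # one of CONSEQUENCE_ORDER
    likelihood: str       # one of LIKELIHOOD_ORDER

def triage(vulnerabilities, min_consequence="serious", min_likelihood="possible"):
    """Filter first on consequence, then on likelihood: dismiss
    inconsequential outcomes before focusing the analysis on serious
    vulnerabilities that may realistically occur."""
    severe = [v for v in vulnerabilities
              if CONSEQUENCE_ORDER.index(v.consequence)
                 >= CONSEQUENCE_ORDER.index(min_consequence)]
    return [v for v in severe
            if LIKELIHOOD_ORDER.index(v.likelihood)
               >= LIKELIHOOD_ORDER.index(min_likelihood)]
```

Vulnerabilities surviving such a triage would then be candidates for hazard treatment: removing the hazard, mitigating its consequences, reducing its frequency of occurrence, or providing emergency and recovery procedures.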
HUMAN FACTORS VIEWPOINT
The improved navigation built into the human factors analysis proved helpful in its
application, even for someone already familiar with the human factors literature
contained within the checklist. Using the components generated by the mechanistic
viewpoint, contained within the PCT, the PHT was straightforward to construct. The
human-intensive nature of MERE meant that each of the top level components in the
process model had a sufficient human component to warrant analysis from the human
factors viewpoint. In most cases, the dependability designed into the MERE process
already acted as a defence against the vulnerabilities identified, and it was often also
possible to suggest further defences based on the checklist. There were only three
components for which no defences were already in place. Their entries in the PHT are
repeated below in Table 8.1.
Table 8.1: Human factors vulnerabilities without existing defences in MERE
(Vulnerability references are given in parentheses after each analysis entry; defences already implemented in MERE appear in parentheses.)

Name: Component 2.2.1.2, Level 1 validation.
Analysis: Group coordination failures and process losses leading to incorrectly validated R/Rs and/or inaccurate rationales (5.1-5.9).
Likelihood: Possible.
Consequence: Incorrectly validated R/Rs for WMC review.
Possible defences: Review the design of and procedures for the TVC to minimise the likelihood of (e.g.) status, motivational, leadership problems etc.
Possible secondary vulnerabilities: None.

Name: Component 2.2.2.2, Level 2 validation.
Analysis: Group coordination failures and process losses leading to incorrectly validated R/Rs (5.1-5.9).
Likelihood: Possible.
Consequence: Incorrectly validated R/Rs for possible application.
Possible defences: Review the design of and procedures for the WMC to minimise the likelihood of (e.g.) status, motivational, leadership problems etc.
Possible secondary vulnerabilities: None.

Name: Component 3.3, Review and record deviations.
Analysis: Group coordination failures and process losses leading to incorrectly accepted or rejected deviations (5.1-5.9).
Likelihood: Possible.
Consequence: Incorrectly accepted or rejected R/Rs or deviations recorded in project and/or MERE database.
Possible defences: Review the design of and procedures for review meetings (etc.) to minimise the likelihood of (e.g.) status, motivational, leadership problems etc.
Possible secondary vulnerabilities: None.
In all three cases, the vulnerability is of the same type, namely group coordination
failures and process losses. It is interesting to note also that all three components are
concerned with checking and reviewing results of other parts of the MERE process.
This seems to imply that the designers of MERE saw meetings as a defence against
possible errors or mistakes in individual work, but had overlooked the potential for
vulnerabilities associated with working in groups and teams. This is therefore a good
example of the checklist shifting the focus away from concern with ‘traditional’ human
errors associated with individuals (adequately covered already in MERE), towards
broader concerns with social and organisational issues.
Utilising group work to counteract individual human errors is not, of course, without
problems. In the PHT, two of the defences proposed had associated secondary
vulnerabilities related to group coordination failures and process losses.²¹ These are
included below in Table 8.2.
Table 8.2: Proposed defences with secondary vulnerabilities
(Vulnerability references are given in parentheses after each analysis entry; defences already implemented in MERE appear in parentheses.)

Name: Component 2.1.1, Preliminary elaboration of R/R.
Analysis: Individual, knowledge-based errors by generalists leading to poor R/Rs (3.1-3.6).
Likelihood: Possible.
Consequence: Inapplicable or unreliable R/Rs.
Possible defences: Introduce group processes whereby the elaboration of R/Rs is examined and critiqued to avoid biases in elaboration. (The checking of individual elaborations by a group of generalist-engineers is often implemented in MERE.)
Possible secondary vulnerabilities: Group failures and errors (5.1-5.9).

Name: Component 2.1.2, Analysis of R/R by specialists.
Analysis: Individual, knowledge-based errors by MERE correspondents and/or specialists leading to poor R/Rs (3.1-3.6).
Likelihood: Possible.
Consequence: Inapplicable or unreliable R/Rs.
Possible defences: Introduce group processes whereby the elaboration of R/Rs is examined and critiqued to avoid biases in elaboration and pre-validation. (This is accomplished in MERE by iterating between specialist critique and synthesis by MERE correspondents.)
Possible secondary vulnerabilities: Group failures and errors (5.1-5.9).
21 Secondary vulnerabilities do not appear in the previous excerpt as the defences already exist as part of
MERE’s specification and are not being proposed as a result of PERE analysis.
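The bookkeeping implied by the PHT, where a proposed defence can itself carry secondary vulnerabilities needing a further round of analysis, can be sketched as a simple data structure. All field names and the helper below are hypothetical, chosen only to illustrate the idea:

```python
from dataclasses import dataclass, field

@dataclass
class Defence:
    description: str
    already_in_mere: bool = False   # parenthesised defences in the PHT
    secondary_refs: list = field(default_factory=list)  # e.g. ["5.1-5.9"]

@dataclass
class PHTEntry:
    component: str
    vulnerability_refs: list
    likelihood: str
    consequence: str
    defences: list

def open_secondary_vulnerabilities(entries):
    """Collect secondary vulnerabilities introduced by *proposed* (not yet
    implemented) defences; these would need a further round of analysis."""
    return {(e.component, ref)
            for e in entries
            for d in e.defences
            if not d.already_in_mere
            for ref in d.secondary_refs}
```

This reflects the distinction drawn in footnote 21: secondary vulnerabilities attached to defences that already exist as part of MERE's specification are not re-opened by the analysis.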
One notable problem already referred to in this chapter is that associated with points at
which MERE makes use of existing processes from elsewhere in the organisation. This
problem is one of scoping and is likely to feature in other applications of PERE. To be
able to think of an application of PERE as being ‘complete’, it would be necessary to
also examine processes that are not necessarily a part of the process under examination.
Of course, in serious use it is unlikely that a process such as MERE would be
artificially separated from the rest of an organisation’s development process.
Nevertheless, it is worth noting here that there may be points at which the analysis must
be curtailed and flagged as such where, for example, processes cross organisational
boundaries and it would not be possible to impose any defences on third parties (or
access to their processes may be problematic).
Finally, it is notable that, for almost all of the possible vulnerabilities highlighted in the
PHT due to organisational issues, MERE already had sufficient defences in place
or, indeed, the vulnerability was exactly what MERE itself was intended to address. The
exception to this is the possibility of a single point of failure in the management of
MERE through the role of the MERE correspondent, mentioned above in point (4) (on page 185).
8.3.2 PERE applied to the standards production process
In addition to my own application of PERE described above, there have been other
applications carried out by third parties within other REAIMS partner organisations
(Bloomfield, Bowers, Emmet and Viller, 1996; Emmet, 1996; Märtins, Schippers, Viller
and Sawyer, 1996). One in particular has been written up in detail as part of PERE’s
evaluation. This section describes how PERE was applied to the process by which
international standards are produced.²²
8.3.2.1 Why choose the standards process?
The choice of the standards process as an application for PERE is interesting and
appropriate for a number of reasons. First and foremost, whilst once again it is not what
would traditionally be considered an RE process, the process by which standards are
produced is essentially the same as RE. It involves the understanding of a domain
requiring support (from a standard rather than a piece of software) and requires a team
of people to collaborate in the production of a document which specifies the standard so
that it can be used by organisations working within the area concerned. Where it differs
from many, if not most, instances of RE is the scale of the operation. Team members
will typically not belong to the same organisation, and the time taken to produce a
standard can be up to 10 years. This makes the standards process an interesting one for
PERE as any proposed improvements could lead to a standards process which is better
suited to keeping pace with the rapid development of technologies, especially in the field
of dependable computing. Furthermore, unlike the application on MERE above, there
22 This section is based upon the description of PERE’s application to the standards process which was
carried out by members of Adelard (Emmet, 1996).
was no pre-existing process model for PERE to make use of, so the mechanistic
viewpoint should be exercised to a greater degree.
For this application of PERE, Emmet (1996) chose to study the work of the EWICS
TC7 (European Workshop on Industrial Computer Systems Technical Committee 7), a
“European cross-sector committee producing a set of guidelines in safety, security, and
reliability implications of the use of industrial computer systems.” (p. 9). The remainder
of this section draws heavily on Emmet’s report on his application of PERE to the
work of the EWICS TC7 body (Emmet, 1996).
8.3.2.2 Description of the standards process
Standards-producing organisations are typically hierarchically structured, consisting of
groups and subgroups down to individual members with specific responsibilities. Actions
are agreed at the group level and then subdivided and allocated to subgroups and
ultimately individuals within the subgroups. These individuals and subgroups produce
documents or parts of documents which must be agreed upon and accepted by the
higher group and incorporated into integrated documents. The process can generally be
thought of in terms of three interlocking stages:
Identification of user requirements—user requirements are gathered from various
sources, and the need for a standard is expressed;
Advancement of user requirements—the requirements are collated, examined, and
made concise. They are presented as a proposal for a New Work Item (NWI). If
this is agreed on by a plenary vote, then it is assigned a project number and it is
put forward for development and review;
Development and review—the standard is subdivided into various sections to be
worked on by groups, subgroups, etc. who produce relevant sub-documents.
These are reviewed within their subgroups and when approved are passed on to
be integrated in the current draft. This in turn is subject to further review at a
higher (plenary and draft) level.
Emmet focuses on the last of these three stages, assuming that an NWI has already
been expressed and that original position papers have already been allocated to groups
and to individuals within the groups in turn. This process is broken down further into
the following sub-processes:
Produce group-approved paper—work from individuals is passed to the group for
comment and review. The document is reviewed within the group and a
subsequent group meeting assigns actions to individuals within the group in order
to implement the agreed resolutions of any comments made. This is iterated until a
document is arrived at which meets with the approval of the whole group.
Produce plenary-approved document—group-approved papers are collated and the
whole document is distributed for plenary review. In a similar manner to the
above, comments are made on the document and these are passed back to the
appropriate group, where they are in turn allocated to the relevant individuals for
resolution. Another group review may be necessary prior to passing the document
back for plenary review if major rework was required.
Produce approved draft document—Once a document has achieved plenary approval,
it is distributed to external reviewers for general comment and review. These are
once more fed back to the relevant group, and on to individuals within the
groups for the appropriate actions to be carried out.
Several iterations are possible at each stage, leading to a multi-stage iterative
refinement process as illustrated in Figure 8.2, moving from the individual, via the
group and plenary, to the world level. Documents are created at the individual level,
integrated into group documents at the group review, into a plenary document at the
plenary review, and into a world document at the draft review. The progress of a
document towards becoming a standard can be interrupted at any review level and sent
back for further work, integration, and review. Each stage further consists of multiple
parallel activities, one for each sub-document at each level.
[Figure 8.2 depicts four nested levels—individual, group, plenary, and world—separated by the group, plenary, and draft reviews. Proposed integrated documents and sub-documents flow outwards towards the outside world, while comments flow back inwards; some concerns tend to return a document with comments for further review, and others tend to push it towards the outside world.]

Figure 8.2: A ‘circulation-percolation’ view of the standards process (from Emmet, 1996).
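The circulation-percolation structure of Figure 8.2 can be sketched as a loop over review levels. The level names follow the figure; the function name and the review predicate are hypothetical, for illustration only:

```python
# Review levels through which a document percolates outwards; a failed
# review circulates it back to the previous level for rework.
LEVELS = ["individual", "group", "plenary", "world"]

def progress(document, review_passes, max_iterations=100):
    """Move a document from individual work towards the outside world.
    `review_passes(level, document)` stands in for the group, plenary,
    and draft reviews; returning False sends the document back a level."""
    level = 0
    for _ in range(max_iterations):
        if level == len(LEVELS) - 1:
            return "standard"          # document approved at world level
        if review_passes(LEVELS[level + 1], document):
            level += 1                 # integrate and pass outwards
        else:
            level = max(level - 1, 0)  # return with comments for rework
    return "stalled"
```

Each review standing in for the group, plenary, or draft stage either passes the document outwards or circulates it back a level with comments, matching the two flows in the figure.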
8.3.2.3 Experience of applying PERE to the standards process
Very early on, Emmet found that a simple characterisation of the standards process as a
‘document producing factory’ did not capture the full complexity of the process. He
instead characterises the standards process in terms of a number of viewpoints,
according to the type of activity it can be thought of as, namely: document producing,
requirements engineering, consensus building, market influencing, and career enhancing. Each of
these viewpoints implies different motives and concerns during the various stages of
standards production.
In addition to conducting PERE analysis according to the step-by-step instructions,
Emmet also gained access to observe an EWICS meeting in action, and interviewed
several of the members in order to identify the concerns that they had with the process,
and where they thought it could be improved. According to Emmet (1996), PERE
successfully captures:
    Structural aspects—explicitly identifies preconditions, working material, invariants for each process component and provides generic vulnerabilities associated with the component and its attributes. Picks up coordination and material resource problems, failures of pre-conditions e.g. homework not done or late, distribution of documents, etc.

    identifies human factors vulnerabilities associated with individual and group activity, and due to organisational context. Generic individual and group problems (wrong knowledge, inappropriate leadership etc.) (Emmet, 1996, p. 13)
Where PERE proved less successful at identifying the problems elicited from the
committee members was with non-explicit activities in the standards process. For
example, one source of delay in the process is the challenging of existing “agreed”
sections of documents by newer members of the groups. These members do not have
access to the different degrees of consensus which exist on the various agreed items in
the document. The problem is that the ‘document factory’ view on the process assumes
that the working material, which drives the process, completely expresses everything
that needs to be known about the status of different parts of the document (and hence
the degree of consensus arrived at on different sections).
Emmet suggests a broadening of the perspective on the standards process, in order to
counteract the way in which such ‘invisible’ requirements are handled, to a more general
process of negotiation and consensus building. Making the group process more explicit
provides access to it for the mechanistic analysis. This in turn opens up the individual
activity to be considered in terms of the broader social context within which it takes
place, with the associated requirements for group resources, norms, etc. that are vital to
the success of the whole process. Emmet refers to this as a requirements focus, i.e. rather
than having a process component “produce current document section”, instead we have
“implement prescribed document requirements”. These requirements include not only
the original need for sufficient time, materials, and a lack of conflicts, but also the
availability of non-explicit requirements and group consensus. Treating the ‘non-explicit’
in this explicit manner allows the PERE mechanistic analysis (and subsequently the
human factors analysis also) to better capture group consensus issues—vital to the
success of the standards process, and any other requirements process—than previously.
8.3.2.4 Summary of PERE’s application to the standards process
The application of PERE to the standards process raised important issues regarding the
way in which PERE should be used. This should not come as a surprise, as this was the
first occasion on which someone not directly involved in the development of the method
had tried to use it in a novel situation.
Emmet found that PERE did a good job of highlighting the problems that had been
identified by the members of the EWICS group he interviewed, with the major
exception of issues relating to group consensus and delays and disruption caused by new
group members. This led him to re-examine the process from a number of different
viewpoints and ultimately re-cast the standards production process as one of consensus
building rather than simply the production of documents. The implication of this is that
a degree of process analysis is wise prior to embarking upon the application of PERE to
a process. Conducting this analysis in a viewpoint-oriented manner facilitates the
consideration of a number of different perspectives on the process which may give rise
to several possible applications of PERE. Another REAIMS module—PREview-PV
(PREview-Process Viewpoints) (Sommerville and Sawyer, 1996; Sommerville and
Sawyer, 1997b; Sommerville et al., 1998)—performs exactly such an analysis, and has been
suggested elsewhere in this thesis as one approach to process capture for PERE.
8.3.3 Comparison of PERE to ISO9000
One final evaluation, carried out by a third party, differs from the two described above,
which were both applications of the method: a comparison of PERE analysis with that
carried out as part of an audit for ISO9000 certification (Märtins et al., 1996). The
comparison was performed by a consultant experienced in software process improvement
in the German IT industry, especially in the context of conducting audits for ISO9000,
SEI CMM, Bootstrap, and SPICE. He compares the two approaches to process improvement
under a number of headings:
Definition and general
Main tasks and objectives
Application area
Cost and time effort
Steps of performing a QMS (Quality Management System)-implementation and
the PERE analysis approach
Certification and PERE
The report finishes by relating the Quality Management elements of ISO9001 to
PERE, and comments that PERE should only be applied to processes that have
achieved level 2 (repeatable) in the SEI CMM (see Chapter 2), and when process
documentation is available. Märtins et al. conclude by saying that
“… PERE’s added value to the ISO9001 and especially to the ISO 9000-3 norm is that it offers
a method for discovering process vulnerabilities in documentation and in reality instead of
questions that firstly only concern the mechanistic view and secondly only check in retrospect.
PERE is also concerned with the human factors view and also helps to recognise and to prevent
errors. It is a good complement to the establishment of a Quality System according to the
ISO9000 series”. (Märtins et al., 1996, p. 9)
8.3.4 Summary of PERE’s summative evaluation
The summative evaluation of PERE consisted of two separate applications and a
comparison with an industry standard framework. The first of these was by the
developer of the method, on a familiar process. The second application was by someone
new to PERE, albeit with a human factors background, on a process unfamiliar to him,
and the comparison by someone also new to PERE, with a background in software
process improvement. The two applications had quite different characteristics in these
terms, but both returned positive and interesting results. In the first case, the
modifications suggested by the formative evaluation were found to be useful, and the
results from the analysis of the MERE process highlighted some human factors
vulnerabilities for which there were no defences in place. In the second case, the
application highlighted issues related to the perspective taken when capturing the detail
of the process at the start of the mechanistic analysis, and how it is useful to consider a
number of viewpoints before embarking upon the analysis. The application to the
standards process also used interview reports from participants in the process in order to
test how successful PERE is at discovering the problems in the process from their
perspective.
Whilst the comparison was on a much smaller scale than the two applications, it gave
useful feedback on how PERE analysis could fit alongside and support existing industry
standard approaches to software process improvement.
8.4 Conclusions
This chapter has presented the evaluation of PERE. The evaluation has been both
formative and summative, and has been performed by third parties in addition to the
author.
The formative evaluation was carried out by applying the human factors checklist (later to
become PERE’s human factors viewpoint) to the MERE process at Aerospatiale. It led
to two modifications to the approach in particular. First, the inclusion of step-by-step
guidance for the human factors analysis in order to aid navigation through the checklist
during the analysis of a process. Second, the addition of analysis from a second,
mechanistic, viewpoint in order to build up the understanding and model of the process
under examination, and further to search for vulnerabilities from a different perspective.
The summative evaluation of the completed PERE method returned to the MERE
process and performed a single iteration of the analysis, which is presented in full in
Appendix C. MERE had defences already in place for many of the vulnerabilities
identified by PERE, which was reassuring given the highly dependable nature of the
process. The major exception to this was for three process components which were all
vulnerable to group coordination failures and process losses. Possible secondary
vulnerabilities were associated with the suggested defences for two other process
components, where the defences took the form of the introduction of group processes in
order to guard against (individual) human errors. Only one organisational issue was not
already guarded against in MERE, and this was related to the role of the MERE
Correspondent, who is single-handedly responsible for the management of the process.
Here, PERE highlighted the possibility of a single point of failure in the process.
PERE was also evaluated by third parties in two very different ways. The first was
another application of the method, this time on the process by which standards are
produced. This evaluation included interviews with process participants, as well as
observation of the group of people following the process. This allowed a check of
whether PERE managed to highlight all of the problems that were elicited directly
from the process participants. It proved successful in this respect, once it had been
discovered that the simple, mechanistic view of the process was not suitable, and that a
more sophisticated view would have to be taken, encompassing the social context
within which the groups operate and the standards are produced.
Finally, PERE was compared by yet another third party with the Quality Management
elements of ISO9000/9001. This found PERE to be a complement to the existing
standard, and a useful addition to the software process improvement consultant’s
repertoire of approaches to process analysis.
9. Conclusions
This thesis has presented a method called PERE, a human-factors informed approach to
process improvement, which focuses in particular on the RE process for safety-critical
systems development. At its core is a human factors checklist which consists of a
taxonomy of vulnerabilities to error in human activity, broadly structured into
individual, social, and organisational factors. Associated with each vulnerability
are one or more defences: possible means of avoiding the vulnerability
concerned. PERE provides a viewpoint-based framework within which the checklist
can be applied to a requirements process. For the human factors viewpoint, this consists
of a number of key questions that serve to narrow the choice of relevant items in the
checklist for each process component under analysis, thus facilitating navigation through
the checklist in a step-by-step manner. In addition to the human factors viewpoint,
PERE includes analysis from a mechanistic viewpoint, which is informed from a hazard
analysis perspective. The mechanistic viewpoint delivers a process model which the
human factors viewpoint makes use of to structure its own analysis process. The
outcome of applying PERE is the analysis of the process from both mechanistic and
human factors perspectives, considering the vulnerabilities that exist in the process, and
suggesting defences which may be put in place against them.
Chapter 1 introduced the thesis, placing the work in the context of improving RE
processes for safety-critical systems development from a human factors perspective. The
novel contributions of the work were outlined; these are returned to later in this
chapter. Chapter 2 provided the main motivation for the rest of the work in the thesis.
The field of RE was reviewed from a number of perspectives, presenting various
methods and approaches to RE, problems that exist with RE, and techniques concerned
with improving software processes. The chapter concluded by characterising RE as
‘intellectual office work’ which has been largely ignored by the process improvement
community. Chapter 3 presented a review of human factors literature relevant to the
study of errors made by humans when working as individuals, in groups, and within an
organisational context. The broad and detailed review resulted in a typology of error
types that are applicable to different types of human activity, presented as a human
factors checklist.
Chapter 4 verified that the error types identified in the checklist do indeed apply to the
RE process. Whilst this was found to be the case, the error types were also found to be
too generic. This led to the generation of a number of further categories in the human
factors checklist, and the chapter resulted in an expanded checklist which is more
specific to RE. Chapter 5 provided a more detailed description of the human factors
checklist, giving an overview of its structure and anticipated use before presenting a
more detailed listing of the different elements in the checklist.
Chapter 6 complemented the presentation of the checklist in Chapter 5 by turning to the
design of a methodical approach to applying the human factors checklist to RE
processes. The design was informed by the results of an early application of the checklist
on its own in an industrial context, which led to a number of requirements for the final
method. The detailed design of PERE was presented in Chapter 7, describing how it is
to be used in terms of the context and the process of application. Chapter 8 described
how PERE was evaluated by being applied to a requirements process in an industrial
context.
The remainder of this chapter reflects on the work presented in this thesis by considering
its novel contributions to the fields of process improvement and human error. The
following section first of all turns to the objectives of the work and shows how they
have been met.
9.1 Objectives of the work
The primary objective of the work reported in this thesis was to develop a method for
assessing the potential for human-related errors and failures in RE processes for the
development of safety-critical systems, and to suggest possible solutions. Existing work
on software process improvement largely ignores the RE process, and pays little or no
attention to identifying and preventing errors which have their origin in human activity.
This thesis started from the premise that, particularly for the development of safety-
critical systems, the development process should be considered safety-critical itself.
In what follows, the overall objective of the work is broken down into sub-objectives,
in the light of which the work is examined:
9.1.1 Focus on the RE process
This thesis proposes a process improvement method which focuses on identifying and
providing defences against errors in the RE process. RE is focused on in particular for
two reasons. First, it is of great importance for the success of the software process as a
whole. The RE process determines the requirements of the software to be developed,
against which the system will be tested. Second, the RE process has been found to be a
prominent source of errors in the software process as a whole. Errors in requirements
are harder to detect, persist for longer during the software process as a consequence, and
are therefore more costly to rectify once identified, or lead to systems which fail to meet
the real needs they are meant to address.
In reviewing RE, Chapter 2 presented a variety of approaches to the eliciting,
understanding, and documentation of requirements. Despite the breadth of approaches
covered, RE was not found to differ greatly from one technique to another, in terms of
the nature of the work undertaken by requirements engineers. Significantly, RE was
found to rely heavily upon the skills of people in the process to understand and
document the requirements using whichever method and notation is adopted, and was
ultimately characterised as ‘intellectual office work’.
The human factors checklist was adapted in Chapter 4 to be more focused upon the
work of requirements engineers. Whilst Chapter 2 established that RE essentially
consists of human activity, the checklist categories were found to be too generic. For
this reason, further categories were added with a more specific focus on RE, namely:
document design, notations and representations, meeting procedures, and organisational
issues. The full checklist was presented in Chapter 5.
9.1.2 Inform improvement of the RE process from a human factors
perspective
Chapter 2 identified the human-intensive nature of RE. Requirements engineers,
regardless of the actual process or notation being applied, must understand and
document the features of the domain that are to be supported. To do this, they must
engage in the intellectual activities of understanding and documenting the requirements,
as well as functioning as part of a team, interacting with domain experts, with all of this
occurring within an organisational context. This pointed to three broad areas of the
human sciences which in turn became the focus of attention in Chapter 3.
Chapter 3 extensively reviewed the human factors literature in cognitive and social
psychology, sociology, and organisational studies relating to human errors, failures, and
how people operate in hazardous settings. This review provides the basis for the
human factors checklist, which in turn is the core of the PERE approach to improving
RE processes. Chapter 5 describes how the checklist was developed and Chapter 6
relates how, in the light of an early application of the checklist to a safety-critical RE
process, a number of requirements for the PERE method were developed.
Chapter 7 describes in detail how the human factors perspective is encapsulated in
PERE in the form of the human factors viewpoint, one of two major components of
the method. The human factors viewpoint provides a means of navigating the data
contained in the human factors checklist in a systematic manner by responding to a
number of key questions regarding the process under investigation. This results in an
evaluation of the potential points in the process that are vulnerable to human error,
along with possible defences which may be implemented against the vulnerabilities.
9.1.3 Provide a structured framework for applying the findings
The human factors checklist provides a broad coverage of potential vulnerabilities in
human intensive processes, but as a result is somewhat unwieldy to apply without
assistance. Chapter 7 describes in detail how the application of PERE is structured in
two ways. The inclusion of the mechanistic viewpoint at the top level provides a
systematic approach to analysis of a process in terms of its components. The mechanistic
viewpoint produces a process model that shows how the components are
interconnected, and examines the process for vulnerabilities due to the way in which the
components are configured. This process model forms the basis for the human factors
viewpoint analysis.
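The thesis defines the mechanistic viewpoint in prose rather than code, but the process model it produces can be thought of as a set of components and their interconnections. The following sketch, in which every class, field, and example name is invented purely for illustration, shows one minimal way such a model might be represented, including how the human components could later be picked out for the human factors viewpoint:

```python
from dataclasses import dataclass, field

# Hypothetical sketch only: the thesis describes PERE's process model in
# prose; these class names, fields, and example values are all invented.

@dataclass(frozen=True)
class Component:
    name: str
    kind: str          # e.g. "human", "document", "tool"

@dataclass(frozen=True)
class Connection:
    source: str        # name of the producing component
    target: str        # name of the consuming component
    medium: str        # e.g. "text", "meeting", "database record"

@dataclass
class ProcessModel:
    components: list = field(default_factory=list)
    connections: list = field(default_factory=list)

    def human_components(self):
        """Components that the human factors viewpoint would examine."""
        return [c for c in self.components if c.kind == "human"]

model = ProcessModel(
    components=[
        Component("requirements engineer", "human"),
        Component("requirements document", "document"),
    ],
    connections=[
        Connection("requirements engineer", "requirements document", "text"),
    ],
)
print([c.name for c in model.human_components()])
# prints ['requirements engineer']
```

The point of the sketch is only that the mechanistic analysis yields a structured model, and that the human factors analysis can then be driven component by component from that model.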
Within the human factors viewpoint itself, PERE structures the analysis with the help
of a number of key questions which focus investigation on the relevant parts of the
human factors checklist for the process component being evaluated. The key questions
take advantage of the structure of the checklist, based upon the categorisation of human
factors developed from the review in Chapter 3, in order to remove from consideration
the sections that do not apply to the process component under examination.
9.1.4 Evaluate the method on an ‘industry strength’ process
PERE has been developed and evaluated in partnership with Aerospatiale, a major
aircraft manufacturer, on a process designed to generate design requirements based upon
experience in design, manufacture, operation and maintenance of civil aircraft. Chapter
6 describes how an early trial of the human factors checklist by Aerospatiale personnel
led to a number of requirements for the design of PERE. These arose largely out of their
concerns for the practicalities of performing such an analysis in an industrial context.
PERE itself was evaluated on the same process, and has since been applied by a third
party on the process of developing international standards, which is analogous to a
large-scale requirements process. These evaluations are reported on in Chapter 8.
9.2 Novel Characteristics
This thesis presents a novel integration of a broad body of work in a variety of human
science communities into a method for understanding and improving requirements
processes for the development of safety-critical systems. The work contributes to a
broader understanding of human error than exists in any one of the contributing
communities alone, and brings this understanding to bear upon the endeavour of
improving software processes in general and RE processes in particular, which are
especially prone to human error. This contribution has been multidisciplinary in nature,
and has led to direct contributions to a number of areas of computer science, combining
issues of requirements, human factors, safety, and process improvement. In doing so the
work:
Contributes to HCI research on safety and human error (Viller et al., 1999).
Informs RE methods which address issues of safety (Sommerville et al., 1998).
Extends safety considerations to consider the human factors issues of RE
(Bloomfield et al., 1996).
Extends considerations of process improvement to RE (Sawyer, Sommerville and
Viller, 1997).
The novelty of this work can be considered more specifically in terms of PERE’s focus
on requirements engineering, applying human factors research to software process
improvement, the human factors checklist, and the multi-perspective approach adopted
in PERE.
9.2.1.1 The focus of process improvement effort on the RE process
Until very recently (Sawyer et al., 1997; Sommerville and Sawyer, 1997a), the RE
process has received scant attention from the software process improvement
community. Efforts have been directed at the more formalised stages of software
development, after a requirements specification has been produced. PERE, however, is
concerned with the requirements process and how the potential for errors can be
minimised. Specific sections of the human factors checklist are concerned with
particular activities engaged in during RE, such as the creation of documents, using
notations, and working in teams.
9.2.1.2 The application of human factors research to software process improvement
The field of software process improvement has largely been concerned with
documenting and assessing the capabilities of software processes. Not only has there
been a lack of focus on the RE process in particular, but also very little work has been
done on the capabilities of the humans involved in the process. Lessons learned from the
human sciences are routinely applied in work emerging from the fields of HCI and
CSCW, but this work has not been applied to the development process as yet.
Techniques are well established in the safety-critical systems arena for assessing risks
with a human origin, but these are typically applied to the design of larger control
systems, and have yet to find their way to the design of computer-based systems. Work
has recently emerged from a similar perspective to PERE applying human error research
from cognitive psychology to the design of safety-critical systems (Sutcliffe, Galliers and
Minocha, 1999), but PERE is still unique in bringing a human factors perspective to
software process improvement.
9.2.1.3 Human factors checklist
The major novel contribution at the core of PERE’s approach is the human factors
checklist. This brings together a substantial body of research in a variety of disciplines, all
concerned with the nature of human activity, and how it is prone to error, poor
performance, etc. in some settings, and how processes can be modified to reduce the
likelihood of such failures occurring. The checklist embodies a taxonomy of human errors
and performance losses which is split into three main sections, namely: individual, social,
and organisational. Individual factors arise out of cognitive psychology, and draw greatly
upon work within the field of Human Error. Social factors are drawn from social
psychology and sociology, paying particular attention to how people behave and perform
when working in teams. Organisational factors emanate from sociology and
organisational and political studies, drawing in particular upon the study of how
organisations may or may not function reliably in hazardous environments.
Each entry in the checklist presents a particular vulnerability to error, with one or more
associated defences against it. The basic checklist is very generic, and could be applied to
any safety-critical human-intensive process. Further categories focus on human activity
more specific to RE, such as the generation of documents, working with notations and
diagrams, functioning in teams, and working under organisational constraints. Navigation
through the checklist is assisted via a number of key questions that determine which
sections of the checklist are most relevant to the process component currently under
consideration.
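The checklist itself is presented in the thesis as structured prose, but its organisation (entries pairing a vulnerability with defences, grouped into categories, with key questions pruning the categories considered for a given process component) can be sketched as data. Every entry, question, and category label below is invented for illustration and should not be read as the actual content of the checklist:

```python
# Hypothetical sketch of the checklist's structure; all entries, questions,
# and category names here are invented examples, not the thesis's content.

CHECKLIST = [
    {"category": "individual",
     "vulnerability": "slips during routine document editing",
     "defences": ["review of changes", "version control"]},
    {"category": "social",
     "vulnerability": "conformity pressure suppresses dissent in meetings",
     "defences": ["appoint a devil's advocate", "anonymous comment rounds"]},
    {"category": "organisational",
     "vulnerability": "reporting of failures is implicitly discouraged",
     "defences": ["blame-free error reporting scheme"]},
]

# Key questions about the process component select the relevant categories.
KEY_QUESTIONS = {
    "Is the component an individual working alone?": {"individual"},
    "Does the component involve group working?": {"individual", "social"},
    "Is the component shaped by wider organisational structures?":
        {"organisational"},
}

def relevant_entries(answers):
    """Return the checklist entries selected by affirmative answers."""
    selected = set()
    for question, categories in KEY_QUESTIONS.items():
        if answers.get(question):
            selected |= categories
    return [e for e in CHECKLIST if e["category"] in selected]

hits = relevant_entries({"Does the component involve group working?": True})
print(sorted({e["category"] for e in hits}))
# prints ['individual', 'social']
```

The filtering step is what makes the checklist tractable in practice: sections that the key questions rule out are simply never presented to the analyst for that component.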
9.2.1.4 The combination of approaches contributing to PERE
PERE adopts a multi-perspective, or viewpoint oriented approach to the problem of
improving RE processes. In addition to the human factors viewpoint, which provides a
framework for applying the human factors checklist to a process, PERE also includes a
mechanistic viewpoint. The mechanistic viewpoint is inspired by hazard analysis
techniques which are well established in the field of safety-critical systems design. The
combination of techniques in the two viewpoints allows more than one perspective to
be brought to bear upon the process, and the mechanistic viewpoint further contributes
a model of the process under examination which can be used to assist the human factors
analysis. Adopting a viewpoint-oriented approach allows the results of the two analyses
to be considered alongside each other, and also permits PERE analysis to contribute to a
broader viewpoint oriented process improvement effort such as that in PREview
(Sommerville and Sawyer, 1996; Sommerville and Sawyer, 1997b; Sommerville et al.,
1998) if desired.
9.3 Future Work
PERE represents a significant step forwards in the application of human factors research
to the improvement of RE processes, but it is still an initial step, and there is much
scope for further work in this area. This section briefly reflects upon some of the issues
raised by this thesis which will need to be addressed by future research in the areas of
safety-critical systems, requirements engineering process improvement, and human
factors.
9.3.1 Tool support
As it stands, PERE is a completely manual technique, relying on the analyst to manage
the generation of the analysis tables and process models using standard, off-the-shelf
products such as word processors, spreadsheets, and databases. More specialist tools
already exist for modelling processes, but these tend to be proprietary, and may not
integrate well with other tools. A certain amount of potential exists for customising
some standard software by using macros, but ultimately for PERE to be of any practical
use it will require more dedicated software support.
9.3.2 Contribute towards safety case/standards, etc.
Continuing the theme of PERE’s practicality and its acceptance by practising software
process improvement professionals, it is necessary for further work to be done in the
area of how PERE can contribute to existing standard approaches to process
improvement. PERE may be seen as a technique which will contribute to a broader
process improvement effort where critical systems are being developed and further
assurance is required that the development process will not introduce errors or faults
into the systems that are developed. This is already the case with the RE Good Practice
Guide (Sommerville and Sawyer, 1997a). There is also a case, however, for considering
how a PERE analysis could support an organisation seeking accreditation to ISO9000,
SEI CMM, IEC/ISO 15504, etc. For the development of safety-critical systems, there is
also the need to bear in mind the development of a safety case, and how a PERE
analysis could contribute to this in a useful manner.
9.3.3 Specialisation and adaptation of PERE
PERE is, by design, extendible and adaptable for different domains of application,
incorporation of further human error categories, and so on. Section 7.3 describes a
number of ways in which PERE can be adapted, specialised, developed and improved
for various purposes. Some of these developments will be theory driven, as new findings
emerge in the human sciences that have a bearing on human performance in the type of
activities engaged in by requirements engineers. Others will be experience driven, as the
technique is repeatedly applied in one domain, or adapted to suit others. It would be
useful to foster a community of PERE users in order to facilitate such developments
through the maintenance of a repository of error categories specialised for different
domains, reports of applications of PERE highlighting successes and failures, costs and
benefits, and so on.
There is also scope for taking the development of PERE in the other direction, and
taking advantage of the generic nature of the human factors checklist. Whilst this thesis
has endeavoured to specialise the checklist for RE processes, it could be applied without
the specialisation to any human-intensive process. This could lead to PERE being used
to examine, for example, processes in other critical applications where there is a concern
to reduce the likelihood of human processes introducing errors. Bearing in mind the
problems that are known to exist with the maintenance of safety-critical technologies,
the application of PERE to human-intensive maintenance processes could prove to be
fruitful. The costs of applying PERE are likely, however, to always restrict its
application to critical processes where the consequences of failure are so great as to
warrant extra measures being taken to ensure that they are mitigated.
9.4 Final remarks
As computer-based systems become more ubiquitous in modern society, the likelihood
of them being utilised in safety-critical settings increases accordingly. The unpredictable
ways in which these systems can fail leads in turn to a greater concern with ensuring that
the likelihood of failure is reduced to a minimum. This thesis takes the view that for
safety-critical applications, the process by which they are developed should also be
considered to be safety-critical.
Two other perspectives contribute to the work presented in this thesis. First, RE is a
prominent source of errors in software systems, and therefore warrants particular
attention—attention that has largely been lacking to date in the software process
improvement community. Second, RE is a human-intensive process, and there is a great
deal of literature in the human sciences on how humans perform better or worse under
particular conditions.
When taken together, these perspectives lead directly to the development of PERE—a
human-factors based method for improving the RE process in safety-critical systems
development. Just as was the case with structured and object-oriented techniques,
which started out being applied to programming, and then to analysis and design, PERE
can be seen as part of a movement that shifts the focus of process improvement
techniques from the more mechanistic development processes involved with design and
implementation, to earlier, less well defined requirements processes.
The approach taken with PERE is pragmatic, for a particular practical purpose. The
various findings contained in the human factors checklist are not all theoretically
consistent with each other, and in some cases may be in conflict. It is up to the analyst
to exercise their judgement, informed by the source material, as to which vulnerability
and defence best apply in a given situation. As new perspectives emerge from the
literature, the extensibility of PERE will allow for them to become incorporated into
the checklist.
PERE provides a first step towards systematically considering human error not in the
way that products are used, but in the processes by which they are designed.
10. References
Agresti, W. W. (1986), The Conventional Software Life-Cycle Model: Its Evolution and
Assumptions, In New Paradigms for Software Development, W.W. Agresti ed. 2-5,
Washington, DC: IEEE Computer Society Press.
Anderson, R., Button, G. and Sharrock, W. (1993), Supporting the design process
within an organisational context, In Proceedings of ECSCW’93, G. de Michelis, C.
Simone and K. Schmidt eds., 47-59, Milan, Italy: Kluwer.
Asch, S. E. (1951), Effects of group pressure upon the modification and distortion of
judgement, In Groups, Leadership, and Men, H. Guetzkow ed. Pittsburgh, PA:
Carnegie Press.
Baddeley, A. (1997), Human Memory: Theory and Practice, (Revised edn.) Hove,
UK: Taylor & Francis.
Baecker, R., ed. (1993), Readings in Groupware and Computer-Supported Cooperative Work:
Assisting Human-Human Collaboration, San Mateo, CA: Morgan Kaufmann.
Baldwin, J. A. and Ayres, D. M. (1993), Presentation of the DARTS Error Data,
Deliverable, DARTS-236-WTC-190393-C, December 1993, AEA Technology,
Winfrith, Dorchester, UK.
Bales, R. F. and Slater, P. E. (1955), Role differentiation in small decision-making
groups, In Family, Socialization and Interaction Process, T. Parsons and R.F. Bales
eds., Glencoe: Free Press.
Bannon, L., Bowers, J., Carstensen, P., Hughes, J. A., Kuutti, K., Pycock, J., Rodden, T.,
Schmidt, K., Shapiro, D., Sharrock, W. and Viller, S., eds. (1993), Informing
CSCW System Requirements COMIC Project Deliverable D2.1, October 6, 1993,
Lancaster University, Lancaster, UK, available from URL.
Bansler, J. P. and Bødker, K. (1993), A reappraisal of structured analysis: design in an
organizational context, ACM Transactions on Information Systems, 11 (2) : 165-193.
Basili, V. and Weiss, D. (1981), Evaluation of a software requirements document by
analysis of change data, In Proceedings of Fifth International Conference on Software
Engineering, 314-323, Washington, DC: IEEE Computer Society Press.
Belady, L. and Lehman, M. (1976), A model of large program development, IBM Systems
Journal, 15 : 225-252.
Bell, B. J. and Swain, A. D. (1985), Overview of a procedure for human reliability
analysis, Hazard Prevention, (January/February) : 22-25.
Bello, G. C. and Colombari, V. (1980), Empirical technique to estimate operator's errors
(TESEO), Reliability Engineering, 1 (3) :
Bendor, J. B. (1985), Parallel Systems: Redundancy in Government, Berkeley, CA: University
of California Press.
Bentley, R., Hughes, J. A., Randall, D., Rodden, T., Sawyer, P., Shapiro, D. and
Sommerville, I. (1992), Ethnographically-Informed Systems Design for Air
Traffic Control, In Proceedings of ACM CSCW'92 Conference on Computer-Supported
Cooperative Work, 123-129, Toronto, Canada: ACM Press.
Bloomfield, R. (1995), Integration of PERE in PRA, REAIMS Working paper,
REAIMS/WP5/AD/W/99, 9th November 1995, Adelard, London.
Bloomfield, R., Bowers, J., Emmet, L. and Viller, S. (1996), PERE: Evaluation and
Improvement of Dependable Processes, In Safecomp 96—The 15th International
Conference on Computer Safety, Reliability and Security, E. Schoitsch ed. Vienna:
Springer Verlag.
Bloomfield, R., Bowers, J., Jones, C., Sommerville, I. and Viller, S. (1995), Dependable
and assessable cooperative work: Process Evaluation in Requirements Engineering,
REAIMS Deliverable D1.4, REAIMS/WP1.4/AD/W/70, 27th February 1995,
Adelard, London.
Blythin, S., Rouncefield, M. and Hughes, J. A. (1997), Never mind the ethno
stuff—what does all this mean and what do we do now?: Ethnography in the
commercial world, Interactions, 4 (3) : 38-47.
Bødker, S. (1991), Through the Interface: A Human Activity Approach to User Interface Design,
Hillsdale, NJ: Lawrence Erlbaum Associates.
Bødker, S., Ehn, P., Kammersgaard, J., Kyng, M. and Sundblad, Y. (1987), A Utopian
experience, In Computers and Democracy: A Scandinavian Challenge, G. Bjerknes, P.
Ehn and M. Kyng eds., 251-278, Aldershot, UK: Avebury.
Bødker, S. and Grønbæk, K. (1991), Design in action: From prototyping by
demonstration to cooperative prototyping, In Design at Work: Cooperative Design of
Computer Systems, J. Greenbaum and M. Kyng eds., 197-218, Hillsdale, NJ:
Lawrence Erlbaum Associates.
Boehm, B. W. (1976), Software engineering, IEEE Transactions on Computers, C-25 (12) :
1226-1241.
Boehm, B. W. (1988), A spiral model of software development and enhancement, IEEE
Computer, 21 (5) : 61-72.
Bollinger, T. B. and McGowan, C. (1991), A critical look at software capability
evaluations, IEEE Software, 8 (4) : 25-41.
Booch, G. (1994), Object Oriented Analysis and Design with Applications,
Benjamin/Cummings.
Bowers, J., O’Brien, J. and Pycock, J. (1996), Practically accomplishing immersion:
Cooperation in and for virtual environments, In Proceedings of the ACM 1996
Conference on Computer Supported Cooperative Work - CSCW'96, M.S. Ackerman ed.
380-389, Boston, MA: ACM Press.
Bowers, J. and Pycock, J. (1996), Getting others to get it right: an ethnography of design
work in the fashion industry, In Proceedings of the ACM 1996 Conference on
Computer Supported Cooperative Work - CSCW'96, M.S. Ackerman ed. 219-228,
Boston, MA: ACM Press.
Bowers, J. and Viller, S. (1994), Report on visit to Aerospatiale, 13th & 14th December
1994, REAIMS Project Report, REAIMS/WP1.4/LU021, 23rd December
1994, Lancaster University, Lancaster, UK.
Branet, C. and Durstewitz, M. (1994), Generic "MERE" Process Description, REAIMS
Deliverable D1.1, REAIMS/WP1/AS001, 15th July 1994, Aerospatiale.
Branet, C. and Durstewitz, M. (1995), Aerospatiale MERE Process Description, REAIMS
Deliverable D2.1b, REAIMS/WP2.1/AS011, 24th February 1995,
Aerospatiale.
Brooks Jr., F. P. (1987), No Silver Bullet: Essence and Accidents of Software
Engineering, IEEE Computer, 20 (4) : 10-19.
Button, G., ed. (1991), Ethnomethodology and the human sciences, Cambridge: Cambridge
University Press.
Button, G. and Dourish, P. (1996), Technomethodology: paradoxes and possibilities, In
ACM Conference on Human Factors in Computing Systems—CHI’96, M.J. Tauber ed.
19-26, Vancouver, Canada: ACM Press.
Campbell, R. L. (1992), Will the real scenario please stand up?, SIGCHI Bulletin, 24 (2) :
6-8.
Carlsson, J., Ehn, P., Erlander, B., Perby, M.-L. and Sandberg, Å. (1978), Planning and
control from the perspective of labour: a short presentation of the Demos
project, Accounting, Organizations and Society, 3 (3-4) : 249-260.
Carroll, J. M., ed. (1995), Scenario-Based Design: Envisioning Work and Technology in System
Development, New York: John Wiley.
Carroll, J. M., Kellogg, W. A. and Rosson, M. B. (1991), The task-artifact cycle, In
Designing Interaction: Psychology at the Human-Computer Interface, J.M. Carroll ed. 74-
102, Cambridge, UK: Cambridge University Press.
Catterall, B. J. (1990), The HUFIT functionality matrix, In Human-Computer Interaction -
INTERACT ’90, D. Diaper, D. Gilmore, G. Cockton and B. Shackel eds., 377-
381, Amsterdam: Elsevier Science Publishers (North Holland).
Coad, P. and Yourdon, E. (1990), Object-oriented Analysis, Englewood Cliffs, NJ:
Prentice-Hall.
Conklin, J. and Begeman, M. L. (1988), gIBIS: A hypertext tool for exploratory policy
discussion, ACM Transactions on Office Information Systems, 6 (4) : 303-331.
Curtis, B., Krasner, H. and Iscoe, N. (1988), A field study of the software design process
for large systems, Communications of the ACM, 31 (11) : 1268-87.
Davis, A. M. (1993), Software Requirements: Objects, Functions and States, Englewood Cliffs,
NJ: Prentice Hall International.
DeMarco, T. (1978), Structured Analysis and System Specification, New York: Yourdon
Press.
Diaper, D., ed. (1989a), Task Analysis for Human—Computer Interaction, Chichester: Ellis
Horwood.
Diaper, D. (1989b), Task Analysis for Knowledge Descriptions (TAKD); the method
and an example, In Task Analysis for Human—Computer Interaction, D. Diaper ed.
108-159, Chichester: Ellis Horwood.
Diehl, M. and Stroebe, W. (1987), Productivity loss in brainstorming groups: Toward
the solution of a riddle, Journal of Personality and Social Psychology, 53 (3) : 497-509.
Diehl, M. and Stroebe, W. (1991), Productivity loss in idea-generating groups: Tracking
down the blocking effect, Journal of Personality and Social Psychology, 61 (3) : 392-
403.
Dix, A., Finlay, J., Abowd, G. and Beale, R. (1993), Human-Computer Interaction, New
York: Prentice Hall.
Doms, M. and Van Avermaet, E. (1985), Social support and minority influence: the
innovation effect reconsidered, In Perspectives on Minority Influence, S. Moscovici,
G. Mugny and E. Van Avermaet eds., Cambridge, UK: Cambridge University
Press.
Dorfman, M. and Thayer, R. H. (1990), Requirements Definition, In Standards,
Guidelines, and Examples on System and Software Requirements Engineering, M.
Dorfman and R.H. Thayer eds., 324-363, Los Alamitos: IEEE Computer Society
Press Tutorial.
Ehn, P. and Kyng, M. (1987), The collective resource approach to systems design, In
Computers and Democracy: A Scandinavian Challenge, G. Bjerknes, P. Ehn and M.
Kyng eds., Aldershot, UK: Avebury.
Ehn, P. and Kyng, M. (1991), Cardboard computers: mocking-it-up or hands-on the
future, In Design at Work: Cooperative Design of Computer Systems, J. Greenbaum
and M. Kyng eds., 169-195, Hillsdale, NJ: Lawrence Erlbaum.
El Emam, K., Drouin, J.-N. and Melo, W. (1997), SPICE: The Theory and Practice of
Software Process Improvement and Capability Determination, Los Alamitos, CA: IEEE
Computer Society Press.
Embrey, D. E. (1987), Human reliability, In Human Reliability in Nuclear Power, R.
Anthony ed. London: IBC Technical Services.
Emmet, L. (1996), Application of PERE to the Standards Process, REAIMS Working paper,
REAIMS/WP5/AD/W/103, 19th January 1996, Adelard, London.
Fitzpatrick, G., Kaplan, S. and Mansfield, T. (1996), Physical spaces, virtual places and
social worlds: a study of work in the virtual, In Proceedings of the ACM 1996
Conference on Computer Supported Cooperative Work - CSCW’96, M.S. Ackerman ed.
334-343, Boston, MA: ACM Press.
Forsyth, D. R. (1983), An Introduction to Group Dynamics, Monterey, CA: Brooks/Cole.
Fowler, M. and Scott, K. (1997), UML Distilled: Applying the Standard Object Modeling
Language, Reading, MA: Addison-Wesley.
Goffman, E. (1961), Asylums: Essays on the Social Situation of Mental Patients and Other
Inmates, New York: Doubleday.
Goguen, J. A. (1993), Social issues in requirements engineering, In Proceedings of the IEEE
International Symposium on Requirements Engineering: RE’93, S. Fickas and A.
Finkelstein eds., 194-195, San Diego, CA: IEEE Computer Society Press.
Goguen, J. A. and Linde, C. (1993), Techniques for requirements elicitation, In
Proceedings of the IEEE International Symposium on Requirements Engineering: RE’93,
S. Fickas and A. Finkelstein eds., 152-164, San Diego, CA: IEEE Computer
Society Press.
Greenbaum, J. and Kyng, M. (1991), Design at Work: Cooperative Design of Computer
Systems, Hillsdale, NJ: Lawrence Erlbaum.
Greif, I., ed. (1988), Computer Supported Cooperative Work: A Book of Readings, San Mateo,
CA: Morgan Kaufmann Publishers Inc.
Haag, S., Raja, M. K. and Schkade, L. L. (1996), Quality function deployment usage in
software development, Communications of the ACM, 39 (1) : 41-49.
Hardy, R. (1990), Callback: NASA’s Aviation Safety Reporting System, Shrewsbury, UK:
Airlife.
Harkins, S. G. and Jackson, J. M. (1985), The role of evaluation in eliminating social
loafing, Personality and Social Psychology Bulletin, 11 : 575-584.
Harper, R. H. R., Lamming, M. G. and Newman, W. M. (1992), Locating systems at
work: Implications for the development of active badge applications, Interacting
with Computers, 4 (3) : 343-363.
Harper, R. R. (1991), The computer game: Detectives, suspects, and technology, British
Journal of Criminology, 31 (3) : 292-307.
Heath, C., Jirotka, M., Luff, P. and Hindmarsh, J. (1993), Unpacking collaboration: the
interactional organisation of trading in a City dealing room., In Proceedings of the
Third European Conference on Computer-Supported Cooperative Work - ECSCW’93, G.
de Michelis, C. Simone and K. Schmidt eds., 155-170, Milan, Italy: Kluwer.
Heath, C. and Luff, P. (1992), Collaboration and control: crisis management and
multimedia technology in London Underground control rooms, Computer
Supported Cooperative Work, 1 (1) : 69-94.
Heath, C. and Luff, P. (1996), Documents and professional practice: 'bad' organisational
reasons for 'good' clinical records, In Proceedings of the ACM 1996 Conference on
Computer Supported Cooperative Work: CSCW’96, M.S. Ackerman ed. 354-363,
Boston, MA: ACM Press.
Hemphill, J. K. (1961), Why people attempt to lead, In Leadership and Interpersonal
Behaviour, L. Petrullo and B.M. Bass eds., New York: Holt, Rinehart & Winston.
Hoare, C. A. R. (1985), Communicating Sequential Processes, London: Prentice-Hall.
Hoc, J.-M., Green, T. R. G., Samurçay, R. and Gilmore, D. J., eds., (1990), Psychology of
Programming, London: Academic Press.
Hughes, J., King, V., Rodden, T. and Andersen, H. (1994), Moving out from the control
room: ethnography in system design, In Proceedings of the ACM 1994 Conference on
Computer Supported Cooperative Work - CSCW’94, R. Furuta and C. Neuwirth eds.,
429-439, Chapel Hill, NC: ACM Press.
Hughes, J. A., O’Brien, J., Rodden, T. and Rouncefield, M. (1997), Designing with
Ethnography: A Presentation Framework for Design, In Symposium on Designing
Interactive Systems—DIS’97, 147-159, Amsterdam, Netherlands: ACM Press.
Hughes, J. A., O’Brien, J., Rodden, T., Rouncefield, M. and Sommerville, I. (1995),
Presenting ethnography in the requirements process, In Proceedings of the Second
IEEE International Symposium on Requirements Engineering - RE’95, M. Harrison
and P. Zave eds., 27-34, York, UK: IEEE Computer Society Press.
Hughes, J. A., Sommerville, I., Bentley, R. and Randall, D. (1993), Designing with
ethnography: making work visible, Interacting with Computers, 5 (2) : 239-253.
Humphrey, W. S. (1988), Characterizing the software process: a maturity framework,
IEEE Software, 5 (2) : 73-79.
Humphrey, W. S. (1989), Managing the software process, Reading, MA: Addison-Wesley.
IEC-1508-3 (1997), Functional safety of electrical/electronic/programmable electronic safety-
related systems, IEC Draft Standard, 1508 Part 3: Software requirements,
11/02/97, IEC.
IEC-61508-3 (1999), Functional safety of electrical/electronic/programmable electronic safety-
related systems, IEC Standard, 61508 Part 3: Software requirements, IEC.
Isenberg, D. J. (1986), Group polarization: a critical review and meta-analysis, Journal of
Personality and Social Psychology, 50 : 1141-1151.
Jacobson, I., Christerson, M., Jonsson, P. and Övergaard, G. (1992), Object-Oriented
Software Engineering: A Use Case Driven Approach, Reading, MA: Addison-Wesley.
Jacobson, I., Ericsson, M. and Jacobson, A. (1995), The Object Advantage: Business Process
Reengineering with Object Technology, Reading, MA: Addison-Wesley.
Janis, I. L. (1972), Victims of Groupthink, Boston, MA: Houghton Mifflin.
Jirotka, M. and Goguen, J. A., eds., (1994), Requirements Engineering: Social and Technical
Issues, London: Academic Press.
Johnson, P. (1992), Human—Computer Interaction: psychology, task analysis and software
engineering, London: McGraw Hill.
Jones, C. B. (1986), Systematic Software Development Using VDM, London: Prentice-Hall.
Kahneman, D., Slovic, P. and Tversky, A., eds., (1982), Judgment Under Uncertainty:
Heuristics and Biases, New York: Cambridge University Press.
Karau, S. J. and Williams, K. D. (1993), Social loafing: a meta-analytic review and
theoretical integration, Journal of Personality and Social Psychology, 65 (4) : 681-706.
Kelly, J. C. and Sherif, J. S. (1992), An analysis of defect densities found during software
inspections, Journal of Systems and Software, 17.
Kensing, F. and Madsen, K. H. (1991), Generating visions: Future workshops and
metaphorical design, In Design at Work: Cooperative Design of Computer Systems, J.
Greenbaum and M. Kyng eds., 155-168, Hillsdale, NJ: Lawrence Erlbaum
Associates.
Keutzer, K. (1991), The need for formal verification in hardware design and what
formal verification has not done for me lately, In Proceedings of the 1991
International Workshop on the HOL Theorem Proving System and its Applications,
IEEE Computer Society.
Kletz, T. A. (1992), Hazop and Hazan—Identifying and Assessing Process Industry Hazards,
(3rd edn.) Rugby, UK: Institution of Chemical Engineers.
Kogan, N. and Wallach, M. A. (1964), Risk Taking: A Study in Cognition and Personality,
New York: Holt, Rinehart & Winston.
Kotonya, G. and Sommerville, I. (1992), Viewpoints for requirements definition,
Software Engineering Journal, 7 (6) : 375-387.
Kotonya, G. and Sommerville, I. (1998), Requirements Engineering: Processes and
Techniques, Chichester: John Wiley.
Kuvaja, P., Simila, J., Krzanik, L., Bicego, A., Soukkonen, S. and Koch, G. (1994),
Software Process Assessment and Improvement: The BOOTSTRAP Approach, Oxford:
Blackwell.
La Porte, T. R. and Consolini, P. M. (1991), Working in practice but not in theory:
theoretical challenges of ‘high reliability organizations’, Journal of Public
Administration Research and Theory, 1 (1) : 19-47.
Lamm, H. and Myers, D. G. (1978), Group-induced polarization of attitudes and
behaviour, In Advances in Experimental Social Psychology, L. Berkowitz ed. New
York: Academic Press.
Likert, R. (1967), The Human Organization, New York: McGraw-Hill.
Lim, K. Y. and Long, J. (1992), Rapid prototyping, structured methods and the
incorporation of human factors into system development, In Proceedings of East-
West International Conference on Human-Computer Interaction — EWHCI 92, St.
Petersburg, Russia.
Luff, P. and Heath, C. (1993), System use and social organisation: observations on
human computer interaction in an architectural practice, In Technology in Working
Order, G. Button ed. London: Routledge.
Luff, P. and Heath, C. (1998), Mobility in collaboration, In Proceedings of the ACM 1998
Conference on Computer Supported Cooperative Work - CSCW’98, D. Durand ed. 305-
314, Seattle, WA: ACM Press.
Lutz, R. R. (1993), Analyzing software requirements errors in safety-critical embedded
systems, In Proceedings of RE’93, 126-133, San Diego, CA: IEEE Computer
Society Press.
Maass, A. and Clark, R. D. (1984), Hidden impact of minorities—15 years of minority
influence research, Psychological Bulletin, 95 (3) : 428-450.
Macaulay, L. (1996), Requirements Engineering, London: Springer-Verlag.
Macaulay, L. A. (1993), Requirements as a cooperative activity, In Proceedings of RE’93,
174-181, San Diego, CA.: IEEE.
Macaulay, L. A., Fowler, C., Kirby, M. and Hutt, A. (1990), USTM: a new approach to
requirements specification, Interacting with Computers, 2 (1) : 92-118.
Maclean, A., Young, R. and Moran, T. (1989), Design rationale: the argument behind
the artifact, In Human Factors in Computing Systems: Proceedings of CHI’89, K. Bice
and C.S. Lewis eds., 247-252, New York, NY: ACM Press.
Maier, N. R. F. and Solem, A. R. (1952), The contribution of a discussion leader to the
quality of group thinking: the effective use of minority opinions, Human
Relations, 5 : 277-288.
Manstead, A. S. R. and Semin, G. R. (1980), Social facilitation effects: mere
enhancement of dominant responses?, British Journal of Social and Clinical
Psychology, 19 : 119-136.
Morone, J. G. and Woodhouse, E. J. (1986), Averting Catastrophe: Strategies for Regulating
Risky Technologies, Berkeley: University of California Press.
Martin, D., Bowers, J. and Wastell, D. (1997), The interactional affordances of
technology: an ethnography of human–computer interaction in an ambulance
control centre, In People and Computers XII: Proceedings of HCI’97, H. Thimbleby,
B. O’Conaill and P.J. Thomas eds., 263-281, London: Springer-Verlag.
Märtins, F., Schippers, H., Viller, S. and Sawyer, P. (1996), Process evaluation in
requirements engineering, In Design for Protecting the User: 13th Annual Centre for
Software Reliability Workshop, J. Dobson ed. 11th-13th September, 1996,
Bürgenstock, Switzerland.
McConnell, S. (1993), Code Complete, Redmond, WA: Microsoft Press.
McGraw, K. L. and Harbison-Briggs, K. (1989), Knowledge Acquisition: Principles and
Guidelines, Englewood Cliffs, NJ: Prentice-Hall International.
Mellor, P. (1994), CAD: Computer-aided disaster!, High Integrity Systems Journal, 1 (2) :
101-156.
MIL-STD-961D (1995), Department of Defense Standard Practice for Defense Specifications,
MIL-STD-961D, Defense Standardization Program Office (DSPO), Fort
Belvoir, VA, available from http://www.dsp.dla.mil/.
Milner, A. J. R. G. (1980), A Calculus of Communicating Systems, Heidelberg: Springer-
Verlag.
Ministry of Defence (1996a), Hazop Studies on Systems Containing Programmable Electronics.
Part 1: Requirements, Interim Defence Standard, 00-58(Part1)/Issue 1, 26th July
1996, Ministry of Defence, Directorate of Standardization, Glasgow, available
from http://www.dstan.mod.uk/data/00/058/01000100.pdf.
Ministry of Defence (1996b), Hazop Studies on Systems Containing Programmable Electronics.
Part 2: General Application Guidance, Interim Defence Standard, 00-
58(Part2)/Issue 1, 26th July 1996, Ministry of Defence, Directorate of
Standardization, Glasgow, available from
http://www.dstan.mod.uk/data/00/058/02000100.pdf.
Ministry of Defence (1996c), Safety Management Requirements for Defence Systems. Part 1:
Requirements, Defence Standard, 00-56(Part1)/Issue 2, 13th December 1996,
Ministry of Defence, Directorate of Standardization, Glasgow, available from
http://www.dstan.mod.uk/data/00/056/01000200.pdf.
Ministry of Defence (1996d), Safety Management Requirements for Defence Systems. Part 2:
Guidance, Defence Standard, 00-56(Part2)/Issue 2, 13th December 1996,
Ministry of Defence, Directorate of Standardization, Glasgow, available from
http://www.dstan.mod.uk/data/00/056/02000200.pdf.
Ministry of Defence (1997a), Requirements for Safety Related Software in Defence Equipment
Part 1: Requirements, Defence Standard, 00-55(Part1)/Issue 2, 1st August 1997,
Ministry of Defence, Directorate of Standardization, Glasgow, available from
http://www.dstan.mod.uk/data/00/055/01000200.pdf.
Ministry of Defence (1997b), Requirements for Safety Related Software in Defence Equipment
Part 2: Guidance, Defence Standard, 00-55(Part2)/Issue 2, 1st August 1997,
Ministry of Defence, Directorate of Standardization, Glasgow, available from
http://www.dstan.mod.uk/data/00/055/02000200.pdf.
Moscovici, S. (1976), Social Influence and Social Change, London: Academic Press.
Muller, M. J. and Kuhn, S. (1993), Participatory design: Introduction to special issue,
Communications of the ACM, 36 (4) : 24-28.
Mumford, E. (1986), Using Computers for Business Success: The ETHICS Method,
Manchester, UK: Manchester Business School.
Myers, D. G. (1982), Polarizing effects of social interaction, In Group Decision Making,
H. Brandstätter, J.H. Davis and G. Stocker-Kreichgauer eds., New York:
Academic Press.
Neumann, P. G. (1995), Computer-Related Risks, New York: ACM Press.
Newman, W. M. and Lamming, M. G. (1995), Interactive System Design, Reading, MA:
Addison-Wesley.
Norman, D. A. and Draper, S. W., eds., (1986), User Centred System Design: New
Perspectives on Human–Computer Interaction, Hillsdale, NJ: Lawrence Erlbaum
Associates.
Nygaard, K. and Bergo, O. (1975), The trade unions—new users of research, Personnel
Review, 4 (2) : 5-10.
Osborn, A. F. (1957), Applied Imagination, New York: Scribner.
Page, D., Williams, P. and Boyd, D. (1993), Report of the Inquiry into the London Ambulance
Service, ISBN. 0 905133 70 6, February, Communications Directorate, South
West Thames Regional Health Authority, London, UK.
Paulk, M. C., Curtis, B., Chrissis, M. B. and Weber, C. V. (1993), Capability maturity
model, version 1.1, IEEE Software, 10 (4) : 18-27.
Paulk, M. C., Weber, C. V., Curtis, B. and Chrissis, M. B., eds., (1995), Capability
Maturity Model: Guidelines for Improving the Software Process, Reading, MA: Addison-
Wesley.
Paulus, P. B., ed. (1989), Psychology of Group Influence, (2nd edn.) Hillsdale, NJ: Erlbaum.
Paulus, P. B. and Dzindolet, M. T. (1993), Social influence processes in group
brainstorming, Journal of Personality and Social Psychology, 64 (4) : 575-586.
Perrow, C. (1984), Normal Accidents, New York: Basic Books.
Pessin, J. (1933), The comparative effects of social and mechanical stimulation on
memorizing, American Journal of Psychology, 11 : 431-8.
Pohl, K. (1993), The three dimensions of requirements engineering, In Fifth International
Conference on Advanced Information Systems Engineering (CAiSE'93), C. Rolland, F.
Bodart and C.Cauvet eds., 556-560, Paris: Springer-Verlag.
Preece, J., Rogers, Y., Sharp, H., Benyon, D., Holland, S. and Carey, T. (1994), Human-
Computer Interaction, Reading, MA: Addison-Wesley.
Pycock, J., Palfreyman, K., Allanson, J. and Button, G. (1998), Representing fieldwork
and articulating requirements through VR, In Proceedings of the ACM 1998
Conference on Computer Supported Cooperative Work - CSCW’98, D. Durand ed. 383-
392, Seattle, WA: ACM Press.
Quality Systems & Software (1999), DOORS, http://www.qss.co.uk/products/doors/.
Rasmussen, J. (1983), Skills, rules, knowledge; signals, signs and symbols; and other
distinctions in human performance models, IEEE Transactions on Systems, Man
and Cybernetics, SMC-13 (3) : 257-266.
Rational (1997), Rational Objectory Process 4.1: Your UML Process, White paper, version
1.1, available from http://www.rational.com/uml/, 13 January 1997, Rational
Software Corporation.
Rational Software Corporation (1999), Requisite Pro,
http://www.rational.com/products/reqpro/index.jtmpl.
Reason, J. (1990), Human Error, Cambridge, UK: Cambridge University Press.
Reason, J. (1992), The identification of latent organizational failures in complex
systems, In Verification and Validation of Complex Systems: Human Factors Issues,
J.A. Wise, V.D. Hopkin and P. Stager eds., 223-237, Berlin: Springer-Verlag.
Reason, J. (1997), Managing the Risks of Organizational Accidents, Aldershot: Ashgate.
Roberts, K. H. (1989), New challenges in organizational research: high reliability
organizations, Industrial Crisis Quarterly, 3 (2) : 111-125.
Robinson, M. and Bannon, L. (1991), Questioning representations, In ECSCW’91 - The
Second European Conference on Computer Supported Cooperative Work, L. Bannon, M.
Robinson and K. Schmidt eds., 219-233, Amsterdam, Netherlands: Kluwer.
Rodden, T., King, V., Hughes, J. and Sommerville, I. (1994), Process modelling and
development practice, In Proceedings of the Third European Workshop on Software
Process Technology, EWSPT’94, B.C. Warboys ed. 59-64, Berlin: Springer-Verlag.
Rouncefield, M., Viller, S., Hughes, J. A. and Rodden, T. (1995), Working with
‘constant interruption’: CSCW and the small office, The Information Society, 11 (3)
: 173-199.
Rumbaugh, J., Blaha, M., Premerlani, W., Eddy, F. and Lorensen, W. (1991), Object-
Oriented Modeling and Design, Englewood Cliffs, NJ: Prentice-Hall.
Rutte, C. G. and Wilke, H. A. M. (1984), Social dilemmas and leadership, European
Journal of Social Psychology, 14 : 105-121.
Ryan, K. and Mathews, B. (1993), Matching conceptual graphs as an aid to requirements
re-use, In Proceedings of RE’93, 112-120, San Diego, CA.: IEEE.
Sagan, S. D. (1993), The Limits of Safety: Organizations, Accidents, and Nuclear Weapons,
Princeton, NJ: Princeton University Press.
Sawyer, P., Sommerville, I. and Viller, S. (1997), Requirements process improvement
through the phased introduction of good practice, Software Process Improvement and
Practice, 3 (1) : 19-34.
Schuler, D. and Namioka, A., eds., (1993), Participatory Design: Principles and Practices,
Hillsdale, NJ: Lawrence Erlbaum Associates.
Sharrock, W. W. and Anderson, R. J. (1986), The Ethnomethodologists, Chichester: Ellis
Horwood.
Shaw, M. E. (1932), A comparison of individuals and small groups in the rational
solution of complex problems, American Journal of Psychology, 44 : 491-504.
Shaw, M. E. (1981), Group Dynamics: The Social Psychology of Small Group Behaviour, (3rd
edn.) New York: McGraw-Hill.
Sheldon, F. T., Kavi, K. M., Tausworthe, R. C., Yu, J. T., Brettschneider, R. and
Everett, W. W. (1992), Reliability measurement: from theory to practice, IEEE
Software, 9 (4).
Shepherd, A. (1989), Analysis and training in information technology tasks, In Task
Analysis for Human—Computer Interaction, D. Diaper ed. 15-55, Chichester: Ellis
Horwood.
Sherif, M. and Sherif, C. W. (1953), Groups in Harmony and Tension: An Integration of
Studies on Intergroup Relations, New York: Octagon.
Simon, H. A. (1976), Administrative Behaviour: a study of decision making process in
administrative organizations, New York: Free Press.
Sommerville, I. (1996), Software Engineering, (5th edn.) New York: Addison-Wesley.
Sommerville, I., Kotonya, G., Viller, S. and Sawyer, P. (1995), Process Viewpoints, In
Proceedings of the Fourth European Workshop on Software Process Technology,
EWSPT’95, W. Schäfer ed. Lecture Notes in Computer Science 913, 2-8,
Noordwijkerhout, NL: Springer-Verlag.
Sommerville, I., Rodden, T., Sawyer, P. and Bentley, R. (1992), Sociologists can be
surprisingly useful in interactive systems design, In People and Computers VII:
Proceedings of the HCI’92 conference, A. Monk, D. Diaper and M.D. Harrison eds.,
341-353, York: Cambridge University Press.
Sommerville, I. and Sawyer, P. (1996), PREview: Viewpoints for Process and Requirements
Analysis, REAIMS Project Deliverable, REAIMS/WP5.1/LU060, 29th May
1996, Lancaster University.
Sommerville, I. and Sawyer, P. (1997a), Requirements Engineering: A good practice guide,
Chichester: John Wiley.
Sommerville, I. and Sawyer, P. (1997b), Viewpoints: principles, problems and a
practical approach to requirements engineering, Annals of Software Engineering, 3 :
101-130.
Sommerville, I., Sawyer, P. and Viller, S. (1998), Viewpoints for requirements
elicitation: a practical approach, In Proceedings of the IEEE International Conference
on Requirements Engineering - ICRE’98, D.M. Berry and B. Lawrence eds., 74-81,
Colorado Springs, Colorado: IEEE Computer Society Press.
Sommerville, I. and Viller, S. (1997), Safety-Critical Systems Programme Process Comparison,
Lancaster University, Lancaster, UK, available from
http://www.aber.ac.uk/~dcswww/SCSP/reports/lancs.htm.
Spivey, J. M. (1988), Understanding Z: A Specification Language and its Formal Semantics,
Cambridge, UK: Cambridge University Press.
Steiner, I. D. (1972), Group Processes and Productivity, New York: Academic Press.
Steiner, I. D. (1976), Task-performing groups, In Contemporary Topics in Social Psychology,
J.W. Thibaut, J.T. Spence and R.C. Carson eds., Morristown, NJ: General
Learning Press.
Stewart, C. and Cash, W. (1985), Interviewing: Principles and Practices, (4th edn.) Dubuque,
IA: Wm Brown Publishers.
Suchman, L. A. (1987), Plans and situated actions: The problem of human–machine
communication, Cambridge: Cambridge University Press.
Sullivan, L. P. (1986), Quality function deployment, Quality Progress, 19 (6) : 39-50.
Sutcliffe, A., Galliers, J. and Minocha, S. (1999), Human errors and system
requirements, In Proceedings of the IEEE International Symposium on Requirements
Engineering: RE’99, W.N. Robinson and K. Ryan eds., 23-30, Limerick, Eire:
IEEE Computer Society Press.
Taylor, B. (1990), The HUFIT planning analysis and specification toolset, In Human-
Computer Interaction - INTERACT ’90, D. Diaper, D. Gilmore, G. Cockton and
B. Shackel eds., 371-376, Amsterdam: Elsevier Science Publishers (North
Holland).
The Standish Group (1995), The CHAOS Report, Research paper, The Standish Group,
available from http://www.standishgroup.com/chaos.html.
Thomas, E. J. and Fink, C. F. (1961), Models of group problem solving, Journal of
Abnormal and Social Psychology, 63 : 53-63.
Tierney, M. (1993), Potential difficulties in managing safety-critical computing projects:
a sociological view, In Directions in Safety-Critical Systems, F. Redmill and T.
Anderson eds., 43-64, Berlin: Springer-Verlag.
Travis, L. E. (1925), The effect of a small audience upon eye-hand coordination, Journal
of Abnormal and Social Psychology, 20 : 142-146.
Trillium (1994), Trillium: Model for Telecom Product Development and Support Process
Capability, Release 3.0, Bell Canada Acquisitions, Canada.
Turner, B. A. (1992), The sociology of safety, In Engineering Safety, D.I. Blockley ed. 186-
201, London: McGraw-Hill.
Van Avermaet, E. (1988), Social influence in small groups, In Introduction to Social
Psychology, M. Hewstone, W. Stroebe, J.-P. Codol and G.M. Stephenson eds.,
350-380, Oxford: Basil Blackwell.
Viller, S., Bowers, J. and Rodden, T. (1999), Human factors in requirements
engineering: A survey of human sciences literature relevant to the improvement
of dependable systems development processes, Interacting with Computers, 11 (6) :
665-698.
Viller, S. and Sawyer, P. (1995), REAIMS: Requirements Engineering Adaptation and
Improvement strategies for Safety and Dependability, Safety Systems: The Safety
Critical Systems Club Newsletter, 5 (1) : 15-16.
Viller, S. and Sommerville, I. (1999a), Coherence: an approach to representing
ethnographic analyses in systems design, Human–Computer Interaction, 14 (1 & 2) :
9-41.
Viller, S. and Sommerville, I. (1999b), Social analysis in the requirements engineering
process: from ethnography to method, In Proceedings of the IEEE International
Symposium on Requirements Engineering: RE’99, W.N. Robinson and K. Ryan eds.,
6-13, Limerick, Eire: IEEE Computer Society Press.
Viller, S. and Sommerville, I. (in press), Coherence: Ethnographically informed analysis
for software engineers, International Journal of Human–Computer Studies, (Special
issue on understanding work and designing artefacts).
Viller, S. A. (1993), The group facilitator: a CSCW perspective, In Readings in Groupware
and Computer Supported Cooperative Work, R. Baecker ed. 145-152, San Mateo, CA:
Morgan Kaufmann.
Walsh, P. (1989), Analysis for task object modelling (ATOM): towards a method of
integrating task analysis with Jackson System Development for user interface
software design, In Task Analysis for Human—Computer Interaction, D. Diaper ed.
186-209, Chichester: Ellis Horwood.
Weinberg, J. (1997), Quality Software Management, Volume 4: Anticipating Change, New
York, NY: Dorset House.
Westrum, R. (1991), Technologies and Society: The Shaping of People and Things, Belmont,
CA: Wadsworth Publishing Company.
Wildavsky, A. (1988), Searching for Safety, New Brunswick, NJ: Transaction Books.
Wilke, H. and Van Knippenberg, A. (1988), Group performance, In Introduction to Social
Psychology, M. Hewstone, W. Stroebe, J.-P. Codol and G.M. Stephenson eds.,
315-349, Oxford: Basil Blackwell.
Wright, P. C., Pocock, S. and Fields, R. E. (1998), Practice and prescription of work on
the flightdeck: Implications for technological support, In ECCE-9.
Yourdon, E. (1989), Modern Structured Analysis, Englewood Cliffs, NJ: Prentice-Hall.
Zahran, S. (1998), Software Process Improvement: Practical Guidelines for Business Success,
Harlow: Addison-Wesley.
Zajonc, R. B. (1965), Social facilitation, Science, 149 : 269-274.
Appendix A PERE Human Factors
Checklist
This appendix presents the PERE Human Factors Checklist in full. It is reproduced here
so that it can easily be copied for use in PERE analyses. The checklist consists of a
number of tables, one for each of the different categories of vulnerability to error. The
data in each row is illustrated in the figure below (reproduced from Chapter 5).
[Figure: an annotated example row from the checklist (5.1, Social facilitation and
inhibition), illustrating the four elements of each row: the unique reference number; the
vulnerability to error; possible defences against the vulnerability; and a reference to the
section number(s) in this thesis where the vulnerability is discussed.]
The checklist should be used along with the process guide and key figures presented in
Appendix B, and ultimately as an index into the relevant literature as reviewed in
Chapters 3 and 4.
A.1 Individual Errors—Slips & Lapses
Ref Vulnerability Possible Defence Source
1 Slips and lapses in familiar
skilled activities
Check products of work
Consider introducing supervision and
document review if none existing
Consider redesign of how information is
presented and of individuals’ work
environments
Consider introduction of appropriate tool
support and/or redesign of current
working tools
3.1.1
1.1 Recognition and
attentional failures
(e.g. errors in use of
standard, familiar notations
and errors in pursuing
multiple cross-references
in a requirements
document)
Design presentation of information to
maximise ease of recognition
Design notations to allow significant
differences to be highlighted
Avoid naming conventions which lead to
groups of names that are ambiguous or
may be easily confused
Minimise worker fatigue and interruptions
through monitoring and/or redesigning
work hours, routines and work
environments
3.1.1.1
3.1.1.2
1.2 Memory failures
(e.g. omitting some familiar
action or incorrectly
reconstructing the history
of a requirement)
Consider introduction of tool support for
requirements traceability and/or ‘memory
prostheses’ (including video and audio
recordings of significant meetings etc.)
If none exist currently, provide means by
which team members can support each
other’s remembering
Enable details of original contexts (e.g.
when a requirement was first expressed)
to be captured and retrieved
3.1.1.3
1.3 Selection failures
(e.g. selecting the wrong
course of action when
engaged in several tasks
simultaneously)
Simplify the work setting so that workers
confront simultaneous tasks more rarely
Redesign technologies so that after
interruption workers are easily able to
return to their work at the point they were
at beforehand
3.1.1.4
A.2 Individual Errors—Rule-Based Mistakes
Ref Vulnerability Possible Defence Source
2 Rule-based mistakes
where ‘pre-packaged’
solutions or rules of
practice are inappropriately
applied
Training in appropriate solutions
Continual revision of training in light of
changes or innovations in work domain
Dissemination of expertise and experience
(if appropriate) introduce checking,
supervision or review if none exists
currently
3.1.2.1
2.1 Misapplying good rules
(i.e. following a course of
action that is known to work
in certain contexts but that
does not work in this one)
Test the appropriateness of existing rules
or procedures in new situations under
experimental or simulated conditions
(where appropriate) design how information
is presented so that workers can easily
recognise conditions for which an
appropriate rule exists
3.1.2.1
(i)
2.2 Applying bad rules
(i.e. following a course of
action that has been
shown not to work, or that
used to work but no longer
does so)
Avoid persistence of sub-optimal rules by
testing over a wide range of conditions
through (e.g.) simulations
Disseminate ‘near-miss’ and ‘critical
incident’ data throughout the organisation
3.1.2.1
(ii)
A.3 Individual Errors—Knowledge-Based Mistakes
Ref Vulnerability Possible Defence Source
3 Knowledge-based
mistakes where new
solutions to problems
have to be devised
without recourse to pre-
packaged ones
Design group processes to include
‘critiquing’ and reflection on how solutions
are arrived at (e.g. through video or audio
tape analysis or other means of experience
capture)
Introduce checking and review of products
of work with particular scrutiny of possible
sites of knowledge-related bias
3.1.2.2
3.1 Availability biases
(i.e. solutions are chosen
because they come to
mind easily)
Carefully scrutinise the adoption of a
solution on the basis of prominent past
experience
Focus in detail on what distinguishes the
new situation from past situations which
come to mind easily
3.1.2.2
(i)
3.2 Frequency and similarity
biases
(i.e. solutions are chosen
because they are similar or
have been used often
before)
Carefully scrutinise the adoption of a
solution merely because similar solutions
had been adopted before
Focus on the uniqueness of current
situation
3.1.2.2
(ii)
3.3 Over-confidence and
confirmation biases
(i.e. solutions are chosen
solely because of the
confidence with which
they are believed to
succeed, or because only
evidence confirming their
success is sought)
Actively seek out information to disconfirm
the currently favoured solution by (if
necessary) assigning ‘critics’ to it who do
not necessarily favour the same solution
Do not favour solutions merely on the basis
of the confidence their advocates have in
them
3.1.2.2
(iii)
3.1.2.2
(iv)
3.4 Inappropriate exploration
of the problem space,
bounded rationality and
satisficing
(i.e. making do with what
appears to be an
appropriate solution,
ignoring the possibility of
better solutions which may
result from alternative lines
of enquiry)
Especially when the range of options is
large, ensure that the whole problem space
is adequately ‘scoped’ initially so as to
minimise the chances of a good solution
being missed
Do not prematurely allocate resources to
the deep exploration of a possible solution
until the whole problem space has been
‘scoped’
3.1.2.2
(v)
3.1.2.2
(vii)
3.5 Attending and forgetting
in complex problem
spaces
(i.e. losing one’s way
because of the complexity
of the problem)
Consider appropriate tool support and
other techniques for aiding memory and
attention (see §1.2 above)
3.1.2.2
(vi)
3.6 Problem simplification
through (i) halo effects
(ii) control/attribution errors
and
(iii) hindsight biases
As above, counteract these effects through
critiquing solutions by
(i) concentrating on the specificities of new
situations
(ii) ensuring that factors outside direct
human control are adequately recognised,
and
(iii) recognising how things could have
been otherwise (i.e. documenting the
significant branching- and choice-points) in
past critical events
3.1.2.2
(viii)
3.1.2.2
(ix)
3.1.2.2
(x)
A.4 Individual Errors—Violations
Ref Vulnerability Possible Defence Source
4 Violations of established
operating procedures
Survey whether established procedures
are unnecessarily prescriptive (perhaps
due to restrictions accumulating as a result
of past incidents), and (if so) consider their
redesign
Carefully distinguish between harmful
violations and those necessary to get the
job done at all
3.1.4
4.1 Routine violations
(i.e. violations that are
habitual and performed
regularly as part of the
process)
Carefully assess the effects of such
violations to see if they are in fact harmful
Check to see if these arise through
inappropriate management styles and/or
processes which overly regulate the
conduct of workers, allowing inadequate
skilful or professional discretion. If so, it may
be appropriate to review operating
procedures
Alternatively, if violations arise through an
inadequate emphasis on safety, introduce
a strong safety culture.
3.1.4.2
(i)
4.2 Optimising violations As above, and also attend particularly to
processes which—themselves—motivate
risky optimisation by (for example) being
too boring to perform otherwise
3.1.4.2
(i)
4.3 Situational violations As above 3.1.4.2
(ii)
4.4 Exceptional violations Ensure the practical usefulness of training
even for rare situations through (as
appropriate) imagination, simulation or drills
Encourage the envisionment of unusual
combinations of circumstances and
responses to them as part of training
3.1.4.2
(iii)
A.5 Problems with Group work
Ref Vulnerability Possible Defence Source
5 Group coordination
failures and process
losses
Scrutinise the internal dynamics of
meetings, presentations, brainstorming
and other group interaction situations and
review procedures for them paying
particular attention to the factors listed
below
Employ (where appropriate) video and
other recording facilities to enhance the
possibility of such reflection
3.2
5.1 Social facilitation and
inhibition
(i.e. the degree and
direction in which
individual performance of a
task is affected by being
observed)
Consider whether the introduction of direct
supervision of activity is appropriate,
whether its gains outweigh any potential
performance losses as might be the case
for skill-based tasks
Tend not to employ direct supervision (and
prefer more indirect, deferred error
checking) for more knowledge-based
tasks.
3.2.1
5.2 Inappropriate human
resources
Ensure that teams and groups are selected
with skills, knowledge and experience
which match the task at hand
Do not merely select personnel who
happen to be available.
3.2.2
5.3 Socio-motivational
problems
When ‘free-rider’ problems become acute,
consider introducing procedures for group
interaction to specifically address them
(e.g. the introduction of a group ‘facilitator’
or a ‘leader’ focusing on the balance
between contributors)
3.2.2
5.4 Group coordination
problems
Consider dedicating personnel to
monitoring coordination or undertaking
coordination tasks
Ensure that all proposals for solutions to
problems receive adequate critiquing and
other examination (cf. group facilitation
above)
Technological support (e.g. email and other
applications) may be appropriate to alleviate
coordination problems for distributed work
groups
3.2.2
5.5 Status related problems Ensure that experts are not ‘over-believed’
by critiquing all opinions
Ensure that novices and low-status group
members have the opportunities and
resources to confidently contribute
3.2.2
5.6 Group planning and
management problems
Ensure that the breakdown of tasks to sub-
tasks and the allocation of personnel to task
is conducted appropriately
3.2.2
5.7 Inappropriate leadership
style, skills and influence
Ensure that group leaders and others in a
management role have relevant experience
and knowledge of the problem domain
Ensure that they do not exclude minority
opinion or inappropriately bias solutions,
and balance ‘task-centred’ (making sure the
job’s done) with ‘socio-centred’ (e.g.
making sure all team members contribute
appropriately) leadership
3.2.3
5.8 Premature consensus and
the exclusion of minority
opinion
(i.e. the reaching of a
decision too early, before
all appropriate options
have been fully explored,
maybe because they were
only proposed or
supported by a minority of
the team)
Preserve, record and periodically review
minority opinions (e.g. courses of action not
favoured by the group as a whole)
Make resources available for the
development of minority opinions and the
critical exchange between minority and
majority opinions
3.2.4
3.2.5
5.9 Group polarisation
problems (e.g. risky shifts
and groupthink)
Counteract groupthink by composing
groups from varied backgrounds and skill-bases
Encourage access to sources of information
which might disconfirm existing opinions
Discourage dogmatism on the part of group
leaders or high status members
Especially review the composition and
dynamics of groups who seem to be
heading for increasingly risky decisions
3.2.6
A.6 Problems of an Organisational Nature
Ref Vulnerability Possible Defence Source
6 Organisational
vulnerabilities
Review organisation structure,
communication, culture and learning to
uncover how sensitive the organisation is to
safety and reliability issues
Consider the introduction of new
organisational forms, patterns of reporting, a
strong safety culture and explicit
mechanisms for experience capture to
promote fault tolerance and prevent error
propagation.
3.3
6.1 ‘Single points of failure’
exist where a mistake by
an individual can lead
directly to a failure or
hazardous condition
Higher levels of redundancy should exist in
personnel and/or technology (e.g. through
overlapping expertise and sharing of
responsibilities)
3.3.1 to 3.3.3 (footnote 23)
6.2 Errors and failures
propagate through the
process
Introduce appropriate supervision and
checking
ditto
6.3 Wide fluctuations in
workload
If the fluctuations can be anticipated, then
consider allocating staff flexibly to coincide
with high loads
ditto
6.4 Reporting procedures and
hierarchy of decision-
making authority prevents
rapid response to
problems as they arise
Decentralise authority to speed
responsiveness
ditto
6.5 Working practices allowed
to ‘slip’ into unsafe modes
Emphasise continuous training and a strong
safety and reliability culture
ditto
6.6 Failure to comply with
existing safety regulations
or develop new safety
procedures
Promote a strong safety and reliability
culture at all organisational levels
ditto
6.7 Recurrent failures of a
similar nature
Promote organisational learning ditto
6.8 Potential safety hazards
are allowed to pass
unrecorded
Implement reporting and disseminating
accounts of ‘near-misses’ and other safety
information
ditto
23 In chapter 3, it is argued that vulnerabilities at the organisational level have a character which requires
one to consider a large range of interacting factors. For this reason, analysts are recommended to
become acquainted with all of sections 3.3.1, 3.3.2, 3.3.3 and 4.2.3, no matter which specific
organisational vulnerability has been identified.
6.9 Organisational rigidities of
perception and belief
Introduce reviews and critiquing groups with
the responsibility of exploring the basis of
existing organisational practice
ditto
6.10 Significance of
vulnerability is minimised
Promote a strong safety and reliability
culture at all organisational levels
ditto
6.11 There exists tight coupling
of processes within a
complex production
system
Redesign processes to loosen coupling
and simplify production wherever possible
ditto
6.12 Process varies from
project to project (ad-hoc)
Adopt a specified and documented process ditto
A.7 Problems with Document Design
Ref Vulnerability Possible Defence Source
7 Document Design Carefully review document standards and
whether they promote the readability and
error-free production of documents
Carefully scrutinise any special textual
conventions used
Consider the introduction of technologies
(e.g. hypertext-based systems) which can
promote dynamic cross referencing
between documents.
4.3.1
7.1 Document standards do
not support the needs of
certain projects, leading to
e.g. omission of relevant
detail
The rationales for document standards
should be scrutinised, especially with
respect to how the standards might allow
the recognition of project-specific
idiosyncrasies.
4.3.1
7.2 Document standards
impose a strict ordering of
sections, regardless of
their relationship to each
other for a particular
system or project
The structure of documents should be
assessed both for how it supports
readability and how it reflects the
document’s conceptual organisation.
4.3.1
7.3 Textual notations are used
in documents to provide
additional information
regarding the status or
category of entities
contained therein
Textual conventions should be examined
with respect to how vulnerable they are to
error.
4.3.1
7.4 Requirements
specifications are used for
several purposes, e.g.
serves both as contract
between procurer and
supplier, and as
description of the system
to be developed
Separate the requirements specification
into different documents for specific
audiences and purposes.
Be sensitive to how documents serve
different functions and are intended for
different readers
Tool support could facilitate multiple views
onto the same document, thereby
maintaining consistency between views.
4.3.1
7.5 Documentation pertaining
to a particular system is
spread across a number of
documents, which must
be read together.
As the number of documents increases, it is
important to introduce effective strategies
for coordination across documents
Consider use of hypertext documents to
facilitate following cross-references and so
on.
4.3.1
A.8 Problems with Notations, Representations, and Diagrams
Ref Vulnerability Possible Defence Source
8 Notations, representations
and diagrams
Carefully review the rationale behind
notation conventions and representational
formalisms
Especially consider whether the notations
do in fact offer advantages over ordinary
language descriptions
Ensure that notations etc. are commonly
understood across development teams.
4.3.2
8.1 Diagrams often contain
only bounded objects
(circles, squares, etc.)
Allow for less distinct or self-contained
entities to be expressed in a less formal
notation
4.3.2
8.2 The set of relations that
can be drawn in a diagram
or otherwise formalised are
restricted in order to
reduce complexity (e.g.
only simple trees).
Allow for less constrained expression of
relationships between entities and consider
‘open-ended’, extensible formalisms
4.3.2
8.3 Diagrams and other
formalisms are used in
documents as a substitute
for exposition.
Include both exposition and
diagrams/formalisms in documents
4.3.2
8.4 Proximity effects
(i.e. two objects in a
diagram may be taken to
be related to each other
merely because they are
close to each other on the
page)
Make explicit any conventions used to
indicate relationships between entities in
diagrams
Construct alternate views onto diagrams to
prevent particular views being treated as the
‘only’ views.
4.3.2
8.5 Centre/periphery,
top/bottom, and left/right
biases
(i.e. pre-existing
perceptual or reading-related biases may lead to
attaching inappropriate
importance to entities
because of their position
in a diagram)
Make explicit any conventions used to
indicate relative importance of entities in
diagrams
Construct alternative views onto diagrams as
above
4.3.2
8.6 Diagrams and other
formalisms are copied and
pasted from one
document to another,
possibly in inappropriate
contexts
Include the source of diagrams and
formalisms if pasted from elsewhere, in
order to assist checking that they are being
used appropriately.
4.3.2
Appendix B PERE User Manual
This appendix presents a detailed user guide for PERE analysis. Some of the information
in this appendix is summarised in tabular form in Chapter 7. In addition to the step-by-step
guides to following the mechanistic and human factors viewpoint analyses, this
appendix also contains discussion on techniques that could be used to capture process
information for PERE, and also on how the method can be adapted and extended to
suit different domains of application, as new findings become available, and so on.
B.1 Process Capture for PERE
PERE analyses processes from two viewpoints and facilitates the evaluation of
processes in terms of identified vulnerabilities and potential defences. However, to do
this, PERE must have descriptive information about the nature of the process. How is
this information ‘captured’ so that PERE can model the process at all? In principle,
PERE can inter-work with any process capture method. However, within the REAIMS
family of modules, there is one notable method available which is especially appropriate
for process capture, and another which often yields a great deal of descriptive process
information as a ‘by-product’ of its normal operation. In addition, familiar techniques
from the social sciences (interviews, participant-observation and ethnography) could
also be used in a systematic way to support process capture for PERE. This section
discusses the utility of these methods and how they can work with PERE.
B.1.1 Techniques from Within the REAIMS Project
Two other techniques within the REAIMS project are particularly of interest with
regards to process capture for PERE. First, PREview-PV is a process improvement
method which builds up a process model from a number of identified viewpoints.
Second, MERE is a method for the capture, storage and evaluation of recommendations
for practice, which frequently gives rise to process descriptions which may be of use to
PERE.
B.1.2 Process Capture through PREview-PV
PREview-PV is a viewpoint-oriented method for understanding and analysing processes
in their organisational context and for identifying process improvement suggestions.
PREview-PV structures the information it captures regarding a process around
viewpoints, a concept which recognises that many different stakeholders can exist each
having a distinctive view onto and providing input to a process. Viewpoints can
correspond to both system components and humans involved in the process. Having
established the overall organisational concerns which the process relates to, PREview-
PV systematically sets about understanding and analysing the process by studying and
documenting its current state. While doing this, PREview-PV encourages the analyst to
collect process improvement suggestions which may be made by people who are sources
of information about the process. These, together with the results of further analysis,
comprise PREview-PV’s improvement planning stage. Once planned, selected
improvements can be implemented.
As PREview-PV is centrally concerned with the documentation of the current state of a
process, it is well suited as a process capture method for PERE. There are various stages
in the implementation of PREview-PV where a process description could be made
available as input to PERE. Optimally, PERE requires a single process description along
with notable or irreconcilable differences between viewpoints if these occur. This can be
obtained at the end of the viewpoint process description validation stage of PREview-
PV (described in section 5.4.6 of the PREview-PV document, Validate and refine
viewpoint process descriptions). A process description taken from PREview-PV at this stage
will be more stable, complete and validated than a description taken from any earlier
stage.
Process descriptions could be taken from PREview-PV prior to validation and
viewpoint reconciliation if it is essential to start PERE analysis as quickly as possible.
However, the PERE analyst would have to reckon with the possible costs involved in
working with descriptions taken prior to validation (errors and their consequences may
have to be corrected later) or ones whose viewpoints have not yet been reconciled (PERE may
involve unnecessary duplication of analysis of process descriptions where these
descriptions duplicate reconcilable material).
Process descriptions could also be taken from PREview-PV at a later stage, that is, after
work on process improvement suggestions has already commenced. This could have
value for PERE as it would enable the analyst to investigate the vulnerabilities (if any)
associated with the emerging process improvement suggestions and/or use those
suggestions to focus the PERE analysis itself (see section B.4). Provided that there is
adequate communication between those applying PREview-PV and those working with
PERE in any particular case, PERE should be able to study or utilise process
improvement suggestions derived by PREview-PV if not immediately, then on
subsequent iterations as the PREview-PV application becomes complete.
This suggests that a parallel application of PREview-PV and PERE, with PREview-
PV’s later stages overlapping with PERE’s earlier ones is most effective. This strategy is
further confirmed when it is recognised that PERE may also require more detailed
material from PREview-PV than PREview-PV may have initially given. For example, if
a particular process component on analysis seems to be a prominent source of
vulnerabilities, then PERE may need more information about it in order to conduct a
more detailed analysis. This may require the PREview-PV analyst to ‘re-open’ analysis of
that component so as to gain further detail. Furthermore, proposed defences against
vulnerabilities suggested by PERE could be subject to further analysis in PREview-PV
or generally influence the improvement planning strategy of PREview-PV. It is clear
then that PREview-PV and PERE are well-suited to joint application and that close
links should be maintained between the modules so that information can be efficiently
exchanged between them.
B.1.3 Process Capture through MERE
The MERE process within the REAIMS family is concerned to support an
organisation’s requirements activities by systematising the process of learning from
experience. It does so by structuring how ‘experience facts’ or reports of incidents are
captured and by supporting the formulation of ‘Rules/Recommendations (R/Rs)’ which
offer generalisations of good practice on the basis of experience facts. The MERE
process makes suggestions as to how experience facts should be captured, how R/Rs
should be generated by elaboration, then validated, selected for application and verified
(see PERE’s application to MERE in Chapter 8 for a more detailed description of
MERE).
While MERE is not directly concerned to capture and document the details of processes
(in contrast to PREview-PV), parts of MERE could nevertheless be used to provide
process information for PERE. For example, the initial description of experience facts
typically requires an account of a relevant incident to be contextualised in relation to the
process it is part of. At later stages in MERE, when for example R/Rs are being
elaborated, process information again emerges as a means of supplying context for the
experience facts under consideration as well as in the comparative assessment of an R/R
across different application contexts.
In short, descriptive process information may often emerge in MERE as a by-product of
MERE’s principal application. Such information is likely to emerge in a piecemeal way
(and hence MERE cannot be counted upon as the only capture technique required by
PERE) but nevertheless this will usefully complement the information obtained by
other methods. Indeed, it is quite likely that process information captured by MERE
will be most informative of how processes behave ‘under stress’ or in critical conditions
as it is under these circumstances that the experience facts which MERE documents are
most likely to emerge. This will complement process information captured by methods
more likely to reflect normal mode operation.
MERE may also be very useful for informing PERE about less mature processes. In such
cases, where the repeatability of the process is itself in question, access to a set of
relevant experience facts may comprise the most appropriate process information. Under
such circumstances, the experience fact capture and R/R elaboration components of
MERE will both be relevant for capturing process information for PERE.
B.1.4 Other Techniques for Process Capture
Several research methods standardly employed by social scientists may be usable as
process capture techniques for PERE, and ethnographic, interview-based and task
analysis methods are briefly discussed here. It should be emphasised that, although these
methods have long traditions of use in various social sciences and as research-oriented
techniques, they are relatively immature methods as techniques specifically for process
capture purposes. Accordingly, these methods are most appropriately used by
practitioners already skilled in them, who will find their extension to capturing
processes more readily foreseeable.
For example, ethnographic research methods are typically characterised by field studies
of human behaviour and social activity in some naturally occurring setting where the
analyst will attempt to understand the setting ‘from the inside’, in terms recognisable by
the setting’s ‘members’ themselves. A well conducted ethnography will attend to the
details of the setting and social activity within it and concentrate on exactly how the
work is done (for workplace settings, that is). Ethnographic methods are becoming more
commonly known and applied in requirements engineering, though very few studies
exist which take requirements processes and how they are enacted as their object of
investigation. Thus, from the point of view of this thesis, ethnography is a relatively
‘immature’ method for process capture. Nevertheless, the details which carefully
conducted ethnographies uncover can often be invaluable for informing system design.
Furthermore, there exist some attempts to integrate ethnographic workplace research
with viewpoint-oriented techniques. For example, Hughes et al (1995) discuss how the
results of such observational studies can be structured according to a number of
viewpoints (e.g. ‘workplace ecology’, ‘flow of work’).
Familiar research techniques like the structured open-ended interview could also be
applied as a process capture technique for PERE. Indeed, interviews of this sort form
the basis of PREview-PV with its viewpoint concepts enabling interview derived
material to be structured effectively. Interviews with personnel able to inform the
PERE analyst about the nature of the process to be analysed can be conducted using
other methods, however. A summary of the uses of the research interview and issues to
consider in the planning and execution of interviews can be found in literature on
knowledge acquisition (McGraw and Harbison-Briggs, 1989; Stewart and Cash, 1985).
Methods of ‘task analysis’ from human-computer interaction (HCI) and ergonomics
research could also be used to support PERE. Diaper (1989a) collects together a
number of papers on a variety of task analysis methods which could be adapted for the
purposes of process capture.
B.2 The PERE Mechanistic Viewpoint
B.2.1 Introduction
This approach to analysing and finding vulnerabilities in systems has its origins in the
classical safety analysis technique Hazops (Kletz, 1992). Whilst this grounds the
approach in tried and tested analysis methods, it should be noted that prior knowledge
of this technique is not intended to be a pre-requisite for applying PERE. The source for
the particular approach adopted here was internal to the REAIMS project (Bloomfield et
al., 1995). The mechanistic viewpoint first defines a very general method for describing
systems, and representing vulnerabilities of systems. The system model described is based
on the principles of using abstraction and modularity to describe systems, to consider
generic component classes with generic vulnerabilities, and to explicitly consider the
‘working material’ which the components act on and which moves through the system
(typically being transformed in the process). Then to identify possible vulnerabilities, the
components are systematically examined. This is done using an error classification which
could be general or tailored to fit the application area or by past experience.
It should be emphasised that, for this viewpoint, both human and machine activity in
the process are taken to be analysable into components. As discussed below, the generic
component classes are defined with respect to their function within the process, not
whether that function is realised by a human or a machine. Specifically, human factors in
processes are analysed in the human factors viewpoint (see section B.3).
The system model has two parts. The first part of the model is a method for describing
components of a system, inspired by ideas from object-oriented programming. This
method also supports top-down design using abstraction to hide details not relevant to
the level of analysis. The second part is a method for defining hazardous conditions and
unsafe behaviours of the system, in terms of the components and their behaviour
defined in the first part.
A system model therefore consists of a collection of components, connected via
interfaces. Connections are only permitted when the interfaces are of compatible
material types. The component connects to the external world through those interfaces
that are not connected to other components, and the behaviour of the complete system
can be derived from composing the behaviours of the components.
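As a purely illustrative sketch (the class, function and material names here are invented for this example and are not part of PERE itself), the rule that components may only be connected through interfaces of compatible material types can be expressed in a few lines of Python:

```python
class Component:
    """A PERE-style process component with typed interfaces."""
    def __init__(self, name, inputs, outputs):
        self.name = name
        self.inputs = inputs        # material types accepted at the inputs
        self.outputs = outputs      # material types produced at the outputs
        self.connections = []       # (consumer, shared material types)

def connect(producer, consumer):
    """Connect two components only if some output material type of the
    producer matches an input material type of the consumer."""
    shared = set(producer.outputs) & set(consumer.inputs)
    if not shared:
        raise ValueError(f"incompatible materials: {producer.name} -> {consumer.name}")
    producer.connections.append((consumer, shared))
    return shared

# Invented example: a document flows from an editing step to a review step.
edit = Component("edit", inputs={"draft"}, outputs={"document"})
review = Component("review", inputs={"document"}, outputs={"report"})
connect(edit, review)   # permitted: both sides handle the material 'document'
```

Interfaces left unconnected in such a model correspond to the component's connections to the external world, as described above.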
PERE’s mechanistic analysis component is therefore concerned to analyse a process in
terms of its components, interconnections and working materials and to uncover
possible process vulnerabilities. A further analysis then considers the likelihood and
severity of consequences associated with each identified vulnerability and whether
defences need to be put in place. The ultimate decisions regarding defences to process
vulnerabilities will depend upon complex judgements of the cost, availability and so
forth of defences as well as a consideration of purely safety related issues.
This section concentrates on providing step-by-step guidance for the mechanistic analysis
component of PERE. Essentially this consists of six main steps. These are described in
the following overview section. Subsequent sections provide more detailed guidance to
assist the analyst in following this process.
B.2.2 Overview of the Mechanistic analysis
What follows is a ‘fast-track’ guide to the mechanistic viewpoint. It is designed to allow
analysts to rapidly obtain an overview of the mechanistic viewpoint analysis, as well as
functioning as a quick reference to the process. Figure B.1 below illustrates this process,
and brief instructions for each stage follow in the subsequent sections. These
sections take the form of a question to be asked, along with actions to
be performed related to the response to the question. In the figure, the links between
the process and PREview-PV indicate the points at which process capture information
may pass in either direction (see section B.1). Similarly, the links to the human factors
viewpoint indicate the various points at which information may be passed on for
analysis with respect to vulnerabilities due to human activity (see section B.2.11 for
discussion on this point).
[Figure B.1: The PERE Mechanistic Viewpoint process (see footnote 24). The figure shows the six stages (select relevant processes; obtain process documentation; specify process structure and working materials; specify components; identify weaknesses; review weaknesses), the output to the human factors viewpoint, and optional links to PREview-PV and MERE.]
B.2.2.1 Select relevant processes
Which parts of the lifecycle are to be analysed?
Answer this question referring to the customer needs and organisational goals for
the analysis
Select processes accordingly
24 Hairlines indicate optional links. PREview-PV can be used as a means of process capture (see section
B.1.2). MERE can both provide input to process capture through the identification of problematic
processes and inform the identification of process vulnerabilities through the analysis of experience
facts (see section B.1.3).
B.2.2.2 Obtain process documentation
Is there sufficient documentation for the selected processes?
No: collect further information to add to existing documentation (using
whichever process capture techniques are most appropriate or preferred)
Yes: Proceed to next stage.
B.2.2.3 Specify process structure and working materials
The PERE analyst can make use of any techniques they are familiar with (such as
those contained within PREview-PV, or those associated with ISO 9000, for example) to
assist with capturing the necessary process information in order to perform this step.
IDENTIFY COMPONENTS
What are the components in the process?
Identify and name components from the process documentation
CLASSIFY COMPONENTS
What classes are the identified components?
Classify into specialisations of the generic component classes (transduce, process,
channel, store, control) generating new ones where no existing class is suitable
IDENTIFY INTERCONNECTIONS AND WORKING MATERIALS
How are the components connected together and what are the materials passed
between them?
Identify the input and output components for each component
Identify the different classes of working material passed between components
B.2.2.4 Specify components
What are the behaviour and properties of each component?
Complete a row of the PERE Component Table (PCT, see Table B.1 and section
B.2.7) for each component, defining its class, interfaces and working materials,
(optional) state, invariant, (optional) pre-conditions and resources, and (optional)
external control.
Output component specification to human factors analysis as required (see
section B.2.2.7 below)
B.2.2.5 Identify vulnerabilities
This step could be informed by data available from an experience capture technique such
as MERE (see B.1.3).
What could go wrong with each component?
Identify vulnerabilities according to the class of the component. Each class and
sub-class of component may have generic vulnerabilities associated with it.
Identify vulnerabilities according to the specification of the components. Each
column heading in the PCT may have generic vulnerabilities associated with it.
Output current process model after each iteration to human factors analysis as
required (see section B.2.2.7 below)
B.2.2.6 Review vulnerabilities
What are the likelihood and consequences of each of the identified vulnerabilities, and
how could they be protected against?
Complete a row of the PERE Vulnerability Table (PVT, see Table B.2) for each
vulnerability, defining its class, likelihood, consequence, possible defences, and
possible secondary vulnerabilities.
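The structure of the PCT and PVT can be pictured as simple records, one per component and one per vulnerability. The following is a hypothetical rendering only: the field names follow the column headings given above, and the example entries are invented for illustration.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PCTRow:
    """One row of the PERE Component Table (cf. Table B.1)."""
    component: str
    component_class: str              # transduce, process, channel, store or control
    interfaces: list                  # interfaces and their working materials
    state: Optional[str] = None       # optional
    invariant: str = ""
    preconditions: Optional[str] = None       # optional, with resources
    external_control: Optional[str] = None    # optional

@dataclass
class PVTRow:
    """One row of the PERE Vulnerability Table (cf. Table B.2)."""
    vulnerability: str
    vulnerability_class: str
    likelihood: str                   # e.g. low / medium / high
    consequence: str
    defences: list = field(default_factory=list)
    secondary_vulnerabilities: list = field(default_factory=list)

# Invented example rows for a document-review process.
pct = [PCTRow("review meeting", "process",
              interfaces=["in: draft document", "out: review report"],
              invariant="every requirement is inspected")]
pvt = [PVTRow("requirement skipped under time pressure", "omission",
              likelihood="medium", consequence="undetected defect",
              defences=["checklist", "second reviewer"])]
```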
B.2.2.7 Output to human factors analysis
Did the last iteration of vulnerability identification and review generate no further
vulnerabilities?
No: reiterate from Identify Components, consider passing current process model
to human factors analysis if there is a sufficient number of components which
could be vulnerable to human errors. Also consider passing on specifications of
single process components if they are vulnerable to human errors.
Yes: pass complete process model consisting of fully completed PCT and PVT to
the human factors analysis.
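The iterate-until-stable shape of steps B.2.2.5 to B.2.2.7 can be sketched as a driver loop. This is purely illustrative: the identify and deepen functions stand in for the manual analysis activities described above, and the toy example below is invented.

```python
def mechanistic_analysis(model, identify, deepen, max_iterations=10):
    """Driver loop for the mechanistic viewpoint: identify vulnerabilities,
    record them (as reviewed in the PVT), and deepen the model, stopping
    when an iteration yields no further vulnerabilities."""
    found_so_far = []
    for _ in range(max_iterations):
        new = [v for v in identify(model) if v not in found_so_far]
        if not new:                   # B.2.2.7: nothing further, hand over
            break
        found_so_far.extend(new)      # reviewed and entered into the PVT
        model = deepen(model, new)    # reiterate at a finer level of detail
    return model, found_so_far

# Invented toy example: each deepening step can expose at most two
# vulnerabilities in total, so the loop terminates after two rounds.
toy = {"depth": 0}
identify = lambda m: [f"vulnerability at depth {d}" for d in range(m["depth"] + 1)][:2]
deepen = lambda m, new: {"depth": m["depth"] + 1}
final_model, vulnerabilities = mechanistic_analysis(toy, identify, deepen)
```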
B.2.3 Identifying Components by Iterative Deepening
Components can be identified at any level of abstraction provided that a consistent level
is adopted throughout the analysis in a single iteration of the mechanistic component of
PERE. Initial analysis should, however, take place at a level where basic classes of
components can be identified in the process. ‘Basic classes’ should be taken to mean
classes of component which can be seen as elementary specialisations of an abstract
‘component’ class. Currently, five elementary specialisations of ‘component’ are
identified at this level: ‘transduce’, ‘process’, ‘channel’, ‘store’ and ‘control’.
Analysis in terms of these basic classes will yield generic vulnerabilities associated with
the process. More specific analyses can subsequently be carried out on those components
which manifest the most critical vulnerabilities at this generic level. That is, subsequent
iterations of the mechanistic PERE analysis will deepen analysis just where it is
required. (The definition and identification of ‘critical’ vulnerabilities can be facilitated
by integrating PERE with PRA techniques, see section B.2.13 below.) Further iteration
will involve further decomposition of the process components already identified as
sources of critical vulnerability. The characterisation of these sub-components will
involve further specialisation of the basic component classes (transduce, process and the
rest) to accurately depict the sub-components.
Thus, the identification of components follows a process of ‘iterative deepening’. First,
components are identified at the highest levels of abstraction. The most vulnerable
components are decomposed further in a second iteration with the sub-components
identified in terms of more specialised classes of component-type. Further iterations can
be conducted until the analysis is revealing no further vulnerabilities of significance. This
strategy of iterative deepening is depicted in Figure B.2.
This approach to the decomposition of processes and the identification of vulnerabilities
is often facilitated by the development of an extensible set of component classes and
their specialisations. In application, users of PERE will tend to specialise it to their
particular application domain (the aircraft industry, railway signalling, power-plant
control systems or whatever). A great part of this specialisation will involve the addition
of new component classes descending from the basic classes of transduce, process,
channel, store and control. It is not possible for this general description of PERE to list
all possible specialisations of these basic classes as these can only emerge in PERE’s use
in specific application domains (see section B.4 below).
Accordingly, PERE draws upon the approach (familiar in object-oriented programming
and analysis) of taking classes and sub-classes of objects that inherit properties from the
classes to which they belong. In usage, PERE involves the definition (and in mature
usage the reuse) of a hierarchy or taxonomy of components from the most general to the
most specific. Specific sub-classes of components should retain the basic properties of
their class and include new behaviours or properties of the specialised sub-class.
It should be emphasised that PERE draws upon these notions from object-oriented
analysis but in no way requires PERE analysts to be experts in object-oriented
techniques. Indeed, under some circumstances (e.g. for a very simple process with few
process components), presenting the process components as derived from a hierarchy of
classes and sub-classes may be excessive. Such issues are considered in further depth in
section B.4.
It is to be noted that many processes may involve quite complex components which are
not readily understood as specialisations from a single parent class. For example, batch
processing components (whether they be batch chemical reactors or batch information
processing systems) have both store and process functions. This means that component
class relations should allow multiple inheritance.
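This use of a class hierarchy in which sub-classes inherit the generic vulnerabilities of their parents, including multiple inheritance for hybrid components such as batch processors, can be sketched as follows. The specific vulnerability strings are illustrative inventions, not part of PERE.

```python
class ComponentClass:
    """Root of the taxonomy of PERE component classes."""
    vulnerabilities = []

    @classmethod
    def all_vulnerabilities(cls):
        """Collect the generic vulnerabilities of this class together with
        those inherited from every ancestor class."""
        collected = []
        for klass in cls.__mro__:
            for v in getattr(klass, "vulnerabilities", []):
                if v not in collected:
                    collected.append(v)
        return collected

class Store(ComponentClass):
    vulnerabilities = ["material lost or corrupted in storage"]

class Process(ComponentClass):
    vulnerabilities = ["material transformed incorrectly"]

class BatchProcessor(Store, Process):
    """A hybrid component: it both holds material and transforms it, so it
    inherits the generic vulnerabilities of both parent classes."""
    vulnerabilities = ["batch released before processing complete"]
```

Here BatchProcessor.all_vulnerabilities() yields its own generic vulnerability together with those of both Store and Process, mirroring the multiple-inheritance requirement noted above.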
[Figure B.2: Process analysis through selective iterative deepening. The figure shows a process A decomposed into components A.1 to A.8; components in which a critical vulnerability has been identified (e.g. A.2 and A.7) are decomposed further into sub-components and sub-sub-components for deeper analysis.]
B.2.4 Basic Component Classes
Here more detail is provided about the nature of PERE’s basic component classes
(transduce, process, channel, store and control).
B.2.4.1 Transduce
A transducing component converts from one form of energy to another (e.g. a
thermocouple, hydro-electric generator, AC/DC converter, electrical heater). Where the
working materials are forms of representation (e.g. documents, descriptions, records,
database entries), transduction involves a change of the (physical) form of the
representation (transcription, recording, scanning, compiling software, printing out), not
a change in the semantics of the representation.
B.2.4.2 Process
A processing component converts working material(s) from one form to another (e.g.
chemical reaction, metal casting, paper cutting, image processing). Where the working
materials are forms of representation, processing involves transforming the
representation into a new one of the same physical form, typically with an associated
change in semantic content (editing documents, summarising reports, testing software,
analysing recordings, annotating print-outs).
B.2.4.3 Channel
A channel component transfers working material from one function to another (e.g. a
pipe, an electrical conductor). Where the working materials are forms of representation,
a channel component will be the means by which representations are transported from
one destination (a sender) to another (a recipient), such as telecommunications links,
networks supporting electronic messaging, physical paper-mail systems and so forth.
B.2.4.4 Store
A storage component holds the working material for an extended period of time (e.g. a
storage tank or a warehouse). Where the working material is a form of representation,
storage components include archives, libraries, filing cabinets, computerised databases,
file servers, local file stores and so on.
B.2.4.5 Control
A control component includes a means to control or determine the flow of working
material from one function to another or directs how the functioning of another
component takes place (e.g. a valve or a switch). Where the working material is a form
of representation, control consists of managing how the processing of information takes
place, or switching the responsibility of document production from one process-
component to another, or controlling the means of storage of documents or files, and so
forth. Control includes the ‘executive control functions’ involved in switching the flow
of physical materials and forms of representation from one set of components to
another.
B.2.5 Component Specification
In addition to the class the component is a member of, the overall description of a
component is completed by defining its behaviour and properties in terms of its
interfaces, (optionally) its possible states, its invariant mode of functioning, (optionally)
the pre-conditions and resources necessary for the component to function, and
(optionally) the external control the component is subject to. These attributes are now
discussed in more detail:
B.2.5.1 Class
The class of the component together with the ‘path’ in the class hierarchy from the
component to the root class, thereby defining any further properties the component
might inherit (e.g. pipe, which is a sub-class of the channel class, or database, which is a
sub-class of the store class).
B.2.5.2 Interfaces and Working Materials
A listing of the interfaces through which the component receives working materials
together with a specification of the type of working material received (e.g. liquid flow,
electrical power, document, entity-relationship diagram and so forth, which are all
descended from the root class of ‘working material’). A scheme for designating and
referring to interfaces is suggested below in section B.2.7.
B.2.5.3 (Optional) State
The set of internal states of the component that affect its behaviour and functioning
(e.g. temperature, pressure, level, available buffer storage, processing mode and so
forth). While states may be easy to identify and define for mechanical components,
process components whose function is realised by a person or a group of people may be
harder to define in this way. Hence state definitions are optional.
B.2.5.4 Invariant
The fixed relationships between the interfaces. To define this is to define how the
component’s input relates to its output, that is, the expected mode of functioning of the
component. Defining the invariant is to identify what the component should do. For
example, for the transduce component ‘compile’, the invariant might be specified as “the
output (object code) when executed performs the instructions as specified in the input
(source code)”. For a store component ‘database’, the invariant might be specified as
“data input to the database is retained in its input form and not lost, so that it is
available for retrieval unaltered”.
B.2.5.5 (Optional) Preconditions and Resources
This involves the definition of required conditions for normal operation, that is,
conditions which must be true if the invariant is to hold. This includes resources (fuel,
tools, the availability of human resources and so forth) that the component requires for
its mode of functioning to be satisfied. This attribute of the component definition is
optional because the preconditions may be non-specific to any particular component
(e.g. workforce must not be on strike) or trivial (e.g. in the case of ‘passive’ electrical
components). Examples would include: “must be protected from electrical spikes and
humidity”, “must be initialised by a reset signal”, “must be a version compatible with the
target platform”, or “must be operated by appropriately skilled personnel”.
B.2.5.6 (Optional) External Control
The relationships between interfaces specified in the invariant which can be influenced
by external control sources. Thus, if the mode of functioning of a component can be
affected by control signals, the source of the control and the effects it has should be
noted here (e.g. “executive component ensures throughput is optimal” or “whether
document is further edited or approved is determined by senior management”, or
“quality of software is approved at review meeting”). Note: in a complete analysis, the
source of control should be a component of class control.
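The component specification just outlined can be rendered as a simple record structure. The following Python sketch is hypothetical (PERE prescribes no such representation); the field names follow the attributes of section B.2.5, with the optional attributes defaulting to empty.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical rendering of a PERE component specification as a record.
# Optional attributes (state, preconditions, external control) default to None.
@dataclass
class ComponentSpec:
    name: str
    klass: str                                # class path to the root, e.g. "store/database"
    interfaces: list                          # interfaces and their working materials
    invariant: str                            # expected mode of functioning
    state: Optional[str] = None               # (optional) internal states
    preconditions: Optional[str] = None       # (optional) pre-conditions and resources
    external_control: Optional[str] = None    # (optional) control sources and effects

db = ComponentSpec(
    name="database",
    klass="store/database",
    interfaces=["data in", "data out"],
    invariant="data input to the database is retained unaltered and not lost",
)
```

Recording the class path (here `"store/database"`) preserves the inheritance information discussed in section B.2.5.1.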
B.2.6 Interconnection and Working Materials
The treatment of components which has just been outlined implicitly involves the
identification of the interconnections and working materials involved in the process.
Specifically, these are noted within the definition of each component under the
interfaces attribute. Having listed the components of a process, the PERE analyst
should then specifically check that the interfaces of a component are compatible with
the interfaces of the components it is assumed to connect with. If two interfaces are
incompatible (because either the connection has not been noted in analysis or the
working materials have been inconsistently described), then the analyst should further
investigate whether this implies some problem of process analysis and whether some re-
analysis is required.
Alternatively, such incompatibilities may reflect not so much an error in analysis as real
vulnerabilities in the system. If the analysis cannot be rectified and it is suspected that
incompatible interfaces suggest the presence of process faults, this should be immediately
noted and such faults given priority attention in later Vulnerability Review.
To further ensure the reliability of the emerging process description, the PERE analyst
should independently list the interconnections and working materials used in the process
and check these off against the definition of the process components. If interconnections
or working materials are noted which have not yet been included in the definition of
some component, then the analyst should first inspect the components to see if their
interface definitions need to be amended. If this fails, then the analyst should consider
the definition of new components with appropriate interfaces.
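This redundancy check can be expressed as a simple cross-check: the analyst's independently compiled list of interconnections is checked off against the interfaces recorded in the component definitions. The following Python sketch is illustrative only; the function and data names are hypothetical.

```python
# Hypothetical sketch of the redundancy check described above: any
# interconnection on the analyst's independent list which no component's
# interface definition mentions is flagged for further investigation.

def unmatched_interconnections(components, interconnections):
    """Return interconnections mentioned in no component's interface list."""
    declared = {i for ifaces in components.values() for i in ifaces}
    return [c for c in interconnections if c not in declared]

components = {
    "editor": ["draft-in", "draft-out"],
    "archive": ["draft-out"],   # receives the edited draft
}
# One interconnection was noted independently but appears in no component:
missing = unmatched_interconnections(
    components, ["draft-in", "draft-out", "review-out"])
```

Each entry returned (here `"review-out"`) prompts the analyst either to amend an interface definition or to define a new component, as described above.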
Clearly, PERE involves a degree of redundancy in defining connections between
components and interfaces within individual component definitions. For simple
processes with relatively few components, interconnections and a small variety of
working materials, this redundancy may be unnecessary. However, especially for more
complex processes, this redundancy provides a useful second check on the definition of
the process as discussed above. Furthermore, some processes may be easiest to analyse by
first focusing on components, others may be easiest to analyse by first focusing on
interfaces or interconnections. PERE’s flexibility is most enhanced by separately
recognising each of these concepts. These issues are discussed in more depth in section
B.4.
B.2.7 Representing the Results of Analysis
Analysis of processes in terms of their components, interconnections and working
materials permits the construction of the ‘PERE component table’ or PCT (see
Appendix C for examples). Typically, the PCT for a process will contain a row for each
process component. The PCT contains columns for component names, classes (in terms
of the classes of components and their specialisations), the interfaces and working
materials of each component, (optionally) states, invariants, (optionally) preconditions
and resources, and (optionally) external control.
Table B.1: PERE Component Table Headings
Component name | Class | Interfaces and working materials | (Optional) State | Invariant | (Optional) Pre-conditions and resources | (Optional) External control | Source
A final column labelled ‘source’ enables the PERE analyst to note the source or sources
of information used to base the analysis of the component upon. This enables the analyst
to explicitly link the analysis with process documentation, recordings of interviews or
notes of observations, whatever source was used in the analysis of the particular process
component. If the viewpoint analysis techniques of PREview-PV have been used to
capture the process for PERE analysis, then the contents of the source column should
closely relate to the sources used in the application of PREview-PV. Inclusion of links
to information sources importantly supports the traceability of the PERE analysis. For
example, if subsequent iteration of analysis is required for a component, the source
column will give an indication of the sources which have so far been used to gain
information about the component. In addition to facilitating traceability, the source
column allows the PERE analyst to note further information about the component
which needs to be recorded explicitly alongside the other details about the component
but which does not fall readily under any other column. For example, experience in
applying PERE has indicated that PERE analysts often find it useful to note
information about exceptional cases alongside information sources. Additionally, when
there are contradictions between information sources, these can be noted here (e.g.
when the reports of persons involved in a process contradict process documentation).
It is often useful to give a numerical reference to components and interfaces alongside
their textual description in a PCT. This facilitates referring to components compactly
and, in addition, the discipline of providing such references can encourage the PERE
analyst to ensure that the analysis is complete and correctly organised. Such references
can come principally from two sources: i) pre-existing process documentation, or ii)
references supplied by the PERE analyst. If pre-existing process documentation does use a
consistent numbering or lettering system, we recommend that the PERE analyst use this
(or some simple variant on it) as this will further facilitate cross-referring the analysis to
documentation. If the information sources are not consistent, then the analyst should
either adopt one of the available reference systems or devise their own and ensure that
the discrepancies are noted in the PCT sources column.
Experience with the use of PERE has suggested the following reference system if none
is satisfactorily available in pre-existing process documentation.
For components: refer to components numerically by a string of period (.) separated
integers with each position in the list indicating the depth of decomposition of
the process component. For example, a component referred to as 1.4.2 will be
sub-sub-component number 2 of sub-component number 4 within component 1.
Not only does such a scheme give a unique identifier to each component, it
indicates something of the process structure as well as providing some idea of
how many iterations of analysis have been conducted. As further iterations are
conducted and the analysis deepened, longer strings can be written as component
references building on those from earlier iterations. Component references should
be recorded in the component name column of the PCT.
For interfaces: refer to each interface by a string of three hyphen (-) separated
elements. The first two elements are component references, the first of which
indicates where the working material which enters the interface has come from
and the second of which indicates the component which the working material is
entering. The third element is an alphabetic letter to disambiguate cases where
more than one connection exists between components (a string can be composed
if the number of connections between two components exceeds the number of
single letters in the alphabet used). For example, 2.1-3.4.1-a is interface ‘a’ in
component 3.4.1 bringing working material to that component from component
2.1. We recommend that, in the case where a single connection exists between
two components, a letter is still appended to the two component numbers even
though it is not necessary for disambiguation purposes as this heightens the visual
difference between interface and component references.
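The reference scheme above lends itself to simple mechanical support. The following Python helpers are a hypothetical sketch (the function names are not part of PERE) showing how component depth and interface references can be interpreted.

```python
# Hypothetical helpers for the reference scheme suggested above.

def component_depth(ref):
    """Depth of decomposition: '1.4.2' is sub-sub-component 2 of
    sub-component 4 within component 1, i.e. depth 3."""
    return len(ref.split("."))

def parse_interface(ref):
    """Split an interface reference such as '2.1-3.4.1-a' into
    (source component, destination component, disambiguating letter)."""
    source, dest, letter = ref.split("-")
    return source, dest, letter
```

For example, `parse_interface("2.1-3.4.1-a")` identifies interface 'a' in component 3.4.1, bringing working material from component 2.1, exactly as in the example above.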
In section B.2.6, it was noted that PERE separately identifies interfaces on the one hand
and interconnections on the other, even though this introduces redundancy into the process
modelling concepts. It was argued that such redundancy can often introduce a degree of
robustness to the analysis as well as permitting the analysis to be conducted equally
effectively whether the analyst was focusing on components or on inter-connections.
The scheme discussed for referring to components and interfaces is consistent with this
principle. The PCT can be used to, for example, first list all process components (with,
say, just interface references appearing in the interfaces column) and then list all
interconnections together following this (with the working materials and, if necessary,
some other textual description in the ‘interfaces and working materials’ column).
Alternatively, all of the information related to a component could be gathered together
in a single row of the PCT, in which case details of the interfaces would appear
alongside other component information and each interconnection would be mentioned
twice in the PCT (once in each component inter-connected). It is acknowledged that the
PCT should be adapted in this fashion in line with the kind of analysis being performed.
The referencing scheme can permit some simple checks on the consistency and
completeness of the analysis to be made at a glance. For example, when interfaces are
fully noted within each process component in the PCT, it is recommended that interface
references should be recorded alongside each interface listed in the ‘interfaces and
working materials’ column in the row for the component in question. It is further
recommended that ‘in’ interfaces are listed first, followed by ‘out’ interfaces. If all are
numbered as suggested, the first half of the interface list in the component’s row will
have entries with the component number in the second position of their references, and
the remaining interfaces will have the component number in the first position of their
references. This provides a simple visual check that the analysis of components and of
interfaces is consistent and that the direction of the connections is correctly recorded.
(Note: two-way connections should be recorded as a pair of one-way connections in this
scheme.)
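The visual check just described can equally be mechanised. The following Python sketch (hypothetical; not part of PERE) verifies that 'in' interfaces carry the component number in the second position of their references and 'out' interfaces carry it in the first.

```python
# Hypothetical check of the ordering rule described above: 'in' interface
# references name the component second, 'out' references name it first.

def interfaces_consistent(component, in_refs, out_refs):
    """True if every listed interface reference records the connection
    direction consistently with the component it belongs to."""
    ok_in = all(r.split("-")[1] == component for r in in_refs)
    ok_out = all(r.split("-")[0] == component for r in out_refs)
    return ok_in and ok_out
```

For component 3.4.1, the 'in' reference `2.1-3.4.1-a` and the 'out' reference `3.4.1-5.2-a` pass this check; a reference recorded with the direction reversed would fail it.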
In addition to a textual representation of the component in terms of its interfaces,
states, invariant, pre-conditions and external control, a component can be given a
diagrammatic depiction using an extended form of the standard SADT notation as
shown in Figure B.3. SADT-style diagrams can often be an effective way of presenting
the organisation of a process at a glance (for an example of an entire process expressed
in this fashion, see Appendix C). However, diagrams are not always so effective at
representing very complex processes with many components (they readily become
cluttered), nor at representing all the information about components (e.g. there may
not be room to give information about sources on an SADT-style diagram). Accordingly,
it is recommended that SADT-style diagrams are used together with the tabular/textual
representation provided by the PCT unless there is good reason to dispense with one
form of representation.
[Figure B.3 (figure omitted): a component is drawn as a box with input interfaces on one side and output interfaces on the other, each carrying working materials, and annotated with its state, invariant, pre-conditions and resources, and external control.]
Figure B.3: SADT-like notation for process models in PERE
In addition to representing the results of analysis, the PCT and process diagrams can
also indicate to the analyst the current state of the analysis itself. For
example, if an entry in the PCT is blank, this will indicate to the PERE analyst that
further analysis is required. The PCT, then, can serve as a visual ‘to-do’ list for the
PERE analyst.
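This 'to-do list' use of the PCT can be sketched mechanically: blank entries in a row mark analysis still to be done. The following Python fragment is a hypothetical illustration (PERE does not prescribe tool support).

```python
# Hypothetical sketch: blank (None) entries in a PCT row mark work still
# to be done by the PERE analyst.

PCT_COLUMNS = ["name", "class", "interfaces", "invariant", "source"]

def todo(pct_rows):
    """List (component name, column) pairs where analysis is incomplete."""
    return [(row["name"], col)
            for row in pct_rows
            for col in PCT_COLUMNS if row.get(col) is None]

rows = [{"name": "compile", "class": "transduce", "interfaces": None,
         "invariant": "object code performs the source's instructions",
         "source": None}]
```

Here the blank 'interfaces' and 'source' cells for the 'compile' component would be returned as outstanding items.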
B.2.8 Vulnerability Identification
When the analysis of the process into its components, interconnections and working
materials is complete at a given level of abstraction, then the next step of identifying
vulnerabilities can commence. To support this, vulnerability classes can be suggested at
each level of abstraction for which there are component classes. Thus, there are basic
vulnerabilities associated with the transduce, process, channel, store and control basic
classes and more specialised vulnerabilities associated with specialisation of these basic
classes. For example, a channel basic class may be subject to the basic vulnerability of
“wrong capacity”, while a pipe (a specialisation of channel) may be subject to the
vulnerability of “diameter too small” (a specialisation for pipes of “wrong capacity”).
The existence of vulnerabilities associated with components and matched in their level
of abstraction provides heuristics for the PERE analyst to identify vulnerabilities in any
actually existing process. Thus, the analyst can check suggested vulnerabilities against the
existing process for plausibility at each level of abstraction that the mechanistic analysis
iterates through.
As with the component classes, the associated vulnerability classes are also extensible on
the basis of the experience of using PERE as well as drawing on the known existence of
vulnerabilities to particular classes of components (e.g. as collected from critical incident
records).
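The pairing of vulnerability classes with component classes, matched in level of abstraction, can be sketched as follows. This Python fragment is a hypothetical illustration: a sub-class inherits its parents' suggested vulnerabilities and adds its own specialisations, so the analyst receives heuristics at every level of abstraction.

```python
# Hypothetical sketch of class-matched vulnerability heuristics: a
# specialised class (pipe) inherits the basic vulnerabilities of its
# parent (channel) and adds its own, as in the example above.

VULNERABILITIES = {
    "channel": ["component fails", "wrong capacity", "wrong connections"],
    "pipe": ["diameter too small", "blockage", "leakage"],  # specialises channel
}
HIERARCHY = {"pipe": "channel", "channel": None}

def heuristics_for(klass):
    """Collect suggested vulnerabilities for a class and all its ancestors."""
    out = []
    while klass is not None:
        out.extend(VULNERABILITIES.get(klass, []))
        klass = HIERARCHY.get(klass)
    return out
```

Thus a pipe component attracts both "diameter too small" (its specialisation of "wrong capacity") and the generic channel vulnerabilities, while a plain channel attracts only the latter. Both tables are intended to be extended with use, as noted above.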
At this stage in the analysis, it may be useful to turn to what records are kept (if any) of
incidents and accidents to examine them for evidence of process vulnerabilities. If the
organisation has implemented its own version of the MERE process for experience
capture, then this will also be a useful input to the identification of vulnerabilities (see
also section B.1.3 on MERE as a process capture technique).
B.2.9 Basic Component Classes and Basic Vulnerability Classes
There follows some provisional basic level vulnerabilities associated with the five basic
component classes.
B.2.9.1 Transduce Component Vulnerabilities
These include: component fails (no transduction occurs, form of energy or
representation not changed), wrong change (transduction to an inappropriate form of
energy or representation which, e.g., matches no other component interface) etc.
B.2.9.2 Process Component Vulnerabilities
These include: component fails (no processing occurs, working material or semantic
content unchanged), wrong change, process locked in null state, deadlocks, livelocks,
missing transitions between processing states, failure to change processing mode in
response to control command, etc.
B.2.9.3 Channel Component Vulnerabilities
These include: component fails (working material or representation fails to pass through
sender’s interface, is lost or fails to pass through recipient’s interface), wrong capacity,
wrong connections, not enabled, incorrect message passing protocol used etc.
B.2.9.4 Store Component Vulnerabilities
These include: component fails (failure to store working material or register data,
material or data lost from store), working material damaged or corrupted, wrong or no
addressing, back-up failures, misleading duplications, inconsistent versions or entries,
store capacity exceeded, etc.
B.2.9.5 Control Component Vulnerabilities
These include: component fails (failure to control connected components), wrong
control messages sent, feedback inappropriately given or interpreted, or not present, etc.
As remarked above, it is intended that these vulnerability classes be used as heuristics (or
‘rules of thumb’) by the PERE analyst in searching systematically for vulnerabilities. It is
not supposed that they will apply to all cases. It is a matter for the analyst’s skill and
judgement whether any particular vulnerability makes sense in the context of the case
under analysis.
B.2.10 Component Attribute Vulnerabilities
The approach to the definition of components discussed above requires the analyst to
describe for each component its interface, state, invariant, pre-condition and external
control attributes. The definition of components in terms of these attributes gives the
analyst a further framework for identifying vulnerabilities in addition to those associated
with the generic class that the component is a member of (see previous section).
B.2.10.1 Interface Vulnerabilities
The interface definition involves identifying the interconnections and work materials
which pass in and out of the component. Hence, there may be vulnerabilities associated
with interconnections (connection not actually made, wrong connection made, interface
does not allow the passage of work materials in/out of the component, interfaces and
materials are incompatible, inconsistent protocols or formats are employed) and work
materials (materials are of the wrong sort or are absent or are prone to degradation,
contamination or damage in transit).
B.2.10.2 State Vulnerabilities
The internal state(s) of the component can be subject to intrinsically hazardous
conditions. These are typically extreme or out-of-range values for the state (e.g. over-
pressure, under-pressure, over-voltage). A useful heuristic for generating state related
vulnerabilities is to take the state(s) identified in the model so far and consider a
potential vulnerability as arising from too much/too little, too big/too small, too
many/too few, over-range/under-range with respect to the component state.
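This heuristic amounts to pairing each identified state with the extreme-value templates just listed, as the following hypothetical Python sketch shows.

```python
# Hypothetical sketch of the state heuristic above: pair each identified
# component state with the extreme-value templates to prompt the analyst.

TEMPLATES = ["too much/too little", "too big/too small",
             "too many/too few", "over-range/under-range"]

def state_prompts(states):
    """Generate candidate state vulnerabilities for the analyst to assess."""
    return [f"{s}: {t}" for s in states for t in TEMPLATES]
```

For the state 'pressure', the prompts would include "pressure: over-range/under-range", corresponding to the over-pressure and under-pressure conditions mentioned above. As elsewhere, the prompts are candidates for the analyst's judgement, not confirmed vulnerabilities.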
B.2.10.3 Invariant Vulnerabilities
These are of two sorts. First, violations of the invariant functioning of the component.
These can be heuristically generated by negating or requantifying or re-scoping an
expression of the invariant. For example, if a store component specifies “all documents
of type X are archived”, potential vulnerabilities in the process occur if the documents
are not archived (negating), or if only some of them are (requantifying), or if documents
of type Y are archived (re-scoping). Second, problematic behaviours consistent with the
invariant (i.e. when the invariant function is satisfied but hazardous conditions still
emerge; e.g. reverse flow down an intact pipe). These problems are often notoriously
difficult to define or anticipate. It is assumed that they are generally the result of the
interaction of correct invariant function of a component combined with failures
elsewhere in the component (e.g. an interface or control failure) or elsewhere in the
process. Known or anticipated vulnerabilities should be noted at this stage, though some
may only emerge as possibilities when one considers the whole configuration of all of the
process components.
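The negating, requantifying and re-scoping heuristics for the first sort of invariant vulnerability can be sketched mechanically. The following Python fragment is purely illustrative, using the archiving example above; the decomposition of the invariant into quantifier, subject and predicate is a hypothetical convenience.

```python
# Hypothetical sketch of the invariant heuristics above, applied to an
# invariant of the form "<quantifier> <subject> are <predicate>",
# e.g. "all documents of type X are archived".

def invariant_vulnerabilities(quantifier, subject, predicate):
    """Generate candidate violations by negating, requantifying and
    re-scoping the invariant."""
    return [
        f"{quantifier} {subject} are not {predicate}",                 # negated
        f"only some {subject} are {predicate}",                        # requantified
        f"{quantifier} {subject.replace('X', 'Y')} are {predicate}",   # re-scoped
    ]

vulns = invariant_vulnerabilities("all", "documents of type X", "archived")
```

The generated candidates match the three cases given above: documents not archived, only some archived, and documents of type Y archived instead. The second sort of invariant vulnerability, problematic behaviour consistent with the invariant, resists such mechanical generation, as noted.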
B.2.10.4 Pre-condition and Resource Vulnerabilities
These arise if the pre-conditions are not satisfied or if resources for the component’s
functioning are not available. Again, the non-satisfaction of any pre-condition can be
heuristically generated by negating, requantifying or re-scoping an expression of the
precondition. Resource vulnerabilities can be identified by considering the non-
availability of necessary resources, the presence of an incorrect resource, the presence of
too much/too little of a required resource etc.
B.2.10.5 External Control Vulnerabilities
These arise if the control relations the component has (probably with some other
component of type ‘control’) are not satisfied in some way, either because the
interconnection fails or is not made, or because the control signals/messages are of the
wrong type or are inappropriately interpreted etc.
B.2.10.6 Specialising attribute vulnerability classes
With the specialisation of component classes, there will also be the associated
specialisation of attribute vulnerability classes. Thus, consider a component of class
channel. The vulnerabilities associated with its invariant are mainly due to the failure of
the link or connection that the channel is supposed to make.
If this class is specialised to the sub-class ‘pipe’, then more specific possible failures and
vulnerabilities can be identified associated with the violation of the invariant. It makes
sense to talk of ‘blockage’ and ‘leakage’ being vulnerabilities for a pipe. Furthermore, for
certain kinds of pipe, a blockage may lead to contamination or a change of state of the
normal working material (e.g. freezing). Expansion of ice might also cause a total failure
by fracturing the pipe.
On the other hand, if the channel class is specialised to an electrical cable which passes a
signal to a computer system, a rather different set of failures and vulnerabilities
associated with the invariant attribute appear. Breakage might cause computing
resources to become invisible to their ‘clients’. Messages might become corrupted or
intermittent in the case of a bad contact. Messages might be spooled, temporarily stored
and delivered with delay at unexpected times.
By approaching vulnerabilities in the same ‘object-oriented’ fashion as the
definition of components, PERE’s mechanistic viewpoint can support a
general fault analysis with the identification of generic vulnerabilities and more specific
analyses as further specification is added to the process component model. This approach
also supports flexibility in the application of PERE (see section B.4).
B.2.11 Vulnerability Review
For each component present in the process model and depicted in the PERE component
table (PCT, see Table B.1), vulnerabilities should be generated as outlined in the
previous section. Meaningful vulnerabilities in the context of the process under analysis
should then be subject to review. Vulnerability review consists of specifying for each
vulnerability its likelihood and the severity of its consequences, that is, the risk
associated with the vulnerability. If excessive risk is thought to be involved, defences
should be considered. PERE does not insist on the adoption of any specific method for
deriving these beyond recommending that an ALARP principle should be applied. It is
recognised that this depends upon quite specific skills, judgement and knowledge of the
application area (not to mention an appreciation of the relative costs of different
possible solutions) and hence cannot be prescribed in general. A number of methods
already exist for the support of risk appraisal and these can be embedded within PERE
at this point. However, as perhaps the most familiar are those associated with
Probabilistic Risk Assessment (PRA), the next section discusses how PERE’s
Vulnerability Review and PRA can work together.
No matter how vulnerability review is conducted in detail, it is possible for the PERE
analyst to be supported by the emerging process component model itself. This is because
the likelihood and consequence of any vulnerability are at least in part a function of the
organisation of the process as revealed by the process component model. For example:
Components with many interfaces will tend to be those which have either a
complex internal structure (a complex set of invariant functions with many
different states) or which are enacted very frequently (or both). These factors
may increase the likelihood of failure associated with the component.
Components with complex resource requirements or many preconditions may also
be the more likely to fail.
Components without external control may be subject to failure due to the non-
detection of hazardous conditions or the absence of monitoring to ensure the
satisfaction of the invariant.
Components which interface with many other components on the output side
may lead to important consequences (e.g. many knock-ons) in the event of
failure.
A consideration of the overall organisation and configuration of the process
model should enable the analyst to identify critical components whose failure may
lead to general process halting (e.g. a component which mediates links between all
process pairs). Vulnerabilities in such components tend to be of high risk.
The overall process configuration as revealed in the model should enable the
analyst to detect component loops (cycles) which may be subject to livelock or
deadlock problems.
As before, these are heuristics designed to support the analyst, not formal methods by
which the analyst can calculate risk.
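Two of these structural heuristics, high output fan-out (many possible knock-on effects) and component loops (possible livelock or deadlock), can nonetheless be sketched over a component connection graph. The Python fragment below is a hypothetical illustration, not part of PERE.

```python
# Hypothetical sketch of two structural heuristics from the list above,
# computed over a graph mapping each component to the components it
# feeds on its output side.

def high_fanout(graph, threshold=2):
    """Components whose failure may have many knock-on consequences."""
    return [c for c, outs in graph.items() if len(outs) >= threshold]

def has_cycle(graph):
    """Detect component loops which may be subject to livelock or deadlock."""
    def visit(node, path):
        if node in path:
            return True
        return any(visit(n, path | {node}) for n in graph.get(node, []))
    return any(visit(c, set()) for c in graph)

graph = {"A.1": ["A.2", "A.3"], "A.2": ["A.3"], "A.3": ["A.1"]}
```

In this small example, component A.1 is flagged for fan-out and the cycle A.1 → A.2 → A.3 → A.1 is detected. Both outputs are prompts for the analyst's judgement, not risk calculations.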
B.2.12 Proposing Defences
Just as the previous sections have considered the identification of vulnerabilities at
component class and attribute levels in a generic manner, it is possible to provide
guidance for possible defences against these vulnerabilities in a similar generic fashion.
The following sections provide suggestions and starting points for proposing defences
based upon the class of component and the component attributes.
It is often the case that a possible defence can be proposed which addresses
vulnerabilities across a variety of component classes (differences in the actual defence
implemented appearing as the classes are more specialised). Thus, rather than repeating
very similar recommendations for the different component classes, the most common
defences are considered below.
B.2.12.1 Redundancy
This is a common strategy in dependable hardware design, and is also considered by some
to be of benefit when designing reliable organisations (see section 3.3.3 for a discussion
of this). It can be used as a defence in many cases to provide more than one means of
achieving the same aims. When redundant components are introduced, they should be
checked against each other for agreement. Any mismatch in their output is an indication
of a problem in one of the components. Voting mechanisms can be used if desired where
there are more than two redundant components working in parallel. For example:
different teams may be assigned to the same task; several backup stores in separate
locations and on different media can be employed; messages can be delivered by more
than one medium; and so on.
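The checking and voting behaviour of redundant components can be sketched simply. The following Python fragment is a hypothetical illustration of the majority-vote mechanism mentioned above.

```python
from collections import Counter

# Hypothetical sketch of the redundancy defence above: redundant
# components are checked against each other for agreement, and a majority
# vote resolves disagreements when more than two run in parallel.

def vote(outputs):
    """Return (majority value, unanimous?) for redundant component outputs.
    A non-unanimous result indicates a problem in one of the components."""
    counts = Counter(outputs)
    value, _ = counts.most_common(1)[0]
    return value, len(counts) == 1
```

For example, if three backup stores report "archived", "archived" and "lost", the vote yields "archived" but the lack of unanimity flags one component for attention.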
B.2.12.2 Feedback
It can be useful to provide feedback from a process component for a number of reasons.
The feedback can be used to acknowledge that the input has been received, to notify
other components when an action has been completed, to confirm that a component is
active, and so on. It can be particularly useful for components of the process and control
classes.
B.2.12.3 Checking
Depending upon the class of component being protected, various means can be used to
check that the actions performed by the component have been carried out correctly. For
example: test inputs can be created with known outcomes for particular components
which can subsequently be verified; test restore commands can be performed on backup
storage components; and so on.
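The idea of test inputs with known outcomes can be illustrated as follows; a sketch in which the component is modelled as a simple callable, and all names are our own:

```python
def check_component(component, test_cases):
    """Run a component against inputs with known expected outcomes.

    `component` is any callable standing in for a process component;
    `test_cases` maps test inputs to their known correct outputs.
    Returns the list of inputs for which the component misbehaved.
    """
    return [inp for inp, expected in test_cases.items()
            if component(inp) != expected]

# A backup store modelled as a dict; a test restore is a lookup:
backup = {"doc-1": "requirements v3", "doc-2": "design v1"}
failures = check_component(backup.get,
                           {"doc-1": "requirements v3",
                            "doc-2": "design v1"})
```

An empty result indicates that all the checks passed; any entries name the inputs whose outcomes did not match what was expected.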
B.2.12.4 Specialising for component class
As was the case for the basic vulnerabilities offered in section B.2.9, these suggestions
are at a generic level, and are intended as a guide to the analyst when confronted with an
actual process analysis. They are also open to specialisation for classes and sub-classes,
and should be treated as a resource which is to be extended as a result of use. For
example, store components can be specialised into paper and electronic storage, the
latter of which can in turn be specialised into tape or disk storage, and so on. Protecting
against loss of data can then also be specialised according to the sub-class of store
component. So, whilst both paper and electronic storage should be protected against fire
and water damage, checks to ensure that the data has not been corrupted will take
different forms depending on the media. The frequency with which tests should be made
will also differ. Generic defences can be treated in a similar manner for all classes and
sub-classes of component.
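The specialisation of store components into sub-classes, each with its own integrity checks and checking frequency, might be sketched as an illustrative hierarchy; the class names, intervals and check descriptions are our assumptions, not part of PERE:

```python
class Store:
    """Generic store component: all media need fire/water protection."""
    hazards = ("fire", "water")

    def integrity_check(self):
        raise NotImplementedError("specialise per storage medium")

class PaperStore(Store):
    check_interval_days = 365          # occasional manual inspection
    def integrity_check(self):
        return "visual inspection for legibility and completeness"

class ElectronicStore(Store):
    check_interval_days = 30
    def integrity_check(self):
        return "checksum verification of stored data"

class TapeStore(ElectronicStore):
    check_interval_days = 7            # magnetic media degrade: test restores often
    def integrity_check(self):
        return "test restore from tape"
```

The generic defence (protection against fire and water damage) is inherited unchanged, while the corruption checks and their frequency are specialised per sub-class, mirroring the discussion above.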
B.2.12.5 Component attribute defences
Section B.2.10 provided the means to generate generic vulnerabilities for components
through consideration of their attributes (interface, state, invariant, preconditions and
resources, and external control). If vulnerabilities have been generated in this manner, it
is possible to consider generic defences against them also according to the component
attributes. In the main, these defences will consist of checks that the various conditions
described by the attributes are all satisfied. For example, checks to ensure: that all
interfaces are of the correct type and compatible with the working materials; limit and
range restrictions on the component’s internal state are not violated; the component
invariant is satisfied; pre-condition and resource constraints are satisfied; and control
relations are satisfied.
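The attribute checks listed above can be illustrated as a single routine that inspects each attribute in turn; the dictionary representation and all names are our own:

```python
def attribute_defences(component):
    """Check the generic component attributes named above.

    `component` is a dict with one entry per attribute; each check
    mirrors one of the conditions listed in the text. Returns the
    names of attributes whose conditions are violated.
    """
    violations = []
    if component["interface_type"] != component["material_type"]:
        violations.append("interface")     # compatible with working materials?
    lo, hi = component["state_range"]
    if not lo <= component["state"] <= hi:
        violations.append("state")         # limit/range restrictions respected?
    if not component["invariant"](component):
        violations.append("invariant")     # component invariant satisfied?
    if not component["preconditions_met"]:
        violations.append("preconditions") # pre-conditions and resources available?
    if not component["controlled"]:
        violations.append("control")       # external control relations satisfied?
    return violations

# An example component whose attribute conditions are all satisfied:
ok = {"interface_type": "document", "material_type": "document",
      "state": 5, "state_range": (0, 10),
      "invariant": lambda c: c["state"] >= 0,
      "preconditions_met": True, "controlled": True}
```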
Table B.2 shows the PERE Vulnerability Table (PVT) which, for each vulnerability,
records its name (or some other designator), its class, likelihood, consequence, possible
defences and possible secondary vulnerabilities (if the defence can be anticipated as
having vulnerabilities associated with it in turn).
Table B.2: PERE Vulnerability Table Headings

Vulnerability | Class | Likelihood | Consequence | Possible defences | Possible secondary vulnerabilities
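A row of Table B.2 could be recorded as a simple structure; a sketch whose field names follow the table headings, while the class name and example values are our own:

```python
from dataclasses import dataclass, field

@dataclass
class PVTRow:
    """One row of the PERE Vulnerability Table (Table B.2)."""
    vulnerability: str                  # name or other designator
    component_class: str
    likelihood: str                     # e.g. 'low', 'medium', 'high'
    consequence: str
    possible_defences: list = field(default_factory=list)
    secondary_vulnerabilities: list = field(default_factory=list)

# An illustrative entry for a store component:
row = PVTRow("loss of backup data", "store", "medium", "severe",
             possible_defences=["redundant off-site copies",
                                "test restores"])
```

The secondary vulnerabilities column is filled in only when a defence can be anticipated as having vulnerabilities associated with it in turn.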
B.2.13 Vulnerability Review and Probabilistic Risk Assessment
PERE and PRA can work together in a number of ways. For example, PERE’s human
factors viewpoint (see section B.3) can provide insights into sources of failure due to the
human aspects of a process and to common mode failures which are not always fully
recognised in a PRA. Additionally, PERE can help the conduct of a PRA by uncovering
factors which might impact the sensitivity of probability estimates. In short, there are a
number of ways in which PERE might be embedded within PRA and be conducted at
the same stage of the safety life-cycle.
Importantly, aspects of PRA can also be embedded within PERE to facilitate the
conduct of Vulnerability Review and guide the iteration of the analysis from the
mechanistic viewpoint. First, provisionally identified vulnerabilities brought to light by
PERE can be given a more detailed analysis through deriving a PRA fault tree which
would show in more detail the faults which might lead to an occurrence of the
vulnerability. This would enable the PERE analyst to investigate the causal antecedents
of the vulnerability and, where probabilistic estimates can be made on the basis of
available data, to attach a quantitative estimate of the likelihood of the vulnerability
occurring as a system fault.
Second, the potential consequences associated with a vulnerability uncovered by PERE
can be given further analysis through deriving a PRA event tree. This would enable the
PERE analyst to assess the causal consequences of the vulnerability if it occurred as a
system fault. Again, if appropriate data exist, probabilistic estimations and assessments
of severity could be attached to the consequences to give a quantitative picture of the
ramifications of failure. The evaluation of consequences is dependent upon the overall
purpose of applying PERE. For example, consequences might be assessed in terms of
how life threatening they are or in terms of their financial cost.
Third, fault and event tree analysis can support the ‘filtering’ and prioritisation of
vulnerability review and the implementation of defences. The PERE analyst supported
by PRA techniques will be able to ‘sort’ the vulnerabilities by probability of occurrence
and by severity of consequence depending on the overall purpose of PERE application
and the nature of the application domain. For example, if PERE is being applied in
order to identify the most consequential vulnerabilities, then prioritising by severity of
consequence (or cost, or whatever other criterion has been used in assessment) is
appropriate. Further PERE analysis can then be given to just those
vulnerabilities whose consequences are most severe and so forth. In other words,
PERE’s analysis strategy of ‘iterative deepening’ can be influenced by assessments of risk
derived from PRA techniques.
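The prioritisation described above amounts to sorting vulnerabilities by estimated risk. A minimal sketch, in which the function and field names are our own and the probability and severity figures are illustrative, assumed already derived from fault and event tree analysis:

```python
def prioritise(vulnerabilities, by="risk"):
    """Sort vulnerabilities for further PERE analysis.

    Each vulnerability carries a probability of occurrence and a
    severity of consequence. 'risk' combines the two; 'severity'
    alone suits applications targeting the most consequential
    vulnerabilities regardless of how likely they are.
    """
    if by == "severity":
        key = lambda v: v["severity"]
    else:
        key = lambda v: v["probability"] * v["severity"]
    return sorted(vulnerabilities, key=key, reverse=True)

vulns = [
    {"name": "ambiguous notation",   "probability": 0.30, "severity": 2},
    {"name": "lost safety argument", "probability": 0.05, "severity": 9},
    {"name": "unchecked handover",   "probability": 0.20, "severity": 5},
]
```

Sorting by combined risk and sorting by severity alone can rank the same vulnerabilities quite differently, which is why the choice of criterion depends on the overall purpose of the PERE application.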
B.2.14 Iterating the Mechanistic Analysis and Feeding into the Human
Factors Analysis Component
The essential steps involved in the conduct of PERE’s mechanistic viewpoint have been
described: modelling the process in terms of its components, interconnections and
working materials, followed by the systematic identification and review of
vulnerabilities. These steps can iterate further as outlined above to uncover more
detailed analyses of sub-components and/or more refined component classes. At each
iteration new vulnerabilities can be generated, reviewed and defences suggested. It is
suggested that analysis from this viewpoint can be declared complete when an iteration
is conducted which reveals no vulnerabilities of significance over and above those which
have already been revealed. ‘Vulnerabilities of significance’ here means ‘vulnerabilities
which require defences or need to be noted’.
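The completeness criterion above can be sketched as a simple loop; the sets of vulnerabilities stand in for the real analysis, and all names are our own:

```python
def mechanistic_analysis(iterations):
    """Iterate the mechanistic viewpoint until an iteration reveals
    no significant vulnerabilities beyond those already found.

    `iterations` is an iterable of sets, each holding the significant
    vulnerabilities revealed by one pass at an increasing level of
    detail. Returns the accumulated vulnerabilities and the number
    of passes actually performed.
    """
    found = set()
    passes = 0
    for revealed in iterations:
        passes += 1
        new = revealed - found
        found |= revealed
        if not new:                    # declare the analysis complete
            break
    return found, passes
```

The analysis stops at the first pass that adds nothing new, matching the suggestion that the viewpoint can be declared complete at that point (subject to the re-opening conditions discussed next).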
A mechanistic viewpoint analysis which is complete in this sense can be re-opened if (i)
new components are brought to the attention of the analyst which were not previously
noted; (ii) new tools or techniques are introduced into the components which may have
implications for their vulnerabilities; (iii) the process or some component within it is
redesigned either as a result of a defence suggested through the application of PERE
being adopted or for other commercial, engineering or manufacturing reasons; (iv) the
human factors analysis raises issues about the process under study which cannot be
resolved with the process analysed to its current level of detail. A re-opened mechanistic
viewpoint would follow the same procedures as outlined above focusing on new
components, new component attributes or new levels of detail as required.
In addition to highlighting vulnerabilities in its own right, PERE’s mechanistic
viewpoint forms the basis of the human factors analysis described in the next section. It
does this by providing the human factors viewpoint with the process model which it has
generated. The handing over of the process model for scrutiny from a human factors
viewpoint can occur in one of three main ways:
Only once the model is declared complete as defined above.
Per iteration. That is, once a full iteration at a given level of detail has been
conducted (but before the Vulnerability Review; the human factors analysis need
not wait for its results).
Per component. That is, within each iteration, once each component has been
analysed.
There are advantages and disadvantages to each. The less frequently the process model is
passed on for human factors analysis, the less that analysis can capitalise on provisional
results and make an early start, and hence the more likely it is to be pressurised by any
time constraints on the PERE application. On the other hand, the more frequently the
process model is passed on, the more likely it is that provisional analysis results will have
to be corrected, the greater the burden of version control for the PERE process model
itself, and the greater the probability that human factors judgements will be
incomplete or based on an outdated mechanistic analysis. An application of PERE will
have to make trade-offs in these respects (see section B.4 below).
B.3 The PERE Human Factors Viewpoint
B.3.1 Introduction
Figure B.4 depicts the overall organisation of the PERE human factors viewpoint in
terms of the identification of sources of hazard identified in Chapter 3. Appendix A
presents the human factors vulnerabilities and possible defences in full checklist form.
To aid the navigation of the checklists, arcs descending from boxes are sometimes
labelled (e.g. Q1). When this is the case, there is a question to be asked (e.g. Question 1)
which may govern the choice to be made between arcs (and hence the next factor to
consider). When an arc descends into a rounded box labelled with a question number,
the rounded box contains links between the questions and the PERE human factors
checklist. These links are depicted in Key Figures 1, 2 and 3 of this section. If an arc is
not labelled, the issues summarised in the next box(es) should be regarded as obligatory.
Finally, Figure B.4 shows labelled entry points to the PERE human factors checklist.
The reference numbers of these entry points correspond to numbers of sections in the
full checklist, which presents the vulnerabilities and defences together with section-by-
section cross references to Chapter 3, which is a full review of the human factors
literature as it is relevant to requirements engineering.
The following sections guide the PERE analyst through application of the human
factors viewpoint analysis to the process under consideration, which has already been
subject to mechanistic viewpoint analysis as described in section B.2.
The purpose of structuring the human factors viewpoint in this manner is to allow for
‘pruning’ of the work required at each step. So, for example, it does not
make sense to search for vulnerabilities due to the effects of working in a group if the
component under consideration consists solely of individual activity. Also, in the spirit
of the iterative deepening approach to analysis as introduced in section B.2.3, it is useful
to be able to omit certain components from the analysis if it is felt (to some degree of
confidence) that they are not vulnerable to error or failure or whatever is the particular
focus of the analysis. Therefore, one can first of all ask of each component in turn:
Question 0: Is it suspected that this process component, its connections or its working
materials can be vulnerable to error and/or violation?
This allows us to exclude from consideration any process components that, for
example, contain no human activity. We can now move on to consider human activity
within the process components.
[Figure B.4: The overall structure of PERE’s human factors analysis method. The figure traces hazardous conditions due to human factors into errors and violations. Errors are analysed via component problems, component connection problems, problems with working materials, and organisational context problems. Component problems split (Q1) into individual activities, leading via Q2 to Key Figure 1 (skills, rules, knowledge), and group activities, leading via Q3–6 to Key Figure 2 (resources, norms, performance, evaluation). Connection problems (Q7) cover human and organisational connections; problems with working materials (Q8) cover documents, notations and representations; organisational context problems (Q9–12) lead to Key Figure 3 (structure, communication, culture, learning). Question Q0 gates the whole analysis, and labelled entry points (Refs 1–8) lead into the PERE Human Factors Checklist.]
B.3.2 Human activity within components
First, let us consider the components which are identified as comprising the process. The
key concern now is to determine the character of the work and human activity which
constitutes the execution of the component’s function. Thus:
Question 1: Is the component principally characterised by individual or group activity?
This enables us to distinguish two broad classes of vulnerability identified in human
factors research and to tailor recommended defences to the nature of the human activity
within the component. A process component characterised by
individual activity is one where the function of the process is largely executed by a single
individual (next see section B.3.2.1 below). A component characterised by group
activity is one where the contributions of a number of individuals are required and need
to be coordinated for the component to fulfil its function (next see section B.3.2.2).
B.3.2.1 Individual activities
Where a process component is analysed as principally consisting of the work of a single
individual, it is important to analyse the nature of that person’s work further. Where
feasible, a task analysis method may usefully be applied in this connection. This may
give a more detailed breakdown of the tasks and actions
involved in fulfilling the function of the component. Alternatively, generic
vulnerabilities may be captured by distinguishing between skill based, rule based and
knowledge based errors and reviewing the human activity in the component with
respect to this three-way distinction. Thus it is important to ask of the component:
Question 2: Is the component principally characterised by skill based, rule based or
knowledge based activity?
Key Figure 1 (Figure B.5) structures a series of further questions to support the
identification of these three generic classes of vulnerability and proposes generic
defences, along with links to the full PERE human factors checklist in Appendix A
which gives a further breakdown of the vulnerabilities into sub-classes and associated
defences together with precise cross references into Chapter 3 for further information.
[Figure B.5 presents a decision flow for processes involving individual activity. It asks whether the activity is performed within an individual’s routine skills and under normal operating conditions (skill based), involves exploration through the application of relevant rules of practice based on past experience (rule based), or involves novel problem solving (knowledge based), and pairs each with a generic weakness to error and possible generic defences:

Skill based: slips and lapses in familiar skilled activities (see PERE Human Factors checklist, items 1.1 to 1.3). Defences: check products of work; consider introducing supervision and document review if none exist; consider redesign of how information is presented and of individuals’ work environments; consider introduction of appropriate tool support and/or redesign of current working tools.

Rule based: mistakes where ‘prepackaged’ solutions or rules of practice are inappropriately applied (items 2.1 to 2.2). Defences: provide training in appropriate solutions; continually revise training in the light of changes or innovations in the work domain; disseminate expertise and experience; if appropriate, introduce checking, supervision or review where none exists currently.

Knowledge based: mistakes where new solutions to problems have to be devised without recourse to prepackaged ones (items 3.1 to 3.6). Defences: design group processes to include ‘critiquing’ and reflection on how solutions are arrived at (e.g. through video or audio tape analysis or other means of experience capture); introduce checking and review of products of work with particular scrutiny of possible sites of knowledge-related bias.]

Figure B.5: Key Figure 1: Generic vulnerabilities in relation to individual activity
B.3.2.2 Group activities
If the component depends critically on the joint action of a group of people, then it is
important to analyse the nature of the group, its constitution and activity. Four key
questions promote this:
Question 3 (resources): What are the available resources for the group to fulfil its
function? (‘Resources’ here comprises in particular the skill and experience levels of the
group members and the distribution and coverage of skills and experience of the group
as a whole.)
Question 4 (norms): How is the function of the group presented to group members and
what are the norms (specifications of what the group and its members should do) that
govern the activity of the group in executing this function?
Question 5 (performance): How are the contributions of group members produced and
coordinated?
Question 6 (evaluation): How are the contributions of the group members and the
overall products of the group (decisions, jointly authored documents or whatever)
evaluated?
Key Figure 2 (Figure B.6) depicts four generic classes of vulnerability associated
respectively with group resources, norms, performance and evaluation issues. Generic
defences corresponding to each vulnerability are also proposed in the Figure. Again, the
full PERE human factors checklist in Appendix A gives a further breakdown of the
generic vulnerabilities into sub-classes and associated defences together with precise
cross references into Chapter 3 for further background information.
Pursuing these questions will yield vulnerabilities specifically of a group origin.
Naturally, it is possible for individuals within the group still to be susceptible to the
vulnerabilities associated with individual work (section B.3.2.1). This may be especially
prominent if the group itself does not include adequate error checking and evaluation
processes. In such cases, it may also be necessary to scrutinise the work of individuals
within the group for the vulnerabilities noted in section B.3.2.1 and redesign group
activities by implementing the defences to individual error which involve error checking
and so forth at the group level (see sections 1-3 of the checklist).
B.3.3 Interconnections and working materials
Question 7: What are the means used for interconnecting components?
Interconnections between process components are typically sustained by the exchange of
materials (e.g. documents) or personnel (e.g. delegates, representatives or
spokespersons). This enables us to distinguish two further generic classes of
vulnerability: where connections fail due to inappropriate, misused or misunderstood
materials (next see section B.3.3.1) and where connections fail due to inappropriate or
unrepresentative personnel (next see section B.3.3.2).
Question 8: What are the documents (and other forms of representation) which
constitute the working materials of the process?
While documents (etc.) are often important to provide connections between
components, it is not necessarily the case that all documents (etc.) are used in this way.
There may be documents and other forms of working material which are ‘internal’ to a
component and not exchanged between components. It is still important to analyse the
design and production of these materials with respect to their vulnerabilities (next see
section B.3.3.1).
[Figure B.6 poses four questions of processes involving group activity and pairs each with a generic weakness to error and possible generic defences:

Resources (‘What are the available resources for the group to fulfil its function?’): errors due to available resources lacking or inappropriate for the task (PERE Human Factors checklist items 5.2 & 5.7). Defences: ensure that the group is able to access all the resources it may need in order to complete its task; select group members and leader(s) according to the skills and experience they possess relevant to the task.

Norms (‘How is the function of the group presented to group members and what are the norms that govern the activity of the group in executing this function?’): errors due to poor group formation, cohesion, and leadership style (items 5.3, 5.5, 5.6 & 5.9). Defences: introduce measures to make desired group norms explicit; consider the use of group facilitation techniques to assist the group in performing its function.

Performance (‘How are the contributions of group members produced and coordinated?’): poor performance due to group process losses or the inappropriate management of consensus (items 5.1, 5.4, 5.5, 5.6, 5.7, 5.8 & 5.9). Defences: be mindful of the match between the nature of the group and the activity it undertakes; plan tasks to make best use of available resources.

Evaluation (‘How are the contributions of the group members and the overall products of the group evaluated?’): apprehension about the methods and consequences of evaluation (items 5.1 & 5.5). Defences: match supervision to the type of task, and provide means for all opinions to be critiqued regardless of status.]

Figure B.6: Key Figure 2: Generic vulnerabilities in relation to social-group activity
B.3.3.1 Materials, documents and representations
Where the connections between components are sustained by the exchange or passing on
of reports, documents, representations and other textual or pictorial materials which are
intended for human interpretation, then a series of generic vulnerabilities arise
concerned with document and diagram (etc.) design. We note some of the most relevant
of these for requirements engineering in sections 7 & 8 of the PERE human factors
checklist in Appendix A where the generic vulnerabilities are further broken down into
sub-classes and possible defences are offered.
B.3.3.2 Human and organisational connections
Where the connections between components are sustained by a delegate or
representative from one component reporting on it to another component, the details of
that representative’s task should be analysed as a form of individual work (return to
section B.3.2.1) and/or in terms of the group interactions involved (return to section
B.3.2.2) as appropriate.
In some contexts where interconnections are sustained by personnel, these relations
between components may instantiate organisational connections which exist in the
organisation at large (for example, if it is a standard part of some person’s job to report
on a process component at regular management meetings). In such cases, existing
patterns of communication within an organisation may themselves provide connections
between process components. This gives rise to a further set of generic vulnerabilities
concerned with the adequacy of the organisation for supporting interconnections
between components.
Generic vulnerabilities and suggested defences concerning the human and organisational
connections between components are further analysed in section 6 of the PERE human
factors checklist in Appendix A.
B.3.4 Organisational context
The identification of vulnerabilities associated with the general organisational context
can be promoted if we analyse organisations in terms of four key features.
Question 9 (structure): What is the overall formal structure of the organisation? Is it
hierarchical, bureaucratic, heterarchical, structured on an ad hoc basis and so forth?
Question 10 (communication): What are the channels and patterns of communication as
they exist in the organisation over and above those provided by formal structural
reporting relationships?
Question 11 (culture): What form does the organisational culture take particularly with
respect to safety issues? For example, does a specific ‘safety culture’ exist?
Question 12 (learning): How is organisational learning promoted in the organisation?
Key Figure 3 (Figure B.7) identifies four generic classes of vulnerability associated
respectively with organisational structure, communication, culture and learning together
with generic defences.
[Figure B.7 poses four questions of the organisational context of processes and pairs each with a generic weakness to error and possible generic protection:

Structure (‘What is the overall formal structure of the organisation?’): errors due to reporting procedures and ineffective structures slowing response (PERE Human Factors checklist items 6.1 to 6.4 & 6.11). Defences: consider decentralised authority structures and reporting procedures closely tied to the processes they need to support; consider the redesign of processes and structures to minimise complexity, loosen inter-process dependencies and allow for the flexible reallocation of staff.

Communication (‘What are the channels and patterns of communication as they exist in the organisation?’): errors due to ‘single point failures’ and error propagation throughout the process (items 6.1 & 6.2). Defences: introduce appropriate supervision and checking along with redundancy in terms of personnel and/or technology.

Culture (‘What form does the organisational culture take, particularly with respect to safety issues?’): errors due to failures to comply with regulations, recurrent similar failures and the minimising of weaknesses (items 6.5 to 6.8 & 6.10). Defences: introduce a strong safety culture at all levels of the organisation and incorporate safety as part of training.

Learning (‘How is organisational learning promoted in the organisation?’): errors due to failures to learn from experience or adapt to changed conditions, where processes are badly understood, continually re-invented or inadequately documented (items 6.7 to 6.9 & 6.12). Defences: introduce methods so that experience data can be disseminated throughout the organisation in addition to management structures and standard reporting relationships.]

Figure B.7: Key Figure 3: Generic vulnerabilities in relation to the organisational context
B.3.5 Human activity outwith the process model: Violations
As noted above and commonly remarked in the human factors literature, human error is
not the only source of hazard in a process. Process models and descriptions, rules of
practice and procedures for safe operation can also be violated. Violations can be
deliberate but they can also be unintentional (e.g. when insufficient training has been
given to inform personnel of the nature of the process). In contrast to the errors we
have analysed so far, violations occur when personnel act outside the specifications of
the process model and execute other functions instead. Violations are not necessarily
hazardous in themselves. Indeed, under certain circumstances (e.g. when the process
model itself is faulty), violations may be harmless or even necessary.
For each aspect of the process (component, connection etc.), it is important to ask:
Question 13: Might the process encourage or require violation?
Violations may be due to problems with the specification of process components, their
interconnections or due to problems with working materials. Violations may arise at the
individual level, in group activity or for organisational reasons. Accordingly, the
questions in B.3.2-B.3.4 can be iterated again with possible sources of violation in mind.
If the process model requires violation for its purpose to be fulfilled, this can be taken as
a good indication that process redesign may be necessary.
Section 4 of the PERE human factors checklist in Appendix A further breaks down
vulnerabilities and defences associated with process violations.
B.3.6 Vulnerabilities and Defences
Throughout the development of PERE’s human factors viewpoint, we have been
concerned to make generic suggestions for the improvement of processes based on the
identification of generic vulnerabilities. In this way, we intend the human factors
analysis to contribute to the evolution of more dependable versions of processes.
However, because the process modelling approach taken in the mechanistic viewpoint is
formulated in terms of generic and abstract notions (component, connection, working
material and so forth), our suggestions for improvements must be understood to be
provisional. Thus, alongside generic vulnerabilities, we identify provisional possible
defences.
Whether these defences should actually be implemented depends upon:
their relevance in terms of the details of the actual process under investigation
the purposes of the investigation
an assessment of the risk (likelihood combined with severity) associated with the
vulnerability
prioritisation where many possible defences could be implemented.
Additionally, there are no easy answers for the improvement of processes, and defences
often have secondary vulnerabilities associated with them in turn. That is, implementing a
defence may itself lead to further vulnerabilities. For example, introducing redundancy
and explicit monitoring to make a process fault tolerant and to minimise the chance of
error propagation may make the process inherently more complex and hard to manage at
another level. Thus, the suggested defences we offer have to be combined with further
risk assessment. If the secondary vulnerabilities turn out to be associated with a greater
risk than the existing vulnerable condition, it may be unwise to implement the defence.
However, under such circumstances, especially if secondary vulnerabilities are found to
be risky throughout the process, there is a strong argument for extensive process
redesign, not merely for evolutionary, incremental process improvement.
From the human factors point of view then, opportunities for extensive process redesign
arise when:
secondary vulnerabilities involve equal or greater risks than leaving things as they
are and/or
existing conditions encourage or require extensive violation.
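The decision rule above, comparing the risk of the existing condition with the risks of any secondary vulnerabilities a defence would introduce, might be sketched as follows; the names and the single-scale risk figures are our assumptions:

```python
def defence_decision(current_risk, secondary_risks):
    """Decide whether to implement a defence.

    `current_risk` is the risk (likelihood combined with severity)
    of the existing vulnerable condition; `secondary_risks` are the
    risks of the secondary vulnerabilities the defence would
    introduce, all assumed to be on a common scale.
    """
    secondary = max(secondary_risks, default=0.0)
    if secondary >= current_risk:
        # the cure is at least as risky as the condition: a case for
        # extensive redesign rather than incremental improvement
        return "consider process redesign"
    return "implement defence"
```

A defence with no anticipated secondary vulnerabilities is always worth implementing under this rule; in practice, of course, the comparison would also weigh relevance, purpose and prioritisation as listed above.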
B.3.7 Documenting the Human Factors Analysis
The results of PERE’s human factors viewpoint analysis can be documented using a
PHT (PERE Human factors Table—see Table B.3). The PHT closely resembles the PVT
and has columns for the name (of the component, connection, working material,
organisational context component, or violation), the analysis given (including, where
possible, a numerical cross-reference to the potential source of error as enumerated in
the PERE Human Factors Checklist), the likelihood of the error/vulnerability, what
consequences it might have, possible defences, and possible secondary vulnerabilities.
B.3.8 Summary
The human factors analysis method presented here has the following main features:
1) it builds upon the process models derived in mechanistic viewpoint analysis
2) it identifies different vulnerabilities associated with human error and process
violation
3) vulnerabilities are further broken down into human factors problems with
components, interconnections and working materials, and the organisational
context of the process
4) after identifying the character of the human activity in components, and the
nature of working materials, interconnections and organisational context in more
depth, more specific vulnerabilities can be identified and defences can be
suggested
5) criteria are presented for deciding whether to actually implement suggested
defences
6) criteria are also presented for indicating when extensive process redesign rather
than incremental improvement may be the more appropriate strategy.
Table B.3: PERE Human Factors Table headings

Vulnerability | Analysis (checklist ref.) | Likelihood | Consequence | Possible defences | Possible secondary vulnerabilities (checklist ref.)
Problems with Process Components | | | | |
Problems with Connections and Working Materials | | | | |
Organisational Context Problems | | | | |
Violations | | | | |
B.4 Modifying PERE
This thesis so far has presented the essential concepts underlying PERE and a detailed
description of its mechanistic and human factors viewpoints. However, it is
unreasonable to suppose that PERE will be or should be applied in an entirely uniform
way across all organisations and across all process types. Organisations wishing to apply
PERE will vary in terms of the domain of their work (chemical or electronics industry,
office and factory-production work) and hence the kinds of processes they wish to
evaluate with PERE. Additionally, the purposes and expected benefits of PERE can
vary on an application-by-application basis. Accordingly, this section provides guidance
for the tailoring or specialisation of PERE to particular organisational, process and
application contexts.
B.4.1 Strategies for Specialising PERE
PERE can be specialised in a variety of ways. Four main strategies can be identified for
specialising PERE:
Adapting PERE. This concerns the adaptation of PERE as a function of process
type and of the kinds of process information which may have been captured.
Simplifying PERE. This concerns the ways in which the application of PERE
might be simplified and targeted if, say, resources for PERE are limited or only
particular kinds of process problems are of interest.
Developing PERE. It is intended that PERE can be incrementally added to in a
variety of ways to enhance its organisational utility and domain-relevance.
Improving PERE. It is intended that PERE itself should be subject to process
improvement criteria much like the processes it studies.
The next four sections expand on these strategies giving more precise examples of
components of PERE which can be specialised. It is not supposed that the strategies for
specialisation that are listed here are a final definitive set. Indeed, an organisation may
find its own ways to specialise PERE not anticipated here. This is further discussed as
one aspect of the improvement of PERE.
B.4.2 Adapting PERE
An application of PERE will involve taking the generic PERE process and moulding it
to the specific business and industrial context that the processes to be analysed are
situated in. Naturally, this raises some difficulties as we cannot be sure that PERE will
be equally applicable or intelligible in all contexts of use. This section provides some
guidance as to how PERE might be applied in a manner which recognises the variability
of different organisations and process-types.
B.4.2.1 Specialisation by Process Domain
It is not imagined that a general typology of all processes can be defined—or at least not
one that gives specific enough guidance on the details of how PERE should be tailored
to specific processes. This has the implication that finding out how to apply PERE to a
particular process will inevitably be part of its application, at least whilst an analyst builds
up application experience. PERE is, however, open and extensible. In particular, PERE
allows its users to add to the network of classes and the human factors checklist items in
ways which are idiomatic for the process and industrial sector being worked with.
Rather than stipulate any generic typology of processes, PERE recognises that, especially
initially, the artfulness of PERE analysts themselves will be an important aspect of
PERE’s use and usefulness. The task of the PERE analyst at these stages will also be
aided by using any existing data sources, survey results, process-component vocabularies,
and the like, which are in use in an organisation. These (or the most relevant or effective
of them) can be used to ‘seed’ the extension of PERE’s classes and checklist items. (For
more guidance on revising the classes and checklist items, see below.)
B.4.2.2 Specialisation by Distribution of Function
Although it is not believed that an adequate generic typology of all processes can be
found, there are a number of simple distinctions between process classes which might be
of use in guiding PERE specialisation. For example, processes can be seen to vary in
terms of how they distribute function between humans and machines. Many
‘knowledge-intensive’ processes are principally conducted by human beings in
interaction with each other and through documents. A process of this sort would place a
priority on the human factors viewpoint within PERE and (perhaps) upon checklist
items relevant to documents and reading and writing them. Conversely, a process with a
high degree of automation may be most relevantly analysed by focusing on the
mechanistic viewpoint of PERE.
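This heuristic can be sketched as a simple decision rule. The threshold and the estimate of automation are invented for illustration; PERE prescribes no such quantification.

```python
def priority_viewpoint(automation_fraction: float) -> str:
    """Suggest which PERE viewpoint to prioritise, given a rough estimate
    of the fraction of process function allocated to machines.

    Knowledge-intensive, human-conducted processes favour the human
    factors viewpoint; highly automated processes favour the mechanistic
    viewpoint. The 0.5 threshold is purely illustrative.
    """
    if not 0.0 <= automation_fraction <= 1.0:
        raise ValueError("automation_fraction must be between 0 and 1")
    if automation_fraction >= 0.5:
        return "mechanistic"
    return "human factors"
```

In practice the judgement would be qualitative; the point of the sketch is only that the distribution of function between humans and machines drives the weighting of the two viewpoints.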
B.4.2.3 Specialisation by Process Complexity
Processes can also be seen to vary in terms of their complexity. Of course, the PERE
analyst must be careful of any superficial judgement of process complexity as even the
most apparently simple processes can involve much fine detail on analysis. Nevertheless,
initial assessments of process complexity can be used to specialise the application of
PERE. For example, simple processes may not require an extensive analysis with
iterative deepening or with detailed component classification. A process which is not
subject to extensive interconnections or complex interactions between components (e.g.
a process which is more ‘serial’ and ‘linear’ in nature) may not need a separate analysis of
its interconnections over and above an analysis of its process components. The
complexity of the process may also impact upon the human factors analysis. A complex
process, for example, may involve specific tasks to manage the complexity (e.g. review
meetings). These could be specifically targeted in the human factors analysis.
B.4.2.4 Specialisation by Availability of resources
The resources available to an application of PERE may well vary between one
organisation and another, or even between applications within the same organisation.
This has implications for the way in which a PERE analysis is scheduled to take place.
For example, a well-resourced application of PERE which has several people available
to undertake the analysis would be able to partition the tasks such that some may be
performed in parallel. This is one reason why the different forms of communication
between mechanistic and human factors viewpoints may be tailored to suit a particular
application.
Of course, it is also possible that resources may be too scarce for a full application
of PERE as described in section 7.2 and in sections B.2-B.3. In such cases, the
organisation may still wish to apply a simplified version of PERE in order to obtain a
reduced yet nevertheless useful set of improvements within the scope of the application.
The following section covers the various ways in which PERE might be simplified in
order to achieve this.
B.4.3 Simplifying PERE
PERE, like any other activity in an organisation, requires the commitment of resources.
As discussed in section B.4.2.4 above, it is unwise to underestimate the resources
required for an effective application of PERE. In the light of considerations of the costs
and benefits of PERE, it may be decided that a simplified or more focused application
of PERE is more appropriate than the ‘full’ description of PERE given in this thesis
would suggest. This section reviews the main junctures in the PERE process where an
application might be simplified.
B.4.3.1 Conducting a ‘One-Shot’ Application of PERE
Although PERE has been characterised as involving a potentially unlimited number of
iterations between the mechanistic and human factors viewpoints, much can often be
gained from a ‘one-shot’ application of PERE with just a single analysis from each
viewpoint. This may be especially useful in the early stages of PERE application to a
process to decide whether further application (and iteration) of PERE is justified.
Appendix C presents an example of such an application.
B.4.3.2 Simplifying the Mechanistic Viewpoint
The analysis from the mechanistic viewpoint can be simplified in various ways and
several of these are discussed in the next few sections. It is worth noting though that,
under some circumstances, it might be possible to dispense with the mechanistic analysis
altogether. For example, the process may already have been analysed in terms
very closely related to those that PERE uses. Indeed, as PERE employs mechanistic
analysis concepts which have drawn on their use in methods like Hazops, it may not be
surprising if a prior analysis as part of, say, Hazops has already yielded a usable analysis.
Furthermore, it is possible that the use of one of the other REAIMS modules may have
already given a rich enough process analysis for the human factors analysis to take place.
For example, a prior use of PREview-PV might enable one to dispense with much of
the mechanistic analysis or to focus the analysis more closely. Table B.4 presents a
number of ways in which the mechanistic viewpoint may be specialised.
B.4.3.3 Simplifying the Human Factors Viewpoint
The human factors viewpoint of PERE can also be simplified in a variety of ways. It is
also important to emphasise that simplifications to the mechanistic viewpoint have
consequences for the human factors viewpoint, as it is the mechanistic analysis’ results
which serve as input to the human factors analysis. In this way, the human factors
analysis will ‘inherit’ many of the simplifications made to the mechanistic analysis (e.g.
those made by application of PRA techniques). This may be enough to make the human
factors analysis manageable in many applications, especially as the human factors analysis
has been designed to give step-by-step guidance which ‘filters’ the number of human
factors questions which need to be asked of any component (as shown in Figure B.4).
Table B.5 identifies further sites for the simplification of the human factors viewpoint.
Table B.4: Simplifying PERE’s mechanistic viewpoint
Focusing on Specific Components and Materials. The mechanistic analysis in PERE can
also be simplified by focusing on specific components, working material and
inter-connections, perhaps on the basis of existing incident data. If a particular
component has shown itself to be a likely source of faults, then that component could
form the main focus of mechanistic analysis. A possible strategy then would be to iterate
the analysis to greater depth for that component while also increasing the breadth of the
analysis by taking in similar components or those which the fault-prone component is
immediately connected to. In short, when there is reason to target the analysis on
particular components (or materials or inter-connections), a strategy of iterative
deepening and broadening may be preferred (cf. section B.2.3). If an organisation has
information available which would enable the analysis to be targeted, then this
information should be used. It is also likely that information to focus the analysis will
naturally emerge as part of process capture. For example, people interviewed may have
informed the PERE analyst of the critical process components and so forth.

Classifying Components Abstractly. The mechanistic analysis can be simplified by
analysing components to the more abstract levels of the network of component classes
and not seeking to classify a component in terms of the ‘lowest’ possible, most detailed
level. At the limit this would involve merely noting which of the five basic classes the
component belongs to. While this would permit only the most generic vulnerabilities to
be identified in vulnerability review and in subsequent human factors analysis, it may
nevertheless allow important defences to be implemented swiftly.

Analysing Processes without Redundancy. PERE’s mechanistic viewpoint involves a
degree of redundancy. For example, component inter-connections are identified
implicitly within the components in terms of ‘interfaces’ as well as explicitly in their own
right. This redundancy allows cross-checking and a degree of robustness in the analysis.
However, it does add to the amount of analysis that needs to be done. It may be decided
to eliminate a separate consideration of aspects of the process already analysed. For
example, having analysed the process components, it may be decided not to undertake a
separate analysis of inter-connections. If this simplification is undertaken, it must be
recognised that certain kinds of faults (e.g. those arising through the interaction of
components) might be missed and that PERE itself is losing a degree of defence against
error. These matters should be periodically reviewed (see section B.4.5 below).

Prioritising Vulnerability Identification and Review. It may not be necessary to review
all of the vulnerabilities that are exposed by the PERE mechanistic viewpoint. Some
may be trivial. Others may be too costly to eliminate and already protected against
adequately. Additionally, there may be other sources of information (e.g. incident data)
which can enable the analyst to more effectively identify the most plausible
vulnerabilities. The purposes of the application of PERE can also be used to prioritise
vulnerability identification and review. Vulnerabilities which impact upon process
reliability will clearly be more important than those which impact upon process
availability if reliability is the goal of the application of PERE. Finally, as PERE’s PVT
involves the separate identification of likelihood and severity of consequence,
vulnerabilities can be ordered to prioritise review by likelihood or review by
consequence, again depending upon the overall purpose of the PERE application.

Using a Simplified PCT and PVT. Simplifying analysis and/or vulnerability review
enables simplified versions of the PCT and PVT to be used. For example, if it is decided
that the purpose of PERE is to identify working materials whose vulnerabilities are the
most consequential to the process, then rows and columns for materials and
consequences are the most important to fill out. Also, if a ‘one-shot’ application of
PERE is undertaken, then the PCT and PVT will have a lesser role in coordinating the
analysis iteration-by-iteration and a greater role in expressing the outcomes of analysis.
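Because the PVT records likelihood and severity of consequence separately, the ordering of vulnerabilities for review can be driven by either axis. The sketch below illustrates this; the ordinal scale and the dictionary layout of PVT entries are assumptions for illustration, not part of PERE's notation.

```python
# Illustrative ordinal scale; PERE itself does not prescribe these values.
SCALE = {"low": 1, "medium": 2, "high": 3}

def prioritise(vulnerabilities, by="likelihood"):
    """Order PVT entries for review, most pressing first.

    Each entry is assumed to be a dict with 'name', 'likelihood' and
    'consequence' keys. `by` selects which separately-recorded axis
    drives the review, reflecting the overall purpose of the PERE
    application.
    """
    if by not in ("likelihood", "consequence"):
        raise ValueError("review is ordered by likelihood or consequence")
    return sorted(vulnerabilities, key=lambda v: SCALE[v[by]], reverse=True)
```

A safety-driven application might order by consequence, while an availability-driven one might order by likelihood.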
Table B.5: Simplifying PERE’s human factors viewpoint
Focusing on Specific Error and Violation Types. It may be decided to focus the PERE
analysis on specific error and/or violation types. For example, it may be known that
process violations have become a common source of critical incidents in a particular
process. In such a case, the analysis could be focused on violations, and the items
devoted to violations and defence against them from the checklist become of paramount
significance.

Focusing on Specific Problem Types. Figure 3.4 shows the different types of problems
which might arise in a process as identified in PERE’s human factors component.
Incident data or other forms of intelligence (e.g. that gathered through an application of
PREview-PV or MERE) may enable the PERE analyst to focus on, say, organisational
context problems rather than those due to individual activities. Such simplifications are
equivalent to following only some of the links on Figure B.4.

Analysing Generic Vulnerabilities and Defences. PERE’s human factors viewpoint
presents human factors vulnerabilities and defences in a ‘layered’ fashion. For example,
the analyst can consult Key Figures 1, 2 and 3 for the definition of the most generic
vulnerabilities and defences associated respectively with individual activities, group
activities and the organisational context. A more detailed layer of analysis and suggested
defence can be found by consulting the human factors checklist. The checklist itself can
be consulted at two levels. In this way, the human factors viewpoint can be simplified,
for example, by consulting just the more generic ‘layers’ and only proceeding to lower
levels if the generic analysis seems plausible.

Using the Human Factors Checklist Selectively. The checklist can be used selectively in
another way. For most of the vulnerabilities, a number of different ‘glosses’ are given
and a number of different defences suggested. Depending on the process domain, some
of these glosses on a vulnerability may be irrelevant. For example, vulnerabilities
associated with document work may be of limited relevance in a factory production-line
context. Furthermore, again, other sources of data may enable the analyst to prioritise
specific items on the checklist or specific ‘glosses’ within a given item.

Prioritising the Consideration of Vulnerability and Defence Types. Just as the overall
purpose of the PERE application and so forth can be used to prioritise the vulnerability
review from the mechanistic viewpoint, the same considerations can simplify the
examination of human factors vulnerabilities and possible defences.

Using a Simplified PHT. Just as the mechanistic viewpoint’s PCT and PVT can be
simplified in response to simplifications of that component, so the PHT can be
simplified to reflect a simplified human factors analysis.
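The ‘layered’ consultation strategy described in Table B.5 can be sketched as follows: the generic layer is examined first, and the more detailed checklist layer is consulted only for those generic vulnerabilities that seem plausible. The data layout is a simplifying assumption and not the actual format of PERE's checklist.

```python
def layered_consultation(generic_layer, detailed_layer, is_plausible):
    """Walk a two-layer checklist, descending to the detailed layer only
    where the generic analysis seems plausible.

    generic_layer: mapping of generic vulnerability -> description
    detailed_layer: mapping of generic vulnerability -> list of detailed
                    checklist items
    is_plausible: predicate applied to each generic vulnerability
    """
    findings = {}
    for vulnerability in generic_layer:
        if is_plausible(vulnerability):
            # Only plausible generic findings trigger the deeper layer.
            findings[vulnerability] = detailed_layer.get(vulnerability, [])
    return findings
```

The saving is that implausible generic vulnerabilities never incur the cost of the detailed layer, which is exactly the simplification the layered presentation is designed to support.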
B.4.4 Developing PERE
It is recognised that PERE can (indeed, should) be developed so as to more closely
match its domain of application in an organisation. This is principally done by
incrementally adding to the analysis classes used in its two viewpoints. However, the
sustained application of PERE in a particular organisation may require other forms of
development, for example the translation of its checklists and other supporting materials
(including this document for that matter) into a national language other than English or
into forms of language which are more idiomatic for the industry in question.
B.4.4.1 Specialising and Adding to the Mechanistic Analysis Classes
The mechanistic viewpoint can be principally extended by adding domain specific classes
and vulnerabilities to those outlined in this document (e.g. the five basic component
classes). It is anticipated that the basic component classes are indeed generic and can be
equally applied to many different domains of work or forms of industrial production. In
that case, extending the classes will involve adding specialisations to the basic
classes and those descended from them. However, it may turn out that a new component
class emerges which requires a new basic or ‘root’ class. Nevertheless, PERE analysts are
urged to use the network of classes which has emerged as most useful for the industry in
question as this not only facilitates the analysis process but also allows earlier results to
be reused and stored.
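The idea of incrementally specialising the class network can be made concrete with a small sketch. The parent-link representation below is an assumption for illustration; the example specialisations (Record beneath Transduce, Database beneath Record) follow the classifications used in the MERE application of Appendix C.

```python
# Parent-link representation of the network of component classes.
# The five basic ('root') classes have no parent.
classes = {c: None for c in ("Transduce", "Process", "Channel", "Control", "Store")}

def specialise(network, new_class, parent):
    """Add a domain-specific specialisation beneath an existing class."""
    if parent not in network:
        raise KeyError(f"unknown parent class: {parent}")
    network[new_class] = parent

def ancestry(network, cls):
    """Return the chain of classes from `cls` up to its basic class."""
    chain = [cls]
    while network[chain[-1]] is not None:
        chain.append(network[chain[-1]])
    return chain
```

A classification such as ‘Transduce: Record: Database’ is then simply the ancestry chain of the most specific class, read from root to leaf, which is what allows earlier results to be stored and reused.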
B.4.4.2 Specialising and Adding to the Human Factors Analysis Classes
The human factors analysis classes, the checklist items and the key questions which guide
human factors analysis can all be incrementally added to. PERE adopts a layered
approach to the presentation of the checklist which should be followed where possible.
The checklist (Appendix A) has adopted a numbering scheme to facilitate reference and
additions to it. The human factors viewpoint’s ‘layered’ approach is designed to be less
strict than the mechanistic analysis’ network of classes. The fundamental purpose of
layering the checklist is to support the simplification strategy outlined above. Additions
to the checklist should be mindful of this. It is anticipated that the human factors
checklist will change considerably with PERE application as, for example, the abstract
references to activities and documents made in its items need to be concretised by
reference to specific domain tasks and document-types found in the processes which are
analysed.
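A hierarchical numbering scheme of the kind the checklist adopts makes additions straightforward: a new item simply takes the next free number beneath its parent. The sketch below is a generic illustration of such a scheme, not the actual numbering of Appendix A.

```python
def next_item_number(existing, parent):
    """Return the next free hierarchical number beneath `parent`.

    `existing` is a collection of dotted item numbers, e.g.
    {"1", "1.1", "1.2", "2"}; adding beneath parent "1" yields "1.3".
    Deeper descendants (e.g. "1.1.1") are ignored when numbering
    direct children.
    """
    n = 0
    prefix = parent + "."
    for item in existing:
        # Direct children have exactly one more dotted level than the parent.
        if item.startswith(prefix) and item.count(".") == parent.count(".") + 1:
            n = max(n, int(item[len(prefix):]))
    return f"{prefix}{n + 1}"
```

Because existing numbers are never reused, cross-references from earlier PERE analyses remain valid as the checklist grows.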
B.4.5 Improving PERE
A fundamental goal of PERE is to aid the improvement of the processes it analyses.
However, it is recognised that for PERE to reliably accomplish this, PERE itself must
be dependable and analysable in terms of its reliability, availability, maintainability,
security and safety (RAMSS). Thus, the improvement of PERE itself should be
considered as part of its specialisation. This section discusses how this can be promoted.
B.4.5.1 Reviewing the Application of PERE
The application of PERE itself should be periodically reviewed, both within a particular
application to a single process and across its applications within a given domain or
organisation. In particular, attention should be given to how PERE is being applied,
whether it is being applied ‘in full’ or in an appropriately specialised fashion, by whom it
is being applied and whether these personnel are appropriate, and so forth. Different
strategies for specialisation should be reviewed as outlined above and, when an
‘incorrect’ application of PERE has been detected, a decision should be taken as to
whether this arises through error or whether a new strategy for PERE specialisation is
required. Organisations are encouraged to develop their own PERE Application Guide
which gives PERE analysts guidance on the appropriate strategies for PERE
specialisation in the organisation’s business domain. This Guide would then be the
reference for periodic PERE application review.
B.4.5.2 Reviewing the Utility of PERE
In addition to monitoring how PERE is applied in an organisation, it is necessary to
obtain assessments of the usefulness of PERE itself in detecting vulnerabilities and
suggesting effective defences. The usefulness of PERE needs to be assessed in reference
to the purposes of any particular application. It would not be appropriate to assess
PERE’s utility against criteria of cost savings if it was being applied with an exclusive
interest in safety in mind. The usefulness of PERE can be assessed informally (e.g.
through assessing the subjective impressions of PERE analysts and persons involved in
the processes studied by PERE) and also formally (e.g. if quantitative before-and-after
analyses of, say, incident data can be reliably made). It would be unreasonable to assess
the usefulness of all of PERE’s suggested defences, but it would be essential to the
continued worth of PERE to an organisation that its application was yielding some
overall gains.
Information on PERE’s utility should be fed back into the review of PERE application
described above at both generic domain levels as well as (if possible) in terms of PERE’s
effectiveness within a single application case. This would enable an organisation to
decide whether, for example, the commitment of more resources to PERE is justified in
the light of its usefulness or whether new application strategies should be considered,
and so forth.
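Where incident data can be reliably gathered, the formal before-and-after assessment mentioned above can be as simple as comparing incident rates over matched reporting periods. The sketch below shows such a comparison; the data format is an assumption for illustration.

```python
def incident_rate_change(before, after):
    """Relative change in incident rate after a PERE application.

    `before` and `after` are lists of incident counts over matched
    reporting periods. A negative result indicates an overall gain
    (fewer incidents per period after the application).
    """
    if not before or not after:
        raise ValueError("need incident counts for both periods")
    rate_before = sum(before) / len(before)
    rate_after = sum(after) / len(after)
    if rate_before == 0:
        raise ZeroDivisionError("no incidents in the 'before' period")
    return (rate_after - rate_before) / rate_before
```

Such a figure would be one input among several: as noted above, it would be unreasonable to attribute every change to PERE's suggested defences, but a sustained negative trend is evidence of overall gain.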
B.4.5.3 Applying PERE Reflexively
It is possible to apply PERE reflexively and it is suggested that this should form part of
any periodic application review. That is, as PERE is concerned with different kinds of
vulnerabilities, and defences against them, it is possible to apply PERE to itself. In
particular, PERE as it is defined within an organisation’s own PERE Application Guide
should be reviewed in this fashion as this will embody the most detailed descriptions of
the PERE process as it is instantiated in a particular organisation. Such an application of
PERE should have other benefits than simply its own direct improvement. If
organisation members see a method applied evenly both to themselves and to those
whose business is to evaluate processes, a culture of cooperation is more likely to be
fostered. The ‘reflexive self-review’ of PERE can help in this.
Appendix C Results of applying
PERE to MERE
This appendix presents the data obtained from the application of PERE to
Aerospatiale’s MERE process, as described in Chapter 8.
Figure C.1 is a representation of the MERE process in total, and is based on the
documentation (Branet and Durstewitz, 1995). It is followed by the MERE component
class hierarchy (Figure C.2), and then the three main tables containing the data from the
application, namely:
PERE Component Table (PCT),
PERE Vulnerability Table (PVT), and
PERE Human Factors Table (PHT).
[Process diagram omitted. Its main stages are: 1 Experience Fact Collection, Coding
and Recording; 2 Rule/Recommendation Elaboration and Validation (2.1 preliminary
elaboration and prevalidation of R/R, 2.2.1 “Level 1” validation by the Technical
Validation committee, 2.2.2 “Level 2” validation by the “Wise Men” committee);
3 Rule/Recommendation Application During Design; 4 Rule/Recommendation
Verification of Application. Supporting stores include the experience facts database, the
R/R database, the project R/R database and the project deviations database. The key
distinguishes the main flow of working material from the flow of feedback.]
Figure C.1: The Aerospatiale MERE process
[Class hierarchy diagram omitted. The root class Component has five basic subclasses:
Transduce, Process, Channel, Control and Store. Transduce specialises into CollectData,
Record and Select, with Record and Select further specialised as Database variants;
Process specialises into AnalyseData, ElaborateRequirements, Validate (itself specialised
into PreValidate, Level1 and Level2), ApplyRequirements and VerifyApplication;
Control specialises into Convocation, with Meeting and Distributed Meeting variants.]
Figure C.2: Component Class Hierarchy for MERE application
C.1 PERE Component Table (PCT) for The Aerospatiale
MERE Process
Below is a completed PCT corresponding to a single iteration of analysis of the
Aerospatiale MERE process. The component reference numbers correspond with
reference numbers in MERE’s own documentation. This correspondence enables us to
simplify the PCT by omitting the source column, as the reference number sufficiently
indexes the MERE documentation that has been used in the PERE analysis.
Column headings: Component name | Class | Interfaces and working materials | State |
Invariant | (Optional) Pre-conditions and resources | (Optional) External control.
Note (footnote 25 in the original): it should be noted that, for the most part, control of
components in the MERE process takes place by virtue of working material (i.e. coded
experience facts, R/Rs, etc.) being passed from one component to the next.

1.1 Collect & select experience facts.
Class: Transduce: CollectData.
Interfaces and working materials: A: Concerns checklist; B: Experience facts from
various sources; C: In service efficiency data; D: Selected experience facts.
State: Selected sources of experience facts.
Invariant: Selected facts are relevant as potential candidates for R/R elaboration.
Pre-conditions and resources: Safety/reliability MERE engineer team; Concern
checklist.

1.2 Analyse and code experience facts.
Class: Process: AnalyseData.
Interfaces and working materials: A: Selected experience facts; B: Analysed and coded
facts.
State: Current code list.
Invariant: All the selected facts are coded according to the current code list.
Pre-conditions and resources: Safety/reliability MERE engineer team; Specialist
engineers; Experience data sheet format, code list and analysis procedures; FichEyes
tool.

1.3 Record experience facts in database.
Class: Transduce: Record: Database.
Interfaces and working materials: A: Analysed and coded facts; B: Analysed and coded
facts.
State: Experience facts entered so far.
Invariant: All the coded facts should be entered into the experience facts database.
Pre-conditions and resources: Safety/reliability MERE engineer team; Experience fact
code list; FichEyes tool.

2.1.1 Preliminary elaboration of R/R.
Class: Process: ElaborateRequirements.
Interfaces and working materials: A: Analysed and coded facts; B: R/R from other
inputs; C: R/R not validated level 1; D: R/R not prevalidated; E: Preliminary R/R list.
Invariant: R/Rs elaborated should be general enough to be applicable to a range of
designs, yet specific enough for their application to be verifiable.
Pre-conditions and resources: Safety/reliability MERE engineer team; R/R data sheet
format and code list; FichEyes tool.

2.1.2 Analysis of R/R by specialists.
Class: Process: Validate: PreValidate.
Interfaces and working materials: A: Preliminary R/R list; B: Prevalidated R/R; C: R/R
not prevalidated.
Invariant: All preliminary R/Rs should be examined for accuracy, coding, means of
appliance, etc. until deemed ready for submission to the validation committees.
Pre-conditions and resources: Safety/reliability MERE engineer team; Specialist
engineers; MERE correspondents.

2.1.3 Record R/R in database.
Class: Transduce: Record: Database.
Interfaces and working materials: A: Prevalidated R/R; B: Prevalidated R/R.
Invariant: All prevalidated R/Rs should be entered into the R/R database with the
“Service validation” field set to “Validated”.
Pre-conditions and resources: Safety/reliability MERE engineer team; FichEyes tool.

2.2.1.1 Organisation & convocation for TVC.
Class: Control: Convocation: Meeting.
Interfaces and working materials: A: Prevalidated R/R; B: R/R not validated level 2;
C: R/R.
Invariant: Prevalidated R/Rs should be distributed to all members of the TVC, and a
date for the meeting should be set.
Pre-conditions and resources: Safety/reliability MERE engineer team; T.V. committee
participant list; Format for edition of technical materials for T.V. committee; FichEyes
tool.

2.2.1.2 Level 1 validation.
Class: Process: Validate: Level1.
Interfaces and working materials: A: R/R; B: T.V. committee results.
Invariant: The TVC should determine for each prevalidated R/R that its rationale is
valid; any remaining discrepancies are resolved; that R/R are explicit, understandable,
applicable, and that the application is verifiable; and that there are no adverse safety
effects if the R/R is applied incorrectly.
Pre-conditions and resources: Safety/reliability MERE engineer team; T.V. committee
participants (specialist engineers); MERE correspondents.

2.2.1.3 Recording of TVC results.
Class: Transduce: Record: Database.
Interfaces and working materials: A: T.V. committee results; B: R/R not validated
level 1; C: R/R validated level 1.
Invariant: All R/Rs reviewed and agreed by the TVC should be recorded in the R/R
database as “Technically validated” (status “Validated” in the “Validation Committee”
field). Other R/Rs are passed back to the pre-validation cycle for reconsideration by
specialists.
Pre-conditions and resources: Safety/reliability MERE engineer team; FichEyes tool.

2.2.2.1 Preparation for WMC.
Class: Control: Convocation: Distributed Meeting.
Interfaces and working materials: A: R/R validated level 1; B: R/R.
Invariant: All technically validated R/Rs should be distributed to each member of the
WMC by the WMC manager.
Pre-conditions and resources: Safety/reliability MERE engineer team; W.M. committee
member list; W.M. committee manager; Format for edition of technical materials for
W.M. committee members; FichEyes tool.

2.2.2.2 Level 2 validation.
Class: Process: Validate: Level2.
Interfaces and working materials: A: R/R; B: W.M. committee results.
Invariant: Each individual member of the WMC should validate the R/Rs for
non-technical issues such as cost, confidentiality, etc.
Pre-conditions and resources: Safety/reliability MERE engineer team; W.M. committee
manager.

2.2.2.3 Recording of WMC results.
Class: Transduce: Record: Database.
Interfaces and working materials: A: W.M. committee results; B: R/R not validated
level 2; C: Validated R/R and associated experience facts.
Invariant: All R/Rs validated by the WMC should be labelled as “Applicable” (status
“Validated” in the “Wise Men Committee” field), and incorporated into the “MERE
Applicable Document”. Other R/Rs should be passed back to either technical validation
or pre-validation for reconsideration.
Pre-conditions and resources: Safety/reliability MERE engineer team; FichEyes tool.

3.1 Selection of R/R for one project.
Class: Transduce: Select: Database.
Interfaces and working materials: A: Validated R/R and associated experience facts;
B: Not accepted deviations; C: Applicable R/Rs; D: Applicable R/Rs.
Invariant: At the start of a project, all R/Rs deemed relevant are selected and stored in
the project R/R database, and a MERE R/R applicable document for the project is
issued.
Pre-conditions and resources: Safety/reliability MERE engineer team; Safety/reliability
engineers in charge of the project; Project chief engineer representative; Format for
edition of technical materials for R/R application according to the design tasks;
FichEyes tool.

3.2 Application of R/R in design.
Class: Process: ApplyRequirements.
Interfaces and working materials: A: Applicable R/Rs; B: Non accepted deviations;
C: R/R design application deviations; D: R/R design application deviations.
Invariant: All R/Rs should be distributed to the relevant services for application as per
all R/Rs during design.
Pre-conditions and resources: Safety/reliability engineers in charge of the project;
Specialist engineers and associated hierarchy; Procedures and formats for R/R deviation
reporting.

3.3 Record deviations.
Class: Transduce: Record: Database.
Interfaces and working materials: A: R/R design application deviations; B: List of
applied and deviated R/Rs; C: Non accepted deviations.
Invariant: Any deviations and problems with application should be recorded in the
project R/R deviations database, along with their associated rationales.
Pre-conditions and resources: Safety/reliability project engineer team; Safety/reliability
specialist engineers; Format and process for deviation discussions and acceptation;
FichEyes tool.

4.1 Selection of applied R/R for verification.
Class: Transduce: Select: Database.
Interfaces and working materials: A: List of applied and deviated R/Rs; B: Selected
R/Rs.
Invariant: All applied R/Rs which were generated by the MERE process should be
selected for verification, regardless of which applicable document they are stored in.
Pre-conditions and resources: Safety/reliability engineers in charge of the project;
Project chief engineer representative; Format for edition of technical materials for R/R
verification according to the type of verification tasks; FichEyes tool.

4.2 Verification of application.
Class: Process: VerifyApplication.
Interfaces and working materials: A: Selected R/Rs; B: R/R application deviations.
Invariant: Normal verification activities should be performed for the selected R/Rs by
specialist engineers not involved in the application.
Pre-conditions and resources: Safety/reliability engineers in charge of the project;
Specialist engineers and associated hierarchy; Procedures and formats for R/R deviation
reporting.

4.3 Recording of application status and results.
Class: Transduce: Record: Database.
Interfaces and working materials: A: R/R application deviations; B: Survey of R/R in
service efficiency via experience collection; C: Not accepted deviations.
Invariant: R/Rs not applied or deviated from should be discussed and rationales for this
recorded.
Pre-conditions and resources: Safety/reliability project engineer team; Safety/reliability
specialist engineers; Project chief engineer representative;
Format and
process for
deviation
discussions and
acceptation;
FichEyes tool.
Experience
facts
database.
Store:
Database
A: Analysed and
coded facts;
B: Analysed and
coded facts.
All facts
entered so
far.
All facts stored
should be
retrievable
according to
selection criteria.
FichEyes tool.
R/R
database.
Store:
Database
A: Prevalidated
R/R;
B: Prevalidated
R/R;
C: R/R validated
level 1;
D: Validated R/R
and associated
experience facts.
All R/Rs
entered so
far.
All R/Rs stored
should be
retrievable
according to
selection criteria.
FichEyes tool.
C-11
Project R/R
database.
Store:
Database
A: Applicable
R/Rs;
B: Applicable
R/Rs.
Selection
criteria
All R/Rs stored
should be
retrievable
according to
selection criteria.
FichEyes tool.
Project R/R
deviations
database.
Store:
Database
A: R/R
application
deviations;
B: R/R
application
deviations.
All R/Rs stored
should be
retrievable
according to
selection criteria.
FichEyes tool.
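The component descriptions above imply a simple lifecycle for each R/R: pre-validated, validated at level 1 by the TVC, validated at level 2 by the WMC (and so “Applicable”), then applied in design, with rejected R/Rs passed back for reconsideration. As a minimal sketch only, that lifecycle can be modelled as a state machine; the status and field names below are hypothetical illustrations, not the actual FichEyes schema:

```python
from dataclasses import dataclass, field

# Allowed status transitions, paraphrasing the table: the TVC promotes
# pre-validated R/Rs to level 1; the WMC either promotes them to level 2
# ("Applicable") or passes them back; applicable R/Rs are then applied,
# or a deviation is recorded. All names here are illustrative.
VALID_TRANSITIONS = {
    "prevalidated": {"validated-level-1"},
    "validated-level-1": {"validated-level-2", "prevalidated"},
    "validated-level-2": {"applied", "deviation-recorded"},
}

@dataclass
class ReliabilityRequirement:
    identifier: str
    status: str = "prevalidated"
    history: list = field(default_factory=list)

    def advance(self, new_status: str) -> None:
        """Move to a new status, refusing transitions the process forbids."""
        if new_status not in VALID_TRANSITIONS.get(self.status, set()):
            raise ValueError(f"illegal transition {self.status} -> {new_status}")
        self.history.append(self.status)
        self.status = new_status

rr = ReliabilityRequirement("RR-001")
rr.advance("validated-level-1")   # TVC (level 1) validation
rr.advance("validated-level-2")   # WMC (level 2) validation: now applicable
```

A transition check of this kind would make the “passed back to pre-validation” path explicit and auditable, which is the spirit of the recording components in the table.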
C.2 PERE Vulnerability Table (PWT) for the Aerospatiale MERE process

Vulnerability | Class | Likelihood | Consequence | Possible defences (those already implemented in MERE appear in parentheses) | Possible secondary vulnerabilities

No experience facts collected/selected | Transduce: CollectData: Failure | Possible | MERE does not generate R/Rs for certain incidents. | (There are multiple sources of facts in MERE, so important facts should not be missed.) | None
Experience facts not analysed or coded | Process: AnalyseData: Failure | Unlikely
Incorrect analysis or coding of experience facts | Process: AnalyseData: Incorrect | Possible | Incorrectly coded facts will in turn lead to incorrect R/Rs. | (Validation and verification stages within MERE should protect against this.) | None
Experience facts not recorded in database | Transduce: Record: Database: Failure | Unlikely
Experience facts incorrectly recorded in database | Transduce: Record: Database: Incorrect | Possible | Inappropriate recording in the experience fact database and incorrect R/R formulated. | (MERE includes checking of transcribed Experience Record Sheets.) | None
Failed preliminary elaboration of R/R | Process: ElaborateRequirements: Failure | Unlikely
Incorrect preliminary elaboration of R/R | Process: ElaborateRequirements: Incorrect | Possible | Inapplicable or unreliable R/Rs. | (The cross-checking of elaborated R/Rs is implemented in MERE.) | None
R/R not analysed by specialists | Process: Validate: PreValidate: Failure | Unlikely
R/R incorrectly analysed by specialists | Process: Validate: PreValidate: Incorrect | Possible | Inapplicable or unreliable R/Rs. | (Cross-checking is accomplished in MERE by iterating between specialist critique and synthesis by MERE correspondents.) | None
R/R not recorded in database | Transduce: Record: Database: Failure | Unlikely
R/R incorrectly recorded in database | Transduce: Record: Database: Incorrect | Possible | Inappropriate recording in the R/R database. | (Checking of printouts from FichEyes by specialists occurs within MERE.) | None
Failure of organisation & convocation for TVC | Control: Convocation: Meeting: Failure | Unlikely
Incorrect organisation & convocation for TVC | Control: Convocation: Meeting: Incorrect | Unlikely
No level 1 validation | Process: Validate: Level1: Failure | Unlikely
Incorrect level 1 validation | Process: Validate: Level1: Incorrect | Possible | Incorrectly validated R/Rs for WMC review. | (Checking of R/Rs by the WMC is inherent within MERE.) | None
TVC results not recorded | Transduce: Record: Database: Failure | Unlikely
TVC results recorded incorrectly | Transduce: Record: Database: Incorrect | Possible | Inappropriate recording in the R/R database. | (Checking of R/Rs by the WMC is inherent within MERE.) Consider introducing checking of printouts from FichEyes by TVC members. | None
No preparation for WMC | Control: Convocation: Distributed Meeting: Failure | Unlikely
Incorrect preparation for WMC | Control: Convocation: Distributed Meeting: Incorrect | Possible | Inappropriate materials for WMC review. | (Checking of materials by WMC members is inherent within MERE.) | None
No level 2 validation | Process: Validate: Level2: Failure | Unlikely
Incorrect level 2 validation | Process: Validate: Level2: Incorrect | Possible | Incorrectly validated R/Rs for possible application. | Consider introducing further checking of WMC activity or feedback to WMC members. | None
WMC results not recorded | Transduce: Record: Database: Failure | Unlikely
WMC results recorded incorrectly | Transduce: Record: Database: Incorrect | Possible | Inappropriate recording in the R/R database. | Consider introducing further checking of WMC activity or feedback to WMC members. | None
No selection of R/R for a project | Transduce: Select: Database: Failure | Possible | Relevant R/Rs not passed on to design for application. | Include a check for the existence of a MERE Applicable Document in standard design procedures.
Incorrect selection of R/R for one project | Transduce: Select: Database: Incorrect | Possible | Wrong R/Rs passed on to design for application: relevant R/Rs may be missed, and irrelevant R/Rs may be applied unnecessarily. | (Selection is the responsibility of project team management in AS.) Consider checking by specialists if not already done.
Selected R/Rs not applied in design | Process: ApplyRequirements: Failure | Possible | Relevant R/Rs not applied. | (Integration of MERE into AS’s standard procedures for application of R/Rs should ensure this does not happen.)
Selected R/Rs incorrectly applied during design | Process: ApplyRequirements: Incorrect | Possible | Designs are changed in unexpected ways. | (Validation stages in MERE include checks for applicability of R/Rs, which should remove the chance of misunderstandings. The verification stage checks on application. Standard AS design procedure should also include similar checks.) | None
Deviations not recorded | Transduce: Record: Database: Failure | Possible | Unplanned changes to designs. No feedback on poor or hard-to-apply R/Rs. | (Standard AS design procedure and the MERE verification stage should ensure that the application of all R/Rs is properly monitored.) | None
Deviations recorded incorrectly | Transduce: Record: Database: Incorrect | Possible | Incorrect details of R/R application passed on to the verification stage. | (The R/R application verification stage should capture all such problems.) Consider introducing checking of printouts from FichEyes by relevant personnel. | None
R/Rs not selected for verification | Transduce: Select: Database: Failure | Unlikely
R/Rs incorrectly selected for verification | Transduce: Select: Database: Incorrect | Unlikely
No verification of application | Process: VerifyApplication: Failure | Unlikely
Incorrect verification of application | Process: VerifyApplication: Incorrect | Possible | Mistakes in application are not uncovered. Incorrect feedback provided to other MERE stages regarding the applicability of R/Rs. | (Verification is not specific to MERE, but standard AS procedure. Multiple and overlapping activities, and different personnel to those involved in the rest of MERE, should ensure that any mistakes are uncovered.) | None
Application status and results not recorded | Transduce: Record: Database: Failure | Unlikely
Application status and results recorded incorrectly | Transduce: Record: Database: Incorrect | Possible | Incorrect feedback provided to other MERE stages. Record for the project not accurate when consulted later. | Consider introducing checking of printouts from FichEyes by relevant personnel. | None
Experience facts database fails | Store: Database: Failure | Unlikely
R/R database fails | Store: Database: Failure | Unlikely
Project R/R database fails | Store: Database: Failure | Unlikely
Project R/R deviations database fails | Store: Database: Failure | Unlikely
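Read as data, a vulnerability table of this kind supports simple mechanical queries, such as listing the “Possible” vulnerabilities for which no defence is already implemented in MERE (those whose defence column carries no parenthesised entry). The sketch below is illustrative only; the record and field names are hypothetical, and the four sample rows paraphrase entries from the table above:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Vulnerability:
    name: str
    pere_class: str          # e.g. "Transduce: Record: Database: Incorrect"
    likelihood: str          # "Possible" or "Unlikely"
    defence_implemented: bool  # True if the table shows a parenthesised defence

# Four rows paraphrased from the PWT above (illustrative subset).
PWT_ROWS = [
    Vulnerability("Incorrect level 2 validation",
                  "Process: Validate: Level2: Incorrect", "Possible", False),
    Vulnerability("WMC results recorded incorrectly",
                  "Transduce: Record: Database: Incorrect", "Possible", False),
    Vulnerability("Deviations not recorded",
                  "Transduce: Record: Database: Failure", "Possible", True),
    Vulnerability("R/R database fails",
                  "Store: Database: Failure", "Unlikely", False),
]

def open_risks(rows):
    """Rows that are both plausible and currently undefended."""
    return [v.name for v in rows
            if v.likelihood == "Possible" and not v.defence_implemented]

print(open_risks(PWT_ROWS))
```

Such a query surfaces exactly the rows where the table recommends new defences (“Consider introducing further checking of WMC activity…”).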
C.3 PERE Human Factors Table (PHT) for the Aerospatiale MERE process

Name | Analysis (vulnerability reference in parentheses) | Likelihood | Consequence | Possible defences (those already implemented in MERE appear in parentheses) | Possible secondary vulnerabilities (vulnerability reference in parentheses)

Problems with Process Components

Component 1.1 Collect & select experience facts. | Individual, rule-based errors by specialists or generalists leading to inappropriately selected facts or facts omitted (2.1 & 2.2). | Possible | Inappropriate fact analysed and coded, or fact omitted from analysis. | Training in collection and selection procedures. (Checking and review achieved in the Elaboration and Validation processes within MERE.) | None
Component 1.2 Analyse and code experience facts. | Individual, rule-based errors by specialists leading to inappropriately coded facts (2.1 & 2.2). | Possible | Inappropriate recording in the experience fact database and incorrect R/R formulated. | Training in coding and analysis. (Checking and review achieved in the Elaboration and Validation processes within MERE.) | None
Component 1.3 Record experience facts in database. | Individual, skill-based errors in transcription leading to inappropriately recorded facts (1.1-1.3). | Possible | Inappropriate recording in the experience fact database and incorrect R/R formulated. | Minimise transcription-worker fatigue. (Checking of printouts by specialists occurs within MERE. Tool support provided by FichEyes.) | None
Component 2.1.1 Preliminary elaboration of R/R. | Individual, knowledge-based errors by generalists leading to poor R/Rs (3.1-3.6). | Possible | Inapplicable or unreliable R/Rs. | Introduce group processes whereby the elaboration of R/Rs is examined and critiqued to avoid biases in elaboration. (The checking of individual elaborations by a group of generalist engineers is often implemented in MERE.) | Group failures and errors (5.1-5.9)
Component 2.1.2 Analysis of R/R by specialists. | Individual, knowledge-based errors by MERE correspondents and/or specialists leading to poor R/Rs (3.1-3.6). | Possible | Inapplicable or unreliable R/Rs. | Introduce group processes whereby the elaboration of R/Rs is examined and critiqued to avoid biases in elaboration and pre-validation. (This is accomplished in MERE by iterating between specialist critique and synthesis by MERE correspondents.) | Group failures and errors (5.1-5.9)
Component 2.1.3 Record R/R in database. | Individual, skill-based errors in transcription leading to inappropriately recorded R/Rs (1.1-1.3). | Possible | Inappropriate recording in the R/R database. | Minimise transcription-worker fatigue. (Checking of printouts by specialists occurs within MERE. Tool support provided by FichEyes.) | None
Component 2.2.1.1 Organisation & convocation for TVC. | Individual, skill-based errors in information retrieval leading to inappropriately assembled materials for the TVC (1.1-1.3). | Possible | Inappropriate materials for TVC review. | Consider tool support for meeting scheduling. (Checking of materials by TVC members is inherent within MERE. Tool support provided by FichEyes.) | None
Component 2.2.1.2 Level 1 validation. | Group coordination failures and process losses leading to incorrectly validated R/Rs and/or inaccurate rationales (5.1-5.9). | Possible | Incorrectly validated R/Rs for WMC review. | Review the design of, and procedures for, the TVC to minimise the likelihood of (e.g.) status, motivational, and leadership problems. | None
Component 2.2.1.3 Recording of TVC results. | Individual, skill-based errors in transcription leading to inappropriately recorded R/Rs (1.1-1.3). | Possible | Inappropriate recording in the R/R database. | Consider introducing checking of permanent meeting records by TVC members. (Checking of R/Rs by the WMC is inherent within MERE. Tool support provided by FichEyes.) | None
Component 2.2.2.1 Preparation for WMC. | Individual, skill-based errors in information retrieval leading to inappropriately assembled materials for the WMC (1.1-1.3). | Possible | Inappropriate materials for WMC review. | Consider tool support for distributed meetings. (Checking of materials by WMC members is inherent within MERE. Tool support provided by FichEyes.) | None
Component 2.2.2.2 Level 2 validation. | Group coordination failures and process losses leading to incorrectly validated R/Rs (5.1-5.9). | Possible | Incorrectly validated R/Rs for possible application. | Review the design of, and procedures for, the WMC to minimise the likelihood of (e.g.) status, motivational, and leadership problems. | None
Component 2.2.2.3 Recording of WMC results. | Individual, skill-based errors in transcription and/or knowledge-based errors in synthesis leading to inappropriately recorded R/Rs and meeting records (1.1-1.3, 3.1-3.6). | Possible | Inappropriate recording in the R/R database. | Consider introducing checking of permanent meeting records by WMC members. (Tool support provided by FichEyes.) | None
Component 3.1 Selection of R/R for one project. | No single main source of error is identifiable at this level of analysis. Further PERE mechanistic analysis required.
Component 3.2 Application of R/R in design. | Individual, rule-based errors in design by specialist engineers (and possibly further errors depending on the nature of the design activity itself). | Possible | Inappropriately applied R/Rs, R/R application not reviewed, deviations and rationales not recorded. | (Testing of R/Rs in novel or simulated conditions is inherent to MERE and the design activity.) Further defence may also be plausible if the design activity itself is subjected to analysis, so that the nature of R/R application and in-service review are understood. | None
Component 3.3 Review and record deviations. | Group coordination failures and process losses leading to incorrectly accepted or rejected deviations (5.1-5.9). | Possible | Incorrectly accepted or rejected R/Rs or deviations recorded in the project and/or MERE database. | Review the design of, and procedures for, review meetings (etc.) to minimise the likelihood of (e.g.) status, motivational, and leadership problems. | None
Component 4.1 Selection of applied R/R for verification. | MERE depends upon standard verification procedures defined externally to MERE itself. Thus, errors due to human factors can only plausibly be suggested on analysis of those procedures.
Component 4.2 Verification of application. | ditto
Component 4.3 Recording of application status and results. | No single main source of error is identifiable at this level of analysis. Further MERE specification and PERE mechanistic analysis required.

Problems with Connections and Working Materials

Experience Record Sheet | Unknown (7.1-7.5) | Unknown | Unknown | (Multiple views and dynamic cross-referencing supported by FichEyes.) | Unknown
R/R Description Sheet | Unknown (7.1-7.5) | Unknown | Unknown | (Multiple views and dynamic cross-referencing supported by FichEyes.) | Unknown
R/R Application Data Sheet | Unknown (7.1-7.5) | Unknown | Unknown | (Multiple views and dynamic cross-referencing supported by FichEyes.) | Unknown
R/R Verification Data Sheet | Unknown (7.1-7.5) | Unknown | Unknown | (Multiple views and dynamic cross-referencing supported by FichEyes.) | Unknown

Organisational Context Problems

Formal Structure | Errors due to ineffective reporting procedures etc. (6.1-6.4 and 6.11) | Unlikely
Communication Channels | Errors due to ‘single point failures’ and error propagation (6.1 and 6.2) | Possible | MERE process halts; incorrectly processed R/Rs are passed on to application, etc. | All MERE management activities which are potential sites of single-point failure (e.g. tasks which are the responsibility of single individuals and which have other tasks dependent upon them) should be subject to supervision and checking. (MERE does contain many feedback loops which protect against error propagation.) | None
Safety Culture | Errors due to inadequate organisational emphasis on safety (6.5-6.8 and 6.10) | Unlikely
Organisational Learning | Errors due to inadequate organisational learning (6.7-6.9 and 6.12) | Possible | Recurrent failures; experience lost as personnel move from project to project, etc. | (MERE’s entire rationale is to address such problems and sources of error.) | None
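The parenthesised references in the analysis column (1.1-1.3, 3.1-3.6, 5.1-5.9, and so on) index the PERE human-error taxonomy: as the table uses them, the 1.x entries accompany skill-based errors, 2.x rule-based, 3.x knowledge-based, 5.x group failures, 6.x organisational problems, and 7.x working-material problems. A hypothetical helper, sketched below purely as an illustration of that convention (it is not part of PERE), resolves a reference to its category:

```python
# Mapping of major reference numbers to error categories, inferred from how
# the references are used alongside category names in the PHT above.
ERROR_CATEGORIES = {
    1: "individual, skill-based errors",
    2: "individual, rule-based errors",
    3: "individual, knowledge-based errors",
    5: "group failures and errors",
    6: "organisational context problems",
    7: "connections and working materials problems",
}

def categorise(reference: str) -> str:
    """Map a reference such as '5.3' to its taxonomy category."""
    major = int(reference.split(".")[0])
    return ERROR_CATEGORIES.get(major, "unknown category")

print(categorise("5.3"))
```

A lookup of this kind could help an analyst cross-check that each PHT row's cited references are consistent with the error type named in its analysis cell.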