Fig 2 - uploaded by Peter Johnson
Airbus A320 Flight Control Unit (FCU) 


Source publication
Conference Paper
Full-text available
In this paper we propose an account of human/computer awareness for use in the (re)design of complex human/computer interaction, before empirically testing its utility. Specifically, having situated our work in the wider field of human/computer awareness research, we address the well-reported phenomenon of "situation awareness" breakdowns in the av...

Contexts in source publication

Context 1
... half major changes are foreseen in order to cope with increasing demands for civilian air travel. These envisaged changes fall into three major types: provision of automated tools to help the controller deal with increased traffic density and complexity; changes to airspace structures and procedures; and at least a partial delegation of responsibility for separating aircraft from the controllers to the cockpit. During these changes it is imperative that safety is not compromised and, if possible, is actually improved against a steady rise in traffic movements. The changes are seen broadly in two time-horizons: between now and 2012, and then between 2012 and 2020. The first set of changes may be introduced as a series of smaller steps, but the visions for 2020 are more radical and represent more of a paradigm shift for ATM. The challenge for safety management is therefore how to assure the transition from current, relatively safe operations to these future visions. There are a number of aspects of the proposed changes that make this challenge a complex one: 
• There is as yet no clear vision of how the various controller tools will work together, nor how exactly they will fit with airspace structure and procedural changes. There is thus uncertainty about the technical and operational vision for 2012, and more so for 2020; 
• So far, safety case work has proceeded on a sub-system basis – since the operational vision is not detailed enough to specify interactions between systems. It is therefore not clear whether the safety afforded by each individual sub-system, when added together, will be enough to maintain the target level of safety for European ATM; 
• It is also not clear whether unplanned interactions between sub-systems may negate anticipated safety advantages, nor whether potential ‘safety synergies’ could exist between different sub-elements but go unrealised during the design and transition process. There is thus inherent complexity in the safety management objective of having to deal with many elements that can interact in planned and unplanned ways, and whose effects can be positive or negative; both effects are of interest to safety management, but traditionally only negative interactions are identified; 
• The implementation of the various tools and other changes will vary between different European States. There are likely to be many different permutations for ‘local’ national reasons, but the sequence and timing of implementation could have safety implications, particularly when viewed against the background rise in traffic. Safety management must therefore be able to deal with many permutations; 
• Safety management will initially rely on predictions of the safety afforded by the various changes, but the actual impact could be measured over the complete transition period from roughly 2007 to 2020. This would allow proper calibration of the safety measures and predictions, and would warn us if predictions were optimistic (i.e. if the risk of an accident were increasing). There is thus a certain amount of ‘dynamicity’ (even if relatively slow in usual terms) in the safety management process; and 
• The changes may affect some of the less technical, more human aspects that make current ATM so safe – the ‘culture of safety’ may be affected. Safety management must therefore address such aspects, which are relatively little understood and which tend to act in indirect rather than direct ways on safety performance. 
The above bullet points show that the challenge for safety management of future ATM is indeed one of coping with complexity, since the task has all the hallmarks of complex problems: uncertainty; many interacting elements with poorly specified inter-relationships; opportunities for unplanned interactions; many permutations; dynamicity (non-stability); and lack of basic theory (on why ATM is an HRO and ‘resilient’ in safety terms). The challenge for ATM Safety Management is therefore a significant one, and one with relatively little time to deliver appropriate methodologies to overcome the difficulties raised above. This challenge is being at least partly met by key Research and Development activities in the area of safety management and method development. Two particular studies that have just begun in earnest are explored in the remainder of this paper: the first relates to the development of an approach for a more integrated or holistic risk assessment process in ATM, and the second is a related approach to explore unplanned interactions as a source both of risk and of additional safety. The objective of the work is to develop an integrated risk picture (IRP) for ATM in Europe, showing the relative safety priorities in the gate-to-gate (G2G) ATM cycle. As defined by the International Civil Aviation Organisation (ICAO), G2G “is considered to start at the moment the user first interacts with ATM and ends with the switch-off of the engines”. The various phases are illustrated in Figure 2. Air traffic management (ATM) is defined by ICAO as “The aggregation of the airborne functions and ground-based functions (air traffic services, airspace management and air traffic flow management) required to ensure the safe and efficient movement of aircraft during all phases of operation”. The present work will cover the entire ATM service, i.e. everything ATM supplies to the pilot. 
This includes: 
• Strategic conflict management; 
• Airspace organization and management; 
• Demand and capacity balancing; 
• Traffic synchronization; 
• Tactical conflict management; 
• Tactical separation provision (preventive); 
• Collision avoidance (recovery); and 
• Information services (AIS, MET, etc.). 
The work will cover all ATM systems, i.e. everything that contributes to the safe movement of air traffic, including ground-based and air-based communications, navigation and surveillance (CNS) equipment, and ATC-related equipment on the aircraft. The purpose of developing the IRP is to show the relative safety priorities in the gate-to-gate ATM cycle. To do this, it must be capable of showing: 
• The overall contribution of ATM to aviation risk, i.e. the reduction in accident risk that would result if ATM were somehow perfect; 
• The relative importance of different accident types and the causal factors underlying the ATM contribution to risk; 
• The contribution of ATM in both causing and preventing aviation accidents. This will show areas where risk reduction would be desirable in principle; and 
• The relative importance of the different phases of the G2G cycle, i.e. the effects of strategic versus tactical conflict management. 
It will be able to combine all the above contributions and influences in consistent units, in order to make clear comparisons between them. This is necessary to show the benefits that could be achieved by improvements to the various safety defences that control the ATM contribution to risk. As defined above, an “integrated” risk picture is primarily one that shows the sum of the effects of all causal factors within the G2G cycle. However, this adding up is not a simple exercise, because safety gains in one area may be accompanied by losses in another. 
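The non-additive combination described above can be sketched, under heavily simplified assumptions: the defence names, numbers and structure below are hypothetical illustrations, not the actual IRP model, which the paper only outlines.

```python
# Illustrative sketch (hypothetical names and values, not the actual IRP model):
# each defence contributes a direct risk reduction, but signed cross-boundary
# effects between defences (e.g. induced passivity, common-cause failures) can
# claw risk back, so contributions cannot simply be summed independently.

direct_reduction = {        # risk reduction per defence, in consistent units
    "STCA": 0.30,           # short-term conflict alert (ground safety net)
    "TCAS": 0.25,           # airborne collision avoidance
}

cross_boundary = {          # signed interaction penalties between defences
    ("STCA", "TCAS"): 0.05, # e.g. a common-cause / induced-passivity effect
}

def net_reduction(direct, interactions):
    """Total risk reduction after subtracting cross-boundary penalties."""
    total = sum(direct.values())
    for penalty in interactions.values():
        total -= penalty
    return total

print(round(net_reduction(direct_reduction, cross_boundary), 2))  # → 0.5
```

The point of the sketch is only that the aggregate (0.5) is less than the naive sum of the parts (0.55) once an unplanned interaction is accounted for, which is exactly the kind of cross-boundary hazard the IRP is meant to surface.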
For example, a new safety device may provide extra redundancy in warning about imminent accidents, but it may induce passivity in the pilots or controllers, or an unrecognised potential for common-cause failures, which may offset or even reverse the expected safety benefits. The IRP will therefore take account not only of the direct benefits of a defence but also of possible indirect effects on all other safety defences. These effects are described as “cross-boundary hazards” (see below). The initial part of the work will cover ATM as it is in 2004. The picture for 2004 will be the baseline against which the safety significance of the changes proposed for the 2012 timescales will be measured. Consequently, the risk model for 2004 will be quantified and calibrated using incident and accident data from UK NATS (National Air Traffic Services), ASRS (NASA's Aviation Safety Reporting System), the Flight Safety Foundation (FSF) and a Eurocontrol project called SAFLEARN, as well as the Functional Hazard Assessment (FHA) for Maastricht UAC (Eurocontrol Maastricht UAC provides air traffic control services in the upper airspace over Belgium, Luxembourg, the Netherlands and part of Germany). The work will make use of two distinct models (see Figure 3): 
• An ATM model, representing the operational concept for commercial aviation, i.e. the way in which different actors and systems (notably ATM safety nets) in the aviation industry work together in normal operations. This will in particular emphasise the flow of safety-critical information and instructions between actors and systems; and 
• A risk model, representing the way in which different causal factors (human, procedural and equipment failures, including failures of safety nets) combine to result in aviation accidents. Its output will be the required risk picture. 
The risk model will implement the integrative modelling approach, and will also provide the primary means for presenting the risk results and the assumptions on which they depend. As mentioned earlier, ATM, and aviation in general, is made up of a multiplicity of overlapping and mutually supporting defences. These successive layers of protection have made ATM and aviation proof against single and isolated failures, be they human-related, procedural or technical. (In the SAFLEARN project, incidents are being analysed to learn lessons for future tools such as conflict resolution advisors, datalink, etc.) However, rare windows of opportunity for accidents and incidents to develop are created when the postulate of mutual independence between defences is violated (e.g. a potential dependency between Air Traffic Control (ATC) making an initial error and failing to detect/resolve a Controlled Flight Towards Terrain (CFTT) correctly). Consequently, identifying hazards ...
Context 2
... in terms of a runner taking a step backwards before running (forwards) down a path. In this third condition (C3), we believed that the semantic relevance of the signal to the underlying autopilot activity would reduce the cognitive load on the viewer by reducing the amount of mental work required to map from the incoming signal to its underlying meaning – a notion explored by Johnson, Johnson & Hamilton [11] in the field of task performance, but often overlooked in HCA support. We believed that this reduction in cognitive load would lead to the higher levels of awareness being achieved with greater regularity and, ultimately, reduce the number of breakdowns observed. We reasoned, however, that the utility of our two warning signals could vary both with the perceptual strength of the signal used (i.e. its size, range of movement, brightness, etc.) and with its semantic relevance. In order to avoid a potential confound, therefore, we made sure that the warning signal involved in C3 contained an icon of similar size and brightness to that in C2 but actually involved a smaller range of motion. We could, therefore, argue that the warning signal in condition C2 was of high signal strength and low semantic relevance, whilst the one in C3 offered the reverse. If we were to gain support for our model, then, we would need to show that the support provided in each condition (i.e. the support targeted at different levels of awareness) would produce tangibly different results, ultimately affecting the extent to which people saw, attended to and/or understood the developing path of the flight. With this in mind, our first hypothesis was that the provision of any warning signal (i.e. any attempt to draw our participants’ attention towards the interventions) would increase the likelihood that these interventions would be reported. 
More clearly stated, then, this first hypothesis (H1) became: H1: An explicit signal indicating autopilot activity would increase the number of reported observations that such activity had occurred, i.e. a significantly greater number of such reports will occur where a warning signal is given (C2, C3) than in the condition where it is not (C1). Beyond this, however, we were able to make predictions about the likelihood of our participants moving from simply noticing that something had occurred to understanding what it was. Again, we phrased this second hypothesis (H2) more clearly in terms of our experiment: H2: The inclusion of a specific semantic link between the warning signal given and the underlying autopilot activity will increase the participants’ understanding (cognitively processed awareness) of such activity, leading to a significantly higher rate of correction of those undesirable interventions reported, i.e. the ratio of interventions corrected to interventions reported will be significantly higher in the condition where explicit semantic support is included (C3) than in those where it is not (C1, C2). We also included a third, weaker hypothesis that would hold only if the signal strength in C2 and C3 turned out to be similar, rather than skewed in the way that we had intended. In this case, if no significant difference existed in the number of interventions reported, we would expect to see not only a rise in the ratio of corrections to observations, but also a significantly higher number of corrections per intervention in C3 (vs C2). In other words, if the only important difference between the characteristics of two interaction designs is that one better supports the cognitive processing of information which is available, perceived and attended to, then that design will result in a higher incidence of true awareness (understanding) than its competitors. 
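The quantity at the heart of H2 and H3 is simply the ratio of corrections to reports per condition. A minimal sketch of that comparison follows; the counts are invented for illustration and are not the study's data, and a real analysis would also require a significance test, which the sketch omits.

```python
# Illustrative sketch of the H2/H3 comparison (invented counts, not the study's
# data). For each condition we track (reported, corrected) intervention totals
# and compute the correction ratio that H2 predicts will be highest in C3.

def correction_ratio(reported, corrected):
    """Ratio of corrected interventions to reported ones (0.0 if none reported)."""
    return corrected / reported if reported else 0.0

# Hypothetical per-condition totals: C1 = no warning signal, C2 = strong signal
# with low semantic relevance, C3 = semantically relevant signal.
conditions = {
    "C1": (12, 3),    # (interventions reported, interventions corrected)
    "C2": (30, 9),
    "C3": (29, 20),
}

for name, (reported, corrected) in conditions.items():
    print(name, reported, round(correction_ratio(reported, corrected), 2))
```

With numbers like these, H2 would be supported by the higher ratio in C3; H3 would apply only because the reported counts in C2 and C3 are similar, in which case the higher absolute number of corrections in C3 also matters.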
In terms of our experiment, this third hypothesis (H3) could be phrased as follows: H3: If the number of reported interventions is similar in the two conditions involving warning signals (C2, C3), then the absolute number of corrections observed in C3 will be significantly higher than in C2. With these hypotheses in mind, we asked thirty postgraduate students to participate in our between-subjects experiment (separated into three groups of ten, one for each of our three conditions). Clearly, the use of non-professional participants reduced the ecological validity of our experiment, but the resource of commercial pilots’ time is extremely limited and we felt that our interface-literate replacements would be sufficient for this initial empirical study. Having recruited our participants, we set up a simple working simulation of the panel and displays in question on a Pentium-4 PC with a 19” screen. We then programmed our control and extended interfaces in the Java programming language, relying heavily on the Swing graphical interface packages to produce the simulated interfaces described below. First, we constructed an input interface, a faithful replica of the Flight Control Unit (FCU) used in the A320. The FCU, shown in figure 2, consists of four dials, six buttons and three switches. Most importantly in the context of this experiment, the dials allow targets for speed (SPD), lateral heading (HDG), altitude (ALT) and vertical speed (VS) to be given to the autopilot. For those interested in a more complete description of the FCU or of the other A320 panels described here, one can be found in our previous work on the subject [15] or in the official accident report of the Strasbourg crash ...
Context 3
... displays by clustering icon sets into meaningful groupings also helps users access meaning. This is readily apparent to anyone using computer packages where icons with related functions are clustered in the same location. Recent research has shown that icon clustering not only allows us to search displays more effectively (Niemela & Saarinen, 2000), but also allows us to learn icon-function relationships more quickly (Richards & McDougall, 1999). This is because clustering allows users to make inferences about categories of icon-function relationships and this, in turn, aids the interpretation of individual icons within the clusters. Clustering therefore appears to be a very useful design tool, capable of delivering a number of benefits including workload reduction and enhanced visual search and comprehension. The relationship between perception and interpretation is also apparent when visual conventions are used to convey meaning. For example, colour is commonly used to communicate meaning, such as red for danger and green for go, and these colour conventions appear to have reasonable cross-cultural transferability (cf. Courtney, 1986). Shapes are also used to convey conventional meanings as a new visual language using icons is created: two arrows pointing left are now commonly understood as ‘fast forward’; ‘contrast’ is indicated by a circle which is half white and half black; and hazard is indicated by a combination of shape (triangle) and colour (red) conventions. To summarise, user performance can be enhanced by ensuring that display design allows users to access meaning at the same time as carrying out visual search. It is clear that there is considerable scope for expanding the ways in which meaning can be conveyed implicitly through visual search, and visual conventions likewise convey meaning via perception. 
For these reasons, the interplay between interpretation and perception is likely to become increasingly important in future display design. This section evaluates the role of cognitive icon characteristics in icon interpretation, including icon concreteness, semantic distance (the closeness of the icon-function relationship), and familiarity (see Figure 1). Research in the area has tended to examine the effects of each characteristic in isolation. In order to arrive at a better understanding of the role of cognitive icon characteristics in practice, the relative importance of these characteristics is evaluated, along with how they behave under high workload conditions. The icon characteristic which has received most attention to date is icon concreteness (i.e. the degree of pictorial resemblance that an icon bears to its real-world counterpart). The consensus is that concrete icons are easy to interpret because they allow users to apply their everyday knowledge about the objects depicted in order to make inferences about icon functions (Moyes & Jordan, 1993). Abstract icons, which have less obvious connections with the real world (since they consist mainly of shapes, arrows and lines), are therefore more difficult to interpret. By this logic, concrete icons should be the best to use on graphical user interfaces because they seem more ‘natural’ to use and require less explicit learning, given what we already know about everyday objects. However, studies which have sought to find out more about the role of icon concreteness can be difficult to interpret. This is because, until recently, no measures of concreteness were available and researchers had to rely on their own intuitions in order to decide whether an icon was concrete or not. When creating concrete icon sets, researchers tended to add more detail to pictorial icons, which meant the icons were also more visually complex (e.g. Arend et al, 1987; Stammers & Hoffman, 1991). 
This made it difficult to decide whether concrete icons were easier to use because they were more pictorial or because the greater detail in the icons made them easier to understand. The respective roles of icon concreteness and visual complexity in determining usability were resolved in later research. Icon complexity determines how quickly users can search displays for appropriate icons. Because simple icons can be distinguished on the basis of a few visual features, graphical interfaces containing simple icons are searched much more quickly than those containing complex icons (Byrne, 1993). The use of concrete icons in displays, however, is important in determining how accurately novices interpret icons, although differences in accuracy disappear as users gain experience (McDougall, Curry & de Bruijn, 2000). As can be seen from Figure 2, ease of interpretation does not depend on visual complexity or the level of detail in the ...

Similar publications

Article
Current solutions for autonomy plan creation, monitoring and modification are complex, resulting in loss of flexibility and safety. The size of ground control operations and the number of aviation accidents involving automation are clear indicators of this problem. The solution is for humans to naturally collaborate with autonomous systems. Visual...

Citations

... However, the displays in the cockpit of an aircraft can be quite complex and have to function in a harsh visual environment that may strongly affect the quality of the displayed information. Numerous reports and studies clarify specific fields of research such as situation awareness [4], tactile sensation [10], color patterns [3] and so forth. A major drawback of existing solutions is the lack of operational feedback regarding human performance connected with audio, visual and haptic cues in highly interactive environments such as the aircraft cockpit. ...
Article
In this paper, we present an approach to the design of command tables in aircraft cockpits. To date, there is no common standard for designing this kind of command table. Command tables impose a high load on the human visual senses by displaying flight information such as altitude, attitude, vertical speed, airspeed, heading and engine power. Heavy visual workload and physical conditions significantly influence the cognitive processes of an operator in an aircraft cockpit. The proposed solution formalizes the design process by describing instruments in terms of the estimated effects they produce on flight operators. In this way, we can predict the effects and constraints of a particular type of flight instrument and avoid unexpected effects early in the design process.
... For example, one system allows participants to monitor each other's tasks in a collaborative activity [15]. Hourizi & Johnson [12] have shown that this awareness can be significantly enhanced when information about future actions, intentions and implications is provided. In addition, they have proposed and applied a framework to aid the design of technology to support this. ...
... Hourizi and Johnson's awareness model predicts that for information in an environment to generate awareness it must be subject to certain cognitive processes: (1) available, (2) perceived, (3) attended to, and (4) evaluated for implications [12]. Applying this general model of awareness leads to the identification of challenges to improving intentionality awareness. ...
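The four-stage model cited here can be read as a pipeline in which a cue must pass each stage in order, and awareness breaks down at the first stage it fails. A minimal sketch follows; the field names are hypothetical and the code is not from the cited work.

```python
# Illustrative sketch (hypothetical field names, not code from the cited work):
# the Hourizi & Johnson stages treated as an ordered pipeline of filters.

def awareness_level(cue):
    """Return how far a cue progresses through the four cognitive stages:
    0 = not available, 1 = available, 2 = perceived, 3 = attended to,
    4 = evaluated for implications (i.e. full awareness)."""
    stages = [
        ("available", cue.get("displayed", False)),
        ("perceived", cue.get("seen", False)),
        ("attended", cue.get("noticed", False)),
        ("evaluated", cue.get("understood", False)),
    ]
    level = 0
    for _, passed in stages:
        if not passed:
            break      # a breakdown at any stage blocks all later stages
        level += 1
    return level

# A warning signal that is displayed and seen but never consciously noticed
# stalls at level 2 -- a breakdown despite the information being "available".
cue = {"displayed": True, "seen": True, "noticed": False, "understood": False}
print(awareness_level(cue))  # → 2
```

The ordering is the essential point: support aimed at a later stage (e.g. semantic relevance, which helps evaluation) cannot compensate for a failure at an earlier one (e.g. a signal that is never perceived).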
Article
Understanding intentionality is a necessary feature of joint activities involving interdependent agents. This challenge has increased alongside the deployment of autonomous systems that are to some degree unsupervised. This research aims to reduce the number of intentionality recognition breakdowns between people and autonomous systems by designing systems to support the awareness of information cues used in those decisions. The paper outlines theoretical foundations for this approach using simulation theory and process models of intention. The notion of breakdowns is then applied to mistaken intentions in a diary study to gain insight into the phenomena.
... Awareness relies on fundamental (and intentionally used) capabilities and skills. Of the different types of awareness that have been studied [1, 5, 13, 16, 18], workspace awareness [9, 10] is the most closely related to our study setting. It refers to the collection of up-to-the-moment knowledge about collaborators' interactions within a shared workspace [9] rather than just about the workspace itself. ...
... With clearly defined tasks, it is possible to verify participants' answers to questions about what the others did in the task [18]. Situation awareness can also be assessed by using simulations of defined tasks with standard procedures to determine whether participants take notice of problems and react as required [16]. Perceived effort often differs from actual performance and observed behavior [cf. ...
Conference Paper
Multi-touch surfaces are becoming increasingly popular. An assumed benefit is that they can facilitate collaborative interactions in co-located groups. In particular, being able to see another's physical actions can enhance awareness, which in turn can support fluid interaction and coordination. However, there is a paucity of empirical evidence or measures to support these claims. We present an analysis of different aspects of awareness in an empirical study that compared two kinds of input: multi-touch and multiple mice. For our analysis, a set of awareness indices was derived from the CSCW and HCI literatures, which measures both the presence and absence of awareness in co-located settings. Our findings indicate higher levels of awareness for the multi-touch condition accompanied by significantly more actions that interfere with each other. A subsequent qualitative analysis shows that the interactions in this condition were more fluid and that interference was quickly resolved. We suggest that it is more important that resources are available to negotiate interference rather than necessarily to attempt to prevent it.
... This could avoid unexpected failure of the machines in a real industry scenario. In [13], the author describes a similar situation with an experiment on the primary flight display of an Airbus A320. He notes that it is very important to let the pilot know, through a signal in the flight display, that an error occurred while in autopilot mode, and equally important that the pilot understand the meaning of this signal clearly and quickly, so that he can immediately try to correct it. ...
... It would require more attention from either the coach or the operator to watch the display to stay updated. A similar kind of problem is discussed in [13], where the pilot needs to be made aware of the actions of an autopilot. Beyond the problem of making the user aware of the system, however, the situation in TELL is more complicated. ...
Thesis
Less than twenty years after Mark Weiser coined the term “Ubiquitous Computing,” dozens of visions and hundreds of development projects aim at building so-called “smart,” prototypical high-tech environments, such as smart homes, smart classrooms, and even future smart factories. But even nowadays, millions of people already live and work in substantially technologically augmented environments. Especially in production environments, workers have to cope with vast arrays of technical devices, which were usually not developed within one holistic, user-centred process. Rather, devices from different providers are combined, which also results in a plethora of vendor-specific, proprietary user interfaces. Obviously, this poses quite a challenge to workers, who need to learn to use all these different devices, user interfaces, and interaction paradigms. This leads to long training periods and increases the risk of operating errors and human failure. Several approaches to a more user-friendly development and design of technical devices have been defined, such as the Useware Development Process pursued by the Center for Human-Machine-Interaction (ZMMI), but they are all restricted to supporting the development of single devices or device families, at best. There is no user-centred approach allowing for the specification and formalization of a technologically equipped environment in its entirety. Building upon the Useware Markup Language (useML) employed by the ZMMI for its Useware Development Process, this thesis therefore suggests an enhanced, room-based use model which allows for the formal specification of whole environments with all the technical devices they comprise. This joint model can represent room-based, spatial structures, the technical devices and device compounds contained therein, multiple interaction zones per device, and task-based use models for devices and device types. 
It thereby enables a Useware Engineer to model all selected users’ or user groups’ interactions with technical devices within such an environment in a comparatively easy, consistent, and comfortable way. While the need for such a model is pointed out in chapter 2, key technologies and related works taken into consideration during the development of the room-based use model are highlighted in chapter 3. Subsequently, in chapter 5, the room-based use model scheme (version 3.0 of the Useware Markup Language, useML) is itemized and explained in detail. Since it has already been employed within the scope of a research project led by the author of this thesis, the evaluation of the proposed approach is conducted in the form of a proof of concept which relies on demonstration software for the SmartFactoryKL; the results are discussed in chapter 6 and constitute the basis of the subsequent assessment of the model. Finally, an outlook and a summary conclude this thesis.
Conference Paper
People in complex scenarios face the challenge of understanding, through intent recognition, the purpose and effect of other human and computational behaviour on their own goals. They are left asking: what caused person or system ‘x’ to do that? The necessity to provide this support in human-computer interaction has increased alongside the deployment of autonomous systems that are to some degree unsupervised. This paper aims to examine intent recognition as a form of decision making about causality in complex systems. By finding the needs and limitations of this decision mechanism, it is hoped that this can be applied to the design of systems that support the awareness of information cues and reduce the number of intent recognition breakdowns between people and autonomous systems. The paper outlines theoretical foundations for this approach using simulation theory and process models of intention. The notion of breakdowns is then applied to intent recognition breakdowns in a diary study to gain insight into the phenomena.
Conference Paper
What is the point of modelling anything – why don't we simply get on with the job of designing and building interactive systems using our intuitions, creative genius, experience and knowledge? This is an argument often put forward in certain areas. We do use all of the above when we design and build systems, and we still get them wrong, often in serious ways that mostly only cost in terms of time or money. Models do not necessarily allow us to avoid the mistakes and pitfalls of design; a model is only an abstraction over detail, a representation of something that we believe is of relevance, interest and/or concern. We have to be creative, ingenious, experienced and knowledgeable in our development and use of models in design. Models do not exist for themselves, and people who engage in developing and using models are not doing so purely for the purpose of developing or perpetuating the use of a particular model. In my view, modelling is a way to further understanding, and as our understanding progresses so should the models we use. In this respect I want to consider how my understanding of interaction has progressed, and with it my attempts to model the aspects of interaction that I am trying to understand, so that I might use that understanding to improve interaction design. This means that, while task modelling has been and still is important, it is not a be-all and end-all.
Conference Paper
Work in paper and chemical factories includes controlling several processes and cooperating with several workers. This requires a great deal of awareness and information sharing. Breakdowns in information sharing can lead to low-quality production and unsafe work situations. During the last couple of years, different social media and Web 2.0 applications and services have become popular ways of sharing information in leisure environments. We created a prototype from a social media perspective to respond to the needs of information sharing in factories. Our electronic notice board prototype (El Nobo) uses a metaphor from process operators’ current work environment and is designed to meet the specific needs that occur in chemical factory process operators’ work. The prototype aims to introduce social-media-style working practices to process control work and to test the possibilities of informal cross-organizational information sharing in industrial settings.