COGNITIVE MODELING AND HUMAN-COMPUTER INTERACTION
Wayne D. Gray & Erik M. Altmann
Human Factors & Applied Cognition
Department of Psychology
George Mason University
Fairfax, VA 22030
Gray, W. D., & Altmann, E. M. (2001). Cognitive modeling and human-computer
interaction. In W. Karwowski (Ed.), International encyclopedia of ergonomics and
human factors (Vol. 1, pp. 387-391). New York: Taylor & Francis, Ltd.
The writing of this article was supported by grants from the Office of Naval Research (#N00014-95-1-0175) and the Air Force Office of Scientific Research (AFOSR #F49620-97-1-0353).
Correspondence concerning this article should be addressed to Wayne D. Gray, George Mason University, MSN 3F5, Fairfax, VA 22030; email: gray@gmu.edu.
“There is nothing so useful as a good theory” (Lewin, 1951).
“Nothing drives basic science better than a good applied problem” (Newell &
Card, 1985).
1 Introduction
The quotations from Lewin and from Newell and Card capture what motivates
those who apply cognitive modeling to human-computer interaction (HCI). Cognitive
modeling springs from cognitive science. It is both a research tool for theory building
and an engineering tool for applying theory. To the extent that the theories are
sound and powerful, cognitive modeling can aid HCI in the design and evaluation of
interface alternatives. To the extent that the problems posed by HCI are difficult to
model or cannot be modeled, HCI has served to pinpoint gaps or inconsistencies in
cognitive theory. In common with design, science is an iterative process. The
symbiotic relationship between modeling and HCI furthers the scientific enterprise of
cognitive science and the engineering enterprise of human factors.
Cognitive modeling is a form of task analysis and, as such, is congenial to many
areas and aspects of human factors. However, the control provided by the computer
environment, in which most dimensions of behavior can be easily and accurately
measured, has made HCI the modeler’s primary target. As modeling techniques
become more powerful and as computers become more ubiquitous, cognitive
modeling will spread into other areas of human factors.
We begin this article by discussing three cognitive models of HCI tasks, focusing
on what the models tell us about the tasks rather than on the details of the models
themselves. We next examine how these models, as well as cognitive models in
general, integrate constraints from the cognitive system, from the artifact that the
operator uses to do the task, and from the task itself. We then explore what sets
cognitive modeling apart from other types of cognitive task analysis and examine
dimensions on which cognitive models differ. We conclude with a brief summary.
2 Three Examples of Cognitive Modeling Applied to HCI
The three examples span the gamut of how models are used in HCI. We discuss them in order from most applied to most theoretical. However, it would be a mistake to frame them as application versus research, as each has contributed strongly to theory and each has clear applications to HCI issues.
2.1 Project Ernestine: CPM-GOMS
In the world of the telephone company, time is literally money. In the late 1980s, NYNEX calculated that if the length of each operator-assisted call decreased by 1 s, the company’s operating costs would be reduced by $3 million per year. Potential savings on this scale provided an incentive to shave seconds from the time that toll and assistance operators (TAOs) spent on operator-assisted calls.
A major telecommunications equipment manufacturer promised to do just that.
For an equipment investment of $60 to $80 million, the old TAO workstations could
be replaced by new, ergonomically engineered workstations. The manufacturer’s
back-of-the-envelope style calculations predicted that the new workstations would
shave about 4 s from the average call for an estimated savings of $12 million
annually.
Project Ernestine involved a combination of cognitive modeling and field trial to
compare the new workstations with the old (Gray, John, & Atwood, 1993; Gray,
John, Stuart, Lawrence, & Atwood, 1995). The cognitive models created in Project
Ernestine used the GOMS task analysis technique developed by Card, Moran, and
Newell (1983). GOMS analyzes a task in terms of Goals, simple Operators used by
the person performing the task, and sequences of operators that form Methods for
accomplishing a goal. If alternative methods exist for accomplishing a goal, then a
Selection rule is required to choose among them. GOMS is best suited to the
analysis of routine, skilled performance, as opposed to problem solving. The power
of GOMS derives in part from the fine-grain level of detail at which it specifies the
operators involved in such performance. (For a fuller exposition of GOMS, see John & Kieras, 1996a, 1996b.)
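
To make this vocabulary concrete, the sketch below renders a GOMS analysis as a small Python program. The goal, methods, operators, and millisecond durations are invented for illustration; they are not taken from any published GOMS model.

```python
# A minimal, hypothetical sketch of a GOMS analysis. All names and
# durations below are invented placeholders, not measured values.

from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Operator:
    name: str
    duration_ms: int              # time charged to this elementary step

@dataclass
class Method:
    name: str
    steps: List[Operator]         # operator sequence that accomplishes a goal

    def time_ms(self) -> int:
        # In a serial GOMS analysis, method time is the sum of its operators.
        return sum(op.duration_ms for op in self.steps)

@dataclass
class Goal:
    name: str
    methods: List[Method]
    select: Optional[Callable] = None   # Selection rule: picks a method in context

    def time_ms(self, context=None) -> int:
        method = self.select(self.methods, context) if self.select else self.methods[0]
        return method.time_ms()

# Two hypothetical methods for deleting a file, plus a selection rule.
by_key = Method("delete-by-keystroke",
                [Operator("home-to-keyboard", 400), Operator("press-delete", 280)])
by_mouse = Method("drag-to-trash",
                  [Operator("home-to-mouse", 400), Operator("point-to-file", 1100),
                   Operator("drag-to-trash", 1300)])

delete_file = Goal("delete-file", [by_key, by_mouse],
                   select=lambda ms, ctx: by_key if ctx == "hands-on-keyboard" else by_mouse)

print(delete_file.time_ms("hands-on-keyboard"))   # 680 (ms)
```

A serial model of this kind simply sums operator times within the selected method; CPM-GOMS, discussed next, relaxes that seriality assumption.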
Project Ernestine employed a GOMS variant, CPM-GOMS, to analyze the TAO’s
task. CPM-GOMS specifies the parallelism and timing of elementary cognitive,
perceptual, and motor operators, using a schedule chart format that enables use of
the critical path method to analyze dependencies between these operators.
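
The heart of CPM-GOMS is the critical-path computation itself, sketched below over a toy dependency graph. The operator names and millisecond durations are hypothetical, chosen only to show why removing an operator that is off the critical path leaves total time unchanged.

```python
# A sketch of the critical-path computation underlying CPM-GOMS.
# Operators and durations are hypothetical; in CPM-GOMS each operator
# also lives on a perceptual, cognitive, or motor resource track.

def critical_path(durations, deps):
    """Return the longest path through a DAG of operators and its length.
    durations: {op: ms}. deps: {op: list of ops that must finish first}."""
    memo = {}

    def finish(op):
        # Earliest finish time: an operator starts when its last predecessor ends.
        if op not in memo:
            start = max((finish(d) for d in deps.get(op, [])), default=0)
            memo[op] = start + durations[op]
        return memo[op]

    for op in durations:
        finish(op)
    # Trace the path backward from the operator that finishes last.
    path, op = [], max(memo, key=memo.get)
    while op is not None:
        path.append(op)
        preds = deps.get(op, [])
        op = max(preds, key=memo.get) if preds else None
    return list(reversed(path)), max(memo.values())

durations = {"perceive-beep": 100, "attend-call-type": 50,
             "initiate-keystroke": 50, "press-F1": 270, "greet-customer": 1600}
deps = {"attend-call-type": ["perceive-beep"],
        "initiate-keystroke": ["attend-call-type"],
        "press-F1": ["initiate-keystroke"],
        "greet-customer": ["perceive-beep"]}

path, total_ms = critical_path(durations, deps)
print(path, total_ms)   # ['perceive-beep', 'greet-customer'] 1700
```

In this toy chart the keystroke chain finishes well before the customer greeting, so deleting press-F1 would not shorten the predicted call at all, whereas any step added to the critical path lengthens it. This is the logic behind the Project Ernestine result described next.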
Contrary to expectations, the cognitive models predicted that the new
workstations would add about 1 s to the average call. Rather than reducing costs as
predicted by the manufacturer, this increased time would result in $3 million in
additional operating costs. This prediction was borne out empirically by a 4-month
field study using live telephone traffic. A sample of the CPM-GOMS model for the
beginning part of one call type is shown in Figure 1.
Beyond its prediction, CPM-GOMS was able to provide explanation. The
manufacturer had shown that the proposed workstation reduced the number of
keystrokes required to process a typical call and from this inferred that the new
workstation would be faster. However, their analysis ignored the context of the call,
namely the interaction of customer, workstation, and TAO. CPM-GOMS captured
this context in the form of a critical path of cognitive, perceptual, and motor actions
required for a typical call. By filling in the missing context, CPM-GOMS showed that
the proposed workstation added more steps to the critical path than it eliminated.
This qualitative explanation made the model’s prediction credible to telephone
company executives.
INSERT FIGURE 1 ABOUT HERE
2.2 Postcompletion Error
An adequate theory of error “is one that enables us to forecast both the
conditions under which an error will occur, and the particular form that it will take”
(Reason, 1990, p. 4). Such a theory was developed by Byrne and Bovair (1997) for
a phenomenon that they named postcompletion error.
The tasks that people want to accomplish are usually distinct from the devices
used to accomplish them. For example, the task might be to withdraw cash from a
bank account and the device might be an automated teller machine (ATM). From the
perspective that task and device are distinct, any action that the device (the ATM)
requires us to perform after we complete our task (withdrawing cash) is a
postcompletion action. An omitted postcompletion action is thus a postcompletion
error. Postcompletion errors include leaving the card in the ATM after taking the
money; leaving the originals on the photocopier after taking the copies; and
forgetting to set the video cassette recorder (VCR) to record after programming it to
videotape a show.
The striking characteristic of postcompletion errors is that, although they occur,
they do not occur often. Most people, most of the time, take both the money and the
card from the ATM (otherwise, we suspect, many fewer of us would use ATMs). What, if
anything, predicts the occurrence of a postcompletion error?
Byrne and Bovair's postcompletion error model is based on the notion of
activation of memory elements. Activation is a hypothetical construct that quantifies
the strength or salience of information stored in memory. The postcompletion error
model was constructed using CAPS, a programmable model of the human cognitive
architecture (Just, Carpenter, & Keller, 1996). CAPS assumes that a memory
element is accessible only if it has enough activation. It also assumes that total
activation is limited. Activation flows from one memory element to another if the two
are related and if one is the focus of attention. This spreading activation accounts for
standard psychological effects like semantic priming, in which, for example, focusing
on the notion of “doctor” might spread activation to related concepts like “nurse.”
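
The toy program below illustrates spreading activation. The network, weights, and threshold are invented; CAPS itself is a production-system architecture, and this sketch conveys only the flavor of its activation dynamics.

```python
# A toy illustration of spreading activation. All numbers are invented.

links = {"doctor": {"nurse": 0.5, "hospital": 0.3}}   # relatedness weights
activation = {"doctor": 1.0, "nurse": 0.1, "hospital": 0.1}
THRESHOLD = 0.3   # elements below this activation are inaccessible

def focus_on(element, rate=0.4):
    """Spread a fraction of the focused element's activation to related elements."""
    for neighbor, weight in links.get(element, {}).items():
        activation[neighbor] += rate * weight * activation[element]

print(activation["nurse"] >= THRESHOLD)   # False: "nurse" starts inaccessible
focus_on("doctor")                        # attending to "doctor" primes "nurse"
print(activation["nurse"] >= THRESHOLD)   # True: priming made it accessible
```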
In Byrne and Bovair’s error model, as long as the focus is on a task goal like
getting money, related device actions like take the card continue to receive
activation. However, when a task goal is accomplished, attention shifts away from it.
When this shift occurs, the device actions associated with the task begin to lose
activation. This is fine for actions like take the money, which are necessarily
complete, but problematic for postcompletion actions like take the card. If these
postcompletion actions lose enough activation, they will simply be forgotten.
Beyond its explanation, the postcompletion error model offered a prediction. Like
most memory theories, CAPS assumes that unused memory elements decay over
time; that is, their activation decreases. Because activation in CAPS is a common
resource, decay of one memory element makes more activation available for other
elements. Consistent with this, Byrne and Bovair found fewer postcompletion errors in
a condition that included a prolonged tracking task. Apparently the tracking task,
which involved no memory load itself, allowed completed actions of the main task to
decay. Postcompletion actions continued to receive activation because the task goal
was not yet accomplished. In addition, they received the activation lost by the
actions that decayed. This additional activation reduced postcompletion error.
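
A minimal simulation of these dynamics appears below. The decay rule, parameter values, and goal names are invented assumptions; this is not Byrne and Bovair's CAPS model or its parameters.

```python
# A sketch of the postcompletion-error dynamics described above.

THRESHOLD = 0.25   # elements that fall below this activation are forgotten

def step(goals, focused, decay=0.5):
    """One cycle: unfocused elements lose activation. Because total
    activation is a limited common resource, the freed activation is
    shared among the elements still receiving attention."""
    freed = 0.0
    for g in goals:
        if g not in focused:
            lost = goals[g] * decay
            goals[g] -= lost
            freed += lost
    for g in focused:
        goals[g] += freed / len(focused)
    return goals

goals = {"get-money": 0.4, "take-money": 0.3, "take-card": 0.3}

# Filler (tracking) task: the task goal is still pending, so it and the
# postcompletion action stay supported while the completed action decays
# and donates its activation to them.
step(goals, focused={"get-money", "take-card"})
print(round(goals["take-card"], 3))    # 0.375: decay elsewhere helped it

# Task goal accomplished: attention shifts away, nothing supports
# "take-card" any longer, and it decays below threshold.
step(goals, focused=set())
step(goals, focused=set())
print(goals["take-card"] < THRESHOLD)  # True: a postcompletion error waiting to happen
```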
As an example of applied theory, the postcompletion error model is important for
several reasons. First, its explanations and predictions flow from existing cognitive
theory, not from ad hoc assumptions made by the analysts. The model functioned
primarily as a means of instantiating a theory on a particular problem. Second, the
prediction comes from the model not the modeler. Any analyst could run the
postcompletion error model with the same outcome. The debate over this outcome is
then limited to and focused by the representational assumptions and parameter
settings reified in a running computer program.
2.3 Information Access
Information in the world is useful only if we can find it when we need it. For
example, an illustration in a book is helpful only if we know it exists, if we recall its
existence when it is needed, and if we can find it. This view of information adds a
cognitive dimension to research into information access (e.g., the HCI subareas of
information retrieval and interface design). How do we recall the existence of the
helpful illustration? What was stored in memory about the illustration, and exactly
what is being recalled? What were the cues that prompted the recollection? From
the cognitive perspective, the process of information access is complex. However,
with a better understanding of the role of memory, we can engineer memory aids
that support this process.
Altmann and John (in press) studied the behavior of a programmer making
changes to code that had been written over a series of years by a team of which the
programmer was a member. Verbal and action protocols (keypresses and scrolling)
were recorded throughout an 80-min session. During this session, the programmer
would trace the program for several steps, stop it, interrogate the current value of
relevant variables, and so on. Over the course of the session 2,482 lines of code
were generated and displayed on the programmer’s screen. On 26 occasions, she
scrolled back to view information that had appeared earlier but had scrolled off the
screen.
Of interest was the role of memory in these scrolling episodes. The volume of
potential scrolling targets was huge and the programmer's need for any particular
target was small. However, the protocol data revealed that scrolling was purposeful
rather than random, implying a specific memory triggered by a specific cue. These
constraints meant that the programmer's memory-encoding strategy must have been
both sweeping in its coverage of potential targets and economical in terms of
cognitive effort.
Altmann and John developed a computational cognitive model of episodic
indexing that simulated the programmer's behavior. The model was developed using
Soar, which, like CAPS, is a cognitive theory with a computational implementation
(Newell, 1990). Based on the chunking theory of learning, Soar encodes sweeping
amounts of information economically in memory, but retrieval of this information
depends on having the right cue. When the episodic-indexing model attends to a
displayed item (e.g., a program variable), Soar creates an episodic chunk in memory
that maps semantic information about the item to episodic information indicating that
the item was attended. A second encounter with the item triggers recall of the
episodic chunk, which in turn triggers an inference that the item exists in the
environment. Based on this inference, the model decides whether or not to pursue
the target by scrolling to it.
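
The sketch below caricatures episodic indexing as a dictionary keyed by attended items. The chunk format and decision rule are schematic assumptions for illustration, not Altmann and John's Soar implementation.

```python
# A schematic sketch of episodic indexing. The data structures and the
# decision rule are invented for illustration.

episodic_memory = {}   # semantic feature -> episodic trace ("I attended this")
timeline = []          # items that have since scrolled off the screen

def attend(item, time):
    """Attending to an item encodes a cheap episodic chunk as a by-product,
    with no intent to revisit the item later."""
    episodic_memory[item] = {"attended": True, "when": time}
    timeline.append(item)

def reencounter(item):
    """A second encounter is the retrieval cue: if an episodic chunk exists,
    infer that the item exists off-screen and consider scrolling to it."""
    chunk = episodic_memory.get(item)
    if chunk and chunk["attended"]:
        return f"scroll back toward position {timeline.index(item)}"
    return "no episodic chunk; search some other way"

attend("buffer-size", time=12)      # seen once while tracing the program
attend("parse-tree", time=47)
print(reencounter("buffer-size"))   # cue triggers recall -> purposeful scrolling
print(reencounter("free-list"))     # never attended -> no memory to cue
```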
The episodic indexing model suggests that memory depends on attention, not
intent. That is, episodic chunks are stored in memory as a by-product of attending to
an object, with no need for any specific intent to revisit that object later. The
implication is that people store vast amounts of information about their environment
that they would recall given the right cues. This, in turn, suggests that activities like
browsing are potentially much better investments than we might have thought. The
key to unlocking this potential is to analyze the semantic structure of the knowledge
being browsed and to ask how artifacts might help produce good cues later when
the browsed information would be relevant.
3 The Cognition-Artifact-Task Triad
Almost everything we do requires using some sort of artifact to accomplish some
sort of task. As Figure 2 illustrates, the interactive behavior for any given artifact-task
combination arises from the limits, mutual constraints, and interactions between and
among each member of the Cognition-Artifact-Task triad. Cognitive modeling
requires that each of these three factors be incorporated into each model.
INSERT FIGURE 2 ABOUT HERE
Traditional methodologies generally consider cognition, artifact, and task pairwise rather than all together. For example, psychological research typically seeks experimental control by using simple tasks that require little external support, thereby focusing on cognition and task but minimizing the role of artifact. Industrial
human-factors research often takes the artifact itself to be the task, largely ignoring
the artifact’s purpose. For example, the proposed TAO workstation had an
ergonomically designed keyboard and display but ignored the TAO’s task of
interacting with the customer to complete a call. Finally, engineering and computer
science focus on developing artifacts, often in response to tasks, but generally not in
response to cognitive concerns. The price of ignoring any one of cognition, artifact,
and task is that the resulting interactive behavior may be effortful, error-prone, or
even impossible.
In contrast, cognitive modeling as a methodology is bound to consider cognition,
artifact, and task as inter-related components. The primary measure of cognition is
behavior, so analysis of cognition always occurs in the context of a task. Moreover,
analyzing knowledge in enough detail to represent it in a model requires attention to
where this knowledge resides -- in the head or in artifacts in the world -- and how its
transmission between head and world is constrained by human perceptual/motor
capabilities. Indeed, computational theories of cognition are now committed to
realistic interaction with realistic artifacts (see, for example, Anderson, Matessa, &
Lebiere, 1997; Howes & Young, 1997; Kieras & Meyer, 1997). Thus, given that
human factors must consider cognition, artifact, and task together, cognitive
modeling is an appropriate methodology.
4 Cognitive Modeling vs. Cognitive Task Analysis
Cognitive task analysis, broadly defined, specifies the cognitive steps (at some
grain size) required to perform a task using an artifact. Cognitive modeling goes
beyond cognitive task analysis per se in that each step is grounded in cognitive
theory. In terms of the triad of Figure 2, this theory fills in the details of the cognition
component at a level appropriate to the task and the artifact.
In Project Ernestine, for example, the manufacturer’s predictions about the
proposed workstation came from a cognitive task analysis. However, this analysis
specified the cognitive steps involved in using the workstation as the manufacturer
saw them. The CPM-GOMS models, in contrast, took into account theoretical
constraints on cognitive parallelism and made predictions that were dramatically
more accurate. In the other two models the influence of theory is strong as well.
Almost any cognitive analysis would identify memory failure as the cause of
postcompletion error. However, the model based on CAPS went further, linking
decay of completed goals to improved memory for pending goals. Similarly, memory
is clearly a factor in information access, but the model based on Soar detailed the
underlying memory processes to highlight the potential of browsing and the
importance of effective cues.
Our discussions of cognitive theory have focused on GOMS (John & Kieras, 1996a, 1996b), ACT-R (Anderson & Lebiere, 1998), CAPS (Just et
al., 1996), EPIC (Kieras & Meyer, 1997), and Soar (Newell, 1990). These are broad
and integrated theories that deal with cognitive control (GOMS, ACT-R, and Soar),
learning and memory (ACT-R, CAPS, and Soar), and perception and action (ACT-R
and EPIC). There is also the class of connectionist or neural network models that
offers learning and memory functions that have been highly successful in accounting
for lower-level cognitive phenomena like visual attention (e.g., Mozer & Sitton,
1998). In sum, a broad range of theory is now available to elaborate the steps of a
cognitive task analysis and thus produce cognitively plausible models of interactive
behavior.
5 Dimensions of cognitive models
The models we have described are points in a much larger space. In general, a
model simply represents or stands for a natural system that for some reason we are
unable to study directly. Many psychological models, for example, are mathematical
functions, like memory-retention functions or regression equations. These make
accurate, quantitative predictions but are opaque qualitatively in that they provide no
analysis of what lies behind the behavior they describe.
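
A familiar example of such a function is the power law of forgetting, shown below in a generic textbook form (the article itself gives no specific equation):

```latex
% Retention as a power function of delay t: P(t) is the probability of
% recall, with a and b free parameters estimated from the data.
P(t) = a\,t^{-b}, \qquad a, b > 0
```

Such a function can fit retention curves closely while remaining silent about the encoding and retrieval processes that produce them.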
We have focused on models that characterize the cognitive processes involved
in interactive behavior. Process models can make quantitative predictions, like those
of the TAO model, but go beyond such predictions to specify with considerable
precision the cognitive steps involved in the behavior being analyzed. To give a
sense of the space of possibilities, we compare and contrast process models on two
dimensions: generative vs. descriptive, and generality vs. realism.
5.1 Generative versus Descriptive
Two of our sample models are generative and one is descriptive. The
postcompletion error and episodic indexing models actually generate behavior
simulating that of human subjects. Generative models are implemented as
executable computer programs (hence are often referred to as computational
cognitive models) that take the same inputs and generate the same outputs that
people do. The TAO model, in contrast, simply describes sequences of actions
rather than actually carrying them out.
Generative models have several advantages. These include, first, proof of
sufficiency. Running the model proves that the mechanisms and knowledge it
represents are sufficient to generate the target behavior. Given sufficiency,
evaluation can shift, for example, to whether the model’s knowledge and
mechanisms are cognitively plausible. A second benefit is the ability to inspect
intermediate states. To the extent that a model is cognitively plausible, its internal
states represent snapshots of what a human operator may be thinking. A third
benefit is reduced opportunity for human (analyst) error. Generative models run on a
computer, whereas descriptive models must be hand-simulated, increasing the
chance of error.
5.2 Generality versus Realism
Models vary in their concern with generality versus realism. Generality is the
extent to which a model offers theoretical implications that extend beyond the
model’s domain. Realism, in contrast, is the extent to which the modeled behavior
corresponds to the actual interactive behavior of a particular operator performing a
given task.
Project Ernestine showed high realism in that each model accounted for the
behavior of an entire unit task; that is, one phone call of a particular call category for
a particular workstation. These models were not general in that it would be difficult to
apply them to any task other than the one modeled. For example, they could not be
applied to model ATM performance or VCR programming. Indeed, the existing
models apply only to a particular set of call categories. If another call category were
to be modeled, another model would have to be built.
In contrast, the models of postcompletion error and episodic indexing lack
realism in that their accounts of behavior are incomplete. Byrne and Bovair’s model
cannot perform the entire task, and Altmann and John’s model cannot debug the
code. However, the implications of these models extend far beyond the tasks on which they are based. Situations involving postcompletion actions are susceptible to
postcompletion error. If postcompletion actions cannot be designed out of an
interface then special safeguards against postcompletion error must be designed in.
Likewise, episodic indexing suggests that human cognition reliably encodes a little
information about whatever it attends to. With the right cue, this information can be
retrieved. These hypotheses bear on any artifact-task combination in which memory
is an issue.
6 Summary
The space of cognitive process models, even within the space of models in
general, is quite large (see Gray, Young, & Kirschenbaum, 1997 for an alternative
cross-section). It used to be that developing process models required access to
specialized hardware and software that was available only at certain locations.
Fortunately, the technology of programmable cognitive theories has improved to the
point where computational models can be run and inspected over the Web (e.g.,
most of the models discussed in Anderson & Lebiere, 1998, are available on the Web). Access to such models enables the analyst to study working copies of
validated models and potentially to build on, rather than duplicate, the work of
others.
Cognitive modeling is the application of cognitive theory to applied problems.
Those problems serve to drive the development of cognitive theory. Some
applications of cognitive modeling are relatively pure, with little return to
theory. Of the three models we considered, the model of the TAO in Project
Ernestine (Gray et al., 1993) best fits this characterization.
In contrast, the episodic indexing model (Altmann & John, in press) was driven
by an applied question – how a programmer works on her system – but produced no
new tool or concrete evaluation. Instead, it proposed a theory of how people
maintain effective access to large amounts of information. This theory suggests a
class of design proposals in which the artifact plays the role of memory aid.
In the middle, the model of postcompletion error (Byrne & Bovair, 1997) used
existing theory to predict when an applied problem (error) was most likely to occur.
This middle ground, where theory meets problem, is where cognitive modeling will have its greatest effect – first on HCI, then on human factors.
7 References
Altmann, E. M., & John, B. E. (in press). Modeling episodic indexing of external
information. Cognitive Science.
Anderson, J. R., & Lebiere, C. (Eds.). (1998). The atomic components of thought. Hillsdale, NJ: Erlbaum.
Anderson, J. R., Matessa, M., & Lebiere, C. (1997). ACT-R: A theory of higher-
level cognition and its relation to visual attention. Human-Computer
Interaction, 12(4), 439-462.
Byrne, M. D., & Bovair, S. (1997). A working memory model of a common
procedural error. Cognitive Science, 21(1), 31-61.
Card, S. K., Moran, T. P., & Newell, A. (1983). The psychology of human-computer interaction. Hillsdale, NJ: Erlbaum.
Gray, W. D., John, B. E., & Atwood, M. E. (1993). Project Ernestine: Validating a
GOMS analysis for predicting and explaining real-world performance. Human-
Computer Interaction, 8(3), 237-309.
Gray, W. D., John, B. E., Stuart, R., Lawrence, D., & Atwood, M. E. (1995). GOMS
meets the phone company: Analytic modeling applied to real-world problems.
In R. M. Baecker, J. Grudin, W. A. S. Buxton, & S. Greenberg (Eds.), Readings
in human-computer interaction: Toward the year 2000 (2nd ed., pp. 634-639). San Francisco: Morgan Kaufmann.
Gray, W. D., Young, R. M., & Kirschenbaum, S. S. (1997). Introduction to this special issue on cognitive architectures and human-computer interaction. Human-Computer Interaction, 12(4), 301-309.
Howes, A., & Young, R. M. (1997). The role of cognitive architecture in
modelling the user: Soar's learning mechanism. Human-Computer Interaction,
12(4), 311-343.
John, B. E., & Kieras, D. E. (1996a). The GOMS family of user interface analysis
techniques: Comparison and contrast. ACM Transactions on Computer-
Human Interaction, 3(4), 320-351.
John, B. E., & Kieras, D. E. (1996b). Using GOMS for user interface design and
evaluation: Which technique? ACM Transactions on Computer-Human
Interaction, 3(4), 287-319.
Just, M. A., Carpenter, P. A., & Keller, T. A. (1996). The capacity theory of
comprehension: New frontiers of evidence and arguments. Psychological
Review, 103(4), 773-780.
Kieras, D. E., & Meyer, D. E. (1997). An overview of the EPIC architecture for
cognition and performance with application to human-computer interaction.
Human-Computer Interaction, 12(4), 391-438.
Lewin, K. (1951). Field theory in social science. New York: Harper & Row.
Mozer, M. C., & Sitton, M. (1998). Computational modeling of spatial attention.
In H. Pashler (Ed.), Attention (pp. 341-388). East Sussex, UK: Psychology
Press.
Newell, A. (1990). Unified theories of cognition. Cambridge, MA: Harvard
University Press.
Newell, A., & Card, S. K. (1985). The prospects for psychological science in
human-computer interaction. Human-Computer Interaction, 1(3), 209-242.
Reason, J. (1990). Human error. New York: Cambridge University Press.
8 Figures
Figure 1: Section of CPM-GOMS analysis for an operator-assisted call. The proposed workstation (bottom) has two fewer keystrokes, totaling 6 motor and 2 cognitive steps. However, deleting these steps did not alter the critical path (shown in bold).
Figure 2: The Cognition-Artifact-Task Triad. The behavior of an
operator using an artifact to perform a task arises from the limits,
mutual constraints, and interactions between and among each
member of the cognition-artifact-task triad.
[Figure 1 image: an excerpt of the CPM-GOMS schedule chart, with parallel tracks of perceptual, cognitive, and motor (right hand) operators for the current workstation (top) and proposed workstation (bottom). Operators include, for example, "Perceive - Operator, bill" (560 ms), "home from lap to F1" (540 ms), and keystroke down/up operators for the F1, F2, and 4 keys, connected by dependency links.]
[Figure 2 image: a triangle with Cognition, Artifact, and Task at its vertices and Interactive Behavior at its center.]
Article
This paper discusses the prospects of psychology playing a significant role in the progress of human-computer interaction. In any field, hard science (science that is mathematical or otherwise technical) has a tendency to drive out softer sciences, even if the softer sciences have important contributions to make. It is possible that, as computer science and artificial intelligence contributions to human-computer interaction mature, this could happen to psychology. It is suggested that this trend might be prevented by hardening the applicable psychological science. This approach, however, would be criticized on the grounds that the resulting body of knowledge would be too low level, too limited in scope, too late to affect computer technology, and too difficult to apply. The prospects for ovrcoming each of these obstacles are analyzed here.