A New Approach for Assessment of Mental Architecture:
Repeated Tagging
Aire Raidvee 1*, Agne Põlder 1, Jüri Allik 1,2

1 Department of Psychology and Estonian Center of Behavioral and Health Sciences, University of Tartu, Tartu, Estonia, 2 Estonian Academy of Sciences, Tallinn, Estonia
Abstract
A new approach to the study of a relatively neglected property of mental architecture—whether and when the already-
processed elements are separated from the to-be-processed elements—is proposed. The process of numerical proportion
discrimination between two sets of elements defined either by color or by orientation can be described as sampling with or
without replacement (characterized by binomial or hypergeometric probability distributions respectively) depending on the
possibility to tag an element once or repeatedly. All empirical psychometric functions were approximated by a theoretical
model showing that the ability to keep track of the already tagged elements is not an inflexible part of the mental
architecture but rather an individually variable strategy which also depends on conspicuity of perceptual attributes. Strong
evidence is provided that in a considerable number of trials, observers tagged the same element repeatedly which can only
be done serially at two separate time moments.
Citation: Raidvee A, Põlder A, Allik J (2012) A New Approach for Assessment of Mental Architecture: Repeated Tagging. PLoS ONE 7(1): e29667. doi:10.1371/journal.pone.0029667
Editor: Suliann Ben Hamed, CNRS - Université Claude Bernard Lyon 1, France
Received June 16, 2011; Accepted December 2, 2011; Published January 9, 2012
Copyright: © 2012 Raidvee et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits
unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: This study has been supported by the Estonian Science Foundation Grant 8231 and the Estonian Ministry of Science and Education Grant
SF0180029s08. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing Interests: The authors have declared that no competing interests exist.
* E-mail: aire.raidvee@ut.ee
Introduction
The way mental processes are organized—their architecture—
has been one of the main concerns for both psychologists and
neuroscientists [cf. 1]. The question of whether people perform
perceptual and mental operations in parallel or in series, has been
pivotal in many of these pursuits [2,3]. Overwhelmingly, the
debate about serial vs parallel processing has been concentrated
on reaction time data. In a seminal experiment, Sternberg [4]
demonstrated that when observers judge whether a test symbol is
contained in a short memorized sequence of symbols, their mean
reaction-time increases linearly with the length of the sequence.
The linearity and slope of the function were interpreted as strong
evidence in favor of an internal serial-comparison process whose
average rate is between 25 and 30 symbols per second. However,
as it was soon shown by a thorough theoretical analysis, the
distinction between serial and parallel processing is constrained by
model mimicking: parallel models can lead to exactly the same
predictions as serial ones despite the completely different
psychological assumptions they are based on [3,5].
One lesson that can be derived from the serial vs parallel
controversy is that it cannot be resolved in isolation from other
relevant attributes of the cognitive architecture. For example, it
became evident that questions about the stopping rule (the conditions under which the system ceases processing and generates a response) or about capacity limitations are
inevitably linked to the question about serial vs parallel
architecture [3]. Considering this lesson, it is surprising that even
though a number of studies exist on serial vs parallel processing in
the context of enumeration accuracy of independent sets, e.g.
[6,7], the serial vs parallel debate has almost entirely escaped the
numerosity discrimination accuracy problem. At least one study
has shown similar counting and subitizing processes to those
measured in standard enumeration tasks to be involved in the
number discrimination task with a single stimulus set [8]. Yet, not
much information is available about the nature of processes
involved in numerosity discrimination in case the stimulus display
contains multiple distinct sets.
In the following, we use the term counting as referring to any
process aimed at finding the total number of elements in a set. The
term is neutral with respect to the temporal properties of the
processes involved: counting can be parallel, serial, or mixed.
It has long been known that it takes at least 5–6 years before
children are able to learn all principles that are needed for
counting, including assignment of numerals for objects [9]. But
even after learning to count it is not guaranteed that perceptual
mechanisms follow the principles used in verbal and propositional
thinking. It is possible that even the most fundamental principle of numeration, the one-to-one correspondence between items and counting tags in the process of transforming every item from the to-be-counted category to the already-counted category, cannot always be obeyed [cf. 9]. Perceptually it may be difficult to
assign only one counting tag to every object with the purpose of
preventing the same object from being counted twice. When the
searched objects lack a clear structure it may be difficult to keep
track of which object is already counted and which is still on the
waiting list.
To the best of our knowledge, there is no generally accepted
method for establishing whether or not the tagging process follows
exactly the one-to-one principle. Unlike many previous studies
which have used analysis of reaction times to differentiate between
serial vs parallel processing styles, we attempt to reveal this
property of mental architecture on the basis of probability
distribution of responses. Our approach stems from an ideal
observer analysis whose purpose is to establish an absolute scale of
performance for an ideal perceptual device that is limited only by
stochastic characteristics of the stimulus itself [10]. Let’s suppose
that the observer’s task is to discriminate the numbers of two
distinct sets of randomly distributed elements. These two sets can
be distinguished by their spatial position, occupy two separate
areas, for example [11], or they can be intermixed but distinguished by a certain visual attribute, such as color or orientation
[12]. This is a relatively simple task, as even pigeons, with a brain
weighing less than 3 g, can be trained to discriminate numerical
proportion in the mixtures of two types of elements with
considerable accuracy [13,14]. As expected, an ideal perceptual
device can notice even one element difference irrespective of the
total number of elements. Real observers, human or nonhuman,
usually perform less accurately, presumably because their decisions
seem to be based on only a fraction of available items. It is
conceivable that instead of all presented elements the real
observers are able to take into account only a fraction of the
elements, especially when these elements have a random spatial
distribution and are presented for a very short time. Formally, this
situation resembles the inverse probability problem in which a
sample of randomly selected elements serves as a basis for
inference about the true proportion of elements hidden from the
observer. Jacob Bernoulli in his posthumous Ars conjectandi (1713/
1899) devised an ingenious urn problem as an idealized mental
exercise in which some objects or concepts of real interest (such as
people, event outcomes, visual objects, etc.) are represented as
colored balls or pebbles which are drawn, one after another,
randomly from the urn and their color is noted. Every probability
textbook teaches that balls or pebbles once extracted can or cannot
be returned to the urn, which leads to two distinct probability
distributions for the number of balls of a given color: the binomial
and hypergeometric distributions, respectively. These two different
replacement schemes, however, have an important application to
the problem of mental architecture. Provided that Bernoulli’s urn
model describes sufficiently accurately what happens in the
perception of numerical differences, the scheme of sampling with
replacement (leading to the binomial distribution) implies that
there is no tagging of which elements are already counted and
which are not: the same element can, in principle, be inspected
more than once. Consequently, if empirically determined
psychometric functions for numerical discriminations between
two sets of items are better described by binomial than
hypergeometric distribution, it would provide evidence that some
of these elements are inspected twice or more times which,
understandably, can only be done serially at two or more different
time moments. On the other hand, the scheme of sampling
without replacement (leading to the hypergeometric distribution)
implies that there is accurate one-to-one tagging of which elements
are already counted and which are not, leading to an element
being inspected only once, maximally. The attribution of one-to-
one counting tags (corresponding to the sampling scheme without
replacement) is by itself neutral to the problem of parallel or serial
counting.
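To make the difference between the two sampling schemes concrete, here is a small numerical sketch (ours, not part of the original article; it assumes Python with scipy available and uses the majority decision rule that is formalized later in the paper): an observer who samples K of the N elements without replacement is more accurate than one who may inspect the same element twice.

from scipy.stats import binom, hypergeom

N, N_A = 9, 5                      # e.g. 5 red and 4 green elements on the display
K = 5                              # number of elements the observer takes into account
p = N_A / N

# Sampling WITH replacement (binomial): the same element may be inspected repeatedly.
p_with = binom.sf(K // 2, K, p)                 # P(majority of inspected elements are red)

# Sampling WITHOUT replacement (hypergeometric): each element is tagged at most once.
p_without = hypergeom.sf(K // 2, N, N_A, K)

print(f"with replacement:    {p_with:.3f}")     # approx. 0.603
print(f"without replacement: {p_without:.3f}")  # approx. 0.643

Because K is odd here, ties cannot occur; for even K the models add a 0.5 tie-breaking term, as formalized in equations (1)-(4) of the Methods section. The gap between the two schemes is largest when N is small, which is one reason why displays of only 9 and 13 elements were used.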
If an observer strictly adhered to the hypergeometric model (see
equations (3) and (4) in the Methods section) with the parameter K
(the number of elements taken into account in the decision process)
being equal to the total number of elements in the stimulus display,
N, then he or she would always determine correctly which of the
two types of the elements is more numerous. The fact that the real
observers in our experiments make errors indicates, within the
proposed approach, that either they only take into account proper
subsets of the elements (adhering to the hypergeometric model
with K < N) or they count some of the elements more than once,
adhering, at least partially, to the binomial model. Our analysis
below indicates that both these possibilities take place: to account
for the data best we need to assume that the observers in some
trials use the hypergeometric model and in others the binomial model, with K varying from trial to trial. In relation to the seriality
vs parallelity of counting, the conformity of the data with the
hypergeometric model (i.e., sampling without replacement, one-to-
one tagging of selected elements) leaves the question of seriality vs
parallelity open. But once the data are shown to require the
binomial model for at least a fraction of all trials, one has to accept
that some elements can sometimes be counted more than once,
and this can only be done serially, at two or more separate time
moments.
The overall aim of the experiments was to introduce a new
approach for the assessment of mental architecture, namely the
property of whether, in the process of proportion discrimination of
multiple stimulus sets, certain elements were being counted
repeatedly. In our view, the aim was achieved by showing that
this is indeed the case at least in some of the trials.
Methods
Ethics Statement
The study has been approved by the local Research Ethics
Committee.
Four 20-year-old female observers with normal or corrected-to-normal vision were asked to decide which of two distinctive sets of objects was more numerous by pressing one of two buttons. In two separate series these two sets of objects were distinguished either by color or by orientation. A schematic view of the two types of stimulus configurations is shown in Figure 1. In the first series a randomly distributed collection of red and green circles was presented. The red and green circles had a luminance of about 23.5 cd/m². To diminish the impact of total red vs green area on the responses, the size of the circles was randomly varied in the range of 11 to 22 minutes of arc. In the second series of the experiments a collection of short black line segments of luminance 0.3 cd/m², tilted 20° either to the left or to the right from the vertical direction, was presented. The width and length of a line subtended 2′ and 19′ respectively (and the height of its vertical projection 16′). Both types of stimuli were presented within an elliptical gray background with a luminance of 54 cd/m² and with lengths of the horizontal and vertical axes of 8.86° and 8.70° respectively. This elliptical background was in the center of a rectangular area of luminance 64 cd/m² filling the rest of the screen. In order to avoid overlaps between elements, each element was positioned within an invisible inhibitory area which prevented other elements from being closer than 22′. Each stimulus element had a high contrast to guarantee its 100% identification had it been presented in isolation. The total number of objects N presented on the display was kept constant through each experimental session and was equal to either N = 9 or 13 elements. These two relatively small values were chosen because the difference between the response probabilities from the binomial vs hypergeometric models is greater when the total number of elements is small. During experimental sessions, the relative proportion of type A and type B elements was varied. For example, for the total number of N = 9 the relative proportions of the A (red or tilted to the left) and B (green or tilted to the right) element categories were the following: 1:8, 2:7, 3:6, 4:5, 5:4, 6:3, 7:2, and 8:1. The stimuli were presented at a viewing distance of 170 cm for 200 milliseconds, with 3 seconds for responding.
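The article does not describe how the random, non-overlapping positions were generated; the following rejection-sampling sketch (our reconstruction, with hypothetical helper names, coordinates in arcmin) merely illustrates one way the elliptical background and the 22′ inhibitory radius could be respected.

import math
import random

# Approximate reconstruction (not the authors' code) of the placement constraints:
# positions are drawn uniformly inside the elliptical background (axes 8.86 deg x 8.70 deg)
# and rejected if closer than 22 arcmin to any previously placed element.
SEMI_A = 8.86 * 60 / 2      # horizontal semi-axis, arcmin
SEMI_B = 8.70 * 60 / 2      # vertical semi-axis, arcmin
MIN_SEP = 22.0              # minimum centre-to-centre separation, arcmin

def inside_ellipse(x, y):
    return (x / SEMI_A) ** 2 + (y / SEMI_B) ** 2 <= 1.0

def place_elements(n, rng=None, max_tries=100_000):
    """Return n element positions (in arcmin, relative to the display centre)."""
    rng = rng or random.Random()
    positions = []
    tries = 0
    while len(positions) < n:
        tries += 1
        if tries > max_tries:
            raise RuntimeError("could not place all elements; constraints too tight")
        x = rng.uniform(-SEMI_A, SEMI_A)
        y = rng.uniform(-SEMI_B, SEMI_B)
        if inside_ellipse(x, y) and all(
                math.hypot(x - px, y - py) >= MIN_SEP for px, py in positions):
            positions.append((x, y))
    return positions

# Example: one N = 13 display
# print(place_elements(13, rng=random.Random(1)))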
All stimuli were generated on the screen of a Mitsubishi Diamond Pro 2070SB 22″ color monitor (frame rate 140 Hz, resolution 1024 × 769 pixels) with the help of a ViSaGe (Cambridge Research Systems Ltd.) stimulus generator. Every stimulus condition was replicated 100 times. The choice probability of the red circles was plotted as a function of the proportion of red elements N_R in the total number of elements on the display, N = N_R + N_G. Similarly, in the orientation experiment, the probability of the choice of the leftward-tilted elements was measured as a function of the proportion of leftward-tilted elements N_(\) in the total number of elements on the display, N = N_(\) + N_(/).
Mathematical expression of the psychometric models
The probabilities of a certain choice response for odd and even
K from the binomial model are given by equations (1) and (2):

P_{bin}\{K \text{ is odd}\} = \sum_{i = 1 + \lfloor K/2 \rfloor}^{K} \binom{K}{i} p^{i} (1-p)^{K-i}, \qquad K = 2k - 1 \qquad (1)

P_{bin}\{K \text{ is even}\} = \sum_{i = 1 + K/2}^{K} \binom{K}{i} p^{i} (1-p)^{K-i} + 0.5 \binom{K}{K/2} p^{K/2} (1-p)^{K/2}, \qquad K = 2k \qquad (2)
where
k is any positive natural number;
p is the proportion of a certain type of elements to the total number of elements (either N_A/(N_A + N_B) or N_B/(N_A + N_B), depending on the experimental definition);
K is the number of elements taken into account in the decision process.
The probabilities of a certain choice response for odd and even
K from the hypergeometric model are given by equations (3) and (4):

P_{hyp}\{K \text{ is odd}\} = \sum_{i = 1 + \lfloor K/2 \rfloor}^{K} \frac{\binom{N_A}{i} \binom{N_B}{K-i}}{\binom{N}{K}}, \qquad K = 2k - 1 \qquad (3)

P_{hyp}\{K \text{ is even}\} = \sum_{i = 1 + K/2}^{K} \frac{\binom{N_A}{i} \binom{N_B}{K-i}}{\binom{N}{K}} + 0.5 \, \frac{\binom{N_A}{K/2} \binom{N_B}{K/2}}{\binom{N}{K}}, \qquad K = 2k \qquad (4)
where
k is any positive natural number;
N_A is the number of type A elements in the stimulus;
N_B is the number of type B elements in the stimulus;
N is the total number of elements in the stimulus (N = N_A + N_B);
K is the number of elements taken into account in the decision process.
As stated above, one only needs to consider either odd or even
values of K because the probabilities given by a pair of equations
(either those for the binomial model or for the hypergeometric
model) are equal, given equal values for k.
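For concreteness, the two models can be transcribed directly into code. The sketch below (ours, not part of the original study; plain Python) implements equations (1)-(4) with K, p, N_A, and N_B defined as above; it relies on math.comb returning zero when the requested subset is larger than the set.

from math import comb

def p_binomial(K, p):
    """Equations (1)-(2): probability of choosing the target category under
    sampling with replacement, when K elements are taken into account and p is
    the proportion of target-type elements; ties (possible only for even K) are
    split at random."""
    prob = sum(comb(K, i) * p ** i * (1 - p) ** (K - i)
               for i in range(K // 2 + 1, K + 1))
    if K % 2 == 0:
        m = K // 2
        prob += 0.5 * comb(K, m) * p ** m * (1 - p) ** m
    return prob

def p_hypergeometric(K, N_A, N_B):
    """Equations (3)-(4): the same choice probability under sampling without
    replacement from N_A target and N_B non-target elements."""
    N = N_A + N_B
    denom = comb(N, K)
    prob = sum(comb(N_A, i) * comb(N_B, K - i)
               for i in range(K // 2 + 1, K + 1)) / denom
    if K % 2 == 0:
        m = K // 2
        prob += 0.5 * comb(N_A, m) * comb(N_B, m) / denom
    return prob

# Example: N = 9 display with 5 type-A elements, K = 5 elements taken into account
# p_binomial(5, 5 / 9)        -> approx. 0.603
# p_hypergeometric(5, 5, 4)   -> approx. 0.643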
Results
The obtained psychometric functions are shown in Figure 2.
The probability of the choice of "red" (color experiment) or "leftward tilt" (orientation experiment) is plotted as a function of
the proportion of the respective type of elements in the total
number of displayed elements. As expected, the choice probability
monotonically increases with the increase in the proportion of the
indicated elements.
It is assumed that the observer's decisions between response categories A and B are based on the inspection of K elements that are randomly selected from all available elements N. If the number of A-type elements K_A in the selection exceeds the number of B-type elements (K_A > K_B), then the response category "A" is chosen; in the opposite case the response category "B" is chosen. If the numbers of A and B elements happen to be equal (K_A = K_B) for an even number of selected elements K, then the choice between the "A" and "B" response categories is random with probability 0.5. Following this simple decision rule it is easy to compute all theoretical cumulative probability functions for
binomial and hypergeometric distributions. Figure 3 demonstrates these theoretical binomial and hypergeometric models for odd numbers of selected elements K (the sample size). One only needs to consider odd numbers of elements since K = 2k-1 (odd) and K = 2k (even) yield identical predictions. The equivalence of K = 2k-1 and K = 2k is easy to demonstrate numerically for any arbitrary value of k, or their formal equivalence can be demonstrated by using, for example, Wolfram's Mathematica. However, an analytic proof seems to go beyond ordinary algebra. The mathematical formulations of response probabilities from both types of models, binomial and hypergeometric, are given in the Methods section.

Figure 1. Stimulus configurations in the two experiments. Schematic view of stimulus configurations used in the numerosity discrimination experiment using color (left panel) or orientation (right panel) as a distinctive attribute. doi:10.1371/journal.pone.0029667.g001
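The odd/even equivalence noted above is easy to confirm numerically; here is a minimal check (ours, assuming scipy is available), for a display with N = 9 elements of which N_A = 5 are of the target type:

from scipy.stats import binom, hypergeom

N, N_A = 9, 5
p = N_A / N

def p_bin(K):
    # majority rule over K draws with replacement; ties (even K) split 0.5/0.5
    prob = binom.sf(K // 2, K, p)
    if K % 2 == 0:
        prob += 0.5 * binom.pmf(K // 2, K, p)
    return prob

def p_hyp(K):
    # same rule over K draws without replacement from the N displayed elements
    prob = hypergeom.sf(K // 2, N, N_A, K)
    if K % 2 == 0:
        prob += 0.5 * hypergeom.pmf(K // 2, N, N_A, K)
    return prob

for k in range(1, 5):
    K_odd, K_even = 2 * k - 1, 2 * k
    assert abs(p_bin(K_odd) - p_bin(K_even)) < 1e-12
    assert abs(p_hyp(K_odd) - p_hyp(K_even)) < 1e-12
print("K = 2k-1 and K = 2k give identical predictions for k = 1..4")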
Only in a few cases were the empirical psychometric functions
close enough to one of these model predictions. This outcome is
expected since it would be unrealistic to assume that the observer
can use a fixed number of elements K in each trial through the whole sequence of trials. It is more realistic to assume that the number of selected elements K is a variable and changes from one
trial to another. Also, there is no clear reason to hold any one
specific combination of theoretical models strictly superior to the
others as, within error limits, many mixture models are able to
provide a comparable fit.

Figure 2. The best fitting theoretical models (dotted line) vs empirical results (red points). The choice probability as a function of the proportion of the chosen response category for four observers, two discrimination tasks (color and orientation), and two numbers of elements (N = 9 and 13). Each point is a probability estimate computed from 100 trials. The dotted line represents the best fitting theoretical mixture model shown in Tables 1.A and 2.A. doi:10.1371/journal.pone.0029667.g002

Therefore, the emphasis of the current analysis is to estimate the relative performance of the hypergeometric models to that of a combination of both hypergeometric
and binomial models. We are greatly indebted to Ehtibar
Dzhafarov for suggesting the described approach. At the heart
of the underlying logic lies the assumption that if any binomial component(s) improve the overall fit of the mixture model (with the maximum number of mixture components held equal to the number of possible hypergeometric models), this would be an indication of serial processing in at least some of the trials.
An approximation algorithm based on least squares optimization
was written which looked for the weighted combination of all
theoretical models which minimizes the sum of squared errors
between theoretical predictions and points of empirical functions.
Prior to plotting the best mixture of theoretical models vs the
empirical psychometric functions, the latter were shifted to the left or
right to make their mean (m) equal to 0.5. If the mean of all responses
deviates from the expected 0.5 then it characterizes a response bias
towards one of the two response alternatives. As expected, the
empirical means were close to 0.5, ranging from 0.44 to 0.53.
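The exact optimization routine is not spelled out in the paper; the sketch below (our reconstruction, with hypothetical function names, assuming numpy and scipy) shows one way such a constrained least-squares mixture fit could be set up, with the candidate models' predicted psychometric functions as columns of a design matrix and non-negative weights that sum to one. The same quantities yield the percentage of unexplained variance (%Error) reported in Tables 1 and 2.

import numpy as np
from scipy.optimize import minimize

def fit_mixture(model_preds, empirical):
    """Hypothetical reconstruction: find non-negative mixture weights, summing to 1,
    over candidate model predictions that minimize the squared error to the
    empirical psychometric function.

    model_preds : (n_points, n_models) array, columns = theoretical choice
                  probabilities of the candidate models (hyp_3, ..., bin_3, ...)
    empirical   : (n_points,) array of observed choice proportions
    Returns (weights, pct_unexplained), the latter corresponding to %Error.
    """
    n_models = model_preds.shape[1]
    w0 = np.full(n_models, 1.0 / n_models)          # start from a uniform mixture

    def sse(w):
        return float(np.sum((model_preds @ w - empirical) ** 2))

    result = minimize(sse, w0, method="SLSQP",
                      bounds=[(0.0, 1.0)] * n_models,
                      constraints=[{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}])
    weights = result.x
    ss_res = sse(weights)
    ss_tot = float(np.sum((empirical - empirical.mean()) ** 2))
    return weights, 100.0 * ss_res / ss_tot

The authors additionally capped the number of mixture components and shifted each empirical function so that its mean equalled 0.5 before fitting; those preprocessing steps are omitted from this sketch.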
The best predictions of the mixtures of theoretical models are
shown in Figure 2 as continuous psychometric functions. The
parameters of these best fitting mixture models are shown in
Tables 1.A and 2.A. The number in the column corresponding to
the theoretical model (bin_K or hyp_K) indicates the percentage of trials in which each of these models is expected to be used. For example, in the first row of Table 1.A the mixture model is described as 31·hyp_5 + 26·hyp_7 + 15·hyp_9 + 28·bin_3, which means that for observer S1 the best fit was obtained when the hypergeometric model with a sample size of K = 5, K = 7, or K = 9 was supposed to be used in 31%, 26%, and 15% of all the
individual trials, respectively, and the binomial model with the sample size of K = 3 was used in the remaining 28% of the trials.

Figure 3. All possible theoretical models. All possible theoretical models corresponding to binomial (bin_K) or hypergeometric (hyp_K) distributions with the length of trials K. doi:10.1371/journal.pone.0029667.g003
Even a visual inspection can reveal that the fit to all 16 empirical
psychometric functions shown in Figure 2 was excellent. This was
confirmed by more formal tests showing that the predicted
psychometric functions were able to explain on average 98.86% of
the total response variance. Thus, only about 1.14% of total
variance on average remained unexplained and could be
attributed to measurement error.
The maximum number of components in the best fitting
mixture models is four in case N = 9 (Table 1.A) and six in case N = 13 (Table 2.A) in order to keep the number of regressors equal
to that of the competing mixture composed of hypergeometric
models only. The best predictions obtained by hypergeometric
models alone are given in Tables 1.B and 2.B. In most cases the fit of the mixture containing binomial model(s) surpasses that of the respective mixture containing only hypergeometric models. In
Tables 1.A and 2.A, in cases where the binomial component
improved the fit, the number presenting the proportion of
unexplained variance is underlined. Since in 12 out of 16 cases
addition of the binomial component improved the fit one can
conclude that there were a significant number of trials in which the
observers were not able to track exactly the elements that were
already counted and those that were not.
In general, it is known that numerical discrimination based on
color is more efficient than one based on geometric attributes, such
as orientation [cf. 12]. This seems to be in agreement with our
results: across all conditions and observers on average 5 elements
were taken into account in the orientation discrimination task and 7.5
elements when color was the distinguishing attribute.
In both types of tasks the hypergeometric distribution provided
a better fit than the binomial one: in 65.3% of all trials when
applied to discrimination on the basis of orientation, and in 88% of
trials when applied to discrimination based on color. It was not
entirely surprising to discover some small individual differences
since it was previously shown that some participants adhered to a
serial processing profile in most conditions while other participants
could exhibit parallel-like strategy in some conditions at least [15].
Discussion
In order to enumerate objects accurately it is necessary to follow
certain rules. One of these basic rules is the maintenance of the one-
to-one relationship between objects and tags assigned to these
objects: every object needs to be tagged only once. It is generally
unknown whether and how well different perceptual processes are
able to separate the to-be-counted items from the already-counted
ones. In this study we have proposed a new approach to this
problem. Although the question of whether and when people can
perform perceptual and mental operations in parallel or in series has
been dominating debates about mental architectures, it was also
made clear that this central question can be answered only when
other related questions such as stopping rules, selective influence
[16,17], and capacity limitations have been answered as well [1,18].
The one-to-one principle of tagging obviously belongs to the same
category of the related problems. In this study we presented strong
evidence that it is reasonable to assume that in a considerable
number of trials observers behave as if they are not able to keep
track of the elements they have already counted. It is very likely that
when forming their decision, they have taken the same element into
account repeatedly. Since the same element can be visited twice or
more times only at different time moments, this is a strong
indication that at least some operations are executed serially.
The obtained evidence does not allow us to assert that the adherence to the one-to-one tagging principle is an inflexible part
of the mental architecture.

Table 1.
A. The combinations of theoretical hypergeometric and binomial models providing the best fit to the empirical psychometric functions (N = 9). Each row lists the mixture weights (percentages of trials assigned to hyp_K and/or bin_K components) followed by the percentage of unexplained variance (%Error).

COLOR (N = 9)
S1: 31, 26, 15, 28; %Error = 1.5677
S2: 45, 39, 15, 1; %Error = 1.0888
S3: 61, 32, 7; %Error = 0.2616
S4: 29, 48, 15, 8; %Error = 0.3019

ORIENTATION (N = 9)
S1: 23, 9, 60, 8; %Error = 0.0005
S2: 10, 12, 77, 1; %Error = 0.8859
S3: 16, 73, 11; %Error = 0.9085
S4: 18, 19, 63; %Error = 1.2777

B. The combinations of theoretical hypergeometric models (hyp_3 to hyp_9 only) providing the best fit to the empirical psychometric functions (N = 9).

COLOR (N = 9)
S1: 30, 36, 18, 16; %Error = 1.5958
S2: 18, 42, 40; %Error = 1.0914
S3: 8, 60, 32; %Error = 0.2647
S4: 12, 23, 50, 15; %Error = 0.3034

ORIENTATION (N = 9)
S1: 71, 29; %Error = 0.0095
S2: 91, 9; %Error = 0.9217
S3: 16, 73, 11; %Error = 0.9085
S4: 85, 15; %Error = 1.3358

Note: N = number of elements on the display; %Error = the percentage of variance unexplained by the mixture of the theoretical models; bin_K = the binomial model sampling K elements; hyp_K = the hypergeometric model sampling K elements.
doi:10.1371/journal.pone.0029667.t001

Table 2.
A. The combinations of theoretical hypergeometric and binomial models providing the best fit to the empirical psychometric functions (N = 13). Each row lists the mixture weights (percentages of trials assigned to hyp_K and/or bin_K components) followed by the percentage of unexplained variance (%Error).

COLOR (N = 13)
S1: 13, 2, 54, 7, 11, 13; %Error = 3.3804
S2: 41, 43, 16; %Error = 1.5035
S3: 5, 12, 58, 4, 21; %Error = 0.1676
S4: 65, 29, 6; %Error = 1.5758

ORIENTATION (N = 13)
S1: 72, 28; %Error = 0.7928
S2: 47, 17, 8, 8, 20; %Error = 1.3801
S3: 74, 20, 6; %Error = 2.1331
S4: 61, 5, 34; %Error = 1.0062

B. The combinations of theoretical hypergeometric models (hyp_3 to hyp_13 only) providing the best fit to the empirical psychometric functions (N = 13).

COLOR (N = 13)
S1: 13, 2, 54, 7, 11, 13; %Error = 3.3804
S2: 17, 40, 43; %Error = 1.5111
S3: 24, 3, 10, 59, 4; %Error = 0.1789
S4: 65, 29, 6; %Error = 1.5758

ORIENTATION (N = 13)
S1: 72, 28; %Error = 0.7928
S2: 72, 9, 2, 9, 8; %Error = 1.3822
S3: 6, 76, 18; %Error = 2.1339
S4: 98, 2; %Error = 1.0200

Note: N = number of elements on the display; %Error = the percentage of variance unexplained by the mixture of the theoretical models; bin_K = the binomial model sampling K elements; hyp_K = the hypergeometric model sampling K elements.
doi:10.1371/journal.pone.0029667.t002

Previous studies have shown that
depending on the observer and stimulus conditions the parallel
processing strategy can be used in some and the serial processing
strategy in other situations [15]. Our results seem to suggest that in
perceptual tasks that can be solved more automatically and
spontaneously, like discriminations based on color, the observers
have a tendency to keep track of elements that have already been
counted. By contrast, in tasks like discrimination based on
orientation that require more deliberation and scrutinizing of
each element, the observers tend to confuse which elements have
already been counted and which have not. Although the accurate
tagging of the counted elements does not necessarily mean that the
processing is executed in parallel, lack of the one-to-one tagging
implies that at least some elements are processed serially, one after
another. However, these are not inflexible rules. For instance, one
of the four observers performed better in the orientation based
discrimination task than in the color discrimination task. This
seems to suggest that avoidance of repeated tagging of elements is
not a rigid part of mental architecture but rather a flexible strategy
that can be changed and, if necessary, learned. This conclusion is
supported by the fact that no single theoretical model was able to
provide a satisfactory explanation for most of the empirical
psychometric functions. The best fit was found when predictions of
different theoretical models were combined. This implies that the
observers do not adhere to only one strategy even during one
experimental session. We can only guess the number of different
strategies used during one session but at least three appear to be
the norm in most cases.
The observed individual differences are particularly interesting
in the light of a recent report showing that the ability to
discriminate numbers of elements in two sets was correlated with psychometrically measured intelligence [19]. It is an intriguing
possibility that the ability to keep track of elements which have
already been counted (together with the sample size one is able to
base his/her decisions upon), forms a precondition for numerical
intelligence which, in turn, among other faculties, gives rise to
general intellectual abilities.
Acknowledgments
We are grateful to Ehtibar Dzhafarov for major improvements to our work
and Mario Fific and two anonymous reviewers for commenting on an
earlier draft of the manuscript. We thank Kristin Kurjama for help in
collecting data. We also thank all our subjects.
Author Contributions
Conceived and designed the experiments: AR JA. Performed the
experiments: AP. Analyzed the data: AR JA AP. Contributed reagents/
materials/analysis tools: AR. Wrote the paper: JA AR AP.
References
1. Townsend JT, Fific M, Neufeld RWJ (2007) Assessment of mental architecture
in clinical/cognitive research. In: Treat TA, Bootzin RR, Baker TB, eds.
Psychological clinical science: Papers in Honor of Richard M. McFall. Mahwah,
NJ: Lawrence Erlbaum Associates. pp 223–258.
2. Townsend JT (1990) Serial vs parallel processing: Sometimes they look like
Tweedledum and Tweedledee but they can (and should) be distinguished.
Psychological Science 1: 46–54.
3. Townsend JT, Wenger MJ (2004) The serial-parallel dilemma: A case study in
a linkage of theory and method. Psychonomic Bulletin & Review 11: 391–
418.
4. Sternberg S (1966) High-speed scanning in human memory. Science 153: 652–654.
5. Dzhafarov EN, Schweickert R (1995) Decompositions of response-times: An
almost general-theory. Journal of Mathematical Psychology 39: 285–314.
6. Dehaene S, Cohen L (1994) Dissociable mechanisms of subitizing and counting:
neuropsychological evidence from simultanagnosic patients. Journal of Experimental Psychology: Human Perception and Performance 20: 958–975.
7. Feigenson L (2008) Parallel non-verbal enumeration is constrained by a set-
based limit. Cognition 107: 1–18.
8. Trick LM, Enns JT, Brodeur DA (1996) Life Span Changes in Visual
Enumeration: The Number Discrimination Task? Developmental Psychology
32: 925–932.
9. Gelman R, Gallistel CR (1978) The child’s understanding of number.
Cambridge, MA: Harvard University Press.
10. Rose A (1948) The sensitivity performance of the human eye on an absolute
scale. Journal of the Optical Society of America 38: 196–208.
11. Allik J, Tuulmets T (1991) Occupancy model of perceived numerosity.
Perception & Psychophysics 49: 303–314.
12. Tokita M, Ishiguchi A (2009) Effects of feature types on proportion
discrimination. Japanese Psychological Research 51: 57–68.
13. Honig WK, Matheson WR (1995) Discrimination of relative numerosity and
stimulus mixture by pigeons with comparable tasks. Journal of Experimental
Psychology-Animal Behavior Processes 21: 348–362.
14. Emmerton J, Renner JC (2006) Scalar effects in the visual discrimination of
numerosity by pigeons. Learning & Behavior 34: 176–192.
15. Townsend JT, Fific M (2004) Parallel versus serial processing and individual
differences in high-speed search in human memory. Perception & Psychophysics
66: 953–962.
16. Townsend JT (1984) Uncovering mental processes with factorial experiments.
Journal of Mathematical Psychology 28: 363–400.
17. Dzhafarov EN (2003) Selective influence through conditional independence.
Psychometrika 68: 7–26.
18. Dzhafarov EN (1997) Process representations and decompositions of response
times. In: Marley AAJ, ed. Choice, Decision, and Measurement: Essays in
Honor of R. Duncan Luce. Mahwah, NJ: Lawrence Erlbaum Associates. pp
255–277.
19. Halberda J, Mazzocco MMM, Feigenson L (2008) Individual differences in non-
verbal number acuity correlate with maths achievement. Nature 455: 665–668.