To be published in Behavioral and Brain Sciences (in press)
© 2004 Cambridge University Press
Below is the copyedited final draft of an accepted PRE-COMMENTARY on the Sagvolden et al.
Target Article. This updated preprint has been prepared for formally invited commentators.
Please DO NOT write a commentary unless you have been formally invited. NOTE: YOU MAY
COMMENT ON THIS TARGET ARTICLE, THE PRE-COMMENTARY, OR BOTH.
Attention-deficit/hyperactivity disorder (ADHD): Delay-of-reinforcement
gradients and other behavioral mechanisms
This is a pre-commentary on Sagvolden et al., “A dynamic developmental theory of
attention-deficit/hyperactivity disorder (ADHD) predominantly hyperactive/impulsive and
combined subtypes”
A. Charles Catania
University of Maryland
Baltimore County, MD 21250
catania@umbc.edu
http://www.umbc.edu/psyc/personal/catania/catania.htm
Abstract: Sagvolden, Johansen, Aase, and Russell (SJA&R) examine attention-
deficit/hyperactivity disorder (ADHD) at levels of analysis ranging from neurotransmitters to
behavior. At the behavioral level they attribute aspects of ADHD to anomalies of delay-of-
reinforcement gradients. With a normal gradient, responses followed after a long delay by a
reinforcer may share in the effects of that reinforcer; with a diminished or steepened gradient
they may fail to do so. Steepened gradients differentially select rapidly emitted responses
(hyperactivity), and they limit the effectiveness with which extended stimuli become conditioned
reinforcers, so that observing behavior is less well maintained (attention deficit). Impulsiveness
also follows from steepened gradients, which increase the effectiveness of smaller, more
immediate consequences relative to larger, more delayed ones. Individuals who vary in the
degree to which their delay gradients are steepened will show different balances between
hyperactivity and attention deficit. Given the range of ADHD phenomena addressed, it may be
unnecessary to appeal to additional behavioral processes such as extinction deficit. Extinction
deficit is more likely a derivative of attention deficit, in that failure to attend to stimuli
differentially correlated with extinction should slow its progress. The account suggests how
relatively small differences in delay gradients early in development might engender behavioral
interactions leading to very large differences later on. The steepened gradients presumably
originate in properties of neurotransmitter function, but behavioral interventions that use
consistently short delays of reinforcement to build higher-order behavioral units as a scaffolding
to support complex cognitive and social skills may nonetheless be feasible.
Keywords: ADHD; delay gradient; hyperactivity; attention deficit; observing responses;
extinction deficit; impulsiveness; self-control; exponential decay; intervention
Sagvolden, Johansen, Aase, and Russell (SJA&R) provide an interpretation of attention-
deficit/hyperactivity disorder (ADHD) at levels of analysis that range from neurotransmitters to
behavior. In the long run, the success of their account will depend on the adequacy with which
fine details of dopamine systems are linked via grosser cellular and neuroanatomical levels to
their eventual molar behavioral products. To the extent that evolutionary contingencies have
selected nervous systems on the basis of the behavior that they engender, we must understand the
properties of that behavior if we are to understand how the brain serves it (Catania 2000). My
main objective here is to elucidate aspects of SJA&R’s account that bear on the possible roles of
delay-of-reinforcement gradients and other behavioral phenomena in producing ADHD.
The ubiquity of delayed reinforcement. Much important behavior, called operant
behavior, occurs because of its consequences, that is, its effects on the environment. Some
important consequences are those that afford opportunities for new behavior, as when something
one does allows eating or drinking or playing, or as when one’s shift of attention leads to new
things seen or felt or heard. Responses that produce particular consequences are said to be
members of operant classes. Some consequential effects are immediate and others are delayed,
and their immediacy determines the potency with which they change or maintain behavior. In
other words, the extent to which consequences such as reinforcers operate to alter the future
likelihood of responses in the class that produced them depends, along with many other
variables, on the delays between the responses and their consequences.
Delay of reinforcement is a ubiquitous effect even if reinforcers are delivered very promptly on
responses, because other responses typically precede the one that actually produces the reinforcer
(Dews 1962). “The reinforced response is followed by the reinforcing stimuli; the preceding
unreinforced responses are also followed by the reinforcing stimuli, though not quite so
promptly. Indeed, the whole pattern of . . . responding is followed by the reinforcing stimuli and
so, in a sense, is reinforced” (Dews 1966, p. 578). It was once regarded as paradoxical that
schedules of intermittent reinforcement produced more behavior than the reinforcement of every
response. But if only every 10th response produces a reinforcer, 10 responses, not just the last
one, share in the effects of that reinforcer. The earlier responses make a smaller contribution than
the later ones by virtue of the longer delays that separate them from the reinforcer, but the sum of
all 10 contributions is necessarily greater than that from the 10th response alone.
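As a worked illustration with arbitrary numbers (my own, chosen only for exposition): if the 10 responses are emitted 1 s apart and each response's share of the reinforcer's effect decays exponentially with its delay $d$ as $e^{-0.2d}$, then the reinforced response contributes $e^{0} = 1$, whereas the summed contribution of all 10 responses is

$$\sum_{k=0}^{9} e^{-0.2k} = \frac{1 - e^{-2}}{1 - e^{-0.2}} \approx 4.8,$$

nearly five times the contribution of the final response alone.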
One way of thinking about how reinforcers work is to assume that responses weighted according
to a decay function by the delays that separate them from a reinforcer contribute to a reserve of
potential behavior, and that subsequent responding depends on the magnitude of that reserve,
which is then depleted when responding occurs without reinforcement (e.g., Catania 2001;
Catania 2003). Skinner (1938) proposed a reserve that received contributions only from the
response that just preceded the reinforcer, but retracted the proposal when it became clear that it
could not accommodate data from schedules of reinforcement (Skinner 1940). The retraction
might have been unnecessary if the contributions of responses preceding the one that produced
the reinforcer had been recognized (Catania 1971).
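The following is a minimal simulation sketch of such a reserve model, written in Python. The decay rate, the depletion fraction, the function names, and the mapping from reserve magnitude to response probability are illustrative choices of my own, not the specific formulation of Catania (2001; 2003):

    import math
    import random

    DECAY = 0.2       # decay rate of the delay gradient (per second); illustrative value
    DEPLETION = 0.05  # fraction of the reserve spent by each emitted response; illustrative

    def reinforce(reserve, response_times, reinforcer_time):
        """Add to the reserve the delay-weighted contribution of every response
        emitted since the previous reinforcer."""
        for t in response_times:
            delay = reinforcer_time - t
            reserve += math.exp(-DECAY * delay)  # exponential delay gradient
        return reserve

    def emit(reserve):
        """Responding depletes the reserve; its momentary probability grows
        with the magnitude of the reserve (an arbitrary monotonic mapping)."""
        probability = 1 - math.exp(-reserve)
        responded = random.random() < probability
        if responded:
            reserve *= 1 - DEPLETION
        return responded, reserve

    # Example: a reinforcer delivered 10 s after a short burst of responses.
    reserve = reinforce(0.0, response_times=[0, 1, 2, 3, 4], reinforcer_time=10)

In such a model, responses close to the reinforcer add most to the reserve, and unreinforced responding drains it, which is how extinction would be represented.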
Furthermore, delays may affect behavior in other ways. The onset of a stimulus that sets the
occasion for responding may be followed by a reinforced response after a shorter or a longer
delay. If reinforcers are delivered in its presence, the stimulus will become a conditional
reinforcer, but its potency will depend on the delay (Dinsmoor 1983; 1995). One simple but
exceedingly important response that is maintained by such a stimulus is that of attending to it. A
stimulus in the presence of which an opportunity for reinforcement is likely to arise very soon is
more likely to be observed or looked at or attended to than one in the presence of which that
opportunity is still some time away.
Experimental assessments of delay gradients. Figure 1 provides examples of two delay
gradients obtained with pigeons. The first shows rates of responding as a function of the time
between one response and the later reinforcement of a different response; the second shows rates
of responding maintained by a response-produced stimulus as a function of the time between the
onset of that stimulus and the subsequent delivery of a reinforcer in its presence. In both cases
the data have been fit by exponential decay functions. Candidates for the delay gradient have
included exponential, hyperbolic, and logarithmic functions, but the appropriateness of one or
the other depends on both procedural and statistical considerations. For example, integrals of
hyperbolic functions approach logarithmic functions, so the former are better fits to data from
procedures that assess one point on the gradient at a time, whereas the latter are better fits to data
from procedures that assess rates of responding over long time periods and therefore across a
range of delays. Furthermore, variance in the decay parameters of exponential functions may
generate hyperbolic functions when data are averaged (Killeen 1994; Killeen 2001).
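One simple way to see how such averaging can work (a textbook calculation, not the specific analysis in Killeen's papers): write the exponential gradient as $w(d) = e^{-\lambda d}$ and suppose the decay rate $\lambda$ varies from observation to observation, exponentially distributed with mean $k$. The average gradient is then hyperbolic:

$$\int_0^{\infty} \frac{1}{k}\, e^{-\lambda/k}\, e^{-\lambda d}\, \mathrm{d}\lambda = \frac{1}{1 + kd}.$$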
Figure 1. Pigeon 73: Rate of left-key pecks as a function of the delay between the last left-key
peck (*) and a reinforcer produced by a right-key peck. Pigeon 47: Rate of key-A pecks as
a function of the delay between the key-A peck that turned on the key-B stimulus (*) and the
later production of a reinforcer by a key-B peck in the presence of that stimulus. Procedures
are shown schematically below each graph.
The first experiment illustrated in Figure 1 involved random-interval reinforcement of a
sequence of pecks on two keys by a pigeon. For example, if reinforcement was contingent on
exactly four left pecks followed by exactly four right pecks, left pecks would always be
separated from the reinforcer by the time taken to emit the right pecks, and that time could be
manipulated by varying the required number of right pecks. The data for Pigeon 73 in Figure 1
were obtained by varying the required number of pecks on the right key (R), while the number
required on the left key (L) was held constant (cf. Catania 1971). Similar data can be generated
with procedures that alter the time it takes for the pigeon to emit its right-key pecks; such
procedures demonstrate that time rather than the intervening number of responses is the
appropriate dimension along which to measure the effects of delayed reinforcers (cf. Catania
1991).
The second experiment involved an observing-response procedure (Kelleher et al. 1962). During
successive presentations of yellow on the right key (B), contingencies irregularly alternated
between a fixed-interval schedule of reinforcement and an equal duration of extinction. These
presentations were preceded by brief presentations of the left or observing-response key (A), lit
white. If a white-key (observing) peck occurred during a brief window of time before the onset
of the right-key stimulus, the right key lit green if the current contingency was fixed-interval
reinforcement, and the right key lit red if it was extinction. Procedures that allow observing
pecks to produce only green (when the fixed-interval contingency is in effect) or only red (when extinction is in effect) show that observing pecks
are maintained because green under these circumstances functions as a conditional reinforcer.
Essentially, pigeons peck the observing key to get a look at green on the right key. But, as shown
in Figure 1, the rate of left-key pecking decreases as a function of the duration of the fixed
interval. The potency of green as a conditional reinforcer that maintains the observing response
depends on the delay from the onset of green to the later delivery of a reinforcer. A substantial
body of evidence demonstrates that organisms work to observe discriminative stimuli correlated
with the delivery of reinforcers; they do not work to observe discriminative stimuli that are
equally informative but are instead correlated with extinction or aversive events (Dinsmoor
1983; 1995).
Both delay gradients in Figure 1 extend over many seconds. They are the facts about behavior
that must be taken into account by hypotheses about mechanism. The gradients may be expected
to vary as a function of a variety of parameters, and their properties are presumably influenced
by such factors as whether response sequences are homogeneous or heterogeneous and whether
the responses that make up those sequences are relatively simple units or are instead integrated
higher-order, and perhaps temporally extended, ones (Catania 1995; 1998). In any case, the
durations of the delays considered here differ by orders of magnitude from those of synaptic
events or even of cascading neuronal processes involving large numbers of cells.
Implications of anomalous delay gradients. Now we are ready to examine the
implications for ADHD. As argued by SJA&R, the two major components of ADHD,
hyperactivity and attention deficit, can each be interpreted as consequences of a delay-of-
reinforcement gradient that is more limited in its temporal range than the ordinary delay gradient.
Figure 2 illustrates the rationale by comparing one hypothetical exponential decay gradient with
another that declines more steeply. Each gradient is assumed to end when it reaches the previous
reinforcer, based on data showing that the retroactive effects of reinforcers do not extend back
past the previous reinforcer to still earlier responses (Catania et al. 1988), though this blocking
might be attenuated in situations where reinforcers vary in kind or magnitude.
Figure 2. A hypothetical normal delay gradient (1) and one that decays more steeply over time
(2). Each gradient represents the magnitude of the effect of a reinforcer (arrow) on events that
occur at different earlier times. Illustrative response sequences are shown in A and B;
illustrative discriminative stimuli (and therefore potential conditional reinforcers) are shown in C
and D (cf. Figures 8 and 10 in SJA&R).
If gradient 1 operates for the reinforced behavior of a given organism at a given time, then the
five responses in A as well as the five in B will share in the effects of the reinforcer, but the
summed effects in B will clearly be greater than those in A. Similarly, it will support the stimuli
in both C and D as conditional reinforcers, but the effectiveness as a conditional reinforcer of
the stimulus in C will clearly be weaker than that in D. With gradient 2, however, the early
responses in A and the stimulus with early onset in C will be outside the range of effectiveness of
the reinforcer, because at those longer delays the gradient is at near-zero levels. This gradient
will differentially strengthen relatively rapid sequences of responses, and only stimuli with
relatively short delays from onset to reinforcer will be sufficiently effective as conditional
reinforcers to sustain observing behavior. The outcome will be rapid responding accompanied by
deficits in observing behavior or, in other words, hyperactivity plus attention deficit. The
differential strengthening of relatively rapid responding takes time, so a delay function like that
of gradient 2 may engender hyperactivity; but the hyperactivity may take a while to develop and
may develop separately in different environments.
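A small numerical sketch of this rationale follows the logic of Figure 2 rather than its exact curves; the decay rates (0.1 and 1.0 per second), the inter-response times (4 s and 0.5 s), and the function name are illustrative choices of my own:

    import math

    def summed_effect(inter_response_time, n_responses, decay):
        """Total delay-weighted share of a reinforcer across a response sequence
        whose last response is followed immediately by the reinforcer."""
        return sum(math.exp(-decay * inter_response_time * k) for k in range(n_responses))

    for decay, label in [(0.1, "gradient 1 (normal)"), (1.0, "gradient 2 (steepened)")]:
        slow = summed_effect(inter_response_time=4.0, n_responses=5, decay=decay)
        fast = summed_effect(inter_response_time=0.5, n_responses=5, decay=decay)
        print(f"{label}: slow sequence {slow:.2f}, fast sequence {fast:.2f}, ratio {fast / slow:.1f}")

With these numbers the steepened gradient reduces the slow sequence to little more than its final response while the rapid sequence still sums to more than two response-equivalents, so the relative advantage of rapid responding grows. A stimulus whose onset precedes the reinforcer by the same long delay fares even worse: under the steepened gradient its weight at onset is essentially zero, which is the attention-deficit side of the account.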
The case for steepened delay gradients as a mechanism underlying ADHD is strengthened by
comparisons of the behavior of Wistar Kyoto (WKY) and spontaneously hyperactive (SHR) rats
(though the latter abbreviation was originally based on the hypertension of those rats, which was
discovered first, rather than on their hyperactivity). SJA&R present the argument for SHR rats as
a nonhuman model for ADHD in some detail (and see also Sagvolden 2000; Sagvolden et al.
1993; 1988). In other research with WKY and SHR rats, reinforcers were arranged for a fixed
consecutive number of responses on one lever followed by a single response on a second lever,
and longer response sequences were maintained by WKY rats than by SHR rats (Evenden &
Meyerson 1998). This is what we would expect if delay gradients for SHR rats were abridged or
steepened relative to those of WKY rats, and it suggests that a direct comparison of delay
gradients for SHR and WKY rats in experiments similar to those illustrated in Figure 1 would be
of substantial interest. And if a quick way could be developed to obtain such gradients from non-
ADHD and ADHD children (say, using computer games on laptop computers), such data would
not only help to validate SJA&R’s SHR model but might also be of considerable diagnostic
value.
To this point I have considered only gradients based on reinforcing events. It would be useful to
know about the properties of delay gradients involving aversive stimuli. Aversive stimuli may
reduce behavior when they are contingent on responses in punishment procedures, or they may
maintain behavior when they are postponed or canceled by responses in avoidance procedures
(Catania 1998, pp. 88–110). Steepened gradients would probably make a difference in either
case. Steepened punishment gradients would reduce the effectiveness of both natural punishment
contingencies (e.g., getting burned on touching a hot stove) and artificial ones (getting scolded
after teasing a sibling); this could be manifested in proneness to accidents as well as in
disobedience. Steepened avoidance gradients would make it more difficult to maintain avoidance
behavior, because such behavior makes only indirect contact with aversive events (after a
successful avoidance response, nothing happens); this could be manifested in risk-taking or other
varieties of carelessness.
Impulsivity. One aspect of behavior often included in diagnoses of ADHD is impulsivity or
impulsiveness, where behavior with fairly immediate consequences dominates over behavior
with larger but more delayed consequences. Impulsivity is sometimes described in terms of
executive dysfunction, disinhibition, or failure to withhold behavior, and it is typically regarded
as the inverse of self-control (Rachlin & Green 1972). An account of impulsivity and self-control
in terms of hypothetical delay gradients is illustrated in Figure 3 (cf. Rachlin 1995, Fig. 1, p.
111).
Imagine a rat given access to two levers on trials that occur every minute or so. A press on the
first lever 10 seconds into the trial or later produces a small reinforcer, and a press on the second
lever 30 seconds into the trial or later produces a large reinforcer. Each trial ends as soon as
either reinforcer is delivered. If 10 seconds pass and the rat presses the first lever, it receives the
small reinforcer but has permanently lost the large one on that trial. The only way to obtain the
later large reinforcer is to refrain from pressing the first lever until the large reinforcer is
available for a press on the other lever. On the left, Figure 3 shows the respective exponential
decay gradients engendered by the smaller but earlier reinforcer arranged for the first response at
time A and by the larger but later reinforcer arranged for the other response at time B.
This example assumes some separate experience with the contingencies arranged for each lever.
A rat in this situation for the first time might start with presses on the A lever, always producing
the smaller, more immediate reinforcer, and so might never reach the time at which its press on
the B lever could produce the larger but later one. The relative heights of the respective gradients
can be taken as representing the relative likelihoods of the two responses during the time leading
up to the earlier reinforcer. The two gradients are shown starting at different maxima reflecting
the different A and B reinforcer magnitudes; if they started at equal maxima and decayed at
equal rates, they could not cross at E.
In this example, the B response is more probable than the A response up until time E, but
thereafter the A response becomes more probable. One way to overcome the higher probability
of A (or, in other words, to show self-control rather than impulsiveness) is for a B response prior
to time E to become a commitment of some kind. For example, the B response might make the A
response unavailable (perhaps via retraction of the A lever) for the remainder of the time until
the B reinforcer becomes available. Under such circumstances, we might observe many instances
of self-control, in the sense that B responses committing to the later larger reinforcer would
occur before any A responses that would produce the smaller earlier reinforcer and therefore end
the sequence.
Figure 3. Hypothetical normal (A and B) or anomalous (C and D) delay gradients based on a
relatively small reinforcer at an early time (A or C) and a larger one at a later time (B or D). If
the relative height of the gradient at a given moment is a predictor of changing preference
between the smaller and larger reinforcers, the gradients on the left generate impulsiveness, or
selection of the more immediate smaller rather than the more delayed larger reinforcer, only between
E and A; a commitment made prior to E results in selection of B and would be regarded as an
instance of self-control. With the steeper gradients on the right, however, impulsiveness
prevails throughout the entire range of delays.
Now consider the steeper gradients on the right in Figure 3. In this instance, the gradient
engendered by the smaller earlier reinforcer is everywhere higher than the other gradient in the
time leading up to C, even though the D gradient starts at a relatively higher maximum. With
these steepened gradients, there will be no circumstances in which the probability of the D
response exceeds that of the C response, so self-control will be completely displaced by
impulsivity. Impulsivity follows so directly from these kinds of gradients that it is not necessary
to appeal to deficient extinction or executive dysfunction.
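A numerical sketch of the two cases in Figure 3, using the 10-s and 30-s reinforcer times from the example above; the magnitudes and decay rates are illustrative values of my own, with the two gradients starting at different maxima and also decaying at somewhat different rates, one way to produce the crossing at E:

    import math

    T_A, T_B = 10.0, 30.0      # seconds into the trial at which each reinforcer becomes available
    MAG_A, MAG_B = 1.0, 3.0    # relative magnitudes of the smaller (A) and larger (B) reinforcers
    LAM_A, LAM_B = 0.30, 0.10  # decay rates; the smaller-reinforcer gradient decays faster here

    def gradient(magnitude, reinforcer_time, decay, t):
        """Height at time t of the delay gradient extending back from its reinforcer."""
        return magnitude * math.exp(-decay * (reinforcer_time - t))

    for tenths in range(101):  # scan 0.0 to 10.0 s in 0.1-s steps
        t = tenths / 10
        if gradient(MAG_A, T_A, LAM_A, t) > gradient(MAG_B, T_B, LAM_B, t):
            print(f"E is at about {t:.1f} s; before this the B (self-control) response is more probable.")
            break

With these particular numbers E falls at about 5.5 s; multiplying both decay rates by four (a crude stand-in for the steepened gradients on the right of Figure 3) moves E to about 1.4 s, shrinking the window within which a commitment response can preempt the impulsive choice.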
For impulsivity as for hyperactivity and attention deficit, no problems are posed by issues of
localization, such as SJA&R’s discussion of dopaminergic systems in mesolimbic, mesocortical,
and nigrostriatal branches (e.g., Fig. 1 SJA&R). Delay gradients with common decay properties
could as easily operate for behavior classes intermixed within a single area as for those discretely
localized in separate areas.
Individual differences in the balance between hyperactivity and attention deficit.
As outlined in SJA&R’s review of ADHD, some individuals display both hyperactivity and
attention deficit, but in others one or the other component dominates. These individual
differences vary with gender, age, and other variables (e.g., Sagvolden & Berger 1996). They can
be accommodated by assuming delay gradients that decline at different rates. Varieties of
presentation of ADHD symptoms are perhaps best viewed not as separate classes but rather as
lying along a continuum involving rate of decay of the delay gradient as a parameter. Two ways
in which delay gradients might vary are illustrated in Figure 4.
Figure 4. On the left, the hypothetical delay gradients descend exponentially from common
maximum values. In this instance, the normal gradient (a) is the highest, and all other gradients
are based on decrements relative to it. On the right, a similar family of gradients has been
transformed so that the area under each curve is a constant. In this instance, the normal
gradient (b) is the one that crosses the y-axis at the lowest point, so that the other gradients
show decrements relative to it at longer delays and increments at shorter delays.
Consider first the family of gradients on the left, in which the highest gradient (a) represents a
normal or non-ADHD gradient. Let us start with the steepest gradient, furthest from the normal
gradient. For the individual whose gradient drops asymptotically to near zero within a second or
so, responses must be very close to the reinforcer to be captured by it. The time period is so short
that only single responses can typically be strengthened. If sequences of responses cannot be
strengthened, there will be no hyperactivity. But this gradient will generate profound attention
deficit, because only brief stimuli quickly followed by reinforcers will acquire any conditional
reinforcing effectiveness. (We might also expect such other problems as severe impulsiveness
and poor acquisition of coordinated sequential behavior.)
Next consider a gradient that drops asymptotically to near zero only after a delay of a couple of
seconds or so. Attention deficit is still likely to be a problem, but in this case sequences of rapid
responses will sometimes be fully captured within the effective temporal extent of the gradient.
They will come to dominate over slower sequences of responses, so in this instance we can
expect to see both attention deficit and hyperactivity.
Finally consider a gradient that drops asymptotically to near zero only after several seconds and
therefore is closer to the normal gradient (a). The longer time period means that attention deficit
will be less of a problem, because stimuli will acquire conditional reinforcing properties, though
perhaps with slightly diminished potency. But faster response sequences will still be
differentially strengthened relative to more leisurely ones. In this case hyperactivity will
dominate and any attention deficit that becomes evident is likely to be mild.
We could play out the details further (e.g., by extending the argument to impulsivity), but the
point is that a single parameter determining the rate of decay of the delay gradient might be
sufficient to determine both the absolute and the relative severity of the attention and
hyperactivity components of ADHD. If a compromised dopamine neurotransmitter mechanism is
implicated in ADHD, as proposed by SJA&R, graded behavioral outcomes should be expected
from variations in the degree of compromise. The account is of special interest because it
promises to subsume a range of individual differences under a single mechanism.
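To put rough numbers on this continuum (the 5% threshold is my own illustrative convention): if the effective range of an exponential gradient $e^{-\lambda d}$ is taken as the delay at which it has fallen to 5% of its maximum,

$$d_{5\%} = \frac{\ln 20}{\lambda} \approx \frac{3}{\lambda},$$

then a gradient exhausted within about 1 s corresponds to $\lambda \approx 3$ per second, one exhausted within a couple of seconds to $\lambda \approx 1.5$, and one exhausted within several seconds to $\lambda \approx 0.5$, with the normal gradient corresponding to smaller values still; the three regimes just described differ only in that one number.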
But this is only one way in which the parameters of delay gradients might vary. Another
possibility is illustrated in the right graph of Figure 4. In that case, the normal or non-ADHD
gradient (b) is the one that crosses the y-axis at the lowest point. The others decline more steeply,
like those in the left graph. Here the area under each curve is equal to a constant. Such functions
might be appropriate, for example, if variations in the rate of decay depend on how quickly a
fixed quantity of some neurotransmitter is depleted. Such depletion can occur either slowly or
rapidly, as in the family of curves on the left, but the steeper the rate of decay, the higher the
maximum would have to be to hold the area constant. Differential selection of response
sequences and maintenance of attention would still vary with the rate-of-decay parameter, but
these curves have some additional implications.
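In symbols, holding the area constant ties the height of an exponential gradient directly to its decay rate (a standard calculation, assuming for simplicity that the gradient is integrated over all delays rather than ending at the previous reinforcer):

$$\int_0^{\infty} A\, e^{-\lambda d}\, \mathrm{d}d = \frac{A}{\lambda} = K \quad\Longrightarrow\quad A = K\lambda,$$

so doubling the rate of decay doubles the height at zero delay while halving the gradient's temporal extent, as in the right panel of Figure 4.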
One argument in favor of the equal-area functions on the right over the exclusively decremental
functions on the left is suggested by the impulsivity examples in Figure 3. An account of
impulsivity in terms of exponential gradients will not work unless the gradients generated by
different reinforcer magnitudes start at different maxima. Furthermore, if the effects are
everywhere decrements, as on the left in Figure 4, then the only source of higher rates of
responding would be the differential selection of rapid sequences; with extreme decrements, little
if any responding could be supported by reinforcers. This might be an appropriate model for
other behavior pathologies, but it seems not to capture the defining features of ADHD.
The equal-area functions in Figure 4, however, are consistent with a model in which a reserve of
potential behavior is replenished by responses weighted according to the delays that separate
them from a reinforcer and in which subsequent responding depends on the magnitude of that
reserve. In this case, hyperactivity follows not only from the differential strengthening of more
rapid sequences but also from the direct strengthening of responses that are very quickly
followed by reinforcers. With equal-area functions, greater strengthening occurs with steeper
functions, but with steeper and steeper functions, the temporal window within which responding
will be strengthened progressively narrows.
SJA&R argue that children with ADHD are less sensitive to changes in reinforcement
contingencies and require stronger and more salient reinforcers. This might seem consistent with
the decremental (left) gradients of Figure 4, but problems that appear to be motivational might
instead be problems of contingencies. Apparent insensitivity to reinforcement contingencies can
come about not only because of weak reinforcers but also because of strong reinforcers presented
after a delay. Furthermore, the latter problem will be more likely with steeper delay gradients.
Extinction deficit. I have so far emphasized delay gradients. But along with their
presentation in terms of delay gradients, SJA&R have also offered extinction deficit as an alternative
mechanism contributing to the complex of symptoms that define ADHD. We have already seen
that delay gradients on their own adequately account for many features of ADHD, but there are
other reasons besides parsimony to question the role of extinction deficits.
Extinction demonstrates that the effects of reinforcement are temporary, and SJA&R correctly
point out that the variables that produce increments in responding when reinforcement begins
may be different from those that produce decrements after it ends. It is therefore appropriate to
consider different mechanisms for reinforcement and for extinction. But extinction deficit, the
absence of the response decrements that typically occur during extinction, has no relevant
temporal parameters and therefore is not applicable to situations that can be interpreted in terms
of differential delays (that is another reason why the direct determination of delay gradients with
WKY and SHR rats might be especially valuable).
One problem with assessing extinction effects is the metric used to index the progress of
extinction. For example, if extinction for SHR rats begins with higher baseline rates of
responding than for WKY rats, should comparisons be based on relative declines in responding
or on the absolute levels reached at certain times? Procedures that changed baseline rates of
responding for one or the other group in an attempt to match baseline rates would have to deal
somehow with the differential effects of the contingencies that such matched baselines would
require.
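To make the metric question concrete with hypothetical numbers: if SHR rats entered extinction responding at 100 responses per minute and fell to 50 within a session while WKY rats fell from 60 to 30, the proportional declines would be identical (50%) even though the absolute rates reached would differ (50 versus 30 responses per minute), and the two metrics could support different conclusions about an extinction deficit.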
Another and perhaps even less tractable problem with assessing extinction deficit, however, is
that extinction is rarely studied in isolation. In Johansen & Sagvolden (2004), for example,
extinction was studied in successive sessions that each began with a fixed period of
reinforcement. Thus, the procedure involved the acquisition of a discrimination between the
early and the late portions of each session. If attention deficit affects orientation toward visual
cues, it presumably also affects attention without evident motor components, such as attention to
temporal cues. (I here treat attention as a variety of behavior, but one defined by the
environmental contingencies it can enter into rather than by a particular topography.) Thus, even
if SHR rats responded more in extinction than WKY rats, the difference could be attributed as
readily to differences in attention to temporal stimuli as to an extinction deficit.
Failure to attend to temporal cues rather than extinction deficit might also account for continued
responding early in the individual segments of fixed-interval (FI) schedules of reinforcement. A
similar confounding exists in procedures that compare reinforcement versus extinction
contingencies arranged in the presence of different visual or auditory stimuli, where what might
seem like extinction deficit might depend instead on a failure to attend to relevant stimuli. Thus
it seems reasonable to consider the possibility that extinction deficit is not a separate source of
some of the properties of ADHD, but rather is a derivative of the kinds of anomalies of delay
gradients that we have already considered.
I have had little to say here about other factors that might contribute to ADHD, such as executive
functions, verbal governance, and other higher-order processes. But given differences in delay
gradients similar to those already considered, it is plausible that complex skills such as the
hierarchical structuring of verbally governed behavior and the monitoring of one's own behavior
would develop differently in a child with than in a child without ADHD.
ADHD and development. As we know from the analysis of nonlinear systems, very small
differences in initial conditions can result in exceedingly large long-term differences (Gleick
1987). For example, even if the only problem in autism were an aversion to both eye
contact and touch, many of the everyday contingencies that build social interaction would be
missed, as when a child fails to notice that a parent has smiled at something the child has done. These interactions
provide the scaffolding on which more complex social behavior depends, including verbal
behavior, so the effects will be seen in all of the other behavior that depends on them. This is
presumably why early intervention matters so much.
One significant feature of SJA&R’s account is the parallel case they have presented for ADHD.
It should be no surprise that different early histories with ADHD, especially in combination with
the variations in delay gradients that we have entertained, could lead to vastly different spectra of
behavioral competencies and difficulties. Might small path dependencies lead sometimes to
oppositional defiant disorder and sometimes to conduct disorder and sometimes to neither? Even
the dominance of motor versus cognitive components might depend on differences in historical
paths, and perhaps we should also entertain the possibility that such behavioral trajectories can
drive certain features of brain organization rather than be driven by them. As suggested by
SJA&R, analyses in terms of the ebb and flow of complex interactions of behavior with
contingencies involving parents, peers, teachers, and others are a daunting but unavoidable
challenge.
Perhaps there are also circumstances in which features of ADHD are advantageous. With
experimental contingencies that favor varied over stereotyped response sequences, for example,
comparisons of the behavior of WKY and SHR rats have shown that SHR rats learn to vary
rather than repeat sequences more readily than WKY rats (Mook et al. 1993). Variable behavior
provides the raw material on which the selection of behavior by contingencies operates within
individual lifetimes, so this behavioral capacity may have been selected by evolutionary
contingencies (cf. Neuringer 2002). We may argue from our anthropocentric view that an
organism with more extended delay gradients will be more capable of taking into account events
that are more remote in time, but such capabilities surely must be balanced against the
importance of its sensitivity to the immediate consequences produced by its behavior.
Interventions and implications. If delay gradients are implicated in ADHD, their properties
presumably originate in the properties of neurotransmitter function, but this does not imply that
pharmacological interventions are the only recourse. Behavioral interventions that use
consistently short delays of reinforcement to build higher-order behavioral units as a scaffolding
to support complex cognitive and social skills may nonetheless be feasible. For example, the
shaping of behavior with prompt consequences both correlated with and intermixed with longer-
term ones might provide the prerequisites for building conditional reinforcers that maintain
longer periods of attention and that bridge increasingly extended delays. The decremental (and
detrimental) effects of delays might be attenuated with the creation of higher-order temporal
units, especially if they also involve mediation by verbal behavior. Computer games may be
particularly useful tools, because their rapid responsivity, which sometimes so easily captures the
behavior of children with ADHD, allows both for the precise control of contingencies relating
skilled behavior to its consequences and for the structured embedding of minimal behavioral
units into higher-order coordinated units. Behavior is the interaction of an organism with its
environment, so such interventions might teach us things not only about how brain structure
drives behavior but also about how behavior drives brain structure.
It may be worth noting that this account has mostly dealt with behavior in its own terms.
Although the interpretation of ADHD in terms of delay gradients is theoretical, delay gradients
themselves are not theory but rather are measurable properties of behavior. At least in part
because of the limitations of my expertise, this commentary has only occasionally made contact
with other levels of analysis. One of the great strengths of SJA&R’s contribution is its
articulation among the several levels, and I look forward to the buttressing and the widening of
the bridges that they have begun to build among those levels. The following quotation is
particularly apt: “Valid facts about behavior are not invalidated by discoveries concerning the
nervous system, nor are facts about the nervous system invalidated by facts about behavior. Both
sets of facts are part of the same enterprise, and I have always looked forward to the time when
neurology would fill in the temporal and spatial gaps which are inevitable in a behavioral
analysis” (Skinner 1984, p. 543).
ACKNOWLEDGMENTS
Eliot Shimoff collaborated in the research that generated the data used illustratively in Figure 1.
Rouben Rostamian provided helpful insights into the properties of exponential decay functions.
References
Catania, A. C. (1971) Reinforcement schedules: The role of responses preceding the one that produces the
reinforcer. Journal of the Experimental Analysis of Behavior 15:271–87.
Catania, A. C. (1991) Time as a variable in behavior control. In: Experimental analysis of behavior, part
2, ed. K. A. Lattal. Elsevier.
Catania, A. C. (1995) Higher-order behavior classes: Contingencies, beliefs, and verbal behavior. Journal
of Behavior Therapy and Experimental Psychiatry 26:191–200.
Catania, A. C. (1998) Learning. Prentice Hall.
Catania, A. C. (2000) From behavior to brain and back again: Review of Orbach on Lashley-Hebb.
Psycoloquy (March 18): psyc.00.11.027.lashley-hebb.14.catania (online journal), 890 lines.
Catania, A. C. (2001) Delay of reinforcement and the operant reserve. Society for Quantitative Analyses
of Behavior, New Orleans.
Catania, A. C. (2003) The operant reserve: A simulation. Society for Quantitative Analyses of Behavior,
San Francisco.
Catania, A. C., Sagvolden, T. & Keller, K. J. (1988) Reinforcement schedules: Retroactive and proactive
effects of reinforcers inserted into fixed-interval performances. Journal of the Experimental Analysis of
Behavior 49:49–73.
Dews, P. B. (1962) The effect of multiple SD periods on responding on a fixed-interval schedule. Journal
of the Experimental Analysis of Behavior 5:369–74.
Dews, P. B. (1966) The effect of multiple SD periods on responding on a fixed-interval schedule: V.
Effect of periods of complete darkness and of occasional omissions of food presentation. Journal of the
Experimental Analysis of Behavior 9:573–78.
Dinsmoor, J. A. (1983) Observing and conditioned reinforcement. Behavioral and Brain Sciences 6:693–
728.
Dinsmoor, J. A. (1995) Stimulus control (parts I and II). Behavior Analyst 18:51–68; 253–69.
Evenden, J. & Meyerson, B. (1998) A comparison of the behaviour of spontaneously hypertensive rats
and Wistar Kyoto rats on a paced fixed consecutive number schedule of reinforcement. In: Serotonergic
and steroidal influences on impulsive behaviour in rats, ed. J. L. Evenden. Acta Universitatis Upsaliensis.
Gleick, J. (1987) Chaos. Viking.
Johansen, E. B. & Sagvolden, T. (2004) Response disinhibition may be explained as an extinction deficit
in an animal model of attention-deficit/hyperactivity disorder (ADHD). Behavioural Brain Research
149:183–96.
Kelleher, R. T., Riddle, W. C. & Cook, L. (1962) Observing responses in pigeons. Journal of the
Experimental Analysis of Behavior 5:3–13.
Killeen, P. (1994) Mathematical principles of reinforcement. Behavioral and Brain Sciences 17:105–72.
Killeen, P. (2001) Writing and overwriting short-term memory. Psychonomic Bulletin and Review
8(1):18–43.
Mook, D. M., Jeffrey, J. & Neuringer, A. (1993) Spontaneously hypertensive rats (SHR) readily learn to
vary but not repeat instrumental responses. Behavioral and Neural Biology 59:126–35.
Neuringer, A. (2002) Operant variability: Evidence, functions, and theory. Psychonomic Bulletin and
Review 9:672–705.
Rachlin, H. (1995) Self-control: Beyond commitment. Behavioral and Brain Sciences 18:109–59.
Rachlin, H. & Green, L. (1972) Commitment, choice and self-control. Journal of the Experimental
Analysis of Behavior 17:15–22.
Sagvolden, T. (2000) Behavioral validation of the spontaneously hypertensive rat (SHR) as an animal
model of attention-deficit/hyperactivity disorder (AD/HD). Neuroscience and Biobehavioral Reviews
24:31–39.
Sagvolden, T. & Berger, D. F. (1996) An animal model of attention deficit disorder: The female shows
more behavioral problems and is more impulsive than the male. European Psychologist 1:113–22.
Sagvolden, T., Pettersen, M. B. & Larsen, M. C. (1993) Spontaneously hypertensive rats (SHR) as a
putative animal model of childhood hyperkinesis: SHR behavior compared to four other rat strains.
Physiology and Behavior 54:1047–55.
Sagvolden, T., Slåtta, K. & Arntzen, E. (1988) Low doses of methylphenidate (Ritalin) may alter the
delay-of-reinforcement gradient. Psychopharmacology (Berlin) 95:303–12.
Skinner, B. F. (1938) The behavior of organisms: An experimental analysis. Appleton-Century-Crofts.
Skinner, B. F. (1940) The nature of the operant reserve. Psychological Bulletin 37:423.
Skinner, B. F. (1984) Theoretical contingencies. Behavioral and Brain Sciences 7:541–45.
... Due to the association between dopamine and LTD, the theory also proposes that extinction processes are depressed in ADHD, causing a slowed or deficient elimination of previously reinforced behavior [17]. Altered reinforcement learning described by a steepened delay-ofreinforcement gradient combined with deficient extinction can produce the main symptoms of ADHD: Inattention , hyperactivity, impulsivity, and additionally increased behavioral variability [17,29,92,93,100101102. Slowed learning of discriminative stimuli due to the steepened delay-of-reinforcement gradient leads to a weaker control of behavior by contextual cues: Behavior is not controlled over extended periods of time by the discriminative stimulus and may be inappropriate for the current situation [103]. ...
... In this case the function is "hinged" at an intercept of c. It is an empirical question which of these models is most relevant to research on ADHD [100]. Because capacity c is often a free parameter, the difference between the two models is blunted by the models' ability to absorb λ into c: c' = (cλ). ...
Article
Full-text available
Attention-deficit/hyperactivity disorder (ADHD), characterized by hyperactivity, impulsiveness and deficient sustained attention, is one of the most common and persistent behavioral disorders of childhood. ADHD is associated with catecholamine dysfunction. The catecholamines are important for response selection and memory formation, and dopamine in particular is important for reinforcement of successful behavior. The convergence of dopaminergic mesolimbic and glutamatergic corticostriatal synapses upon individual neostriatal neurons provides a favorable substrate for a three-factor synaptic modification rule underlying acquisition of associations between stimuli in a particular context, responses, and reinforcers. The change in associative strength as a function of delay between key stimuli or responses, and reinforcement, is known as the delay of reinforcement gradient. The gradient is altered by vicissitudes of attention, intrusions of irrelevant events, lapses of memory, and fluctuations in dopamine function. Theoretical and experimental analyses of these moderating factors will help to determine just how reinforcement processes are altered in ADHD. Such analyses can only help to improve treatment strategies for ADHD.
... However, there are many results of neurobiological research that are highly relevant to understanding the pathophysiology of ADHD and it is useful to include them in theoretical approaches and emerging dimensional frameworks. For example, we and others have suggested that many of the symptoms of ADHD arise from an altered sensitivity to reinforcement (Catania, 2005;Iaboni, Douglas, & Baker, 1995;Sagvolden, Aase, Zeiner, & Berger, 1998;Tripp & Wickens, 2008Wickens & Tripp, 1998;Williams & Dayan, 2005). ...
Article
Full-text available
An altered behavioral response to positive reinforcement has been proposed to be a core deficit in attention deficit hyperactivity disorder (ADHD). The spontaneously hypertensive rat (SHR), a congenic animal strain, displays a similarly altered response to reinforcement. The presence of this genetically determined phenotype in a rodent model allows experimental investigation of underlying neural mechanisms. Behaviorally, the SHR displays increased preference for immediate reinforcement, increased sensitivity to individual instances of reinforcement relative to integrated reinforcement history, and a steeper delay of reinforcement gradient compared to other rat strains. The SHR also shows less development of incentive to approach sensory stimuli, or cues, that predict reward after repeated cue-reward pairing. We consider the underlying neural mechanisms for these characteristics. It is well known that midbrain dopamine neurons are initially activated by unexpected reward and gradually transfer their responses to reward-predicting cues. This finding has inspired the dopamine transfer deficit (DTD) hypothesis, which predicts certain behavioral effects that would arise from a deficient transfer of dopamine responses from actual rewards to reward-predicting cues. We argue that the DTD predicts the altered responses to reinforcement seen in the SHR and individuals with ADHD. These altered responses to reinforcement in turn predict core symptoms of ADHD. We also suggest that variations in the degree of dopamine transfer may underlie variations in personality dimensions related to altered reinforcement sensitivity. In doing so, we highlight the value of rodent models to the study of human personality.
... That insufficiency was never demonstrated, and amphetamines also ameliorate the symptoms with negligible impact on the dopamine system. A behavioral theory of ADHD proposed by Sagvolden and associates (Sagvolden et al., 2005;Catania, 2005) posited shorter steeper delay of reinforcement gradients as one of the factors that caused behavioral deficits in ADHD. There is some evidence for these in an animal model of ADHD (Johansen, Killeen, Russell, et al., 2009;Johansen et al., 2009a;Pellón et al., 2018), along with the predicted greater entropy of responding. ...
Article
Full-text available
One of the most notable aspects of the behavior of individuals with Attention Deficit Hyperactivity Disorder (ADHD) is increased variability in many aspects of their behavior, including response times and attentional focus. Among the many theories of ADHD is one that identifies its material cause as phasic malnutrition of the neurons required to maintain constancy of performance. Of the diverse predictions issuing from this theory, one concerns ubiquitous data: response times and their variance in decision tasks. This paper reviews that behavioral neuroenergetics theory and model, shows how they predict representative data, and suggests their relevance to researchers studying animal models of ADHD.
... We take different responses to be intrinsically "marked" differently in memory and, thus, likely to support different delay of reinforcement gradients (Figures 3, 5, and 6). The implications of these results for the study of impulsivity are clear (Catania, 2005b). One cannot talk of impulsivity in the abstract; it depends DELAY GRADIENTS FOR LICKING AND ENTERING 23 on the contingencies (some classes of individuals who are impulsive in discounting delay are conservative in discounting risk), magnitude effects, and nature of the good discounted. ...
Article
Full-text available
The present experiments studied impulsivity by manipulating the delay between target responses and presentation of a reinforcer. Food-deprived SHR, WKY, and Wistar rats were exposed to a fixed-time 30-s schedule of food pellet presentation until they developed stable patterns of water spout-licking and magazine-entering. In successive phases of the study, a resetting delay contingency postponed food delivery if target responses (licks or entries) occurred within the last 1, 2, 5, 10, 20, 25, or 28 s of the inter-food interval. Response-food delays were applied independently for the two behaviors during separate experimental phases, and order of presentation and the behavior that was punished first were counterbalanced. Licking was induced in the order of Wistar > SHR > WKY, and magazine entries were in the order of SHR > WKY > Wistar. Magazine entries showed steeper delay gradients than licking in SHR and Wistar rats but were of similar great inclination in the WKY rats. The different responses were differentially sensitive to delays. This suggests a different ordering of them in the interval between reinforcers. It also has implications for attempts to change impulsive behavior, both in terms of the nature of the response and its removal from reinforcing consequences.
... Sagvolden, Aase, Johansen e Russell (2005), por exemplo, apontavam a escassez de dados sobre tratamentos médicos eficazes para o subtipo déficit de atenção do TDAH. No que concerne às contribuições da Análise do Comportamento, os padrões comportamentais usualmente característicos de pessoas com diagnóstico de déficit de atenção são, ao menos em parte, explicados a partir dos gradientes de atraso de reforço (Catania, 2005). Baseado em extensa literatura, Catania explica que uma das causas do que chamamos déficit de atenção pode estar relacionada à dificuldade de controle por reforços condicionados. ...
... For instance, boys with ADHD rely on inappropriate entry strategies (disruptive attention seeking) when seeking to join in games with unfamiliar peers, which over time "turns off" the peers who respond negatively and reject future interaction with these ADHD peers (Ronk et al. 2011). Reinforcement strengthens preceding behavior regardless of whether the parent or teacher deems the behavior correct or disruptive (Catania 1971(Catania , 2005. ...
... The demand for different gradients for different classes of behavior may seem profligate; but it is no more so than nature. The existence of long-tailed gradients for some classes of behaviors has important implications for applied behavior analysis, an idea adumbrated by Catania (1971; 2005a), Madden and Perone (2003), and Kwok et al. (2012). Thirty-six years ago, Herrnstein (1977, p. 602) noted the following: " We seem destined to undertake Watsonian bot- anizing [of behavior], but with better prospects for success than Watson would have had 50 years ago. ...
... In this procedure, therefore, changes in the FI value changed the delay between the onset of green on the right key and the later delivery of the FI reinforcer. Delay gradients engendered by a range of reinforcers, including stimulus onset in the maintenance of observing behavior, have been incorporated into a model of ADHD (attentiondeficit hyperactivity disorder) by Sagvolden and colleagues (Catania, 2005a(Catania, , 2005bSagvolden et al., 2005). In that account, delay gradients that decrease more rapidly than those in a general population can lead to hyperactivity when they differentially reinforce rapid response sequences, because the gradients cannot support the earlier responses of those sequences when responding is emitted more slowly. ...
Article
Full-text available
Random-interval reinforcement was arranged for a sequence of pigeon first-key pecks followed by second-key pecks. First-key pecks, separated from reinforcers by delays that included number of second-key pecks and time, decreased in rate as delays increased. Delay functions, or gradients, were obtained in one experiment with reinforced sequences consisting of M first-key pecks followed by N second-key pecks (M + N = 16), in a second where required first-key pecks were held constant (M = 8), and in a third where minimum delay between most recent first-key pecks and reinforcers varied. In each, gradients were equally well fitted by exponential, hyperbolic and logarithmic functions. Performances were insensitive to reinforcer duration and functions were consistent across varied random-interval values. In one more experiment, time and number delays were independently varied using differential reinforcement of rate of second-key pecks. Delay gradients depended primarily on time rather than on number of second-key pecks. Thus, reinforcers have effects based on earlier responses, not just the ones that produced them, with the contribution of each response weighted by the time separating it from the reinforcer rather than by intervening behavior. Situations where unwanted responses (e.g., errors) often precede reinforced corrects can maintain them unless designed to avoid such effects of delay.
Article
O presente artigo apresenta alguns esforços reflexivos incipientes acerca da viabilidade teórico-filosófica de um diálogo entre behaviorismo radical/análise do comportamento e psiquiatria, enquanto especialidade médica preocupada com os problemas “mentais e do comportamento”. Tal aproximação, apesar de incipiente, parece promissora e vem se mostrando muito produtiva na prática daqueles que lidam com problemas do comportamento. Baseando a reflexão predominantemente na psiquiatria biológica e num de seus principais pilares, as neurociências, defende-se que o behaviorismo radical poderia ajudar a dar rumo para estas disciplinas que, por sua vez, poderiam ajudar na compreensão e estudo de partes do comportamento que ocorrem no organismo, principalmente nos problemas do comportamento. Em outras palavras, o autor suporta a tese de que tal diálogo ajudaria muito no aprimoramento de ambas as disciplinas e que a separação entre elas é menos nítida do que vem se assumindo de ambos os lados.
Article
Full-text available
The present paper presents some initial reflexive efforts in order to evaluate the viability of a dialogue between radical behaviorism/behavior analysis and psychiatry, taken as the branch of medicine concerned with "mental and behavioral" problems. Such an approximation, in spite of incipient, seems promising to those who have been experiencing it in practice, once it has been showing very productive in the matter of behavioral problems. Basing the reflexive efforts mainly in biological psychiatry, as well as in its main ground, the neurosciences, it is proposed that radical behaviorism could give direction to these disciplines that, in their turns, could help to understand and to study parts of the behaviors that happens in the organism, especially in terms of behavioral problems. In other words, the author supports the view that, especially in the field of the problems of behavior, such a dialogue would give much help in the improvement of both areas and that the distinction between them is not as clear as it has been taken.
Chapter
This chapter discusses time as a variable in behavior analysis. Behavior takes place in time and has temporal dimensions. As an independent variable, time is an essential property of the environments within which behavior occurs. As a dependent variable, it includes not only response durations but also the distribution of responses in time. Each stimulus dimension has intrinsic properties, and those of the temporal dimension differ in important ways from those of other stimulus dimensions such as wavelength, intensity, and spatial extent. The molecular properties of behavior involve the properties of individual stimuli and responses. The processes of discrimination and differentiation are molar as they are aspects of populations of stimuli and responses observed over extended periods of time. If the search for behavior mediating timing were successful, the outcome might be regarded as relevant to the organism's temporal receptor.
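Because the chapter treats the distribution of responses in time as a dependent variable, a small sketch (with invented timestamps; the bin width is also arbitrary) of how recorded response times reduce to inter-response times and a binned distribution may be useful:

```python
# Illustrative only: turn a hypothetical list of response timestamps into
# inter-response times (IRTs) and a simple binned distribution.
from collections import Counter

timestamps_s = [0.4, 1.1, 1.5, 3.9, 4.2, 4.4, 7.8, 8.1]   # invented data
irts = [round(b - a, 1) for a, b in zip(timestamps_s, timestamps_s[1:])]

bin_width = 1.0
histogram = Counter(int(irt // bin_width) for irt in irts)
print("IRTs (s):", irts)
for bin_index in sorted(histogram):
    lo, hi = bin_index * bin_width, (bin_index + 1) * bin_width
    print(f"{lo:.0f}-{hi:.0f} s: {histogram[bin_index]} response(s)")
```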
Article
Orbach's examination of the work of Lashley and Hebb is of great historical interest, but it illustrates a vast gap, both past and present, between research on the nervous system and research on behavior. Grand strides in the neurosciences have taken place with insufficient attention to the behavior of the organisms that are the hosts of nervous systems. In the final analysis, nervous systems are selected by evolutionary contingencies on the basis of the behavior that they engender. If we fail to understand the behavior, we will probably also fail to understand how the brain serves it. As we move away from the Decade of the Brain into the Decade of Behavior, those unfamiliar with the properties of behavior will be at a disadvantage when they seek its sources in the brain, because they will not know what they should be looking for. Lashley was on the right track when he used the properties of serial order in behavior to make inferences about the nervous system, but too often both Lashley and Hebb speculated about the nervous system without firm grounding in what was even then known about learning and behavior. We now know much more, and neuroscience and the science of behavior have each reached a point at which a modern synthesis holds great promise.
Article
Effective conditioning requires a correlation between the experimenter's definition of a response and an organism's, but an animal's perception of its behavior differs from ours. These experiments explore various definitions of the response, using the slopes of learning curves to infer which comes closest to the organism's definition. The resulting exponentially weighted moving average provides a model of memory that is used to ground a quantitative theory of reinforcement. The theory assumes that incentives excite behavior and focus the excitement on responses that are contemporaneous in memory. The correlation between the organism's memory and the behavior measured by the experimenter is given by coupling coefficients, which are derived for various schedules of reinforcement. The coupling coefficients for simple schedules may be concatenated to predict the effects of complex schedules. The coefficients are inserted into a generic model of arousal and temporal constraint to predict response rates under any scheduling arrangement. The theory posits a response-indexed decay of memory, not a time-indexed one. It requires that incentives displace memory for the responses that occur before them, and may truncate the representation of the response that brings them about. As a contiguity-weighted correlation model, it bridges opposing views of the reinforcement process. By placing the short-term memory of behavior in so central a role, it provides a behavioral account of a key cognitive process.
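The memory component described above is an exponentially weighted moving average with response-indexed decay. The sketch below illustrates only that general idea; the response coding and the decay parameter are invented and are not the published model's values:

```python
# Sketch under assumptions: memory as an exponentially weighted moving average
# over recent responses, decaying per response rather than per unit time.
def update_memory(memory, response_matches_target, beta=0.25):
    """EWMA update: the new event enters with weight beta, old memory decays by (1 - beta)."""
    return (1.0 - beta) * memory + beta * (1.0 if response_matches_target else 0.0)

memory = 0.0
# 1 = a response of the class the experimenter records, 0 = other behavior
for response in [1, 1, 0, 1, 0, 0, 1, 1]:
    memory = update_memory(memory, bool(response))
    print(f"after response {response}: memory = {memory:.3f}")
```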
Article
In a two-key pigeon chamber, variable-interval reinforcement was scheduled for a specified number of pecks, emitted either on a single key or in a particular sequence on the two keys. Although the distribution of pecks between the two keys was affected by whether pecks were required on one or on both keys, the total pecks emitted was not; the change from a one-key to a two-key requirement simply moved some pecks from one key to the other. Thus, each peck preceding the one that produced the reinforcer contributed independently to the subsequent rate of responding; the contribution of a particular peck in the sequence was determined by the time between its emission and the delivery of the reinforcer (delay of reinforcement), and was identified by the proportion of pecks moved from one key to the other when the response requirement at that point in the sequence was moved from one key to the other.
Article
Examined sex differences in the temporal discrimination and activity level of an animal model of attention deficit disorder (ADD) using a conjunctive 120-sec variable interval 16-sec differential reinforcement of low rate schedule of reinforcement. The Ss were 8 male and 8 female spontaneously hypertensive (SHR) rats and 8 male and 8 female Wistar-Kyoto rats. Results show that SHR males were generally hyperactive and that SHR females were hyperactive and had severe time discrimination problems. The latter caused relatively fewer reinforcers to be delivered. When a reinforcer was delivered, SHR females frequently failed to collect it. When the SHR females were in diestrus, their behavior became even less efficient. Findings with the animal model seem to be in general agreement with the behavior of ADD children when a differential reinforcement of low rate schedule is used. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
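The conjunctive schedule named in this abstract requires that a variable-interval criterion and a minimum inter-response-time (DRL) criterion both be satisfied before a response produces the reinforcer. A rough sketch of that contingency, with invented response times and a hypothetical interval timer, is:

```python
# Sketch only: conjunctive VI 120-s DRL 16-s. A response is reinforced only if
# the variable interval has elapsed AND at least 16 s have passed since the
# previous response. Event times and the timer implementation are invented.
import random

def conjunctive_vi_drl(response_times_s, mean_interval_s=120.0, min_irt_s=16.0, seed=0):
    rng = random.Random(seed)
    interval_elapses_at = rng.expovariate(1.0 / mean_interval_s)
    last_response_at = 0.0
    reinforcers = []
    for t in response_times_s:
        irt_ok = (t - last_response_at) >= min_irt_s
        vi_ok = t >= interval_elapses_at
        if irt_ok and vi_ok:
            reinforcers.append(t)
            interval_elapses_at = t + rng.expovariate(1.0 / mean_interval_s)
        last_response_at = t
    return reinforcers

print(conjunctive_vi_drl([10, 40, 50, 130, 160, 200, 400]))
```

Responding too rapidly resets the inter-response-time requirement, which is why hyperactive subjects can earn relatively few reinforcers on such schedules.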
Article
Skinner outlines a science of behavior which generates its own laws through an analysis of its own data rather than securing them by reference to a conceptual neural process. "It is toward the reduction of seemingly diverse processes to simple laws that a science of behavior naturally directs itself. At the present time I know of no simplification of behavior that can be claimed for a neurological fact. Increasingly greater simplicity is being achieved, but through a systematic treatment of behavior at its own level." The results of behavior studies set problems for neurology, and in some cases constitute the sole factual basis for neurological constructs. The system developed in the present book is objective and descriptive. Behavior is regarded as either respondent or operant. Respondent behavior is elicited by observable stimuli, and classical conditioning has utilized this type of response. In the case of operant behavior no correlated stimulus can be detected when the behavior occurs. The factual part of the book deals largely with this behavior as studied by the author in extensive researches on the feeding responses of rats. The conditioning of such responses is compared with the stimulus conditioning of Pavlov. Particular emphasis is placed on the concept of "reflex reserve," a process which is built up during conditioning and exhausted during extinction, and on the concept of reflex strength. The chapter headings are as follows: a system of behavior; scope and method; conditioning and extinction; discrimination of a stimulus; some functions of stimuli; temporal discrimination of the stimulus; the differentiation of a response; drive; drive and conditioning; other variables affecting reflex strength; behavior and the nervous system; and conclusion. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
When experimenters require their subjects to perform some readily recorded response to gain access to discriminative stimuli but do not permit this behavior to alter the schedule of reinforcement, the response is classified, by analogy, as an “observing” response. Observing responses have been used not only to analyze discrimination learning but also to substantiate the concept of conditioned reinforcement and to measure the reinforcing effect of stimuli serving other behavioral functions. A controversy, however, centers around the puzzling question of how observing can be sustained when the resulting stimuli are not associated with any increase in the frequency of primary reinforcement. Two possible answers have been advanced: (a) that differential preparatory responses to these stimuli as conditional stimuli make both the receipt and the nonreceipt of unconditional stimuli more reinforcing; and (b) that information concerning biologically significant events is inherently reinforcing. It appears, however, that the stimulus associated with the less desirable outcome is not reinforcing. The maintenance of observing can be reconciled with the traditional theory that the acquisition of reinforcing properties proceeds according to the same rules as those for Pavlovian conditioning if it is recognized that the subject is selective in what it observes and procures a greater than proportionate exposure to the stimulus associated with the more desirable outcome. As a result of this selection, the overall frequency of primary reinforcement increases in the presence of the observed stimuli and declines in the presence of the nondifferential stimuli that prevail when the subject is not observing.
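The resolution proposed above turns on simple arithmetic: selective observing gives the subject disproportionate exposure to the stimulus correlated with the better outcome, so primary reinforcement is more frequent in the presence of observed stimuli than in the presence of the nondifferential stimuli. The following sketch uses invented rates and proportions only to make that comparison concrete:

```python
# Illustrative arithmetic only (values invented). Components of a mixed schedule
# alternate; S+ signals the reinforced component, S- the unreinforced one.
reinforcers_per_min_in_splus = 2.0
reinforcers_per_min_in_sminus = 0.0
p_component_is_splus = 0.5

# Not observing: the nondifferential mixed stimulus prevails.
rate_during_mixed = (p_component_is_splus * reinforcers_per_min_in_splus
                     + (1 - p_component_is_splus) * reinforcers_per_min_in_sminus)

# Selective observing: the subject keeps S+ on but quickly stops observing during S-,
# so most observed time is spent in S+.
share_of_observed_time_in_splus = 0.9
rate_during_observed = (share_of_observed_time_in_splus * reinforcers_per_min_in_splus
                        + (1 - share_of_observed_time_in_splus) * reinforcers_per_min_in_sminus)

print(f"mixed stimulus:   {rate_during_mixed} reinforcers/min")
print(f"observed stimuli: {rate_during_observed} reinforcers/min")
```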
Article
Self-control, so important in the theory and practice of psychology, has usually been understood introspectively. This target article adopts a behavioral view of the self (as an abstract class of behavioral actions) and of self-control (as an abstract behavioral pattern dominating a particular act) according to which the development of self-control is a molar/molecular conflict in the development of behavioral patterns. This subsumes the more typical view of self-control as a now/later conflict in which an act of self-control is a choice of a larger but later reinforcer over a smaller but sooner reinforcer. If at some future time the smaller-sooner reinforcer will be more valuable than the larger-later reinforcer, self-control may be achieved through a commitment to the larger-later reinforcer prior to that point. According to some, there is a progressive internalization of commitment in the development of self-control. This presents theoretical and empirical problems. In two experiments – one with pigeons choosing between smaller-sooner and larger-later reinforcers, the other with adult humans choosing between short-term particular and long-term abstract reinforcers – temporal patterning of choices increased self-control.
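The now/later conflict described above is often modeled with hyperbolic discounting; the sketch below shows the preference reversal that makes prior commitment effective. The amounts, delays, and discount parameter are invented, and the hyperbolic form is an assumption of the sketch rather than something asserted in this abstract:

```python
# Sketch with invented values: hyperbolic discounting, V = A / (1 + k * D).
# Far in advance the larger-later reinforcer is preferred; close to the
# smaller-sooner reinforcer's availability the preference reverses.
def discounted_value(amount, delay_s, k=0.2):
    return amount / (1.0 + k * delay_s)

smaller_sooner = dict(amount=2.0, available_at=30.0)
larger_later = dict(amount=6.0, available_at=60.0)

for now in (0.0, 25.0):
    v_ss = discounted_value(smaller_sooner["amount"], smaller_sooner["available_at"] - now)
    v_ll = discounted_value(larger_later["amount"], larger_later["available_at"] - now)
    choice = "larger-later" if v_ll > v_ss else "smaller-sooner"
    print(f"t = {now:>4.0f} s: V(SS) = {v_ss:.2f}, V(LL) = {v_ll:.2f} -> {choice}")
```

A commitment response made at the earlier time, when the larger-later reinforcer still has the higher discounted value, forestalls the later reversal.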