Responsive Regulation of Treatment Duration in Routine Practice in
United Kingdom Primary Care Settings: Replication in a Larger Sample
William B. Stiles
Miami University
Michael Barkham
Centre for Psychological Services Research,
University of Sheffield
Janice Connell
University of Leeds
John Mellor-Clark
CORE Information Management Systems, Ltd.
Replicating a previous study (M. Barkham et al., 2006), the authors examined rates of improvement in
psychotherapy in United Kingdom primary care settings as a function of the number of sessions attended.
Included in the study were adult clients who returned valid assessments at the beginning and the end of
their treatment, had planned endings, began treatment above the clinical cutoff score, and were seen for
20 or fewer sessions (N = 9,703; 72.4% female; 87.7% Caucasian; average age = 40.9 years). Clients’
average assessment scores improved substantially across treatment, with a pretreatment–posttreatment
effect size of 1.96; 62.0% achieved reliable and clinically significant improvement (RCSI). Clients’ mean
pretreatment–posttreatment change was approximately constant regardless of treatment duration (in the
range of 0 to 20 sessions); the RCSI rate decreased slightly with treatment duration, as fewer clients fell
below the cutoff at longer durations. Results were interpreted as suggesting that therapists and clients
tend to make appropriately responsive decisions about treatment duration.
Keywords: good enough level, psychotherapy effectiveness, responsiveness
The dose–effect model of treatment duration seeks to determine
the optimal dose of psychotherapy, where dose is denominated in
sessions and more sessions is presumed to represent a stronger
dose (e.g., Barkham et al., 1996; Barkham, Rees, Stiles, Hardy, &
Shapiro, 2002; Barkham et al., 2006; Feaster, Newman, & Rice,
2003; Grissom, Lyons, & Lutz, 2002; Hansen & Lambert, 2003;
Howard, Kopta, Krause, & Orlinsky, 1986; Howard, Lueger, Ma-
ling, & Martinovich, 1993; Lueger et al., 2001; Lutz, Lowry,
Kopta, Einstein, & Howard, 2001; Lutz, Martinovich, Howard, &
Leon, 2002). Presented in this way, the model raises the obvious
question, “How much is enough?” (Kopta, 2003, p. 728). Howard
et al. (1986), for example, suggested, “The present meta-analysis
indicates that by 26 sessions, about 75% of patients have shown
some improvement. . . [I]n clinics that serve a large population
with limited resources, 26 sessions might be used as a rational time
limit” (p. 163).
The responsive regulation model approaches treatment duration
differently than does the dose–effect model, considering clients as
active decision makers rather than as passive recipients of services.
The responsive regulation model suggests that, in routine practice,
level of improvement and treatment duration are mutually regu-
lated so that treatments tend to end when clients have improved to
a good enough level (Barkham et al., 1996, 2006). This model
considers treatment duration in naturalistic settings as a manifes-
tation of responsiveness, that is, of behavior being influenced by
emerging context (Stiles, Honos-Webb, & Surko, 1998). Respon-
sive regulation may involve adjusting the length of therapy to
reflect degree of improvement, adjusting the degree of focus and
effort of therapeutic work in response to the available time, ad-
justing expectations, or various combinations of such strategies.
Characteristics of the clients, therapists, and settings, including the
nature and severity of the problems, the personalities of the par-
ticipants, and available resources, combine to affect how quickly
clients improve and when treatment can be terminated. Most
obviously, clients and therapists may agree to end treatment when
goals have been reached or when an acceptable amount of progress
has been made. In addition, however, imposing time limits may
accelerate the process of therapy (Eckert, 1993; Reynolds et al.,
1996) and the rate of improvement.
In support of the responsive regulation model, a previous inves-
tigation of the relation of number of sessions to treatment effects
in a routine practice sample showed that treatment effects were
William B. Stiles, Department of Psychology, Miami University;
Michael Barkham, Centre for Psychological Services Research, University
of Sheffield, Sheffield, United Kingdom; Janice Connell, Psychological
Therapies Research Centre, University of Leeds, Leeds, United Kingdom;
John Mellor-Clark, CORE (Clinical Outcomes in Routine Evaluation)
Information Management Systems, Rugby, United Kingdom.
Michael Barkham and Janice Connell were supported by the United
Kingdom National Health Service Priorities and Needs Research and
Development Levy. Michael Barkham and John Mellor-Clark received
funding from the United Kingdom Mental Health Foundation to develop
the Clinical Outcomes in Routine Evaluation—Outcome Measure (CORE-
OM), a measure used in this study. John Mellor-Clark runs a company that
supplies training, software support, and data analysis and benchmarking
services to users of the CORE system.
Correspondence concerning this article should be addressed to William
B. Stiles, Department of Psychology, Miami University, Oxford, OH
45056. E-mail: stileswb@muohio.edu
Journal of Consulting and Clinical Psychology, 2008, Vol. 76, No. 2, 298–305
Copyright 2008 by the American Psychological Association. 0022-006X/08/$12.00 DOI: 10.1037/0022-006X.76.2.298
constant or even decreased with number of sessions (Barkham et
al., 2006). The proportion of clients showing reliable and clinically
significant improvement (RCSI) was slightly higher among clients
who had only 1 or 2 sessions than among clients who had 11 or 12
sessions. The seemingly paradoxical inverse relation of improve-
ment rate with treatment duration presumably reflected clients’
and therapists’ dropping expectations for acceptable outcomes as
the costs of additional sessions in time, effort, and other resources
accumulated.
The responsive regulation model suggests that clients and ther-
apists may act together as rational decision makers. Appropriate
decisions regarding treatment duration may be reached by client
and therapist as a self-regulating process, rather than by adminis-
trative dictum or externally imposed criteria. Treatment is ended
by client and/or therapist when the client reaches his or her good
enough level. In effect, a naturally occurring mechanism regulates
continuation of treatment, just as it regulates moment-by-moment
processes within sessions (Stiles et al., 1998).
We sought further evidence regarding such a naturally occurring
process of regulation by replicating and extending our previous
study (Barkham et al., 2006) in a larger, more focused sample
gathered in routine primary care mental health settings in the
United Kingdom. We focused on the RCSI rates and mean im-
provement on the Clinical Outcomes in Routine Evaluation—
Outcome Measure (CORE–OM; Barkham et al., 2001; Barkham,
Gilbert, Connell, Marshall, & Twigg, 2005; C. Evans et al., 2002)
achieved by psychotherapy clients at the end of their treatment as
a function of the number of sessions they had attended. We
reasoned that if clients and therapists tend to end the therapy when
the good enough level is reached, then observed rates of improve-
ment should be approximately constant, or declining (reflecting
increasing costs), across sessions. Clinically, support for the re-
sponsive regulation model would strengthen the case for individ-
ualized rather than standardized decisions about treatment dura-
tion.
Method
Participants
We studied adult clients (N = 9,703) drawn from the CORE
National Research Database 2005 (described below), who returned
valid pre- and posttherapy assessment forms, began treatment in
the clinical range, completed 20 or fewer sessions, and were
described by their therapists as having had planned endings. End-
ings were considered as planned if the therapist and client agreed
to ending therapy or a previously planned course of therapy was
completed. The clients averaged 40.9 years old (SD = 12.8; range
16–99); 7,022 (72.4%) were women; 8,506 (87.7%) were White,
111 (1.1%) were Black, 144 (1.5%) were Asian, 214 (2.2%) listed
other or mixed ethnicity, and for 728 (7.5%) ethnicity was not
stated, not available, or missing. Most clients were not given
formal diagnoses, but therapists indicated their presenting prob-
lems using categories provided on the CORE assessment forms.
Clients’ problems included anxiety (indicated for 78.2% of the
clients), depression (74.0%), interpersonal relationship problems
(52.2%), low self-esteem (50.6%), bereavement/loss (35.9%),
work/academic problems (21.3%), trauma and abuse (19.5%), and
problems associated with living on welfare (14.9%), as well as
other problems less frequently cited (multiple problems were in-
dicated for most clients).
The clients were treated at 32 primary care services in the
United Kingdom by 445 therapists who each treated 1 to 217 of the
clients. Therapist characteristics were not recorded. Treatment
approaches were determined using the service’s normal procedures
and were recorded by the therapist at the end of therapy. The most
common approaches were integrative (40.4%), person-centered
(37.0%), structured/brief (31.4%), cognitive–behavioral (26.4%),
supportive (16.8%), and psychodynamic (16.1%). A majority
(52.6%) of treatments included more than one type of therapy.
Several of the services routinely offered a fixed number of ses-
sions, most often 6. These limits were administered flexibly,
however, and all services returned data from some clients seen for
more than 6 sessions.
CORE Measures
Outcome measure. The CORE–OM (Barkham et al., 2001,
2005; C. Evans et al., 2002) is a self-report inventory consisting of
34 items that address domains of subjective well-being, symptoms
(anxiety, depression, physical problems, trauma), functioning
(general functioning, close relationships, social relationships), and
risk (risk to self, risk to others). Half of the items focus on
low-intensity problems (e.g., “I feel anxious/nervous”), and half
focus on high-intensity problems (e.g., “I feel panic/terror”). Items
are scored on a 5-point (0–4) scale, with the response choices not
at all, only occasionally, sometimes, often, and all or most of the
time. Forms are considered valid if up to 3 items are omitted (C.
Evans et al., 2002). CORE–OM clinical scores are computed as the
mean of all completed items, which is then multiplied by 10, so
that clinically meaningful differences are represented by whole
numbers. Thus, CORE–OM clinical scores can range from 0 to 40.
The 34-item scale has a reported internal consistency of .94
(Barkham et al., 2001), with test–retest correlations ≥ .80 for
intervals of up to 4 months in an outpatient sample (Barkham,
Mullin, Leach, Stiles, & Lucock, 2007).
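The scoring rule just described can be sketched in a few lines. This is an illustrative implementation of the stated rule only; the function name is ours and it is not part of the CORE system.

```python
def core_om_clinical_score(responses):
    """CORE-OM clinical score: mean of completed items, multiplied by 10.

    responses: a list of 34 item responses, each 0-4, with None for an
    omitted item. Returns None if the form is invalid (more than 3
    items omitted), per C. Evans et al. (2002).
    """
    if len(responses) != 34:
        raise ValueError("the CORE-OM has 34 items")
    completed = [r for r in responses if r is not None]
    if 34 - len(completed) > 3:
        return None  # form invalid: too many omitted items
    return 10 * sum(completed) / len(completed)
```

For example, answering sometimes (2) to every item yields a clinical score of 20.0, above the clinical cutoff of 10 discussed below.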
The recommended clinical cutoff, dividing the dysfunctional
from the normal populations, is a CORE–OM score of 10. Connell
et al. (2007) derived this cutoff by applying formulas recom-
mended by Jacobson and Truax (1991) to discriminate optimally
between a clinical sample and a systematic general population
sample. The score corresponds to an average response of only
occasionally (albeit encompassing great variation) to the question
of how often during the past week the respondent had experienced
each of the psychological symptoms listed on the CORE–OM.
Therapist assessments. The CORE assessment (Mellor-Clark
& Barkham, 2006; Mellor-Clark, Barkham, Connell, & Evans,
1999) is comprised of the Therapist Assessment form, completed
at intake, and the End of Therapy form. On the Therapist Assess-
ment form, therapists gave referral information; client demograph-
ics; and data on the nature, severity, and duration of presenting
problems using the following 14 categories: depression, anxiety,
psychosis, personality problems, cognitive/learning difficulties,
eating disorder, physical problems, addictions, trauma/abuse, be-
reavement, self-esteem, interpersonal problems, living/welfare,
and work/academic. On the End of Therapy form, therapists re-
ported information about the completed treatment, including num-
ber of sessions the client attended, whether the ending was planned
or unplanned, and the type(s) of therapy undertaken with the client.
Procedure
Selection of clients. The 9,703 clients we studied were se-
lected from the CORE National Research Database 2005 (Mellor-
Clark, Curtis Jenkins, Evans, Mothersole, & McInnes, 2006),
which includes information on 33,587 clients (69.4% female;
mean age = 38.5 years, SD = 13.1) whose therapist returned a
Therapist Assessment form at 1 of 34 primary care counseling
services in the National Health Service (NHS) of the United
Kingdom during the period of April 2002 through September
2005. These services had been using the Personal Computer format
of the CORE system (CORE–PC; Mellor-Clark, 2003; Mellor-
Clark & Barkham, 2006) for at least 2 years and represented
approximately 50% of the NHS primary care psychological ther-
apy services using the CORE common methodology (R. Evans,
Mellor-Clark, Barkham, & Mothersole, 2006). This database com-
prises data donated by services that were involved in ongoing data
collection, rather than a time-limited study.
We selected all adult clients from this database who returned
valid pre- and posttherapy CORE–OM forms, were described by
their therapists as having planned endings, had pretreatment
CORE–OM scores of 10 or greater, and completed 20 or fewer
sessions (N = 9,703). A majority of the clients in the database
were omitted because they did not return valid CORE–OM forms:
5,327 did not return valid pre- or posttherapy forms; 569 returned
posttherapy but not pretherapy forms; and 14,945 returned pre-
therapy but not posttherapy forms. The latter, largest category
included clients who did not attend any sessions, clients who
attended sessions but left without completing the final form, and
clients who had not ended their treatment by the closing date of
data collection.
A further 1,384 clients returned valid forms but did not have a
planned ending. Clients with planned endings were far more likely
to have completed the measures. In the full data set (N = 33,587),
valid pre- and posttreatment CORE–OM forms were returned by
78.2% of the 14,434 clients with planned endings but only by 7.3%
of the 19,053 who were not reported to have had planned endings.
An additional 10 clients had reported ages below 16 years and
were not included (though most of these ages were listed as below
10 years and thus probably reflected data entry errors).
A further 1,235 clients were not included because they began
treatment below the clinical cutoff, a score of 10 on the
CORE–OM (Connell et al., 2007). We considered only clients who
began treatment at or above the clinical cutoff because we were
focusing on RCSI rates. Clients who began treatment below the
clinical threshold were not in the clinical population to begin with,
so they could not move from the clinical to the normal population
(part of the definition of RCSI) regardless of how much they
improved (see Barkham et al., 2006; Gray, 2003; Hansen, Lam-
bert, & Forman, 2002, 2003).
Finally, we restricted the sample to clients whose therapists
reported (on the End of Therapy form) that they had completed
between 0 and 20 sessions. We excluded 414 clients who had met
previous criteria (valid pre- and posttreatment CORE–OM,
planned ending, pretreatment CORE–OM of 10 or higher): 336
whose therapists did not record the number of sessions attended
and 78 who received more than 20 sessions (range = 21–98). We
omitted the latter because the numbers of clients receiving partic-
ular numbers of sessions were too small to estimate RCSI rates
reliably.
Data collection. Details of data collection procedures were
determined by the sites. Clients completed the CORE–OM at
intake (i.e., before any intervention), during screening or assessment,
or immediately before the first therapy session. The post-
therapy CORE–OM was administered at or after the last session.
The timing and specific procedures for this were determined by
what worked best for each service administratively and were not
recorded. Therapists completed the Therapist Assessment form at
intake and the End of Therapy form when treatment had ended.
Data collection complied with applicable data protection proce-
dures for the use of routinely collected clinical data. Anonymized
data were entered electronically at the site, collected by CORE
Information Management Systems Ltd., and sent to the University
of Leeds for aggregation and analysis.
Reliable and Clinically Significant Improvement
Following Jacobson and Truax (1991), we held that clients had
achieved RCSI if they entered treatment in a dysfunctional state
and left treatment in a normal state, having changed to a degree
that was probably not due to measurement error. We used a
CORE–OM score of 10 as the clinical cutoff, dividing the dys-
functional from the normal populations, following the recommen-
dation by Connell et al. (2007). The reliable change index, a
pretreatment–posttreatment difference that, when divided by the
standard error of the difference, equals 1.96, depends on (a) the
standard deviation of the pretreatment–posttreatment difference
and (b) the reliability of the measure (see Jacobson & Truax, 1991,
for formulas). Using SD(difference) = 6.65 (based on the 12,746
clients in the CORE National Research Database 2005 who had
valid pre- and posttreatment CORE–OM scores) and the reported
internal consistency reliability of .94 (Barkham et al., 2001)
yielded a reliable change index of 4.5.
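As a check on the arithmetic, the Jacobson and Truax (1991) computation can be sketched as follows (the function name is ours); plugging in the values reported above reproduces the 4.5-point index.

```python
import math

def reliable_change_criterion(sd, reliability, z=1.96):
    """Smallest pre-post difference unlikely (p < .05, two-tailed) to
    reflect measurement error alone (Jacobson & Truax, 1991)."""
    se_measurement = sd * math.sqrt(1 - reliability)    # standard error of measurement
    se_difference = math.sqrt(2 * se_measurement ** 2)  # standard error of the difference
    return z * se_difference

rci = reliable_change_criterion(sd=6.65, reliability=0.94)
print(round(rci, 1))  # 4.5
```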
Results
Improvement Rates for Full Sample
Among the 9,703 clients we studied, 6,019 (62.0%) achieved
RCSI. That is, having begun treatment with a CORE–OM score at
or above 10, the clients left with a CORE–OM score below 10,
having changed by at least 4.5 points. An additional 1,879 clients
(19.4%) showed reliable improvement only, that is, their
CORE–OM scores dropped by 4.5 or more points but did not fall
below 10. Reliable deterioration (i.e., an increase of 4.5 or more
points) was shown by 100 clients (1.0%). The remaining 1,705
clients (17.6%) showed no reliable change.
Whereas 7,898 (81.4%) of the clients showed reliable improve-
ment, only 6,288 (64.8%) ended treatment below the cutoff (clin-
ically significant change, in Jacobson & Truax’s, 1991, terms), just
2.8% more than achieved RCSI. That is, in this sample, with these
parameters, the cutoff (leaving treatment in a normal state) was a
more powerful determinant of RCSI than was the reliable change
index (changing to a degree not attributable to measurement error).
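The four outcome categories just described reduce to a simple decision rule. The following is an illustrative sketch using the study's clinical cutoff of 10 and reliable change index of 4.5; the names are ours.

```python
CLINICAL_CUTOFF = 10.0  # CORE-OM clinical cutoff (Connell et al., 2007)
RCI = 4.5               # reliable change index from the Method section

def classify_outcome(pre, post):
    """Classify one client's pre/post CORE-OM clinical scores.

    Assumes pre >= CLINICAL_CUTOFF, the sample's inclusion criterion.
    """
    if pre - post >= RCI:
        # Reliable improvement; RCSI additionally requires ending
        # treatment below the clinical cutoff.
        return "RCSI" if post < CLINICAL_CUTOFF else "reliable improvement only"
    if post - pre >= RCI:
        return "reliable deterioration"
    return "no reliable change"
```

A client moving from the sample's mean intake score of 18.8 to the mean posttherapy score of 8.8, for example, would be classified as having achieved RCSI.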
The clients’ mean CORE–OM clinical score was 18.8 (SD =
5.1) at intake and 8.8 (SD = 6.1) at posttherapy, a mean difference
of 10.0, SD = 6.3, t(9702) = 157.18, p < .001, yielding a
pretreatment–posttreatment effect size (difference divided by pre-
treatment standard deviation) of 1.96. Clients’ pretreatment scores
were correlated r = .38 (p < .001, N = 9,703) with their
posttreatment scores.
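The effect-size arithmetic in the preceding paragraph is simply the mean change divided by the pretreatment standard deviation, using the full-sample values reported above:

```python
pre_mean, post_mean, pre_sd = 18.8, 8.8, 5.1  # full-sample values from the text

mean_change = pre_mean - post_mean  # approximately 10.0
effect_size = mean_change / pre_sd  # approximately 1.96
```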
Improvement Rates by Session
Within the parameters of our study, clients who attended fewer
sessions were slightly more likely to have achieved RCSI by the
end of treatment than were those who attended more sessions.
Table 1 shows the number and percentage of clients who achieved
RCSI as a function of the number of sessions they attended. Most
of the RCSI rates were between 50% and 75%, though two
categories with small numbers of clients (18 sessions and 20
sessions) were lower. RCSI rates were negatively correlated with
number of sessions attended (r = −.75, p < .001, n = 21
categories), replicating a previous finding (Barkham et al., 2006).
The proportion of clients achieving reliable improvement—that
is, who improved by 4.5 or more CORE–OM points, regardless of
whether they ended below 10 —was approximately constant re-
gardless of the number of sessions attended. Table 1 (last two
columns) shows the number and percentage of clients who
achieved reliable improvement. These rates are, of course, higher
than the RCSI rates because they include clients who achieved
RCSI. Reliable improvement rates were not significantly corre-
lated with number of sessions attended (r = .11, ns, n = 21).
The “0 sessions” entry in Table 1 represents clients who re-
turned to the site, perhaps after some time on a waiting list, agreed
with their therapist that formal treatment was no longer indicated,
and completed a second (nominally posttreatment) CORE–OM.
Although there were only a few such clients, their relatively high
rate of improvement was consistent with the broader pattern, and
we consider this to be a logical beginning of our continuum,
reflecting appropriate decisions by participants. We return to this
issue in the Discussion.
Like reliable improvement, mean pretreatment–posttreatment
change was approximately constant regardless of the number of
sessions attended. As shown in Table 2, the mean pretreatment–
posttreatment change scores varied around the overall mean of 10,
and the effect size varied around the overall effect size of 1.96.
Neither was significantly correlated with number of sessions.
The clients who attended large numbers of sessions tended to
have relatively higher CORE–OM scores both before and after
treatment; pretreatment and posttreatment means are shown in
Table 2. The mean pretreatment and posttreatment CORE–OM
scores were strongly correlated with sessions attended across the
21 categories (r = .93, p < .001, for pretreatment; r = .70, p <
.001, for posttreatment). However, the correlations of individual
clients’ pre- and posttreatment CORE–OM scores with the number
of sessions attended, though reliably positive, were far smaller,
reflecting large within-category variation in CORE–OM scores
(r = .16, p < .001, for pretreatment; r = .16, p < .001, for
posttreatment, N = 9,703).
Discussion
In this large routine practice sample, clients who began treat-
ment above the CORE–OM clinical cutoff score and completed
treatment (evidenced by therapist-reported planned endings)
showed very substantial gains, whether calculated in terms of
RCSI rates (Table 1) or mean change on the CORE–OM (Table 2).
The changes were comparable in magnitude to those observed
among clients who have completed treatment in randomized effi-
cacy trials (cf. Barkham, Stiles, Twigg, et al., 2008).
The average improvement these clients showed on the
CORE–OM was virtually independent of how long they remained
in therapy, up to 20 sessions (see Tables 1 and 2). The outcomes
of clients who had one or two sessions, for example, were at least
as positive as those of clients who had 15 or 16 sessions. This
constant relation of improvement rates with treatment duration
replicated our previous findings (Barkham et al., 2006) using a
sample more than six-fold larger (N = 9,703 versus N = 1,472).
The result may seem paradoxical and surprising if treatment is
considered as an independent variable in an experimental manip-
ulation, but it seems clinically sensible if clients and therapists are
considered as responsively ending treatment when a good enough
level has been reached. Insofar as clients change at different rates,
they achieve a satisfactory level of gains at different treatment
durations, and they (working with their therapists) end treatment
when this happens.
The moderate but reliable decline in RCSI rates across sessions
among clients who began above the clinical cutoff (Table 1)
Table 1
Rates of Reliable and Clinically Significant Improvement (RCSI)
as a Function of Number of Sessions Attended
No. sessions                    RCSI          Reliable improvement
attended          N           N      %            N       %
  0              17          12    70.6          15     88.24
  1              67          50    74.6          56     83.58
  2             413         297    71.9         354     85.71
  3             747         554    74.2         648     86.75
  4           1,030         747    72.5         891     86.50
  5           1,274         861    67.6       1,062     83.36
  6           2,674       1,587    59.3       2,126     79.51
  7             905         527    58.2         743     82.10
  8             958         527    55.0         753     78.60
  9             373         213    57.1         296     79.36
 10             326         185    56.7         257     78.83
 11             213         111    52.1         160     75.12
 12             428         197    46.0         316     73.83
 13              59          35    59.3          50     84.75
 14              44          25    56.8          33     75.00
 15              49          26    53.1          38     77.55
 16              37          22    59.5          32     86.49
 17              25          15    60.0          22     88.00
 18              29          10    34.5          17     58.62
 19              13           8    61.5          11     84.62
 20              22          10    45.5          18     81.82
Total         9,703       6,019    62.0       7,898     81.40
Note. Only clients with planned endings whose initial CORE–OM score
was at or above the clinical cutoff of 10 were included. RCSI = reliable
and clinically significant improvement, defined as having a posttreatment
CORE–OM score below 10 and having changed by at least 4.5 points.
Reliable improvement = change of at least 4.5 points regardless of post-
treatment score (and thus includes clients who achieved RCSI).
represents a replicated effect and suggests that a client’s good
enough level is influenced by costs (Barkham et al., 2006; Rear-
don, Cukrowicz, Reeves, & Joiner, 2002). That is, clients and
therapists may tend to be satisfied with somewhat less as more
time and effort are required. The observation that the rate of
reliable improvement (Table 1) and the average change score
(Table 2) were roughly constant across treatment durations, rather
than declining, suggests that the adjustment takes the form of
settling for a higher final level of symptom intensity.
The correlation of clients’ pretreatment CORE–OM scores with
the number of sessions they attended suggests, plausibly, that the
more severely distressed clients tended, responsively, to remain in
treatment longer. The similar correlation of posttreatment
CORE–OM scores with number of sessions suggests that treatment
was not consistently successful in overcoming the initial differ-
ences. Both of these correlations were very modest when they were
calculated across clients, suggesting that initial distress was only
one of many factors influencing treatment duration.
Alternative Interpretations of the Dose–Effect Model
In considering clients and therapists as informed, active agents
and considering decisions about treatment duration as responsive
to emerging circumstances (e.g., gains achieved thus far), the
responsive regulation model challenges the usual way of thinking
about dose–effect relations in psychotherapy research. As we
discussed in our previous article (Barkham et al., 2006), the usual
interpretation, which considers dose–effect curves as applying to
individual psychotherapy clients (e.g., Howard et al., 1986; Kopta,
2003; Kopta et al., 1994), resembles the concept’s use in medicine,
where the dose effect (or dose response) is the physiological
response observed when otherwise equal individuals are given
differing amounts of a compound. An alternative population in-
terpretation of the dose–effect model is used in agriculture, for
example, to assess the effectiveness of insecticides. In that context,
the dose–effect curve documents the number of insects that remain
alive after varying doses. Applied to psychotherapy, the population
interpretation suggests that the easy-to-treat clients respond
quickly and leave treatment, so that only the hard-to-treat clients
remain in later sessions. This population interpretation is consis-
tent with the distribution of clients ending treatment at each
numbered session (see Tables 1 and 2): relatively few for whom
low doses were sufficient or for whom high doses were needed,
with most clients achieving a good enough level after a mid-range
number of sessions.
The responsive regulation model elaborates the population
dose–effect interpretation by adding the concept that results can be
judged as good enough by participants, using their own standards
for what is acceptable, given requirements and costs. That is, the
responsive regulation model suggests that decisions about termi-
nation can be reached by participants rather than requiring admin-
istrative decisions about the optimum amount of treatment. The
results argue for letting therapists and clients do what they do
without imposing standard limits based on averages, even limits
guided by evidence from dose–effect research.
In contrast to the responsive regulation in naturalistic condi-
tions, duration may have a positive relation to mean improvement
in studies that impose fixed treatment durations. Under such ex-
perimental conditions, some treatments are terminated before the
clients are ready, whereas other clients remain in therapy and may
continue to improve after they would otherwise have stopped (e.g.,
see Barkham et al., 2002).
In the NHS, therapy sometimes may be reported as completed
before clients have recovered. By RCSI criteria, 38% of our
sample of completers failed to recover (see Table 2). Some clients
Table 2
Mean Clinical Outcomes in Routine Evaluation—Outcome Measure (CORE–OM) Scores as a Function of Number of Sessions Attended
No. sessions    No.                                      Pretreatment–posttreatment
attended        clients    Pretreatment    Posttreatment          difference           Effect size
  0                17         17.97             7.25                 10.72                 2.10
  1                67         17.41             7.45                  9.96                 1.95
  2               413         17.36             7.17                 10.19                 2.00
  3               747         17.84             7.07                 10.77                 2.11
  4             1,030         17.80             7.30                 10.50                 2.06
  5             1,274         18.42             8.09                 10.33                 2.03
  6             2,674         18.80             9.13                  9.67                 1.90
  7               905         19.30             9.26                 10.04                 1.97
  8               958         19.21             9.81                  9.40                 1.84
  9               373         19.59             9.53                 10.06                 1.97
 10               326         20.12             9.83                 10.29                 2.02
 11               213         19.60             9.84                  9.76                 1.91
 12               428         20.10            11.23                  8.87                 1.74
 13                59         20.06             9.78                 10.28                 2.02
 14                44         21.85            10.50                 11.35                 2.22
 15                49         21.07            10.79                 10.28                 2.01
 16                37         22.80             9.13                 13.67                 2.68
 17                25         20.57             9.29                 11.28                 2.21
 18                29         21.09            13.59                  7.50                 1.47
 19                13         22.98             7.81                 15.17                 2.97
 20                22         21.77            11.36                 10.41                 2.04
Note. Effect size was calculated as the pretreatment–posttreatment difference divided by the full sample pretreatment standard deviation (5.1).
or therapists may give up hope or comply with contextual pres-
sures to settle for less than full recovery. The slightly declining
RCSI rates across durations suggest that such pressures may build
as durations lengthen.
The responsive regulation model converges with the phase
model extension of the dose–effect model, which suggests that a
client’s rate of improvement varies depending on the characteris-
tics of the problem (e.g., feelings of distress seem to change more
quickly than interpersonal relations and characterological issues;
Grissom et al., 2002; Howard et al., 1993; Kopta, Howard, Lowry,
& Beutler, 1994). The responsive regulation model adds that
improvement rates may vary with other characteristics of the client
(e.g., personal resources, external stressors) or characteristics of
the treatment (e.g., therapist skill, limitation to a greater or lesser
number of sessions) and that these multiple factors are routinely
and responsively integrated into decision making by the partici-
pants. It suggests a self-regulated alternative to stepped care, an
approach in which clients whose problems do not respond to brief
treatment are selectively allocated additional sessions or more
intensive treatments (e.g., Bower & Gilbody, 2005).
The Good Enough Level and Recovery
The concept of RCSI was developed as an attempt to specify
and quantify the medical notion of “recovered” (Jacobson &
Truax, 1991). The responsive regulation model offers an alterna-
tive approach to evaluating the benefits of psychotherapy, one that
considers the acceptability of gains in the context of the client’s
requirements, the resources available (including those specific to
the particular therapist and context), and the personal and institu-
tional costs. The good enough level is a way of naming or char-
acterizing what effect is acceptable and raises the question of how
this varies across clients and contexts. We do not know how clients
and therapists reach their decisions and, in this sense, the model
points to a need for theory and research, perhaps qualitative
research, on this process. Some client–therapist pairs may make
more appropriate decisions than others do. Presumably the good
enough level criteria differ for different clients and depend on
social context and many other factors besides current symptom
intensity, as measured by the CORE–OM, so that different clients
may end treatment with different CORE–OM scores. Of course,
the decision to end therapy may or may not involve consulting
with the therapist and may or may not be deliberate or explicit.
An individual's good enough level criteria may therefore differ from the criteria we used for RCSI. The systematic relations between treatment duration and measures of improvement shown in Tables 1 and 2 represent aggregate statistical effects, incorporating individual differences in good enough level.
Methodological Issues
Our results cannot be generalized to the sizable number of
clients in routine practice who do not complete posttreatment forms
or who leave treatment before their planned ending (see Connell,
Grant, & Mullin, 2006). On the one hand, there is evidence that
clients who do not return after treatment to complete measures
tend to have made smaller gains than clients who do return (Stiles
et al., 2003). On the other hand, many clients who do not appear
for scheduled sessions may have found other sources of help
(Snape, Perren, Jones, & Rowland, 2003) or feel that they have
achieved their goals (Hunsley, Aubry, Verstervelt, & Vito, 1999).
Although the reporting level approached 100% for some services, suggesting that obtaining nearly complete data sets is achievable, overall reporting levels were lower than desirable. Approaches that might address this issue involve combinations of better management to collect posttreatment data and use of emerging low-cost technologies for tracking change session by session, so that data are available for clients who decide not to return.
The large number of clients who did not return posttreatment CORE–OM forms raised concerns about possible selective reporting. As reported elsewhere (Barkham, Stiles, Connell, & Mellor-Clark, 2008), we addressed one such concern by examining data from the 343 therapists who saw 15 or more clients; these included 31,966 (95%) of the clients in the CORE National Research Database 2005. We reasoned that if therapists were selectively influencing their good-outcome clients to return posttreatment forms, improvement rates would be negatively correlated with reporting rates; that is, the more selective therapists would tend to have higher improvement rates. We found that the proportion of clients who returned posttreatment forms varied hugely across services and therapists (from less than 10% to more than 90%), but this rate of return was essentially uncorrelated with indexes of improvement. Thus, perhaps therapists were not selectively reporting after all. Alternatively, perhaps therapists tried to report selectively but failed. Successful selective reporting would require therapists to know which of their cases would do well or poorly on their posttreatment CORE–OM. However, other research suggests that most therapists think most of their cases did very well, and their judgments bear little relation to degree of client improvement on standard measures (Hannan et al., 2005; Hunsley et al., 1999). If therapists cannot discriminate successful from unsuccessful clients, they are unlikely to be able to select the successful ones.
Finally, the parameters used to calculate RCSI are variable and somewhat arbitrary, so comparisons across studies or measures should be made with caution. In this sample, RCSI rates were lower than in our previous study (Barkham et al., 2006), primarily because we used a more recent and lower cutoff dividing the clinical and normal populations (see Connell et al., 2007). In addition, we could have obtained a larger reliable change index by calculating the standard error of the difference using the CORE–OM's 1-month test–retest reliability (.88; Barkham, Mullin, et al., 2007) rather than the internal consistency reliability (.94; Barkham et al., 2001).
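The effect of that choice can be seen by working through the Jacobson and Truax (1991) computation. The sketch below is illustrative only: the standard deviation is a placeholder, not a value from this study; only the two reliability coefficients (.94 and .88) come from the sources cited above.

```python
import math

def reliable_change_criterion(sd, reliability, z=1.96):
    """Jacobson-Truax (1991) reliable change criterion: the smallest
    pre-post difference unlikely (p < .05, two-tailed) to arise from
    measurement error alone."""
    se_measurement = sd * math.sqrt(1 - reliability)  # standard error of measurement
    s_diff = math.sqrt(2 * se_measurement ** 2)       # standard error of the difference score
    return z * s_diff

sd = 1.0  # placeholder pretreatment SD, for illustration only
rc_internal = reliable_change_criterion(sd, 0.94)  # internal consistency reliability
rc_retest = reliable_change_criterion(sd, 0.88)    # 1-month test-retest reliability
# The lower test-retest reliability yields the larger criterion,
# i.e., a larger reliable change index, as noted above.
```

Because the criterion scales with the square root of 1 minus the reliability coefficient, any drop in reliability widens the band of change that must be attributed to measurement error.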
Clinical and Service Delivery Implications
The central clinical implication of the responsive regulation
model is that participants appear to have, and responsively use,
information about the amount of therapy that clients need in
deciding when to end therapy (cf. Stiles et al., 1998). The model
implies that clients may not just keep showing up for sessions until
they are made to stop. For some clients, treatments as short as two
or three sessions may yield clinically significant gains (cf.
Barkham, Shapiro, Hardy, & Rees, 1999), and for others, time
limits may be an effective means of accelerating progress (cf.
Eckert, 1993; Reynolds et al., 1996). Our results suggest that there
are large individual differences in how quickly clients’ problems
respond to treatment, so that imposing standardized limits across
clients seems inappropriate. These findings, therefore, have particular relevance in a climate where limited resources tempt administrators and policy makers to impose fixed durations of treatment.
Insofar as the good enough level incorporates the limited availability of resources and institutional pressures to distribute them widely, the improvement rates we reported reflected service delivery practices within the NHS services we studied, as well as client needs and therapist judgments. Strained services in the NHS and elsewhere may impose pressures to terminate treatments that are going beyond usual limits. Such pressures could encourage therapists and clients to settle for lesser goals, such as reductions in symptom intensity, which occur more quickly than characterological changes or action that addresses problematic interpersonal relationships (Barkham et al., 2002; Kopta et al., 1994).
The many clients who drop out before completing treatment or
who never begin treatment in the first place also deserve attention
by researchers. The substantial mean improvement we observed
among those who did complete treatment is a strong argument for
improving access to psychotherapy.
References
Barkham, M., Connell, J., Stiles, W. B., Miles, J. N. V., Margison, F., Evans, C., & Mellor-Clark, J. (2006). Dose–effect relations and responsive regulation of treatment duration: The good enough level. Journal of Consulting and Clinical Psychology, 74, 160–167.
Barkham, M., Gilbert, N., Connell, J., Marshall, C., & Twigg, E. (2005). Suitability and utility of the CORE–OM and CORE–A for assessing severity of presenting problems in psychological therapy services based in primary and secondary care settings. British Journal of Psychiatry, 186, 239–246.
Barkham, M., Margison, F., Leach, C., Lucock, M., Mellor-Clark, J., Evans, C., Benson, L., Connell, J., Audin, K., & McGrath, G. (2001). Service profiling and outcomes benchmarking using the CORE–OM: Towards practice-based evidence in the psychological therapies. Journal of Consulting and Clinical Psychology, 69, 184–196.
Barkham, M., Mullin, T., Leach, C., Stiles, W. B., & Lucock, M. (2007). Stability of the CORE–OM and BDI–I prior to therapy: Evidence from routine practice. Psychology and Psychotherapy: Theory, Research and Practice, 80, 269–278.
Barkham, M., Rees, A., Stiles, W. B., Hardy, G. E., & Shapiro, D. A. (2002). Dose–effect relations for psychotherapy of mild depression: A quasi-experimental comparison of effects of 2, 8, and 16 sessions. Psychotherapy Research, 12, 463–474.
Barkham, M., Rees, A., Stiles, W. B., Shapiro, D. A., Hardy, G. E., & Reynolds, S. (1996). Dose–effect relations in time-limited psychotherapy for depression. Journal of Consulting and Clinical Psychology, 64, 927–935.
Barkham, M., Shapiro, D. A., Hardy, G. E., & Rees, A. (1999). Psychotherapy in two-plus-one sessions: Outcomes of a randomized controlled trial of cognitive–behavioral and psychodynamic–interpersonal therapy. Journal of Consulting and Clinical Psychology, 67, 201–211.
Barkham, M., Stiles, W. B., Connell, J., & Mellor-Clark, J. (2008). Psychological treatment outcomes in routine NHS services: What do we mean by treatment effectiveness? Manuscript submitted for publication.
Barkham, M., Stiles, W. B., Twigg, E., Connell, J., Leach, C., Lucock, M., Mellor-Clark, J., Bower, P., King, M., Shapiro, D. A., Hardy, G. E., Greenberg, L. S., & Angus, L. (2008). Effects of psychological therapies in efficacy studies and effectiveness studies. Manuscript submitted for publication.
Bower, P., & Gilbody, S. (2005). Stepped care in psychological therapies: Access, effectiveness and efficiency. British Journal of Psychiatry, 186, 11–17.
Connell, J., Barkham, M., Stiles, W. B., Twigg, E., Singleton, N., Evans, O., & Miles, J. N. V. (2007). Distribution of CORE–OM scores in a general population, clinical cut-off points, and comparison with the CIS-R. British Journal of Psychiatry, 190, 69–74.
Connell, J., Grant, S., & Mullin, T. (2006). Client initiated termination of therapy at NHS primary care counselling services. Counselling and Psychotherapy Research, 6, 60–67.
Eckert, P. A. (1993). Acceleration of change: Catalysts in brief therapy. Clinical Psychology Review, 13, 241–253.
Evans, C., Connell, J., Barkham, M., Margison, F., Mellor-Clark, J., McGrath, G., & Audin, K. (2002). Towards a standardised brief outcome measure: Psychometric properties and utility of the CORE–OM. British Journal of Psychiatry, 180, 51–60.
Evans, R., Mellor-Clark, J., Barkham, M., & Mothersole, G. (2006). Developing the resources and management support for routine evaluation in counselling and psychological therapy service provision: Reflections on a decade of CORE development. European Journal of Psychotherapy and Counselling, 8, 141–161.
Feaster, D. J., Newman, F. L., & Rice, C. (2003). Longitudinal analysis when the experimenter does not determine when treatment ends: What is dose–response? Clinical Psychology and Psychotherapy, 10, 352–360.
Gray, G. V. (2003). Psychotherapy outcomes in naturalistic settings: A reply to Hansen, Lambert, and Forman. Clinical Psychology: Science and Practice, 10, 505–507.
Grissom, G. R., Lyons, J. S., & Lutz, W. (2002). Standing on the shoulders of a giant: Development of an outcome management system based on the dose model and phase model of psychotherapy. Psychotherapy Research, 12, 397–412.
Hannan, C., Lambert, M. J., Harmon, C., Nielsen, S. L., Smart, D. W., Shimokawa, K., & Sutton, S. W. (2005). A lab test and algorithms for identifying clients at risk for treatment failure. Journal of Clinical Psychology, 61, 155–163.
Hansen, N. B., & Lambert, M. J. (2003). An evaluation of the dose–response relationship in naturalistic treatment settings using survival analysis. Mental Health Services Research, 5, 1–12.
Hansen, N. B., Lambert, M. J., & Forman, E. M. (2002). The psychotherapy dose–response effect and its implications for treatment delivery services. Clinical Psychology: Science and Practice, 9, 329–343.
Hansen, N. B., Lambert, M. J., & Forman, E. M. (2003). The psychotherapy dose effect in naturalistic settings revisited: Response to Gray. Clinical Psychology: Science and Practice, 10, 507–508.
Howard, K. I., Kopta, S. M., Krause, M. S., & Orlinsky, D. E. (1986). The dose–effect relationship in psychotherapy. American Psychologist, 41, 159–164.
Howard, K. I., Lueger, R. J., Maling, M. S., & Martinovich, Z. (1993). A phase model of psychotherapy outcome: Causal mediation of change. Journal of Consulting and Clinical Psychology, 61, 678–685.
Hunsley, J., Aubry, T. D., Verstervelt, C. M., & Vito, D. (1999). Comparing therapist and client perspectives on reasons for psychotherapy termination. Psychotherapy, 36, 380–388.
Jacobson, N. S., & Truax, P. (1991). Clinical significance: A statistical approach to defining meaningful change in psychotherapy research. Journal of Consulting and Clinical Psychology, 59, 12–19.
Kopta, S. M. (2003). The dose–effect relationship in psychotherapy: A defining achievement for Dr. Kenneth Howard. Journal of Clinical Psychology, 59, 727–733.
Kopta, S. M., Howard, K. I., Lowry, J. L., & Beutler, L. E. (1994). Patterns of symptomatic recovery in psychotherapy. Journal of Consulting and Clinical Psychology, 62, 1009–1016.
Lueger, R. J., Howard, K. I., Martinovich, Z., Lutz, W., Anderson, E. E., & Grissom, G. R. (2001). Assessing treatment progress of individual patients using expected treatment response models. Journal of Consulting and Clinical Psychology, 69, 150–158.
Lutz, W., Lowry, J., Kopta, S., Einstein, D. A., & Howard, K. I. (2001). Prediction of dose–response relations based on patient characteristics. Journal of Clinical Psychology, 57, 889–900.
Lutz, W., Martinovich, Z., Howard, K. I., & Leon, S. C. (2002). Outcomes management, expected treatment response, and severity-adjusted provider profiling in outpatient psychotherapy. Journal of Clinical Psychology, 58, 1291–1304.
Mellor-Clark, J. (2003). National innovations in the evaluation of psychological therapy service provision. Journal of Primary Care Mental Health, 7, 82–85.
Mellor-Clark, J., & Barkham, M. (2006). The CORE System: Developing and delivering practice-based evidence through quality evaluation. In C. Feltham & I. Horton (Eds.), Handbook of counselling and psychotherapy (2nd ed., pp. 207–224). London: Sage.
Mellor-Clark, J., Barkham, M., Connell, J., & Evans, C. (1999). Practice-based evidence and the need for a standardised evaluation system: Informing the design of the CORE System. European Journal of Psychotherapy, Counselling and Health, 3, 357–374.
Mellor-Clark, J., Curtis Jenkins, A., Evans, R., Mothersole, G., & McInnes, B. (2006). Resourcing a CORE network to develop a National Research Database to help enhance psychological therapy and counselling service provision. Counselling and Psychotherapy Research, 6, 16–22.
Reardon, M. L., Cukrowicz, K. C., Reeves, M. D., & Joiner, T. E., Jr. (2002). Duration and regularity of therapy attendance as predictors of treatment outcome in an adult outpatient population. Psychotherapy Research, 12, 273–285.
Reynolds, S., Stiles, W. B., Barkham, M., Shapiro, D. A., Hardy, G. E., & Rees, A. (1996). Acceleration of changes in session impact during contrasting time-limited psychotherapies. Journal of Consulting and Clinical Psychology, 64, 577–586.
Snape, C., Perren, S., Jones, L., & Rowland, N. (2003). Counselling—Why not? A qualitative study of people's accounts of not taking up counselling appointments. Counselling and Psychotherapy Research, 3, 239–245.
Stiles, W. B., Honos-Webb, L., & Surko, M. (1998). Responsiveness in psychotherapy. Clinical Psychology: Science and Practice, 5, 439–458.
Stiles, W. B., Leach, C., Barkham, M., Lucock, M., Iveson, S., Shapiro, D. A., Iveson, M., & Hardy, G. (2003). Early sudden gains in psychotherapy under routine clinic conditions: Practice-based evidence. Journal of Consulting and Clinical Psychology, 71, 14–21.
Received June 14, 2007
Revision received November 15, 2007
Accepted December 27, 2007
Content available from Journal of Consulting and Clinical Psychology