Correspondence to Trina D. Spencer, Institute for Human Development, Northern
Arizona University, PO Box 5630, Flagstaff, AZ 86011-5630; e-mail: trina.spencer@nau.edu.
EDUCATION AND TREATMENT OF CHILDREN Vol. 35, No. 2, 2012
Pages 127-151
Evidence-based Practice: A Framework for
Making Effective Decisions
Trina D. Spencer
Northern Arizona University
Ronnie Detrich
Wing Institute
Timothy A. Slocum
Utah State University
Abstract
The research to practice gap in education has been a long-standing concern.
The enactment of No Child Left Behind brought increased emphasis on
the value of using scientifically based instructional practices to improve
educational outcomes. It also brought education into the broader evidence-
based practice movement that started in medicine and has spread across a
number of human service disciplines. Although the term evidence-based
practice has become ubiquitous in education, there is no common agreement
about what it means. In this paper, we offer a definition of evidence-based
practice, provide a rationale for it, and discuss some of the main tenets of
evidence-based practice. Additionally, we describe a decision-making model
that features the relationships between the critical sources of influence and
the chief responsibilities of evidence-based practitioners.
“Knowing is not enough; we must apply.
Willing is not enough; we must do.”
—Goethe
Education has long struggled with the gap between the methods
that are best supported by systematic research and those that
are most widely used (e.g., Burns & Ysseldyke, 2009; Carnine, 1997;
Espin & Deno, 2000; Hoagwood, Burns, & Weisz, 2002; Kazdin, 2000;
Kratochwill & Stoiber, 2000). Researchers, practitioners, theorists, and
policy-makers alike have speculated about the cause of this division.
Researchers contend that practitioners do not understand the
implications of their work or possess the skills necessary to be good
consumers of science. Practitioners complain that too often research
is not applicable in the real world and research findings are largely
inaccessible because they are published in journals designed for
researchers, not practitioners (Carnine, 1997; Greenwood & Abbott,
2001). Presumably, if practitioners were using research evidence as a
basis for selecting interventions there would be no research to practice
gap.
This “research to practice gap” is not just an issue for educa-
tion but has been a concern across disciplines as varied as medicine
and psychology. In an attempt to address the gap in psychology in
1949, the American Psychological Association established the scien-
tist-practitioner model as a basis for training psychologists (Drabick
& Goldfried, 2000). Despite the emphasis on the training of psychol-
ogists as scientist-practitioners, there continues to be great concern
about the lack of science in practice and the lack of practice in the
science of psychology (Chwalisz, 2003; Hayes, Barlow, & Nelson-Gray,
1999). Evidence-based practice has been proposed as a means of nar-
rowing the research to practice gap (Chwalisz, 2003).
The passage of No Child Left Behind (NCLB, 2001) was a true
watershed for efforts to increase the role of research evidence in ed-
ucation. For the first time, the use of scientific research for making
educational decisions was prominently featured in national education
legislation. NCLB (2001) included more than 100 references to the use
of science or scientifically based evidence as a basis for educational
practice. The research to practice gap was transformed from a concern
of a relatively small group of educational researchers and reformers to
a national policy issue resulting in greater political and social traction
than ever before. The subsequent passage of Individuals with Disabil-
ities Education Improvement Act (IDEIA) in 2004 extended this trend
and further established the use of science in education. Educational
practice based on scientic evidence was no longer just a good idea;
it became the law.
The mandates in NCLB (2001) and IDEIA (2004) to use scientifically
based evidence have resulted in widespread interest in evidence-
based practice. Originating in medicine, evidence-based practice was
quickly adopted by numerous other professions because it provided
a means of addressing a serious problem that has challenged them
all – the research to practice gap. A gap between research
and practice means that consumers are not receiving services that are
based on the best research evidence that exists and therefore may suf-
fer from poorer outcomes and unnecessary costs associated with inef-
fective treatments. Many professions have recognized that evidence-
based practice has the potential to bring research results into daily
practice and thereby improve outcomes for consumers.
Within the medical profession, evidence-based practice has
been defined as a decision-making process informed by three distinct
sources of influence: (1) the best available evidence, (2) clinical exper-
tise and (3) client values (Institute of Medicine, 2001; Sackett, Rosen-
berg, Gray, Haynes, & Richardson, 1996; Sackett, Straus, Richardson,
Rosenberg, & Haynes, 2000). Major organizations including the Amer-
ican Psychological Association (APA, 2002), the American Speech-
Language-Hearing Association (ASHA, 2005), the Institute of Education
Sciences (WWC, http://ies.ed.gov/ncee/wwc/), and the National Autism
Center (NAC, 2009) followed by adopting very similar definitions of evi-
dence-based practice. Table 1 presents definitions from several orga-
nizations that are relevant to education. In these denitions, the word
practice refers to the whole of one’s professional activities. This is the
sense in which a physician practices medicine and a lawyer practices
law. Importantly, these definitions do not limit evidence-based prac-
tice to a small portion of the decisions that a professional must make;
instead, they suggest that this model of decision-making should be
applied pervasively across their entire professional practice. Drawing
directly from these well-established definitions from related profes-
sions, we define evidence-based practice in education as a decision-
making process that integrates (1) the best available evidence, (2) pro-
fessional judgment, and (3) client values and context. Similar to the
use of the term practice in other professions, we use it to refer to all
professional activities of an educator. We suggest that evidence-based
practice should be pervasive in all aspects of the professional practice
of educators. We will elaborate on these issues throughout this paper.
The word practice also has a second meaning – a specific method
or technique used by a professional. This is the sense in which one
might talk about “best practices” or the practice of providing immedi-
ate corrections of errors. Within education, many have defined evi-
dence-based practice with this sense of the word practice. Thus, within
education, the term evidence-based practice is most often used to re-
fer to a program or intervention that has been found to have strong
research support (e.g., Cook, Tankersley, & Landrum, 2009; Dunst,
Trivette, & Cutspec, 2002; Odom et al., 2005). This definition focuses
on the important issue of the research support for a particular pro-
gram or intervention; however, it leaves the broader question of how
(or whether) practitioners should go about balancing this information
with other constraints on problem solving outside the scope of
evidence-based practice (as used in this definition).
There are two main reasons why we believe it is important to
adopt the broader view that is common in other professions. First,
this broader view explicitly recognizes that the choice of treatments
130 SPENCER, DETRICH, and SLOCUM
Table 1
Definitions of evidence-based practice in various professions.
American Psychological Association (APA, 2005)
“Evidence-based practice in psychology is the integration of the best
available research with clinical expertise in the context of patient
characteristics, culture, and preferences.”
American Speech-Language-Hearing Association (ASHA, 2005)
“The term evidence-based practice refers to an approach in which
current, high-quality research evidence is integrated with practitioner
expertise and client preferences and values into the process of making
clinical decisions.”
National Autism Center (2009)
“Evidence-based practice requires the integration of research findings with other
critical factors. These factors include:
Professional judgment and data-based decision-making
Values and preferences of families, including the student on the autism
spectrum whenever feasible
Capacity to accurately implement the interventions.”
Education (Whitehurst, 2002)
Evidence-based education is “The integration of professional wisdom
with the best available empirical evidence in making decisions about
how to deliver instruction.”
in a particular school, classroom, or individual case is an outcome of
a decision-making process in which empirical evidence is one of sev-
eral important influences. This decision-making process is the key to
whether programs and interventions that have strong research sup-
port are implemented in schools; therefore, articulating how research
evidence should be integrated into decision-making is extremely im-
portant for addressing the goal of increased use of this evidence to im-
prove outcomes. Second, the broader view suggests that the best avail-
able evidence should influence all educational decision-making, whereas
the narrower view leads to lists of well-supported treatments. These
lists can be extremely useful within a decision-making process but by
themselves, they cannot offer solutions to many of the challenges that
educators face daily (Slocum, Spencer, & Detrich, 2012 [this issue]).
These points are elaborated upon throughout the rest of this paper
and in other articles in this special issue.
Many of the authors who have used the narrower definition of
evidence-based practice also fully recognize the importance of the
problem-solving processes that may lead to the use of well-support-
ed treatments. For example, Cook, Tankersley, and Harjusola-Webb
(2008) provided extensive discussion of the necessary role of profes-
sional decision making by special educators in selecting and imple-
menting treatments. Cook and Cook (2011) recognized these issues
and suggested “that educators use the term ‘evidence-based educa-
tion’ to describe the broader process of evidence-based decision mak-
ing” (p. 7). In addition, many of the other authors who have used the
narrower sense of evidence-based practice have published extensively
on systems to improve educational decision-making and support for
high quality implementation. We do not differ from these authors in
our overall vision of educational decision-making, but we do strongly
believe that there is great value in the broader definition of evidence-
based practice.
We believe that diverging from the meaning of evidence-based
practice that is used in other professions invites a great deal of confu-
sion. The educational endeavor includes a variety of disciplines (e.g.,
special education, psychology, speech-language pathology, occupa-
tional therapy, and physical therapy) and if these disciplines adopt
dissimilar notions of evidence-based practice, the risk of confusion is
extremely high. In addition, practitioners from different disciplines
speak to each other in different “languages,” which could result in
delays to effective treatment for clients. Additionally, consumers are
likely to be confused by the different uses of the terms and less able
and willing to participate in decision-making.
We suggest that the term evidence-based practice be reserved for
the decision-making process described above and the term empirically
supported treatment be used for activities that have been shown to meet
specific criteria for research support. These terms correspond with us-
age in the extensive literature on evidence-based practice in psychol-
ogy (e.g., APA, 2005; Chambless & Ollendick, 2001; Stoiber & Kratoch-
will, 2000). In the context of education, treatment typically refers to an
intervention, program, or curriculum that addresses any academic or
behavioral outcome. We would add that treatment could also refer
to smaller scale strategies, tactics, and techniques. Most fundamen-
tally, a treatment is any specific educator behavior that has an identi-
fiable effect on student behavior. A treatment can be preventive
(teaching reading to all students), concurrent (correction routines
within instruction), or remedial (extra support for struggling readers)
and can be applied to any level of educational service such as whole
school, classroom, or individual student.
Evidence-based Practice: New Wine or New Bottle?
The increased interest in evidence-based practice has not been
received without reservations from responsible practitioners across
disciplines (Norcross, Beutler, & Levant, 2006). The concern may be
framed as whether evidence-based practice is a new way to describe
what practitioners have always done (new bottle) or a new way of
practicing (new wine). It is understandable that those who have been
working in educational settings may be skeptical about the evidence-
based practice movement. After all, educational services have been
offered for years without any explicit requirements that practitioners
rely on scientic research to inform decisions about services. How-
ever, it has been argued that a service is not a service if the treatments
are unevaluated (Hoagwood et al., 2002). “Service” implies the treat-
ment is beneficial. Unless we know the benefits and risks of a treat-
ment there is no basis for claiming it is a service. A treatment may
cause harm to students and that harm may not be detected unless
the treatment has been evaluated. This suggests there is an ethical
responsibility to base decisions on the best available evidence. Most
organizations that license practitioners have ethical statements requir-
ing decisions to be based on scientic evidence (APA, 2002; National
Association of School Psychologists [NASP], 2000; Behavior Analyst
Certification Board [BACB], 2004). However, the persistent research
to practice gap makes it clear that it is not always common practice
to base decisions about treatments on the existing research base. In-
creasing the emphasis on best available evidence as a basis for making
treatment decisions is indeed a new wine for many practitioners.
It could be argued (Detrich, 2008) that the evidence-based prac-
tice movement is ultimately a consumer protection movement. Nar-
rowing the research to practice gap protects consumers by using treat-
ments that the best evidence suggests are most likely to be effective.
An equally important but less obvious protection for consumers is
that in a fully realized evidence-based practice approach client values
and context are also explicitly recognized as critical considerations in
decision-making. Recognition of client values raises important ques-
tions of the goals of educational treatments and reminds practitioners
that effectiveness can only be judged relative to established goals.
When decisions are made for districts, schools, or classrooms, clients
can be considered to be the larger community. When decisions are
made for individual students, their specic families become impor-
tant partners in the process of setting goals and evaluating potential
treatments. Finally, being sensitive to the various contexts in which a
treatment might be implemented, the practitioner will have to make
judgments about which treatment is most appropriate and how to
adapt it to best fit the relevant clinical context.
The following section elaborates on each of the three elements of
evidence-based practice (best available evidence, professional judg-
ment, and client values and context). This serves as background infor-
mation for a framework of evidence-based practice, which highlights
the interactive nature of these three influences in educational practice.
The expanded descriptions of best available evidence, professional
judgment, and client values and context also set the stage for the sub-
sequent articles in the special issue. In Figure 1, we mark the primary
responsibilities of evidence-based practitioners with squares and the
sources of influence with circles. Arrows indicate the direction of influ-
ence, as one element informs the other. We recognize this process
is neither as simple nor as linear as we have depicted here. In fact,
practitioners are continually making decisions and the interacting influ-
ences are much too complicated to capture accurately in a diagram.
For present purposes, it is useful to describe some of the basic process-
es related to selecting, adapting, and implementing treatments before
layering on additional complexities, many of which are addressed in
the other articles in this special issue.
Best Available Evidence
The evidence-based practice model asserts that the best available
evidence should be one of the three main inuences on educational
decision-making. The term best available evidence implies that there is
a range of evidence and that educators should select the best of what
is available—although seemingly simple, this concept is powerful and
has far reaching implications for educational practice. It requires that
educators determine the evidence that is best for the particular deci-
sion to be made. The best evidence is that which (a) is most relevant
to the decision and (b) has the highest degree of certainty. Relevance
depends on how closely the evidence matches the educator’s particu-
lar problem in terms of the nature of the students involved, the de-
sired outcomes, the details of the treatment, and the school context.
The certainty of the evidence depends on methodological quality and
amount of research available. When highly relevant and highly cer-
tain evidence is available, educators should use it. And when ideal
evidence is not available, then educators should use the best of what
is available. The best available evidence may be only indirectly rele-
vant and may fall short of the highest standards for rigorous research.
Nonetheless, the mandate to use the best available evidence suggests
that imperfect evidence, used wisely, is better than no evidence at all.
This concept is necessary if evidence-based practice is to be pervasive
in educational decision-making. If educators attend only to the high-
est quality evidence, evidence-based practice is limited to a small
minority of educational decisions. This topic is explored thoroughly
in Slocum et al. (2012 [this issue]), but for the current purposes it is
important to understand that in our evidence-based practice frame-
work, best available evidence is sufficiently broad to inform selecting
and adapting treatments, designing treatments locally, and relying on
progress monitoring data (practice-based evidence) to evaluate im-
pact (see Figure 1).
Perhaps not surprisingly, there are other perspectives about
what is meant by the term best available evidence. In some instances,
best available evidence has come to mean research with very high
methodological quality. Organizations such as the What Works Clear-
inghouse (WWC, 2011) rely on a specic type of systematic review
to develop evidence about a treatment’s impact. The review process
is rigorous and excludes research studies that do not meet defined
quality standards. In general, randomized clinical trials, as well as
very high quality quasi-experimental designs (WWC, 2011) and sin-
gle-case designs (Kratochwill et al., 2010) constitute evidence and all
other methods of developing evidence are ignored. Further, even very
high quality research is excluded if it does not meet specic require-
ments of relevance to the specic review topics. In many instances,
after reviewing the research base for a specific topic, it is determined
that none or very little of the published research meets the quality and
relevance standards, and as a consequence the systematic review pro-
vides limited guidance on topics that are of great importance to prac-
titioners. For example, in a recent report the WWC (2012) reviewed
the evidence for Peer Assisted Learning Strategies (PALS) as an inter-
vention to improve adolescent literacy. Of the 97 research articles that
were reviewed, none met standards and only one was found to meet
standards with reservations, so it was determined that the evidence
only supported the conclusion that PALS had potentially positive ef-
fects. It must be emphasized that the articles reviewed had all been
through a peer-review process and found to be adequate for publica-
tion in professional journals.
This sort of review is very useful for some educational ques-
tions. In addition, it is valuable to know when this level of evidence is
not available, so educators can be aware that they are using less certain
evidence in decision-making. However, if this is the only source of
evidence that is considered to be legitimate, most educational deci-
sions will be made without the benet of evidence. Educators can-
not put their decisions on hold until numerous high quality research
studies are conducted on their particular situation. Part of the power
Figure 1. Evidence-based practice decision-making framework. Squares represent evidence-based practitioners’ responsibilities.
Circles represent sources of influence. Small shaded circles suggest a filtering effect of professional judgment. Arrows indicate the
direction of influence.
[Figure 1 elements: a Practical Question leads to Selecting Treatments, Adapting Treatments, Implementing Treatments, and Positive Outcomes, informed by Best Available Evidence, Client Values and Context, and Professional Judgment.]
of the concept of best available evidence is that it recognizes this reality
of professional practice and provides flexibility so that decisions can
be as effective as possible given the evidence that is available. When
evidence is incomplete and less than definitive, the educator must ex-
ercise greater professional judgment in selecting treatments. Recog-
nizing the uncertainties involved in these decisions, evidence-based
educators place greater reliance on progress monitoring to evaluate
the effectiveness of their decisions.
Professional Judgment
The second component of evidence-based practice is profession-
al judgment. Professional judgment is ubiquitous. We do not say this
as a suggestion of how professional decision-making processes ought
to work, but as recognition of how they must work. Decisions cannot
be made without the filter of professional judgment (see Figure 1).
From the initial steps of recognizing that a situation is a problem and
the formulation of a problem statement through later steps of eval-
uating progress and judging whether the problem has been solved,
decision-making simply cannot occur without judgment. Through
professional judgment, the practitioner refines the other sources of
influence by retaining relevant and valuable information and discard-
ing the rest. At every juncture practitioners must employ professional
judgment to make decisions about how to weigh the best available
evidence and client values and contextual factors and navigate the
decision-making process.
Professional judgment is a fundamental element of evidence-
based practice, but often its value and complexity are not fully recog-
nized. The APA Presidential Task Force on Evidence-Based Practice
(2005) described eight competencies that contribute to professional
judgment: (1) formulating the problem so that treatment is possible,
(2) making clinical decisions, implementing treatments, and monitor-
ing progress, (3) interpersonal expertise, (4) continuous development
of professional skills, (5) evaluating research evidence, (6) under-
standing the influence of context on treatment, (7) utilizing available
resources, and (8) having a cogent rationale for treatment. The pro-
cess of developing professional judgment has been a source of discus-
sion. Some have suggested that professional judgment (aka profes-
sional wisdom) is developed purely through experience (Kamhi, 1994;
Whitehurst, 2002). However, the list of competencies that contribute
to professional judgment suggests more complex interrelations; the
evaluation of research, monitoring progress, and continuous develop-
ment of professional skills are aspects of professional judgment that
identify linkages with the research base, with systematically learning
from progress monitoring, and with professional education. This no-
tion of professional judgment as a rigorous and informed aspect of
professionalism is far different from the idea that judgment allows
room for an attitude that anything goes based on uninformed opinion
and unconstrained bias. Stanovich and Stanovich (2003) argue:
Teachers, like scientists, are ruthless pragmatists (Gersten &
Dimino, 2001; Gersten, Chard, & Baker, 2000). They believe
that some explanations and methods are better than others.
They think there is a real world out there--a world in flux,
obviously--but still one that is trackable by triangulating
observations and observers. They believe that there are valid,
if fallible, ways of finding out which educational practices
are best. Teachers believe in a world that is predictable
and controllable by manipulations that they use in their
professional practice, just as scientists do. Researchers and
educators are kindred spirits in their approach to knowledge,
an important fact that can be used to forge a coalition to bring
hard-won research knowledge to light in the classroom (p.
35).
This captures an approach to professional judgment that complements
(rather than detracts from) the best available evidence. In our evidence-
based practice framework, one function of best available evidence is
to sharpen practitioner judgment. It was not possible to represent how
this sharpening occurs in Figure 1 without complicating the visual
display of the other interactions, but the relationship between best
available evidence and professional judgment is indeed vital to fully
realizing this model. Given the potential sources of bias that can affect
judgment, both formal research and ongoing progress monitoring can
serve as moderating influences and improve the quality of decision-
making. Knowledge of recommendations derived from the best
available evidence in a variety of circumstances helps practitioners
learn from their clinical experience by focusing their attention on
variables that are most important for change. Through an interaction
between evidence and direct experience of their effectiveness,
practitioners are shaped into wise decision-makers.
Client Values and Context
The final component of evidence-based practice is client values
and context. Client values represent the deeply held ideals of indi-
vidual clients, their families, and the broader community. The inclu-
sion of client values in the evidence-based practice framework recog-
nizes many of the important factors that have been described as social
validity of research (Wolf, 1978). Both professional ethics and practi-
cality demand that the values of the students, families, and communi-
ties we serve must be a fundamental contributor to decision-making.
The very purposes and goals of education are socially determined as
is the range of acceptable forms of treatment. Professional ethics re-
quire that the consideration of values extend from the broad commu-
nity to smaller groups and families. This ethical stance is embodied in
the requirement of family involvement in the process of developing
Individualized Education Programs (IEPs), as is mandated by the Individuals
with Disabilities Education Improvement Act (IDEIA, 2004).
On the practical level, we can see that schools exist only with the
support of the larger community and often the strength of that sup-
port can be measured through funding and other resources. Further,
the effectiveness of many interventions may be partially dependent
on student and family involvement. This involvement may correlate
with the degree to which the goals and the nature of the treatment
correspond with deeply held values. For example, an intervention to im-
prove classroom behavior may be more effective with strong “buy-
in” from students and their families. Also, if families contribute to the
selection of an academic intervention, they may be more willing to
support homework and efforts to promote generalization.
Recognition of client values as an important contributor to deci-
sion-making can support the overall purpose of evidence-based prac-
tice in education—improved outcomes for students. However, we
must also recognize that the interaction between client values and the
other two components is not simple. One of the key roles of profes-
sional judgment is to bring the best available evidence together with
client values. Client values should not dominate this judgment (e.g.,
“it does not matter what the research says—we like a particular pro-
gram”) any more than the best available evidence without consider-
ation of client values should drive decisions. Giving priority to client
values without consideration of the best available evidence can result
in wasted time, money, and resources and fail to produce meaningful
outcomes. Giving special weight to the best available evidence with-
out consideration of client values can result in treatments with low
acceptability, which may fail to produce effective outcomes. In ad-
dition, low treatment acceptability can result in failure to maintain
implementation even if outcomes are positive.
In addition to client values, when making decisions about treat-
ments, practitioners should also consider the context in which treat-
ment is to occur. The consideration of the role of context is important
for selecting treatments that are most likely to produce positive re-
sults in the particular school, classroom, or situation within a class-
room. Each setting includes specific resources and constraints that influ-
ence the effectiveness of a particular treatment. Contextual factors
to be considered include the correspondence between the values and theoretical
assumptions of a treatment and those of the implementers; the match
between the resources required and the resources available in the
treatment setting (including materials, time, space, and personnel);
and the degree to which implementers have the training and skills
to implement the treatment with adequate levels of integrity. For ex-
ample, there would be little value in selecting a treatment if the school
cannot afford to purchase the materials and provide the professional
development necessary for high quality implementation. When sev-
eral treatment options have roughly the same level of support from
the best available evidence, contextual variables may determine the
best choice. Treatments that are a better contextual fit may be imple-
mented with higher quality and ultimately produce better outcomes
(Albin, Lucyshyn, Horner, & Flannery, 1996). The resources required,
level of training and skills, and acceptability to the implementers, al-
though important, should not be used as the only or even primary
basis for selecting a treatment with minimal research support when a
well-supported treatment exists.
Evidence-based Practice Framework
In this section, we highlight some of the ways that the best available evidence, professional judgment, and client values and context interact throughout a problem-solving process (including selecting, adapting, and implementing treatments). The process begins with a practical question and ends with positive outcomes for students. Many important interactions between the best available evidence, professional judgment, and client values and context are captured in Figure 1; however, it is impossible to portray the true dynamic nature of these relationships in a two-dimensional illustration. Therefore, we offer this figure as a starting point for thinking carefully about an important and complex topic.
Practical Question
The evidence-based practice decision-making process is initiated by a concrete educational problem—the practitioner identifies student performance that is not adequate in some way. This may be as broad as reading outcomes for an entire school district or as specific as the social behavior of an individual with autism. The identification of performance as inadequate is itself a professional judgment based on experience, training, and context. In order to begin the systematic evidence-based practice decision-making process, the practitioner must formulate a practical question. A well-constructed question defines the population or student(s) under consideration (e.g., third grade student(s) with autism), the outcome to be achieved (e.g., increased reading comprehension), and key features of the setting (e.g., special education teacher and six students in a self-contained classroom). The question usually takes one of two forms. In a problem-based question, the practitioner asks about the best interventions to solve a specific problem. For example, "What treatment should I use to teach reading comprehension to my third grade students with autism in a small group arrangement?" This type of question asks for a comparative evaluation of all relevant treatments. Alternatively, in a treatment-based question, the practitioner asks about the effectiveness of a specific treatment. For example, "What is the evidence supporting the use of direct and explicit instruction for teaching reading comprehension to third grade students with autism in a small group arrangement?" This type of question asks about the evidence on a single treatment of interest. In the course of day-to-day practice, practitioners are likely to ask both types of questions.
Regardless of which form the question takes, practitioners should formulate the question in such a way that evidence is useful and relevant. In other words, formulation of the question must be informed by client values concerning educational goals and an understanding of the opportunities and limitations afforded by the educational context (see Figure 1). Important educational goals may be informed by state core standards, but students and families may have suggestions about which goals are priorities to them. For example, a family of a student with autism may consider social outcomes and inclusion more important than learning to count. In this example, the school's infrastructure for supporting inclusion is also relevant to the formulation of the question. Before initiating a search for the best available evidence, a great deal of professional judgment is needed to incorporate client values and the context into the question. To minimize or ignore either the client values or the school context will result in a question that is not properly formed and could misdirect the search for the best available evidence.
Selecting Treatments
A well-constructed question that incorporates client values and context guides the search for the best available evidence. Practitioners must judge the available evidence for "bestness"—scientific strength and relevance to their question. The process of identifying the best available evidence involves interplay between the practical question, the various sources of evidence, and considerations of client values and context (see Figure 1). As evidence is encountered it must be weighed for strength and relevance to the question, and this process may require further consideration of goals and acceptable forms of treatment as well as the context in which the treatment will be implemented.

The search for the best available evidence involves multiple sources of evidence. Among those are empirically supported treatment reviews, practice guides, best practice reviews, primary research articles, and relevant principles of behavior (see Slocum et al., 2012 [this issue]). These sources of evidence can all contribute to selecting a treatment that addresses the practical question, is supported by the best available evidence, and makes sense in the particular context. An empirically supported treatment review may identify a treatment that has strong supporting evidence in which the population and setting in the research are a close match with the practice context. But this is rare. It is more likely that the match of populations and/or settings is not perfect. This requires practitioners to be sensitive to specific characteristics of their students (e.g., age, performance level, specific academic, behavioral, and cognitive strengths and weaknesses) and their context (e.g., skills of staff, available training and supervision, and other resources). Differences between the specific participants and context in the research setting and those in the practice setting require the practitioner to make judgments about the importance of these differences. Various sources of evidence could inform these judgments. The practitioner could consult practice guides and best practice reviews for guidance on whether the treatment is likely to be effective with the particular population of students and context in question. This difficult professional judgment is also informed by his or her knowledge of relevant principles of behavior. Thus, the selection of treatments is not equivalent to choosing an intervention from a list of empirically supported treatments; rather, the selection of treatments involves critical interplay between the best available evidence, client and contextual considerations, and professional judgment.
Adapting Treatments
After selection of a treatment, the evidence-based practice process involves judgments about whether the treatment as it is described in research studies, manuals, curricula, or other materials must be adapted to the specific local context (see Figure 1). Practitioners must make detailed decisions about the specific features to be changed and exactly how they are to be adjusted to produce a good "contextual fit" (Albin et al., 1996) and increase the probability of a positive outcome. The best available evidence should inform these judgments and decisions as well. This step is extremely important for the success of the entire process. Failure to adapt the treatment to local circumstances may render an otherwise powerful treatment ineffective. On the other hand, adaptations that eliminate or undermine critical elements of the treatment may also render it ineffective. The evidence base informing adaptation may come from research covering the range of the treatment's variations that retain its effectiveness. These decisions can also be informed by more general evidence about effective instructional and behavioral strategies. Although this more general evidence may not be specific to the treatment in question, it may provide a very effective reference for wise decision-making and may constitute the best evidence that is available to inform these decisions.
Implementing Treatments
Selecting an empirically supported treatment is not sufficient to assure positive outcomes. It is necessary to carefully consider many issues related to implementation in a specific context. Failing to attend to issues of implementation will likely result in the failure of the treatment effort. Effective implementation requires ongoing professional judgment that can be informed by the best available evidence and include consideration of client values and context (see Figure 1). The implementation process requires careful consideration of the features of the treatment setting to assure a good contextual fit. Professional development to ensure that those who deliver the treatment have all the necessary skills is fundamentally important to effective implementation. There is likely to be tension between the requirements of training and the realities of providing training in a service delivery context. The best available evidence for effective training and professional judgment should guide decisions about how professional development is conducted in a specific practice context.

Joyce and Showers (2002) present compelling evidence that supervision and coaching are necessary to assure that the treatment is actually delivered in an effective manner. Again, there will be tension between the demands of effective supervision and coaching and the many other demands on practitioners' time. Practitioners must make decisions about how to arrange supervision and coaching so that they are effective and efficient. Effectiveness without efficiency will likely result in poor sustainability of the implementation effort. Efficiency without effectiveness will likely result in limited positive outcomes. Effective implementation has been the focus of research in recent years (Adelman & Taylor, 2003; Elliott & Mihalic, 2004; Fixsen, Naoom, Blase, Friedman, & Wallace, 2005). This research clearly suggests that implementation is a series of judgments about making an empirically supported treatment effectively fit into a specific treatment context. Evidence-based practitioners would be well served to take advantage of the best available evidence to guide and inform their judgments about implementation in their particular context.
The evidence-based practitioner relies on progress monitoring to evaluate the effects of a treatment. Progress monitoring to improve outcomes is itself supported by substantial research (Fuchs & Fuchs, 1986; Yeh, 2007). When a treatment is implemented, progress monitoring provides the best available evidence on the effects of the implementation. The data from progress monitoring provide an occasion for additional problem solving. These data must be evaluated and judgments made about whether progress is adequate and, if not, what should be done about it (Barnett, Daly, Jones, & Lentz, 2004; Witt, VanDerHeyden, & Gilbertson, 2004). The practitioner may continue to monitor outcomes, institute measures of fidelity of implementation, adjust supervision and coaching, modify treatment procedures, change treatments, or make other decisions. Evidence-based practice suggests that the best available evidence, the practitioner's experience, and relevant aspects of context should inform those choices. For example, there is a growing research base on monitoring fidelity of implementation, interpreting these results, and intervening to improve fidelity (Bartels & Mortenson, 2005; Codding, Livanis, Pace, & Vaca, 2008; Gresham, 2009; Mortenson & Witt, 1998). One of the common features for assuring high levels of integrity over time is the use of performance feedback (Mortenson & Witt, 1998; Noell, Duhon, Gatti, & Connell, 2002). Extending beyond treatment integrity, there is a large research base about the effects of performance feedback on a wide range of behaviors in a wide variety of settings (Alvero, Bucklin, & Austin, 2001; Balcazar, Hopkins, & Suarez, 1986). Much of the established research is from settings other than schools, so the practitioner will have to make judgments about the relevance of this literature to the practical problem he or she is trying to solve and the appropriateness of generalizing from this research base to the current context. Even though the practitioner is relying on the best available evidence, judgments are necessary for making decisions.
Positive Outcomes
Positive outcomes and positive social evaluations are the final arbiters of the adequacy of decisions and the basis for claiming that the initial practical problem has been solved. The decision-making process is iterative and is not complete until positive outcomes are achieved. However, the achievement of positive outcomes is also a professional judgment that can be based on the two other pillars of evidence-based practice. The best available evidence and client values have important roles in informing reasonable expectations for outcomes (see Figure 1). Research evidence may show whether the outcomes obtained are comparable to those reported for similar groups of students. Client values are also important for setting standards of performance. Although it may seem that "better" outcomes are always a clear goal, it is not always so simple. First, different communities and families may place different relative weight on various outcomes (see Strain, Barton, & Dunlap, 2012 [this issue]). Second, any decision (or lack of decision) entails opportunity costs—choosing to devote time and resources to any course of action precludes the use of those resources to pursue other courses of action. For example, a community may place high value on both reading and math skills. Implementing a particular reading program and providing two hours per day for literacy instruction may raise reading performance. The question for the practitioner, then, is whether to devote additional time and resources to further improve reading outcomes or to devote those resources to improving math performance. The evidence-based practitioner might seek the best evidence on whether the current level of reading performance predicts success in later grades and also consider the value the community places on both reading and math. A reading program, of course, is an ongoing treatment. Other treatments, such as special interventions to improve social behavior or address particular reading challenges of specific students, may be temporary. When such treatments are successful, practitioners must decide whether the treatment will continue and, if so, in what form. It may continue indefinitely in its present form, or features of the treatment might be modified so that treatment is less obvious and less demanding on the support system. These types of decisions are very common in tiered decision-making approaches such as Response to Intervention (RtI; Walker & Shinn, 2010) and school-wide positive behavior supports (Sugai & Horner, 2005).
Conclusion
The basic assumption of evidence-based practice is that basing important professional decisions on the three pillars of the best available evidence, professional judgment, and client values and context is more likely to result in positive outcomes than a process that does not take advantage of these three influences in a comprehensive manner. Ultimately, this assumption has to be tested empirically. Currently, the best available evidence regarding the effectiveness of evidence-based practice is a chain of logic. Embedded in this logic is a delicate balance between research evidence and client values and context, with a great deal of confidence in practitioners' judgment to create this balance. It has to be acknowledged that being an evidence-based practitioner is extremely challenging and requires a great deal from the professional because there is no research-based guidance about how practitioners should achieve the balance. Despite a lack of evidence indicating the evidence-based practice process is effective, it is likely the best model for practitioners who seek to have a positive impact. In order to honor their professional responsibility, practitioners should seek out the best available evidence for selecting, adapting, and implementing treatments. To be maximally effective, they must do this by relying on their professional judgment aided by ongoing progress monitoring data, and by making the client an active part of the decision-making process.

We have sketched a conceptual roadmap that may serve as an initial basis for further reflection and inspection of evidence-based practice. Given the limited empirical examinations that exist, it is also a logical thing to do. However, logic is not an acceptable substitute for evidence. The impact of evidence-based practice on the research-to-practice gap (and educational outcomes), as an area of investigation, is ripe with opportunity and one that is best approached through collaborative relationships between invested researchers and practitioners.
Introduction to Special Issue
This special issue of Education and Treatment of Children explores some of the many important facets of evidence-based practice. Slocum, Spencer, and Detrich examine the concept of best available evidence and suggest that evidence-based practice can be most widely applied and most effective in education if this concept is understood to include multiple sources of evidence and multiple kinds of treatments. Strain, Barton, and Dunlap illustrate the importance of considering client values and preferences in the selection of intervention targets and the design of service delivery systems. From a social validity perspective, they present lessons learned about consumer input and satisfaction. The remaining articles explore the challenges and uncertainties that are inherent in the process of identifying empirically supported treatments. Slocum, Detrich, and Spencer suggest that the process of systematically reviewing and evaluating research support for treatments is a measurement process and that the concepts of measurement validity are relevant. In this article, they discuss how various aspects of measurement validity can be applied to the process of reviewing research to identify empirically supported treatments. One particularly challenging component of these reviews is quality appraisal of the studies—the process of rating the methodological quality of each research study. Wendt and Miller compare seven scales for quality appraisal of single-subject research. They describe each scale, compare each to the quality indicators proposed by Horner et al. (2005), and apply each to a set of four research studies. In a complementary article, Horner, Swaminathan, Sugai, and Smolkowski examine the fundamental logic of single-subject research and suggest how this research paradigm can come to be more influential in identifying empirically supported treatments. They clarify the specific features of data that support strong conclusions from single-subject research results. This is important for disciplined visual analysis of results, and it also provides a basis for evaluating strategies for statistical analysis of single-subject results. Susan Wilczynski draws on her experience as the director of the National Autism Center's National Standards Project to identify numerous risks that are inherent in the process of systematically reviewing research and identifying treatments. This article is an important reminder that reviewing research and identifying well-supported interventions is an extremely complex process fraught with challenges. Gardner, Spencer, Boelter, Dubard, and Jennett use the single-subject quality indicators suggested by Horner et al. (2005) to evaluate the evidence base on brief functional analysis methodology as a means of assessing the behavioral difficulties of typically developing children. It is an example of many of the key issues and challenges related to identifying empirically supported treatments (and assessments). Finally, O'Keeffe, Slocum, Burlingame, Snyder, and Bundock examine the question of whether systematic reviews that identify empirically supported treatments tend to derive recommendations that are similar to those from traditional narrative reviews and meta-analyses. They use the research on repeated readings—an intervention to improve reading fluency—to test the convergence of these different types of reviews. There are many intriguing questions for future evidence-based practice research in these articles. We look forward to the continued advancement of evidence-based practice and refinement of many of the concepts discussed in these papers.
References
Adelman, H. S., & Taylor, L. (2003). Rethinking school psychology:
Commentary on public health framework series. Journal of
School Psychology, 41, 83-90.
Albin, R. W., Lucyshyn, J. M., Horner, R. H., & Flannery, K. B. (1996). Contextual fit for behavioral support plans. In L. Koegel, R. Koegel, & G. Dunlap (Eds.), Positive behavioral support: Including people with difficult behaviors in the community (pp. 81-97). Baltimore, MD: Brookes.
Alvero, A. M., Bucklin, B. R., & Austin, J. (2001). An objective review of the effectiveness and essential characteristics of performance feedback in organizational settings (1985-1998). Journal of Organizational Behavior Management, 21, 3-30.
American Psychological Association (August, 2005). American Psychological Association policy statement on evidence-based practice in psychology. Published as an appendix of APA Presidential Task Force on Evidence-Based Practice (2006). Evidence-based practice in psychology. American Psychologist, 61, 271-285.
American Psychological Association (2002). Ethical principles of psychologists and code of conduct. Retrieved from http://www.apa.org/ethics/
American Speech-Language-Hearing Association. (2005). Evidence-
based practice in communication disorders [Position Statement].
Retrieved from www.asha.org/policy.
Balcazar, F. R., Hopkins, B. L., & Suarez, Y. (1986). A critical, objective
review of performance feedback. Journal of Organizational Be-
havior Management, 7, 65-89.
Barnett, D., Daly, E., Jones, K., & Lentz, F. (2004). Response to intervention: Empirically based special service decisions from single-case designs of increasing and decreasing intensity. Journal of Special Education, 38, 66-79.
Bartels, S. M., & Mortenson, B. P. (2005). Enhancing adherence to a
problem solving model for middle-school pre-referral teams:
A performance feedback and checklist approach. Journal of
Applied School Psychology, 22, 109-123.
Behavior Analyst Certification Board (2004). Behavior Analyst Certification Board guidelines for responsible conduct for behavior analysts. Retrieved from http://www.bacb.com/consum_frame.html
Burns, M. K., & Ysseldyke, J. E. (2009). Reported prevalence of evi-
dence-based instructional practices in special education. Jour-
nal of Special Education, 43, 3-11.
Carnine, D. (1997). Bridging the research-to-practice gap. Exceptional
Children, 63, 513-521.
Chambless, D. L., & Ollendick, T. H. (2001). Empirically supported
psychological interventions: Controversies and evidence. An-
nual Review of Psychology, 52, 685-716.
Chwalisz, K. (2003). Evidence-based practice: A framework for twenty-first-century scientist-practitioner training. The Counseling Psychologist, 31, 497-528.
Codding, R., Livanis, A., Pace, G., & Vaca, L. (2008). Using perfor-
mance feedback to improve treatment integrity of classwide
behavior plans: An investigation of observer reactivity. Jour-
nal of Applied Behavior Analysis, 40, 417-422.
Cook, B. G., & Cook, S. C. (2011). Thinking and communicating clearly about evidence-based practices in special education. Retrieved from http://www.cecdr.org/subpage.cfm?id=67726117-C544-E2C6-4E287FE0E2A6A05E
Cook, B. G., Tankersley, M., & Harjusola-Webb, S. (2008). Evidence-based special education and professional wisdom: Putting it all together. Intervention in School and Clinic, 44, 105-111.
Cook, B. G., Tankersley, M., & Landrum, T. J. (2009). Determining ev-
idence-based practices in special education. Exceptional Chil-
dren, 75, 365-383.
Detrich, R. (2008). Evidence-based, empirically-supported, or best practice: A guide for the scientist practitioner. In J. K. Luiselli, D. C. Russo, W. P. Christian, & S. M. Wilczynski (Eds.), Effective practices for children with autism: Educational and behavioral support interventions that work (pp. 3-25). New York, NY: Oxford.
Drabick, D. A. G., & Goldfried, M. R. (2000). Training the scientist-practitioner for the 21st century: Putting the bloom back on the rose. Journal of Clinical Psychology, 56, 327-340.
Dunst, C. J., Trivette, C. M., & Cutspec, P. A. (2002). Toward an operational definition of evidence-based practices. Centerscope, 1, 1-10.
Elliott, D. S., & Mihalic, S. (2004). Issues in disseminating and replicating effective prevention programs. Prevention Science, 5, 47-53.
Espin, C. A., & Deno, S. L. (2000). Introduction to the special issue of
learning disabilities research & practice: Research to practice:
Views from researchers and practitioners. Learning Disabilities
Research & Practice, 15(2), 67-68.
Fixsen, D. L., Naoom, S. F., Blase, K. A., Friedman, R. M., & Wallace,
F. (2005). Implementation research: A synthesis of the literature.
Tampa: University of South Florida, Louis de la Parte Florida
Mental Health Institute (FMHI Publication #231).
Fuchs, L. S., & Fuchs, D. (1986). Effects of systematic formative evaluation: A meta-analysis. Exceptional Children, 53, 199-208.
Greenwood, C. R., & Abbott, M. (2001). The research to practice gap in special education. Teacher Education and Special Education, 24, 276-289.
Gresham, F. M. (2009). Evolution of the treatment integrity concept:
Current status and future directions. School Psychology Review,
38, 533-540.
Hayes, S. C., Barlow, D. H., & Nelson-Gray, R. O. (1999). The scientist-practitioner: Research and accountability in the age of managed care. Boston, MA: Allyn and Bacon.
Hoagwood, K., Burns, B. J., & Weisz, J. (2002). A profitable conjunction: From science to service in children's mental health. In B. J. Burns & K. Hoagwood (Eds.), Community-based interventions for youth with severe emotional disturbances (pp. 327–338). New York, NY: Oxford University Press.
Horner, R. H., Carr, E. G., Halle, J., McGee, G., Odom, S., & Wolery, M. (2005). The use of single-subject research to identify evidence-based practice in special education. Exceptional Children, 71, 165-179.
Horner, R., Swaminathan, H., Sugai, G., & Smolkowski, K. (in press). Expanding analysis of single case research. Washington, DC: Institute of Education Sciences, U.S. Department of Education.
Individuals with Disabilities Education Improvement Act (IDEA) of 2004, 20 U.S.C. § 1412[a][5], Pub. L. No. 108-446.
Institute of Medicine. (2001). Crossing the quality chasm: A new health system for the 21st century. (Committee on Quality of Health Care in America). Washington, DC: National Academies Press.
Joyce, B., & Showers, B. (2002). Student achievement through staff development (3rd ed.). Alexandria, VA: Association for Supervision and Curriculum Development.
Kamhi, A. G. (1994). Toward a theory of clinical expertise in speech-language pathology. Language, Speech, and Hearing Services in Schools, 25, 115-118.
Kazdin, A. E. (2000). Psychotherapy for children and adolescents: Directions for research and practice. New York, NY: Oxford University Press.
Kratochwill, T. R., & Stoiber, K. C. (2000). Empirically supported in-
terventions and school psychology: Conceptual and practical
issues: Part II. School Psychology Quarterly, 15, 233-253.
Kratochwill, T. R., Hitchcock, J., Horner, R. H., Levin, J. R., Odom,
S. L., Rindskopf, D. M., & Shadish, W. R. (2010). Single-case
designs technical documentation. Retrieved from http://ies.ed.gov/ncee/wwc/pdf/wwc_scd.pdf
Mortenson, B. P., & Witt, J. C. (1998). The use of weekly performance feedback to increase teacher implementation of a pre-referral academic intervention. School Psychology Review, 27, 613–627.
National Association of School Psychologists. (2000). Professional conduct manual. Retrieved from http://www.nasponline.org/standards/index.aspx
National Autism Center. (2009). National Standards Report: National
Standards Project – Addressing the need for evidence-based practice
guidelines for autism spectrum disorders. Randolph, MA: Na-
tional Autism Center, Inc.
No Child Left Behind, 20 U.S.C. § 6301 et seq. (2001).
Noell, G. H., Duhon, G. J., Gatti, S. L., & Connell, J. E. (2002). Consultation, follow-up, and implementation of behavior management interventions in general education. School Psychology Review, 31, 217–234.
Norcross, J., Beutler, L., & Levant, R. (2006). Evidence-based practices in
mental health: Debate and dialogue on the fundamental questions.
Washington, DC: American Psychological Association.
Odom, S. L., Brantlinger, E., Gersten, R., Horner, R. H., Thompson, B., & Harris, K. R. (2005). Research in special education: Scientific methods and evidence-based practices. Exceptional Children, 71, 137-148.
Sackett, D. L., Rosenberg, W. C., Gray, J. A. M., Haynes, R. B., & Richardson, W. S. (1996). Evidence based medicine: What it is and what it isn't. BMJ: British Medical Journal, 312(7023), 71-72.
Sackett, D. L., Straus, S. E., Richardson, W. S., Rosenberg, W., & Haynes, R. B. (2000). Evidence-based medicine: How to practice and teach EBM (2nd ed.). New York: Churchill Livingstone.
Slocum, T. A., Spencer, T. D., & Detrich, R. (2012 [this issue]). Best
available evidence: Three complementary approaches. Educa-
tion and Treatment of Children, 35(2), 27-55.
Stanovich, P. J., & Stanovich, K. E. (2003). Using research and reason in education: How teachers can use scientifically based research to make curricular & instructional decisions. Washington, DC: U.S. Department of Education.
Strain, P. S., Barton, E. E., & Dunlap, G. (2012 [this issue]). Lessons learned about the utility of social validity. Education and Treatment of Children, 35(2), 57-74.
Stoiber, K. C., & Kratochwill, T. R. (2000). Empirically supported in-
terventions and school psychology: Rationale and method-
ological issues: Part 1. School Psychology Quarterly, 15, 75-105.
Sugai, G., & Horner, R. H. (2005). Schoolwide positive behavior supports: Achieving and sustaining effective learning environments for all students. In W. L. Heward et al. (Eds.), Focus on behavior analysis in education: Achievements, challenges, and opportunities (pp. 90-102). Upper Saddle River, NJ: Pearson Education, Inc.
Walker, H. M., & Shinn, M. R. (2010). Systematic, evidence-based approaches for promoting positive student outcomes within a multi-tier framework: Moving from efficacy to effectiveness. In M. R. Shinn & H. M. Walker (Eds.), Interventions for achievement and behavior problems in a three-tier model including RTI (pp. 1–26). Washington, DC: National Association of School Psychologists.
Whitehurst, G. J. (2002, October). Evidence-based education. Paper presented at the Student Achievement and School Accountability Conference. Retrieved from http://www2.ed.gov/nclb/methods/whatworks/eb/edlite-index.html
What Works Clearinghouse. (2011). What Works Clearinghouse procedures and standards handbook (Version 2.1). Retrieved from http://ies.ed.gov/ncee/wwc/pdf/reference_resources/wwc_procedures_v2_1_standards_handbook.pdf
What Works Clearinghouse. (2012). Adolescent literacy intervention report: Peer-assisted learning strategies. Retrieved from http://whatworks.ed.gov
Witt, J. C., VanDerHeyden, A. M., & Gilbertson, D. (2004). Troubleshooting behavioral interventions: A systematic process for finding and eliminating problems. School Psychology Review, 33, 363-383.
Wolf, M. M. (1978). Social validity: The case for subjective measurement or how applied behavior analysis is finding its heart. Journal of Applied Behavior Analysis, 11, 203-214.
Yeh, S. S. (2007). The cost-effectiveness of five policies for improving student achievement. American Journal of Evaluation, 34, 220–241.