Correspondence to Trina D. Spencer, Institute for Human Development, Northern Arizona University, PO Box 5630, Flagstaff, AZ 86011-5630; e-mail: trina.spencer@nau.edu.
EDUCATION AND TREATMENT OF CHILDREN Vol. 35, No. 2, 2012
Pages 127-151
Evidence-based Practice: A Framework for Making Effective Decisions
Trina D. Spencer
Northern Arizona University
Ronnie Detrich
Wing Institute
Timothy A. Slocum
Utah State University
Abstract
The research to practice gap in education has been a long-standing concern. The enactment of No Child Left Behind brought increased emphasis on the value of using scientifically based instructional practices to improve educational outcomes. It also brought education into the broader evidence-based practice movement that started in medicine and has spread across a number of human service disciplines. Although the term evidence-based practice has become ubiquitous in education, there is no common agreement about what it means. In this paper, we offer a definition of evidence-based practice, provide a rationale for it, and discuss some of the main tenets of evidence-based practice. Additionally, we describe a decision-making model that features the relationships between the critical sources of influence and the chief responsibilities of evidence-based practitioners.
“Knowing is not enough; we must apply.
Willing is not enough; we must do.”
—Goethe
Education has long struggled with the gap between the methods
that are best supported by systematic research and those that
are most widely used (e.g., Burns & Ysseldyke, 2009; Carnine, 1997;
Espin & Deno, 2000; Hoagwood, Burns, & Weisz, 2002; Kazdin, 2000;
Kratochwill & Stoiber, 2000). Researchers, practitioners, theorists, and
policy-makers alike have speculated about the cause of this division.
Researchers contend that practitioners do not understand the
implications of their work or possess the skills necessary to be good
consumers of science. Practitioners complain that too often research
is not applicable in the real world and research findings are largely inaccessible because they are published in journals designed for researchers, not practitioners (Carnine, 1997; Greenwood & Abbott, 2001). Presumably, if practitioners were using research evidence as a basis for selecting interventions there would be no research to practice gap.
This “research to practice gap” is not just an issue for education but has been a concern across disciplines as varied as medicine and psychology. In an attempt to address the gap in psychology in 1949, the American Psychological Association established the scientist-practitioner model as a basis for training psychologists (Drabick & Goldfried, 2000). Despite the emphasis on the training of psychologists as scientist-practitioners, there continues to be great concern about the lack of science in practice and the lack of practice in the science of psychology (Chwalisz, 2003; Hayes, Barlow, & Nelson-Grey, 1999). Evidence-based practice has been proposed as a means of narrowing the research to practice gap (Chwalisz, 2003).
The passage of No Child Left Behind (NCLB, 2001) was a true watershed for efforts to increase the role of research evidence in education. For the first time, the use of scientific research for making educational decisions was prominently featured in national education legislation. NCLB (2001) included more than 100 references to the use of science or scientifically based evidence as a basis for educational practice. The research to practice gap was transformed from a concern of a relatively small group of educational researchers and reformers to a national policy issue, resulting in greater political and social traction than ever before. The subsequent passage of the Individuals with Disabilities Education Improvement Act (IDEIA) in 2004 extended this trend and further established the use of science in education. Educational practice based on scientific evidence was no longer just a good idea; it became the law.
The mandates in NCLB (2001) and IDEIA (2004) to use scientifically based evidence have resulted in widespread interest in evidence-based practice. Originating in medicine, evidence-based practice was quickly adopted by numerous other professions because it provided a means of addressing a serious problem that has challenged numerous professions: the research to practice gap. A gap between research and practice means that consumers are not receiving services that are based on the best research evidence that exists and therefore may suffer from poorer outcomes and unnecessary costs associated with ineffective treatments. Many professions have recognized that evidence-based practice has the potential to bring research results into daily practice and thereby improve outcomes for consumers.
Within the medical profession, evidence-based practice has been defined as a decision-making process informed by three distinct sources of influence: (1) the best available evidence, (2) clinical expertise, and (3) client values (Institute of Medicine, 2001; Sackett, Rosenberg, Gray, Haynes, & Richardson, 1996; Sackett, Straus, Richardson, Rosenberg, & Haynes, 2000). Major organizations including the American Psychological Association (APA, 2002), the American Speech-Language-Hearing Association (ASHA, 2005), the Institute of Education Sciences (WWC, http://ies.ed.gov/ncee/wwc/), and the National Autism Center (NAC, 2009) followed by adopting very similar definitions of evidence-based practice. Table 1 presents definitions from several organizations that are relevant to education. In these definitions, the word practice refers to the whole of one's professional activities. This is the sense in which a physician practices medicine and a lawyer practices law. Importantly, these definitions do not limit evidence-based practice to a small portion of the decisions that a professional must make; instead, they suggest that this model of decision-making should be applied pervasively across one's entire professional practice. Drawing directly from these well-established definitions from related professions, we define evidence-based practice in education as a decision-making process that integrates (1) the best available evidence, (2) professional judgment, and (3) client values and context. Similar to the use of the term practice in other professions, we use it to refer to all professional activities of an educator. We suggest that evidence-based practice should be pervasive in all aspects of the professional practice of educators. We will elaborate on these issues throughout this paper.
The word practice also has a second meaning: a specific method or technique used by a professional. This is the sense in which one might talk about “best practices” or the practice of providing immediate corrections of errors. Within education, many have defined evidence-based practice with this sense of the word practice. Thus, within education, the term evidence-based practice is most often used to refer to a program or intervention that has been found to have strong research support (e.g., Cook, Tankersley, & Landrum, 2009; Dunst, Trivette, & Cutspec, 2002; Odom et al., 2005). This definition focuses on the important issue of the research support for a particular program or intervention; however, it leaves the broader question of how (or whether) practitioners should go about balancing this information with other constraints on problem solving outside the scope of evidence-based practice (as the term is used in this definition).
Table 1

Definitions of evidence-based practice in various professions.

American Psychological Association (APA, 2005)
“Evidence-based practice in psychology is the integration of the best available research with clinical expertise in the context of patient characteristics, culture, and preferences.”

American Speech-Language-Hearing Association (ASHA, 2005)
“The term evidence-based practice refers to an approach in which current, high-quality research evidence is integrated with practitioner expertise and client preferences and values into the process of making clinical decisions.”

National Autism Center (2009)
“Evidence-based practice requires the integration of research findings with other critical factors. These factors include:
- Professional judgment and data-based decision-making
- Values and preferences of families, including the student on the autism spectrum whenever feasible
- Capacity to accurately implement the interventions.”

Education (Whitehurst, 2002)
Evidence-based education is “the integration of professional wisdom with the best available empirical evidence in making decisions about how to deliver instruction.”

There are two main reasons why we believe it is important to adopt the broader view that is common in other professions. First, this broader view explicitly recognizes that the choice of treatments
in a particular school, classroom, or individual case is an outcome of a decision-making process in which empirical evidence is one of several important influences. This decision-making process is the key to whether programs and interventions that have strong research support are implemented in schools; therefore, articulating how research evidence should be integrated into decision-making is extremely important for addressing the goal of increased use of this evidence to improve outcomes. Second, the broader view suggests that the best available evidence should influence all educational decision-making, whereas the narrower view leads to lists of well-supported treatments. These lists can be extremely useful within a decision-making process, but by themselves they cannot offer solutions to many of the challenges that educators face daily (Slocum, Spencer, & Detrich, 2012 [this issue]). These points are elaborated upon throughout the rest of this paper and in other articles in this special issue.
Many of the authors who have used the narrower definition of evidence-based practice also fully recognize the importance of the problem-solving processes that may lead to the use of well-supported treatments. For example, Cook, Tankersley, and Harjusola-Webb (2008) provided extensive discussion of the necessary role of professional decision making by special educators in selecting and implementing treatments. Cook and Cook (2011) recognized these issues and suggested “that educators use the term ‘evidence-based education’ to describe the broader process of evidence-based decision making” (p. 7). In addition, many of the other authors who have used the narrower sense of evidence-based practice have published extensively on systems to improve educational decision-making and support for high quality implementation. We do not differ from these authors in our overall vision of educational decision-making, but we do strongly believe that there is great value in the broader definition of evidence-based practice.
We believe that diverging from the meaning of evidence-based practice that is used in other professions invites a great deal of confusion. The educational endeavor includes a variety of disciplines (e.g., special education, psychology, speech-language pathology, occupational therapy, and physical therapy), and if these disciplines adopt dissimilar notions of evidence-based practice, the risk of confusion is extremely high. In addition, practitioners from different disciplines speak to each other in different “languages,” which could result in delays to effective treatment for clients. Additionally, consumers are likely to be confused by the different uses of the terms and less able and willing to participate in decision-making.
We suggest that the term evidence-based practice be reserved for the decision-making process described above and the term empirically supported treatment be used for activities that have been shown to meet specific criteria for research support. These terms correspond with usage in the extensive literature on evidence-based practice in psychology (e.g., APA, 2005; Chambless & Ollendick, 2001; Stoiber & Kratochwill, 2000). In the context of education, treatment typically refers to an intervention, program, or curriculum that addresses any academic or behavioral outcome. We would add that treatment could also refer to smaller scale strategies, tactics, and techniques. Most fundamentally, a treatment is any specific educator behavior that has an identifiable effect on student behavior. A treatment can be preventive (teaching reading to all students), concurrent (correction routines within instruction), or remedial (extra support for struggling readers), and it can be applied at any level of educational service, such as whole school, classroom, or individual student.
Evidence-based Practice: New Wine or New Bottle?
The increased interest in evidence-based practice has not been received without reservations from responsible practitioners across disciplines (Norcross, Beutler, & Levant, 2006). The concern may be framed as whether evidence-based practice is a new way to describe what practitioners have always done (new bottle) or a new way of practicing (new wine). It is understandable that those who have been working in educational settings may be skeptical about the evidence-based practice movement. After all, educational services have been offered for years without any explicit requirements that practitioners rely on scientific research to inform decisions about services. However, it has been argued that a service is not a service if the treatments are unevaluated (Hoagwood et al., 2002). “Service” implies the treatment is beneficial. Unless we know the benefits and risks of a treatment there is no basis for claiming it is a service. A treatment may cause harm to students, and that harm may not be detected unless the treatment has been evaluated. This suggests there is an ethical responsibility to base decisions on the best available evidence. Most organizations that license practitioners have ethical statements requiring decisions to be based on scientific evidence (APA, 2002; National Association of School Psychologists [NASP], 2000; Behavior Analyst Certification Board [BACB], 2004). However, the persistent research to practice gap makes it clear that it is not always common practice to base decisions about treatments on the existing research base. Increasing the emphasis on best available evidence as a basis for making treatment decisions is indeed a new wine for many practitioners.
It could be argued (Detrich, 2008) that the evidence-based practice movement is ultimately a consumer protection movement. Narrowing the research to practice gap protects consumers by using treatments that the best evidence suggests are most likely to be effective. An equally important but less obvious protection for consumers is that in a fully realized evidence-based practice approach, client values and context are also explicitly recognized as critical considerations in decision-making. Recognition of client values raises important questions about the goals of educational treatments and reminds practitioners that effectiveness can only be judged relative to established goals. When decisions are made for districts, schools, or classrooms, clients can be considered to be the larger community. When decisions are made for individual students, their specific families become important partners in the process of setting goals and evaluating potential treatments. Finally, being sensitive to the various contexts in which a treatment might be implemented, the practitioner will have to make
judgments about which treatment is most appropriate and how to adapt it to best fit the relevant clinical context.
The following section elaborates on each of the three elements of evidence-based practice (best available evidence, professional judgment, and client values and context). This serves as background information for a framework of evidence-based practice, which highlights the interactive nature of these three influences in educational practice. The expanded descriptions of best available evidence, professional judgment, and client values and context also set the stage for the subsequent articles in the special issue. In Figure 1, we mark the primary responsibilities of evidence-based practitioners with squares and the sources of influence with circles. Arrows indicate the direction of influence, as one element informs the other. We recognize this process is neither as simple nor as linear as we have depicted here. In fact, practitioners are continually making decisions, and the interacting influences are much too complicated to capture accurately in a diagram. For present purposes, it is useful to describe some of the basic processes related to selecting, adapting, and implementing treatments before layering on additional complexities, many of which are addressed in the other articles in this special issue.
Best Available Evidence
The evidence-based practice model asserts that the best available evidence should be one of the three main influences on educational decision-making. The term best available evidence implies that there is a range of evidence and that educators should select the best of what is available. Although seemingly simple, this concept is powerful and has far-reaching implications for educational practice. It requires that educators determine the evidence that is best for the particular decision to be made. The best evidence is that which (a) is most relevant to the decision and (b) has the highest degree of certainty. Relevance depends on how closely the evidence matches the educator's particular problem in terms of the nature of the students involved, the desired outcomes, the details of the treatment, and the school context. The certainty of the evidence depends on methodological quality and the amount of research available. When highly relevant and highly certain evidence is available, educators should use it. And when ideal evidence is not available, educators should use the best of what is available. The best available evidence may be only indirectly relevant and may fall short of the highest standards for rigorous research. Nonetheless, the mandate to use the best available evidence suggests that imperfect evidence, used wisely, is better than no evidence at all. This concept is necessary if evidence-based practice is to be pervasive
in educational decision-making. If educators attend only to the highest quality evidence, evidence-based practice is limited to a small minority of educational decisions. This topic is explored thoroughly in Slocum et al. (2012 [this issue]), but for current purposes it is important to understand that in our evidence-based practice framework, best available evidence is sufficiently broad to inform selecting and adapting treatments, designing treatments locally, and relying on progress monitoring data (practice-based evidence) to evaluate impact (see Figure 1).
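The idea that evidence varies along two dimensions, relevance and certainty, and that the “best available” is simply the best of what exists, can be made concrete with a small illustration. The following Python sketch is ours, not the authors'; the numeric scales and the product rule are invented purely to show how weak-but-relevant local evidence can still outrank rigorous-but-remote research when nothing better exists.

from dataclasses import dataclass

@dataclass
class Evidence:
    # One source of evidence bearing on a practical question.
    # Field names and 0-1 scales are hypothetical, for illustration only.
    source: str
    relevance: float   # match of students, outcomes, treatment, and setting
    certainty: float   # methodological quality and amount of research

def rank_best_available(candidates):
    # The paper gives no numeric rule; a simple product is used here only
    # to illustrate weighing relevance against certainty.
    return sorted(candidates, key=lambda e: e.relevance * e.certainty,
                  reverse=True)

pool = [
    Evidence("systematic review, different population", 0.4, 0.9),
    Evidence("single-case study with very similar students", 0.9, 0.5),
    Evidence("local progress-monitoring data", 1.0, 0.3),
]
for e in rank_best_available(pool):
    print(f"{e.relevance * e.certainty:.2f}  {e.source}")

Under these invented scores, the highly relevant single-case study ranks first; the point is not the arithmetic but that "best available" is a comparative judgment across an imperfect pool, not a fixed quality bar.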
Perhaps not surprisingly, there are other perspectives about what is meant by the term best available evidence. In some instances, best available evidence has come to mean research with very high methodological quality. Organizations such as the What Works Clearinghouse (WWC, 2011) rely on a specific type of systematic review to develop evidence about a treatment's impact. The review process is rigorous and excludes research studies that do not meet defined quality standards. In general, randomized clinical trials, as well as very high quality quasi-experimental designs (WWC, 2011) and single-case designs (Kratochwill et al., 2010), constitute evidence, and all other methods of developing evidence are ignored. Further, even very high quality research is excluded if it does not meet specific requirements of relevance to the specific review topics. In many instances, after reviewing the research base for a specific topic, it is determined that none or very little of the published research meets the quality and relevance standards, and as a consequence the systematic review provides limited guidance on topics that are of great importance to practitioners. For example, in a recent report the WWC (2012) reviewed the evidence for Peer-Assisted Learning Strategies (PALS) as an intervention to improve adolescent literacy. Of the 97 research articles that were reviewed, none met standards and only one was found to meet standards with reservations, so it was determined that the evidence only supported the conclusion that PALS had potentially positive effects. It must be emphasized that the articles reviewed had all been through a peer-review process and found to be adequate for publication in professional journals.
This sort of review is very useful for some educational questions. In addition, it is valuable to know when this level of evidence is not available so educators can be aware that they are using less certain evidence in decision-making. However, if this is the only source of evidence that is considered to be legitimate, most educational decisions will be made without the benefit of evidence. Educators cannot put their decisions on hold until numerous high quality research studies are conducted on their particular situation. Part of the power
Figure 1. Evidence-based practice decision-making framework. Squares represent evidence-based practitioners' responsibilities; circles represent sources of influence; small shaded circles suggest a filtering effect of professional judgment; arrows indicate the direction of influence. The framework links a practical question to positive outcomes through three practitioner responsibilities (selecting, adapting, and implementing treatments), each informed by the best available evidence, professional judgment, and client values and context.
of the concept best available evidence is that it recognizes this reality of professional practice and provides flexibility so that decisions can be as effective as possible given the evidence that is available. When evidence is incomplete and less than definitive, the educator must exercise greater professional judgment in selecting treatments. Recognizing the uncertainties involved in these decisions, evidence-based educators place greater reliance on progress monitoring to evaluate the effectiveness of their decisions.
Professional Judgment
The second component of evidence-based practice is professional judgment. Professional judgment is ubiquitous. We do not say this as a suggestion of how professional decision-making processes ought to work, but as recognition of how they must work. Decisions cannot be made without the filter of professional judgment (see Figure 1). From the initial steps of recognizing that a situation is a problem and the formulation of a problem statement through later steps of evaluating progress and judging whether the problem has been solved, decision-making simply cannot occur without judgment. Through professional judgment, the practitioner refines the other sources of influence by retaining relevant and valuable information and discarding the rest. At every juncture practitioners must employ professional judgment to make decisions about how to weigh the best available evidence, client values, and contextual factors and to navigate the decision-making process.
Professional judgment is a fundamental element of evidence-based practice, but often its value and complexity are not fully recognized. The APA Presidential Task Force on Evidence-Based Practice (2005) described eight competencies that contribute to professional judgment: (1) formulating the problem so that treatment is possible; (2) making clinical decisions, implementing treatments, and monitoring progress; (3) interpersonal expertise; (4) continuous development of professional skills; (5) evaluating research evidence; (6) understanding the influence of context on treatment; (7) utilizing available resources; and (8) having a cogent rationale for treatment. The process of developing professional judgment has been a source of discussion. Some have suggested that professional judgment (also known as professional wisdom) is developed purely through experience (Kamhi, 1994; Whitehurst, 2002). However, the list of competencies that contribute to professional judgment suggests more complex interrelations; the evaluation of research, monitoring of progress, and continuous development of professional skills are aspects of professional judgment that identify linkages with the research base, with systematically learning
from progress monitoring, and with professional education. This notion of professional judgment as a rigorous and informed aspect of professionalism is far different from the idea that judgment allows room for an attitude that anything goes based on uninformed opinion and unconstrained bias. Stanovich and Stanovich (2003) argue:

Teachers, like scientists, are ruthless pragmatists (Gersten & Dimino, 2001; Gersten, Chard, & Baker, 2000). They believe that some explanations and methods are better than others. They think there is a real world out there--a world in flux, obviously--but still one that is trackable by triangulating observations and observers. They believe that there are valid, if fallible, ways of finding out which educational practices are best. Teachers believe in a world that is predictable and controllable by manipulations that they use in their professional practice, just as scientists do. Researchers and educators are kindred spirits in their approach to knowledge, an important fact that can be used to forge a coalition to bring hard-won research knowledge to light in the classroom. (p. 35)
This captures an approach to professional judgment that complements (rather than detracts from) the best available evidence. In our evidence-based practice framework, one function of best available evidence is to sharpen practitioner judgment. It was not possible to represent how this sharpening occurs in Figure 1 without complicating the visual display of the other interactions, but the relationship between best available evidence and professional judgment is indeed vital to fully realizing this model. Given the potential sources of bias that can affect judgment, both formal research and ongoing progress monitoring can serve as moderating influences and improve the quality of decision-making. Knowledge of recommendations derived from the best available evidence in a variety of circumstances helps practitioners learn from their clinical experience by focusing their attention on variables that are most important for change. Through an interaction between evidence and direct experience of their effectiveness, practitioners are shaped into wise decision-makers.
Client Values and Context
The nal component of evidence-based practice is client values
and contexts. Client values represent the deeply held ideals of indi-
vidual clients, their families, and the broader community. The inclu-
sion of client values in the evidence-based practice framework recog-
nizes many of the important factors that have been described as social
138 SPENCER, DETRICH, and SLOCUM
validity of research (Wolf, 1978). Both professional ethics and practi-
cality demand that the values of the students, families, and communi-
ties we serve must be a fundamental contributor to decision-making.
The very purposes and goals of education are socially determined as
is the range of acceptable forms of treatment. Professional ethics re-
quire that the consideration of values extend from the broad commu-
nity to smaller groups and families. This ethical stance is embodied in
the requirement of family involvement in the process of developing
Individual Educational Plans (IEPs) as is mandated by the Individuals
with Disability Education Improvement Act (IDEIA, 2004).
On the practical level, we can see that schools exist only with the support of the larger community, and often the strength of that support can be measured through funding and other resources. Further, the effectiveness of many interventions may be partially dependent on student and family involvement. This involvement may correlate with the degree to which the goals and the nature of the treatment correspond with deeply held values. For example, an intervention to improve classroom behavior may be more effective with strong “buy-in” from students and their families. Also, if families contribute to the selection of an academic intervention, they may be more willing to support homework and efforts to promote generalization.
Recognition of client values as an important contributor to decision-making can support the overall purpose of evidence-based practice in education: improved outcomes for students. However, we must also recognize that the interaction between client values and the other two components is not simple. One of the key roles of professional judgment is to bring the best available evidence together with client values. Client values should not dominate this judgment (e.g., “it does not matter what the research says; we like a particular program”) any more than the best available evidence without consideration of client values should drive decisions. Giving priority to client values without consideration of the best available evidence can result in wasted time, money, and resources and fail to produce meaningful outcomes. Giving special weight to the best available evidence without consideration of client values can result in treatments with low acceptability, which may fail to produce effective outcomes. In addition, low treatment acceptability can result in failure to maintain implementation even if outcomes are positive.
In addition to client values, when making decisions about treatments, practitioners should also consider the context in which treatment is to occur. Consideration of the role of context is important for selecting treatments that are most likely to produce positive results in the particular school, classroom, or situation within a classroom. Each setting includes specific resources and constraints that influence the effectiveness of a particular treatment. Contextual factors to be considered include the correspondence between the values and theoretical assumptions of a treatment and those of the implementers; the match between the resources required and the resources available in the treatment setting (including materials, time, space, and personnel); and the degree to which implementers have the training and skills to implement the treatment with adequate levels of integrity. For example, there would be little value in selecting a treatment if the school cannot afford to purchase the materials and provide the professional development necessary for high quality implementation. When several treatment options have roughly the same level of support from the best available evidence, contextual variables may determine the best choice. Treatments that are a better contextual fit may be implemented with higher quality and ultimately produce better outcomes (Albin, Lucyshyn, Horner, & Flannery, 1996). The resources required, level of training and skills, and acceptability to the implementers, although important, should not be used as the only or even the primary basis for selecting a treatment with minimal research support when a well-supported treatment exists.
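The contextual-fit factors just listed (value correspondence, resource match, and implementer skill) lend themselves to a simple illustration. The short Python sketch below is ours, not drawn from the article or from Albin et al. (1996); the resource names are hypothetical, and the point is only that fit can be assessed by comparing what a treatment requires against what a setting can supply.

def unmet_requirements(required, available):
    # Both arguments are sets of resources, skills, or supports.
    # Returns what the setting still lacks; an empty set suggests good fit.
    return required - available

treatment_needs = {"curriculum materials", "30 min daily instruction",
                   "trained aide", "ongoing coaching"}
school_offers = {"curriculum materials", "30 min daily instruction"}

gaps = unmet_requirements(treatment_needs, school_offers)
if gaps:
    print("Weak contextual fit; unmet needs:", sorted(gaps))
else:
    print("Good contextual fit")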
Evidence-based Practice Framework
In this section, we highlight some of the ways that the best available evidence, professional judgment, and client values and context interact throughout a problem-solving process (including selecting, adapting, and implementing treatments). The process begins with a practical question and ends with positive outcomes for students. Many important interactions between the best available evidence, professional judgment, and client values and context are captured in Figure 1; however, it is impossible to portray the true dynamic nature of these relationships in a two-dimensional illustration. Therefore, we offer this figure as a starting point for thinking carefully about an important and complex topic.
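Read procedurally, the cycle in Figure 1 is an iterative loop rather than a one-pass sequence. The sketch below is our own abstraction of that loop, not an algorithm from the article: select, adapt, revise, and the adequacy test all stand in for professional judgment operating on the best available evidence and on client values and context, and every name in it is a placeholder.

import random

def implement_and_monitor(treatment):
    # Stand-in for delivering the treatment and collecting progress data.
    return {"treatment": treatment, "growth": random.random()}

def evidence_based_practice(select, adapt, revise, adequate, max_cycles=10):
    # Iterative decision loop abstracted from Figure 1 (illustrative only).
    treatment = adapt(select())
    for _ in range(max_cycles):
        data = implement_and_monitor(treatment)   # practice-based evidence
        if adequate(data):
            return treatment                      # positive outcomes achieved
        treatment = revise(treatment, data)       # further problem solving
    return treatment

plan = evidence_based_practice(
    select=lambda: "peer-assisted reading, small group",
    adapt=lambda t: t + " (pacing adjusted to setting)",
    revise=lambda t, d: t + " + fidelity coaching",
    adequate=lambda d: d["growth"] >= 0.8,
)
print(plan)

The design point the loop makes explicit is that implementation is not the final step: progress monitoring feeds back into revision until outcomes are judged adequate.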
Practical Question
The evidence-based practice decision-making process is initiated by a concrete educational problem: the practitioner identifies student performance that is not adequate in some way. This may be as broad as reading outcomes for an entire school district or as specific as the social behavior of an individual with autism. The identification of performance as inadequate is itself a professional judgment based on experience, training, and context. In order to begin the systematic evidence-based practice decision-making process, the practitioner must
formulate a practical question. A well-constructed question defines the population or student(s) under consideration (e.g., third grade student(s) with autism), the outcome to be achieved (e.g., increased reading comprehension), and key features of the setting (e.g., special education teacher and six students in a self-contained classroom). The question usually takes one of two forms. In a problem-based question, the practitioner asks about the best interventions to solve a specific problem. For example, “What treatment should I use to teach reading comprehension to my third grade students with autism in a small group arrangement?” This type of question asks for a comparative evaluation of all relevant treatments. Alternatively, in a treatment-based question, the practitioner asks about the effectiveness of a specific treatment. For example, “What is the evidence supporting the use of direct and explicit instruction for teaching reading comprehension to third grade students with autism in a small group arrangement?” This type of question asks about the evidence on a single treatment of interest. In the course of day-to-day practice, practitioners are likely to ask both types of questions.
Regardless of which form the question takes, practitioners should formulate the question in such a way that evidence is useful and relevant. In other words, formulation of the question must be informed by client values concerning educational goals and an understanding of the opportunities and limitations afforded by the educational context (see Figure 1). Important educational goals may be informed by state core standards, but students and families may have suggestions about which goals are priorities for them. For example, the family of a student with autism may consider social outcomes and inclusion more important than learning to count. In this example, the school's infrastructure for supporting inclusion is also relevant to the formulation of the question. Before initiating a search for the best available evidence, a great deal of professional judgment is needed to incorporate client values and the context into the question. To minimize or ignore either the client values or the school context will result in a question that is not properly formed and could misdirect the search for the best available evidence.
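Because a well-formed question always names a population, an outcome, and a setting, and its problem-based and treatment-based forms differ only in whether a specific treatment is named, it can be represented as structured data. The following sketch is a hypothetical illustration of ours; the field names are not from the article.

from dataclasses import dataclass
from typing import Optional

@dataclass
class PracticalQuestion:
    # Elements of a well-constructed question (field names are ours).
    population: str            # e.g., "third grade students with autism"
    outcome: str               # e.g., "increased reading comprehension"
    setting: str               # e.g., "small group, self-contained classroom"
    treatment: Optional[str] = None   # named only in treatment-based questions

    def form(self):
        # Problem-based questions compare all relevant treatments;
        # treatment-based questions ask about the evidence for one.
        return "treatment-based" if self.treatment else "problem-based"

q = PracticalQuestion("third grade students with autism",
                      "increased reading comprehension",
                      "small group, self-contained classroom",
                      treatment="direct and explicit instruction")
print(q.form())   # treatment-based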
Selecting Treatments
A well-constructed question that incorporates client values and context guides the search for the best available evidence. Practitioners must judge the available evidence for “bestness”: scientific strength and relevance to their question. The process of identifying the best available evidence involves interplay between the practical question, the various sources of evidence, and considerations of client values
and context (see Figure 1). As evidence is encountered it must be weighed for strength and relevance to the question, and this process may require further consideration of goals and acceptable forms of treatment as well as the context in which the treatment will be implemented.

The search for the best available evidence involves multiple sources of evidence. Among these are empirically supported treatment reviews, practice guides, best practice reviews, primary research articles, and relevant principles of behavior (see Slocum et al., 2012 [this issue]). These sources of evidence can all contribute to selecting a treatment that addresses the practical question, is supported by the best available evidence, and makes sense in the particular context. An empirically supported treatment review may identify a treatment that has strong supporting evidence in which the population and setting in the research are a close match with the practice context. But this is rare. It is more likely that the match of populations and/or settings is not perfect. This requires practitioners to be sensitive to specific characteristics of their students (e.g., age, performance level, and specific academic, behavioral, and cognitive strengths and weaknesses) and their context (e.g., skills of staff, available training and supervision, and other resources). Differences between the specific participants and context in the research setting and those in the practice setting require the practitioner to make judgments about the importance of these differences. Various sources of evidence could inform these judgments. The practitioner could consult practice guides and best practice reviews for guidance on whether the treatment is likely to be effective with the particular population of students and context in question. This difficult professional judgment is also informed by his or her knowledge of relevant principles of behavior. Thus, the selection of treatments is not equivalent to choosing an intervention from a list of empirically supported treatments; rather, it involves critical interplay between the best available evidence, client and contextual considerations, and professional judgment.
Adapting Treatments
After selection of a treatment, the evidence-based practice process involves judgments about whether the treatment as it is described in research studies, manuals, curricula, or other materials must be adapted to the specific local context (see Figure 1). Practitioners must make detailed decisions about the specific features to be changed and exactly how they are to be adjusted to produce a good “contextual fit” (Albin et al., 1996) and increase the probability of a positive outcome. The best available evidence should inform these judgments and
decisions, as well. This step is extremely important for the success of the entire process. Failure to adapt the treatment to local circumstances may render an otherwise powerful treatment ineffective. On the other hand, adaptations that eliminate or undermine critical elements of the treatment may also render it ineffective. The evidence base informing adaptation may come from research covering the range of the treatment's variations that retain its effectiveness. These decisions can also be informed by more general evidence about effective instructional and behavioral strategies. Although this more general evidence may not be specific to the treatment in question, it may provide a very effective reference for wise decision-making and may constitute the best evidence that is available to inform these decisions.
Implementing Treatments
Selecting an empirically supported treatment is not sufficient to assure positive outcomes. It is necessary to carefully consider many issues related to implementation in a specific context. Failing to attend to issues of implementation will likely result in the failure of the treatment effort. Effective implementation requires ongoing professional judgment that can be informed by the best available evidence and include consideration of client values and context (see Figure 1). The implementation process requires careful consideration of the features of the treatment setting to assure a good contextual fit. Professional development to ensure that those who deliver the treatment have all the necessary skills is fundamentally important to effective implementation. There is likely to be tension between the requirements of training and the realities of providing training in a service delivery context. The best available evidence for effective training and professional judgment should guide decisions about how professional development is conducted in a specific practice context.
Joyce and Showers (2002) present compelling evidence that supervision and coaching are necessary to assure that the treatment is actually delivered in an effective manner. Again, there will be tension between the demands of effective supervision and coaching and the many other demands on practitioners' time. Practitioners must make decisions about how to arrange supervision and coaching so that they are effective and efficient. Effectiveness without efficiency will likely result in poor sustainability of the implementation effort. Efficiency without effectiveness will likely result in limited positive outcomes. Effective implementation has been the focus of research in recent years (Adelman & Taylor, 2003; Elliott & Mihalic, 2004; Fixsen, Naoom, Blase, Friedman, & Wallace, 2005). This research clearly suggests that implementation is a series of judgments about making an
empirically supported treatment fit effectively into a specific treatment context. Evidence-based practitioners would be well served to take advantage of the best available evidence to guide and inform their judgments about implementation in their particular context.
The evidence-based practitioner relies on progress monitoring to evaluate the effects of a treatment. Progress monitoring to improve outcomes is itself supported by substantial research (Fuchs & Fuchs, 1986; Yeh, 2007). When a treatment is implemented, progress monitoring provides the best available evidence on the effects of the implementation. The data from progress monitoring provide an occasion for additional problem solving. These data must be evaluated and judgments made about whether progress is adequate and, if not, what should be done about it (Barnett, Daly, Jones, & Lentz, 2004; Witt, VanDerHeyden, & Gilbertson, 2004). The practitioner may continue to monitor outcomes, institute measures of fidelity of implementation, adjust supervision and coaching, modify treatment procedures, change treatments, or make other decisions. Evidence-based practice suggests that the best available evidence, the practitioner's experience, and relevant aspects of context should inform those choices. For example, there is a growing research base on monitoring fidelity of implementation, interpreting these results, and intervening to improve fidelity (Bartels & Mortenson, 2005; Codding, Livanis, Pace, & Vaca, 2008; Gresham, 2009; Mortenson & Witt, 1998). One of the common features for assuring high levels of integrity over time is the use of performance feedback (Mortenson & Witt, 1998; Noell, Duhon, Gatti, & Connell, 2002). Extending beyond treatment integrity, there is a large research base about the effects of performance feedback on a wide range of behaviors in a wide variety of settings (Alvero, Bucklin, & Austin, 2001; Balcazar, Hopkins, & Suarez, 1986). Much of the established research is from settings other than schools, so the practitioner will have to make judgments about the relevance of this literature to the practical problem he or she is trying to solve and the appropriateness of generalizing from this research base to the current context. Even though the practitioner is relying on the best available evidence, judgments are necessary for making decisions.
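The chain of judgments just described, evaluating progress and then choosing among continuing, improving fidelity, or modifying the treatment, can be summarized as a simple decision rule. The sketch below is an illustration of ours, not a procedure from the article; the 0.8 fidelity threshold and the slope comparison are invented placeholders.

def respond_to_progress(growth_rate, goal_rate, fidelity):
    # growth_rate: observed rate of student progress
    # goal_rate: rate needed to reach the established goal
    # fidelity: proportion of treatment steps delivered as designed
    # Thresholds are hypothetical, not from the article.
    if growth_rate >= goal_rate:
        return "continue treatment; keep monitoring"
    if fidelity < 0.8:
        # Weak outcomes with weak implementation: intervene on fidelity
        # first (e.g., performance feedback), not on the treatment itself.
        return "improve fidelity via coaching and performance feedback"
    # Implemented well but still not working: adapt or change the treatment.
    return "modify the treatment or select an alternative"

print(respond_to_progress(growth_rate=0.4, goal_rate=1.0, fidelity=0.6))

The ordering of the checks reflects the logic of the treatment-integrity literature cited above: inadequate progress is only evidence against a treatment when the treatment was actually delivered as designed.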
Positive Outcomes
Positive outcomes and positive social evaluations are the final arbiters of the adequacy of decisions and the basis for claiming that the initial practical problem has been solved. The decision-making process is iterative and is not complete until positive outcomes are achieved. However, the achievement of positive outcomes is also a professional judgment that can be based on the two other pillars of
evidence-based practice. The best available evidence and client values have important roles in informing reasonable expectations for outcomes (see Figure 1). Research evidence may show whether the outcomes obtained are comparable to those reported for similar groups of students. Client values are also important for setting standards of performance. Although it may seem that “better” outcomes are always a clear goal, it is not always so simple. First, different communities and families may place different relative weight on various outcomes (see Strain, Barton, & Dunlap, 2012 [this issue]). Second, any decision (or lack of decision) entails opportunity costs: choosing to devote time and resources to any course of action precludes the use of those resources to pursue other courses of action. For example, a community may place high value on both reading and math skills. Implementing a particular reading program and providing two hours per day for literacy instruction may raise reading performance. The question for the practitioner, then, is whether to devote additional time and resources to further improve reading outcomes or to devote those resources to improving math performance. The evidence-based practitioner might seek the best evidence on whether the current level of reading performance predicts success in later grades and also consider the value the community places on both reading and math. A reading program, of course, is an ongoing treatment. Other treatments, such as special interventions to improve social behavior or address particular reading challenges of specific students, may be temporary. When such treatments are successful, practitioners must decide whether the treatment will continue and, if so, in what form. It may continue indefinitely in its present form, or features of the treatment might be modified so that the treatment is less obvious and less demanding on the support system. These types of decisions are very common in tiered decision-making approaches such as Response to Intervention (RtI; Walker & Shinn, 2010) and school-wide positive behavior supports (Sugai & Horner, 2005).
Conclusion
The basic assumption of evidence-based practice is that basing important professional decisions on the three pillars of the best available evidence, professional judgment, and client values and context is more likely to result in positive outcomes than a process that does not take advantage of these three influences in a comprehensive manner. Ultimately, this assumption has to be tested empirically. Currently, the best available evidence regarding the effectiveness of evidence-based practice is a chain of logic. Embedded in this logic is a delicate balance between research evidence and client values and context,
with a great deal of confidence in practitioners' judgment to create this balance. It has to be acknowledged that being an evidence-based practitioner is extremely challenging and requires a great deal from the professional, because there is no research-based guidance about how practitioners should achieve the balance. Despite a lack of evidence indicating that the evidence-based practice process is effective, it is likely the best model for practitioners who seek to have a positive impact. In order to honor their professional responsibility, practitioners should seek out the best available evidence for selecting, adapting, and implementing treatments. To be maximally effective, they must do this by relying on their professional judgment aided by ongoing progress monitoring data, and by making the client an active part of the decision-making process.

We have sketched a conceptual roadmap that may serve as an initial basis for further reflection on and inspection of evidence-based practice. Given the limited empirical examinations that exist, it is also a logical thing to do. However, logic is not an acceptable substitute for evidence. The impact of evidence-based practice on the research to practice gap (and on educational outcomes) is an area of investigation ripe with opportunity, and one that is best approached through collaborative relationships between invested researchers and practitioners.
Introduction to Special Issue
This special issue of Education and Treatment of Children explores some of the many important facets of evidence-based practice. Slocum, Spencer, and Detrich examine the concept best available evidence and suggest that evidence-based practice can be most widely applied and most effective in education if this concept is understood to include multiple sources of evidence and multiple kinds of treatments. Strain, Barton, and Dunlap illustrate the importance of considering client values and preferences in the selection of intervention targets and the design of service delivery systems. From a social validity perspective, they present lessons learned about consumer input and satisfaction. The remaining articles explore the challenges and uncertainties that are inherent in the process of identifying empirically supported treatments. Slocum, Detrich, and Spencer suggest that the process of systematically reviewing and evaluating research support for treatments is a measurement process and that the concepts of measurement validity are relevant. In this article, they discuss how various aspects of measurement validity can be applied to the process of reviewing research to identify empirically supported treatments. One particularly challenging component of these reviews is quality appraisal of the studies: the process of rating the methodological quality of each research study. Wendt and Miller compare seven scales for quality appraisal of single-subject research. They describe each scale, compare each to the quality indicators proposed by Horner et al. (2005), and apply each to a set of four research studies. In a complementary article, Horner, Swaminathan, Sugai, and Smolkowski examine the fundamental logic of single-subject research and suggest how this research paradigm can come to be more influential in identifying empirically supported treatments. They clarify the specific features of data that support strong conclusions from single-subject research results. This is important for disciplined visual analysis of results, and it also provides a basis for evaluating strategies for statistical analysis of single-subject results. Susan Wilczynski draws on her experience as the director of the National Autism Center's National Standards Project to identify numerous risks that are inherent in the process of systematically reviewing research and identifying treatments. This article is an important reminder that reviewing research and identifying well-supported interventions is an extremely complex process fraught with challenges. Gardner, Spencer, Boelter, Dubard, and Jennett use the single-subject quality indicators suggested by Horner et al. (2005) to evaluate the evidence base on brief functional analysis methodology as a means of assessing the behavioral difficulties of typically developing children. It is an example of many of the key issues and challenges related to identifying empirically supported treatments (and assessments). Finally, O'Keeffe, Slocum, Burlingame, Snyder, and Bundock examine the question of whether systematic reviews that identify empirically supported treatments tend to derive recommendations that are similar to those from traditional narrative reviews and meta-analyses. They use the research on repeated readings, an intervention to improve reading fluency, to test the convergence of these different types of reviews. There are many intriguing questions for future evidence-based practice research in these articles. We look forward to the continued advancement of evidence-based practice and refinement of many of the concepts discussed in these papers.
References
Adelman, H. S., & Taylor, L. (2003). Rethinking school psychology: Commentary on public health framework series. Journal of School Psychology, 41, 83-90.

Albin, R. W., Lucyshyn, J. M., Horner, R. H., & Flannery, K. B. (1996). Contextual fit for behavioral support plans. In L. Koegel, R. Koegel, & G. Dunlap (Eds.), Positive behavioral support: Including people with difficult behaviors in the community (pp. 81-97). Baltimore, MD: Brookes.

Alvero, A. M., Bucklin, B. R., & Austin, J. (2001). An objective review of the effectiveness and essential characteristics of performance feedback in organizational settings (1985-1998). Journal of Organizational Behavior Management, 21, 3-30.

American Psychological Association (August, 2005). American Psychological Association policy statement on evidence-based practice in psychology. Published as appendix of APA Presidential Task Force on Evidence-Based Practice (2006). Evidence-based practice in psychology. American Psychologist, 61, 271-285.

American Psychological Association (2002). Ethical principles of psychologists and code of conduct. Retrieved from http://www.apa.org/ethics/

American Speech-Language-Hearing Association. (2005). Evidence-based practice in communication disorders [Position statement]. Retrieved from www.asha.org/policy

Balcazar, F. R., Hopkins, B. L., & Suarez, Y. (1986). A critical, objective review of performance feedback. Journal of Organizational Behavior Management, 7, 65-89.

Barnett, D., Daly, E., Jones, K., & Lentz, F. (2004). Response to intervention: Empirically based special service decisions from single-case designs of increasing and decreasing intensity. Journal of Special Education, 38, 66-79.

Bartels, S. M., & Mortenson, B. P. (2005). Enhancing adherence to a problem solving model for middle-school pre-referral teams: A performance feedback and checklist approach. Journal of Applied School Psychology, 22, 109-123.

Behavior Analyst Certification Board (2004). Behavior Analyst Certification Board guidelines for responsible conduct for behavior analysts. Retrieved from http://www.bacb.com/consum_frame.html

Burns, M. K., & Ysseldyke, J. E. (2009). Reported prevalence of evidence-based instructional practices in special education. Journal of Special Education, 43, 3-11.

Carnine, D. (1997). Bridging the research-to-practice gap. Exceptional Children, 63, 513-521.

Chambless, D. L., & Ollendick, T. H. (2001). Empirically supported psychological interventions: Controversies and evidence. Annual Review of Psychology, 52, 685-716.

Chwalisz, K. (2003). Evidence-based practice: A framework for twenty-first-century scientist-practitioner training. The Counseling Psychologist, 31, 497-528.

Codding, R., Livanis, A., Pace, G., & Vaca, L. (2008). Using performance feedback to improve treatment integrity of classwide behavior plans: An investigation of observer reactivity. Journal of Applied Behavior Analysis, 40, 417-422.
Cook, B. G., & Cook, S. C. (2011). Thinking and communicating clear-
ly about evidence-based practices in special education. Retrieved
from hp://www.cecdr.org/subpage.cfm?id=67726117-C544-
E2C6-4E287FE0E2A6A05E.
Cook, B. G., Tankersley, M., & Harjusola-Webb, S. (2008). Evidence-based special education and professional wisdom: Putting it all together. Intervention in School and Clinic, 44, 105-111.
Cook, B. G., Tankersley, M., & Landrum, T. J. (2009). Determining ev-
idence-based practices in special education. Exceptional Chil-
dren, 75, 365-383.
Detrich, R. (2008). Evidence-based, empirically-supported, or best practice: A guide for the scientist practitioner. In J. K. Luiselli, D. C. Russo, W. P. Christian, & S. M. Wilczynski (Eds.), Effective practices for children with autism: Educational and behavioral support interventions that work (pp. 3-25). New York, NY: Oxford.
Drabick, D. A. G., & Goldfried, M. R. (2000). Training the scientist-practitioner for the 21st century: Putting the bloom back on the rose. Journal of Clinical Psychology, 56, 327-340.
Dunst, C. J., Trivette, C. M., & Cutspec, P. A. (2002). Toward an operational definition of evidence-based practices. Centerscope, 1, 1-10.
Ellio, D. S., & Mihalic, S. (2004). Issues in dissemination and replicat-
ing eective prevention programs. Prevention Science, 5, 47-53.
Espin, C. A., & Deno, S. L. (2000). Introduction to the special issue of Learning Disabilities Research & Practice: Research to practice: Views from researchers and practitioners. Learning Disabilities Research & Practice, 15(2), 67-68.
Fixsen, D. L., Naoom, S. F., Blase, K. A., Friedman, R. M., & Wallace,
F. (2005). Implementation research: A synthesis of the literature.
Tampa: University of South Florida, Louis de la Parte Florida
Mental Health Institute (FMHI Publication #231).
Fuchs, L. S., & Fuchs, D. (1986). Effects of systematic formative evaluation: A meta-analysis. Exceptional Children, 53, 199-208.
Greenwood, C. R., & Abbott, M. (2001). The research to practice gap in special education. Teacher Education and Special Education, 24, 276-289.
Gresham, F. M. (2009). Evolution of the treatment integrity concept:
Current status and future directions. School Psychology Review,
38, 533-540.
Hayes, S. C., Barlow, D. H., & Nelson-Gray, R. O. (1999). The scientist-practitioner: Research and accountability in the age of managed care. Boston, MA: Allyn and Bacon.
Hoagwood, K., Burns, B. J., & Weisz, J. (2002). A profitable conjunction: From science to service in children's mental health. In B. J. Burns & K. Hoagwood (Eds.), Community-based interventions for youth with severe emotional disturbances (pp. 327-338). New York, NY: Oxford University Press.
Horner, R. H., Carr, E. G., Halle, J., McGee, G., Odom, S., & Wolery, M. (2005). The use of single-subject research to identify evidence-based practice in special education. Exceptional Children, 71, 165-179.
Horner, R., Swaminathan, H., Sugai, G., & Smolkowski, K. (in press). Expanding analysis of single case research. Washington, DC: Institute of Education Sciences, U.S. Department of Education.
Individuals with Disabilities Education Improvement Act (IDEA) of 2004, 20 U.S.C. § 1412[a][5], Pub. L. No. 108-446.
Institute of Medicine. (2001). Crossing the quality chasm: A new health system for the 21st century. (Committee on Quality of Health Care in America). Washington, DC: National Academies Press.
Joyce, B., & Showers, B. (2002). Student achievement through staff development (3rd ed.). Alexandria, VA: Association for Supervision and Curriculum Development.
Kamhi, A. G. (1994). Toward a theory of clinical expertise in speech-language pathology. Language, Speech, and Hearing Services in Schools, 25, 115-118.
Kazdin, A. E. (2000). Psychotherapy for children and adolescents: Directions for research and practice. New York, NY: Oxford University Press.
Kratochwill, T. R., & Stoiber, K. C. (2000). Empirically supported in-
terventions and school psychology: Conceptual and practical
issues: Part II. School Psychology Quarterly, 15, 233-253.
Kratochwill, T. R., Hitchcock, J., Horner, R. H., Levin, J. R., Odom, S. L., Rindskopf, D. M., & Shadish, W. R. (2010). Single-case designs technical documentation. Retrieved from http://ies.ed.gov/ncee/wwc/pdf/wwc_scd.pdf
Mortenson, B. P., & Witt, J. C. (1998). The use of weekly performance feedback to increase teacher implementation of a pre-referral academic intervention. School Psychology Review, 27, 613-627.
National Association of School Psychologists. (2000). Professional conduct manual. Retrieved from http://www.nasponline.org/standards/index.aspx
National Autism Center. (2009). National Standards Report: National
Standards Project – Addressing the need for evidence-based practice
guidelines for autism spectrum disorders. Randolph, MA: Na-
tional Autism Center, Inc.
No Child Left Behind, 20 U.S.C. § 6301 et seq. (2001).
Noell, G. H., Duhon, G. J., Gatti, S. L., & Connell, J. E. (2002). Consultation, follow-up, and implementation of behavior management interventions in general education. School Psychology Review, 31, 217-234.
Norcross, J., Beutler, L., & Levant, R. (2006). Evidence-based practices in
mental health: Debate and dialogue on the fundamental questions.
Washington, DC: American Psychological Association.
Odom, S. L., Brantlinger, E., Gersten, R., Horner, R. H., Thompson, B., & Harris, K. R. (2005). Research in special education: Scientific methods and evidence-based practices. Exceptional Children, 71, 137-148.
Sacke, D. L., & Rosenberg, W. C., Gray, J. A. M., Haynes, R. B., &
Richardson, W. S. (1996). Evidence based medicine: What it is
and what it isn’t. BMJ: British Medical Journal, 312(7023), 71-72.
Sacke, D. L., Straus, S. E., Richardson, W. S., Rosenberg, W., & Haynes,
R. B. (2000).Evidence-based medicine: How to practice and
teach EBP (2nd ed.). New York: Churchill Livingstone.
Slocum, T. A., Spencer, T. D., & Detrich, R. (2012 [this issue]). Best
available evidence: Three complementary approaches. Educa-
tion and Treatment of Children, 35(2), 27-55.
Stanovich, P. J., & Stanovich, K. E. (2003). Using research and reason in education: How teachers can use scientifically based research to make curricular & instructional decisions. Washington, DC: US Department of Education.
Strain, P. S., Barton, E. E., & Dunlap, G. (2012 [this issue]). Lessons learned about the utility of social validity. Education and Treatment of Children, 35(2), 57-74.
Stoiber, K. C., & Kratochwill, T. R. (2000). Empirically supported in-
terventions and school psychology: Rationale and method-
ological issues: Part 1. School Psychology Quarterly, 15, 75-105.
Sugai, G., & Horner, R. H. (2005). Schoolwide positive behavior supports: Achieving and sustaining effective learning environments for all students. In W. L. Heward et al. (Eds.), Focus on behavior analysis in education: Achievements, challenges, and opportunities (pp. 90-102). Upper Saddle River, NJ: Pearson Education, Inc.
Walker, H. M., & Shinn, M. R. (2010). Systematic, evidence-based approaches for promoting positive student outcomes within a multi-tier framework: Moving from efficacy to effectiveness. In M. R. Shinn & H. M. Walker (Eds.), Interventions for achievement and behavior problems in a three-tier model including RTI (pp. 1-26). Washington, DC: National Association of School Psychologists.
Whitehurst, G. J. (2002, October). Evidence-based education. Paper presented at the Student Achievement and School Accountability Conference. Retrieved from http://www2.ed.gov/nclb/methods/whatworks/eb/edlite-index.html.
What Works Clearinghouse. (2011). What Works Clearinghouse procedures and standards handbook (Version 2.1). Retrieved from http://ies.ed.gov/ncee/wwc/pdf/reference_resources/wwc_procedures_v2_1_standards_handbook.pdf.
What Works Clearinghouse. (2012). Adolescent literacy intervention report: Peer-assisted learning strategies. Retrieved from http://whatworks.ed.gov
Wi, J. C., VanDerHeyden, A. M., & Gilbertson, D. (2004). Trouble-
shooting behavioral interventions: A systematic process for
nding and eliminating problems. School Psychology Review,
33, 363-383.
Wolf, M. M. (1978). Social validity: The case for subjective measurement or how applied behavior analysis is finding its heart. Journal of Applied Behavior Analysis, 11, 203-214.
Yeh, S. S. (2007). The cost-effectiveness of five policies for improving student achievement. American Journal of Evaluation, 34, 220-241.