Psychological Injury and Law
ISSN 1938-971X
DOI 10.1007/s12207-015-9227-1
Application of the Daubert Standards to the Meyers Neuropsychological Battery Using the Rohling Interpretive Method

Martin L. Rohling (1), John E. Meyers (2), Gerard R. Williams (3), Stephen S. Kalat (4), Shanna K. Williams (5), Joshua Keene (6)

Received: 12 August 2015 / Accepted: 18 August 2015
© Springer Science+Business Media New York 2015
Abstract  The Meyers Neuropsychological Battery (MNB) is a neuropsychological assessment battery used to detect cognitive impairment caused by acquired brain injury. Results obtained by examinees given the MNB have been submitted as evidence in judicial proceedings in cases involving traumatic brain injury (TBI) and other neurocognitive disorders. We provide an examination of the MNB when used with the Rohling Interpretive Method (RIM) through the lens of Daubert v. Merrell Dow (1993). Daubert established criteria to be applied by judges to determine who can provide expert opinions and what these experts are allowed to present to the triers of fact. Daubert has five criteria judges consider in their role as gatekeepers of an expert's testimony. These standards are utilized to ascertain if the expert's testimony is scientific, with particular focus on its reliability, validity, and relevancy. We report on the MNB-RIM's ability to withstand the rigors of a Daubert analysis, with each criterion addressed in sequence. To accomplish this task, we review the peer-reviewed literature that has tested each of the MNB components, as well as the utility of the battery in its entirety. The literature extends over the 20-year history that the MNB and the RIM have been in use in clinical and forensic assessments. Statistics regarding the MNB-RIM's error rate have been empirically derived in numerous publications. These factors have led to a "general acceptance" of the battery and the material. Our review is intended to provide users of the MNB-RIM with the information they will need to successfully defend against a Daubert challenge.
Keywords  Daubert · Neuropsychological assessment · Forensic neuropsychology · Rohling Interpretive Method · Meyers Neuropsychological Battery
* Martin L. Rohling, mrohling@southalabama.edu
John E. Meyers, Jmeyersneuro@yahoo.com
Gerard R. Williams, drgerardwilliams@charter.net
Stephen S. Kalat, drkalat@aol.com
Shanna K. Williams, shanna.k.williams@gmail.com
Joshua Keene, keenejosh22@gmail.com

(1) University of South Alabama, Mobile, AL, USA
(2) Comprehensive MedPsych Systems, Sarasota, FL, USA
(3) Hurley Medical Center, Flint, MI, USA
(4) Private Practice, Denver, CO, USA
(5) Nova Southeastern University, Ft. Lauderdale, FL, USA
(6) Delta Family Clinic, Flint, MI, USA

There has been an increasing and widespread reliance on clinical neuropsychology in providing expert opinions in the courtroom (Greiffenstein & Kaufmann, 2012; Kaufmann & Greiffenstein, 2013; Sweet, King, Malina, Bergman, & Simmons 2002). Since the majority of licensed and practicing psychologists seem to be moving in the direction of evidence-based practice and were likely trained in either the scientist-practitioner model or the practitioner-scholar model, the field has been forced to develop assessment measures, tests, and batteries that are empirically supported (Sweet & Meyers, 2011). The expanding knowledge base generated from neuropsychological research informs "triers of fact," which includes judges, jurors, or mediators (Greiffenstein, 2008). Neuropsychologists are now, more than ever, required to develop extensive knowledge of various pathological conditions, as well as assessment and statistical interpretation skills, to adequately respond to the increased scrutiny typically applied in a forensic setting. The ever-expanding literature related to neuropsychological practice has paradoxically increased the degree of dueling amongst experts in the adversarial arena of forensic practice. Most neuropsychologists can find and cite published empirical research to support nearly any opinion they wish to offer. It is for this reason that the Daubert standards (Daubert v. Merrell Dow, 1993) were developed: to sift through the evidence and separate the wheat from the chaff.
A long-running argument in the field of clinical neuropsychology has been the "fixed" versus "flexible" battery debate. If anyone thinks this argument has been settled, refer to the recent New Hampshire Supreme Court ruling of Baxter v. Temple, which involved a child who was allegedly poisoned by lead paint at the age of 1 year. The initial filing was in 2001, when the child was already 7 years old, and the case settled in the fall of 2012, over a decade later and after the child had turned 18. One of the major disagreements in the case was the use of a flexible versus a fixed battery. Whether a test battery can be considered reliable and valid if it is fixed, flexible, or semi-flexible is based on empirical data published in peer-reviewed specialty journals (Sweet et al., 2002). To some degree, all batteries vary along a continuum defined at the poles as fixed and flexible. Examples of more fixed batteries include the Halstead-Reitan Neuropsychological Battery (HRB; Reitan & Wolfson, 1985) and the Halstead-Russell Neuropsychological Evaluation System-Revised (HRNES-R; Russell & Starkey, 2001). Other batteries that allow for some degree of flexibility are the Neuropsychological Assessment Battery (NAB; Stern & White, 2003) and the WAIS-IV/WMS-III/ACS system (Wechsler, 2008). Regardless of the degree of flexibility a battery's authors allow, it is common that the fixed portion of the battery has undergone extensive testing. The research typically involves the assessment of hundreds of subjects, both normal and those who suffer from cognitive impairment believed to be caused by various neuropathologies.
More flexible batteries have gained wider acceptance, as described in Baxter v. Temple in an Amicus Brief submitted to the New Hampshire Supreme Court by the American Academy of Clinical Neuropsychology (2007). The AACN brief notes that 76 % of practicing clinical neuropsychologists utilize a flexible battery approach; only 7 % of clinicians reported that they relied on fixed batteries (Sweet, Meyers, Nelson, & Moberg 2011). The MNB contains both fixed and flexible components, as described by Meyers and colleagues (for examples, see Meyers, 2014; Meyers & Rohling, 2004; Meyers & Rohling, 2009).
While a flexible battery might rely on the validity that is inherent in its component tests, Rohling and colleagues have developed a systematic approach to flexible battery interpretation entitled the Rohling Interpretive Method (RIM; Miller & Rohling, 2001; Rohling, Langhinrichsen-Rohling, & Miller 2003a; Rohling, Miller, & Langhinrichsen-Rohling 2004). We believe that the MNB, when analyzed via the RIM technique, has demonstrated superior proficiency in discriminating normal cognition from impairments associated with acquired brain injuries (Larrabee, Millis, & Meyers 2008; Larrabee 2012a, b; Rohling, Meyers, & Millis 2003b; Rohling, Williamson, Miller, & Adams 2003c). In reviewing the conflicts over the validity of test batteries, Kaufmann (2012) noted that every published legal case addressing this issue has found that a fixed battery offered no advantage over a flexible battery for forensic consultation (see also Larrabee et al. 2008; Rohling et al. 2003a).
The Daubert standard has defined two broad criteria for admissibility of scientific evidence in the courtroom. First, was the information that is to be submitted in court obtained in a scientifically valid manner? Second, are the results of an assessment relevant to the issue before the court? Other guidelines have been put forth to better delineate what is considered valid and what is considered relevant. An abbreviated listing of the criteria adopted by Daubert is presented in Table 1. As noted by Greiffenstein and Kaufmann (2012), these criteria are unweighted, polythetic, and non-exclusive: "This means no single element is necessary and any single element or combination of elements is sufficient to admit into evidence" (p. 51). Therefore, none of the factors alone is dispositive. The party offering an expert opinion bears the burden of demonstrating that the conclusions offered are based on sound science (Russell, 2012). Thus, the examiner must show that the battery included standardized tests that have been subjected to peer review and have been shown to be valid.
The current paper is designed to review the historical developments that have led up to the current version of the MNB, which includes its use of the RIM interpretive system, and to determine whether it meets the Daubert standards. If so, results from an MNB-RIM assessment should be admitted into evidence, via expert testimony, to assist the court in case adjudication.
Developments of the MNB and RIM
Approximately 20 years ago, Meyers began working on developing a battery of tests (Meyers & Diep, 2000) that is now called the MNB (Meyers, 2014). Initially, the Meyers battery consisted of several commonly administered tests that came from other batteries and were well known to the neuropsychological community. The tests selected came from a variety of existing batteries, including the Halstead-Reitan Battery (HRB; Reitan & Wolfson 1985), Benton's Iowa Neuropsychological Battery (Benton, Sivan, Hamsher, Varney, & Spreen 1994), and the Boston Diagnostic Aphasia Exam (BDAE; Kaplan, Goodglass, & Weintraub 1983). The tests selected for inclusion in the MNB by Meyers had been listed in various compendiums devoted to neuropsychological instruments and norms (i.e., Mitrushina, Boone, & D'Elia 1999; Spreen & Strauss, 1991, 1998; Lezak, 1995). The frequency of use for these instruments has since been documented by Rabin, Barr, & Burton (2005). It is now known that the instruments chosen for inclusion in the MNB constituted over 50 % of the tests ranked in the top 30 most frequently used measures by neuropsychologists.
The battery was administered to both patients and control subjects. Control subjects were hospitalized patients who had no history of acquired brain injury, but had been admitted to undergo various other medical procedures and who volunteered for an assessment to assist in the development of the MNB. The data obtained from these assessments were analyzed to identify which tests had the highest level of sensitivity and specificity in various pathological groups. The results were surprising, as only four or five test scores were needed to discriminate most clinical entities from normal subjects. Additional tests were required to discriminate scores obtained by patients who suffered from various types of neuropathology from those obtained from patients who had suffered a TBI. By determining which tests were the most sensitive to the neuropathology, the battery was whittled down to its current state and was initially named the Meyers Short Battery (MSB). The MSB was subsequently renamed the MNB so as to avoid giving examiners the impression that it was to be used for screening purposes only.
Again, some of the criteria used to select tests for inclusion in the battery were: (1) How available was a test to those interested in using the battery? Tests that were "in the public domain" (i.e., without copyright) were preferred over those that were owned by publishers and expensive to acquire. (2) How well known was a test to the neuropsychological community? Tests with which a large number of professionals were familiar were selected over those that were more obscure. (3) How frequently did members of the profession administer the test? Those that were more frequently given were selected over those that were less commonly given. Finally, (4) how easy was a test to administer and score? Those that were simpler to give and score were selected over those that had several nuances. For example, the Wisconsin Card Sorting Test (WCST; Heaton, Chelune, Talley, Kay, & Curtiss, 1993) performed approximately as well as the Category Test (CT; Reitan & Wolfson, 1985; Labreche, 1983) upon initial screening for selection. These two tests have often been assumed to measure executive functioning, which was the purpose of examining them to begin with. The CT was retained as the preferred measure of abstract reasoning and conceptual processing over the WCST because it is easier to administer, score, and interpret than the WCST. Subsequently, it has been shown that the two tests measure neither the same construct (Perrine, 1993) nor frontal lobe functioning (Demakis, 2004). Nevertheless, this was the logic used for selecting the CT.
The MNB can typically be administered and scored in approximately 3 h. The scores obtained are assigned to various cognitive domains, i.e., verbal reasoning, visual reasoning, verbal memory, visual memory, performance validity, and emotional/personality. The assignment of scores to domains is based on results from various factor analytic studies that have appeared in the literature (e.g., Larrabee et al. 2008; Larrabee, 2000; Leonberger, Nicks, Larrabee, & Goldfader 1992). Table 2 provides a description of the core MNB tests, which are presented in the order of their administration. Test order was organized by Meyers (2014) to prevent "overlap," such as two verbal lists running concurrently. Although the battery is generally given in the specified order, small variations in administration have not been shown to adversely affect obtained scores (Meyers, 2014).
At the same time that Meyers was in the process of developing his battery, Rohling began his work on developing an interpretive system designed for use with individual case data (Miller & Rohling 1998, 2001; Rohling, Miller, & Langhinrichsen-Rohling 1998). In Miller and Rohling (2001), the purpose for developing the RIM was stated as follows:

    Williams (1997) recommended the development and use of technology that would function as a decision aid. These decision aids would help neuropsychologists correctly interpret psychometric test data and increase the accuracy of their diagnoses (Sicoly, 1989). Meehl (1954) was the first to advocate for additional reliance on actuarial data for decision making in psychological assessment… Although common judgment errors can threaten seriously the reliability and validity of multiple test battery interpretation (Dawes, Faust, & Meehl 1989), we believe that neuropsychologists who combine their statistical training with technology that is readily available (e.g., personal computers and statistical software) can address effectively these biases. However, failure to do so reduces the probability of successfully discriminating between real cognitive loss and change and chance variability in the scores.

Table 1  The standards listed below are used during a Daubert challenge, which is designed to determine whether an expert's opinion should be admitted into testimony. The issue at hand is whether the information an expert is planning on presenting in court should be heard by the jury

Item  Criteria
1.  Has the MNB been tested scientifically for accuracy?
2.  Has the MNB been subjected to peer-review and publication?
3.  Does the MNB have a known error rate (reliability and validity)?
4.  Has the MNB been generally accepted by the scientific community and mainstream clinical neuropsychologists?
5.  Does the MNB have a method of controlling its use?
Miller and Rohling (2001) further noted that the RIM is well grounded and statistically rigorous, stating:

    The methodology used in this process is similar to that recommended for meta-analytic reviews of research literature (e.g., see Glass, McGraw, & Smith 1981; Rosenthal, 1984; Rosenthal, Rosnow, & Rubin 2000). The linear combination of scores we recommend is supported by the research of Dawes (1979) and Dawes and Corrigan (1974) and follows a similar model as presented by Kiernan and Matthews (1976). It is a statistically sound method of analysis that produces summary results using a flexible battery that are analogous to those generated in fixed battery approaches, including the Halstead Impairment Index (HII) of Reitan and Wolfson (1985), the Average Impairment Rating (AIR) of Russell, Neuringer, & Goldstein (1970), and the Global Neuropsychological Deficit Scale (GNDS) of Reitan and Wolfson (1988). Thus, a primary advantage of the RIM, in comparison with other indices, is that it does not restrict a clinical neuropsychologist to a particular battery of tests. (pp. 144-145)

Table 2  Basic information on the MNB in administration order

Test # | MNB test | Original norms | Assigned domain
1. | Wechsler scale (adult and child): Information; Similarities; Block Design*; Picture Completion; Arithmetic; Digit Span*; Digit Symbol (Coding)* | Wechsler (1997, 2003, 2008); Ward (1990); Pilgrim et al. (1999); Meyers et al. (2013b) | Designated in manuals: verbal reasoning, visual reasoning, attention, processing speed
2. | Forced Choice (FC)* | Brandt et al. (1985) | Attention
3. | Rey Complex Figure Test (RCFT) Copy* | Meyers & Meyers (1995a, b) | Visual reasoning
4. | Animal Naming | Spreen & Strauss (1998), p. 457, Tables 11-12; p. 458, Tables 11-14 | Attention
5. | 1-min estimation | Lezak et al. (2004), p. 636; Meyers (2014) | Attention
6. | 3-min recall* of RCFT | Meyers & Meyers (1995a, b) | Visual memory
7. | Controlled Oral Word Association Test (COWA) | Spreen & Strauss (1998), p. 455, Tables 1-9; p. 458, Tables 11-13 | Verbal reasoning
8. | Dichotic Listening (left, right, both)* | Roberts et al. (1994); Meyers et al. (2002b) | Verbal reasoning; Processing speed
9. | North American Adult Reading Test | Spreen & Strauss (1998); Meyers (2014) | Premorbid estimate
10. | Sentence Repetition | Spreen & Strauss (1998), pp. 370-371, Tables 10-25, 10-27; Read & Spreen (1986); Meyers et al. (2000) | Attention
11. | RCFT 30-min Recall*, Recognition Trial* | Meyers & Meyers (1995a, b) | Visual memory
12. | Rey Auditory Verbal Learning Test (AVLT) | Spreen & Strauss (1998), pp. 333-335, Tables 10-16 to 10-18; Ivnik et al. (1990) | Verbal memory
13. | Judgment of Line Orientation (JLO)* | Benton et al. (1994), pp. 50-53, Tables 5-3 & 5-6 | Visual reasoning
14. | Boston Naming Test (BNT) | Spreen & Strauss (1998), pp. 436-438, Tables 11-5 and 11-2 | Verbal reasoning
15. | Finger Tapping (FT), dominant hand* and non-dominant hand | Reitan & Wolfson (1985); Heaton et al. (1991, 2004) | Motor
16. | Finger Localization (FL), dominant* and non-dominant | Benton et al. (1994); Spreen & Strauss (1998), p. 558, Table 13-3 | Sensory
17. | Trail Making Test Parts A & B (TrA & TrB) | Reitan & Wolfson (1985); Heaton et al. (1991, 2004) | Processing speed
18. | Token Test (TT)* | Spreen & Strauss (1998), p. 475, Tables 11-16, 11-17 | Verbal reasoning
19. | AVLT 30-min recall and recognition* | Spreen & Strauss (1998), pp. 333-335, Tables 10-16 to 10-18; Ivnik et al. (1990) | Verbal memory
20. | Booklet Category Test (Victoria Revision) | Spreen & Strauss (1991); Kozel & Meyers (1998); Heaton et al. (1991, 2004) | Visual reasoning

* Test that is part of a performance validity measure
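The linear-combination logic described in the quoted passage can be illustrated with a short sketch. This is not the published RIM software; it simply shows the core idea of converting each test's score to a common metric (here, T-scores computed against normative means and SDs) and averaging them into an Overall Test Battery Mean (OTBM). The test names and normative values below are invented for illustration.

```python
# Minimal sketch of RIM-style score aggregation (illustrative only; not the
# published RIM software). Each raw score is converted to a T-score
# (mean = 50, SD = 10) against its normative distribution, then the T-scores
# are averaged into an Overall Test Battery Mean (OTBM).

def to_t_score(raw, norm_mean, norm_sd):
    """Convert a raw score to a T-score against its normative distribution."""
    z = (raw - norm_mean) / norm_sd
    return 50 + 10 * z

# (raw score, normative mean, normative SD) -- hypothetical values
scores = {
    "AVLT total":     (45.0, 50.0, 9.0),
    "Trail Making B": (-80.0, -75.0, 20.0),  # timed test: sign flipped so higher = better
    "Block Design":   (32.0, 30.0, 8.0),
    "COWA":           (34.0, 37.0, 11.0),
}

t_scores = {name: to_t_score(*vals) for name, vals in scores.items()}
otbm = sum(t_scores.values()) / len(t_scores)

for name, t in t_scores.items():
    print(f"{name:15s} T = {t:5.1f}")
print(f"OTBM = {otbm:.1f}")  # single summary score, analogous to the HII or AIR
```

In practice the RIM also weights scores by the reliability of their source tests and compares domain means against a premorbid estimate; the sketch shows only the common-metric averaging step the quote describes.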
During the first 6 years of the development of both the MNB and the RIM, Meyers and Rohling had not been acquainted, despite living and working just 150 miles from one another (Sioux City, IA, and Lincoln, NE, respectively); they conducted their research without knowledge of one another's work. Rohling wished to create a system of individual test score analysis based on the meta-analytic procedures typically used for large-sample group analyses. The concept of the MNB was created by Meyers when he became frustrated with assessments that often required patients to endure up to 8 h of testing. He wanted to determine how many test scores were necessary to provide reasonably accurate diagnostic conclusions for most cases.
In the remainder of this paper, we will detail the MNB-RIM's ability to meet or exceed each of the five Daubert standards. The Daubert criteria will be presented in question-and-answer format, which will allow readers to consider each criterion applied to the MNB-RIM from a critical perspective. This skeptical view places the burden of proof on those seeking to admit the results of an MNB-RIM assessment into evidence and will assist users in responding to legal challenges when administering an MNB-RIM assessment for forensic purposes.
The Five Criteria of Daubert
Criterion 1: Has the MNB-RIM Been Tested Scientifically for Accuracy?

Yes. First, as noted above, the MNB was developed by selecting tests for the core battery based, in part, on their established reliability and validity. Each test in the core battery might be capable of independently passing the Daubert criteria. The MNB tests use methods that are replicable and result in diagnostic conclusions that are falsifiable. Individual users can flexibly add tests. However, adding tests does not alter the results provided to users from the core battery. Tests are added to augment the assessment and are based on individual users' personal preferences.
Criterion 2: Has the MNB-RIM Been Subjected to Peer-Review and Publication?

Yes. Although Meyers and Rohling developed their systems using rigorous and exhaustive methods, both the MNB and the RIM have continued to be refined over time. In fact, several researchers have published peer-reviewed manuscripts on the properties of both systems. For example, using Google Scholar as a search engine, over 40 researchers were identified as having examined aspects of the MNB, and over 30 researchers have investigated aspects of the RIM. Using Google as a search engine and entering "Meyers Neuropsychological Battery" in quotation marks resulted in 804 hits from the Internet. Using the same process, entering "Rohling Interpretive Method" resulted in 760 hits. When a primary outcome measure of the RIM, the "Overall Test Battery Mean," was entered in quotation marks, it resulted in 1360 hits. Finally, as of mid-2015, Meyers has published 52 manuscripts and abstracts that have been cited 2065 times. Rohling has published 89 manuscripts and abstracts that have been cited 3200 times (Publish or Perish; http://www.harzing.com/).
When examining the association between year of publication and citations, all regression equations for both Meyers and Rohling, whether based on publications or citations, were positively sloped, indicating that the MNB and the RIM are experiencing growth both in examination by independent researchers and in interest from clinicians. In total, 65 manuscripts have been published relating to the two systems. Meyers and/or Rohling have authored 40 of these manuscripts; however, the two have jointly published only 4 papers together. Thus, 25 manuscripts have been authored by various independent researchers (e.g., Brad Axelrod, Russell Bauer, Stephen Bowden, Jacobus Donders, Paul Green, Michael Kirkwood, Glenn Larrabee, Robert McCaffrey, & Scott Millis).

The MNB and/or the RIM have been examined in at least 65 publications. Such an extensive literature supports the idea that the two systems are widely used by neuropsychologists, and their use has grown substantially over the past decade. The widespread use of the MNB-RIM lends itself to further development regarding its efficacy, effectiveness, and efficiency. Miller, Fichtenberg, & Millis (2010) noted in their review:
    Meyers and Rohling (2004) have made a major contribution to the ability focused battery (AFB) by conducting validation studies of the Meyers Neuropsychological Battery (MNB), an example of the AFB that has been in clinical use for years… The MNB has demonstrated that a flexible core approach is capable of performing in a manner comparable to a fixed battery… The MNB has also demonstrated sensitivity to different causes of cognitive dysfunction. (p. 679)
Carone and Bush also commented on the MNB-RIM in their edited text devoted to the issue of mild traumatic brain injury (2013, p. 285 & pp. 288-289). These researchers described it as using comparison groups and discriminant functions to identify various diagnostic groups. The MNB-RIM was positively reviewed by Bush (2011) as well, in the Encyclopedia of Clinical Neuropsychology (pp. 1589-1590). Russell (2012) discussed the characteristics of the MNB-RIM and noted the consistently high levels of diagnostic accuracy, with a sensitivity of 93 %, specificity of 90 %, and overall predictive power of 92 %. Russell concluded that "[u]ndoubtedly, the MNB will remain an excellent usable standardized battery" (p. 298).
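Diagnostic accuracy figures like those Russell reports are derived from a 2 x 2 classification table. The sketch below shows the standard computations; the counts are hypothetical, chosen only to illustrate the arithmetic, and are not the MNB validation data.

```python
# Standard diagnostic-accuracy metrics from a 2x2 classification table.
# Counts below are hypothetical, for illustration only.

tp = 93   # brain-injured cases correctly classified as impaired
fn = 7    # brain-injured cases missed
tn = 90   # controls correctly classified as unimpaired
fp = 10   # controls misclassified as impaired

sensitivity = tp / (tp + fn)               # hit rate among true cases
specificity = tn / (tn + fp)               # correct-rejection rate among controls
overall = (tp + tn) / (tp + fn + tn + fp)  # overall correct classification

print(f"sensitivity = {sensitivity:.2f}")  # 0.93
print(f"specificity = {specificity:.2f}")  # 0.90
print(f"overall     = {overall:.3f}")      # 0.915
```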
Criterion 3: Does the MNB-RIM Have Known Error Rates (i.e., Reliability and Validity)?

Yes. This criterion addresses the MNB-RIM's reliability and validity in the detection of cognitive impairment as it relates to acquired brain injury. To produce a known error rate, it is important to establish internal and external validity when measuring the capabilities of a psychological test instrument. Examiners must control for the influence of spurious variables when conducting a test procedure to establish internal validity. Controlling for outside influences is necessary to establish a battery's reliability. There must also be a high degree of weight placed on external validity: results should be consistent across comparable populations.
Reliability  A primary objective of Rohling during the development of the RIM was to increase the inter-rater reliability of diagnostic outcomes of neuropsychological assessment, particularly in the forensic setting. Prominent psychological researchers have identified common heuristics frequently used when making diagnostic decisions that result in errors (e.g., Tversky & Kahneman, 1974). In fact, addressing the errors generated when using a flexible or semi-fixed battery, Miller and Rohling (2001) noted:

    The risks involved with the evaluation and interpretation of multiple measures are many and tend to increase at a rate even greater than the number of tests used (Ingraham & Aikken, 1996). These risks include inflated error rates, multicollinearity, weighting decision problems, unknown veracity/reliability of sets of tasks, variability across interpreters, and a number of human judgment errors (Wedding & Faust, 1989). (p. 143)

Thus, Rohling and colleagues set out on their development of the RIM. They worked to improve the reliability of diagnostic outcomes by clinical neuropsychologists by developing a system that reduces bias and increases accuracy.
The reliability of the MNB-RIM has been studied using a general clinical sample of 63 subjects. These individuals were tested twice, with an average of 19.1 months between testing sessions. The MNB-RIM obtained an overall reliability for the OTBM of between .86 and .93 (Meyers & Rohling, 2004; Rohling, Miller, Axelrod, Wall, Lee, & Kinikini 2015). The reliabilities of the various subtests ranged from .46 (Rey Auditory Verbal Learning Test, Trial 1) to .88 (WAIS-III Block Design), with a mean of .72.
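The reliability coefficients cited above are test-retest correlations: the same examinees are tested twice, and the two sets of scores are correlated. A minimal sketch of that computation follows; the paired scores are fabricated for illustration and are not the clinical sample of 63.

```python
# Test-retest reliability as a Pearson correlation between two testing
# sessions. The paired scores below are fabricated for illustration.
import math

time1 = [48.0, 52.0, 39.0, 61.0, 44.0, 55.0, 50.0, 47.0]
time2 = [46.0, 54.0, 41.0, 59.0, 47.0, 53.0, 52.0, 45.0]

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

print(f"test-retest r = {pearson_r(time1, time2):.2f}")
```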
Furthermore, the MNB-RIM uses commonly administered tests that are presented in a standard order. The MNB-RIM's battery has built-in rest breaks to ensure that fatigue is not a factor that negatively influences performance. The test administration order was carefully designed to prevent "overlap," such as two verbal learning tasks being administered consecutively or concurrently. The order of test administration is one of the methods used for maintaining control over the use of the MNB-RIM.
Validity: Internal  Internal validity is defined as the degree to which an independent variable causes a change in the dependent variable (Goodwin, 2010). Tests must carefully control for spurious influences created by confounding variables to determine a causal relationship between the variables of interest. The MNB-RIM uses well-established and empirically supported tests with a high degree of internal consistency and validity (Strauss, Sherman, & Spreen 2006). These tests use repeatable administrative procedures to produce internally valid results. A newly developed standard of practice in test batteries for neuropsychological assessment is the inclusion of validity measures (Bush, Ruff, Tröster, Barth, Koffler, Pliskin, Reynolds, & Silver 2005; Heilbronner, Sweet, Morgan, Larrabee, Millis, & Conference Participants 2009). The MNB-RIM contains 11 internal PVMs (Meyers, 2014), which are listed in Table 2 and are identified with an asterisk (*). Each has been included after its accuracy was established in peer-reviewed published research.
Using a combination of nine PVMs, the MNB-RIM
had an invalid response rate of 36 % for a sample of
known subjects (Meyers & Volbrecht, 2003). This base
rate is quite similar to the 40 % reported by Larrabee,
Millis, & Meyers (2009). Using data from identified groups (non-litigating: LOC < 1 h, LOC > 1 h, LOC 1–8 days, LOC 9+ days, pain, depression, normal; litigating: LOC < 1 h, LOC > 1 h, pain, and malingering actors), the PVMs had a sensitivity to invalid responding of .83, a specificity of 1.00, and an overall correct classification rate of 95 %. These nine PVMs may be more conservative
than other freestanding measures (e.g., Word Memory
Test or WMT; Green, 2007). Nevertheless, Meyers,
Volbrecht, Axelrod, & Reinsch-Boothby (2011b) found
that the PVMs accounted for a similar amount of variance
in neuropsychological test scores as did the WMT, which
was found to be 52 % (Green, Rohling, Lees-Haley, &
Allen 2001) and the Test of Memory Malingering (TOMM; Tombaugh, 1996), which has been reported to
Psychol. Inj. and Law
Author's personal copy
be 47 % (Constantinou, Bauer, Ashendorf, Fisher, &
McCaffrey 2005). Nearly identical results were found in
a German sample with the WMT and the Medical Symp-
tom Validity Test (MSVT; Green, 2004), with 45 % of the
variance accounted for by scores in the performance va-
lidity tests (Stevens, Friedel, Mehren, & Merten 2008).
A new PVM that has been added to the MNB-RIM since the publication of the original nine PVMs is the WMT–MNB algorithm. This PVM was developed using a sample of 264 consecutive referrals for neuropsychological assessment who were given both the MNB and the Word Memory Test (WMT; Green, 2003; Green, 2007). All were identified as either passing or failing the WMT, based on standard administration and interpretation, using both cutoff scores and genuine memory impairment patterns. All the MNB-RIM variables that are not part of the PVMs already described were entered into a stepwise discriminant function (Pohar, Blas, & Turk 2004; Maroco, Silva, Rodriquez, Guerreiro, Santana, & de Mendonça 2011). The discriminant function did not include any of the WMT scores; rather, it predicted WMT results, based on scores below the cut score and the Genuine Memory Impairment Profile (Green, Flaro, & Courtney 2009). A score below 0 is a fail on this PVM. The discriminant function correctly classified 70 % of the sample. To keep false-positive rates at zero for patients with less than 7 days of loss of consciousness (LOC), a cutoff score of .50 is recommended.
It is recommended that the MNB-RIM be administered with the Minnesota Multiphasic Personality Inventory-2 (MMPI-2; Butcher, Dahlstrom, Graham, Tellegen, & Kaemmer 1989) or the Minnesota Multiphasic Personality Inventory-2 Restructured Form (MMPI-2-RF; Tellegen & Ben-Porath, 2008). When examiners administer one of the two versions of this personality test, the Meyers Index (MI; Meyers, Millis, & Volkert 2002a; Meyers, Miller, Haws, Murphy-Tafti, Curtis, Rupp, Smart, & Thompson 2013a; Meyers, Zellinger, Kockler, Wagner, & Miller 2013b) is available to assess the validity of the examinee's responses to emotional/personality questions.
In a recent study by Meyers et al. (2014a, b), all 11 PVMs were studied to determine the sensitivity and specificity of failing any combination of these measures. Thus, the probability that a patient's obtained test scores are invalid can be objectively calculated by chaining likelihood ratios, which dramatically improves detection rates of invalid responding, with corresponding increases in sensitivity and specificity, based on post-test odds ratios between .934 and .999. Additionally, the MNB-RIM manual presents PVM data at the domain level so that the validity of performance at this level can be assessed. The same data were used by Meyers et al. (2013a, b) to determine the probabilities associated with domain-level PVMs and inter-individual variability.
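The chaining of likelihood ratios described above can be sketched in a few lines. The prior probability and the per-PVM likelihood ratios below are made-up illustrative values, not the published MNB-RIM figures; the point is only the mechanics of converting a prior probability to post-test odds and back.

```python
# Chain likelihood ratios (LRs) across several failed performance
# validity measures (PVMs): prior odds are multiplied by each LR,
# and the resulting post-test odds are converted back to a probability.

def posttest_probability(prior_prob: float, likelihood_ratios: list) -> float:
    """Post-test probability of invalid responding after chaining LRs."""
    odds = prior_prob / (1.0 - prior_prob)  # prior probability -> prior odds
    for lr in likelihood_ratios:
        odds *= lr                          # each failed PVM multiplies the odds
    return odds / (1.0 + odds)              # post-test odds -> probability

# Suppose a ~40% base rate of invalid responding (Larrabee, Millis, &
# Meyers, 2009) and three failed PVMs, each with a hypothetical LR of 5.
p = posttest_probability(0.40, [5, 5, 5])
print(f"{p:.3f}")
```

Even with modest individual likelihood ratios, chaining several failed indicators drives the post-test probability close to 1, which is why aggregating PVMs improves detection.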
External Validity In the Standards for Educational and Psychological Testing (AERA/APA/NCME, 2014), external validity is defined as the degree to which results can be generalized to various populations, settings, and confounding variables. Evidence used to establish external validity depends on a combination of patient history, an initial assessment conducted by a qualified psychologist, presenting problems as divulged by the patient, observations by the therapist, the patient's legal history, other self-report measures obtained from the patient, and brain imaging techniques used by qualified professionals.
All of these factors contribute to the establishment of external
validity. There is evidence comparing MNB-RIM results with
these factors, which support the external validity of the bat-
tery. Meyers, Volbrecht, & Kaster-Bundgaard (1999b) found 94.4 % accuracy in distinguishing between individuals capable of driving and those advised by their physician not to drive. There was 84 % agreement between positive CT and MRI brain scans (Meyers & Rohling 2009) and scores on the MNB-RIM. The subjects who were tested twice had an average of 19.1 months between sessions. The test–retest reliability for the two administrations was .86 (Meyers & Rohling,
2004). The length of loss of consciousness (LOC) was highly related to the severity of cognitive impairment, as measured by the MNB-RIM (Rohling et al. 2003b). Further study of the MNB-RIM was undertaken to identify its sensitivity to cognitive impairment. In a sample of 436 TBI patients, Rohling et al. (2003b) found that length of LOC was related to the degree of cognitive impairment, as measured by the overall test battery mean (OTBM). Results
were nearly identical to those presented by Dikmen,
Machamer, Winn, & Temkin (1995), who used the expanded
Halstead Reitan Battery (HRB) with group composite mea-
sures having a Pearson correlation coefficient of .98 (Rohling
et al. 2003a,b,c). An OTBM, generated from HRB data,
correlated .87 with the General Neuropsychological Deficit
Scale (GNDS; Reitan & Wolfson, 1997). It correlated .90 with
the Average Impairment Rating (AIR; Russell et al. 1970).
Finally, it correlated .79 with the Halstead Impairment Index (HII; Reitan & Wolfson, 1985). In this sample, it had a sensitivity to neurological impairment of .90, a specificity of .32, a positive predictive value of .70, a negative predictive value of .65, and an overall correct classification rate of 69 %.
The overall correct classification rate is consistent with that
obtained from the traditional HRB summary scores (GNDS=
71 %; AIR= 65 %; HII=65 %).
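The diagnostic statistics quoted above can be checked for internal consistency. The prevalence value below is our back-solved assumption (roughly 63–64 % of the sample being neurologically impaired), not a figure reported in the study; with it, the reported sensitivity and specificity reproduce the reported PPV, NPV, and overall classification rate.

```python
# Derive PPV, NPV, and overall accuracy from sensitivity, specificity,
# and an assumed prevalence of impairment, using the standard 2x2
# confusion-matrix proportions.

def diagnostic_stats(sens: float, spec: float, prevalence: float):
    tp = sens * prevalence            # true positives
    fn = (1 - sens) * prevalence      # false negatives
    tn = spec * (1 - prevalence)      # true negatives
    fp = (1 - spec) * (1 - prevalence)  # false positives
    ppv = tp / (tp + fp)              # positive predictive value
    npv = tn / (tn + fn)              # negative predictive value
    accuracy = tp + tn                # overall correct classification
    return ppv, npv, accuracy

ppv, npv, acc = diagnostic_stats(sens=0.90, spec=0.32, prevalence=0.635)
print(f"PPV={ppv:.2f}, NPV={npv:.2f}, accuracy={acc:.0%}")
```

At this assumed prevalence the formula returns PPV ≈ .70, NPV ≈ .65, and accuracy ≈ 69 %, matching the values reported in the text.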
An OTBM from the MNB-RIM was sensitive to cognitive
impairment caused by other neurological and psychiatric con-
ditions, with an overall sensitivity of .90, specificity of .99,
and an overall correct classification rate of 96 %. These results
indicate that the MNB-RIM was as sensitive to the degree of
cognitive impairment as the more time-consuming Halstead-Reitan Battery. Examinees who are administered the HRB
spend nearly 10 h being assessed compared to 3 h when given
the MNB-RIM.
The MNB-RIM has also been studied in relation to referral source. In a study by Meyers, Reinsch-Boothby, Miller, Rohling, & Axelrod (2011a), the assessment results from 48 plaintiff referrals and 63 defense referrals were examined. Results showed no statistically reliable differences in the scores obtained on the performance validity measures. Thus, the MNB-RIM is not considered to be overly stressful for either plaintiff or defense referrals.
Norms The process of developing norms is detailed in
the MNB-RIM manual (Meyers, 2014). Table 2 lists the references for the original norms.
evaluating the original normative data, it was found that
there were variations in test norms by age and
education. For example, the AVLT normative data from Spreen & Strauss (1998, p. 335) show a mean of 11.4 (SD = 2.4) for trial 6 (immediate recall) for subjects in the 30–39 years age group. However, if the examinees were 40 years of age, the mean listed is 10.4 (SD = 2.7). This means that if an individual scored 10 on this test 1 day before a birthday and was (theoretically) tested on the birthday, the T score associated with the raw score improves from 44 to 48. Using the Heaton, Grant, & Matthews (1991) normative system, a raw score of 10 would go from the below-average category to the average category simply by growing 1 day older. This is a common problem with non-smoothed normative data. Another concern was the effects of various demographic variables (i.e., education, gender, handedness, and ethnicity), which might not be significant in a normal population but become factors in a pathological sample. Thus, the
MNB norms are generated from patients who were re-
ferred for testing out of concern that they suffered from
a neurological or psychiatric condition. In response to
these concerns, Meyers decided to smooth the data based on subjects' demographics. This was done by first selecting from the larger data set all subjects who had a validity score of either 0 or 1 (i.e., number of PVM failures) and were 15 years of age or older. The age limit was se-
lected based on the age limit developed for the Trail
Making Test by Reitan and Wolfson (1985). It was
retained to maintain historical consistency. The total
sample included 1727 patients with a mean age of
45.7 (SD= 20.7) and a mean level of education of
12.3 years (SD=2.7). The sample consisted of 779 fe-
males and 948 males. Of these, 1543 were right handed
and 184 were left handed. In terms of ethnicity, there
were 1617 White non-Hispanics, 32 mixed backgrounds,
27 Native Americans, 27 Latino Americans, and 2
Asian Americans.
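The "table jump" described above can be reproduced with the standard linear T-score conversion, T = 50 + 10z. The AVLT trial-6 means and SDs are the Spreen & Strauss (1998) figures quoted in the text; exact rounding conventions vary by normative system, so the resulting T scores are approximate.

```python
# The same raw score maps to different T scores on either side of an
# age-band boundary in non-smoothed (table-based) norms.

def t_score(raw: float, mean: float, sd: float) -> float:
    """Linear T score: mean 50, SD 10."""
    return 50.0 + 10.0 * (raw - mean) / sd

t_age_39 = t_score(10, mean=11.4, sd=2.4)  # ages 30-39 normative band
t_age_40 = t_score(10, mean=10.4, sd=2.7)  # ages 40+ normative band
print(round(t_age_39, 1), round(t_age_40, 1))  # a ~4-point jump for aging one day
```

The raw score of 10 yields a T score in the mid-40s in one band and the high-40s in the next, illustrating why the norms were smoothed with regression.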
Regression equations were generated using raw scores and
the demographic variables of age, education, gender, ethnicity,
and handedness to predict T scores previously calculated
using the standard normative data from Heaton et al. (1991).
Once the regression equations were calculated, they were used
to generate regression-based T scores for each test. It was
found that this procedure worked well for all test scores except
the Token Test (adult) due to its skewed distribution proper-
ties. For the Token Test, percentile scores were calculated and
converted to T score equivalents.
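The regression-based norming just described can be sketched as follows. The data here are synthetic and the coefficients are invented; the real MNB equations were fit on the clinical normative sample of 1727 patients against the Heaton et al. (1991) table-based T scores.

```python
# Minimal sketch of regression-based norming: fit a linear model
# predicting table-based T scores from raw score plus demographics,
# then use the fitted equation to generate smoothed T scores.
import numpy as np

rng = np.random.default_rng(0)
n = 500
raw = rng.normal(50, 10, n)                  # raw test score
age = rng.uniform(15, 85, n)                 # demographic predictors
educ = rng.uniform(8, 20, n)
female = rng.integers(0, 2, n).astype(float)

# Synthetic "table-based" T scores with demographic effects plus noise
t_table = 50 + 0.8 * (raw - 50) - 0.1 * (age - 45) + 0.5 * (educ - 12) \
          + 1.0 * female + rng.normal(0, 2, n)

# Ordinary least squares via the normal equations (intercept in column 0)
X = np.column_stack([np.ones(n), raw, age, educ, female])
beta, *_ = np.linalg.lstsq(X, t_table, rcond=None)

# Smoothed, regression-based T score for a new examinee
new_case = np.array([1.0, 55.0, 40.0, 16.0, 0.0])
print(round(float(new_case @ beta), 1))
```

Because the fitted equation is continuous in age, the predicted T score no longer jumps at age-band boundaries, which is the point of smoothing.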
Examination of the MNB normative data shows that the regression equations adequately predicted the T scores calculated using the original norms (r = .90). The purpose of smoothing norms is to correct for "table jump." T scores for either adults or children can be calculated. The addition of demographic variables in the regression equations adjusts for differences that are known to influence test performance. The published literature that has examined the MNB has used these regression-based norms in its analyses.
Rohling et al. (2015) conducted a comparison of the MNB-RIM normative system with two other normative systems to see what similarities and differences might exist, as well as to try to determine which system gave the most accurate results. Using a sample of 330 randomly selected referred cases, as well as 99 undergraduate volunteers, they compared normative scores from the MNB-RIM, Heaton's comprehensive scoring system (Heaton, Miller, Taylor, & Grant 2004), and the Mitrushina, Boone, Razani, & D'Elia (2005) normative system, which was generated using meta-analytic regression techniques (MA-Reg). All three systems were highly correlated with one another, with the average correlation for the Overall Test Battery Mean (OTBM) being .96. The authors concluded that all three systems were generally acceptable and, for the most part, congruent with one another. However, without a gold standard upon which to base a judgment, their tentative conclusion was that "the Meyers system is the preferred system." In making this judgment, they took into consideration the similarities and differences amongst the systems, as well as how well each system accounted for the error variance associated with the demographic variables of age, education, ethnicity, and gender.
These confounds can be statistically corrected for, but applying such corrections is risky, as one might over- or under-correct based on limited sampling of the variable of concern in combination with the other variables of concern. Despite the fact that the normative sample developed by
Meyers has very few minority subjects included, the relation-
ships amongst the three normative systems were actually
greater in magnitude for the subset of subjects who were
considered to be members of a demographic minority than
they were for White subjects.
Another study, by Meyers and Rohling (2009), identified the associations amongst scores from various tests and findings of brain lesions based on neuroimaging results (i.e., CT scans and/or MRI data). Test scores were examined from a sample of 124 subjects who had positive CT or MRI findings in discrete locations. The results showed that the tests that make up the MNB-RIM are significantly correlated with CT and MRI results, and scores were depressed when subjects had lesions in the brain areas that were considered a priori to influence performance on such tasks. The MNB-RIM scores were in agreement with the location of the lesion 84 % of the time. The MNB-RIM is sensitive not only to the presence of brain lesions but also to their magnitude. The MNB-RIM also possesses ecological validity, as it has been shown to be sensitive to complex tasks such as driving (Meyers et al. 1999b).
Criterion 4: Has the MNB-RIM Been Generally Accepted by the Scientific Community and Mainstream Clinical Neuropsychologists?
Yes. General acceptance can be demonstrated through multiple methods. First, descriptions and reviews of the component tests exist in commonly referenced compendiums (e.g., Mitrushina et al. 2005; Strauss et al. 2006), and a large number of articles have been published on the MNB-RIM. The references for this article are only a subset of the manuscripts that have been published regarding the MNB-RIM.
Acceptance has also been indicated by favorable reviews of the MNB-RIM in peer-reviewed journals, such as that of Miller et al. (2010), cited above. Larrabee (2012a, b) noted that the findings of Rohling et al. (2003a, b, c), using a flexible battery employing a core set of tests in the various domains found in the MNB-RIM, showed the same dose–response association with head injury severity found by Dikmen et al. (1995) in their seminal prospective study of neuropsychological outcome in traumatic brain injury: "What was particularly impressive is that the Overall Test Battery Mean (OTBM) average T score for the Meyers Neuropsychological Battery (a core for a flexible battery) was 39.2, essentially identical to the expanded HRB OTBM of 38.9; additionally, the MNB-RIM correlated .99 with the five levels of TBI severity." Further, the OTBMs from the MNB-RIM and the Halstead-Reitan "correlated .97 with one another across the five levels of trauma severity" (p. 14).
General acceptance can be inferred from the growth of the MNB-RIM e-mail user group, which now numbers over 360 members. Another method is based on the number of computers registered to run the MNB-RIM software, which has grown to include nearly 1000 neuropsychologists. These data suggest that the number of MNB-RIM users now exceeds the number of users estimated by Sweet et al. (2011) for the HRB (Reitan & Wolfson 1985) and HRNES-R (Russell & Starkey, 2001).
The number of MNB-RIM users has steadily grown by 35 %
per year. The annual MNB-RIM workshop series is in its 8th
year. This is a 2-day event designed to train new as well as
experienced MNB-RIM users in the latest findings published
in the literature. The initial workshop was attended by approx-
imately 25 neuropsychologists. The past two workshops have
grown to just under 100 neuropsychologists.
Criterion 5: Does the MNB-RIM Have a Method of Controlling Its Use?
Yes, there are methods for controlling the use and users of the
MNB-RIM. More specifically, this criterion addresses the
questions: (a) Who can administer and interpret the test bat-
tery? (b) Are the procedures for administration and interpreta-
tion well standardized? (c) Are these details described in a
published manual?
The manual establishes and maintains the standards that control the MNB-RIM's administration, scoring, and interpretation. The materials sold are restricted to qualified professionals, defined as licensed psychologists. The MNB-RIM manual, which is provided to users when they purchase the software, details (a) the tests included in the core battery, (b) the process of developing norms, (c) the normative data, (d) the order of test administration, (e) the incorporation of the RIM in the diagnostic process, and (f) the standard use of performance validity indicators. In addition, when users purchase the MNB-RIM package, they are provided with the materials needed to administer the tests (e.g., pictures of the right and left hand for the finger localization test, wooden blocks for the Token Test, etc.). The manual controls the ongoing development of, and additions to, the MNB-RIM in the future.
Detailed instructions for the administration of each test included in the MNB-RIM are provided in the MNB-RIM test protocol. The protocol functions as instructions for the administration of the battery. It includes specific areas for recording the responses provided by examinees. Examiners use these testing procedures to ensure that the MNB-RIM is given in a standardized manner. These procedures are also described in detail in the MNB-RIM user manual.
The guidelines for interpreting MNB-RIM data are also
outlined in detail in the manual. Quantitative data are used
by the examiner to develop treatment plans that are more
likely to lead to improved outcomes. The profile matching
approach allows clinicians to remain more objective during
the interpretation phase of assessment. As a result, clinicians
are more likely to avoid arbitrary classification or simplistic
dichotomies (e.g., impaired vs. normal).
Besides the profile matching interpretation guidelines
(Maroco et al. 2011), as noted previously, the MNB is com-
patible with the Rohling Interpretive Method (Rohling et al.
2003a, c, 2004). Contained in the MNB-RIM software is an artificial neural network (Maroco et al. 2011; Meyers, Miller, & Tuita 2014b), which is used to improve the objectivity and accuracy of the profile matching and interpretation. Criterion 2 addressed the large body of peer-reviewed literature citing the MNB-RIM interpretation process. The use of computer models reduces human error and strengthens the inferences that can be drawn from the MNB-RIM's results. In this way, the RIM maximizes the utility of the MNB. Computer models are also able to quickly simplify and organize information into easily interpretable results, thus reducing potential errors made by practitioners (Dawes et al. 1989).
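The profile-matching idea described above can be illustrated with a deliberately simple sketch. The domain labels, prototype values, and distance metric here are invented for illustration; the MNB-RIM software uses its own published profiles and an artificial neural network rather than this nearest-prototype rule.

```python
# Toy profile matcher: compare an examinee's domain-level T-score
# profile against prototype profiles and report the closest match
# by mean absolute distance.
import numpy as np

# Hypothetical domain order: attention, memory, language, visuospatial, motor
prototypes = {
    "normal":       np.array([50, 50, 50, 50, 50]),
    "moderate TBI": np.array([42, 38, 45, 43, 40]),
    "invalid":      np.array([30, 25, 44, 32, 28]),
}

def best_match(profile: np.ndarray) -> str:
    """Prototype with the smallest mean absolute T-score distance."""
    return min(prototypes, key=lambda k: np.mean(np.abs(prototypes[k] - profile)))

examinee = np.array([43, 37, 46, 44, 41])
print(best_match(examinee))  # closest prototype by mean absolute distance
```

An automated rule of this kind is what lets the interpretation step stay objective: the same profile always produces the same classification, independent of the examiner.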
Summary and Conclusion
The MNB-RIM meets or exceeds the criteria set by each of the
five Daubert standards. The MNB-RIM consists of individual
instruments that are well established and widely used by a
large number of psychologists. The MNB-RIM carefully cat-
egorizes each test into a cognitive domain based upon the
factor structure of the battery, which is consistent with the
suggestions of Larrabee (2000). The assignment of scores to
a domain prevents redundancy and produces a simplified bat-
tery that retains a high level of internal consistency. The MNB-RIM takes approximately 3 h to administer, while other batteries can take upwards of 8 h. As a result, examinees are less likely to experience fatigue, which we believe is a common cause of invalid results.
The literature also supports the external validity of the MNB-RIM, which detects not only the existence of impairments caused by a variety of neuropathologies (e.g., TBI) but the severity of the deficits as well. Several peer-reviewed studies of the MNB-RIM have found it to be strongly correlated with a variety of TBI biomarkers (Meyers, Galinsky, & Volbrecht 1999a; Meyers & Rohling, 2004, 2009; Rohling et al. 2003b).
The MNB-RIM has acceptable sensitivity and specificity for mild TBI. The battery's reliability was estimated to be .86 (Meyers & Rohling, 2004). The MNB-RIM has been
found to be associated with a host of real life activities, such
as driving (Meyers et al. 1999a,b) and correlates well with
CT/MRI data (Meyers & Rohling 2009).
Another standard of Daubert that is satisfied by the MNB-RIM is control over the methods and use of the battery. There are detailed instructions in the user manual regarding the process of administering the battery, which, when followed, likely increases the test battery's reliability. General acceptance of the MNB-RIM has been exhibited through clinicians' widespread use of the battery, as well as the extensive peer-reviewed literature that has demonstrated its psychometric properties. These conditions also satisfy the Frye standards, which are often used in place of the Daubert standards in certain jurisdictions.
MNB and/or RIM Criticisms
Experts who use MNB-RIM results and interpretations in proceedings governed by Daubert admissibility standards should be informed of published criticisms regarding the battery. Awareness of
such criticisms will help practitioners to adequately address
these claims during testimony. Lezak, Howieson, Bigler, &
Tranel (2012) discuss potential problems with the MNB-
RIM, noting:
A more serious problem with this kind of approach is
that it detracts from appropriate neuropsychological as-
sessment by fostering an overreliance on test scores
rather than clinical decisions based on patients as per-
sons, their history and their clinical presentation. More-
over, a set of test scores converted into a graphed "profile" is of limited-if any-benefit for the important social and humanitarian goals of patient care, rehabilitation, counseling, and community reintegration (pp. 747–748).
The critique of Lezak et al. (2012) erroneously simplifies the methods used by practitioners of the MNB-RIM. Qualified clinicians do not depend solely on the MNB-RIM as the criterion for diagnosis. Medical history, face-to-face communication with the patient, as well as relevant imaging and neurological information, are all taken into account before a diagnosis is made. The MNB-RIM is not an exclusive method used for diagnosis; rather, it contributes to the diagnosis. This careful consideration of all relevant data should be the standard of good clinical judgment and effective clinical practice.
Two critiques related to the RIM as an interpretive method for flexible batteries have been published: one by Palmer, Appelbaum, & Heaton (2004) and another by Willson & Reynolds (2004). Collectively, these critiques focus on eight concerns regarding the RIM. A rebuttal was submitted by Rohling et al. (2004). Psychometric theory, as well as various analyses of data, has established the RIM's ability to distinguish scores obtained from normal individuals from scores obtained from patients suffering from various pathologies, such as TBI. Rohling et al. concluded that clinicians should adopt the RIM as a method of reducing and avoiding the errors in interpretation common in clinical practice. These interpretation errors are outlined in Dawes et al. (1989).
It is also worth noting that, despite the criticisms of Palmer et al., these same authors published a method similar to the RIM when examining the stability and course of deficits associated with schizophrenia (Heaton, Gladsjo, Palmer, Kick, Marcotte, & Jeste 2001). It is ironic that they are critical of the RIM yet chose to use the method for their own analyses.
References
American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (2014). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.
Amicus Brief submitted on behalf of the American Academy of Clinical Neuropsychology. (2007). In support of plaintiff in Baxter v. Temple, 949 A.2d 167 (N.H. May 20, 2008). Available at https://www.courtlistener.com/nh/8Ucx/baxter-v-temple/
Benton, A. L., Sivan, A. B., Hamsher, K. de S., Varney, N. R., & Spreen, O. (1994). Contributions to neuropsychological assessment: A clinical manual (2nd ed.). New York: Oxford University Press.
Brandt, J., Rubinsky, E., & Larson, G. (1985). Uncovering malingered amnesia. Annals of the New York Academy of Sciences, 44, 502–503.
Bush, S. S. (2011). Meyers neuropsychological battery (MNB). In J. S. Kreutzer, J. DeLuca, & B. Caplan (Eds.), Encyclopedia of clinical neuropsychology. New York: Springer.
Bush, S. S., Ruff, R. M., Tröster, A. I., Barth, J. T., Koffler, S. P., Pliskin, N. H., ... Silver, C. H. (2005). Symptom validity assessment: Practice issues and medical necessity. NAN Policy & Planning Committee. Archives of Clinical Neuropsychology, 20, 419–426.
Butcher, J. N., Dahlstrom, W. G., Graham, J. R., Tellegen, A., & Kaemmer, B. (1989). The Minnesota Multiphasic Personality Inventory-2 (MMPI-2): Manual for administration and scoring. Minneapolis: University of Minnesota Press.
Carone, A. C., & Bush, S. S. (2013). Mild traumatic brain injury: Symptom validity assessment and malingering. New York: Springer.
Constantinou, M., Bauer, L., Ashendorf, L., Fisher, J. M., & McCaffrey, R. J. (2005). Is poor performance on recognition memory effort measures indicative of generalized poor performance on neuropsychological tests? Archives of Clinical Neuropsychology, 20, 191–198.
Daubert v. Merrell Dow Pharmaceuticals, Inc. (1993). 509 U.S. 579, 589.
Dawes, R. M. (1979). The robust beauty of improper linear models in decision making. American Psychologist, 34, 571–582.
Dawes, R. M., & Corrigan, B. (1974). Linear models in decision making. Psychological Bulletin, 81, 95–106.
Dawes, R., Faust, D., & Meehl, P. (1989). Clinical versus actuarial judgment. Science, 243(4899), 1668–1674.
Demakis, G. J. (2004). Frontal lobe damage and tests of executive processing: A meta-analysis of the Category Test, Stroop Test, and Trail-Making Test. Journal of Clinical and Experimental Neuropsychology, 26(3), 441–450.
Dikmen, S. S., Machamer, J. E., Winn, H. R., & Temkin, N. R. (1995). Neuropsychological outcome at 1-year post head injury. Neuropsychology, 9, 80–90.
Glass, G. V., McGraw, B., & Smith, M. L. (1981). Meta-analysis in social research. Beverly Hills: Sage.
Goodwin, J. (2010). Research in psychology: Methods and designs (6th ed., p. 191). Hoboken: Wiley.
Green, P. (2003). Word Memory Test. Edmonton: Green's Publishing.
Green, P. (2004). Green's Medical Symptom Validity Test (MSVT) for Microsoft Windows: User's manual. Edmonton: Green's Publishing.
Green, P. (2007). The pervasive influence of effort on neuropsychological tests. Physical Medicine and Rehabilitation Clinics of North America, 18(1), 43–68.
Green, P., Flaro, L., & Courtney, J. (2009). Examining false positives on the Word Memory Test in adults with mild traumatic brain injury. Brain Injury, 23(9), 741–750.
Green, P., Rohling, M. L., Lees-Haley, P., & Allen, L. M. (2001). Effort accounts for more neuropsychological test variance than does ability in a litigant sample. Brain Injury, 15, 1045–1060.
Greiffenstein, M. F. (2008). Basics of forensic neuropsychology. In J. E. Morgan & J. Richter (Eds.), Forensic neuropsychology (pp. 3–16). New York: Guilford Press.
Greiffenstein, M. F., & Kaufmann, P. M. (2012). Neuropsychology and law: Principles of productive attorney-neuropsychologist relations. In G. J. Larrabee (Ed.), Forensic neuropsychology: A scientific approach (2nd ed.). New York: Oxford University Press.
Heaton, R. K., Chelune, G. J., Talley, J. L., Kay, G. G., & Curtiss, G. (1993). Wisconsin Card Sorting Test manual. Odessa, FL: Psychological Assessment Resources.
Heaton, R. K., Gladsjo, J. A., Palmer, B. W., Kick, J., Marcotte, T. D., & Jeste, D. V. (2001). Stability and course of neuropsychological deficits in schizophrenia. Archives of General Psychiatry, 58, 24–32.
Heaton, R. K., Grant, I., & Matthews, C. G. (1991). Comprehensive norms for an expanded Halstead-Reitan Battery. Lutz: Psychological Assessment Resources.
Heaton, R. K., Miller, W., Taylor, M. J., & Grant, I. (2004). Revised comprehensive norms for an expanded Halstead-Reitan Battery: Demographically adjusted neuropsychological norms for African American and Caucasian adults. Lutz: Psychological Assessment Resources.
Heilbronner, R. L., Sweet, J. J., Morgan, J. E., Larrabee, G. J., Millis, S. R., & Conference participants. (2009). American Academy of Clinical Neuropsychology consensus conference statement on the neuropsychological assessment of effort, response bias, and malingering. The Clinical Neuropsychologist, 23(7), 1093–1129.
Ingraham, L. J., & Aiken, C. B. (1996). An empirical approach to determining criteria for abnormality in test batteries with multiple measures. Neuropsychology, 10, 120–124.
Ivnik, R. J., Malec, J. F., Tangalos, E. G., Peterson, R. C., Kokmen, E., & Kurland, L. T. (1990). The Auditory Verbal Learning Test (AVLT): Norms for ages 55 years and older. Psychological Assessment, 2(3), 304–312.
Kaplan, E., Goodglass, H., & Weintraub, S. (1983). The Boston Naming Test. Philadelphia: Lea & Febiger.
Kaufmann, P. M. (2012). Admissibility of expert opinions based on neuropsychological evidence. In G. J. Larrabee (Ed.), Forensic neuropsychology: A scientific approach (2nd ed.). New York: Oxford University Press.
Kaufmann, P. M., & Greiffenstein, M. F. (2013). In M. L. Mattingly & E. Rinehardt (Eds.), Forensic neuropsychology: Training, scope of practice, and quality control. National Academy of Neuropsychology Bulletin, 27(1), 11–15.
Kiernan, R. J., & Matthews, C. G. (1976). Impairment index versus T-score averaging in neuropsychological assessment. Journal of Consulting and Clinical Psychology, 44, 951–957.
Kozel, J. J., & Meyers, J. E. (1998). A cross-validation study of the Victoria revision of the Category Test. Archives of Clinical Neuropsychology, 13, 327–332.
Labreche, T. M. (1983). The Victoria revision of the Halstead Category Test. Unpublished doctoral dissertation, University of Victoria.
Larrabee, G. J. (2000). Association between IQ and neuropsychological test performance: Commentary on Tremont, Hoffman, Scott, and Adams (1998). The Clinical Neuropsychologist, 14, 139–145.
Larrabee, G. J. (2008). Aggregation across multiple indicators improves the detection of malingering: Relationship to likelihood ratios. The Clinical Neuropsychologist, 22, 666–679.
Larrabee, G. J. (2012a). A scientific approach to forensic neuropsychology. In G. J. Larrabee (Ed.), Forensic neuropsychology: A scientific approach (2nd ed., pp. 3–22). New York: Oxford University Press.
Larrabee, G. J. (2012b). Performance validity and symptom validity in neuropsychological assessment. Journal of the International Neuropsychological Society, 18(4), 625–630.
Larrabee, G. J., Millis, S. R., & Meyers, J. E. (2008). Sensitivity to brain dysfunction of the Halstead-Reitan vs. an ability-focused neuropsychological battery. The Clinical Neuropsychologist, 22(5), 813–825.
Larrabee, G. J., Millis, S. R., & Meyers, J. E. (2009). 40 plus or minus 10: A new magical number: Reply to Russell. The Clinical Neuropsychologist, 23(5), 841–849.
Leonberger, F. T., Nicks, S. D., Larrabee, G. J., & Goldfader, P. R. (1992).
Factor structure of the Wechsler memory scale-revised within a
comprehensive neuropsychological battery. Neuropsychology, 6(3),
239249.
Lezak, M. D. (1995). Neuropsychological assessment. New York: Oxford
University Press.
Lezak, M. D., Howieson, D. B., Bigler, E. D., & Tranel, D. (2012). Neuropsychological assessment (5th ed., pp. 747–748). New York: Oxford University Press.
Lezak, M. D., Howieson, D. B., & Loring, D. (2004).
Neuropsychological assessment (4th ed., p. 636). New York:
Oxford University Press.
Maroco, J., Silva, D., Rodriquez, A., Guerreiro, M., Santana, I., & de Mendonça, A. (2011). Data mining methods in the prediction of dementia: A real-data comparison of the accuracy, sensitivity and specificity of linear discriminant analysis, logistic regression, neural networks, support vector machines, classification trees and random forests. BMC Research Notes, 4, 299–313.
Meehl, P. E. (1954). Clinical versus statistical prediction: A theoretical
analysis and a review of the evidence. Minneapolis, MN: University
of Minnesota.
Meyers, J. E. (2014). Manual for the Meyers Neuropsychological Battery
(MNB) (Electronic). Retrieved on 07/14/2015 from http://www.
meyersneuropsychological.com/.
Meyers, J. E., & Diep, A. (2000). Assessment of malingering in chronic pain patients using neuropsychological tests. Applied Neuropsychology, 7, 133–139.
Meyers, J. E., Galinsky, A., & Volbrecht, M. E. (1999a). Malingering and mild brain injury: How low is too low. Applied Neuropsychology, 6, 208–216.
Meyers, J. E., & Meyers, K. R. (1995a). Rey complex figure test and
recognition trial: Professional manual. Odessa: Psychological
Assessment Resources.
Meyers, J. E., & Meyers, K. R. (1995b). The Rey complex figure and recognition trial under four different administration procedures. The Clinical Neuropsychologist, 9, 65–67.
Meyers, J. E., Miller, R. M., Haws, N. A., Murphy-Tafti, J. L., Curtis, T. D., Rupp, Z. W., ... Thompson, L. M. (2013a). An adaptation of the MMPI-2 Meyers index for the MMPI-2-RF. Applied Neuropsychology: Adult, 21(2), 148–154.
Meyers, J. E., Miller, R. M., Thompson, L. M., Scalese, A. M., Allred, B. C., ... Lee, A. J.-H. (2014a). Using likelihood ratios to detect invalid performance with performance validity measures. Archives of Clinical Neuropsychology, 29, 224–235.
Meyers, J. E., Miller, R. M., & Tuita, A. R. (2014b). Using pattern analysis matching to differentiate TBI and PTSD in a military sample. Applied Neuropsychology: Adult, 21(1), 60–68.
Meyers, J. E., Millis, S. R., & Volkert, K. T. (2002a). A validity index for the MMPI-2. Archives of Clinical Neuropsychology, 17(2), 157–169.
Meyers, J. E., Reinsch-Boothby, L., Miller, R. M., Rohling, M. L., & Axelrod, B. N. (2011a). Does the source of a forensic referral affect neuropsychological test performance on a standardized battery of tests? The Clinical Neuropsychologist, 25, 477–487.
Meyers, J. E., Roberts, R. J., Bayless, J. D., Volkert, K. T., & Evitts, P. E. (2002b). Dichotic listening: Expanded norms and clinical application. Archives of Clinical Neuropsychology, 17(1), 79–90.
Meyers, J. E., & Rohling, M. L. (2004). Validation of the Meyers short battery on mild TBI patients. Archives of Clinical Neuropsychology, 19, 637–651.
Meyers, J. E., & Rohling, M. L. (2009). CT and MRI correlations with neuropsychological tests. Applied Neuropsychology, 16(4), 237–253.
Meyers, J. E., & Volbrecht, M. E. (2003). A validation of multiple malingering detection methods in a large clinical sample. Archives of Clinical Neuropsychology, 18(3), 261–276.
Meyers, J. E., Volbrecht, M. E., Axelrod, B. N., & Reinsch-Boothby, L. (2011b). Embedded symptom validity tests and overall neuropsychological test performance. Archives of Clinical Neuropsychology, 26, 8–15.
Meyers, J. E., Volbrecht, M. E., & Kaster-Bundgaard, J. (1999b). Driving is more than pedal pushing. Applied Neuropsychology, 6, 154–164.
Meyers, J. E., Volkert, K. T., & Diep, A. (2000). Sentence repetition test: Updated norms and clinical utility. Applied Neuropsychology, 7, 154–159.
Meyers, J. E., Zellinger, M. M., Kockler, T., Wagner, M., & Miller, R. M. (2013b). A validated seven-subtest short form for the WAIS-IV. Applied Neuropsychology, 20, 249–256.
Miller, J. B., Fichtenberg, N. L., & Millis, S. R. (2010). Diagnostic efficiency of an ability-focused battery. The Clinical Neuropsychologist, 24(4), 678–688.
Miller, L. S., & Rohling, M. L. (1998). Statistical interpretive methods for difficult neuropsychological test data. Poster presented at the 18th annual conference of the National Academy of Neuropsychology in Washington, DC.
Miller, L. S., & Rohling, M. L. (2001). A statistical interpretive method for neuropsychological data. Neuropsychology Review, 11(3), 143–169.
Mitrushina, M., Boone, K. B., Razani, J., & D'Elia, L. F. (2005). Handbook of normative data for neuropsychological assessment (2nd ed.). New York: Oxford University Press.
Mitrushina, M., Boone, K. B., & D'Elia, L. F. (1999). Handbook of normative data for neuropsychological assessment. New York: Oxford University Press.
Palmer, B. W., Appelbaum, M. I., & Heaton, R. K. (2004). Rohling's interpretive method and inherent limitations on the flexibility of flexible batteries. Neuropsychology Review, 14(3), 171–176.
Perrine, K. (1993). Differential aspects of conceptual processing in the category test and the Wisconsin card sort test. Journal of Clinical and Experimental Neuropsychology, 15(4), 461–473.
Pilgrim, B., Meyers, J. E., Bayless, J., & Whetstone, M. (1999). Validity of the Ward seven-subtest WAIS-III short form in a neuropsychological population. Applied Neuropsychology, 6, 243–246.
Pohar, M., Blas, M., & Turk, S. (2004). Comparison of logistic regression and linear discriminant analysis: A simulation study. Metodološki zvezki, 1, 143–161.
Rabin, L. A., Barr, W. B., & Burton, L. A. (2005). Assessment practices of clinical neuropsychologists in the United States and Canada: A survey of INS, NAN, and APA Division 40 members. Archives of Clinical Neuropsychology, 20(1), 33–65.
Read, D. E., & Spreen, O. (1986). Normative data in older adults for selected neuropsychological tests. In O. Spreen & E. Strauss (Eds.; 1998), A compendium of neuropsychological tests (2nd ed., p. 181). Unpublished data, University of Victoria.
Reitan, R. M., & Wolfson, D. (1985). The Halstead-Reitan neuropsycho-
logical test battery: Theory and interpretation. Tucson:
Neuropsychology Press.
Reitan, R. M., & Wolfson, D. (1988). Traumatic brain injury volume II:
Recovery and rehabilitation. Tucson: Neuropsychology Press.
Reitan, R. M., & Wolfson, D. (1997). Emotional disturbances and their interaction with neuropsychological deficits. Neuropsychology Review, 7(1), 3–19.
Roberts, M., Persinger, M., Grote, C., Evertowski, L., Springer, J., Tuten, T., ... Baglio, C. (1994). The dichotic listening test: Preliminary observations in American and Canadian samples. Applied Neuropsychology, 1, 45–56.
Rohling, M. L., Langhinrichsen-Rohling, J., & Miller, L. S. (2003a). Statistical methods for determining malingering: Rohling's interpretive method. In R. Franklin (Ed.), Prediction in forensic and neuropsychology: Sound statistical practices (pp. 171–207). Mahwah: Lawrence Erlbaum Associates.
Rohling, M. L., Meyers, J. E., & Millis, S. R. (2003b). Neuropsychological impairment following traumatic brain injury: A dose–response analysis. The Clinical Neuropsychologist, 17, 289–302.
Rohling, M. L., Miller, R. M., Axelrod, B. N., Wall, J. H., Lee, A. J., & Kinikini, D. T. (2015). Is co-norming required? Archives of Clinical Neuropsychology. Published online. Accessed 07/14/2015.
Rohling, M. L., Miller, L. S., & Langhinrichsen-Rohling, J. (1998). Rohling's evaluation method for interpreting neuropsychological data (REMIND). Poster presented at the 28th annual conference of the International Neuropsychological Society in Honolulu, HI.
Rohling, M. L., Miller, L. S., & Langhinrichsen-Rohling, J. (2004). Rohling's interpretive method for neuropsychological case data: A response to critics. Neuropsychology Review, 14(3), 155–169.
Rohling, M. L., Williamson, D. J., Miller, L. S., & Adams, R. L. (2003c). Using the Halstead-Reitan Battery to diagnose brain damage: A comparison of the predictive power of traditional techniques to Rohling's interpretive method. The Clinical Neuropsychologist, 17, 531–543.
Rosenthal, R. (1984). Meta-analytic procedures for social research.
Beverly Hills: Sage.
Rosenthal, R., Rosnow, R. L., & Rubin, D. B. (2000). Contrasts and
effect sizes in behavioral research. New York: Cambridge
University Press.
Russell, E. W. (2012). Scientific foundations of neuropsychological assessment with application to forensic evaluation (pp. 297–298). Waltham: Elsevier.
Russell, E. W., Neuringer, C., & Goldstein, G. (1970). Assessment of
brain damage: A neuropsychological key approach. New York:
Wiley.
Russell, E. W., & Starkey, R. I. (2001). Halstead Russell neuropsycho-
logical evaluation system-revised (HRNES-R). Los Angeles:
Western Psychological Services.
Sicoly, F. (1989). Computer-aided decisions in human services: Expert systems and multivariate models. Computers in Human Behavior, 5, 47–60.
Spreen, O., & Strauss, E. M. (1991). A compendium of neuropsychological tests: Administration, norms, and commentary. New York: Oxford University Press.
Spreen, O., & Strauss, E. M. (1998). A compendium of neuropsycholog-
ical tests: Administration, norms, and commentary (2nd ed.). New
York: Oxford University Press.
Stern, R. A., & White, T. (2003). Neuropsychological assessment battery: Administration, scoring, and interpretation manual. Lutz: Psychological Assessment Resources.
Stevens, A., Friedel, E., Mehren, G., & Merten, T. (2008). Malingering and uncooperativeness in psychiatric and psychological assessment: Prevalence and effects in a German sample of claimants. Psychiatry Research, 157, 191–200.
Strauss, E., Sherman, E. M., & Spreen, O. (2006). A compendium of
neuropsychological tests: Administration, norms, and commentary
(3rd ed.). New York: Oxford University Press.
Sweet, J. J., King, J. H., Malina, A. C., Bergman, M. A., & Simmons, A. (2002). Documenting the prominence of forensic neuropsychology at national meetings and relevant professional journals, from 1990 to 2000. The Clinical Neuropsychologist, 16, 481–494.
Sweet, J. J., & Meyer, D. G. (2011). Trends in forensic practice and research. In G. J. Larrabee (Ed.), Forensic neuropsychology: A scientific approach (pp. 501–516). New York: Oxford University Press.
Sweet, J. J., Meyer, D. G., Nelson, N. W., & Moberg, P. J. (2011). The TCN/AACN 2010 salary survey: Professional practices, beliefs, and incomes of U.S. neuropsychologists. The Clinical Neuropsychologist, 25(1), 12–61.
Tellegen, A., & Ben-Porath, Y. S. (2008). MMPI-2-RF (Minnesota mul-
tiphasic personality inventory-2 restructured form): Technical
manual. Minneapolis: University of Minnesota Press.
Tombaugh, T. N. (1996). TOMM, Test of memory malingering. New
York: Multi-Health Systems Inc.
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131.
Ward, L. C. (1990). Prediction of verbal, performance, and full scale IQs from seven subtests of the WAIS-R. Journal of Clinical Psychology, 46(4), 436–440.
Wechsler, D. (1997). Manual for the Wechsler adult intelligence scale (3rd ed.). San Antonio: The Psychological Corporation.
Wechsler, D. (2003). Manual for the Wechsler intelligence scale for children (4th ed.). San Antonio: The Psychological Corporation.
Wechsler, D. (2008). Manual for the Wechsler adult intelligence scale (4th ed.). San Antonio: Pearson.
Wedding, D., & Faust, D. (1989). Clinical judgment and decision making in neuropsychology. Archives of Clinical Neuropsychology, 4, 233–265.
Williams, J. M. (1997). The prediction of premorbid memory ability. Archives of Clinical Neuropsychology, 12, 745–756.
Willson, V. L., & Reynolds, C. R. (2004). A critique of Miller and Rohling's statistical interpretive method for neuropsychological test data. Neuropsychology Review, 14(3), 177–181.