Psychological Bulletin, 1989, Vol. 105, No. 2, 302-308
Copyright 1989 by the American Psychological Association, Inc. 0033-2909/89/$00.75
Multivariate Analysis Versus Multiple Univariate Analyses

Carl J Huberty, University of Georgia
John D. Morris, Florida Atlantic University
The argument for preceding multiple analyses of variance (ANOVAs) with a multivariate analysis of variance (MANOVA) to control for Type I error is challenged. Several situations are discussed in which multiple ANOVAs might be conducted without the necessity of a preliminary MANOVA. Three reasons for considering a multivariate analysis are discussed: to identify outcome variable system constructs, to select variable subsets, and to determine variable relative worth.
The analyses discussed in this article are those appropriate in research situations in which analysis of variance techniques are useful. These analyses are used to study the effects of treatment variables on outcome/response variables (in ex post facto as well as experimental studies). We speak of a univariate analysis of variance (ANOVA) when a single outcome variable is involved; when multiple outcome variables are involved, it is a multivariate analysis of variance (MANOVA). (Covariance analyses may also be included.)
With multiple outcome variables, the typical analysis approach used in the group-comparison context, at least in the behavioral sciences, is to either (a) conduct multiple ANOVAs or (b) conduct a MANOVA followed by multiple ANOVAs. That these are two popular choices may be concluded from a survey of some prominent behavioral science journals. The 1986 issues of five journals published by the American Psychological Association were surveyed: Journal of Applied Psychology, Journal of Counseling Psychology, Journal of Consulting and Clinical Psychology, Developmental Psychology, and Journal of Educational Psychology. In addition, one journal published by the American Educational Research Association, American Educational Research Journal, was included in the survey. The results of the survey are given in Table 1.
A few comments about these results would be appropriate. First, only one count was made per article, even though some articles reported analyses for multiple experiments or studies. Only the main analysis for an experiment was considered; so-called preliminary analyses were not tallied. Sometimes there were only two groups involved; in this case, multiple t tests were considered multiple ANOVAs, and a Hotelling T² analysis was considered a MANOVA. For the second analysis approach (MANOVA plus ANOVAs), interpretations or explanations were invariably based on the multiple ANOVAs. In six articles in which multiple ANOVAs were used, three justifications for not doing a MANOVA were given: (a) low outcome variable intercorrelations; (b) small number of outcome variables; and (c) small design cell frequencies.
After a MANOVA, multiple ANOVAs were often used implicitly or explicitly to assess relative variable importance. Reasons given for conducting multiple ANOVAs after a MANOVA were (a) to clarify the meaning of significant discriminators; (b) to explain the results of the MANOVA; and (c) to document effects reflected by the MANOVA. In one case, multiple ANOVAs were conducted even though results of the MANOVA were nonsignificant. In 2 of the 88 analyses that involved a MANOVA plus multiple ANOVAs, discriminant functions were briefly considered, but the main interpretation focus was still on the multiple ANOVAs. One of the three MANOVA-only applications resulted in nonsignificance. The other two applications incorporated descriptive discriminant analysis techniques. In none of the 222 multiple outcome variable studies was there much interest expressed in any structure associated with the MANOVA results.

Correspondence concerning this article should be addressed to Carl J Huberty, Department of Educational Psychology, University of Georgia, Athens, Georgia 30602.
Our thesis is that the MANOVA-ANOVAs approach is seldom, if ever, appropriate. Discussions of the appropriateness of the multiple-ANOVAs approach and the strict multivariate approach are given. The Type I error protection issue is briefly reviewed prior to some concluding comments.
Analysis Purposes
The primary reason for conducting a MANOVA or an ANOVA is to determine if there are any treatment (used generically) variable effects; in a one-way layout, this amounts simply to determining (by a statistical test) if any group differences exist. These effects or differences may pertain to a collection of outcome variables or to a single outcome variable. In addition to using the statistical test, however, a researcher will want to understand (explain/describe/interpret) the resulting effects or differences. An understanding of resulting ANOVA effects may be gained through the study of explained variation and univariate group contrasts. An understanding of resulting MANOVA effects may be gained through the study of explained variation, multivariate group contrasts, linear discriminant functions (LDFs), and LDF-outcome variable correlations.
We contend that an understanding of resulting MANOVA effects may not be gained by studying the significance of multiple ANOVAs. (As can be seen from the summary reported in Table 1, the MANOVA-ANOVAs approach is fairly common, at least in some areas of study.) A significant MANOVA difference need not imply that any significant ANOVA effect or effects exist; see Tatsuoka (1971, p. 23) for a simple bivariate example.
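A small numerical sketch (ours, not Tatsuoka's; the summary statistics are invented for illustration) shows how this can happen when two outcomes are highly correlated and the group difference runs against the correlation: neither univariate test is impressive, yet the two-group MANOVA (Hotelling's T²) is clearly significant.

```python
# Illustrative (hypothetical) summary statistics: two outcomes correlated .9,
# each with a modest group-mean difference of 0.4 SD in opposite directions.
import numpy as np
from scipy import stats

n1 = n2 = 20
d = np.array([0.4, -0.4])                      # group mean differences (SD units)
S = np.array([[1.0, 0.9], [0.9, 1.0]])         # pooled within-groups covariance

# Univariate tests (two-group ANOVA F = t^2): neither outcome is significant.
t = d / np.sqrt(S.diagonal() * (1 / n1 + 1 / n2))
print(2 * stats.t.sf(abs(t), n1 + n2 - 2))      # both p-values around .21

# Hotelling's T^2 (the two-group MANOVA): clearly significant.
t2 = (n1 * n2 / (n1 + n2)) * d @ np.linalg.solve(S, d)
f = t2 * (n1 + n2 - 2 - 1) / ((n1 + n2 - 2) * 2)
print(stats.f.sf(f, 2, n1 + n2 - 2 - 1))        # p well below .001
```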
Table 1
Frequencies of Alternative Analyses With Multiple Outcome Variables in 1986 Journal Issues

Journal (1986 volume)                                 Multiple ANOVAs   MANOVA plus ANOVAs   MANOVA   Total
Journal of Applied Psychology (71)                           10                 10              1       21
Journal of Counseling Psychology (33)                        18                 15              1       34
Journal of Consulting and Clinical Psychology (54)           24                 41              0       65
Developmental Psychology (22)                                48                 12              0       60
Journal of Educational Psychology (78)                       19                  4              1       24
American Educational Research Journal (23)                   12                  6              0       18
Total                                                       131                 88              3      222
Percentage                                                  59.0               39.6            1.4     100

Note. ANOVA = analysis of variance; MANOVA = multivariate analysis of variance.

A justification often given for conducting a MANOVA as a preliminary to multiple ANOVAs is to control for Type I error probability (see, e.g., Leary & Altmaier, 1980). The rationale typically used is that if the MANOVA yields significance, then one has a license to carry out the multiple ANOVAs (with the data interpretation being based on the results of the ANOVAs). This is the notion of a protected (multivariate) F test (Bock, 1975, p. 422).
The idea that one completely controls for Type I error probability by first conducting an overall MANOVA is open to question (Bird & Hadzi-Pavlovic, 1983; Bray & Maxwell, 1982, p. 343), because the alpha value for each ANOVA would be less than or equal to the alpha employed for the MANOVA only when the MANOVA null hypothesis is true. This notion does not have convincing empirical support in a MANOVA-ANOVAs context (Wilkinson, 1975), the Hummel and Sligo (1971) and Hummel and Johnston (1986) studies notwithstanding.
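To see why the protection is only partial, consider a small simulation sketch (ours, not part of the original article; the group sizes, correlation, and effect size are arbitrary assumptions). With two groups the MANOVA step reduces to Hotelling's T²; when one outcome has a real effect, that gate is passed most of the time, so the chance of a false rejection on the remaining, truly null outcomes stays close to its unprotected level.

```python
# Illustrative simulation: per-family false-rejection rate for "protected"
# univariate tests when the MANOVA null is false for one variable.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, p, alpha, reps = 30, 4, .05, 5000           # per-group n, number of outcomes
effect = np.array([1.0, 0.0, 0.0, 0.0])        # only variable 1 truly differs
cov = np.full((p, p), .5) + .5 * np.eye(p)     # common correlation of .5

def hotelling_p(x1, x2):
    """p value of Hotelling's T^2 via its exact F transformation."""
    d = x1.mean(0) - x2.mean(0)
    s_pooled = ((len(x1) - 1) * np.cov(x1, rowvar=False) +
                (len(x2) - 1) * np.cov(x2, rowvar=False)) / (len(x1) + len(x2) - 2)
    t2 = (len(x1) * len(x2) / (len(x1) + len(x2))) * d @ np.linalg.solve(s_pooled, d)
    df2 = len(x1) + len(x2) - p - 1
    f = t2 * df2 / ((len(x1) + len(x2) - 2) * p)
    return stats.f.sf(f, p, df2)

false_hits = 0
for _ in range(reps):
    g1 = rng.multivariate_normal(effect, cov, n)
    g2 = rng.multivariate_normal(np.zeros(p), cov, n)
    if hotelling_p(g1, g2) < alpha:            # the "protecting" MANOVA step
        t, pvals = stats.ttest_ind(g1, g2)     # follow-up univariate tests
        false_hits += (pvals[1:] < alpha).any()  # error on a truly null variable

print(f"P(at least one false univariate rejection) ~ {false_hits / reps:.3f}")
# With a real effect on variable 1 the gate is passed nearly every time, so the
# familywise error on the three null variables remains near its unprotected value.
```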
From a statistical point of view, one purpose of conducting a MANOVA should not be to serve as a preliminary step to multiple ANOVAs. The multivariate method and the univariate method address different research questions. The choice to conduct a strictly multivariate analysis or multiple univariate analyses is based on the purpose or purposes of the research effort.
Research Questions
The guiding force of an empirical research effort should be the question or set of questions formulated by the researcher. Research questions suggest not only the appropriate design and data collection procedures but also the data analysis strategy or strategies. It is recognized that additional research questions may be formulated after data collection commences and that results from initial analyses may suggest research questions in addition to those originally posed.
Univariate Questions
Obviously, research questions that would call for multiple ANOVAs pertain to individual outcome variables. For example: With respect to which outcome variables do the groups differ? Or: On which outcome variables does the treatment variable have an effect? There are, perhaps, four situations in which multiple ANOVAs may be appropriate. One is when the outcome variables are "conceptually independent" (Biskin, 1980, p. 70). (This is the antithesis of a situation involving a variable system, a notion discussed in the next section.) In such a situation one would be interested in how a treatment variable affects each of the outcome variables. Here, there would be no interest in seeking any linear composite of the outcome variables; an underlying construct is of no concern. In particular, an underlying construct would perhaps be of little interest when each outcome variable is from an unrelated domain. Dossey (1976), for example, studied the effects of three treatment variables (Teaching Strategy, Exemplification, Student Ability) on four outcome variables: Algebra Disjunctive Concept Attainment, Geometric Disjunctive Concept Attainment, Exclusive Disjunctive Concept Attainment, and Inclusive Disjunctive Concept Attainment. Because these outcome variables were considered conceptually independent, four three-way ANOVAs were conducted.
A second situation in which multiple univariate analyses might be appropriate is when the research being conducted is exploratory in nature. Such situations would exist when new treatment and outcome variables are being studied, and the effects of the former on the latter are being investigated so as to reach some tentative, nonconfirmatory conclusions. This might be of greater interest in status studies, as opposed to true experimental studies.
A third situation in which multiple ANOVAs may be appropriate is when some or all of the outcome variables under current study have been previously studied in univariate contexts. In this case, separate univariate analysis results can be obtained for comparison purposes, in addition to a multivariate analysis if the latter is appropriate and desirable.
Finally, there is an evaluation design situation in which multiple univariate analyses might be conducted. This is when some evidence is needed to show that two or more groups of units are equivalent with respect to a number of descriptors. These analyses might be considered in an in situ design for the purpose of a comparative evaluation of a project. In this situation, evidence of comparability may be obtained via multiple informal ("eyeball") tests or formal statistical tests.
Four situations have thus been presented that would seem appropriate for multiple univariate analyses. Multiple ANOVAs might be conducted to (a) study the effects of some treatment variable or variables on conceptually independent outcome variables; (b) explore new treatment-outcome variable bivariate relationships; (c) reexamine bivariate relationships within a multivariate context; and (d) select a comparison group in designing a study.
Any empirical interrelationships among the outcome variables are completely ignored when conducting multiple ANOVAs; this is no problem if one can argue for conceptual independence. It should be recognized, however, that because of the nature of behavioral science variables, redundant information will usually be obtained with multiple ANOVAs. For example, suppose Variable 1 yields univariate significance and that Variable 2 is highly correlated with Variable 1. Significance yielded by Variable 2, then, would not be a new result. Van de Geer (1971) pointed out that "with separate analyses of variance for each variable, we never know how much the results are duplicating each other" (p. 271). Thus, asking questions about individual outcomes may very well imply asking redundant questions. Asking redundant questions may be acceptable in some research contexts; however, the researcher should be cognizant of the redundancy.
Multivariate Questions
The basic MANOVA question is: Are there any overall (interaction, main) effects present? In addition, questions pertaining to simple effects and to group contrast effects may be addressed. (See Huberty, 1986, for a discussion on these and subsequent questions in this section.) After addressing the effects questions, there are other research questions that may be addressed via a multivariate analysis. These questions pertain to (a) determining outcome variable subsets that account for group separation; (b) determining the relative contribution to group separation of the outcome variables in the final subset; and (c) identifying underlying constructs associated with the obtained MANOVA results. None of these questions may be adequately addressed by conducting multiple ANOVAs. To appropriately address them, one must consider outcome variable intercorrelations.
The three questions in the preceding paragraph, from a strict multivariate point of view, are now briefly reviewed. In some instances it may be desirable to determine if fewer outcome variables than the total number initially chosen should form a basis for interpretation. This is the so-called variable selection problem, and it is discussed in some detail by Huberty (in press). This question might be considered so as to seek a parsimonious interpretation of a system of outcome variables. It should be noted that this is not an imposed parsimony, as one might get with multiple univariate analyses, but is a parsimony that takes into consideration the correlations among the outcome variables.
A second potential reason for conducting a multivariate analysis is to make an assessment of the relative contribution of the outcome variables to the resultant group differences or to the resultant effects of the treatment variable or variables. This is the so-called variable ordering problem. Although the assessment of variable importance is problematic in all multivariable analyses (including multiple and canonical correlation, factor analysis, and cluster analysis), some potentially useful indexes have been proposed for the MANOVA context (see Huberty, 1984). Of course, a meaningful ordering of variables can only be legitimately accomplished by taking the variable intercorrelations into consideration.
(It should be pointed out that typically employed criteria for variable selection and variable ordering are sample and system specific. What a good variable subset or a relatively good individual variable is depends upon the collection of the variables in the system being studied. How well the proposed selection and ordering results hold up over repeated sampling needs to be addressed with further empirical study. Of course, replication is highly desirable. The rank-order position of a given variable in a system of variables may change when new variables are added to the system. The same may be said for the composition of a good subset of variables. Hence, a conclusion regarding the goodness of a variable subset and the relative goodness of individual variables must be made with some caution [see Huberty, in press, and Share, 1984, for elaboration].)
The identification of a construct that underlies the collection of outcome variables to be studied is more a matter of art than statistics. This identification process is legitimate only if the collection of variables constitutes a system. A system of outcome variables may be loosely defined as a collection of conceptually interrelated variables that, at least potentially, determines one or more meaningful underlying variates or constructs. In a system, one has several outcome variables that represent a small number of constructs, typically one or two. For example, Watterson, Joe, Cole, and Sells (1980) studied a system of five outcome measures on attitudes (based on interview and questionnaire data) that led to two meaningful constructs, political attitude and freedom of expression; Hackman and Taber (1979) studied a system of 21 outcome measures on student performance (based on interview data) that yielded two meaningful constructs, academic performance and personal growth.
A goal of a multivariate analysis may be to identify and interpret the underlying construct or constructs. For such potential constructs to be meaningful, the judicious choice of outcome variables to study is necessary; the conceptual relationships among the variables must be considered in light of some overriding theory. A multivariate analysis should enable the researcher to "get a handle" on some characteristics of his or her theory: What are the emerging variables? These emerging variables are identified by considering some linear composites of the outcome variables, called canonical variates or linear discriminant functions (LDFs). Correlations, sometimes called structure coefficients, between each outcome variable and each LDF are found. Just as in factor analysis, the absolute values of these correlations, or loadings, are used in the identification process: Those variables with high loadings are tied together to arrive at a label for each construct.¹ We subscribe to the use of structure coefficients to label or name the construct identified with each LDF. This is in opposition to the use of the so-called standardized LDF weights for this purpose, which is espoused by Harris (1985, p. 319).

¹ It has been pointed out by Harris (1985, pp. 129, 257) and proven by Huberty (1972) that in the two-group case, the squared LDF-variable correlations are proportional to the univariate F values. Thus, it might seem that if a system structure is to be identified via loadings, then multiple univariate analyses would suffice. In the multiple-group case in which at least two LDFs result, however, identification of the multiple constructs by multiple univariate analyses is generally problematic; an exception may be if only one interpretable construct results (Rencher, 1986).
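As a computational aside (not part of the original article), structure coefficients are simply the correlations between each outcome variable and the LDF scores. A minimal Python sketch with hypothetical data, using scikit-learn only as a convenient source of discriminant scores, is:

```python
# Minimal sketch (hypothetical data): structure coefficients as the correlations
# between each outcome variable and each linear discriminant function (LDF) score.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
groups = np.repeat([0, 1, 2], 50)                      # three groups, 50 cases each
X = rng.normal(size=(150, 5)) + groups[:, None] * [0.8, 0.6, 0.1, 0.0, 0.4]

lda = LinearDiscriminantAnalysis()
scores = lda.fit(X, groups).transform(X)               # LDF scores: 150 x 2 here

# Correlate every outcome variable with every LDF (the "loadings").
structure = np.array([[np.corrcoef(X[:, j], scores[:, k])[0, 1]
                       for k in range(scores.shape[1])]
                      for j in range(X.shape[1])])
print(np.round(structure, 3))   # rows = variables, columns = LDF1, LDF2
# Variables with large absolute loadings on the same LDF would be "tied together"
# when attempting to name the construct that the LDF represents.
```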
Sometimes a researcher is interested in studying multiple systems, or subsystems, of variables. These subsystems may be studied for comparative purposes (see, e.g., Lunneborg & Lunneborg, 1977) or simply because different (conceptually independent?) constructs based on unrelated variable domains are present (see, e.g., Elkins & Sultmann, 1981). In this case, a separate multivariate analysis for each subsystem would be conducted.
The notion of a construct varies across different types of multivariate analyses. For the group-comparison or treatment-variable-effects situation on which we focus herein, the identified constructs are extrinsic to the set of outcome variables. That is, the optimization of the composites (i.e., LDFs) is based on a criterion that is external to the outcome variables, namely, the maximization of effects. Similar optimization of composites (linear classification functions) that is based on an external criterion occurs in the context of predictive discriminant analysis (see Huberty, 1984) in which classification accuracy is maximized. On the other hand, in component analysis, for example, the identified constructs are intrinsic to the set of outcome variables. That is, the optimization of the composites (i.e., components) is based on a criterion that is internal to the outcome variables, namely, the maximization of variance accounted for in the variable set. Furthermore, an extrinsic-intrinsic situation could result when one conducts a MANOVA or a classification analysis using component or factor scores as input (for an example, see Huberty & Wisenbaker, 1988; see Dempster, 1971, for more on data structure).
In a multiple-group situation, the study of system structure and of variable importance may lead to some interesting and informative conclusions. In the univariate case, group contrasts (pairwise or complex) are often of interest in addition to, or in lieu of, the omnibus intergroup comparison. Group contrasts may also be studied with multiple outcome variables, that is, multivariate group contrasts. The construct associated with one contrast may be characterized quite differently from that associated with another contrast. Also, the variable orderings for effects defined by two contrasts may be quite different. For a detailed discussion of this analysis strategy, see Huberty and Smith (1982).
None of the above three data analysis problems (selecting variables, ordering variables, identifying system structure) can be appropriately approached via multiple univariate analyses. As Gnanadesikan and Kettenring (1984) put it, an objective of a multivariate analysis is to increase the "sensitivity of the analysis through the exploitation of the intercorrelations among the response variables so that indications that may not be noticeable in separate univariate analyses stand out more clearly in the multivariate analysis" (p. 323). We interpret "indications" to imply relative importance of variables and structure underlying the data.
An Example
Consider a three-group, 13-variable data set, obtained from Bisbey (1988). Group 1 consists of college freshmen in a beginning-level course in French; Group 2 involves freshmen in intermediate French; and Group 3 involves freshmen in advanced French. The 13 outcome measures consisted of five (English, Mathematics, Social Science, Natural Science, French) high school cumulative grade point averages; number of semesters of high school French; four American College Testing Program standard scores (in the first four areas of study just mentioned); two (Aural Comprehension, Grammar) scores on the Educational Testing Service Cooperative French Placement Test; and number of semesters since last high school French course was taken.
A basic intent of the analysis was to explain or describe the resultant intergroup differences, Wilks lambda = .231, F(26, 276) = 11.455, p < .001, eta-squared = .769. For purposes of this discussion, we focus on three aspects of explanation/description: (a) relative contribution of the outcome variables to intergroup differences; (b) the construct or constructs underlying the differences; and (c) variable subset selection. Values of indexes and procedures used for these purposes from a multivariate analysis² may be compared with the 13 univariate ANOVA F values (see Table 2).
Just as in multiple regression analysis, we think it is important to take into full consideration the intercorrelations of the outcome variables when ordering variables in (descriptive) discriminant analysis. It may be shown (Urbakh, 1971) in the two-group case that the quotient of the square of a standardized LDF weight and an index of collinearity of the outcome variables is an estimate of the decrease in group separation when the corresponding variable is deleted. That is, variable rankings based on these quotients are identical to those based on F-to-remove values that are output from many statistical package programs, for example, SAS STEPDISC, SPSS-X DISCRIMINANT, and BMDP7M. F-to-remove values are appropriate for ranking variables when more than two groups are involved; see Huberty (in press) and Huberty and Wisenbaker (1988) for more details. One might ask if the univariate F values may be used for variable ordering, realizing, of course, that variable intercorrelations would be ignored. The rank ordering of the F values is not at all similar to that of the F-to-remove values (our preferred ordering basis), r = .651.
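For readers who wish to compute this index themselves, F-to-remove can be obtained from the ratio of Wilks lambda values with and without a given variable (the partial-lambda form reported by the stepwise routines named above). The sketch below is ours, not the article's, and uses hypothetical data and assumed group sizes:

```python
# Sketch (hypothetical data): F-to-remove for each outcome variable, computed
# from the partial Wilks lambda  Lambda_full / Lambda_without_j.
import numpy as np
from scipy import stats

def wilks_lambda(X, y):
    """det(W) / det(T): within-groups over total SSCP determinant ratio."""
    T = np.cov(X, rowvar=False) * (len(X) - 1)
    W = sum(np.cov(X[y == g], rowvar=False) * (np.sum(y == g) - 1)
            for g in np.unique(y))
    return np.linalg.det(W) / np.linalg.det(T)

rng = np.random.default_rng(2)
y = np.repeat([0, 1, 2], 51)                               # 3 groups, N = 153
X = rng.normal(size=(153, 4)) + y[:, None] * [1.0, 0.2, 0.0, 0.6]

N, p, n_groups = len(X), X.shape[1], 3
lam_full = wilks_lambda(X, y)
for j in range(p):
    lam_reduced = wilks_lambda(np.delete(X, j, axis=1), y)
    lam_partial = lam_full / lam_reduced                   # partial Wilks lambda
    f_remove = (1 - lam_partial) / lam_partial * (N - n_groups - p + 1) / (n_groups - 1)
    p_val = stats.f.sf(f_remove, n_groups - 1, N - n_groups - p + 1)
    print(f"X{j + 1}: F-to-remove = {f_remove:6.2f}  (p = {p_val:.4f})")
```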
² A popular index used in assessing relative variable contribution is the standardized LDF weight. As popular as this index might be, we prefer the F-to-remove index discussed in the latter part of the section. One reason for not preferring the standardized weight index is that its sampling variability is not considered. Furthermore, how to utilize standardized LDF weights for variable ordering when there are more than two groups is somewhat open to question. One possibility is to use only the weights for the first LDF. Another possibility is to determine, for each variable, a linear composite of the (absolute) LDF weights using, say, the eigenvalues as the composite weights (Eisenbeis & Avery, 1972, p. 70). A comparison between variable orderings based on standardized weights for the first LDF and those based on the preferred F-to-remove values may be made for the example considered here. The two sets of ranks are given in Table 2; the ranks are determined by clustering comparable values of each index. The correlation between the two sets of ranks is .878, which is fairly high, indeed. For two other data sets, taken from Huberty and Wisenbaker (1988), the correlations between the two sets of ranks are only .352 and .359.

Table 2
Univariate F, F-to-Remove, Standardized LDF Weight, and Structure Correlation Values for the Example

            Univariate F        F to remove        LDF1 weight         Structure correlation
Variable    Value     Rank      Value     Rank     Value     Rank      LDF1       LDF2
X1            8.496    7         0.173    12.5      .040     12.5       .182       .388
X2            2.396   12         1.167     8.5     -.232      5.5       .052       .439
X3            9.276    4         1.731     5.5      .191      8.5       .140       .736
X4            5.842    9         6.854     3        .582      2.5       .113       .577
X5           35.602    3         5.067     4        .322      4         .409       .043
X6            8.227    7         1.310     8.5      .028     12.5       .158       .554
X7            8.844    5         1.358     8.5      .205      8.5       .203      -.065
X8            0.584   13         0.197    12.5      .079     11         .028       .211
X9            3.722   10.5       1.841     5.5     -.247      5.5       .131       .080
X10           3.361   10.5       0.835    11       -.164      8.5       .124      -.082
X11          67.869    2        16.483     2        .606      2.5       .555      -.480
X12         105.862    1        23.455     1        .686      1         .703      -.243
X13           7.967    7         1.531     8.5      .176      8.5      -.187       .230

Note. Ranks are based on "clusters" of values.

Now for some comments pertaining to underlying constructs.
Using structure correlations from Table 2 to identify the two constructs, it may be seen that some joining of variables X12, X11, and X5 yields the dominant construct. The second construct is basically defined by variables X3, X4, X6, X11, and X1. We claim that it is illogical to consider using univariate F values to identify such constructs, which inherently depend on intercorrelations of constituent variables.
The use of univariate F values to determine good subsets of outcome variables is also a questionable practice. For example, according to F values, the subset {X12, X11, X5, X3, X7, X1} might be considered; this subset is actually worse than the 10th-best subset of size six, in the sense of the smallest Wilks lambda value (see Huberty & Wisenbaker, 1988).
Finally, when examining the univariate analyses, one might conclude that variable X1 is significant, F(2, 150) = 8.496, p = .0003, eta-squared = .102. But, in the company of the other 12 variables, X1 does not appear to be contributing much at all to overall group differences nor to structure identification.
Type I Error Protection
Whenever multiple statistical tests are carried out in inferential data analysis, there is a potential problem of "probability pyramiding." Use of conventional levels of Type I error probabilities for each test in a series of statistical tests may yield an unacceptably high Type I error probability across all of the tests (the "experimentwise error rate"). In the current context, this may be a particular issue when multiple ANOVAs are conducted.
If a researcher has a legitimate reason for testing univariate hypotheses, then he or she might consider either of two testing procedures. One is a simultaneous test procedure (STP) originated by Gabriel (1969), advocated by Bird and Hadzi-Pavlovic (1983), and programmed by O'Grady (1986). For the STP, as it is applied to the current MANOVA-ANOVAs context, the referent distribution for the ANOVA F values would be based on the MANOVA test statistic used. Bird and Hadzi-Pavlovic (1983, p. 168), however, point out that for the current context, the overall MANOVA test is not really a necessary prerequisite to simultaneous ANOVAs. Ryan (1980) makes the same point for the ANOVA contrasts context. These two contexts may be combined to a MANOVA-ANOVA contrasts context in which it would be reasonable to go directly to the study of univariate group contrasts, if univariate hypotheses are the main concern. The STP approach has not been used to any great extent. One reason for this is the low statistical power, a characteristic shared with the Scheffé test in a univariate context.
The second procedure for testing univariate hypotheses is to employ the usual univariate test statistics with an adjustment to the overall Type I error probability. How "overall" is defined is somewhat arbitrary. It could mean the probability of committing a Type I error across all tests conducted on the given data set, or it could mean the Type I error probability associated with an individual outcome variable when univariate questions are being studied. Whatever the choice (which can be a personal one and one that is numerically nonconventional; see Hall & Selinger, 1986), some error splitting seems very reasonable. Assuming that the Type I error probability for each in a set of m tests is constant, the alpha level for a given test may be determined by using either of two approaches. One approach is to use an additive (Bonferroni) inequality: For m tests, the alpha level for each test (α₁) is given by the overall alpha level (αₘ) divided by m. A second approach is to use a multiplicative inequality (Šidák, 1967): For m tests, α₁ is found by taking 1 minus the mth root of the complement of αₘ (see Games, 1977). The per-test alphas, constant across the m tests, that are found using the two approaches are, for most practical purposes, the same.
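For example (our sketch, not the article's), with m = 6 tests and an overall alpha of .05, the two adjustments give nearly identical per-test alphas:

```python
# Per-test alpha for m tests at a chosen overall (familywise) alpha,
# by the additive (Bonferroni) and multiplicative (Sidak) inequalities.
m, alpha_overall = 6, .05
alpha_bonferroni = alpha_overall / m                     # additive inequality
alpha_sidak = 1 - (1 - alpha_overall) ** (1 / m)         # multiplicative inequality
print(round(alpha_bonferroni, 5), round(alpha_sidak, 5))  # 0.00833 vs. 0.00851
```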
In nearly all instances, outcome variables are interrelated. Thus, multiple ANOVA F tests are not independent. This lack of independence does not, however, present difficulties in determining the per-test alpha level to use. That this is the case may be seen from the following double inequality:

αₘ ≤ 1 − (1 − α₁)^m ≤ mα₁.

That is, either function of α₁ may be considered as an upper bound for the overall alpha, αₘ.
It turns out that when conducting m tests, each at a constant alpha level, a considerably larger overall alpha level results. For example, six tests, each conducted using an alpha level of .05, yield an upper bound for the overall alpha level of .30 using the additive inequality, and of about .26 using the multiplicative inequality (the middle term of the double inequality in the preceding paragraph).
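The arithmetic behind those two figures can be checked directly (our sketch):

```python
# Upper bounds on the overall alpha when six tests are each run at .05,
# reproducing the .30 (additive) and roughly .26 (multiplicative) figures above.
m, alpha_per_test = 6, .05
print(round(m * alpha_per_test, 2))              # 0.3   (additive / Bonferroni bound)
print(round(1 - (1 - alpha_per_test) ** m, 3))   # 0.265 (multiplicative bound)
```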
Just as the STP approach may be lacking in statistical power, so too may the procedure of adjusting the Type I error probability. One way of obtaining reasonable power values is to use an adequate sample size (in relation to the number of outcome variables). This, however, may provide little solace to the practicing researcher. Modifications of the adjustment procedure have been proposed by Larzelere and Mulaik (1977) and by Schweder and Spjøtvoll (1982).
We recommend a modified adjustment procedure to control for experimentwise Type I error when conducting multiple statistical tests.
Discussion
Even though it is a fairly popular analysis route to take in the behavioral sciences, conducting a MANOVA as a preliminary step to multiple ANOVAs is not only unnecessary but irrelevant as well. We consider to be a myth the idea that one is controlling Type I error probability by following a significant MANOVA test with multiple ANOVA tests, each conducted using conventional significance levels. Furthermore, the research questions addressed by a MANOVA and by multiple ANOVAs are different; the results of one analysis may have little or no direct substantive bearing on the results of the other.
To require a MANOVA as a prerequisite of multiple ANOVAs is illogical, and the comfort of statistical protection is an illusion. The view that it is inappropriate to follow a significant MANOVA overall test with univariate tests is shared by others (e.g., Share, 1984).
If the researcher is interested in outcome variable selection or ordering, or in variable system structure, then a multivariate analysis should be done. It has been argued (e.g., by Conger, 1984, p. 303) that a weighted composite of the outcome variables (i.e., an LDF) is not readily interpretable. That may be the case when a small number of diverse outcome variables is being studied. This should not, however, be considered a drawback to conducting a MANOVA. Obtaining an uninterpretable structure does not logically lead to the use of multiple ANOVAs. Is it reasonable to shift from a multivariate-type research question to an ANOVA-type research question just because the multivariate question is difficult to answer? The possibility of obtaining an interpretable composite when outcome variables are judiciously chosen for study may very well enhance analysis findings. This is a plus!
On the basis of the limited journal survey completed, one might conclude that the multiple-ANOVAs analysis strategy will be appropriate for many empirical studies. If this conclusion is in fact correct, then the assessment of relative outcome variable importance and the discovery and interpretation of data structure will apparently be of little interest. There will be little concern, too, for the potential of finding and reporting results that may be redundant across the set of outcome variables.
Whether a researcher conducts a multivariate analysis or multiple univariate analyses, it is strongly recommended that the outcome variable intercorrelations be reported, or at least be made available. Typically, these correlations would be reported in the form of a matrix. At the very least, a descriptive summary of the distribution of the correlations should be reported.
References
Bird, K. D., & Hadzi-Pavlovic, D. (1983). Simultaneous test procedures and the choice of a test statistic in MANOVA. Psychological Bulletin, 93, 167-178.
Bisbey, G. D. (1988). [Characteristics of college freshmen in three levels of French instruction]. Unpublished raw data.
Biskin, B. H. (1980). Multivariate analysis in experimental counseling research. The Counseling Psychologist, 8, 69-72.
Bock, R. D. (1975). Multivariate statistical methods in behavioral research. New York: McGraw-Hill.
Bray, J. H., & Maxwell, S. E. (1982). Analyzing and interpreting significant MANOVAs. Review of Educational Research, 52, 340-367.
Conger, A. J. (1984). Statistical consideration. In M. Hersen, L. Michelson, & A. S. Bellack (Eds.), Issues in psychotherapy research (pp. 285-309). New York: Plenum.
Dempster, A. P. (1971). An overview of multivariate data analysis. Journal of Multivariate Analysis, 1, 316-346.
Dossey, J. A. (1976). The relative effectiveness of four strategies for teaching algebraic and geometric disjunctive concepts and for teaching inclusive and exclusive disjunctive concepts. Journal for Research in Mathematics Education, 7, 92-105.
Eisenbeis, R. A., & Avery, R. B. (1972). Discriminant analysis and classification procedures. Lexington, MA: Heath.
Elkins, J., & Sultmann, W. F. (1981). ITPA and learning disability: A discriminant analysis. Journal of Learning Disabilities, 14, 88-92.
Gabriel, K. R. (1969). Simultaneous test procedures: Some theory of multiple comparisons. Annals of Mathematical Statistics, 40, 224-250.
Games, P. A. (1977). An improved table for simultaneous control on g contrasts. Journal of the American Statistical Association, 72, 531-534.
Gnanadesikan, R., & Kettenring, J. R. (1984). A pragmatic review of multivariate methods in applications. In H. A. David & H. T. David (Eds.), Statistics: An appraisal (pp. 309-337). Ames: Iowa State University Press.
Hackman, J. D., & Taber, T. D. (1979). Patterns of undergraduate performance related to success in college. American Educational Research Journal, 16, 117-138.
Hall, P., & Selinger, B. (1986). Statistical significance: Balancing evidence against doubt. Australian Journal of Statistics, 28, 354-370.
Harris, R. J. (1985). A primer of multivariate statistics. New York: Academic Press.
Huberty, C. J (1972). Regression analysis and 2-group discriminant analysis. Journal of Experimental Education, 41, 39-41.
Huberty, C. J (1984). Issues in the use and interpretation of discriminant analysis. Psychological Bulletin, 95, 156-171.
Huberty, C. J (1986). Questions addressed by multivariate analysis of variance and discriminant analysis. Georgia Educational Researcher, 5, 47-60.
Huberty, C. J (in press). Problems with stepwise methods: Better alternatives. In B. Thompson (Ed.), Advances in social science methodology (Vol. 1). Greenwich, CT: JAI Press.
Huberty, C. J, & Smith, J. D. (1982). The study of effects in MANOVA. Multivariate Behavioral Research, 17, 417-432.
Huberty, C. J, & Wisenbaker, J. M. (1988, April). Discriminant analysis: Potential improvements in typical practice. Paper presented at the annual meeting of the American Educational Research Association, New Orleans.
Hummel, T. J., & Johnston, C. B. (1986, April). An empirical comparison of size and power of seven methods for analyzing multivariate data in the two-sample case. Paper presented at the annual meeting of the American Educational Research Association, San Francisco.
Hummel, T. J., & Sligo, J. R. (1971). Empirical comparison of univariate and multivariate analysis of variance procedures. Psychological Bulletin, 76, 49-57.
Larzelere, R. E., & Mulaik, S. A. (1977). Single-sample tests for many correlations. Psychological Bulletin, 84, 557-569.
Leary, M. R., & Altmaier, E. M. (1980). Type I error in counseling research: A plea for multivariate analyses. Journal of Counseling Psychology, 27, 611-615.
Lunneborg, C. E., & Lunneborg, P. W. (1977). Is there room for a third dimension in vocational interest differentiation? Journal of Vocational Behavior, 11, 120-127.
O'Grady, K. E. (1986). Simultaneous tests and confidence intervals. Behavior Research Methods, Instruments, & Computers, 18, 325-326.
Rencher, A. C. (1986, August). Canonical discriminant function coefficients converted to correlations: A caveat. Paper presented at the annual meeting of the American Statistical Association, Chicago.
Ryan, T. A. (1980). Comment on "Protecting the overall rate of Type I errors for pairwise comparisons with an omnibus test statistic." Psychological Bulletin, 88, 354-355.
Schweder, T., & Spjøtvoll, E. (1982). Plots of p-values to evaluate many tests simultaneously. Biometrika, 69, 493-502.
Share, D. L. (1984). Interpreting the output of multivariate analyses: A discussion of current approaches. British Journal of Psychology, 75, 349-362.
Šidák, Z. (1967). Rectangular confidence regions for the means of multivariate normal distributions. Journal of the American Statistical Association, 62, 626-633.
Tatsuoka, M. M. (1971). Significance tests. Champaign, IL: Institute for Personality and Ability Testing.
Urbakh, V. Y. (1971). Linear discriminant analysis: Loss of discriminating power when a variable is omitted. Biometrics, 27, 531-534.
Van de Geer, J. P. (1971). Introduction to multivariate analysis for the social sciences. San Francisco: Freeman.
Watterson, O. M., Joe, G. W., Cole, S. G., & Sells, S. B. (1980). Impression management and attitudes toward marihuana use. Multivariate Behavioral Research, 15, 139-156.
Wilkinson, L. (1975). Response variable hypotheses in the multivariate analysis of variance. Psychological Bulletin, 82, 408-412.
Received May 14, 1987
Revision received March 11, 1988
Accepted June 7, 1988
Assuming a constant α, tables of t were developed using the Šidák multiplicative inequality: when doing g different contrasts the familywise rate of Type 1 errors, fwi ≤ 1 − (1 − α). Values of t are given that yield fwi = .01, .05, .10, or .20 for number of contrasts, g = 2(1)10, 15(5)50 and degrees of freedom, v = 2(1)30, 40, 60, 120, ∞. These t values are smaller than those in the previous Dunn (1961) tables, thus yielding slightly improved power.