Qualitative Health Research, 20(12), 1736–1743. © The Author(s) 2010. Reprints and permission: sagepub.com/journalsPermissions.nav. DOI: 10.1177/1049732310378656. http://qhr.sagepub.com
A Comparative Analysis of Three Online Appraisal Instruments’ Ability to Assess Validity in Qualitative Research
Karin Hannes (1), Craig Lockwood (2), and Alan Pearson (2)
Abstract
The concept of validity has been a central component in critical appraisal exercises evaluating the methodological quality
of quantitative studies. Reactions by qualitative researchers have been mixed in relation to whether or not validity should
be applied to qualitative research and if so, what criteria should be used to distinguish high-quality articles from others.
We compared three online critical appraisal instruments’ ability to facilitate an assessment of validity. Many reviewers
have used the critical appraisal skills program (CASP) tool to complete their critical appraisal exercise; however, CASP
appears to be less sensitive to aspects of validity than the evaluation tool for qualitative studies (ETQS) and the Joanna
Briggs Institute (JBI) tool. The ETQS provides detailed instructions on how to interpret criteria; however, it is the JBI
tool, with its focus on congruity, that appears to be the most coherent.
Keywords
evidence-based practice; metasynthesis; research evaluation; review; validity
Current discussions on the potential of qualitative research findings to inform complex decision-making processes in policy and practice have increased the interest in qualitative evidence synthesis (QES). There is ongoing debate on whether or not quality assessment should be part of QES and, if so, what criteria should be used to distinguish high-quality studies from others. In a recent review of an article by Cohen and Crabtree (2008), we identified seven quality dimensions in existing appraisal instruments: evaluation of researcher bias, validity, reliability, importance of the research project, clarity and coherence of research reports, ethics, and the use of appropriate and rigorous methods. There is general agreement on the inclusion of the last four criteria; however, reactions by qualitative researchers have been mixed in relation to whether or not concepts such as bias, validity, and reliability should be applied to qualitative research. The concept of validity has been a central component in critical appraisal exercises by reviewers evaluating the methodological quality of quantitative studies (Higgins, Altman, the Cochrane Statistical Methods Group [CSMG], & the Cochrane Bias Methods Group [CBMG], 2008). It is mainly focused on the detection of risk of bias; that is, the risk that a study might over- or underestimate the true intervention effect. Qualitative researchers rely, implicitly or explicitly, on a variety of understandings of validity in their evaluation of methodological quality. Smith (1984) argued that the basic epistemological and ontological assumptions of quantitative and qualitative research are incompatible, and that it is consequently inappropriate to apply measures such as validity to qualitative research. Other researchers hold a moderate viewpoint, assuming that some studies indeed are more rigorous than others, and that concepts such as researcher bias, validity, and reliability should be part of an assessment exercise (Hannes & the Cochrane Qualitative Research Methods Group [CQRMG], 2009; Morse, Barrett, Mayan, Olson, & Spiers, 2002).
A number of researchers have proposed translations of the concept of validity to the qualitative research community, such as rigor, trustworthiness, plausibility, and credibility (Eisner, 1991; Guba & Lincoln, 1989; Lincoln & Guba, 1985). These translations have been criticized by some and welcomed by others (Sandelowski, 1986; Seale, 1999).
(1) Catholic University Leuven, Leuven, Belgium
(2) University of Adelaide, Adelaide, South Australia, Australia

Corresponding Author:
Karin Hannes, Catholic University Leuven, Centre for Methodology of Educational Research, A. Vesaliusstraat 2, 3000 Leuven, Belgium
Email: karin.hannes@ped.kuleuven.be
To enable reviewers to critically appraise qualitative studies, we have to move beyond a translation. Any account of validity, to be productive, should begin with an understanding of what qualitative researchers actually do to establish validity. Maxwell (1992) deconstructed the concept of validity into five types of understanding: descriptive, interpretive, and theoretical validity, generalizability (also known as external validity), and evaluative validity. According to Maxwell, validity is based on the kinds of understanding we have of the phenomena under study rather than on the procedures and instruments used to evaluate validity in positivistic approaches. Validity refers primarily to accounts identified by researchers and is therefore relative to purposes and circumstances. Our approach to quality assessment focused on the identification of potential threats to validity, which assists reviewers in reflecting on these threats. One approach to evaluating whether or not methodological quality has an impact on a review of qualitative studies is to conduct sensitivity analyses (Harden, 2008). Such analyses provide reviewers with objective information on the impact of methodologically sound studies versus studies that contain methodological flaws.
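A sensitivity analysis of this kind can be illustrated with a small computational sketch. The following Python fragment is ours, not part of Harden’s (2008) procedure; the study identifiers, quality judgments, and themes are invented for illustration. It simply contrasts the themes supported by all studies with those that survive when methodologically flawed studies are excluded.

    # Illustrative sketch only, in the spirit of a quality-based sensitivity
    # analysis (Harden, 2008): compare themes supported by all studies with
    # themes remaining when methodologically flawed studies are excluded.
    # Study identifiers, quality judgments, and themes are hypothetical.
    studies = {
        "study_1": {"sound": True,  "themes": {"stigma", "access"}},
        "study_2": {"sound": False, "themes": {"stigma", "cost"}},
        "study_3": {"sound": True,  "themes": {"access", "trust"}},
    }

    def supported_themes(include_flawed):
        """Union of themes across studies, optionally restricted to sound ones."""
        selected = [s["themes"] for s in studies.values()
                    if include_flawed or s["sound"]]
        return set().union(*selected)

    all_themes = supported_themes(include_flawed=True)
    robust_themes = supported_themes(include_flawed=False)
    # Themes resting solely on methodologically flawed studies:
    print(all_themes - robust_themes)  # {'cost'}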
Maxwell’s critical realist approach to quality assessment is useful as a framework of the kinds of threats to validity that we need to consider. It can be used to think about the nature of these threats and the possible ways that specific threats might be addressed, and it is particularly helpful in trying to understand the different criteria used in recently developed critical appraisal instruments for qualitative research.
In this article we compare three critical appraisal instruments available online on the extent to which they include criteria that can assist reviewers in assessing the validity of an original qualitative study. In addition, we discuss techniques reviewers could look for when evaluating validity. To complete an assessment of qualitative research, reviewers need to define what exactly they wish to evaluate. They also need to be aware of the criteria that potentially reflect methodological soundness and the techniques authors of original studies might use to establish a methodologically sound study. In the context of this study, we define criteria as the standards to be upheld as ideals in qualitative research, and techniques as the methods employed to diminish validity threats (Whittemore, Chase, & Mandle, 2001). Maxwell’s framework is a central component in the comparison of criteria.
Method
We used four inclusion criteria to select appraisal instruments for the comparison: (a) broadly applicable to different qualitative research designs; (b) used in recently published QES (2005 to 2008); (c) available online, ready to use, and free of charge; and (d) developed and supported by an organization, institute, or consortium, or a context other than individual academic interest. The latter generally facilitates ongoing use and development of the instrument, as well as more reliable access to the instrument in the long term. The online component adds to the timely accessibility of the instruments, particularly for users who do not have access to peer-reviewed journals. In addition, online availability facilitates communication with or feedback to the developers.

We used an ongoing update of a review on published QES as a basis for identification of relevant articles published between 2005 and 2008 (Dixon-Woods, Booth, & Sutton, 2007). Eighty-two articles were included in the update (Hannes & Macaitis, 2010). Strategies for critical appraisal were among the data that were extracted. In a substantial number of the QES articles, the authors did not mention an appraisal instrument. Some explicitly stated that critical appraisal was not considered, failed to specify their tool, or used a fairly general description of the critical appraisal process. Others used modified versions from colleagues or copied criteria developed by scholars in the field of health care. We explored eight potentially relevant critical appraisal instruments in the context of our inclusion criteria, and excluded five instruments because (a) they were not (or were no longer) available as an online instrument (qualsyst, critical appraisal forms 2005, British Sociological Association medical sociology group criteria), (b) they addressed etiology instead of qualitative research (Spider tool), or (c) they focused on process evaluations instead of general qualitative research (evidence for policy and practice information and coordinating center tool).
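For illustration only, the screening described above can be thought of as a simple conjunction of the four inclusion criteria. The Python sketch below is not the procedure we followed, which was a qualitative judgment against published sources; the attribute values and the excluded "qualsyst" example are drawn from the text, but the encoding itself is hypothetical.

    # Illustrative sketch only (not the actual selection procedure): the four
    # inclusion criteria applied as a conjunctive filter over candidates.
    from dataclasses import dataclass

    @dataclass
    class Instrument:
        name: str
        broadly_applicable: bool   # (a) usable across qualitative designs
        used_in_recent_qes: bool   # (b) used in QES published 2005 to 2008
        online_and_free: bool      # (c) online, ready to use, free of charge
        org_supported: bool        # (d) backed by an organization or consortium

        def meets_all_criteria(self):
            return all((self.broadly_applicable, self.used_in_recent_qes,
                        self.online_and_free, self.org_supported))

    candidates = [
        Instrument("JBI tool", True, True, True, True),
        Instrument("CASP tool", True, True, True, True),
        Instrument("ETQS", True, True, True, True),
        Instrument("qualsyst", True, True, False, True),  # no longer online
    ]

    included = [i.name for i in candidates if i.meets_all_criteria()]
    print(included)  # ['JBI tool', 'CASP tool', 'ETQS']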
Three instruments fit our inclusion criteria: the Joanna Briggs Institute (JBI) tool, the critical appraisal skills program (CASP) tool, and the evaluation tool for qualitative studies (ETQS). All three instruments were developed in the context of systematic reviews and can be used by reviewers to assist them in assessing the quality of original research articles. We refer the reader to the original instruments available online for a more detailed description per criterion (see Note 1). To facilitate comparison, we grouped the criteria used in these three instruments under 11 main headings: theoretical frameworks; appropriateness of the research design; the procedures for data collection, data analysis, and the reporting of the findings; the context of research; the impact of the investigator; believability; ethics; adequacy of the conclusions; and the value/implications of the research. The categorization process was interpretive and iterative. We used a constant comparative method to develop headings for criteria that cut across the selected and evaluated instruments, and cross compared the criteria used in the CASP tool and the ETQS with the criteria used in the JBI tool. This resulted in a functional consistency of headings. Whenever two or more criteria appeared under one major heading in the original instrument, we separated them to facilitate the comparison. Table 1 displays the results of the comparison.
Results
We set out to investigate whether the instruments presented criteria that could be of assistance to reviewers in assessing the validity of original qualitative research reports. The main headings derived from the constant comparative process were used to inform us. An overview can be found in Table 2. The following paragraphs expand on this table and describe the five types of validity identified by Maxwell (1992).
Descriptive Validity
Descriptive validity refers to the process of data collection and can be used to evaluate the accuracy of reporting on specific events and situations. It is mainly focused on the representation of facts (rather than interpretations). Maxwell (1992) drew on the idea that intersubjective agreement can be achieved, given the appropriate data. Descriptive validity is reflected in criteria such as “investigator impact” and “context.” The latter criterion is not addressed in the CASP tool.
Interpretive Validity
Interpretive validity refers to the accuracy in portraying
the inner content of a research subject. It is focused on the
meaning of recorded behaviors, events, or experiences of
the people engaged with them. Interpretive accounts are
constructed by researchers but are grounded in the words
and concepts from the participants studied. Interpretive
validity is reflected in the criterion “believability,” which
is addressed in the JBI tool and the ETQS.
Theoretical Validity
To address theoretical validity, researchers seek to
answer questions such as how a phenomenon under study
manifests itself and why it does so. It is meant to contain
a level of abstraction in explicitly addressing the theoreti-
cal constructions and frameworks that researchers use to
apply the knowledge generated from their projects.
Appraisal of theoretical validity has been moved from an
interest in the accuracy of accounts to legitimacy of the
application of certain concepts or theories and their
appropriateness. It is partly reflected in the “theoretical
framework” criterion, with a link to the “evaluation/
outcome” criterion. Both are addressed in the JBI tool
and the ETQS.
Generalizability (External Validity)
Researchers can reach a degree of generalizability when
they use their theories to go beyond making sense of par-
ticular persons or situations studied with their theories.
They also add to the external validity of their study when
they show how the same process might lead to similar (or
different) results in other situations or in similar situa-
tions not directly observed. Although never the main goal
of qualitative research, generalization is partly reflected
in criteria discussing the “value and implications of
research.” It is explicitly addressed in the ETQS, is
addressed to a lesser extent in the CASP tool, and is not
addressed in the JBI tool.
Evaluative Validity
With the concept of evaluative validity researchers seek to
establish the degree to which a certain phenomenon under
study is legitimate, justified, or raises questions, and
involves the application of an evaluative framework to the
phenomenon under study (e.g., the student was wrong to
throw an eraser at the teacher). Like generalizability, eval-
uative validity is not as central to qualitative research as
descriptive, interpretive, and theoretical validity. It has
little to do with the methods used in a particular study.
The majority of researchers make no particular claim to
evaluate their phenomenon under study. The concept has
been conceptually linked to the “outcome/evaluation” cri-
teria addressed in the JBI tool and the ETQS.
Other criteria reported in all three of the appraisal instruments included data collection, data analysis, and reporting of the findings. Accuracy of reporting and detailed reporting on the methods used in a study can certainly be of assistance to critical appraisal (Attree & Milton, 2006). However, these criteria add little to the identification of the choices researchers have made in their descriptive and interpretive accounts, nor do they contribute to a potential justification for the rationale of a researcher. These criteria can be used to facilitate an overall judgment of the quality of an article, and are useful in reflecting on the extent to which authors have conducted their research to an acceptable standard. The criterion “appropriateness of the research design” does not particularly add to the discussion on validity, either. It does, however, have an impact on the methodological soundness of a study. Each discipline has a body of practices, procedures, and rules that guide and inform scholarly inquiry.
Table 1. Cross Comparison of Evaluation Criteria

Overview and screening
  JBI tool: The first five JBI criteria complete the stem “There is congruity between …”
  CASP tool: Screening questions: Was there a clear statement of the aims? Is a qualitative methodology appropriate?
  ETQS: Provides a study overview including bibliographic details, purpose, key findings, and summary of the study; includes name of reviewer and review date, and space for comments.

Theoretical framework
  JBI tool: … the stated philosophical perspective and the research methodology.
  CASP tool: (none)
  ETQS: What theoretical framework guides or informs the study? In what ways is the framework reflected in the way the study was done? How do the authors locate the study within the existing knowledge base?

Appropriateness of research design
  JBI tool: … the research methodology and the research question or objectives.
  CASP tool: Was the research design appropriate to address the aims of the research?
  ETQS: (none)

Data collection
  JBI tool: … the research methodology and the methods used to collect data.
  CASP tool: Was the recruitment strategy appropriate to the aims of the research? Were the data collected in a way that addressed the research issue?
  ETQS: What data collection methods are used to obtain and record data? Is the information collected with sufficient detail and depth to provide insight into the meaning and perceptions of informants? Is the process of fieldwork adequately described?

Data analysis
  JBI tool: … the research methodology and the representation and analysis of data.
  CASP tool: Was the data analysis sufficiently rigorous?
  ETQS: How were data analyzed? How accurate is the description?

Findings
  JBI tool: … the research methodology and the interpretation of results.
  CASP tool: Is there a clear statement of findings?
  ETQS: Are the findings interpreted within the context of other studies and theory?

Context
  JBI tool: There is a statement locating the researcher culturally.
  CASP tool: (none)
  ETQS: What role does the researcher adopt within the setting?

Impact of investigator
  JBI tool: The influence of the researcher on the research, and vice versa, is clear.
  CASP tool: Has the relationship between researchers and participants been adequately considered?
  ETQS: Are the researcher’s own position, assumptions, and possible biases outlined? Is there evidence of reflexivity? (Has the researcher reflected on his potential personal influence in the collection and analysis of data?)

Believability
  JBI tool: Participants, and their voices, are heard.
  CASP tool: (none)
  ETQS: Is adequate evidence provided to support the analysis (validity and reliability)?

Ethics
  JBI tool: The research is ethical according to current criteria, or there is evidence of ethical approval by an appropriate body.
  CASP tool: Have ethical issues been taken into consideration?
  ETQS: Were ethics committee approval and informed consent obtained? Have ethical issues been adequately addressed?

Evaluation/outcome
  JBI tool: Conclusions drawn in the research report do appear to flow from the analysis, or interpretation, of the data.
  CASP tool: (none)
  ETQS: Is the conclusion justified given the conduct of the study?

Value and implications of research
  JBI tool: (none)
  CASP tool: How valuable is the research?
  ETQS: To what setting and population are the study findings generalizable? What are the implications for policy and practice?

Note. JBI = Joanna Briggs Institute; CASP = Critical Appraisal Skills Program; ETQS = Evaluation Tool for Qualitative Studies. “(none)” indicates that the instrument has no criterion under that heading.
Table 2. Types of Validity Addressed in the Critical Appraisal Instruments

Descriptive validity
  Description: The degree to which descriptive information such as events, subjects, setting, time, and places is accurately reported.
  Criteria: Impact of investigator (evaluated in JBI, CASP, & ETQS); context (evaluated in JBI & ETQS).

Interpretive validity
  Description: The degree to which participants’ viewpoints, thoughts, intentions, and experiences are accurately understood and reported by the qualitative researcher.
  Criteria: Believability (evaluated in JBI & ETQS).

Theoretical validity
  Description: The degree to which a theory or theoretical explanation informing or developed from a research study fits the data and is, therefore, credible and defensible.
  Criteria: Theoretical framework (evaluated in JBI & ETQS).

Generalizability
  Description: The degree to which findings can be extended to other persons, times, or settings than those directly studied.
  Criteria: Value and implications of research (evaluated in CASP & ETQS).

Evaluative validity
  Description: The degree to which an evaluative framework or critique is applied to the object of study.
  Criteria: Evaluation/outcome (evaluated in JBI & ETQS).

Note. JBI = Joanna Briggs Institute; CASP = Critical Appraisal Skills Program; ETQS = Evaluation Tool for Qualitative Studies.
It is therefore important to include the appropriateness
criterion, at least in the list of screening questions.
Discussion
We evaluated the extent to which three online, publicly available critical appraisal instruments can be used to facilitate the assessment of validity in qualitative research reports. Building on Maxwell’s (1992) framework, we could argue that the discourse should focus on criteria such as believability, impact of the investigator, context, and the relationship between them (see Table 1). This, though, would evaluate researcher bias resulting from selective observation or recording of information and ungrounded interpretation of data related to a nonreflective attitude of the researcher. It might not differ fundamentally from the “risk of bias” (the potential over- or underestimation of an effect) definition used to appraise quantitative research designs (Higgins et al., 2008). Risk of bias is intertwined with the instruments used to retrieve results. Quantitative researchers would typically use calibrated or validated external measurement instruments and statistical programs. In qualitative research, the investigator is the instrument through which data are collected and analyzed (Brody, 1992). Therefore, criteria closely linked to the investigator’s potential influence and interpretation are crucially important to assess validity. Maxwell’s deconstruction of the concept of validity into descriptive, interpretive, theoretical, external, and evaluative validity facilitates the comparison between appraisal instruments.

Winter (2000) criticized Maxwell’s idea of linking validity to certain stages of a research project. Descriptive validity is related to the initial stage of data collection and refers to the registration of facts. It is, however, extremely difficult to eliminate interpretation from this phase. Furthermore, there is a very thin line between concepts such as theoretical validity and generalizability. Theoretical frameworks are meant to guide the data analysis process, but are also used in an attempt to generalize beyond the original research results. Because of this potential overlap between concepts of validity, assigning evaluation techniques to a particular category is somewhat artificial. Several authors have reported on evaluation techniques to deal with the aspect of validity in original research articles.
In evaluating descriptive validity, it is important to establish the accuracy of what researchers report as information retrieved from participants. Such reporting includes descriptions of events, behavior, or characteristics of the participants, setting, time, and place. Methods and investigator triangulation are considered useful techniques (Denzin, 1978; Mays & Pope, 1995). Different methods that produce different data or accounts of the same events raise concerns about the descriptive validity of the accounts (Maxwell, 1992). The accuracy of reporting can be increased through the use of multiple observers recording and describing the participants’ behavior and context, which allows for cross checking of observations (Giacomini & Cook, 2000). Techniques to increase interpretive validity include the display of citations and verbatim interview excerpts laying out the participants’ views, behaviors, perceptions, thoughts, feelings, or experiences. In addition, reviewers should evaluate whether these were correctly interpreted by the researcher.
Member checking, participant feedback, and close collaboration with participants verify the insights of a researcher. However, these techniques do not allow researchers to feed back any theoretical abstractions of what has been stated or observed during the research process. One could ask, for example, for an opinion on whether the categorical classification of participants’ statements is congruent with the meaning they intended to express through a particular quote. Other appropriate strategies include the analysis of data by more than one independent researcher and the calculation of interrater agreements. One of the techniques to promote interpretive validity is self-reflection by the researcher on potential biases, preconceptions, assumptions, and reference frameworks that might affect the research process and conclusions. Creswell and Miller (2000) suggested prolonged engagement in the field, also referred to as persistent observation, to improve theoretical validity. This means that researchers spend enough time studying their subjects and their setting to be able to create a set of patterns or relationships that are stable and contribute to an understanding of why these occur.
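To make the interrater-agreement step concrete, the following sketch computes Cohen’s kappa, one common agreement statistic, for two hypothetical coders; the codes and ratings are invented, and other agreement indices could equally be used.

    # Illustrative sketch of an interrater-agreement calculation: Cohen's
    # kappa for two researchers who independently coded the same statements.
    # The codes and ratings are hypothetical.
    from collections import Counter

    rater_a = ["barrier", "barrier", "facilitator", "barrier", "facilitator"]
    rater_b = ["barrier", "facilitator", "facilitator", "barrier", "facilitator"]

    def cohens_kappa(a, b):
        n = len(a)
        # Observed agreement: proportion of statements coded identically.
        observed = sum(x == y for x, y in zip(a, b)) / n
        # Chance agreement from each rater's marginal code frequencies.
        freq_a, freq_b = Counter(a), Counter(b)
        expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
        return (observed - expected) / (1 - expected)

    print(round(cohens_kappa(rater_a, rater_b), 2))  # 0.62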
Another strategy is to explore different theories to help interpret and explain data (theory triangulation). This could include the search for deviant cases or disconfirming evidence, or the use of multiple working hypotheses for questions that cannot be addressed by one single theory (Miles & Huberman, 1994). By using theory triangulation, researchers can examine how theoretical models complement, supplement, or controvert each other. In more deductive approaches to qualitative research, pattern matching might occur. Researchers would typically predict a series of results that form a pattern and then determine the degree to which the actual results fit the predicted pattern (Burke Johnson, 1998). Reviewers can also look for details of the study participants, demographics, contextual background information, and thick description about both the sending and the receiving context. This approach enables reviewers to make informed decisions about to whom the results might be generalized or to which groups the findings can be transferred. Another strategy researchers might use is replication logic: the degree of confidence we have in a particular finding increases when it is shown to be true for different sets of people, in which case we assume that it applies more broadly (Campbell, 1979; Yin, 1994).
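The pattern-matching logic described above can be sketched in a few lines; the predicted and observed theme orderings below are hypothetical, and the simple position-wise fit score is only one of many ways a reviewer might quantify the degree of fit.

    # Illustrative sketch of pattern matching (Burke Johnson, 1998): compare
    # a theoretically predicted ordering of themes with the ordering observed
    # in the data. Theme names and the fit measure are invented.
    predicted = ["awareness", "appraisal", "action"]  # pattern the theory predicts
    observed = ["awareness", "action", "appraisal"]   # pattern found in the data

    # Degree of fit: share of positions where prediction and observation agree.
    fit = sum(p == o for p, o in zip(predicted, observed)) / len(predicted)
    print(f"pattern fit: {fit:.0%}")  # pattern fit: 33%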
There are no clear techniques that facilitate evaluative validity, mainly because it is an almost unconscious activity within the research project itself, and part of the reflective process of the researcher. Assessing the value of something or judging the objects under study will depend very much on the circumstances. Both the JBI tool and the ETQS include an “evaluation/outcome” criterion. This can be used to evaluate the congruity between conclusions and other parts of the research process rather than the legitimization of the conclusions. Clarifying the link between the conclusion and other stages of a research project might contribute to evaluative validity. We doubt, however, that this very act would capture the full meaning Maxwell (1992) assigned to the concept. Ethics are important to consider in judging the findings and outcomes of research. These techniques are all useful; however, they should not be rigidly applied. They can be of assistance in evaluating research, but do not contribute directly to the rigor of a qualitative research project, nor do they provide us with an accurate picture of whether the choices researchers made were grounded. We would benefit from a reflexive dialogue between researchers and reviewers, as promoted by Stige, Malterud, and Midtgarden (2009), and an extensive knowledge of research paradigms and methodologies, to be able to fully understand these issues. Checklists can nevertheless provide us with an interesting list of criteria to be considered in assessing the level of methodological soundness of a study.
There are some interesting differences between the instruments compared. The JBI tool does not include a criterion that facilitates the assessment of external validity or the relevance of original studies to be included in QES. It is debatable whether or not relevance is an issue that needs to be evaluated in the context of a critical appraisal exercise. Like ethics, the relevance criterion most likely has its roots in the idea that research should address the concerns of practitioners rather than be the product of individual academic interest. However, these criteria are unlikely to have direct implications for the methodological quality of a study. Of the three instruments, the CASP tool seems to be the least sensitive to validity. It does not facilitate the evaluation of interpretive and theoretical validity (see Table 1). We believe an evaluation of interpretive and theoretical validity is crucial for the establishment of a methodologically sound qualitative study.
Most qualitative research is inductive. Researchers typically look at reality and try to develop a theory from the information derived from the field. The philosophical position of researchers toward a research project determines not only their choice of an appropriate method but also the window through which they will be looking at the data. It has a direct impact on the way the findings will be interpreted and presented. Therefore, a research article that does not reveal what view of reality the researchers held can be described as highly mechanistic (Wilson, 2002). Although the CASP tool is a popular appraisal instrument, most likely because it is a user-friendly alternative for novice researchers, it does not score particularly well in evaluating the intrinsic methodological quality of an original study when compared with other instruments. The ETQS provides more detailed instructions on how to interpret criteria than the JBI tool. However, it is the latter, with its focus on congruity, that appears to be the most coherent. It would be interesting in future research projects to assess whether these appraisal instruments are indeed applicable to a broad range of qualitative research designs, and to assess their validity.
Acknowledgments
We thank Catalin Tufanaru for assisting us in labeling the criteria, and the staff members from the Joanna Briggs Institute for accepting the lead author as a visiting research fellow.
Declaration of Conflicting Interests
The authors declared no conflicts of interest with respect to the
authorship and/or publication of this article.
Funding
The authors received no financial support for the research and/
or authorship of this study.
Note
1. The JBI tool was developed through an analysis of the literature and input from a panel of experts from Australian universities. It was extensively piloted and refined before being incorporated into the JBI qualitative assessment and review instrument software developed to assist reviewers in completing systematic reviews of qualitative research (JBI, 2007). The CASP tool was developed by the Public Health Resource Unit of the National Health Service in collaboration with the U.K. Centre for Evidence Based Medicine and the Birmingham critical appraisal skills program. The instrument provides users with an extensive amount of additional information as to how the criteria on rigor and relevance of an original research report should be interpreted (Public Health Resource Unit, 2009). The ETQS was developed by the Health Care Practice Research and Development Unit of the University of Salford, in collaboration with the Nuffield Institute and the University of Leeds. The emphasis lies on the areas of study context and the process of data collection and analysis. The developers of the instrument were particularly concerned with meaning, context, and depth. They provided the researcher with a set of core questions, and then elaborated on what was meant by each (Health Care Practice Research and Development Unit, 2009).
References
Attree, P., & Milton, B. (2006). Critically appraising qualitative research for systematic reviews: Defusing the methodological cluster bomb. Evidence & Policy, 2, 109-126.
Brody, H. (1992). Philosophic approaches. In B. Crabtree & W. Miller (Eds.), Doing qualitative research (pp. 174-185). Newbury Park, CA: Sage.
Burke Johnson, R. (1998). Examining the validity structure of qualitative research. Education, 118, 282-292.
Campbell, D. T. (1979). Degrees of freedom and the case study. In T. D. Cook & C. S. Reichardt (Eds.), Qualitative and quantitative methods in evaluation research (pp. 49-67). Beverly Hills, CA: Sage.
Cohen, D. J., & Crabtree, B. F. (2008). Evaluative criteria for qualitative research in health care: Controversies and recommendations. Annals of Family Medicine, 6, 331-339. doi:10.1370/afm.818
Creswell, J. W., & Miller, D. L. (2000). Determining validity in qualitative inquiry. Theory Into Practice, 39, 124-130.
Denzin, N. K. (1978). The research act: A theoretical orientation to sociological methods (2nd ed.). New York: McGraw-Hill.
Dixon-Woods, M., Booth, A., & Sutton, A. J. (2007). Synthesizing qualitative research: A review of published reports. Qualitative Research, 7, 375-422. doi:10.1177/1468794107078517
Eisner, E. (1991). The enlightened eye: Qualitative inquiry and the enhancement of educational practices. New York: Macmillan.
Giacomini, M. K., & Cook, D. J. (2000). User’s guide to the medical literature XXIII: Qualitative research in health care. A. Are the results of the study valid? Journal of the American Medical Association, 284, 357-362. doi:10.1001/jama.284.3.357
Guba, E., & Lincoln, Y. (1989). Fourth generation evaluation. Newbury Park, CA: Sage.
Hannes, K., & the Cochrane Qualitative Research Methods Group. (2009). Chapter 6: Critical appraisal of qualitative research. Retrieved from http://www.joannabriggs.edu.au/cqrmg/documents/Cochrane_Guidance/Chapter6_Guidance_Critical_Appraisal.pdf
Hannes, K., & Macaitis, K. (2010). Update on a review of published qualitative evidence syntheses: Moving toward more systematic and transparent approaches. Manuscript submitted for publication.
Harden, A. (2008, June-July). Critical appraisal and qualitative research: Exploring sensitivity analyses. Paper presented at the Research Methods Festival of the National Centre for Research Methods, Oxford, United Kingdom.
Health Care Practice Research & Development Unit. (2009). Evaluation tool for qualitative research. Retrieved from http://www.fhsc.salford.ac.uk/hcprdu/qualitative.htm
Higgins, J. P. T., Altman, D. G., the Cochrane Statistical Methods Group, & the Cochrane Bias Methods Group. (2008). Assessing risk of bias in included studies. In J. P. T. Higgins & S. Green (Eds.), Cochrane handbook for systematic reviews of interventions (Version 5.0.1). Retrieved from http://www.cochrane-handbook.org
Joanna Briggs Institute. (2007). SUMARI: The Joanna Briggs Institute system for the unified management, assessment and review of information. Adelaide, Australia: Author. Retrieved from http://www.joannabriggs.edu.au/services/sumari.php
Lincoln, Y., & Guba, E. (1985). Naturalistic inquiry. Beverly Hills, CA: Sage.
Maxwell, J. A. (1992). Understanding and validity in qualitative
research. Harvard Educational Review, 62, 279-300.
Mays, N., & Pope, C. (1995). Rigour in qualitative research.
British Medical Journal, 311, 109-112.
Miles, M. B., & Huberman, A. M. (1994). Qualitative data
analysis: An expanded sourcebook (2nd ed.). Newbury Park,
CA: Sage.
Morse, J. M., Barrett, M., Mayan, M., Olson, K., & Spiers, J. (2002). Verification strategies for establishing reliability and validity in qualitative research. International Journal of Qualitative Methods, 1(2), 1-19. Retrieved from http://www.ualberta.ca/~iiqm/backissues/1_2Final/pdf/morseetal.pdf
Public Health Resource Unit, Critical Appraisal Skills Program. (2009). Critical appraisal tool for qualitative studies. Retrieved from http://www.phru.nhs.uk/Doc_Links/Qualitative%20Appraisal%20Tool.pdf
Sandelowski, M. (1986). The problem of rigor in qualitative
research. Advances in Nursing Science, 8, 27-37.
Seale, C. (1999). Quality in qualitative research. Qualitative
Inquiry, 5, 465-478. doi:10.1177/107780049900500402
Smith, J. K. (1984). The problem of criteria for judging interpretive inquiry. Educational Evaluation and Policy Analysis, 6, 379-391. doi:10.3102/01623737006004379
Stige, B., Malterud, K., & Midtgarden, T. (2009). Toward an agenda for evaluation of qualitative research. Qualitative Health Research, 19, 1504-1516. doi:10.1177/1049732309348501
Whittemore, R., Chase, S. K., & Mandle, C. L. (2001). Validity in qualitative research. Qualitative Health Research, 11, 522-537. doi:10.1177/104973201129119299
Wilson, T. D. (2002, July). Philosophical foundations and research relevance: Issues for information research. Paper presented at the Fourth International Conference on Conceptions of Library and Information Science, Seattle, Washington.
Winter, G. A. (2000, March). A comparative discussion of the notion of “validity” in qualitative and quantitative research. The Qualitative Report, 4(3/4). Retrieved from http://www.nova.edu/ssss/QR/QR4-3/winter.html
Yin, R. K. (1994). Case study research: Design and methods.
Newbury Park, CA: Sage.
Bios
Karin Hannes, PhD, MEd, MSc, is a senior research fellow and teacher of research methodology at the Centre for Methodology of Educational Research in the Catholic University Leuven, and senior researcher at the Belgian Center for Evidence-Based Medicine, Belgian Cochrane Branch, in Leuven, Belgium.

Craig Lockwood, RN, BN, MNSc, is associate director of the research and innovation unit at the Joanna Briggs Institute, University of Adelaide, Adelaide, Australia.

Alan Pearson, PhD, RN, MSc, is a professor of evidence-based health and executive director of the Joanna Briggs Institute, University of Adelaide, Adelaide, Australia.