IS QUALITATIVE RESEARCH
GENERALIZABLE?
Alexandra GHEONDEA-ELADI1
Abstract: Many qualitative researchers face the everlasting question of the generalizability
of their findings, especially when defending their research before quantitative researchers.
Despite this state of affairs, qualitative researchers rarely discuss the generalizability of their
data, arguing that the goal of their endeavour is a deeper understanding of the phenomena
rather than statistical generalization. Quantitative researchers, in turn, usually dismiss the
results of qualitative research for its lack of generalizability. I argue that this state of affairs is a
crude simplification, based either on a misconception about what qualitative data is or on a
misconception about the aspects of qualitative data analysis that lead to generalizability: the
purpose of the research, the sampling method, the data analysis method and the coding strategy.
The paper suggests that a discussion of generalizability should become the standard for reporting
qualitative research whenever the research question is phrased to demand a general answer.
Keywords: qualitative research; generalization; external validity; non-probabilistic sampling; coding; grounded theory.
What is generalization?
The most important standards of research are validity and reliability. Still, the
definitions of validity and reliability are sometimes considered to differ for qualitative
and quantitative research. In quantitative research, reliability is the "consistency of a
measure of a concept" (Bryman 2008: 140), while validity is a measure of "whether an
indicator (or set of indicators) that is devised to gauge a concept really measures that
concept" (Bryman 2008:151). Generalizability is also known as external validity
(Bryman 2008; Chelcea 2001). Also, the distinction between external and internal
reliability and external and internal validity may be adapted to the purposes of
qualitative research. While external reliability refers to the replicability of a qualitative

1 Ph.D. Scientific Researcher III, Institute for Research on Quality of Life, Romanian Academy,
Bucharest, ROMANIA. E-mail: alexandra@iccv.ro
This paper was written and published under the aegis of the Research Institute for Quality of Life,
Romanian Academy, co-funded by the European Union within the Sectoral Operational
Programme for Human Resources Development through the project “Pluri- and interdisciplinarity
in doctoral and post-doctoral programmes”, POSDRU/159/1.5/S/141086.
Journal of Community Positive Practices, XIV(3) 2014, 114-124
ISSN Print: 1582-8344; Electronic: 2247-6571
study, internal reliability refers to inter-rater reliability when multiple coders or observers
are used for qualitative coding or observation. External validity, on the other hand, refers
to the degree to which a study is generalizable to other settings, while internal validity is
concerned with the link between theory and observation or coding. Some researchers use
the term validity with the meaning of internal validity (see for example Hanson 2008: 107),
while others use the term reliability with the meaning of external reliability (see for
example Potter and Levine-Donnerstein 2009: 261).
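Internal reliability of this kind is usually quantified as inter-rater agreement. As a minimal sketch, assuming two coders who each assigned exactly one code per text segment (the codes below are hypothetical), percent agreement and Cohen's kappa can be computed as follows:

```python
def cohens_kappa(codes_a, codes_b):
    """Chance-corrected agreement between two coders who each assigned
    exactly one code per text segment."""
    assert len(codes_a) == len(codes_b)
    n = len(codes_a)
    # Observed agreement: share of segments where the coders match.
    p_observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Expected agreement: chance of matching given each coder's code frequencies.
    labels = set(codes_a) | set(codes_b)
    p_expected = sum(
        (codes_a.count(label) / n) * (codes_b.count(label) / n)
        for label in labels
    )
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical codes assigned by two coders to five interview segments.
coder_1 = ["cope", "cope", "deny", "seek", "deny"]
coder_2 = ["cope", "deny", "deny", "seek", "deny"]
print(cohens_kappa(coder_1, coder_2))  # ≈ 0.6875
```

Values close to 1 indicate strong inter-rater reliability; kappa is preferable to raw percent agreement because it discounts matches that would arise by chance.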
Generalization is not accepted by many researchers as the purpose of qualitative,
interpretative research. For example, by looking at the qualitative research literature
Onwuegbuzie and Leech (2009) argue that in qualitative research statistical
generalization is usually replaced by "analytic generalizations" – generalizability to
theory instead of population based on how concepts relate with each other – and "case-
to-case transfer" (p. 883) – generalizing from one case to other similar cases from a
chosen set of points of view. This conclusion is shaped by Firestone's (1993, cited by
Onwuegbuzie and Leech 2009 and by Polit and Beck 2010) models of generalization,
namely statistical, analytic and transferability. Analytic generalizations are thus meant to
generalize "from particularities to broader constructs or theories" (Polit and Beck 2010:
e4). Hanson (2008) on the other hand points out that the notion of universe that is used
in generalizations based on quantitative research has the same role in generalizations
based on qualitative research as the notion of context. Other researchers argue that the
standard of qualitative research should be trustworthiness instead of internal and external
reliability and validity, assessed in terms of credibility, transferability, dependability and
confirmability (Guba and Lincoln 1994, cited by Bryman 2008).
Nevertheless, generalization is defined in many ways. For example, Polit and Beck
(2010: e2) think of generalization as "an act of reasoning that involves drawing broad
conclusions from particular instances – that is, making an inference about the
unobserved based on the observed. In nursing and other applied health research,
generalizations are critical to the interest of applying the findings to people, situations,
and times other than those in a study". On the other hand, Payne and Williams (2005:
296) think that "[t]o generalize is to claim that what is the case in one place or time, will
be so elsewhere or in another time". They see generalization as an inductive reasoning
process. Bryman (2008: 187) argues that generalization is not only concerned about
drawing conclusions from a sample to a population, but also from one period of time
to another.
Moreover, Polit and Beck (2010) argue that what is considered statistical generalizability
is either a myth or an ill-applied theory. They point out that most random samples are
drawn from a conveniently accessible population, such as students at a university, while
the population to which the generalization is to be made is usually poorly defined in most
quantitative research (Polit and Beck 2010: e4). This means that the starting point of
constructing a sample is usually the sample itself and not the characteristics of the
population (Polit and Beck 2010). Nevertheless, generalizability is above all a standard,
and a high one, whose appropriateness cannot be judged by pointing out that most
researchers fail to follow it.
In this paper I will argue that generalizability should be considered a standard of
evaluation if the research question reveals the need to generalize, and if the sampling
method and the data analysis method allow generalization. In the first part of this paper
the practice of generalization in qualitative research will be briefly reviewed. Then
sampling requirements for generalizability will be traced through the literature, so that
in the third section the connection between data analysis methods and generalization
may be analyzed. The final part of the article will draw some conclusions about the
conditions in which qualitative research is generalizable.
The practice of generalization in qualitative research
The main argument against using statistical generalization for qualitative studies is that
their purpose is not to generalize. In support of this claim, researchers have studied the
types of generalization that are usually employed by qualitative researchers. While some
researchers from interpretive sociology reject the standard of generalizability altogether,
others use "moderatum generalization". Moderatum generalization means that results of
qualitative inquiry "are not attempts to produce sweeping sociological statements that
hold good over long periods of time, or across ranges of cultures", but conclusions that
are "open to change" (Payne and Williams 2005: 297). The conclusions of such
moderatum generalizations may be further tested statistically, but this is considered to be
a different topic. On the other hand, qualitative research should ensure "transferability"
by providing as much contextual information as possible in order to aid future
researchers to identify the relevant characteristics that should be transferred to a
different study (Lincoln and Guba, 1985, cited by Bryman 2008).
A study of research published in volume 37 of Sociology (the journal of the British
Sociological Association) in 2003 showed that scholars do not discuss generalizability,
but make generalizations in different ways (Payne and Williams 2005). Of the 17 articles
which employed qualitative methods, all made generalizations, but only 4 tried to back
up their claims openly. These authors backed up their general claims either by "later
feedback from a conference with a wider range of informants", by "call[ing] for more
studies", by "claim[ing] moderatum status for their position" or by "deny[ing] to make
them [generalizations]" (Payne and Williams 2005: 300).
Similarly, generalization practices have been studied in articles published between
1990 and 2006 in the journal The Qualitative Report. Of 125 empirical studies, 45 made
generalizations, but only 8 justified them (Onwuegbuzie and Leech 2009). Although no
such studies are available for Romanian sociology journals, it is easy to find qualitative
research that draws general conclusions without discussing how this is achieved with
respect to sampling or coding method (see for example Ecirli 2012; Gheondea-Eladi
2013), or research that offers neither general conclusions nor a discussion of the
generalizability or transferability of the findings (see for example Tufa 2011). It is also
easy to find qualitative research that does not require such discussions, since the
population discussed is very small (see for example Alexandrescu 2010; Mihalache 2010).
At the other end, researchers who employ quasi-experimental studies do not differ so
much in their practice from their qualitative counterparts. Shadish, Cook, and Campbell
(2002) looked at how researchers make generalizations from quasi-experimental studies
that do not abide by sampling theory, for various reasons: lack of resources, logistics,
time constraints, ethical or political constraints, or simply because "random sampling
makes no clear contribution to construct validity" (p. 348). Their study provides some
examples for Polit and Beck's (2010) claim that quantitative studies rarely abide by the
random sampling requirements that support statistical generalization. Shadish et al.
(2002) argue that researchers use five principles in order to ensure generalizability:
- Surface similarity is based on "surface similarities between the target of generalization
and the instance(s) representing it" (p. 378), for example the identification of the
characteristics of the target population to which a generalization is sought. In this way
generalization can be made from the treatments, outcomes or persons for which the
study has been undertaken to other treatments, outcomes or persons which share these
same surface similarities.
- Ruling out irrelevancies is based on the "identif[ication of] those attributes of persons,
settings, treatments, and outcome measures that are presumed to be irrelevant because
they would not change a generalization and then to make those irrelevancies
heterogeneous (PSI-Het [purposive sampling for heterogeneous instances]) in order to
test that presumption" (p. 380).
- Making discriminations means distinguishing between "kinds of persons, measures,
treatments, or outcomes in which a generalization does or does not hold. One example
is to introduce a treatment variation that clarifies the construct validity" (p. 382), for
example discriminating between two levels of a construct which lead to substantial
changes in the direction and size of the causal effect.
- Interpolation and extrapolation work by "interpolating to unsampled values within the
range of the sampled persons, settings, treatments, and outcomes and, much more
difficult, by extrapolating beyond the sampled range" (p. 354). For example, using more
than one level of a treatment or of a response, instead of dichotomous variables, makes
it possible to draw inferences about levels of the treatment or response that were not
captured in the study. Interpolation means making inferences from a range of treatment
levels to values within that range that were not captured in the study; extrapolation
means making inferences to values outside that range.
- Causal explanation is employed by "developing and testing explanatory theories about
the target of generalization" (p. 354), for example by testing whether the same stimulus
has different effects in different settings. In this case, to generalize would be to say that
all the different effects have the same cause.
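The second principle, ruling out irrelevancies, can be illustrated with a small sketch. Assuming hypothetical case data in which the setting (urban/rural) is presumed irrelevant, one can check whether the direction of the treated-versus-untreated difference stays the same across every level of that attribute:

```python
from collections import defaultdict
from statistics import mean

def effect_direction_by_group(cases, group_key):
    """For each level of a presumed-irrelevant attribute, return the sign
    of the treated-versus-untreated difference in mean outcome."""
    groups = defaultdict(lambda: {"treated": [], "untreated": []})
    for c in cases:
        arm = "treated" if c["treated"] else "untreated"
        groups[c[group_key]][arm].append(c["outcome"])
    signs = {}
    for level, arms in groups.items():
        diff = mean(arms["treated"]) - mean(arms["untreated"])
        signs[level] = "positive" if diff > 0 else "negative" if diff < 0 else "none"
    return signs

# Hypothetical cases; "setting" is the attribute presumed irrelevant.
cases = [
    {"setting": "urban", "treated": True,  "outcome": 7},
    {"setting": "urban", "treated": False, "outcome": 4},
    {"setting": "rural", "treated": True,  "outcome": 6},
    {"setting": "rural", "treated": False, "outcome": 5},
]
print(effect_direction_by_group(cases, "setting"))
# → {'urban': 'positive', 'rural': 'positive'}
```

If the direction agrees across all levels, the presumption of irrelevance survives the heterogeneous sample; a sign reversal in some level would instead call for making discriminations.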
As can be seen, these generalization principles are very similar to those employed
earlier for qualitative studies, except for causal explanation, which cannot be
assessed through qualitative research. Nevertheless, generalizability also depends
on the sampling method and the data analysis method. The following sections
address the link between these two elements of research and generalization.
Sampling methods for generalizability
In a very simple and generally accessible account, qualitative studies are studies that
look at words as data instead of considering numbers as data (Bryman 2008). In
qualitative studies research and theory are linked in an inductive manner as opposed to
the deductive one from quantitative studies (Bryman 2008). Also, the epistemological
view in qualitative research is interpretivism and the ontological one is constructivism
(Bryman 2008). Quantitative studies depart from a theory that potentially answers the
research question of the study, formulate hypotheses, choose a research design, select
sampling units, collect data and analyze the data in order to formulate some conclusions
that will answer the research question. Qualitative studies also depart from a research
question, but collect data that will later be interpreted in order to formulate a theory
that will lead to an answer to the research question. Nevertheless, this overly simplified
view does not reflect the complexity of either qualitative or quantitative research. In
practice, quantitative studies may depart from the data to look for a "problem" that may
be explained by several theories which will be tested against further data or start from a
theory that explains the research question and test the particular theory against the data.
In a similar manner, qualitative studies may interpret data in light of a chosen theory
(top-down analysis) or build a new theory from the data (bottom-up analysis).
In light of these general differences, sampling for qualitative and quantitative analysis is
also performed differently. Usually, sampling is performed in order to "estimate the
true values, or parameters, of statistics in a population, and to do so with a calculable
probability of error" (Russell 1988: 79). While quantitative studies employ probabilistic
samples, qualitative research is usually concerned with non-probabilistic sampling:
quota sampling, haphazard (convenience) sampling, snowball sampling, purposive
sampling and theoretical sampling (Russell 1988). While quota sampling requires
previous knowledge about the population to which generalization is to be made, the
other types of non-probabilistic sampling require little or no previous knowledge
about it. Convenience and snowball sampling are mostly employed in qualitative
research for pilot tests or for populations that are very difficult to reach, such as drug
users or other vulnerable groups. This is why only the last two non-probabilistic
methods are of interest for studying the generalization potential of qualitative research.
Purposive Sampling
Purposive sampling was originally a probabilistic sampling technique which described
"a random selection of sampling units within the segment of the population with the
most information on the characteristic of interest" (Guarte and Barrios 2006). There are
two types of purposive sampling (p.s.): p.s. for typical instances and for heterogeneous
instances (Shadish et al. 2002). P.s. for typical instances is based on defining and
randomly selecting typical cases and their characteristics. Generalization in this case is
possible only for units that share the selected characteristics. P.s. for heterogeneous
instances is based on defining typical cases and randomly selecting units in order to
obtain the widest variation possible for the sample. The logic of this type of sampling is
that if relationships are validated despite the wide sample variation, then these
relationships will be very strong. In general this type of sampling differs from sampling
for quantitative studies in that it seeks to replicate the mode of the target population
within a sample of the widest possible variation, instead of replicating the mean of the
population in the sample. In qualitative research this method is usually performed
without random selection, drawing instead on the segment of the population with the
most information on the characteristic of interest.
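The two variants can be sketched as follows, with a hypothetical sampling frame of patient records; the attribute names and the typical-case rule are illustrative assumptions, not part of any cited procedure:

```python
import random

def purposive_typical(frame, is_typical, k, seed=0):
    """P.s. for typical instances: randomly select k units from the
    segment of the frame matching the typical-case definition."""
    segment = [u for u in frame if is_typical(u)]
    return random.Random(seed).sample(segment, k)

def purposive_heterogeneous(frame, variation_key):
    """P.s. for heterogeneous instances: keep one unit for each level of
    the variation attribute, maximising the sample's variation."""
    by_level = {}
    for u in frame:
        by_level.setdefault(u[variation_key], u)  # first unit per level
    return list(by_level.values())

# Hypothetical sampling frame of patient records.
frame = [
    {"id": 1, "stage": "early"},
    {"id": 2, "stage": "early"},
    {"id": 3, "stage": "late"},
]
```

Generalizations from the typical sample then extend only to units sharing the typical characteristics, while relationships that hold across the heterogeneous sample are, by the logic above, the strong ones.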
Theoretical Sampling
Theoretical sampling is a recursive type of purposive sampling that stops when theoretical
saturation is obtained (Bryman 2008; Strauss and Corbin 1998). This means that new data
are no longer obtained, that each category for which a sampling session has been organized
is developed in a way that is satisfactory and that the relationships between categories have
been shown to be stable and valid (Bryman 2008; Strauss and Corbin 1998). Data is
analyzed after each sampling session and new sampling units are defined according to the
theoretical needs of the developing theory. Generalization from theoretical samples thus
infers from some particular descriptions to a general theory. Particular descriptions are
based on the identification of typical characteristics and irrelevancies, just like in purposive
sampling. The transfer from particularity to theory is performed by means of abstraction or
conceptualization. This is the point at which data analysis becomes an issue. But data
analysis cannot be performed unless sampling units are defined properly.
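The stopping rule of theoretical sampling can be sketched as an iterative loop. Here "codes" are reduced to words in hypothetical interview fragments, and a sampling session that contributes no new code is taken as a crude proxy for theoretical saturation:

```python
def sample_until_saturation(batches, extract_codes):
    """Add one sampling session (batch) at a time; stop when a session
    contributes no new codes -- a crude proxy for theoretical saturation."""
    seen, sessions = set(), 0
    for batch in batches:
        new = set()
        for document in batch:
            new |= extract_codes(document) - seen
        sessions += 1
        if not new:  # saturation reached: nothing new emerged this session
            break
        seen |= new
    return seen, sessions

# Hypothetical interview fragments; each word stands in for a code.
batches = [["treatment fear"], ["fear family"], ["family"]]
codes, sessions = sample_until_saturation(batches, lambda d: set(d.split()))
# → sessions == 3: the third session added nothing new, so sampling stopped
```

In real theoretical sampling, of course, each new session is purposively designed around the gaps in the developing theory rather than drawn from a fixed queue of batches.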
Sampling Units
Defining sampling units is the main starting point in any sampling endeavor. This is,
however, not a linear process but an iterative one, moving between the universe, context
or theory to which the generalization is made and the potential sampling units. It is
important to decide what kind of outcomes are desirable at the end of the research. To
illustrate this, I will describe two sampling alternatives for a study I conducted.
This research was aimed at studying the decision-making models employed by patients
with chronic Hepatitis C Virus (HCV) infection. The main research question of the study
was: how do patients with chronic HCV infection (CHCVI) decide between treatments for their illness?
Two logical structures of the methodology for such a study could be envisaged:
1. If a sample is extracted from the population of patients diagnosed with CHCVI,
then the characteristics of the structure of the decision-making process employed
by these patients should be generalizable to the population of all HCV infected
patients.
2. If a sample is extracted from the population of decision-making situations of patients
diagnosed with CHCVI, then the characteristics of the structure of the decision-
making process employed in such situations should be generalizable to a general
theory of decision-making for patients diagnosed with CHCVI.
In other words, one may use as sampling units people diagnosed with CHCVI or the
contexts in which these patients decide. Also, it is possible to generalize to the
population of people diagnosed with CHCVI or to a theory of decision-making
applicable to this type of patients. For the first type of sampling units, purposive
sampling is required because patterns of decision-making will be followed in patients'
accounts and inferences will be of the type "most patients decide in this way". If
enough data is sampled, inferences can then be refined to "most patients with these
characteristics decide in this way". Purposive sampling in this case means that patients
should be randomly selected from the population of patients with CHCVI. For the
second type of sampling units – decision-making situations or contexts – a theoretical
sampling is required because patterns of decision-making will be pursued within
particular contexts and inferences will be of the type "in most situations, patients decide
in this way". If enough data is sampled, inferences can also be refined to "patients in
this type of situation decided in this way". Thus, knowing that both context and patient
characteristics influence the decision-making structure of patients with CHCVI, it is
possible to sample one or the other, since neither can be held constant. If contexts are
sampled, the underlying assumption is that patients employ the same decision-making
structures irrespective of their internal characteristics (like education, intelligence, etc.).
If patients are sampled, the underlying assumption is that patients employ the same
decision-making structures irrespective of the context they are in (family support,
medical support received, stage of illness at diagnosis, etc.). Clearly, both assumptions
lead to a simplified representation of reality.
As mentioned before, the purposive sampling method requires previous knowledge
about the population, while the theoretical sampling procedure does not. In our case,
the purposive sampling method requires knowledge about the cases in which the
decision-making structures differ, like: (1) the characteristics of the decision-making
situation which cause patients to adopt a decision-making structure or another; (2)
characteristics of the diagnosis which cause patients to adopt a decision-making
structure or another. The theoretical sampling procedure, by contrast, requires an
iterative data collection strategy, with data-driven re-sampling and further collection;
it is based on purposive sampling guided by the various concepts of the developing theory.
Qualitative data analysis for obtaining general results
It is often assumed that in qualitative research coding is the data analysis, but coding is
only an intermediary step that facilitates analysis (Saldana 2009). Still, coding takes up
most of the time spent analysing qualitative data, and it may also lead to differences in
the subsequent analysis, since different coding schemes will lead to different results.
This is why qualitative researchers have given it great priority when discussing
qualitative data analysis. Moreover, the validity and reliability of a qualitative study
depend on the validity and reliability of the coding scheme (Potter and
Levine-Donnerstein 2009).
Coding can be performed in two separate ways: top-down and bottom-up. In top-down
data analysis, the researcher departs from theory and uses predefined coding schemes
to pursue this theory within the data. This type of coding scheme should be followed
by analysis which may use either quantitative measures (e.g. frequency tables) or
qualitative measures for the main concepts (e.g. a synthesis of contextual elements that
provide answers to the research question or differences in the meanings of the main
concepts of the research question, etc.). In bottom-up data analysis, the researcher
departs from the data and uses abstraction from particularities to pursue the theory that
would emerge from the data. After the coding scheme is built, hypotheses that provide
an explanation for the research question are tested against further sampled data.
Top-down data analysis
Top-down data analysis departs from a theoretically driven coding scheme in which the
concepts and categories of the coding scheme are pre-established with respect to the
driving theory. Extensive rules for applying the codes need to be designed in order to
ensure the consistency of their application. Potter and Levine-Donnerstein (2009) point
out that reliability and validity are constructed differently depending on the "locus of
meaning" (p. 261). Moreover, they argue that the "the locus of meaning" leads to
"manifest" content, "latent" content and "projective" content, each one of them having
different relationships to theory and different ways to construct reliability and validity.
"Projective" content leads to particularly difficult tasks for coders since codes are
attributed by "constructing interpretations" (p. 261), while for "latent" content the task
requires the recognition of patterns. The easiest task for coders is given for coding
manifest content, when the accurate recording of content is the only task required
(Potter and Levine-Donnerstein, 2009).
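A minimal sketch of such a top-down scheme follows, with hypothetical codes whose application rules are reduced to keyword triggers on manifest content (real coding rules would be far richer), together with a frequency table of the resulting codes:

```python
from collections import Counter

# Hypothetical theory-driven coding scheme: each code comes with explicit
# application rules, here simplified to keyword triggers on manifest content.
SCHEME = {
    "side_effects": ("nausea", "fatigue"),
    "treatment_cost": ("cost", "afford"),
}

def code_segment(segment):
    """Return every code whose rule matches a text segment."""
    text = segment.lower()
    return [code for code, keywords in SCHEME.items()
            if any(k in text for k in keywords)]

def frequency_table(segments):
    """One simple quantitative analysis of the codes: frequency counts."""
    counts = Counter()
    for s in segments:
        counts.update(code_segment(s))
    return counts

segments = [
    "The nausea was constant.",
    "I could not afford the second round of treatment.",
    "Fatigue and cost worried me equally.",
]
print(frequency_table(segments))  # side_effects: 2, treatment_cost: 2
```

The same codes could instead feed a qualitative analysis, for example by collecting the contexts in which the two codes co-occur, which is the distinction drawn later between coding and data analysis.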
The internal reliability and validity of a top-down coding scheme are given by the
consistency with which the coding scheme is applied (Potter and Levine-Donnerstein
2009) and by prior testing of the coding scheme against data. External validity, on the
other hand, depends on the degree to which the coding scheme reflects the theoretical
concepts of the driving theory, such that they will be transferable to other settings. For
example, if a coding scheme is highly particular to the setting it is applied to and has
few links to the theory, then it may be difficult to find common ground for transferring
it to other settings. On the other hand, if the coding scheme is too abstract and has little
connection with the data, while being strongly theoretical, it will be very difficult to
explain why a certain fragment of text should be coded one way and not another. Some
trade-off, or middle way, should be achieved.
Bottom-up data analysis
Bottom-up data analysis is usually performed as part of grounded theory. Grounded
theory aims to construct a formal theory that answers the research question but
emerges directly from the data, not the other way around. Bryman (2008) gives a
very intuitive scheme for the work-flow of grounded theory (Figure 1).
Grounded theory is based on open coding, which is "an analytical process in which
concepts are identified and their properties and dimensions are discovered within data"
(Strauss and Corbin 1998). Open coding is also an iterative process in which data are
transformed into either in vivo codes or concepts, which are then classified, thus forming
classifications, and then grouped into categories. Passing from data to concepts or
from in vivo codes to concepts requires a process of abstraction. In open coding,
concepts and classifications are then interpreted by means of memos, to form categories
(Strauss 1987). The links between different concepts, classifications and categories are
used to form the substantive theory. Strauss and Corbin (1998) differentiate between (1)
in vivo coding, which means labeling a part of the text using the words of the respondent;
(2) conceptualizing, which means labeling a phenomenon by means of abstraction; (3)
classification, which means the identification of types of a certain concept; and (4)
categorization, which means the interpretation and/or clustering of concepts into
categories. Strauss and Corbin (1998) point out that it is always possible to classify the
same content in different ways, just as a pen, a paper knife and a paperweight can be
either tools for writing or weapons.
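The ladder from in vivo codes to categories can be sketched as a pair of mappings; the utterances, concepts and categories below are hypothetical illustrations loosely echoing the HCV decision-making example, not codes from any actual study:

```python
# Open-coding ladder: in vivo codes (respondents' own words) are abstracted
# into concepts, and concepts are interpreted into categories.
in_vivo_to_concept = {
    "I just signed what the doctor gave me": "deference to authority",
    "I read everything I could find": "information seeking",
}
concept_to_category = {
    "deference to authority": "delegated decision-making",
    "information seeking": "autonomous decision-making",
}

def categorize(utterance):
    """Climb from an in vivo code to its concept, then to its category."""
    concept = in_vivo_to_concept[utterance]
    return concept_to_category[concept]

print(categorize("I read everything I could find"))
# → autonomous decision-making
```

As Strauss and Corbin's pen-and-paperweight remark implies, the same utterances could be mapped into entirely different concepts and categories; the mappings themselves are the analyst's interpretive work, recorded and justified in memos.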
Figure 1. The work-flow in grounded theory (Source: Bryman 2008)
After a substantive theory has been built, the research is replicated in other settings
so that several other substantive theories are produced. After a satisfactory number of
replications have been performed, a formal theory may emerge from all the substantive
ones. In terms of the validity and reliability of this theory, it is now easy to see that the
consistency of the concepts and categories of each substantive theory should be preserved
in order to ensure the reliability of the formal theory. Also, the accuracy of the coding
should be controlled to preserve the internal validity of the theory. Replicability in
different settings, on the other hand, should ensure the external validity of the final
formal theory.
Coding and data analysis
As mentioned before, coding and data analysis are not the same procedure (Basit 2003;
Saldana 2009). The same coding may be analyzed quantitatively, for example by counting
frequencies, or qualitatively, by uncovering underlying meanings or links between
concepts. Neither open coding nor top-down coding gives the research a qualitative or
quantitative orientation; it is what is done with the resulting codes that provides such an
orientation. Clearly, the choice of data analysis depends on the research question.
In which circumstances is qualitative research generalizable?
In this paper generalization has been discussed with respect to qualitative methodology.
After discussing some differences in defining generalizability for qualitative and
quantitative research, the results of several studies on the practice of generalization
were presented. The link between generalizability and the sampling methodologies
characteristic of qualitative studies was then examined, and an example of choosing
among alternative sampling units and sampling methods for one study was discussed.
Furthermore, top-down and bottom-up coding techniques were presented with respect
to their subsequent data analysis methods and their contribution to external validity. In
this way, this paper argues that some qualitative research can be led by the standard of
generalizability.
Since the main parts of a study that provide external validity are the sampling method,
the coding strategy and the data analysis method, qualitative research is generalizable
when appropriate sampling, coding and data analysis methods are employed. Moreover,
if the sampling method is either purposive or theoretical, generalizations can be made
either to the typical population represented by the sample or to a theory. Further
attention should be given to identifying the characteristics of the typical population and
those characteristics deemed irrelevant, in the case of purposive sampling, or to
providing enough data to allow replication and transfer of the research to different
settings, in the case of theoretical sampling.
The implications of this study for generalization in qualitative research primarily
concern the general practice of reporting qualitative research results. Since qualitative
research can be generalizable, an open discussion of generalizability should become the
standard in reporting qualitative analysis whenever the research question is phrased in
a way that demands a general answer. Such a discussion should address the relationship
between the sampling method employed and external validity, as well as between the
coding method and external validity. Any qualitative data report should also discuss the
potential for transferability and replicability of the research.
Bibliography
Alexandrescu, F. (2010). Social Economy In Giurgiu County. Journal of Community Positive Practices X(3-4), 49-63.
Basit, T. (2003). Manual or Electronic? The Role of Coding in Qualitative Data Analysis. Educational Research
45(2), 143–54.
Bryman, A. (2008). Social Research Methods. 2nd ed. New York: Oxford University Press.
Chelcea, S. (2001). Metodologia Cercetarii Sociologice. Bucureşti: Editura Economica.
Ecirli, A. (2012). Parenting And Social Roles In Turkish Traditional Families: Issues And Choices In
Parenting For Turkish Expatriate Families Living In Bucharest. Journal of Community Positive Practices
XII(1), 36-50.
Gheondea-Eladi, A. (2013). Teoria Raționalității în Sociologie: Un Argument Metodologic. Revista Romana de
Sociologie (1-2), 337–347.
Guarte, J. M., & Barrios E. B. (2006). Estimation Under Purposive Sampling. Communication in Statistics –
Simulation and Computation 35(2), 277–84.
Guba, E.G., & Lincoln, Y.S. (1994). Competing Paradigms in Qualitative Research. In Lincoln, Y.S. (ed.), Organization Theory and Inquiry: The Paradigm Revolution. Beverly Hills, CA: Sage.
Hanson, B. (2008). Wither Qualitative/Quantitative? Grounds for Methodological Convergence. Quality and
Quantity 42, 97–111.
Lincoln, Y.S. & Guba, E. (1985). Naturalistic Inquiry. Beverly Hills, CA: Sage.
Mihalache, F. (2010). Social Economy In Olt County. Journal of Community Positive Practices, X(3-4), 25-48.
Onwuegbuzie, A. J. & Leech, N.L. (2009). Generalization Practices in Qualitative Research: A Mixed
Methods Case Study. Quality and Quantity 44(5), 881–92.
Payne, G. & Williams M. (2005). Generalization in Qualitative Research. Sociology 39(2):295–314.
Polit, D. F. & Beck, T.C. (2010). Generalization in Quantitative and Qualitative Research: Myths and
Strategies. International Journal of Nursing Studies 47(11), 1451–58.
Potter, W. J. & Levine-Donnerstein, D. (2009). Rethinking Validity and Reliability in Content Analysis. Journal of Applied Communication Research 27(3), 258–84.
Russell, H. B. (1988). Research Methods in Cultural Anthropology. Newbury Park, CA: SAGE Publications, Inc.
Saldana, J. (2009). The Coding Manual for Qualitative Researchers. London: Sage.
Shadish, W. R., Cook, T.D. & Campbell, D.T. (2002). Generalized Causal Inference: A Grounded Theory. In Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Belmont, CA: Wadsworth, Cengage Learning, 341–73.
Strauss, A. & Corbin, J. (1998). Open Coding. In Basics of Qualitative Research. London: SAGE, 101–117.
Strauss, A. (1987). Introduction. In Qualitative Analysis for Social Scientists. Cambridge: Cambridge University Press, 1–39.
Strauss, A. & Corbin, J. (1998). Basics of Qualitative Research Techniques for Developing Grounded Theory. 2nd ed.
SAGE Publications, Inc.
Tufa, L. (2011). Discrimination And Discourse: An Expert Interviewing Approach. Journal of Community
Positive Practices 2011, XI(1), 5–23.
Article
The central thesis in this essay is that validity and reliability should be conceptualized differently across the various forms of content and the various uses of theory. This is especially true with applied communication research where a theory is not always available to guide the design. A distinction needs to made between manifest and latent (pattern and projective) content. Also, we argue that content analyses need not be limited to theory‐based coding schemes and standards set by experts. When researchers are clear about what kind of content they want to analyze and the role of theory in their studies, they are in a better position to select the most appropriate strategies for demonstrating validity and reliability.