Tattva-Journal of Philosophy
2020, Vol. 12, No. 1, 1-18
ISSN 0975-332X | https://doi.org/10.12726/tjp.23.1
An Analysis of the Falsification Criterion of
Karl Popper: A Critical Review
Suddhachit Mitra*
* Institute of Rural Management, Anand (IRMA), Gujarat, India; suddhachit.mgmt@gmail.com
Abstract
Karl Popper identified ‘falsifiability’ as the criterion in
demarcating science from non-science. The method of
induction, which uses the (debated) principle of
uniformity of nature, was rejected by Popper. He instead
suggested that a scientific theory cannot be ‘verifiable’ but
only ‘falsifiable’; one counter-example to the claims made
by the theory would falsify it. The paper conducts a
survey of the extant literature to understand the concept,
the methodology as suggested by Popper to
operationalize the concept, and possible limitations, both
conceptual and methodological. The extant literature
points out inherent ambiguities in the Popperian concept
of falsifiability. One recurring theme is that Popper, the
deductivist, relies on the much-critiqued inductive method
within his own methodological suite.
Keywords: Karl Popper, Demarcation Problem, Falsifiability,
Problem of Induction
1. Introduction
Science can be defined as a systematic endeavor to organize
knowledge as a set of falsifiable or testable explanations and
predictions about the universe. One keyword here is testability or
falsifiability, the bedrock on which science stands. This is probably
the most important element that distinguishes scientific inquiry
from other forms of inquiry, such as spiritual inquiry.
Karl Popper (1902-1994), one of the greatest philosophers of science,
is credited with enunciating falsification as the criterion demarcating
science from non-science (Thornton, 2019). He attended the
University of Vienna, where he was exposed to the psychoanalytic
theories propounded by Freud and Adler as well as to Marxian
theory. Listening to a lecture by Einstein in Vienna on the theory of
relativity, he was impressed by the 'critical spirit' of Einstein's
theory; its complete absence from the Marxian and Freudian
theories, which made those theories impervious to disconfirmation,
was of crucial significance according to Popper.
Popper surmised that a key difference between the two theories
(Freud's psychoanalytic theory and Einstein's theory of relativity)
was the intrinsic 'risk' in Einstein's theory, which could lead to its
potential falsification. In contrast, the psychoanalytic theory was
not falsifiable even in principle. The component of risk in the
Einsteinian theory emanated from the fact that it predicted
consequences that were highly improbable or impossible under the
Newtonian paradigm (such as light bending towards massive
objects, confirmed by Eddington in 1919); had these predictions
been shown to be false, they would have falsified the theory.
Popper was critical of the Marxian theory as well, although he
admitted that it was initially propounded as a truly predictive
theory; when facts showed that it was inadequate, it was
supplemented with ad-hoc hypotheses to reflect these facts. Thus
Marxism, once a scientific theory, was reduced to a "pseudo-scientific
dogma" (Thornton, 2019). Hence Popper concluded that these
"theories" (psychoanalytic theory and reworked Marxism) were
similar to primitive myths rather than to modern science (Mitra,
2016).
These experiences led Popper to use falsifiability as the benchmark
for demarcating (distinguishing) science from non-science. A theory
is deemed scientific if it is potentially incompatible with at least one
possible empirical observation, whereas a theory that is compatible
with all possible empirical observations, either because it has been
modified ex post to accommodate those observations (such as
revised Marxism) or because it has been developed so as to be
compatible with all possible observations (such as psychoanalytic
theories), is unscientific. A theory that is unscientific because it is
unfalsifiable might, however, become scientific with the
development of technology and/or with further refinement of the
theory.
Popper authored three books between 1935 and 1957. The first,
Logik der Forschung (1935), was later translated into English as The
Logic of Scientific Discovery (Popper, 2002) [hereinafter L. Sc. D.].
This book provides an overview of his notions of science and its
philosophy. His other books include The Poverty of Historicism
(1957), which critiques the idea of historical laws, and The Open
Society and Its Enemies (1945), a treatise on the philosophy of
society, history and politics.
This article intends to introduce Popper's idea of falsification in
the context of the philosophy of science, review the literature that
critiques Popper's conceptualization and operationalization of
falsification, and then discuss and comment on the findings. The
article collates research that has critiqued his ideas, discusses it,
and comments on the possible limitations and merits of Popper's
notion of falsificationism in the light of the existing critique.
The rest of this paper is structured as follows: this introduction is
followed, in section 2, by a brief exposition of Popper's idea of the
demarcation between science and non-science. Section 3 offers an
analytic reading of Popper's criterion of falsifiability as treated in
the extant literature; standard literature from a wide array of
sources has been used. Results and a discussion of the findings
constitute section 4. A conclusion follows in section 5.
2. Demarcation and Falsifiability
According to Popper, the principal issue in philosophy of sciences
is that of demarcation or distinguishing science from non-science,
such as metaphysics or Freudian psychoanalysis. While accepting
Hume’s critique of induction as valid, he opines that induction
should not generally be used by a scientist. He contends that all
observations are “selective and theory-laden”. Or, in other words,
there can be no observation without theory. Thus, he challenges the
hitherto dominant viewpoint that the inductive method demarcates
science from non-science.
Popper thus suggests falsification as a valid method for scientific
investigation after rejecting induction as a methodology. According
to Popper a theory can be corroborated as scientific only if it
endures truly ‘risky’ forecasts which have the potential to turn out
false. A test of a scientific theory is an attempt to falsify it, with
only a single counter-instance rendering the whole theory untrue.
Popper’s idea of demarcation is rooted in the fact that there exists a
logical asymmetry between verification and falsification: it is
impossible to conclusively verify a universal proposition by
induction, whereas one counter-example proves the universal law
to be false.
A scientific theory, Popper thus says, is prohibitive in that it
prohibits certain events. Hence, whereas testing and falsification of
such a theory are possible, logical verification is not. A theory
should therefore not be assumed to be verified even after years of
very rigorous testing. The most that can be said is that it has been
highly corroborated and is a good candidate to be rated the best
available theory until it is falsified.
However, according to Popper there is a distinction between the
logic of falsifiability and the relevant methodology. For example, if
a ferrous metal is shown not to be influenced by magnetic fields,
then the universal statement that all ferrous metals are affected by
magnetic fields is logically refuted. Thus the Popperian paradigm
says that a scientific law is falsifiable but not conclusively
verifiable. However, the possibility of methodological error imports
a dimension of uncertainty: could there have been an experimental
error which influenced the outcome of the experiment?
Thus, in actual practice, one single counter-example is not sufficient
to falsify a theory. This is the reason for retaining scientific theories
in many cases despite anomalous evidence. The OPERA
experiment, a collaborative scientific effort between CERN, Geneva
and LNGS, Italy, for detecting neutrinos, a subatomic particle,
reported findings that neutrinos were found to travel faster than
light. Scientists announced this result in September 2011. However,
the scientific community retained its belief in Einstein's theory of
relativity, which specifies an upper limit to the velocity of any
particle, namely the velocity of light. Later, the scientists concerned
admitted two errors in their experimental set-up (Mitra, 2016).
According to Popper, based on the criterion of demarcation using
falsifiability, physics, chemistry and non-introspective psychology,
among others, can be classified as sciences, psychoanalysis as a
pre-science, and astrology and phrenology as pseudo-sciences.
In Popper's view, unlike that of many social scientists, the more
improbable a theory is, the better it is scientifically, since the
probability of a theory being true and its information content are
inversely related. Thus the statements which most closely approach
the truth are those with high information content, although they
are improbable.
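To make this inverse relation concrete, the following minimal sketch uses the simple content measure Ct(a) = 1 - p(a), a formalization commonly associated with Popper's discussion of content; the probabilities are toy values assumed purely for illustration and are not drawn from Popper or from this article.

```python
# A toy illustration of the inverse relation between probability and
# informative content, using the simple measure Ct(a) = 1 - p(a).
# The probabilities below are assumed values for the example only.

def content(p):
    """Content of a statement with probability p: the less probable, the more it says."""
    return 1.0 - p

p_rain = 0.6                       # probability that it rains on Friday (toy value)
p_wind = 0.5                       # probability that it is windy on Friday (toy value)
p_rain_and_wind = p_rain * p_wind  # the stronger, conjunctive claim (assuming independence)

print(content(p_rain))             # 0.4
print(content(p_rain_and_wind))    # 0.7 -> the less probable, stronger claim carries more content
```

The conjunctive statement is less probable but says more about the world, which is exactly the trade-off Popper highlights.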
Popper was initially skeptical about the concept of truth; he
considered a theory to be an open-ended hypothesis and hence
potentially false. Later, in his Conjectures and Refutations (1963),
he integrated the concepts of truth and content (a new theory has
more empirical content than an old one) to frame the concept of
verisimilitude or truth-likeness. The content of a theory is the sum
total of its logical consequences, divisible into two classes: the
'truth content', the class of true propositions derivable from it, and
the 'falsity content', the class of false consequences, which may be
an empty set.
3. Review of Literature
Derksen (1985) critiques Popper's concept of falsifiability and calls
it 'fake cement' used to achieve methodological unity in the
philosophy of science. He examines Popper's epistemology and
suggests that the concept of falsifiability apparently leads to a
'great chain' of concepts, all linked to falsifiability. A wide variety
of desiderata, such as being the most falsifiable, most testable, most
informative and best corroborated theory, are all achievable
simultaneously. However, Derksen (1985) claims that there are
inherent ambiguities in the Popperian concept of falsifiability. As a
result, the great chain disintegrates, along with the methodological
unanimity that goes with it.
Derksen (1985) examines the great chain in detail. According to
Popper, any learning is possible only through the falsification of
our guesses. Hence the first claim in this chain is: "only from our
mistakes can we learn". This is, in effect, falsifiability, or
criticizability, a broader term.
Now, if learning is possible only through mistakes, scientific
theories should be open to empirical falsification. Generalizing this,
a more falsifiable theory has a better probability of being falsified,
and hence is more scientific. In other words, the scientific character
of a theory is measurable, and falsifiability is its metric.
Popper goes on to argue that a highly falsifiable theory has “little
chance to escape falsification”. A bolder theory not only presents
more risks of falsification, it also offers more opportunity to learn
something new, thus offering scientific knowledge a chance to
grow. Hence Popper’s second claim is: a more falsifiable theory
offers a better opportunity of scientific growth.
Another link of falsifiability with Popper’s deductivist thought is
that a highly falsifiable theory contains more information. The
converse is true as well; a theory with no falsifiers has no empirical
content. Hence the initial links of the 'great chain' are: falsifiability
→ informative content → (larger) class of potential falsifiers.
Popper further contends that the most falsifiable theory is also the
one with the highest explanatory capacity and one with the
maximum simplicity. A theory can be made more falsifiable either
by making the theory more general or by using a more specific
predicate. For example, moving from 'all crows are black' to 'all
birds are black' illustrates the first way; moving from 'all crows are
black or brown' to 'all crows are black' illustrates the second. Thus
it follows that explanatory power increases as falsifiability
increases. As regards simplicity, Popper cites the example of the
hypothesis that planets revolve around the sun in circles, as
compared with the less simple hypothesis of elliptical orbits. The
circle hypothesis can be falsified by four observations, whereas
falsifying the elliptical hypothesis requires at least six observations.
Also, the circle hypothesis is more precise, as circles constitute a
subset of ellipses. Hence it can be seen that simplicity and
falsifiability are positively correlated.
Hence the great chain is reached: falsifiability → potential falsifiers
→ testability → information content → explanatory power →
simplicity.
Popper later realized that falsification alone is not enough. A
scientist has to be certain that his more falsifiable, and consequently
often falsified, theories are heading him in the 'right direction'.
Popper (1960) says that when all new attempts at theory building
are refuted, the scientist "would feel that we were producing a
sequence of theories which … were ad-hoc and … that we were not
getting any nearer to the truth", and consequently "science would
lose its empirical character". Hence Popper amended the first claim
to say: only through falsifications, occasionally interspersed with
corroborations (this may be called the amended claim), can we
learn.
Popper claims in his L. Sc. D. that the theory which has resisted the
most severe testing is also the most falsifiable theory which has not
been falsified; hence this is the most corroborated theory. There are,
however, questions regarding whether scientists actually test the
most falsifiable theory and corroborate it, and hence the most
corroborated theory is not necessarily the most falsifiable one.
However, if one follows Popper’s advice, that is, testing the most
falsifiable theory, then the last link in the great chain is added:
the most corroborated theory → the most falsifiable theory yet to
be falsified.
However, Popper’s ‘Great Chain’ experiences tension as illustrated
below. It is possible to argue that the most corroborated theory at
one point of time which has been tested most thoroughly is no
longer the most falsifiable one. This is because the “most risky
predictions” have been tested, and the comparatively less risky
ones are yet to be tested. Hence with a smaller probability of
refutation, these less risky predictions do not make the theory most
falsifiable currently. Also, corroborated experiments, which are
now part of scientific knowledge, leaves much less scope for
counterexamples to occur, thus decreasing the severity of the tests.
This implies that falsifiability, as indicated by the testability of a
theory, decreases with time. Popper admits as much: the "empirical
character of a very successful theory grows stale after a time"
(Popper, 1972). Falsifiability under the Popperian paradigm thus
shows up both as information content and as testability. While a
theory holds out against falsification, its information content and
its explanatory power remain constant, whereas its testability
decreases. 'Corroboration', the complement of testability, hence
changes. The 'chain' linked by a single concept of falsifiability
comes under strain: is falsifiability to be reckoned as information
content or as testability?
So the question boils down to: which theory do we test? The most
testable theory (best chance to learn) or the most corroborated
theory (to be closer to the truth)? Suppose that we settle for the
most testable theory, as this offers the greatest chance to learn.
Science, after all, is an endeavor to learn from our mistakes; this
maintains the empirical character and rationality inherent in
sciences. However, going by our amended claim, we also need
occasional corroborations. How can the riskiest and most testable
theory guarantee that? Since Popper says that corroboration should
emerge as a result of the most severe test, it is clear that we need
the most testable theory, and hope for occasional corroboration and
finally falsification. Here, Popper makes his third claim:
“corroboration gives us a reason for believing that science has come
closer to the truth.”
Popper propounded methodological directives as to which theories
should be chosen for testing:
- the most falsifiable, informative and testable theories, for obvious
reasons;
- the most corroborated theory; this is the most severely tested
theory, "appears to be the best so far", and hence is 'rational'.
To offer a solution to the issue narrated in the earlier paragraph,
Popper propounded two more directives:
- a new theory must cover the successes of the old theory;
- an old theory should be an approximation of the new theory.
These directives are relevant when an old theory has been tested so
much that its "empirical character has grown stale". Popper offers
an elegant way out of the issue raised in the preceding paragraph.
We need to know that we are "moving in the right direction";
hence the old theory has to be approximately true. Since the new
theory has to preserve all past successes, it can be seen that Popper's
directives imply choosing the most testable theory among the most
corroborated, convergent ones.
Derksen (1985) shows that Popper's third claim carries the weight
of the amended first claim and the second claim. Popper advances
two arguments in support of the third claim, namely the
verisimilitude argument and the highly-unlikely-accident argument.
Popper puts the latter thus: of a theory which has withstood a series
of different and risky tests, it is "highly improbable this is due to an
accident, highly improbable therefore that the theory is miles away
from the truth." Although this is apparently a deductivist argument,
Derksen concludes that, since the future is involved, it is an
inductivist argument. Hence the third claim cannot be explained by
Popper's deductivism, and the first two claims are consequently not
so meaningful. Also, testability and information content, as shown
earlier, are dissociated. Hence Derksen concludes that falsifiability
is 'fake cement'.
Gillies (2003) comments on a challenge to falsifiability as the
demarcation criterion, known as the Duhem-Quine thesis, which
was brought to the fore by Neurath in 1935 and was based on the
work of Duhem. The following presents the gist of the thesis.
It is agreed by common consent that Newton's first law of motion
is a scientific law. As it happens, it is not falsifiable. This law states
that a body continues in its state of rest or of uniform motion in a
straight line unless acted upon by an external impressed force.
Suppose a body is found to be neither at rest nor in uniform motion
in a straight line, and yet it seems not to be acted upon by any
external force. This observation seemingly refutes Newton's law,
but in reality it does not necessarily do so. Newton himself, on
observing the elliptical orbits of the planets, came to the conclusion
that they were acted on by gravitational forces from celestial bodies
other than the sun.
This issue is discussed by Duhem (1962), as cited in
"Underdetermination of Scientific Theory" (Stanford, 2019): "…the
physicist can never subject an isolated hypothesis to experimental
test, but only a whole group of hypotheses; when the experiment is
in disagreement with his predictions, what he learns is that at least
one of the hypotheses constituting this group is unacceptable and
ought to be modified; but the experiment does not designate which
one should be changed."
Going by the above, Newton's first law cannot be tested on its own
as a standalone hypothesis, but only as part of a group of
hypotheses. For meaningful results, the law has to be used in
conjunction with, first, further assumptions, such as Newton's
second and third laws and the law of universal gravitation, and,
second, auxiliary assumptions, such as the assumption that the
mass of the sun is much greater than that of the planets.
Since the first law needs to be used in conjunction with many
assumptions, it would not be possible to refute the law when
forecasts from it are not realized, as it may be one of the further or
auxiliary assumptions that fails to hold. Hence, by the Duhem-Quine
thesis, Newton's first law is not falsifiable.
Popper replied to this issue. He used a four-level model of types of
statements, divided on the basis of their falsifiability and
confirmability. Gillies (2003) presents this model in Table 1:
Table 1: Popper's Four-level Model of Scientific Hypotheses (source: Gillies, 2003)

Level | Type of Statement | Criterion | Example
3 | Metaphysical | Not confirmable | Greek atomism
2 | Scientific | Confirmable but not falsifiable | Newton's first law
1 | Scientific | Falsifiable and confirmable | Kepler's first law
0 | Observation | Truth-value determinable by observation | Statement of the position of Mars at a point in time
Gillies (2003) points out where the ideas of Kuhn and Popper
converge. Level-2 theories, such as Newton's first law, cannot be
falsified through observation, as shown in Table 1. According to
Thomas Kuhn, the Newtonian paradigm was replaced by the
Einsteinian paradigm not through one single observation but
through a process of scientific revolution. This is to be expected in
the case of level-2 theories, which cannot be falsified. The
Popperian scheme of falsification, however, is applicable to level-1
theories.
Moreover, a level-1 hypothesis such as Kepler's first law (which
states that planets orbit a star in ellipses with the star at one focus)
can be tested by observing the positions of a planet and
ascertaining that these points lie on the ellipse with the specified
parameters. This may be called direct confirmation. From Newton's
laws, along with the few additional assumptions mentioned earlier,
an approximate form of Kepler's law can be deduced. Newton's
theory can, moreover, be confirmed by observations of planets, of
the motions of projectiles, and so on. The confirmation of
Newtonian theory, along with the fact that Kepler's first law in an
approximate form is obtained from Newtonian theory, points to an
indirect confirmation of Kepler's law.
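'Direct confirmation' of this kind can be pictured as a simple consistency check of observed positions against the ellipse equation. The sketch below is a minimal illustration; the orbital elements are roughly Mars-like round figures and the 'observation' is invented for the example, not taken from Gillies or Popper.

```python
# A minimal sketch of 'direct confirmation' of Kepler's first law: check whether
# an observed heliocentric position satisfies the focus-centred ellipse equation
# r = a(1 - e^2) / (1 + e*cos(theta)). Elements and observations are illustrative.
import math

def fits_kepler_ellipse(r, theta, a, e, tol=1e-6):
    """True if the observed (r, theta), measured from the focus, lies on the ellipse
    with semi-major axis a and eccentricity e, within the tolerance."""
    predicted_r = a * (1 - e ** 2) / (1 + e * math.cos(theta))
    return abs(r - predicted_r) < tol

a, e = 1.524, 0.0934                      # assumed, approximately Mars-like elements (AU, dimensionless)
theta = math.radians(75.0)                # a hypothetical observed true anomaly
r_obs = a * (1 - e ** 2) / (1 + e * math.cos(theta))

print(fits_kepler_ellipse(r_obs, theta, a, e))         # True: the observation confirms the ellipse
print(fits_kepler_ellipse(r_obs * 1.05, theta, a, e))  # False: such an observation would refute it
```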
Popper (1972) wrote something similar:
"Thus I assert that with the corroboration of Newton's theory, and
the description of the earth as a rotating planet, the degree of
corroboration of the statement s 'The sun rises in Rome once in
every twenty-four hours' has greatly increased. For, on its own, s is
not very well testable; but Newton's theory, and the theory of the
rotation of the earth are well testable. And if these are true, s will be
true also."
The Duhem-Quine thesis says that it would be impossible to falsify
an individual theory experimentally. Can the Popperian paradigm
guide us in detecting errors in individual theories? Maxwell (1972)
describes the following thought experiment in this regard. Let us
consider two rival research programs based on two competing
theories T1 and T2. The program based on T1 has long been stagnant;
a number of well-corroborated hypotheses are at odds with this.
The program has also not been able to come up with new
predictions.
The research program based on T2 is surging ahead. Its empirical
content and predictive power are far greater. Popperian rules
indicate that T2 should be accepted and T1 rejected.
But if T1 is true (which is perfectly possible), then on what basis
should T1 be rejected? The logic advanced by Maxwell is that
perhaps the universe is built in such a way that the theories which
plunge us into deeper error are precisely those whose research
programs forge ahead. However, the Popperian paradigm has no
clear answer to this dilemma. The argument that, because T2 has
been better corroborated than T1, T2 is closer to the truth cannot be
invoked by followers of Popper, as that would mean resorting to
the unreliable inductive method!
Turney (1991), in an interesting paper, points out certain 'errors' in
Popper (1959), where Popper says that simplicity can be equated
with falsifiability. Popper uses a geometrical example to develop his
argument. He proposes two definitions and a theorem.
Definition 1: The theoretical dimension of a class of geometrical
figures is one less than the cardinality of the smallest set of points
such that there is no figure in the class on which all the points lie.
Definition 2: The geometrical dimension of a class of geometrical
figures is the number of free parameters in the equations that
define the class.
As an example, the equation for the class of circles is:
Ax² + Ay² + Bx + Cy + D = 0
This has four parameters, A, B, C, and D, but since only the ratios
of the coefficients matter (multiplying the equation through by a
non-zero constant leaves the class unchanged), the number of free
parameters is three; the geometrical dimension of the class of circles
is three. A lower geometrical dimension translates to a simpler
class.
Theorem: The geometrical dimension of a class of geometrical
figures equals the theoretical dimension of that class.
Turney (1991) claims that the theorem is false. Let us consider the
conic
Ax² + Bxy + Cy² + Dx + Ey + F = 0, where at least one of A, B, C, D,
E is not equal to 0.
For a circle: A = C, A ≠ 0, and B = 0. The requirement that at least
one of A, B, C, D, E be non-zero arises from the fact that if A = B =
C = D = E = F = 0, then the solution set is the entire x-y plane, which
has infinite dimension.
Consider the class of circles: since it has three free parameters, its
geometrical dimension is three. Turney shows that its theoretical
dimension is two. Let us consider the set of three collinear points
P = {(0, 0), (1, 1), (2, 2)}. Substituting these points into the conic
equation, we get:
F = 0
A + B + C + D + E + F = 0
4A + 4B + 4C + 2D + 2E + F = 0
Combined with the circle conditions B = 0 and C = A, these
equations imply A = 0, which is a contradiction, as A is not equal to
0 for a circle. On dropping this condition (that A ≠ 0), certain
undesirable results are obtained:
- Circles are not special cases of ellipses;
- Lines are special cases of ellipses.
Hence we do not drop this condition, and it follows that no circle
contains all the points in P. The class of circles therefore has a
theoretical dimension of two, not three, while its geometrical
dimension is three.
Such results can be illustrated with other conics as well.
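Turney's counterexample can also be checked mechanically. The following sketch (Python with SymPy, assumed to be available) imposes the circle conditions B = 0 and C = A and asks for a conic through the three collinear points; every solution of the resulting system has A = 0, confirming that no genuine circle passes through all of P.

```python
# A sketch verifying Turney's counterexample symbolically: impose the circle
# conditions (B = 0, C = A) and require the conic to pass through the three
# collinear points (0, 0), (1, 1), (2, 2). Every solution forces A = 0, which
# contradicts A != 0 for a genuine circle.
import sympy as sp

A, D, E, F = sp.symbols('A D E F')

def circle_lhs(x, y):
    # General conic with B = 0 and C = A: A*x^2 + A*y^2 + D*x + E*y + F
    return A * x ** 2 + A * y ** 2 + D * x + E * y + F

points = [(0, 0), (1, 1), (2, 2)]
equations = [sp.Eq(circle_lhs(x, y), 0) for x, y in points]
solutions = sp.solve(equations, [A, D, E, F], dict=True)

print(solutions)   # e.g. [{A: 0, D: -E, F: 0}] -- A = 0 in every solution
```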
Goodman (1972), as cited in Turney (1991), has also shown an
inconsistency in Popper's equation of falsifiability with simplicity.
Suppose a number of maple trees have been examined over a wide
area and all of them have been found to be deciduous. Further
suppose we have not visited a particular location, Eagleville. We
might be considering a choice among the following hypotheses:
- All maples are deciduous, except those in Eagleville
- All maples are deciduous
- All maples elsewhere and all sassafras trees in Eagleville are
deciduous.
Clearly the third statement is the most specific and the easiest to
falsify, whereas the second is the simplest and, intuitively, the best.
Thus Goodman concludes that there is no reason to necessarily
equate simplicity with falsifiability.
Keita (1989) raises three important issues regarding the falsification
of universal statements:
- If induction does not work, then a scientific law L, a universal
statement derived from a finite set of observations, is false. The
veracity of a theory constituted by L is then questionable. The
implication is that Popper attempts to falsify 'false' theories.
- In actual science, researchers attach a predictive judgment before
testing a hypothesis which has already been subjected to
experiments. This really amounts to resorting to the inductive
method. Moreover, in the case of statistical inferences (which are
not strictly scientific laws, as they do not predict events with
probability one and are correlational rather than explanatory),
scientific laws are "established by enumeration of particular
events".
- Actual scientific laws are not universal statements but are
restricted in temporal and spatial scope. For instance, Henry's law,
a law in the theory of phase equilibria, states: "At a fixed
temperature, the amount of gas dissolved in a given quantity of
solvent is proportional to the partial pressure of the gas above the
solution", or P = XK, where K is Henry's constant (a brief numerical
sketch follows this list). But Henry's law is applicable to a finite
class of gases subjected to experiments under a defined set of
conditions such as temperature and pressure. If the law has been
satisfied under the experimental conditions at times T1 to Tn-1, it
would be prudent to infer that it will be satisfied at time Tn. This
takes the sting out of the criticism that induction takes a leap from
a finite set to an unrestricted class.
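For concreteness, the proportionality P = XK can be turned into a one-line calculation. In the sketch below, the value of Henry's constant is only an illustrative order-of-magnitude figure for oxygen in water near room temperature, not a datum quoted by Keita or by this article.

```python
# A small numerical sketch of Henry's law, P = X * K: partial pressure of the gas
# is proportional to its dissolved mole fraction. The constant is an assumed,
# illustrative order-of-magnitude value, not a quoted datum.
K_oxygen_water = 4.3e4   # atm per unit mole fraction (illustrative value)
P_oxygen_air = 0.21      # atm, approximate partial pressure of O2 in air

x_dissolved = P_oxygen_air / K_oxygen_water   # mole fraction of dissolved O2
print(f"Dissolved O2 mole fraction ~ {x_dissolved:.2e}")   # ~ 4.9e-06
```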
4. Results and Discussion
Table 2 records, in a structured manner, insights from the review of
literature conducted in section 3 and discusses them.
Table 2: Insights from the Review of Literature

Derksen (1985): The verisimilitude argument of Popper can be stated as "in case a theory withstands a set of risky and varied tests, it is highly improbable that this is due to an accident…". Derksen (1985) correctly points out that the argument is non-inductivist with respect to the past and present but inductivist with respect to the future. The only rationale that can justify Popper's reasoning is that the future is similar to the past; but this is an inductivist argument and hence contradicts Popper's deductivism. Derksen's argument is therefore valid.

Gillies (2003): Popper, it seems, broadly agreed with the critique, and he answered it with a four-level model of statements based on their falsifiability and confirmability. The critique is a valid one. The critique and Popper's consequent four-level model led to the understanding (Gillies, 2003) that level-2 'statements' such as Newton's first law had to be falsified through a "scientific revolution" as proposed by Thomas Kuhn.

Maxwell (1972): The paper strikes at the basis of Popper's philosophy, namely the ability of Popper's methodology and logic to demarcate science from non-science. The issue of the two competing research programs based on T1 and T2 is well dealt with. The weakness of some of Popper's arguments, seemingly deductivist but actually shown to be inductivist, is witnessed in this article as well.

Turney (1991): Popper claims that the geometrical dimension of a class of geometrical figures is equal to the theoretical dimension of that class. He, however, does not prove it, and the claim is not founded on reasoning. Popper's general claim that simplicity is the same as falsifiability is thereby dented.

Goodman (1972): In the limited context, Goodman's argument holds. However, whether the second statement, which is linguistically the most simple, is actually the most simple may be open to argument.

Keita (1989): Induction is a recurring theme in almost all critiques of Popper's methodology. It is also a fact that induction does play a part in science, but that is not in itself a problem for Popper's argument. With regard to Henry's law, the inductive approach cannot test for all possible situations even in the "restricted temporal and spatial scope". Hence Popper's falsifiability remains valid as a criterion for demarcating science from non-science.
5. Conclusion
A major critique of Popper is that his methodology of falsification
itself rests on the inductive method that he despised (Derksen, 1985;
Keita, 1989; Maxwell, 1972). This leaves Popper's methodology and
his assertions on falsifiability open to argument. In any case, actual
empirical science generally uses induction, and actual scientific
laws are situated in specific spatio-temporal contexts, thus
countering Popper's critique that inductivists take a wild leap of
faith when extrapolating their findings to a universal context. The
second critique, with which Popper agreed and to which he replied
by strengthening his framework, is the Duhem-Quine thesis: actual
science tests groups of hypotheses in tandem, and in such cases,
whereas the group as a whole is testable, an individual hypothesis
within it is only confirmable and not falsifiable. Third, while Popper
equated simplicity with falsifiability, researchers such as Goodman
(1972) and Turney (1991) have found flaws in this argument as well.
The paper has taken a look at the various dimensions of the
falsification criterion through a review of the extant literature.
Various researchers have identified deficiencies in Popper's
epistemology, and actual science does not necessarily work in the
way Popper visualizes it. Nonetheless, Popper's falsifiability
remains a very strong criterion, especially where research can be
founded on value-laden assumptions, as in the social sciences
including management; scientists willing to subject their research
to more difficult tests of falsifiability would be following sound
research practice.
As a student of the philosophy of science, one would contend that
Kuhn's (1970) idea of probabilistic verification is in many cases a
good guide to the philosophy of science. Normal science, according
to Kuhn, advances by the probabilistic verification of competing
theories, wherein the better theory becomes the most viable one
through a process similar to natural selection. There is always an
imperfect fit between data and theory, and in cases of severe
inconsistency, testing the theory through falsification requires a
degree of falsification, or level of improbability, which amounts to
probabilistic verification. In this regard, Popper and Kuhn share a
degree of unanimity.
References
Derksen, A. (1985). The alleged unity of Popper's philosophy of science:
Falsifiability as fake cement. Philosophical Studies, 48(3), 313–336.
Duhem, P. (1962). The Aim and Structure of Physical Theory, translated by
Philip P. Weiner. New York: Athenaeum.
Gillies, D. (2003). The Demarcation Problem and Alternative Medicine.
Presented at the Karl R. Popper: Revision of his Legacy, La Coruna,
Spain. Retrieved from http://discovery.ucl.ac.uk/17002/1/17002.pdf
Goodman, N. (1972). Problems and projects. New York: Bobbs-Merrill Co.
Inc.
Keita, L. (1989). Are universal statements falsifiable? Journal for General
Philosophy of Science, 20(2), 351–366.
Kuhn, T. S. (1970). The structure of scientific revolutions (2nd ed.). Chicago:
The University of Chicago Press.
Maxwell, N. (1972). A critique of Popper's views on scientific method.
Philosophy of Science, 39(2), 131–152.
Mitra, S. (2016). What Constitutes Science: Falsifiability as a Criterion of
Demarcation. Retrieved from
https://doi.org/10.13140/rg.2.1.4612.5685
Popper, K. R. (1959). The logic of scientific discovery (Second Harper
Torchbook edition 1968). Harper and Row.
Popper, K. R. (1960). Truth, rationalism and the growth of science. Presented
at the International Congress for the Philosophy of Science, Stanford,
Stanford University.
Popper, K. R. (1972). Objective knowledge: An evolutionary approach. Oxford
University Press.
Popper, K. R. (2002). The logic of scientific discovery. Routledge.
Stanford, K. (2019). Underdetermination of scientific theory. In Stanford
encyclopedia of philosophy. Retrieved from https://
plato.stanford.edu/entries/scientific-underdetermination/
Thornton, S. (2019). Karl Popper. In Stanford Encyclopedia of philosophy.
Retrieved from http://plato.stanford.edu/entries/popper/
Turney, P. (1991). A note on Popper's equation of simplicity with
falsifiability. The British Journal for the Philosophy of Science, 42(1),
105–109.