Quantum Rep. 2024, 6, 278–322. https://doi.org/10.3390/quantum6020020 www.mdpi.com/journal/quantumrep
Article
The Computational Universe: Quantum Quirks and Everyday
Reality, Actual Time, Free Will, the Classical Limit Problem in
Quantum Loop Gravity and Causal Dynamical Triangulation
Piero Chiarelli 1,2,* and Simone Chiarelli 3
1 National Council of Research of Italy, San Cataldo, Moruzzi 1, 56124 Pisa, Italy
2 Department of Information Engineering, University of Pisa, G. Caruso, 16, 56122 Pisa, Italy
3 Independent Researcher, 56125 Pisa, Italy; simchi88@hotmail.com
* Correspondence: pchiare@ifc.cnr.it; Tel.: +39-050-315-2359; Fax: +39-050-315-2166
Abstract: The simulation analogy presented in this work enhances the accessibility of abstract quan-
tum theories, specifically the stochastic hydrodynamic model (SQHM), by relating them to our daily
experiences. The SQHM incorporates the influence of fluctuating gravitational background, a form
of dark energy, into quantum equations. This model successfully addresses key aspects of objective-
collapse theories, including resolving the ‘tails’ problem through the definition of quantum poten-
tial length of interaction in addition to the De Broglie length, beyond which coherent Schrödinger
quantum behavior and wavefunction tails cannot be maintained. The SQHM emphasizes that an
external environment is unnecessary, asserting that the quantum stochastic behavior leading to
wavefunction collapse can be an inherent property of physics in a spacetime with fluctuating met-
rics. Embedded in relativistic quantum mechanics, the theory establishes a coherent link between
the uncertainty principle and the constancy of light speed, aligning seamlessly with finite infor-
mation transmission speed. Within quantum mechanics submitted to fluctuations, the SQHM de-
rives the indeterminacy relation between energy and time, offering insights into measurement pro-
cesses impossible within a finite time interval in a truly quantum global system. Experimental vali-
dation is found in confirming the Lindemann constant for solid lattice melting points and the 4He
transition from fluid to superfluid states. The SQHM’s self-consistency lies in its ability to describe
the dynamics of wavefunction decay (collapse) and the measure process. Additionally, the theory
resolves the pre-existing reality problem by showing that large-scale systems naturally decay into
decoherent states stable in time. Continuing, the paper demonstrates that the physical dynamics of
SQHM can be analogized to a computer simulation employing optimization procedures for realiza-
tion. This perspective elucidates the concept of time in contemporary reality and enriches our com-
prehension of free will. The overall framework introduces an irreversible process impacting the
manifestation of macroscopic reality at the present time, asserting that the multiverse exists solely
in future states, with the past comprising the formed universe after the current moment. Locally
uncorrelated projective decays of wavefunction, at the present time, function as a reduction of the
multiverse to a single universe. Macroscopic reality, characterized by a foam-like consistency where
microscopic domains with quantum properties coexist, offers insights into how our consciousness
perceives dynamic reality. It also sheds light on the spontaneous emergence of gravity in discrete
quantum spacetime evolution, and the achievement of the classical general relativity limit in quan-
tum loop gravity and causal dynamical triangulation. The simulation analogy highlights a strategy
focused on minimizing information processing, facilitating the universal simulation in solving its
predetermined problem. From within, reality becomes the manifestation of specific physical laws
emerging from the inherent structure of the simulation devised to address its particular issue. In
this context, the reality simulation appears to employ an optimization strategy, minimizing infor-
mation loss and data management in line with the simulation’s intended purpose.
Academic Editors: Gerald B. Cleaver and Lev Vaidman
Received: 12 March 2024; Revised: 16 May 2024; Accepted: 18 June 2024; Published: 20 June 2024
Copyright: © 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Keywords: EPR paradox; preexisting reality; quantum to classical coexistence; actual time in
spacetime; free will
1. Introduction
One of the most intriguing aspects of modern physics is its approach to infinitesimals
and infinities as mathematical abstractions rather than real entities. This perspective has
already yielded significant results, such as quantum loop gravity [1,2] and non-commu-
tative string theories [3,4].
This study endeavors to showcase how adopting a discrete perspective can offer
novel insights into age-old conundrums in physics.
Building on the sound hypothesis that spacetime is not continuous but discrete, we
demonstrate the plausibility of drawing an analogy between our universe and a comput-
erized N-body simulation.
Our goal is to present a sturdy framework of reasoning, which will be subsequently
utilized to aain a more profound comprehension of our reality. This framework is
founded on the premise that anyone endeavoring to create a computer simulation resem-
bling our universe will inevitably confront the same challenges as the entity responsible
for constructing the universe itself.
The fundamental aim is that, by tackling these challenges, we may unearth insights
into the reasons behind the functioning of the universe. This is based on the notion that
constructing something as extensive and intricate at higher levels of efficiency might have
only one viable approach. The essence of the current undertaking is eloquently aligned
with the dictum of Feynman: ‘What I cannot create, I do not understand’. Conversely,
here, this principle is embraced in the affirmative: ‘What I can create, I can comprehend’.
One of the primary challenges in achieving this objective is the need for a physical
theory that comprehensively describes reality, capable of portraying N-body evolution
across the entire physical scale, spanning the microscopic quantum level to the macro-
scopic classical realm.
Regarding this maer, established physics falls short in providing a comprehensive
and internally consistent theoretical foundation [510]. Numerous problematic aspects
persist to this day, including the challenge posed by the probabilistic interpretation as-
signed to the wavefunction in quantum mechanics. Other persistent issues include the im-
possibility of assuming a well-defined concept of pre-existing reality before measurement
and ensuring local relativistic causality.
Quantum theory, despite its well-defined mathematical apparatus, remains incom-
plete with respect to its foundational postulates. Specifically, the measurement process is
not explicated within the framework of quantum mechanics. This requires acceptance of
its probabilistic foundations regardless of the validity of the principle of causality.
This conflict is famously articulated through the objection posed by the EPR paradox.
The EPR paradox, as detailed in a renowned paper [5], is rooted in the incompleteness of
quantum mechanics concerning the indeterminacy of the wavefunction collapse and
measurement outcomes. These fundamental aspects do not find a clear placement within
a comprehensive theoretical framework.
The endeavor to formulate a theory encompassing the probabilistic nature of quan-
tum mechanics within a unified theoretical framework can be traced back to the research
of Nelson [6] and has persisted over time. However, Nelson’s hypotheses ultimately fell
short due to the imposition of a specific stochastic derivative with time inversion sym-
metry, limiting its generality. Furthermore, the outcomes of Nelson’s theory do not fully
align with those of quantum mechanics concerning the incompatibility of contemporary
measurements of conjugated variables, as illustrated by Von Neumann’s proof [7] of the
impossibility of reproducing quantum mechanics with theories based on underlying clas-
sical stochastic processes.
Moreover, the overarching goal of incorporating the probabilistic nature of quantum
mechanics while ensuring its reversibility through hidden variables in local classical
theories was conclusively proven to be impossible by Bell [8]. Nevertheless, Bohm’s non-local
hidden variable theory [11] has met with some success. It endeavors to restore the
determinism of quantum mechanics by introducing the concept of a pilot wave. The fun-
damental concept posits that, in addition to the particles themselves, there exists a guid-
ance or influence from the pilot wavefunction that dictates the behavior of the particles.
Although this pilot wavefunction is not directly observable, it does impact the measure-
ment probabilities of the particles.
Feynman’s path integral representation [12] of quantum mechanics constitutes the
conclusive and accurate model reducible to a stochastic framework. Here, as shown by
Kleinert [13], it is established that quantum mechanics can be conceptualized as an imag-
inary-time stochastic process. These imaginary time quantum fluctuations differ from the
more commonly known real-time fluctuations of the classical stochastic dynamics. They
result in the ‘reversible’ evolution of probability waves (wavefunctions) that shows the
pseudo-diffusion behavior of mass density evolution.
The distinguishing characteristic of quantum pseudo-diffusion is the inability to de-
fine a positive-definite diffusion coefficient. This directly stems from the reversible nature
of quantum evolution, which, within a spatially distributed system, may demonstrate lo-
cal entropy reduction over specific spatial domains. However, this occurs within the
framework of an overall reversible deterministic evolution with a net entropy variation of
zero [14].
This aspect is clarified by the Madelung quantum hydrodynamic model [15–17],
which is perfectly equivalent to the Schrödinger description while being a specific subset
of the Bohm theory [18]. In this model, quantum entanglement is introduced through the
action of the so-called quantum potential.
Recently, with the emergence of evidence pointing to dark energy manifested as a
gravitational background noise (GBN), whether originating from relics or the dynamics
of bodies in general relativity, the author demonstrated that quantum hydrodynamic rep-
resentation provides a means to describe self-fluctuating dynamics within a system with-
out necessitating the introduction of an external environment [19]. The noise produced by
ripples in spacetime curvature can be incorporated into Madelung’s quantum hydrody-
namic framework by applying fundamental principles of relativity. This allows us to es-
tablish a mechanism through which the energy associated with spacetime curvature rip-
ples generates fluctuations in mass density.
The resulting stochastic quantum hydrodynamic model (SQHM) avoids introducing
divergent results that contradict established theories such as decoherence [9] and the Co-
penhagen foundation of quantum mechanics; instead, it enriches and complements our
understanding of these theories. It indicates that in the presence of noise, quantum entan-
glement and coherence can be maintained on a microscopic scale much smaller than the
De Broglie length and the range of action of the quantum potential. On a scale with a
characteristic length much larger than the distance over which quantum potential oper-
ates, classical physics naturally emerges [19].
While the Bohm theory aributes the indeterminacy of the measurement process to
the indeterminable pilot wave, the SQHM aributes its unpredictable probabilistic nature
to the fluctuating gravitational background. Furthermore, it is possible to demonstrate a
direct correspondence between the Bohm non-local hidden variable approach developed
by Santilli in IsoRedShift Mechanics [20] and the SQHM. This correspondence reveals that
the origin of the hidden variable is nothing but the perturbative effect of the fluctuating
gravitational background on quantum mechanics [21].
The stochastic quantum hydrodynamic model (SQHM), adept at describing physics
across various length scales, from the microscopic quantum to the classical macroscopic
[19], offers the potential to formulate a comprehensive simulation analogy to N-body evo-
lution within the discrete spacetime of the universe.
The work is organized as follows:
i. Introduction to the stochastic quantum hydrodynamic model (SQHM)
ii. Quantum-to-classical transition and the emerging classical mechanics in large-sized
systems
iii. The measurement process in quantum stochastic theory: the role of the finite range
of non-local quantum potential interactions
iv. Maximum precision in measurements of mechanical variables in spacetime with
fluctuating background and finite speed of light
v. Minimum discrete length interval in 4D spacetime
vi. Dynamics of wavefunction collapse
vii. Evolution of the mass density distribution of quantum superposition of states in
spacetime with GBN
viii. EPR paradox and pre-existing reality from the standpoint of the SQHM
ix. The computer simulation analogy for the N-body problem
x. How the universe computes the next state: the unraveling of the meaning of time
xi. Free will
xii. The universal ‘pasta maker’ and actual time in 4D spacetime
xiii. Discussion and future developments
xiv. Extending free will
xv. Best future states problem-solving emergent from the Darwinian principle of
evolution
xvi. How the conscience captures the reality dynamics
xvii. The spontaneous appearance of gravity in a discrete spacetime simulation
xviii. The classical general relativity limit problem in quantum loop gravity and causal
dynamical triangulation
2. The Quantum Stochastic Hydrodynamic Model
The Madelung quantum hydrodynamic representation transforms the Schrödinger
equation [15–17] (italic indexes run from 1 to 3)

i\hbar\,\partial_t\psi = \left(-\frac{\hbar^2}{2m}\,\partial_i\partial_i + V_{(q)}\right)\psi (1)

for the complex wavefunction \psi = |\psi|\,e^{iS/\hbar} into two equations of real variables: the con-
servation equation for the mass density |\psi|^2

\partial_t|\psi|^2 + \partial_i\left(|\psi|^2\,\dot q_i\right) = 0 (2)

and the motion equation for the momentum

\ddot q_{j(t)} = -\frac{1}{m}\,\partial_j\left(V_{(q)} + V_{qu(n)}\right) (3)

where S_{(q,t)} = \frac{\hbar}{2i}\ln\frac{\psi}{\psi^*} and where

V_{qu} = -\frac{\hbar^2}{2m}\,|\psi|^{-1}\,\partial_i\partial_i|\psi|. (4)
The fluctuating energy content of gravitational background noise (GBN) leads to lo-
cal variations in mass density. As demonstrated below, this results in the quantum poten-
tial producing a stochastic force, which extends the Madelung hydrodynamic analogy
into a quantum–stochastic problem. The fluctuations in mass density can be understood
by observing that gravitons are metric fluctuations that cause space itself to vibrate. This
vibration contracts and elongates distances, similar to what is detected in the observation
of gravitational waves at the LIGO and VIRGO laboratories. Consequently, when a mass
element experiences elongation or shortening of distance, its density decreases or in-
creases accordingly. As shown in [19], the SQHM is defined by the following assumptions:
1. The additional mass density generated by the GBN is described by the wavefunction \psi_{gbn} with density |\psi_{gbn}|^2;
2. The associated energy density E of the GBN is proportional to |\psi_{gbn}|^2;
3. The additional mass m_{gbn} is defined by the identity E = m_{gbn}c^2|\psi_{gbn}|^2;
4. The additional mass is assumed not to interact with the mass of the physical system (since the gravitational interaction is sufficiently weak to be disregarded).
Under these assumptions, the wavefunction of the overall system \psi_{tot} reads as

\psi_{tot} \cong \psi\,\psi_{gbn} (5)
Additionally, given that the energy density E of the gravitational background noise (GBN)
is quite small, the mass density m_{gbn}|\psi_{gbn}|^2 is presumed to be significantly smaller
than the body mass density typically encountered in physical problems. Hence, consider-
ing the mass m_{gbn} to be much smaller than the mass of the system and assuming in
Equations (3) and (4) m_{tot} = m + m_{gbn} \cong m, the overall quantum potential can be ex-
pressed as follows

V_{qu(tot)} = -\frac{\hbar^2}{2m}\left(|\psi||\psi_{gbn}|\right)^{-1}\partial_i\partial_i\left(|\psi||\psi_{gbn}|\right)
= -\frac{\hbar^2}{2m}\left[|\psi|^{-1}\partial_i\partial_i|\psi| + |\psi_{gbn}|^{-1}\partial_i\partial_i|\psi_{gbn}| + 2\left(|\psi|^{-1}\partial_i|\psi|\right)\left(|\psi_{gbn}|^{-1}\partial_i|\psi_{gbn}|\right)\right]. (6)
Through an analysis of the variation in quantum potential energy generated by each
Fourier component of mass fluctuation and utilizing the Maxwell–Boltzmann law, we can
derive the spectrum of this noise and subsequently determine its correlation function
G(\lambda).
To accomplish this, let us examine a fluctuating mass density with a wavelength \lambda

|\psi_{gbn(\lambda)}|^2 \propto \cos^2\frac{2\pi q}{\lambda} (7)

related to the wavefunction of the mass fluctuation

\psi_{gbn(\lambda)} \propto \pm\cos\frac{2\pi q}{\lambda}. (8)
With this, we find that the energy fluctuations resulting from the quantum potential

\delta E_{qu} = \int n_{tot(q,t)}\,\delta V_{qu(q,t)}\,dV, (9)

following the procedure described in reference [8], are expressed as

\delta E_{qu(\lambda)} \cong \frac{\hbar^2}{2m}\left(\frac{2\pi}{\lambda}\right)^2 (10)

that, in 3D space, reads as

\delta E_{qu(\lambda)} \cong \frac{\hbar^2}{2m}\,k_ik_i = \frac{\hbar^2}{2m}|k|^2. (11)
The outcome illustrated by Equation (11) indicates that the energy stemming from
fluctuations in the mass density increases inversely with the square of their wavelength \lambda.
Moreover, since fluctuations in the quantum potential with extremely short wavelengths
diverge for \lambda \rightarrow 0, they can lead to a finite contribution even when the noise amplitudes
approach zero (i.e., T \rightarrow 0). This situation raises concerns regarding the achievement of
the deterministic zero-noise limit (2)–(4) that represents quantum mechanics.
This occurs because the output of the quantum potential, due to its second-derivative
structure, is dependent on the correlation distance of the noise. Consequently, if we must
nullify the fluctuations in the quantum potential in order to achieve convergence to the
deterministic limit (2)–(4) of quantum mechanics for T \rightarrow 0, it follows that a condition on
the correlation function of the quantum potential noise for \lambda \rightarrow 0 arises [9]. The deriva-
tion of the shape of the correlation function G(\lambda) involves tedious stochastic calculations [9],
which can be obtained by considering the probability of uncorrelated fluctuations occur-
ring at increasingly shorter distances.
A simpler and more straightforward approach to calculating G(\lambda) is through the
spectrum S(k) of the noise, which reads as [8]

S_{(k)} = probability_{(k)} = \exp\left(-\frac{\delta E_{qu(\lambda)}}{kT}\right) \cong \exp\left(-\left(\lambda_c k\right)^2\right) = \exp\left(-\left(\frac{2\pi\lambda_c}{\lambda}\right)^2\right) (12)

In Equation (12), k represents the Boltzmann constant and T signifies the temper-
ature (mean amplitude parameter) of mass density fluctuations. It is worth noting that
Equation (12) exhibits a non-white characteristic, with the probability of wavelengths \lambda
smaller than \lambda_c rapidly approaching zero.
From (12), G(\lambda) reads as [19,22]

G_{(\lambda)} \propto \int_{-\infty}^{+\infty} \exp[ik\lambda]\,S_{(k)}\,dk \propto \int_{-\infty}^{+\infty} \exp[ik\lambda]\exp\left(-\left(\lambda_c k\right)^2\right)dk \propto \frac{\pi^{1/2}}{\lambda_c}\exp\left(-\left(\frac{\lambda}{2\lambda_c}\right)^2\right). (13)
where

\lambda_c = \frac{\hbar}{(2mkT)^{1/2}}. (14)

Up to a factor of order 2^{1/2}, \lambda_c represents the De Broglie length \lambda_{DB} = \hbar/\langle p\rangle, where
\langle p\rangle = (mkT)^{1/2} is the mean momentum of the mass density fluctuations, observed as an
ideal gas of particles. It is noteworthy that the De Broglie length corresponds to the wave-
length associated with the momentum of mass density fluctuations behaving as waves (in
accordance with the Lorentz transformation). Expression (12) reveals that uncorrelated
mass density fluctuations are not capable of manifesting at distances increasingly shorter
than \lambda_c. Consequently, we uncover a new role of the quantum potential in an open sys-
tem: gradually suppressing fluctuations (due to their sharply increasing energy) on a mi-
croscopic scale. This elucidates the empirical observation that the micro-scale is governed
by quantum mechanics.
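As a numerical cross-check of the link between the spectrum (12) and the correlation function (13), the short sketch below Fourier-transforms a Gaussian spectrum and compares the result against a Gaussian correlation of width 2\lambda_c. The functional forms and the unit value of \lambda_c are assumptions of this illustration, not quantities taken from the paper.

```python
import numpy as np

# Illustrative check: the correlation function G(lambda) is the Fourier
# transform of the noise spectrum S(k) = exp(-(lc*k)^2), eq. (12);
# analytically this yields G ~ exp(-(lam/(2*lc))^2), eq. (13).

lc = 1.0                              # correlation length lambda_c (arb. units)
k = np.linspace(-40, 40, 20001)       # wavenumber grid (spectrum ~0 at edges)
dk = k[1] - k[0]
S = np.exp(-(lc * k) ** 2)            # non-white spectrum

def G(lam):
    # direct evaluation of the Fourier integral of the (even) spectrum
    return np.sum(np.cos(k * lam) * S) * dk

lams = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
num = np.array([G(l) for l in lams])
num /= num[0]                                  # normalize so G(0) = 1
ana = np.exp(-(lams / (2 * lc)) ** 2)          # predicted Gaussian correlation

assert np.allclose(num, ana, atol=1e-6)
print("numerical:", num)
print("analytic :", ana)
```

The numerically transformed spectrum reproduces the Gaussian decay of the correlation over a distance of order \lambda_c, which is the non-white feature the text emphasizes.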
This probability-mediated suppression enables the proposition of conventional
quantum mechanics as the zero-noise ‘deterministic’ limit of the stochastic quantum hy-
drodynamic model (SQHM). Furthermore, since this phenomenon applies to systems
with a physical length significantly smaller than the De Broglie length \lambda_c, the direct
transposition of quantum mechanical behavior to macroscopic large-scale scenarios is not
feasible at T > 0. This is because the De Broglie length \lambda_c, within the framework of the
SQHM, disrupts scale invariance.
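To give a feel for how \lambda_c separates the quantum micro-scale from the classical macro-scale, the following illustrative computation evaluates \lambda_c = \hbar/(2mkT)^{1/2}, the form of Equation (14), for an electron and for a hypothetical picogram dust grain at room temperature; the grain mass is an example value, not a figure from the paper.

```python
import math

# Illustrative numbers: the length lambda_c = hbar / sqrt(2 m k T), below which
# correlated (quantum) behavior survives, for two very different masses at 300 K.

hbar = 1.054571817e-34   # reduced Planck constant, J s
kB = 1.380649e-23        # Boltzmann constant, J/K
T = 300.0                # temperature, K

def lambda_c(m):
    return hbar / math.sqrt(2 * m * kB * T)

m_e = 9.1093837015e-31   # electron mass, kg
m_grain = 1e-15          # hypothetical picogram dust grain, kg

print(f"electron:       lambda_c = {lambda_c(m_e):.3e} m")    # nanometer scale
print(f"picogram grain: lambda_c = {lambda_c(m_grain):.3e} m")  # far sub-atomic
```

The electron's \lambda_c comes out at the nanometer scale, while the grain's is many orders of magnitude below atomic size, illustrating why scale invariance is broken and macroscopic bodies cannot inherit coherent quantum behavior.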
In the presence of GBN, which generates mass density fluctuations, the mass density
distribution (MDD) |\psi|^2 becomes a stochastic function denoted as n, where
\lim_{T\to 0} n = |\psi|^2. Based on this assumption, we can conceptually separate n into two
parts: n = \bar n + \delta n, where \delta n is the fluctuating part and \bar n is the regular part.
All these variables are connected by the limiting condition

\lim_{T\to 0}\bar n = \lim_{T\to 0} n = |\psi|^2.
Moreover, the features of the Madelung quantum potential, which fluctuate in the
presence of stochastic noise, can be determined by positing it as comprising a regular com-
ponent V_{qu(\bar n)} (to be defined) along with a fluctuating component V_{st}, such as

V_{qu(n)} = -\frac{\hbar^2}{2m}\,n^{-1/2}\,\partial_i\partial_i\,n^{1/2} = V_{qu(\bar n)} + V_{st} (15)

where the stochastic part of the quantum potential V_{st} results in the force noise

-\partial_i V_{st} = m\,\varpi_{(q,t,T)} (16)

leading to the stochastic motion equation

\ddot q_{j(t)} = -\frac{1}{m}\,\partial_j\left(V_{(q)} + V_{qu(\bar n)}\right) + \varpi_{(q,t,T)}. (17)
Moreover, the regular part V_{qu(\bar n)} for microscopic systems (L/\lambda_c \ll 1), without loss
of generality, can be reorganized as

V_{qu(\bar n)} = -\frac{\hbar^2}{2m}\,\bar n^{-1/2}\,\partial_i\partial_i\,\bar n^{1/2} = -\frac{\hbar^2}{2m}\,\rho^{-1/2}\,\partial_i\partial_i\,\rho^{1/2} + \Delta V = V_{qu(\rho)} + \Delta V (18)
leading to the motion equation

\ddot q_{j(t)} = -\frac{1}{m}\,\partial_j\left(V_{(q)} + V_{qu(\rho)} + \Delta V\right) + \varpi_{(q,t,T)} (19)

where \rho_{(q,t)} represents the probability mass density function (PMD) associated with the
stochastic process (17) [23], which, in the deterministic limit, adheres to the condition

\lim_{T\to 0}\rho_{(q,t)} = \lim_{T\to 0}\bar n = \lim_{T\to 0} n = |\psi|^2.
For a case sufficiently general to be practically relevant, it can be assumed that the
correlation function of \varpi_{(q,t,T)} possesses zero correlation time, is isotropic in space, and
is independent among different coordinates, taking the form

\lim_{T\to 0}\left\langle \varpi_{(q_\alpha,t)},\varpi_{(q_\beta+\lambda,\,t+\tau)}\right\rangle \cong \left\langle \varpi_{(q_\alpha)},\varpi_{(q_\beta)}\right\rangle_{(T)} G(\lambda)\,\delta_{\alpha\beta}\,\delta(\tau) (20)

with

\lim_{T\to 0}\left\langle \varpi_{(q_\alpha)},\varpi_{(q_\beta)}\right\rangle_{(T)} = 0. (21)
Furthermore, given that for microscopic systems (L/\lambda_c \ll 1)

\lim_{T\to 0} G(\lambda) \cong \frac{1}{\lambda_c}\exp\left(-\left(\frac{\lambda}{2\lambda_c}\right)^2\right) \cong \frac{1}{\lambda_c} = \frac{(2mkT)^{1/2}}{\hbar} (22)

it follows that

\lim_{T\to 0}\left\langle \varpi_{(q_\alpha,t)},\varpi_{(q_\beta+\lambda,\,t+\tau)}\right\rangle \approx \frac{(2mkT)^{1/2}}{\hbar}\left\langle \varpi_{(q_\alpha)},\varpi_{(q_\beta)}\right\rangle_{(T)}\delta_{\alpha\beta}\,\delta(\tau) (23)
and the motion described by Equation (17) takes the stochastic form of the Markov process
[19]

\ddot q_{j(t)} = -\frac{1}{m}\,\frac{\partial\left(V_{(q)} + V_{qu(\rho)}\right)}{\partial q_j} - \kappa\,\dot q_{j(t)} + \kappa\,D^{1/2}\,\xi_{(t)} (24)

where

D = \gamma_D\left(\frac{kT}{2m}\right)^{1/2}\frac{L^2}{2\lambda_c} (25)

where \gamma_D is a non-zero pure number.
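A minimal numerical sketch of the Markov process (24) can be obtained with the Euler–Maruyama scheme. The version below uses a harmonic Hamiltonian potential and omits the quantum potential force, a simplification appropriate only in the classical regime discussed in Section 2.1; all parameter values are illustrative and not taken from the paper.

```python
import numpy as np

# Euler-Maruyama sketch of eq. (24) for one degree of freedom in a harmonic
# potential V = m*omega^2*q^2/2, with the quantum potential force omitted
# (classical regime of Section 2.1). Parameters are illustrative only.

rng = np.random.default_rng(0)
m, kappa, D, omega = 1.0, 0.5, 0.2, 1.0
dt, n_steps = 1e-3, 200_000
xi = rng.standard_normal(n_steps) / np.sqrt(dt)   # discretized white noise xi(t)

q, v = 1.0, 0.0                                   # initial displacement
traj = np.empty(n_steps)
for i in range(n_steps):
    force = -omega**2 * q                         # -(1/m) dV/dq
    v += (force - kappa * v + kappa * np.sqrt(D) * xi[i]) * dt
    q += v * dt
    traj[i] = q

# the drag and the noise destroy the coherent oscillation: the late-time
# trajectory fluctuates around zero instead of oscillating deterministically
print("late-time mean:", traj[n_steps // 2:].mean())
```

The drag term -\kappa\dot q damps the deterministic oscillation while the noise sustains stationary fluctuations, which is the qualitative behavior the text attributes to the stochastic sequence of noise inputs.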
In this case, \rho_{(q,t)} is the probability mass density determined by the probability
transition function (PTF) P(q,z|t,0) of the Markov process (24) [23] through the rela-
tion \rho_{(q,t)} = \int P(q,z|t,0)\,\rho_{(z,0)}\,d^{6N}z, where P(q,z|t,0) obeys the Smolu-
chowski conservation equation [23]

P(q,q_0|t+\tau,t_0) = \int_{-\infty}^{+\infty} P(q,z|t+\tau,t)\,P(z,q_0|t,t_0)\,d^{6N}z. (26)
So, in summary, for the complex field

\psi = \rho^{1/2}\,e^{iS/\hbar} (27)

the quantum–stochastic hydrodynamic system of equations reads as

\rho_{(q,t)} = \int P(q,z|t,0)\,\rho_{(z,0)}\,d^{6N}z (28)

m\,\dot q_i = p_i = \partial_i S_{(q,t)}, (29)

\ddot q_{j(t)} = -\frac{1}{m}\,\frac{\partial\left(V_{(q)} + V_{qu(\rho)}\right)}{\partial q_j} - \kappa\,\dot q_{j(t)} + \kappa\,D^{1/2}\,\xi_{(t)} (30)

V_{qu(\rho)} = -\frac{\hbar^2}{2m}\,\rho^{-1/2}\,\partial_i\partial_i\,\rho^{1/2}, (31)

where

S_{(q,t)} = \frac{\hbar}{2i}\ln\frac{\psi}{\psi^*}. (32)
In the context of (28)–(31), \psi, defined by (27) and determined by solving Equations
(28)–(31), does not denote the quantum wavefunction; rather, it represents the probability
wave defined by the stochastic generalization of quantum mechanics. With the exception
of some specific cases (see (37)), this probability wave adheres to the limit

\lim_{T\to 0}\psi = \psi_{(quantum)} (33)

where \psi_{(quantum)} denotes the wavefunction of standard quantum mechanics.
It is worth noting that the SQHA Equations (28)–(31) show that gravitational dark
energy leads to a self-fluctuating system in which noise is an intrinsic property of the
spacetime dynamical geometry and does not require the presence of an environment.
The agreement between the SQHM and well-established quantum theory outputs
can be additionally validated by applying it to mesoscale systems (L < \lambda_c). In this sce-
nario, the SQHM reveals that \psi adheres to a Langevin–Schrödinger-
like equation, which, for time-independent systems, is expressed as follows

i\hbar\,\partial_t\psi = \left(-\frac{\hbar^2}{2m}\,\partial_i\partial_i + V_{(q)} + \kappa\left(S_{(q,t)} + Const\right) - \kappa\,m\,D^{1/2}\,q\,\xi_{(t)} + \frac{\hbar^2}{2m}\,\frac{Q_{(q,t)}}{|\psi|^2}\right)\psi (34)
that by using (32) can be readjusted as:

i\hbar\,\partial_t\psi = \left(-\frac{\hbar^2}{2m}\,\partial_i\partial_i + V_{(q)} + \frac{\hbar\kappa}{2i}\ln\frac{\psi}{\psi^*} - \kappa\,m\,D^{1/2}\,q\,\xi_{(t)} + \frac{\hbar^2}{2m}\,\frac{Q_{(q,t)}}{|\psi|^2}\right)\psi (35)
The term Q_{(q,t)} accounts for contributions from higher-order cumulants in the mass
conservation equation derived from the Smoluchowski equation using Pontryagin’s
method ([19] and references therein), and holds the property

\lim_{D\to 0} Q_{(q,t)} = 0.
Moreover, the realization of quantum mechanics is ensured by introducing the semi-
empirical parameter \alpha, close to zero noise, defined by the relation [19]

\lim_{T\to 0}\alpha \cong \lim_{T\to 0}\frac{2m\kappa D}{\hbar} \propto \lim_{T\to 0}\kappa\,\gamma_D\,\frac{mkT}{\hbar^2}\,L^2, (36)

characterizing the system’s dissipation ability and satisfying the condition \lim_{T\to 0}\alpha = 0.
Although in the framework of the quantum hydrodynamic formalism quantum me-
chanics embodies the deterministic limit of the theory without dissipation, it is interesting
to examine a scenario where, nearing the zero-noise threshold, the drag term \kappa\,\dot q_{j(t)} in
Equation (24) remains significant and non-zero. This occurs particularly when the param-
eter \alpha remains relatively high while approaching the quantum limit L < \lambda_c, such as

\lim_{T\to 0\ \mathrm{or}\ L/\lambda_c\to 0}\alpha = \alpha_0 \neq 0. (37)

In this case, as shown in the appendix, Equation (24) leads to the quantum Brownian
motion equation.
The emergence of the Schrödinger–Langevin equation through the stochastic exten-
sion of the quantum hydrodynamic model is noteworthy, showcasing a precise alignment
with traditional outcomes in the literature.
2.1. Emerging Classical Mechanics in Large-Sized Systems
When the quantum potential is manually nullified in the equations of motion of
quantum hydrodynamics (24), the classical equation of motion emerges [17]. However,
despite the apparent validity of this claim, such an operation is not mathematically sound,
as it alters the essential characteristics of the quantum hydrodynamic equations. Specifi-
cally, this action leads to the elimination of stationary configurations, i.e., eigenstates, as
the balancing force of the quantum potential against the Hamiltonian force [24], which
establishes their stationary mass density distribution condition, is nullified. Conse-
quently, even a small quantum potential cannot be disregarded in conventional quantum
mechanics as described by the zero-noise ‘deterministic’ quantum hydrodynamic model
(2)–(4).
Conversely, in the stochastic generalization, it is possible to correctly neglect the
quantum potential in (3) and (24) when its force is much smaller than the force noise \varpi,
such as

\frac{1}{m}\left|\partial_i V_{qu(\rho)}\right| \ll \left|\varpi_{(q,t,T)}\right|
that by (25) leads to the condition

\frac{1}{m}\left|\partial_i V_{qu(\rho)}\right| \ll \kappa\,D^{1/2} = \kappa\,\gamma_D^{1/2}\left(\left(\frac{kT}{2m}\right)^{1/2}\frac{L^2}{2\lambda_c}\right)^{1/2}, (38)

and hence, in a coarse-grained description with elemental cell side \Delta q, to

\lim_{q\to\Delta q}\frac{1}{m}\left|\partial_i V_{qu(\rho)}\right| \ll \kappa\,\gamma_D^{1/2}\left(\left(\frac{kT}{2m}\right)^{1/2}\frac{L^2}{2\lambda_c}\right)^{1/2}, (39)

where L is the physical length of the system.
It is worth noting that, despite the noise $\varpi_{(q,t,T)}$ having a zero mean, the mean of the fluctuations of the quantum potential, denoted as $\langle\kappa\,V_{st(n,S)}\rangle$, is not null. This non-null mean generates the dissipative force $-\kappa\,\dot q_{(t)}$ in Equation (24). Consequently, the stochastic sequence of noise inputs disrupts the coherent evolution of the quantum
Quantum Rep. 2024, 6 288
superposition of states, causing it to decay to a stationary mass density distribution with $\dot{\overline q}_{(t)} = 0$. Moreover, by observing that the stochastic noise

$\kappa\gamma\left(\frac{2\kappa\gamma D}{m}\right)^{1/2}\frac{L}{\lambda_c}\,\xi_{(t)}$  (40)
grows with the size of the system, for macroscopic systems (i.e., $L/\lambda_c \to \infty$), condition (38) is satisfied if

$\lim_{L/\lambda_c\to\infty}\frac{1}{m}\left|\partial_i V_{qu}(n)_{(q)}\right| < \infty$ .  (41)
To aain a comprehensive portrayal devoid of quantum correlations for any large-
scale system of physical length
L
, a stricter criterion must be enforced, such as
11 0
q i qu( ) q i qu( ) i qu( )
(q) (q) (q)
cc
lim V lim V V
mm
ρ ρρ
λλ
→∞ →∞
= ∂∂ =
.
(42)
Therefore, acknowledging that

$\lim_{q\to\infty} V_{qu(q)} \propto q^{2}$  (43)

holds for linear systems, from the standpoint of the SQHM it promptly follows that they are incapable of engendering the macroscopic classical phase.
In general, as the Hamiltonian potential strengthens, the wavefunction localization
increases, and the quantum potential behavior at infinity becomes more prominent.
This is demonstrable by considering the MDD

$|\psi|^{2} \propto \exp\left(-P_{k(q)}\right)$ ,  (44)

where $P_{k(q)}$ is a polynomial of order $k$; it becomes evident that a finite range of quantum potential interaction is achieved for $k < \frac{3}{2}$.
Hence, linear systems, characterized by $k = 2$, exhibit an infinite range of quantum potential action, as do the ballistic Gaussian coherent states.
Conversely, for gas phases where particles interact via the Lennard–Jones potential, whose long-distance wavefunction reads as [25]

$\lim_{r\to\infty}|\psi| \propto a\,r^{-1/2}$ ,  (45)

the quantum potential reads as

$\lim_{r\to\infty}V_{qu}(\rho) = -\frac{\hbar^{2}}{2m}\lim_{r\to\infty}\frac{\partial_r^{2}|\psi|}{|\psi|} = -\frac{3\hbar^{2}}{8m}\frac{1}{r^{2}}$ ,  (46)

leading to the quantum force

$\lim_{r\to\infty}\left(-\frac{1}{m}\,\partial_r V_{qu}(\rho)\right) = -\frac{3\hbar^{2}}{4m^{2}}\frac{1}{r^{3}} \to 0$ ,  (47)
)
so that satisfying conditions (38) and (42) can lead to large-scale classical behavior [19] in a sufficiently rarefied phase.
It is noteworthy that in Equation (46) the quantum potential output aligns with the hard-sphere potential within the 'pseudo-potential Hamiltonian model' of the Gross–Pitaevskii equation [26,27], where $a$ represents the boson–boson s-wave scattering length (entering the coupling $4\pi\hbar^{2}a/m$).
By observing that, to meet condition (42), it is sufficient to require that

$\frac{1}{m}\int_0^{\infty} r\left|\partial_i V_{qu}(\rho)_{(r,\theta,\varphi)}\right| dr < \infty \quad \forall\,\theta,\varphi$ ,  (48)

it is possible to define the range of interaction of the quantum potential $\lambda_{qu}$ as [19]

$\lambda_{qu} = \lambda_c\,\frac{\int_0^{\infty} r\left|\partial_i V_{qu}(\rho)_{(r,\theta,\varphi)}\right| dr}{\lambda_c^{2}\left|\partial_i V_{qu}(\rho)_{(\lambda_c,\theta,\varphi)}\right|} = \lambda_c\,I_{qu}\,, \qquad I_{qu} > 1$ .  (49)
Relation (49) provides a measure of the physical length associated with quantum
non-local interactions.
It is worth mentioning that quantum non-local interactions extend up to a distance of the order of the larger of the two lengths $\lambda_{qu}$ and $\lambda_c$. Below $\lambda_c$, owing to noise damping, even a feeble quantum potential survives. Above $\lambda_c$ but below $\lambda_{qu}$, the quantum potential is strong enough to overcome the fluctuations.
The quantum non-local effects can be extended by increasing $\lambda_c$, which can be accomplished by lowering the temperature or the mass of the bodies (see (14)), or by increasing $\lambda_{qu}$, which grows with a stronger Hamiltonian potential. In the latter case, for instance, larger values of $\lambda_{qu}$ can be obtained by extending the linear range of the Hamiltonian interaction between particles (see (43) and (44)).
2.2. The Lindemann Constant for the Quantum Lattice-to-Classical Fluid Transition
For a system of Lennard–Jones interacting particles, the quantum potential range of interaction $\lambda_{qu}$ reads as

$\lambda_{qu} \cong \lambda_c\left(1+\frac{1}{\lambda_c}\int_d^{\infty}\frac{d^{4}}{q^{4}}\,dq\right) = \lambda_c + \frac{d}{3}$ ,  (50)
where $d = r_0(1+\varepsilon) = r_0 + \Delta$ (with $\varepsilon = \Delta/r_0$) represents the distance up to which the inter-atomic force is approximately linear, and $r_0$ denotes the atomic equilibrium distance.
Experimental validation of the physical significance of the quantum potential length of interaction is evident during the quantum-to-classical transition in a crystalline solid at its melting point. This transition occurs as the system shifts from a quantum lattice to a fluid amorphous classical phase.
If we assume that, within the quantum lattice, the atomic wavefunction extends over a distance smaller than the range of interaction of the quantum potential, and if, according to the SQHM perspective, the classical phase of an amorphous fluid is distinguished by molecular wavefunctions extending beyond the influence of the quantum potential (thus preventing the tails from reconstructing quantum coherence), we can infer that the melting point occurs when the variance of the wavefunction equals $\lambda_{qu} - r_0$.
Drawing from these assumptions, the Lindemann constant $L_C$, as defined by [23]

$L_C = \frac{\text{wavefunction variance at transition}}{r_0}$ ,  (51)

can be expressed as $L_C = \frac{\lambda_{qu}-r_0}{r_0}$. Moreover, it can be theoretically computed by
$\frac{\lambda_{qu}}{r_0} \cong \frac{\lambda_c}{r_0} + \left(1+\varepsilon\right)\left(1+\frac{\lambda_c}{3r_0}\right)^{3}$ ,  (52)
which, with the typical values $\varepsilon \approx 0.05 \div 0.1$ and $\frac{\lambda_c}{r_0} \cong 0.08$, leads to

$L_C \approx 0.217 \div 0.267$ .  (53)
A more accurate evaluation, employing the potential well approximation for the molecular interaction [28,29], yields $\lambda_{qu} \cong 1.2357\,r_0$ and provides a Lindemann constant value of $L_C = 0.2357$. This value aligns with measured values, which fall within the range of 0.2 to 0.25 [23].
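As a numeric spot-check of (52)-(53), assuming the reconstructed form $\lambda_{qu}/r_0 \cong \lambda_c/r_0 + (1+\varepsilon)(1+\lambda_c/3r_0)^{3}$ and $L_C = \lambda_{qu}/r_0 - 1$, with the parameter values quoted in the text:

```python
# Spot-check of the Lindemann range (53); formula and parameters as assumed above.
lam_over_r0 = 0.08   # lambda_c / r0, from the text

def lindemann(eps, lam=lam_over_r0):
    # L_C = lambda_qu / r0 - 1
    return lam + (1.0 + eps) * (1.0 + lam / 3.0) ** 3 - 1.0

lo, hi = lindemann(0.05), lindemann(0.10)
print(lo, hi)   # ≈ 0.216 and ≈ 0.270, reproducing (53) within rounding
```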
2.3. The Fluid–Superfluid $^4$He $\lambda$-Transition
Given that the De Broglie distance $\lambda_c$ is temperature-dependent, its impact on the fluid–superfluid transition in monomolecular liquids at extremely low temperatures, as observed in $^4$He, can be identified. The approach to this scenario is elaborated in reference [29], where, for the $^4$He-$^4$He interaction, the potential well is assumed to be

$V(r) = \infty\,, \qquad r < \sigma$ ,  (54)

$V(r) = -0.82\,U\,, \qquad \sigma < r < \sigma + 2\Delta$ ,  (55)

$V(r) = 0\,, \qquad r > \sigma + 2\Delta$ .  (56)
In this context, $U = 10.9\,k_B\,\mathrm{K} = 1.5\times10^{-22}\ \mathrm{J}$ represents the Lennard–Jones potential depth, and $\sigma + \Delta = 3.7\times10^{-10}\ \mathrm{m}$ denotes the mean $^4$He-$^4$He inter-atomic distance, where $\Delta = 1.54\times10^{-10}\ \mathrm{m}$.
Ideally, at the superfluid transition, the De Broglie length attains approximately the mean $^4$He-$^4$He atomic distance. However, the induction of the superfluid $^4$He-$^4$He state occurs as soon as the De Broglie length overlaps with the $^4$He-$^4$He wavefunctions within the potential depth. Therefore, we observe the gradual increase of the $^4$He superfluid concentration within the interval

$\sigma < \lambda_c < \sigma + 2\Delta$ .  (57)
For $\lambda_c < \sigma$, no superfluidity occurs, as all inter-atomic well potentials lie beyond the damped-noise distance of the De Broglie length.
Conversely, for $\lambda_c > \sigma + 2\Delta$, 100% of the molecular interactions are within the zone of quantum coherence, resulting in all molecules of $^4$He being in the superfluid state.
Therefore, given that

$\lambda_c = \frac{2\hbar}{\left(2mkT\right)^{1/2}}$ ,  (58)
for a $^4$He mass of $m_{He} = 6.6\times10^{-27}\ \mathrm{kg}$, the superfluid ratio of 100% is reached at the temperature

$T_{100\%} \approx \frac{2\hbar^{2}}{mk\left(\sigma+2\Delta\right)^{2}} = 0.92\ \mathrm{K}$ ,  (59)
consistent with the experimental data from reference [30], which give $T_{100\%} = 1.0\ \mathrm{K}$.
Moreover, when the superfluid/normal $^4$He density ratio is 50%, the temperature $T_{50\%}$ is given by

$T_{50\%} \approx \frac{2\hbar^{2}}{mk\left(\sigma+\Delta\right)^{2}} = 1.92\ \mathrm{K}$ .  (60)
This observation is further supported by experimental data, as reported in reference [30], confirming $T_{50\%} = 1.95\ \mathrm{K}$.
Furthermore, utilizing the definition that at the critical $\lambda$-point of $^4$He the superfluid ratio is 38%, such that $\lambda_c(38\%) = \sigma + 0.38\,(2\Delta)$, the transition temperature $T_\lambda$ is determined as follows:

$T_\lambda \approx \frac{2\hbar^{2}}{mk\left(\sigma+0.76\,\Delta\right)^{2}} = 2.20\ \mathrm{K}$ ,  (61)
in good agreement with the measured superfluid transition temperature of $2.17\ \mathrm{K}$.
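The three transition temperatures can be reproduced numerically from $T = 2\hbar^{2}/(mk\lambda_c^{2})$, the inversion of the relation (58) as reconstructed here; small residual differences from the quoted 0.92 K and 1.92 K presumably reflect rounding in the quoted well parameters.

```python
# Transition temperatures from T = 2 hbar^2 / (m k lambda_c^2) with the text's parameters.
hbar = 1.054571817e-34   # J s
k = 1.380649e-23         # J/K
m = 6.6e-27              # kg, 4He mass quoted in the text
delta = 1.54e-10         # m
sigma = 3.7e-10 - delta  # m, since sigma + delta = 3.7e-10 m

def T_of(lam_c):
    return 2.0 * hbar**2 / (m * k * lam_c**2)

T100 = T_of(sigma + 2.0 * delta)          # lambda_c = sigma + 2*delta
T50 = T_of(sigma + delta)                 # lambda_c = sigma + delta
Tlam = T_of(sigma + 0.38 * 2.0 * delta)   # 38% superfluid fraction

print(T100, T50, Tlam)   # ≈ 0.89 K, 1.78 K, 2.20 K
```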
As a final remark, it is worth noting that there are two ways to establish quantum
macroscopic behavior. One approach involves lowering the temperature, effectively in-
creasing the De Broglie length. The second approach aims to increase the strength of the
Hamiltonian interaction among the particles within the system, thereby extending the ef-
fective range of the quantum potential.
Regarding the laer, it is important to highlight that the limited strength of the Ham-
iltonian interaction over long distances is the key factor allowing classical behavior to
manifest. When analyzing systems governed by a quadratic or stronger Hamiltonian po-
tential, the range of interaction associated with the quantum potential becomes infinite, or
at least remains so as long as the linear Hamiltonian interaction is maintained, as can be
inferred from (43). Consequently, achieving a classical phase becomes unaainable when
the system’s physical length is smaller than the typical distance up to which the interac-
tion is linear.
In this particular scenario, we exclusively observe the complete manifestation of classical behavior on a macroscopic scale within systems featuring interactions that are sufficiently weak, weaker even than linear interactions. This condition is crucial, as emphasized in Sections 2.6.2 and 2.6.3, where the chaotic nature of classical motion trajectories is essential for obtaining the Born rule, ensuring the possibility of reaching any eigenstate forming the superposition of states.
Hence, in this scenario, where the quantum potential is incapable of exerting its non-local influence over extensive distances, classical mechanics arises as a decoherent manifestation of quantum mechanics on the macroscopic scale, in the presence of a fluctuating spacetime background.
2.4. Measurement Process and the Finite Range of Non-Local Quantum Potential Interactions
Throughout the course of measurement, there exists the possibility of a conventional
quantum interaction between the sensing component within the experimental setup and
the system under examination. This interaction concludes when the measuring apparatus
is relocated to a considerable distance from the system. Within the SQHM framework, this
relocation is imperative and must surpass the specified distances $\lambda_c$ and $\lambda_{qu}$.
Following this relocation, the measuring apparatus takes charge of interpreting and managing the interaction output. This typically involves a classical, irreversible process characterized by a distinct temporal progression, culminating in the determination of the macroscopic measurement result.
Consequently, the phenomenon of decoherence assumes a pivotal role in the meas-
urement process. Decoherence facilitates the establishment of a large-scale classical frame-
work, ensuring authentic quantum isolation between the measuring apparatus and the
system, both pre- and post-measurement event.
This quantum-isolated state, both at the initial and final stages, holds paramount sig-
nificance in determining the temporal duration of the measurement and in amassing sta-
tistical data through a series of independent repeated measurements.
It is crucial to underscore that, within the confines of the SQHM, merely relocating the measured system to an infinite distance before and after the measurement, as commonly practiced, falls short of guaranteeing the independence of the system and the measuring apparatus if either $\lambda_c = \infty$ or $\lambda_{qu} = \infty$. Therefore, the existence of a macroscopic classical reality remains indispensable for the execution of the measurement process.
2.5. Maximum Measurement Precision in Fluctuating Spacetime
Any quantum theory aiming to elucidate the evolution of a physical system across
various scales, at any order of magnitude, must inherently address the transition from
quantum mechanical properties to the emergent classical behavior observed at larger
magnitudes. The fundamental disparities between the two descriptions are encapsulated by the minimum uncertainty principle in quantum mechanics, signifying the inherent incompatibility of concurrently measuring conjugated variables, and by the finite speed of propagation of interactions and information in local classical relativistic mechanics, which violates quantum mechanics' principle of non-locality, whereby interactions occur instantaneously over distances (in the Schrödinger formulation).
If a system strictly adheres to the deterministic principles of quantum mechanics within a distance smaller than $\lambda_c$, where its subparts lack individual identities, it then follows that an independent observer seeking information about the system must maintain a separation distance greater than a certain distance $\Delta q \propto \lambda_c$, both before and after the process.
Therefore, due to the finite speed of propagation of interactions and information, in
the framework of SQHM, the process cannot be executed in a timeframe shorter than
$\tau_{min} > \frac{\Delta q}{c} \propto \frac{\lambda_c}{c} \propto \frac{2\hbar}{c\left(2mkT\right)^{1/2}}$ .  (62)
Furthermore, considering the Gaussian noise in (24), with the diffusion coefficient proportional to $kT$, the mean value of the energy fluctuation is $\langle\delta E_{(T)}\rangle = \frac{kT}{2}$ per degree of freedom. Moreover, if we assume the relativistic point of view, where the energy of the particle (both kinetic and potential) is encapsulated in its mass, with $mc^{2} \gg kT$, it follows that a scalar structureless particle with mass $m$ exhibits an energy variance $\Delta E$ of

$\Delta E \cong \left\langle\left(mc^{2}+\delta E\right)^{2}-\left(mc^{2}\right)^{2}\right\rangle^{1/2} \cong \left\langle 2mc^{2}\,\delta E\right\rangle^{1/2} \cong \left(mc^{2}kT\right)^{1/2}$ .  (63)
Equation (63) can be beer understood by employing the mean energy
E
of the
SchrodingerLangevin equation (see (A4) in the Appendix A) in the final stationary state
2
pot qu
E E V mc= +≡
(where the superscript bars represent mean values), along with
the stochastic contribution
( )
2
12
2
/
(t)
kT
| qm D |
ψ κξ ψ
< >=
, resulting in
2
22
pot qu
kT kT
E E V mc= ++ +
.
Furthermore, from (63), it follows that

$\Delta E\,\Delta t > \Delta E\,\Delta\tau_{min} \propto \left(mc^{2}kT\right)^{1/2}\frac{\lambda_c}{c}$ .  (64)
It is noteworthy that the product $\Delta E\,\Delta\tau$ remains constant, as the increase in the energy variance with the square root of $T$ precisely offsets the corresponding decrease in the minimum acquisition time $\Delta\tau$. This outcome also holds true when establishing the maximum possible precision in measuring the position and momentum of a particle with mass $m$ in the SQHM regime.
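The temperature-independence of the product can be verified directly, using the scalings as reconstructed here: $\Delta E \cong (mc^{2}kT)^{1/2}$ and $\Delta\tau_{min} = \Delta q/c$ with $\Delta q = \hbar/(2(mkT)^{1/2})$; the electron mass is an arbitrary illustrative choice.

```python
import math

hbar = 1.054571817e-34
c = 2.99792458e8
k = 1.380649e-23
m = 9.1093837015e-31   # electron mass, an arbitrary test particle

for T in (1.0, 77.0, 300.0, 1e4):
    dE = math.sqrt(m * c**2 * k * T)            # energy variance (63)
    dq = hbar / (2.0 * math.sqrt(m * k * T))    # entangled-domain size
    product = dE * dq / c                       # Delta_E * Delta_tau_min
    print(T, product / (hbar / 2.0))            # ≈ 1.0 at every T
```

The growth of $\Delta E$ with $\sqrt{T}$ cancels the shrinkage of $\Delta q/c$ exactly, leaving $\hbar/2$ at all temperatures.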
If we acquire information about the spatial position of a particle with precision $\Delta L$, we effectively exclude the space beyond this distance from the quantum non-local interaction of the particle, and consequently

$\Delta q < \Delta L$ .  (65)
The variance $\Delta p$ of its relativistic momentum $p = \left(p_\mu p^\mu\right)^{1/2} = mc$, due to the fluctuations, reads as

$\Delta p \cong \left\langle\left(mc+\frac{\delta E}{c}\right)^{2}-\left(mc\right)^{2}\right\rangle^{1/2} \cong \left\langle 2m\,\delta E\right\rangle^{1/2} \cong \left(mkT\right)^{1/2}$ ,  (66)
and the maximum aainable precision reads as
12 12
2
//
qc
L p ( mkT ) ( mkT ) )
λ
∆∆ >
(67
)
Equating (64) and (67) to the quantum uncertainty value $\frac{\hbar}{2}$, such as

$\Delta L\,\Delta p > \left(mkT\right)^{1/2}\Delta q = \frac{\hbar}{2}$  (68)

and

$\Delta E\,\Delta t > \Delta E\,\Delta\tau_{min} = \left(mc^{2}kT\right)^{1/2}\frac{\Delta q}{c} = \frac{\hbar}{2}$ ,  (69)
it follows that $\Delta q = \frac{\lambda_c}{2\sqrt{2}}$ represents the physical length below which quantum entanglement is fully effective, and it signifies the physical length-scale below which the deterministic limit of the SQHM, specifically the realization of quantum mechanics, fully takes place.
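For a sense of scale, a hypothetical electron at room temperature, under the relation $\Delta q = \hbar/(2(mkT)^{1/2})$ as reconstructed here, has a fully entangled domain in the sub-nanometre range:

```python
import math

hbar = 1.054571817e-34
k = 1.380649e-23
m = 9.1093837015e-31   # electron mass, kg (illustrative choice)
T = 300.0              # K

dq = hbar / (2.0 * math.sqrt(m * k * T))
print(dq)   # ≈ 8.6e-10 m
```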
By performing the limit of (68) and (69) for $T = 0$ ($\lambda_c \to \infty$), within the non-relativistic limit ($c \to \infty$), it follows that

$\Delta\tau_{min} = \frac{\lambda_c}{2\sqrt{2}\,c} \to \infty$ ,  (70)

$\Delta E \cong \left(mc^{2}kT\right)^{1/2} = \frac{\sqrt{2}\,\hbar c}{\lambda_c} \to 0$ ,  (71)

$\Delta q = \frac{\lambda_c}{2\sqrt{2}} \to \infty$ ,  (72)

$\Delta p \cong \left(mkT\right)^{1/2} = \frac{\sqrt{2}\,\hbar}{\lambda_c} \to 0$ ,  (73)
and therefore, for the deterministic limit of conventional quantum mechanics, it results in

$\Delta E\,\Delta t > \Delta E\,\Delta\tau_{min} = \frac{\hbar}{2}$ ,  (74)

$\Delta L\,\Delta p > \left(mkT\right)^{1/2}\Delta q = \frac{\hbar}{2}$ .  (75)
By associating the maximum precision of measurements with the variance of corre-
sponding observables in quantum mechanics, (75) aligns with the concept of minimum
uncertainty in quantum mechanics, which arises from the deterministic limit of the
SQHM.
It is worth noting that, by (74), the SQHM extends the uncertainty relations to all conjugate variables of 4D spacetime. This extension is notable since, in conventional quantum mechanics, the energy–time uncertainty is deemed impossible due to the lack of a defined time operator.
Moreover, it is intriguing to observe that in the realm of quantum mechanics the minimum acquisition time for information is $\Delta\tau_{min} = \frac{\Delta q}{c}$. This minimum time, in the context of the low-velocity limit of classical quantum mechanics, results in $\Delta\tau_{min} = \frac{\Delta q}{c} \to \infty$. This result indicates that performing a measurement within a fully deterministic quantum mechanical global system is not feasible, as its duration would be infinite. Moreover, it must be noted that the Heisenberg minimum uncertainty relations refer to the quantum variance, even though the measurement is not possible and the corresponding precision cannot be defined in a perfectly quantum universe. Since the deterministic limit of the SQHM is reached through successive steps within the open SQHM regime, the limiting measurement precision is associated with the quantum variance of the corresponding observable.
Given that non-locality is restricted to domains with physical lengths on the order of $\frac{\lambda_c}{2\sqrt{2}}$, and information about a quantum system cannot be transmitted faster than the speed of light (otherwise it would violate the uncertainty principle), local realism is established within the coarse-grained macroscopic physics where domains of order $\lambda_c^{3}$ reduce to a point.
The paradox of spooky action at a distance is confined to microscopic distances (smaller than $\frac{\lambda_c}{2\sqrt{2}}$), where quantum mechanics is described within the low-velocity limit, assuming $c \to \infty$ and $\lambda_c \to \infty$. This leads to the apparently instantaneous transmission of interaction over a distance.
It is also noteworthy that in the presence of noise the measured precision undergoes a relativistic correction, as expressed by $\Delta E \approx \left\langle\left(mc^{2}+\frac{kT}{2}\right)^{2}-\left(mc^{2}\right)^{2}\right\rangle^{1/2} = \left(mc^{2}kT\right)^{1/2}\left(1+\frac{kT}{4mc^{2}}\right)^{1/2}$, resulting in the maximum precision in a quantum system subject to gravitational background noise ($T > 0$)

$\Delta E\,\Delta t > \frac{\hbar}{2}\left(1+\frac{kT}{4mc^{2}}\right)^{1/2}$  (76)

and

$\Delta L\,\Delta p > \frac{\hbar}{2}\left(1+\frac{kT}{4mc^{2}}\right)^{1/2}$ .  (77)
This correction can become significant for light particles (with $m \to 0$), but in quantum mechanics, at $T = 0$, the uncertainty relations remain unchanged.
2.6. Minimum Discrete Interval of Spacetime
Within the framework of the SQHM, incorporating the maximum precision of measurement in a fluctuating quantum system and the maximum attainable velocity, the speed of light,

$\Delta\dot x \le c$ ,  (78)

by (68), in a fluctuating vacuum with $T > 0$, possibly with classical large-scale behavior (enabling the presence of the measuring apparatus), it follows that

$\Delta\dot x = \frac{\Delta p}{m} = \frac{\hbar}{2m\,\Delta x}$ ,  (79)

leading to $c \ge \frac{\hbar}{2m\,\Delta x}$ and, consequently, to

$\Delta x > \frac{\hbar}{2mc} = \frac{R_c}{2}$ ,  (80)

where $R_c = \frac{\hbar}{mc}$ is the Compton length.
Identity (80) states that the highest possible concentration of a body’s mass is within
an elemental volume with a side length equal to half of its Compton wavelength.
This result holds significant implications for black hole (BH) formation. To form a
BH, all the mass must be contained within the gravitational radius
Rg
, giving rise to the
relationship
min
2
2
24
R
Gm c
Rgc
xr
=
>= =
, (81
)
which further leads to the condition

$\frac{R_c}{2R_g} = \frac{\hbar}{2mc}\,\frac{c^{2}}{2Gm} = \frac{\hbar c}{4Gm^{2}} = 2\pi\,\frac{m_p^{2}}{m^{2}} < 1$ ,  (82)

indicating that a BH's mass, due to quantum effects, cannot be smaller than $\sqrt{2\pi}\,m_p$, with $m_p = \left(\frac{\hbar c}{8\pi G}\right)^{1/2}$, to ensure that all its mass is confined within its gravitational radius.
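Numerically, using CODATA-level constants, the bound $m > (\hbar c/4G)^{1/2} = \sqrt{2\pi}\,m_p$ as reconstructed here evaluates to about $10^{-8}$ kg:

```python
import math

hbar = 1.054571817e-34
c = 2.99792458e8
G = 6.67430e-11

m_min = math.sqrt(hbar * c / (4.0 * G))           # minimal BH mass
m_p = math.sqrt(hbar * c / (8.0 * math.pi * G))   # reduced Planck mass
print(m_min, m_min / m_p)   # ≈ 1.09e-8 kg, ratio = sqrt(2*pi) ≈ 2.507
```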
The validity of the result (82) is substantiated by the gravitational effects produced
by the quantum mass distribution within spacetime. This demonstration elucidates that
when mass density is condensed into a sphere with a diameter smaller than half the
Compton wavelength, it engenders an outgoing quantum potential force that overcomes
the compressive gravitational force within a black hole [31,32].
Considering a Planck mass black hole as the lightest configuration, with its mass com-
pressed within a sphere of half the Compton wavelength, it logically follows that black
holes with masses greater than
p
m
[19] exhibit their mass as compressed into a sphere
of smaller diameter. Consequently, given the significance of elemental volume as the vol-
ume inside which content is uniformly distributed, the consideration of the Planck length
as the smallest discrete elemental volume of spacetime is not sustainable. This would
make it impossible to compress the mass of large black holes within a sphere of a smaller
diameter, consequently preventing the achievement of gravitational equilibrium [32].
This assumption conflicts with the fact that any existing black holes compress their
mass into a nucleus smaller than the Planck length [32].
This compression is feasible if spacetime discretization allows for elemental cells of smaller volume, thereby distinguishing between the minimum measurable distance and the minimum discrete element of distance in the spacetime lattice. In the simulation analogy, the maximum grid density is equivalent to the elemental cell of the spacetime.
Finally, it is worth noting that the current theory leads to the assumption that the elemental discrete spacetime distance corresponds to the Compton length of the maximum possible mass, which is the energy/mass of the universe. Consequently, we have a criterion to rationalize the mass of the universe (why it is not higher than its value), intricately linked to the minimum length of the discrete spacetime element. If the pre-Big-Bang black hole (PBBH) was generated by a fluctuation anomaly in an elemental cell of spacetime, it could not have a mass/energy content smaller than that which the universe possesses.
2.6.1. Dynamics of Wavefunction Collapse
The Markov process (24) can be described by the Smoluchowski equation for the Markov probability transition function (PTF) [23]

$P\left(q,q_0\middle|t+\tau,t_0\right) = \int_{-\infty}^{\infty} P\left(q,z\middle|\tau,t\right)P\left(z,q_0\middle|t-t_0,t_0\right)dz$ ,  (83)

where the PTF $P\left(q,z\middle|\tau,t\right)$ is the probability that, in the time interval $\tau$, the system is transferred from point $z$ to point $q$.
The conservation of the PMD shows that the PTF displaces the PMD according to the rule [23]

$\rho\left(q,t\right) = \int_{-\infty}^{\infty} P\left(q,z\middle|t,0\right)\rho\left(z,0\right)dz$ .  (84)
Generally, in the quantum case, Equation (83) cannot be reduced to a Fokker–Planck equation (FPE). The functional dependence of $V_{qu}(\rho)$ on $\rho(q,t)$, and hence of the PTF $P(q,z|t,0)$, produces non-Gaussian terms [19].
Nonetheless, if, at the initial time, $\rho(q,t_0)$ is stationary (e.g., in a quantum eigenstate) and close to the long-time final stationary distribution $\rho_{eq}$, it is possible to assume that the quantum potential is constant in time, as a Hamiltonian potential, following the approximation

$V_{qu}(\rho) \cong -\frac{\hbar^{2}}{4m}\left(\partial_q^{2}\ln\rho_{eq(q)} + \frac{1}{2}\left(\partial_q\ln\rho_{eq(q)}\right)^{2}\right)$ .  (85)
With the quantum potential independent of the mass density time evolution, the stationary long-time solutions $\rho_{eq(q)}$ can be approximately described by the Fokker–Planck equation

$\partial_t P\left(q,z\middle|t,0\right) + \partial_i\left(v_i\,P\left(q,z\middle|t,0\right)\right) = 0$ ,  (86)

where

$v_i = \frac{1}{m\kappa}\,\partial_i\left(\frac{\hbar^{2}}{4m}\left(\partial_j\partial_j\ln\rho_{eq} + \frac{1}{2}\left(\partial_j\ln\rho_{eq}\right)^{2}\right) - V_{(q)}\right) - \frac{D}{2}\,\partial_i\ln\rho_{eq}$ ,  (87)

leading to the final equilibrium of the stationary quantum configuration

$\frac{1}{m\kappa}\,\partial_i\left(\frac{\hbar^{2}}{4m}\left(\partial_j\partial_j\ln\rho_{eq(q)} + \frac{1}{2}\left(\partial_j\ln\rho_{eq(q)}\right)^{2}\right) - V_{(q)}\right) - \frac{D}{2}\,\partial_i\ln\rho_{eq(q)} = 0$ .  (88)
In ref. [19], the stationary states of a harmonic oscillator obeying (88) are shown. The
results show that the quantum eigenstates are stable and maintain their shape (with a
small change in their variance) when subject to fluctuations.
It is worth mentioning that in (88) $\rho$ does not represent the fluctuating quantum mass density $|\psi|^{2}$ but rather its probability mass density (PMD).
2.6.2. Evolution of the PMD of a Superposition of States Submitted to Stochastic Noise
The quantum evolution of non-stationary state superpositions (not considering fast kinetics and jumps) involves the integration of Equation (24), which reads as
$\dot q = -\frac{1}{m\kappa}\,\partial_q\left(V_{(q)} - \frac{\hbar^{2}}{4m}\left(\partial_q^{2}\ln\rho + \frac{1}{2}\left(\partial_q\ln\rho\right)^{2}\right)\right) + D^{1/2}\,\xi_{(t)}$ .  (89)
By utilizing the associated conservation Equation (84) for the PMD $\rho$, it is possible to integrate (89) by using its second-order discrete expansion

$q_{k+1} \cong q_k - \frac{1}{m\kappa}\,\partial_q\left(V_{(q)}+V_{qu(t)}\right)_{q_k}\Delta t_k - \frac{1}{2m\kappa}\,\frac{d}{dt}\,\partial_q\left(V_{(q)}+V_{qu(t)}\right)_{q_k}\Delta t_k^{2} + D^{1/2}\,\Delta W_k$ ,  (90)

where

$q_k = q_{(t_k)}$ ,  (91)

$\Delta t_k = t_{k+1} - t_k$ ,  (92)

$\Delta W_k = W_{(t_{k+1})} - W_{(t_k)}$ ,  (93)
where $\Delta W_k$ has a Gaussian zero mean and unitary variance, whose probability function $P(\Delta W_k,\Delta t)$, for $\Delta t_k = \Delta t\ \forall k$, reads as

$P\left(\Delta W_k,\Delta t\right) = \lim_{\Delta t\to0}\left(4\pi D\,\Delta t\right)^{-1/2}\exp\left(-\frac{\left(q_{k+1}-q_k-\left\langle\dot q_{k+1/2}\right\rangle\Delta t\right)^{2}}{4D\,\Delta t}\right)$ ,  (94)
where the midpoint approximation has been introduced,

$q_{k+1/2} = \frac{q_{k+1}+q_k}{2}$ ,  (95)
and where

$\left\langle\dot q_k\right\rangle = -\frac{1}{m\kappa}\,\partial_q\left(V_{(q)}+V_{qu(t)}\right)\Big|_{q_k}$  (96)

and

$\left\langle\ddot q_k\right\rangle = -\frac{1}{m\kappa}\,\frac{d}{dt}\,\partial_q\left(V_{(q)}+V_{qu(q,t)}\right)\Big|_{q_k}$  (97)
are the solutions of the deterministic problem

$\left\langle q_{k+1}\right\rangle \cong \left\langle q_k\right\rangle - \frac{1}{m\kappa}\,\partial_q\left(V_{(q)}+V_{qu(t)}\right)_{q_k}\Delta t_k - \frac{1}{2m\kappa}\,\frac{d}{dt}\,\partial_q\left(V_{(q)}+V_{qu(t)}\right)_{q_k}\Delta t_k^{2}$ .  (98)
As shown in ref. [19], the PTF $P\left(q_k,q_{k-1}\middle|\Delta t,(k-1)\Delta t\right)$ can be obtained after successive steps of approximation and reads as

$P\left(q_k,q_{k-1}\middle|\Delta t,(k-1)\Delta t\right) \cong \left(4\pi D\,\Delta t\right)^{-1/2}\exp\left(-\frac{\left(q_k-q_{k-1}-\left\langle\dot q_{k-1}\right\rangle\Delta t\right)^{2}}{4D\,\Delta t} - \frac{1}{2}\,\partial_q\left\langle\dot q_{k-1}\right\rangle\Delta t\right)$ ,  (99)
and the PMD at the $k$-th instant reads as

$\rho\left(q_k,k\Delta t\right) = \int_{-\infty}^{\infty} P\left(q_k,q_{k-1}\middle|\Delta t,(k-1)\Delta t\right)\rho\left(q_{k-1},(k-1)\Delta t\right)dq_{k-1}$ ,  (100)
leading to the velocity field

$\left\langle\dot q_k\right\rangle^{(\infty)} = -\frac{1}{m\kappa}\,\partial_q\left(V_{(q)} - \frac{\hbar^{2}}{4m}\left(\partial_q^{2}\ln\rho^{(\infty)} + \frac{1}{2}\left(\partial_q\ln\rho^{(\infty)}\right)^{2}\right)\right)$ .  (101)
Moreover, the continuous limit of the PTF gives

$P\left(q,q_0\middle|t-t_0,t_0\right) = \lim_{n\to\infty}\int_{-\infty}^{\infty}\prod_{k=1}^{n-1}dq_k\,\prod_{k=1}^{n}\left(4\pi D\,\Delta t\right)^{-1/2}\exp\left(-\sum_{k=1}^{n}\left(\frac{\left(q_k-q_{k-1}-\left\langle\dot q_{k-1/2}\right\rangle\Delta t\right)^{2}}{4D\,\Delta t}+\frac{1}{2}\,\partial_q\left\langle\dot q_{k-1/2}\right\rangle\Delta t\right)\right)$
$= \int\mathcal{D}\left[q_{(t)}\right]\exp\left(-\int_{t_0}^{t}dt'\left(\frac{\left(\dot q-\left\langle\dot q\right\rangle\right)^{2}}{4D}+\frac{1}{2}\,\partial_q\left\langle\dot q\right\rangle\right)\right)$ ,  (102)

where $\left\langle\dot q_{k-1/2}\right\rangle = \frac{1}{2}\left(\left\langle\dot q_k\right\rangle^{(\infty)}+\left\langle\dot q_{k-1}\right\rangle^{(\infty)}\right)$.
The resolution of the recursive Expression (102) offers the advantage of being applicable to nonlinear systems that are challenging to handle using conventional approaches [33-36].
2.6.3. General Features of Relaxation of the Quantum Superposition of States
The classical Brownian process admits the stationary long-time solution

$\lim_{t\to\infty}P\left(q,q_0\middle|t-t_0,t_0\right) = N\,\exp\left(\frac{1}{D}\int_{-\infty}^{q}\left\langle\dot q\right\rangle_{(q')}dq'\right) = N\,\exp\left(\frac{1}{D}\int_{-\infty}^{q}K(q')\,dq'\right)$ ,  (103)

where $K(q) = -\frac{1}{m\kappa}\,\partial_q V_{(q)}$, leading to the solution [13]
$P\left(q,q_0\middle|t-t_0,t_0\right) = \exp\left(\frac{1}{2D}\int_{q_0}^{q}K(q')\,dq'\right)\int\mathcal{D}\left[q_{(t)}\right]\exp\left(-\frac{1}{4D}\int_{t_0}^{t}dt'\left(\dot q^{2}+K^{2}(q)+2D\,\partial_q K(q)\right)\right)$ .  (104)
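A minimal Euler–Maruyama sketch (illustrative parameters only; the drift and diffusion conventions are the ones assumed in the reconstruction above) shows the classical Brownian dynamics relaxing to the stationary solution (103), which for $V = \frac{1}{2}k_s q^{2}$ is a Gaussian of variance $m\kappa D/k_s$:

```python
import numpy as np

# Overdamped Langevin dynamics q_dot = K(q) + sqrt(2 D) xi(t),
# with K(q) = -(1/(m kappa)) dV/dq; all parameters are illustrative.
rng = np.random.default_rng(0)
m_kappa, D, ks = 1.0, 0.5, 1.0
dt, n_steps, n_walkers = 1e-3, 20000, 2000

q = np.zeros(n_walkers)
for _ in range(n_steps):
    drift = -ks * q / m_kappa                   # K(q)
    q += drift * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(n_walkers)

var_sim = q.var()
var_theory = m_kappa * D / ks   # variance of P_inf ~ exp(-V/(m kappa D))
print(var_sim, var_theory)      # ≈ 0.5, 0.5
```

The ensemble histogram converges to $P_\infty \propto \exp\left(\frac{1}{D}\int K\,dq'\right)$, whatever the initial condition, as the closed form (104) prescribes at long times.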
As far as $\left\langle\dot q\right\rangle_{(q,t)}$ is concerned, in the quantum case (102) it cannot be expressed in a closed form, unlike (103), because it is contingent on the particular relaxation path $\rho(q,t)$ which the system follows toward the steady state. This path is significantly influenced by the initial conditions, namely the MDD $\left|\psi(q,t_0)\right|^{2} = \rho(q,t_0)$ as well as $\left\langle\dot q\right\rangle_{(q,t_0)}$, and, consequently, by the initial time $t_0$ at which the quantum superposition of states is subjected to fluctuations.
In addition, from (90), we can see that $q_{t_k}$ depends on the exact sequence of stochastic noise inputs, since, in classically chaotic systems, very small differences can lead to relevant divergences of the trajectories in a short time. Therefore, in principle, different stationary configurations $\rho(q,t=\infty)$ (analogues of quantum eigenstates) can be reached even when the system starts from an identical superposition of states. Therefore, in classically chaotic systems, Born's rule can also be applied to the measurement of a single quantum state.
Even if $L \gg \lambda_c, \lambda_{qu}$, it is worth noting that, to have finite quantum lengths $\lambda_c$ and $\lambda_{qu}$ (necessary to have quantum stochastic dynamics) and a quantum-decoupled (classical) environment or measuring apparatus, the nonlinearity of the overall system (system plus environment) is necessary: quantum decoherence, leading to the decay of superposition states, is significantly promoted by the widespread classical chaotic behavior observed in real systems.
On the other hand, a perfectly linear universal system would maintain quantum correlations on a global scale and would never allow for quantum decoupling between the system and the experimental apparatus performing the measurement. Merely assuming the existence of separate systems and environments subtly introduces the classical condition ($\lambda_c, \lambda_{qu} < \infty$) into the nature of the overall supersystem.
Furthermore, given that Equation (24) (see Equations (A31) and (A38) in ref. [19]) is valid only in the leading order of approximation of $\dot q$ (i.e., during a slow relaxation process with small-amplitude fluctuations), in instances of large fluctuations occurring on a timescale much longer than the relaxation period of $\rho(q,t)$, transitions may occur to states $\rho_n(q,t)$ that are not captured by (102), potentially leading from a stationary eigenstate to a new superposition of states.
In this case, relaxation will proceed again toward another stationary state; $\rho(q,t)$ in (100) describes the relaxation process occurring in the time interval between two large fluctuations rather than the system evolution toward a statistical mixture. Due to the extended timescales associated with these jumping processes, a system comprising a significant number of particles (or independent subsystems) undergoes a gradual relaxation towards a statistical mixture. The statistical distribution of this mixture is dictated by the temperature-dependent behavior of the diffusion coefficient.
2.7. EPR Paradox and Pre-Existing Reality
The SQHM highlights that conventional quantum theory, despite its well-defined reversible deterministic theoretical framework, remains incomplete with respect to its foundational postulates. Specifically, the SQHM underscores that the measurement process is not explicated within the deterministic 'Hamiltonian' framework of standard quantum mechanics. Instead, it manifests as a phenomenon comprehensively described within the framework of a quantum stochastic generalized approach.
The SQHM reveals that quantum mechanics represents the deterministic (zero-noise) limit of a broader quantum-stochastic theory induced by spacetime gravitational background fluctuations.
From this standpoint, zero-noise quantum mechanics defines the deterministic evolution of the 'probabilistic wave' of the system. Moreover, the SQHM suggests that the term 'probabilistic' is inaccurately introduced, since it arises from the inherently probabilistic nature of the measurement process outside the theory's framework. Given the capacity of the SQHM to describe both wavefunction decay and the measurement process, thereby achieving a comprehensive quantum theory, the term 'state wave' is a more appropriate substitute for the expression 'probabilistic wave'. The SQHM theory reinstates the principle of determinism into conventional quantum theory, emphasizing that it delineates the deterministic evolution of the 'state wave' of the system. It elucidates the probabilistic outcomes as a consequence of the fluctuating gravitational background.
Furthermore, it is noteworthy that the SQHM addresses the lingering question of pre-existing reality before measurement. In contrast, the Copenhagen interpretation posits that only the measurement process allows the system to decay into a stable eigenstate, establishing a persistent reality over time. Consequently, it remains indeterminate within this framework whether a persistent reality exists prior to measurement. The SQHM rejects the anthropocentric notion that the act of measurement induces the collapse of the wavefunction, in line with the viewpoint of Penrose: 'It takes place in the physics, and it is not because somebody comes and looks at it'.
On this point, the SQHM introduces a simple and natural innovation, showing that the world is capable of self-decaying through macroscopic-scale decoherence, wherein only the stable macroscopic stationary states (very close to eigenstates, or coherent states) persist. These states, being stable with respect to fluctuations, establish an enduring reality that exists prior to measurement.
Regarding the EPR paradox, the SQHM demonstrates that, in a perfectly quantum deterministic (coherent) universe, it is not feasible to achieve complete decoupling between the subparts of the system, namely the measuring apparatus and the measured system, and to carry out the measurement in a finite time interval. Instead, this condition can only be realized within a large-size classical supersystem (a quantum system in 4D spacetime with a fluctuating background) where the quantum entanglement, due to the quantum potential, extends up to a finite distance [19]. Under these circumstances, the SQHM shows that it is possible to restore local relativistic causality with a finite speed of
transmission of interactions and information, compatible with the precision of measurements that are confined outside quantum non-local domains with lengths smaller than $\lambda_c/(2\sqrt{2})$.
If the Lennard-Jones inter-particle potential yields a sufficiently weak force, resulting in a microscopic range of quantum non-local interaction and a large-scale classical phase, photons, as demonstrated in reference [19], maintain their quantum behavior at the macroscopic level due to the infinite range of interaction of their quantum potential. Consequently, they represent the optimal particles for conducting experiments aimed at demonstrating the characteristics of quantum entanglement over a distance.
In order to clearly describe the standpoint of the SQHM on this argument, we can analyze the output of a two-entangled-photon experiment, with the photons traveling in opposite directions in the state
$$|\psi\rangle = \frac{1}{\sqrt{2}}\left(|H\rangle_1 |H\rangle_2 + e^{i\varphi}|V\rangle_1 |V\rangle_2\right) \qquad (105)$$
Vertical and horizontal polarizations are denoted as $V$ and $H$, respectively, while $\varphi$ represents a constant phase coefficient.
Photons 'one' and 'two' encounter polarizers $P_a$ (Alice) and $P_b$ (Bob), with their polarization axes positioned at angles $\alpha$ and $\beta$ relative to the horizontal axis, respectively. For the sake of our analysis, we can assume $\varphi = 0$.
The likelihood of photon 'two' successfully traversing Bob's polarizer is $P(\alpha, \beta) = \frac{1}{2}\cos^2(\alpha - \beta)$.
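As an illustration, the coincidence law above can be evaluated numerically; the following sketch is ours (the function name and the test angles are illustrative choices, not taken from the paper):

```python
import math

def pass_probability(alpha, beta):
    """P(alpha, beta) = (1/2) cos^2(alpha - beta): the likelihood that
    photon 'two' traverses Bob's polarizer at angle beta after photon
    'one' met Alice's polarizer at angle alpha (angles in radians)."""
    return 0.5 * math.cos(alpha - beta) ** 2

# Parallel polarizers give the maximal joint probability 1/2;
# orthogonal polarizers give 0, reproducing the entanglement correlation.
print(pass_probability(0.0, 0.0))  # 0.5
print(round(pass_probability(0.0, math.pi / 2), 12))
```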
According to the prevailing view of quantum mechanics in the scientific community, when photon 'one' traverses polarizer $P_a$ at an angle of $\alpha$ relative to its axes, the state of photon 'two' immediately collapses to a linearly polarized state at the same angle $\alpha$, leading to the composite state $|\alpha, \alpha\rangle_{12} = |\alpha\rangle_1 |\alpha\rangle_2$.
On the other hand, within the framework of the SQHM, which can elucidate the dynamics of wavefunction collapse, the collapse process is not instantaneous. Following the Copenhagen interpretation of quantum mechanics, it is imperative to assert rigorously that the state of photon 'two' remains undefined until its measurement at the polarizer $P_b$.
Hence, when photon 'one' traverses polarizer $P_a$, according to the SQHM perspective, we must consider the composite state as $|\alpha, S\rangle_{12} = |\alpha\rangle_1 |QP_1, S\rangle_2$, where $|QP_1, S\rangle_2$ signifies the state of photon 'two' in interaction with the residual tail field $QP_1$ generated by the quantum potential of photon 'one' at polarizer $P_a$.
The spatial extension of the field $|QP_1, S\rangle_2$, in the case where the photons travel in opposite directions, is double the distance crossed by photon 'one' before its absorption. In this regard, it is noteworthy that the quantum potential is not proportional to the intensity of the tail field of photon 'one'; instead, it is proportional to its second derivative. Therefore, even a minimal residual tail field with a high frequency interacting with photon 'two' can result in a notable quantum potential interaction originating from the tail field $QP_1$.
When the residual part of the two entangled photons $|QP_1, S\rangle_2$ also passes through Bob's polarizer, it undergoes the transition $|QP_1, S\rangle_2 \rightarrow |\beta\rangle_2$ with probability $P(\alpha, \beta) = \frac{1}{2}\cos^2(\alpha - \beta)$. The duration of the absorption of photon 'two' (wavefunction decay and measurement), owing to its spatial extension and the finite speed of light, is just the time necessary to transfer the information about the measurement of photon 'one' to the place where photon 'two' is measured. A possible experiment is proposed in ref. [19].
Summarizing, the SQHM reveals the following key points:
i. The SQHM posits that quantum mechanics represents the deterministic limit of a
broader quantum stochastic theory.
ii. Classical reality emerges at the macroscopic level, constituting a pre-existing reality
before measurement.
iii. The measurement process is feasible in a classical macroscopic world, because there we can have genuinely quantum-decoupled and independent systems, namely the measured system and the measuring apparatus.
iv. Determinism is acknowledged within standard quantum mechanics under the
condition of zero GBN.
v. Locality is achieved at the macroscopic scale, where quantum non-local domains
condense to punctual domains.
vi. Determinism is retrieved in quantum mechanics, which represents the zero-noise limit of the SQHM; the probabilistic nature of quantum measurement is introduced by GBN.
vii. The maximum light speed of the propagation of information and the local relativistic
causality align with quantum uncertainty.
viii. The SQHM identifies GBN as playing the role of the hidden variable of Bohm's non-local hidden-variable theory: the Bohm theory ascribes the indeterminacy of the measurement process to the unpredictable pilot wave, whereas stochastic quantum hydrodynamics attributes its probabilistic nature to the fluctuating gravitational background. This background is challenging to determine because it was predominantly generated early on, during the Big Bang, and is characterized by the weak force of gravity without electromagnetic interaction. In the context of Santilli's non-local hidden-variable approach in IsoRedShift Mechanics, it is possible to demonstrate the direct correspondence between the non-local hidden variable and GBN [21].
Furthermore, it must be noted that the consequent probabilistic nature of the
wavefunction decay, and its measured output, is also compounded by the inherently
chaotic nature of the classical law of motion and the randomness of GBN, further
contributing to the indeterminacy of measurement outcomes.
2.8. The SQHM and the Objective-Collapse Theories
The SQHM fits well within the so-called objective-collapse theories [37–40]. In collapse theories, the Schrödinger equation is augmented with additional nonlinear and stochastic terms, referred to as spontaneous collapses, that serve to localize the wavefunctions in space. The resulting dynamics ensures that, for microscopic isolated systems, the impact of these new terms is negligible, leading to the recovery of the usual quantum properties with only minute deviations.
An inherent amplification mechanism operates to strengthen the collapse in macro-
scopic systems comprising numerous particles, overpowering the influence of quantum
dynamics. Consequently, the wavefunction for these systems is consistently well-localized
in space, behaving practically like a point in motion following Newton’s laws.
In this context, collapse models offer a comprehensive depiction of both microscopic and macroscopic systems, circumventing the conceptual challenges linked to measurements in quantum theory. Prominent examples of such theories include the Ghirardi-Rimini-Weber model [37], the continuous spontaneous localization model [38], and the Diósi-Penrose model [39,40].
While the SQHM aligns well with existing objective-collapse models, it introduces an
innovative approach that effectively addresses critical aspects within this class of theories.
One notable achievement is the resolution of the ‘tails’ problem by incorporating the quan-
tum potential length of interaction, in addition to the De Broglie length. Beyond this in-
teraction range, the quantum potential cannot maintain the coherent Schrödinger quan-
tum behavior of wavefunction tails.
The SQHM also highlights that there is no need for an external environment, demon-
strating that the quantum stochastic behavior responsible for wavefunction collapse can
be an intrinsic property of the system in a spacetime with fluctuating metrics due to the
gravitational background. In principle, gravitons, a manifestation of dark energy within the gravitational field, can be thought of as an external environment. However, there exists a nuanced distinction from conventional concepts. While an external system or environment typically exists separately from the physical system of interest, gravitational vibrations within the reference system represent something distinct: there is no additional physical system per se; rather, the 'external system' is inherently integrated into the specific type of reference system under consideration. While the concept remains largely the same, there are subtle yet fundamentally important differences. Thus, rather than rejecting existing concepts outright, this can be seen as an enhancement or refinement of them.
Furthermore, situated within the framework of relativistic quantum mechanics, which
aligns seamlessly with the finite speed of light and information transmission, the SQHM
establishes a clear connection between the uncertainty principle and the invariance of light
speed.
The theory also derives, within a fluctuating quantum system, the indeterminacy relation between energy and time (an aspect not expressible in conventional quantum mechanics), providing insights into measurement processes that cannot be completed within a finite time interval in a truly quantum global system. Notably, the theory finds support in the confirmation of the Lindemann constant for the melting point of solid lattices and the transition of $^4$He from fluid to superfluid states. Additionally, it suggests a potential explanation for the measurement of entangled photons through an Earth-Moon-Mars experiment [19].
3. Simulation Analogy: Complexity in Achieving Future States
The discrete spacetime structure derived from (80),
$$\Delta x_{min} > \frac{\hbar}{2 m_u c} = \frac{\hbar c}{2 E_u}$$
(where $E_u$ is the total energy of the universe), that comes from the finite speed of light, together with the minimum discrete measurable distance (69), allows for the implementation of a discrete simulation of the universe's evolution.
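To make the bound concrete, the relation in (80) can be evaluated numerically. The sketch below is only illustrative: the value chosen for $E_u$ is a placeholder of ours, not a figure taken from the paper.

```python
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
C = 2.99792458e8         # speed of light, m/s

def delta_x_min(total_energy):
    """Minimum discrete measurable distance from Eq. (80):
    delta_x > hbar * c / (2 * E_u)."""
    return HBAR * C / (2.0 * total_energy)

# Illustrative placeholder for the total energy of the universe (J);
# any finite E_u yields a finite, nonzero minimal cell size.
E_U_PLACEHOLDER = 1.0e70
print(delta_x_min(E_U_PLACEHOLDER))
```

The larger the total energy, the finer the minimal cell, but the cell never shrinks to a mathematical point while $E_u$ stays finite.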
In this case, the programmer of such universal simulation has to face the following
problems:
i. One key argument revolves around the inherent challenge of any computer
simulation, namely the finite nature of computer resources. The capacity to represent
or store information is confined to a specific number of bits. Similarly, the availability
of floating-point operations per second is limited. Regardless of effort, achieving a truly 'continuous' simulated reality in the mathematical sense becomes unattainable due to these constraints. In a computer-simulated universe, the existence of infinitesimals and infinities is precluded, necessitating quantization, which involves defining discrete cells in spacetime.
ii. The speed of light must be finite. Despite real time and simulation time being
disconnected from each other, computing the evolution of a system where
interactions propagate infinitely fast necessitates the simultaneous computation of
all interactions within a single timeframe. However, it is a practical impossibility for
any computer with finite computing power to execute such a task. The limitation
arises from the fact that any computer can only achieve a finite speed of propagation
in a simulation, as this is the sole feasible method of integration. Therefore, a common issue in computer simulation arises from the inherent limitation of computing power in terms of the speed of executing calculations. Objects within the simulation cannot surpass a certain speed, as doing so would render the simulation unstable and compromise its progression. No propagating process can travel at infinite speed, as such a scenario would require an impractical amount of computational power. Therefore, in a discrete representation, the maximum velocity for any moving object or propagating process must conform to a predefined minimum single-operation calculation time. This simulation analogy aligns with the finite speed of light c as a motivating factor.
iii. Discretization must be dynamic. The use of fixed-size discrete grids clearly wastes computational resources in spacetime regions where there are no bodies and nothing to calculate (so that we can place there just one big cell, saving computational resources). On the one hand, the need to increase the size of the simulation requires lowering the resolution; on the other hand, there is the need to achieve better resolution within smaller domains of the simulation where mass is present. This dichotomy is already familiar to those creating vast computerized cosmological simulations [41]. This problem is attacked by varying the mass quantization grid resolution as a function of the local mass density and other parameters, leading to so-called automatic tree refinement (ATR). The adaptive moving mesh method, an approach similar to ATR [42], would vary the size of the
cells of the quantized mass grid locally, as a function of kinetic energy density, while at the same time varying the size of the local discrete time-step, which should be kept per-cell as a fourth parameter of space, in order to better distribute the computational power where it is needed the most. By doing so, the grid would become distorted, having different local sizes. In a 4D simulation, this effect would also mean that time is perceived as flowing differently in different parts of the simulation: faster in regions of space where there is more local kinetic energy density, and slower where there is less. [Additional consequences are reported and discussed in Section 3.3].
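A minimal one-dimensional sketch of the adaptive refinement idea described in point iii; the splitting criterion, sampled density, and threshold values are our illustrative choices, not the ATR algorithm of [41,42]:

```python
import math

def max_density(density, start, size, samples=8):
    """Largest sampled density inside a cell (crude refinement criterion)."""
    return max(density(start + size * (i + 0.5) / samples) for i in range(samples))

def refine(cells, density, threshold, min_size):
    """One sweep of adaptive refinement over a 1-D grid: cells whose
    sampled density exceeds the threshold are split in two, concentrating
    resolution (and, in the analogy, smaller local time steps) where mass is."""
    out = []
    for start, size in cells:
        if size > min_size and max_density(density, start, size) > threshold:
            half = size / 2
            out.extend([(start, half), (start + half, half)])
        else:
            out.append((start, size))
    return out

# Toy mass clump centered at x = 0.5
density = lambda x: math.exp(-((x - 0.5) ** 2) / 0.01)
cells = [(0.0, 1.0)]
for _ in range(4):
    cells = refine(cells, density, threshold=0.5, min_size=1.0 / 64)
# Small cells cluster around the clump; empty regions keep one big cell.
print(sorted(cells))
```

After four sweeps the grid is non-uniform: the finest cells sit around the clump at x = 0.5, while the empty ends of the interval remain coarse, which is the resource-saving behavior point iii describes.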
In principle, there are two instruments or methods for computing the future states of a system. One involves utilizing a classical apparatus composed of conventional computer bits. Unlike qubits, these classical bits cannot create, maintain, or utilize the superposition of their states, rendering them classical machines. On the other hand, quantum computation employs a quantum system of qubits and utilizes the quantum law of evolution for calculations.
However, the capabilities of the classical and quantum approaches to predict the future state of a system differ. This distinction becomes evident when considering the calculation of the evolution of a many-body system. In the classical approach, computer bits must compute the position and interactions of each element of mass at every calculation step. This becomes increasingly challenging (and less precise) due to the chaotic nature of classical evolution. In principle, classical N-body simulations are straightforward, as they primarily entail integrating the 6N ordinary differential equations that describe particle motions. However, in practice, the sheer number of particles, N, is often exceptionally large (of the order of millions or tens of billions, as in the Millennium simulation [42]). Moreover, the computational expense becomes prohibitive due to the fourth-power increase $N^4$ in the number of particle-particle interactions that need to be computed. Consequently, direct integration of the differential equations requires a prohibitive increase in calculation and data-storage resources for large-scale simulations.
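The growth of the direct method's cost can be made tangible by counting the particle pairs a single force sweep must visit; already the per-step pair count grows quadratically, before the further growth in time steps and storage that the text describes (this counting sketch is ours, not code from [42]):

```python
def pair_interactions(n):
    """Distinct particle-particle pairs visited by one direct-summation
    force sweep over n bodies: n * (n - 1) / 2."""
    return n * (n - 1) // 2

# Each tenfold increase in N multiplies the per-step pair count ~100x.
for n in (10, 100, 1000, 10_000):
    print(n, pair_interactions(n))
```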
On the other hand, quantum evolution does not require defining the state of each particle at every step. It addresses the evolution of the global wave of superposition of states for all particles. Eventually, when needed, or when decoherence is induced or spontaneously occurs, the classical state of each particle at a specific instant is obtained through the wavefunction decay (from this standpoint, 'calculated' is the analogue of 'measured' or 'collapsed'). This represents a form of optimization: sacrificing knowledge of the classical state at each time step, but being satisfied with knowing the classical state of each particle at discrete time intervals, specifically after a large number of calculation steps. This approach allows for a quicker computation of the future state of reality with a lesser use of resources. Moreover, since the length of quantum coherence $\lambda_{qu}$ is finite, the groups of entangled particles undergoing a common wavefunction decay are of smaller finite number, further simplifying the algorithm of the simulation.
The advantage of quantum calculus over classical calculus can be metaphorically demonstrated by addressing the challenge of finding a global minimum. When using classical methods such as steepest-descent gradient or similar approaches, the pursuit of the global minimum (such as in the determination of prime numbers) results in an exponential increase in the calculation time as the maximum value of the prime numbers rises.
In contrast, employing the quantum method allows us to identify the global mini-
mum in linear or, at least, polynomial time. This can be loosely conceptualized as follows:
in the classical case, it is akin to having a ball fall into each hole to find a minimum, and
then the values of each individual minimum must be compared with all possible minima
before determining the overall minimum. The utilization of the quantum method involves
using an infinite number of balls, spanning the entire energy spectrum. Consequently, at
each barrier between two minima (thanks to quantum tunneling), some of the balls can
explore the next minimum almost simultaneously. This simultaneous exploration
(quantum computing) significantly shortens the time needed to probe the entire set of minima; the wavefunction decay then allows us to measure (or detect) the outcome of the process.
If we aim to create a simulation on a scale comparable to the vastness of the universe, we must find a way to address the many-body problem. Currently, solving this problem remains an open challenge in the field of computer science. Just recently, in December 2023, a new quantum algorithm for simulating coupled harmonic oscillators in polynomial time was presented [43], showing that quantum mechanics appears to be a promising candidate for making the many-body problem manageable. This is achieved through the utilization of the entanglement process, which encodes coherent particles and their interaction outcomes as a wavefunction. The wavefunction evolves without explicit solving and, when coherence diminishes, the wavefunction collapse leads to calculating (as well as determining) the essential classical properties of the system given by the underlying physics at discrete time steps.
This sheds light on the reason why physical properties remain undefined until measured: from the standpoint of the simulation analogy, it is a direct consequence of the quantum optimization algorithm, where, in each local quantum-correlated domain, the classical state is uniquely defined only at a few discrete times, unlike the continuous description given by classical evolution. In this way, the determination of the properties of reality is achieved solely through the utilization of a minimal amount of computational power. In accordance with [43], quantum computing demonstrates the capability to solve, in polynomial time, classical problems that would otherwise require exponential time, thereby optimizing the simulation. Moreover, the combination of coherent quantum evolution with wavefunction collapse has been proven to constitute a Turing-complete computational process, as evidenced by its application in quantum computing.
An even more intriguing aspect of the possibility that reality can be virtualized as a computer simulation is the existence of an algorithm capable of solving the intractable many-body problem, which challenges classical algorithms. Consequently, the entire class of problems characterized by a phenomenological representation describable by quantum physics can be rendered tractable through the application of quantum computing. However, it is worth noting that very abstract mathematical problems, such as the 'lattice problem' [44], may still remain intractable. Currently, the most well-known successful examples of quantum computing include Shor's algorithm [45] for integer factorization and Grover's algorithm [46] for inverting 'black-box' functions.
Classical computation treats integer factorization as an NP (nondeterministic polynomial time) problem, whereas quantum computation turns it into a P (polynomial time) problem via Shor's algorithm. However, not all problems considered NP in classical computation can be reduced to P problems by utilizing quantum computation. This implies that quantum computing may not be universally applicable in simplifying all problems, but only a certain limited class.
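The flavor of a quantum speed-up can be illustrated with Grover's algorithm [46] by comparing oracle-query counts for unstructured search; the query-count formulas below are the standard textbook estimates, and the code is our sketch:

```python
import math

def classical_queries(n):
    """Worst-case oracle queries to find the marked item classically."""
    return n

def grover_queries(n):
    """Approximate Grover iterations, about (pi/4) * sqrt(n)."""
    return math.ceil(math.pi / 4 * math.sqrt(n))

# For a million-entry search space: ~10^6 classical queries vs ~10^3 quantum.
for n in (1 << 10, 1 << 20):
    print(n, classical_queries(n), grover_queries(n))
```

Note that this quadratic gain for search is far more modest than the exponential-to-polynomial gain the text attributes to Shor's algorithm, which exploits problem structure rather than brute-force search.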
The possibility of regarding the universe's many-body problem as a computer simulation requires that the N-body NP problem be tractable [43]. In such a scenario, it becomes theoretically feasible to utilize universe-like particle simulations for solving NP problems by embedding the problem within specific assigned particle behavior. This concept implies that the laws of physics are not inherently given, but are rather formulated to represent the solution of specific problems.
To clarify further: if various instances of universe-like particle simulations were em-
ployed to tackle distinct problems, each instance would exhibit different laws of physics
governing the behavior of its particles. This perspective opens up the opportunity to ex-
plore the purpose of the universe and inquire about the underlying problem it seeks to
solve.
In essence, it prompts the question: what is the fundamental problem that the universe is attempting to address?
3.1. How the Universe Computes the Next State: The Unraveling of the Meaning of Time and Free Will
At this point, to examine the universal simulation and generate evolution akin to SQHM characteristics within a flat space (excluding gravity except for the gravitational background noise contributing to quantum decoherence), let us focus on the local evolution within a spacetime cell of a few De Broglie lengths or quantum coherence lengths $\lambda_{qu}$ [19]. After a certain characteristic time, the superposition of states in a local quantum-entangled domain, evolving following the motion equation (24), decays into one of its eigenstates and leads to a stable state that, surviving fluctuations, constitutes a lasting state over time: we can define it as reality since, owing to its stability, it gives the same result even after repeated measurements. Moreover, given macroscopic decoherence, the local domains in different places are quantum-disentangled from each other. Therefore, their decay to a stable eigenstate is unlikely to happen at the same time. Due to the perceived randomness of GBN, this process can be assumed to be stochastically distributed in space, leading to a foam-like classical reality in spacetime that results in cells that are locally quantum but globally classical.
Furthermore, after an interval of time much larger than that of the wavefunction decay, each domain is perturbed by a large fluctuation that is able to make it jump to a quantum superposition, which restarts evolving according to the quantum law of evolution for a while, before new wavefunction collapses occur, and so on.
From the standpoint of the SQHM, the universal computation method exploits quantum evolution for a while and then, by decoherence, derives the classical N-body state at certain discrete instants through the wavefunction collapse, exactly like a universal quantum computer. It then goes to the next step by computing the evolution of the quantum-entangled wavefunction, saving on classically calculating the state of the N bodies repeatedly, and deriving it only when the quantum state decays into the classical one (as in a measurement).
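The evolve-then-collapse cycle just described can be caricatured in a few lines: a small state vector evolves unitarily for several ticks, after which one projective 'decay' selects a single classical outcome with Born probabilities. This is a toy of ours, not the SQHM equations:

```python
import numpy as np

def random_unitary(n, rng):
    """Random unitary via QR decomposition of a complex Gaussian matrix."""
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def evolve_then_collapse(psi, steps, rng):
    """Deterministic unitary evolution for `steps` ticks, then a single
    projective 'wavefunction decay' sampling a basis state with
    probability |psi_i|^2 (the 'calculated = measured' step)."""
    n = len(psi)
    for _ in range(steps):
        psi = random_unitary(n, rng) @ psi
    p = np.abs(psi) ** 2
    outcome = rng.choice(n, p=p / p.sum())
    collapsed = np.zeros(n, dtype=complex)
    collapsed[outcome] = 1.0
    return outcome, collapsed

rng = np.random.default_rng(0)
psi = np.zeros(4, dtype=complex)
psi[0] = 1.0
# Three cycles: continuous quantum evolution punctuated by discrete
# classical states, as in the universal computation described above.
for _ in range(3):
    outcome, psi = evolve_then_collapse(psi, steps=5, rng=rng)
    print(outcome)
```

Between collapses no classical configuration is ever computed; a classical state exists only at the discrete decay instants, mirroring the resource-saving scheme of the text.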
Practically, the universe realizes a sort of computational optimization to speed up the derivation of its future state by utilizing qubit-like quantum computation.
Free Will
Following the pigeonhole principle, which implies that any computer that is a subsystem of a larger one cannot handle the same amount of information as the larger one (and thus cannot produce a greater power of calculation in terms of speed and precision), and considering the inevitable information loss due to compression, we can infer that a human-made computer, even one utilizing a vast system of qubits, cannot be faster and more accurate than the universal quantum computer.
Therefore, the temporal horizon for predicting future states, before they happen, is necessarily limited inside reality. Hence, among the many possible future states, we can infer that we can determine or choose the future outcome only within a certain temporal horizon, and that free will is limited. Moreover, since the decision about which state of reality we want to realize is not connected to events preceding a certain interval of time (4D disentanglement), we can also say that such a decision is not predetermined.
Nevertheless, beyond stating that the will is free but limited, the present analysis raises an additional aspect of the concept of free will that needs to be analyzed. Specifically, it pertains to whether many possible states of reality exist in future scenarios, providing us with the genuine opportunity to choose which of them to attain.
In this context, within the deterministic quantum evolution framework, or even in classical scenarios, with precisely defined initial conditions in 4D spacetime, such a possibility is effectively prohibited, since the future states are predetermined. Time in this context does not flow but merely serves as a 'coordinate' of the 4D spacetime where reality is located, losing the significance it holds in real life.
In the absence of GBN, knowing precisely the initial condition of the universe at the initial instant of the Big Bang and the laws of physics, the future of the universe remains defined. This is because, unless noise is introduced into the simulation, the basic quantum laws of physics are deterministic.
Actually, in the context of SQHM evolution, the random nature of GBN plays an important role in shaping the future states of the universe. From the standpoint of the simulation analogy, the nature of GBN presents important informational aspects.
The randomness introduced by GBN renders the simulation inherently unpredictable to an internal observer. Even if the internal observer employs an algorithm identical to that of the simulation to forecast future states, the absence of access to the same noise source results in a rapid divergence of the predictions of future states. This is due to the critical influence of each individual fluctuation on the wavefunction decay (see Section 2.6.3). In other words, to the internal observer, the future is 'encrypted' by such noise. Furthermore, if the noise used in the simulation-analogy evolution were a pseudo-random noise with enough unpredictability, only someone in possession of the seed would in fact be able to predict the future or invert the arrow of time. Even if the noise is pseudo-random, the problem of deriving the encryption key is practically intractable. Therefore, in the presence of GBN, the future outcome of the computation is 'encrypted' by the randomness of GBN.
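A toy illustration of this point: two copies of the same algorithm, started from the same initial state, but fed noise streams with different (hidden) seeds, quickly disagree. The chaotic map and the noise amplitude are our illustrative choices, not quantities from the SQHM:

```python
import random

def trajectory(seed, steps=25):
    """Chaotic 'law of motion' (logistic map) plus a tiny seeded noise
    kick at each step, standing in for GBN."""
    rng = random.Random(seed)
    x = 0.3
    states = []
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)                               # deterministic law
        x = min(max(x + rng.uniform(-1e-6, 1e-6), 0.0), 1.0)  # hidden noise
        states.append(x)
    return states

simulation = trajectory(seed=1)   # 'the universe', with its secret seed
forecast = trajectory(seed=2)     # internal observer: same algorithm,
                                  # same law, same start, wrong seed
print(max(abs(a - b) for a, b in zip(simulation, forecast)))
```

With the same seed, the two runs coincide exactly; with different seeds, the microscopic noise difference is exponentially amplified by the chaotic update, so the forecast diverges. This is the sense in which the future is 'encrypted' by the noise for any observer who lacks the seed.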
Moreover, if the simulation makes use of a pseudo-random routine to generate GBN, and it appears truly random inside reality, it follows that the seed encoding GBN is kept outside the simulated reality and is unreachable to us. In this case, we are in front of an instance of a one-time pad, effectively equating to deletion, which is proven unbreakable. Therefore, in principle, the simulation could effectively conceal information about the key used to encrypt the GBN noise in a manner that remains unrecoverable.
From this perspective, the renowned Einstein quote, 'God does not play dice with the universe', is aptly interpreted. In this context, it implies that the programmer of the universal simulation does not engage in randomness, as everything is predetermined for him. However, from within this reality, we remain unable to ascertain the seed of the noise, and the noise manifests itself as genuinely random. Furthermore, even if, from inside this reality, we were able to detect the pseudo-random nature of GBN, featuring a high level of randomness, the challenge of deciphering the key remains insurmountable [47] and the encryption key practically irretrievable.
Thus, we would never be able to trace back the encryption key and completely reproduce the outcomes of the simulation, even knowing the initial state and all the laws of physics perfectly, since the simulated evolution depends on the form of each single fluctuation.
This universal behavior frames 'free will' as a constrained capability, unable to access information beyond a specific temporal horizon. Furthermore, the simulation analogy delves deeper into this idea, portraying free will as a faculty originating in macroscopic classical systems characterized by foam-like dimensions in spacetime. As a result, our consciousness lacks a perfect definition of free will; we desire something without a fully precise understanding of what it is. Nonetheless, through the exercise of our free will, we can impact the forthcoming macroscopic state, albeit with a certain imprecision and ambiguity in our intentions, yet not predetermined by preceding states of reality beyond a specific interval of time.
3.2. The Universal 'Pasta Maker' and the Actual Time in 4D Spacetime
Working with a discrete spacetime offers advantages that are already supported by lattice gauge theory [48]. This theory demonstrates that, in such a scenario, the path integral becomes finite-dimensional and can be assessed using stochastic simulation techniques, such as the Monte Carlo method.
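As a flavor of the lattice Monte Carlo technique mentioned here, the following sketch evaluates the discretized Euclidean path integral of a harmonic oscillator with the Metropolis algorithm; the parameters, lattice size, and cold start are our illustrative choices, not taken from [48]:

```python
import math
import random

def metropolis_sweep(path, a, rng, delta=0.5):
    """One Metropolis sweep over a periodic time lattice for the Euclidean
    action S = sum_i [ (x_{i+1} - x_i)^2 / (2a) + a * x_i^2 / 2 ]
    (harmonic oscillator with m = omega = 1)."""
    n = len(path)
    for i in range(n):
        xl, xr = path[i - 1], path[(i + 1) % n]   # periodic neighbors
        x_new = path[i] + rng.uniform(-delta, delta)
        def local_action(x):
            return ((x - xl) ** 2 + (xr - x) ** 2) / (2 * a) + a * x * x / 2
        d_s = local_action(x_new) - local_action(path[i])
        if d_s < 0 or rng.random() < math.exp(-d_s):
            path[i] = x_new

rng = random.Random(42)
a = 0.5                      # lattice spacing
path = [0.0] * 64            # cold start on a 64-site periodic lattice
for _ in range(200):         # thermalize, then measure once
    metropolis_sweep(path, a, rng)
x_squared = sum(x * x for x in path) / len(path)
# The continuum ground-state value of <x^2> is 1/2; a single short run
# yields only a rough, statistically noisy estimate of it.
print(x_squared)
```

The path integral is sampled rather than solved: finitely many lattice sites make the integral finite-dimensional, exactly the advantage attributed to discrete spacetime above.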
In our scenario, the fundamental assumption is that the optimization procedure for universal computation has the capability to generate the evolution of reality. This hypothesis suggests that the universe evolves quantum-mechanically in polynomial time, efficiently solving the many-body problem and transitioning it from NP to P. In this context, quantum computers, employing qubits with wavefunction decay that both produces and effectively computes the result, utilize a method inherent to physical reality itself.
From a global spacetime perspective, aside from the collapses in each local domain,
it is important to acknowledge a second fluctuation-induced effect. Larger fluctuations
taking place over extended time intervals can induce a jumping process in the wavefunc-
tion configuration, leading to a generic superposition of states. This prompts a restart in
its evolution following quantum laws. As a result, after each local wavefunction decay, a
quantum resynchronization phenomenon occurs, propelling the progression towards the
realization of the next local classical state of the universe.
Furthermore, with quantum synchronization, at the onset of the subsequent moment,
the array of potential quantum states (in terms of superposition) encompasses multiple
classical states of realization. Consequently, in the current moment, the future states form
a quantum multiverse where each individual classical state is potentially attainable depending on events (such as the chain of wavefunction decay processes) occurring beforehand. As the present unfolds, marked by the quantum decoherence process leading to the attainment of a classical state, the past is generated, ultimately resulting in the realization
of the singular (foam-like) classical reality: the universe.
Moreover, if all possible configurations of the realizable universe exist in the future
(extending past our ability to determine or foresee over a finite temporal extent), the past is composed of fixed events (the universe) that we are aware of but unable to alter.
In this context, we can metaphorically illustrate spacetime and its irreversible univer-
sal evolution as an enormous pasta maker. In this analogy, the future multiverse is repre-
sented by a blob of unshaped flour dough, inflated because it contains all possible states.
This dough, extending up to the surface of the present, is then pressed into a thin pasta
sheet, representing the quantum superposition reduction to the classical state realizing
the universe.
The 4D surface boundary (see Figure 1) between the future multiverse and the past
universe marks the instant of present time. At this point, the irreversible process of deco-
herence occurs, entailing the computation or reduction to the present classical state. This
specific moment defines the current time of reality, a concept that cannot be precisely lo-
cated within the framework of relativistic spacetime. The SQHM aligns with the gravitational decoherence hypothesis [49]. It agrees with Roger Penrose’s point of view that dispels the anthropocentric idea that the act of measurement triggers the collapse: ‘It takes place in the physics, and it is not because somebody comes and looks at it’.
Figure 1. The Universal ‘Pasta-Maker’.
3.3. Quantum and Gravity
Until now, we have not adequately discussed how gravity arises from the discrete nature of the universal ‘calculation’. Nevertheless, it is interesting to provide some insights into the issue because, viewed through this perspective, gravity naturally emerges
as quantized.
Considering the universe as an extensive quantum computer operating on a prede-
termined space-time grid does not yet represent the most optimized simulation. Indeed,
the optimization of the simulation has not taken into account the possibility of adjusting
the fixed dimensions of the elemental grid. This becomes apparent when we realize that
maintaining constant elemental cell dimensions leads to a significant dispersion of com-
putational resources in spacetime regions devoid of bodies or any need for calculation. In
such regions, we could simply allocate one large cell, thereby conserving computational
resources.
This perspective aligns with a numerical algorithm employed in numerical analysis
known as adaptive mesh refinement (AMR). This technique dynamically adjusts the ac-
curacy of a solution within specific sensitive or turbulent regions during the calculation
of a simulation. In numerical solutions, computations often occur on predetermined,
quantified grids, such as those on the Cartesian plane, forming the computational grid or
‘mesh’. However, many issues in numerical analysis do not demand uniform precision across the entire computational grid as, for instance, used for graph plotting or computational simulation. Instead, these issues would benefit from selectively refining the grid density only in regions where enhanced precision is required.
The local adaptive mesh refinement (AMR) creates a dynamic programming envi-
ronment enabling the adjustment of numerical computation precision according to the
specific requirements of a computation problem, particularly in areas of multidimensional
graphs that demand precision. This method allows for lower levels of precision and reso-
lution in other regions of the multidimensional graphs. The credit for this dynamic tech-
nique of adapting computation precision to specific requirements goes to Marsha Berger,
Joseph Oliger, and Phillip Colella [50,51], who developed an algorithm for dynamic grid-
ding known as AMR. The application of AMR has subsequently proven to be widely ben-
eficial and has been utilized in the investigation of turbulence problems in hydrodynam-
ics, as well as the exploration of large-scale structures in astrophysics, exemplified by its
use in the Bolshoi Cosmological Simulation [52].
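The refine-where-needed idea can be shown with a minimal one-dimensional sketch (illustrative only: the actual Berger-Oliger-Colella algorithm operates on hierarchical grid patches in several dimensions, and the density profile and tolerance below are invented): cells whose local field variation exceeds a tolerance are recursively split, down to a minimum cell width.

```python
import math

def variation(f, a, b):
    """Spread of f over a cell, probed at the endpoints and midpoint."""
    vals = (f(a), f(0.5 * (a + b)), f(b))
    return max(vals) - min(vals)

def refine(cells, field, tol=0.1, min_width=2 ** -6):
    """Toy AMR pass: recursively split any 1D cell (a, b) whose field
    variation exceeds `tol`, stopping at a minimum cell width."""
    out = []
    for a, b in cells:
        if b - a > min_width and variation(field, a, b) > tol:
            mid = 0.5 * (a + b)
            out.extend(refine([(a, mid), (mid, b)], field, tol, min_width))
        else:
            out.append((a, b))
    return out

# A 'mass density' peaked at x = 0.5: resolution concentrates there.
density = lambda x: math.exp(-200.0 * (x - 0.5) ** 2)
mesh = refine([(0.0, 1.0)], density)
widths = [b - a for a, b in mesh]
print(f"{len(mesh)} cells, widths from {min(widths)} to {max(widths)}")
```

Running this yields a mesh whose finest cells cluster around the density peak while empty regions keep coarse cells, mirroring the allocation of computational resources described above.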
An intriguing variation of AMR is the adaptive moving mesh method (AMMM) proposed by Weizhang Huang and Robert Russell [53]. This method employs an r-adaptive
(relocation adaptive) strategy to achieve outcomes akin to those of adaptive mesh refine-
ment. Upon reflection, an r-adaptive strategy, grounded in local energy density as a pa-
rameter, bears resemblance to the workings of curved space-time in our universe.
Conceivably, a more sophisticated cosmological simulation could leverage an ad-
vanced iteration of the AMMM algorithm. This iteration would involve relocating space
grid cells and adjusting the local delta time for each cell. By moving cells from regions of
lower energy density to those of higher energy density at the system’s speed of light and
scaling the local delta time accordingly, the resultant grid would appear distorted and
exhibit behavior analogous to curved space-time in general relativity.
Furthermore, as cell relocation induces a distortion in the grid mesh, updating the
grid at the speed of light, as opposed to simultaneous updating, would disperse compu-
tations across various timeframes. In this scenario, gravity, time dilation, and gravitational
waves would spontaneously manifest within the simulated universe, mirroring their
emergence from curved space-time in our universe.
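The r-adaptive (relocation) strategy can be sketched through the equidistribution principle underlying moving-mesh methods: nodes are repositioned so that each cell carries an equal share of the integral of a monitor function. This is a strong simplification of the Huang-Russell machinery, and the ‘energy density’ profile below is a hypothetical example.

```python
import math

def equidistribute(nodes, monitor, samples=2000):
    """r-adaptive relocation: move mesh nodes so that every interval
    carries an equal share of the integral of the monitor function."""
    a, b = nodes[0], nodes[-1]
    xs = [a + (b - a) * i / samples for i in range(samples + 1)]
    cum = [0.0]                          # trapezoid-rule cumulative integral
    for i in range(samples):
        cum.append(cum[-1] + 0.5 * (monitor(xs[i]) + monitor(xs[i + 1]))
                   * (xs[i + 1] - xs[i]))
    total, n = cum[-1], len(nodes) - 1
    new_nodes, j = [], 0
    for k in range(n + 1):
        target = total * k / n           # equal monitor share per interval
        while j < samples and cum[j + 1] < target:
            j += 1
        span = cum[j + 1] - cum[j]
        frac = 0.0 if span == 0.0 else (target - cum[j]) / span
        new_nodes.append(xs[j] + frac * (xs[j + 1] - xs[j]))
    return new_nodes

# Hypothetical 'energy density' concentrated near x = 0.7.
energy = lambda x: 1.0 + 50.0 * math.exp(-100.0 * (x - 0.7) ** 2)
moved = equidistribute([i / 10 for i in range(11)], energy)
spacings = [moved[i + 1] - moved[i] for i in range(10)]
print(f"finest cell {min(spacings):.3f}, coarsest cell {max(spacings):.3f}")
```

The node count is conserved while cells migrate toward the high-energy region, which is the sense in which the resulting ‘distorted’ grid resembles the curved spacetime discussed above.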
A criterion for reducing the grid step is based on providing a more detailed descrip-
tion in regions with higher mass density (more particles or energy density) with a higher
amplitude of induced quantum potential fluctuations that reduces the De Broglie length
of quantum coherence.
This point of view finds an example in the Lagrangian approach, as outlined in ref. [54], where the computational mesh dynamically moves with the matter being simulated.
This results in an increased resolution, characterized by smaller mesh cells, in regions of
high mass/energy density, while decreasing resolution in other areas. While this approach
holds significant potential, it is not without its challenges and limitations, as highlighted
in the work of Gnedin and Bertschinger [55].
The variability of the mesh introduces noticeable apparent forces, which are deemed
undesirable in the method [56] due to their tendency to violate energy conservation. Con-
sequently, countermeasures are implemented to eliminate or rectify this inconvenience.
From the standpoint of the simulation analogy, the field of force that naturally
emerges [55,56], due to grid mesh distortion caused by cell relocation methods [53,54], is
not as problematic as it might appear. The force field, resulting from this optimization process in the calculation, may simulate gravity induced by the discretization of spacetime, as an ‘apparent’ force arising from lattice cell distortion. On this ansatz, the 4D non-uniform lattice network of the universal algorithm replicates reality by depicting 3D space incorporating gravity. The universal algorithm includes rules (spacetime geometrization) for modulating the variable density of the computational grid to simulate the gravity observed in reality. This approach naturally results in discretized gravity that is quantum from its inception.
From this perspective, gravity arises as an apparent force resulting from the optimization process of the AMMM to streamline the advancement of the reality simulation, as shown in references [53–56], and the emergence of gravity can be attributed to this algorithmic optimization. The theoretical framework for transitioning from the variable mesh of the computer simulation to the discrete 4D curved spacetime with gravity has the potential to provide insights into quantum gravity. The causal dynamical triangulation theory [57] closely aligns with the perspective of simulating physics through discrete computations. However, challenges emerge when attempting to reconcile this approach with the continuum classical limits of general relativity (see Section 3.3.1).
From this line of reasoning, an intriguing observation emerges: dynamically enlarg-
ing grid cells in regions where less computational power is needed inevitably results in
the creation of vast cells corresponding to empty spacetime. The constraint of limited re-
sources makes it impossible to achieve an infinitely large grid cell, preventing the realiza-
tion of completely flat space within any cell.
In the context of the quantum geometrization of spacetime [32], leading to a quintes-
sence-like interpretation of the cosmological constant that diminishes with the universe’s
expansion, the finite maximum size of the simulation cell implies that the cosmological
constant can be arbitrarily small but not zero. This aligns with the implications of pure
quantum gravity, which posits that a vacuum with zero cosmological constant collapses
into a polymer-branched phase devoid of physical significance.
Moreover, assuming the discrete nature of spacetime, the cosmological constant is
also discrete, and the smallest discrete value before zero decreases as the universe ex-
pands. This raises the question: is there a minimal critical value for the cosmological con-
stant (at a certain degree of universe inflation) below which the vacuum will collapse to
the polymer-branched phase prompting an envisioning of the ultimate fate of the uni-
verse?
On the opposite side, if achieving a zero-grid dimension is deemed impossible, the
inquiry into the minimum elemental size of spacetime naturally arises. In this context, as
highlighted in [19], the SQHM emphasizes the existence of a finite minimum discrete ele-
ment of distance. Consequently, spacetime can be conceptualized as a lattice composed of
elemental cells of discrete size. It is noteworthy to observe that the discreteness of
spacetime deduced by the SQHM is many orders of magnitude smaller than the minimum
geometric quantity of LQG. This minimum discrete distance renders spacetime akin to a
fabric with sparse threads, rather than a continuous plastic layer. It should not be con-
flated with the minimum geometric quantity of LQG, which is due to the discrete spec-
trum of area and volume operators where the ground state is linked to the Planck length
and implicitly incorporates the concept of detectability. This is because revealing this
ground state would require an energy equal to or greater than that of the Planck mass,
inevitably forming a black hole that overshadows it.
To establish the order of magnitude for the elemental size of spacetime, we can assert that the volume of such an elemental cell must not exceed the volume within which the matter of the pre-big-bang black hole collapses [32,58]. This condition ensures the presence of a repulsive quantum potential force equal to the attractive gravitational force, establishing the black hole equilibrium configuration, leading to the expression:
$$\Delta x_{min}=\frac{\hbar}{2m_{u}c}=\frac{\hbar c}{2m_{u}c^{2}}=\frac{\hbar c}{2E_{u}} \tag{106}$$

where $m_u$ is the mass equivalent to the total energy of the universe $E_u \cong 10^{53\div 60}\ \mathrm{J}$ [59], leading to

$$\Delta x_{min}=\frac{\hbar c}{2E_{u}}\cong\frac{6.62\times 10^{-34}\times 3\times 10^{8}}{2\times 10^{53\div 60}}\cong 10^{-(78\div 85)}\ \mathrm{m} \tag{107}$$

where, $c$, $\hbar$, and $E_u$ being physical constants, $\Delta x_{min}$ is also a physical universal constant. Furthermore, for the time coordinate this requires that

$$\Delta t_{min}=\frac{\Delta x_{min}}{c}=\frac{\hbar}{2E_{u}}\cong 3\times 10^{-(87\div 94)}\ \mathrm{s} \tag{108}$$
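As a quick arithmetic check of the order-of-magnitude estimates in Eqs. (107) and (108), the figures can be reproduced with the constants as quoted in the text (6.62×10⁻³⁴ J s and 3×10⁸ m/s) for the two bounds of the universe-energy estimate:

```python
h = 6.62e-34        # Planck constant [J s], the value quoted in Eq. (107)
c = 3.0e8           # speed of light [m/s]
results = {}
for E_u in (1e53, 1e60):                 # bounds of the estimate E_u ~ 10^(53-60) J
    dx_min = h * c / (2.0 * E_u)         # elemental length, Eq. (107)
    dt_min = dx_min / c                  # elemental time, Eq. (108)
    results[E_u] = (dx_min, dt_min)
    print(f"E_u = {E_u:.0e} J : dx_min ~ {dx_min:.1e} m, dt_min ~ {dt_min:.1e} s")
```

This recovers the quoted ranges of roughly 10⁻⁷⁸ to 10⁻⁸⁵ m and 3×10⁻⁸⁷ to 3×10⁻⁹⁴ s.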
3.3.1. The Classical Limit Problem in Quantum Loop Gravity (QLG) and Causal Dynam-
ical Triangulation (CDT)
Even if, from the standpoint of the simulation analogy, both CDT [57] and QLG [60] receive support and endorsement, a contradictory aspect also emerges in the procedure used to achieve the classical limit of these theories, namely general relativity.
About this point, the SQHM asserts that achieving classical macroscopic reality requires not only imposing the condition $\lim_{\hbar\rightarrow 0}$, but enforcing the double condition

$$\lim_{macro}\equiv\lim_{\hbar\rightarrow 0}\lim_{dec}=\lim_{dec}\lim_{\hbar\rightarrow 0} \tag{109}$$

where the subscript $dec$ stands for decoherence, defined as

$$\lim_{dec}\psi=\lim_{dec}\sum_{k=k_{min}}^{k_{max}}b_{k}\,|\psi_{k}|\exp\!\left[\frac{iS_{k}}{\hbar}\right]=b_{\bar k}\,|\psi_{\bar k}|\exp\!\left[\frac{iS_{\bar k}}{\hbar}\right] \tag{110}$$

where $k_{min}\le\bar k\le k_{max}$ labels one of the $k_{max}-k_{min}$ eigenstates.
The SQHM demonstrates that in the so-called semiclassical limit, attained by the condition $\lim_{\hbar\rightarrow 0}$ applied to the (zero-noise) quantum mechanics, the quantum entanglement between particles persists and influences interactions even at infinite distances. Thus, rather than a genuine classical limit, it portrays a large-scale quantum description. This approach implicitly supports the mistaken idea that the properties of the vacuum at a macroscopic scale replicate those at a small scale, which is not true due to the breaking of scale-invariance symmetry in the SQHM by the De Broglie length.
This aspect can be analytically examined by exploring the least action principle
[19,31], generalized in the Madelung hydrodynamic formulation.
In the quantum hydrodynamic approach, the quantum equation of motion for the
complex wavefunction is separated into two equations involving real variables. These
equations pertain to the conservation of mass density distribution and its evolution, which
is described by a Hamilton-Jacobi-like equation. This evolution can be expressed through
a generalized hydrodynamic Lagrangian function that adheres to a generalized stationary
action condition principle [31].
By utilizing the quantum hydrodynamic Lagrangian, the variation of the quantum
hydrodynamic action for a general quantum superposition of states can be expressed as
[31] (Greek indexes run from 0 to 3)
$$\delta S=\frac{1}{c}\iiint\sum_{k}|\psi_{k}|^{2}\left(\frac{\partial L_{(k)}}{\partial q_{\mu}^{(k)}}-\partial_{\nu}\frac{\partial L_{(k)}}{\partial\left(\partial_{\nu}q_{\mu}^{(k)}\right)}\right)\delta q_{\mu}^{(k)}\,d\Omega+\Delta S_{Q}+\delta S_{mix}=0 \tag{111}$$

where

$$\Delta S_{Q}=\frac{1}{c}\iiint\sum_{k}|\psi_{k}|^{2}\left(\frac{\partial L_{(k)}}{\partial|\psi_{k}|}-\partial_{\mu}\frac{\partial L_{(k)}}{\partial\left(\partial_{\mu}|\psi_{k}|\right)}\right)\delta|\psi_{k}|\,d\Omega \tag{112}$$

is the variation of the action generated by quantum effects, and

$$\delta S_{mix}=-\frac{1}{c}\iiint\sum_{k}p_{\nu}^{(k)}\,q_{\mu}^{(k)}\,|\psi_{k}|\,\delta|\psi_{k}|\,d\Omega \tag{113}$$

is the variation of the action generated by the mixing that happens within the quantum superposition of states.

The expression (113) is typical of superposition states, since the variation of the action solely due to eigenstates is represented by:

$$\begin{aligned}\delta S_{(k)}&=\iiint\delta\!\left(|\psi_{k}|^{2}L_{(k)}\right)dVdt\\&=\iiint\left(|\psi_{k}|^{2}\frac{\partial L_{(k)}}{\partial q_{\mu}^{(k)}}\,\delta q_{\mu}^{(k)}+|\psi_{k}|^{2}\frac{\partial L_{(k)}}{\partial\dot{q}_{\mu}^{(k)}}\,\delta\dot{q}_{\mu}^{(k)}+L_{(k)}\,\delta|\psi_{k}|^{2}\right)dVdt\\&=\iiint|\psi_{k}|^{2}\left(\frac{\partial L_{(k)}}{\partial q_{\mu}^{(k)}}-\frac{d}{dt}\frac{\partial L_{(k)}}{\partial\dot{q}_{\mu}^{(k)}}\right)\delta q_{\mu}^{(k)}\,dVdt+\Delta S_{Q(k)}\neq 0.\end{aligned} \tag{114}$$
Moreover, given that the quantum motion equations for the k-th eigenstate [31] satisfy the condition

$$\frac{\partial L_{(k)}}{\partial q_{\mu}^{(k)}}-\frac{d}{dt}\frac{\partial L_{(k)}}{\partial\dot{q}_{\mu}^{(k)}}=0, \tag{115}$$
the variation of the action $\delta S$ for the k-th eigenstate reads as

$$\delta S_{(k)}=\Delta S_{Q(k)}=\frac{1}{c}\iiint|\psi_{k}|^{2}\left(\frac{\partial L_{(k)}}{\partial|\psi_{k}|}-\partial_{\mu}\frac{\partial L_{(k)}}{\partial\left(\partial_{\mu}|\psi_{k}|\right)}\right)\delta|\psi_{k}|\,d\Omega \tag{116}$$
and therefore by (115) it follows both that

$$\lim_{dec}\delta S=\Delta S_{Q(k)} \tag{117}$$

and that

$$\lim_{dec}\delta S_{mix}=0. \tag{118}$$
Furthermore, since in the semiclassical limit, for $\hbar\rightarrow 0$ and consequently $V_{qu}\rightarrow 0$, we have that

$$\lim_{\hbar\rightarrow 0}\frac{\partial L_{(k)}}{\partial|\psi_{k}|}=0, \tag{119}$$

$$\lim_{\hbar\rightarrow 0}\partial_{\mu}\frac{\partial L_{(k)}}{\partial\left(\partial_{\mu}|\psi_{k}|\right)}=0, \tag{120}$$

and finally, through the identity

$$\lim_{\hbar\rightarrow 0}\Delta S_{Q(k)}=0, \tag{121}$$
by utilizing (118) and (121), the classical least action condition follows:

$$\lim_{\hbar\rightarrow 0}\lim_{dec}\delta S=0. \tag{122}$$
The classical least action law is recovered when the quantum hydrodynamic equations transition into their classical form. In this context, the condition is rigorously achieved through a coarse-grained approach, where the minimum discrete length is larger than the range of interaction of the quantum potential. Within the minimal cell of this coarse-grained approach (representing a macroscopic point mass), where quantum physics dominates, the decay of the superposition of states leads to stationary configurations that are practically identical to eigenstates. Meanwhile, the macroscopic point masses move according to classical laws of motion, since in their interaction $V_{qu}=0$.
From this perspective, it is not possible for any quantum gravity theory to achieve the classical limit of general relativity solely by imposing $\lim_{\hbar\rightarrow 0}$. This limitation arises because the classical least action, a fundamental principle of general relativity, cannot be restored through the straightforward condition $\lim_{\hbar\rightarrow 0}$ alone.
This goes beyond a mere formal theoretical bottleneck; it is a genuine condition positing that spacetime at the macroscopic level undergoes decoherence and lacks quantum
properties. This holds true, at least, in regions governed by Newtonian gravity. However,
in high-gravity areas near black holes and within them, the strong gravitational interac-
tion can give rise to macroscopic quantum entanglement over significant distances [58].
In this context, it is conceivable that quantum gravity approaches, such as QLG and
CDT, might face substantial challenges in achieving the classical limit of general relativity
solely by taking the limit of a null Planck constant. Although classical properties may be recovered in coherent states, this approach is not exempt from the influence of the quantum potential (which propels quantum entanglement) on large scales; as envisioned by objective-collapse theories, in the framework of the SQHM this influence persists within the tail interaction of coherent states.
Furthermore, given that quantum uncertainty and the finite speed of light rule out the existence of a continuum limit, deeming it devoid of physical significance, CDT could encounter difficulties in attempting to derive it.
3.3.2. The Simulation Analogy and the Holographic Principle
Even if the Holographic Principle and the Simulation Analogy support the idea that
reality is a phenomenon stemming from a computing process encoding it, some notable
differences arise. The simulation analogy portrays the real world as if it were being or-
chestrated by a computational procedure subject to various optimization methods. The
macroscopic classical reality, characterized by foam-like patterns with short discrete time intervals in microscopic quantum domains, clearly shows that scale invariance is a broken
symmetry in spacetime: The properties of the vacuum on a small scale are quite different
from those on a macroscopic scale, subjected to low gravity conditions [19], where the De
Broglie length defines the absolute scale.
Conversely, the holographic principle, based on the insightful observation that 3D space can be traced back to an informationally equivalent 2D formulation, which allows for the development of a theory where gravity and quantum mechanics can be described together, implicitly assumes that the properties of the vacuum at a macroscopic scale replicate those at a small scale, which is not accurate. Essentially, the holographic principle takes a shortcut similar to quantum loop gravity and causal dynamical triangulation, facing challenges in describing macroscopic reality and general relativity.
To address this gap, the theory must integrate the decoherence process, which involves a leakage of information. A condition of information loss bounded from below should be introduced to ensure a more comprehensive understanding of classical reality and make the theory less abstract, potentially paving the way for experimental confirmations at the classical scale.
However, for the sake of precision, it must be noted that in scenarios of high gravity,
such as in black holes, where quantum entanglement can span significant distances [58],
the Holographic Principle can yield accurate predictions, indicating that information
about infalling mass remains encoded on the event horizon area of black holes.
4. Philosophical Breakthrough
The spacetime structure, governed by its laws of physical evolution that enable the
modeling of reality as a computer simulation, eliminates the possibility of a continuous
realization in both time and space. This applies both to quantum microscopic world and
to classical objects with their dynamic behavior, encompassing living organisms and their
information processing.
4.1. Extending Free Will
Although we cannot predict the ultimate outcome of our decisions beyond a certain point in time, it is feasible to develop methods that enhance the likelihood of achieving our desired results in the distant future. This forms the basis of ‘best decision-making’. It is important to highlight that having the most accurate information about the current state
extends our ability to forecast future states. Furthermore, the farther away the realization
of our desired outcome is, the easier it becomes to adjust our actions to attain it. This con-
cept can be thought of as a preventive methodology. By combining information gathering
and preventive methodology, we can optimize the likelihood of achieving our objectives
and, consequently, expanding our free will.
Additionally, to streamline the evaluation of ‘what to do’, instead of the rational-mathematical calculations that exactly reconstruct the detailed dynamical pathway to our final state, we can focus solely on the probability of a certain future state configuration being realized, adopting a faster evaluation (a sort of Monte Carlo approach). This allows us to potentially identify the best sequence of events to achieve our objective. States beyond the time horizon in a realistic context can be accessed through a multi-step tree pathway. A practical example of this approach is the widely recognized
cardiopulmonary resuscitation procedure [61,62]. In this procedure, even though the early
assurance of the patient’s rescue may not be guaranteed, it is possible to identify a se-
quence of actions that maximizes the probability of saving their life.
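The probability-based shortcut described above can be sketched as a toy Monte Carlo estimate (entirely illustrative: the policy functions, noise model, goal, and horizon below are invented, not drawn from refs. [61,62]): candidate action policies are compared by the sampled probability of reaching a desired future state within a time horizon.

```python
import random

def success_probability(policy, start=0, goal=5, horizon=12, trials=4000):
    """Monte Carlo estimate of the probability that following `policy`
    (a state -> intended-step function) reaches `goal` within `horizon`
    noisy steps."""
    hits = 0
    for _ in range(trials):
        s = start
        for _ in range(horizon):
            s += policy(s) + random.choice((-1, 0, 1))   # intent + noise
            if s >= goal:
                hits += 1
                break
    return hits / trials

random.seed(1)
drift = lambda s: 0        # take no deliberate action, rely on chance
purposeful = lambda s: 1   # always step toward the goal
p_drift = success_probability(drift)
p_act = success_probability(purposeful)
print(f"drifting: {p_drift:.2f}, acting deliberately: {p_act:.2f}")
```

Even without reconstructing the exact pathway, sampling is enough to rank the policies: the deliberate sequence of actions maximizes the estimated probability of the desired outcome.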
In the final scenario, free will is the ability to make the desired choice at each step,
shaping the optimal path that enhances the probability of reaching the future desired re-
ality. Considering the simulated nature of the universe, it becomes evident that utilizing
powerful computers and software, such as those at the basis of artificial intelligence, for
acquiring and handling information can significantly enhance the decision-making pro-
cess. However, a comprehensive analysis and development of this argument extend be-
yond the scope of the current work and are deferred to future research.
4.2. Methodological Approaches, Emergent from the Darwinian Principle of Evolution, for the
Best Future States Problem Solving
Considering intelligence as a function that, in certain circumstances, aids in finding
the most effective way to achieve desired or useful outcomes, it is conceivable that meth-
ods beyond slow and burdensome rational calculations exist to attain results. This concept
aligns with emotional intelligence, a basic mechanism that, as demonstrated by psychol-
ogy and neuroscience, initiates subsequent purposeful rational evaluation.
The simulated nature of reality demonstrates a form of intelligence (with both emotional and rational components) that has evolved through a selection process favoring the ‘winner is the best’ solution. Although this work does not delve into another important aspect, the physical law governing matter self-assembly [63] leading to the appearance of life, it can already be asserted that two ‘methodologies of intelligence’ have emerged. The first one is ‘capturing intelligence’, where the subject acquires resources by overcoming and/or destroying the antagonist. The second one is ‘synergic intelligence’, which seeks collaborative actions to share gained resources or to construct a more efficient system or structure. The latter form of intelligence of the universal nature has played a crucial role in shaping organized systems (living organisms) and social structures and their
behaviors. However, a detailed examination of these dynamics goes beyond the scope of
this work and is left for future analysis.
4.3. Dynamical Conscience
By adhering to the quantum but macroscopically classical dynamics of the SQHM,
all objects, including living organisms, within the simulation analogy undergo fresh re-
calculations at each time step. This process propels them forward into the future within
the reality. The compilation of previous instant states, stored and processed within an
energy-information handling system, such as the brain, encapsulates the dynamics of evo-
lution and forms the foundation of consciousness in living organisms [64–66].
Neuroscience conceptualizes the consciousness of the biological mind as a three-level
process. Starting from the outermost level and moving inward, we have the cognitive cal-
culation, the orientative emotional stage, and, at the most fundamental level, the discrete
time acquisition routine. This routine captures the present state, compares it with the an-
ticipated state from the previous time step, and projects the future state for the next ac-
quisition step. The comparison between the anticipated and realized states provides input
for decision-making at higher levels. Additionally, this comparison generates awareness
of changes in reality and the speed of those changes, allowing for adjustments in the adap-
tive time scan velocity. In situations where reality is rapidly evolving with the emergence
of new elements or potential dangers, the scanning time velocity is increased. This process
gives rise to the perception of time dilation, where a few moments appear as a significantly prolonged period in the subject’s mind.
Given the natural progression of universal time, which achieves optimal performance
via quantum computation involving stepwise evolution and wavefunction decay for out-
put extraction, it is inevitable that, due to selective processes like matter self-assembly and
subsequent Darwinian evolution, living systems, optimized for efficiency, adopt the high-
est-performing intelligence for a subsystem (the minds of organisms) through replication
of universal quantum computing methods. This suggests that groups of interconnected
neurons implement quantum computing at the microscopic level of their structure, result-
ing in their output and/or overall state being the outcome of multiple local wavefunction
decays.
The macroscopic classical reality, characterized by foam-like patterns and brief discrete time and microscopic space quantum domains, aligns with the Penrose-Hameroff theory [67] proposing that a quantum mechanical approach to consciousness can account
for various aspects of human behavior, including free will. According to this theory, the
brain utilizes the inherent property of quantum physical systems to exist in multiple su-
perposed states, allowing it to explore a range of different options in the shortest possible
period of time.
4.4. Intentionality of Conscience
Intentionality is contingent upon the fundamental function of intelligence, which em-
powers the intelligent system to respond to environmental situations. Following calcula-
tion or emotional evaluation, when potential beneficial objectives are identified, intention-
ality is activated to initiate action. However, this reliance is constrained by the underlying
laws of physics explicitly defined in the simulation. Essentially, the intelligent system is
calibrated to adhere to the physics of the simulation. In our reality, it addresses all needs essential for the development of life and organized structures, encompassing basic requirements such as the necessity to eat, freedom of movement, association, and protection from cold and heat, among many others.
This problem-solving disposition is, however, constrained by the physics of the en-
vironment. When it comes to artificial machines, they cannot develop intentionality solely
through calculations because they lack integration with the world. In biological intelli-
gence, the ‘hardware’ is intricately linked with the physics of the environment. Not only does it manage energy and information, but there is also an inverse influence: energy shapes and develops the hardware of the biological system.
In contrast, a computational machine lacks the capacity to autonomously modify its
hardware and establish criteria for its safe maintenance or enhancement. In other words,
intentionality, the driving force behind the pursuit of desired solutions, cannot be devel-
oped by computational procedure executed by hardware. Intentionality serves as a safety
mechanism, or navigation system, for a continually evolving intelligent system whose
hardware is seamlessly integrated into its functionality and continuously updated. To
achieve this, a physically self-evolving wetware is necessary. At the level of artificial intelligence or autonomous machines, a partial improvement, aimed at better mimicking biological behavior, can be achieved by developing software that mimics biological self-modification, such as genetic programming.
4.5. Final Considerations
So far, the finite speed of information transmission has been merely postulated; the simulation analogy provides an explanation. Although real time and simulation time are disconnected, to compute the effects of infinite-speed particles you would need to compute all the interactions at once in a single frame (requiring infinite computational speed). Since this is impossible, a finite speed of propagation is needed in any simulation, because that is the only way the evolution can be integrated. The absolute value of this speed depends directly on the available computing power and nothing else.
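The necessity of a finite propagation speed can be illustrated with a toy local update rule (a hypothetical cellular-automaton sketch, not a model from the text): each cell is updated from itself and its nearest neighbours only, so no influence can travel faster than one cell per time step, the analogue of a light cone.

```python
def step(grid):
    """One local update: each cell reads only itself and its nearest
    neighbours, so influence spreads at most one cell per time step."""
    n = len(grid)
    return [max(grid[(i - 1) % n], grid[i], grid[(i + 1) % n])
            for i in range(n)]

grid = [0] * 21
grid[10] = 1                              # a single central disturbance
for t in range(1, 4):
    grid = step(grid)
    front = [i for i, v in enumerate(grid) if v]
    print(t, min(front), max(front))      # the 'light cone' widens by 1/step
```

Because each frame only needs local information, the update is integrable with finite resources; an infinite propagation speed would instead require evaluating every interaction across the whole grid in a single frame.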
Furthermore, quantum physics, which underpins quantum computing, exponentially simplifies the computation of the evolution of classical physics, thereby practically enabling simulations that were previously unattainable. Quantum physics serves as the foundational framework of the universe, while classical physics emerges on a macroscopic scale. The universe
evolves through quantum computation, with classical states derived only at discrete in-
tervals. This is less demanding than the computation of continuous classical evolution,
highlighting the sophistication of such an approach.
This suggests that the hypothesis, proposed by a large number of scientists, positing classical physics as fundamental while quantum physics arises from it through some ‘emergent’ mechanism, appears counterproductive and an unjustified complication from a computational standpoint.
The SQHM agrees with Roger Penrose’s point of view that dispels the anthropocentric idea that the act of measurement triggers the collapse: ‘It takes place in the physics, and it is not because somebody comes and looks at it’. Penrose’s assertion that intelligence (and the universe which inherently possesses it) is not classically computable aligns
closely with the perspective presented in this work, which argues that the universal pro-
gression with integrated intelligence is achievable only through the foundation of its
quantum computing.
Moreover, considering that the maximum entropy tendency is not universally valid [68–70], but rather the most efficient energy dissipation with the formation of order and living structures is the emergent law [63], we are positioned to narrow down the goal motivating the simulation to two possibilities: the generation of life and/or the realization of an efficient intelligent system.
Furthermore, since the physical laws, along with the resulting evolution of reality, are embedded in the problem that the simulation seeks to address, intentionality and free will are inherently manifested within the (simulated) reality in order to achieve the simulation's objective.
5. Conclusions
The stochastic quantum hydrodynamic model achieves a significant advancement by incorporating the influence of the fluctuating gravitational background, akin to a form of dark energy, into the quantum equations. This approach offers solutions that effectively tackle crucial aspects within the realm of objective-collapse theories.
Quantum Rep. 2024, 6 319
A notable accomplishment lies in resolving the 'tails' problem through the definition of the quantum potential length of interaction, supplementing the De Broglie length. Beyond the quantum interaction range, the quantum potential is unable to sustain the coherent Schrödinger quantum behavior of the wavefunction tails.
The SQHM additionally emphasizes that an external environment is unnecessary,
illustrating that the quantum stochastic behavior leading to wavefunction collapse can
be an intrinsic property of the system within a spacetime characterized by fluctuating
metrics. Moreover, positioned within the framework of relativistic quantum mechanics, and seamlessly aligned with the finite speed of light and of information transmission, the SQHM establishes a distinct connection between the uncertainty principle and the invariance of the speed of light.
The theory further deduces, within a fluctuating quantum system, the indeterminacy relation between energy and time, an aspect not expressible in conventional quantum mechanics. This result offers insights into measurement processes that cannot be concluded within a finite time interval, particularly within a genuinely quantum global system. Remarkably, the theory garners experimental validation through the confirmation of the Lindemann constant concerning the melting point of solid lattices and the transition of 4He from fluid to superfluid states.
The self-consistency of the SQHM is guaranteed by its ability to depict the collapse of the wavefunction within its own theoretical framework. This characteristic allows it to demonstrate the compatibility of the non-local property of quantum mechanics with the relativistic locality principle, specifically with the finite speed of light and the uncertainty principle. Furthermore, by illustrating that large-scale systems naturally transition into decoherent stable states, the SQHM effectively resolves the 'pre-existing reality' problem in quantum mechanics.
Moving forward, the paper demonstrates that the physical dynamics of the SQHM can be analogized to a computer simulation in which various optimization procedures are applied to bring it into realization. This conceptual framework, which leads to a macroscopic reality of foam-like consistency wherein microscopic domains with quantum properties coexist, helps elucidate the meaning of time in our contemporary reality and deepens our understanding of free will. The overarching design, introducing irreversible processes that influence the manifestation of macroscopic reality in the present moment, posits that the multiverse exists solely in future states, with the past constituted by the universe formed after the present instant. The projective decay at the present time thus represents a kind of reduction of the multiverse to a single universe.
The discrete simulation analogy lays the foundation for a profound understanding of several crucial questions. It addresses inquiries about the emergence of gravity in a discrete spacetime evolution and the recovery of the classical general relativity limit in Quantum Loop Gravity and Causal Dynamical Triangulation.
The simulation analogy reveals a strategy that minimizes the amount of information to be processed, thereby facilitating the operation of the simulated reality in attaining the solution of its predefined founding problem. From the perspective within, reality is perceived as the manifestation of simulation-specific physical laws. In the scenario under consideration, the simulation appears to employ an entropic optimization strategy, minimizing information loss while achieving maximum useful data compression and maintenance, all in alignment with the simulation's intended purpose of generating life as well as intelligence.
Author Contributions: Conceptualization, P.C. and S.C.; Formal analysis, S.C.; Investigation, P.C.
All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Data Availability Statement: The original contributions presented in the study are included in the
article, further inquiries can be directed to the corresponding author.
Conflicts of Interest: The authors declare no conflict of interest.
Appendix A
The Schrödinger–Langevin equation describing the quantum Brownian motion can be derived from (34) by utilizing the following identities:

$$\lim_{\frac{\lambda_c}{\mathcal{L}}\to 0\ \mathrm{or}\ T\to\infty} D=\lim_{\frac{\lambda_c}{\mathcal{L}}\to 0\ \mathrm{or}\ T\to\infty}\frac{kT}{m\gamma}\frac{\lambda_c^{2}}{4\mathcal{L}^{2}}=0,\tag{A1}$$

$$\lim_{\frac{\lambda_c}{\mathcal{L}}\to 0\ \mathrm{or}\ T\to\infty}\alpha=\lim_{\frac{\lambda_c}{\mathcal{L}}\to 0\ \mathrm{or}\ T\to\infty} mD\kappa\cong\frac{kT\kappa}{8\gamma}\frac{\lambda_c^{2}}{\mathcal{L}^{2}}=\mathrm{finite},\tag{A2}$$

$$\lim_{\frac{\lambda_c}{\mathcal{L}}\to 0\ \mathrm{or}\ T\to\infty} Q_{(q,t)}=0.\tag{A3}$$

Therefore, being $\kappa\frac{Q_{(q,t)}}{2}|\psi|\ll\kappa|S||\psi|$ in this limit, the term $i\kappa\frac{Q_{(q,t)}}{2}\psi$ can be disregarded in Equation (34), resulting in the conventional Langevin–Schrödinger equation for quantum Brownian motion:

$$i\hbar\,\partial_t\psi=-\frac{\hbar^{2}}{2m}\partial_i\partial_i\psi+\left(V_{(q)}+\hbar\kappa S\right)\psi+m\kappa D^{1/2}C\,\xi_{(t)}\,q\,\psi.\tag{A4}$$
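As a numerical aside, a Schrödinger equation with the stochastic term of Equation (A4) can be propagated with a standard split-step spectral scheme. The sketch below evolves a wave packet under a Gaussian white-noise force coupled linearly to q; it omits the nonlinear friction term proportional to S for brevity, and all parameter values are purely illustrative (ħ = m = 1). Because every sub-step is a unit-modulus multiplication in position or momentum space, the norm is conserved.

```python
import numpy as np

# Minimal split-step integrator for a Schrödinger equation with the
# stochastic term of Eq. (A4): a Gaussian white-noise force coupled
# linearly to q. The nonlinear friction term (~ kappa*S) is omitted
# for brevity; all parameter values are illustrative, hbar = m = 1.

rng = np.random.default_rng(0)
n, L = 256, 20.0
q = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)

dt, steps = 1e-3, 500
noise_amp = 0.5                      # illustrative noise strength
V = 0.5 * q**2                       # harmonic trap as a stand-in potential

psi = np.exp(-q**2).astype(complex)  # Gaussian initial packet
psi /= np.sqrt(np.sum(np.abs(psi)**2) * (L / n))

for _ in range(steps):
    xi = rng.standard_normal() / np.sqrt(dt)   # white noise, one draw per step
    V_t = V + noise_amp * xi * q               # stochastic linear-in-q potential
    psi *= np.exp(-1j * V_t * dt / 2)          # half potential step
    psi = np.fft.ifft(np.exp(-1j * 0.5 * k**2 * dt) * np.fft.fft(psi))
    psi *= np.exp(-1j * V_t * dt / 2)          # second half potential step

norm = np.sum(np.abs(psi)**2) * (L / n)
print(round(norm, 6))                # unitary sub-steps keep the norm at 1
```

Each realization of the noise gives one stochastic trajectory of the wavefunction; averaging observables over many realizations is the usual way such Langevin-type quantum equations are sampled numerically.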
References
1. Ashtekar, A.; Bianchi, E. A short review of loop quantum gravity. Rep. Prog. Phys. 2021, 84, 042001. https://doi.org/10.1088/1361-
6633/abed91.
2. Rovelli, C. Loop Quantum Gravity. Living Rev. Relativ. 1998, 1, 1. https://doi.org/10.12942/lrr-1998-1.
3. Carroll, S.M.; Harvey, J.A.; Kostelecký, V.A.; Lane, C.D.; Okamoto, T. Noncommutative Field Theory and Lorentz Violation.
Phys. Rev. Lett. 2001, 87, 141601. https://doi.org/10.1103/PhysRevLett.87.141601.
4. Douglas, M.R.; Nekrasov, N.A. Noncommutative field theory. Rev. Mod. Phys. 2001, 73, 977.
https://doi.org/10.1103/RevModPhys.73.977.
5. Einstein, A.; Podolsky, B.; Rosen, N. Can Quantum-Mechanical Description of Physical Reality be Considered Complete? Phys.
Rev. 1935, 47, 777–780. https://doi.org/10.1103/PhysRev.47.777.
6. Nelson, E. Derivation of the Schrödinger Equation from Newtonian Mechanics. Phys. Rev. 1966, 150, 1079.
7. Von Neumann, J. Mathematical Foundations of Quantum Mechanics; Beyer, R.T., Translator; Princeton University Press: Princeton,
NJ, USA, 1955.
8. Bell, J.S. On the Einstein Podolsky Rosen Paradox. Physics Physique Физика 1964, 1, 195–200.
9. Zurek, W. Decoherence and the Transition from Quantum to Classical—Revisited. Los Alamos Science Number 27. 2002. Available online: https://arxiv.org/pdf/quant-ph/0306072.pdf (accessed on 10 June 2003).
10. Bassi, A.; Großardt, A.; Ulbricht, H. Gravitational decoherence. Class. Quantum Gravity 2016, 34, 193002.
11. Bohm, D. A Suggested Interpretation of the Quantum Theory in Terms of Hidden Variables I and II. Phys. Rev. 1952, 85, 166–
179.
12. Feynman, R.P. Space-Time Approach to Non-Relativistic Quantum Mechanics. Rev. Mod. Phys. 1948, 20, 367–387.
https://doi.org/10.1103/RevModPhys.20.367.
13. Kleinert, H.; Pelster, A.; Putz, M.V. Variational perturbation theory for Markov processes. Phys. Rev. E 2002, 65, 066128.
14. Mita, K. Schrödinger’s equation as a diffusion equation. Am. J. Phys. 2021, 89, 500–510.
15. Madelung, E. Quantentheorie in hydrodynamischer Form. Z. Phys. 1926, 40, 322–326.
16. Jánossy, L. Zum hydrodynamischen Modell der Quantenmechanik. Z. Phys. 1962, 169, 79.
17. Birula, I.B.; Cieplak, M.; Kaminski, J. Theory of Quanta; Oxford University Press: New York, NY, USA, 1992; pp. 87–115.
18. Tsekov, R. Bohmian mechanics versus Madelung quantum hydrodynamics. arXiv 2011, arXiv:0904.0723v8.
19. Chiarelli, P. Quantum-to-Classical Coexistence: Wavefunction Decay Kinetics, Photon Entanglement, and Q-Bits. Symmetry
2023, 15, 2210. https://doi.org/10.3390/sym15122210.
20. Santilli, R.M. A Quantitative Representation of Particle Entanglements via Bohm’s Hidden Variable According to Hadronic
Mechanics. Prog. Phys. 2002, 1, 150–159.
21. Chiarelli, P. The Stochastic Nature of Hidden Variables in Quantum Mechanics. Hadron. J. 2023, 46, 315–338.
https://doi.org/10.29083/HJ.46.03.2023/SC315.
22. Chiarelli, P. Can fluctuating quantum states acquire the classical behavior on large scale? J. Adv. Phys. 2013, 2, 139–163.
23. Rumer, Y.B.; Ryvkin, M.S. Thermodynamics, Statistical Physics, and Kinetics; Mir Publishers: Moscow, Russia, 1980; pp. 444–459.
24. Chiarelli, P. Quantum Decoherence Induced by Fluctuations. Open Access Libr. J. 2016, 3, 1–20.
https://doi.org/10.4236/oalib.1102466.
25. Bressanini, D. An Accurate and Compact Wave Function for the 4He Dimer. EPL 2011, 96, 23001. https://doi.org/10.1209/0295-5075/96/23001.
26. Gross, E.P. Structure of a quantized vortex in boson systems. Il Nuovo Cimento 1961, 20, 454–456. https://doi.org/10.1007/BF02731494.
27. Pitaevskii, L.P. Vortex lines in an Imperfect Bose Gas. Sov. Phys. JETP 1961, 13, 451–454.
28. Chiarelli, P. Quantum to Classical Transition in the Stochastic Hydrodynamic Analogy: The Explanation of the Lindemann
Relation and the Analogies Between the Maximum of Density at He Lambda Point and that One at Water-Ice Phase Transition.
Phys. Rev. Res. Int. 2013, 3, 348–366.
29. Chiarelli, P. The quantum potential: The missing interaction in the density maximum of He4 at the lambda point? Am. J. Phys.
Chem. 2014, 2, 122–131.
30. Andronikashvili, E.L. Zh. Éksp. Teor. Fiz. 1946, 16, 780; 1948, 18, 424; J. Phys. USSR 1946, 10, 201.
31. Chiarelli, P. The Gravity of the Classical Klein-Gordon Field. Symmetry 2019, 11, 322. https://doi.org/10.3390/sym11030322.
32. Chiarelli, P. Quantum Geometrization of Spacetime in General Relativity; BP International: Hong Kong, China, 2023; ISBN 978-81-
967198-7-6 (Print), ISBN 978-81-967198-3-8 (eBook). https://doi.org/10.9734/bpi/mono/978-81-967198-7-6.
33. Ruggiero, P.; Zannetti, M. Quantum-classical crossover in critical dynamics. Phys. Rev. B 1983, 27, 3001.
34. Ruggiero, P.; Zannetti, M. Critical Phenomena at T = 0 and Stochastic Quantization. Phys. Rev. Lett. 1981, 47, 1231.
35. Ruggiero, P.; Zannetti, M. Microscopic derivation of the stochastic process for the quantum Brownian oscillator. Phys. Rev. A 1983, 28, 987.
36. Ruggiero, P.; Zannetti, M. Stochastic description of the quantum thermal mixture. Phys. Rev. Lett. 1982, 48, 963.
37. Ghirardi, G.C.; Rimini, A.; Weber, T. Unified dynamics for microscopic and macroscopic systems. Phys. Rev. D 1986, 34, 470–
491. https://doi.org/10.1103/PhysRevD.34.470.
38. Pearle, P. Combining stochastic dynamical state-vector reduction with spontaneous localization. Phys. Rev. A 1989, 39, 2277–
2289. https://doi.org/10.1103/PhysRevA.39.2277.
39. Diósi, L. Models for universal reduction of macroscopic quantum fluctuations. Phys. Rev. A 1989, 40, 1165–1174.
https://doi.org/10.1103/PhysRevA.40.1165.
40. Penrose, R. On Gravity’s role in Quantum State Reduction. Gen. Relativ. Gravit. 1996, 28, 581–600;
https://doi.org/10.1007/BF02105068.
41. Berger, M.J.; Oliger, J. Adaptive mesh refinement for hyperbolic partial differential equations. J. Comput. Phys. 1984, 53, 484–
512. https://doi.org/10.1016/0021-9991(84)90073-1.
42. Huang, W.; Russell, R.D. Adaptive Moving Mesh Method; Springer: Berlin/Heidelberg, Germany, 2010; ISBN 978-1-4419-7916-2.
43. Babbush, R.; Berry, D.W.; Kothari, R.; Somma, R.D.; Wiebe, N. Exponential Quantum Speedup in Simulating Coupled Classical
Oscillators. Phys. Rev. X 2023, 13, 041041. https://doi.org/10.1103/PhysRevX.13.041041.
44. Micciancio, D.; Goldwasser, S. Complexity of Lattice Problems: A Cryptographic Perspective; Springer Science & Business Media:
Berlin, Germany, 2002; Volume 671.
45. Monz, T.; Nigg, D.; Martinez, E.A.; Brandl, M.F.; Schindler, P.; Rines, R.; Wang, S.X.; Chuang, I.L.; Blatt, R. Realization of a
scalable Shor algorithm. Science 2016, 351, 1068–1070. https://doi.org/10.1126/science.aad9480.
46. Long, G.-L. Grover algorithm with zero theoretical failure rate. Phys. Rev. A 2001, 64, 022307.
https://doi.org/10.1103/PhysRevA.64.022307.
47. Chandra, S.; Paira, S.; Alam, S.S.; Sanyal, G. A comparative survey of symmetric and asymmetric key cryptography. In Proceedings of the 2014 International Conference on Electronics, Communication and Computational Engineering (ICECCE), Hosur, India, 17–18 November 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 83–93.
48. Makeenko, Y. Methods of Contemporary Gauge Theory; Cambridge University Press: Cambridge, UK, 2002; ISBN 0-521-80911-8.
49. Singh, P.; Miller, T.; Wang, Y. Quantum Gravitational Effects on Decoherence. Phys. Lett. B 2021, 812, 136015.
50. Fraga, E.S.; Morris, J.L. An adaptive mesh refinement method for nonlinear dispersive wave equations. J. Comput. Phys. 1992,
101, 718. https://doi.org/10.1016/0021-9991(92)90045-Z.
51. Berger, M.J.; Colella, P. Local adaptive mesh refinement for shock hydrodynamics. J. Comput. Phys. 1989, 82, 64–84.
52. Klypin, A.A.; Trujillo-Gomez, S.; Primack, J. Dark Matter Halos in the Standard Cosmological Model: Results from the Bolshoi
Simulation. Astrophys. J. 2011, 740, 102. https://doi.org/10.1088/0004-637X/740/2/102.
53. Huang, W.; Ren, Y.; Russell, R.D. Moving mesh methods based on moving mesh partial differential equations. J. Comput. Phys. 1994, 113, 279–290. https://doi.org/10.1006/jcph.1994.1135.
54. Gnedin, N.Y. Softened Lagrangian hydrodynamics for cosmology. Astrophys. J. Suppl. Ser. 1995, 97, 231–257; ISSN 0067-0049.
55. Gnedin, N.Y.; Bertschinger, E. Building a cosmological hydrodynamic code: Consistency condition, moving mesh gravity and
slh-p3m. Astrophys. J. 1996, 470, 115. https://doi.org/10.1086/177854.
56. Kravtsov, A.V.; Klypin, A.A.; Khokhlov, A.M. Adaptive Refinement Tree: A New High-Resolution N-Body Code for Cosmo-
logical Simulations. Astrophys. J. Suppl. Ser. 1997, 111, 73. https://doi.org/10.1086/313015.
57. Eichhorn, A.; et al. Quantum Gravity and the Functional Renormalization Group: The Road towards Asymptotic Safety. Front. Phys. 2021.
58. Chiarelli, P. Quantum Effects in General Relativity: Investigating Repulsive Gravity of Black Holes at Large Distances.
Technologies 2023, 11, 98. https://doi.org/10.3390/technologies11040098.
59. Valev, D. Estimation of Total Mass and Energy of the observable Universe. Phys. Int. 2014, 5, 15–20.
https://doi.org/10.3844/pisp.2014.15.20.
60. Ashtekar, A.; Pullin, J. (Eds.) Loop Quantum Gravity: From Concepts to Phenomenology; Springer, 2021; ISBN 978-3-030-67814-1.
61. DeBard, M.L. Cardiopulmonary resuscitation: Analysis of six years experience and review of the literature. Ann. Emerg. Med.
1981, 10, 408–416.
62. Cooper, J.A.; Cooper, J.D.; Cooper, J.M. Cardiopulmonary resuscitation: History, current practice, and future direction.
Circulation 2006, 114, 2839–2849.
63. Chiarelli, P. Far from Equilibrium Maximal Principle Leading to Matter Self-Organization. J. Adv. Chem. 2009, 5, 753–783.
https://doi.org/10.48550/arXiv.1303.1772.
64. Seth, A.K.; Suzuki, K.; Critchley, H.D. An interoceptive predictive coding model of conscious presence. Front. Psychol. 2012, 2,
18458. https://doi.org/10.3389/fpsyg.2011.00395.
65. Ao, Y.; Catal, Y.; Lechner, S.; Hua, J.; Northoff, G. Intrinsic neural timescales relate to the dynamics of infraslow neural waves.
NeuroImage 2024, 285, 120482. https://doi.org/10.1016/j.neuroimage.2023.120482.
66. Craig, A.D. The sentient self. Brain Struct Funct. 2010, 214, 563–577. https://doi.org/10.1007/s00429-010-0248-y.
67. Hameroff, S.; Penrose, R. Consciousness in the universe: A review of the ‘Orch OR’theory. Phys. Life Rev. 2014, 11, 39–78.
https://doi.org/10.1016/j.plrev.2013.08.002.
68. Prigogine, I. Le domaine de validité de la thermodynamique des phénomènes irréversibles. Physica 1949, 15, 272–284. https://doi.org/10.1016/0031-8914(49)90056-7.
69. Sawada, Y. A thermodynamic variational principle in nonlinear non-equilibrium phenomena. Prog. Theor. Phys. 1981, 66, 68–76.
70. Malkus, W.V.R.; Veronis, G. Finite Amplitude Cellular Convection. J. Fluid Mech. 1958, 4, 225–260.
https://doi.org/10.1017/S0022112058000410.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual au-
thor(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.