© Henry Stewart Publications 2054-7544 (2021) Vol. 7, 2 169–195 Applied Marketing Analytics
Emanating confluence:
The symbiotic relationship between
artificial intelligence and data
Received (in revised form): 20th August, 2021
Ted W. Gross
Founder, Asanatae, Israel
Ted W. Gross is a futurist and theorist who has worked in the high-tech industry for many years
as a chief technology officer and vice president of research and development. Ted believes
there is a hidden order within chaos, and that complexity arises from simple models and concepts.
His work on technology theory and application has been published in various professional
technology journals.
E-mail: tedwgross@gmail.com
Abstract This paper seeks to explain significant constructs within artificial intelligence
(AI), including but not limited to: the impact of ‘information theory’; entropy, especially
in terms of ‘information entropy’; and language theory (linguistics), dealing with all
communication methods and the ‘meaning’ in that communication. The paper discusses
the progression from ‘chaos theory’ to ‘complexity theory’ to ‘emergence’ and finally the
‘technological singularity’. It examines such questions as: How do chaos and complexity
lead to emergent systems which will inevitably lead to a technological singularity? How
does the endless loop of AI progress as it emanates outward, comes to confluence, and
emanates yet again? What does each phase entail? How does our advance towards
exponential growth in data affect the progression of AI? What are the dangers of ‘bias’?
What are the risks as we move towards emergence? Finally, why must we exercise
extreme caution as we come closer to the technological singularity?
KEYWORDS: artificial intelligence, entropy, chaos, complexity, emergence, singularity,
superintelligence
WHERE MANY PATHS AND
ERRANDS MEET
‘The Road goes ever on and on,
Down from the door where it began.
Now far ahead the Road has gone,
And I must follow, if I can,
Pursuing it with eager feet,
Until it joins some larger way
Where many paths and errands meet.
And whither then? I cannot say.’1
At the beginning of J.R.R. Tolkien’s
epic saga, ‘The Lord of The Rings’, Bilbo
Baggins, the protagonist from the prequel
‘The Hobbit’,2 leaves the Shire to embark
upon his final trip to Rivendell, the home
of the Elves. As he sets out upon his last
adventure, singing to himself his self-
composed song ‘The Road Goes Ever
On and On’, Bilbo bequeaths his home
and a potent, terrifying and ominous
legacy to his nephew Frodo — ‘one ring
to rule them all’.3 Although Bilbo shows
enormous strength of will by giving up
possession of the ring, he fails to warn his
nephew of the dangers inherent in the ring’s
power. Indeed, not even Gandalf, the great
magician and seer, feels capable of warning
Frodo of the dangers ahead.
In today’s age of artificial intelligence
(AI), society, much like Bilbo and Frodo,
has acquired something precious and
incredibly powerful, forged through the
accumulation of generations-worth of
knowledge. Moreover, much like Frodo
at the beginning of his adventure, no one
is yet fully aware of the power it contains.
Nevertheless, we persist in exploring it
because ‘the road goes ever on and on’.
Many seers view the advancement of
humankind with a mixture of awe and deep
worry. Some, like Gandalf, choose to keep
their counsel to themselves as they watch
cautiously what paths are taken. Others
are vocal in their objections, warnings or
encouragement as society accelerates towards
horizons unknown.
In terms of where AI is leading
humankind, there are thousands of scenarios
painted upon the canvas of technology.
Optimists paint a new ‘Garden in Eden’
— a garden replete with ever-expanding
beauty and open access to both the Tree
of Knowledge and the Tree of Life. In the
Bible, these two trees, representing ever-
expanding knowledge and eternal life, were
sacrosanct, and the consumption of their
fruit punishable by expulsion and death.
This time, however, there are no such
restrictions. Unlimited knowledge and a
world without disease, where bodies last
forever, looms just beyond the horizon, if
only we can crack the code. Nirvana on the
grandest scale imaginable.
The pessimists, by contrast, see danger
in every advance and around every corner.
They warn that humankind has become
drunk on its powers of discovery and
innovation, and that society is running
headlong into a hell of its own making.
Indeed, they contend, it is only a matter
of time before we cross the line and create
a supreme being — a god with no care,
compassion or empathy for those
who created it, and who, once unleashed,
will be guided by its own logic. We will
be in danger of being assailed and perhaps
annihilated by our own creations.
There is, as always, a middle road — a
road where AI is used in specific areas to
benefit humankind. Health, wealth, freedom
and happiness should be within everyone’s
grasp. Each advance will increase society’s
knowledge and the ability of individuals
to cope with the modern world. As we
are the creators and inventors, humankind
will always remain in absolute control,
and maintain the machines that serve as
homes to AI. The ability to conquer the
base instincts of war, jealousy, destruction,
hatred and racism will lead slowly to a
more enlightened existence. However, the
advance will be controlled, fastidious, and
serve only to increase people’s wellbeing.
Endless paths lie ahead — emanation
on a hitherto unseen scale. Millions of
rivulets flow along the path of data and
fluid time into a quantum existence. AI
feeds endlessly upon this river of data, in a
loop of progression and chaotic existence.
Every day, the river of data — the fuel of
all AI — swells until eventually it reaches
true exponential power. The more we
progress, the more the illusion of our power
grows. In reality, however, our control and
understanding of exactly how AI is working
is failing to keep pace.
Where there is an emanation, nature
demands confluence. Fluid streams
cross paths, intertwine, share and gain
power by combining their prowess.
They communicate in the complexity
of existence, seeking another emanation
to grow and feed upon to maintain and
increase their current. The more data, the
larger the rivulet will grow and the further it
will journey. Some rivulets dry up, leaving
only dry-cracked earth. Others emerge to
become mighty rivers, ever combining into
a singular tremendous power. Emanation
will occur again as other rivulets emerge
from the newly created confluence. It is
an ever-expanding loop of information,
analyses, discovery, innovation and creation.
Like Bilbo setting out with no idea of the
path ahead — only his desired destination,
society too is upon a road of discovery.
Some travellers are destined to arrive in
Rivendell; others will follow Frodo’s
path and embark upon a journey towards
unimagined power — at a price that many
may be unwilling to pay.
Where does this road lead? What will the
signposts along the way disclose? When will
the destination reveal itself with precision
and clarity? In answer, Bilbo’s soft, haunting
whisper echoes through the expanse of the
universe:
‘And whither then? I cannot say.’
EXPLODING INFORMATION AND
INTELLIGENCE
One of the most fundamental basics of AI is
the unquenchable need for data. In his work
on dataism, Yuval Noah Harari describes
this phenomenon as the ‘data religion’:4
‘Dataism declares that the universe
consists of data flows, and the value of any
phenomenon or entity is determined by its
contribution to data processing … 5
… Like capitalism, dataism too began
as a neutral scientific theory, but is now
mutating into a religion that claims
to determine right and wrong. The
supreme value of this new religion is
“information flow”. If life is the movement
of information, and if we think that life
is good, it follows that we should deepen
and broaden the flow of information in the
universe’.6
Whether or not one treats dataism like an
emerging religion, it is incontestable that
AI has irrevocably changed our approach
to data analytics. No longer satisfied with
Excel spreadsheets and specialised statistical
software, analytics today demands constantly
evolving and evermore sophisticated
algorithms based upon incomprehensible
amounts of data.
This new age of AI affects every aspect
of our lives and our world. As Amy Webb
states in her introduction to ‘The Big
Nine’: ‘AI isn’t a tech trend, a buzzword,
or a temporary distraction — it is the third
era of computing’.7 The sheer volume
of information and data now available,
combined with the quality of thought
and scientific advancement, can bring
any researcher into a state of awe at the
possibilities.
The ‘information explosion’8 will lead
straight to the ‘intelligence explosion’.
As Webb points out, ‘The intelligence
explosion, as foretold 100 years ago by
British mathematician and early AI pioneer
I. J. Good, begins in the late 2060s’.9
The intelligence explosion marks a
turning point as well. As the possibilities
of enhancing AI flourish, so too does its
metamorphosis from assistant to creator.
‘The pioneers of artificial intelligence,
however, notwithstanding their belief in
the imminence of human-level AI, mostly
did not contemplate the possibility of
greater-than-human AI…
… Let an ultraintelligent machine
be defined as a machine that can far
surpass all the intellectual activities of any
man however clever. Since the design
of machines is one of these intellectual
activities, an ultraintelligent machine could
design even better machines; there would
then unquestionably be an “intelligence
explosion”, and the intelligence of man
would be left far behind. Thus, the first
ultraintelligent machine is the last invention
that man need ever make, provided that the
machine is docile enough to tell us how to
keep it under control’.10
Information — which is to say, data — is
the very soul of anything we, as a collective
or as individuals, accomplish, create and
envision. Certainly, it is the lifeblood
of analytics. The fuel that empowers AI
is data, and without accumulating and
then examining such data, the incredible
technological advances seen today would be
impossible.
Large segments of society, including
researchers, tend to approach AI as a unified
entity. It is not. AI is a conglomeration of a
myriad of theories, discoveries, technological
advancements and innovations. It
encompasses every aspect of human
knowledge and endeavour.
It is also common to think of analytics
as a progression of objective, non-biased
computer algorithms that simply determine
numbers and data. It is not. Analytics is
fuelled by data. Information is, by definition,
data. Information can be wrong, biased, and
come in many forms — not just textual.
Analytics must provide for all situations if it
hopes to be anywhere near correct.
Why is this so important? Data lie at
the heart and soul of AI. Indeed, without
information, AI could not exist. AI requires
massive amounts of information to make any
sort of informed decision. AI is progressing
at an unprecedented pace not only towards
the emergence of an intelligence explosion,
but a possible technological singularity. It is
hard to imagine anything more frightening
than a superintelligence running amok
because the algorithms and data first
imposed upon it were biased and based
upon inaccurate information and deficient
rules. As Spock wisely and logically states:
‘Insufficient facts always invite danger.’11
DISCOVERY, INNOVATION AND DATA
‘Imagination is more important than
knowledge. For knowledge is limited,
whereas imagination embraces the entire
world, stimulating progress, giving birth
to evolution. It is, strictly speaking, a real
factor in scientific research.’12
Many great discoveries, even mathematical
constructs, are made entirely by chance.
By contrast, some follow a great leap of
intuition, leading to possibilities previously
unknown. Imagination, curiosity, and a
deeply ingrained desire to make sense of
the universe fuel these discoveries. AI is
no different. It did not suddenly appear on
the radar. It is born of many decades —
even centuries — of evolutionary progress,
in thinking, capabilities and expanding
technology.
The questions of ‘what we want AI to do’
or ‘what slows us down or speeds us up’ are,
in essence, irrelevant. The answers to such
questions depend upon the nature of the
exact problem that needs to be solved. The
solutions offered today will not be the same
ones offered even 90 days into the future
as our ever-expanding AI horizons grow.
What we must comprehend thoroughly are
the implications of every step we take in AI.
What are we doing with the data? How are
we analysing the data? This requires taking
a step back to understand the evolutionary
process leading to the age of AI.
Some credit Lady Ada Lovelace — the
original pioneer of computer programming
— with creating the world’s first computer
algorithm and, with it, modern analytics.13
Lady Lovelace was working on the
‘analytical engine’, a theoretical machine
proposed by her mentor, Charles Babbage,
which would be capable of solving
equations, precisely calculating a sequence
of Bernoulli numbers. In her notes on the
proposed machine, which challenged Alan
Turing’s thinking process almost a century
later, she wrote:
‘The analytical engine has no pretensions
whatever to originate anything. It can do
whatever we know how to order it to
perform. It can follow analysis; but it has
no power of anticipating any analytical
relations or truths.’14
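Lovelace’s notes described how the engine could generate Bernoulli numbers. As a minimal modern sketch of that calculation (using the standard recurrence and the convention B_1 = −1/2, not her tabular method), it might look like:

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Bernoulli numbers B_0..B_n (convention B_1 = -1/2), computed
    from the recurrence sum_{k=0}^{m} C(m+1, k) * B_k = 0 for m >= 1."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        s = sum(comb(m + 1, k) * B[k] for k in range(m))
        B.append(-s / (m + 1))
    return B

print([str(b) for b in bernoulli(8)])
# ['1', '-1/2', '1/6', '0', '-1/30', '0', '1/42', '0', '-1/30']
```

Exact rational arithmetic is used here so the sequence matches the textbook values precisely.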
Others point to Turing’s opening statement,
‘I propose to consider the question, “Can
machines think?”’ in his paper ‘Computing
machinery and intelligence’,15 as the first
step in the quest to realise AI (despite the
term ‘artificial intelligence’ being wholly
absent from the paper). Certainly, it is hard
to overestimate Turing’s contribution to
the field — his famous ‘imitation game’,
also known as the ‘Turing Test’,16 remains
the topic of much discussion, and although
written in 1950, the following statement still
resonates:
‘I propose to consider the question, “Can
machines think?” This should begin with
definitions of the meaning of the terms
“machine” and “think”. The definitions
might be framed so as to reflect so far as
possible the normal use of the words, but
this attitude is dangerous. If the meaning
of the words “machine” and “think” are
to be found by examining how they are
commonly used it is difficult to escape
the conclusion that the meaning and the
answer to the question, “Can machines
think?” is to be sought in a statistical
survey such as a Gallup poll. But this
is absurd. Instead of attempting such a
definition I shall replace the question by
another, which is closely related to it and
is expressed in relatively unambiguous
words.’17
Purists will go back to 1637, when
the philosopher René Descartes, in his
‘Discourse on the Method’,18 asserted:
‘Cogito, ergo sum’ — ‘I think, therefore
I am’. This brief statement opened up a
world of questions regarding ‘existence’,
‘intelligence’ and ‘consciousness’.
The consequences of this assertion are
considerably more far-reaching than
Descartes could ever have imagined. Most
notably, we now ask in all seriousness, ‘If
a machine can think, then is it conscious?’.
This is no longer the realm of science fiction,
and paradoxically, what lies at the heart of
this progression is the very substance that
empowers humankind’s advancement —
data and the methods used to analyse data.
Many researchers and authors, such as
Nick Bostrom in ‘Superintelligence’,19
Amy Webb in ‘The Big Nine’,20 Ray
Kurzweil in ‘The Singularity Is Near’21
and ‘How To Create A Mind’,22 Yuval
Noah Harari in ‘Homo Deus: A Brief
History of Tomorrow’,23 James Gleick
in ‘Chaos: Making a New Science’24 and
‘The Information: A History, a Theory, a
Flood’,25 Melanie Mitchell in ‘Complexity:
A Guided Tour’26 and ‘Artificial Intelligence:
A Guide for Thinking Humans’,27 to name
but a few, have produced leading-edge
works on understanding AI, data and the
innovative thinking within these fields of
endeavour. The dean of them all, Walter
Isaacson, in some of his seminal works —
‘Leonardo da Vinci’,28 ‘The Innovators’,29
the introduction to ‘Invent and Wander
— The Collected Writings of Jeff Bezos’,30
‘Steve Jobs’31 and ‘The Code Breaker’32 —
though concentrating on portraying
personalities, does a remarkable job of
describing the more esoteric developments
in AI, and how the exponential growth of
data has motivated the great innovators.
The truth is, many innovations
and discoveries over the past century
have helped (or if one wishes, pushed)
humankind to reach its current apex, and by
tomorrow scale yet another summit. These
discoveries serve as a foundation, and they
cover all aspects of knowledge, science and
theory. Indeed, whatever mystical destiny
is involved, they seem to happen in waves.
Although they spring from areas that initially
appear to have nothing to do with one
another, given time for osmosis into the
general realm of human knowledge, they
merge into one flowing unit.
‘Two revolutions coincided in the
1950s. Mathematicians, including Claude
Shannon and Alan Turing, showed that
all information could be encoded by
binary digits, known as bits. This led to a
digital revolution powered by circuits with
on-off switches that processed information.
Simultaneously, Watson and Crick
discovered how instructions for building
every cell in every form of life were encoded
by the four-letter sequences of DNA. Thus
was born an information age based on digital
coding (0100110111001…) and genetic
coding (ACTGGTAGATTACA…). The
flow of history is accelerated when two
rivers converge.’33
The following list of discoveries is not
exhaustive. However, those mentioned are
essential if one is to grasp chaos,
complexity, emergence and singularity, and
to understand the marriage between AI
and data.
Information theory
It is often the case that significant discoveries
and innovations require a seed from which
to evolve. This seed, in and of itself, is often
of immense importance. Such is the case
with information theory,34 the significance
of which is impossible to quantify. Digital
information in any form would simply not
exist were it not for information theory.
It began in 1854 with George Boole’s
paper on algebraic logic, ‘An investigation of
the laws of thought on which are founded
the mathematical theories of logic and
probabilities’.35 Boole’s algebraic and logical
notions are known today as ‘Boolean
functions’36 and permeate our thought
processes from an early age. Computer
programmers are entirely reliant upon the
Boolean logical operators, and without such
propositions represented in code, it would
prove impossible to develop any level of
programming sophistication.
‘Boole revolutionised logic by finding ways
to express logical statements using symbols
and equations. He gave true propositions
the value 1 and false propositions a 0. A set
of basic logical operations — such as and,
or, not, either/or, and if/then — could
then be performed using these propositions,
just as if they were math equations.’37
The seed of information theory then
germinated for almost a century. In 1948, at
Bell Laboratories, Claude Shannon published
a paper with an immeasurable impact upon
modern technology and data analysis. In ‘A
mathematical theory of communication’,38
now commonly known as ‘the Magna
Carta of the Information Age’, Shannon
introduced the mind-boggling notion that
information can be quantified and measured.
He applied Boolean logic towards a whole
new cosmos while adding his personal touch
of genius.
‘Shannon figured out that electrical circuits
could execute these logical operations
using an arrangement of on-off switches.
To perform an and function, for example,
two switches could be put in sequence,
so that both had to be on for electricity
to flow. To perform an or function,
the switches could be in parallel so that
electricity would flow if either of them was
on. Slightly more versatile switches called
logic gates could streamline the process.’39
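Shannon’s correspondence between switch arrangements and logical operations can be illustrated in a few lines of code (a toy sketch, not Shannon’s own formulation):

```python
# Shannon's insight: switches in series behave like AND (current flows
# only if both are closed); switches in parallel behave like OR
# (current flows if either is closed).
def series(a, b):
    return a and b

def parallel(a, b):
    return a or b

# Truth tables for both arrangements
for a in (False, True):
    for b in (False, True):
        print(a, b, "series:", series(a, b), "parallel:", parallel(a, b))
```

Every digital circuit, however elaborate, reduces to compositions of such gates.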
‘Before Shannon’s paper, information
had been viewed as a kind of poorly
defined miasmic fluid. But after Shannon’s
paper, it became apparent that information
is a well-defined and, above all, measurable
quantity…
… Shannon’s theory of information
provides a mathematical definition of
information, and describes precisely how
much information can be communicated
between different elements of a system.
This may not sound like much, but
Shannon’s theory underpins our
understanding of how signals and noise are
related, and why there are definite limits
to the rate at which information can be
communicated within any system, whether
man-made or biological’.40
‘The resulting units’, wrote Shannon, ‘may
be called binary digits, or more briefly,
bits’.41
‘The bit now joined the inch, the
pound, the quart, and the minute as a
determinate quantity — a fundamental
unit of measure. But measuring what?
“A unit for measuring information”,
Shannon wrote, as though there were such
a thing, measurable and quantifiable, as
information.’42
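The abstraction into bits is easy to make concrete; a single line suffices (assuming, for illustration, the ASCII encoding):

```python
# The letter 'S' abstracted into eight bits (ASCII code 83)
char = "S"
bits = format(ord(char), "08b")
print(char, "->", bits)  # S -> 01010011
```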
Until this time, no one had ever presumed
that information could be subject to a
mathematical formula or computational
analysis. With the publication of this
seminal paper, there was the world before
Shannon and the world after Shannon.
Perhaps Shannon’s greatest achievement was
his counter-intuitive decision to analyse
‘information’ independently of ‘meaning’.
In short, when dealing with information,
one does not need to consider the meaning
of the message. Indeed, ‘meaning’ is
superfluous to the actual content. ‘Meaning’
is, in effect, meaningless. As he wrote in
the second introductory paragraph of ‘A
mathematical theory of communication’:
‘The fundamental problem of
communication is that of reproducing at
one point either exactly or approximately
a message selected at another point.
Frequently the messages have meaning;
that is they refer to or are correlated
according to some system with certain
physical or conceptual entities. These
semantic aspects of communication are
irrelevant to the engineering problem. The
significant aspect is that the actual message
is one selected from a set of possible
messages. The system must be designed to
operate for each possible selection, not just
the one which will actually be chosen since
this is unknown at the time of design.’43
The concerns here are probability and
uncertainty. With the birth of the bit,
Shannon took the 0/1, true/false construct
of Boolean functions to a whole new
stratosphere.
theory lie ‘noise’ and ‘surprise’. All
communication — human, computer,
over wires, signals, digital — as a universal
fundamental — has an element of ‘noise’
and an element of ‘surprise’. The surprise
is what is left after the noise is eliminated
(and there is always noise on any channel of
communication) without interfering with
the original message.
‘So, what is information? It is what remains
after every iota of natural redundancy
has been squeezed out of a message, and
after every aimless syllable of noise has
been removed. It is the unfettered essence
that passes from computer to computer,
from satellite to Earth, from eye to brain,
and (over many generations of natural
selection) from the natural world to the
collective gene pool of every species.’44
Shannon’s information theory gave
practical birth to the digital age. Without
it, people would be drowning in noise
and uncertainty regarding the veracity of
the messages they shared. Without it, all
modes of communication would be left
grappling with garbled information and
incoherent meanings. Paradoxically, by
ignoring the meaning of a message, by
showing how insignificant ‘meaning’ is to
an actual message, Shannon gave the world
true meaning and the ability to handle
massive amounts of data securely and
coherently. Simply put, information theory
is fundamental to everything.
‘But before Shannon, there was precious
little sense of information as an idea,
a measurable quantity, an object fitted
out for hard science. Before Shannon,
information was a telegram, a photograph,
a paragraph, a song. After Shannon,
information was entirely abstracted into
bits. The sender no longer mattered, the
intent no longer mattered, the medium
no longer mattered, not even the meaning
mattered: a phone conversation, a snatch
of Morse telegraphy, a page from a
detective story were all brought under a
common code.’45
Furthermore, with Shannon’s landmark
paper, another term was born — something
that would also rock the foundations of
science, technology and the digital age:
‘information entropy’.
Entropy
According to one (almost certainly untrue)
story, when grappling with the terminology
to use in his paper, Shannon asked the
legendary mathematician and physicist, John
von Neumann46 ‘What should I call this
thing?’. Von Neumann reputedly responded:
‘Say that information reduces “entropy”.
It is a good, solid physics word. And more
importantly, no one knows what entropy
really is, so in a debate, you will always have
the advantage’.
Entropy47 lies at the heart of the Second
Law of Thermodynamics,48 and without
understanding the fundamental consequences
of entropy, few technological advances
would have taken place. Any research
on entropy will leave the researcher in
stupefaction over the possible definitions of
a law that lies at the very heart of humanity’s
understanding of the universe. By all logical
formulations, entropy should benefit from an
uncontested, universally accepted definition.
However, as Von Neumann supposedly
stated, this has proven impossible, and this
crucial law is defined differently depending
on the system with which it is being used.
It has even entered the realm of buzzwords,
where one can hear the term ‘entropy’ being
spoken with abandon and without any sort
of essential meaning.
The Second Law of Thermodynamics
states:
‘Entropy always increases until it reaches
a maximum value. The total entropy of a
system will always increase until it reaches
its maximum possible value; it will never
decrease on its own unless an outside agent
works to decrease it.’49
This law holds that there will always be an
expenditure of energy in any system in the
universe. This energy cannot be reused.
The entropy produced by all natural and
biological systems will continue to expand
until some other outside source interferes
with this trajectory. For example, many
systems release heat as part of their working
process. ‘This so-called heat loss is measured
by a quantity called entropy. Entropy is
a measure of the energy that cannot be
converted into additional work’.50
To address the misuse and
misunderstanding of the term, Professor
Arieh Ben-Naim has suggested abandoning
the word ‘entropy’ altogether, and replacing
it with ‘missing information’. He also
suggests entropy can be classified into
two major categories: ‘one is based on the
interpretation of entropy in terms of the
extent of disorder in a system; the second
involves the interpretation of entropy in
terms of the missing information on the
system’.51
In information theory, Shannon
recognised the order–disorder quandary. He
realised that all information contains ‘noise’,
something which is essentially useless to the
data. The ubiquitous term ‘signal-to-noise
ratio’ comes directly from this formulation.
‘In his classic 1948 paper, Shannon defined
the information content in terms of the
entropy of the message source. People
have sometimes characterised Shannon’s
definition of information content as the
“average amount of surprise” a receiver
experiences on receiving a message, in
which “surprise” means something like the
“degree of uncertainty” the receiver had
about what the source would send next…
… Once again, the entropy (and thus
information content) of a source is defined
in terms of message probabilities and is
not concerned with the ‘meaning’ of a
message’.52
These bits of information are at the heart of
what Shannon named ‘information entropy’
(also known as ‘Shannon entropy’). To
grasp the meaning of any data, information
entropy is necessary.
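Shannon’s measure can be stated compactly: for symbol probabilities p, the entropy H = Σ p·log2(1/p) bits, the average ‘surprise’ per symbol. A minimal sketch, treating each character of a string as a symbol:

```python
from collections import Counter
from math import log2

def shannon_entropy(message):
    """Average 'surprise' in bits per symbol: H = sum(p * log2(1/p))."""
    counts = Counter(message)
    n = len(message)
    return sum((c / n) * log2(n / c) for c in counts.values())

print(shannon_entropy("aaaa"))  # 0.0: perfectly predictable, no surprise
print(shannon_entropy("abab"))  # 1.0: one bit of surprise per symbol
print(shannon_entropy("abcd"))  # 2.0: maximum surprise for four symbols
```

The more predictable a source, the lower its entropy and the fewer bits needed to transmit it.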
‘Information is entropy. This was the
strangest and most powerful notion of all.
Entropy — already a difficult and poorly
understood concept — is a measure of
disorder in thermodynamics, the science of
heat and energy’.53
‘In essence, entropy is a measure
of uncertainty. When our uncertainty
is reduced, we gain information, so
information and entropy are two sides of
the same coin’.54
The ‘noise’ is always filtered out in any
data analysis, be it machine learning (ML),
pattern recognition (PR), deep learning
(DL), natural language processing (NLP)
or a simple data comparison. We only
analyse the ‘real’ information available, also
known as ‘the surprise within the message’.
Without this understanding and ability,
without a firm grasp of information entropy,
one can never hope to achieve any sort of
AI or analytics. Predictive analytics would
never work without reducing the noise
and disorder,55 and information entropy lies
at the heart of all modern data analytics,
making it possible to reduce the uncertainty
of meaning.
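The entropy calculation described above can be sketched in a few lines of Python. Shannon's formula is the only thing taken from the text; the message probabilities below are invented purely for illustration.

```python
import math

def shannon_entropy(probabilities):
    """Shannon entropy H = -sum(p * log2(p)), in bits: the source's average 'surprise'."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# A source that always sends the same symbol carries no surprise at all,
print(shannon_entropy([1.0]))        # 0.0 bits
# while a fair coin is maximally uncertain for two outcomes,
print(shannon_entropy([0.5, 0.5]))   # 1.0 bit
# and a heavily skewed source sits in between: less uncertainty, less information.
print(shannon_entropy([0.9, 0.1]))   # ~0.47 bits
```

Reducing uncertainty about what the source sends next is precisely the sense in which, as the quotations above put it, information and entropy are two sides of the same coin.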
Linguistics: Language theory
The history of language and linguistics
is also at the heart of analytics and AI.
Communication is not only about
information entropy but psychology as well.
What we want to impart in our message is
just as important as what we are actually
saying. Paradoxically, in contradistinction to
information theory, what language conveys is
‘meaning’.
Language plays a critical part in all AI
systems, especially if such a system must
consider sentiment analysis and emotion
recognition (SAER).56 Inferring SAER
from text, even with information entropy,
is difficult. One must include the differing
types of communication, which are non-
verbal and non-textual. Emojis, graphics,
videos, acronyms — all play a crucial role in
current communication technology.
‘Psychology of language, properly
understood, is a discipline which embraces
the study of the acquired system (the
grammar), of the methods of acquisition
(linked to universal grammar), and models
of perception and production, and which
also studies the physical bases for all of this.
This study forms a coherent whole.’57
‘Language is the interaction of
meaning (semantics), conditions on usage
(pragmatics), the physical properties of
its inventory of sounds (phonetics), a
grammar (syntax, or sentence structure),
phonology (sound structure), morphology
(word structure), discourse conversational
organisational principles, information and
gestures. Language is a gestalt — the whole
is greater than the sum of its parts. That is
to say, the whole is not understood merely
by examining its individual components.’58
As if information entropy and determining
the meaning of words precisely were not
enough for data analytics (just think in terms
of a tweet or a Facebook post), there is still
so much more that analytics must consider.
‘Underdeterminacy means that every
utterance in every conversation and every
line in every novel and each sentence
of any speech contains “blank spots” —
unspoken, assumed knowledge, values,
roles and emotions — underdetermined
content that I label “dark matter”.
Language can never be understood
entirely without a shared, internalised set
of values, social structures and knowledge
relationships. In these shared cultural and
psychological components, language filters
what is communicated, guiding a hearer’s
interpretations of what another is saying.
People use the context and cultures in
which they hear language to interpret it.
They also use gestures and intonation, in
order to interpret the full meaning of what
is being communicated.’59
One other significant aspect of language
must be considered within information
and AI. The dawn of language allowed
humankind to preserve memories for
thousands of years. Be it passing down
the stories from generation to generation
or by the written word, this ability makes
it possible to ‘accrue’ information — to
accumulate it en masse.
‘Single-cell animals could remember
events for seconds, based on chemical
reactions. Animals with brains could
remember events for days. Primates with
culture could pass down information
through several generations. Early human
civilisations with oral histories were able
to preserve stories for hundreds of years.
With the advent of written language
the permanence extended to thousands
of years.’60
Bias
Data analytics has a colossal job in
determining what is actually in the data.
However, the quest for data purity does not
end there. It is essential to deal with possible
bias in and about the actual data. There are
two significant types of bias one must be
wary about in analytics and AI.
Bias in the data
In analysing data for any system, specifically
for an AI system, one must be cognisant
of the origin of said data. For example,
applying medical information from a
massive data lake of 90 per cent male health
statistics to a majority female population will
probably produce flawed results. Likewise,
creating a national average for a political poll
based upon the response from upper middle
class individuals will also produce erroneous
results. All data sets will, by nature, contain
a certain amount of bias. This bias must
be taken into account within analytics and
corresponding AI.
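A toy calculation makes the skewed-sample problem above concrete. Every number here is invented purely for illustration.

```python
# Hypothetical figures: a health metric measured in a 90 per cent male data
# lake, then applied to a majority-female population. All numbers invented.
sample = [("male", 70.0)] * 90 + [("female", 58.0)] * 10
population_share = {"male": 0.45, "female": 0.55}

# Naive analysis: average the data lake as-is; the over-sampled group dominates.
naive = sum(value for _, value in sample) / len(sample)

# Corrected analysis: weight each group's mean by its true population share.
by_group = {g: [v for s, v in sample if s == g] for g in population_share}
reweighted = sum(population_share[g] * (sum(vs) / len(vs))
                 for g, vs in by_group.items())

print(naive)        # 68.8: pulled toward the over-represented group
print(reweighted)   # 63.4: closer to the population's true mix
```

Reweighting is only the simplest of the ‘myriad of defences’ the text calls for, but even it requires knowing the provenance of the data in the first place.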
Bias in evaluating or creating the data
One of the most dangerous aspects of data
analytics that can lead to catastrophic results
in AI concerns the biases inherent within the
controlling group.
In 1968, Melvin Conway wrote that
‘organisations which design systems (in
the broad sense used here) are constrained
to produce designs which are copies of
the communication structures of these
organisations’.61
This brief statement became known as
Conway’s law62 and elucidates how bias will
always appear in systems. The systems inherit
the bias of the creators. They imitate those
that created them — ‘clones’ if one wishes
to use the term.
Webb’s research in AI, among other
aspects, concentrates on this bias, as it is
fundamental to correct AI implementation.
As she wisely states: ‘in the absence of
meaningful explanations, what proof do
we have that bias hasn’t crept in? Without
knowing the answer to that question, how
would anyone possibly feel comfortable
trusting AI?’.63
As Webb points out in her evaluation
of a Harvard Business School analysis of
codebases:
‘One of their key findings: design choices
stem from how their teams are organised,
and within those teams, bias and influence
tends to go overlooked. As a result, a small
super network of individuals on a team
wield tremendous power once their work
— whether that’s a comb, a sink, or an
algorithm — is used by or on the public…
Therefore, Conway’s law prevails,
because the tribe’s values — their beliefs,
attitudes, and behaviours as well as their
hidden cognitive biases — are so strongly
entrenched.’64
However, bias does not end there. Bias can
appear in the actual data sets used because of
how the data were initially defined.
‘Since researchers can’t just scrape and
load “ocean data” into a machine-
learning system for training, they will
buy a synthetic data set from a third party
or build one themselves. This is often
problematic because composing that data
set — what goes into it and how it’s
labelled — is rife with decisions made
by a small number of people who often
aren’t aware of their professional, political,
gender, and many other cognitive biases.’65
In 1956, Dartmouth College hosted the
first conference dedicated to AI.66 The
term ‘artificial intelligence’ is credited to
John McCarthy,67 conference leader and
one of the proposal’s original authors.
Unfortunately, the initial group was
fundamentally flawed, being riddled
with bias. It had no people of colour
and only one woman among 47 eminent
participants — even though many experts
of colour and women were available. The
answer to team creation without bias is fairly
obvious.
‘A truly diverse team would have only one
primary characteristic in common: talent.
There would not be a concentration of any
single gender, race, or ethnicity. Different
political and religious views would be
represented.’68
However, to achieve non-biased data,
the law of ‘talent’ must be universally
applied, and this is simply not realistic.
Human nature will always produce
some type of bias, no matter how
much one prides oneself on being non-
biased and politically correct. Bias is
inherent in everything people do; it is
an expression of one’s individualism.
To complicate matters, what is a bias
for one culture and society is considered
logical, objective and fair for another.
Bias, therefore, does not contain a one-
meaning-solves-all definition.
Detecting if bias is apparent in any
construct depends on how that specific
subculture defines bias and how that
definition is implemented within the system.
Bias will never be eradicated entirely,
although it is, by any definition, always a
negative factor. Data analytics must therefore
take the bias into account and build a
myriad of defences to counter it. Failure to
do so will lead to erroneous results and a
catastrophically flawed AI.
Bayes’ theorem
Without Bayes’ theorem there is no AI.69
It really is that important and consequential.
Bayes’ theorem provides the ability to
decipher, manipulate and understand data.
Thomas Bayes’ work ‘An essay towards
solving a problem in the doctrine of
chances’,70 published posthumously in 1763,
caused much contention and until recent
decades was often dismissed as inaccurate.
The supposition that a mathematical formula
could adjust itself constantly to new realities
based upon the previous answers was
anathema to mathematicians — but a gift to
AI and predictive analytics.
‘On its face Bayes’ rule is a simple, one-
line theorem: by updating our initial
belief about something with objective
new information, we get a new and
improved belief. To its adherents, this is
an elegant statement about learning from
experience … Opponents, meanwhile,
regarded Bayes’ rule as subjectivity run
amok.’71
Many have used the theorem to solve
complex problems. It lies at the heart of ML
and utilising ‘decision trees’72 and ‘random
forests’,73–75 just to name a small part of
critical data analytics within AI. Without
Bayes’ theorem, analysts would be adrift
in a sea of data with no ability to evaluate
said data. The theorem allows for predictive
analytics, which can constantly update itself
to new information.
‘Conceptually, Bayes’ system was simple.
We modify our opinions with objective
information: Initial Beliefs (our guess
where the cue ball landed) + Recent
Objective Data (whether the most recent
ball landed to the left or right of our
original guess) = A New and Improved
Belief. Eventually, names were assigned
to each part of his method: Prior for the
probability of the initial belief; Likelihood
for the probability of other hypotheses
with objective new data; and Posterior for
the probability of the newly revised belief.
Each time the system is recalculated, the
posterior becomes the prior of the new
iteration. It was an evolving system, which
each new bit of information pushed closer
and closer to certitude.’76
The e-mail spam filter provides an excellent
example of the importance of Bayes’
theorem in the modern world. A ‘one fits
all’ type of rule simply does not work for
filtering spam. This is because a spam filter
must gather data and rearrange suppositions
on an ongoing basis as it continuously learns
and teaches itself to make more informed
decisions on what is considered spam for that
specific system. For this reason, a majority of
spam filters are based upon Bayes’ theorem.
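A minimal naive-Bayes classifier shows the mechanics of such a filter in miniature. The training messages below are invented; as the text notes, a production filter learns continuously from the mail its own specific system receives.

```python
import math
from collections import Counter

# Toy training corpora (invented examples).
spam = ["win money now", "free money offer", "win a free prize"]
ham  = ["meeting at noon", "project update attached", "lunch at noon"]

def word_counts(messages):
    return Counter(w for m in messages for w in m.split())

spam_counts, ham_counts = word_counts(spam), word_counts(ham)
vocab = set(spam_counts) | set(ham_counts)

def log_score(message, counts):
    # Laplace (add-one) smoothing stops unseen words zeroing out a class.
    total = sum(counts.values())
    return sum(math.log((counts[w] + 1) / (total + len(vocab)))
               for w in message.split())

def classify(message):
    # Equal class priors assumed here, so only the likelihoods are compared.
    return "spam" if log_score(message, spam_counts) > log_score(message, ham_counts) else "ham"

print(classify("free money"))     # spam
print(classify("noon meeting"))   # ham
```

Retraining the counts as new mail arrives is what lets the filter ‘rearrange suppositions on an ongoing basis’.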
CHAOS THEORY
‘Introduce a little anarchy. Upset the
established order, and everything becomes
chaos. I’m an agent of chaos. Oh, and you
know the thing about chaos? It’s fair!’77
‘Invention, it must be humbly admitted,
does not consist in creating out of void, but
out of chaos; the materials must, in the first
place, be afforded: it can give form to dark,
shapeless substances, but cannot bring into
being the substance itself.’78
In 1961, when experimenting with weather
pattern information, a lack of caffeine led
to Edward Lorenz keying a shortened
decimal into a series of computed numbers.
This mistake gave birth to ‘chaos theory’79
and the ‘butterfly effect’.80 The subsequent
publication of his paper ‘Deterministic
nonperiodic flow’81 prompted a debate on
chaos that is still very much alive today. The
butterfly effect has been prosaically described
as the idea that a butterfly flapping its wings
in Brazil might cause a tornado in Texas.
Alternatively, as Lorenz himself put it:
‘One meteorologist remarked that if the
theory were correct, one flap of a sea
gull’s wings would be enough to alter
the course of the weather forever. The
controversy has not yet been settled, but
the most recent evidence seems to favour
the sea gulls.’82
At the heart of chaos theory lies the
seemingly modest statement which
postulates that small, even minute events
can influence enormous systems leading
to significant consequences — hence the
butterfly effect, or as defined within chaos:
‘sensitivity to initial conditions’.
Surprisingly enough, one can express
chaos in mathematical terms. Randomness
suddenly becomes an orderly disorder,
meaning that there is a hidden order to
chaos, not merely in existential terms but in
real-world scenarios.
‘The modern study of chaos began with
the creeping realisation in the 1960s that
quite simple mathematical equations could
model systems every bit as violent as a
waterfall. Tiny differences in input could
quickly become overwhelming differences
in output — a phenomenon given the
name “sensitive dependence on initial
conditions”.’83
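A concrete, runnable illustration of this sensitivity is the logistic map, a textbook one-line chaotic system (chosen here as an assumption of the sketch; it is not one of Lorenz's weather equations).

```python
def trajectory(x, steps, r=4.0):
    """Iterate the logistic map x -> r*x*(1 - x), chaotic at r = 4."""
    out = []
    for _ in range(steps):
        x = r * x * (1 - x)
        out.append(x)
    return out

# Two starting points that differ by one part in two million.
a = trajectory(0.2000000, 50)
b = trajectory(0.2000001, 50)

print(abs(a[0] - b[0]))                       # still microscopic after one step
print(max(abs(x - y) for x, y in zip(a, b)))  # grows to order 1 within fifty steps
```

A difference far too small to measure in any real experiment has, within a few dozen iterations, swamped the prediction entirely: tiny input differences become overwhelming output differences.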
To illustrate the butterfly effect, advocates
of chaos theory often cite the proverb ‘For
want of a nail’:
‘For want of a nail, the shoe was lost.
For want of the shoe, the horse was lost.
For want of the horse, the rider was lost.
For want of the rider, the battle was lost.
For want of the battle, the kingdom was lost.
And all for the want of a horseshoe nail.’
The lesson is obvious: the lack of something
as inconsequential as a single nail can cause
the loss of a kingdom. When so many
minor events can have such enormous
consequences, how can one even attempt
to predict the behaviour of systems?
This nagging question remained in the
background for centuries. Chaos was
attributed to a supreme being, karma or
plain old luck. Despite its ubiquity, there
was no way to foretell or control chaos.
Events are random, and random events defy
prediction. Or so everyone believed.
Almost as a genetic imperative, the brain
seeks ‘patterns’. However, the terms ‘chaos’
and ‘patterns’ seem to be polar opposites.
If there is a pattern to be discerned, then
how can there be chaos? Furthermore, if
chaos prevails, then how can there be an
underlying pattern?
In 2005, Lorenz condensed chaos
theory into the following: ‘Chaos: When
the present determines the future, but the
approximate present does not approximately
determine the future’.84
Ancient man sought patterns in the night
sky filled with the chaos of hundreds of
millions of stars and found the constellations.
Modern man seeks underlying patterns of
behaviour and activity in everyday life. The
recent COVID-19 outbreak is an example
of such a pursuit. At present, at least, there
is less focus on the source of the proverbial
nail, but intense interest in such patterns
as how the virus spreads, how specific
prophylactic measures have worked, and
the patterns of historical outbreaks such as
the black death (bubonic plague) and the
Spanish flu after the First World War, in
the hope that it is possible to apply pattern
recognition to combating the spread of
the coronavirus. The identification of such
patterns assists in combating the virus by
making it possible to project possible future
outcomes based upon the present situation.85
(It will be interesting to see how Webb
analyses this in her upcoming book, ‘The
Genesis Machine’.86)
‘Chaos appears in the behaviour of the
weather, the behaviour of an airplane in
flight, the behaviour of cars clustering
on an expressway, the behaviour of oil
flowing in underground pipes. No matter
what the medium, the behaviour obeys
the same newly discovered laws. That
realisation has begun to change the way
business executives make decisions about
insurance, the way astronomers look at
the solar system, the way political theorists
talk about the stresses leading to armed
conflict.’87
Chaos theory specifies not only that there
are geometric patterns to be discerned in
the seemingly random events of a complex
system, but also introduces ‘linear’ and
‘nonlinear’ progressions. Linear progressions
go from step A to step B to step C. Such
systems lend themselves to predictability.
Their patterns are apparent even before they
begin. They take no heed of chaos as they
have clear beginnings with specific steps
along the way. Unfortunately, the way our
brains handle data is mainly linear because
this is how the majority of people are trained
to think from birth.
‘Linear relationships can be captured with a
straight line on a graph. Linear relationships
are easy to think about: the more the
merrier. Linear equations are solvable,
which makes them suitable for textbooks.
Linear systems have an important modular
virtue: you can take them apart, and put
them together again — the pieces add up.
Nonlinear systems generally cannot be
solved and cannot be added together. In
fluid systems and mechanical systems, the
nonlinear terms tend to be the features that
people want to leave out when they try to
get a good, simple understanding … That
twisted changeability makes nonlinearity
hard to calculate, but it also creates rich
kinds of behaviour that never occur in
linear systems.’88
‘How, precisely, does the huge
magnification of initial uncertainties come
about in chaotic systems? The key property
is nonlinearity. A linear system is one
you can understand by understanding its
parts individually and then putting them
together … A nonlinear system is one in
which the whole is different from the sum
of the parts … Linearity is a reductionist’s
dream, and nonlinearity can sometimes be
a reductionist’s nightmare.’89
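The quoted contrast can be checked in two lines: superposition, the sense in which ‘the pieces add up’, holds for a linear rule and fails for even the simplest nonlinear one.

```python
def linear(x):
    return 2 * x        # a linear rule: doubling

def nonlinear(x):
    return x * x        # the simplest nonlinear rule: squaring

a, b = 3, 4
print(linear(a + b) == linear(a) + linear(b))            # True: 14 == 14
print(nonlinear(a + b) == nonlinear(a) + nonlinear(b))   # False: 49 != 25
```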
For instance, structured query language
(SQL) purists set up data stores with
traditional relationships they define for
the system. This may be wonderful for basic
name-address-looked-at-bought-something
systems, as what is being analysed rests on
a previously decided relationship, whether
one-to-one or one-to-many. Programmers
also have absolute control over the data
going into the system.
Both those who teach and those who
implement SQL programming are blind
to chaos and reject its consequences. They
rid their systems of ‘noise’ by making these
systems adhere to previously defined rules.
Information entropy is predefined in that
there can be no ‘surprise’ in the data, and
bias exists from the initial stage. Suppose
the data are not of a specific predefined
composition (eg string, numeric, binary,
etc). In that case, the data will simply not
enter the system, even if the currently
rejected data may be crucial later on — the
information is forever lost to the system. In
short, pure SQL programming disallows the
viewing of data in nonlinear terms, which
can have disastrous consequences to both
data and AI.
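The contrast can be sketched as follows. The schema and records are invented examples, and a real SQL engine enforces its rules with typed columns and constraints rather than Python checks; the point is only what happens to data that does not fit.

```python
# Invented schema and records. A rigid, SQL-style loader enforces predefined
# rules at the door; a schemaless store keeps everything for later analysis.
SCHEMA = {"name": str, "age": int}

def rigid_load(records):
    """Accept only records that exactly match the predefined schema."""
    return [r for r in records
            if set(r) == set(SCHEMA)
            and all(isinstance(r[key], typ) for key, typ in SCHEMA.items())]

def flexible_load(records):
    """Keep every record as-is; structure is discovered during analysis."""
    return list(records)

records = [
    {"name": "Ada", "age": 36},
    {"name": "Alan", "age": "41"},                            # 'wrong' type
    {"name": "Grace", "age": 45, "note": "crucial context"},  # unexpected field
]

print(len(rigid_load(records)))     # 1: two records are lost to the system
print(len(flexible_load(records)))  # 3: nothing is discarded
```

The extra field on the third record is exactly the kind of ‘surprise’ a rigid pipeline silently discards, even though it may prove crucial later.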
‘Textbooks showed students only the rare
nonlinear systems that would give way
to such techniques. They did not display
sensitive dependence on initial conditions.
Nonlinear systems with real chaos were
rarely taught and rarely learned. When
people stumbled across such things — and
people did — all their training argued
for dismissing them as aberrations. Only
a few were able to remember that the
solvable, orderly, linear systems were the
aberrations. Only a few, that is, understood
how nonlinear nature is in its soul.’90
Data systems today are as chaotic as
predicting the weather. Scraping data
means one does not know what to expect
from the data. One must seek patterns and
information once that data lake takes form.
Approaching data in a linear manner will
always create bias as one decides what data
to find, in what order to find the data, what
the structure of the data must be, and the
rules to which it must adhere. Neither linear
thinking nor traditional SQL structures
can provide a proper answer. Information
entropy, bias and the application of Bayes’
theorem cannot reveal adequate results
because insufficient information is collected.
Failure to conduct data analysis correctly will
always lead to massively erroneous results
in AI.
The ‘eureka moment’ of chaos
theory boils down to a single number
— 4.6692016 — otherwise known as
‘Feigenbaum’s constant’.91 The essential
word here is ‘constant’, although few
scientists or mathematicians would have
believed it was possible until it was
categorically proven. Simply stated, what
Mitchell Feigenbaum discovered was that
there is a universality in how complex
systems work.92 Given enough time, this
constant will always appear in a series.
Moreover, this constant is universal.
Chaos swings like a pendulum along a
mathematical axis. Once one accepts
disorder and chaos, one can plan for it —
even within large systems. The fact that even
within chaotic systems one can find stability
creates a whole new universe of possibilities.
‘Although the detailed behaviour of a
chaotic system cannot be predicted, there
is some “order in chaos” seen in universal
properties common to large sets of chaotic
systems, such as the period-doubling route
to chaos and Feigenbaum’s constant.
Thus, even though “prediction becomes
impossible” at the detailed level, there
are some higher-level aspects of chaotic
systems that are indeed predictable.’93
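Feigenbaum's constant can be estimated directly from the logistic map's period-doubling points. The r-values below are well-known published bifurcation parameters, quoted here to six decimal places.

```python
# Period-doubling bifurcation points of the logistic map x -> r*x*(1 - x):
# r[0] is where period 2 appears, r[1] period 4, and so on.
r = [3.000000, 3.449490, 3.544090, 3.564407, 3.568759]

# delta_n = (r_n - r_{n-1}) / (r_{n+1} - r_n): the ratios of successive gaps.
deltas = [(r[i] - r[i - 1]) / (r[i + 1] - r[i]) for i in range(1, len(r) - 1)]
print(deltas)   # the estimates settle toward Feigenbaum's 4.6692016...
```

That the same ratio emerges regardless of the specific system undergoing period doubling is what makes the constant universal.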
Chaos theory has its limits, however, as
there will always be more than one butterfly
flapping its wings. In many systems,
the sensitivity to initial conditions will
eventually become too complex for any type
of prediction. Lorenz’s weather prediction,
for example, lasts for a short period of a few
days at most. As it stands, there is no way
to discover what ‘initial condition’ may
become significant in four days’ time. One
may view short-term weather forecasting as
a deterministic system; however, according
to chaos theory, random behaviour remains
a possibility even in a deterministic system
with no external source.
‘The defining idea of chaos is that there
are some systems — chaotic systems —
in which even minuscule uncertainties
in measurements of initial position
and momentum can result in huge
errors in long-term predictions of these
quantities … But sensitive dependence
on initial conditions says that in chaotic
systems, even the tiniest errors in your
initial measurements will eventually
produce huge errors in your prediction
of the future motion of an object. In such
systems (and hurricanes may well be an
example) any error, no matter how small,
will make long-term predictions vastly
inaccurate.’94
This leads to complexity theory.95
‘In our world, complexity flourishes, and
those looking to science for a general
understanding of nature’s habits will be
better served by the laws of chaos.’96
COMPLEXITY THEORY
‘The complexity of things — the things
within things — just seems to be endless.
I mean nothing is easy, nothing is simple.’97
‘Complex systems with many different
initial conditions would naturally produce
many different outcomes, and are so
difficult to predict that chaos theory cannot
be used to deal with them.’98
Nonlinear systems are not so easily defined
nor understood — and so complexity (also
known as ‘complex systems’) enters the
realm of investigation. Complexity theory
intertwines information theory, entropy and
chaos theory, and leads towards ‘emergence’.
Approaching complexity as a single science
with one definition or uniform topic is
impossible. There is an almost infinite
possibility of initial events all working
together, somehow, mysteriously, towards
an unknown goal.
For an introduction to the nature of
complexity, consider a colony of ants:
‘Colonies of social insects provide some
of the richest and most mysterious
examples of complex systems in nature.
An ant colony, for instance, can consist
of hundreds to millions of individual
ants, each one a rather simple creature
that obeys its genetic imperatives to
seek out food, respond in simple ways
to the chemical signals of other ants in
its colony, fight intruders, and so forth.
However, as any casual observer of the
outdoors can attest, the ants in a colony,
each performing its own relatively
simple actions, work together to build
astoundingly complex structures that are
clearly of great importance for the survival
of the colony as a whole.’99
A unique aspect of ant colonies is that there
is no apparent central control or leader.
Nevertheless, a colony will create ceaseless
patterns, collect and exchange information,
and evolve in the environment in which
it finds itself. Similar behaviour manifests
in stock markets, within biological cell
organisation and in artificial neural networks
(ANNs). Complexity appears almost
everywhere, following on the heels of chaos.
Mitchell has formulated an excellent (if
partial) definition of complexity:
‘A system in which large networks of
components with no central control and
simple rules of operation give rise to
complex collective behaviour, sophisticated
information processing, and adaptation
via learning or evolution … Systems in
which organised behaviour arises without
an internal or external controller or leader
are sometimes called self-organising. Since
simple rules produce complex behaviour
in hard-to-predict ways, the macroscopic
behaviour of such systems is sometimes
called emergent. Here is an alternative
definition of a complex system: a system
that exhibits nontrivial emergent and
self-organising behaviours. The central
question of the sciences of complexity
is how this emergent self-organised
behaviour comes about.’100
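A one-dimensional cellular automaton offers a minimal, runnable instance of this definition: no central controller, a trivially simple local rule, and complex collective behaviour. Wolfram's Rule 30 is a standard example, chosen here as an illustration rather than taken from the text.

```python
# Rule 30: each cell updates from only itself and its two neighbours, with no
# central control, yet a famously complex pattern unfolds from a single cell.
RULE = 30   # the rule number encodes the output bit for all 8 neighbourhoods

def step(cells):
    n = len(cells)
    return [(RULE >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
            for i in range(n)]

row = [0] * 31
row[15] = 1                      # the simplest possible seed
for _ in range(15):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

Each printed line is one generation; the irregular triangle that unfolds is unpredictable enough that Rule 30 has been used as a pseudo-random generator, a vivid case of simple rules producing complex, hard-to-predict behaviour.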
There are other, perhaps more subtle
ways to measure a complex system. If one
measures the information entropy, one can
see how much ‘surprise’ is left once the
‘noise’ is eliminated. If this surprise is above
a specific, pre-set value, one can assume
complexity. Alternatively, perhaps one
should just look at the size of the set. For
instance, DNA sequencing — along with
the attendant possibilities — is a complex
system under these parameters (or actually,
under any parameters).
Of course, the most significant issue in
the age of AI and data stems from Alan
Turing’s famous question: ‘Can machines
think?’. The consequences of such a
question regarding complexity theory
are enormous. If there is a possibility
for thinking machines, is it possible for
a machine to gain ‘consciousness’ and
‘intelligence’? If the data pool is so large as
to endow thinking upon a machine, will
information entropy be full of surprise for
that machine? Or will the machine choose
to ignore critical information as just ‘noise’?
Will the bias inherent in the data that
created the ultimate complex system be
used by the machine to logically propagate
erroneous assumptions until the machine
becomes a danger to its creators? Will the
machine that thinks understand language and
communication in all its forms, including
voice intonations, facial expressions, the
meaning of ambiguous statements that
humans know intuitively how to interpret,
and most importantly, emotion and
sentiment?
Consider the fact that it is already possible
to build ANNs that work and can teach
themselves at ever-growing speed. Still, if
asked how the ANN is teaching itself such
complex interactions, everyone involved
will either throw their hands up in despair
for lack of an answer or offer theory upon
theory — none of which will answer the
simple question: ‘How did this ANN teach
itself?’
‘But in a complex system such as those
I’ve described above, in which simple
components act without a central
controller or leader, who or what actually
perceives the meaning of situations so
as to take appropriate actions? This is
essentially the question of what constitutes
consciousness or self-awareness in living
systems.’101
The games of go and chess are self-contained
complex systems. The number of moves
in these games is beyond comprehension,
while each move results from sensitivity
to initial conditions. ‘One concept of
complexity is the minimum amount of
meaningful, non-random, but unpredictable
information needed to characterise a system
or process.’102
In 2017, AlphaGo Zero,103 a version of
the AlphaGo software from DeepMind,104
was the first version of AlphaGo to train
itself to play go (arguably more complex
than chess) without the benefit of any
previous datasets or human intervention.
Built upon a neural network and using
a branch of AI known as ‘reinforcement
learning’, its knowledge and skill were
entirely self-taught.
In the first three days, AlphaGo Zero
played 4.9 million games against itself in
quick succession. It appeared to develop the
skills required to beat professional go players
within just a few days, and in 40 days,
surpassed all previous AlphaGo software and
won every game.105 Although this is narrow
AI (in that it could play go and nothing
else), this achievement is impossible to
ignore.
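AlphaGo Zero's training pipeline is far beyond any sketch, but the core reinforcement-learning idea, improving a value estimate purely from self-generated experience, fits in a toy example. Everything below (the five-cell world, the tabular Q-learning update, the hyperparameters) is an illustrative assumption, not DeepMind's method.

```python
import random

random.seed(0)                              # reproducible toy run
N_STATES, GOAL = 5, 4                       # cells 0..4; the reward sits at cell 4
Q = {(s, a): 0.0 for s in range(N_STATES) for a in (-1, +1)}
alpha, gamma, epsilon = 0.5, 0.9, 0.2       # learning rate, discount, exploration

for _ in range(2000):                       # episodes of pure trial and error
    s = 0
    while s != GOAL:
        if random.random() < epsilon:       # explore occasionally...
            a = random.choice((-1, +1))
        else:                               # ...otherwise act greedily
            a = max((-1, +1), key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == GOAL else 0.0
        best_next = 0.0 if s2 == GOAL else max(Q[(s2, -1)], Q[(s2, +1)])
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s2

# With no prior data at all, the greedy policy has taught itself to move right.
policy = [max((-1, +1), key=lambda act: Q[(s, act)]) for s in range(GOAL)]
print(policy)   # [1, 1, 1, 1]
```

The agent starts with no dataset and no human guidance, only trial, error and a value update, which is the same self-taught principle, at toy scale, that let AlphaGo Zero surpass every dataset-trained predecessor.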
Demis Hassabis, the co-founder and
CEO of DeepMind in 2017, stated that
AlphaGo Zero’s power came from the fact
that it was ‘no longer constrained by the
limits of human knowledge’,106 while Ke Jie,
a world-renowned go professional, said that
‘Humans seem redundant in front of its self-
improvement’.107
The most chilling comment, however,
came from David Silver, DeepMind’s lead
researcher:
‘The fact that we’ve seen a program
achieve a very high level of performance in
a domain as complicated and challenging
as go should mean that we can now start
to tackle some of the most challenging and
impactful problems for humanity’.108
Simply put, this means it is possible to
implement the AlphaGo Zero narrow
AI lessons within general or strong AI.
This again circles back to Turing’s all-
encompassing question: ‘can machines
think?’ and the various dilemmas that arise
pursuant to this question.
There are rudimentary and general
possibilities for prediction within chaotic
systems; there is also the universality of
the Feigenbaum constant, but complexity
goes way beyond these rules. It contains so
much surprise in the information entropy,
so many points of ‘sensitivity to initial
conditions’, so many systems which are not
yet understood, so many possibilities of bias
slipping in when the data are not ‘pure’, that
we remain blindly fumbling while trying to
understand the nature and consequences of
complexity.
‘Chaos has shown us that intrinsic
randomness is not necessary for a system’s
behaviour to look random; new discoveries
in genetics have challenged the role of
gene change in evolution; increasing
appreciation of the role of chance and self-
organisation has challenged the centrality
of natural selection as an evolutionary
force. The importance of thinking in terms
of nonlinearity, decentralised control,
networks, hierarchies, distributed feedback,
statistical representations of information,
and essential randomness is gradually being
realised in both the scientific community
and the general population.’109
‘What’s needed is the ability to see their
deep relationships and how they fit into a
coherent whole — what might be referred
to as “the simplicity on the other side of
complexity”’.110
EMERGENCE
‘Emergence results in the creation
of novelty, and this novelty is often
qualitatively different from the
phenomenon out of which it emerged.’111
Emergence may be categorised as a step in
the evolutionary process. It is best perceived
as a new state of being that arises from a
previous state. In Mitchell’s previously
discussed definition of complexity, she
offers an alternative definition for a complex
system, that is: ‘a system that exhibits
nontrivial emergent and self-organising
behaviours. The central question of the
sciences of complexity is how this emergent
self-organised behaviour comes about’.112
What is meant by ‘emergence’? It is
difficult to explain what our minds
cannot fully grasp; however, to show that
emergence is real, let us first examine
emergence simply, without considering
self-organisation.
When the pieces of a jigsaw puzzle are
spread out, one can view the properties
of each individual piece — its shape, size,
picture and so forth. The individual pieces
are entities in and of themselves. As we
attempt to put the puzzle together, our
minds shift from the individual pieces to
what the overall picture should look like and
the shapes of the pieces required to connect
together. We are, in actuality, seeking
patterns. Once the puzzle is completed, a
new entity emerges — one that was not
present before. Humans are creatures of
emergence. We take chaotic, complex
ideas and situations, and attempt to make
sense of them, usually through patterns.
‘The brain uses emergent properties.
Intelligent behaviour is an emergent
property of the brain’s chaotic and complex
activity.’113
The overall picture becomes coherent as
it emerges from the disorder.
‘Emergence refers to the existence or
formation of collective behaviours —
what parts of a system do together that
they would not do alone … In describing
collective behaviours, emergence refers
to how collective properties arise from
the properties of parts, how behaviour
at a larger scale arises from the detailed
structure, behaviour and relationships at a
finer scale. For example, cells that make up
a muscle display the emergent property of
working together to produce the muscle’s
overall structure and movement …
Emergence can also describe a system’s
function — what the system does by virtue
of its relationship to its environment that it
would not do by itself.’114
Complex systems are emergent systems.
Once one discovers and encounters
complexity, emergent behaviour is almost
inevitable at some stage. Consider the 86
billion neurons in a human brain. Each
neuron performs a simple function or acts as a connector.
Yet, combine those neurons into one
system, and thought, consciousness,
emotion, reasoning, creativity and
numerous psychological states will emerge.
Alternatively, consider the stock market —
a complex system to which much of AI
has been dedicated. Each person has their
own distinct reactions to the market.
However, it is the combination of millions
of different reactions that make up the
stock market’s ‘whole’. The result, at any
given millisecond, is an emergence of a new
entity. Because of complexity and sensitivity
to initial conditions at that millisecond,
yet another entirely new complex system
emerges.
Economist Jeffrey Goldstein published a
widely accepted definition of emergence in
1999 — ‘the arising of novel and coherent
structures, patterns and properties during
the process of self-organisation in complex
systems’.115 Then in 2002, Peter Corning
further expanded on this definition:
‘The following are common characteristics:
(1) radical novelty (features not previously
observed in the system); (2) coherence or
correlation (meaning integrated wholes
that maintain themselves over some period
of time); (3) a global or macro ‘level’
(ie there is some property of ‘wholeness’);
(4) being the product of a dynamical
process (it evolves); and (5) being
‘ostensive’ (it can be perceived).’116
Indeed, once one takes time to view
complex systems, emergence is there for
all to see. It is a state which, yes, ‘emerges’
from complexity. As each emergent
system will exhibit qualities not previously
observed in the individual parts, the result
is, in essence, a whole new system. Then
chaos and complexity will again lead to
emergence. This is not a recursive loop but
an ever-expanding system.
The constant growth of computing
power (whether Moore’s law117 dissipates,
or remains stable, or goes into hypergrowth)
will allow for massive computations only
dreamed about a few years ago. Coupled
with the information explosion, computers
will be able to digest colossal amounts of
information within milliseconds.
What seems always to be forgotten by
those who refuse to accept the state of
emergence is that the world is, by nature,
chaotic. It is ruled by sensitivity to initial
conditions. Chaos will always appear. Those
little flaps of the butterfly wings tend to
throw even the best-laid plans of mice and
men into a tailspin.
Imagine a data lake being constantly fed
from a multitude of sources while algorithms
are imposed on the data to produce a picture
from the billions of data bits. As the data
lake consumes more data, the real-time
image obtained from the data necessarily
differs from the picture obtained just a
minute before. Information entropy has
changed. Bias has shifted. The results amend
themselves — endlessly.
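The shifting information entropy of such a data lake can be made concrete with Shannon’s formula, H = -sum(p * log2(p)), measured over the empirical symbol distribution. The sketch below is purely illustrative (the ‘lake’ and its symbol stream are hypothetical stand-ins): it shows the entropy of the accumulated data changing as a more varied batch is ingested.

```python
import math
from collections import Counter

def shannon_entropy(symbols):
    """H = -sum(p * log2(p)) over the empirical symbol distribution."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A hypothetical data lake: initially the stream is highly repetitive...
lake = list("aaaaaaab")
before = shannon_entropy(lake)

# ...then a batch of more varied data arrives, and the entropy (the average
# 'surprise' per symbol) of the whole lake shifts upward.
lake += list("abcdefgh")
after = shannon_entropy(lake)
print(before, after)
```

The same measurement taken a minute apart on a growing lake yields different values, which is exactly the endless amendment of results described above.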
Now imagine a massive number of
chaotic-complex-emergent systems all
reaching an apex at approximately the
same time. They are dynamic, and they
are evolving. They are also self-organising.
At some point, these systems will begin to
communicate with one another, sharing
their information, having their own
information entropy, linguistic capability,
decision trees and random forests with no
human intervention. A new mega-system
will emerge from the numerous individual
emergent systems that have reached this
stage.
This is the age of ‘technological
singularity’.118
TECHNOLOGICAL SINGULARITY
‘Wisdom is more valuable than weapons
of war, but a single error destroys much of
value.’119
‘Computers make excellent and efficient
servants, but I have no wish to serve under
them.’120
Much of the literature on the technological
singularity centres on the methods used to
achieve it. Will it occur through ‘human-
like AI’ with ears, eyes, a heart and a brain
(in the classical sense) or through a disembodied form
that one cannot even imagine?
The moment emergence appears, there
will be no stopping a coming singularity.
It is not a matter of when it will occur or
under what exact conditions it will occur,
but rather the very real possibility that it will
be achieved.
‘What, then, is the singularity? It’s a
future period during which the pace of
technological change will be so rapid, its
impact so deep, that human life will be
irreversibly transformed.’121
‘A singularity in human history would
occur if exponential technological progress
brought about such dramatic change that
human affairs as we understand them today
came to an end.’122
While Kurzweil ‘set the date for the
singularity — representing a profound
and disruptive transformation in human
capability — as 2045’,123 this is not critical
to the present argument. It may well happen
in 2060, as Webb maintains, or 2145. The
crucial point here, to reiterate, is that once
emergence begins, the singularity will
follow. (Kurzweil will release a new book,
‘The Singularity Is Nearer’,124 in 2022,
which might update his current predictions.)
As emergence begins, the information
explosion will become the long-prophesied
intelligence explosion. There are a few
prerequisites for this to happen, although
this list is by no means exhaustive.
The world of AI is currently infused with
ML, which encompasses within it many
different evolving sciences. ML makes use
of available data through NLP, PR and
DL. This lies at the heart of AI’s advance, along
with the continuous growth in the amount of
data available. Patterns lie at the essence
of human thought and are crucial for
intelligence.
‘The patterns are important. Certain details
of these chaotic self-organising methods,
expressed as model constraints (rules
defining the initial conditions and the
means for self-organisation), are crucial,
whereas many details within the constraints
are initially set randomly. The system then
self-organises and gradually represents
the invariant features of the information
that has been presented to the system.
The resulting information is not found in
specific nodes or connections but rather is
a distributed pattern.’125
‘The sort of AI we are envisaging here
will also be adept at finding patterns in
large quantities of data. But unlike the
human brain, it won’t be expecting that
data to be organised in the distinctive way
that data coming from an animal’s senses
are organised. It won’t depend on the
distinctive spatial and temporal organisation
of that data, and it won’t have to rely on
associated biases, such as the tendency for
nearby data items to be correlated … To be
effective, the AI will need to be able to find
and exploit statistical regularities without
such help, and this entails that it will be
very powerful and very versatile.’126
Although ML is in its infancy, as is most
of AI, PR and DL will continue to make
inroads and augment a computer decision-
making process while digesting patterns.
Couple the technology with the data
available and the ever-increasing speed
at which data can be stored, accessed and
analysed, and we are approaching the
moment when ‘general’, also known as
‘strong’ AI,127 may be possible. However,
ML and all its constructs require other
factors to make this a reality.
‘The massive parallelism of the human
brain is the key to its pattern-recognition
ability, which is one of the pillars of our
species’ thinking … The brain has on the
order of one hundred trillion interneuronal
connections, each potentially processing
information simultaneously.’128
Creating a machine intelligence capable
of such parallelism, which would then
engender PR at a human or above level, is
not yet viable. However, we are certainly
on track to do so. Once this level of
sophistication has been achieved, the systems
will grow exponentially. ‘Exponential’ is a
key term here, as many do not understand
the implications of exponential growth.
For example, to explain ‘exponential’ in
simple terms (eg doubling at a constant
rate), place one grain of rice on the first
square of a chessboard, then double the
amount of rice on each successive square,
so that every square contains twice the
amount on the previous square. By
the time one is done with the exponential
experiment, there will be over 18 quintillion
grains of rice on that one chessboard.
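The chessboard arithmetic checks out in a few lines of Python: one grain on the first square and a doubling on each of the remaining 63 gives a geometric series summing to 2**64 - 1 grains, just over 18.4 quintillion.

```python
# Square k (counting from 0) holds 2**k grains: 1, 2, 4, 8, ...
grains_per_square = [2 ** k for k in range(64)]

total = sum(grains_per_square)  # geometric series summing to 2**64 - 1
print(f"{total:,}")             # 18,446,744,073,709,551,615
```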
Imagine this type of exponential growth
exploding in parallelism. Imagine such an
increase in humankind’s ability to analyse
data, putting aside the growth in data itself.
Once this stage is reached, humanity
enters the age of possible ‘superintelligence’.
‘A machine superintelligence might itself
be an extremely powerful agent, one that
could successfully assert itself against the
project that brought it into existence as
well as against the rest of the world.’129
‘The singularity-related idea that
interests us here is the possibility of an
intelligence explosion, particularly the
prospect of machine superintelligence’.130
When a superintelligence appears, it
will directly result from the intelligence
explosion. However, superintelligence is
not something one controls with a flip of
the switch or a code change. Once a real
superintelligence appears (or better said,
once an intelligence explosion is on the cusp
of occurring — or has occurred) it may be
too late. Writing code to change something
here or there will no longer do anyone any
good.
‘A successful seed AI would be able to
iteratively enhance itself: an early version of
the AI could design an improved version of
itself, and the improved version — being
smarter than the original — might be able
to design an even smarter version of itself,
and so forth. Under some conditions, such
a process of recursive self-improvement
might continue long enough to result
in an intelligence explosion — an event
in which, in a short period of time, a
system’s level of intelligence increases
from a relatively modest endowment
of cognitive capabilities … to radical
superintelligence.’131
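Bostrom’s iterative enhancement can be caricatured numerically. The toy model below is a sketch under entirely made-up assumptions: the growth rule, the 0.05 and 0.1 constants, and the ‘human_level’ threshold are arbitrary illustrations, not figures from the literature. Below human level, progress is slow and externally driven; above it, each version designs the next, so the gain per generation compounds with capability itself.

```python
def next_version(capability, human_level=1.0):
    """Toy growth rule: all constants are arbitrary, chosen only for illustration."""
    if capability < human_level:
        # Progress driven (and capped) by human designers: slow, linear gains.
        return capability + 0.05
    # Version n designs version n+1: the improvement step itself improves,
    # so the gain per generation grows with current capability.
    return capability * (1 + 0.1 * capability)

c = 0.5
history = [c]
for generation in range(30):
    c = next_version(c)
    history.append(c)

# A long linear crawl to human level, then a runaway once self-design kicks in.
print(f"final capability after 30 generations: {c:.3g}")
```

The exact numbers are meaningless; the qualitative shape is the point: decades of modest, human-driven gains followed by an abrupt explosion once the system crosses the self-design threshold.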
Following the intelligence explosion
and the creation of a singularity, this
‘recursive self-improvement’ is perhaps the
superintelligence’s ultimate capability —
a stage where the intelligence created can
correct any mistakes and errors it ‘judges and
thinks’ it may have made or may have been
made to it. This is the actual ‘event horizon’,
as once recursive self-improvement is viable,
the intelligence explosion will be a logical
consequence.
‘At some point, the seed AI becomes
better at AI design than the human
programmers. Now when the AI improves
itself, it improves the thing that does the
improving.’132
‘once an AI is engineered whose
intelligence is only slightly above
human level, the dynamics of recursive
self-improvement become applicable,
potentially triggering an intelligence
explosion.’133
The lexicon, however, must be clear: data
and information are what is collected and
analysed. Neither term connotes actual
knowledge or intelligence.
‘Information is not knowledge. The world
is awash in information; it is the role of
intelligence to find and act on the salient
patterns’.134
Simply put: as data are amassed and
chaos and complexity ensue, systems
will be flooded with massive amounts of
information. These systems will demonstrate
nonlinear ‘thought’ processes in terms of
what they are attempting to analyse and
the predictions made. Concurrently, ML
will make considerable strides in PR and
DL, bringing us closer to the capabilities of
parallelism as the power of the hardware and
software grows. Indeed, actual exponential
growth, especially in data, may be achieved,
adding to the information explosion.
Complex systems with massive amounts
of data will emerge in our computerisation.
At some impossible-to-predict moment in
time (despite Kurzweil’s prophecy), these
complex systems will emerge into yet
larger systems and continue the process of
chaos-complexity-emergence. Whether
by human hand or by self-generated
computer capability, these systems will
begin to communicate with one another,
merging again into evermore aware and
extensive systems — emergence on an all-
encompassing scale.
As these systems communicate, they
will gain information based upon all the
data they are analysing. An intelligence
will emerge, capable of recursive self-
improvement due to the amount of data and
capabilities inherent within all the chaotic
and complex systems that gave birth to it.
At that point, a superintelligence will appear
and the intelligence explosion will have
begun. The technological singularity will
have reached a technological event horizon.
‘The “event horizon” is the boundary
defining the region of space around a black
hole from which nothing (not even light)
can escape.’135
‘Just as we find it hard to see beyond the
event horizon of a black hole, we also find
it difficult to see beyond the event horizon
of the historical singularity.’136
‘This, then, is the singularity. Some
would say that we cannot comprehend
it, at least with our current level of
understanding. For that reason, we cannot
look past its event horizon and make
complete sense of what lies beyond. This is
one reason we call this transformation the
singularity.’137
As discussed, the superintelligence capable
of recursive self-improvement will have no
use for its human creators. The actions of
this superintelligence will depend a great
deal on how closely it is possible to inculcate
human capabilities into it. It
does not matter whether it looks like an avatar or
is spread across a trillion computers in the
cloud. What truly matters is that once this
superintelligence emerges, it can master
language and all its nuances, have common
sense and show creativity in a positive
sense. Probably the most critical and
fundamental question is whether it will
understand emotion and empathy.
‘This vision of the future has considerable
appeal. If the transition from human-level
AI to superintelligence is inevitable, then
it would be a good idea to ensure that
artificial intelligence inherits basic human
motives and values. These might include
intellectual curiosity, the drive to create,
to explore, to improve, to progress. But
perhaps the value we should inculcate in
AI above all others is compassion toward
others, toward all sentient beings, as
Buddhists say. And despite humanity’s
failings — our war-like inclinations, our
tendency to perpetuate inequality, and our
occasional capacity for cruelty — these
values do seem to come to the fore in
times of abundance. So, the more human-
like an AI is, the more likely it will be to
embody the same values, and the more
likely it is that humanity will move toward
a utopian future, one in which we are
valued and afforded respect, rather than a
dystopian future in which we are treated as
worthless inferiors.’138
To be clear: one slight error, one bias in the
wrong place, one dismissal of information
entropy and the ‘surprise’ in the message,
failure to ensure the systems understand
the actual human condition — will be
disastrous.
‘A flaw in the reward function of a
superintelligent AI could be catastrophic.
Indeed, such a flaw could mean the
difference between a utopian future of
cosmic expansion and unending plenty,
and a dystopian future of endless horror,
perhaps even extinction.’139
‘It would be a serious mistake, perhaps a
dangerous one, to imagine that the space of
possible AIs is full of beings like ourselves,
with goals and motives that resemble
human goals and motives. Moreover,
depending on how it was constructed,
the way an AI or a collective of AIs set
about achieving its aims (insofar as this
notion even made sense) might be utterly
inscrutable, like the workings of the alien
intelligence.’140
Bostrom’s ‘Superintelligence: Paths, Dangers,
Strategies’, Kurzweil’s ‘The Singularity Is
Near: When Humans Transcend Biology’,
Shanahan’s ‘The Technological Singularity’
and Webb’s ‘The Big Nine’ all have one
thing in common. They all discuss protection
against a possible dangerous singularity
and suggest a myriad of methods to build
defences into the system, regulate dangerous
advances, or put in a ‘kill switch’.
These defences, however, are unlikely to
work. A superintelligence will simply self-
correct for its own continued existence and
certainly not allow a kill switch to be used
upon itself. As Bostrom warns, there is only
one chance to get it right.
‘If some day we build machine
brains that surpass human brains in
general intelligence, then this new
superintelligence could become very
powerful. And, as the fate of the gorillas
now depends more on us humans than
on the gorillas themselves, so the fate of
our species would depend on the actions
of the machine superintelligence. We do
have one advantage: we get to build the
stuff. In principle, we could build a kind
of superintelligence that would protect
human values. We would certainly have
strong reason to do so. In practice, the
control problem — the problem of how to
control what the superintelligence would
do — looks quite difficult. It also looks
like we will only get one chance. Once
unfriendly superintelligence exists, it would
prevent us from replacing it or changing its
preferences. Our fate would be sealed.’141
The progression of emergence will occur
when complex systems begin to communicate
with each other without human oversight.
These complex-emergent systems, essentially
existing in electronic memory (no matter how
advanced it may be), will converge and build
ever-better systems by themselves. They will
be far too intelligent to let any sort of defence
get in their way, even if they lack the proper
knowledge (as opposed to information).
The advent of a singularity when there is
bias in the data, when information entropy
does not get rid of the noise, when language
comprehension is mistaken, and common
sense and emotion are not correctly
ensconced in the systems, will lead to that one
chance being blown. The worst-case scenario,
known as the ‘Terminator argument’ (based
upon the Terminator movie franchise142), or
its less radical cousin, ‘transhumanism’,143,144
may become an actuality when self-awareness
is achieved. If either of these scenarios
becomes a reality, it will be lights out for
humanity — figuratively and literally.
CONCLUSION
‘One Ring to rule them all, One Ring
to find them,
One Ring to bring them all and in the
darkness bind them.’145
AI is not one science. It is a conglomeration
of many different fields and theories —
a giant jigsaw that, once complete, will
create a single entity possessing great power.
In many ways, one may compare it to ‘the
theory of everything’ (‘M-theory’), which
fascinated, perturbed and eluded great
minds like Albert Einstein and Stephen
Hawking.
‘M-theory is not a theory in the usual
sense. It is a whole family of different
theories, each of which is a good
description of observations only in some
range of physical situations. It is a bit like
a map. As is well known, one cannot
show the whole of the earth’s surface on a
single map.’146
One can say the same about AI. ‘It is
a whole family of different theories,
constructs and sciences, each being a good
description of observations in a specific
range of conditions’. Will it lead to a ‘theory
of everything’, or will it prove to be as
elusive as finding which butterfly flapped its
wings in Brazil and created the tornado in
Texas?
Data will continue to grow, perhaps in
real exponential terms, and we will continue
to save every bit. Will we be able to handle
said data wisely, predicting and building for
a better future? Will we pollute the analysis
with bias and censure? Will it all become
just ‘noise’, or will humankind continue to
express childlike wonder in the ‘surprise’
of the message? Will chaos and complexity
overwhelm us as we emerge, sometimes
blindly, without understanding the systems
surrounding our reality?
The words of Shakespeare provide pause
to contemplate humanity’s existence:
‘Nay, had I power, I should
Pour the sweet milk of concord into hell,
Uproar the universal peace, confound
All unity on earth.’147
Frodo’s ring remains on our finger,
reminding us of the power we have amassed
and the road we are on. Will it deliver a
more meaningful existence — one in which
humankind’s dreams can be realised — or
will it bring uproar to universal peace and
confound all unity on earth?
Frodo destroys the ring, ensuring that
darkness will not prevail. For humanity
there is no such choice. Society can only
learn to live with the consequences of
data and information while constantly
evaluating techniques, bias and flaws with
knowledge, common sense and empathy.
Even more important is the need to create
methods through which it is possible
to inculcate these attributes into our
creations. If we ignore this and insist on
rushing headlong into our technological
nirvana, calamity will rain down upon
humankind.
Pure data depict the past and present
without bias or prejudice. In other words,
pure data depict information.
By contrast, what people do with their
data is neither pure nor ever without bias or
prejudice.
If we do not plan for such consequences,
the ring of AI and data will bind us to
the darkness and we will find ourselves in
a world where our impotence vis-à-vis
our own AI creations will force us into
unadulterated chaos of our own design.
Perhaps Bilbo’s simple advice to Frodo (as
Frodo reported it) may provide the wisdom
for humankind to navigate the ceaseless
emanating confluence of AI:
‘He used often to say there was only one
Road; that it was like a great river: its
springs were at every doorstep, and every
path was its tributary. “It’s a dangerous
business, Frodo, going out of your door,”
he used to say. “You step into the Road,
and if you don’t keep your feet, there is
no knowing where you might be swept
off to.”’148
POSTSCRIPT
Covering AI and data within an academic
article presents many challenges.
Additionally, a vast majority of the
populace has relegated AI to buzzword
status, with little or no understanding of the
actual technologies involved or the possible
consequences. This paper is meant to be
read as a fundamental overview of the
progression of AI and the reliance upon
data. Undoubtedly, there are many areas
of enquiry that are not presented herein.
Rather, a meagre attempt has been made
to present the ‘headings’ as they are while
leaving it up to the reader to delve into the
deeper contexts.
It is also true that a thoroughly valid
argument can be presented that the
article contains a surfeit of references and
quotations. It would be impossible for
any researcher to remain immune to the
knowledge and superb intelligence of
those who contributed and continue to
leave their mark upon AI. Such research
is a humbling and often overwhelming
experience. Therefore, the abundance of
references and quotes is warranted and
indeed required, allowing for confirmation
of the thoughts, possibilities and directions
presented herein.
ACKNOWLEDGMENTS
In Israel, at the Herzliya traffic interchange
near the bridge, there was once a clown
on stilts entertaining the passersby. When
the light turned red, the clown would walk
into the intersection, stand on the road, and
pantomime while placing his large clown hat
on the highway for tips. His performance
was surprisingly humorous, causing everyone
to smile.
One summer day, on my way to
the train, I stood watching this clown,
mesmerised by his ability to remain balanced
on the giant stilts attached to his legs.
Suddenly, the traffic lights went haywire,
and there this poor clown stood, in the
middle of a busy thoroughfare with honking
cars and impatient drivers coming at him
from all directions. It was utter chaos.
Through the ensuing bedlam, without
showing any fear, walking calmly on his
giant stilts, the clown made his way back to
the sidewalk, avoiding the moving cars and
waving happily at the angry drivers. At that
moment, as my safe linear world entered
non-linearity amidst the pandemonium of
complexity, I suddenly realised chaos could
be contained, directed and controlled.
Indeed, since that second in fragile time, I
have come to firmly believe that chaos exists
in our universe so we can evolve.
When I crossed the road, I made a
point of putting money in his big funny
hat. The clown bowed from on-high in
appreciation.
I owe an immense measure of gratitude
to this unknown person, making his living
as a clown on the street. In one enlightening
moment, he had changed the course of my
thought processes forever.
Thank you, Mr Clown! Merci! Toda
Raba!
‘Chaos is the score upon which reality is
written.’149
References
1 . Tolkien, J.R.R. (2009) ‘The Lord of the Rings:
The Classic Fantasy Masterpiece’, HarperCollins
Publishers, London, Kindle Edition, Location 939.
2 . Tolkien, J.R.R. (2009) ‘The Hobbit’, HarperCollins
Publishers, London, Kindle Edition.
3 . Tolkien, ref. 1 above, Location 1206.
4 . Harari,Y.N. (2016) ‘Homo Deus:A Brief History of
Tomorrow’, HarperCollins Publishers Inc., Harper,
New York, NY, Kindle Edition, Chapter 11.
5 . Ibid., Location 6101.
6 . Ibid., Location 6321.
7 . Webb,A. (2019) ‘The Big Nine: How the Tech
Titans and Their Thinking Machines Could Warp
Humanity’ PublicAffairs, New York, NY, Kindle
Edition, Location 64.
8 . Wikipedia (n.d.) ‘Information explosion’, available
at: https://en.wikipedia.org/wiki/Information_
explosion
(accessed 9th May, 2021).
9 . Webb, ref. 7 above, Location 2685.
10. Bostrom, N. (2014) ‘Superintelligence: Paths,
Dangers, Strategies’, Oxford University Press, New
York, NY, Kindle Edition, Location 380.
11. Quote taken from:‘Space Seed’, Star Trek, created
by Gene Roddenberry, Season 1, Episode 24,
Paramount (1968).
12. Einstein,A. (2012) ‘Einstein on Cosmic Religion and
Other Opinions and Aphorisms’, Dover Publications,
New York, NY, p. 97.
13. Isaacson,W. (2014) ‘The Innovators: How a Group
of Hackers, Geniuses, and Geeks Created the Digital
Revolution’, Simon & Schuster, New York, NY,
Kindle Edition, Location 277–721.
14. Ibid., Location 655.
15. Turing, A.M. (1950) ‘Computing machinery and
intelligence’, Mind,Vol. 59, No. 236, pp. 433–460.
16. Wikipedia (n.d.) ‘Turing test’, available at: https://
en.wikipedia.org/wiki/Turing_test
(accessed 29th
July, 2021).
17. Turing, ref. 15 above.
18. Descartes, R. (1637) ‘Discourse on the Method of
Rightly Conducting One’s Reason and of Seeking
Truth in the Sciences’, Project Gutenberg e-book,
available at: https://www.gutenberg.org/files/59/59-
h/59-h.htm#part4 (accessed 29th July, 2021).
19. Bostrom, ref. 10 above.
20. Webb, ref. 7 above.
21. Kurzweil, R. (2013) ‘The Singularity Is Near:When
Humans Transcend Biology’, Duckworth Overlook,
London, Kindle Edition.
22. Kurzweil, R. (2012) ‘How to Create a Mind:The
Secret of Human Thought Revealed’, Penguin
Books, London, Kindle Edition.
23. Harari, ref. 4 above.
24. Gleick, J. (2011) ‘Chaos: Making a New Science’,
Open Road Media, New York, NY, Kindle Edition.
25. Gleick, J. (2011) ‘The Information’, Pantheon Books,
New York, NY, Kindle Edition.
26. Mitchell, M. (2009) ‘Complexity:A Guided Tour’,
Oxford University Press, New York, NY, Kindle
Edition.
27. Mitchell, M. (2019) ‘Artificial Intelligence:A Guide
for Thinking Humans’, Farrar, Straus and Giroux,
New York, NY, Kindle Edition.
28. Isaacson, W. (2017) ‘Leonardo da Vinci’, Simon & Schuster, New York, NY.
29. Isaacson, ref. 13 above.
30. Isaacson, W. and Bezos, J. (2020) ‘Invent and Wander: The Collected Writings of Jeff Bezos, With an Introduction by Walter Isaacson’, Harvard Business Review Press and PublicAffairs, Boston, MA, Kindle Edition, Location 76–468.
31. Isaacson, W. (2011) ‘Steve Jobs: The Exclusive Biography’, Little, Brown Book Group, London.
32. Isaacson, W. (2021) ‘The Code Breaker: Jennifer Doudna, Gene Editing, and the Future of the Human Race’, Simon & Schuster, New York, NY, Kindle Edition.
33. Ibid., Location 467–481.
34. Wikipedia (n.d.) ‘Information theory’, available at:
https://en.wikipedia.org/wiki/Information_theory
(accessed 29th July, 2021).
35. Wikipedia (n.d.) ‘The laws of thought’, available at: https://en.wikipedia.org/wiki/The_Laws_of_Thought (accessed 19th August, 2021).
36. Wikipedia (n.d.) ‘Boolean function’, available at:
https://en.wikipedia.org/wiki/Boolean_function
(accessed 19th August, 2021).
37. Isaacson, ref. 13 above, Location 943.
38. Shannon, C. (1948) ‘A mathematical theory of communication’, Bell System Technical Journal, Vol. 27, July/October, pp. 379–423.
39. Isaacson, ref. 13 above, Location 943.
40. Stone, J.V. (2018) ‘Information Theory: A Tutorial Introduction’, Sebtel Press, Kindle Edition, Location 82.
41. Shannon, ref. 38 above.
42. Gleick, ref. 25 above, Location 66.
43. Shannon, ref. 38 above.
44. Stone, ref. 40 above, Location 359.
45. Soni, J. and Goodman, R. (2017) ‘A Mind at Play: How Claude Shannon Invented the Information Age’, Simon & Schuster, New York, NY, Kindle Edition, Location 69.
46. Wikipedia (n.d.) ‘John von Neumann’, available at:
https://en.wikipedia.org/wiki/John_von_Neumann
(accessed 20th August, 2021).
47. Wikipedia (n.d.) ‘Entropy’, available at: https://en.wikipedia.org/wiki/Entropy (accessed 29th July, 2021).
48. Wikipedia (n.d.) ‘Entropy in thermodynamics and information theory’, available at: https://en.wikipedia.org/wiki/Entropy_in_thermodynamics_and_information_theory (accessed 29th July, 2021).
49. Mitchell, ref. 26 above, Location 744.
50. Ibid., Location 738.
51. Ben-Naim, A. (2008) ‘Entropy Demystified: The Second Law Reduced to Plain Common Sense’, World Scientific Publishing Company, London, Kindle Edition, Location 489.
52. Mitchell, ref. 26 above, Location 902–918.
53. Gleick, ref. 25 above, Location 3592.
54. Stone, ref. 40 above, Location 603.
55. Gross, T.W. (2021) ‘Thesis and antithesis — Innovation and predictive analytics: Σ (Past + Present) Data Future Success’, Applied Marketing Analytics, Vol. 6, No. 3, pp. 22–36.
56. Gross, T.W. (2020) ‘Sentiment analysis and emotion recognition: Evolving the paradigm of communication within data classification’, Applied Marketing Analytics, Vol. 6, No. 1, pp. 230–243.
57. Chomsky, N. (2017) ‘On Language: Chomsky’s Classic Works: Language and Responsibility and Reflections on Language’, The New Press, New York, NY, Kindle Edition, Location 683.
58. Everett, D.L. (2017) ‘How Language Began: The Story of Humanity’s Greatest Invention’, Liveright, New York, NY, Kindle Edition, Location 329.
59. Ibid., Location 184.
60. Kurzweil, ref. 21 above, Location 1098.
61. Conway, M.E. (1968) ‘How do committees invent?’, Datamation, Vol. 14, No. 5, pp. 28–31.
62. Wikipedia (n.d.) ‘Conway’s law’, available at: https://en.wikipedia.org/wiki/Conway’s_law (accessed 29th July, 2021).
63. Webb, ref. 7 above, Location 1763.
64. Ibid., Location 1666.
65. Ibid., Location 2763.
66. Wikipedia (n.d.) ‘Artificial intelligence’, available at: https://en.wikipedia.org/w/index.php?title=Artificial_intelligence&oldid=997705860 (accessed 13th January, 2021).
67. Wikipedia (n.d.) ‘John McCarthy (computer scientist)’, available at: https://en.wikipedia.org/wiki/John_McCarthy_(computer_scientist) (accessed 29th July, 2021).
68. Webb, ref. 7 above, Location 893.
69. Wikipedia (n.d.) ‘Bayes’ theorem’, available at:
https://en.wikipedia.org/wiki/Bayes%27_theorem
(accessed 29th July, 2021).
70. Wikipedia (n.d.) ‘An essay towards solving a problem in the doctrine of chances’, available at: https://en.wikipedia.org/wiki/An_Essay_towards_solving_a_Problem_in_the_Doctrine_of_Chances (accessed 29th July, 2021).
71. McGrayne, S.B. (2011) ‘Preface’ in ‘The Theory That
Would Not Die: How Bayes’ Rule Cracked the
Enigma Code, Hunted Down Russian Submarines,
& Emerged Triumphant from Two Centuries of
Controversy’,Yale University Press, New Haven, CT,
Kindle Edition, Location 69.
72. Wikipedia (n.d.) ‘Decision tree learning’, available at: https://en.wikipedia.org/wiki/Decision_tree_learning (accessed 29th July, 2021).
73. Wikipedia (n.d.) ‘Random forest’, available at:
https://en.wikipedia.org/wiki/Random_forest
(accessed 29th July, 2021).
74. Hartshorn, S. (2016) ‘Machine Learning With Random Forests And Decision Trees: A Visual Guide For Beginners’, Kindle Edition.
75. Smith, C. and Koning, M. (2017) ‘Decision Trees and Random Forests: A Visual Introduction For Beginners: A Simple Guide to Machine Learning with Decision Trees’, Blue Windmill Media, Canada, Kindle Edition.
76. Ibid., Location 228.
77. Wikiquote (n.d.) ‘The Dark Knight (film)’, available at: https://en.wikiquote.org/wiki/The_Dark_Knight_(film) (accessed 29th July, 2021).
78. Shelley, M.W. (1818) ‘Frankenstein’, Project Gutenberg e-book, available at: https://www.gutenberg.org/files/42324/42324-h/42324-h.htm (accessed 31st July, 2021).
79. Wikipedia (n.d.) ‘Chaos theory’, available at: https://en.wikipedia.org/wiki/Chaos_theory (accessed 29th July, 2021).
80. Wikipedia (n.d.) ‘Butterfly effect’, available at: https://en.wikipedia.org/wiki/Butterfly_effect#History (accessed 29th July, 2021).
81. Lorenz, E.N. (1963) ‘Deterministic nonperiodic flow’, Journal of the Atmospheric Sciences, Vol. 20, No. 2, pp. 130–141.
82. Lorenz, E.N. (1963) ‘The predictability of hydrodynamic flow’, Transactions of the New York Academy of Sciences, Vol. 25, No. 4, pp. 409–432.
83. Gleick, ref. 24 above, Location 156.
84. Jones, C. (2013) ‘Chaos in an atmosphere hanging on a wall’, available at: http://mpe.dimacs.rutgers.edu/2013/03/17/chaos-in-an-atmosphere-hanging-on-a-wall/ (accessed 2nd August, 2021).
85. Gross, T. (2015) ‘An overwhelming amount of data: Applying chaos theory to find patterns within big data’, Applied Marketing Analytics, Vol. 1, No. 4, pp. 377–387.
86. Webb, A. and Hessel, A. (2022) ‘The Genesis Machine: Our Quest to Rewrite Life in the Age of Synthetic Biology’, PublicAffairs, New York, NY.
87. Gleick, ref. 24 above, Location 99–118.
88. Ibid., Location 389.
89. Mitchell, ref. 26 above, Location 449.
90. Gleick, ref. 24 above, Location 1029.
91. Wikipedia (n.d.) ‘Feigenbaum constants’, available at: https://en.wikipedia.org/wiki/Feigenbaum_constants (accessed 3rd August, 2021).
92. Feigenbaum, M.J. (1980) ‘Universal behavior in nonlinear systems’, Los Alamos Science, Vol. 1, No. 1, pp. 4–27.
93. Mitchell, ref. 26 above, Location 674.
94. Ibid., Location 405.
95. Wikipedia (n.d.) ‘Complex system’, available at:
https://en.wikipedia.org/wiki/Complex_system
(accessed 29th July, 2021).
96. Gleick, ref. 24 above, Location 4491.
97. Munro, A. (2010) ‘Beyond the Mask: The Rising Sign — Part II: Libra-Pisces’, Genoa House, Toronto, p. 193.
98. Mitchell, ref. 27 above, Location 956.
99. Ibid., Location 349.
100. Ibid., Location 307.
101. Ibid., Location 2970.
102. Kurzweil, ref. 21 above, Location 904.
103. Wikipedia (n.d.) ‘AlphaGo Zero’, available at:
https://en.wikipedia.org/wiki/AlphaGo_Zero
(accessed 3rd August, 2021).
104. Wikipedia (n.d.) ‘DeepMind’, available at: https://en.wikipedia.org/wiki/DeepMind (accessed 3rd August, 2021).
105. Kennedy, M. (2017) ‘Computer learns to play go at superhuman levels without human knowledge’, NPR, 18th October, available at: https://www.npr.org/sections/thetwo-way/2017/10/18/558519095/computer-learns-to-play-go-at-superhuman-levels-without-human-knowledge (accessed 3rd August, 2021).
106. Knapton, S. (2017) ‘AlphaGo Zero: Google DeepMind supercomputer learns 3,000 years of human knowledge in 40 days’, Telegraph, 18th October, available at: https://www.telegraph.co.uk/science/2017/10/18/alphago-zero-google-deepmind-supercomputer-learns-3000-years/ (accessed 3rd August, 2021).
107. Meiping, G. (2017) ‘New version of AlphaGo can master Weiqi without human help’, CGTN, 19th October, available at: https://news.cgtn.com/news/314d444d31597a6333566d54/share_p.html (accessed 3rd August, 2021).
108. Duckett, C. (2017) ‘DeepMind AlphaGo Zero learns on its own without meatbag intervention’, ZDNet, 19th October, available at: https://www.zdnet.com/article/deepmind-alphago-zero-learns-on-its-own-without-meatbag-intervention/ (accessed 3rd August, 2021).
109. Mitchell, ref. 26 above, Location 4879.
110. Ibid., Location 4939.
111. Capra, F. and Luisi, P.L. (2014) ‘The Systems View of Life: A Unifying Vision’, ‘Cognition and consciousness’, Cambridge University Press, New York, NY, pp. 257–265.
112. Mitchell, ref. 26 above, Location 307.
113. Kurzweil, ref. 21 above, Location 2671.
114. New England Complex Systems Institute (n.d.) ‘Concepts: emergence’, available at: https://necsi.edu/emergence (accessed 3rd August, 2021).
115. Goldstein, J. (1999) ‘Emergence as a construct: history and issues’, Emergence, Vol. 1, No. 1, pp. 49–72.
116. Corning, P.A. (2002) ‘The re-emergence of “emergence”: a venerable concept in search of a theory’, Complexity, Vol. 7, No. 6, pp. 18–30.
117. Wikipedia (n.d.) ‘Moore’s law’, available at: https://en.wikipedia.org/wiki/Moore’s_law (accessed 3rd August, 2021).
118. Wikipedia (n.d.) ‘Technological singularity’, available at: https://en.wikipedia.org/wiki/Technological_singularity (accessed 3rd August, 2021).
119. Ecclesiastes 9:12.
120. Quote taken from: ‘The Ultimate Computer’, Star Trek, created by Gene Roddenberry, Season 2, Episode 24, Paramount (1968).
121. Kurzweil, ref. 21 above, Location 348.
122. Shanahan, M. (2015) ‘The Technological Singularity’,
The MIT Press, Cambridge, MA, Kindle Edition,
Location 101.
123. Kurzweil, ref. 21 above, Location 2344.
124. Kurzweil, R. (2022) ‘The Singularity Is Nearer’,
Viking, New York, NY.
125. Kurzweil, ref. 21 above, Location 2685.
126. Shanahan, ref. 122 above, Location 1326–1343.
127. Wikipedia (n.d.) ‘Artificial general intelligence’, available at: https://en.wikipedia.org/wiki/Artificial_general_intelligence (accessed 6th August, 2021).
128. Kurzweil, ref. 21 above, Location 2626–2642.
129. Bostrom, ref. 10 above, Location 2546.
130. Ibid., Location 2685.
131. Ibid., Location 971.
132. Ibid., Location 2560.
133. Shanahan, ref. 122 above, Location 1292.
134. Bostrom, ref. 10 above, Location 7075.
135. COSMOS (n.d.) ‘Event Horizon’, available at: https://astronomy.swin.edu.au/cosmos/e/Event+Horizon (accessed 8th August, 2021).
136. Kurzweil, ref. 21 above, Location 9392.
137. Ibid., Location 735.
138. Shanahan, ref. 122 above, Location 1566.
139. Ibid., Location 1770.
140. Ibid., Location 711.
141. Bostrom, ref. 10 above, Location 55–58.
142. Wikipedia (n.d.) ‘Terminator (franchise)’, available at: https://en.wikipedia.org/wiki/Terminator_(franchise) (accessed 3rd August, 2021).
143. Wikipedia (n.d.) ‘Transhumanism’, available at:
https://en.wikipedia.org/wiki/Transhumanism
(accessed 3rd August, 2021).
144. Shanahan, ref. 122 above, Location 2091–2329.
145. Tolkien, ref. 1 above, Location 1211.
146. Hawking, S. and Mlodinow, L. (2010) ‘The Grand Design’, Transworld Digital, London, Kindle Edition, Location 68.
147. Shakespeare, W. (1605) ‘Macbeth’, Act IV, scene 3,
line 97.
148. Tolkien, ref. 1 above, Location 1649.
149. Miller, H. (1961) ‘Tropic of Cancer’, Grove Press, New York, NY.