Detecting Qualia
in Natural and Artificial Agents
Roman V. Yampolskiy
Computer Engineering and Computer Science
Speed School of Engineering
University of Louisville
roman.yampolskiy@louisville.edu
“The greatest obstacle to discovery is not ignorance - it is the illusion of knowledge.”
Daniel J. Boorstin
“Consciousness is the one thing in this universe that cannot be an illusion.”
Sam Harris
Abstract
The Hard Problem of consciousness has been dismissed as an illusion. By showing that computers
are capable of experiencing, we show that they are at least rudimentarily conscious, with the potential
to eventually reach superconsciousness. The main contribution of the paper is a test for confirming
certain subjective experiences in a tested agent. We follow with an analysis of the benefits and problems
of conscious machines and the implications of such a capability for the future of computing, machine
rights and artificial intelligence safety.
Keywords: Artificial Consciousness, Illusion, Feeling, Hard Problem, Mind Crime, Qualia.
1. Introduction to the Problem of Consciousness
One of the deepest and most interesting questions ever considered is the nature of consciousness.
An explanation for what consciousness is, how it is produced, how to measure it or at least detect
it [1] would help us to understand who we are, how we perceive the universe and other beings in
it, and maybe even comprehend the meaning of life. As we embark on the quest to create intelligent
machines, understanding consciousness takes on an additional, fundamental role and demands engineering
thoroughness. As the presence of consciousness is taken to be the primary reason for
granting many rights and ethical considerations [2], its full understanding will drastically change
how we treat our mind children and perhaps how they treat us.
Initially, the question of consciousness was broad and ill-defined, encompassing problems related
to intelligence, information processing, free will, self-awareness, the essence of life and many others.
With better understanding of brain architecture and progress in artificial intelligence and cognitive
science, many easy sub-problems of consciousness have been successfully addressed [3] and
multiple neural correlates of consciousness identified [4]. However, some fundamental questions
remain as poignant as ever: What is it like to be a bat? [5], What is it like to be a brain simulation?
[10], etc. In other words, what is it like to be a particular type of agent [6-9]? What does it feel like
to be one? Why do we feel something at all? Why doesn’t red sound like a bell [11]? What does red
look like [12]? What is it like to see with your tongue [13]? In other words, we are talking about
experiencing what it is like to be in a particular state. Block [14] calls it Phenomenal or P-
consciousness to distinguish it from Access or A-consciousness. David Chalmers managed to
distill away non-essential components of consciousness and suggested that explaining qualia (what
it feels like to experience something) and why we feel in the first place as opposed to being
philosophical zombies [15] is the Hard Problem of consciousness [3]:
“The really hard problem of consciousness is the problem of experience. When we think and
perceive, there is a whir of information processing, but there is also a subjective aspect. As Nagel
(1974) has put it, there is something it is like to be a conscious organism. This subjective aspect is
experience. When we see, for example, we experience visual sensations: the felt quality of redness,
the experience of dark and light, the quality of depth in a visual field. Other experiences go along
with perception in different modalities: the sound of a clarinet, the smell of mothballs. Then there
are bodily sensations from pains to orgasms; mental images that are conjured up internally; the felt
quality of emotion; and the experience of a stream of conscious thought. What unites all of these
states is that there is something it is like to be in them. All of them are states of experience.” [3].
“[A]n organism is conscious if there is something it is like to be that organism, and a mental
state is conscious if there is something it is like to be in that state. Sometimes terms such as
“phenomenal consciousness” and “qualia” are also used here, but I find it more natural to speak of
“conscious experience” or simply “experience.”” [3]
Daniel Dennett [16] and others [17] have argued that in fact there is no Hard Problem and that what
we perceive as consciousness is just an illusion like many others, an explanation explored by
scholars of illusionism [18-21]. Over the years a significant amount of evidence has been collected,
all affirming that much of what we experience is not real [22], including visual [23-25], auditory
[26], tactile [27], gustatory [28], olfactory [29], culture-specific [30] and many other types of
illusions [31]. An illusion is a discrepancy between an agent’s awareness and some stimulus [32].
Illusions can be defined as stimuli which produce a surprising percept in the experiencing agent
[33] or as a difference between perception and reality [34]. As we make our case mostly by relying
on Visual Illusions in this paper, we include the following definition from García-Garibay et al.:
“Visual illusions are sensory percepts that can’t be explained completely from the observed image
but that arise from the internal workings of the visual system.” [35].
Overall, examples of illusions may include: impossible objects [36], blind spot [37], paradoxes
(Zeno’s [38], mathematical/logical illusions [39]), quantum illusions [40], mirages [41], art [42,
43], Rorschach tests [44], acquired taste [45], reading jumbled letters [46], forced perspective [47],
gestaltism [48], priming [49], stereograms [50], delusion boxes [51], temporal illusions [52],
constellations [53], illusion within an illusion [54], world [55], Déjà Vu [56], reversing goggles
57], rainbows [58], virtual worlds [59], and wireheading [60]. It seems that illusions are not
exceptions; they are the norm in our world, an idea which has been rediscovered through the ages [61-63].
Moreover, if we take a broader definition and include experiences of different states of
consciousness, we can add: dreams (including lucid dreams [64] and nightmares [65]),
hallucinations [66], delusions [67], drug induced states [68], phantom pains [69], religious
experiences [70], self [71] (homunculus [72]), cognitive biases [73], mental disorders, invisible
disabilities and perception variations (Dissociative identity disorder [74], Schizophrenia [75, 76],
Synesthesia [77], Simultanagnosia [78], Autism [79], Ideasthesia [80], Asperger’s [81],
Apophenia [82], Aphantasia [83], Prosopagnosia [84] all could be reclassified as issues with
“correctly” experiencing illusions), Pareidolia [85], ironic processes [86], emotions (love, hate)
[87], feelings (hunger, pain, pleasure) [88], body transfer [89], out of body experiences [90],
sensory substitution [91], novel senses [92], and many others.
Differences between what is traditionally considered to be an illusion and what we included can
be explained by how frequently we experience them. For example, the sky looks different
depending on the time of day, the amount of sunlight, or the angle you are viewing it from, but we
don’t consider it to be an illusion because we experience it so frequently. Essentially, everything
can be considered to be an illusion, the difference is that some stimuli are very common while
others are completely novel to us, like a piece of great art, see for example [42]. This makes us
think that if we experience something many times it is real, but if we see something for the first
time it must be an illusion.
At the extreme, we can treat every experience as an illusion in which some state of atomic particles
in the universe is perceived as either a blue sky, a beautiful poem, a hot plate or a conscious
agent. This realization is particularly obvious in the case of digital computers, which are machines
capable of extrapolating all the world’s objects from strings of binary digits. Isn’t experiencing a
face in a bunch of zeroes and ones a great illusion, in particular while another machine experiences
a melody on the same set of inputs ([93], p. 44)?
Likewise, neurodiverse individuals may experience the world in very different ways; just consider
color blindness [94] as an example of the same inputs being experienced differently by diverse types of
human agents. In fact, we suggest that most mental disorders can be better understood as problems
with certain aspects of generating, sustaining or analyzing illusions [75]. Similarly, with animals,
studies show that many are capable of experiencing the same illusions as people [95-98], while also
experiencing our world in a very different way [99]. Historically, we have been greatly
underestimating consciousness of animals [100], and it is likely that now we are doing it to
intelligent machines.
What it feels like to be a particular type of agent in a given situation depends on the
hardware/software/state of the agent and stimulation being provided by the environment. As the
qualia represent the bedrock of consciousness, we can formally define a conscious agent as one
capable of experiencing at least some broadly defined illusions. To illustrate this more formally,
we can represent the agent and its inputs as the two shares employed in visual cryptography [101]:
depending on the composition of the agent, the input may end up producing a diametrically opposite
experience [102, 103] (see the sketch following the list below). Consequently, consciousness is an ability to experience, and we can state
two ways in which illusions and consciousness may interact to produce a conscious agent:
An agent is real and is experiencing an illusion. This explains qualia and the agent itself is
real.
An agent is real and is having an illusion in which some other agent experiences an illusion.
Self-identifying with such an agent creates self-consciousness. A sequence of such
episodes corresponds to a stream of consciousness and the illusionary agent itself is not
real. You are an illusion experiencing an illusion.
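To make the visual-cryptography analogy concrete, here is a minimal sketch, entirely our own illustration rather than the construction from [101]: a hidden binary "percept" is split into two XOR shares, one standing in for the agent and one for the stimulus. The agent whose makeup complements the stimulus reconstructs the percept, while a differently composed agent receiving the very same stimulus "experiences" something else.

# Illustrative sketch only: (2,2) XOR secret sharing as a stand-in for the
# agent/input analogy. The 8x8 "percept" and variable names are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def make_shares(percept):
    """Split a binary percept into a random 'agent' share and a matching 'input' share."""
    agent_share = rng.integers(0, 2, size=percept.shape)
    input_share = percept ^ agent_share
    return agent_share, input_share

def experience(agent_share, input_share):
    """The experience emerges only from the combination of agent and input."""
    return agent_share ^ input_share

percept = rng.integers(0, 2, size=(8, 8))          # hidden pattern to be experienced
agent_a, stimulus = make_shares(percept)           # agent_a is 'tuned' to this stimulus
agent_b = rng.integers(0, 2, size=percept.shape)   # a differently composed agent

print(np.array_equal(experience(agent_a, stimulus), percept))  # True: the percept is experienced
print(np.array_equal(experience(agent_b, stimulus), percept))  # False: same input, different experience

In true visual cryptography the shares are overlaid optically rather than XORed, but the point carries over: what is experienced depends jointly on the stimulus and on the makeup of the experiencer.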
2. Test for Detecting Qualia
Illusions provide a tool [104, 105], which makes it possible to sneak a peek into the mind of another
agent and determine that an agent has in fact experienced an illusion. The approach is similar to
non-interactive CAPTCHAs, in which some information is encoded in a CAPTCHA challenge
[106-110] and it is only by solving the CAPTCHA correctly that the agent is able to obtain
information necessary to act intelligently in the world, without having to explicitly self-report its
internal state [111-114]. With illusions, it is possible to set up a test in which it is only by
experiencing an illusion that the agent is able to enter into a certain internal state, which we can
say it experiences. It is not enough to know that something is an illusion. For example, with a
classical face/vase illusion [115], an agent who was previously not exposed to that challenge could
be asked to report which two interpretations of the image it sees, and if the answer matches that of
a human experiencing that illusion, the agent must also be experiencing the illusion, though perhaps in
a different way.
Our proposal represents a variant of the Turing Test [116, 117] but with emphasis not on behavior
or knowledge but on experiences, feelings and internal states. In related research, Schweizer [118]
has proposed a Total Turing Test for Qualia (Q3T), which is a variant of the Turing Test for a robot
with sensors, with questions concentrated on experiences such as: how do you find that wine?
Schneider and Turner have proposed a behavior-based AI consciousness test, which looks at
whether the synthetic mind has an experience-based understanding of the way it feels to be
conscious as demonstrated by an agent talking about consciousness related concepts such as
afterlife or soul [119].
What we describe is an empirical test for presence of some subjective experiences. The test is
probabilistic but successive different variants of the test can be used to obtain any desired level of
confidence. If a collaborating agent fails a particular instance of the test it doesn’t mean that the
agent doesn’t have qualia, but passing an instance of the test should increase our belief that the
agent has experiences in proportion to the chance of guessing the correct answer for that particular
variant of the test. As qualia are agent-type (hardware) specific (human, species, machine, etc.), it
would be easiest for us to design a human-compatible qualia test, but in principle, it is possible to
test for any type of qualia, even the ones that humans don’t experience themselves. Obviously,
having some qualia doesn’t imply the ability to experience them all. While what we propose is a binary
detector test for some qualia, it is possible to design specific variants for extracting particular
properties of qualia experience such as color, depth, size, etc. The easiest way to demonstrate the
construction of our test is by converting famous visual illusions into instances of our test questions,
as seen in Figure 1. Essentially, we present our subject with an illusion and ask it a multiple choice
question about the illusionary experience, such as: how many black dots do you see? How many
curved lines are in the image? Which of the following effects do you observe? It is important to
only test subjects with tests they have not experienced before and about which information is not
readily available. Ideally, a new test question should be prepared every time to prevent the subject
from cheating. A variant of the test may ask open-ended questions such as: please describe what
you see. In that case, a description could be compared to that produced by a conscious agent, but
this is less formal and opens the door for subjective interpretation of submitted responses. Ideally,
we want to be able to automatically design novel illusions with complex information encoded in
them as experiences.
[Figure 1 panels: three visual illusions, each paired with a multiple choice question]
Horizontal lines are: 1) Not in the image 2) Crooked 3) Straight 4) Red
Orange circles are: 1) Left one is bigger 2) Right one is bigger 3) They are the same size 4) Not in the image
Horizontal stripe is: 1) Solid 2) Spectrum of gray 3) Not in the image 4) Crooked
Image credits: by Fibonacci, own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=1788689; public domain, https://commons.wikimedia.org/w/index.php?curid=828098; by Dodek, own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=1529278.
Figure 1. Visual Illusions presented as tests.
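The following is a minimal sketch of how such a test could be administered and scored. It assumes a hypothetical agent_answer(image, question) interface and four-option questions; the trial entries and "expected human percept" answers are placeholders loosely modeled on Figure 1, and only the reasoning about the chance of guessing follows the text above.

# Sketch of the proposed qualia test as a repeated multiple choice protocol.
# File names, questions and answer keys are illustrative placeholders.
from math import comb

TRIALS = [
    # (illusion image, question, answer reported by human experiencers, options)
    ("cafe_wall.png", "Horizontal lines are:", "Crooked", 4),
    ("ebbinghaus.png", "Orange circles are:", "Left one is bigger", 4),
    ("gradient_bar.png", "Horizontal stripe is:", "Spectrum of gray", 4),
]

def p_chance(successes, trials, options=4):
    """Probability of at least `successes` matches by guessing uniformly."""
    p = 1 / options
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(successes, trials + 1))

def run_test(agent_answer):
    """agent_answer(image, question) -> chosen option string (assumed interface)."""
    hits = sum(agent_answer(image, question) == human_percept
               for image, question, human_percept, _ in TRIALS)
    print(f"{hits}/{len(TRIALS)} answers match the human percept; "
          f"chance probability: {p_chance(hits, len(TRIALS)):.3f}")

Each additional novel illusion the agent answers the way a human experiencer does lowers the probability that it is merely guessing, which is the sense in which the test is probabilistic rather than a binary proof.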
We anticipate a number of possible objections to the validity of our test and its underlying theory:

Objection: Qualia experienced by the test subject may not be the same as those experienced by the test designer.
Response: We are not claiming that they are identical experiences; we are simply showing that an agent had some subjective experiences, which was previously not possible. If sufficiently different, such alternative experiences would not result in passing of the test.

Objection: The system may simply have knowledge of the human mental model and predict what a human would experience on a similar stimulus.
Response: If a system has an internal human (or some other) model which it simulates on the presented stimuli and that generates experiences, it is the same as the whole system having experiences.

Objection: The agent may correctly guess answers to the test or lie about what it experiences.
Response: Yes, for a particular test question, but the test can be given as many times as necessary to establish statistical significance.

Objection: The theory makes no predictions.
Response: We predict that computers built to emulate the human brain will experience progressively more illusions without being explicitly programmed to do so, in particular the ones typically experienced by people.

Turing addressed a number of relevant objections in his seminal paper on computing machinery [120].
3. Computers can Experience Illusions and so are Conscious
The majority of scholars studying illusionism are philosophers, but a lot of relevant work comes from
psychology [121], cognitive science [122] and, more recently, computer science, artificial
intelligence, machine learning and, in particular, artificial neural network research. It is this
interdisciplinary nature of consciousness research which we think is most likely to produce
successful and testable theories, such as the theory presented in this paper, to solve the Hard
Problem.
In the previous section, we have established that consciousness is fundamentally based on an
ability to experience, for example illusions. Recent work with artificially intelligent systems
suggests that computers also experience illusions and in a similar way to people, providing support
for the Principle of Organizational Invariance [3] aka substrate independence [55]. For example,
Zeman et al. [123, 124] and García-Garibay et al. [35] report on neural networks capable of
experiencing the Müller-Lyer illusion, and multiple researchers [125-128] have performed
experiments in which computer models were used to study visual illusions, including teaching
computers to experience geometric illusions [125, 129, 130], brightness illusions [131, 132] and
color constancy illusions [133]. In related research, Nguyen et al. found that neural networks perceive certain
random noise images as meaningful with very high confidence [134]. Those networks were not
explicitly designed to perceive illusions but they do so as a byproduct of the computations they
perform. The field of Adversarial Neural Networks is largely about designing illusions for such
intelligent systems [135, 136] with obvious parallels to inputs known to fool human agents
intentionally [137] or unintentionally [138]. Early work on artificial Neural Networks, likewise
provides evidence for experiences similar to near death hallucinations [139, 140] (based on so-
called “virtual inputs” or “canonical hallucination” or “neural forgery” [141]), dreaming [142,
143], and impact from brain damage [144, 145].
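As a concrete illustration of "designing illusions" for a network, here is a minimal gradient-sign (FGSM-style) sketch in the spirit of [135, 136]; the pretrained model, the random placeholder image and the target label are our own assumptions, not an experiment from the cited papers.

# Sketch: fast gradient sign method for constructing an adversarial "illusion".
# Model, input image and label are placeholders standing in for any classifier.
import torch
import torchvision.models as models

model = models.resnet18(weights="IMAGENET1K_V1").eval()
image = torch.rand(1, 3, 224, 224, requires_grad=True)   # placeholder image
label = torch.tensor([281])                               # placeholder class index

loss = torch.nn.functional.cross_entropy(model(image), label)
loss.backward()                                           # gradient of loss w.r.t. pixels

epsilon = 0.01                                            # small perturbation budget
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

# A human would see two nearly identical images; the network's percept can differ:
print(model(image).argmax().item(), model(adversarial).argmax().item())

Run against a real image labeled with its true class, the same few lines often change the prediction while leaving the picture visually unchanged, which is exactly the sense in which adversarial examples act as machine illusions.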
Zeman [34] reviews the history of research on perception of illusions by computer models and
summarizes the state of the art in such research: “Historically, artificial models existed that did
not contain multiple layers but were still able to demonstrate illusory bias. These models were able
to produce output similar to human behaviour when presented with illusory figures, either by
emulating the filtering operations of cells [127, 146] or by analysing statistics in the environment
[126, 147-149]. However, these models were deterministic, non-hierarchical systems that did not
involve any feature learning. It was not until Brown and Friston (2012) [150] that hierarchical
systems were first considered as candidates for modelling illusions, even though the authors
omitted important details of the model’s architecture, such as the number of layers they recruited.
So to summarise, illusions can manifest in artificial systems that are both hierarchical and
capable of learning. Whether these networks rely on exposure to the same images that we see
during training, or on filtering mechanisms that are based on similar neural operations, they
produce a consistent and repeatable illusory bias. In terms of Marr’s (1982) [151] levels of
description, it appears that illusions can manifest at the hardware level [148, 149] and at the
algorithmic/representational level [123, 127, 146].”
“By dissociating our sensory percepts from the physical characteristics of a stimulus, visual
illusions provide neuroscientists with a unique opportunity to study the neuronal mechanisms
underlying … sensory experiences” [35]. Not surprisingly, artificial neural networks, just like their
natural counterparts, are subject to similar analysis. From this, we have to conclude that even
today’s simple AIs, as they experience specific types of illusions, are rudimentarily conscious.
General intelligence is what humans have and we are capable of perceiving many different types
of complex illusions. As AIs become more adept at experiencing complex and perhaps
multisensory illusions they will eventually reach and then surpass our capability in this domain
producing multiple parallel streams of superconsciousness [152], even if their architecture or
sensors are not inspired by the human brain. Such superintelligent and superconscious systems
could justifiably see us as barely intelligent and weakly conscious, and could probably control the
amount of consciousness they had, within some range. Google’s Deep Dream art [153] gives us some
idea of what it is like to be a modern deep neural network and can be experienced in immersive 3D
via the Hallucination Machine [154]. Olah et al. provide a detailed neuron/layer visual analysis of
what is being perceived by an artificial neural network [155].
3.1 Qualia Computing
If we can consistently induce qualia in computational agents, it should be possible to use such
phenomena to perform computation. If we can encode information in illusions, certain agents can
experience them or their combinations to perform computation, including artificially intelligent
agents capable of controlling their illusions. Illusions are particularly great to represent
superpositions of states (similar to quantum computing), which collapse once a particular view of
the illusion is chosen by the experiencing agent [156]. You can only experience one interpretation
of an illusion at a time, just as in quantum physics you cannot know both the position and the momentum of a
particle at the same time - the well-known conjugate pairs [157]. Famous examples of logical
paradoxes can be seen as useful (just as Kolmogorov complexity is not computable, but very useful)
for super-compressed data storage [158, 159] and hyper-computation [160]. Qualia may also be useful
in explaining decisions produced by deep neural networks, with the last layer efficiently representing
qualia-like states derived from low-level stimuli by lower-level neurons (see the sketch below). Finally,
qualia-based visualization and graphics are a very interesting area of investigation, with the human
mind giving us examples in visual thinking and lucid dreaming.
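As one hedged illustration of the last-layer idea, the following sketch (our own, with a pretrained ResNet and a random stimulus as placeholder assumptions) reads out the penultimate-layer activations that the text likens to qualia-like states built from low-level features:

# Sketch: exposing the high-level representation formed just before the
# classification head; model and stimulus are placeholder assumptions.
import torch
import torchvision.models as models

model = models.resnet18(weights="IMAGENET1K_V1").eval()
model.fc = torch.nn.Identity()               # drop the head; return penultimate features

stimulus = torch.rand(1, 3, 224, 224)        # placeholder "stimulus"
with torch.no_grad():
    quale_like = model(stimulus)             # 512-dim state derived from lower-level neurons

print(quale_like.shape)                      # torch.Size([1, 512])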
4. Purpose of Consciousness
While many scientific theories, such as biocentrism [161] or some interpretations of quantum
physics [162, 163], see consciousness as a focal element of their models, the purpose of being able
to experience remains elusive. In fact, even measurement or detection of consciousness remains
an open research area [1]. In this section, we review and elaborate on some explanations for what
consciousness does. Many explanations have been suggested, including but certainly not limited
to [20]: error monitoring [164], an inner eye [165], saving us from danger [166], later error
detection [167], pramodular response [168] and to seem mysterious [169].
We can start by considering the evolutionary origins of qualia, from the very first, probably
accidental, state of matter that experienced something, all the way to the general illusion experiences
of modern humans. The argument is that consciousness evolved because accurately representing
reality is less important than an agent’s fitness for survival, and agents who saw the world of illusions
had higher fitness, as they ignored irrelevant and complicated minutiae of the world [170]. It seems
that processing the real world is computationally expensive, and simplifying illusions allow
improvements in the efficiency of decision-making, leading to higher survival rates. For example, we
can treat feelings as heuristic shortcuts to calculating precise utility. Additionally, as we argue in
this paper, experiencing something allows one to obtain knowledge about that experience, which
is not available to someone not experiencing the same qualia. Therefore, a conscious agent would
be able to act in ways a philosophical zombie would not, which is particularly
important in a world full of illusions such as ours.
Next, we can look at the value of consciousness in knowledge acquisition and learning. A major
obstacle to the successful development of AI systems has been what is called the Symbol
Grounding problem [171]. Trying to explain to a computer one symbol in terms of others does not
lead to understanding. For example, saying that a mother is a female parent is no different than
saying that x = 7y, and y = 18k, and so on. This is similar to a person looking up an unfamiliar
word in a foreign language dictionary and essentially ending up with circular definitions of
unfamiliar terms (a toy illustration of this circularity follows below). We think that qualia are used (at least in humans) to break out of this vicious
cycle and to permit definitions of words/symbols in terms of qualia. In “How Helen Keller used
syntactic semantics to escape from a Chinese Room”, Rapaport [172] gives a great example of a
human attempting to solve the grounding problem and argues that syntactic semantics are
sufficient to resolve it. We argue that it was experiencing the feeling of running water on her hands
that permitted Helen Keller to map the sign language sign for water to the relevant qualia and
to begin to understand.
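The circularity is easy to make concrete. The toy sketch below is our own illustration (not from [171] or [172]): every word is defined only in terms of other words, so lookups never bottom out until some symbols are grounded directly in experience, the role the paper assigns to qualia.

# Toy illustration of circular, ungrounded symbolic definitions; all entries
# and the grounding step are illustrative assumptions.
definitions = {
    "mother": ["female", "parent"],
    "female": ["person", "able", "to", "bear", "offspring"],
    "parent": ["person", "with", "offspring"],
    "person": ["human"],
    "human": ["person"],          # circular: symbols only point at other symbols
}

grounded = set()                  # symbols tied to direct experience (qualia)

def is_grounded(word, seen=frozenset()):
    """A word is understood only if its definition eventually reaches grounded symbols."""
    if word in grounded:
        return True
    if word in seen or word not in definitions:
        return False              # circular or undefined: no understanding
    return all(is_grounded(w, seen | {word}) for w in definitions[word])

print(is_grounded("mother"))      # False: every path is circular or undefined

# Ground a few symbols directly in experience, as the paper suggests qualia do:
grounded.update({"person", "able", "to", "bear", "offspring", "with"})
print(is_grounded("mother"))      # True: definitions now bottom out in experienced symbols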
Similarly, we see much of the language acquisition process as mapping of novel qualia to words.
By extension, this mapping permits us to explain understanding and limits to transfer of tacit
knowledge. Illusion disambiguation can play a part in what gives us an illusion of free will and the
stream of consciousness may be nothing more than sequential illusion processing. Finally, it would
not be surprising if some implicit real-world inputs produced the experience of qualia behind some
observed precognition results [173]. In the future, we suspect a major application of consciousness
will be in the field of Qualia Computing as described in the so-named section of this paper.
4.1 Qualia Engineering
While a grand purpose of life remains elusive and is unlikely to be discovered, it is easy to see that
many people attempt to live their lives in a way which allows them to maximally explore and
experience novel stimuli: foods, smells, etc. Experiencing new qualia by transferring our
consciousness between different substrates, what Loosemore refers to as Qualia Surfing [174],
may represent the next level in novelty seeking. As our understanding and ability to detect and
elicit particular qualia in specific agents improves, qualia engineering will become an important
component of the entertainment industry. Research in other fields such as: intellectology [175],
(and in particular artimetrics [176, 177], and designometry [178]), consciousness [179] and
artificial intelligence [180] will also be impacted.
People designing optical illusions, movie directors and book authors are some of the people in the
business of making us experience, but they do so as an art form. Qualia engineers and qualia
designers will attempt to formally and scientifically answer such questions as: How to detect and
measure qualia? What is the simplest possible qualia? How to build complex qualia from simple
ones? What makes some qualia more pleasant? Can minds be constructed with maximally pleasing
qualia in a systematic and automated way [175]? Can this lead to abolition of suffering [181]? Do
limits exist to complexity of qualia, or can the whole universe be treated as single input? Can we
create new feelings and emotions? How would integration of novel sensors expand our qualia
repertoire? What qualia are available to other agents but not to humans? Can qualia be “translated”
to other mediums? What types of verifiers and observers experience particular types of qualia?
How to generate novel qualia in an algorithmic/systematic way? Is it ethical to create unpleasant
qualia? Can agents learn to swap qualia between different stimuli (pleasure for pain)? How to
optimally represent, store and communicate qualia, including across different substrates [55]? How
to design an agent, which experiences particular qualia on the given input? How much influence
does an agent have over its own illusions? How much plasticity does the human brain have for
switching stimuli streams and learning to experience data from new sensors? How similar are
qualia among similarly designed but not identical agents? What, if any, is the connection between
meditation and qualia? Can computers meditate? How do random inputs such as café chatter [182]
stimulate production of novel qualia? How can qualia be classified into different types, for example
feelings? Which computations produce particular qualia?
5. Consciousness and Artificial Intelligence
Traditionally, AI researchers ignored consciousness as non-scientific and concentrated on making
their machines capable and beneficial. One famous exception is Hofstadter who observed and
analyzed deep connections between illusions and artificial intelligence [183]. If an option to make
conscious machines presents itself to AI researchers, it would raise a number of important
questions, which should be addressed early on. It seems that making machines conscious may
make them more relatable and human-like and so produce better consumer products, domestic and
sex robots, and more genuine conversation partners. Of course, a system simply simulating such
behaviors without actually experiencing anything could be just as good. If we define physical pain
as an unpleasant sensory illusion and emotional pain as an illusion of an unpleasant feeling, pain
and pleasure become accessible controls to the experimenter. The ability to provide reward and
punishment for software agents capable of experiencing pleasure and pain may assist in the
training of such agents [184].
The potential impact of making AI conscious includes a change in the status of AI from mere useful
software to a sentient agent with corresponding rights and ethical treatment standards. This is likely
to lead to civil rights for AI and disenfranchisement of human voters [185, 186]. In general, the ethics
of designing sentient beings is not well established, and it is cruel to create sentient agents for
certain uses, such as menial jobs, servitude or designed obsolescence. It is an experiment which
would be unlikely to be approved by any research ethics board [187]. Such agents may be subject
to abuse as they would be capable of experiencing pain and torture, potentially increasing the
overall amount of suffering in the universe [188]. If, in the process of modeling or simulating
conscious beings, an experiment negatively affects the modeled entities, this can be seen as mind crime
[189].
With regard to AI safety [190-195], since it would be possible for agents to experience pain and
pleasure, this would open a number of new pathways for dangerous behavior. Consciousness may make
AIs more volatile or unpredictable, impacting the overall safety and stability of such systems [119].
The possibility of ransomware with conscious artificial hostages comes to mind, as well as blackmail
and threats against AI systems. Better understanding of consciousness by AI itself may also allow
superintelligent machines to create new types of attacks on people. Certain illusions can be seen
as an equivalent of adversarial inputs for human agents, see Figure 2. Subliminal stimuli [196]
which confuse people are well known, and some stimuli are even capable of inducing harmful
internal states such as epileptic seizures [197, 198] or incapacitation [199]. With the latest research
showing that even a single-pixel modification is sufficient to fool neural networks [200] (a sketch of
such an attack appears below), the full scope of the attack surface against human agents remains an unknown unknown.
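To illustrate how small the perturbation in [200] is, here is a minimal random-search sketch in the spirit of the one-pixel attack (the original work uses differential evolution); the pretrained model and the random placeholder image are our own assumptions.

# Sketch: searching for a single pixel whose change flips a classifier's output.
# Model and image are placeholders; [200] uses differential evolution instead
# of the naive random search shown here.
import torch
import torchvision.models as models

model = models.resnet18(weights="IMAGENET1K_V1").eval()
image = torch.rand(1, 3, 224, 224)                     # placeholder input image
torch.manual_seed(0)

with torch.no_grad():
    original_class = model(image).argmax().item()
    for _ in range(500):                               # naive random search
        candidate = image.clone()
        y, x = torch.randint(0, 224, (2,))
        candidate[0, :, y, x] = torch.rand(3)          # alter a single pixel
        if model(candidate).argmax().item() != original_class:
            print(f"prediction flipped by changing pixel ({y.item()}, {x.item()})")
            break
    else:
        print("no single-pixel flip found in this random search")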
Manual attempts to attack a human cognitive model are well known [201-203]. Future research
combining evolutionary algorithms or adversarial neural networks with direct feedback from
detailed scans of human brains is likely to produce some novel examples of adversarial human
inputs, leading to new types of informational hazards [204]. Taken to the extreme, whole
adversarial worlds may be created to confuse us [55]. Nature provides many examples of
adversarial inputs in plants and animals, known as mimicry [205]. Human adversarial inputs
designed by superintelligent machines would represent a new type of AI risk, which has not been
previously analyzed and with no natural or synthetic safety mechanisms available to defend us
against such an attack.
One very dangerous outcome from the integration of consciousness into AI is the possibility that a
superintelligent system will become a negative utilitarian and an anti-natalist [188] and, in an
attempt to rid the world of suffering, will not only kill all life forms but will also destroy all AIs
and will finally self-destruct, as it is itself conscious and so subject to the same analysis and
conclusions. This would result in a universe free of suffering but also free of any consciousness.
Consequently, it is important to establish guidelines and review boards [206] for any research
which is geared at producing conscious agents [207]. AI itself should be designed to be corrigible
[208] and to report any emergent un-programmed capabilities, such as qualia, to the designers.
Figure 2: Left - Cheetah in the noise is seen by some Deep Neural Networks (based on [134]);
Right - Spaceship in the stereogram is seen by some people.
6. Conclusions and Conjectures
In this paper, we described a reductionist theory for the appearance of qualia in agents based on a fully
materialistic explanation for subjective states of mind, an attempt at a solution to the Hard Problem
of consciousness. We defined a test for detecting experiences and showed how computers can be
made conscious in terms of having qualia. Finally, we looked at the implications of being able to detect
and generate qualia in artificial intelligence. Should our test indicate the presence of complex qualia
in software or animals, it would be appropriate to grant certain protections and rights to such agents.
The experimental results we surveyed in this paper have been predicted by others as evidence of
consciousness in machines; for example, Dehaene et al. state: “We contend that a machine endowed
with [global information availability and self-monitoring] … may even experience the same
perceptual illusions as humans” [209].
Subjective experiences called qualia are a side effect of computing, unintentionally produced while
information is being processed, similar to the generation of heat [210], noise [211], or electromagnetic
radiation [212]. Others have expressed similar intuitions: “The
cognitive algorithms we use are the way the world feels.” ([213] p. 889) or “consciousness is the
way information feels when being processed.” [214] or “empirical evidence is compatible with the
possibility that consciousness arises from nothing more than specific computations” [209]. Qualia
arise as a result of processing of stimuli, caused by the agglomeration of properties, unique peculiarities
[215] and errors in an agent’s architecture, software, memories, learned algorithms, sensors, inputs,
environment and other factors comprising the extended cognition [216] of an agent [217]. In fact,
Zeman [34] points out the difficulty of telling whether a given system experiences an error or an illusion.
If every computation produces qualia as a side effect, computational functionalism [218] trivially
reduces to panpsychism [219].
As qualia are fully dependent on the makeup of a particular agent, it is not surprising that they capture
what it is like to be that agent. Agents which share certain similarities in their makeup (like most
people) may share certain subsets of qualia, but different agents will experience different qualia
on the same inputs. An illusion is a discrepancy between an agent’s awareness and some stimulus
[32]. In contrast, consciousness is an ability to experience a sustained self-referential multimodal
illusion based on an ability to perceive qualia. Every experience is an illusion; what we call optical
illusions are meta-illusions, and there are also meta-meta-illusions and self-referential illusions. It is the
illusion of “I”, or self, which produces self-awareness, with “I” as an implied agent experiencing all
the illusions: an illusion of an illusion navigator.
It is interesting to view the process of learning in the context of this paper, with illusions as a
primary pattern of interest for all agents. We can say that babies and other untrained neural
networks are learning to experience illusions, particularly in the context of their trainers’
culture/common sense [30]. Consequently, a successful agent will learn to map certain inputs to
certain illusions while sharing that mapping with other similarly constructed observers. We can
say that the common space of illusions/culture as seen by such agents becomes their “real world”
or meme [220] sphere. Some supporting evidence for this conclusion comes from observing that the
amount of sleep in children is proportionate to the average amount of learning they perform at
that age. Younger babies need the most sleep, perhaps because they can learn more quickly by
practicing, in the safe world of dreams (a type of illusion), the skill of experiencing, which they then transfer
to the real world. Failure to learn to perceive illusions and experience qualia may result in a number
of mental disorders.
There seems to be a fundamental connection between intelligence, consciousness and liveliness
beyond the fact that all three are notoriously difficult to define. We believe that the ability to
experience is directly proportional to one’s intelligence and that such intelligent and conscious
agents are necessarily alive to the same degree. As all three come in degrees, it is likely that they
have gradually evolved together. Modern narrow AIs are very low in general intelligence and so
are also very low in their ability to experience or their perceived aliveness. Higher primates have
significant (but not complete) general intelligence and so can experience complex stimuli and are
very much alive. Future machines will be superintelligent, superconscious and by extension alive!
The fundamental “particles” from which our personal world is constructed are illusions, which we
experience and, in the process, create the universe as we know it. Experiencing a pattern which is
not really there (let’s call such an illusory element an “illusination”), like the white spaces that appear in
an illusion [221], is just like experiencing self-awareness: where is it stored? Since each conscious
agent perceives a unique personal universe, their agglomeration gives rise to the multiverse. We
may be living in a simulation, but from our point of view we are not living in a virtual reality [222];
we are living in an illusion of reality, and maybe we can learn to decide which reality to create.
“Reality” provides us with an infinite set of inputs from which every conceivable universe can
be experienced, and in that sense, every universe exists. We can conclude that the universe is in the
mind of the agent experiencing it - the ultimate qualia; even if we are just brains in a vat, to us an
experience is worth a thousand pictures. It is not a delusion that we are just experiencers of illusions.
The brain is an illusion-experiencing machine, not a pattern-recognition machine. As we age, our
wetware changes, and so we become different agents and experience different illusions; our identity
changes, but in a continuous manner. To paraphrase Descartes: I experience, therefore I am
conscious!
Acknowledgements
The author is grateful to Elon Musk and the Future of Life Institute and to Jaan Tallinn and
Effective Altruism Ventures for partially funding his work on AI Safety. The author is thankful to
Yana Feygin for proofreading a draft of this paper and to Ian Goodfellow for helpful
recommendations of relevant literature.
References
1. Raoult, A. and R. Yampolskiy, Reviewing Tests for Machine Consciousness. 2015: Available
at: https://www.researchgate.net/publication/284859013_DRAFT_Reviewing_Tests_for_Machine_Consciousness.
2. Muehlhauser, L., Report on Consciousness and Moral Patienthood, in Open Philanthropy
Project. 2017: Available at: https://www.openphilanthropy.org/2017-report-consciousness-and-moral-patienthood.
3. Chalmers, D.J., Facing up to the problem of consciousness. Journal of consciousness studies,
1995. 2(3): p. 200-219.
4. Mormann, F. and C. Koch, Neural correlates of consciousness. Scholarpedia, 2007. 2(12):
p. 1740.
5. Nagel, T., What is it like to be a bat? The philosophical review, 1974. 83(4): p. 435-450.
6. Trevarthen, C., What is it like to be a person who knows nothing? Defining the active
intersubjective mind of a newborn human being. Infant and Child Development, 2011. 20(1):
p. 119-135.
7. Preuss, T.M., What is it like to be a human. The cognitive neurosciences, 2004. 3: p. 5-22.
8. Laureys, S. and M. Boly, What is it like to be vegetative or minimally conscious? Current
opinion in neurology, 2007. 20(6): p. 609-613.
9. Burn, C.C., What is it like to be a rat? Rat sensory perception and its implications for
experimental design and rat welfare. Applied Animal Behaviour Science, 2008. 112(1): p.
1-32.
10. Özkural, E. What is it like to be a brain simulation? in International Conference on Artificial
General Intelligence. 2012. Springer.
11. O'Regan, J.K., Why red doesn't sound like a bell: Understanding the feel of consciousness.
2011: Oxford University Press.
12. Jackson, F., What Mary didn't know. The Journal of Philosophy, 1986. 83(5): p. 291-295.
13. Kendrick, M., Tasting the light: Device lets the blind “see” with their tongues. Scientific
American, 2009. 13.
14. Block, N., On a confusion about a function of consciousness. Behavioral and brain sciences,
1995. 18(2): p. 227-247.
15. Chalmers, D.J., Self-Ascription without qualia: A case study. Behavioral and Brain Sciences,
1993. 16(1): p. 35-36.
16. Dennett, D.C., From bacteria to Bach and back: The evolution of minds. 2017: WW Norton
& Company.
17. Tye, M., Phenomenal consciousness: The explanatory gap as a cognitive illusion. Mind,
1999. 108(432): p. 705-725.
18. Frankish, K., Illusionism as a theory of consciousness. Journal of Consciousness Studies,
2016. 23(11-12): p. 11-39.
19. Tartaglia, J., What is at Stake in Illusionism? Journal of Consciousness Studies, 2016. 23(11-
12): p. 236-255.
20. Blackmore, S., Delusions of consciousness. Journal of Consciousness Studies, 2016. 23(11-
12): p. 52-64.
21. Balog, K., Illusionism's Discontent. Journal of Consciousness Studies, 2016. 23(11-12): p.
40-51.
22. Noë, A., Is the visual world a grand illusion? Journal of consciousness studies, 2002. 9(5-
6): p. 1-12.
23. Coren, S. and J.S. Girgus, Seeing is deceiving: The psychology of visual illusions. 1978:
JSTOR.
24. Gregory, R.L., Knowledge in perception and illusion. Philosophical Transactions of the
Royal Society of London B: Biological Sciences, 1997. 352(1358): p. 1121-1127.
25. Changizi, M.A., et al., Perceiving the present and a systematization of illusions. Cognitive
science, 2008. 32(3): p. 459-503.
26. Deutsch, D., An auditory illusion. The Journal of the Acoustical Society of America, 1974.
55(S1): p. S18-S19.
27. Nakatani, M., R.D. Howe, and S. Tachi. The fishbone tactile illusion. in Proceedings of
eurohaptics. 2006.
28. Todrank, J. and L.M. Bartoshuk, A taste illusion: taste sensation localized by touch.
Physiology & behavior, 1991. 50(5): p. 1027-1031.
29. Herz, R.S. and J. von Clef, The influence of verbal labeling on the perception of odors:
evidence for olfactory illusions? Perception, 2001. 30(3): p. 381-391.
30. Segall, M.H., D.T. Campbell, and M.J. Herskovits, Cultural differences in the perception of
geometric illusions. Science, 1963. 139(3556): p. 769-771.
31. Kahneman, D. and A. Tversky, On the reality of cognitive illusions. Psychological Review,
1996. 103(3): p. 582-591.
32. Reynolds, R.I., A psychological definition of illusion. Philosophical Psychology, 1988. 1(2):
p. 217-223.
33. Bertamini, M., Programming Visual Illusions for Everyone. 2017: Springer.
34. Zeman, A., Computational modelling of visual illusions. 2015.
35. García-Garibay, O.B. and V. de Lafuente, The Müller-Lyer illusion as seen by an artificial
neural network. Frontiers in computational neuroscience, 2015. 9.
36. Penrose, L.S. and R. Penrose, Impossible objects: A special type of visual illusion. British
Journal of Psychology, 1958. 49(1): p. 31-33.
37. Tong, F. and S.A. Engel, Interocular rivalry revealed in the human cortical blind-spot
representation. Nature, 2001. 411(6834): p. 195-199.
38. Misra, B. and E.G. Sudarshan, The Zeno’s paradox in quantum theory. Journal of
Mathematical Physics, 1977. 18(4): p. 756-763.
39. Grelling, K., The logical paradoxes. Mind, 1936. 45(180): p. 481-486.
40. Greenleaf, A., et al., Schrödinger’s Hat: Electromagnetic, acoustic and quantum amplifiers
via transformation optics. arXiv preprint arXiv:1107.4685, 2011.
41. Luckiesh, M., Visual illusions: Their causes, characteristics and applications. 1922: D. Van
Nostrand Company.
42. Escher, M.C., MC Escher: the graphic work. 2000: Taschen.
43. Gold, R., This is not a pipe. Communications of the ACM, 1993. 36(7): p. 72.
44. Lord, E., Experimentally induced variations in Rorschach performance. Psychological
Monographs: General and Applied, 1950. 64(10): p. i.
45. Mennell, S., All manners of food: eating and taste in England and France from the Middle
Ages to the present. 1996: University of Illinois Press.
46. Velan, H. and R. Frost, Cambridge University versus Hebrew University: The impact of letter
transposition on reading English and Hebrew. Psychonomic Bulletin & Review, 2007.
14(5): p. 913-918.
47. Kelley, L.A. and J.A. Endler, Illusions promote mating success in great bowerbirds. Science,
2012. 335(6066): p. 335-338.
48. Koffka, K., Principles of Gestalt psychology. Vol. 44. 2013: Routledge.
49. Tulving, E. and D.L. Schacter, Priming and human memory systems. Science, 1990.
247(4940): p. 301-306.
50. Becker, S. and G.E. Hinton, Self-organizing neural network that discovers surfaces in
random-dot stereograms. Nature, 1992. 355(6356): p. 161-163.
51. Ring, M. and L. Orseau. Delusion, survival, and intelligent agents. in International
Conference on Artificial General Intelligence. 2011. Springer.
52. Eagleman, D.M., Human time perception and its illusions. Current opinion in neurobiology,
2008. 18(2): p. 131-136.
53. Liebe, C.C., Pattern recognition of star constellations for spacecraft applications. IEEE
Aerospace and Electronic Systems Magazine, 1993. 8(1): p. 31-39.
54. Deręgowski, J.B., Illusions within an Illusion. Perception, 2015. 44(12): p. 1416-1421.
55. Bostrom, N., Are we living in a computer simulation? The Philosophical Quarterly, 2003.
53(211): p. 243-255.
56. Bancaud, J., et al., Anatomical origin of déjà vu and vivid ‘memories’ in human temporal
lobe epilepsy. Brain, 1994. 117(1): p. 71-90.
57. Wallach, H. and J.H. Kravitz, The measurement of the constancy of visual direction and of
its adaptation. Psychonomic Science, 1965. 2(1-12): p. 217-218.
58. Fineman, M., The nature of visual illusion. 2012: Courier Corporation.
59. Rheingold, H., Virtual reality: exploring the brave new technologies. 1991: Simon &
Schuster Adult Publishing Group.
60. Yampolskiy, R.V., Utility function security in artificially intelligent agents. Journal of
Experimental & Theoretical Artificial Intelligence, 2014. 26(3): p. 373-389.
61. Plato and G.M.A. Grube, Plato's republic. 1974: JSTOR.
62. Gillespie, A., Descartes’ demon: A dialogical analysis of Meditations on First Philosophy.
Theory & psychology, 2006. 16(6): p. 761-781.
63. Sun, J.T., Psychology in primitive Buddhism. The Psychoanalytic Review (1913-1957),
1924. 11: p. 39.
64. Barrett, D., Just how lucid are lucid dreams? Dreaming, 1992. 2(4): p. 221.
65. Zadra, A. and D. Donderi, Nightmares and bad dreams: their prevalence and relationship to
well-being. Journal of abnormal psychology, 2000. 109(2): p. 273.
66. Bentall, R.P., The illusion of reality: A review and integration of psychological research on
hallucinations. Psychological bulletin, 1990. 107(1): p. 82.
67. Garety, P.A. and D.R. Hemsley, Delusions: Investigations into the psychology of delusional
reasoning. Vol. 36. 1997: Psychology Press.
68. Becker, H.S., History, culture and subjective experience: An exploration of the social bases
of drug-induced experiences. Journal of health and social behavior, 1967: p. 163-176.
69. Carlen, P., et al., Phantom limbs and related phenomena in recent traumatic amputations.
Neurology, 1978. 28(3): p. 211-211.
70. Fenwick, P., The neurophysiology of religious experiences. 1996: London: Routledge.
71. Hood, B., The self illusion: How the social brain creates identity. 2012: Oxford University
Press.
72. Dennett, D.C., Brainstorms: Philosophical essays on mind and psychology. 1981: MIT press.
73. Gigerenzer, G., How to make cognitive illusions disappear: Beyond “heuristics and biases”.
European review of social psychology, 1991. 2(1): p. 83-115.
74. Kluft, R.P., Dissociative identity disorder, in Handbook of dissociation. 1996, Springer. p.
337-366.
75. Dima, D., et al., Understanding why patients with schizophrenia do not perceive the hollow-
mask illusion using dynamic causal modelling. Neuroimage, 2009. 46(4): p. 1180-1186.
76. Keane, B.P., et al., Reduced depth inversion illusions in schizophrenia are state-specific and
occur for multiple object types and viewing conditions. Journal of Abnormal Psychology,
2013. 122(2): p. 506.
77. Cytowic, R.E., Synesthesia: A union of the senses. 2002: MIT press.
78. Coslett, H.B. and E. Saffran, Simultanagnosia: To see but not two see. Brain, 1991.
114(4): p. 1523-1545.
79. Happé, F.G., Studying weak central coherence at low levels: children with autism do not
succumb to visual illusions. A research note. Journal of Child Psychology and Psychiatry,
1996. 37(7): p. 873-877.
80. Jürgens, U.M. and D. Nikolić, Synaesthesia as an Ideasthesia - cognitive implications.
Synaesthesia and Children - Learning and Creativity, 2014.
81. Ropar, D. and P. Mitchell, Are individuals with autism and Asperger's syndrome susceptible
to visual illusions? The Journal of Child Psychology and Psychiatry and Allied Disciplines,
1999. 40(8): p. 1283-1293.
82. Fyfe, S., et al., Apophenia, theory of mind and schizotypy: perceiving meaning and
intentionality in randomness. Cortex, 2008. 44(10): p. 1316-1325.
83. Zeman, A., M. Dewar, and S. Della Sala, Lives without imagery - Congenital aphantasia.
Cortex, 2015. 73(Supplement C): p. 378-380.
84. Damasio, A.R., H. Damasio, and G.W. Van Hoesen, Prosopagnosia: Anatomic basis and
behavioral mechanisms. Neurology, 1982. 32(4): p. 331-331.
85. Liu, J., et al., Seeing Jesus in toast: neural and behavioral correlates of face pareidolia.
Cortex, 2014. 53: p. 60-77.
86. Wegner, D.M., Ironic processes of mental control. Psychological review, 1994. 101(1): p.
34.
87. Izard, C.E., The psychology of emotions. 1991: Springer Science & Business Media.
88. Harlow, H.F. and R. Stagner, Psychology of feelings and emotions: I. Theory of feelings.
Psychological Review, 1932. 39(6): p. 570.
89. Slater, M., et al., First person experience of body transfer in virtual reality. PloS one, 2010.
5(5): p. e10564.
90. Ehrsson, H.H., The experimental induction of out-of-body experiences. Science, 2007.
317(5841): p. 1048-1048.
91. Bach-y-Rita, P. and S.W. Kercel, Sensory substitution and the human-machine interface.
Trends in cognitive sciences, 2003. 7(12): p. 541-546.
92. Gray, C.H., Cyborg citizen: Politics in the posthuman age. 2000: Routledge.
93. Wells, A., The literate mind: A study of its scope and limitations. 2012: Palgrave Macmillan.
94. Post, R.H., Population differences in red and green color vision deficiency: a review, and a
query on selection relaxation. Eugenics Quarterly, 1962. 9(3): p. 131-146.
95. Tudusciuc, O. and A. Nieder, Comparison of length judgments and the Müller-Lyer illusion
in monkeys and humans. Experimental brain research, 2010. 207(3-4): p. 221-231.
96. Kelley, L.A. and J.L. Kelley, Animal visual illusion and confusion: the importance of a
perceptual perspective. Behavioral Ecology, 2013. 25(3): p. 450-463.
97. Benhar, E. and D. Samuel, Visual illusions in the baboon (Papio anubis). Learning &
Behavior, 1982. 10(1): p. 115-118.
98. Logothetis, N.K., Single units and conscious vision. Philosophical Transactions of the Royal
Society of London B: Biological Sciences, 1998. 353(1377): p. 1801-1818.
99. Lazareva, O.F., T. Shimizu, and E.A. Wasserman, How animals see the world: Comparative
behavior, biology, and evolution of vision. 2012: Oxford University Press.
100. Low, P., et al. The Cambridge declaration on consciousness. in Francis Crick Memorial
Conference, Cambridge, England. 2012.
101. Naor, M. and A. Shamir. Visual cryptography. in Workshop on the Theory and Application
of Cryptographic Techniques. 1994. Springer.
102. Yampolskiy, R.V., J.D. Rebolledo-Mendez, and M.M. Hindi, Password Protected Visual
Cryptography via Cellular Automaton Rule 30, in Transactions on Data Hiding and
Multimedia Security IX. 2014, Springer Berlin Heidelberg. p. 57-67.
103. Abboud, G., J. Marean, and R.V. Yampolskiy. Steganography and Visual Cryptography in
Computer Forensics. in Systematic Approaches to Digital Forensic Engineering (SADFE),
2010 Fifth IEEE International Workshop on. 2010. IEEE.
104. Eagleman, D.M., Visual illusions and neurobiology. Nature Reviews Neuroscience, 2001.
2(12): p. 920-926.
105. Panagiotaropoulos, T.I., et al., Neuronal discharges and gamma oscillations explicitly reflect
visual consciousness in the lateral prefrontal cortex. Neuron, 2012. 74(5): p. 924-935.
106. Ahn, L.v., et al. CAPTCHA: Using Hard AI Problems for Security. in Eurocrypt. 2003.
107. D'Souza, D., P.C. Polina, and R.V. Yampolskiy. Avatar CAPTCHA: Telling computers and
humans apart via face classification. in Electro/Information Technology (EIT), 2012 IEEE
International Conference on. 2012. IEEE.
108. Korayem, M., et al., Solving Avatar Captchas Automatically, in Advanced Machine Learning
Technologies and Applications. 2012, Springer Berlin Heidelberg. p. 102-110.
109. Korayem, M., et al. Learning visual features for the Avatar Captcha Recognition Challenge.
in Machine Learning and Applications (ICMLA), 2012 11th International Conference on.
2012. IEEE.
110. Yampolskiy, R.V., AI-Complete CAPTCHAs as Zero Knowledge Proofs of Access to an
Artificially Intelligent System. 2012.
111. McDaniel, R. and R.V. Yampolskiy, Embedded non-interactive CAPTCHA for Fischer
Random Chess, in 16th International Conference on Computer Games (CGAMES). 2011,
IEEE: Louisville, KY. p. 284-287.
112. Yampolskiy, R., Graphical CAPTCHA embedded in cards, Western New York Image
Processing Workshop (WNYIPW)-IEEE Signal Processing Society. Vol. 28. 2007:
Rochester, NY, September.
113. McDaniel, R. and R.V. Yampolskiy, Development of embedded CAPTCHA elements for bot
prevention in fischer random chess. International Journal of Computer Games Technology,
2012. 2012: p. 2.
114. Yampolskiy, R.V. and V. Govindaraju, Embedded Non-Interactive Continuous Bot
Detection. ACM Computers in Entertainment, 2007. 5(4): p. 1-11.
115. Hasson, U., et al., Vase or face? A neural correlate of shape-selective grouping processes in
the human brain. Journal of cognitive neuroscience, 2001. 13(6): p. 744-753.
116. Yampolskiy, R., Turing Test as a Defining Feature of AI-Completeness, in Artificial
Intelligence, Evolutionary Computing and Metaheuristics, X.-S. Yang, Editor. 2013,
Springer Berlin Heidelberg. p. 3-17.
117. Yampolskiy, R.V., AI-Complete, AI-Hard, or AI-Easy Classification of Problems in AI, in
The 23rd Midwest Artificial Intelligence and Cognitive Science Conference. April 21-22,
2012: Cincinnati, OH, USA.
118. Schweizer, P., Could There be a Turing Test for Qualia? Revisiting Turing and his test:
comprehensiveness, qualia, and the real world, 2012: p. 41.
119. Schneider, S. and E. Turner, Is Anyone Home? A Way to Find Out If AI Has Become Self-
Aware. Scientific American, July 19, 2017.
120. Turing, A., Computing Machinery and Intelligence. Mind, 1950. 59(236): p. 433-460.
121. Robinson, J.O., The psychology of visual illusion. 2013: Courier Corporation.
122. Yamins, D.L. and J.J. DiCarlo, Using goal-driven deep learning models to understand
sensory cortex. Nature neuroscience, 2016. 19(3): p. 356-365.
123. Zeman, A., et al., The Müller-Lyer illusion in a computational model of biological object
recognition. PLoS One, 2013. 8(2): p. e56126.
124. Zeman, A., O. Obst, and K.R. Brooks, Complex cells decrease errors for the Müller-Lyer
illusion in a model of the visual ventral stream. Frontiers in computational neuroscience,
2014. 8.
125. Ogawa, T., et al. A neural network model for realizing geometric illusions based on acute-
angled expansion. in Neural Information Processing, 1999. Proceedings. ICONIP'99. 6th
International Conference on. 1999. IEEE.
126. Corney, D. and R.B. Lotto, What are lightness illusions and why do we see them? PLoS
computational biology, 2007. 3(9): p. e180.
127. Bertulis, A. and A. Bulatov, Distortions of length perception in human vision. Biomedicine,
2001. 1(1): p. 3-23.
128. Inui, T., S. Hongo, and M. Kawato, A computational model of brightness illusion and its
implementation. Perception, 1990. 19: p. 401.
129. Chao, J., et al. Artificial neural networks which can see geometric illusions in human vision.
in Neural Networks, 1993. IJCNN'93-Nagoya. Proceedings of 1993 International Joint
Conference on. 1993. IEEE.
130. Ogawa, T., et al. Realization of geometric illusions using artificial visual model based on
acute-angled expansion among crossing lines. in Neural Networks, 1999. IJCNN'99.
International Joint Conference on. 1999. IEEE.
131. Robinson, A.E., P.S. Hammon, and V.R. de Sa, Explaining brightness illusions using spatial
filtering and local response normalization. Vision research, 2007. 47(12): p. 1631-1644.
132. Zeman, A., K.R. Brooks, and S. Ghebreab, An exponential filter model predicts lightness
illusions. Frontiers in human neuroscience, 2015. 9.
133. Shibata, K. and S. Kurizaki. Emergence of color constancy illusion through reinforcement
learning with a neural network. in IEEE International Conference on Development and
Learning and Epigenetic Robotics (ICDL). 2012. IEEE.
134. Nguyen, A., J. Yosinski, and J. Clune. Deep neural networks are easily fooled: High
confidence predictions for unrecognizable images. in Proceedings of the IEEE Conference
on Computer Vision and Pattern Recognition. 2015.
135. Kurakin, A., I. Goodfellow, and S. Bengio, Adversarial examples in the physical world.
arXiv preprint arXiv:1607.02533, 2016.
136. Szegedy, C., et al., Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199,
2013.
137. Goodfellow, I., et al. Generative adversarial nets. in Advances in neural information
processing systems. 2014.
138. Carlotto, M.J., The Martian Enigmas: A Closer Look: The Face, Pyramids and Other
Unusual Objects on Mars. 1997: North Atlantic Books.
139. Thaler, S.L., “Virtual input” phenomena within the death of a simple pattern associator.
Neural Networks, 1995. 8(1): p. 55-65.
140. Thaler, S. 4-2-4 Encoder Death. in Proceedings of the World Congress on Neural Networks.
1993.
141. Thaler, S.L., Death of a gedanken creature. Journal of Near-Death Studies, 1995. 13: p.
149-149.
142. Crick, F. and G. Mitchison, The function of dream sleep. Nature, 1983. 304(5922): p. 111-
114.
143. Hopfield, J.J., D.I. Feinstein, and R.G. Palmer, ‘Unlearning’ has a stabilizing effect in
collective memories. Nature, 1983. 304(5922): p. 158-159.
144. Hinton, G.E., D.C. Plaut, and T. Shallice, Simulating brain damage. Scientific American,
1993. 269(4): p. 76-82.
145. LeCun, Y., J.S. Denker, and S.A. Solla, Optimal brain damage, in D. Touretzky (Ed.),
Advances in Neural Information Processing Systems (NIPS 1989), Denver, CO (Vol. 2).
Morgan Kaufmann, 1990.
146. Bertulis, A. and A. Bulatov, Distortions in length perception: visual field anisotropy and
geometrical illusions. Neuroscience and behavioral physiology, 2005. 35(4): p. 423-434.
147. Howe, C.Q. and D. Purves, Range image statistics can explain the anomalous perception of
length. Proceedings of the National Academy of Sciences, 2002. 99(20): p. 13184-13188.
148. Howe, C.Q. and D. Purves, Perceiving geometry: Geometrical illusions explained by natural
scene statistics. 2005: Springer Science & Business Media.
149. Howe, C.Q. and D. Purves, The Müller-Lyer illusion explained by the statistics of image
source relationships. Proceedings of the National Academy of Sciences of the United States
of America, 2005. 102(4): p. 1234-1239.
150. Brown, H. and K.J. Friston, Free-energy and illusions: the cornsweet effect. Frontiers in
psychology, 2012. 3.
151. Marr, D., Vision: A computational investigation into the human representation and
processing of visual information. 1982: W.H. Freeman and Company, San Francisco.
152. Torrance, S., Super-intelligence and (super-) consciousness. International Journal of
Machine Consciousness, 2012. 4(02): p. 483-501.
153. Mordvintsev, A., C. Olah, and M. Tyka, Inceptionism: Going deeper into neural networks.
Google Research Blog. Retrieved June, 2015. 20: p. 14.
154. Suzuki, K., et al., The Hallucination Machine: A Deep-Dream VR platform for Studying the
Phenomenology of Visual Hallucinations. bioRxiv, 2017: p. 213751.
155. Olah, C., A. Mordvintsev, and L. Schubert, Feature Visualization. Distill, 2017. 2(11): p. e7.
156. Seckel, A., Masters of deception: Escher, Dalí & the artists of optical illusion. 2004: Sterling
Publishing Company, Inc.
157. Yampolskiy, R.V., What are the ultimate limits to computational techniques: verifier theory
and unverifiability. Physica Scripta, 2017. 92(9): p. 093001.
158. Chaitin, G.J., The berry paradox. Complexity, 1995. 1(1): p. 26-30.
159. Yampolskiy, R.V., Efficiency Theory: a Unifying Theory for Information, Computation and
Intelligence. Journal of Discrete Mathematical Sciences & Cryptography, 2013. 16(4-5): p.
259-277.
160. Potgieter, P.H., Zeno machines and hypercomputation. Theoretical Computer Science, 2006.
358(1): p. 23-33.
161. Lanza, R. and B. Berman, Biocentrism: How life and consciousness are the keys to
understanding the true nature of the universe. 2010: BenBella Books.
162. Mould, R.A., Consciousness and quantum mechanics. Foundations of physics, 1998. 28(11):
p. 1703-1718.
163. Goswami, A., Consciousness in quantum physics and the mind-body problem. The Journal
of Mind and Behavior, 1990: p. 75-96.
164. Crook, J.H., The evolution of human consciousness. 1980: Clarendon Press Oxford.
165. Humphrey, N., The inner eye. 1986: Oxford University Press on Demand.
166. Baars, B.J., In the theatre of consciousness. Global workspace theory, a rigorous scientific
theory of consciousness. Journal of Consciousness Studies, 1997. 4(4): p. 292-309.
167. Gray, J.A., Consciousness: Creeping up on the hard problem. 2004: Oxford University Press,
USA.
168. Morsella, E., The function of phenomenal states: supramodular interaction theory.
Psychological review, 2005. 112(4): p. 1000.
169. Humphrey, N., Seeing red: A study in consciousness. 2006: Harvard University Press.
170. Gefter, A. and D.D. Hoffman, The Evolutionary Argument Against Reality. Quanta
Magazine, 2016.
171. Harnad, S., The symbol grounding problem. Physica D: Nonlinear Phenomena, 1990. 42(1-
3): p. 335-346.
172. Rapaport, W.J., How Helen Keller used syntactic semantics to escape from a Chinese Room.
Minds and machines, 2006. 16(4): p. 381-436.
173. Mossbridge, J., P. Tressoldi, and J. Utts, Predictive physiological anticipation preceding
seemingly unpredictable stimuli: a meta-analysis. Frontiers in Psychology, 2012. 3.
174. Loosemore, R., Qualia Surfing, in Intelligence Unbound: The Future of Uploaded and
Machine Minds. 2014: p. 231-239.
175. Yampolskiy, R.V. The space of possible mind designs. in International Conference on
Artificial General Intelligence. 2015. Springer.
176. Yampolskiy, R.V. and M.L. Gavrilova, Artimetrics: Biometrics for Artificial Entities.
Robotics & Automation Magazine, IEEE, 2012. 19(4): p. 48-58.
177. Yampolskiy, R., et al., Experiments in Artimetrics: Avatar Face Recognition. Transactions
on Computational Science XVI, 2012: p. 77-94.
178. Yampolskiy, R.V., On the origin of synthetic life: attribution of output to a particular
algorithm. Physica Scripta, 2016. 92(1): p. 013002.
179. Yampolskiy, R.V., Artificial Consciousness: An Illusionary Solution to the Hard Problem.
Italian Journal of Cognitive Sciences, 2018. Special Issue on Cognition and Computation.
180. Yampolskiy, R.V. and J. Fox, Artificial Intelligence and the Human Mental Model, in In the
Singularity Hypothesis: a Scientific and Philosophical Assessment, A. Eden, et al., Editors.
2012, Springer.
181. Hughes, J., After happiness, cyborg virtue. Free Inquiry, 2011. 32(1): p. 1-7.
182. Mehta, R., R. Zhu, and A. Cheema, Is noise always bad? Exploring the effects of ambient
noise on creative cognition. Journal of Consumer Research, 2012. 39(4): p. 784-799.
183. Hofstadter, D.R., Gödel, Escher, Bach: An Eternal Golden Braid 1979: Basic Books.
184. Majot, A.M. and R.V. Yampolskiy. AI safety engineering through introduction of self-
reference into felicific calculus via artificial pain and pleasure. in IEEE International
Symposium on Ethics in Science, Technology and Engineering. May 23-24, 2014. Chicago,
IL: IEEE.
185. Yampolskiy, R.V. Attempts to attribute moral agency to intelligent machines are misguided.
in Proceedings of Annual Meeting of the International Association for Computing and
Philosophy, University of Maryland at College Park, MD. 2013.
186. Yampolskiy, R.V., Artificial intelligence safety engineering: Why machine ethics is a wrong
approach. Philosophy and theory of artificial intelligence, 2013: p. 389-396.
187. Braverman, I., Gene Drives, Nature, Governance: An Ethnographic Perspective, in
University at Buffalo School of Law Legal Studies Research Paper No. 2017-006. 2017:
Available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3032607.
188. Metzinger, T., Benevolent Artificial Anti-Natalism (BAAN), in EDGE. 2017: Available at:
https://www.edge.org/conversation/thomas_metzinger-benevolent-artificial-anti-natalism-
baan.
189. Bostrom, N., Superintelligence: Paths, dangers, strategies. 2014: Oxford University Press.
190. Babcock, J., J. Kramar, and R.V. Yampolskiy, Guidelines for Artificial Intelligence
Containment. arXiv preprint arXiv:1707.08476, 2017.
191. Yampolskiy, R.V. and M. Spellchecker, Artificial Intelligence Safety and Cybersecurity: a
Timeline of AI Failures. arXiv preprint arXiv:1610.07997, 2016.
192. Pistono, F. and R.V. Yampolskiy, Unethical research: How to create a malevolent artificial
intelligence. arXiv preprint arXiv:1605.02817, 2016.
193. Babcock, J., J. Kramár, and R. Yampolskiy. The AGI containment problem. in International
Conference on Artificial General Intelligence. 2016. Springer.
194. Yampolskiy, R.V. Taxonomy of Pathways to Dangerous Artificial Intelligence. in AAAI
Workshop: AI, Ethics, and Society. 2016.
195. Yampolskiy, R.V., Artificial superintelligence: a futuristic approach. 2015: CRC Press.
196. Greenwald, A.G., M.R. Klinger, and E.S. Schuh, Activation by marginally perceptible
("subliminal") stimuli: dissociation of unconscious from conscious cognition. Journal of
Experimental Psychology: General, 1995. 124(1): p. 22.
197. Walter, W.G., V. Dovey, and H. Shipton, Analysis of the electrical response of the human
cortex to photic stimulation. Nature, 1946. 158(4016): p. 540-541.
198. Harding, G.F. and P.M. Jeavons, Photosensitive epilepsy. 1994: Cambridge University Press.
199. Altmann, J., Acoustic weapons - a prospective assessment. Science & Global Security, 2001.
9(3): p. 165-234.
200. Su, J., D.V. Vargas, and S. Kouichi, One pixel attack for fooling deep neural networks. arXiv
preprint arXiv:1710.08864, 2017.
201. Bandler, R., J. Grinder, and S. Andreas, Neuro-linguistic programming™ and the
transformation of meaning. Real People, Moab, 1982.
202. Barber, T.X., Hypnosis: A scientific approach. 1969.
203. Vokey, J.R. and J.D. Read, Subliminal messages: Between the devil and the media. American
Psychologist, 1985. 40(11): p. 1231.
204. Bostrom, N., Information hazards: a typology of potential harms from knowledge. Review
of Contemporary Philosophy, 2011. 10: p. 44.
205. Wickler, W., Mimicry in plants and animals. 1968, London: Weidenfeld & Nicolson.
206. Yampolskiy, R. and J. Fox, Safety engineering for artificial general intelligence. Topoi,
2013. 32(2): p. 217-226.
207. Yampolskiy, R.V., Future Jobs - The Universe Designer, in Circus Street. 2017: Available
at: https://blog.circusstreet.com/future-jobs-the-universe-designer/.
208. Soares, N., et al., Corrigibility, in Workshops at the Twenty-Ninth AAAI Conference on
Artificial Intelligence. January 25-30, 2015: Austin, Texas, USA.
209. Dehaene, S., H. Lau, and S. Kouider, What is consciousness, and could machines have it?
Science, 2017. 358(6362): p. 486-492.
210. Landauer, R., Irreversibility and heat generation in the computing process. IBM journal of
research and development, 1961. 5(3): p. 183-191.
211. Genkin, D., A. Shamir, and E. Tromer. RSA key extraction via low-bandwidth acoustic
cryptanalysis. in International Cryptology Conference. 2014. Springer.
212. De Mulder, E., et al. Differential electromagnetic attack on an FPGA implementation of
elliptic curve cryptosystems. in Automation Congress, 2006. WAC'06. World. 2006. IEEE.
213. Yudkowsky, E., Rationality: From AI to Zombies. Berkeley, MIRI, 2015.
214. Hut, P., M. Alford, and M. Tegmark, On math, matter and mind. Foundations of Physics,
2006. 36(6): p. 765-794.
215. Schwarting, M., T. Burton, and R. Yampolskiy. On the Obfuscation of Image Sensor
Fingerprints. in Information and Computer Technology (GOCICT), 2015 Annual Global
Online Conference on. 2015. IEEE.
216. Rupert, R.D., Challenges to the hypothesis of extended cognition. The Journal of philosophy,
2004. 101(8): p. 389-428.
217. Clark, A. and D. Chalmers, The extended mind. Analysis, 1998. 58(1): p. 7-19.
223-231.
219. Chalmers, D.J., Does a rock implement every finite-state automaton? Synthese, 1996.
108(3): p. 309-333.
220. Dawkins, R., The Selfish Gene. 1976, New York City: Oxford University Press.
221. Ninio, J. and K.A. Stevens, Variations on the Hermann grid: an extinction illusion.
Perception, 2000. 29(10): p. 1209-1217.
222. Chalmers, D.J., The virtual and the real. Disputatio, 2016.