Steve Mann

University of Toronto | U of T · Department of Electrical and Computer Engineering

PhD

About

169 Publications
33,829 Reads
5,093 Citations

Publications (169)
Chapter
The “vironment” is the liminal space and boundary between the environment (our surroundings) and the invironment (ourselves). Examples of vironments include clothing, “wearables” (wearable computing technologies), and veyances (conveyances versus deconveyances), such as wheelchairs, rollerblades, bicycles, e-bikes, cars, paddleboards, and boats. Ma...
Conference Paper
Full-text available
WaterHCI (Water-Human-Computer Interaction) is a field of study and practice focusing on the design and creation of interactive devices, systems, and experiences at the intersection of water, humans, and technology. It is a relatively new field that originated in Ontario, Canada, in the 1960s, and was further developed at University of Toronto thro...
Article
Full-text available
WaterHCI (Water-Human-Computer Interaction) is a field of study and practice focusing on the design and creation of interactive devices, systems, and experiences at the intersection of water, humans, and technology. It is a relatively new field that originated in Ontario, Canada, in the 1960s, and was further developed at University of Toronto thro...
Preprint
Full-text available
Work presented at the international conference on Cyborgs in Ethics, Law, and Art, Dec. 14-15, on the idea and concept of negative oppression, i.e. can we build technologies that negate oppression, asking the fundamental question "Can humans being machines make machines be human", i.e. can we "tame the monster" of "bigness" with a piece of itself?
Preprint
The concept of "cyborg" has been in existence for more than a million years. Vessels were the first cyborg prosthesis, long before the invention of clothing, or even the existence of homo sapiens. Fundamental to the essence of cyborgs is freedom, freedom to explore, and to cross borders of land, ocean, skin, clothes, and body. This thinking leads t...
Conference Paper
Full-text available
Water-Human-Computer Interface (WaterHCI), or, more generally, Fluidic-User-Interface (i.e. including other fluids) is a relatively new concept and field of inquiry that originated in Canada, in the 1960s and 1970s, and was further developed at University of Toronto, 1998 to present. We provide a taxonomy of the various kinds of water-human interac...
Preprint
Full-text available
Human memory prioritizes the storage and recall of information that is emotionally-arousing and/or important in a process known as value-directed memory. When experiencing a stream of information (e.g. conversation, book, lecture, etc.), the individual makes conscious and subconscious value assessments of the incoming information and uses this as a...
Preprint
Full-text available
"Vironment" is a series of art pieces, social commentary, technology, etc., based on wearable health technologies of social-distancing, culminating in a social-distancing device that takes the familiar world of security and surveillance technologies that surround us and re-situates it on the body of the wearer (technologies that become part of us)....
Preprint
Full-text available
The vibrations generated from scratching and tapping on surfaces can be highly expressive and recognizable, and have therefore been proposed as a method of natural user interface (NUI). Previous systems require custom sensor hardware such as contact microphones and have struggled with gesture classification accuracy. We propose a deep learning appr...
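As a rough illustration of the kind of classifier such an approach might use (a minimal sketch, not the paper's model; the network shape, clip length, and class count are all assumptions), a small 1-D convolutional network can map a raw vibration waveform from an ordinary microphone to a gesture label:

```python
# Hypothetical sketch: a tiny 1-D CNN that classifies scratch/tap gestures
# from fixed-length vibration clips. Architecture and sizes are made up.
import torch
import torch.nn as nn

class VibrationGestureNet(nn.Module):
    def __init__(self, n_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=64, stride=4), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=16, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):          # x: (batch, 1, samples)
        h = self.features(x).squeeze(-1)
        return self.classifier(h)  # logits over gesture classes

# Example: classify one 0.5 s clip sampled at 16 kHz.
model = VibrationGestureNet()
clip = torch.randn(1, 1, 8000)
print(model(clip).argmax(dim=-1))
```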
Conference Paper
Full-text available
Vehicles such as cars, mobility scooters, wheelchairs, and the like are becoming partially or fully automated (i.e. equipped with driver-assist technologies or completely self-driving technologies). These technologies rely heavily on sensors such as vision (cameras), radar, sonar, etc. In this paper, we propose the use of autonomous craft (e.g. `...
Chapter
Using a wearable electroencephalogram (EEG) device, this paper introduces a novel method of quantifying and understanding the visual acuity of the human eye with the steady-state visually evoked potential (SSVEP) technique. This method gives users easy access to self-track and to monitor their eye health. The study focuses on how varying the SSVEP...
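To make the SSVEP idea concrete (an illustrative sketch under assumed parameters, not the paper's pipeline), the response strength at a flicker frequency can be estimated as spectral power at that frequency relative to neighbouring bins:

```python
# Illustration: estimate SSVEP response strength from an EEG segment as the
# ratio of power at the stimulus frequency to power in nearby frequency bins.
# Sampling rate, stimulus frequency, and bandwidth below are assumptions.
import numpy as np

def ssvep_snr(eeg: np.ndarray, fs: float, stim_hz: float, bw: float = 1.0) -> float:
    """Power at the stimulus frequency divided by mean power of nearby bins."""
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    power = np.abs(np.fft.rfft(eeg * np.hanning(len(eeg)))) ** 2
    target = power[np.abs(freqs - stim_hz) <= bw / 2].mean()
    neighbours = power[(np.abs(freqs - stim_hz) > bw / 2)
                       & (np.abs(freqs - stim_hz) <= 3 * bw)].mean()
    return target / neighbours

# Example: 4 s of synthetic EEG with a weak 12 Hz SSVEP component.
fs, stim = 256.0, 12.0
t = np.arange(0, 4, 1 / fs)
eeg = 0.5 * np.sin(2 * np.pi * stim * t) + np.random.randn(t.size)
print(ssvep_snr(eeg, fs, stim))
```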
Conference Paper
We provide a simple way to make sensors (including wearable sensors) tangible. The result is a photographic depiction of sensing that is a form of photographic art, as well as scientific photography, allowing humans to easily see and understand sensors. Since photography is a preferable form of evidence, making sensors tangible may also have forens...
Conference Paper
Ayinography is the use of the eye itself as a camera. By recording brainwaves, we read the "mind's eye". Wearable ayinographic technology represents a new opportunity to solve an old problem: the lack of integrity inherent in the world around us. We live in a world of half-truths where incomplete evidence is collected about almost every facet of o...
Article
Full-text available
Existing physical fitness systems are often based on kinesiology (physical kinematics), which considers distance and its derivatives, which form an ordered list: distance, velocity, acceleration, jerk, jounce, etc. In this paper, we examine the efficacy of using integral kinematics to evaluate performance of exercises that combine strength and dexte...
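A minimal sketch of the idea (my own toy example, not the paper's method): alongside the usual derivative chain (velocity, acceleration, ...), integral kinematics also tracks time-integrals of position such as absement (metre-seconds), which is small only when a position error stays near zero for the whole interval:

```python
# Toy example: derivative and integral kinematic quantities from a sampled
# position signal. Sampling rate and motion are assumptions for illustration.
import numpy as np

fs = 100.0                                     # samples per second (assumed)
t = np.arange(0.0, 5.0, 1.0 / fs)
position = 0.1 * np.sin(2 * np.pi * 0.5 * t)   # e.g. a hand held near a target

velocity = np.gradient(position, 1.0 / fs)      # 1st derivative of position
acceleration = np.gradient(velocity, 1.0 / fs)  # 2nd derivative
absement = np.cumsum(position) / fs             # 1st time-integral of position

# For a "hold it steady" strength/dexterity task, a small final absement means
# the position error stayed near zero over the whole interval.
print(f"final absement: {absement[-1]:.4f} m*s")
```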
Article
Full-text available
The contributions of this paper are: (1) a taxonomy of the "Realities" (Virtual, Augmented, Mixed, Mediated, etc.), and (2) some new kinds of "reality" that come from nature itself, i.e. that expand our notion beyond synthetic realities to include also phenomenological realities. VR (Virtual Reality) replaces the real world with a simulated experie...
Chapter
We live our lives surrounded by sensors. Entire cities are being built with an image sensor in every streetlight. Automatic doors, automatic handwash faucets, and automatic flush toilets that once used “single-pixel” infrared sensors are now using sensor arrays with 128 or 1024 pixels, along with more sophisticated structured infrared illumination...
Data
The ISTAS'13 IEEE International Symposium on Technology and Society "Smart World: Peoples As Sensors" event drew hundreds of delegates from around the world to a three day event at the University of Toronto on June 27–29, 2013. Engineers, academics, science fiction writers, educators, students, journalists and seemingly most of the Canadian techn...
Conference Paper
Full-text available
Augmented reality and wearable computing development has skyrocketed in the consumer product domain in the past few decades, while the open source domain remained rather neglected. We believe that this is due to existing development platforms being expensive and closed-source, and due to the lack of a strong open source augmented reality community....
Article
Full-text available
This article introduces an important concept: Transparency by way of Humanistic Intelligence as a human right, and in particular, Big/Little Data and Sur/Sous Veillance, where “Little Data” is to sousveillance (undersight) as “Big Data” is to surveillance (oversight). Veillance (Sur- and Sous-veillance) is a core concept not just in human–human int...
Article
Wearable devices with independent computing and networking capabilities change the proximity of people and visual information to self-presentation and self-perception. This article examines the disruptive effect that wearable technologies like the Digital Eye Glass present in documenting and representing the self in a surveillant world. We look at...
Article
Take a look at your grandparents' radio (Figure 1). What was most remarkable about it was what was "behind the scenes." Take a look at the back. It was probably easily removable, or, more often than not, maybe it didn't even have a back. It was completely open (Figure 2).
Article
Steve Mann..brought with him an idea..and when he arrived here, a lot of people sort of said "wow this is very interesting.." I think it's probably one of the best examples we have of where somebody brought with them an extraordinarily interesting seed and then..it grew, and there are many people now, so-called cyborgs in the Media Lab and people wo...
Article
If you entered the room where the 2014 IEEE Games, Entertainment, and Media Conference (IEEE GEM 2014) was held, you would have thought that you were experiencing one of the imaginary worlds depicted in many good science fiction stories. The audience witnessed the current and future technologies that could and will be used in future games, entertai...
Article
In 1995 Gershon founded the field of HII (Human Information Interaction). I build upon his seminal work by creating a new multiscale framework for (1) sensing, and (2) being sensed by, any measurable or generable classical or quantum scalar, vector, spinor, or tensor field, such as being able to sense and be sensed by water, sound, light (real or v...
Article
Actergy, the time-integral of energy, is employed in a human kinetic sensing system. This paper presents hardware and signal-processing mathematics to measure actergy in a health science, training or gaming environment, as a more detailed metric of performance than reflex time-delay. Fundamental quantities of Integral-Kinematics are compared alongs...
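Since the abstract defines actergy as the time-integral of energy, a discrete estimate is simply a cumulative sum of instantaneous energy samples times the sampling interval (a hedged illustration; the sensor rate, mass, and motion below are made up, not the paper's data):

```python
# Illustration only: actergy (joule-seconds) as the running time-integral of
# kinetic energy computed from a sampled limb velocity.
import numpy as np

dt = 0.01                                       # 100 Hz sensor (assumed)
t = np.arange(0.0, 2.0, dt)
mass = 70.0                                     # kg, hypothetical user
velocity = 1.5 * np.sin(2 * np.pi * 1.0 * t)    # measured limb speed, m/s
kinetic_energy = 0.5 * mass * velocity ** 2     # joules at each sample

actergy = np.cumsum(kinetic_energy) * dt        # joule-seconds
print(f"actergy over 2 s: {actergy[-1]:.1f} J*s")
```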
Article
Full-text available
Wearable computers and Generation-5 Digital Eye Glass easily recognize a user's own gestures, forming the basis for shared AR (Augmediated Reality). This Studio-workshop presents the latest in wearable AR, plus an historical perspective with new insights. Participants will sculpt 3D objects using hand gestures and create Unity 3D art+game objects u...
Conference Paper
Wearable computers and Generation-5 Digital Eye Glass easily recognize a user's own gestures, forming the basis for shared AR (Augmediated Reality). This Studio-workshop presents the latest in wearable AR, plus an historical perspective with new insights. Participants will sculpt 3D objects using hand gestures and create Unity 3D art+ game objects...
Article
Egocentric vision provides a unique perspective of the visual world that is inherently human-centric. Since egocentric cameras are mounted on the user (typically on the user's head), they are naturally primed to gather visual information from our everyday interactions, and can even act on that information in real-time (e.g. for a vision aid). We be...
Article
Toposculpting is the creation of virtual objects by moving real physical objects through space to extrude patterns like beams and pipes. In one example, a method of making pipe sculptures is proposed, using a ring-shaped object moved through space. In particular, computational lightpainting is proposed as a new form of data entry, 3D object creatio...
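A toy sketch of the extrusion idea (geometry only, with no tracking and an axis-aligned ring for simplicity; the function name and parameters are hypothetical): sweeping a circle along the path traced by a hand-held ring yields the point cloud of a "pipe":

```python
# Illustrative geometry for ring-sweep extrusion: one circle of points copied
# along each tracked ring-centre position.
import numpy as np

def sweep_ring(path, radius=0.05, n_around=16):
    """path: (N, 3) ring-centre positions -> (N * n_around, 3) pipe points."""
    ang = np.linspace(0, 2 * np.pi, n_around, endpoint=False)
    ring = np.stack([radius * np.cos(ang),
                     radius * np.sin(ang),
                     np.zeros(n_around)], axis=1)
    points = [ring + centre for centre in path]   # ring kept axis-aligned here
    return np.concatenate(points, axis=0)

# Example: a ring moved along a gentle arc.
t = np.linspace(0, 1, 50)
path = np.stack([t, 0.2 * np.sin(2 * np.pi * t), np.zeros_like(t)], axis=1)
print(sweep_ring(path).shape)   # (800, 3) points forming a pipe
```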
Conference Paper
Computer vision is embedded in toilets, urinals, handwash faucets (e.g. Delta Faucet's 128 or 1024 pixel linear arrays), doors, lightswitches, thermostats, and many other objects that 'watch' us. Camera-based motion sensing streetlights are installed throughout entire cities, making embedded vision ubiquitous. Technological advancement is leading t...
Conference Paper
Full-text available
Video cameras can only take photographs with limited dynamic range. One method to overcome this is to combine differently exposed images of the same subject matter (i.e. a Wyckoff Set), producing a High Dynamic Range (HDR) result. Implementations for real-time HDR videos have relied upon methods that are less accurate. Instead of weighted-sum appro...
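For context, a minimal sketch of the conventional weighted-sum baseline that this abstract contrasts against (not the paper's method; the gamma model, weights, and exposure ratios are assumptions): fuse a Wyckoff set of differently exposed frames in an estimated linear radiance space, trusting well-exposed pixels most.

```python
# Baseline weighted-sum HDR compositing of a Wyckoff set (illustrative only).
import numpy as np

def weighted_sum_hdr(frames, exposures, gamma=2.2):
    """frames: list of float images in [0,1]; exposures: relative exposure times."""
    acc = np.zeros_like(frames[0], dtype=np.float64)
    wsum = np.zeros_like(acc)
    for img, ev in zip(frames, exposures):
        linear = img ** gamma                     # undo an assumed gamma response
        w = np.exp(-((img - 0.5) ** 2) / 0.08)    # trust mid-tone pixels most
        acc += w * linear / ev
        wsum += w
    return acc / np.maximum(wsum, 1e-8)           # estimated scene radiance

# Example with three synthetic exposures of the same scene.
scene = np.random.rand(4, 4)
frames = [np.clip(scene * ev, 0, 1) ** (1 / 2.2) for ev in (0.25, 1.0, 4.0)]
print(weighted_sum_hdr(frames, [0.25, 1.0, 4.0]).round(3))
```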
Conference Paper
Video cameras can only take photographs with limited dynamic range. One method to overcome this is to combine differently exposed images of the same subject matter (i.e. a Wyckoff Set), producing a High Dynamic Range (HDR) result. HDR digital photography started almost 20 years ago. Now, it is possible to produce HDR video in real-time, on both hig...
Article
In this paper we address the increasingly complex constructs between power, and the practices of looking, in a mediated, mobile and networked culture. We develop and explore a nuanced understanding and ontology that examines veillance in both directions: surveillance and oversight, as well as sousveillance and “undersight”. In particular, we unpack...
Conference Paper
Hospital acquired infections (HAIs) occur frequently in hospitalized patients. Staff compliance with hand hygiene (HH) policy during patient care has been shown to reduce HAIs. Currently, hospitals evaluate adherence to HH policies through direct observation by human auditors. The auditors do not have authority over staff members; thus, this proces...
Conference Paper
This paper presents the invention and implementation of 3D (Three Dimensional) HDR (High Dynamic Range) sensing, along with examples. We propose a method of 3D HDR veillance (sensing, computer vision, video capture, or the like) by integrating tonal and spatial information obtained from multiple HDR exposures for use in conjunction with one or more...
Conference Paper
The Society of Mind is the theory, emerging in the 1970s, that natural intelligence arises from the interactions of numerous simple agents, each of which, taken individually, is “mindless”, but, collectively, give rise to intelligence: “What magical trick makes us intelligent? The trick is that there is no trick. The power of intelligence stems fro...
Conference Paper
Three-Dimensional (3D) range cameras have recently appeared in the marketplace for use in surveillance (e.g. cameras affixed to inanimate objects) applications. We present FreeGlass™ as a wearable hands-free 3D gesture-sensing Digital Eye Glass system. FreeGlass comprises a head-mounted display with an infrared range camera, both connected to a wea...
Conference Paper
Full-text available
Identity awareness of research data has been introduced as a requirement for open research, transparency and reusability of research data in the context of eScience. This requirement includes the capability of linking research data to researchers, research projects and publications, and identifying the license for the data. This connectivity betwee...
Conference Paper
Existential tinkering as a form of inquiry must be brought into the engineering curriculum at the university level, as well as into the education curricula in general, including early childhood education. This paper presents a methodology of education for people of all ages and abilities, including engineering education, through unstructured play,...
Conference Paper
The needs of a realtime vision aid require that it functions immediately, not merely for production of a picture or video record to be viewed later. Therefore the vision system must offer a high dynamic range (often hundreds of millions to one) that functions in real time. In complement with the existing efficient and real-time HDR compositing algo...
Conference Paper
Wearable computing can be used to both extend the range of human perception, and to share sensory experiences with others. For this objective to be made practical, engineering considerations such as form factor, computational power, and power consumption are critical concerns. In this work, we consider the design of a low-power visual seeing aid, a...
Conference Paper
Full-text available
Surveillance is a French word that means “to watch from above” (e.g. guards watching prisoners, police watching citizens, etc.). Another form of veillance (watching) is sousveillance, which means “to watch from below”. Whereas surveillance often means cameras on large entities (e.g. buildings and land), sousveillance often means cameras on small en...
Conference Paper
This paper explores the interplay between surveillance cameras (cameras affixed to large-entities such as buildings) and sousveillance cameras (cameras affixed to small entities such as individual people), laying contextual groundwork for the social implications of Augmented/Augmediated Reality, Digital Eye Glass, and the wearable camera as a visio...
Article
Back in 2004, I was awakened early one morning by a loud clatter. I ran outside, only to discover that a car had smashed into the corner of my house. As I went to speak with the driver, he threw the car into reverse and sped off, striking me and running over my right foot as I fell to the ground. When his car hit me, I was wearing a computerized-vi...
Conference Paper
This paper presents “FreeGlass”, a hands-free user-buildable wearable computer system based on Mann's “Digital Eye Glass” (also known as EyeTap) concept developed 35 years ago. Like Mann's “Eye Glass”, FreeGlass is based on free and Open Source principles consistent with a free and open i-Society. FreeGlass is suitable for researchers and creators...
Article
This chapter builds upon the concept of Uberveillance introduced in the seminal research of M. G. Michael and Katina Michael in 2006. It begins with an overview of sousveillance (underwatching) technologies and examines the "We're watching you but you can't watch us" hypocrisy associated with the rise of surveillance (overwatching). Surveillance ca...
Conference Paper
We present highly parallelizable and computationally efficient High Dynamic Range (HDR) image compositing, reconstruction, and spatiotonal mapping algorithms for processing HDR video. We implemented our algorithms in the EyeTap Digital Glass electric seeing aid, for use in everyday life. We also tested the algorithms in extreme dynamic range situati...
Article
Full-text available
I begin this article with the fundamental premise that wearable computing will fundamentally improve the quality of our lives [1]. I can make this claim because for the past 20 years I have been walking around with digital eye glasses (DEG), and I believe my life has been enhanced as a result. Perhaps I am biased about wearable computing, but like...
Conference Paper
The history of HDR (high dynamic range) imaging in digital photography goes back 19 years, as Robertson et al. state[2]: "The first report of digitally combining multiple pictures of the same scene to improve dynamic range appears to be Mann[1]." HDR combines multiple differently exposed pictures of the same subject matter to be able to see a much...
Conference Paper
Realtime video HDR (High Dynamic Range) is presented in the context of a seeing aid designed originally for task-specific use (e.g. electric arc welding). It can also be built into regular eyeglasses to help people see better in everyday life. Our prototype consists of an EyeTap (electric glasses) welding helmet, with a wearable computer upon which...
Conference Paper
Full-text available
We propose a novel computational method for compositing low-dynamic-range (LDR) images into a high-dynamic-range (HDR) image by the use of a comparametric camera response function (CCRF), which is the response of a virtual HDR camera to multiple inputs. We demonstrate the use of this method with a simple probabilistic joint estimation model, that...
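A loose sketch of the pairwise-compositing structure the CCRF idea implies (the two-input function below is a simple stand-in, a certainty-weighted average in linear space, not the paper's estimator; gamma, weights, and exposure ratio are assumptions):

```python
# Stand-in pairwise compositor: a full Wyckoff set would be fused by repeated
# application of a two-input function like this one.
import numpy as np

def pairwise_composite(a, b, k=2.0, gamma=2.2):
    """Fuse two LDR images where b was exposed k times longer than a."""
    lin_a, lin_b = a ** gamma, (b ** gamma) / k        # common radiance scale
    cert = lambda x: np.exp(-((x - 0.5) ** 2) / 0.05)  # mid-tone certainty
    wa, wb = cert(a), cert(b)
    fused = (wa * lin_a + wb * lin_b) / np.maximum(wa + wb, 1e-8)
    return fused ** (1 / gamma)

# Example: a unit exposure and a 2x exposure of the same synthetic scene.
scene = np.random.rand(4, 4)                        # "true" radiance
short = np.clip(scene * 1.0, 0, 1) ** (1 / 2.2)
long_ = np.clip(scene * 2.0, 0, 1) ** (1 / 2.2)
print(pairwise_composite(short, long_, k=2.0).round(3))
```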
Conference Paper
Technology has put us out of touch with nature. A goal of the CCEI (the Centre for Nature and Technology) is to invent, research, study, and teach technologies that facilitate connection with our natural world. One project of CCEI is Hydraulikos, the Water Labs, for people to touch and be touched by the most primordial of all media: water. Hydraul...
Article
Full-text available
A class of truly acoustic, yet computational musical instruments is presented. The instruments are based on physiphones (instruments where the initial sound-production is physical rather than virtual), which have been outfitted with computation and tactuation, such that the final sound delivery is also physical. In one example, a single plank of...
Conference Paper
Full-text available
This paper describes a musical instrument consisting of a physical process that acoustically generates sound from the material world (i.e. sound derived from matter such as solid, liquid, gas, or plasma) which is modified by a secondary input from the informatic world. This informatic input selects attributes such as the frequency range of the musi...
Article
Full-text available
This paper presents two main ideas: (1) Various newly invented liquid-based or underwater musical instruments are proposed that function like woodwind instruments but use water instead of air. These "woodwater" instruments expand the space of known instruments to include all three states of matter: solid (strings, percussion); liquid (the proposed...
Conference Paper
Full-text available
I present a new way of teaching musical tempo and rhythm by writing out the music on a timeline along the ground, with, for example, chalk, in a form in which each beat of the music corresponds to one footstep. In some setups I use computer vision to track participants so that the music is actually generated by their footsteps moving through the sp...
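An illustrative sketch of the mapping only (not the paper's vision system; spacing and note triggering are assumptions): a participant's tracked position along the chalk timeline is quantised to the nearest beat, and each beat is triggered the first time it is stepped on:

```python
# Toy mapping from a tracked position along a chalk timeline to beat triggers.
def beat_for_position(x_metres: float, beat_spacing: float = 0.75) -> int:
    """One footstep per beat, with beats drawn every beat_spacing metres."""
    return int(round(x_metres / beat_spacing))

played = set()

def on_position_update(x_metres: float):
    beat = beat_for_position(x_metres)
    if beat not in played:            # trigger each beat once as it is reached
        played.add(beat)
        print(f"play beat {beat}")

# Example: positions from a (hypothetical) tracker as someone walks the line.
for x in (0.1, 0.4, 0.8, 1.1, 1.6, 2.3):
    on_position_update(x)
```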
Conference Paper
Full-text available
This paper describes the construction of a MIDI interface that can be attached to an acoustic hydraulophone, or pneumatophone, or to an ordinary water fountain, or air fountain. This allows the fountain to be used as a haptic/tactile control surface or highly expressive fluid keyboard that provides gentle, soothing (soft) tactile feedback quite dif...
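A hedged sketch of the jet-to-MIDI mapping (not the paper's circuit or firmware; the threshold, note numbers, and scaling are made up): each jet's flow-restriction reading becomes a MIDI note-on whose velocity tracks how firmly the player blocks the jet:

```python
# Illustration: map a per-jet restriction reading in [0, 1] to raw 3-byte
# MIDI messages on channel 1.
def jet_to_midi(jet_index: int, restriction: float, threshold: float = 0.1):
    """Return a note-off below the threshold, else a velocity-scaled note-on."""
    note = 60 + jet_index                 # one note per jet, starting at middle C
    if restriction < threshold:
        return [0x80, note, 0]            # note-off when the jet is uncovered
    velocity = min(127, int(restriction * 127))
    return [0x90, note, velocity]         # note-on, louder as the jet is blocked

# Example: finger partially covering jet 3.
print(jet_to_midi(3, 0.65))   # -> [144, 63, 82]
```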
Article
As a “cyborg” (Mann, 2001) in the sense of long-term adaptation to the modified body, one encounters a new kind of existential self-determination and mastery over one’s own environs (and to some degree, one’s own destiny). Presently, in addition to having the internet and massive databases and video at my beck and call most of the time, I am also c...
Article
This paper describes the author's own personal experiences, experiments, and lifelong narrative of inventing, designing, building, and living with a variety of body-borne computer-based visual information capture and mediation devices. The emphasis is not just on the devices themselves, but on certain social, privacy, ethical, and legal questions a...
Conference Paper
The FUNtain™ is a new input device in which a user inputs data by direct interaction with fluid, such as with one or more air or water jets. There is usually a (re)strictometer for each jet that measures the degree of restriction of fluid flow for that jet, such that the time-varying changes in restrictometric quantities provide an input to ano...
Article
Full-text available
Chirplet time-frequency representation has been applied to characterise visual evoked potentials (VEPs) successfully. The approach presented here can represent both transient VEPs and steady-state VEPs clearly. Comparison to the method of short-time Fourier transform (STFT) is reported.
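A minimal illustration of a chirplet coefficient (my own parameterisation, not the paper's implementation): a chirplet is a Gaussian-windowed chirp, and one coefficient is the inner product of the signal with such an atom, parameterised here by centre time, centre frequency, chirp rate, and duration:

```python
# Illustration: correlate a synthetic "VEP-like" frequency sweep with one
# Gaussian chirplet atom. All parameters below are assumptions.
import numpy as np

def chirplet(t, tc, fc, c, dt):
    """Gaussian chirplet atom centred at time tc, frequency fc, chirp rate c (Hz/s)."""
    env = np.exp(-0.5 * ((t - tc) / dt) ** 2)
    return env * np.exp(2j * np.pi * (fc * (t - tc) + 0.5 * c * (t - tc) ** 2))

fs = 250.0                                   # EEG-like sampling rate (assumed)
t = np.arange(0, 2, 1 / fs)
# Component sweeping from 8 Hz to 12 Hz over 2 s, plus noise.
signal = np.cos(2 * np.pi * (8 * t + 1.0 * t ** 2)) + 0.5 * np.random.randn(t.size)

atom = chirplet(t, tc=1.0, fc=10.0, c=2.0, dt=0.3)
coeff = np.vdot(atom, signal) / np.linalg.norm(atom)
print(abs(coeff))                            # large when the atom matches the sweep
```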
Article
This paper differs from that of [14] in that the method presented here emphasizes the operation near the neighbourhood of the identity (e.g., the "video orbit")
