Figure 1: Biological Water Processor Component of Water Recovery System (NASA Photo)

Source publication
Conference Paper
Full-text available
Many intelligent interfaces must recognize patterns of user activity that cross a variety of different input channels. These multimodal interfaces offer significant challenges to both the designer and the software engineer. The designer needs a method of expressing interaction patterns that has the power to capture real use cases and a clear semantics...

Context in source publication

Context 1
... example, consider a human/computer interface built for the control and monitoring of an integrated Water Recovery System (which recycles urine and waste water into water usable for drinking and other functions) at NASA's Johnson Space Center [14]. The system in question has several large subcomponents, including a biological water processor (see Figure 1), a reverse osmosis system, an air evaporation system, ...

Similar publications

Conference Paper
Full-text available
Intelligent vehicles operating at different levels of automation require the driver to fully or partially conduct the dynamic driving task (DDT), and to provide fallback performance of the DDT, during a trip. Such vehicles create the need for novel human-machine interfaces (HMIs) designed for high-level vehicle control tasks. Multimodal interfaces...

Citations

... A closely related challenge is the management of false positives and recognition errors against the negative cost of reject rates [36]. More generally, technological intelligence was linked to the control of complex environments [23], to the interaction via multiple channels or devices [24,48], and to ensuring system robustness [48]. Moreover, capturing user data such as their affective state [69] or interpreting imprecise natural language instructions [42] was named as technically challenging. ...
Preprint
Full-text available
This reflection paper takes the 25th IUI conference milestone as an opportunity to analyse in detail the community's understanding of intelligence: despite the focus on intelligent UIs, what exactly renders an interactive system or user interface "intelligent" has remained elusive, in the fields of HCI and AI at large as well. We follow a bottom-up approach to analyse the emergent meaning of intelligence in the IUI community: in particular, we apply text analysis to extract all occurrences of "intelligent" in all IUI proceedings. We manually review these with regard to three main questions: 1) What is deemed intelligent? 2) How (else) is it characterised? and 3) What capabilities are attributed to an intelligent entity? We discuss the community's emerging implicit perspective on characteristics of intelligence in intelligent user interfaces and conclude with ideas for stating one's own understanding of intelligence more explicitly.
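As a rough illustration of the extraction step this abstract describes, the sketch below pulls keyword-in-context occurrences of "intelligent" from a folder of plain-text files. The corpus path, file format, and context window are assumptions for illustration, not the authors' actual setup.

```python
import re
from pathlib import Path

# Matches "intelligent" as a whole word, case-insensitively.
PATTERN = re.compile(r"\bintelligent\b", re.IGNORECASE)

def extract_occurrences(corpus_dir: str, window: int = 60):
    """Yield (file, left context, match, right context) for each hit.

    Assumes the proceedings are available as plain-text files; the real
    study may have worked with a different corpus format.
    """
    for path in Path(corpus_dir).glob("*.txt"):
        text = path.read_text(errors="ignore")
        for m in PATTERN.finditer(text):
            left = text[max(0, m.start() - window):m.start()]
            right = text[m.end():m.end() + window]
            yield path.name, left, m.group(), right

if __name__ == "__main__":
    # "iui_proceedings" is a hypothetical directory name.
    for name, left, hit, right in extract_occurrences("iui_proceedings"):
        print(f"{name}: ...{left}[{hit}]{right}...")
```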
... As a second example, procedural methods (e.g., transition networks [8,13]), alternative parsing strategies [4], and frame-based approaches [11,2] gained a lot of interest as fusion methods in the field of interactive systems due to a potential performance advantage compared to unification [9,7]. They all have to explicitly deal with variance during the central matching operation of two fusion candidates. ...
Conference Paper
Full-text available
This article describes four software techniques to enhance the overall quality of multimodal processing software and to include concurrency and variance due to individual characteristics and cultural context. First, the processing steps are decentralized and distributed using the actor model. Second, functor objects decouple domain- and application-specific operations from universal processing methods. Third, domain specific languages are provided inside of specialized feature processing units to define necessary algorithms in a human-readable and comprehensible format. Fourth, constituents of the DSLs (including the functors) are semantically grounded into a common ontology supporting syntactic and semantic correctness checks as well as code-generation capabilities. These techniques provide scalable, customizable, and reusable technical solutions for reoccurring multimodal processing tasks.
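The following is a minimal sketch of two of the four techniques named above: an actor-style processing unit with its own mailbox and worker thread (decentralized, message-driven), and a pluggable functor that decouples the domain-specific operation from the universal processing loop. All names are illustrative, not the authors' API.

```python
from queue import Queue
from threading import Thread

class ProcessingUnit:
    """Actor-like worker: receives messages, applies a functor, forwards results."""

    def __init__(self, functor, downstream=None):
        self.functor = functor        # domain-specific operation, injected
        self.inbox = Queue()          # actor-style mailbox
        self.downstream = downstream  # next unit in the pipeline, if any
        Thread(target=self._run, daemon=True).start()

    def send(self, message):
        self.inbox.put(message)

    def _run(self):
        while True:
            message = self.inbox.get()
            result = self.functor(message)  # universal loop, domain logic swapped in
            if result is not None and self.downstream is not None:
                self.downstream.send(result)

# Usage: swapping the functor repurposes the same unit for another modality.
normalize = ProcessingUnit(lambda sample: sample / 100.0)
normalize.send(42)
```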
... The use of the Complex Event Recognition Architecture was first successfully demonstrated in monitoring a complex system for water recovery used at NASA [2]. In this system, CERA was used to build a model of event patterns occurring within the water recovery system, recognizing those event patterns in real time [5]. However, the use of CERA extends beyond the scope of the water recovery system. ...
... However, the use of CERA extends beyond the scope of the water recovery system. It can be used to describe and recognize complex patterns of discrete events [5]. Among the pattern types CERA recognizes are One-Of, In-Order, All, Within and Without. ...
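The CERA API itself is not shown in this excerpt; the sketch below only illustrates the idea behind one of these pattern types, an In-Order recognizer over discrete events. The event representation and method names are assumptions.

```python
class InOrder:
    """Recognizes a fixed sequence of event names, allowing other events in between."""

    def __init__(self, *expected):
        self.expected = list(expected)  # event names that must occur in order
        self.index = 0                  # position of the next expected event

    def observe(self, event):
        """Feed one event; return True once the full sequence has been seen."""
        if self.index < len(self.expected) and event == self.expected[self.index]:
            self.index += 1
        return self.index == len(self.expected)

# Hypothetical event names, loosely inspired by the water recovery setting.
recognizer = InOrder("valve-open", "pump-start", "flow-stable")
for e in ["pump-idle", "valve-open", "pump-start", "flow-stable"]:
    if recognizer.observe(e):
        print("pattern recognized")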
... Java is used quite frequently in developing and prototyping multimodal user interfaces, and so this is a natural next step from a software engineering perspective. A proprietary Common Lisp version of CERA is also available from I/NET, Inc. [5]. We would also like to develop an XML front-end for CERA that would allow one to easily describe the patterns for CERA to look for. ...
Article
Full-text available
We describe how CERA, the Complex Event Recognition Architecture, was used to create a multimodal user interface for Skibbles, a memory game moderated by a mobile robot. We also announce the availability of open-source software that implements CERA. As humans we experience the world through our five senses. Similarly, in computer science a multimodal interface is one that allows for input from many channels, also known as modes. The classic example of a multimodal interface is an in-car navigation system where the user touches the map and says, "Go here," simultaneously. Multimodal...
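To make the in-car example concrete, here is a minimal sketch of time-windowed fusion that pairs a spoken command with the nearest touch event. The Event fields and the 1.5-second window are assumptions for illustration, not the paper's implementation.

```python
from dataclasses import dataclass

@dataclass
class Event:
    mode: str        # "touch" or "speech"
    payload: object  # map coordinates, or the recognized utterance
    t: float         # timestamp in seconds

WINDOW = 1.5  # assumed maximum gap between the two modalities

def fuse(events):
    """Pair each speech event with the nearest touch event inside WINDOW."""
    touches = [e for e in events if e.mode == "touch"]
    for s in (e for e in events if e.mode == "speech"):
        near = [t for t in touches if abs(t.t - s.t) <= WINDOW]
        if near:
            target = min(near, key=lambda t: abs(t.t - s.t))
            yield s.payload, target.payload

events = [Event("touch", (52.5, 13.4), 10.0), Event("speech", "go here", 10.4)]
for command, location in fuse(events):
    print(command, "->", location)   # go here -> (52.5, 13.4)
```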
... Fusion mechanisms can be assigned to four different classes: frame-based, unification-based, procedural, or hybrid mechanisms [LNP+09]. Typical representatives of procedural approaches in the computer graphics field are transition networks [JB01] [Lat02] or alternative parsing strategies [FFH03]. Frame-based approaches for 3D scenarios can be found, for example, in [KST93] and [BNG04]. ...
Conference Paper
Full-text available
This article describes a system for realizing multimodal interactions in virtual, augmented, and mixed reality systems. A user's movements are captured by a marker-based infrared tracking system and subjected to time-series and feature analysis. The subsequent gesture recognition classifies movement patterns with a connectionist learning method based on neural networks. Beyond pure classification, relevant movement data are correlated to enable later evaluation of the spatial gesture expression. The fusion of gestural and spoken input uses a unification approach for multimodal grammars. For the context of interactive systems, the underlying chart parser was extended with incremental processing. So-called featlets enable uniform processing of speech and gesture units within the unification. The fusion process applies both a semantic and a temporal alignment. The atomic unification step remains universal; alternative relations enable variable agreement. The input processing components are loosely coupled to a VR/AR middleware using the actor model.
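As a rough sketch of the atomic unification step described above, the following treats featlets as flat feature dictionaries that unify when their shared features agree, with a separate time check standing in for the temporal alignment. The representation is an assumption, and the incremental chart parser is not reproduced here.

```python
def unify(a: dict, b: dict):
    """Merge two feature structures; fail (return None) on a feature clash."""
    merged = dict(a)
    for key, value in b.items():
        if key in merged and merged[key] != value:
            return None  # shared feature disagrees: unification fails
        merged[key] = value
    return merged

def fuse(gesture: dict, speech: dict, max_gap: float = 1.0):
    """Fuse a gesture and a speech unit: temporal check, then unification."""
    if abs(gesture["t"] - speech["t"]) > max_gap:  # temporal agreement
        return None
    return unify({k: v for k, v in gesture.items() if k != "t"},
                 {k: v for k, v in speech.items() if k != "t"})

# Hypothetical units: a pointing gesture and the utterance "move the chair".
print(fuse({"t": 3.1, "object": "chair"},
           {"t": 3.4, "action": "move", "object": "chair"}))
```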
... A need to record the corresponding parameter settings for the software is also identified. Fitzgerald et al. [6] define a framework for describing event tracking for multimodal user interfaces. Such a framework can be used to develop a more comprehensive model of user interaction. ...
Article
Interactive visualizations provide an ideal setting for exploring the use and exploitation of personal histories. Even though visualizations leverage innate human capabilities for recognizing interesting aspects of data, it is unlikely that two users will follow the exact same discovery process. This results in an inability to effectively recreate the exact conditions of the discovery process, which we call the knowledge rediscovery problem. Because we cannot expect a user to fully document each of their interactions, there is a need for visualization systems to maintain user trace data in a way that enhances a user's ability to communicate what they found to be interesting, as well as how they found it. This project presents a model for representing user interactions that articulates with a corresponding set of annotations, or observations that are made during the exploration. This problem is only made more challenging when pervasive computing and corresponding interactions across devices are factored in.
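A minimal sketch of the kind of trace model the abstract calls for: interaction events recorded in order, with annotations that point back to the interactions they comment on. All field names are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    step: int     # position in the exploration history
    action: str   # e.g. "zoom", "filter", "select"
    params: dict  # the parameters needed to replay this step

@dataclass
class Annotation:
    text: str     # the user's observation
    steps: list   # indices of the interactions this note refers to

# Hypothetical trace: the annotation articulates with steps 0 and 1,
# so the discovery can be recreated and communicated later.
trace = [Interaction(0, "filter", {"year": 2001}),
         Interaction(1, "select", {"series": "WRS flow"})]
notes = [Annotation("Flow anomaly appears after filtering to 2001", steps=[0, 1])]
```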
... The Event Detection Assistant (EDA) monitors data produced by the control automation and searches for patterns in this data that are of interest to humans supervising this system. The EDA is implemented using the Complex Event Recognition Architecture (CERA) [8]. As specified patterns are detected in the control data, the EDA generates and broadcasts its own events about these data patterns. ...
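A minimal sketch of the broadcast step described above: when a monitored pattern matches, the assistant publishes a derived event to its subscribers. The callback interface and matcher protocol are assumptions, not the DCI or CERA API.

```python
class EventDetectionSketch:
    """Wraps a pattern matcher and broadcasts a derived event on each match."""

    def __init__(self, pattern_matcher):
        self.matcher = pattern_matcher  # any object with observe(event) -> bool
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def feed(self, event):
        if self.matcher.observe(event):
            for notify in self.subscribers:  # broadcast the derived event
                notify({"type": "pattern-detected", "source": event})

class Threshold:
    """Trivial matcher for the demo: fires when pressure exceeds a limit."""
    def observe(self, event):
        return event.get("pressure", 0) > 120

eda = EventDetectionSketch(Threshold())
eda.subscribe(print)
eda.feed({"pressure": 130})  # prints the derived pattern-detected event
```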
Article
Full-text available
This paper describes the Distributed Collaboration and Interaction (DCI) environment, which supports interaction among humans and automated software systems. The DCI approach uses intermediate liaison agents associated with each human to provide an interfacing layer between the human and the automation. We have applied a DCI prototype to help human engineers interact with automated control software for the advanced Water Recovery System (WRS) at Johnson Space Center (JSC). This paper describes this application and the DCI design and implementation.
Article
Control automation can reduce human workload by automating routine operations such as system reconfiguration and anomaly handling. Humans need assistance in interacting with these automated control agents. We have developed the Distributed Collaboration and Interaction (DCI) system, an environment in which humans and automated control agents work together. Within this environment, we provide personal liaison agents, called Ariel agents, for each human in a group to assist them in performing the duties associated with the roles they perform for that group. We have deployed the DCI system at NASA's Johnson Space Center (JSC) to assist control engineers in performing duties associated with crew water recovery systems. In this paper we describe the water recovery system in the Water Research Facility (WRF) at JSC, the DCI system we developed, and our experiences in deploying DCI for use in the WRF.