Mobile Interaction with Visual and RFID Tags – A Field
Study on User Perceptions
Kaj Mäkelä¹, Sara Belt², Dan Greenblatt³, Jonna Häkkilä¹
¹ Nokia Research Center
Itämerenkatu 11-13
00180 Helsinki
Finland
firstname.lastname@nokia.com
² Nokia Multimedia
Yrttipellontie 6
90230 Oulu
Finland
sara.belt@nokia.com
³ College of Computing
Georgia Inst. of Technology
Atlanta, GA 30332
USA
dmgreen@cc.gatech.edu
ABSTRACT
In this paper, we present a study of user perceptions of mobile interaction with visual and RFID tags. Although mobile interaction with tags has been proposed in several earlier studies, user perceptions and usability comparisons of different tag technologies have not been intensively investigated. In contrast to earlier studies, which report on user studies evaluating new concepts or interaction techniques, we take another approach and examine the current understanding of the techniques and user perceptions of them. Our field study of 50 users charts currently existing user perceptions and reveals potential usability risks that are due to a limited or erroneous understanding of the interaction technique.
Author Keywords
User studies, RFID, visual tags, mobile interaction
ACM Classification Keywords
H5.m. Information interfaces and presentation (e.g., HCI):
Miscellaneous.
INTRODUCTION
Interacting with the physical world via a mobile handheld device is a relatively new paradigm, which has quickly emerged in recent years. Integrating cameras, motion sensors, and radio frequency identification (RFID) or barcode readers into mobile devices has made new interaction concepts possible. Tags utilizing different technologies have been introduced for interacting with physical objects in a variety of applications and uses. For instance, augmented reality applications have been demonstrated [4], gesture recognition based on visual tags has been performed [1], and tags have been used for annotating the physical environment [5]. The use scenarios also include accessing information by interacting with a tag, or using the tag to initiate some other information channel, e.g. a Bluetooth or internet connection [3, 7].
Typically, interaction with a tag employs a physical gesture where the user (or, more precisely, the user's device) points at or touches a tag, which can be, for instance, an RFID tag recognized with a reader integrated into the device, or a visual tag read with a camera [7, 8]. In [6], gesture semantics, i.e. touching, pointing and scanning gestures, and their suitability in different contexts have been examined.
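To make the touch-based case concrete, the sketch below outlines how a phone application might receive the contents of an RFID/NFC tag when the device is touched to it. It is a minimal illustration only, written against the Android NFC API (which postdates the devices used in this study); the activity and handler names are ours, and the NDEF payload framing is simplified.

```java
import android.app.Activity;
import android.content.Intent;
import android.nfc.NdefMessage;
import android.nfc.NdefRecord;
import android.nfc.NfcAdapter;
import android.os.Parcelable;
import java.nio.charset.StandardCharsets;

// Minimal sketch (assumed names): touching an NFC/RFID tag delivers its
// NDEF payload to the foreground application as an intent.
public class TagTouchActivity extends Activity {

    @Override
    protected void onNewIntent(Intent intent) {
        super.onNewIntent(intent);
        if (!NfcAdapter.ACTION_NDEF_DISCOVERED.equals(intent.getAction())) {
            return;
        }
        Parcelable[] rawMessages =
                intent.getParcelableArrayExtra(NfcAdapter.EXTRA_NDEF_MESSAGES);
        if (rawMessages == null) {
            return;
        }
        for (Parcelable raw : rawMessages) {
            for (NdefRecord record : ((NdefMessage) raw).getRecords()) {
                // Simplification: the payload framing (status byte, language
                // code, URI prefix) is ignored; real code would parse it.
                String payload =
                        new String(record.getPayload(), StandardCharsets.UTF_8);
                handleTagContent(payload);
            }
        }
    }

    // Hypothetical handler: e.g. open the referenced URL or show the text.
    private void handleTagContent(String payload) {
        // Application-specific behaviour goes here.
    }
}
```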
Most of the research has so far focused on creating new concepts, or on utilizing tags as part of a larger system, as opposed to specifically studying the interaction paradigm itself. Typical of the existing studies is that they often serve as a proof of concept with only a small sample of users, often in a laboratory environment, with guided instruction prior to performing the interaction tasks. There exists very little data on how users would interact with tags without any specific instruction, on their expectations of the technology, and on their perceptions regarding interacting with these objects in public places.
In this paper we report on a field study with RFID tags and visual 2D barcodes, in which 50 people were interviewed and asked to interact with the tags. The goals of our study, conducted in an 'everyday life' environment, were to assess the current knowledge of and expectations people had for the tag technologies, the intuitiveness of usage, and social acceptability, and to predict any potential barriers to use.
DESIGN OF THE STUDY
Hypotheses
The study had a strongly exploratory nature, as it sought to chart the general perceptions people had of interacting with RFID and visual tags. In addition, we had the following hypotheses:
1. RFID and visual tags are perceived similarly in terms of data storage and transfer.
2. Based on the ubiquity of cameraphones, camera-based interaction, in comparison to touch, is perceived as more
familiar, and thus the interaction with visual tags is more
intuitive.
Study Set-Up
The study consisted of semi-structured interviews accompanied by interaction tasks, carried out in two city centers in Finland during summer 2006. The study was divided into two phases, referred to in the following as A (Oulu) and B (Tampere), which had identical set-ups; in addition, in study B some further questions were asked in order to verify assumptions made on the basis of study A. The interviews took place in an outdoor pedestrian shopping mall, a library, and a market place. Participants were chosen from those present on the street, aiming to achieve a balance of male and female participants, with ages ranging from teenagers to the middle-aged (50+).
Figure 1. The poster used in the study. Above, the complete poster with the visual tag; below, the lower part of the RFID poster (the tag is behind the paper).
During the interview, each participant was shown two posters, one employing an RFID tag and one a visual tag (Figure 1). Participants were first asked about their familiarity with the particular tag technology, then given a brief, easy-to-understand explanation of how the tag works, and shown the tag. However, they were not told how to interact with it. The participants were asked what kind of information they would expect to receive from the tag, and were then given a properly equipped mobile phone and asked to demonstrate how they would interact with the tag (Figure 2). Answers to the interview questions, as well as observations on usage, were recorded by the researchers. After the user had tried to use the tag and had been shown the proper usage, he or she was asked to reflect on the intuitiveness and ease of use of the experience. For each participant, this process was repeated with both types of tags. To avoid ordering bias, the order was alternated so that half of the participants started with the RFID tag and half with the visual tag.
Figure 2. A study participant reading an RFID tag with a phone.
The study included 50 participants (A: 11 female, 15 male; B: 13 female, 11 male). Participants' background information about mobile phone usage is presented in Table 1 (information from one female participant in study A is missing, as she had to leave before completing the last questions of the interview).
Table 1. Mobile phone usage of the study participants

                               Yes                No
Currently carrying a phone:    49 (A:25, B:24)    0
Owned a camera phone:          19 (A:12, B:7)     30 (A:13, B:17)
RESULTS
In the study it was found that although the participants were enthusiastic about and open towards the presented information acquisition methods, a large majority of the interviewees were not familiar with the concept of either the RFID or the visual tag (see Figure 3). Some knew RFID tags from the security tags used on clothing or compact discs, but were not aware of their usage in the current context. Few of the participants were able to associate the visual tag (a Semacode) with the barcodes used on product packages.
As the participants did not have prior experience on which to base their interactions with this technology, they applied a diverse range of mental models governing what kind of information the tags could store and how that information could be transferred to their mobile phone. For the visual tag, most users deduced from its printed nature that it was accessed with the camera. Some users suggested taking a picture of the visual tag, while others pointed the camera at the tag and waited for it to register automatically. For the RFID tag, given its invisibility (i.e. hidden behind the paper) and its more advanced technology, the appropriate
interaction technique proved slightly more elusive for participants. The interaction techniques proposed for the RFID tag included, for example, utilizing Bluetooth, manually typing the visible URL into a mobile browser, reading the tag via an infrared port, calling a stored number to hear pre-recorded information, and taking a picture of the visual icon.
Figure 3. Participants' answers to whether they recognized what the tags were or had seen them earlier.
Figure 4. Participants’ preferences for the tags.
For many of the participants, the storage location and the nature of the content were not clear. Many users correctly assumed that the tag would contain some band-related information, and some suggested specifically that it might contain an mp3 file. The information was, in most cases, considered to be in the tag itself; the tag was not perceived as a reference to the actual information. This was quite evident, as many of the participants did not even expect the mobile phone to contain web browsing functionality. Implicit in many participants' responses was that the phone would simply read and store the information from the tag for later use.
Thirty-one of the fifty study participants preferred the interaction paradigm of the RFID tag, while fifteen preferred the visual tag (Figure 4). The RFID swipe was viewed as being quicker, requiring less effort, and generally feeling more natural than explicitly taking a picture of the visual tag. Those who favored RFID also liked that it did not require opening any additional application; the interaction was instant. Those who preferred the visual tag considered the physical action of taking a photo to be more familiar and socially acceptable than waving the phone at the wall.
There were also differing opinions with respect to the aesthetics of the two types of tags. Some people disliked the way the visual tag looked, saying that it was too official, vague, or technological looking, while others praised its sleek look and said that they thought it made the poster look cooler. Based solely on the appearance of the visual tag, one participant claimed not to be able to use it, on the grounds of not being “a mathematical person”.
Interestingly, several participants, thinking from the perspective of information producers, preferred the visual tags because they were cheaper to create and caused less waste. However, the reliability of the tags was found to be problematic. Participants pointed out that, as the tags were accessible in public places, they were vulnerable to vandalism. Visual tags could be visually manipulated so that their content is altered, and water and dirt covering them affect their readability. RFID tags can be ripped off or replaced with other tags containing information or pointers to harmful material. The perceived active nature of RFID tags also caused concerns about accidentally picking up harmful information while passing the tag.
In answer to our hypotheses, the study showed that RFID and visual tags are perceived differently with respect to their nature as data storage and the way they transfer data. The RFID tag was perceived as having a more active nature and a wider spatial range of functionality. Also, although taking a picture with a camera is considered familiar, this does not directly imply that it is intuitive as an interaction method: the camera was expected to work like a continuously sensing scanner rather than an explicitly triggered reader.
DISCUSSION
User perceptions of interacting with tags have not been extensively studied in earlier research. Where user studies on the interaction paradigm have been performed, they have typically been used to confirm the interaction paradigm selected for a certain application. These studies have commonly involved only a small number of people, and have typically been performed in a laboratory environment or on a university campus, or with IT students or professionals. Often such user studies are carried out to gain proof of concept, in the manner of approving and justifying the research and implementation. The results gained this way tend to be positively biased and may not give realistic feedback on the usability risks of the design. In our study, we concentrated on the perceptions 'a man on the street' had about visual and RFID tags, and aimed for a sample large enough for a realistic and reliable understanding of the phenomenon.
The study results reveal that there are potential usability risks in mobile interaction with RFID and visual tags. When faced with an unfamiliar situation, a user
attempts to make sense of the world by developing a mental model based on prior relevant experience [2]. Currently, the mental model that people have of these technologies is still very vague.
Characteristics that affected participants' interactions with the RFID and visual tags were their range of function and their visibility. The range of an RFID tag is typically less than 10 centimeters. Due to this short range of function, the user needs to be informed precisely of the location of the tag, although the tag itself does not need to be visible. In the study, the RFID tag was attached behind the poster and its location was indicated with a visual icon. The icon used (two concentric circles) was not a commonly known indicator for RFID and therefore did not provide any previously known cues for interaction. As the usage of RFID or other invisible near-field communication (NFC) tags becomes more common, it will be important to develop standardized visual cues, enabling users to easily recognize the presence of an NFC tag and execute the known interaction method.
The interaction required to trigger the tag reading was not evident to the users. Although participants had used cameraphones before and were used to the idea of snapping a photo, many expected the visual tag to be recognized simply by pointing the camera at it, without initiating an explicit capture action. This implies that the users expected the system to be able to detect the presence of a tag. The Semacode reader application used in the study required the user to trigger the tag reading; however, it should be noted that there are existing applications able to detect the tag automatically.
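As an illustration of the 'continuously sensing scanner' behaviour many participants expected, the sketch below feeds successive camera preview frames to a generic 2D-barcode decoder until a tag is found, so that no explicit capture action is needed. It assumes the open-source ZXing library rather than the Semacode reader used in the study, and the class and callback names are ours.

```java
import com.google.zxing.BinaryBitmap;
import com.google.zxing.MultiFormatReader;
import com.google.zxing.NotFoundException;
import com.google.zxing.PlanarYUVLuminanceSource;
import com.google.zxing.Result;
import com.google.zxing.common.HybridBinarizer;

// Sketch of implicit, continuous tag detection: every camera preview frame
// is decoded until a visual tag is recognized, so the user only has to
// point the camera at the tag, without pressing a capture button.
public class ContinuousTagScanner {

    private final MultiFormatReader reader = new MultiFormatReader();

    // Assumed callback: invoked once per preview frame with raw YUV data.
    // Returns the decoded tag content, or null if no tag is visible yet.
    public String onPreviewFrame(byte[] yuvData, int width, int height) {
        PlanarYUVLuminanceSource source = new PlanarYUVLuminanceSource(
                yuvData, width, height, 0, 0, width, height, false);
        BinaryBitmap bitmap = new BinaryBitmap(new HybridBinarizer(source));
        try {
            Result result = reader.decodeWithState(bitmap);
            return result.getText();   // tag found; hand its content onward
        } catch (NotFoundException e) {
            return null;               // no tag in this frame; keep scanning
        } finally {
            reader.reset();            // clear decoder state between frames
        }
    }
}
```

Whether scanning runs continuously or is explicitly triggered is thus largely an application-level design choice rather than a property of the tag technology itself.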
Both of the tags were expected to contain direct, mostly textual information related to the band presented in the poster. This leads us to assume that the device and application were considered a "lens" for viewing the information content, which was otherwise in an incomprehensible, encrypted form. The users were surprised when the recognized identifier triggered a browser, which then retrieved information from the internet. They did not expect the identifier to act as a reference or a trigger for other applications. In addition, the information display was quite often expected to depend on proximity to the tag: the tag was held within the scope of the device, either within the viewfinder of the camera or in the proximity of the RFID module, even after the actual tag recognition had occurred. This observation also supports the concept of the device as a lens for viewing local information. However, as the nature of the tag may vary in different contexts between local storage and a reference to the actual remote resource, it is challenging to provide a mental model that fits each context.
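To make the 'reference' model concrete, the sketch below shows a tag identifier being resolved to a remote resource that is then opened in the phone's browser, rather than being treated as content stored in the tag itself. The resolver address and class are hypothetical and written against the Android platform; they are not the back end used in the study.

```java
import android.app.Activity;
import android.content.Intent;
import android.net.Uri;

// Sketch of the "reference" model: the tag carries only an identifier,
// which the phone resolves to a remote resource and opens in a browser.
public class TagReferenceResolver {

    // Hypothetical resolver address; the study's actual back end is not described.
    private static final String RESOLVER_BASE = "http://example.com/tags/";

    public void openTagContent(Activity activity, String tagIdentifier) {
        final Uri target;
        if (tagIdentifier.startsWith("http://") || tagIdentifier.startsWith("https://")) {
            // The tag already stores a full URL.
            target = Uri.parse(tagIdentifier);
        } else {
            // The tag stores only a short ID that a remote service resolves.
            target = Uri.parse(RESOLVER_BASE + Uri.encode(tagIdentifier));
        }
        activity.startActivity(new Intent(Intent.ACTION_VIEW, target));
    }
}
```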
As the study was conducted in two cities within a single country, it is somewhat limited by its geographical and cultural environment. However, we believe that it reflects the general situation in an industrial, urban environment in a Western culture.
CONCLUSIONS
In this paper, we presented a study of user perceptions of mobile interaction with visual and RFID tags. The study was conducted with 50 participants in two Finnish cities as semi-structured interviews accompanied by interaction tasks. In the study we found that a large majority of the participants were not familiar with the concept of either the RFID or the visual tag, and did not have clear knowledge of their applications prior to the interview. Together with the lack of prior experience, minimal visual interaction cues caused misconceptions and usability problems while interacting with the tags. The tags' range of function and the methods for accessing the data were often unclear and misconceived. The tags were assumed to contain direct information in encrypted form, rather than acting as references to networked data resources.
In the future, conducting a similar study in another regional location and culture would offer valuable insight into the stage of local development and the cultural variables affecting the usage of the tags. In addition, how the metaphors and visual design of physical tags affect the user's perception of interacting with them would require further study.
REFERENCES
1. Ballagas, R., Rohs, M., Sheridan, J. Mobile Phones as Pointing Devices. In Pervasive 2005 Workshop on Pervasive Mobile Interaction Devices (PERMID 2005).
2. Norman, D. A. The Design of Everyday Things. Doubleday, New York, USA, 1990.
3. Pradhan, S., Brignone, C., Cui, J., McReynolds, A., and Smith, M. Websigns: Hyperlinking Physical Locations to the Web. IEEE Computer, Aug 2001, 42-48.
4. Rekimoto, J., Ayatsuka, Y. CyberCode: Designing Augmented Reality Environments with Visual Tags. In Proc. of Designing Augmented Reality Environments (DARE) 2000, 1-10.
5. Rohs, M. Visual Code Widgets for Marker-Based Interaction. In Proc. of the 25th IEEE International Conference on Distributed Computing Systems Workshops (ICDCS 2005 Workshops).
6. Rukzio, E., Leichtenstern, K., Callaghan, V., Holleis, P., Schmidt, A., Chin, J. An Experimental Comparison of Physical Mobile Interaction Techniques: Touching, Pointing and Scanning. In Proc. Ubicomp 2006, 87-104.
7. Salminen, T., Hosio, S., Riekki, J. Enhancing Bluetooth Connectivity with RFID. In Proc. of PerCom 2005, 36-41.
8. Välkkynen, P., Tuomisto, T. Physical Browsing Research. In Pervasive 2005 Workshop on Pervasive Mobile Interaction Devices (PERMID 2005).