Families of multi-touch gestures and specific gestures depending on the number of fingers or the speed

Source publication
Conference Paper
Full-text available
The goal of HCI researchers is to make interaction with computer interfaces simpler, more efficient, and more natural. In a context of object manipulation, we think that reaching this goal requires the ability to predict and recognize how humans grasp and then manipulate objects. This is based on studies explaining human vision, reach, grasp taxonomies a...

Similar publications

Conference Paper
Full-text available
We present the design of a handheld-based interface for collaborative manipulations of 3D objects in mobile augmented reality. Our approach combines touch gestures and device movements for fast and precise control of 7-DOF transformations. Moreover, the interface creates a shared medium where several users can interact through their point-of-view a...
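The 7-DOF transformations mentioned above are usually understood as 3 translation + 3 rotation + 1 uniform scale. As a rough, hypothetical sketch of that decomposition (the class and method names below are ours, not the authors' implementation), incremental updates from touch gestures and device movements could be accumulated like this:

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class Transform7DOF:
    """7 DOF: translation (3), rotation as a quaternion (3 DOF), uniform scale (1)."""
    translation: np.ndarray = field(default_factory=lambda: np.zeros(3))
    rotation: np.ndarray = field(default_factory=lambda: np.array([1.0, 0.0, 0.0, 0.0]))  # w, x, y, z
    scale: float = 1.0

    def translate(self, delta: np.ndarray) -> None:
        # e.g. driven by a one-finger drag on the handheld's touch screen
        self.translation += delta

    def rotate(self, q: np.ndarray) -> None:
        # e.g. driven by the device's own orientation change (quaternion product)
        w1, x1, y1, z1 = self.rotation
        w2, x2, y2, z2 = q
        self.rotation = np.array([
            w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2,
        ])

    def rescale(self, factor: float) -> None:
        # e.g. driven by a pinch gesture
        self.scale *= factor
```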
Article
Full-text available
This article explores relevant applications of educational theory for the design of immersive virtual reality (VR). Two unique attributes associated with VR position the technology to positively affect education: (1) the sense of presence, and (2) the embodied affordances of gesture and manipulation in the 3rd dimension. These are referred to as th...
Research
Full-text available
The advancement of HCI (human-computer interaction) is at its zenith, with interfaces becoming more user-friendly every day. There have been some revolutionary innovations in recent times. As many new input methods have been invented, traditional keyboard input has progressively been replaced. Furthermore, body-sensing technology has even eliminated the re...
Conference Paper
Full-text available
Virtual Reality environments are able to offer natural interaction metaphors. However, it is difficult to accurately place virtual objects in the desired position and orientation using gestures in mid-air. Previous research concluded that the separation of degrees-of-freedom (DOF) can lead to better results, but these benefits come with an increase...
Article
Full-text available
Referential success is crucial for collaborative task-solving in shared environments. In face-to-face interactions, humans, therefore, exploit speech, gesture, and gaze to identify a specific object. We investigate if and how the gaze behavior of a human interaction partner can be used by a gaze-aware assistance system to improve referential succes...

Citations

... They can efficiently leverage the large display area (Nancel et al., 2011) and the space available in front of the display to augment its capabilities (Yoo et al., 2015). However, mid-air gestural interfaces are not well known to end-users, who have little or no experience with mid-air gestures compared with touch gestures (Morris et al., 2014), and transitioning from 2D touch gestures to 3D mid-air gestures is not straightforward (Boulabiar et al., 2014). The large degree of freedom allowed by mid-air gestures results in a wide range of possible gesture classes produced with varying articulations (Vatavu, 2013). ...
Article
Browsing multimedia objects, such as photos, videos, documents, and maps, represents a frequent activity in a context of use where an end-user interacts on a large vertical display close to bystanders, such as a meeting in a corporate environment or a family display at home. In these contexts, mid-air gesture interaction is suitable for a large variety of end-users, provided that gestures are consistently mapped to similar functions across media types. We present Lui (Large User Interface), a ready-to-deploy and ready-to-use application for browsing multimedia objects through consistent mid-air gesture interaction on a large display, customizable by mapping new gesture classes to functions in real time. The method followed to design the gesture interaction and to develop the application consists of four stages: (1) a contextual gesture elicitation study (23 participants × 18 referents = 414 proposed gestures) is conducted with the various media types to determine a consensus set satisfying consistency, (2) the continuous integration of this consensus set with gesture recognizers into a pipeline software architecture, (3) a comparative testing of these recognizers on the consensus set to configure the pipeline with the most efficient ones, and (4) an evaluation of the interface regarding its global quality and the implemented gestures specifically.
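For readers wondering what "mapping new gesture classes to functions in real time" could look like in code, here is a minimal, hypothetical sketch (the class, function, and gesture names are ours, not Lui's): a registry that binds each gesture class to one function shared by every media type, and that can be re-bound while the application runs.

```python
from typing import Callable, Dict

class GestureFunctionRegistry:
    """Maps gesture class names to application functions, shared across media types."""

    def __init__(self) -> None:
        self._bindings: Dict[str, Callable[[str], None]] = {}

    def bind(self, gesture_class: str, function: Callable[[str], None]) -> None:
        # Re-binding at runtime lets end-users customize the gesture set.
        self._bindings[gesture_class] = function

    def dispatch(self, gesture_class: str, media_item: str) -> None:
        handler = self._bindings.get(gesture_class)
        if handler is not None:
            handler(media_item)

# Usage: the same "swipe_left" gesture advances photos, videos, documents, and maps alike.
registry = GestureFunctionRegistry()
registry.bind("swipe_left", lambda item: print(f"next {item}"))
registry.dispatch("swipe_left", "photo_042.jpg")
```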
... After mastering 2D stroke gesture recognition [126] on touch-enabled surface computing with multiple algorithms [103,105,114,116], researchers and practitioners turned their attention to 3D motion gestures [5] performed in mid-air interaction spaces [7,81]. ...
Article
Full-text available
Despite the tremendous progress made for recognizing gestures acquired by various devices, such as the Leap Motion Controller, developing a gestural user interface based on such devices still induces a significant programming and software engineering effort before obtaining a running interactive application. To facilitate this development, we present QuantumLeap, a framework for engineering gestural user interfaces based on the Leap Motion Controller. Its pipeline software architecture can be parameterized to define a workflow among modules for acquiring gestures from the Leap Motion Controller, segmenting them, recognizing them, and managing their mapping to functions of the application. To demonstrate its practical usage, we implement two gesture-based applications: an image viewer that allows healthcare workers to browse DICOM medical images of their patients without the hygiene issues commonly associated with touch user interfaces, and a large-scale application for managing multimedia contents on wall screens. To evaluate the usability of QuantumLeap, seven participants took part in an experiment in which they used QuantumLeap to add a gestural interface to an existing application.
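The pipeline described above (acquire, segment, recognize, map to application functions) can be pictured roughly as follows. This is only a schematic sketch with made-up module interfaces, not QuantumLeap's actual API or the Leap Motion SDK.

```python
from typing import Callable, Dict, List, Protocol, Sequence

class Segmenter(Protocol):
    def segment(self, frames: Sequence[dict]) -> List[Sequence[dict]]: ...

class Recognizer(Protocol):
    def recognize(self, segment: Sequence[dict]) -> str: ...

class GesturePipeline:
    """Parameterizable workflow: acquisition -> segmentation -> recognition -> function mapping."""

    def __init__(self, segmenter: Segmenter, recognizer: Recognizer,
                 mapping: Dict[str, Callable[[], None]]) -> None:
        self.segmenter = segmenter
        self.recognizer = recognizer
        self.mapping = mapping

    def process(self, frames: Sequence[dict]) -> None:
        # frames would come from the hand-tracking device (e.g. one dict per tracked frame)
        for segment in self.segmenter.segment(frames):
            gesture = self.recognizer.recognize(segment)
            action = self.mapping.get(gesture)
            if action is not None:
                action()
```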
... The first phase is the formation of the goal from what already exists (1). The second phase is execution, which consists of transforming the formed goal into the necessary, unordered tasks (2), ordering these tasks into a sequence (3), and then executing that sequence (4). The last phase is evaluation, in which the results are checked after execution (5). The affordance of an object is defined by its power to evoke its own use. ...
Thesis
Full-text available
In the field of gesture recognition, human movements are observed, recognized, and transformed into functional primitives to control a system or manipulate an object. Human-Computer Interaction researchers have mostly studied the tracking of gestures performed after contact with the object to be manipulated (the apparent gesture). In this thesis, we show that interacting with and manipulating a 3D object is a broader process that begins with vision, includes reaching, grasping, and manipulation, and ends with the "consumption of events" in the target applications. We collected and organized a state of the art on Human-Machine interaction from several points of view (vision, neuropsychology, grasping, and techniques). We built a system that tracks users' movements on a table, as well as above its surface, from a 3D point cloud. We specified gestural-activity use cases and used them in two applications. We also proposed a new way to create applications that adapt to new forms of interaction, based on a software bus.
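As a toy illustration of separating on-surface contacts from mid-air points in such a point cloud (assuming the table plane is z = 0 and an arbitrarily chosen contact threshold; the thesis does not specify this code), one could filter points by height:

```python
import numpy as np

def split_by_height(points: np.ndarray, contact_threshold: float = 0.01):
    """Split an (N, 3) point cloud into points touching the table (z near 0) and points above it."""
    z = points[:, 2]
    on_surface = points[z <= contact_threshold]
    above_surface = points[z > contact_threshold]
    return on_surface, above_surface

# Example: 1000 random points over a 1 m x 1 m table, up to 0.5 m above it.
cloud = np.random.rand(1000, 3) * np.array([1.0, 1.0, 0.5])
touch_points, mid_air_points = split_by_height(cloud)
```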
Chapter
There are more and more application scenarios for unmanned delivery vehicles, and traditional human-computer interaction methods can no longer meet the needs of different task scenarios and users. The primary purpose of this paper is to apply natural human-computer interaction technology to the field of unmanned delivery, changing the traditional mode in which users can only operate unmanned delivery vehicles through the touch screen to complete tasks. The task scenarios of unmanned delivery vehicles are classified through the concept of context. Participatory design and heuristic research are used to let users define interactive gestures. Two gesture interaction sets that fit different task scenarios and can be accepted and understood by general users are designed. Based on Kinect's depth imaging and skeleton tracking technology, a large number of preset gesture samples is collected, and the AdaBoost algorithm is used for training to realize gesture recognition. Recognition and detection tests show that the gesture recognition achieved by this method has a high recognition rate and a fast response. Finally, based on Unity3D, the task scene of a real unmanned delivery vehicle is simulated in a virtual scene. A usability test of the human-computer interaction system shows that this interaction mode maintains task efficiency to a certain extent and improves the user experience.
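To make the training step concrete, here is a small, hypothetical sketch of fitting an AdaBoost classifier to skeleton-derived feature vectors (scikit-learn stands in for whatever toolchain the authors actually used; the joint features and gesture classes are placeholders, and random data replaces the collected samples):

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

# Placeholder features: flattened (x, y, z) coordinates of a few tracked joints per sample.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 18))       # 500 samples, 6 joints x 3 coordinates each
y = rng.integers(0, 4, size=500)     # 4 gesture classes, e.g. "stop", "go", "left", "right"

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = AdaBoostClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```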