Figure 2 - uploaded by Georg Umlauf
Set of static hand gestures. 

Source publication
Article
Full-text available
With the advent of various video game consoles and tablet devices, gesture recognition has become quite popular for controlling computer systems. E.g., touch screens allow for an intuitive control of small 2d user interfaces with finger gestures. For interactive manipulation of 3d objects in a large 3d projection environment a similar intuitive 3d interaction...

Contexts in source publication

Context 1
... hand gestures are defined by the position and posture of the hand. An example is the thumbs-up gesture, the sixth gesture in Figure 2. Such a gesture can be detected from a single image. ...
Context 2
... dynamic gestures need a start and end point, which can be triggered by static hand gestures. To grab an object we use the closed-hand gesture (fourth gesture in Figure 2) and to release it the open-hand gesture (seventh gesture in Figure 2). As long as the hand is open no interaction is triggered. ...
Context 4
... gestures with similar polygons, like the pointing-forefinger gesture (fifth gesture in Figure 2) and the thumbs-up gesture, are difficult to distinguish at both distances (Figure 6). ...
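The grab/release behaviour in Context 2 amounts to a small state machine: the closed-hand gesture starts the dynamic gesture, the open-hand gesture ends it, and an open hand by itself triggers no interaction. Below is a minimal Python sketch of that logic; the class and gesture names are hypothetical and not taken from the paper.

```python
# Minimal sketch (hypothetical names) of the grab/release logic from Context 2:
# closed hand starts a drag, open hand releases it, open hand alone does nothing.
from enum import Enum, auto


class Gesture(Enum):
    OPEN_HAND = auto()
    CLOSED_HAND = auto()
    OTHER = auto()


class GrabStateMachine:
    """Tracks whether an object is currently grabbed."""

    def __init__(self):
        self.grabbing = False

    def update(self, gesture, hand_position):
        """Return the action triggered by the current frame."""
        if gesture is Gesture.CLOSED_HAND and not self.grabbing:
            self.grabbing = True
            return ("grab", hand_position)          # start of the dynamic gesture
        if gesture is Gesture.OPEN_HAND and self.grabbing:
            self.grabbing = False
            return ("release", hand_position)       # end of the dynamic gesture
        if self.grabbing:
            return ("drag", hand_position)          # object follows the hand
        return ("idle", hand_position)              # open hand: no interaction


# Frame-by-frame usage on a toy gesture sequence
fsm = GrabStateMachine()
for g, pos in [(Gesture.OPEN_HAND, (0, 0, 0)), (Gesture.CLOSED_HAND, (0, 0, 0)),
               (Gesture.CLOSED_HAND, (1, 0, 0)), (Gesture.OPEN_HAND, (1, 0, 0))]:
    print(fsm.update(g, pos))
```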

Similar publications

Article
Full-text available
This paper reviews the concept of Statistical Neural Networks, with a build-up of the formalization of neural networks. The historical development of the concept is considered from the idea of McCulloch and Pitts in 1943 until statisticians began to embrace the use of neural networks in statistical theory and practice. Also considere...
Article
Full-text available
A modern and successful approach to education is represented by new teaching techniques which involve online courses, collaborative assignments, dynamic grading systems, real-time feedback and motivational inserts into the process of learning. E-learning together with massive open online courses (MOOCs) has seen a recent rise in popularity and integrat...
Conference Paper
Full-text available
Virtual worlds (VWs) have become increasingly popular to support students’ foreign language learning, especially beyond the classroom. Unfortunately, students’ interaction in VWs is not always available to the supervisor and thus is not easy to analyse. Nonetheless, it provides interesting information not only in terms of assessment, but also to de...
Preprint
Full-text available
As technologies become more and more pervasive, there is a need for considering the affective dimension of interaction with computer systems to make them more human-like. Current demands for this matter include accurate emotion recognition, reliable emotion modeling, and use of unobtrusive, easily accessible and preferably wearable measurement devi...
Article
Full-text available
Cancer patients suffer from many physical, psychological, and social problems that may impede quality of life and reduce the impact of treatment. Researchers have concluded that in addition to focusing on the treatment of the disease in the form of drugs and chemotherapy, factors such as the physical environment, communication skills of physicians...

Citations

... Existing hand gesture recognition techniques can be classified into two groups: wearable sensing and remote (non-contact) sensing. In wearable sensing, the user literally wears the sensor(s), which may be installed on a glove or otherwise attached to the hand [3], [4]. While this sensing mode is both stable and responsive, the sensor(s) must be worn whenever hand movement is to be detected, which is inconvenient. ...
Preprint
Full-text available
In this study, we present a wireless (non-contact) gesture recognition method using only incoherent light wave signals reflected from a human subject. In comparison to existing radar, light shadow, sound and camera-based sensing systems, this technology uses a low-cost ubiquitous light source (e.g., infrared LED) to send light towards the subject's hand performing gestures and the reflected light is collected by a light sensor (e.g., photodetector). This light wave sensing system recognizes different gestures from the variations of the received light intensity within a 20-35 cm range. The hand gesture recognition results demonstrate up to 96% accuracy on average. The developed system can be utilized in numerous Human-computer Interaction (HCI) applications as a low-cost and non-contact gesture recognition technology.
... This inconvenience strips away some of the advantages of wearable sensing. In general, although wearable sensing has higher accuracy, it is simply too inconvenient for many potential users [5]-[8]. ...
Preprint
Full-text available
In this paper, we demonstrate the ability to recognize hand gestures in a non-contact, wireless fashion using only incoherent light signals reflected from a human subject. Fundamentally distinguished from radar, lidar and camera-based sensing systems, this sensing modality uses only a low-cost light source (e.g., LED) and sensor (e.g., photodetector). The light-wave-based gesture recognition system identifies different gestures from the variations in light intensity reflected from the subject's hand within a short (20-35 cm) range. As users perform different gestures, scattered light forms unique, statistically repeatable, time-domain signatures. These signatures can be learned by repeated sampling to obtain the training model against which unknown gesture signals are tested and categorized. Performance evaluations have been conducted with eight gestures, five subjects, different distances and lighting conditions, and with visible and infrared light sources. The results demonstrate the best hand gesture recognition performance of infrared sensing at 20 cm with an average of 96% accuracy. The developed gesture recognition system is a low-cost, effective and non-contact technology for numerous Human-computer Interaction (HCI) applications.
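Both preprints learn gestures from time-domain signatures of the reflected light intensity. As a rough illustration only (not the authors' implementation), the sketch below trains a classifier on simple statistics of fixed-length intensity windows; the feature set, placeholder data and SVM model are assumptions, and NumPy and scikit-learn are assumed to be available.

```python
# Illustrative sketch: classify gestures from fixed-length photodetector intensity
# windows using hand-picked statistics and an SVM. The data here is a random placeholder.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC


def window_features(window):
    """Simple statistics of one intensity window (hypothetical feature set)."""
    return np.array([window.mean(), window.std(), window.min(),
                     window.max(), np.abs(np.diff(window)).mean()])


rng = np.random.default_rng(0)
X_raw = rng.normal(size=(200, 256))    # 200 windows of 256 intensity samples (placeholder)
y = rng.integers(0, 8, size=200)       # eight gesture classes, as evaluated in the paper

X = np.stack([window_features(w) for w in X_raw])
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = SVC(kernel="rbf").fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```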
... The applications of this 'state sensing' are versatile, to say the least. Some high-level areas are neurology, biology, sociology, engineering, physics, and so on [15][16][17][18][19][20][21]. Given the versatile nature of data fusion applications, throughout this manuscript we will limit our review to data fusion using LiDAR data and camera data for autonomous navigation. ...
... The benefit of using this combination is the accuracy, speed, and resolution of the LiDAR and the quality and richness of data from the stereo vision camera. Together, these two sensors provide an accurate, rich, and fast data set for the object detection layer [18,28,29]. ...
... The data fusion layer output will provide location information for the objects in the map of the environment, so that the autonomous vehicle can, for instance, avoid an obstacle, stop if the object is a destination, or wait for a state to be reached for further action if the object is deemed a marker or milestone. The control segment will take the necessary action, depending on the behavior as sensed by the sensor suite [18,28,29,[35][36][37]]. ...
Article
Full-text available
This paper focuses on data fusion, which is fundamental to one of the most important modules in any autonomous system: perception. Over the past decade, there has been a surge in the usage of smart/autonomous mobility systems. Such systems can be used in various areas of life like safe mobility for the disabled, senior citizens, and so on and are dependent on accurate sensor information in order to function optimally. This information may be from a single sensor or a suite of sensors with the same or different modalities. We review various types of sensors, their data, and the need for fusion of the data with each other to output the best data for the task at hand, which in this case is autonomous navigation. In order to obtain such accurate data, we need to have optimal technology to read the sensor data, process the data, eliminate or at least reduce the noise and then use the data for the required tasks. We present a survey of the current data processing techniques that implement data fusion using different sensors like LiDAR that use light scan technology, stereo/depth cameras, Red Green Blue monocular (RGB) and Time-of-flight (TOF) cameras that use optical technology and review the efficiency of using fused data from multiple sensors rather than a single sensor in autonomous navigation tasks like mapping, obstacle detection, and avoidance or localization. This survey will provide sensor information to researchers who intend to accomplish the task of motion control of a robot and detail the use of LiDAR and cameras to accomplish robot navigation.
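A recurring step in the LiDAR-camera fusion discussed above is projecting LiDAR points into the camera image so that depth can be attached to visually detected objects. The sketch below shows that projection under a standard pinhole model; the intrinsic and extrinsic parameters are placeholders and would normally come from calibration.

```python
# Illustrative LiDAR-to-camera projection: attach depth to image pixels.
import numpy as np

K = np.array([[700.0, 0.0, 320.0],   # camera intrinsics (placeholder values)
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                        # LiDAR-to-camera rotation (placeholder)
t = np.array([0.0, 0.0, 0.0])        # LiDAR-to-camera translation (placeholder)


def project_to_image(points_lidar):
    """Return pixel coordinates and depths for LiDAR points in front of the camera."""
    pts_cam = points_lidar @ R.T + t        # transform into the camera frame
    keep = pts_cam[:, 2] > 0.1              # discard points behind or too close to the camera
    pts_cam = pts_cam[keep]
    uvw = pts_cam @ K.T                     # perspective projection
    uv = uvw[:, :2] / uvw[:, 2:3]           # divide by depth to get pixel coordinates
    return uv, pts_cam[:, 2]


uv, depth = project_to_image(np.array([[1.0, 0.2, 5.0], [0.5, -0.1, 8.0]]))
print(uv, depth)
```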
... Depth image segmentation can be performed by thresholding the depth data or its normal field [23,27,50]. Proposals for detecting hands in depth images also include clustering approaches, where the pixels are grouped at different depth levels [58], morphological constraints [33], convex hull analysis for detection of fingertips [28], prior shape information [18], and even the human-body skeleton generated by the Kinect [6,17,30]. Moreover, if we consider the depth data as gray-level intensities, we can use known image processing techniques to perform segmentation as well as tracking. ...
Article
Full-text available
This paper presents a real-time framework that combines depth data and infrared laser speckle pattern (ILSP) images, captured from a Kinect device, for static hand gesture recognition to interact with CAVE applications. At the startup of the system, background removal and hand position detection are performed using only the depth map. After that, tracking is started using the hand positions of the previous frames in order to find the hand centroid of the current one. The obtained point is used as a seed for a region-growing algorithm to perform hand segmentation in the depth map. The result is a mask that will be used for hand segmentation in the ILSP frame sequence. Next, we apply motion restrictions for gesture spotting in order to mark each image as a 'Gesture' or 'Non-Gesture'. The ILSP counterparts of the frames labeled as 'Gesture' are enhanced by using mask subtraction, contrast stretching, median filtering, and histogram equalization. The result is used as the input for feature extraction using the scale-invariant feature transform (SIFT) algorithm, bag-of-visual-words construction and classification through a multi-class support vector machine (SVM) classifier. Finally, we build a grammar based on the hand gesture classes to convert the classification results into control commands for the CAVE application. The performed tests and comparisons show that the implemented plugin is an efficient solution. We achieve state-of-the-art recognition accuracy as well as efficient object manipulation in a virtual scene visualized in the CAVE.
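The SIFT, bag-of-visual-words and multi-class SVM stage of this pipeline can be approximated with off-the-shelf tools. The sketch below is an outline under stated assumptions, not the authors' code: it assumes OpenCV 4.4+ (cv2.SIFT_create) and scikit-learn, and omits the depth-based segmentation and ILSP enhancement steps.

```python
# Illustrative SIFT + bag-of-visual-words + multi-class SVM training outline.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

sift = cv2.SIFT_create()
VOCAB_SIZE = 100  # number of visual words (illustrative choice)


def sift_descriptors(gray_image):
    """128-d SIFT descriptors of one grayscale image (empty array if none found)."""
    _, desc = sift.detectAndCompute(gray_image, None)
    return desc if desc is not None else np.empty((0, 128), np.float32)


def bow_histogram(gray_image, vocab):
    """Normalized histogram of visual-word occurrences for one image."""
    desc = sift_descriptors(gray_image)
    hist = np.zeros(VOCAB_SIZE)
    if len(desc):
        for word in vocab.predict(desc):
            hist[word] += 1
        hist /= hist.sum()
    return hist


def train(gray_images, labels):
    """Build the visual vocabulary and fit the multi-class SVM."""
    all_desc = np.vstack([sift_descriptors(img) for img in gray_images])
    vocab = KMeans(n_clusters=VOCAB_SIZE, n_init=10, random_state=0).fit(all_desc)
    X = np.stack([bow_histogram(img, vocab) for img in gray_images])
    return vocab, SVC(kernel="rbf").fit(X, labels)

# Usage (hypothetical data): vocab, clf = train(list_of_grayscale_hand_images, gesture_labels)
```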
... Currently, the mouse and keyboard provide the means of input; other options, like virtually grasping an object through head, hand, and other body gestures, are becoming popular. As adults give way to the technology generation, it is important to focus on meeting their technological demands through further research (Bhuiyan & Picking, 2009), (Ren & Meng, 2011). The most effective recent ways of gesture communication are either wearing a remote control device on the user's hand or wearing an instrumented glove (Caputo 2012). However, the tools for capturing hand gestures are magnetic sensing devices (data gloves) or electromechanical devices; in this process, the tools use sensors attached to a glove that convert finger flexion into electrical signals to determine the hand gesture. ...
Article
Full-text available
To date, the most effective way for HCI (Human Computer Interaction) depends on an intermediate device - a remote control, teach pendant, computer mouse, data glove and many others. The use of human gestures as input to a computer system has advantages in terms of flexibility and ease of access. We propose a gesture-based control system for effective HCI interfaces based on coordinate features. The focus is on using the proposed coordinate features to correctly classify a number of human gestures corresponding to specific functions. The system was set up based on Kinect 360 and LabVIEW interfaces to control four specific functions based on four human gestures using coordinate features. The feasibility and the performance of the system were examined in terms of accuracy, operational distance and lighting conditions. Our experimental results showed that the proposed coordinate features can be used for gesture-based remote control.
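One simple way to realize such coordinate features is to express the hand joints relative to a reference joint (e.g., the torso), so the features do not depend on where the user stands, and feed them to a standard classifier. The sketch below illustrates the idea with toy data and a nearest-neighbour classifier; joint names, frames and labels are hypothetical and not taken from the paper.

```python
# Illustrative coordinate-feature gesture classification from skeleton joints.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier


def coordinate_features(joints):
    """Hand positions relative to the torso, invariant to the user's position."""
    torso = np.asarray(joints["torso"])
    return np.concatenate([np.asarray(joints["hand_right"]) - torso,
                           np.asarray(joints["hand_left"]) - torso])


# Two toy training frames (right hand raised vs. both hands down) with labels.
frames = [
    {"torso": (0.0, 1.0, 2.0), "hand_right": (0.3, 1.6, 1.9), "hand_left": (-0.3, 0.6, 2.0)},
    {"torso": (0.0, 1.0, 2.0), "hand_right": (0.3, 0.6, 2.0), "hand_left": (-0.3, 0.6, 2.0)},
]
labels = ["raise_right_hand", "idle"]

clf = KNeighborsClassifier(n_neighbors=1).fit(
    np.array([coordinate_features(f) for f in frames]), labels)

# A new frame from a user standing somewhere else is still classified correctly.
test = {"torso": (0.5, 1.0, 2.2), "hand_right": (0.8, 1.7, 2.1), "hand_left": (0.2, 0.6, 2.2)}
print(clf.predict([coordinate_features(test)])[0])   # -> raise_right_hand
```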
... The positions of the hands can also be obtained through more complex analysis, such as tracking the skeleton of the user [30]. In the latter case, there are several implementations of skeletal tracking available to the general public (OpenNI, Kinect SDK) that are sufficiently reliable for our proposed work. ...
Article
Full-text available
This paper describes the development of a low-cost mini-robot that is controlled by visual gestures. The prototype allows a person with disabilities to perform visual inspections indoors and in domestic spaces. Such a device could be used as the operator's eyes, obviating the need for the operator to move about. The robot is equipped with a motorised webcam that is also controlled by visual gestures. This camera is used to monitor tasks in the home using the mini-robot while the operator remains quiet and motionless. The prototype was evaluated through several experiments testing the ability to use the mini-robot's kinematics and communication systems to make it follow certain paths. The mini-robot can be programmed with specific orders and can be tele-operated by means of 3D hand gestures to enable the operator to perform movements and monitor tasks from a distance.
... Another interesting and different work is described in [12]. This paper presents a recognition system of arm and hand gestures using the Kinect sensor (version 1 for Xbox 360) and high-definition 3D cameras. ...
... Advanced interaction environments have become a reality nowadays, mainly due to more affordable prices, ease of use and the popularization of gaming devices such as Kinect and Wiimote [5]. Kinect, which offers depth sensors that help track real-time movements, has been frequently used in human-computer communication research, in gesture recognition, robotics and computer graphics [6] [7]. Another reason for its success is the availability of proprietary and open source libraries for application development. ...
... An example of the first direction is in Caputo et al. [7], where the authors propose the fusion of multiple Kinect depth sensors with HD camera color sensors as a means of improving gesture recognition from greater distances. This approach combines arm and hand movement dynamics recognition, provided by Kinect, with static hand gesture recognition from color sensors. ...
Article
Full-text available
The advent of advanced user interface devices has raised the interest of industry and academia in finding new modes of Human-Computer Interaction. Advanced interfaces employ gesture recognition, as well as motion and voice capturing, to enable humans to interact naturally with interactive environments without utilizing any of the traditional devices like mice, joysticks or keyboards. Many approaches have been developed using a large variety of sensors to capture human interaction information and then provide further processing and recognition of the acquired information. However, the majority of these approaches usually focus on the actual implementation of the various stages that comprise an advanced interaction environment. Thus, the need for defining common data formats to improve integration and reutilization of these solutions is typically not addressed. In contrast, this study aims to survey existing research on integrating devices into interactive environments, at different interoperability levels and in data formats, identifying techniques and patterns of conveying information from the real world to the virtual world, in order to synthesize results, organize applicable documents by similarities and identify future research needs.
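To make the data-format point concrete, a device-agnostic interaction event could look like the record sketched below, serialized as JSON so different sensors and virtual environments can exchange it. This is purely illustrative and not a format proposed or surveyed in the paper.

```python
# Illustrative device-agnostic interaction event (hypothetical fields), JSON-serializable.
import json
from dataclasses import dataclass, asdict


@dataclass
class InteractionEvent:
    device: str           # e.g. "kinect_v1", "wiimote"
    modality: str         # "gesture", "motion", "voice"
    label: str            # recognized class, e.g. "closed_hand"
    timestamp_ms: int     # capture time in milliseconds
    confidence: float     # classifier confidence in [0, 1]
    position: tuple       # (x, y, z) in the device frame, if available


event = InteractionEvent("kinect_v1", "gesture", "closed_hand", 1234567, 0.93, (0.3, 1.1, 2.0))
print(json.dumps(asdict(event)))
```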
... Next, spatial configuration checking is applied using prior information about the hand shape. Another possibility is to use the human-body skeleton generated by the Kinect to estimate the position of both hands and to reduce the search space for hand detection and segmentation [23]. ...
Conference Paper
Human Computer Interaction (HCI) is a fundamental issue for virtual reality environments due to the need for natural approaches and comfortable devices. Such goals can be achieved using hand gestures to interact with the virtual reality engine. This paper presents a real-time system based on hand gesture recognition (HGR) for interaction with CAVE applications. The whole pipeline can be roughly divided into four steps: segmentation, feature extraction for bag-of-features construction, classification through a multiclass support vector machine (SVM), and generation of commands to control the application. We build a grammar based on the hand gesture classes to convert the classification results into control commands for an application running in a CAVE. The input is the depth stream data acquired from a Kinect device. The hand gesture recognition and command generation/execution approaches compose a client-server plugin that is part of a CAVE system implemented based on the Instant Reality architecture and the X3D standard. The results show that the implemented plugin is a promising solution. We achieve suitable recognition accuracy and efficient object manipulation in a virtual room representing a surgical environment visualized in the CAVE.
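The grammar that maps recognized gesture classes to application commands can be as simple as a lookup over single gestures plus a few sequence rules. The sketch below illustrates the idea with hypothetical class and command names; it is not the grammar defined in the paper.

```python
# Illustrative gesture grammar: single-gesture commands plus simple sequence rules.
SINGLE = {
    "closed_hand": "GRAB_OBJECT",
    "open_hand": "RELEASE_OBJECT",
    "pointing_forefinger": "SELECT_TARGET",
    "thumbs_up": "CONFIRM",
}
SEQUENCES = {
    ("pointing_forefinger", "closed_hand"): "GRAB_SELECTED_TARGET",
}


def to_command(history):
    """Translate the most recently recognized gestures into one control command."""
    if len(history) >= 2 and tuple(history[-2:]) in SEQUENCES:
        return SEQUENCES[tuple(history[-2:])]
    return SINGLE.get(history[-1], "NO_OP") if history else "NO_OP"


print(to_command(["pointing_forefinger", "closed_hand"]))  # GRAB_SELECTED_TARGET
print(to_command(["open_hand"]))                           # RELEASE_OBJECT
```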