Figure 5 - uploaded by Shyh-Kuang Ueng
Figure 5: Loss of the active hand caused by overlapping, and the recovery process.


Source publication
Article
Full-text available
This paper proposes a vision-based Multi-user Human Computer Interaction (HCI) method for creating augmented reality user interfaces. In the HCI session, one of the users’ hands is selected as the active hand. The fingers of the active hand are employed as input devices to trigger functionalities of the application program. To share the token of in...

Context in source publication

Context 1
... let the user resolve these problems: As the FSM switches back to the initial state to track a new active hand, the user can wave his hand slightly to regain control of the interaction. An example in Figure 5 illustrates the recovery of the active-hand role. In part (a), the active hand overlaps the user's face and the fingertip detection procedure locates no fingertips. ...
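The recovery behavior described in this context can be sketched as a small state machine. This is a hedged illustration only: the state names, and the reduction of the detector output to two booleans, are assumptions not taken from the paper.

```python
# Minimal sketch (assumed states/transitions) of the recovery behavior above:
# when fingertip detection fails (e.g., the active hand overlaps the face),
# the FSM falls back to its initial state; a slight hand wave is then picked
# up as motion and re-establishes the active-hand role.
from enum import Enum, auto

class State(Enum):
    SEARCHING = auto()   # initial state: looking for a new active hand
    TRACKING = auto()    # an active hand is locked and fingertips are tracked

class ActiveHandFSM:
    def __init__(self):
        self.state = State.SEARCHING

    def step(self, fingertips_found: bool, hand_motion: bool) -> State:
        if self.state == State.TRACKING and not fingertips_found:
            # Loss of the active hand (e.g., hand/face overlap): reset.
            self.state = State.SEARCHING
        elif self.state == State.SEARCHING and hand_motion:
            # A slight wave regains control of the interaction.
            self.state = State.TRACKING
        return self.state

fsm = ActiveHandFSM()
fsm.step(fingertips_found=False, hand_motion=True)    # wave -> TRACKING
fsm.step(fingertips_found=False, hand_motion=False)   # overlap -> SEARCHING
```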

Citations

... Vision-based HCI methods (Rautaray and Agrawal 2015;Datcu et al. 2015;Lou et al. 2018;Chen et al. 2018;Ueng and Chen 2016;Wu et al. 2016) provide non-intrusive and non-contact input for the system using computer vision and image processing techniques. This set of methods relies on processing the video captured by one or more cameras. ...
Article
Full-text available
Compared to the mouse, a precise but two-dimensional interface device, hand gestures provide more degrees of freedom for users to interact with computers by employing intelligent computing methods. The Leap Motion Controller is gaining popularity due to its ability to detect and track hand joints in three dimensions. In some cases, however, the Leap Motion Controller measurements are not sufficiently accurate. We show that occlusion, palm angle, and the limited field of view are the main downsides of the Leap Motion Controller. In this paper, a framework is proposed to manipulate and deform a three-dimensional object by hand gestures. We select only a few gestures so that the system instructions can be easily memorized. The gestures are not defined very strictly, so users can perform them properly without getting tired. We propose that calculating a reliable interaction space for the Leap Motion Controller can significantly reduce these problems. To deform objects, the Free Form Deformation technique is used, which allows for more local deformation. The selected gestures and the determined interaction space allow the deformation framework to balance accuracy, user factors, the tasks required for deformation, and the limitations of the hand-tracking device. Compared to related studies, the proposed method offers more creative ways to deform objects and more natural movements for interacting with the system. According to the conducted user study, a significant difference is observed between hand-gesture interaction and the mouse in terms of speed and number of attempts.
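The Free Form Deformation technique named in this abstract is the classic Bézier-lattice scheme: each point is re-expressed in lattice coordinates and blended through the control points with Bernstein polynomials. A minimal sketch follows; the lattice size is an illustrative assumption, not the configuration used in the paper.

```python
# Hedged sketch of classic trivariate Bezier Free Form Deformation: a point
# in the unit cube is deformed by blending lattice control points with
# Bernstein weights. Moving control points deforms space locally around them.
import numpy as np
from math import comb

def bernstein(n: int, i: int, t: float) -> float:
    """i-th Bernstein polynomial of degree n evaluated at t."""
    return comb(n, i) * (t ** i) * ((1 - t) ** (n - i))

def ffd(point, lattice):
    """Deform a point in [0,1]^3 through a Bezier control lattice.

    lattice has shape (l+1, m+1, n+1, 3): a 3D grid of control points.
    """
    l, m, n = (s - 1 for s in lattice.shape[:3])
    s, t, u = point
    out = np.zeros(3)
    for i in range(l + 1):
        for j in range(m + 1):
            for k in range(n + 1):
                w = bernstein(l, i, s) * bernstein(m, j, t) * bernstein(n, k, u)
                out += w * lattice[i, j, k]
    return out

# Undeformed lattice: control points at their parametric positions, so the
# FFD reproduces the identity map (linear precision of the Bernstein basis).
L = np.array([[[[i / 2, j / 2, k / 2] for k in range(3)]
               for j in range(3)] for i in range(3)])
p = ffd((0.5, 0.5, 0.5), L)   # identity lattice: p stays at (0.5, 0.5, 0.5)
```

Displacing a single control point of `L` then bends only the region of the cube near that point, which is the "more local deformation" the abstract refers to.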
... 3D human motion analysis is a very interesting research topic in the domains of computer vision and pattern recognition, and has attracted widespread attention due to its numerous applications in digital entertainment [36], intelligent visual surveillance [30], healthcare [35], human-computer interaction [21], and sport competition [23]. For example, in an intelligent visual surveillance system [30], the 3D poses of a human subject can be used to recognize a generic activity and detect abnormal behaviors in real scenes. ...
Article
Full-text available
Recognizing and tracking multiple activities are extremely challenging machine vision tasks due to the diverse motion types involved and the high-dimensional (HD) state space. To overcome these difficulties, a novel generative model called the composite motion model (CMM) is proposed. This model contains a set of independent, low-dimensional (LD), activity-specific manifold models that effectively constrain the state search space for 3D human motion recognition and tracking. This separate modeling of activity-specific movements not only allows each manifold model to be optimized for its respective movement, but also improves the scalability of the models. For accurate tracking with our CMM, a particle filter (PF) method is employed so that particles can be distributed across all manifold models at each time step. In addition, an efficient activity-switching strategy is proposed to govern the particle distribution over the LD manifolds. To diffuse the particles among manifold models and respond quickly to sudden changes in activity, a set of visually reasonable and kinematically realistic transition bridges is synthesized by exploiting the good properties of the LD latent space and the HD observation space, which makes the inter-activity motions seem more natural and realistic. Finally, the pose hypothesis that best interprets the visual observation is selected and then used to recognize the currently observed activity. Extensive experiments, via qualitative and quantitative analyses, verify the effectiveness and robustness of our proposed CMM in the tasks of multi-activity 3D human motion recognition and tracking.
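The propagate/reweight/resample cycle that the CMM builds on can be sketched generically. This is a bare particle-filter step under assumed toy dynamics; the manifold models, transition bridges, and activity switching of the paper are abstracted away, and `likelihood` is a stand-in for the paper's observation model.

```python
# Generic sketch of one particle-filter step: propagate hypotheses with
# process noise, reweight them by an observation likelihood, and resample
# so particles concentrate on plausible pose hypotheses.
import numpy as np

rng = np.random.default_rng(0)

def pf_step(particles, weights, likelihood, noise=0.05):
    # 1. Propagate each particle (stand-in for a learned motion model).
    particles = particles + rng.normal(0.0, noise, particles.shape)
    # 2. Reweight by how well each hypothesis explains the observation.
    weights = weights * likelihood(particles)
    weights = weights / weights.sum()
    # 3. Resample to focus the particle set on high-likelihood regions.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Toy 1D example: the observation sits at 1.0 with a narrow likelihood.
obs = 1.0
like = lambda p: np.exp(-0.5 * ((p[:, 0] - obs) / 0.1) ** 2)
parts = rng.normal(0.0, 1.0, (200, 1))
w = np.full(200, 1.0 / 200)
parts, w = pf_step(parts, w, like)
# After one step, the particle cloud concentrates near the observation.
```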
... It is easy to tune and its performance is superior to the color-constancy-based methods and the procedure using a mixed color model. Figure 1 shows the improvement achieved by the proposed method in an HCI application [7]. The raw image is displayed in part (a). ...
... In our implementation, β is set to 2.105. The parameter β was determined by experiments carried out in [7]. The mean and the covariance matrix are retrieved from [6]. ...
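A skin model parameterized by a mean, a covariance matrix, and a threshold β, as in the snippet above, suggests a Mahalanobis-distance test. The sketch below makes that assumption explicit; the mean and covariance values are illustrative placeholders, not the ones retrieved from [6].

```python
# Hedged sketch of a Gaussian skin-color test: a pixel is classified as skin
# if its squared Mahalanobis distance to the skin mean, under the model
# covariance, falls below the threshold beta. Numeric values are assumptions.
import numpy as np

BETA = 2.105  # threshold reported in the snippet, tuned experimentally in [7]

def is_skin(pixel, mean, cov, beta=BETA):
    d = np.asarray(pixel, dtype=float) - mean
    m2 = d @ np.linalg.inv(cov) @ d   # squared Mahalanobis distance
    return bool(m2 <= beta)

mean = np.array([0.45, 0.31])                # assumed skin mean (normalized rg)
cov = np.array([[0.02, 0.0], [0.0, 0.01]])   # assumed covariance matrix

is_skin([0.44, 0.30], mean, cov)   # near the mean -> True
is_skin([0.10, 0.70], mean, cov)   # far from the skin locus -> False
```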
Article
Full-text available
Skin region detection is crucial for face recognition, hand tracking, and motion detection. In the detection process, a skin color model is usually required to confine the distribution of skin colors. However, skin color models are sensitive to lighting conditions, and skin segmentation under varying lighting conditions produces poor results. This article presents a skin detection procedure for human–computer interaction sessions under varying lighting conditions. The proposed method requests a skin sample from the user to estimate the color temperature of the light source. The color temperature is then used to correct the skin sample. At the subsequent step, the mean of the corrected skin sample is utilized to adapt the skin color model. Finally, the adapted skin color model is employed to segment skin regions in the video stream. Tests using the proposed method and several adaptive skin detection algorithms have been conducted. Statistical data show that the proposed method is superior to color constancy methods and the Gaussian mixture model in skin region segmentation. The proposed method improves the true positive rate by more than 13% in segmenting skin regions of a database; its true positive rate is 20% better when real-life images are used as test data.
... There are seven papers [6,9,16,17,19] in the technology category of smart human-machine interaction. These studies focus on the technical issues involved in realizing a human-computer interaction system for a specific application. ...
... Remote e-teaching systems for students [6,9,16] in the field of education and remote rehabilitation systems for patients in the field of home care [17] are the two main smart human interaction interface applications. The 20th paper, presented by Ueng and Chen [19], proposes a vision-based multi-user human-computer interaction method for creating augmented reality user interfaces. The system presented in [19] is a smart HCI system that is efficient, flexible, and practical for users who have difficulty using ordinary input devices. ...
... Color constancy is not needed before skin segmentation either. Fig. 1 shows the improvement achieved by the proposed method in an HCI application [7]. The raw image is displayed in part (a). ...
Conference Paper
Full-text available
This paper presents an innovative method for skin detection under changing lighting conditions. The proposed algorithm employs a Gaussian skin color model to classify skin pixels. The Gaussian model is adapted beforehand to accommodate the current lighting condition. At first, the color temperature of the light source is estimated from a skin sample. The skin sample then undergoes a color constancy process to produce a corrected skin sample. The means of the raw and corrected skin samples are utilized to transform the Gaussian model. Experiments using different adaptive skin detection methods and the proposed algorithm have been conducted. The test results assert the effectiveness of the proposed method for skin detection under changing lighting conditions.
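The abstract says the means of the raw and corrected skin samples transform the Gaussian model. One plausible reading, sketched below under that assumption, is to translate the model mean by the shift the color constancy step applied to the sample; all numeric values are illustrative.

```python
# Hedged sketch of the adaptation step described above: shift the Gaussian
# skin model's mean by the difference between the means of the corrected and
# raw skin samples, so classification matches the current lighting condition.
import numpy as np

def adapt_mean(model_mean, raw_sample, corrected_sample):
    """Translate the model mean by the correction the skin sample underwent."""
    shift = corrected_sample.mean(axis=0) - raw_sample.mean(axis=0)
    return model_mean + shift

model_mean = np.array([0.45, 0.31])                  # assumed reference skin mean
raw = np.array([[0.50, 0.28], [0.52, 0.30]])         # sample under current light
corrected = np.array([[0.46, 0.31], [0.48, 0.33]])   # after color constancy

adapted = adapt_mean(model_mean, raw, corrected)
# The model mean moves toward the corrected sample's chromaticity.
```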
Article
Full-text available
The aim of this article is to analyze and review the scientific literature relating to the application of Augmented Reality (AR) technology in industry. AR technology is becoming increasingly widespread, owing to the ease of application development and the broad availability of hardware devices (mainly smartphones and tablets) able to support its adoption. Today, a growing number of applications based on AR solutions are being developed for industrial purposes. Although these applications are often little more than experimental prototypes, AR technology is proving highly flexible and is showing great potential in numerous areas (e.g., maintenance, training/learning, assembly, or product design) and industrial sectors (e.g., the automotive, aircraft, or manufacturing industries). It is expected that AR systems will become even more widespread in the near future. The purpose of this review is to classify the literature on AR published from 2006 to early 2017, to identify the main areas and sectors where AR is currently deployed, and to describe the technological solutions adopted as well as the main benefits achievable with this kind of technology.
Article
The large-scale nature of C4I applications makes it difficult to formulate accessible requirements before putting substantial effort into development. Rapid modeling/prototyping has proven efficient for requirement validation and verification by providing a small-scale software product. The latest update of the software modeling/prototyping tool is an advanced tool for software prototyping and modeling via a unified graphical environment. To support complex requirement specification and elicitation, this tool is designed as a user-centered modeling environment that represents requirements at multiple levels, supports project management, reduces modeling/prototyping effort, maintains model consistency, and helps prevent and eliminate errors. The tool is demonstrated to be useful for modeling C4I applications.