Block diagram of the VBI

Source publication
Article
This paper presents two vision-based interfaces for disabled people to command a mobile robot for personal assistance. The developed interfaces can be subdivided according to the image-processing algorithm implemented for detecting and tracking two different body regions. The first interface detects and tracks movements of the user's head...

Similar publications

Article
This paper presents a motion planning system for robotic devices to be adopted in assistive or rehabilitation scenarios. The proposed system is grounded on a Learning by Demonstration approach based on Dynamic Movement Primitives (DMP) and presents a high level of generalization allowing the user to perform activities of daily living. The proposed...
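Although the abstract is truncated here, the Dynamic Movement Primitive formulation it references is standard. Below is a minimal sketch of a one-dimensional discrete DMP with the usual critically damped gains; the gain values and the zeroed forcing term are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Minimal sketch of a one-dimensional discrete Dynamic Movement Primitive (DMP).
# Gains and the (here, zeroed) forcing term are illustrative assumptions.
def rollout_dmp(y0, g, tau=1.0, dt=0.01, alpha_z=25.0, beta_z=6.25, alpha_x=1.0,
                forcing=lambda x: 0.0):
    y, z, x = y0, 0.0, 1.0              # position, scaled velocity, canonical phase
    trajectory = []
    for _ in range(int(tau / dt)):
        f = forcing(x)                   # learned forcing term shapes the motion
        z += dt / tau * (alpha_z * (beta_z * (g - y) - z) + f)
        y += dt / tau * z                # transformation system converges to goal g
        x += dt / tau * (-alpha_x * x)   # canonical system drives the phase to 0
        trajectory.append(y)
    return np.array(trajectory)

# With a zero forcing term the DMP reduces to a stable goal-directed attractor:
print(rollout_dmp(y0=0.0, g=1.0)[-1])  # approaches 1.0
```

Learning by Demonstration then amounts to fitting the forcing term to demonstrated trajectories, which is what gives the approach its generalization across start and goal positions.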
Conference Paper
In this paper, we present a taxonomy of Robot-Assisted Training, a growing body of research in Human-Robot Interaction that focuses on how robotic agents and devices can be used to enhance users' performance during a cognitive or physical training task. The proposed taxonomy includes a set of parameters that characterize such systems, in order to...
Conference Paper
Assistive robotic arms are increasingly enabling users with upper extremity disabilities to perform activities of daily living on their own. However, the increased capability and dexterity of the arms also makes them harder to control with simple, low-dimensional interfaces like joysticks and sip-and-puff interfaces. A common technique to control...
Article
Deaf-blindness forces people to live in isolation. At present, there is no existing technological solution enabling two (or many) deaf-blind people to communicate remotely among themselves in tactile Sign Language (t-SL). When resorting to t-SL, deaf-blind people can communicate only with people physically present in the same place, because they ar...
Article
Background: The application of continuum manipulators as assistive robots is discussed and tested through the use of Bendy ARM, a simple, manually teleoperated, tendon-driven continuum manipulator prototype. Methods: Two rounds of user testing were performed to evaluate the potential of this arm to aid people living with disabilities in completing ac...

Citations

... Francois Pasteau [9] also developed a wheelchair assistant for corridor routes using visual linear features extracted from the wall corners. Perez [10] and Pei Jia [11] likewise took a vision-based approach in their studies, placing control of a smart wheelchair on head gestures as the main feature and using optical flow to recognize and classify the commands. ...
Article
Head gesture recognition has been developed using a variety of devices that mostly contain a sensor, such as a gyroscope or an accelerometer, for determining the direction and magnitude of movement. This paper explains how to control a smart wheelchair using head-gesture recognition based on computer vision. By using the Haar cascade algorithm to determine the position of the face and nose, the head-gesture command becomes straightforward to determine. We classify head gestures into four classes: look down, look up/center, turn right, and turn left. These four gestures are used to control the smart wheelchair's brake, acceleration, right turn, and left turn. The experimental results show that our system successfully controlled the smart wheelchair using head gestures.
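A minimal sketch of such a Haar-cascade pipeline (detect the face, detect the nose inside it, then classify the gesture from the nose offset relative to the face center) could look as follows. The nose-cascade file name, the offset threshold, and the sign conventions are assumptions; only the face cascade ships with stock OpenCV.

```python
import cv2

# Sketch of a Haar-cascade head-gesture pipeline. The face cascade ships with
# OpenCV; the nose cascade file name is an assumption and must be supplied.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
nose_cascade = cv2.CascadeClassifier("haarcascade_mcs_nose.xml")

def classify_head_gesture(gray, offset_ratio=0.15):
    """Map the nose position relative to the face center to a wheelchair command."""
    for (fx, fy, fw, fh) in face_cascade.detectMultiScale(gray, 1.3, 5):
        for (nx, ny, nw, nh) in nose_cascade.detectMultiScale(
                gray[fy:fy + fh, fx:fx + fw], 1.3, 5):
            dx = (nx + nw / 2) - fw / 2   # horizontal nose offset within the face
            dy = (ny + nh / 2) - fh / 2   # vertical nose offset within the face
            # Sign conventions depend on camera mirroring; chosen arbitrarily here.
            if dx < -offset_ratio * fw:
                return "turn_right"
            if dx > offset_ratio * fw:
                return "turn_left"
            if dy > offset_ratio * fh:
                return "brake"            # look down
            return "accelerate"           # look up / center
    return "no_face"
```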
... Another example of face tracking is presented by Perez et al. [16], who detected and tracked a patient's face using an algorithm based on optical flow. The eyes and mouth were detected even under changes in light intensity or facial geometry. ...
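Their exact algorithm is not reproduced here, but a sparse pyramidal Lucas-Kanade tracker of the kind commonly used for this task is easy to sketch with OpenCV; the window size, pyramid depth, and corner-detection parameters are illustrative assumptions.

```python
import cv2
import numpy as np

# Sketch of Lucas-Kanade optical-flow tracking of facial feature points.
# prev_gray/next_gray are consecutive grayscale frames; points are seeded once
# inside a detected face region and then propagated frame to frame.
def track_points(prev_gray, next_gray, prev_pts):
    next_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, prev_pts, None,
        winSize=(21, 21), maxLevel=3)
    good = status.ravel() == 1            # keep only successfully tracked points
    return next_pts[good], prev_pts[good]

# Seeding example: strong corners inside the face bounding box (x, y, w, h).
def seed_points(gray, face_box):
    x, y, w, h = face_box
    mask = np.zeros_like(gray)
    mask[y:y + h, x:x + w] = 255
    return cv2.goodFeaturesToTrack(gray, maxCorners=50, qualityLevel=0.01,
                                   minDistance=5, mask=mask)
```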
Article
Today, computer vision algorithms are very important for different fields and applications, such as closed-circuit television security, health status monitoring, recognition of specific people or objects, and robotics. On this topic, the present paper provides a recent review of the literature on computer vision algorithms (recognition and tracking of faces, bodies, and objects) oriented towards socially assistive robot applications. The performance, frames-per-second (FPS) processing speed, and hardware used to run the algorithms are highlighted by comparing the available solutions. Moreover, this paper provides general information for researchers interested in knowing which vision algorithms are available, enabling them to select the one most suitable to include in their robotic system applications. (Montaño-Serrano, V.M.; Jacinto-Villegas, J.M.; Vilchis-González, A.H.; Portillo-Rodríguez, O. Artificial Vision Algorithms for Socially Assistive Robot Applications: A Review of the Literature. Sensors 2021, 21, 5728.)
... For example, in the studies of [21]-[23], an imaging sensor is attached to the human body as an interface to recognize gestures and other actions. The research of [24], [25] uses hand gestures and head tilt as an intuitive interface to control an electric wheelchair and a mobile robot. In the research of [26] and [27], an imaging sensor attached to the wrist captures the detailed movements of the five fingers. ...
Article
Conventional control systems for prosthetic hands use myoelectric signals as an interface, but it is impossible to realize complex and flexible human hand movements with myoelectric signals alone. A promising control scheme for prosthetic hands uses computer vision to assist in grasping objects. It features an imaging sensor, and the control system is capable of recognizing an object placed in the environment. A gripping pattern can then be selected from predefined candidates according to the recognized object. However, previous studies assumed that only one object exists in the environment. If there are multiple target objects in the environment, the hand could become confused in attempting to find the target object. This study addresses this problem and proposes a method to determine the target object from multiple objects. The proposed method determines the target object by estimating the positional relationship between the artificial hand and the objects, as well as the motion of the hand. To verify its validity and effectiveness, we implemented the proposed method in a vision-based prosthetic hand control system and conducted pick-and-place experiments. The experiments confirm that the proposed method can accurately estimate the target object in accordance with the user's intention.
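The abstract does not spell out the estimator, but one simple way to realize the idea (score each candidate object by how well the hand's motion vector points toward it, breaking ties by proximity) can be sketched as follows; the scoring rule and weights are illustrative assumptions, not the authors' method.

```python
import numpy as np

# Illustrative sketch: pick the target the hand is moving toward, combining
# direction alignment and proximity. Not the paper's exact estimator.
def select_target(hand_pos, hand_vel, object_positions, w_align=1.0, w_dist=0.1):
    hand_pos = np.asarray(hand_pos, dtype=float)
    v = np.asarray(hand_vel, dtype=float)
    v /= (np.linalg.norm(v) + 1e-9)                  # unit motion direction
    best, best_score = None, -np.inf
    for i, obj in enumerate(object_positions):
        d = np.asarray(obj, dtype=float) - hand_pos  # hand-to-object vector
        dist = np.linalg.norm(d) + 1e-9
        align = float(np.dot(v, d / dist))           # cosine of approach angle
        score = w_align * align - w_dist * dist      # prefer aligned, nearby objects
        if score > best_score:
            best, best_score = i, score
    return best

# Example: a hand at the origin moving along +x selects the object ahead of it.
print(select_target([0, 0, 0], [1, 0, 0], [[0.5, 0.4, 0], [0.6, 0.0, 0]]))  # -> 1
```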
... Motion capture has been widely used for rehabilitation exercises, gait analysis, and the diagnosis of posture disorders. Artificial vision systems [5,6] are a suitable alternative or complement to electromyographic acquisition [7,8], inertial measurement devices, goniometry, and other techniques, mainly because of their low cost, noninvasiveness, and user comfort. ...
Article
Background: The rehabilitation process is a fundamental stage in the recovery of people's capabilities. However, the evaluation of the process is performed by physiatrists and medical doctors mostly based on their observations, that is, a subjective appreciation of the patient's evolution. This paper proposes a platform for tracking the movement of an individual's upper limb using Kinect sensor(s), to be applied to the patient during the rehabilitation process. The main contribution is the development of quantifying software and the statistical validation of its performance, repeatability, and clinical use in the rehabilitation process. Methods: The software determines joint angles and upper limb trajectories for the construction of a specific rehabilitation protocol and quantifies the treatment's evolution. In turn, the information is presented via a graphical interface that allows the recording, storage, and reporting of the patient's data. For clinical purposes, the software's output is statistically validated with three different methodologies, comparing the measures with a goniometer in terms of agreement and repeatability. Results: The agreement of joint angles measured with the proposed software and the goniometer is evaluated with Bland-Altman plots; all measurements fell well within the limits of agreement, indicating interchangeability of the two techniques. Additionally, the results of the Bland-Altman repeatability analysis show 95% confidence. Finally, the physiotherapists' qualitative assessment shows encouraging results for clinical use. Conclusion: The main conclusion is that the software is capable of offering a clinical history of the patient and is useful for quantifying rehabilitation success. The simplicity, low cost, and visualization possibilities enhance the use of the Kinect-based software for rehabilitation and other applications, and the experts' opinion endorses the choice of our approach for clinical practice. Comparison of the new measurement technique with established goniometric methods shows that the proposed software agrees sufficiently to be used interchangeably.
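As a concrete sketch of the core computation such software performs, a joint angle can be obtained from three tracked 3-D joint positions, e.g., the elbow angle from the shoulder, elbow, and wrist coordinates reported by the Kinect skeleton (assumed here in meters); this is the standard vector formula, not the authors' code.

```python
import numpy as np

# Sketch: elbow flexion angle from three Kinect joint positions (meters).
def joint_angle(shoulder, elbow, wrist):
    u = np.asarray(shoulder, dtype=float) - np.asarray(elbow, dtype=float)
    v = np.asarray(wrist, dtype=float) - np.asarray(elbow, dtype=float)
    cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# A fully extended arm along one axis gives ~180 degrees:
print(joint_angle([0, 0.3, 0], [0, 0, 0], [0, -0.25, 0]))  # -> 180.0
```

Logging these angles per frame yields the trajectories the platform compares against goniometer readings in the Bland-Altman analysis.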
... Regarding the HCI, the major requirements are noninvasiveness, low cost, robustness, and adaptability for a group of users with similar capacities. For many years, we have developed HCIs that were tested as control inputs with acceptable results, especially for the needs of disabled people, such as adapted keypads, head-mouse controllers, vision-based interfaces using hand or head position detection, and electromyogram- and electrooculogram-based interfaces, among others [12][13][14][15][16]. ...
... To avoid this problem, we decided to incorporate a light source. More details about this interface can be found in [6]. ...
Article
Advances in medicine have led to a significant increase in human life expectancy and, therefore, to a growing number of disabled elderly people who need chronic care and assistance [1]. The World Health Organization reports that the world's population over 60 years old will double between 2000 and 2050 and quadruple for seniors older than 80 years, reaching 400 million [2]. In addition, strokes, traffic-related and other accidents, and seemingly endless wars and acts of terrorism contribute to an increasing number of disabled younger people.
... In recent years, robots have achieved remarkable development in assisting humans, covering a wide range of fields such as healthcare, the military, and manufacturing. Medical robotics is a promising field with several applications in healthcare industries, such as surgical robots, telemedicine robots, and rehabilitation robots [1]. A telepresence robot allows a medical specialist to see, talk, hear, interact, navigate, and acquire medical data as if the doctor were virtually present [2]. ...
Article
The use of medical robots in the healthcare industry, especially in rural areas, has been gaining attention. The development of the Medical Tele-diagnosis Robot (MTR) has gained importance in addressing medical emergencies. Nevertheless, challenges for better visual communication still arise. Thus, a face identification and tracking system for the MTR is designed to provide an automated view that helps the medical specialist identify and keep the patient in the best view for visual communication. This paper focuses on the motion detection module, the first module of the system. An improved motion detection technique suited to real-time application with a dynamic background is proposed. The frame-differencing method was used to detect the motion of the target. The developed motion detection module achieved an accuracy of 96%, contributing to an average accuracy of 97% for the whole MTR.
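A minimal frame-differencing detector of the kind the module describes can be sketched as follows; the binarization threshold and minimum blob area are illustrative assumptions, not the paper's values.

```python
import cv2

# Sketch of frame-differencing motion detection between consecutive frames.
def detect_motion(prev_frame, curr_frame, thresh=25, min_area=500):
    g1 = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(g1, g2)                         # per-pixel frame difference
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, None, iterations=2)        # close small gaps
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Return bounding boxes of sufficiently large moving regions.
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]
```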
... The YCbCr space is selected because its chrominance components vary less across different skin tones [23]. Segmentation is performed by thresholding the Cr and Cb components [26]. The threshold values were empirically selected as 130 < Cr < 170 and 70 < Cb < 127. ...
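Using the exact thresholds quoted above, that segmentation step can be sketched in OpenCV as follows; note that OpenCV's conversion code orders the channels Y, Cr, Cb.

```python
import cv2
import numpy as np

# Skin segmentation by thresholding chrominance, using the thresholds quoted
# above: 130 < Cr < 170 and 70 < Cb < 127. Y (luminance) is left unconstrained.
def skin_mask(bgr_image):
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)  # channels: Y, Cr, Cb
    lower = np.array([0, 130, 70], dtype=np.uint8)
    upper = np.array([255, 170, 127], dtype=np.uint8)
    return cv2.inRange(ycrcb, lower, upper)               # 255 where skin-like
```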
Article
This paper presents an interface that uses two different sensing techniques and combines both results through a fusion process to obtain the minimum-variance estimator of the orientation of the user's head. The interface's sensing techniques are based on an inertial sensor and artificial vision. The orientation of the user's head is used to steer the navigation of a robotic wheelchair. A control algorithm for the assistive technology system is also presented. The system was evaluated by four individuals with severe motor disabilities, and a quantitative index was developed to objectively evaluate performance. The results obtained are promising, since most users could perform the proposed tasks with the robotic wheelchair.
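The abstract does not give the fusion equations, but the standard minimum-variance combination of two unbiased estimates of the same angle (one inertial, one visual) weights each by the inverse of its variance; a sketch, with illustrative variance values:

```python
# Standard minimum-variance fusion of two unbiased estimates of the head
# orientation angle. Variance values in the example are illustrative.
def fuse(theta_imu, var_imu, theta_vision, var_vision):
    w_imu = var_vision / (var_imu + var_vision)    # weight ~ 1 / variance
    w_vis = var_imu / (var_imu + var_vision)
    theta = w_imu * theta_imu + w_vis * theta_vision
    var = (var_imu * var_vision) / (var_imu + var_vision)  # fused variance shrinks
    return theta, var

print(fuse(10.0, 4.0, 12.0, 1.0))  # vision trusted more -> (11.6, 0.8)
```

The fused variance is always smaller than either sensor's alone, which is what makes the combination worthwhile even when one sensor dominates.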
Article
Motivated by the need to improve the quality of life of elderly and disabled individuals who rely on wheelchairs for mobility, and who may have limited or no hand functionality, we propose an egocentric computer-vision-based co-robot wheelchair to enhance their mobility without hand usage. The robot is built from a commercially available powered wheelchair modified to be controlled by head motion. Head motion is measured by tracking an egocentric camera mounted on the user's head and facing outward. Compared with previous approaches to hands-free mobility, our system provides a more natural human-robot interface because it enables the user to control the speed and direction of motion in a continuous fashion, as opposed to issuing a small number of discrete commands. This article presents three usability studies conducted on 37 subjects. The first two usability studies focus on comparing the proposed control method with existing solutions, while the third study assesses the effectiveness of training subjects to operate the wheelchair over several sessions. A limitation of our studies is that they were conducted with healthy participants. Our findings, however, pave the way for further studies with subjects with disabilities.
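A minimal sketch of how continuous head pose could be mapped to wheelchair velocity commands (a deadband to reject small involuntary motion, plus saturation for safety) follows; the gains, limits, and angle conventions are assumptions, not the authors' calibration.

```python
# Sketch: map head pitch/yaw (degrees) from egocentric camera tracking to
# continuous wheelchair velocity commands. Gains, deadband, and limits are
# illustrative assumptions.
def head_pose_to_command(pitch_deg, yaw_deg, deadband=5.0,
                         k_lin=0.02, k_ang=0.03,
                         v_max=1.0, w_max=0.8):
    def shape(angle, gain, limit):
        if abs(angle) < deadband:            # ignore small involuntary motion
            return 0.0
        cmd = gain * (abs(angle) - deadband) * (1 if angle > 0 else -1)
        return max(-limit, min(limit, cmd))  # saturate to safe bounds
    linear = shape(-pitch_deg, k_lin, v_max)   # tilt head up to speed up
    angular = shape(yaw_deg, k_ang, w_max)     # turn head to steer
    return linear, angular

print(head_pose_to_command(-20.0, 10.0))  # -> (0.3, 0.15)
```

This continuous mapping is what distinguishes the approach from discrete-command interfaces: small head deflections produce proportionally small speed and steering changes.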