Visualisation of basic gesture types.

Source publication
Conference Paper
Full-text available
We present a novel sensing modality for hands-free gesture controlled user interfaces, based on active capacitive sensing. Four capacitive electrodes are integrated into a textile neck-band, allowing continuous unobtrusive head movement monitoring. We explore the capability of the proposed system for recognising head gestures and postures. A study...

Context in source publication

Context 1
... gestures typically from the face or head of the user. An example is presented in [3]: the control of an electric wheelchair is realised using a web cam fixated to it, allowing face detection, tracking and head gesture recognition in real time. Recognised gestures are then mapped to motion control commands, such as speed up, slow down, or turn left/right. However, similar to voice-based systems, vision-based approaches raise privacy concerns and thus their social acceptance is limited. Moreover, both solutions have to deal with a very diversified, dynamically changing environment, posing difficult practical challenges. A promising alternative to the above systems is presented in [8]: a gesture-controlled wheelchair user interface for people with motor impairments based on electromyography (EMG) and electrooculography (EOG). Eyebrow muscle activity from EMG signals is used for directional control commands, while EOG is used to adjust speed. Although the system is wearable and rather unobtrusive (electrodes are integrated into a headband), the control interface itself is cumbersome and not natural.

We present a novel sensing modality for hands-free control interfaces, addressing the shortcomings of the above-mentioned approaches. Our system is based on the active capacitive sensing principle: capacitive electrodes are integrated into a textile neckband. With our wearable system it is possible to continuously monitor the wearer's head movements in an unobtrusive way. We investigate the potential of this technology to distinguish various head gestures and head postures. This is motivated by application scenarios such as the above-mentioned electric wheelchair control interface, where recognised gestures or postures can be mapped to control commands. Moreover, we show that the capability of our system exceeds that of common solutions, which usually recognise only a few gestures: quantitative evaluation shows that we can reliably distinguish between 15 head gestures. Finally, it is reasonable to expect that the social acceptance of our system is higher than that of, for example, voice- or camera-based solutions. Overall, the paper provides the following main contributions: 1. We present a novel sensing modality for hands-free gesture-controlled user interfaces, based on head gesture and head posture recognition. 2. To explore the capability of our system, we carried out a study involving 12 subjects, recording data from 15 head gestures and 19 different head postures. We present a quantitative evaluation based on this dataset.

Our system is based on the active capacitive sensing principle, introduced recently for human activity recognition [1]. A capacitor consists of a dielectric material between two conductive planes. It can store energy in an electric field, the amount depending on the thickness and composition of the dielectric material, and this can be measured electrically. By placing the planes around the human neck, its inside becomes the dielectric. Movement of muscles, tendons, blood vessels and other tissue is then reflected in the measured electric value. Head movement consists mostly of the three main degrees of freedom of the cervical vertebrae (Figure 1). In order to infer different movements (which result in a broad range of changes inside the neck), multiple, carefully selected capacitive electrode positions are necessary. We propose the 4-electrode setup depicted in Figure 1, where each pair of channels can reflect asymmetric changes through difference calculation.
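The pairwise difference calculation described above can be illustrated with a few lines of code. The sketch below is an assumption-laden illustration, not the authors' implementation: the array layout (one column per electrode channel) and the function name are made up for the example.

```python
import numpy as np

def channel_differences(signals: np.ndarray) -> np.ndarray:
    """Difference signal for every pair of the 4 electrode channels.

    Symmetric changes inside the neck largely cancel in each difference,
    while asymmetric changes (e.g. a tilt to one side) stand out.
    """
    n = signals.shape[1]
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return np.stack([signals[:, i] - signals[:, j] for i, j in pairs], axis=1)

# Example with fabricated data: 500 samples from 4 channels.
x = np.random.randn(500, 4)
diffs = channel_differences(x)  # shape (500, 6): one column per channel pair
```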
The implementation of the proposed scheme (hardware prototype) is shown in Figure 2. The four capacitive sensors, in the form of conductive textile, are embedded in a non-conductive textile, forming a stretchable neckband as proposed in [1]. Two sensors are centred directly above and below the larynx, the other two sit to the left and right on the neck. Electric ground is connected to the neck via partly exposed conductive textile and serves as a common second plane. The connected circuit boards handle amplification, noise reduction, analog-to-digital conversion and wireless data transmission [1]. From the monitored head movements, the goal is to distinguish between a set of gestures and postures, which can then be mapped to commands in a hands-free control interface. In order to evaluate the potential of our approach, we conducted two sets of experiments with 8 male and 4 female participants (aged from 22 to 35 years, with neck perimeter ranging from 29 cm to 42 cm). Only one of the test subjects was involved in designing the experiments. For each experiment, we adjusted the tightness of the neckband so that the participants felt comfortable and were not restricted in their head movement. For recording and automatic labelling of the data, we developed …

For the first set of experiments, subjects were asked to sit on a chair in front of a computer screen and perform 15 different head gestures upon timed screen and voice commands. Figure 3 shows the basic gesture types, chosen as feasible head movements for natural user interfaces. Each of the first three ("nod", "tilt" and "look") was performed in two directions. Moreover, we defined two movement types for these gestures, namely "slowly once" and "double", by analogy with the single and double click of a computer mouse. Twelve gestures (e.g. "slowly look left once" or "double nod down") were defined this way. In addition, we defined the gestures "circle head (counter) clockwise" and "double woodpecker move", the latter resembling the typical movement of a woodpecker. Participants were asked to carry out each gesture as they interpreted it, and to return facing the screen once a gesture was performed in order to wait for the next command. Every subject recorded each of the 15 gestures 11 times in random order, during a single session. Figure 4 shows typical example signals from the 4 channels.

For the second set of experiments, we placed a chair at a fixed distance facing a wall with a grid of 3x5 points, named "A1-A5", "B1-B5" and "C1-C5". These points were spread over the average comfortable and typical facing range of a person sitting on the chair. Strips on the ceiling, the floor, the far left and the far right marked extreme angles, defining four additional points for subjects to look at. The complete grid structure with approximate facing angles is depicted in Figure 5a. During this experiment, participants were asked to look at the point called out via voice command. This was supported by a laser pointer mounted on top of the subject's head; the participant had to hold the laser point on the target until the next command. The laser pointer was calibrated to the subject before the experiments and ensured reproducible head postures via well-defined looking points. The entire experimental setup is shown in Figure 5b. Two sessions, divided by a break, were recorded with each subject.
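For concreteness, the structure of the 15-gesture vocabulary (3 basic types × 2 directions × 2 speeds = 12, plus the two circle gestures and the woodpecker move) can be enumerated as below. Only the combinatorial structure comes from the text; the label strings are illustrative reconstructions, not the authors' exact command names.

```python
# Directions for the three basic gesture types, as described in the text.
directions = {"nod": ["up", "down"], "tilt": ["left", "right"], "look": ["left", "right"]}
speeds = ["slowly ... once", "double"]  # the two movement types

# 3 types x 2 directions x 2 speeds = 12 combined gestures.
gestures = [f"{speed} {gesture} {direction}"
            for gesture, dirs in directions.items()
            for direction in dirs
            for speed in speeds]
# Plus the three remaining gestures named in the text.
gestures += ["circle head clockwise", "circle head counterclockwise",
             "double woodpecker move"]
assert len(gestures) == 15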
In every session, each of the 19 … Due to the well-defined experiment procedure and scripted data collection with automatic labelling, the start of each gesture is reliably marked in the dataset. In addition, the end of each gesture was marked in a semi-automatic way, resulting in isolated gesture segments (four examples are shown in Figure 4). Considering recorded data from head postures, automatic labels mark the start of the transition to the next posture (cf. Figure 6). We used these marking points to isolate posture segments. Moreover, the end point of the transition in each posture segment was marked in a semi-automatic way. Relevant features (e.g. mean, see below) were calculated leaving the signal transition part out.

A large number of approaches exist to recognise gestures from isolated signal segments. The most common algorithms use time-domain matching or modelling, such as hidden Markov models (HMM) [5], dynamic time warping (DTW) [4] or methods based on data dictionaries [2]. Another common approach to gesture recognition relies on first extracting features from each signal segment and then applying a classifier to the feature set. This latter method showed promising results recently, also in comparison to HMM or DTW [6, 9]. Therefore, we applied this approach in our evaluation. We defined the feature set based on visual inspection of the signal form and on features extracted in related work (cf. e.g. [9]). For head postures, the signal level and its difference between certain channels carry the major information content (cf. Figure 6). Therefore, the feature set for this task comprises the features mean, min/max, ratio and difference between each pair of channels. For the head gesture recognition task, different signal shapes (cf. Figure 4) need to be distinguished. Therefore, we computed various features in both the time and frequency domain from each segment, such as mean, standard deviation, correlation between each pair of channels, spectral entropy, or dominant frequency.

As for classification, we used decision tree-based classifiers since they automatically select the most relevant features from a large feature set. We compared three ensemble learners: bagging, boosting and random forests, each with 200 iterations. For comparison, we also included a single decision tree and a kNN classifier in our evaluation (k = 3; all algorithm parameters were determined in preliminary studies on the training dataset). For quantitative results we performed user-dependent 10-fold cross-validation for each user. Since both the head gesture and the posture recognition task represent balanced classification problems, we used accuracy as the performance measure. Figure 7 shows the mean accuracy of the 5 classifiers on both tasks; min/max values indicate user-specific variation. Overall it is clear that the decision tree-based ensemble learners perform best, with bagged trees achieving the highest accuracy. Figure 8 shows the confusion matrix of recognising the 15 head gestures ...
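A minimal sketch of this feature-and-classifier pipeline is given below, using scikit-learn's ensemble implementations. It is an approximation under stated assumptions: the paper names the feature types and classifiers but not their implementation, and the sampling rate, FFT window and exact feature list here are made up for illustration.

```python
import numpy as np
from scipy.signal import welch
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

FS = 100.0  # assumed sampling rate in Hz; the excerpt does not state one

def gesture_features(segment: np.ndarray) -> np.ndarray:
    """Time- and frequency-domain features for one (n_samples, 4) segment."""
    feats = [segment.mean(axis=0), segment.std(axis=0)]
    # Correlation between each pair of channels (upper triangle only).
    corr = np.corrcoef(segment.T)
    feats.append(corr[np.triu_indices(segment.shape[1], k=1)])
    # Welch power spectrum per channel -> spectral entropy, dominant frequency.
    freqs, psd = welch(segment, fs=FS, axis=0, nperseg=min(128, len(segment)))
    p = psd / psd.sum(axis=0, keepdims=True)
    feats.append(-(p * np.log2(p + 1e-12)).sum(axis=0))  # spectral entropy
    feats.append(freqs[psd.argmax(axis=0)])              # dominant frequency
    return np.concatenate(feats)

# The five classifiers compared in the excerpt (parameters per its description).
classifiers = {
    "bagged trees": BaggingClassifier(n_estimators=200),
    "boosted trees": AdaBoostClassifier(n_estimators=200),
    "random forest": RandomForestClassifier(n_estimators=200),
    "decision tree": DecisionTreeClassifier(),
    "kNN (k=3)": KNeighborsClassifier(n_neighbors=3),
}
# With X (one feature row per segment) and y (gesture labels) for one user:
# for name, clf in classifiers.items():
#     print(name, cross_val_score(clf, X, y, cv=10).mean())
```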

Similar publications

Article
Textile-based sensors can perceive and respond to environmental stimuli in daily life, and hence are critical components of wearable devices. Here, self-powered triboelectric wearable sensors are fabricated using polyvinylidene fluoride (PVDF) fibers stitched by a sewing machine. The excellent mechanical properties of dry-jet wet spun PVDF fibers a...
Article
Full-text available
Human machine interface technology is focused upon new ways of interaction between human beings and machines. Gesture recognition gloves are getting increasingly popular as human-machine interface devices. Conventionally, these gloves use electronic sensors to sense different hand gestures. As electronic sensors are bulky and uncomfortable, we prop...

Citations

... Acoustic sensors have been used for muscle movement recognition [18], speech recognition [19] and actions related to eating [20][21][22]. Prior research has been done on e-textiles used in the neck region for detecting posture [23] and swallowing [24], but those efforts have relied on capacitive methods that have limitations in daily interactions. Researchers have explored sensing the neck with piezoelectric sensors for monitoring eating [25] and medication adherence [26]. ...
Article
Full-text available
Sensor technology that captures information from a user’s neck region can enable a range of new possibilities, including less intrusive mobile software interfaces. In this work, we investigate the feasibility of using a single inexpensive flex sensor mounted at the neck to capture information about head gestures, about mouth movements, and about the presence of audible speech. Different sensor sizes and various sensor positions on the neck are experimentally evaluated. With data collected from experiments performed on the finalized prototype, a classification accuracy of 91% in differentiating common head gestures, a classification accuracy of 63% in differentiating mouth movements, and a classification accuracy of 83% in speech detection are achieved.
... Compared with existing work, head gestures present another way for hands-free input, such as navigation in 3D space [5]. To recognize head gestures or orientations, researchers have explored various sensing techniques such as motion sensing [14], acoustic sensing (Soundr [53]), capacitive sensing [16], vision-based sensing [23,35], and so on. Radi-Eye [40] is a hands-free radial interface for interaction in 3D space using both gaze and head crossing gestures. ...
Preprint
We present HeadText, a hands-free technique on a smart earpiece for text entry by motion sensing. Users input text using only 7 head gestures for key selection, word selection, word commitment and word cancelling tasks. Head gesture recognition is supported by motion sensing on a smart earpiece to capture head movement signals and machine learning algorithms (K-Nearest-Neighbor (KNN) with a Dynamic Time Warping (DTW) distance measurement). A 10-participant user study showed that HeadText could recognize the 7 head gestures at an accuracy of 94.29%. A second user study showed that HeadText could achieve a maximum text entry speed of 10.65 WPM and an average speed of 9.84 WPM. Finally, we demonstrate potential applications of HeadText in hands-free scenarios: (a) text entry for people with motor impairments, (b) private text entry, and (c) socially acceptable text entry.
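A minimal sketch of the KNN-with-DTW-distance scheme this abstract names is shown below. It is not the authors' implementation: the plain O(nm) DTW recursion and the template/majority-vote classifier are generic textbook versions, and all names are illustrative.

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Plain dynamic-time-warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def knn_dtw_predict(query, templates, labels, k=3):
    """Classify `query` by majority vote among its k DTW-nearest templates."""
    dists = [dtw_distance(query, t) for t in templates]
    nearest = np.argsort(dists)[:k]
    votes = [labels[i] for i in nearest]
    return max(set(votes), key=votes.count)
```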
... Capacitive systems range from the floor sensing system that Arshad et al. designed to monitor the motion of elderly patients [3] to various gesture monitoring systems. Marco [4] presented a textile neckband for head gesture recognition. Bian [5] showed a capacitive wristband for on-board hand gesture recognition. ...
Chapter
Body capacitance change is an interesting signal for a variety of body sensor network applications in activity recognition. Although many promising applications have been published, capacitive on-body sensing is much less understood than more dominant wearable sensing modalities such as IMUs, and has been studied primarily in individual, constrained applications. This paper aims to go from such individual, application-specific studies to a systematic analysis of how much body capacitance is influenced by which factors and how it varies from person to person. The idea is to provide a basis on which other researchers can decide if, and in what form, capacitive sensing is suitable for their specific applications. To this end, we present the design of a low-power, small-form-factor measurement device and use it to measure the capacitance of the human body in various settings relevant for wearable activity recognition. We also demonstrate with simple examples how those measurements can be translated into use cases such as ground type recognition, exact step counting and gait partitioning.

Keywords: Human body capacitance, Electric field sensing, Capacitive sensing, Respiration detection, Gait partitioning, Touch sensing, Ground type recognition, Step counting
... In this paper we take the idea further, looking at free fitness exercises and focusing on types of exercises that are difficult to recognize using an arm-based sensor, and investigating in detail how capacitive sensing can complement motion sensing. Available related motion sensing works were either based on capacitance variation of a local body part (like the neck [14], wrist [15], etc.) or on full-body capacitance for proximity or motion sensing. For example, Arshad et al. [16] designed a floor sensing system leveraging active capacitance variation caused by body intrusiveness to monitor the motion of elderly patients. ...
... For example, Arshad et al. [16] designed a floor sensing system leveraging active capacitance variation caused by body intrusiveness to monitor the motion of elderly patients. Marco et al. [14] presented a textile capacitive neckband for head gesture recognition. Cohn et al. [12] showed a prototype to detect the arm movement by supplying a conductive ground plane near the wrist. ...
Conference Paper
Inertial Measurement Unit (IMU) is currently the dominant sensing modality in sensor-based wearable human activity recognition. In this work, we explored an alternative wearable motion-sensing approach: inferring motion information of various body parts from the human body capacitance (HBC). While being less robust in tracking body motions, HBC has a property that makes it complementary to IMU: it does not require the sensor to be placed directly on the moving part of the body whose motion needs to be tracked. To demonstrate the value of HBC, we performed exercise recognition and counting of seven machine-free leg-alone exercises. The HBC sensing shows significant advantages over IMU signals in both classification (0.89 vs 0.78 in F-score) and counting.
... Yet, in both previous cases, the effect of commands generated by casual head movement hinders the use of such a technique in a real-life setting. Hirsch et al. (2014) used a capacitive neckband (Figure II-11). In order to avoid commands caused by casual head movement, they proposed doubling the movement to confirm that it is a wanted command, but this makes it more tiring for the user and much slower. ...
Thesis
Full-text available
The power wheelchair is an effective way to regain mobility for many people around the world. Unfortunately, some people with motor disabilities who also suffer from loss of muscle strength may find it difficult to use a power wheelchair, because they can experience difficulties related to the handling of a joystick, the standard wheelchair control device. This thesis explores an alternative to the joystick for people with neuromuscular diseases, focusing in particular on tactile interaction. The hypothesis is that tactile interaction can offer reliable control with a level of physical effort that is tolerable for people suffering from neuromuscular diseases. In this perspective, we developed a wheelchair steering interface on a smartphone. It offers many configuration possibilities, allowing customization according to the user's needs. This interface was designed in a user-centred, iterative approach: in each iteration, different people suffering from a loss of mobility tested the steering interface, and their feedback fed into improvements to the interface in the next iteration. During the last iteration of this thesis, a study was carried out with users suffering from neuromuscular diseases at the SSR Le Brasset, with the help of the AFM-Téléthon. These participants were able to familiarise themselves with the touch interface and use it to control their wheelchairs. We also compared driving performance using the touch interface and the joystick in different daily tasks (cornering, slalom ...). The performance of these patients with the touch interface was close to that with the joystick. In addition, the comments collected suggest that the touch interface requires less physical effort than the joystick.
... Before conducting an experiment, we built a design space of all physiologically possible gestures based on the field of osteokinematics [4] and linguistics [18] (see Sect. 2.1) and the literature about head and/or shoulders gestures [7,8,17,23]. Table 1 defines these gestures based on which plane is maintained constant or left variable. ...
Chapter
This paper presents empirical results about user-defined gestures for head and shoulders, analyzing 308 gestures elicited from 22 participants for 14 referents materializing 14 different types of tasks in an IoT context of use. We report an overall medium consensus with medium variance (mean: .263, min: .138, max: .390 on the unit scale) between participants' gesture proposals, while their thinking times were less similar (min: 2.45 s, max: 22.50 s), which suggests that head and shoulders gestures are not all equally easy to imagine and to produce. We point to the challenges of deciding which head and shoulders gestures should become the consensus set based on four criteria: the agreement rate, their individual frequency, their associative frequency, and their unicity.
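The agreement rate cited here is commonly computed with Vatavu and Wobbrock's pairwise formulation from the gesture elicitation literature; the abstract does not spell out which variant the authors used, so the sketch below assumes that standard formula.

```python
from collections import Counter

def agreement_rate(proposals):
    """Agreement rate AR(r) for one referent: the share of participant
    pairs that proposed the same gesture (Vatavu & Wobbrock's pairwise
    formulation, assumed here).
    """
    n = len(proposals)
    if n < 2:
        return 1.0
    counts = Counter(proposals)
    return sum(c * (c - 1) for c in counts.values()) / (n * (n - 1))

# Example: 2 of 3 participants agree -> 2*1 matching pairs out of 3*2.
print(agreement_rate(["nod", "nod", "tilt"]))  # 0.333...
```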
... Researchers have proposed a variety of hands-free interactions [5,9,77], contributing to greater efficiency in multitasking [82]. In addition, they have also proven to be useful when applied as assistive technologies [30,34,45]. This significantly helps people with physical impairments to regain essential interaction capabilities, in particular patients suffering from locked-in syndrome [69]. ...
Conference Paper
Full-text available
Sensing interfaces relying on head or facial gestures provide effective solutions for hands-free scenarios. Most of these interfaces utilize sensors attached to the face, or even placed in the mouth, making them either obtrusive or limited in input bandwidth. In this paper, we propose ChewIt – a novel intraoral input interface. ChewIt resembles an edible object that allows users to perform various hands-free input operations, both simply and discreetly. Our design is informed by a series of studies investigating the implications of shape, size, and location for comfort, discreetness, maneuverability, and obstructiveness. Additionally, we evaluated potential gestures that users could utilize to interact with such an intraoral interface.
... Hirsch et al. [8] proposed the use of a capacitive collar to recognize different neck movements and associate them with controls. However, to avoid unplanned commands, the authors proposed doubling the head gesture for each command. ...
Conference Paper
Full-text available
A large number of people suffer from severe motor disabilities and have great difficulty controlling an electric wheelchair (EWC). Therefore, we propose an interface capable of recognizing different facial expressions. Each expression can be used in a Human-Machine Interface (HMI) to control an EWC, allowing patients to control their own wheelchairs without using their hands. This work presents a deep learning based system to process and classify the expressions. Our approach classifies up to nine facial expressions, such as open mouth or raised eyebrows, by employing a camera as a sensor. The neural network achieved 92.59% accuracy on our test set, and later experiments showed that the developed system can correctly classify facial expressions from unseen users.
... A series of experiments by Jonassen et al. [4] and Fujiwara et al. [5] gave Human Body Capacitance (HBC) a value of 100-400 pF. Numerous HBC-based sensors and applications have been developed: proximity sensing [6], movement detection [7], communication [8]-[10] and motion recognition [11]. Most of those works use active sensors with capacitance changing in shunt or transmit mode, which use a transceiver to emit and capture a time-varying signal. ...
Conference Paper
In this work, we present the design and implementation of a microwatt-level power consumption sensor, based on human body capacitance, for recognizing and counting gym workouts. The concept also works when the device is attached to a body part that is not directly involved in the activity's movement. In contrast, most of the widely used motion-sensing approaches require placing the sensor on the moving body part (e.g. for analyzing leg-based gym exercises the sensor needs to be placed on the leg). We describe the physical principle behind the ubiquitous electric coupling between the human body and the environment, and explore the capability of this sensing modality in gym workouts. We evaluated our sensor with 11 subjects, each performing 7 popular gym workouts each day over 5 days, with the sensor placed at 3 different body positions, including a non-contact position where the sensor is placed in the subject's pocket. Results showed that our sensing approach achieved an average counting accuracy of 91%, which is highly competitive with commercial devices on the market. The mean leave-one-user-out workout recognition F-scores obtained were 63%, 56% and 45% for sensors located on the wrist, on the calf and in the pocket, respectively. As every subject performed the activities over multiple days, changing shoe height, shoe type and clothing, we demonstrate that full-body activity counting and, to some extent, recognition is feasible regardless of personal habits of movement speed and scale.
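One plausible way to count repetitions from such a one-dimensional capacitance signal is band-pass filtering followed by peak detection. The sketch below is not the authors' algorithm; the sampling rate, band limits and peak constraints are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def count_repetitions(capacitance: np.ndarray, fs: float = 50.0,
                      band: tuple = (0.3, 3.0)) -> int:
    """Count exercise repetitions in a 1-D body-capacitance signal.

    Assumes repetitions appear as quasi-periodic oscillations in the
    given frequency band; fs and band are illustrative values.
    """
    # Band-pass around plausible repetition frequencies to suppress
    # baseline drift and high-frequency noise.
    b, a = butter(3, band, btype="band", fs=fs)
    filtered = filtfilt(b, a, capacitance)
    # One peak per repetition; require minimum prominence and spacing.
    peaks, _ = find_peaks(filtered,
                          prominence=0.5 * filtered.std(),
                          distance=int(0.4 * fs))
    return len(peaks)
```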