Figure 1
Multi-sensor board and PDA used in our system. The sensor board shown in Figure 1 is extremely compact, low-cost, and uses standard electronic components. It weighs only 121g including battery and processing hardware. Sensors include a 3-axis accelerometer, two microphones for recording speech and ambient sound, phototransistors for measuring light conditions, and temperature and barometric pressure sensors. The overall cost per sensor board is approximately USD 400. The time-stamped data collected on this device is transferred via a USB connection to an iPAQ handheld computer. GPS data is transferred from the receiver via Bluetooth to the PDA. The overall system is able to operate for more than 8 hours.
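The caption describes two asynchronous time-stamped streams arriving at the PDA: board samples over USB and GPS fixes over Bluetooth. A minimal sketch of how such streams could be aligned by timestamp; the record fields and the helper below are hypothetical illustrations, not the authors' firmware or software.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple
import bisect

@dataclass
class BoardSample:
    t: float            # timestamp in seconds
    accel: Tuple[float, float, float]  # 3-axis accelerometer reading
    audio_rms: float    # short-term energy from the microphones
    light: float        # phototransistor reading
    temperature: float
    pressure: float

@dataclass
class GpsFix:
    t: float
    lat: float
    lon: float

def attach_nearest_fix(samples: List[BoardSample], fixes: List[GpsFix],
                       max_gap: float = 2.0) -> List[Optional[GpsFix]]:
    """For each board sample, return the GPS fix closest in time, or None
    if the nearest fix is more than max_gap seconds away.
    Assumes `fixes` is sorted by timestamp."""
    times = [f.t for f in fixes]
    out = []
    for s in samples:
        i = bisect.bisect_left(times, s.t)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(fixes)]
        best = min(candidates, key=lambda j: abs(fixes[j].t - s.t), default=None)
        out.append(fixes[best] if best is not None and abs(fixes[best].t - s.t) <= max_gap else None)
    return out
```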

Source publication
Article
Full-text available
We introduce a new dynamic model with the capability of recognizing both activities that an individual is performing as well as where that individual is located. Our model is novel in that it utilizes a dynamic graphical model to jointly estimate both activity and spatial context over time based on the simultaneous use of asynchronous observations c...
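The abstract describes joint estimation of activity and spatial context from asynchronous observations. As a rough illustration only (not the authors' model), a discrete forward filter over a joint (activity, location) state, where each sensor's likelihood is applied only when that sensor reports, might look like the following; the state spaces are invented placeholders.

```python
import numpy as np

# Hypothetical discrete state spaces for illustration.
ACTIVITIES = ["stationary", "walking", "driving"]
LOCATIONS = ["home", "street", "car"]
STATES = [(a, l) for a in ACTIVITIES for l in LOCATIONS]

def forward_step(belief, transition, likelihoods):
    """One step of a discrete Bayes filter over the joint state.

    belief:      (|S|,) prior probabilities over STATES
    transition:  (|S|, |S|) row-stochastic matrix, transition[i, j] = P(s_t = j | s_{t-1} = i)
    likelihoods: list of (|S|,) observation likelihood vectors; because the
                 sensors are asynchronous, the list holds only the sensors
                 that actually reported during this time slice (possibly none).
    """
    predicted = belief @ transition
    for lik in likelihoods:
        predicted = predicted * lik
    total = predicted.sum()
    return predicted / total if total > 0 else np.full(len(belief), 1.0 / len(belief))
```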

Similar publications

Article
Full-text available
Traditional methods of human activity recognition from wearable sensors rely on good training datasets in which thousands of training sequences should be carefully labeled. However, unlike images or videos, which can be easily classified by human beings, strictly labeling such sequences of sensor data needs much more manpower and computing resources...
Preprint
Full-text available
Background: Estimating energy expenditure with indirect calorimetry requires expensive equipment and provides slow and noisy measurements. Rapid estimates using wearable sensors would enable techniques like optimizing assistive devices outside a lab. Existing methods correlate data from wearable sensors to measured energy expenditure without evaluat...
Article
Full-text available
A miniaturized, flexible fiber-based lithium sensor was fabricated from low-cost cotton using a simple, repeatable dip-coating technique. This lithium sensor is highly suited for ready-to-use wearable applications and can be used directly without the preconditioning steps normally required with traditional ion-selective electrodes. The sensor has...
Article
Full-text available
Biometric systems are becoming increasingly important, since they provide more reliable and efficient means of identity verification. Biometric gait recognition (i.e. recognizing people from the way they walk) is one of the recent attractive topics in biometric research. This paper presents biometric user recognition based on gait. Biometric gait r...
Article
Full-text available
The use of inertial sensors to characterize pathological gait has traditionally been based on the calculation of temporal and spatial gait variables from inertial sensor data. This approach has proved successful in the identification of gait deviations in populations where substantial differences from normal gait patterns exist; such as in Parkinso...

Citations

... According to how features are extracted and utilized, semi-supervised HAR methods can be classified into feature engineering based methods (Subramanya et al. 2012) and feature learning based methods. One typical feature engineering based method was designed in Subramanya et al. (2012), which introduces a discriminative method based on boosted decision stumps to select features. The advent of deep learning has enabled a boom in deep SSL-based HAR methods. ...
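The citation context refers to feature selection with boosted decision stumps. A hedged sketch of that general idea in scikit-learn (not the pipeline of Subramanya et al.): boost depth-1 trees and rank features by how much boosting weight the stumps that split on them accumulate.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

def select_features_with_stumps(X, y, n_stumps=50, top_k=10):
    """Fit AdaBoost over decision stumps and return indices of the top_k
    features ranked by accumulated boosting weight."""
    booster = AdaBoostClassifier(
        estimator=DecisionTreeClassifier(max_depth=1),  # 'estimator' in scikit-learn >= 1.2
        n_estimators=n_stumps,
    )
    booster.fit(X, y)
    scores = np.zeros(X.shape[1])
    for stump, weight in zip(booster.estimators_, booster.estimator_weights_):
        feat = stump.tree_.feature[0]   # feature used at the stump's root split
        if feat >= 0:                   # negative value would mean the stump is a bare leaf
            scores[feat] += weight
    return np.argsort(scores)[::-1][:top_k]
```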
Article
Full-text available
Human activity recognition (HAR), which aims at inferring the behavioral patterns of people, is a fundamental research problem in digital health and ambient intelligence. The application of machine learning methods in HAR has been investigated vigorously in recent years. However, there are still a number of challenges confronting the task, where one significant barrier lies in the longstanding shortage of annotations. To address this issue, we establish a new paradigm for HAR, which integrates active learning and semi-supervised learning into one framework. The main idea is to reduce the annotation cost by actively selecting the most informative samples for annotation, as well as leveraging the unlabelled instances in a semi-supervised way. In particular, we propose to utilize the massive unlabelled data via temporal ensembling of convolutional neural networks (CNN), which yields robust consensus predictions by aggregating the outputs of the training networks on different epochs. We conducted extensive experiments on three public benchmark datasets. The proposed method achieves Macro F1 values of 0.76, 0.45 and 0.91 in a low annotation scenario on PAMAP2, USCHAD and UCIHAR datasets respectively, outperforming a multitude of state-of-the-art deep models. The ablation study proves the effectiveness of the two components of the framework, i.e., active learning-based sample selection and semi-supervised model training with temporal ensembling, in alleviating the issue of insufficient labels. Cross-validation and statistical significance experiments further demonstrate the robustness and generalization ability of the proposed method. The source codes are available at https://github.com/HaixiaBi1982/ActSemiCNNAct.
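The abstract's semi-supervised component is temporal ensembling: an exponential moving average of each sample's predictions over epochs, bias-corrected and used as a consensus target for the unlabelled data. A minimal sketch of that bookkeeping (a generic illustration, not the authors' released code at the linked repository):

```python
import numpy as np

class TemporalEnsemble:
    """Keep a per-sample EMA of class predictions across training epochs."""

    def __init__(self, n_samples, n_classes, alpha=0.6):
        self.alpha = alpha
        self.Z = np.zeros((n_samples, n_classes))        # running EMA of predictions
        self.targets = np.zeros((n_samples, n_classes))  # bias-corrected consensus targets
        self.epoch = 0

    def update(self, epoch_predictions):
        """epoch_predictions: (n_samples, n_classes) softmax outputs
        collected over one full training epoch."""
        self.epoch += 1
        self.Z = self.alpha * self.Z + (1.0 - self.alpha) * epoch_predictions
        # Correct the startup bias of the EMA (as in Laine & Aila, 2017).
        self.targets = self.Z / (1.0 - self.alpha ** self.epoch)
        return self.targets

# The consensus targets are then used in an extra consistency loss, e.g. the
# mean squared error between current predictions and self.targets on the
# unlabelled samples, added to the supervised loss on the labelled ones.
```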
... While all approaches use a classifier in the end, they differ in how features are generated. This includes the use of signal processing methods through which hundreds of signal statistics are computed (Subramanya et al. 2012; Lester et al. 2006; Anguita et al. 2012) and those based on nonlinear time series analysis (Ali et al. 2007; Basharat and Shah 2009; Frank et al. 2010). The latter approaches use concepts from the theory of chaotic systems, such as time delay embedding and the computation of dynamic invariants (e.g., Lyapunov exponent, correlation dimension), to extract relevant features. ...
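The nonlinear features mentioned here start from time-delay embedding of the raw signal. A small sketch of the embedding step itself (the delay and dimension below are arbitrary choices, not values from the cited work):

```python
import numpy as np

def delay_embed(x, dim=3, tau=5):
    """Embed a 1-D signal x into dim-dimensional delay vectors
    [x[t], x[t + tau], ..., x[t + (dim - 1) * tau]]."""
    x = np.asarray(x)
    n = len(x) - (dim - 1) * tau
    if n <= 0:
        raise ValueError("signal too short for this dim and tau")
    return np.stack([x[i * tau : i * tau + n] for i in range(dim)], axis=1)

# Invariants such as the correlation dimension or the largest Lyapunov
# exponent are then estimated from the geometry of these delay vectors.
```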
Article
Full-text available
We propose an interdisciplinary framework for time series classification, forecasting, and anomaly detection by combining concepts from Koopman operator theory, machine learning, and linear systems and control theory. At the core of this framework is nonlinear dynamic generative modeling of time series using the Koopman operator, which is an infinite-dimensional but linear operator. Rather than working with the underlying nonlinear model, we propose two simpler linear representations or model forms based on Koopman spectral properties. We show that these model forms are invariants of the generative model and can be readily identified directly from data using techniques for computing Koopman spectral properties, without requiring explicit knowledge of the generative model. We also introduce different notions of distance on the space of such model forms, which is essential for model comparison/clustering. We employ the space of Koopman model forms equipped with distance in conjunction with classical machine learning techniques to develop a framework for automatic feature generation for time series classification. The forecasting/anomaly detection framework is based on using Koopman model forms along with classical linear systems and control approaches. We demonstrate the proposed framework for human activity classification, and for time series forecasting/anomaly detection in a power grid application.
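Koopman spectral properties are commonly estimated from data with dynamic mode decomposition (DMD), which fits a best-fit linear map between time-shifted snapshot matrices and uses its eigendecomposition as an approximation of Koopman eigenvalues and modes. A bare-bones sketch of that standard computation (a generic DMD routine, not the authors' specific model forms or distances):

```python
import numpy as np

def dmd(X, r=None):
    """Exact DMD: given snapshots x_0..x_m as the columns of X, fit
    x_{k+1} ≈ A x_k and return approximate eigenvalues and modes of A."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    if r is not None:                          # optional truncation rank
        U, s, Vh = U[:, :r], s[:r], Vh[:r, :]
    A_tilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(A_tilde)        # approximate Koopman eigenvalues
    modes = X2 @ Vh.conj().T @ np.diag(1.0 / s) @ W
    return eigvals, modes
```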
... In [16], a model capable of recognizing the same activities was built; the dataset was taken from a triaxial accelerometer, phototransistors, temperature and barometric pressure sensors, two microphones, and GPS to distinguish between a stationary state, walking, jogging, driving a vehicle, and climbing up and down stairs. Other systems were not very practical as they involved multiple sensors situated all across the body, but in [17] the authors used a system involving various accelerometers, or combinations of accelerometers, capable of identifying a wide range of activities. That system can work only for some small-scale applications (e.g., a hospital setting). ...
Chapter
Nowadays, mobile phone sensor technology is advancing at a great pace and is consequently used to perform several tasks with the preinstalled GPS and accelerometer sensors. Different human activities, including jogging, sitting, standing, walking, climbing stairs, etc., are automatically recorded on a cell phone with the help of the installed sensors. The stored data is used to recognize human activities using various machine learning algorithms. In this paper, a performance evaluation of ensemble learners on the raw sensor data is performed. The WISDM dataset, which uses a phone-based accelerometer to recognize the activities of a human, is used for experimentation. The dataset records basic day-to-day human activities such as jogging, sitting, standing, walking, and climbing up and down stairs. The WISDM data contains some missing values, which are replaced with average values. The ensemble classifiers, including random forest, AdaBoost, and bagging, are used for human activity classification. The performance of the classifiers is evaluated in terms of accuracy, recall, precision, and F1-measure. The experimentation suggests that the random forest classifier outperforms the other two on every metric.
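A compact sketch of the experimental recipe this chapter describes: mean imputation of missing values, then random forest, AdaBoost, and bagging, scored on accuracy, precision, recall, and F1. The feature matrix below is a random placeholder so the sketch runs end to end; it is not the WISDM file format or the chapter's exact configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier, BaggingClassifier
from sklearn.impute import SimpleImputer
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Placeholder data standing in for accelerometer-derived features and labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 43))
X[rng.random(X.shape) < 0.01] = np.nan      # simulate missing values
y = rng.integers(0, 6, size=1000)           # six activity classes

X = SimpleImputer(strategy="mean").fit_transform(X)   # replace NaNs by column means
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [
    ("random forest", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("AdaBoost", AdaBoostClassifier(random_state=0)),
    ("bagging", BaggingClassifier(random_state=0)),
]:
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    p, r, f1, _ = precision_recall_fscore_support(y_te, pred, average="macro", zero_division=0)
    print(f"{name}: acc={accuracy_score(y_te, pred):.3f} prec={p:.3f} rec={r:.3f} f1={f1:.3f}")
```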
... For the recognition procedure, they incorporated classifiers such as Naïve Bayes, decision trees, and instance-based learning and obtained an accuracy of 84.26%; in terms of accuracy, we obtained superior results for both transitional and non-transitional activities. In other research, Subramanya et al. [18] employed barometric pressure, temperature, phototransistor, microphone, tri-axial accelerometer, and GPS sensors to recognize activities, namely going downstairs, jogging, climbing up, and stationary states, with an accuracy of 83% for the motion state and 87% when estimating the environment. In terms of the number of sensors, we employed fewer sensors and still obtained superior results. ...
... In contemporary research, Bao and Intille [17] collected data from 20 users and trained a decision tree, a Naïve Bayes classifier, and instance-based learning on twenty activities, achieving 84.26% accuracy in recognizing activities, whereas we found higher accuracy on both transitional and non-transitional activities. Subramanya et al. [18] collected data from temperature and barometric pressure sensors, a triaxial accelerometer, phototransistors, microphones, and GPS to detect semi-complex activities such as climbing up, jogging, going downstairs, and stationary states, with 83% and 87% accuracy in the motion state and in estimating the environment, respectively, but we propose a more reliable model that recognizes those activities with fewer sensors than they used. Similarly, Anguita et al. [19] implemented a hardware-friendly SVM to improve computational cost and a sustainable system to detect walking, downstairs, upstairs, sitting, standing, and laying, with an overall accuracy of 89% on the test dataset; we propose an online model that reduces computational cost and shows improved accuracy. ...
... Interested readers are pointed to the extensive survey on human activity recognition published by Lara et al. (Lara and Labrador, 2013). Prior work focused on placing multiple acceleration sensors on several parts of the participant's body (Parkka et al., 2006; Subramanya et al., 2012). This setup was capable of identifying a wide range of activities, such as running, walking, or climbing stairs. ...
... The authors of [4] describe a method to recognize walking, sitting, standing, and running activities by means of five accelerometers. Other studies tried to improve the performance of their recognition systems by relying on combinations of heterogeneous sensors, e.g., accelerometers and gyroscopes, microphones, GPS, and so on [30,19]. ...
Conference Paper
Full-text available
In recent years, the percentage of the population owning a smartphone has increased significantly. These devices provide the user with more and more functions, so that anyone is encouraged to carry one during the day, implicitly producing data that can be analysed to infer knowledge of the user's context. In this work we present a novel framework for Human Activity Recognition (HAR) using smartphone data captured by means of embedded triaxial accelerometer and gyroscope sensors. Some statistics over the captured sensor data are computed to model each activity, then real-time classification is performed by means of an efficient supervised learning technique. The system we propose also adopts a participatory sensing paradigm where the user's feedback on recognised activities is exploited to update the inner models of the system. Experimental results show the effectiveness of our solution as compared to other state-of-the-art techniques.
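The framework computes simple statistics over windows of accelerometer/gyroscope samples and feeds them to a supervised classifier. A minimal, generic illustration of that windowed feature step; the window length and feature set below are arbitrary choices, not the paper's configuration.

```python
import numpy as np

def window_features(signal, window=128, step=64):
    """signal: (n_samples, n_axes) array of tri-axial readings.
    Returns one feature row per window with the per-axis mean, std, min, and max."""
    feats = []
    for start in range(0, len(signal) - window + 1, step):
        w = signal[start:start + window]
        feats.append(np.concatenate([w.mean(0), w.std(0), w.min(0), w.max(0)]))
    return np.array(feats)

# Each row can then be fed to any supervised classifier; the paper performs
# real-time classification and updates its models from user feedback.
```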
... Past work focused on the use of multiple accelerometers placed on several parts of the user's body, for example (Bao and Intille, 2004; Krishnan et al., 2008; Parkka et al., 2006; Subramanya et al., 2012). These systems using multiple accelerometers and other sensors were capable of identifying a wide range of activities. ...
Conference Paper
Full-text available
The trend of mobile activity monitoring using widely available technology is one of the fastest-growing concepts in recent years. It supports many novel applications, such as fitness games or health monitoring. In these scenarios, activity recognition tries to distinguish between different types of activities. However, only little work has focused on qualitative recognition so far: how exactly is the activity carried out? In this paper, an approach for supervising activities, i.e. qualitative recognition, is proposed. The focus lay on push-ups as a proof of concept, for which sensor data from smartphones and smartwatches were collected. A user-dependent dataset with 4 participants and a user-independent dataset with 16 participants were created. The performance of the Naive Bayes classifier was tested with normal, kernel, and multivariate multinomial probability distributions. An accuracy of 90.5% was achieved on the user-dependent model, whereas the user-independent model scored an accuracy of 80.3%.
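The comparison here is between Naive Bayes likelihood models (normal vs. kernel) for the same features. A rough sketch of that comparison, where the kernel variant is approximated with one per-feature kernel density estimate per class; this is my own approximation in scikit-learn, not the authors' setup.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB   # the "normal distribution" baseline
from sklearn.neighbors import KernelDensity

class KDENaiveBayes:
    """Naive Bayes with a 1-D KDE per (class, feature) as the likelihood model."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.priors_ = np.array([(y == c).mean() for c in self.classes_])
        self.kdes_ = [
            [KernelDensity(bandwidth=0.5).fit(X[y == c, j:j + 1]) for j in range(X.shape[1])]
            for c in self.classes_
        ]
        return self

    def predict(self, X):
        log_post = []
        for kdes_c, prior in zip(self.kdes_, self.priors_):
            # Sum per-feature log-likelihoods (the "naive" independence assumption).
            ll = sum(kde.score_samples(X[:, j:j + 1]) for j, kde in enumerate(kdes_c))
            log_post.append(ll + np.log(prior))
        return self.classes_[np.argmax(np.stack(log_post, axis=1), axis=1)]
```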
... There are only a few works trying to deal with the difficulties of obtaining large and diverse labeled data sets for identifying travel modes. [15] proposes a semi-supervised learning approach to deal with this issue, in which a Virtual Evidence mechanism is applied to fill in missing labels in partially labeled data sets. This approach only requires data collectors to label each mode chunk once at a random moment and is shown to produce good accuracy. ...
Preprint
Transportation mode detection with personal devices has been investigated for over ten years due to its importance in monitoring one's activities, understanding human mobility, and assisting traffic management. However, two main limitations are still preventing it from large-scale deployment: high power consumption, and the lack of high-volume and diverse labeled data. In order to reduce power consumption, existing approaches sample with fewer sensors and at lower frequency, which, however, leads to lower accuracy. A common way to obtain labeled data is recording the ground truth while collecting data, but such a method cannot be applied to large-scale deployments due to its inefficiency. To address these issues, we adopt a new low-frequency sampling manner with a hierarchical transportation mode identification algorithm and propose an offline data labeling approach with manual and automatic implementations. Through a real-world large-scale experiment and comparison with related work, our sampling manner and algorithm are shown to consume much less energy while achieving a competitive accuracy of around 85%. The new offline data labeling approach is also validated to be efficient and effective in providing ground truth for model training and testing.
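The identification step in this preprint is hierarchical: a coarse category first, then a finer mode within that category. A toy sketch of that two-level structure; the groups, classifiers, and data layout (X as a NumPy feature matrix) are invented for illustration and are not the paper's algorithm.

```python
from sklearn.ensemble import RandomForestClassifier

class HierarchicalModeClassifier:
    """Stage 1 separates coarse groups (e.g. still / on-foot / vehicle);
    stage 2 picks the concrete mode inside the predicted group."""

    def __init__(self, groups):
        # groups: dict mapping group name -> list of fine-grained modes
        self.groups = groups
        self.stage1 = RandomForestClassifier(n_estimators=50)
        self.stage2 = {g: RandomForestClassifier(n_estimators=50) for g in groups}

    def fit(self, X, modes):
        mode_to_group = {m: g for g, ms in self.groups.items() for m in ms}
        self.stage1.fit(X, [mode_to_group[m] for m in modes])
        for g in self.groups:
            idx = [i for i, m in enumerate(modes) if mode_to_group[m] == g]
            if idx:
                self.stage2[g].fit(X[idx], [modes[i] for i in idx])
        return self

    def predict(self, X):
        coarse = self.stage1.predict(X)
        return [self.stage2[g].predict(X[i:i + 1])[0] for i, g in enumerate(coarse)]
```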
... Researchers have been training classifiers to identify activity as well as location since the mid-2000s, with great success in off-line classification of a broad range of activities, from mundane household actions (such as reading a newspaper) [24] to various sporting activities in naturalistic non-laboratory conditions [25]. The vast majority of the state-of-the-art literature relies on high-frequency sampling of motion and environmental parameters, calculation of features over specified time windows (often 5-10 seconds) in both the time and frequency domains, and off-line training of black-box classification models [26], [27]. There is a trend, however, away from multi-sensor approaches towards using smartphones to identify activity, with notable work being performed to not only collect and classify modes and activities but also to validate the results [28], [29]. ...
Article
We present a platform to allow up to 50 000 students to simultaneously collect and learn from their personal activity, transportation, and environmental data. The main goals that we met during the design of our sensor platform were to: 1) be low cost; 2) remain powered for the duration of the data collection campaign; 3) robustly sense a wide range of environmental parameters; and 4) be packaged in a form factor conducive to wide-spread adoption and ease of use. We describe and generalize the design methods we applied to the hardware and firmware. Our sensors employ Wi-Fi communication to move data as well as to localize themselves using a radio-map of Singapore. Our system uses embedded as well as server-based machine learning algorithms to perform on-sensor transportation mode identification and state inference. The testing and validation methods that we applied ensured that over 98% of the deployed sensors successfully met all of their design goals. In addition, we summarize the results of a large-scale deployment of our system for a nation-wide experiment in Singapore in 2015, and describe three sample applications of the collected data. We publish sample data sets and algorithm code for researchers to analyze.
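Localization against a radio map is commonly done by Wi-Fi fingerprinting: compare the observed received signal strengths against reference fingerprints and average the positions of the closest matches. A very small sketch of that generic idea (k-nearest neighbours in RSSI space); the data layout below is an assumption, not taken from the paper.

```python
import numpy as np

def locate(scan, radio_map, k=3, missing_rssi=-100.0):
    """scan: dict {bssid: rssi} from one Wi-Fi scan.
    radio_map: list of (position, {bssid: rssi}) reference fingerprints,
    where position is an (x, y) pair. Returns the mean position of the
    k fingerprints closest to the scan in RSSI space."""
    bssids = sorted({b for _, fp in radio_map for b in fp} | set(scan))
    def vec(fp):
        # Access points absent from a fingerprint get a floor RSSI value.
        return np.array([fp.get(b, missing_rssi) for b in bssids])
    q = vec(scan)
    dists = [np.linalg.norm(q - vec(fp)) for _, fp in radio_map]
    nearest = np.argsort(dists)[:k]
    return np.mean([radio_map[i][0] for i in nearest], axis=0)
```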