Google Glass swipe gesture input method screenshots. The upper screen displays the categorical overview with hints on how to access the categories (one dot = one finger, etc.). As soon as one, two, or three fingers touch the touchpad on the right side of Google Glass, the corresponding label selection matrix appears. To select a label, the user keeps their fingers on the touchpad and swipes forward, backward, up, or down.

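As an illustration of the interaction described in the caption, the sketch below maps a finger count to a category and a swipe direction to a label within that category's selection matrix. The category and label names are invented placeholders, not the labels used in the study.

```python
# Hypothetical sketch of the swipe-based label selection from the caption:
# the number of fingers on the touchpad picks a category, and the swipe
# direction picks a label within that category's selection matrix.
# All label names below are illustrative placeholders.

# One selection matrix per finger count (1, 2, or 3 fingers).
LABEL_MATRICES = {
    1: {"forward": "walking", "backward": "sitting", "up": "standing", "down": "lying"},
    2: {"forward": "eating", "backward": "drinking", "up": "reading", "down": "writing"},
    3: {"forward": "talking", "backward": "typing", "up": "commuting", "down": "resting"},
}

def select_label(finger_count: int, swipe_direction: str):
    """Map a (finger count, swipe direction) gesture to a ground-truth label."""
    matrix = LABEL_MATRICES.get(finger_count)
    if matrix is None:
        return None  # unsupported finger count: ignore the gesture
    return matrix.get(swipe_direction)

# Example: two fingers swiped forward selects the "eating" label.
print(select_label(2, "forward"))
```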

Source publication
Article
Full-text available
We describe an activity logging platform for Google Glass based on our previous work. We introduce new multi-modal methods for quick, unobtrusive interactions for activity logging control and real-time ground truth labeling, consisting of swipe gesture, head gesture, and laser pointer tagging methods. The methods are evaluated in user studies towa...

Citations

... The DataScope application was implemented in the R Shiny platform. The Shiny platform provides a user-friendly, interactive web interface combined with the computational power of the R software (Ishimaru et al., 2014; R Development Core Team, 2016). ...
Article
Machine learning promises many advantages, but achieving these promises requires methods that smooth the interaction between humans and machine learning. Machine learning systems require meticulous training on large, labeled datasets. Labeling data is a tedious, expensive process that often requires complex human-judgment skills. Mixed-initiative designs divide the labor between the artificial agent and the human to make the task at hand more efficient and effective. This paper proposes a mixed-initiative method for efficient coding of video and image data in general and emotion data in particular. We integrate an unsupervised dimensionality reduction algorithm with the R Shiny platform to develop an interactive method that leverages human expertise to label the data more efficiently and effectively. Through the interactive web tool, the method allows the user to explore the data, examine similarities and dissimilarities, and label clusters of many images and video frames at once. The combination of the unsupervised learning algorithm and the R Shiny platform enables interactive exploration and annotation of high-dimensional, complicated data. This method can be used to annotate large datasets faster and can advance research in machine vision.
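The pipeline this abstract describes (unsupervised dimensionality reduction, clustering, and per-cluster labeling) can be illustrated with a short sketch. The authors' tool is built in R Shiny; the Python analogue below, with synthetic feature data and placeholder emotion labels, only shows the core label-propagation idea, not their implementation.

```python
# Minimal sketch of the mixed-initiative labeling idea: reduce
# high-dimensional frame features with an unsupervised method, cluster the
# result, and let a human assign one label to an entire cluster at once.
# Feature data and label names below are made up for illustration.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
features = rng.normal(size=(500, 128))      # stand-in for per-frame features

embedded = PCA(n_components=2).fit_transform(features)  # dimensionality reduction
clusters = KMeans(n_clusters=5, n_init=10).fit_predict(embedded)

# The human inspects each cluster in the interactive view and labels it once;
# the label then propagates to every frame in that cluster.
cluster_labels = {0: "happy", 1: "sad", 2: "angry", 3: "surprised", 4: "neutral"}
frame_labels = [cluster_labels[c] for c in clusters]
print(frame_labels[:10])
```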
... com) (85), to visualize the author keywords, countries, institutions, and authors. HistCite, an open-source tool, was used for analyzing research productivity (86); in addition, bibliometrix, an R tool for comprehensive science mapping analysis (run on RStudio Cloud), was used (87,88). ...
Article
Full-text available
Esophageal cancer (EC) is the eighth most common cause of cancer death in the world. Moreover, it is considered a public health issue due to its incredibly aggressive nature and poor survival rate. This study reviewed the epidemiology, diagnosis, and treatment of EC and provides an overview of the global scientific research on it. Bibliometric studies have played a fundamental role in decision making regarding policy formation and the prioritization of resources for public health challenges. The bibliometric analysis was conducted for studies published between 1961 and 2019 using the medical subject headings (MeSH) database of the United States. VOSviewer and HistCite software were used for data analysis. Data were evaluated based on titles, trends, citation reports, authorship, countries/regions, organizations, and journals. The total number of documents was 9,021, the total citation count was 222,721, and the h-index was 160. Research articles (7,871; 87.25%) and review papers (655; 7.26%) represent the majority of documents. Publication output increased rapidly from 1985 to 2019. Journal of Diseases of the Esophagus and Annals of Surgical Oncology are the leading journals. Doki Y, Wang Y, and Kuwano H are the most productive authors in EC research. Most published articles and the leading funding agencies for EC research were from China, the USA, and Japan. The most frequent author keywords were squamous cell carcinoma (SCC) and adenocarcinoma (AC). There is a need for worldwide collaboration on the diagnosis and treatment of EC and for controlling the risks associated with it.
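For reference, the h-index reported above is defined as the largest h such that h documents each have at least h citations. A minimal sketch with made-up citation counts:

```python
# Sketch of the h-index metric used in the abstract above: a corpus has
# h-index h when h of its documents each have at least h citations.
# The citation counts below are invented example data.

def h_index(citations):
    """Largest h such that h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # -> 4: four papers have at least 4 citations each
```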
... Some studies have utilized the surface of the device [20,62], and researchers have also proposed hand-to-face input techniques from the perspective of technical feasibility [9,25,32,42,64]. Yamashita et al. attached an optical sensor array to an HMD to allow gesture inputs via the cheek [64]. ...
... In this project, we were able to develop a web application using R software and the Shiny package [3], [4]. The application represents a powerful and promising tool for video analytics. ...
... Although manual intervention for activity and context recognition with wearable devices is common [1,4], video recording interfaces associated with manual annotation have not been well explored. As research involving record-time manual annotation, Suzuki et al. proposed a video watermarking tool called Annotone [8]. ...
Conference Paper
To efficiently edit first-person videos, it is helpful to manually highlight important scenes while recording. However, little research has examined how such annotation contributes to video editing and affects user behavior during recording. To elicit fundamental requirements for designing useful record-time annotation techniques, we conducted a study using a prototype wearable camera system and a video editing interface that lets users annotate scenes during recording. We asked participants to perform video recording and editing tasks with two different interface settings. We observed that participants edited videos more efficiently with detailed annotation techniques, whereas focusing on annotating scenes affected their record-time behavior. We conclude the paper with design guidelines developed from these findings.
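A hedged sketch of the record-time annotation idea: assuming the system stores timestamped highlight marks while recording, the editor can later cut a clip around each mark. The data layout and the five-second window below are illustrative assumptions, not the paper's prototype.

```python
# Illustrative sketch (not the prototype from the paper) of how record-time
# annotations can drive later editing: the camera logs timestamped highlight
# marks while recording, and the editor cuts a clip around each mark.
from dataclasses import dataclass

@dataclass
class Annotation:
    time_s: float   # seconds from the start of the recording
    label: str      # e.g. "highlight"; label names here are placeholders

def highlight_segments(annotations, pre_s=5.0, post_s=5.0):
    """Turn each annotation into a (start, end) segment around its timestamp."""
    return [(max(0.0, a.time_s - pre_s), a.time_s + post_s)
            for a in annotations if a.label == "highlight"]

marks = [Annotation(12.0, "highlight"), Annotation(95.5, "highlight")]
print(highlight_segments(marks))  # -> [(7.0, 17.0), (90.5, 100.5)]
```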
... Wearable smart glasses are an emerging device category that presents many novel interaction challenges [30]. Equipped with large, high-resolution, and private graphical displays, they are capable of displaying a rich range of content to their users [36]. ...
Conference Paper
Full-text available
Wearable technologies such as smart-glasses can sense, store and display sensitive personal contents. In order to protect this data, users need to securely authenticate to their devices. However, current authentication techniques, such as passwords or PINs, are a poor fit for the limited input and output spaces available on wearables. This paper focuses on eyewear and addresses this problem with a novel authentication system that uses an alphabet of simple tapping patterns optimized for rapid and accurate input on the temples (or arms) of glasses. Furthermore, it explores how an eyewear display can support password memorization by privately presenting a visualization of entered symbols. A pair of empirical studies confirm that performance during input of both individual password symbols and full passwords is rapid and accurate. A follow-up session one week after the main study suggests using a private display to show entered password symbols effectively supports memorization.
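The tapping-pattern alphabet described above can be sketched as a lookup from tap patterns to password symbols. The pattern encoding and symbol set below are invented for illustration; the paper's optimized alphabet may differ.

```python
# Minimal sketch of the tap-pattern authentication idea: each password
# symbol is a short tapping pattern on the temples of the glasses.
# The alphabet below is invented; the paper's actual patterns and
# symbol set may differ.

# A pattern is a tuple of taps; each tap is "F" (front of the temple) or
# "B" (back of the temple).
ALPHABET = {
    ("F",): "A",
    ("B",): "B",
    ("F", "F"): "C",
    ("F", "B"): "D",
    ("B", "F"): "E",
    ("B", "B"): "F",
}

def decode(tap_patterns):
    """Translate a sequence of tap patterns into password symbols."""
    return "".join(ALPHABET.get(tuple(p), "?") for p in tap_patterns)

stored_password = "ADB"
entered = [("F",), ("F", "B"), ("B",)]
print(decode(entered) == stored_password)  # -> True
```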
Chapter
In this paper, we present a personalized, real-time prototyping solution for activity recognition on smart glasses. Our work is based on the analysis of sensor data to study the user's motions and activities, utilizing wearable glasses bundled with various sensors. The software system collects and trains on data and builds a model for fast classification, with an emphasis on how specific features annotate and extract head-mounted behavior. Based on our feature selection algorithm, the system achieves high accuracy and low computational cost in the experiments. In contrast to previous work on data mining with smartphone or smart-glass sensors and related work on activity recognition with smartphones, our results show that the accuracy reaches 87% and the response time is less than 3 s. The proposed system can provide more insightful and powerful services for glass wearers and is expected to enable more user-centric and context-aware wearable applications in the future.
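A minimal sketch of the kind of pipeline the chapter describes: window the head-mounted sensor stream, extract simple statistical features, and train a lightweight classifier. The synthetic data, feature set, and classifier choice below are assumptions, not the authors' feature selection algorithm.

```python
# Hedged sketch of a sensor-based activity recognition pipeline: window the
# sensor stream, extract per-axis statistics, and train a classifier.
# Sensor values and activity labels are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

def extract_features(window):
    """Per-axis mean and standard deviation of one sensor window."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0)])

# Synthetic stand-in: 200 windows of 50 samples x 3 accelerometer axes.
windows = rng.normal(size=(200, 50, 3))
labels = rng.integers(0, 3, size=200)       # 3 placeholder activities

X = np.array([extract_features(w) for w in windows])
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)
print(clf.predict(X[:5]))
```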