Figure 1 - uploaded by Huy Viet Le
Our full-touch smartphone prototype based on two LG Nexus 5 and a Genuino MKR1000 for touch sensing on the edges.


Source publication
Conference Paper
Full-text available
Smartphones are the most successful mobile devices and offer intuitive interaction through touchscreens. Current devices treat all fingers equally and only sense touch contacts on the front of the device. In this paper, we present InfiniTouch, the first system that enables touch input on the whole device surface and identifies the fingers touching...

Contexts in source publication

Context 1
... Handheld Device (see Figures 1 and 2a) consists of a 3D-printed frame, two Nexus 5 touchscreens, 37 copper plates as capacitive touch sensors on the edges, and a PCB (PCB_HD). The 3D-printed frame holds both touchscreens and encloses PCB_HD. ...
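The excerpt does not include any firmware or host code; as a purely illustrative sketch, the Python script below shows one way a host could read raw values from the 37 edge electrodes if the MKR1000 streamed them as comma-separated lines over USB serial. The port name, baud rate, line format, and touch threshold are assumptions, not details from the paper.

```python
# Hypothetical host-side reader for the 37 edge electrodes; the serial port,
# baud rate, and line format are assumptions, not the authors' actual protocol.
import serial  # pyserial

NUM_EDGE_SENSORS = 37          # copper plates on the device edges (from the paper)
PORT = "/dev/ttyACM0"          # assumed port of the Genuino MKR1000
BAUD = 115200                  # assumed baud rate

def read_edge_frame(link: serial.Serial) -> list[int]:
    """Read one comma-separated line of raw capacitance counts."""
    line = link.readline().decode("ascii", errors="ignore").strip()
    values = [int(v) for v in line.split(",") if v]
    if len(values) != NUM_EDGE_SENSORS:
        raise ValueError(f"expected {NUM_EDGE_SENSORS} values, got {len(values)}")
    return values

if __name__ == "__main__":
    with serial.Serial(PORT, BAUD, timeout=1.0) as link:
        while True:
            try:
                frame = read_edge_frame(link)
            except ValueError:
                continue  # skip malformed frames
            touched = [i for i, v in enumerate(frame) if v > 600]  # assumed threshold
            print("touched edge electrodes:", touched)
```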

Similar publications

Chapter
Full-text available
It is in the crossroads between art museum and technology where this project here presented is born. We are referring to a prototype designed to provide involvement of museum visitors through experiences of enjoyment, creation and sharing. The prototype is an interactive application that combines an enjoyment area with a creative exploration area....

Citations

... Smartphones have become indispensable to human life. Besides traditional finger-touch interaction [1][2][3][4], a series of novel interaction approaches has also been explored, such as voice [5][6][7], gesture [8][9][10][11], and eye-gaze interaction [12][13][14][15][16]. To control smartphones, eye-gaze interaction utilizes eye and gaze information such as gaze positions, gaze gestures, dwell time, and eye blinks. ...
Preprint
Full-text available
Current eye-gaze interaction technologies for smartphones are considered inflexible, inaccurate, and power-hungry. These methods typically rely on hand involvement and accomplish only partial interactions. In this paper, we propose a novel eye-gaze smartphone interaction method named Event-driven Eye-Gaze Operation (E²GO), which can realize comprehensive interaction using only eyes and gazes to cover various interaction types. Before the interaction, an anti-jitter gaze estimation method is employed to stabilize human eye fixation and predict accurate and stable gaze positions on smartphone screens, enabling further refined time-dependent eye-gaze interactions. We also integrated an event-triggering mechanism in E²GO to significantly decrease its power consumption when deployed on smartphones. We have implemented the prototype of E²GO on different brands of smartphones and conducted a comprehensive user study to validate its efficacy, demonstrating E²GO's superior smartphone control capabilities across various scenarios.
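The abstract does not specify how the anti-jitter gaze estimation works; the sketch below is a generic stand-in (not the E²GO implementation) that combines exponential smoothing with a dead zone so that jittery gaze samples only produce an event when the fixation genuinely moves. All parameter values are assumptions.

```python
# Minimal sketch (not the E²GO implementation): exponential smoothing plus a
# dead zone to stabilize noisy gaze estimates and emit "gaze events" only when
# the fixation actually moves. All parameter values are assumptions.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class GazeStabilizer:
    alpha: float = 0.3          # smoothing factor for the moving average
    dead_zone_px: float = 40.0  # ignore jitter smaller than this radius
    _sx: Optional[float] = None
    _sy: Optional[float] = None

    def update(self, x: float, y: float) -> Optional[Tuple[float, float]]:
        """Feed one raw gaze sample; return a new stable position or None."""
        if self._sx is None:
            self._sx, self._sy = x, y
            return (x, y)
        # Exponentially weighted moving average of the raw estimates.
        ex = self.alpha * x + (1 - self.alpha) * self._sx
        ey = self.alpha * y + (1 - self.alpha) * self._sy
        moved = ((ex - self._sx) ** 2 + (ey - self._sy) ** 2) ** 0.5
        if moved < self.dead_zone_px:
            return None  # within the dead zone: suppress the event entirely
        self._sx, self._sy = ex, ey
        return (ex, ey)
```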
... This library enables the construction of passive, customized tokens (tangibles) that can be identified and tracked on the mobile device, including when they are placed on the screen. An additional example of extended uses of the multi-touch screen is InfiniTouch [25]. The above reference works belong to the tangible interfaces categorized as "interactive surfaces". ...
... Finally, there are still many functionalities that need to be developed for the library. For example, a large number of reference works use tangibles on the multi-touch screen to identify and detect the position and rotation of objects (widgets) on the screen [16], [21], [22], [24], [25]; however, the current version of SensorMov only allows the identification of static objects with three touch points and the capture of raw data. Potentially, object identification could be extended to more complex shapes, and signal-processing algorithms could be included to smooth and stabilize the raw data capture. ...
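As a hypothetical illustration of the three-touch-point identification mentioned above (not SensorMov's actual API), the sketch below matches a token by the sorted pairwise distances between its three contact points, a signature that is invariant to where the token is placed and how it is rotated on the screen.

```python
# Illustrative sketch (not SensorMov's actual API): identify a passive token
# from its three conductive feet by matching the sorted pairwise distances of
# the touch points. The token registry and tolerance are made up for the example.
from itertools import combinations
from math import dist
from typing import List, Optional, Tuple

# Hypothetical registry of known tokens: name -> sorted side lengths in pixels.
KNOWN_TOKENS = {
    "token_A": (120.0, 150.0, 150.0),
    "token_B": (100.0, 100.0, 160.0),
}

def signature(points: List[Tuple[float, float]]) -> Tuple[float, ...]:
    """Sorted pairwise distances of the three touch points."""
    return tuple(sorted(dist(p, q) for p, q in combinations(points, 2)))

def identify(points: List[Tuple[float, float]], tol: float = 10.0) -> Optional[str]:
    sig = signature(points)
    for name, ref in KNOWN_TOKENS.items():
        if all(abs(a - b) <= tol for a, b in zip(sig, ref)):
            return name
    return None

print(identify([(0, 0), (120, 0), (60, 138)]))  # roughly matches token_A
```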
Article
Full-text available
The study presented in this article aimed to contribute to the Spanish-speaking mobile developer community a set of free programming tools, documentation, and examples in Spanish for creating tangible user interfaces with smart mobile devices. This type of interface diversifies interactions with phones, prioritizing alternative uses of their sensors. The development of this toolkit took place within a research project that sought to promote a dialogue of knowledge between engineering and traditional textile knowledge. The project used a participatory design methodology that involved diverse actors at the different stages: self and mutual recognition, ideation, prototyping, and experimentation. As results, we present a review of reference works on mobile devices and tangible interfaces, highlighting those mobile interactions that use physical objects and gestural actions; a technical description of SensorMov, covering its architecture and specific design; and, finally, its preliminary evaluation through a case study and an academic project. The library understands the tangible as the mobile device itself or as passive physical objects that do not require electrical power, such as magnets. In this sense, the documented version of SensorMov made it possible to work with raw sensor data, to identify objects and positions according to the static magnetic field of physical objects, and to identify and position physical objects on the mobile screen. The main challenge for the future is to enable spaces of appropriation that facilitate and extend its use and functionalities, creating a community around it.
... While first-generation tabletops were able to detect fingers and tangibles using camera systems, they were bulky and have thus been almost fully replaced by projected-capacitive touchscreens [7,61]. Over the last years, capacitive sensors have evolved from tracking only the fingertip to multi-purpose sensing devices, e.g., [13,17,29]. Hence, the software stack of today's touchscreens is fine-tuned to extract fingers touching a screen from the low-resolution "capacitive image," making them incapable of recognizing the structural information needed to detect tangibles placed on them. ...
... The most prominent line of investigation concerns finger orientation [39,42,69], enabling a wide variety of interactions previously not possible. Other investigations track the finger type [30], finger parts [49], or multi-finger gestures [29,31]. ...
... Our recognizer is a traditional CNN model which, based on a capacitive sample s, predicts the class ĉ and the orientation θ̂; thus, f(s) → (ĉ, θ̂). Such a model is common in processing capacitive images, e.g., [27,29,47,49]. ...
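A minimal PyTorch sketch of such a recognizer is shown below; the layer sizes, the 27×15 capacitive-image resolution, and the sin/cos orientation encoding are assumptions for illustration, not the architecture used in the cited work.

```python
# Minimal sketch (our own illustration, not the authors' architecture): a small
# CNN over a low-resolution capacitive image with one head for the tangible
# class and one head for its orientation.
import torch
import torch.nn as nn

class TangibleRecognizer(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(            # input: 1 x 27 x 15 capacitive image (assumed size)
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.class_head = nn.Linear(32 * 4 * 4, num_classes)   # predicts the class
        self.orient_head = nn.Linear(32 * 4 * 4, 2)            # predicts (sin, cos) of the orientation

    def forward(self, x: torch.Tensor):
        h = self.features(x).flatten(1)
        return self.class_head(h), self.orient_head(h)

# Usage: one random capacitive frame of shape (batch, 1, 27, 15).
model = TangibleRecognizer()
logits, orientation = model(torch.randn(1, 1, 27, 15))
theta_hat = torch.atan2(orientation[:, 0], orientation[:, 1])  # recover the angle in radians
```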
Article
Full-text available
While tangibles enrich the interaction with touchscreens, with projected capacitive screens being mainstream, the recognition possibilities of tangibles are nearly lost. Deep learning approaches to improve the recognition of conductive tangibles require collecting huge amounts of data and domain-specific knowledge for hyperparameter tuning. To overcome this drawback, we present a toolkit that allows everyone to train a deep learning tangible recognizer based on simulated data. Our toolkit uses a pre-trained Generative Adversarial Network to simulate the imprint of fiducial tangibles, which we then use to train a deployable recognizer based on our pre-defined neural network architecture. Our evaluation shows that our approach can recognize fiducial tangibles such as AprilTags with an average accuracy of 99.3% and an average rotation error of only 4.9°. Thus, our toolkit is a plug-and-play solution requiring no domain knowledge and no data collection, but allows designers to use deep learning approaches in their design process.
... In addition, some users tend to use their phones with their non-dominant hand during other passive activities such as eating or drinking [39]. Therefore, designers and researchers have proposed numerous solutions to facilitate one-handed interaction with smartphones, such as user-interface (UI) adaptation [7,13,19], specific thumb gestures [9,21,34], and input-space expansion (e.g., mid-air and back-of-device interaction) [31,52,55]. More specifically, in UI adaptation for one-handed smartphone interaction, it is suggested that the UI design should accommodate the situations of left- and right-hand interaction, which differ from each other largely in terms of thumb reachability [7]. ...
... HandSee [55] extended Back-Mirror and introduced a camera-based gesture-sensing technique that places a prism mirror on the front camera to extend the interaction space above the screen. InfiniTouch [31] supports back-of-device interaction with different fingers using extra capacitive sensors around the smartphone. ...
... Similarly, HandSense [51] detected unimanual hand-grasp postures through a capacitive-sensor array. Le et al. [30][31][32] extended HandSense and developed InfiniTouch, a finger- and hand-aware touch-sensing mechanism with a touch-sensor array that supports handedness prediction. ...
Article
Full-text available
The handedness (i.e., the side of the holding and operating hand) is an important piece of contextual information for optimising one-handed smartphone interaction. In this paper, we present a deep-learning-based technique for unobtrusive handedness prediction in one-handed smartphone interaction. Our approach is built upon a multilayer LSTM (Long Short-Term Memory) neural network and processes the built-in motion-sensor data of the phone in real time. Compared to existing approaches, our approach eliminates the need for extra user actions (e.g., on-screen tapping and swiping), and predicts the handedness based on the picking-up action and the holding posture before the user performs any operation on the screen. Our approach is able to predict the handedness when a user is sitting, standing, and walking at an accuracy of 97.4%, 94.6%, and 92.4%, respectively. We also show that our approach is robust to turbulent noise, with an average accuracy of 94.6% when users are in transportation (e.g., bus, train, and scooter). Furthermore, the presented approach can classify users' real-life single-handed smartphone usage into left- and right-handed with an average accuracy of 89.2%.
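The abstract describes the approach only at a high level; the following PyTorch sketch shows what a multilayer LSTM handedness classifier over windows of built-in motion-sensor data could look like. The 6-channel IMU input, hidden size, and window length are assumptions, not the authors' configuration.

```python
# Rough sketch of a multilayer LSTM handedness classifier (our own illustration;
# layer sizes and the 6-channel IMU input are assumptions, not the paper's setup).
import torch
import torch.nn as nn

class HandednessLSTM(nn.Module):
    def __init__(self, num_features: int = 6, hidden: int = 64, layers: int = 2):
        super().__init__()
        # Input: windows of accelerometer + gyroscope samples, shape (batch, time, 6).
        self.lstm = nn.LSTM(num_features, hidden, num_layers=layers, batch_first=True)
        self.classifier = nn.Linear(hidden, 2)  # left hand vs. right hand

    def forward(self, imu_window: torch.Tensor) -> torch.Tensor:
        _, (h_n, _) = self.lstm(imu_window)
        return self.classifier(h_n[-1])         # logits from the last layer's final state

# Usage: a 2-second window at 50 Hz (100 samples) of 6-axis motion data.
model = HandednessLSTM()
logits = model(torch.randn(1, 100, 6))
predicted_hand = ["left", "right"][logits.argmax(dim=1).item()]
```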
... Inspired by our peers [141,142], we also provide our dataset to the community to further advance the understanding of the dexterity of single-finger movements while grasping objects, and to leverage this dexterity to design quick and seamless gestures that can be integrated with everyday actions. Although covering the full hand with a large number of sensors can help detect gestures in situations where the hands hold objects, in real-world use it is desirable to have a minimal set of sensors that can still perform gesture classification with high accuracy. ...
... Smartphones, tablet computers, and many other devices used today have touchscreens. These devices allow users to manipulate onscreen icons and menus with the touch of a finger (Le et al., 2018). Such a direct manipulation style facilitated by touchscreens enables people with different levels of expertise to use computing systems (Cáliz et al., 2021). ...
Article
Full-text available
Interaction of human beings with various types of apparatus, including many digital gadgets, follows Fitts' law. The objectives of this study were to assess the ability of children to acquire onscreen targets while using smartphones and determine if their interaction with smartphones follows Fitts' law. We developed an app implementing the standard two‐dimensional target selection task and provided it to 30 children aged between 4 and 10 years. We observed them to use the app and acquire onscreen targets using the tap gesture and the drag and drop gesture. We noted the index of difficulty (ID), movement time (MT), and throughput (TP) for the movement tasks. MT decreased and TP increased with the age of the children (p < .05). MT was significantly (p < .05) higher for the drag and drop gesture than for the tap gesture for 4 to 6‐year‐old children but not for 7 to 10‐year‐old children. No strong correlation (−.142 ≤ r ≤ .292) was observed between ID and MT for the children aged between 4 and 10 years indicating that the interaction of children in this age range with smartphones does not obey Fitts' law. We recommend that smartphone apps for children be developed taking into consideration their ability to acquire onscreen targets. It was found that the interaction of children aged four to ten years with a smartphone does not obey Fitts’ law. Additionally, TP is higher for the tap gesture and increases with age. ID, index of difficulty; MT, movement time; TP, throughput.
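For readers unfamiliar with the quantities ID, MT, and TP, the short worked example below uses the standard Shannon formulation of Fitts' law; the numbers are made up for illustration and are not data from the study.

```python
# Worked example of the quantities used in the study (standard Shannon
# formulation; the sample numbers below are hypothetical, not the paper's data).
from math import log2

def index_of_difficulty(distance: float, width: float) -> float:
    """ID in bits: log2(D/W + 1)."""
    return log2(distance / width + 1)

def throughput(id_bits: float, movement_time_s: float) -> float:
    """TP in bits per second: ID / MT."""
    return id_bits / movement_time_s

# Hypothetical trial: a 10 mm target at a distance of 70 mm, acquired in 1.2 s.
ID = index_of_difficulty(distance=70, width=10)   # = 3 bits
MT = 1.2                                          # seconds
TP = throughput(ID, MT)                           # = 2.5 bits/s
print(f"ID = {ID:.1f} bits, MT = {MT} s, TP = {TP:.2f} bits/s")
```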
... Consumer touchscreen devices rarely report their underlying capacitive image data (most often used for debugging), and so we recompiled the open source Android kernel with a custom touchscreen driver (cf. [31,38,39]) that is able to communicate directly with the tablet's Synaptics touch controller over i2c (400kHz). Our driver code can be found here anonymized_for_review. ...
... Le et al. [153] proposed a finger-aware interaction technique that identifies the fingers touching the whole device surface to add input modalities. In the prototype, the front and back touchscreens were built from two stacked smartphones, and 37 capacitive sensors were attached along three of its edges. ...
Article
Full-text available
Touchscreens have been studied and developed for a long time to provide user-friendly and intuitive interfaces on displays. This paper describes touchscreen technologies in four categories: resistive, capacitive, acoustic wave, and optical methods. It then addresses the main studies on SNR improvement and stylus support for the capacitive touchscreens that have been widely adopted in most consumer electronics such as smartphones, tablet PCs, and notebook PCs. In addition, machine learning approaches for capacitive touchscreens are explained in four application areas: user identification/authentication, gesture detection, accuracy improvement, and input discrimination.
... Although this approach is restricted to at most four actions, future work could explore and compare alternative design solutions (e.g., scrollable menus, marking menus [46], or touch input beyond the front touchscreen [38]). Additional actions are available in the middle of the menu, which require a display switch and thus have an increased interaction cost. ...
Preprint
Full-text available
Recent research in the area of immersive analytics demonstrated the utility of head-mounted augmented reality devices for visual data analysis. However, it can be challenging to use the mid-air gestures supported by default to interact with visualizations in augmented reality (e.g., due to limited precision). Touch-based interaction (e.g., via mobile devices) can compensate for these drawbacks but is limited to two-dimensional input. In this work we present STREAM: Spatially-aware Tablets combined with Augmented Reality Head-Mounted Displays for multimodal interaction with 3D visualizations. We developed a novel eyes-free interaction concept for the seamless transition between the tablet and the augmented reality environment. A user study reveals that participants appreciated the novel interaction concept, indicating the potential of spatially-aware tablets in augmented reality. Based on our findings, we provide design insights to foster the application of spatially-aware touch devices in augmented reality, as well as research implications indicating areas that need further investigation.