Figure 1 - uploaded by Byungjoo Lee
(A) Button displacement and (B) fingertip force curves for button pressing [18]. Traditional design activates the button during downward travel, whereas Impact Activation triggers near the point of maximum impact. 


Source publication
Conference Paper
Full-text available
The activation point of a button is defined as the depth at which it invokes a make signal. Regular buttons are activated during the downward stroke, which occurs within the first 20 ms of a press. The remaining portion, which can last as long as 80 ms, has not been exploited for button activation owing to mechanical limitations. The paper presen...

Contexts in source publication

Context 1
... define an activation principle called Impact Activation (IA) and assess it empirically in a rapid tapping task. In IA, the signal is invoked at the maximal impact point during a button press (Figure 1). This point can be physically defined and implemented in sensors that provide a continuous signal. ...
Context 2
... traditional button activation methods, IA triggers a button at the point of maximum impact (see Figure 1). Figure 1b shows a fingertip force profile measured during a button press [18] from a snap-feeling button with 0.6 N activation force. ...
Context 3
... traditional button activation methods, IA triggers a button at the point of maximum impact (see Figure 1). Figure 1b shows a fingertip force profile measured during a button press [18] from a snap-feeling button with 0.6 N activation force. The figure shows three main phases of a press: compression, impact, and release. ...
Context 4
... detect this, we developed an algorithm that finds the first local maximum of the signal during a button press. The algorithm keeps updating the maximum sensor value and activates the button when the sensor value starts to decrease (see Algorithm 1 and Figure 1a for details). For a noisy signal, T_a may be increased, or smoothing applied before a pass through the algorithm. ...

Similar publications

Article
Full-text available
A Mobile stroke unit (MSU) is a type of ambulance deployed to promote the rapid delivery of stroke care. We present a computational study using a time to treatment estimation model to analyze the potential benefits of using MSUs in Sweden’s Southern Health Care Region (SHR). In particular, we developed two scenarios (MSU1 and MSU2) each including t...

Citations

... To create the models mentioned, we used two methods. The first will be based on the spatiotemporal modeling efforts where they have modelled and projected mistake rates of users executing spatiotemporal activities including hitting a virtual baseball [27], clicking on a moving and challenging target [26], and pressing a tactile button at the appropriate time [19]. ...
Preprint
Full-text available
The task of learning the piano has been a centuries-old challenge for novices, experts and technologists. Several innovations have been introduced to support proper posture, movement, and motivation, while sight-reading and improvisation remain the least-explored areas. In this PhD, we address this gap by redesigning the piano augmentation as an interactive and adaptive space. Specifically, we will explore how to support learners with adaptive visualisations through a two-pronged approach: (1) by designing adaptive visualisations based on the proficiency of the learner to support regular piano playing and (2) by assisting them with expert annotations projected on the piano to encourage improvisation. To this end, we will build a model to understand the complexities of learners' spatiotemporal data and use these to support learning. We will then evaluate our approach through user studies enabling practice and improvisation. Our work contributes to how adaptive visualisations can push music instrument learning and support multi-target selection tasks in immersive spaces.
... It would be interesting to explore whether using a continuous visual prompt inspired by gameplay might aid, e.g., in the usability of RCS interfaces. Additional work in this vein models the neuromechanical process of a finger pressing a physical button in a non-switch user [18,44]. Single-switch users, though, often have specialized switches that may be activated by different body parts. ...
Preprint
Some individuals with motor impairments communicate using a single switch -- such as a button click, air puff, or blink. Row-column scanning provides a method for choosing items arranged in a grid using a single switch. An alternative, Nomon, allows potential selections to be arranged arbitrarily rather than requiring a grid (as desired for gaming, drawing, etc.) -- and provides an alternative probabilistic selection method. While past results suggest that Nomon may be faster and easier to use than row-column scanning, no work has yet quantified performance of the two methods over longer time periods or in tasks beyond writing. In this paper, we also develop and validate a webcam-based switch that allows a user without a motor impairment to approximate the response times of a motor-impaired single switch user; although the approximation is not a replacement for testing with single-switch users, it allows us to better initialize, calibrate, and evaluate our method. Over 10 sessions with the webcam switch, we found users typed faster and more easily with Nomon than with row-column scanning. The benefits of Nomon were even more pronounced in a picture-selection task. Evaluation and feedback from a motor-impaired switch user further supports the promise of Nomon.
... We chose two approaches to build the models described. The first one will be based on the spatiotemporal modelling work by Kim et al. [14], Lee et al. [20], Lee and Oulasvirta [21], Liao et al. [22] where they have modelled and predicted error rates of users executing spatiotemporal tasks such as batting a virtual baseball, clicking on a moving and tricky target, and pressing a tactile button at the right time. ...
Preprint
Full-text available
The process of learning the piano for novices is usually difficult and time-consuming. Several approaches in augmented reality such as piano-roll visualizations have been explored but have not garnered enough success and adoption. These piano roll prototypes have introduced several features and modules that assist novices on aspects such in sight reading, timing and many others. However, improvisation, the act of allowing the piano user to incorporate their personal touch into their performance, and personalised learning have not been much explored in this domain. In this PhD, we are going to explore how we can encourage piano learners to improvise with the use of adaptive piano roll visualisations. Specifically, we are going to investigate how heuristics defined by experts and spatiotemporal models can be used to design visualisations that motivate and encourage learners based on their personalised learning patterns. Using these models and inputs, we will design and build a piano roll training system integrated with adaptive visualisations that serve as intervention helping learners. We will evaluate and compare these visualisations in various user studies where they get to play piano pieces and develop their improvisation skills. We intend to uncover whether these adaptive visualisations will be helpful in the overall training of piano learners. Additionally, we wish to explore whether these adaptive visualisations will allow us to discover affordances that can potentially improve piano learning in general.
... We chose two approaches to build the models described. The first one will be based on the spatiotemporal modelling work by Kim et al. [12], Lee et al. [17], Lee and Oulasvirta [19], Liao et al. [21] where they have modelled and predicted error rates of users executing spatiotemporal tasks such as batting a virtual baseball, clicking on a moving and tricky target, and pressing a tactile button at the right time. It also considers three factors, namely (i) the user's internal timekeeping mechanism, (ii) Fitts' Law and (iii) the effects of visualisations in Cognitive Load Theory (CLT) [11,13]. ...
... The work of Kim et al. [12] presented an activation technique called impact activation (IA), in which a button is activated at its maximal impact point. Based on their findings, IA is most useful during particularly rapid repetitive button-pressing activities, which are usually observed in games and music applications. ...
Preprint
Full-text available
Learning the piano is hard and many approaches including piano roll visualisations have been explored in order to support novices and seasoned learners in this process. However, existing piano roll prototypes have not considered the spatiotemporal component (user’s ability to press on a moving target) when generating these visualisations and user modelling. In this PhD, we are going to look into two different approaches: (i) exploring whether existing techniques in single-target spatiotemporal modelling can be adapted to a multi-target scenario such as when learners use several fingers to press multiple moving targets when playing the piano, and (ii) exploring heuristics defined by experts marking various difficult parts of songs, and deciding on specific interventions needed for these marked parts. Using models and input from the experts we will design and build an adaptive piano roll training system. We will evaluate and compare these models in various user studies involving users trying to play piano pieces and develop their improvisation skills. We intend to uncover whether these adaptive visualisations will be helpful in the overall training of piano learners. Additionally, these models and adaptive visualisations will allow us to discover affordances that can potentially improve piano learning in general.
... This study also reveals two interesting points about the origin of professional players' high performance. First, the combat strategy of professional players appears to be largely determined by the settings of the game interface, such as the input device [25,26,30,32]. For example, looking at the A-2 related metrics, professional players may move their mouse faster than regular players simply because they use a lower mouse sensitivity setting. ...
... In my thesis, I demonstrate designing a push-button on the proposed framework. Buttons are transducers that register a discrete event from physical motion [14,18,33], and are arguably the most basic input component of any interface. Interestingly, each button design is unique in its haptic response characteristics. ...
Preprint
Full-text available
Input devices, such as buttons and sliders, are the foundation of any interface. The typical user-centered design workflow requires the developers and users to go through many iterations of design, implementation, and analysis. The procedure is inefficient, and human decisions highly bias the results. While computational methods are used to assist various design tasks, there has not been any holistic approach to automate the design of input components. My thesis proposed a series of Computational Input Design workflows: I envision a sample-efficient multi-objective optimization algorithm that cleverly selects design instances, which are instantly deployed on physical simulators. A meta-reinforcement learning user model then simulates the user behaviors when using the design instance upon the simulators. The new workflows derive Pareto-optimal designs with high efficiency and automation. I demonstrate designing a push-button via the proposed methods. The resulting designs outperform the known baselines. The Computational Input Design process can be generalized to other devices, such as joystick, touchscreen, mouse, controller, etc.
... Table 1 compares the baseline models with the ICP model, but users typically have a c_µ value lower than 0.5 [36,34,37], which is called the negative mean asynchrony (NMA) phenomenon [49,46,32]. • ν is the rate at which the user encodes sensory information to estimate click timing from the visual cue. ...
Conference Paper
Full-text available
... Upon release, it returns to the initial state. More generally, buttons are transducers that register a discrete event from physical motion [28,33,49]. Numerous types exist, using spring-loading but also other mechanisms, such as rubber and metal domes. ...
... Some tactile buttons emit an audible "click" sound near the snap point. Travel distance is the total distance before the keycap hits the bottom, and the distance at which the button is activated is called its activation point [28]. While these features can be modeled with FD curves, we stress again that FD neglects velocity and vibration characteristics. ...
Preprint
Full-text available
Designing a push-button with desired sensation and performance is challenging because the mechanical construction must have the right response characteristics. Physical simulation of a button's force-displacement (FD) response has been studied to facilitate prototyping; however, the simulations' scope and realism have been limited. In this paper, we extend FD modeling to include vibration (V) and velocity-dependence characteristics (V). The resulting FDVV models better capture tactility characteristics of buttons, including snap. They increase the range of simulated buttons and the perceived realism relative to FD models. The paper also demonstrates methods for obtaining these models, editing them, and simulating accordingly. This end-to-end approach enables the analysis, prototyping, and optimization of buttons, and supports exploring designs that would be hard to implement mechanically.
... In fact, even near-zero latency may have a negative impact on users, at least in theory [44]. Also, positive effects of latency on usability have been reported in several recent works [24,27,29]. ...
... The player must anticipate and plan the input so as to acquire the target successfully. A series of models for predicting user error rates in such anticipated input tasks has recently been published [24, 27-29, 35-37]. The moving-target selection model extended in this paper [28] is the latest of these. ...
... If the system response is faster than the user's anticipation, a non-intuitive conclusion follows: latency must be increased to minimize the discordance. This effect has recently been reported by several authors [24, 27-29]. In their studies, delaying the system response to the button press reduced users' error rates by 5-94%. ...
Conference Paper
Effects of unintended latency on gamer performance have been reported. End-to-end latency can be corrected by post-input manipulation of activation times, but this gives the player an unnatural gameplay experience. For moving-target selection games such as Flappy Bird, the paper presents a predictive model of latency's effect on error rate and a novel compensation method that adjusts the game's geometry design -- e.g., by modifying the size of the selection region. Without manipulation of the game clock, this can keep the user's error rate constant even if the end-to-end latency of the system changes. The approach extends the current model of moving-target selection with two additional assumptions about the effects of latency: (1) latency reduces players' cue-viewing time and (2) latency pushes the mean of the input distribution backward. The model and method proposed have been validated through precise experiments.
...
1. the user first touches the input device,
2. the user overcomes the activation force and triggers a mechanical switch (~20 ms [11]),
3. the mechanical switch closes an electrical circuit,
4. the closed circuit is detected by the device's controller chip (~1-20 ms),
5. after processing the sensor data, the chip puts the data into a USB buffer (~1-20 ms),
6. the host computer queries the USB device for new data (~1-10 ms),
7. the device sends the data over the wire (0.001 ms),
8. the host computer notifies the OS about new data from the USB (0.001 ms),
9. the OS processes the data and makes it available to userland libraries (0.01 ms),
10. user code receives an input event from a userland library (0.01 ms). ...
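The per-stage figures quoted above can be summed into a rough end-to-end input latency budget. A minimal sketch; the (min, max) ranges in milliseconds mirror the excerpt's estimates and should be treated as illustrative, not measured values:

```python
# Rough end-to-end input latency budget from the stages quoted above.
# Each entry is an assumed (min_ms, max_ms) range for one pipeline stage.
STAGES_MS = {
    "mechanical switch travel": (20, 20),
    "controller chip detection": (1, 20),
    "chip to USB buffer": (1, 20),
    "host USB polling": (1, 10),
    "wire transfer": (0.001, 0.001),
    "USB to OS notification": (0.001, 0.001),
    "OS processing": (0.01, 0.01),
    "userland delivery": (0.01, 0.01),
}

best = sum(lo for lo, _ in STAGES_MS.values())    # optimistic total
worst = sum(hi for _, hi in STAGES_MS.values())   # pessimistic total
print(f"best case: {best:.3f} ms, worst case: {worst:.3f} ms")
```

Under these assumptions the host-side stages after the switch can dominate or nearly vanish, which is why the total spans tens of milliseconds rather than a single fixed value.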
Conference Paper
We propose a method for accurately and precisely measuring the intrinsic latency of input devices and document measurements for 36 keyboards, mice and gamepads connected via USB. Our research shows that devices differ not only in average latency, but also in the distribution of their latencies, and that forced polling at 1000 Hz decreases latency for some but not all devices. Existing practices - measuring end-to-end latency as a proxy of input latency and reporting only mean values and standard deviations - hide these characteristic latency distributions caused by device intrinsics and polling rates. A probabilistic model of input device latency demonstrates these issues and matches our measurements. Thus, our work offers guidance for researchers, engineers, and hobbyists who want to measure the latency of input devices or select devices with low latency.