Ryo Suzuki's research while affiliated with The University of Calgary and other places

Publications (74)

Preprint
This paper introduces Video2MR, a mixed reality system that automatically generates 3D sports and exercise instructions from 2D videos. Mixed reality instructions have great potential for physical training, but existing works require substantial time and cost to create these 3D experiences. Video2MR overcomes this limitation by transforming arbitra...
Preprint
We introduce Augmented Physics, a machine learning-powered tool designed for creating interactive physics simulations from static textbook diagrams. Leveraging computer vision techniques, such as Segment Anything and OpenCV, our web-based system enables users to semi-automatically extract diagrams from physics textbooks and then generate interactiv...
Preprint
This paper introduces holographic cross-device interaction, a new class of remote cross-device interactions between local physical devices and holographically rendered remote devices. Cross-device interactions have enabled a rich set of interactions with device ecologies. Most existing research focuses on co-located settings (meaning when users and...
Preprint
We introduce RealitySummary, a mixed reality reading assistant that can enhance any printed or digital document using on-demand text extraction, summarization, and augmentation. While augmented reading tools promise to enhance physical reading experiences with overlaid digital content, prior systems have typically required pre-processed documents,...
Preprint
This paper introduces the concept of augmented conversation, which aims to support co-located in-person conversations via embedded speech-driven on-the-fly referencing in augmented reality (AR). Today, computing technologies like smartphones allow quick access to a variety of references during a conversation. However, these tools often create dist...
Preprint
This paper introduces RealityEffects, a desktop authoring interface designed for editing and augmenting 3D volumetric videos with object-centric annotations and visual effects. RealityEffects enhances volumetric capture by introducing a novel method for augmenting captured physical motion with embedded, responsive visual effects, referred to as obj...
Preprint
We introduce RealityCanvas, a mobile AR sketching tool that can easily augment real-world physical motion with responsive hand-drawn animation. Recent research in AR sketching tools has enabled users to not only embed static drawings into the real world but also dynamically animate them with physical motion. However, existing tools often lack the f...
Preprint
This paper introduces VR Haptics at Home, a method of repurposing everyday objects in the home to provide casual and on-demand haptic experiences. Current VR haptic devices are often expensive, complex, and unreliable, which limits the opportunities for rich haptic experiences outside research labs. In contrast, we envision that, by repurposing eve...
Preprint
We present ChameleonControl, a real-human teleoperation system for scalable remote instruction in hands-on classrooms. In contrast to existing video or AR/VR-based remote hands-on education, ChameleonControl uses a real human as a surrogate of a remote instructor. Building on existing human-based telepresence approaches, we contribute a novel metho...
Preprint
This paper introduces Teachable Reality, an augmented reality (AR) prototyping tool for creating interactive tangible AR applications with arbitrary everyday objects. Teachable Reality leverages vision-based interactive machine teaching (e.g., Teachable Machine), which captures real-world interactions for AR prototyping. It identifies the user-defi...
Preprint
HapticLever is a new kinematic approach for VR haptics which uses a 3D pantograph to stiffly render large-scale surfaces using small-scale proxies. The HapticLever approach does not consume power to render forces, but rather puts a mechanical constraint on the end effector using a small-scale proxy surface. The HapticLever approach provides stiff f...
Preprint
We introduce UltraBots, a system that combines ultrasound haptic feedback and robotic actuation for large-area mid-air haptics for VR. Ultrasound haptics can provide precise mid-air haptic feedback and versatile shape rendering, but the interaction area is often limited by the small size of the ultrasound devices, restricting the possible interacti...
Preprint
We present RealityTalk, a system that augments real-time live presentations with speech-driven interactive virtual elements. Augmented presentations leverage embedded visuals and animation for engaging and expressive storytelling. However, existing tools for live presentations often lack interactivity and improvisation, while creating such effects...
Preprint
Full-text available
In this paper, we present Mixels, programmable magnetic pixels that can be rapidly fabricated using an electromagnetic printhead mounted on an off-the-shelf 3-axis CNC machine. The ability to program magnetic material pixel-wise with varying magnetic force enables Mixels to create new tangible, tactile, and haptic interfaces. To facilitate the cre...
Preprint
Full-text available
This paper introduces a method to generate highly selective encodings that can be magnetically "programmed" onto physical modules to enable them to self-assemble in chosen configurations. We generate these encodings based on Hadamard matrices, and show how to design the faces of modules to be maximally attractive to their intended mate, while remai...
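The selectivity property described in this abstract can be illustrated with a small sketch. The rows of a Hadamard matrix are mutually orthogonal, so if each row is read as a grid of magnet polarities (+1/-1), a face paired with its own pattern attracts on every pixel, while any two different patterns produce equal attraction and repulsion that cancel out. This is only an illustrative sketch using the standard Sylvester construction; the paper's actual encoding design may differ.

```python
def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of two)."""
    H = [[1]]
    while len(H) < n:
        # Double the matrix: [[H, H], [H, -H]]
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

# Each row is a candidate magnet polarity pattern (+1 = north up, -1 = south up).
H = hadamard(8)
for i in range(8):
    for j in range(8):
        # Pairwise "net force": sum of pixel-wise polarity products.
        net = sum(a * b for a, b in zip(H[i], H[j]))
        # A face meeting its intended mate attracts on all 8 pixels;
        # any other pairing cancels to zero net attraction.
        assert net == (8 if i == j else 0)
```

In this toy model, matched patterns are maximally attractive while all mismatched pairings are force-neutral, which is the selective self-assembly behavior the abstract describes.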
Preprint
Full-text available
This paper introduces a cube-based reconfigurable robot that utilizes an electromagnet-based actuation framework to reconfigure in three dimensions via pivoting. While a variety of actuation mechanisms for self-reconfigurable robots have been explored, they often suffer from cost, complexity, assembly and sizing requirements that prevent scaled pro...
Conference Paper
This paper contributes to a taxonomy of augmented reality and robotics based on a survey of 460 research papers. Augmented and mixed reality (AR/MR) have emerged as a new way to enhance human-robot interaction (HRI) and robotic interfaces (e.g., actuated and shape-changing interfaces). Recently, an increasing number of studies in HCI, HRI, and robo...
Article
Full-text available
In this paper, we survey the emerging design space of expandable structures in robotics, with a focus on how such structures may improve human-robot interactions. We detail various implementation considerations for researchers seeking to integrate such structures in their own work and describe how expandable structures may lead to novel forms of in...
Preprint
This paper contributes to a taxonomy of augmented reality and robotics based on a survey of 460 research papers. Augmented and mixed reality (AR/MR) have emerged as a new way to enhance human-robot interaction (HRI) and robotic interfaces (e.g., actuated and shape-changing interfaces). Recently, an increasing number of studies in HCI, HRI, and robo...
Preprint
We introduce Swarm Fabrication, a novel concept of creating on-demand, scalable, and reconfigurable fabrication machines made of swarm robots. We present ways to construct elements of fabrication machines, such as motors, elevators, tables, feeders, and extruders, by leveraging toio robots and 3D printed attachments. By combining these elements, we d...
Preprint
HapticBots introduces a novel encountered-type haptic approach for Virtual Reality (VR) based on multiple tabletop-size shape-changing robots. These robots move on a tabletop and change their height and orientation to haptically render various surfaces and objects on-demand. Compared to previous encountered-type haptic approaches like shape display...
Preprint
We present RealitySketch, an augmented reality interface for sketching interactive graphics and visualizations. In recent years, an increasing number of AR sketching tools enable users to draw and embed sketches in the real world. However, with the current tools, sketched contents are inherently static, floating in mid air without responding to the...
Preprint
RoomShift is a room-scale dynamic haptic environment for virtual reality, using a small swarm of robots that can move furniture. RoomShift consists of nine shape-changing robots: Roombas with mechanical scissor lifts. These robots drive beneath a piece of furniture to lift, move and place it. By augmenting virtual scenes with physical objects, user...
Preprint
Programming education is becoming important as demands on computer literacy and coding skills are growing. Despite the increasing popularity of interactive online learning systems, many programming courses in schools have not changed their teaching format from the conventional classroom setting. We see two research opportunities here. Students may...
Preprint
Large-scale shape-changing interfaces have great potential, but creating such systems requires substantial time, cost, space, and effort, which hinders the research community from exploring interactions beyond the scale of human hands. We introduce modular inflatable actuators as building blocks for prototyping room-scale shape-changing interfaces. Ea...
Conference Paper
We introduce shape-changing swarm robots. A swarm of self-transformable robots can both individually and collectively change their configuration to display information, actuate objects, act as tangible controllers, visualize data, and provide physical affordances. ShapeBots is a concept prototype of shape-changing swarm robots. Each robot can chang...
Conference Paper
This paper introduces LiftTiles, a modular and reconfigurable room-scale shape display. LiftTiles consist of an array of retractable and inflatable actuators that are compact (e.g., 15 cm tall) and light (e.g., 1.8 kg), while extending up to 1.5 m to allow for large-scale shape transformation. Inflatable actuation also provides a robust structure that c...
Conference Paper
In this paper, I introduce collective shape-changing interfaces, a class of shape-changing interfaces that consist of a set of discrete collective elements. Through massively parallel transformation, locomotion, and connection of individual building blocks, the overall physical structure can be dynamically changed. Given this parallel change of ind...
Preprint
We introduce shape-changing swarm robots. A swarm of self-transformable robots can both individually and collectively change their configuration to display information, actuate objects, act as tangible controllers, visualize data, and provide physical affordances. ShapeBots is a concept prototype of shape-changing swarm robots. Each robot can chang...
Conference Paper
Full-text available
We introduce MorphIO, entirely soft sensing and actuation modules for programming by demonstration of soft robots and shape-changing interfaces. MorphIO's hardware consists of a soft pneumatic actuator containing a conductive sponge sensor. This allows both input and output of three-dimensional deformation of a soft material. Leveraging this capabilit...
Preprint
This paper presents Tabby, an interactive and explorable design tool for 3D printing textures. Tabby allows texture design with direct manipulation in the following workflow: 1) select a target surface, 2) sketch and manipulate a texture with 2D drawings, and then 3) generate 3D printing textures onto an arbitrary curved surface. To enable efficien...
Conference Paper
This paper introduces Dynamic 3D Printing, a fast and reconstructable shape formation system. Dynamic 3D Printing can assemble an arbitrary three-dimensional shape from a large number of small physical elements. Also, it can disassemble the shape back to elements and reconstruct a new shape. Dynamic 3D Printing combines the capabilities of 3D print...
Conference Paper
We explore a new approach to programming swarm user interfaces (Swarm UI) by leveraging direct physical manipulation. Existing Swarm UI applications are written using a robot programming framework: users work on a computer screen and think in terms of low-level controls. In contrast, our approach allows programmers to work in physical space by dire...
Conference Paper
We present PEP (Printed Electronic Papercrafts), a set of design and fabrication techniques to integrate electronic based interactivities into printed papercrafts via 3D sculpting. We explore the design space of PEP, integrating four functions into 3D paper products: actuation, sensing, display, and communication, leveraging the expressive and tech...
Conference Paper
For people with visual impairments, tactile graphics are an important means to learn and explore information. However, raised line tactile graphics created with traditional materials such as embossing are static. While available refreshable displays can dynamically change the content, they are still too expensive for many users, and are limited in...
Article
Recent advances in program synthesis offer means to automatically debug student submissions and generate personalized feedback in massive programming classrooms. When automatically generating feedback for programming assignments, a key challenge is designing pedagogically useful hints that are as effective as the manual feedback given by teachers....
Article
For people with visual impairments, tactile graphics are an important means to learn and explore information. However, raised line tactile graphics created with traditional materials such as embossing are static. While available refreshable displays can dynamically change the content, they are still too expensive for many users, and are limited in...
Conference Paper
For massive programming classrooms, recent advances in program synthesis offer means to automatically grade and debug student submissions, and generate feedback at scale. A key challenge for synthesis-based autograders is how to design personalized feedback for students that is as effective as manual feedback given by teachers today. To understand...
Conference Paper
In large introductory programming classes, teacher feedback on individual incorrect student submissions is often infeasible. Program synthesis techniques are capable of fixing student bugs and generating hints automatically, but they lack the deep domain knowledge of a teacher and can generate functionally correct but stylistically poor fixes. We i...
Article
Texture is an essential property of physical objects that affects aesthetics, usability, and functionality. However, designing and applying textures to 3D objects with existing tools remains difficult and time-consuming; it requires proficient 3D modeling skills. To address this, we investigated an auto-completion approach for efficient texture cre...
Article
Full-text available
IDEs, such as Visual Studio, automate common transformations, such as Rename and Extract Method refactorings. However, extending these catalogs of transformations is complex and time-consuming. A similar phenomenon appears in intelligent tutoring systems where instructors have to write cumbersome code transformations that describe "common faults" t...
Conference Paper
Expert crowdsourcing marketplaces have untapped potential to empower workers' career and skill development. Currently, many workers cannot afford to invest the time and sacrifice the earnings required to learn a new skill, and a lack of experience makes it difficult to get job offers even if they do. In this paper, we seek to lower the threshold to...
Conference Paper
Full-text available
Crowdsourcing marketplaces provide opportunities for autonomous and collaborative professional work as well as social engagement. However, in these marketplaces, workers feel disrespected due to unreasonable rejections and low payments, whereas requesters do not trust the results they receive. The lack of trust and uneven distribution of power amon...

Citations

... Our work examines opportunities for making these embedded transcripts interactive, enabling and supporting access to a richer range of information to complement and extend the text. With the recent advent of large language models (LLMs), we believe the integration of AR and AI [47] will become important for interactive conversation support in AR. ...
... Farouk et al. [8] presented an app on HoloLens 2 enhancing collaboration on visualized data through Kinect-captured remote user movements. Ihara et al. [13] introduced HoloBots, a mixed-reality collaboration platform improving holographic telepresence via synchronized mobile robots, integrating Kinect and tabletop robots. ...
... As a result, these systems are not fully adaptable and deployable to real-world scenarios. Some preliminary works have explored on-demand text analysis using machine learning, such as Dually Noted [51] analyzing document structure in real-time, Augmented Math [14] extracting graphs and math equations, and SOCRAR [58] using OCR to extract key information. However, none have yet achieved comprehensive and general-purpose document enhancement. ...
... To address this, we could attach 3D object models of equipment to the avatar. For example, we can attach a virtual tennis racket to the avatar's hand like RealityCanvas [52] and rotate the racket based on the hand rotation. Also, we could detect the objects in the video using object detection techniques. ...
... Physics simulations have long been recognized as an effective way to enhance learning experiences, particularly within the classroom setting [4,19,34]. Motivated by this, researchers have continuously developed simulation applications [8,29,41,54] (e.g., [42], CircuitTUI [68]), which provide more engaging and collaborative experiences through spatial and embodied interactions. However, these existing physics simulation tools are often limited to pre-programmed and off-the-shelf simulations, which sometimes fail to meet the specific needs and challenges students face. ...
... In a paper titled Selective Self-Assembly using Re-Programmable Magnetic Pixels [69], and its extension, Mixels [70]- [72], we address these concerns. This paper introduces a method to generate highly selective encodings that can be magnetically "programmed" onto physical modules to enable them to self-assemble in chosen configurations. ...
... A variety of works [14,21,27] that use spatial projection mapping, mobile augmented reality, and mixed reality headsets have shown exciting opportunities for mixed reality and remote collaboration. On the other hand, Physical Telepresence [26], HoloBots [19], and ChameleonControl [11] demonstrate remote collaboration through shared physical interaction. ...
... And similarly, with additional training, other concrete objects could be used, for example in later iterations we trained the app to recognise multilink cubes. And recently researchers have begun experimenting with combining machine learning and augmented reality to enable users to train ANNs themselves to recognise their own gestures with familiar objects to them [35,36]. ...
... This enables opportunistic use of everyday common objects as tangible proxies for virtual assets. Additionally, Cathy et al. [11] use common household objects, such as chairs and sofas, as passive haptic props for preset scenarios in VR. ...
... By leveraging augmented visual displays, mixed reality interfaces can enhance the visual affordances and feedback of mobile devices, while keeping tangible devices as rich tactile input [31,38]. In prior works, mixed reality cross-device interfaces have been used for tasks such as extending a device's screen space [5], interacting with data visualizations [9], or working with 3D models [43]. BISHARE [46] provides a nice summary of cross-device interactions with mixed reality interfaces. ...