Fig 2 - uploaded by Ernesto Damiani
Ivan Sutherland's HMD [31]


Source publication
Article
This paper surveys the current state-of-the-art of technology, systems and applications in Augmented Reality. It describes work performed by many different research groups, the purpose behind each new Augmented Reality system, and the difficulties and problems encountered when building some Augmented Reality applications. It surveys mobile augmente...

Contexts in source publication

Context 1
... built a prototype of his vision, which he described in 1955 in "The Cinema of the Future", named Sensorama, which predated digital computing [64]. Next, Ivan Sutherland [65] invented the head-mounted display in 1966. In 1968, Sutherland was the first to create an augmented reality system using an optical see-through head-mounted display (Fig. 2) [31]. In 1975, Myron Krueger created Videoplace, a room that allowed users to interact with virtual objects for the first time. Later, Tom Caudell and David Mizell from Boeing coined the phrase Augmented Reality while helping workers assemble wires and cables for an aircraft [65]. They also started discussing the advantages of ...
Context 2
... of such applications include WikitudeDrive (Fig. 20) [69], a GPS-like application that allows users to keep their eyes on the road while glancing at the GPS; Firefighter 360 (Fig. 21), which has an entertainment purpose, permitting the user to fight a virtual fire like a real firefighter; and Le Bar Guide (Fig. 22), which has a navigational function to guide the user to the nearest bar that serves Stella Artois beer. Websites such as Mashable, the Social Media Guide [44] and iPhoneNess [34] have compiled lists of the best augmented reality applications for iPhone, and we encourage interested readers to have a look at ...
Context 4
... is still in its infancy, and as such, future possible applications are infinite. Advanced research in AR includes the use of head-mounted displays and virtual retinal displays for visualization purposes, and the construction of controlled environments containing any number of sensors and actuators [65]. The MIT Media Lab project "Sixth Sense" (Fig. 23) [49] is the best example of AR research. It suggests a world where people can interact with information directly without requiring the use of any intermediate device. Other current research includes Babak Parviz's AR contact lens (Fig. 24) [5] as well as DARPA's [16] contact lens project (Fig. 23) [70], and multiple MIT Media Lab research applications such as My-Shopping Guide [20] and TaPuMa [48]. Parviz's contact lens opens the door to an environment where information can only be viewed by the user. Of course, this can also be done by using glasses as opposed to contact lenses, but the advantage in both cases over using a cell ...
Context 7
... of technology and keep researching as we see the potential grow. However, with augmented reality, it will be very important for developers to remember that AR aims at simplifying the user's life by enhancing and augmenting the user's senses, not interfering with them. For instance, in the comments following Babak Parviz's contact lens article (Fig. 24) [5], readers suggested "tapping the optic nerve" or "plugging in to the optic nerves and touch and smell receptors", arguing that these would eventually be a "more marketable approach" and a "much more elegant solution". Although the commentators do realize that, as of today, research simply does ...

Similar publications

Article
This paper presents a piezoelectric-driven stepping rotary actuator based on the inchworm motion. With the help of nine piezoelectric stacks and the flexure hinges, the designed actuator can realize large rotary ranges and high rotary speed with high accuracy. Three kinds of working units that compose the actuator are described and calculated: the...
Article
Purpose Although high-performance work practices (HPWPs) are considered to have a strong influence over organizational performance, researchers are not unanimous about the exact mechanism through which the impact of HPWPs translates into organizational performance. The purpose of this paper is to explore two explanatory theories (job characteristics...

Citations

... Motion copy empowers an untrained person to be depicted in videos dancing like a professional dancer, acting like a Kung Fu star, and playing basketball like an NBA player. Correspondingly, motion copy finds its applications in a wide spectrum of scenarios including animation production [1], [2], augmented reality [3], [4], and social media entertainment [5]. Interestingly, the source and target persons might be greatly different in body shape, appearance, and race. ...
Preprint
    Human motion copy is an intriguing yet challenging task in artificial intelligence and computer vision, which strives to generate a fake video of a target person performing the motion of a source person. The problem is inherently challenging due to the subtle human-body texture details to be generated and the temporal consistency to be considered. Existing approaches typically adopt a conventional GAN with an L1 or L2 loss to produce the target fake video, which intrinsically necessitates a large number of training samples that are challenging to acquire. Meanwhile, current methods still have difficulties in attaining realistic image details and temporal consistency, which unfortunately can be easily perceived by human observers. Motivated by this, we try to tackle the issues from three aspects: (1) We constrain pose-to-appearance generation with a perceptual loss and a theoretically motivated Gromov-Wasserstein loss to bridge the gap between pose and appearance. (2) We present an episodic memory module in the pose-to-appearance generation to propel continuous learning that helps the model learn from its past poor generations. We also utilize geometrical cues of the face to optimize facial details and refine each key body part with a dedicated local GAN. (3) We advocate generating the foreground in a sequence-to-sequence manner rather than a single-frame manner, explicitly enforcing temporal consistency. Empirical results on five datasets (iPER, ComplexMotion, SoloDance, Fish, and Mouse) demonstrate that our method is capable of generating realistic target videos while precisely copying motion from a source video. Our method significantly outperforms state-of-the-art approaches and gains 7.2% and 12.4% improvements in PSNR and FID respectively.
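The abstract reports its gains in PSNR, a standard fidelity metric for generated frames. As a point of reference (this is not the authors' code, only the textbook formula 10·log10(MAX²/MSE) over a flat pixel sequence), a minimal pure-Python sketch:

```python
import math

def psnr(pixels_a, pixels_b, max_val=255.0):
    """Peak signal-to-noise ratio between two equal-length pixel sequences.

    Higher is better; identical inputs give infinity.
    """
    if len(pixels_a) != len(pixels_b) or not pixels_a:
        raise ValueError("inputs must be non-empty and of equal length")
    mse = sum((a - b) ** 2 for a, b in zip(pixels_a, pixels_b)) / len(pixels_a)
    if mse == 0:
        return float("inf")  # no distortion at all
    return 10.0 * math.log10(max_val ** 2 / mse)
```

In practice PSNR is computed per frame over full images (e.g. with scikit-image's `peak_signal_noise_ratio`); this flattened version only illustrates the formula behind the reported numbers.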
    ... Augmented reality (AR) technology has recently been widely utilized in a variety of fields, including industry, military, healthcare, education, and entertainment [1][2][3][4]. One of the most popular applications of this technology is head-up displays (HUDs), which offer a realistic and immersive experience that combines three-dimensional (3D) virtual contents with the real scenes [5][6][7]. ...
    ... The size of the expanded eye-box is still not sufficient due to the limited number of virtual SLM replicas. However, this research is significant in that an expanded eye-box of the size required by the system can be implemented by increasing the number of virtual SLM replicas that contribute to eye reference points according to Equation (4). ...
    ... However, since the algorithm considers the depth of the reconstructed 3D image to be a variable, it is capable of generating 3D images at multiple depths over the entire range. To expand the eye-box while maintaining maximum FOV at the optimal viewing distance, we calculate the CGH based on the position of the eye shifted within the expanded eye-box calculated in Equation (4). When the center of the pupil is located on the optical axis, the wavefield of the target image at the retinal plane can be expressed accordingly. ...
    Article
    Augmented reality (AR) technology has been widely applied across a variety of fields, with head-up displays (HUDs) being one of its prominent uses, offering immersive three-dimensional (3D) experiences and interaction with digital content and the real world. AR-HUDs face challenges such as limited field of view (FOV), small eye-box, bulky form factor, and absence of accommodation cues, often forcing trade-offs between these factors. Recently, optical waveguides based on the pupil replication process have attracted increasing attention as optical elements for their compact form factor and exit-pupil expansion. Despite these advantages, current waveguide displays struggle to integrate visual information with real scenes because they do not produce accommodation-capable virtual content. In this paper, we introduce a lensless accommodation-capable holographic system based on a waveguide. Our system aims to expand the eye-box at the optimal viewing distance that provides the maximum FOV. We devised a formalized CGH algorithm based on a bold assumption and two constraints and successfully performed numerical observation simulation. In optical experiments, accommodation-capable images with a maximum horizontal FOV of 7.0 degrees were successfully observed within an expanded eye-box of 9.18 mm at an optimal observation distance of 112 mm.
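The headline numbers (7.0° horizontal FOV at a 112 mm observation distance) are consistent with the elementary aperture geometry FOV = 2·atan(w / 2d). This is only an illustrative sketch: the paper's waveguide optics are far more involved, and the ~13.7 mm aperture width below is back-computed for illustration, not a figure taken from the paper.

```python
import math

def horizontal_fov_deg(aperture_mm, distance_mm):
    """Full horizontal angle subtended by an aperture of the given width
    when viewed from the given distance (thin-aperture geometry)."""
    return 2.0 * math.degrees(math.atan(aperture_mm / (2.0 * distance_mm)))

# An ~13.7 mm wide exit aperture viewed at 112 mm subtends roughly 7 degrees.
```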
    ... In healthcare, it has been used to enhance engagement, motivation, and learning outcomes [23]. Additionally, VR offers immersive and interactive environments for skill acquisition and simulation-based training [4]. ...
    Conference Paper
    The global emergence of the COVID-19 pandemic has placed an unprecedented strain on healthcare systems worldwide, demanding accurate and widespread testing methods, particularly the utilization of nasopharyngeal swabs for detecting SARS-CoV-2. Proficiency in this critical procedure among healthcare professionals is paramount to ensure both test accuracy and safety. Nevertheless, conventional training methods face numerous constraints, including high costs, limited availability of trainers, and the inherent risk of infection transmission. In response to these challenges, this study endeavors to explore a novel approach by integrating incentive-based learning techniques within immersive 3D virtual training environments tailored for nasopharyngeal swabbing. The primary aim of this investigation is to assess the impact of this innovative approach in comparison to conventional 3D virtual training methods on several crucial dimensions, including knowledge acquisition, cognitive load, skill development, and user engagement. To achieve this, the study capitalizes on cutting-edge technologies like virtual reality (VR) to provide a dynamic and realistic training experience. In essence, this research seeks to provide valuable insights into the effective integration of incentive-based learning strategies within virtual reality-based healthcare training. The specific focus of this study lies in enhancing proficiency in the nasopharyngeal swabbing technique, a skill of paramount importance during the ongoing COVID-19 pandemic. By pushing the boundaries of educational technology and leveraging the power of motivation, this research aspires to contribute significantly to healthcare education in these challenging times.
    ... [3] On the other hand, AR allows users to keep seeing the real environment, but overlays virtual elements, and MR combines and allows real and virtual environments to coexist, with the user interacting with both of them at the same time. [4] MR offers spatial flexibility for interacting with virtual objects in real time in a more natural way, and is a technology that frees up hands, which can be particularly useful in situations where users need to keep their hands free for other tasks. [5,6] Thanks to their unique properties, immersive technologies have an outstanding potential for a wide range of applications: manufacturing and assembly tasks, where they can be used to improve the efficiency and accuracy of processes with real-time guidance and assistance; customer service and sales, since they can provide customers with product information and support in real time to increase sales; or education and training, since they can provide a more engaging and effective learning experience than traditional methods. ...
    Article
    In a scenario in which the manufacturing of high‐performance, safe batteries on an unprecedented large scale is crucial for the energy transition and fight against climate change, research laboratories and cell production industries are facing challenges due to the lack of efficient data management and training tools. In this context, the use of intelligent devices plays an important role on the path towards the optimization of the manufacturing process and the enhancement of the battery performance while reducing production costs. In this Concept, we present an innovative Mixed Reality tool for efficient data collection and training in real‐time in battery research laboratories and battery manufacturing pilot lines, which runs on Microsoft HoloLens 2 glasses. We report a deep analysis on its ergonomic and usability aspects, and describe how we solved the problems found during its development. Thanks to this tool, users can collect data while keeping their hands free and receive advice in real time to design and build batteries with tailored properties. This optimizes data management in battery manufacturing environments. Now, thanks to our Mixed Reality application, users can collect data in the place of work, save this data automatically on a server and exploit it to receive advice and feedback to support their decision‐making and learning of the manufacturing process.
    ... In augmented reality (AR) systems, information and graphics such as virtual objects, multimedia, and text are superimposed on the real-world view of users to provide instant additional information for better user interaction with the environment [30]. Today, people can easily access AR systems on their smartphones; an example of such applications is Pokémon Go. ...
    Article
    Nowadays, with the increase of high-rise buildings, emergency evacuation is an indispensable part of urban environment management. Due to various disaster incidents that have occurred in indoor environments, research has concentrated on ways to deal with the different difficulties of indoor emergency evacuation. Although global navigation satellite systems (GNSSs) such as the global positioning system (GPS) come in handy in outdoor spaces, they are not of much use in enclosed places, where satellite signals cannot penetrate easily. Therefore, other approaches must be considered for pedestrian navigation to cope with the indoor positioning problem. Another problem in such environments is obtaining information about the building's indoor space. The majority of the studies have used prepared maps of the building, which limits their methodology to that specific study area. However, in this study we have proposed an end-to-end method that takes advantage of the building information model (BIM) of the building, and is thereby applicable to every structure that has an equivalent BIM. Moreover, we have used a mixture of Wi-Fi fingerprinting and the pedestrian dead reckoning (PDR) method, with relatively higher accuracy compared to other similar methods, for navigating the user to the exit point. For implementing PDR, we used the sensors in smartphones to calculate user steps and direction. In addition, the navigational information was superimposed on the smartphone screen using augmented reality (AR) technology, thus communicating the direction information in a user-friendly manner. Finally, the AR mobile emergency evacuation application developed was assessed with a sample audience. After an experience with the app, they filled out a questionnaire designed in the System Usability Scale (SUS) format. The evaluation results showed that the app achieved acceptable usability.
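The pedestrian dead-reckoning step described in the abstract, advancing a position estimate by one detected step along the current heading, reduces to simple trigonometry. A minimal sketch, where the default step length and the headings are hypothetical values (real systems estimate both from accelerometer and compass data):

```python
import math

def pdr_step(x_m, y_m, heading_rad, step_length_m=0.7):
    """Advance a 2D position estimate (metres) by one detected step
    taken along heading_rad (0 = +x axis, pi/2 = +y axis)."""
    return (x_m + step_length_m * math.cos(heading_rad),
            y_m + step_length_m * math.sin(heading_rad))

# Walk three steps along +x, then one step along +y.
pos = (0.0, 0.0)
for heading in (0.0, 0.0, 0.0, math.pi / 2):
    pos = pdr_step(pos[0], pos[1], heading)
```

Dead reckoning drifts over time, which is why the study fuses it with Wi-Fi fingerprinting for absolute position fixes.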
    ... It can be applied in very diverse fields, including domains such as medicine, manufacturing, military, or education, among others [11]. In the field of marketing, AR has emerged as one of the most relevant interactive technologies, rapidly increasing its fields of application [12][13][14] and the devices that support it, including smartphones, tablets, projectors, or interactive screens [15]. Therefore, AR is an effective method for influencing human perception, and the assessment of human perception of virtual things in AR should be an interesting and important research domain [11]. ...
    Article
    The emergence of new display technologies can change the perception of product design features and their assessment. Previous studies are limited to comparisons between a few technologies; the real product is considered only occasionally. This work compares the perceptions of 10 design features in two household products, shown by five display technologies (image rendering, 360° rotation, and augmented, immersive, and non-immersive virtual reality), and also with the real product. Results show that the 360° rotation provides the best perception for the most important features. However, the perception of aesthetic features is better achieved with i_VR. Other global results vary depending on the product. Finally, interaction with the real product shows a quite different perception for many features. The results contribute to the understanding of product perceptions influenced by different displays, comparing them with perceptions generated through real interaction. It is expected that the conclusions will be used to optimize the presentation of product features.
    ... However, the concept of AR is not new. It was introduced by Morton Heilig in the 1950s and is referred to as "Sensorama" (Carmigniani et al., 2011; Uruthiralingam and Rea, 2020b). Given the earlier lack of technical progress in smartphones, users could not easily access augmented reality until up-to-date software and hardware innovations arrived on the market (Kosa et al., 2019). ...
    ... AR is not a new technology. Morton Heilig invented it in 1950 under the name "Sensorama," and it has long been used in cinematography (Carmigniani et al., 2011; Uruthiralingam and Rea, 2020a). It was difficult for individuals to utilize augmented reality because of the dearth of technological innovation in smart devices, until recent advancements in software and hardware enabled them to do so (Kosa et al., 2019). ...
    Article
    Purpose Augmented reality (AR) adoption has boomed globally in recent years. The prospect of AR seamlessly integrating digital information into the actual environment has proven to be a challenge for academics and industry, as they endeavor to understand and predict the influence on users' perceptions, adoption intentions and usage. This study investigates the factors affecting consumers’ behavioral intention to adopt AR technology in shopping malls by offering the mobile technology acceptance model (MTAM). Design/methodology/approach This conceptual framework is based on mobile self-efficacy, rewards, social influence and enjoyment alongside the existing MTAM constructs. A self-administered questionnaire, constructed from measurement items adapted from previous research, elicited 311 usable responses from mobile respondents who had recently used AR technology in shopping malls. The analysis was performed using SmartPLS 3.0. Findings Based on the findings of the study, it was found that, aside from factors such as mobile usefulness, ease of use and social influence, the remaining independent variables had the most significant impact on adopting AR technologies. Considering the limitations of this study, the paper concludes by discussing the significant implications and suggesting avenues for future research. Originality/value To better investigate mobile AR app adoption in Pakistan’s shopping malls, the researchers modified the newly proposed MTAM model by incorporating mobile self-efficacy theory, social influence, rewards and perceived enjoyment. However, the extended model has not been extensively studied in previous research. This study is the first to examine the variables that affect an individual’s intention to accept mobile AR apps by using a novel extended MTAM.
    ... Three-dimensional reconstruction provides digital 3D models representing a real-world scene, promoting the development of augmented reality applications such as autonomous driving and digital twins [1][2][3][4][5]. On the other hand, point cloud data is fundamental for building a digital 3D model, because 3D point cloud registration is the core technology for achieving 3D reconstruction by providing stereoscopic models [6,7]. ...
    Article
    3D point cloud registration is a crucial technology for 3D scene reconstruction and has been successfully applied in various domains, such as smart healthcare and intelligent transportation. Through theoretical analysis, we find that geometric structural relationships are essential for 3D point cloud registration: a registration method achieves excellent performance only when it fuses local and global features with geometric structure information. Based on these findings, we propose a 3D point cloud registration method based on embedding geometric structure into the attention mechanism (GraM), which extracts local features of non-critical points and global features of corresponding points containing geometric structure information. From the local and global features, a simple regression operation can obtain the transformation matrix of a point cloud pair, thereby avoiding semantic features that ignore the geometric structure relationship. GraM surpasses the state-of-the-art results by 0.548° and 0.915° in relative rotation error on ModelNet40 and LowModelNet40, respectively.
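The final operation this abstract describes, recovering a rigid transformation from point correspondences, has a classical closed-form least-squares solution; the paper's contribution (GraM's geometry-aware attention) concerns how good correspondences and features are obtained, not this step. A pure-Python sketch of the alignment step in 2D for brevity, not the paper's method:

```python
import math

def rigid_2d(src, dst):
    """Least-squares rotation angle and translation aligning src points
    onto dst points, assuming known one-to-one correspondences."""
    n = len(src)
    cxs = sum(p[0] for p in src) / n; cys = sum(p[1] for p in src) / n
    cxd = sum(p[0] for p in dst) / n; cyd = sum(p[1] for p in dst) / n
    c = s = 0.0
    for (xs, ys), (xd, yd) in zip(src, dst):
        ax, ay = xs - cxs, ys - cys   # centred source point
        bx, by = xd - cxd, yd - cyd   # centred destination point
        c += ax * bx + ay * by        # sum of dot products
        s += ax * by - ay * bx        # sum of cross products
    theta = math.atan2(s, c)
    # Translation maps the rotated source centroid onto the destination centroid.
    tx = cxd - (cxs * math.cos(theta) - cys * math.sin(theta))
    ty = cyd - (cxs * math.sin(theta) + cys * math.cos(theta))
    return theta, tx, ty
```

In 3D the same idea is solved with an SVD (the Kabsch/Umeyama algorithm), which is what registration pipelines typically run once correspondences are fixed.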
    ... These new technologies contributed to the widespread use of Augmented Reality (AR) technologies (Carmigniani et al., 2011). ...
    Chapter
    This book chapter is dedicated to reviewing current trends in wearable antennas. The second part of this review covers a brief history and the applications of wearable devices. Then, the antenna, an essential element of wearable devices, is reviewed. In particular, some of the different fabrication processes existing in the literature and the materials used for antenna fabrication are explored.
    ... MR has been investigated in a variety of settings pertaining to medical education. Many early studies focused on teaching relevant anatomy; more recently, studies have evaluated the use of MR in procedural training and in streaming clinical ward-rounds to medical students [25][26][27][28][29][30][31][32][33]. ...
    Article
    Background Mixed reality offers potential educational advantages in the delivery of clinical teaching. Holographic artefacts can be rendered within a shared learning environment using devices such as the Microsoft HoloLens 2. In addition to facilitating remote access to clinical events, mixed reality may provide a means of sharing mental models, including the vertical and horizontal integration of curricular elements at the bedside. This study aimed to evaluate the feasibility of delivering clinical tutorials using the Microsoft HoloLens 2 and the learning efficacy achieved. Methods Following receipt of institutional ethical approval, tutorials on preoperative anaesthetic history taking and upper airway examination were facilitated by a tutor who wore the HoloLens device. The tutor interacted face to face with a patient and two-way audio-visual interaction was facilitated using the HoloLens 2 and Microsoft Teams with groups of students who were located in a separate tutorial room. Holographic functions were employed by the tutor. The tutor completed the System Usability Scale, the tutor, technical facilitator, patients, and students provided quantitative and qualitative feedback, and three students participated in semi-structured feedback interviews. Students completed pre- and post-tutorial, and end-of-year examinations on the tutorial topics. Results Twelve patients and 78 students participated across 12 separate tutorials. Five students did not complete the examinations and were excluded from efficacy calculations. Student feedback contained 90 positive comments, including the technology’s ability to broadcast the tutor’s point-of-vision, and 62 negative comments, where students noted issues with the audio-visual quality, and concerns that the tutorial was not as beneficial as traditional in-person clinical tutorials. The technology and tutorial structure were viewed favourably by the tutor, facilitator and patients. 
Significant improvement was observed between students’ pre- and post-tutorial MCQ scores (mean 59.2% vs 84.7%, p < 0.001). Conclusions This study demonstrates the feasibility of using the HoloLens 2 to facilitate remote bedside tutorials which incorporate holographic learning artefacts. Students’ examination performance supports substantial learning of the tutorial topics. The tutorial structure was agreeable to students, patients and tutor. Our results support the feasibility of offering effective clinical teaching and learning opportunities using the HoloLens 2. However, the technical limitations and costs of the device are significant, and further research is required to assess the effectiveness of this tutorial format against in-person tutorials before a wider roll-out of this technology can be recommended.