Figure (from Multimedia Tools and Applications): Head elevation during experiment

Source publication
Article
Full-text available
With the proliferation of consumer virtual reality (VR) headsets and creative tools, content creators are experimenting with new forms of interactive audience experience using immersive media. Understanding user attention and behaviours in virtual environments can greatly inform the creative processes in VR. We developed an abstract VR painting and...

Citations

... Mu et al. [26] have presented user attention and behaviour in a VR art encounter. With the widespread availability of consumer VR headsets and creative tools, content makers have been experimenting with novel forms of interactive audience engagement through immersive media. ...
Article
Full-text available
Artificial intelligence aims to imitate human thought, consciousness, and other aspects of cognition, much like giving a machine a human-like brain that can think and produce independently, while retaining an advantage in speed and memory over the human brain. A virtual reality interactive glove is built using a nine-axis inertial sensor and an artificial intelligence deep learning algorithm. In this research work, Product Design Interaction and Experience Based on Virtual Reality Technology (PDIE-VRT-SCGAN) is proposed. Initially, input gesture data are gathered from the Virtual Reality Experiences (VRE) dataset. The gesture data are then pre-processed using a Multi-Window Savitzky-Golay Filter (MWSGF) to reduce noise and improve the overall quality of the data. To improve overall user engagement in product design interactions with virtual reality (VR) technology, the pre-processed gesture data are then fed into an adversarial network, a Semi-Cycled Generative Adversarial Network (SCGAN). Because SCGAN does not by itself provide an optimization strategy for determining optimal parameters, FOX-inspired Optimization (FIO) is proposed to tune the weight parameters of the SCGAN, which improves the user experience in product design interaction. The efficacy of the PDIE-VRT-SCGAN method is assessed using a number of performance criteria, including tracking accuracy, frame rate, latency, rendering time, error rate, and user error. The proposed PDIE-VRT-SCGAN method attains 22.36%, 25.42% and 18.17% higher tracking accuracy, 21.26%, 15.42% and 19.27% improved latency, and 28.36%, 25.32% and 28.27% higher frame rate compared with existing methods, namely design and implementation of virtual reality interactive product software based on an artificial intelligence deep learning algorithm (DVRI-PS-AI-DL), a virtual evaluation system for product design utilizing virtual reality (VES-PD-VR), and analysis of unsatisfying user experiences with unmet psychological needs for virtual reality exergames utilizing a deep learning approach (AUUE-UP-VRE-DLA), respectively.
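The pre-processing step above rests on Savitzky-Golay smoothing of inertial gesture signals. A minimal sketch of that idea, using SciPy's standard single-window savgol_filter as a stand-in for the paper's multi-window (MWSGF) variant; the data shapes and filter settings are illustrative assumptions, not values from the paper:

```python
import numpy as np
from scipy.signal import savgol_filter

# Illustrative stand-in for the Multi-Window Savitzky-Golay Filter (MWSGF):
# smooth noisy nine-axis inertial gesture data with a single-window Savitzky-Golay pass.
# Shapes and filter settings below are assumptions for this sketch, not values from the paper.
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 500)
clean = np.stack([np.sin(t + k) for k in range(9)], axis=1)    # (samples, 9 axes)
noisy = clean + rng.normal(scale=0.15, size=clean.shape)       # simulated sensor noise

# window_length must be odd and larger than polyorder
smoothed = savgol_filter(noisy, window_length=31, polyorder=3, axis=0)

print("mean abs. error before:", np.abs(noisy - clean).mean())
print("mean abs. error after: ", np.abs(smoothed - clean).mean())
```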
... However, the recent conjunction of virtual reality with museum practice has transformed the museum context by creating highly immersive and user-centered experiences that often resemble physical museum spaces. Consequently, research on users' motivation and their levels of engagement, learning, interaction, satisfaction, etc., in these virtual environments forms a continuously updated core of the literature that requires systematic and thorough study [38][39][40][41][42][43][44]. ...
Article
Full-text available
In recent years, the digitization of cultural heritage has been favored by significant advancements in specific technologies, such as photogrammetry and three-dimensional scanning. The digital representations of artifacts, paintings, books, and collections, as well as buildings or archaeological sites, have led to the transfer of cultural organizations to the digital space. On the other hand, the rapid development of immersive technologies and the Internet of Things is expected to decisively shape virtual cultural heritage in the coming years. However, this digital transition should expand its impact on most of the population. This article aims to address the lack of a structured methodology in the design and development of inclusive virtual spaces in cultural heritage. This research introduces a holistic framework that is mainly based on the disciplines of virtual museology. The proposed methodology takes into account the advancements in extended reality and the creative industry of computer games. The multisensory approach would lead to advanced immersive experiences, while the multilayered approach of cultural heritage content would enhance accessibility in inclusive virtual spaces. Moreover, this holistic framework could provide evidence from the virtual worlds that could be applied to real cultural heritage organizations.
... Therefore, we involved a range of human-related features in this experiment. The VR painting was created by artist Goodyear using Google Tilt Brush [8]. The experimental environment was developed using a Unity3D game engine with a combination of hardware sensors and software tracking tools. ...
... The artist aims to investigate how participants split their attention among these brushstrokes. In previous work, we studied user attention modelling and eye gaze-based community generative art [8,33]. In this paper, we focus on user interactions related to walking and navigation. ...
Article
Full-text available
This paper presents a study on modelling users' free-walk mobility in a virtual reality (VR) art exhibition. The main objective is to investigate and model users’ mobility sequences during interactions with artwork in VR. We employ a range of machine learning (ML) techniques to define scenes of interest in VR, capturing user mobility patterns. Our approach utilises a long short-term memory (LSTM) model to effectively model and predict users’ future movements in VR environments, particularly in scenarios where clear walking paths and directions are not provided to participants. The LSTM model demonstrates high accuracy in predicting user movements, enabling a better understanding of audience interactions with the artwork. It opens avenues for developing new VR applications, such as community-based navigation, virtual art guides, and enhanced virtual audience engagement. The results highlight the potential for improved user engagement and effective navigation within virtual environments.
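As a rough illustration of the kind of sequence model described above, the following sketch predicts a user's next 3D position from a short window of previous positions with an LSTM in PyTorch; the layer sizes, window length, and synthetic training data are assumptions for illustration, not the authors' actual architecture:

```python
import torch
import torch.nn as nn

# Minimal sketch of an LSTM that predicts the next (x, y, z) position from the
# previous WINDOW positions. Dimensions and training setup are illustrative assumptions.
WINDOW, FEATURES, HIDDEN = 20, 3, 64

class MobilityLSTM(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=FEATURES, hidden_size=HIDDEN, batch_first=True)
        self.head = nn.Linear(HIDDEN, FEATURES)

    def forward(self, x):                 # x: (batch, WINDOW, 3)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # predict the next position from the last state

model = MobilityLSTM()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Synthetic random-walk data standing in for tracked VR positions.
trajectories = torch.cumsum(torch.randn(32, WINDOW + 1, FEATURES) * 0.05, dim=1)
inputs, targets = trajectories[:, :-1, :], trajectories[:, -1, :]

for _ in range(100):                      # short illustrative training loop
    optimiser.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimiser.step()
print("final MSE:", float(loss))
```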
... Designers are able to use a virtual painting brush (e.g., Google Tilt Brush) to implement their 3D artwork in a virtual environment (Gerry, 2017; Ho et al., 2019; Liu, 2021; McClinton et al., 2019; Otsuki et al., 2018). Mu et al. (2022) used Google Tilt Brush to create an abstract VR painting (called Caverna Coelo) and applied the built-in eye-gaze and movement-tracking techniques of the HTC VIVE Pro Eye to investigate user attention and behavior, and audience art experience towards the painting. In architecture and engineering design areas, some studies found that the 3D visualization features of VR could make VR drawing systems more acceptable to users and much easier to use than desktop-based software (Darabkh et al., 2018; de Klerk et al., 2019; Mulders et al., 2022; Zender et al., 2019). ...
... Among these challenges, we decided to address the lack of guidelines by focusing on the definition of user attention-oriented design recommendations. In fact, as highlighted in [10], the design of a VR application requires the understanding of how users approach the virtual environment and interact with the virtual contents. These issues become even more relevant when VR applications are employed for safety-related tasks, such as the training of employees in charge of managing emergency situations (e.g., earthquakes, fires, or natural disasters). ...
Article
Full-text available
In virtual reality applications, head-mounted displays allow users to explore virtual surroundings, thus creating a high sense of immersion. However, due to the novelty of the technology and the possibility of freely enjoying a 360° virtual world, users can get distracted and divert their attention from the content of the application. In this work, we define a set of guidelines for the design of virtual reality applications for enhancing the users’ attention. To the best of our knowledge, this is one of the first attempts to provide general guidelines for virtual application design based on visual attention. More specifically, we analyze the different categories of factors that contribute to the user’s responsiveness and define a set of experiments for measuring the user’s promptness with respect to visual stimuli with different features and in the presence of audio/visual distractions. Experimental tests have been carried out with 36 volunteers. The users’ reaction time has been recorded and the performed analysis allowed the definition of a set of guidelines based on individual, operational, and technological factors for the design of virtual reality applications optimized in terms of user attention. In particular, statistical tests demonstrated that the presence of distractions leads to significantly different reaction times with respect to the case of no distractions, and that users belonging to different age intervals have significantly different behaviors. Moreover, the optimal placement of objects has been identified and the impact of cybersickness has been analyzed.
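To make the reported analysis concrete, here is a small sketch comparing reaction times with and without distractions; the data are simulated and the choice of a Mann-Whitney U test is an assumption, not necessarily the statistical test used in the study:

```python
import numpy as np
from scipy import stats

# Simulated reaction times (seconds) for the two conditions described in the abstract.
# Values and the test choice are illustrative only; the paper's actual data may differ.
rng = np.random.default_rng(1)
rt_no_distraction = rng.normal(loc=0.85, scale=0.15, size=36)
rt_with_distraction = rng.normal(loc=1.10, scale=0.20, size=36)

# Non-parametric comparison of the two reaction-time distributions.
u_stat, p_value = stats.mannwhitneyu(rt_no_distraction, rt_with_distraction,
                                     alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.4f}")
```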
... Thus, we involved a range of human-related features in this experiment. The VR painting was created by artist Goodyear using Google Tilt Brush [7]. The experimental environment was developed using a Unity3D game engine with a combination of hardware sensors and software tracking tools. ...
... The artist aims to investigate how participants split their attention among these brushstrokes. In previous work, we studied user attention modelling and eye-gaze based community generative art [7,32]. In this paper, we are focusing on user interactions related to walking and navigation. ...
Preprint
Full-text available
Understanding human interactions in virtual reality (VR) can help develop intelligent applications that adapt to users' needs and enhance the user experience. The rapid growth of VR content has increased the complexity of VR environments, making their spatial characteristics more difficult to understand. While user mobility is a crucial part of users' interactions with the VR environment, the current literature still does not provide a suitable framework to interpret and model VR user mobility data. We conducted a user experiment in the context of an abstract VR painting exhibition where users are prompted to walk naturally in a physical area to explore the VR painting. Deep learning models are used to model user mobility sequences and predict their future movements while engaging with the art exhibition. Our user mobility model can support the development of new VR applications for improved user navigation and social experience in VR.
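A minimal sketch of how free-walk traces might be prepared for such a sequence model, by slicing a position trace into overlapping windows with a next-step target; the sampling rate, window length, and stride are illustrative assumptions, not the preprint's actual pipeline:

```python
import numpy as np

# Turn a continuous head-position trace (x, y, z per frame) into overlapping
# fixed-length windows plus a next-step target, ready for a sequence model.
# Window and stride values are illustrative assumptions, not the preprint's settings.
def make_windows(trace: np.ndarray, window: int = 20, stride: int = 5):
    inputs, targets = [], []
    for start in range(0, len(trace) - window - 1, stride):
        inputs.append(trace[start:start + window])
        targets.append(trace[start + window])      # the position right after the window
    return np.stack(inputs), np.stack(targets)

# Simulated 60-second walk sampled at 10 Hz, standing in for tracked VR mobility data.
rng = np.random.default_rng(2)
trace = np.cumsum(rng.normal(scale=0.02, size=(600, 3)), axis=0)

X, y = make_windows(trace)
print(X.shape, y.shape)   # (116, 20, 3) (116, 3)
```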
... This work is extended by Liebers et al. [19], who show that behavioral biometrics is also possible based solely on user movements. Mu et al. [21] analyze users' eye gaze positions and body movements in VR. They find strong indicators that some of the interpersonal differences in these two metrics are related to users' backgrounds such as personality and related skills. ...
Conference Paper
Full-text available
An essential aspect in the evaluation of Virtual Training Environments (VTEs) is the assessment of users' training success, preferably in real-time, e.g. to continuously adapt the training or to provide feedback. To achieve this, leveraging users' behavioral data has been shown to be a valid option. Behavioral data include sensor data from eye trackers, head-mounted displays, and hand-held controllers, as well as semantic data like a trainee's focus on objects of interest within a VTE. While prior works investigated the relevance of mostly one and in rare cases two behavioral data sources at a time, we investigate the benefits of the combination of three data sources. We conduct a user study with 48 participants in an industrial training task to find correlations between training success and measures extracted from different behavioral data sources. We show that all individual data sources, i.e. eye gaze position and head movement, as well as duration of objects in focus, are related to training success. Moreover, we find that simultaneously considering multiple behavioral data sources allows us to better explain training success. Further, we show that training outcomes can already be predicted significantly better than chance by only recording trainees for parts of their training. This could be used for dynamically adapting a VTE's difficulty. Finally, our work further contributes to reaching the long-term goal of substituting traditional evaluation of training success (e.g. through pen-and-paper tests) with an automated approach.
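As a rough sketch of relating behavioral measures to training success in the way described above, the example below correlates per-participant features with a binary success label and combines them in a simple classifier; the data, feature definitions, and choice of logistic regression are assumptions for illustration, not the study's actual analysis:

```python
import numpy as np
from scipy import stats
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Simulated per-participant behavioural features and training-success labels.
# Feature names, effect sizes, and the classifier are illustrative assumptions only.
rng = np.random.default_rng(3)
n = 48
gaze_dispersion = rng.normal(1.0, 0.3, n)          # spread of eye-gaze positions
head_movement = rng.normal(2.0, 0.5, n)            # total head rotation (rad)
focus_duration = rng.normal(30.0, 8.0, n)          # seconds on objects of interest
success = (focus_duration + rng.normal(0, 5, n) > 30).astype(int)

# Per-feature (point-biserial) correlation with training success.
for name, feature in [("gaze", gaze_dispersion), ("head", head_movement),
                      ("focus", focus_duration)]:
    r, p = stats.pointbiserialr(success, feature)
    print(f"{name}: r = {r:+.2f}, p = {p:.3f}")

# Combining the three sources in one classifier, evaluated by cross-validation.
X = np.column_stack([gaze_dispersion, head_movement, focus_duration])
acc = cross_val_score(LogisticRegression(max_iter=1000), X, success, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f} (chance ~0.5)")
```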
Article
The success of the metaverse depends on technological advancements as well as human-centered designs that care for human users in fully immersive virtual environments. Without substantial spatial knowledge of the virtual environment, users can feel lost or disoriented in the metaverse, which can cause distress and a poor overall user experience. This article introduces an unstuck feature that aims to guide users out of irreconcilable situations. The feature delivers persuasive user navigation using automated avatars driven by machine learning models. A user experiment with mobility and eye-gaze tracking was conducted to evaluate the effectiveness of the unstuck feature.