Figure 4
Camera coordinate frame and image coordinate frame of the pinhole camera model in OpenCV and in Unity. Both OpenCV and Unity assume that the camera principal axis is aligned with the positive Z-axis of the camera coordinate system, but the OpenCV coordinate system is right-handed (the Y-axis points downward) while the Unity coordinate system is left-handed (the Y-axis points upward). Moreover, in OpenCV the origin of the image coordinate system is the center of the upper-left image pixel, while in Unity it lies at the image center. The figure also shows the coordinate system of the quad primitive used in Unity.
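As a minimal illustration of the pixel-origin and handedness difference described in the caption, the sketch below converts an OpenCV pixel coordinate (origin at the center of the upper-left pixel, Y down) into a centered, Y-up image coordinate of the kind Unity works with. The function name and the exact centering convention are illustrative assumptions, not code from the source paper.

```python
import numpy as np

def opencv_pixel_to_unity_centered(u, v, width, height):
    """Convert an OpenCV pixel coordinate (origin at the center of the
    upper-left pixel, Y down) to a centered image coordinate
    (origin at the image center, Y up), expressed in pixels."""
    x_centered = u - (width - 1) / 2.0   # shift the origin to the image center
    y_centered = (height - 1) / 2.0 - v  # flip Y so it points upward
    return np.array([x_centered, y_centered])

# Example: a point at the geometric center of a 640x480 image maps to (0, 0)
# in the centered convention.
print(opencv_pixel_to_unity_centered(319.5, 239.5, 640, 480))  # -> [0. 0.]
```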


Source publication
Article
Simulation for surgical training is increasingly being considered a valuable addition to traditional teaching methods. 3D-printed physical simulators can be used for preoperative planning and rehearsal in spine surgery to improve surgical workflows and postoperative patient outcomes. This paper proposes an innovative strategy to build a hybrid simu...

Contexts in source publication

Context 1
... ArUco tracking expresses the pose of each marker according to the right-handed reference system of OpenCV, whereas Unity uses a left-handed convention (the Y-axis is inverted as shown in Figure 4): thus, a change-of-basis transformation is applied to transform the acquired tracking data from OpenCV convention to Unity convention. ...
Context 2
... ArUco tracking expresses the pose of each marker according to the right-handed reference system of OpenCV, whereas Unity uses a left-handed convention (the Y-axis is inverted as shown in Figure 4): thus, a change-of-basis transformation is applied to transform the acquired tracking data from OpenCV convention to Unity convention. Both OpenCV and Unity assume that the camera principal axis is aligned with the positive Z-axis of the camera coordinate system, but the OpenCV coordinate system is right-handed (the Y-axis is oriented downward) while the Unity coordinate system is left-handed (the Y-axis is oriented upward). ...
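A minimal sketch of the change-of-basis step described above, assuming the marker pose comes from OpenCV's ArUco module as a rotation vector and a translation vector in the right-handed camera frame (Y down) and must be re-expressed in a left-handed, Y-up frame like Unity's. Flipping the Y axis through a similarity transform is one common way to do this; the exact convention adopted in the paper is not reproduced here.

```python
import cv2
import numpy as np

# Change of basis from the OpenCV camera frame (right-handed, Y down)
# to a Unity-style camera frame (left-handed, Y up): flip the Y axis.
S = np.diag([1.0, -1.0, 1.0])

def aruco_pose_to_unity(rvec, tvec):
    """Convert an ArUco marker pose (rvec, tvec as returned by
    cv2.solvePnP / cv2.aruco pose estimation) into a rotation matrix and
    translation expressed in the flipped, Unity-like camera frame."""
    R_cv, _ = cv2.Rodrigues(np.asarray(rvec, dtype=np.float64).reshape(3, 1))
    t_cv = np.asarray(tvec, dtype=np.float64).reshape(3)
    R_unity = S @ R_cv @ S   # similarity transform of the rotation
    t_unity = S @ t_cv       # flip the Y component of the translation
    return R_unity, t_unity
```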
Context 3
... are modeled with the well-known pinhole camera model. Figure 4 depicts the camera coordinate frame and image coordinate frame of the pinhole camera model in OpenCV and in Unity. ...
Context 4
... asymmetric checkerboard is used to estimate cameras' intrinsic parameters offline using the Camera Calibrator application of the MATLAB Computer Vision Toolbox (Figure 4: camera coordinate frame and image coordinate frame of the pinhole camera model in OpenCV and in Unity). ...
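The intrinsic calibration mentioned here was performed offline with the MATLAB Camera Calibrator; purely as an illustration, the sketch below shows an equivalent checkerboard calibration with OpenCV, assuming a folder of calibration images and a board with a known inner-corner count and square size (both hypothetical values).

```python
import glob
import cv2
import numpy as np

pattern_size = (9, 6)   # inner corners per row/column (assumed)
square_size_mm = 10.0   # checker square size (assumed)

# 3D coordinates of the checkerboard corners in the board frame (Z = 0).
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)
objp *= square_size_mm

obj_points, img_points, image_size = [], [], None
for path in glob.glob("calibration_images/*.png"):  # hypothetical folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if found:
        obj_points.append(objp)
        img_points.append(corners)
        image_size = gray.shape[::-1]

# Intrinsic matrix K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]] and the
# lens-distortion coefficients later used to undistort the video frames.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("reprojection RMS:", rms)
print("intrinsic matrix K:\n", K, "\ndistortion coefficients:", dist.ravel())
```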

Similar publications

Article
Pedicle screw fixation (PSF) demands rigorous training to mitigate the risk of severe neurovascular complications arising from screw misplacement. This paper introduces a patient-specific phantom designed for PSF training, extending a portion of the learning process beyond the confines of the surgical room. Six phantoms of the thoracolumbar region...

Citations

... The generation of an AR scene requires the configuration of a virtual camera using the linear part of the intrinsic parameters of the corresponding real camera, to obtain the same projection model and guarantee the virtual-to-real matching. To do this in Unity, we used the "Physical Camera" component, which can simulate the linear real-world camera attributes (focal length, sensor size, and lens shift) as detailed in [29], whereas the non-linearities associated with the distortion introduced by the lens are compensated by undistorting the real camera frames before rendering them on the background of the AR image (Figure 3). The extrinsic parameters of the virtual camera are modeled using the real camera pose computed frame-by-frame by the inside-out marker-based tracking of the VOSTARS platform. ...
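As a sketch of how the linear intrinsics can be mapped onto Unity-style physical camera attributes (the exact formulas of [29] are not reproduced here): under the usual pinhole assumptions, the focal length in millimetres is fx scaled by the ratio of sensor width to image width, the sensor height follows from fy, and the lens shift encodes the offset of the principal point from the image centre in normalized units. Lens distortion is then removed from the real frames with OpenCV before they are used as the AR background. The helper names and the sign conventions below are assumptions.

```python
import cv2
import numpy as np

def unity_physical_camera_params(K, image_size_px, sensor_width_mm):
    """Map OpenCV intrinsics K = [[fx,0,cx],[0,fy,cy],[0,0,1]] to focal
    length (mm), sensor size (mm), and lens shift for a Unity-style
    physical camera. One common convention, stated as an assumption."""
    w_px, h_px = image_size_px
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]

    focal_length_mm = fx * sensor_width_mm / w_px
    # Derive the sensor height from fy so the pixel aspect ratio is kept.
    sensor_height_mm = focal_length_mm * h_px / fy
    # Lens shift: principal-point offset from the image centre, normalized
    # to the image size (sign conventions may need flipping in Unity).
    lens_shift_x = (cx - w_px / 2.0) / w_px
    lens_shift_y = (h_px / 2.0 - cy) / h_px
    return focal_length_mm, (sensor_width_mm, sensor_height_mm), (lens_shift_x, lens_shift_y)

def undistort_frame(frame, K, dist_coeffs):
    """Remove lens distortion from a real camera frame before it is
    rendered as the AR background."""
    return cv2.undistort(frame, K, dist_coeffs)

# Example with hypothetical intrinsics for a 1920x1080 camera.
K = np.array([[1400.0, 0.0, 970.0],
              [0.0, 1400.0, 535.0],
              [0.0, 0.0, 1.0]])
print(unity_physical_camera_params(K, (1920, 1080), sensor_width_mm=6.4))
```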
Article
Achieving and maintaining proper image registration accuracy is an open challenge of image-guided surgery. This work explores and assesses the efficacy of a registration sanity check method for augmented reality-guided navigation (AR-RSC), based on the visual inspection of virtual 3D models of landmarks. We analyze the AR-RSC sensitivity and specificity by recruiting 36 subjects to assess the registration accuracy of a set of 114 AR images generated from camera images acquired during an AR-guided orthognathic intervention. Translational or rotational errors of known magnitude, up to ±1.5 mm/±15.5°, were artificially added to the image set in order to simulate different registration errors. This study analyses the performance of AR-RSC when varying (1) the virtual models selected for misalignment evaluation (e.g., the model of brackets, incisor teeth, and gingival margins in our experiment), (2) the type (translation/rotation) of registration error, and (3) the level of user experience in using AR technologies. Results show that: (1) the sensitivity and specificity of the AR-RSC depend on the virtual models (globally, a median true positive rate of up to 79.2% was reached with brackets, and a median true negative rate of up to 64.3% with incisor teeth), (2) there are error components that are more difficult to identify visually, and (3) the level of user experience does not affect the method. In conclusion, the proposed AR-RSC, also tested in the operating room, could represent an efficient method to monitor and optimize the registration accuracy during the intervention, but special attention should be paid to the selection of the AR data chosen for the visual inspection of the registration accuracy.
... In Unity, the "Physical Camera" component is enabled, and the focal length, sensor size, and lens shift are defined according to the intrinsic parameters of the laparoscope as detailed in [12] to simulate its projection model. A quad primitive, whose size and position relative to the virtual camera are defined according to the camera sensor size [12], is generated as a child of the virtual camera and used to render the laparoscopic video feed (Fig. 1c). ...
... In Unity, the "Physical Camera" component is enabled, and the focal length, sensor size, and lens shift are defined according to the intrinsic parameters of the laparoscope as detailed in [12] to simulate its projection model. A quad primitive, whose size and position relative to the virtual camera are defined according to the camera sensor size [12], is generated as a child of the virtual camera and used to render the laparoscopic video feed (Fig. 1c). The virtual camera component is configured to render both the laparoscopic video feed and the VR models of the anatomy (aorta, vena cava, ureters described in the previous paragraph), and it is anchored to the distal end of the virtual laparoscope. ...
... Acquired images were automatically processed in MATLAB according to the protocol described in [12] to estimate the 2D target visualization error (TVE2D): the offset, expressed in pixels, between virtual and real objects in the image plane (i.e., between the centroids of the real landmarks and their virtual counterparts). For each pair of images, a rough estimate of the 3D target visualization error expressed in mm, i.e., the TVE3D, was derived by proportioning the landmark size in pixels with its actual size in mm. ...
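An illustrative reading of the error metrics described above, assuming the centroids of matched real and virtual landmarks have already been extracted from the images: TVE2D is the per-landmark pixel offset, and a rough TVE3D in millimetres is obtained by scaling that offset with the ratio between the landmark's known physical size and its apparent size in pixels. This is not the MATLAB protocol of [12], only a sketch of the computation it describes.

```python
import numpy as np

def tve_2d(real_centroids_px, virtual_centroids_px):
    """2D target visualization error: pixel offset between matched real
    and virtual landmark centroids (arrays of shape (N, 2))."""
    diff = np.asarray(real_centroids_px) - np.asarray(virtual_centroids_px)
    return np.linalg.norm(diff, axis=1)

def tve_3d_estimate(tve2d_px, landmark_size_px, landmark_size_mm):
    """Rough mm-scale error obtained by proportioning the pixel offset
    with the known physical size of the landmark."""
    mm_per_px = landmark_size_mm / landmark_size_px
    return tve2d_px * mm_per_px

# Example with two landmarks of 10 mm physical size imaged at ~40 px.
errors_px = tve_2d([[120, 85], [300, 210]], [[123, 88], [298, 207]])
print(tve_3d_estimate(errors_px, landmark_size_px=40.0, landmark_size_mm=10.0))
```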
Conference Paper
Augmented Reality (AR) can avoid some of the drawbacks of Minimally Invasive Surgery and may provide opportunities for developing innovative tools to assist surgeons. In laparoscopic surgery, the achievement of easy and sufficiently accurate registration is an open challenge. This is particularly true in procedures, such as laparoscopic abdominal sacrocolpopexy, where there is a lack of a sufficient number of visible anatomical landmarks to be used as a reference for registration. In an attempt to address the above limitations, we developed and preliminarily tested a constrained manual procedure based on the identification of a single anatomical landmark in the laparoscopic images and the intraoperative measurement of the laparoscope orientation. Tests in a rigid in-vitro environment show good accuracy (median error of 2.4 mm obtained in about 4 min) and good preliminary feedback from the technical staff who tested the system. Further experimentation in a more realistic environment is needed to validate these positive results. Clinical Relevance - This paper provides a new registration method for the development of AR educational videos and AR-based navigation systems for laparoscopic interventions.
... An augmented version was developed by Condino et al. (2021) with an in-house synthetic spine made from acrylonitrile butadiene styrene (ABS), which is 3D-printed using fused deposition modeling (FDM). The synthetic spine is overlaid with pedicle markers, which can be seen using a VR handset. ...
Article
This paper presents a review of surgical simulators developed to enhance the learning of surgical procedures that involve bone, ranging from the musculoskeletal system (orthopedics) to the skull (ENT and neurosurgery). The paper highlights the specific challenges of representing surgical training in extended reality, along with the latest advances. The study gathers journal articles and conference proceedings from various database sources (bibliographic databases and online search engines) that fulfill a predetermined eligibility criterion. From the search, 185 publications were found, but only 144 met the inclusion criteria. Surgical simulators emerge as a promising alternative to aid residents in surgical training. The review encompasses surgical procedures performed on the craniomaxillofacial region, joints, limbs, and spine. The study was partially supported by internal grant STG/19/047 from KU Leuven.
... In their study on AR-based applications for STEM subject learning, Mystakidis et al. [87] identified five instructional models, distinguished by the instructional approach applied to subjects' learning: experiential, activity-based, presentation, cooperative/collaborative, and discovery instructional strategies. According to this distinction, five of the studies reviewed [27,68,77,84,85] applied the experiential instructional design strategy, in which the subject takes advantage of his or her prior experience and applies it in a simulated environment, while only one study [42] followed the activity-based instructional design strategy, in which subjects were guided by an expert in carrying out the training activity. ...
... The same research group [27] also proposed a method to build a hybrid simulator for the fixation of pedicle screws based on 3D-printed patient-specific spine models and AR and virtual fluoroscopy visualization. A patient-specific spine phantom and surgical tools, both equipped with markers for optical tracking, and a calibration cube were used. ...
... An AR projector was not used in any of the reviewed studies to superimpose virtual content over physical space. Although this alternative allows the user to observe the AR content with the naked eye, without additional instrumentation, in most of the reviewed applications a 2D display was used, whether a smartphone [37,39,40,57,71,83], a tablet [76], or a PC [27,45,47–50,52–54,58–61,63,69,70,79,80,82]. This choice is justified by the fact that the virtual content was not designed for real-time interaction with the user and/or real-world objects, but mostly to increase the amount of information available at the same time. ...
Article
Background: Augmented Reality (AR) represents an innovative technology to improve data visualization and strengthen human perception. Among Human–Machine Interaction (HMI) domains, medicine can benefit most from the adoption of these digital technologies. In this perspective, the literature on orthopedic surgery techniques based on AR was evaluated, focusing on identifying the limitations and challenges of AR-based healthcare applications, to support the research and the development of further studies. Methods: Studies published from January 2018 to December 2021 were analyzed after a comprehensive search of the PubMed, Google Scholar, Scopus, IEEE Xplore, Science Direct, and Wiley Online Library databases. In order to improve the review reporting, the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guidelines were used. Results: The authors selected sixty-two articles meeting the inclusion criteria, which were categorized according to the purpose of the study (intraoperative, training, rehabilitation) and according to the surgical procedure used. Conclusions: AR has the potential to improve orthopedic training and practice by providing an increasingly human-centered clinical approach. This review can guide further research addressing problems related to hardware limitations, the lack of accurate registration and tracking systems, and the absence of security protocols.
... Among all the papers accepted in this Special Issue, three focused on extended reality (XR) applications and hybrid simulation for the healthcare sector [4,10,11]. ...
... Each one of these papers investigated a different aspect of the application of these technologies (VR, AR, MR, and hybrid simulation) in medicine, in particular: introducing VR and gamification in traditional patient treatment [4], using AR and hybrid simulation in medical training [10], and reviewing the current state of the art of AR/MR applications in healthcare simulation [11]. ...
... A hybrid simulator for the training of pedicle screw fixation in the spine was proposed in [10]. In this paper, Condino et al. describe a system for the preoperative planning and rehearsal of spine surgery, with the goal of improving both surgical workflows and postoperative patient outcomes. ...
Article
Serious games are games in which the main goal is not entertainment, but a serious purpose ranging from the acquisition of knowledge to interactive training, to name just a few [...]
... Most commonly, three-dimensional printed models of patient-specific vertebral bodies are used to train surgeons in the skills required for spine surgery [26,28–30]. More recently, a few mixed-reality simulators have been presented in the literature for spine surgery training, which use the virtual environment to model the physiological responses while the physical model of the surgical area supports the haptic device in providing realistic feedback [31,32]. These simulators aim to incorporate the best aspects of both types of trainers for a spine fusion procedure, as mentioned in a study by Coelho and Defino, which presents the potential of such systems [33]. ...
Article
Most surgical simulators leverage virtual or bench models to simulate reality. This study proposes and validates a method for the workspace configuration of a surgical simulator that utilizes a haptic device for interaction with a virtual model and a bench model to provide additional tactile feedback based on planned surgical manoeuvres. Numerical analyses were completed to determine the workspace and position of a haptic device, relative to the bench model, used in the surgical simulator, and the determined configuration was validated using device limitations and user data from surgical and non-surgical users. For the validation, surgeons performed an identical surgery on a cadaver prior to using the simulator, and their trajectories were then compared to the determined workspace for the haptic device. The configuration of the simulator was determined to be appropriate through workspace analysis and the collected user trajectories. Statistical analyses suggest differences in trajectories between the participating surgeons that were not affected by the imposed haptic workspace. This study therefore demonstrates a method to optimally position a haptic device with respect to a bench model while meeting the manoeuvrability needs of a surgical procedure. The validation method identified the workspace position and user trajectories towards an ideal configuration of a mixed-reality simulator.
... For these reasons, in [25,26], we proposed hybrid simulation platforms for training in orthopaedic procedures (i.e., pedicle screw fixation and hip arthroplasty, respectively), combining 3D-printed patient-specific bone models with AR functionalities. More specifically, in [25], we performed qualitative tests to evaluate the workload and usability of the Microsoft HoloLens for our orthopaedic simulator, considering both visual and audio perception and interaction/ergonomics issues, and the results obtained encourage the use of the proposed hybrid approach for orthopaedic open surgery. ...
Article
Cryosurgery is a technique of growing popularity involving tissue ablation under controlled freezing. Technological advancement of devices along with surgical technique improvements have turned cryosurgery from an experimental to an established option for treating several diseases. However, cryosurgery is still limited by inaccurate planning based primarily on 2D visualization of the patient’s preoperative images. Several works have been aimed at modelling cryoablation through heat transfer simulations; however, most software applications do not meet some key requirements for clinical routine use, such as high computational speed and user-friendliness. This work aims to develop an intuitive platform for anatomical understanding and pre-operative planning by integrating the information content of radiological images and cryoprobe specifications either in a 3D virtual environment (desktop application) or in a hybrid simulator, which exploits the potential of the 3D printing and augmented reality functionalities of Microsoft HoloLens. The proposed platform was preliminarily validated for the retrospective planning/simulation of two surgical cases. Results suggest that the platform is easy and quick to learn and could be used in clinical practice to improve anatomical understanding, to make surgical planning easier than the traditional method, and to strengthen the memorization of surgical planning.
... In recent years, the medical sector has been positively impacted by developments in robotics, nanotechnologies, additive manufacturing (AM)/3D printing (3DP), virtual reality (VR), augmented reality (AR), haptic training simulators, sensors, and wearables [5–7]. All these technologies offer innovative and efficient solutions for a broad range of medical applications, supporting specialists' efforts towards personalized medicine, making diagnosis more accurate, improving healthcare systems and services, improving patients' quality of life, democratizing access to medical care, and providing new perspectives on medical training [8–12]. ...
Article
As standard practice in orthopedic surgery, the information gathered by analyzing 2D Computed Tomography (CT) images is used for patient diagnosis and surgical planning. Lately, these virtual slices have become the input for generating 3D virtual models using DICOM viewers, facilitating spatial orientation and diagnosis. Virtual Reality (VR) and 3D printing (3DP) technologies are also reported for use in anatomy visualization, medical training, and diagnosis. However, it has not yet been investigated whether surgeons consider that the advantages offered by 3DP and VR outweigh their development efforts. Moreover, no comparative evaluation for understanding surgeons' preferences in using these investigation tools has been performed so far. Therefore, in this paper, a pilot usability test was conducted to collect surgeons' opinions. 3D models of the knee, hip, and foot were displayed using a DICOM 3D viewer, two VR environments, and 3D-printed replicas. The adequacy of these tools for diagnosis was comparatively assessed in three case scenarios; the time for completing the diagnosis tasks was recorded and questionnaires were filled in. The time for preparing the models for VR and 3DP, the resources needed, and the associated costs were presented in order to provide surgeons with the whole context. Results showed a preference for using the desktop DICOM viewer with 3D capabilities, along with the information provided by the Unity-based VR solution for visualizing the virtual model from angles that are challenging to analyze on a computer screen. 3D-printed replicas were considered more useful for physically simulating the surgery than for diagnosis. For the VR and 3DP models, the lack of information on bone quality was considered an important drawback. The following order of using the tools was preferred: DICOM viewer, followed by Unity VR and 3DP.