Figure 1 - uploaded by Kun-Chang Yu
Computer setup during live bronchoscopy. The left panel is the 3D Surface tool, which gives a 3D visualization of the airway tree, with airway centerlines depicted in red and the preplanned ROI path in blue. The ROI for this case is the red 3D region at the bottom right of the airway tree (left lower lobe). The bronchoscope's current position is represented by the yellow cylinder and green needle. The right panel is the CT-Video Match tool. Its top section displays the live bronchoscopic video side by side with the endoluminal rendering of the airway as seen by the virtual bronchoscope at its current position within generation 3. The bottom section, highlighted by a green box, is a "frozen" snapshot of the previous registration at generation 2. The path to the currently selected ROI is rendered in blue in the VB view and is overlaid onto the bronchoscopic video during registration.

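As a rough illustration of the route overlay described in the caption: once the VB view has been registered to the video, the preplanned 3D route can be projected through the bronchoscope's camera model and drawn over the live frame. The sketch below is a minimal example under assumptions (pinhole camera, made-up intrinsics, OpenCV drawing); the function names and values are hypothetical, and this is not the guidance system's actual implementation.

```python
# Minimal sketch (assumptions, not the system's implementation): project a
# preplanned 3D ROI route into a registered bronchoscope view and draw it
# in blue over the video frame, using a pinhole camera model.
import numpy as np
import cv2

def project_route(route_mm, R, t, K):
    """Project 3D route points (N x 3, CT/world coordinates in mm) to pixels.

    R (3x3), t (3,): world-to-camera pose, e.g. from CT-video registration.
    K (3x3): intrinsic matrix of the calibrated bronchoscope camera.
    """
    cam = (R @ route_mm.T).T + t            # world -> camera coordinates
    cam = cam[cam[:, 2] > 1e-3]             # keep points in front of the camera
    uv = (K @ cam.T).T
    return (uv[:, :2] / uv[:, 2:3]).astype(int)   # perspective divide

def overlay_route(frame_bgr, route_px, color=(255, 0, 0)):
    """Draw the projected route as a blue polyline on the video frame."""
    for p, q in zip(route_px[:-1], route_px[1:]):
        cv2.line(frame_bgr, (int(p[0]), int(p[1])), (int(q[0]), int(q[1])),
                 color, thickness=2)
    return frame_bgr

if __name__ == "__main__":
    K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])  # hypothetical
    R, t = np.eye(3), np.zeros(3)
    route = np.array([[0.0, 0.0, 30.0], [2.0, 1.0, 45.0], [5.0, 3.0, 60.0]])
    frame = np.zeros((480, 640, 3), dtype=np.uint8)
    overlay_route(frame, project_route(route, R, t, K))
```

In the actual workflow, R and t would come from the CT-video registration step and the route points from the preplanned centerline path to the ROI.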

Source publication
Article
Full-text available
Past work has shown that guidance systems help improve both the navigation through airways and final biopsy of regions of interest via bronchoscopy. We have previously proposed an image-based bronchoscopic guidance system. The system, however, has three issues that arise during navigation: 1) sudden disorienting changes can occur in endoluminal vie...

Contexts in source publication

Context 1
... live guidance, a new source of data is available in the form of bronchoscopic video. Using the airway surfaces derived from the MDCT data, VB views from within the airway tree are presented. To provide live guidance, the VB views need to be synchronized with the bronchoscopic video, along with supplemental information along the route. Fig. 1 gives an example display. The 3D Surface tool provides the 3D position of the bronchoscope within the airway tree. The CT-Video Match tool is a general guidance tool that provides visualization of the bronchoscopic video, the VB view, and supplemental guidance ...
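Conceptually, synchronizing the VB view with the video amounts to finding the virtual camera pose whose endoluminal rendering best matches the current frame. The snippet below is only a generic sketch of that idea, assuming an external renderer and a simple normalized-cross-correlation score over a local pose search; it is not the registration method used by the system described here.

```python
# Generic sketch of VB/video synchronization (not the system's method):
# keep the candidate camera pose whose virtual rendering best matches the
# current bronchoscopic video frame.
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two grayscale images."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def register_frame(video_gray, render_vb, pose, perturbations):
    """Return the candidate pose whose VB rendering best matches the frame.

    render_vb(pose) -> grayscale endoluminal rendering, same size as the frame.
    perturbations   -> iterable of small pose offsets around the current estimate
                       (assumes an additive pose parameterization for simplicity).
    """
    best_pose, best_score = pose, ncc(video_gray, render_vb(pose))
    for dp in perturbations:
        candidate = pose + dp
        score = ncc(video_gray, render_vb(candidate))
        if score > best_score:
            best_pose, best_score = candidate, score
    return best_pose, best_score
```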
Context 2
... The technician registers the VB view with the bronchoscopic video and freezes the registered view for future reference (e.g., lower portion of CT-Video Match tool in Fig. ...
Context 3
... updated navigation system uses an upgraded CT-Video Match tool that has the following features to enhance the visualization (Fig. ...
Context 4
... the pre-processing stage, the tracheobronchial airway tree was first segmented, followed by centerline analysis. Then, a foreign object within the airway tree was defined as an ROI in consultation with a physician. 3D surfaces were generated for the airway tree and the ROI, as shown in Fig. 1. Automatic path planning was run to find an optimum route to the ROI. Finally, a pre-bronchoscopy report was generated with manually adjusted bifurcation view sites for improved views at the bifurcation ...
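The automatic path-planning step mentioned in this excerpt can be illustrated, purely as a sketch under assumptions, as a shortest-path search over the centerline tree toward the view site closest to the ROI. The function below uses networkx and hypothetical inputs; it is not the authors' planner.

```python
# Hypothetical sketch of route planning to an ROI over airway centerlines
# (assumptions only, not the authors' algorithm).
import numpy as np
import networkx as nx

def plan_route(centerlines, positions, trachea_node, roi_centroid_mm):
    """centerlines: nx.Graph of centerline view sites (nodes) and branches (edges).
    positions: dict mapping each node to its 3D position in mm (np.ndarray).
    Returns the list of view sites from the trachea toward the ROI."""
    # 1) Terminal view site: the centerline point nearest to the ROI centroid.
    target = min(positions,
                 key=lambda n: np.linalg.norm(positions[n] - roi_centroid_mm))
    # 2) Route: shortest path along the tree, weighted by Euclidean edge length.
    def edge_len(u, v, _data):
        return float(np.linalg.norm(positions[u] - positions[v]))
    return nx.shortest_path(centerlines, trachea_node, target, weight=edge_len)
```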

Citations

... 22 Side-by-side display of VB and video, pathway to the lesion, target visualization, and 2 orthogonal CT images. 19,115-117,119 Study on humans to demonstrate how their platform finds a path to the lesion. Comment: The group specializes in finding routes to the lesion (path planning). ...
... 22,116 Their platform GUI displays a side-by-side view of the VB and the video, pathway guidance to the lesion, target visualization, and 2 orthogonal CT images. 19,115-117,119 We could not find any clinical trials, but in their papers they refer to studies on humans that demonstrate how their platform works and how to find a path to the lesion. ...
Article
Background: Navigated bronchoscopy uses virtual 3-dimensional lung model visualizations created from preoperative computed tomography images, often in synchronization with the video bronchoscope, to guide a tool to peripheral lesions. Navigated bronchoscopy has developed rapidly since the introduction of virtual bronchoscopy with integrated electromagnetic sensors in the late 1990s. The purposes of this review are to give an overview and update of the technological components of navigated bronchoscopy, an assessment of its clinical usefulness, and a brief assessment of the commercial platforms for navigated bronchoscopy. Methods: We performed a literature search with keywords relevant to navigation and bronchoscopy and iterated over the reference lists of relevant papers, with emphasis on the last 5 years. Results: The paper presents an overview of the components necessary for performing navigated bronchoscopy, an assessment of the diagnostic accuracy of different approaches, and an analysis of the commercial systems. We were able to identify 4 commercial platforms and 9 research and development groups with considerable activity in the field. Finally, on the basis of our findings and our own experience, we provide a discussion of navigated bronchoscopy with a focus on the next steps of development. Conclusions: The literature review showed that peripheral diagnostic accuracy has improved with navigated bronchoscopy compared with traditional bronchoscopy. We believe there is room for improvement in the diagnostic success rate through further refinement of the methods, approaches, and tools used in navigated bronchoscopy.
... Hybrid systems, which combine electromagnetic external tracking with image-based methods. Image-based registration system for tracked bronchoscopy (83). Image courtesy of William Higgins, PhD, Penn State University. ...
Article
Full-text available
The trend toward minimally invasive surgical interventions has created new challenges for visualization during surgical procedures. However, at the same time, the introduction of high-definition digital endoscopy offers the opportunity to apply methods from computer vision to provide visualization enhancements such as anatomic reconstruction, surface registration, motion tracking, and augmented reality. This review provides a perspective on this rapidly evolving field. It first introduces the clinical and technical background necessary for developing vision-based algorithms for interventional applications. It then discusses several examples of clinical interventions where computer vision can be applied, including bronchoscopy, rhinoscopy, transnasal skull-base neurosurgery, upper airway interventions, laparoscopy, robotic-assisted surgery, and Natural Orifice Transluminal Endoscopic Surgery (NOTES). It concludes that the currently reported work is only the beginning. As the demand for minimally invasive procedures rises, computer vision in surgery will continue to advance through close interdisciplinary work between interventionists and engineers.
Article
Background and objective: Recent advances in neural networks and temporal image processing have provided new results and opportunities for vision-based bronchoscopy tracking. However, such progress has been hindered by the lack of comparable experimental conditions and shared data. We address this issue by sharing a novel synthetic dataset, which allows for a fair comparison of methods. Moreover, as incorporating deep learning advances into temporal structures has not yet been explored in bronchoscopy navigation, we investigate several neural network architectures for learning temporal information at different levels of subject personalization, providing new insights and results. Methods: Using our own shared synthetic dataset for bronchoscopy navigation and tracking, we explore deep learning temporal-information architectures (recurrent neural networks and 3D convolutions), which have not been fully explored in bronchoscopy tracking, with a special focus on network efficiency by using a modern backbone (EfficientNet-B0) and ShuffleNet blocks. Finally, we provide a study of different losses for rotation tracking and of population modeling schemes (personalized vs. population) for bronchoscopy tracking. Results: Temporal-information architectures provide performance improvements in both position and angle estimation. Additionally, the population-scheme analysis illustrates the benefits of offering a personalized model, while the loss analysis indicates the benefits of using an adequate metric, improving results. We finally compare against a state-of-the-art model and obtain better results in performance, with 12.2% and 18.7% improvements for position and rotation respectively, and in memory, with around a 67.6% reduction in consumption. Conclusions: The proposed advances in temporal-information architectures, loss configuration, and population-scheme definition improve the current state of the art in bronchoscopy analysis. Moreover, the publication of the first synthetic dataset enables further progress in bronchoscopy research by providing proper comparison grounds among methods.
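For orientation only, the family of models this abstract describes (a lightweight CNN backbone feeding a temporal module that regresses per-frame camera position and rotation) might be sketched as below. The GRU, head sizes, quaternion parameterization, and loss weighting are illustrative assumptions, not the paper's exact configuration.

```python
# Rough sketch (assumptions, not the paper's exact model): EfficientNet-B0
# features fed to a GRU that regresses per-frame camera position (3D) and
# rotation (unit quaternion) for vision-based bronchoscopy tracking.
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0

class TemporalPoseNet(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.backbone = efficientnet_b0(weights=None)
        self.backbone.classifier = nn.Identity()   # expose 1280-dim features
        self.temporal = nn.GRU(1280, hidden, batch_first=True)
        self.pos_head = nn.Linear(hidden, 3)       # x, y, z
        self.rot_head = nn.Linear(hidden, 4)       # quaternion

    def forward(self, frames):                     # frames: (B, T, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.backbone(frames.flatten(0, 1)).view(b, t, -1)
        seq, _ = self.temporal(feats)              # temporal information
        pos = self.pos_head(seq)
        rot = nn.functional.normalize(self.rot_head(seq), dim=-1)
        return pos, rot

def pose_loss(pos, rot, pos_gt, rot_gt, w_rot=1.0):
    """Position: MSE. Rotation: 1 - |<q, q_gt>|, insensitive to quaternion sign."""
    pos_term = nn.functional.mse_loss(pos, pos_gt)
    rot_term = (1.0 - (rot * rot_gt).sum(-1).abs()).mean()
    return pos_term + w_rot * rot_term
```

A recurrent module over per-frame features is one of the temporal options the abstract mentions; stacking frames and using 3D convolutions would be the other.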