Article
PDF Available

Initialization, noise, singularities, and scale in height ridge traversal for tubular object centerline extraction


Abstract

The extraction of the centerlines of tubular objects in two and three-dimensional images is a part of many clinical image analysis tasks. One common approach to tubular object centerline extraction is based on intensity ridge traversal. In this paper, we evaluate the effects of initialization, noise, and singularities on intensity ridge traversal and present multiscale heuristics and optimal-scale measures that minimize these effects. Monte Carlo experiments using simulated and clinical data are used to quantify how these "dynamic-scale" enhancements address clinical needs regarding speed, accuracy, and automation. In particular, we show that dynamic-scale ridge traversal is insensitive to its initial parameter settings, operates with little additional computational overhead, tracks centerlines with subvoxel accuracy, passes branch points, and handles significant image noise. We also illustrate the capabilities of the method for medical applications involving a variety of tubular structures in clinical data from different organs, patients, and imaging modalities.
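As a concrete illustration of the basic mechanism (the general idea only, not the dynamic-scale algorithm evaluated in the paper), the following Python sketch advances one step along an intensity ridge: the Hessian eigenvector with the least-negative eigenvalue gives the local ridge (tangent) direction, and the point is then re-centered by maximizing intensity in the cross-sectional plane. The function names, step size, and smoothing scale are illustrative assumptions.

    # Minimal sketch of one intensity-ridge-traversal step (illustrative only; the
    # paper's dynamic-scale heuristics and stopping criteria are not reproduced).
    import numpy as np
    from scipy.ndimage import gaussian_filter, map_coordinates

    def precompute_hessian(volume, sigma=1.0):
        """Second-derivative volumes of a Gaussian-smoothed image (for clarity, not speed)."""
        smoothed = gaussian_filter(volume.astype(float), sigma)
        grads = np.gradient(smoothed)
        hess = [list(np.gradient(g)) for g in grads]
        return smoothed, hess

    def ridge_step(smoothed, hess, point, prev_dir, step=0.5):
        """Move one step along the ridge tangent, then re-center on the intensity
        maximum in the plane spanned by the two cross-sectional eigenvectors."""
        z, y, x = np.round(point).astype(int)
        H = np.array([[hess[i][j][z, y, x] for j in range(3)] for i in range(3)])
        w, v = np.linalg.eigh(H)                  # eigenvalues in ascending order
        tangent = v[:, 2]                         # least-negative curvature = ridge direction
        if np.dot(tangent, prev_dir) < 0:         # keep a consistent direction of travel
            tangent = -tangent
        candidate = np.asarray(point, float) + step * tangent
        n1, n2 = v[:, 0], v[:, 1]                 # cross-sectional directions
        offsets = [a * n1 + b * n2 for a in (-0.5, 0.0, 0.5) for b in (-0.5, 0.0, 0.5)]
        values = [map_coordinates(smoothed, (candidate + o).reshape(3, 1), order=1)[0]
                  for o in offsets]
        return candidate + offsets[int(np.argmax(values))], tangent

Starting from a seed point and an initial direction, ridge_step would be iterated until a termination criterion is met.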
... Cerebrovascular imaging such as X-ray, computed tomography (CT), and magnetic resonance imaging (MRI) can non-invasively provide morphometric information on cerebral vessels. The detection of arteries' centerlines is essential for the extraction of geometric information of the cerebral arteries [1][2][3], such as their tortuosity, thickness, and spatial variations [4][5][6]. It is related to the detection of bifurcations of interest for the labeling of the intracranial arteries [7][8][9]. ...
... Comparison between Method 1 (DFS algorithm) and Method 4 (proposed algorithm). Comparison between Method 2 (Dijkstra algorithm) and Method 4 (proposed algorithm). Comparison between Method 3 (A* algorithm) and Method 4 (proposed algorithm). ...
Article
Full-text available
Centerline tracking is useful in performing segmental analysis of vessel tortuosity in angiography data. However, a highly tortuous artery can produce multiple centerlines due to over-segmentation of the artery, resulting in inaccurate path-finding results when using a shortest path-finding algorithm. In this study, the internal carotid arteries (ICAs) from three-dimensional (3D) time-of-flight magnetic resonance angiography (TOF MRA) data were used to demonstrate the effectiveness of a new path-finding method. The method is based on a series of depth-first searches (DFSs) with randomly different orders of neighborhood searches and produces an appropriate path connecting the two endpoints in the ICAs. It was compared with three existing methods, which were (a) DFS with a sequential order of neighborhood search, (b) the Dijkstra algorithm, and (c) the A* algorithm. The path-finding accuracy was evaluated by counting the number of successful paths. The method resulted in an accuracy of 95.8%, outperforming the three existing methods. In conclusion, the proposed method has been shown to be more suitable as a path-finding procedure than the existing methods, particularly in cases where there is more than one centerline resulting from over-segmentation of a highly tortuous artery.
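To make the propagation idea concrete, here is a hedged Python sketch of a depth-first search over a binary vessel mask with a randomly shuffled neighbor order, repeated several times. The rule used below for selecting among the resulting paths (keeping the shortest) is an assumption for illustration, not necessarily the criterion used in the study.

    # Illustrative sketch (not the paper's code): repeated DFS with random
    # neighbor ordering on a 26-connected binary vessel mask.
    import random
    import numpy as np

    def dfs_random_order(mask, start, goal, rng):
        """Iterative DFS over foreground voxels with a shuffled neighbor order."""
        offsets = [(dz, dy, dx) for dz in (-1, 0, 1) for dy in (-1, 0, 1)
                   for dx in (-1, 0, 1) if (dz, dy, dx) != (0, 0, 0)]
        stack, parent = [start], {start: None}
        while stack:
            cur = stack.pop()
            if cur == goal:                      # backtrack to recover the path
                path = []
                while cur is not None:
                    path.append(cur)
                    cur = parent[cur]
                return path[::-1]
            nbrs = [(cur[0] + o[0], cur[1] + o[1], cur[2] + o[2]) for o in offsets]
            rng.shuffle(nbrs)                    # random neighborhood order
            for n in nbrs:
                if (n not in parent
                        and all(0 <= n[i] < mask.shape[i] for i in range(3))
                        and mask[n]):
                    parent[n] = cur
                    stack.append(n)
        return None

    def find_path(mask, start, goal, n_tries=20, seed=0):
        """Run several randomized DFSs and keep the shortest resulting path
        (this selection rule is an assumption for illustration)."""
        rng = random.Random(seed)
        paths = [dfs_random_order(mask, start, goal, rng) for _ in range(n_tries)]
        paths = [p for p in paths if p is not None]
        return min(paths, key=len) if paths else None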
... [6][7][8][9][10][11] In addition, an image resampling step can be used to reduce the image resolution [8] or to obtain an image with isotropic voxel spacing [12]. For tubular structure extraction, the models can be divided into three categories: point-based, path-based, and tree-based methods. ...
... Another propagation mechanism involves searching for the candidate centerline point in the area surrounding the current location. Aylward et al. [12] detect the next centerline point by finding the local maximum voxel intensity value on the plane shifted from the current point and perpendicular to the local centerline direction. Instead of determining the next point within a plane, Friman et al. [16] search for the next point among candidates evenly spaced on a sphere surrounding the current point by fitting a tubular template to the local image patch. ...
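A hedged sketch of the plane-search propagation step described above: sample intensities on a small grid in the plane perpendicular to the current tangent, shifted one step ahead, and move to the brightest sample. The grid spacing, search radius, and step length are arbitrary illustrative values, not parameters from the cited methods.

    # Illustration of searching the next centerline point in the plane
    # perpendicular to the local direction (parameters are assumptions).
    import numpy as np
    from scipy.ndimage import map_coordinates

    def next_point_on_normal_plane(img, point, tangent, step=1.0, radius=2.0, n=9):
        img = np.asarray(img, float)
        t = np.asarray(tangent, float)
        t /= np.linalg.norm(t)
        # two unit vectors spanning the plane perpendicular to the tangent
        a = np.array([1.0, 0.0, 0.0])
        if abs(np.dot(a, t)) > 0.9:
            a = np.array([0.0, 1.0, 0.0])
        u = np.cross(t, a)
        u /= np.linalg.norm(u)
        v = np.cross(t, u)
        center = np.asarray(point, float) + step * t   # plane shifted along the tangent
        offs = np.linspace(-radius, radius, n)
        best_val, best_pt = -np.inf, center
        for du in offs:                                # brute-force search on the grid
            for dv in offs:
                p = center + du * u + dv * v
                val = map_coordinates(img, p.reshape(3, 1), order=1)[0]
                if val > best_val:
                    best_val, best_pt = val, p
        return best_pt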
Article
Full-text available
Background The curved planar reformation (CPR) technique is one of the most commonly used methods in clinical practice to locate coronary arteries in medical images. Purpose The artery centerline is the cornerstone for the generation of the CPR image. Here, we describe the development of a new fully automatic artery centerline tracker with the aim of increasing the efficiency and accuracy of the process. Methods We propose a COronary artery Centerline Tracker (COACT) framework which consists of an ostium point finder (OPFinder) model, an intersection point detector (IPDetector) model and a set of centerline tracking strategies. The output of OPFinder is the ostium points. The function of the IPDetector is to predict the intersections of a sample sphere and the centerlines. The centerline tracking process starts from two ostium points detected by the OPFinder, and combines the results of the IPDetector with a series of strategies to gradually reconstruct the coronary artery centerline tree. Results Two coronary CT angiography (CCTA) datasets were used to validate the models. Dataset1 contains 160 cases (32 for test and 128 for training) and dataset2 contains 70 cases (20 for test and 50 for training). The results show that the average distance between the ostium points predicted by the OPFinder and the manually annotated ostium points was 0.88 mm, which is similar to the differences between the results obtained by two observers (0.85 mm). For the IPDetector, the average overlap of the predicted and ground truth intersection points was 97.82% and this is also close to the inter‐observer agreement of 98.50%. For the entire coronary centerline tree, the overlap between the results obtained by COACT and the gold standard was 94.33%, which is slightly lower than the inter‐observer agreement, 98.39%. Conclusions We have developed a fully automatic centerline tracking method for CCTA scans and achieved a satisfactory result. The proposed algorithms are also incorporated in the medical image analysis platform TIMESlice (https://slice‐doc.netlify.app) for further studies.
... The annotations of cerebrovascular centerline and radius were available online. The intracranial vessel annotations were obtained via an open-source toolkit, TubeTK [21], and we referred to the preparation described in [18]. The voxel spacing of the TOF-MRA images is 0.5 × 0.5 × 0.8 mm³ with a volume size of 448 × 448 × 128 voxels. ...
... The vascular centerlines in the MIDAS dataset are not manually labeled by domain experts but are automatically generated [21]. It cannot be guaranteed that the centerlines of all cerebral vessels are traversed. A chief physician with years of clinical experience confirmed the omission of vessels in Figure 10. ...
Article
Full-text available
Background Cerebrovascular segmentation is a crucial step in the computer-assisted diagnosis of cerebrovascular pathologies. However, accurate extraction of cerebral vessels from time-of-flight magnetic resonance angiography (TOF-MRA) data is still challenging due to their complex topology and slender shape. Purpose Existing deep learning-based approaches pay more attention to the skeleton and ignore the contour, which limits the segmentation performance on cerebrovascular structures. We aim to weight the contour of brain vessels in shallow features when concatenating them with deep features. This helps to obtain more accurate cerebrovascular details and narrows the semantic gap between multilevel features. Methods This work proposes a novel framework for priority extraction of contours in cerebrovascular structures. We first design a neighborhood-based algorithm to generate the ground truth of the cerebrovascular contour from the original annotations, which introduces useful shape information for the segmentation network. Moreover, we propose an encoder-dual decoder-based contour attention network (CA-Net), which consists of the dilated asymmetry convolution block (DACB) and the Contour Attention Module (CAM). The ancillary decoder uses the DACB to obtain cerebrovascular contour features under the supervision of contour annotations. The CAM transforms these features into a spatial attention map that increases the weight of contour voxels in the main decoder to better restore the vessel contour details. Results CA-Net is thoroughly validated on two publicly available datasets, and the experimental results demonstrate that our network outperforms the competitors for cerebrovascular segmentation. We achieved average Dice similarity coefficients (DSC) of 68.15% and 99.92% on the natural and synthetic datasets, respectively. Our method segments cerebrovascular structures with better completeness. Conclusions We propose a new framework, comprising contour annotation generation and a cerebrovascular segmentation network, that better captures tiny vessels and improves vessel connectivity.
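The neighborhood-based contour generation can be pictured with a short sketch: a voxel belongs to the contour if it is foreground in the annotation and has at least one background neighbor, which is equivalent to subtracting a one-voxel erosion from the mask. This is only one plausible reading of the described step, not the authors' released code.

    # Minimal sketch of contour ground-truth generation from a binary annotation
    # (an assumed reading of the neighborhood-based rule, not the paper's code).
    import numpy as np
    from scipy.ndimage import binary_erosion

    def contour_from_mask(mask):
        """Return the one-voxel-thick contour of a 3D binary segmentation."""
        mask = mask.astype(bool)
        eroded = binary_erosion(mask, structure=np.ones((3, 3, 3)))
        return mask & ~eroded   # foreground voxels touching the background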
... The Hessian matrix is a real symmetric square matrix composed of the second partial derivatives of a multivariate function. Its eigenvalues and eigenvectors can be used to characterize the properties of specific structures (point structures/line structures) [25]. In particular, the eigenvalues and eigenvectors of the Hessian matrix can be used to describe whether a local structure is linear or tubular: ...
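For a bright tubular structure, the commonly used eigenvalue pattern (as in Hessian-based vesselness filters) is one eigenvalue near zero along the vessel axis and two strongly negative eigenvalues across it. The sketch below computes the smoothed Hessian eigenvalues per voxel and applies that test; the threshold and smoothing scale are illustrative assumptions, not values from the cited work.

    # Illustrative per-voxel Hessian eigenvalue test for bright tubes
    # (written for clarity, not for memory efficiency).
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def hessian_eigenvalues(volume, sigma=1.0):
        """Per-voxel eigenvalues of the Gaussian-smoothed Hessian, sorted by magnitude."""
        smoothed = gaussian_filter(volume.astype(float), sigma)
        grads = np.gradient(smoothed)
        H = np.empty(volume.shape + (3, 3))
        for i, gi in enumerate(grads):
            for j, gij in enumerate(np.gradient(gi)):
                H[..., i, j] = gij
        eig = np.linalg.eigvalsh(H)                        # ascending per voxel
        order = np.argsort(np.abs(eig), axis=-1)
        return np.take_along_axis(eig, order, axis=-1)     # |l1| <= |l2| <= |l3|

    def bright_tube_mask(volume, sigma=1.0, eps=0.25):
        """Rough tubularity test: |l1| small, l2 and l3 strongly negative."""
        l1, l2, l3 = np.moveaxis(hessian_eigenvalues(volume, sigma), -1, 0)
        return (np.abs(l1) < eps * np.abs(l3)) & (l2 < 0) & (l3 < 0)

For dark vessels on a bright background, the signs of the two dominant eigenvalues flip.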
Article
Full-text available
Saltpan extraction is vital for coastal resource utilization and production management. However, it is challenging to extract saltpans, even by visual inspection, because of their spatial and spectral similarities with aquaculture ponds. Saltpans are composed of crystallization and evaporation ponds. From whole images, existing saltpan extraction algorithms could only extract part of the saltpans, i.e., the crystallization ponds. Meanwhile, evaporation ponds could not be efficiently extracted by spectral analysis alone, degrading saltpan extraction. In addition, manual intervention was required. Thus, it is essential to study automatic saltpan extraction over the whole image. To address the abovementioned problems, this paper proposes a novel method combining an amendatory saltpan index (ASI) with local spatial parallel similarity (ASI-LSPS) for extracting coastal saltpans. To highlight saltpans and aquaculture ponds in coastal water, the Hessian matrix is exploited. Then, the new amendatory saltpan index (ASI) is proposed to extract crystallization ponds while reducing the negative influence of turbid water and dams. Finally, a new local parallel similarity criterion is proposed to extract evaporation ponds. Landsat-8 OLI images of Tianjin and Dongying, China, were used in the experiments. Experiments show that ASI can reach at least 70% intersection over union (IOU) and 78% Kappa for the extraction of crystallization ponds in saltpans. Moreover, experiments also demonstrate that ASI-LSPS can reach at least 82% IOU and 89% Kappa for saltpan extraction, at least 13% and 17% better than the comparison algorithms in IOU and Kappa, respectively. Furthermore, the ASI-LSPS algorithm has the advantage of operating automatically on the whole image. Thus, this study can help in coastal saltpan management and the scientific utilization of coastal resources.
Preprint
Full-text available
Magnetic resonance angiography (MRA) performed at ultra-high magnetic field provides a unique opportunity to study the arteries of the living human brain at the mesoscopic level. From this, we can gain new insights into the brain's blood supply and vascular disease affecting small vessels. However, for quantitative characterization and precise representation of human angioarchitecture to, for example, inform blood-flow simulations, detailed segmentations of the smallest vessels are required. Given the success of deep learning-based methods in many segmentation tasks, we here explore their application to high-resolution MRA data, and address the difficulty of obtaining large data sets of correctly and comprehensively labelled data. We introduce VesselBoost, a vessel segmentation package, which utilizes deep learning and imperfect training labels for accurate vasculature segmentation. Combined with an innovative data augmentation technique, which leverages the resemblance of vascular structures, VesselBoost enables detailed vascular segmentations.
Article
Full-text available
Background: Segmentation of coronary arteries in computed tomography angiography (CTA) images plays a key role in the diagnosis and treatment of coronary-related diseases. However, manually analyzing the large amount of data is time-consuming, and interpreting this data requires the prior knowledge and expertise of radiologists. Therefore, an automatic method is needed to separate coronary arteries from a given CTA dataset. Methods: Firstly, an anisotropic diffusion filter was employed to smooth the noise while preserving the vessel boundaries. The coronary skeleton was then extracted using a two-step process based on the intensity of the coronary. In the first step, the thick vessel skeleton was extracted by clustering, improved vesselness filtering and region growing, while in the second step, the thin vessel skeleton was extracted by the height ridge traversal method guided by the cylindrical model. Next, the vesselness measure, representing vessel a priori information, was incorporated into the local region active contour model based on the vessel geometry. Finally, the initial contour of the active contour model was generated using the coronary artery skeleton for effective segmentation of the three-dimensional (3D) coronary arteries. Results: Experimental results on chest CTA images show that the method is able to segment coronary arteries effectively with an average precision, recall and dice similarity coefficient (DSC) of 86.64%, 91.26% and 79.13%, respectively, and has a good performance in thin vessel extraction. Conclusions: The method does not require manual selection of vessel seeds or setting of initial contours, and allows for the extraction of a successful coronary artery skeleton and eventual effective segmentation of the coronary arteries.
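As a rough illustration of the preprocessing half of such a pipeline, the sketch below applies edge-preserving anisotropic diffusion followed by a multiscale Hessian-based (Frangi) vesselness filter and a threshold to obtain a candidate vessel mask. The specific filters, parameters, and threshold are assumptions for illustration and are not taken from the cited work.

    # Sketch of noise smoothing plus vesselness filtering for candidate vessels
    # (assumed parameter values; not the cited pipeline's implementation).
    import numpy as np
    import SimpleITK as sitk
    from skimage.filters import frangi

    def vessel_candidates(volume, threshold=0.05):
        # anisotropic diffusion: smooth noise while preserving vessel boundaries
        img = sitk.GetImageFromArray(volume.astype(np.float32))
        img = sitk.CurvatureAnisotropicDiffusion(img, timeStep=0.0625,
                                                 conductanceParameter=3.0,
                                                 numberOfIterations=5)
        smoothed = sitk.GetArrayFromImage(img)
        # multiscale Hessian-based vesselness (bright tubes on a dark background)
        vesselness = frangi(smoothed, sigmas=(1, 2, 3), black_ridges=False)
        return vesselness > threshold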
Chapter
Cardiovascular complications and death are potential outcomes of aortic valve disease, which is a severe medical condition that also carries a significant economic burden. The study of fluid mechanics is crucial to understanding the development, progression, and treatment of cardiovascular and aortic valve disease. Technological advancements in imaging methods and patient-specific computational modeling have enabled clinicians to gain more detailed information about blood flow patterns in both healthy individuals and those with disease. This information can be used to obtain non-invasive metrics before and after interventions, which can help in selecting appropriate treatments and ultimately improve patient outcomes. Incorporating information about flow physics into the clinical practice can further enhance current medical knowledge. This chapter will focus on the integration of medical imaging with computational modeling, which will allow for faster modeling, improved data accuracy, and earlier detection of cardiovascular and valvular anomalies. The use of machine learning will also be explored as a means of developing patient-specific diagnostic and predictive tools for characterizing and assessing cardiovascular outcomes. The goal of the chapter is to provide an overview of these approaches and their potential to support decision-making during important clinical milestones in the management of aortic valve disease.
Article
Automatic rib labeling and anatomical centerline extraction are common prerequisites for various clinical applications. Prior studies either use in-house datasets that are inaccessible to the community, or focus on rib segmentation, which neglects the clinical significance of rib labeling. To address these issues, we extend our prior dataset (RibSeg) on the binary rib segmentation task to a comprehensive benchmark, named RibSeg v2, with 660 CT scans (15,466 individual ribs in total) and annotations manually inspected by experts for rib labeling and anatomical centerline extraction. Based on RibSeg v2, we develop a pipeline including deep learning-based methods for rib labeling, and a skeletonization-based method for centerline extraction. To improve computational efficiency, we propose a sparse point cloud representation of CT scans and compare it with standard dense voxel grids. Moreover, we design and analyze evaluation metrics to address the key challenges of each task. Our dataset, code, and model are available online to facilitate open research at https://github.com/M3DV/RibSeg.
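Two of the data-handling ideas mentioned above can be sketched briefly: converting a binary rib mask into a sparse point cloud of occupied voxels, and extracting a centerline skeleton by 3D thinning. The function names and the use of scikit-image's thinning are illustrative assumptions, not the RibSeg v2 implementation.

    # Illustrative conversion of a binary mask to a sparse point cloud and a skeleton.
    import numpy as np
    from skimage.morphology import skeletonize   # handles 3D thinning in recent scikit-image

    def to_point_cloud(binary_mask, spacing=(1.0, 1.0, 1.0)):
        """Coordinates of foreground voxels (scaled to physical units) as an N x 3 array."""
        return np.argwhere(binary_mask) * np.asarray(spacing, float)

    def centerline_points(binary_mask):
        """Voxel coordinates of the one-voxel-thick skeleton of the mask."""
        return np.argwhere(skeletonize(binary_mask.astype(bool)))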
Article
Full-text available
Cardiovascular disease (CVD) accounts for about half of non-communicable diseases. Vessel stenosis in the coronary artery is considered to be a major risk factor for CVD. Computed tomography angiography (CTA) is one of the widely used noninvasive imaging modalities in coronary artery diagnosis due to its superior image resolution. Clinically, segmentation of coronary arteries is essential for the diagnosis and quantification of coronary artery disease. Recently, a variety of works have been proposed to address this problem. However, on the one hand, most works rely on in-house datasets, and only a few works have published their datasets to the public, and those contain only tens of images. On the other hand, their source code has not been published, and most follow-up works have not made comparisons with existing works, which makes it difficult to judge the effectiveness of the methods and hinders the further exploration of this challenging yet critical problem in the community. In this paper, we propose a large-scale dataset for coronary artery segmentation on CTA images. In addition, we have implemented a benchmark in which we have tried our best to implement several typical existing methods. Furthermore, we propose a strong baseline method which combines multi-scale patch fusion and two-stage processing to extract the details of vessels. Comprehensive experiments show that the proposed method achieves better performance than existing works on the proposed large-scale dataset. The benchmark and the dataset are published at https://github.com/XiaoweiXu/ImageCAS-A-Large-Scale-Dataset-and-Benchmark-for-Coronary-Artery-Segmentation-based-on-CT.
Chapter
Full-text available
A major motivation for this research has been to investigate whether or not the scale-space model allows for the determination and detection of stable phenomena. In this chapter it will be demonstrated that such determination and detection are indeed possible, and that the suggested representation can be used for extracting regions of interest, with associated stable scales, from an image in a solely bottom-up, data-driven way. The treatment is based on the following assumption: structures that are significant in scale-space are likely to correspond to significant structures in the image.
Chapter
Full-text available
As pointed out in the introductory chapter, an inherent property of objects in the world and details in images is that they only exist as meaningful entities over certain ranges of scale. If one aims at describing the structure of unknown real-world signals, then a multi-scale representation of data is crucial.
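A minimal sketch of the linear (Gaussian) scale-space representation underlying this argument: the signal is represented simultaneously at a family of scales by smoothing with Gaussians of increasing standard deviation. The particular scale sampling below is an arbitrary choice.

    # Minimal Gaussian scale-space stack (scale sampling is an assumption).
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def gaussian_scale_space(image, sigmas=(1, 2, 4, 8, 16)):
        """Return a stack of progressively smoothed copies of the image."""
        image = image.astype(float)
        return np.stack([gaussian_filter(image, s) for s in sigmas])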
Chapter
We describe an approach to the processing of an image by a front-end vision system. This system is a geometry engine [58] that converts the image intensity data into concise geometric information that can be interpreted by semantic systems in later stages of processing [31]. The basis for the approach is linear scale space [111, 11, 57, 4]. There has been much research in the area of both linear and nonlinear scale spaces. Two good research texts on the topic are [64] and [97]. Applications of these ideas to medical image analysis are also presented.