Figure 1 - uploaded by John Mashford
Four square patches Y, G, B, P are half overlapping. Pixel s is covered by all of the four patches.

Source publication
Conference Paper
Full-text available
We propose an algorithm for creating superpixels. The major step in our algorithm is simply minimizing two pseudo-Boolean functions. The processing time of our algorithm on images of moderate size is only half a second. Experiments on a benchmark dataset show that our method produces superpixels of comparable quality with existing algorithms. Last...
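
For intuition, a pseudo-Boolean function simply maps binary vectors to real values. The tiny brute-force minimizer below is a sketch for illustration only; the paper minimizes its two pseudo-Boolean functions with efficient optimization, not enumeration, and the example function here is made up.

    import itertools

    def minimize_pseudo_boolean(f, n):
        """Brute-force minimizer of a pseudo-Boolean function f: {0,1}^n -> R.

        Only usable for tiny n; shown purely to illustrate what is being
        minimized, not how the cited algorithm minimizes it.
        """
        best_x, best_val = None, float("inf")
        for x in itertools.product((0, 1), repeat=n):
            val = f(x)
            if val < best_val:
                best_x, best_val = x, val
        return best_x, best_val

    # Example: a small quadratic pseudo-Boolean function of three variables.
    f = lambda x: 2 * x[0] - 3 * x[1] + x[2] + 4 * x[0] * x[1] - 2 * x[1] * x[2]
    print(minimize_pseudo_boolean(f, 3))   # -> ((0, 1, 1), -4)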

Contexts in source publication

Context 1
... et al. assume the input image is intensively covered by half overlapping square patches of the same size (Figure 1). Each square patch corresponds to a label. ...
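
As a rough illustration of this layout (not the authors' code), the Python sketch below places half overlapping S x S patches on a stride of S/2 and counts how many patches cover each pixel; interior pixels come out covered by exactly four patches, as in Figure 1. The grid bookkeeping and boundary handling are illustrative assumptions.

    import numpy as np

    def patch_cover_count(height, width, S):
        """Count how many half overlapping S x S patches cover each pixel.

        Patches are placed on a grid with stride S // 2, so neighbouring
        patches overlap by half their side length; each interior pixel is
        covered by exactly four patches, as in Figure 1.
        """
        stride = S // 2
        cover = np.zeros((height, width), dtype=int)
        for top in range(0, height - S + 1, stride):
            for left in range(0, width - S + 1, stride):
                cover[top:top + S, left:left + S] += 1
        return cover

    cover = patch_cover_count(12, 12, 4)
    print(cover[4:8, 4:8])   # interior pixels: every entry is 4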
Context 2
... more examples of superpixels produced by our methods can be found in Figures 10 and 11. ...

Similar publications

Article
Full-text available
In the current study we examined the influence of case complexity and pretrial publicity (PTP) through an information processing framework. Dual process models suggest that individuals can process information in a systematic or heuristic manner. We explored the effects of defendant PTP (negative v. positive), language complexity (moderate v. high),...

Citations

... To ensure that the speed of feature matching can meet the needs of most computer vision tasks, this paper compares the performance of several mainstream superpixel segmentation algorithms. Among them, RTSS-DBSCAN [30,31] leads all other algorithms with a speed of 50 frames/s, which is well suited to computer vision tasks requiring high real-time performance, such as visual SLAM and target tracking; it achieves faster computation and higher memory efficiency while capturing boundaries better than other algorithms. The performance of the [33], LSC [34], PB [35], ERS [36], and LRW [37] superpixel segmentation algorithms in processing a single image is shown in Fig. 4. ...
Article
Full-text available
The feature matching algorithm based on deep learning has achieved superior performance compared to traditional algorithms in terms of both matching quantity and accuracy, but there are still some high-error matching results in complex scenes, which adversely affect subsequent work. Based on SuperGlue, we propose an accurate feature matching algorithm via outlier filtering. Firstly, DBSCAN real-time superpixel segmentation (RTSS-DBSCAN) is used to divide the image into regions, and then an outlier filtering module is designed according to the local similarity principle of feature matching. On the premise of not affecting correct matches, matching results with high errors are filtered out to improve matching accuracy. Meanwhile, to address the lag of the traditional Exponential Moving Average (EMA) algorithm, an adaptive EMA is designed and integrated into the SuperGlue training process to further improve training speed and matching accuracy. We evaluate the overall performance of the matching method using the AUC of pose error at the thresholds (5°, 10°, 20°), a common evaluation metric, and use precision and recall to provide a more detailed and intuitive evaluation of matching effectiveness. The experimental results show that the proposed method can effectively filter matching results with large errors and has high accuracy and robustness. The AUC of pose error at thresholds (5°, 10°, 20°) reaches 36.53, 56.23, and 73.68, and the precision and recall reach 80.07 and 91.52, respectively, which are better matching results than those of other algorithms.
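
To illustrate the general idea of reducing EMA lag (this is not the adaptive EMA proposed in the paper), here is a generic sketch in which the decay rate warms up with the step count; the warm-up rule decay_t = min(decay_max, (1 + t) / (10 + t)) and its constants are assumptions borrowed from a common heuristic.

    class AdaptiveEMA:
        """Exponential moving average with a step-dependent (warmed-up) decay.

        A generic sketch only: early on the decay is small, so the average
        tracks new values quickly and lags less; later the decay approaches
        decay_max and the average becomes smooth.
        """

        def __init__(self, decay_max=0.999):
            self.decay_max = decay_max
            self.step = 0
            self.value = None

        def update(self, x):
            self.step += 1
            decay = min(self.decay_max, (1 + self.step) / (10 + self.step))
            if self.value is None:
                self.value = x
            else:
                self.value = decay * self.value + (1 - decay) * x
            return self.value

    ema = AdaptiveEMA()
    for x in [1.0, 2.0, 3.0, 4.0]:
        print(ema.update(x))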
... Numerous studies segment an image into superpixels to reduce the computational burden of processing individual pixels. Cut-based approaches [20,30,34,40] create superpixels by adding multiple minimum cuts to a graph with pixel nodes. Other methods evolve homogeneous clusters from an initial set of points [1,18]. ...
Preprint
Full-text available
Learning semantic segmentation requires pixel-wise annotations, which can be time-consuming and expensive. To reduce the annotation cost, we propose a superpixel-based active learning (AL) framework, which collects a dominant label per superpixel instead. To be specific, it consists of adaptive superpixel and sieving mechanisms, fully dedicated to AL. At each round of AL, we adaptively merge neighboring pixels of similar learned features into superpixels. We then query a selected subset of these superpixels using an acquisition function assuming no uniform superpixel size. This approach is more efficient than existing methods, which rely only on innate features such as RGB color and assume uniform superpixel sizes. Obtaining a dominant label per superpixel drastically reduces annotators' burden as it requires fewer clicks. However, it inevitably introduces noisy annotations due to mismatches between superpixel and ground truth segmentation. To address this issue, we further devise a sieving mechanism that identifies and excludes potentially noisy annotations from learning. Our experiments on both Cityscapes and PASCAL VOC datasets demonstrate the efficacy of adaptive superpixel and sieving mechanisms.
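
To make the dominant-label annotation concrete, the sketch below (assumed array interfaces, not the authors' implementation) assigns each superpixel the majority ground-truth label of its pixels; pixels whose true class disagrees with that dominant label are exactly the noisy annotations the sieving mechanism is meant to exclude.

    import numpy as np

    def dominant_labels(superpixel_map, label_map, num_classes):
        """Return the majority (dominant) label for each superpixel.

        superpixel_map: (H, W) int array of superpixel ids in 0..K-1
        label_map:      (H, W) int array of per-pixel class labels in 0..C-1
        """
        num_superpixels = superpixel_map.max() + 1
        counts = np.zeros((num_superpixels, num_classes), dtype=np.int64)
        np.add.at(counts, (superpixel_map.ravel(), label_map.ravel()), 1)
        return counts.argmax(axis=1)          # dominant label per superpixel

    # Toy example: two superpixels, three classes.
    sp = np.array([[0, 0, 1], [0, 1, 1]])
    gt = np.array([[2, 2, 0], [1, 0, 0]])
    print(dominant_labels(sp, gt, num_classes=3))   # -> [2 0]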
... Graph-based algorithms [68,148] consider the image as an undirected graph and divide it into sections based on edge weights, which are frequently computed as colour differences or similarities. The partitioning techniques differ; for example, Felzenszwalb and Huttenlocher (FH) [46], Entropy Rate Superpixels (ERS) [92], and Proposals for Objects from Improved Seeds and Energies (POISE) [67] merge pixels into superpixels from the bottom up, whereas Normalized Cuts (NC) [126] and Constant Intensity Superpixels (CIS) [140] utilise cuts, and Pseudo-Boolean Optimization Superpixels (PB) [162] uses elimination. ...
Article
Full-text available
Superpixels have become increasingly popular in the image segmentation field, as they greatly help segmentation techniques to segment the region of interest accurately in noisy environments and also reduce the computational effort to a great extent. However, the selection of proper superpixel generation techniques and superpixel image segmentation techniques plays a crucial role across different kinds of image segmentation. Clustering is a well-accepted image segmentation technique that has proven effective in various image segmentation fields. Therefore, this study presents an up-to-date survey on the use of superpixel images in combination with clustering techniques for various image segmentation tasks. The contribution of the survey has four parts, namely (i) an overview of superpixel image generation techniques, (ii) clustering techniques, especially efficient partitional clustering techniques, their issues, and strategies for overcoming them, (iii) a review of superpixel-combined-with-clustering strategies existing in the literature for various image segmentation tasks, and (iv) lastly, a comparative study of superpixel methods combined with partitional clustering techniques, performed on oral pathology and leaf images to determine the efficacy of combining superpixel and partitional clustering approaches. Our evaluations and observations provide an in-depth understanding of several superpixel generation strategies and how they apply to partitional clustering methods.
... There are roughly five existing categories of superpixel generation algorithms for PolSAR images, including density-based methods [13], graph-based methods [14,15], contour evolution methods [16,17], energy optimization methods [18] and clustering-based methods [8,19]. However, some methods, such as graph-based methods [20,21] and energy optimization methods, need to combine a variety of technical elements, such as the revised Wishart distance (RWD), edge maps, and energy-driven sampling (SEEDS), to improve accuracy, which is computationally demanding. ...
Article
Full-text available
Superpixel generation of polarimetric synthetic aperture radar (PolSAR) images is widely used for intelligent interpretation due to its feasibility and efficiency. However, the initial superpixel size setting is commonly neglected, and empirical values are utilized. When prior information is missing, a smaller value will increase the computational burden, while a higher value may result in inferior boundary adherence. Additionally, existing similarity metrics are time-consuming and cannot achieve better segmentation results. To address these issues, a novel strategy is proposed in this article for the first time to construct the function relationship between the initial superpixel size (number of pixels contained in the initial superpixel) and the structural complexity of PolSAR images; additionally, the determinant ratio test (DRT) distance, which is exactly a second form of Wilks’ lambda distribution, is adopted for local clustering to achieve a lower computational burden and competitive accuracy for superpixel generation. Moreover, a hexagonal distribution is exploited to initialize the PolSAR image based on the estimated initial superpixel size, which can further reduce the complexity of locating pixels for relabeling. Extensive experiments conducted on five real-world data sets demonstrate the reliability and generalization of adaptive size estimation, and the proposed superpixel generation method exhibits higher computational efficiency and better-preserved details in heterogeneous regions compared to six other state-of-the-art approaches.
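
As a rough sketch of hexagonal initialization (generic lattice geometry, not the authors' implementation), seeds can be placed on a hexagonal grid whose spacing follows from the estimated number of pixels per initial superpixel, since a hexagonal cell of spacing d covers about (sqrt(3) / 2) * d^2 pixels.

    import numpy as np

    def hexagonal_seeds(height, width, pixels_per_superpixel):
        """Place initial seeds on a hexagonal lattice.

        The spacing d is chosen so that each seed 'owns' roughly
        pixels_per_superpixel pixels; alternate rows are offset by d / 2.
        """
        d = np.sqrt(2.0 * pixels_per_superpixel / np.sqrt(3.0))
        row_step = d * np.sqrt(3.0) / 2.0
        seeds, row, y = [], 0, row_step / 2.0
        while y < height:
            x = d / 2.0 if row % 2 == 0 else d      # offset every other row
            while x < width:
                seeds.append((int(round(y)), int(round(x))))
                x += d
            y += row_step
            row += 1
        return seeds

    seeds = hexagonal_seeds(100, 100, pixels_per_superpixel=100)
    print(len(seeds))   # roughly (100 * 100) / 100 = 100 seeds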
... Veksler et al. [33] use a data term and a smoothness term in graph-cut-based optimization. Zhang et al. [34] generate regular and square-like superpixels, because [34] adds more constraints to generate superpixels between two horizontal and vertical strips. ...
... Zhang et al. [34] generate regular and square-like superpixels, because [34] adds more constraints to generate superpixels between two horizontal and vertical strips. Shen et al. [35] propose a method using lazy random walk (LRW) to generate superpixels. ...
Article
Full-text available
Textureless building surfaces composed of homogeneous pixels can lead to the failure of photometric consistency. However, the textureless regions widely present in artificial scenes usually exhibit strong planarity, enabling depth estimation of textureless regions with planar priors. Existing methods for generating planar priors suffer from over-segmentation of large planes with textureless regions, which indicates that planarity is not fully exploited. In this study, we propose a novel method for generating planar priors by combining mean-shift clustering and superpixel segmentation. Planarity is fully utilized by preferentially generating planar priors for large planes with textureless regions in artificial scenes. Finally, a probabilistic graphical model is used to incorporate the planar priors and smoothing constraints into the depth estimation process. The image gradient is used as a criterion of the degree of texture to adaptively adjust the weights of the different constraints. Experimental results on the benchmark datasets ETH3D, UDD5, and SenseFly demonstrate that the proposed method can effectively recover the depth information of textureless regions in high-resolution images and obtain highly complete three-dimensional (3-D) models of artificial scenes.
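
The gradient-as-texture-criterion idea can be illustrated with a simple weighting rule (a hypothetical sketch, not the paper's exact formulation): where the gradient magnitude is low (textureless), the planar prior and smoothness constraints receive more weight, and where it is high (textured), photometric consistency dominates. The exponential mapping and the sigma parameter are assumptions.

    import numpy as np

    def constraint_weights(image, sigma=10.0):
        """Map gradient magnitude to per-pixel constraint weights in [0, 1].

        w_planar -> 1 in low-gradient (textureless) regions, favouring the
        planar prior / smoothness term; w_photo = 1 - w_planar favours
        photometric consistency in textured regions.
        """
        gy, gx = np.gradient(image.astype(np.float64))
        grad_mag = np.hypot(gx, gy)
        w_planar = np.exp(-grad_mag / sigma)
        return w_planar, 1.0 - w_planar

    image = np.random.rand(64, 64) * 255.0
    w_planar, w_photo = constraint_weights(image)
    print(w_planar.mean(), w_photo.mean())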
... (2) Graph-based methods [17,18]. Liu et al. [19] modified the normalized cuts algorithm by incorporating the revised Wishart distance (RWD) and an edge map for superpixel generation. ...
Article
Full-text available
Clustering-based methods of polarimetric synthetic aperture radar (PolSAR) image superpixel generation are popular due to their feasibility and parameter controllability. However, these methods pay more attention to improving boundary adherence and are usually time-consuming to generate satisfactory superpixels. To address this issue, a novel cross-iteration strategy is proposed to integrate various advantages of different distances with higher computational efficiency for the first time. Therefore, the revised Wishart distance (RWD), which has better boundary adherence but is time-consuming, is first integrated with the geodesic distance (GD), which has higher efficiency and more regular shape, to form a comprehensive similarity measure via the cross-iteration strategy. This similarity measure is then utilized alternately in the local clustering process according to the difference between two consecutive ratios of the current number of unstable pixels to the total number of unstable pixels, to achieve a lower computational burden and competitive accuracy for superpixel generation. Furthermore, hexagonal initialization is adopted to further reduce the complexity of searching pixels for relabelling in the local regions. Extensive experiments conducted on the AIRSAR, RADARSAT-2 and simulated data sets demonstrate that the proposed method exhibits higher computational efficiency and a more regular shape, resulting in a smooth representation of land cover in homogeneous regions and better-preserved details in heterogeneous regions.
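
At a high level, the cross-iteration strategy can be sketched as follows; rwd and gd are placeholder callables standing in for the revised Wishart distance and the geodesic distance, and the switching rule (compare the change in the unstable-pixel ratio against a threshold) paraphrases the abstract, with the concrete criterion and threshold values being assumptions.

    def cross_iteration_clustering(pixels, labels, candidate_labels, rwd, gd,
                                   max_iters=20, switch_threshold=0.05):
        """Alternate two similarity measures during local clustering (sketch)."""
        use_rwd, prev_ratio, total = True, 1.0, len(pixels)
        for _ in range(max_iters):
            distance = rwd if use_rwd else gd
            unstable = 0
            for i, p in enumerate(pixels):
                best = min(candidate_labels(i), key=lambda lab: distance(p, lab))
                if best != labels[i]:
                    labels[i] = best
                    unstable += 1
            ratio = unstable / total
            if abs(prev_ratio - ratio) < switch_threshold:
                use_rwd = not use_rwd          # switch the similarity measure
            prev_ratio = ratio
            if unstable == 0:
                break
        return labels

    # Toy 1-D example with two cluster centres at 0.0 and 10.0.
    centres = {0: 0.0, 1: 10.0}
    pixels = [0.5, 1.0, 9.0, 10.5]
    labels = [1, 1, 0, 0]                      # deliberately wrong start
    rwd = lambda p, lab: abs(p - centres[lab])
    gd = lambda p, lab: (p - centres[lab]) ** 2
    print(cross_iteration_clustering(pixels, labels, lambda i: (0, 1), rwd, gd))
    # -> [0, 0, 1, 1]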
... Traditional algorithms such as the watershed algorithm [53], superpixel segmentation [54,55], and edge operators can also complete the task of bone segmentation. However, traditional algorithms cannot obtain classification results from the extracted edge information, handle stacking areas, or complete the task fully automatically. ...
Article
Full-text available
Radiography is an essential basis for the diagnosis of fractures. For pediatric elbow joint diagnosis, the doctor needs to diagnose abnormalities based on the location and shape of each bone, which is a great challenge for AI algorithms interpreting radiographs. Bone instance segmentation is an effective upstream task for automatic radiograph interpretation. Pediatric elbow bone instance segmentation is a process by which each bone is extracted separately from a radiograph. However, the arbitrary directions and the overlapping of bones pose issues for bone instance segmentation. In this paper, we design a detection-segmentation pipeline to tackle these problems by using rotational bounding boxes to detect bones and proposing a robust segmentation method. The proposed pipeline mainly contains three parts: (i) We use a Faster R-CNN-style architecture to detect and locate bones. (ii) We adopt the Oriented Bounding Box (OBB) to improve localization accuracy. (iii) We design the Global-Local Fusion Segmentation Network to combine the global and local contexts of the overlapped bones. To verify the effectiveness of our proposal, we conduct experiments on our self-constructed dataset that contains 1274 well-annotated pediatric elbow radiographs. The qualitative and quantitative results indicate that the network significantly improves the performance of bone extraction. Our methodology has good potential for applying deep learning to bone instance segmentation in radiographs.
... This is a kind of top-down segmentation method. Its typical algorithms include Graph-based segmentation [13], Normalized cuts [14], Superpixel lattice [15], GCa10 and GCb10 [16], Entropy Rate Superpixel Segmentation [17], and Superpixels via Pseudo-Boolean Optimization [18]. ...
... (3) The average accuracy rate (AAR), given in (18), is designed with comprehensive consideration of the first two evaluation indexes: ...
Article
Full-text available
Most traditional superpixel segmentation methods used binary logic to generate superpixels for natural images. When these methods are used for images with significantly fuzzy characteristics, the boundary pixels sometimes cannot be correctly classified. To solve this problem, this paper proposes a Superpixel Method Based on Fuzzy Theory (SMBFT), which uses fuzzy theory as a guide and the traditional fuzzy c-means clustering algorithm as a baseline. This method makes full use of the advantages of the fuzzy clustering algorithm in dealing with images with fuzzy characteristics. Boundary pixels, which have higher uncertainty, can be correctly classified with maximum probability, and the resulting superpixels contain homogeneous pixels. Meanwhile, the paper also uses surrounding neighborhood pixels to constrain the spatial information, which effectively alleviates the negative effects of noise. The method is tested on images from the Berkeley database and brain MR images from the BrainWeb database. In addition, this paper proposes a comprehensive criterion to measure the weights of two kinds of criteria when choosing superpixel methods for color images. An evaluation criterion for medical image data sets employs the internal entropy of superpixels, inspired by the concept of entropy in information theory. The experimental results show that this method is superior to traditional methods on both natural images and medical images.
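
For reference, the fuzzy c-means baseline that SMBFT builds on alternates a centre update and a membership update; the minimal sketch below implements plain FCM only, without the paper's spatial neighbourhood constraint.

    import numpy as np

    def fuzzy_c_means(X, c, m=2.0, iters=50, eps=1e-9):
        """Minimal fuzzy c-means on feature vectors X of shape (n, d).

        Returns the membership matrix U of shape (n, c) and the c centres.
        """
        n = X.shape[0]
        rng = np.random.default_rng(0)
        U = rng.random((n, c))
        U /= U.sum(axis=1, keepdims=True)
        for _ in range(iters):
            Um = U ** m
            centres = (Um.T @ X) / (Um.sum(axis=0)[:, None] + eps)
            # Distances from every sample to every centre.
            d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + eps
            # Membership update: u_ik = 1 / sum_j (d_ik / d_ij) ** (2 / (m - 1))
            U = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1))).sum(axis=2)
        return U, centres

    X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5.0])
    U, centres = fuzzy_c_means(X, c=2)
    print(centres)                             # two centres near (0, 0) and (5, 5)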
... In addition, lots of redundant superpixels are produced by all of these methods: [6], PB [14], LSC [8], DBSCAN [11,12], ERS [9], SEEDS [13]. ...
Article
Full-text available
Superpixel segmentation is one of the key image preprocessing steps in object recognition and detection methods. However, over-segmentation in smoothly connected homogeneous regions of an image is a key problem, producing redundant, complex jagged textures. In this paper, density peak clustering is used to reduce the redundant superpixels and highlight the primary textures and contours of the salient objects. Firstly, grid pixels are extracted as feature points, and the density of each feature point is defined. Secondly, the cluster centers are extracted as the density peaks. Finally, all the feature points are clustered by the density peaks. The pixel blocks obtained by the above steps are the superpixels. The method is evaluated on the BSDS500 dataset, and the experimental results show that the Boundary Recall (BR) and Achievable Segmentation Accuracy (ASA) are 95.0% and 96.3%, respectively. In addition, the proposed method performs well in efficiency (30 fps). The comparison experiments show that not only do the superpixel boundaries adhere well to the primary textures and contours of the salient objects, but the method also effectively reduces the redundant superpixels in homogeneous regions.
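
The density-peak quantities used to pick cluster centres can be sketched in a few lines (standard Rodriguez-Laio style definitions on generic feature points, not the paper's grid-pixel features): each point gets a local density rho and a distance delta to the nearest point of higher density, and points with large rho and large delta serve as centres.

    import numpy as np

    def density_peaks(points, dc):
        """Compute the two density-peak quantities for each point.

        rho_i  : number of points within cutoff distance dc of point i
        delta_i: distance from point i to the nearest point of higher density
        """
        d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
        rho = (d < dc).sum(axis=1) - 1               # exclude the point itself
        delta = np.zeros(len(points))
        for i in range(len(points)):
            higher = np.where(rho > rho[i])[0]
            delta[i] = d[i, higher].min() if len(higher) else d[i].max()
        return rho, delta

    pts = np.vstack([np.random.randn(30, 2), np.random.randn(30, 2) + 6.0])
    rho, delta = density_peaks(pts, dc=1.0)
    centres = np.argsort(rho * delta)[-2:]           # two most peak-like points
    print(centres)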
... For example, it unnecessarily partitions uniform areas and computes unnecessary distances in dense areas. These issues motivated several improvements to k-means based techniques, including reducing the number of distance calculations, improving the seeding initialisation and improving the feature representation [21,17,2,23,45]. ...
... Several previous techniques, e.g. [10,37,45,18,30,12], serve as the basis for other graph-based techniques. ...
Preprint
Superpixels serve as a powerful preprocessing tool in many computer vision tasks. By using a superpixel representation, the number of image primitives can be reduced by orders of magnitude. The majority of superpixel methods use handcrafted features, which usually do not translate well into strong adherence to object boundaries. A few recent superpixel methods have introduced deep learning into the superpixel segmentation process. However, none of these methods is able to produce superpixels in near real-time, which is crucial to the applicability of a superpixel method in practice. In this work, we propose a two-stage graph-based framework for superpixel segmentation. In the first stage, we introduce an efficient Deep Affinity Learning (DAL) network that learns pairwise pixel affinities by aggregating multi-scale information. In the second stage, we propose a highly efficient superpixel method called Hierarchical Entropy Rate Segmentation (HERS). Using the learned affinities from the first stage, HERS builds a hierarchical tree structure that can produce any number of highly adaptive superpixels instantaneously. We demonstrate, through visual and numerical experiments, the effectiveness and efficiency of our method compared to various state-of-the-art superpixel methods.
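
To convey how a hierarchical tree built over learned affinities can yield any number of superpixels instantly, here is a simplified, single-linkage style sketch (not the entropy-rate objective HERS actually optimizes): edges are merged greedily in order of decreasing affinity, the merge sequence is recorded once, and replaying the first N - K merges afterwards yields K segments.

    def hierarchical_merge(num_pixels, edges):
        """Greedily merge regions by decreasing affinity and record the order.

        edges is a list of (affinity, i, j) tuples between neighbouring pixels.
        """
        parent = list(range(num_pixels))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        merges = []
        for affinity, i, j in sorted(edges, reverse=True):
            ri, rj = find(i), find(j)
            if ri != rj:
                parent[rj] = ri
                merges.append((i, j, affinity))
        return merges

    def labels_for_k(num_pixels, merges, k):
        """Replay the first num_pixels - k merges to obtain k segments."""
        parent = list(range(num_pixels))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        for i, j, _ in merges[:num_pixels - k]:
            parent[find(j)] = find(i)
        return [find(x) for x in range(num_pixels)]

    # Toy 1-D "image" of four pixels with pairwise affinities.
    edges = [(0.9, 0, 1), (0.2, 1, 2), (0.8, 2, 3)]
    merges = hierarchical_merge(4, edges)
    print(labels_for_k(4, merges, k=2))   # -> [0, 0, 2, 2]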