Fig 4. Division in the horizontal axis.
Source publication
Conference Paper
Full-text available
Hilbert sort arranges given points of a high-dimensional space with integer coordinates along a Hilbert curve. A naïve method first draws a Hilbert curve of sufficient resolution to separate all the points, associates with each point an integer called its Hilbert index, which represents the point's order along the Hilbert curve, and then sorts the pairs of points and...
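To make the naïve method concrete, the following is a minimal sketch in Python, restricted to the 2D case for brevity (the paper treats general high-dimensional integer coordinates; hilbert_index_2d below is the classic iterative xy2d construction, not code from the paper):

    def hilbert_index_2d(order, x, y):
        # Map a grid point (x, y), with 0 <= x, y < 2**order, to its Hilbert
        # index, i.e. its position along a Hilbert curve of the given resolution.
        n = 1 << order
        d = 0
        s = n >> 1
        while s > 0:
            rx = 1 if (x & s) > 0 else 0
            ry = 1 if (y & s) > 0 else 0
            d += s * s * ((3 * rx) ^ ry)
            if ry == 0:
                # Rotate/reflect the quadrant into the standard orientation.
                if rx == 1:
                    x = n - 1 - x
                    y = n - 1 - y
                x, y = y, x
            s >>= 1
        return d

    def naive_hilbert_sort(points, order):
        # The naive method: pair each point with its Hilbert index, then sort.
        return sorted(points, key=lambda p: hilbert_index_2d(order, p[0], p[1]))

For example, naive_hilbert_sort([(0, 2), (3, 0), (0, 0)], order=2) returns [(0, 0), (0, 2), (3, 0)], following the order-2 curve.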

Similar publications

Conference Paper
Full-text available
There is great interest in, and a recent worldwide effort toward, improving the traction of harvesting machines operating on steep slopes. One way to improve traction and stability on steep slopes is to assist harvesting machines with a winch and cable anchored to locations such as tree stumps or stationary equipment. With the exponential development of tec...

Citations

... The Hilbert curve is a space-filling curve that maps multidimensional data to one dimension while preserving the locality of the data [26][30]. In this section, we describe the algorithm for sorting a point cloud using the Hilbert curve induced ordering [17][18]. Given a point cloud P of size n, the Hilbert sorting algorithm is shown in Algorithm 1. The Hilbert curve mapping H_n maps a spatial coordinate ...

Algorithm 1 Hilbert Curve Sorting Algorithm for n-Dimensional Point Set P
1: procedure HILBERTSORT(P)
2:     Compute the bounding box of P. ...
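The snippet breaks off at the bounding-box step. Purely as an illustration of what such a step might do (our guess, not the cited paper's Algorithm 1), the bounding box can be used to scale the cloud onto the integer grid on which the curve is drawn:

    import numpy as np

    def quantize_to_grid(P, order):
        # Scale an (n, d) point cloud into the 2**order integer grid, using the
        # bounding box of P (cf. step 2 of the quoted algorithm).
        lo, hi = P.min(axis=0), P.max(axis=0)
        span = np.where(hi > lo, hi - lo, 1.0)  # guard against flat axes
        cells = (1 << order) - 1                # largest grid coordinate per axis
        return np.rint((P - lo) / span * cells).astype(int)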
Preprint
The irregularity and permutation invariance of point cloud data pose challenges for effective learning. Conventional methods for addressing this issue convert raw point clouds to intermediate representations such as 3D voxel grids or range images. While such intermediate representations solve the problem of permutation invariance, they can result in significant loss of information. Approaches that do learn on raw point clouds either have trouble resolving neighborhood relationships between points or are too complicated in their formulation. In this paper, we propose a novel approach to representing point clouds as a locality-preserving 1D ordering induced by the Hilbert space-filling curve. We also introduce Point2Point, a neural architecture that can effectively learn on Hilbert-sorted point clouds. We show that Point2Point achieves competitive performance on point cloud segmentation and generation tasks. Finally, we show the performance of Point2Point on spatio-temporal occupancy prediction from point clouds.
... The distance between image features is measured by an L1 distance function. We also fix the number m of pivots for Simple-Map to 8, which gives relatively good performance for similarity search using an R-tree [9] constructed by clustering the projected images sorted by Hilbert sort [14]. ...
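For illustration, a pivot projection in the style described here could look as follows; a sketch under the assumption that Simple-Map maps each feature vector to its distances from the m pivots (the function name and shapes are ours, not from [9] or [14]):

    import numpy as np

    def simple_map_project(X, pivots):
        # X: (n, d) image features; pivots: (m, d) array, e.g. m = 8.
        # Each output coordinate is the L1 distance from a feature to one pivot,
        # giving an (n, m) array of projected images.
        return np.stack([np.abs(X - p).sum(axis=1) for p in pivots], axis=1)

The projected images can then be ordered by Hilbert sort and grouped into R-tree leaves, as the snippet describes.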
Chapter
Full-text available
Annealing by Increasing Resampling (AIR, for short) is a stochastic hill-climbing optimization algorithm that evaluates the objective function on resamplings of increasing size. In the beginning stages, AIR makes state transitions like a random walk, because it uses small resamplings whose evaluations have large error with high probability. In the ending stages, AIR behaves like a local search, because it uses large resamplings very close to the entire sample. Thus AIR works similarly to conventional Simulated Annealing (SA, for short). As a rationale for AIR approximating SA, we show that both AIR and SA can be regarded as hill-climbing algorithms whose objective function evaluations are subject to stochastic fluctuations. The fluctuation in AIR is explained by the probit, and in SA by the logit. We show experimentally that the logit can be replaced with the probit in MCMC, which is a basis of SA. We also present an experimental comparison of SA and AIR on two optimization problems: sparse pivot selection for dimension reduction, and annealing-based clustering. Strictly speaking, AIR must use a resampling performed independently at each transition trial. However, experiments have demonstrated that reusing a resampling for a certain number of trials can speed up optimization without losing optimization quality. In particular, the larger the resamplings used for evaluation, the more pronounced AIR's speed advantage over SA.
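As a minimal rendering of the scheme described above (our sketch with a caller-supplied resampling schedule; the chapter's actual procedure may differ in its acceptance rule and schedule):

    import random

    def air(objective, neighbor, sample, x0, sizes, rng=random):
        # objective(x, resample) -> cost estimated on a resampling (lower is better)
        # neighbor(x)            -> a random candidate transition from x
        # sizes                  -> increasing resample sizes, one per trial
        x = x0
        for s in sizes:
            resample = rng.sample(sample, min(s, len(sample)))  # fresh resampling per trial
            y = neighbor(x)
            # Small resamplings evaluate noisily, so early transitions resemble a
            # random walk; near-full resamplings make this behave like local search.
            if objective(y, resample) <= objective(x, resample):
                x = y
        return x

A schedule such as sizes = range(10, len(sample), 50) would, for instance, grow the resamplings linearly from a handful of points toward the entire sample.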
Article
Distribution comparison plays a central role in many machine learning tasks, such as data classification and generative modeling. In this study, we propose a novel metric, called the Hilbert curve projection (HCP) distance, to measure the distance between two probability distributions with low complexity. In particular, we first project two high-dimensional probability distributions using the Hilbert curve to obtain a coupling between them, and then calculate the transport distance between these two distributions in the original space, according to the coupling. We show that the HCP distance is a proper metric and is well-defined for probability measures with bounded supports. Furthermore, we demonstrate that the modified empirical HCP distance with the $L_p$ cost in $d$-dimensional space converges to its population counterpart at a rate of no more than $O(n^{-1/(2\max\{d,p\})})$. To suppress the curse of dimensionality, we also develop two variants of the HCP distance using (learnable) subspace projections. Experiments on both synthetic and real-world data show that our HCP distance works as an effective surrogate for the Wasserstein distance with low complexity and overcomes the drawbacks of the sliced Wasserstein distance. The code for this work is at https://github.com/sherlockLitao/HCP .
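A minimal sketch of the coupling step, assuming two equal-size samples with integer grid coordinates and any Hilbert index function (such as the hilbert_index_2d sketch earlier on this page); the authors' actual implementation is at the linked repository:

    import numpy as np

    def hcp_distance(X, Y, hilbert_key, p=2):
        # Sort both (n, d) samples along the Hilbert curve, couple them rank by
        # rank, and return the L_p transport cost of that coupling in the
        # original space.
        X = X[np.argsort([hilbert_key(x) for x in X])]
        Y = Y[np.argsort([hilbert_key(y) for y in Y])]
        return float(np.mean(np.sum(np.abs(X - Y) ** p, axis=1)) ** (1.0 / p))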
Conference Paper
Outlier detection methods have used approximate neighborhoods in filter-refinement approaches, and outlier detection ensembles have used artificially obfuscated neighborhoods to achieve diverse ensemble members. Here we argue that outlier detection models could be based on approximate neighborhoods in the first place, gaining in both efficiency and effectiveness. Whether this works depends, however, on the type of approximation: only some seem beneficial for the task of outlier detection, while others show no (large) benefit. In particular, we argue that space-filling curves are beneficial approximations, as they have a stronger tendency to underestimate the density in sparse regions than in dense regions. In comparison, LSH and NN-Descent do not have such a tendency and do not seem to be beneficial for the construction of outlier detection ensembles.
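To make the space-filling-curve approximation concrete (our illustration, not the paper's implementation): sort the points by Hilbert index and take each point's window on the curve as its approximate neighborhood. Points in sparse regions then draw candidates from farther away along the curve, one hedged reading of the density-underestimating tendency argued for above.

    def sfc_neighbors(points, hilbert_key, k):
        # Approximate each point's k-nearest neighbors by its k predecessors and
        # k successors in the 1D Hilbert order (a window on the curve).
        order = sorted(range(len(points)), key=lambda i: hilbert_key(points[i]))
        rank = {idx: r for r, idx in enumerate(order)}
        return {idx: order[max(0, r - k):r] + order[r + 1:r + 1 + k]
                for idx, r in rank.items()}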