Figure 2. The Hausdorff distance between two surfaces. The two-sided Hausdorff distance is H(A, B) = max(D(A, B), D(B, A)) [6].
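A minimal, hedged sketch of the definition above (not code from the article): the one-sided distances D(A, B), D(B, A) and the two-sided H(A, B) can be approximated on point samples of the two surfaces with SciPy. The arrays A_pts and B_pts below are placeholders for sampled surface points.

```python
# Hedged sketch: approximate the Hausdorff distance between two surfaces
# from point samples (the exact definition ranges over all surface points).
import numpy as np
from scipy.spatial import cKDTree
from scipy.spatial.distance import directed_hausdorff

A_pts = np.random.rand(1000, 3)   # placeholder samples of surface A
B_pts = np.random.rand(1200, 3)   # placeholder samples of surface B

d_ab = directed_hausdorff(A_pts, B_pts)[0]   # one-sided D(A, B)
d_ba = directed_hausdorff(B_pts, A_pts)[0]   # one-sided D(B, A)
H = max(d_ab, d_ba)                          # two-sided H(A, B) = max(D(A, B), D(B, A))

# Mean variant: average nearest-neighbour distance from A to B, i.e. the
# "Mean Hausdorff Distance" referred to later in the citation excerpts.
mean_d_ab = cKDTree(B_pts).query(A_pts)[0].mean()
print(f"D(A,B)={d_ab:.4f}  D(B,A)={d_ba:.4f}  H(A,B)={H:.4f}  mean={mean_d_ab:.4f}")
```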


Source publication
Article
Full-text available
Recent advances in evaluating and measuring the perceived visual quality of three-dimensional (3-D) polygonal models are presented in this article, which analyzes the general process of objective quality assessment metrics and subjective user evaluation methods and presents a taxonomy of existing solutions. Simple geometric error computed directly...

Context in source publication

Context 1
... this distance is non-symmetric, the two-sided Hausdorff distance is computed by taking the maximum of D(A, B) and D(B, A) (Figure ...

Similar publications

Article
Full-text available
At present, no study has utilized multispectral remote sensing to reconstruct 3D lineament mapping in the UAE. In this context, image contrast enhancement, stretching and linear enhancement were applied to acquire an excellent visualization. In addition, the automatic Canny detection algorithm is performed to extract linear features in m...
Article
This paper presents a 3D object representation framework. We develop a hierarchical model based on probabilistic correspondences and probabilistic relations between 3D visual features. Features at the bottom of the hierarchy are bound to local observations. Pairs of features that present strong geometric correlation are iteratively grouped into hi...
Article
Full-text available
The micro-simulation of social and urban phenomena using software agents in geo-referenced virtual environments is a field of research whose popularity has grown strongly in recent years. Several platforms have been developed for the specification and implementation of this type of simulation, but they do not yet offer a complete language for the specific...
Article
Full-text available
The aim of this paper is to describe a research project concerning colour planning in the urban environment. The first part presents the research held in cooperation with the urban furniture department of Milan City Council; the main goal was to collect the chromatic data of urban furniture inside a sample area in the historical centre of Milan and t...
Article
Full-text available
A software system has been developed for the study of static and dynamic data visualization in the context of Visual Data Mining in Virtual Reality. We use a specific data set to illustrate how the visualization tools of the 3D Visual Data Mining (3DVDM) system can assist in detecting potentially interesting non-linear data relationships that are...

Citations

... PCQA can be divided into subjective evaluation and objective evaluation. Subjective evaluation measures the quality of PCs via the visual perception of human observers [8][9][10]. Objective evaluation quantitatively describes degradation through algorithms. Early research on PCQA focused on subjective assessment experiments: Pan et al. [8] and Zhang et al. [9] analyzed the psychological factors that affect subjective evaluation scores, and Bulbul et al. [10] explored the impact of the experimental environment on subjective scores. However, subjective evaluation is too expensive and time-consuming. ...
Article
Full-text available
Due to the widespread use of point clouds, the demand for compression and transmission is increasingly prominent. However, these operations cause various losses to the point cloud, so applications need to evaluate point cloud quality. Therefore, we propose a new point cloud quality assessment (PCQA) metric named statistical information similarity (SISIM). First, we preprocess the point cloud (PC) by scaling based on density and then project the PC into texture maps and geometry maps. In addition, SISIM based on Natural Scene Statistics (NSS) is proposed as the texture feature, under the premise of proving that the texture maps meet NSS. Furthermore, we propose to extract geometry features based on local binary patterns (LBP), on account of the phenomenon that LBP maps of geometry images vary with different distortions. Finally, we predict the quality of PCs by fusing texture features with geometry features. Experiments show that our proposed method outperforms the state-of-the-art PCQA metrics on three publicly available datasets.
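As a rough, hedged illustration of the LBP step mentioned in this abstract (not the authors' SISIM implementation), a local binary pattern map and its histogram can be computed from a projected geometry image with scikit-image; geometry_map is a placeholder array.

```python
# Hedged sketch of an LBP feature only (not the SISIM pipeline): compute a
# local binary pattern map of a projected geometry image and summarise it
# as a normalised histogram.
import numpy as np
from skimage.feature import local_binary_pattern

geometry_map = (np.random.rand(256, 256) * 255).astype(np.uint8)  # placeholder projection
P, R = 8, 1.0                                   # neighbours and radius
lbp_map = local_binary_pattern(geometry_map, P, R, method="uniform")

# "uniform" LBP codes take values 0 .. P + 1, so use P + 2 histogram bins.
hist, _ = np.histogram(lbp_map, bins=np.arange(P + 3), density=True)
```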
... The Mean Hausdorff Distance is the average length of all the arrows. Modified from Bulbul et al. (2011). a small proportion of cases where the MURSST image itself fails to detect part of the ocean current within the swath. ...
... These metrics are based on the correlation between mesh properties and a quantification of the human perception. The visual quality is typically captured as Mean Opinion Score (MOS), which is calculated from comparative assessments of meshes by test subjects (Bulbul et al., 2011). This approach is based on established standards in the evaluation of image and video recordings and validated datasets do exist (Corsini et al., 2007). ...
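As a small hedged sketch of how a MOS is obtained (not taken from the cited works), each stimulus is scored by averaging the ratings given by the test subjects; the ratings array below is hypothetical.

```python
# Hedged sketch: Mean Opinion Score (MOS) per stimulus from raw subjective ratings.
import numpy as np

# Hypothetical ratings: rows = test subjects, columns = distorted meshes,
# values on a 1-5 opinion scale.
ratings = np.array([[4, 2, 5, 3],
                    [5, 1, 4, 3],
                    [4, 2, 4, 2]], dtype=float)

mos = ratings.mean(axis=0)                                          # MOS per mesh
ci95 = 1.96 * ratings.std(axis=0, ddof=1) / np.sqrt(len(ratings))   # 95% confidence interval
print(mos, ci95)
```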
Article
Full-text available
AR/VR applications are a valuable tool in product design and the product lifecycle. But the integration of AR/VR is not seamless, as CAD models need to be prepared for the AR/VR applications. One necessary data transformation is the tessellation of the analytically described geometry. To ensure the usability, visual quality and evaluability of the AR/VR application, time-consuming optimisation is needed, depending on the product complexity and the performance of the target device. Widespread approaches to this problem are based on iterative mesh decimation. This approach ignores the varying importance of geometries and the required visual quality in engineering applications. Our predictive approach is an alternative that enables optimisation without iterative process steps on the tessellated geometry. The contribution presents an approach that uses surface-based prediction and enables predictions of the perceived visual quality of the geometries. This includes the investigation of different geometric complexity metrics gathered from the literature as a basis for the prediction models. The approach is implemented in a geometry preparation tool and the results are compared with other approaches.
... In the field of 3D metrics, researchers have two alternatives: either they use an existing 2D perceptual quality metric (2D image-based metrics) or they develop 3D model-based metrics that exploit the geometry of 3D models to assess quality [9]. To measure the quality of a 3D model, two types of metrics can be used: geometric metrics or HVS metrics. ...
Chapter
Interest in 3D modeling has increased in recent years. However, efforts to improve compression and transmission quality are severely hampered by a lack of effective quality assessment measures. This is a particularly serious problem for researchers trying to improve the robustness of transmission to packet loss. Subjective measures are generally used to assess the robustness of applied treatments such as compression, watermarking and smoothing. These measures place enormous demands on time and resources. To solve this problem, researchers have developed objective metrics that are computed by machine. These metrics are integrated into many applications that require rendering or exchanging 2D images or 3D models. This article presents a new objective metric for evaluating the visual quality of static 3D models. The proposed full-reference metric is based on the relativity of the human visual system. The performance of the presented approach is evaluated using a dataset of static models smoothed by the 3D Mesh Processing Platform (MEPP). The obtained results show that the proposed metric outperforms the MSDM metric. Keywords: Visual quality assessment, 3D static models, HVS objective metric
... In this chapter, we will mainly focus on the perceptual quality metrics that use AI techniques. A review of more general 3D visual quality assessment methods can be found in Bulbul et al. (2011), Lavoue and Mantiuk (2015) and Muzahid et al. (2018). Surveys on perceptual quality assessment of 3D meshes are also available (Lin and Kuo, 2011; Corsini et al., 2013). ...
Chapter
The ultimate goal of computer graphics is to create images for viewing by people. Artificial intelligence-based methods should therefore consider what is known about human visual perception in the vision science literature. Modeling visual perception of 3D scenes requires the representation of several complex processes. In this chapter, we survey artificial intelligence and machine learning-based solutions for modeling human perception of 3D scenes. We also suggest future research directions. The topics that we cover include modeling human visual attention, 3D object quality perception, and material recognition.
... Scale invariance Three-dimensional models can be viewed via different screen sizes. In addition, once the 3D models are created, their appearance depends not only on the geometry, but also on the size of the object [4]. As an example, a deformation of 3 mm in a 3D object with a global size that equals 10 mm does not have the same perceptual importance as another of 3 mm in a 3D object with a global size that equals 100 mm. ...
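As a hedged sketch of the scale-invariance point in the excerpt above (not from the cited chapter), an absolute deviation is often normalised by a global size measure such as the bounding-box diagonal, so the same 3 mm error weighs far less on a 100 mm object than on a 10 mm one.

```python
# Hedged sketch: express a deviation relative to the object's global size
# (bounding-box diagonal) to obtain a scale-invariant error measure.
import numpy as np

def relative_deviation(vertices, deviation_mm):
    """Deviation divided by the bounding-box diagonal of the vertex set."""
    diag = np.linalg.norm(vertices.max(axis=0) - vertices.min(axis=0))
    return deviation_mm / diag

small = np.random.rand(500, 3) * 10.0    # object roughly 10 mm across
large = np.random.rand(500, 3) * 100.0   # object roughly 100 mm across
print(relative_deviation(small, 3.0))    # ~0.17 of the diagonal: clearly visible
print(relative_deviation(large, 3.0))    # ~0.017 of the diagonal: far less noticeable
```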
Chapter
The increasing use of 3D models in many areas leads us to consider the impact of the different distortions that can affect a 3D object during the rendering process. These deformations are usually evaluated using geometric metrics, which do not correlate well with human judgment, whereas it is the visual perceptual quality of 3D models that matters. In this context, a new full-reference metric, denoted LWRMS, is defined in this work. It can predict the distortion score between the original object and its damaged version without taking the connectivity constraint into account. The proposed metric is defined so as to correlate well with the human visual perception obtained from subjective measurements. The numerical experiments are carried out on the well-known LIRIS/EPFL General-Purpose database. The quantitative results show good performance of the proposed metric in comparison with methods from the literature.
... This type of method is based only on pure geometric distances, and it does not take into consideration the perceptual information that describes the main operations of the HVS. Consequently, the predicted visual quality is not well reflected, as proven by the moderate correlation with human perception [11,12]. To overcome these drawbacks, many researchers have recently developed perceptually driven quality methods for 3D meshes [13,14]. ...
Article
Full-text available
A number of full-reference and reduced-reference methods have been proposed to estimate the perceived visual quality of 3D meshes. However, in most practical situations, there is limited access to information about the reference and the distortion type. For these reasons, the development of a no-reference mesh visual quality (MVQ) approach is a critical issue, and more emphasis needs to be devoted to blind methods. In this work, we propose a no-reference convolutional neural network (CNN) framework to estimate the perceived visual quality of 3D meshes. The method is called SCNN-BMQA (3D visual saliency and CNN for blind mesh quality assessment). The main contribution is the use of a CNN and 3D visual saliency to estimate the perceived visual quality of distorted meshes. To do so, the CNN architecture is fed with small patches selected carefully according to their level of saliency. First, the visual saliency of the 3D mesh is computed. Afterward, we render 2D projections from the 3D mesh and its corresponding 3D saliency map. Then the obtained views are split into small 2D patches that pass through a saliency filter in order to select the most relevant patches. Finally, a CNN is used for feature learning and quality score estimation. Extensive experiments are conducted on four prominent MVQ assessment databases, including several tests to study the effect of the CNN parameters, the effect of visual saliency and comparisons with existing methods. Results show that the trained CNN achieves good rates in terms of correlation with human judgment and outperforms the most effective state-of-the-art methods.
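A hedged, simplified sketch of the saliency-based patch selection described above (not the SCNN-BMQA implementation): patches from a rendered view are kept only when the mean saliency inside the patch exceeds a threshold. The arrays view_img and saliency_map are placeholder renderings.

```python
# Hedged sketch of saliency-driven patch selection (not the SCNN-BMQA code):
# keep only those patches whose mean saliency exceeds a threshold.
import numpy as np

def select_salient_patches(view_img, saliency_map, patch=32, thresh=0.5):
    patches = []
    h, w = view_img.shape[:2]
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            if saliency_map[y:y + patch, x:x + patch].mean() > thresh:
                patches.append(view_img[y:y + patch, x:x + patch])
    return patches

view_img = np.random.rand(256, 256)       # placeholder rendered view
saliency_map = np.random.rand(256, 256)   # placeholder rendered saliency map
salient_patches = select_salient_patches(view_img, saliency_map)
```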
... On the other hand, perceptual methods aim at measuring the perceived quality of meshes by incorporating HVS mechanisms. Recent works [4,25] review the mesh quality assessment literature. Moreover, Corsini et al. [9] and Lin and Kuo [27] presented recent surveys on perceptual methods for quality assessment. ...
Article
Full-text available
To decide whether the perceived quality of a mesh is influenced by a certain modification such as compression or simplification, a metric for estimating the visual quality of 3D meshes is required. Today, machine learning and deep learning techniques are increasingly popular since they provide efficient solutions to many complex problems. However, these techniques are not widely utilized in the field of 3D shape perception. We propose a novel machine learning-based approach for evaluating the visual quality of 3D static meshes. The novelty of our study lies in incorporating crowdsourcing into a machine learning framework for visual quality evaluation. We argue that this is an elegant approach, since modeling human visual system processes is a tedious task that requires tuning many parameters. We employ a crowdsourcing methodology for collecting quality evaluation data and metric learning for deriving the parameters that correlate best with human perception. Experimental validation of the proposed metric reveals a promising correlation between the metric output and human perception. The results of our crowdsourcing experiments are publicly available to the community.
... Thus, it is crucial to adopt objective quality metrics that try to mimic an ideal human observer and accurately predict the subjective assessment scores [6]. ...
Article
Blind or no-reference quality evaluation is a challenging issue since it is done without access to the original content. In this work, we propose a deep learning-based method for no-reference mesh visual quality assessment. For a given 3D model, we first compute its mesh saliency. Then, we extract views from the 3D mesh and the corresponding mesh saliency. The views are then split into small patches that are filtered using a saliency threshold, and only the salient patches are selected and used as input data. Next, three pre-trained deep convolutional neural networks are employed for feature learning: VGG, AlexNet, and ResNet. Each network is fine-tuned and produces a feature vector. Compact Multi-linear Pooling (CMP) is then used to fuse the retrieved vectors into a global feature representation. Finally, fully connected layers followed by a regression module are used to estimate the quality score. Extensive experiments are executed on four mesh quality datasets, and comparisons with existing methods demonstrate the effectiveness of our method in terms of correlation with subjective scores.
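A hedged sketch of the general idea of fusing features from several pretrained backbones for quality regression (not the authors' implementation): Compact Multi-linear Pooling is replaced here by plain concatenation for brevity, pretrained weights are omitted, and view rendering plus saliency filtering are assumed to have already produced the input patches.

```python
# Hedged sketch (not the paper's code): multi-backbone feature extraction and
# a simple fusion-plus-regression head. Concatenation stands in for CMP.
import torch
import torch.nn as nn
from torchvision import models

class MultiBackboneQuality(nn.Module):
    def __init__(self):
        super().__init__()
        # Backbones used as feature extractors (load pretrained weights in practice).
        vgg, alex, resnet = models.vgg16(), models.alexnet(), models.resnet18()
        self.vgg_features = nn.Sequential(vgg.features, nn.AdaptiveAvgPool2d(1))    # -> 512
        self.alex_features = nn.Sequential(alex.features, nn.AdaptiveAvgPool2d(1))  # -> 256
        self.resnet_features = nn.Sequential(*list(resnet.children())[:-1])         # -> 512
        # Regression head mapping the fused feature vector to a single quality score.
        self.regressor = nn.Sequential(nn.Linear(512 + 256 + 512, 256),
                                       nn.ReLU(),
                                       nn.Linear(256, 1))

    def forward(self, x):                     # x: (N, 3, 224, 224) salient patches
        f = torch.cat([torch.flatten(self.vgg_features(x), 1),
                       torch.flatten(self.alex_features(x), 1),
                       torch.flatten(self.resnet_features(x), 1)], dim=1)
        return self.regressor(f)

scores = MultiBackboneQuality()(torch.randn(2, 3, 224, 224))   # (2, 1) quality scores
```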
... However, only recently the performance of such metrics was benchmarked and compared to the model-based approaches in [48]. The reader can refer to [39,49,50] for excellent reviews of subjective and objective quality assessment methodologies on 3D mesh contents. ...
Article
Full-text available
Recent trends in multimedia technologies indicate the need for richer imaging modalities to increase user engagement with the content. Among other alternatives, point clouds denote a viable solution that offers an immersive content representation, as witnessed by current activities in the JPEG and MPEG standardization committees. As a result of such efforts, MPEG is at the final stages of drafting an emerging standard for point cloud compression, which we consider as the state-of-the-art. In this study, the entire set of encoders that have been developed in the MPEG committee is assessed through an extensive and rigorous analysis of quality. We initially focus on the assessment of encoding configurations that have been defined by experts in MPEG for their core experiments. Then, two additional experiments are designed and carried out to address some of the identified limitations of the current approach. As part of the study, state-of-the-art objective quality metrics are benchmarked to assess their capability to predict the visual quality of point clouds under a wide range of radically different compression artifacts. To carry out the subjective evaluation experiments, a web-based renderer is developed and described. The subjective and objective quality scores, along with the rendering software, are made publicly available to facilitate and promote research in the field.
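As a hedged illustration of a common baseline among the objective metrics benchmarked in such studies (not the MPEG reference software), a symmetric point-to-point geometric distortion can be computed from nearest-neighbour distances between the reference and the decoded point cloud; the peak value used for the PSNR below is chosen as the reference bounding-box diagonal purely for illustration.

```python
# Hedged sketch of a symmetric point-to-point distortion measure for point
# cloud compression evaluation (illustrative only, not the MPEG metric code).
import numpy as np
from scipy.spatial import cKDTree

def symmetric_point_to_point_mse(ref, deg):
    """Symmetric MSE of nearest-neighbour distances between two point clouds."""
    d_rd = cKDTree(deg).query(ref)[0]        # reference -> degraded distances
    d_dr = cKDTree(ref).query(deg)[0]        # degraded -> reference distances
    return max(np.mean(d_rd ** 2), np.mean(d_dr ** 2))

ref = np.random.rand(5000, 3)                        # placeholder reference cloud
deg = ref + np.random.normal(0.0, 0.001, ref.shape)  # placeholder decoded cloud

mse = symmetric_point_to_point_mse(ref, deg)
peak = np.linalg.norm(ref.max(axis=0) - ref.min(axis=0))   # illustrative peak value
psnr = 10.0 * np.log10(peak ** 2 / mse)
print(mse, psnr)
```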