Fig 3 - uploaded by Hua-Chun Sun
Conjoint Proportions Plot. Plots showing pairwise comparison results for regularity judgments, for an ideal observer (top right), the five individual observers (bottom right) and their group average (left). The color scale of the pixels in the matrices represents the percentage of trials on which stimulus R_ijk (abscissa) was judged more regular than stimulus R_pqr (ordinate). The data matrix for the ideal observer (top right) is based on the jitter difference alone and is uncontaminated by the effects of element spacing and element size. The group matrix (left) is calculated by averaging the response percentages across the five observers (bottom right). Element spacing is indicated by the smallest, red numerical labels (1-3); element size by the blue numerical labels (1-3); and jitter level by the largest, green numerical labels (1-5). The dotted-line blocks mark the element size levels, and the solid-line blocks mark the jitter levels. https://doi.org/10.1371/journal.pcbi.1008802.g003


Source publication
Article
Full-text available
Texture regularity, such as the repeating pattern in a carpet, brickwork or tree bark, is a ubiquitous feature of the visual world. The perception of regularity has generally been studied using multi-element textures in which the degree of regularity has been manipulated by adding random jitter to the elements’ positions. Here we used three-factor...

Contexts in source publication

Context 1
... 45 conditions, there were a total of 1035 possible pairs (including same-condition pairs). The average responses to the 1035 pairwise comparisons are shown in matrix format (Fig 3), separately for each observer (bottom right), for their group average (left), and for an ideal observer (top right). Note this is like the Conjoint Proportions Plot of Ho et al. (2008) [18], but expanded to account for our three-factor design. ...
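The pair count follows directly from the design: 45 conditions yield C(45, 2) distinct pairs plus 45 same-condition pairs. A quick arithmetic check (the 3 × 3 × 5 factor breakdown is taken from the figure caption):

```python
from math import comb

# element spacing (3) x element size (3) x jitter (5) = 45 conditions
n_conditions = 3 * 3 * 5
# distinct pairs plus same-condition pairs
n_pairs = comb(n_conditions, 2) + n_conditions
print(n_conditions, n_pairs)  # 45 1035
```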
Context 2
... responses of the five observers differ from those of an ideal observer whose judgment is based solely on jitter (Fig 3 top right). In other words, the pixel color is not homogeneous within the large blocks (solid lines of the matrix, indicating jitter levels), suggesting appreciable effects of element spacing and size on the human observers' perceived regularity. ...
Context 3
... the element spacing, element size and jitter level of each of the image pairs. This analysis is performed individually on the data from each participant and results in a total of six factor variables: three for the first stimulus and three for the second stimulus of a pair (Fig 3 left). We reduce the number of factor variables by half using an approach similar to the Maximum Likelihood Conjoint Measurement (MLCM) method proposed by Knoblauch and Maloney [17]. ...
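The halving described in this snippet can be illustrated as a signed design matrix: in an additive MLCM-style model, each pair contributes the difference between the one-hot level codings of its two stimuli, so the six factor variables (three per stimulus) collapse to three signed predictors. The sketch below is our own illustration under those assumptions, not the authors' code; all names are hypothetical.

```python
import numpy as np

def difference_design(levels_1, levels_2, n_levels):
    """Signed design matrix for an additive MLCM-style model (illustrative).

    levels_1, levels_2: (n_trials, 3) integer level codes for the two
    stimuli of each pair (spacing, size, jitter).  Each factor is one-hot
    coded and the two codings are subtracted, collapsing six factor
    variables into three signed factors.
    """
    cols = []
    for f, k in enumerate(n_levels):
        onehot_1 = np.eye(k)[levels_1[:, f]]  # one-hot rows for stimulus 1
        onehot_2 = np.eye(k)[levels_2[:, f]]  # one-hot rows for stimulus 2
        cols.append(onehot_1 - onehot_2)      # signed difference coding
    return np.hstack(cols)

# toy example: 2 trials, factors with (3, 3, 5) levels as in the paper's design
s1 = np.array([[0, 1, 4], [2, 0, 1]])
s2 = np.array([[1, 1, 0], [2, 2, 1]])
X = difference_design(s1, s2, (3, 3, 5))
print(X.shape)  # (2, 11): 3 + 3 + 5 signed columns
```

Such a matrix would then typically be fit with a binomial regression on the observer's binary choices, yielding one perceptual scale value per factor level.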

Citations

... Using synthetic data [18] may help alleviate this issue. Finally, there is no solid understanding of human perception of texture repetitiveness [60], and, while human perceptual validation is outside the scope of this work, its apparently higher correlation with TexTile might add new insights to the elements identified so far. ...
Conference Paper
Full-text available
We introduce TexTile, a novel differentiable metric to quantify the degree to which a texture image can be concatenated with itself without introducing repeating artifacts (i.e., the tileability). Existing methods for tileable texture synthesis focus on general texture quality, but lack explicit analysis of the intrinsic repeatability properties of a texture. In contrast, our TexTile metric effectively evaluates the tileable properties of a texture, opening the door to more informed synthesis and analysis of tileable textures. Under the hood, TexTile is formulated as a binary classifier carefully built from a large dataset of textures of different styles, semantics, regularities, and human annotations. Key to our method is a set of architectural modifications to baseline pre-trained image classifiers to overcome their shortcomings at measuring tileability, along with a custom data augmentation and training regime aimed at increasing robustness and accuracy. We demonstrate that TexTile can be plugged into different state-of-the-art texture synthesis methods, including diffusion-based strategies, and generate tileable textures while keeping or even improving the overall texture quality. Furthermore, we show that TexTile can objectively evaluate any tileable texture synthesis method, whereas the current mix of existing metrics produces uncorrelated scores, which heavily hinders progress in the field.
... Here, we compare DE and maximum likelihood conjoint measurement (MLCM) in a replication of experiments that investigated the influence of spatial complexity and cone contrast on color appearance. MLCM is based on paired comparisons and is used to estimate perceptual scales associated with the integration of information along multiple dimensions [8][9][10]. Shapley et al. argued in 2019 that two separate systems contribute to color appearance [11]. ...
Article
Full-text available
Perceptual scales of color saturation obtained by direct estimation (DE) and maximum likelihood conjoint measurement (MLCM) were compared for red checkerboard patterns and uniform red squares. For the DE task, observers were asked to rate the saturation level as a percentage, indicating the chromatic sensation for each pattern and contrast. For the MLCM procedure, observers judged on each trial which of two stimuli that varied in chromatic contrast and/or spatial pattern evoked the most salient color. In separate experiments, patterns varying only in luminance contrast were also tested. The MLCM data confirmed previous results reported with DE indicating that the slope of the checkerboard scale with cone contrast levels is steeper than that for the uniform square. Similar results were obtained with patterns modulated only in luminance. DE methods were relatively more variable within an observer, reflecting observer uncertainty, while MLCM scales showed greater relative variability across observers, perhaps reflecting individual differences in the appearance of the stimuli. MLCM provides a reliable scaling method that is based only on ordinal judgments between pairs of stimuli and that allows less opportunity for subject-specific biases and strategies to intervene in perceptual judgments.
... In Park et al. (2009) a Markov Random Field (MRF) with a Mean-Shift Belief Propagation method was used. Other approaches to texton grouping include optimization of shape alignment (Cai & Baciu, 2011), structural regularity using symmetry groups (Liu et al., 2008), projection profiles (Aksoy, Yalniz & Tasdemir, 2012) and frequency filtering (Hettiarachchi, Peters & Bruce, 2014; Sun et al., 2021). ...
Article
Full-text available
Regular textures are frequently found in man-made environments and some biological and physical images. There are a wide range of applications for recognizing and locating regular textures. In this work, we used deep convolutional neural networks (CNNs) as a general method for modelling and classifying regular and irregular textures. We created a new regular texture database and investigated two sets of deep CNN-based methods for regular and irregular texture classification. First, the classic CNN models (e.g. Inception, residual networks, etc.) were used in a standard way. These two-class CNN classifiers were trained by fine-tuning networks using our new regular texture database. Next, we transformed the trained filter features of the last convolutional layer into a vector representation using Fisher Vector pooling (FV). Such representations can be efficiently used for a wide range of machine learning tasks such as classification or clustering, and are thus more transferable from one domain to another. Our experiments show that the standard CNNs attained sufficient accuracy for regular texture recognition tasks. The Fisher representations combined with a support vector machine (SVM) also showed high performance for regular and irregular texture classification. We also find that CNNs perform sub-optimally for long-range patterns, despite the fact that their fully-connected layers pool local features into a global image representation.
... The rapid increase in the number of trials for this type of scaling procedure (Maximum Likelihood Difference Scaling (MLDS), MLCM) renders the procedures less attractive from a pragmatic point of view and might sometimes outweigh the theoretical benefits discussed above. Alternative strategies such as subsampling (Knoblauch & Maloney, 2012; Abbatecola et al., 2021), having a reduced number of stimuli per dimension (Sun et al., 2021), or the use of so-called embedded methods from the machine learning community (see Haghiri et al., 2020, for example) are currently being explored and might allow more efficient ways of perceptual scale measurements in the future. ...
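The "rapid increase" this snippet refers to is quadratic: n stimuli yield n(n-1)/2 distinct pairs, so doubling the stimulus set roughly quadruples the number of comparison trials. A quick illustration of that growth:

```python
from math import comb

# distinct pairwise comparisons grow quadratically with the stimulus count
for n in (10, 20, 45, 90):
    print(n, comb(n, 2))  # 10->45, 20->190, 45->990, 90->4005
```

This is why subsampling the pairs or reducing the number of levels per dimension, as mentioned above, can make these scaling procedures practical.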
Article
Full-text available
One fundamental question in vision research is how the retinal input is segmented into perceptually relevant variables. A striking example of this segmentation process is transparency perception, in which luminance information in one location contributes to two perceptual variables: the properties of the transparent medium itself and of what is being seen in the background. Previous work by Robilotto et al. (2002, 2004) suggested that perceived transparency is closely related to perceived contrast, but how these two relate to retinal luminance has not been established. Here we studied the relationship between perceived transparency, perceived contrast, and image luminance using maximum likelihood conjoint measurement (MLCM). Stimuli were rendered images of variegated checkerboards that were composed of multiple reflectances and partially covered by a transparent overlay. We systematically varied the transmittance and reflectance of the transparent medium and measured perceptual scales of perceived transparency. We also measured scales of perceived contrast using cut-outs of the transparency stimuli that did not contain any geometrical cues to transparency. Perceptual scales for perceived transparency and contrast followed a remarkably similar pattern across observers. We tested the empirically observed scales against predictions from various contrast metrics and found that perceived transparency and perceived contrast were equally well predicted by a metric based on the logarithm of Michelson or Whittle contrast. We conclude that judgments of perceived transparency and perceived contrast are likely to be supported by a common mechanism, which can be computationally captured as a logarithmic contrast.