Article

Texton Noise


Abstract

Designing realistic noise patterns from scratch is hard. To solve this problem, recent contributions have proposed involved spectral analysis algorithms that enable procedural noise models to faithfully reproduce some class of textures. The aim of this paper is to propose the simplest and most efficient noise model that allows for the reproduction of any Gaussian texture. Texton noise is a simple sparse convolution noise that sums randomly scattered copies of a small bilinear texture called texton. We introduce an automatic algorithm to compute the texton associated with an input texture image that concentrates the input frequency content into the desired texton support. One of the main features of texton noise is that its evaluation consists only of summing 30 texture fetches on average. Consequently, texton noise generates Gaussian textures with an unprecedented evaluation speed for noise by example. A second main feature of texton noise is that it allows for high-quality on-the-fly anisotropic filtering by simply invoking existing GPU hardware solutions for texture fetches. In addition, we demonstrate that texton noise can be applied on any surface using parameterization-free surface noise and that it allows for noise mixing.
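The sparse convolution model described in the abstract can be prototyped in a few lines. The sketch below is a CPU illustration under assumed conventions (a zero-mean texton, a Poisson number of impulses, and a 1/sqrt(lambda) normalization that keeps the variance independent of the impulse density); it is not the paper's GPU evaluator, which computes the noise point-wise with about 30 bilinear texture fetches.

```python
import numpy as np

def texton_noise(texton, height, width, overlaps=30.0, seed=0):
    """Minimal sparse convolution (shot) noise: sum randomly placed
    copies of a small zero-mean kernel ("texton").  `overlaps` is the
    average number of textons covering any evaluation point."""
    rng = np.random.default_rng(seed)
    h, w = texton.shape
    lam = overlaps / (h * w)                  # impulses per unit area (pixel)
    n = rng.poisson(lam * (height + h) * (width + w))
    canvas = np.zeros((height + 2 * h, width + 2 * w))
    ys = rng.integers(0, height + h, n)       # impulse top-left corners,
    xs = rng.integers(0, width + w, n)        # with a margin for the crop
    for y, x in zip(ys, xs):
        canvas[y:y + h, x:x + w] += texton
    # shot-noise variance grows linearly with lam, so rescale to keep it fixed
    return canvas[h:h + height, w:w + width] / np.sqrt(lam)
```

With a zero-mean texton the result is an approximately Gaussian stationary field whose covariance is the autocorrelation of the texton, which is why designing the texton controls the output texture.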


... With respect to these criteria, we identify two state-of-the-art previous works. Regarding speed, Texton Noise [Galerne et al. 2017a] dramatically reduces the cost of by-example random-phase noise synthesis, lowering it to about 30 texture fetches, and is currently the fastest method for this class of textures. On the other axis, Local Random Phase Noise [Gilet et al. 2014] remains the first attempt at synthesizing non-random-phase noise, i.e. it partially reproduces the structure present in the input if this structure can be identified as a periodic pattern. ...
... • Speed: our method generates random-phase noise 20 times faster than Texton Noise [Galerne et al. 2017a], which is currently the fastest method for this class of textures. In the first row of Figure 2, we reproduce random-phase noise seen in recent previous works [Galerne et al. 2012, 2017a; Lagae et al. 2009] with unprecedented performance. • Generative space: our method covers a wide set of non-random-phase inputs, as shown in the second row of Figure 2. The richness of the generative space covered by our method is comparable to that of LRPN [Gilet et al. 2014]. ...
Article
We propose a new by-example noise algorithm that takes as input a small example of a stochastic texture and synthesizes an infinite output with the same appearance. It works on any kind of random-phase inputs as well as on many non-random-phase inputs that are stochastic and non-periodic, typically natural textures such as moss, granite, sand, bark, etc. Our algorithm achieves high-quality results comparable to state-of-the-art procedural-noise techniques but is more than 20 times faster. Our approach is conceptually simple: we partition the output texture space on a triangle grid and associate each vertex with a random patch from the input such that the evaluation inside a triangle is done by blending 3 patches. The key to this approach is the blending operation that usually produces visual artifacts such as ghosting, softened discontinuities and reduced contrast, or introduces new colors not present in the input. We analyze these problems by showing how linear blending impacts the histogram and show that a blending operator that preserves the histogram prevents these problems. The main requirement for a rendering application is to implement such an operator in a fragment shader without further post-processing, i.e. we need a histogram-preserving blending operator that operates only at the pixel level. Our insight for the design of this operator is that, with Gaussian inputs, histogram-preserving blending boils down to mean and variance preservation, which is simple to obtain analytically. We extend this idea to non-Gaussian inputs by "Gaussianizing" them with a histogram transformation and "de-Gaussianizing" them with the inverse transformation after the blending operation. We show how to precompute and store these histogram transformations such that our algorithm can be implemented in a fragment shader.
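For Gaussian inputs, the histogram-preserving blend described above reduces to restoring the mean and variance after a linear blend. A minimal sketch of that core step, assuming the patches are drawn from the same Gaussian and the weights sum to 1 (the Gaussianization/de-Gaussianization transforms for general inputs are omitted, and the function name is illustrative):

```python
import numpy as np

def hp_blend(patches, weights):
    """Variance-preserving blend: a linear blend of same-distribution
    Gaussian patches shrinks the variance by sum(w_i^2); dividing the
    centered blend by sqrt(sum(w_i^2)) undoes that shrinkage, so mean
    and variance (hence the Gaussian histogram) are preserved."""
    w = np.asarray(weights, float)              # assumed to sum to 1
    mu = np.mean([p.mean() for p in patches])   # shared mean estimate
    lin = sum(wi * p for wi, p in zip(w, patches))
    return (lin - mu) / np.sqrt((w ** 2).sum()) + mu
```

This is exactly the contrast-loss problem the abstract mentions: a naive average of three patches has variance reduced to one third, which reads visually as ghosting and washed-out contrast.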
... In practice, for a fixed (f, θ), the computation of the phase field φ_{f,θ} relies on the procedural generation of the Gaussian field G_{f,θ} using a local and parallel algorithm [LLDD09, GSV*14, GLM17, HN18]: ...
Article
Full-text available
Assisting the authoring of virtual terrains is a perennial challenge in the creation of convincing synthetic landscapes. Particularly, there is a need for augmenting artist‐controlled low‐resolution models with consistent relief details. We present a structured noise that procedurally enhances terrains in real time by adding spatially varying erosion patterns. The patterns can be cascaded, i.e. narrow ones are nested into large ones. Our model builds upon the Phasor noise, which we adapt to the specific characteristics of terrains (water flow, slope orientation). Relief details correspond to the underlying terrain characteristics and align with the slope to preserve the coherence of generated landforms. Moreover, our model allows for artist control, providing a palette of control maps, and can be efficiently implemented in graphics hardware, thus allowing for real‐time synthesis and rendering, therefore permitting effective and intuitive authoring.
... Portilla and Simoncelli [11] used wavelet and pyramid decomposition to build a parametric texture model, which can synthesize many natural textures, even those containing geometric patterns. The stationary Gaussian model [5, 16, 26-30] is a simple texture model: synthesizing Gaussian textures is easy and requires few computational resources; however, it is not good at handling textures with complex geometric patterns. Gatys et al. ...
Preprint
Full-text available
Recently, many studies have been devoted to texture synthesis using deep neural networks, because these networks excel at handling complex patterns in images. In these models, second-order statistics, such as the Gram matrix, are used to describe textures. Although these models have achieved promising results, the structure of their parametric space is still unclear; consequently, it is difficult to use them to mix textures. This paper addresses the texture mixing problem by using a Gaussian scheme to interpolate deep statistics computed from deep neural networks. More precisely, we first reveal that the statistics used in existing deep models can be unified using a stationary Gaussian scheme. We then present a novel algorithm to mix these statistics by interpolating between Gaussian models using optimal transport. We further apply our scheme to Neural Style Transfer, where we can create mixed styles. The experiments demonstrate that our method can achieve state-of-the-art results. Because all the computations are implemented in closed form, our mixing algorithm adds only negligible time to the original texture synthesis procedure.
... By using the coordinates of the cell and a local PRN generator [21, 9], the number of grains whose centers belong to this cell (and those grains' information) can be generated. Our implementation uses the PRN generation employed for procedural texture generation by Galerne et al. [6] for the ξ_k's (see Equation (4)). ...
Article
Film grain is the unique texture which results from the silver halide based analog photographic process. Film emulsions are made up of microscopic photo-sensitive silver grains, and the fluctuating density of these grains leads to what is known as film grain. This texture is valued by photographers and film directors for its artistic value. We present two implementations of a film grain rendering algorithm based on a physically realistic film grain model. The rendering algorithm uses a Monte Carlo simulation to determine the value of each output rendered pixel. A significant advantage of using this model is that the images can be rendered at any resolution, so that arbitrary zoom factors are possible, even to the point where the individual grains can be observed. We provide a method to choose the best implementation automatically, with respect to execution time.
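A toy version of such a Monte Carlo pixel evaluation can be sketched as follows, assuming a Boolean model of fixed-radius disk grains whose density is set from the input intensity. The function names and the simple Knuth Poisson sampler are illustrative assumptions; the published algorithm uses per-cell seeded PRNGs, distributed grain radii, and a more careful footprint model.

```python
import math
import random

def poisson_knuth(rng, lam):
    """Knuth's multiplicative Poisson sampler; fine for modest means."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def render_pixel(u, grain_radius=0.1, n_samples=256, seed=0):
    """One film-grain pixel: the input intensity u in (0, 1) sets the
    density of a Boolean model of disk "grains" over the unit pixel;
    the output is the Monte Carlo fraction of sample points covered
    by at least one grain."""
    rng = random.Random(seed)
    r = grain_radius
    # choose the grain density so the expected coverage equals u:
    # 1 - exp(-lam * pi * r^2) = u
    lam = -math.log(1.0 - u) / (math.pi * r * r)
    # realize the grains once over the pixel plus a margin of r,
    # so nearby sample points share grains (spatial correlation)
    n = poisson_knuth(rng, lam * (1 + 2 * r) ** 2)
    grains = [(rng.uniform(-r, 1 + r), rng.uniform(-r, 1 + r))
              for _ in range(n)]
    covered = 0
    for _ in range(n_samples):
        px, py = rng.random(), rng.random()
        if any((px - gx) ** 2 + (py - gy) ** 2 < r * r for gx, gy in grains):
            covered += 1
    return covered / n_samples
```

Because the grains live in continuous coordinates, the same realization can be re-rendered at any zoom factor, which is the resolution-free property the abstract highlights.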
Article
Node-graph-based procedural materials are vital to 3D content creation within the computer graphics industry. Leveraging the expressive representation of procedural materials, artists can effortlessly generate diverse appearances by altering the graph structure or node parameters. However, manually reproducing a specific appearance is a challenging task that demands extensive domain knowledge and labor. Previous research has sought to automate this process by converting artist-created material graphs into differentiable programs and optimizing node parameters against a photographed material appearance using gradient descent. These methods involve implementing differentiable filter nodes [Shi et al. 2020] and training differentiable neural proxies for generator nodes to optimize continuous and discrete node parameters [Hu et al. 2022a] jointly. Nevertheless, Neural Proxies exhibits critical limitations, such as long training times, inaccuracies, fixed resolutions, and confined parameter ranges, which hinder their scalability towards the broad spectrum of production-grade material graphs. These constraints fundamentally stem from the absence of faithful and efficient implementations of generic noise and pattern generator nodes, both differentiable and non-differentiable. Such deficiency prevents the direct optimization of continuous and discrete generator node parameters without relying on surrogate models. We present Diffmat v2 , an improved differentiable procedural material library, along with a fully-automated, end-to-end procedural material capture framework that combines gradient-based optimization and gradient-free parameter search to match existing production-grade procedural materials against user-taken flash photos. Diffmat v2 expands the range of differentiable material graph nodes in Diffmat [Shi et al. 2020] by adding generic noise/pattern generator nodes and user-customizable per-pixel filter nodes. 
This allows for the complete translation and optimization of procedural materials across various categories without the need for external proprietary tools or pre-cached noise patterns. Consequently, our method can capture a considerably broader array of materials, encompassing those with highly regular or stochastic geometries. We demonstrate that our end-to-end approach yields a closer match to the target than MATch [Shi et al. 2020] and Neural Proxies [Hu et al. 2022a] when starting from initially unmatched continuous and discrete parameters.
Article
Full-text available
By‐example aperiodic tilings are popular texture synthesis techniques that allow a fast, on‐the‐fly generation of unbounded and non‐periodic textures with an appearance matching an arbitrary input sample called the “exemplar”. But by relying on uniform random sampling, these algorithms fail to preserve the autocovariance function, resulting in correlations that do not match the ones in the exemplar. The output can then be perceived as excessively random. In this work, we present a new method which can well preserve the autocovariance function of the exemplar. It consists in fetching contents with an importance sampler taking the explicit autocovariance function as the probability density function (pdf) of the sampler. Our method can be controlled for increasing or decreasing the randomness aspect of the texture. Besides significantly improving synthesis quality for classes of textures characterized by pronounced autocovariance functions, we moreover propose a real‐time tiling and blending scheme that permits the generation of high‐quality textures faster than former algorithms with minimal downsides by reducing the number of texture fetches.
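The idea of using the explicit autocovariance as the sampler's pdf can be sketched as follows, computing a periodic autocovariance with the FFT (Wiener-Khinchin theorem) and importance-sampling fetch offsets from it. The clipping of negative correlations and the discrete `choice`-based sampler are simplifying assumptions for illustration.

```python
import numpy as np

def autocovariance_pdf(exemplar):
    """Periodic autocovariance of an exemplar via the FFT, clipped
    and normalized so it can serve as a sampling pdf over offsets."""
    x = exemplar - exemplar.mean()
    power = np.abs(np.fft.fft2(x)) ** 2     # power spectrum
    ac = np.real(np.fft.ifft2(power))       # Wiener-Khinchin
    ac = np.maximum(ac, 0.0)                # keep only positive correlations
    return ac / ac.sum()

def sample_offsets(pdf, n, rng):
    """Importance-sample n content-fetch offsets from the pdf."""
    flat = rng.choice(pdf.size, size=n, p=pdf.ravel())
    return np.unravel_index(flat, pdf.shape)
```

Sampling offsets this way makes correlated (nearby) content more likely than under uniform sampling, which is what restores the exemplar's autocovariance in the output.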
Article
Full-text available
Stochastic micro‐patterns successfully enhance the realism of virtual scenes. Procedural models using noise combined with transfer functions are extremely efficient. However, most patterns produced today employ 1D transfer functions, which assign color, transparency, or other material attributes, based solely on the single scalar quantity of noise. Multi‐dimensional transfer functions have received widespread attention in other fields, such as scientific volume rendering. But their potential has not yet been well explored for modeling micro‐patterns in the field of procedural texturing. We propose a new procedural model for stochastic patterns, defined as the composition of a bi‐dimensional transfer function (a.k.a. color‐map) with a stochastic vector field. Our model is versatile, as it encompasses several existing procedural noises, including Gaussian noise and phasor noise. It also generates a much larger gamut of patterns, including locally structured patterns which are notoriously difficult to reproduce. We leverage the Gaussian assumption and a tiling and blending algorithm to provide real‐time generation and filtering. A key contribution is a real‐time approximation of the second order statistics over an arbitrary pixel footprint, which enables, in addition, the filtering of procedural normal maps. We exhibit a wide variety of results, including Gaussian patterns, profiled waves, concentric and non‐concentric patterns.
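The core of the model above, composing a bi-dimensional transfer function with a stochastic vector field, can be sketched with a nearest-neighbor color-map lookup. The generation of the stochastic fields and the bilinear/footprint filtering are omitted; the function name and the assumption that both field components take values in [0, 1) are illustrative.

```python
import numpy as np

def evaluate_pattern(colormap, u, v):
    """Pattern value at x is colormap[u(x), v(x)]: a 2D transfer
    function evaluated on a stochastic vector field (u, v), both
    assumed to lie in [0, 1)."""
    h, w = colormap.shape[:2]
    ui = np.clip((np.asarray(u) * h).astype(int), 0, h - 1)
    vi = np.clip((np.asarray(v) * w).astype(int), 0, w - 1)
    return colormap[ui, vi]
```

Choosing (u, v) as two Gaussian noises recovers classical noise-plus-colormap texturing, while other field choices (e.g. phasor-like fields) give the locally structured patterns the abstract mentions.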
Article
Full-text available
The interaction between light and materials is key to physically-based realistic rendering. However, it is also complex to analyze, especially when the materials contain a large number of details and thus exhibit “glinty” visual effects. Recent methods of producing glinty appearance are expected to be important in next-generation computer graphics. We provide here a comprehensive survey on recent glinty appearance rendering. We start with a definition of glinty appearance based on microfacet theory, and then summarize research works in terms of representation and practical rendering. We have implemented typical methods using our unified platform and compare them in terms of visual effects, rendering speed, and memory consumption. Finally, we briefly discuss limitations and future research directions. We hope our analysis, implementations, and comparisons will provide insight for readers hoping to choose suitable methods for applications, or carry out research.
Article
Procedural modeling is now the de facto standard of material modeling in industry. Procedural models can be edited and are easily extended, unlike pixel-based representations of captured materials. In this article, we present a semi-automatic pipeline for general material proceduralization. Given Spatially Varying Bidirectional Reflectance Distribution Functions (SVBRDFs) represented as sets of pixel maps, our pipeline decomposes them into a tree of sub-materials whose spatial distributions are encoded by their associated mask maps. This semi-automatic decomposition of material maps progresses hierarchically, driven by our new spectrum-aware material matting and instance-based decomposition methods. Each decomposed sub-material is proceduralized by a novel multi-layer noise model to capture local variations at different scales. Spatial distributions of these sub-materials are modeled either by a by-example inverse synthesis method recovering Point Process Texture Basis Functions (PPTBF) [ 30 ] or via random sampling. To reconstruct procedural material maps, we propose a differentiable rendering-based optimization that recomposes all generated procedures together to maximize the similarity between our procedural models and the input material pixel maps. We evaluate our pipeline on a variety of synthetic and real materials. We demonstrate our method’s capacity to process a wide range of material types, eliminating the need for artist designed material graphs required in previous work [ 38 , 53 ]. As fully procedural models, our results expand to arbitrary resolution and enable high-level user control of appearance.
Article
Full-text available
Stationary Gaussian processes have been used for decades in the context of procedural noises to model and synthesize textures with no spatial organization. In this paper we investigate cyclostationary Gaussian processes, whose statistics are repeated periodically. It enables the modeling of noises having periodic spatial variations, which we call “cyclostationary Gaussian noises”. We adapt to the cyclostationary context several stationary noises along with their synthesis algorithms: spot noise, Gabor noise, local random‐phase noise, high‐performance noise, and phasor noise. We exhibit real‐time synthesis of a variety of visual patterns having periodic spatial variations.
Article
Full-text available
Procedural noise functions are fundamental tools in computer graphics used for synthesizing virtual geometry and texture patterns. Ideally, a procedural noise function should be compact, aperiodic, parameterized, and randomly accessible. Traditional lattice noise functions such as Perlin noise, however, exhibit periodicity due to the axial correlation induced while hashing the lattice vertices to the gradients. In this paper, we introduce a parameterized lattice noise called prime gradient noise (PGN) that minimizes discernible periodicity in the noise while enhancing the algorithmic efficiency. PGN utilizes prime gradients, a set of random unit vectors constructed from subsets of prime numbers plotted in a polar coordinate system. To map axial indices of lattice vertices to prime gradients, PGN employs Szudzik pairing, a bijection F : ℕ ² → ℕ. Compositions of Szudzik pairing functions are used in higher dimensions. At the core of PGN is the ability to parameterize noise generation through prime sequence offsetting, which facilitates the creation of fractal noise with varying levels of heterogeneity ranging from homogeneous to hybrid multifractals. A comparative spectral analysis of the proposed noise with other noises, including lattice noises, shows that PGN significantly reduces axial correlation and hence periodicity in the noise texture. We demonstrate the utility of the proposed noise function with several examples in procedural modeling, parameterized pattern synthesis, and solid texturing.
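Szudzik pairing, mentioned above, is a standard bijection ℕ² → ℕ; a direct implementation with its inverse:

```python
import math

def szudzik_pair(a, b):
    """Szudzik's pairing function, a bijection N x N -> N; PGN uses
    such pairings (and their compositions in higher dimensions) to
    hash lattice coordinates to gradient indices."""
    return a * a + a + b if a >= b else b * b + a

def szudzik_unpair(z):
    """Inverse of szudzik_pair."""
    s = math.isqrt(z)
    t = z - s * s
    return (t, s) if t < s else (s, t - s)
```

Unlike the better-known Cantor pairing, Szudzik's function keeps the output bounded by max(a, b)² + 2·max(a, b), so it wastes almost no integer range.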
Article
The goal of exemplar-based texture synthesis is to generate texture images that are visually similar to a given exemplar. Recently, promising results have been reported by methods relying on convolutional neural networks (ConvNets) pretrained on large-scale image datasets. However, these methods have difficulties in synthesizing image textures with non-local structures and extending to dynamic or sound textures. In this article, we present a conditional generative ConvNet (cgCNN) model which combines deep statistics and the probabilistic framework of the generative ConvNet (gCNN) model. Given a texture exemplar, cgCNN defines a conditional distribution using deep statistics of a ConvNet, and synthesizes new textures by sampling from the conditional distribution. In contrast to previous deep texture models, the proposed cgCNN does not rely on pre-trained ConvNets but instead learns the weights of ConvNets for each input exemplar. As a result, cgCNN can synthesize high quality dynamic, sound and image textures in a unified manner. We also explore the theoretical connections between our model and other texture models. Further investigations show that the cgCNN model can be easily generalized to texture expansion and inpainting. Extensive experiments demonstrate that our model achieves results better than or at least comparable to those of state-of-the-art methods.
Article
We present MATch, a method to automatically convert photographs of material samples into production-grade procedural material models. At the core of MATch is a new library DiffMat that provides differentiable building blocks for constructing procedural materials, and automatic translation of large-scale procedural models, with hundreds to thousands of node parameters, into differentiable node graphs. Combining these translated node graphs with a rendering layer yields an end-to-end differentiable pipeline that maps node graph parameters to rendered images. This facilitates the use of gradient-based optimization to estimate the parameters such that the resulting material, when rendered, matches the target image appearance, as quantified by a style transfer loss. In addition, we propose a deep neural feature-based graph selection and parameter initialization method that efficiently scales to a large number of procedural graphs. We evaluate our method on both rendered synthetic materials and real materials captured as flash photographs. We demonstrate that MATch can reconstruct more accurate, general, and complex procedural materials compared to the state-of-the-art. Moreover, by producing a procedural output, we unlock capabilities such as constructing arbitrary-resolution material maps and parametrically editing the material appearance.
Article
Rendering glinty details from specular microstructure enhances the level of realism, but previous methods require heavy storage for the high-resolution height field or normal map and associated acceleration structures. In this paper, we aim at dynamically generating theoretically infinite microstructure, preventing obvious tiling artifacts, while achieving constant storage cost. Unlike traditional texture synthesis, our method supports arbitrary point and range queries, and is essentially generating the microstructure implicitly. Our method fits the widely used microfacet rendering framework with multiple importance sampling (MIS), replacing the commonly used microfacet normal distribution functions (NDFs) like GGX by a detailed local solution, with a small amount of runtime performance overhead.
Article
Full-text available
We introduce a novel semi‐procedural approach that avoids drawbacks of procedural textures and leverages advantages of data‐driven texture synthesis. We split synthesis in two parts: 1) structure synthesis, based on a procedural parametric model, and 2) color details synthesis, which is data‐driven. The procedural model consists of a generic Point Process Texture Basis Function (PPTBF), which extends sparse convolution noises by defining rich convolution kernels. They consist of a window function multiplied with a correlated statistical mixture of Gabor functions, both designed to encapsulate a large span of common spatial stochastic structures, including cells, cracks, grains, scratches, spots, stains, and waves. Parameters can be prescribed automatically by supplying binary structure exemplars. As for noise‐based Gaussian textures, the PPTBF is used as a stand‐alone function, avoiding classification tasks that occur when handling multiple procedural assets. Because the PPTBF is based on a single set of parameters, it allows for continuous transitions between different visual structures and an easy control over its visual characteristics. Color is consistently synthesized from the exemplar using a multiscale parallel texture synthesis by numbers, constrained by the PPTBF. The generated textures are parametric, infinite and avoid repetition. The data‐driven part is automatic and guarantees strong visual resemblance with inputs.
Article
Full-text available
Recently, many studies have been devoted to texture synthesis using deep neural networks, because these networks excel at handling complex patterns in images. In these models, second-order statistics, such as the Gram matrix, are used to describe textures. Although these models have achieved promising results, the structure of their parametric space is still unclear. Consequently, it is difficult to use them to mix textures. This paper addresses the texture mixing problem by using a Gaussian scheme to interpolate deep statistics computed from deep neural networks. More precisely, we first reveal that the statistics used in existing deep models can be unified using a stationary Gaussian scheme. We then present a novel algorithm to mix these statistics by interpolating between Gaussian models using optimal transport. We further apply our scheme to Neural Style Transfer, where we can create mixed styles. The experiments demonstrate that our method outperforms a number of baselines. Because all the computations are implemented in closed form, our mixing algorithm adds only negligible time to the original texture synthesis procedure.
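The closed-form Gaussian interpolation that such a mixing scheme builds on is the Wasserstein-2 (displacement) geodesic between two Gaussians. A sketch for full-rank covariances; the function names are illustrative, and this is the generic formula rather than the paper's full pipeline over deep statistics.

```python
import numpy as np

def sqrtm_psd(A):
    """Symmetric PSD matrix square root via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * np.sqrt(np.clip(w, 0, None))) @ V.T

def w2_interpolate(m0, S0, m1, S1, t):
    """Point at time t on the Wasserstein-2 geodesic between the
    Gaussians N(m0, S0) and N(m1, S1): means interpolate linearly,
    covariances through the optimal transport map T."""
    m0, m1 = np.asarray(m0, float), np.asarray(m1, float)
    r0 = sqrtm_psd(S0)
    r0inv = np.linalg.inv(r0)
    T = r0inv @ sqrtm_psd(r0 @ S1 @ r0) @ r0inv   # OT map x -> T x (+ shift)
    C = (1 - t) * np.eye(len(m0)) + t * T
    return (1 - t) * m0 + t * m1, C @ S0 @ C.T
```

At t = 0 and t = 1 this recovers the two endpoint models exactly, and intermediate t gives the mixed Gaussian model used to drive synthesis.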
Preprint
Full-text available
This paper describes a novel approach for on-demand volumetric texture synthesis based on a deep learning framework that allows for the generation of high quality 3D data at interactive rates. Based on a few example images of textures, a generative network is trained to synthesize coherent portions of solid textures of arbitrary sizes that reproduce the visual characteristics of the examples along some directions. To cope with memory limitations and computation complexity that are inherent to both high resolution and 3D processing on the GPU, only 2D textures referred to as "slices" are generated during the training stage. These synthetic textures are compared to exemplar images via a perceptual loss function based on a pre-trained deep network. The proposed network is very light (fewer than 100k parameters); therefore it requires only a modest amount of training (i.e. a few hours) and is capable of very fast generation (around a second for $256^3$ voxels) on a single GPU. Integrated with a spatially seeded PRNG, the proposed generator network directly returns an RGB value given a set of 3D coordinates. The synthesized volumes have good visual results that are at least equivalent to the state-of-the-art patch based approaches. They are naturally seamlessly tileable and can be fully generated in parallel.
Article
Full-text available
This paper describes a novel approach for on‐demand volumetric texture synthesis based on a deep learning framework that allows for the generation of high‐quality three‐dimensional (3D) data at interactive rates. Based on a few example images of textures, a generative network is trained to synthesize coherent portions of solid textures of arbitrary sizes that reproduce the visual characteristics of the examples along some directions. To cope with memory limitations and computation complexity that are inherent to both high resolution and 3D processing on the GPU, only 2D textures referred to as ‘slices’ are generated during the training stage. These synthetic textures are compared to exemplar images via a perceptual loss function based on a pre‐trained deep network. The proposed network is very light (fewer than 100k parameters); therefore it requires only a modest amount of training (i.e. a few hours) and is capable of very fast generation (around a second for 256³ voxels) on a single GPU. Integrated with a spatially seeded pseudo‐random number generator (PRNG), the proposed generator network directly returns an RGB value given a set of 3D coordinates. The synthesized volumes have good visual results that are at least equivalent to the state‐of‐the‐art patch‐based approaches. They are naturally seamlessly tileable and can be fully generated in parallel.
Article
To deal with the increasing demand for complex visual details in virtual worlds, procedural methods for content authoring are an expanding field in Computer Graphics. Focusing on on-the-fly texture generation, we present in this paper a content authoring process based on Locally Controlled Spot Noise. Through control of both the impulse distribution and the spatially-defined kernel, this process can cover a wide range of appearances. In this context, we introduce a new kernel formulation that provides an efficient anisotropic filtering of the generated texture. Furthermore, our method allows users to interactively create the desired appearance by controlling both albedo and meso-geometry of the underlying surface, tackling on-the-fly normal map generation. Our method can be used as an artist friendly tool to model high-quality surface details with direct control over the final appearance in real-time.
Article
Generating natural textures is a challenging task in graphics and virtual reality, and leaf texture is an important part of natural textures. Unlike many other textures with repetitive and random patterns, leaf textures are closely tied to botanic structures, especially veins. Appearing as foliage in the wild, the variety of leaf textures contributes to the realism of virtual scenes. In this paper, we propose a novel leaf-texturing method that models the inherent relevance between structural features and pattern distributions. Based on the structure-guided model, we design an example-based algorithm to extract and generate leaf textures depending on venation structures. Global variations and local details are processed separately for multi-scale texture features. Experiments show that our model produces visually plausible leaf textures with variations, which can be easily applied to many other applications, including texture transfer between different leaf structures, aging effects and texture editing.
Article
Exemplar-based texture synthesis consists in producing new synthetic images which have the same perceptual characteristics as a given texture sample while exhibiting sufficient innovation (to avoid verbatim copy). In this paper, we propose to address this problem with a model obtained as local transformations of Gaussian random fields. The local transformations operate on 3 × 3 patches and are designed to solve a semi-discrete optimal transport problem in order to reimpose the patch distribution of the exemplar texture. The semi-discrete optimal transport problem is solved with a stochastic gradient algorithm, whose convergence speed is evaluated on several practical transport cases. After studying the properties of such transformed Gaussian random fields, we propose a multiscale extension of the model which aims at preserving the patch distribution of the exemplar texture at multiple scales. Experiments demonstrate that this multiscale model is able to synthesize structured textures while keeping several mathematical guarantees, and with low requirements in synthesis time and memory storage. In particular, a single patch optimal transport map is shown to be better than iterated nearest neighbor assignments in terms of statistical guarantees. Besides, once the model is estimated, the resulting synthesis algorithm is fast and highly parallel since it amounts to performing weighted nearest neighbor patch assignments at each scale.
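The semi-discrete dual problem mentioned above can indeed be optimized with a simple stochastic gradient loop. The sketch below learns one dual weight per target point so that the biased nearest-target assignment pushes a standard Gaussian source onto the uniform measure over the targets; the plain (non-averaged) SGD, the Gaussian source, and the uniform target masses are simplifying assumptions relative to the paper's multiscale patch setting, and the function names are illustrative.

```python
import numpy as np

def sdot_weights(targets, n_iters=20000, step=1.0, seed=0):
    """Stochastic gradient ascent on the semi-discrete OT dual:
    for each source sample x, the gradient is (target masses) minus
    the one-hot indicator of the Laguerre cell containing x."""
    rng = np.random.default_rng(seed)
    y = np.asarray(targets, float)           # (n, d) discrete target points
    n = len(y)
    v = np.zeros(n)
    for k in range(1, n_iters + 1):
        x = rng.standard_normal(y.shape[1])
        j = int(np.argmin(((y - x) ** 2).sum(axis=1) - v))  # Laguerre cell
        g = np.full(n, 1.0 / n)              # desired (uniform) masses
        g[j] -= 1.0                          # minus observed one-hot mass
        v += (step / np.sqrt(k)) * g         # decaying-step ascent
    return v

def transport(x, targets, v):
    """Apply the learned map: each source sample goes to the target
    whose biased squared distance is smallest."""
    y = np.asarray(targets, float)
    cost = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1) - v
    return np.argmin(cost, axis=1)
```

With all weights at zero this is plain nearest-neighbor assignment, which over- or under-fills the cells; the learned weights rebalance the cell masses, which is the statistical guarantee the abstract refers to.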
Conference Paper
A bilevel texture model is proposed, based on a local transform of a Gaussian random field. The core of this method relies on the optimal transport of a continuous Gaussian distribution towards the discrete exemplar patch distribution. The synthesis then simply consists in a fast post-processing of a Gaussian texture sample, boiling down to an improved nearest-neighbor patch matching, while offering theoretical guarantees on statistical compliance.

Keywords: Optimal transport · Texture synthesis · Patch distribution
Article
Inpainting consists in computing a plausible completion of missing parts of an image given the available content. In the restricted framework of texture images, the image can be seen as a realization of a random field model, which gives a stochastic formulation of image inpainting: on the masked exemplar one estimates a random texture model which can then be conditionally sampled in order to fill the hole. In this paper, an instance of such stochastic inpainting methods is proposed, dealing in particular with the case of Gaussian textures. First, a simple procedure is proposed for estimating a Gaussian texture model based on a masked exemplar, which, although quite naive, gives sufficient results for our inpainting purpose. Next, the conditional sampling step is solved with the traditional algorithm for Gaussian conditional simulation. The main difficulty of this step is to solve a very large linear system, which, in the case of stationary Gaussian textures, can be done efficiently with a conjugate gradient descent (using a Fourier representation of the covariance operator). Several experiments show that the corresponding inpainting algorithm is able to inpaint large holes (of any shape) in a texture, with a reasonable computational time. Moreover, several comparisons illustrate that the proposed approach performs better on texture images than state-of-the-art inpainting methods.
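The conditional sampling step described above uses the classical "conditioning by kriging" identity: draw an unconditional sample, then add a correction so it honors the observations exactly. A small dense-matrix sketch; the paper solves the large linear system with conjugate gradients and FFT-based covariance products, whereas the direct solves, the jitter terms, and the function name here are illustrative simplifications.

```python
import numpy as np

def conditional_sample(cov, known_idx, known_vals, rng):
    """Conditional Gaussian simulation: an unconditional sample plus
    the kriging correction C[:, o] C[o, o]^{-1} (obs - sim[o]),
    which leaves the observed entries equal to the observations."""
    n = cov.shape[0]
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(n))
    sim = L @ rng.standard_normal(n)          # unconditional sample
    Coo = cov[np.ix_(known_idx, known_idx)]   # observed-observed covariance
    w = np.linalg.solve(Coo + 1e-10 * np.eye(len(known_idx)),
                        np.asarray(known_vals, float) - sim[known_idx])
    return sim + cov[:, known_idx] @ w
```

Away from the observations the corrected sample smoothly reverts to the texture model's statistics, which is what lets the method fill arbitrarily shaped holes.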
Article
The realistic synthesis and rendering of film grain is a crucial goal for many amateur and professional photographers and film-makers whose artistic works require the authentic feel of analogue photography. The objective of this work is to propose an algorithm that reproduces the visual aspect of film grain texture on any digital image. Previous approaches to this problem either propose unrealistic models or simply blend scanned images of film grain with the digital image, in which case the result is inevitably limited by the quality and resolution of the initial scan. In this work, we introduce a stochastic model to approximate the physical reality of film grain, and propose a resolution-free rendering algorithm to simulate realistic film grain for any digital input image. By varying the parameters of this model, we can achieve a wide range of grain types. We demonstrate this by comparing our results with film grain examples from dedicated software, and show that our rendering results closely resemble these real film emulsions. In addition to realistic grain rendering, our resolution-free algorithm allows for any desired zoom factor, even down to the scale of the microscopic grains themselves.
Article
We propose a bi-layer representation for textures which is suitable for on-the-fly synthesis of unbounded textures from an input exemplar. The goal is to improve the variety of outputs while preserving plausible small-scale details. The insight is that many natural textures can be decomposed into a series of fine scale Gaussian patterns which have to be faithfully reproduced, and some non-homogeneous, larger scale structure which can be deformed to add variety. Our key contribution is a novel, bi-layer representation for such textures. It includes a model for spatially-varying Gaussian noise, together with a mechanism enabling synchronization with a structure layer. We propose an automatic method to instantiate our bi-layer model from an input exemplar. At the synthesis stage, the two layers are generated independently, synchronized and added, preserving the consistency of details even when the structure layer has been deformed to increase variety. We show on a variety of complex, real textures, that our method reduces repetition artifacts while preserving a coherent appearance.
Article
Full-text available
Exemplar-based texture synthesis is defined as the process of generating, from an input texture sample, new texture images that are perceptually equivalent to the input. Efros and Freeman's method is a non-parametric patch-based method which computes an output texture image by quilting together patches taken from the input sample. The main innovation of their work lies in the stitching technique, which significantly reduces the transition effect between patches. In this paper, we propose a detailed analysis and implementation of their work. We provide a complete mathematical description of the linear programming problem used for the quilting step as well as implementation details. Additionally we propose a partially parallel version of the quilting technique.
Article
Full-text available
This contribution is concerned with texture synthesis by example, the process of generating new texture images from a given sample. The Random Phase Noise algorithm presented here synthesizes a texture from an original image by simply randomizing its Fourier phase. It is able to reproduce textures which are characterized by their Fourier modulus, namely the random phase textures (or micro-textures).
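The Random Phase Noise algorithm sketched in this abstract is short enough to write directly. Below is a minimal, illustrative numpy version for a grayscale image (the function name is ours); rather than drawing uniform phases explicitly, it borrows the phase field of a white noise image, which is automatically Hermitian-symmetric so the output stays real:

```python
import numpy as np

def random_phase_noise(u, seed=None):
    """Keep the Fourier modulus of the exemplar u, randomize its phase."""
    rng = np.random.default_rng(seed)
    modulus = np.abs(np.fft.fft2(u))
    # The phase of the FFT of white noise is uniform and Hermitian-symmetric.
    w = np.fft.fft2(rng.standard_normal(u.shape))
    phase = w / np.abs(w)
    phase[0, 0] = 1.0            # preserve the mean (DC coefficient)
    return np.real(np.fft.ifft2(modulus * phase))
```

The output has, by construction, the same power spectrum and the same mean as the input, which is exactly the characterization of a random phase texture.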
Article
Full-text available
Local random-phase noise is an efficient noise model for procedural texturing. It is defined on a regular spatial grid by local noises, which are sums of cosines with random phase. Our model is versatile thanks to separate samplings in the spatial and spectral domains. Therefore, it encompasses Gabor noise and noise by Fourier series. A stratified spectral sampling allows for a faithful yet compact and efficient reproduction of an arbitrary power spectrum. Noise by example is therefore obtained faster than state-of-the-art techniques. As a second contribution we address texture by example and generate not only Gaussian patterns but also structured features present in the input. This is achieved by fixing the phase on some part of the spectrum. Generated textures are continuous and non-repetitive. Results show unprecedented framerates and a flexible visual result: users can modify noise parameters to interactively edit visual variants.
Article
Full-text available
In computer graphics, rendering visually detailed scenes is often achieved through texturing. We propose a method for on-the-fly non-periodic infinite texturing of surfaces based on a single image. Pattern repetition is avoided by defining patches within each texture whose content can be changed at runtime. In addition, we consistently manage multi-scale using one input image per represented scale. Undersampling artifacts are avoided by accounting for fine-scale features while colors are transferred between scales. Eventually, we allow for relief-enhanced rendering and provide a tool for intuitive creation of height maps. This is done using an ad-hoc local descriptor that measures feature self-similarity in order to propagate height values provided by the user for a few selected texels only. Thanks to the patch-based system, manipulated data are compact and our texturing approach is easy to implement on GPU. The multi-scale extension is capable of rendering finely detailed textures in real-time.
Article
Full-text available
This paper addresses the problem of modeling textures with Gaussian processes, focusing on color stationary textures that can be either static or dynamic. We detail two classes of Gaussian processes parameterized by a small number of compactly supported linear filters, the so-called textons. The first class extends the spot noise (SN) texture model to the dynamical setting. We estimate the space-time texton to fit a translation-invariant covariance from an input exemplar. The second class is a specialization of the auto-regressive (AR) dynamic texture method to the setting of space and time stationary textures. This allows one to parameterize the covariance with only a few spatial textons. The simplicity of these models allows us to tackle a more complex problem, texture mixing which, in our case, amounts to interpolate between Gaussian models. We use optimal transport to derive geodesic paths and barycenters between the models learned from an input data set. This allows the user to navigate inside the set of texture models and perform texture synthesis from each new interpolated model. Numerical results on a library of exemplars show the ability of our method to generate arbitrary interpolations among unstructured natural textures. Moreover, experiments on a database of stationary textures show that the methods, despite their simplicity, provide state of the art results on stationary dynamical texture synthesis and mixing.
Conference Paper
Full-text available
In this paper, we are interested in the mathematical analysis of the micro-textures that have the property to be perceptually invariant under the randomization of the phases of their Fourier Transform. We propose a compact representation of these textures by considering a special instance of them: the one that has identically null phases, and we call it “texton”. We show that this texton has many interesting properties, and in particular it is concentrated around the spatial origin. It appears to be a simple and useful tool for texture analysis and texture synthesis, and its definition can be extended to the case of color micro-textures.
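The zero-phase texton described here is a one-liner in Fourier terms: set all phases to zero, i.e. take the inverse FFT of the Fourier modulus. A minimal grayscale sketch (the function name is ours; centering with fftshift is for display only):

```python
import numpy as np

def zero_phase_texton(u):
    """Zero-phase summary of a micro-texture: same Fourier modulus as u,
    identically null phases; real, even, and concentrated at the origin."""
    modulus = np.abs(np.fft.fft2(u - u.mean()))   # drop the DC component
    t = np.real(np.fft.ifft2(modulus))            # null phases => real, even
    return np.fft.fftshift(t)                     # move the peak to the center
```

Since every Fourier coefficient contributes with phase zero, the value at the origin is the (normalized) sum of the moduli, which is why the texton attains its maximum there.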
Article
Full-text available
Procedural noise is a fundamental tool in Computer Graphics. However, designing noise patterns is hard. In this paper, we present Gabor noise by example, a method to estimate the parameters of bandwidth-quantized Gabor noise, a procedural noise function that can generate noise with an arbitrary power spectrum, from exemplar Gaussian textures, a class of textures that is completely characterized by their power spectrum. More specifically, we introduce (i) bandwidth-quantized Gabor noise, a generalization of Gabor noise to arbitrary power spectra that enables robust parameter estimation and efficient procedural evaluation; (ii) a robust parameter estimation technique for quantized-bandwidth Gabor noise, that automatically decomposes the noisy power spectrum estimate of an exemplar into a sparse sum of Gaussians using non-negative basis pursuit denoising; and (iii) an efficient procedural evaluation scheme for bandwidth-quantized Gabor noise, that uses multi-grid evaluation and importance sampling of the kernel parameters. Gabor noise by example preserves the traditional advantages of procedural noise, including a compact representation and a fast on-the-fly evaluation, and is mathematically well-founded.
Article
Full-text available
Figure 1: Several interactive applications relying on tile-based methods. From left to right: landscape modeling, non-photorealistic rendering, object distribution, point distribution, texture mapping, surface modeling and ornamental design. Abstract Over the last years, several techniques have been demonstrated that rely on tile-based methods. Figure 1 shows several examples. A lot of interactive applications could potentially benefit from these techniques. However, the state-of-the-art is scattered over several publications, and survey works are not available. In this class we give a detailed overview of tile-based methods in computer graphics. The class consists of five parts, which are briefly covered in the following paragraphs. Tile-Based Methods using Wang and Corner Tiles The first part of the class introduces tile-based methods in computer graphics based on Wang tiles and corner tiles. This part serves as a general introduction for the class, but also covers methods and applications based on Wang tiles and corner tiles. We introduce Wang tiles and corner tiles, and present several tiling algorithms. We discuss in detail tile-based texture mapping using graphics hardware, tile-based generation of Poisson disk distributions, and object distribution for procedural texturing. We briefly cover other applications such as sampling, non-photorealistic rendering, and geometric object distribution. The lecturer for the first part is Ares Lagae, who recently finished his PhD about tile-based methods in computer graphics [Lagae 2007].
Article
Full-text available
Dynamic textures are sequences of images of moving scenes that exhibit certain stationarity properties in time; these include sea-waves, smoke, foliage, whirlwind etc. We present a characterization of dynamic textures that poses the problems of modeling, learning, recognizing and synthesizing dynamic textures on a firm analytical footing. We borrow tools from system identification to capture the “essence” of dynamic textures; we do so by learning (i.e. identifying) models that are optimal in the sense of maximum likelihood or minimum prediction error variance. For the special case of second-order stationary processes, we identify the model sub-optimally in closed-form. Once learned, a model has predictive power and can be used for extrapolating synthetic sequences to infinite length with negligible computational cost. We present experimental evidence that, within our framework, even low-dimensional models can capture very complex visual phenomena.
Article
Full-text available
Solid noise is a fundamental tool in computer graphics. Surprisingly, no existing noise function supports both high-quality antialiasing and continuity across sharp edges. In this paper we show that a slicing approach is required to preserve continuity across sharp edges, and we present a new noise function that supports anisotropic filtering of sliced solid noise. This is made possible by individually filtering the slices of Gabor kernels, which requires the proper treatment of phase. This in turn leads to the introduction of the phase-augmented Gabor kernel and random-phase Gabor noise, our new noise function. We demonstrate that our new noise function supports both high-quality anti-aliasing and continuity across sharp edges, as well as anisotropy.
Conference Paper
Full-text available
Noise is an essential tool for texturing and modeling. Designing interesting textures with noise calls for accurate spectral control, since noise is best described in terms of spectral content. Texturing requires that noise can be easily mapped to a surface, while high-quality rendering requires anisotropic filtering. A noise function that is procedural and fast to evaluate offers several additional advantages. Unfortunately, no existing noise combines all of these properties. In this paper we introduce a noise based on sparse convolution and the Gabor kernel that enables all of these properties. Our noise offers accurate spectral control with intuitive parameters such as orientation, principal frequency and bandwidth. Our noise supports two-dimensional and solid noise, but we also introduce setup-free surface noise. This is a method for mapping noise onto a surface, complementary to solid noise, that maintains the appearance of the noise pattern along the object and does not require a texture parameterization. Our approach requires only a few bytes of storage, does not use discretely sampled data, and is nonperiodic. It supports anisotropy and anisotropic filtering. We demonstrate our noise using an interactive tool for noise design.
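As a concrete illustration of sparse convolution with a Gabor kernel, here is a naive O(pixels × impulses) grid evaluation. It is only a sketch: a real implementation evaluates truncated kernels locally on a virtual grid of cells, and all parameter values below (principal frequency F0, orientation omega, bandwidth a) are illustrative:

```python
import numpy as np

def gabor_noise(size, n_impulses, F0=0.0625, omega=0.785, a=0.05, seed=0):
    """Naive sparse-convolution Gabor noise: randomly weighted Gabor
    kernels (Gaussian envelope times oriented cosine) at random positions."""
    rng = np.random.default_rng(seed)
    ys, xs = np.mgrid[0:size, 0:size].astype(float)
    noise = np.zeros((size, size))
    for _ in range(n_impulses):
        w = rng.choice([-1.0, 1.0])          # random impulse weight
        px, py = rng.uniform(0, size, 2)     # random impulse position
        dx, dy = xs - px, ys - py
        envelope = np.exp(-np.pi * a * a * (dx * dx + dy * dy))
        harmonic = np.cos(2 * np.pi * F0
                          * (dx * np.cos(omega) + dy * np.sin(omega)))
        noise += w * envelope * harmonic
    return noise
```

The spectral-control property quoted in the abstract comes from the kernel: the power spectrum of the noise concentrates around frequency F0 in direction omega, with a bandwidth governed by a.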
Article
Full-text available
This paper explores the mathematical and algorithmic properties of two sample-based texture models: random phase noise (RPN) and asymptotic discrete spot noise (ADSN). These models permit to synthesize random phase textures. They arguably derive from linearized versions of two early Julesz texture discrimination theories. The ensuing mathematical analysis shows that, contrarily to some statements in the literature, RPN and ADSN are different stochastic processes. Nevertheless, numerous experiments also suggest that the textures obtained by these algorithms from identical samples are perceptually similar. The relevance of this study is enhanced by three technical contributions providing solutions to obstacles that prevented the use of RPN or ADSN to emulate textures. First, RPN and ADSN algorithms are extended to color images. Second, a preprocessing is proposed to avoid artifacts due to the nonperiodicity of real-world texture samples. Finally, the method is extended to synthesize textures with arbitrary size from a given sample.
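ADSN in particular admits a very compact sketch (ours, assuming a grayscale exemplar and a periodic output domain of the same size): convolve white Gaussian noise with the normalized, centered exemplar.

```python
import numpy as np

def adsn(u, seed=None):
    """Asymptotic Discrete Spot Noise: a Gaussian field with the same mean
    and autocorrelation as the exemplar u (periodic convolution via FFT)."""
    rng = np.random.default_rng(seed)
    m = u.mean()
    spot = (u - m) / np.sqrt(u.size)              # normalized, centered spot
    w = rng.standard_normal(u.shape)              # white Gaussian noise
    return m + np.real(np.fft.ifft2(np.fft.fft2(spot) * np.fft.fft2(w)))
```

Note the difference from RPN: ADSN draws new Fourier moduli (Rayleigh-distributed around the exemplar's) rather than keeping them fixed, which is precisely why the two processes are distinct despite looking perceptually similar.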
Article
Full-text available
When the Discrete Fourier Transform of an image is computed, the image is implicitly assumed to be periodic. Since there is no reason for opposite borders to be alike, the "periodic" image generally presents strong discontinuities across the frame border. These edge effects cause several artifacts in the Fourier Transform, in particular a well-known "cross" structure made of high-energy coefficients along the axes, which can have strong consequences on image processing or image analysis techniques based on the image spectrum (including interpolation, texture analysis, image quality assessment, etc.). In this paper, we show that an image can be decomposed into a sum of a "periodic component" and a "smooth component", which brings a simple and computationally efficient answer to this problem. We discuss the interest of such a decomposition on several applications.
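The decomposition admits a short FFT implementation. The sketch below is our reading of the construction: build a "border" image from the discontinuities between opposite edges, obtain the smooth component by solving a periodic Poisson equation in Fourier, and subtract it from the input (variable names are ours):

```python
import numpy as np

def periodic_smooth(u):
    """Periodic-plus-smooth decomposition u = p + s: p has no cross
    artifact in its spectrum, s absorbs the border discontinuities."""
    u = u.astype(float)
    M, N = u.shape
    v = np.zeros_like(u)                       # border discrepancy image
    v[0, :] += u[-1, :] - u[0, :]
    v[-1, :] += u[0, :] - u[-1, :]
    v[:, 0] += u[:, -1] - u[:, 0]
    v[:, -1] += u[:, 0] - u[:, -1]
    q = np.arange(M).reshape(M, 1)
    r = np.arange(N).reshape(1, N)
    # Symbol of the periodic discrete Laplacian.
    denom = 2 * np.cos(2 * np.pi * q / M) + 2 * np.cos(2 * np.pi * r / N) - 4
    denom[0, 0] = 1.0                          # avoid 0/0; DC of s set below
    s_hat = np.fft.fft2(v) / denom
    s_hat[0, 0] = 0.0                          # s has zero mean
    s = np.real(np.fft.ifft2(s_hat))
    return u - s, s
```

By construction the periodic discrete Laplacian of s equals v, so all border discontinuities of u are carried by the smooth component and p extends periodically without jumps.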
Article
Full-text available
Description of a class of simple, extremely fast random number generators (RNGs) with periods 2^k − 1 for k = 32, 64, 96, 128, 160, 192. These RNGs seem to pass tests of randomness very well.
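A member of this RNG class fits in a few lines. Below is one 32-bit step with the well-known shift triple (13, 17, 5), masked to 32 bits since Python integers are unbounded:

```python
def xorshift32(state):
    """One step of a 32-bit xorshift RNG; for suitable shift triples such
    as (13, 17, 5) the sequence of nonzero states has period 2^32 - 1."""
    state ^= (state << 13) & 0xFFFFFFFF
    state ^= state >> 17
    state ^= (state << 5) & 0xFFFFFFFF
    return state & 0xFFFFFFFF
```

Because each step is a bijection on nonzero 32-bit states, the sequence never revisits a state before exhausting the full period, which makes such generators convenient for cheap per-pixel randomness in procedural noise.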
Article
Full-text available
We present a simple stochastic system for non-periodically tiling the plane with a small set of Wang Tiles. The tiles may be filled with patterns or geometry that, when assembled, create a continuous representation. The primary advantage of using Wang Tiles is that once the tiles are filled, large expanses of non-periodic texture (or patterns or geometry) can be created as needed very efficiently at runtime. Wang Tiles are squares in which each edge is assigned a color. A valid tiling requires all shared edges between tiles to have matching colors. We present a new stochastic algorithm to nonperiodically tile the plane with a small set of Wang Tiles at runtime. Furthermore, we present new methods to fill the tiles with 2D texture, 2D Poisson distributions, or 3D geometry to efficiently create at runtime as much non-periodic texture (or distributions, or geometry) as needed. We leverage previous texture synthesis work and adapt it to fill Wang Tiles. We demonstrate how to fill individual tiles with Poisson distributions that maintain their statistical properties when combined. These are used to generate a large arrangement of plants or other objects on a terrain. We show how such environments can be rendered efficiently by pre-lighting the individual Wang Tiles containing the geometry. We also extend the definition of Wang Tiles to include a coding of the tile corners to allow discrete objects to overlap more than one edge. The larger set of tiles provides increased degrees of freedom.
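The core of the stochastic tiling can be sketched as a scanline pass that, at each cell, picks uniformly among the tiles whose west and north edge colors match the already placed neighbors. This only illustrates the edge-matching selection (the tile-filling and corner-coded extensions of the paper are not shown), and the representation of a tile as an (N, E, S, W) color tuple is our own convention:

```python
import itertools
import random

def wang_tiling(rows, cols, colors=2, seed=0):
    """Scanline stochastic Wang tiling over the complete tile set, which
    always contains a match for any (west, north) edge constraint."""
    rng = random.Random(seed)
    tiles = list(itertools.product(range(colors), repeat=4))  # (N, E, S, W)
    grid = [[None] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            west = grid[i][j - 1][1] if j > 0 else None   # east edge of left tile
            north = grid[i - 1][j][2] if i > 0 else None  # south edge of tile above
            candidates = [t for t in tiles
                          if (west is None or t[3] == west)
                          and (north is None or t[0] == north)]
            grid[i][j] = rng.choice(candidates)
    return grid
```

A practical tile set is of course much smaller than the complete one; the paper's contribution is precisely choosing and filling a small set so that this runtime selection still never gets stuck.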
Article
A simple expression for the characteristic functional of generalized shot noise is developed. Through expansions in terms of functional derivatives this yields expressions for moment functions of all orders. A central limit theorem also follows. Several examples are discussed.
Article
The distance from Gaussianity of the shot noise process x(t) = ∑i h(t, ti) is considered, where ti are the random times of a Poisson process with average density λ(t). With F(x) the distribution function of x(t) and G(x) that of a normal process with the same mean and variance as x(t), it is shown that |F(x) − G(x)| ≤ 4I3/I2 · (2π/I2)^(1/2), where In = ∫−∞^∞ λ(τ)|h(t, τ)|^n dτ. If the process x(t) is stationary with λ(t) = λ and h(t, τ) = h(t − τ), and the function h(t) is bandlimited by ωc, then the above yields a corresponding explicit bound.
Article
Gaussian textures can be easily simulated by convolving an initial image sample with a conveniently normalized white noise. However, this procedure is not very flexible (it does not allow for non-uniform grids in particular), and can become computationally heavy for large domains. We here propose an algorithm that summarizes a texture sample into a synthesis-oriented texton, that is, a small image for which the discrete spot noise simulation (summed and normalized randomly-shifted copies of the texton) is more efficient than the classical convolution algorithm. Using this synthesis-oriented texture summary, Gaussian textures can be generated in a faster, simpler, and more flexible way.
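Given a precomputed texton, the discrete spot noise simulation itself is a short loop. This is a toy periodic-domain version (ours); normalizing by the square root of the impulse density is one standard convention for matching the Gaussian limit, assuming the texton has zero sum:

```python
import numpy as np

def spot_noise(texton, mean, size, n_impulses, seed=0):
    """Sum randomly shifted copies of a small texton over a periodic
    domain, normalized so the variance approaches the texton's energy."""
    rng = np.random.default_rng(seed)
    h, w = texton.shape
    out = np.zeros((size, size))
    for _ in range(n_impulses):
        py, px = rng.integers(0, size, 2)          # random toroidal shift
        ys = (py + np.arange(h)) % size
        xs = (px + np.arange(w)) % size
        out[np.ix_(ys, xs)] += texton              # paste one copy
    density = n_impulses / size ** 2
    return mean + out / np.sqrt(density)
```

The efficiency claim of the abstract is visible here: each pasted copy touches only the small texton support, instead of the full-size convolution required by the classical algorithm.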
Article
This article describes a new type of data: high-precision raw data coming from the acquisition of objects by a 3D laser scanner.
Conference Paper
Texture mapping has been a fundamental feature for commodity graphics hardware. However, a key challenge for texture mapping is how to store and manage large textures on graphics processors. In this paper, we present a tile-based texture mapping algorithm by which we only have to physically store a small set of texture tiles instead of a large texture. Our algorithm generates an arbitrarily large and non-periodic virtual texture map from the small set of stored texture tiles. Because we only have to store a small set of tiles, it minimizes the storage requirement to a small constant, regardless of the size of the virtual texture. In addition, the tiles are generated and packed into a single texture map, so that the hardware filtering of this packed texture map corresponds directly to the filtering of the virtual texture. We implement our algorithm as a fragment program, and demonstrate performance on latest graphics processors.
Conference Paper
The use of stochastic texture for the visualization of scalar and vector fields over surfaces is discussed. Current techniques for texture synthesis are not suitable, because they do not provide local control, and are not suited for the design of textures. A new technique, spot noise, is presented that does provide these features. Spot noise is synthesized by addition of randomly weighted and positioned spots. Local control of the texture is realized by variation of the spot. The spot is a useful primitive for texture design, because, in general, the relations between features of the spot and features of the texture are straightforward. Various examples and applications are shown. Spot noise lends itself well to the synthesis of texture over curved surfaces, and is therefore an alternative to solid texturing. The relations of spot noise with a variety of other techniques, such as random faults, filtering, sparse convolution, and particle systems, are discussed. It appears that spot noise provides a new perspective on those techniques.
Conference Paper
Solid texturing is a powerful way to add detail to the surface of rendered objects. Perlin's "noise" is a 3D basis function used in some of the most dramatic and useful surface texture algorithms. We present a new basis function which complements Perlin noise, based on a partitioning of space into a random array of cells. We have used this new basis function to produce textured surfaces resembling flagstone-like tiled areas, organic crusty skin, crumpled paper, ice, rock, mountain ranges, and craters. The new basis function can be computed efficiently without the need for precalculation or table storage. In this paper, we propose a new set of related texture basis functions. They are based on scattering "feature points" throughout space and building a scalar function based on the distribution of the local points. The use of distributed points in space for texturing is not new; "bombing" is a technique which places geometric features such as spheres throughout space, which generates patterns on surfaces that cut through the volume of these features, forming polkadots, for example. (9, 5) This technique is not a basis function, and is significantly less useful than noise. Lewis also used points scattered throughout space for texturing. His method forms a basis function, but it is better described as an alternative method of generating a noise basis than a new basis function with a different appearance. (3) In this paper, we introduce a new texture basis function that has interesting behavior, and can be evaluated efficiently without any precomputation. After defining the function and its implementation, we show some applications demonstrating its utility.
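The simplest instance of this cellular basis is the F1 function: the distance from the evaluation point to the nearest scattered feature point. A naive O(pixels × points) 2D sketch (ours; production versions restrict the search to neighboring cells):

```python
import numpy as np

def worley_f1(size, n_points, seed=0):
    """Cellular texture basis: F1(x) = distance to the nearest of a set
    of scattered feature points, evaluated on a size x size grid."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(0, size, (n_points, 2))     # scattered feature points
    ys, xs = np.mgrid[0:size, 0:size].astype(float)
    # Distance from every pixel to every feature point, then keep the min.
    d = np.sqrt((xs[..., None] - pts[:, 0]) ** 2
                + (ys[..., None] - pts[:, 1]) ** 2)
    return d.min(axis=-1)
```

Thresholding or combining F1, F2, ... of this field yields the flagstone, crust, and crater patterns the abstract describes.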
Article
Image textures can easily be created using texture synthesis by example. However, creating procedural textures is much more difficult. This is unfortunate, since procedural textures have significant advantages over image textures. In this paper we address the problem of texture synthesis by example for procedural textures. We introduce a method for procedural multiresolution noise by example. Our method computes the weights of a procedural multiresolution noise, a simple but common class of procedural textures, from an example. We illustrate this method by using it as a key component in a method for texture synthesis by example for isotropic stochastic procedural textures. Our method significantly facilitates the creation of these procedural textures.
Article
Programmable graphics hardware makes it possible to generate procedural noise textures on the fly for interactive rendering. However, filtering and antialiasing procedural noise involves a tradeoff between aliasing artifacts and loss of detail. In this paper we present a technique, targeted at interactive applications, that provides high-quality anisotropic filtering for noise textures. We generate noise tiles directly in the frequency domain by partitioning the frequency domain into oriented subbands. We then compute weighted sums of the subband textures to accurately approximate noise with a desired spectrum. This allows us to achieve high-quality anisotropic filtering. Our approach is based solely on 2D textures, avoiding the memory overhead of techniques based on 3D noise tiles. We devise a technique to compensate for texture distortions to generate uniform noise on arbitrary meshes. We develop a GPU-based implementation of our technique that achieves similar rendering performance as state-of-the-art algorithms for procedural noise. In addition, it provides anisotropic filtering and achieves superior image quality.
Article
Noise functions are an essential building block for writing procedural shaders in 3D computer graphics. The original noise function introduced by Ken Perlin is still the most popular because it is simple and fast, and many spectacular images have been made with it. Nevertheless, it is prone to problems with aliasing and detail loss. In this paper we analyze these problems and show that they are particularly severe when 3D noise is used to texture a 2D surface. We use the theory of wavelets to create a new class of simple and fast noise functions that avoid these problems.
Article
The quality and speed of most texture synthesis algorithms depend on a 2D input sample that is small and contains enough texture variations. However, little research exists on how to acquire such sample. For homogeneous patterns this can be achieved via manual cropping, but no adequate solution exists for inhomogeneous or globally varying textures, i.e. patterns that are local but not stationary, such as rusting over an iron statue with appearance conditioned on varying moisture levels. We present inverse texture synthesis to address this issue. Our inverse synthesis runs in the opposite direction with respect to traditional forward synthesis: given a large globally varying texture, our algorithm automatically produces a small texture compaction that best summarizes the original. This small compaction can be used to reconstruct the original texture or to re-synthesize new textures under user-supplied controls. More important, our technique allows real-time synthesis of globally varying textures on a GPU, where the texture memory is usually too small for large textures. We propose an optimization framework for inverse texture synthesis, ensuring that each input region is properly encoded in the output compaction. Our optimization process also automatically computes orientation fields for anisotropic textures containing both low- and high-frequency regions, a situation difficult to handle via existing techniques.
Article
We present an algorithm for synthesizing textures from an input sample. This patch-based sampling algorithm is fast and it makes high-quality texture synthesis a real-time process. For generating textures of the same size and comparable quality, patch-based sampling is orders of magnitude faster than existing algorithms. The patch-based sampling algorithm works well for a wide variety of textures ranging from regular to stochastic. By sampling patches according to a nonparametric estimation of the local conditional MRF density function, we avoid mismatching features across patch boundaries. We also experimented with documented cases for which pixel-based nonparametric sampling algorithms cease to be effective but our algorithm continues to work well.
Article
Procedural noise functions are widely used in computer graphics, from off-line rendering in movie production to interactive video games. The ability to add complex and intricate details at low memory and authoring cost is one of its main attractions. This survey is motivated by the inherent importance of noise in graphics, the widespread use of noise in industry and the fact that many recent research developments justify the need for an up-to-date survey. Our goal is to provide both a valuable entry point into the field of procedural noise functions, as well as a comprehensive view of the field to the informed reader. In this report, we cover procedural noise functions in all their aspects. We outline recent advances in research on this topic, discussing and comparing recent and well-established methods. We first formally define procedural noise functions based on stochastic processes and then classify and review existing procedural noise functions. We discuss how procedural noise functions are used for modelling and how they are applied to surfaces. We then introduce analysis tools and apply them to evaluate and compare the major approaches to noise generation. We finally identify several directions for future work.
Article
We have recently proposed a new procedural noise function, Gabor noise, which offers a combination of properties not found in the existing noise functions. In this paper, we present three significant improvements to Gabor noise: 1) an isotropic kernel for Gabor noise, which speeds up isotropic Gabor noise with a factor of roughly two, 2) an error analysis of Gabor noise, which relates the kernel truncation radius to the relative error of the noise, and 3) spatially varying Gabor noise, which enables spatial variation of all noise parameters. These improvements make Gabor noise an even more attractive alternative for the existing noise functions.
Article
We present a new interactive method to texture complex geometries at very high resolution, while using little memory and without the need for a global planar parameterization. We rely on small texture elements, the texture sprites, locally splatted onto the surface to define a composite texture. The sprites can be arbitrarily blended to create complex surface appearances. Their attributes (position, size, texture id) can be dynamically updated, thus providing a convenient framework for interactive editing and animated textures. We demonstrate the flexibility of our method by creating new surface aspects difficult to achieve with other methods. Each sprite is described by a small set of attributes which is stored in a hierarchical structure surrounding the object's surface. The patterns supported by the sprites are stored only once. The whole data structure is compactly encoded into GPU memory. At run time, it is accessed by a fragment program which computes the final appearance of a surface point from all the sprites covering it. The overall memory cost of the structure is very low compared to the resulting texturing resolutions. Rendering is done in real-time. The resulting texture is linearly interpolated and altered.
Article
The authors introduce the concept of a Pixel Stream Editor. This forms the basis for an interactive synthesizer for designing highly realistic Computer Generated Imagery. The designer works in an interactive Very High Level programming environment which provides a very fast concept/implement/view iteration cycle. Naturalistic visual complexity is built up by composition of non-linear functions, as opposed to the more conventional texture mapping or growth model algorithms. Powerful primitives are included for creating controlled stochastic effects. The concept of 'solid texture' is introduced to the field of CGI. The authors have used this system to create very convincing representations of clouds, fire, water, stars, marble, wood, rock, soap films and crystals. The algorithms created with this paradigm are generally extremely fast, highly realistic, and asynchronously parallelizable at the pixel level.
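A common descendant of this system is the octave sum ("turbulence") built on a lattice noise. The sketch below uses bilinearly interpolated value noise as the basis; Perlin's original basis is gradient noise, so value noise merely stands in to keep the example short, and all names and parameters are ours:

```python
import numpy as np

def value_noise_2d(size, cell, rng):
    """One octave: a lattice of random values, smoothly interpolated."""
    n = size // cell + 2
    lattice = rng.standard_normal((n, n))
    t = np.linspace(0, size / cell, size, endpoint=False)
    i = t.astype(int)
    f = t - i
    f = f * f * (3 - 2 * f)                    # smoothstep fade
    a = lattice[np.ix_(i, i)]
    b = lattice[np.ix_(i, i + 1)]
    c = lattice[np.ix_(i + 1, i)]
    d = lattice[np.ix_(i + 1, i + 1)]
    fy, fx = f[:, None], f[None, :]
    return (a * (1 - fx) + b * fx) * (1 - fy) + (c * (1 - fx) + d * fx) * fy

def turbulence(size=64, octaves=4, seed=0):
    """Perlin-style turbulence: sum of |noise| octaves at doubling
    frequencies and halving amplitudes."""
    rng = np.random.default_rng(seed)
    out = np.zeros((size, size))
    for k in range(octaves):
        cell = max(size >> (k + 2), 1)         # halve the cell size per octave
        out += np.abs(value_noise_2d(size, cell, rng)) / (2 ** k)
    return out
```

Feeding such a field into a non-linear color map (e.g. a sine for marble veins) is the kind of function composition the abstract refers to.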
Article
We propose to model the statistics of natural images using the large class of stochastic processes called Infinitely Divisible Cascades (IDC). IDC were first introduced in one dimension to provide multifractal time series modeling the so-called intermittency phenomenon in hydrodynamic turbulence. We have extended the definition of scalar infinitely divisible cascades from 1 to N dimensions and commented on the relevance of such a model to fully developed turbulence in [1]. In this article, we focus on the particular two-dimensional case. IDC appear to be good candidates for modeling the statistics of natural images. They share most of their usual properties and appear to be consistent with several independent theoretical and experimental approaches in the literature. We point out the interest of IDC for applications to procedural texture synthesis.
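A crude discrete analogue of a multiplicative cascade conveys the construction: a coarse field is repeatedly subdivided, each child cell multiplying its parent's value by an independent random weight. The dyadic subdivision and log-normal multipliers below are simplifying assumptions (true IDC are continuous-scale and infinitely divisible), so this is only an intuition-building sketch:

```python
import random

def multiplicative_cascade(levels, seed=0):
    """Dyadic 2D multiplicative cascade: each refinement level multiplies
    inherited values by independent mean-one log-normal weights."""
    rng = random.Random(seed)
    grid = [[1.0]]
    for _ in range(levels):
        n = len(grid)
        new = [[0.0] * (2 * n) for _ in range(2 * n)]
        for i in range(2 * n):
            for j in range(2 * n):
                # mu = -sigma^2/2 makes the multiplier mean-one
                w = rng.lognormvariate(mu=-0.02, sigma=0.2)
                new[i][j] = grid[i // 2][j // 2] * w
        grid = new
    return grid

field = multiplicative_cascade(4)   # 16 x 16 positive multifractal-like field
```

Iterating the multiplication across scales is what produces the heavy-tailed, scale-coupled statistics that plain additive noise lacks.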
Article
We present a simple image-based method of generating novel visual appearance in which a new image is synthesized by stitching together small patches of existing images. We call this process image quilting. First, we use quilting as a fast and very simple texture synthesis algorithm which produces surprisingly good results for a wide range of textures. Second, we extend the algorithm to perform texture transfer -- rendering an object with a texture taken from a different object. More generally, we demonstrate how an image can be re-rendered in the style of a different image. The method works directly on the images and does not require 3D information.
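The quilting idea can be sketched in one dimension: each new patch is chosen so that its overlap region best matches what has been synthesized so far. The minimum-error boundary cut of the full algorithm is omitted here, and the candidate sampling is an implementation choice, so this is an illustrative reduction rather than Efros and Freeman's method:

```python
import random

def ssd(a, b):
    """Sum of squared differences between two equal-length pixel lists."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def quilt_row(source, patch_w, overlap, n_patches, seed=0):
    """Greedy 1D quilting: repeatedly append the candidate patch whose
    left `overlap` pixels best match the current right edge."""
    rng = random.Random(seed)
    starts = list(range(len(source) - patch_w + 1))
    out = list(source[:patch_w])                      # seed with a first patch
    for _ in range(n_patches - 1):
        edge = out[-overlap:]
        candidates = rng.sample(starts, min(20, len(starts)))
        best = min(candidates, key=lambda s: ssd(source[s:s + overlap], edge))
        out.extend(source[best + overlap:best + patch_w])
    return out

texture = [(i * 7) % 10 for i in range(100)]          # toy periodic "texture"
row = quilt_row(texture, patch_w=12, overlap=4, n_patches=5)
```

Because patches are copied verbatim from the input, local structure is preserved exactly; only the seams need to be hidden, which is what the overlap matching (and, in the full algorithm, the boundary cut) is for.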
Article
Numerous real-time applications such as computer games or flight simulators require non-repetitive high-resolution texturing on large landscapes. We propose an algorithm which procedurally determines the texture value at any surface location by aperiodically combining provided patterns according to user-defined controls such as a probability distribution (possibly non-stationary). Our algorithm can be implemented on programmable hardware by taking advantage of the texture indirection ability of recent graphics boards. We use explicit and virtual indirection tables to determine the pattern to apply at each pixel as well as its attributes (displacement, scaling, time...). This provides the programmer with a very high resolution virtual texture with nice properties: low memory consumption, no periodicity, control of the statistics, numerous control parameters (which can be edited on the fly)... Our representation consists of building blocks that we combine in order to illustrate various convenient texture modalities such as aperiodic tiling, sparse convolution, domain transitions and animated textures.
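The pattern-indirection idea can be sketched with a hash standing in for the indirection table: the tile coordinate selects a pattern pseudo-randomly, and the intra-tile offset addresses a texel inside it. The hash constants and the flat test tiles are illustrative assumptions, not the paper's table layout:

```python
def tile_hash(i, j):
    """Cheap deterministic hash of tile coordinates; stands in for the
    paper's explicit/virtual indirection tables."""
    h = (i * 73856093) ^ (j * 19349663)
    return h & 0x7FFFFFFF

def virtual_texture(u, v, patterns, tile_size=16):
    """Aperiodic texturing: the tile index picks a stored pattern, the
    intra-tile offset addresses a texel inside it. `patterns` is a list
    of tile_size x tile_size pixel grids, each stored only once."""
    ti, tj = u // tile_size, v // tile_size      # which tile
    pu, pv = u % tile_size, v % tile_size        # texel inside the tile
    pattern = patterns[tile_hash(ti, tj) % len(patterns)]
    return pattern[pv][pu]

patterns = [[[p] * 16 for _ in range(16)] for p in range(4)]  # 4 flat test tiles
```

The virtual texture is arbitrarily large while the stored data is only the small pattern set plus the (here implicit) indirection table, which is where the low memory cost comes from.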
Article
The problem of digital painting is considered from a signal processing viewpoint, and is reconsidered as a problem of directed texture synthesis. It is an important characteristic of natural texture that detail may be evident at many scales, and the detail at each scale may have distinct characteristics. A "sparse convolution" procedure for generating random textures with arbitrary spectral content is described. The capability of specifying the texture spectrum (and thus the amount of detail at each scale) is an improvement over stochastic texture synthesis processes which are scale-bound or which have a prescribed 1/f spectrum. This spectral texture synthesis procedure provides the basis for a digital paint system which rivals the textural sophistication of traditional artistic media. Applications in terrain synthesis and texturing computer-rendered objects are also shown.
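The sparse convolution procedure itself is compact: scatter random impulses, then sum copies of a kernel centered at them, so the noise spectrum is the kernel's spectrum. The Gabor-like kernel and the per-cell hashed impulse generation below are one common way to make evaluation local and deterministic, not necessarily Lewis's exact formulation:

```python
import math
import random

def gabor_kernel(dx, dy, freq=0.5, sigma=2.0):
    """Gaussian envelope times a cosine: the kernel's spectrum (a bump at
    `freq`) directly controls the spectrum of the resulting noise."""
    return math.exp(-(dx * dx + dy * dy) / (2 * sigma * sigma)) * math.cos(2 * math.pi * freq * dx)

def sparse_convolution_noise(x, y, impulses_per_cell=4, radius=6.0, seed=0):
    """Sum kernel copies centered at randomly scattered impulse points.
    Impulses are generated per grid cell from a hashed RNG, so evaluation
    touches only the 3x3 neighboring cells and is fully deterministic."""
    total = 0.0
    cx, cy = int(x // radius), int(y // radius)
    for i in range(cx - 1, cx + 2):
        for j in range(cy - 1, cy + 2):
            rng = random.Random((i * 73856093) ^ (j * 19349663) ^ seed)
            for _ in range(impulses_per_cell):
                px = (i + rng.random()) * radius     # impulse position in cell
                py = (j + rng.random()) * radius
                w = rng.uniform(-1.0, 1.0)           # random impulse weight
                total += w * gabor_kernel(x - px, y - py)
    return total
```

Texton noise follows the same sparse-convolution template, but replaces the analytic kernel with a small precomputed texture (the texton) fetched from GPU memory.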
A texton for random phase and Gaussian textures
  • A Desolneux
  • L Moisan
  • S Ronsin
Procedural descriptions of anisotropic noisy textures by example
  • Gilet G.
Spot noise texture synthesis for data visualization. Proceedings of the 18th Annual Conference on Computer Graphics and Interactive Techniques
  • J J Van Wijk
Image quilting for texture synthesis and transfer. Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques
  • A A Efros
  • W T Freeman
Texture synthesis for digital painting. Proceedings of the 11th Annual Conference on Computer Graphics and Interactive Techniques
  • J.-P Lewis
Quick and easy GPU random numbers in D3D11
  • Reed N.
An image synthesizer. Proceedings of the 12th Annual Conference on Computer Graphics and Interactive Techniques
  • K Perlin
A cellular texture basis function. Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques
  • S Worley
Modèle Séquentiel de Partition Aléatoire
  • G. Matheron
A compact representation of random phase and Gaussian textures
  • Desolneux