Figure 6 - uploaded by He Zhang
Fingerprint separation results. We compare the visual performance of our method with that of BNN, MCA and A-MCA based on the recovered whorl and delta parts.


Source publication

Context in source publication

Context 1
... first row of Figure 6 shows the input latent fingerprint and the learned texture (fingerprint) filters. The learned fingerprint filters show some characteristics unique to fingerprints. ...

Similar publications

Preprint
Full-text available
In this paper we propose two algorithms for completing a tensor with partial observations of its entries. Our algorithms are based on a low rank prior under the Tensor Train (TT) decomposition. We improve existing nuclear norm methods for tensor completion problems by directly solving the convex nuclear norm model. Whereas simplifications are alway...
Article
Full-text available
Recent studies have shown convolutional neural networks (CNNs) can be trained to perform modal decomposition using intensity images of optical fields. A fundamental limitation of these techniques is that the modal phases cannot be uniquely calculated using a single intensity image. The knowledge of modal phases is crucial for wavefront sensing, ali...

Citations

... Other methods include machine learning-based approaches [33][34][35], approaches based on patch recurrence [36,37] or the very recent generalized smoothing framework [38,39]. Finally, it is also possible to extract the texture first. ...
Article
Full-text available
We consider a variational approach to the problem of structure + texture decomposition (also known as cartoon + texture decomposition). As usual for many variational problems in image analysis and processing, the energy we minimize consists of two terms: a data-fitting term and a regularization term. The main feature of our approach consists of choosing parameters in the regularization term adaptively. Namely, the regularization term is given by a weighted $p(\cdot)$-Dirichlet-based energy $\int a(\boldsymbol{x})\,|\nabla u|^{p(\boldsymbol{x})}$, where the weight and exponent functions are determined from an analysis of the spectral content of the image curvature. Our numerical experiments, both qualitative and quantitative, suggest that the proposed approach delivers better results than state-of-the-art methods for extracting the structure from textured and mosaic images, as well as competitive results on image enhancement problems.
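For orientation, a hedged sketch of the kind of adaptive energy this abstract describes; the quadratic fidelity term and the weight $\lambda$ are generic assumptions, not taken from the paper:

$$\min_{u} \;\; \tfrac{1}{2}\int_{\Omega} \big(u(\boldsymbol{x}) - f(\boldsymbol{x})\big)^2 \,d\boldsymbol{x} \;+\; \lambda \int_{\Omega} a(\boldsymbol{x})\,|\nabla u(\boldsymbol{x})|^{p(\boldsymbol{x})} \,d\boldsymbol{x},$$

where $f$ is the input image, $u$ is the structure component (texture being $f - u$), and $a(\boldsymbol{x})$, $p(\boldsymbol{x})$ are computed from the spectral content of the image curvature.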
... Such a problem can be transformed into a convex optimization problem that is solved alternately: one group of variables is solved while the other is fixed. In particular, we ignore the TV term in Eq. (4) and employ TV correction [41] to refine the depth map by a redundant Haar wavelet-based shrinkage estimate, as follows: 1. Update step for $\alpha_n$. To optimize each $\alpha_n$ separately with a local strategy, we transform Eq. (4) into Eq. ...
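This is not the cited paper's exact TV-correction step, but a minimal sketch of a redundant (stationary) Haar wavelet shrinkage applied to a depth map, using PyWavelets; the threshold value and the single decomposition level are illustrative assumptions:

```python
import numpy as np
import pywt  # PyWavelets

def haar_shrinkage(depth, threshold=0.05, level=1):
    """Refine a depth map by soft-thresholding the detail coefficients
    of a redundant (stationary) Haar wavelet transform."""
    # The stationary transform requires image sides divisible by 2**level.
    coeffs = pywt.swt2(depth, 'haar', level=level)
    shrunk = []
    for cA, (cH, cV, cD) in coeffs:
        cH, cV, cD = (pywt.threshold(c, threshold, mode='soft')
                      for c in (cH, cV, cD))
        shrunk.append((cA, (cH, cV, cD)))
    return pywt.iswt2(shrunk, 'haar')
```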
Article
Full-text available
Depth information can benefit various computer vision tasks on both images and videos. However, depth maps may suffer from invalid values in many pixels, and also large holes. To improve such data, we propose a joint self-supervised and reference-guided learning approach for depth inpainting. For the self-supervised learning strategy, we introduce an improved spatial convolutional sparse coding module in which total variation regularization is employed to enhance the structural information while preserving edge information. This module alternately learns a convolutional dictionary and sparse coding from a corrupted depth map. Then, both the learned convolutional dictionary and sparse coding are convolved to yield an initial depth map, which is effectively smoothed using local contextual information. The reference-guided learning part is inspired by the fact that adjacent pixels with close colors in the RGB image tend to have similar depth values. We thus construct a hierarchical joint bilateral filter module using the corresponding color image to fill in large holes. In summary, our approach integrates a convolutional sparse coding module to preserve local contextual information and a hierarchical joint bilateral filter module for filling using specific adjacent information. Experimental results show that the proposed approach works well for both invalid value restoration and large hole inpainting.
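The hierarchical joint bilateral filter in the paper is more involved; the following is only a single-scale sketch of a reference-guided (joint) bilateral fill for invalid depth pixels. The window radius and bandwidths are arbitrary, and the guidance RGB image is assumed to be normalized to [0, 1]:

```python
import numpy as np

def joint_bilateral_fill(depth, rgb, radius=5, sigma_s=3.0, sigma_r=0.1):
    """Fill invalid (zero) depth pixels with a joint bilateral average whose
    range weights come from the guidance RGB image (assumed in [0, 1])."""
    h, w = depth.shape
    filled = depth.astype(float).copy()
    valid = depth > 0
    for y, x in zip(*np.where(~valid)):
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        patch_d = filled[y0:y1, x0:x1]
        patch_v = valid[y0:y1, x0:x1]
        if not patch_v.any():
            continue  # no valid neighbours in this window
        # Spatial weights around the hole pixel.
        yy, xx = np.mgrid[y0:y1, x0:x1]
        w_s = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))
        # Range weights from the guidance color image.
        diff = rgb[y0:y1, x0:x1] - rgb[y, x]
        w_r = np.exp(-np.sum(diff ** 2, axis=-1) / (2 * sigma_r ** 2))
        wgt = w_s * w_r * patch_v
        filled[y, x] = (wgt * patch_d).sum() / (wgt.sum() + 1e-12)
    return filled
```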
... One of the most notable models is the TV approach [50], and early studies discussed TV-based models that characterize textures by different functional spaces or norms [2], including TV-G [42], TV-div($L^p$) [57], TV-$H^{-1}$ [46], TV-$H^{-s}$ [32], TV-div(BMO) [30], TV-$L^1$ [72], etc. To better model the textural component, sparse coding [19,52,74] and low-rank priors [16] were introduced into TV-based models. However, these models often over-smooth the structural component, an inherent property of TV. ...
Article
Full-text available
Image decomposition is a fundamental but challenging ill-posed problem in image processing and has been widely applied to compression, enhancement, texture removal, etc. In this paper, we introduce a novel structure–texture image decomposition model via non-convex total generalized variation regularization (NTGV) and convolutional sparse coding (CSC). NTGV aims to characterize the detailed-preserved structural component ameliorating the staircasing artifacts existing in total variation-based models, and CSC aims to characterize image fine-scale textures. Moreover, we incorporate both structure-aware and texture-aware measures to well distinguish structural and textural component. The proposed model is numerically implemented by an alternating minimization scheme based on alternating direction method of multipliers. Experimental results demonstrate the effectiveness of our approach on several applications including texture removal, high dynamic range image tone mapping, detail enhancement and non-photorealistic abstraction.
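A hedged sketch of how such a structure–texture split is typically posed; the constraint form and weights below are placeholders, not the authors' exact formulation:

$$\min_{u,\{x_j\}} \;\; \mathrm{NTGV}(u) \;+\; \lambda \sum_{j} \|x_j\|_1 \quad \text{s.t.} \quad u + \sum_{j} d_j * x_j = f,$$

where $u$ is the structural component, the texture is synthesized as $\sum_j d_j * x_j$ from learned filters $d_j$ and sparse coefficient maps $x_j$, and $f$ is the input image.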
... Convolutional sparse coding. An appealing model that addresses these problems has been introduced in the form of convolutional sparse coding [7], [10], [22], [24], [29], [47], which has demonstrated impressive performance in various applications [23], [26], [30], [35], [48]. With this strategy, a set of local atoms $\{d_j\}_{j=1}^{p}$ is used to represent a global signal by convolutions with representation vectors. ...
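As a concrete illustration of "representing a global signal by convolutions", here is a minimal synthesis sketch; the random dictionary, coefficient maps, and sizes are purely for shape bookkeeping:

```python
import numpy as np
from scipy.signal import fftconvolve

p, atom, H, W = 8, 11, 64, 64          # number of atoms, filter size, image size
d = np.random.randn(p, atom, atom)      # local dictionary atoms d_j
x = np.random.randn(p, H, W)            # one coefficient map per atom
x[np.abs(x) < 2.5] = 0                  # crude sparsification for the example

# Global signal as a sum of convolutions: s = sum_j d_j * x_j
s = sum(fftconvolve(x[j], d[j], mode='same') for j in range(p))
print(s.shape)                          # (64, 64)
```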
Preprint
Sparse coding techniques for image processing traditionally rely on a processing of small overlapping patches separately followed by averaging. This has the disadvantage that the reconstructed image no longer obeys the sparsity prior used in the processing. For this purpose convolutional sparse coding has been introduced, where a shift-invariant dictionary is used and the sparsity of the recovered image is maintained. Most such strategies target the $\ell_0$ "norm" or the $\ell_1$ norm of the whole image, which may create an imbalanced sparsity across various regions in the image. In order to face this challenge, the $\ell_{0,\infty}$ "norm" has been proposed as an alternative that "operates locally while thinking globally". The approaches taken for tackling the non-convexity of these optimization problems have been either using a convex relaxation or local pursuit algorithms. In this paper, we present an efficient greedy method for sparse coding and dictionary learning, which is specifically tailored to $\ell_{0,\infty}$, and is based on matching pursuit. We demonstrate the usage of our approach in salt-and-pepper noise removal and image inpainting. A code package which reproduces the experiments presented in this work is available at https://web.eng.tau.ac.il/~raja
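For intuition about the $\ell_{0,\infty}$ "norm" ("operates locally while thinking globally"), a simplified sketch that counts non-zero coefficients, across all maps, inside every local spatial window and takes the maximum; the window size is arbitrary and the exact stripe-based definition in the paper may differ:

```python
import numpy as np

def l0_inf(coeff_maps, window=8):
    """Simplified l_{0,inf}: the largest number of non-zero coefficients,
    across all maps, falling inside any local spatial window."""
    nz = (coeff_maps != 0).astype(int)   # shape (num_atoms, H, W)
    _, H, W = nz.shape
    best = 0
    for i in range(H - window + 1):
        for j in range(W - window + 1):
            best = max(best, int(nz[:, i:i + window, j:j + window].sum()))
    return best
```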
... The linear form of the image degradation model is $x = y + n$, where y is the clean image, x is the noisy observation to be restored, and n is additive noise. Similar to image separation (Zhang 2016), image denoising can be viewed as the problem of separating two components from a noisy image. During acquisition, the imaging system suffers from a variety of noise disturbances. ...
Article
Full-text available
Existing methods for image denoising focus mainly on noise and visual artifacts but rarely address the loss of edge information. In this paper, we propose a deep denoising network based on residual learning and a perceptual loss to generate high-quality denoised results. Inspired by the deep residual network, two new strategies are used to modify the original structure, which improve the learning process by compressing the mapping range. First, the high-frequency layer of the noisy image is used as the input by removing background information. Second, a residual mapping is trained to predict the difference between the clean and noisy images as the output instead of the final denoised image. To further improve the denoised result, a joint loss function is defined as the weighted sum of a pixel-to-pixel Euclidean loss and a perceptual loss. A well-trained convolutional neural network is connected to learn the semantic information we would like to measure in our perceptual loss. It encourages the training process to learn similar feature representations rather than match each low-level pixel, which guides the front denoising network to reconstruct more edges and details. As opposed to standard denoising models that concentrate on one specific noise level, our single model can deal with noise of unknown levels (i.e., blind denoising). The experiments show that our proposed network achieves superior performance and recovers the majority of missing details from low-quality observations.
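A hedged sketch of the kind of joint objective described here: a pixel-wise Euclidean loss plus a perceptual loss computed from the features of a frozen pretrained network. The layer cut-off and weight are assumptions, not the paper's values, and 3-channel normalized inputs are assumed:

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

# Frozen pretrained feature extractor for the perceptual term
# (features[:9] corresponds to relu2_2 in the standard torchvision VGG-16).
vgg_feat = vgg16(pretrained=True).features[:9].eval()
for p in vgg_feat.parameters():
    p.requires_grad = False

def joint_loss(denoised, clean, w_percep=0.1):
    """Weighted sum of pixel-wise MSE and feature-space (perceptual) MSE."""
    pixel = F.mse_loss(denoised, clean)
    percep = F.mse_loss(vgg_feat(denoised), vgg_feat(clean))
    return pixel + w_percep * percep
```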
... Other state-of-the-art image decomposition methods include various modifications and extensions of the bilateral filter [12], [63] and machine learning methods [60], [62], [40]. ...
Article
A preliminary structure + texture image decomposition is very useful for a number of digital image processing tasks, as different strategies are supposed to be employed for processing the structure and texture image components. In this paper, a new variational structure + texture image decomposition method is developed. The main ingredients of the proposed approach are: (1) using a low-pass filtered level-set curvature of the input image as a guidance image; (2) suppressing texture by minimizing a variable-exponent energy, where the variable exponent is learned from the result of the curvature-guided image filtering. Numerical experiments demonstrate that the method is competitive with the current state of the art in structure + texture image decomposition. Several applications are considered.
... Convolutional Sparse Coding (CSC) [30, 63] [55, Sec. II], a highly structured sparse representation model, has recently attracted increasing attention for a variety of imaging inverse problems [23,32,65,39,54,66]. CSC aims to represent a given signal $s \in \mathbb{R}^N$ as a sum of convolutions, ...
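The sentence is cut off in this snippet; for reference, the usual convolutional basis pursuit denoising (CBPDN) formulation reads as follows, as a generic statement rather than necessarily the exact variant used in the citing work:

$$\min_{\{x_m\}} \;\; \frac{1}{2}\Big\|\sum_{m} d_m * x_m - s\Big\|_2^2 \;+\; \lambda \sum_{m} \|x_m\|_1 .$$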
... Let the step size be small enough, $\eta \le \min(\mu/L^2,\, 1/\mu)$; then $0 \le 1 - 2\mu\eta + \eta^2 L^2 \le 1$, which implies (65). Combining (65) and (43), we get (56). ...
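The bound in this snippet can be checked directly; assuming the usual ordering $\mu \le L$, the elementary argument (reconstructed here, not quoted from the cited paper) is

$$\eta \le \frac{\mu}{L^2} \;\Rightarrow\; \eta^2 L^2 \le \mu\eta \;\Rightarrow\; 1 - 2\mu\eta + \eta^2 L^2 \le 1 - \mu\eta \le 1, \qquad 1 - 2\mu\eta + \eta^2 L^2 \ge (1 - \mu\eta)^2 \ge 0 .$$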
Article
Full-text available
Convolutional sparse representations are a form of sparse representation with a structured, translation invariant dictionary. Most convolutional dictionary learning algorithms to date operate in batch mode, requiring simultaneous access to all training images during the learning process, which results in very high memory usage and severely limits the training data that can be used. Very recently, however, a number of authors have considered the design of online convolutional dictionary learning algorithms that offer far better scaling of memory and computational cost with training set size than batch methods. This paper extends our prior work, improving a number of aspects of our previous algorithm; proposing an entirely new one, with better performance, and that supports the inclusion of a spatial mask for learning from incomplete data; and providing a rigorous theoretical analysis of these methods.
... Convolutional sparse representations are a recent alternative that replace the general linear representation with a sum of convolutions $\sum_m d_m * x_m \approx s$, where the elements of the dictionary $d_m$ are linear filters, and the representation consists of the set of coefficient maps $x_m$, each of which is the same size as $s$. The convolutional form has, thus far, received very little attention for image reconstruction problems, but there are indications that interest is growing, with recent work addressing applications in superresolution [3], image fusion [4], impulse noise denoising [5], structure/texture decomposition [6], and dynamic MRI reconstruction [7]. Surprisingly, denoising of Gaussian white noise, arguably the simplest of all imaging inverse problems, has received no attention beyond a very brief example providing insufficient detail for reproducibility [8, Sec. ...
... Note that the solution to this problem could be approximated by solving the CBPDN problem followed by an additional TV denoising step applied to the result, as in [6]. The final TV term can be expressed as ...
Article
While convolutional sparse representations enjoy a number of useful properties, they have received limited attention for image reconstruction problems. The present paper compares the performance of block-based and convolutional sparse representations in the removal of Gaussian white noise. While the usual formulation of the convolutional sparse coding problem is slightly inferior to the block-based representations in this problem, the performance of the convolutional form can be boosted beyond that of the block-based form by the inclusion of suitable penalties on the gradients of the coefficient maps.
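A hedged sketch of the kind of gradient-regularized convolutional formulation this abstract refers to; the exact penalty used in the paper may differ:

$$\min_{\{x_m\}} \;\; \frac{1}{2}\Big\|\sum_{m} d_m * x_m - s\Big\|_2^2 \;+\; \lambda \sum_{m} \|x_m\|_1 \;+\; \frac{\mu}{2} \sum_{m} \big(\|G_r x_m\|_2^2 + \|G_c x_m\|_2^2\big),$$

where $G_r$ and $G_c$ denote row- and column-wise discrete gradient operators acting on the coefficient maps.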
... In our method, we compute the feature loss at layer relu2_2 in the VGG-16 model [44]. Given a set of N input images (ground truth or de-rained results) $\{y_i\}_{i=1}^{N}$ with corresponding labels $l_i$ (1 = real, 0 = fake), the binary cross-entropy loss for the discriminator D is defined as: ...
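The expression itself is cut off in this snippet; under these definitions the standard binary cross-entropy for the discriminator would read (a generic form, possibly differing in normalization from the cited paper)

$$\mathcal{L}_D = -\frac{1}{N}\sum_{i=1}^{N} \Big[\, l_i \log D(y_i) + (1 - l_i)\log\big(1 - D(y_i)\big) \Big].$$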
Article
Full-text available
Severe weather conditions such as rain and snow adversely affect the visual quality of images captured under such conditions, thus rendering them useless for further usage and sharing. In addition, such degraded images drastically affect the performance of vision systems. Hence, it is important to solve the problem of single-image de-raining/de-snowing. However, this is a difficult problem to solve due to its inherent ill-posed nature. Existing approaches attempt to introduce prior information to convert it into a well-posed problem. In this paper, we investigate a new point of view in addressing the single-image de-raining problem. Instead of focusing only on deciding what is a good prior or a good framework to achieve good quantitative and qualitative performance, we also ensure that the de-rained image itself does not degrade the performance of a given computer vision algorithm such as detection and classification. In other words, the de-rained result should be indistinguishable from its corresponding clear image to a given discriminator. This criterion can be directly incorporated into the optimization framework by using the recently introduced conditional generative adversarial networks (GANs). To minimize artifacts introduced by GANs and ensure better visual quality, a new refined loss function is introduced. Based on this, we propose a novel single-image de-raining method called Image De-raining Conditional Generative Adversarial Network (ID-CGAN), which incorporates quantitative, visual, and discriminative performance into the objective function. Experiments evaluated on synthetic and real images show that the proposed method outperforms many recent state-of-the-art single-image de-raining methods in terms of quantitative and visual performance.