Vincenzo Lipari
Politecnico di Milano · Department of Electronics, Information and Bioengineering

Ph.D.

About

44 Publications
17,001 Reads
462 Citations

Publications (44)
Article
Full-text available
Source device identification is an important topic in image forensics, since it allows tracing back the origin of an image. Its forensic counterpart is source device anonymization, that is, masking any trace on the image that can be useful for identifying the source device. A typical trace exploited for source device identification is the photo re...
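The attribution technique this abstract refers to (PRNU-based identification, named in full in a later entry) can be sketched in a few lines. This is an illustrative toy only, not the method of the paper: the helper names are hypothetical, and a crude mean filter stands in for the wavelet-domain denoising real pipelines use.

```python
import numpy as np

def noise_residual(img, kernel=3):
    # Subtract a local-mean estimate of the scene; the leftover high-frequency
    # noise carries the sensor pattern. Real pipelines use wavelet denoising.
    pad = kernel // 2
    padded = np.pad(img, pad, mode="edge")
    smooth = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for dy in range(kernel):
        for dx in range(kernel):
            smooth += padded[dy:dy + h, dx:dx + w]
    smooth /= kernel * kernel
    return img - smooth

def estimate_fingerprint(images):
    # Average residuals over many images from one camera: scene content
    # averages out while the fixed sensor pattern remains.
    return np.mean([noise_residual(im) for im in images], axis=0)

def ncc(a, b):
    # Normalized cross-correlation, used as the attribution score.
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

Anonymization, the counterpart mentioned above, amounts to suppressing the component of the residual that correlates with the fingerprint.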
Article
Seismic deblending is an ill-posed inverse problem that involves counteracting the effect of a blending matrix derived from the shots’ position and firing time. In this letter, we propose a seismic deblending method based on so-called deep preconditioners. A convolutional autoencoder (AE) is first trained in a patch-wise fashion to learn an effecti...
Preprint
Data interpolation is a fundamental step in any seismic processing workflow. Among the machine learning techniques recently proposed to solve data interpolation as an inverse problem, the Deep Prior paradigm employs a convolutional neural network to capture priors on the data and thereby regularize the inversion. However, this technique lacks...
Article
Full-text available
Irregular and coarse spatial sampling of seismic data strongly affects the performance of processing and imaging algorithms. Therefore, interpolation is a usual preprocessing step in most processing workflows. In this work, we propose a seismic data interpolation method based on the deep prior paradigm: an ad hoc convolutional neural netw...
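The deep prior paradigm these entries build on replaces an explicit regularizer with the implicit bias of an untrained CNN, while keeping the same masked-data misfit. As a minimal stand-in (not the papers' method: an explicit 1-D smoothness penalty replaces the network, and all names are illustrative), the inversion looks like:

```python
import numpy as np

def interpolate(d, mask, lam=1.0, iters=500, step=0.2):
    # Gradient descent on 0.5*||mask*(x - d)||^2 + 0.5*lam*||Dx||^2:
    # observed samples pin the solution, the smoothness term fills the gaps.
    x = d.copy()
    for _ in range(iters):
        lap = np.zeros_like(x)
        lap[1:-1] = x[2:] - 2 * x[1:-1] + x[:-2]   # discrete 1-D Laplacian
        grad = mask * (x - d) - lam * lap
        x -= step * grad
    return x
```

In the deep prior setting, the unknown trace gather is instead parameterized as the output of a CNN and the same masked misfit is minimized over the network weights.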
Preprint
Source device identification is an important topic in image forensics, since it allows tracing back the origin of an image. Its forensic counterpart is source device anonymization, that is, masking any trace on the image that can be useful for identifying the source device. A typical trace exploited for source device identification is the Photo R...
Conference Paper
Interpolation of seismic data is an important pre-processing step in most seismic processing workflows. Through the deep image prior paradigm, it is possible to use Convolutional Neural Networks for seismic data interpolation without the costly and prone-to-overfitting training stage. The proposed method makes use of the multi-res U-net architectur...
Article
Full-text available
The advent of new deep learning and machine learning paradigms enables the development of new solutions to tackle the challenges posed by new geophysical imaging applications. For this reason, convolutional neural networks (CNN) have been deeply investigated as novel tools for seismic image processing. In particular, we study a specific CNN architecture, the...
Preprint
The advent of new deep learning and machine learning paradigms enables the development of new solutions to tackle the challenges posed by new geophysical imaging applications. For this reason, we thoroughly investigate the use of convolutional neural networks as novel tools for seismic image processing. Specifically, we process seismic migrated images through...
Preprint
In this work, we pursue the same goal of employing deep neural networks as a generic seismic post-processing operator, but we aim to relax the need for corresponding image pairs. Specifically, we leverage a network architecture known as CycleGAN (Zhu et al., 2017), successfully employed for domain transfer problems in computer vision. We test th...
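The key ingredient that lets CycleGAN dispense with paired images is the cycle-consistency loss: two mappings, A to B and B to A, are trained so that composing them returns the input. A toy illustration of that loss term (the generators here are hypothetical stand-ins for the convolutional ones used in practice):

```python
import numpy as np

def cycle_loss(G, F, a_batch, b_batch):
    # G maps domain A -> B, F maps B -> A. Without paired examples,
    # the only reconstruction targets available are the inputs themselves.
    loss_a = np.mean(np.abs(F(G(a_batch)) - a_batch))  # a -> b -> a
    loss_b = np.mean(np.abs(G(F(b_batch)) - b_batch))  # b -> a -> b
    return loss_a + loss_b
```

In the full model this term is added to the adversarial losses of the two discriminators; with exact inverses it vanishes.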
Preprint
Seismic data processing algorithms greatly benefit from, or even require, regularly sampled and reliable data. Therefore, interpolation and denoising play a fundamental role as the starting steps of most seismic data processing pipelines. In this paper, we exploit convolutional neural networks for the joint tasks of interpolation and random noise attenuation...
Conference Paper
Although, with usual seismic acquisitions, much of the information about subsurface parameters is contained in the reflection data, many current Full Waveform Inversion applications still focus on the use of the transmitted energy (i.e., refractions). To reduce this dependency on refractions, in this work we propose to pre-process...
Conference Paper
Full-text available
A common issue in seismic data analysis is the lack of regularly and densely sampled seismic traces. This problem is commonly tackled by rank optimization or statistical feature-learning algorithms, which allow interpolation and denoising of corrupted data. In this paper, we propose a completely novel approach for reconstructing missing tra...
Conference Paper
Full-text available
The new challenges of geophysical imaging applications call for new methodologies that go beyond standard, well-established techniques. In this work we propose a novel tool for seismic imaging applications based on recent advances in deep neural networks. Specifically, we use a generative adversarial network (GAN) to process seismic migrated im...
Conference Paper
Full-text available
Over the years, the forensic community has developed a series of very accurate camera attribution algorithms that detect which device was used to acquire an image, with outstanding results. Many of these methods are based on photo response non-uniformity (PRNU), which allows tracing a picture back to the camera used to shoot it. However, w...
Conference Paper
Full-text available
In this work we describe a machine learning pipeline for facies classification based on wireline logging measurements. The algorithm has been designed to work even with a relatively small training set and number of features. The method is based on a gradient boosting classifier, which has proven effective in such circumstances. A key as...
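Gradient boosting, the classifier family used here, builds an additive ensemble of weak learners where each stage fits the residual of the current prediction. A minimal sketch with depth-1 regression trees (stumps) and a least-squares objective; this is a generic illustration, not the paper's pipeline, and all names are made up:

```python
import numpy as np

def fit_stump(x, r):
    # Find the threshold on 1-D feature x minimizing squared error on residual r.
    best = None
    order = np.argsort(x)
    xs, rs = x[order], r[order]
    for i in range(1, len(xs)):
        t = 0.5 * (xs[i - 1] + xs[i])
        left, right = rs[:i].mean(), rs[i:].mean()
        err = ((rs[:i] - left) ** 2).sum() + ((rs[i:] - right) ** 2).sum()
        if best is None or err < best[0]:
            best = (err, t, left, right)
    _, t, left, right = best
    return lambda q: np.where(q < t, left, right)

def boost(x, y, n_stages=20, lr=0.3):
    # Each stage fits the current residual, i.e. the negative L2-loss gradient.
    pred = np.zeros_like(y, dtype=float)
    stumps = []
    for _ in range(n_stages):
        s = fit_stump(x, y - pred)
        pred += lr * s(x)
        stumps.append(s)
    return lambda q: sum(lr * s(q) for s in stumps)
```

Production implementations (e.g. scikit-learn's gradient boosting or XGBoost) use deeper trees, multiple features, and classification losses, but follow the same stagewise residual-fitting scheme.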
Article
Reflection tomography is the industry-standard tool for velocity model building, but it is also an ill-posed inverse problem, as its solution is not unique. The usual way to obtain an acceptable result is to regularize tomography by feeding the inversion with some a priori information. The simplest regularization forces the solution to be smooth, im...
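The smoothness regularization mentioned in the abstract is, in its simplest form, a Tikhonov-style penalty on the model gradient. Written illustratively (the symbols below are generic choices, not taken from the paper: $L$ a linearized tomographic operator, $m$ the model update, $d$ the residual move-out data):

```latex
% Illustrative Tikhonov-regularized reflection tomography
\min_{m} \; \| L m - d \|_2^2 \;+\; \lambda \, \| \nabla m \|_2^2
```

Larger $\lambda$ trades data fit for smoothness; richer priors (e.g. well constraints, as in a later entry) replace or augment the gradient penalty.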
Conference Paper
Well mis-ties can be incorporated as additional constraints into reflection tomography in the post-migrated domain. Exploiting such source of prior information enriches conventional seismic tomography and reduces the ill-conditioning of the operator, specifically in presence of anisotropy. The method proposed here to account for well mis-ties with...
Article
Surface-related multiple elimination is the leading methodology for surface multiple removal. This data-driven approach can be extended to interbed multiple prediction at the expense of a huge increase in the computational burden. This cost makes model-driven methods still attractive, especially for the three-dimensional case. In this paper we pres...
Article
Despite being less general than 3D surface-related multiple elimination (3D-SRME), multiple prediction based on wavefield extrapolation can still be of interest, because it is less CPU- and I/O-demanding than 3D-SRME and, moreover, it does not require any prior data regularization. Here we propose a fast implementation of water-bottom multiple predict...
Conference Paper
The fully data-driven approach at the basis of 3D-SRME can be extended to interbed multiple prediction at the expense of a huge increase in the computational burden. This cost makes model-driven approaches still attractive. Here we propose a methodology based on wavefield extrapolation. We use a compact Kirchhoff extrapolation operator in order to...
Conference Paper
Despite being less general than 3D-SRME, multiple prediction based on wavefield extrapolation can still be of interest, because it is less CPU- and I/O-demanding than 3D-SRME and, moreover, it does not require any prior data regularization. Here we propose a fast implementation of water-bottom multiple prediction that uses the Kirchhoff formulation of...
Conference Paper
The fully data-driven approach at the basis of 3D-SRME can be extended to interbed multiple prediction at the expense of a huge increase in the computational burden. This cost makes model-driven approaches still attractive. Here we propose a methodology based on wavefield extrapolation that exploits the compact Kirchhoff extrapolation operator tha...
Article
The need to satisfy the world's demand for oil & gas pushes oil companies to drill in conditions that are getting harder and harder in terms of temperature and pressure. Drillers have to know these conditions as precisely as possible before drilling a new borehole. Generally, in complex and deep areas the more traditional relationships linking i...
Conference Paper
Kirchhoff migration is well known to be the workhorse of seismic imaging. It is fast, it does not require regular acquisition geometry, and it is naturally target-oriented. Moreover, at least for some time to come, it is the kernel of Migration Velocity A
Article
Kirchhoff prestack depth migration in the angle domain helps in the reduction of acquisition footprints, in true-amplitude migration, in Amplitude Versus Offset or Angle analysis, and in migration velocity analysis. Scattering angles replace surface-related acquisition coordinates in the Kirchhoff summation. The regularization of illumination is per...
Conference Paper
The benefits of recasting prestack depth migration in the angle domain are such as to suggest its application in industrial Kirchhoff PSDM software. Here we examine some practical aspects of the implementation in the angle domain, namely those that most impact performance and effectiveness: memory requirements, poor illumination, and errors in velo...
