Figure 2 - uploaded by Luca Foschini
The black dot in the subject image is captured by two frames a and b. In each frame, the dot is captured by the same square (thin grid), but it is averaged with different portions of the original image, because of the relative offset in the pixelized area (thick grid) in a and b.
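The averaging described in the caption can be sketched numerically. The following is a minimal illustration (not the paper's code, and the image, block size, and offset are made up for the example): pixelization replaces each cell of a coarse grid with the cell's mean, and shifting the grid between frames causes the same subject pixel to be averaged with a different neighbourhood in each frame.

```python
import numpy as np

def pixelize(img, block, offset):
    """Replace each cell of a `block`-sized grid, shifted up/left by
    `offset` pixels, with the cell's mean (border cells are clipped
    to the image)."""
    out = img.astype(float).copy()
    h, w = img.shape
    for y in range(-offset, h, block):
        for x in range(-offset, w, block):
            y0, x0 = max(y, 0), max(x, 0)
            y1, x1 = min(y + block, h), min(x + block, w)
            out[y0:y1, x0:x1] = img[y0:y1, x0:x1].mean()
    return out

img = np.arange(64, dtype=float).reshape(8, 8)  # stand-in subject image
a = pixelize(img, block=4, offset=0)  # frame a: grid aligned
b = pixelize(img, block=4, offset=2)  # frame b: grid shifted by 2 px
# The same subject pixel (4, 4) is averaged with different portions of
# the original image in the two frames, so each pixelized frame leaks a
# different mixture of the original values.
print(a[4, 4], b[4, 4])  # → 49.5 31.5
```

The two frames produce different block means for the same subject location, which is exactly the extra information a temporal attack can exploit.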

Source publication
Conference Paper
Full-text available
Pixelization is a technique that makes parts of an image impossible to discern by the human eye by artificially decreasing the image resolution. Pixelization, like other forms of image censorship, is effective at hiding parts of an image that might be offensive to the viewer. However, pixelization is also often used to achieve anonymity, for exampl...

Similar publications

Conference Paper
Full-text available
The popularity and utility of social networking services have the potential to change the relationship between academic departments and their undergraduate students, and students’ own relationships with their subject. This has been shown to be particularly relevant in the area of undergraduate mathematics learning, where local communities of practi...
Conference Paper
This study provides preliminary insights into the linguistic features that contribute to Internet censorship in mainland China. We collected a corpus of 344 censored and uncensored microblog posts that were published on Sina Weibo and built a Naive Bayes classifier based on the linguistic, topic-independent, features. The classifier achieves a 79.3...
Article
Full-text available
Abstract: It has been argued that the strong cosmic censorship conjecture is violated by Reissner-Nordström-de Sitter black holes: for near-extremal black holes, generic scalar field perturbations arising from smooth initial data have finite energy at the Cauchy horizon even though they are not continuously differentiable there. In this paper, we c...
Conference Paper
Full-text available
Microblogging services have become popular, especially since smartphones made them easily accessible for common users. However, current services like Twitter rely on a centralized infrastructure, which has serious drawbacks from privacy and reliability perspectives. In this paper, we present a decentralized privacy-preserving microblogging infrastr...
Article
Full-text available
The Baptism at the Savica (in Slovene Krst pri Savici), the poem about the loss of the Slovenian independence, can be understood as Prešeren’s successful attempt to trick the censorship and to express, in a form of a historical narrative written as a metaphor, the very contents which had to be removed from his elegy Dem Andenken des Matthias Čop in...

Citations

... Furthermore, de-identified video information can be easily identified as such, limiting its use as training data or for other secondary purposes. Moreover, these techniques are vulnerable to removal and restoration technologies like denoising [4]–[6] or inpainting [7], a limitation noted in related research. These challenges suggest the need for new approaches in advancing face de-identification technology. ...
Article
Full-text available
With the advancement of facial recognition technology, concerns over facial privacy breaches owing to data leaks and external attacks have been escalating. Existing de-identification methods face challenges with compatibility with facial recognition models and difficulties in verifying de-identified images. To address these issues, this study introduces a novel framework that combines face verification-enabled de-identification techniques with face-swapping methods, tailored for video surveillance environments. This framework employs StyleGAN, Pixel2Style2Pixel (PSP), HopSkipJumpAttack (HSJA), and FaceNet512 to achieve face verification-capable de-identification, and uses the dlib library for face swapping. Experimental results demonstrate that this method maintains high face recognition performance (98.37%) across various facial recognition models while achieving effective de-identification. Additionally, human tests have validated its sufficient de-identification capabilities, and image quality assessments have shown its excellence across various metrics. Moreover, real-time de-identification feasibility was evaluated using Nvidia Jetson AGX Xavier, achieving a processing speed of up to 9.68 fps. These results mark a significant advancement in demonstrating the practicality of high-quality de-identification techniques and facial privacy protection in the field of video surveillance.
... Machine Learning is not necessarily required for such tasks: Cavedon et al. [CFV11] studied possibilities for full reconstruction of pixelized parts in videos. This is possible because the redacted part under the pixelation mask moves in videos, hence disclosing additional information that can be used for the reconstruction with a Maximum a Posteriori approach. ...
Conference Paper
Full-text available
On the internet, you find numerous images like screenshots where secret parts are hidden with irreversible redaction techniques like pixelation or blurring. In this paper, we propose a system that recovers information from redacted text in raster graphics using a composition of a Convolutional Neural Network (CNN), a Recurrent Neural Network (RNN) using Long short-term memory (LSTM) and a Connectionist Temporal Classification (CTC) layer to output the most probable character sequence. We furthermore show that our model operates in an automated pipeline, performs on blurred images without modification and is even able to compensate JPEG quality loss. Finally, our test results indicate that a generic neural network can be trained successfully to assist the recovery of pixelized or blurred information on screenshots or high-quality photos.
... The transformation is applied to the remaining pixels using the estimated matrix, while any gaps are interpolated using a bicubic algorithm. The discussed process is reversible if the inverse transformation matrix can be re-estimated [13]. However, due to the interpolation of some pixels, an unwarped image is only an approximation of the original. ...
... However, there is another attack type applicable to videos, utilizing temporal dependencies between adjacent frames in the video sequences. Cavedon et al. (2011) [13] developed a technique that completely recovers the identity of a pixelized face in video streams under certain conditions. When pixelization is applied to video sequences depicting the same subject, there is a high probability that the pixelization squares will change position with respect to the image background, hence averaging different pixels at different times. ...
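The cross-frame leakage described above can be made concrete with a toy linear-algebra sketch. This is a hypothetical 1-D least-squares illustration, not the MAP formulation of Cavedon et al.: each frame observes the block means of the same hidden signal under a different grid shift, and stacking enough shifted averaging operators yields a full-rank linear system that identifies the signal exactly.

```python
import numpy as np

n, block = 12, 4
rng = np.random.default_rng(0)
signal = rng.uniform(0, 255, n)          # hidden content behind the mask

rows, obs = [], []
for shift in range(block):               # one grid shift per frame
    for start in range(-shift, n, block):
        lo, hi = max(start, 0), min(start + block, n)
        row = np.zeros(n)
        row[lo:hi] = 1.0 / (hi - lo)     # block-mean operator for this cell
        rows.append(row)
        obs.append(row @ signal)         # the pixelized measurement

A, y = np.array(rows), np.array(obs)
recovered, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.allclose(recovered, signal))    # the shifted grids jointly
                                         # determine the signal
```

With block size 4 and shifts 0 through 3, the interval sums determine every prefix sum of the signal, so the stacked system has full column rank and least squares inverts it; the real attack must additionally estimate the shifts and cope with noise, which is where the MAP prior comes in.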
Preprint
Full-text available
The exploding rate of multimedia publishing in our networked society has magnified the risk of sensitive information leakage and misuse, pushing the need to secure data against possible exposure. Data sanitization, the process of obfuscating or removing sensitive content related to the data, helps to mitigate the severe impact of potential security and privacy risks. This paper presents a review of the mechanisms designed for protecting digital visual contents (i.e., images and videos), the attacks against the cited mechanisms, and possible countermeasures. The provided thorough systematization, alongside the discussed challenges and research directions, can pave the way to new research.
... The authors of [1] have proposed an approach performing an attack against privacy of pixelized videos, assuming that the same image has been recorded in different frames at different positions. As the same subject moves in a video, the pixelization squares change position. ...
Chapter
Protecting the identity of users in social networks and photographic media has become one of the most important concerns, especially with the breakthrough of internet and information technology. Thus, several techniques, such as blurring and pixelization, have been introduced by privacy-protection technology designers to make the features of a user’s face unrecognizable. However, many researchers have pointed out the ineffectiveness of these techniques. In this paper we deal with the problem of reconstructing obfuscated human faces. We propose a Deep Neural Network based on U-Net and trained on the Labeled Faces in the Wild (LFW) dataset. Our model is evaluated using two image quality metrics, PSNR and SSIM. Experimental results show the ability of our model to reconstruct images that have undergone different forms of obfuscation (blurring and pixelization) with great accuracy, outperforming previous work.
... Video analysis operations are performed by pixel counting. A pixel is the name given to the smallest unit of the screen image of electronic devices such as computers, tablets, televisions, and phones [6]. Pixels that change on the screen are then interpreted by the software program. ...
... However, such approaches are prone to de-anonymization attacks when the adversary has access to additional information about the individuals in the dataset (Narayanan & Shmatikov, 2008). In the case of images, simple approaches such as blurring of faces/eyes have been shown to be easily overcome using model inversion or image similarity based attacks (Li et al., 2014; Cavedon et al., 2011) by adversaries with access to auxiliary data. To overcome such attacks, more sophisticated approaches have been proposed that fall under the umbrella term 'privacy preserving machine learning' (Al-Rubaie & Chang, 2019; Agrawal & Srikant, 2000). ...
Preprint
Full-text available
Generative Adversarial Networks (GANs) have made releasing synthetic images a viable approach to sharing data without releasing the original dataset. It has been shown that such synthetic data can be used for a variety of downstream tasks such as training classifiers that would otherwise require the original dataset to be shared. However, recent work has shown that the GAN models and their synthetically generated data can be used to infer the training set membership by an adversary who has access to the entire dataset and some auxiliary information. Here we develop a new GAN architecture (privGAN) which provides protection against this mode of attack while leading to negligible loss in downstream performances. Our architecture explicitly prevents overfitting to the training set thereby providing implicit protection against white-box attacks. The main contributions of this paper are: i) we propose a novel GAN architecture that can generate synthetic data in a privacy preserving manner and demonstrate the effectiveness of our model against white-box attacks on several benchmark datasets, ii) we provide a theoretical understanding of the optimal solution of the GAN loss function, iii) we demonstrate on two common benchmark datasets that synthetic images generated by privGAN lead to negligible loss in downstream performance when compared against non-private GANs. While we have focused on benchmarking privGAN exclusively on image datasets, the architecture of privGAN is not exclusive to image datasets and can be easily extended to other types of datasets.
... The methods were categorized under Transform-domain and pixel level, based on the method that is used by each of these techniques to anonymize the image. Authors in [10] performed an attack on the pixelization technique that was done on video streams. This was done by first taking the average of two frames, and then applying maximum-a-posteriori method to recover the image. ...
Article
Full-text available
With the increasing usage of images to express opinions, feelings, and oneself on social media and other websites, privacy concerns become an issue. The need to anonymize a person’s face, or other aspects presented in an image, for legal or personal reasons has sometimes been overlooked. Pixelization is a common technique used for anonymizing images. However, this technique has proved to be unreliable, as the images can be restored using de-pixelization techniques. Clustering is usually used with images for image segmentation. When used in combination with pixelization, it proves to be an effective way to anonymize images. In this paper, the authors investigate the cons of using only pixelization, and show how the use of clustering can improve the chances of anonymizing effectively.
... The authors suggested a new de-identification technique based on Newton et al. [31]. Cavedon et al. [6] exploited the changes in pixel boxes for obfuscating video frames to reconstruct pixelated videos using image processing methods so that humans can identify objects in the reconstructed images. Wilber et al. [50] used Facebook's face tagging system as a black-box tool to determine if faces obfuscated with various techniques, including blurring, are still recognizable by Facebook. ...
Article
Full-text available
We demonstrate that modern image recognition methods based on artificial neural networks can recover hidden information from images protected by various forms of obfuscation. The obfuscation techniques considered in this paper are mosaicing (also known as pixelation), blurring (as used by YouTube), and P3, a recently proposed system for privacy-preserving photo sharing that encrypts the significant JPEG coefficients to make images unrecognizable by humans. We empirically show how to train artificial neural networks to successfully identify faces and recognize objects and handwritten digits even if the images are protected using any of the above obfuscation techniques.
... Many different techniques are used for redaction; the techniques applied and the tools used vary by community. 1 Redaction has importance beyond adhering to community norms. Images ineffectively redacted and posted by members of at-risk communities (e.g., Reddit's /r/CreepyPMs) could render those users vulnerable to retribution. ...
... Newton, Sweeney, and Malin showed that face recognition software can be used to recognize mosaiced faces [15] from still images. Likewise, Cavedon, Foschini, and Vigna [1] used superresolution techniques to recover mosaiced faces from video, assuming that the subjects on video did not move much between consecutive frames. (This possibility was earlier noted by Dufaux [4].) ...
Article
Full-text available
In many online communities, it is the norm to redact names and other sensitive text from posted screenshots. Sometimes solid bars are used; sometimes a blur or other image transform is used. We consider the effectiveness of two popular image transforms - mosaicing (also known as pixelization) and blurring - for redaction of text. Our main finding is that we can use a simple but powerful class of statistical models - so-called hidden Markov models (HMMs) - to recover both short and indefinitely long instances of redacted text. Our approach borrows on the success of HMMs for automatic speech recognition, where they are used to recover sequences of phonemes from utterances of speech. Here we use HMMs in an analogous way to recover sequences of characters from images of redacted text. We evaluate an implementation of our system against multiple typefaces, font sizes, grid sizes, pixel offsets, and levels of noise. We also decode numerous real-world examples of redacted text. We conclude that mosaicing and blurring, despite their widespread usage, are not viable approaches for text redaction.
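The HMM idea can be sketched in a few lines. The toy example below is illustrative only (the character set, block-mean signatures, noise level, and uniform bigram model are all made up, and the paper's system is far richer): hidden states are characters, observations are the coarse intensity signatures their mosaiced glyphs would produce, and Viterbi decoding recovers the most probable character sequence.

```python
import numpy as np

chars = ['a', 'b', 'c']
sig = {'a': np.array([0.2, 0.8]),     # hypothetical mosaic signatures:
       'b': np.array([0.8, 0.2]),     # coarse block means of each glyph
       'c': np.array([0.5, 0.5])}
trans = np.log(np.full((3, 3), 1 / 3))  # uniform bigram "language model"

def emit_logp(obs):
    # Gaussian emission score: the closer the observed signature is to a
    # character's signature, the higher its log-probability.
    return np.array([-np.sum((obs - sig[c]) ** 2) / (2 * 0.01)
                     for c in chars])

def viterbi(observations):
    score = emit_logp(observations[0])     # best score ending in each state
    back = []                              # backpointers per step
    for obs in observations[1:]:
        step = score[:, None] + trans + emit_logp(obs)[None, :]
        back.append(step.argmax(axis=0))   # best predecessor per state
        score = step.max(axis=0)
    path = [int(score.argmax())]
    for bp in reversed(back):              # walk the backpointers
        path.append(int(bp[path[-1]]))
    return ''.join(chars[i] for i in reversed(path))

# Noisy signatures, as if read off a mosaiced screenshot:
noisy = [sig['b'] + 0.05, sig['a'] - 0.05, sig['c'] + 0.02]
print(viterbi(noisy))  # prints: bac
```

The real system replaces the made-up signatures with rendered, mosaiced glyph images across typefaces, font sizes, and grid offsets, and replaces the uniform transition matrix with a character-level language model, but the decoding machinery is the same.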
... Another important issue is to ensure anonymity while preserving the rest of the information [1]. Finally, it is worth noting that simple image manipulations can be reverse engineered, allowing faces to be reconstructed [6]. Besides these simple blurring and pixelization techniques, more advanced techniques have been introduced, most of them based on the well-known eigenfaces representation [9, 23], or some variants [12]. ...
Conference Paper
With the adoption of pervasive surveillance systems and the development of efficient automatic face matchers, the question of preserving privacy becomes paramount. In this context, automated face de-identification is revived. Typical solutions based on eyes masking or pixelization, while commonly used in news broadcasts, produce very unnatural images. More sophisticated solutions were sparingly introduced in the literature, but they fail to account for fundamental constraints such as the visual likeliness of de-identified images. In contrast, we identify essential principles and build upon efficient techniques to derive an automated face de-identification solution meeting our predefined criteria. More specifically, our approach relies on a set of face donors from which it can borrow various face components (eyes, chin, etc.). Faces are then de-identified by substituting their own face components with the donors’ ones, in such a way that an automatic face matcher is fooled while the appearance of the generated faces are as close as possible to original faces. Experiments on several datasets validate the approach and show its ability both in terms of privacy preservation and visual quality.