Figure 5 - uploaded by Reza Shokri
An original AT&T image and two blurred frames extracted from a blurred YouTube video. Although the unblurred frames were identical, the two blurred frames are different.

Source publication
Article
Full-text available
We demonstrate that modern image recognition methods based on artificial neural networks can recover hidden information from images protected by various forms of obfuscation. The obfuscation techniques considered in this paper are mosaicing (also known as pixelation), blurring (as used by YouTube), and P3, a recently proposed system for privacy-preserving photo sharing ...
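The attack described in this abstract does not invert the obfuscation; it trains an ordinary image classifier directly on obfuscated examples. A minimal PyTorch sketch of that idea follows; the network architecture, the 92 × 112 grayscale input size, and the training step are illustrative assumptions, not the authors' exact setup.

```python
# Hedged sketch: train a plain CNN on (obfuscated image, identity label) pairs.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ObfuscatedFaceClassifier(nn.Module):
    def __init__(self, num_identities: int):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 32, 3, padding=1)   # grayscale 92 x 112 inputs assumed
        self.conv2 = nn.Conv2d(32, 64, 3, padding=1)
        self.fc1 = nn.Linear(64 * 23 * 28, 256)       # matches two 2x poolings of 92 x 112
        self.fc2 = nn.Linear(256, num_identities)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)    # halve spatial resolution
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)    # halve again
        x = F.relu(self.fc1(x.flatten(1)))
        return self.fc2(x)                            # logits over candidate identities

def train_step(model, optimizer, obfuscated_batch, identity_labels):
    """One SGD step on obfuscated images labelled with their true identities."""
    optimizer.zero_grad()
    loss = F.cross_entropy(model(obfuscated_batch), identity_labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```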

Context in source publication

Context 1
... the blurring often extended outside of the original image borders, we extracted the center 184 × 224 pixels from each frame and then resized them to 92 × 112 pixels. Two examples of a blurred image can be seen in Figure 5. ...
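For concreteness, here is a minimal Pillow sketch of the crop-and-resize step described in this excerpt. The file names, grayscale conversion, and the assumption that frames are larger than the 184 × 224 crop are placeholders.

```python
# Hedged sketch of the frame preprocessing: center-crop 184 x 224, resize to 92 x 112.
from PIL import Image

def center_crop_and_resize(frame: Image.Image) -> Image.Image:
    w, h = frame.size
    cw, ch = 184, 224                                   # center region named in the excerpt
    left, top = (w - cw) // 2, (h - ch) // 2
    cropped = frame.crop((left, top, left + cw, top + ch))
    return cropped.resize((92, 112), Image.BILINEAR)    # back to the AT&T face size

frame = Image.open("blurred_frame.png").convert("L")    # placeholder path, grayscale
face = center_crop_and_resize(frame)
face.save("blurred_face_92x112.png")
```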

Citations

... Several obfuscation techniques (e.g., blurring [10,11] or pixelation [10]) have been employed to protect SI in images. Recent works [9,12,13] demonstrated that deep learning-based (DL) techniques [14,15] can be employed by adversaries to re-identify/recognize obfuscated SI in images¹ (e.g., recognition-based [17][18][19]) and breach individuals' privacy and anonymity². ...
... As stated in [12,20,21], the recent deployment of DL approaches as attacks raised new privacy concerns in the context of image obfuscation. ¹ Several studies showed that deep neural networks outperform traditional learning-based approaches for image recognition and restoration tasks [1,16]; hence, from a privacy perspective, deep learning-based (DL) techniques are considered strong attacks [8,17,18]. ² Throughout the rest of this paper, we will use the terms obfuscation and anonymization interchangeably. ...
... context of image obfuscation. The authors in [8,9,13,17,18,21] evaluated the robustness of obfuscation techniques by exploring different aspects of adversaries performing standalone DL-assisted attacks. Nevertheless, enhancing the evaluation methodology of obfuscation techniques and improving the defense strategies against adversaries require considering a more "pessimistic" attacking scenario, i.e., stronger adversaries [13,21,22]. ...
Article
Full-text available
Obfuscation techniques (e.g., blurring) are employed to protect sensitive information (SI) in images such as individuals' faces. Recent works demonstrated that adversaries can perform deep learning-assisted (DL) attacks to re-identify obfuscated face images. Adversaries are modeled by their goals, knowledge (e.g., background knowledge), and capabilities (e.g., DL-assisted attacks). Nevertheless, enhancing the evaluation methodology of obfuscation techniques and improving the defense strategies against adversaries requires considering a more "pessimistic" attacking scenario, i.e., stronger adversaries. According to a 2019 article published by the European Union Agency for Cybersecurity (ENISA), adversaries tend to perform more sophisticated and dangerous attacks when collaborating. To address these concerns, our paper investigates a novel privacy challenge in the context of image obfuscation. Specifically, we examine whether adversaries, when collaborating, can amplify their DL-assisted attacks and cause additional privacy breaches against a target dataset of obfuscated images. We empirically demonstrate that federated learning (FL) can be used as a collaborative attack/adversarial strategy to (i) leverage the attacking capabilities of an adversary, (ii) increase the privacy breaches, and (iii) remedy the lack of background knowledge and data shortage without the need to share/disclose the local training datasets in a centralized location. To the best of our knowledge, we are the first to consider collaborative and, more specifically, FL-based attacks in the context of face obfuscation.
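The federated collaboration described in this abstract can be sketched as a FedAvg-style round: each colluding adversary trains a local copy of a shared recognition model on its own obfuscated data, and only model parameters are averaged, never the raw images. The sketch below is a generic FedAvg round under assumed placeholders for the local training routine and the model; it is not the paper's exact protocol.

```python
# Hedged sketch of one federated (FedAvg-style) round among colluding adversaries.
import copy
import torch

def federated_round(global_model, local_datasets, local_train_fn, weights=None):
    """Broadcast the global model, train locally, and average the resulting parameters."""
    n = len(local_datasets)
    weights = weights or [1.0 / n] * n
    local_states = []
    for data in local_datasets:
        local_model = copy.deepcopy(global_model)   # each adversary starts from the shared model
        local_train_fn(local_model, data)           # local DL-assisted attack training (placeholder)
        local_states.append(local_model.state_dict())
    # Weighted parameter averaging (FedAvg); only parameters are exchanged.
    avg_state = {}
    for key in local_states[0]:
        stacked = torch.stack([s[key].float() for s in local_states])
        w = torch.tensor(weights).view(-1, *([1] * (stacked.dim() - 1)))
        avg_state[key] = (w * stacked).sum(0).to(local_states[0][key].dtype)
    global_model.load_state_dict(avg_state)
    return global_model
```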
... faces) are first detected and then modified using inpainting techniques [15][16][17] or more basic methods such as blurring or pixelization. The former approach's use of two deep neural networks (DNNs) makes it challenging for real-time processing, while the latter approach is vulnerable to deep learning attacks [18]. An alternative one-step solution is generative adversarial privacy [9]. ...
Article
Full-text available
Visual analysis tasks, including crowd management, often require resource-intensive machine learning models, posing challenges for deployment on edge hardware. Consequently, cloud computing emerges as a prevalent solution. To address privacy concerns associated with offloading video data to remote cloud platforms, we present a novel approach using adversarial training to develop a lightweight obfuscator neural network. Our method focuses on pedestrian detection as an example of visual analysis, allowing the transformation of video frames on the camera itself to retain only essential information for pedestrian detection while preserving privacy. Importantly, the obfuscated data remains compatible with publicly available object detectors, requiring no modifications or significant loss in accuracy. Additionally, our technique overcomes the common limitation of relying on labeled sensitive attributes for privacy preservation. By demonstrating the inability of pedestrian attribute recognition models to detect attributes in obfuscated videos, we validate the efficacy of our privacy protection method. Our results suggest that this scalable approach holds promise for enabling camera usage in video analytics while upholding personal privacy.
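The adversarial objective sketched in this abstract can be written as two competing loss terms: keep a frozen pedestrian detector accurate on obfuscated frames while driving an attribute recognizer's accuracy down. The snippet below is a schematic of that trade-off under simplifying assumptions (per-frame binary detection logits, a generic attribute classifier, placeholder networks); it is not the paper's implementation. In practice the attribute recognizer would be updated in alternation to minimize its own loss, which is what makes the training adversarial.

```python
# Hedged sketch of the obfuscator's objective: preserve detection, destroy attributes.
import torch
import torch.nn.functional as F

def obfuscator_loss(obfuscator, detector, attribute_net, frames,
                    det_targets, attr_targets, lambda_adv=1.0):
    obfuscated = obfuscator(frames)
    # Utility term: the (frozen) detector should still work on obfuscated frames.
    utility = F.binary_cross_entropy_with_logits(detector(obfuscated), det_targets)
    # Privacy term: *maximize* the attribute recognizer's error, hence the minus sign.
    privacy = F.cross_entropy(attribute_net(obfuscated), attr_targets)
    return utility - lambda_adv * privacy
```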
... In line with previous work [47,56,74,84], we propose that recognition systems be trained on anonymized data so that a more reliable anonymization performance is achieved. The idea of retraining recognition systems was first proposed for face recognition by Newton et al. ...
Article
Full-text available
Biometric data contains distinctive human traits such as facial features or gait patterns. The use of biometric data permits an individuation so exact that the data is utilized effectively in identification and authentication systems. But for this same reason, privacy protections become indispensably necessary. Privacy protection is extensively afforded by the technique of anonymization. Anonymization techniques protect sensitive personal biometric data by obfuscating or removing information that allows linking records to the generating individuals, in order to achieve high levels of anonymity. However, our understanding of, and ability to develop, effective anonymization rely, in equal parts, on the effectiveness of the methods employed to evaluate anonymization performance. In this paper, we assess the state-of-the-art methods used to evaluate the performance of anonymization techniques for facial images and for gait patterns. We demonstrate that the state-of-the-art evaluation methods have serious and frequent shortcomings. In particular, we find that the underlying assumptions of the state of the art are quite unwarranted. State-of-the-art methods generally assume a difficult recognition scenario and thus a weak adversary. However, that assumption causes state-of-the-art evaluations to grossly overestimate the performance of the anonymization. Therefore, we propose a strong adversary which is aware of the anonymization in place. This adversary model implements an appropriate measure of anonymization performance. We improve the selection process for the evaluation dataset, and we reduce the number of identities contained in the dataset while ensuring that these identities remain easily distinguishable from one another. Our novel evaluation methodology surpasses the state of the art because we measure worst-case performance and so deliver a highly reliable evaluation of biometric anonymization techniques.
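The "anonymization-aware" adversary proposed here boils down to training the recognition model on anonymized enrollment data and then measuring re-identification on anonymized probes. A minimal sketch under that reading follows; `anonymize`, `train_recognizer`, and `identify` are placeholders for whatever anonymization technique and recognition pipeline are being evaluated.

```python
# Hedged sketch of a worst-case (anonymization-aware) re-identification evaluation.
def evaluate_anonymization(enroll_images, enroll_ids, probe_images, probe_ids,
                           anonymize, train_recognizer, identify):
    anon_enroll = [anonymize(img) for img in enroll_images]
    anon_probes = [anonymize(img) for img in probe_images]
    model = train_recognizer(anon_enroll, enroll_ids)        # adversary trains on anonymized data
    predictions = [identify(model, img) for img in anon_probes]
    correct = sum(p == t for p, t in zip(predictions, probe_ids))
    return correct / len(probe_ids)                          # worst-case re-identification rate
```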
... Blurring filters slide a Gaussian kernel over an image, thereby using neighbourhood pixels to influence the values of a central pixel (Fig. 13f). Although widely used in applications as large as Google Maps, blurring has been shown to be ineffective for protecting identity against various deep learning-based attacks, even while appearing de-identified to human observers [87,101]. For pixelation, a grid of a certain size is chosen for the sensitive pixels in an image. ...
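As a small illustration of the two filters in this excerpt, here is a sketch of Gaussian blurring and block-averaging pixelation on a grayscale NumPy image; the sigma and block size are arbitrary example values, and the image sides are assumed divisible by the block size.

```python
# Hedged sketch of Gaussian blurring and pixelation on a 2-D grayscale array.
import numpy as np
from scipy.ndimage import gaussian_filter

def blur(image: np.ndarray, sigma: float = 3.0) -> np.ndarray:
    # Gaussian blur: each output pixel is a weighted average of its neighbourhood.
    return gaussian_filter(image, sigma=sigma)

def pixelate(image: np.ndarray, block: int = 8) -> np.ndarray:
    # Pixelation: average each block x block cell and paint the cell with that average.
    h, w = image.shape                       # assumes h and w are divisible by `block`
    cells = image.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    return np.kron(cells, np.ones((block, block)))
```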
... These simpler image filtering techniques have, however, been shown in various studies not to be robust in providing privacy [70,87,90,98]. Deblurring techniques have also been researched in the literature [75,124,169]. ...
Article
Full-text available
This paper reviews the state of the art in visual privacy protection techniques, with particular attention paid to techniques applicable to the field of Active and Assisted Living (AAL). A novel taxonomy with which state-of-the-art visual privacy protection methods can be classified is introduced. Perceptual obfuscation methods, a category in this taxonomy, are highlighted. These are a category of visual privacy preservation techniques particularly relevant to scenarios that come under video-based AAL monitoring. Obfuscation against machine learning models is also explored. A high-level classification scheme of privacy by design, as defined by experts in privacy and data protection law, is connected to the proposed taxonomy of visual privacy preservation techniques. Finally, we note open questions that exist in the field and introduce the reader to some exciting avenues for future research in the area of visual privacy.
... Nevertheless, we have to admit that researchers have shown that low resolution alone does not provide sufficient privacy guarantees. McPherson et al. found that obfuscated images contain enough information correlated with the obfuscated content to enable accurate reconstruction of the latter [39]. Although we have compared the privacy recognition performance of state-of-the-art machine learning algorithms on low-resolution images, we believe that our evaluation results on low-resolution images leave much room for discussion. ...
... In a nutshell, obfuscation is done by altering/removing features from the images to hide SI while, at the same time, retaining some visual features to keep the image suitable for processing. However, these visual features can be used to identify/reconstruct the obfuscated SI via different attacks that can be classified as recognition-based [7][8][9] and restoration-based attacks [10][11][12]. [15] Recognition-based attacks breach the images' privacy and anonymity by training learning-based algorithms to perform recognition tasks on obfuscated information [7]. ...
... However, these visual features can be used to identify/reconstruct the obfuscated SI via different attacks that can be classified as recognition-based [7][8][9] and restoration-based attacks [10][11][12]. [15] Recognition-based attacks breach the images' privacy and anonymity by training learning-based algorithms to perform recognition tasks on obfuscated information [7]. Restoration-based attacks de-anonymize privacy-protected images by trying to restore/reconstruct the clear original features of the obfuscated information [10][11][12]. ...
... Several studies showed that Deep Neural Networks outperform traditional learning-based approaches for image restoration and recognition tasks [1,13,14]. Hence, from a privacy perspective, these Deep Learning-based (DL) techniques are regarded as strong recognition-based and restoration-based attacks [7,[15][16][17]]. ...
Article
Full-text available
Image obfuscation techniques (e.g., pixelation, blurring, and masking) have been developed to protect sensitive information in images (e.g., individuals' faces). In a previous work, we designed a recommendation framework that evaluates the robustness of image obfuscation techniques and recommends the most resilient obfuscation against deep-learning-assisted attacks. In this paper, we extend the framework for two main reasons. First, to the best of our knowledge, there is neither a standardized evaluation methodology nor a defined adversary model for evaluating the robustness of image obfuscation, and more specifically face obfuscation, techniques. Therefore, we adapt a three-component adversary model (goal, knowledge, and capabilities) to our application domain (i.e., facial feature obfuscation) and embed it in our framework. Second, considering several attacking scenarios is vital when evaluating the robustness of image obfuscation techniques. Hence, we define three threat levels and explore new aspects of an adversary and its capabilities by extending the background knowledge to include the obfuscation technique along with its hyper-parameters and the identities of the target individuals. We conduct three sets of experiments on a publicly available celebrity faces dataset. In the first experiment, we implement and evaluate the recommendation framework by considering four adversaries attacking obfuscation techniques (e.g., pixelation, Gaussian/motion blur, and masking) via restoration-based attacks. In the second and third experiments, we demonstrate how the adversary's attacking capabilities (recognition-based and restoration-and-recognition-based attacks) scale with its background knowledge and how this increases the potential risk of breaching the identities of blurred faces.
... In line with previous work [25,27,39,46], we propose that recognition systems also be trained on anonymized data in order to obtain more reliable anonymization performance. The idea of retraining recognition systems was first proposed for face recognition by Newton et al. ...
Preprint
Full-text available
Biometric data contains distinctive human traits such as facial features or gait patterns. The use of biometric data permits an individuation so exact that the data is utilized effectively in identification and authentication systems. But for this same reason, privacy protections become indispensably necessary. Privacy protection is extensively afforded by the technique of anonymization. Anonymization techniques obfuscate or remove the sensitive personal data to achieve high levels of anonymity. However, the effectiveness of anonymization relies, in equal parts, on the effectiveness of the methods employed to evaluate anonymization performance. In this paper, we assess the state-of-the-art methods used to evaluate the performance of anonymization techniques for facial images and gait patterns. We demonstrate that the state-of-the-art evaluation methods have serious and frequent shortcomings. In particular, we find that the underlying assumptions of the state of the art are quite unwarranted. When a method evaluating the performance of anonymization assumes a weak adversary or a weak recognition scenario, the resulting evaluation will very likely be a gross overestimation of the anonymization performance. Therefore, we propose a stronger adversary model which is alert to the recognition scenario as well as to the anonymization scenario. Our adversary model implements an appropriate measure of anonymization performance. We improve the selection process for the evaluation dataset, and we reduce the number of identities contained in the dataset while ensuring that these identities remain easily distinguishable from one another. Our novel evaluation methodology surpasses the state of the art because we measure worst-case performance and so deliver a highly reliable evaluation of biometric anonymization techniques.
... This masking process succeeds in anonymizing the images by completely hiding the identity-related components, but as a consequence it renders the facial attribute information, such as a person's pose, expression, or skin tone (from which many computer vision tasks learn), indecipherable. Another problem with these methods is that, whilst the resulting images may not be re-identifiable by humans, they can often be reversed by deep learning models [29,33]. ...
... Tansuriyavong et al. [41] de-identify people in a room by detecting each person's silhouette, masking it, and showing only the name, balancing privacy protection against the ability to convey information about the situation; Chen et al. [5] obscure a person's body information with an obscuring algorithm that exploits background subtraction, leaving only the body outline visible. Naive de-identification techniques that maintain little information about the region of interest, such as pixelation and blurring, may seem to work to the human eye, but there exist approaches able to revert the anonymized face to its original state [29,33]. To improve the level of privacy protection, techniques known as k-Same have been introduced [32], where, given a face, a de-identified visage is computed as the average of the k closest faces and is then used to replace the original faces used in the calculation. ...
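A compact sketch of the k-Same idea mentioned in this excerpt: each face is replaced by the average of its k closest faces. Closeness is measured here in raw pixel space for simplicity, which is an assumption; published k-Same variants typically operate on aligned faces or model parameters and add bookkeeping to guarantee k-anonymity.

```python
# Hedged sketch of k-Same-style de-identification on aligned grayscale faces.
import numpy as np

def k_same(faces: np.ndarray, k: int) -> np.ndarray:
    """faces: (n, h, w) array of aligned faces; returns de-identified faces."""
    flat = faces.reshape(len(faces), -1).astype(float)
    out = np.empty_like(flat)
    for i, f in enumerate(flat):
        dists = np.linalg.norm(flat - f, axis=1)
        nearest = np.argsort(dists)[:k]          # k closest faces (including the face itself)
        out[i] = flat[nearest].mean(axis=0)      # replace with their average
    return out.reshape(faces.shape)
```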
Preprint
This work addresses the problem of anonymizing the identity of faces in a dataset of images, such that the privacy of those depicted is not violated, while at the same time the dataset remains useful for downstream tasks such as training machine learning models. To the best of our knowledge, we are the first to explicitly address this issue and deal with two major drawbacks of the existing state-of-the-art approaches, namely that they (i) require the costly training of additional, purpose-trained neural networks, and/or (ii) fail to retain the facial attributes of the original images in the anonymized counterparts, the preservation of which is of paramount importance for their use in downstream tasks. We accordingly present a task-agnostic anonymization procedure that directly optimizes the images' latent representation in the latent space of a pre-trained GAN. By optimizing the latent codes directly, we ensure both that the identity is a desired distance away from the original (with an identity obfuscation loss) and that the facial attributes are preserved (using a novel feature-matching loss in FaRL's deep feature space). We demonstrate through a series of both qualitative and quantitative experiments that our method is capable of anonymizing the identity of the images whilst -- crucially -- better preserving the facial attributes. We make the code and the pre-trained models publicly available at: https://github.com/chi0tzp/FALCO.
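The optimization described in this abstract can be sketched as gradient steps on a latent code under two losses: push the identity embedding away from the original while keeping deep, non-identity features close. The snippet below is a schematic under assumed placeholder networks (`generator`, `id_encoder`, `feature_encoder`); the margin formulation and loss weights are illustrative, not the paper's exact losses.

```python
# Hedged sketch of latent-code anonymization with identity and feature-matching losses.
import torch
import torch.nn.functional as F

def anonymize_latent(w, generator, id_encoder, feature_encoder,
                     steps=200, lr=0.01, id_margin=0.5, lambda_feat=1.0):
    w_orig = w.detach()
    with torch.no_grad():
        id_orig = id_encoder(generator(w_orig))        # identity embedding of the original
        feat_orig = feature_encoder(generator(w_orig)) # non-identity feature target
    w = w_orig.clone().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        img = generator(w)
        # Identity obfuscation: push the identity embedding at least `id_margin` away.
        id_dist = (1 - F.cosine_similarity(id_encoder(img), id_orig)).mean()
        id_loss = F.relu(id_margin - id_dist)
        # Feature matching: keep non-identity features (pose, expression, ...) close.
        feat_loss = F.mse_loss(feature_encoder(img), feat_orig)
        loss = id_loss + lambda_feat * feat_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w.detach()
```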
... These methods have in common that the faces are recognizable neither to humans nor to automatic face recognition systems. Approaches have also already been proposed that reverse or circumvent such de-identification methods [9]. ...
Article
Full-text available
Abstract: De-identification of face images refers to hiding the information in a face image of a person from which that person's identity can be derived. The goal of de-identification can be to hide the identity from automatic face recognition systems, from humans, or from both. This article gives a brief overview of the currently available approaches to de-identification that attempt to preserve the identity from a human point of view while no longer allowing automatic face recognition systems to derive the identity. If face images are collected en masse, they can then no longer be used to find identity-related data.
... (2) and (3)). The ReLU is faster in gradient descent than the Tanh and Sigmoid functions in terms of training time (McPherson et al., 2016). Therefore, deep convolutional neural networks using the ReLU activation function train many times faster than those using the Tanh function (Krizhevsky et al., 2012). ...
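A tiny numerical illustration of the claim in this excerpt: the ReLU gradient stays at 1 for all positive inputs, whereas the tanh and sigmoid gradients shrink towards 0 for large |x|, which slows gradient descent.

```python
# Compare activation-function gradients at a few sample points.
import numpy as np

x = np.linspace(-6, 6, 7)
relu_grad    = (x > 0).astype(float)          # d/dx max(0, x)
tanh_grad    = 1 - np.tanh(x) ** 2            # d/dx tanh(x)
sigmoid      = 1 / (1 + np.exp(-x))
sigmoid_grad = sigmoid * (1 - sigmoid)        # d/dx sigmoid(x)
print(np.round(np.c_[x, relu_grad, tanh_grad, sigmoid_grad], 4))
```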
Article
Soil salinity may occur naturally through pedogenetic processes or may result from abiotic factors caused by human activities such as irrigation with poor-quality water, lack of drainage, and land development. The infertility of agricultural lands due to soil salinity causes many economic and social problems at local and global scales. In this study, the salinity problem in the Harran Plain, one of the largest agricultural areas in Turkey, was investigated using the deep learning-based U-Net algorithm. Different combinations of the Normalized Difference Salinity Index (NDSI), Salinity Index I (SI), Salinity Index II (SII), and Normalized Difference Vegetation Index (NDVI) were integrated into the RapidEye multispectral image to increase the segmentation accuracy. The most successful result (93.78% overall accuracy) was achieved when the algorithm was trained with 300 iterations and only the SII index was added to the original image. The same images were also segmented by the SVM method, and the U-Net deep learning architecture was able to detect soil salinity about 20–32% more accurately than SVM for three different 5-band test images. This study demonstrates that the delineation and mapping of soil salinity can be done automatically thanks to a deep learning (DL) network, with greater accuracy, less time and effort, and without requiring continuous in-situ Electrical Conductivity (EC) measurements or repetitive supervised learning for each new satellite image.
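As an illustration of the band-stacking step in this abstract, here is a sketch that appends a spectral index as an extra channel before segmentation. Only NDVI is computed, using its standard definition; the salinity indices (SI, SII, NDSI) are defined differently across studies, and the RapidEye band positions used here (red = third band, NIR = fifth band) are an assumption.

```python
# Hedged sketch: compute NDVI from a multispectral image and append it as an extra band.
import numpy as np

def add_index_band(image: np.ndarray, red_idx: int = 2, nir_idx: int = 4) -> np.ndarray:
    """image: (H, W, bands) RapidEye-like array; returns the image with NDVI appended."""
    red = image[..., red_idx].astype(float)
    nir = image[..., nir_idx].astype(float)
    ndvi = (nir - red) / (nir + red + 1e-6)       # standard NDVI, guarded against division by zero
    return np.concatenate([image.astype(float), ndvi[..., None]], axis=-1)
```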