Figure - available from: Remote Sensing
The overall structure of MAFN.

Source publication
Article
Full-text available
Radar emitter identification (REI) aims to extract the fingerprint of an emitter and determine the individual to which it belongs. Although many methods have used deep neural networks (DNNs) for an end-to-end REI, most of them only focus on a single view of signals, such as spectrogram, bi-spectrum, signal waveforms, and so on. When the electromagn...

Similar publications

Article
Full-text available
Specific emitter identification (SEI) is a technology that identifies different emitters through their unique characteristics. Research on traditional specific emitter identification systems focuses on general feature extraction. However, its effectiveness significantly decreases when faced with various influencing factors, such as time changes. Ex...

Citations

... Deep neural networks (DNNs) have achieved excellent performance in many remote sensing applications, such as object detection [4][5][6][7][8][9], image classification [10][11][12][13][14], and semantic segmentation [15][16][17][18][19][20]. However, DNNs have been shown by Szegedy [21] to be vulnerable to adversarial examples. ...
Article
Full-text available
Adversarial example generation on Synthetic Aperture Radar (SAR) images is an important research area that could have significant impacts on security and environmental monitoring. However, most current adversarial attack methods on SAR images are designed for white-box situations by end-to-end means, which are often difficult to achieve in real-world situations. This article proposes a novel black-box targeted attack method, called Shallow-Feature Attack (SFA). Specifically, SFA assumes that the shallow features of the model are more capable of reflecting spatial and semantic information, such as target contours and textures, in the image. The proposed SFA generates ghost data packages for input images and produces critical features by extracting gradients and feature maps at shallow layers of the model. A feature-level loss is then constructed from the critical features of both clean images and target images, and combined with the end-to-end loss to form a hybrid loss function. By fitting the critical features of the input image at specific shallow layers of the neural network to the target critical features, the attack method generates more powerful and transferable adversarial examples. Experimental results show that adversarial examples generated by SFA improved the black-box success rate of single-model attacks by an average of 3.73%, and of ensemble-model attacks (without victim models) by 4.61%.
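The hybrid-loss idea described in the abstract can be sketched in a toy, self-contained form: a feature-level loss that pulls the shallow features of the adversarial input toward those of a target image, combined with an end-to-end classification loss toward the target class. Everything below (the tiny two-layer model, the weights, the step sizes, and the numerical gradient) is illustrative only and is not the paper's actual SFA implementation, which operates on the shallow layers of a DNN with backpropagation and ghost data packages.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny model: a "shallow" feature layer plus a classifier head.
W_shallow = rng.normal(size=(16, 8))   # shallow feature extractor
W_head = rng.normal(size=(8, 3))       # classification head

def shallow_features(x):
    return np.tanh(x @ W_shallow)

def logits(x):
    return shallow_features(x) @ W_head

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def hybrid_loss(x_adv, target_feat, target_class, alpha=0.5):
    """Feature-level loss (match the target image's shallow features)
    combined with the end-to-end loss toward the target class."""
    feat_loss = np.mean((shallow_features(x_adv) - target_feat) ** 2)
    ce_loss = -np.log(softmax(logits(x_adv))[target_class] + 1e-12)
    return alpha * feat_loss + (1 - alpha) * ce_loss

def attack_step(x_adv, target_feat, target_class, eps=1e-3, step=0.05):
    """One signed-gradient step on the hybrid loss, using a numerical
    gradient for illustration; the step is halved until the loss drops."""
    grad = np.zeros_like(x_adv)
    for i in range(x_adv.size):
        d = np.zeros_like(x_adv)
        d[i] = eps
        grad[i] = (hybrid_loss(x_adv + d, target_feat, target_class)
                   - hybrid_loss(x_adv - d, target_feat, target_class)) / (2 * eps)
    base = hybrid_loss(x_adv, target_feat, target_class)
    for s in step * 0.5 ** np.arange(8):
        cand = x_adv - s * np.sign(grad)   # FGSM-style signed update
        if hybrid_loss(cand, target_feat, target_class) < base:
            return cand
    return x_adv

# Illustrative clean/target pair; the target's shallow features are the
# "critical features" the adversarial example is fitted to.
x_clean = rng.normal(size=16)
x_target = rng.normal(size=16)
target_feat = shallow_features(x_target)
target_class = 2

x_adv = x_clean.copy()
loss0 = hybrid_loss(x_adv, target_feat, target_class)
for _ in range(10):
    x_adv = attack_step(x_adv, target_feat, target_class)
loss1 = hybrid_loss(x_adv, target_feat, target_class)
print(f"hybrid loss before: {loss0:.4f}, after: {loss1:.4f}")
```

The weighting `alpha` between the feature-level and end-to-end terms is a free parameter here; the intuition from the abstract is that matching shallow features transfers better across models than optimizing the end-to-end loss alone.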