Comparison of different channel recognition performance.

Source publication
Article
Full-text available
Modulation recognition is an indispensable part of signal interception analysis and has long been a research hotspot in the field of radio communication. With the increasing complexity of the electromagnetic spectrum environment, interference in signal propagation is becoming more and more serious. This paper proposes a modulation recognition s...

Similar publications

Article
Full-text available
Deep learning has shown remarkable advantages in many fields. Although the image recognition capabilities of deep neural networks (DNNs) have developed rapidly in recent years, studies have confirmed that DNNs can be attacked by well-crafted images, resulting in model recognition errors. The adversarial examples generated by the traditional...

Citations

... Another research goal was to select a modulation that would guarantee the highest possible frequency of data packet transmission with the lowest possible bandwidth. The use of a wide band in radio communication allows for easier detection, interception, and disruption of the network [26]. In addition, the network itself may interfere with other receivers operating on the 434 MHz band, including garage doors, car remote controls, etc. ...
... A/P: a Long Short-Term Memory (LSTM) network [16] and an LSTM denoising auto-encoder [14] recognize AM-SSB well and distinguish QAM16 from QAM64 [22]. Spectrum: RSBU-CW with the Welch spectrum, square spectrum, and fourth-power spectrum [23], SCNN [18] with the short-time Fourier transform (STFT), and a fine-tuned CNN model [17] with the smooth pseudo-Wigner-Ville and Born-Jordan distributions achieve high accuracy on PSK [23] and recognize OFDM well, which is revealed only in the spectrum domain due to its plentiful sub-carriers [17]. In recent years, several studies have also focused on the advantages of multimodal information fusion for AMR tasks. In [24], modality-discriminative features are captured separately using three ResNet networks, and the I/Q, A/P, and spectrum, square-spectrum, and fourth-power-spectrum amplitude features are combined by element-wise summation. ...
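The square- and fourth-power spectrum features cited above [23] can be illustrated with a minimal NumPy sketch. The function name, the plain-FFT estimate (rather than a Welch periodogram), and the per-branch normalization here are illustrative assumptions, not the published RSBU-CW pipeline:

```python
import numpy as np

def spectrum_features(iq, eps=1e-12):
    """Amplitude spectra of the raw, squared, and fourth-power signal.

    Raising a PSK-type signal to the 2nd or 4th power strips the phase
    modulation, concentrating spectral energy into discrete lines --
    which is why these spectra are popular hand-crafted inputs.
    """
    x = iq.astype(np.complex64)
    feats = []
    for sig in (x, x**2, x**4):
        spec = np.abs(np.fft.fftshift(np.fft.fft(sig)))
        feats.append(spec / (np.linalg.norm(spec) + eps))  # unit-norm per branch
    return np.stack(feats)  # shape (3, N)

# toy QPSK-like symbol burst
rng = np.random.default_rng(0)
symbols = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, 128)))
f = spectrum_features(symbols)
print(f.shape)  # (3, 128)
```

For QPSK, the fourth-power branch collapses to a spectral line, which is exactly the cue these features hand to the classifier.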
... Specifically, the DSCLDNN multiplies I/Q and A/P features with an outer product. Unlike these direct addition or multiplication fusion approaches, Ref. [23] uses a PNN (Product-based Neural Network) model to cross-fuse the three modal features in a fixed order. However, most of the above methods fuse multimodal features via direct or crosswise summation or outer products, which tends to ignore the variability of the different modalities and their differing impacts on modulation identification. ...
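A minimal sketch of the outer-product fusion mentioned above (DSCLDNN-style); the function name and the flatten-and-normalize step are illustrative assumptions, not the published architecture:

```python
import numpy as np

def outer_product_fusion(feat_iq, feat_ap):
    """Fuse two modality feature vectors with an outer product.

    The (d1, d2) matrix captures every pairwise interaction between
    the I/Q and A/P features; it is flattened and normalized so a
    downstream classifier can consume it as one vector.
    """
    bilinear = np.outer(feat_iq, feat_ap)            # (d1, d2) interactions
    fused = bilinear.flatten()
    return fused / (np.linalg.norm(fused) + 1e-12)   # unit-norm vector

f_iq = np.random.default_rng(1).standard_normal(8)
f_ap = np.random.default_rng(2).standard_normal(8)
v = outer_product_fusion(f_iq, f_ap)
print(v.shape)  # (64,)
```

The quadratic blow-up in dimension (d1·d2) is the usual drawback of this fusion, which motivates the attention-based alternatives discussed in the citing papers.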
Article
Full-text available
Recently, deep learning models have been widely applied to modulation recognition, and they have become a hot topic due to their excellent end-to-end learning capabilities. However, current methods are mostly based on uni-modal inputs, which suffer from incomplete information and local optimization. To complement the advantages of different modalities, we focus on the multi-modal fusion method. Therefore, we introduce an iterative dual-scale attentional fusion (iDAF) method to integrate multimodal data. Firstly, two feature maps with different receptive field sizes are constructed using local and global embedding layers. Secondly, the feature inputs are iterated into the iterative dual-channel attention module (iDCAM), where the two branches capture the details of high-level features and the global weights of each modal channel, respectively. The iDAF not only extracts the recognition characteristics of each specific domain, but also complements the strengths of different modalities to obtain a fruitful view. Our iDAF achieves a recognition accuracy of 93.5% at 10 dB and 0.6232 averaged over the full signal-to-noise ratio (SNR) range. Comparative experiments and ablation studies demonstrate the effectiveness and superiority of the iDAF.
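As a rough illustration of dual-branch channel attention over modality features (this is not the paper's iDCAM; the pooling choices, the averaging of the two branches, and the iteration count are all assumptions made for the sketch):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def dual_branch_channel_fuse(feats, iterations=2):
    """Toy dual-branch channel attention over modality features.

    feats: array of shape (C, N), one row per modality channel.
    A 'global' branch scores channels by their mean statistics, a
    'local' branch by their strongest activation; the two weightings
    are averaged and applied iteratively before summing channels.
    """
    fused = feats.copy()
    for _ in range(iterations):
        global_w = softmax(fused.mean(axis=1))         # whole-channel statistics
        local_w = softmax(np.abs(fused).max(axis=1))   # salient local detail
        w = 0.5 * (global_w + local_w)
        fused = fused * w[:, None] * feats.shape[0]    # reweight channels
    return fused.sum(axis=0)                           # fused vector, shape (N,)

rng = np.random.default_rng(6)
feats = rng.standard_normal((3, 16))   # e.g. I/Q, A/P, spectrum embeddings
out = dual_branch_channel_fuse(feats)
print(out.shape)  # (16,)
```

The point of the two branches is that a dominant modality wins the global weighting while a modality with one sharp discriminative activation still survives through the local branch.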
... A/P: a Long Short-Term Memory (LSTM) network [15] and an LSTM denoising auto-encoder [13] recognize AM-SSB well and distinguish QAM16 from QAM64 [19]. Spectrum: RSBU-CW with the Welch spectrum, square spectrum, and fourth-power spectrum [20], SCNN [17] with the short-time Fourier transform (STFT), and a fine-tuned CNN model [16] with the smooth pseudo-Wigner-Ville and Born-Jordan distributions achieve high accuracy on PSK [20] and recognize OFDM well, which is revealed only in the spectrum domain due to its plentiful sub-carriers [16]. In recent years, several studies have also focused on the advantages of multimodal information fusion for AMR tasks. In [21], modality-discriminative features are captured separately using three ResNet networks, and the I/Q, A/P, and spectrum, square-spectrum, and fourth-power-spectrum amplitude features are combined by element-wise summation. ...
... Specifically, the DSCLDNN multiplies I/Q and A/P features with an outer product. Unlike these direct addition or multiplication fusion approaches, [20] uses a PNN (Product-based Neural Network) model to cross-fuse the three modal features in a fixed order. However, most of the above methods fuse multimodal features by direct or crosswise summation or outer products, which tends to ignore the variability of the different modalities and their differing impacts on modulation identification. ...
Preprint
Full-text available
Recently, deep learning models have been widely applied to modulation recognition, and they have become a hot topic due to their excellent end-to-end learning capabilities. However, current methods are mostly based on uni-modal inputs, which suffer from incomplete information and local optimization. To complement the advantages of different modalities, we focus on the multi-modal fusion method. Therefore, we introduce an iterative dual-scale attentional fusion (iDAF) method to integrate multimodal data. Firstly, two feature maps with different receptive field sizes are constructed using local and global embedding layers. Secondly, the feature inputs are iterated into the iterative dual-channel attention module (iDCAM), where the two branches capture the details of high-level features and the global weights of each modal channel, respectively. The iDAF not only extracts the recognition characteristics of each specific domain, but also complements the strengths of different modalities to obtain a fruitful view. Our iDAF achieves a recognition accuracy of 93.5% at 10 dB and 0.6232 at full SNR. The comparative experiments and ablation studies demonstrate the effectiveness and superiority of the iDAF.
... Qi et al. propose a Waveform Spectrum Multimodality Fusion (WSMF) method, which relies on a deep Residual Network (ResNet) and a concise concatenation layer [8]. An optimized Product-based Neural Network (PNN) model [9] cross-combines the features extracted from I/Q, A/P, and spectrum. However, the above methods simply cross-connect or directly concatenate features instead of further capturing the underlying information. ...
... Modality embedding: Inspired by [9], the original signal symbol is transformed into three modalities, i.e., In-phase/Quadrature (IQ), Amplitude/Phase (AP), and Spectrum (SP). IQ, AP, and SP respectively represent the signal's waveform, amplitude-phase, and spectral information. ...
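The three-modality embedding described in the snippet can be sketched as follows. The function name is an assumption, and the FFT-magnitude spectrum stands in for whatever spectrum estimator the cited work uses:

```python
import numpy as np

def signal_modalities(x):
    """Map a complex baseband sequence to the three modalities used by
    multimodal AMR models: I/Q, A/P, and spectrum (SP)."""
    iq = np.stack([x.real, x.imag])              # (2, N) in-phase / quadrature
    ap = np.stack([np.abs(x), np.angle(x)])      # (2, N) amplitude / phase
    sp = np.abs(np.fft.fftshift(np.fft.fft(x)))  # (N,) magnitude spectrum
    return iq, ap, sp

rng = np.random.default_rng(3)
x = rng.standard_normal(64) + 1j * rng.standard_normal(64)
iq, ap, sp = signal_modalities(x)
print(iq.shape, ap.shape, sp.shape)  # (2, 64) (2, 64) (64,)
```

All three views are derived from the same samples; the fusion networks discussed above differ only in how they recombine them.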
Preprint
Automatic Modulation Recognition (AMR) is a fundamental research topic in the field of signal processing and wireless communication, with widespread applications in cognitive radio, non-collaborative communication, etc. However, current AMR methods are mostly based on unimodal inputs, which suffer from incomplete information and local optimization. In this paper, we focus on modality utilization in AMR. Proxy experiments show that different modalities achieve a similar recognition effect in most scenarios, while the personalities of different inputs are complementary to each other for particular modulations. Therefore, we mine the universal and complementary characteristics of the modality data in the domain-agnostic and domain-specific aspects, yielding the Universal and Complementary subspaces accordingly (dubbed UCNet). To facilitate the subspace construction, we propose universal and complementary losses accordingly, where the former minimizes the heterogeneous feature gap by an adversarial constraint and the latter consists of an orthogonal constraint between universal and complementary features. Extensive experiments on the RadioML2016.10A dataset demonstrate the effectiveness of UCNet, which achieves the highest recognition accuracy of 93.2% at 10 dB and an average accuracy of 92.6% at SNRs greater than zero.
Article
Signal modulation classification (SMC) has attracted extensive attention for its wide application in the military and civil fields. Combining deep learning with wireless communication technology is currently a rapidly developing direction, and deep learning models dominate the field of SMC with their highly abstract feature extraction capability. However, most deep learning models are decision-agnostic, limiting their application to critical areas. This paper proposes combining traditional feature-based methods to set appropriate manual features as interpretable representations for different modulation classification tasks. A fitted decision tree model serves as the basis for explaining the original model's decision on the instance to be interpreted, and the trustworthiness of the original deep learning model is verified by comparing the decision tree with the prior knowledge of signal-feature-based modulation classification algorithms. We apply this interpretable explanation method to the current leading deep learning models in the field of modulation classification. The interpretation results show that the decision basis of the model at a high signal-to-noise ratio (SNR) is consistent with expert knowledge from traditional SMC methods. The experiments show that our method is stable and can guarantee local fidelity. The decision tree as an interpretation model is intuitive and consistent with human reasoning.
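The surrogate idea in the abstract (fit an interpretable model to reproduce the deep model's decisions, then measure their agreement, i.e. fidelity) can be sketched with a one-split decision stump. The hand-crafted feature, the stand-in "black box", and the stump itself are illustrative assumptions, not the paper's decision-tree pipeline:

```python
import numpy as np

def fit_stump(feature, labels):
    """Fit a one-split decision stump: find the threshold on a single
    hand-crafted feature that best reproduces the black box's labels."""
    order = np.argsort(feature)
    f, y = feature[order], labels[order]
    best_t, best_acc = f[0], 0.0
    for i in range(1, len(f)):
        t = 0.5 * (f[i - 1] + f[i])          # candidate split between samples
        acc = np.mean((f > t) == y)          # agreement with the black box
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

rng = np.random.default_rng(4)
feature = rng.uniform(size=500)              # e.g. a spectral-peak ratio
black_box = (feature > 0.5).astype(bool)     # stand-in for the deep model's output
t, fidelity = fit_stump(feature, black_box)
print(round(t, 2), fidelity)                 # recovered threshold near 0.5
```

Here the recovered threshold is the "interpretable decision basis": if it matches expert knowledge (e.g. a known feature cutoff from traditional SMC), that supports trusting the black-box model on this instance.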
Article
Multimodal fusion-based methods are a research hotspot for Automatic Modulation Recognition (AMR). However, existing methods primarily emphasize information integration and neglect the balance between the modalities. This paper proposes a novel Contrastive Learning-based Multimodal Fusion (CLMF) model, which integrates both signals and key features to enhance AMR. To obtain adequate signal representations, a contrastive learning architecture is proposed to learn meaningful representations from the multimodal fusion data, and a Multi-Layer Perceptron (MLP) is incorporated for precise signal classification. Moreover, a threshold discrimination disturbance strategy is designed to balance the information conflicts arising from the two modalities. Experiments demonstrate the efficiency of the CLMF model for AMR on a public dataset.
Article
Automatic modulation recognition (AMR) is a fundamental research topic in the field of signal processing and wireless communication, with widespread applications in cognitive radio, non-collaborative communication, etc. In this paper, the focus is on multi-modal utilization in AMR. Specifically, the universal and complementary characteristics of multiple modality data in the domain-agnostic and domain-specific aspects are mined, yielding the universal and complementary subspaces network accordingly (dubbed UCNet). To facilitate the subspace construction, universal and complementary losses are proposed accordingly. The proposed UCNet achieves the highest recognition accuracy of 93.2% at 10 dB on the RadioML2016.10A dataset, and the average accuracy is 92.6% at SNRs greater than zero.
Article
Full-text available
Context. The subject of the article is the recognition of a reference signal in the presence of additive interference. Objective. To recognize the reference signal from the obtained value of its weighting coefficient in conditions where additive interference is imposed on the spectrum of the reference signal at unknown random frequencies. The task is to develop a method for recognizing a reference signal for the case when the interference consists of an unknown periodic signal that can be represented by a finite sum of basis functions. In addition, the interference may also include deterministic signals from a given set with unknown weighting coefficients, which are transmitted over the communication channel simultaneously with the reference signal. Method. The unknown periodic component of the interference is approximated by a sum of basis functions. The number of signal samples that enter the recognition system depends on the number of basis functions. This signal is the sum of the basis functions and the reference signal with unknown weighting coefficients. To obtain the values of these coefficients, a method based on the properties of disproportion functions is used. The recognition process reduces to calculating the weighting coefficient of the reference signal: if it is zero, the reference signal is not part of the signal being analyzed. The recognition system is multi-level, and the number of levels depends on the number of basis functions. Results. The obtained results show that, provided the reference signal differs in at least one component from the given set of basis functions, recognition is successful. The given examples show that the system recognizes the reference signal even when the weighting coefficient of the interference is almost 1000 times greater than that of the reference signal.
The recognition system also works successfully when the interference includes a sum of deterministic signals from the given set transmitted simultaneously over the communication channel. Conclusions. The scientific novelty of the obtained results is that a method for recognizing the reference signal has been developed for conditions where only an upper estimate of the maximum frequency of the periodic interference component is known. Recognition also succeeds when, in addition to the unknown periodic interference, signals from a given set with unknown weighting coefficients are superimposed on the reference signal. In the process of recognition, in addition to the weighting coefficient of the reference signal, the coefficients of the interference components are also obtained.
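The recognition principle in the abstract (estimate the reference signal's weighting coefficient against a basis expansion of the interference; a near-zero coefficient means the reference is absent) can be sketched numerically. Ordinary least squares stands in here for the paper's disproportion-function method, and the harmonic basis, square-wave reference, and 1000:1 interference-to-reference ratio are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 256
t = np.linspace(0.0, 1.0, n, endpoint=False)

# Basis for the unknown periodic interference (harmonics up to an
# assumed maximum frequency), plus the known reference signal, which
# differs from every basis function (non-integer fundamental).
harmonics = [np.sin(2 * np.pi * k * t) for k in range(1, 6)]
harmonics += [np.cos(2 * np.pi * k * t) for k in range(1, 6)]
reference = np.sign(np.sin(2 * np.pi * 3.5 * t))

# Received signal: interference ~1000x stronger than the reference.
received = 1000.0 * np.sin(2 * np.pi * 2 * t) + 1.0 * reference

# Solve for all weighting coefficients at once; the last one belongs
# to the reference signal.
A = np.column_stack(harmonics + [reference])
coeffs, *_ = np.linalg.lstsq(A, received, rcond=None)
w_ref = coeffs[-1]
print(round(float(w_ref), 3))  # ≈ 1.0 → reference signal is present
```

A coefficient near zero instead of 1.0 would indicate that the analyzed signal contains no reference component, which is exactly the decision rule described in the abstract.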