Comparison of different models.

Source publication
Article
Full-text available
In this paper, we propose a multi-feature fusion network (MFF-Net) for a modulation format identification (MFI) and optical signal-to-noise ratio (OSNR) monitoring scheme. The constellation map data used in this work comes from five modulation formats, namely 56 Gbit/s 4/8 phase shift keying (PSK) and 16/32/64 quadrature amplitude modulation (QAM)....
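The abstract above describes a fused-feature, multi-task CNN that takes constellation images and produces both a modulation-format decision and an OSNR estimate. As an illustration only, the following PyTorch sketch shows one way such a two-branch, two-head network could be organized; the branch layout, layer sizes, and class counts are assumptions for demonstration and do not reproduce the published MFF-Net architecture.

```python
# Hypothetical sketch of a two-branch feature-fusion CNN with a shared fused
# representation and two task heads (MFI and OSNR). Assumes 64x64 grayscale
# constellation images; all sizes are illustrative, not the published MFF-Net.
import torch
import torch.nn as nn

class MultiTaskFusionNet(nn.Module):
    def __init__(self, n_formats: int = 5, n_osnr_classes: int = 20):
        super().__init__()
        # Two parallel feature extractors with different receptive fields,
        # whose outputs are concatenated ("fused") before the task heads.
        self.branch_small = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # global average pooling instead of a flatten layer
        )
        self.branch_large = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head_mfi = nn.Linear(64, n_formats)        # modulation format identification
        self.head_osnr = nn.Linear(64, n_osnr_classes)  # OSNR class estimation

    def forward(self, x):
        fused = torch.cat([self.branch_small(x).flatten(1),
                           self.branch_large(x).flatten(1)], dim=1)
        return self.head_mfi(fused), self.head_osnr(fused)

# Example: a batch of 8 single-channel 64x64 constellation images.
logits_mfi, logits_osnr = MultiTaskFusionNet()(torch.randn(8, 1, 64, 64))
```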

Contexts in source publication

Context 1
... dimension of the convolution kernel is k × k, where c and n correspond to the number of channels in the input feature map and the output feature map of the layer, respectively. As shown in Table 1, the OSNR monitoring accuracies of MFF-Net, B-CNN, VGGNet, and EW-MTL on this dataset are 98.82%, 97.38%, 98.16%, and 98.41%, with parameter counts of 2.64 × 10⁵, 1.63 × 10⁸, 4.27 × 10⁶, and 4.76 × 10⁵, respectively. In addition, we also compared CNN and LSTM, which are shown in the last two columns of Table 1. ...
Context 2
... shown in Table 1, the OSNR monitoring accuracies of MFF-Net, B-CNN, VGGNet, and EW-MTL on this dataset are 98.82%, 97.38%, 98.16%, and 98.41%, with parameter counts of 2.64 × 10⁵, 1.63 × 10⁸, 4.27 × 10⁶, and 4.76 × 10⁵, respectively. In addition, we also compared CNN and LSTM, which are shown in the last two columns of Table 1. Their OSNR monitoring accuracies are 97.13% and 98.53%, with 2.51 × 10⁶ and 4.80 × 10⁵ parameters, respectively. ...
Context 3
... is no significant advantage compared with the model in this paper, in terms of both the number of parameters and the accuracy. In the MFF-Net model, we replaced the flattening layer with the global average pooling layer. As shown in Table 1, the OSNR monitoring accuracies of MFF-Net, B-CNN, VGGNet, and EW-MTL on this dataset are 98.82%, 97.38%, 98.16%, and 98.41%, with parameter counts of 2.64 × 10⁵, 1.63 × 10⁸, 4.27 × 10⁶, and 4.76 × 10⁵, respectively. In addition, we also compared CNN and LSTM, which are shown in the last two columns of Table 1. ...
Context 4
... the MFF-Net model, we replaced the flattening layer with the global average pooling layer. As shown in Table 1, the OSNR monitoring accuracies of MFF-Net, B-CNN, VGGNet, and EW-MTL on this dataset are 98.82%, 97.38%, 98.16%, and 98.41%, with parameter counts of 2.64 × 10⁵, 1.63 × 10⁸, 4.27 × 10⁶, and 4.76 × 10⁵, respectively. In addition, we also compared CNN and LSTM, which are shown in the last two columns of Table 1. Their OSNR monitoring accuracies are 97.13% and 98.53%, with 2.51 × 10⁶ and 4.80 × 10⁵ parameters, respectively. ...
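Context 1 defines the per-layer parameter budget through the kernel size k × k and the input/output channel counts c and n, and the later contexts attribute part of MFF-Net's small parameter count to replacing the flatten layer with global average pooling. The short script below makes that accounting concrete; the formula (k·k·c + 1)·n is the standard count for a biased convolution layer, and the specific dimensions are illustrative, not values taken from the paper.

```python
# Back-of-the-envelope parameter accounting implied by Context 1: a conv layer
# with k x k kernels, c input channels, and n output channels holds
# (k*k*c + 1)*n weights (bias included). Replacing flatten + dense with global
# average pooling removes the H*W factor from the first dense layer.
# All dimensions below are illustrative, not the published model sizes.
def conv_params(k: int, c: int, n: int) -> int:
    return (k * k * c + 1) * n

def dense_after_flatten(h: int, w: int, c: int, units: int) -> int:
    return (h * w * c + 1) * units

def dense_after_gap(c: int, units: int) -> int:
    return (c + 1) * units

print(conv_params(3, 32, 64))              # 18,496 parameters for one 3x3 conv layer
print(dense_after_flatten(8, 8, 64, 128))  # 524,416 parameters after flattening an 8x8x64 map
print(dense_after_gap(64, 128))            # 8,320 parameters after global average pooling
```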

Similar publications

Article
Full-text available
Scanning acoustic microscopy (SAM) is a non-ionizing and label-free imaging modality used to visualize the surface and internal structures of industrial objects and biological specimens. The image of the sample under investigation is created using high-frequency acoustic waves. The frequency of the excitation signals, the signal-to-noise ratio, and...

Citations

... Initially, modulation format recognition techniques developed rapidly in the field of radio communications [5][6][7]. In recent years, numerous machine learning algorithms have also made strides in the field of MFR in optical communications [8][9][10][11][12][13]. While time-series signal inputs can capture the complete dynamic characteristics of a signal over time, severe noise interference may blur the modulation features and increase the difficulty of recognition. ...
Article
Full-text available
In indoor visible light communication (VLC), the received signals are subject to severe interference from factors such as high-brightness backgrounds, long-distance transmission, and indoor obstructions, which increases misclassification in modulation format recognition. We propose a novel model called VLCMnet. Within this model, a temporal convolutional network and a long short-term memory (TCN-LSTM) module are used for direct channel equalization, effectively enhancing the quality of the constellation diagrams of the modulated signals. A multi-mixed attention network (MMAnet) module integrates single- and mixed-attention mechanisms within a convolutional neural network (CNN) framework specifically for constellation image classification. This allows the model to capture fine-grained spatial structure features and channel features within constellation diagrams, particularly those associated with high-order modulation signals. Experimental results demonstrate that, compared to a CNN model without attention mechanisms, the proposed model increases recognition accuracy by 19.2%. Under severe channel distortion, the proposed model remains robust and maintains a high level of accuracy.
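As a rough illustration of the attention-augmented CNN idea described in this abstract, the sketch below adds a squeeze-and-excitation-style channel-attention gate to a small constellation-image classifier. The gating scheme, layer sizes, and class count are assumptions for demonstration and do not reproduce the published MMAnet/VLCMnet design.

```python
# Minimal sketch of a CNN block with channel attention for constellation-image
# classification. The squeeze-and-excitation-style gate and all layer sizes are
# assumptions, not the published architecture.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                                  # squeeze: global spatial average
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.gate(x)                                       # re-weight channels

class AttnCNN(nn.Module):
    def __init__(self, n_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            ChannelAttention(32),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Example: classify a batch of 4 grayscale constellation diagrams.
print(AttnCNN()(torch.randn(4, 1, 64, 64)).shape)  # torch.Size([4, 5])
```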
... In joint multi-parameter monitoring schemes, deep neural networks (DNNs) are widely applied to OPM for EONs because they achieve high monitoring accuracy and can learn features autonomously in dynamic, large-capacity, and complex data environments [14]. Some researchers have proposed features such as constellation diagrams [15], optical spectra [16], intensity and differential phase images [6], and amplitude histograms (AHs) [17] as inputs to multi-task DNNs for MFI and OSNR estimation. However, other parameters and impairments in transmission links also affect signal quality, so monitoring only two parameters is insufficient. ...
Article
In elastic optical networks (EONs), joint multi-parameter optical performance monitoring (OPM) can effectively manage and diagnose transmission optical links and reduce their operational costs. In this paper, we propose a novel joint monitoring scheme for EONs that combines Hough transform (HT) images with a multi-task residual neural network (MT-ResNet). The scheme simultaneously performs baud rate identification (BRI), modulation format identification (MFI), residual chromatic dispersion identification (CDI), optical signal-to-noise ratio (OSNR) estimation, and residual differential group delay (DGD) estimation. The HT image is obtained by preprocessing the original constellation diagram; it is a key feature for parameter monitoring of EON signals and shows clear differentiation in parameter space. By optimizing the skip connections in the MT-ResNet, we mitigate the loss or incompleteness of detail information caused by the transmission of impaired optical signals through the network. Simulation results demonstrate an identification success rate of 100% for two common baud rates, five mainstream modulation formats, and seven residual CD values with different impairment degrees. The mean absolute errors (MAEs) of the OSNR and residual DGD estimates are 0.42 dB and 0.014 symbol periods, respectively. The scheme also shows excellent tolerance to fiber nonlinear effects. In experimental verification, the accuracies of BRI and MFI are 100%, and the MAEs of the corresponding OSNR estimation for PDM-QPSK/16QAM/32QAM are 0.25 dB, 0.36 dB, and 0.40 dB, respectively. Compared with existing typical schemes, our scheme significantly improves performance and reduces complexity while simultaneously monitoring a large number of parameters.
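The Hough-transform preprocessing mentioned in this abstract maps each received symbol (x, y) to a sinusoid ρ = x·cos θ + y·sin θ in (θ, ρ) parameter space and accumulates votes over a grid. The NumPy sketch below illustrates that mapping on a synthetic noisy QPSK constellation; the grid resolution, normalization, and test signal are assumptions for demonstration, not the paper's preprocessing pipeline.

```python
# Illustrative Hough-transform image of a constellation diagram: each symbol
# (x, y) votes along the sinusoid rho = x*cos(theta) + y*sin(theta) in a
# (rho, theta) accumulator. Grid sizes and the synthetic QPSK input are
# assumptions for demonstration only.
import numpy as np

def hough_image(points, n_theta: int = 180, n_rho: int = 128, rho_max: float = 2.0):
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_rho, n_theta))
    for x, y in points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)           # one sinusoid per symbol
        idx = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        keep = (idx >= 0) & (idx < n_rho)
        acc[idx[keep], np.arange(n_theta)[keep]] += 1            # vote in parameter space
    return acc

# Synthetic noisy QPSK constellation as a stand-in for a real received signal.
rng = np.random.default_rng(0)
ideal = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]]) / np.sqrt(2)
symbols = ideal[rng.integers(0, 4, 2000)] + 0.05 * rng.standard_normal((2000, 2))
ht = hough_image(symbols)
print(ht.shape, ht.max())  # (128, 180) accumulator image that could feed a classifier
```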