Fig. 1: Conventional TDD versus ML-based TDD. In the learning-based block (LB), the CE overhead is removed from the frame structure for P intervals due to the introduction of ML-based CSI prediction.

Source publication
Article
To support the ever-increasing number of devices in massive multiple-input multiple-output (mMIMO) systems, an excessive amount of overhead is required for conventional orthogonal pilot-based channel estimation schemes. To circumvent this fundamental constraint, we design a machine learning (ML)-based time-division duplex scheme in which channel st...

Contexts in source publication

Context 1
... proposed ML-based TDD scheme increases the resources for data transmission by reducing the CE overhead of the frame structure, while CSI is obtained using an ML technique that exploits the correlation among adjacent intervals. The ML-based TDD scheme contains two types of blocks, namely the head block (HB) and the learning-based block (LB), shown in Fig. 1. The following considerations are made in the ML-based TDD ...
Context 2
... to the proposed ML-based TDD scheme shown in Fig. 1, U is an important parameter that determines how often the system resets itself. Fig. 7 illustrates the estimation NMSE of the different predictors against U. First, we observe that the NMSE of all predictors converges. Specifically, for the ML-based architectures, the performance of the CNN-AR predictor slightly deteriorates with ...
Context 3
... now verify the average per-user throughput by considering the CNN-AR in the ML-based TDD scheme in Fig. 10. The results of conventional TDD are included in the figure as a benchmark. All the results are averaged over 1000 ...
Context 4
... evaluate the performance gain of the ML-based TDD scheme, we illustrate the joint impact of P and O_con on the ratio of TP_cnn-ar to TP_con in Fig. 11. First, unlike the behavior of the per-user achievable throughput, we observe that the performance gain increases monotonically with increasing O_con. This is reasonable because of the inherently poor behavior of conventional TDD in massive-user scenarios. Moreover, regarding the parameter P, when O_con = 0.2, the ratio of the ...
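To make the overhead argument above concrete, the short Python sketch below computes a crude throughput ratio. It assumes a super-block of one head interval carrying the conventional CE overhead fraction O_con, followed by P learning-based intervals with no CE overhead; the function names and this simplified frame model are illustrative assumptions, not the exact throughput definitions used in the paper.

```python
# Back-of-the-envelope model of the CE-overhead saving (illustrative assumptions,
# not the paper's exact throughput definitions).

def data_fraction_conventional(o_con: float) -> float:
    """Fraction of each interval left for data when every interval carries CE overhead o_con."""
    return 1.0 - o_con

def data_fraction_ml(o_con: float, p: int) -> float:
    """Average data fraction over an assumed super-block: one head interval with CE overhead,
    followed by p learning-based intervals whose CE overhead is removed."""
    return ((1.0 - o_con) + p) / (p + 1)

def throughput_ratio(o_con: float, p: int) -> float:
    """Crude proxy for TP_cnn-ar / TP_con under the assumptions above."""
    return data_fraction_ml(o_con, p) / data_fraction_conventional(o_con)

if __name__ == "__main__":
    for o_con in (0.1, 0.2, 0.4):
        # the ratio grows with o_con, mirroring the monotonic trend reported for Fig. 11
        print(f"O_con = {o_con}: ratio = {throughput_ratio(o_con, p=8):.3f}")
```

Under these assumptions the ratio increases with O_con, which is consistent with the monotonic trend described for Fig. 11.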
Context 5
... further illustrate the performance gain provided by the optimal configuration of CNN-AR, we introduce the indicator δ = TP_cnn-ar^max / TP_con as our metric. Fig. 12 illustrates δ against the normalized Doppler shift f_n. First of all, a significant gain can be observed even in high-speed scenarios. The reason is that, although the small-scale fading is hard to track in high-mobility scenarios, the channel statistics, i.e., the LSF, can still be observed from the predicted CSI. More specifically, the ...
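For readability, a hedged reading of the two quantities in this excerpt (the exact definitions are those of the source paper): δ compares the best CNN-AR throughput with the conventional one, and f_n is taken here as the usual normalized Doppler shift.

```latex
\delta \;=\; \frac{\mathrm{TP}_{\text{cnn-ar}}^{\max}}{\mathrm{TP}_{\text{con}}},
\qquad
f_n \;=\; f_D \, T_s ,
```

where f_D is the maximum Doppler frequency and T_s the interval duration (the standard convention; the source paper's exact notation may differ).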

Similar publications

Preprint
This study introduces a receiver architecture for dual-functional communication and radar (RadCom) base-stations (BS), which exploits the spatial diversity between the received radar and communication signals, and performs interference cancellation (IC) to successfully separate these signals. In the RadCom system under consideration, both communica...

Citations

... Fortunately, artificial intelligence (AI) provides promising ideas and practical schemes to address this problem, owing to its rapid development and extensive application in the field of communications. For this reason, channel prediction techniques based on AI methods have been studied recently [17]-[20]. Different from traditional channel estimation, channel prediction can infer the CSI at future moments from the historical CSI, which effectively improves the timeliness of the CSI. ...
... Different from traditional channel estimation, channel prediction can infer the CSI at future moments from the historical CSI, which effectively improves the timeliness of the CSI. A machine-learning-based CSI prediction method has been proposed in [17], where the temporal channel correlation features are extracted by a convolutional neural network (CNN) so that the future CSI can be predicted effectively. Moreover, the researchers in [18] have proposed a long short-term memory (LSTM)-based predictor to address the problem of channel aging in LEO satellite communication systems. ...
... In this subsection, we design a DL-based unsupervised scheme to solve the problem formulated in (24b). The whole architecture is illustrated in Fig. 5, where the input is the predicted channel matrix H obtained in (17), and the output is the desired precoding matrix W = [w_1, w_2, ..., w_K]. The detailed structure of the deep learning precoding network (DLPCN) will be explained later; for now, we focus on the problem modeled with a DL-based network. ...
Preprint
Low earth orbit (LEO) satellite internet of things (IoT) is a promising way of achieving the global Internet of Everything, and has thus been widely recognized as an important component of sixth-generation (6G) wireless networks. Yet, due to the high-speed movement of the LEO satellite, it is challenging to acquire timely channel state information (CSI) and design effective multibeam precoding for various IoT applications. To this end, this paper provides a deep learning (DL)-based joint channel prediction and multibeam precoding scheme for adverse environments, e.g., high Doppler shift, long propagation delay, and low satellite payload. Specifically, this paper first designs a DL-based channel prediction scheme using convolutional neural networks (CNN) and long short-term memory (LSTM), which predicts the CSI of the current time slot from that of previous time slots. With the predicted CSI, this paper designs a DL-based robust multibeam precoding scheme using a channel augmentation method based on a variational auto-encoder (VAE). Finally, extensive simulation results confirm the effectiveness and robustness of the proposed scheme in LEO satellite IoT.
... Using recurrent neural networks (RNNs), references [165, 166] achieved prediction of the channel information under narrowband, single-antenna conditions. To improve network efficiency, reference [167] employed gated recurrent units (GRU) and long short-term memory (LSTM) for channel prediction. By treating the channel information as an image signal and borrowing machine learning algorithms from the image-processing field, references [167]-[169] used LSTM and CNN to perform channel prediction. Furthermore, reference [170] addresses the channel prediction problem from the perspective of discrete CSI data; such an approach often cannot fully exploit the physical continuity of the evolution of the channel response. Neural ordinary differential equations (Neural ODE) use a neural network to represent a dynamical system, turning the physical process into an implicit forward-propagation process, which effectively overcomes this drawback. ...
... The authors of [14] examined the autoregressive (AR) modeling approach for the accurate prediction of correlated Rayleigh time-varying channels. A machine learning (ML)-based framework was proposed to improve the CSI prediction quality in [15]. The authors of this work implemented a convolutional neural network (CNN) to identify the channel aging pattern and combined it with the AR method to predict the CSI in MIMO systems. ...
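As a rough illustration of the AR component of such predictors (a minimal sketch, not the CNN-AR pipeline of [15]; the least-squares fit, scalar channel tap, and toy data below are assumptions for demonstration), the following fits AR(p) coefficients to a channel history and predicts the next CSI sample.

```python
import numpy as np

def fit_ar_coefficients(h_hist: np.ndarray, p: int) -> np.ndarray:
    """Least-squares fit of AR(p) coefficients a so that h[n] ~ sum_k a[k] * h[n-1-k]."""
    X = np.array([h_hist[n - p:n][::-1] for n in range(p, len(h_hist))])
    y = h_hist[p:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a

def predict_next(h_hist: np.ndarray, a: np.ndarray) -> complex:
    """One-step-ahead prediction from the most recent p samples."""
    p = len(a)
    return np.dot(a, h_hist[-p:][::-1])

# toy example: a slowly rotating channel tap with small perturbations
rng = np.random.default_rng(0)
n = 200
phase_drift = np.exp(1j * 2 * np.pi * 0.01 * np.arange(n))
h = phase_drift * (1.0 + 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n)))
a = fit_ar_coefficients(h[:-1], p=4)
print(abs(predict_next(h[:-1], a) - h[-1]))  # small one-step prediction error
```

In the CNN-AR idea described above, a learned model would supply or refine such coefficients from the observed aging pattern instead of a plain least-squares fit.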
... The CNN-AR channel estimation method introduced in [15] exhibited promising performance in terms of spectral efficiency and pilot overhead within standard MIMO systems. Nevertheless, the application of this method to RIS-assisted environments remains unexplored. ...
... This literature gap motivates the development of this work. In this paper, we extend the ideas of [15] and propose a generalized CNN-AR framework for RIS-assisted MIMO systems. Specifically, a CNN model is designed and trained to identify the aging characteristics of the correlated time-varying wireless channels in a multi-user scenario. ...
Preprint
Reconfigurable intelligent surfaces (RISs) have emerged as a promising technology to enhance the performance of sixth-generation (6G) and beyond communication systems. The passive nature of RISs and their large number of reflecting elements pose challenges to the channel estimation process. The associated complexity further escalates when the channel coefficients are fast-varying as in scenarios with user mobility. In this paper, we propose an extended channel estimation framework for RIS-assisted multiple-input multiple-output (MIMO) systems based on a convolutional neural network (CNN) integrated with an autoregressive (AR) predictor. The implemented framework is designed for identifying the aging pattern and predicting enhanced estimates of the wireless channels in correlated fast-fading environments. Insightful simulation results demonstrate that our proposed CNN-AR approach is robust to channel aging, exhibiting a high-precision estimation accuracy. The results also show that our approach can achieve high spectral efficiency and low pilot overhead compared to traditional methods.
... Thus, in the following section, we first explain the proposed channel prediction methodologies and then address the proposed CSI feedback mechanism. Though ML-based channel prediction has already been considered in the literature, e.g., [5], [38], it has many issues. We will address those issues in the following section, and later in Section VIII, we will show that the proposed channel prediction models outperform conventional methods. ...
Article
In the literature, machine learning (ML) has been implemented at the base station (BS) and user equipment (UE) to improve the precision of downlink channel state information (CSI). However, ML implementation at the UE can be infeasible for various reasons, such as UE power consumption. Motivated by this issue, we propose a CSI learning mechanism at the BS, called CSILaBS, to avoid ML at the UE. To this end, by exploiting a channel predictor (CP) at the BS, a lightweight predictor function (PF) is considered for feedback evaluation at the UE. CSILaBS reduces over-the-air feedback overhead, improves CSI quality, and lowers the computation cost of the UE. Besides, in a multiuser environment, we propose various mechanisms to select the feedback by exploiting the PF while aiming to improve CSI accuracy. We also address various ML-based CPs, such as NeuralProphet (NP), an ML-inspired statistical algorithm. Furthermore, inspired by the idea of using a statistical model and ML together, we propose a novel hybrid framework composed of a recurrent neural network and NP, which yields better prediction accuracy than the individual models. The performance of CSILaBS is evaluated through an empirical dataset recorded at Nokia Bell-Labs. The outcomes show that eliminating ML at the UE can retain performance gains, for example, in precoding quality.
... Considering the problem of insufficient datasets, the authors of [31] designed a channel prediction framework based on a generative adversarial network (GAN) and LSTM. Moreover, by visualizing the CSI matrix as a two-dimensional image, in [32], the authors employed a convolutional neural network (CNN) and an RNN in cascade to extract spatial and temporal features, respectively. In [33], a convolutional LSTM (ConvLSTM) was used to jointly exploit the spatio-temporal correlations of the CSI for high-speed train (HST) channel prediction. ...
... 2) Encoder E: The encoder E is essential for DL-based prediction models, and its design is a main focus of the existing literature (e.g., [28]-[30], [32], [33], [35]). By extracting the useful features from the input, the encoder E generates a feature representation vector z_j. ...
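A minimal sketch of what such an encoder E could look like is given below; the layer sizes, the real/imaginary two-channel representation, and the PyTorch framing are assumptions for illustration rather than the architecture of any of the cited works.

```python
import torch
import torch.nn as nn

class CsiEncoder(nn.Module):
    """Toy encoder E: CSI window (real/imag as 2 channels) -> feature vector z_j."""
    def __init__(self, feat_dim: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.fc = nn.Linear(32 * 4 * 4, feat_dim)

    def forward(self, h_window: torch.Tensor) -> torch.Tensor:
        # h_window: (batch, window, n_antennas) complex-valued CSI history
        x = torch.stack([h_window.real, h_window.imag], dim=1)  # (batch, 2, window, n_antennas)
        z = self.fc(self.conv(x).flatten(1))                    # feature representation z_j
        return z

z = CsiEncoder()(torch.randn(8, 16, 32, dtype=torch.cfloat))
print(z.shape)  # torch.Size([8, 64])
```

The resulting vector z plays the role of the feature representation z_j mentioned in the excerpt; a decoder or predictor head would then map it to the future CSI.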
Article
In order to break through the development bottleneck of modern wireless communication networks, a critical issue to address is out-of-date channel state information (CSI) in high-mobility scenarios. In general, non-stationary CSI has statistical properties which vary with time, implying that the data distribution changes continuously over time. This temporal distribution shift undermines accurate channel prediction and remains an open problem in the related literature. In this paper, a hypernetwork-based framework is proposed for non-stationary channel prediction. The framework aims to dynamically update the neural network (NN) parameters as the wireless channel changes, so as to automatically adapt to various input CSI distributions. Based on this framework, we focus on low-complexity hypernetwork design and present a deep learning (DL)-based channel prediction method, termed LPCNet, which improves the CSI prediction accuracy with acceptable complexity. Moreover, to maximize the achievable downlink spectral efficiency (SE), a joint channel prediction and beamforming (BF) method is developed, termed JLPCNet, which seeks to predict the BF vector. Our numerical results showcase the effectiveness and flexibility of the proposed framework, and demonstrate the superior performance of LPCNet and JLPCNet in various scenarios for fixed and varying user speeds.
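To illustrate the hypernetwork idea in the abstract above (a network that generates the predictor's parameters on the fly as the input CSI distribution shifts), here is a deliberately small sketch; the linear predictor, real-valued toy input, and all dimensions are assumptions for illustration and not the LPCNet design.

```python
import torch
import torch.nn as nn

class HyperLinearPredictor(nn.Module):
    """Toy hypernetwork: generate the weights of a linear one-step CSI predictor
    from a summary of the recent (possibly non-stationary) CSI window."""
    def __init__(self, hist_len: int = 8, hidden: int = 32):
        super().__init__()
        # hypernetwork: CSI-window summary -> (hist_len weights + 1 bias) of the predictor
        self.hyper = nn.Sequential(
            nn.Linear(hist_len, hidden), nn.ReLU(),
            nn.Linear(hidden, hist_len + 1),
        )

    def forward(self, h_hist: torch.Tensor) -> torch.Tensor:
        # h_hist: (batch, hist_len) real-valued CSI history (one tap, for simplicity)
        params = self.hyper(h_hist)                        # per-sample predictor parameters
        w, b = params[:, :-1], params[:, -1:]              # generated weights and bias
        return (w * h_hist).sum(dim=-1, keepdim=True) + b  # prediction of the next sample

pred = HyperLinearPredictor()(torch.randn(4, 8))
print(pred.shape)  # torch.Size([4, 1])
```

The point of the construction is that the predictor's weights are not fixed after training but are regenerated for each input window, which is how such a framework can follow a drifting CSI distribution.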
... In order to acquire the predicted CSI, many methods, such as model-based [4], [5], [6] and deep learning (DL)-based [7], [8] schemes, have been researched. In these schemes, according to the correlation of the CSI in the time domain, the parameters of the models or deep neural networks (DNNs) are obtained by optimization or training, using as much offline channel data as possible. ...
... The element spacing d is equal to half of the wavelength, and each UE is equipped with a single receive antenna. To demonstrate the superiority of the proposed method in the subsequent simulations, we take the AR-MMSE offline prediction and the CNN prediction [8] as the benchmark methods. The AR order p is set to 7. For all simulations, the normalized mean square error (NMSE) [10] is selected as the performance metric, which can be expressed as ...
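The excerpt is cut off before the expression; for reference, the commonly used NMSE definition (which this excerpt presumably refers to, though the exact averaging convention may differ) is

```latex
\mathrm{NMSE} \;=\; \frac{\mathbb{E}\!\left[\lVert \hat{\mathbf{H}} - \mathbf{H} \rVert_F^2\right]}{\mathbb{E}\!\left[\lVert \mathbf{H} \rVert_F^2\right]} ,
```

where H is the true channel and Ĥ its prediction (some works instead average the per-sample ratio).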
... By making the assumption that the channel is quasi-static for a given time and varies from block to block, this method achieves time-varying CE by tracking the model parameters of the sparse virtual channel with the Kalman filter (KF). The more realistic symbol-by-symbol time-varying nature of wireless channels, i.e., channel aging, is often modeled as an auto-regressive (AR) process [24]-[28], but even here, the KF-based CT approach can still be utilized [25], [26], although machine learning (ML)-aided counterparts are also emerging, examples of which are the schemes in [26], [27], where a deep neural network (DNN) was employed to perform channel prediction. ...
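For context, the first-order instance of this AR aging model is often written with a Jakes-type correlation coefficient (a standard form; the cited works may use higher orders or different correlation models):

```latex
\mathbf{h}[n] \;=\; \rho \, \mathbf{h}[n-1] \;+\; \sqrt{1-\rho^{2}} \, \mathbf{e}[n],
\qquad
\rho = J_0\!\left(2\pi f_D T_s\right),
```

where e[n] is an innovation term independent of h[n-1], f_D is the maximum Doppler frequency, T_s the symbol duration, and J_0 the zeroth-order Bessel function of the first kind.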
Article
We propose a novel joint channel tracking and data detection (JCTDD) scheme to combat the channel aging phenomenon typical of millimeter-wave (mmWave) multiple-input multiple-output (MIMO) communication systems in high-mobility scenarios. The contribution aims to significantly reduce the communication overhead required to estimate time-varying mmWave channels by leveraging a Bayesian message passing framework based on Gaussian approximation, to jointly perform channel tracking (CT) and data detection (DD). The proposed method can be interpreted as an extension of the Kalman filter-based two-stage tracking mechanism to a Bayesian bilinear inference (BBI)-based joint channel and data estimation (JCDE) framework, featuring the ability to predict future channel state information (CSI) from both reference and payload signals by using an auto-regressive (AR) model describing the time variability of mmWave channel as a state transition model in a bilinear inference algorithm. The resulting JCTDD scheme allows us to track the symbol-by-symbol time variation of channels without embedding additional pilots, leaving any added redundancy to be exploited for channel coding, dramatically improving system performance. The efficacy of the proposed method is confirmed by computer simulations, which show that the proposed method not only significantly outperforms the state-of-the-art (SotA) but also approaches the performance of an idealized Genie-aided scheme.
... Hence, intelligent channel sounding is a promising approach to address the issues of signaling overhead and calibration errors in multi-AP channel sounding, as depicted in Fig. 11. Various methods have been developed to predict the CSI by examining a user's past mobility patterns or handover patterns to forecast future locations and estimate the CSI from that location [124]-[127]. Other approaches analyze an STA's past CSI information within an AP to predict the CSI. ...
Preprint
The IEEE 802.11 standard aims to update current Wireless Local Area Network (WLAN) standards to meet the high demands of future applications, such as 8K video, augmented/virtual reality (AR/VR), the Internet of Things, telesurgery, and more. Two of the latest developments in WLAN technologies are IEEE 802.11be and 802.11ay, also known as Wi-Fi 7 and WiGig, respectively. These standards aim to provide Extremely High Throughput (EHT) and lower latencies. IEEE 802.11be includes new features such as 320 MHz bandwidth, multi-link operation, Multi-user Multi-Input Multi-Output (MIMO), orthogonal frequency-division multiple access, and Multiple-Access Point (multi-AP) cooperation (MAP-Co) to achieve EHT. With the increase in the number of overlapping Access Points (APs) and inter-AP interference, researchers have focused on studying MAP-Co approaches for coordinated transmission in IEEE 802.11be, making MAP-Co a key feature of future WLANs. Additionally, the high overlapping AP densities in EHF bands, due to their smaller coverage, must be addressed in future standards beyond IEEE 802.11ay, specifically with respect to the challenges of implementing MAP-Co over 60 GHz bands. In this article, we provide a comprehensive review of the state of the art in MAP-Co features and their drawbacks concerning emerging WLANs. Finally, we discuss several novel future directions and open challenges for MAP-Co.
... It can learn to recognize patterns in smaller sections from an input matrix. By constructing a matrix of the size given by the time steps and the number of antennas, a CNN is proposed in [6] to predict AR coefficients for channel evolution. Channel prediction has also been performed using a recurrent neural network (RNN) that utilizes the temporal correlation in sequential data, in contrast to the CNN. ...
Preprint
The performance of modern wireless communications systems depends critically on the quality of the available channel state information (CSI) at the transmitter and receiver. Several previous works have proposed concepts and algorithms that help maintain high-quality CSI even in the presence of high mobility and channel aging, such as temporal prediction schemes that employ neural networks. However, it is still unclear which neural network-based scheme provides the best performance in terms of prediction quality, training complexity and practical feasibility. To investigate such a question, this paper first provides an overview of state-of-the-art neural networks applicable to channel prediction and compares their performance in terms of prediction quality. Next, a new comparative analysis is proposed for four promising neural networks with different prediction horizons. The well-known tapped delay channel model recommended by the Third Generation Partnership Project is used for a standardized comparison among the neural networks. Based on this comparative evaluation, the advantages and disadvantages of each neural network are discussed and guidelines for selecting the best-suited neural network in channel prediction applications are given.
... To address this issue, Jin et al. [39] offer a unique convolutional blind denoising technique to enhance resilience to noisy channels, together with a noise-level estimation sub-network, asymmetric joint loss functions, and a non-blind denoising sub-network for blind channel estimation. In the state-of-the-art ML-based time-division duplex technique [40], the CSI is obtained by exploiting the temporal channel correlation. Patterns are extracted using a convolutional neural network (CNN), and the CSI is predicted using an autoregressive predictor trained with exogenous inputs. ...
Article
In this work, a deep learning (DL)-based massive multiple-input multiple-output (mMIMO) orthogonal frequency division multiplexing (OFDM) system is investigated over the tapped delay line type C (TDL-C) model with a Rayleigh fading distribution at frequencies ranging from 0.5 to 100 GHz. The proposed bi-directional long short-term memory (Bi-LSTM) channel state information (CSI) estimator uses online learning during training and offline learning during the practical implementation phase. The design of the estimator takes into account situations in which prior knowledge of channel statistics is limited and targets excellent performance, even with limited pilot symbols (PS). Three separate loss functions (mean square logarithmic error [MSLE], Huber, and Kullback–Leibler Distance [KLD]) are assessed in three classification layers. The symbol error rate (SER) and outage probability performance of the proposed estimator are evaluated using a number of optimization techniques, such as stochastic gradient descent (SGD), momentum, and the adaptive gradient (AdaGrad) algorithm. The Bi-LSTM-based CSI estimator is trained considering a specific number of PS. It can be readily seen that by incorporating a cyclic prefix (CP), the system becomes more resilient to channel impairments, resulting in a lower SER. Simulations show that the Bi-LSTM-based CSI estimator trained with the SGD optimizer and the Huber loss function has the lowest SER and very high estimation accuracy. By using deep neural networks (DNNs), the Bi-LSTM method for CSI estimation achieves a higher channel capacity (in bps/Hz) at 10 dB than long short-term memory (LSTM) and other conventional CSI estimators, such as minimum mean square error (MMSE) and least squares (LS). The simulation results validate the analytical results in the study.