The 4th IEEE International Symposium on Wireless Systems within the International Conferences on Intelligent Data Acquisition and Advanced Computing Systems
20-21 September, 2018, Lviv, Ukraine
Deep Learning Based Massive MIMO
Beamforming for 5G Mobile Network
Taras Maksymyuk1, Juraj Gazda2, Oleh Yaremko1, Denys Nevinskiy1
1Lviv Polytechnic National University, UKRAINE, Lviv
2Technical University of Košice, SLOVAKIA, Košice
E-mail: taras.maksymyuk.ua@ieee.org, juraj.gazda@tuke.sk, oleg.yaremko.304@gmail.com, nevinskiy90@gmail.com
Abstract — The rapid increase of data volume in mobile networks forces operators to look into different options for capacity improvement. As a result, modern 5G networks have become more complex in terms of deployment and management. Therefore, new approaches are needed to simplify network design and management by enabling self-organizing capabilities. In this paper, we propose a novel intelligent algorithm for performance optimization of massive MIMO beamforming. The key novelty of the proposed algorithm is the combination of three neural networks which cooperatively implement the deep adversarial reinforcement learning workflow. In the proposed system, one neural network is trained to generate realistic user mobility patterns, which are then used by a second neural network to produce a relevant antenna diagram. Meanwhile, a third neural network estimates the efficiency of the generated antenna diagram and returns the corresponding reward to both networks. The advantage of the proposed approach is that it learns by itself and does not require large training datasets.
Keywords — deep learning; massive MIMO; 5G; beamforming; AI.
I. INTRODUCTION
Over the last few years, we have experienced tremendous growth of data demand in wireless networks, driven by the development of new services with high QoE (Quality of Experience) requirements. According to the Cisco Visual Networking Index, global Internet traffic will reach 30 GB per capita by 2021, with wireless and mobile devices accounting for more than 63 percent of it. In particular, this growth is the result of the global trends of cloud computing and the Internet of Things, which tend to digitize our world. It is expected that virtual transformation and robotics will expand their presence in our lives, which requires intelligent and immersive QoE maintenance. Numerous applications such as augmented reality, self-driving cars, e-Health, e-Government, Industry 4.0 and many others require high throughput and low latency as well as good reliability. Moreover, in the era of Big Data, Machine Learning and AI (Artificial Intelligence), we need to offer scalable data transfer and management techniques that can handle billion-object datasets within less than a few milliseconds [1].
Thus, the development of 5G mobile networks aims to cope with these new challenging conditions. Despite numerous research works on 5G, there is no single view of the new standard, which makes 5G look like a mix of solutions that partially compete with and partially complement each other [2].
In this paper, we focus on intelligent beamforming based on Massive MIMO (Multiple Input Multiple Output) technology. The novelty of the proposed approach is that deep learning is used to determine the phase shift and amplitude of each antenna element. The proposed solution enables self-learning capabilities of the system, which allows achieving higher capacity in 5G mobile networks.
This paper is organized as follows. Section II gives a brief overview of the existing achievements in 5G mobile communications. Section III provides the description of the system model and the proposed beamforming algorithm. Section IV concludes the paper.
II. OVERVIEW OF THE RECENT ACHIEVEMENTS IN
MOBILE COMMUNICATIONS
All of the solutions proposed so far for wireless communications are designed for one of the three key pillars of wireless communications:
link spectral efficiency;
available bandwidth;
area spectral efficiency.
Previously, link spectral efficiency has been widely considered the most important factor of wireless communications. Spectral efficiency is a normalized metric that determines the achievable throughput over a wireless channel per 1 Hz of occupied bandwidth for specified transmission techniques such as modulation, coding and multiplexing. The spectral efficiency of a wireless channel can be defined as follows:

$$S_{eff}\left[\frac{\text{bps}}{\text{Hz}}\right] = \frac{C}{\Delta F}, \quad (1)$$

where C is the channel throughput in bps, and ΔF is the channel bandwidth in Hz [3].
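For example, a channel delivering a throughput of C = 100 Mbps over ΔF = 20 MHz of occupied bandwidth has a spectral efficiency of 100·10⁶ / (20·10⁶) = 5 bps/Hz.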
Numerous approaches have been proposed to improve link spectral efficiency on the physical layer in order to extend the capacity of wireless networks without purchasing additional spectrum. All of them are based on advanced modulation and multiplexing schemes. Nowadays, the improvement of modulation schemes has reached its threshold in terms of the tradeoff between implementation cost and achieved gain, which makes it a less feasible option for overall network improvement. Instead, modern solutions focus mostly on the aggregation of spectrum bands and the improvement of bandwidth allocation per target coverage area [4].
One way to solve the problem is to deploy additional small cells in areas with high data demand, which increases the frequency reuse factor and the area spectral efficiency of the mobile network. These additional layers of small cells usually overlay the former coverage of the macro cell. Recent studies have shown that small cells can increase area capacity by three orders of magnitude compared with conventional single-tier network deployment [4].
Multi-tier network coverage has enabled the feature of multiple simultaneous connections to base stations of different tiers. Thus, each user is able to aggregate bandwidth from multiple connections into one logical channel with a much higher data rate. This approach, however, has higher complexity compared to single-tier deployment. Another drawback of small cells is that their performance is very sensitive to the instantaneous traffic demand in the coverage area. Due to the non-stationary locations of mobile users, sometimes only a few of them appear in the small cell area, which results in low bandwidth utilization and a decrease of the overall network capacity.
Thus, the small cell infrastructure should be redundant, with the possibility to turn off small cells with low bandwidth utilization. However, redundant network deployment requires additional capital expenditures, and it may not be feasible for operators to spend a lot of money on deploying small cells with poor utilization over time.
Therefore, Massive MIMO technology can be considered as an alternative solution to increase the capacity of the mobile network without redundant small cells [5]. In particular, beamforming has been considered a promising approach to improve the energy allocation per target coverage area. By using a large number of antennas (up to a few hundred), a base station can support multiple spatially separated beams, which allows reusing the same spectrum band for each of them.
The main advantage of Massive MIMO, compared to conventional MIMO systems, is the much higher number of degrees of freedom for the base station, which is similar to those in wireless sensor networks [6]. This, in turn, allows increasing the antenna resolution, i.e. the capacity gain from spatial multiplexing or the beamforming precision.
In [5], the authors proved that Massive MIMO demonstrates better efficiency than small cells for low user density, while for high user density small cells show significantly higher performance than Massive MIMO. Thus, it is impossible to find a network configuration with an optimal trade-off between Massive MIMO and small cell efficiency due to the dynamic user density.
Hence, Massive MIMO systems along with small cells should be considered as the key enabling combination for 5G design, as shown in Fig. 1.
Figure 1. Heterogeneous architecture of 5G with combination of Massive MIMO and small cells (macro cell with Massive MIMO, overlaid small cells, small cell transceivers, user equipment (UE), and D2D channels).
III. DEEP LEARNING BASED MASSIVE MIMO
BEAMFORMING ALGORITHM
A. Beamforming in Massive MIMO Systems
Beamforming is the controlled interference of multiple waves, which allows increasing the signal strength in the target direction. Technically, this feature can be achieved by using multiple transmitting antennas with different phase shifts. Without beamforming, all elements transmit with the same phase, which results in a circular radiation pattern. However, a circular (i.e. non-directed) pattern can be effective only when traffic demand is uniform, which is almost never the case. Therefore, it is important to assess the instantaneous location of users to determine the most suitable antenna radiation pattern for each base station.
In this paper, we consider a rectangular array of antenna elements, which has the ability to tweak the antenna pattern in three-dimensional space. The phase shift map of each antenna array is represented as follows [7]:

$$\mathbf{\Phi} = \begin{bmatrix} \varphi_{11} & \cdots & \varphi_{1n} \\ \vdots & \varphi_{ij} & \vdots \\ \varphi_{n1} & \cdots & \varphi_{nn} \end{bmatrix}. \quad (2)$$
Thus, the antenna diagram can be represented as:

$$\mathbf{E} = \begin{bmatrix} A_{11}\cos(\omega t + \varphi_{11}) & \cdots & A_{1n}\cos(\omega t + \varphi_{1n}) \\ \vdots & \ddots & \vdots \\ A_{n1}\cos(\omega t + \varphi_{n1}) & \cdots & A_{nn}\cos(\omega t + \varphi_{nn}) \end{bmatrix}, \quad (3)$$
where ω = 2πf, f is the carrier frequency, and A_ij is the amplitude of the irradiated wave, which is directly related to the transmission power [8]. Thus, in order to change the antenna diagram, we can adjust A and φ. Fig. 2 shows a comparison of two different antenna diagrams, where the diagram in Fig. 2(a) is more directed and the diagram in Fig. 2(b) is wider in coverage.
Figure 2. Comparison of two different antenna diagrams: (a) a more directed pattern; (b) a wider pattern.
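As an illustration of (2) and (3), the sketch below computes the instantaneous element signals with NumPy; the array size, carrier frequency, and random phase initialization are assumptions made for the example, not values from the paper:

```python
import numpy as np

n = 8                      # illustrative 8x8 rectangular array
f = 3.5e9                  # assumed carrier frequency, Hz
omega = 2 * np.pi * f      # angular frequency used in eq. (3)

# Per-element amplitudes A_ij and phase shifts phi_ij (eq. (2)),
# initialized with uniform amplitudes and random phases for illustration.
A = np.ones((n, n))
Phi = np.random.uniform(0.0, 2 * np.pi, size=(n, n))

def element_signals(t):
    """Instantaneous signal of every antenna element, eq. (3)."""
    return A * np.cos(omega * t + Phi)

E = element_signals(t=1e-9)
print(E.shape)  # (8, 8)
```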
In addition, more precise beamforming can be achieved by using sparse arrays, where some of the antenna elements are inactive. Mathematically, it can be represented as a Hadamard product of the matrix Φ and a binary indicator matrix I, whose elements I_ij ∈ {0, 1} mark the active elements:

$$\mathbf{\Phi}' = \mathbf{\Phi} \circ \mathbf{I}, \qquad \varphi'_{ij} = \varphi_{ij} I_{ij}, \quad (4)$$
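As a quick illustration, the masking in (4) is a single element-wise product; in this NumPy sketch the array size and the random on/off pattern are assumptions for demonstration only:

```python
import numpy as np

n = 8
Phi = np.random.uniform(0.0, 2 * np.pi, size=(n, n))

# Binary indicator matrix: 1 = active element, 0 = inactive, eq. (4).
# A random on/off pattern is assumed purely for illustration.
I = (np.random.rand(n, n) > 0.5).astype(float)

Phi_sparse = Phi * I   # element-wise (Hadamard) product
print(int(I.sum()), "of", n * n, "elements active")
```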
It is obvious that a higher number of antenna elements provides better flexibility of beamforming due to more degrees of freedom for adjusting the parameters of the antenna array. However, as the number of antenna elements grows, the complexity of beamforming increases exponentially, because more values need to be calculated [9]. Consequently, it is impossible to scale the size of the Massive MIMO antenna array and provide real-time beamforming simultaneously. Therefore, our aim in this paper is to develop new approaches for Massive MIMO systems, which will provide a better tradeoff between complexity and performance.
B. Deep Adversarial Reinforcement Learning Algorithm
for Massive MIMO Beamforming
Current achievements in the area of deep learning and artificial intelligence enable a new level of task complexity for mobile network coverage optimization [10, 11]. However, such algorithms need to be trained using one of two possible options. The first option is supervised learning, where the system is trained on a specific training dataset. In this case, the training process is done by minimizing the root mean square error (RMSE) between the target data and the obtained result [12]. The second option, called reinforcement learning, assumes that the target dataset is not known, but there is a reward function which provides insights into whether a result is good or not. In this case, the system trains itself by trying to get as high a reward as possible [13].
In this paper, we propose a new approach to beamforming, namely deep adversarial reinforcement learning. The main idea is to use two competing neural networks and one referee network, so that one network is trained by the other under the supervision of the third. The first deep neural network is trained to generate realistic user mobility patterns, the second tries to respond with the most suitable antenna diagram by using all available degrees of freedom, and the third evaluates the efficiency of the result and returns a reward to both networks.
The proposed approach is inspired by Goodfellow et al. [14], where generative adversarial networks (GANs) were proposed for the first time. In the original GAN, the generator produces random samples of data, which try to mimic real-world data, while the discriminator tries to determine whether the obtained data sample is fake or real.
In our approach, we introduce a second generator, which produces the antenna diagram according to the generated locations of users. In this case, the discriminator implements the workflow of deep reinforcement learning by returning a reward to both generators. As the reward, we use the aggregated throughput of all users, so that the system tries to improve it over the training time.
Below we describe the proposed training algorithm
step-by-step.
Step 1. The first generator network produces a sample of users' locations for the specific cell according to some predefined probabilistic distribution.
Step 2. The second generator network reacts to the obtained data sample and produces a relevant antenna diagram for the specific cell.
Step 3. The discriminator network evaluates the instantaneous performance of the cell in terms of the total aggregated throughput:

$$C = \sum_{i} B_i \log_2\left(1 + SINR_i\right), \quad (6)$$

where B_i is the bandwidth of the i-th user, and SINR_i is the SINR (signal-to-interference-plus-noise ratio) value perceived by the i-th user, expressed as:

$$SINR_i = \frac{P_i h_{x_i} PL_i}{\sum_{j=1,\, j \neq i}^{K} P_j h_{x_j} PL_j + \sigma^2}, \quad (7)$$

where P_i denotes the power of the signal transmitted from the serving base station, P_j denotes the power of the signal transmitted from an interfering base station, h_x is the channel gain, σ² is the additive white Gaussian noise power, and PL is the path loss of the link between the base station and the user.
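The two metrics can be computed directly from (6) and (7). Below is a minimal NumPy sketch; all powers, channel gains, path loss gains, and noise values are assumed figures chosen only to make the example run:

```python
import numpy as np

def user_sinr(p_s, h_s, pl_s, p_i, h_i, pl_i, noise):
    """SINR of one user, eq. (7): serving power over interference plus noise."""
    return (p_s * h_s * pl_s) / (np.sum(p_i * h_i * pl_i) + noise)

def cell_throughput(B, sinr):
    """Total aggregated cell throughput, eq. (6)."""
    return np.sum(B * np.log2(1.0 + sinr))

# Illustrative linear-scale link budgets: 3 users, 2 interfering base stations.
B = np.array([10e6, 5e6, 20e6])            # per-user bandwidth, Hz
pl_serving = np.array([1e-4, 5e-5, 2e-4])  # assumed serving-link path loss gains
sinr = np.array([user_sinr(10.0, 1e-6, pl,
                           p_i=np.array([10.0, 10.0]),
                           h_i=np.array([1e-6, 1e-6]),
                           pl_i=np.array([1e-6, 2e-6]),
                           noise=1e-11)
                 for pl in pl_serving])
print(f"C = {cell_throughput(B, sinr) / 1e6:.1f} Mbps")
```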
Step 4. Based on the throughput value obtained in the previous step, the second generator network updates the antenna diagram (3) according to the following Q-function:

$$Q_{t+1}(s_t, \mathbf{E}_t) = (1 - \alpha)\, Q_t(s_t, \mathbf{E}_t) + \alpha \left[ C_t + \gamma \max_{\mathbf{E}} Q_t(s_{t+1}, \mathbf{E}) \right], \quad (8)$$

where s_t denotes the previous state, E_t denotes the previous action of the second generator network, i.e. the antenna diagram before the action, C_t is the current reward, expressed by the throughput value, s_{t+1} is the new state observed after the action, i.e. with the updated antenna diagram, α is the learning rate, and γ is the discount factor, which determines how far ahead the algorithm looks for reward, e.g. γ = 0 means that only the current reward is considered, while γ = 1 means that future rewards are weighted equally with the current one, so the algorithm optimizes over an infinite horizon.
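For clarity, the update in (8) is sketched below in its simplest tabular form; the actual second generator is a deep neural network, so the Q-table, the discretized states and actions, and the values of the learning rate α and discount γ are illustrative assumptions:

```python
import numpy as np

def q_update(Q, s, a, reward, s_next, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step in the form of eq. (8).

    Q      -- Q-table of shape (num_states, num_actions)
    s, a   -- previous state and action (index of an antenna diagram)
    reward -- current reward C_t, i.e. the throughput from eq. (6)
    s_next -- state observed after applying the updated antenna diagram
    """
    best_next = np.max(Q[s_next])  # max over candidate diagrams E
    Q[s, a] = (1 - alpha) * Q[s, a] + alpha * (reward + gamma * best_next)
    return Q

# Illustrative: 4 discretized states, 3 candidate antenna diagrams.
Q = np.zeros((4, 3))
Q = q_update(Q, s=0, a=2, reward=180e6, s_next=1)
print(Q[0, 2])  # 18000000.0
```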
Step 5. Check the obtained throughput value against the convergence criterion:

$$C \geq k\, C_{max}, \quad (9)$$

where C_max is the total aggregated throughput in the ideal case when all users have the highest possible spectral efficiency, and k is a factor from 0 to 1 which reflects the accepted deviation from the ideal case. If condition (9) is satisfied, the algorithm proceeds to step 1. Otherwise, the algorithm iterates steps 2-5 until condition (9) is satisfied.
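Putting the five steps together, the overall workflow can be summarized by the following skeleton. This is a structural sketch only: the three networks are replaced by hypothetical placeholder functions, and the Q-update of (8) inside the second generator is omitted:

```python
import numpy as np

def mobility_generator():
    """Step 1: sample user locations from a predefined distribution."""
    return np.random.uniform(0.0, 500.0, size=(20, 2))  # 20 users, 500 m cell

def beam_generator(users):
    """Step 2: produce amplitudes A and phases Phi for the array."""
    n = 8
    return np.ones((n, n)), np.random.uniform(0.0, 2 * np.pi, size=(n, n))

def discriminator(users, A, Phi):
    """Step 3: evaluate the aggregated throughput (6) as the reward."""
    return np.random.uniform(0.0, 200e6)  # placeholder for the real evaluation

C_MAX, K = 200e6, 0.8        # assumed ideal throughput and tolerance factor k
for episode in range(100):
    users = mobility_generator()            # step 1
    while True:
        A, Phi = beam_generator(users)      # step 2
        C = discriminator(users, A, Phi)    # step 3; reward fed back (step 4)
        if C >= K * C_MAX:                  # step 5: convergence condition (9)
            break                           # converged: draw a new user sample
```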
Thus, by continuously iterating the above algorithm, the network management system is able to acquire knowledge about optimal network configurations for different locations of users. In addition, the proposed algorithm can be supplied with real-world data, so that the obtained statistical distributions can be used to improve the efficiency of training [15].
IV. CONCLUSION
In this paper, we propose an intelligent beamforming algorithm for massive MIMO based on deep adversarial reinforcement learning. The proposed algorithm uses two competing neural networks and one referee network to improve the training process. The advantage of the proposed algorithm is that it provides nearly optimal antenna diagrams for a large number of scenarios without solving a mathematically complex optimization problem.
ACKNOWLEDGEMENT
This research was supported by the project No. 0117U007177 “Designing the methods of adaptive radio resource management in LTE-U mobile networks for 4G/5G development in Ukraine,” funded by the Ukrainian government, and by the Slovak Research and Development Agency under project No. APVV-15-0055.
REFERENCES
[1] M. Wollschlaeger et al. “The future of industrial communication:
Automation networks in the era of the internet of things and
industry 4.0,” IEEE Industrial Electronics Mag., vol. 11, no. 1, pp.
17-27, 2017.
[2] T. Maksymyuk et al., “Deployment strategies and standardization
perspectives for 5G mobile networks,” IEEE Int. Conf. on the
Modern Problems of Radio Engineering, Telecommunications and
Computer Science (TCSET’2016), Lviv-Slavske, Ukraine, pp.
953-956, Feb. 2016.
[3] I. Hwang, B. Song, S. Soliman, “A holistic view on hyper-dense
heterogeneous and small cell networks,” IEEE Communications
Magazine, vol. 51, no.6, pp. 20-27, 2013.
[4] N. Bhushan et al., “Network densification: the dominant theme for
wireless evolution into 5G,” IEEE Communications Magazine,
vol. 52, no. 2, pp. 82-89, 2014.
[5] W. Liu, S. Han, and C. Yang, “Massive MIMO or Small Cell Network: Who is More Energy Efficient?” in Proc. WCNC’13, Shanghai, China, Apr. 2013, pp. 24–29.
[6] V. Yatskiv et al., “The use of modified correction code based on
residue number system in WSN,” IEEE 7th International
Conference on Intelligent Data Acquisition and Advanced
Computing Systems, pp. 513-516, 2013.
[7] V. Inzillo et al., “A low energy consumption smart antenna adaptive array system for mobile ad hoc networks,” International Journal of Computing, vol. 16, no. 3, pp. 124-132, 2017.
[8] V. Kochan et al., “Energy-efficient method for controlling the
transmitters power of wireless sensor network,” IEEE Int. Conf.
on Electrical and Computer Engineering, pp. 1117-1120, 2017.
[9] S. Obadan, Z. Wang, “A hybrid optimization approach for
complex nonlinear objective functions,” International Journal of
Computing, vol. 17, no. 2, pp. 102-112, 2018.
[10] S. Bezobrazov et al., “The methods of artificial intelligence for
malicious applications detection in Android OS,” International
Journal of Computing, vol. 15, no. 3, pp. 184-190, 2016.
[11] J. Gazda et al., “Unsupervised Learning Algorithm for Intelligent Coverage Planning and Performance Optimization of Multitier Heterogeneous Network,” IEEE Access, vol. 6, pp. 39807-39819, 2018.
[12] S. Pan et al., “A survey on transfer learning,” IEEE Transactions
on Knowledge and Data Engineering, vol. 22, no. 10, 2010.
[13] R. Razavi et al., “A fuzzy reinforcement learning approach for self-optimization of coverage in LTE networks,” Bell Labs Technical Journal, vol. 15, pp. 153–175, 2010.
[14] I. Goodfellow et al., “Generative adversarial nets,” In Advances in
neural information processing systems, pp. 2672-2680, 2014.
[15] T. Maksymyuk et al., “An IoT based Monitoring Framework for
Software Defined 5G Mobile Networks”, Proceedings of the 11th
ACM Int. Conf. on Ubiquitous Information Management and
Communication (IMCOM’2017), article #5-4, Jan. 5-7, 2017.