Reinforcement-Learning-Enabled Massive Internet of Things for 6G Wireless Communications

Authors:

Abstract and Figures

Recently, extensive research efforts have been devoted to developing beyond fifth generation (B5G), also referred to as sixth generation (6G) wireless networks aimed at bringing ultra-reli-able low-latency communication services. 6G is expected to extend 5G capabilities to higher communication levels where numerous connected devices and sensors can operate seamlessly. One of the major research focuses of 6G is to enable massive Internet of Things (mIoT) applications. Like Wi-Fi 6 (IEEE 802.11ax), forthcoming wireless communication networks are likely to meet massively deployed devices and extremely new smart applications such as smart cities for mIoT. However, channel scarcity is still present due to a massive number of connected devices accessing the common spectrum resources. With this expectation, next-generation Wi-Fi 6 and beyond for mIoT are anticipated to have inherent machine intelligence capabilities to access the optimum channel resources for their performance optimization. Unfortunately, current wireless communication network standards do not support the ensuing needs of machine learning (ML)-aware frameworks in terms of resource allocation optimization. Keeping such an issue in mind, we propose a reinforcement-learning-based, one of the ML techniques, a framework for a wireless channel access mechanism for IEEE 802.11 standards (i.e., Wi-Fi) in mIoT. The proposed mechanism suggests exploiting a practically measured channel collision probability as a collected dataset from the wireless environment to select optimal resource allocation in mIoT for upcoming 6G wireless communications.
Content may be subject to copyright.
Abstract
Recently, extensive research efforts have been
devoted to developing beyond fifth generation
(B5G), also referred to as sixth generation (6G)
wireless networks aimed at bringing ultra-reli-
able low-latency communication services. 6G is
expected to extend 5G capabilities to higher com-
munication levels where numerous connected
devices and sensors can operate seamlessly. One
of the major research focuses of 6G is to enable
massive Internet of Things (mIoT) applications.
Like Wi-Fi 6 (IEEE 802.11ax), forthcoming wireless communication networks are expected to serve massively deployed devices and radically new smart applications, such as smart cities, for mIoT. However, channel scarcity is still present because a massive number of connected devices access the common spectrum resources. With this expecta-
tion, next-generation Wi-Fi 6 and beyond for mIoT
are anticipated to have inherent machine intelli-
gence capabilities to access the optimum chan-
nel resources for their performance optimization.
Unfortunately, current wireless communication
network standards do not support the ensuing
needs of machine learning (ML)-aware frame-
works in terms of resource allocation optimiza-
tion. Keeping such an issue in mind, we propose a reinforcement-learning-based framework (RL being one of the ML techniques) for the wireless channel access mechanism of IEEE 802.11 standards (i.e., Wi-Fi) in mIoT. The proposed mechanism suggests
exploiting a practically measured channel colli-
sion probability as a collected dataset from the
wireless environment to select optimal resource
allocation in mIoT for upcoming 6G wireless com-
munications.
Introduction
In recent years, significant resources have been
devoted by the research community toward
next-generation massive Internet of Things (mIoT)
wireless technologies in 5G and beyond 5G
(B5G) networks (also referred to as 6G) [1]. It
is expected that the future wireless networks in
mIoT will infer the diverse network conditions to
control and optimize spectrum resources spon-
taneously. While cellular has its origins outdoors,
we expect Wi-Fi and 6G to coexist indoors and
outdoors. The IEEE Working Group (WG) for
Wi-Fi standards (i.e., IEEE 802.11 standards) has
recently launched an amendment to IEEE 802.11
WLANs, named IEEE 802.11ax high-efficiency
WLAN (HEW), also known as Wi-Fi 6. HEW deals
with massively connected device deployment
scenarios. It is anticipated that HEW will infer the relevant features of both the devices' environment and the devices' interaction behavior with that environment in order to manage spectrum resource allocation spontaneously. In general, a wireless device must cope with considerable system uncertainty arising from the variety of transmitted data. Therefore,
to accomplish the targeted objectives of HEW,
it is imperative to examine effective and robust
resource allocation schemes [2].
Today, WLAN has arrived at the time when it
must make a change in perspective to fulfill the
expanding needs of future mIoT applications [3].
Given the current advancement, machine learning
(ML), especially reinforcement learning (RL), is
expected to direct revolutionary changes, partic-
ularly concerning the spectrum resource sharing
of the B5G/6G wireless communications. RL tech-
niques are intended to engage a computational
framework for learning interactively. Based on
the action-state experience, future actions can be appropriately managed without having been explicitly programmed. Concerning WLANs, there is an enormous amount of unexploited data created at both the station (STA) and access point (AP) levels, which could be immensely valuable for learning complex situations and thereby improving overall WLAN performance. For instance, the channel access experience of the STAs in a wireless network can be anticipated through RL techniques, given the information gathered from experience. Based on these anticipations, spectrum resources can be appropriately allocated in future channel access sessions. However, RL's possible advantages for wireless networks are presently limited by the current network infrastructure, which is not yet set up to accommodate RL-enabled tasks, for example, information collection, processing of the information, and optimal action selection based on that processing. Instead, current wireless network frameworks are designed mainly for data transmission, without considering the hidden attributes of the system.
Recently, 5G systems have initiated moves
toward ML-empowered wireless networks through
network function virtualization (NFV) [4]. NFV
permits fast flexibility and rapid reconfiguration
in assigning spectrum resources. It is beneficial to
empower verticals like self-driving cars and smart
industries. Additionally, NFV is valuable to support
coordination and carry ML-based operations to a
large-scale level, with immense data and compu-
tational complexity.
Therefore, in this article, we advocate the use of RL-aware frameworks for next-generation WLANs, such as IEEE 802.11be and beyond,
to address the advancement of wireless com-
munications toward ML-based frameworks,
which will be a fundamental part of 6G wireless
communications. In contrast to mobile cellular
networks like 5G, HEW networks have received considerably less attention when planning ML-aware solutions and applications. The reason is that cellular networks fit perfectly with big data analytics because of the enormous amount of information and the high computational resources available to cellular network operators. On the other hand, HEW presents a set
of explicit issues due to their dense deployment
scenarios, such as train stations, stadiums, and
university campuses, and their typical distributed
nature. However, despite the fact that HEW can count on plenty of information to be utilized by ML techniques in massive deployments, there are also resource-constrained situations, such as residential-type deployments, in which abundant computing and processing resources for spectrum access cannot be provided to the ML activity.
The RL module-based framework permits
adapting to the problem instance and the set
of available resources in the environment to
empower the incorporation of ML-aware meth-
odologies into WLANs’ various modalities, thus
giving adaptability in terms of dense deployment
heterogeneity. For example, despite deep learn-
ing being a ground-breaking solution that may
improve the network performance in different sit-
uations, it involves many computations, massive
data storage, and ultra-reliable low-latency com-
munications (URLLC) requirements to be satisfied
in various deployments or parts of the network.
In an RL technique, a learner takes actions in its surrounding environment to maximize the expected reward for those actions. The learner learns optimal policies that map current states to actions, even for previously unseen future states of the environment. The states, actions, rewards, and state-transition probabilities characterize the environment. This makes it evident that RL-aware frameworks will fit next-generation wireless communications perfectly.
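To make these notions concrete, the following toy sketch (our illustration, not code or values from this article; the states, actions, transition probabilities, and rewards are placeholders) spells out how a policy maps states to actions and how the reward accumulates as the environment evolves according to its state-transition probabilities:

```python
import random

# A toy Markov decision process; all names and numbers are illustrative placeholders.
STATES = ["channel_idle", "channel_busy"]
ACTIONS = ["transmit", "wait"]

# transitions[(state, action)] -> list of (next_state, probability, reward) tuples
TRANSITIONS = {
    ("channel_idle", "transmit"): [("channel_idle", 0.8, 1.0), ("channel_busy", 0.2, -1.0)],
    ("channel_idle", "wait"):     [("channel_idle", 0.9, 0.0), ("channel_busy", 0.1, 0.0)],
    ("channel_busy", "transmit"): [("channel_busy", 0.7, -1.0), ("channel_idle", 0.3, 1.0)],
    ("channel_busy", "wait"):     [("channel_idle", 0.6, 0.0), ("channel_busy", 0.4, 0.0)],
}

def rollout(policy, start="channel_idle", steps=20):
    """Follow a policy (a mapping state -> action) and accumulate the reward it earns."""
    state, total_reward = start, 0.0
    for _ in range(steps):
        action = policy[state]
        outcomes = TRANSITIONS[(state, action)]
        next_states, probs, rewards = zip(*outcomes)
        idx = random.choices(range(len(outcomes)), weights=probs)[0]
        total_reward += rewards[idx]
        state = next_states[idx]
    return total_reward

# Example policy: always transmit when the channel is idle, otherwise wait.
naive_policy = {"channel_idle": "transmit", "channel_busy": "wait"}
print(rollout(naive_policy))
```

An RL learner would improve on such a hand-written policy by adjusting it toward a higher accumulated reward.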
Following are the main contributions of this
article:
• This article devises and examines the capability of RL-empowered future communications; we then focus on IEEE 802.11 WLANs for efficient spectrum access.
• This article provides an overview of the
RL-aware architecture for next-generation
wireless communications.
• We portray the expected advantages of
RL-based methodologies empowered by
the proposed framework through simulation
results in a particular use case.
M L  
I T 
N-G WLAN
A brief discussion is required to elaborate on ML
techniques’ critical role in supporting next-gen-
eration WLANs’ advancement. In this section,
we specifically focus on the application of ML
to next-generation 802.11 networks (i.e., IEEE
802.11be and beyond).
The advancement of next-generation communication applications is shaping future WLANs through a set of strict requirements [4]. A few examples are vehicle-to-everything (V2X), Industry 4.0, and virtual reality/augmented reality (VR/AR) in 6G communications. These applications are truly challenging
regarding transmission capacity (i.e., a bandwidth
of 10–20 Gb/s), less than 5 ms latency, 99.9 per-
cent reliability, and scalability of 1,000,000 devic-
es/km2. In 5G, advanced technologies are included, such as enhanced mobile broadband (eMBB), massive machine-type communication (mMTC), and URLLC. Similarly, the 802.11 WG is also considering these technologies to
design next-generation advancements, such as
IEEE 802.11ax HEW and IEEE 802.11be extremely
high throughput (EHT).
To meet the previously mentioned existing
requirements, not only is a technological advance-
ment required (e.g., utilization of higher spec-
trum or massive antennas technologies), but a
paradigm shift is essential when planning novel
solutions for communication frameworks, oper-
ation, and management. Specifically, AI-enabled
wireless communications need to be engaged
with cognitive (behaviorist) and context-aware
abilities, which may require a novel framework.
Keeping this in mind, ML is expected to play a significant role during the lifetime of 5G and will become indispensable for 6G communications, being included from the earliest stages of their conception.
The genuine utility of ML lies in those issues
that are difficult to tackle by conventional frame-
works because of their intricate underlying pat-
terns (e.g., network density and traffic load
estimation). Various ML techniques have been
classified in multiple ways. However, the most
widely recognized taxonomy differentiates super-
vised learning (SL), unsupervised learning (uSL),
and RL. In SL, labeled data is used to train an agent. uSL requires no input data labels, whereas RL uses an exploration-exploitation trade-off with labeled and unlabeled input data. Figure 1 shows
a few of the algorithms and potential wireless
communication applications for each kind of ML
algorithm, along with examples of inputs required
by these techniques. We consider further discussion of these ML categories and techniques to be beyond the scope of this article, and we suggest readers refer to [5–7] for further details.
In addition to the specific ML-enabled solutions for wireless communication issues, few efforts have been made toward empowering ML-aware frameworks in more general terms. Specifically, several framework recommendations have been proposed so far [8–11].
The majority
of the related research works concede the vital
necessity for empowering data analytics in network
deployments, possibly at the stations (STAs) and
access points (APs): data gathering, data prepara-
tion, analyzing the data, and finally, future action
selections based on the analysis. In this regard, we
look deeper into RL operation and focus on the
actual strategies, including data gathering, analyz-
ing, and optimal action selection.
ML-E U C 
W N
It is essential to describe use cases where ML-en-
abled applications improve network performance.
Therefore, in this section, we discuss a set of
ML-enabled use cases to showcase the potential
of ML in next-generation 802.11 networks.
Network Slicing: Network slicing (NS) is probably the hottest research topic in 5G communications due to its capability to virtually
isolate network resources to meet diverse appli-
cation necessities. In future WLANs, NS can be
realized through the optimal resource allocation
of spectrum resources using orthogonal frequen-
cy-division multiple access (OFDMA). However,
the diversity of applications and devices and their
subsequent flexibility become the challenge for
easily allocating spectrum resources. To tackle
this, ML can be utilized to predict the user require-
ments for network performance optimization.
Handover and Association Management: The majority of current user association and handover techniques in wireless networks depend on signal strength alone. This can be challenging, as the resulting load imbalance can lead to serious performance degradation in densely deployed wireless networks like HEW. Thus, an ML-aware
framework is conceivable to deal with context-ori-
ented data, such as the traffic load, to help opti-
mal action selection. Furthermore, user mobility
and requirements prediction can be included in
the framework, consequently empowering the
handover and user association management with
insightful data.
Coordinated Scheduling: Contrary to conventional cellular communication networks, a HEW deployment can be denser, particularly in public and residential situations where anyone can set up an AP and create their own wireless network. This usually leads to more complex situations in which basic service set (BSS) interactions prevent current scheduling techniques from ensuring quality of service (QoS). Thus, ML can be utilized to infer these interactions and provide optimal
coordinated scheduling. Specifically, through
ML-enabled coordinated scheduling, diverse APs
can trigger uplink/downlink transmissions from/
to the proper STAs, increasing the overall network
throughput while lowering the channel collision
among the STAs.
Spatial Reuse: Spatial reuse (SR) targets
improved spectrum utilization through chan-
nel sensitivity adjustment techniques. However,
choosing the optimal channel sensitivity threshold
limit is very difficult due to the complex spatial
communications among the STAs. At this point,
as a potential framework, RL techniques can be
applied locally to improve spectral resource allo-
cation in a decentralized and distributed way.
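As a rough sketch of how such a local, decentralized RL strategy might look (our illustration under assumed values; the candidate threshold list and the throughput-measurement hook are hypothetical and not taken from the article or any standard), each AP could treat the candidate sensitivity thresholds as arms of a simple bandit problem and keep a running value estimate per threshold:

```python
import random

# Hypothetical candidate channel-sensitivity thresholds in dBm; the values are illustrative only.
THRESHOLDS_DBM = [-82, -78, -74, -70, -66, -62]

def choose_threshold(values, epsilon=0.1):
    """Epsilon-greedy selection over the candidate sensitivity thresholds."""
    if random.random() < epsilon:
        return random.choice(THRESHOLDS_DBM)             # explore a random threshold
    return max(THRESHOLDS_DBM, key=lambda t: values[t])  # exploit the best estimate so far

def update_estimate(values, counts, threshold, observed_throughput):
    """Incrementally average the throughput observed while a threshold was in use."""
    counts[threshold] += 1
    values[threshold] += (observed_throughput - values[threshold]) / counts[threshold]

# Each AP keeps its own estimates, so the scheme remains decentralized and distributed.
values = {t: 0.0 for t in THRESHOLDS_DBM}
counts = {t: 0 for t in THRESHOLDS_DBM}
# In a real deployment, the reward would come from the AP's own throughput statistics, e.g.:
# t = choose_threshold(values)
# update_estimate(values, counts, t, measured_throughput)
```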
R-L-
A F 
IEEE . WLAN
Figure 1. Machine learning categories, algorithms, and potential communication applications.

In the RL algorithm, an agent performs actions within a state of its environment to collect a value/reward, as shown in Fig. 2. A typical RL technique
has three key sub-components: a strategy or policy, a reward, and a value function (usually referred to as a Q-value function) [12]. The policy is a key component of an RL technique; it characterizes how a learning agent behaves in its environment. With each action, an agent earns a reward from the system. The reward is a numerical value, and the main objective of an agent is to maximize its accumulated reward for any specific action-state pair. Similarly, a value function (or Q-value function) represents the long-term accumulated reward for a given action. The instant reward for a specific action might be small, yet the action may still carry a high Q-value in the long run. Such a high-value action is visited many times by the agent as it is exploited as an optimal action. Q-learning is a model-free RL algorithm for solving behaviorist decision problems. It uses a learning rate to adjust the learning capability, a discount factor to give higher or lower weight to future rewards, and the instant reward together with the change in the Q-value function to update the current Q-value function. Maximizing the Q-value function leads to the selection of the optimal action in the environment. The Q-learning strategy has been used successfully in the optimization of cognitive radios and wireless channel access techniques.
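For reference, the Q-value update sketched above is the standard textbook Q-learning rule from [12] (quoted here in standard notation; it is not an equation printed in this article):

$$ Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \left[ r_t + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \right] $$

where $\alpha$ is the learning rate, $\gamma$ is the discount factor, $r_t$ is the instant reward for taking action $a_t$ in state $s_t$, and the bracketed term is the change applied to the current Q-value.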
Realization of the RL-Aware Framework for WLANs
To exhibit the RL-aware framework's suitability for WLANs, let us take the example of channel-observation-based spectrum resource allocation [13]. We propose a hybrid RL-aware
solution where two principal RL-based processes
are performed: training the model (learning from
the practical channel information) and placement
of the model (optimizing the resource allocation
based on the learned information). Figure 3 rep-
resents the key stages of the proposed RL-aware
framework for WLANs in an mIoT environment.
While training of the model is done at the AP
with the collection of channel observation data
from numerous STAs, the model’s placement is
also done at the AP to provide an immediate
response to future actions (exploitation). Notice
that the framework can likewise be re-trained
during the second stage based on newly explored
observation data (exploration).
Training Phase: In our proposed framework,
the STAs in a wireless environment observe the
channel for channel-observation-based collision
probability as in [13] (as shown in red in Fig. 3).
Later, the AP gathers this data of various STAs
during their uplink transmission. The channel
collision probability can be utilized either for training or directly by algorithms that assist the fundamental MAC layer resource allocation (MAC-RA), such as optimal contention window selection [14].
The AP’s collected data is pre-processed with
the goal that the RL technique can appropriately
learn the channel conditions. For example, in
the case of applying Q-learning [12], the input
data needs to be converted into value-based
information as rewards (i.e., convert the channel
observation information into a collision proba-
bility of a scalar between 0 and 1). While gen-
erating the RL framework, certain rules should
be considered. For example, based on the spec-
trum resources, an AP may set a maximum num-
ber of connected STAs. The rules are strongly
attached to the abilities of the wireless devices.
Once the RL strategy at the AP generates the
output (i.e., the optimized MAC-RA function),
it is distributed throughout the network environ-
ment to the STAs, which are then prepared to
give fast optimal spectrum resource allocation
to new cases.
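A compact sketch of what this training step could look like at the AP is given below. This is our own simplified rendering rather than the authors' implementation: the contention-window candidates, the reward definition (one minus the measured collision probability), and the coarse state discretization are all assumptions made for illustration.

```python
import random

# Hypothetical contention-window (CW) candidates the AP may assign; values are illustrative only.
CW_ACTIONS = [16, 32, 64, 128, 256, 512, 1024]
N_STATES = 10  # coarse discretization of the observed collision probability

def discretize(collision_prob, bins=N_STATES):
    """Map a measured collision probability in [0, 1] to a coarse state index."""
    return min(int(collision_prob * bins), bins - 1)

def train_q_table(collision_probs, alpha=0.5, gamma=0.9, epsilon=0.1):
    """Train a Q-table from channel-observation data gathered from the STAs.

    `collision_probs` is assumed to be a sequence of per-round collision probabilities
    (scalars in [0, 1]) that the AP has collected from STA uplink transmissions.
    """
    q = {(s, a): 0.0 for s in range(N_STATES) for a in CW_ACTIONS}
    state = discretize(collision_probs[0])
    for p_coll in collision_probs[1:]:
        # Exploration-exploitation over the candidate CW values.
        if random.random() < epsilon:
            action = random.choice(CW_ACTIONS)
        else:
            action = max(CW_ACTIONS, key=lambda a: q[(state, a)])
        # Assumed reward: a lower observed collision probability yields a higher reward.
        reward = 1.0 - p_coll
        next_state = discretize(p_coll)
        best_next = max(q[(next_state, a)] for a in CW_ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state
    return q
```

The resulting table, indexed by the coarse channel state, is what the AP would then distribute to the STAs (or keep locally) so that suitable CW values can be looked up quickly for new cases.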
Figure 2. Interaction of a typical agent of an RL technique with its environment.

Figure 3. Key stages of the proposed RL-aware framework for WLANs in an mIoT environment.

Placement Phase: In the placement phase (as shown in blue in Fig. 3), an AP can detect new spectrum resource requests or potential hando-
vers based on recently collected data from STAs.
The collected data is processed by the AP in the same way as in the training phase. The Q-learning tech-
nique is applied locally at the AP, which provides
a reward-based output for future requests. The
MAC-RA decision is conveyed to the associated
STAs.
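Continuing the same illustrative sketch (again an assumption on our part, not the authors' code), the placement step then reduces to a greedy look-up in the learned table, which is why the AP can respond immediately to a new request:

```python
def select_cw(q_table, collision_prob):
    """Return the contention window with the highest learned value for the current channel state."""
    state = discretize(collision_prob)  # discretize() and CW_ACTIONS as in the training sketch above
    return max(CW_ACTIONS, key=lambda a: q_table[(state, a)])

# Example (hypothetical numbers): a newly measured collision probability of 0.35
# maps to a CW decision that the AP conveys to the associated STAs.
# best_cw = select_cw(q, 0.35)
```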
Potential of the RL-Based Framework: To demonstrate the capability of the RL-based framework through simulation results, we compare
the throughput performance of the conventional
MAC-RA (ConMAC-RA) mechanism with a chan-
nel-observation-based MAC-RA (COBMAC-RA)
mechanism and a novel RL-based MAC-RA
(RLMAC-RA) approach [14]. We performed
simulations in network simulator 3 (NS-3) [15].
Table 1 lists all the simulation parameters used
for the performance evaluation of the RL-aware
framework. Specifically, the RLMAC-RA predicts
the throughput that an STA will acquire after
the association with a given AP based on chan-
nel-observation-based collision probability infor-
mation. Figure 4a shows the network throughput
for different numbers of connected STAs. We see that the RLMAC-RA approach improves the average throughput performance and optimizes the overall network performance, allowing a much greater number of STAs within the environment. Similarly, in Fig. 4b, we increase the
number of connected STAs within the same envi-
ronment with time. The figure shows that the RLMAC-RA mechanism learns the environment and converges the system throughput to the optimal
level. The RL technique can interactively learn
complex and dynamic situations from dense
deployments, consequently ensuring optimal
throughput requirements to STAs. One interesting observation from these figures is that an RL-aware framework for spectrum resource allocation may allow many connected devices within a WLAN environment. As shown in Figs. 4a and 4b, the network throughput of the ConMAC-RA mechanism degrades as the number of connected STAs increases, resulting in very low or possibly zero throughput in the network due to increased collisions among the STAs. On the other hand, the RLMAC-RA mechanism remains stable and converges even as the number of connected STAs within the network grows.
Conclusion
Current wireless communication networks, like
IEEE 802.11 standards, are not yet ready for the
pervasive adoption of ML-based frameworks.
Therefore, disruptive framework-level changes
are required for upcoming wireless communica-
tion standards. This article presents an RL-aware
framework for next-generation wireless commu-
nications to cope with such a situation in future
technologies, 5G and beyond (6G) for IEEE
802.11 WLANs (e.g., IEEE 802.11ax). Our pro-
posed framework provides enhanced network
performance in throughput and allows a WLAN
network to support many connected STAs.
Thus, we conclude that future WLANs are envisioned to share a common, flexible RL-aware architecture that permits optimized spectrum resource allocation. Nevertheless, plenty of effort is still required before arriving at knowledgeable wireless networks. We highlight an RL-based framework
for data handling (collection from the WLAN environment), coordination (distribution of the RL operation and dealing with the data heterogeneity), and robustness of the RL strategies (managing vulnerability and preventing exceptional events in the environment).
Table 1. MAC/PHY layer simulation parameters for performance evaluation.

Frequency: 5 GHz
Channel bandwidth: 160 MHz
Data rate (MCS11): 1201 Mb/s
Payload size: 1472 bytes
Transmission range: 10 m
CWmin: 32
CWmax: 1024
Simulation time: 500 s
Propagation loss model: LogDistancePropagation
Mobility model: ConstantPositionMobility
Rate adaptation: ConstantRateWifiManager
Error-rate model: NistErrorRateModel
Figure 4. Throughput comparison among
ConMAC-RA, COBMAC-RA, and RLMAC-RA
in: a) WLAN’s average throughput comparison
for a different number of connected STAs;
b) a dynamic network environment with an
increasing number of connected STAs after
every 50 s.
Acknowledgment
Rashid Ali and Imran Ashraf are co-first
authors. Yousaf Bin Zikria and Ali Kashif Bashir
are the corresponding authors.
References
[1] K. Sheth et al., “A Taxonomy of AI Techniques for 6G Com-
munication Networks,” Computer Commun., vol. 161, 2020,
pp. 279–303. DOI:10.1016/j.comcom.2020.07.035.
[2] R. Ali et al., “Design of MAC Layer Resource Allocation
Schemes for IEEE 802.11ax: Future Directions,” IETE
Technical Review, 2016, vol. 35, no. 1, pp. 28–56. DOI:
10.1080/02564602.2016.1242387.
[3] A. Osseiran et al., “Scenarios for 5G Mobile and Wireless
Communications: the Vision of the METIS Project,” IEEE
Commun. Mag., vol. 52, no. 5, May 2014, pp. 26–35.
[4] ITU-T Supp. Y.Supp55, “Machine Learning in Future Net-
works Including IMT-2020: Use Cases,” 2019.
[5] C. Jiang et al., “Machine Learning Paradigms for Next-Gen-
eration Wireless Networks,” IEEE Wireless Commun., vol. 24,
no. 2, Apr. 2016, pp. 98–105.
[6] C. Zhang, P. Patras, and H. Haddadi, “Deep Learning in
Mobile and Wireless Networking: A Survey,” IEEE Commun.
Surveys & Tutorials, vol. 21, no. 3, 2019, pp. 2224–87.
[7] M. Usama et al., “Unsupervised Machine Learning for
Networking: Techniques, Applications and Research Chal-
lenges,” IEEE Access, vol. 7, 2019, pp. 65,579–615.
[8] S. Bi et al., “Wireless Communications in the Era of Big
Data,” IEEE Commun. Mag., vol. 53, no. 10, Oct. 2015, pp.
190–99.
[9] I. Chih-Lin et al., “The Big-Data-Driven Intelligent Wireless Net-
work: Architecture, Use Cases, Solutions, and Future Trends,”
IEEE Vehic. Tech. Mag., vol. 12, no. 4, 2017, pp. 20–29.
[10] M. Wang et al., “Machine Learning for Networking: Work-
flow, Advances, and Opportunities,” IEEE Network, vol. 32,
no. 2, Mar./Apr. 2018, pp. 92–99.
[11] M. Sohail et al., “TrustWalker: An Efficient Trust Assessment
in Vehicular Internet of Things (VIoT) with Security Consid-
eration,” Sensors, vol. 20, no. 14, 2020, p. 3945.
[12] R. S. Sutton and A. G. Barto, Reinforcement Learning: An
Introduction, 2nd ed., MIT Press, 2018.
[13] R. Ali et al., “Channel Observation-Based Scaled Backoff
Mechanism for High-Efficiency WLANs,” Electronics Letters,
May 2018, vol. 54, no. 10, pp. 663–65. DOI: 10.1049/
el.2018.0617.
[14] R. Ali et al., “Deep Reinforcement Learning Paradigm for
Performance Optimization of Channel Observation-Based
MAC Protocols in Dense WLANs,” IEEE Access, vol. 7, no. 1,
Jan. 2019, pp. 3500–11.
[15] The Network Simulator-ns-3; https://www.nsnam.org/,
accessed Jan. 6, 2020.
Biographies
Rashid ali [S’ 17, M’ 20] (rashidali@sejong.ac.kr) is currently an
assistant professor with the School of Intelligent Mechatronics
Engineering, Sejong University, Seoul, Korea. He received his
B.S. degree in information technology (2007) from Gomal Uni-
versity, Pakistan. He received his Master’s in computer science
(advanced network design, 2010) from the University West,
Sweden. He received his Ph.D. degree (2019) in information
and communication engineering from the Department of Infor-
mation and Communication Engineering, Yeungnam University,
Korea. His research interests include next-generation wireless
local area networks (IEEE 802.11ax/ah), unlicensed wireless net-
works in 5G, and reinforcement learning techniques for wireless
networks.
Imran Ashraf (imranashraf@ynu.ac.kr) is working as an assistant
professor in the Department of Information and Communica-
tion Engineering (ICE), Yeungnam University, South Korea. He
received his Ph.D. degree from the Department of ICE, Yeun-
gnam University, in 2018. He received his M.S. degree from
Blekinge Institute of Technology, Sweden, in 2011. His research
interests include next-generation location-based services, indoor
positioning and localization using WLAN, smartphone sensors
and 4G/5G networks, machine/deep learning for positioning/
localization, deep learning architecture and algorithms for clas-
sification and prediction, smart sensors solutions (LIDAR) for
smart car, and data fusion strategies for environment sensing in
autonomous vehicles.
Ali Kashif Bashir [M'15, SM'16] (dr.alikashif.b@ieee.org) is a
senior lecturer with the Department of Computing and Math-
ematics, Manchester Metropolitan University, United Kingdom,
and an adjunct professor at the National University of Sci-
ence and Technology, Pakistan. He received his B.S. degree
from the University of Management and Technology, Pakistan,
his M.S. degree from Ajou University, South Korea, and his
Ph.D. degree in computer science and engineering from Korea
University. He was an associate professor with the Faculty
of Science and Technology, University of the Faroe Islands,
Denmark. He is an Editor of several journals of IEEE, Elsevier,
and Springer.
Yousaf Bin Zikria [SM'17] (yousafbinzikria@ynu.ac.kr) is cur-
rently working as an assistant professor in the Department of
Information and Communication Engineering, College of Engi-
neering, Yeungnam University. He received a Ph.D. degree from
the Department of ICE, Yeungnam University in 2016. He has
more than 10 years of experience in research, academia, and
industry in the fields of information and communication engi-
neering and computer science. He has authored more than 80
scientific peer-reviewed papers in journals, conferences, patents,
and book chapters.
Recently, machine learning has been used in every possible field to leverage its amazing power. For a long time, the net-working and distributed computing system is the key infrastructure to provide efficient computational resource for machine learning. Networking itself can also benefit from this promising technology. This article focuses on the application of Machine Learning techniques for Networking (MLN), which can not only help solve the intractable old network questions but also stimulate new network applications. In this article, we summarize the basic workflow to explain how to apply the machine learning technology in the networking domain. Then we provide a selective survey of the latest representative advances with explanations on their design principles and benefits. These advances are divided into several network design objectives and the detailed information of how they perform in each step of MLN workflow is presented. Finally, we shed light on the new opportunities on networking design and community building of this new inter-discipline. Our goal is to provide a broad research guideline on networking with machine learning to help and motivate researchers to develop innovative algorithms, standards and frameworks.