A Study on the Integration of Machine Learning in Wireless Communication

Authors: Aritra Basu and Budhaditya Bhattacharyya
Abstract: Presently, we are observing a paradigm shift in communication technology, as every enterprise is shifting its focus to smart communication networks in order to take advantage of network traffic data. Modern communication networks, in particular mobile networks, generate an enormous amount of data at the network infrastructure level and at the end-user level. This data is an important source of potentially valuable information such as the location, mobility patterns and calling preferences of the user. The vision of network operators is to use this massive amount of traffic data for in-house administration purposes such as network management and optimization. To make this vision a reality, there is a strong need for the development and deployment of new machine learning algorithms for big data analytics in communication networks. These must be capable of extracting useful information from the network traffic while respecting the constrained communication resources, and then exploiting this information for external or in-house services.
Index Terms: Artificial Intelligence, Data Analytics, Fifth Generation Cellular, Internet of Things, Machine Learning, Wireless Communication
I. INTRODUCTION
THERE is an ongoing convergence of four key technologies
that are set to significantly alter the information and
communications technology (ICT) ecosystem. These
technologies are fifth generation (5G) cellular, artificial
intelligence (AI), data analytics and the internet of things (IoT).
Each of these technologies will have a huge impact in its own right, both on ICT and on all major industry verticals that depend on telecom and information technology (IT) services. However, the combination of these technologies
is poised to create opportunities to significantly enhance user
experiences for communication, applications, digital content
and commerce.
Aritra Basu is with the Department of Electronics and Communication
Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu, India.
(E-mail: aritra.basu2014@vit.ac.in)
Dr. Budhaditya Bhattacharyya is an Associate Professor in the Department
of Electronics and Communication Engineering, Vellore Institute of
Technology, Vellore, Tamil Nadu, India.
(E-mail: budhaditya@vit.ac.in)
A. Artificial Intelligence (AI)
AI is a form of machine-based intelligence that typically manifests in cognitive functions associated with human minds. Machine learning (ML), deep learning (DL) and natural language processing (NLP) are different technologies associated with AI. These technologies are expected to play an ever-increasing role in ICT.
B. Fifth Generation (5G) Cellular
5G networks are expected to provide much higher data rates (in the range of several gigabits per second) than 4G/LTE. 5G also aims to provide ultra-low latency (less than 1 ms delay) for certain services such as virtual reality. Unlike 4G/LTE, 5G is being designed to support massive numbers of connected devices and will facilitate a game-changing wireless infrastructure transformation for communication service providers (CSPs). CSPs will be able to deploy smart equipment at base transceiver stations (BTS), transforming the BTS into distributed data centers.
C. Internet of Things (IoT)
IoT refers to the virtual representations of uniquely identifiable objects in an internet-like environment. The world
is moving beyond standalone devices into a new era where
everything is connected via IoT technologies. This has broad
and deep implications for products, services and solutions
across every industry vertical.
D. Data Analytics
Data analytics refers to the processing of vast amounts of
machine-generated unstructured data. AI technology can be integrated further to automate decision making and to apply ML to the analysis of this data.
The convergence of cloud, AI and IoT will play a vital role in
the evolution of data analytics by generating increasing
amounts of unstructured machine data. Such data will drive
substantial opportunities for AI to support unstructured data
analytics solutions.
This paper is organized as follows: Section II discusses the need for machine learning in the field of communication, Section III covers the application of supervised learning in wireless communication, Section IV reviews how machine learning has already been integrated into several communication technologies, Section V discusses the application of machine learning in wireless networks, Section VI focuses on areas of future research, and Section VII concludes the paper.
II. NEED FOR MACHINE LEARNING
Machine learning (ML) refers, in essence, to getting computers to program themselves. ML is a powerful set of mathematical and computational tools that use data from previous events to predict or estimate the next possible outputs. Such predictions can be used for smart decision making, which optimizes a performance metric over a particular model. In computer science, ML is considered a sub-field of AI and usually overlaps with other sub-fields such as computational statistics and data mining. Applications of ML can be found in a large number of decision-making algorithms that modify their behaviour based on statistical models.
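As a toy illustration of this predict-from-past-data loop, the sketch below fits a linear regression on a synthetic hourly traffic trace and predicts the next hour's load; the trace, the three-sample feature window and the Mbps units are assumptions made for illustration, not data from this paper.

```python
# Minimal sketch: learn from past traffic samples, predict the next output.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
hours = np.arange(168)                       # one week of hourly samples
load = 50 + 30 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 3, 168)

# Features: the previous 3 hourly loads; target: the next hour's load.
X = np.column_stack([load[i:i + 165] for i in range(3)])
y = load[3:]

model = LinearRegression().fit(X, y)
next_load = model.predict(load[-3:].reshape(1, -1))
print(f"predicted next-hour load: {next_load[0]:.1f} Mbps")
```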
A large number of high-quality wireless services are
required to support the rapid development of mobile
communication technologies. According to Cisco VNI Global
Mobile Data Traffic Forecast 2017, global mobile data traffic is likely to increase around sevenfold between 2016 and 2021, while mobile network connection speeds are expected to triple over the same period. Thus,
there exists a big gap between the future requirements of
wireless services and the existing technologies.
Designing intelligent algorithms that make the best use of constrained wireless resources is the need of the hour. ML, with its pattern recognition and computational learning theory that enable models to learn from past experience and make predictions in complicated scenarios, can be used to analyze the current radio conditions and communication parameters in wireless systems, such as spectrum utilization, channel capacity, power level and antenna configuration, in order to generate an optimal solution aimed at improving the quality of service (QoS).
Recently, several ML algorithms have been proposed for
wireless sensor networks, cognitive radio networks, machine-
to-machine communication, MIMO link adaptation, antenna
selection, congestion control and so forth. ML has been one of
the most active research fields due to its great success in a wide
range of domains. However, its impact on wireless
communication has so far been very limited, even though the
potential of ML in building state-of-the-art communication
systems is broad. The main challenge is how to formulate the
problems in communication systems as a proper ML model.
III. SUPERVISED LEARNING IN WIRELESS COMMUNICATION
Energy efficiency is a primary performance indicator of 5G wireless networks, demanding energy-efficient algorithms not just at the core network level but also at the data storage facilities. It is generally assumed that quick access to accurate data can enhance the overall performance of the system. With regard to future wireless networks, gathering significant amounts of information to identify correlations and statistical probabilities is expected to enable proactive decisions, thereby improving the efficiency of the network.
Next-generation networks are expected to take in the various attributes of both the user's location and human calling patterns in order to automatically determine the optimal system configuration. To achieve this, smart mobile terminals have to depend on modern learning and decision-making algorithms, and ML, as one of the most capable AI tools, presents itself as a promising solution [1].
Correlating data traffic with the user's geolocation is a promising concept for achieving enhanced efficiency. Such information can be presented as radio environment maps that provide significant insight into channel quality, throughput and link reliability. Traffic maps, used for visualizing the distribution of traffic, mobility patterns and trajectory information, can also improve long-term network resource management. Recorded data on users and their migration patterns can be applied to improve the caching of data content. Finally, knowledge of user behaviour can be utilized to improve energy efficiency; selected sectors of base stations can be switched off on the basis of traffic prediction maps generated from historical data. To summarize, access to the rich pool of traffic data that defines the context of communication (known as context information) can significantly improve the various performance metrics of the network. However, with increasing data, the payload will also increase, creating the need for significant enhancements on the backhaul part of the system.
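A minimal sketch of the sector switch-off decision just described, assuming per-sector traffic forecasts are already available from such a prediction model; the sector names, forecast values and 10 Mbps sleep threshold are illustrative placeholders, not operator policy from the paper.

```python
# Hedged sketch: sectors whose predicted traffic falls below a threshold
# become candidates for sleep mode. All numbers are illustrative.
predicted_traffic = {          # Mbps forecasts per sector, e.g. from a
    "sector_A": 4.2,           # model trained on historical traffic maps
    "sector_B": 310.0,
    "sector_C": 12.5,
}
SLEEP_THRESHOLD = 10.0         # assumed policy value, not from the paper

for sector, mbps in predicted_traffic.items():
    action = "switch off" if mbps < SLEEP_THRESHOLD else "keep active"
    print(f"{sector}: {mbps:7.1f} Mbps -> {action}")
```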
A. MIMO Channel & Energy Learning - Regression Models,
K-Nearest Neighbour (KNN) & Support Vector Machine
(SVM):
Regression analysis relies on statistical methods for assessing the relationships among variables. Linear regression involves a linear regression function, whereas in logistic regression the regression function is logistic, taking the form of the familiar sigmoid curve.
The KNN and SVM algorithms are primarily used for the classification of objects. In KNN, an object is assigned to the class that is most common among its k nearest neighbours; the contribution of each neighbour may also be weighted by properties such as its distance from the object. The SVM algorithm, on the other hand, depends on nonlinear mapping. It involves transforming the original training data into a higher dimension so that it can be separated into multiple classes. The algorithm then searches for the optimal linear separating hyperplane that can distinguish one class from another.
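The following sketch shows both classifiers on a toy link-quality problem, assuming scikit-learn is available; the SNR/load features and the labelling rule are synthetic assumptions used only to illustrate the majority-vote (KNN) and hyperplane-search (SVM) behaviour described above.

```python
# Toy classification of links as poor (0) or good (1) from synthetic data.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(1)
snr = rng.uniform(0, 30, 200)                  # SNR in dB
load = rng.uniform(0, 1, 200)                  # normalized cell load
X = np.column_stack([snr, load])
y = (snr - 15 * load > 8).astype(int)          # assumed labelling rule

knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)   # majority vote of 5
svm = SVC(kernel="rbf").fit(X, y)              # nonlinear map + hyperplane

test = np.array([[18.0, 0.4]])                 # 18 dB SNR, 40% load
print("KNN:", knn.predict(test)[0], " SVM:", svm.predict(test)[0])
```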
These models can be utilized for estimating or predicting radio parameters related to particular users. For instance, in massive MIMO systems, the presence of hundreds of antennas turns channel estimation into a high-dimensional search problem, which can be addressed by the aforementioned learning models. A hierarchical SVM (H-SVM) has been proposed [2] for the estimation of the Gaussian channel's noise level in a MIMO wireless network. In heterogeneous networks where handovers are frequent, both KNN and SVM can be applied to find the optimal solution. In [3], it is suggested that these models can be utilized for learning a terminal's specific usage pattern.
B. Massive MIMO & Cognitive Radio - Bayesian Learning:
Bayesian learning is based on computing the probability distribution of target variables conditioned on their inputs. Examples of models and techniques associated with Bayesian learning are the Gaussian mixture (GM) model, expectation maximization (EM) and the hidden Markov model (HMM) [4]. All the data points in a GM model are
divided into clusters, where each cluster is Gaussian distributed. EM is a generalization of the maximum likelihood estimation technique. It uses an iterative approach to find the most probable outcome and consists of two steps: the "E" step, which selects a function to characterize the lower bound of the likelihood, and the "M" step, which maximizes that function. The HMM is a tool for representing probability distributions over sequences of observations. It is a generalized form of the mixture model in which the hidden variables are not independent of each other; rather, they are related through a Markov process.
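As a hedged illustration of GM clustering fitted via EM, the sketch below separates two synthetic clusters of received-power samples (say, noise-only versus signal-present observations) using scikit-learn's GaussianMixture, which runs the E and M steps internally; the dBm values are assumptions.

```python
# GM model fitted by EM on synthetic 1-D received-power samples.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
samples = np.concatenate([rng.normal(-90, 2, 300),    # dBm, noise floor
                          rng.normal(-70, 4, 300)])   # dBm, active signal

gmm = GaussianMixture(n_components=2, random_state=0)
gmm.fit(samples.reshape(-1, 1))                       # EM iterations inside

print("cluster means (dBm):", gmm.means_.ravel())
print("cluster weights:", gmm.weights_)
```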
The Bayesian learning model finds application in learning the spectral characteristics of next-generation networks. In [5], the pilot contamination problem in massive MIMO systems is addressed by estimating not only the channel parameters of the target cell, but also those of the adjacent interfering cells. Bayesian learning can also be applied in the field of cognitive radio networks. In [6], a cooperative wideband spectrum sensing scheme was proposed for the detection of a primary user (PU) based on the EM algorithm. In [7], Bayesian learning was used to propose a tomography model describing a variety of techniques capable of extracting relevant information, such as path delays and successful packet receptions, for deployment in cognitive radio networks.
IV. EXISTING WORKS ON INTEGRATION OF MACHINE
LEARNING WITH COMMUNICATION TECHNOLOGY
A. Communication Networks
Routing significantly affects the performance of a network. ML algorithms have been utilized in the past for handling different routing issues such as adaptive routing and shortest-path routing. In [8], a packet routing algorithm based on reinforcement learning was proposed for dynamically changing networks. This algorithm balances the route length against the likelihood of congestion along the available routes. The
same problem was approached in a different way using genetic
algorithms in [9]. This involved the creation of new routes with
the help of crossover and mutation. Genetic algorithms also
find their application in the multicast routing scenario where
data is sent to multiple receivers in a communication network
[10]. Genetic algorithms have even been applied in mobile ad-
hoc networks for the construction of multicast trees that can
tackle issues like bounded end-to-end delay [11]. In [12], ML
techniques have been proposed for improving the throughput in
communication networks by making use of a dynamic
throughput control technique to fulfill the QoS requirements
while making efficient use of the network resources. In [13],
neural networks were applied for dynamically allocating bandwidth in real-time video applications. Traffic
identification is another imperative matter for network
operators as it deals with the management of networks to
guarantee the QoS and to install necessary security measures.
ML methods can be used here to recognize historical patterns
in the traffic by analyzing the captured packet headers and
flow-level information [14].
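The core of the reinforcement-learning routing scheme of [8] is a per-node update of the estimated delivery time via each neighbour. The sketch below is a minimal rendering of that Q-routing update rule under toy assumptions (a three-node topology, made-up delays and a 0.5 learning rate); it is not the authors' implementation.

```python
# Q[x][d][y]: node x's estimated time to deliver a packet bound for
# destination d via neighbour y (toy topology with assumed values).
Q = {"x":  {"d": {"y1": 10.0, "y2": 10.0}},
     "y1": {"d": {"d": 2.0}},        # y1's best remaining estimate to d
     "y2": {"d": {"d": 6.0}}}
ALPHA = 0.5                          # assumed learning rate

def q_route_update(x, d, y, queue_delay, tx_delay):
    """One Q-routing step: move the old estimate toward the observed
    local delay plus the neighbour's best downstream estimate."""
    target = queue_delay + tx_delay + min(Q[y][d].values())
    Q[x][d][y] += ALPHA * (target - Q[x][d][y])

q_route_update("x", "d", "y1", queue_delay=1.0, tx_delay=0.5)
print(Q["x"]["d"])   # x now prefers y1: {'y1': 6.75, 'y2': 10.0}
```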
B. Wireless Communication
Modern wireless communication systems must
continuously adapt to the changing network environment for
improved QoS. In [15], it has been suggested that the dynamic nature of the wireless communication environment demands the adaptation of hardware parameters such as antenna selection. The peak-to-average power ratio (PAPR) reduction problem
has also attracted a lot of attention with [16] recommending a
neural network approach and [17], a set-theoretic approach.
Methods of ML and compressive sensing can also significantly
improve the efficiency of OFDM channel estimation. In [18], a solution based on neural networks has been proposed with known pilot signals at its input, whereas [19] deals with the same issue, but with the added complication of nonlinearities. Studies have also been carried out in the field of
cognitive radio systems in [20] with cooperative spectrum
sensing. This is based on the principle of cooperation among
multiple secondary users for improved spectrum sensing. ML has even been used for MIMO power control in [21]. Various
learning methods have also been proposed to tackle the inter-
cell interference problem [22], which significantly affects the
performance of wireless users in mobile networks. In order to
realize the vision of self-organizing networks, a lot of research
has also been carried out to fully automate the network
management process [23].
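In the spirit of the neural-network channel estimators above (e.g. [18]), the sketch below trains a small MLP to map received pilot observations to channel gains. The flat real-valued channel, BPSK pilots and noise level are simplifying assumptions, and scikit-learn's MLPRegressor stands in for whatever network architecture the cited works actually use.

```python
# Hedged sketch: learn a pilot-observations -> channel-gains mapping.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
n_pilots, n_samples = 8, 2000
pilots = rng.choice([-1.0, 1.0], n_pilots)            # known BPSK pilots

h = rng.normal(0, 1, (n_samples, n_pilots))           # per-subcarrier gains
received = h * pilots + rng.normal(0, 0.1, (n_samples, n_pilots))

net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
net.fit(received, h)                                  # pilots in, channel out

h_hat = net.predict(received[:1])
print("true:", np.round(h[0], 2), "\nest :", np.round(h_hat[0], 2))
```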
C. Security & Privacy in Communication
ML algorithms form a core part of many emerging
applications of communication technology. However,
application of such functions in communication may leak
information that affects the privacy of individuals. Thus, it is of
utmost importance to preserve the privacy of data by tackling
the various security-related problems. ML algorithms are used
to monitor various network activities to detect anomalies [24].
Automatic spam filtering [25] and detection of phishing [26]
are some other applications. Preserving data privacy is another
very important aspect of security in communication, especially
with the involvement of sensitive data. In [27], a decision-tree classifier has been designed that can be trained on perturbed, privacy-preserving data without significant loss in accuracy.
D. Smart Services, Smart Infrastructure & IoT
The new field of smart applications in communication
technology has seen the rapid integration of ML algorithms. To
forecast the power production in a photovoltaic plant, a neural
network based prediction algorithm has been proposed in [28].
A similar algorithm has been used in [29] for context-aware
computing in IoT. Tasks such as data traffic monitoring and
prediction of resource usage have also been handled with
learning algorithms in [30].
E. Image & Video Communication
The increasing convergence of ML with
communication can also be observed in image and video
communication. In [31], more than 200 applications of neural networks to image processing have been summarized. Signal compression is one vital use of these algorithms, as it forms an integral part of all video communication systems [32]. Further, as video signals are stored as compressed data, object recognition in the compressed domain is also of high relevance. In [33], a deep neural network (DNN) based object tracking system has been described. Video quality is of
utmost importance in multimedia applications. Different ML
algorithms have been proposed to estimate the subjective
quality of images in [34].
V. APPLICATIONS OF MACHINE LEARNING IN WIRELESS
NETWORKS
ML algorithms have been applied to mobile ad-hoc networks (MANETs) and wireless sensor networks (WSNs). One of the initial goals is to estimate or predict whether the strength of a given wireless link may drop below a threshold, and for how long, so that a minimum service quality can be maintained. A direct application of such predictions is mobility management (handover), where models attempt to predict the next location of the devices. The research for MANETs and WSNs falls into several categories.
A. Routing
Different ML models can be used to evaluate the probability of successful packet reception or to estimate the link quality using the signal-to-noise ratio (SNR). Regarding routing, the aim of most research is to achieve self-organizing MANETs and WSNs so that the dynamics of multiple hops and network topologies can be predicted by statistical models. One common scenario is to have primary and secondary networks (cognitive radio), where sensing capabilities are restricted since primary networks must not be affected by secondary users.
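A hedged sketch of the link-quality estimation mentioned above: logistic regression maps SNR in dB to a packet reception probability. The underlying link model used to generate the synthetic reception outcomes is an assumption.

```python
# Estimate P(packet received | SNR) from synthetic link measurements.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
snr = rng.uniform(-5, 25, 1000).reshape(-1, 1)        # dB
p_true = 1 / (1 + np.exp(-(snr.ravel() - 8) / 2))     # assumed link model
received = (rng.uniform(0, 1, 1000) < p_true).astype(int)

clf = LogisticRegression().fit(snr, received)
for s in (0.0, 8.0, 16.0):
    p = clf.predict_proba([[s]])[0, 1]
    print(f"SNR {s:4.1f} dB -> P(reception) = {p:.2f}")
```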
B. Clustering and Data Aggregation
In large-scale, energy-constrained MANETs and WSNs, a large amount of data must be sent to the base station. One approach to reducing power consumption and signaling overhead is to cluster neighboring nodes, which send messages only to one elected node called the cluster head. The cluster head then collects data from its cluster and forwards the necessary data to the base station. In this scenario, ML techniques are used to extract relevant features from the inputs reported by the nodes in order to perform an efficient selection of the cluster head.
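The sketch below illustrates one plausible realization of this idea, assuming node positions and residual energies are reported to the sink: k-means groups the nodes, and the highest-energy node in each cluster is elected head. The network size, coordinates and energy values are synthetic.

```python
# Toy cluster-head election: group nodes spatially, pick max-energy node.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
positions = rng.uniform(0, 100, (60, 2))   # node coordinates (m)
energy = rng.uniform(0.2, 1.0, 60)         # residual battery (normalized)

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(positions)
for c in range(4):
    members = np.where(labels == c)[0]
    head = members[np.argmax(energy[members])]
    print(f"cluster {c}: head = node {head} (energy {energy[head]:.2f})")
```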
C. Event Detection and Query Processing
The ML techniques used for monitoring the WSNs are
mainly focused on event detection and event scheduling. The
detection can be based on event classification, whereas query
processing is assessed by the nodes as needed by the
application manager entity.
D. Medium Access Control
Nodes in a WSN can decide whether to remain active or idle by means of ML algorithms that predict, based on the transmission history of the network, whether the channel will be available. The ML designs are focused on energy efficiency and latency reduction. An interesting example is that MAC protocols can be switched according to the network conditions, with ML in charge of learning which protocol best fits a particular set of conditions.
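As a minimal sketch of this idea, the code below trains a logistic-regression predictor of next-slot channel occupancy from the last four observed slots; the bursty two-state traffic model generating the history is an assumption.

```python
# Predict whether the next slot is busy from recent channel history.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
# Bursty occupancy: busy slots tend to follow busy slots (assumed model).
history = [0]
for _ in range(2000):
    p_busy = 0.8 if history[-1] else 0.2
    history.append(int(rng.uniform() < p_busy))
history = np.array(history)

window = 4                                        # look back 4 slots
X = np.column_stack([history[i:i - window] for i in range(window)])
y = history[window:]

clf = LogisticRegression().fit(X, y)
print("P(next slot busy | last 4 idle):",
      clf.predict_proba([[0, 0, 0, 0]])[0, 1].round(2))
```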
E. Intrusion Detection
In MANETs and WSNs, the biggest security challenge is to detect an attack or a security threat. ML algorithms can be used to classify packets as useful data or as part of a denial-of-service (DoS) attack and to take the necessary actions. In case of an intrusion, ML algorithms can classify the nodes based on their traffic profiles, interactions within the network or energy consumption. Once a node's behaviour is predicted to be unusual, it can be isolated.
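A hedged sketch of such traffic-profile classification, using an Isolation Forest as one possible anomaly detector: nodes whose profile (packet rate, mean packet size, energy drain) deviates strongly from the rest are flagged for isolation. All profile values are synthetic.

```python
# Flag nodes with anomalous traffic profiles for possible isolation.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)
normal = rng.normal([50, 200, 1.0], [5, 20, 0.1], (50, 3))   # benign nodes
attacker = np.array([[480.0, 64.0, 3.5]])                    # DoS-like profile
profiles = np.vstack([normal, attacker])

detector = IsolationForest(contamination=0.05, random_state=0).fit(profiles)
flags = detector.predict(profiles)        # -1 = anomalous, 1 = normal
print("suspicious nodes:", np.where(flags == -1)[0])  # node 50 = attacker
```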
VI. SCOPE FOR FUTURE RESEARCH
There are many issues that the integration of ML with
wireless communication needs to address.
A. Low Complexity Models
State-of-the-art ML models such as DNNs exhibit very high computational complexity and are thus not suitable for communication systems with limited storage capabilities and energy resources. Recent works such as [35] have addressed this issue by showing that the size of VGG-16, a popular DNN for image classification, can be reduced by more than 95% with no loss of accuracy. In [36], the issue of weight binarization has been handled, adapting DL models to processor architectures that do not permit floating-point operations. Further research on these topics is of prime importance in order to reduce the complexity of these models so that they can be implemented in computationally constrained environments with minimal performance loss.
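A minimal sketch of magnitude-based pruning in the spirit of [35], assuming a single dense layer: the 95% of weights with smallest absolute value are zeroed. The retraining step that [35] uses to recover accuracy is omitted here.

```python
# Zero out the smallest 95% of weights (by magnitude) in one dense layer.
import numpy as np

rng = np.random.default_rng(8)
W = rng.normal(0, 0.1, (512, 512))                  # toy layer weights

sparsity = 0.95                                     # fraction to remove
threshold = np.quantile(np.abs(W), sparsity)
W_pruned = np.where(np.abs(W) < threshold, 0.0, W)

kept = np.count_nonzero(W_pruned) / W.size
print(f"weights kept: {kept:.1%}")                  # ~5% of the original
```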
B. Standardized Formats for Machine Learning
The standardization of algorithms is of prime
importance in the communication industry in order to promote
the reliability and interoperability of such systems. With the
expanding utilization of ML algorithms, the need for standardized data formats and for standards governing the interaction among various ML models is on the rise. Such
standardization can also ensure that models fulfill certain
security and privacy requirements.
C. Security & Privacy Mechanisms
The lack of transparency of ML algorithms is a major
issue in communication applications. It is also a known fact
that DNNs can behave in unexpected ways when presented
with data with properties that differ from the data that was used
for training the model. Thus, it is necessary to increase the
reliability of the models. Another area of research is to design
some effective encryption mechanisms to ensure the security of
data during and after the learning process.
D. Radio Resource and Network Management
Radio resources, such as beamforming and medium access control parameters, and network management have a strong influence on the end-to-end performance of mobile networks. Additionally, some of these parameters, such as the power budget and neighborhood lists, are dynamically adapted over relatively short time intervals based on the changing network topology [37]. Thus, 5G networks call for data-driven radio resource management techniques that employ ML to extract information from the system and gradually build up knowledge, so that the network can perform efficiently in the absence of complete channel state information.
VII. CONCLUSION
The increasing influence of ML in communication technology has not only led to such algorithms excelling in network management activities such as channel estimation and PAPR reduction, but has also made them an indispensable element in the emerging fields of smart cities and IoT. The availability of vast amounts of data and recent improvements in DL methodology are likely to aid the convergence of these two fields and redefine the entire world of communication technology. In spite of the effective utilization of ML techniques in different communication applications, numerous difficulties remain to be addressed. The substantial size and high computational demands of modern-day algorithms restrict the large-scale utilization of these models in embedded devices. Likewise, there is a need for novel learning approaches in radio resource and network management that can adapt to uncertainties such as incomplete channel state information. Other issues concerning the reliability and security of ML models must also be addressed before they can be deployed in real-time applications.
REFERENCES
[1] M. van der Schaar and F. Fu, "Spectrum access games and strategic learning in cognitive radio networks for delay-critical applications," Proc. IEEE, vol. 97, no. 4, pp. 720–740, Apr. 2009.
[2] P. Zhou, Y. Chang, and J. A. Copeland, "Determination of wireless networks parameters through parallel hierarchical support vector machines," IEEE Trans. Parallel Distrib. Syst., vol. 23, no. 3, pp. 505–512, Mar. 2012.
[3] B. K. Donohoo et al., "Context-aware energy enhancements for smart mobile devices," IEEE Trans. Mobile Comput., vol. 13, no. 8, pp. 1720–1732, Aug. 2014.
[4] E. Alpaydın, Introduction to Machine Learning, 3rd ed. Cambridge, MA: The MIT Press, 2014.
[5] C.-K. Wen et al., "Channel estimation for massive MIMO using Gaussian-mixture Bayesian learning," IEEE Trans. Wireless Commun., vol. 14, no. 3, pp. 1356–1368, Mar. 2015.
[6] K. W. Choi and E. Hossain, "Estimation of primary user parameters in cognitive radio systems via hidden Markov model," IEEE Trans. Signal Process., vol. 61, no. 3, pp. 782–795, Feb. 2013.
[7] C.-K. Yu, K.-C. Chen, and S.-M. Cheng, "Cognitive radio network tomography," IEEE Trans. Veh. Technol., vol. 59, no. 4, pp. 1980–1997, May 2010.
[8] J. A. Boyan and M. L. Littman, "Packet routing in dynamically changing networks: A reinforcement learning approach," in Proc. NIPS, 1994, pp. 671–678.
[9] M. Munetomo, Y. Takai, and Y. Sato, "A migration scheme for the genetic adaptive routing algorithm," in Proc. IEEE Int. Conf. Syst., Man, Cybern., vol. 3, 1998, pp. 2774–2779.
[10] Q. Zhang and Y.-W. Leung, "An orthogonal genetic algorithm for multimedia multicast routing," IEEE Trans. Evol. Comput., vol. 3, no. 1, pp. 53–62, 1999.
[11] T. Lu and J. Zhu, "Genetic algorithm for energy-efficient QoS multicast routing," IEEE Commun. Lett., vol. 17, no. 1, pp. 31–34, 2013.
[12] A. Eswaradass, X.-H. Sun, and M. Wu, "A neural network based predictive mechanism for available bandwidth," in Proc. IPDPS, 2005, pp. 33a–33a.
[13] Y. Liang, "Real-time VBR video traffic prediction for dynamic bandwidth allocation," IEEE Trans. Syst., Man, Cybern. C, vol. 34, no. 1, pp. 32–47, 2004.
[14] T. T. Nguyen and G. Armitage, "A survey of techniques for internet traffic classification using machine learning," IEEE Commun. Surveys Tuts., vol. 10, no. 4, pp. 56–76, 2008.
[15] J. Joung, "Machine learning-based antenna selection in wireless communications," IEEE Commun. Lett., vol. 20, no. 11, pp. 2241–2244, 2016.
[16] Y. Jabrane, V. P. G. Jiménez, A. G. Armada, B. A. E. Said, and A. A. Ouahman, "Reduction of power envelope fluctuations in OFDM signals by using neural networks," IEEE Commun. Lett., vol. 14, no. 7, pp. 599–601, 2010.
[17] R. L. G. Cavalcante and I. Yamada, "A flexible peak-to-average power ratio reduction scheme for OFDM systems by the adaptive projected subgradient method," IEEE Trans. Signal Process., vol. 57, no. 4, pp. 1456–1468, 2009.
[18] C.-H. Cheng, Y.-H. Huang, and H.-C. Chen, "Channel estimation in OFDM systems using neural network technology combined with a genetic algorithm," Soft Comput., vol. 20, no. 10, pp. 4139–4148, 2016.
[19] M. Sánchez-Fernández, M. de Prado-Cumplido, J. Arenas-García, and F. Pérez-Cruz, "SVM multiregression for nonlinear channel estimation in multiple-input multiple-output systems," IEEE Trans. Signal Process., vol. 52, no. 8, pp. 2298–2307, 2004.
[20] K. M. Thilina, K. W. Choi, N. Saquib, and E. Hossain, "Machine learning techniques for cooperative spectrum sensing in cognitive radio networks," IEEE J. Sel. Areas Commun., vol. 31, no. 11, pp. 2209–2221, 2013.
[21] P. Mertikopoulos and A. L. Moustakas, "Learning in an uncertain world: MIMO covariance matrix optimization with imperfect feedback," IEEE Trans. Signal Process., vol. 64, no. 1, pp. 5–18, 2016.
[22] A. Galindo-Serrano and L. Giupponi, "Distributed Q-learning for aggregated interference control in cognitive radio networks," IEEE Trans. Veh. Technol., vol. 59, no. 4, pp. 1823–1834, 2010.
[23] O. G. Aliu, A. Imran, M. A. Imran, and B. Evans, "A survey of self organisation in future cellular networks," IEEE Commun. Surveys Tuts., vol. 15, no. 1, pp. 336–361, 2013.
[24] C.-F. Tsai, Y.-F. Hsu, C.-Y. Lin, and W.-Y. Lin, "Intrusion detection by machine learning: A review," Expert Syst. Appl., vol. 36, no. 10, pp. 11994–12000, 2009.
[25] T. S. Guzella and W. M. Caminhas, "A review of machine learning approaches to spam filtering," Expert Syst. Appl., vol. 36, no. 7, pp. 10206–10222, 2009.
[26] R. B. Basnet, S. Mukkamala, and A. H. Sung, "Detection of phishing attacks: A machine learning approach," in Soft Computing Applications in Industry, vol. 226, pp. 373–383, 2008.
[27] R. Agrawal and R. Srikant, "Privacy-preserving data mining," in Proc. ACM SIGMOD Int. Conf. Manag. Data, 2000, pp. 439–450.
[28] L. Ciabattoni, G. Ippoliti, A. Benini, S. Longhi, and M. Pirro, "Design of a home energy management system by online neural networks," IFAC Proc. Volumes, vol. 46, no. 11, pp. 677–682, 2013.
[29] C. Perera, A. Zaslavsky, P. Christen, and D. Georgakopoulos, "Context aware computing for the internet of things: A survey," IEEE Commun. Surveys Tuts., vol. 16, no. 1, pp. 414–454, 2014.
[30] J. Xu, M. Zhao, J. Fortes, R. Carpenter, and M. Yousif, "Autonomic resource management in virtualized data centers using fuzzy logic-based approaches," Cluster Comput., vol. 11, no. 3, pp. 213–227, 2008.
[31] M. Egmont-Petersen, D. de Ridder, and H. Handels, "Image processing with neural networks – a review," Pattern Recognit., vol. 35, no. 10, pp. 2279–2301, 2002.
[32] J. Jiang, "Image compression with neural networks – a survey," Signal Process. Image Commun., vol. 14, no. 9, pp. 737–760, 1999.
[33] Y. Chen, X. Yang, B. Zhong, S. Pan, D. Chen, and H. Zhang, "CNNTracker: Online discriminative object tracking via deep convolutional neural network," Appl. Soft Comput., vol. 38, pp. 1088–1098, 2016.
[34] S. Bosse, K.-R. Müller, T. Wiegand, and W. Samek, "Brain-computer interfacing for multimedia quality assessment," in Proc. IEEE Int. Conf. Syst., Man, Cybern. (SMC), 2016, pp. 002834–002839.
[35] S. Han, H. Mao, and W. J. Dally, "Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding," in Proc. ICLR, 2016.
[36] M. Courbariaux, I. Hubara, D. Soudry, R. El-Yaniv, and Y. Bengio, "Binarized neural networks: Training deep neural networks with weights and activations constrained to +1 or -1," arXiv preprint arXiv:1602.02830, 2016.
[37] S. Stańczak, M. Wiczanowski, and H. Boche, Fundamentals of Resource Allocation in Wireless Networks: Theory and Algorithms, vol. 3. Springer, 2009.