Machine Learning to Data Fusion Approach for Cooperative Spectrum Sensing
Ahmed Mohammed Mikaeil, Bin Guo
School of Electronics and Information Engineering
Changchun University of Science and Technology
Changchun, Jilin Province 130022, China
ahmed_mikaeil@yahoo.co.uk,
guobin@cust.edu.cn
Zhijun Wang
Jilin Engineering Research Center of RFID and
Intelligent Information Processing
Changchun Normal University
Changchun, Jilin Province 130032, China
zjwang@jlu.edu.cn
Abstract—Cooperative spectrum sensing has been shown to be an effective method for improving the detection performance of licensed user availability by exploiting spatial diversity. However, cooperation among cognitive radio (CR) users may also introduce a variety of overheads due to the extra sensing time, delay, energy, and operations that limit the achievable cooperative gain. In response, in this paper we propose a machine learning based fusion center algorithm that can provide real-time, per frame training and decision based cooperative spectrum sensing. The new fusion algorithm is based on training a machine learning classifier over a set containing frame energy test statistics along with their corresponding decisions about the presence or absence of the primary user (PU) transmission, so as to predict the decisions for new frames with new energy test statistics. The simulation and numerical results show that the new approach performs the same as the current fusion rules with less sensing time, delay and operations. In this paper we also present a simulation comparison of four supervised machine learning classifiers: K-nearest neighbor (KNN), support vector machine (SVM), Naive Bayes (NB), and Decision Tree (DT) in classifying 1000 testing frames after training these classifiers over a set containing 1000 frames. The results show that the KNN and DT classifiers outperform the other two classifiers in the accuracy of classifying the new frames.

Keywords—cooperative spectrum sensing; data fusion; machine learning classifier; per frame decision sensing
I. INTRODUCTION
Cognitive radio provides a new way to better utilize the spectrum resource by introducing opportunistic usage of frequency bands that are not heavily occupied by primary users [1]. Spectrum sensing is a key function of cognitive radio to prevent harmful interference with primary users and to identify the available spectrum holes [2]. Cooperative spectrum sensing has proven its capability in improving the detection performance of primary user availability under noise uncertainty, signal fading and shadowing effects, and hidden primary user conditions. Cooperative sensing can be
centralized data fusion [3], distributed [4] and relay-assisted
[5]. Centralized data fusion has shown its superiority in
mitigating the fading effects and increasing the probability
of detecting the primary user. In the centralized data fusion,
the fusion center collects all the local sensing results from
the secondary users via a control channel and fuses them
using one of the fusion decision rules, then performs a
binary hypothesis testing algorithm like Neyman-Pearson
test or Bayesian test for making the global sensing decision.
The fusion rules based on the hypothesis testing are studied
in detail in [3, 6, and 7]. Recently, new fusion algorithm
based on per sample training of the machine learning
classifier such as weighted- KNN and SVM instead of
hypothesis testing is introduced in [8, 9]. This study
proposes a novel per frame training fusion algorithm. This
algorithm utilize the current fusion rules for training the
machine learning classifier to provide a real time decision
per frame based cooperative sensing with less sensing time,
delay and operations.
The rest of this paper is organized as follows. In section II,
we define the system model and present the method of
calculating the thresholds for the different fusion rules. In
Section III, we formulate the machine learning classification
problem and present four machine-learning classifiers to
solve it. Simulation results are presented in Section IV.
Finally, conclusions are given in Section V.
II. SYSTEM MODEL
This study considers a cooperative CR network with
cooperative nodes utilizing samples for the energy
detection and for training the machine learning
(ML) classifier, as shown in Fig.1.
Fig. 1. Block diagram of the machine learning classifier based fusion center.
2014 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery
978-1-4799-6236-5/14 $31.00 © 2014 IEEE
DOI 10.1109/CyberC.2014.80
429
The received signal of the $i$th frame, containing $N$ samples at the $j$th cooperative node, $x_{ij}(n)$, $1 \le i \le M$, $1 \le j \le K$, $1 \le n \le N$, is given by:

$$x_{ij}(n) = \begin{cases} u_{ij}(n), & H_0 \\ \sqrt{\gamma_{ij}}\, s_{ij}(n) + u_{ij}(n), & H_1 \end{cases} \qquad (1)$$

where $s_{ij}(n)$ is the primary user's signal, assumed to be a Gaussian i.i.d. random process with zero mean and variance $\sigma_s^2$, and $u_{ij}(n)$ is the noise, assumed to be a Gaussian i.i.d. random process with zero mean and variance $\sigma_u^2$; $s_{ij}(n)$ and $u_{ij}(n)$ are assumed to be independent. Since all the $K$ nodes sense the same frame at the same time and the global decision about the primary user availability is made at the fusion center only, the energy statistic for the $i$th frame at the $j$th cooperative node, $Y_{ij}$, can be represented by the energy test statistic of the $i$th frame at the fusion center as follows:

$$Y_i = \frac{1}{N} \sum_{n=1}^{N} \left| x_{ij}(n) \right|^2, \qquad 1 \le i \le M \qquad (2)$$

where $Y_i$ is a random variable whose probability density function (PDF) is a chi-square distribution with $2N$ degrees of freedom for the complex-valued case, and with $N$ degrees of freedom for the real-valued case. If we assume that the channel remains unchanged during the observation interval and that a sufficient number of samples is observed ($N \ge 200$) [10], then $Y_i$ can be approximated by a Gaussian distribution [11], and the distribution of the power test $Y_i$ for a wideband signal follows:

$$Y_i \sim \begin{cases} \mathcal{N}\!\left(\sigma_{ij}^2,\; 2\sigma_{ij}^4/N\right), & H_0 \\ \mathcal{N}\!\left(\sigma_{ij}^2(1+\gamma_{ij}),\; 2\sigma_{ij}^4(1+\gamma_{ij})^2/N\right), & H_1 \end{cases} \qquad (3)$$

where $\sigma_{ij}^2$ and $\gamma_{ij}$ are the variance of the noise samples $u_{ij}(n)$ and the observed signal-to-noise ratio (SNR) of the $i$th frame sensed at the $j$th cooperative node, respectively. If we assume that the noise variance and the SNR at the node stay the same during the training process for all frames, then $\gamma_{ij} = \gamma_j$ and $\sigma_{ij}^2 = \sigma_j^2$, and for a chosen threshold $\lambda_j$ for each frame in the training set, the probability of false alarm, as given in [12], can be written as:

$$P_f\!\left(\lambda_j\right) = P\!\left(Y_i > \lambda_j \mid H_0\right) = \int_{\lambda_j}^{\infty} \frac{1}{\sqrt{2\pi}\,\sigma_0}\, e^{-\left(y-\sigma_j^2\right)^2 / 2\sigma_0^2}\, dy = Q\!\left(\left(\frac{\lambda_j}{\sigma_j^2} - 1\right)\sqrt{\frac{N}{2}}\right) \qquad (4)$$

with $\sigma_0^2 = 2\sigma_j^4/N$, and the probability of detection $P_d$ is given by:

$$P_d\!\left(\lambda_j\right) = P\!\left(Y_i > \lambda_j \mid H_1\right) = Q\!\left(\left(\frac{\lambda_j}{\sigma_j^2\left(1+\gamma_j\right)} - 1\right)\sqrt{\frac{N}{2}}\right) \qquad (5)$$

where $Q(\cdot)$ is the complementary distribution function of the Gaussian distribution with zero mean and unit variance. In order to obtain a single optimal threshold $\lambda$ for all the cooperative sensing nodes, the data fusion scheme is used. The calculation of the thresholds for the different fusion rules is presented in the following subsections A and B.
A. Single User

We consider a single user as a special case of the hard data fusion scheme where the number of cooperative nodes $K = 1$, $\sigma_j^2 = \sigma_u^2$, $\gamma_j = \gamma_u$. From equation (4), for a given probability of false alarm $P_f$, the threshold can be written as:

$$\lambda_{\mathrm{single}} = \left(\sqrt{\frac{2}{N}}\, Q^{-1}\!\left(P_f\right) + 1\right)\sigma_u^2 \qquad (6)$$

where $Q^{-1}(\cdot)$ is the inverse of the $Q(\cdot)$ function, and the probability of detection $P_{d,\mathrm{single}}$ can be written as:

$$P_{d,\mathrm{single}} = Q\!\left(\left(\frac{\lambda_{\mathrm{single}}}{\sigma_u^2\left(1+\gamma_u\right)} - 1\right)\sqrt{\frac{N}{2}}\right) \qquad (7)$$
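As a concrete check of equations (6) and (7), the sketch below evaluates the single-user threshold and detection probability for the paper's setting ($N = 1000$, $\gamma_u = -22$ dB, $\sigma_u^2 = 1$, $P_f = 0.1$); the function and variable names are ours, and Python's `statistics.NormalDist` is assumed for the Gaussian CDF and its inverse.

```python
import math
from statistics import NormalDist

def q(x):
    """Gaussian tail probability Q(x) = 1 - Phi(x)."""
    return 1.0 - NormalDist().cdf(x)

def q_inv(p):
    """Inverse of the Q function: Q^-1(p) = Phi^-1(1 - p)."""
    return NormalDist().inv_cdf(1.0 - p)

def single_user_threshold(pf, n, noise_var=1.0):
    """Equation (6): lambda_single = (sqrt(2/N) Q^-1(Pf) + 1) sigma_u^2."""
    return (math.sqrt(2.0 / n) * q_inv(pf) + 1.0) * noise_var

def single_user_pd(lam, n, snr, noise_var=1.0):
    """Equation (7): Pd = Q((lambda / (sigma_u^2 (1 + gamma_u)) - 1) sqrt(N/2))."""
    return q((lam / (noise_var * (1.0 + snr)) - 1.0) * math.sqrt(n / 2.0))

# Paper's setting: N = 1000 samples, gamma_u = -22 dB, sigma_u^2 = 1, Pf = 0.1
n, snr = 1000, 10 ** (-22 / 10)
lam = single_user_threshold(pf=0.1, n=n)
pd = single_user_pd(lam, n, snr)
```

Plugging the threshold back into equation (4) recovers the target false alarm probability of 0.1; at this very low SNR the single-user $P_d$ stays close to the false-alarm floor, which motivates the fusion rules that follow.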
B. Data fusion

In a data fusion scheme, $K$ nodes cooperate for calculating the threshold to make the global sensing decision. There are many fusion rules used for calculating the global sensing decision threshold, and they can be divided into: hard fusion rules, including AND, OR, and majority rule; and soft fusion rules, including maximum ratio combining (MRC), equal gain combining (EGC) and square law selection (SLS).

a) AND fusion rule: If $K$ nodes with the same false alarm probability cooperate using the AND rule, the fusion center threshold can be expressed, as given in [3, 6], by:

$$\lambda_{\mathrm{AND}} = \left(\sqrt{\frac{2}{N}}\, Q^{-1}\!\left(P_f^{1/K}\right) + 1\right)\sigma_u^2 \qquad (8)$$

and the detection probability $P_{d,\mathrm{AND}}$ can be written as:

$$P_{d,\mathrm{AND}} = \left(Q\!\left(\left(\frac{\lambda_{\mathrm{AND}}}{\sigma_u^2\left(1+\gamma_u\right)} - 1\right)\sqrt{\frac{N}{2}}\right)\right)^{K} \qquad (9)$$

b) OR fusion rule: The fusion center threshold for the OR fusion rule can be expressed as:

$$\lambda_{\mathrm{OR}} = \left(\sqrt{\frac{2}{N}}\, Q^{-1}\!\left(1-\left(1-P_f\right)^{1/K}\right) + 1\right)\sigma_u^2 \qquad (10)$$

and the detection probability $P_{d,\mathrm{OR}}$ is:

$$P_{d,\mathrm{OR}} = 1 - \left(1 - Q\!\left(\left(\frac{\lambda_{\mathrm{OR}}}{\sigma_u^2\left(1+\gamma_u\right)} - 1\right)\sqrt{\frac{N}{2}}\right)\right)^{K} \qquad (11)$$

c) Maximum ratio combination (optimal MRC): In a soft combination, the cooperative nodes with noise variances $\{\sigma_1^2, \sigma_2^2, \ldots, \sigma_K^2\}$ and instantaneous SNRs $\{\gamma_1, \gamma_2, \ldots, \gamma_K\}$ send their $i$th frame energy test statistics $Y_{ij} = \frac{1}{N}\sum_{n=1}^{N} \left|x_{ij}(n)\right|^2$, $1 \le j \le K$, to the fusion center. After receiving these energy statistics, the fusion center weights and adds them together as follows:

$$Y_i = \sum_{j=1}^{K} w_j Y_{ij}, \qquad 1 \le i \le M \qquad (12)$$

Under the assumption that the noise variances and the SNRs at the nodes remain unchanged for all the frames during the training process, namely $\gamma_{ij} = \gamma_j$, $\sigma_{ij}^2 = \sigma_j^2$, the fusion threshold for the MRC fusion rule can be written, as given in [6]:

$$\lambda_{\mathrm{MRC}} = \sqrt{\frac{2}{N}}\left(\sum_{j=1}^{K} w_j \sigma_j^2\right) Q^{-1}\!\left[P_f\right] + \sum_{j=1}^{K} w_j \sigma_j^2 \qquad (13)$$

and the detection probability $P_{d,\mathrm{MRC}}$ is given by:

$$P_{d,\mathrm{MRC}} = Q\!\left(\left(\frac{\lambda_{\mathrm{MRC}}}{\sum_{j=1}^{K} \left(1+\gamma_j\right) w_j \sigma_j^2} - 1\right)\sqrt{\frac{N}{2}}\right) \qquad (14)$$

where the weighting coefficient vector $w = \{w_1, w_2, \ldots, w_K\}$ can be obtained by:

$$w = H_1^{-1/2}\left[H_1^{-1/2} g\right] \Big/ \left\| H_1^{-1/2}\left[H_1^{-1/2} g\right] \right\|$$

where

$$H_1 = 2\,\mathrm{diag}\!\left(\sigma_1^4\left(1+\gamma_1\right)^2, \ldots, \sigma_K^4\left(1+\gamma_K\right)^2\right)/N$$

$$g = \left[\sigma_1^2\gamma_1, \sigma_2^2\gamma_2, \sigma_3^2\gamma_3, \sigma_4^2\gamma_4, \ldots, \sigma_K^2\gamma_K\right]$$

d) Equal gain combination (EGC): In the equal gain combination, the received energies are equally weighted and then added together. The calculation of the threshold $\lambda_{\mathrm{EGC}}$ and the detection probability $P_{d,\mathrm{EGC}}$ follow equations (13) and (14), respectively, with the weighting vector $\{w_1, \ldots, w_K\}$ where $w_1 = w_2 = w_3 = \cdots = w_K = 1/\sqrt{K}$ [7].

e) Square law selection (SLS): Here, the fusion center selects the node with the highest SNR, $\gamma_{\mathrm{SLS}} = \max\!\left(\gamma_1, \gamma_2, \ldots, \gamma_K\right)$ [3], and considers the noise variance $\sigma_{\mathrm{SLS}}^2$ associated with that node. The fusion center threshold is then calculated as follows:

$$\lambda_{\mathrm{SLS}} = \left(\sqrt{\frac{2}{N}}\, Q^{-1}\!\left(1-\left(1-P_f\right)^{1/K}\right) + 1\right)\sigma_{\mathrm{SLS}}^2 \qquad (15)$$

and the detection probability $P_{d,\mathrm{SLS}}$ is:

$$P_{d,\mathrm{SLS}} = 1 - \left(1 - Q\!\left(\left(\frac{\lambda_{\mathrm{SLS}}}{\sigma_{\mathrm{SLS}}^2\left(1+\gamma_{\mathrm{SLS}}\right)} - 1\right)\sqrt{\frac{N}{2}}\right)\right)^{K} \qquad (16)$$
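To make the relationship between the hard-fusion thresholds concrete, this sketch evaluates equations (6), (8) and (10) for the paper's $K = 7$, $P_f = 0.1$ setting (names are ours; `statistics.NormalDist` supplies $Q^{-1}$):

```python
import math
from statistics import NormalDist

def q_inv(p):
    """Inverse Q function: Q^-1(p) = Phi^-1(1 - p)."""
    return NormalDist().inv_cdf(1.0 - p)

def threshold(pf_node, n, noise_var=1.0):
    """Shared form of equations (6), (8) and (10):
    (sqrt(2/N) * Q^-1(per-node Pf) + 1) * sigma_u^2."""
    return (math.sqrt(2.0 / n) * q_inv(pf_node) + 1.0) * noise_var

n, k, pf = 1000, 7, 0.1
lam_single = threshold(pf, n)                         # equation (6)
lam_and = threshold(pf ** (1.0 / k), n)               # equation (8): per-node Pf^(1/K)
lam_or = threshold(1.0 - (1.0 - pf) ** (1.0 / k), n)  # equation (10)
```

Numerically, `lam_and < lam_single < lam_or`: under the OR rule a single node can trigger a global detection, so each node must be more conservative than a lone detector, while the AND rule can afford a lower per-node threshold.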
III. CLASSIFICATION PROBLEM FORMULATION

The $i$th frame energy statistic $Y_i$, computed from equation (2) for a hard fusion rule or from equation (12) for a soft fusion rule, is compared to the hard or soft fusion rule threshold to calculate the decision $d_i$ associated with the $i$th frame in the training data set, which contains $M$ training frames, as follows:

$$d_i = \begin{cases} 1, & Y_i \ge \lambda \\ -1, & Y_i < \lambda \end{cases}, \qquad 1 \le i \le M \qquad (17)$$

$\lambda \in \left(\lambda_{\mathrm{single}}, \lambda_{\mathrm{AND}}, \lambda_{\mathrm{OR}}, \lambda_{\mathrm{MRC}}, \lambda_{\mathrm{EGC}}, \lambda_{\mathrm{SLS}}\right)$, where "$-1$" represents the absence of the primary user and "$1$" represents the presence of the primary user transmission on the frame. The output of equation (17) gives a set of pairs $(Y_i, d_i)$, $i = 1, 2, \ldots, M$, $d_i \in (-1, 1)$, representing the frame energy statistics and their corresponding decisions. Suppose that we would like to predict the decision, "the class label", $d_x$ associated with a new frame energy statistic $Y_x$; this formulates a classification problem. We can solve this classification problem using one of the following machine learning classifiers.
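The labelling step of equation (17) can be sketched as follows: simulated frame energies (equation (2)) are compared against a chosen fusion threshold to produce the training pairs $(Y_i, d_i)$. The frame simulator, the 50% PU-activity assumption, and the illustrative SNR (much higher than the paper's $-22$ dB, so the two classes are visibly separated) are ours.

```python
import random

random.seed(1)

def frame_energy(n, noise_var, snr, pu_present):
    """Equation (2): Y = (1/N) sum |x(n)|^2 for one simulated frame.
    Under H1 the sample is sqrt(gamma)*s(n) + u(n), giving variance
    (1 + gamma) * sigma_u^2, consistent with equation (3)."""
    total = 0.0
    for _ in range(n):
        sample = random.gauss(0.0, noise_var ** 0.5)
        if pu_present:
            sample += random.gauss(0.0, (snr * noise_var) ** 0.5)
        total += sample * sample
    return total / n

def build_training_set(m, n, noise_var, snr, lam):
    """Equation (17): d_i = 1 if Y_i >= lambda, else -1."""
    pairs = []
    for _ in range(m):
        pu_present = random.random() < 0.5  # PU active on about half the frames
        y = frame_energy(n, noise_var, snr, pu_present)
        pairs.append((y, 1 if y >= lam else -1))
    return pairs

# Illustrative values (ours): M = 1000 frames, N = 1000 samples, SNR = 0.05
training = build_training_set(m=1000, n=1000, noise_var=1.0, snr=0.05, lam=1.057)
```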
A. KNN classifier

For the K-nearest neighbors classifier, the $k$ nearest points to $Y_x$ are used to predict the class label $d_x$ corresponding to $Y_x$ [13]. For $k = 1$, the Euclidean distance $d_{st}$ between $Y_x$ and the training data points can be calculated as follows:

$$d_{st}(i) = \sqrt{\left(Y_x - Y_i\right)^2} = \left|Y_x - Y_i\right|, \qquad i = 1, 2, \ldots, M \qquad (18)$$

Then the new $Y_x$ is classified with the label $d_x = d_{\min}$, where $d_{\min}$ is the label of the training point that has the minimum Euclidean distance $d_{st}$ to $Y_x$.
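A minimal sketch of the rule of equation (18), generalized to a $k$-point majority vote; the toy training pairs are ours, with low energies labelled $-1$ and high energies $+1$:

```python
def knn_predict(training, y_x, k=1):
    """Equation (18): rank training pairs (Y_i, d_i) by |Y_x - Y_i|
    and take the majority label among the k nearest."""
    nearest = sorted(training, key=lambda pair: abs(y_x - pair[0]))[:k]
    return 1 if sum(d for _, d in nearest) >= 0 else -1

# Toy training pairs (Y_i, d_i): noise-only frames near sigma_u^2 = 1, PU frames above
training = [(0.95, -1), (0.98, -1), (1.01, -1), (1.08, 1), (1.11, 1), (1.14, 1)]
pred_low = knn_predict(training, 0.96)   # nearest neighbour is (0.95, -1)
pred_high = knn_predict(training, 1.12)  # nearest neighbour is (1.11, +1)
```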
B. Naïve Bayes classifier

For the Naïve Bayes classifier, under the assumption that the classes $d_i = -1$ and $d_i = 1$ are independent, the prior probabilities of $d_i = -1$ and $d_i = 1$ are calculated from the given training examples $(Y_i, d_i)$, $i = 1, 2, \ldots, M$, and the class-conditional densities, "likelihood probabilities", are estimated from the set $[Y_1, Y_2, \ldots, Y_k]$ the new $Y_x$ is likely to fall into; then the probability that the new $Y_x$ becomes a member of either the $d_i = -1$ or the $d_i = 1$ class, "posterior probability", is calculated using the Naïve Bayes assumption and Bayes' rule [14] as follows:

$$d\!\left(Y_x\right) = \arg\max_{d_i} P\!\left(d_i\right) \prod_{j=1}^{k} P\!\left(Y_j \mid d_i\right) \qquad (19)$$

where the prior probabilities are given by:

$$P\!\left(d_i = -1\right) = \frac{\text{number of } Y \text{ with class label } -1}{\text{total number of class labels}}, \qquad P\!\left(d_i = 1\right) = \frac{\text{number of } Y \text{ with class label } 1}{\text{total number of class labels}}$$

and the class-conditional densities, "likelihood probabilities", can be estimated using the Gaussian density function by:

$$P\!\left(Y_j \mid d_i\right) = \frac{1}{\sigma_j\sqrt{2\pi}}\, e^{-\left(Y_j - \mu_j\right)^2 / 2\sigma_j^2}, \qquad Y_1 < Y < Y_k,\ \sigma_j > 0$$

where $\mu_j$ and $\sigma_j$ are the mean and the standard deviation of the set $[Y_1, Y_2, \ldots, Y_k]$. Equation (19) means that the Naïve Bayes classifier will give the new $Y_x$ the class label $d_i$ that achieves the highest posterior probability.
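Equation (19) with the Gaussian likelihood above can be sketched as follows; the per-class mean and standard deviation are estimated from the training pairs, and the toy data values are ours:

```python
import math

def gaussian_pdf(y, mu, sigma):
    """Gaussian density used as the class-conditional likelihood."""
    return math.exp(-(y - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def nb_predict(training, y_x):
    """Equation (19): pick the class maximizing prior * Gaussian likelihood."""
    best_label, best_posterior = None, -1.0
    for label in (-1, 1):
        ys = [y for y, d in training if d == label]
        prior = len(ys) / len(training)
        mu = sum(ys) / len(ys)
        var = sum((y - mu) ** 2 for y in ys) / len(ys)
        sigma = math.sqrt(var) if var > 0 else 1e-9  # guard against a degenerate class
        posterior = prior * gaussian_pdf(y_x, mu, sigma)
        if posterior > best_posterior:
            best_label, best_posterior = label, posterior
    return best_label

# Toy training pairs (Y_i, d_i), as before
training = [(0.95, -1), (0.98, -1), (1.01, -1), (1.08, 1), (1.11, 1), (1.14, 1)]
pred_low = nb_predict(training, 0.97)
pred_high = nb_predict(training, 1.12)
```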
C. Support Vector Machine

As for the support vector machine classifier, for a given training set of pairs $(Y_i, d_i)$, $i = 1, 2, \ldots, M$, where $Y_i \in \mathbb{R}$ and $d_i \in (1, -1)$, the minimum weight $w$ and constant $b$ that maximize the margin between the positive and negative classes, $d_i = \pm 1$, with respect to the hyperplane equation $w^{T} Y + b = 0$ can be calculated by solving the following optimization [9]:

$$\min_{w,b} \left(\frac{\|w\|^2}{2}\right), \qquad \|w\|^2 = w^{T} w \qquad (20)$$

subject to $d_i\!\left(w^{T} Y_i + b\right) \ge 1$, $i = 1, 2, \ldots, M$.

The solution to this quadratic optimization problem using the Lagrangian function can be expressed as:

$$L\!\left(w, b, \alpha\right) = \frac{\|w\|^2}{2} - \sum_{i=1}^{M} \alpha_i \left(d_i\!\left(w^{T} Y_i + b\right) - 1\right), \qquad \alpha_i \ge 0 \qquad (21)$$

where $\alpha = \left(\alpha_1, \alpha_2, \ldots, \alpha_M\right)$ are the Lagrange multipliers. By setting $\nabla L\!\left(w, b, \alpha\right) = 0$, we get $w = \sum_{i=1}^{M} \alpha_i d_i Y_i$ and $\sum_{i=1}^{M} \alpha_i d_i = 0$; then, by substituting these into equation (21), the dual optimization problem that defines the hyperplane can be written as:

$$\min_{\alpha} \left(\frac{1}{2} \sum_{i=1}^{M} \sum_{j=1}^{M} d_i d_j \left(Y_i \cdot Y_j\right) \alpha_i \alpha_j - \sum_{j=1}^{M} \alpha_j\right), \qquad \alpha_j \ge 0 \qquad (22)$$
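For the scalar energy statistics used in this paper, the dual problem (22) does not need a numerical QP solver: when the 1-D data are linearly separable, exactly two support vectors are active, namely the closest opposite-class pair, and the margin constraints $d_i\!\left(w Y_i + b\right) = 1$ at those two points give $w$ and $b$ in closed form. A sketch under that assumption (toy values ours):

```python
def svm_1d_hard_margin(training):
    """Closed-form hard-margin SVM for separable 1-D data.
    Support vectors: largest negative-class Y and smallest positive-class Y.
    Solving w*y_pos + b = 1 and w*y_neg + b = -1 gives w and b.
    (Both dual multipliers equal alpha = 2 / (y_pos - y_neg)**2,
    consistent with w = sum(alpha_i * d_i * Y_i).)"""
    y_neg = max(y for y, d in training if d == -1)
    y_pos = min(y for y, d in training if d == 1)
    w = 2.0 / (y_pos - y_neg)
    b = -w * (y_pos + y_neg) / 2.0
    return w, b

# Toy training pairs (Y_i, d_i), as before
training = [(0.95, -1), (0.98, -1), (1.01, -1), (1.08, 1), (1.11, 1), (1.14, 1)]
w, b = svm_1d_hard_margin(training)

def svm_predict(y_x):
    """Sign of the decision function w*Y_x + b."""
    return 1 if w * y_x + b >= 0 else -1
```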
The solution of (22) gives $w = \sum_{i=1}^{M} \alpha_i d_i Y_i$ and $b = d_j - \sum_{i=1}^{M} \alpha_i d_i \left(Y_i \cdot Y_j\right)$. The new $Y_x$ is then classified by:

$$f\!\left(Y_x\right) = \mathrm{sgn}\!\left(\sum_{j=1}^{M} \alpha_j d_j \left(Y_j \cdot Y_x\right) + b\right), \qquad \alpha_i \ge 0 \qquad (23)$$

which means that the classification of the new $Y_x$ can be expressed in terms of the dot product of $Y_x$ and the support vectors.

D. Decision Tree classifier (DT)

The decision tree classifier, for the training set of pairs $(Y_i, d_i)$, $i = 1, 2, \ldots, M$, $d_i \in (-1, 1)$, creates a binary tree based on either an impurity or a node error splitting rule to divide the training set into disjoint subsets, repeating the splitting recursively for each subset until the leaf becomes "pure", and then minimizes the error in each leaf by taking the majority vote of the training set in that leaf [15]. To classify the new example $Y_x$, the decision tree classifier chooses the leaf that the new $Y_x$ falls in, and then classifies the new $Y_x$ with the class label that occurs most frequently in that leaf.

IV. SIMULATION RESULTS

The first simulation, in Fig. 2, was run to generate the receiver operating characteristic (ROC) curves for different fusion rules, including single user, soft and hard fusion rules, under the AWGN (Additive White Gaussian Noise) channel model. The simulation assumes a cognitive radio system with $K = 7$ cooperative nodes operating at SNR $\gamma_u = -22$ dB, with the local node decisions made after observing $N = 1000$ samples for the energy detection. For the soft fusion rules, the SNRs $\gamma_j$ for the nodes are assumed to be $\{-24.3, -21.8, -20.6, -21.6, -20.4, -22.2, -21.3\}$ dB, the noise variances $\sigma_j^2$ are $\{1, 1, 1, 1, 1, 1, 1\}$, and the false alarm probability for the node is varied from 0 to 1, increasing by 0.025. The simulation results show that the soft EGC and optimal MRC fusion rules perform better than the other soft and hard fusion rules, even though the soft EGC fusion rule does not need any channel state information from the nodes.

Fig. 2. ROC curves for the soft and hard fusion rules under the case of AWGN receiver noise, $\sigma_u^2 = 1$, $\gamma_u = -22$ dB, $K = 7$ users and energy detection over $N = 1000$ samples.

Fig. 3 shows the performance of the SVM classifier in classifying 1000 frames after training it over a set containing $M = 1000$ frames. The single user, AND, OR, MRC, SLS and EGC fusion rule thresholds used for training the SVM classifier are obtained numerically by considering the same cognitive system generated in Fig. 2, but with the false alarm probability $P_f = 0.1$. From Fig. 3 and Table 1, we observe that training the SVM classifier with any one of the following thresholds: single user, OR, MRC, SLS or EGC achieves a 100% detection rate of the positive classes, "the spectrum hole", and training with the EGC threshold can provide 90% precision in classifying the positive classes, "10% harmful interference", whereas the SVM trained with the AND threshold can 100% precisely classify only 97.8% of the positive classes. Table 1 shows the accuracy, "proportion of all true classifications over all testing examples", the precision, "proportion of true positive classes over all positive classes", and the recall, "effectiveness of the classifier in identifying positive classes", for the SVM classifier in classifying the 1000 testing frames.

Table 1. Accuracy, precision and recall of the SVM classifier.

Threshold   Single User  AND Rule  OR Rule  MRC Rule  SLS Rule  EGC Rule
Accuracy    9?.1%        98.3%     98.1%    9?.?%     98.9%     98.0%
Precision   77.7%        100%      53.7%    89.4%     74.4%     90.1%
Recall      100%         97.8%     100%     100%      100%      100%

Fig. 3. ROC curves showing the performance of the SVM classifier in predicting the decisions for 1000 new frames after training it over a set containing 1000 frames, when the single user, AND, OR, MRC, SLS, and EGC thresholds are used for the training process.

Fig. 4 shows ROC curves comparing four supervised machine learning classifiers, including K-nearest neighbor (KNN), support vector machine (SVM), Naive Bayes, and Decision Tree, used in classifying 1000 frames after training them over a set containing 1000 frames as the training examples, using the single user threshold. The same system used to generate the simulation of Fig. 3 is considered here for computing the single user threshold. Both Fig. 4 and Table 2 indicate that the KNN and decision tree classifiers outperform the Naïve Bayes classifier and the SVM in the accuracy of classifying the new frames and in the detection rate of the positive classes, "the spectrum holes".

Table 2. Accuracy, precision and recall of the KNN, SVM, NB and DT classifiers used in classifying 1000 new frames after being trained with 1000 frames.

Classifier     Accuracy  Precision  Recall
KNN            100%      100%       100%
Decision Tree  100%      100%       100%
Naïve Bayes    98.9%     100%       91.2%
SVM            97.6%     83.9%      100%

Fig. 4. ROC curves showing a comparison of four machine learning classifiers: KNN, SVM, Naive Bayes, and Decision Tree in classifying 1000 frames after training them over a set with 1000 frames using the single user scheme threshold.

Table 3 shows the accuracy, precision and recall of the decision tree and KNN classifiers used in classifying 3000 frames after training over a set containing 1000 training examples for the same cognitive system used to generate Fig. 3. The single user threshold is used for training the classifiers, and the simulation was run with different numbers of samples used for the energy detection process. Table 3 shows that the decision tree and KNN classifiers can classify all the 3000 frames correctly, "achieve a 100% detection rate", using only 200 samples for the energy detection process. Since the sensing time is proportional to the number of samples taken by the energy detector, the fewer the samples used for energy detection, the shorter the sensing time will be. In other words, when we use decision tree or KNN based fusion, we can reduce the sensing time from 200 μs to 40 μs for a 5 MHz bandwidth channel, as an example, and still achieve a 100% detection rate of the spectrum hole.

Table 3. Accuracy, precision and recall for both the decision tree and KNN classifiers used in classifying 3000 frames for different numbers of samples used for energy detection.

Number of Samples  Accuracy  Precision  Recall
200                100%      100%       100%
400                100%      100%       100%
600                100%      100%       100%
800                100%      100%       100%
1000               100%      100%       100%

V. CONCLUSION

In this paper, we have discussed per frame training and decision based cooperative spectrum sensing based on applying a machine learning classifier at the data fusion center. The simulation and numerical results have shown that the machine learning classifier based fusion algorithm performs the same as the current fusion rules in the accuracy of sensing, with less sensing time, delay and operations. In addition, we have presented a simulation comparison of four supervised machine learning classifiers, K-nearest neighbor (KNN), support vector machine (SVM), Naive Bayes, and Decision Tree, in classifying new frames after training them over a training set. The simulation results have shown that the KNN and DT classifiers outperform the other two classifiers in the accuracy of classifying the new frames and in the detection rate of the positive classes, or the spectrum holes. Finally, it has also been shown that decision tree and KNN based fusion can 100% correctly detect the PU frame availability even with a small number of samples used for the energy detection process.

ACKNOWLEDGMENT

This work was supported by the Natural Science Foundation of Jilin Province, China, under Grant No. 201215133, and the Sci-tech Development Project of Jilin Province of China under Grant No. 20130521015JH.

REFERENCES

[1] S. Haykin, "Cognitive radio: brain-empowered wireless communications," IEEE Journal on Selected Areas in Communications, vol. 23, no. 2, pp. 201-220, Feb. 2005.
[2] Akyildiz, Ian F., Brandon F. Lo, and Ravikumar Balakrishnan. "Cooperative spectrum sensing in cognitive radio networks: A survey." Physical Communication 4.1 (2011): 40-62.
[3] Teguig, D., B. Scheers, and V. Le Nir. "Data fusion schemes for cooperative spectrum sensing in cognitive radio networks." Communications and Information Systems Conference (MCC), 2012 Military. IEEE, 2012.
[4] Z. Li, F. R. Yu, and M. Huang, “Distributed spectrum sensing in
cognitive radio networks,” in Proc. IEEE WCNC’09, (Budapest,
Hungary),Apr. 2009.
[5] Zhang, Wei, and Khaled Ben Letaief. "Cooperative spectrum sensing
with transmit and relay diversity in cognitive radio networks."IEEE
Trans. Wireless Communication 7.12 (2008).
[6] Kyperountas, Spyros, Neiyer Correal, and Qicai Shi. "A comparison
of fusion rules for cooperative spectrum sensing in fading channels."
EMS Research, Motorola (2010).
[7] Qin, Qin, Zeng Zhi min, and Guo Caili. "A study of data fusion and
decision algorithms based on cooperative spectrum sensing."Fuzzy
Systems and Knowledge Discovery, 2009. FSKD'09. Sixth
International Conference on. Vol. 1. IEEE, 2009.
[8] Thilina, Karaputugala Madushan, et al. "Machine Learning
Techniques for Cooperative Spectrum Sensing in Cognitive Radio
Networks."Selected Areas in Communications, IEEE Journal on
31.11 (2013): 2209-2221.
[9] Thilina, Karaputugala Madushan, et al. "Pattern classification
techniques for cooperative spectrum sensing in cognitive radio
networks: SVM and W-KNN approaches." Global Communications
Conference (GLOBECOM), 2012 IEEE. IEEE, 2012.
[10] Kieu-Xuan, Thuc, and Insoo Koo. "A Cooperative Spectrum
Sensing Scheme Using Fuzzy Logic for Cognitive Radio Networks."
KSII Transactions on Internet & Information Systems 4.3 (2010).
[11] Liang, Ying-Chang, et al. "Sensing-throughput tradeoff for cognitive
radio networks."Wireless Communications, IEEE Transactions on 7.4
(2008): 1326-1337.
[12] Peh, Edward, and Ying-Chang Liang. "Optimization for cooperative
sensing in cognitive radio networks." Wireless Communications and
Networking Conference, 2007. WCNC 2007. IEEE. IEEE, 2007.
[13] Bermejo, Sergio, and Joan Cabestany. "Adaptive soft k-nearest
neighbor classifiers." Pattern Recognition 33.12 (2000): 1999-2005.
[14] Metsis, Vangelis, Ion Androutsopoulos, and Georgios Paliouras.
"Spam filtering with naive bayes-which naive bayes?."CEAS. 2006.
[15] Barros, Rodrigo Coelho, et al. "A survey of evolutionary algorithms
for decision-tree induction."Systems, Man, and Cybernetics, Part C:
Applications and Reviews, IEEE Transactions on 42.3 (2012): 291-312.