Clustering Enabled Classification using Ensemble
Feature Selection for Intrusion Detection
Fadi Salo∗, MohammadNoor Injadat∗, Abdallah Moubayed∗, Ali Bou Nassif†, Aleksander Essex∗
∗Department of Electrical and Computer Engineering, The University of Western Ontario, London, ON, Canada
Email: {fsalo, minjadat, amoubaye, aessex}@uwo.ca
†Department of Electrical and Computer Engineering, University of Sharjah, Sharjah, UAE
Email: anassif@sharjah.ac.ae
Abstract—Machine learning has been leveraged to increase the
effectiveness of intrusion detection systems (IDSs). The focus of
this approach, however, has largely been on detecting known attack
patterns based on outdated datasets. In this paper, we propose
an ensemble feature selection method along with an anomaly
detection method that combines unsupervised and supervised
machine learning techniques to classify network traffic to identify
previously unseen attack patterns. To that end, three different
feature selection techniques are used as part of an ensemble
model that selects 8 common features. Moreover, k-means clustering
is used to first partition the training instances into k clusters
using the Manhattan distance. A classification model is then built
based on the resulting clusters, which represent a density region
of normal or anomaly instances. This in turn helps determine
the effectiveness of the clustering in detecting unknown attack
patterns within the data. The performance of our classifier is
evaluated using the Kyoto dataset, which was collected between
2006 and 2015. To our knowledge, no previous work proposed
such a framework that combines unsupervised and supervised
machine learning approaches using this dataset. Experimental
results show the effectiveness of the proposed framework in
detecting previously unseen attack patterns compared to the
traditional classification approach.
Index Terms—Network anomaly detection, Ensemble feature
selection, k-means clustering, Classification, Kyoto dataset.
I. INTRODUCTION
Two performance indicators often used to evaluate the
effectiveness of intrusion detection systems (IDSs) [1] are
precision and stability [2], [3]. Past research has focused on
rule-based systems and statistical approaches [4]; however, the
performance of such approaches degrades on larger datasets. Data
mining and machine learning approaches have been proposed
as one promising solution to this problem [5].
IDSs are typically classified as either anomaly-based or misuse-based,
with each class having merits and
limitations. Misuse-based methods compare the data against a
predefined set of rules or patterns to detect network attacks.
However, such approaches are not adaptable and are limited in
their ability to detect previously unseen attack types. On the
other hand, anomaly-based methods collect data that represent
normal behavior and build familiarity models, and actions
deviating from the model are labeled as suspicious/anomalous
[6].
Despite the promising improvements achieved by previous
work, intrusion detection is still a challenging problem. This is
exacerbated by the high volume of traffic data, a continuously
evolving environment, and an abundance of available features
[7]. For example, a set of irrelevant, redundant, or highly
correlated features can be found in high dimensional datasets,
which can negatively impact the accuracy and performance of
IDSs. Choosing an appropriate subset of features is essential
to improving the detection model [8].
In this paper, we propose an effective intrusion detection
framework based on ensemble feature selection, clustering,
and supervised machine learning classifiers that can detect
previously unseen attack patterns with high accuracy. Toward
the development of an accurate and robust anomaly detection
methodology, we used several graphical and statistical ex-
ploratory data analytics techniques on our chosen dataset. The
result of our analysis led us to select Support Vector Machine
with Gaussian kernels (SVM-RBF), k-nearest neighbors (k-
NN), Random Forest (RF), and quadratic discriminant analysis
(QDA) due to the non-linear nature of the considered dataset.
The performance of our approach is evaluated and com-
pared to the traditional classification approach by conducting
different experiments with the Kyoto 2006+ dataset that was
built during a 9-year period of network traffic collection
from honeypots at Kyoto University [9]. To our knowledge,
no previous work proposed such a framework that combines
unsupervised and supervised machine learning approaches
using this dataset.
This paper combines the use of homogeneous ensem-
ble feature selection using three feature selection algorithms
(correlation-based, information gain-based, and significance-
based) with k-means clustering and different classification
techniques to improve the IDS's ability to detect and identify
unknown patterns. The feasibility and efficiency of the
considered methods are evaluated using various metrics such as
accuracy, precision, recall, F-measure, and false alarm rate.
The main contributions of this paper include:
•An investigation of the behavior and characteristics
of the Kyoto dataset using a graphical and statistical
exploratory data analytics approach.
•A proposal for a homogeneous ensemble feature selection
approach using three base feature selection algorithms.
•The first study, to our knowledge, of the efficiency and ef-
fectiveness of a clustering-based classification framework
for anomaly detection using the Kyoto 2006+ dataset.
The remainder of this paper is organized as follows. Section
II summarizes some of the related work. Section III gives a
brief overview of considered algorithms. Section IV presents
the proposed framework. Section V discusses the research
methodology and the experimental results. Finally, Section VI
concludes the paper and provides future research directions.
II. RELATED WORK
Researchers have typically treated intrusion detection as a
classification problem. To that end, various supervised ma-
chine learning classifiers such as support vector machines
(SVM) [10], decision trees [11], k-nearest neighbor (k-NN)
[12], and naive Bayes have been proposed; see, e.g., the
review of Tsai et al. [13]. Further promising results were
obtained by proposing novel data mining-based approaches
(cf. Wu and Banzhaf [14]). Recently, hybrid optimization-
based models have been proposed to improve the performance
of intrusion detection systems. For instance, Chung and Wahid
[15] proposed a hybrid approach that includes feature se-
lection and classification with simplified swarm optimization
(SSO). The performance of SSO was further improved by
using weighted local search (WLS) to obtain better solutions
from the neighborhood [15]. The authors reported intrusion
detection accuracy up to 93.3%. Similarly, Kuang et al. [16]
proposed a hybrid method that combined genetic algorithm
(GA) and multi-layered SVM with kernel principal compo-
nent analysis (KPCA) to enhance the detection performance.
Another technique by Zhang et al. [17] combined misuse and
anomaly detection using random forests. In contrast, a novel
particle swarm optimization-based algorithm, Catfish-BPSO,
was proposed in [18] to select features and enhance the model
performance.
III. THEORETICAL ASPECTS OF THE TECHNIQUES
Starting with the clustering, the k-means algorithm was
chosen to group the instances into two clusters. The algorithm
was chosen specifically due to its simplicity, effectiveness
in dealing with network traffic data [19], and flexibility in
offering the choice of the desired number of clusters. In brief,
k-means is an unsupervised machine learning algorithm that
groups instances into k clusters using a particular distance
metric such as Euclidean, Manhattan, or Mahalanobis distance
[20]. This distance metric is used to determine the proximity
of each instance to the cluster centroid. Upon convergence,
the output of the algorithm is the centroid of each of the k
clusters and the cluster label of each instance.
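The k-means step described above can be sketched in Python with NumPy. This is a minimal illustrative sketch, not the authors' MATLAB implementation; in particular, the farthest-point initialization is an assumption made here for determinism, since the paper does not specify how centroids are initialized.

```python
import numpy as np

def kmeans_manhattan(X, k, n_iter=100):
    """Minimal k-means under the Manhattan (L1) distance.

    Returns the cluster label of each instance and the k centroids,
    mirroring the algorithm outputs described in the text.
    """
    X = np.asarray(X, dtype=float)
    # Farthest-point initialization (an assumption; the paper does not
    # state the initialization scheme).
    centroids = [X[0]]
    for _ in range(k - 1):
        d = np.min([np.abs(X - c).sum(axis=1) for c in centroids], axis=0)
        centroids.append(X[np.argmax(d)])
    centroids = np.array(centroids)

    for _ in range(n_iter):
        # Assign each instance to its nearest centroid under L1 distance.
        dists = np.abs(X[:, None, :] - centroids[None, :, :]).sum(axis=2)
        labels = dists.argmin(axis=1)
        # The L1-optimal center of a cluster is its coordinate-wise median.
        new_centroids = np.array([
            np.median(X[labels == j], axis=0) if np.any(labels == j)
            else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids

# Two well-separated groups of points; k = 2 as used later in the framework.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.2, (40, 2)), rng.normal(5, 0.2, (40, 2))])
labels, centroids = kmeans_manhattan(X, 2)
```

With k = 2, the two resulting clusters are intended to correspond to density regions of normal and anomalous traffic.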
On the other hand, four different supervised machine learn-
ing classification algorithms are considered in this work:
support vector machines (SVM), k-nearest neighbors (k-NN),
random forests (RF), and quadratic discriminant analysis
(QDA). We briefly recall some specifics. These algorithms are
chosen due to their ability to deal with non-linear datasets.
SVM is a supervised machine learning classification algorithm
that tries to determine the maximum-separation hyperplane
between the positive and negative classes [21]. The output of
the SVM with Gaussian kernel (SVM-RBF) is [22]:
f(x) = w^T Φ(x) + b                                          (1)

where Φ(x) represents the kernel mapping. The goal is to determine
the weight vector w and the intercept b that minimize the following
objective function:

min_{w,b}  (1/2)||w||^2 + C Σ_{i=1}^{m} [ y_i · cost_1(f(x_i)) + (1 − y_i) · cost_0(f(x_i)) ]   (2)

where C is a regularization parameter that penalizes incorrectly
classified instances and cost_i is the squared error over
the training dataset.
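The objective in (2) is what off-the-shelf SVM solvers minimize. As a hedged sketch (using scikit-learn in Python rather than the authors' MATLAB setup; the toy data and the C and gamma values are illustrative assumptions), an SVM-RBF classifier can be trained as follows:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Toy non-linearly separable data: class 1 inside a disc, class 0 outside.
X = rng.uniform(-1.0, 1.0, size=(400, 2))
y = (np.hypot(X[:, 0], X[:, 1]) < 0.5).astype(int)

# kernel='rbf' selects the Gaussian kernel Φ(x); C is the regularization
# parameter of (2) that penalizes misclassified instances.
clf = SVC(kernel="rbf", C=10.0, gamma="scale")
clf.fit(X, y)
train_acc = clf.score(X, y)
```

A linear kernel would fail on this data, whereas the Gaussian kernel captures the circular boundary, which is why SVM-RBF suits the non-linear dataset considered in this paper.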
k-NN is a simple classification algorithm that determines
the class of an instance based on the majority class of its k
nearest neighboring points. This is done by first evaluating
the distance from the data point to all other points within
the training dataset. Different distance measures can be used,
such as the Euclidean, Manhattan, or Mahalanobis distance
[20]. After determining the distance, the knearest points are
identified and a majority voting-based decision is made on the
class of the considered data point [23].
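The neighbor-voting procedure just described can be sketched as follows (with scikit-learn and the Manhattan metric; the tiny dataset is purely illustrative):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Six labeled training points forming two well-separated groups.
X_train = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]], dtype=float)
y_train = np.array([0, 0, 0, 1, 1, 1])

# k = 3 neighbors under the Manhattan distance; each query point is
# assigned the majority class among its 3 nearest training points.
knn = KNeighborsClassifier(n_neighbors=3, metric="manhattan")
knn.fit(X_train, y_train)
pred = knn.predict(np.array([[0.5, 0.5], [5.5, 5.5]]))  # → array([0, 1])
```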
Random forests are an ensemble learning, decision tree-based
classifier that combines several decision trees to predict the
class [24]. Each tree is grown on an independently and randomly
drawn sample of the data, and the trees' results are combined
using a majority vote. The RF
classifier sends any new incoming data points to each of its
trees and chooses the class that is classified by the most trees.
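This bootstrap-and-vote procedure can be sketched with scikit-learn (a hedged sketch; the synthetic blobs and the choice of 100 trees are assumptions for illustration):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Two Gaussian blobs as stand-ins for normal and attack traffic.
X = np.vstack([rng.normal(0, 1, (200, 4)), rng.normal(4, 1, (200, 4))])
y = np.array([0] * 200 + [1] * 200)

# Each of the 100 trees is grown on a random bootstrap sample of the
# data; the forest's prediction is the majority vote over the trees.
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X, y)
acc = rf.score(X, y)
```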
Finally, discriminant analysis is a statistical technique that
tries to find a group of prediction equations based on inde-
pendent variables [25]. This technique can be used for one
of two objectives, either to determine a prediction equation
that can be used to classify new input points, or to interpret
the equation to get a better understanding of the relationship
that exists between the variables [25]. A quadratic kernel is
one which assumes a quadratic relationship exists between the
independent variables.
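A hedged QDA sketch (scikit-learn again; the two-class toy data with deliberately different covariances is an assumption chosen to make the quadratic boundary visible):

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

rng = np.random.default_rng(0)
# Two classes with different covariance structures, so the optimal
# decision boundary between them is quadratic rather than linear.
X0 = rng.normal(0, 0.5, (200, 2))               # tight cluster at the origin
X1 = rng.normal(0, 3.0, (200, 2))
X1 = X1[np.hypot(X1[:, 0], X1[:, 1]) > 2.0]     # spread-out ring around it
X = np.vstack([X0, X1])
y = np.array([0] * len(X0) + [1] * len(X1))

# QDA fits a Gaussian with its own covariance per class and classifies
# by comparing the resulting quadratic discriminant functions.
qda = QuadraticDiscriminantAnalysis()
qda.fit(X, y)
acc = qda.score(X, y)
```

A linear discriminant could not separate a cluster from a surrounding ring; allowing per-class covariances is precisely what makes the boundary quadratic.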
IV. PROPOSED FRAMEWOR K
This section presents an overview of the clustering-enabled
classification framework for intrusion detection. The goal is
to evaluate the efficiency of the framework in detecting
unseen/unknown patterns. This is done in two ways: first, by
comparing the result of the clustering to the true label of each
instance, and second, by using a separate data sample to act as
the testing dataset of unseen data instances.
The framework is as follows: the first step consists of se-
lecting appropriate features. To that end, this work proposes
an ensemble feature selection technique that combines three
different feature selection methods, namely information gain,
correlation, and significance methods. The features chosen are
the features common among the three methods. Following
feature selection, graphical and statistical data analytics is
applied to get a better understanding of the behavior of the
selected features. Then, the training dataset is clustered using
the k-means algorithm, which we chose due to its simplicity
and effectiveness in dealing with network traffic data [19].
This is followed by building the final classification model
using the selected classification algorithms. The choice of
these algorithms is dependent on the insights gained from the
exploratory data analytics step. The model is then applied to
the testing dataset to evaluate the efficiency of the clustering
in detecting previously unseen patterns. The overall flow of
the proposed framework is illustrated in Fig. 1.
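The steps above can be sketched end to end in Python with scikit-learn (not the authors' MATLAB code). The three rankers used here — absolute Pearson correlation, mutual information as an information-gain stand-in, and the ANOVA F statistic as a significance stand-in — and the top-k cutoff are assumptions, since the paper does not give the exact ranking variants or thresholds; the synthetic data merely stands in for the Kyoto records.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif, f_classif
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

def ensemble_select(X, y, top_k):
    """Keep only the features ranked in the top_k by all three criteria."""
    corr = np.array([abs(np.corrcoef(X[:, j], y)[0, 1])
                     for j in range(X.shape[1])])
    info = mutual_info_classif(X, y, random_state=0)  # information-gain proxy
    sig, _ = f_classif(X, y)                          # significance proxy
    tops = [set(np.argsort(score)[-top_k:]) for score in (corr, info, sig)]
    return sorted(set.intersection(*tops))

# Synthetic stand-in for the dataset: features 0 and 1 carry the class
# signal, the remaining four are pure noise.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 300)
X = rng.normal(size=(300, 6))
X[:, 0] += 3 * y
X[:, 1] -= 3 * y

selected = ensemble_select(X, y, top_k=3)
Xs = X[:, selected]

# Cluster the training instances into k = 2 density regions, then build
# the classifier on the cluster assignments (the clustering-enabled step).
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Xs)
model = KNeighborsClassifier(n_neighbors=3).fit(Xs, clusters)
```

In the actual framework the final model would then be applied to the held-out testing dataset to gauge how well the clustering captures previously unseen patterns.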
Fig. 1. Proposed framework: the dataset is reduced by ensemble feature
selection (information gain, correlation, and significance), the training
portion is clustered with k-means, a classification model (SVM/k-NN/RF/QDA)
is built on the resulting clusters, and the final model is tested to label
traffic as normal or attack.
V. EXPERIMENTAL SETUP AND RESULT
DISCUSSION
A. Dataset Description
We used the Kyoto 2006+ dataset to evaluate our proposed
framework. This dataset was collected from honeypots by the
University of Kyoto over the 9 year period from Nov. 2006
to Dec. 2015. It consists of 1 million records, each containing
24 features [9]. A random subset of approximately 300,000
records was chosen to form our experimental dataset. This
was further divided into training and testing datasets, using a
60/40 split. The training dataset consisted of 178,479 records
with 92,729 normal and 85,750 attack records. The testing
dataset consisted of 118,986 records with 61,765 normal and
57,221 attack records.
B. Experimental setup and Data Preprocessing
The proposed techniques were implemented using MATLAB
2018a. The selected dataset was transformed from its original
format into a new dataset consisting of 8 features, as illustrated
using the Venn diagram shown in Fig. 2.
Fig. 2. Ensemble Feature Selection Output: the correlation-based method
selected 11 features, information gain 11, and significance 20; the 8
features common to all three were retained.
Outlier removal was performed using Inter-Quartile Range
(IQR) to remove any redundant or noisy data points. As most
of the classifiers do not accept categorical features [26], data
mapping was used to transform non-numeric feature values
into numeric ones (the categorical type in MATLAB).
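These two preprocessing steps can be sketched as follows (a NumPy sketch rather than the MATLAB routines used in the paper; the 1.5 × IQR fence factor is the conventional Tukey choice and an assumption here, as the paper does not state the multiplier):

```python
import numpy as np

def iqr_mask(col, factor=1.5):
    """True for values inside the fences [Q1 - factor*IQR, Q3 + factor*IQR]."""
    q1, q3 = np.percentile(col, [25, 75])
    iqr = q3 - q1
    return (col >= q1 - factor * iqr) & (col <= q3 + factor * iqr)

def encode_categorical(col):
    """Data mapping: replace string categories with integer codes."""
    categories, codes = np.unique(col, return_inverse=True)
    return codes, {c: i for i, c in enumerate(categories)}

durations = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 100.0])
keep = iqr_mask(durations)  # the 100.0 outlier falls outside the fences

codes, mapping = encode_categorical(np.array(["http", "telnet", "http"]))
```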
C. Prediction Performance Measures
To evaluate and compare prediction models quantitatively,
the following measurements were utilized:
Accuracy = (TP + TN) / (TP + TN + FP + FN)                     (3)

Precision = TP / (TP + FP)                                     (4)

Recall = TP / (TP + FN)                                        (5)

F-measure = (2 · Precision · Recall) / (Precision + Recall)    (6)

where TP, TN, FP, and FN denote the numbers of true positives,
true negatives, false positives, and false negatives,
respectively [27].
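For concreteness, the measures in (3)–(6) can be computed directly from the confusion-matrix counts. The false alarm rate (FAR) reported in Tables I and II is computed here as FP / (FP + TN), the usual definition; this formula is an assumption, since the paper does not spell it out.

```python
def prediction_measures(tp, tn, fp, fn):
    """Compute the measures in Eqs. (3)-(6) plus the false alarm rate."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)                  # Eq. (3)
    precision = tp / (tp + fp)                                  # Eq. (4)
    recall = tp / (tp + fn)                                     # Eq. (5)
    f_measure = 2 * precision * recall / (precision + recall)   # Eq. (6)
    far = fp / (fp + tn)  # false alarm rate (assumed definition)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f_measure": f_measure, "far": far}

m = prediction_measures(tp=90, tn=80, fp=10, fn=20)
```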
Fig. 3. Probability Density Function of Normal and Attack Traffic
Fig. 4. Number of Same Service to Same Destination ID vs Same Service
Rate
D. Results Discussion
The goal of this work is to evaluate the effectiveness of
the clustering method in detecting unseen/unknown patterns
within the dataset. However, a crucial initial step to better
understand the dataset is to study its behavior to gain insights
into it. To that end, exploratory data analytics is applied
by plotting the probability density function of two features,
namely the same srv rate (same service rate) and the Dst host
srv count (number of connections of the same service type to
the same destination IP) for both normal and attack traffic as
shown in Fig. 3. It can be seen that normal traffic data tends
to have higher service rate and number of same services to
same destination IP. The service rate refers to the percentage
of the connections that have the same service type (e.g., http,
telnet). Attack traffic data, on the other hand, exhibits the exact
opposite trend. These statistical trends give us initial insights
into the behavior of normal and attack traffic that can be
helpful in future prediction.
Fig. 4 plots the two highest ranked numeric features against
each other. It plots the number of same service to the same
destination against the same service rate. It is clear that the
dataset is not linearly separable. This provides further insights
and justifies the choice of using SVM-RBF, k-NN, RF, and
QDA methods as they can handle non-linear data.
Table I shows the performance of the considered classifi-
cation algorithms for both the training and testing datasets
without clustering. These results will be used as a benchmark
to gauge the effectiveness of the clustering algorithm in
detecting previously unseen patterns.
Table II, on the other hand, shows the performance of
the different classification algorithms with clustering. Several
observations can be made. First is that almost all algorithms
TABLE I
PERFORMANCE RESULTS OF THE CONSIDERED CLASSIFIERS WITHOUT CLUSTERING

            |               Training                  |               Testing
Classifier  | Acc(%) Precision Recall  FAR  F-measure | Acc(%) Precision Recall  FAR  F-measure
SVM-RBF     | 96.88  0.982     0.958  0.018  0.970    | 80.03  0.767     0.883  0.288  0.821
k-NN (k=3)  | 98.03  0.985     0.977  0.016  0.981    | 86.95  0.882     0.864  0.124  0.873
k-NN (k=5)  | 97.65  0.982     0.972  0.018  0.977    | 88.33  0.898     0.875  0.107  0.886
RF          | 98.52  0.994     0.978  0.006  0.986    | 57.79  0.911     0.207  0.021  0.337
QDA         | 87.10  0.926     0.817  0.071  0.868    | 87.02  0.936     0.805  0.059  0.866
TABLE II
PERFORMANCE RESULTS OF THE CONSIDERED CLASSIFIERS WITH CLUSTERING

            |               Training                  |               Testing
Classifier  | Acc(%) Precision Recall  FAR  F-measure | Acc(%) Precision Recall  FAR  F-measure
SVM-RBF     | 98.24  0.988     0.984  0.020  0.986    | 76.89  0.716     0.920  0.393  0.805
k-NN (k=3)  | 98.96  0.991     0.991  0.013  0.992    | 82.28  0.783     0.911  0.272  0.842
k-NN (k=5)  | 98.56  0.989     0.988  0.018  0.988    | 81.33  0.768     0.918  0.299  0.836
RF          | 99.98  0.999     0.999  0.001  0.999    | 79.06  0.741     0.916  0.344  0.820
QDA         | 93.49  0.973     0.920  0.041  0.946    | 81.63  0.784     0.892  0.265  0.834
(with the exception of QDA) achieved good training accuracy,
as shown in Table I. However, this did not necessarily translate
into high testing accuracy. This shows that algorithms such as
RF are not well suited to anomaly detection, as RF only achieved
a testing accuracy of approximately 58%. The second
observation is that using clustering to detect anomalies is
effective. This is based on the fact that the testing accuracy
of the classifiers after clustering is close to that of the non-
clustering case. This shows that clustering is able to detect
previously unseen patterns effectively. This is further high-
lighted in Figs. 5 and 6 which show the training and testing
accuracy of the different classification algorithms with and
without clustering. The results in Fig. 5 are expected, since
the model was trained using this dataset and hence will have
a high accuracy. However, the results in Fig. 6 emphasize the
efficiency of the proposed clustering-enabled classification. It
can be seen that the difference in testing accuracy between the
clustering and the non-clustering cases is less than 10% for
most classification algorithms. This means that the clustering
was able to predict previously unseen patterns with a relatively
high accuracy. By comparison, the results reported in [28] only
showed an accuracy between 60% and 77%, whereas the proposed
work was able to achieve higher testing accuracy, with the lowest
being approximately 77%, thus illustrating the effectiveness of
the proposed framework. Moreover, it
can be concluded that k-NN with 3 neighbors has the best
performance given that it achieved high training accuracy and
had the smallest difference in testing accuracy between the
non-clustering and the clustering cases.
VI. CONCLUSIONS
In this paper, an efficient intrusion detection framework
based on homogeneous ensemble feature selection, clustering,
and supervised machine learning classifiers was proposed. This
was done in order to test the efficiency of the clustering
algorithm in detecting previously unseen attack patterns for
intrusion detection.

Fig. 5. Overall Accuracy of Training Dataset (without vs. with clustering)

Fig. 6. Overall Accuracy of Testing Dataset (without vs. with clustering)

The techniques considered were chosen
based on the nature of the selected dataset which was investi-
gated using different graphical and statistical exploratory data
analytics techniques such as the probability density function.
The performance was evaluated and compared by conducting
different experiments with the Kyoto 2006+ dataset that was
built during 9 years of real traffic data collection (between
Nov. 2006 and Dec. 2015) from diverse types of honeypots at
Kyoto University. To the best of our knowledge, no previous
work proposed such a framework using this dataset. To explore
the dataset, a homogeneous ensemble feature selection mech-
anism using three feature selection algorithms (correlation-
based, information gain-based, and significance-based) was
applied to extract 8 representative features. This was followed
by applying different graphical and statistical exploratory data
analytics techniques to better understand the behavior of the
features. The results of this data analysis showed that the
dataset is highly non-linear, which motivated the choice of
the considered supervised classification algorithms. The new
dataset was then clustered using the k-means algorithm, and a
classification model was built using different classification
techniques to improve the IDS’s ability to detect and identify
unknown patterns. Experimental results showed that k-means
clustering was indeed efficient in detecting previously unseen
patterns. This was highlighted by the small difference in
testing accuracy between the clustering and the non-clustering
cases which did not exceed the 10% range for most classifiers.
Furthermore, it was also shown that the k-NN algorithm with
k= 3 had the best performance as it achieved high training
accuracy and had the smallest testing accuracy difference. In
order to further improve the performance of the proposed
approach, we plan to develop an adaptive model that clusters
any new attacks with existing ones. This in turn will provide
a more robust and dynamic intrusion detection system and
improve its security.
REFERENCES
[1] R. Zuech, T. M. Khoshgoftaar, and R. Wald, “Intrusion detection
and big heterogeneous data: a survey,” Journal of Big Data, vol. 2,
no. 1, p. 3, Feb 2015. [Online]. Available: https://doi.org/10.1186/s40537-015-0013-4
[2] L. de Sá Silva, A. C. F. dos Santos, T. D. Mancilha, J. D. S. da Silva,
and A. Montes, “Detecting attack signatures in the real network traffic
with annida.” Elsevier, 2008, vol. 34, no. 4, pp. 2326–2333.
[3] A. Patcha and J.-M. Park, “An overview of anomaly detection tech-
niques: Existing solutions and latest technological trends,” Computer
networks, vol. 51, no. 12, pp. 3448–3470, 2007.
[4] S. Mukkamala, G. Janoski, and A. Sung, “Intrusion detection using
neural networks and support vector machines,” vol. 2, pp. 1702–1707,
2002.
[5] S.-Y. Wu and E. Yen, “Data mining-based intrusion detectors,” Expert
Systems with Applications, vol. 36, no. 3, pp. 5605–5612, 2009.
[6] C. Kruegel, F. Valeur, and G. Vigna, “Intrusion detection and correlation:
challenges and solutions,” vol. 14, 2004.
[7] S. Suthaharan, “Big data classification: Problems and challenges in net-
work intrusion prediction with machine learning,” ACM SIGMETRICS
Performance Evaluation Review, vol. 41, no. 4, pp. 70–73, 2014.
[8] J. Zhang and M. Zulkernine, “Anomaly based network intrusion de-
tection with unsupervised outlier detection,” in Communications, 2006.
ICC’06. IEEE International Conference on, vol. 5. IEEE, 2006, pp.
2388–2393.
[9] M. A. Ambusaidi, X. He, P. Nanda, and Z. Tan, “Building an intrusion
detection system using a filter-based feature selection algorithm,” IEEE
Transactions on Computers, vol. 65, no. 10, pp. 2986–2998, Oct 2016.
[10] A. S. Eesa, Z. Orman, and A. M. A. Brifcani, “A novel feature-selection
approach based on the cuttlefish optimization algorithm for intrusion
detection systems,” Expert Systems with Applications, vol. 42, no. 5,
pp. 2670–2679, 2015.
[11] W. Li, P. Yi, Y. Wu, L. Pan, and J. Li, “A new intrusion detection system
based on knn classification algorithm in wireless sensor network,”
Journal of Electrical and Computer Engineering, vol. 2014, 2014.
[12] S. Aljawarneh, M. Aldwairi, and M. B. Yassein, “Anomaly-based in-
trusion detection system through feature selection analysis and building
hybrid efficient model,” Journal of Computational Science, 2017.
[13] C.-F. Tsai, Y.-F. Hsu, C.-Y. Lin, and W.-Y. Lin, “Intrusion detection by
machine learning: A review,” Expert Systems with Applications, vol. 36,
no. 10, pp. 11 994–12 000, 2009.
[14] S. X. Wu and W. Banzhaf, “The use of computational intelligence in
intrusion detection systems: A review,” Applied soft computing, vol. 10,
no. 1, pp. 1–35, 2010.
[15] Y. Y. Chung and N. Wahid, “A hybrid network intrusion detection system
using simplified swarm optimization (sso),” Applied Soft Computing,
vol. 12, no. 9, pp. 3014–3022, 2012.
[16] F. Kuang, W. Xu, and S. Zhang, “A novel hybrid kpca and svm with
ga model for intrusion detection,” Applied Soft Computing, vol. 18, pp.
178–184, 2014.
[17] J. Zhang, M. Zulkernine, and A. Haque, “Random-forests-based network
intrusion detection systems,” IEEE Transactions on Systems, Man, and
Cybernetics, Part C (Applications and Reviews), vol. 38, no. 5, pp. 649–
659, 2008.
[18] A. J. Malik and F. A. Khan, “A hybrid technique using multi-objective
particle swarm optimization and random forests for probe attacks
detection in a network,” in Systems, Man, and Cybernetics (SMC), 2013
IEEE International Conference on. IEEE, 2013, pp. 2473–2478.
[19] Y. Liu, W. Li, and Y.-C. Li, “Network traffic classification using k-means
clustering,” in Computer and Computational Sciences, 2007. IMSCCS
2007. Second International Multi-Symposiums on. IEEE, 2007, pp.
360–365.
[20] A. Kind, M. P. Stoecklin, and X. Dimitropoulos, “Histogram-based
traffic anomaly detection,” IEEE Transactions on Network and Service
Management, vol. 6, no. 2, pp. 110–121, 2009.
[21] I. S. Thaseen and C. A. Kumar, “Intrusion detection model using fusion
of chi-square feature selection and multi class svm,” Journal of King
Saud University-Computer and Information Sciences, vol. 29, no. 4, pp.
462–472, 2017.
[22] H. Bostani and M. Sheikhan, “Modification of supervised opf-based
intrusion detection systems using unsupervised learning and social
network concept,” Pattern Recognition, vol. 62, pp. 56–72, 2017.
[23] W. Meng, W. Li, and L.-F. Kwok, “Design of intelligent knn-based alarm
filter using knowledge-based alert verification in intrusion detection,”
Security and Communication Networks, vol. 8, no. 18, pp. 3883–3895,
2015.
[24] M. Injadat, F. Salo, and A. B. Nassif, “Data mining techniques in
social media: A survey,” Neurocomputing, vol. 214, pp. 654 – 670,
2016. [Online]. Available: http://www.sciencedirect.com/science/article/
pii/S092523121630683X
[25] NCSS Software, “Chapter 440: Discriminant Analysis,” Available at:
https://ncss-wpengine.netdna-ssl.com/wp-content/themes/ncss/pdf/
Procedures/NCSS/Discriminant Analysis.pdf.
[26] M. Salem and U. Buehler, “Mining techniques in network security to
enhance intrusion detection systems,” arXiv preprint arXiv:1212.2414,
2012.
[27] M. H. Tang, C. Ching, S. Poon, S. S. Chan, W. Ng, M. Lam, C. Wong,
R. Pao, A. Lau, and T. W. Mak, “Evaluation of three rapid oral fluid test
devices on the screening of multiple drugs of abuse including ketamine,”
Forensic science international, 2018.
[28] F. Hosseinpour, P. V. Amoli, F. Farahnakian, J. Plosila, and
T. Hämäläinen, “Artificial immune system based intrusion detection:
innate immunity using an unsupervised learning approach,” International
Journal of Digital Content Technology and its Applications, vol. 8, no. 5,
p. 1, 2014.