
Abstract

Technology has become an integral part of human life, especially with the growth of the Internet of Things (IoT), which enables communication and interaction with a wide variety of devices. However, IoT has repeatedly proven vulnerable to security breaches. It is therefore necessary to develop robust solutions, either by creating new technologies or by combining existing ones, to address these security issues. Deep learning, a branch of machine learning, has shown promising results in previous studies on the detection of security breaches. Additionally, IoT devices generate data of large volume, variety, and veracity. Thus, when big data technologies are incorporated, higher performance and better data handling can be achieved. Hence, we have conducted a comprehensive survey of state-of-the-art deep learning, IoT security, and big data technologies. We also present a comparative analysis of these three domains and discuss the relationships among them, and we derive a thematic taxonomy from the comparative analysis of technical studies in the three domains. Finally, we identify and discuss the challenges of incorporating deep learning for IoT security using big data technologies, and we provide directions for future research on IoT security.
... Password guessing, brute-force attacks, stolen verification attacks, and other security threats target big data storage systems. Current security measures, which advocate encrypting data before transmitting it, do not adequately protect the confidentiality of users and data owners [19]. As Figure 2 clearly portrays, SecPri-BGMPOP, a scheme for privacy assurance in big data healthcare systems [20], is proposed as a multi-step solution to the numerous issues associated with privacy and security protection. ...
... The main reformation in the recommended paradigm is the position update. Equation (19) illustrates the position update in our Hybrid Fragment Horde Bland Lobo Optimization model, where v⃗ denotes the velocity used to update the position in PSO, as indicated in Equations (18) and (24): ...
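The snippet above refers to velocity and position updates without reproducing them. For context, a sketch of the canonical PSO update rules that such hybrid models typically build on (these are the standard textbook equations, not necessarily the paper's Eqs. (18), (19), and (24); symbols follow the usual convention):

```latex
\vec{v}_i^{\,t+1} = w\,\vec{v}_i^{\,t}
  + c_1 r_1 \left(\vec{p}_i - \vec{x}_i^{\,t}\right)
  + c_2 r_2 \left(\vec{g} - \vec{x}_i^{\,t}\right),
\qquad
\vec{x}_i^{\,t+1} = \vec{x}_i^{\,t} + \vec{v}_i^{\,t+1}
```

Here $w$ is the inertia weight, $c_1, c_2$ are acceleration coefficients, $r_1, r_2 \sim \mathcal{U}(0,1)$, $\vec{p}_i$ is particle $i$'s personal best, and $\vec{g}$ is the global best.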
Article
Full-text available
One of the fastest-growing industries is healthcare, and its enormous amount of data requires extensive cloud storage. The cloud may offer some protection, but there is no assurance that data owners can rely on it for security and privacy services. It is therefore essential to offer security and privacy protection. However, maintaining privacy and security in an untrusted green cloud environment is difficult, so the data owner should have complete control of the data. A new scheme, SecPri-BGMPOP (Security and Privacy of BoostGraph Convolutional Network-Pinpointing-Optimization Performance), is suggested that offers a solution involving several steps to handle the numerous problems of security and privacy protection. First, the Boost Graph Convolutional Network Clustering (BGCNC) algorithm, which reduces computational complexity in terms of time and memory, is applied to the input dataset to begin the clustering process. Second, a secure key is generated by employing a piece of the magnifying bit string; pinpointing-based encryption avoids amplifying leakage even if a rival or attacker decrypts the key or the asymmetric encryption. Finally, to determine the accuracy of the method, an optimal key is created using a meta-heuristic algorithmic framework called Hybrid Fragment Horde Bland Lobo Optimisation (HFHBLO). Our proposed method is currently kept in a cloud environment, allowing analytics users to utilise it without risking their privacy or security.
... Big data techniques are the most suitable for processing massive, poorly structured data (Oussous et al., 2018). On the other hand, the security of data captured by IoT devices must be established through new protection techniques, since these are non-traditional architectures (Amanullah et al., 2020; Khan & Salah, 2018; Mohamad Noor & Hassan, 2019; Sidhu et al., 2019). Finally, the information collected in the datasets can be used to identify diseases and anomalies in crops, further broadening the scope of precision agriculture (Bhatia et al., 2021). ...
Article
Full-text available
The agricultural sector is among the most benefited by the Internet of Things (IoT) expansion since it allows collecting and managing a large amount of data about a crop's environment. This article presents the results of designing and testing a prototype monitoring system for agricultural variables, which can be stored and consulted in the cloud. Low-cost sensors and processing systems were easily adaptable to agricultural environments. The results confirmed the viability of implementing this system without resorting to complex technological and computational tools.
... Loi et al. (12) focused on consumer IoT devices. Meanwhile, IoT application and architectural security using blockchain has been studied by Fernández-Caramés et al. (13) and Lao et al. (18). Hassija et al. (14) discussed IoT application security, while Berkay et al. (15) and Tabrizi and Pattabiraman (16) examined IoT security from the standpoint of the coding environment and the code itself. Lastly, Amanullah et al. (17) examined how big data, IoT security, and deep learning are related, and Joao et al. (19) surveyed IoT risk concepts and attack routes. ...
Article
Full-text available
Introduction: Passive intrusion detection in industrial environments can be challenging, especially when the monitored area is vast. However, with the advent of IoT technology, it is possible to deploy sensors and devices that help with mass segmentation of passive intrusions. This approach therefore deploys a Machine Learning (ML) algorithm with improvised Convolutional Neural Network (CNN) support to identify and prevent illegal access to critical areas in real time, ultimately improving security and safety in industrial environments. Methods: The proposed algorithm can detect patterns and anomalies that may indicate a passive intrusion. To discover the patterns and connections among the various sensor data points, Deep Learning (DL) techniques such as CNNs, Recurrent Neural Networks (RNNs), and Autoencoders (AEs) may be trained on massive datasets of sensor data. Results: The robust DL technique can then be utilized for Intrusion Detection (ID) in industrial settings, specifically when combined with other IoT devices such as sensors and alert systems. The model was trained and tested, achieving accuracies of 98.51% and 94.85%, respectively. Conclusion: After the training phase, these frameworks can be employed for the analysis of new sensor data and for anomaly detection, since an anomaly reveals a potential intrusion.
... AI models such as deep learning (DL) models, convolutional neural networks (CNNs), and recurrent neural networks (RNNs) have rarely been used to solve information security problems such as attack detection [24]. However, a relational and comparative analysis involving DL, the Internet of Things, and big data (BD) in ref. [25] showed that current security solutions require updates similar to those already implemented with DL, which researchers and organizations have widely accepted. ...
Article
Full-text available
With new high-performance server technology in data centers and bunkers, it is necessary to optimize search engines to use processing time and resources efficiently. The database query system, upheld by the standard SQL language, has maintained the same functional design since the advent of PL/SQL. This is because recent research has focused on computer resource management, encryption, and security rather than on improving data mining with AI tools, machine learning (ML), and artificial neural networks (ANNs). This work presents a methodology integrating a multilayer perceptron (MLP) with K-means. The methodology is compared with traditional PL/SQL tools and aims to improve database response time while outlining future advantages of ML and K-means in data processing. We propose a new corollary: h_k → H = SSE(C), where k > 0 and ∃X, executed on application software querying data collections with more than 306 thousand records. This study produced a comparative table between PL/SQL and MLP+K-means based on three hypotheses: line query, group query, and total query. The results show that the line query increased to 9 ms, the group query from 88 to 2460 ms, and the total query from 13 to 279 ms. Testing one methodology against the other shows not only the incremental fatigue and time consumption that training adds to database querying, but also that a neural network, despite its complexity, can produce more precise results than the simple use of PL/SQL instructions, which will become increasingly important for domain-specific problems.
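The abstract describes pairing an MLP with K-means for query processing but gives no implementation details. A minimal, hypothetical sketch of one way such a pairing can work: K-means partitions the records (SSE(C) is the clustering's inertia), and an MLP learns to route a query to the partition holding its matches. The feature layout, cluster count, and layer sizes below are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))  # stand-in for indexed record features

# Step 1: K-means partitions the records; SSE(C) is the inertia of clustering C.
k = 8
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
sse = km.inertia_

# Step 2: an MLP learns to map a query's features to the cluster that holds
# its matching records, so a query scans one partition instead of the table.
mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
mlp.fit(X, km.labels_)

query = rng.normal(size=(1, 4))
partition = int(mlp.predict(query)[0])
print(partition, round(sse, 2))
```

In this sketch the speed-up would come from scanning only the predicted partition; whether that pays off depends on training cost, which is exactly the trade-off the abstract's timing comparison measures.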
... Agile concepts are combined with risk mitigation measures to promote a feedback-driven method for improving data quality [40]. Through continuous feedback loops, teams can modify their data collection and preprocessing processes in response to insights gained from the model's performance and the project's changing demands. ...
Chapter
Full-text available
The purpose of this study is to explore how artificial intelligence (AI) becomes a part of data visualization, transforming data from complex datasets into dynamic, interactive, and personalized visual experiences that yield deeper insights and actionable knowledge. The research aims to design a holistic system and set of rules for using AI to make data visualization more effective and highly interactive for users.
... They either provided a broad overview of IoT security [19] or a detailed security analysis focused on specific IoT technologies or a specific layer of the IoT architecture [20]. Furthermore, several surveys [21,22,23,24] investigated the relationship between IoT security and blockchain technologies. In the IIoT domain, survey efforts have recently intensified [25]. ...
Preprint
Full-text available
Currently, Blockchain (BC), Artificial Intelligence (AI), and the smart Industrial Internet of Things (IIoT) are not only leading, promising technologies worldwide; they also help society raise its standard of living and make life easier for users. These technologies have been applied in various domains for different purposes and have successfully assisted in developing desired systems such as smart cities, homes, manufacturing, education, and industry. However, they must also contend with issues of security, privacy, confidentiality, scalability, and application challenges in diverse fields. In this context, given the increasing demand for solutions to these issues, the authors present a comprehensive survey of AI approaches combined with BC in the smart IIoT. First, we provide state-of-the-art overviews of AI, BC, and smart IoT applications. Then, we describe the benefits of integrating these technologies and discuss the established methods, tools, and strategies. Most importantly, we highlight the various issues of security, stability, scalability, and confidentiality, and outline strategies and methods for addressing them. Furthermore, the individual and collaborative benefits of applications are discussed. Lastly, we examine the open research challenges and potential future guidelines for BC-based AI approaches in intelligent IIoT systems.
... Detecting these attacks is known as ransomware detection [134]. Ransomware is recognized as a highly effective and lucrative attack method in traditional information technology settings. ...
Article
Full-text available
Vehicle-to-Everything (V2X) communication, essential for enhancing road safety, driving efficiency, and traffic management, must be robust against cybersecurity threats for successful deployment and acceptance. This survey comprehensively explores V2X security challenges, focusing on prevalent cybersecurity threats such as jamming, spoofing, Distributed Denial of Service (DDoS), and eavesdropping attacks. These threats were selected due to their prevalence and ability to compromise the integrity and reliability of V2X systems. Jamming can disrupt communications, spoofing can lead to data and identity manipulation, DDoS attacks can saturate system resources, and eavesdropping can compromise user privacy and information confidentiality. Addressing these major threats ensures that V2X systems are robust and secure for successful deployment and widespread acceptance. An extensive review uncovered a global landscape of V2X cybersecurity research. We highlight contributions from China in scientific publications and the United States in patent innovations, with notable advancements from leading corporations such as Qualcomm, LG, Huawei, Intel, Apple, and Samsung. This work educates and informs on the current state of V2X cybersecurity and identifies emerging trends and future research directions based on a year-by-year analysis of the literature and patents. The findings underscore the evolving cybersecurity landscape in V2X systems and the importance of continued innovation and research in this critical field. The survey navigates the complexities of securing V2X communications, emphasizing the necessity for advanced security protocols and technologies, and highlights innovative approaches within the global scientific and patent research context. By providing a panoramic view of the field, this survey sets the stage for future advancements in V2X cybersecurity.
Article
To improve the security and performance of an oral English instant translation model, this paper optimizes the model using Internet of Things (IoT) security technology and deep learning. The real-time translation model based on deep learning and IoT technology is analyzed in detail to show how these two technologies are applied, and the related information security issues are discussed. The paper then proposes a method combining a deep learning network with IoT technology to further improve the security of the instant translation model. The experimental results show that, under the optimized model, the parameter upload time is 60 seconds, the aggregation computation time is 6.5 seconds, and the authentication time is 7.5 seconds. Moreover, the average recognition accuracy of the optimized model reaches 93.1%, and it surpasses traditional machine translation methods in both accuracy and real-time performance, giving it wide practical value and broad application prospects. The research therefore provides a useful reference for improving the security of oral English instant translation models.
Article
Full-text available
In the modern Internet of Things (IoT), visual analysis and prediction are often performed by deep learning models. Salient object detection (SOD) is a fundamental pre-processing step for these applications. Executing SOD on fog devices is challenging due to the diversity of both data and fog devices. To adopt convolutional neural networks (CNNs) on fog-cloud infrastructures for SOD-based applications, we introduce a semi-supervised adversarial learning method in this paper. The proposed model, named SaliencyGAN, is empowered by a novel concatenated-GAN framework with partially shared parameters. The backbone CNN can be chosen flexibly based on the specific devices and applications. Meanwhile, our method uses both labelled and unlabelled data from different problem domains for training. Using multiple popular benchmark datasets, we compared state-of-the-art baseline methods to our SaliencyGAN trained with 10% to 100% labelled data. SaliencyGAN achieved performance comparable to the supervised baselines when the percentage of labelled data reached 30%, and it outperformed the weakly supervised and unsupervised baselines. Furthermore, our ablation study shows that SaliencyGAN was more robust to the common "mode missing" (or "mode collapse") issue than the selected popular GAN models. The visualized ablation results show that SaliencyGAN learned a better estimation of the data distributions. To the best of our knowledge, this is the first IoT-oriented semi-supervised SOD method. Source code: https://github.com/Heye-SYSU/SaliencyGAN
Article
Full-text available
Of late, the ever-increasing use of connected Internet-of-Things devices has augmented the volume and velocity of real-time network data. At the same time, threats to networks have become inevitable; hence, identifying anomalies in real-time network data has become crucial. To date, most existing anomaly detection approaches focus mainly on machine learning techniques for batch processing, while approaches that focus on real-time analytics are somewhat deficient in detection accuracy and consume more memory and execution time. This paper therefore proposes a novel framework for real-time anomaly detection based on big data technologies. In addition, it develops a streaming sliding-window local outlier factor coreset clustering algorithm (SSWLOFCC), which is implemented within the framework. The proposed framework, comprising BroIDS, Flume, Kafka, Spark Streaming, Spark MLlib, Matplotlib, and HBase, was evaluated to substantiate its efficacy, particularly in terms of accuracy, memory consumption, and execution time. The evaluation performs a critical comparative analysis against existing approaches, such as K-means, hierarchical density-based spatial clustering of applications with noise (HDBSCAN), isolation forest, spectral clustering, and agglomerative clustering, using the Adjusted Rand Index and a memory profiler. The outcome substantially proves the efficacy of the proposed framework, with a much higher accuracy rate of 96.51% compared to the other algorithms. The proposed framework also outperformed the existing algorithms in memory consumption and execution time. Ultimately, the proposed solution enables analysts to precisely track and detect anomalies in real time.
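SSWLOFCC itself is not reproduced in the abstract, so as a sketch only, here is the underlying sliding-window local-outlier-factor idea it builds on: score each arriving record against a bounded window of recent records and flag those whose LOF exceeds a threshold. The window size, neighbour count, threshold, and synthetic 2-D data are illustrative assumptions, and the paper's coreset-clustering step is omitted.

```python
from collections import deque

import numpy as np
from sklearn.neighbors import LocalOutlierFactor

def stream_anomalies(stream, window=200, k=20, threshold=1.5):
    """Flag records whose Local Outlier Factor over the current sliding
    window exceeds `threshold` (a LOF near 1 means the point is as dense
    as its neighbours; much larger values indicate outliers)."""
    buf = deque(maxlen=window)
    flagged = []
    for i, x in enumerate(stream):
        buf.append(x)
        if len(buf) < k + 1:
            continue  # not enough neighbours to score yet
        lof = LocalOutlierFactor(n_neighbors=k)
        lof.fit(np.asarray(buf))
        # negative_outlier_factor_ of the newest point, sign-flipped
        score = -lof.negative_outlier_factor_[-1]
        if score > threshold:
            flagged.append(i)
    return flagged

rng = np.random.default_rng(1)
data = rng.normal(0.0, 1.0, size=(300, 2))  # "normal" traffic features
data[150] = (8.0, 8.0)                      # injected anomaly
hits = stream_anomalies(data)
print(hits)
```

Refitting LOF on every arrival, as done here for clarity, is exactly the cost that coreset-style summaries (as in the paper) aim to cut down.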
Article
Full-text available
Machine learning techniques are widely used to develop intrusion detection systems (IDS) for detecting and classifying cyber-attacks at the network level and host level in a timely, automatic manner. However, many challenges arise because malicious attacks continually change and occur in very large volumes, requiring a scalable solution. Various malware datasets are publicly available for further research by the cyber security community, but no existing study has presented a detailed analysis of the performance of various machine learning algorithms on these datasets. Given the dynamic nature of malware and its continuously changing attack methods, publicly available malware datasets must be systematically updated and benchmarked. In this paper, the deep neural network (DNN), a type of deep learning model, is explored to develop a flexible and effective IDS for detecting and classifying unforeseen and unpredictable cyber-attacks. The continuous change in network behaviour and the rapid evolution of attacks make it necessary to evaluate various datasets generated over the years through static and dynamic approaches. This type of study helps identify the algorithm that can most effectively detect future cyber-attacks. A comprehensive experimental evaluation of DNNs and other classical machine learning classifiers is presented on various publicly available benchmark malware datasets. The optimal network parameters and topologies for the DNNs are chosen through hyper-parameter selection on the KDDCup 99 dataset. All DNN experiments are run for up to 1,000 epochs with the learning rate varying in the range [0.01-0.5]. The DNN model that performed well on KDDCup 99 is then applied to other datasets, namely NSL-KDD, UNSW-NB15, Kyoto, WSN-DS, and CICIDS 2017, to conduct the benchmark.
Our DNN model learns abstract, high-dimensional feature representations of the IDS data by passing it through many hidden layers. Rigorous experimental testing confirms that DNNs perform well in comparison to classical machine learning classifiers. Finally, we propose a highly scalable hybrid DNN framework called Scale-Hybrid-IDS-AlertNet (SHIA), which can be used in real time to effectively monitor network traffic and host-level events and to proactively alert on possible cyber-attacks.
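The exact SHIA topology is not given in the abstract. A minimal sketch of the general approach it describes (a multi-hidden-layer network classifying flow records as benign or attack) on synthetic features; the layer sizes, learning rate, and data below are assumptions, whereas the paper tunes these per dataset via hyper-parameter search.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
# Synthetic stand-in for flow features (duration, bytes, packets, flags, ...)
benign = rng.normal(0.0, 1.0, size=(500, 8))
attack = rng.normal(2.5, 1.0, size=(500, 8))
X = np.vstack([benign, attack])
y = np.array([0] * 500 + [1] * 500)  # 0 = benign, 1 = attack

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=7, stratify=y)
scaler = StandardScaler().fit(X_tr)

# Several dense hidden layers learn an abstract representation of the flows,
# mirroring the multi-hidden-layer DNN idea (layer sizes are illustrative).
dnn = MLPClassifier(hidden_layer_sizes=(64, 32, 16),
                    learning_rate_init=0.01, max_iter=300, random_state=7)
dnn.fit(scaler.transform(X_tr), y_tr)
acc = dnn.score(scaler.transform(X_te), y_te)
print(f"test accuracy: {acc:.3f}")
```

On real IDS datasets the classes overlap far more than in this toy data, which is why the paper's hyper-parameter selection and multi-dataset benchmarking matter.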
Article
Full-text available
This survey paper describes a literature review of deep learning (DL) methods for cyber security applications. A short tutorial-style description of each DL method is provided, including deep autoencoders, restricted Boltzmann machines, recurrent neural networks, generative adversarial networks, and several others. Then we discuss how each of the DL methods is used for security applications. We cover a broad array of attack types including malware, spam, insider threats, network intrusions, false data injection, and malicious domain names used by botnets.
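Among the DL methods the survey lists, the deep autoencoder is commonly applied to security anomaly detection by thresholding reconstruction error: an undercomplete network trained on normal traffic reconstructs anomalies poorly. A minimal sketch under that interpretation, using an MLP regressor as the autoencoder on synthetic data; the architecture and threshold rule are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
train = rng.normal(0.0, 1.0, size=(2000, 10))  # "normal" traffic features

# Undercomplete autoencoder: the 4-unit bottleneck forces the network to
# learn the structure of normal data, so anomalies reconstruct poorly.
ae = MLPRegressor(hidden_layer_sizes=(16, 4, 16), max_iter=800, random_state=3)
ae.fit(train, train)  # target equals input: learn to reconstruct

def reconstruction_error(x):
    """Mean squared reconstruction error of a single sample."""
    return float(np.mean((ae.predict(x.reshape(1, -1)) - x) ** 2))

# Threshold: high quantile of errors seen on (a subset of) normal data.
threshold = np.quantile([reconstruction_error(x) for x in train[:200]], 0.99)

normal_err = reconstruction_error(rng.normal(0.0, 1.0, size=10))
attack_err = reconstruction_error(rng.normal(0.0, 6.0, size=10))
print(normal_err, attack_err, threshold)
```

The same pattern carries over to the deep autoencoders the survey covers; only the network depth and the threshold calibration change.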
Article
Malware perception is an important technique that must be explored to analyze the copious amount of malware within a short duration for effective disaster management. Malware must be analyzed accurately by detecting it at an early stage in an automatic way, to avoid severe damage to Internet of Things devices. This is enabled by visualizing malware using a software-defined visual analytic system. Though many automatic analysis techniques exist, visualization of malware is one of the techniques preferred for large-scale analysis. Malware exhibits malicious behavior on computing devices by installing harmful software such as viruses. The existing static and dynamic forms of malware detection are inefficient, as they involve disassembling the malicious code. In this work, the visualization of malware in the form of images is proposed in order to find malicious insertions in the executable files of computing devices for extreme surveillance. Malware detection becomes easier when the malicious behavior is visualized as images and classified by image features, since the global properties of an executable's grayscale image remain unchanged. This will be an eye-opener in addressing security issues in cyber-crime and will provide extreme surveillance.
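The grayscale-image representation described above can be sketched as follows. The byte-to-pixel mapping is the standard one used in malware-imaging work (one byte per pixel at a fixed row width), while the sample bytes and the width of 64 are hypothetical placeholders for a real executable.

```python
import numpy as np

def bytes_to_grayscale(blob: bytes, width: int = 256) -> np.ndarray:
    """Render a binary as a 2-D grayscale image: each byte becomes one
    pixel intensity (0-255), laid out in rows of `width` pixels, with the
    last row zero-padded. Image-feature classifiers can then run on it."""
    arr = np.frombuffer(blob, dtype=np.uint8)
    rows = -(-arr.size // width)  # ceiling division
    padded = np.zeros(rows * width, dtype=np.uint8)
    padded[:arr.size] = arr
    return padded.reshape(rows, width)

# Hypothetical stand-in for an executable's raw bytes:
sample = bytes(range(256)) * 5
img = bytes_to_grayscale(sample, width=64)
print(img.shape)  # 1280 bytes laid out as rows of 64 pixels
```

Because the mapping never disassembles the code, sections of the binary appear as visually distinct textures, which is what makes feature-based image classification applicable.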
Article
The Internet of Things (IoT) is one of the rising innovations of the current era and has attracted both industry and academia. The IoT has become all but indispensable to daily life. To dispel any doubts about its widespread adoption, the IoT certainly requires technically and logically sound solutions to ensure the underlying security and privacy. This paper explicitly investigates the security issues in the perception layer of the IoT, the countermeasures, and the research challenges faced in large-scale deployment of the IoT. The perception layer, one of the most important layers in the IoT, is responsible for collecting data from things and transmitting it successfully for further processing. The contribution of this paper is twofold. First, we describe the crucial components of the IoT (i.e., architectures, standards, and protocols) in the context of perception-layer security, followed by the IoT security requirements. Second, after describing generic IoT layered security, we focus on two key enabling technologies at the perception layer (i.e., RFID and sensor networks). We categorize and classify various attacks at different layers of both of these technologies through a taxonomic classification and discuss possible solutions. Finally, open research issues and challenges relevant to the perception layer are identified and analyzed.
Article
With the rise of Internet of Things (IoT) technology, the number of IoT devices and sensors has increased significantly. It is anticipated that large-scale sensor-based systems will prevail in our societies, calling for novel methodologies to design and operate these new systems. To support the computational demand of real-time, delay-sensitive applications running on widely distributed IoT devices and sensors, the cloud is migrating to the edge of the network, where resources such as routers, switches, and gateways are being virtualized. The open structural design of the IoT architecture and the extensive usage of the paradigm itself expose existing networking technologies to conventional security issues. Moreover, cooperation generates challenges, as new security challenges can disrupt systems' regular functionalities and operations. Furthermore, the commercialization of the IoT has led to several public security concerns, including threats of cyber-attacks, privacy issues, and organized crime. In this paper, we aim to provide guidelines for researchers and practitioners interested in understanding IoT security issues. More specifically, an extensive description of security threats and challenges across the different layers of the architecture of IoT systems is presented. Light is also shed on the solutions and countermeasures proposed in the literature to address such security issues. Finally, an emerging security challenge that has yet to be explained in depth in previous studies is introduced.