Fig. 2. Proposed method for black-box adversarial attacks in autonomous vehicle technology: (a) an input module that senses/detects traffic signs through the camera attached to the autonomous vehicle; (b) a multi-gradient attack module that generates three different gradient perturbations using transfer-based projected gradient descent (T-PGD), the Simple Black-box Attack (SimBA), and the Modified Simple Black-box Attack (M-SimBA); and (c) a classification module that attacks the target black-box model.


Context in source publication

Context 1
... In this section, we present the proposed method for black-box adversarial attacks in AVs. As shown in Fig. 2, there are three main modules: (a) an input module that senses/detects traffic signs through the camera attached to the autonomous vehicle; (b) a multi-gradient attack module; and (c) an adversarial sample estimator that implements the target attack. The gradient perturbations can be generated by one of three methods: transfer-based projected gradient descent (T-PGD), the Simple Black-box Attack (SimBA), or the Modified Simple Black-box Attack (M-SimBA). ...
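To make the query-based branch of this pipeline concrete, below is a minimal sketch of the SimBA-style search loop that the attack module builds on. It assumes a hypothetical `query_probs` callable that returns the target black-box model's class-probability vector for an image in [0, 1]; the step size `eps` and query budget are illustrative, and the published M-SimBA modifies this basic loop further.

```python
import numpy as np

def simba_attack(x, true_label, query_probs, eps=0.2, max_queries=1000, seed=0):
    """SimBA-style black-box attack sketch (after Guo et al., 2019).

    query_probs: assumed interface -- maps an image array to the target
    model's probability vector; this is the only access the attacker has.
    """
    rng = np.random.default_rng(seed)
    x_adv = x.copy()
    p_best = query_probs(x_adv)[true_label]
    # Visit pixel coordinates in a random order, each at most once.
    for idx in rng.permutation(x_adv.size)[:max_queries]:
        q = np.zeros(x_adv.size)
        q[idx] = eps
        q = q.reshape(x_adv.shape)
        for direction in (q, -q):
            candidate = np.clip(x_adv + direction, 0.0, 1.0)
            probs = query_probs(candidate)
            if probs[true_label] < p_best:  # true-class confidence dropped
                x_adv, p_best = candidate, probs[true_label]
                if probs.argmax() != true_label:
                    return x_adv  # misclassification achieved
                break  # keep this step, move to the next coordinate
    return x_adv
```

Each accepted step lowers the true class's confidence through a greedy coordinate-wise search, which is why the attack needs only output probabilities and no gradients from the target model.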

Similar publications

Article
Full-text available
Accurate, robust and drift-free global pose estimation is a fundamental problem for autonomous vehicles. In this work, we propose a global drift-free map-based localization method for estimating the global poses of autonomous vehicles that integrates visual–inertial odometry and global localization with respect to a pre-built map. In contrast to pr...
Preprint
Full-text available
In this paper, we want to strengthen an autonomous vehicle's lane-change ability with limited lane changes performed by the autonomous system. In other words, our task is bootstrapping the predictability of lane-change feasibility for the autonomous vehicle. Unfortunately, autonomous lane changes happen much less frequently in autonomous runs than...
Thesis
Full-text available
Object detection on the road is a topic of great importance in the development of autonomous vehicles, as well as in its implementation in manually driven cars, with the goal of driving safely on streets with anomalies. This work proposes an electronic system for the detection of speed bumps and potholes using the convolutional neural network YO...
Preprint
Full-text available
Object detectors used in autonomous vehicles can have high memory and computational overheads. In this paper, we introduce a novel semi-structured pruning framework called R-TOSS that overcomes the shortcomings of state-of-the-art model pruning techniques. Experimental results on the JetsonTX2 show that R-TOSS has a compression rate of 4.4x on the...
Article
Full-text available
Vehicle-trajectory prediction is essential for intelligent traffic systems (ITS), as it can help autonomous vehicles to plan a safe and efficient path. However, it is still a challenging task because existing studies have mainly focused on the spatial interactions of adjacent vehicles regardless of the temporal dependencies. In this paper, we propo...

Citations

... When an attacker has partial knowledge of the system, they can launch targeted poisoning attacks in a grey-box scenario [71]. A black-box condition prevents the attacker from accessing model parameters [72]. Adversaries may likewise differ in how much knowledge they hold about the data. ...
... In the grey-box situation [79], the attacker possesses partial knowledge of the system, allowing the development of focused poisoning attacks. In the black-box condition [80], the attacker does not have access to model parameters. Moreover, adversaries may have full or partial knowledge of data. ...
... Despite these efforts, AML has not been investigated thoroughly within the IoT context [5,12–16]. Only a few works have investigated such issues while considering relevant domain constraints. ...
... In particular, the defense perspective can be elaborated to incorporate a combination of defensive modules that forms the "defense-in-depth" security concept. The adoption of such ensemble methods increases the attack complexity, in terms of time, knowledge, and computational resources, for the adversary [16]. ...
... Commonly employed strategies include feature squeezing, adversarial training, network distillation, adversarial detection, and ensemble classifiers. Compared to other defense methods, adversarial training is the most widely employed defense for enhancing IoT-based IDSs [5,12–16]. On the other hand, little effort has been made to investigate the other defense strategies in the IoT context. ...
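As a rough illustration of the adversarial-training defense mentioned in this excerpt, the sketch below mixes clean and FGSM-perturbed batches in a single PyTorch training step. The model, optimizer, and `eps` are placeholders for the reader's own setup, not the cited works' exact procedure.

```python
import torch
import torch.nn.functional as F

def fgsm_adversarial_training_step(model, x, y, optimizer, eps=0.03):
    """One training step mixing clean and FGSM-perturbed examples.

    Sketch of the adversarial-training defense; eps and the 50/50 mix
    are illustrative assumptions, not values from the cited papers.
    """
    # Craft FGSM examples against the current model parameters.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    x_adv = (x_adv + eps * grad.sign()).clamp(0, 1).detach()

    # Optimize on an even mix of clean and adversarial batches.
    optimizer.zero_grad()
    mixed_loss = 0.5 * F.cross_entropy(model(x), y) \
               + 0.5 * F.cross_entropy(model(x_adv), y)
    mixed_loss.backward()
    optimizer.step()
    return mixed_loss.item()
```

Stronger variants replace the single FGSM step with a multi-step PGD inner loop, at a correspondingly higher training cost.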
Article
Full-text available
Recently, Machine Learning (ML)-based solutions have been widely adopted to tackle the wide range of security challenges that have affected the progress of the Internet of Things (IoT) in various domains. Despite the reported promising results, the ML-based Intrusion Detection System (IDS) proved to be vulnerable to adversarial examples, which pose an increasing threat. In fact, attackers employ Adversarial Machine Learning (AML) to cause severe performance degradation and thereby evade detection systems. This has prompted the need for reliable defense strategies that maintain performance and ensure secure networks. This work introduces RobEns, a robust ensemble framework that aims at: (i) exploiting state-of-the-art ML-based models alongside ensemble models for IDSs in the IoT network; (ii) investigating the impact of evasion AML attacks against the provided models within a black-box scenario; and (iii) evaluating the robustness of the considered models after deploying relevant defense methods. In particular, four typical AML attacks are considered to investigate six ML-based IDSs using three benchmarking datasets. Moreover, multi-class classification scenarios are designed to assess the performance of each attack type. The experiments indicated a drastic drop in detection accuracy for some attacks. To harden the IDS even further, two defense mechanisms were derived from both data-based and model-based methods. Specifically, these methods relied on feature squeezing as well as adversarial training defense strategies. They yielded promising results, enhanced robustness, and maintained standard accuracy in the presence or absence of adversaries. The obtained results proved the efficiency of the proposed framework in robustifying IDS performance within the IoT context. In particular, the accuracy reached 100% for black-box attack scenarios while preserving the accuracy in the absence of attacks as well.
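For context on the data-based defense named in this abstract, here is a minimal sketch of feature squeezing in the spirit of Xu et al.: the input is squeezed (bit-depth reduction and median smoothing), and a prediction that shifts too much under squeezing is flagged as adversarial. The `predict_probs` callable and the L1 `threshold` are assumptions, not the paper's exact configuration.

```python
import numpy as np
from scipy.ndimage import median_filter

def squeeze_bit_depth(x, bits=4):
    """Quantize inputs in [0, 1] down to 2**bits intensity levels."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

def is_adversarial(x, predict_probs, threshold=1.0):
    """Flag an input whose prediction shifts too much after squeezing.

    predict_probs: assumed callable returning the model's softmax vector.
    The L1 threshold is illustrative and would be tuned on validation data.
    """
    p_raw = predict_probs(x)
    p_bit = predict_probs(squeeze_bit_depth(x))
    p_med = predict_probs(median_filter(x, size=2))  # 2x2 median smoothing
    score = max(np.abs(p_raw - p_bit).sum(), np.abs(p_raw - p_med).sum())
    return score > threshold
```

The intuition is that legitimate inputs are largely invariant to such mild squeezing, while adversarial perturbations, which live in the low-order bits, tend to be destroyed by it.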
... [Flattened table excerpt, continued] White-box and black-box attack: the adversarial model is more effective at producing successful mispredictions of signboards at a quicker pace and with a larger likelihood of failure [111]. User devices (insider, guest, bring-your-own-device for employees) [77]: information injection, where a device controlled by an adversary can inject malicious code or information [135]; service manipulation, where virtual machines could be manipulated; information disclosure of vehicles and RSUs [137,141]; execution of arbitrary code and takeover of entire vehicular systems via access to the program handling Bluetooth functionality; compromise of the telematics ECU's Unix operating system. These devices could be end nodes such as COHDA units or Internet-of-Things devices; wired connections could be manipulated to connect with rogue systems to feed edge systems artificially coded messages; execution of arbitrary code and taking control of the ...
Article
Full-text available
As threat vectors and adversarial capabilities evolve, Cloud-Assisted Connected and Autonomous Vehicles (CCAVs) are becoming more vulnerable to cyberattacks. Several established threat analysis and risk assessment (TARA) methodologies are publicly available to address the evolving threat landscape. However, these methodologies inadequately capture the threat data of CCAVs, resulting in poorly defined threat boundaries or the reduced efficacy of the TARA. This is due to multiple factors, including complex hardware–software interactions, rapid technological advancements, outdated security frameworks, heterogeneous standards and protocols, and human errors in CCAV systems. To address these factors, this study begins by systematically evaluating TARA methods and applying the Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, and Elevation of privileges (STRIDE) threat model and Damage, Reproducibility, Exploitability, Affected Users, and Discoverability (DREAD) risk assessment to target system architectures. This study identifies vulnerabilities, quantifies risks, and methodically examines defined data processing components. In addition, this study offers an attack tree to delineate attack vectors and provides a novel defense taxonomy against identified risks. This article demonstrates the efficacy of the TARA in systematically capturing compromised security requirements, threats, limits, and associated risks with greater precision. By doing so, we further discuss the challenges in protecting hardware–software assets against multi-staged attacks due to emerging vulnerabilities. As a result, this research informs advanced threat analyses and risk management strategies for enhanced security engineering of cyberphysical CCAV systems.
... In the past years, several works have attempted to attack traffic sign recognition tasks in the digital and physical layers. Kumar et al. [46] proposed a query-based attack called the Modified Simple Black-box Attack (M-SimBA) for traffic sign recognition using the GTSRB dataset. Their results show that this attack significantly decreases the efficacy of the models. ...
Chapter
Nowadays we are all witnessing the technological development of the so-called fourth industrial revolution (Industry 4.0). In this context, a daily living environment of smart cities is forming, in which artificial intelligence applications play a dominant role. Autonomous (pilotless) vehicles are a shining example of the application of artificial intelligence, allowing vehicles to move autonomously in both residential and rural areas. The proposed article examines the robustness, against adversarial attacks in the physical layer, of the deep learning models used in autonomous vehicles for the recognition of road traffic signs. As a case study, the roads of Greece, whose traffic signs are often heavily contaminated unintentionally, are considered. Toward investigating this direction, a novel dataset with clean and attacked images of traffic signs is proposed and used in the evaluation of popular deep learning models. This study investigates the level of readiness of autonomous vehicles to perform in noisy environments that affect their ability to recognize road signs. This work highlights the need for more robust deep learning models in order to make the use of autonomous vehicles a reality with maximum safety for citizens.
... A piezo sensor is employed as a key sensory element, and the programming is accomplished using AVR Studio and AVR Dude [13]. This study significantly enhances accident data capture and reporting, blending hardware components and microcontroller technology for improved accident information retrieval [14,15]. ...
... In the grey-box setting [51], the attacker has partial knowledge of the system, enabling the development of targeted poisoning attacks. In the black-box setting [52], the attacker lacks access to model parameters and must infer predictions, making data poisoning attacks a concern. In addition, attackers can have full or partial knowledge of data from benign clients [13], where full knowledge allows access to all benign and compromised clients in FL, while partial knowledge only provides access to data from compromised clients. ...
Article
Federated learning (FL) has emerged as a powerful machine learning technique that enables the development of models from decentralized data sources. However, the decentralized nature of FL makes it vulnerable to adversarial attacks. In this survey, we provide a comprehensive overview of the impact of malicious attacks on FL by covering various aspects such as attack budget, visibility, and generalizability, among others. Previous surveys have primarily focused on the various types of attacks and defenses but failed to consider the impact of these attacks in terms of their budget, visibility, and generalizability. This survey aims to fill this gap by providing a comprehensive understanding of the attacks' effect by identifying FL attacks with low budgets, low visibility, and high impact. Additionally, we address the recent advancements in the field of adversarial defenses in FL and highlight the challenges in securing FL. The contribution of this survey is threefold: first, it provides a comprehensive and up-to-date overview of the current state of FL attacks and defenses. Second, it highlights the critical importance of considering the impact, budget, and visibility of FL attacks. Finally, we provide ten case studies and potential future directions towards improving the security and privacy of FL systems.
... Prior research has predominantly concentrated on terrestrial-based scenarios and applications, such as person detection [43]–[47], facial recognition [48]–[51], security surveillance [52]–[54], and autonomous vehicles [55]–[59]. The literature preceding this study has provided broader insights into the potential of adversarial attacks, as demonstrated by the works of Wei et al. [42], Wang et al. [60], Akhtar and Mian [61], and Akhtar et al. [62]. ...
Article
Full-text available
Deep models’ feature learning capabilities have gained traction in recent years, driving significant progress in various Artificial Intelligence (AI) domains. The use of Deep Neural Networks (DNNs) has expanded the scope of Computer Vision (CV) and revealed their vulnerability to deliberate adversarial attacks. These attacks involve the careful introduction of perturbations crafted through complex optimization problems. Exploiting vulnerabilities in advanced deep neural network algorithms presents security concerns, particularly in high-stakes practical applications of computer vision such as unmanned aerial vehicles (UAVs) and satellite imagery. Adversarial attacks, both in digital and physical dimensions, pose a serious threat in the field. This research provides a comprehensive examination of state-of-the-art adversarial attacks specific to aerial imagery using autonomous platforms such as UAVs and satellites. This review covers fundamental concepts, techniques, and the latest advancements, identifying research gaps and suggesting future directions. It aims to deepen researchers’ understanding of the challenges and threats related to adversarial attacks on aerial imagery, serving as a valuable resource to guide future research and enhance the security of computer vision systems in aerial environments.
... They allow organizations to scale their resources and applications as their needs change. As technology advances, data centers are becoming increasingly important in providing the resources and infrastructure needed to keep up with the demands of a rapidly changing digital world [14,15]. Data centers are thus essential for enabling access to the digital world and for providing organizations with the resources and infrastructure they need to operate [16]. ...
... In [17], Gu et al. present a stacking-based ensemble learning model, in which four popular ML models, including k-nearest neighbors (KNN) and artificial neural networks (ANN), are used as base models. ...
... The effectiveness of the classification algorithms was evaluated with respect to the standard metrics of accuracy, precision, and F-score, which are given in Equations (14) to (17). In these equations, the true positives are represented by the letter A, the true negatives by the letter B, the false positives by the letter C, and the false negatives by the letter D. ...
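The excerpt does not reproduce Equations (14) to (17). Under the stated notation (A = true positives, B = true negatives, C = false positives, D = false negatives), the standard forms they presumably take are given below, with recall included since the F-score depends on it; the tag numbers follow the cited paper and are reproduced here as an assumption.

```latex
\begin{align}
\text{Accuracy}  &= \frac{A + B}{A + B + C + D} \tag{14}\\
\text{Precision} &= \frac{A}{A + C}             \tag{15}\\
\text{Recall}    &= \frac{A}{A + D}             \tag{16}\\
\text{F-score}   &= \frac{2\,\text{Precision}\cdot\text{Recall}}{\text{Precision} + \text{Recall}} \tag{17}
\end{align}
```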