Hyunjun Mun's research while affiliated with Sejong University and other places

What is this page?


This page lists the scientific contributions of an author who either does not have a ResearchGate profile or has not yet added these contributions to their profile.

It was automatically created by ResearchGate to record this author's body of work. We create such pages to advance our goal of building and maintaining the most comprehensive scientific repository possible. In doing so, we process publicly available (personal) data relating to the author as a member of the scientific community.


Publications (3)


Adversarial Attacks on Automatic Speech Recognition (ASR): A Survey
  • Article
  • Full-text available

January 2024 · 1 Read · IEEE Access

Amisha Rajnikant Bhanushali · Hyunjun Mun · Joobeom Yun

Automatic Speech Recognition (ASR) systems have improved and eased how humans interact with devices. An ASR system converts an acoustic waveform into the corresponding text. Modern ASR systems incorporate deep neural networks (DNNs) to deliver faster and more accurate results. As the use of DNNs continues to expand, they must be examined against various adversarial attacks. Adversarial examples are synthetic samples crafted by adding carefully chosen noise to legitimate inputs; they are imperceptible to humans, yet catastrophic to DNNs. Adversarial attacks on ASR have increased recently, but previous surveys lack a generalized treatment of the different methods used to attack ASR, and their scope is narrowed to particular applications, making it difficult to determine the relationships and trade-offs between the attack techniques. This survey therefore provides a taxonomy that classifies adversarial attacks on ASR by their characteristics and behavior. Additionally, we analyze the existing methods for generating adversarial attacks and present a comparative analysis. We clearly outline the efficiency of the adversarial techniques and, based on the gaps found in existing studies, state the future scope.
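For readers unfamiliar with the attack model this survey covers, the following is a minimal illustrative sketch (not taken from the survey itself) of how an adversarial audio example is formed: a small, bounded noise term is added to a legitimate waveform. The model_gradient callable and the epsilon bound are hypothetical placeholders for a white-box setting.

import numpy as np

# Illustrative sketch only: an adversarial example is the original
# waveform plus a small, bounded perturbation. `model_gradient` is a
# hypothetical callable returning the gradient of a targeted loss with
# respect to the waveform; `epsilon` caps the noise so it stays
# (near-)imperceptible.
def craft_adversarial_audio(waveform, model_gradient, epsilon=0.002):
    grad = model_gradient(waveform)            # white-box access assumed
    perturbation = -epsilon * np.sign(grad)    # step toward the target label
    return np.clip(waveform + perturbation, -1.0, 1.0)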


Black-Box Audio Adversarial Attack Using Particle Swarm Optimization

Figures (captions only; images not reproduced):
FIGURE 5. Targeted attack success rate based on the swarm size. The horizontal and vertical axes represent the target label and the source label, respectively; each subfigure shows the attack success rate for one swarm size.
FIGURE 7. Results of temporary particle generation using GA.
FIGURE 8. Changes in query count and query efficiency according to temporary particle generation.
FIGURE 9. Overlapped adversarial (blue) and original (orange) audio clip waveforms. The label of the original audio clip is "no," whereas the label of the adversarial audio clip is "up."

January 2022 · 23 Reads · 8 Citations · IEEE Access

The development of artificial neural networks and artificial intelligence has helped address problems and improve services in various fields, such as autonomous driving, image classification, medical diagnosis, and speech recognition. However, this technology has raised security threats different from existing ones. Recent studies have shown that artificial neural networks can easily be made to malfunction by adversarial examples, which cause a neural network model to behave as the adversary intends. In particular, adversarial examples targeting speech recognition models have been actively studied in recent years. Existing studies have focused mainly on white-box methods. However, most speech recognition services are provided online as black boxes, making such attacks difficult or impossible for adversaries. Black-box attacks face several challenges: typically, they have a low success rate and a high risk of detection. In particular, previously proposed genetic algorithm (GA)-based attacks carry a high risk of detection because they require numerous queries. We therefore propose an adversarial attack system that uses particle swarm optimization (PSO) to address these problems. The proposed system treats adversarial candidates as particles and obtains adversarial examples through iterative optimization. PSO-based adversarial attacks are more query-efficient and achieve a higher attack success rate than adversarial methods using GAs. In particular, our key feature, temporary particle generation, maximizes query efficiency to reduce detection risk and prevent wasted system resources. On average, our system achieves a 96% attack success rate with 1416.17 queries, which is 71.41% better in query count and 8% better in success rate than existing GA-based attacks.
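As a rough illustration of the approach the abstract describes (a sketch under assumed details, not the authors' implementation), a PSO-based black-box attack can be organized as follows. Here target_score is a hypothetical black-box query returning the model's confidence for the attacker's target label; each call counts as one query, and perturbations are clipped to an eps bound to keep the noise quiet.

import numpy as np

# Sketch of a PSO-based black-box attack: particles are bounded noise
# vectors added to the original clip; fitness is the (queried) confidence
# of the target label. Hyperparameters w, c1, c2 are standard PSO inertia
# and attraction weights; all values here are assumptions.
def pso_attack(original, target_score, swarm_size=25, max_iters=300,
               eps=0.005, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    dim = original.shape[0]
    pos = rng.uniform(-eps, eps, size=(swarm_size, dim))  # candidate noises
    vel = np.zeros_like(pos)
    fit = np.array([target_score(original + p) for p in pos])
    queries = swarm_size
    pbest, pbest_fit = pos.copy(), fit.copy()
    g = int(np.argmax(fit))
    gbest, gbest_fit = pos[g].copy(), fit[g]

    for _ in range(max_iters):
        r1 = rng.random((swarm_size, 1))
        r2 = rng.random((swarm_size, 1))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, -eps, eps)   # keep the noise imperceptible
        fit = np.array([target_score(original + p) for p in pos])
        queries += swarm_size
        better = fit > pbest_fit
        pbest[better], pbest_fit[better] = pos[better], fit[better]
        g = int(np.argmax(pbest_fit))
        if pbest_fit[g] > gbest_fit:
            gbest, gbest_fit = pbest[g].copy(), pbest_fit[g]
        if gbest_fit > 0.5:                   # target label now dominates
            break
    return original + gbest, queries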


Citations (2)


... The only issue with PSO is that occasionally the population of particles may fall into a local optimum, in which case the fitness score no longer improves. Mun et al. [122] created a PSO-based attack that introduces a GA-based temporary particle creation approach to prevent particles from slipping into a local optimum. The temporary particles have positions and directions different from the initial set, which helps broaden the search area. ...
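A rough sketch of that temporary-particle idea (assumed operators and shapes, not the authors' code): when the swarm's best fitness stagnates, GA-style crossover and mutation of the fittest particles produce temporary particles with fresh positions, widening the search.

import numpy as np

# Hypothetical sketch of GA-based temporary particle creation: breed new
# particles from the fittest ones so they start from positions and
# directions different from the stuck swarm. `pos` holds the current
# perturbations, `fit` their fitness scores, `eps` the perturbation bound.
def make_temporary_particles(pos, fit, n_temp, eps, rng):
    parents = pos[np.argsort(fit)[::-1][:max(2, n_temp)]]  # fittest first
    temp = np.empty((n_temp, pos.shape[1]))
    for i in range(n_temp):
        a, b = rng.choice(len(parents), size=2, replace=False)
        mask = rng.random(pos.shape[1]) < 0.5               # uniform crossover
        child = np.where(mask, parents[a], parents[b])
        child += rng.normal(0.0, 0.1 * eps, pos.shape[1])   # mutation
        temp[i] = np.clip(child, -eps, eps)
    return temp  # velocities for these particles would be re-initialized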

Reference:

Adversarial Attacks on Automatic Speech Recognition (ASR): A Survey
Black-Box Audio Adversarial Attack Using Particle Swarm Optimization

IEEE Access

... Nowadays, Artificial Neural Networks (ANNs) are bioinspired computational models widely deployed in multiple application domains, such as multimedia, financial analysis, robotics, and automotive [1]. Convolutional Neural Networks (CNNs) represent a class of ANNs used to implement complex image and video processing algorithms for object recognition and path tracking [2]. ...

Recycling of Adversarial Attacks on the DNN of Autonomous Cars
  • Citing Conference Paper
  • January 2021