Book (PDF available)

Quantum Machine Learning: What Quantum Computing Means to Data Mining

Author: Peter Wittek

Abstract and Figures

Quantum Machine Learning bridges the gap between abstract developments in quantum computing and applied research on machine learning. Paring down the complexity of the disciplines involved, it focuses on providing a synthesis that explains the most important machine learning algorithms in a quantum framework. Theoretical advances in quantum computing are hard to follow for computer scientists, and sometimes even for researchers involved in the field. The lack of a step-by-step guide hampers the broader understanding of this emergent interdisciplinary body of research. Quantum Machine Learning sets the scene for a deeper understanding of the subject for readers of different backgrounds. The author has carefully constructed a clear comparison of classical learning algorithms and their quantum counterparts, thus making differences in computational complexity and learning performance apparent. This book synthesizes a broad array of research into a manageable and concise presentation, with practical examples and applications.
... Various domains of computer science may benefit from quantum algorithms and their properties. ML stands out due to the potential for performance improvements through reducing the temporal or spatial complexity for important subroutines [13,14]. Classical data may be encoded in quantum states, for example, through qubit rotations. ...
... It is also possible to use a K-NN for unsupervised learning tasks. Based on the theory provided in [14], Quantum Nearest Neighbors is almost quadratically more efficient than the classical analog, even with sparse data, which is useful when working with large amounts of data in a short time. ...
... Our work builds on the theoretical foundations presented in [14,16,17], and provides a versatile framework capable of accommodating different training dataset sizes and different encoding schemes. Furthermore, in frameworks such as Qiskit, using intermediate measurements similar to dynamic circuits [51] while leaving other qubits unmeasured enables further circuit processing and the application of Grover's algorithm to identify states with higher probabilities than those with identical fidelity. ...
Article
Full-text available
This work introduces a quantum K-Nearest Neighbor (K-NN) classifier algorithm. The algorithm uses angle encoding through a Quantum Random Access Memory (QRAM) with n qubit addresses and O(log n) space complexity. It incorporates Grover's algorithm and the quantum SWAP test to identify similar states and determine the nearest neighbors with high probability, achieving O(√m) search complexity, where m is the number of qubit addresses. We implement a simulation of the algorithm using IBM's Qiskit with GPU support, applying it to the Iris and MNIST datasets with two different angle encodings. The experiments employ multiple QRAM cell sizes (8, 16, 32, 64, 128), with ten trials per size. Accuracy on the Iris dataset ranges from 89.3 ± 5.78% to 94.0 ± 1.56%. Mean binary accuracy on the MNIST dataset ranges from 79.45 ± 18.84% to 94.00 ± 2.11% for classes 0 and 1. Additionally, we compare the results of this proposed approach with different state-of-the-art versions of QK-NN and with the classical K-NN using Scikit-learn, which achieves 96.4 ± 2.22% accuracy on the Iris dataset. Finally, this proposal contributes an experimental result to the state of the art for the MNIST dataset, achieving an accuracy of 96.55 ± 2.00%. This work presents a new implementation proposal for QK-NN and conducts multiple experiments that yield more robust results than previous implementations. Although our average performance does not yet surpass the classical results, and hardware limitations prevented experimentally increasing the QRAM size or the amount of encoded data, our results show promising improvement when working with more features and accommodating more data in the QRAM.
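The SWAP test at the core of such QK-NN proposals measures an ancilla qubit whose probability of reading |0⟩ encodes the fidelity between two states. The following minimal sketch (pure Python, assuming a simple single-qubit angle encoding rather than the paper's actual QRAM circuit) illustrates the statistic the classifier estimates:

```python
import math

def angle_encode(x: float) -> tuple:
    """Encode a scalar feature as a single-qubit state
    cos(x/2)|0> + sin(x/2)|1> (one common angle-encoding convention)."""
    return (math.cos(x / 2), math.sin(x / 2))

def swap_test_p0(a, b) -> float:
    """Probability of measuring |0> on the SWAP-test ancilla:
    P(0) = (1 + |<a|b>|^2) / 2, i.e. 1 for identical states
    and 0.5 for orthogonal ones."""
    overlap = a[0] * b[0] + a[1] * b[1]
    return 0.5 * (1.0 + overlap ** 2)

s = angle_encode(0.7)
t = angle_encode(0.7 + math.pi)   # orthogonal to s under this encoding
print(swap_test_p0(s, s))          # identical states: P(0) = 1
print(swap_test_p0(s, t))          # orthogonal states: P(0) = 0.5
```

Repeating the measurement estimates P(0), hence the overlap, which serves as the similarity measure that Grover amplification then searches over.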
... This field is related to ML, since it borrows statistical methods, ML algorithms, and distributed-programming techniques, among others. It also draws on concepts from databases and data management (Wittek, 2014). Examples include K-means clustering, which identifies groups of similar items; support vector machines, which learn to classify sets into predefined categories; and dimensionality-reduction techniques such as singular value decomposition, which improves retrieval performance (Duan, Edwards and Dwivedi, 2019). ...
... Currently, quantum information has three main branches: quantum computing, quantum information theory, and quantum cryptography (Wittek, 2014). This article focuses on quantum computing, the field of research that exploits quantum phenomena such as superposition, entanglement, and interference, and that operates on data represented by quantum states. ...
... One can argue that if a quantum algorithm has the same computational complexity as its classical analog, it contributes nothing new to the field. However, this makes the creation of quantum algorithms slow, and quantum algorithmic analysis is affected in turn: even allowing for speedups, (Wittek, 2014) shows that some NP-class problems cannot be solved in less than exponential time, even on a quantum computer. ...
Article
Full-text available
This article studies Grover's algorithm as the basis for various quantum search algorithms developed in the field of Quantum Machine Learning, a branch of artificial intelligence. To that end, it presents the fundamentals of quantum computing, based on reversible unitary functions and the major quantum gates. Grover's algorithm is explained and implemented in Qiskit to run on a real quantum computer. Finally, the article discusses the implications of developing quantum algorithms and the need for a definition of computational complexity in quantum scenarios, which would help classify algorithms and support their further development.
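Grover's iteration alternates an oracle that flips the sign of the marked state with a "diffusion" reflection about the mean amplitude. A statevector-level sketch in pure Python (our illustration, not the article's Qiskit implementation) makes the amplification visible:

```python
import math

def grover(n_qubits: int, marked: int, iterations: int) -> list:
    """Simulate Grover search on the statevector level: the oracle flips
    the sign of the marked basis state; diffusion reflects every amplitude
    about the mean. Returns measurement probabilities."""
    N = 2 ** n_qubits
    amp = [1 / math.sqrt(N)] * N           # uniform superposition
    for _ in range(iterations):
        amp[marked] = -amp[marked]         # oracle: phase flip
        mean = sum(amp) / N
        amp = [2 * mean - a for a in amp]  # diffusion: reflect about mean
    return [a * a for a in amp]

# For N = 8, roughly floor(pi/4 * sqrt(8)) = 2 iterations are optimal.
probs = grover(3, marked=5, iterations=2)
print(probs[5])   # amplified well above the uniform 1/8
```

Each iteration rotates the state toward the marked item, which is why the search cost scales with the square root of the database size.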
... For example, quantum GANs (QGANs) utilize quantum parallelism to produce samples matching the complicated probability distributions used in data synthesis and augmentation. Furthermore, quantum-inspired algorithms, which simulate quantum processes on classical computers, supply a link between the quantum and classical computing paradigms that allows the fundamental phenomena of quantum mechanics to be studied in ML with traditional computational technologies [11]. By leveraging quantum computers, scientists can exploit novel properties that contribute to intelligent machine learning, opening a path to accelerated innovation in computer science and data science. ...
... Similarly, QNNs apply advantageous quantum properties, parallelism, and entanglement to neural networks' training and inference processes. Other major quantum algorithms include quantum clustering algorithms, quantum reinforcement learning models, and quantum generative adversarial networks [11,12]. Regardless of the approach, many existing quantum machine learning algorithms are still immature, and research is actively under way to improve their scalability, reliability, and practical applicability. ...
Article
Full-text available
This paper presents an extensive analysis of the emerging field of QML and offers a panoramic view of its disruptive potential within machine learning. Quantum computing is widely expected to transform a range of fields, machine learning among them. The paper's primary focus is a research analysis examining the capability of machine learning algorithms under quantum computing [1]. Because quantum machine learning algorithms exploit the principles of quantum mechanics, they could offer speedups that grow with the size of the problem being solved. A review and analysis of quantum machine learning (QML) algorithms is performed through a literature survey, exploring the current status, capabilities, constraints, and possible ramifications of the technology across sectors. Conventional algorithms can no longer keep pace with the growing complexity of data, which causes asymptotic growth in the number of calculations needed for processing. To cope with these difficulties, this research provides a thorough investigation of using quantum algorithms to handle massive datasets and complex optimization challenges [1]. Our study compares the classical and quantum machine learning paradigms. Alongside the many advantages of quantum computing, the issues raised by its integration into machine learning are highlighted. The research also aims to clarify a path for using quantum computation to make current data-reliant applications compatible with these systems. A deep analysis of QML and its capabilities will reveal insights into the future of machine learning [1].
By clarifying the unprecedented collaboration between quantum principles and machine learning algorithms, our research aims to pave the way for advances in data analysis, predictive modeling, and optimization across multidisciplinary domains, and ultimately to foster innovation. By analyzing QML as a whole, its innovative techniques, and the barriers that lie in its path, we strive to help researchers and practitioners exploit the unique properties of quantum computing to realize the potential of machine learning and data analytics.
... Quantum computers promise dramatic speedups over classical computers for a broad range of problems. Provable advantages in many cases, such as Hamiltonian simulation [1-6], quantum machine learning [7-12], and quantum search, arise in the so-called query model. The quantum gate costs of these quantum algorithms are typically dominated by queries made to a particular unitary oracle, with each oracle query having a gate cost that scales polynomially with the amount of classical data needed to encode the problem instance. ...
Preprint
Full-text available
Quantum access to arbitrary classical data encoded in unitary black-box oracles underlies interesting data-intensive quantum algorithms, such as machine learning or electronic structure simulation. The feasibility of these applications depends crucially on gate-efficient implementations of these oracles, which are commonly some reversible versions of the boolean circuit for a classical lookup table. We present a general parameterized architecture for quantum circuits implementing a lookup table that encompasses all prior work in realizing a continuum of optimal tradeoffs between qubits, non-Clifford gates, and error resilience, up to logarithmic factors. Our architecture assumes only local 2D connectivity, yet recovers results that previously required all-to-all connectivity, particularly, with the appropriate parameters, poly-logarithmic error scaling. We also identify novel regimes, such as simultaneous sublinear scaling in all parameters. These results enable tailoring implementations of the commonly used lookup table primitive to any given quantum device with constrained resources.
... With advances in technology, quantum theory has been leveraged to improve traditional ML methods using quantum computers (Biamonte et al., 2017). Quantum computers have led to improved versions of existing algorithms within the ML discipline, referred to as quantum algorithms (Adcock et al., 2015; Schuld et al., 2015; Wittek, 2014). Schuld et al. (2015) describe quantum ML as being designed from classical algorithms, creating the ability to run on quantum computers. ...
Thesis
Full-text available
Predicting wildfires using Machine Learning (ML) models is relevant and essential to minimize wildfire threats, protect human lives, and reduce significant property damage. Mixed results have been found in this domain, potentially because of dataset manipulations to enable multi-class classification: when two or more classes are used in wildfire prediction modelling, non-fire labels are created artificially, leading to a biased dataset for the non-fire data. This thesis discusses research that built wildfire prediction models using One-class classification algorithms. The significant features that influence wildfire ignition were derived from One-class ML models using the Shapley values method, which is a novel contribution to the wildfire prediction domain. Elevation, vapour pressure deficit, and dew point temperature were among the most influential features derived using the Shapley values method. The One-class algorithms used were the Support Vector Machine, Isolation Forest, neural network-based Autoencoder, and Variational Autoencoder models. The input features to the models were grouped based on topography, weather, plant fuel moisture, and population. Outcomes were validated using 5-fold cross-validation to avoid bias from the training and testing dataset selection in the ML models' performances. These One-class models achieved a high mean accuracy of 98-99%, exceeding multi-class models' performances in similar environmental conditions. The findings of the research have the potential to influence the state-of-the-art methods in wildfire prediction. Finally, a web-based tool to predict wildfires is presented as a proof of concept to show the usability of ML models for wildfire predictions.
... Quantum computing exploits the principles of quantum mechanics, like superposition and entanglement, to encode information and perform data operations using qubits [1,2,3]. Its significant potential for speeding up specific computational problems has attracted immense interest among researchers across diverse fields [4,5,6,7,8]. In particular, in specific scenarios, quantum computing of fluid dynamics (QCFD) could be more efficient than the methods in classical computational fluid dynamics (CFD) [9]. ...
Preprint
Full-text available
We propose a method for preparing the quantum state for a given velocity field, e.g., in fluid dynamics, via the spherical Clebsch wave function (SCWF). Using the pointwise normalization constraint for the SCWF, we develop a variational ansatz comprising parameterized controlled rotation gates. Employing the variational quantum algorithm, we iteratively optimize the circuit parameters to transform the target velocity field into the SCWF and its corresponding discrete quantum state, enabling subsequent quantum simulation of fluid dynamics. Validations for one- and two-dimensional flow fields confirm the accuracy and robustness of our method, emphasizing its effectiveness in handling multiscale and multidimensional velocity fields. Our method is able to capture critical flow features like sources, sinks, and saddle points. Furthermore, it enables the generation of SCWFs for various vector fields, which can then be applied in quantum simulations through SCWF evolution.
... Quantum machine learning models are mathematically similar to classical machine learning models according to Biamonte et al. [3]. These QML algorithms were found to be more robust in learning to solve hard problems according to Peter Wittek [4]. ...
Preprint
Full-text available
Machine learning built on quantum computing principles has been found to offer an edge in the artificial intelligence and machine learning domains. The two fields are similar in that both involve higher-dimensional computation through complex linear algebra, which has given rise to Quantum Machine Learning. To make the most of currently available quantum computers, quantum machine learning algorithms are designed to run on Noisy Intermediate-Scale Quantum (NISQ) devices using hybrid quantum-classical methods of execution. This study uses the IBM Qiskit quantum framework together with existing classical machine learning frameworks such as scikit-learn and TensorFlow to examine how a quantum support vector machine algorithm can be used effectively when classical support vector machines lack the computational power. In this paper, we discuss how quantum principles provide these advantages, and what the limitations of using quantum machines are, with an example experiment run on quantum and classical machines using the IBM Qiskit framework.
... QML algorithms can notably become invaluable for data mining, data clustering, classification, and regression [15]. The increase in computational power has also greatly advanced the field of materials science, enabling atomistic-level descriptions of interactions and the simulation of a wide variety of materials across different length and time scales. However, existing classical methods have reached their limits in mimicking the complex nature of these interactions and are proving computationally very costly. ...
Preprint
Full-text available
Quantum machine learning (QML) leverages the potential of machine learning to explore subtle patterns in huge, complex datasets with quantum advantages, which can greatly reduce the time and resources necessary for computation. QML accelerates materials research through active screening of chemical space, identifying novel materials for practical applications and classifying structurally diverse materials given their measured properties. This study analyzes the performance of three quantum machine learning algorithms, viz. the variational quantum eigensolver (VQE), the quantum support vector machine (QSVM), and quantum neural networks (QNN), for the classification of transition metal chalcogenides and oxides (TMCs & TMOs). The analysis is performed on three datasets of different sizes containing 102, 192, and 350 materials, with TMCs and TMOs labelled as +1 and -1, respectively. By employing feature selection, classical machine learning achieves 100% accuracy, whereas QML achieves its highest performance of 99% and 98% on test and train data, respectively, with QSVM. This study establishes the competence of QML models in materials classification and examines the quantum circuits in terms of overfitting using the circuit descriptors expressibility and entangling capability. In addition, perspectives on QML in materials research with noisy intermediate-scale quantum (NISQ) devices are given.
... The advent of quantum computing has opened new prospects for augmenting machine learning architectures Schuld et al. [2015], Ciliberto et al. [2018], Zhang and Ni [2020], Biamonte et al. [2017], Wittek [2014], Abohashima et al. [2020], Peral-García et al. [2024], Caro et al. [2022]. Quantum computing offers substantial enhancements in computational efficiency and capacity, and is particularly adept at managing the high-dimensional data spaces prevalent in anomaly detection tasks. ...
Preprint
Open set anomaly detection (OSAD) is a crucial task that aims to identify abnormal patterns or behaviors in data sets, especially when the anomalies observed during training do not represent all possible classes of anomalies. The recent advances in quantum computing in handling complex data structures and improving machine learning models herald a paradigm shift in anomaly detection methodologies. This study proposes a Quantum Scoring Module (Qsco), embedding quantum variational circuits into neural networks to enhance the model's processing capabilities in handling uncertainty and unlabeled data. Extensive experiments conducted across eight real-world anomaly detection datasets demonstrate our model's superior performance in detecting anomalies across varied settings and reveal that integrating quantum simulators does not result in prohibitive time complexities. Our study validates the feasibility of quantum-enhanced anomaly detection methods in practical applications.
... artificial intelligence. Due to the enormous resources required for classical computers to perform neural network computations, designing machine learning algorithms that run on quantum computers and exploit their unique properties is receiving increasing attention [5,6,8,30-32,36]. Considering the respective characteristics and advantages of quantum and classical computers, many research efforts design hybrid quantum-classical algorithms to take advantage of current quantum hardware. ...
Preprint
Full-text available
With the rapid development of quantum computing technology, we have entered the era of noisy intermediate-scale quantum (NISQ) computers. Designing quantum algorithms that adapt to the hardware conditions of current NISQ devices and can already solve some practical problems has therefore become a focus for researchers. In this paper, we focus on quantum generative models in the field of quantum machine learning and propose a hybrid quantum-classical normalizing flow (HQCNF) model based on parameterized quantum circuits. Building on the ideas of classical normalizing flow models and the characteristics of parameterized quantum circuits, we design the form of the ansatz and the hybrid method of quantum and classical computing, and derive the form of the loss function for the case where quantum computing is involved. We test our model on the image generation problem. Experimental results show that our model is capable of generating images of good quality. Compared with other quantum generative models, such as quantum generative adversarial networks (QGAN), our model achieves a lower (better) Fréchet inception distance (FID) score, and compared with classical generative models, it completes the image generation task with significantly fewer parameters. These results demonstrate the advantage of the proposed model.
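The loss function of any normalizing flow, classical or hybrid, rests on the change-of-variables formula: the model's log-density is the base log-density minus the log of the Jacobian determinant of the transform. A minimal classical 1D sketch (our illustration with an affine flow, not the HQCNF ansatz) shows the mechanics:

```python
import math

def affine_flow_logpdf(y: float, scale: float, shift: float) -> float:
    """Log-density of y = scale*z + shift with a standard-normal base z,
    via the change-of-variables formula:
    log p(y) = log N(z; 0, 1) - log |d y / d z|."""
    z = (y - shift) / scale                       # invert the flow
    log_base = -0.5 * (z * z + math.log(2 * math.pi))
    log_det = math.log(abs(scale))                # |Jacobian| of the flow
    return log_base - log_det

# The resulting density is a valid probability density: it integrates to 1.
total = sum(math.exp(affine_flow_logpdf(-10 + 0.01 * i, 2.0, 0.5)) * 0.01
            for i in range(2001))
print(total)   # close to 1 on this wide grid
```

Training maximizes this log-likelihood over data; in the HQCNF the transform's parameters live in a parameterized quantum circuit instead of the scale and shift used here.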
Article
It is clear that the learning speed of feedforward neural networks is in general far slower than required, and it has been a major bottleneck in their applications for past decades. Two key reasons may be: (1) slow gradient-based learning algorithms are extensively used to train neural networks, and (2) all the parameters of the networks are tuned iteratively by such learning algorithms. Unlike these conventional implementations, this paper proposes a new learning algorithm called extreme learning machine (ELM) for single-hidden-layer feedforward neural networks (SLFNs), which randomly chooses hidden nodes and analytically determines the output weights of SLFNs. In theory, this algorithm tends to provide good generalization performance at extremely fast learning speed. The experimental results, based on a few artificial and real benchmark function approximation and classification problems including very large complex applications, show that the new algorithm can produce good generalization performance in most cases and can learn thousands of times faster than conventional popular learning algorithms for feedforward neural networks.
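The ELM recipe above can be sketched in a few lines: fix a random hidden layer, then solve for the output weights in closed form by regularized least squares. A minimal pure-Python sketch (illustrative parameter choices; real ELMs use a proper pseudoinverse):

```python
import math, random

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def elm_fit(xs, ys, hidden=20, ridge=1e-3, seed=0):
    """ELM sketch: random tanh hidden nodes, output weights solved
    analytically via the normal equations (H^T H + ridge*I) beta = H^T y."""
    rng = random.Random(seed)
    params = [(rng.uniform(-2, 2), rng.uniform(-2, 2)) for _ in range(hidden)]
    hidden_out = lambda x: [math.tanh(w * x + b) for w, b in params]
    H = [hidden_out(x) for x in xs]
    A = [[sum(H[n][i] * H[n][j] for n in range(len(xs)))
          + (ridge if i == j else 0.0) for j in range(hidden)]
         for i in range(hidden)]
    rhs = [sum(H[n][i] * ys[n] for n in range(len(xs))) for i in range(hidden)]
    beta = solve(A, rhs)
    return lambda x: sum(b * h for b, h in zip(beta, hidden_out(x)))

# Fit sin(x) on [0, pi]: no iterative training, one linear solve.
xs = [i * math.pi / 49 for i in range(50)]
ys = [math.sin(x) for x in xs]
model = elm_fit(xs, ys)
mse = sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
```

Because the only "training" is a single linear solve, the speed advantage over gradient descent claimed in the abstract follows directly.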
Chapter
This chapter reproduces B. Seckler's English translation of the paper by Vapnik and Chervonenkis in which they gave proofs for the innovative results they had obtained in draft form in July 1966 and announced in 1968 in their note in Soviet Mathematics Doklady. The paper was first published in Russian as Vapnik, V. N. and Chervonenkis, A. Ya., Teoriya Veroyatnostei i ee Primeneniya 16(2), 264-279 (1971). © Springer International Publishing Switzerland 2015. All rights reserved.
Article
We propose a novel approach for categorizing text documents based on the use of a special kernel. The kernel is an inner product in the feature space generated by all subsequences of length k. A subsequence is any ordered sequence of k characters occurring in the text though not necessarily contiguously. The subsequences are weighted by an exponentially decaying factor of their full length in the text, hence emphasising those occurrences that are close to contiguous. A direct computation of this feature vector would involve a prohibitive amount of computation even for modest values of k, since the dimension of the feature space grows exponentially with k. The paper describes how despite this fact the inner product can be efficiently evaluated by a dynamic programming technique. Experimental comparisons of the performance of the kernel compared with a standard word feature space kernel (Joachims, 1998) show positive results on modestly sized datasets. The case of contiguous subsequences is also considered for comparison with the subsequences kernel with different decay factors. For larger documents and datasets the paper introduces an approximation technique that is shown to deliver good approximations efficiently for large datasets.
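The quantity this kernel computes can be written down directly: for each shared length-k subsequence, multiply the gap-decay weights λ^span of its occurrences in both strings and sum. A brute-force sketch (exponential in k, so only for tiny strings; the paper's dynamic program computes the same value in O(k|s||t|)):

```python
from itertools import combinations

def ssk(s: str, t: str, k: int, lam: float) -> float:
    """Gap-weighted string subsequence kernel, by brute-force enumeration.
    Each occurrence of a length-k subsequence u at index tuple (i1..ik)
    is weighted lam**(ik - i1 + 1), its full span in the text."""
    def weights(x):
        w = {}
        for idx in combinations(range(len(x)), k):
            u = ''.join(x[i] for i in idx)
            span = idx[-1] - idx[0] + 1
            w[u] = w.get(u, 0.0) + lam ** span
        return w
    ws, wt = weights(s), weights(t)
    return sum(ws[u] * wt.get(u, 0.0) for u in ws)

# Worked example: "cat" and "car" share only the subsequence "ca",
# with span 2 in each string, so K = lam^2 * lam^2 = lam^4.
print(ssk("cat", "car", 2, 0.5))   # 0.5**4 = 0.0625
```

Near-contiguous occurrences get weight close to λ^k while widely gapped ones decay exponentially, which is exactly the "emphasising close-to-contiguous occurrences" behaviour described above.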
Article
The tutorial starts with an overview of the concepts of VC dimension and structural risk minimization. We then describe linear Support Vector Machines (SVMs) for separable and non-separable data, working through a non-trivial example in detail. We describe a mechanical analogy, and discuss when SVM solutions are unique and when they are global. We describe how support vector training can be practically implemented, and discuss in detail the kernel mapping technique which is used to construct SVM solutions which are nonlinear in the data. We show how Support Vector machines can have very large (even infinite) VC dimension by computing the VC dimension for homogeneous polynomial and Gaussian radial basis function kernels. While very high VC dimension would normally bode ill for generalization performance, and while at present there exists no theory which shows that good generalization performance is guaranteed for SVMs, there are several arguments which support the observed high accuracy of SVMs, which we review. Results of some experiments which were inspired by these arguments are also presented. We give numerous examples and proofs of most of the key theorems. There is new material, and I hope that the reader will find that even old material is cast in a fresh light.
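The primal soft-margin objective the tutorial derives can be minimized with a simple subgradient method. The sketch below uses a Pegasos-style update (our illustration on hypothetical toy data; practical SVM training uses the QP/SMO solvers the tutorial describes):

```python
def train_linear_svm(points, labels, lam=0.01, epochs=300):
    """Pegasos-style stochastic subgradient descent on the primal
    soft-margin objective lam/2*||w||^2 + mean hinge loss.
    The bias is folded in as a constant feature."""
    w = [0.0, 0.0, 0.0]
    t = 0
    for _ in range(epochs):
        for (x1, x2), y in zip(points, labels):
            t += 1
            eta = 1.0 / (lam * t)            # decaying step size
            x = (x1, x2, 1.0)
            margin = y * sum(wi * xi for wi, xi in zip(w, x))
            if margin < 1.0:                 # point inside the margin
                w = [(1 - eta * lam) * wi + eta * y * xi
                     for wi, xi in zip(w, x)]
            else:                            # only shrink (regularization)
                w = [(1 - eta * lam) * wi for wi in w]
    return w

# Linearly separable toy data (hypothetical):
pts = [(2.0, 2.0), (3.0, 1.0), (2.5, 3.0), (-2.0, -1.0), (-3.0, -2.0), (-1.0, -2.5)]
lbs = [1, 1, 1, -1, -1, -1]
w = train_linear_svm(pts, lbs)
```

Points with margin below 1 pull the hyperplane toward themselves, which is a concrete view of the support-vector mechanics discussed above.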
Article
This paper combines quantum computation with classical neural network theory to produce a quantum computational learning algorithm. Quantum computation uses microscopic quantum level effects to perform computational tasks and has produced results that in some cases are exponentially faster than their classical counterparts. The unique characteristics of quantum theory may also be used to create a quantum associative memory with a capacity exponential in the number of neurons. This paper combines two quantum computational algorithms to produce such a quantum associative memory. The result is an exponential increase in the capacity of the memory when compared to traditional associative memories such as the Hopfield network. The paper covers necessary high-level quantum mechanical and quantum computational ideas and introduces a quantum associative memory. Theoretical analysis proves the utility of the memory, and it is noted that a small version should be physically realizable in the near future.
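For contrast with the exponential-capacity quantum memory, the classical Hopfield baseline mentioned above is easy to sketch: Hebbian weights store the patterns, and iterated thresholding recalls a stored pattern from a corrupted cue (a minimal sketch of the classical model, not the paper's quantum construction):

```python
def hopfield_store(patterns):
    """Hebbian weight matrix for a Hopfield associative memory over
    +/-1 patterns; capacity scales only linearly (~0.14 n) in the
    number of neurons n, unlike the quantum memory's exponential claim."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:                    # no self-coupling
                    W[i][j] += p[i] * p[j] / n
    return W

def hopfield_recall(W, state, steps=5):
    """Synchronous threshold updates; converges to a stored attractor."""
    s = list(state)
    for _ in range(steps):
        s = [1 if sum(W[i][j] * s[j] for j in range(len(s))) >= 0 else -1
             for i in range(len(s))]
    return s

p = [1, -1, 1, 1, -1, -1, 1, -1]
W = hopfield_store([p])
noisy = p[:]
noisy[0] = -noisy[0]                          # corrupt one bit
print(hopfield_recall(W, noisy) == p)         # True: pattern recovered
```

The linear capacity of this classical scheme is precisely the limitation the quantum associative memory is designed to overcome.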