Quantum versus Classical Generative Modelling in Finance
Abstract
Finding a concrete use case for quantum computers in the near term is still an open question, with machine learning typically touted as one of the first fields which will be impacted by quantum technologies. In this work, we investigate and compare the capabilities of quantum versus classical models for the task of generative modelling in machine learning. We use a real-world financial dataset consisting of correlated currency pairs and compare two models in their ability to learn the resulting distribution: a restricted Boltzmann machine and a quantum circuit Born machine. We provide extensive numerical results indicating that the simulated Born machine always at least matches the performance of the Boltzmann machine in this task, and demonstrates superior performance as the model scales. We perform experiments on both simulated and physical quantum chips using the Rigetti Forest platform, and are also able to partially train the largest instance to date of a quantum circuit Born machine on quantum hardware. Finally, by studying the entanglement capacity of the trained Born machines, we find that entanglement typically plays a role in the problem instances which demonstrate an advantage over the Boltzmann machine.
Keywords: Generative modelling, Born machine, Boltzmann machine, finance.
1 Introduction
The predictive power of machine learning algorithms is limited by the quality of the datasets used to train the models. In the age of big data, possessing high-quality data can offer a significant competitive advantage to institutions which use machine learning in their core business operations, such as Facebook, Google and Amazon. However, for many organizations high-quality data can be scarce: training data for industrial problems are often plagued by erroneous information, restricted by privacy considerations, and prone to overfitting. Hence, high-quality data can be expensive or even impossible to obtain, especially for machine learning applications at industrial scales. Synthetic data generation (SDG) bridges this gap, allowing better machine learning models to be trained when such data is not readily available. Rather than collecting raw data, SDG uses statistical methods, simulation modeling, and neural networks to generate a synthetic equivalent of the real-world dataset (i.e. sample generation). SDG allows users to overcome data scarcity, avoid privacy issues, and mitigate overfitting, all at lower cost. This is achieved because SDG removes erroneous or mislabeled data: each sample is generated from predefined parameters, producing clean, machine-learning-ready datasets. SDG can also produce realistic data for unobserved scenarios, in order to train more generalized models. In machine learning terms, SDG is typically achieved by generative modelling, or distribution learning. Using quantum models for SDG has garnered interest due to the ease of generating data samples (alternatively, performing the 'inference' step) from a quantum distribution, whereas even the very act of sample generation can be difficult classically, as we elaborate on throughout the text.
In terms of quantum capabilities, we are now firmly in the noisy intermediate-scale quantum (NISQ) [1] era, where we have access to small, error-prone quantum computers which are nevertheless sufficiently powerful to address problems that are not classically simulatable [2]. However, finding a useful application for such devices is a non-trivial task, with quantum chemistry [3, 4] and quantum optimization [5, 6] being the usual suspects in which to search. Problems in finance have also proved to be a fruitful area of study [7, 8, 9, 10]. With each discovered use case, an argument is frequently required as to why such a problem could not have been tackled by purely classical methods. The primary approaches to gaining an advantage with quantum computers study the computational time complexity of solving these problems. The claims of exponential speedups [11, 12] in these cases usually rely on the presumed absence of unlikely relationships between computational complexity classes. However, simply solving a problem faster is not the only way in which quantum computers can gain victories. Alternatively, one can examine other relevant problem dimensions, such as the accuracy of the solution, which is the goal we aim for in this work.
We explore two different machine learning approaches for generating synthetic financial market data. One model is completely classical (although trained using simulated quantum methods): the restricted Boltzmann machine (RBM); the other is completely quantum in nature: a quantum circuit Born machine (QCBM). This is similar to other recent works [13], which addressed financial problems with these two models and found that the Born machine has the capacity to outperform the Boltzmann machine when it comes to generating synthetic data. In this work, we draw a similar conclusion, while enforcing similar constraints on both models in order to draw a fair comparison.
In Section 2 we discuss the main ideas involved in generative modelling, and elaborate on the two models we use for this task. We also discuss the financial dataset we use for training. In Section 3, we detail the specific architectures for the Boltzmann and Born machines, namely the underlying graph structures and the circuit Ansätze for the QCBM. In Section 4, we describe the training protocols we use for each model and finally in Section 5 we detail the numerical results we find, and showcase examples where the Born machine outperforms the Boltzmann machine in learning the financial dataset. We present simulated and experimental results implemented on the Rigetti QPU [14] using Quantum Cloud Services (QCS™). Finally, we conclude in Section 6 and discuss future work.
2 Generative Modelling
Generative models are powerful machine learning models which essentially aim to learn a probability distribution, denoted π, over some data (say vectors, y), sampled as y ∼ π(y). A typical use case is in classification tasks, where a generative model seeks to learn the joint distribution over data and labels. We assume the distribution in question is defined over the space of binary strings of length n, y ∈ {0,1}ⁿ. A generative model is typically parameterised by some parameters, θ, and is represented by an output 'model' distribution over the data which is a function of those parameters, p_θ(y). The goal of training a generative model is to force the model distribution as close as possible to the data distribution, relative to some measure. This is done by finding a suitable setting of the parameters, typically using some optimization routine. In practice, however, we typically do not have access to the true distributions (meaning their explicit probability density functions or otherwise). This is inevitably true when using implicit models (models for which we do not have explicit access to the underlying probability density function [15]) like generative adversarial networks (GANs) [16] or quantum circuit distributions, which by their very nature are not directly accessible [2] due to their classical intractability. In this work, we assume we have samples from the model distribution, y ∼ p_θ(y), and samples from the data distribution, y ∼ π(y).
Common use cases for generative models are in image generation, but they have also received interest from the quantum computing community, as the acceleration of generative-model training using quantum techniques was one of the early areas of interest in the field of quantum machine learning [17]. The focus of the area has shifted somewhat in recent years, from accelerating training and inference of classical models using quantum techniques, to the development of completely new models in the quantum world. One of the earliest examples is the quantum Boltzmann machine (QBM), a generalization of the classical Boltzmann machine (see Section 2.2). This was followed by the introduction of Born machines [18] and quantum circuit Born machines (QCBMs) [19, 20], which sample from the fundamentally quantum distribution underlying a pure state of a quantum system. Among the most recent additions to this family are Hamiltonian-based models and the variational quantum thermalizer (VQT) [21], which generalizes all of the above, since it contains the distribution provided by a mixed quantum state as the underlying model. The latter is also a generalization of 'energy-based' models, of which the Boltzmann machine is an example. Furthermore, quantum generative models are some of the most promising applications for near-term quantum computers, since their nature aligns them closely with demonstrations of 'quantum supremacy' [2]; such connections have recently been made [22, 23], with extensions into different architectures [24].
In this work we focus on two of these models in order to make a direct comparison and study any potential indication of quantum advantage for these models over purely classical generative models. We investigate a Born machine and a restricted Boltzmann machine (RBM) and make a thorough comparison between the two for a generative modelling task. We do this using a realistic dataset in a financial application, which facilitates a simple way to compare the models at differing scales. Our motivation is the work of [25], which showed that an RBM outperforms, on this dataset, the parametric models commonly used in the finance industry. This was subsequently followed by the outperformance of the RBM itself by a Born machine [13] on the same dataset. However, the degree to which this advantage was observable was not obvious. This research and [13] supplement the work of [26], which demonstrated a similar outperformance of an RBM by a QCBM, but in a different problem domain. Our work expands on the latter by running larger problem instances on simulators and physical hardware, using alternative training methods, and also using alternative methods of comparing the models. Finally, drawing a comparison between a Born and a Boltzmann machine is part of the goal of [18], which considers the problem from a mutual information point of view. The authors further conjecture that properties such as the mutual information of the dataset, and the entanglement entropy in the target problem and/or model, would be useful in determining problems where the Born machine could have superior performance over an RBM.
2.1 Born Machine
A Born machine [18] is a fundamentally quantum model, which achieves synthetic data generation by generating samples according to Born's rule of quantum mechanics. The fundamentally non-classical nature of the model has provided motivation for why it can outperform classical models in at least its expressive power [22]. This expressive power translates into an ability to represent certain distributions efficiently which cannot be represented by any classical model, for example those utilized in a recent demonstration of quantum computational supremacy [2].
In the most common scenario, a binary sample, y, is generated by measuring a quantum state, ρ, according to:

p(y) = Tr(Π_y ρ)   (1)
where Π_y = |y⟩⟨y| is the projector onto the computational basis state described by y. In order to obtain a trainable machine learning model, we parameterize the state: ρ → ρ(θ). We further consider the scenario where the parameterised state is a pure state, i.e. ρ(θ) = |ψ(θ)⟩⟨ψ(θ)|. In this case, the correlations present in the model will be of a purely quantum nature. The parameterised distribution is then:

p_θ(y) = |⟨y|ψ(θ)⟩|²   (2)
Finally, if the state is generated by a quantum circuit (as opposed to, for example, by a continuous-time Hamiltonian evolution), the model is referred to as a quantum circuit Born machine [20, 19] (QCBM). In this form, the ease of performing inference becomes apparent: once trained, the parameterized quantum state is prepared by a quantum circuit and then simply measured. The measurement results then constitute an (approximate) sample from the data distribution. Furthermore, utilizing quantum randomness as a sample-generation mechanism in this way removes the need to input randomness into the model, as is usually done when building GANs. However, we note that inputting randomness has been considered in the quantum case [27] as well, although the advantage of doing so has yet to be explored.
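As an illustration of this sampling mechanism, the following sketch draws bitstring samples from a statevector according to Born's rule (pure NumPy; the Bell state here is an illustrative stand-in, not a trained QCBM state):

```python
import numpy as np

def born_sample(state, n_samples, rng):
    """Draw bitstrings from a pure state: Pr[y] = |<y|psi>|^2."""
    probs = np.abs(state) ** 2
    n = int(np.log2(len(state)))
    outcomes = rng.choice(len(state), size=n_samples, p=probs)
    return [format(o, f"0{n}b") for o in outcomes]

# Bell state (|00> + |11>)/sqrt(2): only '00' and '11' can be observed.
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
samples = born_sample(bell, 2000, np.random.default_rng(0))
```

On hardware, the `probs` array is of course never computed explicitly; each shot of the circuit directly yields one bitstring.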
A generalization of the above can be achieved by relaxing the purity assumption of the underlying state, and doing so results in quantum Hamiltonian based models [21], which instead can carry both classical and quantum correlations.
In order to find a good fit to the data distribution, such that the model can effectively generate synthetic data, an optimization routine is invoked to search over the space of possible states. Since Born machines are implicit models, careful consideration must be given to the choice of optimization routine, since any optimizer must be able to deal effectively, and efficiently, with samples alone. One may consider quantum training procedures [28], but more commonly the optimization procedure will be a fully classical routine. This makes these models hybrid quantum-classical in nature and therefore friendly to NISQ devices, only using quantum resources when necessary.
2.2 Boltzmann Machine
Generalized Boltzmann machines (GBMs) are graphical models with powerful synthetic data-generation capabilities. While GBMs can vary significantly in terms of how they are applied to various problems and their particular architectures (see some example architectures in Figure 2), they all share some defining characteristics. The model architecture is defined by a graph G = (V, E), which consists of a set of edges, E, and nodes (vertices), which we denote V. Each edge, e ∈ E, has a corresponding edge weight, w_e. In full generality, the edges can also be self-loops (biases, in standard Boltzmann machine terminology) or hyperedges (edges connecting more than two nodes), as illustrated in Figure 2(c). Crucially, the nodes are typically partitioned into visible and hidden nodes, V = (v, h). The visible nodes directly model some aspect of the data distribution, while the hidden nodes are used for capturing features of the data, and are not tied to any particular aspect of it. As such, the hidden nodes typically correspond directly to the expressive power of the model. Finally, a sample generated by the GBM is distributed according to the Boltzmann distribution:
p_W(v) = (1/Z) Σ_h exp(−β E_W(v, h))   (3)

p_W(v) is the probability of observing the visible nodes in some state v, and describes the model distribution for the Boltzmann machine. Here v is a particular state (corresponding to a binary vector) of the visible nodes in V. In this case, the model parameters are the weights of the machine, W = {w_e}_{e∈E}. E_W and Z are the model energy (defining an energy-based model [29]) and partition function respectively, defined by:

E_W(z) = −Σ_{e∈E} w_e ∏_{k∈N(e)} z_k   (4)

Z = Σ_z exp(−β E_W(z))   (5)

The notation N(e) refers to the nodes connected to edge e, and β is an effective inverse-temperature term. The sum in Equation (5) is taken over all possible binary vectors z = (v, h).
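For intuition, the model distribution of a small GBM can be computed exactly by brute-force enumeration of Equation (3). The toy weights below (two nodes, two biases/self-loops, and one coupling) are illustrative only:

```python
import itertools
import numpy as np

def boltzmann_dist(weights, beta=1.0):
    """Exact Boltzmann distribution over binary vectors z in {0,1}^n.

    `weights` maps an edge (a tuple of node indices; a 1-tuple is a
    bias / self-loop) to its weight w_e, with the model energy
    E(z) = -sum_e w_e * prod_{k in e} z_k.
    """
    n = 1 + max(k for e in weights for k in e)
    zs = list(itertools.product([0, 1], repeat=n))
    unnorm = np.array([
        np.exp(beta * sum(w * np.prod([z[k] for k in e])
                          for e, w in weights.items()))
        for z in zs
    ])
    return dict(zip(zs, unnorm / unnorm.sum()))  # division by Z

# A positive coupling between nodes 0 and 1 favours the (1, 1) state.
dist = boltzmann_dist({(0,): 0.5, (1,): 0.5, (0, 1): 1.0})
```

This enumeration is exponential in n, which is exactly why practical training relies on sampling methods rather than on Z directly.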
In this work, we focus specifically on the restricted version of the Boltzmann machine (RBM), corresponding to Figure 2(b), and we discuss this specification further in Section 3.2. In this case, we denote the model distribution p_W(v) = Σ_h p_W(v, h), which is generated by marginalizing over the hidden units.
Finally, while all of the above is purely classical (in contrast to the Born machine, a GBM carries only classical correlations), the extension of the model itself into the quantum world has also been proposed in the quantum Boltzmann machine [30, 31, 32, 33], as we mentioned above. In this framework, the energy function, Equation (4), is replaced by a quantum Hamiltonian, and the model distribution in question is generated by sampling from the thermal state of this Hamiltonian, mimicking a Boltzmann distribution. This thermal state can be prepared either by quantum annealing [30] or by a gate-based approach [34]. By introducing off-diagonal terms in this Hamiltonian, non-trivial quantum behavior can be exploited, and the model inherits some characteristics of a Born machine (i.e. some of the randomness originates from Born's rule).
In this work, however, we treat the GBM as a completely classical object, which we detail in Section 3, although we do leverage quantum-inspired training methods, discussed in Section 4. Furthermore, as mentioned, we only study the RBM here; we discuss the extension of our methods to more general Boltzmann machine structures in Section 6.
2.3 A Financial Dataset
In order to perform SDG, we require some dataset to learn. In this work, we focus on one of a financial origin, in particular one considered by [25]. This dataset contains samples of daily log-returns of currency pairs (see Figure 3). In order to fit the binary architecture of the Born and Boltzmann machines, the spot prices of each currency pair are converted to binary values at a fixed precision, resulting in samples given by fixed-length binary strings. This discretisation provides a convenient method for fitting various problem sizes onto models with different numbers of qubits or visible nodes, for the Born machine or RBM respectively. In particular, we can tune both the number of currency pairs, m, and the precision of each pair, b, so that the problem size is described by a tuple (m, b). For example, as we revisit in Section 3, a Born machine of a given size can be tasked to learn the distribution of fewer currency pairs at a higher precision, or more pairs at a lower precision.
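A simple version of this discretisation can be sketched as follows; the uniform binning used here is an assumption for illustration, not necessarily the exact scheme of [25]:

```python
import numpy as np

def discretise(log_returns, bits):
    """Map each log-return to a `bits`-bit binary string by uniform
    binning between the observed minimum and maximum."""
    lo, hi = log_returns.min(), log_returns.max()
    levels = 2 ** bits
    idx = ((log_returns - lo) / (hi - lo) * levels).astype(int)
    idx = np.minimum(idx, levels - 1)  # the top edge maps to the last bin
    return [format(i, f"0{bits}b") for i in idx]

# Synthetic stand-in for daily log-returns of one currency pair.
rng = np.random.default_rng(1)
returns = rng.normal(0.0, 0.01, size=200)
codes = discretise(returns, bits=3)
```

Concatenating the codes of several pairs then yields the fixed-length binary samples the models are trained on.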
3 Model Structures
Here we provide specific details about the model architectures we choose, in order to make as fair a comparison as possible. In the first instance, we choose to train only the bias terms in the RBM (the self-loops in Figure 2) for simplicity. We also fix the number of parameters in the Born machine via its number of layers, and then match the number of parameters in the RBM to this, since it is simple to grow the number of RBM parameters by adding extra nodes.
3.1 Born Machine Ansatz
The Ansatz which we use for the QCBM is hardware-efficient, as we endeavor to run the model on real quantum hardware. We also restrict the number of parameters in the circuit to match the number used in the RBM, following [26]. We choose this hardware-native approach to closely fit the structure of Rigetti's chip design (the structures of the Aspen-7 and Aspen-8 can be seen in Figure 4). Furthermore, we solely parameterize the single-qubit unitaries, to avoid the compilation overheads arising from parameterizing two-qubit unitaries. If we were to parameterize the latter, we could employ a strategy similar to [35], which uses 'blocks' of parameterized unitaries in such a way as to enforce a linear scaling of the number of parameters with the number of qubits, when building a quantum classifier.
We run all experiments using the Rigetti Aspen-7 and Aspen-8 chips, although some qubits on each chip are unavailable in practice. Each QPU can be divided into sublattices containing fewer qubits; some examples can be seen in Figure 5. The largest sublattice on the Aspen-7 chip is the Aspen-7-28Q-A, which contains 28 usable qubits (seen in Figure 5(f)).
For each of the lattices in Figure 5, we fit the native entanglement structure using hardware-native two-qubit gates, and layers of single-qubit rotations. For convenience, we use arbitrary single-qubit rotation gates, which can be decomposed into the Rigetti native single-qubit rotations. The first 'layer' contains only single-qubit gates, and each layer thereafter consists of the hardware-native two-qubit gates, plus a layer of single-qubit gates. For the 4-qubit sublattice, Aspen-7-4Q-C, we illustrate this in Figure 6. For the other sublattices in Figure 5, we illustrate the entanglement structure of the first layer of the circuits in Figure 7. In this way, an n-qubit QCBM with L layers has a number of trainable parameters scaling as O(nL).
For the above circuits, we compute the average Meyer–Wallach [36] entanglement capacity, a measure of entanglement in quantum states proposed as a method of comparing different circuit Ansätze by [37]. This measure has been used in a similar context by [38], in order to draw connections between Ansatz structure and classification accuracy. For a given input state, |ψ⟩, the entanglement measure is defined as:
Q(|ψ⟩) = (4/n) Σ_{j=1}^{n} D(ι_j(0)|ψ⟩, ι_j(1)|ψ⟩)   (6)

D(|u⟩, |v⟩) = (1/2) Σ_{i,j} |u_i v_j − u_j v_i|²   (7)
where D(·, ·) is a particular distance between two quantum states, |u⟩ = Σ_i u_i |i⟩ and |v⟩ = Σ_j v_j |j⟩. This distance can be understood as the square of the area of the parallelogram created by the vectors |u⟩ and |v⟩. The notation ι_j(b), for b ∈ {0, 1}, is a linear map which acts on computational basis states as follows:

ι_j(b)|b_1 b_2 … b_n⟩ = δ_{b b_j} |b_1 … b̂_j … b_n⟩   (8)
where b̂_j indicates the absence of qubit j. For example, ι_1(0)|01⟩ = |1⟩, while ι_1(1)|01⟩ = 0. However, to evaluate Q for a quantum state, we instead use the equivalent formulation derived by [39], which involves computing the purities of each single-qubit subsystem of the state |ψ⟩:

Q(|ψ⟩) = 2 (1 − (1/n) Σ_{k=1}^{n} Tr[ρ_k²])   (9)
where ρ_k is the reduced state of qubit k, obtained by taking the partial trace over every subsystem of |ψ⟩ except k. This reformulation of Q allows more efficient computation and gives operational meaning, since the purity of a quantum state is efficiently computable. Given Q, we define Ent [37] as the average value of Q over a set, S, of randomly chosen parameter instances, θ_i ∈ S:

Ent = (1/|S|) Σ_{θ_i ∈ S} Q(|ψ(θ_i)⟩)   (10)
For the circuit Ansätze we choose, the value of is plotted for a given number of layers in Figure 7.
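Using the purity formulation of Equation (9), Q can be computed directly from a statevector. The sketch below checks the two extremes: a product state (Q = 0) and a maximally entangled Bell state (Q = 1); both inputs are illustrative:

```python
import numpy as np

def meyer_wallach(state, n):
    """Q(|psi>) = 2 * (1 - (1/n) * sum_k Tr[rho_k^2]) for an n-qubit pure state."""
    psi = np.asarray(state).reshape([2] * n)
    total_purity = 0.0
    for k in range(n):
        # Reduced density matrix of qubit k: trace out all other qubits.
        m = np.moveaxis(psi, k, 0).reshape(2, -1)
        rho_k = m @ m.conj().T
        total_purity += np.trace(rho_k @ rho_k).real
    return 2.0 * (1.0 - total_purity / n)

product = np.kron([1, 0], [1, 0]).astype(complex)          # |00>
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # (|00>+|11>)/sqrt(2)
```

Averaging this quantity over random parameter draws then gives the Ent estimate of Equation (10).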
3.2 Boltzmann Machine Structure
Given the above choice of Born machine Ansatz, we can build a corresponding restricted Boltzmann machine which has n visible nodes (where n is the number of qubits) and a number of hidden nodes chosen so that the total parameter count matches that of the Born machine. To reiterate, we fix the RBM weights to random values and train only the local biases. We revisit weight training in B.3.
4 Training Procedures
In order to fit the model distribution to the data, one needs some means of comparing how close the two distributions are. Typically, this comes in the form of a cost function, C(θ). In this work, we consider a variety of cost functions with which to compare the two models we investigate.
This cost function is then minimized during the training procedure, to find a setting of the parameters such that C(θ) is as small as possible. Gradient descent (GD) is a common method to minimize such costs in machine learning, as it follows the steepest direction of descent in the parameter landscape defined by C(θ). GD proceeds over a number of 'epochs', where in each epoch (t) the parameters are updated as follows:

θ^{(t+1)} = θ^{(t)} − u(θ^{(t)})   (11)

u(θ) is the update rule defining how each parameter should be updated, depending on the current value of θ, and it enters with a negative sign since we wish to go downhill in the parameter landscape. The 'vanilla' form of gradient descent simply uses an update of the form u(θ) = η ∂C/∂θ, where η is a learning rate and ∂C/∂θ is the partial derivative of the cost with respect to the current parameters. Computing this gradient efficiently can be a non-trivial procedure, and it is estimated given the data. More complicated update rules, such as Adam [40], are also possible; these add terms like 'momentum' to the update rule, to improve convergence speed.
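The vanilla update can be sketched in a few lines; the quadratic toy objective here is purely illustrative:

```python
import numpy as np

def gradient_descent(grad_fn, theta0, eta=0.1, epochs=100):
    """Vanilla GD: theta <- theta - eta * grad(theta), repeated each epoch."""
    theta = np.array(theta0, dtype=float)
    for _ in range(epochs):
        theta = theta - eta * grad_fn(theta)
    return theta

# Minimise C(theta) = (theta - 3)^2, whose gradient is 2 * (theta - 3).
theta_star = gradient_descent(lambda t: 2 * (t - 3), [0.0])
```

In the models below, only `grad_fn` changes: for the Born machine it is estimated via the parameter-shift rule, and for the RBM via moment matching.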
4.1 Born Machine Training
The primary cost function we choose to train the Born machine is the Sinkhorn divergence (SHD), a recently defined [41, 42, 43] method of distribution comparison related to optimal transport (OT) [44], which is known to be a relatively powerful metric between probability distributions. It is defined as:
SHD_ε(p_θ, π) = OT_ε(p_θ, π) − (1/2) OT_ε(p_θ, p_θ) − (1/2) OT_ε(π, π)   (12)

OT_ε(p_θ, π) = min_{U ∈ 𝒰(p_θ, π)} Σ_{x,y} c(x, y) U(x, y) + ε KL(U | p_θ ⊗ π)   (13)

where ε is a regularisation parameter, and 𝒰(p_θ, π) is the set of all couplings between p_θ and π, i.e. the set of all joint distributions, U, whose marginals with respect to p_θ and π are p_θ and π respectively. KL is the Kullback–Leibler [45] divergence (also relative entropy) between the coupling, U, and a product distribution composed of the model and the data, p_θ ⊗ π. The introduction of the entropy term smooths the problem, so that it becomes more easily solvable, as a function of ε.
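For intuition, the entropy-regularised OT cost between two small discrete distributions can be approximated with the classic Sinkhorn scaling iterations, and the divergence then assembled as in Equation (12). The sketch below is a simplification (it returns only the transport-cost term of the converged coupling, omitting the KL penalty):

```python
import numpy as np

def sinkhorn_ot(p, q, C, eps, iters=500):
    """Approximate entropy-regularised OT between discrete distributions
    p and q with cost matrix C, via Sinkhorn matrix scaling."""
    K = np.exp(-C / eps)
    u = np.ones_like(p)
    for _ in range(iters):
        v = q / (K.T @ u)
        u = p / (K @ v)
    coupling = u[:, None] * K * v[None, :]  # marginals approach p and q
    return float((coupling * C).sum())

def sinkhorn_divergence(p, q, C, eps):
    """SHD(p, q) = OT(p, q) - OT(p, p)/2 - OT(q, q)/2."""
    return (sinkhorn_ot(p, q, C, eps)
            - 0.5 * sinkhorn_ot(p, p, C, eps)
            - 0.5 * sinkhorn_ot(q, q, C, eps))

# Two distributions over the 1-bit strings {'0', '1'}; Hamming cost.
C = np.array([[0.0, 1.0], [1.0, 0.0]])
p = np.array([0.9, 0.1])
q = np.array([0.5, 0.5])
d = sinkhorn_divergence(p, q, C, eps=0.1)
```

The symmetric correction terms ensure the divergence vanishes when the two distributions coincide.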
We use this cost function since we numerically found it to be the best choice, in terms of speed and accuracy of training. However, we provide a comparison to the maximum mean discrepancy () cost function, training with respect to an adversarial discriminator and a gradient free genetic algorithm in A.
As shown in [22], we can derive gradients of the Sinkhorn divergence with respect to a given parameter, θ_k, since each parameterised gate we employ has the form U(θ_k) = exp(−i θ_k Σ_k / 2), where Σ_k² = 𝟙. Using the parameter-shift rule [46, 47], the gradient can be written as follows:

∂SHD_ε/∂θ_k = Σ_y (∂p_θ(y)/∂θ_k) φ(y)   (14)

∂p_θ(y)/∂θ_k = (1/2) [ p_{θ_k⁺}(y) − p_{θ_k⁻}(y) ]   (15)

∂SHD_ε/∂θ_k = (1/2) [ ⟨φ⟩_{p_{θ_k⁺}} − ⟨φ⟩_{p_{θ_k⁻}} ]   (16)

where θ_k^± denotes the parameter vector with the k-th parameter shifted by ±π/2.
The function φ is defined [43] in order to ensure the gradient extends to the entire sample space, and is given for each sample, y, by:

φ(y) = f(y) − g(y)   (17)
Therefore, one can compute the gradient by drawing samples from the shifted distributions, and computing the vector φ(y) for each sample, y. The functions f and g in Equation (17) are optimal Sinkhorn potentials, arising from a primal–dual formulation of optimal transport. These are computed using the Sinkhorn algorithm, which gives the divergence its name [48], using C, the optimal transport cost matrix derived from the cost function applied to all pairs of samples, and LSE, a log-sum-exp reduction over a vector. For further details on how the functions f and g, the Sinkhorn divergence, and its gradient are computed, see [43, 22].
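The parameter-shift step itself is easy to check on a closed-form expectation. For a gate exp(−iθΣ/2) with Σ² = 𝟙, shifting θ by ±π/2 gives the exact derivative; for example, the expectation ⟨Z⟩ after applying RY(θ) to |0⟩ is cos(θ):

```python
import numpy as np

def parameter_shift(f, theta, shift=np.pi / 2):
    """Exact derivative of an expectation generated by exp(-i*theta*Sigma/2)
    with Sigma^2 = I: f'(theta) = (f(theta + s) - f(theta - s)) / 2, s = pi/2."""
    return 0.5 * (f(theta + shift) - f(theta - shift))

# <Z> after RY(theta)|0> is cos(theta); its derivative is -sin(theta).
grad = parameter_shift(np.cos, 1.2)
```

On a device, `f` would instead be estimated from measurement shots at the two shifted parameter settings, making the rule directly compatible with sample-based cost functions.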
4.2 Boltzmann Machine Training
For the RBM, we use the standard protocol of maximizing the log-likelihood function (equivalent to minimizing an empirical cost; the maximization adds an extra negative sign to the update rule), i.e. the probability of generating vectors belonging to a training set 𝒟:
ℒ(W) = Σ_{v ∈ 𝒟} log p_W(v)   (18)

where W are the Boltzmann machine model parameters and p_W(v) is the probability of generating the data vector v. For a particular data vector, v, we can take the log-likelihood as our cost function, as a function of the model parameters [49], which results in the gradient:

∂ℒ/∂w_e = β ( ⟨∏_{k∈N(e)} z_k⟩_data − ⟨∏_{k∈N(e)} z_k⟩_model )   (19)

wherein the w_e of (4) are the model parameters coupling their respective set of nodes, N(e), and ⟨·⟩_data, ⟨·⟩_model are the expectation values calculated from the data and model distributions respectively, where each z_k is taken to be a random variable taking values {0, 1}.
As an example, consider an update to a (visible node) bias term, b_i (i.e. a self-loop in Figure 2). Since the edge connects only one node, we have N(e) = {i} and the product reduces to z_i alone. The gradient is then computed using the expectation value of that node:

∂ℒ/∂b_i = β ( ⟨z_i⟩_data − ⟨z_i⟩_model )   (20)
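This bias update requires only first moments estimated from samples, as in the following sketch (the Bernoulli arrays stand in for data- and model-generated bitstrings; β is set to 1):

```python
import numpy as np

def bias_gradient(data_samples, model_samples, beta=1.0):
    """Log-likelihood gradient w.r.t. the visible biases:
    beta * (<z_i>_data - <z_i>_model), estimated from binary samples."""
    return beta * (data_samples.mean(axis=0) - model_samples.mean(axis=0))

rng = np.random.default_rng(2)
data = (rng.random((2000, 4)) < 0.8).astype(float)   # data with <z_i> ~ 0.8
model = (rng.random((2000, 4)) < 0.5).astype(float)  # model with <z_i> ~ 0.5
grad = bias_gradient(data, model)  # positive: the biases should increase
```

The data term is trivial to compute; the difficulty lies entirely in producing the model samples, which is where the sampler discussed below comes in.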
In this work, we use vanilla gradient descent as the update rule, but we note that we also considered more complex update rules and optimizers, such as Adam [40]; we found these only improved the convergence speed, and not the final accuracy of the model.
The above discussion has no quantum component, as the update rule and the model are completely classical. However, in order to actually compute the first- and second-order moment terms in (19) (using data- and model-generated bitstrings), we require a method of generating samples from the RBM. Unlike for the Born machine, sample generation from a Boltzmann machine is not a trivial matter. Typical approaches are based on Gibbs sampling, for example k-step contrastive divergence [50]. Here, we use a method called QxSQA, a GPGPU-accelerated simulated quantum annealer based on path-integral Monte Carlo (PIMC) [51]. This simulated QA has been shown to be useful for sampling Boltzmann-like distributions, and in previous research [53] we have shown the ability to use this sampling to train large quantum generalized Boltzmann machines (QGBMs) for the purposes of generating synthetic data based on images [52] and financial data [25, 13].
5 Results
Here we present the numerical results obtained using the models and training methods detailed above. In particular, we focus on training using the Sinkhorn divergence with the Adam optimiser [40] and its analytic gradients for the Born machine, and on log-likelihood maximization using QxSQA for the Boltzmann machine. In A, we revisit alternative training methods. As mentioned above, in the first instance we also fix both models to have only trainable local parameters, for simplicity. For the Born machine, this corresponds to training only the single-qubit unitaries, with the two-qubit gates being unparameterised; for the Boltzmann machine, this corresponds to training the biases of each node. We force each model to have the same number of trainable parameters in this way. The entanglement structure in the Born machine is fixed by the problem size, via the lattice topology, and the weights of the RBM are chosen to be random (but fixed) values in each instance. It is difficult to directly compare the connectivity of the models; however, we also experimented with randomly pruning the RBM weights to enforce the same number of connections as in the QCBM, and found this did not affect performance significantly.
As a method of benchmarking the expressive power of each model in a fair way, we use an adversarial discriminator, and judge performance relative to it. Specifically, we use a random forest discriminator from scikit-learn [54]. A higher discriminator error implies better performance, with an error of 0.5 indicating that the discriminator can at best guess randomly, when presented with a sample, as to its origin: whether it came from the real data or from the model. Where error bars are shown, they correspond to the mean and standard deviation of the training over 5 independent runs. As the QCBM scales, classical simulation becomes a bottleneck and limits the number of runs which can be performed.
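The benchmarking idea can be sketched with scikit-learn: label real samples 0 and model samples 1, train a random forest to tell them apart, and report its test error. In this toy check both sample sets are drawn from the same Bernoulli distribution, so the error should sit near 0.5 (the hyperparameters here are illustrative, not those used in the paper):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
real = (rng.random((600, 6)) < 0.5).astype(int)       # 'data' samples
synthetic = (rng.random((600, 6)) < 0.5).astype(int)  # 'model' samples

X = np.vstack([real, synthetic])
y = np.array([0] * len(real) + [1] * len(synthetic))
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
error = 1.0 - clf.score(X_te, y_te)  # near 0.5 for indistinguishable sets
```

A well-trained generative model pushes this error towards 0.5; a poor one lets the discriminator separate the two sets and drives the error towards 0.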
In summary, we find the Born machine has the capacity to outperform the RBM as the precision of the currency pairs increases. In Figure 9, we use currency-pair data at increasing bits of precision. We notice the Born machine outperforms the RBM at the higher precisions (measured by a higher discriminator error), and still performs relatively well when run on the QPU. Similar behaviour is observed for a larger number of currency pairs in Figure 11, and for more pairs again in Figure 12. In Figure 13, we plot the entangling capability (defined by (9)) of the states generated by the initial and final circuits learned via training. Curiously, we notice that in the problem instances in which the Born machine outperforms the Boltzmann machine (those with a higher level of precision), the trained circuits have a higher level of entanglement than those that do not, despite the data being completely classical in nature. This is especially prominent in Figure 13(a), in which the training drives the entanglement capability at the lower precisions close to zero (even for increased numbers of layers), but leaves it significantly higher at the precisions for which the Born machine outperforms the RBM, as seen in Figure 9. Similar behaviour is seen for the other pair counts, though it is less evident for the largest number of pairs. The latter effect is possibly correlated with the similar performance of both models in that case.
We are also able to somewhat successfully train the largest instance of a Born machine to date in the literature, on the Rigetti Aspen-7 chip, using the Sinkhorn divergence (the topology is shown in Figure 5(e)), and we find it performs surprisingly well. In Figure 14, we show the performance of this model versus a Boltzmann machine with the same number of visible nodes, and a suitable number of hidden nodes to match the number of parameters in the Born machine. While the performance of the Born machine is significantly below that of its counterpart, it is clear that the model is learning, despite hardware errors. While this result may seem to contradict the previous findings in this work, we emphasize that it does not, since we are not able to simulate the QCBM at this scale in a reasonable amount of time. We would not necessarily expect the Born machine to match the performance of the RBM on hardware at this scale for a number of reasons; the most likely cause of the diminished performance is quantum errors in the hardware, although we cannot rule out other factors, such as the Ansatz choice. We leave a thorough investigation of improving hardware performance to future work, perhaps by including error mitigation [55] to reduce errors, and parametric compilation and active qubit reset [56, 14] to improve running time, among other techniques.
6 Discussion
In conclusion, we investigate and compare two different models when trained on a real-world financial dataset consisting of currency pairs at varying levels of precision. We chose a completely classical model, the restricted Boltzmann machine, and put it up against a completely quantum model, the quantum circuit Born machine, in order to compare their relative expressive powers and to supplement recent related work in this direction [13, 26]. To ensure a fair benchmark, we fixed the models to have the same number of trainable parameters, and found that the simulated Born machine always performed at least as well as the RBM, and in several cases outperformed it, measured relative to the accuracy of an adversarial discriminator. To complement this finding, we investigated the entangling capability of the circuits learned by the QCBM, and found a rough correlation between training towards higher levels of entanglement and outperforming the classical model.
From this work, there are many possible avenues for exploration. The first is to improve the Born machine training speed, for example by leveraging GPU-accelerated computation of the cost functions, and by incorporating techniques to improve running time and execution on the QPU. Furthermore, to improve performance, one could consider variable-structure Ansätze [57, 58] or quantum-specific optimizers [59, 60, 61] for the model and training. An alternative direction is to enlarge the suite of classical models against which the Born machine is compared, in order to solidify any perceived advantage, or to extend the model to mixed states to potentially increase its expressive power [21]. Alternatively, one could investigate methods to divide the classical-quantum resources in the learning procedure [62].
References
 [1] John Preskill. Quantum Computing in the NISQ era and beyond. Quantum, 2:79, August 2018.
 [2] Frank Arute et al. Quantum supremacy using a programmable superconducting processor. Nature, 574(7779):505–510, October 2019.
 [3] Sam McArdle, Suguru Endo, Alán Aspuru-Guzik, Simon C. Benjamin, and Xiao Yuan. Quantum computational chemistry. Rev. Mod. Phys., 92(1):015003, March 2020.
 [4] Frank Arute et al. Hartree-Fock on a superconducting qubit quantum computer. arXiv:2004.04174, April 2020.
 [5] E. Farhi, J. Goldstone, and S. Gutmann. A Quantum Approximate Optimization Algorithm. arXiv:1411.4028, 2014.
 [6] Frank Arute et al. Quantum approximate optimization of non-planar graph problems on a planar superconducting processor. arXiv:2004.04197, 2020.
 [7] Christa Zoufal, Aurélien Lucchi, and Stefan Woerner. Quantum Generative Adversarial Networks for learning and loading random distributions. npj Quantum Inf., 5(1):103, December 2019.
 [8] Sergi Ramos-Calderer, Adrián Pérez-Salinas, Diego García-Martín, Carlos Bravo-Prieto, Jorge Cortada, Jordi Planagumà, and José I. Latorre. Quantum unary approach to option pricing. arXiv:1912.01618, December 2019.
 [9] Patrick Rebentrost, Brajesh Gupt, and Thomas R. Bromley. Quantum computational finance: Monte Carlo pricing of financial derivatives. Phys. Rev. A, 98(2):022321, August 2018.
 [10] Samuel Mugel, Carlos Kuchkovsky, Escolastico Sanchez, Samuel Fernandez-Lorenzo, Jorge Luis-Hita, Enrique Lizaso, and Roman Orus. Dynamic Portfolio Optimization with Real Datasets Using Quantum Processors and Quantum-Inspired Tensor Networks. arXiv:2007.00017, June 2020.
 [11] Peter W. Shor. Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer. SIAM J. Comput., 26(5):1484–1509, 1997.
 [12] Aram W. Harrow, Avinatan Hassidim, and Seth Lloyd. Quantum Algorithm for Linear Systems of Equations. Phys. Rev. Lett., 103(15):150502, October 2009.
 [13] Alexei Kondratyev. Non-Differentiable Learning of Quantum Circuit Born Machine with Genetic Algorithm. Available at SSRN 3569226, April 2020.
 [14] Peter J. Karalekas, Nikolas A. Tezak, Eric C. Peterson, Colm A. Ryan, Marcus P. da Silva, and Robert S. Smith. A quantum-classical cloud platform optimized for variational hybrid algorithms. Quantum Sci. Technol., 5(2):024003, April 2020.
 [15] Shakir Mohamed and Balaji Lakshminarayanan. Learning in Implicit Generative Models. arXiv:1610.03483, October 2016.
 [16] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative Adversarial Networks. arXiv:1406.2661, June 2014.
 [17] Nathan Wiebe, Ashish Kapoor, Christopher Granade, and Krysta M. Svore. Quantum Inspired Training for Boltzmann Machines. arXiv:1507.02642, July 2015.
 [18] Song Cheng, Jing Chen, and Lei Wang. Information Perspective to Probabilistic Modeling: Boltzmann Machines versus Born Machines. Entropy, 20(8):583, August 2018.
 [19] Marcello Benedetti, Delfina Garcia-Pintos, Oscar Perdomo, Vicente Leyton-Ortega, Yunseong Nam, and Alejandro Perdomo-Ortiz. A generative modeling approach for benchmarking and training shallow quantum circuits. npj Quantum Inf., 5(1):45, May 2019.
 [20] Jin-Guo Liu and Lei Wang. Differentiable learning of quantum circuit Born machines. Phys. Rev. A, 98(6):062324, December 2018.
 [21] Guillaume Verdon, Jacob Marks, Sasha Nanda, Stefan Leichenauer, and Jack Hidary. Quantum Hamiltonian-Based Models and the Variational Quantum Thermalizer Algorithm. arXiv:1910.02071, October 2019.
 [22] Brian Coyle, Daniel Mills, Vincent Danos, and Elham Kashefi. The Born supremacy: quantum advantage and training of an Ising Born machine. npj Quantum Information, 6(1):60, July 2020.
 [23] Ryan Sweke, Jean-Pierre Seifert, Dominik Hangleiter, and Jens Eisert. On the Quantum versus Classical Learnability of Discrete Distributions. arXiv:2007.14451, July 2020.
 [24] Jirawat Tangpanitanon, Supanut Thanasilp, Ninnat Dangniam, Marc-Antoine Lemonde, and Dimitris G. Angelakis. Expressibility and trainability of parameterized analog quantum systems for machine learning applications. arXiv:2005.11222, May 2020.
 [25] Alexei Kondratyev and Christian Schwarz. The Market Generator. Available at SSRN 3384948, 2019.
 [26] Javier Alcazar, Vicente Leyton-Ortega, and Alejandro Perdomo-Ortiz. Classical versus quantum models in machine learning: insights from a finance application. Machine Learning: Science and Technology, 2020.
 [27] Jonathan Romero and Alán Aspuru-Guzik. Variational quantum generators: Generative adversarial quantum machine learning for continuous distributions. arXiv:1901.00848, January 2019.
 [28] Guillaume Verdon, Jason Pye, and Michael Broughton. A Universal Training Algorithm for Quantum Deep Learning. arXiv:1806.09729, June 2018.
 [29] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016.
 [30] Mohammad H. Amin, Evgeny Andriyash, Jason Rolfe, Bohdan Kulchytskyy, and Roger Melko. Quantum Boltzmann Machine. Phys. Rev. X, 8(2):021050, May 2018.
 [31] Maria Kieferova and Nathan Wiebe. Tomography and Generative Data Modeling via Quantum Boltzmann Training. Phys. Rev. A, 96(6), December 2017.
 [32] Hai-Jing Song, Tieling Song, Qi-Kai He, Yang Liu, and D. L. Zhou. Geometry and symmetry in the quantum Boltzmann machine. Phys. Rev. A, 99(4):042307, April 2019.
 [33] Nathan Wiebe and Leonard Wossnig. Generative training of quantum Boltzmann machines with hidden units. arXiv:1905.09902, May 2019.
 [34] Guillaume Verdon, Michael Broughton, and Jacob Biamonte. A quantum algorithm to train neural networks using low-depth circuits. arXiv:1712.05304, December 2017.
 [35] Maria Schuld, Alex Bocharov, Krysta M. Svore, and Nathan Wiebe. Circuit-centric quantum classifiers. Phys. Rev. A, 101(3):032308, March 2020.
 [36] David A. Meyer and Nolan R. Wallach. Global entanglement in multiparticle systems. J. Math. Phys., 43(9):4273–4278, September 2002.
 [37] Sukin Sim, Peter D. Johnson, and Alán Aspuru-Guzik. Expressibility and Entangling Capability of Parameterized Quantum Circuits for Hybrid Quantum-Classical Algorithms. Adv. Quantum Technol., 2(12):1900070, December 2019.
 [38] Thomas Hubregtsen, Josef Pichlmeier, and Koen Bertels. Evaluation of Parameterized Quantum Circuits: on the design, and the relation between classification accuracy, expressibility and entangling capability. arXiv:2003.09887, March 2020.
 [39] Gavin K. Brennen. An observable measure of entanglement for pure states of multi-qubit systems. arXiv:quant-ph/0305094, November 2003.
 [40] Diederik P. Kingma and Jimmy Ba. Adam: A Method for Stochastic Optimization. In Yoshua Bengio and Yann LeCun, editors, 3rd International Conference on Learning Representations (ICLR 2015), San Diego, CA, USA, May 7–9, 2015, Conference Track Proceedings, 2015.
 [41] Aaditya Ramdas, Nicolas Garcia, and Marco Cuturi. On Wasserstein Two Sample Testing and Related Families of Nonparametric Tests. arXiv:1509.02237, September 2015.
 [42] Aude Genevay, Gabriel Peyré, and Marco Cuturi. Learning Generative Models with Sinkhorn Divergences. In Amos Storkey and Fernando Perez-Cruz, editors, Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, volume 84 of Proceedings of Machine Learning Research, pages 1608–1617, Playa Blanca, Lanzarote, Canary Islands, April 2018. PMLR.
 [43] Jean Feydy, Thibault Séjourné, François-Xavier Vialard, Shun-ichi Amari, Alain Trouvé, and Gabriel Peyré. Interpolating between Optimal Transport and MMD using Sinkhorn Divergences. In Kamalika Chaudhuri and Masashi Sugiyama, editors, Proceedings of Machine Learning Research, volume 89, pages 2681–2690. PMLR, April 2019.
 [44] Cédric Villani. Optimal Transport: Old and New. Grundlehren der mathematischen Wissenschaften. Springer-Verlag, Berlin Heidelberg, 2009.
 [45] S. Kullback and R. A. Leibler. On Information and Sufficiency. Ann. Math. Stat., 22(1):79–86, 1951.
 [46] Kosuke Mitarai, Makoto Negoro, Masahiro Kitagawa, and Keisuke Fujii. Quantum Circuit Learning. Phys. Rev. A, 98(3):032309, March 2018.
 [47] Maria Schuld, Ville Bergholm, Christian Gogolin, Josh Izaac, and Nathan Killoran. Evaluating analytic gradients on quantum hardware. Phys. Rev. A, 99(3):032331, March 2019.
 [48] Richard Sinkhorn. A Relationship Between Arbitrary Positive Matrices and Doubly Stochastic Matrices. Ann. Math. Stat., 35(2):876–879, June 1964.
 [49] Asja Fischer and Christian Igel. Training restricted Boltzmann machines: An introduction. Pattern Recognit., 47(1):25–39, January 2014.
 [50] Geoffrey E. Hinton. A Practical Guide to Training Restricted Boltzmann Machines. In Grégoire Montavon, Geneviève B. Orr, and Klaus-Robert Müller, editors, Neural Networks: Tricks of the Trade, volume 7700, pages 599–619. Springer Berlin Heidelberg, Berlin, Heidelberg, 2012.
 [51] D. Padilha, S. Weinstock, and M. Hodson. QxSQA: GPGPU-Accelerated Simulated Quantum Annealer within a Non-Linear Optimization and Boltzmann Sampling Framework. In 2019 IEEE High Performance Extreme Computing Conference (HPEC), pages 1–8, 2019.
 [52] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proc. IEEE, 86(11):2278–2323, 1998.
 [53] Maxwell P. Henderson and Justin Chan Jin Le. Generation of industry-relevant synthetic data using simulated quantum annealing-trained Boltzmann machines. In QTML: Quantum Tech. Mach. Learn., Daejeon, South Korea, 2019.
 [54] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine Learning in Python. J. Mach. Learn. Res., 12:2825–2830, 2011.
 [55] Kathleen E. Hamilton and Raphael C. Pooser. Error-mitigated data-driven circuit learning on noisy quantum hardware. arXiv:1911.13289, November 2019.
 [56] Robert S. Smith, Michael J. Curtis, and William J. Zeng. A Practical Quantum Instruction Set Architecture. arXiv:1608.03355, August 2016.
 [57] Lukasz Cincio, Yiğit Subaşı, Andrew T. Sornborger, and Patrick J. Coles. Learning the quantum algorithm for state overlap. New J. Phys., 20(11):113022, November 2018.
 [58] M. Cerezo, Kunal Sharma, Andrew Arrasmith, and Patrick J. Coles. Variational Quantum State Eigensolver. arXiv:2004.01372, April 2020.
 [59] Jonas M. Kübler, Andrew Arrasmith, Lukasz Cincio, and Patrick J. Coles. An Adaptive Optimizer for Measurement-Frugal Variational Algorithms. Quantum, 4:263, May 2020.
 [60] Andrew Arrasmith, Lukasz Cincio, Rolando D. Somma, and Patrick J. Coles. Operator Sampling for Shot-frugal Optimization in Variational Algorithms. arXiv:2004.06252, April 2020.
 [61] Wim Lavrijsen, Ana Tudor, Juliane Müller, Costin Iancu, and Wibe de Jong. Classical Optimizers for Noisy Intermediate-Scale Quantum Devices. arXiv:2004.03004, April 2020.
 [62] Marco Paini and Amir Kalev. An approximate description of quantum states. arXiv:1910.10543, November 2019.
 [63] Karsten M. Borgwardt, Arthur Gretton, Malte J. Rasch, Hans-Peter Kriegel, Bernhard Schölkopf, and Alex J. Smola. Integrating structured biological data by Kernel Maximum Mean Discrepancy. Bioinformatics, 22(14):e49–e57, 2006.
 [64] Arthur Gretton, Karsten M. Borgwardt, Malte Rasch, Bernhard Schölkopf, and Alex J. Smola. A Kernel Method for the Two-Sample-Problem. In B. Schölkopf, J. C. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19, pages 513–520. MIT Press, 2007.
 [65] Jonas M. Kübler, Krikamol Muandet, and Bernhard Schölkopf. Quantum mean embedding of probability distributions. Phys. Rev. Res., 1(3):033159, December 2019.
 [66] Maria Schuld and Nathan Killoran. Quantum Machine Learning in Feature Hilbert Spaces. Phys. Rev. Lett., 122(4), March 2019.
 [67] Vojtěch Havlíček, Antonio D. Córcoles, Kristan Temme, Aram W. Harrow, Abhinav Kandala, Jerry M. Chow, and Jay M. Gambetta. Supervised learning with quantum-enhanced feature spaces. Nature, 567(7747):209–212, 2019.
 [68] Seth Lloyd and Christian Weedbrook. Quantum Generative Adversarial Learning. Phys. Rev. Lett., 121(4):040502, July 2018.
 [69] Pierre-Luc Dallaire-Demers and Nathan Killoran. Quantum generative adversarial networks. Phys. Rev. A, 98(1):012324, July 2018.
 [70] Abhinav Anand, Jonathan Romero, Matthias Degroote, and Alán Aspuru-Guzik. Experimental demonstration of a quantum generative adversarial network for continuous distributions. arXiv:2006.01976, June 2020.
Appendix A Alternative Training Methods
Here we provide numerical results illustrating the training of the Born machine using some alternative methods and cost functions, for small numbers of qubits.
A.1 Maximum Mean Discrepancy
The first alternative method is derived by using a different cost function, the so-called maximum mean discrepancy (MMD). Like optimal transport, the MMD defines a metric on the space of probability distributions, from which an efficient-to-compute method of comparison can be defined [63, 64]:
$\text{MMD}(p_\theta, \pi) = \mathbb{E}_{x, y \sim p_\theta}\left[\kappa(x, y)\right] - 2\,\mathbb{E}_{x \sim p_\theta,\, y \sim \pi}\left[\kappa(x, y)\right] + \mathbb{E}_{x, y \sim \pi}\left[\kappa(x, y)\right]$ (21)

where $p_\theta$ denotes the model distribution and $\pi$ the data distribution.
This cost function was originally utilized for hypothesis testing [64], but has since found use in training generative models. In particular, it enabled the first approach to train a QCBM [20, 22] in a differentiable way.
The function $\kappa(x, y)$ is a kernel, which provides a means of comparison on the sample spaces. For this work, we choose the common Gaussian mixture kernel [20], which is universal, and hence enables the MMD to act as a faithful method of distribution comparison:
$\kappa(x, y) = \frac{1}{c}\sum_{i=1}^{c} \exp\left(-\frac{\lVert x - y \rVert_2^2}{2\sigma_i^2}\right)$ (22)
The parameters $\sigma_i$ are bandwidths which determine the scale at which the samples are compared, and $\lVert \cdot \rVert_2$ is the $\ell_2$ norm. Here we choose the same bandwidths as in [20]. Typically, the kernel is a classical function, but quantum kernels can also be considered here [22, 65, 66, 67].
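To illustrate how (21) and (22) combine in practice, the following sketch estimates the (biased) MMD between two sample sets with a Gaussian mixture kernel; the bandwidth values here are illustrative placeholders rather than the ones used in the paper:

```python
import numpy as np

def gaussian_mixture_kernel(x: np.ndarray, y: np.ndarray,
                            sigmas=(0.1, 1.0, 10.0)) -> np.ndarray:
    """Kernel matrix K[i, j] = (1/c) * sum_s exp(-|x_i - y_j|^2 / (2 s^2)).

    The bandwidths `sigmas` are illustrative, not the paper's choice.
    """
    sq_dists = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    return sum(np.exp(-sq_dists / (2.0 * s ** 2)) for s in sigmas) / len(sigmas)

def mmd_squared(samples_p: np.ndarray, samples_q: np.ndarray) -> float:
    """Biased sample estimator of the squared MMD between two sample sets."""
    k_pp = gaussian_mixture_kernel(samples_p, samples_p).mean()
    k_qq = gaussian_mixture_kernel(samples_q, samples_q).mean()
    k_pq = gaussian_mixture_kernel(samples_p, samples_q).mean()
    return k_pp - 2.0 * k_pq + k_qq
```

The estimator vanishes when both sample sets are identical and grows as the distributions separate, which is what makes it usable as a training loss.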
A.2 Adversarial Discriminator
The second method is to use a discriminator not only as a benchmark, but also to train the model against it directly. As in the above cases, this is a gradient-based approach, with the analytic gradient obtained by differentiating the discriminator loss.
Adversarial training has become a popular and powerful way to train neural networks, originating with generative adversarial networks (GANs) [16]. GANs are composed of two machine learning components: a discriminator, $D_\phi$, which attempts to predict whether a sample is drawn from the data distribution or has instead been generated by a generator network (in our notation, the generator network samples from a probability distribution $p_\theta$, and is either a Born machine or a Boltzmann machine). Generalizations of the GAN into the quantum domain have also been considered [27, 68, 69, 7, 70]. The generator attempts to minimize the following loss:
$L_G(\theta) = \frac{1}{2}\,\mathbb{E}_{x \sim p_\theta}\left[\log\left(1 - D_\phi(x)\right)\right] \approx \frac{1}{2M}\sum_{i=1}^{M}\log\left(1 - D_\phi(x_i)\right), \quad x_i \sim p_\theta$ (23)
where $D_\phi(x)$ is the probability that the discriminator, $D_\phi$, guesses that $x$ is from the real data set. In practice, the expectation value is approximated over generated samples. In order to train the generator with respect to this cost function (taken with respect to a specific discriminator, $D_\phi$), gradient descent can be used to minimize (23), with the gradient given by:
$\frac{\partial L_G}{\partial \theta_k} = \frac{1}{2}\sum_{x}\frac{\partial p_\theta(x)}{\partial \theta_k}\,\log\left(1 - D_\phi(x)\right)$ (24)
If we again assume the generator network is a Born machine, composed of quantum gates of the form $U(\theta_k) = e^{-i\frac{\theta_k}{2}\Sigma_k}$ for Hermitian generators $\Sigma_k$, then using the parameter shift rule as for the Sinkhorn divergence above (14), we get:
$\frac{\partial L_G}{\partial \theta_k} = \frac{1}{4}\left(\mathbb{E}_{x \sim p_{\theta_k^+}}\left[\log\left(1 - D_\phi(x)\right)\right] - \mathbb{E}_{x \sim p_{\theta_k^-}}\left[\log\left(1 - D_\phi(x)\right)\right]\right)$ (25)

where $p_{\theta_k^\pm}$ denotes the Born machine distribution with the parameter $\theta_k$ shifted by $\pm\pi/2$.
These expectation values can be evaluated by sampling from the parameter-shifted circuit distributions, as usual. Correspondingly, while the generator is trying to minimize the above cost, (23), the discriminator can also be trained for a number of sub-steps to become better at identifying fake samples. This can be done by using gradient ascent to maximize the following cost:
$L_D(\phi) = \frac{1}{2}\,\mathbb{E}_{y \sim \pi}\left[\log D_\phi(y)\right] + \frac{1}{2}\,\mathbb{E}_{x \sim p_\theta}\left[\log\left(1 - D_\phi(x)\right)\right]$ (26)
where the latter term is the same as in (23), and the former represents the probability that $D_\phi$ is able to correctly identify true data samples, $y$, drawn from the data distribution $\pi$. The gradient of (26) can be computed similarly.
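Under the reconstruction of (25) above, a single gradient component can be estimated directly from two sets of samples drawn at the shifted parameter values. This sketch assumes the $e^{-i\theta\Sigma/2}$ gate convention; the function and argument names are our own illustration:

```python
import numpy as np

def generator_grad_estimate(samples_plus: np.ndarray,
                            samples_minus: np.ndarray,
                            discriminator) -> float:
    """Estimate dL_G/d(theta_k) from samples drawn at theta_k +/- pi/2.

    `discriminator` maps a batch of samples to probabilities in (0, 1)
    that each sample came from the real data. The 1/4 prefactor follows
    the parameter shift rule applied to the 1/2-weighted generator loss.
    """
    term_plus = np.mean(np.log(1.0 - discriminator(samples_plus)))
    term_minus = np.mean(np.log(1.0 - discriminator(samples_minus)))
    return 0.25 * (term_plus - term_minus)
```

A full update would loop this estimator over all parameters, drawing fresh shifted-circuit samples for each, and feed the resulting gradient vector to an optimizer such as Adam [40].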
In this work, we implement the training laid out in this section, with two slight variations to note:

1. We used a slightly revised version of (25), which dropped the 1/2 factor and the logarithm (simply using the discriminator output directly in both terms). As we still used Adam as the update optimizer, we believe this simplification should not have any major impact on performance. Moreover, the adversarial approach was still slower than the Sinkhorn divergence, and therefore did not garner increased focus in this work.

2. As the modelling problem in this work was extremely small, we chose to simply retrain a new discriminator model from scratch at every generative model training iteration, with the corresponding test set error of the discriminator being recorded and used as a primary metric in this work. Similarly, a different discriminator model was used for calculating model parameter updates at every training iteration during adversarial training.
A.3 Genetic Algorithm
Finally, we consider a gradient-free approach, a genetic algorithm, since this was also used to train a QCBM on this same dataset [13]. One could also choose one of the many gradient-free optimizers, as has also been done for Born machines [19]. A simplified version of a genetic algorithm was implemented in [13], made feasible by the low number of parameters in a 12 qubit Born machine. We found that this method was significantly slower than the gradient-based methods we discuss above.
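For concreteness, a minimal genetic algorithm of the kind alluded to here (truncation selection plus Gaussian mutation over circuit angles) can be sketched as follows; the population size, mutation scale, and function names are illustrative choices, not those of [13]:

```python
import numpy as np

def genetic_minimize(cost, n_params, pop_size=40, n_gens=60,
                     mutation_scale=0.1, rng=None):
    """Minimize a black-box cost over real-valued angles in [0, 2*pi).

    Each generation keeps the best quarter of the population unchanged
    (elitism) and refills the rest with Gaussian-mutated copies of elites.
    """
    rng = np.random.default_rng(rng)
    pop = rng.uniform(0.0, 2.0 * np.pi, size=(pop_size, n_params))
    for _ in range(n_gens):
        fitness = np.array([cost(ind) for ind in pop])
        elite = pop[np.argsort(fitness)[: pop_size // 4]]
        # Children: random elites plus Gaussian mutation, wrapped to [0, 2*pi).
        parents = elite[rng.integers(len(elite), size=pop_size - len(elite))]
        children = np.mod(parents + rng.normal(0.0, mutation_scale,
                                               parents.shape), 2.0 * np.pi)
        pop = np.vstack([elite, children])
    fitness = np.array([cost(ind) for ind in pop])
    return pop[np.argmin(fitness)]
```

In the QCBM setting, `cost` would wrap circuit execution and a sample-based divergence such as the MMD, which is what makes the method slow: every candidate in every generation requires a fresh batch of circuit samples.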
Appendix B Alternative Model Structures
Here we showcase the effect of using alternative model structures for the QCBM and the RBM.
B.1 Differing numbers of Born machine layers
For completeness, in Figure 17 we show the effect of varying the number of layers of the hardware-efficient Ansatz shown in Figure 7, for and qubits. In particular, we notice that increasing the number of layers does not have a significant impact, at least at these scales, except perhaps on the convergence speed of the training. It is likely, however, that at larger scales increased parameter numbers would be required to improve performance.
B.2 Differing numbers of Boltzmann hidden nodes
We also demonstrate the effect of changing the number of hidden nodes in the Boltzmann machine in Figure 18, where we have and visible nodes. Again, we observe that increasing the number of hidden nodes (and by extension, the number of parameters) does not substantially affect the performance of the model, and can in fact hinder it, at least when training only the biases of the Boltzmann machine. In particular, it does not substantially alter the final accuracy achieved by the model. We noticed similar behaviour when also training the weights of the Boltzmann machine.
B.3 Weight training of the Boltzmann machine
Finally, we compare the effect of training the weights of the Boltzmann machine to training the bias terms alone. For the problem instances where the Boltzmann machine was able to converge to the best discriminator accuracy (i.e. the small problem instances), we find that training the weights increases convergence speed, and also improves accuracy where training the biases alone was insufficient to achieve a high discriminator error. Interestingly, we note that the Born machine still outperforms the and visible node RBMs even when the weights are also trained, so weight training does not seem to majorly affect the relative performance there. However, training the weights does make a large difference for nodes, as seen in Figure 19(c), so further investigation of this phenomenon is needed in future work.
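The distinction between bias-only and full weight training can be illustrated with a single contrastive-divergence (CD-1) update step for a binary RBM [49, 50]; the learning rate, function structure, and the `train_weights` flag are our own sketch, not the paper's implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b, c, lr=0.1, train_weights=True, rng=None):
    """One CD-1 update for a binary RBM.

    v0: batch of visible vectors, shape (batch, n_visible).
    W: weights (n_visible, n_hidden); b, c: visible/hidden biases.
    With train_weights=False, only the biases are updated, mirroring the
    bias-only training regime compared in this section.
    """
    rng = np.random.default_rng(rng)
    ph0 = sigmoid(v0 @ W + c)                        # P(h=1 | v0)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    pv1 = sigmoid(h0 @ W.T + b)                      # reconstruction
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + c)
    if train_weights:
        W = W + lr * (v0.T @ ph0 - v1.T @ ph1) / len(v0)
    b = b + lr * (v0 - v1).mean(axis=0)
    c = c + lr * (ph0 - ph1).mean(axis=0)
    return W, b, c
```

Toggling `train_weights` changes only which parameters receive the CD-1 update, so the two regimes are directly comparable at equal iteration counts.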