Abstract
We introduce novel algorithms for the quantum simulation of fermionic systems which are dramatically more efficient than those based on the Lie–Trotter–Suzuki decomposition. We present the first application of a general technique for simulating Hamiltonian evolution using a truncated Taylor series to obtain logarithmic scaling with the inverse of the desired precision. The key difficulty in applying algorithms for general sparse Hamiltonian simulation to fermionic simulation is that a query, corresponding to computation of an entry of the Hamiltonian, is costly to compute. This means that the gate complexity would be much higher than quantified by the query complexity. We solve this problem with a novel quantum algorithm for on-the-fly computation of integrals that is exponentially faster than classical sampling. While the approaches presented here are readily applicable to a wide class of fermionic models, we focus on quantum chemistry simulation in second quantization, perhaps the most studied application of Hamiltonian simulation. Our central result is an algorithm for simulating an N spin–orbital system that requires gates. This approach is exponentially faster in the inverse precision and at least cubically faster in N than all previous approaches to chemistry simulation in the literature.
1. Introduction
As small, fault-tolerant quantum computers come increasingly close to viability [1–4], there has been substantial renewed interest in the quantum simulation of chemistry due to the low qubit requirements and industrial importance of the electronic structure problem. A recent series of papers sought to estimate the resources required to quantum simulate a small but classically intractable molecule [5–9]. Although qubit requirements seem modest, initial predictions of the time required were daunting. Using arbitrarily high-order Trotter formulas, the tightest known bound on the gate count of the second quantized, Trotter-based quantum simulation of chemistry is that of [10, 11], expressed in terms of the required precision and the number of spin–orbitals N. However, using significantly more practical Trotter decompositions, the best known gate complexity for this quantum algorithm is that of [6].
Fortunately, numerics indicated that the average circuit depth for real molecules may be substantially lower than these bounds suggest [7], especially when only trying to simulate ground states [9], where the scaling also involves Z, the largest nuclear charge for the molecule. While this improved scaling restores hope that fault-tolerant devices will have an impact on some classically intractable chemistry problems, the Trotter-based quantum simulation of large molecules still seems prohibitively costly [9, 12, 13]. This limitation would preclude simulations of many important molecular systems, such as those involved in biological nitrogen fixation and high-Tc superconductivity [12, 13].
The canonical quantum algorithm for quantum chemistry, based on the Trotter–Suzuki decomposition, which was first applied for universal quantum simulation in [14, 15], was introduced nearly a decade ago [16]. This approach was later refined for implementation with a set of universal quantum gates in [17]. With the exception of the adiabatic algorithm described in [18] and a classical variational optimization strategy making use of a quantum wavefunction ansatz described in [19–21], all prior quantum algorithms for chemistry have been based on Trotterization [20, 22–28].
Trotter–Suzuki approaches were also applied to simulation of evolution under sparse Hamiltonians with the entries given by an oracle [29, 30]. A related problem is the simulation of continuous query algorithms; in 2009, Cleve et al showed how to achieve such simulation with exponentially fewer discrete queries than Trotterization in terms of the required precision [31]. The algorithm of [31] still required a number of ancilla qubits that scaled polynomially with the inverse of the desired precision, but this limitation was overcome in [32], which demonstrated that the ancilla register in [31] could be compressed into exponentially fewer qubits. In [33, 34], Berry et al combined the results of [29–32] to show exponentially more precise sparse Hamiltonian simulation techniques. A major contribution of [33] was to use oblivious amplitude amplification to make the algorithm from [31, 32] deterministic, whereas prior versions had relied on probabilistic measurement of ancilla qubits. An improvement introduced in [34] was to show how to simulate arbitrary Hamiltonians using queries that are not self-inverse (a requirement of the procedure in [33]). We focus on the methodology of [34], which is relatively self-contained.
The algorithm of [34] approximates the propagator using a Taylor series expansion rather than the Trotter–Suzuki decomposition. By dividing the desired evolution into a number of simulation segments proportional to the Hamiltonian norm, one can truncate the Taylor series at an order which scales logarithmically in the inverse of the desired precision [34]. The truncated Taylor series must be expressed as a weighted sum of unitary operators. To simulate the action of this operator, one first initializes the system along with an ancilla register that indexes all terms in the Taylor series sum. The ancilla register is then put in a superposition state with amplitudes proportional to the coefficients of terms in the Taylor series sum. Next, an operator is applied to the system which coherently executes a single term in the Taylor series sum that is selected according to the ancilla register in superposition. Finally, by applying the inverse of the procedure which prepares the ancilla register, one probabilistically simulates evolution under the propagator. The algorithm is made deterministic using an oblivious amplitude amplification procedure from [33].
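The truncation step above can be illustrated numerically. The sketch below is a classical Python/NumPy calculation, not a quantum circuit; the random Hermitian matrix and the evolution time are illustrative assumptions standing in for a molecular Hamiltonian. It shows the factorial suppression of the truncation error, which is why the truncation order need only grow logarithmically with the inverse precision.

```python
import numpy as np

# A random 2-qubit Hermitian matrix stands in for the Hamiltonian (assumption).
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = (A + A.conj().T) / 2
t = 0.1

# Exact propagator exp(-iHt) via eigendecomposition (H is Hermitian).
evals, evecs = np.linalg.eigh(H)
U_exact = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T

def taylor_propagator(H, t, K):
    """Truncated series sum_{k=0}^{K} (-iHt)^k / k!"""
    U = np.zeros_like(H)
    term = np.eye(H.shape[0], dtype=complex)
    for k in range(K + 1):
        U = U + term
        term = term @ (-1j * t * H) / (k + 1)
    return U

# The operator-norm error falls off factorially with the truncation order K.
errors = [np.linalg.norm(taylor_propagator(H, t, K) - U_exact, 2)
          for K in range(1, 6)]
assert all(e2 < e1 for e1, e2 in zip(errors, errors[1:]))
```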
In the present paper we develop two new algorithms for the application of the Hamiltonian terms, which we refer to as the 'database' algorithm and the 'on-the-fly' algorithm. In the database algorithm, the ancilla register's superposition state is prepared with amplitudes from a precomputed classical database. In the on-the-fly algorithm, those amplitudes are computed and prepared on-the-fly, in a way that is exponentially more precise than classically possible.
2. Overview of results
The simulation procedure described in [34] assumes the ability to represent the Hamiltonian as a weighted sum of unitaries which can be individually applied to a quantum state. Specifically, we must be able to express the simulation Hamiltonian as
where the Wγ are complex-valued scalars, the Hγ are unitary operators and a mechanism is available for selectively applying the Hγ. Using the Jordan–Wigner transformation [35, 36] or the Bravyi–Kitaev transformation [37–39], the second quantized molecular Hamiltonian can be mapped to a sum of local Hamiltonians. Since these local Hamiltonians are each a tensor product of Pauli operators multiplied by some coefficient, they automatically satisfy the form of equation (1).
We will need a circuit referred to in [34] as which is queried within the algorithm such that
One could construct by storing all the Pauli strings in a database. However, accessing this data would have time complexity of at least . Instead, we compute and apply the Pauli strings using gates (which can be parallelized to circuit depth) by dynamically performing the Jordan–Wigner transformation on the quantum hardware.
The algorithm of [34] also requires an operator that we refer to as which applies the mapping
where the normalization factor will turn out to have significant ramifications for the algorithm complexity. In the first of two algorithms discussed in this paper, we implement this state preparation using a database via a sequence of totally controlled rotations. Because our first approach uses a database to store classically precomputed values of Wγ, we refer to the first algorithm as the 'database' algorithm.
While we suggest a different strategy in section 3, a database could also be used to construct the selection operation. That is, a controlled operation is performed which applies H1 if the selection register holds the first index, followed by a controlled operation which performs H2 if it holds the second, and so forth. This would result in a slightly higher gate count, because each of the Γ controlled operations must act on multiple qubits even if the Bravyi–Kitaev transformation is used. Nevertheless, this might represent a simpler solution for early experimental implementations.
Our second algorithm involves modifications to the algorithm of [34] which allows us to avoid some of this overhead. We exploit the fact that the chemistry Hamiltonian is easy to express as a special case of equation (1) in which the coefficients are defined by integrals such as
where the integrand represents a scalar-valued function of the vector , which is an element of the integration domain . Because our approach involves computing integrals on-the-fly, we refer to the second algorithm as the 'on-the-fly' algorithm. We begin by numerically approximating the integrals as finite Riemann sums such as
where is a point in the integration domain at grid point ρ. Equation (5) represents a discretization of the integral in equation (4) using μ grid points where the domain of the integral, denoted as , has been truncated to have total volume . This truncation is possible because the functions can be chosen to decay exponentially over the integration domain for the molecular systems usually studied in chemistry. Note that this might not be true for other systems, such as conducting metals.
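The discretization in equation (5) can be sketched numerically. In the Python sketch below, a Gaussian integrand is a hypothetical stand-in for a product of exponentially decaying orbitals; the exact value of its integral over all space, sqrt(pi), serves as the reference.

```python
import numpy as np

def riemann(mu, half_width):
    """Midpoint Riemann sum of exp(-z^2) with mu grid points on a domain
    truncated to [-half_width, half_width] (total volume 2*half_width)."""
    step = 2.0 * half_width / mu
    z = -half_width + step * (np.arange(mu) + 0.5)   # grid points z_rho
    return step * np.sum(np.exp(-z ** 2))

# Because the integrand decays exponentially, truncating the domain and
# summing over a modest grid already reproduces the integral to high accuracy.
approx = riemann(mu=2000, half_width=6.0)
assert abs(approx - np.sqrt(np.pi)) < 1e-6
```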
Our algorithm is effectively able to numerically compute this integral with complexity logarithmic in the number of grid points. It might be thought that this is impossible, because methods of evaluating numerical integrals on quantum computers normally only give a square-root speedup over classical Monte-Carlo algorithms [40]. The difference here is that we do not output the value of the integral. The value of the integral is only used to control the weight of a term in the Hamiltonian under which the state evolves.
We construct a circuit which computes the values of for the quantum chemistry Hamiltonian with gates. We call this circuit and define it by its action
where is the binary representation of using qubits.
By expanding the Wγ in equation (1) in terms of the easily computed as in equation (5), we are able to compute analogous amplitudes to those in equation (3) in an efficient fashion. Thus, we no longer need the database that characterizes that algorithm. State preparation where the state coefficients can be computed on the quantum computer is more efficient than when they are stored on, and accessed from, a database [41]. The worst-case complexity is the square root of the dimension (here it would be ), whereas the database state preparation has complexity linear in the dimension (which is for Wγ). Here this would not be an improvement, as we have increased the dimension in the discretization of the integral.
However, the worst-case complexity is only if the amplitudes can take arbitrary values (as this would enable a search algorithm, where the square root of the dimension is optimal [42]). If the amplitudes differ only by phases, the complexity of the state preparation is logarithmic in the dimension. We therefore decompose each into a sum of terms which differ only by a sign, . Then, although the dimension is increased, the complexity of the state preparation is reduced. In turn, we can express the Hamiltonian as a sum of unitaries weighted by identical amplitudes which differ only by an easily computed sign
As discussed above, the state preparation needed can be performed much more efficiently because the amplitudes are now identical up to a phase. By making a single query to and then performing phase-kickback we can implement the operator whose action is
where the normalization factor will turn out to have significant ramifications for the algorithm complexity. Later, we will show how this operator can be implemented at the cost of a single query to the integrand oracle.
The database algorithm performs evolution under H for time t by making queries to both the selection operation and the database state preparation. Because the database state preparation requires a number of gates that grows with Γ, the overall gate count of this approach is dominated by that cost. To avoid this overhead, our on-the-fly algorithm exploits a modified version of the truncated Taylor series algorithm which allows for the same evolution by making queries to the integrand oracle instead, which requires far fewer gates.
The paper is outlined as follows. In section 3 we introduce the second quantized encoding of the wavefunction and construct . In section 4 we review the procedure in [34] to demonstrate our database algorithm which uses and to perform a quantum simulation which is exponentially more precise than Trotterization. In section 5 we show that one can modify the procedure in [34] to allow for essentially the same result while simultaneously computing the integrals on-the-fly, and show how to implement so as to compute the integrals on-the-fly. In section 6 we bound the errors on the integrals by analyzing the integrands. In section 7 we discuss applications of these results and future research directions.
3. The Hamiltonian oracle
The molecular electronic structure Hamiltonian describes electrons interacting in a fixed nuclear potential. Using atomic units in which the electron mass, electron charge, Coulomb's constant and ℏ are unity we may write the electronic Hamiltonian as
where the first set of coordinates denote the positions of the nuclei, Zi are the nuclear charges, and the second set denote the positions of the electrons [43]. We represent the system in a basis of N single-particle spin–orbital functions usually obtained as the solution to a classical mean-field treatment such as Hartree–Fock [43]. Throughout this paper, the ith spin–orbital occupied by the jth electron is parameterized in terms of its spatial degrees of freedom.
In second quantization, antisymmetry is enforced by the operators whereas in first quantization antisymmetry is explicitly in the wavefunction. The second quantized representation of equation (9) is
where the one-electron and two-electron integrals are
The operators and aj in equation (10) obey the fermionic anti-commutation relations
In general, the Hamiltonian in equation (10) contains terms, except in certain limits of very large molecules where use of a local basis and truncation of terms lead to more favorable scaling [8]. The spatial encoding of equation (10) requires N qubits, one for each spin–orbital.
While fermions are antisymmetric, indistinguishable particles, qubits are distinguishable and have no special symmetries. Accordingly, in order to construct the operator , which applies terms in the second quantized Hamiltonian to qubits as in equation (2), we will need a mechanism for mapping the fermionic raising and lowering operators in equation (10) to operators which act on qubits. Operators which raise or lower the state of a qubit are trivial to represent using Pauli matrices
Throughout this paper, and denote Pauli matrices acting on the jth tensor factor. However, these qubit raising and lowering operators do not satisfy the fermionic anti-commutation relations in equation (13). To enforce this requirement we can apply either the Jordan–Wigner transformation [35, 36] or the Bravyi–Kitaev transformation [37–39].
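To make the mapping concrete, the following sketch (illustrative Python/NumPy for a hypothetical three spin–orbital register, not part of the algorithm itself) builds the Jordan–Wigner lowering operators and verifies the fermionic anti-commutation relations of equation (13).

```python
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])

def lower(j, n):
    """Jordan-Wigner lowering operator a_j on n qubits:
    a Z string on qubits 0..j-1 followed by |0><1| on qubit j."""
    ops = [Z] * j + [(X + 1j * Y) / 2] + [I2] * (n - j - 1)
    return reduce(np.kron, ops)

n = 3  # hypothetical 3 spin-orbital register
a = [lower(j, n) for j in range(n)]
dim = 2 ** n
for i in range(n):
    for j in range(n):
        # {a_i, a_j^dag} = delta_ij and {a_i, a_j} = 0, as in equation (13).
        anti = a[i] @ a[j].conj().T + a[j].conj().T @ a[i]
        assert np.allclose(anti, np.eye(dim) if i == j else np.zeros((dim, dim)))
        assert np.allclose(a[i] @ a[j] + a[j] @ a[i], np.zeros((dim, dim)))
```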
The action of or aj must also introduce a phase to the wavefunction which depends on the parity (i.e. sum modulo 2) of the occupancies of all orbitals with index less than j [38]. If denotes the occupancy of orbital j then
In general, two pieces of information are needed in order to make sure the qubit encoding of the fermionic state picks up the correct phase: the occupancy of the state and the parity of the occupancy numbers up to j. The Jordan–Wigner transformation maps the occupancy of spin–orbital j directly into the state of qubit j. Thus, in the Jordan–Wigner transformation, occupancy information is stored locally. However, in order to measure the parity of the state in this representation, one needs to measure the occupancies of all orbitals less than j. Because of this, the Jordan–Wigner transformed operators are N-local, which means that some of the Jordan–Wigner transformed operators are tensor products of up to N Pauli operators. The Jordan–Wigner transformed operators are
It would be convenient if we could construct by applying the Jordan–Wigner transform and acting on the quantum state, one spin–orbital index at a time. For instance, might control the application of a fermionic operator as follows
However, the operators and aj are not unitary because the operators and are not unitary. To correct this problem, we add four qubits to the selection register where each of the four qubits indicates whether the or the part of the and operators should be applied for each of the four fermionic operators in a string such as . For ease of exposition, we define new fermionic operators which are unitary, and , where
We use these definitions to rewrite the Hamiltonian in equation (10) so that it is explicitly a weighted sum of unitary Pauli products of the form we require in equation (1)
Inspection reveals that applying the transformations in equations (23) and (24) to equation (25) gives the same expression as applying the transformations in equations (20) and (21) to equation (10). By removing factors of 1/2 from both transformation operators and instead placing them in equation (25), we obtain transformation operators that are always unitary tensor products of Pauli operators.
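The claim that the Hamiltonian can be written as a weighted sum of unitary Pauli products can be checked numerically for any Hermitian matrix. In the sketch below, a random 2-qubit Hermitian matrix is an illustrative stand-in for a transformed chemistry Hamiltonian, and the weights are extracted via the trace inner product.

```python
import numpy as np
from functools import reduce
from itertools import product

# The four single-qubit Paulis; tensor products of these form the H_gamma.
paulis = {
    'I': np.eye(2, dtype=complex),
    'X': np.array([[0, 1], [1, 0]], dtype=complex),
    'Y': np.array([[0, -1j], [1j, 0]]),
    'Z': np.diag([1.0 + 0j, -1.0]),
}

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = (A + A.conj().T) / 2  # a random 2-qubit Hermitian stand-in (assumption)

# Weights W_gamma = tr(H_gamma H) / 2^n, real for Hermitian H.
terms = {}
for labels in product('IXYZ', repeat=2):
    Hg = reduce(np.kron, [paulis[s] for s in labels])
    W = np.trace(Hg @ H).real / 4
    terms[''.join(labels)] = (W, Hg)

# The weighted sum of unitary Pauli strings reconstructs H exactly.
recon = sum(W * Hg for W, Hg in terms.values())
assert np.allclose(recon, H)
```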
Accordingly, we can implement in the spirit of equation (22) by using four additional qubits and the transformation operators in equations (23) and (24) so that
A circuit which implements these operators controlled on the selection register is straightforward to construct. Furthermore, the transformation of the terms can be accomplished in time. Because the Jordan–Wigner transformation is N-local, the number of gates required to actually apply the unitaries in is . However, the terms in equations (23) and (24) are trivial to apply in parallel so that each query takes time.
Whereas the Jordan–Wigner transformation stores occupancy information locally and parity information N-locally, the Bravyi–Kitaev transformation stores both parity and occupancy information in a number of qubits that scales logarithmically with N [37–39]. For this reason, the operators obtained using the Bravyi–Kitaev basis act on at most a logarithmic number of qubits. It might be possible to apply the Bravyi–Kitaev transformation with correspondingly fewer gates, which would allow for a more efficient implementation of the selection operation. However, the Bravyi–Kitaev transformation is much more complicated and this would not change the asymptotic scaling of our complete algorithm. This is because the total cost depends on the sum of the gate count of the selection operation and the gate count of the state preparation, and the latter procedures always dominate.
4. Simulating Hamiltonian evolution
Using the method of [34], Hamiltonian evolution can be simulated with an exponential improvement in precision over Trotter-based methods by approximating the truncated Taylor series of the time evolution operator . We begin by partitioning the total simulation time t into r segments of time t/r. For each of these r segments we perform a Taylor expansion of the propagator and truncate the series at order K, i.e.
where in the second line we have expanded H as in equation (1). Notice that if we truncate the series at order K, we incur error
If we wish for the total simulation to have error less than , each segment must have error less than . Accordingly, if we set then our total simulation will have error at most if
We now discuss how one can actually implement the truncated evolution operator in equation (27). First note that the sum in equation (27) takes the form
where the Vj are unitary and is close to unitary. Our simulation uses an ancillary 'selection' register , where and for all υ. We will encode k in unary, which requires qubits, so that . Additionally, we encode each in binary using qubits. While we need K of the registers, we note that only k will actually be in use for a given value of . The total number of ancilla qubits required for the selection register , denoted as J, scales as
By making queries to from section 2, we can implement an operator to apply the Vj which is referred to in [34] as
where the Vj are defined as in equation (30). This is equivalent to k applications of , using each of the registers, together with k multiplications by −i. In order to obtain k applications of , we may perform a controlled form of times, with each successive qubit in the unary representation of k as the control. Given that the gate count for scales as , we can implement with gates. Applying the Pauli strings in parallel leads to circuit depths of and , respectively. Table 1 lists relevant parameters along with their bounds in our database algorithm. Table 2 lists relevant operators and their gate counts in our database algorithm.
Table 1. Database algorithm parameters and bounds.

Parameter | Explanation | Bound
---|---|---
Λ | Normalization factor, equation (37) |
r | Number of time segments, equation (40) |
K | Truncation point for Taylor series, equation (29) |
Γ | Number of terms in unitary decomposition, equation (1) |
J | Number of ancilla qubits in selection register, equation (31) |
Table 2. Database algorithm operators and gate counts.

Operator | Purpose | Gate count
---|---|---
 | Applies specified terms from decomposition, equation (2) |
 | Applies specified strings of terms, equation (32) |
 | Prepares a superposition of states weighted by coefficients, equation (3) |
 | Prepares a superposition of states weighted by coefficients, equation (33) |
 | Probabilistically performs simulation under H for time t/r, equation (39) |
P | Projects system onto state of selection register, equation (42) |
G | Amplification operator to implement sum of unitaries, equation (43) |
 | Entire algorithm |
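The selection operation of equation (2) can be modelled as the block-diagonal unitary that applies Hγ to the system conditioned on the ancilla index γ. In the sketch below, single-qubit Paulis are hypothetical stand-ins for the Jordan–Wigner strings of the actual algorithm.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])
I2 = np.eye(2, dtype=complex)
H_terms = [I2, X, Y, Z]  # stand-ins for the unitary H_gamma

# select = sum_gamma |gamma><gamma| (x) H_gamma : block diagonal and unitary.
select = np.zeros((8, 8), dtype=complex)
for gamma, Hg in enumerate(H_terms):
    proj = np.zeros((4, 4))
    proj[gamma, gamma] = 1.0
    select += np.kron(proj, Hg)

assert np.allclose(select @ select.conj().T, np.eye(8))

# Acting on |gamma>|psi> yields |gamma> (H_gamma |psi>), as in equation (2).
psi = np.array([1.0, 1.0]) / np.sqrt(2)
for gamma, Hg in enumerate(H_terms):
    anc = np.zeros(4)
    anc[gamma] = 1.0
    out = select @ np.kron(anc, psi)
    assert np.allclose(out, np.kron(anc, Hg @ psi))
```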
We will also need an operator that we refer to as , which initializes a state
where s is a normalization factor. To implement we first prepare the state
Using the convention that , we apply to the first qubit of the unary encoding for k followed by to the qubit controlled on qubit for all sequentially, where
To each of the K remaining components of the selection register , we apply once, which acts as
where
In principle, we only need to perform times, because the registers past k are not used. However, it is simpler to perform times, because it does not require control on k.
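The rotation-angle bookkeeping for this unary-encoded state preparation can be sketched classically. The recursion below is an illustrative NumPy calculation, not the quantum circuit: each qubit of the unary chain "stops" with the amplitude of the current term and "continues" with the residual amplitude, and the chained rotations reproduce amplitudes proportional to sqrt(x^k/k!), with x standing in for the segment parameter.

```python
import numpy as np
from math import factorial

K, x = 8, np.log(2)  # x stands in for the per-segment parameter (assumption)
target = np.array([np.sqrt(x ** k / factorial(k)) for k in range(K + 1)])
target /= np.linalg.norm(target)

amps = np.zeros(K + 1)
remaining = 1.0  # amplitude still propagating down the unary chain
thetas = []
for k in range(K):
    c = target[k] / remaining     # "stop at k" fraction of the residual
    s2 = max(0.0, 1.0 - c ** 2)   # "continue past k" probability
    thetas.append(2.0 * np.arcsin(np.sqrt(s2)))
    amps[k] = remaining * c
    remaining *= np.sqrt(s2)
amps[K] = remaining               # all K qubits set: highest-order term

# The chained rotations reproduce the intended unary-encoded amplitudes.
assert np.allclose(amps, target, atol=1e-9)
```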
Using results from [44], can be implemented with gates by using a classically precomputed database of the Γ molecular integrals. The gate count for thus scales as . However, this construction is naturally parallelized to depth . A circuit implementing is shown in figure 1.
The general strategy for implementing the truncated evolution operator in equation (30) becomes clear if we consider what happens to state when we apply followed by the operator
The similarity of this state to the state motivates the operator
where is a state with the ancilla qubits orthogonal to . Note that in [34], the authors use the convention that all Wγ are positive and phases are incorporated into the operators Hγ. Since we depart from that convention for reasons described in section 2, the second application of in equation (39) is the transpose of the first application, in contrast to [34] where the conjugate transpose is used instead. The circuit implementing is shown in figure 2.
At this point, we choose the number of segments to be
Since , our choice of K in equation (29) remains valid. The additional factor of is included to satisfy a requirement of oblivious amplitude amplification as described in [33] so that
We now define a projection operator P onto the target state, which has support on the empty ancilla register
We also define the amplification operator
where is the reflection operator. With these definitions, we follow the procedure in [34] which uses the oblivious amplitude amplification procedure of [33] to deterministically execute the intended unitary. We use G in conjunction with P to amplify the target state
Recalling the definition of Ur in equation (27), our choices of K in equation (29) and imply that
so that the total error from applying oblivious amplitude amplification to all the segments will again be of the desired order. To summarize, the database algorithm proceeds as follows.
- (1)Express the Hamiltonian as a weighted sum of unitary operators as in equation (1).
- (2)Subdivide the simulation time t into segments, where Λ is defined in equation (37).
- (3)Expand the evolution for time t/r as a truncated Taylor series Ur, as defined in equation (27). (Steps 1 to 3 are classical pre-processing.)
- (4)For each segment perform the following steps.
- (a)Prepare a superposition state with amplitudes proportional to , where Wγ are the weights of the Hamiltonians in the sum. This is achieved using the operator , defined in equation (3).
- (b)Create the superposition of states encoded in unary, as given in equation (34), where the amplitudes correspond to the square roots of the weights for a truncated Taylor series. This is achieved using the controlled rotations by , depicted in figure 3, where the values of are given by equation (35). The overall operation performed in steps (a) and (b) is denoted , and defined in equation (33).
- (c)Use the ancillas prepared in steps (a) and (b) as controls for the operations Vj, defined in equation (30); this controlled operation is denoted , and defined in equation (32). This controlled operation corresponds to K controlled phase shifts and applications of , defined in equation (2). The result of is that it applies a superposition of the terms in the truncated Taylor series expansion of the Hamiltonian evolution to the target state.
- (d)Apply to invert the state preparation in steps (a) and (b). Then, if the ancilla qubits were measured as all zero, that would correspond to a success and give Ur applied to the target state.
- (e)Apply oblivious amplitude amplification to obtain success with unit probability.
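The amplification in step (e) relies on the fact that when the success amplitude of a segment is exactly 1/2, a single Grover-style iteration boosts it to unity. The two-dimensional sketch below checks this in the good/bad subspace only; it is a toy model and does not capture the obliviousness of the actual procedure of [33].

```python
import numpy as np

theta = np.arcsin(0.5)                            # success amplitude 1/2
state = np.array([np.sin(theta), np.cos(theta)])  # (good, bad) components

R_good = np.diag([-1.0, 1.0])                     # reflection about |good>
R_psi = 2.0 * np.outer(state, state) - np.eye(2)  # reflection about |psi>

# One amplification iteration maps sin(theta) -> sin(3*theta) = 1 at theta = pi/6.
amplified = R_psi @ R_good @ state
assert np.isclose(abs(amplified[0]), 1.0)
```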
The gate count of the entire algorithm is thus r times the cost of implementing plus the cost of implementing . Though we implement with gates, our brute-force construction of led to a gate count for which scales as . Thus, the total gate count of our database algorithm scales as
While this bound suggests an exponentially more precise algorithm than those based on Trotterization, in the remainder of our paper we discuss an even more efficient algorithm with improved dependence on N.
5. Evolution under integral Hamiltonians
In section 4 we analyzed the database algorithm for quantum simulating chemistry Hamiltonians in a manner that is exponentially more precise than Trotterization. The most costly part of that procedure is the implementation of , as in equation (3), which prepares a superposition state with amplitudes that are given by integrals over spin–orbitals, as in equations (11) and (12). Instead of classically precomputing these integrals and implementing with a database, the strategy we introduce is to numerically sample the integrals on-the-fly using the quantum computer. Because of this, we call this the 'on-the-fly' algorithm. To accomplish this, we discretize the integrals as sums and design a circuit which returns the integrand of these integrals at particular domain points. The motivation for approximating integrals as sums comes from a direct analogy between the discretization of time in the Taylor series approach for simulating time-dependent Hamiltonians [34] and the discretization of space in Riemann integration.
In [34], the time-ordered exponential is approximated by a Taylor series up to order K, and the integrals are then discretized as follows on each segment
where is the time ordering operator. Now let us suppose that our Hamiltonian does not change in time, but instead that the Hamiltonian itself is given as a definite integral over the spatial region so that
The second quantized Hamiltonian given by equation (25) is similar to this, except it includes both terms hij, which are integrals over one spatial coordinate, and hijkℓ, which are integrals over two spatial coordinates. While those integrands are also defined over all space, the integrands decay exponentially in space so we can approximate them as definite integrals over the finite region , having volume . Then we can approximate the integral by
where is a point in the domain at the ρth grid point.
As in section 4, we begin by dividing t into r segments. We turn our attention to a single segment
where in the second line we have performed a Taylor expansion of the propagator and truncated at order K. In equation (50), the bolded symbol indicates a vector of vectors. Like before, if then the relationship between K and is given by equation (29). To approximate the integral, we divide it into μ regions of volume . We now have
For the second quantized Hamiltonian, the Wγ in equation (1) are integrals over scalar functions as in equation (4). Using this property it is clear that
and the segment Ur can be expressed as
The second quantized Hamiltonian given in equation (25) is a sum of terms which are integrals over one spatial coordinate and terms which are integrals over two spatial coordinates. This case is easily accounted for by taking to be a vector including both spatial coordinates, and to be the product of the volumes for the two coordinates. One can take the terms with the integral over the single spatial coordinate to also be integrated over the second spatial coordinate, and divided by the volume of integration of that second coordinate to give the correct normalization. We may now proceed with the truncated Taylor series simulation as in section 4. Whereas our database algorithm required to create a superposition of states weighted by the , as in equation (3), our on-the-fly algorithm will need to create a superposition of states weighted by the scalar integrands .
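As a toy check of this discretization (illustrative Python; the Gaussian weight and the single diagonal term are assumptions, not the chemistry Hamiltonian), a Hamiltonian defined by an integral can be replaced by its Riemann sum with negligible effect on the evolution operator.

```python
import numpy as np

# H = (integral of exp(-z^2) dz) * Z, so the exact weight is sqrt(pi).
t, mu, half_width = 0.3, 4000, 6.0
step = 2.0 * half_width / mu
z = -half_width + step * (np.arange(mu) + 0.5)
w_discrete = step * np.sum(np.exp(-z ** 2))   # Riemann-sum weight

z_eigs = np.array([1.0, -1.0])                # eigenvalues of Pauli Z
U_discrete = np.diag(np.exp(-1j * w_discrete * t * z_eigs))
U_exact = np.diag(np.exp(-1j * np.sqrt(np.pi) * t * z_eigs))

# Discretizing the integral barely perturbs the evolution operator.
assert np.linalg.norm(U_discrete - U_exact, 2) < 1e-8
```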
As the first step, we discuss a method for dynamically decomposing each into a sum of terms which differ only by a sign, . That is, the decomposition is of the form
where ζ is the precision of the decomposition and
The sum in equation (54) corresponds to rounded to the nearest multiple of , so
To accomplish this on-the-fly, we perform logic on the output of which acts as
where is the binary representation of using qubits. In particular, we will need to determine whether the for a given value of m should be 1 or −1. Since the superposition we desire should be weighted by the square root of this coefficient, we need to prepare states that either do or do not have a phase factor so
where and was obtained from . The phase factor can be obtained using phase-kickback in the usual way. Then we apply a second time to erase the register . A single query to this circuit allows for the construction of with the same complexity as . Accordingly, the overall action of is
where and
Before explaining the integrand circuit we briefly comment on the additional resources required for the Taylor series simulation under a discretized, position-dependent, integrand Hamiltonian. As in the constant Hamiltonian case, we need one register with qubits to encode and K registers of qubits to encode . However, we also need extra ancilla qubits to store the value of m, the grid point registers, as well as the value registers which are used by the integrand oracle . This represents an additional ancilla overhead of .
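The sign decomposition of equation (54) can be sketched directly. The helper below is hypothetical and simply rounds a coefficient to the nearest multiple of ζ, realizing it as a list of ±1 terms, with cancelling pairs as padding so that every term has unit magnitude.

```python
def sign_decompose(w, zeta, M):
    """Realize w ~ zeta * sum_m w_m with each w_m in {+1, -1}."""
    k = int(round(abs(w) / zeta))        # number of signed terms that survive
    s = 1.0 if w >= 0 else -1.0
    # pad with cancelling +1/-1 pairs so every term is identical up to sign
    return [s] * k + [1.0, -1.0] * ((M - k) // 2)

w, zeta = -0.7321, 1e-4
signs = sign_decompose(w, zeta, M=10000)
assert set(signs) <= {1.0, -1.0}
# The sum reproduces w rounded to the nearest multiple of zeta.
assert abs(zeta * sum(signs) - w) <= zeta / 2
```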
The sources of simulation error are also similar to the constant Hamiltonian case. As we show in section 6, we can approximate the integrals with discrete sums to precision at a cost that is logarithmic in . The error due to the discrete sum is controlled by the choice of μ, which we need to select so that the resulting error per segment is less than . The most costly integrals (due to the size of their domain) will be the two-electron integrals in equation (12) which have integrands of the form
where and represent the three spatial degrees of freedom of two separate electrons. In section 6, we bound the cost to the quantum algorithm of estimating the corresponding integrals.
To summarize, the on-the-fly algorithm is as described below.
- (1)Decompose the Hamiltonian into an integral which is approximated as , as in equation (49).
- (2)Subdivide the simulation time t into segments, where λ is defined in equation (60).
- (3)Expand the evolution for time t/r by a truncated Taylor series as in equation (51). (Steps 1 to 3 are classical pre-processing.)
- (4)For each segment perform the following steps.
- (a)
- (b)Create the superposition of states encoded in unary, as given in equation (34), except with Λ replaced with λ.
- (c)Apply , i.e. K controlled phase shifts and applications of , to coherently execute all terms in the truncated Taylor series.
- (d)Apply the transpose of the state preparation in step (a) and step (b).
- (e)Apply oblivious amplitude amplification to obtain success with unit probability.
Note that the key difference between this algorithm and that described in section 4 is the state preparation in , which corresponds to terms in the discretized integral. The superposition of unary-encoded states is modified, but only in that Λ is replaced with λ. In the next section we detail how to construct the oracle for the discretized integral, and its cost.
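The effect of the truncation in step (3) can be checked on a toy scalar model, where a single number stands in for the Hamiltonian of one segment. The sketch below (an illustration only; the full algorithm applies this series coherently to operators) compares the order-K Taylor series of exp(−iht/r) against the exact value; the rapid decay of the error with K is what permits a truncation order logarithmic in the inverse precision.

```python
import cmath

def taylor_exp(z, K):
    """Truncated Taylor series sum_{k=0}^{K} z^k / k! of exp(z)."""
    total, term = 0j, 1 + 0j
    for k in range(K + 1):
        total += term
        term *= z / (k + 1)
    return total

z = -0.5j   # stands in for -i * h * (t / r) with a scalar "Hamiltonian" h
errs = [abs(taylor_exp(z, K) - cmath.exp(z)) for K in (2, 4, 6)]
assert errs[0] > errs[1] > errs[2]   # error falls off rapidly with truncation order K
```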
6. The integrand oracle
In section 5, we showed how one can implement the truncated Taylor series simulation technique by replacing a superposition state having amplitudes given by integrals with a superposition state having amplitudes given by their integrands, as well as a way of decomposing those integrands. We begin this section by constructing a circuit which allows us to sample from the integrands as in equation (57). First, we will need a circuit which computes values of the N spin–orbital basis functions to at , a real-valued position vector at grid point ρ. The action of each of these oracles is
where represents the binary expansion of using qubits. We will need N different circuits of this form, one for each basis function to . Usually, the molecular spin–orbital basis functions are represented as sums of Gaussians multiplied by polynomials [43]. In that case, the quantum circuit can be implemented as a reversible classical circuit that evaluates and sums the Gaussians associated with . For example, in the STO-nG basis set, each orbital is a linear combination of n Gaussian basis functions [43]. In general, Gaussian functions may be evaluated classically with complexity that is at most polylogarithmic in [45]. The use of Gaussians has historical precedent: they are simple to integrate on a classical computer. However, the use of a Gaussian basis is not necessarily an optimal strategy for a quantum implementation. We leave open the problem of determining an optimal representation of molecular orbital basis functions for evaluation on a quantum computer, and instead develop a strategy based on the model chemistries used in classical treatments of electronic structure.
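To make the reversible-evaluation idea concrete, the following classical sketch evaluates an STO-nG-style contracted s-type Gaussian at a point. The exponents and contraction coefficients here are placeholders for illustration, not actual STO-3G parameters; the quantum circuit must carry out the same arithmetic reversibly.

```python
import math

# Illustrative contraction only: these (exponent, coefficient) pairs are
# placeholders, not actual STO-3G parameters.
PRIMITIVES = [(0.3, 0.45), (1.2, 0.53), (6.4, 0.15)]

def orbital(r, center, primitives=PRIMITIVES):
    """Evaluate a contracted s-type Gaussian,
    sum_i c_i * exp(-alpha_i * |r - center|^2)."""
    d2 = sum((ri - ci) ** 2 for ri, ci in zip(r, center))
    return sum(c * math.exp(-a * d2) for a, c in primitives)
```

At the orbital's center the value is just the sum of the contraction coefficients, and it decays to a negligible value within a few widths of the broadest primitive.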
Next, we combine N different circuits, one for each , to construct a circuit which allows us to apply any of the N basis functions. This circuit will have depth and may be constructed as the block diagonal operator
Thus, is a sequence of circuits with the spin–orbital selection completely controlled on a register encoding the basis function index j. There will be a factor of in the complexity because the controlled operations need to access qubits for j, as well as qubits storing the position . In addition, the circuit needs to perform analytic operations (e.g. calculating exponentials for STO-nG), which will contribute an additional factor polynomial in . An example implementation of for four basis functions is shown in figure 3.
We now discuss how one can use to compute the two-electron integrands in equation (61). To avoid singularities that would occur when two electrons occupy the same point in space, we change variables in equation (61) so that . With this substitution, the integral becomes
Expressing in spherical polar coordinates, with , we have
We define the maximum value of any spin–orbital function as and the maximum value of its derivative in any direction as . In addition, we truncate the integral at a finite distance xmax. Now assume that we discretize in intervals of size along each degree of freedom. We can take the maximum value of to be xmax, and choose .
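The effect of the substitution can be seen in a classical sketch of the integrand: after shifting to , the Coulomb factor depends only on the relative coordinate, so the singularity sits at a single known location. The Gaussian φ below is a placeholder orbital, with all four orbital factors taken equal purely for illustration.

```python
import math

def phi(r):
    """Placeholder s-type Gaussian orbital (illustrative, not a real basis function)."""
    return math.exp(-sum(x * x for x in r))

def shifted_integrand(r1, xi_vec):
    """Two-electron integrand after substituting r2 = r1 + xi: the Coulomb
    factor becomes 1/|xi|, so the only singularity is at xi = 0."""
    xi = math.sqrt(sum(x * x for x in xi_vec))
    r2 = tuple(a + b for a, b in zip(r1, xi_vec))
    return phi(r1) ** 2 * phi(r2) ** 2 / xi   # all four orbitals taken equal
```

In spherical polar coordinates the volume element contributes a factor that cancels this remaining singularity, which is why that coordinate system is used near small values of the relative coordinate.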
The primary contribution to the complexity is in terms of the number of segments. The maximum value in the integrand of equation (65) is upper bounded by . When discretizing the integral, each term in the sum is no larger than and there are terms. Multiplying these together gives us the contribution of the integral to the scaling of our on-the-fly algorithm,
which corresponds to the factor of in equation (60). But how do and xmax scale with N? The maximum values are predetermined by the model chemistry, and hence are independent of N. Determining the appropriate value of xmax is a little more complicated.
Because the Hamiltonian is a sum of of the integrals, each integral should be approximated within error to ensure that the final error is bounded by . Since the functions can be chosen to decay exponentially, xmax can be chosen logarithmically in the allowable error . The quantum chemistry problem is always defined within a particular basis, specified as part of a model chemistry [43]. The model chemistry prescribes how many spin–orbitals, how many basis functions, and what type of basis functions should be associated with each atom in a molecule. This includes a specification for parameters of the basis functions which impose a particular maximum value as well as a cutoff distance beyond which each is negligibly small. However, the space between basis functions on different atoms must grow as the cube root of N, because the molecular volume will grow as . This would imply that the value of xmax needed scales as
Nevertheless, each individual orbital is non-negligible on a region that grows only as for a given model chemistry. It is therefore advantageous to modify the grid used for the integral so it only includes points where one of the associated orbitals is non-negligible. This can be performed at unit cost if the center of each spin–orbital function is provided in an additional register when querying the circuit . As above, the region where the orbital can be regarded as non-negligible can be chosen logarithmically in , to ensure that the overall error in the simulation is within error .
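The logarithmic choice of cutoff is easy to verify in a one-dimensional caricature: for an integrand decaying like e^(−x), the neglected tail beyond xmax is exactly e^(−xmax), so a cutoff of ln(1/δ) suffices for tail error δ. The helper below is illustrative only.

```python
import math

def xmax_for_tail(delta):
    """Cutoff x_max such that the neglected tail
    int_{x_max}^inf e^{-x} dx = e^{-x_max} is at most delta."""
    return math.log(1.0 / delta)

# Squaring the precision only doubles the cutoff: logarithmic growth.
assert abs(math.exp(-xmax_for_tail(1e-6)) - 1e-6) < 1e-15
assert abs(xmax_for_tail(1e-12) - 2 * xmax_for_tail(1e-6)) < 1e-9
```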
To be more specific, the coordinates for can be chosen to be centered around the center of orbital , with the components of only going up to a maximum value scaling as
For , we only wish to take values such that are non-negligible. Here it should be noted that the spherical polar coordinates are only advantageous if we are in a region where is near zero, where the Cartesian coordinates would have a divergence. In regions where is large, the extra factor of ξ for the integral in spherical polar coordinates increases the complexity.
Therefore, if the minimum value of such that is non-negligible is , then the maximum value of such that is non-negligible will also be . Hence we can use spherical polar coordinates, and obtain scaling as in equation (66) with xmax as in equation (68). On the other hand, if the minimum value of such that is non-negligible is , then we can use Cartesian coordinates, and the division by can only lower the complexity. We obtain a contribution to the complexity scaling as with xmax as in equation (68). Here the power of xmax is 3 rather than 5, because we divide instead of multiplying by as we did with spherical polar coordinates.
Next we consider the grid size needed to appropriately bound the error in the discretized integration. The analysis in the case where Cartesian coordinates are used is relatively straightforward. Considering a single block in six dimensions with sides of length , the value of the integrand can only vary by the maximum derivative of the integrand times (up to a constant factor). The error for the approximation of the integral on this cube is therefore that maximum derivative times . Then the number of these blocks in the integral is , giving an overall error scaling as times the maximum derivative of the integrand.
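The claim that the discretization error scales linearly with the grid spacing can be checked on a one-dimensional analogue: halving the spacing roughly halves the error of a left-endpoint Riemann sum. This is an illustration only; the six-dimensional argument proceeds the same way, block by block.

```python
import math

def riemann(f, a, b, n):
    """Left-endpoint Riemann sum of f over [a, b] on n equal intervals."""
    dx = (b - a) / n
    return sum(f(a + i * dx) for i in range(n)) * dx

f = lambda x: math.exp(-x)
exact = 1.0 - math.exp(-1.0)          # int_0^1 e^{-x} dx
e1 = abs(riemann(f, 0.0, 1.0, 1000) - exact)
e2 = abs(riemann(f, 0.0, 1.0, 2000) - exact)
assert 1.8 < e1 / e2 < 2.2            # halving dx roughly halves the error
```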
The maximum derivative of the integrand can be bounded in the following way. For the derivative with respect to any component of , we obtain the derivative of the integrand scaling as
where we have used the fact that we are only using Cartesian coordinates for . For the derivative of the integrand with respect to any component of in the numerator of the integrand, the same scaling is obtained. For derivatives with respect to components of in the denominator, the scaling is
Overall, we therefore bound the error when discretizing in Cartesian coordinates as
The analysis for spherical polar coordinates is a little more subtle, but it is largely equivalent if we scale the angular variables. It is convenient to define scaled angular variables
Then the discretization lengths for all variables are . The volume of each block in the discretization is again , and there are blocks. The total error is again therefore the maximum derivative of the integrand multiplied by .
The derivative of the integrand with respect to any component of is again given by equation (69). Multiplication by ξ gives a factor , but the change of variables to and gives division by a factor of . The derivative of the integrand with respect to or in any of the spin orbitals again gives a factor scaling as in equation (69). The derivative of the integrand with respect to ξ or in scales as in equation (70).
As a result, regardless of whether Cartesian coordinates are used or spherical polar coordinates, the error due to discretization is bounded as in equation (71). Thus, to achieve error in the integral no larger than , we require that
The total number of terms in the sum then scales as
This is quite large, but because we only need to use a number of qubits that is the logarithm of this, it only contributes a logarithmic factor to the complexity. Because the logarithm scales as , it contributes this factor to the complexity of .
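The point that only the logarithm of the term count matters can be made concrete with a back-of-the-envelope helper: indexing the grid points coherently requires a number of qubits equal to the ceiling of the base-2 logarithm of the number of terms. The particular grid parameters below are illustrative.

```python
import math

def index_qubits(x_max, dx, dims=6):
    """Qubits needed to index every point of a dims-dimensional grid
    with spacing dx, truncated at x_max along each axis."""
    n_terms = math.ceil(x_max / dx) ** dims
    return math.ceil(math.log2(n_terms))

# Roughly 10^18 grid points, yet only 60 index qubits.
assert index_qubits(10.0, 0.01, 6) == 60
```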
Given , computing the integrand in equation (65) is straightforward. We need to call four times on registers that contain and . Let denote a circuit computing the value of when queried with the point . This circuit has the following action:
The final element of our sampler circuit will be a reversible multiplier which simply multiplies together five registers in a reversible fashion. This construction of is shown in figure 4 and enables us to evaluate the integrand of equation (65), i.e.
Next we consider how to construct a circuit for the one-electron integrals in equation (11). First, one constructs N additional circuits similar to the ones in equation (62) that return as opposed to . These oracles are incorporated into a one-electron version of which is called along with a routine to compute the nuclear Coulomb interactions. The one-electron integrals have singularities at the positions of the nuclei. Similar to the two-electron integrals, these singularities can be avoided by using spherical polar coordinates. Each term in the sum over the nuclei should use spherical polar coordinates centered at that nucleus. Selection between the one-electron and two-electron routines is specified by . Putting this together with the circuit in figure 4, we can implement with gates, and, as discussed in section 5, has the same complexity.
While the gate count of is much less than the gate count of , our on-the-fly algorithm requires more segments than the database algorithm. Whereas our database algorithm requires segments where Λ is the normalization in equation (3), our on-the-fly algorithm requires segments where is the normalization in equation (59), which is accounted for in equations (60) and (66). Thus, by performing the algorithm in section 4 using instead of and taking , we see that our on-the-fly algorithm scales as
Using the scaling in equation (68), we can bound λ as
so that the overall gate count of the on-the-fly algorithm scales as
Recall that the notation indicates that logarithmic factors have been omitted. The full scaling includes a power of the logarithm of .
To summarize, in this section we have provided the algorithm for the operation used in section 5. To achieve this operation, the key steps are:
- (1)Convert from γ to , and from the sampling point ρ to the corresponding values of and .
- (2)
- (3)The circuit of step 2 uses controlled- operations which calculate the value of an orbital , and are performed using a circuit of the form in figure 3.
7. Discussion
We have introduced two novel algorithms for the simulation of molecular systems based primarily on the results of [34]. Our database algorithm involves using a database to store the molecular integrals; its gate count scales as . Our on-the-fly algorithm involves computing those integrals on-the-fly; its gate count scales as . Both represent an exponential improvement in precision over Trotter-based methods which scale as when using reasonably low-order decompositions, and over all other approaches to date.
Specifically, our database algorithm scales like where we have used the bound . However, we believe this bound is very loose. As discussed in [8, 43], the use of local basis sets leads to a number of two-electron integrals that scales as in certain limits of large molecules. Accordingly, the true scaling of the database algorithm is likely to be closer to . It also seems possible that our integration scheme is suboptimal; it might be improved by taking account of smaller values of hijkℓ.
Our asymptotic analysis suggests that these algorithms will allow for the quantum simulation of molecular systems larger than would be possible using Trotter-based methods. However, numerical simulations will be crucial in order to further optimize these algorithms and better understand their scaling properties. Just as recent work showed significantly more efficient implementations of the original Trotterized quantum chemistry algorithm [5–9], we believe the implementations discussed here are far from optimal. Furthermore, just as was observed for Trotterized quantum chemistry [7, 9], we believe our simulations might scale much better when only trying to simulate ground states of real molecules. In light of this, numerical simulations may indicate that the scaling for real molecules is much less than our bounds predict.
Acknowledgments
The authors thank Jonathan Romero, Jarrod McClean, David Poulin, and Nathan Wiebe for helpful comments. DWB is funded by an Australian Research Council Future Fellowship (FT100100761) and a Discovery Project (DP160102426). PJL acknowledges the support of the National Science Foundation under grant PHY-0955518. AAG acknowledges the support of the Office of Naval Research under award: N00014-16-1-2008. AAG and PJL acknowledge the support of the Air Force Office of Scientific Research under award FA9550-12-1-0046.
Footnotes
- 6
We use the typical computer science convention that f ∈ Θ(g), for any functions f and g, if f is asymptotically upper and lower bounded by multiples of g. O indicates an asymptotic upper bound, Õ indicates an asymptotic upper bound up to polylogarithmic factors, Ω indicates the asymptotic lower bound, and f ∈ o(g) implies f/g → 0 in the asymptotic limit.
- 7
The convention of [34] requires that the Wγ are real, non-negative scalars. This treatment remains general as arbitrary phases can be factored into the Hγ. However, we break with that convention and allow the Wγ to take arbitrary complex values. This is done for pedagogical purposes: so that we may separately describe computation of the Hγ and the Wγ for the chemistry Hamiltonian. Consequently, our equation (39) differs from the analogous equation in [34] by a complex conjugate operator.