
An integrated programming and development environment for adiabatic quantum optimization

Published 14 July 2014 © 2014 IOP Publishing Ltd

Citation: T S Humble et al 2014 Comput. Sci. Discov. 7 015006, DOI 10.1088/1749-4680/7/1/015006

Abstract

Adiabatic quantum computing is a promising route to the computational power afforded by quantum information processing. The recent availability of adiabatic hardware has raised challenging questions about how to evaluate adiabatic quantum optimization (AQO) programs. Processor behavior depends on multiple steps to synthesize an adiabatic quantum program, which are each highly tunable. We present an integrated programming and development environment for AQO called Jade Adiabatic Development Environment (JADE) that provides control over all the steps taken during program synthesis. JADE captures the workflow needed to rigorously specify the AQO algorithm while allowing a variety of problem types, programming techniques, and processor configurations. We have also integrated JADE with a quantum simulation engine that enables program profiling using numerical calculation. The computational engine supports plug-ins for simulation methodologies tailored to various metrics and computing resources. We present the design, integration, and deployment of JADE and discuss its potential use for benchmarking AQO programs by the quantum computer science community.


1. Introduction

The discovery of quantum algorithms with significant speed-ups over their classical counterparts has spurred interest in the research and development of quantum computing systems [1]. Several different but computationally equivalent models for quantum computing have emerged including, in particular, the model of adiabatic quantum computing (AQC) [2, 3]. Notionally, the AQC model for universal quantum computation corresponds to adiabatic (i.e., slow) changes in the state of a quantum physical system. While computationally equivalent to other models, AQC promises some intrinsic benefits for ensuring fault-tolerant computation and reducing system complexity [4–6].

Additional attention to the AQC model has been stimulated by the recent commercial realization of a special purpose processor that implements the adiabatic quantum optimization (AQO) algorithm [2, 3, 7]. The processor, manufactured by the company D-Wave Systems, realizes a programmable Ising spin-glass model in a transverse field [8–12]. This hardware is specialized to the AQO algorithm and it is not capable of universal computation within the AQC model, but it does provide a complete realization of a quantum computational device. This has spurred vigorous scientific studies into exactly how the current hardware performs quantum computation, including efforts to differentiate its observed behavior from classical physical processes [13, 14]. Moreover, the AQO algorithm is broadly applicable to combinatorial optimization problems and, consequently, the D-Wave processor has garnered attention for its potential use in a number of application domains. Examples include problems in classification [15, 16], machine learning [17], graph theory [18–20], artificial neural networks [21], and protein folding [22], among others [23].

The availability of quantum hardware allows for benchmarking performance relative to both quantum and classical metrics of computational power. Understanding observed behavior requires a detailed consideration of how the program and hardware interact as well as how the defined metrics represent performance. For example, it is known that performance of the AQO algorithm depends strongly on the specific programming and hardware operation schedules as well as the problem input [24, 25]. Indeed, whereas some studies of the AQO algorithm have reported runtimes that scale polynomially in problem size [2, 26–28], others have suggested worst-case exponential behavior or trapping in local minima [29]. More generally, it has proven difficult to predict the run times of particular problem instances due to the complexity of the underlying quantum dynamics. An essential step in understanding these behaviors is to capture the influence that different programming choices have on observed run times [7, 29–36].

A significant source of complexity in analyzing implementations of the AQO algorithm arises from the multiple steps undertaken to synthesize the adiabatic quantum program. We provide a brief summary of the process with elaboration of the detailed synthesis deferred to section 2. Figure 1 illustrates that an adiabatic quantum programming process begins with the reduction of a classical combinatorial optimization problem to a quadratic unconstrained binary optimization (QUBO) problem. The QUBO problem is then mapped into the parameters of an equivalent logical Ising Hamiltonian. The logical Ising Hamiltonian must then be mapped onto the processor as a physical Ising Hamiltonian, a process defined as embedding. This transformation of the reduced problem into a physically realizable program depends on both the hardware layout and the available hardware controls. Ultimately, the computed solution will depend on all previous decisions as well as the actual physics underlying the processor.

Figure 1. A flowchart highlighting the multiple steps taken to synthesize an adiabatic quantum program for the AQO algorithm. The steps are elaborated in detail in section 2. Briefly, a QUBO problem serves as the classical input to program synthesis while the computed QUBO solution represents the value returned by the program. Each block in the diagram corresponds to a distinct intermediate representation of the quantum program that depends on the choices made in the previous steps.

It is currently poorly understood how modifications at the various stages in figure 1 impact the correctness and efficacy of computed solutions. Reconciling the seemingly contradictory results from previous studies as well as understanding more recent experimental benchmarks requires investigating how programming choices impact performance. Motivated by this, we have developed a software environment that captures each step in deriving a program for the AQO algorithm. Our framework does not address programming for a universal adiabatic quantum computer, but instead it is specialized to the AQO algorithm and the Ising spin-glass physics underlying the D-Wave processors. The software ties together the steps from figure 1 into an integrated workflow that includes the development of adiabatic quantum programs as well as the collection of diagnostic information for addressing questions about performance. In the absence of actual hardware, we use numerical simulation to evaluate the variety of programming and operational choices that can affect program behavior. Our simulation capabilities employ multiple numerical methods with the possibility for user extensions. Another important part of the framework is the ability to analyze both the solutions recovered by simulations and the intermediate dynamics and Hamiltonians. With the publication of recent benchmarks from available hardware [34, 35, 37], the ability to make comparisons between simulated and experimental results can be useful for understanding observed behavior.

The Jade Adiabatic Development Environment (JADE) implements the programming steps highlighted in figure 1. JADE capabilities include capturing input for a high-level optimization QUBO problem as well as generating the low-level quantum physical program representation. JADE is further integrated with a quantum simulation engine that supports user-defined methodologies for running diagnostic analyses. We present explicit examples of several simulation methodologies based on finite differencing as well as diagnostics derived from the time-dependent eigenspectra and eigenstate populations.

Because the JADE programming model is tailored to the AQO algorithm and Ising spin-glass physics, we suggest that JADE may be useful in supporting ongoing benchmark studies of the D-Wave Systems processors. We do not address the issue of developing benchmarks or methods of evaluating quantum program efficiency, but we do provide a concrete realization of the integrated computational environment needed to carry out such efforts. In particular, our development environment formalizes methods used for programming the quantum processor while offering an interface to simulation for computing detailed diagnostics about how a program executes. For completeness, we note that there is some superficial similarity between JADE and the proprietary BlackBox Compiler from D-Wave Systems, which provides an interface for solving problems on hardware. The primary distinction of JADE is that it enables explicit control of the programming steps for the purpose of testing new programming techniques. Conceptually, a JADE program could be used to drive the actual quantum processor by interfacing with the hardware control system, but we have not explored that option here.

This paper is organized as follows. In section 2, we summarize the theoretical background leading to figure 1, including the quantum physical basis for AQO. In section 3, we present the model-based design of JADE including the system context, implementations of each component, and our test-driven framework for program verification and validation. We present usage results for the case of a recent benchmark problem in section 4 and we offer conclusions in section 5.

2. Adiabatic quantum programming

In this section, we provide a summary of the physical theory and computer science underlying adiabatic quantum programming. This includes the quantum physical description of AQC as well as the steps taken to map the AQO algorithm to a hardware control schedule.

2.1. Quantum computational model

The physical basis for the AQC model was first established in terms of quantum annealing by Kadowaki and Nishimori [38]. Farhi et al as well as others later formalized these ideas as a means of universal computation [2, 7, 39]. Several efforts have since shown the equivalence between the AQC model and other quantum computing models [40, 41]. In a generalized AQC algorithm [2, 3, 7], a quantum physical system of n qubits is evolved under the Schrödinger equation

$i\hbar \frac{\partial }{\partial t}\left| \psi \left( t \right) \right\rangle =H\left( t \right)\left| \psi \left( t \right) \right\rangle$   (1)

according to a time-dependent Hamiltonian

$H\left( t \right)=A\left( t \right){{H}_{I}}+B\left( t \right){{H}_{P}}$   (2)

that interpolates between the initial Hamiltonian ${{H}_{I}}$ and the final (problem) Hamiltonian ${{H}_{P}}$ from an initial time $t={{t}_{0}}$ to a final time t = T. We shall assume ${{t}_{0}}=0$. In equation (2), the schedules A(t) and B(t) satisfy the boundary conditions $A\left( 0 \right)\gg B\left( 0 \right)$ and $A\left( T \right)\ll B\left( T \right)$, while the quantum system is initially prepared in the lowest-energy eigenstate of ${{H}_{I}}$. Given the instantaneous eigenvalue equations

$H\left( t \right)\left| {{\tilde{\phi }}_{j}}\left( t \right) \right\rangle ={{\tilde{E}}_{j}}\left( t \right)\left| {{\tilde{\phi }}_{j}}\left( t \right) \right\rangle$   (3)

with $j=0,1,\ldots {{2}^{n}}-1$ labeling states of monotonically increasing energy, the initial state condition implies $\left| \psi \left( 0 \right) \right\rangle =\left| {{\tilde{\phi }}_{0}}\left( 0 \right) \right\rangle $.

We define the energy gap between the ground and first excited state as

$\Delta \left( t \right)={{\tilde{E}}_{1}}\left( t \right)-{{\tilde{E}}_{0}}\left( t \right)$   (4)

in which we neglect possible ground state degeneracy for simplicity. If the energy gap is always strictly greater than zero, i.e., $\forall t:\Delta \left( t \right)>0$, then the state $\left| \psi \left( t \right) \right\rangle $ will remain in the instantaneous ground state with high probability provided certain bounds on the rate of change of the Hamiltonian are satisfied [2]. Consequently, evolution under equation (1) to the time T prepares the final state $\left| \psi \left( T \right) \right\rangle $ in the lowest energy eigenstate of ${{H}_{P}}$. By making a judicious choice of the final Hamiltonian ${{H}_{P}}$, the prepared final state may encode the solution to a computation. In order to ensure the computation is correct, the adiabatic condition must be satisfied. In its simplest interpretation, this implies the global time T must be much larger than the inverse of the minimum spectral gap of H(t) [2]. More sophisticated analysis shows that better results may be obtained by adjusting the evolution schedule according to the local energy gap [30]. In either case, failure to ensure the adiabatic condition risks the possibility that the final state will not belong to the ground state manifold of ${{H}_{P}}$ but rather to an excited state, resulting in an error in the computation. It is notable that the spectral gap depends not only on the problem to be solved, but also on how the problem is implemented as a quantum program. Understanding how the input influences program run times and error rates is an open question in quantum computer science.

2.2. AQO

In specializing to the AQO algorithm, we require a quantum logical system of n qubits with an initial Hamiltonian

${{H}_{I}}=-\sum\limits_{i=1}^{n}{{{X}_{i}}}$   (5)

and final Ising Hamiltonian

${{H}_{P}}=\sum\limits_{i\in {{V}_{P}}}{{{\alpha }_{i}}{{Z}_{i}}}+\sum\limits_{\left( i,j \right)\in {{E}_{P}}}{{{\beta }_{i,j}}{{Z}_{i}}{{Z}_{j}}}$   (6)

where ${{X}_{i}}$ and ${{Z}_{i}}$ are the Pauli operators for the ith qubit [1], ${{\alpha }_{i}}$ is the bias on the ith qubit, and ${{\beta }_{i,j}}$ is the coupling between qubits i and j. As shown in the section below, the graph ${{G}_{P}}=\left( {{V}_{P}},{{E}_{P}} \right)$ with vertex set ${{V}_{P}}$ $\left( \left| {{V}_{P}} \right|\equiv n \right)$ and edge set ${{E}_{P}}$ defines an input optimization problem, cf the weight matrix ${\bf P}$ in equation (7). The final Hamiltonian is diagonal in the basis defined by the tensor products of the $\pm 1$ eigenstates of the ${{Z}_{i}}$ operators. This basis will also serve as the computational basis. For comparison, the ground state of the initial Hamiltonian (and initial state of the AQO algorithm) is the symmetric superposition of these computational basis states and has an eigenvalue $-n$.

An important consequence arising from the choice of the initial and final Hamiltonians, respectively, equations (5) and (6), is that the time-dependent Hamiltonian H(t) of equation (2) is not capable of universal adiabatic quantum computation. Extending the form of the Hamiltonian beyond the Ising model, for example, to the 2-local ZZXX Hamiltonian of Biamonte and Love [42], would support universal computation, but we do not consider that case here.
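To make the construction in equations (2), (5), and (6) concrete, the following sketch assembles the $2^{n}\times 2^{n}$ matrix of H(t) for a small register using the Eigen library (the same library used by the Sapphire plug-ins of section 3.7.1). The helper functions, the data layout for the biases and couplings, and the linear schedule shown here are illustrative assumptions rather than part of the JADE code.

```cpp
#include <tuple>
#include <vector>
#include <Eigen/Dense>
#include <unsupported/Eigen/KroneckerProduct>

using Op = Eigen::MatrixXd;   // H_I and H_P are real symmetric in this basis

Op pauliX() { Op X(2, 2); X << 0, 1, 1, 0; return X; }
Op pauliZ() { Op Z(2, 2); Z << 1, 0, 0, -1; return Z; }

// Place a single-qubit operator on qubit i of an n-qubit register.
Op onQubit(const Op& op, int i, int n) {
  Op out = Op::Identity(1, 1);
  for (int k = 0; k < n; ++k) {
    Op factor = (k == i) ? op : Op(Op::Identity(2, 2));
    out = Eigen::kroneckerProduct(out, factor).eval();
  }
  return out;
}

// H(t) = A(t) H_I + B(t) H_P with H_I = -sum_i X_i and
// H_P = sum_i alpha_i Z_i + sum_{(i,j)} beta_ij Z_i Z_j (equations (2), (5), (6)).
// The linear schedule below is only one simple choice satisfying
// A(0) > B(0) and A(T) < B(T).
Op hamiltonian(double t, double T, int n,
               const std::vector<double>& alpha,
               const std::vector<std::tuple<int, int, double>>& beta) {
  const long dim = 1L << n;
  Op HI = Op::Zero(dim, dim), HP = Op::Zero(dim, dim);
  for (int i = 0; i < n; ++i) {
    HI -= onQubit(pauliX(), i, n);
    HP += alpha[i] * onQubit(pauliZ(), i, n);
  }
  for (const auto& [i, j, b] : beta)
    HP += b * onQubit(pauliZ(), i, n) * onQubit(pauliZ(), j, n);
  const double A = 1.0 - t / T, B = t / T;
  return A * HI + B * HP;
}
```

For the eight-qubit example of section 4, this brute-force dense construction is tractable; for larger registers a sparse representation would be needed.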

2.3. QUBO

Any binary optimization problem (BOP) can be mapped into the form of the final Hamiltonian in equation (6). In doing so, we define the classical input to the AQO algorithm as a QUBO problem. This is because non-binary as well as constrained optimization problems can be reduced to QUBO [43], with multiple methods for performing the reduction available [44]. The QUBO problem is to find

${{{\bf x}}^{*}}=\arg \underset{{\bf x}\in {{\left\{ 0,1 \right\}}^{n}}}{\mathop{\min }}\,{{{\bf x}}^{\mathsf{T}}}{\bf P}{\bf x}$   (7)

where ${\bf x}$ is a vector of n binary variables with ${{x}_{i}}\in \left\{ 0,1 \right\}$ and ${\bf P}$ is an n-by-n symmetric real-valued cost matrix. We use the weight matrix ${\bf P}$ to define the weighted adjacency matrix of the input (problem) graph ${{G}_{P}}$ introduced in equation (6). The graph ${{G}_{P}}$ has a vertex set ${{V}_{P}}$ of size $\left| {{V}_{P}} \right|\equiv n$ and an edge set ${{E}_{P}}$ defined as $\left( i,j \right)\in {{E}_{P}}$ iff ${{P}_{i,j}}\ne 0$. From this point of view, programming the AQO algorithm requires mapping the matrix ${\bf P}$ to the biases and couplings of the Ising Hamiltonian. It has been shown previously by Choi that parameterization of the logical Ising Hamiltonian in equation (6) may be given in terms of the QUBO problem as [24]

Equation (8)

and

Equation (9)

We may also add an energy shift to the Ising Hamiltonian in equation (6) of the form

Equation (10)

in order to match the energies of the solution state. Although this shift does not affect the solution obtained using AQC, it must be accounted for in reporting the minimal value in equation (7).
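To see where a mapping of this kind comes from, substitute ${{x}_{i}}=\left( 1+{{z}_{i}} \right)/2$ with ${{z}_{i}}=\pm 1$ into equation (7) and use $x_{i}^{2}={{x}_{i}}$ together with the symmetry of ${\bf P}$. The exact coefficients appearing in equations (8)–(10) depend on the sign and normalization conventions adopted in [24], so the expression below is an illustrative sketch rather than the paper's equations:

${{{\bf x}}^{\mathsf{T}}}{\bf P}{\bf x}=\sum\limits_{i}{{{P}_{i,i}}{{x}_{i}}}+2\sum\limits_{i<j}{{{P}_{i,j}}{{x}_{i}}{{x}_{j}}}=\sum\limits_{i}{\left( \frac{{{P}_{i,i}}}{2}+\sum\limits_{j\ne i}{\frac{{{P}_{i,j}}}{2}} \right){{z}_{i}}}+\sum\limits_{i<j}{\frac{{{P}_{i,j}}}{2}{{z}_{i}}{{z}_{j}}}+\left( \sum\limits_{i}{\frac{{{P}_{i,i}}}{2}}+\sum\limits_{i<j}{\frac{{{P}_{i,j}}}{2}} \right).$

Identifying each ${{z}_{i}}$ with the eigenvalue of the corresponding ${{Z}_{i}}$ operator reproduces an Ising Hamiltonian of the form in equation (6), and the final constant term is the origin of the energy shift discussed around equation (10).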

2.4. Hardware embedding

Whether or not the logical Hamiltonian in equation (6) is supported directly by a given processor depends on the available connectivity of that hardware. We express the connectivity of a targeted processor in terms of its hardware graph ${{G}_{H}}=\left( {{V}_{H}},{{E}_{H}} \right)$. When any vertex can be coupled to any other vertex and $\left| {{V}_{H}} \right|\geqslant \left| {{V}_{P}} \right|$, then it is possible to support all possible input problems using a one-to-one mapping between the logical and physical qubits and the biases and couplings of the physical Hamiltonian. However, when ${{G}_{H}}$ is less than fully connected, then there are certain input problems that will not map directly into hardware. In such circumstances, it may be possible to embed the problem graph ${{G}_{P}}$ into the hardware graph ${{G}_{H}}$ via graph minor embedding [25, 45].

We formally define the minor embedding of a graph ${{G}_{P}}$ into a graph ${{G}_{H}}$ as a mapping $\phi :{{V}_{P}}\to {{V}_{H}}$ such that:

  • (i)  
    each vertex i in ${{V}_{P}}$ is mapped to the vertex set of a connected subgraph ${{T}_{i}}$ of ${{G}_{H}}$.
  • (ii)  
    if $\left( i,j \right)\in {{E}_{P}}$, then there exist ${{\tau }_{i}},{{\tau }_{j}}\in {{V}_{H}}$ such that ${{\tau }_{i}}\in {{T}_{i}}$, ${{\tau }_{j}}\in {{T}_{j}}$, and $\left( {{\tau }_{i}},{{\tau }_{j}} \right)\in {{E}_{H}}$.

If such a mapping ϕ exists, then ${{G}_{P}}$ is minor-embeddable in ${{G}_{H}}$, or ${{G}_{P}}$ is a minor of ${{G}_{H}}$. In subsequent discussions, we simply use the term embedding as a reference to minor embedding.

In adiabatic quantum programming, the vertices of the input graph ${{G}_{P}}$ represent the bits of a candidate solution to the QUBO problem, while the edges represent the presence of non-zero coupling coefficients, as defined in equations (8) and (9), respectively. The vertices of the hardware graph ${{G}_{H}}$ represent the physical qubits and the edges represent the couplings between qubits that are available in the hardware. An embedding maps each vertex in ${{V}_{P}}$ to a subset of ${{V}_{H}}$ and each edge in ${{E}_{P}}$ to edges between these subsets. When an embedding exists, then the resulting subgraph ${{G}^{*}}=\left( {{V}^{*}},{{E}^{*}} \right)$ of the hardware graph defines the physical Ising model

${{H}_{{{G}^{*}}}}=\sum\limits_{k\in {{V}^{*}}}{\alpha _{k}^{*}{{Z}_{k}}}+\sum\limits_{\left( k,\ell \right)\in {{E}^{*}}}{\beta _{k,\ell }^{*}{{Z}_{k}}{{Z}_{\ell }}}$   (11)

The bias and coupling coefficients $\alpha _{k}^{*}$ and $\beta _{k,\ell }^{*}$ depend on the selected embedding ϕ per the requirements (i) and (ii) listed above. The physical Ising coefficients are defined as [45]

Equation (12)

and for $k\ne \ell $

Equation (13)

where $edges\left( {{T}_{i}},{{T}_{j}} \right)$ is the number of edges between subgraphs ${{T}_{i}}$ and ${{T}_{j}}$. The constant J is chosen sufficiently large to force the qubits in each subgraph to be strongly correlated, with an upper bound on its value given previously by Choi [45]. Setting the embedded Ising model coefficients requires knowledge of the matrix ${\bf P}$ and the selected embedding implied by ${{G}^{*}}$ [25, 45]. The embedding need not be unique and, consequently, different instances of the Hamiltonian in equation (11) may correspond to the same logical problem of equation (7).
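The sketch below shows one even-splitting convention that satisfies these requirements: each logical bias is divided uniformly over the qubits of its tree, each logical coupling is divided uniformly over the hardware edges joining the two trees, and a strong coupling of magnitude J is placed on the edges inside each tree. The data structures, the uniform splitting, and the ferromagnetic sign are assumptions made for illustration; they are not necessarily the exact expressions of equations (12) and (13).

```cpp
#include <algorithm>
#include <map>
#include <set>
#include <utility>
#include <vector>

// tree[i] holds the physical qubits of the connected subgraph T_i representing
// logical vertex i; hardwareEdges holds the edges of G_H as ordered pairs (k < l).
struct PhysicalIsing {
  std::map<int, double> bias;                        // alpha*_k
  std::map<std::pair<int, int>, double> coupling;    // beta*_{k,l}
};

PhysicalIsing embedCoefficients(const std::vector<double>& alpha,
                                const std::map<std::pair<int, int>, double>& beta,
                                const std::vector<std::set<int>>& tree,
                                const std::set<std::pair<int, int>>& hardwareEdges,
                                double J) {
  PhysicalIsing out;
  // Split each logical bias evenly over the qubits of its tree.
  for (std::size_t i = 0; i < tree.size(); ++i)
    for (int k : tree[i]) out.bias[k] = alpha[i] / tree[i].size();
  // Strong intra-tree couplings keep each tree's qubits correlated
  // (a ferromagnetic sign convention is assumed here).
  for (const auto& T : tree)
    for (int k : T)
      for (int l : T)
        if (k < l && hardwareEdges.count({k, l})) out.coupling[{k, l}] = -J;
  // Split each logical coupling evenly over the hardware edges joining the trees.
  for (const auto& [edge, b] : beta) {
    std::vector<std::pair<int, int>> between;
    for (int k : tree[edge.first])
      for (int l : tree[edge.second]) {
        auto e = std::make_pair(std::min(k, l), std::max(k, l));
        if (hardwareEdges.count(e)) between.push_back(e);
      }
    for (const auto& e : between) out.coupling[e] = b / between.size();
  }
  return out;
}
```

A valid embedding guarantees at least one hardware edge between coupled trees, which keeps the division in the final loop well defined.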

A key dependency in finding an embedding is the target hardware graph ${{G}_{H}}$. The hardware graph defines the vertices and connectivity that are available to express the Ising model. An example hardware graph is shown in figure 2. Finding those graphs that can be embedded into a fixed hardware graph is an example of subgraph isomorphism, which is known to be NP-Complete [46]. For small hardware graphs, it is tractable to calculate the maximal minors of the graph, i.e., the minors of ${{G}_{H}}$ whose subgraphs represent all other graphs contained in ${{G}_{H}}$ [25]. However, this is a brute force approach and therefore does not scale favorably with hardware size. Similarly, finding complete graphs as minors of an arbitrary hardware graph is known to be NP-Complete [47]. Alternatives to these brute force approaches include heuristic algorithms that incorporate knowledge of ${{G}_{H}}$ or that limit the types of input problems [25].

Figure 2. A hardware graph for the Rainier processor produced by D-Wave Systems. The design is a 4 × 4 lattice of interconnected unit cells, with each unit cell expressed as a ${{K}_{4,4}}$ graph. The more recent Vesuvius processor has a similar design using an 8 × 8 lattice of unit cells. The geometry of the hardware plays an important role in determining which graphs can be embedded.

At this point, we emphasize that the role of minor embedding is not as simple as identifying a physical Ising model that is equivalent to the logical Hamiltonian. Indeed, the embedding of a problem into a processor is not unique and, moreover, it is not well understood how different embeddings influence program behavior. There are known tradeoffs in the amount of time spent finding an embedding relative to the size of the embedded problem [25], but it remains unclear how to account for those costs when benchmarking program performance.

In addition, the current approach to hardware embedding taken by JADE follows the decomposition of a BOP into a QUBO form using quadratization, i.e., decomposing into quadratic form [44]. However, an alternative programming sequence is to map a BOP directly into a multi-linear Ising model that is then decomposed into bilinear form [48]. The latter approach has led to the development of generalized gadgets [49] and, more recently, to resource efficient gadgets that replace multi-linear terms in the Hamiltonian with bilinear ones [50–52]. Gadget decompositions introduce additional ancilla qubits in much the same way that quadratization introduces ancilla bits. Biamonte has presented decompositions minimal in the number of gadget ancillae that would be especially relevant for comparing performance [50]. We have not explored the use of gadgets in the JADE programming model, or compared their overhead with that of quadratization, but we believe that the impact of this alternative programming model should be investigated.

2.5. Hardware schedules and program execution

We restrict our discussion to AQO algorithms that use a time-dependent Hamiltonian fitting the form of equation (2), which interpolates between an initial Hamiltonian ${{H}_{I}}$ and the problem Hamiltonian ${{H}_{P}}$ according to the time-dependent annealing schedules A(t) and B(t). More generally, individual biases and couplings can be time-dependent, e.g., ${{\alpha }_{i}}={{\alpha }_{i}}\left( t \right)$. In either case, the time-dependent schedules specify the rate at which the total Hamiltonian H(t) changes and, consequently, they play an important role in the computational error rates. In particular, the final time T needs to be sufficiently large to ensure the validity of the adiabatic condition, namely,

$T\gg \frac{\mathcal{E}}{{{\Delta }^{*2}}}$   (14)

where ${{\Delta }^{*}}={{{\rm min} }_{t}}\;\Delta \left( t \right)$ is the global minimum of the spectral gap defined in equation (4) and $\mathcal{E}={{{\rm max} }_{t}}\;\left\langle {\rm d}H\left( t \right)/{\rm d}t \right\rangle $ is the maximal rate of change during evolution [2]. In the absence of information about $\Delta \left( t \right)$, it is difficult to ensure the adiabatic condition is satisfied. This uncertainty is one source of the difficulty in benchmarking adiabatic quantum programs. Recent results on amplifying spectral gaps [53] and developing fault-tolerant programs [54] suggest new methods for mitigating this uncertainty.

Although the annealing schedules are sufficient for coarsely specifying program execution, it is ultimately necessary to provide the physical implementation of those schedules in terms of hardware controls. The hardware controls that are available for tuning the biases and couplings of a processor must be capable of expressing programmed schedules. However, available controls are highly dependent on the physics underlying a processor, and ensuring the exact implementation of an arbitrary annealing schedule may not be possible. Limitations on annealing schedules arising from constraints and dependencies of control values create additional uncertainty in the benchmarking effort. Accounting for control constraints and quantization noise is necessary to provide a clear picture of how processor differences impact program behavior. For example, in the case of the family of processors from D-Wave Systems, biases and couplings can be mapped directly to models for the underlying superconducting Josephson junctions. However, the precision of this mapping is limited by the resolution of the on-board digital-to-analog converters [8, 10].
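As a toy illustration of this kind of quantization noise, the function below rounds a programmed coefficient to the nearest level of a hypothetical b-bit digital-to-analog converter; the range, level spacing, and rounding rule are assumptions and do not describe the actual D-Wave control electronics.

```cpp
#include <algorithm>
#include <cmath>

// Round a programmed Ising coefficient to the nearest level of a hypothetical
// b-bit DAC spanning [-maxVal, +maxVal]. This only illustrates how limited
// control resolution becomes a quantization error on a bias or coupling value.
double quantize(double value, double maxVal, int bits) {
  const double levels = std::pow(2.0, bits) - 1.0;   // number of steps across the range
  const double step = 2.0 * maxVal / levels;
  const double q = std::round((value + maxVal) / step) * step - maxVal;
  return std::clamp(q, -maxVal, maxVal);             // C++17
}
```

Comparing simulations run with the ideal and the quantized coefficients is one way to estimate how much control precision alone perturbs program behavior.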

In addition to the constraints expected from hardware design, it is also necessary to anticipate the influence of noise on program behavior. Two types of noise affecting quantum dynamics are classical noise in the controls and quantum noise in the system dynamics. Quantum noise may be modeled as an undesired interaction between computational qubits and non-control elements of the hardware. A specific example is the case of thermal influences on the quantum dynamics, which invalidate the pure state description in section 2 and undermine the adiabatic conditions [55]. Similarly, classical noise in the hardware controls yields a mixed-state description of the quantum dynamics and may bias program execution away from the solution of interest.

Once the time-dependent behavior of the Hamiltonian H(t) has been fully specified, it remains to execute the program. As noted before, the typical sequence begins by initializing the quantum computational register in the ground state of the initial Hamiltonian ${{H}_{I}}$. How initialization is implemented varies with processor and, more important, it may not be implemented perfectly. This additional source of noise must also be accounted for in evaluating program behavior as it is likely to influence the computational result. The remaining step in execution is to carry out the hardware control schedule and, therefore, the programmed computation.

2.6. Computational readout and problem solution

After evolving to the final time T, the state of the computational register is determined using a suitable measurement or readout method. For the case of the AQO algorithm, the ground states at time T are computational eigenstates and, therefore, readout implies a direct measurement in the computational (Z) basis. As with program execution, it is more realistic to describe the readout process in terms of the hardware controls. This description includes capturing any noise or uncertainty in the measurement process.

The bit string generated from computational readout is the result of the quantum annealing process. However, mapping this result back to a solution for the original QUBO problem requires decoding measurements according to the inverse of the embedding map. For those cases where a tree of physical qubits represents a single logical qubit, it is necessary to check the value of all such qubits. In cases where measurement results within a tree disagree, various strategies can resolve the uncertainty. One simple example is to use a majority vote. After decoding the computational readout, a solution to the original QUBO problem is produced and the program is complete. It may be necessary to repeat the execution of the program, for example, to gather statistics on the readout or solution states; in such cases, the steps performed are similar to those described above.
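A minimal sketch of the majority-vote decoding mentioned above is given below; the data layout, with one list of physical qubits per logical bit, and the tie-breaking rule are assumptions made for illustration.

```cpp
#include <vector>

// readout[k] is the measured value (0 or 1) of physical qubit k; tree[i] lists
// the physical qubits that represent logical bit i under the embedding.
// Ties are broken toward 1 here; other strategies are possible.
std::vector<int> decodeByMajority(const std::vector<int>& readout,
                                  const std::vector<std::vector<int>>& tree) {
  std::vector<int> logical(tree.size(), 0);
  for (std::size_t i = 0; i < tree.size(); ++i) {
    int ones = 0;
    for (int k : tree[i]) ones += readout[k];
    logical[i] = (2 * ones >= static_cast<int>(tree[i].size())) ? 1 : 0;
  }
  return logical;
}
```

Alternative strategies, such as discarding inconsistent readouts or re-running the program, fit the same interface.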

3. JADE

As presented in section 2, programming the AQO algorithm for an arbitrary QUBO is a highly tunable process. In this section, we describe a software-based implementation of the process that provides control over each of the programming steps shown in figure 1. We also describe the integration of this environment with a computational engine that uses numerical simulation for profiling these programs. The simulator is intended to provide insight into how programming choices impact program performance.

JADE is motivated by the need to provide theoretical benchmarks for current and future AQC devices. In particular, it was designed to capture insights into the behavior of processor architectures. This is accomplished by using a numerical simulator backend to calculate the time-dependent processor state with respect to the programmed algorithm. JADE provides both an engine for simulating the programs that run on AQC devices and a development environment for specifying program input. In addition, JADE provides methods for constructing adiabatic quantum processor configurations, i.e., the quantum hardware, and for debugging the implementation.

JADE is built using model-driven development, a software development methodology with a strong focus on system use cases, as well as architectural extensibility and stability [56]. This methodology allows developers to manage system complexity and rigorously verify and validate the final product implementation. Our model-based approach uses the Unified Modeling Language to capture design decisions and trace requirements [57]. We also rely heavily on an object-oriented programming paradigm and software design best practices, such as test-driven development [58].

3.1. Use cases

JADE is designed to provide infrastructure for developing AQO programs and a computational engine for simulating them. This includes functionality for parsing input optimization problems, configuring new quantum hardware, and performing program profiling. Given this broad scope in functionality, JADE was designed for two distinct actors: the Analyst and the Engineer.

An Analyst represents a JADE user whose primary goal is to solve a discrete optimization problem. The Analyst requires a development environment that automates programming choices and execution sequences. In contrast, an Engineer expects to perform additional programming tasks such as customizing low-level Hamiltonian parameters, constructing specialized processor configurations, and defining embedding maps or annealing schedules. As seen in figure 3, this desired JADE functionality is encapsulated by the following use case model.

  • Create a Problem—the Analyst constructs a discrete optimization problem as either a BOP or QUBO problem. In the case of the former, JADE converts the BOP to its corresponding QUBO representation. This use case creates a Problem entity.
  • Solve a Problem—the Analyst selects a previously created Problem to solve using the AQO algorithm. This use case returns a Solution entity, which is the computed solution to the input problem.
  • Create a Processor—the Engineer creates a processor configuration by specifying the number and connectivity of physical qubits. The Engineer may also customize the processor by specifying classical and quantum noise models as well as hardware control constraints. This use case creates a Processor entity.
  • Create a Program—the Engineer creates a quantum program that is either a logical program or a physical program. A logical program is synthesized from selected Problem, Processor, and Embedding entities, while a physical program is synthesized only from a Processor. For the physical program, the Engineer sets the parameters of the final Ising Hamiltonian including biases, couplings, and annealing schedules. Both instances of this use case create a Program entity.
  • Execute a Program—the Engineer executes a Program. With JADE, the Engineer submits the Program for simulation along with any profiling and simulations options. This use case creates a Result entity that corresponds to the computational readout following program execution. Note that the Result of a Program does not correspond necessarily with the Solution to a Problem, as the Result may require additional processing to generate a Solution.

Figure 3. The Analyst and Engineer actors are distinguished by how they use JADE. The Analyst is exposed to only a high-level input problem and its computed solution. The Engineer has the ability to tune the low-level programming steps and to analyze the computational readout. The Engineer generalizes the Analyst, as indicated by the open arrow.

3.2. System context

Alongside the use case model, we also present the system context model in figure 4. The system context describes the communication between JADE and its environment as driven by the use case model. The system context details how the Analyst and Engineer interact with the various input–output (I/O) data. As shown in figure 4, the six types of I/O data are: Problem, Processor, Embedding, Program, Result and Solution. These I/O entities are further specified in section 3.3.

Figure 4. The JADE system context represents the interactions between the Analyst and Engineer actors with the top-level data entities and the software system. Straight lines indicate the association between the actors and the system, while arrows indicate how the actors interact with the input and output from the system.

An Analyst only has access to Problem and Solution entities. However, we anticipate that JADE must synthesize other entities internally, for example, a Program is required to generate a Solution. Consequently, JADE will need private non-interactive methods for internal synthesis of the remaining entities. Although Processor, Embedding, and Program are generated by the system during the Analyst workflow, we do not explicitly model that dependency in figure 4.

3.3. Component architecture

JADE comprises three distinct components: JadeD, Sapphire, and NiCE. The JadeD component is responsible for data creation, management, synthesis, and verification, i.e., domain logic. The Sapphire component is responsible for the simulation of quantum programs according to user-defined plug-ins. The NiCE component, a pre-existing open source project, is used to integrate the JadeD and Sapphire components and to manage the computational work flow [59]. Each component provides an independent application programming interface (API).

Figure 5 highlights the interactions between the three components and the associated interfaces. Both JadeD and Sapphire couple to NiCE, which provides a user-driven coupling between program development and program execution. There is a dependency between JadeD and Sapphire due to the latter's need to parse Program data structures. This dependency is restricted to a very narrow subset of the JadeD functionality and we expect future versions will isolate it in a separate shared library.

Figure 5. JADE comprises three components: JadeD, NiCE, and Sapphire. The interfaces presented for each component are used to manage component interactions and maintain the separation of concerns between domain logic (JadeD), workflow management (NiCE), and numerical simulation (Sapphire). The arrow indicates the uni-directional dependency of Sapphire on JadeD, while the straight line between JadeD and NiCE indicates a bi-directional association.

3.4. JadeD

The JadeD component handles the creation and manipulation of quantum program data by exposing a basic create, retrieve, update, and delete interface. This interface enables generation, manipulation, and persistence of Entity data objects, which represent high-level abstractions of the various types of I/O data. The functional scope of JadeD includes parsing user-provided input into verified formats, validating that input, and generating subclasses of Entity tailored to specific input types. We define an IJadeD interface to specify how the JadeD component interacts with clients. By defining a formal interface, we are able to offer the option of supporting multiple JadeD variants.

As shown in figure 6, the IJadeD interface includes a number of methods for creating and storing entity instances. The JadeD class is a realization of this interface that provides a concrete implementation of the defined functionality. The JadeD implementation presented here uses a variety of object-oriented design patterns with the factory design pattern being the most significant [57]. The factory pattern is used to create and modify entities in an abstract manner, which pushes the underlying details of construction to the varying entity subclasses. A registry enhances this factory pattern by permitting the sharing of objects across domain boundaries. The use of factories and a data registry allows future developers to add new entity specializations in an easy and efficient manner. In figure 6, the factory pattern and the corresponding data registry are implemented as EntityFactory and EntityRegistry, respectively.
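The sketch below illustrates the factory-plus-registry idea in roughly thirty lines; the class names EntityFactory and EntityRegistry follow figure 6, but the method signatures and ownership model are assumptions rather than the actual JadeD API.

```cpp
#include <functional>
#include <map>
#include <memory>
#include <string>
#include <utility>

struct Entity { virtual ~Entity() = default; };
struct Problem : Entity { /* ... */ };
struct Processor : Entity { /* ... */ };

// Factory: maps a type name to a creation function, so new Entity
// specializations can be added without modifying client code.
class EntityFactory {
 public:
  using Creator = std::function<std::unique_ptr<Entity>()>;
  void registerType(const std::string& type, Creator c) { creators_[type] = std::move(c); }
  std::unique_ptr<Entity> create(const std::string& type) const {
    auto it = creators_.find(type);
    return it == creators_.end() ? nullptr : it->second();
  }
 private:
  std::map<std::string, Creator> creators_;
};

// Registry: shares created entities across component boundaries by id.
class EntityRegistry {
 public:
  int add(std::shared_ptr<Entity> e) { items_[nextId_] = std::move(e); return nextId_++; }
  std::shared_ptr<Entity> get(int id) const {
    auto it = items_.find(id);
    return it == items_.end() ? nullptr : it->second;
  }
 private:
  int nextId_ = 0;
  std::map<int, std::shared_ptr<Entity>> items_;
};
```

Registering a creation function per entity type is what lets new Entity specializations be added without modifying the factory itself.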

Figure 6. The IJadeD interface defines the methods exposed to the user or external application. JadeD implements this interface by making use of various data entities that can be generated using a factory pattern and managed using a common registry.

3.4.1. Graph

The Graph data structure represents a set of vertices together with a set of edges coupling those vertices. Graph structures are common to the Problem, Processor, Embedding, and Program entities. The JadeD Graph model shown in figure 7 provides an abstraction of this structure in a way that promotes customization and extensibility with respect to a given entity type.

Figure 7. The Graph class encapsulates vertices and edges, whose respective implementations use the VertexFactory and EdgeFactory factory patterns. Open arrows indicate that the source associates with the target class, while multiplicities annotating the arrowhead identify the number of instances (* denotes unlimited instances).

To support this versatility, the Graph class utilizes two factory design patterns for generating vertices and edges [56]. This ensures object polymorphism by allowing custom subclasses to inject specialized edges and vertices. For example, this mechanism allows the production of static graphs for Problem, graphs that evolve in time for Program, and graphs that alter their state according to predefined conditions or controls for Processor.

3.4.2. Problem

The Problem class is a subclass of Entity that encapsulates the input data describing a discrete optimization problem. It is created by either an Analyst or Engineer in order to define the logical problem that the system will solve.

The current implementation of JadeD permits users to construct two distinct types of Problem. The first is a weighted or pseudo-Boolean optimization problem. The user inputs an arbitrary number of Boolean clauses in terms of the literals ${{b}_{i}}$, e.g., $\left( \left( {{b}_{1}}\;\;{\rm AND}\;\;{{b}_{2}} \right)\;\;{\rm OR}\;\;{\rm NOT}\;\;{{b}_{3}} \right)$, and each clause also has an associated real-valued weight ${{w}_{i}}$. The pseudo-Boolean function is then cast into an equivalent BOP by converting each Boolean literal to a corresponding binary variable, e.g., ${{b}_{i}}\mapsto {{x}_{i}}$, True $\mapsto 1$ and False $\mapsto 0$. The Boolean clauses are then recast into equivalent binary arithmetic expressions. Denoting the ith binary arithmetic clause as ${{f}_{i}}$ and the corresponding weight as ${{w}_{i}}$, the equivalent BOP over m bits is

${{{\bf x}}^{*}}=\arg \underset{{\bf x}}{\mathop{\min }}\,\sum\limits_{i}{{{w}_{i}}{{f}_{i}}\left( {\bf x} \right)}$   (15)

where ${\bf x}\in {{\left\{ 0,1 \right\}}^{m}}$ is an m-bit vector [43]. In JADE, the BOP class stores both the original Boolean clauses and the reductions to algebraic expressions with corresponding weights.
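As a worked illustration of this conversion, using the standard identities ${\rm NOT}\;b\mapsto 1-x$, $a\;{\rm AND}\;b\mapsto ab$, and $a\;{\rm OR}\;b\mapsto a+b-ab$ (the exact rewriting rules applied by JadeD are not spelled out here), the example clause above becomes

$\left( \left( {{b}_{1}}\;{\rm AND}\;{{b}_{2}} \right)\;{\rm OR}\;{\rm NOT}\;{{b}_{3}} \right)\;\mapsto \;{{x}_{1}}{{x}_{2}}+\left( 1-{{x}_{3}} \right)-{{x}_{1}}{{x}_{2}}\left( 1-{{x}_{3}} \right)=1-{{x}_{3}}+{{x}_{1}}{{x}_{2}}{{x}_{3}},$

which evaluates to 1 exactly when the clause is satisfied and to 0 otherwise. Each such expression ${{f}_{i}}$ then enters the weighted sum in equation (15) with its weight ${{w}_{i}}$.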

The second type of Problem supported by JadeD is the QUBO problem defined in equation (7). For this type, the input corresponds to the elements of the matrix ${\bf P}$. The matrix ${\bf P}$ is then interpreted as a weighted adjacency matrix and parsed by JadeD into a Graph. Accordingly, the QUBO class is a subclass of Graph. The dependencies between the various Problem subclasses are illustrated in figure 8.

Figure 8. The dependencies of Problem on BOP and QUBO entities. Problem generates QUBO from an input BOP. Alternatively, the QUBO may be supplied directly.

As discussed in section 2, a BOP of the form in equation (15) can be reduced to a corresponding QUBO problem of the form in equation (7). The reduction, however, requires introduction of penalty terms to replace multilinear terms with quadratic or linear terms [43]. Expressing these penalties ultimately requires additional ancilla bits which enlarge the binary state space. When JadeD instantiates a BOP, the corresponding QUBO is immediately generated as part of the Problem. JadeD uses a QUBO reduction method that replaces the product of two binary variables by a new binary variable; the process repeats until a quadratic form remains [44]. The relevant BOP information is maintained as part of the Problem in order to facilitate developing the Solution entity returned to the Analyst.
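As a concrete illustration of such a penalty (the standard substitution penalty often attributed to Rosenberg; the specific form used by JadeD is not given here), replacing the product ${{x}_{1}}{{x}_{2}}$ by a new binary variable y can be enforced by adding

$M\left( {{x}_{1}}{{x}_{2}}-2{{x}_{1}}y-2{{x}_{2}}y+3y \right),$

which equals zero whenever $y={{x}_{1}}{{x}_{2}}$ and is at least M otherwise, for any penalty weight $M>0$ chosen larger than the energy scale of the objective. A cubic term such as ${{x}_{1}}{{x}_{2}}{{x}_{3}}$ then becomes the quadratic term $y{{x}_{3}}$ plus this penalty, at the cost of one ancilla bit.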

3.4.3. Processor

The Processor entity encapsulates the structure and behavior of a quantum hardware configuration. It generalizes Graph by using an adjacency matrix with unit diagonal entries to indicate vertex availability and unit off-diagonal entries for available connections between qubits. As shown in figure 9, Processor wraps a subclass of Graph referred to as Hardware and provides methods to query and manipulate its structure. The Hardware subclass can also implement the embedding of an input Problem into the hardware. This produces an Embedding entity, which subclasses Graph to express the graph ${{G}^{*}}$ that defines the embedded Hamiltonian ${{H}_{{{G}^{*}}}}$ from equation (11).

Processor also allows users to specify a functional time dependence for the bias and coupling parameters of vertices. The Control class encapsulates functions to express the Ising model parameters in terms of physical quantities that directly influence hardware behavior. For the example of a D-Wave processor, the parameters of the Ising Hamiltonians are mapped into the bias and tunneling energies of the superconducting flux qubits [8]. These physical quantities are controlled experimentally in terms of the applied current and magnetic flux, and the Control class allows the developer to express this dependency. Custom noise models for these controls can also be added to Processor through the Noise class, which can express both classical and quantum noise functions.

3.4.4. Program

The Program class is a subclass of Entity that is used to synthesize specific instances of Problem and Processor into an implementation of the AQO algorithm. A Program is the primary input to the Sapphire simulation component, and two different types can be constructed: physical or logical. The main difference between these two types of Program is the presence or absence of a high-level logical Problem definition.

Figure 9. The dependencies of the Processor class, which includes Hardware, Noise, and Control entities. The Embedding entity is instantiated after a QUBO is embedded into the Processor.

As shown in figure 10, type-switching is accomplished by composing Program with two classes: Logical Part and Physical Part. The physical part of a Program encapsulates the physical representation of the time-dependent Hamiltonian defined in equation (2). This includes a reference to a Processor and the parameters defining the final Ising Hamiltonian as well as the annealing schedule for each qubit. The logical part of a Program encapsulates a physical program as well as a reference to the specified Problem entity that is being solved. While the physical part of a Program entity is always required, the logical part is not. For Analyst use cases, the Program always has a logical part. In the absence of a logical input, the Program corresponds to an Engineer defined instance of an Ising Hamiltonian.

Figure 10. The dependencies of the Program class. The presence of a Problem distinguishes a logical program from a physical program, while both class types have an associated Processor.

The mapping of the Logical Part into the Physical Part generates an Embedding of the Problem into the Processor. As described in section 2, embedding generates a map between each logical vertex and a connected subgraph in the Processor. Within JadeD, this is accomplished using a subclass of Graph called Embedding. The Embedding class finds an embedding of the Logical Part into the provided Processor and Hardware. The current Embedding class supports the maximal minors method described by Klymko et al. Its use is limited to a ${{K}_{4,4}}$ graph, but the extensibility of Embedding means that the additional greedy methods described by Klymko et al can also be incorporated [25].

3.5. NiCE

The NiCE component is responsible for accepting user input, returning JADE output, and managing the computational workflow. It also provides a graphical front end for JADE. NiCE is an existing open-source project that was leveraged for reducing development time and ensuring extensibility. In addition to I/O management, the NiCE component orchestrates the interactions between the JadeD and Sapphire components. It enables users to create input files, launch simulations and examine program metrics.

NiCE is based on a client-server model, where the server handles primary data management and the client acts as the user front end. It is also possible for the server to manage remote workloads including, for example, simulations launched on remote hosts. We use the NiCE server as the primary means for launching and monitoring numerical simulations on both local and remote machines.

We have developed several plug-ins for NiCE that allow direct interaction with the JadeD component for the creation and revision of the Problem, Processor, and Program entities. A screenshot of one such NiCE form is provided in figure 11. NiCE is based on the Open Services Gateway initiative (OSGi) framework that, among other things, permits dynamic registration of services. We use NiCE's implementation of dynamic registration to recognize and load user-defined plug-ins into JADE. This feature permits, for example, user-defined methods for simulation that are developed independently from JADE to be added at runtime. Additional information about NiCE is available from its website [59].

Figure 11. A cropped screenshot from the NiCE client for JADE showing the synthesis of a Program from a logical Problem and a selected Processor.

3.6. Sapphire

Sapphire is the JADE component responsible for profiling Program entities. This includes carrying out numerical simulations of the quantum dynamics as well as other characterizations such as computing the time-dependent energy eigenspectra and computational error rates. While its primary use is to compute the Result of a Program, Sapphire permits a robust set of possible use cases. This is a result of our use of a plug-in architecture to support user-defined extensions to Sapphire. For example, numerical simulation techniques can be tailored to specific questions or physical assumptions. This promotes analysis at any desired fidelity and gives the user the ability to compare different simulation techniques against experimental benchmarks.

The extensibility of Sapphire is achieved through the interplay of a number of abstractions and design patterns, as shown in figure 12. Sapphire only exposes a few methods to external clients through the ISapphire interface. This decoupling between behavioral definition and actual implementation allows Sapphire to take on a number of varied forms. For example, JADE currently provides a Sapphire implementation for a multi-threaded, shared-memory architecture. We have also implemented SapphireMPI, which uses the Message Passing Interface (MPI) library to execute simulations on distributed architectures. The most significant difference between the two implementations is the MPI dependency and the need to perform unique initialization steps for SapphireMPI prior to beginning the numerical simulation.

Figure 12. The ISapphire interface is realized by both the Sapphire and SapphireMPI classes. Sapphire makes use of the factory design pattern for generating Simulations that are labeled by the type argument to execute. SapphireMPI has an identical structure.

All implementations of Sapphire must define the method execute. When execute is invoked, Sapphire utilizes the JadeD file-parsing capabilities to construct the Program object defining the parameters of the numerical simulation. Sapphire next parses the simulation options provided by the user to create a Simulation object using the SimulationFactory. The Simulation class is the basis for the extensibility of Sapphire using plug-in libraries. A plug-in is essentially a subclass of Simulation that provides a specialized numerical or algorithmic approach to simulation.

3.7. Simulation plug-ins

The Simulation class is the primary unit of functionality within Sapphire and it is used to encapsulate a specific mathematical evolution of a quantum state. The factory design pattern allows Sapphire to remain completely agnostic to simulation details. However, there is a specific sequence of execution statements that are a necessary part of Sapphire. Program execution always begins with an initialization statement followed by a loop over a time-dependent solver. Once the exit condition is met, i.e., when t = T, the computational state undergoes readout before the program issues finalization commands. All plug-ins for Sapphire must adhere to the Simulation class functionality defined below.

  • Initialize: this method is used primarily to initialize the quantum state of the simulation. Additional tasks include setting up any pre-simulation conditions or parameters.
  • Anneal: this method is called every time step by Sapphire to advance the program quantum state. Developers should implement this method to update the state vector with the mathematics inherent to a specific technique for solving the time-dependent Schrödinger equation.
  • QueryState: this method is used to query the state of the simulation, including the computational state of the simulated program. The output generated by this method is highly variable and it can include the internal representation of the quantum computational state or the complete eigenenergy spectrum written to an output file. These output files can also be used as checkpoints for restarting the simulation.
  • Measure: this method is called after anneal completes and it represents measurement of the final computational state.
  • Finalize: this method is used for any final calculations or clean up routines.

Developers of simulation plug-ins must subclass Simulation and implement the pure virtual anneal method. All other methods have default implementations that can be overridden for specialized functionality. JADE also provides a specialized HamiltonianGenerator abstraction that permits decoupling of numerical dynamics from the actual form of the Hamiltonian describing the system.
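A minimal sketch of such a plug-in is shown below. The method names follow the list in this section, but the base-class signatures, the state representation, and the Euler update used for brevity are assumptions made for illustration; they are not the Sapphire source.

```cpp
#include <complex>
#include <utility>
#include <Eigen/Dense>

// Assumed shape of the Sapphire base class; the real interface may differ.
class Simulation {
 public:
  virtual ~Simulation() = default;
  virtual void initialize() {}                  // default: no-op
  virtual void anneal(double t, double dt) = 0; // pure virtual: must be supplied
  virtual void queryState() {}                  // default: no output
  virtual void measure() {}
  virtual void finalize() {}
};

// A trivial plug-in that advances the state with a first-order Euler step.
// It exists only to show which hooks a plug-in must provide.
class EulerSimulation : public Simulation {
 public:
  explicit EulerSimulation(Eigen::MatrixXcd H0) : H_(std::move(H0)) {}
  void initialize() override {
    psi_ = Eigen::VectorXcd::Ones(H_.rows());
    psi_.normalize();
  }
  void anneal(double t, double dt) override {
    // d|psi>/dt = -i H(t) |psi> (hbar = 1); renormalize to counter Euler drift.
    psi_ -= std::complex<double>(0.0, dt) * (H_ * psi_);
    psi_.normalize();
  }
 private:
  Eigen::MatrixXcd H_;
  Eigen::VectorXcd psi_;
};
```

A real plug-in would obtain H(t) from a HamiltonianGenerator at each anneal call rather than holding a fixed matrix.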

3.7.1. Plug-in examples

The Sapphire plug-in architecture maintains extensibility to new simulation methodologies. A plug-in represents a user-created library that implements the Simulation class defined above. JADE users are therefore able to tailor quantum computing simulation techniques to specific problems or metrics of interest. We provide examples of plug-ins that implement Simulation below.

  • SimulationZero: this plug-in provides a zero-th order approximation about the state of the computational register. Specifically, this simulation calculates the time-dependent eigenspectrum and instantaneous eigenstates of the time-dependent Hamiltonian defined by a Program. SimulationZero does not provide information about the quantum dynamics but essentially diagonalizes the Hamiltonian at each time step. This analysis provides information about the time-dependent energy gap. Our implementation is modeled in figure 13 and makes use of the Eigen library, which is an open-source C++ template library for linear algebra [60].
  • RK4Simulation: this plug-in provides a fourth-order Runge–Kutta solver for the time-dependent Schrödinger equation as in equation (1). RK4Simulation uses two time steps, one for the outer anneal method, which updates the Hamiltonian, and a second for the inner evolve loop that numerically solves a finite-difference equation; a minimal sketch of one such update step appears after this list. For each evolve time step, the plug-in updates the quantum state and for each anneal it computes the instantaneous eigenspectrum. The plug-in also implements the queryState method to provide a Snapshot output that contains details about the computational state and eigenspectrum. Simulation options include the time steps, number of Snapshot files created, and number of eigenstates reported by queryState. This plug-in also makes use of the linear algebra functionality provided by the Eigen library.
  • FOPSimulation: the FOPSimulation plug-in is based on a first-order perturbative solution to the time-dependent Schrödinger equation. It evolves a pure state according to a first-order Magnus expansion for the time-dependent propagation operator. Numerically, the propagation operator is diagonalized by the anneal method and applied successively to the state during the evolve method. This method has an error of $\mathcal{O}\left( \Delta {{t}^{3}} \right)$. Similar to the other simulation methods, Eigen is used to perform the matrix exponential and matrix-vector multiplications.
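To make the finite-difference evolution concrete, the sketch below shows a single fourth-order Runge–Kutta update of the state under a Hamiltonian held fixed over the inner time step, matching the quasi-static treatment described in section 4. The function name and Eigen types are illustrative assumptions; this is not the RK4Simulation source.

```cpp
#include <complex>
#include <Eigen/Dense>

using State = Eigen::VectorXcd;
using Op = Eigen::MatrixXcd;

// One fourth-order Runge-Kutta update for d|psi>/dt = -i H |psi> (hbar = 1),
// with H held fixed over the step dt (the quasi-static approximation in which
// the Hamiltonian is refreshed only at each outer anneal step).
State rk4Step(const Op& H, const State& psi, double dt) {
  const std::complex<double> mi(0.0, -1.0);   // -i
  const std::complex<double> h(dt, 0.0);      // step size as a complex scalar
  const std::complex<double> two(2.0, 0.0);
  State k1 = mi * (H * psi);
  State k2 = mi * (H * (psi + 0.5 * h * k1));
  State k3 = mi * (H * (psi + 0.5 * h * k2));
  State k4 = mi * (H * (psi + h * k3));
  return psi + (h / 6.0) * (k1 + two * k2 + two * k3 + k4);
}
```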

Figure 13. Examples of the plug-ins that implement the Simulation class. SimulationZero and RK4Simulation both subclass Simulation and make use of IsingHamiltonianGenerator, a subclass of HamiltonianGenerator.

3.8. Testing framework

The design and implementation of JADE relies heavily on test-driven development. A formal and rigorous testing model was defined before any actual product code was developed. This has ensured that (1) the functionality of each test unit was defined prior to its implementation and (2) the implementation of each source unit was fully compliant with the predetermined functionality. We employed test-driven development by modeling and designing surrogate classes whose sole purpose was for unit testing critical behavior in actual JADE classes. An example is shown in figure 14, where we test the Simulation class using surrogates for most objects in the Sapphire component. There is a corresponding SimulationTester class. Every class in JADE has a corresponding test class in order to provide the greatest assurance that the code adheres to design requirements.

Figure 14. SimulationTester is external to Simulation but capable of accessing its methods.

4. Usage example

As an example of how JADE can be used for evaluating quantum programs, we present results based on the recent experimental benchmarks reported by Boixo et al [34]. Their work was performed on the Rainier processor from D-Wave Systems and used the eight-qubit Ising model represented in figure 15.

Figure 15. The graphical representation of the eight-qubit Ising model investigated by Boixo et al [34]. Vertices 1–4 (green) represent biases of $+1$ and vertices 5–8 represent biases of $-1$. All the edges represent $+1$ couplings between connected vertices.

Boixo et al showed both theoretically and experimentally that the eight-qubit model in figure 15 exhibits a distinctive behavior that differentiates quantum annealing dynamics from classical annealing dynamics. The Ising Hamiltonian has a 17-fold degenerate ground state, and they used multiple runs of the developed program on the Rainier processor to recover all 17 ground states from computational readout.

We have used the benchmark developed by Boixo et al to demonstrate the functionality of JADE. Specifically, we defined an eight-qubit Processor supporting the ${{K}_{4,4}}$ (bipartite) connectivity familiar from the unit cell in the Rainier processor as shown in figure 2. We used an Embedding entity based on the maximal minor method discussed by Klymko et al [25] and we matched the mapping taken by Boixo et al. We programmed linear annealing schedules, i.e., $A\left( t \right)=t/T$ and $B\left( t \right)=1-t/T$, and a final time of $T=30\tau $, where $\tau =h/{{E}_{0}}$ defines time relative to the energy scaling ${{E}_{0}}$ of the Hamiltonian H(t), i.e., $H\left( t \right)\to H\left( t \right)/{{E}_{0}}$. Fortuitously, the value of ${{E}_{0}}$ drops out of these calculations as we measure time relative to T, i.e., as $t/T$. We have neglected constraints on the controls, as the Ising parameters were very simple, and we have neglected all forms of noise in the hardware physics.

The developed Program entity was then simulated using the RKSimulation plug-in described in section 3.7.1. The simulation options for this fourth-order Runge–Kutta finite-difference solver invoked a quasi-static approximation for the Hamiltonian: we used an evolve time step of $0.0001\tau$, with the Hamiltonian updated by the anneal method every $0.05\tau$. The computational register was initialized to the exact ground state of the initial Hamiltonian in equation (5). For diagnostics, we computed the complete eigenspectrum every $3\tau$ and output both the spectrum and the complete quantum state as part of a Snapshot. The measure method returned an ordered list of the output states with their associated probabilities in the generated Result entity.
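A sketch of the underlying fourth-order Runge–Kutta step is shown below. The Hamiltonian is treated as constant over the step, consistent with the quasi-static approximation described above; the function name is hypothetical and $\hbar = 1$.

    #include <complex>
    #include <Eigen/Dense>

    using Eigen::MatrixXcd;
    using Eigen::VectorXcd;

    // One fourth-order Runge-Kutta step for d|psi>/dt = -i H |psi| (hbar = 1),
    // with H held fixed over the step; H is re-evaluated only on the coarser
    // anneal grid (0.05 tau in this example).
    VectorXcd rk4Step(const MatrixXcd& H, const VectorXcd& psi, double dt)
    {
        const std::complex<double> I(0.0, 1.0);
        const VectorXcd k1 = -I * (H * psi);
        const VectorXcd k2 = -I * (H * (psi + 0.5 * dt * k1));
        const VectorXcd k3 = -I * (H * (psi + 0.5 * dt * k2));
        const VectorXcd k4 = -I * (H * (psi + dt * k3));
        return psi + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4);
    }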

The complete time-dependent eigenspectrum computed by JADE is shown in the left panel of figure 16. It consists of $2^{8}=256$ lines representing the time-dependent energies of the 256 eigenstates of the Hamiltonian. At the final time $T$, there are 17 ground states with eigenenergy $-8$, which matches the eigenenergy and degeneracy derived by Boixo et al. The 17 time-dependent spectra that terminate in a ground state at the final time are shown in the right panel of figure 16. The kinks in the plot indicate that several states undergo avoided crossings with higher energy levels. Recall that the definition of the spectral gap $\Delta(t)$ in equation (4) did not distinguish those instantaneous excited states that terminate in the final ground-state manifold from those that remain excited at time $T$. States terminating in the ground-state manifold are not computational errors, but transitions from those states to higher-lying excited states can contribute to the observed error rate.
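The eigenspectrum diagnostic amounts to diagonalizing $H(t)$ on the snapshot grid. A minimal sketch, assuming a caller-supplied function that returns $H(t)$ as a dense Hermitian matrix, is given below; it is illustrative rather than the JADE implementation.

    #include <functional>
    #include <vector>
    #include <Eigen/Dense>

    using Eigen::MatrixXcd;
    using Eigen::VectorXd;

    // Diagnostic sketch: diagonalize H(t) at evenly spaced snapshot times and
    // collect the full (sorted) eigenspectrum at each time.
    std::vector<VectorXd> spectrumSnapshots(const std::function<MatrixXcd(double)>& ham,
                                            double T, int samples)
    {
        std::vector<VectorXd> spectra;
        for (int s = 0; s < samples; ++s) {
            const double t = T * s / (samples - 1);   // samples = 11 gives every 3 tau for T = 30 tau
            Eigen::SelfAdjointEigenSolver<MatrixXcd> solver(ham(t));
            spectra.push_back(solver.eigenvalues());  // 2^n eigenvalues, ascending
        }
        return spectra;
    }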

Figure 16. (left) The complete time-dependent eigenspectrum of the eight-qubit program. (right) The time-dependent spectrum for those states terminating in computational ground states. Spectra are computed every $3\tau$ for a total of 11 points for each of the 256 spectral lines.

The computed populations for the 17 ground states at time $T$ are presented in table 1 alongside the corresponding computational basis states. The first 16 states, i.e., the manifold of states with qubits 1–4 in the 0 (spin-down) state, have approximately equal probabilities, while the probability of the 17th state is roughly two orders of magnitude smaller. Nevertheless, all 17 ground states are significantly more likely than the 18th most probable state, whose probability is much less than $10^{-6}$.

The time dependence of the instantaneous population in the computational basis is shown in figure 17. Recall that the system is initialized in the singular computational ground state, as indicated by the maximum probability at time $t = 0$. As time progresses, the population remains in the instantaneous ground state until $t\approx 0.9\,T$. At this point in the program schedule, the energy gap between the ground state and the lowest-lying excited states has narrowed sufficiently to permit population transfer, thereby violating the adiabatic condition. At this point in the dynamics, however, the lowest-lying excited states are instantaneous states that terminate in the ground state at time $t = T$. There are 16 such states participating in the apparent convergence to approximately 15/16 of the total probability, as shown in table 1. The 17th ground state is not visible in this plot because of the scale of its contribution; it undergoes similar behavior and contains approximately $1/16^{2}$ of the population. Approximately $15/256$ of the probability is distributed over the remaining 239 excited states.

Figure 17. Time dependence of the population in the computational basis. The resolution is 11 points over the range $[0,T]$.

Table 1.  Degenerate ground states of the 8-qubit model and their computed probabilities.

Decimal   Binary      Probability
0         0000 0000   0.0582245
1         0000 0001   0.0598409
2         0000 0010   0.0598409
3         0000 0011   0.0620211
4         0000 0100   0.0598409
5         0000 0101   0.0627384
6         0000 0110   0.0620211
7         0000 0111   0.0651488
8         0000 1000   0.0598409
9         0000 1001   0.0620211
10        0000 1010   0.0627384
11        0000 1011   0.0651488
12        0000 1100   0.0620211
13        0000 1101   0.0651488
14        0000 1110   0.0651488
15        0000 1111   0.0677486
255       1111 1111   $4.79745\times 10^{-4}$

Our simulation of the eight-qubit program appears to be in qualitative agreement with the experimental and theoretical results of Boixo et al [34]. However, there are several key differences between their program and ours. First, the annealing schedules used by Boixo et al are not linear, and we expect this to affect the comparison of observed and computed probabilities. Second, we have not incorporated any sources of noise into our simulations, whereas previous experiments on the D-Wave processors suggest the influence of thermal noise may be significant. Nevertheless, our intention in this demonstration has been to provide a verifiable example that JADE is useful for developing quantum programs and supporting benchmark analysis.

5. Discussion

The current availability and continuing development of AQC hardware open up new avenues of research for defining methods of quantum programming and computational benchmarking. Experimental studies are necessary for measuring the actual computational power of processors and for improving programming practices. Test vectors appropriate for benchmark studies must be well defined, and the associated problem difficulty well understood, in order to reliably measure the influence of programming and processor methodologies.

Our contribution has been to develop a software environment that offers an interactive approach to programming the AQO algorithm. JADE parametrizes the process of programming the AQO algorithm and offers opportunities for tuning each step. JADE, or software like it, is needed both for standardizing program studies and for optimizing program performance. In particular, we have shown that many tunable parameters contribute to the implementation of the AQO algorithm on a processor modeled by a spin-glass system, and JADE exposes these interfaces to the user so that performance can be optimized across program parameters. Similarly, the Program entity introduced here is one example of a data structure that captures a program instance and, consequently, standardizes program specification. We have used Program to initialize numerical simulations, but it would also be possible to submit these programs directly to the D-Wave Systems processor; the direct interaction between JADE and the underlying hardware is currently under investigation.

JADE also provides a plug-in architecture to enable extensions to functionality through user-defined programming, simulation, and diagnostic methodologies. We have discussed our implementation of two high-level logical input methods (BOP and QUBO problems), reconfigurable processor definitions in terms of hardware size and connectivity, and multiple numerical simulation methods for computing the time-dependent eigenspectra and eigenstates. The extensible design of JADE permits each of these features to be easily replaced by newer and potentially more versatile methods without revision to the existing code base.

Finally, the programming sequence for the AQO algorithm summarized in figure 1 is sufficient for the currently available hardware models. However, we anticipate that future hardware and programming models will modify the steps taken in compiling an adiabatic quantum algorithm down to a (future) quantum processor. In particular, our approach does not account for fault-tolerance, quantum error correction, or quantum control techniques, which are expected to be useful for the broader AQC paradigm [5]. Nevertheless, we believe JADE exemplifies the type of programming environment currently needed by the quantum computer science community for evaluating the performance of current and future quantum processors.

Acknowledgments

This work was supported by the Lockheed Martin Open Innovation Program at Oak Ridge National Laboratory. The authors thank Greg Tallant and Peter Stanfill of the Lockheed Martin Aeronautics Division for technical assistance. HS thanks the Department of Energy Science Undergraduate Laboratory Internship (SULI) program. TSH thanks Owen S Humble for technical discussions. This manuscript has been authored by a contractor of the US Government under Contract No. DE-AC05-00OR22725. Accordingly, the US Government retains a non-exclusive, royalty-free license to publish or reproduce the published form of this contribution, or allow others to do so, for US Government purposes. Developers interested in licensing JADE should contact the authors.
