# Declarative Quantum Computing

**Summary**: All businesses regularly confront difficult decisions. Even when these decisions are constrained (perhaps by budgets) and their cost implications are complex, optimization algorithms can nevertheless find decisions that minimize cost and maximize benefit. Unfortunately, optimization becomes very difficult as the number of decisions grows, and this makes it an excellent use case for quantum computing. Declarative languages like RelationalAI's Rel make integrating quantum optimizers simple.

Quantum computing may prove to be the most important computing technology developed this century. National governments, venture capitalists, and companies are collectively investing billions in this nascent but rapidly maturing technology.

Government task forces are already preparing for a post-quantum world where security is maintained in the face of the power of quantum computing to break modern encryption. New applications are developing at an accelerating pace in anticipation of harnessing quantum computation for heretofore intractable problems.

Quantum computers offer this enormous computational power because they rely on physics that is not directly leveraged in modern computer hardware. By directly exploiting the superposition and entanglement aspects of quantum physics, clever algorithms and special purpose quantum hardware may provide computational speedups that will surpass even decades of continued Moore’s law growth.

*Quantum computers rely in part on superposition, an intrinsic black-box randomness that processes many options in parallel. (Image ref)*

Unfortunately, quantum computing is foreign to most software professionals, which makes this potentially transformative technology accessible only to specially trained practitioners.

While vendors have made many quantum computing interfaces (for example, Pennylane, Q#, Qiskit) available through Python APIs, the algorithms that underlie quantum computation differ dramatically from classical (non-quantum) algorithms, and the expertise required to develop new quantum algorithms is in scarce supply.

*What then, is a company that wants to explore quantum computing to do?*

Our focus here is not on explaining quantum computation*, but rather on sharing our vision of how quantum computing can be made accessible to all enterprises. This vision retraces a path pioneered in the database community.

Relational database query languages like SQL and Rel are declarative; they enable a user to describe what is desired by relating the desired result to known data. Purposefully left out of this description is the method by which the result is materialized.

Instead, query planning engines synthesize efficient algorithms by composing atomic select, project, and join operations. In this way, algorithmic expertise is captured in the planning engine and users of all skill levels can rely on efficient queries.

The adoption of a similar approach to quantum programming seems natural. Users without knowledge of quantum mechanics describe their problem in a declarative language and rely on a compiler to synthesize a quantum program.

Just like query plans, these quantum programs are assembled as compositions of atomic operations (in this case, quantum gates that are reversible analogues of the primitive logical operations `and`, `or`, `not`, etc.).

We validate this approach with a proof-of-concept implementation of quantum computing in Rel, RelationalAI’s declarative modeling language. In this way, we leverage the benefits of RAI’s declarative data-centric knowledge graph to couple quantum computing with existing data sources, and to prepare for quantum-powered analytics as quantum hardware matures.

## An Optimization Problem and Its Declarative Relational Quantum Solution

Many simple-to-state problems on graphs generate difficult optimization tasks. Consider a graph whose nodes represent workers and whose weighted edges measure the efficiency of communication between pairs of coworkers. We seek to divide the workers into two groups (departments) while maintaining the richest possible communication between the groups to avoid siloing.

This is a max-cut problem: *given an edge-weighted graph, assign every node to a group (i.e. cut the set of nodes into two groups) to maximize the total weighted connectivity between groups*.

Wikipedia provides an illustration for a graph of five nodes. Here, the groups are identified as either black or white and all edge weights are 1 (absent edges have zero weight). The red edges indicate intergroup edges and have a total weight of 5. You can convince yourself that this is the largest possible cut. A graph of `V` nodes requires searching over `2^V` possible cuts for the maximal weight, making exhaustive enumeration exponentially expensive.
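To make the exponential search concrete, here is a minimal brute-force sketch in Python. The edge weights are the ones used in the Rel example later in this post; the function and variable names are our own:

```python
from itertools import product

# Edge weights from the five-node example solved later in this post.
weighted_edges = {(1, 2): -0.5, (1, 4): 1.0, (1, 5): 2.0,
                  (2, 3): -1.0, (2, 4): -3.0, (2, 5): 2.1,
                  (3, 5): -1.0, (4, 5): 1.6}
nodes = [1, 2, 3, 4, 5]

def cut_weight(cut, edges):
    # An edge contributes its weight only when its endpoints are in different groups.
    return sum(w for (u, v), w in edges.items() if cut[u] != cut[v])

def brute_force_max_cut(nodes, edges):
    # Enumerate all 2^V assignments of nodes to groups 0/1 -- exponential in V.
    best = max(product((0, 1), repeat=len(nodes)),
               key=lambda bits: cut_weight(dict(zip(nodes, bits)), edges))
    return best, cut_weight(dict(zip(nodes, best)), edges)

assignment, weight = brute_force_max_cut(nodes, weighted_edges)
```

Even at five nodes this touches 32 assignments; at 50 nodes it would touch roughly 10^15, which is why heuristic and quantum approaches become attractive.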

Several quantum algorithms have been designed for optimization problems like max-cut. One, the Quantum Approximate Optimization Algorithm (QAOA), operates in a layered fashion much like deep neural networks**.

Starting from an initial guess (a uniform superposition over all possible cuts), each layer filters for good (i.e. high weight) cuts (red boxes) and then mixes these (blue boxes) to generate new, potentially improved, candidates.

Tunable parameters control the amount of filtering/mixing at each layer***. After `n` filter-and-improve layers, the qubits are measured to provide a good guess for a maximal cut.

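The filter-and-mix loop above can be sketched as a toy statevector simulation in plain numpy. This illustrates only the structure of QAOA, not the Q# program Rel generates; the graph (reindexed from 0) and the ten-layer depth match this post's example, while the fixed `(gamma, beta)` setting is an arbitrary placeholder, not a tuned value:

```python
import numpy as np

# The post's five-node graph, with nodes reindexed 0..4 for bit positions.
edges = {(0, 1): -0.5, (0, 3): 1.0, (0, 4): 2.0,
         (1, 2): -1.0, (1, 3): -3.0, (1, 4): 2.1,
         (2, 4): -1.0, (3, 4): 1.6}
n = 5

def bit(x, q):
    return (x >> q) & 1

# Diagonal cost vector: the cut weight of every one of the 2^n bitstrings.
costs = np.array([sum(w for (u, v), w in edges.items() if bit(x, u) != bit(x, v))
                  for x in range(2 ** n)])

def qaoa_state(params):
    """One filter (phase) and mix (Rx) pass per (gamma, beta) pair."""
    state = np.full(2 ** n, 1 / np.sqrt(2 ** n), dtype=complex)  # uniform superposition
    for gamma, beta in params:
        state = np.exp(1j * gamma * costs) * state  # phase-separate on cost ("filter")
        rx = np.array([[np.cos(beta), -1j * np.sin(beta)],
                       [-1j * np.sin(beta), np.cos(beta)]])
        for q in range(n):  # transverse-field mixer: Rx on every qubit ("mix")
            s = state.reshape(2 ** (n - q - 1), 2, 2 ** q)
            state = np.einsum('ab,ibj->iaj', rx, s).reshape(-1)
    return state

# Ten layers with one shared, untuned parameter setting.
probs = np.abs(qaoa_state([(0.4, 0.3)] * 10)) ** 2
best = int(np.argmax(probs))  # the cut the simulated measurement favors
```

Tuning the `(gamma, beta)` parameters, layer by layer, is what a classical outer loop does in practice***.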

Next, we show how to apply QAOA using Rel.

## Workflow: QAOA in Rel

We solve a max-cut problem using Microsoft’s Azure quantum platform. The steps are as follows:

- **The graph is loaded from a relation encoding the weighted edges.**
- **The max-cut cost function is formulated and the Rel optimizer is invoked.**
- The Rel cost function is compiled to a Q# program (Microsoft’s quantum programming language) representing the QAOA procedure.
- The Q# program is sent to Azure, which in turn forwards it to cloud-accessed quantum hardware solvers.
- Azure collects the results of hardware runs and returns them to Rel.
- Rel returns the Azure results as a solution relation to the user.

Only the first two steps (in bold) need to be specified by the user; the remaining steps are performed automatically by Rel.

The max-cut decision variables indicate, for each node of the graph, which group the node belongs to. They can be represented in a relation `cut(v,s)`, where `v` labels the nodes of the graph and where `s=0` or `s=1` indicates whether node `v` is assigned to the group labeled `0` or `1`.

The total weight of a cut is then `sum[v1,v2,w where weighted_edges(v1,v2,w): w*(cut[v1]+cut[v2]-2*cut[v1]*cut[v2])]`.

The term `cut[v1]+cut[v2]-2*cut[v1]*cut[v2]` is `1` when nodes `v1` and `v2` are in different groups and `0` when they are in the same group, so the sum accumulates the intergroup weights.

The complete Rel code for solving max-cut on a quantum computer is shown below:

```
module Data // graph problem data
    def nodes = range[1, 5, 1] // node indices
    def weighted_edges = { 1,2,-0.5 ; 1,4,1.0 ; 1,5,2.0 ; // edge weights
                           2,3,-1.0 ; 2,4,-3.0 ; 2,5,2.1 ;
                           3,5,-1.0 ;
                           4,5,1.6 }
end

module Variables[Data]
    with Data use nodes
    def cut:type = "binary" // decision values are binary 0/1 values
    def cut:keys = nodes // 1 decision variable per node
end

module Model[Data, Vars, Solver]
    with Data use weighted_edges
    with Vars use cut
    with Solver use sum, +, -, *
    def cut_weight = sum[v1, v2, w where weighted_edges(v1, v2, w):
                         w * (cut[v1] + cut[v2] - 2 * cut[v1] * cut[v2])]
end

module SolverConfig[Model]
    def sense = :maximize
    def objective = :cut_weight // maximize the cut weight objective
    def solver = :AzureQuantum // use Azure Quantum to solve the problem
end

def output = rel:mathopt:solve[SolverConfig, Model, Data, Variables] // quantum solve
```

The output solution is a standard relation which can be used for further processing. For example, since the quantum solver is not deterministic and does not guarantee optimality, we might wish to run multiple optimizations, collect the returned solutions, and compare results across runs.
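One simple way to organize such repeated runs is a frequency tally. In the sketch below, `solve_once` is a hypothetical stand-in for the actual Rel/Azure invocation, returning cuts as bitstrings with made-up frequencies:

```python
import random
from collections import Counter

def solve_once():
    # Hypothetical stand-in for one stochastic quantum solve; the bitstrings
    # and weights here are illustrative, not measured hardware frequencies.
    return random.choices(["00001", "00101", "00011"],
                          weights=[0.3, 0.2, 0.1])[0]

random.seed(0)
tally = Counter(solve_once() for _ in range(1000))
most_common_cut, count = tally.most_common(1)[0]
```

Comparing tallies like this across parameter settings is one way to judge whether the solver is concentrating on maximal cuts.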

The following graphs (prepared with Rel’s visualization tools) show the frequency of returned solutions by both Azure’s quantum simulator and IonQ hardware using ten filter-and-improve layers at the same parameter settings.

The simulation results, made with Vega-Lite, predict that the most likely cut is 00001 (nodes one to four are in group 0 and node five is in group 1), which occurs about 30% of the time. 00101 occurs almost 20% of the time. Both these cuts are maximal. The IonQ hardware results are less definitive, but nevertheless, the optimal solution is one of two configurations that are most likely to be returned.

Lastly, the following figure uses Graphviz to visualize both the graph we’re optimizing and the most likely cut (00001) found by both quantum solvers. The graph nodes are indicated by numbers, and weights annotate the edges. The 00001 cut indicates that nodes one through four are in the 0 group and node five is in the 1 group. The sole group 1 node is shown in blue. The weight of this cut is 4.7.

Interestingly, the second solution found by the simulation is 00101, which also has a weight of 4.7. However, the second most likely cut returned by IonQ hardware is 00011, which has a weight of 1.1 — significantly less than the maximal weight. It is likely that tuning the QAOA parameters can improve the IonQ performance.

## The Future

This proof-of-concept demonstration shows how straightforward it is to use quantum computers through Rel’s declarative programming model. As quantum computers mature and new applications emerge, we envision efficient and conceptually straightforward access to the latest hardware through Rel.

*The interested reader is referred to Quantum Computers Animated as a starting point, with deeper explanations available at quantumcomputing.com.

**QAOA is well-suited to problems on graphs that have sparse local interactions.

***The parameters can be tuned by a non-quantum (classical) computer.
