Architectures & Algorithms

We are interested in understanding, deploying and extending algorithms and architectures that best match the underlying features of emerging hardware to meet the ever-growing demands of computation. In particular, we are exploring alternative computing approaches, including probabilistic and neuromorphic computing, that can be used to design hardware accelerators for domain-specific applications in optimization and machine learning. Additionally, we explore the theoretical limits of deterministic, probabilistic and quantum computing through the lens of classical and quantum Monte Carlo techniques and their hardware implementations.

Traveling Salesman Problem

Computationally Hard Problems

A great mystery in computer science (CS) is why solving some problems seems to be much harder than others in a hardware-independent, algorithmic sense, even though recognizing whether a given answer solves these problems is easy. A celebrated example is the Traveling Salesman Problem (TSP), where a salesman needs to visit N cities exactly once and return to the original city while minimizing the total distance travelled. Every known algorithmic attempt to solve this problem exactly yields runtimes that scale exponentially with the number of cities N, yet checking whether a candidate tour beats a target distance amounts to adding N numbers, always an easy task. Indeed, such problems have their own classification in CS (the NP-hard problems), and even fault-tolerant, scaled quantum computers are expected to provide only a modest speedup on them.
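To make this asymmetry concrete, here is a minimal Python sketch (with made-up city coordinates, purely for illustration): scoring a candidate tour costs only N distance additions, while the brute-force search below must enumerate all N! orderings, which blows up rapidly with N.

```python
import itertools
import math

# Hypothetical city coordinates, chosen only for illustration.
cities = [(0, 0), (1, 5), (4, 2), (6, 6), (3, 7)]

def tour_length(order):
    """Checking a candidate tour is easy: just sum N distances."""
    return sum(
        math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
        for i in range(len(order))
    )

# Finding the best tour by brute force enumerates all N! orderings,
# a cost that grows exponentially with the number of cities N.
best = min(itertools.permutations(range(len(cities))), key=tour_length)
print(best, tour_length(best))
```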

To make matters worse, it does not take much for a given problem to be classified as hard, and many problems of great practical interest have been proved hard in exactly this way. Even though we do not know whether we have all simply missed an efficient algorithm for these problems, we can always attempt to solve them by building hardware that takes each consecutive step faster than any other computing machine. Of course, when facing exponentials this approach cannot take us far, especially if N gets very large, but doing the same thing progressively faster is the spirit behind Moore's Law, which has revolutionized the world after all. Unless a strongly held and well-supported conjecture in modern CS theory is wrong (that P ≠ NP), there does not seem to be any other way to make progress anyway. Although we must be careful about sweeping generalizations here: as Don Knuth says, one good algorithmic idea can make a million-fold improvement in runtime!

In this hardware acceleration direction, the last few years have seen an explosion of domain-specific architectures and devices that target such computationally hard problems in dedicated hardware, originating from a community of physicists, device engineers and computer architects. These devices have come to be known as Ising Machines, and in fact the 8-bit natural annealer discussed here is an example of such an Ising Machine with an asynchronous architecture that can perform classical annealing faster and more efficiently, at least if it can be scaled up. As an intermediate step towards such scaling, we are actively exploring FPGA emulations of probabilistic Ising Machines containing 10,000 spins and more.
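As a flavor of how such machines operate, below is a minimal software sketch of the asynchronous probabilistic-bit (p-bit) update rule at the heart of a probabilistic Ising Machine. The 4-spin couplings are toy values chosen only for illustration; a real machine maps a problem such as TSP onto the couplings and implements this sigmoidal update in fast, asynchronous hardware.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 4-spin Ising instance: J is a symmetric coupling matrix, h a bias
# vector (illustrative values only; a real problem is encoded here).
J = np.array([[ 0,  1, -1,  0],
              [ 1,  0,  1, -1],
              [-1,  1,  0,  1],
              [ 0, -1,  1,  0]], dtype=float)
h = np.zeros(4)
m = rng.choice([-1.0, 1.0], size=4)   # bipolar spin states
beta = 1.0                            # inverse temperature

# Asynchronous p-bit updates: each spin flips stochastically with a
# sigmoidal probability set by its local field (Gibbs sampling of the
# Boltzmann distribution of the Ising energy).
for sweep in range(1000):
    for i in rng.permutation(4):
        local_field = h[i] + J[i] @ m
        p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * local_field))
        m[i] = 1.0 if rng.random() < p_up else -1.0

print("final spin configuration:", m)
```

Sweeping beta from low to high values turns this sampler into a classical annealer that settles into low-energy (near-optimal) configurations.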

A class of quantum models can be efficiently mapped to probabilistic models.

Probabilistic Computing vs Quantum Computing

So what about inherently quantum problems that quantum computers are expected to tackle efficiently, for example solving quantum many-body problems? Could Ising Machines help in solving these? The answer depends on the quantum problem. A well-known correspondence, discovered by condensed matter physicists in the 1970s, shows that a subclass of quantum problems has efficient classical mappings that such probabilistic Ising Machines can sample. Even though there are theoretical arguments suggesting that, even with an efficient mapping, classical approaches may not be as powerful as their quantum counterparts, so far the Quantum Monte Carlo (QMC) algorithms that have been benchmarked against quantum annealers seem to possess similar computational powers. This opens up a new avenue for Ising Machines that can complement or surpass quantum annealers in solving optimization problems, since accelerating Ising Machines through clever architectures and better devices is a far easier task than optimizing hardware that operates at cryogenic temperatures.
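The textbook version of this correspondence is the Suzuki-Trotter mapping, sketched below: the partition function of a transverse-field Ising model maps onto that of a classical Ising model with one extra (imaginary-time) dimension, whose Boltzmann weights are strictly positive and can therefore be sampled by a probabilistic Ising Machine.

```latex
% Suzuki-Trotter mapping of the transverse-field Ising model
% H = -\sum_{ij} J_{ij}\,\sigma_i^z \sigma_j^z - \Gamma \sum_i \sigma_i^x
% to a classical Ising model with an extra (imaginary-time) dimension:
\begin{align}
  Z &= \operatorname{Tr}\, e^{-\beta H}
     \approx \sum_{\{s_{i,k}\}} \exp\!\Bigl[
        \sum_{k=1}^{M}\Bigl(
          \frac{\beta}{M}\sum_{ij} J_{ij}\, s_{i,k}\, s_{j,k}
        + J_\perp \sum_i s_{i,k}\, s_{i,k+1}\Bigr)\Bigr], \\
  J_\perp &= \frac{1}{2}\,\ln \coth\!\Bigl(\frac{\beta \Gamma}{M}\Bigr),
\end{align}
% where s_{i,k} = \pm 1 are classical spins, k indexes M Trotter replicas
% (with s_{i,M+1} = s_{i,1}), and the mapping becomes exact as M \to \infty.
```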

Therefore, one of the areas we are pursuing is the acceleration of QMC algorithms to solve quantum problems (by efficient FPGA architectures or ASIC implementations), at least when the classical representation is known to be efficient. Next, we discuss when such a representation faces severe limitations as we try to generically represent quantum interference with probabilities.

All quantum circuits can be mapped to Ising Models, but the sign problem poses challenges.

Probabilistic vs Quantum Computers

Interestingly, even the most generic quantum circuit in the gate-based quantum computing (GQC) model can be given an Ising Model representation. This observation alone allows quantum circuits up to a certain depth and width to be modeled by dedicated Ising Machines, potentially pushing the boundaries of classical simulation of quantum circuits.
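One way to see where such a representation comes from is the standard Feynman path-sum form of a circuit amplitude, sketched below: inserting complete bases between gates turns the amplitude into a partition-function-like sum over paths carrying complex weights, with the basis states at each layer playing the role of spin variables.

```latex
% Feynman path-sum form of a depth-L circuit amplitude: inserting
% complete bases between gates turns the amplitude into a sum over
% "paths" x = x_0 \to x_1 \to \cdots \to x_L = y of complex weights,
\begin{equation}
  \langle y \,|\, U_L \cdots U_2 U_1 \,|\, x \rangle
  = \sum_{x_1, \ldots, x_{L-1}}
    \prod_{k=1}^{L} \langle x_k \,|\, U_k \,|\, x_{k-1} \rangle ,
  \qquad x_0 = x, \; x_L = y,
\end{equation}
% which has the structure of a (complex-weighted) Ising-like partition
% function over the layer-by-layer spin variables x_k.
```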

However, a severe barrier, known as the sign problem, arises in the most general cases: when GQC circuits are mapped to Ising Machines, weights that were previously strictly positive probabilities acquire complex amplitudes and phases! In the most interesting cases, for example in Shor's Algorithm or Grover's Search, the complex numbers associated with different paths interfere and cancel each other, such that the paths leading to correct outputs remain intact (or stationary) while the incorrect outputs fade away. These cancellations do not occur naturally in an Ising Machine, and painstakingly reproducing them requires exponentially many more samples than an ideal quantum computer would need. In specific cases there may be hope of finding different basis sets that reduce the severity of the sign problem, but finding an efficient solution to this problem in general is itself believed to be computationally hard.
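The standard reweighting argument, sketched below, quantifies this cost: sampling with the absolute values of the weights forces every estimator to be divided by the average sign, which typically decays exponentially with system size and inverse temperature, so the number of samples required grows exponentially.

```latex
% Reweighting a signed distribution p(x) = sgn(x)\,|p(x)| by |p(x)|:
\begin{equation}
  \langle A \rangle
  = \frac{\sum_x A(x)\, \operatorname{sgn}(x)\, |p(x)|}
         {\sum_x \operatorname{sgn}(x)\, |p(x)|}
  = \frac{\langle A \cdot \operatorname{sgn} \rangle_{|p|}}
         {\langle \operatorname{sgn} \rangle_{|p|}},
  \qquad
  \langle \operatorname{sgn} \rangle_{|p|}
  = \frac{Z}{Z_{|p|}} \sim e^{-\beta N \Delta f}.
\end{equation}
% The statistical error of this ratio scales as e^{+\beta N \Delta f}/\sqrt{N_s},
% so the number of samples N_s needed grows exponentially with system size N.
```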

Understanding and quantifying the differences in computational power between probabilistic and quantum computers is an area that could push the boundaries of both fields. A precise understanding might allow us to find better algorithms for today's Noisy Intermediate-Scale Quantum (NISQ) computers, and could lead to more efficient probabilistic algorithms that can be tackled by dedicated Ising Machines.

Relevant Publications