Let’s say you’re interested in buying a car, so you visit a new car dealership. Imagine your disappointment if you learned that the only information the salesperson can give you is the number of seats in each vehicle.

How crazy would that be? Of course, to make sure a car fits your requirements, you need a lot more information.

What color is the car? Does it have the right accessories? What kind of gas mileage does it get? Ultimately, you would also like to know how it performs under different conditions, like driving in town and on the highway. Only when armed with that information could you make an informed decision about buying a car or compare different vehicles.

Until now, that’s pretty much how we have evaluated quantum computers. The focus has mainly been on the number of qubits in a quantum computer while ignoring many other important factors affecting its computational ability.

Before Google announced it had achieved quantum supremacy, the media speculated on how many qubits would be needed to outperform a classical computer. Setting aside the controversy over the technical details of Google's accomplishment, the company finally reached the benchmark. Of course, most articles focused on the fact that Google used a 54-qubit quantum processor.

**What’s next after quantum supremacy?**

Beyond quantum supremacy, the next major benchmark, called quantum advantage, sits on the distant horizon. Quantum advantage will exist when programmable NISQ gate-based (or circuit-based) quantum computers reach a degree of technical maturity that lets them solve many, though not necessarily all, significant real-world problems that classical computers cannot solve at all, or can solve only in exponential time.

World-changing applications will be possible once we reach quantum advantage. The major applications will likely include optimization, chemistry, machine learning, and organic simulations.

**From Quantum Supremacy to Quantum Advantage**

Although the number of qubits is essential, so is the number of operations that can be completed before the qubits lose their quantum states. Qubits decohere either because of noise or because of their inherent properties. For those reasons, building quantum computers capable of solving deeper, more complex problems is not simply a matter of adding qubits.
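The arithmetic behind this is unforgiving. As a back-of-the-envelope sketch (the 1% per-gate error rate here is purely hypothetical), the probability that a circuit finishes without a single gate error decays exponentially with the number of operations, which is why depth, not just qubit count, limits what a machine can compute:

```python
# Rough illustration: with a per-gate error rate p, the chance a circuit
# runs with zero gate errors is (1 - p) ** number_of_operations.
p = 0.01  # hypothetical 1% gate error rate
for depth in (10, 100, 1000):
    print(depth, round((1 - p) ** depth, 3))
# 10 -> 0.904, 100 -> 0.366, 1000 -> ~0.0
```

Adding qubits without lowering error rates or extending coherence only widens circuits that still cannot run deep enough to matter.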

Before we can move from machines with a hundred qubits to one with thousands, and eventually to one with millions, many significant technical issues remain to be solved. Moreover, none of these problems have overnight solutions. It will likely take another five or ten years of incremental research, experimentation, and steady technical improvement to find answers.

Configuration changes – subtle and significant – can dramatically affect the performance of a quantum computer. Determining the optimum quantum computer configuration is much like solving a Rubik’s Cube. In the process of trying to align all the blue tiles on one surface, the previously solved red surface becomes disrupted.

**The power of Quantum Volume**

It would be helpful if quantum researchers had a tool that allowed them to systematically measure and understand how incremental technology, configuration, and design changes affect a quantum computer’s overall power and performance. Corporate users also need a way to compare the relative power of one quantum computer to another.

IBM foresaw the need for such a metric in 2017 when its researchers developed a full-system performance measurement called Quantum Volume.

Quantum Volume’s numerical value indicates the relative complexity of the problems a quantum computer can solve. The number of qubits and the number of operations that can be performed are called the width and depth of a quantum circuit, respectively. The deeper the circuit, the more complex an algorithm the computer can run. Circuit depth is influenced by such things as the number of qubits, how the qubits are interconnected, gate and measurement errors, device crosstalk, circuit compiler efficiency, and more.

That’s where Quantum Volume comes in. It analyzes the collective performance and efficiency of these factors and then produces a single, easy-to-understand Quantum Volume number. The larger the number, the more powerful the quantum computer.
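To make the number concrete: in IBM's formulation, Quantum Volume is 2^n, where n is the width (and equal depth) of the largest "square" randomized model circuit the machine can run successfully, meaning with a heavy-output probability above two-thirds. A minimal sketch:

```python
# Sketch of IBM's definition: QV = 2**n, where n is the width (= depth)
# of the largest square model circuit the machine passes.
def quantum_volume(n: int) -> int:
    return 2 ** n

assert quantum_volume(2) == 4    # a machine passing 2-qubit, 2-layer circuits
assert quantum_volume(4) == 16
# Doubling Quantum Volume means mastering circuits that are one qubit
# wider AND one layer deeper at the same time:
assert quantum_volume(5) == 2 * quantum_volume(4)
```

This is why the metric rewards balanced improvement: more qubits alone, or lower errors alone, do not raise n unless both width and depth improve together.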

**Demonstrated incremental improvements**

Quantum Volume can be used to compare the power of one quantum computer to another. It can also play a significant role in the ongoing research and development needed to create the bigger and better quantum computers required for quantum advantage.

IBM tested Quantum Volume on three of its quantum computers: the five-qubit Tenerife, released through the IBM Q Experience quantum cloud service in 2017; the 20-qubit Tokyo, released in 2018; and the 20-qubit IBM Q System One, released in 2019.

The Volume Growth Chart shows how Quantum Volume doubled each successive year.

**Quantum Volume as a research tool**

Jay Gambetta, IBM Vice President, Quantum Computing, posted graphs online showing progressive improvement in CNOT error rates as a result of various changes. In the post, he said: “With four revisions of our 20-qubit quantum system (chip name “Penguin”) plotted together, you can really see the progress in stability, scalability, and reduction of errors over the last two years.”

When asked for more detail on how Quantum Volume assisted in these improvements, the IBM research team said: “These plots are a great example of why qubit counts alone do not tell the whole story. Over several revisions of the architecture, the CNOT error rates improved significantly – which is shown in the plots; and that has a significant impact on improving quantum volume from revision to revision. There are many other performance metrics and architectural choices that impact the quantum volume of a system as well – for instance how the qubits are laid out and connected to one another. Maximizing quantum volume is about making trade-offs between these different parameters – and identifying new ways to push those boundaries. This is why the quantum volume metric is so important; otherwise it would be quite possible to drive a couple of parameters to report on publicly yet to the detriment of overall quantum volume as it can be measured in a single system.”

**Optimizing with Quantum Volume**

For the foreseeable future, quantum computers will use noisy qubits with relatively high error rates and short coherence times. We are still in the experimental stages of error correction, and functional error correction will likely require a thousand or more physical qubits for every error-corrected computational qubit. Decoherence of quantum states is a significant obstacle we will have to overcome to build scalable and reliable quantum computers.

Simple logic and the Volume Growth Chart tell us that to reach quantum advantage by 2025, we need quantum computers with much higher Quantum Volumes, perhaps with a numerical value of 1,000 or more.
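That projection is simple compounding. Assuming, purely for illustration, a Quantum Volume of 16 in 2019 and the annual doubling shown in the Volume Growth Chart:

```python
# If Quantum Volume doubles every year from a hypothetical value of 16
# in 2019, when does it first exceed 1,000?
qv, year = 16, 2019
while qv < 1000:
    qv *= 2
    year += 1
print(year, qv)  # 2025 1024
```

Six consecutive doublings land at 1,024 in 2025, which is where the "1,000 or more by 2025" figure comes from; of course, the projection holds only if the doubling trend does.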

Along these lines, in response to my earlier question, the IBM research team emphasized again that maximizing quantum volume is about making trade-offs between these different parameters and identifying new ways to push those boundaries; optimizing one or two parameters for public reporting can come at the expense of a system’s overall quantum volume.

**Quantum Volume and the quantum computing ecosystem**

Most of today’s benchmarking is done at the component level, for individual qubits and quantum logic gates. Other benchmarking methods have also been investigated, as described in this excerpt from IBM’s paper on Quantum Volume:

“Methods such as randomized benchmarking, state and process tomography, and gate set tomography are valued for measuring the performance of operations on a few qubits, yet they fail to account for errors arising from interactions with spectator qubits. Given a system such as this, whose individual gate operations have been independently calibrated and verified, how do we measure the degree to which the system performs as a general-purpose quantum computer? We address this question by introducing a single-number metric, the quantum volume, together with a concrete protocol for measuring it on near-term systems. Similar to how LINPACK and improved benchmarks are used for comparing diverse classical computers, this metric is not tailored to any particular system, requiring only the ability to implement a universal set of quantum gates.”
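The measurement protocol the excerpt refers to can be sketched classically. The simplification below (an assumption of mine, not IBM's exact construction) replaces the square model circuit of random two-qubit SU(4) gates with a single Haar-random unitary, but it illustrates the heavy-output test at the protocol's core: an ideal, noiseless device lands in the "heavy" half of the output distribution roughly 85% of the time, comfortably above the two-thirds pass threshold, while a fully depolarized device would score only 50%.

```python
import numpy as np

def random_unitary(dim: int, rng) -> np.ndarray:
    """Haar-random unitary via QR decomposition of a complex Gaussian matrix."""
    z = (rng.standard_normal((dim, dim))
         + 1j * rng.standard_normal((dim, dim))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    return q * (d / np.abs(d))  # fix column phases for the Haar measure

def heavy_output_probability(n_qubits: int, rng) -> float:
    """Simulate one model circuit ideally and return the probability mass on
    heavy outputs: bitstrings whose ideal probability exceeds the median."""
    dim = 2 ** n_qubits
    state = random_unitary(dim, rng)[:, 0]   # circuit applied to |00...0>
    probs = np.abs(state) ** 2
    heavy = probs > np.median(probs)
    return float(probs[heavy].sum())

rng = np.random.default_rng(7)
hops = [heavy_output_probability(4, rng) for _ in range(50)]
print(f"mean heavy-output probability: {np.mean(hops):.2f}")
# A device "passes" at size n if its measured heavy-output probability,
# averaged over many random circuits, stays above 2/3.
```

On real hardware, noise drags the measured heavy-output probability down from the ideal value toward one-half, so the largest n that still clears two-thirds captures qubit count, connectivity, gate errors, and compiler quality all at once.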

Since quantum computing now sits in the zone between quantum supremacy and quantum advantage, interim performance benchmarks are needed more than ever. Quantum Volume offers many readily available operational and business benefits to gate-based and circuit-based quantum computer companies.

IBM’s published results demonstrate Quantum Volume’s benefits. It can facilitate research, help develop system roadmaps, and help to systematically optimize architectures needed to advance technology to quantum advantage and beyond.

Every researcher understands that funding for quantum computing research depends on a favorable public opinion of the technology. Unfortunately, because of quantum computing’s complexity and the long intervals between significant announcements, the general public understands very little about it. Wider use of Quantum Volume would give the public a better feel for progress and generate more interest.

Quantum Volume would also benefit the CEO or investor who lacks the in-depth technical knowledge necessary to make confident investment decisions in the technology. Additionally, reported variations in Quantum Volume from company to company would likely stimulate more articles by the media, which would serve to educate the general public further.

**Summary**

There should be no question that the development and ongoing support of Quantum Volume demonstrates IBM’s long-term commitment to the overall quantum computing ecosystem.

Organizations like the IEEE need to assist in the long-term evolution and development of quantum computing benchmarks. Unfortunately, it will take years to complete the investigation, documentation, and negotiation of a final standard.

The bottom line – there are many benefits to using Quantum Volume. Moor Insights & Strategy believes it is a powerful tool that should be adopted as an interim benchmarking tool by other gate-based quantum computer companies. When the time comes, it should also be considered by IEEE as a permanent standard.