Quantum technologies are paving the way for the future of computing, promising capabilities beyond classical computation. Among the various platforms available for quantum information processing, photonic quantum processors are promising due to their scalability and potential for direct deployment in quantum networks. However, to gauge their true power, we need a way to assess their performance. In this blog post, we discuss a performance metric for quantum computers called Quantum Volume and how it can be adapted to photonic processors. Technical details can be found in our recent publication (arXiv preprint).

A typical quantum computing task involves applying gates to several qubits within a quantum circuit (see Figure 1).

Current quantum processors are built out of a small number of noisy qubits, a stage dubbed noisy intermediate-scale quantum (NISQ) technology. One may wonder how faithfully these noisy machines perform, i.e., how many more gates we can apply (increasing circuit depth) or how many more qubits we can add (increasing circuit width) before the quantum information is fully overwhelmed by noise and our quantum processor becomes a random machine. This is the essence of a metric called Quantum Volume (QV), initially proposed by IBM. Since then, it has been used to report progress in quantum computing platforms based on superconducting circuits and trapped ions. The QV is often reported in exponential form as 2^{d}, where d is the width and depth of the largest square circuit (equal numbers of qubits and gate layers) that the processor can execute faithfully. In what follows, we briefly review photonic quantum processors and the challenges facing their realization; we then present the minimum optical properties required to achieve a QV of 2^{10}, a typical value reported for common quantum computing platforms such as superconducting qubits or trapped ions.
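To make the QV metric concrete, here is a toy numerical sketch of the underlying heavy-output test. It is not IBM's full protocol: as simplifying assumptions, we stand in a single Haar-random unitary for the layered model circuit and model noise as global depolarizing with strength `eps`; all function names are ours.

```python
import numpy as np

rng = np.random.default_rng(42)

def haar_unitary(dim):
    """Sample a Haar-random unitary via QR decomposition with phase fixing."""
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    return q * (d / np.abs(d))

def heavy_output_probability(n_qubits, eps):
    """Probability that a noisy run yields a 'heavy' bitstring, i.e. one whose
    ideal probability exceeds the median of the ideal output distribution.
    Noise model: global depolarizing with strength eps (an assumption)."""
    dim = 2 ** n_qubits
    u = haar_unitary(dim)              # stand-in for a width-n, depth-n circuit
    p_ideal = np.abs(u[:, 0]) ** 2     # output distribution starting from |0...0>
    heavy = p_ideal > np.median(p_ideal)
    p_noisy = (1 - eps) * p_ideal + eps / dim  # depolarizing mixes in uniform noise
    return p_noisy[heavy].sum()

def passes_qv(n_qubits, eps, n_circuits=50):
    """The processor 'achieves' QV = 2**n_qubits if the mean heavy-output
    probability over random circuits exceeds the 2/3 threshold."""
    hops = [heavy_output_probability(n_qubits, eps) for _ in range(n_circuits)]
    return np.mean(hops) > 2 / 3

print(passes_qv(5, eps=0.05))  # low noise: heavy outputs dominate, test passes
print(passes_qv(5, eps=0.9))   # strong noise: output is nearly uniform, test fails
```

For a nearly ideal circuit the heavy-output probability approaches (1 + ln 2)/2 ≈ 0.85, while a fully randomized machine yields 0.5, which is why 2/3 is used as the pass threshold.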

Quantum photonics is emerging as a promising platform for scalable quantum information processing, possibly at room temperature. It enables the direct deployment of quantum networks (see our earlier blog post), for example by serving as a repeater for quantum error correction or as a server for distributed quantum computing resources. Photonic qubits, which can be encoded in coherent pulses or single photons, form the basis of quantum photonics. Coherent detection techniques, such as homodyne detection, are employed to measure continuous-variable qubits encoded in coherent pulses, while single-photon detectors are used to measure discrete-variable qubits encoded in single photons. However, a key challenge in photonics is that qubits are difficult to store and to apply gates to, since they travel at the speed of light.

To address the storage challenge, photonic quantum processors often use measurement-based quantum computing (MBQC) schemes instead of the circuit-based quantum computing (CBQC) schemes suited to stationary qubits. In MBQC, photonic chips generate a stream of photonic qubits that pass through a series of interferometers (i.e., delay lines, beam splitters, and phase shifters) and are eventually directed to photonic detectors for measurement. Different gates are realized through different sequences of measurement bases. For example, a small quantum circuit on three qubits (as shown below) can be implemented by a series of measurements on 42 pulses, in the form of 7 instances of 6 synchronous pulses traveling through a 6-channel photonic chip.
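The pulse bookkeeping in this example can be sketched in a few lines. The assumption that each logical qubit occupies two chip channels (as the 3-qubit, 6-channel example suggests) is ours; the exact circuit-to-measurement mapping depends on the specific MBQC scheme.

```python
def mbqc_pulse_count(n_qubits, n_steps, channels_per_qubit=2):
    """Total pulses measured in an MBQC run: one synchronous pulse per
    channel per measurement step (channels_per_qubit=2 is an assumption)."""
    channels = n_qubits * channels_per_qubit
    return channels, channels * n_steps

channels, pulses = mbqc_pulse_count(n_qubits=3, n_steps=7)
print(channels, pulses)  # 6 channels and 42 pulses, matching the example above
```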

The major source of noise in photonic systems is photon loss (optical signal attenuation) as the quantum signal travels through waveguides and is ultimately measured at the detectors. In our recent paper, we derived a mapping from a lossy quantum photonic chip to a circuit composed of noisy gates, enabling the computation of the QV for photonic processors. Our findings indicate that, to achieve a QV of 2^{10} with coherent-pulse qubits, photonic hardware would need losses reduced to approximately 10% (about 0.5 dB) and a squeezing level of 18 dB. To put these numbers in perspective, the record squeezing ever reported in optics labs is 15 dB. Moreover, most photon loss occurs at the waveguide interfaces, where the signal enters the chip and where it is measured at the detector; a typical interface loss is 20% (about 1 dB). Our results therefore imply that hardware quality needs to improve further to meet the required squeezing and loss levels, although the gap is not too large.
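The dB figures quoted above follow from the standard conversions between transmission, attenuation, and squeezed-quadrature variance; a quick check (function names are ours):

```python
import math

def loss_fraction_to_db(loss):
    """Attenuation in dB for a given photon-loss fraction
    (loss = 0.10 means 90% of the signal is transmitted)."""
    return -10 * math.log10(1 - loss)

def squeezing_db_to_variance(s_db):
    """Quadrature-variance reduction factor, relative to vacuum,
    corresponding to s_db decibels of squeezing."""
    return 10 ** (-s_db / 10)

print(round(loss_fraction_to_db(0.10), 2))     # 0.46 -- the "roughly 0.5 dB" target
print(round(loss_fraction_to_db(0.20), 2))     # 0.97 -- typical 20% interface loss
print(round(squeezing_db_to_variance(18), 4))  # 0.0158 -- noise variance at 18 dB
```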

Photonic quantum processors offer exciting prospects for quantum information processing. Evaluating their power using metrics like Quantum Volume enables meaningful comparisons with other quantum technologies. Despite the challenges posed by photon loss, continuing advances in photonic hardware and encoding schemes should lead to enhanced performance. At the time of writing, a universal photonic processor based on MBQC has not yet been realized in the laboratory. In this regard, our QV framework provides an essential tool for hardware research and development: reporting progress, identifying bottlenecks, and designing a technology roadmap.