### Experimental Boson Sampling

**[From left to right] Part of the Oxford integrated quantum optics team: Steve Kolthammer, Ben Metcalf, Merritt Moore, Peter Humphreys, Justin Spring, and Ian Walmsley**

**Authors: Steve Kolthammer, Justin Spring, Ian Walmsley**

**Affiliation: Department of Physics, University of Oxford, UK**

A long-term ambition of many quantum physicists these days is to construct and understand large and complex quantum systems. This goal is driven in part by the notion that such systems have the potential to reveal interesting phenomena that cannot be studied by classical simulation. Such phenomena occur widely across the natural sciences, from biochemistry to condensed matter physics. The notion of complexity itself, however, is perhaps on firmest ground within the framework of computation. Here complexity refers to how the physical resources needed to achieve a particular task depend on the size of the task. Dramatic cases in which a quantum algorithm provides an exponential advantage over the best known classical algorithm make clear the potential for a quantum system to process information in ways that would eclipse even the most fantastically advanced classical machines.

Past 2Physics articles by this Group:

December 25, 2011: "Entang-bling" by Ian Walmsley and Joshua Nunn

August 07, 2011: "Building a Quantum Internet" by Joshua Nunn

Despite steady progress in manipulating and measuring simple quantum systems, a gulf remains between the theoretical promise of quantum computation and what has been shown in the laboratory. Bridging this divide requires not only further experimental advances but also the development of models of computation which are easier to test. In 2010, Scott Aaronson and Alex Arkhipov reported important progress toward the latter in a study of the computational power inherent in the quantum interference of indistinguishable, non-interacting bosons [1]. The specific problem they posed, termed boson sampling, is appealing due to its relatively straightforward implementation with linear optical elements, such as beam splitters and phase shifters, sources of indistinguishable single photons, and single-photon detectors.

Importantly, this scheme does not require on-demand entanglement and feed-forward control, which are required for other approaches to linear-optical quantum computing and difficult to achieve in the laboratory. From a computational perspective, the key advance by Aaronson and Arkhipov was to show strong evidence that boson sampling is in fact interesting – that is, it cannot be solved efficiently by classical computation.

The essence of the boson sampling problem can be readily understood by examining the quantum machine which solves it. Our machine begins by injecting indistinguishable single photons into a linear optical network of single-mode waveguides. Due to quantum interference as the photons traverse the network, the photons emerge in a complicated entangled state. The positions of the photons are then measured by single-photon detectors. The machine bears some resemblance to the classical Galton board, illustrated below, in which balls randomly fall through an array of pegs. However, while these distinguishable classical balls take familiar, distinct paths down the board, rolling either left or right off of each peg they encounter, the photons in some sense collectively take all possible paths through their network.
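The classical side of this comparison is easy to simulate. The sketch below is a minimal illustrative model of the Galton board (the function name and parameters are our own, not from any experiment): each distinguishable ball takes one definite path, bouncing left or right at every peg, so its final bin is simply the number of rightward bounces.

```python
import random
from collections import Counter

def galton(n_rows=8, n_balls=10_000, seed=1):
    """Classical Galton board: each ball bounces left or right at each
    of n_rows pegs, so its final bin is its count of rightward bounces."""
    rng = random.Random(seed)
    return Counter(
        sum(rng.random() < 0.5 for _ in range(n_rows))
        for _ in range(n_balls)
    )

bins = galton()
# The bin counts approximate a binomial distribution peaked at the centre.
```

Sampling from this binomial distribution is trivially efficient on a classical computer; the point of boson sampling is that the quantum analogue, in which amplitudes for all paths interfere, is not.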

Surprisingly, the interference of these paths makes it hard for a classical computer to predict where the photons tend to emerge! In the quantum case, the probability of a particular measurement outcome after N photons are injected into a network described by a unitary matrix **U** is proportional to the squared magnitude of the permanent of a specific N x N submatrix of **U**. The permanent, a function of a matrix similar to the determinant, is noteworthy because it is hard to compute. Aaronson and Arkhipov showed that an efficient classical algorithm that samples from a distribution of matrix permanents, precisely the task accomplished by the quantum machine, is very unlikely to exist.

**Classical Galton board (left) and quantum boson sampling machine (right).**

Last month, our group was one of two teams to report initial boson sampling experiments in Science [2, 3]. In fact, a remarkable interest in boson sampling became apparent last December when four groups - with experiments undertaken in Oxford, Brisbane, Vienna, and Rome - presented results on arXiv.org within the same week [2-5]. In our measurements, the principles of boson sampling were verified with three and four photons in a photonic chip. The photons were generated by degenerate parametric down-conversion, a nonlinear optical three-wave-mixing process, in a pair of crystals and coupled into a silica-on-silicon waveguide chip fabricated by our collaborators at the University of Southampton. The output from the chip was measured with single-photon avalanche photodiodes.

How closely did our quantum machine match an ideal boson sampling device? To address this issue we compared our measurements to the predictions of a classical computer, which is able to handle small boson sampling tasks. By extending the model of our machine to include a number of small, independently measured imperfections, such as rare multi-pair photon emission from a source, we verified that the machine ran as expected. In terms of quantum optics, our findings extend the well-studied Hong-Ou-Mandel quantum interference of two photons to processes with three and four photons. In terms of computation, we demonstrated the feasibility of running small-scale boson sampling tasks with the current experimental state of the art.
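The same permanent rule shows how two-photon Hong-Ou-Mandel interference generalizes to more photons. As a hedged illustration (the `permanent` helper and the choice of a balanced three-mode "tritter", the 3x3 discrete Fourier transform, are our own toy example, not the chip's actual unitary), one can compare the quantum coincidence probability with what distinguishable particles would give:

```python
import itertools
import numpy as np

def permanent(m):
    """Naive permanent via a sum over all permutations."""
    n = m.shape[0]
    return sum(
        np.prod([m[i, p[i]] for i in range(n)])
        for p in itertools.permutations(range(n))
    )

# Balanced three-mode coupler ("tritter"): the 3x3 discrete Fourier
# transform, a standard unitary for three-photon interference.
w = np.exp(2j * np.pi / 3)
U = np.array([[1, 1, 1],
              [1, w, w**2],
              [1, w**2, w]]) / np.sqrt(3)

# One photon into each input; probability that one photon exits each output:
p_quantum = abs(permanent(U)) ** 2             # interfering amplitudes
p_classical = permanent(np.abs(U) ** 2).real   # distinguishable particles
print(p_quantum, p_classical)  # ~0.333 vs ~0.222
```

Here indistinguishable photons are *more* likely to emerge one per port (1/3) than distinguishable particles would be (2/9); for other unitaries interference suppresses outcomes instead. Measuring such deviations from the classical prediction is one way an experiment certifies genuine multi-photon interference.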

A striking feature of the boson sampling problem is that it appears to be amenable to the tools of complexity theorists as well as those of quantum experimenters. Meeting these disparate requirements is the crucial challenge to achieving a direct test of quantum computational power. Future work on boson sampling will benefit from an ongoing effort to understand how unavoidable experimental imperfections relate to computational complexity. Meanwhile, quantum optics labs, including our own, will continue to work towards experiments with increasing numbers of single photons. Extending our initial studies to ten photons should provide clear evidence for the computational power of this simple quantum machine, and a calculation with tens of photons could reach the frontier of quantum complexity that is inaccessible by classical computation.

**References**

**[1]** Scott Aaronson and Alex Arkhipov, “The Computational Complexity of Linear Optics”, Proceedings of the ACM Symposium on the Theory of Computing (ACM, New York, 2011), pp. 333-342.

**[2]** Justin B. Spring, Benjamin J. Metcalf, Peter C. Humphreys, W. Steven Kolthammer, Xian-Min Jin, Marco Barbieri, Animesh Datta, Nicholas Thomas-Peter, Nathan K. Langford, Dmytro Kundys, James C. Gates, Brian J. Smith, Peter G. R. Smith, Ian A. Walmsley, “Boson Sampling on a Photonic Chip”, Science 339, 798 (2013).

**[3]** Matthew A. Broome, Alessandro Fedrizzi, Saleh Rahimi-Keshari, Justin Dove, Scott Aaronson, Timothy C. Ralph, Andrew G. White, “Photonic Boson Sampling in a Tunable Circuit”, Science 339, 794 (2013).

**[4]** Max Tillmann, Borivoje Dakić, René Heilmann, Stefan Nolte, Alexander Szameit, Philip Walther, “Experimental Boson Sampling”, arXiv:1212.2240 [quant-ph] (2012).

**[5]** Andrea Crespi, Roberto Osellame, Roberta Ramponi, Daniel J. Brod, Ernesto Galvão, Nicolò Spagnolo, Chiara Vitelli, Enrico Maiorino, Paolo Mataloni, Fabio Sciarrino, “Experimental Boson Sampling in Arbitrary Integrated Photonic Circuits”, arXiv:1212.2783 [quant-ph] (2012).

Labels: Complex System, Photonics, Quantum Computation and Communication
