
Sunday, November 11, 2012

Measurement of Photon Statistics with Live Photoreceptor Cells

Leonid Krivitsky

Author: Leonid Krivitsky

Affiliation: Data Storage Institute, Agency for Science Technology and Research (A*STAR), Singapore

Conventional light sources, such as lamps, stars, laser pointers, etc., are known to exhibit intrinsic photon fluctuations. This means that the number of photons emitted by the source is not strictly defined but follows a specific statistical distribution. For example, the photon number distribution of a laser obeys a Poisson distribution, whilst the photon number distribution of a thermal source (lamp, star) obeys a Bose-Einstein distribution, which decays exponentially with photon number.

The question which we address in this work is how fluctuations of various light sources are perceived by visual systems of living organisms [1]. A simple analogy which illustrates this is our human perception of the stars in the night sky. It is known that faint stars blink because of the atmospheric turbulence, which disturbs the star light on its way to the eye. At the same time, we may notice that bright stars, e.g. Polaris, observed under the same conditions, seem almost stable. This observation suggests that the ability of the eye to perceive blinking (fluctuating) lights is related to the brightness of the source. This scenario can be carefully reproduced in the lab by interfacing photosensitive eye cells, known as photoreceptors, with flashes of light from sources with different photon statistics.

Photoreceptor cells within the eye, known as retinal rods, are responsible for vision under low light conditions [2]. They are capable of detecting light down to the single photon level. When stimulated by light, the cells respond in ways that can be measured. In particular, the electrical activity of the cell, which is driven by the absorption of individual photons by the cells, can be measured by using fine glass microelectrodes. Moreover, since the cell is constrained within the recording microelectrode, moving the microelectrode allows us to position the cell close to the tip of an optical fiber, which is used to perform targeted light delivery into the cell (see Fig. 1) [3].

Fig.1 Microscope image of the retinal rod cell constrained in a glass suction pipette (on the right) and a tapered optical fiber (on the left) used for light delivery to the cell.

In our experiment, we deliver flashes of light to the cell by feeding light from a laser and from a pseudo-thermal light source into an optical fiber. We then measure the average and standard deviation of the cell response to repetitive light flashes at different flash intensities. The relation between the average <A> and the standard deviation ΔA of the amplitude of the cell response is characterized by a signal-to-noise ratio SNR = <A>/ΔA.
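To make the link between photon statistics, saturation, and SNR concrete, here is a minimal numerical sketch in Python. The saturating response function, the parameter values, and the helper names (response, snr) are illustrative assumptions rather than the cell model of Ref. [1]; the sketch only reproduces the qualitative behaviour, namely that a saturating detector shows a much sharper rise of the SNR for Poissonian (laser) flashes than for Bose-Einstein (pseudo-thermal) flashes.

import numpy as np

# Toy model (not the analysis code of Ref. [1]): compare the response of a
# saturating detector to flashes with Poissonian (laser) and Bose-Einstein
# (pseudo-thermal) photon statistics, and compute SNR = <A>/dA.
rng = np.random.default_rng(0)
A_sat, n0, flashes = 20.0, 500.0, 20000     # pA, photons, repetitions (illustrative)

def response(n):
    """Saturating amplitude of the 'cell' for n impinging photons."""
    return A_sat * (1.0 - np.exp(-n / n0))

def snr(mean_photons, source):
    """Ratio of mean to standard deviation of the flash-to-flash amplitude."""
    if source == "laser":                    # Poissonian photon number
        n = rng.poisson(mean_photons, flashes)
    else:                                    # pseudo-thermal: Bose-Einstein (geometric)
        n = rng.geometric(1.0 / (1.0 + mean_photons), flashes) - 1
    A = response(n)
    return A.mean() / A.std()

for mean_n in (100, 500, 1000, 2500):
    print(f"<n> = {mean_n:4d}   SNR(laser) = {snr(mean_n, 'laser'):7.1f}   "
          f"SNR(thermal) = {snr(mean_n, 'thermal'):6.1f}")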

It turns out that the fluctuation of the cell’s response depends crucially on the saturation of the cell response. Firstly, the dependence of the average cell response on the number of impinging photons behaves differently for light sources with different photon statistics (see Fig.2). As we can see, for the case of the pseudo-thermal light source (open symbols) the saturation is considerably smoother than for the laser light (solid symbols). This is explained by the fact that for bright (on average) pseudo-thermal light source there is always a considerable chance of observing flashes with low photon numbers, which prevents sharp saturation of the average response.

Fig.2 Dependence of the average normalized amplitude of the cell response on the normalized number of impinging photons for laser (solid symbols, solid lines) and pseudo-thermal (open symbols, dashed lines) light sources. Lines are results of theoretical modelling. Typical values of saturation amplitudes are in the range of 18-25 pA, and of photon numbers are in the range of 700-2500 photons per pulse. Saturation of the response is different for the two sources due to the difference in their photon statistics.

Secondly, the saturation of the cell at relatively bright flashes leads to a sharp increase of the SNR (see Fig.3). Indeed, if the cell is saturated by bright lights, its response does not fluctuate and this automatically results in a high value of SNR since ΔA becomes vanishingly small. This may be the reason why we are able to see fluctuating dim stars, but the bright stars in the night sky appear almost stable.

Fig.3 Dependence of the Signal to Noise Ratio (SNR) on the normalized number of impinging photons for laser (solid symbols, solid lines) and pseudo-thermal (open symbols, dashed lines) light sources. Lines are results of theoretical modelling. Sharp increase of SNR is a signature of the bleaching of the cell.

In conclusion, this work contributes to a better understanding of the sensitivity of retinal rod cells to photo-stimulation. It shows that under certain conditions the cell, like a man-made photodetector, can be used to measure the photon statistics of various light sources. It is of further interest to us to investigate how the cell interacts with sources of non-classical light, and this study is currently in progress. More practical applications of the above work could include building a detector with retinal rods that mimics the natural detection of light by our eyes.

References:
[1] "Measurement of Photon Statistics with Live Photoreceptor Cells", Nigel Sim, Mei Fun Cheng, Dmitri Bessarab, C. Michael Jones, Leonid A. Krivitsky, Physical Review Letters, 109, 113601 (2012). Abstract.
[2] "Single-photon detection by rod cells of the retina", F. Rieke and D. A. Baylor, Review of Modern Physics, 70, 1027 (1998). Abstract.
[3] "Method of targeted delivery of laser beam to isolated retinal rods by fiber optics", Nigel Sim, Dmitri Bessarab, C. Michael Jones, Leonid Krivitsky, Biomedical Optics Express, 2, 2926 (2011). Abstract.



Sunday, November 04, 2012

Quantum Teleportation Over 143 Kilometers

From left to right: Anton Zeilinger, Xiao-song Ma, Rupert Ursin, Bernhard Wittmann, Thomas Herbst, Sebastian Kropatschek of the Institute for Quantum Optics and Quantum Information (IQOQI), Vienna, Austria.

Authors: Xiao-song Ma1,2, Johannes Kofler3, Rupert Ursin1,2, Anton Zeilinger1,2

Affiliation:
1Institute for Quantum Optics and Quantum Information (IQOQI), Austrian Academy of Sciences, Vienna, Austria.
2Vienna Center for Quantum Science and Technology, Faculty of Physics, University of Vienna, Austria.
3Max Planck Institute of Quantum Optics (MPQ), Garching, Germany.

Johannes Kofler of Max Planck Institute of Quantum Optics (MPQ), Germany

The so-called “quantum internet” is envisaged as a revolutionary platform for information processing. It would no longer be based on classical computer networks but on modern quantum information technology, in which individual quantum particles are the carriers of information. Quantum networks promise absolutely secure communication and enhanced computation power for decentralized tasks compared to any conceivable classical technology. Due to the intrinsic transmission losses of conventional glass fibers, a global quantum network will likely be based on the free-space transfer of quantum states, e.g., between satellites and from satellites to ground. The now-realized quantum teleportation [1] over a distance of 143 kilometers [2], beating the record of 97 kilometers set only one month earlier by a group of physicists from China [3], is a significant step towards this future technology.

Past 2Physics articles by Rupert Ursin and/or Anton Zeilinger:

May 30, 2009: "Transmission of Entangled Photons over a High-Loss Free-Space Channel" by Alessandro Fedrizzi, Rupert Ursin and Anton Zeilinger,

June 08, 2007: "Entanglement and One-Way Quantum Computing "
by Robert Prevedel and Anton Zeilinger

On the island of La Palma our team produced entangled pairs of particles of light (photons 2 and 3, see figure 1). Quantum entanglement means that neither of the photons taken by itself has a definite polarization but that, if one measures the polarization of one of the photons and obtains a random result, the other photon will always show a perfectly correlated polarization. This type of quantum correlation cannot be described by classical physics, and Albert Einstein therefore called it “spooky action at a distance”. Photon 3 was then sent through the air to Tenerife, across the Atlantic Ocean at an altitude of about 2400 meters and over a distance of 143 kilometers, where it was caught by a telescope of the European Space Agency. Photon 2 remained in the laboratory at La Palma. There, we created an additional particle of light (photon 1) in a freely selectable polarization state, which we wanted to teleport.

Figure 1: Schematic illustration of the teleportation experiment. The polarisation state of particles of light was teleported over a distance of 143 kilometres from the Canary Island La Palma to Tenerife. Graphic: IQOQI Vienna & MPQ Garching.

This was achieved in several steps: First, a special kind of joint measurement, the so-called Bell measurement (“BM”), was performed on photons 1 and 2, which irrevocably destroys both photons. Two possible outcomes of this measurement were discriminated, and the corresponding classical information was sent via a conventional laser pulse (violet in the figure) to Tenerife. There, depending on which of the outcomes of the Bell measurement had been received, the polarization of photon 3 was transformed accordingly. This transformation (“T”) completed the teleportation process, and the polarization of photon 3 on Tenerife was then identical with the initial polarization of photon 1 on La Palma.
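For readers who want to trace the logic of these steps, the following sketch simulates the protocol with state vectors in NumPy. It illustrates the textbook teleportation circuit rather than the experimental apparatus; the choice of the |Phi+> pair, the correction convention Z^b X^a, and all variable names are assumptions of this toy model.

import numpy as np

rng = np.random.default_rng(1)
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.diag([1.0, -1.0])

# Random polarization state of photon 1 (the state to be teleported)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)

# Photons 2 and 3 in the entangled state |Phi+> = (|00> + |11>)/sqrt(2)
bell_pair = np.array([1, 0, 0, 1]) / np.sqrt(2)
state = np.kron(psi, bell_pair)              # joint state of photons 1, 2, 3

def bell_vec(a, b):
    """Bell state of photons 1 and 2: a=0 -> Phi, a=1 -> Psi; b=0 -> '+', b=1 -> '-'."""
    v = np.zeros(4, complex)
    if a == 0:
        v[0], v[3] = 1, (-1) ** b            # (|00> +/- |11>)/sqrt(2)
    else:
        v[1], v[2] = 1, (-1) ** b            # (|01> +/- |10>)/sqrt(2)
    return v / np.sqrt(2)

# Bell measurement ("BM") on photons 1 and 2, simulated with the Born rule
outcomes = []
for a in (0, 1):
    for b in (0, 1):
        proj = np.kron(bell_vec(a, b).conj(), I2)   # <Bell_ab| on 1,2, identity on 3
        phi3 = proj @ state                         # unnormalized leftover state of photon 3
        outcomes.append((a, b, phi3, np.vdot(phi3, phi3).real))
p = np.array([o[3] for o in outcomes]); p /= p.sum()
a, b, phi3, _ = outcomes[rng.choice(4, p=p)]

# Classical feed-forward and correction ("T"): apply Z^b X^a on the Tenerife side
out = np.linalg.matrix_power(Z, b) @ np.linalg.matrix_power(X, a) @ phi3
out /= np.linalg.norm(out)
print("Bell outcome:", (a, b), "  fidelity with the input state:",
      round(abs(np.vdot(psi, out)) ** 2, 6))        # -> 1.0 for every outcome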

Figure 2: Long-exposure photograph looking from La Palma to Tenerife. A green laser beam indicates the free-space link between the two laboratories [Graphic: IQOQI Vienna].

The complexity of the setup and the environmental conditions (changes of temperature, sand storms, fog, rain, snow) constituted a significant challenge for the experiment. They also demanded a combination of modern quantum optical technologies concerning the source of entangled particles of light, the measurement devices, and the temporal synchronization of the two laboratories (see Figure 2 for the experimenter’s view from La Palma to Tenerife). The experiment therefore represents a milestone, which demonstrates the maturity and applicability of these technologies in real-world outdoor conditions and hence paves the way for future global quantum networks. For the next step, satellite-based quantum teleportation, an international collaboration of the Austrian and Chinese Academies of Sciences plans to launch a satellite in the foreseeable future.

References
[1] “Teleporting an unknown quantum state via dual classical and Einstein-Podolsky-Rosen channels”, Charles H. Bennett, Gilles Brassard, Claude Crépeau, Richard Jozsa, Asher Peres, William K. Wootters, Physical Review Letters, 70, 1895 (1993). Abstract.
[2] “Quantum teleportation over 143 kilometres using active feed-forward”, Xiao-Song Ma, Thomas Herbst, Thomas Scheidl, Daqing Wang, Sebastian Kropatschek, William Naylor, Bernhard Wittmann, Alexandra Mech, Johannes Kofler, Elena Anisimova, Vadim Makarov, Thomas Jennewein, Rupert Ursin, Anton Zeilinger, Nature 489, 269 (2012). Abstract.
[3] “Quantum teleportation and entanglement distribution over 100-kilometre free-space channels”, Juan Yin, Ji-Gang Ren, He Lu, Yuan Cao, Hai-Lin Yong, Yu-Ping Wu, Chang Liu, Sheng-Kai Liao, Fei Zhou, Yan Jiang, Xin-Dong Cai, Ping Xu, Ge-Sheng Pan, Jian-Jun Jia, Yong-Mei Huang, Hao Yin, Jian-Yu Wang, Yu-Ao Chen, Cheng-Zhi Peng, Jian-Wei Pan, Nature 488, 185 (2012). Abstract.



Sunday, October 14, 2012

Avian Compass Reloaded

Dagomir Kaszlikowski

Author: Dagomir Kaszlikowski

Affiliation: Centre for Quantum Technologies, Department of Physics, National University of Singapore, Singapore

Recently there has been a fundamental change in our understanding of how biological systems operate; new research has revealed that quantum phenomena could play an essential part in biological processes. Understanding the fundamental mechanisms contributing to biological processes is an essential step in developing our understanding of life, its origins and evolution. However, one has to be cautious in how one approaches this problem in order to avoid triviality.

Molecules are purely quantum mechanical objects which must behave according to quantum mechanical laws. Therefore life is governed by the laws of quantum mechanics; it is the result of a series of chemical reactions between molecules happening inside of any living organism. There is nothing insightful in this observation. However, if one could demonstrate that biological systems utilize certain peculiar aspects of quantum mechanics such as coherence, entanglement or tunneling to their advantage, it would be a highly non-trivial statement.

Scientists have recently discovered strong evidence for complex quantum mechanical mechanisms in two interesting biological processes: photosynthesis [1] and the avian compass [2,3]. In this article I would like to focus on the latter.

It has been observed that some species of migratory birds can sense the direction of the Earth’s magnetic field. They use this sensitivity to the geomagnetic field to navigate during seasonal migration. This is extremely surprising because the magnetic field of our planet is very weak and it is difficult to imagine how it can affect a bird’s nervous system or trigger a behavioral response. In 2008 scientists realized that birds may directly observe the magnetic field by utilizing a magnetically sensitive photochemical reaction called the radical pair mechanism.

In a nutshell: it is assumed that the bird’s retina contains a photoreceptor pigment whose molecular axis direction depends on its position in the retina. Absorption of incident light by one part of the pigment results in electron transfer to a suitable nearby part, and in this way a radical pair is formed, i.e. a pair of charged molecules, each having an electron with unpaired spin. In the external magnetic field the state of the electron spins undergoes singlet-triplet transitions, and at random times the radical pairs recombine, forming singlet (triplet) chemical reaction products. In a bird’s brain, the amount of these chemical products varies along the retina as the direction of the molecular axis changes, and the shape of this profile is believed to be correlated with the orientation of the geomagnetic field. I would like to mention at this point that singlet and triplet states of two electrons, which are examples of entangled states, are purely quantum mechanical phenomena that cannot be modeled using classical physics.
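The essence of the radical pair mechanism can be illustrated with a minimal simulation: two electron spins start in the singlet state, one of them is coupled to a single spin-1/2 nucleus by an anisotropic hyperfine interaction, and both feel a weak Zeeman field whose direction is varied. The singlet yield then depends on the angle between the field and the molecular axis, which is the signal the compass is assumed to read out. All parameter values and names in this sketch are illustrative assumptions, not the model used in Refs. [2,3].

import numpy as np

# Spin-1/2 operators
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.diag([0.5, -0.5])
spin = [sx, sy, sz]

def op(s, pos):
    """Embed a single-spin operator at position pos of (nucleus, electron 1, electron 2)."""
    mats = [np.eye(2)] * 3
    mats[pos] = s
    return np.kron(np.kron(mats[0], mats[1]), mats[2])

# Illustrative parameters (angular frequencies in rad/us)
omega = 2 * np.pi * 1.4                      # electron Larmor frequency in a ~50 uT field
A = np.diag([0.0, 0.0, 2 * np.pi * 2.0])     # axial hyperfine tensor along the molecular z axis
k = 1.0                                      # recombination rate (inverse lifetime), 1/us

# Electrons start in the singlet state, the nucleus is fully mixed
singlet = np.zeros(4, complex)
singlet[1], singlet[2] = 1 / np.sqrt(2), -1 / np.sqrt(2)
P_S = np.kron(np.eye(2), np.outer(singlet, singlet.conj()))   # singlet projector
rho0 = P_S / 2

def singlet_yield(theta):
    """Fraction of pairs recombining through the singlet channel for a field at angle theta."""
    b = np.array([np.sin(theta), 0.0, np.cos(theta)])
    H = sum(omega * b[i] * (op(spin[i], 1) + op(spin[i], 2)) for i in range(3))
    H = H + sum(A[i, i] * op(spin[i], 0) @ op(spin[i], 1) for i in range(3))
    # Phi_S = k * integral_0^inf exp(-k t) Tr[P_S rho(t)] dt, evaluated in the eigenbasis of H
    w, V = np.linalg.eigh(H)
    Ps, r0 = V.conj().T @ P_S @ V, V.conj().T @ rho0 @ V
    dw = w[:, None] - w[None, :]
    return np.real(np.sum(Ps * r0.T * k / (k - 1j * dw)))

for deg in (0, 30, 60, 90):
    print(f"field angle {deg:3d} deg  ->  singlet yield {singlet_yield(np.radians(deg)):.4f}")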

We need to put this hypothesis in the context of behavioral experiments carried out on European Robins. In these experiments researchers determined that (i) the avian compass is sensitive only to the inclination of the magnetic field, not to its polarity; (ii) the birds were disoriented after being subjected to a weak radio-frequency magnetic field whose frequency was tuned to the characteristic frequency of the radical pair mechanism; and (iii) the compass stopped working if the local geomagnetic field was weakened or strengthened by around 30%.

There are two essential physical parameters in the radical pair mechanism model which determine a bird’s ability to navigate: the average lifetime of the radical pair and its robustness to disturbances from the environment. The lifetime of the radical pair must be long enough to produce the chemical product profile necessary to stimulate the bird’s nervous system. Disturbance from the environment is called decoherence in quantum theory. Decoherence directly affects the electronic singlet and triplet states, making them lose their entanglement. This would leave the bird’s compass insensitive to the inclination of the magnetic field. The coherence time of the radical pair must be sufficient to prevent this from happening.

My colleagues from the National University of Singapore and Oxford University (UK) showed, in their recent paper [2], that both the average lifetime and the coherence time of the radical pair in the European Robin’s eye can be of the order of 100 microseconds. This is a surprising result, given that the longest coherence times of molecular electron spin states achieved in the laboratory (where the influence of the environment is minimized as much as possible) are 80 microseconds! It would imply that evolution created protection for fragile quantum processes beyond what humans can engineer.

In my recent paper [3] -- written together with my collaborators Tomek Paterek and Jayendra Bandyopadhyay -- we arrived at different lifetime and coherence time estimates based on the same radical pair mechanism model. We estimated that the lifetime and coherence time of the radical pair in the European Robin’s eye are of the order of 10 microseconds. The difference between our results and those of [2] stems from the fact that we considered all the results (i) to (iii) of the behavioral experiments, whereas result (iii) was not accounted for in the paper by my colleagues. Let me explain this.
The graphs above represent the angular dependence of the yield of radical pair chemical products in the theoretical model of the avian compass. The black curve corresponds to the local geomagnetic field near Frankfurt, Germany, where the behavioral experiments were carried out. The blue curve represents a static magnetic field that is 30% weaker than the one in Frankfurt. The green curve is the yield in the presence of a weak radio-frequency oscillating magnetic field. The inverse of the parameter k is the lifetime of the radical pair; k is a parameter of the theoretical model that must be adjusted in order to reproduce the experimental data.

According to behavioral experiments (ii) and (iii), a European Robin becomes disoriented either if it is subjected to a weak radio-frequency magnetic field oscillating at a specific frequency or if the magnitude of the local geomagnetic field is changed by 30%. Therefore, for those values of the parameter k for which the green curve enters the region called the “functional window” of the compass, one gets a contradiction with experiment, because the birds were disoriented in the presence of the RF magnetic field. As you can see from the right-most graphs, this happens for radical pair lifetimes of the order of 10 microseconds. This is our estimate of the average lifetime of the radical pair in the European Robin’s retina. It also agrees with in vitro experiments on cryptochrome molecules [4]. These molecules are believed to constitute the photoreceptor pigment responsible for the radical-pair-based avian compass of European Robins.

Based on this value of the parameter k we were able to investigate the sensitivity of the avian compass as a function of the environmental noise as shown in graphs below:
The upper graph corresponds to the situation where the coupling strength of one of the electrons in the radical pair is slightly stronger than the strength of the geomagnetic field in Frankfurt (corresponding to a Larmor precession period of 0.78 microseconds). The lower graph corresponds to the case where the coupling is slightly weaker. I would like to mention that the compass only works for a small range of coupling strengths around this value.

If the coupling strength is larger than the strength of the geomagnetic field then the sensitivity of the compass in the presence of the environmental noise (corresponding to coherence time of the order of one microsecond) is better than if there is no noise! This is not the case if the coupling strength is weaker but we still observe a local increase in the sensitivity for a coherence time of around one microsecond. A similar phenomenon is also found in studies of energy transfer during photosynthesis [5].

A plausible conclusion one can draw from these results is that nature may be optimizing performance of some biological processes by utilizing inevitable noise present in the environment, which is definitely a non-trivial statement about the role quantum mechanics can play in biology. More insight into this conjecture can be obtained from further studies of the resonance that magnetic sensitivity displays as a function of environmental noise -- as identified in this work.

References:
[1] Gregory S. Engel, Tessa R. Calhoun, Elizabeth L. Read, Tae-Kyu Ahn, Tomáš Mančal, Yuan-Chung Cheng, Robert E. Blankenship, Graham R. Fleming, "Evidence for wavelike energy transfer through quantum coherence in photosynthetic systems", Nature 446, 782 (2007). Abstract.
[2] Erik M. Gauger, Elisabeth Rieper, John J. L. Morton, Simon C. Benjamin, and Vlatko Vedral, "Sustained Quantum Coherence and Entanglement in the Avian Compass", Physical Review Letters, 106, 040503 (2011). Abstract.
[3] Jayendra N. Bandyopadhyay, Tomasz Paterek, and Dagomir Kaszlikowski, "Quantum Coherence and Sensitivity of Avian Magnetoreception", Physical Review Letters, 109, 110502 (2012). Abstract.
[4] Till Biskup, Erik Schleicher, Asako Okafuji, Gerhard Link, Kenichi Hitomi, Elizabeth D. Getzoff, Stefan Weber, "Direct Observation of a Photoinduced Radical Pair in a Cryptochrome Blue-Light Photoreceptor", Angewandte Chemie International Edition, 48, 404 (2009). Abstract.
[5] Masoud Mohseni, Patrick Rebentrost, Seth Lloyd, and Alán Aspuru-Guzik, "Environment-assisted quantum walks in photosynthetic energy transfer", Journal of Chemical Physics, 129, 174106 (2008). Abstract.



Tuesday, October 09, 2012

Physics Nobel Prize 2012: Quantum Measurement

Serge Haroche (left) and David J. Wineland (right)

The 2012 Nobel Prize in Physics has been awarded to Serge Haroche (Collège de France and Ecole Normale Supérieure, Paris, France) and David J. Wineland (National Institute of Standards and Technology (NIST) and University of Colorado Boulder, CO, USA) "for ground-breaking experimental methods that enable measuring and manipulation of individual quantum systems".

Serge Haroche and David J. Wineland have independently invented and developed methods for measuring and manipulating individual particles while preserving their quantum-mechanical nature, in ways that were previously thought unattainable.

The Nobel Laureates have opened the door to a new era of experimentation with quantum physics by demonstrating the direct observation of individual quantum particles without destroying them. For single particles of light or matter the laws of classical physics cease to apply and quantum physics takes over. But single particles are not easily isolated from their surrounding environment and they lose their mysterious quantum properties as soon as they interact with the outside world. Thus many seemingly bizarre phenomena predicted by quantum physics could not be directly observed, and researchers could only carry out thought experiments that might in principle manifest these bizarre phenomena.


Through their ingenious laboratory methods Haroche and Wineland together with their research groups have managed to measure and control very fragile quantum states, which were previously thought inaccessible for direct observation. The new methods allow them to examine, control and count the particles.

Their methods have many things in common. David Wineland traps electrically charged atoms, or ions, controlling and measuring them with light, or photons.

Serge Haroche takes the opposite approach: he controls and measures trapped photons, or particles of light, by sending atoms through a trap.

Homepage of Serge Haroche at Collège de France, Paris >>

Both Laureates work in the field of quantum optics, studying the fundamental interaction between light and matter, a field which has seen considerable progress since the mid-1980s. Their ground-breaking methods have enabled this field of research to take the very first steps towards building a new type of super-fast computer based on quantum physics. Perhaps the quantum computer will change our everyday lives in this century in the same radical way as the classical computer did in the last century. The research has also led to the construction of extremely precise clocks that could become the future basis for a new standard of time, with more than a hundred-fold greater precision than present-day caesium clocks.



Sunday, September 30, 2012

Topologically-Protected Quantum Cloud Computing

Tomoyuki Morimae (left) and Keisuke Fujii (right) 

Authors: Tomoyuki Morimae1 and Keisuke Fujii2
Affiliation:
1Department of Physics, Imperial College, London, UK
2Graduate School of Engineering Science, Osaka University, Japan

A first-generation quantum computer, which will have to integrate extremely advanced technologies, will be used in a “cloud computing” style, since only a limited number of groups, such as governments and large corporations, will be able to possess one. Clients who do not have enough money and technology to possess their own quantum computers will access a quantum “server” from their terminals and remotely run their quantum algorithms on the server. One of the most essential requirements in such cloud quantum computing is the security of the client's privacy: a client should be able to delegate her quantum computation to a server in such a way that the server cannot learn anything about her computation.

Blind quantum computation provides a solution to this privacy issue. Blind quantum computation is a secure quantum computing protocol which enables Alice, who does not have sufficient quantum technology, to delegate her quantum computation to Bob, who has a fully-fledged quantum computer, in such a way that Bob cannot learn anything about Alice's input, output, or algorithm. A protocol of blind quantum computation for an almost classical Alice was first invented by Broadbent, Fitzsimons, and Kashefi [1] using measurement-based quantum computation [2] (Fig.1). In their protocol, Alice only has to possess a device which emits randomly-rotated single-qubit states. Recently, the Broadbent-Fitzsimons-Kashefi (BFK) protocol was experimentally demonstrated in an optical system [3]. This proof-of-principle experiment of blind quantum computation has raised new challenges regarding the scalability of blind quantum computation in noisy conditions.

Fig. 1 The first blind protocol for almost classical Alice by Broadbent, Fitzsimons, and Kashefi. (Image courtesy of Liu Jia)
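The reason the BFK construction hides Alice's computation can be seen from the angles alone: for each qubit Alice secretly prepares the state |+_theta> with theta drawn uniformly from {0, pi/4, ..., 7pi/4}, and to realize a desired measurement angle phi she asks Bob to measure at delta = phi + theta + r*pi, where r is a secret random bit. The following sketch (illustrative statistics only, not an implementation of the protocol) checks that the angle delta seen by Bob is uniformly distributed whatever phi is, so Bob's transcript reveals nothing about the algorithm.

import numpy as np

rng = np.random.default_rng(0)
angles = np.arange(8) * np.pi / 4             # the 8 allowed preparation angles

def delta_seen_by_bob(phi, samples=100_000):
    """Measurement angles Alice announces to Bob for a secret computation angle phi."""
    theta = rng.choice(angles, samples)       # Alice's secret preparation angle
    r = rng.integers(0, 2, samples)           # Alice's secret masking bit
    return np.mod(phi + theta + r * np.pi, 2 * np.pi)

for phi in (0.0, np.pi / 4, 3 * np.pi / 4):
    bins = np.round(delta_seen_by_bob(phi) / (np.pi / 4)).astype(int) % 8
    counts = np.bincount(bins, minlength=8)
    print(f"phi = {phi:4.2f}  distribution of delta over the 8 angles:",
          np.round(counts / counts.sum(), 3))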

In Ref. [4], we have shown that fault-tolerant blind quantum computation is possible. We adapt the topological quantum computing scheme by Raussendorf, Harrington, and Goyal [5] to the BFK protocol. The topological quantum computing scheme is one of the most promising fault-tolerant methods in today's quantum computing architectures. In this scheme, a certain three-dimensional system of entangled qubits is first prepared, and then defects are created by measuring out some qubits of the three-dimensional system. These defects behave as anyons, and therefore braiding of the defects can implement quantum gates [5].

Fig.2 Illustration of our protocol. Alice knows her topological quantum computation, whereas Bob cannot.

In our protocol, Alice can run her topologically-protected quantum computation on Bob's quantum computer in such a way that Alice's privacy is kept secret from Bob (Fig. 2). More precisely, our protocol runs as follows (Fig.3). As in the BFK protocol, Alice first sends randomly-rotated qubits to Bob (Fig.3 (a)). By entangling these qubits, Bob next creates a certain three-dimensional system, which is a modification of the Raussendorf-Harrington-Goyal one (Fig.3 (b)). Then, Alice and Bob perform measurement-based quantum computing while exchanging classical messages (Fig.3 (c) and (d)). If Bob is honest, Alice can perform the correct topologically-protected quantum computation (Fig. 3 (e)). On the other hand, whatever evil Bob does, he cannot learn anything about Alice's input, output, and algorithm (Fig.3 (e)).

Fig. 3 Explanation of our topological blind protocol.

We have also calculated the error threshold of our topological blind scheme. The error threshold is 0.43%, which is comparable to that (0.75%) of the normal (non-blind) topological quantum computation [5]. As an error per gate of the order of 0.1% has already been achieved in some experimental systems, such as trapped ions, our result suggests that secure cloud quantum computation is within reach.

References
[1] “Universal blind quantum computation”, Anne Broadbent, Joe Fitzsimons, and Elham Kashefi, in Proceedings of the 50th Annual IEEE Symposium on Foundations of Computer Science, 517-527 (IEEE Computer Society, Los Alamitos, USA, 2009). Abstract.
[2] “A one-way quantum computer”, Robert Raussendorf and Hans J. Briegel, Physical Review Letters, 86, 5188 (2001). Abstract.
[3] “Experimental demonstration of blind quantum computing”, Stefanie Barz, Elham Kashefi, Anne Broadbent, Joe Fitzsimons, Anton Zeilinger, and Philip Walther, Science, 335, 303-308 (2012). Abstract.
[4] “Blind topological measurement-based quantum computation”, Tomoyuki Morimae and Keisuke Fujii, Nature Communications, 3, 1036 (2012). Abstract.
[5] “Topological fault-tolerance in cluster state quantum computation”, Robert Raussendorf, Jim Harrington, and Kovid Goyal, New Journal of Physics, 9, 199 (2007). Abstract.



Sunday, September 16, 2012

The Shape of Quantum Light

Three of the authors of the paper in Physical Review Letters [3]: (from left to right) Marco Bellini, Constantina Polycarpou, and Alessandro Zavatta

Author: Marco Bellini 

Affiliation: Istituto Nazionale di Ottica (INO) – CNR, Florence and
European Laboratory for Non-linear Spectroscopy (LENS), Florence, Italy

Link to "Highly Nonlinear and Quantum Optics Group" at INO >>

Similar to water, a state of light does not possess a definite shape, but rather assumes that of the container that it occupies. In particular, any quantum state of light is a specific way of “filling” an empty container, the mode, which describes the spatiotemporal shape of the electromagnetic field. Any quantum state, like a single photon (the filling of the container with just a single quantum of excitation), can thus come in many different shapes, according to the shape of the mode it occupies.


Most applications of the quantum properties of light to novel technologies critically depend on the precise knowledge of the mode shape. If not perfectly under control, the processing, detection, and use of quantum light states become very inefficient or plainly impossible.

For example, future quantum networks require that light interacts with atomic species to perform quantum processing and implement memory units. These tasks require a very specific and precise preparation of the photonic wavepacket, i.e. of its spatiotemporal mode, such that it optimally couples to the different possible interfaces. On the other hand, in a typical experiment for the complete tomographic reconstruction of some quantum light state, homodyne detection only works with sufficient efficiency if the mode of the reference classical coherent field (the so-called local oscillator, LO) is perfectly matched to that of the state under examination [1,2]. If little or no prior information on such a mode is at hand, or if the mode itself has been somehow distorted during the propagation from the source to the detector, one may completely miss the target in the detection stage.

Figure 1: Artist’s view of the optimal detection of a shaped single-photon by mixing with an equally-shaped intense coherent light pulse on a beam splitter. This is the principle of the homodyne detection technique that was used in the experiment.

Based on this characteristic of homodyne detection, our research team at the Istituto Nazionale di Ottica (INO-CNR), Florence, Italy, has developed a new scheme, putting together concepts and techniques from the fields of ultrafast and quantum optics, for the complete measurement and the possible exploitation of the spectrotemporal shape of ultrashort nonclassical light states [3]. Using the measured efficiency of homodyne detection as a feedback, we applied an adaptive procedure based on a genetic algorithm to drive a random initial population of LO shapes towards the faithful representation of the fragile temporal mode of a single photon.
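The spirit of this adaptive search can be conveyed with a toy version of the algorithm. In the sketch below, the “single photon” occupies a fixed but hidden spectral mode, the homodyne efficiency is modelled simply as the squared mode overlap between that mode and the LO, and a basic genetic algorithm evolves the LO spectral phase towards maximal overlap. The parameter values and function names are illustrative assumptions, not the laboratory code or the actual merit function of the experiment [3].

import numpy as np

rng = np.random.default_rng(2)
n_bins = 32                                       # spectral bins of the pulse shaper
amp = np.exp(-np.linspace(-2, 2, n_bins) ** 2)    # common spectral amplitude

# Hidden target: the photon's spectral phase (unknown to the algorithm)
target_phase = np.cumsum(rng.normal(0, 0.4, n_bins))
photon = amp * np.exp(1j * target_phase)
photon /= np.linalg.norm(photon)

def efficiency(phases):
    """Homodyne efficiency modelled as the squared mode overlap between LO and photon."""
    lo = amp * np.exp(1j * phases)
    lo /= np.linalg.norm(lo)
    return abs(np.vdot(lo, photon)) ** 2

# Genetic loop: keep the best LO shapes, recombine and mutate them
pop = rng.uniform(-np.pi, np.pi, (40, n_bins))
for gen in range(200):
    fit = np.array([efficiency(p) for p in pop])
    best = pop[np.argsort(fit)[-10:]]                 # survivors
    parents = best[rng.integers(0, 10, (30, 2))]      # random parent pairs
    mask = rng.random((30, n_bins)) < 0.5             # uniform crossover
    children = np.where(mask, parents[:, 0], parents[:, 1])
    children = children + rng.normal(0, 0.15, children.shape)   # mutation
    pop = np.vstack([best, children])
    if gen % 50 == 0:
        print(f"generation {gen:3d}: best efficiency = {fit.max():.3f}")
print(f"final best efficiency: {max(efficiency(p) for p in pop):.3f}")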

We tested the procedure with single photons with durations of a few tens of femtoseconds that were generated under different conditions, so that their spectrotemporal mode presented a variety of modulations [4,5]. In all cases we were able to completely map the photon wavepacket mode -- that is, its spectral and temporal intensity/phase profile -- onto that of an intense coherent light pulse, which was then characterized in detail by means of standard techniques, like interferometric autocorrelation and FROG (Frequency-Resolved Optical Gating).

Figure 2: Scheme of the genetic algorithm used to match the LO spectrotemporal shape to that of the generated single photons. Starting from a set of random LO shapes, the efficiency of homodyne detection is gradually increased by means of a genetic-like evolution of the initial shape population.

Interestingly, the scheme is not limited to the measurement of the spectral and temporal shape, and it is certainly not confined to using single photons. If a modulation of the LO wavefront is also introduced, the full spatiotemporal mode of the state can be completely probed in a similar way. Furthermore, the mode of other quantum light states besides single photons can be analyzed by properly choosing the right merit function in the adaptive algorithm.

The ability to precisely access the complex spectrotemporal structure of single photons can also be used for clever encoding and decoding of quantum information. This is possible by exploiting the different ways a single “filling” can be coherently distributed among a collection of different “containers”. More precisely, one can use different orthogonal spectrotemporal modes as different “letters” in a quantum alphabet, and encode information in the way a single photon is coherently spread over several of them. We clearly demonstrated this possibility by measuring the internal coherence of a single photon distributed between two distinct spectral modes, but our approach has the potential of dealing with much more complex situations.

Figure 3: Measured spectral intensity and phase profiles for a single photon coherently delocalized between two distinct spectral modes.

The possibility of using a larger quantum alphabet would bring enormous advantages to the field of quantum communications when compared to standard schemes normally based on qubits, i.e. on an alphabet consisting of only two possible states (like those based on light polarization or time-bins).

References
[1] “Time domain analysis of quantum states of light: noise characterization and homodyne tomography”, Alessandro Zavatta, Marco Bellini, Pier Luigi Ramazza, Francesco Marin, and Fortunato Tito Arecchi, Journal of the Optical Society of America B, 19, 1189 (2002). Abstract.
[2] “Continuous-variable optical quantum-state tomography”, A. I. Lvovsky and M. G. Raymer, Reviews of Modern Physics, 81, 299 (2009). Abstract.
[3] “Adaptive Detection of Arbitrarily Shaped Ultrashort Quantum Light States”, C. Polycarpou, K.N. Cassemiro, G. Venturi, A. Zavatta, and M. Bellini, Physical Review Letters, 109, 053602 (2012). Abstract.
[4] “Nonlocal pulse shaping with entangled photon pairs”, M. Bellini, F. Marin, S. Viciani, A. Zavatta and F. T. Arecchi, Physical Review Letters, 90, 043602 (2003). Abstract.
[5] “Nonlocal modulations on the temporal and spectral profiles of an entangled photon pair”, Silvia Viciani, Alessandro Zavatta and Marco Bellini, Physical Review A, 69, 053801 (2004). Abstract.



Sunday, August 26, 2012

Experimental Implementation of Device-Independent Dimension Witnesses

The authors in their laboratory in Stockholm last week. From left to right: Johan Ahrens, Adán Cabello, Piotr Badziag, Mohamed Bourennane.

Authors: Johan Ahrens1, Piotr Badziag1, Adán Cabello1,2 and Mohamed Bourennane1

Affiliation:
1Physics Department, Stockholm University, Sweden
2Departamento de Física Aplicada II, Universidad de Sevilla, Spain

The concept of “dimension” is ubiquitous in physics. When we say that a physical system “has” dimension 2, or 3, or infinity, what do we mean? Why do we say that a light switch has dimension 2 while a particle that can be anywhere along a line has infinite dimension? And if we are provided with a black box emitting particles, how can we actually measure the dimension of the particles without knowing how the box works?

This month, Nature Physics publishes two papers [1, 2] describing experiments to determine the (minimum) dimension of particles emitted by a black box. These experiments measure so-called “dimension witnesses”. Our experiment was performed at Stockholm University (Sweden) in collaboration with the University of Seville (Spain) [1]; the other experiment was performed at the Institute of Photonic Sciences in Barcelona (Spain) in collaboration with the University of Bristol (UK) [2, also see last week's 2Physics article]. Both are based on a proposal of the Barcelona-Bristol group [3]. Here we explain what a dimension witness is and why it is interesting to measure it in a black-box scenario.

The dimension of a physical system on which a set of measurements can be carried out is the maximum number of perfectly distinguishable states using these measurements. This means that, among these measurements, there is at least one which allows us to distinguish between any two states.

A fundamental difference between classical and quantum physics is that, in classical physics, all the states are perfectly distinguishable, while this is not the case in quantum physics. For example, a quantum system of dimension 2 (or qubit) is a system in which the maximum number of perfectly distinguishable states is 2, but this does not mean, as in classical physics, that only 2 states are possible: there are infinite states, but it is only possible to distinguish 2.

Consider the following problem: We receive a black box with 3 buttons P1, P2 and P3, such that every time we press one button, the box emits one particle. On this particle we can perform one measurement chosen between two, called M1 and M2, represented by two buttons on a second box: whenever we press the button M1 (M2) we measure M1 (M2). Each of these measurements has two possible results, which we denote -1 and +1. The whole experiment is schematically illustrated in the following figure:

What can we say about the dimension of the particles emitted by the preparator? To answer that, we repeat the experiment many times, pressing all possible pairs of buttons Pi (i=1, 2, 3) and Mj (j=1, 2), and recording the frequencies of the different results.

A dimension witness is nothing but a linear combination of probabilities P(+1|Pi,Mj) of obtaining result +1 when preparing Pi and measuring Mj, such that its experimental value provides a lower bound to the dimension of the prepared systems. For example, the following combination T is a dimension witness:

T=P(+1|P1,M1)+P(+1|P1,M2)+P(+1|P2,M1)+P(-1|P2,M2)+P(-1|P3,M1).

Since probabilities cannot be higher than 1, then the maximum value for T is 5. Let us suppose we obtain T=5. This means that P(+1|P1,M2)=1 and P(-1|P2,M2)=1, which implies that M2 distinguishes P1 from P2. In addition, P(+1|P1,M1)=1 and P(-1|P3,M1)=1, indicating that M1 distinguishes P1 from P3. Finally, P(+1|P2,M1)=1 and P(-1|P3,M1)=1, thus M1 also distinguishes P2 from P3. Conclusion: If T=5, then dimension D is (at least) 3. However, if D=2, then T cannot be 5. Therefore, the experimental value of T allows us to have a lower bound for D.

Some dimension witnesses also allow us to distinguish between classical and quantum systems of the same dimension (e.g., between bits and qubits, or between trits and qutrits). For example, it can be proven that for classical systems of D=2 the maximum value of T is 4. However, for quantum systems of D=2 the maximum value is 4.414.
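These bounds are easy to check numerically. The sketch below (a toy calculation using the usual Bloch-sphere parametrization and assuming SciPy is available; it is not the derivation used in the papers) maximizes T over pure qubit preparations and projective measurements, writing P(+1|Pi,Mj) = (1 + a_i·b_j)/2 with unit Bloch vectors a_i for the states and b_j for the measurements. The optimization converges to 3 + sqrt(2) ≈ 4.414, the quantum value quoted above, while classical bits cannot exceed 4.

import numpy as np
from scipy.optimize import minimize

def bloch(theta, phi):
    """Unit Bloch vector parametrized by polar and azimuthal angles."""
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

def T(x):
    """The witness T for qubit states a1, a2, a3 and measurements b1, b2 encoded in x."""
    a1, a2, a3, b1, b2 = (bloch(x[2 * i], x[2 * i + 1]) for i in range(5))
    p = lambda a, b: (1 + a @ b) / 2          # P(+1) for state a and measurement b
    return (p(a1, b1) + p(a1, b2) + p(a2, b1)
            + (1 - p(a2, b2)) + (1 - p(a3, b1)))

rng = np.random.default_rng(3)
best = max(-minimize(lambda x: -T(x), rng.uniform(0, np.pi, 10)).fun
           for _ in range(20))                # several random restarts
print(f"numerical qubit maximum of T: {best:.4f}   (3 + sqrt(2) = {3 + np.sqrt(2):.4f})")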

Specifically, in our experiment [1], we encode classical or quantum information in the polarization and spatial modes of individual photons and show how the experimental values of two dimension witnesses allow us to test whether the photons act as bits, qubits, trits, or qutrits.

The importance of such a tool is easy to understand if we notice that the physical system’s dimension determines its capacity to store, process and communicate information.

References:
[1] Johan Ahrens, Piotr Badziąg, Adán Cabello, Mohamed Bourennane, “Experimental device-independent tests of classical and quantum dimensions”. Nature Physics, 8, 592–595 (2012). Abstract.
[2] Martin Hendrych, Rodrigo Gallego, Michal Mičuda, Nicolas Brunner, Antonio Acín, Juan P. Torres, “Experimental estimation of the dimension of classical and quantum systems”. Nature Physics, 8, 588–591(2012). Abstract. 2Physics article.
[3] Rodrigo Gallego, Nicolas Brunner, Christopher Hadley, and Antonio Acín, “Device-independent tests of classical and quantum dimensions”. Physical Review Letters 105, 230501 (2010). Abstract.



Sunday, August 19, 2012

Testing the Dimension of Classical and Quantum Systems

Group leaders: Antonio Acin (left) and Juan Pérez Torres (right)

Authors: Martin Hendrych1, Rodrigo Gallego1, Michal Mičuda1,2, Nicolas Brunner3, Antonio Acín1,4, Juan P. Torres1,5

Affiliations:
1ICFO-Institut de Ciencies Fotoniques, Barcelona, Spain
2Department of Optics, Palacký University, Czech Republic
3H.H. Wills Physics Laboratory, University of Bristol, UK
4ICREA-Institució Catalana de Recerca i Estudis Avançats, Barcelona, Spain
5Department of Signal Theory and Communications, Universitat Politècnica de Catalunya, Barcelona, Spain

The main goal of any scientific theory is to predict and explain the results of experiments. In doing so, the theory makes some assumptions about the experiment under consideration, based, for instance, on some a priori knowledge, or symmetries of the setup. Based on these assumptions, a model -- possibly with some free parameters -- is constructed. The model is satisfactory whenever it is able to reproduce the observed results for reasonable values of the free parameters.

Consider for instance a quantum experiment involving measurements on several interacting particles. The quantum postulates tell us that the state, interactions and measurements in the setup should be described by operators acting on a Hilbert space of a given dimension. A standard practice is to assume that the dimension of the Hilbert space, that is, the number of independent parameters necessary to describe the setup is known. The theoretical model should then provide the operators in this space that reproduce the observed measurement statistics.

However one may ask whether this initial assumption on the dimension of the Hilbert space is in fact unavoidable to describe the experimental data or, on the contrary, if it is possible to estimate the dimension of a completely unknown quantum system only from the statistics of measurements performed on it. The concept of dimension witnesses gives a positive answer to the last question, as it provides lower bounds on the dimension of an unknown system only from the collected measurement results and without making any assumptions on the physical system under consideration. Clearly, without any assumption, the best one can hope for is to get lower bounds on the minimal dimension needed to describe an unknown system. In fact, one can never exclude the existence of further degrees of freedom in the system that are not seen in the present setup but can be accessed using a more refined experimental arrangement.

Our recent work [1] and a similar and independent experiment [2] represent the first experimental demonstrations of a dimension witness. Dimension witnesses were introduced in Reference[3] in the context of Bell inequalities. Later, alternative techniques for bounding the dimension of unknown systems were proposed based on random access codes [4] or the time evolution of a quantum observable [5]. In our experimental demonstration, we followed the approach presented in Reference[6], which applies to a “prepare and measurement” scenario as the one depicted in Figure 1. In this scenario there are two devices, the state preparator and the measurement device. These devices are seen as black boxes, as no assumptions are made on their internal working. At the state preparator, a quantum state ρx is prepared, out of N possible states. The state is then sent to the measuring device. There, a measurement y is performed, among M possible measurements, which produces a result b that can take K different values. The whole experiment is thus described by the probability distribution p(b|x,y), giving the probability of obtaining outcome b when measurement y is performed on the prepared state x. The goal is to estimate the minimal dimension of the mediating quantum particle between the two devices needed to describe the observed statistics.

Figure 1: Prepare-and-measure scenario for dimension witnesses. At the state preparator one can choose to prepare one out of N possible quantum states. The prepared state is denoted by ρx. The quantum state is then sent to the measuring device, where a measurement y is performed among M possibilities. The measurement result is denoted by b and can take K different values. For instance, the figure shows a scenario with four preparations and three measurements.

A dimension witness for a system of dimension d is simply a function of the observed probabilities that is bounded by a given value for all systems of dimension not larger than d. If in a given experiment the observed value of the dimension witness exceeds this bound, the system must necessarily have dimension larger than d. In our experiment, we observed the violation of one of the dimension witnesses introduced in [6], denoted by I4. The maximum values this witness can take for classical or quantum systems of dimension up to four are given in Table 1. Note that if the system dimension is assumed to be bounded, the witness also allows distinguishing between classical and quantum systems.

        Bit    Qubit    Trit    Qutrit    Quart/Ququart
  I4     5       6        7      7.97            9

Table 1: Classical and quantum bounds for the dimension witness I4. The witness I4 can be used to discriminate ensembles of classical and quantum states of dimension up to 4. Note that for some values of the dimension a gap appears between classical and quantum systems. Thus, if one assumes a bound on the dimension of the system, the witness can be used to certify its quantum nature.

Obviously, to demonstrate the dimension witness, we need to construct quantum states of different dimensions. Fortunately, photons have a rich structure: they have polarization, frequency and spatial shape. Moreover, pairs of photons can be entangled [7]. In our experiment we take advantage of all these features. First of all, multidimensional spaces of up to dimension 4 are created by generating photons in a superposition of two orthogonal polarization states (two dimensions) embedded into one out of two specific spatial modes (two more dimensions).

The state preparation works as follows. By means of spontaneous parametric down conversion, namely the generation of two lower frequency photons when a second order nonlinear crystal is pumped by an intense higher frequency optical beam, we generate photon pairs entangled in the polarization and spatial degrees of freedom. The detection of one of the photons in a tailored state effectively prepares (projects) the second photon in the desired quantum state. In our experiment, we are also interested in demonstrating the separation between quantum and classical systems of the same dimension. We achieve this at the preparation by exploiting the frequency degree of freedom, which is used to change the superposition that occurs in polarization from coherent (quantum) to incoherent (classical). The prepared photon is finally sent to the measuring device, where it is detected using optical tools very similar to those used in its generation: spatial light modulators, polarizers, optical fibers and single-photon counting modules.
Figure 2: Experimental results. The experiment probes the dimension witness I4 using systems of different nature, classical or quantum, and dimension (bit-qubit, trit-qutrit and quart). In the case of dimension 4 (quart), the dimension witness is insensitive to the quantum/classical transition (see also Table 1). For all dimensions, a violation of the corresponding bound is observed, certifying the dimension of the system.

To conclude, we have demonstrated that the dimension of classical and quantum systems can be bounded only from the measurement statistics without any extra assumption on the devices used in the experiment. Dimension witnesses represent an example of a device-independent estimation technique, in which relevant information about an unknown system is obtained only from the measurement data. Our work demonstrates how the device-independent approach can be employed to experimentally estimate the dimension of an unknown system. Beyond the fundamental motivation, the estimation of the dimension of unknown quantum systems is also relevant from a quantum information perspective, where the Hilbert space dimension is a resource that enables more powerful quantum information protocols. In fact, the quantum/classical distinction provided by dimension witnesses when the system dimension is bounded has recently been used for constructing protocols for secure key distribution [8] and randomness generation [9].

References
[1] Martin Hendrych, Rodrigo Gallego, Michal Mičuda, Nicolas Brunner, Antonio Acín, Juan P. Torres, "Experimental estimation of the dimension of classical and quantum systems", Nature Physics 8, 588–591 (2012). Abstract.
[2] Johan Ahrens, Piotr Badziąg, Adán Cabello, Mohamed Bourennane, "Experimental device-independent tests of classical and quantum dimensions", Nature Physics, 8, 592–595 (2012). Abstract.
[3] Nicolas Brunner, Stefano Pironio, Antonio Acin, Nicolas Gisin, André Allan Méthot, and Valerio Scarani, "Testing the Dimension of Hilbert Spaces", Physical Review Letters, 100, 210503 (2008). Abstract.
[4] Stephanie Wehner, Matthias Christandl, and Andrew C. Doherty, "Lower bound on the dimension of a quantum system given measured data", Physical Review A 78, 062112 (2008). Abstract.
[5] Michael M. Wolf and David Perez-Garcia, "Assessing Quantum Dimensionality from Observable Dynamics", Physical Review Letters, 102, 190504 (2009). Abstract.
[6] Rodrigo Gallego, Nicolas Brunner, Christopher Hadley, and Antonio Acín, "Device-Independent Tests of Classical and Quantum Dimensions", Physical Review Letters, 105, 230501 (2010). Abstract.
[7] Juan P. Torres, K. Banaszek and I. A. Walmsley, "Engineering Nonlinear Optic Sources of Photonic Entanglement", Progress in Optics 56, Chapter V, 227-331 (2011). Abstract.
[8] Marcin Pawłowski and Nicolas Brunner, "Semi-device-independent security of one-way quantum key distribution", Physical Review A 84, 010302(R) (2011). Abstract.
[9] Hong-Wei Li, Marcin Pawłowski, Zhen-Qiang Yin, Guang-Can Guo, and Zheng-Fu Han, "Semi-device-independent randomness certification using n→1 quantum random access codes", Physical Review A 85, 052308 (2012). Abstract.



Sunday, July 29, 2012

Imperfections, Disorder and Quantum Coherence

Steve Rolston [Image courtesy: University of Maryland, USA]

A new experiment conducted at the Joint Quantum Institute (JQI, operated jointly by the National Institute of Standards and Technology in Gaithersburg, MD and the University of Maryland in College Park, USA) examines the relationship between quantum coherence, an important aspect of certain materials kept at low temperature, and the imperfections in those materials. These findings should be useful in forging a better understanding of disorder, and in turn in developing better quantum-based devices, such as superconducting magnets. The new results are published in the New Journal of Physics [1].

Most things in nature are imperfect at some level. Fortunately, imperfections---a departure, say, from an orderly array of atoms in a crystalline solid---are often advantageous. For example, copper wire, which carries so much of the world’s electricity, conducts much better if at least some impurity atoms are present.

In other words, a pinch of disorder is good. But there can be too much of this good thing. The issue of disorder is so important in condensed matter physics, and so difficult to understand directly, that for some years scientists have been trying to simulate, with thin vapors of cold atoms, the behavior of electrons flowing through solids trillions of times more dense. With their ability to control the local forces acting on these atoms, physicists hope to shed light on the more complicated case of solids.

That’s where the JQI experiment comes in. Specifically, Steve Rolston and his colleagues have set up an optical lattice of rubidium atoms held at a temperature close to absolute zero. In such a lattice, atoms are held in orderly proximity not by natural inter-atomic forces but by the forces exerted by an array of laser beams. These atoms, moreover, constitute a Bose-Einstein condensate (BEC), a special condition in which they all belong to a single quantum state.

This is appropriate since the atoms are meant to be a proxy for the electrons flowing through a solid superconductor. In some so-called high-temperature superconductors (HTSC), the electrons move in planes of copper and oxygen atoms. These HTSC materials work, however, only if a fillip of impurity atoms, such as barium or yttrium, is present. Theorists have not adequately explained why this bit of disorder in the underlying material should be necessary for attaining superconductivity.

The JQI experiment has tried to supply palpable data that can illuminate the issue of disorder. In solids, atoms are a fraction of a nanometer (billionth of a meter) apart. At JQI the atoms are about a micron (a millionth of a meter) apart. Actually, the JQI atom swarm consists of a 2-dimensional disk. “Disorder” in this disk consists not of impurity atoms but of “speckle.” When a laser beam strikes a rough surface, such as a cinderblock wall, it is scattered in a haphazard pattern. This visible speckle effect is what is used to slightly disorganize the otherwise perfect arrangement of Rb atoms in the JQI sample.

In superconductors, the slight disorder in the form of impurities ensures a very orderly “coherence” of the supercurrent. That is, the electrons moving through the solid flow as a single coordinated train of waves and retain their cohesiveness even in the midst of impurity atoms.

In the rubidium vapor, analogously, the slight disorder supplied by the speckle laser ensures that the Rb atoms retain their coordinated participation in the unified (BEC) quantum wave structure. But only up to a point. If too much disorder is added---if the speckle is too large---then the quantum coherence can go away. Probing this transition numerically was the object of the JQI experiment. The setup is illustrated in figure 1.

Figure 1: Two thin planes of cold atoms are held in an optical lattice by an array of laser beams. Still another laser beam, passed through a diffusing material, adds an element of disorder to the atoms in the form of a speckle pattern. [Image courtesy: Matthew Beeler]

And how do you know when you’ve gone too far with the disorder? How do you know that quantum coherence has been lost? By making coherence visible.

The JQI scientists cleverly pry their disk-shaped gas of atoms into two parallel sheets, looking like two thin crepes, one on top of the other. When all the laser beams are then turned off, the two planes collide like miniature galaxies. If the atoms are in a coherent state, the collision produces a crisp interference pattern, which shows up on a video screen as a series of high-contrast dark and light stripes.

If, however, the imposed disorder is too strong, so that coherence among the atoms is lost, the interference pattern is washed out. Figure 2 shows this effect at work. Frames b and c respectively show what happens when the degree of disorder is just right and when it is too much.
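A simple way to put a number on how “washed out” a pattern is (an assumed analysis sketch with synthetic profiles, not necessarily the authors' actual image-processing pipeline) is the fringe visibility V = (I_max - I_min)/(I_max + I_min) of a cut taken across the stripes:

```python
# Sketch: fringe visibility as a proxy for coherence.
# V near 1 -> crisp stripes (coherent); V near 0 -> washed out (coherence lost).
import numpy as np

def fringe_visibility(profile):
    """Visibility of a 1-D intensity profile taken across the stripes."""
    i_max, i_min = profile.max(), profile.min()
    return (i_max - i_min) / (i_max + i_min)

x = np.linspace(0.0, 10.0, 500)
coherent = 1.0 + 0.9 * np.cos(2 * np.pi * x)    # high-contrast fringes
disordered = 1.0 + 0.1 * np.cos(2 * np.pi * x)  # fringes nearly gone
print(fringe_visibility(coherent), fringe_visibility(disordered))  # ~0.9 vs ~0.1
```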

Figure 2: Interference patterns resulting when the two planes of atoms are allowed to collide. In (b) the amount of disorder is just right and the pattern is crisp. In (c) too much disorder has begun to wash out the pattern. In (a) the pattern is complicated by the presence of vortices among the atoms, which are hard to see in this image taken from the side. [Image courtesy: Matthew Beeler]

“Disorder figures in about half of all condensed matter physics,” says Steve Rolston. “What we’re doing is mimicking the movement of electrons in 3-dimensional solids using cold atoms in a 2-dimensional gas. Since there don’t seem to be any theoretical predictions to help us understand what we’re seeing we’ve moved into new experimental territory.”

Where does the JQI work go next? Well, in figure 2a you can see that the interference pattern is still visible but somewhat garbled. That arises from the fact that for this amount of disorder several vortices---miniature whirlpools of atoms---have sprouted within the gas. Exactly such vortices emerge among the electrons in superconductors, limiting their ability to maintain a coherent state.

Another of the JQI scientists, Matthew Beeler, underscores the importance of understanding the transition from the coherent to the incoherent state owing to the fluctuations introduced by disorder: “This paper is the first direct observation of disorder causing these phase fluctuations. To the extent that our system of cold atoms is like a HTSC superconductor, this is a direct connection between disorder and a mechanism which drives the system from superconductor to insulator.”

Reference:
[1] M. C. Beeler, M. E. W. Reed, T. Hong, and S. L. Rolston, "Disorder-driven loss of phase coherence in a quasi-2D cold atom system", New Journal of Physics, 14, 073024 (2012). doi:10.1088/1367-2630/14/7/073024. Abstract. Full Article.



Sunday, May 27, 2012

Free Randomness Can Be Amplified

Author: Roger Colbeck

Affiliation: Institute for Theoretical Physics, ETH Zurich, Switzerland

Are there fundamentally random processes in Nature? 150 years ago, scientists with a classical world-view would have likely answered in the negative: the laws of classical mechanics state that if one knew all the physical properties of every particle at some particular instant in time, then, in principle, the future evolution could be calculated. However, from the beginning of the 20th Century, this world-view started being challenged as quantum theory was born. This profoundly different theory asserts that the outcomes of measurements are fundamentally random. So, when a single photon is emitted from a source and sent through a half-silvered mirror, for example, all quantum theory tells us is that with probability 1/2 the photon is reflected, and with probability 1/2 it passes through, a distribution that can be confirmed statistically.

But is that the whole story? How can we be sure that the destiny of the photon (whether it will pass the mirror, or be reflected) wasn't already determined in such a way that observations on many photons nevertheless give the same statistics as if random?

In 1964, the work of John Bell [1] shed some light on the question of whether there could be a deeper explanation underlying the quantum statistics. By studying an extended experiment, involving two entangled photons sent towards two half-silvered mirrors, he showed that if the source determined the behaviour of the photons, then the resulting correlations could not be those predicted by quantum mechanics. Experiments later confirmed the quantum predictions, e.g. [2].
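For readers who want the inequality itself, the argument is usually quoted today in the CHSH form (a standard textbook rendering rather than Bell's original 1964 expression): if the photons' behaviour were fixed in advance at the source, the correlations E between the outcomes for mirror settings a, a' on one side and b, b' on the other would have to satisfy

\[
\bigl|E(a,b) + E(a,b') + E(a',b) - E(a',b')\bigr| \;\le\; 2,
\]

whereas quantum mechanics predicts, and experiments such as [2] confirm, values as large as \(2\sqrt{2}\).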

It is tempting to use the above to argue for the existence of fundamentally random processes, but there is a catch. Bell's argument relies on different configurations of the half-silvered mirrors, and he assumes that these are chosen at random. Thus, for the purpose of arguing that there are truly random processes (note that this wasn't Bell's aim), the argument is circular. If we can randomly choose the configurations, then the outcomes are random, as was previously stressed by Conway and Kochen [3].

In our paper [4], we show that if we have access only to some weak randomness to choose the configurations, then the outcomes of certain quantum experiments are nevertheless completely random. To capture the idea of weak randomness, imagine that you write down a string of 0s and 1s, choosing them as randomly as you can. However, before you write each bit, a sophisticated machine is asked to guess your next choice. If your choices are only weakly random, then the machine can guess each one with some probability greater than 1/2. What we show in our paper is that, provided the machine's guessing probability is not too high, it is possible to make random bits about which the machine knows nothing: this is randomness amplification.
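A toy picture of such a weak source (purely illustrative Python with an arbitrary bias parameter; it simplifies the Santha-Vazirani model of Ref. [5] by ignoring any dependence on earlier bits) looks like this:

```python
# Sketch: a weakly random bit source. Each bit is 0 or 1, but its bias may drift
# anywhere within eps of 1/2, so a machine that knows the bias can guess each bit
# with probability up to 1/2 + eps.
import random

def weak_bits(n, eps=0.2, rng=random.Random(1)):
    bits, guesses = [], []
    for _ in range(n):
        p_one = 0.5 + rng.uniform(-eps, eps)      # adversarially tilted bias
        bits.append(1 if rng.random() < p_one else 0)
        guesses.append(1 if p_one >= 0.5 else 0)  # the machine's best guess
    return bits, guesses

bits, guesses = weak_bits(10000)
hit_rate = sum(b == g for b, g in zip(bits, guesses)) / len(bits)
print(hit_rate)  # noticeably above 0.5: the bits are only weakly random
```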

[Image credit: T. Neupert]: Illustration of randomness amplification: A moving die about to strike an assembled tower. The tower falls yielding random numbers on several dice, thus amplifying the randomness of the original. In classical physics, the apparently random way the dice fall is in principle predictable, while, in quantum theory, there are ways to make this amplification fundamental.

It is interesting to note that this is not possible within classical mechanics. There, given a source of weak randomness, there is no protocol that can improve the quality of the randomness [5]. Thus, the task we present gives a new example of the improved power of using quantum systems over classical ones for information processing.

We conjecture that our result can be extended so that, provided the machine cannot guess the choices perfectly, it is possible to generate perfectly random bits. This would provide the strongest possible evidence for the existence of random processes in Nature: it would show that either the world is completely deterministic, or there are perfectly random processes.

This work also has applications in virtually any scenario that relies on randomness. For example, a casino that doesn't completely trust its random number generators could in principle use a protocol of the type we suggest to improve the quality of the randomness.

References:
[1] Bell, J. "On the Einstein Podolsky Rosen Paradox". Physics, 1, 195--200 (1964). Full Article.
[2] Aspect, A., Grangier, P. & Roger, G. "Experimental Realization of Einstein-Podolsky-Rosen-Bohm Gedankenexperiment: A New Violation of Bell's Inequalities". Physical Review Letters, 49, 91--94 (1982). Abstract.
[3] Conway, J. & Kochen, S. "The free will theorem". Foundations of Physics, 36, 1441--1473 (2006). Abstract.
[4] Colbeck, R. & Renner, R. "Free randomness can be amplified". Nature Physics, doi:10.1038/nphys2300 (published online May 6, 2012). Abstract.
[5] Santha, M. & Vazirani, U. V. "Generating Quasi-Random Sequences From Slightly-Random Sources". In Proceedings of the 25th IEEE Symposium on Foundations of Computer Science (FOCS-84), 434--440 (1984). Abstract.



Sunday, May 20, 2012

Signatures of Majorana Fermions in Hybrid Superconductor-Semiconductor Nanowire Devices








Authors: Vincent Mourik1, Kun Zuo1, Sergey Frolov1, Sébastien Plissard2, Erik Bakkers1,2, Leo Kouwenhoven1

Affiliation:
1Kavli Institute of Nanoscience, Delft University of Technology, Netherlands.
2Dept of Applied Physics, Eindhoven University of Technology, Netherlands.

Particle Predictors:
Paul Dirac was the very first particle predictor. In 1927, Dirac developed a formula that linked two new theories: Einstein’s special theory of relativity and quantum mechanics. Dirac’s equation, however, had several solutions. The first solution described the familiar electron: a particle with a negative charge holding a certain amount of positive energy. Another solution constituted its exact opposite: a positively charged particle holding a certain amount of negative energy. Rather than ignoring the contradiction raised by this additional solution, Dirac surmised that there must be a particle in nature with a positive electrical charge and negative energy [1]. Such a particle would exactly mirror the properties of an electron. Several years later, this particle was indeed found, and was named the positron. Together, the electron and the positron form a particle-antiparticle pair.

A pure genius, Paul Dirac was utterly convinced of the veracity of his formula. If his equation offered a certain solution, then a corresponding particle simply had to exist in nature. Since that time, numerous other particles have been predicted and identified this way. For example, the ongoing search for the Higgs boson is set up just like that, based on a prediction from the Standard Model.

Ettore Majorana was a physicist and a contemporary of Dirac. Majorana had an enigmatic biography that formed the topic of many books and films in Italy. At some point in the 1930s, Majorana was playing around with Dirac’s equation and after slightly modifying it he found a new solution: a particle that is identical to its antiparticle. And something can only be identical to its counterpart if it has properties that are all zero. Ettore Majorana, too, had a firm belief in formulas and in 1937 he published a paper [2] predicting his new particle, which has since become known as the Majorana fermion.
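In the operator language that will reappear below for the condensed-matter version, this "identical to its own antiparticle" property has a compact standard form (textbook notation, not taken from Majorana's paper):

\[
\gamma^{\dagger} = \gamma, \qquad \gamma^{2} = 1,
\]

so creating such a particle is the same operation as annihilating it; conversely, an ordinary fermion operator can always be split into two Majorana operators, \(c = \tfrac{1}{2}(\gamma_{1} + i\gamma_{2})\).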

For decades the Majorana particle received little attention, but in the 1970s the search began afresh. Using large accelerators and detectors, scientists started hunting for neutrino particles with Majorana properties. Indeed, these elementary Majorana particles might even solve the mystery of dark matter that fills our Universe. So far, the elementary Majorana particles have remained elusive, but this important quest is still being pursued by CERN in Geneva.

Particle Creators:

In addition to elementary particles, composite or collective particles (see box) also exist in the world of condensed matter physics. We know of heat particles (phonons), electron density waves (plasmons), magnetic waves (magnons) and a long list of other collective particles. These collective particles are particularly convenient because they make the physics of materials a lot simpler. Materials hold a distinct place in physics because by combining materials we can create objects that did not exist before. Technology, for instance, abounds with remarkable material combinations, such as silicon and silicon oxide forming the backbone of electronics. But material combinations can also be used in fundamental physics to create something new. This prompted a number of theoretical physicists to reflect on whether we could combine materials in such a way that the collective particles inside them would acquire the properties of Majorana fermions.

The one-dimensional lattice proposed by Alexei Kitaev in 2001 was still highly mathematical and abstract [3]. A number of proposals then followed based on (p-wave superconducting) materials that did not yet exist. In 2008, Liang Fu and Charles Kane’s theory [4] was the first to be based on existing materials, but was still difficult to put into practice. The year 2010 saw the publication, in Physical Review Letters, of two similar theories by two groups of theorists working independently of each other, which for the first time looked feasible in practice. One of the publications [5] came from theorists at the University of Maryland (Roman Lutchyn, Jay Sau and Sankar Das Sarma); the other [6] was a collaborative effort between theorists at the Weizmann Institute in Israel, the California Institute of Technology in the USA and the Free University of Berlin in Germany (Yuval Oreg, Gil Refael and Felix von Oppen). The importance of these theoretical developments was that they shifted the focus from what is found in nature to the artificial creation of Majorana particles.
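For orientation, Kitaev's toy model [3] is a chain of spinless fermions with p-wave pairing, usually written as (standard form, quoted here only for context):

\[
H = -\mu \sum_{j} c_{j}^{\dagger} c_{j}
    - \sum_{j} \left( t\, c_{j}^{\dagger} c_{j+1} + \Delta\, c_{j} c_{j+1} + \mathrm{h.c.} \right),
\]

which, for \(\Delta \neq 0\) and \(|\mu| < 2t\), leaves one unpaired Majorana operator at each end of the chain. The 2010 proposals [5, 6] showed how to engineer an equivalent system out of real materials.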

Prior knowledge: there are particles and then there are particles…

If you blow into your hand, what you will primarily feel are oxygen and nitrogen molecules. Those molecules are minute, subnanometer-scale particles that are composed, in this case, of two atoms each. In turn, each atom is made up of an atomic nucleus encircled by electrons. The electrons cannot be divided into smaller particles - they are ‘elementary’ particles. However, the protons and neutrons inside the nucleus can be shattered to create even smaller particles. This shattering is done in accelerators such as the one at CERN in Geneva, where the search for the Higgs particle continues unabated. Other popular particles are the neutrinos (which for a short period were believed to travel even faster than light) and the Majorana fermions. These Majorana fermions have not yet been found at CERN. The Majorana fermions may well be the key to explaining the dark matter mystery. In the universe, there is five times as much dark matter as ordinary matter, and so Majorana fermions could be the most widespread particles in the universe.

CERN is engaged in the study of fundamental particles. Each of these particles is smaller than the smallest atom, hydrogen. Our world of matter is based on atoms and on clusters of atoms that form molecules. The glue that binds these atoms into molecules is described by quantum mechanics. Our bodies, for instance, are chemical factories in which atoms are stuck together with quantum glue. Apart from complex biological materials, there are also crystals, in which the same atoms are frequently stacked in a regular grid. Even the smallest materials contain large numbers of atoms. For example, a nanowire with a diameter of 100 nm and a length of 1000 nm (1000 nm = 1 micrometer) already contains a few hundred million atoms.

Next to fundamental particles and atoms there are also collective particles. The ‘wave’ in a stadium is a good example. The ‘wave’ is simply a group of spectators jumping up and down to create a wave. If we wanted to describe this wave in mathematical terms, we might do that by including everyone in a large formula. Then again, we could also approach it more simply by forgetting about all those individuals and only describe their collective behavior, that is, the wave. And for simplicity’s sake we could call the ‘wave’ a particle, in this case, a collective particle. This reduction to collective particles simplifies matters enormously and is often highly successful. An example: heat in a material is not described in terms of a bunch of vibrating atoms but, far more easily, as heat particles that are known as phonons.

You may think that ‘collective particles’ is a rather imprecise way of describing what actually happens. This may be true of the wave but phonons, for example, can in fact provide us with a very exact, realistic description. What is perhaps the most surprising fact is that collective particles can actually behave in accordance with the laws of quantum mechanics. A phonon can find itself in the superposition of both hot and cold. Such a quantum superposition may sound absurd enough for elementary particles, but is really stretching our imagination where collective particles are concerned.

The Majorana fermions in crystals are not only interesting from a fundamental viewpoint, but also have unique properties that could be used to build a quantum computer. Fields Medalist Michael Freedman works at Microsoft and has been carrying out active research into topological quantum computers with a team of scientists since 2005. Such a computer would work by moving Majorana particles around each other to form space-time braids.

The proposals put forward by Lutchyn et al [5] and Oreg et al [6] are both based on bringing semiconducting nanowires into contact with a superconducting material. We had already successfully accomplished this combination in Delft, which resulted in publications in Science (2005) and Nature (2006). Combining these specific materials suddenly made us the experimental specialists in the search for Majorana fermions.

Note that the Majorana quest had already been described at an early stage in the journal 'Science' [7].

Majorana in Delft:

How do you create a Majorana fermion? Based on the condition that the particle is identical to its antiparticle, you can do some reverse engineering. It cannot, for instance, have an electrical charge, nor can it have energy or spin. The theoretical proposals argue that those properties are obtained by combining a superconductor with a special semiconductor that has strong spin-orbit coupling. This semiconductor should take the form of a one-dimensional nanowire. If a magnetic field is also applied, the Majorana fermion should appear at low temperatures, just above absolute zero. We combined these materials on a microchip. For the semiconductor we developed InSb (indium antimonide) nanowires; InSb has strong spin-orbit coupling. We used a Nb alloy as the superconductor; this material retains its superconducting properties even in the presence of an external magnetic field. For this material we were granted permission to use the technology available in Teun Klapwijk’s group in Delft. Using nanotechnology, we produced an electronic chip that, admittedly, looks rather messy (top right). Zooming down to the sub-micrometre scale, we can see the nanowire and the electrical contacts (below right).

In this device, the superconductor is larger than the semiconductor. The diameter of the nanowire is so small that it effectively becomes a one-dimensional conductor. A portion of the superconductor covers the nanowire, which causes the superconductivity to leak into the semiconductor, effectively creating a one-dimensional superconductor. Such one-dimensional superconductors do not exist in nature but can be induced in this way. The strong spin-orbit coupling in the InSb nanowire makes this one-dimensional superconductor particularly unusual: it has so-called p-wave symmetry, which again has not been found in nature. This p-wave superconductor extends across the entire section where the nanowire is in contact with the superconductor. At the two points where the p-wave superconductor ends, two Majorana fermions appear, one at each end.
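In the proposals of Refs. [5, 6], the condition for reaching this p-wave regime, and hence for the end Majoranas, is that the Zeeman energy supplied by the magnetic field must exceed a combination of the induced superconducting gap \(\Delta\) and the chemical potential \(\mu\) of the wire (standard form from those papers; the symbols are defined here rather than in the text above):

\[
E_{Z} = \tfrac{1}{2}\, g \mu_{B} B \;>\; \sqrt{\Delta^{2} + \mu^{2}},
\]

which is why the superconductor, the strong spin-orbit wire and the applied magnetic field all have to be present at the same time.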

Image: The microchip used, with three different Majorana devices. This chip is cooled down to almost absolute zero (-273 degrees Celsius). The electrical wires are connected to measuring equipment at room temperature.
 
We can detect the Majorana fermions in the electrical conductance. From the gold contact we send electrons into the nanowire, towards the lower Majorana fermion. Only when we send in electrons with precisely zero energy can we measure a current. If we apply a voltage to give the electrons more energy, they are reflected at the p-wave superconductor and we measure zero conductance. The presence of the Majorana fermion in our system is therefore visible as a conductance peak at a voltage that is precisely zero.
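For an ideal, isolated Majorana end state at zero temperature, theory even fixes the height of this zero-bias peak (a benchmark from the general theory of resonant Andreev reflection, quoted here for context rather than as a measured value):

\[
G(V = 0) = \frac{2e^{2}}{h} \approx 77\ \mu\mathrm{S},
\]

while at finite temperature, and with imperfect coupling between the contact and the Majorana state, the observed peak is expected to be lower and broader.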

In the 'Science' publication [8] we also included various control experiments, which demonstrate that every single ingredient of the original theory is essential for this observation. The results can only be interpreted if we assume the presence of Majorana fermions. The article was published online on 12 April 2012 in 'Science Express' [8].

Image: The nanowire, shown vertically in this photo, is lying flat on a substrate. Hidden in the substrate are different gate electrodes (the horizontal ‘stripes’ below the nanowire and the contacts), which can change the conductivity of the nanowire. The lower electrical contact to the nanowire is made from gold, a normal conductor. The contact on the top covers half of the nanowire; this is the superconductor. The total length of the nanowire is three micrometres. The anticipated positions of the two Majorana fermions are indicated with red stars.

We have since been carrying out new experiments. As the title of our article, ‘Signatures of …’, suggests, we also want to demonstrate other unique properties of Majorana fermions. And our Majorana fermions are literally one of a kind. Nature has two types of particles: fermions (such as electrons, positrons, neutrons, etc.) and bosons (photons, Higgs particles, phonons, etc.). Our Majorana particles are expected to have properties different from those of ordinary fermions and bosons. In terms of physics, their behaviour is described by non-Abelian statistics. If we can demonstrate these statistics in our new experiments, we will add a completely new chapter to the book of physics. This new round of experiments is based on a highly theoretical approach using new concepts that are not yet fully understood. To translate these abstract concepts into experiments we are working with Carlo Beenakker’s theory group in Leiden. The non-Abelian statistics are also what make Majorana particles useful for a topological quantum computer.

References:
[1] P. A. M. Dirac, "The Quantum Theory of the Electron", Proceedings of the Royal Society of London, Series A, 117, 610–624 (1928). Full Article.
[2] Ettore Majorana, "Teoria simmetrica dell’elettrone e del positrone" (Symmetric theory of the electron and the positron), Il Nuovo Cimento, 14, 171 (1937). Abstract.
[3] A. Yu. Kitaev, "Unpaired Majorana fermions in quantum wires", Physics-Uspekhi, 44, 131 (2001). Full Article.
[4] Liang Fu and Charles Kane, "Superconducting Proximity Effect and Majorana Fermions at the Surface of a Topological Insulator", Physical Review Letters, 100, 096407 (2008). Abstract.
[5] Roman M. Lutchyn, Jay D. Sau and Sankar Das Sarma, "Majorana Fermions and a Topological Phase Transition in Semiconductor-Superconductor Heterostructures", Physical Review Letters, 105, 077001 (2010). Abstract.
[6] Yuval Oreg, Gil Refael, and Felix von Oppen, "Helical Liquids and Majorana Bound States in Quantum Wires", Physical Review Letters, 105, 177002 (2010). Abstract.
[7] Robert F. Service, "Search for Majorana Fermions Nearing Success at Last?", Science, 332, 193 (2011). Abstract.
[8] V. Mourik, K. Zuo, S. M. Frolov, S. R. Plissard, E. P. A. M. Bakkers, and L. P. Kouwenhoven, "Signatures of Majorana fermions in hybrid superconductor-semiconductor nanowire devices", Science Express, DOI: 10.1126/science.1222360 (published online April 12, 2012). Abstract.
