
2Physics

2Physics Quote:
"Many of the molecules found by ROSINA DFMS in the coma of comet 67P are compatible with the idea that comets delivered key molecules for prebiotic chemistry throughout the solar system and in particular to the early Earth increasing drastically the concentration of life-related chemicals by impact on a closed water body. The fact that glycine was most probably formed on dust grains in the presolar stage also makes these molecules somehow universal, which means that what happened in the solar system could probably happen elsewhere in the Universe."
-- Kathrin Altwegg and the ROSINA Team

(Read Full Article: "Glycine, an Amino Acid and Other Prebiotic Molecules in Comet 67P/Churyumov-Gerasimenko")

Sunday, December 25, 2011

Entang-bling

Ian Walmsley (left) and Joshua Nunn (right), contemplating the universe.

Authors: Ian Walmsley and Joshua Nunn

Affiliation:
Clarendon Laboratory, Department of Physics, University of Oxford, UK.


More than 70 years ago, Erwin Schrödinger pointed out one (of many) striking features of the quantum mechanics he had recently invented: the possibility it allowed for stuff to do things that no one had ever actually seen in real life -- like cats being both dead and alive at the same time. This is one of what could politely be called the ‘interpretational difficulties’ of quantum physics. Familiar everyday objects behave in familiar everyday ways - they don’t engage in the sorts of nonsensical behaviour that Schrödinger’s equation predicts. But, as any physicist will tell you, quantum mechanics is basically not that complicated. It’s just that it takes familiar concepts, like position and direction, and makes us think about them in totally radical ways, so that in the end the results don’t make any sense at all!

Past 2Physics article by Joshua Nunn:
August 07, 2011: "Building a Quantum Internet"


Michael Sprague (top), KC Lee (left) and XianMin Jin (right), in the lab with the diamonds.

Of course, in the early 20th century people were used to the idea that science was coming up with crazy new notions that dramatically altered our conception of things — like the notion of time in Einstein’s special theory of relativity. But Einstein’s theories deal with objects that can be seen, on size and time scales that are familiar. The light from stars can be observed with a simple telescope; the timing of GPS satellites is a tangible technical problem. For this reason relativity has entered the scientific orthodoxy, which is why the recent neutrino speeding anomaly caused such a stir [1].

That’s what worried Schrödinger. In principle his theory also dealt with tangible objects — or at least there was no element in it that indicated otherwise. Yet it seemed at first as if quantum mechanics only gave good predictions for objects that are too small to be seen directly. It therefore took on the flavour of a story, in which the actors — electrons, atoms and photons — are convenient fictions we can use to explain what we see, but which are no more real than the characters in a novel.

Quantum theory tells us that in fact these characters can be in two places at once, that they are impossible to pin down exactly, and that they therefore don’t really give well-defined answers to questions like "are you red or blue?". We’re used to the idea in the ordinary world that an object, say a ball, will have a definite property like color. We may not know whether a specific ball is red or blue, but we regard it as having one of these two colors independent of whether we know which particular color it is. Quantum mechanics says, well, no, it’s not possible to be sure of that. In the quantum world, balls can be both red and blue at the same time.

But it is not the particles acting on their own that give rise to the deepest mysteries — it is when they get together that the fun really starts. For instance, it becomes possible to say that the color of one ball is well defined only in relation to a second ball. So if one is red, then the other is certainly blue -- but neither is definitely red or blue on its own. And you can actually test this proposition with microscopic particles — like photons ("particles" of light). This is the murky world of "entanglement", in which pairs of particles are apparently connected across the universe as if by invisible filaments.

You can think about this in terms of what you know about light. Consider a beamsplitter. This is a common optical device: essentially a half-silvered mirror which passes half the incident light and reflects the other half, as shown in Fig. 1. When a single photon encounters a beamsplitter it cannot split itself in two, so it must go one way or the other. Or does it? According to quantum physics it can go both one way and the other. In fact, the beamsplitter transforms the single photon into an entangled state [2]. If we measure whether the photon is transmitted or reflected, we find that each outcome occurs 50% of the time. So this measurement alone does not help us distinguish between the entangled state and a state in which the input photon definitely goes one way or the other at random.

But consider the time reversal of this situation. Now we put a single photon into the back side of the beamsplitter. It too could be reflected or transmitted with 50% probability -- except if the photon is in an entangled state of the two input ports. Then, by reversing the argument above, we can see that it is definitely passed through the beamsplitter. Thus, by looking at how often a single input photon is passed by the beamsplitter, we can tell whether or not it was in an entangled state at the input. This kind of magic realism makes physicists (or at least, philosophers of physics) uncomfortable, but the edifice of science survives with such strangeness at its core because quantum effects are confined to the abstract domain of the microscopic, where human experience has no purchase and there can be no direct conflict with our intuitions.
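
To make the beamsplitter argument concrete, here is a minimal numerical sketch (our own illustration, not code from the experiment). It works in the one-photon subspace spanned by "photon in mode a" and "photon in mode b", and assumes one common choice of beamsplitter phase convention.

```python
import numpy as np

# 50/50 beamsplitter in the one-photon subspace {|1,0>, |0,1>}
# (one common phase convention; others differ only by phases).
BS = np.array([[1, 1],
               [1, -1]]) / np.sqrt(2)

# Forward direction: a single photon enters one input port.
photon_in_a = np.array([1.0, 0.0])
out = BS @ photon_in_a
print("P(transmitted), P(reflected):", np.abs(out) ** 2)     # [0.5, 0.5] -- an entangled superposition

# Reverse direction: the entangled state (|1,0> + |0,1>)/sqrt(2) enters from the back.
entangled = np.array([1.0, 1.0]) / np.sqrt(2)
print("Output probabilities:", np.abs(BS @ entangled) ** 2)  # [1.0, 0.0] -- always the same port

# By contrast, a photon that definitely took one path or the other (a 50/50 mixture)
# exits either port with equal probability, so the reversed beamsplitter can
# distinguish the entangled state from the merely random one.
```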

Figure 1: A single photon (filled circle) cannot divide into two when it hits a beam splitter. It must either pass through, or be reflected. According to quantum mechanics, both of these possibilities occur, producing an entangled state, in which a single photon is shared between the two beams after the beam splitter. Running this process in reverse (i.e. from right to left) provides a way to detect entanglement, since only an entangled state will always produce a single photon in the same place on the left.

There is now a strong tradition of research which seeks to bring us face-to-face with our Frankenstein theory by confirming the predictions of quantum mechanics on human scales. The aim is to demonstrate quantum effects such as entanglement with increasingly large objects, containing more and more particles. Although many areas of physics have matured sufficiently that the underlying components of the theory are ‘accepted’, it is known that quantum mechanics as it stands cannot be the final word, since its predictions conflict with our experience of the world. Either some new physics is needed, or better arguments are needed to explain how to reconcile quantum theory with the rest of the world.

While the philosophical debates smoulder on [3,4], experimentalists have set themselves the task of identifying the conditions under which quantum effects survive into the human realm, leading to behaviour we are not used to seeing in the familiar world of cats and elephants. Considerable progress has been made: large numbers of atoms have been entangled [5,6], and small pieces of solid material — but big enough to see with the naked eye — have been put into quantum superposition states where they were both vibrating and not vibrating at the same time [7].

These breakthroughs showed that quantum effects don’t need to be confined to small numbers of particles, or to particles without mass like photons. But so far, extremely specialised laboratory conditions have been required to observe these effects: very low temperatures just above absolute zero and high vacuum, with no air and no extraneous electric or magnetic fields. The objects were highly delicate composite devices which would not be found in nature, and careful preparation was required in order to keep them isolated from the deleterious effects of the environment.

We recognised that some materials have properties, like vibrations, that naturally lend themselves to realising these conditions in an everyday laboratory setting. These vibrations require a lot of energy to get going, so that ordinary environments at regular temperatures do not excite them. They are by nature in relatively pure quantum states, with no vibrational excitation at all. They may, however, be strongly coupled to their environment in the sense that once excited they quickly decay, so that quantum effects might be present only if we could be quick enough to observe them before they became overwhelmed.

We therefore decided to carry out an ‘easy experiment’ (though only Oxford physicists could be silly enough to think that any experiment is easy): to set one of these vibrations going using a very short light pulse from a laser, then to "watch" it by means of a second short laser pulse acting as a probe of the vibrational motion. We realised that diamond was a naturally-occurring transparent material that was so hard that it could vibrate at a particular, very high-pitched frequency, which could be easily identified in a measurement. The vibrations in diamond last for just 7 picoseconds (1 ps is one thousandth of a nanosecond), so we had to use an ultrafast laser system producing laser pulses shorter than 100 femtoseconds (1 fs is one thousandth of a picosecond!).

We took an ordinary, common-or-garden diamond and set it vibrating using a laser pulse. When the laser pulse hits the diamond, there is a small probability that just one photon from the pulse gives up some energy to the diamond and sets it vibrating. By conservation of energy, this means that the photon leaves the diamond with reduced energy, and thus a longer wavelength than the original laser photon. By detecting this “red” photon, we could know that a single vibrational quantum (known technically as an optical phonon) had been created in the diamond crystal. We found that even at room temperature and pressure, in a lab with air and other vibrations and cups of tea, we could create this high-frequency vibration.
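
As a rough illustration of the energy bookkeeping (the numbers here are assumptions for the sake of the example, not the exact parameters of the experiment), one can work out how far a photon is red-shifted when it deposits one quantum of diamond's ~40 THz optical phonon:

```python
# Back-of-the-envelope Stokes / anti-Stokes shifts for diamond's 1332 cm^-1
# optical phonon, assuming an 800 nm pump wavelength purely for illustration.
C_CM_PER_S = 2.998e10          # speed of light in cm/s
PHONON_SHIFT_CM1 = 1332.0      # diamond Raman (optical phonon) shift
PUMP_NM = 800.0                # assumed pump wavelength

pump_cm1 = 1e7 / PUMP_NM                               # photon energy in wavenumbers (cm^-1)
stokes_nm = 1e7 / (pump_cm1 - PHONON_SHIFT_CM1)        # "red" photon that heralds a phonon
anti_stokes_nm = 1e7 / (pump_cm1 + PHONON_SHIFT_CM1)   # "blue" probe photon after picking up the phonon

print(f"phonon frequency ~ {PHONON_SHIFT_CM1 * C_CM_PER_S / 1e12:.0f} THz")
print(f"red (Stokes) photon       ~ {stokes_nm:.0f} nm")       # ~895 nm
print(f"blue (anti-Stokes) photon ~ {anti_stokes_nm:.0f} nm")  # ~723 nm
```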

We could then prove this by detecting the vibration with a second laser pulse, arriving after the first, but not so long after that the vibration had decayed away. The probe pulse detected the vibration by picking up energy from it, emerging with a shorter wavelength, and so blue-shifted in color. So we detected a “red” photon to signal that a phonon had been generated, and a “blue” photon to prove it. Using this approach, we showed that we could catch a glimpse of the phonon before it vanished [8]. This type of “create-detect” experiment is precisely what has been done with cold clouds of atoms to entangle them, so we thought we would try to do that too!

Figure 2: The happy couple. We took data with the lights off but otherwise the diamonds were in a totally ordinary environment. The lenses are there to focus the laser pulses and collect the photons emitted by the crystals.

In a second experiment [9], we set up two diamonds in ordinary little holders sitting near each other on a lab table (see Fig. 2). By hitting both diamonds with a laser pulse at the same time, we created a vibration in one of the crystals, but it was impossible, even in principle, to tell which crystal was vibrating. We did this by combining the beams from the two diamonds that went to the “red” photon detector on a beamsplitter, as shown in Fig. 3. When this detector fired, we knew that a single phonon had been generated, but we could not tell from which beam the red photon had arrived, and therefore in which diamond the phonon resided.

Quantum mechanics predicts that, if you don’t know this information, the right way to describe the diamonds is as an entangled quantum state, with one vibration shared between them. We then verified that the diamonds were entangled by combining the “blue” light from the diamonds at a beamsplitter (see Fig.3). We could detect first that each pulse only contained a single “blue” photon, and second that it was always passed by the beamsplitter, rather than reflected. This is only possible for a single photon if it is entering the beamsplitter in an entangled state, as argued previously, and thus was emitted from both diamonds! This means that the diamonds themselves were entangled, with a single vibration shared between both of them.
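
The role of the beamsplitter in this verification step can be seen in a toy density-matrix calculation (an illustration of the logic, not the analysis actually applied to the data): an entangled, shared excitation produces perfect interference of the blue photon, whereas a phonon that is definitely in one diamond or the other does not.

```python
import numpy as np

BS = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # 50/50 beamsplitter, same convention as before

psi_shared = np.array([1.0, 1.0]) / np.sqrt(2)   # blue photon emitted by an entangled, shared phonon
rho_entangled = np.outer(psi_shared, psi_shared)
rho_mixture = 0.5 * np.eye(2)                    # phonon definitely in one diamond, we just don't know which

for name, rho in [("entangled", rho_entangled), ("classical mixture", rho_mixture)]:
    rho_out = BS @ rho @ BS.conj().T             # blue-photon modes after the beamsplitter
    print(f"{name:17s} -> P(port 1), P(port 2):", np.real(np.diag(rho_out)))
# entangled         -> [1.0, 0.0]  (the blue photon always leaves by the same port)
# classical mixture -> [0.5, 0.5]  (no interference)
```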

These results show, for the first time, that large, easily visible, solid objects (indeed, diamonds are naturally occurring minerals: pieces of rock), sitting in ambient conditions at room temperature and pressure, clamped to a table-top, can be put in a quantum-entangled state. Furthermore, the entanglement was created with vibrations — the motion of the crystals as a whole.

Figure 3: Generating and detecting entanglement between diamonds using ultrashort laser pulses (green lines). (a) The first set of pulses produces a single red-shifted photon from one of the diamonds. After the beams are mixed on a beam splitter, it is impossible to tell which diamond the photon came from. This means there is one vibration shared between the two diamonds — they are entangled. (b) After a small delay, we verify the entanglement by sending in a second pair of pulses, producing a blue-shifted photon in an entangled state. When this entangled state hits the beam splitter, the blue photon always emerges from just one side of the experiment (thick blue line). As shown in Fig.1, this can only happen if the diamonds are entangled.

So the positions of the atoms were entangled. This is particularly unsettling because we have an intuitive sense for position that we would not have if we had entangled magnetic fields or photons. Our measurements are, we feel, one of the most visceral demonstrations to date that the rules of quantum mechanics apply to us all: electrons and elephants alike.

References
[1] OPERA Collaboration, "Measurement of the neutrino velocity with the OPERA detector in the CNGS beam". arXiv:1109.4897 (2011). Link.
[2] S.J. van Enk. "Single-particle entanglement". Physical Review A, 72(6):064306 (2005). Abstract.
[3] D. Wallace. "Decoherence and Ontology: or, How I Learned to Stop Worrying and Love FAPP", in "Many Worlds? Everett, Quantum Theory, and Reality", eds. S. Saunders, J. Barrett, A. Kent and D. Wallace (OUP, 2010). Link.
[4] R. Penrose. "Wavefunction collapse as a real gravitational effect". Mathematical Physics 2000 (edited by A Fokas, A Grigoryan, T Kibble, B Zegarlinski), pages 266–282, (World Scientific eBooks, 2000). Link.
[5] K. S. Choi, H. Deng, J. Laurat, and H. J. Kimble. "Mapping photonic entanglement into and out of a quantum memory". Nature, 452:67–71 (2008). Abstract.
[6] H. Krauter, C.A. Muschik, K. Jensen, W. Wasilewski, J.M. Petersen, J.I. Cirac, and E.S. Polzik. "Entanglement generated by dissipation and steady state entanglement of two macroscopic objects". Physical Review Letters, 107(8):80503 (2011). Abstract.
[7] Adrian Cho. Faintest thrum heralds quantum machines. Science, 327(5965):516–518 (2010). Abstract.
[8] K. C. Lee, B. J. Sussman, M. R. Sprague, P. Michelberger, K. F. Reim, J. Nunn, N. K. Langford, P. J. Bustard, D. Jaksch, and I. A. Walmsley. "Macroscopic nonclassical states and terahertz quantum processing in room-temperature diamond", Nature Photonics, 6, 41-44 (2011). Abstract.
[9] K. C. Lee, M. R. Sprague, B. J. Sussman, J. Nunn, N. K. Langford, X. M. Jin, T. Champion, P. Michelberger, K. F. Reim, D. England, D. Jaksch, and I. A. Walmsley. "Entangling macroscopic diamonds at room temperature". Science, 334(6060):1253–1256 (2011). Abstract.



Sunday, December 11, 2011

“Dressing” Atoms with Lasers Allows High Angular Momentum Scattering: Could Reveal Ways to Observe Majorana Fermions

Ian Spielman (photo courtesy: Joint Quantum Institute, USA)

Scientists at the Joint Quantum Institute (JQI, a collaborative enterprise of the National Institute of Standards and Technology and the University of Maryland) have for the first time engineered and detected the presence of high-angular-momentum collisions between atoms at temperatures close to absolute zero. Previous experiments with ultracold atoms featured essentially head-on collisions. The JQI experiment, by contrast, is able to create more complicated collisions between atoms using only lasers, which dramatically influence their interactions in specific ways.

Such light-tweaked atoms can be used as proxies to study important phenomena that would be difficult or impossible to study in other contexts. Their most recent work, appearing in Science [1], demonstrates a new class of interactions thought to be important to the physics of superconductors that could be used for quantum computation.

Particle interactions are fundamental to physics, determining, for example, how magnetic materials and high-temperature superconductors work. Learning more about these interactions, or creating new “effective” interactions, will help scientists design materials with specific magnetic or superconducting properties. Because most materials are complicated systems, it is difficult to study or engineer the interactions between the constituent electrons. Researchers at JQI build physically analogous systems using supercooled atoms to learn more about how materials with these properties work.

The key to the JQI approach is to alter the atoms’ environment with laser light. They “dress” rubidium atoms by bathing them in a pair of laser beams, which force the atoms to have one of three discrete values of momentum. In the JQI experiment, the rubidium atoms form a Bose-Einstein condensate (BEC). BECs have been collided before, but the observation of high-angular-momentum scattering at such low energies is new.

The paper in 'Science Express' [1] involves a number of technical concepts that benefit from some explanation:

Collisions

One of the cardinal principles of quantum science is that matter must be simultaneously thought of as both particles and waves. When the temperature of a gas of atoms is lowered, the wavelike nature of the atom emerges, and the idea of position becomes fuzzier. While an atom at room temperature might spread over a hundredth of a nm, atoms at nano-kelvin temperatures have a typical wavelength of about 100 nm. This is much larger than the range of the force between atoms, only a few nm. Atoms generally collide only when they meet face to face.
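
A quick order-of-magnitude check of those numbers (a sketch; the experiment's exact temperatures are not quoted here) uses the thermal de Broglie wavelength:

```python
import math

H = 6.626e-34              # Planck constant, J*s
KB = 1.381e-23             # Boltzmann constant, J/K
M_RB87 = 87 * 1.661e-27    # mass of a rubidium-87 atom, kg

def thermal_de_broglie(T_kelvin):
    """lambda = h / sqrt(2*pi*m*kB*T)"""
    return H / math.sqrt(2 * math.pi * M_RB87 * KB * T_kelvin)

print(f"room temperature (300 K):  {thermal_de_broglie(300) * 1e9:.3f} nm")   # ~0.01 nm
print(f"ultracold (1 microkelvin): {thermal_de_broglie(1e-6) * 1e9:.0f} nm")  # ~200 nm, vastly larger
# The ~few-nm range of the interatomic force sits between these two extremes,
# which is why ultracold atoms stop behaving like tiny billiard balls.
```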

However, to study certain interesting quantum phenomena, such as searching for Majorana particles---hypothetical particles that might provide a robust means of encoding quantum information---it is desirable to engineer inter-atomic collisions beyond this low-energy, head-on type. That’s what the new JQI experiment does.

Partial Waves

Scattering experiments date back to the discovery of the atomic nucleus 100 years ago, when Ernest Rutherford shot alpha particles into a foil of gold. Since then other scattering experiments have revealed a wealth of detail about atoms and sub-atomic matter such as the quark substructure of protons.

A convenient way of picturing an interaction between two particles is to view their relative approach in terms of angular momentum. Quantized angular momentum usually refers to the motion of an electron inside an atom, but it necessarily pertains also to the scattering of the two particles, which can be thought of as parts of a single quantum object.

If the value of the relative angular momentum is zero, then the scattering is designated as “s-wave” scattering. If the pair of colliding particles has one unit of angular momentum, the scattering is called p-wave scattering. Still higher-order scattering scenarios are referred to by further letters: d-wave, f-wave, g-wave, and so on. This model is referred to as the partial waves view.

In high energy scattering, the kind at accelerators, these higher angular-momentum scattering scenarios are important and help to reveal important structure information about the particles. In atomic scattering at low temperatures, the s-wave interactions completely swamp the higher-order scattering modes. For ultralow-temperature s-wave scattering, when two atoms collide, they glance off each other (back to back) at any and all angles equally. This isotropic scattering doesn’t reveal much about the nature of the matter undergoing collision; it’s as if the colliding particles were hard spheres.
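
The different partial waves have characteristic angular shapes, which is what the experiment ultimately looks for in the scattered cloud. The short sketch below (illustrative unit amplitudes, not fitted values) evaluates the textbook single-partial-wave angular distribution, proportional to |P_l(cos θ)|²:

```python
import numpy as np
from scipy.special import eval_legendre

theta = np.linspace(0, np.pi, 181)          # scattering angle, 1-degree steps
for name, l in [("s-wave", 0), ("p-wave", 1), ("d-wave", 2), ("g-wave", 4)]:
    intensity = np.abs((2 * l + 1) * eval_legendre(l, np.cos(theta))) ** 2
    intensity /= intensity.max()
    # s-wave is flat (isotropic); higher partial waves produce the lobed,
    # orbital-like patterns seen in the scattering halo.
    print(f"{name}: relative intensity at 0, 90, 180 degrees =",
          np.round(intensity[[0, 90, 180]], 2))
```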

This has changed now. The JQI experiment is the first to create conditions in which d-wave and g-wave scattering modes could be seen in an ultracold experiment, in otherwise long-lived systems.

Quantum Collider

Ian Spielman and his colleagues at the National Institute of Standards and Technology (NIST) chill Rb atoms to nano-kelvin temperatures. The atoms, around half a million of them, have a density about a millionth that of air at room temperature. Radiofrequency radiation places each atom into a superposition of quantum spin states. Then two (optical light) lasers impart momentum (forward-going and backward-going motion) to the atoms.

Schematic drawing of collision between two BECs (the gray blobs) that have been “dressed” by laser light (brown arrows) and an additional magnetic field (green arrow). The fuzzy halo shows where atoms have been scattered. The non-uniform projection of the scattering halo on the graph beneath shows that some of the scattering has been d-wave and g-wave [image courtesy: JQI]

If this were a particle physics experiment, we would say that these BECs-in-motion were quantum beams, beams with energies that came in multiples of the energy kick delivered by the lasers. The NIST “collider” in Gaithersburg, Maryland is very different from the CERN collider in Geneva, Switzerland. In the NIST atom trap the particles have kinetic energies of a hundred pico-electron-volts rather than the trillion-electron-volt energies used at the Large Hadron Collider.

At JQI, atoms are prepared in their special momentum states, and the collisions begin. Outward-scattered atoms are detected after the BEC clouds are released from the trap. If the atoms hadn’t been dressed, the collisions would have been s-wave in nature and the scattered atoms would have been seen uniformly around the scattering zone.

The effect of the dressing is to screen the atoms from s-wave scattering, in a way analogous to what happens in some solid materials, where the interaction between two electrons is modified by the presence of trillions of other electrons nearby. In other words, the laser dressing effectively increased the range of the inter-atom force such that higher partial-wave scattering was possible, even at the lowest energies.

In the JQI experiment, the observed scattering patterns for atoms emerging from the collisions were proof that d-wave and g-wave scattering had taken place. “The way in which the density of scattered atoms is distributed on the shell reflects the partial waves,” said Ian Spielman. “A plot of scattered density vs. spherical polar angles would give the sort of patterns you are used to seeing for atomic orbitals. In our case, this is a sum of s-, p-, and d-waves.”

Simulating Solids Using Gases

Ultracold atomic physics experiments performed with vapors of atoms are excellent for investigating some of the strongly-interacting quantum phenomena usually considered in the context of condensed matter physics. These subjects include superconductivity, superfluids, the quantum Hall effect, and topological insulators, and some things that haven’t yet been observed, such as the “Majorana” fermions.

Several advantages come with studying these phenomena in the controlled environment of ultracold atoms. Scientists can easily manipulate the landscape in which the atoms reside using knobs that adjust laser power and frequency. For example, impurities that can plague real solids can be controlled and even removed, and (as in this new JQI experiment) the scattering of atoms can now, with the proper “dressing”, reveal higher-partial-wave effects. This is important because the exotic quantum effects mentioned above often manifest themselves under exactly these higher angular-momentum conditions.

“Our technique is a fundamentally new method for engineering interactions, and we expect this work will stimulate new directions of research and be of broad interest within the physics community, experimental and theoretical,” said Spielman. “We are modifying the very character of the interactions, and not just the strength, by light alone.”

On To Fermions

The JQI team, including Nobel Laureate William Phillips, is truly international, with scientists originating in the United Kingdom (lead author Ross Williams), Canada (Lindsay LeBlanc), Mexico (Karina Jiménez-García), and the US (Matthew Beeler, Abigail Perry, William Phillips and Ian Spielman).

The researchers will now switch from observing bosonic atoms (with a total spin value of 1) to fermionic atoms (those with half-integer spin). Combining the boson techniques demonstrated here with ultracold fermions offers considerable promise for creating systems which are predicted to support the mysterious Majorana fermions. “A lot of people are looking for the Majorana fermion,” says lead author and JQI postdoctoral fellow Ross Williams. “It would be great if our approach helped us to be the first.”

Reference
[1] R. A. Williams, L. J. LeBlanc, K. Jiménez-García, M. C. Beeler, A. R. Perry, W. D. Phillips, I. B. Spielman, "Synthetic partial waves in ultracold atomic collisions", Science Express (December 7, 2011). DOI: 10.1126/science.1212652. Abstract.



Sunday, November 27, 2011

A Simpler Approach to Making Plasmonic Materials

Peidong Yang [Photo by Roy Kaltschmidt, Lawrence Berkeley National Laboratory]

The question of how many polyhedral nanocrystals of silver can be packed into millimeter-sized supercrystals may not be burning on many lips, but the answer holds importance for one of today’s hottest new high-tech fields – plasmonics! Researchers with the U.S. Department of Energy (DOE)’s Lawrence Berkeley National Laboratory (Berkeley Lab) may have opened the door to a simpler approach for the fabrication of plasmonic materials by inducing polyhedral silver nanocrystals to self-assemble into three-dimensional supercrystals of the highest possible density.

Plasmonics is the phenomenon by which a beam of light is confined in ultra-cramped spaces, allowing it to be manipulated into doing things a beam of light in open space cannot. This phenomenon holds great promise for superfast computers, microscopes that can see nanoscale objects with visible light, and even the creation of invisibility carpets. A major challenge for developing plasmonic technology, however, is the difficulty of fabricating metamaterials with nano-sized interfaces between noble metals and dielectrics.

Peidong Yang, a chemist with Berkeley Lab’s Materials Sciences Division, led a study in which silver nanocrystals of a variety of polyhedral shapes self-assembled into exotic millimeter-sized superstructures through a simple sedimentation technique based on gravity. This first ever demonstration of forming such large-scale silver supercrystals through sedimentation is described in a paper in the journal Nature Materials titled “Self-assembly of uniform polyhedral silver nanocrystals into densest packings and exotic superlattices" [1].

Yang, who also holds appointments with the University of California Berkeley’s Chemistry Department and Department of Materials Science and Engineering, is the corresponding author. Co-authoring the Nature Materials paper with Yang were Joel Henzie, Michael Grünwald, Asaph Widmer-Cooper and Phillip Geissler, who also holds joint appointments with Berkeley Lab and UC Berkeley.

On the left are micrographs of supercrystals of silver polyhedral nanocrystals and on the right the corresponding diagrams of their densest known packings for (from top to bottom) cubes, truncated cubes and cuboctahedra. [Image courtesy of Berkeley Lab]

“We have shown through experiment and computer simulation that a range of highly uniform, nanoscale silver polyhedral crystals can self-assemble into structures that have been calculated to be the densest packings of these shapes,” Yang says. “In addition, in the case of octahedra, we showed that controlling polymer concentration allows us to tune between a well-known lattice packing structure and a novel packing structure that featured complex helical motifs.”

In the Nature Materials paper Yang and his co-authors describe a polyol synthesis technique that was used to generate silver nanocrystals in various shapes, including cubes, truncated cubes, cuboctahedra, truncated octahedra and octahedra over a range of sizes from 100 to 300 nanometers. These uniform polyhedral nanocrystals were then placed in solution where they assembled themselves into dense supercrystals some 25 square millimeters in size through gravitational sedimentation. While the assembly process could be carried out in bulk solution, having the assembly take place in the reservoirs of microarray channels provided Yang and his collaborators with precise control of the superlattice dimensions.

“In a typical experiment, a dilute solution of nanoparticles was loaded into a reservoir that was then tilted, causing the particles to gradually sediment and assemble at the bottom of the reservoir,” Yang says. “More concentrated solutions or higher angles of tilt caused the assemblies to form more quickly.”

The assemblies generated by this sedimentation procedure exhibited both translational and rotational order over exceptional length scales. In the cases of cubes, truncated octahedra and octahedra, the structures of the dense supercrystals corresponded precisely to their densest lattice packings. Although sedimentation-driven assembly is not new, Yang says this is the first time the technique has been used to make large-scale assemblies of highly uniform polyhedral particles.

Schematic representation of polyhedral shapes accessible using the Ag polyol synthesis. [Image courtesy of Berkeley Lab]

“The key factor in our experiments is particle shape, a feature we have found easier to control,” Yang says. “When compared with crystal structures of spherical particles, our dense packings of polyhedra are characterized by higher packing fractions, larger interfaces between particles, and different geometries of voids and gaps, which will determine the electrical and optical properties of these materials.”
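
For a sense of scale, the comparison below quotes approximate densest-packing fractions from the packing literature (reference values for illustration, not measurements from this paper); it is this jump from roughly 74% for spheres towards 100% for space-filling polyhedra that underlies the larger interfaces and smaller voids mentioned above.

```python
import math

# Approximate densest known packing fractions (literature reference values).
densest_packing = {
    "spheres (fcc)":               math.pi / (3 * math.sqrt(2)),  # ~0.740
    "cubes (space-filling)":       1.0,
    "truncated octahedra":         1.0,                           # also tile space completely
    "octahedra (densest lattice)": 18 / 19,                       # ~0.947
}

for shape, phi in densest_packing.items():
    print(f"{shape:30s} packing fraction ~ {phi:.3f}")
```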

The silver nanocrystals used by Yang and his colleagues are excellent plasmonic materials for surface-enhanced applications, such as sensing, nanophotonics and photocatalysis. Packing the nanocrystals into three-dimensional supercrystals allows them to be used as metamaterials with the unique optical properties that make plasmonic technology so intriguing.

“Our self-assembly process for these silver polyhedral nanocrystals may give us access to a wide range of interesting, scalable nanostructured materials with dimensions that are comparable to those of bulk materials,” Yang says.

Reference
[1] Joel Henzie, Michael Grünwald, Asaph Widmer-Cooper, Phillip L. Geissler, Peidong Yang, "Self-assembly of uniform polyhedral silver nanocrystals into densest packings and exotic superlattices", Nature Materials, Published online 20 November 2011. doi:10.1038/nmat3178. Abstract.


[This article is written by Lynn Yarris of Lawrence Berkeley National Laboratory]



Sunday, November 20, 2011

New Limit on Antimatter Imbalance

Physicists including Pieter Mumm (shown) used the emiT detector they built at NIST to investigate any potential statistical imbalance between the two natural types of neutron decay [image courtesy: emiT team]

Why there is stuff in the universe—more properly, why there is an imbalance between matter and antimatter—is one of the long-standing mysteries of cosmology. A team of researchers working at the National Institute of Standards and Technology (NIST) at Boulder, Colorado has just concluded a 10-year-long study of the fate of neutrons, the most sensitive such measurement ever made, in an attempt to resolve the question. The universe, they concede, has managed to keep its secret for the time being, but they’ve succeeded in significantly narrowing the number of possible answers.

Their work is published in a recent issue of Physical Review Letters [1]. The research team also includes scientists from the University of Washington, the University of Michigan, the University of California at Berkeley, Lawrence Berkeley National Laboratory, Tulane University, the University of Notre Dame, Hamilton College and the University of North Carolina at Chapel Hill. Funding was provided by the U.S. Department of Energy and the National Science Foundation.

Though the word itself evokes science fiction, antimatter is an ordinary—if highly uncommon—material that cosmologists believe once made up almost exactly half of the substance of the universe. When particles and their antiparticles come into contact, they instantly annihilate one another in a flash of light. Billions of years ago, most of the matter and all of the antimatter vanished in this fashion, leaving behind a tiny bit of matter awash in cosmic energy. What we see around us today, from stars to rocks to living things, is made up of that excess matter, which survived because a bit more of it existed.

“The question is, why was there an excess of one over the other in the first place?” says Pieter Mumm, a physicist at NIST’s Physical Measurements Lab. “There are lots of theories attempting to explain the imbalance, but there’s no experimental evidence to show that any of them can account for it. It’s a huge mystery on the level of asking why the universe is here. Accepted physics can’t explain it.”

An answer might be found by examining radioactivity in neutrons, which decay in two different ways that can be distinguished by a specially configured detector. Though all observations thus far have invariably shown these two ways occur with equal frequency in nature, finding a slight imbalance between the two would imply that nature favors conditions that would create a bit more matter than antimatter, resulting in the universe we recognize.

Two types of neutron decay produce a proton, an electron and an electron antineutrino, but eject them in different configurations. The experiments at NIST detected no imbalance, but the improved sensitivity could help place limits on competing theories about the matter-antimatter imbalance in the universe [Image credit: emiT team]

Mumm and his collaborators from several institutions used a detector at the NIST Center for Neutron Research to explore this aspect of neutron decay with greater sensitivity than was ever possible before. For the moment, the larger answer has eluded them—several years of observation and data analysis once again turned up no imbalance between the two decay paths. But the improved sensitivity of their approach means that they can severely limit some of the numerous theories about the universe’s matter-antimatter imbalance, and with future improvements to the detector, their approach may help constrain the possibilities far more dramatically.

“We have placed very tight constraints on what these theories can say,” Mumm says. “We have given theory something to work with. And if we can modify our detector successfully, we can envision limiting large classes of theories. It will help ensure the physics community avoids traveling down blind alleys.”

Reference
[1] H.P. Mumm, T.E. Chupp, R.L. Cooper, K.P. Coulter, S.J. Freedman, B.K. Fujikawa, A. García, G.L. Jones, J.S. Nico, A.K. Thompson, C.A. Trull, J.F. Wilkerson and F.E. Wietfeldt. "New limit on time-reversal violation in beta decay". Physical Review Letters, Vol. 107, p. 102301 (2011). DOI: 10.1103/PhysRevLett.107.102301. Abstract.


[We thank National Institute of Standards and Technology, Boulder, CO for materials used in this report]



Sunday, November 13, 2011

A New Scheme for Photonic Quantum Computing


[From Left to Right] Nathan K. Langford, Sven Ramelow and Robert Prevedel


Authors: Nathan K. Langford, Sven Ramelow and Robert Prevedel

Affiliation: Institute for Quantum Optics and Quantum Information (IQOQI), Austria;
Vienna Center for Quantum Science and Technology, Faculty of Physics, University of Vienna, Austria

Quantum computing is a fascinating and exciting example of how future technologies might exploit the laws of quantum physics [1]. Unlike a normal computer (“classical” computer), which stores information in 0s and 1s (called “bits”), a quantum computer stores information in quantum bits (“qubits”), states of quantum systems like atoms or photons. In principle, a quantum computer can solve the exact same problems as classical computers, so why do we think they could be so fantastic? It all comes down to speed – that is, in the context of computing, how many elementary computational steps are required to find an answer.

Past 2Physics articles by Robert Prevedel:
October 23, 2011: "Heisenberg’s Uncertainty Principle Revisited"
by Robert Prevedel
June 08, 2007: "Entanglement and One-Way Quantum Computing"
by Robert Prevedel and Anton Zeilinger


For many different types of problems, classical computers are already fast – meaning that reasonable problems can be solved in a reasonable time and the time required for a “larger” problem increases only slowly with the size of the problem (known as “scaling”). For example, once you know how to add 17 and 34, it’s not that much more difficult to add 1476 and 4238. For such problems, quantum computers can’t really do any better. Some types of problems, however, can be solved much faster on a quantum computer than on a classical computer. In fact, quantum computers can actually perform some tasks that are utterly impossible for any conceivable classical computer. The most famous example is Shor’s algorithm for finding the prime factors of a large integer [2], a problem which lies at the heart of many important computing tasks. It’s straightforward to work out that the factors of 21 are 3 and 7, but it’s already much harder to work out that the factors of 4897 are 59 and 83, and modern data protection (RSA encryption) relies on this problem becoming effectively impossible on a classical computer for really big numbers (say, 50 or 100 decimal digits long). But that would not be true for a quantum computer. It turns out that quantum computers could achieve an enormous speed-up, because of the unique quantum features of superposition and entanglement.
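
To see why factoring gets hard so quickly on a classical machine, here is a minimal sketch of naive trial division (an illustration of the classical baseline, not Shor’s algorithm): its running time grows roughly like the square root of the number, i.e. exponentially in the number of digits.

```python
def smallest_factor(n):
    """Return the smallest prime factor of n by trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime

for n in (21, 4897):
    p = smallest_factor(n)
    print(f"{n} = {p} x {n // p}")
# 21 = 3 x 7
# 4897 = 59 x 83
# For a 100-digit number this loop would need ~10^50 steps, which is why RSA is safe
# against classical trial division but not against a large quantum computer.
```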

Shor’s algorithm is a great example of the revolutionary potential for technologies built on quantum physics. The problem with such technologies, however, is that quantum systems are incredibly hard to control reliably. Classical computers are an astonishingly advanced technology: classical information can be stored almost indefinitely and the elementary computational gates which manipulate the information work every time. “It just works.” [3] By contrast, quantum information is incredibly fragile – you can destroy it literally by looking at it the wrong way! This places extremely stringent demands on what is required to control it and make it useable. In 1998, David DiVincenzo outlined a set of minimum sufficient criteria required to build a scaleable quantum computer [4] and since then experimentalists from all corners of physics have been working to fulfil them.

One of the most promising architectures for quantum information processing (QIP) and in particular quantum computing is to encode information in single photons. Because they generally interact very weakly with their environment, provided they are not absorbed accidentally, they can be used to store and transmit information without it being messed up. But this strength also creates its own problems, which arise when you want to create, manipulate or measure this information. Because a single photon doesn't interact much with atoms or other photons, it is very hard to do these things efficiently. And efficiency is the key to the whole idea of quantum computing, because the enormous quantum speed-up can only be achieved if the basic building blocks work efficiently. This is the biggest challenge for photonic QIP: current schemes for preparing single photons are inefficient and linear-optics gates are inherently probabilistic [5]. For example, pioneering work by Knill, Laflamme and Milburn showed how to overcome these problems in principle [6], but at an enormous cost in physical resources (gates, photons, etc.) which makes their approach almost completely infeasible in practice. The main goal of our approach is to make photons talk to each other efficiently.

In a recent paper [7], we introduce a new approach to photonic QIP – coherent photon conversion (CPC) – which is based on an enhanced form of nonlinear four-wave mixing and fulfils all of the DiVincenzo criteria. In photonic QIP experiments, nonlinear materials are commonly used to provide probabilistic sources of single-photon states. By shining in a strong laser beam, the nonlinear interaction will very occasionally cause a laser photon (usually around one in a billion) to split into two photons, making a very inefficient source of heralded photons. Instead, we looked at what would happen if we used a single photon instead of a strong laser beam. Surprisingly, we found that, if you can make the interaction strong enough, it should be possible to make the efficiency of photon splitting rise to 100% – something that is impossible with a laser input. In fact, we found that the same type of interaction can be used to provide a whole range of “deterministic” tools (tools that work with 100% efficiency), including entangling multiphoton gates, heralded multiphoton sources and efficient detectors – the basic building blocks required for scaleable quantum computing. Some of these are shown and briefly described in Fig. 1.
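
The essential difference from an ordinary down-conversion source can be captured in a toy two-level model (a sketch of the idea only, not the four-wave-mixing Hamiltonian of the paper): with a single photon at the input, the nonlinearity coherently swaps population between "one photon, no pair" and "no photon, one pair", so the conversion probability oscillates as sin²(gt) and reaches 100% at gt = π/2.

```python
import numpy as np

g = 1.0                               # effective coupling strength (arbitrary units)
sigma_x = np.array([[0, 1], [1, 0]])  # couples |1 photon; no pair> and |0 photons; 1 pair>

for t in np.linspace(0, np.pi, 5):
    # U = exp(-i*g*t*sigma_x), written out explicitly for a 2x2 Hamiltonian
    U = np.cos(g * t) * np.eye(2) - 1j * np.sin(g * t) * sigma_x
    p_pair = np.abs((U @ np.array([1.0, 0.0]))[1]) ** 2
    print(f"g*t = {t:4.2f}  ->  P(photon converted into a pair) = {p_pair:.2f}")
# The probability sweeps 0 -> 1 -> 0; stopping the interaction at g*t = pi/2 gives
# deterministic conversion, unlike a weak parametric source where it stays tiny.
```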


Figure 1: Fulfilling the DiVincenzo criteria with CPC. (a) A deterministic photon-photon interaction (controlled-phase gate) based on a novel geometric phase effect. (b) A scaleable element for deterministic photon doubling, which can be used in a photon-doubling cascade (c) to create arbitrarily large multiphoton states.

Perhaps the most remarkable thing about CPC is that it should be possible to build a single all-optical device, with four I/O ports, which can provide all of these building blocks just by varying what type of light is sent into each port. And because of the underlying four-wave mixing nonlinearity, it could be compatible with current telecommunications technology and perhaps even be built entirely on a single photonic chip. This could make it much easier to build more complex networks.

Figure 2: Nonlinear photonic crystal fibre pumped by strong laser pulses (7.5 ps at 532 nm) to create the nonlinearity required for CPC.

To test the feasibility of our proposed approach, we performed a first series of experiments using off-the-shelf photonic crystal fibres (see Fig. 2) to demonstrate the nonlinear process underlying the CPC scheme. The next step is to identify what can be done to optimise the nonlinear coupling, both by improving the materials and by engineering a better design. While deterministic operation has yet to be achieved, our results show that this should be feasible with sophisticated current technology, such as chalcogenide glasses, which are highly nonlinear and can be used to make both optical fibres and chip-based integrated waveguides [8].

Finally, we hope that CPC will be a useful technique for implementing coherent, deterministic multiphoton operations both for applications in quantum-enhanced technologies and for fundamental tests involving entanglement and large-scale quantum systems. Interestingly, the general idea of “coherent photon conversion” can also be implemented in physical systems other than photons, such as in optomechanical, electromechanical and superconducting systems where the intrinsic nonlinearities available are even stronger.

References
[1] R.P. Feynman, "Simulating Physics with Computers". International Journal of Theoretical Physics, 21, 467–488 (1982). Article(PDF).
[2] P. W. Shor. "Algorithms for quantum computation: Discrete logarithms and factoring" (In Proceedings of the 35th Annual Symposium on Foundations of Computer Science, page 124, Los Alamitos, 1994. IEEE Computer Society Press). Abstract.
[3] Steve Jobs (2011). YouTube Video.
[4] D.P. DiVincenzo, D. Loss, "Quantum information is physical". Superlattices and Microstructures, 23, 419–432 (1998). arXiv:cond-mat/9710259. Abstract.
[5] P. Kok, W. J. Munro, K. Nemoto, T.C. Ralph, J.P. Dowling, G.J. Milburn, "Linear optical quantum computing with photonic qubits". Review of Modern Physics, 79, 135–174 (2007). Abstract.
[6] E. Knill, R. Laflamme, G.J. Milburn, "A scheme for efficient quantum computation with linear optics". Nature, 409, 46–52 (2001). Abstract.
[7] N. K. Langford, S. Ramelow, R. Prevedel, W. J. Munro, G. J. Milburn & A. Zeilinger. "Efficient quantum computing using coherent photon conversion". Nature 478, 360-363 (2011). Abstract.
[8] B.J. Eggleton, B. Luther-Davies, K. Richardson, "Chalcogenide photonics". Nature Photonics, 5, 141–148 (2011). Abstract.



Sunday, November 06, 2011

Gradient Birefringent Lenses: A New Degree of Freedom in Optics

Aaron Danner

Author: Aaron Danner

Affiliation: Department of Electrical and Computer Engineering, National University of Singapore, Singapore


Birefringence refers to the fact that some materials have more than one refractive index, causing light beams hitting a birefringent material to split into two parts. Since such materials are uncommon, at first glance this may appear to be a rather startling phenomenon. What we have shown in recent work [1] is that birefringent materials with gradient indices of refraction -- meaning that the indices vary with position inside the material -- can do even more surprising things.


Why do some natural materials have two refractive indices (one for each polarization)?


The refractive index of a material comes from the way light interacts with electrons inside a material. Because different materials have their atoms packed together in different arrangements, their electronic structures are different. In many materials, like glass or water, those electronic structures look more or less the same when viewed from any direction or along any angle. Such materials are isotropic and have just one refractive index. Other materials, such as calcite or lithium niobate, however, have crystalline structures where the electronic configuration is different along some directions compared to others. It’s thus natural that some materials would have more than one index of refraction, depending on the polarization direction of the light. When light containing a mixture of randomly polarized photons strikes such a material, it exhibits double refraction.
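
A simple worked example of double refraction (using textbook index values for calcite at 589 nm, quoted here only for illustration, and treating the extraordinary ray with its principal index, which ignores the full angle dependence):

```python
import math

N_ORDINARY = 1.658        # calcite ordinary index (textbook value, for illustration)
N_EXTRAORDINARY = 1.486   # calcite principal extraordinary index

incidence_deg = 45.0
for name, n in [("ordinary", N_ORDINARY), ("extraordinary", N_EXTRAORDINARY)]:
    refraction_deg = math.degrees(math.asin(math.sin(math.radians(incidence_deg)) / n))
    print(f"{name:13s} polarization refracts at {refraction_deg:.1f} degrees")
# ~25.3 vs ~28.4 degrees: the two polarizations follow visibly different paths,
# which is the double refraction described above.
```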


Birefringence is important for metamaterials researchers because it is often something that pops up where it is unwanted. One doesn’t want to design a new and exciting lens only to see that half of the light isn’t going where you want it to go, but in fact this is exactly what happens in a lot of cases. Many exciting and astonishing devices designed in recent years, such as cloaking devices, depend on a technique in optics called transformation optics [2-7]. It is a toolset that allows researchers to determine what refractive index profile will allow light to behave in a predefined way. Normally, such devices have gradient indices of refraction and require materials that have properties that are not commonly found in nature (or are even impossible or unfeasible to ever build). Transformation optics, for instance, requires the magnetic permeability tensor to equal the permittivity tensor. Since this is unworkable in real materials, a compromise is commonly made in order to actually build some of the amazing devices that researchers have designed: one polarization is sacrificed [8-9] to ease material property constraints. This is the origin of the “unwanted” birefringence.

Figure 1: Computer-generated image of a spherical device that functions as an invisible sphere for one polarization, and as a Luneburg lens for the other polarization [Image reproduced from Ref [1]. Thanks to 'Nature Photonics']

What we have discovered is that the “unwanted” birefringence can do useful things. We can actually design devices that are fully dielectric (removing the most onerous requirement of transformation optics) and have two unique functions, one for each polarization. Figure 1, for example, shows a spherical device that is invisible for one polarization, but acts like a so-called Luneburg lens for the other. Without careful consideration, it may seem obvious that such devices are possible – why not just pick each of the two constituent refractive indices in a gradient birefringent device to perform its own function? The reason it is not so simple is that as light propagates through a structure, especially a structure with a graded index tensor, the polarization direction itself rotates. Thus a light beam’s trajectory depends on each of the two indices in a complicated way, and before now it was not clear whether the two trajectories of a single incident beam could be independently designed except in a few circumstances [10].
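
For context, the Luneburg lens mentioned above is itself a gradient-index device, with the standard profile n(r) = sqrt(2 - (r/R)²). The sketch below simply tabulates that textbook profile (whether the birefringent sphere of Fig. 1 realises exactly this profile for one polarization is not spelled out here).

```python
import numpy as np

R = 1.0                          # lens radius (normalised)
r = np.linspace(0.0, R, 6)
n = np.sqrt(2.0 - (r / R) ** 2)  # standard Luneburg gradient-index profile

for ri, ni in zip(r, n):
    print(f"r/R = {ri:.1f}  ->  n = {ni:.3f}")
# The index falls smoothly from sqrt(2) at the centre to 1 at the rim, so the sphere
# is reflectionless at its surface and focuses parallel rays onto the opposite side.
```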

Figure 2: Four examples of birefringent dielectric devices designed with the methods described: (a) Invisible for one polarization, a Luneburg lens for the other as shown on the cover art, (b) a Luneburg lens for one polarization, and a ring focuser for the other, (c) Invisible but with a polarization-dependent phase slip, and (d) an interior-focusing potential for one polarization and a Maxwell fisheye for the other [Image reproduced from Ref [1]. Thanks to 'Nature Photonics'].

What we have shown is that, in fact, for all intents and purposes, they can be. This means that a design scheme now exists whereby lenses with multiple functions can be designed (two simultaneous focal lengths in a spherical lens, for example, one for each polarization). It also means that researchers working on unusual devices such as optical cloaks that must be designed with transformation optics need not sacrifice one polarization to make their devices work. They can “save” the other polarization, and at least make it do something useful. Thus, a cloaking device can cloak for one polarization, and perform some other interesting function with the other polarization. Various examples of such devices with dual functionality are shown in Figure 2. With the methods developed, birefringence can be fully controlled in dielectrics and useful optical devices designed that have either two functions, or that have identical functionality for each polarization but with different ray trajectories.

Acknowledgments: I would like to thank my co-authors on the Nature Photonics paper [1], Prof. Tomas Tyc of Masaryk University and Prof. Ulf Leonhardt of the University of St. Andrews. Funding from the Singapore Ministry of Education Tier II Academic Research Fund under grant MOE2009-T2-1-086 is acknowledged.

References:
[1] Danner A.J., Tyc T., and Leonhardt U., "Controlling birefringence in dielectrics," Nature Photonics 5, 357 (2011). Abstract.
[2] Service, R. F. & Cho, A. "Strange New Tricks with Light", Science, 330, 1622 (2010). Abstract.
[3] Leonhardt, U. "Optical Conformal Mapping". Science, 312, 1777-1780 (2006). Abstract.
[4] Pendry, J. B., Schurig, D. & Smith, D. R. "Controlling Electromagnetic Fields". Science, 312, 1780-1782 (2006). Abstract.
[5] Shalaev, V. M. "Transforming Light". Science, 322, 384-386 (2008). Abstract.
[6] Chen, H. Y., Chan, C. T. & Sheng, P. "Transformation Optics and Metamaterials". Nature Materials, 9, 387-396 (2010). Abstract.
[7] Leonhardt, U. & Philbin, T. G. "Geometry and Light: The Science of Invisibility" (Dover, 2010).
[8] D. Schurig, J. J. Mock, B. J. Justice, S. A. Cummer, J. B. Pendry, A. F. Starr and D. R. Smith, "Metamaterial Electromagnetic Cloak at Microwave Frequencies", Science, 314, 977-980 (2006). Abstract.
[9] Ma, Y. G., Ong, C. K., Tyc, T. & Leonhardt, U. "An Omnidirectional Retroreflector Based on the Transmutation of Dielectric Singularities". Nature Materials, 8, 639-642 (2009). Abstract.
[10] Kwon, D. H. & Werner, D. H. "Polarization Splitter and Polarization Rotator Designs Based on Transformation Optics." Opt. Express, 16, 18731-18738 (2008). Abstract.



Sunday, October 30, 2011

Gas Phase Optical Quantum Memory


[From left to right] Ben Sparkes, Mahdi Hosseini, Ping Koy Lam and Ben Buchler


Authors: Ben Buchler, Mahdi Hosseini, Ben Sparkes, Geoff Campbell and Ping Koy Lam

Affiliation: ARC Centre for Quantum Computation and Communication Technology, Department of Quantum Science, The Australian National University, Canberra, Australia

In the early days of quantum mechanics, the Heisenberg uncertainty principle was seen as something of a problem. It limits the ways in which it is possible to measure the state of things, and as such imposes an in-principle limit on how well we can manipulate and harness measurements for technological applications. More recently, however, there has been an outbreak of proposals suggesting that Heisenberg uncertainty and other quantum mechanical principles can be harnessed for advanced applications in the information sciences. Of these, quantum key distribution (QKD) is the most advanced. This technique allows the sharing of a secret key between remote parties over an open communication channel. The crucial point is that if someone tries to eavesdrop on the transmission of this key, the communication channel is disrupted due to the uncertainty principle. Only a clean communication line will allow the sharing of a key, and in this way any key that is generated is guaranteed to be perfectly secure. QKD has been demonstrated in optical fibres over distances of more than 60 km [1].
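
The flavour of QKD can be caught in a few lines (a generic BB84-style sketch for illustration, not the specific protocol of Ref. [1]): sender and receiver choose measurement bases at random and keep only the positions where the bases agree, and any eavesdropping shows up as errors in that sifted key.

```python
import random

N = 20
alice_bits  = [random.randint(0, 1) for _ in range(N)]
alice_bases = [random.choice("XZ") for _ in range(N)]
bob_bases   = [random.choice("XZ") for _ in range(N)]

# With no eavesdropper and no loss, Bob's outcome equals Alice's bit whenever
# their basis choices match; those positions form the shared ("sifted") key.
sifted_key = [bit for bit, a, b in zip(alice_bits, alice_bases, bob_bases) if a == b]
print("sifted key:", sifted_key)   # on average about half of the transmitted bits survive
```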

Unfortunately, beyond about 100 km the losses in optical fibres, or indeed any transmission medium, mean that it becomes very slow or even impossible to share a key.
One possible method to fix this problem is to build a quantum repeater [2]. These devices, which are yet to be demonstrated, will extend the range of quantum communication beyond the current limit. Integral to proposed repeaters is some kind of memory capable of storing, and recalling on demand, quantum states of light [3]. To build an ideal optical quantum memory, you need to capture a state of light with 100% efficiency without actually measuring it, since the quantum back-action from a measurement would disrupt the state. Then you have to recall it without adding noise or losing any of the light.

Our approach to building a quantum memory relies on reversible absorption in an ensemble of atoms. This is a photon-echo technique known as a “gradient echo memory” (GEM). In this scheme we organise our ensemble such that the absorption frequency of atoms varies linearly along the length of our memory cell. This is done using an applied field, such as a magnetic gradient, to shift the resonant frequency of the atoms (see Fig. 1).


Figure 1: The GEM scheme. a: A pulse, in this example a modulated pulse, is incident on the memory cell, which has a gradient in the atomic absorption frequencies. b: The pulse is stored in the cell and, due to the gradient, the atomic coherence has a spatial profile that is the Fourier transform of the pulse shape. c: After flipping the gradient, the pulse is recalled.


The bandwidth of the applied broadening can be matched to the bandwidth of the incoming light pulse. After the light is absorbed into the ensemble, the atoms dephase at a rate proportional to the spread in absorption frequencies. All that is required to recall the light pulse is to reverse the gradient. This reverses the relative frequency detunings, meaning that the atomic dephasing is time-reversed and the ensemble rephases. When this happens, the light is recalled in the forward direction.
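
The dephase-and-rephase logic can be seen in a toy simulation (our own sketch with made-up numbers, not the lab's model): atoms spread over a range of detunings lose their relative phase after absorbing the pulse, and flipping the gradient at time T rewinds those phases so the collective emission revives at 2T.

```python
import numpy as np

N = 2000
detunings = np.linspace(-1.0, 1.0, N) * 2 * np.pi * 1.0e6   # +/- 1 MHz spread along the cell (illustrative)
T_FLIP = 2e-6                                                # gradient reversed at 2 microseconds

def rephasing_amplitude(t):
    # Before the flip each atom accumulates phase delta*t; after the flip the
    # detunings change sign, so the accumulated phase becomes delta*(2*T_FLIP - t).
    phase = detunings * t if t <= T_FLIP else detunings * (2 * T_FLIP - t)
    return np.abs(np.sum(np.exp(-1j * phase))) / N

for t in (0.0, 1e-6, 2e-6, 3e-6, 4e-6):
    print(f"t = {t * 1e6:.0f} us : collective amplitude = {rephasing_amplitude(t):.2f}")
# ~1 at t = 0, ~0 while the atoms are dephased, and back to ~1 at t = 2*T_FLIP,
# which is when the stored pulse is re-emitted in the forward direction.
```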

This protocol works well in 2-level atoms, as described. Experiments with cryogenically cooled rare-earth-doped solid-state crystals have shown recall efficiencies up to 69% without added noise [4]. This experiment was the first to beat the crucial 50% limit. Above this percentage, you can be sure that a hypothetical, all-powerful eavesdropper who could, in principle, have collected all of the missing light will have less than 50% of the original information. This means that any eavesdropper has less information about the stored state than you do. In the absence of added noise, the 50% barrier corresponds to the “no-cloning limit”.

Our GEM experiments work in a 3-level atomic system in a warm gas cell. This has several advantages: i) there are many suitable 3-level systems; ii) gas cells can be bought off the shelf and require only a small heater rather than a large cryostat; and iii) the simplicity of the setup means that we can rapidly try out new protocols. With three levels, the scheme is exactly as illustrated in Fig. 1, except that the two levels used for storage are now two hyperfine ground states, which are coupled using a strong “control” beam, as shown in Fig. 2.

Figure 2: The atomic level scheme

The control beam brings new flexibility to our scheme. By switching the control field off, we can suppress recall from the memory. If we have multiple pulses stored in the memory, we can therefore choose which ones to recall, and when – i.e. we have a random access memory for pulses of light [5]. The Fourier nature of the memory also allows us to stretch and compress the bandwidth of the pulses, shift their frequency and recall different frequency components at different times [6]. The efficiency of our system is also the highest ever demonstrated for a quantum memory prototype, with up to 87% recall [7]. We have also verified the “quantumness” of our memory by quantifying the added noise [8]. Using conditional variance and signal transfer measurements, we found that our system easily beat the no-cloning limit. In terms of fidelity, for small photon numbers, we found fidelities as high as 98%.

Figure 3: One of our gas cells illuminated with 300mW of light at 795nm.

The current system, which uses sauna-temperature gas cells (around 80 degrees C), is limited by atomic diffusion: the storage times are only a few microseconds. We plan to implement our scheme in a cold atomic ensemble in the near future to improve this aspect of our system.

References
[1] D Stucki, N Gisin, O Guinnard, G Ribordy, and H Zbinden, "Quantum key distribution over 67 km with a plug & play system", New Journal of Physics, 4, 41 (2002). Article.
[2] Nicolas Gisin and Rob Thew, "Quantum communications", Nature Photonics, 1, 165–171 (2007). Abstract.
[3] Alexander I. Lvovsky, Barry C. Sanders, and Wolfgang Tittel, "Optical quantum memory", Nature Photonics, 3, 706–714 (2009). Abstract.
[4] Morgan P Hedges, Jevon J Longdell, Yongmin Li, and Matthew J Sellars, "Efficient quantum memory for light", Nature, 465, 1052–1056 (2010). Abstract.
[5] Mahdi Hosseini, Ben M Sparkes, Gabriel Hétet, Jevon J Longdell, Ping Koy Lam, and Ben C Buchler, "Coherent optical pulse sequencer for quantum applications", Nature 461, 241–245 (2009). Abstract.
[6] B. C. Buchler, M. Hosseini, G. Hétet, B. M. Sparkes and P. K. Lam, "Precision spectral manipulation of optical pulses using a coherent photon echo memory", Optics Letters, 35, 1091-1093 (2010). Abstract.
[7] M Hosseini, B M Sparkes, G Campbell, P K Lam, and B C Buchler, "High efficiency coherent optical memory with warm rubidium vapour", Nature Communications, 2, 174 (2011). Abstract.
[8] M. Hosseini, G. Campbell, B. M. Sparkes, P. K. Lam, and B. C. Buchler, "Unconditional room-temperature quantum memory", Nature Physics, 7, 794–798 (2011). Abstract.



Sunday, October 23, 2011

Heisenberg’s Uncertainty Principle Revisited

Robert Prevedel

Author: Robert Prevedel
Affiliation: Institute for Quantum Computing, University of Waterloo, Canada

In 1927, Heisenberg [1] showed that, according to quantum theory, one cannot know both the position and the momentum of a particle with arbitrary precision: the more precisely the position is known, the less precisely the momentum can be inferred, and vice versa. In other words, the uncertainty principle sets limits on our ultimate ability to predict the outcomes of certain pairs of measurements on quantum systems. Other such pairs of quantities include energy and time, or the spins and polarizations of particles along different directions. The uncertainty principle is a central result of quantum theory and a pillar of modern physics. It has profound fundamental and practical consequences, setting absolute limits on precision technologies such as metrology and lithography, while at the same time providing the basis for new technologies such as quantum cryptography [2].
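
In its most familiar form, for position x and momentum p, the principle can be written as Δx Δp ≥ ħ/2, where Δx and Δp are the standard deviations of repeated position and momentum measurements on identically prepared systems and ħ is the reduced Planck constant: squeezing one spread below any given value forces the other to grow.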

Past 2Physics article by the author:
June 08, 2007: "Entanglement and One-Way Quantum Computing"
by Robert Prevedel and Anton Zeilinger


Over the years, the uncertainty principle has been reexamined and expressed in more general terms. To link uncertainty relations to classical and quantum information theory, they have been recast with the uncertainty quantified by the entropy [3,4] rather than the standard deviation. Until recently, the favored uncertainty relation of this type was that of Maassen and Uffink [5], who showed that the so-called Shannon entropies (a measure of the amount of information that can be extracted from a system) associated with any pair of incompatible quantum observables cannot both be reduced to zero. This implies that the more you squeeze the entropy of one variable, the more the entropy of the other increases.
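
Written out (a standard statement of the result, with the notation chosen here for illustration), the Maassen-Uffink relation for two measurements R and S reads H(R) + H(S) ≥ log2(1/c), where H denotes the Shannon entropy of the measurement outcomes and c = max |<r|s>|^2 is the largest overlap between any eigenstate |r> of R and any eigenstate |s> of S. For a qubit measured in two mutually unbiased bases, such as horizontal/vertical versus diagonal polarization, c = 1/2 and the two entropies must add up to at least one bit.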

This intuition stood for decades. Very recently, however, work by Berta et al. [6] showed that the limitations imposed by Heisenberg’s principle can effectively be overcome through clever use of entanglement, the counterintuitive property of quantum particles that leads to strong correlations between them. More precisely, the paper predicts that an observer holding quantum information about the particle can have a dramatically lower uncertainty than one holding only classical information. In the extreme case, an observer with access to a particle that is maximally entangled with the quantum system he wishes to measure is able to correctly predict the outcome of whichever measurement is chosen. This dramatically illustrates the need for a new uncertainty relation that takes into account the potential entanglement between the system and another particle. A derivation of such a relation appeared in the work of Berta et al. [6] (also see the past 2Physics article dated August 29, 2010). The new relation proves a lower bound on the uncertainties of the measurement outcomes when one of two measurements is performed.
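
In the notation used above (again written out here for illustration), the new relation reads H(R|B) + H(S|B) ≥ log2(1/c) + H(A|B), where H(R|B) and H(S|B) are the uncertainties about the two measurement outcomes given access to a quantum memory B, and H(A|B) is the conditional von Neumann entropy of the joint state of the measured system A and the memory. For a maximally entangled pair of qubits H(A|B) = -1, so for mutually unbiased measurements (log2(1/c) = 1) the bound drops to zero and both uncertainties can, in principle, vanish, which is exactly what happens in the game described next.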

To illustrate how an observer holding quantum information can outperform one without it, the paper describes an imaginary “uncertainty game”, which we briefly outline below. In this game, two players, Alice and Bob, begin by agreeing on two measurements, R and S, one of which will be performed on a quantum particle. Bob then prepares this particle in a quantum state of his choosing. Without telling Alice which state he has prepared, he sends the particle to Alice. Alice performs one of the two measurements, R or S (chosen at random), and tells Bob which observable she has measured, though not the outcome of the measurement. The aim of the game is for Bob to correctly guess the measurement outcome. If Bob had only a classical memory (e.g. a piece of paper), he would not be able to guess correctly all of the time — this is what Heisenberg’s uncertainty relation implies. However, if Bob is able to entangle the particle he sends with a quantum memory, then for any measurement Alice makes on the particle, there is a measurement on Bob’s memory that always gives him the same outcome. His uncertainty has thus vanished and he is capable of correctly guessing the outcome of whichever measurement Alice performs.
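
As a toy illustration of the quantum-memory strategy (a numerical sketch for qubits with a maximally entangled state and two mutually unbiased bases, not a simulation of the actual photonic experiment), the snippet below has Bob keep one half of a Bell pair as his memory; whichever of the two bases Alice announces, Bob measures his qubit in the same basis and reproduces her outcome every time:

import numpy as np

rng = np.random.default_rng(0)

# Bell state |Phi+> = (|00> + |11>)/sqrt(2); qubit A goes to Alice,
# qubit B stays in Bob's quantum memory.
phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

# Measurement bases: Z (computational) and X (diagonal).
Z_basis = np.eye(2, dtype=complex)
X_basis = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # columns |+>, |->
BASES = {"Z": Z_basis, "X": X_basis}

def measure_both(state, basis):
    """Project both qubits onto the same local basis and sample an outcome pair."""
    probs = np.empty((2, 2))
    for a in range(2):
        for b in range(2):
            proj = np.kron(basis[:, a], basis[:, b])
            probs[a, b] = abs(np.vdot(proj, state)) ** 2
    flat = probs.ravel() / probs.sum()
    outcome = rng.choice(4, p=flat)
    return divmod(outcome, 2)   # (Alice's result, Bob's result)

trials = 2000
agree = 0
for _ in range(trials):
    label = rng.choice(["Z", "X"])          # Alice's random measurement choice
    a, b = measure_both(phi_plus, BASES[label])
    agree += (a == b)
print(f"Bob guesses Alice's outcome correctly in {agree}/{trials} rounds")
# With the maximally entangled state, Bob is right every time; with only a
# classical memory he could not do better than guessing for one of the bases.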

Fig. 1: The uncertainty game. Bob sends a particle, which is entangled with one that is stored in his quantum memory, to Alice (1), who measures it using one of two pre-agreed observables (2). She then communicates the measurement choice, but not its result, to Bob who tries to correctly guess Alice’s measurement outcome. See text for more details. Illustration adapted from [6].

In our present work [7], we realize Berta et al.'s uncertainty game in the laboratory and rigorously test the new, modified uncertainty relation in an optical experiment. We generate polarization-entangled photon pairs and send one of the photons to Alice, who randomly performs one of two polarization measurements. In the meantime, we delay the other photon using an optical fiber – this acts as a simple quantum memory. Depending on Alice’s measurement choice (but not her result), we perform the appropriate measurement on the photon that was stored in the fiber. In this way, we show that Bob can infer Alice's measurement result with less uncertainty when the particles are entangled. Varying the amount of entanglement between the particles allows us to fully investigate the new uncertainty relation, and the results closely follow the Berta et al. prediction. By using entangled photons in this way, we observe lower uncertainties than previously known uncertainty relations would allow. We show that this fact can be used to form a simple yet powerful entanglement witness. This more straightforward witnessing method is of great value to other experimentalists who strive to generate this precious resource. As future quantum technologies emerge, they will be expected to operate on increasingly large systems. The entanglement witness we have demonstrated offers a practical way to quantitatively assess the quality of such technologies, for example the performance of quantum memories.
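
The witness follows directly from rearranging the relation quoted above: H(A|B) ≤ H(R|B) + H(S|B) - log2(1/c). For any unentangled (separable) state the conditional entropy H(A|B) cannot be negative, so whenever the measured uncertainties satisfy H(R|B) + H(S|B) < log2(1/c), i.e. they fall below the Maassen-Uffink bound, the shared state must be entangled.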

Fig. 2: A photo of the actual experiment. In the center, a down-conversion source of entangled photons can be seen. The ultraviolet pump laser is clearly visible as it propagates through a Sagnac-type interferometer with the down-conversion crystal at the center. On the lower left, a small spool of fiber is lit up by a HeNe laser. This fiber spool is a miniaturization of the actual spool that serves as the quantum memory in our experiment. [Copyright: R. Prevedel]

A similar experiment was performed independently in the group of G.-C. Guo, and its results were published in the same issue of Nature Physics [8] (Also see the past 2Physics article dated October 11, 2011).

References:
[1] Heisenberg, W. "Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik". Zeitschrift für Physik, 43, 172-198 (1927).
[2] Bennett, C. H. & Brassard, G. "Quantum cryptography: Public key distribution and coin tossing". Proceedings of IEEE International Conference on Computers, Systems and Signal Processing, Bangalore, India, 175-179 (1984).
[3] Bialynicki-Birula, I. & Mycielski, J. "Uncertainty relations for information entropy in wave mechanics". Communications in Mathematical Physics, 44, 129-132 (1975). Abstract.
[4] Deutsch, D. "Uncertainty in quantum measurements". Physical Review Letters, 50, 631-633 (1983). Abstract.
[5] Maassen, H. & Uffink, J. B. "Generalized entropic uncertainty relations". Physical Review Letters, 60, 1103-1106 (1988). Abstract.
[6] Berta, M., Christandl, M., Colbeck, R., Renes, J. M. & Renner, R. "The uncertainty principle in the presence of quantum memory". Nature Physics, 6, 659-662 (2010). Abstract. 2Physics Article.
[7] Prevedel, R., Hamel, D.R., Colbeck, R., Fisher, K., & Resch, K.J. "Experimental investigation of the uncertainty principle in the presence of quantum memory and its application to witnessing entanglement", Nature Physics, 7, 757-761 (2011). Abstract.
[8] Li, C-F., Xu, J-S., Xu, X-Y., Li, K. & Guo, G-C. "Experimental investigation of the entanglement-assisted entropic uncertainty principle". Nature Physics, 7, 752-756 (2011). Abstract. 2Physics Article.



Sunday, October 16, 2011

Unusual ‘Quasiparticles’ in Tri-Layer Graphene

Liyuan Zhang and Igor Zaliznyak at the Center for Functional Nanomaterials, Brookhaven National Laboratory, USA

By studying three layers of graphene — sheets of honeycomb-arrayed carbon atoms — stacked in a particular way, scientists at the U.S. Department of Energy’s Brookhaven National Laboratory have discovered a “little universe” populated by a new kind of “quasiparticles” — particle-like excitations of electric charge. Unlike massless photon-like quasiparticles in single-layer graphene, these new quasiparticles have mass, which depends on their energy (or velocity), and would become infinitely massive at rest.

That accumulation of mass at low energies means this trilayer graphene system, if magnetized by incorporating it into a heterostructure with magnetic material, could potentially generate a much larger density of spin-polarized charge carriers than single-layer graphene — making it very attractive for a new class of devices based on controlling not just electric charge but also spin, commonly known as spintronics.

“Our research shows that these very unusual quasiparticles, predicted by theory, actually exist in three-layer graphene, and that they govern properties such as how the material behaves in a magnetic field — a property that could be used to control graphene-based electronic devices,” said Brookhaven physicist Igor Zaliznyak, who led the research team. Their work measuring properties of tri-layer graphene as a first step toward engineering such devices was published online in Nature Physics [1].

Graphene has been the subject of intense research since its discovery in 2004, in particular because of the unusual behavior of its electrons, which flow freely across flat, single-layer sheets of the substance. Stacking layers changes the way electrons flow: Stacking two layers, for example, provides a “tunable” break in the energy levels the electrons can occupy, thus giving scientists a way to turn the current on and off. That opens the possibility of incorporating the inexpensive substance into new types of electronics.

With three layers, the situation gets more complicated, scientists have found, but also potentially more powerful.

One important variable is the way the layers are stacked: In “ABA” systems, the carbon atoms making up the honeycomb rings are directly aligned in the top and bottom layers (A) while those in the middle layer (B) are offset; in “ABC” variants, the honeycombs in each stacked layer are offset, stepping upwards layer by layer like a staircase. So far, ABC stacking appears to give rise to more interesting behaviors — such as those that are the subject of the current study.

ABC trilayer graphene, where the three layers are offset from one another like stair steps [Image courtesy: Brookhaven National Laboratory]

For this study, the scientists created the tri-layer graphene at the Center for Functional Nanomaterials (CFN) at Brookhaven Lab, peeling it from graphite, the form of carbon found in pencil lead. They used micro-Raman microscopy to map the samples and identify those with three layers stacked in the ABC arrangement. Then they used the CFN’s nanolithography tools, including ion-beam milling, to shape the samples in a particular way so they could be connected to electrodes for measurements.

At the National High Magnetic Field Laboratory (NHMFL) in Tallahassee, Florida, the scientists then studied the material’s electronic properties — specifically the effect of an external magnetic field on the transport of electronic charge as a function of charge carrier density, magnetic field strength, and temperature.

“Ultimately, the success of this project relied on the hard work and rare experimental prowess of the talented young researchers with whom we engaged in these studies, in particular Liyuan Zhang, who at the time was a research associate at Brookhaven, and Yan Zhang, then a graduate student from Stony Brook University,” said Igor Zaliznyak.

The measurements provide the first experimental evidence for the existence of a particular type of quasiparticle, an electronic excitation that acts like a particle and serves as a charge carrier in the tri-layer graphene system. These particular quasiparticles, which were predicted by theoretical studies, have an ill-defined mass (that is, they behave as if they have a range of masses), and those masses diverge as the energy decreases, with the quasiparticles becoming infinitely massive at rest.
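
One schematic way to see how a mass can diverge at low energy (a back-of-the-envelope argument, not the detailed band-structure calculation of ref. [1]): for ABC-stacked trilayers the low-energy dispersion is approximately cubic in momentum, E(p) ≈ (v p)^3 / t^2, where v is the single-layer band velocity and t the interlayer coupling energy. Defining an effective mass through m* = p / (dE/dp) gives m* ≈ t^2 / (3 v^3 p), which grows without bound as the momentum, and hence the energy, goes to zero, which is the "infinitely massive at rest" behaviour described above.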

Ordinarily such particles would be unstable and couldn’t exist due to interactions with virtual particle-hole pairs — similar to virtual pairs of oppositely charged electrons and positrons, which annihilate when they interact. But a property of the quasiparticles called chirality, which is related to a special flavor of spin in graphene systems, keeps the quasiparticles from being destroyed by these interactions. So these exotic, infinitely massive particles can exist.

“These results provide experimental validation for the large body of recent theoretical work on graphene, and uncover new exciting possibilities for future studies aimed at using the exotic properties of these quasiparticles,” Zaliznyak said.

For example, combining magnetic materials with tri-layer graphene could align the spins of the charge-carrier quasiparticles. “We believe that such graphene-magnet heterostructures with spin-polarized charge carriers could lead to real breakthroughs in the field of spintronics,” Zaliznyak said.

Reference
[1] Liyuan Zhang, Yan Zhang, Jorge Camacho, Maxim Khodas, Igor Zaliznyak, "The experimental observation of quantum Hall effect of l=3 chiral quasiparticles in trilayer graphene", Nature Physics, doi:10.1038/nphys2104 (Published online September 25, 2011). Abstract.
