
2Physics

2Physics Quote:
"Many of the molecules found by ROSINA DFMS in the coma of comet 67P are compatible with the idea that comets delivered key molecules for prebiotic chemistry throughout the solar system and in particular to the early Earth increasing drastically the concentration of life-related chemicals by impact on a closed water body. The fact that glycine was most probably formed on dust grains in the presolar stage also makes these molecules somehow universal, which means that what happened in the solar system could probably happen elsewhere in the Universe."
-- Kathrin Altwegg and the ROSINA Team

(Read Full Article: "Glycine, an Amino Acid and Other Prebiotic Molecules in Comet 67P/Churyumov-Gerasimenko")

Sunday, December 26, 2010

A Light Transistor Based on Photons and Phonons

Tobias J. Kippenberg

Researchers at the Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland, and the Max Planck Institute of Quantum Optics (MPQ), Germany, have discovered a novel way to switch light all-optically on a chip.

The ability to control the propagation of light is at the technological heart of today’s telecommunication society. Researchers in the Laboratory of Photonics and Quantum Measurement led by Prof. Tobias J. Kippenberg (now EPFL) have discovered a novel principle to accomplish this, based on the interaction of light (photons) with mechanical vibrations (phonons). As they report in a recent publication [1], this scheme allows the transmission of a light beam past a chip-based optical micro-resonator to be controlled directly by a second, stronger light beam. The new device could have numerous applications in telecommunication and quantum information technologies.

So far, this effect has only been observed in the interaction of laser light with atomic vapours, based on an effect referred to as “electromagnetically induced transparency” (EIT). EIT has been used to manipulate light propagation at an impressive level: slowing of light pulses and even full storage has been achieved. However, EIT is restricted to light of wavelengths matching the natural resonances of atoms. Also, these technologies are hardly compatible with chip-scale processing.

The novel principle, discovered by a team of scientists including Dr. Albert Schliesser and Dr. Samuel Deléglise and doctoral students Stefan Weis and Rémi Rivière, is based on optomechanical coupling of photons to mechanical oscillations inside an optical micro-resonator. These optomechanical devices are fabricated using standard nanofabrication methods – drawing on the techniques used in semiconductor integrated circuit processing available in the cleanroom of EPFL. They can both trap light in orbits and act, at the same time, as mechanical oscillators, possessing well-defined mechanical vibrational frequencies just like a tuning fork.

If light is coupled into the resonator, the photons exert a force: radiation pressure. While this force has been used for decades to trap and cool atoms, it is only in the last five years that researchers could harness it to control mechanical vibrations at the micro- and nanoscale. This has led to a new research field: cavity optomechanics, which unifies photonics and micro- and nanomechanics. The usually small radiation pressure force is greatly enhanced within an optical microresonator, and can therefore deform the cavity, coupling the light to the mechanical vibrations. For the optomechanical control of light propagation, a second, “control” laser can be coupled to the resonator in addition to the “signal” laser. In the presence of the control laser, the beating of the two lasers causes the mechanical oscillator to vibrate, which in turn prevents the signal light from entering the resonator through an optomechanical interference effect, eventually leading to a transparency window for the signal beam.

For a long time the effect remained elusive. “We have known for more than two years that the effect existed,” says Dr. Schliesser, who theoretically predicted the effect early on. “Once we knew where to look it was right there,” says Stefan Weis, one of the lead authors of the paper. In the subsequent measurements, “the agreement of theory and experiment is really striking”, comments Dr. Deléglise.

In contrast to atoms, this novel form of induced transparency does not rely on naturally occurring resonances and could therefore also be applied to previously inaccessible wavelength regions such as the technologically important telecommunication window (near-infrared). Optomechanical systems allow virtually unlimited design freedom using wafer-scale nano- and microfabrication techniques. Furthermore, a single optomechanical element can already achieve unity contrast, which is normally not possible in the atomic case.

The novel effect, which the researchers have termed “OMIT” (optomechanically induced transparency) to emphasize the close relation to EIT, may indeed provide entirely new functionality to photonics. Future developments based on OMIT could enable the conversion of a stream of photons into mechanical excitations (phonons). Such conversion of radio frequency signals into mechanical vibrations is used in cell-phone receivers today for narrow-band filtering, a principle that could potentially be applied to optical signals as well.

Figure 1: False-colour scanning electron micrograph of the microresonator used in the study of OMIT. The red top part is a silica toroid; it is supported by a silicon pillar (gray) on a semiconductor chip. The silica toroid serves both as an excellent optical resonator for photons and as a support for mechanical vibrations (phonons). The mutual coupling of photons and phonons can be harnessed to control the propagation of light all-optically.

Furthermore, using OMIT, novel optical buffers could be realized that allow storing optical information for up to several seconds. Finally, with research groups all over the world reaching for control of optomechanical systems at the quantum level, the switchable coupling demonstrated in this work could serve as an important interface in hybrid quantum systems.

Figure 2: Principle of optomechanically induced transparency (OMIT). a) The signal laser (red beam), incident on the cavity, gets coupled into the resonator and is dissipated there. No light is returned from the system. b) In the additional presence of a control laser (green beam), the radiation pressure of the two beams drives the boundary of the cavity into resonant oscillations, preventing most of the signal beam from entering the cavity by an interference effect. In this case, the signal beam is returned by the optomechanical system.

Reference
[1] S. Weis, R. Rivière, S. Deléglise, E. Gavartin, O. Arcizet, A. Schliesser, T.J. Kippenberg, "Optomechanically induced transparency", Science, Vol.330, pp.1520-1523 (Dec 10, 2010).
Abstract.



Sunday, December 19, 2010

A Large Faraday Effect Observed in An Atomically Thin Material

(From L to R): Dirk van der Marel, Alexey Kuzmenko, Julien Levallois and Iris Crassee of University of Geneva

A team of physicists from the University of Geneva (Switzerland) -- in collaboration with researchers in the University of Erlangen-Nuremberg (Germany) and Berkeley Advanced Light Source (USA) -- has recently measured the magnetically induced rotation of the polarization of light (Faraday rotation) [1] in graphene in the far-infrared range.

In contradiction to the common logics, the rotation angle, which is usually proportional to sample thickness, appears to be very strong – up to a few degrees in a single atomic layer. Such a large effect, which is due to the cyclotron resonance of ‘relativistic’ electrons in graphene, does not only provide a useful contact free tool to study the dynamics of the charge carriers in graphene, but also suggests that graphene can be used to manipulate the state of the optical polarization. This work is published in a recent issue of Nature Physics [2].

Graphene is a single layer of carbon atoms arranged in a honeycomb lattice. Electrons in graphene behave like massless relativistic particles, moving with a velocity about 300 times smaller than the speed of light [3]. The high mobility of its charge carriers makes graphene potentially useful for electronics. Moreover, graphene shows unique optical properties, such as universal transparency [4], which in combination with excellent electrical conductivity favour its use in important optical applications such as solar cells, infrared detection, computer screens and ultrafast lasers [5].

On the left: A schematic representation of the Faraday rotation. On the right: the Faraday rotation as a function of the photon energy and the magnetic field (This figure is reproduced from Reference [2]. We thank authors of the paper and 'Nature Physics' for their permission. -- 2Physics.com)

When an external magnetic field is applied to a medium, it becomes magnetically polarized and the polarization state of light passing through it is affected: the plane of linearly polarized light rotates gradually during passage, owing to a difference in the velocity and absorption of left- and right-handed circularly polarized light. The rotation angle, known as the Faraday angle, is proportional to the optical path length, to the applied magnetic field, and to a material-specific parameter, the Verdet constant, which depends on the wavelength of the light. The ‘thickness’ of graphene is given by the interatomic distance of graphite – stacked graphene layers; an intriguing question is therefore what happens to the optical polarization state if the optical path is as short as only one atom.

Iris Crassee, Julien Levallois, Dirk van der Marel and Alexey Kuzmenko at the University of Geneva have studied the far-infrared Faraday rotation by graphene, epitaxially grown on SiC and characterized at the University of Erlangen-Nuremberg and the Berkeley Advanced Light Source [2]. The experiments showed that even for such an extremely thin layer the Faraday rotation can reach 6 degrees in a moderate magnetic field of 7 Tesla (see the figure). If one could stack several graphene layers at distances similar to the interlayer spacing in graphite (about 0.35 nm) without changing their individual properties, the effective Verdet constant of such a material could in principle reach a few times 10⁷ radian/(meter·Tesla). For comparison, the Verdet constants of the magneto-optical materials used in the visible range, such as rare-earth garnets, are only of the order of 10²-10³ radian/(meter·Tesla). A more appropriate comparison, though, is with the semiconductor-based two-dimensional electron gases (2DEGs) in the same spectral range (far-infrared and terahertz): the effective Verdet constant in graphene is still at least one to two orders of magnitude larger!
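As a sanity check on these numbers, the effective Verdet constant follows directly from the Faraday relation θ = V·B·d, using the measured 6-degree rotation at 7 Tesla and graphite's nominal interlayer spacing as the layer thickness:

```python
import math

theta = math.radians(6.0)   # measured Faraday rotation, in radians
B = 7.0                     # applied magnetic field, tesla
d = 0.335e-9                # graphite interlayer spacing, metres

# Faraday relation: theta = V * B * d  =>  effective Verdet constant
V = theta / (B * d)
print(f"effective Verdet constant: {V:.2e} rad/(m*T)")
```

This lands at a few times 10⁷ rad/(m·T), consistent with the figure quoted in the text.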

The origin of the observed Faraday rotation is in a peculiar cyclotron orbital motion of nearly massless electrons in graphene in a magnetic field. A similar effect can also be observed in 2DEGs. However, the cyclotron mass and therefore the cyclotron frequency (at a given magnetic field) in 2DEGs are fixed. In graphene they can be varied with doping. Moreover, since graphene can be doped both positively and negatively either electrostatically or chemically, the cyclotron frequency, and therefore the direction of the Faraday rotation, can be inverted without changing the magnetic field.

The Faraday effect and the associated magneto-optical Kerr effect are already widely used in such vital applications as optical communications, data storage and laser systems, largely in the visible range. Although the Faraday rotation in graphene was shown to be strong in the far less exploited far-infrared part of the electromagnetic spectrum, one can nevertheless think of using graphene, for example, in ultrathin and ultrafast tunable ‘Faraday isolators’, in which light can travel in one direction but is blocked in the other. In contrast to the existing devices, one should be able to tune the spectral range and also change the sign of the Faraday rotation in graphene by simply adjusting the gate voltage.

References
[1] M. Faraday, “On the magnetization of light and the illumination of magnetic lines of force”, Phil. Trans. R. Soc. 136, 104 (1846).
[2] I. Crassee, J. Levallois, A.L.Walter, M. Ostler, A. Bostwick, E. Rotenberg, Th. Seyller, D. van der Marel and A. B. Kuzmenko, “Giant Faraday rotation in single- and multilayer graphene”, Nature Physics, 7, 48-51 (2011).
Abstract.
[3] A. K. Geim and K. S. Novoselov, “The rise of graphene”, Nature Materials, 6, 183 (2007).
Abstract.
[4] R.R. Nair, P. Blake, A.N. Grigorenko, K.S. Novoselov, T.J. Booth, T. Stauber, N.M.R. Peres and A.K. Geim, “Fine structure constant defines visual transparency of graphene”, Science 320, 1308 (2008).
Abstract.
[5] F. Bonaccorso, Z. Sun, T. Hasan and A. C. Ferrari, “Graphene photonics and optoelectronics”, Nature Photonics, 4, 611 (2010).
Abstract.



Sunday, December 12, 2010

Metaflex: Flexible Metamaterial at Visible Wavelengths

A. Di Falco (left) and T. F. Krauss

[This is an invited article based on a recently published work by the authors -- 2Physics.com]

Authors: Andrea Di Falco and Thomas F. Krauss

Affiliation:
School of Physics and Astronomy, Univ. of St Andrews, UK

Andrea Di Falco, Martin Plöschner and Thomas Krauss of the School of Physics and Astronomy of the Scottish University of St Andrews, in an article published by the New Journal of Physics [1], have recently reported on the fabrication of a key building block for flexible metamaterials for visible light, Metaflex.

Figure 1: Artist's impression of the Metaflex concept. The green sphere is made invisible and not reflected by the mirror.

Metamaterials have engineered properties that are not available with naturally occurring materials. For example, they can exhibit negative refraction, which means that light refracts in the opposite direction to the one we are used to. They can also be used to build superlenses, which are lenses that can form images with “unlimited” resolution, well beyond the diffraction limit, and invisibility cloaks that can guide light around an object as if it did not exist. For these effects to take place, the smallest building blocks of metamaterials, called “meta-atoms”, have to be much smaller than the wavelength of the incident light. Therefore, at visible wavelengths, which are typically 400-600 nanometres, the meta-atoms have to be in the range of a few tens of nanometres. For this reason, researchers have to employ the sophisticated techniques developed in the semiconductor industry, i.e. the same techniques that are used to densely pack the semiconductor circuits required in modern computer processors. As a result, most metamaterials are realised on flat and rigid substrates, which limits the range of applications that can be accessed.

The work carried out at St Andrews overcomes this limitation by demonstrating metamaterials on flexible substrates. This achievement can almost be understood as a transition from the hard and rigid “stone-age” of nanophotonics to a modern age marked by flexibility [2,3]. While some examples of stretchable and deformable metamaterials have previously appeared [4-6], the St Andrews researchers were the first to demonstrate such flexible metamaterials at visible wavelengths.

Metaflex consists of very thin, self-supporting polymer membranes. The metamaterial property arises from an array of gold nanostructures that are resonant in the visible range. In particular, Di Falco et al. have “written” a nanometre-scale gold fishnet pattern (over an area of a few mm²), which interacts with light at a wavelength of 630 nm, i.e. the wavelength of red light. Because Metaflex is so thin, multiple layers can be stacked together as well as wrapped around an object. Such multilayer Metaflex will be demonstrated as the next step, allowing more complex behaviours such as negative refraction on flexible substrates at optical wavelengths.

Metaflex is also a useful tool for exploring the paradigm of Transformation Optics, which is the concept behind the ideas of invisibility cloaks that are so inspiring [7]. Transformation Optics requires materials with “designer” refractive properties that go far beyond those available with natural materials, so are ideally suited to the application of metamaterials; flexibility then adds a key ingredient. Metaflex, being supple and modifiable, is the natural choice for applications where, for example, a curved geometry is required.

Figure 2 : A layer of Metaflex placed on a disposable contact lens to show its potential use in visual prostheses.

In addition to enabling such exciting ideas as invisibility cloaks, metaflex offers more immediately feasible and practical applications such as enhanced visual prostheses, whereby the designer refractive properties can be used to improve the performance of everyday objects such as contact lenses.



References
[1] Andrea Di Falco, Martin Ploschner and Thomas F Krauss, "Flexible metamaterials at visible wavelengths", New Journal of Physics, vol.12, 113006 (2010).
Abstract.
[2] John A. Rogers, Takao Someya and Yonggang Huang, "Materials and Mechanics for Stretchable Electronics", Science, vol.327, 1603 (2010).
Abstract.
[3] I. Park, S. H. Ko, H. Pan, C. P. Grigoropoulos, A. P. Pisano, J. M. J. Fréchet, E.-S. Lee, J.-H. Jeong, "Nanoscale patterning and electronics on flexible substrate by direct nanoimprinting of metallic nanoparticles", Advanced Materials, vol. 20, 489 (2008).
Abstract.
[4] Hu Tao, A. C. Strikwerda, K. Fan, W. J. Padilla, X. Zhang and R. D. Averitt, "Reconfigurable Terahertz Metamaterials", Phys. Rev. Lett., vol. 103, 147401 (2009).
Abstract.
[5] H.O. Moser, L.K. Jian, H.S. Chen, M. Bahou, S.M.P. Kalaiselvi, S. Virasawmy, S.M. Maniam, X.X. Cheng, S.P. Heussler, Shahrain bin Mahmood, and B.-I. Wu, "All-metal self-supported THz metamaterial - the meta-foil", Opt Express (2009) vol. 17, 23914 (2009).
Abstract.
[6] Imogen M. Pryce, Koray Aydin, Yousif A. Kelaita, Ryan M. Briggs, and Harry A. Atwater, "Highly Strained Compliant Optical Metamaterials with Large Frequency Tunability", Nano Lett., vol. 10, 4222 (2010).
Abstract.
[7] Ulf Leonhardt and Thomas Philbin, "Geometry and Light: The Science of Invisibility" (Mineola, NY: Dover, 2010)



Sunday, December 05, 2010

Quantum Walks of Correlated Photons in Integrated Waveguide Arrays

Alberto Peruzzo

[This is an invited article based on a recently published work by the authors -- 2Physics.com]

Authors: Alberto Peruzzo and Jeremy L. O’Brien

Affiliation: Centre for Quantum Photonics, H. H. Wills Physics Laboratory and Dept of Electrical & Electronic Engineering, University of Bristol, UK


Since their initial development for studying the random motion of microscopic particles (such as those suspended in a fluid), random walks have been a successful model for random processes in many fields, from computer science to economics. Such processes are random in the sense that, at each step, the direction a particle takes is probabilistic, as if decided by flipping a coin.

Past 2Physics articles based on works of this group:
Sep 20, 2009: "Shor's Quantum Factoring Algorithm Demonstrated on a Photonic Chip"

May 2, 2008: "Silicon Photonics for Optical Quantum Technologies"


In the quantum analogue – the quantum walk [1] – the walker is, at a given time, in a superposition of the possible states, and different paths can interfere. This gives ballistic propagation, with faster dynamics than the slow diffusion of classical random walks, prompting applications in quantum computer science and quantum communication. Indeed, quantum walks have been shown to be universal for quantum computing, to enable direct simulation of important physical, chemical and biological systems, and to offer the possibility of studying very large entangled states of several particles, with the potential to investigate the existence of quantum–classical boundaries.

The first application of quantum walks was to search algorithms on graphs (vertices connected by edges), which are more efficient than classical search: finding an element among N vertices using a quantum walk requires about √N steps, while the classical algorithm takes N steps to check all the vertices.
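That √N scaling can be made concrete with a short sketch of Grover-style amplitude amplification, the circuit counterpart of quantum-walk search (this is an illustration of the scaling, not the quantum-walk algorithm itself; N and the marked vertex are arbitrary choices):

```python
import math
import numpy as np

N = 1024
marked = 7                                   # arbitrary marked vertex
amps = np.full(N, 1 / math.sqrt(N))          # uniform superposition over N items

steps = round(math.pi / 4 * math.sqrt(N))    # ~25 steps instead of ~1024 checks
for _ in range(steps):
    amps[marked] *= -1                       # oracle: flip the marked amplitude
    amps = 2 * amps.mean() - amps            # diffusion: inversion about the mean

success = amps[marked] ** 2                  # probability of finding the target
print(f"{steps} steps, success probability {success:.4f}")
```

After roughly (π/4)√N iterations the marked item is found with near-unit probability, versus N sequential checks classically.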

Quantum walks come in two types: the discrete-time quantum walk (DTQW) and the continuous-time quantum walk (CTQW). In a DTQW the step direction is specified by a coin and a shift operator, applied repeatedly, much as in the classical random walk, but with the coin flip replaced by a quantum coin operation that puts the step direction into a superposition. The CTQW describes tunnelling of quantum particles through an array of potential wells.
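A DTQW of this kind is straightforward to simulate, and doing so exhibits the ballistic spread mentioned above: the width of the position distribution grows linearly with the number of steps, versus √steps for the classical walk. A minimal sketch with a Hadamard coin and a symmetric initial coin state:

```python
import numpy as np

def hadamard_walk(steps):
    """Discrete-time quantum walk on a line with a Hadamard coin."""
    n = 2 * steps + 1
    psi = np.zeros((n, 2), dtype=complex)            # (position, coin) amplitudes
    psi[steps] = [1 / np.sqrt(2), 1j / np.sqrt(2)]   # symmetric initial coin state
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # quantum coin operation
    for _ in range(steps):
        psi = psi @ H                                # coin flip (superposition)
        new = np.zeros_like(psi)
        new[1:, 0] = psi[:-1, 0]                     # coin "up": shift right
        new[:-1, 1] = psi[1:, 1]                     # coin "down": shift left
        psi = new
    return (np.abs(psi) ** 2).sum(axis=1)            # position distribution

steps = 100
prob = hadamard_walk(steps)
x = np.arange(-steps, steps + 1)
sigma_quantum = np.sqrt((prob * x**2).sum() - (prob * x).sum() ** 2)
sigma_classical = np.sqrt(steps)                     # diffusive classical spread
print(f"quantum spread {sigma_quantum:.1f} vs classical {sigma_classical:.1f}")
```

After 100 steps the quantum walker's standard deviation is several times the classical √100 = 10, with the characteristic two-peaked ballistic distribution replacing the classical Gaussian.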

The theory of quantum walks has been extensively studied, but so far only a few experimental demonstrations of several steps of single-particle quantum walks, with atoms, trapped ions, nuclear magnetic resonance and photons, have been carried out.

Quantum walks are based on wave interference and require a stable environment to suppress the noise (decoherence) that would otherwise destroy the interference. Interferometric stability and miniaturization using photonic waveguide circuits have proved a promising approach for quantum optics experiments: silica-on-silicon waveguides have been used to demonstrate high-fidelity quantum information components [2, 3, 4] and a small-scale quantum algorithm for prime number factorization [5].

We have implemented CTQWs of photons by designing periodic waveguide arrays in integrated photonic circuits, which allow single photons to be injected and single-photon detectors to be coupled to their outputs. The chips were fabricated in silicon oxynitride, a high-refractive-index-contrast material in which the coupling between neighbouring waveguides can be switched off over short distances, giving a high level of control over the propagation.

Integrated quantum photonic circuit used to implement a continuous time quantum walk of two correlated photons.

In contrast to all previous demonstrations, which were restricted to single-particle quantum walks that have exact classical counterparts, we have demonstrated the quantum walk of two identical, spatially correlated photons in a waveguide array, observing uniquely quantum mechanical behaviour in the two-photon correlations at the outputs of the array [6]. Pairs of correlated photons were generated by standard type-I spontaneous parametric down-conversion, a nonlinear process in which a 402 nm continuous-wave laser pumps a χ(2)-nonlinear bismuth borate crystal, generating pairs of photons at 804 nm in accordance with energy and momentum conservation. The correlated photons were coupled into the waveguides using fibre arrays, and the correlations at the output were recorded by measuring two-photon coincidence events with a detection system of 12 avalanche single-photon detectors and 3 programmable counting boards. The measured correlations agree closely with our simulations.

Artist’s impression of the two-photon quantum walk.
Credit: Image by Proctor & Stevenson


We have shown that the results depend strongly on the input state, and that these correlations violate classical limits by 76 standard deviations, proving that such phenomena cannot be described by classical theory. This generalized form of quantum interference is similar to the Hong-Ou-Mandel dip in an optical beam splitter, but here occurs in a 21-mode system. Bunching of correlated photons reduces the probability of detecting the two photons on opposite sides of the array, while enhancing the probability of detecting both on the same side.

Such two-particle quantum walks have already been identified as a powerful computational tool for solving important problems such as graph isomorphism, and provide a direct route to powerful quantum simulations. Implementing new algorithms based on quantum walks will require integration of the single-photon sources and detectors. Both have already been shown to be compatible with integration, reducing coupling losses and considerably improving the overall performance. Reconfigurability and feedback will provide the further tools needed for more challenging and interesting tasks.

Random walks are an extremely successful tool, employed in many scientific fields, and their quantum analogues promise to be similarly powerful.

References:
[1] Y. Aharonov, L. Davidovich, N. Zagury, "Quantum Random Walks", Phys. Rev. A 48, 1687 (1993).
doi:10.1103/PhysRevA.48.1687
[2] A. Politi, M. J. Cryan, J. G. Rarity, S. Yu, J. L. O'Brien, "Silica-on-silicon waveguide quantum circuits", Science 320, pp. 646-649 (2008).
doi:10.1126/science.1155441
[3] J. C. F. Matthews, A. Politi, A. Stefanov, J. L. O'Brien, "Manipulation of multiphoton entanglement in waveguide quantum circuits", Nature Photonics, 3, pp. 346-350 (2009).
doi:10.1038/nphoton.2009.93
[4] A. Laing, A. Peruzzo, A. Politi, M. Rodas Verde, M. Halder, T. C. Ralph, M. G. Thompson, J. L. O'Brien, "High-fidelity operation of quantum photonic circuits", Quant. Phys., e-prints,
arXiv:1004.0326v2.
[5] A. Politi, J. C. F. Matthews, J. L. O'Brien, "Shor's quantum factoring algorithm on a photonic chip", Science 325, 1221 (2009).
doi:10.1126/science.1173731.
[6] A. Peruzzo, M. Lobino, J.C.F. Matthews, N. Matsuda, A. Politi, K. Poulios, X.-Q. Zhou, Y. Lahini, N. Ismail, K. Wörhoff, Y. Bromberg, Y. Silberberg, M. G. Thompson, J. L. O’Brien, "Quantum Walks of Correlated Photons", Science 329, pp. 1500-1503 (2010).
doi:10.1126/science.1193515 .



Sunday, November 28, 2010

High Magnetic Fields Coax New Discoveries from Topological Insulators

James Analytis [Photo courtesy: Stanford U.]

Using one of the most powerful magnets in the world, a small group of researchers has successfully isolated signs of electrical current flowing along the surface of a topological insulator, an exotic material with promising electrical properties. The research, led by James Analytis and Ian Fisher of the Stanford Institute of Materials and Energy Sciences, a joint SLAC-Stanford institute, was published last Sunday in Nature Physics [1]. The results provide a new window into how current flows in these exotic materials, which conduct along the exterior while acting as insulators in the interior, at least in theory.

"This is a difficulty people in the field have been struggling with for two years," Fisher said. "The topological part is there but the insulator part isn't there yet." Chemical imperfections in the materials being tested have meant that the interior, or bulk, portions of topological insulators have been behaving more like metals than insulators.

Ian Fisher [Photo courtesy: Stanford U.]

In other words, while researchers have been trying to decipher the behavior of the electrons on the surface by observing the way they conduct current (called electronic transport), the electrons in the interior have also been conducting current. The difficulty arises in telling the two currents apart.

But, according to Fisher, the promise of useful applications for these exotic new materials—not to mention possible discoveries of fundamental new physics—rests on the ability to measure and control the electric current at the surface. In order to do so, Analytis, Fisher, and their group first had to reduce the amount of current running through the bulk of the material until the surface current could be detected, and then probe the physical properties of the electrons responsible for that surface current.

Analytis tackled the first problem by replacing some of the bismuth in bismuth selenide, a known topological insulator, with antimony, a lighter relative with the same number of electrons in its valence, or chemically reactive, shell. This provided a way to reduce the number of charge-carrying electrons in the interior of the sample.

But even after removing hundreds of billions of electrons, "we still didn't have an insulator," Analytis said. That's when he turned to Ross McDonald and the pulsed magnets at the Pulsed Field Facility, Los Alamos National Laboratory's branch of the National High Magnetic Field Laboratory.

Ross McDonald [Photo courtesy: National High Magnetic Field Laboratory, Tallahassee, FL]

Electrons in a uniform magnetic field follow circular orbits. As the electrons are subjected to higher and higher magnetic fields, they travel in tighter and tighter orbits, which are quantized, or separated into discrete energy levels, called Landau levels. Using a high-enough magnetic field to trap the bulk electrons in their lowest Landau level enabled Analytis to differentiate between the bulk electrons and the surface electrons, or, as Fisher put it, "get the bulk under control."
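The field needed to reach this "quantum limit" can be estimated from the Landau-level degeneracy: each (spin-resolved) level holds e·B/h states per unit area, so all carriers fit into the lowest level once the field exceeds n·h/e. The sheet carrier density below is an assumed, typical value for illustration, not a number from the paper:

```python
h = 6.626e-34      # Planck constant, J*s
e = 1.602e-19      # elementary charge, C

n_2d = 1e15        # assumed sheet carrier density, m^-2 (= 1e11 cm^-2)

# All carriers occupy the lowest Landau level once e*B/h >= n_2d, i.e. when
B_ql = n_2d * h / e
print(f"quantum-limit field: {B_ql:.1f} T")
```

For this illustrative density the quantum limit is reached at a few Tesla, the same order as the moderate field described below; higher densities push the required field up proportionally.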

With McDonald's help, Analytis used one of Los Alamos' multi-shot pulsed magnets, so called because they deliver their full field strength in pulses lasting thousandths of a second. Analytis discovered that a moderate field of four Tesla (about twenty thousand times the strength of a refrigerator magnet) was sufficient to force the bulk conduction electrons into their lowest Landau level. Then he pushed the magnetic field to 65 T to see what the surface electrons on the topological insulator would do.

The 100 Tesla multi-shot pulsed magnet at Los Alamos National Laboratory. James Analytis used a slightly less powerful magnet for the research covered in this article [Photo courtesy: National High Magnetic Field Laboratory, Tallahassee, FL].

He saw a clear signature from the Landau levels of the surface electrons. And, at the very highest magnetic fields, at which the surface electrons are pushed most closely together, Analytis detected signs that the electrons interacted with each other, instead of behaving like independent particles.

"It's beautiful," Fisher said. "It's unambiguous evidence that we can probe electronic transport in the surface of these materials." However, much of the difficulty in creating a truly insulating topological insulator remains.

"It feels like we've opened a door to the place [experimenters] want to be," he said, "but there's a lot more work to be done."

In the meantime, Analytis is moving ahead with his latest experiment—hitting the antimony-doped bismuth selenide with a staggering 85 T—the highest magnetic field available in a multi-shot magnet anywhere in the world.

Reference
[1] James G. Analytis, Ross D. McDonald, Scott C. Riggs, Jiun-Haw Chu, G.S. Boebinger & Ian R. Fisher, "Two-dimensional surface state in the quantum limit of a topological insulator", Nature Physics, Published online November 21 (2010), doi:10.1038/nphys1861.
Abstract.

[The text is written by Lori Ann White of Stanford Linear Accelerator Laboratory (SLAC)]



Sunday, November 21, 2010

Four-Fold Quantum Memory

Jeff Kimble (photo courtesy: Caltech Particle Theory Group)

Researchers at the California Institute of Technology (Caltech) have demonstrated quantum entanglement for a quantum state stored in four spatially distinct atomic memories.

Their work, described in the November 18 issue of the journal Nature [1], also demonstrated a quantum interface between the atomic memories—which represent something akin to a computer "hard drive" for entanglement—and four beams of light, thereby enabling the four-fold entanglement to be distributed by photons across quantum networks. The research represents an important achievement in quantum information science by extending the coherent control of entanglement from two to multiple (four) spatially separated physical systems of matter and light.

The proof-of-principle experiment, led by the William L. Valentine Professor and professor of physics H. Jeff Kimble, helps to pave the way toward quantum networks [2]. Similar to the Internet in our daily life, a quantum network is a quantum "web" composed of many interconnected quantum nodes, each of which is capable of rudimentary quantum logic operations (similar to the "AND" and "OR" gates in computers) utilizing "quantum transistors" and of storing the resulting quantum states in quantum memories. The quantum nodes are "wired" together by quantum channels that carry, for example, beams of photons to deliver quantum information from node to node. Such an interconnected quantum system could function as a quantum computer, or, as proposed by the late Caltech physicist Richard Feynman in the 1980s, as a "quantum simulator" for studying complex problems in physics.

Link to Professor Jeff Kimble's Quantum Optics group at Caltech >>

Quantum entanglement is a quintessential feature of the quantum realm and involves correlations among components of the overall physical system that cannot be described by classical physics. Strangely, for an entangled quantum system, there exists no objective physical reality for the system's properties. Instead, an entangled system contains simultaneously multiple possibilities for its properties. Such an entangled system has been created and stored by the Caltech researchers.

Previously, Kimble's group entangled a pair of atomic quantum memories and coherently transferred the entangled photons into and out of the quantum memories [3]. For such two-component—or bipartite—entanglement, the subsystems are either entangled or not. But for multi-component entanglement with more than two subsystems—or multipartite entanglement—there are many possible ways to entangle the subsystems. For example, with four subsystems, all of the possible pair combinations could be bipartite entangled but not be entangled over all four components; alternatively, they could share a "global" quadripartite (four-part) entanglement.

Hence, multipartite entanglement is accompanied by increased complexity in the system. While this makes the creation and characterization of these quantum states substantially more difficult, it also makes the entangled states more valuable for tasks in quantum information science.

[Image Credit: Nature/Caltech/Akihisa Goban] The fluorescence from the four atomic ensembles. These ensembles are the four quantum memories that store an entangled quantum state.

To achieve multipartite entanglement, the Caltech team used lasers to cool four collections (or ensembles) of about one million Cesium atoms, separated by 1 millimeter and trapped in a magnetic field, to within a few hundred millionths of a degree above absolute zero. Each ensemble can have atoms with internal spins that are "up" or "down" (analogous to spinning tops) and that are collectively described by a "spin wave" for the respective ensemble. It is these spin waves that the Caltech researchers succeeded in entangling among the four atomic ensembles.

The technique employed by the Caltech team for creating quadripartite entanglement is an extension of the theoretical work of Luming Duan, Mikhail Lukin, Ignacio Cirac, and Peter Zoller in 2001 for the generation of bipartite entanglement by the act of quantum measurement. This kind of "measurement-induced" entanglement for two atomic ensembles was first achieved by the Caltech group in 2005 [4].

In the current experiment, entanglement was "stored" in the four atomic ensembles for a variable time, and then "read out"—essentially, transferred—to four beams of light. To do this, the researchers shot four "read" lasers into the four, now-entangled, ensembles. The coherent arrangement of excitation amplitudes for the atoms in the ensembles, described by spin waves, enhances the matter–light interaction through a phenomenon known as superradiant emission.

"The emitted light from each atom in an ensemble constructively interferes with the light from other atoms in the forward direction, allowing us to transfer the spin wave excitations of the ensembles to single photons," says Akihisa Goban, a Caltech graduate student and coauthor of the paper. The researchers were therefore able to coherently move the quantum information from the individual sets of multipartite entangled atoms to four entangled beams of light, forming the bridge between matter and light that is necessary for quantum networks.

The Caltech team investigated the dynamics by which the multipartite entanglement decayed while stored in the atomic memories. "In the zoology of entangled states, our experiment illustrates how multipartite entangled spin waves can evolve into various subsets of the entangled systems over time, and sheds light on the intricacy and fragility of quantum entanglement in open quantum systems," says Caltech graduate student Kyung Soo Choi, the lead author of the Nature paper. The researchers suggest that the theoretical tools developed for their studies of the dynamics of entanglement decay could be applied for studying the entangled spin waves in quantum magnets.

Further possibilities of their experiment include the expansion of multipartite entanglement across quantum networks and quantum metrology. "Our work introduces new sets of experimental capabilities to generate, store, and transfer multipartite entanglement from matter to light in quantum networks," Choi explains. "It signifies the ever-increasing degree of exquisite quantum control to study and manipulate entangled states of matter and light."

In addition to Kimble, Choi, and Goban, the other authors of the paper, "Entanglement of spin waves among four quantum memories," are Scott Papp, a former postdoctoral scholar in the Caltech Center for the Physics of Information now at the National Institute of Standards and Technology in Boulder, Colorado, and Steven van Enk, a theoretical collaborator and professor of physics at the University of Oregon, and an associate of the Institute for Quantum Information at Caltech.

References
[1]
K.S. Choi, A. Goban, S.B. Papp, S.J. van Enk, H. J. Kimble, "Entanglement of spin waves among four quantum memories", Nature 468, 412-416 (18 November 2010).
Abstract.
[2] H.J. Kimble, "The quantum internet", Nature 453, 1023-1030 (19 June 2008).
Abstract.
[3] K. S. Choi, H. Deng, J. Laurat, H. J. Kimble, "Mapping photonic entanglement into and out of a quantum memory", Nature 452, 67-71 (6 March 2008).
Abstract.
[4] C.W. Chou, H. de Riedmatten, D. Felinto, S.V. Polyakov, S.J. van Enk, H.J. Kimble, "Measurement-induced entanglement for excitation stored in remote atomic ensembles", Nature 438, 828-832 (8 December, 2005).
Abstract.



Sunday, November 07, 2010

Hanbury Brown and Twiss Interferometry with Interacting Photons

Left to right: Eran Small, Yoav Lahini, Yaron Bromberg and Yaron Silberberg

[This is an invited article based on a recently published work by the authors
-- 2Physics.com]

Authors: Yoav Lahini, Yaron Bromberg, Eran Small and Yaron Silberberg
Affiliation: Department of Physics of Complex Systems, the Weizmann Institute of Science, Rehovot, Israel.


The next time you go out on a sunny day, take a minute to consider the sunlight reflected from the ground near you. If you could freeze time, you would see that the light pattern on the ground is not homogeneous but speckled: it is made of patches of light and darkness, similar to the speckle pattern you see when laser light hits a rough surface such as a wall. For sunlight, the typical speckle size is around 100 microns, but that is not why sunlight speckles go unnoticed in everyday life. The real reason is that this speckle pattern changes much faster than the human eye, and in fact faster than any man-made detector, can follow. As a result we see an averaged, smeared, homogeneous light reflected around us.

To understand this phenomenon, its relation to the Hanbury Brown and Twiss effect, and the birth of quantum optics, first consider the sun as observed by a spectator on Earth. The sun is an incoherent light source: there is no fixed phase relation between the rays of light coming from different parts of its surface. More precisely, there is a phase relation between the rays, but it fluctuates continuously and very fast, typically on the scale of femtoseconds. Nevertheless, let’s assume for a moment that we could freeze time while looking at the sunlight on Earth. What would we see? Since everything is “frozen”, the phase between all the rays coming from the sun is fixed, and the rays interfere. The interference of many rays with random phases produces a speckle pattern: patches of light and darkness. Bright regions form where the rays interfere constructively, and dark regions where they interfere destructively. The typical size of the patches is set by the distance over which constructive interference turns into destructive interference, which happens when the path lengths from the emitters on the sun (or any incoherent source) to the Earth change by about half a wavelength. In fact, it can be shown that the typical size of such a speckle, if one could ever be photographed, scales as the wavelength divided by the angular size of the source as seen from the Earth [1]. This means that a typical speckle grows with the distance between the source and the observer: the speckles diffract, and their size increases as they propagate.

As noted earlier, the sun creates a speckle pattern on Earth with a typical speckle size of about 100 microns, while a distant star (with a much smaller angular size) creates speckles of a few meters or even kilometers. So in principle, if the speckle size could somehow be measured, it would allow one to determine the angular size of stars, or of any other incoherent light source.
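As a rough order-of-magnitude check of the relation above (speckle size ≈ wavelength / angular size), the following sketch plugs in illustrative numbers: 550 nm green light, a 0.53° angular diameter for the Sun, and a hypothetical giant star of 50 milliarcseconds. These specific values are our assumptions, not figures from the article.

```python
import math

def speckle_size(wavelength_m, angular_size_rad):
    """Typical far-field speckle size ~ wavelength / angular size of the source."""
    return wavelength_m / angular_size_rad

# The Sun subtends about 0.53 degrees as seen from Earth.
d_sun = speckle_size(550e-9, math.radians(0.53))

# A hypothetical giant star subtending 50 milliarcseconds.
d_star = speckle_size(550e-9, math.radians(50e-3 / 3600.0))

print(f"solar speckle ~ {d_sun * 1e6:.0f} micrometres")
print(f"stellar speckle ~ {d_star:.1f} metres")
```

With these inputs the solar speckle comes out at a few tens of microns and the stellar speckle at a couple of meters, consistent with the scales quoted above.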

In 1956, two astronomers, Hanbury Brown and Twiss, did just that [2]. They found a way to determine the typical speckle size in starlight with just two detectors instead of a camera. The trick was to use two fast detectors and to look at the noise they measure, instead of the averaged signal as we usually do in the lab. How does it work? The intensities measured by the two detectors are noisy, since the speckle pattern impinging on the detectors varies continuously. But as long as the two detectors are separated by a distance smaller than the typical speckle size, they are illuminated most of the time by the same speckle. The signals measured by the two detectors are therefore noisy but correlated, i.e. the two signals fluctuate together. If, however, the two detectors are separated by a distance larger than the typical speckle size, the fluctuations are completely uncorrelated, since each detector sees different speckles. The separation at which the noise in the two detectors becomes uncorrelated is therefore a measure of the typical speckle size, and hence of the angular size of the observed star.
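The two-detector logic can be sketched numerically. The toy model below (our construction, not the authors' code) builds a far-field speckle from many point emitters with random phases, and estimates the normalized intensity correlation g2 = ⟨I1·I2⟩/(⟨I1⟩⟨I2⟩) for two detector separations: well inside one speckle, g2 approaches 2 (correlated fluctuations); many speckle sizes apart, it approaches 1 (uncorrelated).

```python
import cmath
import math
import random

random.seed(1)

WAVELENGTH = 1.0
K = 2 * math.pi / WAVELENGTH
SOURCE_WIDTH = 10.0     # W: transverse extent of the incoherent source
DISTANCE = 1000.0       # L: source-to-detector distance
N_EMITTERS = 100
N_SNAPSHOTS = 2000
# Expected speckle size at the detectors ~ WAVELENGTH * DISTANCE / SOURCE_WIDTH = 100

def intensity(x_det, phases, x_src):
    """Far-field intensity at detector position x_det for one phase snapshot."""
    field = sum(cmath.exp(1j * (phi + K * xs * x_det / DISTANCE))
                for phi, xs in zip(phases, x_src))
    return abs(field) ** 2

def g2(separation):
    """Normalized intensity correlation <I1 I2> / (<I1><I2>) of two detectors."""
    x_src = [random.uniform(-SOURCE_WIDTH / 2, SOURCE_WIDTH / 2)
             for _ in range(N_EMITTERS)]
    s1 = s2 = s12 = 0.0
    for _ in range(N_SNAPSHOTS):
        # Each snapshot is one "frozen" random-phase configuration of the source.
        phases = [random.uniform(0.0, 2 * math.pi) for _ in range(N_EMITTERS)]
        i1 = intensity(0.0, phases, x_src)
        i2 = intensity(separation, phases, x_src)
        s1 += i1
        s2 += i2
        s12 += i1 * i2
    return N_SNAPSHOTS * s12 / (s1 * s2)

g2_near = g2(1.0)      # separation well inside one speckle
g2_far = g2(1000.0)    # separation of many speckle sizes
print(f"g2 near = {g2_near:.2f}, g2 far = {g2_far:.2f}")
```

The separation at which g2 drops from 2 towards 1 tracks the speckle size, and hence the source's angular size, which is exactly the quantity Hanbury Brown and Twiss extracted.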

Hanbury Brown and Twiss (HBT) confirmed their idea several times [3,4] by making accurate measurements of the angular size of several stars using radio and optical interferometry. These experiments gave rise to a vigorous debate about the nature of light: it is easy to explain the HBT effect if you think of light as classical waves, but what happens if you take the particle view of light? How can two photons, emitted by two distant atoms on the surface of a star and measured by two distant detectors on the surface of the Earth, be correlated? The answer was given a few years later by the Nobel Prize laureate Roy Glauber [5], an answer that marked the birth of the field of quantum optics.

Since those days, the HBT technique has been adopted in many different fields of physics as a tool to remotely measure the properties of various sources. For example, the HBT method has been used to measure the properties of subatomic particles created in nuclear collisions [6], of Bose-Einstein condensates (BECs) in lattice potentials [7,8], and of other systems [9-13]. In a work recently published in Nature Photonics [14], we note that these modern uses of HBT interferometry rely on the assumption that there are no interactions between the particles on their way from the source to the detectors. Such interactions (or nonlinear effects, in the case of classical waves) would modify the correlations as the particles (or waves) propagate from the source to the detectors. The assumption of no interactions is probably valid in the astronomical case (although, given the very long distances involved, even that might be questioned), but it is not necessarily true for atomic matter waves released from their confining potential, or for charged subatomic particles propagating from the point of interest to the detectors.

To see how one can cope with such complications, we analyzed the effect of interactions on the resulting HBT correlations by considering light propagation in a nonlinear medium, a scenario physically similar to matter waves released from a confining potential (in certain limits, the equations describing the dynamics of matter waves are identical to the equations used in our paper). Using a spatial light modulator and diffusers, we mimicked a spatially incoherent light source in a controlled manner, and measured the HBT correlations after propagation of the speckle field through a nonlinear medium. We investigated both repulsive and attractive interactions, in two- and three-dimensional settings. These measurements show how interactions modify the measured HBT correlations. While the fact that interactions modify correlations is expected, our work provides an intuitive picture of the source of this modification. The key idea is to follow the propagation of the speckle patterns in the nonlinear medium. As discussed above, when there are no interactions the speckles diffract as they propagate. In the presence of interactions, or nonlinearity, each speckle can instead turn into what is known as a soliton: a self-trapped entity whose size does not change during propagation. The size of the speckles is then no longer a measure of the angular size of the source; it is in fact a measure of the strength of the interactions.
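The speckle-to-soliton picture can be illustrated with a one-dimensional nonlinear Schrödinger toy model (a minimal sketch in dimensionless units, assuming numpy; the real experiment is two- and three-dimensional). A sech-shaped "speckle" spreads under linear propagation but keeps its width when a focusing nonlinearity is switched on:

```python
import numpy as np

# Grid and propagation parameters (dimensionless units)
N, LBOX, Z, DZ = 256, 40.0, 5.0, 0.01
x = np.linspace(-LBOX / 2, LBOX / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=LBOX / N)
steps = int(Z / DZ)

def rms_width(u):
    """Root-mean-square width of the intensity profile |u|^2."""
    intensity = np.abs(u) ** 2
    return np.sqrt(np.sum(x**2 * intensity) / np.sum(intensity))

def propagate(u, nonlinear):
    """First-order split-step integration of i u_z + 0.5 u_xx + |u|^2 u = 0."""
    u = u.copy()
    lin = np.exp(-0.5j * k**2 * DZ)       # diffraction step in Fourier space
    for _ in range(steps):
        u = np.fft.ifft(lin * np.fft.fft(u))
        if nonlinear:
            u *= np.exp(1j * np.abs(u) ** 2 * DZ)   # focusing Kerr step
    return u

u0 = 1.0 / np.cosh(x)              # a "speckle" with the sech soliton profile
w0 = rms_width(u0)
w_lin = rms_width(propagate(u0, nonlinear=False))   # diffracting speckle
w_sol = rms_width(propagate(u0, nonlinear=True))    # self-trapped "speckoliton"

print(f"initial width {w0:.2f}, linear {w_lin:.2f}, nonlinear {w_sol:.2f}")
```

In the linear run the width grows severalfold over the propagation distance, while in the nonlinear run the sech profile is a soliton of the equation and its width stays essentially fixed, so the measured speckle size no longer reflects the source.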

Experimental observation of a speckle pattern propagating in a nonlinear medium. In the interaction free case, the width of a typical speckle is inversely proportional to the width of the source, W. In the presence of interactions, one needs to take into account the strength of the intensity fluctuations as well. Image credit: Adi Natan
But perhaps more importantly, we provide a new framework that incorporates interactions into HBT interferometry. We found that the information on the source can still be retrieved if the interactions are taken into account correctly. We show that in the presence of interactions the angular size of the source can be recovered, but in addition to the spatial correlations one also needs to measure the strength of the signals' fluctuations. Intuitively, this stems from the fact that speckles which have become “solitons” still propagate at different angles. Since these “speckolitons” keep their size during propagation, the chance that a speckoliton will hit the detectors goes down as the distance from the source to the detector increases. But the intensity a speckoliton carries is much higher than that of a linear speckle, which diffracts as it propagates. Careful analysis of this phenomenon leads to the conclusion that, in the presence of interactions, the intensity fluctuations carry the missing information on the angular size of the source.

One can measure the strength of the fluctuations simply by looking at the variance of the detectors' readouts, which is closely related to the contrast between the bright and dark patches of the speckle pattern. As a possible application, consider HBT interferometry with trapped BECs. A recent paper [7] identified the complication, in HBT interferometry, that arises from interactions during the time of flight after the condensate is released from the trap. That paper suggests an intricate manipulation of the condensate during the time of flight to scale out the effects of interactions. Our paper provides a framework for including the interactions in the analysis, without the need for such complicated experiments.

References:
[1] Goodman, J. W. , "Speckle Phenomena in Optics" (Roberts & Co., 2007)
[2] Hanbury Brown, R. & Twiss, R. Q. "A test of a new type of stellar interferometer on Sirius", Nature 178, 1046–1048 (1956).
Abstract.
[3] Hanbury Brown, R. & Twiss, R. Q. "Correlations between photons in two coherent beams of light", Nature 177, 27–29 (1956).
Abstract.
[4] Hanbury Brown, R. "The Intensity Interferometer: Its Application to Astronomy" (Taylor & Francis, 1974).
[5] Glauber, R. J. "Photon correlations", Phys. Rev. Lett. 10, 84–86 (1963).
Abstract.
[6] Baym, G. "The physics of Hanbury Brown–Twiss intensity interferometry: from stars to nuclear collisions", Acta. Phys. Pol. B 29, 1839–1884 (1998).
Article.
[7] Simon Fölling, Fabrice Gerbier, Artur Widera, Olaf Mandel, Tatjana Gericke & Immanuel Bloch, "Spatial quantum noise interferometry in expanding ultracold atom clouds", Nature 434, 481–484 (2005).
Abstract.
[8] Altman, E., Demler, E. & Lukin, M. D. "Probing many body correlations of ultra-cold atoms via noise correlations", Phys. Rev. A 70, 013603 (2004).
Abstract.
[9] M. Schellekens, R. Hoppeler, A. Perrin, J. Viana Gomes, D. Boiron, A. Aspect, C. I. Westbrook, "Hanbury Brown Twiss effect for ultracold quantum gases", Science 310, 648–651 (2005).
Abstract.
[10] Oliver, W. D., Kim, J., Liu J. & Yamamoto, Y. "Hanbury Brown and Twiss-type experiment with electrons", Science 284, 299–301 (1999).
Abstract.
[11] Kiesel, H., Renz, A. & Hasselbach, F. "Observation of Hanbury Brown–Twiss anticorrelations for free electrons", Nature 418, 392–394 (2002).
Abstract.
[12] T. Jeltes, J. M. McNamara, W. Hogervorst, W. Vassen, V. Krachmalnicoff, M. Schellekens, A. Perrin, H. Chang, D. Boiron, A. Aspect & C. I. Westbrook, "Comparison of the Hanbury Brown–Twiss effect for bosons and fermions", Nature 445, 402–405 (2007).
Abstract.
[13] I. Neder, N. Ofek, Y. Chung, M. Heiblum, D. Mahalu & V. Umansky, "Interference between two indistinguishable electrons from independent sources", Nature 448, 333–337 (2007).
Abstract.
[14] Bromberg, Y., Lahini, Y., Small, E. & Silberberg, Y. "Hanbury Brown and Twiss interferometry with interacting photons", Nature Photonics 4, 721-726 (2010).
Abstract.



Sunday, October 31, 2010

Random Numbers Game with Quantum Dice

A simple device measures the quantum noise of vacuum fluctuations and generates true random numbers.

Image caption: A true game of chance: Max Planck researchers produce true random numbers by making the randomly varying intensity of the quantum noise visible. To do this, they use a strong laser (coming from the left), a beam splitter, two identical detectors and several electronic components. The statistical spread of the measured values follows a Gaussian bell-shaped curve (bottom). Individual values are assigned to sections of the bell-shaped curve that correspond to a number.


Behind every coincidence lies a plan, at least in the world of classical physics. In principle, every event, including the fall of dice or the outcome of a game of roulette, can be explained in mathematical terms. Researchers at the Max Planck Institute for the Science of Light in Erlangen have now constructed a device that generates true randomness [1]. With the help of quantum physics, their machine produces random numbers that cannot be predicted in advance. The researchers exploit the fact that a quantum-mechanical measurement can only produce a particular result with a certain probability, that is, randomly. True random numbers are needed for the secure encryption of data and for reliable simulations of economic processes and changes in the climate.

The phenomenon we commonly refer to as chance is merely a question of a lack of knowledge. If we knew the location, the speed and all other classical characteristics of every particle in the universe with absolute certainty, we would be able to predict almost all processes in the world of everyday experience, even the lottery numbers. The results provided by computer programs are far from random, even when they are designed to appear so: "They merely simulate randomness, but with the help of suitable tests and a sufficient volume of data a pattern can usually be identified," says Christoph Marquardt. In response to this problem, a group of researchers working with Gerd Leuchs and Christoph Marquardt at the Max Planck Institute for the Science of Light and the University of Erlangen-Nuremberg, together with Ulrik Andersen from the Technical University of Denmark, has developed a generator for true random numbers.

True randomness only exists in the world of quantum mechanics. A quantum particle will remain in one place or another and move at one speed or another with a certain degree of probability. "We exploit this randomness of quantum-mechanical processes to generate random numbers," says Christoph Marquardt.

The scientists use vacuum fluctuations as their quantum dice. Such fluctuations are another characteristic of the quantum world: there, even empty space is not truly empty. Even in absolute darkness the energy of half a photon is present and, although it remains invisible, it leaves detectable tracks in sophisticated measurements: these tracks take the form of quantum noise. This completely random noise arises only when the physicists look for it, that is, when they carry out a measurement.

To make the quantum noise visible, the scientists once again reached into the quantum-physics box of tricks: they split a strong laser beam into two equal parts using a beam splitter. A beam splitter has two input and two output ports. The researchers blocked the second input port so that no light could enter; the vacuum fluctuations were still there, however, and they influence the two partial output beams. The physicists then sent the beams to two detectors and measured the intensity of the photon stream. Each photon produces an electron, and the resulting electrical current is registered by the detector.
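The role of the blocked port can be captured in a toy model of balanced detection (our own simplification, with illustrative noise amplitudes): classical technical noise enters both output ports identically, while the vacuum-port noise enters the two ports with opposite signs, so subtracting the photocurrents cancels the classical part and leaves the quantum part.

```python
import random
import statistics

random.seed(7)

N = 20000
MEAN_POWER = 100.0       # strong laser beam (arbitrary units)
CLASSICAL_NOISE = 1.0    # common-mode technical noise (shaking mirrors, etc.)
QUANTUM_NOISE = 0.1      # vacuum fluctuations entering the blocked port

diff, summ = [], []
for _ in range(N):
    c = random.gauss(0.0, CLASSICAL_NOISE)   # identical in both output ports
    q = random.gauss(0.0, QUANTUM_NOISE)     # opposite sign in the two ports
    i1 = 0.5 * (MEAN_POWER + c) + 0.5 * q
    i2 = 0.5 * (MEAN_POWER + c) - 0.5 * q
    diff.append(i1 - i2)                     # = q : pure quantum noise
    summ.append(i1 + i2)                     # = P + c : classical noise remains

sd_diff = statistics.pstdev(diff)
sd_sum = statistics.pstdev(summ)
print(f"std of difference signal: {sd_diff:.3f}")
print(f"std of sum signal:        {sd_sum:.3f}")
```

The difference signal fluctuates only at the (small) quantum-noise level, while the sum still carries the full classical noise, which is why the subtraction step isolates the vacuum fluctuations.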

When the scientists subtract the measurement curves produced by the two detectors from each other, they are not left with nothing: what remains is the quantum noise. "During the measurement the quantum-mechanical wave function is converted into a measured value," says Christian Gabriel, who carried out the experiment with the random generator together with his colleagues at the Max Planck Institute in Erlangen. "The statistics are predefined, but the individual measured intensity remains a matter of pure chance." Plotted, the measured values follow a Gaussian bell-shaped curve: the weakest values arise frequently while the strongest occur rarely. The researchers divided the bell-shaped curve of the intensity spread into sections of equal area and assigned a number to each section.
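The equal-area binning step can be sketched with the Gaussian cumulative distribution function: mapping each noise sample through the CDF turns the bell-shaped distribution into a uniform one, which is then cut into equal-probability sections. The sketch below (our illustration, using simulated Gaussian samples in place of the measured quantum noise) produces three random bits per sample.

```python
import math
import random

random.seed(42)

NUM_BINS = 8  # 8 equal-probability sections = 3 random bits per sample

def gaussian_cdf(x, mu=0.0, sigma=1.0):
    """Cumulative distribution function of a Gaussian."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def to_random_number(sample, mu=0.0, sigma=1.0):
    """Map one noise sample to its equal-probability section of the bell curve."""
    b = int(NUM_BINS * gaussian_cdf(sample, mu, sigma))
    return min(b, NUM_BINS - 1)   # guard the CDF = 1.0 edge case

# Stand-in for the measured quantum noise: Gaussian samples.
counts = [0] * NUM_BINS
for _ in range(8000):
    counts[to_random_number(random.gauss(0.0, 1.0))] += 1

print(counts)   # roughly uniform across the 8 bins
```

Because each section has equal area under the bell curve, every output number is equally likely, even though the raw noise values themselves are not uniformly distributed.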

Needless to say, the researchers did not decipher this quantum mechanics puzzle to pass the time during their coffee breaks. "True random numbers are difficult to generate but they are needed for a lot of applications," says Gerd Leuchs, Director of the Max Planck Institute for the Science of Light in Erlangen. Security technology, in particular, needs random combinations of numbers to encode bank data for transfer. Random numbers can also be used to simulate complex processes whose outcome depends on probabilities. For example, economists use such Monte Carlo simulations to predict market developments and meteorologists use them to model changes in the weather and climate.

There is a good reason why the Erlangen-based physicists chose to produce the random numbers from vacuum fluctuations rather than from other random quantum processes. When physicists observe, for example, the velocity distribution of electrons or the quantum noise of a laser, the random quantum noise is usually overlaid with classical noise, which is not random. "When we want to measure the quantum noise of a laser beam, we also observe classical noise that originates, for example, from a shaking mirror," says Christoffer Wittmann, who also worked on the experiment. In principle, the vibration of the mirror can be calculated as a classical physical process, and it therefore spoils the random game of chance.

"Admittedly, we also get a certain amount of classical noise from the measurement electronics," says Wolfgang Mauerer, who studied this aspect of the experiment. "But we know our system very well and can calculate this noise very accurately and remove it." The vacuum fluctuations not only let the physicists eavesdrop on pure quantum noise; they also guarantee that no one else can listen in. "The vacuum fluctuations provide unique random numbers," says Christoph Marquardt. For other quantum processes, such a guarantee is more difficult to provide, and the danger arises that a data spy could obtain a copy of the numbers. "This is precisely what we want to avoid in the case of random numbers for data keys," says Marquardt.

Although the quantum dice are based on mysterious phenomena from the quantum world that are entirely counterintuitive to everyday experience, the physicists do not require particularly sophisticated equipment to observe them. The technical components of their random generator can be found among the basic equipment used in many laser laboratories. "We do not need either a particularly good laser or particularly expensive detectors for the set-up," explains Christian Gabriel. This is, no doubt, one of the reasons why companies have already expressed interest in acquiring this technology for commercial use.

References:
[1]
Christian Gabriel, Christoffer Wittmann, Denis Sych, Ruifang Dong, Wolfgang Mauerer, Ulrik L. Andersen, Christoph Marquardt, Gerd Leuchs, "A generator for unique quantum random numbers based on vacuum states", Nature Photonics, 4, 711-715 (October, 2010).
Abstract.
[2]
More information about the quantum random number generator >>



Sunday, October 24, 2010

Looking for a Dark Matter Signature in the Sun’s Interior

Ilídio Lopes

[This is an invited article based on the author's work in collaboration with Joseph Silk of the University of Oxford -- 2Physics.com]

Author: Ilídio Lopes
Affiliation:
Centro Multidisciplinar de Astrofísica, Instituto Superior Técnico, Lisboa, Portugal;
Departamento de Física, Universidade de Évora, Évora, Portugal.

The standard concordance cosmological model of the Universe has firmly established that about 85% of its matter consists of cold, non-baryonic particles which are almost collisionless. During its evolution, the Universe formed a complex network of dark matter haloes in which baryons are gravitationally trapped, leading to the formation of galaxies and stars, including our own Galaxy and our Sun. There are many particle-physics candidates for dark matter, whose specific masses and other properties are still unknown. Among these candidates, the neutralino, a fundamental particle proposed by supersymmetric models of particle physics, seems to be the most suitable. The neutralino is a weakly interacting massive particle whose present-day relic abundance was set by the thermal freeze-out of annihilating dark matter in the primordial Universe.

Among celestial bodies, the Sun is a privileged place to look for dark matter particles, due to its proximity to the Earth. More significantly, its large mass, which constitutes 99% of the mass of the solar system, creates a natural local trap for the capture of dark matter particles. Present-day simulations show that dark matter particles in our local dark matter halo, depending on their mass and other intrinsic properties, can be gravitationally captured by the Sun and accumulate in significant amounts in its core. By means of helioseismology and solar neutrinos we are able to probe the physics of the Sun's interior, and by doing so we can look for a dark matter signature.

Neutrinos, once produced in the nuclear reactions of the solar core, leave the Sun and travel to Earth in about 8 minutes. These neutrinos stream freely to Earth, subject only to weak interactions with baryons, with a typical scattering cross section of the order of 10⁻⁴⁴ cm², and are hence natural "messengers" of the physical processes occurring in the Sun's deepest layers. In a paper to be published in the journal Science [1], Ilídio Lopes (Évora University and Instituto Superior Técnico) and Joseph Silk (Oxford University) suggest that the presence of dark matter particles in the Sun's interior, depending on their mass among other properties, can cause a significant drop in its central temperature, leading to a decrease in the neutrino fluxes produced in the Sun's core. The calculations show that, in some dark matter scenarios, an isothermal solar core is formed. In another paper, published in The Astrophysical Journal Letters [2], the same authors suggest that, through the detection of gravity modes in the Sun's interior, helioseismology can also independently test for the presence of dark matter in the Sun's core.

The new generation of solar neutrino experiments will be able to measure the neutrino fluxes produced at different locations in the Sun's core. The Borexino and SNO experiments are starting to measure the neutrino fluxes produced at different depths of the Sun's interior by the nuclear reactions of the proton-proton chain, namely the pp-ν, ⁷Be-ν and ⁸B-ν electron neutrinos, among others. The high-precision measurements expected from such neutrino experiments will provide an excellent tool for testing the existence of dark matter in the Sun's core. In the near future, it is expected that pep-ν neutrino fluxes and neutrinos from the CNO cycle will also be measured, by the Borexino detector or by the upcoming experiments SNO+ and LENA.

This work is supported in part by Fundação para a Ciência e a Tecnologia and Fundação Calouste Gulbenkian.

References:
[1]
Ilídio Lopes, Joseph Silk, "Neutrino Spectroscopy Can Probe the Dark Matter Content in the Sun", Science, DOI: 10.1126/science.1196564, in press.
Abstract.
[2] Ilídio Lopes, Joseph Silk, "Probing the Existence of a Dark Matter Isothermal Core Using Gravity Modes", The Astrophysical Journal Letters, Volume 722, Issue 1, pp. L95-L99 (2010), DOI:10.1088/2041-8205/722/1/L95.
Abstract.



Sunday, October 17, 2010

Optical Nano-antenna Controls Single Quantum Dot Emission

Niek F. van Hulst
[This is an invited article based on recent works by the author and his collaborators -- 2Physics.com]

Author: Niek F. van Hulst
Affiliation: ICFO – Institute of Photonic Sciences, 08860 Castelldefels - Barcelona, Spain.
ICREA – Institució Catalana de Recerca i Estudis Avançats, 08015 Barcelona, Spain.


Can one imagine a TV-antenna to send a beam of light? Yes, nanoscale TV-antennas have now been fabricated and brought into action to steer and brighten up the light of molecules and quantum dots by researchers at ICFO – the Institute of Photonic Sciences, in Barcelona, Spain. The achievement was reported in the Science issue of 20 August 2010 [1].

Everywhere. Antennas are all around in our modern wireless society: they are the front ends in satellites, cell phones, laptops, etc., that establish communication by sending and receiving signals, typically at MHz-GHz frequencies. Characteristic of any town is the chaotic forest of TV antennas covering its roofs: metal bar constructions forming sub-wavelength structures, optimized to receive (or send) directional electromagnetic fields at the wavelengths of the TV or radio signal.

Scaling. Can proven antenna technology be scaled towards the optical domain, i.e. from some 100 MHz to a roughly million times higher frequency of around 500 THz? Inevitably, this implies scaling down to a million times smaller structures, with dimensions of typically 100 nm, requiring nanofabrication accuracy down to a few nm. Moreover, metals at optical frequencies are far from ideal conductors: they are highly dispersive and usually lossy. These are real challenges in scaling antennas towards visible light, but the promise is clear: light, despite its sub-micron wavelength, is conventionally guided by rather bulky elements such as lenses, mirrors and optical fibres. Optical antennas hold the promise of optical logic on a truly sub-wavelength scale, comparable to the scale of electronic integrated circuitry [2]. Indeed, this has motivated the exploration of modern nanofabrication methods, such as focused electron and ion beams, to fabricate nanostructures and antennas with optical resonances [3, 4].

Bright quantum emitters. Yet beyond scaling, optical antennas offer a more fundamental advantage. Conventional antennas are connected to electronic circuitry by wires, impedance-matched to afford efficient communication between the local circuit and a directional far-field signal. What is the optical-frequency analogue of that circuitry? Optical sources and detectors are atoms, molecules, quantum dots: quantum systems. Thus hooking up an atom to an antenna (resonant with the atom) effectively "impedance matches" the atom to the surrounding vacuum. The result is an improved emitter or receiver with optimized communication between the localized near field and the far field: a bright quantum emitter, or an efficient absorber. Indeed, fluorescent molecules close to metallic nanoparticles do show an enhanced signal and a faster radiative decay rate [5].

Quantum emitter @ TV-antenna. With all these potential advantages clearly in mind, we decided to focus on the icon of optimized antenna technology, the TV antenna, and strive to interface it with the quantum world, thus obtaining full control over a directed, bright quantum emitter. The "TV antenna" is properly called a Yagi-Uda antenna, after the design of Hidetsugu Yagi and Shintaro Uda at Tohoku University in Japan in 1926, and was first widely used in Second World War radar systems. The multi-element Yagi-Uda antenna is made of parallel metallic bars: a central half-wavelength dipole bar acts as the active "feed" element for emission or collection, while the surrounding passive elements act as reflector and directors. As a result, the Yagi-Uda antenna has a strongly unidirectional gain profile. This is why a TV antenna on a roof has to be mounted pointing in the right direction to catch the signal. In recent years optical "Yagis" have been proposed and simulated [6], and in 2010 the first directional scattering of red light from an array of Yagi-Uda antennas was presented (fittingly) by a Japanese group [7]. In parallel, the interfacing of a quantum emitter with such an optical Yagi-Uda antenna, to achieve active control of the direction of light emission, had been theoretically predicted [8, 9]. But can one do this in practice? In 2008 we achieved first encouraging results, observing the redirection of the dipolar photon-emission pattern of a single molecule by scanning a resonant monopole antenna probe in its direct proximity [10].
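The feed/reflector/directors logic described above can be illustrated with the classical radio-frequency rule of thumb for element sizing. This is only a sketch under assumed textbook proportions (feed ≈ λ/2, reflector a few percent longer, directors a few percent shorter); the actual optical antenna of the paper uses much shorter gold elements (~140 nm feed at 800 nm) because plasmonic resonances shift the effective wavelength:

```python
# Back-of-the-envelope classical Yagi-Uda element sizing.
# Assumed rule of thumb (NOT the paper's optical design): feed = lambda/2,
# reflector ~5% longer (reflects), directors ~10% shorter (direct).
def yagi_elements(wavelength_nm: float, n_directors: int = 3) -> dict:
    feed = wavelength_nm / 2
    return {
        "reflector": round(feed * 1.05),                   # slightly longer
        "feed": round(feed),                               # driven dipole
        "directors": [round(feed * 0.90)] * n_directors,   # slightly shorter
    }

# Element lengths (nm) a classical design would give at 800 nm:
print(yagi_elements(800))
```

The longer reflector and shorter directors detune them slightly above and below the feed's resonance, which is what biases re-radiation into the forward direction.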

Scanning electron microscopy (SEM) image of a five-element Yagi-Uda antenna consisting of a feed element, one reflector, and three directors, fabricated by e-beam lithography. Overall dimension of the antenna is 800 nm, equal to the wavelength of operation. A quantum dot is attached to one end of the feed element.

Getting it right. The final realization of our idea required a real team effort by ICFO researchers, involving the research groups of both Niek van Hulst and Romain Quidant. First, Tim Taminiau designed a five-element gold Yagi antenna resonant far enough into the red, around 800 nm, that the elements still act as efficient radiation sources, while at the same time providing spectral overlap with the luminescence of CdSeTe quantum dots. Next, to drive the antenna with a quantum dot, it is essential to position the quantum dot at a high-field point of the ~140 nm feed element. Giorgio Volpe and Mark Kreuzer developed a double e-beam lithography and surface-functionalization process to position a quantum dot with ~20 nm accuracy at the end of each feed element in an array of Yagi antennas. Finally, Alberto Curto adapted a single-molecule detection microscope to scan and identify single quantum dots on individual antennas by monitoring spectra, polarization, blinking and antibunching. Most importantly, using a very high numerical aperture (1.46 NA) objective and detection in the back focal plane on an EMCCD camera, Alberto could record the angular luminescence of each single dot-antenna system. Indeed, after getting all the details right, we could observe unidirectional emission of a quantum emitter resonantly coupled to an optical Yagi antenna [1]. The narrow forward angular cone of quantum-dot luminescence shows a forward-to-backward ratio of about 5 [1]. The luminescence also becomes strongly linearly polarized, corresponding to the antenna's dipolar mode. Moreover, the directivity of the quantum-dot emission is sensitive to the tuning of the antenna resonance, e.g. by changing the antenna dimensions; at certain mistuning conditions the emission can even be directed backward. Finally, it should be noted that the Yagi antenna is very compact, with its largest dimension only one wavelength, here 800 nm; directional emission is thus realized from a truly compact area.
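The forward-to-backward ratio of about 5 quoted above can be restated in the decibel front-to-back figure that antenna engineers usually quote. A minimal sketch (the ~5 value comes from the text; the conversion is the standard 10·log10 of a power ratio):

```python
import math

# Convert a forward/backward power (intensity) ratio to decibels,
# the conventional front-to-back figure of merit for antennas.
def front_to_back_db(ratio: float) -> float:
    return 10 * math.log10(ratio)

print(f"F/B = 5  ->  {front_to_back_db(5):.1f} dB")  # about 7 dB
```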

Artist's impression of the directional emission of the Yagi-Uda antenna driven by a single luminescent quantum dot.

Perspective. Our experiment clearly demonstrates that photonic antennas are key nano-elements for controlling single-photon emitters. This provides inspiration to interface such antennas with individual molecules, color centers, proteins, etc., opening new avenues in the fields of active photonic circuits, bio-sensing and quantum information technology [2].

References
[1] A. G. Curto, G. Volpe, T. H. Taminiau, M. P. Kreuzer, R. Quidant, N. F. van Hulst, "Unidirectional Emission of a Quantum Dot Coupled to a Nanoantenna", Science 329, 930 (2010). Abstract.
[2] P. Bharadwaj, B. Deutsch, L. Novotny, "Optical antennas", Adv. Opt. Photon. 1, 438 (2009). Abstract.
[3] H. J. Lezec, A. Degiron, E. Devaux, R. A. Linke, L. Martin-Moreno, F. J. Garcia-Vidal, T. W. Ebbesen, "Beaming light from a subwavelength aperture", Science 297, 820 (2002). Abstract.
[4] P. Mühlschlegel, H.-J. Eisler, O. J. F. Martin, B. Hecht, D. W. Pohl, "Resonant Optical Antennas", Science 308, 1607 (2005). Abstract.
[5] S. Kühn, U. Hakanson, L. Rogobete, V. Sandoghdar, "Enhancement of single-molecule fluorescence using a gold nanoparticle as an optical nanoantenna", Phys. Rev. Lett. 97, 017402 (2006). Abstract.
[6] J. J. Li, A. Salandrino, N. Engheta, "Shaping light beams in the nanometer scale: A Yagi-Uda nanoantenna in the optical domain", Phys. Rev. B 76, 245403 (2007). Abstract.
[7] T. Kosako, Y. Kadoya, H. F. Hofmann, "Directional control of light by a nano-optical Yagi-Uda antenna", Nature Photonics 4, 312 (2010). Abstract.
[8] T. H. Taminiau, F. D. Stefani, N. F. van Hulst, "Enhanced directional excitation and emission of single emitters by a nano-optical Yagi-Uda antenna", Optics Express 16, 10858 (2008). Abstract.
[9] A. F. Koenderink, "Plasmon Nanoparticle Array Waveguides for Single Photon and Single Plasmon Sources", Nano Letters 9, 4228 (2009). Abstract.
[10] T. H. Taminiau, F. D. Stefani, F. B. Segerink, N. F. van Hulst, "Optical antennas direct single-molecule emission", Nature Photonics 2, 234 (2008). Abstract.
