
Sunday, May 29, 2011

Quantum Metrology Meets Noise

[From Left to Right] Ruynet L. de Matos Filho, Luiz Davidovich, Bruno M. Escher


Authors: Bruno M. Escher, Ruynet L. de Matos Filho, and Luiz Davidovich

Affiliation: Instituto de Física, Universidade Federal do Rio de Janeiro, Brazil


Link to the Quantum Optics and Quantum Information Group >>

The estimation of parameters is an essential part of the scientific analysis of experimental data. It may play an important role at a very fundamental level, involving the measurement of fundamental constants of Nature – for instance the speed of light in vacuum, the Planck constant, and the gravitational constant. Furthermore, it has widespread practical implications in the characterization of parameter-dependent physical processes – for instance the estimation of the duration of a phenomenon, of the depth of an oil deposit through the scattering of seismic waves, of the transition frequency of an atom, or of the phase shift in an interferometric measurement due to the presence of gravitational waves.

Detailed techniques for parameter estimation, dating back to the work of R. A. Fisher in 1922 [1, 2] and of H. Cramér [3] and C. R. Rao [4] a decade later, have allowed the characterization of the achievable limits in the precision of estimation. The basic steps in parameter estimation are illustrated in Figure 1: one prepares a probe in a known initial configuration, sends it through the parameter-dependent process to be investigated, then measures the final configuration of the probe, and from this measurement one estimates the value of the parameter.

Figure 1: General protocol for estimating an unknown parameter. In the first step (first box on the left), the probe is initialized in a fiducial configuration. It is then sent through a parameter-dependent process, which modifies its configuration (second and third boxes). In the next step (fourth box), the probe is submitted to a measurement. Finally, an estimate of the true value of the parameter is made, based on the results of that measurement (last box).

Because realistic experimental data are noisy, it is not possible to associate an experimental result unambiguously (through an estimate) with the true value of the parameter. The error in an estimate may be quantified by the statistical average of the square of the difference between the estimated and the true value of the parameter; the so-called Cramér-Rao limit yields a lower bound on this error, which is inversely proportional to the square root of the number N of repetitions of the measurement process. In single-parameter estimation, this bound is expressed in terms of a quantity known as the Fisher Information: the larger this quantity, the more accurate the estimate can be.
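Written out in the standard notation of estimation theory (a notational convention assumed here, not spelled out in the text above), the Cramér-Rao bound reads

\[ \Delta\theta \;\geq\; \frac{1}{\sqrt{N\,F(\theta)}}, \]

where \(\Delta\theta\) is the root-mean-square error of the estimate, N is the number of repetitions, and \(F(\theta)\) is the Fisher Information.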

Indeed, the amount of information that can be extracted from experiments about the true value of an unknown parameter is given by the Fisher Information. This quantifier of information depends on properties of the probe, the parameter-dependent process, and the measurement on the probe used to investigate the process. An important aim of metrology is to calculate the Fisher Information, to find ways to maximize it, and to find protocols that allow for better estimation.
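For a measurement with outcomes x occurring with probabilities \(p(x|\theta)\), the Fisher Information takes the standard textbook form (quoted here for completeness)

\[ F(\theta) \;=\; \sum_{x} p(x|\theta)\left[\frac{\partial \ln p(x|\theta)}{\partial\theta}\right]^{2}. \]

As a simple illustration (not taken from the work discussed here): for a single photon leaving an interferometer through one of two ports with probabilities \(\cos^{2}(\theta/2)\) and \(\sin^{2}(\theta/2)\), this formula gives \(F(\theta)=1\), so the phase error after N repetitions is bounded by \(1/\sqrt{N}\).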

Quantum Metrology [5, 6] takes into account the quantum character of the systems and processes involved in the estimation of parameters. In this case, the estimation error is still limited by the Cramér-Rao bound, expressed in terms of the Fisher Information, which, as before, quantifies the maximum amount of information that can be extracted about the parameter, but now taking into account the constraints imposed by quantum physics: in particular, its intrinsic probabilistic nature, the dependence of the result on the measurement scheme, and the more restricted set of possible measurements.

The so-called Quantum Fisher Information [7] involves a maximization over all possible measurement strategies allowed by quantum mechanics. It characterizes the maximum amount of information that can be extracted from quantum experiments about an unknown parameter using the best (and ideal) measurement device, and it establishes the best precision that can be attained with a given quantum probe. The ultimate precision limit in quantum parameter estimation is obtained by maximizing the Quantum Fisher Information over all initial states of the probe. In the ideal situation of systems isolated from the environment, useful analytic results allow the calculation of this ultimate bound. It can then be shown that quantum strategies, involving non-classical characteristics of the probes such as entanglement and squeezing, lead to much better bounds than standard approaches that do not profit from these properties [8]. Thus, for a noiseless optical interferometer (Figure 2), the use of entangled states of n photons leads to a precision in the measurement of phase shifts proportional to 1/n, a considerable improvement over the standard limit, which is inversely proportional to the square root of the number of photons.
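In formulas (again using the standard notation of the field rather than that of any specific reference), the quantum version of the bound reads

\[ \Delta\theta \;\geq\; \frac{1}{\sqrt{\nu\,F_{Q}}}, \]

where \(F_{Q}\) is the Quantum Fisher Information, already maximized over all measurements allowed by quantum mechanics, and \(\nu\) is the number of repetitions. For a noiseless interferometer probed by n photons, separable states give at best \(F_{Q}\propto n\), i.e. the standard limit \(\Delta\theta\sim 1/\sqrt{n}\), whereas maximally entangled states give \(F_{Q}\propto n^{2}\), i.e. the Heisenberg scaling \(\Delta\theta\sim 1/n\).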

Figure 2: Optical interferometer. A light field, in a well-known state, is sent through the interferometer. A phase shift θ, due to a refractive material in one of the interferometer arms, can be estimated by measurements on the output field.

In practice, systems cannot be completely isolated from their environment. This leads to the phenomenon of decoherence, which degrades quantum effects, thus limiting the usefulness of quantum strategies. In this case, it is important to establish the robustness of those strategies. However, determining the ultimate precision bound for systems under the influence of the environment usually involves painstaking numerical work [9, 10], since up to now there has been no general approach for this more realistic situation [11, 12]. The work by Escher et al. [13], recently published in Nature Physics, provides a general framework for quantum metrology in the presence of noise. It can be used to circumvent this difficulty, leading to useful analytic bounds for important problems.

The main idea behind the proposed approach is to consider the probe together with an environment as a single entity, and to consider the Quantum Fisher Information corresponding to this enlarged system, which implies a maximization over all possible measurement strategies applied to the combined system of probe plus environment. This quantity is then an upper bound to the Quantum Fisher Information of the probe alone. It can be shown that there are many (in fact, infinitely many!) equivalent environments that lead to the same noisy dynamics of the probe. The work by Escher et al. shows, however, that it is always possible to choose an environment such that the information about the parameter obtained from measurements on probe plus environment is redundant with respect to the information obtained from the probe alone. In this case, the Quantum Fisher Information of the enlarged system coincides with the corresponding quantity for the probe. This allows, in principle, the determination of the ultimate precision limit using the methods previously developed for isolated systems.
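Schematically, and in a notation chosen here for illustration (not necessarily that of Ref. [13]), the argument rests on the inequality

\[ F_{Q}\big[\rho_{S}(\theta)\big] \;\leq\; C_{Q} \;\equiv\; \min_{\text{purifications}} F_{Q}\big[\,|\Psi_{S,E}(\theta)\rangle\,\big], \]

where the minimization runs over all ways of modeling the environment E (equivalently, over all purifications of the noisy probe dynamics), and equality is attained for the special class of environments described above, for which monitoring the environment adds no information about θ.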

Even though finding this special class of environments is in general a difficult problem, useful approximations, based on physical insight, can be found, which yield analytical bounds for the precision in noisy systems. The power of this framework was demonstrated in the paper published in Nature Physics by applying it to lossy optical interferometry and to atomic spectroscopy under dephasing, displaying in both cases how noise affects the precision in the estimation of the relevant parameter. Thus, for a noisy optical interferometer probed by n photons, it was shown that there is a continuous transition in the precision of phase-shift estimation, as the number of photons increases, from a 1/n scaling, the ultimate quantum limit in the absence of noise, to the so-called standard limit, inversely proportional to the square root of the number of photons. This result shows that noise unavoidably leads to standard-limit scaling once the number of photons reaches a critical value, which depends on the noise strength.
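For the lossy optical interferometer, for example, the asymptotic form of such a bound (quoted here in standard notation as an assumption, up to numerical factors, rather than transcribed verbatim from Ref. [13]) is

\[ \Delta\theta \;\gtrsim\; \sqrt{\frac{1-\eta}{\eta\,n}}, \]

where η is the transmission of the interferometer arm; the Heisenberg-like 1/n scaling then survives only as long as n stays below a critical value of order η/(1−η), beyond which the standard square-root scaling takes over.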

Due to its generality, this framework may be applied to a large variety of systems, thus offering a useful tool for the determination of the ultimate limits of precision in the estimation of parameters in realistic scenarios.

References
[1] R. A. Fisher, “On the mathematical foundations of theoretical statistics”, Phil. Trans. R. Soc. A, v. 222, pp. 309–368 (1922). Full text in pdf.
[2] R. A. Fisher, “Theory of statistical estimation”, Proc. Camb. Phil. Soc., v. 22, pp. 700–725, 1925. Abstract.
[3] H. Cramér, "Mathematical methods of statistics". (Princeton University Press,1946).
[4] C. R. Rao, "Linear statistical inference and its applications". 2nd ed. (John Wiley & Sons, 1973).
[5] C. Helstrom, "Quantum detection and estimation theory". (Academic, 1976).
[6] A. Holevo, "Probabilistic and statistical aspects of quantum theory". (North-Holland, 1982).
[7] S. Braunstein, C. Caves, “Statistical distance and the geometry of quantum states”, Physical Review Letters, v. 72, pp. 3439–3443 (1994). Abstract.
[8] V. Giovannetti, S. Lloyd, L. Maccone, “Quantum-enhanced measurements: beating the standard quantum limit”, Science, v. 306, pp. 1330–1336 (2004). Abstract.
[9] S. F. Huelga, C. Macchiavello, T. Pellizzari, A. K. Ekert, M. B. Plenio, J. I. Cirac, “Improvement of frequency standards with quantum entanglement”, Physical Review Letters, v. 79, pp. 3865–3868 (1997). Abstract.
[10] U. Dorner, R. Demkowicz-Dobrzanski, B. J. Smith, J. S. Lundeen, W. Wasilewski, K. Banaszek, and I. A. Walmsley, “Optimal quantum phase estimation”, Physical Review Letters, v. 102, pp. 040403 (2009). Abstract.
[11] V. Giovannetti, S. Lloyd, L. Maccone, “Advances in quantum metrology”, Nature Photonics, v. 5, pp. 222–229 (2011). Abstract.
[12] L. Maccone, V. Giovannetti, “Quantum metrology: Beauty and the noisy beast”, Nature Physics, v. 7, pp. 376–377 (2011). Abstract.
[13] B. M. Escher, R. L. de Matos Filho, L. Davidovich, “General framework for estimating the ultimate precision limit in noisy quantum-enhanced metrology”, Nature Physics, v. 7, pp. 406–411 (2011). Abstract.



Sunday, May 22, 2011

Quantum Simulation with Light: Frustrations between Photon Pairs

Philip Walther

Author: Philip Walther

Affiliation:
Vienna Center for Quantum Science and Technology (VCQ), Faculty of Physics, University of Vienna, Austria.

Quantum information science will revolutionize our society if we are able to harness its power. Worldwide theoretical and experimental efforts are therefore focused on the realization of the Holy Grail of quantum-enhanced applications: the quantum computer. But the difficulty of realizing hundreds of coherent quantum gate operations, acting on a comparable number of qubits, as required for the implementation of useful quantum algorithms [1, 2], may seem very discouraging.

On the other hand, more than a quarter of a century ago Richard Feynman [3, 4] envisioned that a well-controlled quantum-mechanical system could be used efficiently to simulate other quantum systems, and would thus be capable of calculating properties that are unfeasible for classical computers. Quantum simulation promises potential returns in understanding detailed quantum phenomena of otherwise inaccessible quantum systems, from molecular structure to the behavior of high-temperature superconductors [5]. Moreover, quantum simulations are conjectured to be less demanding than quantum computations, being less stringent about explicit gate operations or error correction [6]. The tradeoff, however, is that the level of coherent control of quantum systems necessary for the physical realization of quantum simulators is very demanding.

Photonic quantum systems are not only the natural choice for quantum communication and metrology applications, but have also been proven to be a suitable system for scalable quantum computing [7]. Moreover, single-particle addressability and tunable measurement-induced interactions make single photons very promising for the simulation of other quantum systems. In addition to this high level of quantum control, such photonic quantum simulators allow one to exploit quantum interference at beamsplitters, which can generate photon entanglement corresponding to the ground states of complex correlated chemical or solid-state systems.

The ground-state properties of frustrated quantum systems have attracted significant interest due to the conjecture that these quantum phenomena may be important for the understanding of high-temperature superconductivity [8]. A quantum system is frustrated if competing requirements cannot be satisfied simultaneously.
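A textbook illustration of frustration (not specific to the experiment described below) is given by three spin-1/2 particles on a triangle with antiferromagnetic Heisenberg couplings,

\[ H \;=\; J\left(\mathbf{S}_{1}\!\cdot\!\mathbf{S}_{2} + \mathbf{S}_{2}\!\cdot\!\mathbf{S}_{3} + \mathbf{S}_{3}\!\cdot\!\mathbf{S}_{1}\right), \qquad J>0: \]

each bond favors anti-parallel alignment of the two spins it connects, but no configuration of the three spins can satisfy all three bonds at once.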

A research team from the University of Vienna and the Institute for Quantum Optics and Quantum Information of the Austrian Academy of Sciences, headed by Anton Zeilinger, realized for the first time an experimental quantum simulation in which the frustration of paired quantum correlations was closely investigated [9]. Such pairing of quantum correlations is an important mechanism in chemical bonding, in particular in so-called valence bonds, where two electrons from different atoms share an anti-correlated spin state due to the Pauli principle. This leads to maximally entangled spins, with the two spins always oriented in opposite directions.

The same quantum correlations of valence-bond states can be simulated by a pair of photons that is maximally entangled in polarization, i.e. the two photons are always orthogonally polarized. We were therefore able to simulate four spin-1/2 particles, whose ground states are built from two valence bonds, by using two entangled photon pairs. To simulate the interesting entanglement dynamics of these states, however, an effective nonlinear interaction among the photons was implemented by superimposing photons from each pair (Figure 1) at a beamsplitter with a tunable splitting ratio, followed by a measurement of the photons in the output ports.
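In this polarization encoding, a valence bond between two spins can be represented by the two-photon singlet state (written here in standard bra-ket notation for illustration),

\[ |\psi^{-}\rangle \;=\; \frac{1}{\sqrt{2}}\Big(|H\rangle_{1}|V\rangle_{2} - |V\rangle_{1}|H\rangle_{2}\Big), \]

in which the two photons are maximally entangled and always found with orthogonal polarizations, just as the two electron spins in a valence bond are always found anti-aligned.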

Figure 1: Scheme of the photonic setup for studying frustrated spin-1/2 tetramers. The variable beamsplitters (VBS) of this analog quantum simulator allow the interaction to be tuned, driving the frustrated valence-bond states from localized to delocalized.

Depending on the interaction strength, single photons from the originally localized valence bonds face a conflict over partnerships with each other. Each photon can establish a bond with only one partner exclusively, but wants to become correlated with several partners – obviously this leads to frustration. As a result, the quantum system uses “tricks” that allow quantum fluctuations in which different pairings coexist in superposition. In the scientific community such superpositions of valence-bond states are called spin-liquid states and are an active area of research.

Figure 2: Photonic quantum simulation of the ground state energy of a frustrated Heisenberg spin system.

The precise quantum control of our single photons enabled us to extract the total energy (Figure 2) and the pairwise quantum correlations for arbitrary configurations within this spin tetramer (Figure 3). Our simulation also demonstrates that the pairwise quantum correlations and the energy distribution are restricted by quantum monogamy: only two photons can share a valence bond, independent of the coupling to other single photons. This can be nicely seen as a complementarity relation among the various valence-bond configurations when tuning the coupling by changing the beamsplitter's reflectivity.

Figure 3: Experimental observation of the complementarity relation for the pairwise quantum correlation (or Heisenberg energy) among the individual partners as the interaction is changed.

This recent work underlines that quantum simulation is a very good tool for calculating quantum states of matter, and thus opens the path for the investigation of more complex systems.

References
[1] P. W. Shor, "Algorithms for quantum computation: discrete logarithms and factoring", Proc. 35th Annual Symposium on "Foundations of Computer Science" (ed. S. Goldwasser) 124-134 (1994). Abstract.
[2] L. K. Grover, "Quantum Mechanics Helps in Searching for a Needle in a Haystack", Phys. Rev. Lett. 79, 325 (1997). Abstract.
[3] R. Feynman, "Simulating physics with computers," International Journal of Theoretical Physics, 21, 467 (1982). Abstract.
[4] R. P. Feynman, "Quantum mechanical computers," Foundations of Physics 16, 507 (1986). Abstract.
[5] I. Buluta, F. Nori, "Quantum Simulators," Science 326, 108 (2009). Abstract.
[6] A. Aspuru-Guzik, A. D. Dutoi, P. J. Love, M. Head-Gordon, "Simulated Quantum Computation of Molecular Energies," Science 309, 1704 (2005). Abstract.
[7] E. Knill, R. Laflamme, G. J. Milburn, "A scheme for efficient quantum computation with linear optics," Nature 409, 46 (2001). Abstract.
[8] P. W. Anderson, "The Resonating Valence Bond State in La2CuO4 and Superconductivity," Science 235, 1196 (1987).
[9] Xiao-song Ma, Borivoje Dakic, William Naylor, Anton Zeilinger & Philip Walther, "Quantum simulation of the wavefunction to probe frustrated Heisenberg spin systems," Nature Physics, 7, 399 (May, 2011). Abstract.



Sunday, May 15, 2011

Measurements of the 'Edge States' of Graphene Nanoribbons

Michael Crommie [photo courtesy: Lawrence Berkeley National Laboratory]

As far back as the 1990s, long before anyone had actually isolated graphene – a honeycomb lattice of carbon just one atom thick – theorists were predicting extraordinary properties at the edges of graphene nanoribbons. Now physicists at the U.S. Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab), and their colleagues at the University of California at Berkeley, Stanford University, and other institutions, have made the first precise measurements of the “edge states” of well-ordered nanoribbons.

A graphene nanoribbon is a strip of graphene that may be only a few nanometers wide. Theorists have envisioned that nanoribbons, depending on their width and the angle at which they are cut, would have unique electronic, magnetic, and optical features, including band gaps like those in semiconductors, which sheet graphene doesn’t have.

“Until now no one has been able to test theoretical predictions regarding nanoribbon edge-states, because no one could figure out how to see the atomic-scale structure at the edge of a well-ordered graphene nanoribbon and how, at the same time, to measure its electronic properties within nanometers of the edge,” says Michael Crommie of Berkeley Lab’s Materials Sciences Division (MSD) and UC Berkeley’s Physics Department, who led the research. “We were able to achieve this by studying specially made nanoribbons with a scanning tunneling microscope.”

Graphene nanoribbons are narrow sheets of carbon atoms only one layer thick. Their width, and the angles at which the edges are cut, produce a variety of electronic states, which have been studied with precision for the first time using scanning tunneling microscopy and scanning tunneling spectroscopy. [image courtesy: Lawrence Berkeley National Laboratory]

The team’s research not only confirms theoretical predictions but opens the prospect of building quick-acting, energy-efficient nanoscale devices from graphene-nanoribbon switches, spin-valves, and detectors, based on either electron charge or electron spin. Farther down the road, graphene nanoribbon edge states open the possibility of devices with tunable giant magnetoresistance and other magnetic and optical effects.

Crommie and his colleagues have published their research in Nature Physics, available May 8, 2011 in advance online publication [1].

The well-tempered nanoribbon

“Making flakes and sheets of graphene has become commonplace,” Crommie says, “but until now, nanoribbons produced by different techniques have exhibited, at best, a high degree of inhomogeneity” – typically resulting in disordered ribbon structures with only short stretches of straight edges appearing at random. The essential first step in detecting nanoribbon edge states is access to uniform nanoribbons with straight edges, well-ordered on the atomic scale.

Hongjie Dai of Stanford University’s Department of Chemistry and Laboratory for Advanced Materials, a member of the research team, solved this problem with a novel method of “unzipping” carbon nanotubes chemically. Graphene rolled into a cylinder makes a nanotube, and when nanotubes are unzipped in this way the slice runs straight down the length of the tube, leaving well-ordered, straight edges.

By "unzipping" carbon nanotubes, regular edges with differing chiralities can be produced between the extremes of the zigzag configuration and, at a 30-degree angle to it, the armchair configuration. [image courtesy: Lawrence Berkeley National Laboratory]


Graphene can be wrapped at almost any angle to make a nanotube. The way the nanotube is wrapped determines the pitch, or “chiral vector,” of the nanoribbon edge when the tube is unzipped. A cut straight along the outer atoms of a row of hexagons produces a zigzag edge. A cut made at a 30-degree angle from a zigzag edge goes through the middle of the hexagons and yields scalloped edges, known as “armchair” edges. Between these two extremes are a variety of chiral vectors describing edges stepped on the nanoscale, in which, for example, after every few hexagons a zigzag segment is added at an angle.

These subtle differences in edge structure have been predicted to produce measurably different physical properties, which potentially could be exploited in new graphene applications. Steven Louie of UC Berkeley and Berkeley Lab’s MSD was the research team’s theorist; with the help of postdoc Oleg Yazyev, Louie calculated the expected outcomes, which were then tested against experiment.

Chenggang Tao of MSD and UCB led a team of graduate students in performing scanning tunneling microscopy (STM) of the nanoribbons on a gold substrate, which resolved the positions of individual atoms in the graphene nanoribbons. The team looked at more than 150 high-quality nanoribbons with different chiralities, all of which showed an unexpected feature, a regular raised border near their edges forming a hump or bevel. Once this was established as a real edge feature – not the artifact of a folded ribbon or a flattened nanotube – the chirality and electronic properties of well-ordered nanoribbon edges could be measured with confidence, and the edge regions theoretically modeled.

Electronics at the edge

“Two-dimensional graphene sheets are remarkable in how freely electrons move through them, including the fact that there’s no band gap,” Crommie says. “Nanoribbons are different: electrons can become trapped in narrow channels along the nanoribbon edges. These edge-states are one-dimensional, but the electrons on one edge can still interact with the edge electrons on the other side, which causes an energy gap to open up.”

A scanning tunneling microscope determines the topography and orientation of the graphene nanoribbons on the atomic scale. In spectroscopy mode, it determines changes in the density of electronic states, from the nanoribbon's interior to its edge. [image courtesy: Lawrence Berkeley National Laboratory]

Using an STM in spectroscopy mode (STS), the team measured electronic density changes as an STM tip was moved from a nanoribbon edge inward toward its interior. Nanoribbons of different widths were examined in this way. The researchers discovered that electrons are confined to the edge of the nanoribbons, and that these nanoribbon-edge electrons exhibit a pronounced splitting in their energy levels.
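In scanning tunneling spectroscopy the quantity actually recorded is the differential conductance dI/dV; in the standard interpretation (assumed here, not spelled out in the article) it is proportional to the local density of electronic states beneath the tip,

\[ \frac{dI}{dV}(V,\mathbf{r}) \;\propto\; \rho\big(\mathbf{r},\,E_{F}+eV\big), \]

so that sweeping the bias V while stepping the tip from the edge toward the interior maps out how the edge-state density of states varies in both energy and position.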

“In the quantum world, electrons can be described as waves in addition to being particles,” Crommie notes. He says one way to picture how different edge states arise is to imagine an electron wave that fills the length of the ribbon and diffracts off the atoms near its edge. The diffraction patterns resemble water waves coming through slits in a barrier.

For nanoribbons with an armchair edge, the diffraction pattern spans the full width of the nanoribbon; the resulting electron states are quantized in energy and extend spatially throughout the entire nanoribbon. For nanoribbons with a zigzag edge, however, the situation is different. Here diffraction from edge atoms leads to destructive interference, causing the electron states to localize near the nanoribbon edges. Their amplitude is greatly reduced in the interior.

The energy of the electron, the width of the nanoribbon, and the chirality of its edges all naturally affect the nature and strength of these nanoribbon electronic states, an indication of the many ways the electronic properties of nanoribbons can be tuned and modified.

Says Crommie, “The optimist says, ‘Wow, look at all the ways we can control these states – this might allow a whole new technology!’ The pessimist says, ‘Uh-oh, look at all the things that can disturb a nanoribbon’s behavior – how are we ever going to achieve reproducibility on the atomic scale?’”

Crommie himself declares that “meeting this challenge is a big reason for why we do research. Nanoribbons have the potential to form exciting new electronic, magnetic, and optical devices at the nanoscale. We might imagine photovoltaic applications, where absorbed light leads to useful charge separation at nanoribbon edges. We might also imagine spintronics applications, where using a side-gate geometry would allow control of the spin polarization of electrons at a nanoribbon’s edge.”

Although getting there won’t be simple — “The edges have to be controlled,” Crommie emphasizes — “what we’ve shown is that it’s possible to make nanoribbons with good edges and that they do, indeed, have characteristic edge states similar to what theorists had expected. This opens a whole new area of future research involving the control and characterization of graphene edges in different nanoscale geometries.”

Reference
[1] Chenggang Tao, Liying Jiao, Oleg V. Yazyev, Yen-Chia Chen, Juanjuan Feng, Xiaowei Zhang, Rodrigo B. Capaz, James M. Tour, Alex Zettl, Steven G. Louie, Hongjie Dai, and Michael F. Crommie, “Spatially resolving edge states of chiral graphene nanoribbons,” Nature Physics, published online May 8, 2011. doi:10.1038/nphys1991. Abstract.

[The text is written by Paul Preuss of Lawrence Berkeley National Laboratory]



Sunday, May 08, 2011

Largest 3-D Map of the Distant Universe Created by Using Light from 14,000 Quasars

Anže Slosar [photo courtesy: Skorpinski/Berkeley Center for Cosmological Physics]

At a meeting of the American Physical Society in Anaheim, California, on May 1, Anže Slosar, a physicist at the U.S. Department of Energy's Brookhaven National Laboratory, presented the largest-ever three-dimensional map of the distant universe, created by using the light of 14,000 quasars to illuminate ghostly clouds of intergalactic hydrogen.

The map was created by a team of scientists from the Sloan Digital Sky Survey (SDSS-III) using a 2.5-meter telescope with a wide field of view. It provides an unprecedented view of what the universe looked like 11 billion years ago and of the role of dark energy in accelerating the expansion of the universe during that era. The findings are described in an article posted on the arXiv astrophysics preprint server [1].

The new technique used by Slosar and his colleagues turns the standard approach of astronomy on its head. "Usually we make our maps of the universe by looking at galaxies, which emit light," Slosar explained. "But here, we are looking at intergalactic hydrogen gas, which blocks light. It's like looking at the moon through clouds — you can see the shapes of the clouds by the moonlight that they block."

Instead of the moon, the SDSS team observed quasars, brilliantly luminous beacons powered by giant black holes. Quasars are bright enough to be seen billions of light years from Earth, but at these distances they look like tiny, faint points of light. As light from a quasar travels on its long journey to Earth, it passes through clouds of intergalactic hydrogen gas that absorb light at specific wavelengths, which depend on the distances to the clouds. This patchy absorption imprints an irregular pattern on the quasar light known as the "Lyman-alpha forest."
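The wavelengths involved follow the usual cosmological redshift relation (standard textbook physics, not a result of this particular study): hydrogen at redshift z absorbs the quasar light at

\[ \lambda_{\rm obs} \;=\; \lambda_{\rm Ly\alpha}\,(1+z), \qquad \lambda_{\rm Ly\alpha}\simeq 121.6\ \text{nm}, \]

so each absorption feature in the observed spectrum can be translated into the distance of the hydrogen cloud that produced it.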

[Click on the image to see a better resolution version] A slice through the three-dimensional map of the universe. We are looking out from the Milky Way, at the bottom tip of the wedge. Distances are labeled on the right in billions of light-years, and each section of the map is labeled on the left. The black dots going out to about 7 billion light years are nearby galaxies. The red cross-hatched region could not be observed with the SDSS telescope, but the future BigBOSS survey could observe it. The colored region shows the map of intergalactic hydrogen gas in the distant universe. Red areas have more gas; blue areas have less gas. This figure is a slice through the full three-dimensional map. [Image credit: A. Slosar and the SDSS-III collaboration].

An observation of a single quasar gives a map of the hydrogen in the direction of the quasar, Slosar explained. The key to making a full, three-dimensional map is numbers. "When we use moonlight to look at clouds in the atmosphere, we only have one moon. But if we had 14,000 moons all over the sky, we could look at the light blocked by clouds in front of all of them, much like what we can see during the day. You don't just get many small pictures — you get the big picture."

The big picture shown in Slosar's map contains important clues to the history of the universe. The map shows a time 11 billion years ago, when the first galaxies were just starting to come together under the force of gravity to form the first large clusters. As the galaxies moved, the intergalactic hydrogen moved with them. Andreu Font-Ribera, a graduate student at the Institute of Space Sciences in Barcelona, created computer models of how the gas likely moved as those clusters formed. The results of his computer models matched the map well. "That tells us that we really do understand what we're measuring," Font-Ribera said. "With that information, we can compare the universe then to the universe now, and learn how things have changed."

[Click on the image to see a better resolution version] A zoomed-in view of the map slice shown in the previous image. Red areas have more gas; blue areas have less gas. The black scalebar in the bottom right measures one billion light years. [Image credit: A. Slosar and the SDSS-III collaboration].

The quasar observations come from the Baryon Oscillation Spectroscopic Survey (BOSS), the largest of the four surveys that make up SDSS-III. Eric Aubourg, from the University of Paris, led a team of French astronomers who visually inspected every one of the 14,000 quasars individually. "The final analysis is done by computers," Aubourg said, "but when it comes to spotting problems and finding surprises, there are still things a human can do that a computer can't."

Past 2Physics articles on BOSS:
March 28, 2010: "General Relativity Is Valid On Cosmic Scale"
October 03, 2009: "BOSS – A New Kind of Search for Dark Energy"


"BOSS is the first time anyone has used the Lyman-alpha forest to measure the three-dimensional structure of the universe," said David Schlegel, a physicist at Lawrence Berkeley National Laboratory in California and the principal investigator of BOSS. "With any new technique, people are nervous about whether you can really pull it off, but now we've shown that we can." In addition to BOSS, Schlegel noted, the new mapping technique can be applied to future, still more ambitious surveys, like its proposed successor BigBOSS.

When BOSS observations are completed in 2014, astronomers will be able to make a map ten times larger than the one being released today, according to Patrick McDonald of Lawrence Berkeley National Laboratory and Brookhaven National Laboratory, who pioneered techniques for measuring the universe with the Lyman-alpha forest and helped design the BOSS quasar survey. BOSS's ultimate goal is to use subtle features in maps like Slosar's to study how the expansion of the universe has changed during its history. "By the time BOSS ends, we will be able to measure how fast the universe was expanding 11 billion years ago with an accuracy of a couple of percent. Considering that no one has ever measured the cosmic expansion rate so far back in time, that's a pretty astonishing prospect."

Quasar expert Patrick Petitjean of the Institut d'Astrophysique de Paris, a key member of Aubourg's quasar-inspecting team, is looking forward to the continuing flood of BOSS data. "Fourteen thousand quasars down, one hundred and forty thousand to go," he said. "If BOSS finds them, we'll be happy to look at them all, one by one. With that much data, we're bound to find things that we never expected."

Reference
[1] "The Lyman-alpha forest in three dimensions: measurements of large scale flux correlations from BOSS 1st-year data," by Anže Slosar, Andreu Font-Ribera, Matthew M. Pieri, James Rich, Jean-Marc Le Goff, Eric Aubourg, John Brinkmann, Nicolas Busca, Bill Carithers, Romain Charlassier, Marina Cortes, Rupert Croft, Kyle S. Dawson, Daniel Eisenstein, Jean-Christophe Hamilton, Shirley Ho, Khee-Gan Lee, Robert Lupton, Patrick McDonald, Bumbarija Medolin, Jordi Miralda-Escudé, Adam D. Myers, Robert C. Nichol, Nathalie Palanque-Delabrouille, Isabelle Paris, Patrick Petitjean, Yodovina Piskur, Emmanuel Rollinde, Nicholas P. Ross, David J. Schlegel, Donald P. Schneider, Erin Sheldon, Benjamin A. Weaver, David H. Weinberg, Christophe Yeche, and Don York. Available online: arXiv:1104.5244.



Sunday, May 01, 2011

Towards Very Compact Invisibility Cloaks

Jingjing Zhang (left) and Niels Asger Mortensen (right) of Technical University of Denmark

Authors: Jingjing Zhang [1], Shuang Zhang [2], and Niels Asger Mortensen [1]

Affiliations:
[1] DTU Fotonik - Department of Photonics Engineering, Technical University of Denmark, Denmark
[2] School of Physics and Astronomy, University of Birmingham, UK

Much effort has been devoted to realizing invisibility cloaks ever since the theoretical proposals based on transformation optics were first put forward in 2006 [1, 2]. There have been two main trends in the realization of invisibility cloaks: one is to extend the bandwidth of the cloak, and the other is to push the working frequencies into the optical spectrum. Both concerns have been addressed very recently, with objects of millimeter size successfully concealed over the whole visible range [3, 4].

Shuang Zhang of University of Birmingham, UK

However, there is still one major problem that needs to be solved in cloak design and realization, and it is an important issue for practical applications: the size of the cloaking device is usually much larger (by one to two orders of magnitude) than that of the cloaked object, meaning that even hiding a tiny object requires a fairly large device. Our team (Jingjing Zhang, Liu Liu, and Niels Asger Mortensen from the Technical University of Denmark, Yu Luo from Imperial College London, and Shuang Zhang from the University of Birmingham) makes the first attempt to address this concern and demonstrates, at optical frequencies, a cloak whose size is only four times that of the hidden object [5].

Past 2Physics article by Shuang Zhang:
February 13, 2011: "Macroscopic Invisibility Cloak made from Natural Birefringent Crystals"


Liu Liu of Technical University of Denmark (left) and Yu Luo from Imperial College London (right)

In previous work on visible-light invisibility cloaks, the ratio between the size of the object to be hidden and the size of the cloak was limited by the small birefringence of the constituent calcite crystal (only ~10% difference between the extraordinary and ordinary refractive indices n_e and n_o), resulting in cloaks 20 times larger than the object. Here we construct an invisibility cloak from nano-structured, homogeneous, anisotropic composite materials, obtained by adopting semiconductor manufacturing techniques: the top silicon layer of an SOI wafer is patterned with subwavelength gratings of appropriate filling factor (see Fig. 1). The effective medium formed by the silicon gratings bypasses the limitations of naturally available materials and gives us extra freedom to design devices as desired, especially devices of miniaturized thickness; the shape of the cloak can also be custom designed by properly arranging the constituent materials.
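In the simplest (zeroth-order) effective-medium description of such subwavelength gratings (quoted here as a standard approximation, not necessarily the exact design formula used by the authors), a grating with filling factor f behaves as a uniaxial medium with permittivities

\[ \varepsilon_{\parallel} = f\,\varepsilon_{\rm Si} + (1-f)\,\varepsilon_{\rm air}, \qquad \varepsilon_{\perp} = \left[\frac{f}{\varepsilon_{\rm Si}} + \frac{1-f}{\varepsilon_{\rm air}}\right]^{-1}, \]

for fields parallel and perpendicular to the grating lines, respectively; the large permittivity contrast of silicon is what makes the resulting anisotropy much stronger than that of natural calcite.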

Fig. 1: Scanning electron microscope images of a fabricated carpet cloak in (a) top view and (b) oblique view. The inset shows the detail of the silicon gratings.


In the measurements, TM-polarized light from a tunable laser is used to characterize the cloak. The cloak works by essentially disguising an object from light, making it appear like a flat ground plane. By precisely restoring the path of the wave reflected from the surface, the cloak creates the illusion of a flat plane for a triangular bump on the surface and any objects underneath it, hiding their presence over wavelengths ranging from 1480 nm to 1580 nm (see Fig. 2).

Fig. 2: The measured output image from a flat surface (left) and a cloaked protruding surface (right) at 1480 nm (a), 1550 nm (b), and 1580 nm (c).

The cloak is made exclusively of dielectric materials, enabling broadband and low-loss invisibility. In contrast to previous works based on nanostructures [6, 7], our cloak also has the advantage of easier design and fabrication. More importantly, the uniform grating profile can be fabricated using large-area interferometric lithography, and therefore there is no size limitation on the invisibility cloak.

This design approach can readily be extended to other frequency ranges. At microwave frequencies, where we have access to dielectric materials with very high permittivity, the size of the cloak relative to the hidden object can be reduced even more significantly. With more precise fabrication, our scheme also holds promise for a true invisibility cloak that works in the visible part of the spectrum and at larger sizes, with potential applications in integrated photonics and plasmonics.

References:
[1] J.B. Pendry, D. Schurig, D.R. Smith, “Controlling electromagnetic fields.” Science 312, 1780–1782 (2006). Abstract.
[2] U. Leonhardt, “Optical conformal mapping.” Science 312, 1777–1780 (2006). Abstract.
[3] Xianzhong Chen, Yu Luo, Jingjing Zhang, Kyle Jiang, John B. Pendry & Shuang Zhang, “Macroscopic invisibility cloaking of visible light.” Nature Communications, 2:176, (2011). Abstract.
[4] Baile Zhang, Yuan Luo, Xiaogang Liu, George Barbastathis,“Macroscopic invisibility cloak for visible light”, Phys. Rev. Lett. 106, 033901 (2011). Abstract.
[5] Jingjing Zhang, Liu Liu, Yu Luo, Shuang Zhang, and Niels Asger Mortensen, "Homogenous optical cloak constructed with uniform layered structures," Optics Express 19, 8625-8631, (2011). Abstract.
[6] J. Valentine, J. Li, T. Zentgraf, G. Bartal, X. Zhang, “An optical cloak made of dielectrics.” Nature Materials, 8, 568–571 (2009). Abstract.
[7] L.H. Gabrielli, J. Cardenas, C.B. Poitras, M. Lipson, “Silicon nanostructure cloak operating at optical frequencies.” Nature Photonics, 3, 461–463 (2009). Abstract.
