
2Physics

2Physics Quote:
"Many of the molecules found by ROSINA DFMS in the coma of comet 67P are compatible with the idea that comets delivered key molecules for prebiotic chemistry throughout the solar system and in particular to the early Earth increasing drastically the concentration of life-related chemicals by impact on a closed water body. The fact that glycine was most probably formed on dust grains in the presolar stage also makes these molecules somehow universal, which means that what happened in the solar system could probably happen elsewhere in the Universe."
-- Kathrin Altwegg and the ROSINA Team

(Read Full Article: "Glycine, an Amino Acid and Other Prebiotic Molecules in Comet 67P/Churyumov-Gerasimenko")

Sunday, March 28, 2010

General Relativity Is Valid On Cosmic Scale

Uros Seljak [photo courtesy: University of California, Berkeley]

An analysis of more than 70,000 galaxies by University of California, Berkeley, University of Zurich and Princeton University physicists demonstrates that the universe – at least up to a distance of 3.5 billion light years from Earth – plays by the rules set out 95 years ago by Albert Einstein in his General Theory of Relativity.

By calculating the clustering of these galaxies, which stretch nearly one-third of the way to the edge of the universe, and analyzing their velocities and distortion from intervening material, the researchers have shown that Einstein's theory explains the nearby universe better than alternative theories of gravity.

One major implication of the new study is that the existence of dark matter is the most likely explanation for the observation that galaxies and galaxy clusters move as if under the influence of some unseen mass, in addition to the stars astronomers observe.

A partial map of the distribution of galaxies in the Sloan Digital Sky Survey, going out to a distance of 7 billion light years. The amount of galaxy clustering that we observe today is a signature of how gravity acted over cosmic time, and allows us to test whether general relativity holds over these scales. (M. Blanton, Sloan Digital Sky Survey)

"The nice thing about going to the cosmological scale is that we can test any full, alternative theory of gravity, because it should predict the things we observe," said co-author Uros Seljak, a professor of physics and of astronomy at UC Berkeley and a faculty scientist at Lawrence Berkeley National Laboratory who is currently on leave at the Institute of Theoretical Physics at the University of Zurich. "Those alternative theories that do not require dark matter fail these tests."

In particular, the tensor-vector-scalar gravity (TeVeS) theory, which tweaks general relativity to avoid resorting to the existence of dark matter, fails the test.

The result conflicts with a report late last year that the very early universe, between 8 and 11 billion years ago, did deviate from the general relativistic description of gravity.

Seljak and his current and former students, including first authors Reinabelle Reyes, a Princeton University graduate student, and Rachel Mandelbaum, a recent Princeton Ph.D. recipient, report their findings in the March 11 issue of the journal Nature [1]. The other co-authors are Tobias Baldauf, Lucas Lombriser and Robert E. Smith of the University of Zurich, and James E. Gunn, professor of physics at Princeton and father of the Sloan Digital Sky Survey.

Einstein's General Theory of Relativity holds that gravity warps space and time, which means that light bends as it passes near a massive object, such as the core of a galaxy. The theory has been validated numerous times on the scale of the solar system, but tests on a galactic or cosmic scale have been inconclusive.

"There are some crude and imprecise tests of general relativity at galaxy scales, but we don't have good predictions for those tests from competing theories," Seljak said.

An image of a galaxy cluster in the Sloan Digital Sky Survey, showing some of the 70,000 bright elliptical galaxies that were analyzed to test general relativity on cosmic scales. (Sloan Digital Sky Survey)

Such tests have become important in recent decades because the idea that some unseen mass permeates the universe disturbs some theorists and has spurred them to tweak general relativity to get rid of dark matter. TeVeS, for example, says that the acceleration caused by the gravitational force from a body depends not only on the mass of that body, but also on the magnitude of the acceleration itself: below a characteristic scale, gravity is effectively stronger than Newton's law predicts.
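TeVeS was constructed so that its nonrelativistic limit reproduces Modified Newtonian Dynamics (MOND), in which gravity is modified below a characteristic acceleration a0. The following is a minimal sketch of that idea using the commonly adopted "simple" interpolating function mu(x) = x/(1+x); the mass and radius in the example are illustrative round numbers, not a fit to any real galaxy.

```python
import math

G = 6.674e-11    # Newton's constant, m^3 kg^-1 s^-2
A0 = 1.2e-10     # MOND characteristic acceleration, m/s^2 (empirical)

def newtonian_acc(mass_kg, r_m):
    """Ordinary Newtonian gravitational acceleration GM/r^2."""
    return G * mass_kg / r_m**2

def mond_acc(a_newton):
    """Solve a * mu(a/A0) = a_N with mu(x) = x/(1+x), which gives
    a = a_N/2 + sqrt(a_N^2/4 + a_N*A0).  For a_N >> A0 this tends to a_N
    (the Newtonian limit); for a_N << A0 it tends to sqrt(a_N*A0)."""
    return 0.5 * a_newton + math.sqrt(0.25 * a_newton**2 + a_newton * A0)

# Illustrative galaxy outskirts: ~1e41 kg of visible matter at ~10 kpc.
a_n = newtonian_acc(1e41, 3.1e20)
print(round(mond_acc(a_n) / a_n, 2))  # ~1.9: gravity effectively boosted,
                                      # mimicking the pull of unseen mass
```

The boost factor is how such theories reproduce flat galaxy rotation curves without dark matter; the cosmological test described in this article probes whether that trick also works on scales of billions of light years.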

The discovery of dark energy, an enigmatic force that is causing the expansion of the universe to accelerate, has led to other theories, such as one dubbed f(R), to explain the expansion without resorting to dark energy.

Tests to distinguish between competing theories are not easy, Seljak said. A theoretical cosmologist, he noted that cosmological experiments, such as detections of the cosmic microwave background, typically involve measurements of fluctuations in space, while gravity theories predict relationships between density and velocity, or between density and gravitational potential.

"The problem is that the size of the fluctuation, by itself, is not telling us anything about underlying cosmological theories. It is essentially a nuisance we would like to get rid of," Seljak said. "The novelty of this technique is that it looks at a particular combination of observations that does not depend on the magnitude of the fluctuations. The quantity is a smoking gun for deviations from general relativity."

Three years ago, a team of astrophysicists led by Pengjie Zhang of Shanghai Observatory suggested using a quantity dubbed EG to test cosmological models. EG reflects the amount of clustering in observed galaxies and the amount of distortion of galaxies caused by light bending as it passes through intervening matter, a process known as weak lensing. Weak lensing can make a round galaxy look elliptical, for example.

"Put simply, EG is proportional to the mean density of the universe and inversely proportional to the rate of growth of structure in the universe," Seljak said. "This particular combination gets rid of the amplitude fluctuations and therefore focuses directly on the particular combination that is sensitive to modifications of general relativity."
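For a concrete sense of the test, here is a small Python sketch of the EG statistic's prediction under general relativity, using the standard approximation f(z) ≈ Ωm(z)^0.55 for the growth rate in a flat ΛCDM universe. The inputs (Ωm ≈ 0.25 and a survey redshift z ≈ 0.32) are round values typical of this analysis, not the paper's exact fit.

```python
def E_G_pred(omega_m0, z):
    """GR prediction for the EG statistic: EG = Omega_m(z=0) / f(z),
    where f(z) ~ Omega_m(z)**0.55 is the linear growth rate in flat LCDM."""
    omega_l = 1.0 - omega_m0                       # dark-energy density (flat)
    ez2 = omega_m0 * (1 + z)**3 + omega_l          # H(z)^2 / H0^2
    omega_m_z = omega_m0 * (1 + z)**3 / ez2        # matter fraction at z
    f = omega_m_z ** 0.55                          # growth-rate approximation
    return omega_m0 / f

# Near the mean redshift of the SDSS galaxy sample:
print(round(E_G_pred(0.25, 0.32), 3))  # ~0.40
```

The Nature paper reported a measured EG of about 0.39 ± 0.06 at z ≈ 0.32, consistent with this GR value while excluding the TeVeS prediction.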

Using data on more than 70,000 bright, and therefore distant, red galaxies from the Sloan Digital Sky Survey, Seljak and his colleagues calculated EG and compared it to the predictions of TeVeS, f(R) and the cold dark matter model of general relativity enhanced with a cosmological constant to account for dark energy.

The predictions of TeVeS were outside the observational error limits, while general relativity fit nicely within the experimental error. The EG predicted by f(R) was somewhat lower than that observed, but within the margin of error.

In an effort to reduce the error and thus test theories that obviate dark energy, Seljak hopes to expand his analysis to perhaps a million galaxies when SDSS-III's Baryon Oscillation Spectroscopic Survey (BOSS), led by a team at LBNL and UC Berkeley, is completed in about five years. To reduce the error even further, by perhaps as much as a factor of 10, requires an even more ambitious survey called BigBOSS, which has been proposed by physicists at LBNL and UC Berkeley, among other places.

Future space missions, such as NASA's Joint Dark Energy Mission (JDEM) and the European Space Agency's Euclid mission, will also provide data for a better analysis, though perhaps 10-15 years from now.

Seljak noted that these tests do not tell astronomers the actual identity of dark matter or dark energy. That can only be determined by other types of observations, such as direct detection experiments.

Reference
[1] Reinabelle Reyes, Rachel Mandelbaum, Uros Seljak, Tobias Baldauf, James E. Gunn, Lucas Lombriser, Robert E. Smith, "Confirmation of general relativity on large scales from weak lensing and galaxy velocities", Nature, 464, 256-258 (2010).
Abstract.

[This report is written by Robert Sanders of University of California, Berkeley]



Sunday, March 21, 2010

Theory of Quantum Mechanics Applies to the Motion of Large Objects

(L to R) Andrew Cleland, Aaron O'Connell and John Martinis [photo credit: George Foulsham / Univ of California, Santa Barbara]

A team of physicists from the University of California, Santa Barbara has provided the first clear demonstration that the theory of quantum mechanics applies to the mechanical motion of an object large enough to be seen by the naked eye, fulfilling a longstanding goal among physicists.

In a paper published online March 17 by the journal Nature [1], Aaron O'Connell, a doctoral student in physics, and John Martinis and Andrew Cleland, professors of physics, describe the first demonstration of a mechanical resonator that has been cooled to the quantum ground state, the lowest level of vibration allowed by quantum mechanics. With the mechanical resonator as close as possible to being perfectly still, they added a single quantum of energy to the resonator using a quantum bit (qubit) to produce the excitation. The resonator responded precisely as predicted by the theory of quantum mechanics.

"This is an important validation of quantum theory, as well as a significant step forward for nanomechanics research," said Cleland.

The researchers reached the ground state by designing and constructing a microwave-frequency mechanical resonator that operates similarly to –– but at a higher frequency than –– the mechanical resonators found in many cellular telephones. They wired the resonator to an electronic device developed for quantum computation, a superconducting qubit, and cooled the integrated device to temperatures near absolute zero. Using the qubit as a quantum thermometer, the researchers demonstrated that the mechanical resonator contained no extra vibrations. In other words, it had been cooled to its quantum ground state.
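The claim that the resonator was "as close as possible to being perfectly still" can be quantified with the Bose-Einstein thermal occupation number. The sketch below uses a 6 GHz frequency and 25 mK temperature as illustrative round numbers of the order used in the experiment, not the exact published values.

```python
import math

H = 6.62607015e-34   # Planck constant, J*s
KB = 1.380649e-23    # Boltzmann constant, J/K

def mean_phonon_number(freq_hz, temp_k):
    """Bose-Einstein thermal occupation n = 1 / (exp(hf/kT) - 1)."""
    x = H * freq_hz / (KB * temp_k)
    return 1.0 / math.expm1(x)

# A ~6 GHz resonator at dilution-refrigerator temperature is essentially
# in its ground state; at liquid-helium temperature it would not be.
print(mean_phonon_number(6e9, 0.025))  # ~1e-5 thermal phonons
print(mean_phonon_number(6e9, 4.0))    # ~13 thermal phonons
```

This is why the group built a microwave-frequency resonator: at a cell-phone-like frequency of tens of MHz, no practical refrigerator is cold enough to freeze out the thermal phonons.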

Micrograph of the resonator

The researchers demonstrated that, once cooled, the mechanical resonator followed the laws of quantum mechanics. They were able to create a single phonon, the quantum of mechanical vibration and the smallest unit of vibrational energy, and watch as this quantum of energy was exchanged between the mechanical resonator and the qubit. While exchanging this energy, the qubit and resonator became "quantum entangled," such that measuring the qubit forces the mechanical resonator to "choose" the vibrational state in which it should remain.

In a related experiment, they placed the mechanical resonator in a quantum superposition, a state in which it simultaneously had zero and one quantum of excitation. This is the energetic equivalent of an object being in two places at the same time. The researchers showed that the resonator again behaved as expected by quantum theory.

Reference
[1] A. D. O’Connell, M. Hofheinz, M. Ansmann, Radoslaw C. Bialczak, M. Lenander, Erik Lucero, M. Neeley, D. Sank, H. Wang, M. Weides, J. Wenner, John M. Martinis & A. N. Cleland, "Quantum ground state and single-phonon control of a mechanical resonator", Nature advance online publication 17 March 2010 [doi:10.1038/nature08967].
Abstract.



Sunday, March 14, 2010

Gravitational Lenses Measure the Age and Size of the Universe



Phil Marshall (KIPAC, SLAC/Stanford) demonstrates lensing using a wine glass. [Video courtesy of Brad Plummer/Julie Karceski (SLAC)].


Using entire galaxies as lenses to look at other galaxies, researchers have a new way to measure the size and age of the universe and how rapidly it is expanding, with precision on a par with other established techniques. The measurement determines a value for the Hubble constant, which sets the size scale of the universe, and confirms the age of the universe as 13.75 billion years, to within 170 million years. The results also confirm the strength of dark energy, which is responsible for accelerating the expansion of the universe.

These results, by researchers at the Kavli Institute for Particle Astrophysics and Cosmology (KIPAC) at the US Department of Energy’s SLAC National Accelerator Laboratory and Stanford University, the University of Bonn, and other institutions in the United States and Germany, are published in the March 1 issue of The Astrophysical Journal [1]. This research was supported in part by the Department of Energy Office of Science. The authors of the paper are S. Suyu of the University of Bonn, P. Marshall of KIPAC, M. W. Auger (University of California, Santa Barbara), S. Hilbert (Argelander Institut für Astronomie and Max-Planck-Institut für Astrophysik), R. D. Blandford (KIPAC), L. V. E. Koopmans (Kapteyn Astronomical Institute), C. D. Fassnacht (University of California, Davis), and T. Treu (University of California, Santa Barbara).


Sherry Suyu describes the recent measurements of the age of the universe [Video Courtesy: uni-bonn.tv /University of Bonn, Germany]


The researchers used data collected by the NASA/ESA Hubble Space Telescope and showed the improved precision these data provide in combination with measurements from the Wilkinson Microwave Anisotropy Probe (WMAP).

The team used a technique called gravitational lensing to measure the distances light traveled from a bright, active galaxy to Earth along different paths. From the time it took to travel along each path and the effective speeds involved, researchers could infer not just how far away the galaxy lies but also the overall scale of the universe and some details of its expansion.

Oftentimes it is difficult for scientists to distinguish between a very bright light far away and a dimmer source lying much closer. A gravitational lens circumvents this problem by providing multiple clues as to the distance light travels. That extra information allows them to determine the size of the universe, often expressed by astrophysicists in terms of a quantity called Hubble's constant.
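The link between Hubble's constant and the size and age of the universe can be made concrete: the inverse of H0 is the Hubble time, the characteristic age scale of the expansion. A quick sketch, using an illustrative H0 of 71 km/s/Mpc (close to the values such lensing analyses yield):

```python
KM_PER_MPC = 3.0857e19   # kilometres in one megaparsec
SEC_PER_GYR = 3.1557e16  # seconds in one gigayear

def hubble_time_gyr(h0_km_s_mpc):
    """Hubble time 1/H0, converted from km/s/Mpc units to gigayears."""
    h0_per_sec = h0_km_s_mpc / KM_PER_MPC  # H0 as an inverse time, 1/s
    return 1.0 / h0_per_sec / SEC_PER_GYR

# ~13.8 Gyr, comparable to the 13.75-Gyr age quoted above; the precise
# age also depends on the matter and dark-energy content.
print(round(hubble_time_gyr(71.0), 1))
```

A larger H0 means a faster expansion, a smaller universe for a given redshift, and a younger age, which is why pinning down H0 pins down the cosmic size and age scales together.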

When a large nearby object, such as a galaxy, blocks a distant object, such as another galaxy, the light can detour around the blockage. But instead of taking a single path, light can bend around the object in one of two, or four different routes, thus doubling or quadrupling the amount of information scientists receive. As the brightness of the background galaxy nucleus fluctuates, physicists can measure the ebb and flow of light from the four distinct paths, such as in the B1608+656 system imaged above. [Image courtesy Sherry Suyu of the Argelander Institut für Astronomie in Bonn, Germany]


"We've known for a long time that lensing is capable of making a physical measurement of Hubble's constant," KIPAC's Phil Marshall said. However, gravitational lensing had never before been used in such a precise way. This measurement provides an equally precise measurement of Hubble's constant as long-established tools such as observation of supernovae and the cosmic microwave background. "Gravitational lensing has come of age as a competitive tool in the astrophysicist's toolkit," Marshall said.

As the brightness of the background galaxy's nucleus fluctuates, physicists can measure the ebb and flow of light arriving along the four distinct paths in the B1608+656 system that was the subject of this study. Lead author Sherry Suyu, of the University of Bonn, said, "In our case, there were four copies of the source, which appear as a ring of light around the gravitational lens."

Though researchers do not know when light left its source, they can still compare arrival times. Marshall likens it to four cars taking four different routes between places on opposite sides of a large city, such as from Stanford University to Lick Observatory, through or around San Jose. And like automobiles facing traffic snarls, light can encounter delays, too.

"The traffic density in a big city is like the mass density in a lens galaxy," Marshall said. "If you take a longer route, it need not lead to a longer delay time. Sometimes the shorter distance is actually slower."

The gravitational lens equations account for all the variables such as distance and density, and provide a better idea of when light left the background galaxy and how far it traveled.
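The central relation behind such analyses can be sketched numerically: the delay between images is proportional to a "time-delay distance" D_dt = c·Δt/Δφ, where Δφ is the (dimensionless) Fermat-potential difference supplied by the lens mass model, and D_dt scales as 1/H0. The numbers below are purely illustrative, not the measured B1608+656 values.

```python
C_KM_S = 299792.458      # speed of light, km/s
SEC_PER_DAY = 86400.0
KM_PER_MPC = 3.0857e19   # kilometres in one megaparsec

def time_delay_distance_mpc(delta_t_days, delta_fermat):
    """Time-delay distance D_dt = c * dt / dphi, in Mpc.
    delta_fermat is the dimensionless Fermat-potential difference between
    two images, which comes from modelling the lens galaxy's mass."""
    dt_s = delta_t_days * SEC_PER_DAY
    return C_KM_S * dt_s / delta_fermat / KM_PER_MPC

# A ~30-day delay with a Fermat-potential difference of ~5e-12 gives a
# distance of several Gpc, the right order for a cosmological lens.
print(round(time_delay_distance_mpc(30.0, 5e-12)))
```

Since D_dt is inversely proportional to H0, measuring the delays and modelling the lens density (the "traffic" in Marshall's analogy) yields Hubble's constant directly.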

In the past, this method of distance estimation was plagued by errors, but physicists now believe it is comparable with other measurement methods. With this technique, the researchers have come up with a more accurate lensing-based value for Hubble's constant, and a better estimation of the uncertainty in that constant. By both reducing and understanding the size of error in calculations, they can achieve better estimations on the structure of the lens and the size of the universe.

There are several factors scientists still need to account for when determining distances with lenses. For example, dust in the lens can skew the results; the Hubble Space Telescope has infrared filters useful for eliminating dust effects. The images also contain information about the number of galaxies lying along the line of sight; these contribute to the lensing effect at a level that needs to be taken into account.

Marshall says several groups are working on extending this research, both by finding new systems and further examining known lenses. Researchers are already aware of more than twenty other astronomical systems suitable for analysis with gravitational lensing.

Reference
[1] S. H. Suyu, P. J. Marshall, M. W. Auger, S. Hilbert, R. D. Blandford, L. V. E. Koopmans, C. D. Fassnacht and T. Treu, "Dissecting the Gravitational Lens B1608+656. II. Precision Measurements Of The Hubble Constant, Spatial Curvature, and the Dark Energy Equation Of State", The Astrophysical Journal, v711, p201 (2010). Abstract.


[This report is written by Julie Karceski, SLAC National Accelerator Laboratory]



Sunday, March 07, 2010

Superposing Photons

Erwan Bimbard


[This is an invited article based on a recently published work by the authors and their collaborators from Canada, France and Germany -- 2Physics.com]






Authors: Erwan Bimbard, Alexander I. Lvovsky
Affiliation:
Institute for Quantum Information Science, University of Calgary, Canada,
Département de Physique, Ecole Normale Supérieure, Paris, France

The ability to generate and manipulate arbitrary quantum states of a particular system is necessary in order to use this system in quantum information technology. Scientists have developed this ability for a variety of physical settings, for example, ion traps [1] or superconducting cavities coupled to a Josephson qubit [2]. However, independently of how future quantum computers may perform their own calculations, they will have to communicate with each other. There must be a communication medium able to carry quantum information over long distances without too much loss or decoherence. The only medium that satisfies this requirement is light.

The Quantum Information Technology Group at the University of Calgary. Other authors of the Nature Photonics paper [7]: Alexander Lvovsky (leftmost), Andrew MacRae (4th from left) and Nitin Jain (7th from left).


The problem is that preparing arbitrary quantum states of photons is notoriously difficult, because photons cannot stand still while information is being encoded on them. Moreover, they are easily destroyed by any instrument they encounter. For these reasons, only small islands have so far been explored in the vast ocean of quantum states of travelling light fields. Examples of quantum optical states prepared and analyzed to date include single- and two-photon states [3,4], superpositions of vacuum and single photon [5] and “Schrödinger kittens” [6].

To continue the naval allegory, exploring an ocean requires a map. In our case, the map is provided by photon number states, or Fock states. These states form a basis in the optical Hilbert space: any quantum state of light, however complex, can be written as a superposition of photon number states. If we could find a way of constructing arbitrary Fock state superpositions, we would have resolved our challenge. Unfortunately, such a vision is beyond practical reach, because there is an infinite number of Fock states and their energy is unlimited. It is possible, however, to approach this ideal with small steps.

What we accomplished, and reported in a recent paper in Nature Photonics [7], is an extension of the accessible part of the optical Hilbert space to the subspace spanned by the first three basis elements. In other words, we engineered and characterized arbitrary superpositions of the 0-photon, 1-photon and 2-photon states.
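To make "arbitrary superpositions of the first three Fock states" concrete, here is a small numerical sketch (using NumPy as an illustration tool; it plays no role in the experiment itself) of such a state and its photon statistics.

```python
import numpy as np

def fock_superposition(c0, c1, c2):
    """Normalized state c0|0> + c1|1> + c2|2> in the truncated Fock basis."""
    psi = np.array([c0, c1, c2], dtype=complex)
    return psi / np.linalg.norm(psi)

def photon_statistics(psi):
    """Photon-number probabilities |c_n|^2 and the mean photon number."""
    probs = np.abs(psi) ** 2
    mean_n = float(np.sum(np.arange(len(psi)) * probs))
    return probs, mean_n

# Equal-weight superposition of 0, 1 and 2 photons:
probs, mean_n = photon_statistics(fock_superposition(1, 1, 1))
print(np.round(probs, 3), round(mean_n, 3))  # [0.333 0.333 0.333] 1.0
```

Setting c1 = c2 = 0 recovers the vacuum and setting c0 = c1 = 0 the pure two-photon state; in the experiment, the knob for choosing the three amplitudes is the pair of weak auxiliary laser beams described below.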

In order to tailor the quantum state of a travelling light pulse without annihilating it, we made use of one of the most basic yet mysterious quantum phenomena: entanglement. We focused blue laser pulses into a nonlinear crystal that can convert a blue photon into an entangled pair of lower energy red photons going along two different paths or “channels”. Then we performed measurements on one of these channels (idler), which prepared the wanted state in the other channel (signal). Such remote state preparation is possible because of the entanglement between the channels: the two of them form a single system described by a global quantum state, so an interaction with one particle will affect the other, even though the two channels can be spatially separated.

To perform the measurement in the trigger channel, we mix, on beam splitters (half-silvered mirrors), the photons emerging from the crystal with those coming through two weak independent laser beams. Two of the beam splitter outputs are directed to ultra-sensitive single photon detectors, and we look for events where both these detectors “clicked” at the same time. We align our optics in such a way that it is impossible to determine whether the photons that trigger the detectors come from the crystal or the independent beams. Accordingly, a coincidence “click” indicates that the number of photons coming to detectors from the crystal could have been 0, 1 or 2. Because the photons in the crystal are always born in pairs, the signal channel will also contain 0, 1, or 2 photons.

A more thorough calculation involving the entangled nature of the optical state produced by the crystal shows that this alternative – 0, 1, or 2 photons in the signal channel – is not simply a probabilistic mixture, but a coherent superposition of these Fock states. By varying the amplitudes and phases of the two independent beams, we can change the probability amplitudes of the three components in the superposition. For example, if we set both amplitudes to zero, the “clicks” can occur only due to the photons from the crystal, and the state of the signal will be a pure two-photon state. If, on the other hand, we make the intensity of the independent beams high, they are likely to generate most of the “clicks”, so the signal will with high probability not contain any photons.

In order to verify that the signal state is what we expect it to be, we measured a large number of copies of this state and analyzed them using a technique known as optical homodyne tomography [8]. This technique allowed us to determine the signal states and compare them with theoretical predictions. By repeating the measurements for several different settings, we were able to show that arbitrary preparation of states within the set studied is indeed achievable for a travelling pulse of light, without destroying it or having to store it.

To summarize, this work enabled us to reach a whole new region of the optical Hilbert space and study the properties of new quantum states of light, at the same time unifying in a single experiment many previously investigated states. More practically, the kind of states produced during this experiment has immediate applications, for example, optimal estimation of the loss parameter in a Gaussian bosonic channel [9].

References
[1] A. Ben-Kish, B. DeMarco, V. Meyer, M. Rowe, J. Britton, W. M. Itano, B. M. Jelenković, C. Langer, D. Leibfried, T. Rosenband, and D. J. Wineland, "Experimental Demonstration of a Technique to Generate Arbitrary Quantum Superposition States of a Harmonically Bound Spin-1/2 Particle", Phys. Rev. Lett. 90, 037902 (2003). Abstract.
[2] Max Hofheinz, H. Wang, M. Ansmann, Radoslaw C. Bialczak, Erik Lucero, M. Neeley, A. D. O'Connell, D. Sank, J. Wenner, John M. Martinis & A. N. Cleland, "Synthesizing arbitrary quantum states in a superconducting resonator", Nature, 459, 546 (2009). Abstract.
[3] A. I. Lvovsky, H. Hansen, T. Aichele, O. Benson, J. Mlynek, and S. Schiller, "Quantum State Reconstruction of the Single-Photon Fock State", Phys. Rev. Lett. 87, 050402 (2001). Abstract.
[4] A. Ourjoumtsev, R. Tualle-Brouri and P. Grangier, "Quantum Homodyne Tomography of a Two-Photon Fock State", Phys. Rev. Lett. 96, 213601 (2006). Abstract.
[5] A. I. Lvovsky and J. Mlynek, "Quantum-Optical Catalysis: Generating Nonclassical States of Light by Means of Linear Optics", Phys. Rev. Lett. 88, 250401 (2002). Abstract.
[6] Alexei Ourjoumtsev, Rosa Tualle-Brouri, Julien Laurat, Philippe Grangier, "Generating Optical Schrödinger Kittens for Quantum Information Processing", Science 312, 83-86 (2006). Abstract.

[7] Erwan Bimbard, Nitin Jain, Andrew MacRae, A. I. Lvovsky, "Quantum-optical state engineering up to the two-photon level", Nature Photonics, Published online: 14 February 2010 doi:10.1038/nphoton.2010.6. Abstract.
[8] A.I. Lvovsky and M.G. Raymer, "Continuous-variable optical quantum-state tomography", Rev. Mod. Phys. 81, 299 - 332 (2009). Abstract.
[9] G. Adesso, F. Dell'Anno, S. De Siena, F. Illuminati, and L. A. M. Souza, "Optimal estimation of losses at the ultimate quantum limit with non-Gaussian states", Phys. Rev. A 79, 040305(R) (2009). Abstract.
