
2Physics


Sunday, June 02, 2013

The Observable Signature of Black Hole Formation

Anthony L. Piro


Author: Anthony L. Piro

Affiliation: Theoretical Astrophysics Including Relativity (TAPIR), California Institute of Technology, Pasadena, USA

Black holes are among the most exciting objects in the Universe. They are regions of spacetime, predicted by Einstein's theory of general relativity, in which gravity is so strong that it prevents anything, even light, from escaping. Black holes are known to exist and come in roughly two varieties. There are massive black holes at the centers of galaxies, which can have masses anywhere from a million to many billion times the mass of our Sun. And there are black holes of around ten solar masses in galaxies like our own that have been detected via X-ray emission from accretion [1]. Although this latter class of black holes is generally believed to form from the collapse of massive stars, much about them remains uncertain and is the focus of ongoing research. It is unknown what fraction of massive stars produce black holes (rather than neutron stars), what the channels for black hole formation are, and what the corresponding observational signatures should be. Through a combination of theory, state-of-the-art simulations, and new observations, astrophysicists are trying to address these fundamental questions.

A computer-generated image of the light distortions created by a black hole [Image credit: 
Alain Riazuelo, IAP/UPMC/CNRS]

The one instance where astronomers are fairly certain they are seeing black hole formation is in the case of gamma-ray bursts (GRBs). A GRB is believed to result from the collapse of a massive, quickly rotating star that produces a black hole and a relativistic jet. The problem is that GRBs are too rare, and too confined to special environments, to explain the majority of black holes. Astronomers regularly see stars exploding as supernovae, but it is not clear what fraction of these produce black holes. There is evidence, and it is generally expected, that in most cases these explosions in fact lead to neutron stars instead. This has led to the hypothesis that the signature of black hole formation is in fact the disappearance of a massive star, an "unnova," rather than an actual supernova-like event [2].

My theoretical work [3] hypothesizes that there may be an observational signature of black hole formation, even in circumstances where one might normally expect an unnova. I therefore titled my work "Taking the 'Un' out of 'Unnovae'." The main idea is based on a somewhat forgotten theoretical study by D. K. Nadyozhin [4]. Before a black hole is formed within a collapsing star, a neutron star is formed first. This neutron star emits neutrinos [5,6], which stream out of the star (because neutrinos interact only very weakly) carrying energy, and thus mass via E = mc². This emission can last for a few tenths of a second, until enough material falls onto the neutron star to collapse it into a black hole, and it carries away a mass equivalent to a few tenths of the mass of our Sun. From the point of view of the star's envelope, the mass (and therefore gravitational pull) of the core abruptly decreases, and the envelope expands in response. This adjustment of the star's envelope grows into a shock wave that heats and ejects the outer envelope of the star.
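
To get a rough feel for the numbers, here is a minimal back-of-the-envelope sketch in Python; the 0.3 solar-mass loss and 0.3 s timescale are illustrative values taken from the "few tenths" figures above, not precise model outputs:

    # Back-of-the-envelope numbers for the neutrino-driven mass loss.
    M_SUN = 1.989e30   # solar mass [kg]
    C = 2.998e8        # speed of light [m/s]

    delta_m = 0.3 * M_SUN   # illustrative neutrino mass-energy loss (a few tenths M_sun)
    t_emit = 0.3            # illustrative emission time before collapse [s]

    energy = delta_m * C**2        # E = mc^2 carried off by neutrinos [J]
    luminosity = energy / t_emit   # mean neutrino luminosity [W]

    m_core = 1.4 * M_SUN           # typical proto-neutron-star mass (assumed)
    frac = delta_m / m_core        # fractional drop in the core's gravitational pull

    print(f"Energy radiated: {energy:.2e} J")
    print(f"Mean neutrino luminosity: {luminosity:.2e} W")
    print(f"Core mass deficit seen by the envelope: {frac:.0%}")

It is this sudden drop of roughly 20% in the central gravitating mass that the envelope responds to, seeding the shock described above.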

This process was also examined in detail by Elizabeth Lovegrove and Stan Woosley at UC Santa Cruz [7]. They focused on the heating and subsequent cooling of the envelope from this shock. They found that it would produce something that looks like a very dim supernova lasting for about a year. In my work, I focused on the observational signature when this shock first hits the surface of the star. When this happens, the shock's energy is suddenly released in what is called a "shock breakout flash." Although this lasts merely a few days, it is 10 to 100 times brighter than the subsequent dim supernova. It is therefore the best opportunity for astronomers to catch a black hole being created in the act.

The most exciting part of this result is that now is the perfect time for astronomers to discover these events. Observational efforts such as the Palomar Transient Factory (PTF) and the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS) are surveying the sky every night and sometimes finding rare, dim explosive transients. These surveys are well-suited to find exactly the kind of event I predict for the shock breakout from black hole formation. Given the rate at which we expect massive stars to die, it is not out of the question that one or more of these will be found in the next year or so, allowing us to actually witness the birth of a black hole.

References:
[1] Ronald A. Remillard and Jeffrey E. McClintock, "X-Ray Properties of Black-Hole Binaries". Annual Review of Astronomy & Astrophysics, 44, 49-92 (2006). Abstract.
[2] Christopher S. Kochanek, John F. Beacom, Matthew D. Kistler, José L. Prieto, Krzysztof Z. Stanek, Todd A. Thompson, Hasan Yüksel, "A Survey About Nothing: Monitoring a Million Supergiants for Failed Supernovae". Astrophysical Journal, 684, 1336-1342 (2008). Fulltext.
[3] Anthony L. Piro, "Taking the 'Un' out of 'Unnovae'". Astrophysical Journal Letters, 768, L14 (2013). Abstract.
[4] D. K. Nadyozhin, "Some secondary indications of gravitational collapse". Astrophysics and Space Science, 69, 115-125 (1980). Abstract.
[5] Adam Burrows, "Supernova neutrinos". Astrophysical Journal, 334, 891-908 (1988). Full Text.
[6] J. F. Beacom, R. N. Boyd, and A. Mezzacappa, "Black hole formation in core-collapse supernovae and time-of-flight measurements of the neutrino masses". Physical Review D, 63, 073011 (2001). Abstract.
[7] Elizabeth Lovegrove and Stan E. Woosley, "Very Low Energy Supernovae from Neutrino Mass Loss". Astrophysical Journal, 769, 109 (2013). Abstract.



Sunday, February 24, 2013

New Mass Limit for White Dwarfs: Explains Super-Chandrasekhar Type Ia Supernovae

Upasana Das (left) and Banibrata Mukhopadhyay (right)

Authors: Upasana Das and Banibrata Mukhopadhyay

Affiliation: Department of Physics, Indian Institute of Science, Bangalore, India

Background:

Extremely luminous explosions of white dwarfs, known as type Ia supernovae [1], have long been a prime focus of astrophysics. They are generally believed to result from the violent thermonuclear explosion of a carbon-oxygen white dwarf whose mass approaches the famous Chandrasekhar limit of 1.44 M☉ [2], where M☉ is the mass of the Sun. The observed luminosity is powered by the radioactive decay of nickel, produced in the thermonuclear explosion, to cobalt and then to iron. The characteristic variation of luminosity with time of these supernovae (see Figure 1) -- along with the consistent mass of the exploding white dwarf -- allows them to be used as a ‘standard candle’ for measuring faraway distances and hence for probing the expansion history of the universe.

Figure 1: Variation of luminosity as a function of time of a type Ia supernova [image courtesy: Wikipedia]

Observation and study of this very feature of distant supernovae led to the Nobel Prize in Physics in 2011 for the discovery of the accelerated expansion of the universe [3, 2Physics report]. Also, mainly because of the discovery of the limiting mass of white dwarfs, S. Chandrasekhar was awarded the Nobel Prize in Physics in 1983.

Chandrasekhar, by means of a remarkably simple calculation, was the first to obtain the maximum mass of a (non-magnetized, non-rotating) white dwarf [2]. So far, observations have seemed to abide by this limit. However, the recent discovery of several peculiar type Ia supernovae -- namely, SN 2006gz, SN 2007if, SN 2009dc and SN 2003fg [4,5] -- provokes us to rethink the commonly accepted scenario. These supernovae are distinctly over-luminous compared to their standard counterparts because of their higher-than-usual nickel mass. They also violate the ‘luminosity-stretch relation’ and exhibit a much lower velocity of the matter ejected during the explosion. These anomalies can be resolved if the exploding white dwarfs (the progenitors of these peculiar supernovae) are assumed to be super-Chandrasekhar, with masses in the range 2.1-2.8 M☉. Such non-standard ‘super-Chandrasekhar supernovae’, however, can no longer be used as cosmic distance indicators. Moreover, there is as yet no estimate of an upper limit to the mass of these super-Chandrasekhar white dwarf candidates. Can they be arbitrarily large? And no foundational analysis, akin to that carried out by Chandrasekhar, has so far been performed to establish a super-Chandrasekhar mass white dwarf.

Our result at a glance:

We establish a new and generic mass limit for white dwarfs: 2.58 M☉ [6]. This is significantly different from the limit proposed by Chandrasekhar, and it naturally explains the over-luminous, peculiar type Ia supernovae mentioned above. We arrive at this new mass limit by exploiting the effects of the magnetic field in compact objects. The motivation behind our approach lies in the discovery, through the Sloan Digital Sky Survey (SDSS), of several isolated magnetized white dwarfs with surface fields of 10^5-10^9 gauss [7,8]; their central fields could be 2-3 orders of magnitude higher. Moreover, about 25% of accreting white dwarfs, namely cataclysmic variables (CVs), are found to have magnetic fields as high as 10^7-10^8 gauss [9].

Underlying theory:

We first recall the basic formation scenario of white dwarfs, which requires understanding the properties of degenerate electrons. When different states of a particle correspond to the same energy in quantum mechanics, they are called degenerate states. Moreover, Pauli's exclusion principle prohibits any two identical fermions (in the present context, electrons) from occupying the same quantum state. Now, when a normal star of mass less than or of the order of 5 M☉ exhausts its nuclear fuel [10], it undergoes a collapse that confines a large number of electrons in a small volume. Packed so closely, many electrons are forced to share the same energy states and become degenerate, since the energy of a particle depends on its momentum, which is determined by the total volume of the system. Once all the energy levels up to the Fermi level -- the maximum allowed energy of a fermion -- are filled, there is no room for the remaining electrons in the shrinking volume of the collapsing star, and they supply an outward pressure. If the force due to this outward pressure can balance the inward gravitational force, the collapse halts, forming the compact star known as a white dwarf.

Figure 2: Landau quantization in presence of magnetic field B. [image courtesy: Warwick University, UK ]

For the current purpose, we also have to recall the properties of degenerate, relativistic electrons under the influence of a strong magnetic field, neglecting any form of interactions. The energy states of a free electron in a magnetic field are quantized into what are known as Landau orbitals [11]. Figure 2 shows how the continuous energy levels split into discrete Landau levels as the magnetic field increases in the direction perpendicular to the motion of the electron. The larger the magnetic field, the smaller the number of occupied Landau levels. Recent works [12-14] establish that Landau quantization due to a strong magnetic field modifies the equation of state (EoS) -- the relation between the pressure (P) and density (ρ) -- of the electron degenerate gas. This should significantly influence the mass and radius of the underlying white dwarf (and hence the mass-radius relation). The main aim here is to obtain the maximum possible mass of such a magnetized white dwarf, and therefore a (new) mass limit. Hence we look at the regime of high density of the electron degenerate gas, and the corresponding EoS, which corresponds to a high Fermi energy (E_F) of the system: the highest density corresponds to the lowest volume, and hence the lowest radius, which in turn corresponds to the limiting mass [2]. Note that the maximum Fermi energy (E_Fmax) corresponds to the maximum central density of the star. Consequently, conservation of magnetic flux (technically speaking, the flux-freezing theorem, which is generally applicable to a compact star) argues for the maximum possible field of the system, which implies that only the ground Landau level will be occupied by the electrons.
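
For reference, the energy of a relativistic electron in the ν-th Landau level can be written (in LaTeX notation, following the standard treatment in, e.g., Lai & Shapiro [11]):

    E_{\nu,p_z} = \left[\, p_z^2 c^2 + m_e^2 c^4 \left(1 + 2\nu\,\frac{B}{B_c}\right) \right]^{1/2},
    \qquad
    B_c = \frac{m_e^2 c^3}{e\hbar} \simeq 4.414 \times 10^{13}\ \text{gauss},

where p_z is the momentum along the field and ν = 0, 1, 2, ... labels the Landau level; B_c is the critical field that reappears below.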

Generally the EoS can be recast in the polytropic form P = Kρ^Γ, where K is a constant and Γ (= 1 + 1/n) is the polytropic index. In the highest-density regime (which also corresponds to the highest magnetic field), Γ = 2. Combining the above EoS with the condition of magnetostatic equilibrium (the net outward force balancing the inward force), we obtain the mass and radius of the white dwarf scaling with its central density (ρ_c) as M ∝ K^(3/2) ρ_c^((3-n)/2n) and R ∝ K^(1/2) ρ_c^((1-n)/2n) respectively [6]. For Γ = 2, which corresponds to the case of the limiting mass, K ∝ ρ_c^(-2/3), and hence M becomes independent of ρ_c while R goes to zero. Substituting the proportionality constants, for Γ = 2 we obtain exactly [6]

M = (hc/2G)^(3/2) (1/(μ_e m_H))²,

where h is Planck's constant, c the speed of light, G Newton's gravitational constant, μ_e the mean molecular weight per electron and m_H the mass of the hydrogen atom. For μ_e = 2, which is the case for a carbon-oxygen white dwarf, M ≈ 2.58 M☉. To compare with Chandrasekhar's result [2], we recall the limiting mass obtained by him,

M ≈ 0.197 (hc/G)^(3/2) (1/(μ_e m_H))²,

which for μ_e = 2 is 1.44 M☉.
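
As a quick sanity check, both limiting masses can be evaluated numerically from these expressions; a minimal Python sketch with SI constants:

    import math

    H = 6.626e-34     # Planck constant [J s]
    C = 2.998e8       # speed of light [m/s]
    G = 6.674e-11     # gravitational constant [m^3 kg^-1 s^-2]
    M_H = 1.6735e-27  # hydrogen atom mass [kg]
    M_SUN = 1.989e30  # solar mass [kg]
    MU_E = 2.0        # mean molecular weight per electron (carbon-oxygen)

    # New limit for the strongly magnetized case [6]: M = (hc/2G)^(3/2) / (mu_e m_H)^2
    m_new = (H * C / (2 * G))**1.5 / (MU_E * M_H)**2
    print(f"Magnetized limit:    {m_new / M_SUN:.2f} M_sun")   # ~2.58

    # Chandrasekhar's limit [2]: M ~ 0.197 (hc/G)^(3/2) / (mu_e m_H)^2
    m_ch = 0.197 * (H * C / G)**1.5 / (MU_E * M_H)**2
    print(f"Chandrasekhar limit: {m_ch / M_SUN:.2f} M_sun")    # ~1.44
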
Figure 3: Mass-radius relation of a white dwarf. Solid line – Chandrasekhar’s relation; dashed line – our relation.

For a better reference, we include a comparison between the mass-radius relation of the white dwarf obtained by Chandrasekhar and that obtained by us in Figure 3.

Justification of the high magnetic field and its role in holding more mass:

The presence of a magnetic field in a white dwarf creates an additional outward pressure, on top of that of the degenerate electrons (which is itself modified in the presence of a strong field). The inward gravitational force, on the other hand, is proportional to the mass of the white dwarf. Hence, when the star is magnetized, a larger outward force can balance a larger inward force, allowing it to hold more mass.

However, the effect of Landau quantization becomes significant only at a high field B ≥ B_c = 4.414×10^13 gauss. How can we justify such a high field in a white dwarf? Consider the commonly observed phenomenon of a magnetized white dwarf accreting mass from a companion star. The observed surface field of an accreting white dwarf can be as high as 10^9 gauss (≪ B_c) [7]; its central field, however, can be several orders of magnitude higher, ∼ 10^12 gauss, which is still less than B_c. Naturally, such a magnetized CV still follows the mass-radius relation obtained by Chandrasekhar. However, in contrast with Chandrasekhar's work (which did not include a magnetic field), we find that a nonzero initial field in the white dwarf, however ineffective at rendering Landau quantization effects, proves crucial in supporting the additional mass accumulated due to accretion.

As such a magnetized white dwarf gains mass by accretion, the increased gravitational pull makes it contract. The total magnetic flux of the white dwarf -- the magnetic field times the square of its radius -- is understood to be conserved, so as the star shrinks its magnetic field increases. This in turn increases the outward force, balancing the increased inward gravitational force and leading to a quasi-equilibrium. Since accretion is a continuous process, this cycle -- shrinking the white dwarf, amplifying the magnetic field, holding more mass -- repeats until the accumulated mass becomes so great that the total outward pressure can no longer support the gravitational attraction. This finally leads to a supernova explosion, which we observe as a peculiar, over-luminous type Ia supernova, in contrast to the normal counterparts.
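
The cycle can be illustrated with a toy iteration (a sketch only, not the authors' detailed calculation: the R ∝ M^(-1/3) scaling is the simple non-relativistic degenerate-star relation, and the starting values are hypothetical):

    m = 1.0    # mass [M_sun]
    r = 1.0    # radius [arbitrary units, r = 1 at m = 1]
    b = 1e9    # central field [gauss], hypothetical starting value

    for step in range(5):
        m += 0.1              # accrete 0.1 M_sun
        r_new = m**(-1/3)     # toy degenerate mass-radius relation: R ~ M^(-1/3)
        b *= (r / r_new)**2   # flux freezing: B * R^2 conserved
        r = r_new
        print(f"M = {m:.1f} M_sun,  R = {r:.3f},  B = {b:.2e} gauss")

Each pass through the loop shows the field, and hence the extra outward pressure, ratcheting up as mass is added, until in reality the star reaches the new limit and explodes.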

Punch lines:

More than 80 years after the proposal of the Chandrasekhar mass limit, this new limit perhaps heralds the onset of a paradigm shift. The discovery has several consequences, briefly described below.

At present, the masses of white dwarfs are inferred from their luminosities assuming Chandrasekhar's mass-radius relation. These results may have to be re-examined based on the new mass-radius relation, at least for some peculiar objects (e.g. the over-luminous type Ia supernovae). Further, some peculiar known objects, like magnetars (highly magnetized compact objects, currently presumed to be neutron stars), should be examined in light of the above considerations: some could actually be super-Chandrasekhar white dwarfs.

This new mass limit may also establish the underlying peculiar supernovae as a new standard candle for cosmic distance measurement. Hence, in order to correctly interpret the expansion history of the universe (and hence dark energy), one might need to carefully sample the observed supernovae, especially if the peculiar type Ia events eventually turn out to be numerous. However, it is probably too early to say whether our discovery has any direct implication for the current dark energy scenario, which is based on observations of ordinary type Ia supernovae.

References:
[1] D. Andrew Howell, “Type Ia supernovae as stellar endpoints and cosmological tools”, Nature Communications, 2, 350 (2011). Abstract.
[2] S. Chandrasekhar, “The highly collapsed configurations of a stellar mass (Second Paper)”, Monthly Notices of the Royal Astronomical Society, 95, 207 (1935). Article.
[3] S. Perlmutter, G. Aldering, G. Goldhaber, R. A. Knop, P. Nugent, P. G. Castro, S. Deustua, S. Fabbro, A. Goobar, D. E. Groom, I. M. Hook, A. G. Kim, M. Y. Kim, J. C. Lee, N. J. Nunes, R. Pain, C. R. Pennypacker, R. Quimby, C. Lidman, R. S. Ellis, M. Irwin, R. G. McMahon, P. Ruiz-Lapuente, N. Walton, B. Schaefer, B. J. Boyle, A. V. Filippenko, T. Matheson, A. S. Fruchter, N. Panagia, H. J. M. Newberg, W. J. Couch, and The Supernova Cosmology Project, “Measurements of Omega and Lambda from 42 high-redshift supernovae”, The Astrophysical Journal, 517, 565 (1999). Article.
[4] D. Andrew Howell, Mark Sullivan, Peter E. Nugent, Richard S. Ellis, Alexander J. Conley, Damien Le Borgne, Raymond G. Carlberg, Julien Guy, David Balam, Stephane Basa, Dominique Fouchez, Isobel M. Hook, Eric Y. Hsiao, James D. Neill, Reynald Pain, Kathryn M. Perrett and Christopher J. Pritchet, “The type Ia supernova SNLS-03D3bb from a super-Chandrasekhar-mass white dwarf star”, Nature, 443, 308 (2006). Abstract.
[5] R. A. Scalzo, G. Aldering, P. Antilogus, C. Aragon, S. Bailey, C. Baltay, S. Bongard, C. Buton, M. Childress, N. Chotard, Y. Copin, H. K. Fakhouri, A. Gal-Yam, E. Gangler, S. Hoyer, M. Kasliwal, S. Loken, P. Nugent, R. Pain, E. Pécontal, R. Pereira, S. Perlmutter, D. Rabinowitz, A. Rau, G. Rigaudier, K. Runge, G. Smadja, C. Tao, R. C. Thomas, B. Weaver, and C. Wu, “Nearby supernova factory observations of SN2007if: First total mass measurement of a super-Chandrasekhar-mass progenitor”, The Astrophysical Journal, 713, 1073 (2010). Article.
[6] Upasana Das & Banibrata Mukhopadhyay, “New mass limit for white dwarfs: Super-Chandrasekhar type Ia supernova as a new standard candle”, Physical Review Letters, 110, 071102 (2013). Abstract.
[7] Gary D. Schmidt, Hugh C. Harris, James Liebert, Daniel J. Eisenstein, Scott F. Anderson, J. Brinkmann, Patrick B. Hall, Michael Harvanek, Suzanne Hawley, S. J. Kleinman, Gillian R. Knapp, Jurek Krzesinski, Don Q. Lamb, Dan Long, Jeffrey A. Munn, Eric H. Neilsen, Peter R. Newman, Atsuko Nitta, David J. Schlegel, Donald P. Schneider, Nicole M. Silvestri, J. Allyn Smith, Stephanie A. Snedden, Paula Szkody, and Dan Vanden Berk, “Magnetic white dwarfs from the Sloan Digital Sky Survey: The first data release”, The Astrophysical Journal, 595, 1101 (2003). Article.
[8] Karen M. Vanlandingham, Gary D. Schmidt, Daniel J. Eisenstein, Hugh C. Harris, Scott F. Anderson, Patrick B. Hall, James Liebert, Donald P. Schneider, Nicole M. Silvestri, Gregory S. Stinson, and Michael A. Wolfe, “Magnetic white dwarfs from the SDSS. II. The second and third data releases”, The Astronomical Journal, 130, 734 (2005). Article.
[9] D. T. Wickramasinghe and Lilia Ferrario, “Magnetism in isolated and binary white dwarfs”, Publications of the Astronomical Society of the Pacific, 112, 873 (2000). Article.
[10] S.L. Shapiro and S.A. Teukolsky, “Black holes, White dwarfs and Neutron stars: The physics of compact objects” (John Wiley & Sons Inc, 1983).
[11] Dong Lai and Stuart L. Shapiro, “Cold equation of state in a strong magnetic field – Effect of inverse beta-decay”, The Astrophysical Journal, 383, 745 (1991). Abstract.
[12] Upasana Das and Banibrata Mukhopadhyay, “Strongly magnetized cold degenerate electron gas: Mass-radius relation of the magnetized white dwarf”, Physical Review D, 86, 042001 (2012). Abstract.
[13] Upasana Das and Banibrata Mukhopadhyay, “Violation of Chandrasekhar mass limit: The exciting potential of strongly magnetized white dwarfs”, Int. J. Mod. Phys. D, 21, 1242001 (2012). Abstract.
[14] Aritra Kundu and Banibrata Mukhopadhyay, “Mass of highly magnetized white dwarfs exceeding the Chandrasekhar limit: An analytical view”, Modern Physics Letters A, 27, 1250084 (2012). Abstract.



Sunday, July 08, 2012

Quantum Gravity: Can It Be Empirically Tested?

Claus Kiefer (left) and Manuel Krämer (right)

[Every year since 1949 the Gravity Research Foundation has honored the best submitted essays in the field of gravity. This year's prize goes to Claus Kiefer and Manuel Krämer for their essay "Can Effects of Quantum Gravity Be Observed in the Cosmic Microwave Background?". The five award-winning essays will be published in a special issue of the International Journal of Modern Physics D (IJMPD). Today we present an article by Claus Kiefer and Manuel Krämer on their current work. -- 2Physics.com ]

Authors: Claus Kiefer and Manuel Krämer

Affiliation: University of Cologne, Germany 

Quantum theory seems to be a universal framework for physical interactions. The Standard Model of particle physics, for example, is a quantum field theory of the strong and electroweak interactions. The only exception so far is gravity, which is successfully described by a classical theory: Einstein's theory of general relativity. The general expectation, however, is that general relativity is incomplete and must merge with quantum theory into a fundamental theory of quantum gravity [1,2]. One reason is the singularity theorems of Einstein's theory; another is the universal coupling of gravity to all forms of energy, and thus to the energy of all quantum fields.

2Physics articles by past winners of the Gravity Research Foundation award:
Mark Van Raamsdonk (2010): "Quantum Gravity and Entanglement"
Alexander Burinskii (2009): "Beam Pulses Perforate Black Hole Horizon"
T. Padmanabhan (2008): "Gravity : An Emergent Perspective"
Steve Carlip (2007): "Symmetries, Horizons, and Black Hole Entropy"

Despite many attempts over the last 80 years, a final quantum theory of gravity remains elusive. There are various approaches, all with their merits and shortcomings [1,2]. A major problem in the search for a final theory is the lack of empirical tests so far. This is usually attributed to the fact that the Planck scale, at which quantum gravity effects are supposed to become strong, is far removed from any other relevant scale. Expressed in energy units, the Planck scale is 15 orders of magnitude higher than even the energy reachable at the Large Hadron Collider (LHC) in Geneva. It is thus hopeless to probe the Planck scale directly by scattering experiments.

In our prize-winning essay [3], we address the question of whether effects of quantum gravity can be observed in a cosmological context. More precisely, we investigate possible effects in the anisotropy spectrum of the cosmic microwave background (CMB) radiation.

But given the many existing approaches, which framework should one use for the calculations? We decided to be as conservative as possible and to base our investigation on quantum geometrodynamics, the direct quantization of Einstein's theory. The central equation in this approach is the Wheeler-DeWitt equation, named after the pioneering work of Bryce DeWitt and John Wheeler [4]. It is a conservative approach because the Wheeler-DeWitt equation is the quantum equation that directly leads to general relativity in the semiclassical limit. It plays the same role for gravity that the Schrödinger equation plays for mechanics.

While the Wheeler-DeWitt equation is difficult to solve in full generality, it can be treated in an approximation scheme similar to one known from molecular physics - the Born-Oppenheimer approximation. It consists essentially of an expansion with respect to the Planck energy, the assumption being that the relevant expansion parameter is (the square of) the energy scale in question over the Planck energy. A Born-Oppenheimer scheme of this type was applied to gravity in [5]. In this way, one first arrives at the limit of quantum field theory on a fixed background. The next order then gives quantum-gravitational corrections that are inversely proportional to the Planck mass squared. It is these correction terms that we have evaluated for the CMB. The quantitative discussion on which our essay is based is presented in [6]. We assume that the Universe underwent a period of inflationary expansion at an early stage and that this inflation produced the CMB anisotropies out of which all structure in the Universe evolved.
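
Schematically, the structure of the expansion can be summarized as follows (in LaTeX notation; this is a sketch of the hierarchy, not the full equations of [5,6]):

    \hat{H}\,\Psi = 0
    \quad\longrightarrow\quad
    i\hbar\,\frac{\partial \psi}{\partial t} = \hat{H}_{\rm m}\,\psi \;+\; \mathcal{O}\!\left(m_{\rm P}^{-2}\right),

where \hat{H} is the full Wheeler-DeWitt Hamiltonian constraint, \hat{H}_{\rm m} governs quantum fields (here, the inflationary perturbations) on a fixed background, and the terms suppressed by the square of the Planck mass m_P are the quantum-gravitational corrections evaluated for the CMB.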

What are the results? The calculations show that the quantum-gravitational correction terms modify the anisotropy power spectrum most strongly at large scales, that is, at large angular separations on the sky. More precisely, one finds a suppression of power at large scales. Such a suppression can, in principle, be observed. Since no such signal has been identified so far, not even in the measurements of the WMAP satellite, our investigation yields only an upper limit on the expansion rate of the inflationary Universe. The effect thus seems too small to be seen, although it is expected to be considerably larger than quantum-gravitational effects in the laboratory.

A similar investigation was done for loop quantum cosmology [7]. It was found there that quantum gravitational effects lead to an enhancement of the power at large scales, instead of a suppression. These considerations may thus be able to discriminate between different approaches to quantum gravity.

What are the implications for future research? It remains to be seen whether the quantum-gravitational correction terms can become large enough to be observable in other circumstances. One may think of the polarization of the CMB anisotropies or of the correlation functions of galaxies. Such investigations are important because there will be no fundamental progress in quantum gravity research without observational guidance. We hope that our essay will stimulate research in this direction.

References
[1] C. Kiefer, "Quantum Gravity" (Oxford University Press, Oxford, 3rd edition, 2012).
[2] S. Carlip, "Quantum gravity: a progress report", Reports on Progress in Physics, 64, 885-942 (2001). Abstract.
[3] C. Kiefer and M. Krämer, "Can effects of quantum gravity be observed in the cosmic microwave background?", to appear in Int. J. Mod. Phys. D. Available at: arXiv:1205.5161 [gr-qc].
[4] B. S. DeWitt, "Quantum theory of gravity. I. The canonical theory", Phys. Rev., 160, 1113-1148 (1967). Abstract; J. A. Wheeler, "Superspace and the nature of quantum geometrodynamics", in: "Battelle rencontres", ed. by C. M. DeWitt and J. A. Wheeler (Benjamin, New York, 1968), pp. 242-307.
[5] C. Kiefer and T. P. Singh, "Quantum gravitational correction terms to the functional Schrödinger equation", Phys. Rev. D, 44, 1067-1076 (1991). Abstract.
[6] C. Kiefer and M. Krämer, "Quantum Gravitational Contributions to the CMB Anisotropy Spectrum", Phys. Rev. Lett., 108, 021301 (2012). Abstract.
[7] M. Bojowald, G. Calcagni, and S. Tsujikawa, "Observational Constraints on Loop Quantum Cosmology", Phys. Rev. Lett., 107, 211302 (2011). Abstract.



Sunday, January 15, 2012

Quantum Complementarity Meets Gravitational Redshift

(From left to right) Magdalena Zych, Fabio Costa, Igor Pikovski, Časlav Brukner


Authors: Magdalena Zych, Fabio Costa, Igor Pikovski, Časlav Brukner

Affiliation: Faculty of Physics, University of Vienna, Austria

Link to "Quantum Foundations and Quantum Information Theory" Group >>

The unification of quantum mechanics and Einstein's general relativity is one of the most exciting and still open questions in modern physics. In general relativity, space and time are combined into a unified underlying geometry, which explains the gravitational attraction of massive bodies. Typical predictions of this theory become clearly evident on the cosmic scale of stars and galaxies. Quantum mechanics, on the other hand, was developed to describe phenomena at small scales, such as single particles and atoms. Both theories have been confirmed independently by many experiments. However, it is still very hard to test the interplay between them: when considering very small systems, gravity is typically too weak to be of any significance, and the most precise experiments so far have only been able to probe the non-relativistic, Newtonian limit of gravity in conjunction with quantum mechanics. Conversely, quantum effects are generally not visible in large objects.

According to general relativity, time flows differently at different positions due to the distortion of space-time by a nearby massive object. A single clock being in a superposition of two locations allows probing quantum interference effects in combination with general relativity. [Image credits: Quantum Optics, Quantum Nanophysics, Quantum Information; University of Vienna]

There is, however, a possibility to measure predictions of Einstein's theory of general relativity without using extremely massive probe particles: one of the counterintuitive predictions of general relativity is that gravity distorts the flow of time. The theory predicts that clocks tick slower near a massive body and faster the further away they are from the mass. The Earth's gravitational field produces a sufficient distortion of space-time that the different flow of time at different altitudes can be measured with very precise clocks. This has been confirmed experimentally with classical clocks, and the results were in full agreement with Einstein's theory.

Two initially synchronized clocks placed at different gravitational potentials will eventually show different times. According to general relativity a clock near a massive body ticks slower than the clock further away from the mass. This effect is known as gravitational time dilation or gravitational redshift.

Scientists at the University of Vienna have now proposed that the effect described above, commonly known as the “gravitational redshift”, can also be used to probe the overlap of general relativity with quantum mechanics. In the scheme published in October in Nature Communications, the classical version of the experiment is modified such that quantum mechanics must be taken into account. The idea is to exploit the extraordinary possibility that a single particle can lack a well-defined position or, as phrased in quantum mechanical terms, can be in a “superposition” of two different locations. This allows single particles to produce typical wave-like detection patterns, i.e. interference.

Superpositions of particles are, however, very fragile: if the position of the particle is measured, or even if it can in principle be known, the superposition is lost. In other words, it is not possible to know the position of the particle and simultaneously observe interference. Such a connection between information and interference is an example of quantum complementarity - a principle originally proposed by Niels Bohr. Because of this fragility, it is very challenging to observe and maintain superpositions of particles: even a very weak interaction of the particle with its surroundings leads to the demise of quantum interference. But even though the loss of superpositions is a nuisance in many quantum experiments, the newly proposed scheme to probe general relativity in conjunction with quantum mechanics actually builds upon this complementarity principle.

The novel idea developed in the group of Prof. Č. Brukner is to use a single clock (which can be any particle with evolving internal degrees of freedom, such as spin) that is brought into a superposition of two locations - one closer to and one further away from the surface of the Earth. Afterwards, the two parts of the superposition are brought back together, and one observes whether or not an interference pattern is produced. According to general relativity, the clock ticks at a different rate depending on its location. But since the time measured by the clock reveals information about where the clock was located, the interference and the wave nature of the clock should be lost. The amount of quantum interference that is lost thus becomes a measure of the general relativistic redshift. To describe this effect, both general relativity and quantum mechanics are required. Such an interplay between the two theories has never before been probed in experiments. This is therefore the first proposal for an experiment that allows testing the genuine general relativistic notion of time in conjunction with quantum complementarity.

A single clock is brought into a quantum superposition of two locations: closer to and further away from the surface of the Earth. Because of the gravitational redshift, the time shown by the clock reveals information about the clock's location. Thus, according to the quantum complementarity principle, interference and the quantum wave nature of the clock will be lost.

In the setup described above, the loss of quantum interference becomes a tool to measure the general relativistic time dilation. It is not even necessary to read out the clock itself: The sheer existence of the clock is sufficient to destroy the interference. But since quantum interference effects are very fragile, it is important to verify that their demise is really caused by the distortion of the flow of time. This can be done by performing the same experiment in two different ways: one where the clock is running, as described above, and one where the clock is “switched off”. In the latter case the quantum interference should become visible, as opposed to the former case.

A further application of the proposed experiment is that it can also test new physical theories. For example, in the context of theories that aim to combine general relativity and quantum mechanics into a single framework, it has been proposed that every particle carries a clock with itself, which measures time along its path. Such a possibility can be probed by the proposed experiment without directly measuring this hypothetical internal clock: if quantum interference is lost even when the clock controlled by the experimentalist (for example, the evolution of the particle's spin) is switched off, one can infer that there is an intrinsic mechanism which keeps track of time by itself. On the other hand, if interference is observed, the existence of an internal clock can be ruled out.

Another interesting possibility is that the quantum interference persists even with the experimentally controlled clock turned on. This would mean that quantum mechanics or general relativity breaks down when phenomena inherent to both theories become relevant. Such a scale has never been accessible for experimental tests so far.

To experimentally observe the predicted interplay of quantum interference and the gravitational redshift, three parameters matter: the height difference between the two locations at which the particle is held in superposition, the time the particle is kept in the superposition, and the ticking rate of the clock. The larger any of these values, the easier it is to observe the effect. Currently, the most promising systems for such an experiment are single atoms: they can be brought into superpositions in atomic fountains, and their internal states can serve as atomic clocks. Other systems could also be used to perform the experiment: neutrons, electrons and even large molecules. There has been rapid experimental progress both in the precision of clocks and in the size of the superpositions that can be created and maintained in the laboratory. It is therefore possible that the proposed experiment with quantum clocks can be realized within the next few years.
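
One can estimate the expected visibility loss from these three parameters; here is a minimal Python sketch, using the proper-time difference Δτ = gΔh·T/c² from the gravitational redshift and the visibility relation V = |cos(ΔE·Δτ/2ℏ)| for a two-level clock of energy gap ΔE, following the published analysis [1] (the numerical inputs below are hypothetical):

    import math

    G_ACC = 9.81      # Earth's surface gravity [m/s^2]
    C = 2.998e8       # speed of light [m/s]
    HBAR = 1.055e-34  # reduced Planck constant [J s]

    dh = 1.0           # height difference of the superposed paths [m] (hypothetical)
    t_hold = 1.0       # time spent in superposition [s] (hypothetical)
    delta_e = 1.6e-19  # clock's internal energy gap [J] (~1 eV, hypothetical)

    dtau = G_ACC * dh * t_hold / C**2                        # proper-time difference
    visibility = abs(math.cos(delta_e * dtau / (2 * HBAR)))  # interferometric visibility

    print(f"Proper-time difference: {dtau:.2e} s")       # ~1e-16 s
    print(f"Interference visibility: {visibility:.6f}")  # barely below 1 here

For these values the visibility barely drops below unity, which is why larger separations, longer hold times, or faster clocks are needed.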

Both quantum mechanics and general relativity seem to be universal theories, though we still don’t know how to properly combine them in a universal framework. New phenomena are expected at some scale at the interplay between the two theories. Only experimentally probing this interplay may give a hint as to how to proceed in constructing a unifying description of nature.

Reference
[1] Magdalena Zych, Fabio Costa, Igor Pikovski & Časlav Brukner. "Quantum interferometric visibility as a witness of general relativistic proper time". Nature Communications, 2:505 doi: 10.1038/ncomms1498 (2011). Full Article: PDF, HTML.



Sunday, September 04, 2011

Black Hole Evaporation Rates without Spacetime

Samuel L. Braunstein

Author: Samuel L. Braunstein

Affiliation: Professor of Quantum Computation, University of York, UK


Why black holes are so important to physics

In the black hole information paradox, Hawking pointed out an apparent contradiction between quantum mechanics and general relativity so fundamental that some thought any resolution might lead to new physics. For example, it has recently been suggested that gravity, inertia and even spacetime itself may be emergent properties of a theory relying on the thermodynamic properties across black hole event horizons [1]. All these paradoxes and prospects for new physics ultimately rely on thought experiments to piece together more detailed calculations, each of which by itself gives only part of the full picture. Our work "Black hole evaporation rates without spacetime" adds another such calculation [2], which may help focus future work.

The paradox, a simple view

In its simplest form, the paradox may be stated as follows. In classical general relativity, the event horizon of a black hole represents a point of no return, acting as a perfect semi-permeable membrane: anything can pass the event horizon without even noticing it, yet nothing can escape, not even light. Hawking partly changed this view by using quantum theory to prove that black holes radiate their mass as ideal thermal radiation. Therefore, if matter collapsed to form a black hole which itself then radiated away entirely as formless radiation, the original information content of the collapsing matter would have vanished. Now, information preservation is fundamental to unitary evolution, so its failure in black hole evaporation would signal a manifest failure of quantum theory itself. This "paradox" encapsulates a profound clash between quantum mechanics and general relativity.

To help provide intuition about his result, Hawking presented a heuristic picture of black hole evaporation in terms of pair creation outside a black hole's event horizon. The usual description of this process has one member of the pair carrying negative energy as it falls into the black hole past its event horizon, while the other carries sufficient energy to escape to infinity, appearing as Hawking radiation. Overall, energy is conserved, and the black hole loses mass by absorbing negative energy. This heuristic mechanism actually strengthens the "classical causal" structure of the black hole's event horizon as a perfect semi-permeable (one-way) membrane. The paradox seems unassailable.

Scratching the surface of the paradox

This description of Hawking radiation as pair creation is seemingly ubiquitous (virtually any web page providing an explanation of Hawking radiation will invoke pair creation).

Nonetheless, there are good reasons to believe this heuristic description may be wrong [3]. Put simply, every created pair will be quantum mechanically entangled. If the members of each pair are then distributed to either side of the event horizon, the so-called rank of entanglement across the horizon will increase with each and every quantum of Hawking radiation produced. One would thus conclude that even as the black hole's mass decreases through Hawking radiation, its internal (Hilbert space) dimensionality actually increases.

For black holes to be able to eventually vanish, the original Hawking picture of a perfectly semi-permeable membrane must fail at the quantum level. In other words, this "entanglement overload" implies a breakdown of the classical causal structure of a black hole. Whereas previously entanglement overload had been viewed as an absolute barrier to resolving the paradox [3], we argue [2,4] that the above statements already point to the likely solution.

Evaporation as tunneling

The most straightforward way to evade entanglement overload is for the Hilbert space within the black hole to "leak away". Quantum mechanically we would call such a mechanism tunneling. Indeed, for over a decade now, such tunneling, out and across the event horizon, has proved a useful way of computing black hole evaporation rates [5].

Spacetime free conjecture

In our paper [2] we suggest that the evaporation across event horizons operates by Hilbert space subsystems from the black hole interior moving to the exterior. This may be thought of as some unitary process which samples the interior Hilbert space; picks out some subsystem and ejects it as Hawking radiation. Our manuscript primarily investigates the consequences of this conjecture applied specifically to event horizons of black holes.

At this point a perceptive reader might ask how, and to what extent, our paper sheds light on the physics of black hole evaporation. First, the consensus appears to be that the physics of event horizons (cosmological, black hole, or those due to acceleration) is universal. It is precisely because of this generality that one should not expect this Hilbert space description of evaporation at event horizons to bear the signatures of the detailed physics of black holes; as explained in the next section, we instead impose the details of that physics onto the evaporative process. Second, sampling the Hilbert space at or near the event horizon may or may not represent fair sampling of the entire black hole interior. This issue is also discussed below (and in more detail in the paper [2]).

Imposing black hole physics

We rely on a few key pieces of black hole physics: the no-hair theorem and the existence of Penrose processes. We are interested in a quantum mechanical representation of a black hole. At first sight this may seem preposterous in the absence of a theory of quantum gravity. Here, we propose a new approach that steers clear of gravitational considerations: we derive a quantum mechanical description of a black hole by ascribing to it various properties based on those of classical black holes. (This presumes that any quantum mechanical representation of a black hole has a direct correspondence to its classical counterpart.) In particular, like classical black holes, our quantum black hole should be described by the classical no-hair properties of mass, charge and angular momentum. Furthermore, these quantum mechanical black holes should transform among each other just as their classical counterparts do when absorbing or scattering particles, i.e., when they undergo so-called Penrose processes. By imposing conditions consistent with these classical properties, we obtain a Hilbert space description of quantum tunneling across the event horizons of completely generic black holes. Crucially, this description of black hole evaporation does not involve the detailed curved spacetime geometry of a black hole; in fact, it does not require spacetime at all. Finally, in order to compute the actual dynamics of evaporation, we need to invoke one more property of a black hole: its enormous dimensionality.

Tunneling probabilities

The Hilbert space dimensionalities needed to describe a black hole are vast (at least 10^(10^77) for a stellar-mass black hole). For such dimensionalities, random matrix theory tells us that the statistical behavior of tunneling (as a sampling of Hilbert space subsystems) is excellently approximated by treating tunneling as a completely random process. This immediately imposes a number of symmetries on our description of black hole evaporation. We can then completely determine the tunneling probabilities as functions of the classical no-hair quantities [2]. These tunneling probabilities are nothing but the black hole evaporation rates. In fact, they are precisely the quantities computed using standard field theoretic methods (which all rely on the curved black hole geometry). The calculation of tunneling probabilities thus provides a way of validating our approach and making our results predictive.
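
The quoted dimensionality can be checked by exponentiating the Bekenstein-Hawking entropy; a minimal Python sketch, assuming (as is standard) that the dimension is of order exp(S), with S the horizon entropy in units of k_B:

    import math

    G = 6.674e-11     # gravitational constant [m^3 kg^-1 s^-2]
    C = 2.998e8       # speed of light [m/s]
    HBAR = 1.055e-34  # reduced Planck constant [J s]
    M_SUN = 1.989e30  # solar mass [kg]

    m = M_SUN                                  # a one-solar-mass Schwarzschild hole
    s = 4 * math.pi * G * m**2 / (HBAR * C)    # Bekenstein-Hawking entropy S/k_B
    log10_dim = s / math.log(10)               # dimension ~ e^S = 10^(S / ln 10)

    print(f"S/k_B     ~ {s:.2e}")              # ~1e77
    print(f"dimension ~ 10^({log10_dim:.2e})")

Since S grows as the square of the mass, a black hole of a few solar masses already exceeds the 10^(10^77) figure.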

The proof of the pudding: validation and predictions

Our results reproduce Hawking's thermal spectrum (in the appropriate limit), and reproduce his relation between the temperature of black hole radiation and the black hole's thermodynamic entropy.

When Hawking's semi-classical analysis was extended by field theorists to include backreaction from the outgoing radiation on the geometry of the black hole a modified non-thermal spectrum was found [5]. The incorporation of backreaction comes naturally in our quantum description of black hole evaporation (in the form of conservation laws). Indeed, our results show that black holes that satisfy these conservation laws are not ideal but "real black bodies" that exhibit a non-thermal spectrum and preserve thermodynamic entropy.

These results support our conjecture for a spacetime free description of evaporation across black hole horizons.

Our analysis not only reproduces these famous results [5] but extends them to all possible black hole and evaporated particle types in any (even extended) gravity theories. Unlike field theoretic approaches we do not need to rely on one-dimensional WKB methods which are limited to the analysis of evaporation along radial trajectories and produce results only to lowest orders in ℏ.

Finally, our work quite generally predicts that a direct functional relation exists between the irreducible mass associated with a Penrose process and a black hole's thermodynamic entropy. This in turn implies a breakdown of Hawking's area theorem in extended gravity theories.


And the paradox itself

The ability to focus on event horizons is key to the progress we have made in deriving a quantum mechanical description of evaporation. By contrast, the physics deep inside the black hole is more elusive. If unitarity holds globally, then our spacetime-free conjecture can be used to describe the entire time-course of a black hole's evaporation and to learn how the information is retrieved (see e.g. [6]). Specifically, in a unitarily evaporating black hole there should exist some thermalization process such that, after what has been dubbed the black hole's global thermalization (or scrambling) time, information that was encoded deep within the black hole can reach or approach its surface, where it may be selected for evaporation as radiation. Alternatively, if the interior of the black hole is not unitary, some or all of this deeply encoded information may never reappear in the Hawking radiation. Unfortunately, any analysis relying primarily on physics at or across the horizon cannot shed light on the question of unitarity (which lies at the heart of the black hole information paradox).

The bigger picture

At this stage we might take a step back and ask the obvious question: Does quantum information theory really bear any connection with the subtle physics associated with black holes and their spacetime geometry? After all we do not yet have a proper theory of quantum gravity. However, whatever form such a theory may take, it should still be possible to argue, either due to the Hamiltonian constraint of describing an initially compact object with finite mass, or by appealing to holographic bounds, that the dynamics of a black hole must be effectively limited to a finite-dimensional Hilbert space. Moreover, one can identify the most likely microscopic mechanism of black hole evaporation as tunneling. Formally, these imply that evaporation should look very much like our sampling of Hilbert space subsystems from the black hole interior for ejection as radiation [2,4,6]. Although finite, the dimensionalities of the Hilbert space are immense and from standard results in random unitary matrix theory and global conservation laws we obtain a number of invariances. These invariances completely determine the tunneling probabilities without needing to know the detailed dynamics (i.e., the underlying Hamiltonian). This result puts forth the Hilbert space description of black hole evaporation as a powerful tool. Put even more strongly, one might interpret the analysis presented as a quantum gravity calculation without any detailed knowledge of a theory of quantum gravity except the presumption of unitarity [2].

Hints of an emergent gravity

Verlinde recently suggested that gravity, inertia, and even spacetime itself may be emergent properties of an underlying thermodynamic theory [1]. This vision was motivated in part by Jacobson's 1995 surprise result that the Einstein equations of gravity follow from the thermodynamic properties of event horizons [7]. For Verlinde's suggestion not to collapse into some kind of circular reasoning we would expect the physics across event horizons upon which his work relies to be derivable in a spacetime free manner. It is exactly this that we have demonstrated is possible in our manuscript [2]. Our work, however, provides a subtle twist: Rather than emergence from a purely thermodynamic source, we should instead seek that source in quantum information.


In summary, this work [2,4]:
  • shows that the classical picture of black hole event horizons as perfectly semi-permeable almost certainly fails quantum mechanically
  • provides a microscopic spacetime-free mechanism for Hawking radiation
  • reproduces known results about black hole evaporation rates
  • authenticates random matrix theory for the study of black hole evaporation
  • predicts the detailed black hole spectrum beyond WKB
  • predicts that black hole area must be replaced by some other property in any generalized area theorem for extended gravities
  • provides a quantum gravity calculation based on the presumption of unitarity, and
  • provides support for suggestions that gravity, inertia and even spacetime itself could come from spacetime-free physics across event horizons

References
[1] E. Verlinde, "On the origin of gravity and the laws of Newton", JHEP 04 (2011) 029. Abstract.
[2] S.L. Braunstein and M.K. Patra, "Black Hole Evaporation Rates without Spacetime", Phys. Rev. Lett. 107, 071302 (2011). Abstract. Article (pdf).
[3] H. Nikolic, "Black holes radiate but do not evaporate", Int. J. Mod. Phys. D 14, 2257 (2005). Abstract; S.D. Mathur, "The information paradox: a pedagogical introduction", Class. Quantum Grav. 26, 224001 (2009). Abstract.
[4] Supplementary Material to [2] at http://link.aps.org/supplemental/10.1103/PhysRevLett.107.071302.
[5] M.K. Parikh and F. Wilczek, "Hawking Radiation As Tunneling", Phys. Rev. Lett. 85, 5042 (2000). Abstract.
[6] S.L. Braunstein, S. Pirandola and K. Życzkowski, "Entangled black holes as ciphers of hidden information", arXiv:0907.1190.
[7] T. Jacobson, "Thermodynamics of Spacetime: The Einstein Equation of State", Phys. Rev. Lett. 75, 1260 (1995). Abstract.



Sunday, June 20, 2010

Quantum Gravity and Entanglement

Mark Van Raamsdonk

[Every year since 1949 the Gravity Research Foundation has honored the best submitted essays in the field of gravity. This year's prize goes to Mark Van Raamsdonk for his essay "Building Up Spacetime with Quantum Entanglement". The five award-winning essays will be published in the Journal of General Relativity and Gravitation (GRG) and subsequently in a special issue of the International Journal of Modern Physics D (IJMPD). Today we present an invited article from Prof. Van Raamsdonk on his current work.
-- 2Physics.com ]

Author: Mark Van Raamsdonk
Affiliation: Department of Physics and Astronomy, University of British Columbia, Vancouver, Canada

Quantum Mechanics and Entanglement :

The development of quantum mechanics in the early 20th century is surely one of the most remarkable achievements of mankind. Quantum mechanics is fundamentally different from the physical theories developed earlier to describe physics on macroscopic scales, yet it is absolutely essential for understanding atomic-scale physics. At the heart of quantum mechanics is the idea of quantum superposition: in quantum mechanics, objects can in some sense be in two places at once (or, more generally, in two physical configurations at once). Mathematically speaking, every state of a physical system can be associated with a kind of vector, and if A and B are vectors representing two allowed physical configurations, then A + B is also an allowed physical state. In a simple example, A could be a state where an object is in one place, and B could be a state where the same object is in a different place; A + B then represents a state where the single object has no definite location. If a measurement of the object's location is performed, no definite prediction of the result is possible; we will find it in one location or the other, and quantum mechanics can at best predict the probability of each possible outcome.

2Physics articles by past winners of the Gravity Research Foundation award:
Alexander Burinskii (2009): "Beam Pulses Perforate Black Hole Horizon"
T. Padmanabhan (2008): "Gravity : An Emergent Perspective"
Steve Carlip (2007): "Symmetries, Horizons, and Black Hole Entropy"


Intimately related to the idea of quantum superposition is the notion of quantum entanglement. If we have a physical system with two parts (e.g. a ball and a box) then in a general quantum state, we cannot say with certainty what is the state of the ball (e.g. whether or not it is in the box) or what is the state of the box (e.g. whether the box is open or closed). But for certain quantum states, this uncertainty can be correlated for the two objects. For example, suppose A represents a quantum state where the ball is in the box and the box is closed, and B represents a quantum state where the ball is not in the box and the box is open. Then in the state A + B, neither the location of the ball nor the state of the box is definite, but a measurement which determines the state of the box effectively also determines the location of the ball: if we measure the state A+B and find the box closed, we can be sure that the ball is in the box; if we find it open, we can be sure the ball is not in the box. In this situation, we say that the ball and the box are entangled, since a measurement of one part of the system influences the quantum state of the other part of the system. In practice, it would be exceedingly difficult to prepare a macroscopic system such as a ball and box in such an entangled state, but such situations are commonplace at the atomic scale. The phenomenon of entanglement is an intrinsically quantum phenomenon; indeed, it can be shown that a computer making use of quantum entanglement can perform certain calculations far faster than any ordinary computer; entanglement is the basic property of quantum systems that allows quantum computation.
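
The ball-and-box example can be written down concretely; here is a minimal sketch in Python, encoding "ball in box, box closed" as |0⟩|0⟩ and "ball out, box open" as |1⟩|1⟩ (the encoding is an illustrative choice, not from the original text):

    import numpy as np

    ket0 = np.array([1.0, 0.0])   # |0>: ball in box / box closed
    ket1 = np.array([0.0, 1.0])   # |1>: ball out of box / box open

    # The entangled state (A + B)/sqrt(2), with A = |0>|0> and B = |1>|1>
    state = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)

    # Joint outcome probabilities: rows = ball (in/out), columns = box (closed/open)
    probs = (state**2).reshape(2, 2)
    print(probs)
    # [[0.5 0. ]
    #  [0.  0.5]]
    # Only the correlated outcomes occur: finding the box closed guarantees
    # the ball is inside, exactly as described above.
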

Strange as they may seem, the rules of quantum mechanics have now been tested beyond any reasonable doubt and allow us to understand physical processes in nature with incredible precision. For certain properties of elementary particles, predictions based on quantum mechanics have been shown to be correct to one part in 100,000,000 or better. We now have a fully quantum mechanical description (known as quantum field theory) of the strong, weak, and electromagnetic forces that allows us to understand how these interactions operate even at distance scales 100,000,000,000,000 times smaller than we can resolve with our eyes.

Quantum Gravity :

The approach that allowed physicists to develop a quantum mechanical theory for the strong, weak, and electromagnetic forces turns out not to work when applied to the remaining force, the force of gravity. In fact, it fails miserably. As a result, finding the correct quantum mechanical theory of gravity has been a prominent open question for decades; indeed it is one of the greatest challenges in theoretical physics. While Einstein's Theory of General Relativity is almost entirely adequate for the purposes of describing the observed gravitational dynamics of planets, stars, galaxies, and even the expansion of the universe as a whole, it cannot be the whole story, since it does not incorporate the quantum mechanical principles that are believed to underlie all physics in our universe. Usually, a quantum mechanical description of nature is only necessary at very short distance scales; at macroscopic distance scales, pre-20th-century "classical" physics provides an excellent approximation. But there are certain situations, such as in the interior of a black hole, in the early universe just after the big bang, or in a hypothetical scattering of particles with energies many orders of magnitude larger than we can currently produce in an accelerator, where gravitational effects would be important at distance scales small enough that a quantum mechanical description of the physics is essential. Finding the right theory of quantum gravity is essential if we want to fully understand the workings of nature.

String theory and the AdS/CFT correspondence :

One example of a theory that is fully quantum-mechanical but also includes gravitational physics is provided by string theory. Until the mid 1990s, the mathematical description of string theory was such that it allowed only relatively simple calculations; for example, one could predict the results for scattering of a fixed number of particles (including gravitons) on some fixed spacetime background (e.g. flat spacetime). This was not an entirely satisfactory situation. We recall that in Einstein's theory of gravity, space itself is a dynamical entity that can be curved or warped by matter and energy; it is the effect of this warping on other objects that gives rise to gravitational "forces." In a complete theory of quantum gravity, different quantum states should correspond to spacetimes with different geometries (i.e. different warpings); the original formulation of string theory could most readily describe only different types of particles on a fixed geometry.

The situation for string theory changed dramatically between 1995 and 1997 in what is now known as “the Second Superstring Revolution.” (The first revolution was the period in the mid 1980s when it became clear that the original formulation of string theory was mathematically consistent.) This period culminated in a stunning proposal by Juan Maldacena known as the AdS/CFT correspondence, or gauge theory / gravity duality. (This followed an earlier proposal of the same nature by Tom Banks, Willy Fischler, Steve Shenker, and Lenny Susskind). The proposal states that there is an exact equivalence between certain examples of string theory (full-fledged theories of quantum gravity) and certain ordinary quantum mechanical systems without gravity (often quantum field theories). These much simpler ordinary quantum mechanical systems suffer none of the restrictions found in the original formulation of string theory, and thus, via the equivalence, may be used to provide a complete formulation of the corresponding string theory, able to quantum mechanically describe gravity and other forces on a spacetime which can fluctuate dynamically. Remarkably, this much better formulation of string theory turns out to be no more complicated than the quantum mechanical description of the other forces, completely understood almost half a century ago.

Geometry from Entanglement :

According to the AdS/CFT correspondence, there must be a dictionary that allows us to associate to every state of some conventional quantum mechanical system a state of the corresponding equivalent quantum gravity theory. Different states in the quantum mechanics correspond to different spacetime geometries (i.e. different distributions of matter and a different warping of space). For example, the quantum state A might correspond to completely empty space, while the state B corresponds to space with some gravitational waves, and state C corresponds to a space with orbiting black holes. While the dictionary between quantum state and corresponding spacetime is known for very simple states, more generally the correspondence is far from obvious. Ideally, one would like to know the gravity interpretation for an arbitrary quantum state of the conventional system; understanding the general dictionary is a crucial open question for the field.

The central suggestion in my essay [1] is that crucial information about what the spacetime associated with a given quantum state looks like is contained in how the various parts of the ordinary quantum system are entangled with each other in that state. While the arguments rely on some specific results in string theory, it is not difficult to give some sense of where the idea comes from.

To start, suppose that a specific quantum system has a corresponding gravity theory such that each state of the system corresponds to some spacetime. Now consider a second quantum system, which we obtain by taking two copies of the first system (with no physical interactions between the two systems). For the larger system, the simplest states are those with no entanglement between the two parts. That is, we can consider a state A = (A1,A2) in which the first system is in state A1 and the second system is in state A2. Now A1 and A2 each correspond to some particular spacetime according to the AdS/CFT correspondence. Thus, we can interpret the state A of the larger system as corresponding to two completely disconnected spacetimes (imagine our universe and some parallel universe with which there is no possible communication).

More generally, we can consider states which are quantum superpositions such as (A1,A2) + (B1,B2) . For such states, there is entanglement between the two parts. In [1], based on various earlier works, I pointed out that for states with enough entanglement (certain states which are quantum superpositions (A1,A2) + (B1,B2) + (C1,C2) + … with many states in the superposition) the resulting complicated state can be interpreted as a single connected spacetime, in which two distinct parts are connected by something like a wormhole (or a black hole/white hole). Since all the individual states in the superposition had interpretations as disconnected spacetimes, we can say that a quantum superposition of disconnected spacetimes has produced a connected spacetime. Alternately, we can say that by entangling the two parts of our original quantum system, we have managed to connect up two parts of the corresponding spacetime.
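"Enough entanglement" can be made quantitative with the entanglement entropy, computed from the Schmidt decomposition of the state. The toy sketch below (an illustration in a small finite-dimensional system, not a string-theory calculation) shows the entropy between two copies of a system growing as more paired states are superposed:

```python
import numpy as np

def entanglement_entropy(psi, dim1, dim2):
    """Von Neumann entropy of subsystem 1 for a pure state of a bipartite
    system, computed from the Schmidt (singular value) decomposition."""
    coeffs = psi.reshape(dim1, dim2)
    s = np.linalg.svd(coeffs, compute_uv=False)
    p = s**2 / np.sum(s**2)          # Schmidt probabilities
    p = p[p > 1e-12]
    return -np.sum(p * np.log(p))

dim = 8   # toy Hilbert-space dimension for each copy of the system

for n_terms in (1, 2, 4, 8):
    # Superposition (A1,A2) + (B1,B2) + ... with n_terms paired states:
    psi = np.zeros(dim * dim)
    for k in range(n_terms):
        Ak = np.zeros(dim)
        Ak[k] = 1.0                  # k-th basis state of each copy
        psi += np.kron(Ak, Ak)
    psi /= np.linalg.norm(psi)
    print(n_terms, entanglement_entropy(psi, dim, dim))
# The entropy grows as log(n_terms): more terms in the superposition
# means more entanglement between the two copies.
```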

Starting from this hint of a connection between entanglement and spacetime geometry, one can argue that more quantitative measures of entanglement in states of a quantum system give direct information about quantitative geometrical quantities in the corresponding spacetimes, such as areas and geodesic distances. The complete picture for how to deduce the spacetime associated with a particular state in the AdS/CFT correspondence is certainly still beyond our reach, but I believe these connections between entanglement and geometry may be an important part of the story. If correct, they suggest a deep connection between quantum gravity and quantum information theory (the natural setting for studies of entanglement in quantum systems) that may be of fundamental importance.
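One concrete relation of this kind comes from the earlier work of Ryu and Takayanagi, among the results that proposals like this build on: the entanglement entropy S(A) of a region A of the ordinary quantum system is computed by the area of a minimal surface γ_A in the corresponding spacetime,

S(A) = Area(γ_A) / (4 G_N),

where G_N is Newton's constant. In this sense, more entanglement in the quantum state literally corresponds to more area in the geometry.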

References
[1] Mark Van Raamsdonk, "Building up spacetime with quantum entanglement", arXiv:1005.3035.
[2] Michael A. Nielsen and Isaac L. Chuang, "Quantum Computation and Quantum Information" (Cambridge University Press, Cambridge, 2000).
[3] Juan Maldacena, "The Illusion of Gravity", Scientific American, November 2005.
[4] Brian Greene, "The Elegant Universe: Superstrings, Hidden Dimensions, and the Quest for the Ultimate Theory" (Vintage Series, Random House Inc, February 2000).



Sunday, March 28, 2010

General Relativity Is Valid On Cosmic Scale

Uros Seljak [photo courtesy: University of California, Berkeley]

An analysis of more than 70,000 galaxies by University of California, Berkeley, University of Zurich and Princeton University physicists demonstrates that the universe – at least up to a distance of 3.5 billion light years from Earth – plays by the rules set out 95 years ago by Albert Einstein in his General Theory of Relativity.

By calculating the clustering of these galaxies, which stretch nearly one-third of the way to the edge of the universe, and analyzing their velocities and distortion from intervening material, the researchers have shown that Einstein's theory explains the nearby universe better than alternative theories of gravity.

One major implication of the new study is that the existence of dark matter is the most likely explanation for the observation that galaxies and galaxy clusters move as if under the influence of some unseen mass, in addition to the stars astronomers observe.

A partial map of the distribution of galaxies in the Sloan Digital Sky Survey, going out to a distance of 7 billion light years. The amount of galaxy clustering that we observe today is a signature of how gravity acted over cosmic time, and allows us to test whether general relativity holds over these scales. (M. Blanton, Sloan Digital Sky Survey)

"The nice thing about going to the cosmological scale is that we can test any full, alternative theory of gravity, because it should predict the things we observe," said co-author Uros Seljak, a professor of physics and of astronomy at UC Berkeley and a faculty scientist at Lawrence Berkeley National Laboratory who is currently on leave at the Institute of Theoretical Physics at the University of Zurich. "Those alternative theories that do not require dark matter fail these tests."

In particular, the tensor-vector-scalar gravity (TeVeS) theory, which tweaks general relativity to avoid resorting to the existence of dark matter, fails the test.

The result conflicts with a report late last year that the universe at earlier epochs, between 8 and 11 billion years ago, deviated from the general relativistic description of gravity.

Seljak and his current and former students, including first authors Reinabelle Reyes, a Princeton University graduate student, and Rachel Mandelbaum, a recent Princeton Ph.D. recipient, report their findings in the March 11 issue of the journal Nature [1]. The other co-authors are Tobias Baldauf, Lucas Lombriser and Robert E. Smith of the University of Zurich, and James E. Gunn, professor of physics at Princeton and father of the Sloan Digital Sky Survey.

Einstein's General Theory of Relativity holds that gravity warps space and time, which means that light bends as it passes near a massive object, such as the core of a galaxy. The theory has been validated numerous times on the scale of the solar system, but tests on a galactic or cosmic scale have been inconclusive.

"There are some crude and imprecise tests of general relativity at galaxy scales, but we don't have good predictions for those tests from competing theories," Seljak said.

An image of a galaxy cluster in the Sloan Digital Sky Survey, showing some of the 70,000 bright elliptical galaxies that were analyzed to test general relativity on cosmic scales. (Sloan Digital Sky Survey)

Such tests have become important in recent decades because the idea that some unseen mass permeates the universe disturbs some theorists and has spurred them to tweak general relativity to get rid of dark matter. TeVeS, for example, says that the acceleration caused by the gravitational force from a body depends not only on the mass of that body, but also on the magnitude of the acceleration itself, with departures from Newtonian behavior appearing at very small accelerations.

The discovery of dark energy, an enigmatic force that is causing the expansion of the universe to accelerate, has led to other theories, such as one dubbed f(R), to explain the expansion without resorting to dark energy.

Tests to distinguish between competing theories are not easy, Seljak said. A theoretical cosmologist, he noted that cosmological experiments, such as detections of the cosmic microwave background, typically involve measurements of fluctuations in space, while gravity theories predict relationships between density and velocity, or between density and gravitational potential.

"The problem is that the size of the fluctuation, by itself, is not telling us anything about underlying cosmological theories. It is essentially a nuisance we would like to get rid of," Seljak said. "The novelty of this technique is that it looks at a particular combination of observations that does not depend on the magnitude of the fluctuations. The quantity is a smoking gun for deviations from general relativity."

Three years ago, a team of astrophysicists led by Pengjie Zhang of Shanghai Observatory suggested using a quantity dubbed EG to test cosmological models. EG reflects the amount of clustering in observed galaxies and the amount of distortion of galaxies caused by light bending as it passes through intervening matter, a process known as weak lensing. Weak lensing can make a round galaxy look elliptical, for example.

"Put simply, EG is proportional to the mean density of the universe and inversely proportional to the rate of growth of structure in the universe," he said. "This particular combination gets rid of the amplitude fluctuations and therefore focuses directly on the particular combination that is sensitive to modifications of general relativity."

Using data on more than 70,000 bright, and therefore distant, red galaxies from the Sloan Digital Sky Survey, Seljak and his colleagues calculated EG and compared it to the predictions of TeVeS, f(R) and the cold dark matter model of general relativity enhanced with a cosmological constant to account for dark energy.

The predictions of TeVeS were outside the observational error limits, while general relativity fit nicely within the experimental error. The EG predicted by f(R) was somewhat lower than that observed, but within the margin of error.

In an effort to reduce the error and thus test theories that obviate dark energy, Seljak hopes to expand his analysis to perhaps a million galaxies when SDSS-III's Baryon Oscillation Spectroscopic Survey (BOSS), led by a team at LBNL and UC Berkeley, is completed in about five years. To reduce the error even further, by perhaps as much as a factor of 10, requires an even more ambitious survey called BigBOSS, which has been proposed by physicists at LBNL and UC Berkeley, among other places.

Future space missions, such as NASA's Joint Dark Energy Mission (JDEM) and the European Space Agency's Euclid mission, will also provide data for a better analysis, though perhaps 10-15 years from now.

Seljak noted that these tests do not tell astronomers the actual identity of dark matter or dark energy. That can only be determined by other types of observations, such as direct detection experiments.

Reference
[1] Reinabelle Reyes, Rachel Mandelbaum, Uros Seljak, Tobias Baldauf, James E. Gunn, Lucas Lombriser, Robert E. Smith, "Confirmation of general relativity on large scales from weak lensing and galaxy velocities", Nature, 464, 256-258 (2010). Abstract.

[This report is written by Robert Sanders of the University of California, Berkeley]



Sunday, March 14, 2010

Gravitational Lenses Measure the Age and Size of the Universe



Phil Marshall (KIPAC, SLAC/Stanford) demonstrates lensing using a wine glass. [Video courtesy of Brad Plummer/Julie Karceski (SLAC)].


Using entire galaxies as lenses to look at other galaxies, researchers have a newly precise way to measure the size and age of the universe and how rapidly it is expanding, on a par with other techniques. The measurement determines a value for the Hubble constant, which sets the scale of the universe, and confirms its age as 13.75 billion years, to within 170 million years. The results also confirm the strength of dark energy, which is responsible for accelerating the expansion of the universe.
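To see how a value of the Hubble constant translates into an age, here is a back-of-the-envelope sketch integrating the Friedmann equation for a flat universe with matter and dark energy; the parameter values are illustrative choices of a plausible size, not the paper's exact fit:

```python
import numpy as np
from scipy.integrate import quad

# Age of a flat LambdaCDM universe: t0 = integral_0^inf dz / ((1+z) H(z)).
H0 = 70.6                   # km/s/Mpc (illustrative)
Omega_m, Omega_L = 0.28, 0.72

km_per_Mpc = 3.0857e19
H0_per_sec = H0 / km_per_Mpc
Gyr = 3.156e16              # seconds per gigayear

def integrand(z):
    Hz = H0_per_sec * np.sqrt(Omega_m * (1 + z)**3 + Omega_L)
    return 1.0 / ((1 + z) * Hz)

t0, _ = quad(integrand, 0, np.inf)
print(t0 / Gyr)             # ~13.6 Gyr, close to the quoted 13.75
```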

These results, by researchers at the Kavli Institute for Particle Astrophysics and Cosmology (KIPAC) at the US Department of Energy's SLAC National Accelerator Laboratory and Stanford University, the University of Bonn, and other institutions in the United States and Germany, are published in the March 1 issue of The Astrophysical Journal [1]. This research was supported in part by the Department of Energy Office of Science. The authors of the paper are S. Suyu of the University of Bonn, P. Marshall of KIPAC, M. W. Auger (University of California, Santa Barbara), S. Hilbert (Argelander Institut für Astronomie and Max-Planck-Institut für Astrophysik), R. D. Blandford (KIPAC), L. V. E. Koopmans (Kapteyn Astronomical Institute), C. D. Fassnacht (University of California, Davis), and T. Treu (University of California, Santa Barbara).


Sherry Suyu describes the recent measurements of the age of the universe [Video Courtesy: uni-bonn.tv /University of Bonn, Germany]


The researchers used data collected by the NASA/ESA Hubble Space Telescope and showed the improved precision these data provide in combination with measurements from the Wilkinson Microwave Anisotropy Probe (WMAP).

The team used a technique called gravitational lensing to measure the distances light traveled from a bright, active galaxy to the earth along different paths. By understanding the time it took to travel along each path and the effective speeds involved, researchers could infer not just how far away the galaxy lies but also the overall scale of the universe and some details of its expansion.

Oftentimes it is difficult for scientists to distinguish between a very bright light far away and a dimmer source lying much closer. A gravitational lens circumvents this problem by providing multiple clues as to the distance light travels. That extra information allows them to determine the size of the universe, often expressed by astrophysicists in terms of a quantity called Hubble's constant.

The B1608+656 system, in which light from a distant galaxy reaches Earth along four distinct paths around an intervening lens galaxy. [Image courtesy Sherry Suyu of the Argelander Institut für Astronomie in Bonn, Germany]


"We've known for a long time that lensing is capable of making a physical measurement of Hubble's constant," KIPAC's Phil Marshall said. However, gravitational lensing had never before been used in such a precise way. This measurement provides an equally precise measurement of Hubble's constant as long-established tools such as observation of supernovae and the cosmic microwave background. "Gravitational lensing has come of age as a competitive tool in the astrophysicist's toolkit," Marshall said.

When a large nearby object, such as a galaxy, blocks a distant object, such as another galaxy, the light can detour around the blockage. But instead of taking a single path, light can bend around the object along two or even four different routes, thus doubling or quadrupling the amount of information scientists receive. As the brightness of the background galaxy nucleus fluctuates, physicists can measure the ebb and flow of light from the four distinct paths, such as in the B1608+656 system that was the subject of this study. Lead author Sherry Suyu, of the University of Bonn, said, "In our case, there were four copies of the source, which appear as a ring of light around the gravitational lens."

Though researchers do not know when light left its source, they can still compare arrival times. Marshall likens it to four cars taking four different routes between places on opposite sides of a large city, such as from Stanford University to Lick Observatory, through or around San Jose. And like automobiles facing traffic snarls, light can encounter delays, too.

"The traffic density in a big city is like the mass density in a lens galaxy," Marshall said. "If you take a longer route, it need not lead to a longer delay time. Sometimes the shorter distance is actually slower."

The gravitational lens equations account for all the variables such as distance and density, and provide a better idea of when light left the background galaxy and how far it traveled.
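Schematically, the method boils down to one proportionality: the measured delay between images fixes the "time-delay distance", which scales as 1/H0. The sketch below uses made-up numbers of a plausible size, not the B1608+656 model values, just to show the arithmetic:

```python
# Toy version of the time-delay method. For two images of the source,
#   dt = (D_dt / c) * dphi,
# where dphi is the difference in Fermat potential between the two light
# paths (from a lens model) and D_dt = (1 + z_l) * D_l * D_s / D_ls is the
# time-delay distance, which scales as 1/H0. Measuring dt and modeling
# dphi therefore yields the Hubble constant.

c_Mpc_per_day = 8.394e-10   # speed of light in Mpc per day

dt_days = 31.5              # an image-to-image delay (toy value)
dphi = 5.3e-12              # Fermat potential difference (toy value)

D_dt = c_Mpc_per_day * dt_days / dphi   # time-delay distance in Mpc
print(D_dt)                             # ~5000 Mpc for these toy numbers

# Since D_dt ~ 1/H0, a precise delay measurement plus a good lens model
# translates directly into a constraint on the Hubble constant.
```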

In the past, this method of distance estimation was plagued by errors, but physicists now believe it is comparable with other measurement methods. With this technique, the researchers have come up with a more accurate lensing-based value for Hubble's constant, and a better estimate of the uncertainty in that constant. By both reducing and understanding the size of the error in their calculations, they can achieve better estimates of the structure of the lens and the size of the universe.

There are several factors scientists still need to account for in determining distances with lenses. For example, dust in the lens can skew the results. The Hubble Space Telescope has infrared filters useful for eliminating dust effects. The images also contain information about the number of galaxies lying along the line of sight; these contribute to the lensing effect at a level that needs to be taken into account.

Marshall says several groups are working on extending this research, both by finding new systems and further examining known lenses. Researchers are already aware of more than twenty other astronomical systems suitable for analysis with gravitational lensing.

Reference
[1] S. H. Suyu, P. J. Marshall, M. W. Auger, S. Hilbert, R. D. Blandford, L. V. E. Koopmans, C. D. Fassnacht and T. Treu, "Dissecting the Gravitational Lens B1608+656. II. Precision Measurements of the Hubble Constant, Spatial Curvature, and the Dark Energy Equation of State", The Astrophysical Journal, 711, 201 (2010). Abstract.


[This report is written by Julie Karceski, SLAC National Accelerator Laboratory]
