
Saturday, June 20, 2009

Beam Pulses Perforate Black Hole Horizon

Alexander Burinskii


[Every year since 1949, the Gravity Research Foundation has honored the best submitted essays in the field of gravity. This year's prize goes to Alexander Burinskii for his essay "Instability of Black Hole Horizons with respect to Electromagnetic Excitations". The five award-winning essays will be published in the Journal of General Relativity and Gravitation (GRG) and subsequently in a special issue of the International Journal of Modern Physics D (IJMPD). Today we present an invited article from Prof. Burinskii on his current work.
-- 2Physics.com]



Author: Alexander Burinskii
Nuclear Safety Institute, Russian Academy of Sciences, Moscow, Russia

The various models of the black-hole (BH) evaporation process that have appeared in recent years differ essentially from each other, as well as from Hawking's original idea. However, they share a common main point: the mechanism of evaporation is connected with a complex analyticity and conformal structure [1], which unifies BH physics with (super)string theory and the physics of elementary particles [2, 3].

It was observed long ago that many exact solutions in gravity contain singular wires and beams. Looking for exact wave solutions for the electromagnetic (EM) field on the Kerr-Schild background, we obtained results [4] showing that such solutions do not contain the usual smooth harmonic functions, but generically acquire singular beam pulses which have a very strong back reaction on the metric. Analysis showed [5] that the EM beams break the BH horizon, forming holes that connect the internal and external regions. As a result, the horizon of a BH interacting with nearby EM fields turns out to be covered by a set of holes [6, 7] and becomes transparent for outgoing radiation. Therefore, the problem of BH evaporation acquires an explanation at the classical level.

2Physics articles by past winners of the Gravity Research Foundation award:
T. Padmanabhan (2008): "Gravity: An Emergent Perspective"
Steve Carlip (2007): "Symmetries, Horizons, and Black Hole Entropy"


We consider the BH metric in the Kerr-Schild (KS) form [8]: gμν = ημν + 2H kμ kν, which has many advantages. In particular, the KS coordinate system and solutions have no singularities at the horizon, are disconnected from the positions of the horizons, and are rigidly related to an auxiliary Minkowski space-time with metric ημν. The Kerr-Schild form is extremely simple, and all the intricate details are encoded in the vortex vector field kμ(x), which is tangent to the light-like rays of the Kerr congruence (in fact, these rays are twistors of the Penrose twistor theory).
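
To illustrate how easy the KS form is to work with, here is a minimal numerical sketch (an illustration of the formula above, not code from the article; the function name is my own). It assembles gμν = ημν + 2H kμ kν at a point and checks a well-known algebraic property of the ansatz: for a null kμ, det(g) = det(η) for any H.

```python
import numpy as np

def kerr_schild_metric(H, k):
    """Assemble g_mn = eta_mn + 2 H k_m k_n from a scalar H and a
    covector k, both evaluated at a single spacetime point."""
    eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # auxiliary Minkowski metric
    return eta + 2.0 * H * np.outer(k, k)

# A light-like covector: k_m k^m = eta^{mn} k_m k_n = -1 + 1 = 0.
k = np.array([1.0, 0.0, 0.0, 1.0])
g = kerr_schild_metric(H=0.3, k=k)

# Because k is null, det(g) = det(eta) = -1 for any value of H,
# one of the simplifying properties of the Kerr-Schild ansatz.
print(np.linalg.det(g))   # -> -1.0 (up to rounding)
```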

The vector field kμ determines the symmetry of the space, its polarization and, in particular, the direction of gravitational 'dragging'. The structure of the Kerr congruence is shown in Fig. 1.

FIG. 1: The Kerr singular ring and Kerr congruence formed by the light-like twistor-beams.

Horizons are determined by the function

H = (mr − ψ²)/(r² + a² cos²θ),

where ψ ≡ ψ(Y) is related to the electromagnetic field and can be any analytic function of the complex angular coordinate Y = exp{iφ} tan(θ/2), which parametrizes the celestial sphere. Reference [8] showed that the Kerr-Newman solution is the simplest solution of the Kerr-Schild class, having ψ = q = const, the value of the charge. However, any holomorphic function ψ(Y) also leads to an exact solution of this class, and a non-constant holomorphic function on the sphere has to acquire at least one pole, which creates a beam. Thus, the electromagnetic field corresponding to ψ(Y) = q/Y forms a singular beam along the z-axis which pierces the horizons, producing a hole that allows matter to escape the interior of the black hole. The initially separated external and internal surfaces of the event horizon, r+ and r−, turn out to be joined by a tube, forming a single connected surface.
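
A toy evaluation of these formulas (my own illustration, with arbitrary parameter values) makes the beam explicit: for ψ = q/Y, the pole at Y = 0 sits at θ = 0, so H diverges along the positive z-axis, while the Kerr-Newman choice ψ = q stays smooth there.

```python
import numpy as np

def H(r, theta, m, a, psi):
    """Kerr-Schild function H = (m r - |psi|^2) / (r^2 + a^2 cos^2 theta)."""
    return (m * r - abs(psi)**2) / (r**2 + a**2 * np.cos(theta)**2)

def Y(theta, phi):
    """Complex coordinate Y = exp(i phi) tan(theta/2) on the celestial sphere."""
    return np.exp(1j * phi) * np.tan(theta / 2)

m, a, q, r = 1.0, 0.5, 0.2, 1.5          # arbitrary illustrative values
for theta in (1e-3, 0.1, np.pi / 2):
    psi_kn   = q                          # Kerr-Newman: constant psi
    psi_beam = q / Y(theta, 0.0)          # beam solution: pole of psi at Y = 0
    print(theta, H(r, theta, m, a, psi_kn), H(r, theta, m, a, psi_beam))
# As theta -> 0 (Y -> 0), psi_beam diverges and H becomes large and
# negative: the singular beam along the z-axis that pierces the horizon.
```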

This solution may easily be extended to the case of an arbitrary number of beams propagating in different angular directions Yi = exp{iφi} tan(θi/2), which corresponds to a set of light-like beams destroying the horizon in different angular directions via the action of the function ψ(Y) in H. The solutions for wave beams have to depend on a retarded time τ. Their back reaction on the metric is especially interesting. Long-term efforts [4, 6, 7] led us to obtain such solutions of the Debney-Kerr-Schild equations [8] in the low-frequency limit and, finally, to obtain the exact solutions consistent with a time-averaged stress-energy tensor [9]. These time-dependent solutions revealed a remarkable structure which sheds light on a possible classical explanation of BH evaporation, namely, a classical analog of quantum tunneling. In the exact time-dependent solutions, a new field of radiation was obtained which is determined by a regular function γ(reg)(Y,τ). This radiation is akin to the well-known radiation of the Vaidya 'shining star' and may be responsible for the loss of mass by evaporation. At the same time, the necessary condition for evaporation -- the transparency of the horizon -- is provided by the singular field ψ(Y,τ) forming the fluctuating beam pulses. As a result, the roles of ψ(Y,τ) and γ(reg)(Y,τ) are separated! The horizon turns out to be fluctuating and pierced by a multitude of migrating holes, see Fig. 2.

The obtained solutions show that the horizon is not an insurmountable obstacle, and there should not be any information loss inside the black hole. Due to the topological instability of the horizon, black holes lose their demonic image, and they can hardly be created in a collider. However, the usual scenarios of collapse apparently remain valid, since macroscopic processes should not be destroyed by the fine-grained fluctuations of the horizon. The well-known two-sheetedness of the Kerr metric, long considered a mystery of the Kerr solution, turns out to match perfectly the holographic structure of space-time [9, 10]. The resulting classical geometry produced by fluctuating twistor-beams may be considered as a fine-grained structure which takes an intermediate position between classical and quantum gravity [9].

References:
[1] S. Carlip, "Black Hole Entropy and the Problem of Universality", J. Phys. Conf. Ser. 67, 012022 (2007), gr-qc/0702094.
[2] G. 't Hooft, "The black hole interpretation of string theory", Nucl. Phys. B 335, 138 (1990). Abstract.
[3] A. Burinskii, "Complex Kerr geometry, twistors and the Dirac electron", J. Phys. A: Math. Theor. 41, 164069 (2008). Abstract. arXiv:0710.4249 [hep-th].
[4] A. Burinskii, "Axial Stringy System of the Kerr Spinning Particle", Grav. Cosmol. 10, 50 (2004), hep-th/0403212.
[5] A. Burinskii, E. Elizalde, S.R. Hildebrandt and G. Magli, "Rotating 'black holes' with holes in the horizon", Phys. Rev. D 74, 021502(R) (2006). Abstract; A. Burinskii, "The Kerr theorem, Kerr-Schild formalism and multiparticle Kerr-Schild solutions", Grav. Cosmol. 12, 119 (2006), gr-qc/0610007.
[6] A. Burinskii, "Aligned electromagnetic excitations of the Kerr-Schild solutions", Proc. of MG12 (2007), arXiv:gr-qc/0612186.
[7] A. Burinskii, E. Elizalde, S.R. Hildebrandt and G. Magli, "Aligned electromagnetic excitations of a black hole and their impact on its quantum horizon", Phys. Lett. B 671, 486 (2009). Abstract.
[8] G.C. Debney, R.P. Kerr and A. Schild, "Solutions of the Einstein and Einstein-Maxwell Equations", J. Math. Phys. 10, 1842 (1969). Abstract.
[9] A. Burinskii, "Beam Pulse Excitations of Kerr-Schild Geometry and Semiclassical Mechanism of Black-Hole Evaporation", arXiv:0903.2365 [hep-th].
[10] C.R. Stephens, G. 't Hooft and B.F. Whiting, "Black hole evaporation without information loss", Class. Quant. Grav. 11, 621 (1994). Abstract.



Saturday, May 23, 2009

The Shadows of Gravity

Jose A. R. Cembranos

[This is an invited article based on the author's recently published work -- 2Physics.com]

Author: Jose A. R. Cembranos
Affiliation: William I. Fine Theoretical Physics Institute, University of Minnesota, Minneapolis, USA

Many authors have tried to explain the dark sectors of the cosmological model as modifications of Einstein's gravity (EG). Dark Matter (DM) and Dark Energy (DE) are the main sources of the cosmological evolution at late times. They dominate the dynamics of the Universe at low densities or low curvatures. It is therefore reasonable to expect that an infrared (IR) modification of EG could lead to a possible solution of these puzzles. However, it is in the opposite limit, at high energies (HE), where EG needs corrections from a quantum approach. These natural ultraviolet (UV) modifications of gravity are usually thought to be related to inflation or to the Big Bang singularity. In a recent work, I have shown that DM can be explained by HE modifications of EG. I have used an explicit model, R² gravity, and studied its possible experimental signatures [1].

Einstein's General Relativity describes the classical gravitational interaction very successfully through the metric tensor of space-time and the Einstein-Hilbert action (EHA). This theory is particularly beautiful and the action particularly simple, since it contains only one term, proportional to the scalar curvature. The proportionality parameter that multiplies this term defines Newton's constant of gravitation and the typical scale of gravity. This scale is known as the Planck scale; its approximate energy value is 10^19 giga-electronvolts, equivalent to a distance of 10^-35 meters.
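
For readers who want to reproduce these numbers, the following short sketch (standard definitions only, my own illustration) recovers the Planck energy and Planck length from ħ, c and G.

```python
import math

hbar = 1.054571817e-34   # J s
c    = 2.99792458e8      # m / s
G    = 6.67430e-11       # m^3 / (kg s^2)

m_planck = math.sqrt(hbar * c / G)          # Planck mass, ~2.18e-8 kg
E_planck = m_planck * c**2                  # Planck energy in joules
E_planck_GeV = E_planck / 1.602176634e-10   # 1 GeV = 1.602...e-10 J
l_planck = math.sqrt(hbar * G / c**3)       # Planck length in metres

print(f"{E_planck_GeV:.2e} GeV, {l_planck:.2e} m")
# -> about 1.22e+19 GeV and 1.62e-35 m, the scales quoted above
```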

However, the inconsistency of quantum computations within the gravitational theory described by the EHA demands its modification at HE. Quantum radiative corrections produced by standard matter provide divergent terms that are constant, linear, and quadratic in the Riemann curvature tensor of space-time. The constant divergence can be regularized by the renormalization of the cosmological constant, which may explain the Dark Energy. The linear term is absorbed in the renormalization of the Planck scale itself. By contrast, the quadratic terms are not included in the standard gravitational action. If these quantum corrections are not cancelled by invoking new symmetries, they need to be taken into account in the study of gravity at HE [2]. Indeed, these terms are also produced by radiative corrections coming from EG itself. Unfortunately, the gravitational corrections do not stop at this order, unlike those associated with the matter content: there are cubic terms, quartic terms, etc. All these local quantum corrections are divergent, and the fact that there is an infinite number of them implies that the theory is non-renormalizable. We know how to deal with gravity as an effective field theory, working order by order, but we cannot access energies higher than the Planck scale using this effective approach [2]. In any case, the Planck scale is very high, and so far unreachable experimentally.

Inspired by this effective-field-theory point of view, which identifies higher-energy corrections with higher-curvature terms, I have studied the viability of a solution to the missing matter problem coming from the UV completion of gravity. As explained above, the first HE modification to EG is provided by the inclusion of terms quadratic in the curvature of the space-time geometry. The most general quadratic action supports, in addition to the usual massless spin-two graviton, a massive spin-two and a massive scalar mode, with a total of eight degrees of freedom (in the physical gauge [3]). In fact, this gravitational theory is renormalizable [3]. However, the massive spin-two gravitons are ghost-like particles that generate new violations of unitarity, breaking of causality, and important instabilities.

In any case, there is a non-trivial quadratic extension of EG that is free of ghosts and phenomenologically viable. It is the so-called R² gravity, defined by the addition of a single term, proportional to the square of the scalar curvature, to the EHA. This term by itself does not improve the UV behaviour of EG, but it illustrates the idea in a minimal way. This particular HE modification of EG introduces a new scalar graviton that can provide the solution to the DM problem.

In this model, the new scalar graviton has a well-defined coupling to the standard matter content, and it is possible to study its phenomenology and experimental signatures [1][3][4]. Indeed, this DM candidate could be considered a superweakly interacting massive particle (superWIMP [5]), since its interactions are gravitational, i.e. it couples universally to the energy-momentum tensor with Planck-suppressed couplings. This means that the new scalar graviton mediates an attractive Yukawa force between two non-relativistic particles with a strength similar to Newton's gravity. Among other differences, this new component of the gravitational force has a finite range, shorter than 0.1 millimeters, since the new scalar graviton is massive.
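
The quoted range is just the Compton wavelength of the mediator, λ = ħc/(mc²). A minimal sketch (my own illustration, not code from the paper) inverts it to get the implied lower bound on the scalar graviton mass:

```python
hbar_eVs = 6.582119569e-16   # hbar in eV s
c        = 2.99792458e8      # m / s

def yukawa_range(mass_eV):
    """Range of a Yukawa force mediated by a particle of the given mass:
    the reduced Compton wavelength hbar c / (m c^2), in metres."""
    return hbar_eVs * c / mass_eV

# A range shorter than 0.1 mm translates into a lower bound on the mass:
m_min_eV = hbar_eVs * c / 1e-4        # invert lambda = hbar c / (m c^2)
print(f"m > {m_min_eV:.1e} eV")       # -> about 2e-3 eV
print(yukawa_range(m_min_eV))         # -> 1e-4 m, i.e. 0.1 mm back again
```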

This is the most constraining lower bound on the mass of the scalar mode, and it is independent of any assumption about its abundance. Conversely, depending on its contribution to the total amount of DM, its mass is constrained from above. I have shown that it cannot be much heavier than twice the mass of the electron; in that case, this graviton decays into an electron-positron pair. These positrons annihilate, producing a flux of gamma rays that we should have observed. In fact, the SPI spectrometer on the INTEGRAL (International Gamma-Ray Astrophysics Laboratory) satellite has observed a flux of gamma rays coming from the galactic centre (GC) whose characteristics are fully consistent with electron-positron annihilation [6].

If the mass of the new graviton is tuned close to the electron-positron production threshold, this line could be the first observation of R² gravity. The same gravitational DM can explain the observation with a less tuned mass and a lower abundance. For heavier masses, the gamma-ray spectrum produced by in-flight annihilation of the positrons with interstellar electrons is even more constraining than the 511 keV photons [7].

Conversely, for lighter masses, the only decay channel that may be observable is into two photons. It is difficult to detect these gravitational decays in the isotropic diffuse photon background (iDPB) [8]. A more promising analysis is associated with the search for gamma-ray lines from localized sources, such as the GC. The iDPB is a continuum, since it suffers cosmological redshift, but the mono-energetic photons originating from local sources may give a clear signal of R² gravity [1].

In conclusion, I have analyzed the possibility that the origin of DM resides in UV modifications of gravity [1]. Although, strictly speaking, my results are specific to R² gravity, I think they are qualitatively general, holding under a minimal set of assumptions about the gravitational sector. In any case, different approaches to linking our ignorance about gravitation with the dark sectors of standard cosmology can be taken [9], and it is a very interesting subject which surely deserves further investigation.

This work is supported in part by DOE Grant No. DOE/DE-FG02-94ER40823, FPA 2005-02327 project (DGICYT, Spain), and CAM/UCM 910309 project.

References

[1] J. A. R. Cembranos, 'Dark Matter from R² Gravity', Phys. Rev. Lett. 102, 141301 (2009). Abstract.

[2] N. D. Birrell and P. C. W. Davies, 'Quantum Fields in Curved Space' (Cambridge Univ. Press, 1982); J. F. Donoghue, 'General Relativity as an Effective Field Theory: The Leading Quantum Corrections', Phys. Rev. D 50, 3874 (1994). Abstract; A. Dobado et al., 'Effective Lagrangians for the Standard Model' (Springer-Verlag, 1997).

[3] K. S. Stelle, 'Renormalization of Higher Derivative Quantum Gravity', Phys. Rev. D 16, 953 (1977). Abstract; K. S. Stelle, 'Classical Gravity with Higher Derivatives', Gen. Rel. Grav. 9, 353 (1978). Abstract.

[4] A. A. Starobinsky, 'A New Type of Isotropic Cosmological Models Without Singularity', Phys. Lett. B 91, 99 (1980). Abstract; S. Kalara, N. Kaloper and K. A. Olive, 'Theories of Inflation and Conformal Transformations', Nucl. Phys. B 341, 252 (1990). Abstract; J. A. R. Cembranos, 'The Newtonian Limit at Intermediate Energies', Phys. Rev. D 73, 064029 (2006). Abstract.

[5] J. L. Feng, A. Rajaraman and F. Takayama, 'Superweakly-Interacting Massive Particles', Phys. Rev. Lett. 91, 011302 (2003). Abstract; J. A. R. Cembranos, J. L. Feng, A. Rajaraman and F. Takayama, 'SuperWIMP Solutions to Small Scale Structure Problems', Phys. Rev. Lett. 95, 181301 (2005). Abstract.

[6] B. J. Teegarden et al., 'INTEGRAL/SPI Limits on Electron-Positron Annihilation Radiation from the Galactic Plane', Astrophys. J. 621, 296 (2005). Article.

[7] J. F. Beacom and H. Yuksel, 'Stringent Constraint on Galactic Positron Production', Phys. Rev. Lett. 97, 071102 (2006). Abstract.

[8] J. A. R. Cembranos, J. L. Feng and L. E. Strigari, 'Resolving Cosmic Gamma Ray Anomalies with Dark Matter Decaying Now', Phys. Rev. Lett. 99, 191301 (2007). Abstract; J. A. R. Cembranos and L. E. Strigari, 'Diffuse MeV Gamma-rays and Galactic 511 keV Line from Decaying WIMP Dark Matter', Phys. Rev. D 77, 123519 (2008). Abstract.

[9] J. A. R. Cembranos, A. Dobado and A. L. Maroto, 'Brane-World Dark Matter', Phys. Rev. Lett. 90, 241301 (2003). Abstract; 'Dark Geometry', Int. J. Mod. Phys. D 13, 2275 (2004). arXiv:hep-ph/0405165.



Saturday, March 21, 2009

Multipolar Post Minkowskian and Post Newtonian Toolkits for Gravitational Radiation

Bala R. Iyer

[This is an invited article reviewing two decades of work by the author and his international collaborators. -- 2Physics.com ]

Author: Bala R. Iyer
Affiliation: Raman Research Institute, Bangalore, India

My work on gravitational waves (GW) began during a sabbatical I spent with Thibault Damour at DARC (CNRS-Observatoire de Paris) and the Institut des Hautes Etudes Scientifiques (IHES) in France during 1989-90. I was exposed to the powerful Multipolar Post Minkowskian (MPM) formalism that Luc Blanchet and Thibault Damour had set up. Though the MPM formalism seemed more elaborate than necessary at the time, it is a good example of how a complete and mathematically rigorous treatment of a problem can eventually pay off in more demanding applications.

The wave generation formalism relates the gravitational waves observed by a detector in the far zone of the source to the stress-energy tensor describing the source. Successful wave-generation formalisms [1] combine post-Minkowskian (PM) methods [expansions in G], post-Newtonian (PN) methods [expansions in 1/c], multipole (M) expansions [expansions into irreducible representations of the rotation group], and perturbations around curved backgrounds. There are two independent aspects, addressing two different problems: the general method (MPM expansion), applicable to extended fluid sources with compact support, based on the mixed PM and multipole expansion matched to some PN (slowly moving, weakly gravitating, small-retardation) source; and the particular application to inspiralling compact binaries (ICB) by use of point-particle models.

Starting from the general solution to the linearized Einstein equations in the form of a multipolar expansion (valid in the external region), a PM iteration is performed, and each multipolar piece is treated independently at each PM order. For the external field, the general method is not a priori limited to PN sources. However, closed-form expressions for the multipole moments can only be obtained for PN sources, because the exterior field may be related to the inner field only if there exists an overlapping region where both the MPM and PN expansions are valid and can be matched together. After matching, the multipole moments have non-compact support, since they depend on the gravitational field stress-energy that is distributed everywhere, up to spatial infinity. To account for this correctly, the definition of the multipole moments involves a crucial finite-part operation based on analytic continuation (equivalent to a Hadamard partie finie of the integrals at infinity).
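
Schematically, the construction can be summarized as follows (a standard rendering of the double expansion in the Blanchet-Damour notation, not a formula quoted from this article):

```latex
% Post-Minkowskian (PM) expansion of the metric deviation, each order
% carrying its own multipole (M) expansion valid outside the source:
h^{\mu\nu} \equiv \sqrt{-g}\,g^{\mu\nu} - \eta^{\mu\nu}
           = \sum_{n \ge 1} G^{n}\, h^{\mu\nu}_{(n)}\!\left[I_L, J_L\right],
% where I_L and J_L denote the mass-type and current-type source multipole
% moments, fixed by matching to the PN expansion of the source in the
% overlap region where both expansions are valid.
```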

The physical post-Newtonian source, for any PN order, is characterized by six symmetric and trace free (STF) time-varying moments, functionals of a formal PN expansion of the stress-energy pseudo-tensor of the material and gravitational fields. Starting from the six STF source moments one can define a different set of two canonical source moments, such that the two sets of moments are physically equivalent (i.e. lead to the same metric modulo coordinate transformations). The use of the canonical source moments simplifies the calculation of the external non-linearities and their existence shows that any radiating isolated source is characterized by two and only two sets of time-varying multipole moments.

The MPM formalism is valid all over the weak field region outside the source including the wave zone (up to future null infinity). The far zone expansion at Minkowskian future null infinity contains logarithms in the distance which are artefacts of the harmonic coordinates. One can define, order by order in the PM expansion, some radiative coordinates such that the log-terms are eliminated. One recovers the standard (Bondi-type) radiative form of the metric from which the radiative moments seen by the detector, can be extracted in the usual way.

Nonlinearities in the external field are determined by a post-Minkowskian algorithm, and one obtains the radiative multipole moments as non-linear functionals of the canonical moments, and thus of the actual source moments. The source moments mix with each other as the waves propagate from the source to the detector, and thus the relation between radiative and source moments includes many non-linear multipole interactions, including hereditary (history-dependent) effects like tails, memory and tails-of-tails. The radiative moments are also very convenient for the computation of the spin-weighted spherical harmonic decomposition of the gravitational waveform employed to compare analytical PN results with numerical relativity simulations of binary black holes.

The application of the above results to compact binary systems involves a new input. For compact objects, the effects of finite size and of the quadrupole distortion induced by tidal interactions are of order 5PN. Hence, neutron stars and black holes can be modelled as point particles represented by Dirac δ-functions. The general formalism, however, applies only to smooth matter distributions and cannot be directly used for point particles, since they lead to divergent integrals at the location of the particles when the energy-momentum tensor of a point particle is substituted into the source moments. The calculation therefore needs to be supplemented by a method of self-field regularisation, i.e. a prescription for removing this infinite part of the integrals.

Hadamard regularisation, based on Hadamard's notion of the partie finie, was employed in all earlier works and led to consistent results in different approaches up to 2.5PN order. Thus, it was surprising to discover that Hadamard regularisation at 3PN order was incomplete, as signalled by the presence of four undetermined constants in the final 3PN generation results. The 3PN generation was technically more involved, and only after almost a decade of struggle, and by the use of gauge-invariant dimensional regularisation, was the problem finally resolved and completed [2]. For non-spinning ICB on quasi-circular orbits, the equation of motion and the gravitational wave polarisations [3] are now known to 3PN accuracy. The radiation field determining GW phasing is known to 3.5PN order beyond the leading Einstein quadrupole formula [2]. The 3PN results for non-spinning ICB on quasi-elliptical orbits [4] and 2.5PN results for spinning binaries [5] have recently been completed. In the test-particle limit, results are known to order 5.5PN [1].

The PN results for ICB are the basis for the construction of all templates employed by the LIGO and VIRGO detectors [6]. Resummation methods like Padé approximants and Effective One Body models [7], going beyond the adiabatic inspiral phase to include plunge, merger and quasi-normal-mode ringing, improve the convergence and extend the domain of validity of the PN approximants. In the context of the recent exciting numerical relativity simulations [8] of GW from the plunge, merger and ringdown of binary black holes, the best analytical PN results for the inspiral (3.5PN in phase, 3PN in amplitude) are crucial for calibration and interpretation.
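
To give a feel for what the templates encode, here is the leading-order (quadrupole) piece of the frequency evolution, the chirp rate df/dt, to which the 3.5PN phasing adds relative corrections. This is an illustrative sketch of the textbook formula, not detector code.

```python
import math

G, c = 6.67430e-11, 2.99792458e8
M_SUN = 1.989e30   # kg

def fdot_leading(f, m1, m2):
    """Leading-order (Newtonian / quadrupole) chirp rate df/dt, in Hz/s,
    for a circular binary emitting at GW frequency f."""
    M_chirp = (m1 * m2)**0.6 / (m1 + m2)**0.2     # chirp mass
    x = G * M_chirp / c**3                        # chirp mass in seconds
    return (96.0 / 5.0) * math.pi**(8.0 / 3.0) * x**(5.0 / 3.0) * f**(11.0 / 3.0)

# Two 1.4 solar-mass neutron stars at a GW frequency of 100 Hz:
print(fdot_leading(100.0, 1.4 * M_SUN, 1.4 * M_SUN))   # -> a few tens of Hz/s
```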

References:
[1] L. Blanchet, Living Rev. Relativity 9, 4 (2006), http://www.livingreviews.org/lrr-2006-4; M. Sasaki and H. Tagoshi, Living Rev. Relativity 6, 6 (2003) [arXiv:gr-qc/0306120].

[2] L. Blanchet, Class. Quant. Grav. 15, 1971 (1998) [arXiv:gr-qc/9801101]; ibid. 15, 113 (1998) [arXiv:gr-qc/9710038]; ibid. 15, 89 (1998) [arXiv:gr-qc/9710037]; L. Blanchet, B.R. Iyer and B. Joguet, Phys. Rev. D 65, 064005 (2002) [gr-qc/0105098]; L. Blanchet, G. Faye, B.R. Iyer and B. Joguet, Phys. Rev. D 65, 061501(R) (2002) [gr-qc/0105099]; L. Blanchet and B.R. Iyer, Phys. Rev. D 71, 024004 (2005) [gr-qc/0409094]; T. Damour, P. Jaranowski and G. Schäfer, Phys. Lett. B 513, 147 (2001) [arXiv:gr-qc/0105038]; L. Blanchet, T. Damour and G. Esposito-Farese, Phys. Rev. D 69, 124007 (2004) [arXiv:gr-qc/0311052]; L. Blanchet, T. Damour, G. Esposito-Farese and B.R. Iyer, Phys. Rev. Lett. 93, 091101 (2004) [gr-qc/0406012]; L. Blanchet, T. Damour, G. Esposito-Farese and B.R. Iyer, Phys. Rev. D 71, 124004 (2005) [gr-qc/0503044]; T. Futamase and Y. Itoh, Living Rev. Relativity 10, 2 (2007).

[3] K.G. Arun, L. Blanchet, B.R. Iyer and M.S.S. Qusailah, Class. Quant. Grav. 21, 3771 (2004) [gr-qc/0404085]; L.E. Kidder, Phys. Rev. D 77, 044016 (2008) [arXiv:0710.0614]; L. Blanchet, G. Faye, B.R. Iyer and S. Sinha, Class. Quant. Grav. 25, 165003 (2008) [arXiv:0802.1249]; M. Favata (2009) [arXiv:0812.0069].

[4] K.G. Arun, L. Blanchet, B.R. Iyer and M.S.S. Qusailah, Phys. Rev. D 77, 064034 (2008) [arXiv:0711.0250]; ibid., Phys. Rev. D 77, 064035 (2008) [arXiv:0711.0302].

[5] G. Faye, L. Blanchet and A. Buonanno, Phys. Rev. D 74, 104033 (2006) [arXiv:gr-qc/0605139]; L. Blanchet, A. Buonanno and G. Faye, Phys. Rev. D 74, 104034 (2006) [arXiv:gr-qc/0605140]; K.G. Arun, A. Buonanno, G. Faye and E. Ochsner [arXiv:0810.5336]; T. Damour, P. Jaranowski and G. Schäfer, Phys. Rev. D 78, 024009 (2008) [arXiv:0803.0915].

[6] T. Damour, B.R. Iyer and B.S. Sathyaprakash, Phys. Rev. D 63, 044023 (2001) [gr-qc/0010009]; ibid. 66, 027502 (2002) [gr-qc/0207021].

[7] T. Damour [arXiv:0802.4047]; T. Damour, B.R. Iyer and B.S. Sathyaprakash, Phys. Rev. D 57, 885 (1998) [gr-qc/9708034]; A. Buonanno and T. Damour, Phys. Rev. D 59, 084006 (1999) [arXiv:gr-qc/9811091]; T. Damour, B.R. Iyer and A. Nagar, Phys. Rev. D 79, 064004 (2009) [arXiv:0811.2069]; T. Damour and A. Nagar [arXiv:0902.0136]; A. Buonanno et al. [arXiv:0902.0790].

[8] F. Pretorius [arXiv:0710.1338]; F. Pretorius, Phys. Rev. Lett. 95, 121101 (2005) [arXiv:gr-qc/0507014]; M. Campanelli, C.O. Lousto, P. Marronetti and Y. Zlochower, Phys. Rev. Lett. 96, 111101 (2006) [arXiv:gr-qc/0511048]; J.G. Baker et al., Phys. Rev. Lett. 96, 111102 (2006) [arXiv:gr-qc/0511103]; M. Boyle et al., Phys. Rev. D 76, 124038 (2007) [arXiv:0710.0158].



Thursday, September 04, 2008

Gravity: An Emergent Perspective

Author: T. Padmanabhan

Affiliation: Inter University Centre for Astronomy and Astrophysics (IUCAA), Pune, India

Historically, we thought of electrons as particles and photons as waves, time as absolute and gravity as a force. Padmanabhan, in his recent work "Gravity: The Inside Story", which won the First Award in the Gravity Research Foundation Essay Contest 2008, suggests that we have similarly misunderstood the true nature of gravity because of the way our ideas evolved historically. When seen 'right side up', the description of gravity becomes remarkably simple and beautiful, and explains features which we never thought needed explanation!

To understand what is involved in this approach, one could compare the standard, historical development of gravity with the approach developed by me over the last few years [1]. Historically, Einstein started with the Principle of Equivalence and, with a few thought experiments, motivated why gravity should be described by a metric of spacetime. This approach gives the correct backdrop for the equality of inertial and gravitational masses and describes the kinematics of gravity. Unfortunately, there is no equally good guiding principle which Einstein could use that leads in a natural fashion to the field equations G_ab = κ T_ab which govern the evolution of g_ab (or to the corresponding action principle). So the dynamics of gravity is not backed by a strong guiding principle.

Strange things happen as soon as: (a) we let the metric be dynamical and (b) allow for arbitrary coordinate transformations or, equivalently, observers on any timelike curve examining physics. Horizons are inevitable in such a theory, and they are always observer dependent. This conclusion arises very simply: (i) the principle of equivalence implies that trajectories of light will be acted on by gravity. So in any theory which links gravity to spacetime dynamics, we can have nontrivial null surfaces which block information from a certain class of observers. (ii) Similarly, one can construct timelike congruences (e.g., uniformly accelerated trajectories) such that all the curves in such a congruence have a horizon. What is more, the horizon is always an observer-dependent concept, even when it can be given a purely geometrical definition. For example, the r = 2M surface in the Schwarzschild geometry acts operationally as a horizon only for the class of observers who choose to stay at r > 2M, and not for the observers falling into the black hole.

Once we have horizons, which are inevitable, we get into more trouble. It is an accepted dictum that all observers have a right to describe physics using an effective theory based only on the variables they can access. (This was, of course, the lesson from renormalization group theory. To describe physics at 10 GeV one shouldn't need to know what happens at 10^14 GeV in "good" theories.) This raises the famous question first posed by Wheeler to Bekenstein [2]: what happens if you mix cold and hot tea and pour it down a horizon, erasing all traces of the "crime" of increasing the entropy of the world? The answer to such thought experiments demands that horizons should have an entropy which should increase when energy flows across them.

With hindsight, this is obvious. The Schwarzschild horizon -- or, for that matter, any metric which behaves locally like the Rindler metric -- has a temperature which can be identified by the Euclidean continuation. If energy dE flows across a hot horizon of temperature T, then the ratio dE/T = dS gives the entropy of the horizon. Again, historically, nobody, including Wheeler and Bekenstein, looked at the periodicity in Euclidean time (in the Rindler or Schwarzschild metrics) before Hawking's result came! And the idea of the Rindler temperature came after that of the black hole temperature! So, in summary, the history proceeded as follows (a numerical sketch of these horizon temperatures appears after the summary):
-----------------------------------------------------------------------------------------
Principle of equivalence ( ~ 1908)

=> Gravity is described by the metric g_ab ( ~ 1908)

? Postulate Einstein's equations without a real guiding principle! (1915)

=> Black hole solutions with horizons (1916) allowing the entropy of hot tea to be hidden ( ~1971)

=> Entropy of black hole horizon (1972)

=> Temperature of black hole horizon (1975)

=> Temperature of the Rindler horizon (1975 -- 1976)
-----------------------------------------------------------------------------------------
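
As a sense of scale for the temperatures appearing in this history, the standard Hawking and Unruh formulas give the following (a simple numerical sketch, my own illustration):

```python
import math

hbar, c, G, k_B = 1.054571817e-34, 2.99792458e8, 6.67430e-11, 1.380649e-23
M_SUN = 1.989e30

def hawking_T(M):
    """Hawking temperature of a Schwarzschild horizon of mass M (kelvin)."""
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

def unruh_T(a):
    """Unruh (Rindler) temperature for proper acceleration a (kelvin)."""
    return hbar * a / (2 * math.pi * c * k_B)

print(hawking_T(M_SUN))   # ~6e-8 K for a solar-mass black hole
print(unruh_T(9.81))      # ~4e-20 K for one Earth gravity of acceleration
```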

There are several peculiar features in the theory for which there is no satisfactory answer in the conventional approach described above, and they have to be thought of as algebraic accidents. But there is an alternative way of approaching the dynamics of gravity, in which these features emerge as naturally as the equality of inertial and gravitational masses emerges in the geometric description of the kinematics of gravity. These new results also show that the thermodynamic description is far more general than just Einstein's theory, and occurs in a wide class of theories in which the metric determines the structure of the light cones and null surfaces exist that block information. So instead of the historical path, we can proceed as follows, reversing most of the arrows:
-----------------------------------------------------------------------------------------
Principle of equivalence

=> Gravity is described by the metric g_ab


=> Existence of local Rindler frames (LRFs) with a horizon around any event

=> Temperature of the local Rindler horizon H from the Euclidean continuation

=> Virtual displacements of H allow for a flow of energy across a hot horizon, hiding an entropy dS = dE/T as perceived by a given observer

=> The local horizon must have an entropy, S_grav

=> The dynamics should arise from maximizing the total entropy of the horizon (S_grav) plus matter (S_m) for all LRFs, leading to the field equations!

-----------------------------------------------------------------------------------------

The procedure uses the local Rindler frame (LRF) around any event P with a local Rindler horizon H. When matter crosses a hot horizon in the LRF -- or, equivalently, when a virtual displacement of H normal to itself engulfs the matter -- some entropy will be lost to the outside observers, unless displacing a piece of the local Rindler horizon itself costs some entropy S_grav, say. Given the correct expression for S_grav, one can demand that (S_matter + S_grav) should be maximized with respect to all the null vectors which are normals to local patches of null surfaces that can act locally as horizons for a suitable class of observers in the spacetime. This puts a constraint on the background spacetime, leading to the field equations. To lowest order, this gives Einstein's equations with calculable corrections [3]. More generally, the resulting field equations are identical to those of Lanczos-Lovelock gravity, with a cosmological constant arising as an undetermined integration constant. One can also show, in the general case of Lanczos-Lovelock theory, that the on-shell value of S_tot gives the correct gravitational entropy, further justifying the original choice. Several peculiar features involving the connection between gravity and thermodynamics are embedded in this approach in a natural fashion. In particular:

♦ There are microscopic degrees of freedom ("atoms of spacetime") which we know nothing about. But just as thermodynamics worked even before we understood atomic structure, we can understand long wavelength gravity arising possibly from a corpuscular spacetime by a thermodynamic approach.

♦ Einstein's equations are essentially thermodynamic identities valid for each and every local Rindler observer [4]. In spacetimes with horizons and high level of symmetry, one can also consider virtual displacements of these horizons (like rH → rH + ε) and obviously we will again get TdS = dE + PdV .

♦ If the flow of matter across a horizon costs entropy, the resulting gravitational entropy has to be related to the microscopic degrees of freedom associated with the horizon surface. It follows that any dynamical description will require a holographic action, with both surface and bulk encoding the same information [5]. For the same reason, the surface term in the action will give the gravitational entropy. Both these features have been investigated in detail by me and my collaborators in previous years.
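
The thermodynamic identity in the second bullet can be checked directly in the Schwarzschild case, where T dS = c² dM holds exactly once the standard Hawking temperature and Bekenstein-Hawking entropy are used (a minimal sketch, my own illustration):

```python
import math

hbar, c, G, k_B = 1.054571817e-34, 2.99792458e8, 6.67430e-11, 1.380649e-23
M_SUN = 1.989e30

def T(M):
    """Hawking temperature of a Schwarzschild horizon of mass M."""
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

def dS_dM(M):
    """Derivative of the Bekenstein-Hawking entropy S = 4 pi k_B G M^2 / (hbar c),
    i.e. S = k_B c^3 A / (4 G hbar) with horizon area A = 4 pi (2 G M / c^2)^2."""
    return 8 * math.pi * k_B * G * M / (hbar * c)

# T dS = dE with dE = c^2 dM: the ratio below is exactly 1 for any M.
M = 10 * M_SUN
print(T(M) * dS_dM(M) / c**2)   # -> 1.0
```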

Most importantly, this is not just a reformulation of Einstein's theory. Shifting the emphasis from Einstein's field equations to the broader picture of the spacetime thermodynamics of horizons leads to a general class of field equations which includes Lanczos-Lovelock gravity. It is then no surprise that the Lanczos-Lovelock action is also holographic, is related to entropy, and has a thermodynamic interpretation.

References
[1] "Classical and Quantum Thermodynamics of horizons in spherically symmetric spacetimes", T. Padmanabhan, Class. Quant. Grav. 19, 5387 (2002). Abstract [gr-qc/0204019]; "Gravity and the Thermodynamics of Horizons", T. Padmanabhan, Phys. Rept. 406, 49 (2005) [gr-qc/0311036]; "Dark Energy and its Implications for Gravity", T. Padmanabhan (2008) [arXiv:0807.2356].
[2] This is based on what Wheeler told me in 1985, from his recollection of events; it is also mentioned in his book 'A Journey into Gravity and Space-time' (Scientific American Library, NY, 1990), page 221. I have heard somewhat different versions from other sources.
[3] "Dark Energy and Gravity", T. Padmanabhan, Gen. Rel. Grav. 40, 529-564 (2008). Full Text [arXiv:0705.2533]; "Entropy of Null Surfaces and Dynamics of Spacetime", T. Padmanabhan and Aseem Paranjape, Phys. Rev. D 75, 064004 (2007). Abstract [gr-qc/0701003].
[4] "Einstein's equations as a thermodynamic identity: The cases of stationary axisymmetric horizons and evolving spherically symmetric horizons", D. Kothawala, S. Sarkar and T. Padmanabhan, Phys. Lett. B 652, 338-342 (2007) [gr-qc/0701002]; "Thermodynamic route to field equations in Lanczos-Lovelock gravity", A. Paranjape, S. Sarkar and T. Padmanabhan, Phys. Rev. D 74, 104015 (2006). Abstract.
[5] "Holography of gravitational action functionals", A. Mukhopadhyay and T. Padmanabhan, Phys. Rev. D 74, 124023 (2006). Abstract [arXiv:hep-th/0608120].



Sunday, June 15, 2008

Non-commutative Gravity, a Quantum-Classical Duality, and the Cosmological Constant Puzzle

Tejinder Pal Singh

[Every year since 1949, the Gravity Research Foundation has honored the best submitted essays in the field of gravity. This year's list of awardees has something unique about it: while the first prize goes to T. Padmanabhan, the second prize goes to his former Ph.D. student, Tejinder Pal Singh. The five award-winning essays will be published in the Journal of General Relativity and Gravitation (GRG) and subsequently in a special issue of the International Journal of Modern Physics D (IJMPD). Today we present an invited article from Prof. Singh on his current work.
-- 2Physics.com]

Author: Tejinder Pal Singh
Affiliation: Tata Institute of Fundamental Research, India

The evolution of a system in quantum mechanics is described by the Schrodinger equation. What happens to this quantum system when a measurement is made on it by a classical measuring apparatus? What we have learnt from standard text-books in quantum mechanics is that the wave-function of the quantum system 'collapses' into one of the eigenstates of the observable being measured. For instance, if a double-slit interference experiment is performed on a beam of photons, one observes an interference pattern on the photographic screen. The interference pattern arises because the wave-function of a photon is a linear superposition of two wave-functions: one corresponding to its passing through the upper slit, and the other corresponding to its passing through the lower slit. It is as if the photon passes through both slits simultaneously [1]. However, if a detector is now placed behind one of the slits (this is a measurement), the interference pattern disappears, and the photon is interpreted as having passed through one or the other of the two slits, depending on whether the detector has clicked or not. The wave-function of the photon is said to have collapsed, from being originally in a linear superposition, to being a wave-function corresponding to the photon passing through only one of the two slits, not both.

What is often not emphasized in text-books is that this so-called collapse of the wave-function cannot be explained by the Schrodinger equation. This is because the Schrodinger equation is linear in the wave-function, and preserves superposition during evolution. The collapse process, on the other hand, breaks superposition, because the system goes from being in a superposition of many states (before measurement) to being in only one of those states (after measurement). What is the physical process which causes this collapse to take place? The honest answer is that as of today we do not know the correct answer, although an enormous effort has been invested, for nearly a century, in finding it. It is not some vague issue of 'interpreting' quantum mechanics; rather, we are looking for a physical answer, based on sound mathematics, to the question: if we treat the original quantum system, along with the classical measuring apparatus, as one larger quantum system, why does this larger (macroscopic) system not obey the linear superposition principle of quantum mechanics? It is a physical question in precisely the same sense in which understanding planetary motion was a physical question: long ago, people did not know what caused planets to wander in the sky, until through observation and theory it became established that planets revolve around the Sun, and their motion is explained by Newton's law of gravitation. Today we do not understand what causes the wave-function to collapse, but one day, through experiment and theory, we hope to have a clear understanding of the physics involved.

A remarkable aspect of the collapse process is the Born probability rule. During a measurement, when the wave-function collapses to one of the eigenstates, which eigenstate does it collapse to? This is where probabilities enter quantum mechanics, and this is the only place where they do (the Schrodinger evolution, prior to the measurement, is completely deterministic). The probability that the wave-function goes into one particular eigenstate is proportional to the square of the absolute magnitude of the wave-function for that eigenstate. Repeated experimental measurements on the same quantum system will produce different outcomes, always in accordance with this Born probability rule. There is no explanation in standard quantum mechanics for this rule, and the correct explanation of the collapse process must also include a derivation of this probability rule.
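
Operationally, the rule is easy to state. A minimal sketch (an illustration, not from the article): prepare many identical copies of a two-state system and draw outcomes with probabilities given by the squared magnitudes of the amplitudes.

```python
import numpy as np

rng = np.random.default_rng(0)

# A two-state system |psi> = a|0> + b|1>, normalised so |a|^2 + |b|^2 = 1.
a, b = 0.6, 0.8j
probs = [abs(a)**2, abs(b)**2]        # Born rule: P(i) = |amplitude_i|^2

# Repeated measurements on identically prepared systems:
outcomes = rng.choice([0, 1], size=100_000, p=probs)
print(np.bincount(outcomes) / len(outcomes))   # -> approximately [0.36, 0.64]
```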

The possible explanations of the collapse process broadly fall into two classes. The first is the Everett many-worlds interpretation [2] of quantum mechanics, according to which the collapse never really takes place, and is in essence an illusion. According to this explanation, at the time of a quantum measurement the Universe (this includes the measuring apparatus and the observer) splits into many branches, and one outcome is realized in one branch, and a different outcome in another branch. For our double-slit experiment, this means that when the detector is placed behind the slit (say the upper slit), then in one branch of the Universe (say ours) it will click, and the photon will have gone through the upper slit. In another branch of the Universe, a 'different copy' of the observer will find that the detector did not click, and the photon went through the lower slit. Linear superposition is preserved, and Schrodinger evolution continues to hold during and after the measurement. The different branches of the Universe do not interfere with each other because of the (experimentally observed) phenomenon of decoherence [3]. This is the process wherein, because of the interaction of a macroscopic system with its environment, interference between different outcomes is strongly suppressed, even though superposition among the outcomes continues to be preserved. This would explain why, in the double-slit experiment, the detector either clicks or it does not, but is never seen in a superposition of the two states 'detector clicks' and 'detector does not click', even though the superposition is in reality present.

The many-worlds interpretation is completely consistent with standard quantum mechanics, but it is not clear how it can be experimentally tested, because by construction one is not supposed to be able to observe the other branches of the Universe. Also, it is not yet clear how the Born probability rule will be arrived at within the framework of this explanation of a quantum measurement.

The second class of explanations of the collapse process assumes that there is only one branch of the Universe, not many branches, and that collapse is a real physical process, not an illusion. It is then immediately obvious that the Schrodinger equation, and hence quantum mechanics, must be modified [4] in order to explain the collapse process, because only then will it become possible to break linear superposition during the measurement process. For instance, it could be that the Schrodinger equation that we know of is only a linear approximation to a more general, non-linear, Schrodinger equation. The non-linearity might become significant only during a quantum measurement, and be responsible for breakdown of superposition, driving the quantum system to one of the eigenstates, in accordance with the Born rule.

As it turns out, as of today there is absolutely no experimental evidence that the Schrodinger equation needs to be modified. We thus find ourselves in this unpalatable position that if the Schrodinger equation is not modified, we must accept the many-worlds interpretation, but there seems to be no way to experimentally test this interpretation! So, does the collapse take place or not? Do we have to wait for more and more precise experimental tests of quantum mechanics to know the answer? Or is there some theoretical reason, over and above quantum mechanics as we know it, which favours collapse over no collapse, or vice versa? Fortunately, the answer to this question seems to be yes, and there is a theoretical argument suggesting that collapse does take place [5]. Furthermore, it may be possible to test this argument experimentally.

The theoretical argument is based on another incompleteness in quantum mechanics, more serious but much less appreciated than the quantum measurement problem. Quantum systems evolve with time; but this time is a classical concept. Time is a part of space-time, whose geometry is determined by classical bodies such as stars and galaxies, through the Einstein equations of the general theory of relativity. If there were no classical bodies in the Universe, there would be no classical time – this is a consequence of something known as the Einstein hole argument [5]. But even in such a situation, one should be able to describe quantum systems – there must exist a reformulation of quantum mechanics which does not refer to an external classical time. In looking for such a reformulation, one is led to the conclusion that standard linear quantum mechanics is a limiting case of a more general non-linear quantum theory. The non-linearity becomes significant when the mass-energy of the quantum system becomes comparable to or larger than the Planck mass, but is completely negligible for smaller systems such as atoms. The Planck mass is a fundamental unit of mass made out of Planck's constant, the speed of light, and Newton's gravitational constant, and its numerical value is about a hundred-thousandth of a gram. Since this non-linearity in the Schrodinger equation becomes significant in about the same mass range where quantum measurement takes place, it suggests the possibility that linear superposition might break down during a measurement. Hence the many-worlds interpretation is disfavoured, as a consequence of the theoretical arguments described in this paragraph.

A programme, still tentative, is being developed to arrive at such a reformulation of quantum mechanics, and at the consequent non-linear Schrodinger equation [5]. One starts by noting that in the absence of a classical space-time, the point structure of space-time is lost, and space-time points are themselves subject to quantum fluctuations. An inevitable mathematical way to express such fluctuations is to impose commutation relations amongst these coordinates, and also amongst the components of the momenta of a particle in the presence of such spacetime fluctuations. The branch of mathematics which can naturally accommodate these features is known as noncommutative geometry [6]. In such a geometry, which is a natural extension of the Riemannian geometry of general relativity, space-time coordinates do not commute with each other.

The aforesaid reformulation is motivated by the following new proposal: the basic laws of physics are invariant under general coordinate transformations of non-commuting coordinates. This seems like a natural step forward from the general theory of relativity, which is based on the principle of invariance under general coordinate transformations of (commuting) coordinates. Standard linear quantum mechanics is reformulated as a non-commutative special relativity. As and when an external classical time becomes available, the reformulation reduces to the standard linear quantum theory. The generalization from non-commutative special relativity to non-commutative general relativity leads to a non-linear quantum mechanics. The latter reduces to the former when the mass-energy of the quantum system is much less than the Planck mass. The relation between the non-linear quantum theory and its linear limit is the same as the relation between general relativity and special relativity: the second is recovered from the first in the limit in which Newton's gravitational constant goes to zero. When the mass-energy of the system is much larger than the Planck mass, the non-linear quantum theory goes over to standard classical mechanics.

The non-linear Schrodinger equation which arises here can in principle explain the collapse of the wave-function, under a further assumption whose validity remains to be established. The essential idea is that at the onset of quantum measurement the non-linearity drives the quantum system to one or the other outcomes, depending on certain initial conditions in the quantum system (for instance the phase of the wave-function) at the time when the measurement begins. Superposition is thus broken. One can also give a quantitative estimate of the life-time of a quantum superposition – predictably this life-time goes from astronomically large values to extremely small values as the number of degrees of freedom in the system is increased.

An interesting fall-out of this study is that one might obtain some understanding of the origin of the observed acceleration of the Universe and of dark energy, for which the most likely explanation is a non-zero value of the cosmological constant. Why is this constant non-zero, and yet so small when expressed in fundamental units? In the present study, it appears that the dynamics of a quantum particle whose mass m1 is much less than the Planck mass can be recovered from the knowledge of the dynamics of a classical particle whose mass m2 is much greater than the Planck mass. We call this a quantum-classical duality [7]. The product of the masses m1 and m2 is equal to the square of the Planck mass. If one assumes that the classical 'particle' is the whole observed Universe, then the cosmological constant can be shown to be equal to the (finite) zero-point energy of the dual quantum field, and this matches the value currently seen in cosmological observations.
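
To see the orders of magnitude in this duality (an illustrative sketch; the ~10^53 kg figure for the mass content of the observable Universe is my rough outside assumption, not a number taken from the essay):

```python
import math

hbar, c, G = 1.054571817e-34, 2.99792458e8, 6.67430e-11
m_planck = math.sqrt(hbar * c / G)   # ~2.18e-8 kg, about 1e-5 grams

# Quantum-classical duality: m1 * m2 = m_planck^2.
m2 = 1e53   # rough mass of the observable Universe, in kg (assumption)
m1 = m_planck**2 / m2
print(f"dual quantum mass m1 ~ {m1:.1e} kg")   # -> ~5e-69 kg
```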

The programme described here should strictly be described as 'work in progress', and there is still quite some way to go before these ideas can be put on a firm footing, and before one knows that this is the right track. Nonetheless, the ideas appear aesthetically appealing and natural, and a distinct advantage of the programme is that it is experimentally falsifiable. The non-linear theory agrees with standard quantum mechanics for small masses such as atomic masses, and it agrees with classical mechanics for large macroscopic masses. However, its predictions differ from those of linear quantum mechanics in the mesoscopic mass range, which very crudely could be taken to be the range from 10^-20 grams to 10^-8 grams. It is a significant fact that quantum mechanics has not been experimentally verified in this vast mass range, simply because such experiments are very difficult to perform with currently available technology. The non-linear Schrodinger equation that we have predicts that the lifetime of a quantum superposition will decrease with increasing mass of the system. If the disturbing effects of the environment could be shielded (avoidance of decoherence), such a dependence of the superposition life-time on mass could be experimentally tested. Avoiding decoherence is, however, a great experimental challenge. An easier class of experiments is one for which the predictions of the non-linear theory for some measurable constant differ from those of the linear theory. For instance, the non-linear theory predicts a different value of the ratio h/m in the mesoscopic range, as compared to the linear theory, and this should be testable. Another possible prediction of the non-linear theory is that the outcome of a quantum measurement is not probabilistic but deterministic, possibly depending on the phase of the wave-function at the onset of measurement. Suitable correlation experiments might be able to test this by making fast successive measurements on a quantum system.

References
[1] "The Feynman Lectures on Physics", Vol. III, Chapter 1, R. P. Feynman, R. B. Leighton and M. Sands (Addison-Wesley, Reading, 1965).
[2] "'Relative State' Formulation of Quantum Mechanics", Hugh Everett, III, Reviews of Modern Physics 29, 454 (1957). Abstract Link.
[3] "Decoherence and the appearance of a classical world in quantum theory", E. Joos, H. D. Zeh, C. Kiefer, D. Giulini, J. Kupsch and I.-O. Stamatescu (Springer, New York), 2nd Edn.
[4] "Collapse Models", P. Pearle, http://in.arxiv.org/abs/quant-ph/9901077.
[5] "Quantum measurement and quantum gravity: many-worlds or collapse of the wave-function?", T. P. Singh, http://arxiv.org/abs/0711.3773.
[6] "An introduction to non-commutative differential geometry and its physical applications", J. Madore (Cambridge University Press, 1999).
[7] "Noncommutative gravity, a 'no strings attached' quantum-classical duality, and the cosmological constant puzzle", T. P. Singh, http://arxiv.org/abs/0805.2124.



Saturday, June 07, 2008

Black Holes at the End of the World

Ulf Leonhardt [photo credit: Maud Lang]

[This is an invited article based on an ongoing work led by the author. -- 2Physics.com]

Author: Ulf Leonhardt
Affiliation: School of Physics and Astronomy, University of St Andrews, Scotland

Black holes are the remnants of massive stars that have collapsed under their own weight, but now scientists at the University of St Andrews are using lasers and fibre optics to simulate black holes in the laboratory. They want to test Professor Stephen Hawking's prediction that black holes are not black after all, but glow in the dark.

According to an ancient legend, the Scottish university and golfing town of St Andrews is "the end of the world". In the 6th century, Saint Regulus, a Greek monk, saw a vision: a dream commanded him to bury the bones of Saint Andrew at the end of the world. So he sailed up the coast of Britain in search of the right place and finally found the perfect spot: St Andrews, in fact. Now a small team at the University of St Andrews is using fibre optics and lasers to create artificial black holes at this end of the world. To be absolutely clear: the experiment is perfectly safe. No harm can come of it, because these artificial black holes exist only as tiny flashes of light that race through a few inches of optical fibre and are gone when they leave the fibre. The team wants to fulfil a modern type of prophecy, a vision of theoretical physics.

In 1974 Professor Stephen Hawking at Cambridge University published a famous prediction about black holes and quantum physics. Astrophysical black holes are the remnants of collapsed stars. They swallow everything that comes their way; their gravity is so strong that not even light can escape. And yet, as Professor Hawking's flash of insight showed, black holes are not perfectly black: they glow in the dark. This Hawking radiation is, however, so faint that there is probably no chance of ever observing it in space.
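Hawking's formula (a standard result, quoted here for scale rather than taken from the article) makes clear just how faint this glow is: a black hole of mass M has temperature

\[
T_{H} = \frac{\hbar c^{3}}{8\pi G M k_{B}} \approx 6\times10^{-8}\,\mathrm{K}\;\frac{M_{\odot}}{M},
\]

so a solar-mass black hole glows at roughly sixty billionths of a degree above absolute zero, vastly colder than the 2.7 K cosmic microwave background that drowns it out.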

Hawking’s theoretical vision has been the stuff of modern legend, because it reveals a mysterious connection between branches of physics: between the physics of the very large, astrophysics, and the physics of the very small, quantum mechanics. According to quantum mechanics, the world is teeming with virtual processes in which Nature tries out many things before some of them turn into reality. At the event horizon of a black hole, virtual light particles are turned into real ones: light is created from nothing and radiates into space as Hawking radiation.

The St Andrews team, led by Professor Ulf Leonhardt and Dr Friedrich König, is creating artificial black holes made of light. These creatures resemble real black holes: they are much smaller (and a lot safer) and have no gravity, but they affect light like their big astrophysical cousins. Professor Leonhardt has been working for a decade on developing and testing ideas for engineering optical devices that make Hawking radiation observable, and now believes he has found the perfect method. In performing and analysing this experiment the scientists hope to understand more about the way Nature creates light quanta at the horizon, something from nothing.

The figure illustrates the principal idea of the experiment. A light pulse in a fibre adds a small contribution to the refractive index, as if an additional piece of glass were inserted. This fictitious piece of glass moves with the pulse, and hence at the speed of light: pulses in fibres behave like materials moving at the speed of light. Now imagine that a continuous wave of light with a different wavelength follows the pulse. Due to optical dispersion, the velocity of light depends on the wavelength. Suppose the continuous probe wave is faster than the pulse but is slowed down by it. The place where the speed of the probe equals the speed of the pulse is the horizon.
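A minimal numerical sketch of this horizon condition is given below. The dispersion law, index contribution and pulse speed are invented illustrative numbers, not the parameters of the St Andrews fibre:

```python
import numpy as np

# Toy model of the fibre-optical horizon (all numbers are assumptions).
c = 3.0e8      # vacuum speed of light, m/s
n0 = 1.45      # assumed background refractive index of the fibre
dn = 1e-3      # assumed extra index contributed by the moving pulse

def n_background(wavelength_nm):
    """Toy dispersion law: index falls slightly with wavelength (assumed)."""
    return n0 + 0.01 * (800.0 / wavelength_nm - 1.0)

# Take the pulse to travel at roughly c/n0 (an assumption for this sketch).
v_pulse = c / n0

wavelengths = np.linspace(700.0, 900.0, 2001)
v_outside = c / n_background(wavelengths)        # probe speed far from the pulse
v_inside = c / (n_background(wavelengths) + dn)  # probe speed inside the pulse

# A horizon forms where the probe outruns the pulse outside it
# but is slowed below the pulse speed inside it.
mask = (v_outside > v_pulse) & (v_inside < v_pulse)
print("horizon-forming probe wavelengths: "
      f"{wavelengths[mask].min():.1f} nm to {wavelengths[mask].max():.1f} nm")
```

Probe light in the printed wavelength band catches up with the pulse but cannot overtake it, the optical analogue of a horizon; in the real experiment the detailed fibre dispersion replaces this toy law.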

Professor Leonhardt put all his eggs in one basket and convinced others to contribute as well. The "start-up capital" for the experiment came from a private donation by Leonhardt Group AG, the corporation of Ulf Leonhardt's cousins Uwe and Helge. Both are businessmen from the Ore Mountains (Erzgebirge) in former East Germany, close to the Bohemian border (at another end of the world); in the less than twenty years since the fall of the Wall they have created quite something from nothing, a multinational corporation. The theory was financed by the Leverhulme Trust, a charity connected with Unilever that supports innovative research in the sciences and arts, and that also supported Leonhardt's work on invisibility devices. After the foundations had been laid, the Engineering and Physical Sciences Research Council UK took over. The first results of the team have recently appeared [1], but it will still take time, hard work and further financial support before a legend may become reality at the “end of the world”.

Further information: http://www.st-andrews.ac.uk/~ulf/fibre.html

Reference
[1] "Fiber-Optical Analog of the Event Horizon"
Thomas G. Philbin, Chris Kuklewicz, Scott Robertson, Stephen Hill, Friedrich König, Ulf Leonhardt,
Science 319, 1367 (2008). Abstract Link.



Friday, January 04, 2008

High Energy Physics : 5 Needed Breakthroughs
-- Mark Wise

[In the ongoing feature '5 Breakthroughs', our guest today is Mark Wise, the John A. McCone Professor of High Energy Physics at California Institute of Technology.

Prof. Wise is a fellow of the American Physical Society, and member of the American Academy of Arts and Sciences and the National Academy of Sciences. He was a fellow of the Alfred P. Sloan Foundation from 1984 to 1987.

Although Prof. Wise has done some research in cosmology and nuclear physics, his interests are primarily in theoretical elementary particle physics. Much of his research has focused on the nature and implications of the symmetries of the strong and weak interactions. He is best known for his role in the development of heavy quark effective theory (HQET), a mathematical formalism that has allowed physicists to make predictions about otherwise intractable problems in the theory of the strong nuclear interactions.

To provide a background of his current research activities, Prof. Wise said, "Currently we have a theory for the strong, weak and electromagnetic interactions of elementary particles that has been extensively tested in experiments. It is usually called the standard model. Even with this theory, many features of the data are not explained. For example, the quark and lepton masses are free parameters in the standard model and are not predicted. Furthermore, the theory has some unattractive aspects -- the most noteworthy of them being the extreme fine tuning needed to keep the Higgs mass small compared to the ultraviolet cutoff for the theory. This is sometimes called the hierarchy problem."
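The fine tuning he mentions can be made concrete with a standard back-of-the-envelope estimate (a textbook illustration, not part of Prof. Wise's remarks). Quantum corrections drag the Higgs mass squared up towards the cutoff,

\[
m_{H,\mathrm{phys}}^{2} = m_{H,\mathrm{bare}}^{2} + \frac{c\,\Lambda^{2}}{16\pi^{2}},
\]

where c is a coupling-dependent number of order one. If Λ is near the Planck scale, around 10¹⁹ GeV, while the physical Higgs mass must sit near the weak scale, around 10² GeV, the bare term has to cancel the correction to roughly one part in 10³².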

He explained, "My own research breaks into two parts. One part is using the standard model to predict experimental observables. Just because you have a theory doesn’t mean it’s straightforward to use it to compare with experiment. Usually such comparisons involve expansions in some small quantity. One area in which I have done considerable research is the development of methods to make predictions for the properties of hadrons that contain a single heavy quark."

He elaborated, "The other part is research on physics that is beyond what is in the standard model. In particular I have worked on the development of several extensions of the standard model that solve the hierarchy problem: low energy supersymmetry, the Randall-Sundrum model and, most recently, the Lee-Wick standard model. This work is very speculative. It is possible that none of the extensions of the standard model discussed in the scientific literature are realized in nature."

Prof. Wise shared the 2001 Sakurai Prize for Theoretical Particle Physics with Nathan Isgur and Mikhail Voloshin. The citation mentioned his work on "the construction of the heavy quark mass expansion and the discovery of the heavy quark symmetry in quantum chromodynamics, which led to a quantitative theory of the decays of c and b flavored hadrons."

He obtained his PhD from Stanford University in 1980. While doing his thesis work, he also co-authored the book 'From Physical Concept to Mathematical Structure: an Introduction to Theoretical Physics' (U. Toronto Press, 1980) with Prof Lynn Trainor of the University of Toronto (where he did his B.S. in 1976 and M.S. in 1977). He also coauthored, with Aneesh Manohar, a monograph on 'Heavy Quark Physics' (Cambridge Univ Press, 2000).

We are pleased to present the list of 5 needed breakthroughs that Prof. Mark Wise would be happy to see in the field of high energy physics.
-- 2Physics.com]

"Here go five breakthroughs that would be great to see:

1) An understanding of the mechanism that breaks the weak interaction symmetry giving the W's and Z's mass. This we should know the answer to in my lifetime since it will be studied at the LHC (Large Hadron Collider) and I am trying to stay healthy.

2) Reconciling gravity with quantum mechanics. Currently the favored candidate for a quantum theory of gravity is String Theory. However, there is no evidence from experiment that this is the correct theory. Perhaps quantum mechanics itself gives way to a more fundamental theory at extremely short distances.

3) An answer to the question, why is the value of the cosmological constant so small? I am assuming here that dark energy is a cosmological constant. (Hey if it looks like a duck and quacks like a duck it's probably a duck.) A cosmological constant is a very simple term in the effective low energy Lagrangian for General Relativity. The weird thing about dark energy is not what it is but rather why it's so small.

4) An understanding of why the scale at which the weak symmetry is broken is so small compared to the scale at which quantum effects in gravity become strong. This is usually called the hierarchy problem. Breakthrough (1) might provide the solution to the hierarchy problem or it might not.

5) Discovery of the particle that makes up the dark matter of the universe and the measurement of its properties (e.g., spin, mass, ...).

There are other things I would love to know. For example, is there a way to explain the values of the quark and lepton masses? But you asked for five."



Monday, December 17, 2007

Particle Astrophysics: 5 Needed Breakthroughs
-- James Hough

James Hough [Photo Courtesy: Institute for Gravitational Research, University of Glasgow]

[Today's guest in our ongoing feature '5-Breakthroughs' is James Hough, Director of the Institute for Gravitational Research, and Professor of Experimental Physics in the Department of Physics and Astronomy, University of Glasgow.

Prof. Hough is also the Chairperson of the Gravitational Wave International Committee (GWIC), which was formed in 1997 by the directors and representatives of projects and research groups around the world whose research is aimed at the detection of gravitational radiation. The purpose of GWIC is to encourage coordination of research and development across the groups, and collaboration in the scheduling of detector operation and data analysis. GWIC also advises on the location, timing and programme of the Edoardo Amaldi Conferences on Gravitational Waves, which are held every two years, and presents a prize for the best Ph.D. thesis submitted each year (for details, visit 'GWIC Thesis Prize').

His current research interests are in the investigation of materials for test masses and mirror coatings, and in the development of suspension systems of ultra-low mechanical loss, towards:
a) second-generation gravitational wave detectors, in particular Advanced LIGO, the upgrade to the US LIGO gravitational wave detector systems (Advanced LIGO is now approved by the National Science Board in the USA and supported by a significant capital contribution from PPARC in the UK and MPG in Germany);
b) third-generation long-baseline gravitational wave detectors, in particular the proposed Einstein Telescope in Europe; and
c) LISA, the ESA/NASA space-borne gravitational wave detector.

Prof. Hough is a Fellow of the Royal Society of London (2003), the American Physical Society (2001), the Institute of Physics (1993) and the Royal Society of Edinburgh (1991). He received the Duddell Prize and Medal of the Institute of Physics in 2004 and the Max Planck Research Prize in 2001.

It's our pleasure to present the 5 most important breakthroughs that Prof. Hough would like to see in the field of Particle Astrophysics.
-- 2Physics.com Team]

1) The direct detection of gravitational radiation
A direct detection is very important, both to verify one of the few unproven predictions of Einstein's General Relativity and, even more importantly, to mark the birth of a new astronomy. Gravitational wave astronomy will let us look into the hearts of some of the most violent events in the Universe.

2) The quantisation of Gravity
The challenge of developing a quantum theory of gravity and unifying gravity with the other fundamental forces in nature will undoubtedly lead to new discoveries about our Universe.

3) The understanding of Dark Energy
Dark Energy - the mysterious cause of the anomalous, accelerating expansion of our Universe - is not understood. Solving this enigma may help with understanding quantum gravity, and will certainly give us a new perspective on fundamental interactions.

4) The successful launching of LISA, the space-borne gravitational wave detector
LISA will allow the study of the birth and interaction of massive black holes in the Universe in a way that cannot be achieved by any other mission.

5) The identification of dark matter
Observations suggest that there is much more matter in the Universe than we observe by standard means. Finding out the nature of the unseen 'dark' matter is a challenging problem for experimental physicists.



Wednesday, July 11, 2007

Gravitational Lenses Helped to Find Most Distant Galaxies

Caltech astronomers have pioneered the use of foreground clusters of galaxies as 'natural telescopes' to boost faint signals from the most distant sources, seen as they were when the Universe was only a few percent of its current age [Image Courtesy: Caltech Media Center. For more images and description of techniques, visit Johan Richard's page]

Today, at the "From IRAS to Herschel and Planck" conference at the Geological Society in London, Richard Ellis, Professor of Astronomy at the California Institute of Technology, presented images of some of the faintest and most distant objects ever observed. These are the first traces of a population of the most distant galaxies yet seen: the light we receive from them today left more than 13 billion years ago, when the universe was just 500 million years old, less than 4% of its present age. Using natural "gravitational lenses", the international team of astronomers that Prof. Ellis led found images of these galaxies with the 10-meter Keck II telescope, sited atop Mauna Kea on the Big Island of Hawaii. This new survey is the culmination of three years' painstaking observations.
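The quoted ages are easy to check against a standard cosmological model. The sketch below uses astropy's built-in Planck18 cosmology purely as a convenient stand-in; the original analysis used earlier, WMAP-era parameters, so the numbers differ slightly:

```python
# Check the quoted ages over the survey's redshift range 8.5 < z < 10.4.
from astropy.cosmology import Planck18  # convenience model; an assumption here

for z in (8.5, 10.4):
    age = Planck18.age(z).to("Myr")                 # age of the universe at z
    lookback = Planck18.lookback_time(z).to("Gyr")  # light travel time to us
    frac = (Planck18.age(z) / Planck18.age(0)).value
    print(f"z = {z}: age ≈ {age:.0f}, light left ≈ {lookback:.1f} ago, "
          f"{frac:.1%} of present age")
```

For z around 10 this gives an age near 450-600 million years and a light travel time of over 13 billion years, consistent with the figures in the article.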

When light from very distant bodies passes through the gravitational field of much nearer massive objects, it bends, in an effect known as "gravitational lensing". This is one of the predictions of Albert Einstein's General Theory of Relativity, and massive clusters of galaxies are the best examples of natural gravitational lenses. In a series of campaigns employing this pioneering technique, the group used such clusters to locate progressively more distant systems that would escape detection in normal surveys.
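The strength of the effect follows from a standard result of General Relativity (quoted here for context, not from the paper itself): a light ray passing a mass M at impact parameter b is deflected by an angle

\[
\alpha = \frac{4GM}{c^{2}b},
\]

so a massive foreground cluster both bends and magnifies the light of galaxies far behind it; near the critical (Einstein) radius the magnification can reach factors of ten or more, which is what makes such clusters useful as 'natural telescopes'.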

It is thought that when the universe was roughly 300,000 years old, it entered a period in which no stars were shining. Cosmologists refer to this phase of cosmic history as the "Dark Ages". Pinpointing the moment of "cosmic dawn", when the first stars and galaxies began to shine and the Dark Ages ended, is a major observational quest, and it provides the motivation for building future powerful telescopes such as the Thirty Meter Telescope and the James Webb Space Telescope.

Reference:
"A Keck Survey for Gravitationally Lensed Lyman-alpha Emitters in the Redshift Range 8.5<z<10.4: New Constraints on the Contribution of Low Luminosity Sources to Cosmic Reionization"
Daniel P. Stark, Richard S. Ellis, Johan Richard, Jean-Paul Kneib, Graham P. Smith, Michael R. Santos
The Astrophysical Journal 663, 10 (2007). Abstract Link.
