

Sunday, February 03, 2013

Origin of Cosmic Magnetic Fields: From Small Random Aperiodic to Ordered Large-Scale Structures

Author: Reinhard Schlickeiser

Affiliation:
Institut für Theoretische Physik, Lehrstuhl IV: Weltraum- und Astrophysik, Ruhr-Universität Bochum, Germany
and
Research Department: Plasmas with Complex Interactions, Ruhr-Universität Bochum, Germany.

Magnets have practically become everyday objects. Permanent ferromagnetism is a property of only a few densely packed materials, such as iron, in which the exchange interaction naturally aligns the spins of individual atoms in the same direction, creating a persistent residual magnetic field. In the early universe, before iron and other magnetic materials had been created inside stars, such permanent magnetism did not exist.

Scientists have long wondered[1,2] where the observed cosmic magnetization came from, given that the fully ionized gas of the early universe contained no ferromagnetic particles. Many astrophysicists believe that galactic magnetic fields are generated and maintained by dynamo action[3,4], whereby the energy associated with the differential rotation of spiral galaxies is converted into magnetic field energy. However, the dynamo mechanism is only a means of amplification and dynamos require seed magnetic fields. Neither the dynamo process nor plasma instabilities[5] generate magnetic fields out of nothing: they need finite seed fields to start from.

Before the formation of the first stars, the luminous proto-interstellar matter consisted only of a fully ionised gas of protons, electrons, helium nuclei and lithium nuclei which were produced during the Big Bang. The physical parameters that describe the state of this gas are, however, not constant. Density and pressure fluctuate around certain mean values, and consequently electric and magnetic fields fluctuate around vanishing mean values. This small but finite dispersion in the form of random magnetic fields has now been calculated[6], specifically for the proto-interstellar gas densities and temperatures that occurred in the plasmas of the early universe at redshifts z=4-7 of the reionization epoch, when something, probably the light from the first stars, provided the energy needed to break up the previously neutral gas in the universe. The protons and electrons inside the plasma would have moved around continuously, simply by virtue of existing at a finite temperature. It is the finite variance of the resulting magnetic fluctuations that subsequently led to the creation of a stronger magnetism across the universe.

There have been alternative proposals for cosmic seed magnetic fields. Indeed, as far back as 1950 the German astronomer Ludwig Biermann[7] proposed that the centrifugal force in a rotating plasma cloud will separate the heavier protons from the lighter electrons, creating a charge separation that leads to tiny electric and magnetic fields. However, this scheme suffers from a lack of suitable rotating objects: it could only ever generate magnetic fields in a small portion of the medium.

To work out the field-strength variance of the fluctuations, Schlickeiser used a theory developed together with Peter Yoon of the University of Maryland[8]. The fluctuations are aperiodic, which means that, unlike the variations in magnetic and electric fields that give rise to electromagnetic radiation, they do not propagate as a wave. Indeed, their wavelength, the spatial distance over which the fluctuations occur, and their frequency, dictating how long these fluctuations last, are uncorrelated, in contrast to light, for which wavelength and frequency are tied to one another via the wave's velocity. Schlickeiser summed over all possible wavelengths and frequencies of the magnetic fluctuations in a gas at 10,000 K, roughly the temperature of the proto-interstellar medium at the time of reionization. The calculation revealed magnetic field variances of about 10⁻¹⁶ tesla inside very early-stage galaxies and about 10⁻²⁵ tesla in the voids between the protogalaxies. These values compare with the roughly 30 millionths of a tesla of the Earth's magnetic field and the 0.01 tesla typical of a strong refrigerator magnet. The magnetic field in the plasma of the early universe was thus very weak, but it covered almost 100 percent of the plasma volume. Weak as it is, it could provide the seeds of the primordial magnetic fields. The seed fields are tied passively to the highly conducting proto-interstellar plasma as frozen-in magnetic fluxes.

Figure 1: Illustration of the hydrodynamical stretching and ordering of cosmic magnetic fields. In the left panel a turbulent random magnetic field pervades the medium between five protostars. The right panel shows the ordering and stretching of the magnetic field as one of the stars explodes as a supernova. The outgoing shock wave compresses and orders the magnetic field in its vicinity [Image courtesy: Stefan Artmann].

Earlier analytical considerations and numerical simulations[9-12] showed that any shear and/or compression of the proto-interstellar medium not only amplifies these seed magnetic fields, but also makes them anisotropic. For a cube containing an initially isotropic magnetic field that is compressed along one axis to a fraction η ≪ 1 of its original length, these authors showed that the magnetic field components perpendicular to the compression axis are enhanced by the factor η⁻¹. Depending on the specific compression and/or shear exerted, even one-dimensional ordered magnetic field structures can be generated out of the original isotropically tangled field configuration[12].
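To make the η⁻¹ scaling concrete, here is a minimal numerical sketch (not taken from the papers cited above) that assumes ideal flux freezing and a purely illustrative compression factor; it draws an isotropic ensemble of random field vectors and applies a one-dimensional compression along the z axis:

```python
import numpy as np

rng = np.random.default_rng(0)
eta = 0.1  # illustrative compression factor along z (eta << 1), not a value from the paper

# Isotropic ensemble of random magnetic field vectors (arbitrary units).
B = rng.normal(size=(100_000, 3))

# Ideal flux freezing under compression of the box length along z by a factor eta:
# the face areas threaded by Bx and By shrink by eta, so those components grow by 1/eta,
# while Bz, which threads the unchanged x-y face, stays the same.
B_c = B.copy()
B_c[:, 0] /= eta
B_c[:, 1] /= eta

rms = lambda a: np.sqrt(np.mean(a**2))
print("perpendicular rms:", rms(B[:, :2]), "->", rms(B_c[:, :2]))  # enhanced by 1/eta
print("parallel rms:     ", rms(B[:, 2]), "->", rms(B_c[:, 2]))    # unchanged
```

The perpendicular components come out larger by exactly 1/η while the parallel component is untouched, so the initially isotropic field ends up strongly anisotropic, as described in the text.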

Hydrodynamical compression or shearing of the intergalactic medium arises from the shock waves of the supernova explosions of the first stars at the end of their lifetimes, or from supersonic stellar and galactic winds. Fig. 1 sketches the basic physical process. The seed magnetic field upstream of these shocks is random in direction, and by solving the hydrodynamical shock structure equations for oblique and conical shocks it has been demonstrated[13] that the shock compression enhances the downstream magnetic field component parallel to the shock front while leaving the component normal to the shock unaltered.

Consequently, a more ordered downstream magnetic field structure results from the randomly oriented upstream field. Such stretching and ordering of initially turbulent magnetic fields is also seen in numerical hydrodynamical simulations of supersonic jets in radio galaxies and quasars[11]. Obviously, this magnetic field stretching and ordering occurs only in gas regions overrun frequently by shocks and winds. Each individual shock or wind (with speed Vs) orders the field on spatial scales R on time scales given by the short shock crossing time R/Vs, but significant amplification requires multiple compressions. The ordered magnetic field filling factor is determined by the shock and wind filling factors, which are large (about 80 percent) in the coronal phase of interstellar media[14] and near shock waves in large-scale cosmic structures[15].

In cosmic regions with high shock/wind activity, this passive hydrodynamical amplification and stretching of magnetic fields continues until the magnetic restoring forces affect the gas dynamics, i.e. at ordered plasma betas near unity. As a consequence, magnetic fields with equipartition strength are not generated uniformly over the whole universe by this process, but only in localized cosmic regions with high shock/wind activity.

In protogalaxies significant and rapid amplification of the spontaneously emitted aperiodic turbulent magnetic fields results from the small-scale kinetic dynamo process[16,17] generated by the gravitational infall motions during the formation of the first stars[18-20]. Additional gaseous spiral motion may stretch and order the magnetic field on large protogalactic spatial scales.

References:
[1] Philipp P. Kronberg, "Intergalactic Magnetic Fields", Physics Today, 55, 40 (2002). Abstract.
[2] Lawrence M. Widrow, "Origin of galactic and extragalactic magnetic fields", Review of Modern Physics, 74, 775 (2002). Abstract.
[3] Dario Grasso, Hector R. Rubinstein, "Magnetic fields in the early Universe", Physics Reports, 348, 163 (2001). Abstract.
[4] E. N. Parker, Cosmical Magnetic Fields (Oxford, Clarendon, 1979).
[5] R. Schlickeiser, P. K. Shukla, "Cosmological Magnetic Field Generation by the Weibel Instability", Astrophysical Journal, 599, L57 (2003). Abstract.
[6] R. Schlickeiser, "Cosmic Magnetization: From Spontaneously Emitted Aperiodic Turbulent to Ordered Equipartition Fields", Physical Review Letters, 109, 261101 (2012). Abstract.
[7] L. Biermann, Z. Naturforschung, A 5, 65 (1950).
[8] R. Schlickeiser, P. H. Yoon, "Spontaneous electromagnetic fluctuations in unmagnetized plasmas I: General theory and nonrelativistic limit", Physics of Plasmas, 19, 022105 (2012). Abstract.
[9] R. A. Laing, MNRAS 193, 439 (1980).
[10] P. A. Hughes, H. D. Aller, M. F. Aller, Astrophysical Journal, 298, 301 (1985).
[11] A. P. Matthews, P. A. G. Scheuer, MNRAS 242, 616 (1990); A. P. Matthews, P. A. G. Scheuer, MNRAS 242, 623 (1990).
[12] R. A. Laing, "Synchrotron emission from anisotropic disordered magnetic fields", MNRAS 329, 417 (2002). Abstract.
[13] T. V. Cawthorne, W. K. Comb, Astrophysical Journal, 350, 536 (1990).
[14] C. McKee, J. P. Ostriker, "A theory of the interstellar medium - Three components regulated by supernova explosions in an inhomogeneous substrate", Astrophysical Journal, 218, 148 (1977). Abstract.
[15] Francesco Miniati, Dongsu Ryu, Hyesung Kang, T. W. Jones, Renyue Cen, and Jeremiah P. Ostriker, "Properties of Cosmic Shock Waves in Large-Scale Structure Formation", Astrophysical Journal, 542, 608 (2000). Abstract.
[16] Axel Brandenburg, Kandaswamy Subramanian, "Astrophysical magnetic fields and nonlinear dynamo theory", Physics Reports, 417, 1 (2005). Abstract.
[17] Alexander A. Schekochihin, Stanislav A. Boldyrev, and Russell M. Kulsrud, "Spectra and Growth Rates of Fluctuating Magnetic Fields in the Kinematic Dynamo Theory with Large Magnetic Prandtl Numbers", Astrophysical Journal, 567, 828 (2002). Abstract.
[18] Hao Xu, Brian W. O'Shea, David C. Collins, Michael L. Norman, Hui Li, and Shengtai Li, "The Biermann Battery in Cosmological MHD Simulations of Population III Star Formation", Astrophysical Journal, 688, L57 (2008). Abstract.
[19] D. R. G. Schleicher, R. Banerjee, S. Sur, T. G. Arshakian, R. S. Klessen, R. Beck, M. Spaans, "Small-scale dynamo action during the formation of the first stars and galaxies, I. The ideal MHD limit", Astronomy & Astrophysics, 522, A115 (2010). Abstract.
[20] Jennifer Schober, Dominik Schleicher, Christoph Federrath, Simon Glover, Ralf S. Klessen, Robi Banerjee, "The Small-scale Dynamo and Non-ideal Magnetohydrodynamics in Primordial Star Formation", Astrophysical Journal, 754, 99 (2012). Abstract.



Sunday, June 10, 2012

Pulsar Timing Arrays: Gravitational-wave detectors as big as the Galaxy

Rutger van Haasteren

[Rutger van Haasteren is the recipient of the 2011 GWIC (Gravitational Wave International Committee) Thesis Prize for his PhD thesis “Gravitational Wave detection and data analysis for Pulsar Timing Arrays” (PDF). -- 2Physics.com]

Author: Rutger van Haasteren 

Affiliation: 
Currently at the Albert Einstein Institute (Max Planck Institute for Gravitational Physics) in Hannover, Germany;
PhD research done at Leiden Observatory, Leiden University, The Netherlands.

Pulsars, rapidly rotating neutron stars that send an electromagnetic pulse towards the Earth with each revolution, are intimately connected to gravitational research and to testing Einstein's general theory of relativity. Besides the fact that neutron stars have very strong gravitational fields -- which is interesting from a general relativity point of view -- their use as accurate clocks allows for a whole range of new gravitational experiments. Especially millisecond pulsars, recycled pulsars that have been spun up by a companion star (see this video by John Rowe Animation/Australia Telescope National Facility, CSIRO, Australia), are very stable rotators due to their high spin frequency, relatively low magnetic field, and high mass. This makes millisecond pulsars most suitable as nearly perfect Einstein clocks.

2Physics articles by past winners of the GWIC Thesis Prize:

Haixing Miao (2010): "Exploring Macroscopic Quantum Mechanics with Gravitational-wave Detectors"
Holger J. Pletsch (2009): "Deepest All-Sky Surveys for Continuous Gravitational Waves"
Henning Vahlbruch (2008): "Squeezed Light – the first real application starts now"
Keisuke Goda (2007): "Beating the Quantum Limit in Gravitational Wave Detectors"
Yoichi Aso (2006): "Novel Low-Frequency Vibration Isolation Technique for Interferometric Gravitational Wave Detectors"
Rana Adhikari (2003-5)*: "Interferometric Detection of Gravitational Waves : 5 Needed Breakthroughs"
*Note, the gravitational wave thesis prize was started initially by LIGO as a biannual prize, limited to students of the LIGO Scientific Collaboration (LSC). The first award covered the period from 1 July 2003 to 30 June 2005. In 2006, the thesis prize was adopted by GWIC, renamed, converted to an annual prize, and opened to the broader international community.

We can accurately track a pulsar's trajectory with respect to the Earth by monitoring the arrival times of its pulses; given that for the best-timed millisecond pulsars we can determine the time of arrival of a single averaged pulse to within 50 nanoseconds, we are effectively sensitive to variations in the Earth-pulsar distance of only about fifteen meters. This is because light travels only about one foot in a nanosecond. Because some pulsars are in very tight binary systems, such an accurate measurement of the orbit of a pulsar around a companion can be used to verify or falsify the predictions of general relativity. This was first done with the binary pulsar PSR J1915+1606, also called the Hulse-Taylor binary, which was discovered in 1974 [1]. This system is a double neutron star system of which one of the two bodies is a pulsar. The two stars are close together: a full orbital period takes only 7.75 hours. For two such massive bodies in such a tight orbit, general relativity predicts that the emission of gravitational waves is significant, causing the orbital period to decrease as the system loses energy (see this video by John Rowe Animation/Australia Telescope National Facility, CSIRO, Australia). By closely tracking the dynamics of the Hulse-Taylor binary, this decrease in the orbital period was found to match the prediction exactly (figure 1), confirming the existence of gravitational waves. This resulted in Hulse and Taylor being awarded the Nobel Prize in Physics in 1993 [2].
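As a quick back-of-the-envelope check of that distance sensitivity (a sketch, not part of the timing analysis itself), the quoted 50 ns precision converts to a light-travel distance as follows:

```python
# Convert pulse time-of-arrival precision into an equivalent distance precision.
c = 299_792_458.0        # speed of light, m/s
toa_precision = 50e-9    # 50 nanoseconds, the quoted precision for the best-timed pulsars

print(f"{c * toa_precision:.1f} m")  # ~15 m, i.e. about fifty feet
```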

Figure 1: The decreasing orbital period of the binary pulsar PSR J1915+1606

The confirmation of the existence of gravitational waves with the Hulse-Taylor binary is considered an indirect detection, because what has been shown is that the energy loss of the system is consistent with gravitational-wave emission. A direct detection would require evidence that the gravitational waves are present somewhere other than at the point of emission: that is, a measurement with a gravitational-wave detector.

Generally speaking, two approaches exist to directly detect gravitational waves:
1) A large body of mass is used as a resonator, where the gravitational waves are expected to excite the resonant frequencies of such a so-called resonant-mass detector.
2) A signal is sent from one place to another, where the gravitational waves are expected to perturb the propagation of the signal such that its arrival time slightly changes. In laser interferometry detectors (e.g. LIGO) this results in a changing interference pattern at the point of recombination of two laser beams.

Figure 2: Concept of a pulsar timing array [Image credit: David J. Champion]

As it turns out, millisecond pulsars can be used to 'construct' a gravitational-wave detector of the second kind, where the pulse propagation from the pulsar to the Earth is perturbed by astrophysical gravitational waves [3]. The otherwise very regular pulses of a millisecond pulsar will arrive slightly early or late due to gravitational waves that pass through the Earth-pulsar system, which in principle makes the gravitational waves detectable. Because the millisecond pulsars typically observed for these purposes are several kpc away, a pulsar timing array is basically a gravitational-wave detector of galactic scale (figure 2; also see this video by John Rowe Animation/Australia Telescope National Facility, CSIRO, Australia).

A pulsar timing array is sensitive to gravitational waves with frequencies of a few dozen to a few hundred nHz [4], which is the frequency range where tight supermassive black-hole binaries (SMBHBs) are expected to be the dominant sources of continuous gravitational waves. A canonical SMBHB system contributing to the gravitational-wave signal would consist of two supermassive black holes with masses of close to one billion solar masses at a distance of a Gpc, with an orbital period of several months to years. Many such systems are expected to exist in the universe, which would result in an isotropic superposition usually called a stochastic background of gravitational waves [5]. Large-scale computer simulations of the evolution of the universe suggest that some individual sources might eventually be detectable on their own, but the bulk of the signal would consist of such an isotropic stochastic background of gravitational waves [6]. Because the evolution of the universe is intimately connected to the SMBHB gravitational-wave signal, measuring the stochastic gravitational-wave background, and possibly single SMBHB sources, is expected to contribute greatly to our understanding of cosmology. This frequency band is unreachable for any other type of gravitational-wave detector, which makes pulsar timing arrays a unique and complementary tool next to other gravitational-wave detection programmes such as the ground-based gravitational-wave observatories.
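To see how orbital periods of months to years map onto that nanohertz band, here is a small sketch using the standard relation that a circular binary emits gravitational waves at twice its orbital frequency (the periods below are illustrative, not taken from the simulations cited):

```python
import numpy as np

year = 3.156e7  # seconds
periods = np.array([0.25, 0.5, 1.0, 2.0]) * year  # illustrative orbital periods

f_gw = 2.0 / periods  # circular binaries radiate at twice the orbital frequency
for P, f in zip(periods / year, f_gw):
    print(f"P_orb = {P:4.2f} yr  ->  f_GW = {f * 1e9:6.1f} nHz")
# Periods of a few months to a couple of years land in the few-dozen to
# few-hundred nHz band probed by pulsar timing arrays.
```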

Pulsar timing array science is still relatively new, and an international pulsar timing array (IPTA, [7]) collaboration has only recently been formed as an alliance between three ongoing pulsar timing array efforts: the European Pulsar Timing Array (EPTA, [8]), the North American Nanohertz Observatory for Gravitational Waves (NANOGrav, [10]), and the Australian Parkes Pulsar Timing Array (PPTA, [9]). The PPTA uses a single radio telescope based in Parkes, Australia, with a 64 m dish. NANOGrav uses the world's two largest single-dish radio telescopes: the 100 m Green Bank Telescope and the 305 m Arecibo Observatory. The EPTA uses five radio telescopes spread throughout Europe: the Westerbork Synthesis Radio Telescope in the Netherlands, the Lovell Telescope in the UK, the Effelsberg telescope in Germany, the Nançay radio telescope in France, and the Sardinia Radio Telescope in Italy. These five European radio telescopes are currently being linked together to coherently combine their signals, effectively forming one big phased array called the Large European Array for Pulsars (LEAP, [8]). This should boost sensitivity for pulsar timing array purposes.

The Effelsberg Radio Telescope near Effelsberg, Germany. This is the world's second-largest fully steerable radio telescope, with a diameter of 100 meters. [Image credit: Gemma Janssen]

Between the ground-based gravitational-wave detectors and the pulsar timing arrays there is essentially a scientific race over who will make the first detection, with both having good chances of being first. Pulsar timing arrays have the advantage that the expected signal grows sharply with observation time: even if ongoing efforts to reduce the noise were to stall, sensitivity would still gradually increase, making a detection possible. However, the theoretical predictions for the stochastic background amplitude and the event rates of single sources are less certain than for ground-based detectors. The big ground-based detectors are currently upgrading their instruments, which are expected to become operational sometime in 2015.

The Parkes Radio Telescope near Parkes, Australia. This radio telescope, with a diameter of 64 meters, is the world's leading radio telescope in the discovery of radio pulsars. [Image credit: Aristeidis Noutsos]

Even without upgrading instruments, the sensitivity of both types of gravitational-wave detectors can be increased with better data analysis methods, which allow more information to be extracted from the data. To that end, a Bayesian data analysis method for pulsar timing arrays has been developed that can, in principle, extract all the information about the signal present in the data. General relativity describes the gravitational-wave signal of the stochastic background as a signal that is correlated both in time and spatially between all the pulsars, which means that the data of the different pulsars cannot be treated individually. Extracting such a signal from the data is non-trivial, especially for non-uniformly sampled data with ill-understood noise like that of millisecond pulsars. The Bayesian approach is well suited to this problem, and has been shown to work well both for stochastic background signals [11] and for single sources such as the gravitational-wave memory effect [12]. The developed Bayesian analysis has resulted in the most stringent upper limit on the stochastic gravitational-wave background to date [11].
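The spatial part of that correlation is the Hellings-Downs curve, the angular correlation pattern that general relativity predicts for the timing residuals of pulsar pairs immersed in an isotropic stochastic background. A minimal sketch of this standard result (not code from the thesis) is:

```python
import numpy as np

def hellings_downs(theta):
    """Expected cross-correlation of timing residuals for two distinct pulsars
    separated by angle theta (radians), for an isotropic GW background."""
    x = (1.0 - np.cos(theta)) / 2.0
    xlogx = x * np.log(x) if x > 0 else 0.0  # x*log(x) -> 0 as theta -> 0
    return 1.5 * xlogx - 0.25 * x + 0.5

for deg in (0, 30, 60, 90, 120, 180):
    print(f"{deg:3d} deg : {hellings_downs(np.radians(deg)):+.3f}")
# The correlation starts at +0.5 for nearly coincident pulsars, goes negative
# between roughly 50 and 120 degrees of separation, and returns to +0.25 for
# antipodal pairs -- the signature that distinguishes a gravitational-wave
# background from noise intrinsic to each individual pulsar.
```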

The Westerbork Synthesis Radio Telescope near Westerbork, The Netherlands. This radio telescope is composed of 14 dishes with a diameter of 25 meters each, which combine into a radio telescope of sensitivity similar to that of the Effelsberg Radio Telescope. [Image credit: Cees Bassa]

In the coming years, the Chinese Five-hundred-meter Aperture Spherical Telescope (FAST, [13]) and the planned Square Kilometre Array (SKA, [14]) will provide a major leap in sensitivity. The SKA in particular, built by a collaboration of 20 countries, will dramatically change pulsar timing array science. It will be a phased array of many dishes located across South Africa, Australia and New Zealand [15]. With its vast collecting area of one million square meters it is expected to find nearly all the pulsars in the Galaxy. With all those pulsars, a very sensitive gravitational-wave detector with possibly up to one hundred arms can be constructed. This should open up a new window on the universe, and provide unique insights into cosmology.

References:
[1] R.A. Hulse, J.H. Taylor, "Discovery of a pulsar in a binary system", The Astrophysical Journal, 195, L51 (1975). Full text.
[2] http://www.nobelprize.org/nobel_prizes/physics/laureates/1993/
[3] Frank B. Estabrook and Hugo D. Wahlquist, "Response of Doppler spacecraft tracking to gravitational radiation", General Relativity and Gravitation, 6, 439 (1975). Abstract.
[4] R.S. Foster, D.C. Backer, "Constructing a pulsar timing array", The Astrophysical Journal, 361, 300 (1990). Abstract.
[5] E.S. Phinney, "A Practical Theorem on Gravitational Wave Backgrounds", eprint arXiv:astro-ph/0108028v1 (2001).
[6] A. Sesana, A. Vecchio, C. N. Colacino, "The stochastic gravitational-wave background from massive black hole binary systems: implications for observations with Pulsar Timing Arrays", Monthly Notices of the Royal Astronomical Society, 390, 192 (2008). Abstract.
[7] G Hobbs, A Archibald, Z Arzoumanian, D Backer, M Bailes, N D R Bhat, M Burgay, S Burke-Spolaor, D Champion, I Cognard, W Coles, J Cordes, P Demorest, G Desvignes, R D Ferdman, L Finn, P Freire, M Gonzalez, J Hessels, A Hotan, G Janssen, F Jenet, A Jessner, C Jordan, V Kaspi, M Kramer, V Kondratiev, J Lazio, K Lazaridis, K J Lee, Y Levin, A Lommen, D Lorimer, R Lynch, A Lyne, R Manchester, M McLaughlin, D Nice, S Oslowski, M Pilia, A Possenti, M Purver, S Ransom, J Reynolds, S Sanidas, J Sarkissian, A Sesana, R Shannon, X Siemens, I Stairs, B Stappers, D Stinebring, G Theureau, R van Haasteren, W van Straten, J P W Verbiest, D R B Yardley and X P You,"The International Pulsar Timing Array project: using pulsars as a gravitational wave detector", Classical and Quantum Gravity, 27, 084013 (2010). Abstract.
[8] R D Ferdman, R van Haasteren, C G Bassa, M Burgay, I Cognard, A Corongiu, N D'Amico, G Desvignes, J W T Hessels, G H Janssen, A Jessner, C Jordan, R Karuppusamy, E F Keane, M Kramer, K Lazaridis, Y Levin, A G Lyne, M Pilia, A Possenti, M Purver, B Stappers, S Sanidas, R Smits and G Theureau, "The European Pulsar Timing Array: current efforts and a LEAP toward the future", Classical and Quantum Gravity, 27, 084014 (2010). Abstract.
[9] G. Hobbs, D. Miller, R. N. Manchester, J. Dempsey, J. M. Chapman, J. Khoo, J. Applegate, M. Bailes, N. D. R. Bhat, R. Bridle, A. Borg, A. Brown, C. Burnett, F. Camilo, C. Cattalini, A. Chaudhary, R. Chen, N. D’Amico, L. Kedziora-Chudczer, T. Cornwell, R. George, G. Hampson, M. Hepburn, A. Jameson, M. Keith, T. Kelly, A. Kosmynin, E. Lenc, D. Lorimer, C. Love, A. Lyne, V. McIntyre, J. Morrissey, M. Pienaar, J. Reynolds, G. Ryder, J. Sarkissian, A. Stevenson, A. Treloar, W. van Straten, M. Whiting and G. Wilson, "The Parkes Observatory Pulsar Data Archive", Publications of the Astronomical Society of Australia, 26, 103 (2009). Full Text.
[10] P. B. Demorest, R. D. Ferdman, M. E. Gonzalez, D. Nice, S. Ransom, I. H. Stairs, Z. Arzoumanian, A. Brazier, S. Burke-Spolaor, S. J. Chamberlin, J. M. Cordes, J. Ellis, L. S. Finn, P. Freire, S. Giampanis, F. Jenet, V. M. Kaspi, J. Lazio, A. N. Lommen, M. McLaughlin, N. Palliyaguru, D. Perrodin, R. M. Shannon, X. Siemens, D. Stinebring, J. Swiggum, W. W. Zhu, "Limits on the Stochastic Gravitational Wave Background from the North American Nanohertz Observatory for Gravitational Waves", eprint arXiv:1201.6641 (2012)
[11] R. van Haasteren, Y. Levin, G. H. Janssen, K. Lazaridis, M. Kramer, B. W. Stappers, G. Desvignes, M. B. Purver, A. G. Lyne, R. D. Ferdman, A. Jessner, I. Cognard, G. Theureau, N. D’Amico, A. Possenti, M. Burgay, A. Corongiu, J. W. T. Hessels, R. Smits and J. P. W. Verbiest, "Placing limits on the stochastic gravitational-wave background using European Pulsar Timing Array data", Monthly Notices of the Royal Astronomical Society, 414, 3117 (2011). Abstract.
[12] Rutger van Haasteren and Yuri Levin, "Gravitational-wave memory and pulsar timing arrays", Monthly Notices of the Royal Astronomical Society, 401, 2372 (2010). Abstract.
[13] R. Smits, M. Kramer, B. Stappers, D.R. Lorimer, J. Cordes, and A. Faulkner, "Pulsar searches and timing with the square kilometre array", Astronomy & Astrophysics, 505, 919 (2009). Abstract.
[14] T. Joseph W. Lazio, "The Square Kilometre Array", in Panoramic Radio Astronomy: Wide-field 1-2 GHz Research on Galaxy Evolution (2009). Full Text.
[15] http://www.skatelescope.org/news/dual-site-agreed-square-kilometre-array-telescope/



Sunday, October 24, 2010

Looking for a Dark Matter Signature in the Sun’s Interior

Ilídio Lopes

[This is an invited article based on the author's work in collaboration with Joseph Silk of the University of Oxford -- 2Physics.com]

Author: Ilídio Lopes
Affiliation:
Centro Multidisciplinar de Astrofísica, Instituto Superior Técnico, Lisboa, Portugal;
Departamento de Física, Universidade de Évora, Évora, Portugal.

The standard concordance cosmological model of the Universe has firmly established that about 85% of its matter consists of cold, non-baryonic particles which are almost collisionless. During its evolution, the Universe formed a complex network of dark matter haloes in which baryons are gravitationally trapped, leading to the formation of galaxies and stars, including our own Galaxy and our Sun. There are many particle physics candidates for dark matter, whose specific masses and other properties are still unknown. Among these candidates, the neutralino, a fundamental particle proposed by supersymmetric particle physics models, seems to be the most suitable. The neutralino is a weakly interacting massive particle whose present-day relic thermal abundance was determined by the freeze-out of dark matter annihilation in the primordial universe.

Among celestial bodies, the Sun is a privileged place to look for dark matter particles, due to its proximity to the Earth. More significantly, its large mass -- which constitutes 99% of the mass of the solar system -- creates a natural local trap for the capture of dark matter particles. Present-day simulations show that dark matter particles in our local dark matter halo, depending on their mass and other intrinsic properties, can be gravitationally captured by the Sun and accumulate in significant amounts in its core. By means of helioseismology and solar neutrinos we are able to probe the physics of the Sun's interior, and by doing so we can look for a dark matter signature.

Neutrinos, once produced in the nuclear reactions of the solar core, leave the Sun and reach the Earth in about eight minutes. These neutrinos stream freely to Earth, subject only to weak-scale interactions with baryons, with a typical scattering cross section of the order of 10⁻⁴⁴ cm², and hence are natural “messengers” of the physical processes occurring in the Sun’s deepest layers. In a paper to be published in the scientific journal “Science” [1], Ilídio Lopes (from Évora University and Instituto Superior Técnico) and Joseph Silk (from Oxford University) suggest that the presence of dark matter particles in the Sun’s interior, depending upon their mass among other properties, can cause a significant drop in its central temperature, leading to a decrease in the neutrino fluxes produced in the Sun’s core. The calculations have shown that, in some dark matter scenarios, an isothermal solar core is formed. In another paper, published in “The Astrophysical Journal Letters” [2], the same authors suggest that, through the detection of gravity modes in the Sun’s interior, helioseismology can also independently test for the presence of dark matter in the Sun’s core.
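As a quick consistency check of that travel time (simple arithmetic, not part of the papers), dividing the Sun-Earth distance by the speed of light gives:

```python
AU = 1.496e11   # mean Sun-Earth distance, m
c = 2.998e8     # speed of light, m/s; solar neutrinos travel essentially at this speed

t = AU / c
print(f"{t:.0f} s = {t / 60:.1f} min")  # about 499 s, i.e. roughly eight minutes
```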

The new generation of solar neutrino experiments will be able to measure the neutrino fluxes produced at different locations in the Sun’s core. The Borexino and SNO experiments are starting to measure the neutrino fluxes produced at different depths of the Sun’s interior by the nuclear reactions of the proton-proton chain, namely the pp, ⁷Be and ⁸B electron neutrinos, among others. The high-precision measurements expected from such neutrino experiments will provide an excellent tool for testing the existence of dark matter in the Sun’s core. In the near future, the pep neutrino fluxes and the neutrinos from the CNO cycle are also expected to be measured, by the Borexino detector or by the upcoming SNO+ and LENA experiments.

This work is supported in part by Fundação para a Ciência e a Tecnologia and Fundação Calouste Gulbenkian.

References:
[1] Ilídio Lopes, Joseph Silk, "Neutrino Spectroscopy Can Probe the Dark Matter Content in the Sun", Science, DOI: 10.1126/science.1196564, in press. Abstract.
[2] Ilídio Lopes, Joseph Silk, "Probing the Existence of a Dark Matter Isothermal Core Using Gravity Modes", The Astrophysical Journal Letters, 722, L95-L99 (2010), DOI: 10.1088/2041-8205/722/1/L95. Abstract.



Sunday, July 25, 2010

Deepest All-Sky Surveys for Continuous Gravitational Waves

Holger J. Pletsch

[This is an invited article from Dr. Holger J. Pletsch who is the recipient of the 2009 GWIC (Gravitational Wave International Committee) Thesis Prize for his PhD thesis “Data Analysis for Continuous Gravitational Waves: Deepest All-Sky Surveys” (PDF). The thesis also received the 2009 Dieter Rampacher Prize of the Max Planck Society in Germany -- awarded to its youngest Ph.D. candidates usually between the ages of 25 and 27 for their outstanding doctoral work. -- 2Physics.com]

Author: Holger J. Pletsch
Affiliation:
Max-Planck-Institut für Gravitationsphysik (Albert-Einstein-Institut) and Leibniz Universität Hannover

Besides validating Einstein's theory of General Relativity, the direct detection of gravitational waves will also constitute an important new astronomical tool. Prime target sources of continuous gravitational waves (CW) for current Earth-based laser-interferometric detectors such as LIGO [1] are rapidly spinning compact objects, such as neutron stars with nonaxisymmetric deformations [2].

Very promising searches are all-sky surveys for previously unknown CW emitters. As most neutron stars are electromagnetically invisible, gravitational-wave observations might reveal completely new populations of neutron stars. Therefore, a CW detection could potentially be extremely helpful for neutron-star astrophysics. Even the null results of today's search efforts, yielding observational upper limits [3], already constrain the physics of neutron stars.

2Physics articles by past winners of the GWIC Thesis Prize:
Henning Vahlbruch (2008):
"Squeezed Light – the first real application starts now"
Keisuke Goda (2007): "Beating the Quantum Limit in Gravitational Wave Detectors"
Yoichi Aso (2006): "Novel Low-Frequency Vibration Isolation Technique for Interferometric Gravitational Wave Detectors"
Rana Adhikari (2003-5)*: "Interferometric Detection of Gravitational Waves : 5 Needed Breakthroughs"
*Note, the gravitational wave thesis prize was started initially by LIGO as a biannual prize, limited to students of the LIGO Scientific Collaboration (LSC). The first award covered the period from 1 July 2003 to 30 June 2005. In 2006, the thesis prize was adopted by GWIC, renamed, converted to an annual prize, and opened to the broader international community.


The expected CW signals are extremely weak, and deeply buried in the detector instrument noise. Thus, sensitive data analysis methods are required to extract these signals. A powerful method is coherent matched filtering, where the signal-to-noise ratio (SNR) increases with the square root of the observation time. Hence, detection is a matter of observing long enough to accumulate sufficient SNR.
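A toy simulation can make the square-root scaling explicit. The sketch below (with made-up signal parameters, not the actual search pipeline) matched-filters a monochromatic signal buried in white noise for increasing data lengths:

```python
import numpy as np

rng = np.random.default_rng(1)

def matched_filter_snr(n_samples, amplitude=0.05, sigma=1.0, freq=0.05):
    """Coherent matched-filter SNR for a monochromatic signal in white Gaussian noise."""
    t = np.arange(n_samples)
    template = np.sin(2 * np.pi * freq * t)
    data = amplitude * template + rng.normal(scale=sigma, size=n_samples)
    output = np.dot(data, template)                           # matched-filter output
    noise_std = sigma * np.sqrt(np.dot(template, template))   # its standard deviation under noise alone
    return output / noise_std

for n in (10_000, 40_000, 160_000):
    snrs = [matched_filter_snr(n) for _ in range(50)]
    print(f"N = {n:6d}   mean SNR = {np.mean(snrs):5.2f}")
# Quadrupling the number of samples (i.e. the observation time) roughly doubles
# the SNR: the sqrt(T) growth quoted above.
```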

The CW data analysis is further complicated by the fact that the terrestrial detector location Doppler-modulates the amplitude and phase of the waveform, as the Earth moves relative to the solar system barycenter (SSB). The parameters describing the signal's amplitude variation may be analytically eliminated by maximizing the coherent matched-filtering statistic. The remaining search parameters describing the signal's phase are the source's sky location, frequency and frequency derivatives. The resulting coherent detection statistic is commonly called the F-statistic [4].

However, what ultimately limits the sensitivity of all-sky surveys for unknown CW sources using the F-statistic is the finite computing power available. Such searches are computationally very expensive, because for maximum sensitivity one must convolve the full data set with many signal waveforms (templates) corresponding to all possible sources. But the number of templates required for a fully coherent F-statistic search increases as a high power of the coherent observation time. For a year of data, the computational cost to search a realistic range of parameter space exceeds the total computing power on Earth [4,5]. Thus a fully coherent search is limited to much shorter observation times.

Searching year-long data sets is accomplished by less costly hierarchical, so-called “semicoherent” methods [6,7]. The data are broken into segments, each much shorter than one year. Every segment is analyzed coherently, computing the F-statistic on a coarse grid of templates. Then the F-statistics from all segments (or statistics derived from F) are incoherently combined using a common fine grid of templates. This way phase information between segments is discarded, hence the term “semicoherent”.
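The following toy sketch illustrates only the structure of that two-step idea, with fabricated data and a plain periodogram standing in for the F-statistic: a coherent statistic is computed per segment, and the per-segment statistics are then summed incoherently.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data set: a weak sinusoid in white noise, split into short segments.
n_segments, seg_len = 121, 2048
f_true, amp = 0.1, 0.05
t = np.arange(n_segments * seg_len)
data = amp * np.sin(2 * np.pi * f_true * t) + rng.normal(size=t.size)

# Candidate frequencies play the role of the template grid.
freqs = np.linspace(0.09, 0.11, 201)

semicoherent = np.zeros_like(freqs)
for k in range(n_segments):
    ts = np.arange(k * seg_len, (k + 1) * seg_len)
    seg = data[ts]
    phasors = np.exp(-2j * np.pi * np.outer(freqs, ts))
    coherent_power = np.abs(phasors @ seg) ** 2 / seg_len  # stand-in for the per-segment F-statistic
    semicoherent += coherent_power                          # incoherent sum: phases between segments discarded

print("recovered frequency:", freqs[np.argmax(semicoherent)], " (true:", f_true, ")")
```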

A central long-standing problem in these semicoherent methods was the design of, and the link between, the coarse and fine grids. Previous methods, while creative and clever, were arbitrary and ad hoc constructions. In the most recent work [8], the optimal solution for the incoherent combination step has been found. The key quantity is the fractional loss, called mismatch, in expected F-statistic for a given signal at a nearby grid point. Locally Taylor-expanding the mismatch (to quadratic order) in the coordinate differences defines a positive definite metric. Previous methods considered parameter correlations in the F-statistic only to linear order in the coherent integration time, discarding higher orders from the metric.

The F-statistic has strong "global" (large-scale) correlations in the physical coordinates, extending outside the region in which the mismatch is well approximated by the metric. In recent work [9], an improved understanding of these large-scale correlations in the F-statistic was obtained. In particular, for realistic segment durations (a day or longer) it turned out to be crucial to also consider the fractional loss in F to second order in the coherent integration time.

Exploiting these large-scale correlations in the coherent detection statistic F has led to a significantly improved semicoherent search technique for CW signals [8]. This novel method is optimal if the semicoherent detection statistic is taken to be the sum of one coarse-grid F-statistic value from each data segment.

More precisely, the improved understanding of large-scale correlations yields new coordinates on the phase parameter space. In these coordinates the first fully analytical metric for the incoherent combination step is obtained, accurately approximating the mismatch. Hence, the optimal (closest) coarse-grid point from each segment can be determined for any given fine-grid point in the incoherent combination step. So the new method combines the coherent segment results much more efficiently, since previous techniques did not use metric information beyond linear order in coherent integration time.

Fig.1: The Einstein@Home screensaver

The primary application area of this new technique is the volunteer distributed computing project Einstein@Home [10]. Members of the public can sign up their home or office computers (hosts) through the web page and download a screensaver. When a host is idle, the screensaver is displayed (Fig. 1) while, in the background, the host automatically downloads small chunks of data from the servers, carries out the analysis, and reports back the results. More than 250,000 individuals have already contributed, and the computational power achieved (0.25 PFlop/s) is in fact competitive with the world's largest supercomputers.

What improvement can be expected from the new search technique on Einstein@Home? Via Monte Carlo simulations, an implementation of the new method has been compared to the conventional Hough transform technique [7] previously used on Einstein@Home. To provide a realistic comparison, the simulated data covered the same time intervals as the input data of a recent Einstein@Home search run which employed the conventional Hough technique. Those data, from LIGO’s fifth science run (S5), included 121 data segments of 25-hour duration. The false alarm probabilities were obtained using many simulated data sets with different realizations of stationary Gaussian white noise. To find the detection probabilities, different CW signals with fixed source gravitational-wave amplitude were added. The other source parameters were randomly drawn from uniform distributions.

Fig.2: Performance demonstration of the new search method. Left panel: Receiver operating characteristic curves for fixed source strain amplitude. Right panel: Detection probability as a function of source strain amplitude, at 1% false alarm probability.

The results of this comparison are illustrated in Fig. 2. The right panel of Fig. 2 shows the detection efficiencies for different values of source gravitational-wave amplitude (strain), at a fixed 1% false alarm probability. The new method has been applied in two modes of operation: in the first, F-statistics were simply summed across segments; in the second, only ones or zeros were summed (number counts) depending upon whether F exceeded a predefined threshold in a given segment. In both modes of operation, the new technique performs significantly better than the conventional Hough method. For instance, 90% detection probability with the new method (in number-count mode) is obtained for a source strain amplitude about 6 times smaller than needed by the Hough method (which is also based on number counts): thus the "distance reach" of the new technique is about 6 times larger. This increases the number of potentially detectable sources by more than 2 orders of magnitude, since the "visible" spatial volume increases as the cube of the distance, as illustrated in Fig. 3.
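The volume argument is simple arithmetic, sketched here for concreteness:

```python
reach_gain = 6                   # factor by which the distance reach improves
volume_gain = reach_gain ** 3    # detectable volume, and hence source count, scales as the cube
print(volume_gain)               # 216: more than two orders of magnitude more potential sources
```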

Fig.3: Artist’s illustration of increased "visible" spatial volume due to the novel search technique.

The current Einstein@Home search run [10] in fact deploys this new technique for the first time, analyzing about two years of LIGO’s most sensitive S5 data. The combination of a better search technique, plus more and more sensitive data, greatly increases the chance of making the first gravitational wave detection of a CW source. In the long term, the detection of CW signals will provide new means to discover and locate neutron stars, and will eventually provide unique insights into the nature of matter at high densities.

References:
[1] B. Abbott et al. (LIGO Scientific Collaboration), "LIGO: the Laser Interferometer Gravitational-wave Observatory", Rep. Prog. Phys. 72, 076901 (2009), Abstract.
[2] R. Prix (for the LIGO Scientific Collaboration), in “Neutron Stars and Pulsars”, Springer, (2009).
[3] B. Abbott et al. (LIGO Scientific Collaboration), "Beating the spin-down limit on gravitational wave emission from the Crab pulsar", Astrophys. J. Lett. 683, L45 (2008), Abstract; B. Abbott et al. (LIGO Scientific Collaboration), "All-Sky LIGO Search for Periodic Gravitational Waves in the Early Fifth-Science-Run Data", Phys. Rev. Lett. 102, 111102 (2009), Abstract; B. Abbott et al. (LIGO Scientific Collaboration), "Einstein@Home search for periodic gravitational waves in early S5 LIGO data", Phys. Rev. D 80, 042003 (2009), Abstract.
[4] P. Jaranowski, A. Królak and B. F. Schutz, "Data analysis of gravitational-wave signals from spinning neutron stars: The signal and its detection", Phys. Rev. D 58, 063001 (1998), Abstract; P. Jaranowski and A. Królak, "Gravitational-Wave Data Analysis. Formalism and Sample Applications: The Gaussian Case", Living Reviews in Relativity, 8 (2005), Link.
[5] P. R. Brady, T. Creighton, C. Cutler and B. F. Schutz, "Searching for periodic sources with LIGO", Phys. Rev. D 57, 2101 (1998), Abstract.
[6] P. R. Brady and T. Creighton, "Searching for periodic sources with LIGO. II. Hierarchical searches", Phys. Rev. D 61, 082001 (2000), Abstract.
[7] B. Krishnan, A. M. Sintes, M. A. Papa, B. F. Schutz, S. Frasca and C. Palomba, "Hough transform search for continuous gravitational waves", Phys. Rev. D 70, 082001 (2004), Abstract.
[8] H. J. Pletsch and B. Allen, "Exploiting Large-Scale Correlations to Detect Continuous Gravitational Waves", Phys. Rev. Lett. 103, 181102 (2009), Abstract.
[9] H. J. Pletsch, "Parameter-space correlations of the optimal statistic for continuous gravitational-wave detection", Phys. Rev. D 78, 102005 (2008), Abstract.
[10] Einstein@Home: http://einstein.phys.uwm.edu/.



Sunday, April 25, 2010

Searching Dark Energy with Supernova Dataset

Saul Perlmutter [Photo courtesy: Lawrence Berkeley National Laboratory]

The international Supernova Cosmology Project (SCP), based at the U.S. Department of Energy’s Lawrence Berkeley National Laboratory, has announced the Union2 compilation of hundreds of Type Ia supernovae, the largest collection ever of high-quality data from numerous surveys. Analysis of the new compilation significantly narrows the possible values that dark energy might take—but not enough to decide among fundamentally different theories of its nature.

“We’ve used the world’s best-yet dataset of Type Ia supernovae to determine the world’s best-yet constraints on dark energy,” says Saul Perlmutter, leader of the SCP. “We’ve tightened in on dark energy out to redshifts of one”—when the universe was only about six billion years old, less than half its present age—“but while at lower redshifts the values are perfectly consistent with a cosmological constant, the most important questions remain.”

Two views of one of the six new distant supernovae in the Supernova Cosmology Project's just-released Union2 survey, which among other refinements compares ground-based infrared observations (in this case by Japan's Subaru Telescope on Mauna Kea) with follow-up observations by the Hubble Space Telescope [Image courtesy: Supernova Cosmology Project]

That’s because possible values of dark energy from supernovae data become increasingly uncertain at redshifts greater than one-half, the range where dark energy’s effects on the expansion of the universe are most apparent as we look farther back in time. Says Perlmutter of the widening error bars at higher redshifts, “Right now, you could drive a truck through them.”

As its name implies, the cosmological constant fills space with constant pressure, counteracting the mutual gravitational attraction of all the matter in the universe; it is often identified with the energy of the vacuum. If indeed dark energy turns out to be the cosmological constant, however, even more questions will arise.

“There is a huge discrepancy between the theoretical prediction for vacuum energy and what we measure as dark energy,” says Rahman Amanullah, who led SCP’s Union2 analysis; Amanullah is presently with the Oskar Klein Center at Stockholm University and was a postdoctoral fellow in Berkeley Lab’s Physics Division from 2006 to 2008. “If it turns out in the future that dark energy is consistent with a cosmological constant also at early times of the universe, it will be an enormous challenge to explain this at a fundamental theoretical level.”

Rahman Amanullah [Photo courtesy:Stockholm University]

A major group of competing theories posits a dynamical form of dark energy that varies in time. Choosing among theories means comparing what they predict about the dark energy equation of state, a value written w. While the new analysis has detected no change in w, there is much room for possibly significant differences in w with increasing redshift (written z).

“Most dark-energy theories are not far from the cosmological constant at z less than one,” Perlmutter says. “We’re looking for deviations in w at high z, but there the values are very poorly constrained.”

In their new analysis to be published in the Astrophysical Journal [1], the Supernova Cosmology Project reports on the addition of several well-measured, very distant supernovae to the Union2 compilation.

Dark energy fills the universe, but what is it?

Dark energy was discovered in the late 1990s by the Supernova Cosmology Project and the competing High-Z Supernova Search Team, both using distant Type Ia supernovae as “standard candles” to measure the expansion history of the universe. To their surprise, both teams found that expansion is not slowing due to gravity but accelerating.

Other methods for measuring the history of cosmic expansion have been developed, including baryon acoustic oscillation and weak gravitational lensing, but supernovae remain the most advanced technique. Indeed, in the years since dark energy was discovered using only a few dozen Type Ia supernovae, many new searches have been mounted with ground-based telescopes and the Hubble Space Telescope; many hundreds of Type Ia’s have been discovered; techniques for measuring and comparing them have continually improved.

In 2008 the SCP, led by the work of team member Marek Kowalski of the Humboldt University of Berlin, created a way to cross-correlate and analyze datasets from different surveys made with different instruments, resulting in the SCP’s first Union compilation. In 2009 a number of new surveys were added.

The inclusion of six new high-redshift supernovae found by the SCP in 2001, including two with z greater than one, is the first in a series of very high-redshift additions to the Union2 compilation now being announced, and brings the current number of supernovae in the whole compilation to 557.

“Even with the world’s premier astronomical observatories, obtaining good quality, time-critical data of supernovae that are beyond a redshift of one is a difficult task,” says SCP member Chris Lidman of the Anglo-Australian Observatory near Sydney, a major contributor to the analysis. “It requires close collaboration between astronomers who are spread over several continents and several time zones. Good team work is essential.”

Union2 has not only added many new supernovae to the Union compilation but has refined the methods of analysis and in some cases improved the observations. The latest high-z supernovae in Union2 include the most distant supernovae for which ground-based near-infrared observations are available, a valuable opportunity to compare ground-based and Hubble Space Telescope observations of very distant supernovae.

Type Ia supernovae are the best standard candles ever found for measuring cosmic distances because the great majority are so bright and so similar in brightness. Light-curve fitting is the basic method for standardizing what variations in brightness remain: supernova light curves (their rising and falling brightness over time) are compared and uniformly adjusted to yield comparative intrinsic brightnesses. The light curves of all the hundreds of supernovae in the Union2 collection have been consistently reanalyzed.
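As a purely illustrative toy of what such a standardization step does (the correction coefficient below is made up, not a value fitted in the Union2 analysis), broader, slower-declining light curves are intrinsically brighter, so the observed peak magnitude is corrected with a term in the light-curve "stretch":

```python
alpha = 1.5  # illustrative stretch-luminosity coefficient, not a fitted SCP value

def standardized_peak_mag(m_obs, stretch):
    """Toy stretch correction: broader (stretch > 1) supernovae are intrinsically
    brighter, so their observed peak magnitude is corrected upward."""
    return m_obs + alpha * (stretch - 1.0)

# Two hypothetical supernovae at the same distance with different light-curve widths:
for m_obs, s in [(24.10, 1.1), (24.40, 0.9)]:
    print(f"observed {m_obs:.2f} mag, stretch {s:.1f} -> "
          f"standardized {standardized_peak_mag(m_obs, s):.2f} mag")
# After correction both land at the same standardized magnitude, which is what
# makes them usable as standard candles.
```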

The upshot of these efforts is improved handling of systematic errors and improved constraints on the value of the dark energy equation of state with increasing redshift, although with greater uncertainty at very high redshifts. When combined with data from cosmic microwave background and baryon oscillation surveys, the “best fit cosmology” remains the so-called Lambda Cold Dark Matter model, or ΛCDM.

ΛCDM has become the standard model of our universe, which began with a big bang, underwent a brief period of inflation, and has continued to expand, although at first retarded by the mutual gravitational attraction of matter. As matter spread and grew less dense, dark energy overcame gravity, and expansion has been accelerating ever since.

To learn just what dark energy is, however, will first require scientists to capture many more supernovae at high redshifts and thoroughly study their light curves and spectra. This can’t be done with telescopes on the ground or even by heavily subscribed space telescopes. Learning the nature of what makes up three-quarters of the density of our universe will require a dedicated observatory in space.

Reference:
[1] R. Amanullah, C. Lidman, D. Rubin, G. Aldering, P. Astier, K. Barbary, M. S. Burns, A. Conley, K. S. Dawson, S. E. Deustua, M. Doi, S. Fabbro, L. Faccioli, H. K. Fakhouri, G. Folatelli, A. S. Fruchter, H. Furusawa, G. Garavini, G. Goldhaber, A. Goobar, D. E. Groom, I. Hook, D. A. Howell, N. Kashikawa, A. G. Kim, R. A. Knop, M. Kowalski, E. Linder, J. Meyers, T. Morokuma, S. Nobili, J. Nordin, P. E. Nugent, L. Ostman, R. Pain, N. Panagia, S. Perlmutter, J. Raux, P. Ruiz-Lapuente, A. L. Spadafora, M. Strovink, N. Suzuki, L. Wang, W. M. Wood-Vasey, N. Yasuda, "Spectra and Light Curves of Six Type Ia Supernovae at 0.511 < z < 1.12 and the Union2 Compilation", accepted for publication in the Astrophysical Journal, available at arXiv:1004.1711v1.

[The text of this report is written by Paul Preuss of Lawrence Berkeley National Laboratory]



Saturday, February 21, 2009

Accurate Measurement of Huge Pressures that Melt Diamond provides crucial data for Planetary Astrophysics and Nuclear Fusion

Marcus Knudson examines the focal point of his team's effort to characterize materials at extremely high pressures. The fortress-like box sitting atop its support will hold within it a so-called "flyer plate" that -- at speeds far faster than a rifle bullet -- will smash into multiple targets inserted in the two circular holes. An extensive network of tiny sensors and computers will reveal information on shock wave transmission, mass movement, plate velocity, and other factors. [Photo by: Randy Montoya]

In a recent paper in the journal 'Science' [1], researchers from Sandia National Laboratories (a multiprogram laboratory operated by Sandia Corporation for the U.S. Department of Energy’s National Nuclear Security Administration) reported measurements, ten times more accurate than before, of the enormous pressures needed to melt diamond first to slush and then to a completely liquid state.

Researchers Marcus Knudson, Mike Desjarlais, and Daniel Dolan discovered a triple point at which solid diamond, liquid carbon, and a long-theorized but never-before-confirmed state of solid carbon called bc8 were found to exist together.

The high-energy-density behavior of carbon has received much attention in recent times, mainly due to its relevance to planetary astrophysics. The outer planets Neptune and Uranus are thought to contain large quantities of carbon (as much as 10-15% of the total planetary mass). In Neptune, for example, much of the atmosphere is composed of methane (CH4). Under high pressure, methane decomposes, liberating its carbon. One question for astrophysicists modeling these planets is what form carbon takes in their interiors. At what precise pressure does simple carbon form diamond? Is the pressure eventually great enough to liquefy the diamond, or to form bc8, a solid with yet other characteristics?

“Liquid carbon is electrically conductive at these pressures, which means it affects the generation of magnetic fields,” says Desjarlais. “So, accurate knowledge of phases of carbon in planetary interiors makes a difference in computer models of the planet’s characteristics. Thus, better equations of state can help explain planetary magnetic fields that seem otherwise to have no reason to exist.”

Accurate knowledge of these changes of state are also essential to the effort to produce nuclear fusion at Lawrence Livermore National Laboratory’s National Ignition Facility (NIF) in California. In 2010, at NIF, 192 laser beams are expected to focus on isotopes of hydrogen contained in a little spherical shell that may be made of diamond. The idea is to bring enough heat and pressure to bear to evenly squeeze the shell, which serves as a containment capsule. The contraction is expected to fuse the nuclei of deuterium and tritium within.

The success of this reaction would give more information about the effects of a hydrogen bomb explosion, making it less likely the U.S. would need to resume nuclear weapons tests. It could also be a step in learning how to produce a contained fusion reaction that could produce electrical energy for humanity from seawater, the most abundant material on Earth.

For the reaction to work, the spherical capsule must compress evenly. But at the enormous pressures needed, will the diamond turn to slush, liquid, or even to the solid bc8? A mixture of solid and liquid would create uneven pressures on the isotopes, thwarting the fusion reaction, which to be effective must offer deuterium and tritium nuclei no room to escape.

That problem can be avoided if researchers know at what pressure point diamond turns completely liquid. One laser blast could bring the diamond to the edge of its ability to remain solid, and a second could pressure the diamond wall enough that it would immediately become all liquid, avoiding the slushy solid-liquid state. Or a more powerful laser blast could cause the solid diamond to jump past the messy triple point, and past the liquid and solid bc8 mixture, to enter a totally liquid state. This would keep the pressure on the nuclei being forced to fuse within the capsule uniform.

The mixed phase regions, says Dolan, are good ones to avoid for fusion researchers. The Sandia work provides essentially a roadmap showing where those ruts in the fusion road lie.

Sandia researchers achieved these results by dovetailing theoretical simulations with laboratory work. Simulation work led by Desjarlais used theory to establish the range of velocities at which projectiles, called flyer plates, should be sent to create the pressures needed to explore these high pressure phases of carbon and how the triple point would reveal itself in the shock velocities. The theory, called density functional theory, is a powerful method for solving Schrödinger’s equation for hundreds to thousands of atoms using today’s large computers.

[Image courtesy: Sandia National Laboratories] The solid and dotted lines in both graphs represent the same equation-of-state predictions for carbon by Sandia theorists. Jogs in the lines occur when the material changes state. Graph A's consistent red-diamond path, hugging the predicted lines, represents Z's laboratory results, confirming the theoretical predictions. The scattered data points of graph B represent lab results from various laser sites external to Sandia.

Using these results as guides, fifteen flyer-plate flights — powered by the extreme magnetic fields of Sandia’s Z machine — in experimental work led by Knudson then determined the change-of-state transition pressures more exactly than ever before. Even better, these pressures fell within the bounds set by theory, showing that the theory was accurate.

“These experiments are much more accurate than ones previously performed with laser beams [2,3],” says Knudson. “Our flyer plates, with precisely measured velocities, strike several large diamond samples, which enables very accurate shock wave velocity measurements.” Laser beam results, he says, are less accurate because they shock only very small quantities of material, and must rely on an extra step to infer the shock pressure and density.
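
As a rough illustration of the kind of inference involved -- not the Sandia analysis itself -- the Rankine-Hugoniot momentum jump condition relates the pressure behind a shock to the initial density and the measured shock and particle velocities, P = ρ0·Us·up. The sketch below uses the standard initial density of diamond but purely hypothetical velocities:

```python
# Minimal sketch (not the Sandia analysis): pressure behind a shock from
# the Rankine-Hugoniot momentum jump condition, P = rho0 * Us * up.
# The velocities below are hypothetical placeholders, not measured values.

rho0 = 3520.0       # initial density of diamond, kg/m^3 (~3.52 g/cm^3)
Us = 20.0e3         # shock velocity in the sample, m/s (hypothetical)
up = 7.0e3          # particle velocity behind the shock, m/s (hypothetical)

P = rho0 * Us * up  # pressure jump across the shock, Pa
print(f"shock pressure ~ {P / 1e9:.0f} GPa (~{P / 1e12:.2f} TPa)")
# ~490 GPa with these placeholders, i.e. the multi-hundred-GPa regime
# probed in these experiments.
```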

Sandia’s magnetically driven plates have a cross section of about 4 cm by 1.7 cm, are hundreds of microns thick, and impact three samples on each firing. The Z machine’s target diamonds are each about 1.9 carats, while laser experiments use about 1/100 of a carat.

“No, they’re not gemstones,” says Desjarlais about the Sandia targets. The diamonds in fact are created through industrial processes and have no commercial value, says Dolan, though their scientific value has been large!

References
[1] "Shock-Wave Exploration of the High-Pressure Phases of Carbon"
M. D. Knudson, M. P. Desjarlais, D. H. Dolan, Science, 322, 1822 - 1825 (2008).
Abstract.
[2] "Hugoniot measurement of diamond under laser shock compression up to 2 TPa"
H. Nagao, K. G. Nakamura, K. Kondo, N. Ozaki, K. Takamatsu, T. Ono, T. Shiota, D. Ichinose, K. A. Tanaka, K. Wakabayashi, K. Okada, M. Yoshida, M. Nakai, K. Nagai, K. Shigemori, T. Sakaiya, K. Otani, Physics of Plasmas, 13, 052705 (2006). Abstract.
[3] "Laser-shock compression of diamond and evidence of a negative-slope melting curve"
Stéphanie Brygoo, Emeric Henry, Paul Loubeyre, Jon Eggert, Michel Koenig, Bérénice Loupias, Alessandra Benuzzi-Mounaix & Marc Rabec Le Gloahec, Nature Materials 6, 274 - 277 (2007),
Abstract.

[We thank Media Relations, Sandia National Laboratories for materials used in this report]



Tuesday, November 20, 2007

Dwarf Galaxies and Dark Matter

Marla Geha

Today's issue of the Astrophysical Journal contains a paper by Joshua Simon of the Department of Astronomy, California Institute of Technology, and Marla Geha of the Herzberg Institute of Astrophysics, Victoria, Canada (currently at the Department of Astronomy, Yale University) reporting new observations that shed light on the "Missing Dwarf Galaxy" puzzle -- a discrepancy between the number of extremely small, faint galaxies that cosmological theories predict should exist near the Milky Way and the number that have actually been observed.

The "Cold Dark Matter" model, which explains the growth and evolution of the universe, predicts that large galaxies like the Milky Way should be surrounded by a swarm of up to several hundred smaller galaxies, known as "dwarf galaxies" because of their diminutive size. But until recently, only 11 such companions were known to be orbiting the Milky Way. To explain why the missing dwarfs were not seen, theorists suggested that although hundreds of the galaxies indeed may exist near the Milky Way, most have few, if any, stars. If so, they would be comprised almost entirely of dark matter which does not interact with electromagnetic waves and thus cannot be directly observed but has gravitational effects on ordinary atoms.

Joshua Simon

In the past two years, researchers have used images from the Sloan Digital Sky Survey to find as many as 12 additional very faint dwarf galaxies near the Milky Way. The new systems are unusually small, even compared to other dwarf galaxies; the least massive among them contain only 1% as many stars as the most minuscule galaxies previously known. "These new dwarf galaxies are fascinating systems, not only because of their major contribution to the Missing Dwarf problem, but also as individual galaxies," says Joshua Simon. "We had no idea that such small galaxies could even exist until these objects were discovered last year."

Marla Geha added, "We thought some of them might simply be globular star clusters, or that they could be the shredded remnants of ancient galaxies torn apart by the Milky Way long ago. To test these possibilities, we needed to measure their masses." Joshua and Marla used the DEIMOS spectrograph on the 10-meter Keck II telescope at the W. M. Keck Observatory in Hawaii to study 8 of the new galaxies. The Doppler effect -- a shift in the wavelength of the light coming from the galaxies caused by their motion with respect to the Earth -- was measured to determine the speeds of the stars in each dwarf galaxy, speeds that are set by the total mass of the galaxy.

They measured precise speeds of 18 to 214 stars in each galaxy, three times more stars per galaxy than any previous study. The speeds of the stars ranged from 4 to 7 km/s, much slower than the stellar velocities in any other known galaxy [for comparison, the Sun orbits the center of the Milky Way at about 220 km/s]. When the speeds were converted to masses, all these galaxies fell among the smallest ever measured, more than 10,000 times less massive than the Milky Way. Joshua and Marla conclude that the fierce ultraviolet radiation given off by the first stars, born just a few hundred million years after the Big Bang, may have blown away all of the hydrogen gas from dwarf galaxies that were also forming at that time. The loss of gas prevented the galaxies from creating new stars, leaving them very faint or, in many cases, completely dark. When this effect is included in theoretical models, the number of expected dwarf galaxies agrees with the number of observed dwarf galaxies.
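
To make the two steps above concrete, here is a minimal sketch, with made-up numbers rather than the survey's actual data, of how a Doppler wavelength shift becomes a line-of-sight velocity and how a stellar velocity dispersion becomes a rough dynamical mass via the simple estimate M ~ σ²R/G:

```python
# Minimal sketch with illustrative numbers (not the survey's data).
c = 299_792_458.0        # speed of light, m/s
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30         # solar mass, kg

# Step 1: Doppler shift -> line-of-sight velocity.
# Hypothetical example: a stellar absorption line with rest wavelength
# 854.2 nm (a Ca II triplet line) observed at 854.214 nm.
lam_rest = 854.2e-9
lam_obs = 854.214e-9
v_los = c * (lam_obs - lam_rest) / lam_rest
print(f"line-of-sight velocity ~ {v_los / 1e3:.1f} km/s")   # ~4.9 km/s

# Step 2: velocity dispersion -> rough dynamical mass, M ~ sigma^2 * R / G.
sigma = 5.0e3            # stellar velocity dispersion, m/s (~5 km/s, as in the text)
R = 100 * 3.086e16       # assumed radius of ~100 pc, in metres (hypothetical)
M_dyn = sigma**2 * R / G
print(f"dynamical mass ~ {M_dyn / M_sun:.1e} solar masses")
# ~6e5 solar masses -- vastly less than the Milky Way's ~1e12, consistent
# with "more than 10,000 times less massive".
```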

An image showing positions of these dwarf galaxies relative to Milky Way can be accessed here: http://www.keckobservatory.org/images/article_pictures/147_308.jpg

Although the Sloan Digital Sky Survey was successful in finding a dozen ultrafaint dwarfs, it covered only about 25% of the sky. Future surveys that scan the remainder of the sky are expected to discover as many as 50 additional dark matter-dominated dwarf galaxies orbiting the Milky Way. Telescopes for one such effort, the Pan-STARRS project on Maui, are now under construction.

"Explaining how stars form inside these remarkably tiny galaxies is difficult, and so it is hard to predict exactly how many star-containing dwarfs we should find near the Milky Way", says Joshua, "Our work narrows the gap between the Cold Dark Matter theory and observations by significantly increasing the number of Milky Way dwarf galaxies and telling us more about the properties of these galaxies."

Marla says, "One implication of our results is that up to a few hundred completely dark galaxies really should exist in the Milky Way's cosmic neighborhood. If the Cold Dark Matter model is correct, they have to be out there, and the next challenge for astronomers will be finding a way to detect their presence."

Reference:
"The Kinematics of the Ultra-faint Milky Way Satellites: Solving the Missing Satellite Problem" ,
Joshua D. Simon and Marla Geha,
The Astrophysical Journal, v670, p313-331 (2007 November 20),
Abstract

[We thank Caltech Media Relations for materials used in this posting]



Wednesday, July 25, 2007

"Changing Constants, Dark Energy and the Absorption of 21 cm Radiation" -- By Ben Wandelt

Ben Wandelt [Photo credit: Department of Physics, University of Illinois/Thompson-McClellan Photography]

Rishi Khatri and Ben Wandelt have recently proposed a new technique for testing the constancy of the fine structure constant across cosmic time scales using what may prove to be the ultimate astronomical resource for fundamental physics. In this invited article, Ben Wandelt explains the motivation for this work and the physical origin of this treasure trove of information.

Author: Ben Wandelt
Affiliation: Center for Theoretical Astrophysics, University of Illinois at Urbana-Champaign

What makes Constants of Nature so special? From a theorist's perspective constants are necessary evils that ought to be overcome. The Standard Model has 19 “fundamental constants,” and that is ignoring the non-zero neutrino masses, which bring the total count to a whopping 25. That's 25 numbers that need to be measured as input for the theory. A major motivating factor in the search for a fundamental theory beyond the Standard Model is to explain the values of these constants in terms of some underlying but as yet unseen structure.

What's more, not even the constancy of Constants of Nature (CoNs) is guaranteed (Uzan 2003). Maybe the quantities we think of as constants are actually dynamic but vary slowly enough that we haven't noticed. Or even if constant in time, these numbers may be varying across cosmic distances.

Quite contrary to taking the constancy of the CoNs for granted, one can argue that it is actually surprising. String theorists tell us these constants are related to the properties of the space spanned by the additional, small dimensions beyond our observed four (3 space + 1 time). These properties could well be dynamical (after all, we have known since Hubble that the 3 large dimensions are growing) so why aren't the 'constants' changing? This perspective places the onus on us to justify why the small sizes are at least approximately constant. So the modern, 21st century viewpoint is that it would be much more natural if the CoNs were not constant but varying—either spatially or with time.

By way of example, consider the cosmic density of dark energy. The 20th century view was in terms of “vacuum energy,” a property of empty space that is predicted by the Standard Model of particle physics. This is qualitatively compelling, but quantitatively catastrophically wrong. More recently, three main categories of attempts have emerged to explain that particular constant (and ignore the vacuum energy problem). The first category explains dark energy as some new and exotic form of matter. The second category of explanations sees the acceleration of the Universe as evidence that our understanding of Gravity is incomplete.

The third argues that the dark energy density is just another CoN, the “cosmological constant,” which appears as an additive term in Einstein's equations of general relativity and therefore increases the rate of the expansion of the Universe. This possibility was originally suggested by Einstein himself. While this is quite an economical way of modeling all currently observed effects of the universal acceleration it is also hugely unsatisfactory as an actual explanation—somewhat analogous to a boss explaining the size of your salary as “Because that's the number and that's it.” The attempt to turn this into an actual explanation through the pseudo-anthropic reasoning associated with string-theoretic landscape arguments corresponds to your boss adding “Because if you wanted to earn any more than that you wouldn't be here to ask me this question.”

If we consider the cosmic density of dark energy as another CoN that appears in Einstein's equation, it should also somehow arise from the underlying fundamental theory, like the other constants. By the identical argument we went through before we should in fact be surprised by its constancy. Hence most of the theoretical activity takes place within categories one and two, endowing this supposedly constant CoN with dynamical properties that can in principle be tested by observation.

Of course none of these aesthetic or theoretical arguments for what constitutes a satisfying explanation holds any water if it cannot be tested. And in fact, there are two sorts of tests: laboratory tests and astronomical observations. For definiteness, let's focus the discussion on a particular CoN, the most accurately measured CoN, the fine structure constant α. This number tells us the strength of the force that will act on an electric charge when it is placed in an electromagnetic field. If you have heard about the charge of the electron you have already encountered this constant in a slightly different form. Since charge has units (Coulomb), one could always redefine the units to change the value. So the relevant number is a dimensionless combination of the charge of the electron with other CoNs. This gives α ≈ 1/137.
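
For the record, the dimensionless combination is α = e²/(4πε₀ħc); a few lines of Python, using CODATA values, reproduce the famous number:

```python
import math

# alpha = e^2 / (4 * pi * epsilon_0 * hbar * c), a pure number.
e = 1.602176634e-19       # elementary charge, C
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
hbar = 1.054571817e-34    # reduced Planck constant, J*s
c = 299_792_458.0         # speed of light, m/s

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(f"alpha ~ {alpha:.9f} ~ 1/{1 / alpha:.3f}")   # ~0.007297353 ~ 1/137.036
```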

Over the years, the value of α has been measured in laboratory experiments to about 10 digits of accuracy. Using the extreme precision of atomic fountains, the value of α was measured over 5 years and found to have changed by less than 1 part in 10^15 per year [Marion et al. 2003].

Laboratory experiments do have their distinct advantages: the setup is under complete control and repeatable. However, they suffer from the very short lever arm of human time scales. Astronomical observations provide a much longer lever arm in time. The best current observations use quasar absorption lines and limit the variation to a similar accuracy when put in terms of yearly variation—but these measurements constrain variation over the last 12 billion years, the time it took the Universe to expand by a factor of 2.

In fact, using such quasar data, one group has claimed a detection of a change in α of 0.001% over the last 12 billion years [Webb et al. 2001]. This claim is certainly controversial [Chand et al. 2006], but things may become interesting at that level.
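
A quick back-of-the-envelope comparison (my arithmetic, not the authors') shows why that level is interesting: averaged over 12 billion years, a 0.001% change corresponds to a yearly drift comparable to the atomic-fountain bound quoted above.

```python
# Back-of-the-envelope: average yearly drift implied by the claimed quasar
# result versus the atomic-fountain laboratory bound.
delta_alpha_over_alpha = 1e-5   # 0.001% change claimed over cosmic time
lookback_years = 12e9           # ~12 billion years

avg_rate = delta_alpha_over_alpha / lookback_years
lab_bound = 1e-15               # lab limit: < ~1 part in 10^15 per year

print(f"quasar-implied average rate ~ {avg_rate:.1e} per year")
print(f"laboratory bound            ~ {lab_bound:.0e} per year")
# ~8e-16 vs ~1e-15 per year: comparable sensitivity, but over vastly
# different time baselines.
```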

My graduate student Rishi Khatri and I have discovered a new astronomical probe of the fine structure constant that is likely the ultimate astronomical resource of information for probing its time variation. Compared to the quasar data, our technique probes α at an even earlier epoch, only a few million years after the Big Bang, when the Universe went from 200 times smaller to 30 times smaller than it is today. And in principle, if some technological hurdles can be overcome, there is enough information to measure α to nine digits of accuracy 13.7 billion years in the past! This would be 10,000 times more sensitive than the best laboratory measurements.

What is this treasure trove of information? It arrives at the Earth in the form of long wavelength radio waves between 6 meters and 42 meters long. These radio waves started out their lives between 0.5 cm and 3 cm long, as part of the cosmic microwave background that was emitted when the hot plasma of the early Universe transformed into neutral hydrogen gas. As the Universe expands, these waves stretch proportionally. After about 7 million years, the ones with the longest initial wavelength are the first to stretch to a magic wavelength: 21 cm. At this wavelength these waves resonate with hydrogen atoms: they have just the right energy to be absorbed by its electron. Waves that are absorbed are removed from the cosmic microwave background and can be seen as an absorption line (similar to the well-known Fraunhofer lines in the solar spectrum). As the Universe expands during the next 120 million years, waves that were initially shorter stretch to 21 cm and are similarly absorbed by hydrogen. After this time, light from the first stars heats the hydrogen to the point that it can no longer absorb these waves [Loeb and Zaldarriaga 2004].
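
As a short check on the bookkeeping in that paragraph, a wave observed today at wavelength λ_obs resonated with hydrogen when the Universe was smaller by the factor λ_obs / 21 cm:

```python
# Redshift bookkeeping for the 21 cm absorption described above.
lam_21 = 0.21                      # rest wavelength of the hyperfine line, m

for lam_obs in (6.0, 42.0):        # observed wavelengths today, m
    stretch = lam_obs / lam_21     # = 1 + z of the absorbing hydrogen
    print(f"{lam_obs:4.0f} m today -> absorbed when the Universe was "
          f"~{stretch:.0f} times smaller (z ~ {stretch - 1:.0f})")
# 6 m  -> ~29 times smaller, 42 m -> ~200 times smaller, matching the
# "200 times smaller to 30 times smaller" range quoted above.
```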

It turns out that the amount of absorption is extremely sensitive to the value of α. Therefore, the spectrum of absorption lines we expect to see in the radio waves is an accurate record of the value of α during this epoch. We could even look for variations of α within this epoch, and check for spatial variations of α and other 'constants.' I argued above that these variations are expected on general grounds, but they are also predicted by specific string-theory inspired models for dark energy such as the chameleon model.

The tests we propose are uniquely promising for constraining fundamental physics models with astronomical observations. Important technological hurdles have to be overcome to realize measurements of the radio wave spectrum at the required level of accuracy. Still, the next time you see snow on your analog TV you might consider that some of what you see is due to long wavelength radio waves that have reached you from the early Universe, having traveled across the gulf of cosmic time, carrying within them a signature that may reveal the fundamental theory of Nature.

References
Chand H. et al. 2006, Astron. Astrophys. 451, 45.
Khatri, R. and Wandelt, B. D. 2007, Physical Review Letters 98, 111301. Abstract
Loeb, A. and Zaldarriaga, M. 2004, Physical Review Letters 92, 211301. Abstract
Marion, H. et al. 2003, Phys.Rev.Lett. 90, 150801. Abstract
Uzan, J.-P. 2003, Reviews of Modern Physics, vol. 75, 403. Abstract
Webb,J. K. et al. 2001, Physical Review Letters 87, 091301. Abstract
