

Sunday, July 25, 2010

Deepest All-Sky Surveys for Continuous Gravitational Waves

Holger J. Pletsch

[This is an invited article from Dr. Holger J. Pletsch who is the recipient of the 2009 GWIC (Gravitational Wave International Committee) Thesis Prize for his PhD thesis “Data Analysis for Continuous Gravitational Waves: Deepest All-Sky Surveys” (PDF). The thesis also received the 2009 Dieter Rampacher Prize of the Max Planck Society in Germany -- awarded to its youngest Ph.D. candidates usually between the ages of 25 and 27 for their outstanding doctoral work. -- 2Physics.com]

Author: Holger J. Pletsch
Affiliation:
Max-Planck-Institut für Gravitationsphysik (Albert-Einstein-Institut) and Leibniz Universität Hannover

Besides validating Einstein's theory of General Relativity, the direct detection of gravitational waves will also constitute an important new astronomical tool. Prime target sources of continuous gravitational waves (CW) for current Earth-based laser-interferometric detectors such as LIGO [1] are rapidly spinning compact objects, such as neutron stars with nonaxisymmetric deformations [2].

Especially promising are all-sky surveys for previously unknown CW emitters. As most neutron stars are electromagnetically invisible, gravitational-wave observations could reveal completely new populations of neutron stars, so a CW detection would be extremely valuable for neutron-star astrophysics. Even the null results of today's search efforts, which yield observational upper limits [3], already constrain the physics of neutron stars.

2Physics articles by past winners of the GWIC Thesis Prize:
Henning Vahlbruch (2008): "Squeezed Light – the first real application starts now"
Keisuke Goda (2007): "Beating the Quantum Limit in Gravitational Wave Detectors"
Yoichi Aso (2006): "Novel Low-Frequency Vibration Isolation Technique for Interferometric Gravitational Wave Detectors"
Rana Adhikari (2003-5)*: "Interferometric Detection of Gravitational Waves: 5 Needed Breakthroughs"
*Note, the gravitational wave thesis prize was started initially by LIGO as a biannual prize, limited to students of the LIGO Scientific Collaboration (LSC). The first award covered the period from 1 July 2003 to 30 June 2005. In 2006, the thesis prize was adopted by GWIC, renamed, converted to an annual prize, and opened to the broader international community.


The expected CW signals are extremely weak and deeply buried in the detector instrument noise. Thus, sensitive data-analysis methods are required to extract them. A powerful method is coherent matched filtering, where the signal-to-noise ratio (SNR) increases with the square root of the observation time. Hence, detection is a matter of observing long enough to accumulate sufficient SNR.
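To make the square-root scaling concrete, here is a minimal Python sketch (an illustration only, not actual LIGO analysis code; the frequency, amplitude, and sampling rate are invented). A sinusoidal template is correlated against simulated white noise containing a weak signal, and quadrupling the observation time doubles the recovered SNR.

```python
import numpy as np

rng = np.random.default_rng(0)

def coherent_snr(T, fs=100.0, f0=10.0, h0=0.05):
    """Toy matched filter: a sinusoid of amplitude h0 buried in
    unit-variance white noise, observed for T seconds at fs Hz."""
    t = np.arange(0.0, T, 1.0 / fs)
    data = h0 * np.sin(2 * np.pi * f0 * t) + rng.normal(size=t.size)
    template = np.sin(2 * np.pi * f0 * t)
    # Normalized so that pure noise gives SNR of order 1.
    return np.dot(data, template) / np.sqrt(0.5 * t.size)

for T in (100, 400, 1600):   # quadrupling T should double the SNR
    snr = np.mean([coherent_snr(T) for _ in range(20)])
    print(f"T = {T:5d} s:  mean SNR ~ {snr:.1f}")
```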

CW data analysis is further complicated by the fact that the detector's terrestrial location Doppler-modulates the amplitude and phase of the waveform as the Earth moves relative to the solar system barycenter (SSB). The parameters describing the signal's amplitude variation can be eliminated analytically by maximizing the coherent matched-filtering statistic. The remaining search parameters, which describe the signal's phase, are the source's sky location, frequency, and frequency derivatives. The resulting coherent detection statistic is commonly called the F-statistic [4].

However, what ultimately limits the sensitivity of all-sky surveys for unknown CW sources using the F-statistic is the finite computing power available. Such searches are computationally very expensive, because for maximum sensitivity one must convolve the full data set with many signal waveforms (templates) corresponding to all possible sources. But the number of templates required for a fully coherent F-statistic search increases as a high power of the coherent observation time. For a year of data, the computational cost to search a realistic range of parameter space exceeds the total computing power on Earth [4,5]. Thus a fully coherent search is limited to much shorter observation times.
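A rough sense of this scaling can be had from a back-of-the-envelope sketch; the exponent and the one-year template count below are assumptions chosen purely for illustration, since the real figures depend on the searched frequency band and spindown range.

```python
# Toy template counting: the number of templates for a fully coherent
# search grows as a high power of the coherent observation time T.
# Both the exponent (6) and the one-year normalization are invented.
for T_days in (1, 10, 60, 365):
    n_templates = 1e17 * (T_days / 365.0) ** 6
    print(f"coherent T = {T_days:3d} d  ->  ~{n_templates:9.2e} templates")
```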

Searching year-long data sets is accomplished with less costly hierarchical, so-called "semicoherent" methods [6,7]. The data are broken into segments much shorter than one year. Each segment is analyzed coherently, computing the F-statistic on a coarse grid of templates. Then the F-statistics from all segments (or statistics derived from F) are combined incoherently using a common fine grid of templates. Phase information between segments is discarded in this step, hence the term "semicoherent".
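A minimal sketch of this combination step, using the standard result from the CW literature that 2F for a template matching the signal is chi-squared distributed with 4 degrees of freedom, with a non-centrality set by the (here invented) per-segment signal strength:

```python
import numpy as np

rng = np.random.default_rng(1)
n_seg = 121   # segment count of the S5 comparison discussed below

# Per segment, 2F follows (in Gaussian noise) a chi-squared
# distribution with 4 degrees of freedom plus a non-centrality rho^2
# if a signal is present.  Phase between segments is discarded, so
# the semicoherent statistic simply sums the per-segment values.
noise_sum = rng.chisquare(df=4, size=n_seg).sum()
signal_sum = rng.noncentral_chisquare(df=4, nonc=4.0, size=n_seg).sum()
print(f"pure noise:  sum(2F) ~ {noise_sum:.0f}")
print(f"weak signal: sum(2F) ~ {signal_sum:.0f}")
```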

A central long-standing problem in these semicoherent methods was the design of, and the link between, the coarse and fine grids. Previous methods, while creative and clever, were arbitrary and ad hoc constructions. In the most recent work [8], the optimal solution for the incoherent combination step has been found. The key quantity is the mismatch: the fractional loss in expected F-statistic for a given signal at a nearby grid point. Taylor-expanding the mismatch locally (to quadratic order) in the coordinate differences defines a positive definite metric. Previous methods considered parameter correlations in the F-statistic only to linear order in the coherent integration time, discarding higher orders from the metric.
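In standard notation (a sketch of the usual definitions, not the full expressions of [8]), the mismatch m and its metric g are:

```latex
% Mismatch between a signal at phase parameters \lambda and a nearby
% template offset by \Delta\lambda, expanded to quadratic order:
m(\lambda,\Delta\lambda) \approx g_{ij}(\lambda)\,\Delta\lambda^{i}\,\Delta\lambda^{j},
\qquad
g_{ij}(\lambda) = \frac{1}{2}
\left.\frac{\partial^{2} m}{\partial\Delta\lambda^{i}\,\partial\Delta\lambda^{j}}\right|_{\Delta\lambda=0}.
```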

The F-statistic also has strong "global" (large-scale) correlations in the physical coordinates, extending beyond the region in which the mismatch is well approximated by the metric. Recent work [9] provided an improved understanding of these large-scale correlations. In particular, for realistic segment durations (a day or longer) it turns out to be crucial to keep the fractional loss in F to second order in the coherent integration time.

Exploiting these large-scale correlations in the coherent detection statistic F has led to a significantly improved semicoherent search technique for CW signals [8]. The new method is optimal if the semicoherent detection statistic is taken to be the sum of one coarse-grid F-statistic value from each data segment.

More precisely, the improved understanding of the large-scale correlations yields new coordinates on the phase parameter space. In these coordinates the first fully analytical metric for the incoherent combination step is obtained, accurately approximating the mismatch. Hence, for any given fine-grid point, the optimal (closest) coarse-grid point can be determined in each segment. The new method thus combines the coherent segment results much more efficiently than previous techniques, which did not use metric information beyond linear order in the coherent integration time.
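A toy sketch of this lookup step, with an invented two-dimensional metric and random coarse-grid templates standing in for the real phase-parameter grids:

```python
import numpy as np

def nearest_coarse_point(fine_point, coarse_grid, g):
    """Pick, for one segment, the coarse-grid template closest to a
    fine-grid point, with distance measured by the mismatch metric g.
    Coordinates and metric values here are toy assumptions."""
    d = coarse_grid - fine_point                  # template offsets
    mismatch = np.einsum('ni,ij,nj->n', d, g, d)  # quadratic form per row
    return coarse_grid[np.argmin(mismatch)]

g = np.array([[2.0, 0.3],      # invented 2x2 metric in, say,
              [0.3, 1.0]])     # frequency/spindown coordinates
coarse = np.random.default_rng(2).uniform(0.0, 1.0, size=(50, 2))
print(nearest_coarse_point(np.array([0.5, 0.5]), coarse, g))
```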

Fig.1: The Einstein@Home screensaver

The primary application area of this new technique is the volunteer distributed computing project Einstein@Home [10]. Members of the public can sign up their home or office computers (hosts) through the project web page and download a screensaver. When a host is idle, the screensaver is displayed (Fig. 1) while, in the background, the host automatically downloads small chunks of data from the servers, carries out the analysis, and reports back the results. More than 250,000 individuals have already contributed, and the computational power achieved (0.25 PFlop/s) is competitive with the world's largest supercomputers.

What improvement can be expected from the new search technique on Einstein@Home? Using Monte Carlo simulations, an implementation of the new method has been compared to the conventional Hough-transform technique [7] previously used by Einstein@Home. To provide a realistic comparison, the simulated data covered the same time intervals as the input data of a recent Einstein@Home search run that employed the conventional Hough technique. Those data, from LIGO's fifth science run (S5), comprised 121 data segments of 25-hour duration. False alarm probabilities were obtained from many simulated data sets with different realizations of stationary Gaussian white noise. To find the detection probabilities, CW signals with fixed gravitational-wave amplitude were added, with the other source parameters drawn randomly from uniform distributions.
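The following minimal Monte Carlo sketch shows the logic of such a comparison for the summed-2F statistic. It is a shortcut under stated assumptions: an idealized chi-squared model with an invented per-segment signal strength, rather than the fully simulated strain data of the actual study.

```python
import numpy as np

rng = np.random.default_rng(3)
n_seg, n_trials = 121, 20000   # 121 x 25 h segments, toy noise model

# Monte Carlo threshold and detection probability for the summed-2F
# statistic, at a 1% false alarm probability.
noise = rng.chisquare(4, size=(n_trials, n_seg)).sum(axis=1)
signal = rng.noncentral_chisquare(4, 1.0, size=(n_trials, n_seg)).sum(axis=1)

threshold = np.quantile(noise, 0.99)   # 1% of noise trials exceed this
p_det = np.mean(signal > threshold)
print(f"threshold sum(2F) = {threshold:.0f}, detection prob = {p_det:.2f}")
```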

Fig.2: Performance demonstration of the new search method. Left panel: receiver operating characteristic curves for fixed source strain amplitude. Right panel: detection probability as a function of source strain amplitude, at 1% false alarm probability.

The results of this comparison are illustrated in Fig. 2. The right panel shows the detection efficiencies for different values of the source gravitational-wave amplitude (strain) at a fixed 1% false alarm probability. The new method was applied in two modes of operation: in the first, the F-statistics were simply summed across segments; in the second, only ones or zeros were summed (number counts), depending on whether F exceeded a predefined threshold in a given segment. In both modes, the new technique performs significantly better than the conventional Hough method. For instance, 90% detection probability is obtained with the new method (in number-count mode) for a source strain amplitude about 6 times smaller than that needed by the Hough method (which is also based on number counts): the "distance reach" of the new technique is thus about 6 times larger. Since the "visible" spatial volume increases as the cube of the distance, this increases the number of potentially detectable sources by more than 2 orders of magnitude, as illustrated in Fig. 3.
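This last step is simple arithmetic:

```python
import math

reach_gain = 6                  # distance-reach improvement (Fig. 2)
volume_gain = reach_gain ** 3   # surveyed volume ~ distance cubed
print(f"volume gain: {volume_gain}x "
      f"(~10^{math.log10(volume_gain):.1f} more detectable sources)")
```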

Fig.3: Artist’s illustration of increased "visible" spatial volume due to the novel search technique.

The current Einstein@Home search run [10] deploys this new technique for the first time, analyzing about two years of LIGO's most sensitive S5 data. The combination of a better search technique with more, and more sensitive, data greatly increases the chance of making the first gravitational-wave detection of a CW source. In the long term, the detection of CW signals will provide new means to discover and locate neutron stars, and will eventually yield unique insights into the nature of matter at high densities.

References:
[1] B. Abbott et al. (LIGO Scientific Collaboration), "LIGO: the Laser Interferometer Gravitational-wave Observatory", Rep. Prog. Phys. 72, 076901 (2009), Abstract.
[2] R. Prix (for the LIGO Scientific Collaboration), in "Neutron Stars and Pulsars", Springer (2009).
[3] B. Abbott et al. (LIGO Scientific Collaboration), "Beating the spin-down limit on gravitational wave emission from the Crab pulsar", Astrophys. J. Lett. 683, L45 (2008), Abstract; B. Abbott et al. (LIGO Scientific Collaboration), "All-Sky LIGO Search for Periodic Gravitational Waves in the Early Fifth-Science-Run Data", Phys. Rev. Lett. 102, 111102 (2009), Abstract; B. Abbott et al. (LIGO Scientific Collaboration), "Einstein@Home search for periodic gravitational waves in early S5 LIGO data", Phys. Rev. D 80, 042003 (2009), Abstract.
[4] P. Jaranowski, A. Królak and B. F. Schutz, "Data analysis of gravitational-wave signals from spinning neutron stars: The signal and its detection", Phys. Rev. D 58, 063001 (1998), Abstract; P. Jaranowski and A. Królak, "Gravitational-Wave Data Analysis. Formalism and Sample Applications: The Gaussian Case", Living Reviews in Relativity, 8 (2005), Link.
[5] P. R. Brady, T. Creighton, C. Cutler and B. F. Schutz, "Searching for periodic sources with LIGO", Phys. Rev. D 57, 2101 (1998), Abstract.
[6] P. R. Brady and T. Creighton, "Searching for periodic sources with LIGO. II. Hierarchical searches", Phys. Rev. D 61, 082001 (2000), Abstract.
[7] B. Krishnan, A. M. Sintes, M. A. Papa, B. F. Schutz, S. Frasca and C. Palomba, "Hough transform search for continuous gravitational waves", Phys. Rev. D 70, 082001 (2004), Abstract.
[8] H. J. Pletsch and B. Allen, "Exploiting Large-Scale Correlations to Detect Continuous Gravitational Waves", Phys. Rev. Lett. 103, 181102 (2009), Abstract.
[9] H. J. Pletsch, "Parameter-space correlations of the optimal statistic for continuous gravitational-wave detection", Phys. Rev. D 78, 102005 (2008), Abstract.
[10] Einstein@Home: http://einstein.phys.uwm.edu/.



Sunday, July 18, 2010

Weighty Matters for Particle Physics

The HPQCD collaboration (from left to right): Eduardo Follana, Greg Millar, Ian Allison, Craig McNeile, Emel Gulez, Junko Shigemitsu, Peter Lepage, Elvira Gamiz, Howard Trottier, Ron Horgan, Kent Hornbostel, Christine Davies, Iain Kendall, Eric Gregory.

[This is an invited article based on a recently published work of the High Precision Quantum Chromodynamics (HPQCD) collaboration. -- 2Physics.com]

Author: Christine Davies
Affiliation: Department of Physics and Astronomy,
University of Glasgow, UK

Link to HPQCD Collaboration >>

Particle physicists at the Fermilab Tevatron and at the CERN Large Hadron Collider are engaged in an exciting race to be the first to discover the Higgs particle, a 'smoking gun' remnant of the mechanism that we believe gives mass to the other fundamental particles. Particles interact with the Higgs field pervading all of space and, rather like objects moving through molasses, gain mass as a result.

Meanwhile an important question is: what are these masses? A recent paper [1] by the High Precision QCD (HPQCD) collaboration answers this question accurately for up, down and strange quarks for the first time.

The masses of the electron and its cousins, the muon and tau, are very well known, since these particles can be studied through the clear tracks they leave in particle detectors.

The masses of the quarks are much less well determined. The reason is that the strong-force interactions never allow quarks to be seen as free particles. Only their bound states, called hadrons (of which the proton is an example), can be produced and studied in particle-physics experiments. The quark masses must instead be inferred by matching experimental results for the masses of hadrons to those obtained from theoretical calculations in the theory of the strong force, Quantum Chromodynamics (QCD). The quark mass is a parameter of the theory, so matching theory and experiment allows it to be determined.

For many years this could be done only rather approximately, particularly for the lightest up, down and strange quarks. As Figure 1 (the history of the strange quark mass) shows, improvements have been very slow. Recently, however, a technique known as lattice QCD has enabled theorists to calculate the masses of some hadrons very accurately and establish mastery over QCD at last [2].

Fig.1: History of the strange quark mass. This figure shows the new result compared to earlier evaluations of the strange quark mass in the Particle Data Tables. The mass is given in units of MeV/c²; for comparison, the proton mass is 938 MeV/c².

The High Precision QCD (HPQCD) Collaboration has now determined the mass of the strange quark to an accuracy of better than 2%, which improves on the evaluation of previous results given in the Particle Data Tables [3] by a factor of 10.

The technique used by HPQCD was to determine the ratio of the mass of the charm quark to that of the strange quark. This can be done more accurately than determining the strange quark mass on its own, and it is what yields the breakthrough in precision. Determining this ratio had not been possible before, because previous methods had large systematic errors for the relatively heavy charm quark; HPQCD have now been able to overcome these.

Because the charm mass is already known to 1% from several calculations, including an earlier one by HPQCD and others [4], this ratio then allows an accurate determination of the strange quark mass. Similarly, a determination of the ratio of the strange quark mass to that of the up and down quarks, provided by the MILC collaboration [5], allows HPQCD to cascade the accuracy achieved for the charm quark mass down to all of the light quarks.
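The cascade itself is simple arithmetic with error propagation. In the Python sketch below, the central values and uncertainties are rounded stand-ins close to the published numbers, used only to show how the ratios combine; see [1], [4], [5] for the real figures.

```python
import math

# Rounded stand-in inputs (not the published values):
m_c = (0.986, 0.006)    # charm mass in GeV [4]
r_cs = (11.85, 0.16)    # m_c / m_s ratio [1]
r_sud = (27.4, 0.4)     # m_s / m_ud (up/down average) ratio [5]

def divide(a, b):
    """a/b with fractional errors combined in quadrature (assumes
    independent uncertainties)."""
    val = a[0] / b[0]
    return val, val * math.hypot(a[1] / a[0], b[1] / b[0])

m_s = divide(m_c, r_cs)      # strange quark mass cascades from charm
m_ud = divide(m_s, r_sud)    # then down to the up/down average
print(f"m_s  = {1e3 * m_s[0]:.1f} +/- {1e3 * m_s[1]:.1f} MeV")
print(f"m_ud = {1e3 * m_ud[0]:.2f} +/- {1e3 * m_ud[1]:.2f} MeV")
```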

Fig. 2: Summary of quark mass values from this paper: a comparison of the new lattice QCD results for the masses of the up, down and strange quarks (from this paper) and the charm quark (from an earlier paper) to the current evaluations in the Particle Data Tables.

With this improvement in the masses of the light quarks shown in Figure 2, we now have values for the masses of all 6 quarks at the level of a few percent and a much clearer and more complete picture of what the Higgs particle has done for the quarks.

References
[1] C. T. H. Davies, C. McNeile, K. Y. Wong, E. Follana, R. Horgan, K. Hornbostel, G. P. Lepage, J. Shigemitsu, and H. Trottier (HPQCD Collaboration), "Precise Charm to Strange Mass Ratio and Light Quark Masses from Full Lattice QCD", Phys. Rev. Lett. 104, 132003 (2010). Abstract.
[2] C. Davies, "Colourful calculations", Physics World 19 (12), 20 (2006).
[3] Particle Data Group, http://pdg.lbl.gov/
[4] I. Allison, E. Dalgic, C. T. H. Davies, E. Follana, R. R. Horgan, K. Hornbostel, G. P. Lepage, C. McNeile, J. Shigemitsu, H. Trottier, R. M. Woloshyn, K. G. Chetyrkin, J. H. Kühn, M. Steinhauser, and C. Sturm (HPQCD Collaboration), "High-precision charm-quark mass and QCD coupling from current-current correlators in lattice and continuum QCD", Phys. Rev. D 78, 054513 (2008). Abstract; K. G. Chetyrkin, J. H. Kühn, A. Maier, P. Maierhöfer, P. Marquard, M. Steinhauser, and C. Sturm, "Charm and bottom quark masses: An update", Phys. Rev. D 80, 074010 (2009). Abstract.
[5] C. Aubin, C. Bernard, C. DeTar, J. Osborn, Steven Gottlieb, E. B. Gregory, D. Toussaint, U. M. Heller, J. E. Hetrick, and R. Sugar (MILC Collaboration), "Light pseudoscalar decay constants, quark masses, and low energy constants from three-flavor lattice QCD", Phys. Rev. D 70, 114501 (2004). Abstract.



Sunday, July 11, 2010

Unpeeling Atoms and Molecules from the Inside Out

Nora Berrah [photo courtesy: Western Michigan University]

The first published scientific results from the world's most powerful hard X-ray laser, the Linac Coherent Light Source (LCLS) located at the Department of Energy's SLAC National Accelerator Laboratory, show its unique ability to control the behavior of individual electrons within simple atoms and molecules by stripping them away one by one, in some cases creating hollow atoms.

These early results describe in great detail how the Linac Coherent Light Source's intense pulses of X-ray light change the very atoms and molecules they are designed to image. Controlling those changes will be critical to achieving the atomic-scale images of biological molecules and movies of chemical processes that the LCLS is designed to produce.

In a report published June 22 in Physical Review Letters [1], a team led by physicist Nora Berrah of Western Michigan University—the third group to conduct experiments at the LCLS—describes the first experiments on molecules. Her group also created hollow atoms, in this case within molecules of nitrogen gas, and found surprising differences in the way short and long laser pulses of exactly the same energies stripped and damaged the nitrogen molecules.

"We just introduced molecules into the chamber and looked at what was coming out there, and we found surprising new science," said Matthias Hoener, a postdoctoral researcher in Berrah's group at WMU and visiting scientist at Lawrence Berkeley National Laboratory who was first author of the paper. "Now we know that by reducing the pulse length, the interaction with the molecule becomes less violent. "

Linda Young [photo courtesy: Argonne National Laboratory]

In another report published in the July 1 issue of Nature [2], a team led by Argonne National Laboratory physicist Linda Young describes how they were able to tune LCLS pulses to selectively strip electrons, one by one, from atoms of neon gas. By varying the photon energies of the pulses, they could do it from the outside in or—a more difficult task—from the inside out, creating so-called "hollow atoms."

"Until very recently, few believed that a free-electron X-ray laser was even possible in principle, let alone capable of being used with this precision," said William Brinkman, director of DOE's Office of Science. "That's what makes these results so exciting."

Young, who led the first experiments in October with collaborators from SLAC and five other institutions, said, "No one has ever had access to X-rays of this intensity, so the way in which ultra-intense X-rays interact with matter was completely unknown. It was important to establish these basic interaction mechanisms."

SLAC's Joachim Stöhr, director of the LCLS, said, "When we thought of the first experiments with LCLS ten years ago, we envisioned that the LCLS beam may actually be powerful enough to create hollow atoms, but at that time it was only a dream. The dream has now become reality."

The world's first hard X-ray free-electron laser started operation with a bang. First experiments at SLAC National Accelerator Laboratory's Linac Coherent Light Source stripped electrons one by one from neon atoms (illustrated above) and nitrogen molecules, in some cases removing only the innermost electrons to create "hollow atoms." Understanding how the machine's ultra-bright X-ray pulses interact with matter will be critical for making clear, atomic-scale images of biological molecules and movies of chemical processes. (Artwork by Gregory Stewart, SLAC)

While the first experiments were designed to see what the LCLS can do and how its ultra-fast, ultra-bright pulses interact with atoms and molecules, they also pave the way for more complex experiments to come. Its unique capabilities make the LCLS a powerful tool for research in a wide range of fields, including physics, chemistry, biology, materials and energy sciences.

The LCLS forms images by scattering X-ray light off an atom, molecule or larger sample of material. Yet when the LCLS X-rays are tightly focused by mirrors, each powerful laser pulse destroys any sample it hits. Since certain types of damage, like the melting of a solid, are not instantaneous and only develop with time, the trick is to minimize the damage during the pulse itself and record the X-ray snapshot with a camera before the sample disintegrates.

Both teams found that the shorter the laser pulse, the fewer electrons are stripped away from the atom or molecule and the less damage is done. And both delved into the detailed mechanisms behind that damage.

Atoms are a little like miniature solar systems, with their electrons orbiting at various distances from the nucleus in a sort of quantum fuzz. To make things simpler, scientists describe the electrons as orbiting in "shells" at specific distances from the nucleus. The innermost shell contains up to two electrons, the next one up to eight, the third one up to 18, and so on.

Since they're closest to the positively charged nucleus, the two innermost electrons are generally the hardest to wrest away. But they also most readily absorb photons of X-ray light, and so are the most vulnerable to getting stripped away by intense X-rays.

Although previous experiments with intense optical lasers had stripped neon atoms of most of their electrons, Young's was the first to discover how ultra-intense X-ray lasers do this. At low photon energies, the outer electrons are removed, leaving the inner electrons untouched. However, at higher photon energies, the inner electrons are the first to be ejected; then the outer electrons cascade into the empty inner core, only to be kicked out by later parts of the same X-ray pulse. Even within the span of a single pulse there may be times when both inner electrons are missing, creating a hollow atom that is transparent to X-rays, Young said.

"This transparency associated with hollow atoms could be a useful property for future imaging experiments, because it decreases the fraction of photons doing damage and allows a higher percentage of photons to scatter off the atom and create the image," Young said. She said application of this phenomenon will also allow researchers to control how deeply an intense X-ray pulse penetrates into a sample.

Berrah's team bombarded puffs of nitrogen gas with laser pulses that ranged in duration from about four femtoseconds, or quadrillionths of a second, to 280 femtoseconds. No matter how short or long it was, though, each pulse contained the same amount of energy in the form of X-ray light; so you might expect that they would have roughly the same effects on the nitrogen molecules.

But to the team's surprise, that was not the case, Hoener said. The long pulses stripped every single electron from the nitrogen molecules, starting with the ones closest to the nucleus; the short ones stripped off only some of them.

Their report attributes this to the "frustrated absorption effect": Since the molecule's electrons are preferentially stripped from the innermost shells, there is simply not enough time during a short pulse for the molecule's outermost electrons to refill the innermost shells and get kicked out in turn.
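A cartoon rate-equation model captures this logic, if not the real atomic physics: X-rays empty the inner shell quickly, refilling from the outer shells takes a finite time, and a short pulse ends before many refill cycles can complete. All rates in the Python sketch below are invented purely for illustration.

```python
def electrons_stripped(pulse_fs, k_ionize=0.2, k_refill=0.05, n_e=14):
    """Cartoon rate equations for 'frustrated absorption' in N2
    (14 electrons).  X-rays eject only inner-shell electrons; outer
    electrons refill the (up to 2) inner-shell holes at a finite
    rate.  All rates (per femtosecond) are invented."""
    inner, outer, stripped, dt = 2.0, n_e - 2.0, 0.0, 0.1
    for _ in range(int(pulse_fs / dt)):
        eject = k_ionize * inner * dt           # inner-shell ionization
        inner -= eject
        stripped += eject
        refill = min(outer, k_refill * (2.0 - inner) * dt)
        inner += refill                         # outer e- cascade down
        outer -= refill
    return stripped

for pulse in (4, 80, 280):                      # as in the experiment
    print(f"{pulse:3d} fs pulse -> ~{electrons_stripped(pulse):4.1f} e- removed")
```

In this toy model the long pulse eventually strips all 14 electrons while the shortest pulse removes only one or two, mirroring the qualitative trend the team reported.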

With all this activity going on inside the atom, scientists have a new way to explore atomic structure and dynamics. Further experiments have investigated nanoclusters of atoms, protein nanocrystals and even individual viruses, with results expected to be published in coming months.

References
[1] M. Hoener, L. Fang, O. Kornilov, O. Gessner, S. T. Pratt, M. Gühr, E. P. Kanter, C. Blaga, C. Bostedt, J. D. Bozek, P. H. Bucksbaum, C. Buth, M. Chen, R. Coffee, J. Cryan, L. DiMauro, M. Glownia, E. Hosler, E. Kukk, S. R. Leone, B. McFarland, M. Messerschmidt, B. Murphy, V. Petrovic, D. Rolles, and N. Berrah, "Ultraintense X-Ray Induced Ionization, Dissociation, and Frustrated Absorption in Molecular Nitrogen", Phys. Rev. Lett. 104, 253002 (2010). Abstract.
[2] L. Young, E. P. Kanter, B. Krässig, Y. Li, A. M. March, S. T. Pratt, R. Santra, S. H. Southworth, N. Rohringer, L. F. DiMauro, G. Doumy, C. A. Roedig, N. Berrah, L. Fang, M. Hoener, P. H. Bucksbaum, J. P. Cryan, S. Ghimire, J. M. Glownia, D. A. Reis, J. D. Bozek, C. Bostedt, and M. Messerschmidt, "Femtosecond electronic response of atoms to ultra-intense X-rays", Nature 466, 56-61 (2010). Abstract.



Sunday, July 04, 2010

Testing the Spin-Statistics Theorem

The image shows UC Berkeley physicists Dima Budker and Damon English. Dima Budker (left) is in a fermionic state, occupied by only one of himself. Many copies of the bosonic Damon English (right) occupy the same state at once. [image credit: Roy Kaltschmidt and Damon English / University of California, Berkeley and Lawrence Berkeley National Laboratory]

The best theory for explaining the subatomic world got its start in 1928 when theorist Paul Dirac combined quantum mechanics with special relativity to explain the behavior of the electron. The result was relativistic quantum mechanics, which became a major ingredient in quantum field theory. With a few assumptions and ad hoc adjustments, quantum field theory has proven powerful enough to form the basis of the Standard Model of particles and forces.

“Even so, it should be remembered that the Standard Model is not a final theory of all phenomena, and is therefore inherently incomplete,” says Dmitry Budker, a staff scientist in the Nuclear Science Division of the U.S. Department of Energy’s Lawrence Berkeley National Laboratory and a professor of physics at the University of California at Berkeley.

Budker has long been interested in testing widely accepted underpinnings of physical theory to their limits. In the June 25 issue of Physical Review Letters [1], he and his colleagues report the most rigorous trials yet of a fundamental assumption about how particles behave on the atomic scale.

Why we need the spin-statistics theorem

“We tested one of the major theoretical pillars of quantum field theory, the spin-statistics theorem,” says Damon English, Budker’s former student and a postdoctoral fellow in UC’s Department of Physics, who led the experiment. “Essentially we were asking, are photons really perfect bosons?”

The spin-statistics theorem dictates that all fundamental particles must be classified into one of two types, fermions or bosons. (The names come from the statistics, Fermi-Dirac statistics and Bose-Einstein statistics, that explain their respective behaviors.)

No two fermions can occupy the same quantum state: for example, no two electrons in an atom can have identical sets of quantum numbers. Any number of bosons, however, can occupy the same quantum state; among other phenomena, this is what makes laser beams possible.

Electrons, neutrons, protons, and many other particles of matter are fermions. Bosons are a decidedly mixed bunch that includes the photons of electromagnetic force, the W and Z bosons of the weak force, and such matter particles as deuterium nuclei, pi mesons, and a raft of others. Given the pandemonium in this particle zoo, it takes the spin-statistics theorem to tell what’s a fermion and what’s a boson.

The way to tell them apart is by their spin – not the classical spin of a whirling top but intrinsic angular momentum, a quantum concept. Quantum spin is either integer (0, 1, 2…) or half integer, an odd number of halves (1/2, 3/2…). Bosons have integer spin. Fermions have half integer spin.
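The rule is simple enough to state in a few lines of Python (the spins below are standard textbook values):

```python
def statistics(spin):
    """Classify by the spin-statistics theorem; spin is in units of
    hbar and is either an integer or a half-integer."""
    return "boson" if (2 * spin) % 2 == 0 else "fermion"

particles = [("photon", 1), ("electron", 0.5), ("deuteron", 1),
             ("pi meson", 0), ("Delta baryon", 1.5)]
for name, spin in particles:
    print(f"{name:12s} spin {spin:>3}: {statistics(spin)}")
```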

“There’s a mathematical proof of the spin-statistics theorem, but it’s so abstruse you have to be a professional quantum field theorist to understand it,” says Budker. “Every attempt to find a simple explanation has failed, even by scientists as distinguished as Richard Feynman. The proof itself is based on assumptions, some explicit, some subtle. That’s why experimental tests are essential.”

Says English, “If we were to knock down the spin-statistics theorem, the whole edifice of quantum field theory would come crashing down with it. The consequences would be far-reaching, affecting our assumptions about the structure of spacetime and even causality itself.”

In search of forbidden transitions

English and Budker, working with Valeriy Yashchuk, a staff scientist at Berkeley Lab’s Advanced Light Source, set out to test the theorem by using laser beams to excite the electrons in barium atoms. For experimenters, barium atoms have particularly convenient two-photon transitions, in which two photons are absorbed simultaneously and together contribute to lifting an atom’s electrons to a higher energy state.

“Two-photon transitions aren’t rare,” says English, “but what makes them different from single-photon transitions is that there can be two possible paths to the final excited state – two paths that differ by the order in which the photons are absorbed during the transition. These paths can interfere, destructively or constructively. One of the factors that determines whether the interference is constructive or destructive is whether photons are bosons or fermions.”

In the particular barium two-photon transition the researchers used, the spin-statistics theorem forbids the transition when the two photons have the same wavelength. These forbidden two-photon transitions are allowed by every known conservation law except the spin-statistics theorem. What English, Yashchuk, and Budker were looking for were exceptions to this rule, or as English puts it, “bosons acting like fermions.”
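A schematic amplitude calculation shows why perfect bosons cannot drive this transition. The amplitude value in the Python sketch below is invented; only its sign under photon exchange matters, and the antisymmetry assumed here is what the spin-statistics theorem dictates for this particular transition.

```python
# Toy two-path amplitude for the forbidden two-photon transition with
# equal-frequency photons.  The two absorption orderings contribute
# amplitudes that are antisymmetric under photon exchange, so for
# perfect bosons (symmetric combination, s = +1) they cancel exactly.
A_12 = 1.0 + 0.3j    # invented amplitude: "beam-1 photon absorbed first"
A_21 = -A_12         # exchange antisymmetry for this transition

for s, label in [(+1, "bosonic photons"), (-1, "fermionic photons")]:
    rate = abs(A_12 + s * A_21) ** 2
    print(f"{label:17s}: relative rate = {rate:.2f}")
```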

Two opposed laser beams, identical except for polarization, attempt to excite forbidden two-photon transitions in a beam of barium atoms. [image credit: Damon English]

The experiment starts with a stream of barium atoms; two lasers are aimed at it from opposite sides to prevent unwanted effects associated with atomic recoil. The lasers are tuned to the same frequency but have opposite polarization, which is necessary to preserve angular momentum. If forbidden transitions were caused by two same-wavelength photons from the two lasers, they would be detected when the atoms emit a particular color of fluorescent light.

The researchers carefully and repeatedly tuned through the region where forbidden two-photon transitions, if any were to occur, would reveal themselves. They detected nothing. These stringent results limit the probability that any two photons could violate the spin-statistics theorem: the chances that two photons are in a fermionic state are no better than one in a hundred billion – by far the most sensitive test yet at low energies, which may well be more sensitive than similar evidence from high-energy particle colliders.

Budker emphasizes that this was “a true table-top experiment, able to make significant discoveries in particle physics without spending billions of dollars.” Its prototype was originally devised by Budker and David DeMille, now at Yale, who in 1999 were able to severely limit the probability of photons being in a “wrong” (fermionic) state. The latest experiment, conducted at UC Berkeley, uses a more refined method and improves on the earlier result by more than three orders of magnitude.

“We keep looking, because experimental tests at ever increasing sensitivity are motivated by the fundamental importance of quantum statistics,” says Budker. “The spin-statistics connection is one of the most basic assumptions in our understanding of the fundamental laws of nature.”

References
[1] Damon English, Valeriy Yashchuk, and Dmitry Budker, "Spectroscopic test of Bose-Einstein statistics for photons", Phys. Rev. Lett. 104, 253604 (2010). Abstract. arXiv:1001.1771.
[2] M. G. Kozlov, Damon English, and Dmitry Budker, "Symmetry-suppressed two-photon transitions induced by hyperfine interactions and magnetic fields", Phys. Rev. A 80, 042504 (2009). Abstract. arXiv:0907.3727.

[This report is written by Paul Preuss of Lawrence Berkeley National Laboratory]
