
2Physics


Saturday, November 21, 2009

Testing the Foundation of Special Relativity

The Düsseldorf team in the laboratory. From left to right: Ch. Eisele, A. Yu. Nevsky and S. Schiller.

[This is an invited article based on recent work of the author and his collaborators -- 2Physics.com]

Author: Ch. Eisele

Affiliation: Institute for Experimental Physics, Heinrich-Heine-Universität Düsseldorf, Germany

Contact: christian.eisele@uni-duesseldorf.de

Since Albert Einstein developed the theory of special relativity (TSR) during his annus mirabilis in 1905 [1], we have known that all physical laws should be formulated so as to be invariant under a special class of transformations, the so-called Lorentz transformations. This "Lorentz Invariance" (LI) follows from two postulates, which are general but experimentally testable statements.

The first postulate is Einstein's principle of relativity. It states that the laws describing the change of physical states do not depend on the choice of the inertial coordinate system used in the description. In this sense, all inertial coordinate systems (i.e. systems moving with constant velocity relative to each other) are equivalent.

The second postulate states the universality of the speed of light: in every inertial system light propagates with the same velocity, and this velocity does not depend on the state of motion of the source or on the state of motion of the observer. We may say that the observer’s clocks and rulers behave in such a way that (s)he cannot determine any change in the value of the velocity.

Fig. 1 : General Relativity, as well as the Standard Model, assume the validity of (local) Lorentz Invariance. Both theories might be only low-energy-limits of a more fundamental theory unifying all fundamental forces and exhibiting tiny violations of Lorentz Invariance.

Today, the laws of Special Relativity lie, either as a local or a global symmetry, at the very basis of our accepted theories of the fundamental forces (see fig.1): the theory of General Relativity and the Standard Model of the electroweak and strong interactions. Physicists are trying hard to develop a single theory, a Grand Unified Theory (GUT), that describes all the fundamental interactions in a common way, i.e. including a quantum description of gravitation. Over the last few decades candidate theories for a unification of the fundamental forces have been developed, e.g. string theories and loop quantum gravity. These do not rule out the possibility of Lorentz Invariance violations; Lorentz Invariance may thus be only an approximate symmetry of nature. Such theoretical developments stimulate experimentalists to test the basic principles on which the theory of Special Relativity is built with the highest precision allowed by technology.

To test Lorentz Invariance (LI) one needs a theory, which can be used to interpret experimental results with respect to the validity of LI, a so-called test theory. Two test theories are commonly used, the Robertson-Mansouri-Sexl (RMS) theory [2-5], a kinematical framework dealing with generalized transformation rules, and the Standard Model Extension (SME) [6], a dynamical framework based on a modified Lagrangian of the Standard Model with additional couplings.

Within the RMS framework possible effects of Lorentz Invariance violation are a modified time dilation factor, a dependency of the speed of light on the velocity of the source or the observer, and an anisotropy of the speed of light. Three experiments are sufficient to validate Lorentz Invariance within this model: experiments of the Ives-Stilwell type [7], the Kennedy-Thorndike type [8], and the Michelson-Morley type [9].

The SME, in contrast, allows in principle hundreds of new effects, and for an interpretation of a single experiment one often has to restrict oneself to certain sectors of the theory. For the so-called minimal QED sector of the theory possible effects are, e.g., birefringence of the vacuum, a dispersive vacuum and an anisotropy of the speed of light.

Upper limits on a possible birefringence or a possible dispersive character of the vacuum can be derived from astrophysical observations of the light of very distant galaxies [e.g. 10,11,12] or of the light emitted during supernova explosions. These are extremely sensitive tests. However, one has to rely on the light made available by sources in the universe and to make certain assumptions about those sources.

Laboratory experiments, on the other hand, allow for a very good control of the experimental circumstances. Experiments include, e.g., measurements of the anomalous g-factor of the electron [13] or measurements of the time dilation factor using fast beams of 7Li+ ions [14]. Even data of the global positioning system (GPS) can be used to test Lorentz invariance [15].

Recently, at the Universität Düsseldorf, we have performed a test of the isotropy of the speed of light with a Michelson-Morley type experiment [16]. The main component of the setup is a block of ultra-low-expansion-coefficient glass (ULE), in which two optical resonators with high finesse (F = 180 000) are embedded at an angle of 90° (see fig.2). The resonance frequencies ν1, ν2 of the resonators are a function of the speed of light, c, and the length Li of the respective resonator: νi = ni·c/(2Li), ni being the mode number of the light mode oscillating in the resonator. Thus, if the lengths of the resonators are kept stable, one can derive limits on a possible anisotropy of the speed of light, c = c(θ), by measuring the resonance frequencies as a function of the orientation of the resonators. In our setup we measure the difference frequency (ν1 - ν2) between the two resonators by exciting their modes with laser waves. Compared to an arrangement with a single cavity compared against, say, an atomic reference, the hypothetical signal due to an anisotropy is doubled in size.
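Since each resonance frequency scales linearly with c for a fixed cavity length, a fractional change of c shows up one-to-one as a fractional change of the beat note. The following back-of-the-envelope sketch (not the authors' analysis code; the 10 mHz amplitude is simply of the order of the values shown in Fig. 4 further below) illustrates the conversion:

```python
# Minimal sketch: with nu_i = n_i * c / (2 * L_i) and stable cavity lengths,
# delta_nu / nu_0 ~ delta_c / c, so a beat-note modulation amplitude maps
# directly onto a bound on the anisotropy of the speed of light.
nu_0 = 281e12        # mean optical frequency in Hz (value quoted in the text)
delta_nu = 10e-3     # assumed beat-note modulation amplitude in Hz (order of Fig. 4)

print(f"delta_c / c ~ {delta_nu / nu_0:.1e}")   # ~3.6e-17
```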

Fig 2: The ULE-block containing the resonators (left). For the measurement the block is actively rotated and the frequency difference ν1 – ν2 is measured as a function of the orientation (right).

A cross section of the complete experimental apparatus is shown in figure 3. Many different systems have been implemented to suppress systematic effects. For example, the ULE-block containing the resonators is fixed in a temperature stabilized vacuum chamber to ensure a high length stability. This is placed on breadboard I and is surrounded by a foam-padded wooden box for acoustical and thermal isolation. Furthermore, the laser and all the optics needed for the interrogation of the resonators, also on breadboard I, are shielded by another foam-padded box. To isolate the optical setup from vibrations, breadboard I is placed on two active vibration isolation supports (AVI), that suppress mechanical noise coming from the rotation table and the floor.

Fig.3 Cross section of the experimental setup. A: air bearings, B: piezo motors, C: voice coil actuators, D: tilt sensors, E: air springs system, AVI's: active vibration isolation supports

In addition, the AVI also allows active control of the tilt of breadboard I by means of voice coil actuators (C). This is necessary since a varying tilt leads to varying elastic deformations of the resonator block, which systematically shift the resonance frequencies. The tilt is measured using electronic bubble-level sensors (D) with a resolution of 1 µrad. The described system and a rack carrying all the electronics used for the frequency stabilisation and other servo systems stand on breadboard II, which is fixed to the rotor of a high-precision air-bearing (A) rotation table. This is used to actively change the orientation of the resonators at a rotation rate of ωrot = 2π/(90 s). To minimize systematic effects due to tilt modulations, the rotation axis can be aligned with the direction of local gravity using an air spring system (E). A tower around the complete setup, with multilayered elements on the sides and ceiling plates containing thermo-electric coolers, provides thermal and acoustical isolation from the surroundings.

With this apparatus we have performed measurements over a period of more than one year, taken in 46 datasets each longer than one day. From these datasets we have used 135,000 single rotations to extract upper limits on the parameters of the RMS and the SME describing a potential anisotropy of the speed of light.

The mentioned models, the RMS and the SME, predict variations of the frequency difference Δν = ν1 - ν2 on several timescales. Due to the symmetry of the resonator system and the active rotation θ(t) = ωrot·t, the models predict a variation of Δν at a frequency 2ωrot: Δν(θ) = 2Bν0 sin(2θ(t)) + 2Cν0 cos(2θ(t)), where ν0 is the mean optical frequency (281 THz). Thus, for every single rotation we determine the modulation amplitudes 2Bν0 and 2Cν0 at this frequency (see fig.4). Due to the rotation of the Earth and its revolution around the Sun, these amplitudes will show, if Lorentz Invariance is violated, variations on the timescale of half a sidereal day, a sidereal day, and on an annual scale. The size of these modulations is directly connected to the parameters of the two test theories, which can be derived from the modulation amplitudes via fits.
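As an illustration of how such per-rotation amplitudes can be obtained, the sketch below fits one rotation's beat-note record to the two quadratures at 2ωrot by linear least squares. It is not the authors' analysis code; the additional offset and drift terms, the sampling rate and the synthetic data are assumptions made for the example.

```python
import numpy as np

omega_rot = 2 * np.pi / 90.0                    # one table rotation per 90 s
t = np.arange(0.0, 90.0, 0.1)                   # assumed 10 Hz sampling over one rotation
delta_nu = np.random.normal(0.0, 0.1, t.size)   # placeholder beat-note data (Hz)

theta = omega_rot * t
# Design matrix: offset, linear drift, and the two quadratures at 2*omega_rot
A = np.column_stack([np.ones_like(t), t, np.sin(2 * theta), np.cos(2 * theta)])
coeffs, *_ = np.linalg.lstsq(A, delta_nu, rcond=None)
two_B_nu0, two_C_nu0 = coeffs[2], coeffs[3]     # modulation amplitudes in Hz
print(two_B_nu0, two_C_nu0)
```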

Fig.4 Histograms of the determined modulation amplitudes due to active rotation. The mean values are (10 ± 1) mHz for 2Bν0 and (1 ± 1) mHz for 2Cν0, corresponding to (3.5 ± 0.4)·10⁻¹⁷ for 2B and (0.4 ± 0.4)·10⁻¹⁷ for 2C.

Within the RMS theory a single parameter combination (δ-β-1/2) describes the anisotropy. From our measurements we can deduce a value of (-1.6 ± 6.1)·10⁻¹², yielding an upper 1σ limit of 7.7·10⁻¹². This means that the anisotropy of the speed of light, defined as (1/2)·Δc(π/2)/c, is probably less than 6·10⁻¹⁸ in relative terms.
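For readers who want to see where the 6·10⁻¹⁸ figure comes from: in the RMS framework the anisotropy of c is suppressed by (v/c)², where v is the velocity of the laboratory with respect to an assumed preferred frame, conventionally taken as roughly 370 km/s (the motion relative to the cosmic microwave background). A rough consistency check under that assumption:

```python
# Rough check (the 370 km/s preferred-frame velocity is the conventional
# CMB-dipole choice, an assumption of this sketch, not stated in the text):
v = 3.7e5            # m/s
c = 3.0e8            # m/s
rms_limit = 7.7e-12  # 1-sigma limit on |delta - beta - 1/2| quoted above

print(f"(1/2)*dc(pi/2)/c < {0.5 * rms_limit * (v / c)**2:.0e}")   # ~6e-18
```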

Within the SME test theory 8 different parameters can be derived from our measurements. For most of these coefficients we can place upper bounds at a level of a few parts in 10¹⁷, except for one coefficient, (κ̃e-)^ZZ, which is determined from the mean values of the modulation amplitudes 2B and 2C and is most seriously affected by systematic effects. For this coefficient we can only limit the value to below 1.3·10⁻¹⁶ (1σ). For all the coefficients our experiment improved the upper limits on a possible Lorentz Invariance violation by more than one order of magnitude compared to previous work [17-22], and no significant signature of an anisotropy of the speed of light is seen at the current sensitivity of the apparatus.

Currently, the institute is working on an improved version of the apparatus. The goal is a further significant improvement of the upper limits within the next few years.

References:
[1] "Zur elektrodynamik bewegter körper", A. Einstein, Annalen der Physik, 17:891 (1905). Article.
[2] "Postulate versus Observation in the Special Theory of Relativity",

H.P. Robertson, Rev. Mod. Phys., 21(3):378–382, (Jul 1949). Article.
[3] "A test theory of special relativity: I. Simultaneity and clock synchronization",

Reza Mansouri and Roman U. Sexl, Gen. Rel. Grav., 8(7):497 (1977). Abstract.
[4] "A test theory of special relativity: II. First order tests",

Reza Mansouri and Roman U. Sexl, Gen. Rel. Grav., 8(7):515, 1977. Abstract.
[5] "A test theory of special relativity: III. Second-order tests",

Reza Mansouri and Roman U. Sexl, Gen. Rel. Grav., 8(10):809 (1977). Abstract.
[6] "Lorentz-violating extension of the standard model",

D. Colladay and V. Alan Kostelecký, Phys. Rev D, 58(11):116002 (1998). Abstract.
[7] "An Experimental Study of the Rate of a Moving Atomic Clock. II",

Herbert E. Ives and G.R. Stilwell, J. Opt. Soc. Am., 31:369 (1941). Abstract.
[8] "Experimental Establishment of the Relativity of Time",

Roy J. Kennedy and Edward M. Thorndike, Phys. Rev., 42:400 (1932). Abstract.
[9] A.A. Michelson and E.W. Morley, American Journal of Science, III-34(203), (1887)
[10] "Limits on the Chirality of Interstellar and Intergalactic Space",

M. Goldhaber and V. Trimble, J. Astrophy. Astr., 17:17 (1996). Article.
[11] "Is There Evidence for Cosmic Anisotropy in the Polarization of Distant Radio Sources?",
S.M. Carroll and G.B. Field, Phys. Rev. Lett., 79(13):2394-2397 (1997). Abstract.
[12] "A limit on the variation of the speed of light arising from quantum gravity effects",

J. Granot , S. Guiriec, M. Ohno, V. Pelassa et al., Nature, 08574, doi:10.1038 (2009). Abstract.
[13] "A dynamical test of special relativity using the anomalous electron g-factor",

M. Kohandel, R. Golestanian, M. R. H. Khajehpour, Physics Letters A, 231, 5-6 (1997). Abstract.
[14] "Test of relativistic time dilation with fast optical atomic clocks at different velocities",

Sascha Reinhardt, Guido Saathoff, Henrik Buhr, Lars A. Carlson, Andreas Wolf, Dirk Schwalm, Sergei Karpuk, Christian Novotny, Gerhard Huber, Marcus Zimmermann, Ronald Holzwarth, Thomas Udem, Theodor W. Hänsch, Gerald Gwinner, Nature Physics, 3, 861-864 (2007). Abstract.
[15] "Satellite test of special relativity using the global positioning system",

P. Wolf and G. Petit, Phys. Rev. A, 56(6):4405-4409 (1997). Abstract.
[16] "Laboratory Test of the Isotropy of Light Propagation at the 10-17 Level",

Ch. Eisele, A. Yu. Nevsky and S. Schiller, Phys. Rev. Lett. 103, 090401 (2009). Abstract.
[17] "Tests of Relativity Using a Cryogenic Optical Resonator", C. Braxmaier, H. Müller, O. Pradl, J. Mlynek, A. Peters, S. Schiller,
Phys. Rev. Lett., 88(1):010401 (2001). Abstract.
[18] "Modern Michelson-Morley Experiment using Cryogenic Optical Resonators",

Holger Müller, Sven Herrmann, Claus Braxmaier, Stephan Schiller and Achim Peters, Phys. Rev. Lett., 91:020401 (2003). Abstract.
[19] "Test of the Isotropy of the Speed of Light Using a Continuously Rotating Optical Resonator",

Sven Herrmann, Alexander Senger, Evgeny Kovalchuk, Holger Müller and Achim Peters, Phys. Rev. Lett., 95(15):150401 (2005). Abstract.
[20] "Test of constancy of speed of light with rotating cryogenic optical resonators",

P. Antonini, M. Okhapkin, E. Göklü and S. Schiller, Phys. Rev. A, 71:050101 (2005). Abstract.
[21] "Improved test of Lorentz invariance in electrodynamics using rotating cryogenic sapphire oscillators", Paul L. Stanwix, Michael E. Tobar, Peter Wolf, Clayton R. Locke, and Eugene N. Ivanov
, Phys. Rev. D, 74(8):081101 (2006). Abstract.
[22] "Tests of Relativity by Complementary Rotating Michelson-Morley Experiments",
Holger Müller, Paul Louis Stanwix, Michael Edmund Tobar, Eugene Ivanov, Peter Wolf, Sven Herrmann, Alexander Senger, Evgeny Kovalchuk, and Achim Peters, Phys. Rev. Lett., 99(5):050401 (2007). Abstract.



Saturday, May 02, 2009

Measurement and Control of ‘Forbidden’ Collisions between Fermions could improve Atomic Clock Accuracy

Jun Ye adjusts the laser setup for a strontium atomic clock in his laboratory at Joint Institute for Laboratory Astrophysics (JILA).
[Image credit: J. Burrus/NIST]

In a paper published in the journal Science, a team of researchers led by Jun Ye of JILA, a joint institute of the National Institute of Standards and Technology (NIST) and the University of Colorado (CU) at Boulder, reported measurement and control of seemingly 'forbidden' collisions between neutral strontium atoms — a class of fermions, which are not supposed to collide when in identical energy states. This breakthrough has made possible a significant boost in the accuracy of atomic clocks based on hundreds or thousands of neutral atoms.

Link to Jun Ye Group "AMO Physics & Precision Measurement" >>

JILA's strontium clock is one of several next-generation atomic clocks under development around the world. These experimental clocks are based on a variety of different atoms and designs, from single ions (electrically charged atoms) to thousands of neutral atoms; it is not yet clear which design will emerge as the best and be chosen as the future international time standard. The latest JILA work helps eliminate a significant drawback to clock designs based on ensembles of neutral atoms. The presence of many atoms increases both the precision and signal of a clock based on the oscillations between energy levels, or "ticks," in those atoms. However, uncontrolled interactions between atoms can perturb their internal energy states and shift the number of clock ticks per second, reducing overall accuracy.
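A rough way to see why large ensembles help: if the atoms are uncorrelated and the clock is limited by quantum projection noise, the statistical uncertainty of each interrogation averages down as 1/√N. The sketch below is only this idealized scaling argument, not a statement about the JILA clock's actual error budget:

```python
import math

# Idealized scaling: relative statistical uncertainty ~ 1/sqrt(N) for N
# uncorrelated atoms interrogated simultaneously.
for n_atoms in (1, 100, 2000):
    print(f"N = {n_atoms:5d}  ->  relative noise ~ {1 / math.sqrt(n_atoms):.3f}")
```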

For the past two years, the Jun Ye group has been developing an optical lattice atomic clock based on fermions, in this case a collection of identical strontium atoms (⁸⁷Sr). Its overall systematic uncertainty has dropped below that of NIST-F1 -- the cesium (Cs) fountain clock operated by NIST, Boulder, as the U.S. civilian time and frequency standard.

Fermions, according to the rules of quantum physics, cannot occupy the same energy state and location in space at the same time. Therefore, these identical strontium atoms are not supposed to collide. However, as the group improved the performance of their strontium clock over the past two years, they began to observe small shifts in the frequencies of the clock ticks due to atomic collisions. The extreme precision of their clock unveiled in 2008 (read past 2Physics report, April 17, 2008) enabled the group to measure these minute interactions systematically, including the dynamic effect of the measurement process itself, and to significantly reduce the resulting uncertainties in clock operation.

The JILA clock used in the latest experiments contains about 2,000 strontium atoms cooled to temperatures of a few microKelvin and trapped in multiple levels of a crisscrossed pattern of light, known as an optical lattice. The lattice is shaped like a tall stack of pancakes, or wells. About 30 atoms are grouped together in each well, and these neighboring atoms sometimes collide.

Ye's group discovered that two atoms located some distance apart in the same well are subjected to slight variations in the direction of the laser pulses used to boost the atoms from one energy level to another. The non-uniform interaction with light excites the atoms unevenly. Strontium atoms in different internal states are no longer completely identical, and become distinguishable enough to collide, if given a sufficient amount of time. This differential effect can be suppressed by making the atoms even colder or increasing the trap depth.

The probability of atomic collisions depends on the extent of the variation in the excitation of the ensemble of atoms. Significantly for clock operations, the JILA scientists determined that when the atoms are excited to about halfway between the ground state and the more energetic excited state, the collision-related shifts in the clock frequency go to zero. This knowledge enables scientists to reduce or even eliminate the need for a significant correction in the clock output, thereby increasing accuracy.

The discoveries described in Science also would apply to clocks using atoms known as bosons, which, unlike fermions, can exist in the same place and energy state at the same time. This category of clocks includes NIST-F1, the U.S. civilian time and frequency standard. In the case of bosons, variations in light-matter interactions would reduce (rather than increase) the probability of collisions.

Beyond atomic clocks, the high precision of JILA's strontium lattice experimental setup is expected to be useful in other applications requiring exquisite control of atoms, such as quantum computing—potentially ultra-powerful computers based on quantum physics—and simulations to improve understanding of other quantum phenomena such as superconductivity.

"There's a fundamental question here: Why do fermions actually collide?" Ye asks. "Now we understand why there is a frequency shift, and we can zero the shift ... [This result] does not change theory. The value is from the practical possibilities: We can control multi-particle interactions."

Reference
"Probing Interactions between Ultracold Fermions"
G.K. Campbell, M.M. Boyd, J.W. Thomsen, M.J. Martin, S. Blatt, M.D. Swallows, T.L. Nicholson, T. Fortier, C.W. Oates, S.A. Diddams, N.D. Lemke, P. Naidon, P. Julienne, J. Ye, and A. D. Ludlow
Science, Vol. 324, pp. 360-363 (2009)
Abstract

[We thank Media Relations, NIST, Boulder for materials used in this posting]



Saturday, February 21, 2009

Accurate Measurement of Huge Pressures that Melt Diamond provides crucial data for Planetary Astrophysics and Nuclear Fusion

Marcus Knudson examines the focal point of his team's effort to characterize materials at extremely high pressures. The fortress-like box sitting atop its support will hold within it a so-called "flyer plate" that -- at speeds far faster than a rifle bullet -- will smash into multiple targets inserted in the two circular holes. An extensive network of tiny sensors and computers will reveal information on shock wave transmission, mass movement, plate velocity, and other factors. [Photo by: Randy Montoya]

In a recent paper in the journal 'Science' [1], researchers from Sandia National Laboratories (a multiprogram laboratory operated by Sandia Corporation for the U.S. Department of Energy's National Nuclear Security Administration) reported measurements, ten times more accurate than previously achieved, of the enormous pressures needed to melt diamond to slush and then to a completely liquid state.

Researchers Marcus Knudson, Mike Desjarlais, and Daniel Dolan discovered a triple point at which solid diamond, liquid carbon, and a long-theorized but never-before-confirmed state of solid carbon called bc8 were found to exist together.

The high-energy-density behavior of carbon has received much attention in recent times, mainly because of its relevance to planetary astrophysics. The outer planets Neptune and Uranus are thought to contain large quantities of carbon (as much as 10-15% of the total planetary mass). In Neptune, for example, much of the atmosphere is composed of methane (CH4). Under high pressure, methane decomposes, liberating its carbon. One question for astrophysicists modeling these planets is what form carbon takes in their interiors. At what precise pressure does simple carbon form diamond? Is the pressure eventually great enough to liquefy the diamond, or to form bc8, a solid with yet other characteristics?

“Liquid carbon is electrically conductive at these pressures, which means it affects the generation of magnetic fields,” says Desjarlais. “So, accurate knowledge of phases of carbon in planetary interiors makes a difference in computer models of the planet’s characteristics. Thus, better equations of state can help explain planetary magnetic fields that seem otherwise to have no reason to exist.”

Accurate knowledge of these changes of state is also essential to the effort to produce nuclear fusion at Lawrence Livermore National Laboratory's National Ignition Facility (NIF) in California. In 2010, at NIF, 192 laser beams are expected to focus on isotopes of hydrogen contained in a small spherical shell that may be made of diamond. The idea is to bring enough heat and pressure to bear to evenly squeeze the shell, which serves as a containment capsule. The contraction is expected to fuse the nuclei of the deuterium and tritium within.

The success of this reaction would give more information about the effects of a hydrogen bomb explosion, making it less likely the U.S. would need to resume nuclear weapons tests. It could also be a step in learning how to produce a contained fusion reaction that could produce electrical energy for humanity from seawater, the most abundant material on Earth.

For the reaction to work, the spherical capsule must compress evenly. But at the enormous pressures needed, will the diamond turn to slush, liquid, or even to the solid bc8? A mixture of solid and liquid would create uneven pressures on the isotopes, thwarting the fusion reaction, which to be effective must offer deuterium and tritium nuclei no room to escape.

That problem can be avoided if researchers know at what pressure point diamond turns completely liquid. One laser blast could bring the diamond to the edge of its ability to remain solid, and a second could pressurize the diamond wall enough that it would immediately become all liquid, avoiding the slushy solid-liquid state. Or a more powerful laser blast could cause the solid diamond to jump past the messy triple point, and past the liquid and solid-bc8 mixture, to enter a totally liquid state. This would maintain uniform pressure on the nuclei being forced to fuse within.

The mixed phase regions, says Dolan, are good ones to avoid for fusion researchers. The Sandia work provides essentially a roadmap showing where those ruts in the fusion road lie.

Sandia researchers achieved these results by dovetailing theoretical simulations with laboratory work. Simulation work led by Desjarlais used theory to establish the range of velocities at which projectiles, called flyer plates, should be sent to create the pressures needed to explore these high pressure phases of carbon and how the triple point would reveal itself in the shock velocities. The theory, called density functional theory, is a powerful method for solving Schrödinger’s equation for hundreds to thousands of atoms using today’s large computers.

[Image courtesy: Sandia National Laboratories] The solid and dotted lines in both graphs represent the same equation-of-state predictions for carbon by Sandia theorists. Jogs in the lines occur when the material changes state. The consistent red-diamond path in graph A, hugging the predicted lines, shows Z's laboratory results, which confirm the theoretical predictions. The scattered data points of graph B represent lab results from various laser sites external to Sandia.

Using these results as guides, experiments with fifteen flyer-plate flights — themselves powered by the extreme magnetic fields of Sandia's Z machine — in work led by Knudson, then pinned down the change-of-state transition pressures more exactly than ever before. Even better, these pressures fell within the bounds set by theory, showing that the theory was accurate.

“These experiments are much more accurate than ones previously performed with laser beams [2,3],” says Knudson. “Our flyer plates, with precisely measured velocities, strike several large diamond samples, which enables very accurate shock wave velocity measurements.” Laser beam results, he says, are less accurate because they shock only very small quantities of material, and must rely on an extra step to infer the shock pressure and density.

Sandia's magnetically driven plates have a cross section of about 4 cm by 1.7 cm, are hundreds of microns thick, and impact three samples on each firing. The Z machine's target diamonds are each about 1.9 carats, while laser experiments use about 1/100 of a carat.

“No, they’re not gemstones,” says Desjarlais about the Sandia targets. The diamonds in fact are created through industrial processes and have no commercial value, says Dolan, though their scientific value has been large!

Reference
[1] "Shock-Wave Exploration of the High-Pressure Phases of Carbon"
M. D. Knudson, M. P. Desjarlais, D. H. Dolan, Science, 322, 1822 - 1825 (2008).
Abstract.
[2] "Hugoniot measurement of diamond under laser shock compression up to 2 TPa"
H. Nagao, K. G. Nakamura, K. Kondo, N. Ozaki, K. Takamatsu, T. Ono, T. Shiota, D. Ichinose, K. A. Tanaka, K. Wakabayashi, K. Okada, M. Yoshida, M. Nakai, K. Nagai, K. Shigemori, T. Sakaiya, K. Otani, Physics of Plasmas, 13, 052705 (2006). Abstract.
[3] "Laser-shock compression of diamond and evidence of a negative-slope melting curve"
Stéphanie Brygoo, Emeric Henry, Paul Loubeyre, Jon Eggert, Michel Koenig, Bérénice Loupias, Alessandra Benuzzi-Mounaix & Marc Rabec Le Gloahec, Nature Materials 6, 274 - 277 (2007),
Abstract.

[We thank Media Relations, Sandia National Laboratories for materials used in this report]



Saturday, January 31, 2009

Up to 400-fold Improvement in Magnetic Field Detection

W.F. Egelhoff, Jr. (photo courtesy: NIST)

A team of researchers at the National Institute of Standards and Technology (NIST) has reported dramatically enhanced sensitivity -- a 400-fold improvement in some cases -- in a carefully built magnetic flux concentrator, a device that draws in external magnetic field lines and concentrates them in a small region. The flux concentrator is a kind of magnetic sandwich that interleaves layers of a magnetic alloy with a few nanometers of silver "spacer". Such concentrators are used to amplify fields in compact magnetic sensors used for a wide variety of applications, from weapons detection and non-destructive testing to medical devices and high-performance data storage.

Those applications and many others are based on thin films of magnetic materials in which the direction of magnetization can be switched from one orientation to another. An important characteristic of a magnetic film is its saturation field, the magnitude of the applied magnetic field that completely magnetizes the film in the same direction as the applied field—the smaller the saturation field, the more sensitive the device.

The saturation field is often determined by the amount of stress in the film—atoms under stress due to the pull of bonds with neighboring atoms are more resistant to changing their magnetic orientation. Metallic films develop not as a single monolithic crystal, like diamonds, but rather as a random mosaic of microscopic crystals called grains. Atoms on the boundaries between two different grains tend to be more stressed, so films with a lot of fine grains tend to have more internal stress than coarser grained films. Film stress also increases as the film is made thicker, which is unfortunate because thick films are often required for high magnetization applications.

Transmission electron microscope (TEM) images show sections of a continuous 400-nanometer-thick magnetic film of a nickel-iron-copper-molybdenum alloy (top) and a film of the same alloy layered with silver every 100 nanometers (bottom). By relieving strain in the film, the silver layers promote the growth of notably larger crystal grains in the layered material as compared to the monolithic film (several are highlighted for emphasis). Electron diffraction patterns (insets) tell a similar story—the material with larger crystal grains displays sharper, more discrete scattering patterns. (Color added for clarity). Image credit: J. Bonevich, NIST

The NIST research team discovered that magnetic film stress could be lowered dramatically by periodically adding a layer of a metal, having a different crystal structure or lattice spacing, in between the magnetic layers. Although the mechanism isn’t completely understood, according to lead author William Egelhoff Jr., the intervening layers disrupt the magnetic film growth and induce the creation of new grains that grow to be larger than they do in the monolithic films. The researchers prepared multilayer films with layers of a nickel-iron-copper-molybdenum magnetic alloy each 100 nanometers (nm) thick, interleaved with 5-nm layers of silver. The structure reduced the tensile stress (over a monolithic film of equivalent thickness) by a factor of 200 and lowered the saturation field by a factor of 400.

Reference
"400-fold reduction in saturation field by interlayering",
W.F. Egelhoff, Jr., J. Bonevich, P. Pong, C.R. Beauchamp, G.R. Stafford, J. Unguris, and R.D. McMichael,
J. Appl. Phys. 105, 013921 (2009).
Abstract.

[We thank Media relations, NIST for materials used in this report]



Monday, October 27, 2008

Use of Squeezed Light to perform Distance Measurement below the Standard Quantum Limit

(from Left to Right) Nicolas Treps, Brahim Lamine and Claude Fabre

A team of researchers (B. Lamine, N. Treps and C. Fabre) from the Laboratoire Kastler Brossel (LKB) at the University Pierre and Marie Curie (Paris, France) have shown how to use squeezed light to perform distance measurement below the standard quantum limit imposed by the quantum nature of light [1].

Any distance measurement involves the propagation of light between two observers and the measurement of its phase (interferometric measurement, which gives distance within a wavelength) or its amplitude (time of flight measurement, giving absolute measurement). The quantum nature of light introduces fluctuations in the phase and the amplitude of the light used for ranging, therefore leading to a noisy measurement. The scientists have shown how to combine both a time of flight and a phase measurement, using frequency combs and homodyne detection, to minimize the effects of this quantum noise.

When classical light is used, the sensitivity cannot go below what is called a standard quantum limit; the limit of the combined measurement is, however, smaller than the previously established standard quantum limits based on either interferometric (phase) or time-of-flight measurement alone. More interestingly, when squeezed frequency combs are used to perform the measurement, the sensitivity can be pushed significantly below this standard quantum limit. Squeezing light consists of tailoring its quantum fluctuations.

Ranging using frequency combs has already been proposed in the past [2], and it has long been realized that quantum resources, in particular entanglement and squeezing, are a way of improving ranging [3]. Nevertheless, the combination of both technologies in an adapted, optimal scheme is a first.

Potential applications could be for future space-based experiments such as DARWIN (to detect Earth-like exoplanets) or LISA (to detect gravitational waves), and even for precise dispersion measurement. Indeed, when dispersion occurs, it does not affect in the same way the phase and the envelope --an effect which can be seen in the detection scheme proposed by the scientists.

References
[1] "Quantum Improvement of Time Transfer between Remote Clocks"

B.Lamine, C.Fabre and N. Treps,
Physical Review Letter 101, 123601 (2008). Abstract. [arXiv:0804.1203].
[2] "Absolute measurement of a long, arbitrary distance to less than an optical fringe",

J. Ye, Optics Letters 29, 1153 (2004). Abstract.
[3] "Quantum-enhanced positioning and clock synchronization",

V. Giovannetti, S. Lloyd, and L. Maccone, Nature 412, 417 (2001). Abstract.



Sunday, October 19, 2008

Squeezing of Quantum Noise successfully used to develop First Tunable, ‘Noiseless’ Amplifier

Konrad Lehnert [Photo courtesy: JILA, Boulder]

By significantly reducing the uncertainty in delicate measurements of microwave signals, a team of researchers from the National Institute of Standards and Technology (NIST) and the Joint Institute for Laboratory Astrophysics (JILA) has developed the first tunable "noiseless" amplifier, which could boost the speed and precision of quantum computing and communications systems.

Conventional amplifiers add unwanted “noise,” or random fluctuations, when they measure and boost electromagnetic signals. Amplifiers that theoretically add no noise have been demonstrated before, but the JILA/NIST technology offers better performance and is the first to be tunable, operating between 4 and 8 GHz, according to JILA group leader Konrad Lehnert. It is also the first amplifier of any type ever to boost signals sufficiently to overcome noise generated by the next amplifier in a series along a signal path, Lehnert says, a valuable feature for building practical systems.

Noisy amplifiers force researchers to make repeated measurements of, for example, the delicate quantum states of microwave fields—that is, the shape of the waves as measured in amplitude (or power) and phase (or point in time when each wave begins). The rules of quantum mechanics say that the noise in amplitude and phase can’t both be zero, but the JILA/NIST amplifier exploits a loophole stipulating that if you measure and amplify only one of these parameters—amplitude, in this case—then the amplifier is theoretically capable of adding no noise. In reality, the JILA/NIST amplifier adds about half the noise that would be expected from measuring both amplitude and phase.

The JILA/NIST amplifier could enable faster, more precise measurements in certain types of quantum computers—which, if they can be built, could solve some problems considered intractable today—or quantum communications systems providing “unbreakable” encryption. It also offers the related and useful capability to “squeeze” microwave fields, trading reduced noise in the signal phase for increased noise in the signal amplitude. By combining two squeezed entities, scientists can “entangle” them, linking their properties in predictable ways that are useful in quantum computing and communications. Entanglement of microwave signals, as opposed to optical signals, offer some practical advantages in computing and communication such as relatively simple equipment requirements, Lehnert says.

[Image Credit: M. Castellanos-Beltran/JILA] In the JILA/NIST “noiseless” amplifier, a long line of superconducting magnetic sensors (beginning on the right in this photograph) made of sandwiches of two layers of superconducting niobium with aluminum oxide in between, creates a 'metamaterial' that selectively amplifies microwaves based on their amplitude rather than phase.

The new amplifier is a 5-millimeter-long niobium cavity lined with 480 magnetic sensors called SQUIDs (superconducting quantum interference devices). The line of SQUIDs acts like a “metamaterial,” a structure not found in nature that has strange effects on electromagnetic energy. Microwaves ricochet back and forth inside the cavity like a skateboarder on a ramp. Scientists tune the wave velocity by manipulating the magnetic fields in the SQUIDs and the intensity of the microwaves. An injection of an intense pump tone at a particular frequency, like a skateboarder jumping at particular times to boost speed and height on a ramp, causes the microwave power to oscillate at twice the pump frequency. Only the portion of the signal which is synchronous with the pump is amplified.
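A toy model of ideal phase-sensitive (degenerate parametric) amplification helps illustrate the "noiseless" claim: the quadrature in phase with the pump is amplified, the orthogonal quadrature is deamplified, and the signal-to-noise ratio of the amplified quadrature is preserved. This is a textbook sketch under simplified assumptions, not a model of the actual JILA/NIST device:

```python
import numpy as np

G = 10.0                                  # hypothetical quadrature gain
rng = np.random.default_rng(0)
# Input: small coherent amplitude plus equal noise in both quadratures
I_in = 1.0 + rng.normal(0, 0.5, 10000)    # in-phase (amplitude) quadrature
Q_in = rng.normal(0, 0.5, 10000)          # out-of-phase (phase) quadrature

I_out = G * I_in                          # amplified, no added noise
Q_out = Q_in / G                          # deamplified ("squeezed")

print("SNR in :", I_in.mean() / I_in.std())
print("SNR out:", I_out.mean() / I_out.std())     # unchanged: noiseless amplification
print("Q noise in/out:", Q_in.std(), Q_out.std())  # phase quadrature squeezed by G
```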

Reference
"Amplification and squeezing of quantum noise with a tunable Josephson metamaterial",
M.A. Castellanos-Beltran, K.D. Irwin, G.C. Hilton, L.R. Vale and K.W. Lehnert,
Nature Physics, published online 5 October 2008; doi:10.1038/nphys1090. Abstract

[We thank Media Relations, NIST for materials used in this posting]



Thursday, April 17, 2008

Collaboration Helps Make JILA Strontium Atomic Clock Surpass Accuracy of NIST-F1 Fountain Clock

Jun Ye

A next-generation atomic clock that tops previous records for accuracy in clocks based on neutral atoms has been demonstrated by physicists at JILA, a joint institute of the Commerce Department's National Institute of Standards and Technology (NIST) and the University of Colorado at Boulder. The new clock, based on thousands of strontium atoms trapped in grids of laser light, surpasses the accuracy of the current U.S. time standard based on a “fountain” of cesium atoms.

JILA’s experimental strontium clock, described in the Feb. 14 issue of Science Express and the Mar. 28 issue of Science, is now the world’s most accurate atomic clock based on neutral atoms, more than twice as accurate as the NIST-F1 standard cesium clock located just down the road at the NIST campus in Boulder.

The JILA strontium clock would neither gain nor lose a second in more than 200 million years, compared to NIST-F1's current accuracy of about 1 second in 80 million years.
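These "one second in so many years" figures translate into fractional frequency inaccuracies by simple arithmetic (using roughly 3.16·10⁷ seconds per year):

```python
# Quick arithmetic behind the figures quoted above: a clock off by at most
# 1 s after T years has a fractional inaccuracy of about 1 / (T * s_per_year).
SECONDS_PER_YEAR = 3.156e7

for label, years in [("JILA Sr (200 million yr)", 200e6),
                     ("NIST-F1 (80 million yr)", 80e6)]:
    print(label, f"~ {1 / (years * SECONDS_PER_YEAR):.1e}")
# ~1.6e-16 for the strontium clock, ~4e-16 for NIST-F1
```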

The advance was made possible by Boulder’s critical mass of state-of-the-art timekeeping equipment and expertise. The JILA strontium clock was evaluated by remotely comparing it to a third NIST atomic clock, an experimental model based on neutral calcium atoms. The best clocks can be precisely evaluated by comparing them to other nearby clocks with similar performance; very long-distance signal transfer, such as by satellite, is too unstable for practical, reliable comparisons of the new generation of clocks. In the latest experiment, signals from the two clocks were compared via a 3.5-kilometer underground fiber-optic cable.

The strontium and calcium clocks rely on the use of optical light, which has higher frequencies than the microwaves used in NIST-F1. Because the frequencies are higher, the clocks divide time into smaller units, offering record precision. Laboratories around the world are developing optical clocks based on a variety of different designs and atoms; it is not yet clear which design will emerge as the best and be chosen as the next international standard. The work reported in Science is the first optical atomic clock comparison over kilometer-scale urban distances, an important step for worldwide development of future standards.

“This is our first comparison to another optical atomic clock,” says NIST/JILA Fellow Jun Ye (Link to Jun Ye Group), who leads the strontium project. “As of now, Boulder is in a very unique position. We have all the ingredients, including multiple optical clocks and the fiber-optic link, working so well. Without a single one of these components, these measurements would not be possible. It’s all coming together at this moment in time.”

NIST and JILA are home to optical clocks based on a variety of atoms, including strontium, calcium, mercury, aluminum, and ytterbium, each offering different advantages. Ye now plans to compare strontium to the world's most accurate clock, NIST's experimental design based on a single mercury ion (charged atom). The mercury ion clock (see our past posting) was accurate to about 1 second in 400 million years in 2006 and performs even better today, according to Jim Bergquist, the NIST physicist who built the clock. The "best" status in atomic clocks is a moving target.

The development and testing of a new generation of optical atomic clocks is important because highly precise clocks are used to synchronize telecom networks and deep-space communications, as well as for navigation and positioning. The race to build even better clocks is expected to lead to new types of gravity sensors, as well as new tests of fundamental physical laws to increase understanding of the universe. Because Ye’s group is able to measure and control interactions among so many atoms with such exquisite precision, the JILA work also is expected to lead to new scientific tools for quantum simulations that will help scientists better understand how matter and light behave under the strange rules governing the nanoworld.

In the JILA clock, a few thousand atoms of the alkaline-earth metal strontium are held in a column of about 100 pancake-shaped traps called an “optical lattice.” The lattice is formed by standing waves of intense near-infrared laser light. Forming a sort of artificial crystal of light, the lattice constrains atom motion and reduces systematic errors that occur in clocks that use moving balls of atoms, such as NIST-F1. Using thousands of atoms at once also produces stronger signals and eventually may yield more precise results than clocks relying on a single ion, such as mercury. JILA scientists detect strontium’s “ticks” (430 trillion per second) by bathing the atoms in very stable red laser light at the exact frequency that prompts jumps between two electronic energy levels. The JILA team recently improved the clock by achieving much better control of the atoms. For example, they can now cancel out the atoms’ internal sensitivity to external magnetic fields, which otherwise degrade clock accuracy. They also characterized more precisely the effects of confining atoms in the lattice.

Image caption: JILA Strontium Optical Atomic Clock: keeping time with neutral atoms at the lowest uncertainty. After being laser cooled to microkelvin temperatures, the ultracold atoms are confined in pancake-shaped traps by an optical lattice formed by a standing-wave infrared laser beam. A highly coherent clock laser probes the atomic resonance, and the result of the atom-light interaction is read out via state-sensitive, strong fluorescence signals. This information is used to control the clock laser frequency, and a phase-coherent optical frequency comb distributes the clock signal to the radio frequency domain.

The NIST calcium clock, which was used to evaluate the performance of the new strontium clock, relies on the ticking of clouds of millions of calcium atoms. This clock offers high stability for short times, relatively compact size and simplicity of operation. NIST scientists believe it could be made portable and perhaps transported to other institutions for evaluations of other optical atomic clocks. JILA scientists were able to take advantage of the calcium clock's good short-term stability by making fast measurements of one property in the strontium clock and then quickly switching to a different property to start the comparison over again.

The JILA-NIST collaborations benefit both institutions by enabling scientists not only to compare and measure clock performance, but also to share tools and expertise. Another key element to the latest comparison was the use of two custom-made frequency combs, the most accurate tool for measuring optical frequencies, which helped to maintain stability during signal transfer between the two institutions. (For background, visit NIST frequency combs page).

Reference
"Sr lattice clock at 1x10-16 fractional uncertainty by remote optical evaluation with a Ca clock"
A.D. Ludlow, T. Zelevinsky, G.K. Campbell, S. Blatt, M.M. Boyd, M.H.G. de Miranda, M.J. Martin, J. W. Thomsen, S.M. Foreman, J. Ye, T.M. Fortier, J.E. Stalnaker, S.A. Diddams, Y. Le Coq, Z.W. Barber, N. Poli, N.D. Lemke, K.M. Beck, C. W. Oates.
Science Vol. 319, p. 1805-1808, (28 March 2008). Abstract Link


This report is based on a press release put together by Laura Ost in the public affairs office at NIST. The JILA research is supported by the Office of Naval Research, National Institute of Standards and Technology, National Science Foundation and Defense Advanced Research Projects Agency. As a non-regulatory agency of the Commerce Department, NIST promotes U.S. innovation and industrial competitiveness by advancing measurement science, standards and technology in ways that enhance economic security and improve our quality of life.



Thursday, April 10, 2008

Ion Interferometers, the Bane of Chubby Photons?

Dallin S. Durfee poses with an elusive "fat photon" during the 2007 meeting of the APS Division of Atomic, Molecular, and Optical Physics (DAMOP).

[This is an invited article based on recent work of the author. -- 2Physics.com]

Author:
Dallin S. Durfee

Affiliation: Department of Physics, Brigham Young University

Our current model of electromagnetism has held up to 2.5 centuries of scrutiny. But like nearly every other theory that science has embraced, it will probably eventually be shown to be incomplete. In a recent article in Physical Review Letters, researchers at Brigham Young University examined the potential of using ion interferometry to search for Coulomb’s-law violating electric fields inside of a conducting cavity. If Coulomb’s law is correct, the absolute voltage of the cavity should not affect the fields inside the cavity. But if it is violated, changing the voltage should alter the fields in the cavity.

The proposed experiment was recently funded by a NIST Precision Measurement Grant and is currently under construction. In this experiment laser beams will be used to split the quantum wave functions of strontium ions in two. The two waves will then recoil away from each other before being deflected back together and recombined by two additional laser beams. The last laser beam will cause the two waves to interfere, such that the final state of an ion will depend on the relative quantum phase of the two halves of its wave function.

The presence of electric fields inside the conducting shell would cause the two waves to travel through different potentials and acquire different quantum phase shifts. This would change the overall phase of the interference in a predictable way, making it possible to determine the magnitude of the electric field from the final state of the ions. By monitoring the state of ions exiting the apparatus as a changing voltage is applied to the conducting shell, a very sensitive test of Coulomb’s law can be conducted.
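To get a feel for the sensitivity, the relative phase accumulated by the two halves of the wave function is Δφ = qΔV·T/ħ, where ΔV is the potential difference between the two paths and T the time they spend separated. The numbers below are purely hypothetical illustrations, not values from the paper:

```python
# Back-of-the-envelope illustration (all numbers hypothetical): two halves of
# an ion's wave function, held at electric potentials differing by dV for a
# time T, accumulate a relative quantum phase  dphi = q * dV * T / hbar.
HBAR = 1.054571817e-34     # J*s
Q_E = 1.602176634e-19      # C (a singly charged strontium ion)

dV = 1e-12                 # assumed potential difference between paths, volts
T = 1e-3                   # assumed separation time, seconds

dphi = Q_E * dV * T / HBAR
print(f"phase shift ~ {dphi:.2f} rad")   # ~1.5 rad: even tiny residual fields register
```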

The theory of massive photons provides a useful way to compare experimental searches for Coulomb's-law violations. This theory assumes that photons have a small but non-zero rest mass, resulting in a limited range for Coulomb interactions. Although it is widely believed that the photon has zero rest mass, in today's image-conscious world it is just possible that photons aren't telling us their true weight (after all, the neutrino maintained its massless image for decades). Based on calculations in their paper, the researchers predict that the experiment will be able to detect a rest mass of a few times 10⁻⁵⁰ grams, about 100 times smaller than previous laboratory measurements.

Reference
"Testing Nonclassical Theories of Electromagnetism with Ion Interferometry"
by B. Neyenhuis, D. Christensen, and D. S. Durfee, Phys. Rev. Lett. 99, 200401 (2007). Abstract Link



Friday, March 28, 2008

‘Quantum Logic Clock’ Rivals Mercury Ion as World’s Most Accurate Clock
A new limit on change in fine-structure constant

NIST physicist Till Rosenband adjusts the quantum logic clock, which derives its “ticks” from the natural vibrations of an aluminum ion. The aluminum ion is trapped together with one beryllium ion inside the copper-colored chamber in the foreground. [Photo credit and copyright: Geoffrey Wheeler]

In a paper published in today's issue of the journal 'Science', a team of scientists from the National Institute of Standards and Technology (NIST) reports the development of an atomic clock that uses an aluminum ion to apply the logic of computers to the peculiarities of the quantum world. The new clock now rivals the world's most accurate clock, based on a single mercury ion, previously designed and built by NIST scientists (see our past posting). Both clocks are at least 10 times more accurate than the current U.S. time standard.

An optical clock can be evaluated precisely only by comparison to another clock of similar accuracy serving as a “ruler.” NIST scientists used the quantum logic clock to measure the mercury clock, and vice versa. The measurements were made in a yearlong comparison of the two next-generation clocks with record precision, allowing scientists to record the relative frequencies of the two clocks to 17 digits—the most accurate measurement of this type ever made.

The aluminum and mercury clocks are both based on ions vibrating at optical frequencies, which are 100,000 times higher than microwave frequencies used in NIST-F1, the U.S. time standard based on neutral cesium atoms, and other similar time standards around the world. The aluminum and mercury clocks would neither gain nor lose one second in over 1 billion years—if they could run for such a long time—compared to about 80 million years for NIST-F1.

The NIST quantum logic clock uses two different kinds of ions, aluminum and beryllium, confined closely together in an electromagnetic trap and slowed by lasers to nearly “absolute zero” temperatures. Aluminum is a stable source of clock ticks, but its properties cannot be detected easily with lasers. The NIST scientists applied quantum computing methods to share information from the aluminum ion with the beryllium ion, a workhorse of their quantum computing research. The scientists can detect the aluminum clock’s ticks by observing light signals from the beryllium ion.

Highly accurate clocks are used to synchronize telecommunications networks and deep-space communications, and for satellite navigation and positioning. Next-generation clocks may also lead to new types of gravity sensors, which have potential applications in exploration for underground natural resources and fundamental studies of the Earth. NIST scientists have several other optical atomic clocks in development, including one based on thousands of neutral strontium atoms. The strontium clock recently achieved twice the accuracy of NIST-F1, but still trails the mercury and aluminum clocks.

Cosmology Connection: The comparison of these clocks produced the most precise results yet in the worldwide quest to determine whether some of the fundamental constants that describe the universe are changing slightly over time, a hot research question that may alter basic models of the cosmos.

Based on fluctuations in the frequencies of the two clocks relative to each other over time, NIST scientists were able to search for a possible change over time in a fundamental quantity called the fine-structure constant. This quantity measures the strength of electromagnetic interactions in many areas of physics, from studies of atoms and molecules to astronomy. Some evidence from astronomy has suggested the fine-structure constant may be changing very slowly over billions of years. If such changes are real, scientists would have to dramatically change their theories of the fundamental nature of the universe. [Readers may refer to the article "Changing Constants, Dark Energy and the Absorption of 21 cm Radiation" by Prof. Ben Wandelt of University of Illinois, 2Physics.com, July 25, 2007]

The NIST measurements indicate that the value of the fine-structure constant is not changing by more than 1.6 quadrillionths of 1 percent per year, with an uncertainty of 2.3 quadrillionths of 1 percent per year (a quadrillionth is a millionth of a billionth). The result is small enough to be “consistent with no change,” according to the paper. However, it is still possible that the fine-structure constant is changing at a rate smaller than anyone can yet detect. The new NIST limit is approximately 10 times smaller than the best previous measurement of the possible present-day rate of change in the fine-structure constant. The mercury clock is an especially useful tool for such tests because its frequency fluctuations are magnified by any changes in this constant.
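In more conventional notation, the quoted rate works out as follows:

```python
# Converting the quoted rate limits to scientific notation:
# "1.6 quadrillionths of 1 percent per year" = 1.6e-15 * 1e-2 per year.
rate        = 1.6e-15 * 1e-2    # fractional change of alpha per year
uncertainty = 2.3e-15 * 1e-2
print(f"d(alpha)/alpha per year: {rate:.1e} +/- {uncertainty:.1e}")
# i.e. about 1.6e-17 +/- 2.3e-17 per year, consistent with no change
```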

Reference
"Frequency ratio of Al+ and Hg+ single-ion optical clocks; metrology at the 17th decimal place"
T. Rosenband, D.B. Hume, P.O. Schmidt, C.W. Chou, A. Brusch, L. Lorini, W.H. Oskay, R.E. Drullinger, T.M. Fortier, J.E. Stalnaker, S.A. Diddams, W.C. Swann, N.R. Newbury, W.M. Itano, D.J. Wineland, and J.C. Bergquist
Science, Vol. 319. no. 5871, pp. 1808 - 1812 (28 March 2008). Abstract Link

We thank Media Relations, NIST for materials used in this posting



Sunday, November 04, 2007

Subpicotesla Atomic Magnetometry

John Kitching [Photo courtesy: NIST, Boulder]

A team of physicists led by John Kitching of National Institute of Standards and Technology (NIST) has reported the development of a tiny sensor that can detect magnetic field changes as small as 70 femtoteslas—equivalent to the brain waves of a person daydreaming. [A femtotesla is one quadrillionth (or a millionth of a billionth) of a tesla, the unit that defines the strength of a magnetic field. For comparison, the Earth’s magnetic field is measured in microteslas, and a magnetic resonance imaging (MRI) system operates at several teslas].

This compact magnetometer is based on the so-called SERF (spin-exchange relaxation free) principle, which was used by a group at Princeton University in 2003 to enhance the sensitivity of larger, tabletop-sized magnetometers to outperform SQUIDs. The NIST group developed novel approaches and technologies to adapt the SERF concept for tiny and practical devices. The sensor could be battery-operated and could reduce the costs of non-invasive biomagnetic measurements such as fetal heart monitoring.

At zero magnetic field, the atoms’ electron “spins” (which can be roughly visualized as tiny magnetic arrows pointing through the electrons) all point in the same direction as the laser beam, and the atoms absorb virtually no light. As the magnetic field is increased, the electrons jump to higher-energy levels and their spins go out of sync, causing the atoms to absorb some of the light.

Ordinarily, collisions between the atoms would knock the electron spins in random directions, degrading the sensor signal. The SERF approach keeps the spins in step for a relatively long time (about 10 milliseconds) by combining a very low magnetic field with a high operating temperature of 150 degrees C (302 degrees F): the collisions become so frequent that the spins have little time to drift apart between them. Like cars on a crowded highway, the atoms behave more consistently when conditions are crowded.
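What the sensor actually reads out is how much light reaches the detector as a function of the applied field. A minimal sketch, assuming a simple Lorentzian zero-field resonance (a toy model of the principle, not NIST's implementation, and with purely illustrative numbers), shows why the narrow SERF resonance translates into better field resolution:

```python
# Toy illustration only (not the NIST device or its actual parameters):
# near zero field the cell's optical transmission shows a resonance whose
# width is set by the spin-relaxation rate.  The SERF regime narrows this
# resonance, which steepens the slope dT/dB and hence improves sensitivity.
import numpy as np

def transmission(B, linewidth, depth=0.5):
    """Lorentzian zero-field resonance: maximum transmission at B = 0,
    falling off as the field (in tesla) disturbs the spin alignment."""
    return 1.0 - depth * B**2 / (B**2 + linewidth**2)

B = np.linspace(-2e-9, 2e-9, 1001)            # scan +/- 2 nanotesla
broad  = transmission(B, linewidth=1.0e-9)    # ordinary, collision-broadened cell
narrow = transmission(B, linewidth=1.0e-10)   # SERF-narrowed resonance

# Field sensitivity scales with the slope of T(B) just off zero field:
i = len(B) // 2 + 10                          # a point slightly away from B = 0
slope_broad  = np.gradient(broad,  B)[i]
slope_narrow = np.gradient(narrow, B)[i]
print(f"slope ratio (SERF / ordinary): {abs(slope_narrow / slope_broad):.0f}")
```

The narrower resonance gives a far steeper change of transmitted power with field near zero, so a given change in detected light corresponds to a much smaller magnetic field.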

Image credit and copyright: Loel Barr

In NIST's new mini-magnetometer, light from a single low-power (milliwatt) infrared laser (small gray cylinder at left) passes through a small container (green cube; dimensions: 3 by 2 by 1 millimeters) holding about 100 billion rubidium atoms in gas form. The cell and any sample being tested are placed inside a magnetic shield (large grey cylinder). When no sample is present, as in the top image, the atoms' "spins" (depicted inside the red circle) align themselves with the laser beam, and virtually all the light is transmitted through the cell to the detector (blue cube). In the presence of a sample emitting a magnetic field, such as a bomb or a mouse (middle and bottom images), the spins become more disordered as the field gets stronger, and less light arrives at the detector. A mouse heart produces a stronger signal than many explosive compounds found in bombs if both are located at the same distance from the sensor; at greater distances, the detected field is weaker. By monitoring the signal at the detector, scientists can determine the strength of the magnetic field.

“This result suggests that millimeter-scale, low-power, inexpensive, femtotesla magnetometers are feasible … Such an instrument would greatly expand the range of applications in which atomic magnetometers could be used,” the paper states. The new NIST mini-sensor could reduce the equipment size and costs associated with some non-invasive biomedical tests. The device also may have applications such as homeland security screening for explosives.

The device could be used in a heart-monitoring technique known as magnetocardiography (MCG), which is sensitive enough to measure the few-picotesla fields that small currents in the muscle cells of a fetal heart emit, providing information complementary to, and perhaps better than, an electrocardiogram. With further improvements, the NIST sensor might also be used in magnetoencephalography (MEG), which measures the magnetic fields produced by electrical activity in the brain, helping to pinpoint tumors or map the function of various parts of the brain.

Reference:
"Femtotesla Atomic Magnetometry with a Microfabricated Vapor Cell"
Vishal Shah, Svenja Knappe, Peter D.D. Schwindt, and John Kitching,
Nature Photonics, Vol. 1, pp. 649-652 (1 November 2007). Abstract

[We thank Media Relations, NIST for materials used in this posting]



Monday, May 21, 2007

Pea-sized Spectrometer for Precision Laser Calibration

Photographed adjacent to an ordinary green pea, the newly developed microfabricated spectrometer consists of a tiny container of atoms, a photodetector, and miniature optics [photo credit: Svenja Knappe, NIST]

In the May 7 issue of Optics Express, scientists from the National Institute of Standards and Technology (NIST) reported the development of a tiny prototype device that could replace the table-top-sized instruments used for laser calibration in atomic physics research, better stabilize optical telecommunications channels, and perhaps improve on the precision of the instrumentation used to measure length, chemicals or atmospheric gases.

This new spectrometer is the latest in a NIST series of miniaturized optical instruments such as chip-scale atomic clocks and magnetometers. The spectrometer is about the size of a green pea and consists of miniature optics, a microfabricated container for atoms in a gas, heaters and a photodetector, all within a cube about 10 millimeters on a side.

The key to the device is a tiny glass-and-silicon container that holds a small sample of atoms. The sample chambers were micromachined in a clean room and filled and sealed under vacuum to ensure the purity of the atomic gas; they can be mass-produced from silicon wafers, are much smaller, require less power, and are potentially cheaper than the traditional blown-glass containers used in laboratories. Although shrinking the container creates some limitations, NIST scientists have compensated for them by adding special features, such as heaters that keep more of the atoms in the gas state.

The package could be used to calibrate laser instruments or, if a miniature laser were included in the device, could serve as a wavelength or frequency reference. NIST tests predict that the stability and signal performance of the tiny, portable device can be comparable to those of standard table-top setups. The mini-spectrometer would offer greater precision than the physical references now used to separate fiber-optic channels, with the advantage that more channels might be packed into the same spectrum.

Reference:
"Microfabricated saturated absorption laser spectrometer"
S.A. Knappe, H.G. Robinson and L. Hollberg,
Optics Express, Vol. 15, pp. 6293-6299 (May 7, 2007), Link to Abstract



Wednesday, January 17, 2007

Set-back for Dark Energy

Observational evidence suggests that the rate of expansion of the universe is increasing with time. This contradicts the expectation of some physicists that the expansion would be continuously slowed by the gravitational attraction that holds the universe together. Cosmologists attribute the accelerating expansion to "dark energy", and some theories predict that dark energy should also show up as new gravitational-strength effects at a characteristic length scale of about 85 micrometres.

In order to explain the observed rate of expansion, dark energy must account for about 70% of all energy in the universe. But physicists still need a direct confirmation of its existence.
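The 85 micrometre figure quoted above is, roughly, the length one can build from the dark-energy density itself. Taking a dark-energy density of about 3.8 keV/cm³ (approximately 70% of the critical density; this numerical value is from the literature, not from the text above), the estimate runs:

$$
\lambda_{d} = \left(\frac{\hbar c}{\rho_{d}}\right)^{1/4}
\approx \left(\frac{3.16\times10^{-26}\ \mathrm{J\,m}}{6.1\times10^{-10}\ \mathrm{J\,m^{-3}}}\right)^{1/4}
\approx 85\ \mu\mathrm{m}.
$$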

(Photo of Dan Kapner, lead author of the paper; courtesy: the Eöt-Wash group)

In a recent paper in Physical Review Letters, a team of physicists from the Eöt-Wash group at the Center for Experimental Nuclear Physics and Astrophysics, University of Washington, Seattle, reported measurements of the force of gravity down to separations of 55 micrometres and their conclusion that the inverse-square law remains valid well below 85 micrometres with 95% confidence. In the laboratory set-up, the scientists made very precise measurements of the gravitational attraction between a detector disk suspended as a torsion pendulum and an attractor disk rotating just beneath it. Although a few other groups in various countries are engaged in such measurements, according to the Eöt-Wash researchers their experiment offers the highest sensitivity at the length scale associated with dark energy because it employs more interacting mass at the required separations than other setups.
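Short-range tests of this kind are usually interpreted in terms of a Yukawa-type addition to the Newtonian potential, with limits quoted as excluded regions in the (α, λ) plane (this is the standard parameterization used in the torsion-balance literature):

$$
V(r) = -\,\frac{G\, m_{1} m_{2}}{r}\left[\,1 + \alpha\, e^{-r/\lambda}\,\right],
$$

where $\alpha$ is the strength of the new interaction relative to gravity and $\lambda$ its range. The result quoted above corresponds, roughly, to excluding gravitational-strength forces ($|\alpha| \approx 1$) with ranges down to well below the 85 micrometre dark-energy scale.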

Those familiar with this type of precision measurement will recognize that the result places a limit on the length scale of any new interaction that theory might predict. The experiment does not rule out the existence of dark energy, but its implication is significant: it is a setback for models in which the dark energy responsible for the accelerating expansion of the universe also produces new forces at short range.

Reference:
"Tests of the Gravitational Inverse-Square Law below the Dark-Energy Length Scale"
D. J. Kapner, T. S. Cook, E. G. Adelberger, J. H. Gundlach, B. R. Heckel, C. D. Hoyle, and H. E. Swanson,
Phys. Rev. Lett. 98, 021101 (8 January 2007). Link to Abstract



Saturday, July 29, 2006

Most Accurate Clock

Photo: NIST physicist Jim Bergquist holds a portable keyboard used to set up the world's most accurate clock. The single mercury ion is contained in the silver cylinder in the foreground. ©Geoffrey Wheeler (Courtesy: National Institute of Standards and Technology)

A path-breaking research paper by physicists at the National Institute of Standards and Technology (NIST) in the July 14 issue of Physical Review Letters describes an experimental atomic clock based on a single mercury ion that is at present at least five times more precise than the national standard cesium clock. The experimental clock consists of a silver cylinder, acting as a magnetic shield, that surrounds a cryogenic vacuum system. The heart of the clock, a single mercury ion (an electrically charged atom), is brought nearly to rest inside this chamber by laser-cooling it to near absolute zero. The optical oscillations of the essentially motionless ion are used to produce the "ticks", or "heartbeat", of the world's most stable and accurate clock.

The mercury ion ticks at "optical" frequencies, much higher than the microwave frequencies of the cesium atoms in NIST-F1, the national standard and one of the world's most accurate clocks. Operating at higher frequencies divides time into smaller units and allows greater precision.
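The jump in operating frequency is large. The cesium standard is defined by a microwave transition at exactly 9 192 631 770 Hz, while the mercury ion's clock transition lies in the ultraviolet near 282 nm (a wavelength taken from the literature, not from the text above), i.e. at an optical frequency of roughly

$$
\nu_{\rm Hg^{+}} \approx \frac{c}{\lambda} \approx \frac{3\times10^{8}\ \mathrm{m/s}}{282\times10^{-9}\ \mathrm{m}} \approx 1.06\times10^{15}\ \mathrm{Hz},
$$

about $10^{5}$ times higher than the cesium frequency, so each second is sliced into correspondingly finer "ticks".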

The current version of NIST-F1 —if operated continuously—would neither gain nor lose a second in about 70 million years. The latest version of the mercury clock would neither gain nor lose a second in about 400 million years.
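Those "one second in so many million years" statements translate into fractional frequency uncertainties; a back-of-the-envelope conversion (our arithmetic, based only on the figures above) gives

$$
\frac{\Delta f}{f}\bigg|_{\rm NIST\text{-}F1} \sim \frac{1\ \mathrm{s}}{7\times10^{7}\ \mathrm{yr}\times 3.16\times10^{7}\ \mathrm{s/yr}} \approx 5\times10^{-16},
\qquad
\frac{\Delta f}{f}\bigg|_{\rm Hg^{+}} \sim \frac{1\ \mathrm{s}}{4\times10^{8}\ \mathrm{yr}\times 3.16\times10^{7}\ \mathrm{s/yr}} \approx 8\times10^{-17}.
$$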

These improved time and frequency standards would eventually lead to better synchronization in navigation and positioning systems, telecommunications networks, and wireless and deep-space communications, and would allow the design of improved probes of magnetic and gravitational fields for security and medical applications. They would also let physicists investigate whether the "fundamental constants" used in scientific research might be varying over time, a question with enormous implications for understanding the origins and ultimate fate of the universe.

Reference:
"A single-atom optical clock with high accuracy"
W.H. Oskay, S.A. Diddams, E.A. Donley, T.M. Fortier, T.P. Heavner, L. Hollberg, W.M. Itano, S.R. Jefferts, M.J. Jensen, K. Kim, F. Levi, T.E. Parker and J.C. Bergquist,
Physical Review Letters (14 July 2006)



Sunday, April 23, 2006

Proton-Electron Mass Ratio

Spectra of Hydrogen and Mercury

New measurements of starlight suggest that the ratio of the proton's mass to the electron's mass has increased by 0.002% over 12 billion years. The spectrum of hydrogen gas recorded in the laboratory is compared with the spectrum of light that passed through hydrogen clouds billions of light years away, when the universe was in its youth.

Molecular hydrogen absorbs light at specific wavelengths, and the resulting spectrum of "absorption lines" identifies hydrogen uniquely, like a bar code. The positions of the lines depend on the ratio of the mass of the proton to the mass of the electron. Of course, one must carefully take into account the expansion of the universe, which shifts these lines from higher (ultraviolet) to lower (visible) frequencies.

The researchers report in Physical Review Letters this week that the proton-to-electron mass ratio (about 1836, denoted by the Greek letter mu) has increased by about 20 parts per million over the past 12 billion years. Because the proton's mass is largely set by the strong nuclear force, this ratio is a sensitive probe of that interaction.
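As a quick consistency check on the quoted numbers (our arithmetic only):

$$
0.002\% = 2\times10^{-5} = 20\ \mathrm{ppm},
\qquad
\frac{2\times10^{-5}}{1.2\times10^{10}\ \mathrm{yr}} \approx 1.7\times10^{-15}\ \mathrm{yr}^{-1}
$$

as the average fractional rate of change of $\mu$ implied by the reported shift.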

More studies of the spectra of hydrogen gas at cosmological distances are needed to confirm whether the mass ratio has indeed changed.

Here is the link to the abstract of the paper in Physical Review Letters.



Tuesday, October 04, 2005

Physics Nobel 2005

Roy J. Glauber

The Nobel Prize for Physics was awarded to U.S. scientists Roy J. Glauber and John L. Hall and to Theodor W. Haensch of Germany for their work in the field of optics, it was announced Tuesday in Stockholm. Glauber, 80, of Harvard University was awarded half the prize money of 10 million kronor (EUR 1.1 million) "for his contribution to the quantum theory of optical coherence," the Royal Swedish Academy of Sciences said.

Glauber's groundbreaking work, reported back in 1963, is in the theoretical description of the behaviour of light particles. His contributions were described as "pioneering work in applying quantum physics to optical phenomena," the Academy said. It added that Glauber had helped explain "fundamental differences between hot sources of light such as light bulbs, with a mixture of frequencies and phases, and lasers which give a specific frequency and phase". Possible applications of his work on quantum phenomena include the encryption of messages in communication technology.

John L. Hall

Hall, 71, of the University of Colorado and Haensch, 63, of the Max Planck Institute for Quantum Optics and Munich's Ludwig Maximilian University, share the other half of the prize "for their contributions to the development of laser-based precision spectroscopy, including the optical frequency comb technique".

Hall and Haensch's work concerns determining the colour of the light of atoms and molecules with extreme precision. The Royal Swedish Academy of Sciences said Hall and Haensch had "made it possible to measure frequencies with an accuracy of fifteen digits". This could enable the development of extremely accurate clocks and improved satellite-based navigation systems (GPS), as well as the study of the constants of nature over time.

Theodor W. Haensch

Three Cheers!!!



Saturday, August 20, 2005

Reports on Light

Photon Clock: Applied physicists at the California Institute of Technology have created a tiny disk that vibrates steadily like a tuning fork while it is pumped with light. This is the first micro-mechanical device that has been operated at a steady frequency by the action of photons alone. Reporting in recently published issues of the journals Optics Express (July 11) and Physical Review Letters (June 10 and July 11), Kerry Vahala and group members explained how the tiny, disk-shaped resonator made of silica can be made to vibrate mechanically when hit by laser light. The disk, which is less than the width of a human hair, vibrates about 80 million times per second when its rim is pumped with light.

Controlling Light: A discovery by Princeton researchers may lead to an efficient method for controlling the transmission of light and improve new generations of communications technologies powered by light rather than electricity. The discovery could be used to develop new structures that would work in the same fashion as an elbow joint in plumbing by enabling light to make sharp turns as it travels through photonic circuits. Fiber-optic cables currently used in computers, televisions and other devices can transport light rapidly and efficiently, but cannot bend at sharp angles. Information in the light pulses has to be converted back into cumbersome electrical signals before it can be sorted and redirected to its proper destination. The results are reported in the Aug. 18 issue of Nature.

Controlling Light Speed: A team of researchers from the Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland, has demonstrated for the first time that it is possible to control the speed of light in an optical fiber. Their findings, published in Applied Physics Letters, could have implications ranging from optical computing to the fiber-optic telecommunications industry. The researchers were able both to slow light down and to speed it up in a simple optical fiber, a capability that could have profound technological consequences.
