

2Physics Quote:
"Ever since the birth of quantum mechanics we faced the difficulty to reconcile the superposition behavior of quantum particles and our intuitive experience in dealing with macroscopic objects which should occupy definite states at all times and independently of the observers. Leggett and Garg subsequently formulated the question qualitatively by means of the derivation of a series of inequalities based upon the premises of macroscopic realism and noninvasive measurability. In practice, the main experimental challenge comes from the implementation of truly noninvasive measurements." -- Zong-Quan Zhou, Chuan-Feng Li, Guang-Can Guo
(Read their article: "Experimental Violation of Leggett-Garg Type Inequalities Using Quantum Memories" )

Sunday, November 22, 2015

Using Phase Changes to Store Information in the Magnetic Permeability

Authors: Alan S. Edelstein*, John Timmerwilke, Jonathan R. Petrie 

Affiliation: US Army Research Laboratory, Adelphi, Maryland, USA

*Email: aedelstein@cox.net

Finding better methods for storing information was a serious problem for the earliest computers; considerable effort is still being devoted to decreasing the cost and increasing the density and lifetime of stored information. Initial methods of storage included acoustic delay lines and pixels on the display of cathode ray tubes [1]. Though the current methods for storing information, which include magnetic hard disks, magnetic tape, and various forms of random access memory (RAM), are far superior to these early methods, they still have significant limitations.

As an example, the information on hard disks or magnetic tape is written by a magnetic field and stored as regions, i.e. bits, having different directions for their remanent magnetization. Thus, this information can be erased by exposure to a magnetic field. Furthermore, because thermal fluctuations can upset the spin direction of the bits, there is a trade-off between the information density on hard disks and how long the information can be stored without acquiring too many incorrect bits. Maintaining a magnetic hard disk lifetime of about seven years has made it difficult to increase the information density in hard disk drives. Magnetic tape degrades in about 20 years.
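The scale of this trade-off can be illustrated with the Néel-Arrhenius law for the thermal relaxation time of a single-domain grain, τ = τ0·exp(KV/kBT): shrinking the grain shrinks the energy barrier exponentially fast. The anisotropy constant, grain sizes and attempt time below are illustrative textbook-scale values, not figures from the article:

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
YEAR = 3.156e7       # seconds per year

def retention_time(k_u, volume, temp_k, tau0=1e-9):
    """Neel-Arrhenius relaxation time tau = tau0 * exp(K*V / (k_B*T))
    of a single-domain magnetic grain; tau0 ~ 1 ns is the attempt time."""
    return tau0 * math.exp(k_u * volume / (K_B * temp_k))

def grain_volume(diameter_m):
    """Volume of a spherical grain."""
    return math.pi / 6 * diameter_m**3

# Illustrative anisotropy K = 3.5e5 J/m^3 at room temperature:
t_10nm = retention_time(3.5e5, grain_volume(10e-9), 300)
t_8nm = retention_time(3.5e5, grain_volume(8e-9), 300)
print(t_10nm / YEAR, t_8nm / YEAR)  # centuries vs seconds
```

A modest reduction in grain diameter, needed to raise density, collapses the retention time from centuries to seconds, which is the instability that HAMR (discussed below) is designed to work around.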

To avoid these problems, we have been developing a new approach for storing information that we call Magnetic Phase Change Memory (MAG PCM). Instead of using the direction of magnetic remanence, information is stored in bits of soft ferromagnetic material having different values for their magnetic permeability. The initial magnetic permeability of a soft ferromagnetic material is reversible and depends only on its atomic structure; thus, it is independent of the applied magnetic field. The permeability of iron-rich FeGa single crystal alloys [2] is an interesting example of soft magnetic behavior.

The property we use is that high permeability bits attract magnetic fields and low permeability bits do not. To read the information stored in bits with high or low permeability, we measure their effect on a probe field. We have primarily focused on writing and changing the permeability of bits in films of the amorphous ferromagnet 2826 MB Metglas by heating with a laser. Amorphous ferromagnetic materials lack long-range atomic order. Thus, they have little crystalline anisotropy or coercivity and are soft ferromagnets with large values for their permeability. When amorphous materials are heated above their glass temperature, they crystallize, acquiring larger values for their coercivity and smaller values for their permeability. The glass temperature of 2826 MB Metglas is 410 °C. At temperatures as high as 200 °C, 2826 MB Metglas will retain its high permeability for hundreds of years.

Figure 1: a) Microscope view of three 50 micron wide crystallized lines written into amorphous Metglas. b) Voltage of the MTJ reader as it is moved over the three crystalline lines in the presence of a 32 Oe probe field.

We have read the effect on a probe magnetic field near each bit using both magnetic tunnel junction (MTJ) sensors [3,4] and spin transfer oscillators [5]. Figure 1a shows a microscope image of three crystalline lines in Metglas written by a focused 1.966 micron (Tm fiber) laser. The output voltage of an MTJ sensor is plotted in Figure 1b as it is swept over the crystallized Metglas lines shown in Figure 1a. One sees that the crystallized lines affect a 32 Oe probe field. The crystallized lines do not attract the probe field as much as the amorphous Metglas film does, which causes the magnetic sensor to measure a larger field. Smaller nanometer-sized bits were created using e-beam lithography. Figure 2a is a microscope image of two square 0.9 mm arrays of 300 nm bits of amorphous Metglas. Figure 2b shows a scanning electron microscope (SEM) image of the 300 nm diameter bits. Laser heating was used to crystallize all of the bits in the left array. Figure 3 shows the voltage output of the MTJ sensor when it is moved over the two arrays before and after the laser heating. One sees that the bits in the left array no longer attract the magnetic field lines of the probe field.
Figure 2. (a) Microscope image of two square 0.9 mm arrays of 300 nm diameter bits of Metglas; (b) scanning electron microscope image of the 300 nm diameter bits of Metglas.

This new approach for storing information has several advantages. One can write bits with at least three different values [6] for their permeability. The bits will not be corrupted by a magnetic field or thermal upsets and therefore should last decades. It should be possible to write nm sized bits economically by an adaptation of heat assisted magnetic recording (HAMR) [7], a technology that is being developed by hard disk companies such as Seagate and Western Digital to cope with the problem mentioned above of maintaining stability against thermal upsets. In HAMR, a laser and a near field transducer are used to heat nm sized regions to 710 °C to decrease the coercivity so that they can be written without using as large a magnetic field. What we need to do is simpler, in that we do not need a magnetic field and, for archiving, we do not need to rewrite.

Figure 3: Magnetic tunnel junctions scans before (red, o) and after (blue, x) the left array of 300nm Metglas bits in Fig. 2 were crystallized by heating with a laser.

MAG PCM has the potential for a combination of properties not found in other storage technologies. It should have decades of longevity and the rapid access and high density of future hard disks. We have a clear path for developing MAG PCM into commercial products for long term storage applications such as archiving. Though it is unnecessary for archiving, we have found that we can rewrite our bits [8], i.e., return a crystallized bit to an amorphous state.

This work was done while we were at the US Army Research Laboratory.

[1] D.R. Hartree, M.H.A. Newman, M.V. Wilkes, F.C. Williams, J.H. Wilkinson, A.D. Booth, “A discussion on computing machines”, Proceedings of the Royal Society of London Series A - Mathematical and Physical Sciences, 195:1042, 265 (1948). Abstract.
[2] Harsh Deep Chopra, Manfred Wuttig, “Non-Joulian magnetostriction”, Nature, 521, 340-343 (2015). Abstract.
[3] J.R. Petrie, K.A. Wieland, R.A. Burke, G.A. Newburgh, J.E. Burnette, G.A. Fischer, A.S. Edelstein, “A non-erasable magnetic memory based on the magnetic permeability”, Journal of Magnetism and Magnetic Materials, 361, 262 (2014). Abstract.
[4] John Timmerwilke, J.R. Petrie, K.A. Wieland, Raymond Mencia, Sy-Hwang Liou, C.D. Cress, G.A. Newburgh, A.S. Edelstein, “Using magnetic permeability bits to store information”, Journal of Physics D: Applied Physics, 48, 405002 (2015). Abstract.
[5] J.R. Petrie, S. Urazhdin, K.A. Wieland, A.S. Edelstein, “Using a spin torque nano-oscillator to read memory based on the magnetic permeability”, Journal of Physics D: Applied Physics, 47, 055002 (2014). Abstract.
[6] J.R. Petrie, K.A. Wieland, J.M. Timmerwilke, S.C. Barron, R.A. Burke, G.A. Newburgh, J.E. Burnette, G.A. Fischer, and A.S. Edelstein, “A multi-state magnetic memory dependent on the permeability of Metglas”, Applied Physics Letters, 106, 142403 (2015). Abstract.
[7] M.H. Kryder and Soo Kim Chang, “After hard drives—what comes next?” IEEE Transactions on Magnetics, 45, 3406 (2009). Abstract.
[8] Unpublished data.


Sunday, November 15, 2015

A New Way To Weigh A Star

Nils Andersson (left) and Wynn Ho 

Authors: Nils Andersson, Wynn Ho

Affiliation: Mathematical Sciences and STAG Research Centre, University of Southampton, UK. 

You probably have a pretty good idea of your own weight. And if you need an exact answer it is relatively easy to find out. Get the bathroom scales out, step up and read off the result. But have you ever asked yourself what was actually involved in that measurement? Have you considered that what actually happened was that gravity’s pull was countered by the electromagnetic interaction of the atoms that make up the surface of the scale, and what you actually measured was how hard the atoms had to work to push back on your feet? Possibly not, and the chances are that you have never really considered how one would weigh a distant star, either.

As it turns out, this question has a fairly straightforward answer, which also involves gravity, electromagnetism… and a bit of luck. If you want to figure out how heavy a particular star is, and you are lucky enough that this star has a close companion, then all you need to do is track the star’s motion. The orbital motion of a double-star system is dictated by gravity and you can figure out how much the stars weigh using the same arguments that we use to figure out the mass of the moon (going around the Earth) and the Earth (circling the Sun). If you want to be a bit more precise you should use Einstein’s curved spacetime theory for gravity rather than Newton’s inverse square-law, but this may be a luxury in this exercise.
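Newton's form of Kepler's third law makes this concrete: the total mass of the pair follows directly from the orbital period and separation, M1 + M2 = 4π²a³/(GP²). A quick sanity check with the Earth-Sun system recovers one solar mass:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
AU = 1.496e11        # astronomical unit, m
YEAR = 3.156e7       # year, s

def total_mass(semi_major_axis_m, period_s):
    """Total mass of a binary from Kepler's third law:
    M1 + M2 = 4 * pi**2 * a**3 / (G * P**2)."""
    return 4 * math.pi**2 * semi_major_axis_m**3 / (G * period_s**2)

# Earth orbiting the Sun: a = 1 AU, P = 1 year -> ~1 solar mass
print(total_mass(AU, YEAR) / M_SUN)
```

Tracking the individual motions of both stars about the common centre of mass then splits this total into the two individual masses.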

Using this technique, astronomers have weighed many stars with great precision and they have also managed to work out the masses of a number of pulsars [1]. This is particularly exciting as these systems stretch our understanding of several aspects of fundamental physics.

A pulsar is a highly magnetised rotating neutron star formed when a massive star runs out of nuclear fuel. At this point it can no longer hold itself up against gravity and it starts falling in on itself. This leads to a spectacular explosion – a supernova – the remains of which may be either a neutron star or a black hole. Neutron stars are nature’s own counterparts to the Large Hadron Collider [2]. In essence, they provide a link between astronomy and laboratory work in both high-energy and low-temperature physics. By weighing these stars we gain insight into physics under extreme conditions [3].

Pulsars are named for their rotating beam of electromagnetic radiation, which is observed by telescopes as it sweeps past the Earth, just like the familiar beam of a lighthouse [4]. They are renowned for their incredibly stable rate of rotation [5], but young pulsars occasionally experience so-called glitches, where they are found to speed up for a very brief period of time [6]. The prevailing idea is that these glitches provide evidence of exotic states of matter in the star’s interior [3]. The glitches arise when a rapidly spinning superfluid within the star transfers rotational energy to the star's crust, the solid outer layer that is tracked by observations; picture the crust as a bowl containing a mysterious soup. Imagine the bowl spinning at one speed and the soup spinning faster. Friction between the inside of the bowl and its contents can cause the bowl to speed up. Whenever this happens, the more soup there is, the faster the bowl will be made to rotate.

Interestingly, it seems that the superfluid soup provides us with a new way of weighing these stars. This new technique is very different from the usual approach, as it is based not on gravity but on nuclear physics, and it can also be used for stars in isolation: the star does not need to have a companion.
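The bowl-and-soup picture can be made quantitative with conservation of angular momentum: when the faster-spinning superfluid suddenly couples to the crust, the size of the spin-up encodes how much superfluid there is. A deliberately simplified sketch, with rigid coupling and purely illustrative numbers (not a fit to any real pulsar):

```python
def glitch_spinup(i_crust, i_fluid, omega_crust, omega_fluid):
    """Common rotation rate after the 'soup' (superfluid, moment of
    inertia i_fluid) suddenly couples to the 'bowl' (crust, i_crust):
    total angular momentum is conserved."""
    return (i_crust * omega_crust + i_fluid * omega_fluid) / (i_crust + i_fluid)

# A crust lagging a slightly faster superfluid that carries 1% of the
# star's moment of inertia:
before = 1.0
after = glitch_spinup(0.99, 0.01, before, 1.001)
print((after - before) / before)   # fractional glitch size

# Doubling the amount of 'soup' roughly doubles the glitch,
# so observed glitch sizes constrain the superfluid fraction:
bigger = glitch_spinup(0.98, 0.02, before, 1.001)
```

Nuclear physics then relates that superfluid fraction of the moment of inertia to the star's mass, which is the essence of the new set of scales.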

In a recent paper in Science Advances [7], we have developed this exciting new idea, which relies on a detailed understanding of neutron star superfluidity and the dynamics of quantum vortices - a kind of ultra-slim tornado - by means of which these systems mimic large-scale rotation. Our results are promising and have important implications for the new generation of revolutionary radio telescopes, like the Square Kilometre Array (SKA [8]) and the Low Frequency Array (LOFAR [9]), that are being developed by large international collaborations. The discovery and monitoring of many more pulsars is one of the key scientific goals of these projects. We now have a set of scales that may allow us to figure out how much these stars weigh, as well.

[1] See, for example, the list maintained at stellarcollapse.org
[2] See Large Hadron Collider
[3] J.M. Lattimer, M. Prakash, "The Physics of Neutron Stars", Science, 304, 536-542 (2004). Abstract
[4] The discovery of pulsars was recognized with the Nobel prize in physics in 1974
[5] Gravity tests using the stability of the timing of pulsars was recognized with the Nobel prize in physics in 1993
[6] A catalog of glitches is maintained by Jodrell Bank Centre for Astrophysics. 
[7] W.C.G. Ho, C.M. Espinoza, D. Antonopoulou, N. Andersson, "Pinning down the superfluid and measuring masses using pulsar glitches," Science Advances, 1, e1500578 (2015). Full Article
[8] See Square Kilometre Array.  
[9] See LOFAR


Sunday, November 08, 2015

A Continuously Pumped Reservoir of Ultracold Atoms

From Left to Right: (top row) Jan Mahnke, Ilka Kruse, Andreas Hüper, (bottom row) Wolfgang Ertmer, Jan Arlt, Carsten Klempt.

Authors: Jan Mahnke1, Ilka Kruse1, Andreas Hüper1, Stefan Jöllenbeck1, Wolfgang Ertmer1, Jan Arlt2, Carsten Klempt1

1Institut für Quantenoptik, Gottfried Wilhelm Leibniz Universität Hannover, Germany
2Institut for Fysik og Astronomi, Aarhus Universitet, Aarhus C, Denmark

Over the last decades, the quantum regime has been accessed through very different systems, including trapped ions, micromechanical oscillators, superconducting circuits and dilute ultracold gases. Most of these systems show the desired quantum-mechanical features only at ultra-low temperatures. The lowest temperatures [1] to date are reached in dilute atomic gases by a combination of laser cooling and evaporative cooling. This approach has two disadvantages: it relies on the internal structure of the atoms due to the laser cooling, and it can only cool discrete samples instead of continuous beams due to the evaporative cooling.

However, many applications would greatly benefit from a continuous source of cold atoms, for example the sympathetic cooling [2] of molecules. Here, the molecules are brought into contact with a bath of cold atoms to redistribute the thermal energy through collisions. Ideally, such a cold bath is realized in the absence of disturbing laser light, as the rich internal structure of the molecules results in a broad absorption spectrum and any photon can potentially harm the cooling process. Another application of a continuous beam of cold atoms is continuous matter interferometry. Atom interferometry is already in use for the precise measurement of many observables, including time [3], gravity [4] and rotation [5]. These measurements could greatly benefit from a continuous observation instead of the sequential interrogation of discrete samples. Even though continuous sources are highly desired, no continuous source without the application of laser light has been demonstrated in the microkelvin regime yet.

One possible realization of an ultracold continuous sample was proposed theoretically [6], where a conservative and static trap is loaded by an atomic beam. In this scheme, pre-cooled atoms are guided towards the entrance barrier of an elongated trap with a finite trap depth (see figure 1). If the atoms pass the entrance barrier, they follow the elongated potential until they are reflected by the end of the trap. The strong confinement in the radial direction ensures that most atoms collide with another atom before they reach the entrance barrier again. These collisions allow for a redistribution of the kinetic energy. Consequently, some atoms acquire a kinetic energy larger than the trap depth and escape the trap. Other atoms lose energy and stay trapped. If the trap parameters are chosen well, an equilibrium condition with a surprisingly large phase-space density may be reached.
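The energy redistribution at the heart of this scheme can be caricatured with a toy Monte Carlo model: atoms are injected just below the trap depth, random pairs re-share their energy in collisions, and anything boosted above the depth escapes. All parameters here are illustrative, not taken from the proposal in [6]:

```python
import random

def load_trap(n_steps=5000, inject_e=0.9, depth=1.0, seed=2):
    """Toy model of continuous loading: each step injects one atom just
    below the trap depth, then one random pair collides and re-shares
    its kinetic energy; atoms pushed above the depth evaporate."""
    rng = random.Random(seed)
    atoms = []
    for _ in range(n_steps):
        atoms.append(inject_e)
        if len(atoms) >= 2:
            i, j = rng.sample(range(len(atoms)), 2)
            total = atoms[i] + atoms[j]
            u = rng.random()
            atoms[i], atoms[j] = u * total, (1 - u) * total
            atoms = [e for e in atoms if e < depth]   # evaporative loss
    return atoms

trapped = load_trap()
mean_e = sum(trapped) / len(trapped)
print(len(trapped), mean_e)  # the ensemble grows and is colder than the beam
```

Because every escaping atom carries away more than the trap depth while each arriving atom brings in less, the trapped ensemble necessarily equilibrates below the injection energy, which is the qualitative content of the proposal.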
Figure 1: 3D plot of the static trapping potential in the x–z-plane through the point of the trap minimum.

In our recent publication [7], we demonstrate the first experimental implementation of such a continuous loading of a conservative trap. Our realization is based on a mesoscopic atom chip (see figure 2 and Ref. [8]), a planar structure of millimeter-sized wires. The mesoscopic chip generates the magnetic fields for a three-dimensional magneto-optical trap, a magnetic waveguide and the static trapping potential described above. The three-dimensional magneto-optical trap is periodically loaded with an ensemble of atoms. These ensembles are launched into the magnetic waveguide, where they overlap and produce a continuous atom beam with varying intensity. This beam traverses an aperture which optically isolates the loading region from the static trapping region. In this trapping region, the atom beam is directed onto the elongated magnetic trap, where the atoms accumulate.
Figure 2: Photograph of the mesoscopic atom chip with millimeter-scale wires. The magneto-optical trap is in the lower left area and the static trap is generated in the top right area. The bent wires create a guide connecting the two regions.

With this loading scheme, we create and maintain an atomic reservoir with a total number of 3.8 × 10^7 trapped atoms at a temperature of 102 µK, corresponding to a peak phase-space density of 9 × 10^-8 h^-3. This is the first continuously loaded cloud in the microkelvin regime without the application of laser light. Such a continuously replenished ensemble of ultracold atoms presents a new tool for metrological tasks and for the sympathetic cooling of other atomic species, molecules or nanoscopic solid state systems. The scheme is also very versatile in creating cold samples of atoms and molecules directly, as it does not rely on any internal level structure.
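For orientation, the phase-space density is the peak density times the cube of the thermal de Broglie wavelength. Assuming the atoms are 87Rb (the species is not stated in this excerpt), the reported numbers imply a peak density of order 10^16 m^-3:

```python
import math

H = 6.62607e-34       # Planck constant, J s
K_B = 1.380649e-23    # Boltzmann constant, J/K

def thermal_de_broglie(mass_kg, temp_k):
    """Thermal de Broglie wavelength: lambda = h / sqrt(2*pi*m*k_B*T)."""
    return H / math.sqrt(2 * math.pi * mass_kg * K_B * temp_k)

m_rb87 = 86.909 * 1.66054e-27     # assumed 87Rb atomic mass, kg
lam = thermal_de_broglie(m_rb87, 102e-6)

# Peak density implied by rho_peak = n_peak * lambda**3 = 9e-8:
n_peak = 9e-8 / lam**3
print(lam, n_peak)   # ~2e-8 m and ~1e16 m^-3
```
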
Figure 3: Photograph of the experimental setup. The atom chip is visible outside of the vacuum chamber at the top. In the front is the glass cell and the optics for the two-dimensional magneto-optical trap, which is used to load the three-dimensional magneto-optical trap on the chip.

[1] A. E. Leanhardt, T. A. Pasquini, M. Saba, A. Schirotzek, Y. Shin, D. Kielpinski, D. E. Pritchard,  W. Ketterle. “Cooling Bose-Einstein condensates below 500 picokelvin”, Science, 301, 1513 (2003). Abstract.
[2] Wade G. Rellergert, Scott T. Sullivan, Svetlana Kotochigova, Alexander Petrov, Kuang Chen, Steven J. Schowalter, Eric R. Hudson, “Measurement of a large chemical reaction rate between ultracold closed-shell 40Ca atoms and open-shell 174Yb+ ions held in a hybrid atom-ion trap”, Physical Review Letters, 107, 243201 (2011). Abstract.
[3] R. Wynands and S. Weyers. “Atomic fountain clocks”, Metrologia, 42 (3), S64 (2005). Abstract.
[4] A. Louchet-Chauvet, S. Merlet, Q. Bodart, A. Landragin, F. Pereira Dos Santos, H. Baumann, G. D'Agostino, C. Origlia, “Comparison of 3 absolute gravimeters based on different methods for the e-MASS project”, IEEE Transactions on Instrumentation and Measurement, 60(7), 2527-2532 (2011). Abstract.
[5] J. K. Stockton, K. Takase, and M. A. Kasevich. “Absolute geodetic rotation measurement using atom interferometry”, Physical Review Letters, 107, 133001 (2011). Abstract.
[6] C. F. Roos, P. Cren, D. Guéry-Odelin, and J. Dalibard. “Continuous loading of a non-dissipative atom trap”, Europhysics Letters, 61, 187 (2003). Abstract.
[7] J. Mahnke, I. Kruse, A. Hüper, S. Jöllenbeck, W. Ertmer, J. Arlt, C. Klempt. “A continuously pumped reservoir of ultracold atoms”, Journal of Physics B: Atomic Molecular and Optical Physics, 48, 165301 (2015). Abstract.
[8] S. Jöllenbeck, J. Mahnke, R. Randoll, W. Ertmer, J. Arlt, C. Klempt. “Hexapole-compensated magneto-optical trap on a mesoscopic atom chip”, Physical Review A, 83, 043406 (2011). Abstract.


Sunday, November 01, 2015

A Magnetic Wormhole

(From left to right) Carles Navau, Alvaro Sanchez, Jordi Prat-Camps

Authors: Jordi Prat-Camps, Carles Navau, Alvaro Sanchez 

Affiliation: Departament de Física, Universitat Autònoma de Barcelona, Spain.

Link to Superconductivity Group UAB >>

Is it possible to build a wormhole in a lab? Taking into account that large amounts of gravitational energy would be required [1], this seems an impossible task. However, redefining a wormhole as a path between two points in space that is completely undetectable, Greenleaf and colleagues [2] suggested in 2007 a (theoretical) way of realizing an electromagnetic wormhole capable of guiding light through an invisible path. They demonstrated that this is topologically equivalent to sending the light through another spatial dimension. However, such a wormhole required metamaterials with extreme properties, which prevented its construction.

In our work, we have constructed an actual 3D wormhole working for magnetostatic fields. It allows the passage of a magnetic field between distant regions while the region of propagation remains magnetically invisible. Our wormhole takes advantage of the possibilities that magnetic metamaterials offer for shaping static magnetic fields [3]. These metamaterials can be constructed using existing magnetic materials that provide extreme magnetic permeability values, ranging from zero (superconductors) to effectively infinity (ferromagnets).
Figure 1: (Left) 3D sketch of the magnetic wormhole, showing how the magnetic field lines (in red) of a small magnet at the right are transferred through it. (Right) From a magnetic point of view the wormhole is magnetically undetectable so that the field of the magnet seems to disappear at the right and reappear at the left in the form of a magnetic monopole. (Image credit: Jordi Prat-Camps and Universitat Autònoma de Barcelona).

The magnetic wormhole requires three properties: (i) to magnetically decouple a given volume from the surrounding 3D space, (ii) to have the whole object magnetically undetectable, and (iii) to have magnetic fields propagating through its interior. The first two properties are achieved by constructing a 3D magnetic cloak. Based on previous ideas [4], such a cloak can be made by surrounding a superconducting sphere with a specially designed ferromagnetic (meta)surface, such that the magnetic signature of the superconductor is cancelled by the ferromagnet. For the third property, we use magnetic hoses, also made of magnetic metamaterials, as developed in [5].
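For the idealized case of a homogeneous ferromagnetic shell around a superconducting sphere in a uniform applied field, the cancellation condition follows from a textbook magnetostatic boundary-value problem. The sketch below solves it and checks that the external dipole term vanishes at the required permeability; the radii are illustrative, and the real device uses a discretized metasurface rather than a homogeneous shell:

```python
def cloak_permeability(r1, r2):
    """Relative permeability of a homogeneous ferromagnetic shell
    (inner radius r1, outer radius r2) that cancels the dipole field
    of an ideal superconducting sphere (mu = 0) of radius r1:
        mu = (2*r2**3 + r1**3) / (2*(r2**3 - r1**3))
    """
    return (2 * r2**3 + r1**3) / (2 * (r2**3 - r1**3))

def external_dipole(mu, r1, r2, h0=1.0):
    """Dipole coefficient b of the external scalar potential
    phi = (-h0*r + b/r**2)*cos(theta); b = 0 means undetectable."""
    # Zero normal B on the superconductor surface at r1, plus continuity
    # of tangential H and normal B at r2, give a linear system; solving:
    d = 3 * h0 / (2 * mu * (1 / r2**3 - 1 / r1**3) - 4 / r1**3 - 2 / r2**3)
    return d * (2 * r2**3 / r1**3 + 1) + h0 * r2**3

mu_star = cloak_permeability(1.0, 2.0)           # illustrative radii
print(mu_star, external_dipole(mu_star, 1.0, 2.0))  # dipole cancels
print(external_dipole(1.0, 1.0, 2.0))            # bare superconductor: b != 0
```

Note that a thick shell only needs a modest permeability, which is what makes a practical superconductor-ferromagnet bilayer feasible.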

The parts composing the magnetic wormhole are shown in fig. 2: a central magnetic hose to guide the magnetic field from one end of the hose to the opposite one, and a magnetic cloak composed of a superconducting-ferromagnetic bilayer to make the hose magnetically invisible.
Figure 2: (a) 3D image of the magnetic wormhole, formed by concentric shells: from outside inwards, an external metasurface made of ferromagnetic pieces (b), an internal superconducting shell made of coated conductor pieces (c), and a magnetic hose made of ferromagnetic foil (d). (e) Cross-section view of the wormhole, including the plastic formers (in green and red) used to hold the different parts. (Image credit: Jordi Prat-Camps and Universitat Autònoma de Barcelona).

Experimental results clearly demonstrate [6] the two desired properties for the wormhole: (i) magnetic field from a source at one end of the wormhole appears at the opposite end (actually as a kind of isolated magnetic monopole), and (ii) the overall device is magnetically undetectable (it does not noticeably distort an applied magnetic field, even a non-uniform one).

Besides the scientific interest per se in the realization of an object with properties of a wormhole, our device may have applications in practical situations where magnetic fields have to be transferred without distorting a given field distribution, as in magnetic resonance imaging.

Acknowledgements: We thank Spanish project MAT2012-35370 and Catalan project 2014-SGR-150 for financial support. A.S. acknowledges a grant from ICREA Academia, funded by the Generalitat de Catalunya. J. P.-C. acknowledges an FPU grant from the Spanish Government (AP2010-2556).

[1] Michael S. Morris, Kip S. Thorne, Ulvi Yurtsever, "Wormholes, Time Machines, and the Weak Energy Condition", Physical Review Letters, 61, 1446 (1988). Abstract.
[2] Allan Greenleaf, Yaroslav Kurylev, Matti Lassas, Gunther Uhlmann, "Electromagnetic Wormholes and Virtual Magnetic Monopoles from Metamaterials", Physical Review Letters, 99, 183901 (2007). Abstract.
[3] Steven M. Anlage, "Magnetic Hose Keeps Fields from Spreading", Physics, 7, 67 (2014). Full Article.
[4] Fedor Gömöry, Mykola Solovyov, Ján Šouc, Carles Navau, Jordi Prat-Camps, Alvaro Sanchez, "Experimental realization of a magnetic cloak", Science, 335, 1466 (2012). Abstract.
[5] C. Navau, J. Prat-Camps, O. Romero-Isart, J. I. Cirac, A. Sanchez. "Long-Distance Transfer and Routing of Static Magnetic Fields", Physical Review Letters, 112, 253901 (2014). Abstract.
[6] Jordi Prat-Camps, Carles Navau, Alvaro Sanchez. "A Magnetic Wormhole", Scientific Reports 5, 12488 (2015). Full Article.


Sunday, October 25, 2015

Broadband Reflectionless Metasheets: Frequency-Selective Transmission and Perfect Absorption

Ihar Faniayeu (left) and Viktar Asadchy

Authors: Ihar Faniayeu1,2, Viktar Asadchy2,3, Younes Ra'di3, Sergey Khakhomov2, Igor Semchenko2, Sergey Tretyakov3

1Research Institute of Electronics, Shizuoka University, Hamamatsu, Japan,
2Department of General Physics, Francisk Skorina Gomel State University, Gomel, Belarus,
3Department of Radio Science and Engineering, Aalto University, Aalto, Finland.

People these days exhibit a strong desire to control their surroundings, which -- in addition to tangible objects -- involves the electromagnetic radiation omnipresent as radio waves, heat, and light. In this regard, the latest trend is to use electromagnetic metamaterials to transform the flow and absorption of electromagnetic waves. This trend has opened up new possibilities in imaging, telecommunications, signal processing, environmental sensing, medicine and other areas of science and technology. While metamaterial-based absorbers are typically tailored to exhibit efficient absorption at the desired resonance frequency, they are usually not transparent at other, non-operative frequencies and may exhibit strong unwanted back-reflections [1], which limit their functionality. Here we demonstrate for the first time a metamaterial-based absorber which uses a 3D architecture without an opaque ground plane, leading to complete off-resonance transparency, as illustrated schematically in Fig. 1 (a) and (b).
Figure 1(a): Schematic design and working principle of metamaterial absorber.
Figure 1(b): Transmission, reflection and absorption spectra of the structure illustrate its perfect absorbance (A=1) at the resonance and complete transparency (T=1) away from it.

Our work published in Physical Review X [2] presents both the theoretical concept and an experimental realization of such an invisible metamaterial absorber for the microwave range. Conceptually, the structure consists of a periodic array of right- and left-handed single- and double-turn helices made of lossy metal, embedded in a dielectric. The balanced periodic arrangement of these bi-anisotropic elements leads to an extremely broadband resonant response, which can be utilized in transmission arrays and absorbers [3]. In the case of the absorber, strong resistive losses occurring in the metal transform the absorbed electromagnetic field energy into heat. This concept is quite general and is therefore applicable in the entire electromagnetic spectrum. A practical demonstration of such an absorber uses thin chromium-nickel wire helices embedded in a plastic foam sheet, as shown in Fig. 2. As expected, a strong absorption resonance is seen around 3 GHz (see Fig. 3), whereas reflection remains low in the entire measured range.
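The off-resonance transparency can be pictured with a minimal single-resonance model of a reflectionless sheet: with no reflected wave, energy conservation gives A = 1 - |t|^2, and a Lorentzian transmission zero at the resonance yields A = 1 there and T ≈ 1 far away. The line shape and numbers below are illustrative, not fits to the measured data:

```python
def transmittance(f_hz, f0=3.0e9, gamma=0.2e9):
    """|t|^2 for a toy reflectionless resonance t = (f-f0)/((f-f0)+i*gamma);
    the transmission zero at f0 models the absorption resonance near 3 GHz."""
    df = f_hz - f0
    return df * df / (df * df + gamma * gamma)

def absorbance(f_hz, **kw):
    # With r = 0 (reflectionless sheet), energy conservation gives
    # A = 1 - |t|^2 - |r|^2 = 1 - |t|^2.
    return 1.0 - transmittance(f_hz, **kw)

print(absorbance(3.0e9), absorbance(10.0e9))  # 1.0 at resonance, ~0 far away
```

It is exactly this T ≈ 1 behaviour away from f0 that allows stacking layers tuned to different frequencies without cross-talk.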
Figure 2(a): Fabricated absorbers of single-turn helices comprising 480 elements embedded in plastic foam.
Figure 2(b): Fabricated absorbers of double-turn helices comprising 324 elements embedded in plastic foam.
Figure 3: Measured and numerically simulated reflection, transmission, and absorption coefficients for the fabricated metasurfaces with (a) single- and (b) double-turn helical inclusions. Experimental data is shown by points, the solid lines are guides to the eye, and the numerically simulated data are shown by dashed lines.

This concept and its experimental verification suggest that it is possible to realize metamaterial-based absorbers having significant advantages over other existing designs. We stress here that the off-resonance transparency of a single absorber layer allows the realization of multilayer structures whose layers operate at different frequencies simultaneously without cross-talk, thus drastically expanding the functionality of the device. The structure is tunable by changing its unit cell size. The terahertz and even the visible wavelength range can be reached, provided that the electromagnetic dispersion of the metal is taken into account and a fabrication technique allowing realization of the downscaled lattice is available. It is expected that currently available nanofabrication techniques, such as 3D printing and direct laser writing lithography, will allow practical realization of such metamaterial structures, thus making a further step toward more versatile tailoring of electromagnetic radiation.

We thank Prof. Vygantas Mizeikis for helpful discussions and support with this article.

[1] Claire M. Watts, Xianliang Liu, Willie J. Padilla, "Metamaterial Electromagnetic Wave Absorbers", Advanced Materials, 24, OP98 (2012). Abstract.
[2] V.S. Asadchy, I.A. Faniayeu, Y. Ra’di, S.A. Khakhomov, I.V. Semchenko, S.A. Tretyakov, "Broadband Reflectionless Metasheets: Frequency-Selective Transmission and Perfect Absorption", Physical Review X, 5, 031005 (2015). Abstract.
[3] V.S. Asadchy, I.A. Faniayeu, Y. Ra'di, I.V. Semchenko, S.A. Khakhomov, "Optimal arrangement of smooth helices in uniaxial 2D-arrays", Advanced Electromagnetic Materials in Microwaves and Optics (Metamaterials), 7th International Congress, pp. 244–246 (16-21 September, 2013). Abstract.


Sunday, October 18, 2015

Dynamic Weakening by Acoustic Fluidization in a Model for Earthquake Occurrence

(From left to right) Eugenio Lippiello, Ferdinando Giacco, Lucilla de Arcangelis,  Massimo Pica Ciamarra.

Authors: Eugenio Lippiello2,4, Ferdinando Giacco1,2, Massimo Pica Ciamarra1,5, Lucilla de Arcangelis3,4

1CNR-SPIN, Department of Physics, University of Naples “Federico II”, Italy
2Department of Mathematics and Physics, Second University of Naples and CNISM, Caserta, Italy
3Department of Industrial and Information Engineering, Second University of Naples and CNISM, Aversa (CE), Italy
4Kavli Institute for Theoretical Physics, University of California, Santa Barbara, USA
5Division of Physics and Applied Physics, School of Physical and Mathematical Sciences, Nanyang Technological University, Singapore.

Many studies have shown that the frictional strength of fault systems is controlled by the rheology of the crushed and ground-up rock produced during past sliding events, known as fault gouge. Treating the gouge as a granular material confined between two rough plates, we started to think of an earthquake as a transition from a jammed state, in which the gouge resists the existing stresses, to a flowing one. By connecting the physics of earthquakes to that of granular systems, the analogy allows us to investigate -- from an original perspective -- an important geophysical question: why are earthquakes observed when the estimated shear to normal stress ratio is much smaller than the static Coulomb coefficient of rock?

We have therefore developed a “granular” model of a seismic fault, in which the tectonic drive is implemented by coupling the confining boundary to a spring whose free end moves at constant velocity (Fig. 1). The model exhibits very long stick (jammed) phases abruptly interrupted by slips (jumps), typical of earthquake sequences in real seismic faults [1]. The slip sizes follow a broad distribution in agreement with the Gutenberg-Richter law for the distribution of magnitudes in instrumental catalogs [2].
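The stick-slip phenomenology described above can be illustrated with a drastically simplified one-dimensional sketch: a single block pulled through a spring whose free end advances at constant velocity, with static and dynamic friction standing in for the jammed and flowing gouge. This is not the authors' three-dimensional granular model; all names and parameter values are invented for illustration.

```python
# A minimal 1-D stick-slip sketch: one block pulled through a spring whose
# free end advances at constant velocity, with static/dynamic friction
# standing in for the jammed and flowing gouge.  Parameter values are
# illustrative only.

def stick_slip(k=1.0, v_drive=0.01, f_static=1.0, f_dynamic=0.5,
               mass=1.0, dt=0.01, steps=120_000):
    x, v, t = 0.0, 0.0, 0.0       # block position, velocity, time
    slips = []                     # displacement released in each slip event
    slipping = False
    slip_start_x = 0.0
    for _ in range(steps):
        t += dt
        force = k * (v_drive * t - x)             # spring force from the drive
        if not slipping and abs(force) > f_static:
            slipping = True                        # static friction overcome
            slip_start_x = x
        if slipping:
            direction = v if v != 0.0 else force
            friction = -f_dynamic if direction > 0 else f_dynamic
            v += (force + friction) / mass * dt
            x += v * dt
            if v <= 0.0 and abs(force) < f_static:
                v, slipping = 0.0, False           # block re-sticks
                slips.append(x - slip_start_x)
        # while stuck the block does not move; stress simply builds up
    return slips
```

With these values, roughly ten slip events of size about 2(f_static − f_dynamic)/k occur over the run; making `v_drive` smaller lengthens the stick phases without changing the slip size, which is the hallmark of stick-slip dynamics.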
Figure 1: The model consists of spherical grains, representing the fault gouge, confined between two rough rigid layers at constant pressure P0. The grains interact through a normal viscoelastic force and a tangential frictional one. Stick-slip dynamics is induced by driving the system via a spring mechanism along x at shear stress σ.

The numerical model allows for a detailed investigation of the evolution of the system as the shear stress increases and failure is approached. While the structure remains essentially unchanged, we observe that the system's response to external perturbations increases: the fault becomes weaker on approaching a slip instability. Interestingly, this weakening occurs only for perturbations in a narrow frequency range, independent of the fault orientation [3]. In particular, the weakening was observed even for perturbations that increase the confining pressure or reduce the shear stress, i.e. perturbations that should strengthen the system.

The mechanism of "acoustic fluidization" explains this unexpected behavior. It was proposed by Melosh in 1979 [4,5] to account for the fact that several major fault systems exhibit a resistance to shear stress much smaller than that observed in experiments on rock-on-rock friction. According to this mechanism, during seismic fracture a fraction of the total energy is released as short-wavelength elastic waves that diffuse inside the fault because of scattering by small-scale heterogeneities. These waves produce oscillations in the direction normal to the fault plane that can balance the confining normal pressure and eventually induce seismic failure.

In our model we explicitly show that the maximum response is obtained at a characteristic frequency ω*, typical of acoustic waves bouncing back and forth within the fault. The advantage of numerical simulations is that the clock can be turned back arbitrarily, so the time of the next slip is known in advance. In particular, our study shows that even perturbations of very small amplitude, at the characteristic frequency ω*, can induce failure. This could explain the observation of aftershocks at great distances (thousands of kilometers) from their triggering earthquake.
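As an order-of-magnitude illustration (the sound speed and layer thickness below are invented, not taken from the paper), the fundamental frequency of an acoustic wave bouncing back and forth across a gouge layer of thickness H with sound speed c is roughly ω* ≈ πc/H:

```python
import math

# Back-of-the-envelope estimate of the resonant frequency of acoustic waves
# bouncing back and forth across a gouge layer: omega* ~ pi * c / H for the
# fundamental standing-wave mode.  Numbers are illustrative only.

def resonant_frequency(sound_speed_m_s, layer_thickness_m):
    """Angular frequency (rad/s) of the fundamental acoustic mode."""
    return math.pi * sound_speed_m_s / layer_thickness_m

omega_star = resonant_frequency(sound_speed_m_s=1000.0,   # loose granular material
                                layer_thickness_m=0.1)    # a 10 cm gouge layer
frequency_hz = omega_star / (2 * math.pi)                 # equals c / (2H)
```

Halving the layer thickness doubles the resonant frequency, so in this picture ω* probes the geometry of the gouge layer rather than the fault orientation.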

Furthermore, we have found that acoustic waves at the same resonant frequency spontaneously emerge inside the model fault even when no external perturbation is applied. The contour map of the power spectrum as a function of time, plotted in Fig. 2, shows that the characteristic frequency ω* is spontaneously excited as soon as sliding starts. This suggests that acoustic oscillations reduce the confining pressure and promote failure at a shear stress smaller than expected.
Figure 2: Map of the logarithm of the power spectral density as a function of time t (horizontal axis) and frequency ω (vertical axis). The dashed vertical lines indicate the slip occurrence times. A peak at the characteristic frequency ω* appears immediately before each slip.

[1] Massimo Pica Ciamarra, Eugenio Lippiello, Cataldo Godano, Lucilla de Arcangelis, "Unjamming Dynamics: The Micromechanics of a Seismic Fault Model", Physical Review Letters, 104, 238001 (2010). Abstract.
[2] M. Pica Ciamarra, E. Lippiello, L. de Arcangelis, C. Godano, "Statistics of slipping event sizes in granular seismic fault models", Europhysics Letters, 95, 54002 (2011). Abstract.
[3] F. Giacco, L. Saggese, L. de Arcangelis, E. Lippiello, M. Pica Ciamarra, "Dynamic Weakening by Acoustic Fluidization during Stick-Slip Motion", Physical Review Letters, 115, 128001 (2015). Abstract.
[4] H. Jay Melosh, "Acoustic fluidization: A new geologic process?", Journal of Geophysical Research, 84, 7513 (1979). Abstract.
[5] H.J. Melosh, "Dynamical weakening of faults by acoustic fluidization", Nature, 379, 601 (1996). Abstract.


Sunday, October 11, 2015

Communicating Quantum States with Alice on a Satellite

Some authors of "Experimental Satellite Quantum Communication" [7] during a night shift: (right to left) Davide Bacco, Simone Gaiarin, Daniele Dequal, Giuseppe Vallone and Paolo Villoresi.

Authors: Giuseppe Vallone1, Davide Bacco1, Daniele Dequal1, Simone Gaiarin1, Vincenza Luceri2, Giuseppe Bianco3, Paolo Villoresi1

1Dipartimento di Ingegneria dell’Informazione, Università degli Studi di Padova, Italy 
2e-GEOS spa, Matera, Italy 
3Matera Laser Ranging Observatory, Agenzia Spaziale Italiana, Matera, Italy.

The exchange of quantum bits, or qubits, is a fundamental process in all quantum information protocols. The faithful transport of the fragile quantum content of a photon is needed inside prototype photonic quantum computers as well as for the teleportation of a given state.

Past 2Physics articles by this group:
August 31, 2014: "A True Randomness Generator Exploiting a Very Long and Turbulent Path" by Davide G. Marangon, Giuseppe Vallone, Paolo Villoresi.
November 24, 2013: "How to Realize Quantum Key Distribution with a Limited and Noisy Link" by Paolo Villoresi.
May 19, 2008: "The Frontier of Quantum Communication is the Space"
by Paolo Villoresi.

Image 1: The scenario in which satellites use quantum communications to distribute secure keys to a global communications network.

Moreover, in the key application of quantum key distribution (QKD), which allows two terminals to create a private key by exploiting the laws of quantum physics, such an exchange of qubits is expected to cover very long distances [1]. Indeed, in order to connect two embassies, two corporate branches, and so on with secure communications, effective quantum communication schemes on a planetary scale are needed. Fiber channels were investigated first for the realization of QKD, and several commercial devices based on optical cables are already in operation. Fibers are very efficient up to about 100 km, and the present limit for QKD in fiber is 307 km, as demonstrated in a recent experiment [2]; beyond that scale quantum repeaters are needed, which are presently under development in advanced research labs. A radically different approach is to use a space channel and exploit a satellite as the sender or the receiver. From the link-budget analysis and the effect of turbulence on propagation, it is evident that the transmitter (Alice) is most conveniently located on the satellite and the receiver (Bob) on the ground [3,4,5].
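The case for a space channel follows from the exponential scaling of fiber loss with distance; a minimal sketch with a textbook attenuation value (not the paper's link budget):

```python
# Fiber loss grows exponentially with distance, while a free-space link
# mainly pays a geometric diffraction penalty that grows far more slowly.
# The attenuation value below is a typical textbook figure for telecom
# fiber at 1550 nm, used here purely for illustration.

FIBER_LOSS_DB_PER_KM = 0.2

def fiber_transmission(distance_km):
    """Fraction of photons surviving a fiber of the given length."""
    return 10 ** (-FIBER_LOSS_DB_PER_KM * distance_km / 10)

# At ~300 km (the current fiber QKD record) roughly one photon in a million
# survives; at 1000 km essentially none do, hence the appeal of satellites.
```

At 300 km the transmission is 10^-6; at 1000 km it is 10^-20, i.e. one surviving photon per 10^20 sent, which no realistic source or detector can compensate.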

The first attempt at quantum communication in space was made in 2008 by Villoresi et al. [6], in which photons launched from a ground station were reflected by corner-cube retroreflectors (CCRs) and aimed back at the Earth. The Japanese satellite Ajisai was used to emulate an optical transmitter in space. That work demonstrated spectral, spatial and temporal filtering capable of picking out the return photons despite combined up- and down-link losses as strong as 157 dB.
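To put 157 dB in perspective, a quick bit of arithmetic (illustrative only, not the experiment's detailed link budget) shows what this loss means per photon:

```python
# What 157 dB of combined up- and down-link loss means for single-photon
# counting: the survival probability per photon, and the pulse size needed
# to expect even one detection.  Illustrative arithmetic only.

ROUND_TRIP_LOSS_DB = 157

survival = 10 ** (-ROUND_TRIP_LOSS_DB / 10)   # about 2e-16 per photon
photons_for_one_click = 1.0 / survival        # about 5e15 photons per pulse
```

Detecting anything at all under such losses is only possible because the spectral, spatial and temporal filters suppress the background by comparable factors.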

In the present experiment, reported recently in Ref. [7], we introduced novel schemes for temporal synchronization and for the optical interface, achieving a significant improvement in signal-to-noise ratio (SNR), dark counts and total transmissivity. We proved that a generic polarization-encoded qubit preserves its characteristics in a channel whose source is, again, a retroreflector in orbit, measured on the ground by a state analyzer connected to an astronomical telescope designed for satellite laser ranging [9]. Moreover, we were able to demonstrate a communication protocol by measuring not just one polarization, but the complete set of four polarization states required for protocols such as QKD [5].

A very important parameter in quantum communications (QC) is the quantum bit error rate (QBER), defined as the fraction of received bits that are wrong in a given time slot. If this factor is too high (the threshold depends on the chosen protocol and on the sending rate), the security of the generated key is not guaranteed [8]. In our experiment, using several LEO (Low Earth Orbit) satellites (Starlette, Stella, Larets, Jason-2), we showed that our method and setup allow secure communication over very long distances (~2000 km). The QBER measured in different runs was of the order of a few percent. This attests that even with high losses, variable attenuation and high background, a quantum key distribution system works, and an unconditionally secure key, needed for encryption, can be generated.
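The QBER bookkeeping can be sketched as follows. The 11% threshold used here is the standard asymptotic bound for BB84 with one-way post-processing, quoted as a representative figure rather than the exact criterion applied in the experiment, and the counts are made up for illustration:

```python
# QBER: the fraction of received bits that are wrong, compared against a
# protocol-dependent security threshold (about 11% for BB84 with one-way
# post-processing).  The threshold and counts below are illustrative.

BB84_QBER_THRESHOLD = 0.11

def qber(wrong_bits, total_bits):
    """Quantum bit error rate: fraction of received bits that are wrong."""
    return wrong_bits / total_bits

def key_is_secure(wrong_bits, total_bits, threshold=BB84_QBER_THRESHOLD):
    """True if a secure key can in principle be distilled."""
    return qber(wrong_bits, total_bits) < threshold
```

A few-percent QBER, as measured in the experiment, sits comfortably below such thresholds, so a secure key can still be distilled even over a ~2000 km channel.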

Image 2: Picture of the SLR laser and MLRO station situated in Matera.

For the first time, qubits bounced back from space were measured and analyzed in different polarization states. Moreover, all the results were obtained with existing satellites, normally used for geodetic studies and other activities, that are routinely equipped with CCRs. The optical setup used in the experiment can be integrated into many optical ground stations (OGS) and presents an easy interface between quantum and classical signals. Furthermore, the technology of satellite laser ranging (SLR) and of classical satellite communications was exploited to synchronize the transmitter and the receiver, even though synchronization is far from trivial with a satellite in motion.

What’s next? The possibility of sending and receiving single photons over very long distances paves the way for many future experiments and places quantum physics and quantum communication in a privileged position. First, the large effort made by governments and universities is surely rewarded. From a scientific point of view these experimental results are exciting because they enable new experiments built on this technology. In particular, QKD could be realized with a small, compact device capable of changing the polarization of photons, creating a base element for quantum two-way protocols. Additionally, experiments such as long-distance entanglement distribution, Bell-inequality tests and teleportation protocols could become possible in the next few years.

With this experiment it was demonstrated not only that free-space quantum key distribution is a ready technology, but also that satellite quantum communication is now possible and realizable. The results open the way towards a global space quantum network, in which optical ground stations talk with satellites and vice versa, creating a globally secure network.

Acknowledgments: The work was carried out within QuantumFuture, one of ten Strategic Projects funded by the University of Padova in 2009. Coordinated by Prof. Villoresi, the project has established the Quantum Communication Laboratory and engaged four research groups in a joint activity: Quantum Communications, Quantum Control Theory, Quantum Astronomy and Quantum Optics.

[1] Valerio Scarani, Helle Bechmann-Pasquinucci, Nicolas J. Cerf, Miloslav Dušek, Norbert Lütkenhaus, Momtchil Peev, "The security of practical quantum key distribution", Review of Modern Physics, 81, 1301 (2009). Abstract.
[2] Boris Korzh, Charles Ci Wen Lim, Raphael Houlmann, Nicolas Gisin, Ming Jun Li, Daniel Nolan, Bruno Sanguinetti, Rob Thew, Hugo Zbinden, “Provably Secure and Practical Quantum Key Distribution over 307 km of Optical Fibre”, Nature Photonics, 9, 163 (2015). doi:10.1038/nphoton.2014.327. Abstract.
[3] Cristian Bonato, Markus Aspelmeyer, Thomas Jennewein, Claudio Pernechele, Paolo Villoresi, Anton Zeilinger, “Influence of satellite motion on polarization qubits in a Space-Earth quantum communication link”, Optics Express, 14, 10050 (2006). Full Article.
[4] Andrea Tomaello, Cristian Bonato, Vania Da Deppo, Giampiero Naletto, Paolo Villoresi, “Link budget and background noise for satellite quantum key distribution,” Advances in Space Research, 47, 802 (2011). Abstract.
[5] C. Bonato, A. Tomaello, V. Da Deppo, G. Naletto, and P. Villoresi, “Feasibility of satellite quantum key distribution,” New Journal of Physics, 11, 45017 (2009). Full Article.
[6] P Villoresi, T Jennewein, F Tamburini, M Aspelmeyer, C Bonato, R Ursin, C Pernechele, V Luceri, G Bianco, A Zeilinger and C Barbieri, "Experimental verification of the feasibility of a quantum channel between space and Earth", New Journal of Physics, 10, 033038 (2008). Full Article. 2Physics Article.
[7] Giuseppe Vallone, Davide Bacco, Daniele Dequal, Simone Gaiarin, Vincenza Luceri, Giuseppe Bianco, Paolo Villoresi, “Experimental Satellite Quantum Communication”, Physical Review Letters, 115, 040502 (2015). Abstract.
[8] Davide Bacco, Matteo Canale, Nicola Laurenti, Giuseppe Vallone, Paolo Villoresi, "Experimental quantum key distribution with finite-key security analysis for noisy channels", Nature Communications, 4:2363, doi: 10.1038/ncomms3363 (2013). Abstract. 2Physics Article.
[9] http://ilrs.gsfc.nasa.gov/


Tuesday, October 06, 2015

Physics Nobel Prize 2015: Neutrino Oscillations

Takaaki Kajita (left) and Arthur B. McDonald

The Nobel Prize in Physics 2015 is awarded jointly to Takaaki Kajita (of Super-Kamiokande Collaboration, University of Tokyo, Japan) and Arthur B. McDonald (Sudbury Neutrino Observatory Collaboration, Queen’s University, Canada) "for the discovery of neutrino oscillations, which shows that neutrinos have mass".

The Nobel Prize in Physics 2015 recognises Takaaki Kajita and Arthur B. McDonald for their key contributions to the experiments which demonstrated that neutrinos change identities. This metamorphosis requires that neutrinos have mass. The discovery has changed our understanding of the innermost workings of matter and can prove crucial to our view of the universe.

Around the turn of the millennium, Takaaki Kajita presented the discovery that neutrinos from the atmosphere switch between two identities on their way to the Super-Kamiokande detector in Japan.

Meanwhile, the research group in Canada led by Arthur B. McDonald demonstrated that neutrinos from the Sun were not disappearing on their way to Earth. Instead, they were captured with a different identity when arriving at the Sudbury Neutrino Observatory.

A neutrino puzzle that physicists had wrestled with for decades had been resolved. Compared to theoretical calculations of the number of neutrinos, up to two thirds of the neutrinos were missing in measurements performed on Earth. Now, the two experiments discovered that the neutrinos had changed identities.

The discovery led to the far-reaching conclusion that neutrinos, which for a long time were considered massless, must have some mass, however small.
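In the simplest two-flavour picture, this identity change is governed by the standard oscillation probability, which vanishes identically when the mass splitting is zero; that is why observing oscillations implies neutrino mass. A small sketch with illustrative parameter values (not the fitted values from the experiments):

```python
import math

# Standard two-flavour neutrino oscillation probability,
#   P(nu_a -> nu_b) = sin^2(2*theta) * sin^2(1.27 * dm2 * L / E),
# with dm2 in eV^2, L in km and E in GeV.  If dm2 = 0 (massless neutrinos)
# the probability is identically zero, so observing flavour change implies
# that neutrinos have mass.  Parameter values below are illustrative.

def oscillation_probability(theta_rad, dm2_ev2, length_km, energy_gev):
    return (math.sin(2 * theta_rad) ** 2
            * math.sin(1.27 * dm2_ev2 * length_km / energy_gev) ** 2)

# Massless case: no oscillation, whatever the baseline.
p_massless = oscillation_probability(math.pi / 4, 0.0, 13000, 1.0)

# A non-zero mass splitting produces flavour change over long baselines,
# e.g. atmospheric neutrinos crossing the Earth to reach the detector.
p_massive = oscillation_probability(math.pi / 4, 2.4e-3, 13000, 1.0)
```

The baseline dependence is exactly what Super-Kamiokande saw: neutrinos arriving from above (short L) kept their identity, while those crossing the Earth (long L) had partly switched.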

For particle physics this was a historic discovery. Its Standard Model of the innermost workings of matter had been incredibly successful, having resisted all experimental challenges for more than twenty years. However, as it requires neutrinos to be massless, the new observations clearly showed that the Standard Model cannot be the complete theory of the fundamental constituents of the universe.

The discoveries rewarded with this year’s Nobel Prize in Physics have yielded crucial insights into the all but hidden world of neutrinos. After photons, the particles of light, neutrinos are the most numerous particles in the entire cosmos. The Earth is constantly bombarded by them.

Many neutrinos are created in reactions between cosmic radiation and the Earth’s atmosphere. Others are produced in nuclear reactions inside the Sun. Thousands of billions of neutrinos are streaming through our bodies each second. Hardly anything can stop them passing; neutrinos are nature’s most elusive elementary particles.

Now the experiments continue and intense activity is underway worldwide in order to capture neutrinos and examine their properties. New discoveries about their deepest secrets are expected to change our current understanding of the history, structure and future fate of the universe.

Homepage of Takaaki Kajita >>
Homepage of Arthur B. McDonald >>
