
2Physics

2Physics Quote:
"Many of the molecules found by ROSINA DFMS in the coma of comet 67P are compatible with the idea that comets delivered key molecules for prebiotic chemistry throughout the solar system and in particular to the early Earth increasing drastically the concentration of life-related chemicals by impact on a closed water body. The fact that glycine was most probably formed on dust grains in the presolar stage also makes these molecules somehow universal, which means that what happened in the solar system could probably happen elsewhere in the Universe."
-- Kathrin Altwegg and the ROSINA Team

(Read Full Article: "Glycine, an Amino Acid and Other Prebiotic Molecules in Comet 67P/Churyumov-Gerasimenko"
)

Sunday, November 17, 2013

Universality in Network Dynamics

Baruch Barzel (left) and Albert-László Barabási (right)

Authors: Baruch Barzel1,2 and Albert-László Barabási1,2,3

Affiliation:
1Center for Complex Network Research and Departments of Physics, Computer Science and Biology, Northeastern University, Boston, USA
2Center for Cancer Systems Biology, Dana-Farber Cancer Institute, Harvard Medical School, Boston, USA
3Department of Medicine, Brigham and Women’s Hospital, Harvard Medical School, Boston, USA.

We think of statistical mechanics as the theory of interacting particles, gases and liquids. Its toolbox, however – indeed, its way of thought – goes beyond the domain of materials science. In a broader perspective, what statistical mechanics provides us with is a bridge between the microscopic description of a system and its observed macroscopic behavior. With it we can track the way in which system-level phenomena emerge from the mechanistic description of the system’s interacting components: for instance, how the blind interactions between pairs of magnetic spins lead to the seemingly cooperative phenomenon of magnetism, or how individual human interactions lead to the spread of ideas and perceptions.

At the heart of many of these emergent behaviors lies a rather simple characteristic of the system – its geometry. Indeed, many predictions in statistical mechanics can be traced back to the underlying geometry of the system’s interactions; most importantly, the system’s dimension [1,2]. This focus on geometry and dimensionality naturally portrays a picture of particles interacting in a defined topological structure, such as a lattice (or lattice-like) embedding. Recently, however, we have begun to shift our attention towards less structured systems – complex systems - which exhibit highly random and non-localized interaction patterns. Social networks, biological interactions and technological systems, such as the Internet or the power grid, are just a few examples, all of which are described by complex interaction networks, profoundly different from lattices, and hence they confront us with a potentially new class of observed dynamical behaviors.

The complex network geometry is distinguished by two key topological properties. First is the famous small world phenomenon. To understand it let us begin by observing the behavior of structured, lattice-like, systems. In such systems the number of nodes, i.e. the volume, and the typical distance between nodes are closely related – they follow the scaling law N(l) ∼ l^d, where d represents the system’s dimension. In contrast, networked systems are much more crowded. Within a small distance you can visit an exponentially proliferating volume of nodes [3],

N(l) ∼ e^(l/l₀),     (1)

with l₀ a characteristic length, so that the average distance between nodes does not scale with the system’s volume. This is the secret that allows enormous systems, such as society, to be connected by extremely short paths. Even just a small amount of structural randomness is sufficient to push the system into the small world limit [4], rendering the unique geometry described by (1) to be, in fact, not so unique. Indeed, Eq. (1) is observed by practically all complex systems, biological, social and technological. The second notion we must put aside when dealing with complex systems is the idea of a uniform pattern of connections. The random nature of complex systems invokes heterogeneity between the nodes: each node interacts with a different number of neighbors, leading to a distribution of node degrees, P(k). Most complex systems have been shown to feature a fat-tailed P(k), describing an extremely heterogeneous structure in which a majority of low degree nodes coexists with a small number of highly connected hubs [5].
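To see the contrast behind Eq. (1) numerically, the short sketch below (ours, purely illustrative; it uses the standard networkx library with arbitrary graph sizes and models) compares how the average inter-node distance grows with system size for a two-dimensional lattice and for a scale-free network:

```python
import networkx as nx

# Average shortest-path distance <l> vs. system size N for a 2D lattice
# (distance grows as a power of N) and for a scale-free network
# (distance grows only logarithmically with N -- the small-world limit).
for n_side in [10, 20, 30]:
    lattice = nx.grid_2d_graph(n_side, n_side)
    print("lattice     N =", n_side**2,
          " <l> =", round(nx.average_shortest_path_length(lattice), 2))

for n in [100, 400, 900]:
    sf = nx.barabasi_albert_graph(n, m=2, seed=1)   # scale-free graph
    print("scale-free  N =", n,
          " <l> =", round(nx.average_shortest_path_length(sf), 2))
```

The lattice distances grow steadily with N, while the scale-free distances barely budge – the small world behavior summarized by Eq. (1).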

This complex geometry raises an array of questions on the expected behavior of complex systems: What role does the broad span of degrees play in the system? Are the hubs really more influential than the average node? Are they more or, perhaps, less sensitive to their neighbors’ influence? We would also like to understand the impact of the small world topology. If all nodes are connected by short network paths, does that imply that they are also within each other’s dynamical reach? That would describe a system in which all nodes are impacted by all other nodes, a rather unstable scenario.

Of course, the answer to these questions lies not purely in the geometry, but rather in the interplay between this geometry and the dynamics of the system, specifically in the meaning that lies behind the network links. By meaning we refer to the fact that the reduction of a complex system into a network structure gives no account of the type of interaction that occurs between node pairs – from chemical binding in biological networks to viral infection in social systems – the links in different systems represent different dynamical mechanisms or rules of interaction. We account for this diversity by writing a most general equation for the activities, x_i, of all nodes,

dx_i/dt = W(x_i) + Σ_j A_ij Q(x_i, x_j).     (2)

The first term on the r.h.s. of (2) accounts for the self-dynamics and the second term sums over all of i’s interacting partners (A_ij is the adjacency matrix, or the network). With the proper choice of the nonlinear functions W(x_i) and Q(x_i, x_j), Eq. (2) quite generally covers a broad array - indeed almost all - deterministic pairwise dynamics, among which are models frequently implemented on networks, such as biochemistry [6], genetic regulation [7], social [8] and ecological phenomena [9]. Hence Eq. (2) accounts for the inner mechanisms of the interaction rules, providing the microscopic description of the system’s dynamics.
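As a concrete illustration of Eq. (2), the sketch below integrates dynamics of this form on a random network. The particular choices W(x) = 1 − x (production minus degradation) and Q(x_i, x_j) = x_j/(1 + x_j) (a saturating activation) are illustrative placeholders, not the specific models analysed in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200
A = (rng.random((N, N)) < 0.03).astype(float)   # random adjacency matrix A_ij
np.fill_diagonal(A, 0)

def W(x):          # illustrative self-dynamics: production minus degradation
    return 1.0 - x

def Q(xj):         # illustrative saturating interaction, Q(x_i, x_j) = x_j / (1 + x_j)
    return xj / (1.0 + xj)

x = rng.random(N)                      # initial activities
dt, steps = 0.01, 5000
for _ in range(steps):                 # forward-Euler integration of Eq. (2)
    x = x + dt * (W(x) + A @ Q(x))

print("steady-state activities: min %.2f, max %.2f" % (x.min(), x.max()))
```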

To observe the system’s macroscopic behavior we use a perturbative approach. First we induce a small permanent perturbation dx_j on the steady state of node j, representing, for instance, the knockout of a gene in a cellular network or the failure of a node in a technological system. We then measure the response of all other nodes in the system, constructing the response matrix

G_ij = (dx_i/x_i) / (dx_j/x_j).     (3)

This translates the structural properties of the system – who is connected to whom – into its dynamical patterns of influence, namely who is affected by whom. The G_ij matrix provides an extremely detailed account of the system’s response to external perturbations – a term for each pair of nodes in the system. We obtain more meaningful insight on the system’s behavior by focusing on a set of aggregate functions extracted from (3), designed to directly answer the questions posed above.
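One simple, if brute-force, numerical stand-in for Eq. (3) is to clamp each node in turn at a slightly perturbed steady-state value, let the rest of the system re-equilibrate, and record the relative shifts. The sketch below does this for the same illustrative dynamics as above; the normalization follows Eq. (3), but the numerical recipe is ours, not the paper’s analytical framework:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50
A = (rng.random((N, N)) < 0.08).astype(float)      # small random network
np.fill_diagonal(A, 0)

def relax(x, clamp=None, value=None, dt=0.01, steps=5000):
    """Relax the illustrative dynamics dx_i/dt = 1 - x_i + sum_j A_ij x_j/(1+x_j).
    If `clamp` is given, that node is held at `value` (a permanent perturbation)."""
    for _ in range(steps):
        x = x + dt * (1.0 - x + A @ (x / (1.0 + x)))
        if clamp is not None:
            x[clamp] = value
    return x

x0 = relax(np.ones(N))                             # unperturbed steady state
G, eps = np.zeros((N, N)), 0.05
for j in range(N):                                 # perturb each node j in turn
    xj = relax(x0.copy(), clamp=j, value=x0[j] * (1 + eps))
    G[:, j] = np.abs((xj - x0) / x0) / eps         # relative responses, as in Eq. (3)

off_diag = ~np.eye(N, dtype=bool)
print("typical off-diagonal response G_ij: %.3g" % G[off_diag].mean())
```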

Local dynamics: To understand the dynamical role of a node’s degree we characterize the dynamics between a node, i, and its immediate surrounding by measuring: (i) the impact, I_i, of i’s perturbation on its direct neighbors; (ii) the response of i to neighboring perturbations. The latter quantifies i’s dynamical stability, S_i, since a dramatic response to perturbations characterizes i as unstable, whereas a suppressed response implies high stability. We show that these two functions depend on i’s degree, k_i, as

I_i ∼ k_i^φ,     S_i ∼ k_i^δ,     (4)

where the exponents δ and φ are fully determined by the leading terms of the functions W(x_i) and Q(x_i, x_j) of Eq. (2). The exponents δ and φ determine the dynamical consequence of the network’s degree heterogeneity by relating the dynamical impact/response of a node to its degree. Eq. (4) tells us that different dynamical systems will be characterized by different exponents δ and φ, depending on the form of Eq. (2), and consequently they will express different patterns of influence between hubs and peripheral nodes.

Most importantly, δ and φ predict the existence of four highly distinctive dynamical universality classes. If, for example, φ=0, we find that a node’s impact is independent of its degree. All nodes exhibit uniform impact, regardless of the nature of the degree distribution, P(k) (fat-tailed or bounded). Hence even under extreme structural heterogeneity, i.e. a fat-tailed P(k), all nodes will have a comparable dynamical impact. If, however, φ≠0, the dynamical impact is driven by P(k), and consequently, for a scale-free network, the system will feature heterogeneous impact, in which most nodes have a tiny effect on their surroundings and a small group of selected nodes dominates the system’s local dynamics. A similar distinction, between uniform and heterogeneous stability, emerges from the universal value of δ. What is striking is that these crucial distinctions are determined by a rather general formulation of the dynamics (2), just a small number of leading terms. Finer details, such as higher order terms of (2), specific rate constants or the detailed structure of the network, play no role. As a result the predicted universality classes encompass a broad range of distinctive dynamics. For instance, all biochemical interactions are predicted to feature uniform stability; epidemic spreading processes will inevitably display heterogeneous impact.
Figure 1: From microscopic diversity to macroscopic universality. (a) Dynamical networks exhibit diverse types of interaction dynamics, from ecological to biological and social networks. (b) This diversity is described by a general dynamical equation (2), whose terms account for the microscopic interaction mechanisms between the nodes. (c) We observe the system’s behavior through Gij (3), capturing the patterns of influence characterizing the system. Systems are restricted to limited classes of dynamical behavior: uniform (blue) versus heterogeneous (orange) impact (top-left); uniform versus heterogeneous stability (bottom-left); conservative versus dissipative propagation (bottom-center). These classes determine other dynamical functions such as the cascade size distribution (top-right) and the correlation distribution (bottom-right).

Propagation: Perhaps the most crucial, and yet quite elusive, dynamical function in a network environment is the propagation of perturbations from the perturbed source to more distant nodes. This is captured by the distance-dependent correlation function, Γ(l), which tracks the impact of a perturbation at distance l. We find, analytically, that perturbations decay exponentially as

Γ(l) ∼ e^(-βl),     (5)

where, like δ and φ, the dissipation rate, β, is fully determined by the leading terms of W(x_i) and Q(x_i, x_j) in (2). The correlation function describes the efficiency of the network in propagating information. A rapid decay of Γ(l) ensures that perturbations remain localized, dying out before they reach most nodes. A slow decay describes efficient penetration of perturbations. Again, we find two distinctive dynamical classes: if β=0 in (5) we observe conservative dynamics, in which perturbations penetrate the network without loss, conserving the mass of the initial perturbation. If, however, β>0, the system is dissipative, perturbations decay exponentially, and the dynamics is localized in the vicinity of the perturbed node. The case where β<0, describing an unstable response with perturbations growing out of control, is prohibited in the small-world limit (1), a crucial feature explaining the observed resilience of many complex systems to perturbations.
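Given such a numerically built response matrix, the decay law (5) can be probed directly by averaging G_ij over node pairs at equal network distance and fitting log Γ(l) against l. The sketch below repeats the toy construction (so that it runs on its own, with networkx supplying the graph and the shortest-path distances) and extracts an effective dissipation rate β; the dynamics are again only illustrative stand-ins:

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
N = 50
graph = nx.erdos_renyi_graph(N, 0.08, seed=0)
A = nx.to_numpy_array(graph)

def relax(x, clamp=None, value=None, dt=0.01, steps=5000):
    for _ in range(steps):                          # same illustrative dynamics as above
        x = x + dt * (1.0 - x + A @ (x / (1.0 + x)))
        if clamp is not None:
            x[clamp] = value
    return x

x0 = relax(np.ones(N))
G, eps = np.zeros((N, N)), 0.05
for j in range(N):
    xj = relax(x0.copy(), clamp=j, value=x0[j] * (1 + eps))
    G[:, j] = np.abs((xj - x0) / x0) / eps

dist = dict(nx.all_pairs_shortest_path_length(graph))
gamma = {}                                          # Gamma(l): mean response at network distance l
for i in range(N):
    for j in range(N):
        if i != j and j in dist[i]:
            gamma.setdefault(dist[i][j], []).append(G[i, j])

ls = sorted(gamma)
log_gamma = [np.log(np.mean(gamma[l])) for l in ls]
beta = -np.polyfit(ls, log_gamma, 1)[0]             # slope of log Gamma(l) vs l
print("estimated dissipation rate beta ≈ %.2f" % beta)
```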

As with δ and φ, the value of β is insensitive to details. The specific parameters and detailed network structure have a marginal effect. Hence this distinction between conservative vs. dissipative dynamics represents a broad classification, where each class consists of many different dynamical systems. For instance, all spreading processes are dissipative; all biochemical or population dynamics are conservative. Another way to state this is that in order to transfer a system between any of the above universality classes, uniform vs. heterogeneous or conservative vs. dissipative, it is not enough to tune a parameter or make a minor change in the dynamics. One must change the leading terms of Eq. (2), that is, essentially alter the internal mechanisms of the interactions: change the system from, say, an epidemic spreading social system to a genetically regulated biological system. Hence the predictions (4) and (5) touch upon the defining features of the network’s dynamics.

The prediction of the local dynamics (4) together with the propagation (5) allows us to predict a much broader set of macroscopic dynamical functions. Indeed, the global response of the system to a perturbation is a direct consequence of the coupling between the local response and the propagation to more distant nodes. In that sense the universal exponents δ, φ and β serve as building blocks for the construction of a broad range of universal functions extracted from G_ij. To demonstrate this we found that we can use (4) and (5) to predict the precise form of the cascade size distribution, P(C), and the correlation distribution, P(G). These two functions, among the most frequently pursued in empirical observations of complex systems’ dynamics [10-14], have long been known to exhibit universal behavior, for which we lacked an adequate theoretical explanation. Now we can use Eqs. (4) and (5) to predict these functions and express them in terms of δ, φ and β. The meaning is that P(C) and P(G) inherit their universality from S_i, I_i and Γ(l). Hence our formalism not only provides a general explanation for the universality of P(C) and P(G), but also allows us to predict them analytically, directly from the structure of (2), connecting these pertinent macroscopic observations with a microscopic understanding of the system’s mechanistic interactions.

Going back to the fundamental paradigm of statistical mechanics, we make a rather strong prediction: that the observed macroscopic dynamics of complex systems can be predicted directly from their microscopic description (Eq. (2)), through a set of universal exponents. Hence, while, microscopically, complex systems exhibit a broad range of diverse dynamical models across multiple fields and scientific domains, their macroscopic behavior condenses into a discrete set of dynamical universality classes. This is an optimistic message if our goal is to predict the behavior of complex systems. Indeed, universality provides us with predictive power, as it shows little sensitivity to microscopic details. It therefore helps make complex systems a bit more… simple.

References: 
[1] Leo P. Kadanoff. "Scaling and Universality in Statistical Physics", Physica A 163, 1 (1990). Article.
[2] Kenneth G. Wilson. "The renormalization group: critical phenomena and the Kondo problem". Reviews of Modern Physics, 47, 773 (1975). Abstract.
[3] Mark E.J. Newman. "Networks - an introduction". Oxford University Press, New York, (2010). 
[4] Duncan J. Watts and Steven H. Strogatz. "Collective dynamics of ‘small-world’ networks". Nature, 393, 440 (1998). Abstract.
[5] Albert-László Barabási, Réka Albert. "Emergence of scaling in random networks". Science, 286, 509 (1999). Abstract.
[6] Eberhard O. Voit. "Computational analysis of biochemical systems" (Cambridge University Press, 2000). Google Book.
[7] Guy Karlebach and Ron Shamir. "Modelling and analysis of gene regulatory networks". Nature Reviews Molecular Cell Biology, 9, 770–780 (2008). Abstract.
[8] P.S. Dodds and D.J. Watts. "A generalized model of social and biological contagion". Journal of Theoretical Biology, 232, 587–604 (2005). Abstract.
[9] Artem S. Novozhilov, Georgy P. Karev and Eugene V. Koonin. "Biological applications of the theory of birth-and-death processes". Briefings in Bioinformatics, 7, 70–85 (2006). Article.
[10] Chikara Furusawa and Kunihiko Kaneko. "Zipf's law in gene expression". Physical Review Letters, 90, 088102 (2003). Abstract.
[11] Victor M. Eguíluz, Dante R. Chialvo, Guillermo A. Cecchi, Marwan Baliki, A. Vania Apkarian. "Scale-free brain functional networks". Physical Review Letters, 94, 018102 (2005). Abstract
[12] J. Leskovec, M. McGlohon, C. Faloutsos, N. Glance and M. Hurst. "Patterns of cascading behavior in large blog graphs". Proceedings of the SIAM International Conference on Data Mining, 551–556 (2007).
[13] Paolo Crucitti, Vito Latora and Massimo Marchiori. "Model for cascading failures in complex networks". Physical Review E, 69, 045104 (2004). Abstract.
[14] Ian Dobson, Benjamin A. Carreras, Vickie E. Lynch and David E. Newman. "Complex systems analysis of series of blackouts: Cascading failure, critical points, and self-organization". Chaos 17, 026103 (2007). Abstract.



Sunday, July 21, 2013

Profiting from Nonlinearity and Interconnectivity to Control Networks

(left) Adilson Motter, (right) Sean Cornelius

Authors: Sean P. Cornelius1 & Adilson E. Motter1,2

Affiliation:
1Department of Physics & Astronomy, Northwestern University, USA.
2Northwestern Institute on Complex Systems, Northwestern University, USA.

The concept of a complex network---a set of “nodes” connected by “links” that represent interactions between them---pervades science and engineering, describing systems as diverse as food webs, power grids, and cellular metabolism [1]. Due to the interconnected nature of such systems, perturbations that affect one or more nodes can propagate through the network and potentially cause the system as a whole to fail or change behavior. But our study, recently published in Nature Communications [2], shows that this principle can be a blessing in disguise, giving rise to a new strategy to control network behavior.

Past 2Physics article by Adilson E. Motter:
July 01, 2012: "First Material with Longitudinal Negative Compressibility"
by Zachary G. Nicolaou and Adilson E. Motter.

A hallmark of real complex networks, both natural and engineered, is that their dynamics are inherently nonlinear [3]. It is this nonlinearity that permits the coexistence of multiple stable states (some desirable, others not), which correspond to different possible modes of operation of a real network. Because of this, when a network is perturbed, it can spontaneously go to a “bad” state even if there are “good” ones available. The question we asked is: can this principle be applied in reverse? In other words, can we perturb a network that is in (or will approach) a “bad” state in such a way that it spontaneously evolves to a target state with more desirable properties?

Our research was motivated by recent case studies in our research group, which showed how networks damaged by an external perturbation can, counter-intuitively, be healed by intentionally applying additional, compensatory perturbations. For example, bacterial strains left unable to grow in the wake of genetic mutations can be made viable again by the further knockout of specific genes [4], while extinction cascades in ecological networks perturbed by the loss of one species can often be mitigated by the targeted suppression of additional species [5]. However, a systematic extension of such a strategy to general networks remained an open problem, partly due to system-specific constraints that restrict the perturbations one can actually implement. Indeed, in most real networks there are constraints on the potential compensatory perturbations one can implement in the system, and these generally preclude bringing the system directly to the target, or even to a similar state. In food-web networks, for instance, it may only be possible to suppress (but not increase) the populations of certain species (e.g., via hunting, fishing, culling, or non-lethal removals), while some endangered species can't be manipulated at all. Similarly, one can easily knock down one or more genes in a genetic network, but coordinating the upregulation of entire genetic pathways is comparatively difficult.

The critical insight underlying our research is that even when a desired stable state of a dynamical system can't be reached directly, there will exist a set of other states that eventually do evolve to the target---the so-called basin of attraction of that state. If we could only bring the system to one of those states through an eligible perturbation, the system would subsequently reach the target on its own, without any further intervention. A core component of our work is thus the introduction of a scalable algorithm that can locate basins of attraction in a general dynamical system [2] (for a Python implementation of the algorithm, see Ref. [6]). Figure 1 presents a visual illustration of this approach for transitions between patterns stored in an associative memory network, in which the control intervention (downward arrows) causes the network to spontaneously transition to the next pattern under time evolution (diagonal arrows).
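Before turning to that example, the basin-of-attraction idea can be illustrated in a minimal toy setting. The sketch below (ours, not the NECO algorithm of Ref. [6]) uses a symmetric two-gene toggle switch, restricts the admissible interventions to suppressing the first variable only, and simply tests which eligible perturbations land in the basin of the desired state:

```python
import numpy as np

a, n = 4.0, 3.0       # toy toggle-switch parameters (illustrative values)

def flow(state, T=50.0, dt=0.01):
    """Integrate the unforced toy dynamics forward in time and return the final state."""
    x, y = state
    for _ in range(int(T / dt)):
        dx = a / (1.0 + y**n) - x
        dy = a / (1.0 + x**n) - y
        x, y = x + dt * dx, y + dt * dy
    return np.array([x, y])

bad = flow([4.0, 0.0])        # undesired stable state: x high, y low
target = flow([0.0, 4.0])     # desired stable state: x low, y high

# Eligible interventions: we may only suppress x (partial knockdown), never increase it
# or manipulate y. Test which eligible perturbations land in the basin of the target.
for f in [0.25, 0.50, 0.75, 0.90, 0.99]:
    perturbed = np.array([bad[0] * (1 - f), bad[1]])
    final = flow(perturbed)   # after the one-off intervention the system evolves freely
    ok = np.allclose(final, target, atol=0.05)
    print("knock down x by %2.0f%% -> reaches target: %s" % (100 * f, ok))
```

In this symmetric toy only a near-complete knockdown of x crosses into the target’s basin; in the asymmetric, high-dimensional networks of interest, far milder eligible perturbations can suffice, and locating them systematically is what the algorithm is for.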
Figure 1: Patterns representing the letters of the word “NETWORK” are stored as different stable states in an associative memory network. The example shows how our algorithm induces transitions between consecutive letters by only perturbing “off” pixels.

A remarkable aspect of this approach is its robustness. In the example just mentioned, the control procedure succeeds in driving the system to the target or to a similar pattern with a small number of binary errors (gray pixels). Thus, even if the target state cannot be reached by any eligible perturbation, it may nonetheless be possible to drive the network to a similar state using this control procedure. This would not be possible if the dynamics were linear, since in that case the nonlocal nature of the control trajectories may prevent numerical convergence to the desired target even when the initial state is already close to the target [7].

The approach is based on casting the problem as a series of constrained nonlinear optimization problems, which enables the systematic construction of compensatory perturbations via small incremental changes to the state of the network. Prior to the introduction of this technique there were no systematic methods for locating the portions of the attraction basins that can be reached by eligible perturbations in general high-dimensional dynamical systems, short of conservative estimates and brute-force sampling. The latter requires an amount of computation time exponential in the number of dynamical variables of the system, which is notoriously large for complex networks of interest. In contrast, the running time of our approach scales only as the number of variables to the power 2.5.

There are numerous potential applications for the above control approach. As an example, we considered the identification of candidate therapeutic targets in a form of human blood cancer caused by the abnormal survival of cytotoxic T-cells. Here, normal and cancer states correspond to two different types of stable steady states [8]. Potential curative interventions are those that bring the system from a cancerous or pre-cancerous network state to the attraction basin of the normal state, which then leads to programmed cell death. We demonstrate that 2/3 of all such compromised states can be rescued through perturbations limited to network nodes not previously identified as promising candidate targets for therapeutic interventions. Furthermore, we show that perturbing an average of only 3.4 of them suffices to control the entire network. The effectiveness of many approved drugs relies on their being multi-target, temporary, and tunable, which are precisely the characteristics of the type of control interventions introduced by our study, making such predictions attractive for future experimental exploration [9].

This work illustrates how interconnectedness and nonlinearity---unavoidable features of real systems commonly thought to be impediments to their control---can actually be turned to our advantage. This has broad implications and may in particular shed new light on the requirements on the observability of real networks to allow their real-time control (for recent studies on network observability, see Refs. [10, 11]).

Our approach is based on the systematic construction of compensatory perturbations to the network, and, as illustrated in our applications, can account for both rather general constraints on the admissible interventions and the nonlinear dynamics inherent to most real complex networks. These results provide a new foundation for the control and rescue of network dynamics and for the related problems of cascade control, network reprogramming, and transient stability. In particular, we expect these results to have implications for the development of smart traffic and power-grid networks, of new ecosystem and Internet management strategies, and of new interventions to control the fate of living cells.

The research was supported by NSF (Grant DMS-1057128), NCI (Grant 1U54CA143869), and a Northwestern-Argonne Early Career Investigator Award.

References:
[1] Mark Newman, "The physics of networks", Physics Today, 61(11), 33 (2008). Full Article.
[2] Sean P. Cornelius, William L. Kath, Adilson E. Motter, "Realistic control of network dynamics", Nature Communications, 4, 1942 (2013). Abstract.
[3] Adilson E. Motter and Réka Albert, "Networks in motion", Physics Today 65(4), 43 (2012). Full Article.
[4] Adilson E Motter, Natali Gulbahce, Eivind Almaas & Albert-László Barabási, "Predicting synthetic rescues in metabolic networks", Molecular Systems Biology, 4, 168 (2008). Full Article.
[5] Sagar Sahasrabudhe & Adilson E. Motter, "Rescuing ecosystems from extinction cascades through compensatory perturbations", Nature Communications, 2, 170 (2011). Abstract.
[6] Sean P. Cornelius & Adilson E. Motter, "NECO - A scalable algorithm for NEtwork COntrol", Protocol Exchange, Nature Protocols (2013), doi:10.1038/protex.2013.063. Link.
[7] Jie Sun and Adilson E. Motter, "Controllability transition and nonlocality in network control", Physical Review Letters, 110, 208701 (2013). Abstract.
[8] Ranran Zhang, Mithun Vinod Shah, Jun Yang, Susan B. Nyland, Xin Liu, Jong K. Yun, Réka Albert, Thomas P. Loughran, Jr. "Network model of survival signaling in large granular lymphocyte leukemia". Proceedings of the National Academy of Sciences of the United States of America, 105, 16308 (2008). Abstract.
[9] Peter Csermely, Tamás Korcsmáros, Huba J.M. Kiss, Gábor London, Ruth Nussinov, "Structure and dynamics of molecular networks: A novel paradigm of drug discovery", Pharmacology & Therapeutics, 138, 333 (2013). Abstract.
[10] Yang Yang, Jianhui Wang, and Adilson E. Motter, "Network observability transitions", Physical Review Letters, 109, 258701 (2012). Abstract.
[11] Yang-Yu Liu, Jean-Jacques Slotine, and Albert-László Barabási, "Observability of complex systems", Proceedings of the National Academy of Sciences of the United States of America, 110, 2460 (2013). Abstract



Sunday, July 07, 2013

Mesoscopic Interference and Light Trapping in Semiconductor Nanowire Mats

[From left to right clockwise] Claire Blejean, Otto Muskens, Tom Strudley, Tilman Zehender, Erik Bakkers

Authors: Tom Strudley1, Claire Blejean1, Tilman Zehender2, Erik P.A.M. Bakkers2,3, Otto L. Muskens1,

Affiliation: 
1Faculty of Physical and Applied Sciences, University of Southampton, UK
2Dept of Applied Physics, Eindhoven University of Technology, Netherlands
3Kavli Institute of Nanoscience, Delft University of Technology, Netherlands.

For many years the field of mesoscopic physics, defined as the interface between the microscopic and macroscopic worlds, has been the domain of solid state electronics. Amongst the most prominent of such mesoscopic effects are those of universal conductance fluctuations [1] and Anderson localization [2]. The case of localization, in which transport is completely halted due to interference effects in the presence of sufficiently strong disorder, has been of considerable interest over the years. Indeed it was one of the two pieces of work for which Anderson was jointly awarded a Nobel prize in 1977, and its complexity has prompted many to quote Anderson’s Nobel lecture: “Very few believed it at the time…among those who failed to fully understand it at first was certainly its author.” [3]

Past 2Physics articles by Erik Bakkers:
May 20, 2012: "Signatures of Majorana Fermions in Hybrid Superconductor-Semiconductor Nanowire Devices" by Vincent Mourik, Kun Zuo, Sergey Frolov, Sébastien Plissard, Erik Bakkers, Leo Kouwenhoven.

Due to the strong analogy between electron and wave transport, such phenomena should also occur for wave transport in random media. However this presents a significant challenge, as mesoscopic effects occur when the probability of a diffusing wave scattering back to the same point and interfering with itself becomes significant. Intuitively, this breakdown of the traditional ‘random walk’ description requires scattering mean free paths significantly shorter than the wavelength. As a consequence, demonstrations of Anderson localization of light in three dimensions have proved difficult, with most observations being made in other systems such as microwaves in a waveguide [4], acoustic waves [5] and even cold atoms [6]. A couple of pioneering studies have reported localization of light in three-dimensional media [7,8]; however, the interpretation of their results is complicated by absorption and fluorescence, which can mimic certain localization effects.
Figure 1: (left) The process of Vapour-liquid-solid growth starting from a gold nanoparticle catalyst is used to grow nanowires on a substrate. A lateral growth step increases the diameter of the wires, without affecting their length. (right) A typical high-density nanowire mat used in our experiments.

In our recent work published in Nature Photonics [9], we presented experimental results obtained using densely packed disordered mats of semiconductor nanowires. We demonstrated strong mesoscopic effects using visible light in these layers. The nanowire mats were fabricated at the Eindhoven University of Technology using the method of metal-organic vapour phase epitaxy (MOVPE), see Fig. 1. By using a recipe of alternating cycles of vapour-liquid-solid (VLS) nanowire growth, followed by lateral growth to increase the wire diameter, precise control could be achieved over the length and diameter of the wires. A very high nanowire density, up to 50% area coverage, was necessary to achieve the strong scattering strength required for observing the mesoscopic effects. These nanowire mats are remarkable as they have optical mean free paths as low as 0.2 micrometres, which along with their low intrinsic absorption makes them ideal candidates for exploring Anderson localization.
Figure 2 [click on image to see higher resolution]: (left) Typical intensity images of light transmitted through a nanowire mat, for a tight laser focus ('in focus') and for a laser illumination out of focus. Total image size ~20 μm. (middle) Intensity distribution normalized to ensemble average, showing the speckle fluctuations in space at the exit plane of the nanowire mat. (right) The total intensity fluctuations are enhanced for a tightly focused laser, see Ref. [9].

By examining the intensity statistics of the transmitted light we found that it exhibited the large intensity fluctuations and long range correlations (both spatial and spectral) typical of mesoscopic interference (Fig. 2). Fitting these fluctuations with predictions from theory, we found an average of only 4 independent transmission channels through our sample, which is several orders of magnitude lower than previously reported values. These measurements unambiguously show for the first time that strong mesoscopic interference corrections can be achieved in three-dimensional nanomaterials and at optical wavelengths.
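As a rough numerical intuition for why enhanced fluctuations point to few transmission channels, the toy Monte Carlo below (ours; only a caricature of the full mesoscopic analysis used in Ref. [9]) models the total transmission as a sum of N independent, exponentially distributed speckle intensities, whose relative variance is exactly 1/N:

```python
import numpy as np

rng = np.random.default_rng(1)
samples = 20000

# Toy model: total transmission = sum of N independent speckle intensities, each
# exponentially distributed (Rayleigh statistics) with unit mean. Fewer independent
# channels -> larger relative fluctuations of the total.
for N in [1, 4, 16, 64]:
    total = rng.exponential(1.0, size=(samples, N)).sum(axis=1)
    rel_var = total.var() / total.mean() ** 2
    print("N = %2d channels -> relative variance of total transmission ≈ %.3f (1/N = %.3f)"
          % (N, rel_var, 1.0 / N))
```

Measuring unusually large intensity fluctuations therefore signals an unusually small number of effective channels; the quantitative extraction of that number in the experiment relies on the full mesoscopic theory rather than this caricature.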

Mesoscopic transport corrections are a precursor of the strong or Anderson localization transition, where transport of light is halted by self-interference of many light paths returning to the same position in the medium. We are now in the exciting position where we can probe in detail the mesoscopic physics of light, as well as tune our nanowire growth parameters further in a bid to observe localization itself. For practical applications, semiconductor nanowires have great potential in solar cells and light generation. A deep understanding of light transport in such materials is of great importance for further optimizing these applications. Ultimately, it may be possible for mesoscopic effects to be harnessed and turned into a new design tool for maximizing the performance of real-world devices.

References:
[1] P.A. Lee and A. Douglas Stone, “Universal Conductance Fluctuations in Metals”, Physical Review Letters, 55, 1622-1625 (1985). Abstract.
[2] P.W. Anderson, “Absence of diffusion in certain random lattices”, Physical Review, 109, 1492-1505 (1958). Abstract.
[3] Philip W. Anderson, Nobel lecture (1977).
[4] A.A. Chabanov, M. Stoytchev, A.Z. Genack, “Statistical Signatures of Photon Localization”, Nature 404, 850-853 (2000). Abstract.
[5] Hefei Hu, A. Strybulevych, J. H. Page, S. E. Skipetrov, B. A. van Tiggelen, “Localization of Ultrasound in a Three-Dimensional Elastic Network”, Nature Physics, 4, 945-948 (2008). Abstract.
[6] Juliette Billy, Vincent Josse, Zhanchun Zuo, Alain Bernard, Ben Hambrecht, Pierre Lugan, David Clément, Laurent Sanchez-Palencia, Philippe Bouyer, Alain Aspect, “Direct Observation of Anderson localization of matter waves in a controlled disorder”, Nature, 453, 891-894 (2008). Abstract. 2Physics Article.
[7] Diederik S. Wiersma, Paolo Bartolini, Ad Lagendijk, Roberto Righini, “Localization of light in a disordered medium”, Nature 390, 671-673 (1997). Abstract.
[8] Martin Störzer, Peter Gross, Christof M. Aegerter, and Georg Maret, “Observation of the Critical Regime Near Anderson Localization of Light”, Physical Review Letters, 96, 063904 (2006). Abstract.
[9] Tom Strudley, Tilman Zehender, Claire Blejean, Erik P. A. M. Bakkers, Otto L. Muskens, “Mesoscopic Light Transport by Very Strong Collective Multiple Scattering in Nanowire Mats”, Nature Photonics, 7, 413-418 (2013). Abstract.



Sunday, May 05, 2013

Losing Energy with Hamilton’s Principle of Least Action

Chad Galley

Author: Chad Galley

Affiliation: Theoretical Astrophysics (TAPIR), California Institute of Technology, Pasadena, USA

Classical mechanics is the foundation of physics and of all students’ course work in engineering and the physical sciences. The bricks of this foundation were first laid by Galileo, then by Newton, and finally by the likes of D’Alembert, Hamilton, Lagrange, Poisson, and Jacobi in the 18th and 19th centuries. What resulted was a framework of physical laws and formalisms for virtually any problem one wishes to study in fluid mechanics [1], electromagnetism [2], statistical mechanics [3], and even quantum theory later on, to give just a few examples.

One important and pervasive formulation of classical mechanics is due to Hamilton, who showed that a physical system evolves so that a quantity called the action is stationary (typically minimized); loosely speaking, the action is the accumulation in time of the difference between the kinetic and potential energies [4]. This important result, called Hamilton’s variational principle of stationary action or Hamilton’s principle for short, is the primary way to derive equations of motion for many systems, from the ubiquitous simple harmonic oscillator to supersymmetric string theories. Unfortunately, Hamilton’s principle has a well-known shortcoming: it generically cannot account for the irreversible effects of energy loss that are always present in any real world application, experiment, or problem. But why is that?

The answer has to do with the very formulation of Hamilton’s principle: “The physical configuration of a system is the one that evolves from the given state A at the initial time to the given state B at the final time such that the action is stationary.” This raises the question: how can one know the final state, especially when the system is losing energy? Isn’t the point to determine the final state from initial conditions? That’s how the real world works after all, through cause then effect. Remarkably, answering these questions correctly leads to a natural way to describe generic systems with a variational principle, even those that do not conserve energy [5].

The questions above are usually addressed, if at all, using a somewhat circular reasoning, as follows. In practice, one applies Hamilton’s principle to derive equations of motion that are then solved with initial data. The fixed final state used in Hamilton’s principle is then argued to be the one associated with that specific solution. However, that specific final state is only determined after applying Hamilton’s principle to get the equations of motion in the first place. Perhaps this is a passable explanation, but it doesn’t seem completely satisfactory because we usually do not have access to the environment that a system loses energy to, so we cannot freely adjust the final states of those inaccessible degrees of freedom to accommodate the above explanation.

For these reasons and others, it is important to generalize Hamilton’s variational principle in a way that does not require fixing the final state of the system but is determined instead from the initial state only. The details of how this is achieved are reported in [5]. The take-home result is that eliminating dependence on the final state requires a formal doubling of the degrees of freedom in the problem. These doubled variables are fictitious, but their average values are the physical ones of interest, whereas their difference does not contribute to the physical evolution of the system. Figure 1 shows a cartoon of the usual Hamilton’s principle on the left and, on the right, of Hamilton’s principle generalized to accommodate energy losses (or gains). The arrows in Figure 1 indicate the direction in time to integrate the Lagrangian of the system along that path.
Figure 1: Left: A cartoon of Hamilton’s principle. Dashed lines denote the virtual displacements and the solid line denotes the stationary path. Right: A cartoon of Hamilton’s principle compatible with initial data (i.e., the final state is not fixed). In both cartoons, the arrows on the paths indicate the integration direction for the line integral of the Lagrangian.

Doubling the variables in this formal way has an interesting natural consequence. Just as the potential V in classical mechanics is an arbitrary function for conservative systems, we now have the freedom to introduce an additional arbitrary function K that couples together the doubled variables. In many ways K is analogous to V in classical mechanics because K generates the forces and interactions that account for energy loss or gain in a similar way that V generates forces and interactions that conserve energy.
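Schematically, and in the spirit of the construction in Ref. [5] (the notation and sign conventions below are ours and may differ in detail from the paper), the doubled-variable action and the resulting equation of motion take the form:

```latex
% Doubled-variable action (schematic):
S[x_1, x_2] = \int_{t_i}^{t_f} dt \,\Big[ L(x_1,\dot{x}_1) - L(x_2,\dot{x}_2)
              + K(x_1, x_2, \dot{x}_1, \dot{x}_2, t) \Big]

% Varying, and only afterwards setting both copies equal to the physical path
% (x_1 = x_2 = x), gives an equation of motion with a generalized force from K:
\frac{d}{dt}\frac{\partial L}{\partial \dot{x}} - \frac{\partial L}{\partial x}
   = \left[ \frac{\partial K}{\partial x_-}
          - \frac{d}{dt}\frac{\partial K}{\partial \dot{x}_-} \right]_{x_1 = x_2 = x},
\qquad x_- \equiv x_1 - x_2, \quad x_+ \equiv \tfrac{1}{2}(x_1 + x_2)

% Example (a hypothetical choice, for illustration): K = -\lambda\, \dot{x}_+ x_-
% produces the linear drag force -\lambda \dot{x}, which no ordinary potential V can generate.
```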

To summarize, the seemingly innocuous problem with specifying the final state in Hamilton’s principle leads to a generalization based solely on the initial state. Achieving this requires formally doubling the degrees of freedom that, in turn, allows for an extra arbitrary function K to be introduced that generically accounts for the dynamical forces and interactions that cause energy loss or gain in the system. This new variational principle may have broad applicability in a wide range of practical and theoretical problems across multiple disciplines.

References
[1] G. K. Batchelor, "An Introduction to Fluid Dynamics" (Cambridge University Press, Cambridge, England, 1967).
[2] J. D. Jackson, Classical Electrodynamics (Wiley, New York, 1999), 3rd ed.
[3] K. Huang, Statistical Mechanics (Wiley, New York, 1963).
[4] H. Goldstein, Classical Mechanics (Addison-Wesley, Reading MA, 1980), 2nd ed.
[5] C. R. Galley, “Classical mechanics of nonconservative systems”, Physical Review Letters, 110, 174301 (2013). Abstract.



Sunday, March 24, 2013

Role of Statistics in Two-Particle Anderson Localization

Roberto Osellame (left)  and Fabio Sciarrino (right)

Authors: Roberto Osellame1 and Fabio Sciarrino2

Affiliations:
1Istituto di Fotonica e Nanotecnologie (IFN) – CNR, Milan, Italy.
Link to the Femtosecond Laser Micromachining group >>
2Dipartimento di Fisica – Sapienza Università di Roma, Rome, Italy.
Link to the Quantum Optics group >>


Disorder in our daily life typically has a negative connotation. Also in science it has been normally considered as a source of noise or imperfection. However, disorder is ubiquitous in nature: indeed it plays, on the one hand, a crucial role in understanding the behavior of complex physical phenomena [1] and, on the other hand, it can turn into an advantageous property for developing completely new devices [2]. One of the most striking effects of disorder is the suppression of transport of electrons in a disordered crystal. This phenomenon, known as Anderson localization after the 1958 paper by P.W. Anderson [3], is due to coherent scattering of the electron wavefunction in the disordered crystal and is generic to any wave propagating in a disordered medium [4]. Being a coherent effect, Anderson localization can be directly observed with photons, due to their weak interaction with the environment and long coherence time. In addition, photonic structures, e.g. waveguide lattices, can be manufactured with a very high level of control of the structure parameters and are therefore well suited to implementing and investigating different kinds of disorder.
Fig. 1 Femtosecond laser writing of optical waveguides. The glass sample is translated with respect to the writing laser beam. The bright spot in the glass comes from electron plasma generated by the focused laser; the energy transferred to the glass matrix after plasma relaxation is responsible for the local increase of refractive index.

A very recent technique (Fig. 1) that allows the accurate fabrication of photonic lattices is femtosecond laser waveguide writing [5]. With respect to standard microfabrication techniques it enables rapid prototyping of devices, being a direct write method, and extreme flexibility in the layout reconfiguration, being a maskless process [6]. In addition, it has the unique capability of exploiting the third dimension in the fabrication of the photonic circuits, which opens the possibility for completely new architectures [7,8].

The simplest way of observing Anderson localization is the study of a single particle in a 1D periodic crystal with static disorder (i.e. disorder that is spatially uncorrelated, but does not vary with time) [9]. This is analogous to observing the quantum walk of a single photon in a disordered waveguide array. If the observation in time is not continuous, but periodic, we are studying a discrete quantum walk. Femtosecond laser waveguide writing can be effectively exploited to produce matrices of integrated optical interferometers constituting a discrete quantum walk for photons [10]. The same technology enables a straightforward implementation of arbitrary phases in the different optical paths, thus introducing disorder in the structure. In our recent paper, published in Nature Photonics [11] in collaboration with the Scuola Normale Superiore in Pisa, we implemented random phase maps representing static disorder, as in the chip depicted in Fig. 2a.
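For readers who want to experiment with the idea, the sketch below simulates a simple coined discrete-time quantum walk for a single photon, applying one static random phase per lattice site at every step; the Hadamard coin, lattice size and disorder strength are arbitrary illustrative choices, not the parameters of the fabricated chip:

```python
import numpy as np

rng = np.random.default_rng(0)
sites, steps = 101, 30
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)           # Hadamard coin
phases = np.exp(2j * np.pi * rng.random(sites))        # static (time-independent) site phases

def walk(disorder):
    psi = np.zeros((sites, 2), dtype=complex)
    psi[sites // 2, 0] = 1.0                           # photon injected at the central site
    for _ in range(steps):
        psi = psi @ H.T                                # coin operation on the internal state
        shifted = np.zeros_like(psi)
        shifted[1:, 0] = psi[:-1, 0]                   # coin state 0 steps right
        shifted[:-1, 1] = psi[1:, 1]                   # coin state 1 steps left
        psi = shifted
        if disorder:
            psi = psi * phases[:, None]                # same phases every step: static disorder
    return (np.abs(psi) ** 2).sum(axis=1)              # output probability distribution

for disorder in [False, True]:
    p = walk(disorder)
    spread = np.sqrt(np.sum(p * (np.arange(sites) - sites // 2) ** 2))
    print("static disorder:", disorder, "-> spatial spread ≈ %.1f sites" % spread)
```

With the static phases switched on, the output distribution stops spreading ballistically and its width saturates – the hallmark of Anderson localization.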

Anderson localization is essentially a single-particle process; however, in this work we experimentally investigated for the first time the role of particle statistics in the localization of two non-interacting photons. In order to mimic bosonic and fermionic statistics we exploited the symmetric and antisymmetric wavefunctions of polarization-entangled photons [10]. We observed Anderson localization for particles obeying both statistics; however, when two bosonic particles were propagating they tended to localize on the same site, while the fermionic ones localized on adjacent sites but not on the same one, as expected from the Pauli exclusion principle (Fig. 2b). We also observed that the mean position between the two particles has a stronger localization for fermions than for bosons, while the relative distance has a smaller expectation value for bosons than for fermions [11].
Fig. 2 (a) Scheme of the device implementing a discrete quantum walk with static disorder. The m ports represent the sites of the 1D crystal, the n steps represent the discrete observation times. The colors of the phase shifters represent different implemented phases, which are constant along n to implement a static disorder. (b) Experimental correlation maps representing the joint probability of finding one photon in output port i and the other in output port j; with respect to the case without disorder (where ballistic propagation is observed), a clear localization is observed when static disorder is introduced [11] (Click on the image to view a version of better resolution).
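Correlation maps like those in Fig. 2b can also be mimicked numerically from any single-particle transfer matrix U: for two non-interacting particles injected in modes a and b, the joint probability of detecting them at outputs i and j is proportional to |U_ia U_jb ± U_ib U_ja|², with + for the symmetric (bosonic) and − for the antisymmetric (fermionic) wavefunction. The sketch below uses a Haar-random unitary purely as a stand-in for the disordered-walk transfer matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 8                                    # number of ports (illustrative)

# Haar-random unitary as a stand-in for the single-particle transfer matrix of the walk
Z = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
Q, R = np.linalg.qr(Z)
U = Q * (np.diagonal(R) / np.abs(np.diagonal(R)))

a, b = 3, 4                              # input modes of the two photons
amp = np.outer(U[:, a], U[:, b])         # amp[i, j]  = U_ia * U_jb
swap = np.outer(U[:, b], U[:, a])        # swap[i, j] = U_ib * U_ja
boson = np.abs(amp + swap) ** 2          # symmetric (bosonic) two-photon correlations
fermion = np.abs(amp - swap) ** 2        # antisymmetric (fermionic) two-photon correlations

print("bosonic weight on the diagonal (both photons in one port): %.3f" % np.trace(boson))
print("fermionic weight on the diagonal (Pauli-suppressed):       %.3g" % np.trace(fermion))
```

The fermionic map vanishes on the diagonal (no two fermions in the same port), while the bosonic one is enhanced there, mirroring the bunching/antibunching contrast seen in the experiment.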

These results demonstrate that even without interaction, particle statistics is capable of influencing the way two particles localize in a disordered medium. In addition, they show the potential of femtosecond laser waveguide writing for implementing arbitrary quantum walks with controlled disorder. The capability of our technology to implement arbitrary phase maps in quantum walks will enable the experimental quantum simulation of the quantum dynamics of multiparticle correlated systems and its ramifications towards the implementation of realistic universal quantum computation with quantum walks.

References
[1] Liad Levi, Yevgeny Krivolapov, Shmuel Fishman & Mordechai Segev, “Hyper-transport of light and stochastic acceleration by evolving disorder”, Nature Physics, 8, 912-917 (2012). Abstract.
[2] Diederik S. Wiersma, “Disordered photonics”, Nature Photonics, 7, 188-196 (2013). Abstract.
[3] P.W. Anderson, “Absence of diffusion in certain random lattices”, Physical Review, 109, 1492-1505 (1958). Abstract.
[4] Mordechai Segev, Yaron Silberberg & Demetrios N. Christodoulides, “Anderson localization of light”, Nature Photonics, 7, 197–204 (2013). Abstract.
[5] Rafael R. Gattass, Eric Mazur, “Femtosecond laser micromachining in transparent materials”, Nature Photonics, 2, 219 - 225 (2008). Abstract.
[6] G. Della Valle, R. Osellame, P. Laporta, “Micromachining of photonic devices by femtosecond laser pulses”, J. Opt. A 11, 013001(2009). Abstract.
[7] Nicolò Spagnolo, Chiara Vitelli, Lorenzo Aparo, Paolo Mataloni, Fabio Sciarrino, Andrea Crespi, Roberta Ramponi & Roberto Osellame, “Three-photon bosonic coalescence in an integrated tritter”, Nature Communications, doi:10.1038/ncomms2616 (Published March 19, 2013). Abstract.
[8] Mikael C. Rechtsman, Julia M. Zeuner, Andreas Tünnermann, Stefan Nolte, Mordechai Segev & Alexander Szameit, “Strain-induced pseudomagnetic field and photonic Landau levels in dielectric structures”, Nature Photonics, 7, 153-158 (2013). Abstract.
[9] Yoav Lahini, Assaf Avidan, Francesca Pozzi, Marc Sorel, Roberto Morandotti, Demetrios N. Christodoulides and Yaron Silberberg, “Anderson Localization and Nonlinearity in One-Dimensional Disordered Photonic Lattices”, Physical Review Letters, 100, 013906 (2008). Abstract.
[10] Linda Sansoni, Fabio Sciarrino, Giuseppe Vallone, Paolo Mataloni, Andrea Crespi, Roberta Ramponi and Roberto Osellame, “Two-particle bosonic–fermionic quantum walk via integrated photonics”, Physical Review Letters, 108, 010502 (2012). Abstract.
[11] Andrea Crespi, Roberto Osellame, Roberta Ramponi, Vittorio Giovannetti, Rosario Fazio, Linda Sansoni, Francesco De Nicola, Fabio Sciarrino & Paolo Mataloni, “Anderson localization of entangled photons in an integrated quantum walk”, Nature Photonics, doi:10.1038/nphoton.2013.26 (Published online March 3, 2013). Abstract.



Sunday, March 03, 2013

Experimental Boson Sampling

[From left to right] Part of the Oxford integrated quantum optics team: Steve Kolthammer,
Ben Metcalf, Merritt Moore, Peter Humphreys, Justin Spring, and Ian Walmsley


Authors: Steve Kolthammer, Justin Spring, Ian Walmsley

Affiliation: Department of Physics, University of Oxford, UK

A long-term ambition of many quantum physicists these days is to construct and understand large and complex quantum systems. This goal is driven in part by the notion that such systems have the potential to reveal interesting phenomena that cannot be studied by classical simulation. Such phenomena occur widely across the natural sciences, from biochemistry to condensed matter physics. The notion of complexity itself, however, is perhaps on firmest ground within the framework of computation. Here complexity refers to how the physical resources needed to achieve a particular task depend on the size of the task. Dramatic cases in which a quantum algorithm provides an exponential advantage over the best known classical algorithm make clear the potential for a quantum system to process information in ways that would eclipse even the most fantastically advanced classical machines.

Past 2Physics articles by this Group:
December 25, 2011: "Entang-bling" by Ian Walmsley and Joshua Nunn
August 07, 2011: "Building a Quantum Internet" by Joshua Nunn

Despite steady progress in manipulating and measuring simple quantum systems, a gulf remains between the theoretical promise of quantum computation and what has been shown in the laboratory. Bridging this divide requires not only further experimental advances but also the development of models of computation which are easier to test. In 2010, Scott Aaronson and Alex Arkhipov reported important progress toward the latter in a study of the computational power inherent in the quantum interference of indistinguishable, non-interacting bosons [1]. The specific problem they posed, termed boson sampling, is appealing due to its relatively straightforward implementation with linear optical elements, such as beam splitters and phase shifters, sources of indistinguishable single photons, and single-photon detectors.

Importantly, this scheme does not require on-demand entanglement and feed-forward control, which are required for other approaches to linear-optical quantum computing and difficult to achieve in the laboratory. From a computational perspective, the key advance by Aaronson and Arkhipov was to show strong evidence that boson sampling is in fact interesting – that is, it cannot be solved efficiently by classical computation.

The essence of the boson sampling problem can be readily understood by examining the quantum machine which solves it. Our machine begins by injecting indistinguishable single photons into a linear optical network of single-mode waveguides. Due to quantum interference as the photons traverse the network, the photons emerge in a complicated entangled state. The positions of the photons are then measured by single-photon detectors. The machine bears some resemblance to the classical Galton board, illustrated below, in which balls randomly fall through an array of pegs. However, while these distinguishable classical balls take familiar, distinct paths down the board, rolling either left or right off of each peg they encounter, the photons in some sense collectively take all possible paths through their network.

Surprisingly, the interference of these paths makes it hard for a classical computer to predict where the photons tend to emerge! In the quantum case, the probability for a particular measurement outcome after N photons are injected into a network described by a unitary matrix U is related to the permanent of a specific N x N submatrix of U. The permanent, a function of a matrix similar to the determinant, is noteworthy since its computation is hard (it belongs to the #P-hard complexity class). Aaronson and Arkhipov showed that an efficient classical algorithm that samples from a distribution of matrix permanents, precisely the task accomplished by the quantum machine, is very unlikely to exist.
Classical Galton board (left) and quantum boson sampling machine (right).
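To make the permanent connection concrete, here is an illustrative sketch (ours) that evaluates a matrix permanent with Ryser’s formula and uses it to compute the probability of one collision-free boson-sampling outcome, P = |Perm(U_sub)|², where U_sub is the N x N submatrix of U picked out by the occupied input and output modes:

```python
import numpy as np
from itertools import combinations

def permanent(A):
    """Exact matrix permanent via Ryser's formula (exponential time in the matrix size)."""
    n = A.shape[0]
    total = 0.0 + 0.0j
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            total += (-1) ** k * np.prod(A[:, list(cols)].sum(axis=1))
    return (-1) ** n * total

# Haar-random m-mode interferometer as a stand-in for the experimental circuit
rng = np.random.default_rng(0)
m, N = 6, 3
Z = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
Q, R = np.linalg.qr(Z)
U = Q * (np.diagonal(R) / np.abs(np.diagonal(R)))

inputs = (0, 1, 2)                        # one photon injected into each of these modes
outputs = (1, 3, 4)                       # one particular collision-free detection pattern
U_sub = U[np.ix_(outputs, inputs)]        # N x N submatrix selected by the occupied modes
prob = abs(permanent(U_sub)) ** 2
print("P(outputs %s | inputs %s) = %.4f" % (outputs, inputs, prob))
```

The exponential cost of the permanent evaluation is exactly why sampling from these probabilities is believed to be out of reach for classical machines once N grows.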

Last month, our group was one of two teams to report initial boson sampling experiments in Science [2, 3]. In fact, a remarkable interest in boson sampling became apparent last December when four groups - with experiments undertaken in Oxford, Brisbane, Vienna, and Rome - presented results on arXiv.org within the same week [2-5]. In our measurements, the principles of boson sampling were verified with three and four photons in a photonic chip. The photons were generated by degenerate parametric down-conversion, a nonlinear optical three-wave-mixing process, in a pair of crystals and coupled into a silica-on-silicon waveguide chip fabricated by our collaborators at the University of Southampton. The output from the chip was measured with single-photon avalanche photodiodes.

How closely did our quantum machine match an ideal boson sampling device? To address this issue we compared our measurements to the predictions of a classical computer, which is able to handle small boson sampling tasks. By extending the model of our machine to include a number of small, independently measured imperfections, such as rare multi-pair photon emission from a source, we verified that the machine ran as expected. In terms of quantum optics, our findings extend the well-studied Hong-Ou-Mandel quantum interference of two photons to processes with three and four photons. In terms of computation, we demonstrated the feasibility of running small-scale boson sampling tasks with the current experimental state of the art.

A striking feature of the boson sampling problem is that it appears to be amenable to the tools of complexity theorists as well as those of quantum experimenters. Meeting these disparate requirements is the crucial challenge to achieving a direct test of quantum computational power. Future work on boson sampling will benefit from an on-going effort to understand how unavoidable experimental imperfections relate to computational complexity. Meanwhile quantum optics labs, including our own, will continue to work towards experiments with increasing numbers of single photons. Extending our initial studies to ten photons should provide clear evidence for the computational power of this simple quantum machine, and a calculation with tens of photons could reach the frontier of quantum complexity which is inaccessible by classical computation.

References
[1] Scott Aaronson and Alex Arkhipov. “The Computational Complexity of Linear Optics”. Proceedings of the ACM Symposium on the Theory of Computing (ACM, New York, 2011), pp. 333-342. Download.
[2] Justin B. Spring, Benjamin J. Metcalf, Peter C. Humphreys, W. Steven Kolthammer, Xian-Min Jin, Marco Barbieri, Animesh Datta, Nicholas Thomas-Peter, Nathan K. Langford, Dmytro Kundys, James C. Gates, Brian J. Smith, Peter G. R. Smith, Ian A. Walmsley. “Boson Sampling on a Photonic Chip”. Science, 339, 798 (2013). Abstract.
[3] Matthew A. Broome, Alessandro Fedrizzi, Saleh Rahimi-Keshari, Justin Dove, Scott Aaronson, Timothy C. Ralph, Andrew G. White. “Photonic Boson Sampling in a Tunable Circuit”. Science, 339, 794 (2013). Abstract.
[4] Max Tillmann, Borivoje Dakić, René Heilmann, Stefan Nolte, Alexander Szameit, Philip Walther. “Experimental Boson Sampling”. arXiv:1212.2240 [quant-ph] (2012).
[5] Andrea Crespi, Roberto Osellame, Roberta Ramponi, Daniel J. Brod, Ernesto Galvão, Nicolò Spagnolo, Chiara Vitelli, Enrico Maiorino, Paolo Mataloni, Fabio Sciarrino. “Experimental Boson Sampling in Arbitrary Integrated Photonic Circuits”. arXiv:1212.2783 [quant-ph] (2012).



Sunday, January 27, 2013

Quantum Flutter: A Dance of an Impurity and a Hole in a Quantum Wire

[Clockwise from Top left]: Charles J. M. Mathy, Eugene Demler, Mikhail B. Zvonarev.

Authors: 
Charles J. M. Mathy1,2, Mikhail B. Zvonarev2,3,4, Eugene Demler2

Affiliation:
1ITAMP, Harvard-Smithsonian Center for Astrophysics, Cambridge, Massachusetts, USA
2Department of Physics, Harvard University, Cambridge, Massachusetts, USA
3Université Paris-Sud, Laboratoire LPTMS, UMR8626, Orsay, France,
4CNRS, Orsay, France.

What happens when a particle moves through a medium at a velocity comparable to the speed of sound? The consequences lie at the heart of several striking phenomena in physics. In aerodynamics, for example, an object experiencing winds close to the speed of sound may undergo a vibration that grows with time, called flutter, which can ultimately have dramatic consequences such as the destruction of aeroplane wings or the iconic Tacoma Narrows bridge collapse. Other examples of physics induced by fast motion include acoustic shock waves and Cerenkov radiation. What we addressed in our work [1] was the effect of fast disturbances in strongly interacting quantum systems of many particles, in a case where the particles are effectively restricted to move in one dimension, a geometry known as a quantum wire.

When the interactions between particles are weak, quantum systems can sometimes be described by a simple hydrodynamic equation. For example, the Gross-Pitaevskii equation (GPE) describes the evolution of a weakly coupled gas of bosons at low temperatures, when it forms a Bose-Einstein condensate. The GPE is analogous to equations found in hydrodynamics, which explains why one can see analogues of classical hydrodynamic effects such as shock waves and solitons in these systems [2,3]. But what if the interactions are so strong that such an approximation breaks down?
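For concreteness, the one-dimensional GPE for the condensate wavefunction ψ(x,t), with a contact interaction of strength g, reads

iℏ ∂ψ/∂t = [ -ℏ²/(2m) ∂²/∂x² + g|ψ|² ] ψ.

Writing ψ = √n exp(iφ) turns this nonlinear Schrödinger equation into hydrodynamic equations for the density n and the velocity (ℏ/m) ∂φ/∂x, which is the precise sense in which the GPE supports sound waves, shocks and solitons.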

We have found a model that shows interesting physics induced by supersonic motion, going beyond a hydrodynamic description [1]. The system is a one-dimensional gas of hardcore bosons known as a Tonks-Girardeau (TG) gas [4]. We start the system in its ground state and inject a supersonic impurity that interacts repulsively with the background particles. We obtained exact results on what happens next using a method from mathematical physics known as the Bethe Ansatz, combined with large-scale computing resources [5]. We track the impurity velocity as a function of time and find two main surprising features. First, the impurity does not come to a complete stop; instead it sheds only part of its momentum and keeps propagating at a reduced velocity indefinitely (Fig. 1a). Second, the impurity velocity oscillates as a function of time, a phenomenon we call quantum flutter because it arises from the nonlinear interaction of a fast particle with its environment (Fig. 1b).

Figure 1: Impurity momentum evolution and quantum flutter:
a, Schematic picture of our setup. Top: We start with a one-dimensional gas of hardcore bosons of mass m, known as a Tonks-Girardeau (TG) gas (red arrows), in its ground state. We then inject an impurity, also of mass m, with finite momentum Q (green arrow). Middle: The impurity loses part of its momentum by creating a hole around itself (sphere) and emitting a sound wave into the background gas (blue arrow). However, it retains a finite momentum Qsat after this process and carries on propagating without dissipation. Bottom: legend of the different characters in the story.
b, Time evolution of the expected impurity momentum ⟨Q(t)⟩. The momentum decays to a finite value Qsat and oscillates around Qsat at a frequency we call ωosc. The background gas has density ρ, and we define a Fermi momentum kF = πρ, a Fermi energy EF = ℏ²kF²/(2m), where m is the mass of the particles, and a Fermi time tF = ℏ/EF. Inset: zoom into the plot of ⟨Q(t)⟩ showing the oscillations we call quantum flutter.
c, Time evolution of the density of the background gas in the impurity frame. More precisely, shown is the density-density correlation function G↓↑(x,t) = ⟨ρ↓(0,t) ρ↑(x,t)⟩ in units of ρ/L, where L is the system size, and the position along the wire is given in units of the interparticle distance ρ⁻¹ of the background gas. Here ρ↑ = ρ is the density of the background gas and ρ↓ is the density of the impurity. G↓↑(x,t) is effectively the density of the background gas measured relative to the impurity position. We see the formation of the correlation hole around xρ = 0 (blue valley) and the emission of the sound wave (red ridge). Underneath, a schematic illustration of the dynamics is given: the blue arrow represents the emitted sound wave, the sphere the hole, and the green arrow the impurity (see a). Inside the correlation hole the impurity and the hole are dancing, that is, oscillating with respect to each other; this is the phenomenon we denote quantum flutter.

Using the exact methods just mentioned, we were able to look in detail at the dynamical processes underlying quantum flutter. The time evolution of the impurity in the gas of bosons can be broken down into several steps. First the impurity carves out a depletion of the gas around itself, called a correlation hole. It expels the background density into a sound wave that carries away a large part of the momentum of the impurity, but not all of it (Fig. 1c). The impurity retains part of its momentum and eventually stops shedding it because of kinematic constraints: there is no sound wave it can emit into the background gas while conserving both momentum and energy.
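The kinematic argument can be made explicit with a simplified single-phonon estimate (an illustration only, not the exact calculation of ref. [1], which treats the full many-body spectrum). Treating the impurity as a free particle of mass m and momentum Q that emits a forward-moving sound wave of momentum ℏq and energy ℏcq, with c the sound velocity of the background gas, energy conservation requires

Q²/(2m) - (Q - ℏq)²/(2m) = ℏcq,   i.e.   Q/m = c + ℏq/(2m).

A solution with q > 0 exists only while the impurity velocity Q/m exceeds c, so once the impurity has slowed to the vicinity of the sound velocity there is no phonon it can emit while conserving both energy and momentum, and the remaining momentum is locked in.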

After formation of the correlation hole, the impurity momentum starts to oscillate. When the dynamics of a quantum system shows a feature that is periodic in time, the frequency of that feature typically corresponds to an energy difference between two states of the system. Examples include light emission from an atom, or spin precession in a magnetic field, which underlies nuclear magnetic resonance. In our case, the two states are an exciton and a polaron. The exciton corresponds to the impurity binding to a hole: since the impurity repels the background gas, it is attracted to a hole (i.e. a missing particle in the background). The polaron is an impurity dressed by its interactions with the background particles, which modifies properties such as its effective mass: it becomes heavier because it carries a cloud of displaced background particles along with it [6]. Thus we arrive at the following picture, shown schematically in Fig. 2: first the impurity causes the emission of a sound wave in the background gas and the creation of a hole close to it. It can bind to this hole and form an exciton, or not bind to it and form a polaron instead. In fact the impurity does both, in the sense that it forms a quantum superposition of a polaron and an exciton. This quantum superposition leads to oscillations in the impurity velocity, a phenomenon called quantum beating, which is analogous to the Larmor precession of a spin in a magnetic field. The difference here is that the two beating states, the exciton and the polaron, are strongly entangled many-particle states. That we observe long-lived quantum coherence in a system composed of infinitely many particles is surprising: such systems typically exhibit decoherence, so that if one puts a particle in a quantum superposition of two states, the superposition decays because of interactions with the other particles.
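The beating mechanism itself can be illustrated with a minimal numerical sketch (Python; the energies, amplitudes and observable below are made-up illustrative numbers, not outputs of the Bethe-Ansatz calculation of ref. [1]): any observable evaluated in a superposition of two energy eigenstates oscillates at the frequency set by their energy difference.

import numpy as np

hbar = 1.0
E_pol, E_exc = 1.00, 0.55        # assumed eigenenergies of the "polaron" and "exciton" states
a_pol, a_exc = 0.8, 0.6          # assumed superposition amplitudes, |a_pol|^2 + |a_exc|^2 = 1
Q = np.array([[0.30, 0.10],      # assumed observable (think of the impurity momentum)
              [0.10, 0.20]])     # written in the {polaron, exciton} basis

times = np.linspace(0.0, 60.0, 2000)
expQ = []
for t in times:
    psi = np.array([a_pol * np.exp(-1j * E_pol * t / hbar),
                    a_exc * np.exp(-1j * E_exc * t / hbar)])
    expQ.append(np.real(np.conj(psi) @ Q @ psi))   # <psi|Q|psi> at time t

# expQ oscillates around a constant value at the beat frequency
# (E_pol - E_exc)/hbar, the analogue of the flutter frequency ωosc.
print((E_pol - E_exc) / hbar)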

Figure 2: Origin of quantum flutter:
a, The quantum flutter oscillations originate from the formation of a superposition of states in which the impurity is entangled with its environment. After the impurity is injected into the system it creates a hole around itself. It can then bind to this hole and form an exciton, or not bind to it and instead form a state dressed by its environment, called a polaron. In fact the system forms a coherent superposition of these two possibilities, which then leads to quantum beating and to oscillations in the impurity momentum at a frequency given by the energy difference between the two states.
b, Comparison between the frequency ωosc of the oscillations in the impurity momentum and the energy difference between the polaron, E(Pol(0)), and the exciton, E(Exc(0)) (the zero in brackets indicates that the exciton and the polaron have momentum zero). The x-axis denotes the interaction strength between the impurity and the background particles: the interaction between a background particle at position xi and the impurity at position x is a contact interaction of the form g δ(xi - x), and one defines the dimensionless interaction parameter γ = mg/(ℏ²ρ). ℏωosc and E(Pol(0)) - E(Exc(0)) are in quantitative agreement, which supports the interpretation of quantum flutter as quantum beating between the exciton and the polaron.

To see quantum flutter directly in the laboratory, one can use methods from the field of ultracold atoms, in which neutral atoms are cooled and trapped using a combination of lasers and magnetic fields. The trapping potential can be chosen to restrict the atoms to move along one-dimensional tubes, where they effectively behave like a TG gas [7,8]. The interaction between the particles can be tuned using a Feshbach resonance. Impurity physics in one-dimensional TG gases has already been studied [9,10,11]. The only added ingredient needed for quantum flutter is to create impurities at finite velocity, which can be done using two-photon Raman processes. Quantum flutter can then be observed by measuring the expected impurity velocity as a function of time. Cold-atom experiments could thus confirm our predictions, and by varying the parameters of the model one could test how robust quantum flutter is. Our preliminary calculations suggest that it survives over a sizeable window of all the parameters in the theory, such as the interaction between the background particles, the mass ratio of the impurity and the background particles, and the form of the interactions.
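As a rough orientation (the numbers below are illustrative assumptions, not values used in ref. [1]): in a TG gas the sound velocity equals the Fermi velocity, c = ℏkF/m = ℏπρ/m. For ⁸⁷Rb atoms at a line density of ρ ≈ 2 atoms per μm this gives c of roughly 5 mm/s, while a two-photon Raman kick from counter-propagating 780 nm beams transfers a momentum of 2ℏkL, corresponding to a velocity of about 12 mm/s, so injecting a supersonic impurity appears to be within experimental reach.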

In summary, we have found an example of a system of many particles where injecting a supersonic impurity leads to the spontaneous formation of a long-lived quantum superposition state which travels through the system at a finite velocity. The question of which systems allow transport of quantum coherent states is important for quantum computing applications [12], and has surfaced in recent studies of quantum effects in biology [13]. Thanks to the advent of exact methods and the development of precise experiments in the study of many-particle quantum dynamics, we expect to see progress being made on this question in the near future.

References
[1] Charles J. M. Mathy, Mikhail B. Zvonarev, Eugene Demler. "Quantum flutter of supersonic particles in one-dimensional quantum liquids". Nature Physics, 8, 881 (2012). Abstract.
[2] A. M. Kamchatnov and L. P. Pitaevskii. "Stabilization of solitons generated by a supersonic flow of a Bose-Einstein condensate past an obstacle". Physical Review Letters, 100, 160402 (2008). Abstract.
[3] I. Carusotto, S. X. Hu, L. A. Collins, and A. Smerzi. "Bogoliubov-Čerenkov radiation in a Bose-Einstein condensate flowing against an obstacle". Physical Review Letters, 97, 260403 (2006). Abstract.
[4] M. Girardeau. "Relationship between systems of impenetrable bosons and fermions in one dimension". Journal of Mathematical Physics, 1, 516 (1960). Abstract.
[5] Jean-Sébastien Caux. "Correlation functions of integrable models: A description of the ABACUS algorithm". Journal of Mathematical Physics, 50, 095214 (2009). Abstract.
[6] A. S. Alexandrov and Jozef T. Devreese. "Advances in Polaron Physics". Springer Series in Solid-State Sciences, Vol. 159 (Springer, 2010).
[7] Toshiya Kinoshita, Trevor Wenger and David S. Weiss. "A quantum Newton's cradle". Nature, 440, 900 (2006). Abstract.
[8] Toshiya Kinoshita, Trevor Wenger and David S. Weiss. "Observation of a one-dimensional Tonks-Girardeau gas". Science, 305, 1125 (2004). Abstract.
[9] Stefan Palzer, Christoph Zipkes, Carlo Sias, Michael Köhl. "Quantum transport through a Tonks-Girardeau gas". Physical Review Letters, 103, 150601 (2009). Abstract.
[10] P. Wicke, S. Whitlock, and N. J. van Druten. "Controlling spin motion and interactions in a one-dimensional Bose gas". arXiv:1010.4545 [cond-mat.quant-gas] (2010).
[11] J. Catani, G. Lamporesi, D. Naik, M. Gring, M. Inguscio, F. Minardi, A. Kantian, and T. Giamarchi. "Quantum dynamics of impurities in a one-dimensional Bose gas". Physical Review A, 85, 023623 (2012). Abstract.
[12] D. V. Averin, B. Ruggiero, and P. Silvestrini. "Macroscopic Quantum Coherence and Quantum Computing". Plenum Publishers, New York (2000).
[13] Gregory S. Engel, Tessa R. Calhoun, Elizabeth L. Read, Tae-Kyu Ahn, Tomáš Mančal, Yuan-Chung Cheng, Robert E. Blankenship, Graham R. Fleming. "Evidence for wavelike energy transfer through quantum coherence in photosynthetic systems". Nature, 446, 782 (2007). Abstract.
