
2Physics


Sunday, November 24, 2013

How to Realize Quantum Key Distribution with a Limited and Noisy Link

The authors of the paper "Experimental quantum key distribution with finite-key security analysis for noisy channels" [5]: (Left to Right) Paolo Villoresi, Nicola Laurenti, Giuseppe Vallone, Davide Bacco, Matteo Canale.

Author: Paolo Villoresi

Affiliation: Dept of Information Engineering, University of Padua, Italy.

A huge amount of sensitive data travels every day over the internet: credit card numbers, emails, medical reports, social network content, etc. Such traffic is constantly increasing on a global scale, and data protection is becoming not only a privacy issue for individuals, but also an economic and national-security asset. This calls for an intense use of cryptography, as we see in everyday transactions and queries over the internet. In particular, depending on the application context, security services such as data integrity, confidentiality and authenticity may be required. The corresponding security mechanisms are implemented by means of algorithms that require cryptographic keys, that is, secret bit sequences shared only by the sender and the receiver, and totally hidden from unintended users or devices in the network [1].

Past 2Physics article by Paolo Villoresi:
May 19, 2008: "The Frontier of Quantum Communication is the Space".

By leveraging the laws of quantum physics, two distant parties are able to share cryptographic keys with unconditional security, i.e., with provably negligible information leaked to an eavesdropper. When photons are used as bit carriers, in fact, the attacker's observations of the quantum transmission statistically perturb the system itself, allowing the legitimate users to estimate the information the eavesdropper may possess and to take appropriate countermeasures (e.g., discarding the current key). By contrast, the security of classical cryptography – which is currently used on the internet – rests on mathematical problems for which no efficient solution is currently known. Such a solution, however, may appear in the near future, thanks to mathematical and computational breakthroughs as well as the development of quantum computers.

Therefore, these keys need to be securely exchanged. As opposed to classical methods, quantum cryptography can make this key exchange (quantum key distribution, or QKD) unconditionally secure, by providing the distant parties with quanta of light – photons – as the information carrier. The ultimate security may be obtained using the one-time pad (OTP) technique, which requires for each encryption and decryption a fresh and truly random key that must never be used again. Therefore, besides being the most secure, OTP is also the most key-demanding cipher.

Against this growing need for cryptographic keys, optimizing the key exchange is a major issue – except for partners that can share a box of hard drives holding multi-terabytes of random bits. For distant terminals that need to implement QKD, aspects such as the duration of the communication and the background light level are critical in assessing the viability of a QKD implementation.

In protected environments such as research laboratories, QKD systems are deployed in isolation and can assume that an arbitrarily high number of photons is exchanged; in real-world applications these conditions no longer hold. In realistic scenarios, in fact, the ratio of the number of final secret bits to the number of sent photons decreases, on the one hand, as the number of sent photons shrinks, and on the other hand as noise in the transmission channel and in the receiver apparatus increases.
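The effect of noise on the extractable key can be illustrated with the standard asymptotic BB84 secret-key fraction, r = 1 − 2h(Q), where h is the binary entropy and Q the quantum bit error rate. This is the well-known textbook (Shor–Preskill) formula, not the finite-key bound of [3] used in the paper, which is strictly more pessimistic; it already shows the rate vanishing near Q ≈ 11%:

```python
from math import log2

def h(p):
    """Binary entropy function h(p) = -p log2 p - (1-p) log2 (1-p)."""
    if p <= 0 or p >= 1:
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def bb84_asymptotic_rate(qber):
    """Asymptotic BB84 secret-key fraction r = 1 - 2*h(Q) (Shor-Preskill).
    The finite-key rate analyzed in the paper is strictly smaller."""
    return max(0.0, 1 - 2 * h(qber))

for q in (0.01, 0.05, 0.10, 0.12):
    print(f"QBER = {q:.2f}  ->  key fraction ~ {bb84_asymptotic_rate(q):.3f}")
```

In the finite-key regime an additional penalty, which grows as the number of exchanged photons shrinks, is subtracted from this fraction.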

Image 1: Quantum communications using satellites are the best example of QKD with finite – and often short – duration and noisy channels.

For real-world scenarios of crucial data exchange between remote locations, as in satellite communication, the key may be exchanged during satellite passages – which may last as little as a few minutes per hour – and is affected by background illumination that induces spurious light in the detectors [2]. Only recently was the theoretical security analysis extended to the case of finite communication time [3], providing a theory for the maximum rate and the measurement strategy for QKD with a modified BB84 protocol (for details of the BB84 protocol, see the Wikipedia page [4]).
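The sifting stage at the heart of BB84 can be sketched in a few lines. The toy model below assumes an ideal noiseless channel with no eavesdropper (so the sifted keys agree exactly and roughly half of the pulses survive); all names and parameters are illustrative:

```python
import random

def bb84_sift(n_pulses, seed=7):
    """Toy BB84 sifting over an ideal (noiseless, eavesdropper-free) channel.
    Alice sends random bits in random bases; Bob measures in random bases;
    only the rounds with matching bases survive sifting."""
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n_pulses)]
    alice_bases = [rng.randint(0, 1) for _ in range(n_pulses)]  # 0 = rectilinear, 1 = diagonal
    bob_bases   = [rng.randint(0, 1) for _ in range(n_pulses)]
    # On a noiseless channel Bob's outcome equals Alice's bit when bases match;
    # when the bases differ the outcome is random and the round is discarded.
    sifted_alice, sifted_bob = [], []
    for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases):
        if ab == bb:
            sifted_alice.append(bit)
            sifted_bob.append(bit)
    return sifted_alice, sifted_bob

a_key, b_key = bb84_sift(1000)
qber = sum(x != y for x, y in zip(a_key, b_key)) / len(a_key)
print(f"sifted length = {len(a_key)}, QBER = {qber:.3f}")
```

On a real channel a fraction of the sifted bits would disagree; that measured QBER is what enters the key-rate analysis.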

In the work by our group recently published in Nature Communications [5] – Davide Bacco, Matteo Canale, Nicola Laurenti, Giuseppe Vallone and Paolo Villoresi, all with the Department of Information Engineering at the University of Padova – we experimentally studied the limits of quantum key distribution in the presence of environmental noise, with the transmission of a limited number of photons, and under different attack models.
Image 2: Details of the QKD receiver used in the experiment.

In the 'Nature Communications' paper [5] we described a fully fledged realization of a QKD system in the finite-key regime in full detail, which can serve as a practical guide for experimental realizations of the modified BB84 protocol. Moreover, in our work we extend the analysis of the key rate: the most general security analysis includes all possible attacks by an eavesdropper, who may store the sniffed photons in a quantum memory (q-memory) and analyze them at leisure. In addition, we introduce the notion of pragmatic security, which is relevant for today's attack possibilities since large q-memories are not yet available. This limitation allows one to extract more secret bits than are obtainable under a requirement of general security, that is, under the assumption that Eve is limited only by the laws of physics.

Moreover, pragmatic secrecy offers forward security: if a key is produced today with pragmatic secrecy (without a q-memory available to Eve), the key, or a message encrypted with it, will remain secure for any future task, even once q-memories become available. This is in contrast to computationally secure classical cryptography or key agreement, where the available information can be stored by the eavesdropper and decrypted in the future with greater computational power (either technological or algorithmic).

This idea is supported by considering that in a long-term perspective (more than 50 years), general security is the goal. In the near future (5-10 years), we know that an ideal intercept-resend attack is the best option an eavesdropper can choose, because the quantum memory needed for a general or coherent attack is not yet available. This analysis is crucial for actual implementations of QKD beyond fiber, such as satellite quantum communication, a situation generally characterized by a short key due to a low rate and high background noise.

This result opens perspectives for scenarios where the transmission window is limited by physical constraints, as for satellite communications, where the passage of one terminal over the other is restricted to a few minutes. The Padua team has been active on Satellite Quantum Communications for years, and obtained the first experimental demonstration of single photon exchange with an orbiting terminal in 2008 [2].

Acknowledgments: The work was carried out within QuantumFuture, one of ten Strategic Projects funded by the University of Padova in 2009. Coordinated by Prof. Villoresi, the project has established the Quantum Communication Laboratory and engaged four research groups in a joint activity: Quantum Communications, Quantum Control Theory, Quantum Astronomy and Quantum Optics.

References:
[1] Valerio Scarani, Helle Bechmann-Pasquinucci, Nicolas J. Cerf, Miloslav Dušek, Norbert Lütkenhaus, Momtchil Peev, "The security of practical quantum key distribution", Reviews of Modern Physics, 81, 1301 (2009). Abstract.
[2] P Villoresi, T Jennewein, F Tamburini, M Aspelmeyer, C Bonato, R Ursin, C Pernechele, V Luceri, G Bianco, A Zeilinger and C Barbieri, "Experimental verification of the feasibility of a quantum channel between space and Earth", New Journal of Physics, 10, 033038 (2008) [IOP select paper]. Abstract. 2Physics Article.
[3] Marco Tomamichel, Charles Ci Wen Lim, Nicolas Gisin, Renato Renner, "Tight finite-key analysis for quantum cryptography", Nature Communications, 3:634, doi:10.1038/ncomms1631 (2012). Abstract.
[4] Wikipedia page on BB84 protocol.
[5] Davide Bacco, Matteo Canale, Nicola Laurenti, Giuseppe Vallone, Paolo Villoresi, "Experimental quantum key distribution with finite-key security analysis for noisy channels", Nature Communications, 4:2363, doi: 10.1038/ncomms3363 (2013). Abstract.



Sunday, November 17, 2013

Universality in Network Dynamics

Baruch Barzel (left) and Albert-László Barabási (right)

Authors: Baruch Barzel1,2 and Albert-László Barabási1,2,3

Affiliation:
1Center for Complex Network Research and Departments of Physics, Computer Science and Biology, Northeastern University, Boston, USA
2Center for Cancer Systems Biology, Dana-Farber Cancer Institute, Harvard Medical School, Boston, USA
3Department of Medicine, Brigham and Women’s Hospital, Harvard Medical School, Boston, USA.

We think of statistical mechanics as the theory of interacting particles, gases and liquids. Its toolbox, however – indeed, its way of thought – goes beyond the domain of material science. In a broader perspective, what statistical mechanics provides us with is a bridge between the microscopic description of a system and its observed macroscopic behavior. With it we can track the way in which system-level phenomena emerge from the mechanistic description of the system’s interacting components: for instance, how the blind interactions between pairs of magnetic spins lead to the seemingly cooperative phenomenon of magnetism, or, alternatively, how individual human interactions lead to the spread of ideas and perceptions.

At the heart of many of these emergent behaviors lies a rather simple characteristic of the system – its geometry. Indeed, many predictions in statistical mechanics can be traced back to the underlying geometry of the system’s interactions; most importantly, the system’s dimension [1,2]. This focus on geometry and dimensionality naturally portrays a picture of particles interacting in a defined topological structure, such as a lattice (or lattice-like) embedding. Recently, however, we have begun to shift our attention towards less structured systems – complex systems - which exhibit highly random and non-localized interaction patterns. Social networks, biological interactions and technological systems, such as the Internet or the power grid, are just a few examples, all of which are described by complex interaction networks, profoundly different from lattices, and hence they confront us with a potentially new class of observed dynamical behaviors.

The complex network geometry is distinguished by two key topological properties. First is the famous small-world phenomenon. To understand it, let us begin by observing the behavior of structured, lattice-like systems. In such systems the number of nodes within distance l, i.e. the volume, and the typical distance between nodes are closely related – they follow the scaling law N(l) ∼ l^d, where d represents the system’s dimension. In contrast, networked systems are much more crowded. Within a small distance you can visit an exponentially proliferating volume of nodes [3],

N(l) ∼ e^(l/l0),     (1)

so that the average distance between nodes does not scale with the system’s volume. This is the secret that allows enormous systems, such as society, to be connected by extremely short paths. Even just a small amount of structural randomness is sufficient to push the system into the small-world limit [4], rendering the unique geometry described by (1) to be, in fact, not so unique. Indeed, Eq. (1) is observed by practically all complex systems – biological, social and technological. The second notion we must put aside when dealing with complex systems is the idea of a uniform pattern of connections. The random nature of complex systems invokes heterogeneity between the nodes: each node interacts with a different number of neighbors, leading to a distribution of node degrees, P(k). Most complex systems have been shown to feature a fat-tailed P(k), describing an extremely heterogeneous structure in which a majority of low-degree nodes coexists with a small number of highly connected hubs [5].

This complex geometry raises an array of questions on the expected behavior of complex systems: What role does the broad span of degrees play in the system? Are the hubs really more influential than the average node? Are they more or, perhaps, less sensitive to their neighbors’ influence? We would also like to understand the impact of the small world topology. If all nodes are connected by short network paths, does that imply that they are also within each other’s dynamical reach? That would describe a system in which all nodes are impacted by all other nodes, a rather unstable scenario.

Of course, the answer to these questions is not purely in the geometry, but rather in the interplay between this geometry and the dynamics of the system, specifically in the meaning that lies behind the network links. By meaning we refer to the fact that the reduction of a complex system into a network structure gives no account of the type of interaction that occurs between node pairs – from chemical binding in biological networks to viral infection in social systems – the links in different systems represent different dynamical mechanisms or rules of interaction. We account for this diversity by writing a most general equation for the activities, xi, of all nodes:

dxi/dt = W(xi) + Σj Aij Q(xi, xj).     (2)

The first term on the r.h.s. of (2) accounts for the self-dynamics and the second term sums over all of i’s interacting partners (Aij is the adjacency matrix, i.e. the network). With the proper choice of the non-linear functions W(xi) and Q(xi, xj), Eq. (2) quite generally covers a broad array – indeed almost all – deterministic pairwise dynamics, among which are models frequently implemented on networks, such as biochemistry [6], genetic regulation [7], social [8] and ecological phenomena [9]. Hence Eq. (2) accounts for the inner mechanisms of the interaction rules, providing the microscopic description of the system’s dynamics.

To observe the system’s macroscopic behavior we use a perturbative approach. First we induce a small permanent perturbation dxj on the steady state of node j, representing, for instance, the knockout of a gene in a cellular network or the failure of a node in a technological system. We then measure the response of all other nodes in the system, constructing the response matrix

Gij = |(dxi/xi) / (dxj/xj)|.     (3)

This translates the structural properties of the system – who is connected to whom – into its dynamical patterns of influence, namely who is affected by whom. The Gij matrix provides an extremely detailed account of the system’s response to external perturbations – a term for each pair of nodes in the system. We obtain more meaningful insight into the system’s behavior by focusing on a set of aggregate functions extracted from (3), designed to directly answer the questions posed above.

Local dynamics: To understand the dynamical role of a node’s degree we characterize the dynamics between a node, i, and its immediate surroundings by measuring: (i) the impact, Ii, of i’s perturbation on its direct neighbors; (ii) the response of i to neighboring perturbations. The latter quantifies i’s dynamical stability, Si, since a dramatic response to perturbations characterizes i as unstable, whereas a suppressed response implies high stability. We show that these two functions depend on i’s degree, ki, as

Si ∼ ki^δ,   Ii ∼ ki^φ,     (4)

where the exponents δ and φ are fully determined by the leading terms of the functions W(xi) and Q(xi, xj) of Eq. (2). The exponents δ and φ determine the dynamical consequence of the network’s degree heterogeneity by relating the dynamical impact/response of a node to its degree. Eq. (4) tells us that different dynamical systems will be characterized by different exponents δ and φ, depending on the form of Eq. (2), and consequently they will express different patterns of influence between hubs and peripheral nodes.

Most importantly, δ and φ predict the existence of four highly distinctive dynamical universality classes. If, for example, φ=0, we find that a node’s impact is independent of its degree. All nodes exhibit uniform impact, regardless of the nature of the degree distribution, P(k) (fat-tailed or bounded). Hence even under extreme structural heterogeneity, i.e. a fat-tailed P(k), all nodes will have a comparable dynamical impact. If, however, φ≠0, the dynamical impact is driven by P(k), and consequently, for a scale-free network, the system will feature heterogeneous impact, in which most nodes have a tiny effect on their surroundings and a small group of selected nodes dominates the system’s local dynamics. A similar distinction, between uniform and heterogeneous stability, emerges from the universal value of δ. What is striking is that these crucial distinctions are determined by a rather general formulation of the dynamics (2), just a small number of leading terms. Finer details, such as higher-order terms of (2), specific rate constants or the detailed structure of the network, play no role. As a result the predicted universality classes encompass a broad range of distinctive dynamics. For instance, all biochemical interactions are predicted to feature uniform stability; epidemic spreading processes will inevitably display heterogeneous impact.
Figure 1: From microscopic diversity to macroscopic universality. (a) Dynamical networks exhibit diverse types of interaction dynamics, from ecological to biological and social networks. (b) This diversity is described by a general dynamical equation (2), whose terms account for the microscopic interaction mechanisms between the nodes. (c) We observe the system’s behavior through Gij (3), capturing the patterns of influence characterizing the system. Systems are restricted to limited classes of dynamical behavior: uniform (blue) versus heterogeneous (orange) impact (top-left); uniform versus heterogeneous stability (bottom-left); conservative versus dissipative propagation (bottom-center). These classes determine other dynamical functions such as the cascade size distribution (top-right) and the correlation distribution (bottom-right).

Propagation: Perhaps the most crucial, and yet quite elusive, dynamical function in a network environment is the propagation of perturbations from the perturbed source to more distant nodes. This is captured by the distance-dependent correlation function, Γ(l), which tracks the impact of a perturbation at distance l. We find, analytically, that perturbations decay exponentially as

Γ(l) ∼ e^(−βl),     (5)

where, like δ and φ, the dissipation rate, β, is fully determined by the leading terms of W(xi) and Q(xi, xj) in (2). The correlation function describes the efficiency of the network in propagating information. A rapid decay of Γ(l) ensures that perturbations remain localized, dying out before they reach most nodes. A slow decay describes efficient penetration of perturbations. Again, we find two distinctive dynamical classes: if β=0 in (5) we observe conservative dynamics, in which perturbations penetrate the network without loss, conserving the mass of the initial perturbation. If, however, β>0, the system is dissipative, perturbations decay exponentially, and the dynamics is localized in the vicinity of the perturbed node. The case β<0, describing an unstable response with perturbations growing out of control, is prohibited in the limit of the small-world topology (1), a crucial feature explaining the observed resilience of many complex systems to perturbations.

As for δ and φ, the value of β is insensitive to details. The specific parameters and detailed network structure have a marginal effect. Hence this distinction between conservative vs. dissipative dynamics represents a broad classification, where each class consists of many different dynamical systems. For instance, all spreading processes are dissipative; and all biochemical or population dynamics are conservative. Another way to state this is that in order to transfer a system between any of the above universality classes, uniform vs. heterogeneous or conservative vs. dissipative, it is not enough to tune a parameter or make a minor change in the dynamics. One must change the leading terms of Eq. (2), that is, essentially alter the internal mechanisms of the interactions: change the system from, say, an epidemic spreading social system to a genetically regulated biological system. Hence the predictions (4) and (5) touch upon the defining features of the network’s dynamics.

The prediction of the local dynamics (4) together with the propagation (5) allows us to predict a much broader set of macroscopic dynamical functions. Indeed, the global response of the system to a perturbation is a direct consequence of the coupling between the local response and the propagation to more distant nodes. In that sense the universal exponents δ, φ and β serve as building blocks for the construction of a broad range of universal functions extracted from Gij. To demonstrate this we found that we can use (4) and (5) to predict the precise form of the cascade size distribution, P(C), and the correlation distribution, P(G). These two functions, among the most frequently pursued in empirical observations of complex systems’ dynamics [10-14], have long been known to exhibit universal behavior, for which we lacked an adequate theoretical explanation. Now we can use Eqs. (4) and (5) to predict these functions and express them in terms of δ, φ and β. The meaning is that P(C) and P(G) inherit their universality from Si, Ii and Γ(l). Hence our formalism not only provides a general explanation for the universality of P(C) and P(G), but also allowed us to analytically predict them directly from the structure of (2), connecting these pertinent macroscopic observations with a microscopic understanding of the system’s mechanistic interactions.

Going back to the fundamental paradigm of statistical mechanics, we make a rather strong prediction: that the observed macroscopic dynamics of complex systems can be predicted directly from their microscopic description (Eq. (2)), through a set of universal exponents. Hence, while microscopically complex systems exhibit a broad range of diverse dynamical models, across multiple fields and scientific domains, their macroscopic behavior condenses into a discrete set of dynamical universality classes. This is an optimistic message if our goal is to predict the behavior of complex systems. Indeed, universality provides us with predictive power, as it shows little sensitivity to microscopic details. It therefore helps make complex systems a bit more… simple.

References: 
[1] Leo P. Kadanoff. "Scaling and Universality in Statistical Physics", Physica A 163, 1 (1990). Article.
[2] Kenneth G. Wilson. "The renormalization group: critical phenomena and the Kondo problem". Reviews of Modern Physics, 47, 773 (1975). Abstract.
[3] Mark E.J. Newman. "Networks - an introduction". Oxford University Press, New York, (2010). 
[4] Duncan J. Watts and Steven H. Strogatz. "Collective dynamics of ‘small-world’ networks". Nature, 393, 440 (1998). Abstract.
[5] Albert-László Barabási, Réka Albert. "Emergence of scaling in random networks". Science, 286, 509 (1999). Abstract.
[6] Eberhard O. Voit. "Computational analysis of biochemical systems" (Cambridge University Press, 2000). Google Book.
[7] Guy Karlebach and Ron Shamir. "Modelling and analysis of gene regulatory networks". Nature Reviews Molecular Cell Biology, 9, 770–780 (2008). Abstract.
[8] P.S. Dodds and D.J. Watts. "A generalized model of social and biological contagion". Journal of Theoretical Biology, 232, 587–604 (2005). Abstract.
[9] Artem S. Novozhilov, Georgy P. Karev and Eugene V. Koonin. "Biological applications of the theory of birth-and-death processes". Briefings in Bioinformatics, 7, 70–85 (2006). Article.
[10] Chikara Furusawa and Kunihiko Kaneko. "Zipf's law in gene expression". Physical Review Letters, 90, 088102 (2003). Abstract.
[11] Victor M. Eguíluz, Dante R. Chialvo, Guillermo A. Cecchi, Marwan Baliki, A. Vania Apkarian. "Scale-free brain functional networks". Physical Review Letters, 94, 018102 (2005). Abstract
[12] J. Leskovec, M. McGlohon, C. Faloutsos, N. Glance and M. Hurst. "Patterns of cascading behavior in large blog graphs". Proceedings of the SIAM International Conference on Data Mining, 551–556 (2007).
[13] Paolo Crucitti, Vito Latora and Massimo Marchiori. "Model for cascading failures in complex networks". Physical Review E, 69, 045104 (2004). Abstract.
[14] Ian Dobson, Benjamin A. Carreras, Vickie E. Lynch and David E. Newman. "Complex systems analysis of series of blackouts: Cascading failure, critical points, and self-organization". Chaos 17, 026103 (2007). Abstract.



Sunday, November 10, 2013

Experimental Determination of Ramsey Numbers

(Left to Right) Zhengbing Bian, Fabian Chudak, William G. Macready, Lane Clark

Authors: Zhengbing Bian1, Fabian Chudak1, William G. Macready1, Lane Clark2, Frank Gaitan3

Affiliation:
1D-Wave Systems Inc., Burnaby, British Columbia, Canada
2Dept. of Mathematics, Southern Illinois University, Carbondale, Illinois, USA
3Laboratory for Physical Sciences, College Park, Maryland, USA

Frank Gaitan

Ramsey numbers are most easily introduced through the Party Problem. Here a host is assembling a guest list of N people. She wonders whether the list might contain a group of m guests who are mutual friends, or a group of n guests who are mutual strangers. Surprisingly, it can be proved [1] that there exists a threshold size R(m,n) for the guest list such that: (i) if N ≥ R(m,n), then all guest lists are sure to contain a clique of m mutual friends or one of n mutual strangers, and (ii) if N < R(m,n), then a guest list exists which contains no group of m mutual friends or n mutual strangers. The threshold value R(m,n) is known as a two-color Ramsey number.

It proves fruitful to convert the N-person guest list into an N-vertex graph by associating each guest with a distinct vertex in the graph. We draw a red (blue) edge between two vertices if the two corresponding people know (do not know) each other. Thus, if m (n) guests all know (do not know) each other, there will be a red (blue) edge connecting any two of the m (n) corresponding vertices. The m (n) mutual friends (strangers) are said to form a red m-clique (blue n-clique). In this way, each N-person guest list is associated with a particular red/blue coloring of the edges of a complete N-vertex graph (in which every pair of vertices is joined by an edge). The threshold result for the Party Problem is now the following statement about graphs: if N ≥ R(m,n), then every red-blue edge coloring of a complete N-vertex graph will contain either a red m-clique or a blue n-clique, while if N < R(m,n), a red/blue coloring exists which has no red m-clique or blue n-clique. The use of two colors is why R(m,n) is called a two-color Ramsey number. Figure 1 shows a 6-vertex graph that contains two blue 3-cliques and no red 3-cliques.

Figure 1: A 6-vertex graph representing a Guest List in which there are: (i) two cliques of three people that are mutual strangers: Guests 1, 2, and 3, and Guests 4, 5, and 6; and (ii) no cliques of three mutual friends.

As m and n increase, the threshold R(m,n) grows extremely rapidly, and even for quite small values of m and n it becomes extraordinarily difficult to compute. In fact, for m, n ≥ 3, only nine values of R(m,n) are currently known [2]. To dramatize the difficulty of calculating R(m,n), Spencer [3] recounts a famous vignette due to Paul Erdős in which aliens have invaded earth and threaten to destroy it in one year unless humans can correctly determine R(5,5). Erdős noted that this could probably be done if all the best minds and most powerful computers were focused on the task. In this case, the appropriate response to the alien demand would be to get busy. However, Erdős pointed out that if the aliens had instead demanded R(6,6), earthlings should immediately launch a first strike on the aliens, as there is little hope of meeting their demand.

In 2012 a quantum algorithm for computing two-color Ramsey numbers appeared which was based on adiabatic quantum evolution [4]. The essential step in constructing the Ramsey number algorithm was to transform a Ramsey number computation into an optimization problem whose solution is an N-vertex graph that has the smallest total count of red m-cliques and blue n-cliques. The optimization cost function assigns to each graph the total count of red m-cliques and blue n-cliques that it contains. Thus, if N < R(m,n), an optimal graph will have zero cost, corresponding to a guest list with no m-clique of friends or n-clique of strangers. On the other hand, if N ≥ R(m,n), then an optimal graph has non-zero cost since all guest lists will have a red m-clique or blue n-clique. The Ramsey number quantum algorithm associates each graph with a unique quantum state of the qubit-register and constructs the qubit interactions so that the energy of this state (the Ramsey energy) is the value of the graph’s cost function. By design, the minimum Ramsey energy is the minimum value of the cost function, and the associated quantum state identifies a minimum cost graph which is thus a solution of the optimization problem.

The adiabatic quantum evolution algorithm [5] is used to transform a fiducial state of the qubit register into a state with lowest Ramsey energy. The Ramsey number algorithm implements the following iterative procedure. It starts with graphs with N < R(m,n) and carries out the adiabatic quantum evolution described above. At the end of the evolution, the qubit state is measured and the Ramsey energy is determined from the measurement result. In the adiabatic limit, for N < R(m,n), the measured Ramsey energy will be zero. The number of vertices N is then increased by one, the adiabatic evolution and measurement are re-run, and the Ramsey energy re-computed. This process is repeated until a non-zero Ramsey energy is first found. In the adiabatic limit, the final value of N is the Ramsey number R(m,n), since it is the smallest N value for which all graphs have a non-zero cost and thus contain a red m-clique or blue n-clique. Ref. [4] tested the algorithm via numerical simulations and it correctly found the Ramsey numbers R(3,3) = 6 and R(m,2) = m for 5 ≤ m ≤ 7.
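For tiny graphs the same cost function and iterative procedure can be checked classically by exhaustive search instead of quantum annealing. The sketch below (a brute-force classical stand-in, not the quantum algorithm) minimizes the count of red m-cliques plus blue n-cliques over all red/blue edge colorings of K_N, increasing N until the minimum cost becomes non-zero, and thereby recovers R(3,3) = 6:

```python
from itertools import combinations, product

def min_clique_cost(n_vertices, m=3, n=3):
    """Minimum, over all red/blue colorings of the complete graph K_N, of the
    total number of red m-cliques plus blue n-cliques -- the cost function of
    the Ramsey number algorithm, minimized here by exhaustive classical search
    (feasible only for very small N)."""
    edges = list(combinations(range(n_vertices), 2))
    eidx = {e: i for i, e in enumerate(edges)}
    # precompute, for each candidate clique, the indices of its edges
    m_cliques = [[eidx[e] for e in combinations(c, 2)]
                 for c in combinations(range(n_vertices), m)]
    n_cliques = [[eidx[e] for e in combinations(c, 2)]
                 for c in combinations(range(n_vertices), n)]
    best = None
    for colors in product((0, 1), repeat=len(edges)):   # 0 = red, 1 = blue
        cost = sum(all(colors[i] == 0 for i in cl) for cl in m_cliques) \
             + sum(all(colors[i] == 1 for i in cl) for cl in n_cliques)
        best = cost if best is None else min(best, cost)
        if best == 0:
            break       # a zero-cost coloring exists, so N < R(m,n)
    return best

# R(3,3) is the smallest N for which the minimum cost is non-zero.
N = 3
while min_clique_cost(N) == 0:
    N += 1
print("R(3,3) =", N)   # prints: R(3,3) = 6
```

The exhaustive search grows as 2^(N(N-1)/2), which is precisely why larger Ramsey numbers call for approaches such as the quantum annealing described here.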

Figure 2: Image of the harness used to lower the D-Wave One 128 qubit chip into a dilution refrigerator. The chip can be seen at the bottom of the harness.

Recently, the Ramsey number quantum algorithm was successfully implemented on a D-Wave One quantum annealing device [6]. Quantum annealing [7] can be used to drive the adiabatic dynamics of the Ramsey number algorithm even at finite temperature and in the presence of decoherence. The D-Wave One processor is a chip that contains 128 rf SQUID flux qubits. The chip is shown in Figure 2 at the end of a harness used to place it in a dilution refrigerator that operates at 20 mK.

The chip used in the Ramsey number experiments contained 106 usable qubits. Figure 3 shows the chip layout, where the grey circles correspond to usable qubits, and the white circles to qubits that could not be calibrated into the desired tolerance range and so were not used. The edges joining qubits represent couplers that allow the qubits to interact and that can take on programmed values.

Figure 3: Layout of the 128 qubit chip. The chip architecture is a 4x4 array of unit cells, with each unit cell containing 8 qubits. Qubits are labeled from 1 to 128, and edges between qubits indicate couplers which may take programmable values. Grey qubits indicate usable qubits, while white qubits indicate qubits which, due to fabrication defects, could not be calibrated to operating tolerances and were not used. All Ramsey number experiments were done on a chip with 106 usable qubits.

Figure 4 shows an 8-qubit unit cell of the chip. Each red loop highlights one of the rf SQUID flux qubits. Programming the Ramsey number cost function onto the D-Wave One chip requires embedding the optimization variables into the qubit architecture in a way that respects the qubit-qubit couplings available on the chip. Figure 5 shows the embedding used to determine the Ramsey number R(8,2). Once the cost function for a given Ramsey number had been programmed onto the chip, 100,000 quantum annealing sweeps and measurements were carried out, and the resulting Ramsey energies were binned into a histogram.
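A rough classical analogue of this anneal-and-measure loop (illustrative only — the device performs quantum annealing in hardware) is to run simulated annealing on a small Ising instance many times and histogram the final energies:

```python
import math
import random
from collections import Counter

def ising_energy(J, h, s):
    """E(s) = sum_{i<j} J_ij s_i s_j + sum_i h_i s_i, spins s_i = +/-1."""
    return (sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
            + sum(hi * si for hi, si in zip(h, s)))

def anneal(J, h, n_spins, sweeps=200, rng=random):
    """Metropolis simulated annealing with a linear cooling schedule."""
    s = [rng.choice((-1, 1)) for _ in range(n_spins)]
    for t in range(sweeps):
        T = max(0.01, 3.0 * (1 - t / sweeps))
        for i in range(n_spins):
            # local field on spin i, and the energy change if it flips
            field = h[i] + sum(Jij * s[b if a == i else a]
                               for (a, b), Jij in J.items() if i in (a, b))
            dE = -2 * s[i] * field
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                s[i] = -s[i]
    return s

# Tiny 3-spin ferromagnetic ring: ground states are all-up/all-down, E = -3.
J = {(0, 1): -1.0, (1, 2): -1.0, (0, 2): -1.0}
h = [0.0, 0.0, 0.0]
rng = random.Random(0)
hist = Counter(ising_energy(J, h, anneal(J, h, 3, rng=rng))
               for _ in range(1000))
# `hist` counts how often each final energy occurred, analogous to the
# histograms of measured Ramsey energies in Ref. [6]
```

With a slow enough schedule nearly every run ends in the ground-state energy bin; on the hardware, the lowest bin is read off as the minimum Ramsey energy.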

In all cases considered, the lowest Ramsey energy found was identical to the minimum Ramsey energy known from previous work [4]. The protocol for the Ramsey number algorithm was carried out, and the value of N at which the minimum Ramsey energy first became non-zero was identified with the Ramsey number R(m,n). The experiments correctly determined that R(3,3) = 6 and R(m,2) = m for 4 ≤ m ≤ 8. Although all Ramsey numbers found in Ref. [6] were already known, future hardware generations may be large enough to allow unknown Ramsey numbers to be found.
Figure 4: A single 8-qubit unit cell. Each red loop highlights an rf SQUID flux qubit.

Figure 5: Embedding used to compute the Ramsey number R(8,2) that produced the needed qubit couplings. In low energy states, like-colored qubits have the same Bloch vector and constitute a single effective qubit. This allows an indirect coupling of qubits that are not directly coupled on the chip.
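The effective-qubit construction in the caption can be illustrated with a toy Ising model (a minimal sketch of the chain idea, not the actual on-chip embedding): a strong ferromagnetic coupling −F between the two physical spins of a chain forces them to align in every low-energy state, so the chain acts as one logical spin.

```python
from itertools import product

F = 10.0  # strong ferromagnetic binding within each two-spin chain

def energy(s):
    # logical qubits A = (s0, s1) and B = (s2, s3); the chains are bound
    # by -F couplings, and a logical coupling +1 acts between s1 and s2
    return -F * s[0] * s[1] - F * s[2] * s[3] + s[1] * s[2]

states = sorted(product((-1, 1), repeat=4), key=energy)
ground = [s for s in states if energy(s) == energy(states[0])]
# in every ground state each chain is internally aligned, so each chain
# behaves as a single effective qubit mediating an indirect coupling
```

Here the two ground states are (1, 1, −1, −1) and (−1, −1, 1, 1): the chains align internally and anti-align with each other, exactly as a direct antiferromagnetic coupling between two logical qubits would demand.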

References:
[1] Ronald L. Graham, Bruce L. Rothschild, and Joel H. Spencer, "Ramsey Theory" (Wiley, NY, 1990).
[2] See the Wikipedia page on Ramsey's theorem for a list of all known two-color Ramsey numbers, as well as bounds on R(m,n) when its value is not known, for m, n ≤ 10.
[3] Joel Spencer, "Ten Lectures on the Probabilistic Method", 2nd ed. (SIAM, Philadelphia, 1994).
[4] Frank Gaitan and Lane Clark, "Ramsey Numbers and Adiabatic Quantum Computing", Physical Review Letters 108, 010501 (2012). Abstract.
[5] Edward Farhi, Jeffrey Goldstone, Sam Gutmann, Michael Sipser, "Quantum Computation by Adiabatic Evolution", arXiv:quant-ph/0001106 (2000).
[6] Zhengbing Bian, Fabian Chudak, William G. Macready, Lane Clark, and Frank Gaitan, "Experimental Determination of Ramsey Numbers", Physical Review Letters, 111, 130505 (2013). Abstract.
[7] Tadashi Kadowaki and Hidetoshi Nishimori, "Quantum annealing in the transverse Ising model", Physical Review E 58, 5355 (1998). Abstract; Giuseppe E. Santoro, Roman Martoňák, Erio Tosatti, Roberto Car, "Theory of Quantum Annealing of an Ising Spin Glass", Science 295, 2427 (2002). Abstract.



Sunday, November 03, 2013

Entanglement-enhanced Atom Interferometer with High Spatial Resolution

(From left to right) Philipp Treutlein, Roman Schmied, and Caspar Ockeloen

Authors: Caspar Ockeloen, Roman Schmied, Max F. Riedel, Philipp Treutlein

Affiliation: Department of Physics, University of Basel, Switzerland

Link to Quantum Atom Optics Lab, Treutlein Group >>

Interferometry is the cornerstone of most modern precision measurements. Atom interferometers, making use of the wave-like nature of matter, allow for ultraprecise measurements of gravitation, inertial forces, fundamental constants, electromagnetic fields, and time [1,2]. A well-known application of atom interferometry is found in atomic clocks, which provide the definition of the second. Most current atom interferometers operate with a large number of particles, which provides high precision but limited spatial resolution. Using a small atomic cloud as a scanning probe interferometer would enable new applications in electromagnetic field sensing, surface science, and the search for fundamental short-range interactions [2].

Past 2Physics article by Philipp Treutlein:
May 09, 2010: "Interface Between Two Worlds -- Ultracold atoms coupled to a micromechanical oscillator"
by Philipp Treutlein, David Hunger, Stephan Camerer.

In an atom interferometer, the external (motional) or internal (spin) state of atoms is coherently split and allowed to follow two different pathways. During the interrogation time T, a phase difference between the paths is accumulated, which depends on the quantity to be measured. When the paths are recombined, the wave-character of the atoms gives rise to an interference pattern, from which the phase can be determined. To measure this interference, the number of atoms in each output state is counted. Here the particle-character of the atoms is revealed, as the measurement process randomly projects the wave function of each atom into a definite state. When operating with an ensemble of N uncorrelated (non-entangled) atoms, the binomial counting statistics limits the phase uncertainty of the interferometer to 1/√N, the standard quantum limit (SQL) of interferometric measurement.
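The 1/√N scaling can be checked with a quick Monte-Carlo sketch (our illustration, not from the paper): simulate the binomial projection noise for N uncorrelated atoms at mid-fringe and compare the spread of the phase estimate with 1/√N.

```python
import math
import random

def phase_spread(N, trials=2000, seed=1):
    """Standard deviation of the phase estimated from projection-noise-
    limited counts of N uncorrelated atoms, at mid-fringe (phi = pi/2)."""
    rng = random.Random(seed)
    p = 0.5  # probability of detecting each atom in state |1> at mid-fringe
    estimates = []
    for _ in range(trials):
        k = sum(rng.random() < p for _ in range(N))  # atoms counted in |1>
        x = max(-1.0, min(1.0, 2 * k / N - 1))       # fringe value cos(phi)
        estimates.append(math.acos(x))               # invert the fringe
    mean = sum(estimates) / trials
    return math.sqrt(sum((e - mean) ** 2 for e in estimates) / trials)

# phase_spread(1000) comes out close to the SQL, 1/sqrt(1000) ~ 0.032 rad
```

Doubling the precision at the SQL thus costs four times as many atoms, which is why entanglement becomes attractive once N is capped.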

It is possible to overcome the SQL by making use of entanglement between the atoms [3]. Using such quantum correlations, the measurement outcome of each atom can depend on that of the other atoms. If used in a clever way, the phase uncertainty of an interferometer can be reduced below the SQL, in theory down to the ultimate Heisenberg limit of 1/N. Such entanglement-enhanced interferometry is in particular useful in situations where the number of atoms is limited by a physical process and the sensitivity can no longer be improved by simply increasing N. One such scenario is when high spatial resolution is desired. The number of atoms in a small probe volume is fundamentally limited by density-dependent losses due to collisions. As more atoms are added to this volume, the collision rate increases, and eventually any additional atoms are simply lost from the trap before the interferometer sequence has completed. This sets a tight limit on both the phase uncertainty and the maximum interrogation time T.
Fig. 1. Experimental setup. a) Central region of the atom chip showing the atomic probe (blue, size to scale) and the scanning trajectory we use. The probe is used to measure the magnetic near-field potential generated by an on-chip microwave guide (microwave currents indicated by arrows). A simulation of the potential is shown in red/yellow. b) Photograph of the atom chip, mounted on its ultra-high vacuum chamber.

In a recent paper [4] we have demonstrated a scanning-probe atom interferometer that overcomes the SQL using entanglement. Our interferometer probe is a Bose-Einstein condensate (BEC) on an atom chip, a micro-fabricated device with current-carrying wires that allow magnetic trapping and accurate positioning of neutral atoms close to the chip surface [5]. A schematic view of the experiment is shown in figure 1. We use N = 1400 rubidium-87 atoms, trapped in a cloud with radii of 1.1 × 1.1 × 4.0 micrometers, at 16 to 40 micrometers from the surface. Two internal states of the atoms serve as the interferometric pathways, which are split and recombined using two-photon microwave and radio-frequency pulses. At the end of the interferometer sequence, we count the atoms in each output state by sensitive absorption imaging, with a precision of about 5 atoms.

We create entanglement between the atoms by making use of collisions naturally present in our system. When two atoms collide, both atoms acquire a phase shift that depends on the state of the other atom, creating quantum correlations between the two. Normally, the effect of these collisions is negligible in our experiment, as the phase shifts due to collisions between atoms in the same state are almost completely canceled by collisions where the two atoms are in different states. We can turn on the effect of collisions by spatially separating the two states, such that collisions between atoms in different states do not occur. When, after some time, we recombine the two states, collisional phase shifts are effectively turned off again during the subsequent interrogation time of the interferometer.

The performance of our interferometer is shown in figure 2, measured at 40 micrometers from the chip surface. It has a sensitivity 4 dB in variance below the SQL, and this improved sensitivity is maintained for up to T = 10 ms of interrogation time, longer than in previous experiments [6,7,8]. We demonstrate the scanning-probe interferometer by transporting the entangled atoms between 40 and 16 micrometers from the atom chip surface and measuring a microwave near-field potential at each location. The microwave potential is created by wires on our atom chip and is also used for the generation of the entangled state. As shown in figure 3, our scanning-probe interferometer operates on average 2.2 dB below the SQL, demonstrating that the entanglement partially survives being transported close to the chip surface, which takes 20 ms of transport time.
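To translate these variance gains into phase uncertainty (our arithmetic, not the paper's): a gain of G dB in variance below the SQL reduces the phase uncertainty by a factor of 10^(G/20).

```python
import math

def phase_uncertainty(N, gain_db):
    """Phase uncertainty for N atoms with a metrological gain of
    gain_db (in dB of variance) below the standard quantum limit."""
    sql = 1 / math.sqrt(N)
    return sql * 10 ** (-gain_db / 20)

# For N = 1400 the SQL is about 0.027 rad per shot; the 4 dB
# single-position gain brings this to about 0.017 rad, and the 2.2 dB
# scanning-probe average to about 0.021 rad.
```

Matching the 4 dB performance without entanglement would require roughly 10^(4/10) ≈ 2.5 times as many atoms, which the density-dependent losses in a micrometer-sized probe volume rule out.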
Fig. 2. Interferometer performance operating at a single position for different interrogation times. Plotted is the variance relative to the standard quantum limit (SQL). The entanglement-enhanced interferometer (blue diamonds) operates about 4 dB below the SQL, whereas the non-entangled interferometer (red, coherent state) operates close to the SQL. For T > 10 ms, both experiments are limited by technical noise. The inset shows a typical interference fringe, with a fringe contrast of (98.1 ± 0.2)%.

The scanning-probe measurement presented here corresponds to a microwave magnetic field sensitivity of 2.4 µT in a single shot of the experiment (cycle time ~ 11 s). The sensitivity shown in figure 2 corresponds to 23 pT for an interrogation time of 10 ms. This sensitivity is obtained with a probe volume of only 20 µm3. Our interferometer bridges the gap between vapor cell magnetometers, which achieve subfemtotesla sensitivity at the millimeter to centimeter scale [9,10] but lack the spatial resolution needed to resolve near-field structures on microfabricated devices, and nitrogen-vacancy centers in diamond, which are excellent magnetometers at the nanometer scale but currently offer lower precision in the micrometer regime [11].
Fig. 3. Scanning probe interferometer. a) Phase shift due to the microwave near-field potential measured at different positions. The dashed line is a simulation of the potential. b) Interferometer performance for the same measurement. At all positions, the interferometer operates below the SQL. These measurements were done with an interrogation time of T = 100 µs, during which the microwave near-field was pulsed on for 80 µs.

In conclusion, we have experimentally demonstrated a scanning-probe atom interferometer operating beyond the standard quantum limit, and used it for the measurement of a microwave near-field. High-resolution measurements of microwave near-fields are relevant for the design of new microwave circuits for use in communication technology [12]. This is the first demonstration of entanglement-enhanced atom interferometry with a high spatial resolution scanning probe, and promises further high-resolution sensing and measurement applications.

References:
[1] Alexander D. Cronin, Jörg Schmiedmayer, David E. Pritchard, "Optics and interferometry with atoms and molecules", Review of Modern Physics, 81, 1051 (2009). Abstract.
[2] J. Kitching, S. Knappe, and E.A. Donley, "Atomic Sensors – A Review", IEEE Sensors Journal, 11, 1749 (2011). Abstract.
[3] Vittorio Giovannetti, Seth Lloyd, Lorenzo Maccone, "Advances in quantum metrology", Nature Photonics, 5, 222 (2011). Abstract.
[4] Caspar F. Ockeloen, Roman Schmied, Max F. Riedel, Philipp Treutlein, "Quantum Metrology with a Scanning Probe Atom Interferometer", Physical Review Letters, 111, 143001 (2013). Abstract.
[5] Max F. Riedel, Pascal Böhi, Yun Li, Theodor W. Hänsch, Alice Sinatra, Philipp Treutlein, "Atom-chip-based generation of entanglement for quantum metrology", Nature, 464, 1170 (2010). Abstract.
[6] C. Gross, T. Zibold, E. Nicklas, J. Estève, and M.K. Oberthaler, "Nonlinear atom interferometer surpasses classical precision limit", Nature, 464, 1165 (2010). Abstract.
[7] Anne Louchet-Chauvet, Jürgen Appel, Jelmer J Renema, Daniel Oblak, Niels Kjaergaard, Eugene S Polzik, "Entanglement-assisted atomic clock beyond the projection noise limit", New Journal of Physics, 12, 065032 (2010). Abstract.
[8] Ian D. Leroux, Monika H. Schleier-Smith, and Vladan Vuletić, "Orientation-Dependent Entanglement Lifetime in a Squeezed Atomic Clock", Physical Review Letters, 104, 250801 (2010). Abstract.
[9] R. Mhaskar, S. Knappe, and J. Kitching, "A low-power, high-sensitivity micromachined optical magnetometer", Applied Physics Letters, 101, 241105 (2012). Abstract.
[10] W. Wasilewski, K. Jensen, H. Krauter, J. J. Renema, M. V. Balabas, E. S. Polzik, "Quantum Noise Limited and Entanglement-Assisted Magnetometry", Physical Review Letters, 104, 133601 (2010). Abstract.
[11] S. Steinert, F. Dolde, P. Neumann, A. Aird, B. Naydenov, G. Balasubramanian, F. Jelezko, J. Wrachtrup, "High sensitivity magnetic imaging using an array of spins in diamond", Review of Scientific Instruments, 81, 043705 (2010). Abstract.
[12] S. Sayil, D.V. Kerns, Jr., S.E. Kerns, "Comparison of contactless measurement and testing techniques to an all-silicon optical test and characterization method", IEEE Transactions on Instrumentation and Measurement, 54, 2082 (2005). Abstract.
