

Sunday, June 30, 2013

Quantum Computer Runs The Most Practically Useful Quantum Algorithm

Chao-Yang Lu (left) and Jian-Wei Pan (right)
Authors: Chao-Yang Lu and Jian-Wei Pan

Affiliation: Hefei National Laboratory for Physical Sciences at Microscale and Department of Modern Physics, University of Science and Technology of China, Hefei, China.

Over the past three decades, the promise of exponential speedup from quantum computing has spurred worldwide interest in quantum information. To date, three prominent quantum algorithms are known to achieve this exponential speedup over classical computers. Historically, the first is the quantum simulation of complex systems proposed by Feynman in the 1980s [1]. The second is Shor’s algorithm (1994) for factoring large numbers [2] – a killer application that could break the widely used RSA cryptographic codes.

Very recently, the third one came as a surprise. Harrow, Hassidim and Lloyd (2009) showed that quantum computers can offer an exponential speedup for solving systems of linear equations [3]. As the problem of solving linear equations is ubiquitous in virtually all areas of science and engineering (such as signal processing, economics, computer science, and physics), it would be fair to say that this might be the most practically useful quantum algorithm so far.
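To give a concrete picture of what the algorithm computes, here is a small classical sketch (the 2×2 Hermitian matrix below is only an illustrative choice, not necessarily the one used in the experiment): the quantum algorithm effectively expands |b⟩ in the eigenbasis of A, divides each component by the corresponding eigenvalue, and outputs the normalized solution state |x⟩.

```python
import numpy as np

# Classical emulation of what the quantum linear-system algorithm outputs.
# The matrix A is an illustrative example, not the experiment's actual matrix.
A = np.array([[1.5, 0.5],
              [0.5, 1.5]])
b = np.array([1.0, 0.0])

# Expand |b> in the eigenbasis of A, divide by the eigenvalues, renormalize:
# the normalized result is the quantum state |x> the circuit prepares.
evals, evecs = np.linalg.eigh(A)
beta = evecs.T @ b
x = evecs @ (beta / evals)
x /= np.linalg.norm(x)

# Sanity check against a direct classical solve of A x = b.
x_classical = np.linalg.solve(A, b)
print(x, x_classical / np.linalg.norm(x_classical))
```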
Fig.1: An optimized circuit with four qubits and four entangling gates for solving 2x2 systems of linear equations.

Demonstrating these powerful algorithms in a scalable quantum system has been considered a milestone toward quantum computation. While the first two had been realized previously [4,5,6], the realization of the new quantum algorithm remained a challenge. Recently, we reported the first demonstration of the quantum linear-system algorithm by testing the simplest meaningful instance – solving 2×2 linear equations on a photonic quantum computer [7] – in parallel with Walther’s group, who also presented results on arXiv [8]. To demonstrate the algorithm, we implemented a quantum circuit (see Fig.1) with four quantum bits and four controlled logic gates, which is among the most sophisticated quantum circuits realized to date.
Fig.2: Experimental setup. It consists of (1) Qubit initialization, (2) Phase estimation, (3) R rotation, (4) Inverse phase estimation.

An illustration of our experimental set-up is shown in Fig.2. The four quantum bits come from four single photons generated using a nonlinear optical process called spontaneous parametric down-conversion (where a short, intense ultraviolet laser shines on a crystal and, with a tiny probability, an ultraviolet photon splits into two correlated infrared photons). The quantum information is encoded in the polarization of the single photons, which can be initialized and manipulated using wave plates, and read out using polarizers and single-photon detectors [9].
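As a side note on how such polarization qubits are manipulated: in Jones-calculus language a wave plate is simply a 2×2 unitary acting on the (H, V) amplitudes. The sketch below (illustrative settings, not the actual experimental angles) shows a half-wave plate at 22.5° turning |H⟩ into the diagonal superposition (|H⟩+|V⟩)/√2.

```python
import numpy as np

# Polarization qubit in the H/V basis: |H> = [1, 0], |V> = [0, 1]
H = np.array([1, 0], dtype=complex)

def half_wave_plate(theta):
    """Jones matrix (up to a global phase) of a half-wave plate with fast axis at angle theta."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[c, s],
                     [s, -c]], dtype=complex)

# A half-wave plate at 22.5 degrees rotates |H> into (|H> + |V>)/sqrt(2)
print(half_wave_plate(np.pi / 8) @ H)
```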

In the experiment, it is also necessary to implement four photon-photon controlled logic gates – that is, gates in which the quantum state of a single photon controls that of another, independent single photon. These gates are realized by optical networks consisting of polarizing beam splitters, half-wave plates, a Sagnac interferometer, and post-selected measurements.
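For reference, the ideal logical operation that such a post-selected optical network approximates is the two-qubit controlled-NOT. The short sketch below (written in the computational basis, independent of the optical implementation details) shows it turning a product state of two photons into a maximally entangled Bell state.

```python
import numpy as np

# Ideal controlled-NOT on two polarization qubits, basis order |HH>, |HV>, |VH>, |VV>
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

control = np.array([1, 1], dtype=complex) / np.sqrt(2)   # control photon in (|H> + |V>)/sqrt(2)
target = np.array([1, 0], dtype=complex)                  # target photon in |H>

# The gate entangles the two photons: the output is the Bell state (|HH> + |VV>)/sqrt(2)
print(CNOT @ np.kron(control, target))
```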
Fig.3: Experimental results. For each input state, the experimentally measured (red bars) expectation values of the Pauli observables are compared with the theoretical predictions (gray bars).

We have implemented the algorithm for various input vectors. We characterize the output by measuring the expectation values of the Pauli observables Z, X, and Y. Figure 3 shows both the ideal (gray bars) and experimentally obtained (red bars) expectation values for each observable. To quantify how closely our experimental results match the ideal outcome, we compute the output-state fidelities, which are 0.993(3), 0.825(13), and 0.836(16) for the three input vectors, respectively.
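For completeness, the quantities quoted above are easy to write down: for a single-qubit output one measures ⟨X⟩, ⟨Y⟩ and ⟨Z⟩, and the fidelity with the ideal pure state |x⟩ is F = ⟨x|ρ|x⟩. The sketch below uses a hypothetical state and an artificially mixed density matrix, not our measured data, just to show the bookkeeping.

```python
import numpy as np

# Pauli observables
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def pauli_expectations(psi):
    """Return (<X>, <Y>, <Z>) for a pure single-qubit state |psi>."""
    return tuple(np.real(psi.conj() @ P @ psi) for P in (X, Y, Z))

def fidelity(psi_ideal, rho_measured):
    """Fidelity <psi|rho|psi> of a measured density matrix with the ideal pure state."""
    return np.real(psi_ideal.conj() @ rho_measured @ psi_ideal)

# Hypothetical ideal output state (not one of the experiment's actual input vectors)
psi = np.array([np.cos(np.pi / 8), np.sin(np.pi / 8)], dtype=complex)
rho_noisy = 0.95 * np.outer(psi, psi.conj()) + 0.05 * np.eye(2) / 2  # slightly mixed state
print(pauli_expectations(psi), fidelity(psi, rho_noisy))
```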

To solve more complicated linear and differential equations [10], we are developing new techniques, including the experimental manipulation of more photonic qubits, brighter multi-photon sources, and more efficient two-photon logic gates. So far, we have the ability to control up to eight individual single photons [11] and ten hyper-entangled quantum bits [12]. Creating a larger-scale circuit will require more quantum bits, and two parallel pathways are being pursued in our group: one is to climb up to ten-photon entanglement, and the other is to exploit more degrees of freedom of a single photon, thus using each photon more efficiently. The near-future goal is to control 10 to 20 photonic quantum bits. This enhanced capability would allow us to test more complicated quantum algorithms.

References: 
[1] Richard P. Feynman, “Simulating physics with computers”. International Journal of Theoretical Physics, 21, 467 (1982). Abstract.
[2] P. Shor, “Algorithms for quantum computation: discrete logarithms and factoring” in Proc. 35th Annu. Symp. on the Foundations of Computer Science, edited by S. Goldwasser (IEEE Computer Society Press, Los Alamitos, California, 1994), p. 124–134. Abstract.
[3] Aram W. Harrow, Avinatan Hassidim, and Seth Lloyd, “Quantum algorithm for linear systems of equations”. Physical Review Letters, 103, 150502 (2009). Abstract.
[4] Chao-Yang Lu, Daniel E. Browne, Tao Yang, and Jian-Wei Pan, “Demonstration of a compiled version of Shor’s quantum factoring algorithm using photonic qubits”. Physical Review Letters, 99, 250504 (2007). Abstract.
[5] B.P. Lanyon, T. Weinhold, N. Langford, M. Barbieri, D. James, A. Gilchrist, and A. White, “Experimental demonstration of a compiled version of Shor’s algorithm with quantum entanglement”. Physical Review Letters, 99, 250505 (2007). Abstract.
[6] Chao-Yang Lu, Wei-Bo Gao, Otfried Gühne, Xiao-Qi Zhou, Zeng-Bing Chen, Jian-Wei Pan, “Demonstrating Anyonic Fractional Statistics with a Six-Qubit Quantum Simulator”. Physical Review Letters, 102, 030502 (2009). Abstract.
[7] X.-D. Cai, C. Weedbrook, Z.-E. Su, M.-C. Chen, Mile Gu, M.-J. Zhu, Li Li, Nai-Le Liu, Chao-Yang Lu, and Jian-Wei Pan, “Experimental Quantum Computing to Solve Systems of Linear Equations”. Physical Review Letters, 110, 230501 (2013). Abstract.
[8] Stefanie Barz, Ivan Kassal, Martin Ringbauer, Yannick Ole Lipp, Borivoje Dakic, Alán Aspuru-Guzik, Philip Walther, “Solving systems of linear equations on a quantum computer”. arXiv:1302.1210v1. Abstract.
[9] Jian-Wei Pan, Zeng-Bing Chen, Chao-Yang Lu, Harald Weinfurter, Anton Zeilinger, Marek Żukowski, “Multiphoton entanglement and interferometry”. Reviews of Modern Physics, 84, 777 (2012). Abstract.
[10] Dominic W. Berry, “Quantum algorithms for solving linear differential equations”. arXiv:1010.2745. Abstract.
[11] Xing-Can Yao, Tian-Xiong Wang, Ping Xu, He Lu, Ge-Sheng Pan, Xiao-Hui Bao, Cheng-Zhi Peng, Chao-Yang Lu, Yu-Ao Chen, Jian-Wei Pan, “Observation of eight-photon entanglement”. Nature Photonics, 6, 225 (2012). Abstract.
[12] Wei-Bo Gao, Chao-Yang Lu, Xing-Can Yao, Ping Xu, Otfried Gühne, Alexander Goebel, Yu-Ao Chen, Cheng-Zhi Peng, Zeng-Bing Chen, Jian-Wei Pan, “Experimental demonstration of a hyper-entangled ten-qubit Schrödinger cat state”. Nature Physics, 6, 331 - 335 (2010). Abstract.



Sunday, June 23, 2013

Quantum Information at Low Light

Alan Migdall of Joint Quantum Institute (JQI), a research partnership between University of Maryland (UMD) and the National Institute of Standards and Technology, USA [Photo Courtesy: NIST]

At low light, cats see better than humans. Electronic detectors do even better, but eventually they too become prone to errors at very low light. The fundamental probabilistic nature of light makes it impossible to perfectly distinguish light from dark at very low intensity. However, by using quantum mechanics, one can find measurement schemes that, at least part of the time, give results free of errors, even when the light intensity is very low.
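The probabilistic nature of light can be made concrete with photon-counting statistics: laser light attenuated to a mean of μ photons per pulse delivers zero photons with probability e^(−μ), so at low intensity even a "bright" pulse is sometimes indistinguishable from darkness. A tiny illustration with assumed mean photon numbers (not taken from any particular experiment):

```python
import numpy as np

# Attenuated laser pulses follow Poisson photon statistics:
# P(no photon in a pulse) = exp(-mu), where mu is the mean photon number.
for mu in (5.0, 1.0, 0.5, 0.1):
    print(f"mean photons = {mu:4}: P(pulse delivers no photon) = {np.exp(-mu):.3f}")
```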

The chief advantage of using such a dilute light beam is to reduce the power requirement. And this in turn means that encrypted data can be sent over longer distances, even up to distant satellites. Low power and high fidelity in reading data is especially important for transmitting and processing quantum information for secure communications and quantum computation. To facilitate this quantum capability you want a detector that sees well in the (almost) dark. Furthermore, in some secure communications applications it is preferable to occasionally avoid making a decision at all rather than to make an error.

A scheme demonstrated at the Joint Quantum Institute does exactly this. The JQI work, carried out in the lab of Alan Migdall and published in the journal Nature Communications, shows how one category of photo-detection system can make highly accurate readings of incoming information at the single-photon level by allowing the detector in some instances not to give a conclusive answer. Sometimes discretion is the better part of valor.

Quantum Morse Code:

Most digital data comes into your home or office in the form of pulsed light, usually encoding a stream of zeros and ones, the equivalent of the 19th century Morse code of dots and dashes. A more sophisticated data encoding scheme is one that uses not two but four states---0, 1, 2, 3 instead of the customary 0 and 1. This protocol can be conveniently implemented, for example, by having the four states correspond to four different phases of the light pulse. However, the phase states representing values of 0, 1, 2, and 3 have some overlap, and this produces ambiguity when you try to determine which state you have received. This overlap, which is inherent in the states, means that your measurement system sometimes gives you the wrong answer.
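That overlap can be quantified: for weak laser pulses the four phase states are coherent states |α_k⟩ with α_k = |α|·e^(iφ_k), and the difficulty of telling two of them apart scales with |⟨α_j|α_k⟩|² = exp(−|α_j − α_k|²). A brief sketch, assuming a mean photon number of 0.5 purely for illustration:

```python
import numpy as np

# Four phase-encoded coherent states |alpha_k>, phases 0, pi/2, pi, 3pi/2.
mean_photons = 0.5                      # assumed |alpha|^2, below one photon per pulse
alpha = np.sqrt(mean_photons) * np.exp(1j * np.array([0, np.pi/2, np.pi, 3*np.pi/2]))

# Pairwise overlaps |<alpha_j|alpha_k>|^2 = exp(-|alpha_j - alpha_k|^2):
# they are far from zero, so no measurement can distinguish the states perfectly.
for j in range(4):
    for k in range(j + 1, 4):
        print(f"|<{j}|{k}>|^2 = {np.exp(-abs(alpha[j] - alpha[k])**2):.3f}")
```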

Migdall and his associates recently achieved the lowest error rate yet for a photodetector deciphering such a four-fold phase encoding of information in a light pulse. In fact, the error rate was some 4 times lower than what is possible with conventional measurement techniques. Such low error rates were achieved by implementing measurements for minimum error discrimination, or MED for short. This measurement is deterministic insofar as it always gives an answer, albeit with some chance of being wrong.

By contrast, one can instead perform measurements that are in principle error free by allowing some inconclusive results and giving up the deterministic character of the measurement outcomes. This probabilistic discrimination scheme, based on quantum mechanics, is called unambiguous state discrimination, or USD, and is beyond the capabilities of conventional measurement schemes.
Figure 1: Scheme for carrying out unambiguous state discrimination (USD). Inset (i) shows the four nonorthogonal symmetric coherent states with phases equal to ϕ = {0, π/2, π, 3π/2}. The state under measurement, |αi>, has vertical (V) polarization, and the phase reference, |LO>, has horizontal (H) polarization. The pulse is distributed among four elimination stages using mirrors (M) and beam splitters (BS). Each elimination stage uses phase shifters (PS), a polarizer (Pol), and a single photon detector (SPD) to eliminate one possibility for the phase of the input state |αi>.

In their latest result [1], JQI scientists implement a USD of such four-fold phase-encoded states by performing measurements able to eliminate all but one possible value for the input state---whether 0, 1, 2, or 3. This has the effect of determining the answer perfectly or not at all. Alan Migdall compares the earlier minimum error discrimination approach with the unambiguous state discrimination approach: “The former is a technique that always gets an answer albeit with some probability of being mistaken, while the latter is designed to get answers that in principle are never wrong, but at the expense of sometimes getting an answer that is the equivalent of ‘don't know.’ It’s as your mother said, ‘if you can’t say something good it is better to say nothing at all.’”

With USD you make a series of measurements that rule out each state in turn; by process of elimination you then figure out which state it must be. However, sometimes you obtain an inconclusive result, for example when your measurements eliminate fewer than three of the four possibilities for the input state.
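To see how elimination plus the possibility of an inconclusive answer plays out, here is a highly idealized Monte Carlo sketch. It assumes the pulse is split equally among four stages, that a stage testing a wrong hypothesis clicks with probability 1 − exp(−|α_true − α_j|²/4), and that the correct hypothesis never produces a click; the real experiment uses optimized splitting and must contend with dark counts and loss.

```python
import numpy as np

rng = np.random.default_rng(0)

mean_photons = 0.5   # assumed mean photon number of the signal pulse
alpha = np.sqrt(mean_photons) * np.exp(1j * np.array([0, np.pi/2, np.pi, 3*np.pi/2]))

def usd_trial(true_k):
    """One idealized USD attempt: each stage tries to rule out one phase hypothesis."""
    eliminated = set()
    for j in range(4):
        # Stage j receives a quarter of the pulse energy and nulls it if hypothesis j is
        # correct; otherwise the residual light may trigger a click, eliminating hypothesis j.
        residual = abs(alpha[true_k] - alpha[j])**2 / 4.0
        if rng.random() < 1.0 - np.exp(-residual):
            eliminated.add(j)
    if len(eliminated) == 3:                          # only the true state survives
        return (set(range(4)) - eliminated).pop()     # conclusive and error-free
    return None                                       # inconclusive ("don't know")

trials = [usd_trial(int(rng.integers(4))) for _ in range(100_000)]
conclusive = sum(r is not None for r in trials)
print(f"conclusive fraction ~ {conclusive / len(trials):.3f}; errors: 0 by construction")
```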

The Real World:

Measurement systems are not perfect, and ideal USD is not possible. Real-world imperfections produce some errors in the decoded information even in situations where USD appears to work smoothly. The JQI experiment, which implements USD for four quantum states encoded in pulses with average photon numbers of less than one, is robust against real-world imperfections. In the end, it performs with much lower errors than could be achieved by any deterministic measurement, including MED. This advance will be useful for quantum information processing, quantum communications with many states, and fundamental studies of quantum measurements at low light levels.

Reference:
[1] F.E. Becerra, J. Fan, A.L. Migdall, "Implementation of generalized quantum measurements for unambiguous discrimination of multiple non-orthogonal coherent states," Nature Communications, 4, 2028 (2013). Abstract.



Sunday, June 09, 2013

Quantum experiment preludes the endgame for local realism – photonic Bell violation closes the fair-sampling loophole

[Left to Right] Johannes Kofler, Sven Ramelow, Marissa Giustina, Rupert Ursin (© Brasch), Anton Zeilinger (© Godany)

Authors:
Johannes Kofler1, Sven Ramelow2,3, Marissa Giustina2,3, Rupert Ursin2,3, Anton Zeilinger2,3

Affiliation:
1Max Planck Institute of Quantum Optics (MPQ), Garching/Munich, Germany.
2Institute for Quantum Optics and Quantum Information – Vienna (IQOQI), Austrian Academy of Sciences, Vienna, Austria.
3Quantum Optics, Quantum Nanophysics, Quantum Information, University of Vienna, Faculty of Physics, Vienna, Austria.

Using photons, a recent experiment in Vienna closed a loophole in the arguments against local realism and now makes photons the first quantum system for which all major loopholes have been closed in separate experiments. This is good news for a final experiment closing all loopholes simultaneously.

“Local realism” is a world view in which the properties of physical objects exist independent of whether or not they are observed (realism), and in which no physical influence can propagate faster than the speed of light (locality). In 1964, in one of the most important works in the history of the foundations of quantum theory [1], the Irish physicist John Bell proved theoretically that local realism is in contradiction with the predictions of quantum mechanics. With his now famous “Bell inequality”, he showed that it is possible to determine experimentally which of the two radically different world views actually governs reality. The terms in the inequality are the correlations of measurement results. Bell’s inequality is satisfied by the predictions of any local realistic theory, whereas quantum mechanics predicts measurement outcomes that can violate it.
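A concrete way to see the contradiction is the CHSH form of Bell's inequality used in most experiments: any local realistic theory obeys S = |E(a,b) − E(a,b′) + E(a′,b) + E(a′,b′)| ≤ 2, whereas for maximally entangled photon pairs quantum mechanics predicts the polarization correlation E(a,b) = −cos 2(a−b), which reaches S = 2√2. A quick check with the standard textbook angles (not the settings of any particular experiment):

```python
import numpy as np

# CHSH test: local realism bounds S <= 2; quantum mechanics can reach 2*sqrt(2).
# For photon pairs in the singlet-like polarization state, E(a, b) = -cos 2(a - b).
def E(a, b):
    return -np.cos(2 * (a - b))

a, a_prime = 0.0, np.pi / 4            # Alice's two polarizer settings
b, b_prime = np.pi / 8, 3 * np.pi / 8  # Bob's two polarizer settings

S = abs(E(a, b) - E(a, b_prime) + E(a_prime, b) + E(a_prime, b_prime))
print(f"S = {S:.3f}  (any local realistic theory requires S <= 2)")
```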

Past 2Physics articles by Johannes Kofler, Sven Ramelow, Rupert Ursin, Anton Zeilinger:

November 04, 2012: "Quantum Teleportation Over 143 Kilometers" by Xiao-song Ma, Johannes Kofler, Rupert Ursin and Anton Zeilinger,
November 13, 2011: "A New Scheme for Photonic Quantum Computing" by Nathan K. Langford, Sven Ramelow and Robert Prevedel,
May 30, 2009: "Transmission of Entangled Photons over a High-Loss Free-Space Channel" by Alessandro Fedrizzi, Rupert Ursin and Anton Zeilinger,
June 08, 2007: "Entanglement and One-Way Quantum Computing "
by Robert Prevedel and Anton Zeilinger

In a Bell test, pairs of systems, e.g. photons, are produced. From every pair, one photon is sent to a party usually called Alice, and the other photon is sent to another party known as Bob. They each choose which physical property they want to measure, for instance, a direction of their photon’s polarization. For pairs that are quantum entangled, the correlations of Alice’s and Bob’s measurement outcomes can exceed the correlations predicted by any local realistic theory and thus violate Bell’s inequality. Quantum entanglement – a term coined by the Austrian physicist Erwin Schrödinger – means that no photon taken by itself has a definite polarization, but that if one party measures the polarization of its photon, the other photon will always show a perfectly correlated polarization. Albert Einstein called this strange effect “spooky action at a distance”.

In addition to its preeminent importance in foundational physics, quantum entanglement and Bell’s inequality also play a quintessential role in the modern field of quantum information. There, individual quantum particles are the carriers of information, and the entanglement between them promises absolutely secure communication as well as enhanced computation power compared to any conceivable classical technology.

In the last decades, Bell’s inequality has been violated in numerous experiments and for several different physical systems such as photons and atoms. However, in experimental tests, “loopholes” arise that allow the observed correlations – although they violate Bell’s inequality – to still be explained by local realistic theories. The advocates of local realism can defend their worldview by falling back on essentially three such loopholes. In the “locality loophole”, the measurement result of one party is assumed to be influenced by a fast and hidden physical signal from the other party to produce the observed correlations. Similarly, in the “freedom-of-choice” loophole, the measurement choices of Alice and Bob are considered to be influenced by some hidden local realistic properties of the particle pairs. These two loopholes have already been closed in photonic experiments [2-4] by separating Alice and Bob by large distances and enforcing precise timing of the photon-pair creation, Alice’s and Bob’s choice events, and their measurements. The local realist would then need superluminal signals to explain the measured correlations, but influences faster than light are not allowed in the local realistic world view.

The third way out for the local realist is called the “fair-sampling loophole” [5]. It works in the following manner: if only a small fraction of the produced photons is measured, a clever advocate of local realism can conceive a model in which the ensemble of all produced photons as a whole follows the rules of local realism, although the “unfair” sample of the actually measured photons was able to violate Bell’s inequality. (Think of randomly flipping many fair coins but looking at only some of them, where the coins showing heads tend to hide and thus have a smaller probability of being observed than those showing tails. When looking at only this incomplete and "unfair" subset of the coins, it wrongly appears as if the coins had a special distribution with more showing tails than heads.) This type of loophole has even been explicitly exploited in an experiment faking Bell violations without any entanglement [6]. The way to close the fair-sampling loophole is to achieve a high detection efficiency of the produced particle pairs by avoiding losses and using very good measurement devices. Until now, this has been accomplished for particles with mass such as ions and atoms [7,8], but never for photons. However, for such particles, the other two loopholes are very difficult to close and indeed have not yet been closed.
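The coin analogy is easy to simulate. In the sketch below (with arbitrary detection probabilities chosen for illustration), every coin is perfectly fair, yet because heads are detected less often than tails, the detected subsample looks strongly biased – exactly the kind of "unfair sampling" a local realist can invoke when detection efficiency is low.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 1_000_000
heads = rng.random(n) < 0.5                              # perfectly fair coins
# "Unfair" detection: heads hide more often than tails (illustrative efficiencies).
detected = rng.random(n) < np.where(heads, 0.3, 0.9)

print(f"heads fraction among all coins:      {heads.mean():.3f}")            # ~0.500
print(f"heads fraction among detected coins: {heads[detected].mean():.3f}")  # ~0.250
```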

A recent experiment has, for the first time, closed the fair-sampling loophole for photons [9]. It employed a significantly optimized source of entangled pairs achieving excellent fiber coupling efficiencies and state-of-the-art high-efficiency superconducting detectors to reach the necessary total detection efficiency. The researchers were able to measure about 75% of all photons in each arm. This rules out all local realistic explanations that rely on unfair sampling using a form of Bell’s inequality developed about 20 years ago by the American physicist Philippe Eberhard, which requires an efficiency of only two thirds [10]. The recent experiment makes the photon the first physical system for which all three loopholes have been closed, albeit in different experiments.

Optical setup used in the quantum experiment. (Image: IQOQI Vienna, Jacqueline Godany 2012)

Although most scientists do not expect any surprises and believe that quantum physics will prevail over local realism, it is still conceivable that different loopholes are exploited in different experiments. It is this last piece in the history of Bell tests which is still missing – a final and conclusive experiment violating Bell’s inequality while closing all loopholes simultaneously [11]. It is not yet clear whether such an experiment will be achieved first for photons or atoms or some other quantum system, but if it can be successfully performed, one needs to accept at least one of the following radical views: there is a hidden faster-than-light communication in nature, or we indeed live in a world in which physical properties do not always exist independent of observation. Almost 50 years after the formulation of local realism, its endgame clearly has begun.

References:
[1] J. S. Bell, "On the Einstein Podolsky Rosen Paradox". Physics (NY) 1, 195 (1964). Full Text.
[2] Alain Aspect, Jean Dalibard, Gérard Roger, "Experimental test of Bell's inequalities using time varying analyzers". Physical Review Letters, 49, 1804 (1982). Abstract.
[3] Gregor Weihs, Thomas Jennewein, Christoph Simon, Harald Weinfurter, and Anton Zeilinger, "Violation of Bell's Inequality under Strict Einstein Locality Conditions". Physical Review Letters, 81, 5039 (1998). Abstract.
[4] Thomas Scheidl, Rupert Ursin, Johannes Kofler, Sven Ramelow, Xiao-Song Ma, Thomas Herbst, Lothar Ratschbacher, Alessandro Fedrizzi, Nathan K. Langford, Thomas Jennewein, and Anton Zeilinger, "Violation of local realism with freedom of choice". Proceedings of the National Academy of Sciences 107, 19708–19713 (2010). Abstract.
[5] Philip M. Pearle, "Hidden-Variable Example Based upon Data Rejection". Physical Review D 2, 1418 (1970). Abstract.
[6] Ilja Gerhardt, Qin Liu, Antía Lamas-Linares, Johannes Skaar, Valerio Scarani, Vadim Makarov, Christian Kurtsiefer, "Experimentally Faking the Violation of Bell’s Inequalities". Physical Review Letters, 107, 170404 (2011). Abstract.
[7] M. A. Rowe, D. Kielpinski, V. Meyer, C. A. Sackett, W. M. Itano, C. Monroe, and D. J. Wineland, "Experimental violation of a Bell's inequality with efficient detection". Nature 409, 791 (2001). Abstract.
[8] Julian Hofmann, Michael Krug, Norbert Ortegel, Lea Gérard, Markus Weber, Wenjamin Rosenfeld, Harald Weinfurter. "Heralded Entanglement Between Widely Separated Atoms". Science 337, 72 (2012). Abstract.
[9] Marissa Giustina, Alexandra Mech, Sven Ramelow, Bernhard Wittmann, Johannes Kofler, Jörn Beyer, Adriana Lita, Brice Calkins, Thomas Gerrits, Sae Woo Nam, Rupert Ursin, Anton Zeilinger, "Bell violation using entangled photons without the fair-sampling assumption". Nature 497, 227 (2013). Abstract.
[10] Philippe H. Eberhard, "Background Level and Counter Efficiencies Required for a Loophole-Free Einstein-Podolsky-Rosen Experiment". Physical Review A 47, 747 (1993). Abstract.
[11] Zeeya Merali, "Quantum Mechanics Braces for the Ultimate Test". Science 331, 1380 (2011). Abstract.



Sunday, June 02, 2013

The Observable Signature of Black Hole Formation

Anthony L. Piro


Author: Anthony L. Piro

Affiliation: Theoretical Astrophysics Including Relativity (TAPIR), California Institute of Technology, Pasadena, USA

Black holes are among the most exciting objects in the Universe. They are regions of spacetime, predicted by Einstein's theory of general relativity, in which gravity is so strong that it prevents anything, even light, from escaping. Black holes are known to exist and roughly come in two varieties. There are massive black holes at the centers of galaxies, which can have masses anywhere from a million to many billion times the mass of our Sun. And there are black holes of around ten solar masses in galaxies like our own that have been detected via X-ray emission from accretion [1]. Although this latter class of black holes is generally believed to form from the collapse of massive stars, considerable uncertainty remains and is the focus of ongoing research. It is unknown what fraction of massive stars produce black holes (rather than neutron stars), what the channels for black hole formation are, and what the corresponding observational signatures are. Through a combination of theory, state-of-the-art simulations, and new observations, astrophysicists are trying to address these very fundamental questions.

A computer-generated image of the light distortions created by a black hole [Image credit: Alain Riazuelo, IAP/UPMC/CNRS]

The one instance where astronomers are fairly certain they are seeing black hole formation is in the case of gamma-ray bursts (GRBs). A GRB is believed to result from the collapse of a massive, rapidly rotating star that produces a black hole and a relativistic jet. The problem is that these events are too rare, and too confined to special environments, to explain the majority of black holes. Astronomers regularly see stars exploding as supernovae, but it is not clear what fraction of these produce black holes. There is evidence, and it is generally expected, that in most cases these explosions in fact lead to neutron stars instead. This has led to the hypothesis that the signature of black hole formation is in fact the disappearance of a massive star, or "unnova," rather than an actual supernova-like event [2].

My theoretical work [3] hypothesizes that there may be an observational signature of black hole formation, even in circumstances where one might normally expect an unnova. Therefore I titled my work "Taking the 'Un' out of 'Unnovae'." The main idea is based on a somewhat forgotten theoretical study by D. K. Nadyozhin [4]. Before a black hole is formed within a collapsing star, a neutron star is formed first. This neutron star emits neutrinos [5,6], which stream out of the star (because neutrinos are very weakly interacting) carrying energy, and thus mass via E = mc². This can last for a few tenths of a second before enough material falls onto the neutron star to collapse it into a black hole, and in that time the neutrinos carry away a mass equivalent to a few tenths of the mass of our Sun. From the point of view of the star's envelope, the mass (and therefore the gravitational pull) of the core abruptly decreases, and the envelope expands in response. This adjustment of the star's envelope grows into a shock wave that heats and ejects the outer envelope of the star.
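An order-of-magnitude estimate makes the effect concrete. The numbers below are illustrative: a neutrino mass loss of 0.3 solar masses, as quoted above, and a 15-solar-mass progenitor, which is an assumption made only for this example.

```python
# Back-of-the-envelope estimate of the neutrino-driven mass loss discussed above.
M_sun = 1.989e30      # kg
c = 2.998e8           # m/s

delta_M = 0.3 * M_sun                 # assumed mass carried off by neutrinos ("a few tenths" of M_sun)
E_nu = delta_M * c**2
print(f"energy radiated in neutrinos ~ {E_nu:.1e} J")   # ~5e46 J

M_progenitor = 15 * M_sun             # assumed massive-star progenitor
print(f"mass lost, as a fraction of the progenitor ~ {delta_M / M_progenitor:.1%}")
```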

This process was also examined in detail by Elizabeth Lovegrove and Stan Woosley at UC Santa Cruz [7]. They focused on the heating and subsequent cooling of the envelope from this shock. They found that it would lead to something that looked like a very dim supernova lasting for about a year. In my work, I focused on the observational signature produced when this shock first hits the surface of the star. When this happens, the shock's energy is suddenly released in what is called a "shock breakout flash." Although this lasts for merely a few days, it is 10 to 100 times brighter than the subsequent dim supernova. Therefore, this is the best opportunity for astronomers to catch a black hole being created right in the act.

The most exciting part of this result is that now is the perfect time for astronomers to discover these events. Observational efforts such as the Palomar Transient Factory (also known as PTF) and the Panoramic Survey Telescope and Rapid Response System (also known as Pan-STARRS) are surveying the sky every night and sometimes finding rare and dim explosive, transient events. These surveys are well-suited to find exactly the kind of event I predict for the shock breakout from black hole formation. Given the rate we expect massive stars to be dying, it is not out of the question that one or more of these will be found in the next year or so, allowing us to actually witness the birth of a black hole.

References:
[1] Ronald A. Remillard and Jeffrey E. McClintock, "X-Ray Properties of Black-Hole Binaries". Annual Review of Astronomy & Astrophysics, 44, 49-92 (2006). Abstract.
[2] Christopher S. Kochanek, John F. Beacom, Matthew D. Kistler, José L. Prieto, Krzysztof Z. Stanek, Todd A. Thompson, Hasan Yüksel, "A Survey About Nothing: Monitoring a Million Supergiants for Failed Supernovae". Astrophysical Journal, 684, 1336-1342 (2008). Fulltext.
[3] Anthony L. Piro, "Taking the 'Un' out of 'Unnovae'". Astrophysical Journal Letters, 768, L14 (2013). Abstract.
[4] D. K. Nadyozhin, "Some secondary indications of gravitational collapse". Astrophysics and Space Science, 69, 115-125 (1980). Abstract.
[5] Adam Burrows, "Supernova neutrinos". Astrophysical Journal, 334, 891-908 (1988). Full Text.
[6] J. F. Beacom, R. N. Boyd, and A. Mezzacappa, "Black hole formation in core-collapse supernovae and time-of-flight measurements of the neutrino masses". Physical Review D, 63, 073011 (2001). Abstract.
[7] Elizabeth Lovegrove and Stan E. Woosley, "Very Low Energy Supernovae from Neutrino Mass Loss". Astrophysical Journal, 769, 109 (2013). Abstract.
