Monday, May 30, 2016

Away Note

I have an upcoming trip to Helsinki. After that I'll be tied up with family business, and then my husband goes on a business trip and I'll have the kids alone. Then kindergarten will be closed for a day (I forgot why, but I'm sure they have some reason), I have to deal with an ant infestation in our apartment, and more family business follows. In summary: busy times.

I have a book review appearing on this blog later today, but after that you won't hear much from me for a week or two. Keep in mind that since I have comment moderation on, it might take a while for your comment to appear while I am traveling. With thanks for your understanding, here's a random cute pic of Gloria :)

Thursday, May 26, 2016

How can we test quantum gravity?

If you have good eyes, the smallest objects you can make out are about a tenth of a millimeter, roughly the width of a human hair. Add technology, and the smallest structures we have measured so far are approximately 10⁻¹⁹ m, that’s the wavelength of the protons collided at the LHC. It has taken us about 400 years from the invention of the microscope to the construction of the LHC – 400 years to cross 15 orders of magnitude.

Quantum effects of gravity are estimated to become relevant on distance scales of approximately 10⁻³⁵ m, known as the Planck length. That’s another 16 orders of magnitude to go. It makes you wonder whether it’s possible at all, or whether all the effort to find a quantum theory of gravity is just idle speculation.
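For reference, these are the standard definitions (not specific to any one approach to quantum gravity): the Planck length and the corresponding Planck energy are

\[
\ell_{\rm P} = \sqrt{\frac{\hbar G}{c^{3}}} \approx 1.6 \times 10^{-35}\,{\rm m}\,,
\qquad
E_{\rm P} = \sqrt{\frac{\hbar c^{5}}{G}} \approx 1.2 \times 10^{19}\,{\rm GeV}\,.
\]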

I am optimistic. The history of science is full of people who thought things to be impossible that have meanwhile been done: measuring the deflection of light at the sun, heavier-than-air flying machines, detecting gravitational waves. Hence, I don’t think it’s impossible to experimentally test quantum gravity. Maybe it will take some decades, or maybe it will take some centuries – but if only we keep pushing, one day we will measure quantum gravitational effects. Not by directly crossing these 16 orders of magnitude, I believe, but instead by indirect detections at lower energies.

From nothing comes nothing though. If we don’t think about what quantum gravitational effects might look like and where they might show up, we’ll certainly never find them. But fueling my optimism is the steadily increasing interest in the phenomenology of quantum gravity, the research area dedicated to studying how to best find evidence for quantum gravitational effects.

Since there isn’t any one agreed-upon theory for quantum gravity, existing efforts to find observable phenomena focus on finding ways to test general features of the theory, properties that have been found in several different approaches to quantum gravity. Quantum fluctuations of space-time, for example, or the presence of a “minimal length” that would impose a fundamental resolution limit. Such effects can be quantified in mathematical models, which can then be used to estimate the strength of the effects and thus to find out which experiments are most promising.

Testing quantum gravity has long been thought to be out of reach of experiments, based on estimates showing that it would take a collider the size of the Milky Way to accelerate protons enough to produce a measurable amount of gravitons (the quanta of the gravitational field), or a detector the size of planet Jupiter to measure a graviton produced elsewhere. Not impossible, but clearly not something that will happen in my lifetime.

One testable consequence of quantum gravity might be, for example, the violation of the symmetry of special and general relativity, known as Lorentz-invariance. Interestingly, it turns out that violations of Lorentz-invariance are not necessarily small even if they are created at distances too short to be measurable. Instead, these symmetry violations seep into many particle reactions at accessible energies, and these have been tested to extremely high accuracy. No evidence for violations of Lorentz-invariance has been found. This might not sound like much, but knowing that this symmetry has to be respected by quantum gravity is an extremely useful guide in the development of the theory.

Other testable consequences might be found in the weak-field limit of quantum gravity. In the early universe, quantum fluctuations of space-time would have led to temperature fluctuations of matter. And these temperature fluctuations are still observable today in the Cosmic Microwave Background (CMB). The imprint of such “primordial gravitational waves” on the CMB has not yet been measured (LIGO is not sensitive to them), but they are not so far off measurement precision.

A lot of experiments are currently searching for this signal, including BICEP and Planck. This raises the question whether it is possible to infer from the primordial gravitational waves that gravity must have been quantized in the early universe. Answering this question is one of the presently most active areas in quantum gravity phenomenology.

Also testing the weak-field limit of quantum gravity are attempts to bring objects into quantum superpositions that are much heavier than elementary particles. This makes the gravitational field stronger and potentially offers the chance to probe its quantum behavior. The heaviest objects that have so far been brought into superpositions weigh about a nanogram, which is still several orders of magnitude too little to measure the gravitational field. But a group in Vienna recently proposed an experimental scheme that would make it possible to measure the gravitational field more precisely than ever before. We are slowly closing in on the quantum gravitational range.

Such arguments however merely concern the direct detection of gravitons, and that isn’t the only manifestation of quantum gravitational effects. There are various other observable consequences that quantum gravity could give rise to, some of which have already been looked for, and others that we plan to look for. So far, we have only negative results. But even negative results are valuable because they tell us what properties the sought-for theory cannot have.

[From arXiv:1602.07539, for details, see here]

A detection in the weak-field limit would prove that gravity really is quantized and finally deliver the much-needed experimental evidence, confirming that we’re not just doing philosophy. However, for most of us in the field the strong-gravity limit is more interesting. By the strong-gravity limit I mean Planckian curvature, which (not counting those galaxy-sized colliders) can only be found close to the center of black holes and towards the big bang.

(Note that in astrophysics, “strong gravity” is sometimes used to mean something different, referring to large deviations from Newtonian gravity which can be found, eg, around the horizon of black holes. In comparison to the Planckian curvature required for strong quantum gravitational effects, this is still exceedingly weak.)

Strong quantum gravitational effects could also have left an imprint in the cosmic microwave background, notably in the type of correlations that can be found in the fluctuations. There are various models of string cosmology and loop quantum cosmology that have explored the observational consequences, and proposed experiments like EUCLID and PRISM might find first hints. Also the upcoming measurements of the 21-cm hydrogen absorption signal could harbor information about quantum gravity.

A somewhat more speculative idea is based on a recent finding according to which the gravitational collapse of matter might not always form a black hole, but could escape the formation of a horizon. If that is so, then the remaining object would give us an open view of a region with quantum gravitational effects. It isn’t yet clear exactly which signals we would have to look for to find such an object, but this is a promising research direction because it could give us direct access to strong space-time curvature.

There are many other ideas out there. A large class of models, for example, deals with the possibility that quantum gravitational effects endow space-time with the properties of a medium. This can lead to the dispersion of light (colors running apart), birefringence (polarizations running apart), decoherence (preventing interference), or an opacity of otherwise empty space. More speculative ideas include Craig Hogan’s quest for holographic noise, Bekenstein’s table-top experiment that searches for Planck-length discreteness, or searches for evidence of a minimal length in tritium decay. Some general properties that have recently been found, and for which we still have to find good experimental tests, are geometric phase transitions in the early universe, or dimensional reduction.
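To get a feeling for the magnitudes involved in the first of these effects – dispersion – here is a minimal sketch in Python. It uses the first-order parameterization common in the phenomenology literature, where the arrival delay grows with the photon energy and the distance travelled; the source parameters below (a 10 GeV photon from a gigaparsec away) are illustrative assumptions, not numbers from this post:

    # Sketch: first-order quantum-gravity dispersion. The delay scales as
    # delta_t ~ xi * (delta_E / E_Planck) * (D / c), with xi a model-dependent
    # coefficient of order one. All source values here are purely illustrative.

    E_PLANCK_GEV = 1.22e19   # Planck energy in GeV
    C = 3.0e8                # speed of light in m/s
    GPC_IN_M = 3.086e25      # one gigaparsec in meters

    def dispersion_delay(delta_e_gev, distance_m, xi=1.0):
        """Arrival-time spread between photons differing by delta_e_gev in energy."""
        return xi * (delta_e_gev / E_PLANCK_GEV) * (distance_m / C)

    # A 10 GeV photon from a source at 1 Gpc, relative to a low-energy photon:
    print(dispersion_delay(10.0, GPC_IN_M))   # about 0.08 seconds

It is this amplification of the tiny ratio E/E_Planck by cosmological distances that makes such tests feasible at all.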

Without doubt, there is much that remains to be done. But we’re on the way.

[This post previously appeared on Starts With A Bang.]

Thursday, May 19, 2016

The Holy Grail of Crackpot Filtering: How the arXiv decides what’s science – and what’s not.

Where do we draw the boundary between science and pseudoscience? It’s a question philosophers have debated for as long as there’s been science – and last time I looked they hadn’t made much progress. When you ask a sociologist, the answer is normally a variant of: Science is what scientists do. So what do scientists do?

You might have heard that scientists use what’s called the scientific method, a virtuous cycle of generating and testing hypotheses which supposedly separates the good ideas from the bad ones. But that’s only part of the story because it doesn’t tell you where the hypotheses come from to begin with.

Science doesn’t operate with randomly generated hypotheses for the same reason natural selection doesn’t work with randomly generated genetic codes: it would be highly inefficient and any attempt to optimize the outcome would be doomed to fail. What we do instead is heavily filter hypotheses, considering only those which are small mutations of ideas that have previously worked. Scientists like to be surprised, but not too much.

Indeed, if you look at the scientific enterprise today, almost all of its institutionalized procedures are methods not for testing hypotheses, but for filtering hypotheses: Degrees, peer reviews, scientific guidelines, reproduction studies, measures for statistical significance, and community quality standards. Even the use of personal recommendations works to that end. In theoretical physics in particular the prevailing quality standard is that theories need to be formulated in mathematical terms. All these are requirements which have evolved over the last two centuries – and they have proved to work very well. It’s only smart to use them.

But the business of hypothesis filtering is a tricky one and it doesn’t proceed by written rules. It is a method that has developed through social demarcation, and as such it has its pitfalls. Humans are prone to social biases, and every once in a while an idea gets dismissed not because it’s bad, but because it lacks community support. And there is no telling how often this happens, because these are the stories we never get to hear.

It isn’t news that scientists lock shoulders to defend their territory and use technical terms like fraternities use secret handshakes. It thus shouldn’t come as a surprise that an electronic archive which caters to the scientific community would develop software to emulate the community’s filters. And that, in a nutshell, is what the arXiv is doing.

In an interesting recent paper, Luis Reyes-Galindo had a look at the arXiv moderators and their reliance on automated filters:

In the attempt to develop an algorithm that would sort papers into arXiv categories automatically, thereby supporting arXiv moderators in deciding when a submission needs to be reclassified, it turned out that papers which scientists would mark down as “crackpottery” showed up as not classifiable, or stood out by language significantly different from that of the published literature. According to Paul Ginsparg, who developed the arXiv more than 20 years ago:
“The first thing I noticed was that every once in a while the classifier would spit something out as ‘I don't know what category this is’ and you’d look at it and it would be what we’re calling this fringe stuff. That quite surprised me. How can this classifier that was tuned to figure out category be seemingly detecting quality?

“[Outliers] also show up in the stop word distribution, even if the stop words are just catching the style and not the content! They’re writing in a style which is deviating, in a way. [...]

“What it’s saying is that people who go through a certain training and who read these articles and who write these articles learn to write in a very specific language. This language, this mode of writing and the frequency with which they use terms and in conjunctions and all of the rest is very characteristic to people who have a certain training. The people from outside that community are just not emulating that. They don’t come from the same training and so this thing shows up in ways you wouldn’t necessarily guess. They’re combining two willy-nilly subjects from different fields and so that gets spit out.”
It doesn’t surprise me much – you can see this happening in comment sections all over the place: The “insiders” can immediately tell who is an “outsider.” Often it doesn’t take more than a sentence or two – an odd expression, a term used in the wrong context, a phrase that nobody in the field would ever use. It is only consequential that smart software can tell insiders from outsiders even more efficiently than humans can. According to Ginsparg:
“We've actually had submissions to arXiv that are not spotted by the moderators but are spotted by the automated programme [...] All I was trying to do is build a simple text classifier and inadvertently I built what I call The Holy Grail of Crackpot Filtering.”
Trying to speak in the code of a group you haven’t been part of for at least some time is pretty much impossible, much like it’s impossible to fake the accent of a city you haven’t lived in for a while. Such in-group and out-group demarcation is the subject of much study in sociology – not specifically the sociology of science, but generally. Scientists are human, and of course in-group and out-group behavior also shapes their profession, even though they like to deny it as if they were superhuman think-machines.

What is interesting about this paper is that, for the first time, it openly discusses how the process of filtering happens. It’s software that literally encodes the hidden rules that physicists use to sort out cranks. From what I can tell, the arXiv filters work reasonably well, otherwise there would be much complaint in the community. The vast majority of researchers in the field are quite satisfied with what the arXiv is doing, meaning the arXiv filters match their own judgement.

There are exceptions of course. I have heard some stories of people who were working on new approaches that fell through the cracks and were flagged as potential crackpottery. The cases that I know of could eventually be resolved, but that might tell you more about the people I know than about the way such issues typically end.

Personally, I have never had a problem with the arXiv moderation. I had a paper reclassified from gen-ph to gr-qc once by a well-meaning moderator, which is how I learned that gen-ph is the dump for borderline crackpottery. (How would I have known? I don’t read gen-ph. I was just assuming someone reads it.)

I don’t so much have an issue with what gets filtered on the arXiv; what bothers me much more is what does not get filtered and hence, implicitly, gets approval by the community. I am very sympathetic to the concerns of John The-End-Of-Science Horgan that scientists don’t do enough to keep their own house in order. There is no “invisible hand” that corrects scientists if they go astray. We have to do this ourselves. In-group behavior can greatly misdirect science because, given sufficiently many people, even fruitless research can become self-supporting. No filter that is derived from the community’s own judgement will do anything about this.

It’s about time that scientists start paying attention to social behavior in their community. It can, and sometimes does, affect objective judgement. Ignoring or flagging what doesn’t fit into pre-existing categories is one such social problem that can stand in the way of progress.

In a 2013 paper published in Science, a group of researchers quantified the likelihood of combinations of topics in citation lists and studied the correlation with the probability of the paper becoming a “hit” (meaning in the upper 5th percentile of citation scores). They found that having previously unlikely combinations in the quoted literature is positively correlated with the later impact of a paper. They also note that the fraction of papers with such ‘unconventional’ combinations has decreased from 3.54% in the 1980s to 2.67% in the 1990s, “indicating a persistent and prominent tendency for high conventionality.”
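To make the method concrete, here is a minimal sketch of such a conventionality measure – my own reconstruction of the idea, not the authors’ code: count how often pairs of journals appear together in reference lists, compare against a shuffled baseline, and treat pairs that occur much less often than chance as “unconventional”:

    # Sketch of a conventionality measure for reference lists (a reconstruction
    # of the idea, not the code of the Science paper): journal pairs that are
    # co-cited much less often than a shuffled baseline count as "unconventional".
    from itertools import combinations
    from collections import Counter
    import random

    def pair_counts(reference_lists):
        counts = Counter()
        for refs in reference_lists:
            for pair in combinations(sorted(set(refs)), 2):
                counts[pair] += 1
        return counts

    def conventionality(reference_lists, shuffles=100):
        observed = pair_counts(reference_lists)
        pool = [j for refs in reference_lists for j in refs]
        expected = Counter()
        for _ in range(shuffles):
            random.shuffle(pool)
            it = iter(pool)
            expected.update(pair_counts([[next(it) for _ in refs]
                                         for refs in reference_lists]))
        # Ratio > 1: the pair appears more often than chance (conventional);
        # ratio < 1: an unusual combination.
        return {p: observed[p] / ((expected[p] / shuffles) or 1)
                for p in observed}

    lists = [["PhysRevD", "JHEP"], ["PhysRevD", "Nature"], ["JHEP", "PhysRevD"]]
    print(conventionality(lists))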

Conventional science isn’t bad science. But we also need unconventional science, and we should be careful to not assign the label “crackpottery” too quickly. If science is what scientists do, scientists should pay some attention to the science of what they do.

Sunday, May 15, 2016

Dear Dr B: If photons have a mass, would this mean special relativity is no longer valid?

Einstein and Lorentz.
[Image: Wikipedia]
“[If photons have a restmass] would that mean the whole business of the special theory of relativity being derived from the idea that light has to go at a particular velocity in order for it to exist/Maxwell’s identification of e/m waves as light because they would have to go at the appropriate velocity is no longer valid?”

(This question came up in the discussion of a recent proposal according to which photons with a tiny restmass might cause an effect similar to the cosmological constant.)

Dear Brian,

The short answer to your question is “No.” If photons had a restmass, special relativity would still be as valid as it’s always been.

The longer answer is that the invariance of the speed of light features prominently in the popular explanations of special relativity for historic reasons, not for technical reasons. Einstein was led to special relativity by contemplating what it would be like to travel with light, and then tried to find a way to accommodate an observer’s motion with the invariance of the speed of light. But the derivation of special relativity is much more general than that, and it is unnecessary to postulate that the speed of light is invariant.

Special relativity is really just physics in Minkowski space, that is, the 4-dimensional space-time you obtain after promoting time from a parameter to a coordinate. Einstein wanted the laws of physics to be the same for all inertial observers in Minkowski-space, ie observers moving at constant velocity. If you translate this requirement into mathematics, you are led to ask for the symmetry transformations of Minkowski-space. These transformations form a group – the Poincaré-group – from which you can read off all the odd things you have heard of: time-dilatation, length-contraction, relativistic mass, and so on.

The Poincaré-group combines two subgroups. One contains just translations in space and time. This tells you that if you have an infinitely extended and unchanging space, then it doesn’t matter where or when you do your experiment – the outcome will be the same. The remaining part of the Poincaré-group is the Lorentz-group. The Lorentz-group contains rotations – this tells you it doesn’t matter in which direction you turn, the laws of nature will still be the same. Besides the rotations, the Lorentz-group contains boosts, which are basically rotations between space and time. Invariance under boosts tells you that it doesn’t matter at which velocity you move, the laws of nature will remain the same. It’s the boosts where all the special relativistic fun goes on.

Deriving the Lorentz-group, if you know how to do it, is a three-liner, and I assure you it has absolutely nothing to do with rocket ships and lasers and so on. It is merely based on the requirement that the metric of Minkowski-space has to remain invariant. Carry through with the math and you’ll find that the boosts depend on a free constant with the dimension of a speed. You can further show that this constant is the speed of massless particles.
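For the curious, here is the sketch – standard textbook material: demand that a linear transformation between two frames preserves the line element,

\[
c^{2} t'^{2} - x'^{2} = c^{2} t^{2} - x^{2}\,,
\]

and you find the boosts

\[
t' = \gamma \left( t - \frac{v\,x}{c^{2}} \right), \qquad
x' = \gamma \left( x - v\,t \right), \qquad
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}\,,
\]

in which c enters only as a free constant with the dimension of a speed.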

Hence, if photons are massless, then the constant in the Lorentz-transformation is the speed of light. If photons are not massless, then the constant in the Lorentz-transformation is still there, but not identical to the speed of light. We already know however that these constants must be identical to very good precision, which is the same as saying the mass of photons must be very small.
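The relation can be quantified with the textbook dispersion relation for a massive particle (the numerical bound is the one I recall from the Particle Data Group): a photon of mass m would travel at the energy-dependent speed

\[
v(E) = c\,\sqrt{1 - \frac{m^{2} c^{4}}{E^{2}}} \;\approx\; c \left( 1 - \frac{m^{2} c^{4}}{2 E^{2}} \right),
\]

where c is the constant in the Lorentz-transformation. Current observations require the photon mass to be below roughly 10⁻¹⁸ eV.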

Giving a mass to photons is unappealing not because it violates special relativity – it doesn’t – but because it violates gauge-invariance, the most cherished principle underlying the standard model. But that’s a different story and shall be told another time.

Thanks for an interesting question!

Monday, May 09, 2016

Book review: “The Big Picture” by Sean Carroll

The Big Picture: On the Origins of Life, Meaning, and the Universe Itself
Sean Carroll
Dutton (May 10, 2016)

Among the scientific disciplines, physics is unique: Concerned with the most fundamental entities, its laws must be respected in all other areas of science. While there are many emergent laws which are interesting in their own right – from neurobiology to sociology – there is no doubt they all have to be compatible with energy conservation. And the second law of thermodynamics. And quantum mechanics. And the standard model better be consistent with whatever you think are the neurological processes that make you “you.” There’s no avoiding physics.

In his new book, The Big Picture, Sean Carroll explains just why you can’t ignore physics when you talk about extrasensory perception, consciousness, god, afterlife, free will, or morals. In the first part, Sean lays out what, to our best current knowledge, the fundamental laws of nature are, and what their relevance is for all other emergent laws. In the later parts he then goes through the consequences that follow from this.

On the way from quantum field theory to morals, he covers what science has to say about complexity, the arrow of time, and the origin of life. (If you attended the 2011 FQXi conference, parts will sound very familiar.) Then, towards the end of the book, he derives advice from his physics-based philosophy – which he calls “poetic naturalism” – for finding “meaning” in life and finding a “good” way to organize our living together (scare quotes because these words might not mean what you think they mean). His arguments rely heavily on Bayesian reasoning, so you better be prepared to update your belief system while reading.

The Big Picture is, above everything, a courageous book – and an overdue one. I have had many arguments about exactly the issues that Sean addresses in his book – from “qualia” to “downwards causation” – but I neither have the patience nor the interest to talk people out of their cherished delusions. I’m an atheist primarily because I think religion would be wasting my time, time that I’d rather spend on something more insightful. Trying to convince people that their beliefs are inconsistent would also be wasting my time, hence I don’t. But if I did, I almost certainly wouldn’t be able to remain as infallibly polite as Sean.

So, I am super happy about this book. Because now, whenever someone brings up Mary The Confused Color Scientist who can’t tell sensory perception from knowledge about that perception, I’ll just – politely – tell them to read Sean’s book. The best thing I learned from The Big Picture is that apparently Frank Jackson, the philosopher who came up with The Color Scientist, eventually conceded that the argument was wrong. The world of philosophy indeed sometimes moves! Time then, to stop talking about qualia.

I really wish I had found something to disagree with in Sean’s book, but the only quibble I have (you won’t be surprised to hear) is that I think what Sean-The-Compatibilist calls “free will” doesn’t deserve being called “free will.” Using the adjective “free” strongly suggests an independence from the underlying microscopic laws, and hence a case of “strong emergence” – which is an idea that should go into the same bin as qualia. I also agree with Sean however that fighting about the use of words is moot.

(The other thing I’m happy about is that, leaving aside the standard model and general relativity, Sean’s book has almost zero overlap with the book I’m writing. *wipes_sweat_off_forehead*. Could you all please stop writing books until I’m done, it makes me nervous.)

In any case, it shouldn’t come as a surprise that I agree so wholeheartedly with Sean because I think everybody who open-mindedly looks at the evidence – ie all we currently know about the laws of nature – must come to the same conclusions. The main obstacle in conveying this message is that most people without training in particle physics don’t understand effective field theory, and consequently don’t see what this implies for the emergence of higher level laws. Sean does a great job overcoming this obstacle.

I wish I could make myself believe that after the publication of Sean’s book I’ll never again have to endure someone insisting there must be something about their experience that can’t be described by a handful of elementary particles. But I’m not very good at making myself believe in exceedingly unlikely scenarios, whether that’s the existence of an omniscient god or the ability of humans to agree on how unlikely this existence is. At the very least however, The Big Picture should make clear that physicists aren’t just arrogant when they say their work reveals insights that reach far beyond the boundaries of their discipline. Physics indeed has an exceptional status among the sciences.

[Disclaimer: Free review copy.]

Tuesday, May 03, 2016

Experimental Search for Quantum Gravity 2016

I am happy to announce that this year we will run the 5th international conference on Experimental Search for Quantum Gravity here in Frankfurt, Germany. The meeting will take place Sep 19-23, 2016.

We have a (quite preliminary) website up here. Application is now open and will run through June 1st. If you're a student or young postdoc with an interest in the phenomenology of quantum gravity, this conference might be a good starting point and I encourage you to apply. We cannot afford to hand out travel grants, but we will waive the conference fee for young participants (young in terms of PhD age, not biological age).

The meeting will be held at my new workplace, the Frankfurt Institute for Advanced Studies, FIAS for short. When it comes to technical support, they seem considerably better organized (not to mention staffed) than my previous institution. At this stage I am thus tentatively hopeful that this year we'll both record and livestream the talks. So stay tuned, there's more to come.

Wednesday, April 27, 2016

If you fall into a black hole

If you fall into a black hole, you’ll die. That much is pretty sure. But what happens before that?

The gravitational pull of a black hole depends on its mass. At a fixed distance from the center, it isn’t any stronger or weaker than that of a star with the same mass. The difference is that, since a black hole doesn’t have a surface, the gravitational pull can continue to increase as you approach the center.

The gravitational pull itself isn’t the problem; the problem is the change in the pull, the tidal force. It will stretch any extended object, in a process with the technical name “spaghettification.” That’s what will eventually kill you. Whether this happens before or after you cross the horizon depends, again, on the mass of the black hole. The larger the mass, the smaller the space-time curvature at the horizon, and the smaller the tidal force.
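The scaling is easy to see in the Newtonian approximation (a back-of-the-envelope estimate, factors of order one dropped): the tidal acceleration across an object of size d at distance r from a mass M is

\[
a_{\rm tidal} \approx \frac{2\, G M\, d}{r^{3}}\,,
\]

and evaluating this at the horizon radius r_s = 2GM/c² gives a_tidal ~ c⁶d/(G²M²) – the tidal force at the horizon falls with the square of the black hole’s mass.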

Leaving aside lots of hot gas and swirling particles, you have a good chance of surviving the crossing of the horizon of a supermassive black hole, like the one in the center of our galaxy. You would, however, probably be torn apart before crossing the horizon of a solar-mass black hole.

It takes you a finite time to reach the horizon of a black hole. For an outside observer however, you seem to be moving slower and slower and will never quite reach the black hole, due to the (technically infinitely large) gravitational redshift. If you take into account that black holes evaporate, it doesn’t quite take forever, and your friends will eventually see you vanishing. It might just take a few hundred billion years.
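Quantitatively, this is the standard Schwarzschild time dilation (leaving aside the infaller’s velocity): a time interval Δτ at radius r appears to a distant observer stretched to

\[
\Delta t_{\infty} = \frac{\Delta\tau}{\sqrt{1 - r_s/r}}\,,
\]

which diverges as r → r_s – hence the apparent freezing at the horizon.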

In an article that recently appeared on “Quick And Dirty Tips” (featured by SciAm), Everyday Einstein Sabrina Stierwalt explains:
“As you approach a black hole, you do not notice a change in time as you experience it, but from an outsider’s perspective, time appears to slow down and eventually crawl to a stop for you [...] So who is right? This discrepancy, and whose reality is ultimately correct, is a highly contested area of current physics research.”
No, it isn’t. The two observers have different descriptions of the process of falling into a black hole because they both use different time coordinates. There is no contradiction between the conclusions they draw. The outside observer’s story is an infinitely stretched version of the infalling observer’s story, covering only the part before horizon crossing. Nobody contests this.

I suspect this confusion was caused by the idea of black hole complementarity – which is indeed a highly contested area of current physics research. According to black hole complementarity, the information that falls into a black hole both goes in and comes out. This is in contradiction with quantum mechanics, which forbids making exact copies of a state. The idea of black hole complementarity is that nobody can ever make a measurement to document the forbidden copying and hence it isn’t a real inconsistency. Making such measurements is typically impossible because the infalling observer only has a limited amount of time before hitting the singularity.

Black hole complementarity is actually a pretty philosophical idea.

Now, the black hole firewall issue points out that black hole complementarity is inconsistent. Even if you can’t measure that a copy has been made, pushing the infalling information into the outgoing radiation changes the vacuum state in the horizon vicinity to a state which is no longer empty: that’s the firewall.

Be that as it may, even in black hole complementarity the infalling observer still falls in, and crosses the horizon at a finite time.

The real question that drives much current research is how the information comes out of the black hole before it has completely evaporated. It’s a topic which has been discussed for more than 40 years now, and there is little sign that theorists will agree on a solution. And why would they? Leaving aside fluid analogies, there is no experimental evidence for what happens with black hole information, and there is hence no reason for theorists to converge on any one option.

The theory assessment in this research area is purely non-empirical, to use an expression by philosopher Richard Dawid. It’s why I think if we ever want to see progress on the foundations of physics we have to think very carefully about the non-empirical criteria that we use.

Anyway, the lesson here is: Everyday Einstein’s Quick and Dirty Tips is not a recommended travel guide for black holes.

Wednesday, April 20, 2016

Dear Dr B: Why is Lorentz-invariance in conflict with discreteness?

Can we build up space-time from
discrete entities?
“Could you elaborate (even) more on […] the exact tension between Lorentz invariance and attempts for discretisation?”

Dear Noa:

Discretization is a common procedure to deal with infinities. Since quantum mechanics relates large energies to short (wave)lengths, introducing a shortest possible distance corresponds to cutting off momentum integrals. This can remove infinities that come in at large momenta (or, as physicists say, “in the UV”).
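Schematically, for a generic example (in Euclidean signature, not tied to any particular theory), a loop integral that diverges at large momenta

\[
\int \frac{d^{4}k}{(2\pi)^{4}}\, \frac{1}{k^{2} + m^{2}}
\;\longrightarrow\;
\int_{|k| \,\le\, \Lambda} \frac{d^{4}k}{(2\pi)^{4}}\, \frac{1}{k^{2} + m^{2}}
\]

becomes finite once the momenta are cut off at Λ ~ 1/ℓ_min, with ℓ_min the shortest distance.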

Such hard cut-off procedures were quite common in the early days of quantum field theory. They have since been replaced with more sophisticated regularization procedures, but these don’t work for quantum gravity. It is therefore tempting to use discretization to get rid of the infinities that plague quantum gravity.

Lorentz-invariance is the symmetry of Special Relativity; it tells us how observables transform from one reference frame to another. Certain types of observables, called “scalars,” don’t change at all. In general, observables do change, but they do so by a well-defined procedure, namely the application of Lorentz-transformations. We call these “covariant.” Or at least we should – most often, invariance is conflated with covariance in the literature.

(To be precise, Lorentz-covariance isn’t the full symmetry of Special Relativity because there are also translations in space and time that should maintain the laws of nature. If you add these, you get Poincaré-invariance. But the translations aren’t so relevant for our purposes.)

Lorentz-transformations acting on distances and times lead to the phenomena of Lorentz-contraction and time-dilatation. That means observers at relative velocities to each other measure different lengths and time-intervals. As long as there aren’t any interactions, this has no consequences. But once you have objects that can interact, relativistic contraction has measurable consequences.

Heavy ions for example, which are collided in facilities like RHIC or the LHC, are accelerated to almost the speed of light, which results in a significant length contraction in beam direction, and a corresponding increase in the density. This relativistic squeeze has to be taken into account to correctly compute observables. It isn’t merely an apparent distortion, it’s a real effect.
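In numbers (a textbook estimate, not specific to any one experiment): the contraction is governed by the Lorentz factor,

\[
L = \frac{L_{0}}{\gamma}\,, \qquad \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}\,,
\]

and for heavy ions at the LHC γ is of order 10³, so a nucleus that is spherical in its rest frame is a thin pancake in the lab frame.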

Now consider you have a regular cubic lattice which is at rest relative to you. Alice comes by in a space-ship at high velocity; what does she see? She doesn’t see a cubic lattice – she sees a lattice that is squeezed in one direction due to Lorentz-contraction. Which of you is right? You’re both right. It’s just that the lattice isn’t invariant under the Lorentz-transformation, and neither are any interactions with it.

The lattice can therefore be used to define a preferred frame, that is a particular reference frame which isn’t like any other frame, violating observer independence. The easiest way to do this would be to use the frame in which the spacing is regular, ie your restframe. If you compute any observables that take into account interactions with the lattice, the result will now explicitly depend on the motion relative to the lattice. Condensed matter systems are thus generally not Lorentz-invariant.

A Lorentz-contraction can convert any distance, no matter how large, into another distance, no matter how short. Similarly, it can blue-shift long wavelengths to short wavelengths, and hence can make small momenta arbitrarily large. This however runs into conflict with the idea of cutting off momentum integrals. For this reason approaches to quantum gravity that rely on discretization or analogies to condensed matter systems are difficult to reconcile with Lorentz-invariance.

So what, you may say, let’s just throw out Lorentz-invariance then. Let us just take a tiny lattice spacing so that we won’t see the effects. Unfortunately, it isn’t that easy. Violations of Lorentz-invariance, even if tiny, spill over into all kinds of observables even at low energies.

A good example is vacuum Cherenkov radiation, that is, the spontaneous emission of a photon by an electron. This effect is normally – ie when Lorentz-invariance is respected – forbidden by energy-momentum conservation. It can only take place in a medium which has components that can recoil. But Lorentz-invariance violation would allow electrons to radiate off photons even in empty space. No such effect has been seen, and this leads to very strong bounds on Lorentz-invariance violation.
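The kinematic argument is short (standard special-relativistic bookkeeping): in the rest frame of the initial electron, energy conservation for the emission of a photon of energy E_γ would require

\[
m_{e} c^{2} = \sqrt{m_{e}^{2} c^{4} + p'^{2} c^{2}} + E_{\gamma}\,,
\]

but the right-hand side exceeds m_e c² for any E_γ > 0, so the process cannot occur. Add Lorentz-invariance violating terms to the dispersion relations, and this balance can tip above some threshold energy.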

And this isn’t the only bound. There are literally dozens of particle interactions that have been checked for Lorentz-invariance violating contributions with absolutely no evidence showing up. Hence, we know that Lorentz-invariance, if not exact, is respected by nature to extremely high precision. And this is very hard to achieve in a model that relies on a discretization.

Having said that, I must point out that not every quantity with the dimension of a length actually transforms as a distance. Thus, the existence of a fundamental length scale is not a priori in conflict with Lorentz-invariance. The best example is maybe the Planck length itself. It has the dimension of a length, but it’s defined from constants of nature that are themselves frame-independent – so it doesn’t transform as a distance. For the same reason string theory is perfectly compatible with Lorentz-invariance even though it contains a fundamental length scale.

The tension between discreteness and Lorentz-invariance appears whenever you have objects that transform like distances, or like areas, or like spatial volumes. The Causal Set approach is therefore an exception to the problems with discreteness (to my knowledge the only exception). The reason is that Causal Sets are a randomly distributed collection of (unconnected!) points with a four-density that is constant on average. The random distribution prevents the problems with regular lattices. And since points and four-volumes are both Lorentz-invariant, no preferred frame is introduced.

It is remarkable just how difficult Lorentz-invariance makes it to reconcile general relativity with quantum field theory. The fact that no violations of Lorentz-invariance have been found and the insight that discreteness therefore seems an ill-fated approach has significantly contributed to the conviction of string theorists that they are working on the only right approach. Needless to say there are some people who would disagree, such as probably Carlo Rovelli and Garrett Lisi.

Either way, the absence of Lorentz-invariance violations is one of the prime examples that I draw upon to demonstrate that it is possible to constrain theory development in quantum gravity with existing data. Everyone who still works on discrete approaches must now make really sure to demonstrate there is no conflict with observation.

Thanks for an interesting question!

Wednesday, April 13, 2016

Dark matter might connect galaxies through wormholes

Tl;dr: A new paper shows that one of the most popular types of dark matter – the axion – could make wormholes possible if strong electromagnetic fields, like those found around supermassive black holes, are present. It remains unclear how such wormholes would form and whether they would be stable.
Wormhole dress.
Source: Shenova.

Wouldn’t you sometimes like to vanish into a hole and crawl out in another galaxy? It might not be as impossible as it seems. General relativity has long been known to allow for “wormholes” that are short connections between seemingly very distant places. Unfortunately, these wormholes are unstable and cannot be traversed unless filled by “exotic matter,” which must have negative energy density to keep the hole from closing. And no matter that we have ever seen has this property.

The universe, however, contains a lot of matter that we have never seen, which might give you hope. We observe this “dark matter” only through its gravitational pull, but this is enough to tell that it behaves pretty much like regular matter. Dark matter too is thus not exotic enough to help with stabilizing wormholes. Or so we thought.

In a recent paper, Konstantinos Dimopoulos from the “Consortium for Fundamental Physics” at Lancaster University points out that dark matter might be able to mimic the behavior of exotic matter when caught in strong electromagnetic fields:
    Active galaxies may harbour wormholes if dark matter is axionic
    By Konstantinos Dimopoulos
    arXiv:1603.04671 [astro-ph.HE]
Axions are one of the most popular candidates for dark matter. The particles themselves are very light, but they form a condensate in the early universe that should still be around today, giving rise to the observed dark matter distribution. Like all other dark matter candidates, axions have been searched for but so far not been detected.

In his paper, Dimopoulos points out that, due to their peculiar coupling to electromagnetic fields, axions can acquire an apparent mass which makes a negative contribution to their energy. This effect isn’t so unusual – it is similar to the way that fermions obtain masses by coupling to the Higgs or that scalar fields can obtain effective masses by coupling to electromagnetic fields. In other words, it’s not totally unheard of.
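The coupling in question is the usual axion-photon interaction – this is its general form (the specific estimate is in the paper, and sign conventions vary):

\[
\mathcal{L} \;\supset\; -\frac{g_{a\gamma}}{4}\, a\, F_{\mu\nu} \tilde{F}^{\mu\nu} \;=\; g_{a\gamma}\, a\, \vec{E} \cdot \vec{B}\,,
\]

so in a background of strong electric and magnetic fields the axion a picks up additional terms in its effective potential, which, according to the paper, can act like a negative contribution to its effective mass.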

Dimopoulos then estimates how strong an electromagnetic field is necessary to turn axions into exotic matter and finds that around supermassive black holes the conditions would just be right. Hence, he concludes, axionic dark matter might keep wormholes open and traversable.

In his present work, Dimopoulos has however not done a fully relativistic computation. He considers the axions in the background of the black hole, but not the coupled solution of axions plus black hole. The analysis so far also does not check whether the wormhole would indeed be stable, or if it would instead blow off the matter that is supposed to stabilize it. And finally, it leaves open the question how the wormhole would form. It is one thing to discuss configurations that are mathematically possible, but it’s another thing entirely to demonstrate that they can actually come into being in our universe.

So it’s an interesting idea, but it will take a little more to convince me that this is possible.

And in case you warmed up to the idea of getting out of this galaxy, let me remind you that the closest supermassive black hole is still 26,000 light years away.

Note added: As mentioned by a commenter (see below) the argument in the paper might be incorrect. I asked the author for comment, but no reply so far.
Another note: The author says he has revised and replaced the paper, and that the conclusions are not affected.

Thursday, April 07, 2016

10 Essentials of Quantum Mechanics

Vortices in a Bose-Einstein condensate.
Source: NIST.

Trying to score at next week’s dinner party? Here’s how to intimidate your boss by fluently speaking quantum.

1. Everything is quantum

It’s not like some things are quantum mechanical and other things are not. Everything obeys the same laws of quantum mechanics – it’s just that quantum effects of large objects are very hard to notice. This is why quantum mechanics was a latecomer in theoretical physics: It wasn’t until physicists had to explain why electrons sit on shells around the atomic nucleus that quantum mechanics became necessary to make accurate predictions.

2. Quantization doesn’t necessarily imply discreteness

“Quanta” are discrete chunks, but not everything becomes chunky on short scales. Electromagnetic waves are made of quanta called “photons,” so the waves can be thought of as discretized. And electron shells around the atomic nucleus can only have certain discrete radii. But other particle properties do not become discrete even in a quantum theory. The position of electrons in the conduction band of a metal, for example, is not discrete – the electron can occupy any place within the band. And the energy values of the photons that make up electromagnetic waves are not discrete either. For this reason, quantizing gravity – should we finally succeed at it – does not necessarily mean that space and time have to be made discrete.

3. Entanglement is not the same as superposition

A quantum superposition is the ability of a system to be in two different states at the same time, and yet, when measured, one always finds one particular state, never a superposition. Entanglement on the other hand is a correlation between parts of a system – something entirely different. Superpositions are not fundamental: Whether a state is or isn’t a superposition depends on what you want to measure. A state can for example be in a superposition of positions and not in a superposition of momenta – so the whole concept is ambiguous. Entanglement on the other hand is unambiguous: It is an intrinsic property of each system and the best known measure, so far, of a system’s quantum-ness. (For more details, read “What is the difference between entanglement and superposition?”)
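In formulas, the distinction looks like this (a standard textbook example): the single-particle state

\[
|\psi\rangle = \frac{1}{\sqrt{2}} \big( |0\rangle + |1\rangle \big)
\]

is a superposition in the basis {|0⟩, |1⟩}, but merely a basis state in another basis. The two-particle state

\[
|\Phi\rangle = \frac{1}{\sqrt{2}} \big( |00\rangle + |11\rangle \big)
\]

however cannot be written as a product of one-particle states in any basis – that is entanglement.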

4. There is no spooky action at a distance

Nowhere in quantum mechanics is information ever transmitted non-locally, so that it jumps over a stretch of space without having to go through all places in between. Entanglement is itself non-local, but it doesn’t do any action – it is a correlation that is not connected to non-local transfer of information or any other observable. This was a source of great confusion in the early days of quantum mechanics, but we know today that the theory can be made perfectly compatible with Einstein’s theory of Special Relativity, in which information cannot be transferred faster than the speed of light.

5. It’s an active research area

It’s not like quantum mechanics is yesterday’s news. True, the theory originated more than a century ago. But many aspects of it became testable only with modern technology. Quantum optics, quantum information, quantum computing, quantum cryptography, quantum thermodynamics, and quantum metrology are all recently formed and presently very active research areas. With the new technology, also interest in the foundations of quantum mechanics has been reignited.

6. Einstein didn’t deny it

Contrary to popular opinion, Einstein was not a quantum mechanics denier. He couldn’t possibly be – the theory was so successful early on that no serious scientist could dismiss it. Einstein instead argued that the theory was incomplete, and believed the inherent randomness of quantum processes must have a deeper explanation. It was not that he thought the randomness was wrong, he just thought that this wasn’t the end of the story. For an excellent clarification of Einstein’s views on quantum mechanics, I recommend George Musser’s article “What Einstein Really Thought about Quantum Mechanics” (paywalled, sorry).

7. It’s all about uncertainty

The central postulate of quantum mechanics is that there are pairs of observables that cannot be measured simultaneously, like for example the position and momentum of a particle. These pairs are called “conjugate variables,” and the impossibility of measuring both their values precisely is what makes all the difference between a quantized and a non-quantized theory. In quantum mechanics, this uncertainty is fundamental, not due to experimental shortcomings.
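For the best-known pair, position and momentum, the statement reads

\[
[\hat{x}, \hat{p}] = i\hbar
\quad \Longrightarrow \quad
\Delta x \, \Delta p \,\ge\, \frac{\hbar}{2}\,,
\]

so the more precisely the position is determined, the less precisely the momentum can be known, and vice versa.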

8. Quantum effects are not necessarily small...

We do not normally observe quantum effects over long distances because the necessary correlations are very fragile. Treat them carefully enough, however, and quantum effects can persist over long distances. Photons have for example been entangled over separations of as much as several hundred kilometers. And in Bose-Einstein condensates, up to several million atoms have been brought into one coherent quantum state. Some researchers even believe that dark matter has quantum effects which span through whole galaxies.

9. ...but they dominate the small scales

In quantum mechanics, every particle is also a wave and every wave is also a particle. The effects of quantum mechanics become very pronounced once one observes a particle on distances that are comparable to the associated wavelength. This is why atomic and subatomic physics cannot be understood without quantum mechanics, whereas planetary orbits are entirely unaffected by quantum behavior.
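The relevant length here is the de Broglie wavelength,

\[
\lambda = \frac{h}{p}\,,
\]

which for an electron at typical atomic velocities comes out at around 10⁻¹⁰ m – the size of an atom – whereas for a planet on its orbit it is smaller than anything observable by dozens of orders of magnitude.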

10. Schrödinger’s cat is dead. Or alive. But not both.

It was not well-understood in the early days of quantum mechanics, but the quantum behavior of macroscopic objects decays very rapidly. This “decoherence” is due to constant interactions with the environment which are, in relatively warm and dense places like those necessary for life, impossible to avoid. Bringing large objects into superpositions of two different states is therefore extremely difficult and the superposition fades rapidly.

The heaviest object that has so far been brought into a superposition of locations is a carbon-60 molecule, and it has been proposed to do this experiment also for viruses or even heavier creatures like bacteria. Thus, the paradox that Schrödinger’s cat once raised – the transfer of a quantum superposition (the decaying atom) to a large object (the cat) – has been resolved. We now understand that while small things like atoms can exist in superpositions for extended amounts of time, a large object would settle extremely rapidly in one particular state. That’s why we never see cats that are both dead and alive.

[This post previously appeared on Starts With A Bang.]

Sunday, April 03, 2016

New link between quantum computing and black holes may solve information loss problem

[image source: IBT]

If you leave the city limits of Established Knowledge and pass the Fields of Extrapolation, you enter the Forest of Speculations. As you get deeper into the forest, larger and larger trees impinge on the road, strangely deformed, knotted onto themselves, bent over backwards. They eventually grow so close that they block out the sunlight. It must be somewhere here, just before you cross over from speculation to insanity, that Gia Dvali looks for new ideas and drags them into the sunlight.

Dvali’s newest idea is that every black hole is a quantum computer. And not just any quantum computer, but a quantum computer made of a Bose-Einstein condensate that self-tunes to the quantum critical point. In one sweep, he has combined everything that is cool in physics at the moment.

This link between black holes and Bose-Einstein condensates is based on simple premises. Dvali set out to find some stuff that would share properties with black holes, notably the relation between entropy and mass (BH entropy), the decrease in entropy during evaporation (Page time), and the ability to scramble information quickly (scrambling time). What he found was that certain condensates do exactly this.

Consequently he went and conjectured that this is more than a coincidence, and that black holes themselves are condensates – condensates of gravitons, whose quantum criticality allows the fast scrambling. The gravitons equip black holes with quantum hair on horizon scale, and hence provide a solution to the black hole information loss problem by first storing information and then slowly leaking it out.
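The scaling underlying this picture, as I understand it from the Dvali-Gomez papers (factors of order one omitted), is that a black hole of mass M corresponds to a condensate of

\[
N \sim \frac{M^{2}}{M_{\rm P}^{2}}
\]

gravitons with wavelengths of the order of the horizon radius, which reproduces the Bekenstein-Hawking entropy S ~ N.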

Bose-Einstein condensates on the other hand contain long-range quantum effects that make them good candidates for quantum computers. The individual qubits that have been proposed for use in these condensates are normally correlated atoms trapped in optical lattices. Based on his analogy with black holes however, Dvali suggests using a different type of state for information storage, which would optimize the storage capacity.

I had the opportunity to speak with Immanuel Bloch from the Max Planck Institute for Quantum Optics about Dvali’s idea, and I learned that while it seems possible to create a self-tuned condensate to mimic the black hole, addressing the states that Dvali has identified is difficult and, at least presently, not practical. You can read more about this in my recent Aeon essay.

But really, you may ask, what isn’t a quantum computer? Doesn’t anything that changes in time according to the equations of quantum mechanics process information and compute something? Doesn’t every piece of chalk execute the laws of nature and evaluate its own fate, doing a computation that somehow implies something with quantum?

That’s right. But when physicists speak of quantum computers, they mean a particularly powerful collection of entangled states, assemblies that can hold and manipulate much more information than a largely classical state. It’s this property of quantum computers specifically that Dvali claims black holes must also possess. The chalk just won’t do.

If what Dvali says is correct, a real black hole out there in space doesn’t compute anything in particular. It merely stores the information of what fell in and spits it back out again. But a better understanding of how to initialize a state might allow us one day – give it some hundred years – to make use of nature’s ability to distribute information enormously quickly.

The relevant question is of course, can you test that it’s true?

I first heard of Dvali’s idea at a conference I attended last year in July. In his talk, Dvali spoke about possible observational evidence for the quantum hair due to modifications of orbits nearby the black hole. At least that’s my dim recollection almost a year later. He showed some preliminary results of this, but the paper hasn’t been published and the slides aren’t online. Instead, together with some collaborators, he published a paper arguing that the idea is compatible with the Hawking, Perry, Strominger proposal to solve the black hole information loss problem, which also relies on black hole hair.

In November then, I heard another talk by Stefan Hofmann, who had also worked on some aspects of the idea that black holes are Bose-Einstein condensates. He told the audience that one might see a modification in the gravitational wave signal of black hole merger ringdowns. Which have since indeed been detected. Again though, there is no paper.

So I am tentatively hopeful that we can look for evidence of this idea in the near future, but so far there aren’t any predictions. I have a proposal of my own to add for observational consequences of this approach, which is to look at the scattering cross-section of the graviton condensate with photons in the wavelength regime of the horizon size (ie radio waves). I don’t have time to really work on this, but if you’re looking for a one-year project in quantum gravity phenomenology, this one seems interesting.

Dvali’s idea has some loose ends of course. Notably, it isn’t clear how the condensate escapes collapse – at least it isn’t clear to me, nor to anyone I talked to. The general argument is that for the condensate the semi-classical limit is a bad approximation, and thus the singularity theorems are rather meaningless. While that might be, it’s too vague for my comfort. The idea also seems superficially similar to the fuzzball proposal, and it would be good to know the relation or differences.

After these words of caution, let me add that this link between condensed matter, quantum information, and black holes isn’t as crazy as it seems at first. In the last years, a lot of research has piled up that tightens the connections between these fields. Indeed, a recent paper by Brown et al hypothesizes that black holes are not only the most efficient storage devices but indeed the fastest computers.

It’s amazing just how much we have learned from a single solution to Einstein’s field equations, and not even a particularly difficult one. “Black hole physics” really should be a research field in its own right.

Friday, April 01, 2016

3 Billion Year Old Math Problem Solved by Prodigy Fetus

For Berta’s mother, the first kick already made clear that her daughter was extraordinary: “This wasn’t just any odd kick, it was a p-wave cross-correlation seismogram.” But this pregnancy exceeded even the most enthusiastic mother’s expectations. Still three months shy of her due date, fetus Berta has just published her first paper in the renowned mathematics journal “Reviews in Topology.” And it isn’t just any odd cohomological invariance that she has taken on, but one of the thorniest problems known to mathematicians.

Like most of the big mathematical puzzles, this one is easy to understand, and yet even the greatest minds on the planet have so far been unsuccessful in proving it. Consider you have a box of arbitrary dimension, filled with randomly spaced polyhedra that touch on exactly three surfaces each. Now you take them out of the box, remove one surface, turn the box by 90 degrees around Donald Trump’s belly button, and then put the polyhedra back into the box. Put in simple terms, this immediately raises the question: “Who cares?”

Berta’s proof demonstrates that the proposition is correct. Her work, which has been lauded by colleagues as “a masterwork of incomprehensibility” and “a lucid dream caught in equations,” draws upon recent research in fields ranging from differential geometry to category theory to volcanology. The complete proof adds up to 5000 pages. “It’s quite something,” says her mother, who was nicknamed “next Einstein’s mom” on Berta’s reddit AMA last week. “We hope the paper will be peer reviewed by the time she gets her driver’s license.”

Monday, March 28, 2016

Dear Dr. B: What are the requirements for a successful theory of quantum gravity?

“I've often heard you say that we don't have a theory of quantum gravity yet. What would be the requirements, the conditions, for quantum gravity to earn the label of 'a theory' ?

I am particularly interested in the nuances on the difference between satisfying current theories (GR&QM) and satisfying existing experimental data. Because a theory often entails an interpretation whereas a piece of experimental evidence or observation can be regarded as correct 'an sich'.

That aside from satisfying the need for new predictions, etc.

Thank you,

Best Regards,

Noa Drake”

Dear Noa,

I want to answer your question in two parts. First: What does it take for a hypothesis to earn the label “theory” in physics? And second: What are the requirements for a theory of quantum gravity in particular?

What does it take for a hypothesis to earn the label “theory” in physics?

Like almost all nomenclature in physics – except the names of new heavy elements – the label “theory” is not awarded by some agreed-upon regulation, but emerges from usage in the community – or doesn’t. Contrary to what some science popularizers want the public to believe, scientists do not use the word “theory” in a very precise way. Some names stick, others don’t, and trying to change a name already in use is often futile.

The best way to capture what physicists mean by “theory” is that it describes an identification between mathematical structures and observables. The theory is the map between the math-world and the real world. A “model,” on the other hand, is something slightly different: it’s the stand-in for the real world that is being mapped with the help of the theory. For example, the standard model is the math-thing which is mapped by quantum field theory to the real world. The cosmological concordance model is mapped by the theory of general relativity to the real world. And so on.

But of course not everybody agrees. Frank Wilczek and Sean Carroll for example want to rename the standard model to “core theory.” David Gross argues that string theory isn’t a theory, but actually a “framework.” And Paul Steinhardt insists on calling the model of inflation a “paradigm.” I have a theory that physicists like being disagreeable.

Sticking with my own nomenclature, what it takes to make a theory in physics is 1) a mathematically consistent formulation – at least in some well-controlled approximation, 2) an unambiguous identification of observables, and 3) agreement with all available data relevant in the range in which the theory applies.

These are high demands, and the difficulty of meeting them is almost always underestimated by those who don’t work in the field. Physics is a very advanced discipline and the existing theories have been confirmed to extremely high precision. It is therefore very hard to make any changes that improve the existing theories rather than screwing them up altogether.

What are the requirements for a theory of quantum gravity in particular?

The combination of the standard model and general relativity is not mathematically consistent at energies beyond the Planck scale, which is why we know that a theory of quantum gravity is necessary. A successful theory of quantum gravity must achieve mathematical consistency at all energies, or – if it is not a final theory – at least well beyond the Planck scale.
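For concreteness: the Planck scale is the energy you get from combining Newton’s constant, Planck’s constant, and the speed of light. A few lines of Python give the numbers – standard textbook values, nothing specific to any particular approach to quantum gravity:

    import math

    hbar = 1.0546e-34    # reduced Planck constant, J s
    G    = 6.674e-11     # Newton's constant, m^3 / (kg s^2)
    c    = 2.998e8       # speed of light, m/s
    eV   = 1.602e-19     # Joules per electron volt

    l_planck = math.sqrt(hbar * G / c**3)   # Planck length
    E_planck = math.sqrt(hbar * c**5 / G)   # Planck energy

    print(f"Planck length: {l_planck:.1e} m")               # ~1.6e-35 m
    print(f"Planck energy: {E_planck / eV / 1e9:.1e} GeV")  # ~1.2e19 GeV

For comparison, the LHC collides protons at about 10^4 GeV – fifteen orders of magnitude in energy short of the Planck scale.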

If you quantize gravity like the other interactions, the theory you end up with – perturbatively quantized gravity – breaks down at high energies; it produces nonsensical answers. In physics parlance, high energies are often referred to as “the ultra-violet” or “the UV” for short, and the missing theory is hence the “UV-completion” of perturbatively quantized gravity.

At the energies that we have tested so far, quantum gravity must reproduce general relativity with a suitable coupling to the standard model. Strictly speaking, it doesn’t have to reproduce these theories themselves, but only the data that we have measured. But since there is so much data at low energies, and we already know this data is described by the standard model and general relativity, we don’t try to reproduce each and every observation. Instead, we just try to recover the already known theories in the low-energy approximation.

That the theory of quantum gravity must remove inconsistencies in the combination of the standard model and general relativity means in particular it must solve the black hole information loss problem. It also means that it must produce meaningful answers for the interaction probabilities of particles at energies beyond the Planck scale. It is furthermore generally believed that quantum gravity will avoid the formation of space-time singularities, though this isn’t strictly speaking necessary for mathematical consistency.

These requirements are very strong and incredibly hard to meet. There are presently only a few serious candidates for quantum gravity: string theory, loop quantum gravity, asymptotically safe gravity, causal dynamical triangulation, and, somewhat down the line, causal sets and a collection of emergent gravity ideas.

Among those candidates, string theory and asymptotically safe gravity have a well-established compatibility with general relativity and the standard model. Of the two, string theory is favored by the vast majority of physicists in the field, primarily because it has given rise to more insights and contains more internal connections. Whenever I ask someone what they think about asymptotically safe gravity, they tell me it would be “depressing” or “disappointing.” I know, it sounds more like psychology than physics.

Having said that, let me mention for completeness that, based on purely logical reasoning, it isn’t necessary to find a UV-completion for perturbatively quantized gravity. Instead of quantizing gravity at high energies, you can ‘unquantize’ matter at high energies, which also solves the problem. Of all existing attempts to remove the inconsistencies that arise when combining the standard model with general relativity, this is possibly the most unpopular option.

I do not think that the data we have so far plus the requirement of mathematical consistency will allow us to derive one unique theory. This means that without additional data physicists have no reason to ever converge on any one approach to quantum gravity.

Thank you for an interesting question!

Wednesday, March 23, 2016

Hey Bill Nye, Please stop talking nonsense about quantum mechanics.

Bill Nye, also known as The Science Guy, is a popular science communicator in the USA. He has appeared regularly on TV and, together with Corey Powell, has written two books. On Twitter, he has gathered 2.8 million followers, which ranks him somewhere between Brian Cox and Neil deGrasse Tyson. This morning, a video of Bill Nye explaining quantum entanglement was pointed out to me:

The video seems to be part of a series in which he answers questions from his fans. Here we have a young man by the name of Tom from Western Australia calling in. The transcript starts as follows:
Tom: Hi, Bill. Tom, from Western Australia. If quantum entanglement or quantum spookiness can allow us to transmit information instantaneously, that is faster than the speed of light, how do you think this could, dare I say it, change the world?

Bill Nye: Tom, I love you man. Thanks for the tip of the hat there, the turn of phrase. Will quantum entanglement change the world? If this turns out to be a real thing, well, or if we can take advantage of it, it seems to me the first thing that will change is computing. We’ll be able to make computers that work extraordinarily fast. But it carries with it, for me, this belief that we’ll be able to go back in time; that we’ll be able to harness energy somehow from black holes and other astrophysical phenomenon that we observe in the cosmos but not so readily here on earth. We’ll see. Tom, in Western Australia, maybe you’ll be the physicist that figures quantum entanglement out at its next level and create practical applications. But for now, I’m not counting on it to change the world.
I thought I must have slept through Easter and it’s already April 1st. I replayed this like 5 times. But it didn’t get any better. So what else can I do but take to my blog in the futile attempt to bring sanity back to earth?

Dear Tom,

This is an interesting question which allows one to engage in some lovely science fiction speculation, but first let us be clear that quantum entanglement does not allow one to transmit information faster than the speed of light. Entanglement is a non-local correlation that forces particles to share properties, potentially over long distances. But there is no way to send information through this link, because the particles are quantum mechanical and their properties are randomly distributed.

Quantum entanglement is a real thing, we know this already. This has been demonstrated in countless experiments, and while multi-particle correlations are an active research area, the basic phenomenon is well-understood. But entanglement does not imply a spooky “action” at a distance – this is a misleading historical phrase which lives on in science communication just because it has a nice ring to it. Nothing ever acts between the entangled particles – they are merely correlated. That entanglement might allow faster-than-light communication was a confusion in the 1950s, but it’s long been understood that quantum mechanics is perfectly compatible with Einstein’s theory of Special Relativity in which information cannot be transmitted faster than the speed of light.
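If you want to convince yourself of this, here is a toy calculation – my own sketch, not anybody’s official code – of the quantum mechanical predictions for a pair of entangled spins. Whatever measurement direction the distant partner chooses, the local outcome statistics remain an uninformative 50:50, so there is no channel on which to imprint a message:

    import numpy as np

    def singlet_joint_probs(alpha, beta):
        # Quantum prediction for spin measurements on a singlet pair:
        # outcomes +1/-1 along direction alpha (Alice) and beta (Bob).
        c = np.cos(alpha - beta)
        p_same = (1 - c) / 4    # P(+,+) = P(-,-)
        p_diff = (1 + c) / 4    # P(+,-) = P(-,+)
        return {(+1, +1): p_same, (-1, -1): p_same,
                (+1, -1): p_diff, (-1, +1): p_diff}

    alpha = 0.0
    for beta in (0.0, np.pi / 4, np.pi / 2, np.pi):
        probs = singlet_joint_probs(alpha, beta)
        # Alice's marginal: sum over Bob's possible outcomes.
        p_alice_up = probs[(+1, +1)] + probs[(+1, -1)]
        print(f"Bob's setting {beta:.2f}: P(Alice sees +1) = {p_alice_up:.2f}")

    # Prints 0.50 on every line: Bob's choice of measurement leaves
    # Alice's statistics untouched, hence no faster-than-light signal.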

No, it really can’t. Sorry about that. Yes, I too would love to send messages to the other side of the universe without having to wait some billion years for a reply. But for all we presently know about the laws of nature, it’s not possible.

Entanglement is the relevant ingredient in building quantum computers, and these could indeed dramatically speed up information processing and increase storage capacities, hence the effort that is being made to build one. But this has nothing to do with exchanging information faster than light; it relies instead on the number of different states that quantum particles can be brought into, which is huge compared to the states of normal computers. (Normal computers also work only thanks to quantum mechanics, but they don’t use quantum states for information processing.)
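The difference is easy to quantify: n classical bits sit in exactly one of 2^n configurations, whereas the state of n qubits assigns a complex amplitude to every one of these configurations at once. A few lines of Python show how fast that grows:

    # n classical bits: the machine occupies ONE of 2**n configurations.
    # n qubits: the quantum state carries an amplitude for EACH of the
    # 2**n configurations, and entanglement ties them together.
    for n in (10, 50, 300):
        print(f"{n} qubits: {float(2**n):.2e} amplitudes")

    # Already at 300 qubits the count (~2e90) exceeds the estimated
    # number of atoms in the observable universe (~1e80).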

Now let us forget about the real world for a moment, and imagine what we could do if it were possible to send information faster than the speed of light, even though to the best of our present knowledge it is not. Maybe this is what your question really was?

The short answer is that you would likely screw up reality altogether. Once you can send information faster than the speed of light, you can also send it back in time. If you can send information back in time, you can create inconsistent histories, that is, various different pasts – a problem commonly known as the “grandfather paradox”: What happens if you travel back in time and kill your grandpa? Will Marty McFly be born if he doesn’t get his mom to dance with his dad? Exactly this problem.
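The step from faster-than-light to backwards-in-time follows directly from special relativity: for events outside each other’s light cones, the time order depends on the observer. A signal that is merely “fast” for the sender arrives before it was sent for a suitably moving observer. A minimal sketch, in units where c = 1 and with numbers chosen purely for illustration:

    import math

    u = 2.0    # hypothetical signal speed, twice the speed of light
    L = 1.0    # distance the signal travels in the sender's frame

    t_send, t_arrive = 0.0, L / u   # emission and arrival, sender's frame

    v = 0.8    # second observer; any v > 1/u = 0.5 does the trick
    gamma = 1 / math.sqrt(1 - v**2)

    # Lorentz-transform the arrival event (t = L/u, x = L) into the
    # moving observer's frame:
    t_arrive_moving = gamma * (t_arrive - v * L)
    print(f"{t_arrive_moving:.2f}")   # -0.50: arrival BEFORE emission

Chain two such signals together, and you can deliver a message into your own past.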

Multiple histories, or quantum mechanical parallel worlds, are a commonly used scenario in science fiction literature and the movie industry, and they make for some mind-bending fun. For a critical take on how these ideas hold up to real science, I can recommend Xaq Rzetelny’s awesome article “Trek at 50: The quest for a unifying theory of time travel in Star Trek.”

I have no fucking clue what Bill thinks this has to do with harnessing energy from black holes, but I hope this won’t discourage you from signing up for a physics degree.

Dear Bill,

Every day I get emails from people who want to convince me that they have found a way to create a wormhole, harness vacuum energy, travel back in time, or that they know how to connect the conscious mind with the quantum, whatever that means. They often argue with quotes from papers or textbooks which they have badly misunderstood. But they no longer have to do this. Now they can quote Bill The Science Guy who said that quantum entanglement would allow us to harness energy from black holes and to travel back in time.

Maybe you were joking and I didn’t get it. But if it’s a joke, let me tell you that nobody in my newsfeed seems to have found it funny.

Seriously, man, fix that. Sincerely,


Sunday, March 20, 2016

Can we get some sympathy for the nerdy loners please?


“What is she doing?” – “She is sitting there.” – “Ye-es. But what is she do-ing?”

“She isn’t doing anything. She is just. Sitting there.”

“How long do we wait?” – “We wait until the clock is 29 and 10.”

I’m sitting there because I have a problem. The problem isn’t that I have children – children who, despite my best efforts, still can’t tell the time. I am sitting there because I have a problem with a differential equation. Actually, several of them.

You’d think two non-stop nagging kids would have cured me of getting eaten up by equations. But they’ve just made me better at zoning out. Hooked on a suitably interesting problem – it’s inevitably something-with-physics – I am basically incommunicado, sometimes for weeks at a time.

Not like that’s news. 20 years ago I was your stereotypical nerd. The student in an oversized hoodie, with glasses and an always overdue haircut. No matter where I went, I dragged around a huge backpack full of books – just in case I had to look up something about that problem I was on. Nobody was surprised I ended up with a PhD in theoretical physics.

I’ve since swapped the hoodies for mommy-wear that doesn’t make it quite as easy for toddlers to hide food in. I’ve found a way to tie up the mess that is my hair. And I’ve learned to make conversation. Though my attempts at small-talk inevitably seem to start with “I recently read...”

But despite my efforts to hide it, I’m afraid I’m still your stereotypical nerd.

I often get asked if it’s difficult to be one of the few women in a field dominated by men. Yes, sometimes. But leaving aside the inevitable awkwardness that comes with hearing your own voice stand out an octave above everyone else’s, theoretical physics has always been my intellectual home, the go-to place when in need of likeminded people. The stories about the lone genius waiting to be hit by an apple didn’t turn me off – they were my aspiration. I just wanted to be left alone solving problems. And for the biggest part I have been left alone.

There’s a price to pay, of course, for wanting to be left alone. Which is that you might be left alone.

Ágnes Móscy is the exact opposite of your stereotypical nerd. She’s as intelligent as she is artsy, and she moves with ease between communities. She seems infinitely energetic and is a wonderful woman, warm and welcoming, cool and clever. In recent years, Ágnes has become very engaged in the good cause of supporting minorities in physics. She has gone about it as you would expect of a scientist: with numbers and facts, with data and references, giving lectures and educating her colleagues. I admire her initiative.

I had to say some nice things about Ágnes first because next comes some criticism.

The other day she wrote a piece for Huffpo taking aim at the supposed myth of the lonely genius.

I will agree that “genius” is a word as useless as it is overused. Nobody really knows what it means, and it has an unfortunate ring of “genetics” to it. That’s unfortunate because a recent study has found evidence that women shy away from fields that are believed to require inborn talent rather than hard work. Then there’s another study which demonstrated that students are more likely to associate “genius” with male professors than with female and black professors. And Ágnes is right of course when she says that most of us in physics aren’t geniuses – whatever exactly you think the word means – so why use a label that is neither descriptive nor helpful?

I’d sign a petition to trashcan “genius,” together with “next Einstein.”

Then Ágnes makes a case that the loner in physics is as much a myth as the genius. You won’t be surprised to hear I disagree.

True, scientists always build on others’ work, and once they’ve built something, they must tell their colleagues about it. Communication isn’t only a necessary part of research, it’s also the best way to make sure you’re not fooling yourself. That talking to other people about your problems can be useful is a lesson I had to learn, but even I eventually learned it.

Still, there is a stage of research that remains lonely: the phase in which you don’t really know just what you know, when you have an idea but can’t put it into words, a problem so diffuse you’re not sure what the problem is.

Fields Medalist Michael Atiyah (who I now don’t dare to call a genius because you might think I want to discourage girls from studying math) put it this way in a recent interview with Siobhan Roberts for Quanta Magazine:
“Dreams happen during the daytime, they happen at night. You can call them a vision or intuition. But basically they’re a state of mind—without words, pictures, formulas or statements. It’s “pre” all that. It’s pre-Plato. It’s a very primordial feeling. And again, if you try to grasp it, it always dies. So when you wake up in the morning, some vague residue lingers, the ghost of an idea. You try to remember what it was and you only get half of it right, and maybe that’s the best you can do.”
Tell me how that’s not lonely work.

As I am raising two girls, I am all too aware of occupational stereotypes. Like many academics, my husband and I are fighting the pink/blue divide, the gender segregation that starts as early as kindergarten. I don’t want my daughters to think following their interests isn’t socially appropriate because some professions aren’t for women.

I am therefore all in favor of initiatives targeting girls with science toys and educational games, because of course I hope that’s where my kids’ interests are. Also, I get to play with the stuff myself. (I recently bought a microscope that attaches to the phone because I thought the girls might want to have a close look at some leaves. Instead my husband used it to inspect our gauze curtains and proceeded to use them as a diffraction grating. I’m still waiting to get my microscope and laser pointer back.)

But while I hope my children will go on to become scientists, I first and foremost want them to find out which profession they will be most happy with, whether that means physicist or midwife. And I don’t want young women to get talked into something they aren’t genuinely into, just because the statistics say there should be more women in physics. I don’t want them to be misled by marketing physics as something it is not.

So let’s tell it like it is.

Physics isn’t all teamwork and communication skills, it’s not all collaboration and conferences, it’s not all chalk and talk. That’s some of it, but physics is also a lot of reading and a lot of thinking – and sometimes it’s lonely.

There are stages in your research in which you will hit on a problem that no one can help you with. Because that’s what research is all about – finding and solving problems that no one has solved before. And sometimes you will get stuck, annoyed about yourself, frustrated about your own inability to make sense of these equations. You will feel stupid and you will feel lonely and you will feel like nobody can understand you – because nobody can understand you.

That’s physics too.

Science only stands to benefit from more diversity. Different cultural and social backgrounds, different experiences and different personality traits serve to broaden our perspectives and may lead to new approaches to old problems. But attracting new customers shouldn’t scare away the regulars. We have use for the nerdy loners too.

Having reached almost 40 years of age, I’ve survived long enough to no longer care if people think I’m not normal. Not normal for leaving the party early, not normal for scribbling notes on my arm, not normal for spontaneously bursting into lectures about Lorentz-invariance violating operators.

Luckily, I am married to a man who not only has much understanding for my problems, but also seems to have textbooks on each and every obscure subfield of physics. There’s a reason he’s in the acknowledgements of almost all of my papers.

I hope that you, too, find a niche in life where you fit in. And if you want to be left alone, don’t let anyone tell you there is no place for loners in this world any more.

“29 and 10. That’s 39.”

She can’t yet tell the time. But she’s good at math.

Tuesday, March 15, 2016

Researchers propose experiment to measure the gravitational force of milli-gram objects, reaching almost into the quantum realm.

Neutrinos, gravitational waves, light deflection on the sun – the history of physics is full of phenomena once believed immeasurably small but now yesterday’s news. And on the list of impossible things turned possible, quantum gravity might be next.

Quantum gravitational effects have widely been believed inaccessible to experiment because enormously high energy densities are required to make them comparable in size to other quantum effects. This argument however neglects that quantum effects of gravity can also become relevant for massive objects in quantum superpositions. Once we are able to measure the gravitational pull of an object that is in a superposition of two different places, we can determine whether the gravitational field is in a quantum superposition as well.

This neat idea has two problematic aspects. First, since gravity is very weak, measuring gravitational fields of small objects is extremely difficult. And second, bringing massive objects into quantum states is hard because the states rapidly decohere due to interaction with the environment. However, technological advances on both aspects of the problem have been stunning during the last decade.

In two previous posts we discussed some examples of massive quantum oscillators that can create location superpositions of objects as heavy as a nano-gram. The objects under consideration here are typically small disks made of silicon that are bombarded with laser light while trapped between two mirrors. A nano-gram might not sound like much, but compared to the masses of elementary particles it’s enormous.
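How enormous? A rough comparison, using the proton mass as the reference:

    m_nanogram = 1e-12      # one nano-gram, in kg
    m_proton   = 1.673e-27  # proton mass, in kg

    print(f"{m_nanogram / m_proton:.1e}")   # ~6e14: a nano-gram weighs as
                                            # much as ~600 trillion protons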

Meanwhile, progress on the other aspect of the problem – measuring tiny gravitational fields – has also been remarkable. Currently, the smallest mass whose gravitational pull has been measured is about 90g. But a recent proposal by the group of Markus Aspelmeyer in Vienna lays out a method for measuring the gravitational force of masses as small as a few milli-grams:
    A micromechanical proof-of-principle experiment for measuring the gravitational force of milligram masses
    Jonas Schmöle, Mathias Dragosits, Hans Hepach, Markus Aspelmeyer
    arXiv:1602.07539 [physics.ins-det]

Their proposal relies on a relatively new field of technology that employs micro-mechanical devices, which basically means you make your whole measurement apparatus as small as you can, assembling it almost atom by atom. This trend, itself made possible only by advances in nanotechnology, allows measurements of unprecedented precision.

The smallest force that has so far been measured with nano-devices is around a zepto-Newton (zepto is 10^-21). That’s not yet the world record in tiny-force measurements, which is currently held by a group in Berkeley and lies at about a yocto-Newton (that’s 10^-24). But the huge benefit of the nano-devices is that you can get them close to the probe, whereas the record-holding experiment relies on precisely tracking the motion of a cloud of atoms in a trap. The cloud-tracking not only makes it difficult to scale up the mass without ruining precision; the necessity to trap the particles also means that it’s difficult to get the source of the force-field close to the probe. The use of micro-mechanical devices, in contrast, does not have the same limitations and thus lends itself better to the task of measuring the gravitational force exerted by quantum systems.
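To see what the Vienna group is up against, here is a back-of-the-envelope estimate of the gravitational force between two milli-gram masses. The millimeter separation is my guess at a plausible scale, not a number from their paper:

    G = 6.674e-11      # Newton's constant, m^3 / (kg s^2)

    m_source = 1e-6    # source mass: one milli-gram, in kg
    m_test   = 1e-6    # test mass: one milli-gram, in kg
    r        = 1e-3    # separation: one millimeter, in m (illustrative)

    F = G * m_source * m_test / r**2
    print(f"F = {F:.1e} N")   # ~7e-17 N, some tens of atto-Newton

Tiny, but still several orders of magnitude above the zepto-Newton sensitivity that nano-devices have already demonstrated.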

The Aspelmeyer group sketches their experiment as shown in the figure below

[From arXiv:1602.07539]

The blue circles are the masses whose gravitational interaction one wants to measure, with the source mass to the right and the test mass to the left. The test mass is attached to the micro-mechanical oscillator, whereas the source mass is driven by another oscillator at a frequency close to the system’s resonance frequency. The gravitational pull between the two masses transfers the oscillation of the source mass to the test mass, where it can be picked up by the detector.
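The point of driving the source at resonance is that the oscillator amplifies the minuscule periodic force by its quality factor Q. With illustrative numbers – my own guesses, not parameters from the paper – the resulting displacement is in the picometer range, which optical read-out can resolve:

    import math

    m_test = 1e-6      # test mass, kg (one milli-gram, illustrative)
    f0     = 100.0     # resonance frequency, Hz (illustrative)
    Q      = 1e4       # quality factor of the oscillator (illustrative)
    F      = 7e-17     # driving force, N (from the estimate above)

    omega0 = 2 * math.pi * f0
    # Steady-state amplitude of a harmonic oscillator driven exactly
    # at resonance: x = F * Q / (m * omega0**2).
    x = F * Q / (m_test * omega0**2)
    print(f"displacement amplitude: {x:.1e} m")   # ~2e-12 m, picometers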

In their paper, the experimentalists argue that with this method it should be possible to measure the gravitational force of a source mass no heavier than a few milli-grams. And that’s the conservative estimate. With better detector efficiency, even that limit could be improved on.

There are still six orders of magnitude between a milli-gram and a nano-gram, which is the current maximum mass for which quantum superpositions have been achieved. But in typical estimates for quantum gravitational effects you end up at least 30 orders of magnitude away from measurement precision. Now we are talking about six orders of magnitude – and that in a field with rapid technological developments and no fundamental limit in sight.

What is most remarkable about this development is that the proposal relies on technology that until a few years ago literally nobody in quantum gravity ever talked about. It’s not even that the technological development has been faster than anticipated – this possibility plainly wasn’t on anybody’s radar. There is a Nobel Prize waiting here, for the first experimental measurement of quantum gravitational effects.

And as the Prize comes within reach, competition will speed up the pace. So stay tuned, I am sure we will hear more about this soon.