Friday, February 05, 2016

Much Ado around Nothing: The Cosmological non-Constant Problem

Tl;dr: Researchers put forward a theoretical argument that new physics must appear at energies much lower than commonly thought, barely beyond the reach of the LHC.

The cosmological constant is the worst-ever prediction of quantum field theory, infamously off by 120 orders of magnitude. And as if that wasn’t embarrassing enough, this gives rise to, not one, but three problems: Why is the measured cosmological constant neither 1) huge nor 2) zero, and 3) Why didn’t this occur to us a billion years earlier? With that, you’d think that physicists have their hands full getting zeroes arranged correctly. But Niayesh Afshordi and Elliot Nelson just added to our worries.
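
If you want to see where the famously embarrassing number comes from, it's a quick back-of-the-envelope exercise: compare the naive vacuum energy density with a Planck-scale cutoff to the measured dark-energy density. The sketch below does just that with rounded constants; depending on conventions you get somewhere between 120 and 123 orders of magnitude.

```python
# Rough check of the "120 orders of magnitude" mismatch between the naive
# quantum-field-theory estimate of the vacuum energy density (cut off at the
# Planck scale) and the observed dark-energy density. Numbers are rounded.
import math

hbar = 1.055e-34      # J s
c = 3.0e8             # m/s
G = 6.674e-11         # m^3 kg^-1 s^-2

# Planck-scale estimate: one Planck energy per Planck volume
E_planck = (hbar * c**5 / G) ** 0.5          # ~2e9 J
l_planck = (hbar * G / c**3) ** 0.5          # ~1.6e-35 m
rho_planck = E_planck / l_planck**3          # J/m^3

# Observed dark-energy density: ~69% of the critical density
H0 = 2.2e-18                                      # Hubble rate in 1/s (~67 km/s/Mpc)
rho_crit = 3 * H0**2 * c**2 / (8 * math.pi * G)   # J/m^3
rho_obs = 0.69 * rho_crit

print(f"Planck-scale estimate: {rho_planck:.1e} J/m^3")
print(f"Observed value:        {rho_obs:.1e} J/m^3")
print(f"Mismatch: about 10^{math.log10(rho_planck / rho_obs):.0f}")
```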

In a paper that took third place in this year’s Buchalter Cosmology Prize, Afshordi and Nelson pointed out that the cosmological constant, if it arises from the vacuum energy of matter fields, should be subject to quantum fluctuations. And these fluctuations around the average are still large even if you have managed to get the constant itself to be small.
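
The logic of that point can be illustrated with a toy statistical analogy – this is not the authors' calculation, just a caricature of it: tune a counterterm so that the average of many mode contributions nearly cancels, and the spread around the average doesn't care.

```python
# Toy statistical analogy (not the Afshordi-Nelson calculation): even if a
# counterterm is fine-tuned so that the *average* vacuum energy nearly cancels,
# the *fluctuations* around that average keep their original size.
import numpy as np

rng = np.random.default_rng(1)
modes = rng.normal(loc=1.0, scale=1.0, size=(10000, 1000))  # mode contributions
total = modes.sum(axis=1)                                   # raw "vacuum energy"

counterterm = -total.mean()          # fine-tuned to cancel the average
tuned = total + counterterm

print(f"mean before tuning: {total.mean():10.1f}, spread: {total.std():6.1f}")
print(f"mean after  tuning: {tuned.mean():10.1f}, spread: {tuned.std():6.1f}")
# The mean drops to ~0, but the standard deviation is unchanged.
```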

The cosmological constant, thus, is not actually constant. And since matter curves space-time, the matter fluctuations lead to space-time fluctuations – which can screw with our cosmological models. Afshordi and Nelson dubbed it the “Cosmological non-Constant Problem.”

But there is more to their argument than just adding to our problems, because Afshordi and Nelson quantified what it takes to avoid a conflict with observation. They calculate the effect of stress-energy fluctuations on the space-time background, and then analyze what consequences this would have for the gravitational interaction. They introduce as a free parameter an energy scale up to which the fluctuations persist, and then contrast the resulting corrections with observations, for example the CMB power spectrum or the peculiar velocities of galaxy clusters. From these measurements they derive bounds on the scale at which the fluctuations must cease, and thus, where some new physics must come into play.

They find that the scale beyond which we should already have seen the effect of the vacuum fluctuations is about 35 TeV. If their argument is right, this means something must happen either to matter or to gravity before reaching this energy scale; the option the authors advocate in their paper is that physics becomes strongly coupled below this scale (thus invalidating the extrapolation to larger energies, removing the problem).

Unfortunately, the LHC will not be able to reach all the way up to 35 TeV. But a next larger collider – and we all hope there will be one! – almost certainly would be able to test the full range. As Niayesh put it: “It’s not a problem yet” – but it will be a problem if there is no new physics before getting all the way up to 35 TeV.

I find this an interesting new twist on the cosmological constant problem(s). Something about this argument irks me, but I can’t quite put a finger on it. If I have an insight, you’ll hear from me again. Just generally I would caution you to not take the exact numerical value too seriously because in this kind of estimate there are usually various places where factors of order one might come in.

In summary, if Afshordi and Nelson are right, we’ve been missing something really essential about gravity.

Me, Elsewhere

I'm back from my trip. Here are some things that prevented me from more substantial blogging:
  • I wrote an article for Aeon, "The superfluid Universe," which just appeared. For a somewhat more technical summary, see this earlier blogpost.
  • I did a Q&A with John The-End-of-Science Horgan, which was fun. I disagree with him on many things, but I admire his writing. He is infallibly skeptic and unashamedly opinionated -- qualities I find lacking in much of today's science writing, including, sometimes, my own.
  • I spoke with Davide Castelvecchi about Stephen Hawking's recent attempt to solve the black hole information loss problem, which I previously wrote about here.
  • And I had some words to spare for Zeeya Merali, probably more words than she wanted, on the issue with the arXiv moderation, which we discussed here.
  • Finally, I had the opportunity to give some input for this video on the PhysicsGirl's YouTube channel:



    I previously explained in this blogpost that Hawking radiation is not produced at the black hole horizon, a correction to the commonly used popular science explanation that caught much more attention than I anticipated.

    There are of course still some things in the above video I'd like to complain about. To begin with, anti-particles don't normally have negative energy (no they don't). And the vacuum is the same for two observers who are moving relative to each other with constant velocity - it's the acceleration that makes the difference between the vacua. In any case, I applaud the Physics Girl team for taking on what is admittedly a rather technical and difficult topic. If anyone can come up with a better illustration for Hawking-radiation than Hawking's own idea with the pairs that are being ripped apart (which is far too localized to fit well with the math), please leave a suggestion in the comments.

Thursday, January 28, 2016

Does the arXiv censor submissions?

The arXiv is the physicists’ marketplace of ideas. In high energy physics and adjacent fields, almost all papers are submitted to the arXiv prior to journal submission. Developed by Paul Ginsparg in the early 1990s, this open-access pre-print repository has served the physics community for more than 20 years, and has meanwhile extended to related fields like mathematics, economics, and biology. It fulfills an extremely important function by helping us to exchange ideas quickly and efficiently.

Over the years the originally free signup became more restricted. If you sign up for the arXiv now, you need to be "endorsed" by several people who are already signed up. It also became necessary to screen submissions to keep the quality level up. In hindsight, this isn't surprising: more people means more trouble. And sometimes, of course, things go wrong.

I have heard various stories about arXiv moderation gone wrong; mostly these come from students, and mostly they affect those who work in small research areas or those whose name is Garrett Lisi.

A few days ago, a story appeared online which quickly spread. Nicolas Gisin, an established Professor of Physics who works on quantum cryptography (among other things), relates the story of two of his students who ventured into a territory unfamiliar to him, black hole physics. They wrote a paper that seemed to him probably wrong, but reasonable. It got rejected by the arXiv. The paper later got published by PLA (a respected journal that however does not focus on general relativity). More worrisome still, the students' next paper also got rejected by the arXiv, making it appear as if they were now blacklisted.

Now the paper that caused the offense is, haha, not on the arXiv, but I tracked it down. So let me just say that I think it's indeed wrong and it shouldn't have gotten published in a journal. They are basically trying to include the backreaction of the outgoing Hawking-radiation on the black hole. It's a thorny problem (the very problem this blog was named after) and the treatment in the paper doesn't make sense.

Hawking radiation is not produced at the black hole horizon. No, it is not. Tracking back the flux from infinity to the horizon is therefore not correct. Besides this, the equation for the mass-loss that they use is a late-time approximation in a collapse situation. One can't use this approximation for a metric without collapse, and it certainly shouldn't be used down to the Planck mass. If you have a collapse-scenario, to get the backreaction right you would have to calculate the emission rate prior to horizon formation, time-dependently, and integrate over this.
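
For reference, the late-time approximation in question is the familiar mass-loss law dM/dt proportional to -1/M². Here is a minimal sketch with the often-quoted prefactor for a Schwarzschild black hole; the exact coefficient depends on which particle species and greybody factors one includes.

```python
# Late-time Hawking mass-loss estimate for a Schwarzschild black hole:
#   dM/dt = - hbar c^4 / (15360 pi G^2 M^2)
# (prefactor varies with the species and greybody factors included).
# Integrating gives an evaporation time t ~ 5120 pi G^2 M^3 / (hbar c^4).
import math

hbar, c, G = 1.055e-34, 3.0e8, 6.674e-11
M_sun = 2.0e30  # kg

def evaporation_time(M):
    """Late-time estimate of the evaporation time in seconds."""
    return 5120 * math.pi * G**2 * M**3 / (hbar * c**4)

t = evaporation_time(M_sun)
print(f"Solar-mass black hole: ~{t / 3.15e7:.1e} years")   # ~1e67 years
```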

Ok, so the paper is wrong. But should it have been rejected by the arXiv? I don't think so. The arXiv moderation can't and shouldn't replace peer review, it should just be a basic quality check, and the paper looks like a reasonable research project.

I asked a colleague who I know works as an arXiv moderator for comment. (S)he wants to stay anonymous but offers the following explanation:


I had not heard of the complaints/blog article, thanks for passing that information on...  
 The version of the article I saw was extremely naive and was very confused regarding coordinates and horizons in GR... I thought it was not “referee-able quality” — at least not in any competently run GR journal... (The hep-th moderator independently raised concerns...)  
 While it is now published at Physics Letters A, it is perhaps worth noting that the editorial board of Physics Letters A does *not* include anyone specializing in GR.
(S)he is correct of course. We haven't seen the paper that was originally submitted. It was very likely in considerably worse shape than the published version. Indeed, Gisin writes in his post that the paper was significantly revised during peer review. Taking this into account, the decision seems understandable to me.

The main problem I have with this episode is not that a paper got rejected which maybe shouldn't have been rejected -- because shit happens. Humans make mistakes, and let us be clear that the arXiv, underfunded as it is, relies on volunteers for the moderation. No, the main problem I have is the lack of transparency.

The arXiv is an essential resource for the physics community. We all put trust in a group of mostly anonymous moderators who do a rather thankless and yet vital job. I don't think the origin of the problem is with these people. I am sure they do the best they can. No, I think the origin of the problem is the lack of financial resources, which makes it hard to employ administrative staff to oversee the operations. You get what you pay for.

I hope that this episode will be a wake-up call to the community to put their financial support behind the arXiv, and to the arXiv to use this support to put into place a more transparent and better organized moderation procedure.

Note added: It was mentioned to me that the problem with the paper might be more elementary in that they're using wrong coordinates to begin with - it hadn't even occurred to me to check this. To tell you the truth, I am not really interested in figuring out exactly why the paper is wrong, it's beside the point. I just hope that whoever reviewed the paper for PLA now goes and sits in the corner for an hour with a paper bag over their head.

Wednesday, January 27, 2016

Hello from Maui

Greetings from the west end of my trip, which brought me out to Maui, visiting Garrett at the Pacific Science Institute, PSI. Since launching it roughly a year ago, Garrett and his girlfriend/partner Crystal have hosted about 60 traveling scientists, "from all areas except chemistry" I was told.

I got bitten by mosquitoes and picked at by a set of adorable chickens (named after the six quarks), but managed to convince everybody that I really didn't feel like swimming, or diving, or jumping off things at great height. I know I'm dull. I did watch some sea turtles though and I also got a new T-shirt with the PSI-logo, which you can admire in the photo to the right (taken in front of a painting by Crystal).

I'm not an island-person, don't like mountains, and I can't stand humidity, so for me it's somewhat of a mystery what people think is so great about Hawaii. But leaving aside my preference for German forests, it's as pleasant a place as can be.

You won't be surprised to hear that Garrett is still working on his E8 unification and says things are progressing well, if slowly. Aloha.






Monday, January 25, 2016

Is space-time a prism?

Tl;dr: A new paper demonstrates that quantum gravity can split light into spectral colors. Gravitational rainbows are almost certainly undetectable on cosmological scales, but the idea might become useful for Earth-based experiments.

Einstein’s theory of general relativity still stands apart from the other known forces by its refusal to be quantized. Progress in finding a theory of quantum gravity has stalled because of the complete lack of data – a challenging situation that physicists have never encountered before.

The main problem in measuring quantum gravitational effects is the weakness of gravity. Estimates show that testing its quantum effects would require detectors the size of planet Jupiter or particle accelerators the size of the Milky Way. Thus, experiments to guide theory development are unfeasible. Or so we’ve been told.

But gravity is not a weak force – its strength depends on the masses between which it acts. (Indeed, that is the very reason gravity is so difficult to quantize.) Saying that gravity is weak makes sense only when referring to a specific mass, like that of the proton for example. We can then compare the strength of gravity to the strength of the other interactions, demonstrating its relative weakness – a puzzling fact known as the “hierarchy problem.” But that the strength of gravity depends on the particles’ masses also means that quantum gravitational effects are not generally weak: their magnitude too depends on the gravitating masses.
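
To put a number on that relative weakness: the standard comparison is the ratio of the gravitational to the electrostatic force between two protons, from which the distance drops out. Rounded textbook constants below.

```python
# How weak is gravity "for a specific mass"? Compare the gravitational and
# electrostatic forces between two protons; the separation cancels in the ratio.
G = 6.674e-11        # m^3 kg^-1 s^-2
k_e = 8.99e9         # Coulomb constant, N m^2 / C^2
m_p = 1.673e-27      # proton mass, kg
e = 1.602e-19        # elementary charge, C

ratio = (G * m_p**2) / (k_e * e**2)
print(f"F_gravity / F_electric for two protons: {ratio:.1e}")   # roughly 10^-36
```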

To be more precise, one should thus say that quantum gravity is hard to detect because an object massive enough to have large gravitational effects has negligible quantum properties, and so doesn’t cause quantum behavior of space-time. General relativity, however, acts in two ways: Matter affects space-time and space-time affects matter. And so the reverse is also true: If the dynamical background of general relativity for some reason has an intrinsic quantum uncertainty, then this will affect the matter moving in this space-time – in a potentially observable way.

Rainbow gravity, proposed in 2003 by Magueijo and Smolin, is based on this idea, that the quantum properties of space-time could noticeably affect particles propagating in it. In rainbow gravity, space-time itself depends on the particle’s energy. In particular, light of different energies travels with different speeds, splitting light into different colors, hence the name. It’s a nice idea, but unfortunately it is an internally inconsistent theory and so far nobody has managed to make much sense of it.

First, let us note that already in general relativity the background depends of course on the energy of the particle, and this certainly should carry over into quantum gravity. More precisely though, space-time depends not on the energy but on the energy-density of matter in it. So this cannot give rise to rainbow gravity. Even worse, because of this, general relativity is in outright conflict with rainbow gravity.

Second, an energy-dependent metric can be given meaning in the framework of asymptotically safe gravity, but this is not what rainbow gravity is about either. Asymptotically safe gravity is an approach to quantum gravity in which space-time depends on the energy by which it is probed. The energy in rainbow gravity is however not that by which space-time is probed (which is observer-independent), but is supposedly the energy of a single particle (which is observer-dependent).

Third, the whole idea crumbles to dust once you start wondering how the particles in rainbow gravity are supposed to interact. You need space-time to define “where” and “when”. If each particle has its own notion of where and when, the requirement that an interaction be local rather than “spooky” action at a distance can no longer be fulfilled.

In a paper which recently appeared in PLB (arXiv version here), three researchers from the University of Warsaw have made a new attempt to give meaning to rainbow gravity. While it doesn’t really solve all problems, it makes considerably more sense than the previous attempts.

In their paper, the authors look at small (scalar) perturbations over a cosmological background, which are modes with different energies. They assume that there is some theory of quantum gravity which dictates what the background does, but do not specify this theory. They then ask what happens to the perturbations which travel in the background and derive equations for each mode of the perturbation. Finally, they demonstrate that these equations can be reformulated so that, effectively, the perturbation travels in a space-time which depends on the perturbation’s own energy – it is a variant of rainbow gravity.
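
To give a flavor of what "equations for each mode" means in practice, here is a minimal sketch of the standard machinery the paper builds on: a single Fourier mode of a massless scalar perturbation on a fixed de Sitter background, integrated numerically with Bunch-Davies-like initial data. This is not the authors' calculation – their point is that the background gets additionally averaged over quantum states.

```python
# Minimal mode-equation sketch (not the paper's full analysis): a single mode
# u_k of a massless scalar on a fixed de Sitter background in conformal time,
#     u_k'' + (k^2 - a''/a) u_k = 0,  with a(eta) = -1/(H eta), so a''/a = 2/eta^2.
import numpy as np
from scipy.integrate import solve_ivp

k = 1.0                               # comoving wavenumber (arbitrary units)
eta_start, eta_end = -200.0, -0.01    # conformal time range

def rhs(eta, y):
    # y = [Re u, Re u', Im u, Im u']
    u = y[0] + 1j * y[2]
    du = y[1] + 1j * y[3]
    ddu = -(k**2 - 2.0 / eta**2) * u
    return [du.real, ddu.real, du.imag, ddu.imag]

# Approximate Bunch-Davies data deep inside the horizon: u_k ~ exp(-i k eta)/sqrt(2k)
u0 = np.exp(-1j * k * eta_start) / np.sqrt(2 * k)
du0 = -1j * k * u0
sol = solve_ivp(rhs, (eta_start, eta_end),
                [u0.real, du0.real, u0.imag, du0.imag], rtol=1e-8, atol=1e-10)

u_end = sol.y[0, -1] + 1j * sol.y[2, -1]
exact = np.exp(-1j * k * eta_end) * (1 - 1j / (k * eta_end)) / np.sqrt(2 * k)
# The numerical result agrees with the exact de Sitter mode up to small
# corrections from the asymptotic initial condition.
print(f"|u_k| numeric: {abs(u_end):.2f}, exact de Sitter mode: {abs(exact):.2f}")
```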

The unknown theory of quantum gravity only enters into the equations by an average over the quantum states of the background’s dynamical variables. That is, if the background is classical and in one specific quantum state, gravity doesn’t cause any rainbows, which is the usual state of affairs in general relativity. It is the quantum uncertainty of the space-time background that gives rise to rainbows.

This type of effective metric makes somewhat more sense to me than the previously considered scenarios. In this new approach, it is not the perturbation itself that causes the quantum effect (which would be highly non-local and extremely suspicious). Instead the particle merely acts as a probe for the background (a quite common approximation that neglects backreaction).

Unfortunately, one must expect the quantum uncertainty of space-time to be extremely tiny and undetectable. Quantum gravitational effects were strong only in the very early universe, and they have long since decohered. Of course we don’t really know this with certainty, so looking for such effects is generally a good idea. But I don’t think it’s likely we’d find something here.

The situation looks somewhat better though for a case not discussed in the paper, which is a quantum uncertainty in space-time caused by massive particles with a large position uncertainty. I discussed this possibility in this earlier post, and it might be that the effect considered in the new paper can serve as a way to probe it. This would, however, require knowing what happens not to background perturbations but to other particles traveling in this background, which calls for a different approach than the one used in this paper.

I am not really satisfied with this version of rainbow gravity because I still don’t understand how particles would know where to interact, or which effective background to travel in if several of them are superposed, which seems somewhat of a shortcoming for a quantum theory. But this version isn’t quite as nonsensical as the previous one, so let me say I am cautiously hopeful that this idea might one day become useful.

In summary, the new paper demonstrates that gravitational rainbows might appear in quantum gravity under quite general circumstances. It might be an interesting contribution that, with further work, could become useful in the search for experimental evidence of quantum gravity.

Note added: The paper deals with an FRW background and thus trivially violates Lorentz-invariance.

Thursday, January 21, 2016

Messengers from the Dark Age

Astrophysicists dream of putting radio telescopes on the far side of the moon. [Image Credits: 21stcentech.com]
An upcoming generation of radio telescopes will soon let us look back into the dark age of the universe. The new observations can test dark matter models, inflation, and maybe even string theory.

The universe might have started with a bang, but once the echoes faded it took quite a while until the symphony began. Between the creation of the cosmic microwave background (CMB) and the formation of the first stars, 100 million years passed in darkness. This “dark age” has so far been entirely hidden from observation, but this situation is soon to change.

The dark age may hold the answers to many pressing questions. During this period, most of the universe’s mass was in the form of light atoms – primarily hydrogen – and dark matter. The atoms slowly clumped under the influence of gravitational forces, until they finally ignited the first stars. Before the first stars, astrophysical processes were few, and so the distribution of hydrogen during the dark age carries very clean information about structure formation. Details about both the behavior of dark matter and the size of structures are encoded in these hydrogen clouds. But how can we see into the darkness?

Luckily the dark age was not entirely dark, just very, very dim. Back then, the hydrogen atoms that filled the universe frequently bumped into each other, which could flip an electron’s spin. If a collision flips the spin, the electron’s energy changes by a tiny amount, because the energy depends on whether the electron’s spin is aligned with the spin of the nucleus or points in the opposite direction. This energy difference is known as “hyperfine splitting.” Flipping the hydrogen electron’s spin therefore leads to the emission of a very low energy photon with a wavelength of 21cm. If we can trace the emissions of these 21cm photons, we can trace the distribution of hydrogen.
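
The numbers are quick to check: the hyperfine splitting of ground-state hydrogen is about 5.9 micro-electronvolt, and converting that energy into a wavelength gives the famous 21 cm, or about 1420 MHz.

```python
# The 21 cm line from the numbers: the hyperfine splitting of ground-state
# hydrogen is ~5.9 micro-eV, and lambda = h c / E gives ~21 cm.
h = 6.626e-34                    # Planck constant, J s
c = 3.0e8                        # m/s
E_split = 5.87e-6 * 1.602e-19    # hyperfine splitting in joules

wavelength = h * c / E_split
frequency = E_split / h
print(f"wavelength: {wavelength*100:.1f} cm, frequency: {frequency/1e6:.0f} MHz")
# ~21.1 cm, ~1420 MHz
```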


But 21 cm is the wavelength of the photons at the time of emission, which was 13 billion years ago. Since then the universe has expanded significantly and stretched the photons’ wavelength with it. How much the wavelength has been stretched depends on whether it was emitted early or late during the dark ages. The early photons have meanwhile been stretched by a factor of about 1000, resulting in wavelengths of a few hundred meters. Photons emitted towards the end of the dark age have not been stretched quite as much – today they have wavelengths of a few meters.

The most exciting aspect of 21cm astronomy is that it does not just give us a snapshot at one particular moment – like the CMB – but allows us to map different times during the dark age. By measuring the red-shifted photons at different wavelengths we can scan through the whole period. This would give us many new insights about the history of our universe.
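
The mapping between observed wavelength and emission time is just the redshift factor. A small sketch, taking the dark age to span roughly z ~ 1000 down to z ~ 30 – rough numbers, for illustration only:

```python
# Observed wavelength of the 21 cm line as a function of the emission redshift:
# lambda_obs = (1 + z) * 21 cm. The dark age spans roughly z ~ 1000 (after the
# CMB) down to z ~ 30 (first stars); the values here are only illustrative.
lambda_rest = 0.211  # m

for z in [1000, 300, 100, 30, 15]:
    lambda_obs = (1 + z) * lambda_rest
    freq_MHz = 3.0e8 / lambda_obs / 1e6
    print(f"z = {z:5d}: lambda_obs ~ {lambda_obs:6.1f} m, freq ~ {freq_MHz:6.1f} MHz")
# Early dark-age photons arrive stretched to a few hundred meters, late ones
# to a few meters, which is why different wavelengths probe different times.
```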

To begin with, it is not well understood how the dark age ends and the first stars are formed. The dark age fades away in a phase of reionization in which the hydrogen is stripped of its electrons again. This reionization is believed to be caused by the first stars’ radiation, but exactly what happens we don’t know. Since the ionized hydrogen no longer emits the hyperfine line, 21cm astronomy could tell us how the ionized regions grow, teaching us much about the early stellar objects and the behavior of the intergalactic medium.

21 cm astronomy can also help solve the riddle of dark matter. If dark matter self-annihilates, this affects the distribution of neutral hydrogen, which can be used to constrain or rule out dark matter models.

Inflation models too can be probed by this method: The distribution of structures that 21cm astronomy can map carries an imprint of the quantum fluctuations that caused them. These fluctuations in turn depend on the type of inflaton field and the field’s potential. Thus, the correlations in the structures which were present already during the dark age let us narrow down what type of inflation has taken place.

Maybe most excitingly, the dark ages might give us a peek at cosmic strings, one-dimensional objects with a high density and high gravitational pull. In many models of string phenomenology, cosmic strings can be produced at the end of inflation, before the dark age begins. By distorting the hydrogen clouds, the cosmic strings would leave a characteristic signal in the 21cm emission spectrum.

CSL-1. A candidate signal for a cosmic string, later identified as two galaxies. Read more about cosmic strings here.
But measuring photons of this wavelength is not easy. The Milky Way too has sources that emit in this regime, which gives rise to an unavoidable galactic foreground. In addition, the Earth’s atmosphere distorts the signal, and some radio broadcasts too can interfere with the measurement. Nevertheless, astronomers have risen to the challenge and the first telescopes hunting for the 21cm signal are now in operation.

The Low-Frequency Array (LOFAR) went online in late 2012. Its main telescope is located in the Netherlands, but it combines data from 24 other telescopes in Europe. It reaches wavelengths up to 30m. The Murchison Widefield Array (MWA) in Australia, which is sensitive to wavelengths of a few meters, started taking data in 2013. And in 2025, the Square Kilometer Array (SKA) is scheduled to be completed. This joint project between Australia and South Africa will be the largest radio telescope yet.

Still, the astronomers’ dream would be to get rid of the distortion caused by Earth’s atmosphere. Their most ambitious plan is to put an array of telescopes on the far side of the moon. But this idea is, unfortunately, still far-fetched – not to mention underfunded.

Only a few decades ago, cosmology was a discipline so starved of data that it was closer to philosophy than to science. Today it is a research area based on high precision measurements. The progress in technology and in our understanding of the universe’s history has been nothing short of stunning, but we have only just begun. The dark age is next.


[This post previously appeared on Starts With a Bang.]

Saturday, January 16, 2016

Away Note

I am traveling the next three weeks and things will go very slowly on this blog.

In case you missed it, you might enjoy two pieces I recently wrote for NOVA: Are Singularities Real? and Are Space and Time discrete or continuous? There should be a third one appearing later this month (which will also be the last because it seems they're scrapping this column). And then I wrote an article for Quanta Magazine, String Theory Meets Loop Quantum Gravity, to which you find some background material here and here. Finally you might find this article in The Independent amusing: Stephen Hawking publishes paper on black holes that could get him 'a Nobel prize after all', in which I'm quoted as the voice of reason.

Wednesday, January 13, 2016

Book review: “From the Great Wall to the Great Collider” by Nadis and Yau

From the Great Wall to the Great Collider: China and the Quest to Uncover the Inner Workings of the Universe
By Steve Nadis and Shing-Tung Yau
International Press of Boston (October 23, 2015)

Did you know that particle physicists like the Chinese government’s interest in building the next larger particle collider? If not, then this neat little book about the current plans for the Great Collider, aka “Nimatron,” is just for you.

Nadis and Yau begin their book by laying out the need for a larger collider, followed by a brief history of accelerator physics that emphasizes the contribution of Chinese researchers. Then come two chapters about the hunt for the Higgs boson, the LHC’s success, and a brief survey of beyond the standard model physics that focuses on supersymmetry and extra dimensions. The reader then learns about other large-scale physics experiments that China has run or is running, and about the currently discussed options for the next larger particle accelerator. Nadis and Yau don’t waste time discussing details of all accelerators that are presently considered, but get quickly to the point of laying out the benefits of a circular 50 or even 100 TeV collider in China.

And the benefits are manifold. The favored location for the gigantic project is Qinghuangdao, which is “an attractive destination that might appeal to foreign scientists” because, among other things, “its many beaches [are] ranked among the country’s finest,” “the countryside is home to some of China’s leading vineyards” and even the air quality is “quite good” at least “compared to Beijing.” Book me in.

The authors make a good case that both the world and China only have to gain from the giant collider project. China because “one result would likely be an enhancement of national prestige, with the country becoming a leader in the field of high-energy physics and perhaps eventually becoming the world center for such research. Improved international relations may be the most important consequence of all.” And the rest of the world benefits because, besides saving thousands of particle physicists from boredom, “civil engineering costs are low in the country – much cheaper than those in many Western countries.”

The book is skillfully written with scientific explanations that are detailed, yet not overly technical, and much space is given to researchers in the field. Nadis and Yau quote whoever might help getting their message across: David Gross, Lisa Randall, Frank Wilczek, Don Lincoln, Don Hopper, Joseph Lykken, Nima Arkani-Hamed, Nathan Seiberg, Martinus Veltman, Steven Weinberg, Gordon Kane, John Ellis – everybody gets a say.

My favorite quote is maybe that by Henry Tye, who argues that the project is a good investment because “the worldwide impact of a collider is much bigger than if the money were put into some other area of science,” since “even if China were to spend more than the United States in some field of science and engineering other than high-energy physics, US professors would still do their research in the US.” This quote sums up the authors’ investigation of whether such a major financial commitment might have a larger payoff were it invested in another research area.

Don’t get me wrong there, if the Chinese want to build a collider, I think that’s totally great and an awesome contribution to knowledge discovery and the good of humanity, the forgiveness of sins, the resurrection of the body, and the life everlasting, amen. But there’s a real discussion here to be had whether building the next bigger ring-thing is where the money should flow, or whether putting a radio telescope on the moon or a gravitational wave interferometer in space wouldn’t bring more bang for the Yuan. Unfortunately, you’re not going to find that discussion in Nadis and Yau’s book.

Aside: The print has smear-stripes. Yes, that puts me in a bad mood.

In summary, this book will come in very handy next time you have to convince a Chinese government official to spend a lot of money on bringing protons up to speed.

[Disclaimer: Free review copy.]

Sunday, January 10, 2016

Free will is dead, let’s bury it.

I wish people would stop insisting they have free will. It’s terribly annoying. Insisting that free will exists is bad science, like insisting that horoscopes tell you something about the future – it’s not compatible with our knowledge about nature.

According to our best present understanding of the fundamental laws of nature, everything that happens in our universe is due to only four different forces: gravity, electromagnetism, and the strong and weak nuclear force. These forces have been extremely well studied, and they don’t leave any room for free will.

There are only two types of fundamental laws that appear in contemporary theories. One type is deterministic, which means that the past entirely predicts the future. There is no free will in such a fundamental law because there is no freedom. The other type of law we know appears in quantum mechanics and has an indeterministic component which is random. This randomness cannot be influenced by anything, and in particular it cannot be influenced by you, whatever you think “you” are. There is no free will in such a fundamental law because there is no “will” – there is just some randomness sprinkled over the determinism.
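
If you want to see how little room these two options leave, here is a toy illustration – obviously not a model of a brain, just the two kinds of update rules side by side:

```python
# Toy versions of the only two kinds of laws we know: a deterministic rule,
# where the past fixes the future completely, and the same rule with a purely
# random kick added. In neither is there a variable through which a "will"
# could enter. (Illustration only.)
import random

def deterministic_step(x):
    return 0.5 * x + 1.0                               # future fixed by the past

def stochastic_step(x):
    return 0.5 * x + 1.0 + random.gauss(0.0, 0.1)      # plus uninfluencable randomness

x_det, x_sto = 0.0, 0.0
for _ in range(20):
    x_det, x_sto = deterministic_step(x_det), stochastic_step(x_sto)

print(f"deterministic endpoint: {x_det:.3f} (always the same)")
print(f"stochastic endpoint:    {x_sto:.3f} (varies run to run, but not by choice)")
```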

In neither case do you have free will in any meaningful way.

These are the only two options, and all other elaborations on the matter are just verbose distractions. It doesn’t matter if you start talking about chaos (which is deterministic), top-down causation (which doesn’t exist), or insist that we don’t know how consciousness really works (true but irrelevant). It doesn’t change a thing about this very basic observation: there isn’t any known law of nature that lets you meaningfully speak of “free will”.

If you don’t want to believe that, I challenge you to write down any equation for any system that allows for something one could reasonably call free will. You will almost certainly fail. The only thing you can really do to hold on to free will is to wave hands, yell “magic”, and insist that there are systems which are exempt from the laws of nature. And these systems somehow have something to do with human brains.

The only known example for a law that is neither deterministic nor random comes from myself. But it’s a baroque construct meant as a proof of principle, not a realistic model that I would know how to combine with the four fundamental interactions. As an aside: The paper was rejected by several journals. Not because anyone found anything wrong with it. No, the philosophy journals complained that it was too much physics, and the physics journals complained that it was too much philosophy. And you wonder why there isn’t much interaction between the two fields.

After plain denial, the somewhat more enlightened way to insist on free will is to redefine what it means. You might settle for example on speaking of free will as long as your actions cannot be predicted by anybody, possibly not even by yourself. Clearly, it is presently impossible to make such a prediction. Whether it will stay impossible remains to be seen, but right now it’s a reasonable hope. If that’s what you want to call free will, go ahead, but better not ask yourself what determined your actions.

A popular justification for this type of free will is insisting that on comparably large scales, like those of the molecules responsible for the chemical interactions in your brain, there are smaller components which may still have an influence. If you don’t keep track of these smaller components, the behavior of the larger components might not be predictable. You can then say “free will is emergent” because of “higher level indeterminism”. It’s like saying if I give you a robot and I don’t tell you what’s in the robot, then you can’t predict what the robot will do, consequently it must have free will. I haven’t managed to bring up sufficient amounts of intellectual dishonesty to buy this argument.

But really you don’t have to bother with the details of these arguments, you just have to keep in mind that “indeterminism” doesn’t mean “free will”. Indeterminism just means there’s some element of randomness, either because that’s fundamental or because you have willfully ignored information on short distances. But there is still either no “freedom” or no “will”. Just try it. Try to write down one equation that does it. Just try it.

I have written about this a few times before and according to the statistics these are some of the most-read pieces on my blog. Following these posts, I have also received a lot of emails from readers who seem seriously troubled by the claim that our best present knowledge about the laws of nature doesn’t allow for the existence of free will. To ease your existential worries, let me therefore spell out clearly what this means and doesn’t mean.

It doesn’t mean that you are not making decisions or are not making choices. Free will or not, you have to do the thinking to arrive at a conclusion, the answer to which you previously didn’t know. Absence of free will doesn’t mean either that you are somehow forced to do something you didn’t want to do. There isn’t anything external imposing on you. You are whatever makes the decisions. Besides this, if you don’t have free will you’ve never had it, and if this hasn’t bothered you before, why start worrying now?

This conclusion that free will doesn’t exist is so obvious that I can’t help but wonder why it isn’t widely accepted. The reason, I am afraid, is not scientific but political. Denying free will is considered politically incorrect because of a wide-spread myth that free will skepticism erodes the foundation of human civilization.

For example, a 2014 article in Scientific American addressed the question “What Happens To A Society That Does not Believe in Free Will?” The piece is written by Azim F. Shariff, a Professor of Psychology, and Kathleen D. Vohs, a Professor of Excellence in Marketing (whatever that might mean).

In their essay, the authors argue that free will skepticism is dangerous: “[W]e see signs that a lack of belief in free will may end up tearing social organization apart,” they write. “[S]kepticism about free will erodes ethical behavior,” and “diminished belief in free will also seems to release urges to harm others.” And if that wasn’t scary enough already, they conclude that only the “belief in free will restrains people from engaging in the kind of wrongdoing that could unravel an ordered society.”

To begin with, I find it highly problematic to suggest that the answers to some scientific questions should be taboo because they might be upsetting. They don’t explicitly say this, but the message the article sends is pretty clear: If you do as much as suggest that free will doesn’t exist you are encouraging people to harm others. So please read on before you grab the axe.

The conclusion that the authors draw is highly flawed. These psychology studies always work the same way. The study participants are engaged in some activity in which they receive information, either verbally or in writing, that free will doesn’t exist or is at least limited. After this, their likelihood of “wrongdoing” is tested and compared to a control group. But the information the participants receive is highly misleading. It does not prime them to think they don’t have free will, it instead primes them to think that they are not responsible for their actions. Which is an entirely different thing.

Even if you don’t have free will, you are of course responsible for your actions because “you” – that mass of neurons – are making, possibly bad, decisions. If the outcome of your thinking is socially undesirable because it puts other people at risk, those other people will try to prevent you from more wrongdoing. They will either try to fix you or lock you up. In other words, you will be held responsible. Nothing of this has anything to do with free will. It’s merely a matter of finding a solution to a problem.

The only thing I conclude from these studies is that neither the scientists who conducted the research nor the study participants spent much time thinking about what the absence of free will really means. Yes, I’ve spent far too much time thinking about this.

The reason I keep coming back to the free will issue is not that I want to collapse civilization, but that I am afraid the politically correct belief in free will hinders progress on the foundations of physics. Free will of the experimentalist is a relevant ingredient in the interpretation of quantum mechanics. Without free will, Bell’s theorem doesn’t hold, and all we have learned from it goes out the window.

This option of giving up free will in quantum mechanics goes under the name “superdeterminism” and is exceedingly unpopular. There seem to be but three people on the planet who work on this, ‘t Hooft, me, and a third person of whom I only learned from George Musser’s recent book (and whose name I’ve since forgotten). Chances are the three of us wouldn’t even agree on what we mean. It is highly probable we are missing something really important here, something that could very well be the basis of future technologies.

Who cares, you might think; buying into the collapse of the wave-function seems a small price to pay compared to the collapse of civilization. On that matter though, I side with Socrates: “The unexamined life is not worth living.”

Thursday, January 07, 2016

More information emerges about new proposal to solve black hole information loss problem

Soft hair. Redshifted.

In August last year, Stephen Hawking announced he had been working with Malcolm Perry and Andrew Strominger on the black hole information loss problem, and that they were closing in on a solution. But little was explained other than that this solution rests on a symmetry group by the name of supertranslations.

Yesterday then, Hawking, Perry, and Strominger posted a new paper on the arXiv that fills in a little more detail:
    Soft Hair on Black Holes
    Stephen W. Hawking, Malcolm J. Perry, Andrew Strominger
    arXiv:1601.00921
I haven’t had much time to think about this, but I didn’t want to leave you hanging, so here is a brief summary.

First of all, the paper seems only a first step in a longer argument. Several relevant questions are not addressed and I assume further work will follow. As the authors write: “Details will appear elsewhere.”

The present paper does not study information retrieval in general. It instead focuses on a particular type of information, the one contained in electrically charged particles. The benefit in doing this is that the quantum theory of electric fields is well understood.

Importantly, they are looking at black holes in asymptotically flat (Minkowski) space, not in asymptotic Anti-de-Sitter (AdS) space. This is relevant because string theorists believe that the black hole information loss problem doesn’t exist in asymptotic AdS space. They don’t know however how to extend this argument to asymptotically flat space or space with a positive cosmological constant. To the best of present knowledge we don’t live in AdS space, so understanding the case with a positive cosmological constant is necessary to describe what happens in the universe we actually inhabit.

In the usual treatment, a black hole counts only the net electric charge of particles as they fall in. The total charge is one of the three classical black hole “hairs,” next to mass and angular momentum. But all other details about the charges (e.g., in which chunks they came in) are lost: there is no way to store anything in or on an object that has no features, has no “hairs”.

In the new paper the authors argue that the entire information about the infalling charges is stored on the horizon in the form of 'soft photons,' which are photons of zero energy. These photons are the “hair” which previously was believed to be absent.

Since these photons can carry information but have zero energy, the authors conclude that the vacuum is degenerate. A 'degenerate' vacuum is one in which several distinct quantum states share the same energy. This means there are different vacuum states which can surround the black hole, and so the vacuum can hold and release information.

It is normally assumed that the vacuum state is unique. If it is not, this allows one to have information in the outgoing radiation (which is the ingoing vacuum). A vacuum degeneracy is thus a loophole in the argument originally made by Hawking according to which information must get lost.

What the ‘soft photons’ are isn't further explained in the paper; they are simply identified with the action of certain operators and supposedly are Goldstone bosons of a spontaneously broken symmetry. Or rather of an infinite number of symmetries that, basically, belong to the conserved charges of something akin to multipole moments. It sounds plausible, but the interpretation eludes me. I haven’t yet read the relevant references.

I think the argument goes basically like this: We can expand the electric field in terms of all these (infinitely many) higher moments and show that each of them is associated with a conserved charge. Since the charge is conserved, the black hole can’t destroy it. Consequently, it must be maintained somehow. In the presence of a horizon, future infinity is not a Cauchy surface, so we add the horizon as boundary. And on this additional boundary we put the information that we know can’t get lost, which is what the soft photons are good for.

The new paper adds to Hawking’s previous short note by providing an argument for why the amount of information that can be stored this way by the black hole is not infinite, but instead bounded by the Bekenstein-Hawking entropy (i.e., proportional to the surface area). This is an important step to assure this idea is compatible with everything else we know about black holes. Their argument however is operational and not conceptual. It is based on saying, not that the excess degrees of freedom don't exist, but that they cannot be used by infalling matter to store information. Note that, if this argument is correct, the Bekenstein-Hawking entropy does not count the microstates of the black hole, it instead sets an upper limit to the possible number of microstates.

The authors don’t explain just how the information becomes physically encoded in the outgoing radiation, aside from writing down an operator. Neither, for that matter, do they demonstrate that by this method actually all of the information of the initial state can be stored and released. Since they focus on photons, they of course can't do this anyway. But they don’t have an argument for how it can be extended to all degrees of freedom. So, needless to say, I have to remain skeptical that they can live up to the promise.

In particular, I still don’t see that the conserved charges they are referring to actually encode all the information that’s in the field configuration. For all I can tell they only encode the information in the angular directions, not the information in the radial direction. If I were to throw in two concentric shells of matter, I don’t see how the asymptotic expansion could possibly capture the difference between two shells and one shell, as long as the total charge (or mass) is identical. The only way I see to get around this issue is to just postulate that the boundary at infinity does indeed contain all the information. And that, in turn, is only known to work in AdS space. (At least it’s believed to work in that case.)

Also, the argument for why the charges on the horizon are bounded and the limit reproduces the Bekenstein-Hawking entropy irks me. I would have expected the argument for the bound to rely on taking into account that not all configurations that one can encode in the infinite distance will actually go on to form black holes.

Having said that, I think it’s correct that a degeneracy of the vacuum state would solve the black hole information loss problem. It’s such an obvious solution that you have to wonder why nobody thought of this before, except that I thought of it before. In a note from 2012, I showed that a vacuum degeneracy is the conclusion one is forced to draw from the firewall problem. And in a follow-up paper I demonstrated explicitly how this solves the problem. I didn’t have a mechanism though to transfer the information into the outgoing radiation. So now I’m tempted to look at this, despite my best intentions to not touch the topic again...

In summary, I am not at all convinced that the new idea proposed by Hawking, Perry, and Strominger solves the information loss problem. But it seems an interesting avenue that is worth further exploration. And I am sure we will see further exploration...

Monday, January 04, 2016

Finding space-time quanta in the cosmic microwave background: Not so simple

“Final theory” is such a misnomer. The long sought-after unification of Einstein’s General Relativity with quantum mechanics would not be an end, it would be a beginning. A beginning to unravel the nature of space and time, and also a beginning to understand our own beginning – the origin of the universe.

The biggest problem physicists face while trying to find such a theory of quantum gravity is the lack of experimental guidance. The energy necessary to directly test quantum gravity is enormous, and far beyond what we can achieve on Earth. But for cosmologists, the universe is the laboratory. And the universe knows how to reach such high energies. It’s been there, it’s done it.

Our universe was born when quantum gravitational effects were strong. Looking back in time for traces of these effects is therefore one of the most promising, if not the most promising, place to find experimental evidence for quantum gravity. But if it was simple, it would already have been done.

The first issue is that, lacking a theory of quantum gravity, nobody knows how to describe the strong quantum gravitational effects in the early universe. This is the area where phenomenological model building becomes important. But this brings up the next difficulty, which is that the realm of strong quantum gravity lies even before inflation – the early phase in which the universe blew up exponentially fast – and neither today’s nor tomorrow’s observations will pin down any one particular model.

There is another option though, which is to focus on the regime where quantum gravitational effects are weak, yet strong enough to still affect matter. In this regime, relevant during and towards the end of inflation, we know how the theory works. The mathematics to treat the quantum properties of space-time during this period is well-understood because such small perturbations can be dealt with almost the same way as all other quantum fields.

Indeed, the weak quantum gravity approximation is routinely used in the calculation of today’s observables, such as the spectrum of the cosmic microwave background. That is right – cosmologists do actually use quantum gravity. It becomes necessary because, according to the currently most widely accepted models, inflation is driven by a quantum field – the “inflaton” – whose fluctuations go on to seed the structures we observe today. The quantum fluctuations of the inflaton cause quantum fluctuations of space-time. And these, in turn, remain visible today in the large-scale distribution of matter and in the cosmic microwave background (CMB).
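
In practice these quantum fluctuations enter today's data through the primordial power spectrum, which CMB analyses parameterize as a nearly scale-invariant power law. A minimal sketch with roughly the Planck best-fit numbers (treat the values as approximate):

```python
# Primordial power spectrum of curvature perturbations, as used in CMB fits:
# Delta^2_R(k) = A_s * (k / k_pivot)^(n_s - 1). Values are approximately the
# Planck best-fit numbers and should be treated as rough.
A_s = 2.1e-9        # amplitude of scalar perturbations
n_s = 0.965         # spectral tilt
k_pivot = 0.05      # pivot scale in 1/Mpc

def primordial_spectrum(k):
    """Dimensionless power of the curvature perturbation at wavenumber k [1/Mpc]."""
    return A_s * (k / k_pivot) ** (n_s - 1)

for k in [1e-4, 1e-3, 1e-2, 1e-1]:
    print(f"k = {k:.0e} /Mpc : Delta^2_R = {primordial_spectrum(k):.3e}")
```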

This is why last year’s claim by the BICEP collaboration that they had observed the CMB imprint left by gravitational waves from the early universe was claimed by some media outlets to be evidence for quantum gravity. But the situation is not so simple. Let us assume they had indeed measured what they originally claimed. Even then, obtaining correct predictions from a theory that was quantized doesn’t demonstrate that the correct theory must have been quantized. To demonstrate that space-time must have had quantum behavior in the early universe, we must instead find an observable that could not have been produced by any unquantized theory.

In recent months, two papers appeared that studied this question and analyzed the prospects of finding evidence for quantum gravity in the CMB. The conclusions, however, are in both cases rather pessimistic.

The first paper is “A model with cosmological Bell inequalities” by Juan Maldacena. Maldacena tries to construct a Bell-type test that could be used to rule out a non-quantum origin of the signatures that are left over today from the early universe. The problem is that, once inflation ends, only the classical distribution of the originally quantum fluctuations goes on to enter the observables, like the CMB temperature fluctuations. This makes any Bell-type setup with detectors in the current era impossible because the signal is long gone.
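
For orientation, this is what a Bell-type test quantifies in the simplest laboratory setting: the CHSH combination of correlators, which any local classical description keeps at or below 2, while the quantum singlet state reaches 2√2. The sketch below is the generic textbook computation, not Maldacena's cosmological construction.

```python
# Textbook CHSH computation: for the two-qubit singlet state the CHSH
# combination of correlators reaches 2*sqrt(2) ~ 2.83, while any local
# classical (hidden-variable) model stays at or below 2.
import numpy as np

def correlator(angle_a, angle_b):
    """E(a, b) in the singlet state = -cos(a - b)."""
    return -np.cos(angle_a - angle_b)

a, a2 = 0.0, np.pi / 2              # Alice's two measurement directions
b, b2 = np.pi / 4, 3 * np.pi / 4    # Bob's two measurement directions

S = correlator(a, b) - correlator(a, b2) + correlator(a2, b) + correlator(a2, b2)
print(f"CHSH value |S| = {abs(S):.3f}  (classical bound: 2)")
```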

Maldacena refuses to be discouraged by this and instead tries to find a way in which another field, present during inflation, plays the role of the detector in the Bell-experiment. This additional field could then preserve the information about the quantum-ness of space-time. He explicitly constructs such a model with an additional field that serves as detector, but himself calls it “baroque” and “contrived.” It is a toy-model to demonstrate that there exist cases in which a Bell-test can be performed on the CMB, but not a plausible scenario for our universe.

I find the paper nevertheless interesting as it shows what it would take to use this method and also exhibits where the problem lies. I wish there were more papers like this, where theorists come forward with ideas that didn’t work, because these failures are still a valuable basis for further studies.

The second paper is “Quantum Discord of Cosmic Inflation: Can we Show that CMB Anisotropies are of Quantum-Mechanical Origin?” by Jerome Martin and Vincent Vennin. The authors of this paper don’t rely on the Bell-type test specifically, but instead try to measure the “quantum discord” of the CMB temperature fluctuations. The quantum discord, in a nutshell, measures the quantum-ness in the correlations of a system. The observables they look at are firstly the CMB two-point correlations and later also higher correlation functions.
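
To make "quantum discord" a little more concrete, here is a generic brute-force computation for two qubits: the total mutual information minus the best classical correlation you can extract by measuring one side. This is the textbook quantity, not the CMB observable Martin and Vennin construct; the two test states and the coarse measurement grid are merely for illustration.

```python
# Brute-force quantum discord for two qubits: (total mutual information) minus
# (the best classical correlation obtainable by projectively measuring qubit B).
# For a classically correlated state it vanishes; for the singlet it is 1 bit.
import numpy as np

def entropy(rho):
    """von Neumann entropy in bits."""
    vals = np.linalg.eigvalsh(rho)
    vals = vals[vals > 1e-12]
    return float(-np.sum(vals * np.log2(vals)))

def partial_trace(rho, keep):
    """Reduce a 4x4 two-qubit density matrix to qubit 'A' or 'B'."""
    r = rho.reshape(2, 2, 2, 2)   # indices: a, b, a', b'
    return np.einsum('abcb->ac', r) if keep == 'A' else np.einsum('abac->bc', r)

def avg_entropy_after_measuring_B(rho, theta, phi):
    """Average entropy of A after projecting B onto the basis set by (theta, phi)."""
    b0 = np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])
    b1 = np.array([-np.exp(-1j * phi) * np.sin(theta / 2), np.cos(theta / 2)])
    total = 0.0
    for b in (b0, b1):
        proj = np.kron(np.eye(2), np.outer(b, b.conj()))
        p = np.real(np.trace(proj @ rho))
        if p > 1e-12:
            total += p * entropy(partial_trace(proj @ rho @ proj, 'A') / p)
    return total

def discord(rho):
    rho_A, rho_B = partial_trace(rho, 'A'), partial_trace(rho, 'B')
    mutual_info = entropy(rho_A) + entropy(rho_B) - entropy(rho)
    best = min(avg_entropy_after_measuring_B(rho, t, p)
               for t in np.linspace(0, np.pi, 41) for p in np.linspace(0, 2 * np.pi, 41))
    return mutual_info - (entropy(rho_A) - best)

basis = np.eye(4)    # |00>, |01>, |10>, |11>
classical = 0.5 * (np.outer(basis[0], basis[0]) + np.outer(basis[3], basis[3]))
singlet_vec = (basis[1] - basis[2]) / np.sqrt(2)
singlet = np.outer(singlet_vec, singlet_vec)

print(f"discord of classically correlated state: {discord(classical):.3f}")  # ~0
print(f"discord of singlet state:                {discord(singlet):.3f}")    # ~1
```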

The authors address the question in two steps. In the first step they ask whether the CMB observations can also be reproduced in the standard treatment if the state has little or no quantum correlations, i.e., if one has a ‘classical state’ (in terms of correlations) in a quantum theory. They find that, as far as already existing observables are concerned, the modifications due to the lack of quantum correlations are present but unobservable.
    “[I]n practice, the difference between the quantum and the classical results is tiny and unobservable probably forever.”
They are tentatively hopeful that the two cases might become distinguishable with higher-order correlation functions. On these correlations, experimentalists have so far only very little data, but it is a general topic of interest and future missions will undoubtedly sharpen the existing constraints. In the present work, the authors however do not quantify the predictions, but rather defer to future work: “[I]t remains to generate templates […] to determine whether such a four-point function is already excluded or not.”

The second step is that they study whether the observed correlations could be created by a theory that is classical to begin with, so that the fluctuations are stochastic. They then demonstrate that this can always be achieved, and thus there is no way to distinguish the two cases. To arrive at this conclusion, they first derive the equations for the correlations in the unquantized case, then demand that they reproduce those of the quantized case, and then argue that these equations can be fulfilled.

On the latter point I am, maybe uncharacteristically, less pessimistic than the authors themselves because their general case might be too general. Combining a classical theory with a quantum field gives rise to a semi-classical set of equations that lead to peculiar violations of the uncertainty principle, and an entirely classical theory would need a different mechanism to even create the fluctuations. That is to say, I believe that it might be possible to further constrain the prospects of unquantized fluctuations if one takes into account other properties that such models necessarily must have.

In summary, I have to conclude that we still have a long way to go until we can demonstrate that space-time must have been quantized in the early universe. Nevertheless, I think it is one of the most promising avenues to pin down the first experimental signature for quantum gravity.

Thursday, December 31, 2015

Book review: “Beyond the Galaxy” by Ethan Siegel

Beyond the Galaxy: How Humanity Looked Beyond Our Milky Way and Discovered the Entire Universe
By Ethan Siegel
World Scientific Publishing Co (December 9, 2015)

Ethan Siegel’s book is an introduction to modern cosmology that delivers all the facts without the equations. Like Ethan’s collection “Starts With a Bang,” it is well-explained and accessible for the reader without any prior knowledge in physics. But this access doesn’t come without effort. This isn’t a book for the strolling pedestrian who likes being dazzled by the wonders of modern science, it’s a book for the inquirer who wants to turn around everything behind the display-window of science news.

“Beyond the Galaxy” tells the history of the universe and the basics of the relevant measurement techniques. It explains the big bang theory and inflation, the formation of matter in the early universe, dark matter, dark energy, and briefly mentions the multiverse. Siegel elaborates on the cosmic microwave background and what we have learned from it, baryon acoustic oscillations, and supernovae redshift. For the most part, the book sticks closely with well-established physics and stays away from speculations, except when it comes to the possible explanations for dark matter and dark energy.

Having said what the book contains, let me spell out what it doesn’t contain. This is not a book about astrophysics. You will not find elaborate discussions about all the known astrophysical objects and their physical processes. This is also not a book about particle physics. Ethan does not include dark matter direct detection experiments, and while some particle physics necessarily enters the discussion of matter formation, he sticks with the very essentials. It is also not a history book. Though Ethan does a good job giving the reader a sense of the timeline of discoveries, this is clearly not the focus of his interest.

Ethan might not be the most lyrical writer ever, but his explanations are unfailingly clear and comprehensible. The book is accompanied by numerous illustrations that are mostly helpful, though some of them contain more information than is explained in the text.

In short, Ethan’s book is the missing link between cosmology textbooks and popular science articles. It will ease your transition if you are attempting one, or, if that is not your intention, it will serve to tie together the patchy knowledge that news articles often leave us with. It is the ideal starting point if you want to get serious about digging into cosmology, or if you are just dissatisfied by the vagueness of much contemporary science writing. It is, in one word, a sciency book.

[Disclaimer: Free review copy, plus I write for Ethan once per month.]

Wednesday, December 30, 2015

How does a lightsaber work? Here is my best guess.

A lightsaber works by emitting a stream of magnetic monopoles. Magnetic monopoles are heavy particles that source magnetic fields. They are so far undiscovered, but many physicists believe they are real on theoretical grounds. For string theorist Joe Polchinski, for example, “the existence of magnetic monopoles seems like one of the safest bets that one can make about physics not yet seen.” Magnetic monopoles are so heavy, however, that they cannot be produced by any known process in the universe – a minor technological complication that I will come back to below.




Depending on the speed at which the monopoles are emitted, they will either escape or return to the saber’s hilt, which has the opposite magnetic charge. You could of course just blast your opponent with the monopoles, but that would be rather boring. The point of a lightsaber isn’t to merely kill your enemies, but to kill them with style.



So you are emitting this stream of monopoles. Since the hilt has the opposite magnetic charge, the monopoles drag magnetic field lines along behind them. Next you eject some electrically charged particles – electrons or ions – into this field with an initial angular velocity. These will spiral around the magnetic field lines and, due to the circular motion, emit synchrotron radiation, which is why you can see the blade.

Due to the emission of light and the occasional collision with air molecules, the electrically charged particles slow down and eventually escape the magnetic field. That doesn’t sound particularly healthy, so you might want to make sure that their kinetic energy isn’t too high. To still get an emission spectrum with a significant contribution in the visible range, you then need a huge magnetic field. Which can’t really be healthy either, but at least it falls off inversely with the distance from the blade.
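
To get a feel for what “huge” means here, a back-of-the-envelope sketch helps. The numbers below are my own illustration and assume the charged particles are slow enough to radiate at the plain cyclotron frequency f = qB/(2πm); relativistic electrons would shift the spectrum and change the estimate.

    import math

    # Rough estimate of the magnetic field needed so that (non-relativistic)
    # cyclotron emission from electrons lands in the visible range.
    q = 1.602e-19    # electron charge in coulomb
    m = 9.109e-31    # electron mass in kg
    c = 2.998e8      # speed of light in m/s

    wavelength = 550e-9           # green light, in meters
    f_visible = c / wavelength    # about 5.5e14 Hz

    B = 2 * math.pi * m * f_visible / q
    print(f"required field: {B:.1e} tesla")   # roughly 2e4 T

That comes out at roughly 20,000 tesla, orders of magnitude beyond anything laboratory magnets produce, so the falloff of the field with distance from the blade is welcome indeed.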

Letting the monopoles escape has the advantage that you don’t have to devise a complicated mechanism to make sure they actually return to the hilt. It has the disadvantage, though, that one fighter’s monopoles can be sucked up by the other’s saber if it has the opposite charge. Can the blades pass through each other? Well, if they both have the same charge, they repel. You couldn’t easily pass them through each other, but they would probably distort each other to some extent. How much depends on the strength of the magnetic field that keeps the electrons trapped.


Finally, there is the question of how to produce the magnetic monopoles in the first place. For this, you need a pocket-sized accelerator that generates collision energies at the Planck scale. The most commonly used method is a Kyber crystal. This also means that you need to know string theory to accurately calculate how a lightsaber operates. May the Force be with you.

[For more speculation, see also Is a Real Lightsaber Possible? by Don Lincoln.]

Tuesday, December 29, 2015

Book review: “Seven brief lessons on physics” by Carlo Rovelli

Seven Brief Lessons on Physics
By Carlo Rovelli
Allen Lane (September 24, 2015)

Carlo Rovelli’s book is a collection of essays about the fundamental laws of physics as we presently know them, and the road that lies ahead. General Relativity, quantum mechanics, particle physics, cosmology, quantum gravity, the arrow of time, and consciousness are the topics he touches upon in this slim, pocket-sized, 79-page collection.

Rovelli is one of the founders of the research program of Loop Quantum Gravity, an approach to understanding the quantum nature of space and time. His “Seven brief lessons on physics” are short on scientific detail, but excel at capturing the fascination of the subject and its relevance for understanding our universe, our existence, and ourselves. In laying out the big questions that drive physicists’ quest for a better understanding of nature, Rovelli makes it clear how contemporary research, often abstract, is intimately connected with the ancient desire to find our place in this world.

As a scientist, I would like to complain about numerous slight inaccuracies, but I forgive them since they are admittedly not essential to the message Rovelli is conveying, which is the value of knowledge for the sake of knowledge itself. The book is more a work of art and philosophy than of science; it’s the work of a public intellectual reaching out to the masses. I applaud Carlo for not dumbing down his writing, for not being afraid of using multi-syllable words and constructing nested sentences; it’s a pleasure to read. He seems to spend too much time on the beach playing with snail shells though.

I might have recommended the book as a Christmas present for your relatives who never quite seem to understand why anyone would spend their life pondering the arrow of time, but I was too busy pondering the arrow of time to finish the book before Christmas.

I would recommend this book to anyone who wants to understand how fundamental questions in physics tie together with the mystery of our own existence, or maybe just wants a reminder of what got them into this field decades ago.

[Disclaimer: I got the book as gift from the author.]

Sunday, December 27, 2015

Dear Dr B: Is string theory science?

This question was asked by Scientific American, in a headline hovering over an article by Davide Castelvecchi.

They should have asked Ethan Siegel. Because a few days ago he strayed from the path of awesome news about the universe to inform his readership that “String Theory is not Science.” Unlike Davide however, Ethan has not yet learned the fine art of not expressing opinions that marks the true science writer. And so Ethan dismayed Peter Woit, Lubos Motl, and me in one sweep. That’s a noteworthy achievement, Ethan!

Upon my inquiry (essentially a polite version of “wtf?”) Ethan clarified that he meant string theory has no scientific evidence speaking for it and changed the title to “Why String Theory Is Not A Scientific Theory.” (See URL for original title.)

Now, Ethan is wrong in believing that string theory doesn’t have evidence speaking for it, and I’ll come to this in a minute. But the main reason for his misleading title, even after the correction, is a self-induced problem of US science communicators. In reaction to the often-raised creationist claim that Darwinian natural selection is “just a theory,” they have bent over backwards trying to convince the public that scientists use the word “theory” to mean an explanation that has been confirmed by evidence to high accuracy. Unfortunately, that’s not how scientists actually use the word, not how they have ever used it, and probably not how they ever will.

Scientists don’t name their research programs following fixed rules. Instead, which expression sticks is mostly coincidence. Brans-Dicke theory, scalar-tensor theory, terror management theory, or recapitulation theory are but a few examples of “theories” that have little or no evidence speaking in their favor. Maybe that shouldn’t be so. Maybe “theory” should be a title reserved only for explanations widely accepted in the scientific community. But looking up definitions before assigning names isn’t how language works. Peanuts aren’t nuts either (they are legumes), and neither are cashews (they are seeds). But, really, who gives a damn?

Speaking of nuts, the sensible reaction to the “just a theory” claim is not to conjure up rules according to which scientists allegedly use one word or the other, but to point out that any consistent explanation is better than a collection of 2000-year-old fairy tales that are neither internally consistent nor consistent with observation, and thus an entirely useless waste of time.

And science really is all about finding useful explanations for observations, where “useful” means that they increase our understanding of the world around us and/or allow us to shape nature to our benefit. To find these useful explanations, scientists employ the often-quoted method of proposing hypotheses and subsequently testing them. The role of theory development in this is to identify the hypotheses that are most promising and thus deserve being put to the test.

This pre-selection of hypotheses is a step often left out in descriptions of the scientific method, but it is highly relevant, and its relevance has only increased in the last decades. We cannot possibly test all randomly produced hypotheses – we have neither the time nor the resources. All fields of science therefore have tight quality controls for which hypotheses are worth paying attention to. The more costly the experimental testing of new hypotheses becomes, the more relevant this pre-selection is. And it is in this step that non-empirical theory assessment enters.

Non-empirical theory assessment was the topic of the workshop that Davide Castelvecchi’s SciAm article reported on. (For more information about the workshop, see also Natalie Wolchover’s summary in Quanta, and my summary on Starts with a Bang.) Non-empirical theory assessment is the use of criteria that scientists draw upon to judge the promise of a theory before it can be put to experimental test.

This isn’t new. Theoretical physicists have always used non-empirical assessment. What is new is that in foundational physics it has remained the only assessment for decades, which hugely inflates the potential impact of even the smallest mistakes. As long as we have frequent empirical assessment, faulty non-empirical assessment cannot lead theorists far astray. But take away the empirical test, and non-empirical assessment requires utmost objectivity in judgement, or we will end up in a completely wrong place.

Richard Dawid, one of the organizers of the Munich workshop, has, in a recent book, summarized some non-empirical criteria that practitioners list in favor of string theory. It is an interesting book, but of little practical use because it doesn’t also assess other theories (so the scientist complains about the philosopher).

String theory arguably has empirical evidence speaking for it because it is compatible with the theories that we know, the standard model and general relativity. The problem, though, is that as far as the evidence is concerned, string theory so far isn’t any better than the existing theories. There isn’t a single piece of data that string theory explains which the standard model or general relativity doesn’t explain.

The reasons many theoretical physicists prefer string theory over the existing theories are purely non-empirical. They consider it a better theory because it unifies all known interactions in a common framework and is believed to solve consistency problems in the existing theories, like the black hole information loss problem and the formation of singularities in general relativity. Whether it is actually correct as a unified theory of all interactions is still unknown. And short of a uniqueness proof, no non-empirical argument will change anything about this.

What is known, however, is that string theory is intimately related to quantum field theories and gravity, both of which are well confirmed by evidence. This is why many physicists are convinced that string theory too has some use in the description of nature, even if this use may eventually not be to describe the quantum structure of space and time. And so, in the last decade string theory has come to be regarded less as a “final theory” and more as a mathematical framework to address questions that are difficult or impossible to answer with quantum field theory or general relativity. It has yet to prove its use on this account.

Speculation in theory development is a necessary part of the scientific method. If a theory isn’t developed to explain already existing data, there is always a lag between the hypotheses and their tests. String theory is just another such speculation, and it is thereby a normal part of science. I have never met a physicist who claimed that string theory isn’t science. This is a statement I have only come across from people who are not familiar with the field – which is why Ethan’s recent blogpost puzzled me greatly.

No, the question that separates the community is not whether string theory is science. The controversial question is how long is too long to wait for data supporting a theory. Are 30 years too long? Does it make any sense to demand a payoff after a certain time?

It doesn’t make any sense to me to force theorists to abandon a research project because experimental test is slow to come by. It seems natural that in the process of knowledge discovery it becomes increasingly harder to find evidence for new theories. What one should do in this case though is not admit defeat on the experimental front and focus solely on the theory, but instead increase efforts to find new evidence that could guide the development of the theory. That, and the non-empirical criteria should be regularly scrutinized to prevent scientists from discarding hypotheses for the wrong reasons.

I am not sure who is responsible for this needlessly provocative title of the SciAm piece, just that it’s most likely not the author, because the same article previously appeared in Nature News with the somewhat more reasonable title “Feuding physicists turn to philosophy for help.” There was, however, not much feud at the workshop, because it was mainly populated by string theory proponents and multiverse opponents, who nodded to each other’s talks. The main feud, as always, will be carried out in the blogosphere...

Tl;dr: Yes, string theory is science. No, this doesn’t mean we know it’s a correct description of nature.

Thursday, December 24, 2015

Is light a wave or a particle?

2015 was the International Year of Light. In May, I came across this video by the Max Planck Society, in which some random people on the street in Munich were asked whether light is a wave or a particle. Most of them answered in German, but here is a rough translation of their replies:
    “Uh, physics. It’s been a long time. I guess it’s... a particle. — Particle. — Particle. — A particle. — A particle. — Light is... a particle. — I had physics up to 12th grade. We discussed this for a whole year. But I still don’t know. — A wave. — A wave? — Is this a trick question? — It’s both! Wave-particle duality. You should know that. — The duality of light. — It acts as both. — It’s hard to quantify what it is. It’s energy. — I am fascinated that nature has paradoxes. That one finds out through physics that not everything can be computed.”
So I thought some explanation is in order:



This is the first time I’ve tried the new green screen. As you can see, it has indeed solved my eye-erasure problem. (And for the experts: I hope you’ll excuse my sloppiness in specifying the U(1) gauge group.)

On that occasion, I also want to wish you all Happy Holidays!


Like what you find on my blog? I want to kindly draw your attention to the donate button in the top right corner :o)

Saturday, December 19, 2015

Ask Dr B: Is the multiverse science? Is the multiverse real?

Kay zum Felde asked:
“Is the multiverse science? How can we test it?”
I added “Is the multiverse real” after Google offered it as autocomplete:


Dear Kay,

This is a timely question, one that has been much on my mind in the last years. Some influential theoretical physicists – like Brian Greene, Lenny Susskind, Sean Carroll, and Max Tegmark – argue that the appearance of multiverses in various contemporary theories signals that we have entered a new era of science. This idea however has been met with fierce opposition by others – like George Ellis, Joe Silk, Paul Steinhardt, and Paul Davies – who criticize the lack of testability.

If the multiverse idea is right, and we live in one of many – maybe infinitely many – different universes, then some of our fundamental questions about nature might never be answered with certainty. We might merely be able to make statements about how likely we are to inhabit a universe with some particular laws of nature. Or maybe we cannot even calculate this probability, but just have to accept that some things are as they are, with no possibility to find deeper answers.

What bugs the multiverse opponents most about this explanation – or rather lack of explanation – is that succumbing to the multiverse paradigm feels like admitting defeat in our quest for understanding nature. They seem to be afraid that merely considering the multiverse an option discourages further inquiries, inquiries that might lead to better answers.

I think the multiverse isn’t remotely as radical an idea as it has been portrayed, and that some aspects of it might turn out to be useful. But before I go on, let me first clarify what we are talking about.

What is the multiverse?

The multiverse is a collection of universes, one of which is ours. The other universes might be very different from the one we find ourselves in. There are various types of multiverses that theoretical physicists believe are logical consequences of their theories. The best known ones are:
  • The string theory landscape
    String theory doesn’t uniquely predict which particles, fields, and parameters a universe contains. If one believes that string theory is the final theory, and there is nothing more to say than that, then we have no way to explain why we observe one particular universe. To make the final theory claim consistent with the lack of predictability, one therefore has to accept that any possible universe has the same right to existence as ours. Consequently, we live in a multiverse.

  • Eternal inflation
    In some currently very popular models for the early universe, our universe is just a small patch of a larger space. As a result of a quantum fluctuation, the initially rapid expansion – known as “inflation” – slows down in the region around us and galaxies can form. But outside our universe inflation continues, and randomly occurring quantum fluctuations go on to spawn other universes – eternally. If one believes that this theory is correct and that we understand how the quantum vacuum couples to gravity, then, so the argument goes, the other universes are as real as ours.

  • Many worlds interpretation
    In the Copenhagen interpretation of quantum mechanics the act of measurement is ad hoc. It is simply postulated that measurement “collapses” the wave-function from a state with quantum properties (such as being in two places at once) to a distinct state (at only one place). This postulate agrees with all observations, but it is regarded as unappealing by many (including myself). One way to avoid this postulate is to instead posit that the wave-function never collapses. Instead it ‘branches’ into different universes, one for each possible measurement outcome – a whole multiverse of measurement outcomes.

  • The Mathematical Universe
    The Mathematical Universe is Max Tegmark’s brainchild, in which he takes the final theory claim to its extreme. Any theory that describes only our universe requires the selection of some particular mathematics among all possible mathematics. But if a theory is a final theory, there is no way to justify any particular selection, because any selection would require another theory to explain it. And so, the only final theory there can be is one in which all of mathematics exists somewhere in the multiverse.
This list might raise the impression that the multiverse is a new finding, but that isn’t so. New is only the interpretation. Since every theory requires observational input to fix parameters or pick axioms, every theory leads to a multiverse. Without sufficient observational input any theory becomes ambiguous – it gives rise to a multiverse.

Take Newtonian gravity: Is there a universe for each value of Newton’s constant? Or General Relativity: Do all solutions to the field equations exist? And Loop Quantum Gravity, like string theory, has an infinite number of solutions with different parameters – a multiverse of its own. It’s just that Loop Quantum Gravity never tried to be a theory of everything, so nobody worries about this.

What is new about the multiverse idea is that some physicists are no longer content with having a theory that describes observation. They now have additional requirements for a good theory, for example that the theory have no ad hoc prescriptions like collapsing wave-functions; no small, large, or in fact any numbers; and initial conditions that are likely according to some currently accepted probability distribution.

Is the multiverse science?

Science is what describes our observations of nature. But this is the goal, and not necessarily the case for each step along the way. And so, taking multiverses seriously, rather than treating them as the mathematical artifacts that I think they are, might eventually lead to new insights. The real controversy about the multiverse is how likely it is that new insights will eventually emerge from this approach.

Maybe the best example of how multiverses might become scientific is eternal inflation. It has been argued that the different universes might not be entirely disconnected, but can collide, thereby leaving observable signatures in the cosmic microwave background. Another example of testability comes from Mersini-Houghton and Holman, who have looked into potentially observable consequences of entanglement between different universes. And in a rather mind-bending recent work, Garriga, Vilenkin, and Zhang have argued that the multiverse might give rise to a distribution of small black holes in our universe, which also has consequences that could become observable in the future.

As to probability distributions on the string theory landscape, I don’t see any conceptual problem with that. If someone could, based on a few assumptions, come up with a probability measure according to which the universe we observe is the most likely one, that would for me be a valid computation of the standard model parameters. The problem is of course to come up with such a measure.

Similar things could be said about all other multiverses. They don’t presently seem very useful to describe nature. But pursuing the idea might eventually give rise to observable consequences and further insights.

We have known since the dawn of quantum mechanics that it’s wrong to require all mathematical structures of a theory to directly correspond to observables – wave-functions are the best counterexample. How willing physicists are to accept non-observable ingredients of a theory as necessary depends on their trust in the theory and on their hope that it might give rise to deeper insights. But there isn’t a priori anything unscientific about a theory that contains elements that are unobservable.

So is the multiverse science? It is an extreme speculation, and opinions differ widely on how promising a route it is to deeper understanding. But speculations are a normal part of theory development, and the multiverse is scientific as long as physicists strive to eventually derive observable consequences.

Is the multiverse real?

The multiverse has some brain-bursting consequences. For example, that everything that can happen does happen, and it happens an infinite number of times. There are thus infinitely many copies of you somewhere out there, doing their own thing, or doing exactly the same as you. What does that mean? I have no clue. But it makes for an interesting dinner conversation through the second bottle of wine.

Is it real? I think it’s a mistake to think of “being real” as a binary variable, a property that an object either has or has not. Reality has many different layers, and how real we perceive something depends on how immediate our inference of the object from sensory input is.

A dog peeing on your leg has a very simple and direct relation to your sensory input that does not require much decoding. You would almost certainly consider it real. In contrast, evidence for the quark model contained in a large array of data on a screen is a very indirect sensory input that requires a great deal of decoding. How real you consider quarks thus depends on your knowledge of, and trust in, the theory and the data. Or trust in the scientists dealing with the theory and the data, as it were. For most physicists the theory underlying the quark model has proved reliable and accurate to such high precision that they consider quarks as real as the peeing dog.

But the longer the chain of inference, and the less trust you have in the theories used for inference, the less real objects become. In this layered reality the multiverse is currently at the outer fringes. It’s as unreal as something can be without being plain fantasy. For some practitioners who greatly trust their theories, the multiverse might appear almost as real as the universe we observe. But for most of us these theories are wild speculations and consequently we have little trust in this inference.

So is the multiverse real? It is “less real” than everything else physicists have deduced from their theories – so far.

Wednesday, December 16, 2015

No, you don’t need general relativity to ride a hoverboard.

Image credit: Technologistlaboratory.
This morning, someone sent me a link to a piece that appeared on WIRED, according to which you can’t ride a hoverboard without Einstein’s theory of General Relativity.

The hoverboards in question here are the currently fashionable two-wheeled motorized boards that are driven by shifting your weight. I haven’t tried one, but it sure looks like fun.

I would have ignored this article as your average internet nonsense, but it turns out the WIRED piece is written by someone named Rhett Allain who, according to the website, “is an Associate Professor of Physics at Southeastern Louisiana University.” Which makes me fear that some readers might actually believe what he wrote. Because he is something with professor, certainly he must know the physics.

Now, the claim of the article is correct in the sense that if you took the laws of physics and removed general relativity then there would be no galaxy formation, no planet Earth, no people, and certainly no hoverboards. I don’t think though that Allain had such a philosophical argument in mind. Besides, on this ground you could equally well argue that you can’t throw a pebble without general relativity because there wouldn’t be any pebbles.

What Allain argues instead is that you somehow need the effects of gravity to be the same as those of acceleration, and that since this sounds a little like general relativity, you therefore need general relativity.

You should find this claim immediately suspicious because if you know one thing about general relativity it’s that it’s hard to test. If you couldn’t “ride a hoverboard without Einstein’s theory of General Relativity,” then why bother with light deflection and gravitational lensing to prove that the theory is correct? Must be a giant conspiracy of scientists wasting taxpayers’ money I presume.

Image Credit: Jared Mecham
Another reason to be suspicious about the correctness of this argument is the author’s explanation that special relativity is special because “Well, before Einstein, everyone thought reference frames were relative.” I am hoping this was just a typographical error, but just to avoid any confusion: before Einstein time was absolute. It’s called special relativity because according to Einstein, time too is relative.

But to come back to the issue of gravity. What you need to ride a hoverboard is to balance the inertial force caused by the board’s acceleration with another force, for which you have pretty much only gravity available. If the board accelerates and pushes your feet forward (friction required), you had better bend forward to shift your center of mass, because otherwise you’ll fall flat on your back. Bend forward too much and you fall on your nose, because gravity. Don’t bend enough and you’ll fall backwards, because inertia. To keep standing, you need to balance these forces.

This is basic mechanics and has nothing to do with General Relativity. That one of the forces is gravity is irrelevant to the requirement that you have to balance them to not fall. And even if you take into account that it’s gravity, Newtonian gravity is entirely sufficient. Nor does any of this have anything to do with hoverboards in particular: you can also see people standing on a train bend forward when the train accelerates, because otherwise they’d fall like dominoes. You don’t need to bend when sitting because the seat back balances the force for you.

What’s different about general relativity is that it explains that gravity is not a force but a property of space-time. That is, it deviates from Newtonian gravity. These deviations are ridiculously small corrections though, and you don’t need to take them into account for your average Joe on a hoverboard – unless possibly Joe is a neutron star.
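
To put a number on “ridiculously small”: the relative size of general-relativistic corrections is set by the dimensionless potential GM/(Rc²) of the body you are standing on. Here is a quick sketch with textbook values – my own estimate, nothing from the WIRED piece:

    # Size of general-relativistic corrections: the dimensionless potential
    # G*M/(R*c^2) at the surface of the body providing the gravity.
    G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8     # speed of light, m/s

    def gr_correction(mass_kg, radius_m):
        return G * mass_kg / (radius_m * c**2)

    print(gr_correction(5.97e24, 6.37e6))   # Earth: about 7e-10
    print(gr_correction(2.8e30, 1.2e4))     # neutron star (1.4 solar masses, 12 km): about 0.2

On Earth the corrections come in at parts in a billion, far below anything a rider could notice; on a neutron star they are of order one, which is where Newtonian gravity genuinely fails.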

The key ingredient of general relativity is the equivalence principle, a simplified version of which states that gravitational mass is equal to inertial mass. This is my best guess at what Allain was alluding to. But you don’t need the equivalence principle to balance forces. The equivalence principle just tells you exactly how the forces are balanced. In this case it would tell you the angle at which you have to lean to not fall.
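
For completeness, here is what the Newtonian balance condition actually gives – a minimal sketch with made-up numbers, not anything taken from the WIRED piece: the lean angle θ from the vertical satisfies tan θ = a/g, with a the board’s acceleration and g the gravitational acceleration.

    import math

    def lean_angle_deg(a, g=9.81):
        # Angle from the vertical at which gravity balances the inertial
        # force of a horizontal acceleration a (plain Newtonian mechanics).
        return math.degrees(math.atan(a / g))

    print(lean_angle_deg(2.0))   # a gentle 2 m/s^2 calls for a lean of about 11.5 degrees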

In summary: The correct statement would have been “You can’t ride a hoverboard without balancing forces.” If you lean too far forward and write about General Relativity without knowing how it works, you’ll fall flat on your nose.