Saturday, May 30, 2015

String theory advances philosophy. No, really.

I have a soft side, and I don’t mean my Snoopy pants, though there is that. I mean I have a liking for philosophy because there are so many questions that physics can’t answer. I never get far with my philosophical ambitions though because the only reason I can see for leaving a question to philosophers is that the question itself is the problem. Take for example the question “What is real?” What does that really mean?

Most scientists are realists and believe that the world exists independently of them. On the very opposite end there is solipsism, the belief that one can only be sure that one’s own mind exists. And then there’s a large spectrum of isms in the middle. Philosophers have debated the nature of reality for thousands of years, and you might rightfully conclude that it just isn’t possible to make headway on the issue. But you’d be wrong! As I learned at a recent conference where I gave a talk about dualities in physics, string theory has indeed helped philosophers make progress in this ancient debate. However, I couldn’t make much sense of the interest my talk got until I read Richard Dawid’s book, which put things into perspective.

I’d call myself a pragmatic realist and an opportunistic solipsist, which is to say that I sometimes like to challenge people to prove to me that they’re not a figment of my imagination. So far nobody has succeeded. It’s not so much self-focus that makes me contemplate solipsism, but a deep mistrust in the reliability of human perception and memory, especially my own, because who knows if you exist at all. Solipsism never was very popular, which might be because it makes you personally responsible for all that is wrong with the world. It is also possibly the most unproductive mindset you can have if you want to get research done, but I find it quite useful for dealing with the more bizarre comments that I get.

My biggest problem with the question of what is real, though, isn’t that I evidently sometimes talk to myself, but that I don’t know what “real” even means, which is also why most discussions about the reality of time or the multiverse seem void of content to me. The only way I ever managed to make sense of reality is in layers of equivalence classes, so let me introduce you to my personal reality.

Equivalence classes are what mathematicians use to collect things with similar properties. An equivalence relation is basically a weaker form of equality, often denoted with a tilde ~. For example, all natural numbers that divide evenly by seven are in the same equivalence class, so while 7 ≠ 21, it is 7 ~ 21. They’re not the same numbers, but they share a common property. The good thing about using equivalence classes is that, once they are defined, one can derive relations for them. They play an essential role in topology, but I digress, so back to reality.
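If you prefer code to symbols, here is a minimal sketch of the idea in Python. The choice of “same remainder after division by seven” as the defining property is just a generalization of the example above; the class of numbers that divide evenly by seven is the one with remainder zero.

```python
# Minimal sketch: equivalence classes of natural numbers under
# "has the same remainder when divided by 7". Two numbers can be
# equivalent (~) without being equal (=).

def equivalence_class(n, modulus=7):
    """Label the class of n by its remainder modulo `modulus`."""
    return n % modulus

def equivalent(a, b, modulus=7):
    """True if a ~ b, i.e. both fall into the same class."""
    return equivalence_class(a, modulus) == equivalence_class(b, modulus)

print(7 == 21)            # False: they are not equal...
print(equivalent(7, 21))  # True:  ...but 7 ~ 21, both divide evenly by 7
```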

Equivalence classes help because, while I can’t make sense of the question of what is real, the question of what is “as real as” something else does make sense. The number seven isn’t “as real as” my shoe, and the reason I’m saying this is the physical interaction I can have with my shoe but not with seven. That’s why, you won’t be surprised to hear, I want to argue here that the best way to think about reality is to think about physics first.

As I laid out in an earlier post, in physics we talk about direct and indirect measurements, but the line that separates these is fuzzy. Roughly speaking, the more effort is necessary to infer the properties of the object measured, the more indirect the measurement. A particle that hits a detector is often said to be directly measured. A particle whose existence has to be inferred from decay products that hit the detector is said to be indirectly measured. But of course there are many other layers of inference in the measurement. To begin with there are assumptions about the interactions within the detector that eventually produce a number on a screen, then there are photons that travel to your retina, and finally the brain activity resulting from these photons.

The reason we don’t normally mention all these many assumptions is that we assign them an extremely high confidence level. Reality then, in my perspective, has confidence levels like our measurements do, from very direct to very indirect. The most direct measurement, the first layer of reality, is what originates in your own brain. The second layer is direct sensory input: It’s a photon, it’s the fabric touching your skin, the pressure fluctuations in the air perceived as sound. The next layer is the origin of these signals, say, the screen emitting the photon. Then the next layer is whatever processor gave rise to that photon, and so on. Depending on how solipsistic you feel, you can imagine these layers extending outward or inward.

The more layers there are, the harder it becomes to reconstruct the origin of a signal, and the less real the origin appears. A person appears much more real if they are stepping on your feet than if they are sending you an image of a shoe. Also, as optical illusions tell us, the signal reconstruction can be quite difficult, which twists our perception of reality. And let us not even start with special relativistic image distortions, which require quite some processing to get right.

Our assessment of how direct or indirect a measurement is, and of how real the object measured appears, is not fixed and may change over time with technological advances. It was, for example, historically much debated whether atoms could be considered real if they could not be seen by eye. But modern electron microscopes can now produce images of single atoms, a much more direct measurement than inferring the existence of atoms from chemical reactions. As the saying goes, “seeing is believing.” Seeing photos from the surface of Mars has likewise moved Mars into another equivalence class of reality, one that is much closer to our sensory input. Doesn’t Mars seem so much more real now?

[Surface of Mars. Image Source: Wikipedia]


Quarks have posed a particular problem for the question of reality since they cannot be directly measured due to confinement. In fact, many people in the early days of the quark model, Gell-Mann himself included, didn’t believe quarks to be real, but were thinking of them as calculational devices. I don’t really see the difference. We infer their properties through various layers of reasoning. Quarks are not in a reality class that is anywhere close to direct sensory input, but they have certainly become more real to us as our confidence in the theory necessary to extract information from the data has increased. These theories are now so well established that quarks are considered as real as other particles that are easier to measure, fapp - for all practical physicists.

It’s about at the advent of quantum field theory that the case of scientific realism starts getting complicated. Philosophers separate into two major camps, ontological realism and structural realism. The former believes that the objects of our theories are somehow real, the latter that it’s the structure of the theory instead. Effective field theories basically tell you that ontological realism only makes sense in layers, because you might have different objects depending on the scale of resolution. But even then, with seas of virtual particles, different bases in the Hilbert space, and different pictures of time evolution, the objects that should be at the core of ontological realism seem ill-defined. And that’s not even taking into account that the notion of a particle also depends on the observer.

From what I can extract from Dawid’s book, it hasn’t been looking good for ontological realism for a while, but it’s an ongoing debate, and it’s here that string theory became relevant.

Some dualities between different theories have been known for a long time. A duality can relate theories that have a different field content and different symmetries. That by itself is a death knell to anything ontological, for if you have two different sets of fields by which you can describe the same physics, what is the rationale for calling one more real than the other? Dawid writes:
“dualities… are thoroughly incompatible with ontological scientific realism.”
String theory has now not only popularized the existence of dualities and forced philosophers to deal with them, it has also served to demonstrate that theories can be dual to each other that are structurally very different, such as a string theory in one space and a gauge theory in a space of lower dimension. So one is now similarly at a loss to decide which structure is more real than the other.

To address this, Dawid suggests instead thinking of “consistent structure realism,” by which he seems to mean that we need to take the full “consistent structure” (i.e., string theory) and interpret this as being the fundamentally “real” thing.

As far as I am concerned, both sides of a duality are equally real, or equally unreal, depending on how convincing you think the inference of either theory from existing data is. They’re both in the same equivalence class; in fact the duality itself provides the equivalence relation. So suppose you have convincing evidence for some string-theory-derived duality to be a good description of nature; does that mean the whole multiverse is equally real? No, because the rest of the multiverse only follows through an even longer chain of reasoning. Either you come up with a mechanism that produces the other universes (as in eternal inflation or the many-worlds interpretation) and then find support for that, or the multiverse moves to the same class of reality as the number seven, somewhere behind Snoopy and the Yeti.

So the property of being real is not binary, but rather it is infinitely layered. It is also relative and changes over time, for the effort that you must make to reconstruct a concept or an image isn’t the same as the effort I might have to make. Quarks become more real the better we understand quantum chromodynamics, in the same way that you are more real to yourself than you are to me.

I still don’t know if strings as the fundamental building blocks of elementary particles can ever reach a reality level comparable to quarks, or if there is any conceivable measurement at all, no matter how indirect. Though one could rightfully argue that in some people’s minds strings already exist beyond any doubt. And if you’re a brain in a jar, that’s all that matters, really.




Saturday, May 23, 2015

How to make a white dwarf go supernova

Black holes seem to be the most obvious explanation for the finding that galaxies harbor “dark” matter that doesn’t emit light but makes itself noticeable by its gravitational pull. While the simplicity of the idea is appealing, we know that it is wrong. Black holes are so compact that they cause noticeable gravitational lensing, and our observations of gravitational lensing events allow us to estimate that black holes are not enough to explain dark matter. Ruling out this option works well for black holes with masses about that of the Sun, 10³³ grams, or heavier, and down to about 10²⁴ grams. Gravitational lensing however cannot tell us much about lighter black holes because the lensing effect isn’t strong enough to be observable.

Black holes lighter than about a solar mass cannot be created by stellar collapse; they must be created in the very early universe from extreme density fluctuations. These “primordial” black holes can be very light, and the lightest of them could be decaying right now due to Hawking radiation. That we have not seen any evidence for such a decay tells us that primordial black holes, if they exist at all, exist only in an intermediate mass range, somewhere between 10¹⁷ and 10²⁴ grams. It has turned out to be difficult to say anything about the prevalence of primordial black holes in this intermediate mass regime.
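To get a feeling for these numbers, here is a little back-of-the-envelope sketch in Python, using the standard leading-order Hawking lifetime t ≈ 5120 π G² M³ / (ħ c⁴). The bounds quoted above come from far more careful analyses, so take this purely as an order-of-magnitude illustration.

```python
# Rough sketch: leading-order Hawking evaporation time of a black hole,
# t ~ 5120*pi*G^2*M^3 / (hbar*c^4), ignoring graybody factors and the
# particle content of the Standard Model. Order-of-magnitude only.
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
hbar = 1.055e-34   # reduced Planck constant, J s
AGE_OF_UNIVERSE = 4.35e17  # s, roughly 13.8 billion years

def evaporation_time(mass_grams):
    m = mass_grams * 1e-3  # grams -> kg
    return 5120 * math.pi * G**2 * m**3 / (hbar * c**4)

for mass in (1e14, 1e17, 1e22):  # grams
    t = evaporation_time(mass)
    print(f"M = {mass:.0e} g  ->  t ~ {t:.1e} s "
          f"({t / AGE_OF_UNIVERSE:.1e} x age of universe)")
```

Running this shows why the window opens near 10¹⁷ grams: lighter primordial black holes would have evaporated (or be loudly evaporating) by now, heavier ones outlive the universe by many orders of magnitude.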

In a recent paper, three researchers from Berkeley and Stanford now point out that these primordial black holes, and probably also other types of massive compact dark matter, should make themselves noticeable by igniting supernovae:
    Dark Matter Triggers of Supernovae
    Peter W. Graham, Surjeet Rajendran, Jaime Varela
    arXiv:1505.04444 [hep-ph]
The authors note that white dwarfs linger close to the threshold of nuclear fusion, and their idea is that a black hole passing through the white dwarf could initiate a runaway fusion process.

A white dwarf is what you get when a star has exhausted its supply of hydrogen but isn’t heavy enough to create a neutron star or even a black hole. After a star has used up the hydrogen in its core, the core starts collapsing, and hydrogen fusion to helium continues for a while in the outer shell, creating a red giant. The core contracts under gravitational pressure, which increases the temperature and allows fusion of heavier elements. What happens next depends on the initial star’s total mass, which determines how high the core pressure can get.

If the star was very light, fusion into elements heavier than carbon and oxygen is not possible. The remaining object - the “white dwarf” - is at first very hot and dense but, since it has no energy supply, it will go on to cool. In a white dwarf, electrons are not bound to nuclei but instead form a uniformly distributed gas. The pressure counteracting the gravitational pull and thereby stabilizing the white dwarf is the degeneracy pressure of this electron gas, caused by the Pauli exclusion principle. In contrast to the pressure that we are used to from ideal gases, the degeneracy pressure does not change much with temperature.
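For the record, the textbook expression for the pressure of a non-relativistic degenerate electron gas (the standard result, not a formula from the paper) is

\[
P_{\rm deg} \;\approx\; \frac{(3\pi^2)^{2/3}}{5}\,\frac{\hbar^2}{m_e}\, n_e^{5/3},
\]

where \(n_e\) is the electron number density and \(m_e\) the electron mass. The temperature doesn’t appear at leading order, which is exactly the point: the ideal-gas pressure \(P = n k_B T\) it replaces would instead grow with temperature.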

In their paper the authors study under which circumstances it is possible to ignite nuclear fusion in a white dwarf. Each fusion reaction between two of the white dwarf’s carbon nuclei produces enough energy to cause more fusion, thereby creating the possibility of a runaway process. Whether or not a fusion reaction can be sustained depends on how quickly the temperature spreads into the medium away from the site of fusion. The authors estimate that white dwarfs might fail under fairly general circumstances to spread out the temperature quickly enough, thereby enabling a continuously fed fusion process.
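Schematically (this is my paraphrase of the kind of timescale comparison such estimates rest on, not the authors’ notation), the runaway condition is a competition between two timescales:

\[
\tau_{\rm fusion}(T,\rho) \;\lesssim\; \tau_{\rm diffusion} \sim \frac{\lambda^2}{\alpha},
\]

where \(\tau_{\rm fusion}\) is the time the carbon burning needs to release its energy in a heated region of size \(\lambda\), and \(\alpha\) is the thermal diffusivity of the white dwarf matter. If heat is produced faster than it can diffuse away, the burning sustains itself.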

Igniting such runaway fusion in white dwarfs is possible only because in these stellar objects an increase in temperature does not lead to an increase in pressure. If it did, then the matter would expand locally and the density would decrease, effectively stalling the fusion. This is what would happen if you tried to convince our Sun to fuse heavier elements. The process would not run away, but would stop very quickly.

Now if dark matter were made up of small black holes, then the black holes should hit other stellar objects every now and then, especially towards the centers of galaxies where the dark matter density is highest. A small black hole passing through a white dwarf at a velocity larger than the escape velocity would eat out a tunnel through the white dwarf’s matter. The black hole would slightly slow down as its mass increases but, more importantly, the black hole’s gravitational pull would accelerate the white dwarf’s matter towards it, which locally heats up the matter.

In the paper the authors estimate that for black holes in a certain mass regime this acceleration would be sufficient to ignite nuclear fusion. If the black hole is too light, then the acceleration isn’t high enough. If the black hole is too heavy, then the tunnel in the white dwarf is too large and the matter doesn’t remain dense enough. But for black holes in the mass range of about 10²² grams, the conditions are just right for nuclear fusion to be ignited.
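To put this mass scale into perspective, here is a quick sketch with rough numbers of my own (a 0.6 solar mass white dwarf with a 7000 km radius is an assumption, not a value taken from the paper): the Schwarzschild radius of a 10²² gram black hole and the escape velocity of such a white dwarf.

```python
# Rough numbers for the scenario: size of a ~1e22 g black hole and the
# escape velocity of a typical white dwarf. Illustrative values only.
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_SUN = 1.989e30     # kg

def schwarzschild_radius(m_kg):
    return 2 * G * m_kg / c**2

def escape_velocity(m_kg, r_m):
    return math.sqrt(2 * G * m_kg / r_m)

m_bh = 1e22 * 1e-3   # 1e22 grams in kg
m_wd = 0.6 * M_SUN   # assumed white dwarf mass
r_wd = 7.0e6         # assumed white dwarf radius in m

print(f"r_s of 1e22 g black hole : {schwarzschild_radius(m_bh):.1e} m")
print(f"white dwarf escape speed : {escape_velocity(m_wd, r_wd)/1e3:.0f} km/s")
```

The black hole is tiny, tens of nanometers across, while the infall speeds are thousands of kilometers per second, which is why the passage can heat the surrounding matter so effectively.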

I find this interesting for two reasons. First, because it allows the authors to put constraints on the prevalence of primordial black holes from the number of white dwarfs and supernovae we observe, and these are constraints in a mass regime that we don’t know much about. Second, because it suggests a potential new mechanism to ignite supernovae.

Figure 3 from arXiv:1505.04444. The expected rate of additional type Ia supernovae from primordial black holes igniting white dwarfs. It is assumed here that the primordial black holes make up most of the dark matter in galaxies.


The paper is based on rough analytical estimates and does not contain a detailed numerical study of a supernova explosion, which is a computationally very hard problem. One should thus remain somewhat skeptical as to whether the suggested ignition process will actually succeed in blowing up the whole white dwarf, or whether it will maybe just blast off a piece or rip the whole thing apart into quickly cooling pieces. I would love to see a visual simulation of this and hope that one will come along in the near future. Meanwhile, I’ve summed up my imagination in 30 frames :)


Monday, May 18, 2015

Book Review: “String Theory and the Scientific Method” by Richard Dawid

String Theory and the Scientific Method
By Richard Dawid
Cambridge University Press (2013)

“String Theory and the Scientific Method” is a very interesting and timely book by a philosopher trying to make sense out of trends in contemporary theoretical physics. Dawid has collected arguments that physicists have raised to demonstrate the promise of their theories, arguments that however are not supported by the scientific method as it is currently understood. He focuses on string theory, but some of his observations are more general than this.


There is, for example, the observation that physicists rely on mathematical consistency as a guide, even though this is clearly not an experimental assessment. A theory that isn’t mathematically consistent isn’t considered fundamentally valid, even if the inconsistency appears only in a regime where we do not yet have observations. I have to admit it wouldn’t even have occurred to me to call this a “non-empirical assessment,” because our use of mathematics is clearly based on the observation that it works very well to describe nature.

The three arguments that Dawid has collected which are commonly raised by string theorists to support their belief that string theory is a promising theory of everything are:
  1. Meta-inductive inference: The trust in a theory is higher if its development is based on extending existing successful research programs.
  2. No-alternatives argument: The more time passes in which we fail to find a theory as successful as string theory in combining quantum field theory with general relativity the more likely it is that the one theory we have found is unique and correct.
  3. Argument of unexpected explanatory coherence: A finding is perceived as more important if it wasn’t expected.
Dawid then basically argues that, since a lot of physicists are de facto no longer relying on the scientific method, maybe philosophers should face reality and come up with a better explanation that would alter the scientific method so that, according to the new method, the above arguments count as scientific.

In the introduction Dawid writes explicitly that he only studies the philosophical aspects of the development and not the sociological ones. My main problem with the book is that I don’t think one can separate these two aspects clearly. Look at the arguments that he raises: the No-Alternatives Argument and the Unexpected Explanatory Coherence are explicitly sociological. They are 1.) based on the observation that there exists a large research area which attracts much funding and many young people, and 2.) based on the observation that physicists trust their colleagues’ conclusions more if it wasn’t the conclusion they were looking for. How can you analyze the relevance of these arguments without taking into account sociological (and economic) considerations?

The other problem with Dawid’s argument is that he confuses the Scientific Method with the rest of the scientific process that happens in the communities. Science basically operates as a self-organized adaptive system, which is in the same class of systems as natural selection. For such systems to be able to self-optimize something – in the case of science, the use of theories for the description of nature – they must have a mechanism of variation and a mechanism for assessment of the variation, followed by feedback. In the case of natural selection the variation is genetic mixing and mutation, the assessment is whether the result survives, and the feedback is another round of reproduction. In science the variation is a new theory and the assessment is whether it agrees with experimental tests. The feedback is the revision or trashcanning of the theory. This assessment of whether a theory describes observation is the defining part of science – you can’t change this assessment without changing what science does, because it determines what we optimize for.

The assessments that Dawid, correctly, observes are a pre-selection that is meant to ensure we spend time only on those theories (gene combinations) that are promising. To make a crude analogy, we clearly do some pre-selection in our choice of partners that determines which genetic combinations are ever put to the test. These might be good choices or they might be bad choices, and as long as their success hasn’t also been put to the test, we have to be very careful about relying on them. It’s the same with the assessments that Dawid observes. Absent experimental test, we don’t know if using these arguments does us any good. In fact I would argue that if one takes into account sociological dynamics, one presently has a lot of reasons not to trust researchers to be objective and unbiased, which sheds much doubt on the use of these arguments.

Be that as it may, Dawid’s book has been very useful for me to clarify my thoughts about exactly what is going on in the community. I think his observations are largely correct, just that he draws the wrong conclusion. We clearly don’t need to update the scientific method, we need to apply it better, and we need to apply it in particular to better understand the process of knowledge discovery.

I might never again agree with David Gross on anything, but I do agree with his “pre-publication praise” on the cover. The book is highly recommendable reading for both physicists and philosophers.

I wasn’t able to summarize the arguments in the book without drawing a lot of sketches, so I made a 15-minute slideshow with my summary and comments on the book. If you have the patience, enjoy :)

Wednesday, May 13, 2015

Information transfer without energy exchange

While I was writing up my recent paper on classical information exchange, a very interesting new paper appeared on quantum information exchange:
    Information transmission without energy exchange
    Robert H. Jonsson, Eduardo Martin-Martinez, Achim Kempf
    Phys. Rev. Lett. 114, 110505 (2015)
    arXiv:1405.3988 [quant-ph]
I was visiting Achim’s group two weeks ago and we talked about this for a bit.

In their paper the authors study the communication channels in lower-dimensional spaces by use of thought experiments. If you do thought experiments, you need thought detectors. Named “Unruh-DeWitt detectors” after their inventors, such detectors are the simplest systems you can think of that detect something. It’s a system with two energy levels that couples linearly to the field you want to detect. A positive measurement results in an excitation of the detector’s state, and that’s pretty much it. No loose cables, no helium leaks, no microwave ovens.
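In formulas, the standard monopole-coupled version of such a detector (the textbook model in my rendering, not the paper’s notation) interacts with the field through

\[
H_{\rm int}(\tau) \;=\; \lambda\,\chi(\tau)\,\mu(\tau)\,\phi\big(x(\tau)\big),
\]

where \(\lambda\) is a small coupling constant, \(\chi(\tau)\) is a switching function that turns the detector on and off, \(\mu(\tau)\) is the two-level system’s monopole operator, and \(\phi(x(\tau))\) is the field evaluated along the detector’s worldline.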



Equipped with such a thought detector, you can then introduce Bob and Alice and teach them to exchange information by means of a quantum field, in the simplest case a massless scalar field. What they can do depends on the way the field is correlated with itself at distant points. In a flat space-time with three spatial dimensions, the field only correlates with itself on the lightcone. But in lower dimensions this isn’t so.
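One way to see the difference (this is the standard textbook result, not a formula taken from the paper) is the retarded Green’s function of the massless wave equation, with c = 1:

\[
G^{(3+1)}_{\rm ret}(t,r) \;=\; \frac{\delta(t-r)}{4\pi r},
\qquad
G^{(2+1)}_{\rm ret}(t,r) \;=\; \frac{\theta(t-r)}{2\pi\sqrt{t^2-r^2}}.
\]

In 3+1 dimensions the support sits sharply on the lightcone, while in 2+1 dimensions the theta function leaves a tail everywhere inside it, and it is this tail that Alice and Bob can exploit.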

The authors then demonstrate exactly how Alice can use the correlations to send information to Bob in two spatial dimensions, or 2+1 dimensional space-time as physicists like to say. They further show that Alice can send a signal without it drowning in quantum noise. Alice transmits information not by sending a quantum of the field, but by coupling and decoupling her detector to the field’s vacuum state. The correlations in the field then imply that whether her detector is coupled or not affects how the field excites Bob’s detector.

Now this information exchange between Bob and Alice is always slower than the speed of light so you might wonder why that is interesting. It is interesting because Alice doesn’t send any energy! While the switching of the detectors requires some work, this is a local energy requirement which doesn’t travel with the information.

Okay, you might say, fine, but we don’t live in 2+1 dimensional space-time. That’s right, but we don’t live in 3+1 dimensional flat space-time either: we live in a curved space-time. This isn’t further discussed in the paper, but the correlations allowing for this information exchange without energy can also exist in some curved backgrounds. The interesting question is then of course: in which backgrounds, and what does this mean for sending information into black holes? Do we really need to use quanta of energy for this, or is there a way to avoid it? And if it can be avoided, what does that mean for the information being stored in black holes?

I am sure we will hear more about this in the future...

Wednesday, May 06, 2015

Testing modified gravity with black hole shadows

Black hole shadow in the movie “Interstellar.” Image credit: Double Negative artists/DNGR/TM © Warner Bros. Entertainment Inc./Creative Commons (CC BY-NC-ND 3.0) license.

On my visit to Perimeter Institute last week, I talked to John Moffat, whose recent book “Cracking the Particle Code of the Universe” I much enjoyed reading. Talking to John is always insightful. He knows the ins and outs of both particle physics and cosmology, has an opinion on everything, and gives you a complete historical account along with it. I have learned a lot from John, especially about putting today’s community squabbles into a larger perspective.

John has dedicated much of his research to alternatives to the Standard Model and the cosmological Concordance Model. You might mistake him for being radical or having a chronic desire to be controversial, but I assure you neither is the case. The interesting thing about his models is that they are, on the very contrary, deeply conservative. He’s fighting the standard with the standard weapons. Much of his work goes largely ignored by the community for no particular reason other than that the question of what counts as an elegant model is arguably subjective. John is presently maybe best known as one of the few defenders of modified gravity as an alternative to dark matter made of particles.

His modified gravity (MOG), which he has been working on since 2005, is a covariant version of the more widely known MOdified Newtonian Dynamics (or MOND for short). It differs from Bekenstein’s Tensor-Vector-Scalar (TeVeS) model in the field composition; it also adds a vector field to general relativity, but then there are additional scalar fields and potentials for the fields. John and his collaborators claim they can fit all the evidence for dark matter with this model, including rotation curves, the acoustic peaks in the cosmic microwave background, and the Bullet Cluster.

I can understand that nobody really liked MOND, which didn’t really fit together with general relativity and was based on little more than the peculiar observation that galaxy rotation curves seem to deviate from the Newtonian prediction at a certain acceleration rather than at a certain radius. And TeVeS eventually necessitated the introduction of other types of dark matter, which made it somewhat pointless. I like dark matter because it’s a simple solution and also because I don’t really see any good reason why all matter should couple to photons. But I do have some sympathy for modifying general relativity, though having tried and failed to do it consistently has made me wary of the many pitfalls. As far as MOG is concerned, I don’t see a priori why adding a vector field and some scalar fields is any worse than adding a bunch of other fields for which we have no direct evidence and then giving them names like WIMPs or axions.

Quite possibly the main reason MOG isn’t getting all that much attention is that it’s arguably unexciting because, if correct, it just means that none of the currently running dark matter experiments will detect anything. What you really want is a prediction for something that can be seen rather than a prediction that nothing can be seen.

That’s why I find John’s recent paper about MOG very interesting, because he points out an observable consequence of his model that could soon be tested:
Modified Gravity Black Holes and their Observable Shadows
J. W. Moffat
European Physical Journal C (2015) 75:130
arXiv:1502.01677 [gr-qc]
In this paper, he has studied how black holes in this modification of gravity differ from those in ordinary general relativity, and in particular has calculated the size of the black hole shadow. As you might have learned from the movie “Interstellar,” black holes appear as dark disks surrounded by rings that are basically extreme lensing effects. The size of the disk in MOG depends on a parameter in the model that can be determined by fitting the galaxy rotation curves. Using this parameter, it turns out the black hole shadow should appear larger by a factor of about ten in MOG compared to general relativity.
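For scale, here is a quick estimate with rough numbers of my own, using the standard general-relativistic shadow diameter of about 2√27 GM/c² for a Schwarzschild black hole and the approximate mass and distance of Sgr A*, one of the Event Horizon Telescope’s prime targets; a shadow larger by a factor of ten would scale accordingly.

```python
# Rough estimate of the angular size of the shadow of Sgr A* in ordinary
# general relativity; a MOG shadow larger by a factor ~10 would scale
# this number accordingly. Illustrative values only.
import math

G = 6.674e-11          # m^3 kg^-1 s^-2
c = 2.998e8            # m/s
M_SUN = 1.989e30       # kg
PC = 3.086e16          # one parsec in m

M = 4.3e6 * M_SUN      # assumed mass of Sgr A*
D = 8.3e3 * PC         # assumed distance, ~8.3 kpc

shadow_diameter = 2 * math.sqrt(27) * G * M / c**2        # meters
theta_rad = shadow_diameter / D
theta_microarcsec = theta_rad * (180 / math.pi) * 3600 * 1e6

print(f"GR shadow diameter of Sgr A*: ~{theta_microarcsec:.0f} microarcseconds")
```

This comes out at roughly fifty microarcseconds, which is exactly the kind of resolution the Event Horizon Telescope is built to reach.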

So far nobody has seen a black hole shadow other than in the movies, but the Event Horizon Telescope will soon be looking for exactly that. It isn’t so much a telescope as a collaboration of many telescopes all over the globe, which allows for very long baseline interferometry with unprecedented precision. In principle they should be able to see the shadow.

What I don’t know though is whether the precision of both the radius of the shadow and the mass will be sufficient to distinguish between normal and modified general relativity in such an experiment. I am also not really sure that the black hole solution in the paper is really the most general solution one can obtain in this type of model, or whether, if it isn’t, there is some way to backpedal to another solution if the data doesn’t fulfill hopes. And then the paper contains the somewhat ominous remark that the value used for the deviation parameter might not be applicable to the black holes the Event Horizon Telescope has set its eyes on. So there are some good reasons to be skeptical of this, and as scientists always say, “more work is needed.” Be that as it may, if the Event Horizon Telescope does see a shadow larger than expected, then this would clearly be a very strong case for modified gravity.