Friday, September 26, 2014

Black holes declared non-existent again.

That's me. 

The news of the day is that Laura Mersini-Houghton has allegedly shown that black holes don’t exist. The headlines refer to these two papers: arXiv:1406.1525 and arXiv:1409.1837.

The first is an analytical estimate, the second a numerical study of the same idea. Before I tell you what these papers are about, a disclaimer: I know Laura; we have met at various conferences, and I’ve found her to be very pleasant company. I read her new paper some while ago and was hoping I wouldn’t have to comment on this, but my inbox is full of people asking me what this is all about. So what can I do?

In their papers, Laura Mersini-Houghton and her collaborator Harald Pfeiffer have taken into account the backreaction from the emitted Hawking radiation on the collapsing mass which is normally neglected. They claim to have shown that the mass loss is so large that black holes never form to begin with.

To make sense of this, note that black hole radiation is produced by the dynamics of the background and not by the presence of a horizon. The horizon is the reason the final state lacks information, but the particle creation itself does not necessitate a horizon. The radiation starts before horizon formation, which means that the mass that is left to form the black hole is actually less than the mass that initially collapsed.

Physicists have studied this problem back and forth for decades, and the majority view is that this mass loss from the radiation does not prevent horizon formation. This shouldn’t be much of a surprise because the temperature of the radiation is tiny, and it’s even tinier before horizon formation. You can look, e.g., at this paper, 0906.1768, and references [3-16] therein to get an impression of this discussion. Note though that this paper also mentions that it has been claimed before every now and then that the backreaction prevents horizon formation, so it’s not like everyone agrees. Then again, this could be said about pretty much every topic.

Now what one does to estimate the backreaction is to first come up with a time-dependent emission rate. This is already problematic because the normal Hawking radiation is only the late-time radiation and is time-independent. What is clear however is that the temperature before horizon formation is considerably smaller than the Hawking temperature, and it drops very quickly the farther away the mass is from horizon formation. Incidentally, this drop was the topic of my master’s thesis. Since it’s not thermal equilibrium one actually shouldn’t speak of a temperature. In fact the energy spectrum isn’t quite thermal, but since we’re only concerned with the overall energy the spectral distribution doesn’t matter here.

The next problem is that you will have to model some collapsing matter and take into account the backreaction during collapse. Quite often people use a collapsing shell for this (as I did in my master’s thesis). Shells however are pathological because if they are infinitely thin they must have an infinite energy density and are by themselves already quantum gravitational objects. If the shell isn’t infinitely thin, then the width isn’t constant during collapse. So either way, it’s a mess and you’d best do it numerically.

What you do next is take that approximate temperature, which now depends on some proper time in which the collapse proceeds. This temperature gives, via the Stefan-Boltzmann law, a rate for the mass loss with time. You integrate the mass loss over time and subtract the integral from the initial mass. Or at least that’s what I would have done.
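
To make this concrete, here is a minimal sketch in Python of the procedure I just described, not of the calculation in the papers. It uses the late-time Hawking temperature as a stand-in for the (much lower) pre-horizon temperature, so it overestimates the effect, and the solar mass and the assumed collapse duration are only illustrative:

    # Minimal sketch: integrate a Stefan-Boltzmann mass loss during collapse.
    # The late-time Hawking temperature is used as an (over)estimate for the
    # pre-horizon emission; mass and collapse duration are illustrative.
    import math

    G, c, hbar, kB = 6.674e-11, 2.998e8, 1.055e-34, 1.381e-23
    sigma = 5.670e-8                      # Stefan-Boltzmann constant, W m^-2 K^-4

    def hawking_temperature(M):
        """Late-time Hawking temperature (in K) of a mass M (in kg)."""
        return hbar * c**3 / (8 * math.pi * G * M * kB)

    def mass_loss_rate(M):
        """dM/dt in kg/s from Stefan-Boltzmann emission off the horizon area."""
        r_s = 2 * G * M / c**2
        luminosity = sigma * 4 * math.pi * r_s**2 * hawking_temperature(M)**4
        return -luminosity / c**2

    M0 = 2.0e30                           # a solar-mass collapse, in kg
    collapse_time, steps = 1.0, 1000      # assume ~1 s for the final collapse
    dt = collapse_time / steps
    M, radiated = M0, 0.0
    for _ in range(steps):
        dm = mass_loss_rate(M) * dt       # mass radiated away in this time step
        radiated += -dm
        M += dm

    print(f"fraction of the initial mass radiated: {radiated / M0:.1e}")
    # comes out around 1e-76 -- nowhere near enough to prevent horizon formation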

It is not what Mersini-Houghton and Pfeiffer have done though. What they seem to have done is the following. Hawking radiation has a negative energy component. Normally negative energies are actually anti-particles with positive energies, but not so in black hole evaporation. The negative energy particles though only exist inside the horizon. Now in Laura’s paper, the negative energy particles exist inside the collapsing matter, but outside the horizon. Next, she doesn’t integrate the mass loss over time and subtract this from the initial mass, but integrates the negative energies over the inside of the mass and subtracts that integral from the initial mass. At least that is my reading of Equation IV.10 in 1406.1525, and equation 11e in 1409.1837 respectively. Note that there is no time-integration in these expressions, which puzzles me.

The main problem I have with this calculation is that the temperature that enters the mass-loss rate is, for all I can see, that of a black hole and not that of some matter which might be far from horizon crossing. In fact it looks to me like the total mass that is lost increases with increasing radius, which I think it shouldn’t. The more dispersed the mass, the smaller the gravitational tidal force, and the smaller the effect of particle production in curved backgrounds should be. So much for the analytical estimate. In the numerical study I am not sure what is being done because I can’t find the relevant equation, which is the dependence of the luminosity on the mass and radius.

In summary, the recent papers by Mersini-Houghton and Pfeiffer contribute to a discussion that is decades old, and it is good to see the topic being taken up with the numerical power of today. I am skeptical that their treatment of the negative energy flux is consistent with the expected emission rate during collapse. Their results are surprising and in contradiction with many previously found results. It is thus too early to claim that it has been shown that black holes don’t exist.

Monday, September 22, 2014

Discoveries that weren’t

This morning’s news, as anticipated, is that the new Planck data renders the BICEP2 measurement of relic gravitational waves inconclusive. It might still be there, the signal, but at least judging by the presently existing data analysis one can’t really say whether it is or isn’t.

Discoveries that vanish into dust, or rather “background” as the technical term has it, are of course nothing new in physics. In 1984, for example, the top quark was “discovered” with a mass of about 40 GeV:

Physicists may have found 'top' quark
By Robert C. Cowen, Staff writer of The Christian Science Monitor
MAY 9, 1984

“Particle physicists appear to be poised for a breakthrough in their quest for the underlying structure of matter. Puzzling phenomena have appeared at energies where present theory predicted there was little left to uncover. This indicates that researchers may have come across an unsuspected, and possibly rich, field in which to make new discoveries. Also, a team at the European Center for Nuclear Research (CERN) at Geneva may have found the long-sought 'top' quark. Protons, neutrons, and related particles are believed to be made up of combinations of more basic entities called quarks.”
This signal turned out to be a statistical fluctuation. The top quark wasn’t really discovered until 1995 with a mass of about 175 GeV. Tommaso tells the story.

The Higgs too was already “discovered” in 1984, at the Crystal Ball Experiment at DESY, with a mass of about 9 GeV. It even made it into the NY Times:
PHYSICISTS REPORT MYSTERY PARTICLE
By WALTER SULLIVAN
Published: August 2, 1984

“A new subatomic particle whose properties apparently do not fit into any current theory has been discovered by an international team of 78 physicists at DESY, a research center near Hamburg, West Germany. The group has named the particle zeta […] As described yesterday to a conference at Stanford, the zeta particle has some, but not all, of the properties postulated for an important particle, called the Higgs particle, whose existence has yet to be confirmed.”
Also in 1984, Supersymmetry came and went in the UA1 experiment [ht JoAnne]:
“Experimental observation of events with large missing transverse energy accompanied by a jet or a photon (S) in p \bar p collisions at \sqrt{s} = 540 GeV
UA1 Collaboration

We report the observation of five events in which a missing transverse energy larger than 40 GeV is associated with a narrow hadronic jet and of two similar events with a neutral electromagnetic cluster (either one or more closely spaced photons). We cannot find an explanation for such events in terms of backgrounds or within the expectations of the Standard Model.”
And the year 1996 saw a quark substructure come and go. The New York Times reported:
Tiniest Nuclear Building Block May Not Be the Quark
By MALCOLM W. BROWNE
Published: February 8, 1996

“Scientists at Fermilab's huge particle accelerator 30 miles west of Chicago reported yesterday that the quark, long thought to be the simplest building block of nuclear matter, may turn out to contain still smaller building blocks and an internal structure.”
Then there is the ominous pentaquark that comes and goes, the anisotropic universe [ht Ben], the left-handed universe [ht Ethan], and the infamous OPERA anomaly that was a loose cable - and these are only the best known ones. The BICEP2 story is remarkable primarily because the initial media reports, based on the collaboration’s own press releases, so vastly overstated the confidence of the results.

The evidence for relic gravitational waves is a discussion that will certainly be continued for at least a decade or so. My prediction is that in the end, after loads of data analysis, they will find the signal just where they expected it. And that is really the main difference between the BICEP announcement and the superluminal OPERA neutrinos: In the case of the gravitational waves everybody thought the signal should be there. In the case of the superluminal neutrinos everybody thought it should not be there.

The OPERA collaboration was heavily criticized for making such a big announcement out of a result that was most likely wrong, and one can debate whether or not they did the right thing. But at least they amply warned everybody that the result was likely wrong.

Saturday, September 13, 2014

Is there a smallest length?

Good ideas start with a question. Great ideas start with a question that comes back to you. One such question that has haunted scientists and philosophers for thousands of years is whether there is a smallest unit of length, a shortest distance below which we cannot resolve structures. Can we look closer and always closer into space, time, and matter? Or is there a limit, and if so, what is the limit?

I picture our distant ancestors sitting in their cave watching the world in amazement, wondering what the stones, the trees and they themselves are made of – and starving to death. Luckily, those smart enough to hunt down the occasional bear eventually gave rise to a human civilization sheltered enough from the harshness of life to let the survivors get back to watching and wondering what we are made of. Science and philosophy in earnest are only a few thousand years old, but the question whether there is a smallest unit has always been a driving force in our studies of the natural world.

The ancient Greeks invented atomism, the idea that there is an ultimate and smallest element of matter that everything is made of. Zeno’s famous paradoxes sought to shed light on the possibility of infinite divisibility. The question came back with the advent of quantum mechanics, with Heisenberg’s uncertainty principle that fundamentally limits the precision with which we can measure. It became only more pressing with the divergences in quantum field theory that are due to the inclusion of infinitely short distances.

It was in fact Heisenberg who first suggested that divergences in quantum field theory might be cured by the existence of a fundamentally minimal length, and he introduced it by making position operators non-commuting among themselves. Just as the non-commutativity of momentum and position operators leads to an uncertainty principle, the non-commutativity of position operators among themselves limits how well distances can be measured.

Heisenberg’s main worry, which the minimal length was supposed to deal with, was the non-renormalizability of Fermi’s theory of beta-decay. This theory however turned out to be only an approximation to the renormalizable electro-weak interaction, so he had to worry no more. Heisenberg’s idea was forgotten for some decades, then picked up again and eventually grew into the area of non-commutative geometries. Meanwhile, the problem of quantizing gravity appeared on stage and with it, again, non-renormalizability.

In the mid 1960s Mead reinvestigated Heisenberg’s microscope, the argument that led to the uncertainty principle, with (unquantized) gravity taken into account. He showed that gravity amplifies the uncertainty so that it becomes impossible to measure distances below the Planck length, about 10^-33 cm. Mead’s argument was forgotten, then rediscovered in the 1990s by string theorists who had noticed that using strings to prevent divergences by avoiding point-interactions also implies a finite resolution, if in a technically somewhat different way than Mead’s.
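
In its standard heuristic form (a schematic version of the argument, not Mead’s detailed treatment), gravity adds a term to Heisenberg’s relation: the probing photon’s momentum disturbs the very region it is supposed to resolve, and the two contributions cannot both be made small:

    \Delta x \;\gtrsim\; \frac{\hbar}{\Delta p} \;+\; \frac{G\,\Delta p}{c^{3}}
             \;=\; \frac{\hbar}{\Delta p} \;+\; \frac{\ell_{\rm Pl}^{2}\,\Delta p}{\hbar}\,,
    \qquad
    \Delta x_{\rm min} \;\sim\; \ell_{\rm Pl} \;\approx\; 10^{-33}\,{\rm cm}\,.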

Since then the idea that the Planck length may be a fundamental length beyond which there is nothing new to find, ever, appeared in other approaches towards quantum gravity, such as Loop Quantum Gravity or Asymptotically Safe Gravity. It has also been studied as an effective theory by modifying quantum field theory to include a minimal length from scratch, and often runs under the name “generalized uncertainty”.

One of the main difficulties with these theories is that a minimal length, if interpreted as the length of a ruler, is not invariant under Lorentz-transformations due to length contraction. This problem is easy to overcome in momentum space, where it is a maximal energy that has to be made Lorentz-invariant, because momentum space is not translationally invariant. In position space one either has to break Lorentz-invariance or deform it and give up locality, which has observable consequences, and not always desired ones. Personally, I think it is a mistake to interpret the minimal length as the length of a ruler (a component of a Lorentz-vector), and it should instead be interpreted as a Lorentz-invariant scalar to begin with, but opinions on that matter differ.

The science and history of the minimal length has now been covered in a recent book by Amit Hagar:

Amit is a philosopher but he certainly knows his math and physics. Indeed, I suspect the book would be quite hard to understand for a reader without at least some background knowledge in math and physics. Amit has made a considerable effort to address the topic of a fundamental length from as many perspectives as possible, and he covers a lot of scientific history and philosophical considerations that I had not previously been aware of. The book is also noteworthy for including a chapter on quantum gravity phenomenology.

My only complaint about the book is its title because the question of discrete vs continuous is not the same as the question of finite vs infinite resolution. One can have a continuous structure and yet be unable to resolve it beyond some limit (this is the case when the limit makes itself noticeable as a blur rather than a discretization). On the other hand, one can have a discrete structure that does not prevent arbitrarily sharp resolution (which can happen when localization on a single base-point of the discrete structure is possible).

(Amit’s book is admittedly quite pricey, so let me add that he said, should sales numbers reach 500, Cambridge University Press will put a considerably less expensive paperback version on offer. So tell your library to get a copy and let’s hope we’ll make it to 500 so it becomes affordable for more of the interested readers.)

Every once in a while I think that there maybe is no fundamentally smallest unit of length, that all these arguments for its existence are wrong. I like to think that we can look infinitely close into structures and will never find a final theory, turtles upon turtles, or that structures are ultimately self-similar and repeat. Alas, it is hard to make sense of the romantic idea of universes in universes in universes mathematically, not that I didn’t try, and so the minimal length keeps coming back to me.

Many if not most endeavors to find observational evidence for quantum gravity today look for manifestations of a minimal length in one way or the other, such as modifications of the dispersion relation, modifications of the commutation-relations, or Bekenstein’s tabletop search for quantum gravity. The properties of these theories are today a very active research area. We’ve come a long way, but we’re still out to answer the same questions that people asked themselves thousands of years ago.


This post first appeared on Starts With a Bang with the title "The Smallest Possible Scale in the Universe" on August 12, 2014.

Thursday, September 11, 2014

Experimental Search for Quantum Gravity – What is new?

Last week I was at SISSA in Trieste for the 2014 conference on “Experimental Search for Quantum Gravity”. I missed the first two days because of child care problems (Kindergarten closed during holiday season, the babysitter ill, the husband has to work), but Stefano Liberati did a great job with the summary talk on the last day, so here is a community update.

The briefest of brief summaries is that we still have no experimental evidence for quantum gravity, but then you already knew this. During the last decade, the search for experimental evidence for quantum gravity has focused mostly on deviations from Lorentz-invariance and strong quantum gravity in the early universe that might have left imprints on the cosmological observables we measure today. The focus on these two topics is still present, but we now have some more variety which I think is a good development.

There is still lots of talk about gamma ray bursts and the constraints on deformations of Lorentz-invariance that can be derived from them. One has to distinguish these constraints on deformations from constraints on violations of Lorentz-invariance. In the latter case one has a preferred frame, in the former case not. Violations of Lorentz-invariance are very strongly constrained already. But to derive these constraints one makes use of an effective field theory approach, that is, one assumes that whatever quantum gravity looks like at high energies (close to the Planck scale), at small energies it must be describable by the quantum field theories of the standard model plus some additional, small terms.

Deformations of Lorentz-symmetry are said to not have an effective field theory limit and thus these constraints cannot be applied. I cautiously say “are said not to have” such a limit because I have never heard a good argument why such a limit shouldn’t exist. For all I can tell it doesn’t exist just because nobody working on this wants it to exist. In any case, without this limit one cannot use the constraints on the additional interaction terms and has to look for other ways to test the model.

This is typically done by constraining the dispersion relation for free particles, which acquires small correction terms. These corrections to the dispersion relation affect the speed of massless particles, which now is energy-dependent. The effects of the deformation become larger with long travel times and large energies, which is why highly energetic gamma ray bursts are so interesting. The deformation would make itself noticeable by either speeding up or slowing down the highly energetic photons, depending on the sign of a parameter.
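
For a rough feeling of the numbers, here is a sketch of the accumulated time delay for a modification linear in the energy. It ignores the proper cosmological redshift integration, and the example photon energy and distance are made-up but typical values:

    # Time delay for v ~ c (1 - xi * E/E_Planck), neglecting the redshift
    # integration. The photon energy and distance below are illustrative.
    E_PLANCK_GEV = 1.22e19        # Planck energy in GeV
    C = 2.998e8                   # speed of light in m/s
    GPC_IN_M = 3.086e25           # one gigaparsec in meters

    def time_delay(photon_energy_gev, distance_gpc, xi=1.0):
        """Delay in seconds relative to a low-energy photon."""
        travel_time = distance_gpc * GPC_IN_M / C
        return xi * (photon_energy_gev / E_PLANCK_GEV) * travel_time

    # a 10 GeV photon from a burst roughly 1 Gpc away:
    print(f"{time_delay(10.0, 1.0):.3f} s")
    # about 0.1 s, much longer than the millisecond variability of some bursts,
    # which is what makes Planck-scale sensitivity possible at all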

Current constraints put the limits roughly at the Planck scale if the modification either slows down or speeds up the photons. Putting constraints on the case where the deformation is stochastic (sometimes speeding up, sometimes slowing down) is more difficult, and so far there haven’t been any good constraints on this. Jonathan Granot briefly flashed some constraints on the stochastic case, but said he can’t spill the details yet, some collaboration issue. He and collaborators do however have a paper coming out within the next months that I expect will push the stochastic case up to the Planck scale as well.

On the other hand we heard a talk by Giacomo Rosati who argues that to derive these bounds one uses the normal expansion of the Friedmann-Robertson-Walker metric, but that the propagation of particles in this background should be affected by the deformed theory as well, which weakens the constraints somewhat. Well, I can see the rationale behind the argument, but after 15 years the space-time picture that belongs to deformed Lorentz-invariance is still unclear, so this might or might not be the case. There were some other theory talks that try to get this space-time picture sorted out but they didn’t make a connection to phenomenology.

Jakub Mielczarek was at the meeting talking about the moment of silence in the early universe and how to connect this to phenomenology. In this model for the early universe space-time makes a phase-transition from a Euclidean regime to the present Lorentzian regime, and in principle one should be able to calculate the spectral index from this model, as well as other cosmological signatures. Alas, it’s not a simple calculation and progress is slow since there aren’t many people working on it.

Another possible observable from this phase-transition may be leftover defects in the space-time structure. Needless to say, I like that very much because I was talking about my model for space-time defects that basically is a parameterization of this possibility in general (slides here). It would be great if one could connect these parameters to some model about the underlying space-time structure.

The main message of my talk is that if you want to preserve Lorentz-invariance, as my model does, then you shouldn’t look at high energies because that’s not a Lorentz-invariant statement to begin with. You should look instead at wave-functions sweeping over large world-volumes. This typically means low energies and large distances, which is not a regime that presently gets a lot of attention when it comes to quantum gravity phenomenology. I certainly hope this will change within the next years because it seems promising to me. Well, more promising than the gamma ray bursts anyway.

We also heard Joao Magueijo in his no-bullshit style explaining that modified dispersion relations in the early universe can reproduce most achievements of inflation, notably the spectral index including the tilt and solving the horizon problem. This becomes possible because an energy-dependence in the speed of light together with redshift during expansion turns the energy-dependence into a time-dependence. If you haven’t read his book “Faster Than the Speed of Light”, I assure you you won’t regret it.

The idea of dimensional reduction is still popular but experimental consequences, if any, come through derived concepts such as a modified dispersion relation or early universe dynamics, again.

There was of course some discussion of the BICEP claim that they’ve found evidence for relic gravitational waves. Everybody who cared to express an opinion seemed to agree with me that this isn’t the purported evidence for quantum gravity that the press made out of it, even if the measurement was uncontroversial and statistically significant.

As we discussed in this earlier post, to begin with this doesn’t test quantum gravity at high energies but only the perturbative quantization of gravity, which for most of my colleagues isn’t really quantum gravity. It’s the high energy limit that we do not know how to deal with. And even to claim that it is evidence for perturbative quantization requires several additional assumptions that may just not be fulfilled, for example that there are no non-standard matter couplings and that space-time and the metric on it exist to begin with. This may just not be the case in a scenario with a phase-transition or with emergent gravity. I hope that next time the media picks up the topic they care to talk to somebody who actually works on quantum gravity phenomenology.

Then there was a member from the Planck collaboration whose name I forgot, who tried to say something about their analysis of the foreground effects from the galactic dust that BICEP might not have accurately accounted for. Unfortunately, their paper isn’t finished and he wasn’t really allowed to say all that much. So all I can tell you is that Planck is pretty much done with their analysis and the results are with the BICEP collaboration which I suppose is presently redoing their data fitting. Planck should have a paper out by the end of the month we’ve been told. I am guessing it will primarily say there’s lots of uncertainty and we can’t really tell whether the signal is there or isn’t, but look out for the paper.

There was also at the conference some discussion about the possibility of testing quantum gravitational effects in massive quantum systems, as suggested for example by Igor Pikovski et al. This is a topic we previously discussed here, and I still think it is extremely implausible. The Pikovski et al paper is neither the first nor the last to have proposed this type of test, but it is arguably the one that got the most attention because they managed to get published in Nature Physics. These experiments are supposed to test basically the same deformation that the gamma ray bursts also test, just on the level of commutation relations in quantum mechanics rather than in the dispersion relation (the former leads to the latter, the opposite is not necessarily so).
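
For concreteness, a commonly used parameterization of such a deformed commutation relation is the following (one variant among several; I am not claiming it is exactly the form used in the proposal):

    [\hat x, \hat p] \;=\; i\hbar\left(1 + \beta\,\hat p^{\,2}\right)\,,
    \qquad \beta \;\sim\; \frac{1}{(M_{\rm Pl}\,c)^{2}}\,,

which implies a minimal position uncertainty and, as described above, corrections to the dispersion relation.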

The problem is that in this type of theory nobody really knows how to get from the one-particle case to the many-particle case, which is known as the ‘soccer-ball-problem’. If one naively just adds the energies of particles, one finds that the corrections blow up when one approaches the Planck mass, which is about 10^-5 grams. That doesn’t make a lot of sense - to begin with because we wouldn’t reproduce classical mechanics, but also because quantum gravitational effects shouldn’t scale with the energy but with the energy density. This means that the effects should get smaller for systems composed of many particles. In this case then, you cannot get good constraints on quantum gravitational effects in the proposed experiments. That doesn’t mean one shouldn’t do the experiment. This is new parameter space in quantum mechanics and one never knows what interesting things one might find there. I’m just saying don’t expect any quantum gravity there.
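
A back-of-the-envelope comparison of the two scalings just mentioned; the “per constituent” line is my stand-in for an effect that scales with the energy density rather than the total energy, and the numbers are only order-of-magnitude:

    # Compare the naive "sum the single-particle corrections" scaling with a
    # scaling by energy per constituent. Values are rough order-of-magnitude.
    E_PLANCK_GEV = 1.22e19          # Planck energy in GeV
    NUCLEON_GEV = 0.94              # rest energy of a nucleon in GeV

    def naive_correction(n_particles):
        """Relative correction if single-particle corrections simply add up."""
        return n_particles * NUCLEON_GEV / E_PLANCK_GEV

    def per_constituent_correction(n_particles):
        """Relative correction if the effect depends on energy per constituent."""
        return NUCLEON_GEV / E_PLANCK_GEV

    n = 6 * 10**18                  # nucleons in ~1e-5 grams of ordinary matter
    print(f"naive sum:       {naive_correction(n):.1e}")            # order one
    print(f"per constituent: {per_constituent_correction(n):.1e}")  # ~1e-19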

Also at the conference was Jonathan Miller, who I had been in contact with earlier about his paper in which he and his coauthor estimate whether the effect of gravitational bremsstrahlung on neutrino propagation is detectable (we discussed this here). It is an interesting proposal that I spent quite some time thinking about because they don’t make giant leaps of faith about the scaling of quantum gravitational effects. In this paper, it is plainly perturbatively quantized gravity.

However, after some thinking about this I came to the conclusion that while the cross-section that they estimate may be at the right order of magnitude for some cases (I am not too optimistic about the exact case that they discuss in the paper), the total probability for this to happen is still tiny. That is because, unlike the case of cross-sections measured at the LHC, for neutrinos scattering off a black hole one doesn’t have a high luminosity to bring up the chance of ever observing this. When I estimated the flux, the probability turned out to be too small to be observable by at least 30 orders of magnitude, i.e. what you typically expect for quantum gravity. Anyways, I had some interesting exchange with Jonathan who, needless to say, isn’t entirely convinced by my argument. So it’s not a settled story, and I’ll let you know what comes out of this.

Finally, I should mention that Carlo Rovelli and Francesca Vidotto talked about their Planck stars and the possible phenomenology that these could lead to. We previously discussed their idea here. They are arguing basically that quantum gravitational effects can be such that a black hole (with an apparent horizon, not an event horizon) does not slowly evaporate until it reaches the Planck mass, but suddenly explodes at a mass still much higher than the Planck mass, thereby releasing its information. If that was possible, it would sneak around all the issues with firewalls and remnants and so on. It might also have observable consequences, for these explosions might be detectable. However, this idea is still very much in its infancy and several people in the audience raised concerns similar to mine, whether this can work without violating locality and/or causality in the semi-classical limit. In any case, I am sure that we will hear more about this in the near future.

All together I am relieved that the obsession with gamma ray bursts seems to be fading, though much of this fading is probably due to both Giovanni Amelino-Camelia and Lee Smolin not being present at this meeting ;)

This was the first time I visited SISSA since they moved to their new building, which is no longer located directly at the coast. It is however very nicely situated on a steep hill, surrounded by hiking paths through the forest. The new SISSA building used to be a hospital, like the buildings that house Nordita in Stockholm. I’ve been told my office at Nordita is in what used to be the tuberculosis sector, and if I’m stuck with a computation I can’t help but wonder how many people died at the exact spot my desk stands now. As to SISSA, I hope that the conference was on what was formerly the pregnancy ward, and that the meeting, in spirit, may give birth to novel ideas how to test quantum gravity.

Monday, September 08, 2014

Science changed my life – and yours too.

Can you name a book that made you rethink? A song that helped you through bad times? A movie that gave you a new perspective, new hope, an idea that changed your life or that of people around you? And was it worth the price of the book, the download fee, the movie ticket? If you think of the impact it has had, does it come as a number in your currency of choice?

Those of us working in basic research today are increasingly forced to justify our work by its social impact, its value for the society we live in. That is a fair demand, because scientists paid by tax money should keep in mind who they are working for. But the impact that the funding agencies are after is expected to come in the form of applications, something that your neighbor will eventually be able to spend money on, to keep the economic wheels turning and the gears running.

It might take centuries for today’s basic research to result in technological applications, and predicting them is more difficult than doing the research itself. The whole point of doing basic research is that its impact is unpredictable. And so this pressure to justify what we are doing is often addressed by fantastic extrapolations of today’s research, potential gadgets that might come out of it, new materials, new technologies, new services. These justifications that we come up with ourselves are normally focused on material value, something that seems tangible to your national funding agency and your member of parliament who wants to be reelected.

But basic research has a long tail, and a soft one, that despite its softness has considerable impact that is often neglected. At our recent workshop for science writers, Raymond Laflamme gave us two great lectures on quantum information technology, the theory and the applications. Normally if somebody starts talking about qubits and gates, my brain switches off instantly, but amazingly enough listening to Laflamme made it sound almost comprehensible.

Here is the slide that he used to motivate the relevance of basic research (full pdf here):


Note how the arrows in the circle gradually get smaller. A good illustration of the high-risk, high-impact argument. Most of what we work on in basic research will never lead anywhere, but that which does changes our societies, rewards and refuels our curiosity, then initiates a new round in the circle.

Missing in this figure though is a direct link from understanding to social impact.



New scientific insights have historically had a major impact on the vision the thinkers of the day had for the ideal society and how it was supposed to work, and they still have. Knowledge about the workings of the universe has eroded the rationale behind monarchy, strong hierarchies in general, the influence of the church, and given rise to other forms of organization that we may call enlightened today, but that will seem archaic a thousand years from now.

The variational principle, made popular by Leibniz’s conclusion that we live in the “best of all possible worlds”, a world that must be “optimal” in some sense, has been hugely influential and eventually spun off the belief in self-organization, in the existence of an “invisible hand” that will lead societies to an optimal state, and that we better not try to outsmart. This belief is still widespread among today’s liberals, even though it obviously begs the question of whether what an unthinking universe optimizes is what humans want.

The ongoing exploration of nature on large and small scales has fundamentally altered the way in which we perceive ourselves as special, now knowing that our solar system is but one among billions, many of which contain planets similar to our own. And the multiverse in all its multiple realizations is maybe the ultimate reduction of humanity to an accident, whereby it remains to be seen just how lucky this accident is.

That insights from fundamental research affect our societies long before, and in many ways besides, applications coming along is today documented vividly by the Singularity believers, who talk about the coming of artificial intelligence surpassing our own the way Christians talk about the rapture. Unless you live in Silicon Valley it's a fringe phenomenon, but it is vivid proof of just how much ideas affect us.

Other recent developments that have been influential way beyond the scientific niches where they originated are chaos, instability, tipping points, complexity and systemic risk. And it seems to me that the awareness that uncertainty is an integral part of scientific knowledge is slowly spreading.

The connection between understanding and social impact is one you are part of every time you read a popular science piece and update your views about the world, the planet we inhabit, our place on it, and its place in the vastness of the universe. It doesn’t seem to mean all that much, all these little people with their little blogs and their little discussions, but multiply it by some hundred millions. How we think about our being part of nature affects how we organize our living together and our societies.

Downloading a Wikipedia entry of 300 kb through your home wireless: 0.01 Euro. Knowing that the universe expands and will forever continue to expand: Priceless.

Sunday, August 31, 2014

The ordinary weirdness of quantum mechanics

Raymond Laflamme's qubit.
Photo: Christina Reed.
I’m just back from our 2014 Workshop for Science Writers, this year on the topic “Quantum Theory”. The meeting was both inspiring and great fun - the lab visit wasn’t as disorganized as last time, the muffins appeared before the breaks and not after, and amazingly enough we had no beamer fails. We even managed to find a video camera, so hopefully you’ll be able to watch the lectures on your own once uploaded, provided I pushed the right buttons.

Due to popular demand, we included a discussion session this year. You know that I’m not exactly a big fan of discussion sessions, but then I didn’t organize this meeting for myself. Michael Schirber volunteered to moderate the discussion. He started by posing the question why quantum mechanics is almost always portrayed as spooky, strange or weird. Why do we continue to do this, and is it beneficial for communicating the science behind the spook?

We could just blame Einstein for this, since he famously complained that quantum mechanics seemed to imply a spooky (“spukhafte”) action at a distance, but that was a century ago and we learned something since. Or some of us anyway.

Stockholm's quantum optics lab,
Photo: Christina Reed.
We could just discard it as headline making, a way to generate interest, but that doesn’t really explain why quantum mechanics is described as weirder or stranger than other new and often surprising effects. How is time dilation in a gravitational field less strange than entanglement? And it’s not that quantum mechanics is particularly difficult either. As Chad pointed out during the discussion, much of quantum mechanics is technically much simpler than general relativity.

We could argue it is due to our daily life being dominated by classical physics, so that quantum effects must appear unintuitive. Intuition however is based on experience and exposure. Spend some time calculating quantum effects, spend some time listening to lectures about quantum mechanics, and you can get that experience. This does not gain you the ability to perceive quantum effects without a suitable measuring device, but that is true for almost everything in science.

The explanation that came up during the discussion that made the most sense to me is that it’s simply a way to replace technical vocabulary, and these placeholders have become vocabulary in their own right.

The spook and the weirdness, they stand in for non-locality and contextuality, they replace correlations and entanglement, pure and mixed states, non-commutativity, error correction, path integrals or post-selection. Unfortunately, all too often the technical vocabulary is entirely absent rather than briefly introduced. This makes it very difficult for interested readers to dig deeper into the topic. It is basically a guarantee that the unintuitive quantum behavior will remain unintuitive for most people. And for the researchers themselves, the lack of technical terms makes it impossible to figure out what is going on. The most common reaction to supposed “quantum weirdness” that I see among my colleagues is “What’s new about this?”

The NYT had a recent opinion piece titled “Why We Love What We Don’t Understand” in which Anna North argued that we like what we don’t understand because we want to keep the wonder alive:
“Many of us may crave that tug, the thrill of something as-yet-unexplained… We may want to get to the bottom of it, but in another way, we may not — as long as we haven’t quite figured everything out, we can keep the wonder alive.”
This made me think because I recall browsing through my mother’s collection of (the German version of) Scientific American as a teenager, always looking to learn what the scientists, the big brains, did not know. Yeah, it was kinda predictable I would end up in some sort of institution. At least it’s one where I have a key to the doors.

Anyway, I didn’t so much want to keep the mystery alive as I wanted to know where the boundary between knowledge and mystery currently was. Assume for a moment I’m not all that weird but most likely average. Is it surprising then that the headline-grabbing quantum weirdness, instead of helping readers, misleads them about where this boundary between knowledge and mystery lies? Is it surprising then that everybody and their dog has solved some problem with quantum mechanics without knowing what problem?

And is it surprising, as I couldn’t help noticing, that the lecturers at this year’s workshop were all well practiced in forward-defense and repeatedly emphasized that most of the theory is extremely well understood? It’s just that the focus on new techniques and recent developments highlights exactly what isn’t (yet) well understood, thereby giving more weight to the still mysterious in the news than it has in practice.

I myself do not mind the attention-grabbing headlines, and that the news focuses on what’s new rather than what has been understood for decades is the nature of the business. As several science writers, at this workshop and also at the previous one, told me, it is often not them inventing the non-technical terms, but it is vocabulary that the scientists themselves use to describe their research. I suspect though the scientists use it trying to adapt their explanations to the technical level they find in the popular science literature. So who is to blame really, and how do we get out of this loop?

A first step might be to stop assuming all other parties are more stupid than one’s own. Most science writers have some degree in science, and they are typically more up to date on what is going on in research than the researchers themselves. The “interested public” is perfectly able to deal with some technical vocabulary as long as it comes with an explanation. And researchers are not generally unwilling or unable to communicate science, they just often have no experience with what is the right level of detail in situations they do not face every day.

When I talk to some journalist, I typically ask them first to tell me roughly what they already know. From their reply I can estimate what background they bring, and then I build on that until I notice I lose them. Maybe that’s not a good procedure, but it’s the best I’ve come up with so far.

We all can benefit from better science communication, and a lot has changed within the last decades. Most notably, there are many more voices to hear now, and these voices aim at very different levels of knowledge. What is still not working very well though is the connection between different levels of technical detail. (Which we previously discussed here.)

At the end of the discussion I had the impression opinions were maximally entangled and pure states might turn into mixed ones. Does that sound strange?

Monday, August 25, 2014

Name that Þing

[Image credits Ria Novosti, source]
As a teenager I switched between the fantasy and science fiction aisles of the local library, but in the end it was science fiction that won me over.

The main difference between the genres seemed the extent to which authors bothered to come up with explanations. The science fiction authors, they bent and broke the laws of Nature but did so consistently, or at least tried to. Fantasy writers on the other hand were just too lazy to work out the rules to begin with.

You could convert Harry Potter into a science fiction novel easily enough. Leaving aside gimmicks such as moving photos that are really yesterday’s future, call the floo network a transmitter, the truth serum a nanobot liquid, and the invisibility cloak a shield. Add some electric buzz, quantum vocabulary, and alien species to it. Make that wooden wand a light saber and that broom an X-wing starfighter, and the rest is a fairly standard story of the Other World, the Secret Clan, and the Chosen One learning the rules of the game and the laws of the trade, of good and evil, of friendship and love.

The one thing that most of the fantasy literature has which science fiction doesn’t have, and which has always fascinated me, is the idea of an Old Language, the idea that there is a true name for every thing and every place, and if you know the true name you have power over it. Speaking in the Old Language always tells the truth. If you speak the Old Language, you make it real.

This idea of the Old Language almost certainly goes back to our ancestors’ fights with an often hostile and unpredictable nature threatening their survival. The names, the stories, the gods and godzillas, they were their way of understanding and managing the environment. They were also the precursor to what would become science. And don’t we in physics today still try to find the true name of some thing so we have power over it?

Aren’t we still looking for the right words and the right language? Aren’t we still looking for the names to speak truth to power, to command that what threatens us and frightens us, to understand where we belong, where we came from, and where we go to? We call it dark energy and we call it dark matter, but these are not their true names. We call them waves and we call them particles, but these are not their true names. Some call the thing a string, some call it a graph, some call it a bit, but as Lee Smolin put it so nicely, none of these words quite has a “ring of truth” to it. These are not the real names.

Neil Gaiman’s recent fantasy novel “The Ocean at the End of the Lane” also draws on the idea of an Old Language, of a truth below the surface, a theory of everything which the average human cannot fathom because they do not speak the right words. In Michael Ende’s “Neverending Story” that which does not have a true name dies and decays to nothing. (And of course Ende has a Chosen One saving the world from that no-thing.) It all starts and it all ends with our ability to name that which we are part of.

You don’t get a universe from nothing of course. You can get a universe from math, but the mathematical universe doesn’t come from nothing either, it comes from Max Tegmark, that is to say some human (for all I can tell) trying to find the right words to describe, well, everything - no point trying to be modest about it. Tegmark, incidentally, also seems to speak at least ten different languages or so, maybe that’s not a coincidence.

The evolution of language has long fascinated historians and neurologists alike. Language is more than assigning a sound to things and things you do with things. Language is a way to organize thought patterns and to classify relations, if in a way that is frequently inconsistent and often confusing. But the oldest language of all is neither Sindarin nor Old Norse, it is, for all we can tell, the language of math in which the universe was written. You can call it temperature anisotropy, or tropospheric ozone precursors, you can call it neurofibrillary tangle or reverse transcriptase, you can call them Bárðarbunga or Eyjafjallajökull - in the end their true names were written in math.

Friday, August 22, 2014

Hello from Iceland

So here I am on an island in the middle of the Atlantic ocean that's working on its next volcano eruption.


In case you missed yesterday's Google Hangout, FQXi just announced the winners of this year's essay contest and - awesomeliness alert! - my essay "How to save the world in five simple steps" took first prize!

I'm happy of course about the money, but what touches me much more is that this is vivid documentation that I'm not the only one who thinks the topics I addressed in my essay are relevant. If you've been following this blog for some while then you know of course that I've been thinking back and forth about the problem of emerging social dynamics, in the scientific communities as well as in society at large, and our inability to foresee and react to the consequences of our actions.

Ten years ago I started out thinking the problem is the modeling of these systems, but over the years, as more and more research and data on these trends became available, I've become convinced the problem isn't understanding the system dynamics to begin with, but that nobody is paying attention to what we've learned.

I see this every time I sit in a committee meeting and try to tell them something about research dedicated to intelligent decision making in groups, cognitive biases, or the sociology of science. They'll not listen. They might be polite and let me finish, but it's not information they will take into account in their decision making. And the reason is basically that it takes them too much time and too much effort. They'll just continue the way it's always been done; they'll continue making the same mistakes over again. There's no feedback in this system, and no learning by trial and error.

The briefest of brief summaries of my essay is that we'll only be able to meet the challenges mankind is facing if our social systems are organized so that we can react to complex and emerging problems caused by our own interaction and that with our environment. That will only be possible if we have the relevant information and use it. And we'll only use this information if it's cheap, in the sense of it being simple, fast, and intuitive to use.

Most attempts to solve the problems that we are facing are based on an unrealistic and utopian image of the average human, the well-educated, intellectual and concerned citizen who will process all available information and come to smart decisions. That is never going to happen, and that's the issue I'm taking on in my essay.

I'll be happy to answer questions about my essay. I would prefer to do this here rather than at the FQXi forum. Note though that I'll be stuck in transit for the next day. If that volcano lets me off this island that is.

Monday, August 18, 2014

DAMA annual modulation explained without invoking dark matter

Annual modulation of DAMA data.
Image credits: DAMA Collaboration.
Physicists have plenty of evidence for the existence of dark matter, matter much like the matter we are made of except that it does not emit any light. However, so far all this evidence comes from the gravitational pull of dark matter, which affects the motion of stars, the formation of structures, and acts as a gravitational lens to bend light, all of which has been observed. We still do not know however what the microscopic nature of dark matter is. What is the type of particle (particles?) that it is constituted of, and what are its interactions?

Few physicists today doubt that dark matter exists and is some type of particle which has just evaded detection so far. First, there is all the evidence for its gravitational interaction. Add to this that we don’t know any good reason why all matter should couple to photons, and on this ground we can actually expect the existence of dark matter. Moreover, we have various candidate theories for physics beyond the standard model that contain particles which fulfil the necessary properties for dark matter. Finally, alternative explanations, by modifying gravity rather than adding a new type of matter, are disfavored by the existing data.

Not so surprisingly then, dark matter has come to dominate the search for physics beyond the standard model. We seem to be so very close!

Infuriatingly though, despite many experimental efforts, we still have no evidence for the interaction of dark matter particles, neither among each other nor with the matter that we are made of. Many experiments are searching for evidence of these interactions. It is the very nature of dark matter – its interacting so weakly with our normal matter and with itself – which makes finding evidence so difficult.

One type of signal being looked for is the products of dark matter interactions in astrophysical processes. There are several observations, such as the Fermi γ-ray excess or the positron excess, whose astrophysical origin is not presently understood and so could be due to dark matter. But astrophysics combines a lot of processes at many energy and density scales, and it is hard to exclude that some signal was not caused by particles of the standard model alone.

Another type of evidence that is being sought after comes from experiments designed to be sensitive to the very rare interaction of dark matter with our normal matter when it passes through the planet. These experiments have the advantage that they happen in a known and controlled environment (as opposed to somewhere in the center of our galaxy). The experiments are typically located deep underground in old mines to filter out unwanted types of particles, collectively referred to as “background”. Whether or not an experiment can detect dark matter interactions within a certain amount of time depends on the density and coupling strength of dark matter, and so also on the type of detector material.

So far, none of the dark matter searches has resulted in a statistically significant positive signal. They have set constraints on the coupling and density of dark matter. Valuable, yes, but frustrating nevertheless.

One experiment that has instilled both hope and controversy among physicists is the DAMA experiment. The DAMA experiment sees an unexplained annual modulation in the event rate at high statistical significance. If the signal was caused by dark matter, we would expect an annual modulation due to our celestial motion around the Sun: the event rate depends on the velocity of the detector relative to the dark matter halo, which changes during the year as the Earth’s orbital motion adds to or subtracts from the Sun’s motion through the galaxy, and it should peak around June 2nd, consistent with the DAMA data.
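
For the standard halo picture the expected signature has a simple form; schematically (the average rate and the modulation amplitude depend on the detector material and on the assumed dark matter properties):

    S(t) \;\approx\; S_0 \;+\; S_m \cos\!\left(\frac{2\pi\,(t - t_0)}{1\,{\rm yr}}\right)\,,
    \qquad t_0 \approx {\rm June\ 2}\,.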

There are of course other signals with an annual modulation that cause reactions in the material in and around the detector. Notably there is the flux of muons, which are produced when cosmic rays hit the upper atmosphere. The muon flux however depends on the temperature in the atmosphere and peaks approximately 30 days too late to explain the observations. The DAMA collaboration has taken into account all other kinds of backgrounds that they could think of, or that other physicists could think of, but dark matter remained the best way to explain the data.

The DAMA experiment has received much attention not primarily because of the presence of the signal, but because of the physicists’ failure to explain the signal with anything but dark matter. It adds to the controversy though that the DAMA signal, if due to dark matter, seems to lie in a parameter range already excluded by other dark matter searches. Then again, this may be due to differences in the detectors. The issue has been discussed back and forth for about a decade now.

All this may change now that Jonathan Davis from the University of Durham, UK, in a recent paper demonstrated that the DAMA signal can be fitted by combining the atmospheric muon flux with the flux of solar neutrinos:
    Fitting the annual modulation in DAMA with neutrons from muons and neutrinos
    Jonathan H. Davis
    arxiv:1407.1052
The neutrinos interact with the rock surrounding the detector, thereby creating secondary particles which contribute to the background. The strength of the neutrino signal depends on the Earth’s distance from the Sun and peaks around January 2nd. In his paper, Davis demonstrates that for certain values of the muon and neutrino contributions these two modulations combine to fit the DAMA data very well, as well as a dark matter explanation does. And that is after he has corrected the goodness of the fit to take into account the larger number of parameters.
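
To illustrate the type of analysis, here is a toy sketch of fitting two phase-shifted seasonal templates to a residual rate by least squares. The peak days, amplitudes and the fake “data” are placeholder assumptions of mine; Davis of course uses the measured muon and neutrino fluxes and the published DAMA residuals:

    # Toy illustration: fit two seasonal background templates to a residual
    # rate. Templates, amplitudes and "data" are placeholders, not the values
    # in Davis' paper.
    import numpy as np

    days = np.arange(365 * 3)                 # three years of daily bins

    def seasonal(peak_day, period=365.25):
        """Unit-amplitude modulation peaking at the given day of the year."""
        return np.cos(2 * np.pi * (days - peak_day) / period)

    muon_template = seasonal(179)             # muon-induced neutrons, peak ~late June
    neutrino_template = seasonal(3)           # neutrino-induced neutrons, peak ~Jan 3

    rng = np.random.default_rng(0)
    fake_data = 0.01 * seasonal(152.5) + rng.normal(0.0, 0.005, days.size)

    # linear least squares for the two background amplitudes
    templates = np.column_stack([muon_template, neutrino_template])
    amplitudes, *_ = np.linalg.lstsq(templates, fake_data, rcond=None)
    print("best-fit amplitudes (muon, neutrino):", amplitudes)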

Moreover, Davis discusses how the two possible explanations could be distinguished from each other, for example by analyzing the data for residual changes in the solar activity that should not be present if the signal was due to dark matter.

Tim Tait, professor of theoretical particle physics at the University of California, Irvine, commented that “[This] may be the first self-consistent explanation for DAMA.” Though of course one has to be cautious not to jump to conclusions, since Davis’ argument is partly based on estimates for the reaction rate of neutrinos with the rock that have to be confirmed with more quantitative studies. Thomas Dent, a former particle cosmologist now working in gravitational wave data analysis, welcomed Davis’ explanation: “DAMA has been a distraction to theorists for too long.”

This post first appeared July 17, 2014, on Starts With A BANG with the title "How the experiment that claimed to detect dark matter fooled itself".

Thursday, August 14, 2014

Away note and Interna

Lara

I'll be traveling the next three weeks, so please be prepared for little or unsubstantial action on this blog. Next week I'm in Reykjavik for a network meeting on "Holographic Methods and Applications". August 27-29 I'm running the Science Writers Workshop in Stockholm together with George, this year on the topic "Quantum Theory." The first week of September then I'm in Trieste for the 2014 conference on Experimental Search for Quantum Gravity, where I'll be speaking about space-time defects.

Unfortunately, this traveling happens just during the time when our Kindergarten is closed, and so it's quite some stress-test for my dear husband. Since you last heard from Lara and Gloria, they have learned to count, use the swing, and are finally potty trained. They can dress themselves, have given up requesting being carried up the stairs, and we mostly get around without taking along the stroller. Yes, life has become much easier. Gloria however still gets motion sick in the car, so we either have to drug her or pull over every 5 minutes. By and large we try to avoid long road trips.

The girls now have more of a social life than I do, and we basically can't leave the house without meeting other children they know, with whom they then have to discuss whether Friday comes before or after Wednesday. That Lara and Gloria are twins apparently contributes greatly to their popularity. Every once in a while, when I drop off the kids at Kindergarten, some four-foot dwarf will demand to know if it's really true that they were together in mommy's tummy, and inspect me with a skeptical look. The older children tell me that the sisters are so cute, and then try to pat Gloria's head, which she hates.
Gloria

Gloria is still a little ahead of Lara when it comes to developing new skills. She learned to speak a little earlier, to count a little earlier, was potty trained a little earlier and learned to dress herself a little earlier. Then she goes on to explain to Lara what to do. She also "reads" books to Lara, basically by memorizing the stories.

Lara on the other hand is still a little ahead in her physical development. She is still a bit taller, and more often than not, when I come to pick them up at Kindergarten, Lara will be kicking or throwing some ball while Gloria plays in the sandbox - and afterwards Gloria will insist on taking off her shoes, pouring out the sand and cleaning her socks before she gets into the car. Lara takes off her shoes in the car and pours the sand into the seat pocket. Lara makes ample use of her physical advantage over Gloria to take away toys. Gloria takes revenge by telling everybody what Lara did wrong again, like putting her shoe on the wrong foot.

The best recent development is that the girls have finally, after a quite difficult phase, stopped kicking and hitting me and telling me to go away. They now call me "my little mommy" and want me to bake cookies for them. Yes, my popularity has greatly increased with them figuring out that I'm not too bad with cakes and cookies. They don't particularly like my cooking but that's okay, because I don't like it either.

On an entirely different note, as some of you have noticed already, I agreed to write for Ethan Siegel at Starts With A Bang. So far there are two pieces from me over there: How the experiment that claimed to detect dark matter fooled itself and The Smallest Possible Scale in the Universe. The deal is that I can repost what gets published there on this blog after 30 days, which I will do. So if you're only interested in my writing, you're well off here, but check out his site because it's full of interesting physics writing.


Tuesday, August 12, 2014

Do we write too many papers?

Every Tuesday, when the weekend submissions appear on the arXiv, I think we’re all writing too many papers. Not to mention that we work too often on weekends. Every Friday, when another week has passed in which nobody solved my problems for me, I think we’re not writing enough papers.

The Guardian recently published an essay by Timo Hannay, titled “Stop the deluge of science research”, though the URL suggests the original title was “Why we should publish less Scientific Research.” Hannay argues that the literature has become unmanageable and that we need better tools to structure and filter it so that researchers can find what they are looking for. Ie, he doesn’t actually say we should publish less. Of course we all want better boats to stay afloat on the information ocean, but there are other aspects to the question whether we publish too many papers that Hannay didn’t touch upon.

Here, I use “too much” to mean that the amount of papers hinders scientific progress and no longer benefits it. The actual number depends very much on the field and its scientific culture and doesn’t matter all that much. Below I’ve collected some arguments that speak for or against the “too much papers” hypothesis.

Yes, we publish too many papers!
  • Too much to read, even with the best filter. The world doesn’t need to know about all these incremental steps, most of which never lead anywhere anyway.
  • Wastes the time of scientists who could be doing research instead. Publishing several short papers instead of one long one adds the time necessary to write several introductions and conclusions, adapt the paper to different journals' styles, and fight with various sets of referees, just to then submit the paper to another journal and start all over again.
  • Just not reading them isn’t an option because one needs to know what’s going on. That creates a lot of headache, especially for newcomers. Better only publish what’s really essential knowledge.
  • Wastes the time of editors and referees. Editors and referees typically don’t have access to reports on manuscripts that follow-up works are based on.
No, we don’t publish too many papers!
  • If you think it’s too much, then just don’t read it.
  • If you think it’s too much, you’re doing it wrong. It’s all a matter of tagging, keywords, and search tools.
  • It’s good to know what everybody is doing and to always be up to date.
  • Journals make money with publishing our papers, so don’t worry about wasting their time.
  • Who really wants to write a referee report for one of these 30-page manuscripts anyway?
Possible reasons that push researchers to publish more than is good for progress:
  • Results pressure. Scientists need published papers to demonstrate outcome of research they received grants for.
  • CV boosting. Lots of papers looks like lots of ideas, at least if one doesn’t look too closely. (Especially young postdocs often believe they don’t have enough papers, so let me add a word of caution. Having too many papers can also work against you because it creates the appearance that your work is superficial. Aim at quality, not quantity.)
  • Scooping angst. In fields which are overpopulated, like for example hep-th, researchers publish anything that might go through just to have a time-stamp that documents they were first.
  • Culture. Researchers adapt the publishing norms of their peers and want to live up to their expectations. (That however might also have the result that they publish less than is good for progress, depending on the prevailing culture of the field.)  
  • PhD production machinery. It’s becoming the norm at least in physics that PhD students already have several publications, typically with their PhD supervisor. Much of this is to make it easier for the students to find a good postdoc position, which again falls back positively on the supervisor. This all makes the hamster wheel turn faster and faster.
Altogether, I don’t have a strong opinion on whether we’re publishing too much or not. What I do find worrisome though is that all these measures for scientific success reduce our tolerance for individuality. Some people write a lot, some less so. Some pay a lot of attention to detail, some rely more on intuition. Some like to discuss and get feedback early to sort out their thoughts, some like to keep their thoughts private until they’ve sorted them out themselves. I think everybody should do their research the way it suits them best, but unfortunately we’re all increasingly forced to publish at rates close to the field average. And who said that the average is the ideal?

Monday, August 11, 2014

When the day comes [video]

Because I know you couldn't dream of anything better than starting your week with one of my awesome music videos. This one is for you, who just missed another deadline; and for you, who still haven't done what you said you would; and for you, yes you, who still haven't sent title and abstract.


I'm getting somewhat frustrated with the reverb tails, I think I have to make something less complicated. The background choir is really hard to get in the right place without creating a mush. And as always, the video making was quite frustrating. I can't get the cuts in the video to be properly in sync with the audio, mainly because I can't see the audio in my video editor. I'm using Corel Videostudio Pro X; can anybody recommend software better suited to the task?

Monday, August 04, 2014

What is a singularity?

Not Von Neumann's urinal, but a
model of an essential singularity.
[Source: Wikipedia Commons.]
I recently read around a bit about the technological singularity, but it’s hard going. It’s hard because I have to endure sentences like this:
“Singularity is a term derived from physics, where it means the point at the unknowable centre of a black hole where the laws of physics break down.”
Ouch. Or this:
“[W]e cannot see beyond the [technological] singularity, just as we cannot see beyond a black hole's event horizon.”
Aargh. Then I thought certainly they must have looked up the word in a dictionary, how difficult can it be? In the dictionary, I found this:
sin-gu-lar-i-ty
noun, plural sin-gu-lar-i-ties for 2–4.

1. the state, fact, or quality of being singular.
2. a singular, unusual, or unique quality; peculiarity.
3. Mathematics, singular point.
4. Astronomy (in general relativity) the mathematical representation of a black hole.”
I don’t even know where to start complaining. Yes, I did realize that black holes and event horizons made it into pop culture, but little did I realize that something as seemingly simple as the word “singularity” is surrounded by such misunderstanding.

Von Neumann.

Let me start with some history. Contrary to what you read in many places, it was not Vernor Vinge who first used the word “singularity” to describe a possible breakdown of predictability in technological development; it was von Neumann.

Von Neumann may be known to you as the man behind the Von Neumann entropy. He was a multiply talented genius, one of a now almost extinct breed, who contributed to many disciplines in math and physics, and to what are now interdisciplinary fields like game theory and quantum information.

In Chapter 16 (p 157) of Stanislaw Ulam’s biography of Von Neumann, published in 1958, one reads:
“One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.”
The term “singularity” was then picked up in 1993 by Vinge, who coined the expression “technological singularity”. But let us dwell for a moment on the above Von Neumann quote. Ulam speaks of an “essential singularity”. You may be forgiven for mistaking the adjective “essential” for a filler, but “essential singularity” is a technical expression, typically found in the field of complex analysis.

A singularity in mathematics is basically a point at which a function is undefined. Now it might be undefined just because you didn’t define it there, even though the function can be continued through that point. In this case the singularity is said to be removable and, in some sense, just isn’t an interesting singularity, so let us leave it aside.

What one typically means with a singularity is a point where a function behaves badly, so that one or several of its derivatives diverge, that is, they go to infinity. The ubiquitous example in school math is the poles of inverse powers of x, which diverge as x goes to zero.

However, such poles are not malign; you can remove them easily enough by multiplying the function by the respective positive power. Of course this gives you a different function, but this function still carries much of the information of the original, notably all the coefficients in a series expansion. This procedure of removing poles (or creating poles) is very important in complex analysis, where it is used to obtain the “residues” of a function.
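To have a textbook case in mind (my example, not taken from any of the quotes): the function

\[ f(x) = \frac{1}{x^{3}} \]

has a pole of order three at x = 0, and multiplying by the respective power, x^3 f(x) = 1, removes it. For a function with a simple pole at z = a, the expansion

\[ f(z) = \frac{c_{-1}}{z-a} + c_{0} + c_{1}(z-a) + \dots \]

contains only finitely many negative powers, and the coefficient c_{-1} is the residue.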

Some singularities however cannot be removed by multiplication with any positive power. These are the cases in which the expansion of the function contains an infinite number of negative powers; the most commonly used example is exp(-1/x) at x=0. Such a singularity is said to be “essential”. Please appreciate the remarkable fact that this function does not diverge as x approaches zero from the positive side, but neatly goes to zero! So do all its derivatives!

So what did von Neumann mean with referring to an essential singularity?

From the context it seems he referred to the breakdown of predictability at this point. If all derivatives of a function vanish at a point, no series expansion (neither Taylor nor Laurent) around that point will reproduce the function. If you hit that point, you basically don’t know what happens next. This is a characteristic feature of essential singularities. (The radius of convergence cannot be pushed through the singular point.)
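For the standard example this works as follows. Define f(x) = exp(-1/x) for x > 0 and f(0) = 0. Then every one-sided derivative at zero vanishes,

\[ f^{(n)}(0^{+}) = 0 \quad \text{for all } n, \]

so the Taylor series around zero is identically zero and tells you nothing about the function at x > 0, while the expansion in powers of 1/z,

\[ e^{-1/z} = \sum_{n=0}^{\infty} \frac{(-1)^{n}}{n!\, z^{n}} , \]

contains infinitely many negative powers - which is exactly what makes the singularity essential.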

However, the predictability of the laws of nature that we have (so far) never breaks down in this very sense. It breaks down because the measurement in quantum theory is non-deterministic, but that has, for all we know, nothing to do with essential singularities. (Yes, I’ve tried to make this connection. I’ve always been fond of essential singularities. Alas, not even the Templeton Foundation wanted anything to do with my great idea. So much for the reality of research.)

Geodesic incompleteness.
Artist's impression.
The other breakdown of predictability that we know of are the singularities in general relativity. These are not technically essential singularities if you ask for the behavior of certain observables – they are typically poles or conical singularities. But they bear a resemblance to essential singularities through a concept known as “geodesic incompleteness”. It basically means that there are curves in space-time which end at finite proper time and cannot be continued. It’s like hitting the wall at kilometer 32 of a marathon.
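A standard textbook example, with the prefactor quoted from memory: in the Schwarzschild space-time an observer falling radially from rest at radius r_0 hits the singularity after the finite proper time

\[ \tau = \frac{\pi}{2}\sqrt{\frac{r_{0}^{3}}{2GM}} , \]

which happens to agree with the Newtonian free-fall time. The geodesic simply cannot be continued beyond that point; there is no "afterwards".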

The reason for the continuation being impossible is that a singularity is a singularity is a singularity, no matter how you got there. You lose all information about your past when you hit it. (This is why, incidentally, the Maldacena-Horowitz proposal to resolve the black hole information loss by putting initial conditions on the singularity makes a lot of sense to me. Imho a totally under-appreciated idea.)

A common confusion about black holes concerns the nature of the event horizon. You can construct certain quantities of the black hole space-time that diverge at the event horizon. In the mathematical sense they are singular, and that did confuse many people after the black hole space-time was first derived; the confusion was only fully sorted out around the middle of the last century, when it was understood that these quantities do not correspond to physical observables. The physically relevant singularity is where geodesics end, at the center of the black hole. It corresponds to an infinitely large curvature. (This is an observer-independent statement.) Nothing special happens upon horizon crossing, except that one can never get out again.
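To make this concrete for the Schwarzschild solution: the metric component g_rr = (1 - 2GM/(c^2 r))^{-1} blows up at the horizon r = 2GM/c^2, but that is a coordinate artifact. The curvature invariant

\[ R_{abcd}R^{abcd} = \frac{48\, G^{2}M^{2}}{c^{4} r^{6}} \]

is perfectly finite there and diverges only for r going to zero.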

The singularity inside black holes is widely believed not to exist though, exactly because it implies a breakdown of predictability and causes the so paradoxical loss of information. The singularity is expected to be removed by quantum gravitational effects. The defining property of the black hole is the horizon, not the singularity. A black hole with the singularity removed is still a black hole. A singularity with the horizon removed is a naked singularity, no longer a black hole.

What has all of this to do with the technological singularity?

Nothing, really.

To begin with, there are like 17 different definitions for the technological singularity (no kidding). None of them has anything to do with an actual singularity, neither in the mathematical nor in the physical sense, and we have absolutely no reason to believe that the laws of physics, or predictability in general, break down within the next decades or so. In principle.

In practice, on some emergent level of an effective theory, I can see predictability becoming impossible. How do you want to predict what an artificial intelligence will do without having something more powerful than that artificial intelligence already? Not that anybody has been able to predict what averagely intelligent humans will do. Indeed one could say that predictability becomes more difficult with the absence of intelligence, not the other way round, but I digress.

Having said all that, let us go back to these scary quotes from the beginning:
“Singularity is a term derived from physics, where it means the point at the unknowable centre of a black hole where the laws of physics break down.”
The term singularity comes from mathematics. It does not mean “at the center of the black hole”, but it can be “like the center of a black hole”, provided you are talking about the classical black hole solution, which is however believed not to be realized in nature.
“[W]e cannot see beyond the [technological] singularity, just as we cannot see beyond a black hole's event horizon.”
There is no singularity at the black hole horizon, and predictability does not break down at the black hole horizon. You cannot see beyond a black hole horizon as long as you stay outside the black hole. If you jump in, you will see - and then die. But I don’t know what this has to do with technological development, or maybe I just didn’t read the facebook fine print closely enough.

And finally there’s this amazing piece of nonsense:
“Singularity: Astronomy. (in general relativity) the mathematical representation of a black hole.”
To begin with, General Relativity is not a field of astronomy. But worse, the “mathematical representation of a black hole” is certainly not a singularity. The mathematical representation of a (classical) black hole is the black hole space-time, and it contains a singularity.

And just in case you wondered, singularities have absolutely nothing to do with singing, except that you find both on my blog.

Tuesday, July 29, 2014

Can you touch your nose?

Yeah, but can you? Believe it or not, it’s a question philosophers have plagued themselves with for thousands of years, and it keeps reappearing in my feeds!

Best source I could find for this image: IFLS.



My first reaction was of course: It’s nonsense – a superficial play on the words “you” and “touch”. “You touch” whatever triggers the nerves in your skin. There, look, I’ve solved a thousand-year-old problem in a matter of 3 seconds.

Then it occurred to me that with this notion of “touch” my shoes never touch the ground. Maybe I’m not a genius after all. Let me get back to that cartoon then. Certainly deep thoughts went into it that I must unravel.

The average size of an atom is an Angstrom, 10^-10 m. The typical interatomic distance in molecules is a nanometer, 10^-9 meter, or let that be a few nanometers if you wish. At room temperature and normal atmospheric pressure, electrostatic repulsion prevents you from pushing atoms any closer together. So the 10^-8 meters in the cartoon seems about correct.

But it’s not so simple...

To begin with it isn’t just electrostatic repulsion that prevents atoms from getting close, it is more importantly the Pauli exclusion principle which forces the electrons and quarks that make up the atom to arrange in shells rather than to sit on top of each other.

If you could turn off the Pauli exclusion principle, all electrons from the higher shells would drop into the ground state, releasing energy. The same would happen with the quarks in the nucleus which arrange in similar levels. Since nuclear energy scales are higher than atomic scales by several orders of magnitude, the nuclear collapse causes the bulk of the emitted energy. How much is it?

The typical nuclear level splitting is some 100 keV, that is a few 10^-14 Joule. Most of the Earth is made up of silicon, iron and oxygen, ie atomic numbers of the order of 15 or so on the average. This gives about 10^-12 Joule per atom, that is 10^11 Joule per mol, or 1 kTon TNT per kg.
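For those who want to check the arithmetic, here is the same estimate in a few lines of Python (a rough sketch; the inputs are the round numbers from the text, so only the order of magnitude counts):

# Rough back-of-the-envelope: energy set free per kg of ordinary matter if the
# nucleons could all drop to the ground state. All inputs are round numbers.
level_splitting = 100e3 * 1.6e-19   # ~100 keV per transition, in Joule
particles_per_nucleus = 30          # silicon/oxygen/iron-ish, rough average
avogadro = 6.022e23                 # atoms per mol
molar_mass = 0.03                   # kg per mol, again roughly

energy_per_atom = level_splitting * particles_per_nucleus   # ~5e-13 J
energy_per_mol = energy_per_atom * avogadro                 # ~3e11 J
energy_per_kg = energy_per_mol / molar_mass                 # ~1e13 J
kton_tnt = 4.2e12                                           # Joule per kTon TNT
print(f"{energy_per_kg:.1e} J/kg, about {energy_per_kg/kton_tnt:.1f} kTon TNT per kg")

This lands within a factor of a few of the 1 kTon TNT per kg quoted above, which is all a back-of-the-envelope estimate is supposed to do.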

This back-of-the-envelope estimate gives pretty much exactly the maximal yield of a nuclear weapon. The difference is though that turning off the Pauli exclusion principle would convert every kg of Earthly matter into a nuclear bomb. Since our home planet has a relatively small gravitational pull, I guess it would just blast apart. I saw everybody die, again, see that’s how it happens. But I digress; let me get back to the question of touch.

So it’s not just electrostatics but also the Pauli exclusion principle that prevents you from falling through the cracks. Not only do the electrons in your shoes not want to touch the ground, they don’t want to touch the other electrons in your shoes either. Electrons, or fermions generally, just don’t like each other.

The 10^-8 meters actually seems quite optimistic because surfaces are not perfectly even; they have a roughness to them, which means that the average distance between two solids is typically much larger than the interatomic spacing one has in crystals. Moreover, the human body is not a solid, and the skin is normally covered by a thin layer of fluids. So you never touch anything, if only because you’re separated from the world by a layer of grease.

To be fair, grease isn’t why the Greeks were scratching their heads back then, but a guy called Zeno. Zeno’s most famous paradox divides a distance into halves indefinitely, to then conclude that because it consists of an infinite number of steps, the full distance can never be crossed. You cannot, thus, touch your nose, spoke Zeno, or ram an arrow into it respectively. The paradox was resolved once it was established that infinite series can converge to finite values; the nose was back in business, but Zeno would come back to haunt the thinkers of the day centuries later.
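In modern notation, Zeno’s infinitely many half-steps add up to a finite distance,

\[ \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \dots = \sum_{n=1}^{\infty} \frac{1}{2^{n}} = 1 , \]

which is all it takes to get the arrow to the nose.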

The issue reappeared with the advance of the mathematical field of topology in the 19th century. Back then, math, physics, and philosophy had not yet split apart, and the bright minds of the times, Descartes, Euler, Bolzano and the like, wanted to know, using their new methods, what it means for any two objects to touch. And their objects were as abstract as it gets. Any object was supposed to occupy space and cover a topological set in that space. So far so good, but what kind of set?

In the space of the real numbers, sets can be open or closed or a combination thereof. Roughly speaking, if the boundary of the set is part of the set, the set is closed. If the boundary is missing the set is open. Zeno constructed an infinite series of steps that converges to a finite value and we meet these series again in topology. Iff the limiting value (of any such series) is part of the set, the set is closed. (It’s the same as the open and closed intervals you’ve been dealing with in school, just generalized to more dimensions.) The topologists then went on to reason that objects can either occupy open sets or closed sets, and at any point in space there can be only one object.

Sounds simple enough, but here’s the conundrum. If you have two open sets that do not overlap, they will always be separated by the boundary that isn’t part of either of them. And if you have two closed sets that touch, the boundary is part of both, meaning they also overlap. In neither case can the objects touch without overlapping. Now what? This puzzle was so important to them that Bolzano went on to suggest that objects may occupy sets that are partially open and partially closed. While technically possible, it’s hard to see why objects would, in more than one spatial dimension, always arrange so as to make sure that one object’s closed surface touches the other object’s open patches.
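The one-dimensional version of the conundrum is easy to write down: the open intervals (0,1) and (1,2) do not overlap, but the point 1 between them belongs to neither, so they never quite touch, whereas the closed intervals [0,1] and [1,2] touch only by sharing that point,

\[ (0,1) \cap (1,2) = \emptyset , \qquad [0,1] \cap [1,2] = \{1\} , \]

that is, by overlapping.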

More time went by, and on the stage of science appeared the notion of fields that mediate interactions between things. Now objects could interact without touching, awesome. But if they don’t repel, what happens when they get closer? Do or don’t they touch eventually? Or does interacting via a field mean they touch already? Before anybody started worrying about this, science moved on and we learned that the field is quantized and the interaction is really just mediated by the particles that make up the field. So how do we now even phrase the question of whether two objects touch?

We can approach this by specifying that by an “object” we mean a bound state of many atoms. The short-distance interaction of these objects will (at room temperature, normal atmospheric pressure, non-relativistically, etc) take place primarily by exchanging (virtual) photons. The photons do in no sensible way belong to any one of the objects, so it seems fair to say that the objects don’t touch. They don’t touch, in one sentence, because there is no four-fermion interaction in the standard model of particle physics.
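To spell that last sentence out: the interaction term of quantum electrodynamics couples one photon to two fermion lines,

\[ \mathcal{L}_{\rm int} = - e\, \bar{\psi}\gamma^{\mu}\psi\, A_{\mu} \]

(sign conventions aside), and nowhere in the Standard Model Lagrangian is there a fundamental term with four fermion fields, \((\bar{\psi}\psi)(\bar{\psi}\psi)\); Fermi’s old four-fermion theory of the weak interaction is only an effective low-energy description.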

Alas, tying touch to photon exchange in general doesn’t make much sense when we think about the way we normally use the word. It does, for example, not contain any qualifier about distance. A more sensible definition would make use of the probability of an interaction: two objects touch (in some region) if their probability of interaction (in that region) is large, whether or not it is mediated by a messenger particle. This neatly solves the topologists’ problem because in quantum mechanics two objects can indeed overlap.

What one means with “large probability” of interaction is somewhat arbitrary of course, but quantum mechanics being as awkward as it is, there’s always the possibility that your finger tunnels through your brain when you try to hit your nose, so we need such a quantifier because nothing is ever absolutely certain. And then, after all, you can touch your nose! You already knew that, right?

But if you think this settles it, let me add...

Yes, no, maybe, wtf.
There is a non-vanishing probability that when you touch (attempt to touch?) something you actually exchange electrons with it. This opens a new can of worms because now we have to ask what is “you”? Are “you” the collection of fermions that you are made up of and do “you” change if I remove one electron and replace it with an identical electron? Or should we in that case better say that you just touched something else? Or are “you” instead the information contained in a certain arrangement of elementary particles, irrespective of the particles themselves? But in this case, “you” can never touch anything just because you are not material to begin with. I will leave that to you to ponder.

And so, after having spent an hour staring at that cartoon in my facebook feed, I came to the conclusion that the question isn’t whether we can touch something, but what we mean with “some thing”. I think I had been looking for some thing else though…