Tuesday, July 29, 2014

Can you touch your nose?

Yeah, but can you? Believe it or not, it’s a question philosophers have plagued themselves with for thousands of years, and it keeps reappearing in my feeds!

Best source I could find for this image: IFLS.

My first reaction was of course: It’s nonsense – a superficial play on the words “you” and “touch”. “You touch” whatever triggers the nerves in your skin. There, look, I’ve solved a thousand-year-old problem in a matter of three seconds.

Then it occurred to me that with this notion of “touch” my shoes never touch the ground. Maybe I’m not a genius after all. Let me get back to that cartoon then. Certainly deep thoughts went into it that I must unravel.

The average size of an atom is an Angstrom, 10^-10 m. The typical interatomic distance in molecules is a nanometer, 10^-9 m, or let that be a few nanometers if you wish. At room temperature and normal atmospheric pressure, electrostatic repulsion prevents you from pushing atoms any closer together. So the 10^-8 meters in the cartoon seem about correct.

But it’s not so simple...

To begin with, it isn’t just electrostatic repulsion that prevents atoms from getting close; more importantly, it is the Pauli exclusion principle, which forces the electrons and quarks that make up the atom to arrange in shells rather than sit on top of each other.

If you could turn off the Pauli exclusion principle, all electrons from the higher shells would drop into the ground state, releasing energy. The same would happen with the quarks in the nucleus which arrange in similar levels. Since nuclear energy scales are higher than atomic scales by several orders of magnitude, the nuclear collapse causes the bulk of the emitted energy. How much is it?

The typical nuclear level splitting is some 100 keV, that is a few 10^-14 Joule. Most of the Earth is made up of silicon, iron and oxygen, i.e. atomic numbers of the order of 15 or so on average. This gives about 10^-12 Joule per atom, that is 10^11 Joule per mol, or 1 kTon TNT per kg.
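The estimate can be reproduced in a few lines. The inputs below are the rough assumptions from the text: a level splitting of 100 keV, a mass number of about 30 (for an atomic number around 15), and a silicon-like molar mass of roughly 30 g/mol. Only the orders of magnitude matter.

```python
# Back-of-the-envelope: energy released if the Pauli exclusion principle
# were switched off and all nucleons dropped to the ground state.
EV = 1.602e-19           # Joule per electronvolt
AVOGADRO = 6.022e23      # atoms per mol
KTON_TNT = 4.184e12      # Joule per kiloton of TNT

level_splitting = 100e3 * EV                  # ~100 keV, a few 10^-14 J
nucleons = 30                                 # assumed mass number of a typical Earth atom
energy_per_atom = nucleons * level_splitting  # comes out near 10^-12 J
energy_per_mol = energy_per_atom * AVOGADRO   # near 10^11 J
molar_mass_kg = 0.030                         # ~30 g/mol, silicon-like matter
energy_per_kg = energy_per_mol / molar_mass_kg

print(f"per atom: {energy_per_atom:.1e} J")
print(f"per mol:  {energy_per_mol:.1e} J")
print(f"per kg:   {energy_per_kg / KTON_TNT:.1f} kTon TNT")
```

With these assumptions the result lands at a couple of kTon TNT per kg, consistent with the order-of-magnitude claim in the text.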

This back-of-the-envelope estimate gives pretty much exactly the maximal yield of a nuclear weapon. The difference though is that turning off the Pauli exclusion principle would convert every kg of Earthly matter into a nuclear bomb. Since our home planet has a relatively small gravitational pull, I guess it would just blast apart. I saw everybody die, again; see, that’s how it happens. But I digress; let me get back to the question of touch.

So it’s not just electrostatics but also the Pauli exclusion principle that prevents you from falling through the cracks. Not only do the electrons in your shoes not want to touch the ground, the electrons in your shoes don’t want to touch the other electrons in your shoes either. Electrons, or fermions generally, just don’t like each other.

The 10^-8 meters actually seem quite optimistic, because surfaces are not perfectly even; they have a roughness to them, which means that the average distance between two solids is typically much larger than the interatomic spacing one has in crystals. Moreover, the human body is not a solid, and the skin is normally covered by a thin layer of fluids. So you never touch anything, if only because you’re separated from the world by a layer of grease.

To be fair, grease isn’t why the Greeks were scratching their heads back then, but a guy called Zeno. Zeno’s most famous paradox divides a distance into halves indefinitely, to then conclude that because the journey consists of an infinite number of steps, the full distance can never be crossed. You cannot, thus, touch your nose, spoke Zeno, or ram an arrow into it respectively. The paradox was resolved once it was established that infinite series can converge to finite values; the nose was back in business, but Zeno would come back to haunt the thinkers of the day centuries later.
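Zeno’s halving series is easy to check numerically; a few lines suffice to watch the partial sums of 1/2 + 1/4 + 1/8 + … converge to 1:

```python
# Partial sums of Zeno's dichotomy series: an infinite number of steps
# nevertheless covers a finite distance.
partial_sums = []
total = 0.0
for n in range(1, 51):
    total += 0.5 ** n
    partial_sums.append(total)

print(partial_sums[:4])       # [0.5, 0.75, 0.875, 0.9375]
print(1 - partial_sums[-1])   # remaining gap after 50 steps: ~2^-50
```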

The issue reappeared with the advance of the mathematical field of topology in the 19th century. Back then, math, physics, and philosophy had not yet split apart, and the bright minds of the times, Descartes, Euler, Bolzano and the like, wanted to know, using their new methods: what does it mean for any two objects to touch? And their objects were as abstract as it gets. Any object was supposed to occupy space and cover a topological set in that space. So far so good, but what kind of set?

In the space of the real numbers, sets can be open or closed or a combination thereof. Roughly speaking, if the boundary of the set is part of the set, the set is closed; if the boundary is missing, the set is open. Zeno constructed an infinite series of steps that converges to a finite value, and we meet such series again in topology: iff the limiting value of any such series is part of the set, the set is closed. (It’s the same as the open and closed intervals you’ve been dealing with in school, just generalized to more dimensions.) The topologists then went on to reason that objects can occupy either open sets or closed sets, and that at any point in space there can be only one object.
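As a minimal one-dimensional illustration (my own example, not from the original discussion): a Zeno-style sequence lies inside both the open interval (0, 1) and the closed interval [0, 1], but its limit belongs only to the closed one.

```python
# Membership predicates for an open and a closed interval on the real line.
in_open = lambda x: 0 < x < 1      # boundary points excluded
in_closed = lambda x: 0 <= x <= 1  # boundary points included

# Zeno-style sequence converging to the boundary point 1.
sequence = [1 - 0.5 ** n for n in range(1, 30)]
limit = 1.0

# Every term of the sequence is in both sets...
assert all(in_open(x) and in_closed(x) for x in sequence)
# ...but the limit is only in the closed set: that is what "closed" means.
assert in_closed(limit) and not in_open(limit)
print("limit in closed set:", in_closed(limit), "| limit in open set:", in_open(limit))
```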

Sounds simple enough, but here’s the conundrum. If you have two open sets that do not overlap, they will always be separated by the boundary that isn’t part of either of them. And if you have two closed sets that touch, the boundary is part of both, meaning they also overlap. In neither case can the objects touch without overlapping. Now what? This puzzle was so important to them that Bolzano went on to suggest that objects may occupy sets that are partially open and partially closed. While technically possible, it’s hard to see why objects would, in more than one spatial dimension, always arrange so that one object’s closed surface touches the other’s open patches.

More time went by, and on the stage of science appeared the notion of fields that mediate interactions between things. Now objects could interact without touching, awesome. But if they don’t repel, what happens when they get closer? Do or don’t they touch eventually? Or does interacting via a field mean they touch already? Before anybody started worrying about this, science moved on, and we learned that the field is quantized and the interaction is really just mediated by the particles that make up the field. So how do we now even phrase the question of whether two objects touch?

We can approach this by specifying that by an “object” we mean a bound state of many atoms. The short-distance interaction of these objects will (at room temperature, normal atmospheric pressure, non-relativistically, etc.) take place primarily by exchanging (virtual) photons. The photons do not in any sensible way belong to either of the objects, so it seems fair to say that the objects don’t touch. They don’t touch, in one sentence, because there is no four-fermion interaction in the standard model of particle physics.

Alas, tying touch to photon exchange in general doesn’t make much sense when we think about the way we normally use the word. It does, for example, not carry any qualifier about the distance. A more sensible definition would make use of the probability of an interaction: two objects touch (in some region) if their probability of interaction (in that region) is large, whether or not it is mediated by a messenger particle. This neatly solves the topologists’ problem, because in quantum mechanics two objects can indeed overlap.

What one means with “large probability” of interaction is somewhat arbitrary of course, but quantum mechanics being as awkward as it is, there’s always the possibility that your finger tunnels through your brain when you try to hit your nose, so we need such a quantifier because nothing is ever absolutely certain. And then, after all, you can touch your nose! You already knew that, right?

But if you think this settles it, let me add...

Yes, no, maybe, wtf.
There is a non-vanishing probability that when you touch (attempt to touch?) something, you actually exchange electrons with it. This opens a new can of worms, because now we have to ask what “you” is. Are “you” the collection of fermions that you are made up of, and do “you” change if I remove one electron and replace it with an identical electron? Or should we in that case better say that you just touched something else? Or are “you” instead the information contained in a certain arrangement of elementary particles, irrespective of the particles themselves? But in that case, “you” can never touch anything, just because you are not material to begin with. I will leave that for you to ponder.

And so, after having spent an hour staring at that cartoon in my facebook feed, I came to the conclusion that the question isn’t whether we can touch something, but what we mean with “some thing”. I think I had been looking for some thing else though…

Friday, July 25, 2014

Can black holes bounce to white holes?

Fast track to wisdom: Sure, but who cares if they can? We want to know if they do.

Black holes are defined by the presence of an event horizon, which is the boundary of a region from which nothing can escape, ever. The term “black hole” is also often used for something that for a long time looks very similar to a black hole and that traps light not eternally but only temporarily. Such space-times are said to have an “apparent horizon.” That they are not strictly speaking black holes was the origin of the recent Stephen Hawking quote according to which black holes may not exist, by which he meant they might have only an apparent horizon instead of an eternal event horizon.

A white hole is an upside-down version of a black hole; it has an event horizon that is a boundary to a region in which nothing can ever enter. Static black hole solutions, describing unrealistic black holes that have existed forever and continue to exist forever, are actually a combination of a black hole and a white hole.

The horizon itself is a global construct; locally it is entirely unremarkable and regular. You would not notice crossing the horizon, but the classical black hole solution contains a singularity in the center. This singularity is usually interpreted as the breakdown of classical general relativity and is expected to be removed by the yet-to-be-found theory of quantum gravity.

You do however not need quantum gravity to construct singularity-free black hole space-times. Hawking and Ellis’ singularity theorems prove that singularities must form from certain matter configurations, provided the matter is normal matter that cannot develop negative pressure and/or density. All you have to do to get rid of the singularity is invent some funny type of matter that refuses to be squeezed arbitrarily. This is not possible with any type of matter we know, and so it just pushes the bump around under the carpet: now rather than having to explain quantum effects of gravity, you have to explain where the funny matter comes from. It is normally interpreted not as matter but as a quantum gravitational contribution to the stress-energy tensor, but either way it’s basically the physicist’s way of using a kitten photo to cover the hole in the wall.

Singularity-free black hole solutions have been constructed for almost as long as the black hole solution has been known – people have always been disturbed by the singularity. Using matter other than the normal kind allows constructing both wormhole solutions and black holes that turn into white holes and allow an exit into a second space-time region. Now if a black hole is really a black hole, with an event horizon, then the second space-time region is causally disconnected from the first. If the black hole has only an apparent horizon, then this does not have to be so; the white hole then is also not really a white hole, it just looks like one.

The latter solution is quite popular in quantum gravity. It basically describes matter collapsing, forming an apparent horizon and a strong quantum gravity region inside but no singularity, then evaporating and returning to an almost flat space-time. There are various ways to construct these space-times. The details differ, but the corresponding causal diagrams all look basically the same.

This recent paper for example used a collapsing shell turning into an expanding shell. The title “Singularity free gravitational collapse in an effective dynamical quantum spacetime” basically says it all. Note how the resulting causal diagram (left in figure below) looks pretty much the same as the one Lee and I constructed based on general considerations in our 2009 paper (middle in figure below), which again looks pretty much the same as the one that Ashtekar and Bojowald discussed in 2005 (right in figure below), and I could go on and add a dozen more papers discussing similar causal diagrams. (Note that the shaded regions do not mean the same in each figure.)

One needs a concrete ansatz for the matter of course to be able to calculate anything. The general structure of the causal diagram is good for classification purposes, but not useful for quantitative reasoning, for example about the evaporation.

Haggard and Rovelli have recently added to this discussion with a new paper about black holes bouncing to white holes.

    Black hole fireworks: quantum-gravity effects outside the horizon spark black to white hole tunneling
    Hal M. Haggard, Carlo Rovelli
    arXiv: 1407.0989

Ron Cowen at Nature News announced this as a new idea, and while the paper does contain new ideas, that black holes may turn into white holes is in and by itself not new. So some clarification follows.

Haggard and Rovelli’s paper contains two ideas that are connected by an argument, but not by a calculation, so I want to discuss them separately. Before we start it is important to note that their argument does not take into account Hawking radiation. The whole process is supposed to happen already without outgoing radiation. For this reason the situation is completely time-reversal invariant, which makes it significantly easier to construct a metric. It is also easier to arrive at a result that has nothing to do with reality.

So, the one thing that is new in the Haggard and Rovelli paper is that they construct a space-time diagram, describing a black hole turning into a white hole, both with apparent horizons, and do so by a cutting-procedure rather than altering the equation of state of the matter. As source they use a collapsing shell that is supposed to bounce. This cutting procedure is fine in principle, even though it is not often used. The problem is that you end up with a metric that exists as solution to some source, but you then have to calculate what the source has to do in order to give you the metric. This however is not done in the paper. I want to offer you a guess though as to what source would be necessary to create their metric.

The cutting that is done in the paper takes a part of the black hole metric (describing the inside of the shell) with an arm extending into the horizon region, then squeezes this arm together so that it shrinks in radial extension and no longer extends into the regime below the Schwarzschild radius, which is normally behind the horizon. This squeezed part of the black hole metric is then matched to empty space, describing the inside of the shell. See the image below.

Figure 4 from arXiv: 1407.0989

They do not specify what happens to the shell after it has reached the end of the region that was cut, explaining that one would need quantum gravity for this. The result is glued together with the time-reversed case, and so they get a metric that forms an apparent horizon and bounces at a radius where one would normally not expect quantum gravitational effects. (This works towards making more concrete the so far quite vague idea of Planck stars that we discussed here.)

The cutting and squeezing basically means that the high-curvature region from inside the horizon is moved to a larger radius, and the only way this makes sense is if it happens together with the shell. So I think effectively they take the shell from a small radius and match the small radius to a large radius while keeping the density fixed (they keep the curvature). This looks to me like they blow up the total mass of the shell, but keep in mind this is my interpretation, not theirs. If that is so, however, then it makes sense that the horizon forms at a larger radius if the shell collapses while its mass increases. This raises the question though why the heck the mass of the shell should increase and where that energy is supposed to come from.

This brings me to the second argument in the paper, which is supposed to explain why it is plausible to expect this kind of behavior. Let me first point out that it is a bold claim that quantum gravity effects kick in outside the horizon of a (large) black hole. Standard lore has it that quantum gravity only leads to large corrections to the classical metric if the curvature is large (in the Planckian regime). This always happens after horizon crossing (as long as the mass of the black hole is larger than the Planck mass). But once the horizon has formed, the only way to make matter bounce so that it can come out of the horizon necessitates violations of causality and/or locality (keep in mind their black hole is not evaporating!) that extend into small-curvature regions. This is inherently troublesome, because now one has to explain why we don’t see quantum gravity effects all over the place.
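To put numbers to that standard lore, here is a rough sketch (my own illustration, not from the paper): compare the curvature at the horizon of a solar-mass black hole, measured by the Kretschmann scalar K = 48 G²M²/(c⁴r⁶), to the Planckian curvature 1/l_p⁴.

```python
# How far below Planckian is the curvature at the horizon of a stellar black hole?
G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8           # speed of light, m/s
l_planck = 1.616e-35  # Planck length, m
M_sun = 1.989e30      # solar mass, kg

r_s = 2 * G * M_sun / c**2                         # Schwarzschild radius, ~3 km
K_horizon = 48 * G**2 * M_sun**2 / (c**4 * r_s**6) # Kretschmann scalar at r = r_s
K_planck = 1 / l_planck**4                         # Planckian curvature scale

print(f"r_s = {r_s:.0f} m")
print(f"K at horizon / Planck curvature = {K_horizon / K_planck:.1e}")
```

The ratio comes out absurdly small, which is why quantum gravitational corrections are normally not expected anywhere near the horizon of a large black hole.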

The way they argue this could happen is that small, Planck-size, higher-order corrections to the metric can build up over time. In this case it is not solely the curvature that is relevant for an estimate of the effect, but also the duration of the buildup. So far, so good. My first problem is that I can’t see what their estimate of the long-term effects of such a small correction has to do with quantum gravity. I could read the whole estimate as one for black hole solutions in higher-order gravity, quantum not required. If it were a quantum fluctuation, I would expect the average solution to remain the classical one and the cases in which the fluctuations build up to be possible but highly improbable. In fact they seem to have something like this in mind, just that for some reason they come to the conclusion that the transition to the solution in which the initially small fluctuation builds up becomes more likely over time rather than less likely.

What one would need to do to estimate the transition probability is to work out some product of wave-functions describing the background metric close by and far away from the classical average, but nothing like this is contained in the paper. (Carlo told me though, it’s in the making.) It remains to be shown that the process of all the matter of the shell suddenly tunneling outside the horizon and expanding again is more likely to happen than the slow evaporation due to Hawking radiation which is essentially also a tunnel process (though not one of the metric, just of the matter moving in the metric background). And all this leaves aside that the state should decohere and not just happily build up quantum fluctuations for the lifetime of the universe or so.

By now I’ve probably lost most readers, so let me just sum up. The space-time that Haggard and Rovelli have constructed exists as a mathematical possibility, and I do not actually doubt that the tunnel process is possible in principle, provided they get rid of the additional energy that has appeared from somewhere (this is taken care of automatically by the time-reversal). But this alone does not tell us whether this space-time can exist as a real possibility, in the sense that we do not know if the process can happen with large probability (close to one) in the time before the shell reaches the Schwarzschild radius (of the classical solution).

I have remained skeptical, despite Carlo’s infinite patience in explaining their argument to me. But if they are right and what they claim is correct, then this would indeed solve both the black hole information loss problem and the firewall conundrum. So stay tuned...

Sunday, July 20, 2014

I saw the future [Video] Making of.

You wanted me to smile. I did my best :p

With all the cropping and overlays, my computer worked on the video mixdown for a full 12 hours, and that at a miserable resolution. Amazingly, the video looks better after uploading it to YouTube. Whatever compression YouTube is using, it has nicely smoothed out some ugly pixelations that I couldn't get rid of.

The worst part of the video making is that my software screws up the audio timing on export. Try as I might, the lip movements never quite seem to be in sync, even though they look perfectly fine before export. I am not sure exactly what causes the problem. One issue is that the timing of my camera seems to be slightly inaccurate: if I record a video with the audio running in the background and later add the same audio on a second track, the video runs too fast by about 100 ms over 3 minutes. That's already enough to notice the delay, and it makes the editing really cumbersome. Another contributing factor seems to be simple errors in the data processing: the audio sometimes runs behind and then, with an ugly click, jumps back into place.

Another issue with the video is that, well, I don't have a video camera. I have a DSLR photo camera with a video option, but that has its limits. It does not for example automatically refocus during recording and it doesn't have a movable display either. That's a major problem since it means I can't focus the camera on myself. So I use a mop that I place in front of the blue screen, focus the camera on that, hit record, and then try to put myself in place of the mop. Needless to say, that doesn't always work, especially if I move around. This means my videos are crappy to begin with. They don't exactly get better with several imports and exports and rescalings and background removals and so on.

Oh yeah, and then the blue screen. After I noticed last time that pink is really a bad color for background removal, because skin tones are pink, not to mention lipstick, I asked Google. The internet in its eternal wisdom recommended a saturated blue rather than the turquoise I had thought of, and so I got myself a few meters of the cheapest royal blue fabric I could find online. When I replaced the background I turned into a zombie, and thus I was reminded that I have blue eyes. For this reason I have replaced the background with something similar to the original color. And my eyes look bluer than they actually are.

This brings me to the audio. After I had to admit that my so-called songs sound plainly crappy, I bought and read a very recommendable book called "Mixing Audio" by Roey Izhaki. Since then I know words like multiband compressor and reverb tail. The audio mix still isn't particularly good, but at least it's better and since nobody else will do it, I'll go and congratulate myself on this awesomely punchy bass-kick loop which you'll only really appreciate if you download the mp3 and turn the volume up to max. Also note how the high frequency plings come out crystal clear after I figured out what an equalizer is good for.

My vocal recording and processing has reached its limits. There's only so much one can do without a studio environment. My microphone picks up all kinds of noise, from the cars passing by over the computer fan and the neighbor's washing machine to the church bells. I basically can't do recordings in one stretch, I have to repeat everything a few times and pick the best pieces. I've tried noise-removal tools, but the results sound terrible to me and, worse, they are not reproducible, which is a problem since I have to patch pieces together. So instead I push the vocals through several high-pass filters to get rid of the background noise. This leaves my voice sounding thinner than it is, so then I add some low-frequency reverb and a little chorus and it comes out sounding mostly fine.
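Since the actual filter chain from the audio software isn't given, here is a minimal stdlib-only sketch of the high-pass idea: a one-pole high-pass filter (far cruder than what real audio software uses) attenuates a low-frequency hum while largely passing a higher-frequency tone. All parameters (8 kHz sample rate, 300 Hz cutoff, 50 Hz hum, 1 kHz stand-in for vocals) are illustrative assumptions, not the settings used for the video.

```python
import math

def high_pass(samples, sample_rate, cutoff_hz):
    """One-pole high-pass filter: attenuates frequencies below cutoff_hz."""
    rc = 1 / (2 * math.pi * cutoff_hz)
    dt = 1 / sample_rate
    alpha = rc / (rc + dt)
    out = [samples[0]]
    for i in range(1, len(samples)):
        # y[n] = alpha * (y[n-1] + x[n] - x[n-1])
        out.append(alpha * (out[-1] + samples[i] - samples[i - 1]))
    return out

def rms(xs):
    return math.sqrt(sum(x * x for x in xs) / len(xs))

sr = 8000
t = [i / sr for i in range(sr)]                        # one second of samples
hum = [math.sin(2 * math.pi * 50 * x) for x in t]      # 50 Hz background hum
voice = [math.sin(2 * math.pi * 1000 * x) for x in t]  # stand-in for the vocals

print(f"hum kept:   {rms(high_pass(hum, sr, 300)) / rms(hum):.2f}")
print(f"voice kept: {rms(high_pass(voice, sr, 300)) / rms(voice):.2f}")
```

The hum is strongly suppressed while the higher tone passes nearly untouched, which is also why the filtered voice sounds thinner: anything in the voice below the cutoff goes with the hum.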

I have given up on de-essing presets, they always leave me with a lisp on top of my German accent. Since I don't actually have a lot of vocals to deal with, I just treat all the 's' by hand in the final clip, and that sounds okay, at least to my ears.

Oh yeah, and I promise I'll not attempt again to hit an F#3, that was not a good idea. My voicebox clearly wasn't meant to support anything below B3. Which is strange, as I evidently speak mostly in a frequency range so low that it is plainly unstable on my vocal cords. I do fairly well with everything between the middle and high C and have developed the rather strange habit of singing second and third voices to myself when I get stuck on some calculation. I had the decency to remove the whole choir in the final version though ;)

Hope you enjoy this little excursion into the future. Altogether it was fun to make. And see, I even managed a smile, especially for you :o)

Saturday, July 19, 2014

What is a theory, What is a model?

During my first semester I coincidentally found out that the guy who often sat next to me, one of the better students, believed the Earth was only 15,000 years old. Once on the topic, he produced stacks of colorful leaflets which featured lots of names, decorated by academic titles, claiming that scientific evidence supports the scripture. I laughed at him, initially thinking he was joking, but he turned out to be dead serious and I was clearly going to roast in hell until future eternity.

If it hadn’t been for that strange encounter, I would summarily dismiss the US debates about creationism as a bizarre cultural reaction to lack of intellectual stimulation. But seeing that indoctrination can survive a physics and math education, and knowing the amount of time one can waste using reason against belief, I have a lot of sympathy for the fight of my US colleagues.

One of the main educational efforts I have seen is to explain what the word “theory” means to scientists. We are told that a “theory” isn’t just any odd story that somebody made up and told to his 12 friends, but that scientists use the word “theory” to mean an empirically well-established framework to describe observations.

That’s nice, but unfortunately not true. Maybe that is how scientists should use the word “theory”, but language doesn’t follow definitions: cashews aren’t nuts, avocados aren’t vegetables, black isn’t a color. And a theory sometimes isn’t a theory.

The word “theory” has a common root with “theater” and originally seems to have meant “contemplation” or generally a “way to look at something,” which is quite close to the use of the word in today’s common language. Scientists adopted the word, but not in any regular way. It’s not like we vote on what gets called a theory and what doesn’t. So I’ll not attempt to give you a definition that nobody uses in practice, but just try an explanation that I think comes close to practice.

Physicists use the word theory for a well worked-out framework to describe the real world. The theory is basically a map between a model, that is a simplified stand-in for a real-world system, and reality. In physics, models are mathematical, and the theory is the dictionary to translate mathematical structures into observable quantities.

Exactly what counts as “well worked-out” is somewhat subjective, but as I said, one doesn’t start with the definition. Instead, a framework that gets adopted by a big part of the community slowly comes to deserve the title of a “theory”. Most importantly, that means the theory has to fulfil the scientific standards of the field. If something is called a theory, it basically means scientists trust its quality.

One should not confuse the theory with the model. The model is what actually describes whatever part of the world you want to study by help of your theory.

General Relativity for example is a theory. It does not in and by itself describe anything we observe. For this, we first have to make several assumptions about symmetries and matter content to arrive at a model, the metric that describes space-time, from which observables can be calculated. Quantum field theory, to use another example, is a general calculation tool. To use it to describe the real world, you first have to specify what type of particles and what symmetries you have, and what process you want to look at; this gives you, for example, the standard model of particle physics. Quantum mechanics is a theory that doesn’t carry the name theory. A concrete model would for example be that of the hydrogen atom, and so on. String theory has been such a convincing framework for so many that it has risen to the status of a “theory” without there being any empirical evidence.

A model doesn't necessarily have to be about describing the real world. To get a better understanding of a theory, it is often helpful to examine very simplified models, even though one knows these do not describe reality. Such models are called “toy models”. Examples are neutrino oscillations with only two flavors (even though we know there are at least three), gravity in 2 spatial dimensions (even though we know there are at least three), and the φ⁴ theory – where we reach the limits of my language theory, because according to what I said previously it should be a φ⁴ model (it falls into the domain of quantum field theory).
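The two-flavor toy model mentioned above can in fact be written down in a couple of lines, using the standard two-flavor oscillation formula P(osc) = sin²(2θ)·sin²(1.27·Δm²·L/E), with L in km, E in GeV, and Δm² in eV². The parameter values below are illustrative, not fits to data.

```python
import math

def survival_probability(theta, dm2_ev2, L_km, E_GeV):
    """Two-flavor toy model: probability that a neutrino keeps its flavor.
    P(osc) = sin^2(2*theta) * sin^2(1.27 * dm2 * L / E)."""
    osc = math.sin(2 * theta) ** 2 * math.sin(1.27 * dm2_ev2 * L_km / E_GeV) ** 2
    return 1 - osc

# Illustrative parameters: maximal mixing, dm2 = 2.5e-3 eV^2, 1 GeV neutrinos.
for L in (0, 250, 500, 1000):
    p = survival_probability(math.pi / 4, 2.5e-3, L, 1.0)
    print(f"L = {L:4d} km: P(survival) = {p:.3f}")
```

This is exactly the sense in which a toy model is useful: the whole physics of flavor oscillation is visible in four lines, even though the real world has at least three flavors.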

Phenomenological models (the things I work with) are models explicitly constructed to describe a certain property or observation (the “phenomenon”). They often use a theory that is known not to be fundamental. One never talks about phenomenological theories because the whole point of doing phenomenology is the model that makes contact to the real world. A phenomenological model serves usually one of two purposes: It is either a preliminary description of existing data or a preliminary prediction for not-yet existing data, both with the purpose to lead the way to a fully-fledged theory.

One does not necessarily need a model together with the theory to make predictions. Some theories have consequences that are true for all models and are said to be “model-independent”. Though if one wants to test them experimentally, one has to use a concrete model again. Tests of violations of Bell’s inequality may be an example. Entanglement is a general property of quantum mechanics, straight from the axioms of the theory, yet to test it in a certain setting one has to specify a model again. The existence of extra dimensions in string theory may serve as another example of a model-independent prediction.

One doesn’t have to tell this to physicists, but the value of having a model defined in the language of mathematics is that one uses calculation, logical conclusions, to arrive at numerical values for observables (typically dependent on some parameters) from the basic assumptions of the model. I.e., it’s a way to limit the risk of fooling oneself and getting lost in verbal acrobatics. I recently read an interesting and occasionally amusing essay by a mathematician-turned-biologist who tries to explain to his colleagues what’s the point of constructing models:
“Any mathematical model, no matter how complicated, consists of a set of assumptions, from which are deduced a set of conclusions. The technical machinery specific to each flavor of model is concerned with deducing the latter from the former. This deduction comes with a guarantee, which, unlike other guarantees, can never be invalidated. Provided the model is correct, if you accept its assumptions, you must as a matter of logic also accept its conclusions.”
Well said.

After I realized the guy next to me in physics class wasn’t joking about his creationist beliefs, he went to great lengths explaining that carbon dating is a conspiracy. I went to great lengths making sure to henceforth place my butt safely far away from him. It is beyond me how one can study a natural science and still interpret the Bible literally. Though I have a theory about this…

Saturday, July 12, 2014

Post-empirical science is an oxymoron.

Image illustrating a phenomenologist after reading a philosopher go on about…

3:AM has an interview with philosopher Richard Dawid who argues that physics, or at least parts of it, are about to enter an era of post-empirical science. By this he means that “theory confirmation” in physics will increasingly be sought by means other than observational evidence because it has become very hard to experimentally test new theories. He argues that the scientific method must be updated to adapt to this development.

The interview is a mixture of statements that everybody must agree on, followed by subtle linguistic shifts that turn these statements into much stronger claims. The most obvious of these shifts is that Dawid flips repeatedly between “theory confirmation” and “theory assessment”.

Theoretical physicists do of course assess their theories by means other than fitting data. Mathematical consistency clearly leads the list, followed by semi-objective criteria like simplicity or naturalness, and other mostly subjective criteria like elegance, beauty, and the popularity of people working on the topic. These criteria are used for assessment because some of them have proven useful for arriving at theories that are empirically successful. Other criteria are used because they have proven useful for arriving at a tenured position.

Theory confirmation on the other hand doesn’t exist. The expression is sometimes used in a sloppy way to mean that a theory has been useful to explain many observations. But you never confirm a theory. You just have theories that are more, and others that are less useful. The whole purpose of the natural sciences is to find those theories that are maximally useful to describe the world around us.

This brings me to the other shift that Dawid makes in his string (ha-ha-ha) of words, which is that he alters the meaning of “science” as he goes. To see what I mean we have to make a short linguistic excursion.

The German word for science (“Wissenschaft”) is much closer to the original Latin meaning, “scientia” as “knowledge”. Science, in German, includes the social and the natural sciences, computer science, mathematics, and even the arts and humanities. There is for example the science of religion (Religionswissenschaft), the science of art (Kunstwissenschaft), science of literature, and so on. Science in German is basically everything you can study at a university, and as far as I am concerned mathematics is of course a science. However, in stark contrast to this, the common English use of the word “science” refers exclusively to the natural sciences and typically does not even include mathematics. To avoid conflating these two different meanings, I will explicitly refer to the natural sciences as such.

Dawid sets out talking about the natural sciences, but then strings (ha-ha-ha) his argument along on the “insights” that string theory has led to and the internal consistency that gives string theorists confidence that their theory is a correct description of nature. This “non-empirical theory assessment”, while important, can however only be a means to the end of an eventual empirical assessment. Without making contact with observation, a theory isn’t useful for describing the natural world, not part of the natural sciences, and not physics. These “insights” that Dawid speaks of are thus not assessments that can ever validate an idea as being good for describing nature, and a theory based only on non-empirical assessment does not belong in the natural sciences.

Did that hurt? I hope it did. Because I am pretty sick and tired of people selling semi-mathematical speculations as theoretical physics and blocking jobs with their so-called theories of nothing specifically that lead nowhere in particular. And that while looking down on those who work on phenomenological models because those phenomenologists, they’re not speaking Real Truth, they’re not among the believers, and their models are, as one string theorist once so charmingly explained to me, “way out there”.

Yeah, phenomenology is out there where science is done. Too many of those who call themselves theoretical physicists today seem to have forgotten that physics is about building models. It’s not about proving convergence criteria in some Hilbert space or classifying the topology of solutions of some equation in an arbitrary number of dimensions. Physics is not about finding Real Truth. Physics is about describing the world. That’s why I became a physicist – because I want to understand the world that we live in. And Dawid is certainly not helping to prevent more theoretical physicists from getting lost in math and philosophy when he attempts to validate their behavior by claiming the scientific method has to be updated.

The scientific method is a misnomer. There really isn’t such a thing as a scientific method. Science operates as an adaptive system, much like natural selection. Ideas are produced, their usefulness is assessed, and the result of this assessment is fed back into the system, leading to selection and gradual improvement of these ideas.

What is normally referred to as the “scientific method” are certain institutionalized procedures that scientists use because they have been shown to be efficient at finding the most promising ideas quickly. That includes peer review, double-blind studies, criteria for statistical significance, mathematical rigor, etc. The procedures, and how stringent (ha-ha-ha) they are, differ somewhat between fields. Non-empirical theory assessment has been used in theoretical physics for a long time. But these procedures are not set in stone, they’re there as long as they seem to work, and the scientific method certainly does not have to be changed. (I would even argue it can’t be changed.)

The question that we should ask instead, the question I think Dawid should have asked, is whether more non-empirical assessment is useful at the present moment. This is a relevant question because it requires one to ask “useful for what”? As I clarified above, I myself mean “useful to describe the real world”. I don’t know what “use” Dawid is after. Maybe he just wants to sell his book, that’s some use indeed.

It is not a simple question to answer how much theory assessment is good and how much is too much, or for how long one should pursue a theory trying to make contact to observation before giving up. I don’t have answers to this, and I don’t see that Dawid has.

Some argue that string theory has been assessed too much already, and that more than enough money has been invested into it. Maybe that is so, but I think the problem is not that too much effort has been put into non-empirical assessment, but that too little effort has been put into pursuing the possibility of empirical test. It’s not a question of absolute weight on any side, it’s a question of balance.

And yes, of course this is related to it becoming increasingly more difficult to experimentally test new theories. That, together with the self-supporting community dynamics that Lee so nicely called out as group-think. Not that loop quantum gravity is any better than string theory.

In summary, there’s no such thing as post-empirical physics. If it doesn’t describe nature, if it has nothing to say about any observation, if it doesn’t even aspire to this, it’s not physics. This leaves us with a nomenclature problem. What do you call a theory that has only non-empirical facts speaking for it, and one that the mathematical physicists apparently don’t want either? How about mathematical philosophy, or philosophical mathematics? Or maybe we should call it Post-empirical Dawidism.

[Peter Woit also had a comment on the 3:AM interview with Richard Dawid.]

Sunday, July 06, 2014

You’re not a donut. And not a mug either.

A topologist, as the joke goes, is somebody who can’t tell a mug from a donut.

Topology is a field of mathematics concerned with the properties of spaces and their invariants. One of these invariants is the number of cuts you can make through an object without it falling apart into separate pieces, known as the “genus”. You can cut a donut and it becomes an open ring, yet it is still one piece, and you can cut the handle of a mug and it won’t fall off. Thus, they’re topologically the same.

The genus essentially counts the number of holes, though that can be slightly misleading. A representative survey among our household members for example revealed that the majority of people count four holes in a T-shirt, while its genus is actually 3. (Make it a tank top, cut open the shoulders and down the front. If you cut any more, it will fall apart.)
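For a closed orientable surface you can also read the genus off a polyhedral mesh through the Euler characteristic, χ = V − E + F = 2 − 2g. A minimal sketch in Python (the cube and torus meshes below are textbook examples, not anything specific to this post):

```python
def genus(vertices, edges, faces):
    """Genus of a closed orientable surface from a polyhedral mesh,
    using the Euler characteristic chi = V - E + F = 2 - 2g."""
    chi = vertices - edges + faces
    assert chi % 2 == 0, "chi is even for a closed orientable surface"
    return (2 - chi) // 2

# Cube: topologically a sphere, so genus 0.
print(genus(8, 12, 6))  # 0

# Quad mesh of a torus with n x m cells: V = nm, E = 2nm, F = nm.
n, m = 10, 10
print(genus(n * m, 2 * n * m, n * m))  # 1
```

The point of the invariant is exactly the mug-and-donut joke: however you deform or re-mesh the surface, χ, and hence the genus, does not change.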

Every now and then I read that humans are topologically donuts, with anus, excuse me, genus one. Yes, that is obviously wrong, and I know you’ve all been waiting for me to count the holes in your body.

To begin with, the surface of the human body, like any other non-mathematical surface, is not impenetrable, and how many holes it has is a matter of resolution. To a neutrino, for example, you’re pretty much all holes.

Leaving aside subatomic physics and marching on to the molecular level, the human body possesses an intricate network of transport routes for essential nutrients, proteins, bacteria and cells, and what went in at one location can leave pretty much anywhere else. You can for example absorb some things through your lungs and get rid of them in your sweat, and you can absorb some medications through your skin. Not to mention that the fluid you ingest passes through some cellular layers and eventually leaves through yet another hole.

But even above the molecular level, the human body has more than one hole. One of the most unfortunate pieces of evolutionary heritage we have is that our airways are conjoined with our foodways. As you might have figured out when you were 4 years old, you can drink through your nose, and since you have two nostrils that brings you up to genus three.

Next, the human eyes sit pretty loosely in their sockets and the nasal cavities are connected to the eye sockets in various ways. I can blow out air through my eyes, so I count up to genus 5 then. Alas, people tend to find this a rather strange skill, so I’ll leave it to you whether you want to count your eyes to uppen your holyness. And while we are speaking of personal oddities, should you have any body piercings, these will pimp up your genus further. I have piercings in my ears, so that brings my counting to genus 7.

Finally, for the ladies, the fallopian tubes are not sealed off by the ovaries. The egg that is released during ovulation has to first make it to the tube. It is known to happen occasionally that an egg travels to the fallopian tube on the other side, meaning the tubes are connected through the abdominal cavity, forming a loop that adds one to the genus.

This brings my counting to 5 for the guys, 6 for the ladies, plus any piercings that you may have.
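If you want to check my arithmetic, the tally is simple addition; the Python below just restates the counting above (eye-socket loops included, piercings left as a variable since they’re optional):

```python
# Each entry is one independent loop through the body, as counted above.
loops = {
    "digestive tract": 1,  # mouth to the other end
    "nostrils":        2,  # each nostril joins the foodway
    "eye sockets":     2,  # each connects to the nasal cavities
}
men = sum(loops.values())            # genus 5
women = men + 1                      # fallopian tubes loop through the abdomen
piercings = 2                        # e.g. one per ear; adjust to taste
print(men, women, men + piercings)   # 5 6 7
```

With my two ear piercings, `men + piercings` reproduces the genus 7 I counted for myself.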

And if you have trouble imagining a genus 6 surface, here is some visual aid.

[Images: surfaces of genus 0 through 6.]