Monday, June 26, 2017

Dear Dr B: Is science democratic?

    “Hi Bee,

    One of the phrases often repeated here in Italy by so-called “science enthusiasts” is that “science is not democratic,” which to me sounds like an excuse to justify some authoritarian or semi-fascist fantasy.

    We see this on countless “Science pages”, one very popular example being Fare Serata Con Galileo. It's not a bad page per se, quite the contrary, but the level of comments with variations of “Democracy is overrated,” “Darwin works to eliminate weak and stupid people,” and the usual “Science is not democratic” is unbearable. It underscores a troubling sympathy for authoritarian politics that seems to be more and more common among “science enthusiasts.” The classic example given is “the speed of light is not voted,” which, as true as it may be, has a sinister resonance to me.

    Could you comment on this on your blog?

    Luca S.”

Dear Luca,

Wow, I had no idea there’s so much hatred in the backyards of science communication.

Hand count at convention of the German
party CDU. Image Source: AFP
It’s correct that science isn’t democratic, but that doesn’t mean it’s fascistic. Science is a collective enterprise and a type of adaptive system, just like democracy is. But science isn’t democratic any more than sausage is a fruit just because you can eat both.

In an adaptive system, small modifications create feedback that leads to optimization. The best-known example is probably Darwinian evolution, in which a species’ genetic information receives feedback through natural selection, thereby optimizing the odds of successful reproduction. A market economy is also an adaptive system. Here, the feedback happens through pricing. A free market optimizes “utility,” which is, roughly speaking, a measure of the agents’ (customers’/producers’) satisfaction.

Democracy too is an adaptive system. Its task is to match decisions that affect the whole collective with the electorate’s values. We use democracy to keep our “is” close to the “ought.”

Democracies are more stable than monarchies or autocracies because an independent leader is unlikely to continuously make decisions which the governed people approve of. And the more governed people disapprove, the more likely they are to chop off the king’s head. Democracy, hence, works better than monarchy for the same reason a free market works better than a planned economy: It uses feedback for optimization, and thereby increases the probability for serving peoples’ interests.

The scientific system too uses feedback for optimization – this is the very basis of the scientific method: A hypothesis that does not explain observations has to be discarded or amended. But that’s about where similarities end.

The most important difference between the scientific, democratic, and economic system is the weight of an individual’s influence. In a free market, influence is weighted by wealth: The more money you can invest, the more influence you can have. In a democracy, each voter’s opinion has the same weight. That’s pretty much the definition of democracy – and note that this is a value in itself.

In science, influence is correlated with expertise. While expertise doesn’t guarantee influence, an expert is more likely to hold relevant knowledge, hence expertise is in practice strongly correlated with influence.

There are a lot of things that can go wrong with scientific self-optimization – and a lot of things do go wrong – but that’s a different story and shall be told another time. Still, optimizing hypotheses by evaluating empirical adequacy is how it works in principle. Hence, science clearly isn’t democratic.

Democracy, however, plays an important role for science.

For science to work properly, scientists must be free to communicate and discuss their findings. Non-democratic societies often stifle discussion on certain topics which can create a tension with the scientific system. This doesn’t have to be the case – science can flourish just fine in non-democratic societies – but free speech strongly links the two.

Science also plays an important role for democracy.

Politics isn’t done once the electorate has been polled on what future they would like to see. Elected representatives then have to find out how to best work towards this future, and scientific knowledge is necessary to get from “is” to “ought.”

But things often go wrong at the step from “is” to “ought.” Trouble is, the scientific system does not export knowledge in a format that can be directly imported by the political system. The information that elected representatives would need to make decisions is a breakdown of predictions with quantified risks and uncertainties. But science doesn’t come with a mechanism to aggregate knowledge. For an outsider, it’s a mess of technical terms and scientific papers and conferences – and every possible opinion seems to be defended by someone!

As a result, public discourse often draws on the “scientific consensus,” but this is a bad way to quantify risk and uncertainty.

To begin with, scientists are terribly disagreeable, and the only consensuses I know of are those on thousand-year-old questions. More importantly, counting the number of people who agree with a statement simply isn’t an accurate quantifier of certainty. The result of such counting inevitably depends on how much expertise the counted people have: Too little expertise, and they’re likely to be ill-informed. Too much expertise, and they’re likely to have personal stakes in the debate. Worse still, the head-count can easily be skewed by pouring money into some research programs.

Therefore, the best way we presently have to make scientific knowledge digestible for politicians is to use independent panels. Such panels – done well – can circumvent both the problem of personal bias and that of the skewed head-count. In the long run, however, I think we need a fourth arm of government to prevent politicians from attempting to interpret scientific debate. It’s not their job and it shouldn’t be.

But those “science enthusiasts” who you complain about are as wrong-headed as the science deniers who selectively disregard facts that are inconvenient for their political agenda. Both confuse opinions about what “ought to be” with the question of how to get there. The former is a matter of opinion, the latter isn’t.

Take the vaccine debate you mentioned, for example. It is one question what the benefits of vaccination are and who is at risk from side-effects – that’s a scientific debate. It’s another question entirely whether we should allow parents to put their own and other peoples’ children at an increased risk of early death or a life of disability. There’s no scientific and no logical argument that tells us where to draw the line.

Personally, I think parents who don’t vaccinate their kids are harming minors and society shouldn’t tolerate such behavior. But this debate has very little to do with scientific authority. Rather, the issue is to what extent parents are allowed to ruin their offspring’s life. Your values may differ from mine.

There is also, I should add, no scientific and no logical argument for counting the vote of everyone (above some quite arbitrary age threshold) with the same weight. Indeed, as Daniel Gilbert argues, we are pretty bad at predicting what will make us happy. If he’s right, then the whole idea of democracy is based on a flawed premise.

So – science isn’t democratic, never has been, never will be. But rather than stating the obvious, we should find ways to better integrate this non-democratically obtained knowledge into our democracies. Claiming that science settles political debate is as stupid as ignoring knowledge that is relevant to make informed decisions.

Science can only help us to understand the risks and opportunities that our actions bring. It can’t tell us what to do.

Thanks for an interesting question.

Tuesday, June 20, 2017

If tensions in cosmological data are not measurement problems, they probably mean dark energy changes

Galaxy pumpkin.
Src: The Swell Designer
According to physics, the universe and everything in it can be explained by but a handful of equations. They’re difficult equations, all right, but their simplest feature is also the most mysterious one. The equations contain a few dozen parameters that are – for all we presently know – unchanging, and yet these numbers determine everything about the world we inhabit.

Physicists have spent much brain-power on the questions where these numbers come from, whether they could have taken any other values than the ones we observe, and whether exploring their origin is even in the realm of science.

One of the key questions when it comes to the parameters is whether they are really constant, or whether they are time-dependent. If they vary, their time-dependence would have to be determined by yet another equation, and that would change the whole story that we currently tell about our universe.

The best known of the fundamental parameters that dictate how the universe behaves is the cosmological constant. It is what causes the universe’s expansion to accelerate. The cosmological constant is usually assumed to be, well, constant. If it isn’t, it is more generally referred to as ‘dark energy.’ If our current theories for the cosmos are correct, our universe will expand forever into a cold and dark future.

The value of the cosmological constant is infamously the worst prediction ever made using quantum field theory; the math says it should be 120 orders of magnitude larger than what we observe. But that the cosmological constant has a small non-zero value is extremely well established by measurement, well enough that a Nobel Prize was awarded for its discovery in 2011.

The Nobel Prize winners Perlmutter, Schmidt, and Riess measured the expansion rate of the universe, encoded in the Hubble parameter, by looking at supernovae distributed over various distances. They concluded that the universe is not only expanding, but is expanding at an increasing rate – a behavior that can only be explained by a nonzero cosmological constant.

It is controversial though exactly how fast the expansion is today, that is, how large the current value of the Hubble constant, H0, is. There are different ways to measure this constant, and physicists have known for a few years that the different measurements give different results. This tension in the data is difficult to explain, and it has so far remained unresolved.

One way to determine the Hubble constant is by using the cosmic microwave background (CMB). The small temperature fluctuations in the CMB spectrum encode the distribution of plasma in the early universe and the changes of the radiation since. From fitting the spectrum with the parameters that determine the expansion of the universe, physicists get a value for the Hubble constant. The most accurate of such measurements is currently that from the Planck satellite.

Another way to determine the Hubble constant is to deduce the expansion of the universe from the redshift of the light from distant sources. This is the way the Nobel Prize winners made their discovery, and the precision of this method has since been improved. These two ways to determine the Hubble constant give results that differ with a statistical significance of 3.4 σ. That’s a probability of less than one in a thousand that the discrepancy is due to random data fluctuations.
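As a quick sanity check (a sketch of mine, not from the original analyses), one can convert a significance in standard deviations into a two-sided Gaussian tail probability; 3.4 σ indeed comes out below one in a thousand:

```python
import math

def two_sided_p(sigma):
    """Two-sided tail probability of a Gaussian at `sigma` standard deviations."""
    return math.erfc(sigma / math.sqrt(2.0))

print(f"p-value for 3.4 sigma: {two_sided_p(3.4):.1e}")
```

The Gaussian assumption is the conventional one behind quoting significances in σ; the actual likelihood of a given measurement need not be Gaussian.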

Various explanations for this have since been proposed. One possibility is that it’s a systematic error in the measurement, most likely in the CMB measurement from the Planck mission. There are reasons to be skeptical, because the tension goes away when the finer structures (the large multipole moments) of the data are omitted. For many astrophysicists, this is an indicator that something’s amiss either with the Planck measurement or the data analysis.

Or maybe it’s a real effect. In this case, several modifications of the standard cosmological model have been put forward. They range from additional neutrinos to massive gravitons to changes in the cosmological constant.

That the cosmological constant changes from one place to the next is not an appealing option because this tends to screw up the CMB spectrum too much. But the currently most popular explanation for the data tension seems to be that the cosmological constant changes in time.

A group of researchers from Spain, for example, claims that they have a stunning 4.1 σ preference for a time-dependent cosmological constant over an actually constant one.

This claim seems to have been widely ignored, and indeed one should be cautious. They test for a very specific time-dependence, and their statistical analysis does not account for other parameterizations they might have previously tried. (The theoretical physicist’s variant of post-selection bias.)

Moreover, they fit their model not only to the two above-mentioned datasets, but to a whole bunch of others at the same time. This makes it hard to tell why their model seems to work better. A couple of cosmologists whom I asked why this group’s remarkable results have been ignored complained that the data analysis is opaque.

Be that as it may, just when I put the Spaniards’ paper away, I saw another paper that supported their claim with an entirely independent study based on weak gravitational lensing.

Weak gravitational lensing happens when a foreground galaxy distorts the images of galaxies farther away. The qualifier ‘weak’ sets this effect apart from strong lensing, which is caused by massive nearby objects – such as black holes – and deforms point-like sources to partial rings. Weak gravitational lensing, on the other hand, is not as easily recognizable and must be inferred from the statistical distribution of the shapes of galaxies.

The Kilo Degree Survey (KiDS) has gathered and analyzed weak lensing data from about 15 million distant galaxies. While their measurements are not sensitive to the expansion of the universe, they are sensitive to the density of dark energy, which affects the way light travels from the galaxies towards us. This density is encoded in a cosmological parameter imaginatively named σ8. Their data, too, is in conflict with the CMB data from the Planck satellite.

The members of the KiDS collaboration have tried out which changes to the cosmological standard model work best to ease the tension in the data. Intriguingly, it turns out that ahead of all explanations the one that works best is that the cosmological constant changes with time. The change is such that the effects of accelerated expansion are becoming more pronounced, not less.

In summary, it seems increasingly unlikely the tension in the cosmological data is due to chance. Cosmologists are cautious and most of them bet on a systematic problem with the Planck data. However, if the Planck measurement receives independent confirmation, the next best bet is on time-dependent dark energy. It wouldn’t make our future any brighter though. The universe would still expand forever into cold darkness.

[This article previously appeared on Starts With A Bang.]

Update June 21: Corrected several sentences to address comments below.

Wednesday, June 14, 2017

What’s new in high energy physics? Clockworks.

Clockworks. [Img via dwan1509].
High energy physics has phases. I don’t mean phases like matter has – solid, liquid, gaseous and so on. I mean phases like cranky toddlers have: One week they eat nothing but noodles, the next week anything as long as it’s white, then toast with butter but it must be cut into triangles.

High energy physics is like this. Twenty years ago, it was extra dimensions, then we had micro black holes, unparticles, little Higgses – and the list goes on.

But there hasn’t been a big, new trend since the LHC falsified everything that was falsifiable. It’s like particle physics stepped over the edge of a cliff but hasn’t looked down and now just walks on nothing.

The best candidate for a new trend that I saw in the past years is the “clockwork mechanism,” though the idea just took a blow and I’m not sure it’ll go much farther.

The origins of the model go back to late 2015, when the term “clockwork mechanism” was coined by Kaplan and Rattazzi, though Cho and Im pursued a similar idea and published it at almost the same time. In August 2016, clockworks were picked up by Giudice and McCullough, who advertised the model as “a useful tool for model-building applications” that “offers a solution to the Higgs naturalness problem.”

Gears. Img Src: Giphy.
The Higgs naturalness problem, to remind you, is that the mass of the Higgs receives large quantum corrections. The Higgs is the only particle in the standard model that suffers from this problem because it’s the only scalar. These quantum corrections can be cancelled by subtracting a constant so that the remainder fits the observed value, but then the constant would have to be very finely tuned. Most particle physicists think that this is too much of a coincidence and hence search for other explanations.
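To put a number on the tuning, here is a back-of-envelope sketch; the choice of the Planck mass as the size of the corrections is my illustrative assumption, not something the post specifies. If the corrections are of the order of the squared Planck mass, the cancellation must be precise to roughly one part in 10³⁴:

```python
# Illustrative numbers: observed Higgs mass and the Planck mass as an
# assumed size for the quantum corrections to the squared Higgs mass.
m_higgs = 125.0      # GeV
m_planck = 1.22e19   # GeV
correction = m_planck ** 2                      # size of the correction, GeV^2
required_precision = m_higgs ** 2 / correction  # how finely the constant must be tuned
print(f"relative fine-tuning: {required_precision:.0e}")
```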

Before the LHC turned on, the most popular solution to the Higgs naturalness issue was that some new physics would show up in the energy range comparable to the Higgs mass. We now know, however, that there’s no new physics nearby, and so the Higgs mass has remained unnatural.

Clockworks are a mechanism to create very small numbers in a “natural” way, that is, from numbers that are close to 1. This can be done by copying a field multiple times and then coupling each copy to two neighbors so that they form a closed chain. This is the “clockwork,” and it is assumed to have couplings with values close to 1 which are, however, asymmetric between chain neighbors.

The clockwork’s chain of fields has eigenmodes that can be obtained by diagonalizing the mass matrix. These modes are the “gears” of the clockwork and they contain one massless particle.

The important feature of the clockwork is now that this massless particle’s mode has a coupling that scales with the clockwork’s coupling taken to the N-th power, where N is the number of clockwork gears. This means even if the original clockwork coupling was only a little smaller than 1, the coupling of the lightest clockwork mode becomes small very fast when the clockwork grows.

Thus, clockworks are basically a complicated way to make a number of order 1 small by exponentiating it.
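A minimal numerical sketch of that statement (the per-link coupling and gear counts below are made-up illustrative values):

```python
def zero_mode_coupling(eps, n_gears):
    """Effective coupling of the massless clockwork mode: the per-link
    coupling eps (a number of order 1, slightly below 1) raised to the
    number of gears N."""
    return eps ** n_gears

for n in (10, 100, 1000):
    print(n, zero_mode_coupling(0.9, n))
```

Even eps = 0.9 yields a suppression of about 10⁻⁴⁶ for a thousand gears – the exponentiation does all the work.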

I’m an outspoken critic of arguments from naturalness (and have been long before we had the LHC data) so it won’t surprise you to hear that I am not impressed. I fail to see how choosing one constant to match observation is supposedly worse than introducing not only a new constant, but also N copies of some new field with a particular coupling pattern.

Either way, by March 2017, Ben Allanach reports from Rencontres de Moriond – the most important annual conference in particle physics – that clockworks are “getting quite a bit of attention” and are “new fertile ground.”

Ben is right. Clockworks contain one light and weakly coupled mode – difficult to detect because of the weak coupling – and a spectrum of strongly coupled but massive modes – difficult to detect because they’re massive. That makes the model appealing because it will remain impossible to rule it out for a while. It is, therefore, perfect playground for phenomenologists.

And sure enough, the arXiv has since seen further papers on the topic. There’s clockwork inflation and clockwork dark matter, a clockwork axion and clockwork composite Higgses – you get the picture.

But then, in April 2017, a criticism of the clockwork mechanism appears on the arXiv. Its authors Craig, Garcia Garcia, and Sutherland point out that the clockwork mechanism can only be used if the fields in the clockwork’s chain have abelian symmetry groups. If the group isn’t abelian the generators will mix together in the zero mode, and maintaining gauge symmetry then demands that all couplings be equal to one. This severely limits the application range of the model.

A month later, Giudice and McCullough reply to this criticism essentially by saying “we know this.” I have no reason to doubt it, but I still found the Craig et al criticism useful for clarifying what clockworks can and can’t do. This means in particular that the supposed solution to the hierarchy problem does not work as desired because to maintain general covariance one is forced to put a hierarchy of scales into the coupling already.

I am not sure whether this will discourage particle physicists from pursuing the idea further or whether more complicated versions of clockworks will be invented to save naturalness. But I’m confident that – like a toddler’s phase – this too shall pass.

Wednesday, June 07, 2017

Dear Dr B: What are the chances of the universe ending out of nowhere due to vacuum decay?

    “Dear Sabine,

    my name's [-------]. I'm an anxiety sufferer of the unknown and have been for 4 years. I recently came across some articles saying that the universe could just end out of nowhere, either through false vacuum/vacuum bubbles or just ending, and I'm wondering what the chances are of this occurring anytime soon. I know it sounds silly but I'd be dearly grateful for your reply and hopefully look forward to that

    Many thanks”


Dear Anonymous,

We can’t predict anything.

You see, we make predictions by seeking explanations for available data, and then extrapolating the best explanation into the future. It’s called “abductive reasoning,” or “inference to the best explanation” and it sounds reasonable until you ask why it works. To which the answer is “Nobody knows.”

We know that it works. But we can’t justify inference with inference, hence there’s no telling whether the universe will continue to be predictable. Consequently, there is also no way to exclude that tomorrow the laws of nature will stop and planet Earth will fall apart. But do not despair.

Francis Bacon – widely acclaimed as the first to formulate the scientific method – might have reasoned his way out by noting there are only two possibilities. Either the laws of nature will break down unpredictably or they won’t. If they do, there’s nothing we can do about it. If they don’t, it would be stupid not to use predictions to improve our lives.

It’s better to prepare for a future that you don’t have than to not prepare for a future you do have. And science is based on this reasoning: We don’t know why the universe is comprehensible and why the laws of nature are predictive. But we cannot do anything about unknown unknowns anyway, so we ignore them. And if we do that, we can benefit from our extrapolations.

Just how well scientific predictions work depends on what you try to predict. Physics is the currently most predictive discipline because it deals with the simplest of systems, those whose properties we can measure to high precision and whose behavior we can describe with mathematics. This enables physicists to make quantitatively accurate predictions – if they have sufficient data to extrapolate.

The articles that you read about vacuum decay, however, are unreliable extrapolations of incomplete evidence.

Existing data in particle physics are well-described by a field – the Higgs-field – that fills the universe and gives masses to elementary particles. This works because the value of the Higgs-field is different from zero even in vacuum. We say it has a “non-vanishing vacuum expectation value.” The vacuum expectation value can be calculated from the masses of the known particles.

In the currently most widely used theory for the Higgs and its properties, the vacuum expectation value is non-zero because it has a potential with a local minimum whose value is not at zero.

We do not, however, know that the minimum which the Higgs currently occupies is the only minimum of the potential and – if the potential has another minimum – whether the other minimum would be at a smaller energy. If that were so, then the present state of the vacuum would not be stable; it would merely be “meta-stable” and would eventually decay to the lowest minimum. In this case, we would live today in what is called a “false vacuum.”

Image Credits: Gary Scott Watson.

If our vacuum decays, the world will end – I don’t know a more appropriate expression. Such a decay, once triggered, releases an enormous amount of energy – and it spreads at the speed of light, tearing apart all matter it comes in contact with, until all vacuum has decayed.

How can we tell whether this is going to happen?

Well, we can try to measure the properties of the Higgs’ potential and then extrapolate it away from the minimum. This works much like Taylor series expansions, and it has the same pitfalls. Indeed, making predictions about the minima of a function based on a polynomial expansion is generally a bad idea.

Just look, for example, at the Taylor series of the sine function. The full function has an infinite number of minima, all at exactly the same value, but you’d never guess that from the first terms in the series expansion. First it has one minimum, then it has two minima of different value, then again it has only one – and the higher the order of the expansion, the more minima you get.
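This is easy to check numerically. The sketch below (the sampling window and step size are arbitrary choices of mine) counts the interior local minima of the truncated sine series at successive orders:

```python
import math

def sin_taylor(x, order):
    """Partial sum of the sine Taylor series up to the term x**order (order odd)."""
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range((order + 1) // 2))

def count_local_minima(order, lo=-6.0, hi=6.0, step=0.01):
    """Count sample points that are lower than both their neighbors."""
    xs = [lo + i * step for i in range(int((hi - lo) / step) + 1)]
    ys = [sin_taylor(x, order) for x in xs]
    return sum(1 for i in range(1, len(ys) - 1)
               if ys[i] < ys[i - 1] and ys[i] < ys[i + 1])

for order in (3, 5, 7):
    print(order, count_local_minima(order))  # one minimum, then two, then one
```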

The situation for the Higgs’ potential is more complicated because the coefficients are not constant, but the argument is similar. If you extract the best-fit potential from the available data and extrapolate it to other values of the Higgs-field, then you find that our present vacuum is meta-stable.

The figure below shows the situation for the current data (figure from this paper). The horizontal axis is the Higgs mass, the vertical axis the mass of the top-quark. The current best-fit is the upper left red point in the white region labeled “Metastability.”
Figure 2 from Bednyakov et al, Phys. Rev. Lett. 115, 201802 (2015).

This meta-stable vacuum has, however, a ridiculously long lifetime of about 10^600 times the current age of the universe, give or take a few billion billion billion years. This means that the vacuum will almost certainly not decay until all stars have burnt out.

However, this extrapolation of the potential assumes that there aren’t any unknown particles at energies higher than what we have probed, and no other changes to physics as we know it either. And there is simply no telling whether this assumption is correct.

The analysis of vacuum stability is not merely an extrapolation of the presently known laws into the future – which would be justified – it is also an extrapolation of the presently known laws into an untested energy regime – which is not justified. This stability debate is therefore little more than a mathematical exercise, a funny way to quantify what we already know about the Higgs’ potential.

Besides, from all the ways I can think of humanity going extinct, this one worries me least: It would happen without warning, it would happen quickly, and nobody would be left behind to mourn. I worry much more about events that may cause much suffering, like asteroid impacts, global epidemics, nuclear war – and my worry-list goes on.

Not all worries can be cured by rational thought, but since I double-checked that you want facts and not comfort, fact is that current data indicates our vacuum is meta-stable. But its decay is an unreliable prediction based on the unfounded assumption that there either are no changes to physics at energies beyond the ones we have tested, or that such changes don’t matter. And even if you buy this, the vacuum almost certainly wouldn’t decay as long as the universe is hospitable for life.

Particle physics is good for many things, but generating potent worries isn’t one of them. The biggest killer in physics is still the 2nd law of thermodynamics. It will get us all, eventually. But keep in mind that the only reason we play the prediction game is to get the best out of the limited time that we have.

Thanks for an interesting question!

Wednesday, May 31, 2017

Does parametric resonance solve the cosmological constant problem?

An oscillator too.
Source: Giphy.
Tl;dr: Ask me again in ten years.

A lot of people asked for my opinion about a paper by Wang, Zhu, and Unruh that recently got published in Physical Review D, one of the top journals in the field.

Following a press-release from UBC, the paper has attracted quite some attention in the pop science media which is remarkable for such a long and technically heavy work. My summary of the coverage so far is “bla-bla-bla parametric resonance.”

I tried to ignore the media buzz a) because it’s a long paper, b) because it’s a long paper, and c) because I’m not your public community debugger. I actually have my own research that I’m more interested in. Sulk.

But of course I eventually came around and read it. Because I’ve toyed with a similar idea some while ago and it worked badly. So, clearly, these folks outscored me, and after some introspection I thought that instead of being annoyed by the attention they got, I should figure out why they succeeded where I failed.

Turns out that once you see through the math, the paper is not so difficult to understand. Here’s the quick summary.

One of the major problems in modern cosmology is that vacuum fluctuations of quantum fields should gravitate. Unfortunately, if one calculates the energy density and pressure contained in these fluctuations, the values are much too large to be compatible with the expansion history of the universe.

This vacuum energy gravitates the same way as the cosmological constant. Such a large cosmological constant, however, should lead to a collapse of the universe long before the formation of galactic structures. If you switch the sign, the universe doesn’t collapse but expands so rapidly that structures can’t form because they are ripped apart. Evidently, since we are here today, neither happened. Instead, we observe a small positive cosmological constant. Where did that come from? That’s the cosmological constant problem.

The problem can be solved by introducing an additional cosmological constant that cancels the vacuum energy from quantum field theory, leaving behind the observed value. This solution is both simple and consistent. It is, however, unpopular because it requires fine-tuning the additional term so that the two contributions almost – but not exactly – cancel. (I believe this argument to be flawed, but that’s a different story and shall be told another time.) Physicists therefore have tried for a long time to explain why the vacuum energy isn’t large or doesn’t couple to gravity as expected.

Strictly speaking, however, the vacuum energy density is not constant, but – as you expect of fluctuations – it fluctuates. It is merely the average value that acts like a cosmological constant, but the local value should change rapidly both in space and in time. (These fluctuations are why I’ve never bought the “degravitation” idea according to which the vacuum energy decouples because gravity has a built-in high-pass filter. In that case, you could decouple a cosmological constant, but you’d still be stuck with the high-frequency fluctuations.)

In the new paper, the authors make the audacious attempt to calculate how gravity reacts to the fluctuations of the vacuum energy. I say it’s audacious because this is not a weak-field approximation and solving the equations for gravity without a weak-field approximation and without symmetry assumptions (as you would have for the homogeneous and isotropic case) is hard, really hard, even numerically.

The vacuum fluctuations are dominated by very high frequencies corresponding to a usually rather arbitrarily chosen ‘cutoff’ – denoted Λ – where the effective theory for the fluctuations should break down. One commonly assumes that this frequency roughly corresponds to the Planck mass, mp. The key to understanding the new paper is that the authors do not assume this cutoff, Λ, to be at the Planck mass, but at a much higher energy, Λ >> mp.

As they demonstrate in the paper, massaged into a suitable form, one of the field equations for gravity takes the form of an oscillator equation with a time- and space-dependent coupling term. This means, essentially, space-time at each place has the properties of a driven oscillator.

The important observation that solves the cosmological constant problem is then that the typical resonance frequency of this oscillator is Λ²/mp, which is by assumption much larger than the main frequency of fluctuations the oscillator is driven by, which is Λ. This means that space-time resonates with the frequency of the vacuum fluctuations – leading to an exponential expansion like that from a cosmological constant – but it resonates only with higher harmonics, so that the resonance is very weak.

The result is that the amplitude of the oscillations grows exponentially, but it grows slowly. The effective cosmological constant they get by averaging over space is therefore not, as one would naively expect, Λ, but (omitting factors that are hopefully of order one) Λ exp(−Λ²/mp). One hence uses a trick quite common in high-energy physics, that one can create a large hierarchy of numbers by having a small hierarchy of numbers in an exponent.
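The arithmetic behind this trick is easy to check. A minimal sketch, assuming for illustration that the exponent is the squared frequency ratio (Λ/mp)² in Planck units and ignoring all the order-one factors:

```python
import math

# Target: suppress the naive vacuum energy by ~120 orders of magnitude.
target_orders = 120

# Required exponent x in exp(-x) = 10^-120, i.e. x = 120 ln(10) ≈ 276
exponent = target_orders * math.log(10)

# If the exponent is the squared ratio (Lambda/m_p)^2, the cutoff only
# needs to exceed the Planck mass by a modest factor:
ratio = math.sqrt(exponent)        # ≈ 16.6

suppression = math.exp(-ratio**2)  # ≈ 1e-120
print(f"Lambda/m_p ≈ {ratio:.1f}, suppression ≈ {suppression:.1e}")
```

A hierarchy of merely ~17 between the two scales in the exponent translates into a hierarchy of 10¹²⁰ in the result.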

In conclusion, by pushing the cutoff above the Planck mass, they suppress the resonance and slow down the resulting acceleration.

Neat, yes.

But I know you didn’t come for the nice words, so here’s the main course. The idea has several problems. Let me start with the most basic one, which is also the reason I once discarded a (related but somewhat different) project. It’s that their solution doesn’t actually solve the field equations of gravity.

It’s not difficult to see. Forget all the stuff about parametric resonance for a moment. Their result doesn’t solve the field equations if you set all the fluctuations to zero, so that you get back the case with a cosmological constant. That’s because if you integrate the second Friedmann-equation for a negative cosmological constant you can only solve the first Friedmann-equation if you have negative curvature. You then get Anti-de Sitter space. They have not introduced a curvature term, hence the first Friedmann-equation just doesn’t have a (real valued) solution.
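To see this explicitly, here is a minimal sketch with the standard Friedmann equations, keeping only a cosmological constant as source (scale factor a, spatial curvature k):

```latex
\left(\frac{\dot a}{a}\right)^2 = \frac{\Lambda}{3} - \frac{k}{a^2}\,,
\qquad
\frac{\ddot a}{a} = \frac{\Lambda}{3}\,.
```

For Λ < 0 and k = 0 the right-hand side of the first equation is negative, so ȧ has no real solution. Only negative curvature, k = −1, makes the right-hand side positive for small enough a, and that gives Anti-de Sitter space.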

Now, if you turn back on the fluctuations, their solution should reduce to the homogeneous and isotropic case on short distances and short times, but it doesn’t. It would take a very good reason for why that isn’t so, and no reason is given in the paper. It might be possible, but I don’t see how.

I further find it perplexing that they rest their argument on results that were derived in the literature for parametric resonance on the assumption that solutions are linearly independent. General relativity, however, is non-linear. Therefore, one generally isn’t free to combine solutions arbitrarily.

So far that’s not very convincing. To make matters worse, if you don’t have homogeneity, you have even more equations that come from the position-dependence and they don’t solve these equations either. Let me add, however, that this doesn’t worry me all that much because I think it might be possible to deal with it by exploiting the stochastic properties of the local oscillators (which are homogeneous again, in some sense).

Another troublesome feature of their idea is that the scale-factor of the oscillating space-time crosses zero in each cycle so that the space-time volume also goes to zero and the metric structure breaks down. I have no idea what that even means. I’d be willing to ignore this issue if the rest was working fine, but seeing that it doesn’t, it just adds to my misgivings.

The other major problem with their approach is that the limit they work in doesn’t make sense to begin with. They are using classical gravity coupled to the expectation values of the quantum field theory, a mixture known as ‘semi-classical gravity’ in which gravity is not quantized. This approximation, however, is known to break down when the fluctuations in the energy-momentum tensor get large compared to its absolute value, which is the very case they study.

In conclusion, “bla-bla-bla parametric resonance” is a pretty accurate summary.

How serious are these problems? Is there something in the paper that might be interesting after all?

Maybe. But the assumption (see below Eq (42)) that the fields that source the fluctuations satisfy normal energy conditions is, I believe, a non-starter if you want to get an exponential expansion. Even if you introduce a curvature term so that you can solve the equations, I can’t for the hell of it see how you average over locally approximately Anti-de Sitter spaces to get an approximate de Sitter space. You could of course just flip the sign, but then the second Friedmann equation no longer describes an oscillator.

Maybe allowing complex-valued solutions is a way out. Complex numbers are great. Unfortunately, nature’s a bitch and it seems we don’t live in a complex manifold. Hence, you’d then have to find a way to get rid of the imaginary numbers again. In any case, that’s not discussed in the paper either.

I admit that the idea of using a de-tuned parametric resonance to decouple vacuum fluctuations and limit their impact on the expansion of the universe is nice. Maybe I just lack vision and further work will solve the above mentioned problems. More generally, I think numerically solving the field equations with stochastic initial conditions is of general interest and it would be great if their paper inspires follow-up studies. So, give it ten years, and then ask me again. Maybe something will have come out of it.

In other news, I have also written a paper that explains the cosmological constant, and I have not only solved the equations I derived, I have also written a Maple work-sheet that you can download to check the calculation for yourself. The paper was just accepted for publication in PRD.

As far as my self-reflection is concerned, I have concluded I might be too ambitious. It’s much easier to solve equations if you don’t actually solve them.

I gratefully acknowledge helpful conversation with two of this paper’s authors who have been very, very patient with me. Sorry I didn’t have anything nicer to say.

Friday, May 26, 2017

Can we probe the quantization of the black hole horizon with gravitational waves?

Tl;dr: Yes, but the testable cases aren’t the most plausible ones.

It’s the year 2017, but we still don’t know how space and time get along with quantum mechanics. The best clue so far comes from Stephen Hawking and Jacob Bekenstein. They made one of the most surprising finds that theoretical physics saw in the 20th century: Black holes have entropy.

It was a surprise because entropy is a measure for unresolved microscopic details, but in general relativity black holes don’t have details. They are almost featureless balls. That they nevertheless seem to have an entropy – and a gigantically large one at that – strongly indicates that black holes can be understood only by taking into account quantum effects of gravity. The large entropy, so the idea, quantifies all the ways the quantum structure of black holes can differ.

The Bekenstein-Hawking entropy scales with the horizon area of the black hole and is usually interpreted as a measure for the number of elementary areas of size Planck-length squared. A Planck-length is a tiny 10⁻³⁵ meters. This area-scaling is also the basis of the holographic principle which has dominated research in quantum gravity for some decades now. If anything is important in quantum gravity, this is.
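To get a feeling for the numbers, here is a back-of-the-envelope estimate for a Schwarzschild black hole of one solar mass, using S = A/(4 l_p²) with the entropy in units of Boltzmann’s constant:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
hbar = 1.055e-34     # reduced Planck constant, J s
M_sun = 1.989e30     # solar mass, kg

l_p2 = hbar * G / c**3        # Planck length squared, m^2
r_s = 2 * G * M_sun / c**2    # Schwarzschild radius, ~3 km
A = 4 * math.pi * r_s**2      # horizon area, m^2

S = A / (4 * l_p2)            # Bekenstein-Hawking entropy in units of k_B
print(f"Horizon area: {A:.2e} m^2, entropy: {S:.2e} k_B")
```

That’s an entropy of order 10⁷⁷ for a solar-mass black hole – many orders of magnitude more than the thermodynamic entropy of the Sun itself.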

The above interpretation implies that the area of the black hole horizon always has to be a multiple of the elementary Planck area. However, since the Planck area is so small compared to the size of astrophysical black holes – ranging from some kilometers to some billion kilometers – you’d never notice the quantization just by looking at a black hole. If you got to look at it to begin with. So it seems like a safely untestable idea.

A few months ago, however, I noticed an interesting short note on the arXiv in which the authors claim that one can probe the black hole quantization with gravitational waves emitted from a black hole, for example in the ringdown after a merger event like the one seen by LIGO:
    Testing Quantum Black Holes with Gravitational Waves
    Valentino F. Foit, Matthew Kleban
    arXiv:1611.07009 [hep-th]

The basic idea is simple. Assume it is correct that the black hole area is always a multiple of the Planck area and that gravity is quantized so that it has a particle – the graviton – associated with it. If the only way for a black hole to emit a graviton is to change its horizon area in multiples of the Planck area, then this dictates the energy that the black hole loses when the area shrinks because the black hole’s area depends on the black hole’s mass. The Planck-area quantization hence sets the frequency of the graviton that is emitted.
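The model has a free parameter, so the following is only my own order-of-magnitude sketch, not the paper’s calculation: assume the minimal step is exactly one Planck area, use the Schwarzschild relation A = 16πG²M²/c⁴, and convert the corresponding mass loss into a graviton frequency via E = hf.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
hbar = 1.055e-34     # reduced Planck constant, J s
h = 2 * math.pi * hbar
M = 60 * 1.989e30    # ~60 solar masses, a LIGO-scale black hole

l_p2 = hbar * G / c**3   # one Planck area, m^2

# From A = 16 pi G^2 M^2 / c^4, a one-Planck-area step dA = l_p2
# corresponds to a mass change dM = dA * c^4 / (32 pi G^2 M):
dM = l_p2 * c**4 / (32 * math.pi * G**2 * M)

f = dM * c**2 / h        # frequency of the emitted graviton
print(f"Emitted frequency ≈ {f:.1f} Hz")
```

Up to the accumulated factors of π, this is the scale c³/GM set by the black hole’s size, which for stellar-mass black holes falls into the band accessible to ground-based gravitational wave detectors.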

A gravitational wave is nothing but a large number of gravitons. According to the area quantization, the wavelengths of the emitted gravitons are of the order of the black hole radius, which is what one expects to dominate the emission during the ringdown. However, so the authors’ argument, the spectrum of the gravitational wave should be much narrower in the quantum case.

Since the model that quantizes the black hole horizon in Planck-area chunks depends on a free parameter, it would take two measurements of black hole ringdowns to rule out the scenario: The first to fix the parameter, the second to check whether the same parameter works for all measurements.

It’s a simple idea but it may be too simple. The authors are careful to list the possible reasons for why their argument might not apply. I think it doesn’t apply for a reason that’s a combination of what is on their list.

A classical perturbation of the horizon leads to a simultaneous emission of a huge number of gravitons, and for those there is no good reason why every single one of them must fit the exact emission frequency that belongs to an increase of one Planck area as long as the total energy adds up properly.

I am not aware, however, of a good theoretical treatment of this classical limit from the area-quantization. It might indeed not work in some of the more audacious proposals we have recently seen, like Gia Dvali’s idea that black holes are condensates of gravitons. Scenarios like Dvali’s might indeed be testable with the ringdown characteristics. I’m sure we will hear more about this in the coming years as LIGO accumulates data.

What this proposed test would do, therefore, is to probe the failure of reproducing general relativity for large oscillations of the black hole horizon. Clearly, it’s something that we should look for in the data. But I don’t think black holes will release their secrets quite as easily.

Friday, May 19, 2017

Can we use gravitational waves to rule out extra dimensions – and string theory with it?

[Gravitational waves, computer simulation. Credits: Henze, NASA]
Tl;dr: Probably not.

Last week I learned from New Scientist that “Gravitational waves could show hints of extra dimensions.” The article is about a paper which recently appeared on the arXiv:

The claim in this paper is nothing short of stunning. Authors Andriot and Gómez argue that if our universe has additional dimensions, no matter how small, then we could find out using gravitational waves in the frequency regime accessible by LIGO.

While LIGO alone cannot do it because the measurement requires three independent detectors, soon upcoming experiments could either confirm or forever rule out extra dimensions – and kill string theory along the way. That, ladies and gentlemen, would be the discovery of the millennium. And, almost equally stunning, you heard it first from New Scientist.

Additional dimensions are today primarily associated with string theory, but the idea is much older. In the context of general relativity, it dates back to the work of Kaluza and Klein in the 1920s. I came across their papers as an undergraduate and was fascinated. Kaluza and Klein showed that if you add a fourth space-like coordinate to our universe and curl it up to a tiny circle, you don’t get back general relativity – you get back general relativity plus electrodynamics.

In the presently most widely used variants of string theory one has not one, but six additional dimensions and they can be curled up – or ‘compactified,’ as they say – to complicated shapes. But a key feature of the original idea survives: Waves which extend into the extra dimension must have wavelengths in integer fractions of the extra dimension’s radius. This gives rise to an infinite number of higher harmonics – the “Kaluza-Klein tower” – that appear like massive excitations of any particle that can travel into the extra dimensions.

The mass of these excitations is inversely proportional to the radius (in natural units). This means if the radius is small, one needs a lot of energy to create an excitation, and this explains why we haven’t yet noticed the additional dimensions.
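A quick numerical illustration of this inverse relation (my own sketch, using the first excitation of the Kaluza-Klein tower, m₁c² = ħc/R, with ħc ≈ 197.3 eV·nm):

```python
HBAR_C_EV_NM = 197.327  # hbar * c in eV * nanometer

def kk_mass_eV(radius_nm: float) -> float:
    """Mass (in eV/c^2) of the first Kaluza-Klein excitation
    for a compactified extra dimension of the given radius."""
    return HBAR_C_EV_NM / radius_nm

m_micro = kk_mass_eV(1000.0)  # micrometer-sized extra dimension
m_nano = kk_mass_eV(1.0)      # nanometer-sized extra dimension
print(f"R = 1 um  ->  m_1 ≈ {m_micro:.2f} eV")
print(f"R = 1 nm  ->  m_1 ≈ {m_nano:.0f} eV")
```

A micrometer-sized dimension gives excitations of a fraction of an eV; shrink the radius by a factor thousand and the mass grows by the same factor.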

In the most commonly used model, one further assumes that the only particle that experiences the extra-dimensions is the graviton – the hypothetical quantum of the gravitational interaction. Since we have not measured the gravitational interaction on short distances as precisely as the other interactions, such gravity-only extra-dimensions allow for larger radii than all-particle extra-dimensions (known as “universal extra-dimensions”.) In the new paper, the authors deal with gravity-only extra-dimensions.

From the current lack of observation, one can then derive bounds on the size of the extra-dimension. These bounds depend on the number of extra-dimensions and on their intrinsic curvature. For the simplest case – the flat extra-dimensions used in the paper – the bounds range from a few micrometers (for two extra-dimensions) to a few inverse MeV for six extra dimensions (natural units again).

Such extra-dimensions do more, however, than giving rise to a tower of massive graviton excitations. Gravitational waves have spin two regardless of the number of spacelike dimensions, but the number of possible polarizations depends on the number of dimensions. More dimensions, more possible polarizations. And the number of polarizations, importantly, doesn’t depend on the size of the extra-dimensions at all.
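The counting can be made concrete with the standard formula for the number of physical polarizations of a massless graviton in D spacetime dimensions, D(D−3)/2:

```python
def graviton_polarizations(D: int) -> int:
    """Physical polarizations of a massless graviton
    in D spacetime dimensions: D * (D - 3) / 2."""
    return D * (D - 3) // 2

for D in (4, 5, 10, 11):
    print(f"D = {D:2d}: {graviton_polarizations(D)} polarizations")
```

In the usual four dimensions that gives the familiar two polarizations; already one extra dimension adds three more, independently of how small the extra dimension is.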

In the new paper, the authors point out that the additional polarization of the graviton affects the propagation even of the non-excited gravitational waves, ie the ones that we can measure. The modified geometry of general relativity gives rise to a “breathing mode,” that is a gravitational wave which expands and contracts synchronously in the two (large) dimensions perpendicular to the direction of the wave. Such a breathing mode does not exist in normal general relativity, but it is not specific to extra-dimensions; other modifications of general relativity also have a breathing mode. Still, its non-observation would indicate no extra-dimensions.

But an old problem of Kaluza-Klein theories stands in the way of drawing this conclusion. The radii of the additional dimensions (also known as “moduli”) are unstable. You can assume that they have particular initial values, but there is no reason for the radii to stay at these values. If you shake an extra-dimension, its radius tends to run away. That’s a problem because then it becomes very difficult to explain why we haven’t yet noticed the extra-dimensions.

To deal with the unstable radius of an extra-dimension, theoretical physicists hence introduce a potential with a minimum at which the value of the radius is stuck. This isn’t optional – it’s necessary to prevent conflict with observation. One can debate how well-motivated that is, but it’s certainly possible, and it removes the stability problem.

Fixing the radius of an extra-dimension, however, will also make it more difficult to wiggle it – after all, that’s exactly what the potential was made to do. Unfortunately, in the above mentioned paper the authors don’t have stabilizing potentials.

I do not know for sure what stabilizing the extra-dimensions would do to their analysis. This would depend not only on the type and number of extra-dimension but also on the potential. Maybe there is a range in parameter-space where the effect they speak of survives. But from the analysis provided so far it’s not clear, and I am – as always – skeptical.

In summary: I don’t think we’ll rule out string theory any time soon.

[Updated to clarify breathing mode also appears in other modifications of general relativity.]

Tuesday, May 16, 2017

“Not a Toy” - New Video about Symmetry Breaking

Here is the third and last of the music videos I produced together with Apostolos Vasilidis and Timo Alho, sponsored by FQXi. The first two are here and here.

In this video, I am be-singing a virtual particle pair that tries to separate, and quite literally reflect on the inevitable imperfection of reality. The lyrics of this song went through an estimated ten thousand iterations until we finally settled on one. After this, none of us was in the mood to fight over a second verse, but I think the first has enough words already.

With that, I have reached the end of what little funding I had. And unfortunately, the Germans haven’t yet figured out that social media benefits science communication. Last month I heard a seminar on public outreach that didn’t so much as mention the internet. I do not kid you. There are foundations here who’d rather spend 100k on an event that reaches 50 people than a tenth of that to reach 100 times as many people. In some regards, Germans are pretty backwards.

This means from here on you’re back to my crappy camcorder and the always same three synthesizers unless I can find other sponsors. So, in your own interest, share the hell out of this!

Also, please let us know which video was your favorite and why because among the three of us, we couldn’t agree.

As previously, the video has captions which you can turn on by clicking on CC in the YouTube bottom bar. For your convenience, here are the lyrics:

Not A Toy

We had the signs for taking off,
The two of us we were on top,
I had never any doubt,
That you’d be there when things got rough.

We had the stuff to do it right,
As long as you were by my side,
We were special, we were whole,
From the GUT down to the TOE.

But all the harmony was wearing off,
It was too much,
We were living in a fiction,
Without any imperfection.

Every symmetry
Has to be broken,
Every harmony
Has to decay.

Leave me alone, I’m
Tired of talking,
I’m not a toy,
I’m not a toy.

Leave alone now,
I’m not a token,
I’m not a toy,
I’m not a toy.

We had the signs for taking off
Harmony was wearing off
We had the signs for taking off
Tired of talking
Harmony was wearing off
I’m tired of talking.

[Repeat Bridge]
[Repeat Chorus]

Thursday, May 11, 2017

A Philosopher Tries to Understand the Black Hole Information Paradox

Is the black hole information loss paradox really a paradox? Tim Maudlin, a philosopher from NYU and occasional reader of this blog, doesn’t think so. Today, he has a paper on the arXiv in which he complains that the so-called paradox isn’t one and that physicists don’t understand what they are talking about.

So is the paradox a paradox? If you mean whether black holes break mathematics, then the answer is clearly no. The problem with black holes is that nobody knows how to combine them with quantum field theory. It should really be called a problem rather than a paradox, but nomenclature rarely follows logical argumentation.

Here is the problem. The dynamics of quantum field theories is always reversible. It also preserves probabilities which, taken together (assuming linearity), means the time-evolution is unitary. That quantum field theories are unitary depends on certain assumptions about space-time, notably that space-like hypersurfaces – a generalized version of moments of ‘equal time’ – are complete. Space-like hypersurfaces after the entire evaporation of black holes violate this assumption. They are, as the terminology has it, not complete Cauchy surfaces. Hence, there is no reason for time-evolution to be unitary in a space-time that contains a black hole. What’s the paradox then, Maudlin asks.
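As a toy illustration of this combination (my own minimal example, not anything from Maudlin’s paper): a unitary evolution – here a simple rotation of a two-state system – is invertible, hence reversible, and it preserves the total probability.

```python
import math

theta = 0.7  # arbitrary rotation angle
U = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]  # a 2x2 unitary matrix

psi = [0.6, 0.8j]  # normalized state: |0.6|^2 + |0.8|^2 = 1

# Apply the unitary evolution psi -> U psi:
psi_out = [sum(U[i][j] * psi[j] for j in range(2)) for i in range(2)]

norm_out = sum(abs(a)**2 for a in psi_out)
print(norm_out)  # stays 1 (up to rounding): probability is preserved
```

The transpose of U undoes the evolution, so no information about the initial state is ever lost – that is exactly the property that incomplete hypersurfaces put at risk.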

First, let me point out that this is hardly news. As Maudlin himself notes, this is an old story, though I admit it’s often not spelled out very clearly in the literature. In particular the Susskind-Thorlacius paper that Maudlin picks on is wrong in more ways than I can possibly get into here. Everyone in the field who has their marbles together knows that time-evolution is unitary on “nice slices”– which are complete Cauchy-hypersurfaces – at all finite times. The non-unitarity comes from eventually cutting these slices. The slices that Maudlin uses aren’t quite as nice because they’re discontinuous, but they essentially tell the same story.

What Maudlin does not spell out however is that knowing where the non-unitarity comes from doesn’t help much to explain why we observe it to be respected. Physicists are using quantum field theory here on planet Earth to describe, for example, what happens in LHC collisions. For all these Earthlings know, there are lots of black holes throughout the universe and their current hypersurface hence isn’t complete. Worse still, in principle black holes can be created and subsequently annihilated in any particle collision as virtual particles. This would mean then, according to Maudlin’s argument, we’d have no reason to even expect a unitary evolution because the mathematical requirements for the necessary proof aren’t fulfilled. But we do.

So that’s what irks physicists: If black holes violated unitarity all over the place, how come we don’t notice? This issue is usually phrased in terms of the scattering-matrix, which asks a concrete question: If I could create a black hole in a scattering process, how come we never see any violation of unitarity?

Maybe we do, you might say, or maybe it’s just too small an effect. Yes, people have tried that argument, which is the whole discussion about whether unitarity maybe just is violated etc. That’s the place where Hawking came from all these years ago. Does Maudlin want us to go back to the 1980s?

In his paper, he also points out correctly that – from a strictly logical point of view – there’s nothing to worry about because the information that fell into a black hole can be kept in the black hole forever without any contradictions. I am not sure why he doesn’t mention this isn’t a new insight either – it’s what goes in the literature as a remnant solution. Now, physicists normally assume that inside of remnants there is no singularity because nobody really believes the singularity is physical, whereas Maudlin keeps the singularity, but from the outside perspective that’s entirely irrelevant.

It is also correct, as Maudlin writes, that remnant solutions have been discarded on spurious grounds with the result that research on the black hole information loss problem has grown into a huge bubble of nonsense. The most commonly named objection to remnants – the pair production problem – has no justification because – as Maudlin writes – it presumes that the volume inside the remnant is small for which there is no reason. This too is hardly news. Lee and I pointed this out, for example, in our 2009 paper. You can find more details in a recent review by Chen et al.

The other objection against remnants is that this solution would imply that the Bekenstein-Hawking entropy doesn’t count microstates of the black hole. This idea is very unpopular with string theorists who believe that they have shown the Bekenstein-Hawking entropy counts microstates. (Fyi, I think it’s a circular argument because it assumes a bulk-boundary correspondence ab initio.)

Either way, none of this is really new. Maudlin’s paper is just reiterating all the options that physicists have been chewing on forever: Accept unitarity violation, store information in remnants, or finally get it out.

The real problem with black hole information is that nobody knows what happens with it. As time passes, you inevitably come into a regime where quantum effects of gravity are strong and nobody can calculate what happens then. The main argument we are seeing in the literature is whether quantum gravitational effects become noticeable before the black hole has shrunk to a tiny size.

So what’s new about Maudlin’s paper? The condescending tone by which he attempts public ridicule strikes me as bad news for the – already conflict-laden – relation between physicists and philosophers.

Saturday, May 06, 2017

Away Note

I'm in Munich next week, playing with the philosophers. Be good while I'm away, back soon. (Below, the girls, missing a few teeth.)

Thursday, May 04, 2017

In which I sing about Schrödinger’s cat

You have all been waiting for this. The first ever song about quantum entanglement, Boltzmann brains, and the multiverse:

This is the second music video produced in collaboration with Apostolos Vasilidis and Timo Alho, supported by FQXi. (The first is here.) I think these two young artists are awesomely talented! Just by sharing this video you can greatly support them.

In this video too, I’m the one to blame for the lyrics, and if you think this one’s heavy on the nerdism, wait for the next ;)

Here, I sing about entanglement and the ability of a quantum system to be in two different states at the same time. Quantum states don’t have to decide, so the story goes, but humans have to. I have some pessimistic and some optimistic future visions, contrast determinism with many worlds, and sum it up with a chorus that says: Whatever we do, we are all in this together. And since you ask, we are all connected, because ER=EPR.

The video has subtitles, click on the “CC” icon in the YouTube bottom bar to turn on.


(The cat is dead)

We will all come back
At the end of time
As a brain in a vat
Floating around
And purely mind.

I’m just back from the future and I'm here to report
We’ll be assimilated, we’ll all join the Borg
We’ll be collectively stupid, if you like that or not
Resistance is futile, we might as well get started now

I never asked to be part of your club
So shut up
And leave me alone

But we are all connected
We will never die
Like Schrödinger’s cat
We will all be dead
And still alive

[repeat Chorus]

We will never forget
And we will never lie
All our hope,
Our fear and doubt
Will be far behind.

But I’m not a computer and I'm not a machine
I am not any other, let me be me
If the only pill that you have left
Is the blue and not the red
It might not be so bad to be
Somebody’s pet

[repeat chorus 2x]

Since you ask, the cat is doing fine
Somewhere in the multiverse it’s still alive
Think that is bad? If you trust our math,
The future is as fixed, as is the past.
Since you ask. Since you ask.

[Repeat chorus 2x]

Monday, May 01, 2017

May-day Pope-hope

Pope Francis meets Stephen Hawking.
[Photo: Piximus.]
My husband is a Roman Catholic, so is his whole family. I’m a heathen. We’re both atheists, but dear husband has steadfastly refused to leave the church. That he throws out money with the annual “church tax” (imo a great failure of secularization) has been a recurring point of friction between us. But as of recently I’ve stopped bitching about it – because the current Pope is just so damn awesome.

Pope Francis, born in Argentina, is the 266th leader of the Catholic Church. The man’s 80 years old, but within only two years he has overhauled his religion. He accepts Darwinian evolution as well as the Big Bang theory. He addresses ecological problems – loss of biodiversity, climate change, pollution – and calls for action, while worrying that “international politics has [disregarded] well-founded scientific opinion about the state of our planet.” He also likes exoplanets:
“How wonderful would it be if the growth of scientific and technological innovation would come along with more equality and social inclusion. How wonderful would it be, while we discover faraway planets, to rediscover the needs of the brothers and sisters orbiting around us.”
I find this remarkable, not only because his attitude flies in the face of those who claim religion is incompatible with science. More important, Pope Francis succeeds where the vast majority of politicians fail. He listens to scientists, accepts the facts, and bases calls for actions on evidence. Meanwhile, politicians left and right bend facts to mislead people about what’s in whose interest.

And Pope Francis is a man whose word matters big time. About 1.3 billion people in the world are presently members of his Church. For the Catholics, the Pope is the next best thing to God. The Pope is infallible, and he can keep going until he quite literally drops dead. Compared to Francis, Tweety-Trump is a fly circling a horse’s ass.

Global distribution of Catholics.
[Source: Wikipedia. By Starfunker226, CC BY-SA 3.0.]

This current Pope is demonstrably not afraid of science, and this gives me hope for the future. Most of the tension between science and religion that we witness today is caused by certain aspects of monotheistic religions that are obviously in conflict with science – if taken literally. But it’s an unnecessary tension. It would be easy enough to throw out what are basically thousand-year-old stories. But this will only happen once the religious understand it will not endanger the core of their beliefs.

Science advocates like to argue that religion is incompatible with science for religion is based on belief, not reason. But this neglects that science, too, is ultimately based on beliefs.

Most scientists, for example, believe in an external reality. They believe, for the biggest part, that knowledge is good. They believe that the world can be understood, and that this is something humans should strive for.

In the foundations of physics I have seen more specific beliefs. Many of my colleagues, for example, believe that the fundamental laws of nature are simple, elegant, even beautiful. They believe that logical deduction can predict observations. They believe in continuous functions and that infinities aren’t real.

None of this has a rational basis, but physicists rarely acknowledge these beliefs as what they are. Often, I have found myself more comfortable with openly religious people, for at least they are consciously aware of their beliefs and make an effort to prevent them from interfering with research. Even my own discipline, I think, would benefit from a better awareness of the bounds of human rationality. Even my own discipline, I think, could learn from the Pope to tell Is from Ought.

You might not subscribe to the Pope’s idea that “tenderness is the path of choice for the strongest, most courageous men and women.” Honestly, to me it doesn’t sound so different from believing that love will quantize gravity. But you don’t have to share the values of the Catholic Church to appreciate that here is a world leader who doesn’t confuse facts with values.

Wednesday, April 26, 2017

Not all publicity is good publicity, not even in science.

“Any publicity is good publicity” is a reaction I frequently get to my complaints about flaky science coverage. I find this attitude disturbing, especially when it comes from scientists.


To begin with, it’s an idiotic stance towards journalism in general – basically a permission for journalists to write nonsense. Just imagine having the same attitude towards articles on any other topic, say, immigration: Simply shrug off whether the news accurately reports survey results or even correctly uses the word “immigrant.” In that case I hope we agree that not all publicity is good publicity, neither in terms of information transfer nor in terms of public engagement.

Besides, as United Airlines and Pepsi recently served to illustrate, sometimes all you want is that they stop talking about you.

But, you may say, science is different. Scientists have little to lose and much to win from an increased interest in their research.

Well, if you think so, you either haven’t had much experience with science communication or you haven’t paid attention. Thanks to this blog, I have a lot of first-hand experience with public engagement due to science writers’ diarrhea. And most of what I witness isn’t beneficial for science at all.

The most serious problem is the awakening after overhype. It’s when people start asking “Whatever happened to this?” Why are we still paying string theorists? Weren’t we supposed to have a theory of quantum gravity by 2015? Why do physicists still not know what dark matter is made of? Why can I still not have a meaningful conversation with my phone, where is my quantum computer, and whatever happened to negative mass particles?

That’s a predictable and widespread backlash from disappointed hope. Once excitement fades, the consequence is a strong headwind of public ridicule and reduced trust. And that’s for good reasons, because people were, in fact, fooled. In IT development, it goes under the (branded but catchy) name Hype Cycle.

[Hype Cycle. Image: Wikipedia]

There isn’t much data on it, but academic research plausibly goes through the same “trough of disillusionment” when it falls short of expectations. The more hype, the more hangover when promises don’t pan out, which is why, eg, string theory today takes most of the fire while loop quantum gravity – though in many regards even more of a disappointment – flies mostly under the radar. In the valley of disappointment, then, researchers are haunted both by dwindling financial support and by their colleagues’ snark. (If you think that’s not happening, wait for it.)

This overhype backlash, it’s important to emphasize, isn’t a problem journalists worry about. They’ll just drop the topic and move on to the next. We, in science, are the ones who pay for the myth that any publicity is good publicity.

In the long run the consequences are even worse. Too many never-heard-of-again breakthroughs leave even the interested layman with the impression that scientists can no longer be taken seriously. Add to this a lack of knowledge about where to find quality information, and inevitably some fraction of the public will conclude scientific results can’t be trusted, period.

If you have a hard time believing what I say, all you have to do is read comments people leave on such misleading science articles. They almost all fall into two categories. It’s either “this is a crappy piece of science writing” or “mainstream scientists are incompetent impostors.” In both cases the commenters doubt the research in question is as valuable as it was presented.

If you can stomach it, check the I-Fucking-Love-Science facebook comment section every once in a while. It's eye-opening. On recent reports from the latest LHC anomaly, for example, you find gems like “I wish I had a job that dealt with invisible particles, and then make up funny names for them! And then actually get a paycheck for something no one can see! Wow!” and “But have we created a Black Hole yet? That's what I want to know.” Black Holes at the LHC were the worst hype I can recall in my field, and it still haunts us.

Another big concern with science coverage is its impact on the scientific community. I have spoken about this many times with my colleagues, but nobody listens even though it’s not all that complicated: Our attention is influenced by what ideas we are repeatedly exposed to, and all-over-the-news topics therefore bring a high risk of streamlining our interests.

Almost everyone I ever talked to about this simply denied such influence exists because they are experts and know better and they aren’t affected by what they read. Unfortunately, many scientific studies have demonstrated that humans pay more attention to what they hear about repeatedly, and we perceive something as more important the more other people talk about it. That’s human nature.

Other studies have shown that such cognitive biases are neither correlated nor anti-correlated with intelligence. In other words, just because you’re smart doesn’t mean you’re not biased. Some techniques are known to alleviate cognitive biases, but the scientific community does not presently use these techniques. (Ample references eg in “Blindspot,” by Banaji and Greenwald.)

I have seen this happening over and over again. My favorite example is the “OPERA anomaly” that seemed to show neutrinos could travel faster than the speed of light. The data had a high statistical significance, and yet it was pretty clear from the start that the result had to be wrong – it was in conflict with other measurements.

But the OPERA anomaly was all over the news. And of course physicists talked about it. They talked about it on the corridor, and at lunch, and in the coffee break. And they did what scientists do: They thought about it.

The more they talked about it, the more interesting it became. And they began to wonder whether there might not be something to it after all. And whether maybe one could write a paper about it because, well, we’ve been thinking about it.

Everybody I spoke to about the OPERA anomaly began their elaboration with a variant of “It’s almost certainly wrong, but...” In the end, it didn’t matter that they thought it was wrong – what mattered was merely that it had become socially acceptable to work on it. And every time the media picked it up again, fuel was added to the fire. What was the result? A lot of wasted time.

For physicists, however, sociology isn’t science, and so they don’t want to believe social dynamics is something they should pay attention to. And as long as they don’t pay attention to how media coverage affects their objectivity, publicity skews judgement and promotes a rich-get-richer trend.

Ah, then, you might argue, at least exposure will help you get tenure because your university likes it if their employees make it into the news. Indeed, I hear the “any publicity is good” line mainly as a justification from people whose research just got hyped.

But if your university measures academic success by popularity, you should be very worried about what this does to your and your colleagues’ scientific integrity. It’s a strong incentive for sexy-yet-shallow, headline-worthy research that won’t lead anywhere in the long run. If you hunt after that incentive, you’re putting your own benefit over the collective benefit society would get from a well-working academic system. In my view, that makes you a hurdle to progress.

What, then, is the result of hype? The public loses: Trust in research. Scientists lose: Objectivity. Who wins? The news sites that place an ad next to their big headlines.

But hey, you might finally admit, it’s just so awesome to see my name printed in the news. Fine by me, if that's your reasoning. Because the more bullshit appears in the press, the more traffic my cleaning service gets. Just don’t say I didn’t warn you.

Friday, April 21, 2017

No, physicists have not created “negative mass”

Thanks to the BBC, I will now for several years get emails from know-it-alls who think physicists are idiots not to realize the expansion of the universe is caused by negative mass. Because that negative mass, you must know, has actually been created in the lab:

The Independent declares this turns physics “completely upside down”

And if you think that was crappy science journalism, The Telegraph goes so far as to insist it’s got something to do with black holes

Not that they offer so much as a hint of an explanation of what black holes have to do with anything.

These disastrous news items purport to summarize a paper that recently got published in Physical Review Letters, one of the top journals in the field:
    Negative mass hydrodynamics in a Spin-Orbit--Coupled Bose-Einstein Condensate
    M. A. Khamehchi, Khalid Hossain, M. E. Mossman, Yongping Zhang, Th. Busch, Michael McNeil Forbes, P. Engels
    Phys. Rev. Lett. 118, 155301 (2017)
    arXiv:1612.04055 [cond-mat.quant-gas]

This paper reports the results of an experiment in which the physicists created a condensate that behaves as if it has a negative effective mass.

The little word “effective” does not appear in the paper’s title – and not in the screaming headlines – but it is important. Physicists use the prefix “effective” to indicate something that is not fundamental but emergent, and the exact definition of such a term is often a matter of convention.

The “effective radius” of a galaxy, for example, is not its radius. The “effective nuclear charge” is not the charge of the nucleus. And the “effective negative mass” – you guessed it – is not a negative mass.

The effective mass is merely a handy mathematical quantity to describe the condensate’s behavior.

The condensate in question here is a supercooled cloud of about ten thousand Rubidium atoms. To derive its effective mass, you look at the dispersion relation – ie the relation between energy and momentum – of the condensate’s constituents, and take the second derivative of the energy with respect to the momentum. That thing you call the inverse effective mass. And yes, it can take on negative values.
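Purely as a toy illustration of that recipe (the dispersion below is made up for the sketch; it is not the spin-orbit-coupled dispersion actually measured in the paper), you can take any E(p) with a region of negative curvature and read off where the inverse effective mass goes negative:

```python
import numpy as np

# Toy dispersion with a negative-curvature region (illustrative only):
# a free-particle term plus a dip that bends the curve downward.
p = np.linspace(-3.0, 3.0, 2001)
E = 0.5 * p**2 - 1.5 * np.exp(-p**2)

# Inverse effective mass = second derivative of E with respect to p.
dp = p[1] - p[0]
inv_m_eff = np.gradient(np.gradient(E, dp), dp)

# Where the curvature is negative, the effective mass is negative.
negative_region = p[inv_m_eff < 0]
print(f"effective mass negative for p roughly in "
      f"[{negative_region.min():.2f}, {negative_region.max():.2f}]")
```

Note that nothing here weighs less than nothing: the sign of the curvature of E(p), not any actual mass, is all that “negative effective mass” refers to.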
If you plot the energy against the momentum, you can read off the regions of negative mass from the curvature of the resulting curve. This is easy to see in Fig 1 of the paper, reproduced below. I added the red arrow to point to the region where the effective mass is negative.
Fig 1 from arXiv:1612.04055 [cond-mat.quant-gas]

As to why that thing is called effective mass, I had to consult a friend, David Abergel, who works with cold atom gases. His best explanation is that it’s a “historical artefact.” And it’s not deep: It’s called an effective mass because in the usual non-relativistic limit E = p²/(2m), so if you take two derivatives of E with respect to p, you get the inverse mass. Then, if you do the same for any other relation between E and p, you call the result an inverse effective mass.
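A quick numeric check of that naming convention (my own sketch, with an arbitrary mass in arbitrary units): applying the two-derivative recipe to the free-particle relation E = p²/(2m) does indeed return 1/m, and applying the same recipe to any other E(p) is what defines the “effective” version.

```python
m = 2.0  # some mass, arbitrary units

def E_free(p):
    # Usual non-relativistic dispersion: E = p^2 / (2m)
    return p**2 / (2 * m)

def inverse_effective_mass(E, p, h=1e-4):
    # Central second difference approximating d^2 E / d p^2
    return (E(p + h) - 2 * E(p) + E(p - h)) / h**2

# For the free dispersion, the recipe returns 1/m at any momentum:
print(inverse_effective_mass(E_free, 0.7))  # ~ 1/m = 0.5
```

Since E_free is quadratic, the central difference is exact up to rounding, and the result is 1/m independently of where you evaluate it.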

It's a nomenclature that makes sense in context, but it probably doesn’t sound very headline-worthy:
“Physicists created what’s by historical accident still called an effective negative mass.”
In any case, if you use this definition, you can rewrite the equations of motion of the fluid. They then resemble the usual hydrodynamic equations with a term that contains the inverse effective mass multiplied by a force.

What this “negative mass” hence means is that if you release the condensate from a trapping potential that holds it in place, it will first start to run apart. And then no longer run apart. That pretty much sums up the paper.

The remaining force which the fluid acts against, it must be emphasized, is then not even an external force. It’s a force that comes from the quantum pressure of the fluid itself.

So here’s another accurate headline:
“Physicists observe fluid not running apart.”
This is by no means to say that the result is uninteresting! Indeed, it’s pretty cool that this fluid self-limits its expansion thanks to long-range correlations which come from quantum effects. I’ll even admit that thinking of the behavior as if the fluid had a negative effective mass may be a useful interpretation. But that still doesn’t mean physicists have actually created negative mass.

And it has nothing to do with black holes, dark energy, wormholes, and the like. Trust me, physics is still upside up.

Wednesday, April 19, 2017

Catching Light – New Video!

I have many shortcomings, like leaving people uncertain whether they’re supposed to laugh or not. But you can’t blame me for lack of vision. I see a future in which science has become a cultural good, like sports, music, and movies. We’re not quite there yet, but thanks to the Foundational Questions Institute (FQXi) we’re a step closer today.

This is the first music video in a series of three, sponsored by FQXi, for which I’ve teamed up with Timo Alho and Apostolos Vasileiadis. And, believe it or not, all three music videos are about physics!

You’ve met Apostolos before on this blog. He’s the one who, incredibly enough, used his spare time as an undergraduate to make a short film about gauge symmetry. I know him from my stay in Stockholm, where he completed a masters degree in physics. Apostolos then, however, decided that research wasn’t for him. He has since founded a company – Third Panda  – and works as freelance videographer.

Timo Alho is one of the serendipitous encounters I’ve made on this blog. After he left some comments on my songs (mostly to point out they’re crappy) it turned out not only is he a theoretical physicist too, but we were both attending the same conference a few weeks later. Besides working on what passes as string theory these days, Timo also plays the keyboard in two bands and knows more than I about literally everything to do with songwriting and audio processing and, yes, about string theory too.

Then I got a mini-grant from FQXi that allowed me to coax the two young men into putting up with me, and five months later I stood in the hail, in a sleeveless white dress, on a beach in Crete, trying to impersonate electromagnetic radiation.

This first music video is about Einstein’s famous thought experiment in which he imagined trying to catch light. It takes on the question of how much can be learned by introspection. You see me in the role of light (I am part of the master plan), standing in for nature more generally, and Timo as the theorist trying to understand nature’s workings while barely taking notice of it (I can hear her talk to me at night).

The two other videos will follow in early May and mid-May, so stay tuned for more!

Update April 21: 

Since several people asked, here are the lyrics. The YouTube video has captions - to see them, click on the CC icon in the bottom bar.

I am part of the master plan
Every woman, every man
I have seen them come and go
Go with the flow

I have seen that we all are one
I know all and every one
I was here when the sun was born
Ages ago

In my mind
I have tried
Catching light
Catching light

In my mind
I have left the world behind

Every time I close my eyes
All of nature's open wide
I can hear her
Talk to me at night

In my mind I have been trying
Catching light outside of time
I collect it in a box
Collect it in a box

Every time I close my eyes
All of nature's open wide
I can hear her
Talk to me at night

[Repeat Chorus]

[Interlude, Einstein recording]
The scientific method itself
would not have led anywhere,
it would not even have been formed
Without a passionate striving for a clear understanding.
Perfection of means
and confusion of goals
seem in my opinion
to characterize our age.

[Repeat Chorus]