


Publications of the Astronomical Society of Australia 

Submitted: 6 February 2012. Accepted: 24 April 2012. Published: 7 June 2012.

Abstract. The fine-tuning of the universe for intelligent life has received a great deal of attention in recent years, both in the philosophical and scientific literature. The claim is that in the space of possible physical laws, parameters and initial conditions, the set that permits the evolution of intelligent life is very small. I present here a review of the scientific literature, outlining cases of fine-tuning in the classic works of Carter, Carr and Rees, and Barrow and Tipler, as well as more recent work. To sharpen the discussion, the role of the antagonist will be played by Victor Stenger's recent book The Fallacy of Fine-Tuning: Why the Universe is Not Designed for Us. Stenger claims that all known fine-tuning cases can be explained without the need for a multiverse. Many of Stenger's claims will be found to be highly problematic. We will touch on such issues as the logical necessity of the laws of nature; objectivity, invariance and symmetry; theoretical physics and possible universes; entropy in cosmology; cosmic inflation and initial conditions; galaxy formation; the cosmological constant; stars and their formation; the properties of elementary particles and their effect on chemistry and the macroscopic world; the origin of mass; grand unified theories; and the dimensionality of space and time. I also provide an assessment of the multiverse, noting the significant challenges that it must face. I do not attempt to defend any conclusion based on the fine-tuning of the universe for intelligent life. This paper can be viewed as a critique of Stenger's book, or read independently.
The fine-tuning of the universe for intelligent life has received much attention in recent times. Beginning with the classic papers of Carter (1974) and Carr & Rees (1979), and the extensive discussion of Barrow & Tipler (1986), a number of authors have noticed that very small changes in the laws, parameters and initial conditions of physics would result in a universe unable to evolve and support intelligent life. We begin by defining our terms. We will refer to the laws of nature, initial conditions and physical constants of a particular universe as its physics for short. Conversely, we define a 'universe' to be a connected region of spacetime over which physics is effectively constant^{1}. The claim that the universe is fine-tuned can be formulated as:

FT: In the set of possible physics, the subset that permits the evolution of intelligent life is very small.
FT can be understood as a counterfactual claim, that is, a claim about what would have been. Such claims are not uncommon in everyday life. For example, we can formulate the claim that Roger Federer would almost certainly defeat me in a game of tennis as: 'in the set of possible games of tennis between myself and Roger Federer, the set in which I win is extremely small'. This claim is undoubtedly true, even though none of the infinitely many possible games has been played. Our formulation of FT, however, is in obvious need of refinement. What determines the set of possible physics? Where exactly do we draw the line between 'universes'? How is 'smallness' being measured? Are we considering only cases where the evolution of life is physically impossible or just extremely improbable? What is life? We will press on with our formulation of FT as it stands, pausing to note its inadequacies when appropriate. As it stands, FT is precise enough to distinguish itself from a number of other claims for which it is often mistaken. FT is not the claim that this universe is optimal for life, that it contains the maximum amount of life per unit volume or per baryon, that carbon-based life is the only possible type of life, or that the only kinds of universes that support life are minor variations on this universe. These claims, true or false, are simply beside the point. The reason why FT is an interesting claim is that it makes the existence of life in this universe appear to be something remarkable, something in need of explanation. The intuition here is that, if ours were the only universe, and if the causes that established the physics of our universe were indifferent to whether it would evolve life, then the chances of hitting upon a life-permitting universe are very small. As Leslie (1989, p. 121) notes, '[a] chief reason for thinking that something stands in special need of explanation is that we actually glimpse some tidy way in which it might be explained'. Two tidy explanations are commonly considered: that our universe is one of a vast ensemble of universes with varying physics (a multiverse), and that the physics of our universe was chosen with life in mind (a designer).
These scenarios are neither mutually exclusive nor exhaustive, but if either or both were true then we would have a tidy explanation of why our universe, against the odds, supports the evolution of life. Our discussion of the multiverse will touch on the so-called anthropic principle, which we will formulate as follows:

AP: if observers observe anything, they will observe conditions that permit the existence of observers.
Tautological? Yes! The anthropic principle is best thought of as a selection effect. Selection effects occur whenever we observe a non-random sample of an underlying population. Such effects are well known to astronomers. An example is Malmquist bias — in any survey of the distant universe, we will only observe objects that are bright enough to be detected by our telescope. This statement is tautological, but is nevertheless nontrivial. The penalty of ignoring Malmquist bias is a plague of spurious correlations. For example, it will seem that distant galaxies are on average intrinsically brighter than nearby ones. A selection bias alone cannot explain anything. Consider quasars: when first discovered, they were thought to be a strange new kind of star in our galaxy. Schmidt (1963) measured their redshift, showing that they were more than a million times further away than previously thought. It follows that they must be incredibly bright. How are quasars so luminous? The (best) answer is: because quasars are powered by gravitational energy released by matter falling into a supermassive black hole (Zel'dovich 1964; Lynden-Bell 1969). The answer is not: because otherwise we wouldn't see them. Noting that if we observe any object in the very distant universe then it must be very bright does not explain why we observe any distant objects at all. Similarly, AP cannot explain why life and its necessary conditions exist at all. In anticipation of future sections, Table 1 defines some relevant physical quantities.
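The logic of a selection effect is easy to demonstrate numerically. The following sketch (illustrative only — the luminosities, distances and flux limit are invented, not drawn from any survey) generates sources whose luminosities are statistically independent of distance, imposes a flux limit, and recovers the spurious 'more distant means intrinsically brighter' correlation described above.

```python
import numpy as np

# Illustrative sketch of Malmquist bias (all numbers invented): luminosities
# are drawn independently of distance, so any correlation in the detected
# sample is created purely by the flux limit of the 'survey'.
rng = np.random.default_rng(0)

n = 100_000
distance_mpc = rng.uniform(10.0, 1000.0, n)            # Mpc
luminosity = 10 ** rng.normal(36.0, 0.5, n)            # W, independent of distance
distance_m = distance_mpc * 3.086e22                   # convert Mpc to metres
flux = luminosity / (4.0 * np.pi * distance_m**2)      # W m^-2

detected = flux > 1e-16                                # arbitrary flux limit

r_all = np.corrcoef(distance_mpc, np.log10(luminosity))[0, 1]
r_det = np.corrcoef(distance_mpc[detected], np.log10(luminosity[detected]))[0, 1]
print(f"all sources:      corr(distance, log L) = {r_all:+.2f}")   # ~ 0
print(f"detected sources: corr(distance, log L) = {r_det:+.2f}")   # clearly positive
```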
There are a few fallacies to keep in mind as we consider cases of fine-tuning. The Cheap-Binoculars Fallacy: 'Don't waste money buying expensive binoculars. Simply stand closer to the object you wish to view'^{3}. We can make any point (or outcome) in possibility space seem more likely by zooming in on its neighbourhood. Having identified the life-permitting region of parameter space, we can make it look big by deftly choosing the limits of the plot. We could also distort parameter space using, for example, logarithmic axes. A good example of this fallacy is quantifying the fine-tuning of a parameter relative to its value in our universe, rather than the totality of possibility space. If a dart lands 3 mm from the centre of a dartboard, it is obviously fallacious to say that, because the dart could have landed twice as far away and still scored a bullseye, the throw is only fine-tuned to a factor of two and there is 'plenty of room' inside the bullseye. The correct comparison is between the area of the bullseye and the area in which the dart could land. Similarly, comparing the life-permitting range to the value of the parameter in our universe necessarily produces a bias toward underestimating fine-tuning, since we know that our universe is in the life-permitting range. The Flippant Funambulist Fallacy: 'Tightrope walking is easy!', the man says, 'just look at all the places you could stand and not fall to your death!'. This is nonsense, of course: a tightrope walker must overbalance in a very specific direction if her path is to be life-permitting. The freedom to wander is tightly constrained. When identifying the life-permitting region of parameter space, the shape of the region is irrelevant. An elongated life-friendly region is just as fine-tuned as a compact region of the same area. The fact that we can change the setting on one cosmic dial, so long as we very carefully change another at the same time, does not necessarily mean that FT is false. The Sequential Juggler Fallacy: 'Juggling is easy!', the man says, 'you can throw and catch a ball. So just juggle all five, one at a time'. Juggling five balls one at a time isn't really juggling. For a universe to be life-permitting, it must satisfy a number of constraints simultaneously. For example, a universe with the right physical laws for complex organic molecules, but which recollapses before it is cool enough to permit neutral atoms, will not form life. One cannot refute FT by considering life-permitting criteria one at a time and noting that each can be satisfied in a wide region of parameter space. In set-theoretic terms, we are interested in the intersection of the life-permitting regions, not the union; a simple numerical illustration is given below. The Cane Toad Solution: In 1935, the Bureau of Sugar Experiment Stations was worried by the effect of the native cane beetle on Australian sugar cane crops. They introduced 102 cane toads, imported from Hawaii, into parts of Northern Queensland in the hope that they would eat the beetles. And thus the problem was solved forever, except for the 200 million cane toads that now call eastern Australia home, eating smaller native animals, and secreting a poison that kills any larger animal that preys on them. A cane toad solution, then, is one that doesn't consider whether the end result is worse than the problem itself. When presented with a proposed fine-tuning explainer, we must ask whether the solution is more fine-tuned than the problem.
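The set-theoretic point about intersections can be illustrated with a toy calculation (the three criteria below are invented and have no physical meaning): each constraint, checked one at a time, is satisfied over a broad swathe of a two-dimensional parameter space, yet the region satisfying all of them simultaneously is much smaller.

```python
import numpy as np

# Toy model of the sequential juggler fallacy: three invented 'life-permitting'
# criteria on parameters (x, y) drawn uniformly from the unit square.
rng = np.random.default_rng(1)
x, y = rng.uniform(0.0, 1.0, (2, 1_000_000))

criteria = {
    "criterion A (diagonal band)":      np.abs(y - x) < 0.2,
    "criterion B (anti-diagonal band)": np.abs(y + x - 1.0) < 0.2,
    "criterion C (vertical band)":      np.abs(x - 0.7) < 0.25,
}

for name, ok in criteria.items():
    print(f"{name}: {ok.mean():.1%} of parameter space")

union = np.logical_or.reduce(list(criteria.values()))
intersection = np.logical_and.reduce(list(criteria.values()))
print(f"at least one criterion (union): {union.mean():.1%}")
print(f"all criteria (intersection):    {intersection.mean():.1%}")
# Considering the criteria one at a time makes each look easy; it is the
# intersection that decides whether the 'universe' is life-permitting.
```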
We will sharpen the presentation of cases of fine-tuning by responding to the claims of Victor Stenger. Stenger is a particle physicist whose latest book, 'The Fallacy of Fine-Tuning: Why the Universe is Not Designed for Us'^{4}, makes the following bold claim:
Let's be clear on the task that Stenger has set for himself. There are a great many scientists, of varying religious persuasions, who accept that the universe is fine-tuned for life, e.g. Barrow, Carr, Carter, Davies, Dawkins, Deutsch, Ellis, Greene, Guth, Harrison, Hawking, Linde, Page, Penrose, Polkinghorne, Rees, Sandage, Smolin, Susskind, Tegmark, Tipler, Vilenkin, Weinberg, Wheeler, Wilczek^{5}. They differ, of course, on what conclusion we should draw from this fact. Stenger, on the other hand, claims that the universe is not fine-tuned.
What is the evidence that FT is true? We would like to have meticulously examined every possible universe and determined whether any form of life evolves. Sadly, this is currently beyond our abilities. Instead, we rely on simplified models and more general arguments to step out into possible-physics space. If the set of life-permitting universes is small amongst the universes that we have been able to explore, then we can reasonably infer that it is unlikely that the trend will be miraculously reversed just beyond the horizon of our knowledge.
Are the laws of nature themselves fine-tuned? Foft defends the ambitious claim that the laws of nature could not have been different because they can be derived from the requirement that they be Point-of-View Invariant (hereafter, PoVI). He says:
We can formulate Stenger’s argument for this conclusion as follows:
This argument commits the fallacy of equivocation — the term ‘invariant’ has changed its meaning between LN1 and LN2. The difference is decisive but rather subtle, owing to the different contexts in which the term can be used. We will tease the two meanings apart by defining covariance and symmetry, considering a number of test cases. Galileo’s Ship: We can see where Stenger’s argument has gone wrong with a simple example, before discussing technicalities in later sections. Consider this delightful passage from Galileo regarding the brand of relativity that bears his name:
Note carefully what Galileo is not saying. He is not saying that the situation can be viewed from a variety of different viewpoints and it looks the same. He is not saying that we can describe the flight paths of the butterflies using a coordinate system with any origin, orientation or velocity relative to the ship. Rather, Galileo's observation is much more remarkable. He is stating that the two situations, the stationary ship and moving ship, which are externally distinct are nevertheless internally indistinguishable. The two situations cannot be distinguished by means of measurements confined to each situation (Healey 2007, Chapter 6). These are not different descriptions of the same situation, but rather different situations with the same internal properties. The reason why Galilean relativity is so shocking and counterintuitive is that there is no a priori reason to expect distinct situations to be indistinguishable. If you and your friend attempt to describe the butterfly in the stationary ship and end up with 'uselessly different results', then at least one of you has messed up your sums. If your friend tells you his point of view, you should be able to perform a mathematical transformation on your model and reproduce his model. None of this will tell you how the butterflies will fly when the ship is speeding on the open ocean. An Aristotelian butterfly would presumably be plastered against the aft wall of the cabin. It would not be heard to cry: 'Oh, the subjectivity of it all!' Galilean invariance, and symmetries in general, have nothing whatsoever to do with point-of-view invariance. A universe in which Galilean relativity did not hold would not wallow in subjectivity. It would be an objective, observable fact that the butterflies would fly differently in a speeding ship. This is Stenger's confusion: PoVI does not imply symmetry. Lagrangian Dynamics: We can see this same point in a more formal context. Lagrangian dynamics is a framework for physical theories that, while originally developed as a powerful approach to Newtonian dynamics, underlies much of modern physics. The method revolves around a mathematical function called the Lagrangian, L = L(t, q_{i}, dq_{i}/dt), where t is time and the variables q_{i} parameterise the degrees of freedom (the 'coordinates'). For a system described by L, the equations of motion can be derived from L via the Euler–Lagrange equation. One of the features of the Lagrangian formalism is that it is covariant. Suppose that we want to use different coordinates for our system, say s_{i}, that are expressed as functions of the old coordinates q_{i} and t. We can express the Lagrangian L in terms of t, s_{i} and ds_{i}/dt by substituting the new coordinates for the old ones. Crucially, the form of the Euler–Lagrange equation does not change — just replace q with s. In other words, it does not matter what coordinates we use. The equations take the same form in any coordinate system, and are thus said to be covariant. Note that this is true of any Lagrangian, and any (sufficiently smooth) coordinate transformation s_{i}(t, q_{j}). Objectivity (and PoVI) are guaranteed. Now, consider a specific Lagrangian L that has the following special property — there exists a continuous family of coordinate transformations that leave L unchanged. Such a transformation is called a symmetry (or isometry) of the Lagrangian. The simplest case is where a particular coordinate does not appear in the expression for L. Noether's theorem tells us that, for each continuous symmetry, there will be a conserved quantity.
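For reference, the machinery just described can be written down explicitly; this is standard textbook material rather than anything specific to Stenger's argument. The equations of motion follow from the Euler–Lagrange equation, and a coordinate that is absent from L immediately yields a conserved quantity:

\[
L = L(t, q_i, \dot{q}_i), \qquad \dot{q}_i \equiv \frac{\mathrm{d}q_i}{\mathrm{d}t}, \qquad
\frac{\mathrm{d}}{\mathrm{d}t}\!\left(\frac{\partial L}{\partial \dot{q}_i}\right) - \frac{\partial L}{\partial q_i} = 0 .
\]

Covariance is a statement about the formalism: substituting any (sufficiently smooth) new coordinates s_j(t, q_i) produces a Lagrangian whose Euler–Lagrange equations take exactly the same form in the s_j. A symmetry is a statement about a particular L: if, say, L does not depend on the coordinate q_k, then

\[
\frac{\partial L}{\partial q_k} = 0 \quad\Rightarrow\quad \frac{\mathrm{d}}{\mathrm{d}t}\!\left(\frac{\partial L}{\partial \dot{q}_k}\right) = 0 ,
\]

so the momentum conjugate to q_k is conserved — the simplest instance of Noether's theorem.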
For example, if time does not appear explicitly in the Lagrangian, then energy will be conserved. Note carefully the difference between covariance and symmetry. Both could justifiably be called 'coordinate invariance' but they are not the same thing. Covariance is a property of the entire Lagrangian formalism. A symmetry is a property of a particular Lagrangian L. Covariance holds with respect to all (sufficiently smooth) coordinate transformations. A symmetry is linked to a particular coordinate transformation. Covariance gives us no information whatsoever about which Lagrangian best describes a given physical scenario. Symmetries provide strong constraints on which Lagrangians are consistent with empirical data. Covariance is a mathematical fact about our formalism. Symmetries can be confirmed or falsified by experiment. Lorentz Invariance: Let's look more closely at some specific cases. Stenger applies his general PoVI argument to Einstein's special theory of relativity:
This claim is false. Physicists are perfectly free to postulate theories which are not Lorentz invariant, and a great deal of experimental and theoretical effort has been expended to this end. The compilation of Kostelecký & Russell (2011) cites 127 papers that investigate Lorentz violation. Pospelov & Romalis (2004) give an excellent overview of this industry, including an example of a Lorentz-violating Lagrangian in which the fields b_{μ}, k_{μ} and H_{μν} are external vector and antisymmetric tensor backgrounds that introduce a preferred frame and therefore break Lorentz invariance; all other symbols have their usual meanings (e.g. Nagashima 2010). A wide array of laboratory, astrophysical and cosmological tests place impressively tight bounds on these fields. At the moment, the violation of Lorentz invariance is just a theoretical possibility. But that's the point. Ironically, the best cure for a conflation of 'frame-dependent' with 'subjective' is special relativity. The length of a rigid rod depends on the reference frame of the observer: if it is 2 metres long in its own rest frame, it will be 1 metre long in the frame of an observer passing at 87% of the speed of light^{6}. It does not follow that the length of the rod is 'subjective', in the sense that the length of the rod is just the personal opinion of a given observer, or in the sense that these two different answers are 'uselessly different'. It is an objective fact that the length of the rod is frame-dependent. Physics is perfectly capable of studying frame-dependent quantities, like the length of a rod, and frame-dependent laws, such as the Lorentz-violating Lagrangian of Pospelov & Romalis. General Relativity: We turn now to Stenger's discussion of gravity.
These claims are mistaken. The existence of gravity is not implied by the existence of the universe, separate masses or accelerating frames. Stenger's view may be rooted in the rather persistent myth that special relativity cannot handle accelerating objects or frames, and so general relativity (and thus gravity) is required. The best remedy for this view is to sit down with the excellent textbook of Hartle (2003) and not get up until you've finished Chapter 5's 'systematic way of extracting the predictions for observers who are not associated with global inertial frames ...in the context of special relativity'. Special relativity is perfectly able to preserve invariance between reference frames accelerating with respect to one another. Physicists clearly don't have to put gravity into any model of the universe that contains separate masses. We can see this another way. None of the invariant/covariant properties of general relativity depend on the value of Newton's constant G. In particular, we can set G = 0. In such a universe, the geometry of spacetime would not be coupled to its matter-energy content, and Einstein's equation would read R_{μν} = 0. With no source term, local Lorentz invariance holds globally, giving the Minkowski metric of special relativity. Neither logical necessity nor PoVI demands the coupling of spacetime geometry to mass-energy. This G = 0 universe is a counterexample to Stenger's assertion that no gravity means no universe. What of Stenger's claim that gravity is merely a fictitious force, to be derived from PoVI and 'one or two additional assumptions'? Interpreting PoVI as what Einstein called general covariance, PoVI tells us almost nothing. General relativity is not the only covariant theory of spacetime (Norton 1995). As Misner, Thorne & Wheeler (1973, p. 302) note: 'Any physical theory originally written in a special coordinate system can be recast in geometric, coordinate-free language. Newtonian theory is a good example, with its equivalent geometric and standard formulations. Hence, as a sieve for separating viable theories from nonviable theories, the principle of general covariance is useless.' Similarly, Carroll (2003) tells us that the principle 'Laws of physics should be expressed (or at least be expressible) in generally covariant form' is 'vacuous'. We can now identify the 'additional assumptions' that Stenger needs to derive general relativity. Given general covariance (or PoVI), the additional assumptions constitute the entire empirical content of the theory. Finally, general relativity provides a perfect counterexample to Stenger's conflation of covariance with symmetry. Einstein's GR field equation is covariant — it takes the same form in any coordinate system, and applying a coordinate transformation to a particular solution of the GR equation yields another solution, both representing the same physical scenario. Thus, any solution of the GR equation is covariant, or PoVI. But it does not follow that a particular solution will exhibit any symmetries. There may be no conserved quantities at all. As Hartle (2003, pp. 176, 342) explains:
The Standard Model of Particle Physics and Gauge Invariance: We turn now to particle physics, and particularly the gauge principle. Interpreting gauge invariance as 'just a fancy technical term for point-of-view invariance', Stenger says:
Remember the point that Stenger is trying to make: the laws of nature are the same in any universe which is pointofview invariant. Stenger’s discussion glosses over the major conceptual leap from global to local gauge invariance. Most discussions of the gauge principle are rather cautious at this point. Yang, who along with Mills first used the gauge principle as a postulate in a physical theory, commented that ‘We did not know how to make the theory fit experiment. It was our judgement, however, that the beauty of the idea alone merited attention’. Kaku (1993, p. 11), who provides this quote, says of the argument for local gauge invariance:
Similarly, Griffiths (2008) 'knows of no compelling physical argument for insisting that global invariance should hold locally' [emphasis original]. Aitchison & Hey (2002) say that this line of thought is 'not compelling motivation' for the step from global to local gauge invariance, and along with Pokorski (2000), who describes the argument as aesthetic, ultimately appeal to the empirical success of the principle for justification. Needless to say, these are not the views of physicists demanding that all possible universes must obey a certain principle^{8}. We cannot deduce gauge invariance from PoVI. Even with gauge invariance, we are still a long way from the standard model of particle physics. A gauge theory needs a symmetry group. Electromagnetism is based on U(1), the weak force on SU(2), the strong force on SU(3), and there are grand unified theories based on SU(5), SO(10), E_{8} and more. These are just the theories with a chance of describing our universe. From a theoretical point of view, there are any number of possible symmetries, e.g. SU(N) and SO(N) for any integer N (Schellekens 2008). The gauge group of the standard model, SU(3) × SU(2) × U(1), is far from unique. Conclusion: We can now see the flaw in Stenger's argument. Premise LN1 should read: If our formulation of the laws of nature is to be objective, then it must be covariant. Premise LN2 should read: symmetries imply conserved quantities. Since 'covariant' and 'symmetric' are not synonymous, it follows that the conclusion of the argument is unproven, and we would argue that it is false. The conservation principles of this universe are not merely principles governing our formulation of the laws of nature. Noether's theorems do not allow us to pull physically significant conclusions out of a mathematical hat. If you want to know whether a certain symmetry holds in nature, you need a laboratory or a telescope, not a blackboard. Symmetries tell us something about the physical universe.
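The size of the leap from global to local invariance can be made explicit with the standard U(1) example (textbook material; this particular derivation is not taken from Foft). A constant phase rotation of the electron field costs nothing, but a position-dependent one does not leave the free theory invariant — invariance must be bought by postulating a new field:

\[
\psi \to e^{i\alpha}\psi \quad (\alpha\ \text{constant}) \qquad\Rightarrow\qquad \bar{\psi}\,(i\gamma^\mu\partial_\mu - m)\,\psi \ \ \text{is unchanged.}
\]

If instead α = α(x), then ∂_μ(e^{iα}ψ) = e^{iα}(∂_μψ + i(∂_μα)ψ) and the invariance is lost. It can be restored only by introducing a gauge field A_μ, transforming as A_μ → A_μ + (1/e)∂_μα, and replacing the derivative with the covariant derivative

\[
D_\mu = \partial_\mu - ieA_\mu , \qquad \bar{\psi}\,(i\gamma^\mu D_\mu - m)\,\psi \ \ \text{invariant under}\ \ \psi \to e^{i\alpha(x)}\psi .
\]

Demanding local invariance therefore amounts to postulating the existence and couplings of the photon field. Whether nature respects such a postulate is an empirical question — which is precisely the point made above.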
Suppose that Stenger were correct regarding symmetries, that any objective description of the universe must incorporate them. One of the features of the universe as we currently understand it is that it is not perfectly symmetric. Indeed, intelligent life requires a measure of asymmetry. For example, the perfect homogeneity and isotropy of the Robertson–Walker spacetime precludes the possibility of any form of complexity, including life. Sakharov (1967) showed that for the universe to contain sufficient amounts of ordinary baryonic matter, interactions in the early universe must violate baryon number conservation, charge symmetry and charge-parity symmetry, and must spend some time out of thermal equilibrium. Supersymmetry, too, must be a broken symmetry in any life-permitting universe, since the bosonic partner of the electron (the selectron) would make chemistry impossible (see the discussion in Susskind 2005, p. 250). As Pierre Curie said, it is asymmetry that creates a phenomenon. One of the most important concepts in modern physics is spontaneous symmetry breaking (SSB).
SSB allows the laws of nature to retain their symmetry and yet have asymmetric solutions. Even if the symmetries of the laws of nature were logically necessary, it would still be an open question as to precisely which symmetries were broken in our universe and which were unbroken.
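The standard illustration of SSB (again, textbook material rather than anything in Foft) is the 'Mexican hat' potential for a complex scalar field:

\[
V(\phi) = -\mu^2\,|\phi|^2 + \lambda\,|\phi|^4 , \qquad \mu^2, \lambda > 0 .
\]

The potential — the law — is invariant under the phase rotation φ → e^{iθ}φ, but its minima form a circle of equivalent vacua at |φ| = μ/√(2λ). Any particular vacuum picks out one phase and so fails to share the symmetry of the law that produced it. Which vacuum our universe occupies is an additional fact about the world, not something deducible from the symmetry of V.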
What if the laws of nature were different? Stenger says:
In reply, fine-tuning isn't about what the parameters and laws are in a particular universe. Given some other set of laws, we ask: if a universe were chosen at random from the set of universes with those laws, what is the probability that it would support intelligent life? If that probability is robustly small, then we conclude that that region of possible-physics space contributes negligibly to the total life-permitting subset. It is easy to find examples of such claims.
We should be cautious, however. Whatever the problems of defining the possible range of a given parameter, we are in a significantly more nebulous realm when we consider the set of all possible physical laws. It is not clear how such a fine-tuning case could be formalised, whatever its intuitive appeal.
Moving from the laws of nature to the parameters of those laws, Stenger makes the following general argument against supposed examples of fine-tuning:
To illustrate this point, Stenger introduces 'the wedge'. I have produced my own version in Figure 1. Here, x and y are two physical parameters that can vary from zero to x_{max} and y_{max}, where we can allow these values to approach infinity if so desired. The point (x_{0}, y_{0}) represents the values of x and y in our universe. The life-permitting range is the shaded wedge. Stenger's point is that varying only one parameter at a time only explores that part of parameter space which is vertically or horizontally adjacent to (x_{0}, y_{0}), thus missing most of parameter space. The probability of a life-permitting universe, assuming that the probability distribution is uniform in (x, y) — which, as Stenger notes, is 'the best we can do' (Foft 72) — is the ratio of the area inside the wedge to the area inside the dashed box.
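To make the ratio explicit (in my notation: read the wedge as the set of points whose ratio y/x lies within a fraction ε of its value in our universe, so that it is bounded by the lines y = (1 ± ε)(y_{0}/x_{0})x), the uniform prior gives, provided the wedge is not clipped by the top of the box,

\[
P \;=\; \frac{\text{area of wedge}}{\text{area of box}}
  \;=\; \frac{\displaystyle\int_0^{x_{\max}} 2\epsilon\,\frac{y_0}{x_0}\,x\;\mathrm{d}x}{x_{\max}\,y_{\max}}
  \;=\; \epsilon\,\frac{y_0}{x_0}\,\frac{x_{\max}}{y_{\max}} \;\le\; \epsilon ,
\]

so for a thin wedge the probability is at most of order ε, however generously the box is drawn.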
In response, fine-tuning relies on a number of independent life-permitting criteria. Fail any of these criteria, and life becomes dramatically less likely, if not impossible. When parameter space is explored in the scientific literature, it rarely (if ever) looks like the wedge. We instead see many intersecting wedges. Here are two examples. Barr & Khan (2007) explored the parameter space of a model in which up-type and down-type fermions acquire mass from different Higgs doublets. As a first step, they vary the masses of the up and down quarks. The natural scale for these masses ranges over 60 orders of magnitude and is illustrated in Figure 2 (top left). The upper limit is provided by the Planck scale; the lower limit from dynamical breaking of chiral symmetry by QCD; see Barr & Khan (2007) for a justification of these values. Figure 2 (top right) zooms in on a region of parameter space, showing boundaries of 9 independent life-permitting criteria:
A second example comes from cosmology. Figure 2 (bottom row) comes from Tegmark et al. (2006). It shows the life-permitting range for two slices through cosmological parameter space. The parameters shown are: the cosmological constant Λ (expressed as an energy density ρ_{Λ} in Planck units), the amplitude of primordial fluctuations Q, and the matter to photon ratio ξ. A star indicates the location of our universe, and the white region shows where life can form. The left panel shows ρ_{Λ} vs. Q^{3}ξ^{4}. The red region shows universes that are plausibly life-prohibiting — too far to the right and no cosmic structure forms; stray too low and cosmic structures are not dense enough to form stars and planets; too high and cosmic structures are too dense to allow long-lived stable planetary systems. Note well the logarithmic scale — the lack of a left boundary to the life-permitting region is because we have scaled the axis so that ρ_{Λ} = 0 is at x = –∞. The universe recollapses before life can form for ρ_{Λ} ≲ –10^{–121} (Peacock 2007). The right panel shows similar constraints in the Q vs. ξ space. We see similar constraints relating to the ability of galaxies to successfully form stars by fragmentation due to gas cooling and for the universe to form anything other than black holes. Note that we are changing ξ while holding ξ_{baryon} constant, so the left limit of the plot is provided by the condition ξ ≥ ξ_{baryon}. See Table 4 of Tegmark et al. (2006) for a summary of 8 anthropic constraints on the 7-dimensional parameter space (α, β, m_{p}, ρ_{Λ}, Q, ξ, ξ_{baryon}). Examples could be multiplied, and the restriction to a 2D slice through parameter space is due to the inconvenient unavailability of higher dimensional paper. These two examples show that the wedge, by only considering a single life-permitting criterion, seriously distorts typical cases of fine-tuning by committing the sequential juggler fallacy (Section 2). Stenger further distorts the case for fine-tuning by saying:
No reference is given, and this statement is not true of the scientific literature. The wedge is a straw man.
The wedge, distortion that it is, would still be able to support a fine-tuning claim. The probability calculated by varying only one parameter is actually an overestimate of the probability calculated using the full wedge. Suppose the full life-permitting criterion that defines the wedge is that y/x lie within a fraction ε of its value in our universe, |y/x − y_{0}/x_{0}| ≤ ε y_{0}/x_{0}, where ε is a small number quantifying the allowed deviation from the value of y/x in our universe. Now suppose that we hold x constant at its value in our universe. We conservatively estimate the possible range of y by y_{0}. Then, the probability of a life-permitting universe is P_{y} = 2ε. Now, if we calculate the probability over the whole wedge, we find that P_{w} ≤ ε/(1 + ε) ≈ ε, where we have an upper limit because we have ignored the area with y inside Δy, as marked in Figure 1. Thus^{10} P_{y} ≥ P_{w}. It is thus not necessarily 'scientifically shoddy' to vary only one variable. Indeed, as scientists we must make these kinds of assumptions all the time — the question is how accurate they are. Under fairly reasonable assumptions (uniform probability etc.), varying only one variable provides a useful estimate of the relevant probability. The wedge thus commits the flippant funambulist fallacy (Section 2). If ε is small enough, then the wedge is a tightrope. We have opened up more parameter space in which life can form, but we have also opened up more parameter space in which life cannot form. As Dawkins (1986) has rightly said: 'however many ways there may be of being alive, it is certain that there are vastly more ways of being dead, or rather not alive'. This conclusion might be avoided with a non-uniform prior probability. One can show that a power-law prior has no significant effect on the wedge. Any other prior raises a problem, as explained by Aguirre (2007):
In short, to significantly change the probability of a life-permitting universe, we would need a prior that centres close to the observed value, and has a narrow peak. But this simply exchanges one fine-tuning for two — the centre and peak of the distribution. There is, however, one important lesson to be drawn from the wedge. If we vary x only and calculate P_{x}, and then vary y only and calculate P_{y}, we must not simply multiply them to give P_{w} = P_{x} P_{y}. This will certainly underestimate the probability inside the wedge, assuming that there is only a single wedge.
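These relations are easy to check numerically. The sketch below uses illustrative values (a 1% tolerance on y/x, with our universe at (1, 1) and both parameters allowed to range up to ten times that value) to estimate the wedge probability by Monte Carlo and compare it with the one-parameter estimate P_{y} = 2ε and the naive product P_{x}P_{y}.

```python
import numpy as np

# Monte Carlo check of the wedge probabilities discussed above.
# Illustrative values: eps = 1% tolerance on y/x, our universe at (x0, y0) = (1, 1),
# possible ranges [0, 10] for both parameters.
rng = np.random.default_rng(2)
eps, x0, y0 = 0.01, 1.0, 1.0
x_max = y_max = 10.0

x = rng.uniform(0.0, x_max, 2_000_000)
y = rng.uniform(0.0, y_max, 2_000_000)

# Wedge criterion |y/x - y0/x0| <= eps * (y0/x0), written without division by x.
in_wedge = np.abs(y * x0 - x * y0) <= eps * y0 * x

P_w = in_wedge.mean()     # both parameters varied over the whole box
P_y = 2 * eps             # y varied alone over the conservative range ~y0, as in the text

print(f"P_w       = {P_w:.4f}  (of order eps, as expected)")
print(f"P_y       = {P_y:.4f}  >= P_w: varying one parameter overestimates the probability")
print(f"P_x * P_y = {P_y**2:.6f} << P_w: naive multiplication underestimates it")
```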
We turn now to cosmology. The problem of the apparently low entropy of the universe is one of the oldest problems of cosmology. The fact that the entropy of the universe is not at its theoretical maximum, coupled with the fact that entropy cannot decrease, means that the universe must have started in a very special, low entropy state. Stenger argues in response that if the universe starts out at the Planck time as a sphere of radius equal to the Planck length, then its entropy is as great as it could possibly be, equal to that of a Plancksized black hole (Bekenstein 1973; Hawking 1975). As the universe expands, an entropy ‘gap’ between the actual and maximum entropy opens up in regions smaller than the observable universe, allowing order to form. Note that Stenger’s proposed solution requires only two ingredients — the initial, highentropy state, and the expansion of the universe to create an entropy gap. In particular, Stenger is not appealing to inflation to solve the entropy problem. We will do the same in this section, coming to a discussion of inflation later. There are a number of problems with Stenger’s argument, the most severe of which arises even if we assume that his calculation is correct. We have been asked to consider the universe at the Planck time, and in particular a region of the universe that is the size of the Planck length. Let’s see what happens to this comoving volume as the universe expands. 13.7 billion years of (concordance model) expansion will blow up this Planck volume until it is roughly the size of a grain of sand. A single Planck volume in a maximum entropy state at the Planck time is a good start but hardly sufficient. To make our universe, we would need around 10^{90} such Planck volumes, all arranged to transition to a classical expanding phase within a temporal window 100 000 times shorter than the Planck time^{11}. This brings us to the most serious problem with Stenger’s reply. Let’s remind ourselves of what the entropy problem is, as expounded by Penrose (1979). Consider our universe at t_{1} = one second after the big bang. Spacetime is remarkably smooth, represented by the RobertsonWalker metric to better than one part in 10^{5}. Now run the clock forward. The tiny inhomogeneities grow under gravity, forming deeper and deeper potential wells. Some will collapse into black holes, creating singularities in our once pristine spacetime. Now suppose that the universe begins to recollapse. Unless the collapse of the universe were to reverse the arrow of time^{12}, entropy would continue to increase, creating more and larger inhomogeneities and black holes as structures collapse and collide. If we freeze the universe at t_{2} = one second before the big crunch, we see a spacetime that is highly inhomogeneous, littered with lumps and bumps, and pockmarked with singularities. Penrose’s reasoning is very simple. If we started at t_{1} with an extremely homogeneous spacetime, and then allowed a few billion years of entropy increasing processes to take their toll, and ended at t_{2} with an extremely inhomogeneous spacetime, full of black holes, then we must conclude that the t_{2} spacetime represents a significantly higher entropy state than the t_{1} spacetime. We conclude that we know what a highentropy big bang spacetime looks like, and it looks nothing like the state of our universe in its earliest stages. Why didn’t our universe begin in a high entropy, highly inhomogeneous state? 
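Before turning to Stenger's proposed solution, the Planck-volume arithmetic quoted above is easily checked at the order-of-magnitude level (the grain-of-sand and universe sizes below are rough assumptions, not figures from Foft):

```python
import math

# Order-of-magnitude check: how much has a comoving Planck-length patch grown,
# and how many such patches are needed to build the observable universe?
l_planck = 1.6e-35            # m, Planck length
sand_grain = 1e-4             # m, ~0.1 mm: rough present-day size of that patch
observable_universe = 8.8e26  # m, rough diameter of the observable universe today

growth = sand_grain / l_planck
n_patches = (observable_universe / sand_grain) ** 3

print(f"linear expansion of the patch since the Planck time: ~10^{math.log10(growth):.0f}")
print(f"grain-sized patches needed to tile the observable universe: ~10^{math.log10(n_patches):.0f}")
# Roughly 10^31 and 10^92-93 -- consistent, at this level of accuracy, with the
# ~10^90 Planck-time patches quoted above.
```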
Why did our universe start off in such a special, improbable, low-entropy state? Let's return to Stenger's proposed solution. After introducing the relevant concepts, he says:
Stenger simply assumes that the universe is homogeneous and isotropic. We can see this also in his use of the Friedmann equation, which assumes that spacetime is homogeneous and isotropic. Not surprisingly, once homogeneity and isotropy have been assumed, the entropy problem doesn’t seem so hard. We conclude that Stenger has failed to solve the entropy problem. He has presented the problem itself as its solution. Homogeneous, isotropic expansion cannot solve the entropy problem — it is the entropy problem. Stenger’s assertion that ‘the universe starts out with maximum entropy or complete disorder’ is false. A homogeneous, isotropic spacetime is an incredibly low entropy state. Penrose (1989) warned of precisely this brand of failed solution two decades ago:
Cosmologists repented of such mistakes in the 1970’s and 80’s. Stenger’s ‘biverse’ (Foft 142) doesn’t solve the entropy problem either. Once again, homogeneity and isotropy are simply assumed, with the added twist that instead of a low entropy initial state, we have a low entropy middle state. This makes no difference — the reason that a low entropy state requires explanation is that it is improbable. Moving the improbable state into the middle does not make it any more probable. As Carroll (2008) notes, ‘an unnatural lowentropy condition [that occurs] in the middle of the universe’s history (at the bounce) ...passes the buck on the question of why the entropy near what we call the big bang was small’.^{13}
We turn now to cosmic inflation, which proposes that the universe underwent a period of accelerated expansion in its earliest stages. The achievements of inflation are truly impressive — in one fell swoop, the universe is sent on its expanding way, the flatness, horizon and monopole problems are solved, and we have concrete, testable and seemingly correct predictions for the origin of cosmic structure. It is a brilliant idea, and one that continues to defy all attempts at falsification. Since life requires an almost-flat universe (Barrow & Tipler 1986, p. 408ff.), inflation is potentially a solution to a particularly impressive fine-tuning problem — sans inflation, the density of a life-permitting universe at the Planck time must be tuned to 60 decimal places. Inflation solves this fine-tuning problem by invoking a dynamical mechanism that drives the universe towards flatness. The first question we must ask is: did inflation actually happen? The evidence is quite strong, though not indubitable (Turok 2002; Brandenberger 2011). There are a few things to keep in mind. Firstly, inflation isn't a specific model as such; it is a family of models which share the desirable trait of having an early epoch of accelerating expansion. Inflation is an effect, rather than a cause. There is no physical theory that predicts the form of the inflaton potential. Different potentials, and different initial conditions for the same potential, will produce different predictions. While there are predictions shared by a wide variety of inflationary potentials, these predictions are not unique to inflation. Inflation predicts a Gaussian random field of density fluctuations, but thanks to the central limit theorem this isn't particularly unique (Peacock 1999, p. 342, 503). Inflation predicts a nearly scale-invariant spectrum of fluctuations, but such a spectrum was proposed for independent reasons by Harrison (1970) and Zel'dovich (1972) a decade before inflation was proposed. Inflation is a clever solution of the flatness and horizon problems, but could be rendered unnecessary by a quantum-gravity theory of initial conditions. The evidence for inflation is impressive but circumstantial.
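The '60 decimal places' quoted above follows from a standard back-of-envelope estimate of how deviations from flatness grow with time (the epochs used are approximate assumptions):

\[
|\Omega - 1| \;\propto\;
\begin{cases}
a^{2} \propto t & \text{(radiation era)}\\[2pt]
a \propto t^{2/3} & \text{(matter era)} .
\end{cases}
\]

Running this growth from the Planck time (~10^{–43} s) to matter–radiation equality (~10^{12} s) and on to today (~4 × 10^{17} s) amplifies any initial departure from flatness by a factor of roughly 10^{55} × 10^{4} ≈ 10^{59}. Since |Ω − 1| is at most of order unity today, the departure from flatness at the Planck time must have been ≲ 10^{–59} — tuning to roughly 60 decimal places.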
Note the difference between this section and the last. Is inflation itself fine-tuned? This is no mere technicality — if the solution is just as fine-tuned as the problem, then no progress has been made. Inflation, to set up a life-permitting universe, must do the following^{14}:
I1. An inflaton field must exist.
I2. The inflaton must find itself, or be driven into, a region of its potential where the slow-roll approximation (SRA) holds, so that inflation begins.
I3. Inflation must last long enough — that is, produce enough e-folds — to solve the flatness and horizon problems.
I4. Inflation must end.
I5. The energy of the inflaton must be converted into ordinary matter and radiation (reheating).
I6. Inflation must leave the universe sufficiently homogeneous, with density fluctuations of a life-permitting amplitude Q.
The question now is: which of these achievements come naturally to inflation, and which need some careful tuning of the inflationary dials? I1 is a bare hypothesis — we know of no deeper reason why there should be an inflaton field at all. It was hoped that the inflaton field could be the Higgs field (Guth 1981). Alas, it wasn’t to be, and it appears that the inflaton’s sole raison d’être is to cause the universe’s expansion to briefly accelerate. There is no direct evidence for the existence of the inflaton field. We can understand many of the remaining conditions through the work of Tegmark (2005), who considered a wide range of inflaton potentials using Gaussian random fields. The potential is of the form V(φ) = m_{v}^{4}f(φ/m_{h}), where m_{v} and m_{h} are the characteristic vertical and horizontal mass scales, and f is a dimensionless function with values and derivatives of order unity. For initial conditions, Tegmark ‘sprays starting points randomly across the potential surface’. Figure 3 shows a typical inflaton potential.
Requirement I2 will be discussed in more detail below. For now we note that the inflaton must either begin or be driven into a region in which the SRA holds in order for the universe to inflate, as shown by the thick lines in Figure 3. Requirement I3 comes rather naturally to inflation: Peacock (1999, p. 337) shows that the requirement that inflation produce a large number of e-folds is essentially the same as the requirement that inflation happen in the first place (i.e. SRA), namely φ_{start} ≫ m_{Pl}. This assumes that the potential is relatively smooth, and that inflation terminates at a value of the field (φ) rather smaller than its value at the start. There is another problem lurking, however. If inflation lasts for 70 e-folds (for GUT scale inflation), then all scales inside the Hubble radius today started out with physical wavelength smaller than the Planck scale at the beginning of inflation (Brandenberger 2011). The predictions of inflation (especially the spectrum of perturbations), which use general relativity and a semiclassical description of matter, must omit relevant quantum gravitational physics. This is a major unknown — trans-Planckian effects may even prevent the onset of inflation. I4 is nontrivial. The inflaton potential (or, more specifically, the region of the inflaton potential which actually determines the evolution of the field) must have a region in which the slow-roll approximation does not hold. If the inflaton rolls into a local minimum (at φ_{0}) while the SRA still holds (which requires V(φ_{0}) ≫ (m_{Pl}^{2}/8π) d^{2}V/dφ^{2}|_{φ_{0}}; Peacock 1999, p. 332), then inflation never ends. Tegmark (2005) asks what fraction of initial conditions for the inflaton field are successful, where success means that the universe inflates, inflation ends and the universe doesn't thereafter meet a swift demise via a big crunch. The result is shown in Figure 4.
The thick black line shows the 'success rate' of inflation, for a model with m_{h}/m_{Pl} as shown on the x-axis and m_{v} = 0.001m_{Pl}. (This value has been chosen to maximise the probability that Q = Q_{observed} ≈ 2 × 10^{–5}). The coloured curves show predictions for other cosmological parameters. The lower coloured regions are for m_{v} = 0.001m_{Pl}; the upper coloured regions are for m_{v} = m_{h}. The success rate peaks at ~0.1 percent, and drops rapidly as m_{h} increases or decreases away from m_{Pl}. Even with a scalar field, inflation is far from guaranteed. If inflation ends, we need its energy to be converted into ordinary matter (Condition I5). Inflation must not result in a universe filled with pure radiation or dark matter, which cannot form complex structures. Typically, the inflaton will tend to dump its energy into radiation. The temperature must be high enough to take advantage of baryon-number-violating physics for baryogenesis, and for γ + γ → particle + antiparticle reactions to create baryonic matter, but low enough not to create magnetic monopoles. With no physical model of the inflaton, the necessary coupling between the inflaton and ordinary matter/radiation is another postulate, but not an implausible one. Requirement I6 brought about the downfall of 'old' inflation. When this version of inflation ended, it did so in expanding bubbles. Each bubble is too small to account for the homogeneity of the observed universe, and reheating only occurs when bubbles collide. As the space between the bubbles is still inflating, homogeneity cannot be achieved. New models of inflation have been developed which avoid this problem. More generally, the value of Q that results from inflation depends on the potential and initial conditions. We will discuss Q further in Section 4.5. Perhaps the most pressing issue with inflation is hidden in requirement I2. Inflation is supposed to provide a dynamical explanation for the seemingly very fine-tuned initial conditions of the standard model of cosmology. But does inflation need special initial conditions? Can inflation act on generic initial conditions and produce the apparently fine-tuned universe we observe today? Hollands & Wald (2002b)^{15} contend not, for the following reason. Consider a collapsing universe. It would require an astonishing sequence of correlations and coincidences for the universe, in its final stages, to suddenly and coherently convert all its matter into a scalar field with just enough kinetic energy to roll to the top of its potential and remain perfectly balanced there for long enough to cause a substantial era of 'deflation'. The region of final-condition space that results from deflation is thus much smaller than the region that does not result from deflation. Since the relevant physics is time-reversible^{16}, we can simply run the tape backwards and conclude that the initial-condition space is dominated by universes that fail to inflate. Readers will note the similarity of this argument to Penrose's argument from Section 4.3. This intuitive argument can be formalised using the work of Gibbons, Hawking & Stewart (1987), who developed the canonical measure on the set of solutions of Einstein's equation of General Relativity. A number of authors have used the Gibbons–Hawking–Stewart canonical measure to calculate the probability of inflation; see Hawking & Page (1988), Gibbons & Turok (2008) and references therein.
We will summarise the work of Carroll & Tam (2010), who ask what fraction of universes that evolve like our universe since matter-radiation equality could have begun with inflation. Crucially, they consider the role played by perturbations:
Carroll & Tam casually note: 'This is a small number', and it is in fact an overestimate. A negligibly small fraction of universes that resemble ours at late times experience an early period of inflation. Carroll & Tam (2010) conclude that while inflation is not without its attractions (e.g. it may give a theory of initial conditions a slightly easier target to hit at the Planck scale), 'inflation by itself cannot solve the horizon problem, in the sense of making the smooth early universe a natural outcome of a wide variety of initial conditions'. Note that this argument also shows that inflation, in and of itself, cannot solve the entropy problem^{17}. Let's summarise. Inflation is a wonderful idea; in many ways it seems irresistible (Liddle 1995). However, we do not have a physical model, and even if we had such a model, 'although inflationary models may alleviate the "fine tuning" in the choice of initial conditions, the models themselves create new "fine tuning" issues with regard to the properties of the scalar field' (Hollands & Wald 2002b). To pretend that the mere mention of inflation makes a life-permitting universe '100 percent' inevitable (Foft 245) is naïve in the extreme, a cane toad solution. For a popular-level discussion of many of the points raised in our discussion of inflation, see Steinhardt (2011).
Suppose that inflation did solve the fine-tuning of the density of the universe. Is it reasonable to hope that all fine-tuning cases could be solved in a similar way? We contend not, because inflation has a target. Let's consider the range of densities that the universe could have had at some point in its early history. One of these densities is physically singled out as special — the critical density^{18}. Now let's note the range of densities that permit the existence of cosmic structure in a long-lived universe. We find that this range is very narrow. Very conveniently, this range neatly straddles the critical density. We can now see why inflation has a chance. There is in fact a threefold coincidence — A: the density needed for life, B: the critical density, and C: the actual density of our universe are all aligned. B and C are physical parameters, and so it is possible that some physical process can bring the two into agreement. The coincidence between A and B then creates the required anthropic coincidence (A and C). If, for example, life required a universe with a density (say, just after reheating) 10 times less than critical, then inflation would do a wonderful job of making all universes uninhabitable. Inflation thus represents a very special case. Waiting inside the life-permitting range (L) is another physical parameter (p). Aim for p and you will get L thrown in for free. This is not true of the vast majority of fine-tuning cases. There is no known physical scale waiting in the life-permitting range of the quark masses, fundamental force strengths or the dimensionality of spacetime. There can be no inflation-like dynamical solution to these fine-tuning problems because dynamical processes are blind to the requirements of intelligent life. What if, unbeknownst to us, there was such a fundamental parameter? It would need to fall into the life-permitting range. As such, we would be solving a fine-tuning problem by creating at least one more. And we would also need to posit a physical process able to dynamically drive the value of the quantity in our universe toward p.
Q, the amplitude of primordial fluctuations, is one of Martin Rees’ Just Six Numbers. In our universe, its value is Q ≈ 2 × 10^{–5}, meaning that in the early universe the density at any point was typically within 1 part in 100 000 of the mean density. What if Q were different?
Stenger has two replies:
Note that the first part of the quote contradicts the second part. We are first told that inflation predicts Q = 10^{–5}, and then we are told that inflation cannot predict Q at all. Both claims are false. A given inflationary model will predict Q, and it will only predict a life-permitting value for Q if the parameters of the inflaton potential are suitably fine-tuned. As Turok (2002) notes, 'to obtain density perturbations of the level required by observations ...we need to adjust the coupling μ [for a power law potential μφ^{n}] to be very small, ~10^{–13} in Planck units. This is the famous fine-tuning problem of inflation'; see also Barrow & Tipler (1986, p. 437) and Brandenberger (2011). Rees' life-permitting range for Q implies a fine-tuning of the inflaton potential of ~10^{–11} with respect to the Planck scale. Tegmark (2005, particularly figure 11) argues that on very general grounds we can conclude that life-permitting inflation potentials are highly unnatural. Stenger's second reply is to ask,
There are a few problems here. We have a clear case of the flippant funambulist fallacy — the possibility of altering other constants to compensate for the change in Q is not evidence against fine-tuning. Choose Q and, say, α_{G} at random and you are unlikely to have picked a life-permitting pair, even if our universe is not the only life-permitting one. We also have a nice example of the cheap-binoculars fallacy. The allowed change in Q relative to its value in our universe ('an order of magnitude') is necessarily an underestimate of the degree of fine-tuning. The question is whether this range is small compared to the possible range of Q. Stenger seems to see this problem, and so argues that large values of Q are unlikely to result from inflation. This claim is false^{19}. The upper blue region of Figure 4 shows the distribution of Q for the model of Tegmark (2005), using the 'physically natural expectation' m_{v} = m_{h}. The mean value of Q ranges from 10 to almost 10 000. Note that Rees only varies Q in 'Just Six Numbers' because it is a popular-level book. He and many others have extensively investigated the effect on structure formation of altering a number of cosmological parameters, including Q. Tegmark & Rees (1998) were the first to calculate the range of Q which permits life, deriving limits for the case where ρ_{Λ} = 0 in terms of the quantities defined in Table 1 (together with the cosmic baryon density parameter Ω_{b}), omitting geometric factors of order unity. These limits demonstrate the variety of physical phenomena — atomic, gravitational and cosmological — that must combine in the right way in order to produce a life-permitting universe. Tegmark & Rees also note that there is some freedom to change Q and ρ_{Λ} together. Tegmark et al. (2006) expanded on this work, looking more closely at the role of the cosmological constant. We have already seen some of the results from this paper in Section 4.2.1. The paper considers 8 anthropic constraints on the 7-dimensional parameter space (α, β, m_{p}, ρ_{Λ}, Q, ξ, ξ_{baryon}). Figure 2 (bottom row) shows that the life-permitting region is boxed in on all sides. In particular, the freedom to increase Q and ρ_{Λ} together is limited by the life-permitting range of galaxy densities. Bousso et al. (2009) consider the 4-dimensional parameter space (β, Q, T_{eq}, ρ_{Λ}), where T_{eq} is the temperature of the CMB at matter-radiation equality. They reach similar conclusions to Rees et al.; see also Garriga et al. (1999); Bousso & Leichenauer (2009, 2010). Garriga & Vilenkin (2006) discuss what they call the 'Q catastrophe': the probability distribution for Q across a multiverse typically increases or decreases sharply through the anthropic window. Thus, we expect that the observed value of Q is very likely to be close to one of the boundaries of the life-permitting range. The fact that we appear to be in the middle of the range leads Garriga & Vilenkin to speculate that the life-permitting range may be narrower than Tegmark & Rees (1998) calculated. For example, there may be a tighter upper bound due to the perturbation of comets by nearby stars and/or the problem of nearby supernovae explosions. The interested reader is referred to the 90 scientific papers which cite Tegmark & Rees (1998), catalogued on the NASA Astrophysics Data System^{20}. The fine-tuning of Q stands up well under examination.
The cosmological constant problem is described in the textbook of Burgess & Moore (2006) as 'arguably the most severe theoretical problem in high-energy physics today, as measured by both the difference between observations and theoretical predictions, and by the lack of convincing theoretical ideas which address it'. A well-understood and well-tested theory of fundamental physics (Quantum Field Theory — QFT) predicts contributions to the vacuum energy of the universe that are ~10^{120} times greater than the observed total value. Stenger's reply is guided by the following principle:
This seems indistinguishable from reasoning that the calculation must be wrong since otherwise the cosmological constant would have to be fine-tuned. One could not hope for a more perfect example of begging the question. More importantly, there is a misunderstanding in Stenger's account of the cosmological constant problem. The problem is not that physicists have made an incorrect prediction. We can use the term dark energy for any form of energy that causes the expansion of the universe to accelerate, including a 'bare' cosmological constant (see Barnes et al. 2005, for an introduction to dark energy). Cosmological observations constrain the total dark energy. QFT allows us to calculate a number of contributions to the total dark energy from matter fields in the universe. Each of these contributions turns out to be 10^{120} times larger than the total. There is no direct theory-vs.-observation contradiction, as one is calculating and measuring different things. The fine-tuning problem is that these different independent contributions, including perhaps some that we don't know about, manage to cancel each other to such an alarming, life-permitting degree. This is not a straightforward case of Popperian falsification. Stenger outlines a number of attempts to explain the fine-tuning of the cosmological constant. Supersymmetry: Supersymmetry, if it holds in our universe, would cancel out some of the contributions to the vacuum energy, reducing the required fine-tuning to one part in ~10^{50}. Stenger admits the obvious — this isn't an entirely satisfying solution — but there is a deeper reason to be sceptical of the idea that advances in particle physics could solve the cosmological constant problem. As Bousso (2008) explains:
A particle physics solution to the cosmological constant problem would be just as significant a coincidence as the cosmological constant problem itself. Further, this is not a problem that appears only at the Planck scale. It is thus unlikely that quantum gravity will solve the problem. For example, Donoghue (2007) says
Zero Cosmological Constant: Stenger tries to show that the cosmological constant of general relativity should be defined to be zero. He says:
The second sentence contradicts the first. If gravity depends on the absolute value of mass/energy, then we cannot set the zerolevel to our convenience. It is in particle physics, where gravity is ignorable, that we are free to define ‘zero’ energy as we like. In general relativity there is no freedom to redefine Λ. The cosmological constant has observable consequences that no amount of redefinition can disguise. Stenger’s argument fails because of this premise: if (T_{μν} = 0 ⇒ G_{μν} = 0) then Λ = 0. This is true as a conditional, but Stenger has given no reason to believe the antecedent. Even if we associate the cosmological constant with the ‘SOURCE’ side of the equations, the antecedent is nothing more than an assertion that the vacuum (T_{μν} = 0) doesn’t gravitate. Even if Stenger’s argument were successful, it still wouldn’t solve the problem. The cosmological constant problem is actually a misnomer. This section has discussed the ‘bare’ cosmological constant. It comes purely from general relativity, and is not associated with any particular form of energy. The 120 ordersofmagnitude problem refers to vacuum energy associated with the matter fields of the universe. These are contributions to T_{μν}. The source of the confusion is the fact that vacuum energy has the same dynamical effect as the cosmological constant, so that observations measure an ‘effective’ cosmological constant: Λ_{eff} = Λ_{bare} + Λ_{vacuum}. The cosmological constant problem is really the vacuum energy problem. Even if Stenger could show that Λ_{bare} = 0, this would do nothing to address why Λ_{eff} is observed to be so much smaller than the predicted contributions to Λ_{vacuum}. Quintessence: Stenger recognises that, even if he could explain why the cosmological constant and vacuum energy are zero, he still needs to explain why the expansion of the universe is accelerating. One could appeal to an asyetunknown form of energy called quintessence, which has an equation of state w = p/ρ that causes the expansion of the universe to accelerate^{21} (w < –1/3). Stenger concludes that:
In reply, it is logically possible that the cause of the universe’s acceleration is not vacuum energy but some other form of energy. However, to borrow the memorable phrasing of Bousso (2008), if it looks, walks, swims, flies and quacks like a duck, then the most reasonable conclusion is not that it is a unicorn in a duck outfit. Whatever is causing the accelerated expansion of the universe quacks like vacuum energy. Quintessence is a unicorn in a duck outfit. We are discounting a form of energy with a plausible, independent theoretical underpinning in favour of one that is pure speculation. The present energy density of quintessence must fall in the same lifepermitting range that was required of the cosmological constant. We know the possible range of ρ_{Λ} because we have a physical theory of vacuum energy. What is the possible range of ρ_{Q}? We don’t know, because we have no welltested, wellunderstood theory of quintessence. This is hypothetical physics. In the absence of a physical theory of quintessence, and with the hint (as discussed above) that gravitational physics must be involved, the natural guess for the dark energy scale is the Planck scale. In that case, ρ_{Q} is once again 120 orders of magnitude larger than the lifepermitting scale, and we have simply exchanged the finetuning of the cosmological constant for the finetuning of dark energy. Stenger’s assertion that there is no finetuning problem for quintessence is false, as a number of authors have pointed out. For example, Peacock (2007) notes that most models of quintessence in the literature specify its properties via a potential V(φ), and comments that ‘Quintessence ...models do not solve the [cosmological constant] problem: the potentials asymptote to zero, even though there is no known symmetry that requires this’. Quintessence models must be finetuned in exactly the same way as the cosmological constant (see also Durrer & Maartens 2007). Underestimating Λ: Stenger’s presentation of the cosmological constant problem fails to mention some of the reasons why this problem is so stubborn^{22}. The first is that we know that the electron vacuum energy does gravitate in some situations. The vacuum polarisation contribution to the Lamb shift is known to give a nonzero contribution to the energy of the atom, and thus by the equivalence principle must couple to gravity. Similar effects are observed for nuclei. The puzzle is not just to understand why the zero point energy does not gravitate, but why it gravitates in some environments but not in vacuum. Arguing that the calculation of vacuum energy is wrong and can be ignored is naïve. There are certain contexts where we know that the calculation is correct. Secondly, a dynamical selection mechanism for the cosmological constant is made difficult by the fact that only gravity can measure ρ_{Λ}, and ρ_{Λ} only becomes dynamically important quite recently in the history of the universe. Polchinski (2006) notes that many of the mechanisms aimed at selecting a small value for ρ_{Λ} — the HawkingHartle wavefunction, the de Sitter entropy and the Colemande Luccia amplitude for tunneling — can only explain why the cosmological constant vanishes in an empty universe. Inflation creates another problem for wouldbe cosmological constant problem solvers. If the universe underwent a period of inflation in its earliest stages, then the laws of nature are more than capable of producing lifeprohibiting accelerated expansion. 
The solution must therefore be rather selective, allowing acceleration in the early universe but severely limiting it later on. Further, the inflaton field is yet another contributor to the vacuum energy of the universe, and one with universeaccelerating pedigree. We can write a typical local minimum of the inflaton potential as: V(φ) = μ (φ – φ_{0})^{2} + V_{0}. Post inflation, our universe settles into the minimum at φ = φ_{0}, and the V_{0} term contributes to the effective cosmological constant. We have seen this point previously: the five and sixpointed stars in Figure 4 show universes in which the value of V_{0} is respectively too negative and too positive for the postinflationary universe to support life. If the calculation is wrong, then inflation is not a wellcharacterised theory. If the field does not cause the expansion of the universe to accelerate, then it cannot power inflation. There is no known symmetry that would set V_{0} = 0, because we do not know what the inflaton is. Most proposed inflation mechanisms operate near the Planck scale, so this defines the possible range of V_{0}. The 120 orderofmagnitude finetuning remains. The Principle of Mediocrity: Stenger discusses the multiverse solution to the cosmological constant problem, which relies on the principle of mediocrity. We will give a more detailed appraisal of this approach in Section 5. Here we note what Stenger doesn’t: an appeal to the multiverse is motivated by and dependent on the finetuning of the cosmological constant. Those who defend the multiverse solution to the cosmological constant problem are quite clear that they do so because they have judged other solutions to have failed. Examples abound:
See also Peacock (2007) and Linde & Vanchurin (2010), quoted above, and Susskind (2003). Conclusion: There are a number of excellent reviews of the cosmological constant in the scientific literature (Weinberg 1989; Carroll 2001; Vilenkin 2003; Polchinski 2006; Durrer & Maartens 2007; Padmanabhan 2007; Bousso 2008). The calculations are known to be correct in other contexts and so are taken very seriously. Supersymmetry won’t help. The problem cannot be defined away. The most plausible smallvacuumselecting mechanisms don’t work in a universe that contains matter. Particle physics is blind to the absolute value of the vacuum energy. The cosmological constant problem is not a problem only at the Planck scale and thus quantum gravity is unlikely to provide a solution. Quintessence and the inflaton field are just more fields whose vacuum state must be sternly commanded not to gravitate, or else mutually balanced to an alarming degree. There is, of course, a solution to the cosmological constant problem. There is some reason — some physical reason — why the large contributions to the vacuum energy of the universe don’t make it lifeprohibiting. We don’t currently know what that reason is, but scientific papers continue to be published that propose new solutions to the cosmological constant problem (e.g. Shaw & Barrow 2011). The point is this: however many ways there are of producing a lifepermitting universe, there are vastly many more ways of making a lifeprohibiting one. By the time we discover how our universe solves the cosmological constant problem, we will have compiled a rather long list of ways to blow a universe to smithereens, or quickly crush it into oblivion. Amidst the possible universes, lifepermitting ones are exceedingly rare. This is finetuning par excellence.
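As a rough arithmetic illustration of where the figure of 120 orders of magnitude quoted in this section comes from, the sketch below uses dimensional analysis with an assumed Planck-scale cutoff; the precise exponent quoted in the literature depends on the cutoff and conventions adopted, and no precise QFT calculation is attempted here.

    # Naive vacuum energy estimate: with a cutoff at the Planck scale, the vacuum
    # energy density is of order m_Pl^4, while the observed dark energy density
    # corresponds to an energy scale of roughly a few meV. Order of magnitude only.
    import math

    m_Pl = 1.2e19       # GeV, Planck scale
    E_dark = 2.3e-12    # GeV, energy scale of the observed dark energy density

    ratio = (m_Pl / E_dark) ** 4
    print(f"rho_vacuum / rho_observed ~ 10^{math.log10(ratio):.0f}")   # ~ 10^123

A Planck-scale cutoff gives ~10^{123}; lower cutoffs give smaller but still enormous numbers, which is why the discrepancy is usually quoted as at least 120 orders of magnitude.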
Stars have two essential roles to play in the origin and evolution of intelligent life. They synthesise the elements needed by life — big bang nucleosynthesis provides only hydrogen, helium and lithium, which together can form just two chemical compounds (H_{2} and LiH). By comparison, Gingerich (2008) notes that carbon and hydrogen alone can be combined into around 2300 different chemical compounds. Stars also provide a longlived, lowentropy source of energy for planetary life, as well as the gravity that holds planets in stable orbits. The low entropy of the energy supplied by stars is crucial if life is to ‘evade the decay to equilibrium’ (Schrödinger 1992).
Stars are defined by the forces that hold them in balance. The crushing force of gravity is held at bay by thermal and radiation pressure. The pressure is sourced by nuclear reactions at the centre of the star, which balance the energy lost to radiation. Stars thus require a balance between two very different forces — gravity and the strong force — with the electromagnetic force (in the form of electron scattering opacity) providing the link between the two. There is a window of opportunity for stars — too small and they won’t be able to ignite and sustain nuclear fusion at their cores, being supported against gravity by degeneracy rather than thermal pressure; too large and radiation pressure will dominate over thermal pressure, allowing unstable pulsations. Barrow & Tipler (1986, p. 332) showed that this window is open only for a limited range of the fundamental constants; the more exact calculation of the relevant limit is given by Adams (2008), while Barrow & Tipler’s version uses their approximation for the minimum nuclear ignition temperature T_{nuc} ~ ηα^{2}m_{p}, where η ≈ 0.025 for hydrogen burning. Outside this range, stars are not stable: anything big enough to burn is big enough to blow itself apart. Adams (2008) showed there is another criterion that must be fulfilled for stars to have a stable burning configuration, involving a composite parameter related to nuclear reaction rates; we have specialised equation 44 of Adams to the case where stellar opacity is due to Thomson scattering. Adams combines these constraints in the three-dimensional parameter space spanned by G, α and the composite nuclear parameter, holding all other parameters constant, as shown in Figure 5. Below the solid line, stable stars are possible. The dashed (dotted) line shows the corresponding constraint for universes in which the composite nuclear parameter is increased (decreased) by a factor of 100. Adams remarks that ‘within the parameter space shown, which spans 10 orders of magnitude in both α and G, about onefourth of the space supports the existence of stars’.
Stenger (Foft 243) cites Adams’ result, but crucially omits the modifier ‘within the parameter space shown’. Adams makes no attempt to justify the limits of parameter space as he has shown them. Further, there is no justification of the use of logarithmic axes, which significantly affects the estimate of the probability^{23}. The figure of ‘onefourth’ is almost meaningless — given any lifepermitting region, one can make it equal onefourth of parameter space by chopping and changing said space. This is a perfect example of the cheapbinoculars fallacy. If one allows G to increase until gravity is as strong as the strong force (α_{G} ≈ α_{s} ≈ 1), and uses linear rather than logarithmic axes, the stablestarpermitting region occupies ~ 10^{–38} of parameter space. Even with logarithmic axes, finetuning cannot be avoided — zero is a possible value of G, and thus is part of parameter space. However, such a universe is not lifepermitting, and so there is a minimum lifepermitting value of G. A logarithmic axis, by placing G = 0 at negative infinity, puts an infinitely large region of parameter space outside of the lifepermitting region. Stable stars would then require infinite finetuning. Note further that the fact that our universe (the triangle in Figure 5) isn’t particularly close to the lifepermitting boundary is irrelevant to finetuning as we have defined it. We conclude that the existence of stable stars is indeed a finetuned property of our universe.
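The sensitivity of such fractions to the choice of axes can be made concrete with a toy calculation. This is a caricature of Figure 5, not Adams’ constraint: the upper edge of the stablestar region is placed, arbitrarily, a few orders of magnitude above the observed value of α_{G}, purely to illustrate how the answer depends on the measure.

    # Toy version of the axes argument above, in one dimension (alpha_G only).
    import math

    alpha_G_obs = 5.9e-39               # gravitational coupling for protons, our universe
    alpha_G_max_life = 1000 * alpha_G_obs   # assumed edge of the stable-star region (illustrative)
    alpha_G_possible_max = 1.0          # gravity as strong as the strong force

    linear_fraction = alpha_G_max_life / alpha_G_possible_max
    print(f"linear axes: fraction ~ {linear_fraction:.0e}")   # ~6e-36

    # On logarithmic axes the answer depends entirely on the arbitrary lower cutoff:
    for alpha_G_min in (1e-45, 1e-60, 1e-100):
        log_fraction = (math.log10(alpha_G_max_life) - math.log10(alpha_G_min)) / \
                       (math.log10(alpha_G_possible_max) - math.log10(alpha_G_min))
        print(f"log axes, lower cutoff {alpha_G_min:.0e}: fraction ~ {log_fraction:.2f}")

On linear axes the fraction is minute; on logarithmic axes it can be made almost anything by the choice of lower cutoff, and it vanishes altogether if G = 0 is included, which is the point made above.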
One of the most famous examples of finetuning is the Hoyle resonance in carbon. Hoyle reasoned that if such a resonance level did not exist at just the right place, then stars would be unable to produce the carbon required by life^{24}. Is the Hoyle resonance (called the 0^{+} level) finetuned? Stenger quotes the work of Livio et al. (1989), who considered the effect on the carbon and oxygen production of stars when the 0^{+} level is shifted. They found that one could increase the energy of the level by 60 keV without affecting the level of carbon production. Is this a large change or a small one? Livio et al. (1989) ask just this question, noting the following. The permitted shift represents a 0.7% change in the energy of the level itself. It is 3% of the energy difference between the 0^{+} level and the next level up in the carbon nucleus (3^{–}). It is 16% of the difference between the energy of the 0^{+} state and the energy of three alpha particles, which come together to form carbon. Stenger argues that this final estimate is the most appropriate one, quoting from Weinberg (2007):
As Cohen (2008) notes, the 0^{+} state is known as a breathing mode; all nuclei have such a state. However, we are not quite done with assessing this finetuning case. The existence of the 0^{+} level is not enough. It must have the right energy, and so we need to ask how the properties of the resonance level, and thus stellar nucleosynthesis, change as we alter the fundamental constants. Oberhummer, Csótó & Schlattl (2000a)^{25} have performed such calculations, combining the predictions of a microscopic 12body, threealpha cluster model of ^{12}C (as alluded to by Weinberg) with a stellar nucleosynthesis code. They conclude that:
Schlattl et al. (2004), by the same group, noted an important caveat on their previous result. Modelling the later, posthydrogenburning stages of stellar evolution is difficult even for modern codes, and the inclusion of Heshell flashes seems to lessen the degree of finetuning of the Hoyle resonance. Ekström et al. (2010) considered changes to the Hoyle resonance in the context of Population III stars. These firstgeneration stars play an important role in the production of the elements needed by life. Ekström et al. (2010) place similar limits to Oberhummer et al. (2000a) on the nucleonnucleon force, and go further by translating these limits into limits on the finestructure constant, α. A fractional change in α of one part in 10^{5} would change the energy of the Hoyle resonance enough that stars would contain carbon or oxygen at the end of helium burning but not both. There is again reason to be cautious, as stellar evolution has not been followed to the very end of the life of the star. Nevertheless, these calculations are highly suggestive — the main process by which carbon and oxygen are synthesised in our universe is drastically curtailed by a tiny change in the fundamental constants. Life would need to hope that sufficient carbon and oxygen are synthesized in other ways, such as supernovae. We conclude that Stenger has failed to turn back the force of this finetuning case. The ability of stars in our universe to produce both carbon and oxygen seems to be a rare talent.
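For reference, the three ways of expressing the 60 keV shift discussed above (relative to the level itself, to the gap to the next level, and to the three-alpha threshold) can be reproduced from the standard measured energies of the relevant ^{12}C levels; the values below are the familiar textbook figures and are quoted here only to recover the percentages of Livio et al. (1989).

    # Reproducing the three percentages quoted above. Energies in MeV, measured
    # from the 12C ground state (standard nuclear data values).
    E_hoyle = 7.65      # the 0+ Hoyle resonance
    E_3minus = 9.64     # the next level up (3-)
    E_3alpha = 7.27     # energy of three free alpha particles
    shift = 0.060       # the permitted upward shift of the 0+ level

    print(f"relative to the level itself:      {100*shift/E_hoyle:.1f}%")              # ~0.8%
    print(f"relative to the gap to the 3- level: {100*shift/(E_3minus - E_hoyle):.0f}%")  # ~3%
    print(f"relative to the gap above 3 alphas:  {100*shift/(E_hoyle - E_3alpha):.0f}%")  # ~16%

These approximately reproduce the 0.7%, 3% and 16% figures quoted above; small differences are rounding.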
In Chapters 7–10, Stenger turns his attention to the strength of the fundamental forces and the masses of the elementary particles. These quantities are among the most discussed in the finetuning literature, beginning with Carter (1974), Carr & Rees (1979) and Barrow & Tipler (1986). Figure 6 shows in white the lifepermitting region of (α, β) (left) and (α, α_{s}) (right) parameter space^{26}. The axes are scaled like arctan (log_{10}[x]), so that the interval [0, ∞] maps onto a finite range. The blue cross shows our universe. This figure is similar to those of Tegmark (1998). The various regions illustrated are as follows:
Note that there are four more constraints on α, m_{e} and m_{p} from the cosmological considerations of Tegmark et al. (2006), as discussed in Section 4.2. There are more cases of finetuning to be considered when we expand our view to consider all the parameters of the standard model of particle physics. Agrawal et al. (1998a, b) considered the lifepermitting range of the Higgs mass parameter μ^{2}, and the corresponding limits on the vacuum expectation value, v = (–μ^{2}/λ)^{1/2}, which takes the value 246 GeV = 2 × 10^{–17} m_{Pl} in our universe. After exploring the range [–m_{Pl}, m_{Pl}], they find that ‘only for values in a narrow window is life likely to be possible’. In Planck units, the relevant limits are: for v > 4 × 10^{–17}, the deuteron is strongly unstable (see point 10 above); for v > 10^{–16}, the neutron is heavier than the proton by more than the nucleon’s binding energy, so that even bound neutrons decay into protons and no nuclei larger than hydrogen are stable; for v > 2 × 10^{–14}, only the Δ^{++} particle is stable and the only stable nucleus has the chemistry of helium; for v ≲ 2 × 10^{–19}, stars will form very slowly (~10^{17} yr) and burn out very quickly (~1 yr), and the large number of stable nucleon species may make nuclear reactions so easy that the universe contains no light nuclei. Damour & Donoghue (2008) refined the limits of Agrawal et al. by considering nuclear binding, concluding that unless 0.78 × 10^{–17} < v < 3.3 × 10^{–17}, hydrogen is unstable to the reaction p + e → n + ν (if v is too small) or else there is no nuclear binding at all (if v is too large). Jeltema & Sher (1999) combined the conclusions of Agrawal et al. and Oberhummer et al. (2000a) to place a constraint on the Higgs vev from the finetuning of the Hoyle resonance (Section 4.7.2). They conclude that a 1% change in v from its value in our universe would significantly affect the ability of stars to synthesise both oxygen and carbon. Hogan (2006) reached a similar conclusion: ‘In the absence of an identified compensating factor, increases in [v/Λ_{QCD}] of more than a few percent lead to major changes in the overall cosmic carbon creation and distribution’. Remember, however, the caveats of Section 4.7.2: it is difficult to predict exactly when a major change becomes a lifeprohibiting change. There has been considerable attention given to the finetuning of the masses of fundamental particles, in particular m_{u}, m_{d} and m_{e}. We have already seen the calculation of Barr & Khan (2007) in Figure 2, which shows the lifepermitting region of the m_{u}–m_{d} plane. Hogan (2000) was one of the first to consider the finetuning of the quark masses (see also Hogan 2006). Such results have been confirmed and extended by Damour & Donoghue (2008), Hall & Nomura (2008) and Bousso et al. (2009). Jaffe et al. (2009) examined a different slice through parameter space, varying the masses of the quarks while ‘holding as much as possible of the rest of the Standard Model phenomenology constant’ [emphasis original]. In particular, they fix the electron mass, and vary Λ_{QCD} so that the average mass of the lightest baryon(s) is 940 MeV, as in our universe. These restrictions are chosen to make the characterisation of these other universes more certain. Only nuclear stability is considered, so that a universe is deemed congenial if both carbon and hydrogen are stable. The resulting congenial range is shown in Figure 8.
The height of each triangle is proportional to the total mass of the three lightest quarks: m_{T} = m_{u} + m_{d} + m_{s}; the centre triangle has m_{T} as in our universe. The perpendicular distance from each side represents the mass of the u, d and s quarks. The lower green region shows universes like ours with two light quarks (m_{u}, m_{d} ≪ m_{s}), and is bounded above by the stability of some isotope of hydrogen (in this case, tritium) and below by the corresponding limit for carbon (^{10}C); together these give –21.80 MeV < m_{p} – m_{n} < 7.97 MeV. The smaller green strip shows a novel congenial region, where there is one light quark (m_{d} ≪ m_{s} ≈ m_{u}). This congeniality band has half the width of the band in which our universe is located. The red regions are uncongenial, while white regions show where it is uncertain where the redgreen boundary should lie. Note two things about the larger triangle on the right. Firstly, the smaller congenial band detaches from the edge of the triangle for m_{T} ≳ 1.22 m_{T,0}, as the lightest baryon is the Δ^{++}, which would be incapable of forming nuclei. Secondly, and most importantly for our purposes, the absolute width of the green regions remains the same, and thus the congenial fraction of the space decreases approximately as 1/m_{T}. Moving from the centre (m_{T} = m_{T,0}) to the right (m_{T} = 2m_{T,0}) triangle of Figure 8, the congenial fraction drops from 14% to 7%. Finally, ‘congenial’ is almost certainly a weaker constraint than ‘lifepermitting’, since only nuclear stability is investigated. For example, a universe with only tritium will have an element which is chemically very similar to hydrogen, but stars will not have ^{1}H as fuel and will therefore burn out significantly faster.
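The 1/m_{T} scaling quoted above follows from simple geometry, treating the congenial band as a strip of fixed absolute width across a triangle whose side length scales with m_{T}; the following schematic check simply encodes that scaling, normalised to the ~14% figure for our universe (an input, not a derivation).

    # Schematic check of the 1/m_T scaling of the congenial fraction in Figure 8.
    # Triangle area grows as m_T^2, a band of fixed width grows as m_T, so the
    # congenial fraction falls as 1/m_T.
    def congenial_fraction(m_T, f0=0.14, m_T0=1.0):
        """Congenial fraction, normalised to ~14% at our universe's m_T."""
        return f0 * m_T0 / m_T

    print(congenial_fraction(1.0))   # 0.14, as for the centre triangle
    print(congenial_fraction(2.0))   # 0.07, as for the right-hand triangle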
Tegmark, Vilenkin & Pogosian (2005) studied anthropic constraints on the total mass of the three neutrino species. If ∑m_{ν} ≳ 1 eV then galaxy formation is significantly suppressed by free streaming. If ∑m_{ν} is large enough that neutrinos are effectively another type of cold dark matter, then the baryon fraction in haloes would be very low, affecting baryonic disk and star formation. If all neutrinos are heavy, then neutrons would be stable and big bang nucleosynthesis would leave no hydrogen for stars and organic compounds. This study only varies one parameter, but its conclusions are found to be ‘rather robust’ when ρ_{Λ} is also allowed to vary (Pogosian & Vilenkin 2007). There are a number of tentative anthropic limits relating to baryogenesis. Baryogenesis is clearly crucial to life — a universe which contained equal numbers of protons and antiprotons at annihilation would only contain radiation, which cannot form complex structures. However, we do not currently have a wellunderstood and welltested theory of baryogenesis, so caution is advised. Gould (2010) has argued that three or more generations of quarks and leptons are required for CP violation, which is one of the necessary conditions for baryogenesis (Sakharov 1967; Cahn 1996; Schellekens 2008). Hall & Nomura (2008) state that v/Λ_{QCD} ~ 1 is required ‘so that the baryon asymmetry of the early universe is not washed out by sphaleron effects’ (see also ArkaniHamed et al. 2005). Harnik, Kribs & Perez (2006) attempted to find a region of parameter space which is lifepermitting in the absence of the weak force. With some ingenuity, they plausibly discovered one, subject to the following conditions. To prevent big bang nucleosynthesis burning all hydrogen to helium in the early universe, they must use a ‘judicious parameter adjustment’ and set the baryon to photon ratio η_{b} = 4 × 10^{–12}. The result is a substantially increased abundance of deuterium, ~10% by mass. Λ_{QCD} and the masses of the light quarks and leptons are held constant, which means that the nucleon masses and thus nuclear physics is relatively unaffected (except, of course, for beta decay) so long as we ‘insist that the weakless universe is devoid of heavy quarks’ to avoid problems relating to the existence of stable baryons^{29} Λ_{c}^{+}, Λ_{b}^{0} and Λ_{t}^{+}. Since v ~ m_{Pl} in the weakless universe, holding the light fermion masses constant requires that the Yukawa parameters (Γ_{e}, Γ_{u}, Γ_{d}, Γ_{s}) all be set by hand to be less than 10^{–20} (Feldstein et al. 2006). The weakless universe requires Ω_{baryon}/Ω_{dark matter} ~ 10^{–3}, 100 times less than in our universe. This is very close to the limit of Tegmark et al. (2006), who calculated that unless Ω_{baryon}/Ω_{dark matter} ≳ 5 × 10^{–3}, gas will not cool into galaxies to form stars. Galaxy formation in the weakless universe will thus be considerably less efficient, relying on rare statistical fluctuations and cooling via molecular viscosity. The protonproton reaction which powers stars in our universe relies on the weak interaction, so stars in the weakless universe burn via protondeuterium reactions, using deuterium left over from the big bang. Stars will burn at a lower temperature, and probably with shorter lifetimes. Stars will still be able to undergo accretion supernovae (Type Ia), but the absence of corecollapse supernovae will seriously affect the oxygen available for planet formation and life (Clavelli & White 2006).
Only ~1% of the oxygen in our universe comes from accretion supernovae. It is then somewhat optimistic to claim that the weakless universe is approximately as hospitable to observers as our own (Gedalia, Jenkins & Perez 2011), where {α_{us}} ({α_{weakless}}) represents the set of parameters of our (the weakless) universe. Note that, even if such a claim holds, the weakless universe at best opens up a lifepermitting region of parameter space of similar size to the region in which our universe resides. The need for a lifepermitting universe to be finetuned is not significantly affected.
Let’s consider Stenger’s responses to these cases of finetuning. Higgs and Hierarchy:
Stenger takes no cognizance of the hierarchy and flavour problems, widely believed to be amongst the most important problems of particle physics:
The problem is as follows. The mass of a fundamental particle in the standard model is set by two factors: m_{i} = Γ_{i} v/√2, where i labels the particle species, Γ_{i} is called the Yukawa parameter (e.g. electron: Γ_{e} ≈ 2.9 × 10^{–6}, up quark: Γ_{u} ≈ 1.4 × 10^{–5}, down quark: Γ_{d} ≈ 2.8 × 10^{–5}), and v is the Higgs vacuum expectation value, which is the same for all particles (see Burgess & Moore 2006, for an introduction). Note that, contra Stenger, the bare masses of the quarks are not related to the strong force^{30}. There are, then, two independent ways in which the masses of the basic constituents of matter are surprisingly small: v = 2 × 10^{–17} m_{Pl}, which ‘is so notorious that it’s acquired a special name — the Hierarchy Problem — and spawned a vast, inconclusive literature’ (Wilczek 2006a), and Γ_{i} ~ 10^{–6}, which implies that, for example, the electron mass is unnaturally smaller than its (unnaturally small) natural scale set by the Higgs condensate (Wilczek 2007, p. 53). This is known as the flavour problem. Let’s take a closer look at the hierarchy problem. The problem (as ably explained by Martin 1998) is that the Higgs mass (squared) m_{H}^{2} receives quantum corrections from the virtual effects of every particle that couples, directly or indirectly, to the Higgs field. These corrections are enormous — their natural scale is the Planck scale, so that these contributions must be finetuned to mutually cancel to one part in m_{Pl}^{2}/m_{H}^{2} ≈ 10^{32}. Stenger’s reply is to say that:
Here we see the problem itself presented as its solution. It is precisely the smallness of the quantum corrections wherein the finetuning lies. If the Planck mass is the ‘natural’ (Foft 175) mass scale in physics, then it sets the scale for all mass terms, corrections or otherwise. Just calling them ‘small’ doesn’t explain anything. Attempts to solve the hierarchy problem have driven the search for theories beyond the standard model: technicolor, the supersymmetric standard model, large extra dimensions, warped compactifications, little Higgs theories and more — even anthropic solutions (ArkaniHamed & Dimopoulos 2005; ArkaniHamed et al. 2005; Feldstein et al. 2006; Hall & Nomura 2008, 2010; Donoghue et al. 2010). Perhaps the most popular option is supersymmetry, whereby the Higgs mass scale doesn’t receive corrections from mass scales above the supersymmetrybreaking scale Λ_{SM} due to equal and opposite contributions from supersymmetric partners. This ties v to Λ_{SM}. The question now is: why is Λ_{SM} ≪ m_{Pl}? This is known in the literature as ‘the μproblem’, in reference to the parameter in the supersymmetric potential that sets the relevant mass scale. The value of μ in our universe is probably ~10^{2}–10^{3} GeV. The natural scale for μ is m_{Pl}, and thus we still do not have an explanation for why the quark and lepton masses are so small. Lowenergy supersymmetry does not by itself explain the magnitude of the weak scale, though it protects it from radiative correction (Barr & Khan 2007). Solutions to the μproblem can be found in the literature (see Martin 1998, for a discussion and references). We can draw some conclusions. First, Stenger’s discussion of the surprising lightness of fundamental masses is woefully inadequate. To present it as a solved problem of particle physics is a gross misrepresentation of the literature. Secondly, smallness is not sufficient for life. Recall that Damour & Donoghue (2008) showed that unless 0.78 × 10^{–17} < v/m_{Pl} < 3.3 × 10^{–17}, the elements are unstable. The masses must be sufficiently small but not too small. Finally, suppose that the LHC discovers that supersymmetry is a (broken) symmetry of our universe. This would not be the discovery that the universe could not have been different. It would not be the discovery that the masses of the fundamental particles must be small. It would at most show that our universe has chosen a particularly elegant and beautiful way to be lifepermitting. QCD and MassWithoutMass: The bare quark masses, discussed above, only account for a small fraction of the mass of the proton and neutron. The majority of the other 95% comes from the strong force binding energy of the valence quarks. This contribution can be written as aΛ_{QCD}, where a ≈ 4 is a dimensionless constant determined by quantum chromodynamics (QCD). In Planck units, Λ_{QCD} ≈ 10^{–20}m_{Pl}. The question ‘why is gravity so feeble?’ (i.e. α_{G} ≪ 1) is at least partly answered if we can explain why Λ_{QCD} ≪ m_{Pl}. Unlike the bare masses of the quarks and leptons, we can answer this question from within the standard model. The strength of the strong force α_{s} is a function of the energy of the interaction. Λ_{QCD} is the massenergy scale at which α_{s} diverges. Given that the strength of the strong force runs very slowly (logarithmically) with energy, there is an exponential relationship between Λ_{QCD} and the scale of grand unification m_{U}: Λ_{QCD} ≈ m_{U} exp(–b/α_{s}(m_{U})), where b is a constant of order unity.
Thus, if the QCD coupling is even moderately small at the unification scale, the QCD scale will be a long way away. To make this work in our universe, we need α_{s}(m_{U}) ≈ 1/25, and m_{U} ≈ 10^{16} GeV (De Boer & Sander 2004). The calculation also depends on the spectrum of quark flavours; see Hogan (2000), Wilczek (2002) and Schellekens (2008, Appendix C). As an explanation for the value of the proton and neutron mass in our universe, we aren’t done yet. We don’t know how to calculate the α_{s}(m_{U}), and there is still the puzzle of why the unification scale is three orders of magnitude below the Planck scale. From a finetuning perspective, however, this seems to be good progress, replacing the major miracle Λ_{QCD}/m_{Pl} ~ 10^{–20} with a more minor one, α_{s}(m_{U}) ~ 10^{–1}. Such explanations have been discussed in the finetuning literature for many years (Carr & Rees 1979; Hogan 2000). Note that this does not completely explain the smallness of the proton mass, since m_{p} is the sum of a number of contributions: QCD (Λ_{QCD}), electromagnetism, the masses of the valence quarks (m_{u} and m_{d}), and the mass of the virtual quarks, including the strange quark, which makes a surprisingly large contribution to the mass of ordinary matter. We need all of the contributions to be small in order for m_{p} to be small. Potential problems arise when we need the proton mass to fall within a specific range, rather than just be small, since the proton mass depends very sensitively (exponentially) on α_{U}. For example, consider Region 4 in Figure 6, β^{1/4} ≪ 1. The constraint shown, β^{1/4} < 1/3 would require a 20fold decrease in the proton mass to be violated, which (using Equation 7) translates to decreasing α_{U} by ~0.003. Similarly, Region 7 will be entered if α_{U} is increased^{31} by ~0.008. We will have more to say about grand unification and finetuning below. For the moment, we note that the finetuning of the mass of the proton can be translated into anthropic limits on GUT parameters. Protons, Neutrons, Electrons: We turn now to the relative masses of the three most important particles in our universe: the proton, neutron and electron, from which atoms are made. Consider first the ratio of the electron to the proton mass, β, of which Stenger says:
Remember that finetuning compares the lifepermitting range of a parameter with the possible range. Foft has compared the electron mass in our universe with the electron mass in universes ‘like ours’, thus missing the point entirely. In terms of the parameters of the standard model, β ≡ m_{e}/m_{p} ≈ Γ_{e}v/aΛ_{QCD}. The smallness of β is thus quite surprising, since the ratio of the natural mass scale of the electron and the proton is v/Λ_{QCD} ≈ 10^{3}. The smallness of β stems from the fact that the dimensionless constant for the proton is of order unity (a ≈ 4), while the Yukawa constant for the electron is unnaturally small Γ_{e} ≈ 10^{–6}. Stenger’s assertion that the Higgs mechanism (with mass scale 246 GeV) accounts for the smallness of the electron mass (0.000511 GeV) is false. The other surprising aspect of the smallness of β is the remarkable proximity of the QCD and electroweak scales (ArkaniHamed & Dimopoulos 2005); in Planck units, v ≈ 2 × 10^{–17}m_{Pl} and Λ_{QCD} ≈ 2 × 10^{–20}m_{Pl}. Given that β is constrained from both above and below anthropically (Figure 6), this coincidence is required for life. Let’s look at the protonneutron mass difference.
Let’s first deal with the Lattice QCD (LQCD) calculations. LQCD is a method of reformulating the equations of QCD in a way that allows them to be solved on a supercomputer. LQCD does not calculate the quark masses from the fundamental parameters of the standard model — they are fundamental parameters of the standard model. Rather, ‘[t]he experimental values of the π, ρ and K or φ masses are employed to fix the physical scale and the light quark masses’ (Iwasaki 2000). Every LQCD calculation takes great care to explain that they are inferring the quark masses from the masses of observed hadrons (see, for example, Davies et al. 2004; Dürr et al. 2008; Laiho 2011). This is important because finetuning involves a comparison of the lifepermitting range of the fundamental parameters with their possible range. LQCD doesn’t address either. It demonstrates that (with no small amount of cleverness) one can measure the quark masses in our universe. It does not show that the quark masses could not have been otherwise. When Stenger compares two different values for the quark masses (3.3 MeV and 1.5–3 MeV), he is not comparing a theoretical calculation with an experimental measurement. He is comparing two measurements. Stenger has demonstrated that the u and d quark masses in our universe are equal (within experimental error) to the u and d quark masses in our universe. Stenger states that m_{n} – m_{p} results from m_{d} – m_{u}. This is false, as there is also a contribution from the electromagnetic force (Gasser & Leutwyler 1982; Hall & Nomura 2008). This would tend to make the (charged) proton heavier than the (neutral) neutron, and hence we need the mass difference of the light quarks to be large enough to overcome this contribution. As discussed in Section 4.8 (item 5), this requires α ≲ (m_{d} – m_{u})/141 MeV. The lightness of the upquark is especially surprising, since the upquark’s older brothers (charm and top) are significantly heavier than their partners (strange and bottom). Finally, and most importantly, note carefully Stenger’s conclusion. He states that no finetuning is needed for the neutronproton mass difference in our universe to be approximately equal to the up quark – down quark mass difference in our universe. Stenger has compared our universe with our universe and found no evidence of finetuning. There is no discussion of the lifepermitting range, no discussion of the possible range of m_{n} – m_{p} (or its relation to the possible range of m_{d} – m_{u}), and thus no relevance to finetuning whatsoever.
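For reference, the tree-level relation m_{i} = Γ_{i} v/√2 quoted earlier in this section, together with the Yukawa parameters listed there, reproduces the familiar light fermion masses. The following sketch simply evaluates that formula; it is a consistency check on the quoted numbers, not a prediction, since the Yukawa parameters are themselves fitted to the measured masses.

    # Evaluating m_i = Gamma_i * v / sqrt(2) for the light fermions, using the
    # Yukawa parameters quoted above and v = 246 GeV.
    from math import sqrt

    v = 246.0  # GeV, Higgs vacuum expectation value
    yukawas = {"electron": 2.9e-6, "up quark": 1.4e-5, "down quark": 2.8e-5}

    for name, gamma in yukawas.items():
        m_MeV = gamma * v / sqrt(2) * 1e3   # GeV -> MeV
        print(f"{name:10s}: ~{m_MeV:.2f} MeV")

    # And in Planck units, v / m_Pl ~ 2e-17, as quoted above:
    print(f"v/m_Pl ~ {v / 1.22e19:.1e}")

This gives roughly 0.5 MeV, 2.4 MeV and 4.9 MeV for the electron, up quark and down quark, in line with the measured values; the point of the discussion above is that nothing in the standard model fixes these inputs.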
Until now, we have treated the strength of the fundamental forces, quantified by the coupling constants α_{1}, α_{2} and α_{3} (collectively α_{i}), as constants. In fact, these parameters are a function of energy due to screening (or antiscreening) by virtual particles. For example, the ‘running’ of α_{1} with mass-energy (M) is governed (to first order) by the following equation (De Boer 1994; Hogan 2000): d[α_{1}^{–1}]/d(ln M) = –(2/3π) ∑_{i} Q_{i}^{2}, where the sum is over the charges Q_{i} of all fermions of mass less than M. If we include all (and only) the particles of the standard model, then the solution is α_{1}^{–1}(M) = α_{1}^{–1}(M_{0}) – (2/3π) ∑_{i} Q_{i}^{2} ln(M/M_{0}). The integration constant, α_{1}(M_{0}), is set at a given energy scale M_{0}. A similar set of equations holds for the other constants. Stenger asks,
The second sentence is false by definition — a finetuning claim necessarily considers different values of the physical parameters of our universe. Note that Stenger doesn’t explicitly answer the question he has posed. If the implication is that those who have performed theoretical calculations to determine whether universes with different physics would support life have failed to take into account the running of the coupling constants, then he should provide references. I know of no scientific paper on finetuning that has used the wrong value of α_{i} for this reason. For example, for almost all constraints involving the finestructure constant, the relevant value is the low energy limit i.e. the fine structure constant α = 1/137. The fact that α is different at higher energies is not relevant. Alternatively, if the implication is that the running of the constants means that one cannot meaningfully consider changes in the α_{i}, then this too is false. As can be seen from Equation 9, the running of the coupling does not fix the integration constants. If we choose to fix them at low energies, then changing the finestructure constant is effected by our choice of α_{1}(M_{0}) and α_{2}(M_{0}). The running of the coupling constants does not change the status of the α_{i} as free parameters of the theory. The running of the coupling constants is only relevant if unification at high energy fixes the integration constants, changing their status from fundamental to derived. We thus turn to Grand Unification Theories (GUTs), of which Stenger remarks:
At the risk of repetition: to show (or conjecture) that a parameter is derived rather than fundamental does not mean that it is not finetuned. As Stenger has presented it, grand unification is a cane toad solution, as no attempt is made to assess whether the GUT parameters are finetuned. All that we should conclude from Stenger’s discussion is that the parameters (α_{1}, α_{2}, α_{3}) can be calculated given α_{U} and M_{U}. The calculation also requires that the masses, charges and quantum numbers of all fundamental particles be given to allow terms like ∑Q_{i}^{2} to be computed. What is the lifepermitting range of α_{U} and M_{U}? Given that the evidence for GUTs is still circumstantial, not much work has been done towards answering this question. The pattern α_{3} ≫ α_{2} > α_{1} seems to be generic, since ‘the antiscreening or asymptotic freedom effect is more pronounced for larger gauge groups, which have more types of virtual gluons’ (Wilczek 1997). As can be seen from Figure 6, this is a good start but hardly guarantees a lifepermitting universe. The strength of the strong force at low energy increases with M_{U}, so the smallness of M_{U}/m_{Pl} may be ‘explained’ by the anthropic limits on α_{s}. If we suppose that α and α_{s} are related linearly to α_{U}, then the GUT would constrain the point (α, α_{s}) to lie on the blue dotdashed line in Figure 6. This replaces the finetuning of the white area with the finetuning of the linesegment, plus the constraints placed on the other GUT parameters to ensure that the dotted line passes through the white region at all. This last point has been emphasised by Hogan (2007). Figure 7 shows a slice through parameter space, showing the electron mass (m_{e}) and the downup quark mass difference (m_{d} – m_{u}). The condition labelled no nuclei was discussed in Section 4.8, point 10. The line labelled no atoms is the same condition as point 1, expressed in terms of the quark masses. The thin solid vertical line shows ‘a constraint from a particular SO(10) grand unified scenario’ which fixes m_{d}/m_{e}. Hogan notes:
The effect of grand unification on finetuning is discussed in Barrow & Tipler (1986, p. 354). They found that GUTs provided the tightest anthropic bounds on the finestructure constant, associated with the decay of the proton into a positron and the requirement of grand unification below the Planck scale. These limits are shown in Figure 6 as solid black lines. Regarding the spectrum of fundamental particles, Cahn (1996) notes that if the couplings are fixed at high energy, then their value at low energy depends on the masses of particles only ever seen in particle accelerators. For example, changing the mass of the top quark affects the finestructure constant and the mass of the proton (via Λ_{QCD}). While the dependence on m_{t} is not particularly dramatic, it would be interesting to quantify such anthropic limits within GUTs. Note also that, just as there is more than one way to unify the forces of the standard model — SU(5), SO(10), E_{8} and more — there is also more than one way to break the GUT symmetry. I will defer to the expertise of Schellekens (2008).
In other words, we not only need the right GUT symmetry, we need to make sure it breaks in the right way. A deeper perspective of GUTs comes from string theory — I will follow the discussion in Schellekens (2008, p. 62ff.). Since string theory unifies the four fundamental forces at the Planck scale, it doesn’t really need grand unification. That is, there is no particular reason why three of the forces should unify first, three orders of magnitude below the Planck scale. It seems at least as easy to get the standard model directly, without bothering with grand unification. This could suggest that there are anthropic reasons for why we (possibly) live in a GUT universe. Grand unification provides a mechanism for baryon number violation and thus baryogenesis, though such theories are currently out of favour. We conclude that anthropic reasoning seems to provide interesting limits on GUTs, though much work remains to be done in this area.
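The translation of proton-mass limits into limits on α_{U}, mentioned above, rests on the exponential relation between Λ_{QCD} and the unified coupling. The sketch below illustrates that sensitivity using Λ_{QCD} ~ m_{U} exp(–b/α_{U}); the constant b is fixed here only by requiring that our universe’s values are reproduced, which is an assumption of this illustration rather than a published fit.

    # Schematic sensitivity of Lambda_QCD (and hence the proton mass) to the
    # unified coupling alpha_U, via Lambda_QCD ~ m_U * exp(-b / alpha_U).
    import math

    m_U = 1e16                              # GeV, unification scale
    alpha_U0 = 1.0 / 25.0                   # unified coupling in our universe
    b = -alpha_U0 * math.log(0.2 / m_U)     # fix b so that Lambda_QCD ~ 0.2 GeV today

    def lambda_qcd(alpha_U):
        return m_U * math.exp(-b / alpha_U)

    for d_alpha in (0.0, -0.003, +0.008):
        ratio = lambda_qcd(alpha_U0 + d_alpha) / lambda_qcd(alpha_U0)
        print(f"delta alpha_U = {d_alpha:+.3f}: Lambda_QCD changes by a factor of {ratio:.2g}")

With these illustrative numbers, decreasing α_{U} by ~0.003 shrinks Λ_{QCD} (and so the proton mass) roughly twentyfold, and increasing it by ~0.008 inflates Λ_{QCD} by a few hundred, in line with the anthropic limits on GUT parameters discussed above.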
Suppose Bob sees Alice throw a dart and hit the bullseye. ‘Pretty impressive, don’t you think?’, says Alice. ‘Not at all’, says Bob, ‘the pointofimpact of the dart can be explained by the velocity with which the dart left your hand. No finetuning is needed.’ On the contrary, the finetuning of the point of impact (i.e. the smallness of the bullseye relative to the whole wall) is evidence for the finetuning of the initial velocity. This fallacy alone makes much of Chapters 7 to 10 of Foft irrelevant. The question of the finetuning of these more fundamental parameters is not even asked, making the whole discussion a cane toad solution. Stenger has given us no reason to think that the lifepermitting region is larger, or possibility space smaller, than has been calculated in the finetuning literature. The parameters of the standard model remain some of the best understood and most impressive cases of finetuning.
A number of authors have emphasised the lifepermitting properties of the particular combination of one time and three space dimensions, going back to Ehrenfest (1917) and Whitrow (1955), summarised in Barrow & Tipler (1986) and Tegmark (1997)^{32}. Figure 9 shows a summary of the constraints on the number of space and time dimensions. The number of space dimensions is one of Rees’ ‘Just Six Numbers’. Foft addresses the issue:
In response, we do not need to think of dimensionality as a property of objective reality. We just rephrase the claim: instead of ‘if space were not three dimensional, then life would not exist’, we claim ‘if whatever exists were not such that it is accurately described on macroscopic scales by a model with three space dimensions, then life would not exist’. This (admittedly inelegant) sentence makes no claims about the universe being really threedimensional. If ‘whatever works’ were four dimensional, then life would not exist, whether the number of dimensions is simply a human invention or an objective fact about the universe. We can still use the dimensionality of space in counterfactual statements about how the universe could have been. String theory is actually an excellent counterexample to Stenger’s claims. String theorists are not content to posit ten dimensions and leave it at that. They must compactify all but 3+1 of the dimensions for the theory to have a chance of describing our universe. This finetuning case refers to the number of macroscopic or ‘large’ space dimensions, which both string theory and classical physics agree to be three. The possible existence of small, compact dimensions is irrelevant. Finally, Stenger tells us (Foft 48) that ‘when a model has passed many risky tests ...we can begin to have confidence that it is telling us something about the real world with certainty approaching 100 percent’. One wonders how the idea that space has three (large) dimensions fails to meet this criterion. Stenger’s worry seems to be that the threedimensionality of space may not be a fundamental property of our universe, but rather an emergent one. Our model of space as a subset of ℝ^{3}^{33} may crumble into spacetime foam below the Planck length. But emergent does not imply subjective. Whatever the fundamental properties of spacetime are, it is an objective fact about physical reality — by Stenger’s own criterion — that in the appropriate limit space is accurately modelled by ℝ^{3}. The confusion of Stenger’s response is manifest in the sentence: ‘We choose three [dimensions] because it fits the data’ (Foft 51). This isn’t much of a choice. One is reminded of the man who, when asked why he chose to join the line for ‘nonhenpecked husbands’, answered, ‘because my wife told me to’. The universe will let you choose, for example, your unit of length. But you cannot decide that the macroscopic world has four space dimensions. It is a mathematical fact that in a universe with four spatial dimensions you could, with a judicious choice of axis, make a leftfooted shoe into a rightfooted one by rotating it. Our inability to perform such a transformation is not the result of physicists arbitrarily deciding that, in this spacetime model we’re inventing, space will have three dimensions.
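The mathematical fact about rotations in four spatial dimensions invoked above can be checked directly; the sketch below is a minimal numerical demonstration, with the ‘shoe’ standing in for any object confined to the three-dimensional slice w = 0.

    # A proper rotation in 4D that acts as a mirror reflection on the 3D slice w = 0.
    # Rotating by pi in the (z, w) plane sends (x, y, z, w) -> (x, y, -z, -w); for an
    # object embedded at w = 0 this is the reflection z -> -z, which exchanges left-
    # and right-handed objects, even though the 4D transformation is a pure rotation.
    import numpy as np

    R4 = np.diag([1.0, 1.0, -1.0, -1.0])   # rotation by pi in the (z, w) plane
    print(np.linalg.det(R4))               # +1.0: a proper rotation in four dimensions

    R3 = R4[:3, :3]                        # its action on the w = 0 slice
    print(np.linalg.det(R3))               # -1.0: a mirror reflection in three dimensions

A continuous rotation through the fourth dimension therefore carries an object into its mirror image, something no rigid motion confined to three dimensions can do.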
On Boxing Day, 2002, Powerball announced that Andrew J. Whittaker Jr. of West Virginia had won $314.9 million in their lottery. The odds of this event are 1 in 120 526 770. How could such an unlikely event occur? Should we accuse Mr Whittaker of cheating? Probably not, because a more likely explanation is that a great many different tickets were sold, increasing the chances that someone would win. The multiverse is just such an explanation. Perhaps there are more universes out there (in some sense), sufficiently numerous and varied that it is not too improbable that at least one of them would be in the lifepermitting subset of possiblephysicsspace. And, just as Powerball wouldn’t announce that ‘Joe Smith of Chicago didn’t win the lottery today’, so there is no one in the lifeprohibiting universes to wonder what went wrong. Stenger says (Foft 24) that he will not need to appeal to a multiverse in order to explain finetuning. He does, however, keep the multiverse close in case of emergencies.
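The ‘many tickets’ logic is easy to quantify. The single-ticket odds below are the figure quoted above; the numbers of tickets sold are made-up values, used only to illustrate how quickly an individually improbable event becomes likely for someone.

    # Probability that at least one of N independently chosen tickets wins,
    # given single-ticket odds of 1 in 120,526,770. N is purely illustrative.
    p = 1.0 / 120_526_770

    for N in (1, 10**7, 10**8, 10**9):
        p_at_least_one = 1.0 - (1.0 - p) ** N
        print(f"N = {N:>13,d} tickets: P(someone wins) = {p_at_least_one:.3f}")

One ticket almost certainly loses; a hundred million or so tickets make a winner unsurprising. The multiverse plays the role of the extra tickets in the finetuning case.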
Firstly, the difficulty in ruling out multiverses speaks to their unfalsifiability, rather than their steadfastness in the face of cosmological data. There is very little evidence, one way or the other. Moreover, there are plenty of reasons given in the scientific literature to be skeptical of the existence of a multiverse. Even their most enthusiastic advocate isn’t as certain about the existence of a multiverse as Stenger suggests. A multiverse is not part of nor a prediction of the concordance model of cosmology. It is the existence of small, adiabatic, nearlyscale invariant, Gaussian fluctuations in a verynearlyflat FLRW model (containing dark energy, dark matter, baryons and radiation) that is strongly suggested by the data. Inflation is one idea of how to explain this data. Some theories of inflation, such as chaotic inflation, predict that some of the properties of universes vary from place to place. Carr & Ellis (2008) write:
Stenger fails to distinguish between the concordance model of cosmology, which has excellent empirical support but in no way predicts a multiverse, and speculative models of the early universe, only some of which predict a multiverse, all of which rely on hypothetical physics, and none of which have unambiguous empirical support, if any at all.
What does it take to specify a multiverse? Following Ellis, Kirchner & Stoeger (2004), we need to specify the set of possible universes M, a distribution function f(m) describing how many universes of each type m are actually realised, and a measure π on the relevant parameter space with which probabilities can be defined.
We would also like to know the set of universes which allow the existence of conscious observers — the anthropic subset. As Ellis et al. (2004) point out, any such proposal will have to deal with the problems of what determines {M, f(m), π}, actualized infinities (in M, f(m) and the spatial extent of universes) and nonnormalisability, the parameter dependence and nonuniqueness of π, and how one could possibly observationally confirm any of these quantities. If some metalaw is proposed to physically generate a multiverse, then we need to postulate not just a.) that the metalaw holds in this universe, but b.) that it holds in some preexisting metaspace beyond our universe. There is no unambiguous evidence in favour of a.) for any multiverse, and b.) will surely forever hold the title of the most extreme extrapolation in all of science, if indeed it can be counted as part of science. We turn to this topic now.
Could a multiverse proposal ever be regarded as scientific? Foft 228 notes the similarity between undetectable universes and undetectable quarks, but the analogy is not a good one. The properties of quarks — mass, charge, spin, etc. — can be inferred from measurements. Quarks have a causal effect on particle accelerator measurements; if the quark model were wrong, we would know about it. In contrast, we cannot observe any of the properties of a multiverse {M, f(m), π}, as they have no causal effect on our universe. We could be completely wrong about everything we believe about these other universes and no observation could correct us. The information is not here. The history of science has repeatedly taught us that experimental testing is not an optional extra. The hypothesis that a multiverse actually exists will always be untestable. The most optimistic scenario is where a physical theory, which has been welltested in our universe, predicts a universegenerating mechanism. Even then, there would still be questions beyond the reach of observation, such as whether the necessary initial conditions for the generator hold in the metaspace, and whether there are modifications to the physical theory that arise at energy scales or on length scales relevant to the multiverse but beyond testing in our universe. Moreover, the process by which a new universe is spawned almost certainly cannot be observed.
One way of testing a particular multiverse proposal is the socalled principle of mediocrity. This is a selfconsistency test — it cannot pick out a unique multiverse as the ‘real’ multiverse — but can be quite powerful. We will present the principle using an illustration. Boltzmann (1895), having discussed the discovery that the second law of thermodynamics is statistical in nature, asks why the universe is currently so far from thermal equilibrium. Perhaps, Boltzmann says, the universe as a whole is in thermal equilibrium. From time to time, however, a random statistical fluctuation will produce a region which is far from equilibrium. Since life requires low entropy, it could only form in such regions. Thus, a randomly chosen region of the universe would almost certainly be in thermal equilibrium. But if one were to take a survey of all the intelligent life in such a universe, one would find them all scratching their heads at the surprisingly low entropy of their surroundings. It is a brilliant idea, and yet something is wrong^{34}. At most, life only needs a low entropy fluctuation a few tens of Mpc in size — cosmological structure simulations show that the rest of the universe has had virtually no effect on galaxy/star/planet/life formation where we are. And yet, we find ourselves in a low entropy region that is tens of thousands of Mpc in size, as far as our telescopes can see. Why is this a problem? Because the probability of a thermal fluctuation decreases exponentially with its volume. This means that a random observer is overwhelmingly likely to observe that they are in the smallest fluctuation able to support an observer. If one were to take a survey of all the life in the multiverse, an incredibly small fraction would observe that they are inside a fluctuation whose volume is at least a billion times larger than their existence requires. In fact, our survey would find vastly many more observers who were simply isolated brains that fluctuated into existence preloaded with false thoughts about being in a large fluctuation. It is more likely that we are wrong about the size of the universe, that the distant galaxies are just a mirage on the face of the thermal equilibrium around us. The Boltzmann multiverse is thus definitively ruled out.
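The force of this argument comes from how steeply the probability of a thermal fluctuation falls with its volume; the sketch below makes the comparison explicit. It is purely schematic: the suppression per unit volume is an arbitrary assumed number, since any positive value leads to the same conclusion.

    # Schematic comparison of fluctuation probabilities, P ~ exp(-c * V), for a
    # fluctuation a few tens of Mpc across versus one tens of thousands of Mpc across.
    import math

    R_small = 30.0        # Mpc, roughly what life needs
    R_large = 30_000.0    # Mpc, roughly what we observe
    volume_ratio = (R_large / R_small) ** 3
    print(f"volume ratio ~ 10^{math.log10(volume_ratio):.0f}")   # ~ 10^9

    # Relative probability, for an assumed suppression of 10^20 per small volume:
    log10_P_small = -1e20
    log10_P_large = log10_P_small * volume_ratio
    print(f"log10[P(large)/P(small)] ~ {log10_P_large - log10_P_small:.2g}")

Whatever the actual suppression per unit volume, observers inside minimal fluctuations outnumber observers inside fluctuations as large as our observable universe by a doubly exponential factor, which is the conclusion drawn above.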
Do more modern multiverse proposals escape the mediocrity test? Tegmark (2005) discusses what is known as the coolness problem, also known as the youngness paradox. Suppose that inflation is eternal, in the sense (Guth 2007) that the universe is always a mix of inflating and noninflating regions. In our universe, inflation ended 13.7 billion years ago and a period of matterdominated, decelerating expansion began. Meanwhile, other regions continued to inflate. Let’s freeze the whole multiverse now, and take our survey clipboard around to all parts of the multiverse. In the regions that are still inflating, there is almost no matter and so no life. So we need to look for life in the parts that have stopped inflating. Whenever we find an intelligent life form, we’ll ask how long ago their part of the universe stopped inflating. Since the temperature of a postinflation region is at its highest just as inflation ends and drops as the universe expands, we could equivalently ask: what is the temperature of the CMB in your universe? The results of this survey would be rather surprising: an extremely small fraction of lifepermitting universes are as old and cold as ours. Why? Because other parts of the universe continued to inflate after ours had stopped. These regions become exponentially larger, and thus nucleate exponentially more matterdominated regions, all of which are slightly younger and warmer than ours. There are two effects here: there are many more younger universes, but they will have had less time to make intelligent life. Which effect wins? Are there more intelligent observers who formed early in younger universes or later in older universes? It turns out that the exponential expansion of inflation wins rather comfortably. For every observer in a universe as old as ours, there are 10^{10^{38}} observers who live in a universe that is one second younger. The probability of observing a universe with a CMB temperature of 2.75 K or less is approximately 1 in 10^{10^{56}}. Alas! Is this the end of the inflationary multiverse as we know it? Not necessarily. The catch comes in the seemingly innocent word ‘now’. We are considering the multiverse at a particular time. But general relativity will not allow it — there is no unique way to specify ‘now’. We can’t just compare our universe with all the other universes in existence ‘now’. But we must be able to compare the properties of our universe with some subset of the multiverse — otherwise the multiverse proposal cannot make predictions. This is the ‘measure problem’ of cosmology, on which there is an extensive literature — Page (2011a) lists 70 scientific papers. As Linde & Noorbala (2010) explain, one of the main problems is that ‘in an eternally inflating universe the total volume occupied by all, even absolutely rare types of the ‘universes’, is indefinitely large’. We are thus faced with comparing infinities. In fact, even if inflation is not eternal and the universe is finite, the measure problem can still paralyse our analysis. The moral of the coolness problem is not that the inflationary multiverse has been falsified. Rather, it is this: no measure, no nothing. For a multiverse proposal to make predictions, it must be able to calculate and justify a measure over the set of universes it creates. The predictions of the inflationary multiverse are very sensitive to the measure, and thus in the absence of a measure, we cannot conclude that it survives the test of the principle of mediocrity.
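The doubly exponential numbers quoted above come from the exponential growth of the inflating volume. The back-of-envelope sketch below reproduces their order of magnitude, assuming (purely for illustration) an inflationary Hubble rate appropriate to a GUT-scale energy density.

    # Where a figure like 10^(10^38) comes from. During inflation the volume of an
    # inflating region grows as exp(3*H*t). For an energy density at the ~1e16 GeV
    # scale, the Hubble rate H is of order a few times 1e37 per second (an assumed,
    # illustrative value), so one extra second of inflation spawns enormously more
    # volume, and hence enormously more slightly younger post-inflation regions.
    import math

    H = 3.6e37                  # s^-1, assumed GUT-scale inflationary Hubble rate
    t = 1.0                     # one extra second of inflation
    log10_volume_factor = 3 * H * t / math.log(10)
    print(f"volume factor ~ 10^(10^{math.log10(log10_volume_factor):.0f})")   # ~ 10^(10^38)

The doubly exponential character of both figures quoted above is set entirely by this exp(3Ht) volume growth.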
A closer look at our island in parameter space reveals a refinement of the mediocrity test, as discussed by Aguirre (2007); see also Bousso, Hall & Nomura (2009). It is called the ‘principle of living dangerously’: if the prior probability for a parameter is a rapidly increasing (or decreasing) function, then we expect the observed value of the parameter to lie near the edge of the anthropically allowed range. One particular parameter for which this could be a problem is Q, as discussed in Section 4.5. Fixing other cosmological parameters, the anthropically allowed range is 10^{–6} ≲ Q ≲ 10^{–4}. The observed value (~10^{–5}) isn’t close to either edge of the anthropic range. This is a problem for inflationary multiverses, whose priors for Q either must be finetuned to peak near the observed value, or else are steep functions of Q across the anthropic range (Graesser et al. 2004; Feldstein, Hall & Watari 2005). The discovery of another lifepermitting island in parameter space potentially creates a problem for the multiverse. If the other island is significantly larger than ours (for a given multiverse measure), then observers should expect to be on the other island. An example is the cold big bang, as described by Aguirre (2001). Aguirre’s aim in the paper is to provide a counterexample to what he calls the anthropic program: ‘the computation of P [the probability that a randomly chosen observer measures a given set of cosmological parameters]; if this probability distribution has a single peak at a set [of parameters] and if these are near the measured values, then it could be claimed that the anthropic program has ‘explained’ the values of the parameters of our cosmology’. Aguirre’s concern is a lack of uniqueness. The cold big bang (CBB) is a model of the universe in which the (primordial) ratio of photons to baryons is η_{γ} ~ 1. To be a serious contender as a model of our universe (in which η_{γ} ~ 10^{9}), there would need to be an early population of luminous objects, e.g. Pop III stars. Nucleosynthesis generally proceeds further than in our universe, creating an approximately solar metallicity intergalactic medium along with a 25% helium mass fraction^{35}. Structure formation is not suppressed by CMB radiation pressure, and thus stars and galaxies require a smaller value of Q. How much of a problem is the cold big bang to a multiverse explanation of cosmological parameters? Particles and antiparticles pair off and mutually annihilate to photons as the universe cools, so the excess of particles over antiparticles determines the value of η_{γ}. We are thus again faced with the absence of a successful theory of baryogenesis and leptogenesis. It could be that small values of η_{γ}, which correspond to a larger baryon and lepton asymmetry, are very rare in the multiverse. Nevertheless, the conclusion of Aguirre (2001) seems sound: ‘[the CBB] should be discouraging for proponents of the anthropic program: it implies that it is quite important to know the [prior] probabilities P, which depend on poorly constrained models of the early universe’. Does the cold big bang imply that cosmology need not be finetuned to be lifepermitting? Aguirre (2001) claims that ξ(η_{γ} ~ 1, 10^{–11} < Q < 10^{–5}) ~ ξ(η_{γ} ~ 10^{9}, 10^{–6} < Q < 10^{–4}), where ξ is the number of solar mass stars per baryon. At best, this would show that there is a continuous lifepermitting region, stretching along the η_{γ} axis.
Various compensating factors are needed along the way — we need a smaller value of Q, which renders atomic cooling inefficient, so we must rely on molecular cooling, which requires higher densities and metallicities, but not too high or planetary orbits will be disrupted by collisions (whose frequency increases as η_{γ}^{–4}Q^{7/2}). Aguirre (2001) only considers the case η_{γ} ~ 1 in detail, so it is not clear whether the CBB island connects to the HBB island (10^{6} ≲ η_{γ} ≲ 10^{11}) investigated by Tegmark & Rees (1998). Either way, life does not have free run of parameter space.
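To see why a steep prior would put Q in tension with the principle of living dangerously discussed above, consider a toy calculation (the power-law prior is an illustrative assumption, not a prediction of any particular inflationary model). Take a prior p(Q) ∝ Q^{n} with n ≫ 1 over the anthropic window 10^{–6} ≲ Q ≲ 10^{–4}. The probability that an observer measures a value below the logarithmic midpoint of the window, Q < 10^{–5}, is then
\[ P(Q < 10^{-5}) = \frac{\int_{10^{-6}}^{10^{-5}} Q^{n}\,\mathrm{d}Q}{\int_{10^{-6}}^{10^{-4}} Q^{n}\,\mathrm{d}Q} \approx 10^{-(n+1)} , \]
which is negligible even for modestly steep priors: almost all observers would find Q pressed against the upper edge of the window. A value in the middle of the range, as we observe, is then evidence against such a prior.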
The spectre of the demise of Boltzmann’s multiverse haunts more modern cosmologies in two different ways. The first is the possibility of Boltzmann brains. We should be wary of any multiverse which allows single brains, imprinted with memories, to fluctuate into existence. The worry is that, for every observer who really is a carbonbased life form who evolved on a planet orbiting a star in a galaxy, there are vastly more for whom this is all a passing dream, the few, fleeting fancies of a phantom fluctuation. This could be a problem in our universe — if the current, accelerating phase of the universe persists arbitrarily into the future, then our universe will become vacuum dominated. Observers like us will die out, and eventually Boltzmann brains, dreaming that they are us, will outnumber us. The most serious problem is that, unlike biologically evolved life such as ourselves, Boltzmann brains do not require a finetuned universe. If we condition on observers, rather than biologically evolved life, then the multiverse may fail to predict a universe like ours. The multiverse would not explain why our universe is finetuned for biological life (R. Collins, forthcoming). Another argument against the multiverse is given by Penrose (2004, p. 763ff). As with the Boltzmann multiverse, the problem is that this universe seems uncomfortably roomy.
In other words, if we live in a multiverse generated by a process like chaotic inflation, then for every observer who observes a universe of our size, there are 10^{10^{123}} who observe a universe that is just 10 times smaller. This particular multiverse dies the same death as the Boltzmann multiverse. Penrose’s argument is based on the place of our universe in phase space, and is thus generic enough to apply to any multiverse proposal that creates more small universe domains than large ones. Most multiverse mechanisms seem to fall into this category.
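The scale of the numbers in Penrose’s argument can be sketched as follows; this is a back-of-the-envelope version of Penrose (2004), not a substitute for his calculation. The maximum entropy available to the matter of the observable universe is estimated by imagining it collapsed into a single black hole and applying the Bekenstein–Hawking formula (Bekenstein 1973; Hawking 1975),
\[ S_{\mathrm{BH}} = \frac{k_B c^3 A}{4 G \hbar} , \]
where A is the horizon area. For the ~10^{80} baryons of the observable universe this gives S_{max} ~ 10^{123} k_B, so the associated phase-space volume is of order e^{S_{max}/k_B}, and the fraction of it corresponding to initial conditions as smooth as ours is roughly one part in 10^{10^{123}}. Since the entropy, and hence the exponent, scales with the amount of matter involved, a domain only just large enough for observers is exponentially ‘cheaper’ than one as large as ours, which is why a generic measure favours the smaller domains so overwhelmingly.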
A multiverse generated by a simple underlying mechanism is a remarkably seductive idea. The mechanism would be an extrapolation of known physics, that is, physics with an impressive record of explaining observations from our universe. The extrapolation would be natural, almost inevitable. The universe as we know it would be a very small part of a much larger whole. Cosmology would explore the possibilities of particle physics; what we know as particle physics would be mere bylaws in an unimaginably vast and variegated cosmos. The multiverse would predict what we expect to observe by predicting what conditions hold in universes able to support observers. Sadly, most of this scenario is still hypothetical. The goal of this section has been to demonstrate the mountain that the multiverse is yet to climb, the challenges that it must face openly and honestly. The multiverse may yet solve the finetuning of the universe for intelligent life, but it will not be an easy solution. ‘Multiverse’ is not a magic word that will make all the finetuning go away. For a popular discussion of these issues, see Ellis (2011).
We conclude that the universe is finetuned for the existence of life. Of all the ways that the laws of nature, constants of physics and initial conditions of the universe could have been, only a very small subset permits the existence of intelligent life. Will future progress in fundamental physics solve the problem of the finetuning of the universe for intelligent life, without the need for a multiverse? There are a few ways that this could happen. We could discover that the set of lifepermitting universes is much larger than previously thought. This is unlikely, since the physics relevant to life is lowenergy physics, and thus wellunderstood. Physics at the Planck scale will not rewrite the standard model of particle physics. It is sometimes objected that we do not have an adequate definition of ‘an observer’, and we do not know all possible forms of life. This is reason for caution, but not a fatal flaw of finetuning. If the strong force were weaker, the periodic table would consist of only hydrogen. We do not need a rigorous definition of life to reasonably conclude that a universe with one chemical reaction (2H → H_{2}) would not be able to create and sustain the complexity necessary for life. Alternatively, we could discover that the set of possible universes is much smaller than we thought. This scenario is much more interesting. What if, when we really understand the laws of nature, we realise that they could not have been different? We must be clear about the claim being made. If the claim is that the laws of nature are fixed by logical and mathematical necessity, then this is demonstrably wrong — theoretical physicists find it rather easy to describe alternative universes that are free from logical contradiction (Davies 2003). The category of ‘physically possible’ isn’t much help either, as the laws of nature tell us what is physically possible, but not which laws are possible. It is not true that finetuning must eventually yield to the relentless march of science. Finetuning is not a typical scientific problem, that is, a phenomenon in our universe that cannot be explained by our current understanding of physical laws. It is not a gap. Rather, we are concerned with the physical laws themselves. In particular, the anthropic coincidences are not like, say, the coincidence between inertial mass and gravitational mass in Newtonian gravity, which is a coincidence between two seemingly independent physical quantities. Anthropic coincidences, on the other hand, involve a happy consonance between a physical quantity and the requirements of complex, embodied intelligent life. The anthropic coincidences are so arresting because we are accustomed to thinking of physical laws and initial conditions as being unconcerned with how things turn out. Physical laws are material and efficient causes, not final causes. There is, then, no reason to think that future progress in physics will render a lifepermitting universe inevitable. When physics is finished, when the equation is written on the blackboard and fundamental physics has gone as deep as it can go, finetuning may remain, basic and irreducible. Perhaps the most optimistic scenario is that we will eventually discover a simple, beautiful physical principle from which we can derive a unique physical theory, whose unique solution describes the universe as we know it, including the standard model, quantum gravity, and (dare we hope) the initial conditions of cosmology.
While this has been the dream of physicists for centuries, there is not the slightest bit of evidence that this idea is true. It is almost certainly not true of our best hope for a theory of quantum gravity, string theory, which has ‘anthropic principle written all over it’ (Schellekens 2008). The beauty of its principles has not saved us from the complexity and contingency of the solutions to its equations. Beauty and simplicity are not necessity. Finally, it would be the ultimate anthropic coincidence if beauty and complexity in the mathematical principles of the fundamental theory of physics produced all the necessary lowenergy conditions for intelligent life. This point has been made by a number of authors, e.g. Carr & Rees (1979) and Aguirre (2005). Here is Wilczek (2006b):
Adams, F. C., 2008, JCAP, 2008, 010
Agrawal, V., Barr, S. M., Donoghue, J. F. and Seckel, D., 1998a, PhRvL, 80, 1822
Agrawal, V., Barr, S. M., Donoghue, J. F. and Seckel, D., 1998b, PhRvD, 57, 5480
Aguirre, A., 1999, ApJ, 521, 17
Aguirre, A., 2001, PhRvD, 64, 083508
Aguirre, A., 2005, ArXiv: astro-ph/0506519
Aguirre, A., 2007, in Universe or Multiverse?, ed. B. J. Carr (Cambridge: Cambridge University Press), 367
Aitchison, I. & Hey, A., 2002, Gauge Theories in Particle Physics: Volume 1 — From Relativistic Quantum Mechanics to QED (3rd edition; New York: Taylor & Francis)
Arkani-Hamed, N. and Dimopoulos, S., 2005, JHEP, 2005, 073
Arkani-Hamed, N., Dimopoulos, S. & Kachru, S., 2005, ArXiv: hep-th/0501082
Barnes, L. A., Francis, M. J., Lewis, G. F. and Linder, E. V., 2005, PASA, 22, 315
Barr, S. M. and Khan, A., 2007, PhRvD, 76, 045002
Barrow, J. D. & Tipler, F. J., 1986, The Anthropic Cosmological Principle (Oxford: Clarendon Press)
Bekenstein, J. D., 1973, PhRvD, 7, 2333
Boltzmann, L., 1895, Natur, 51, 413
Bousso, R., 2008, GReGr, 40, 607
Bousso, R. and Leichenauer, S., 2009, PhRvD, 79, 063506
Bousso, R. and Leichenauer, S., 2010, PhRvD, 81, 063524
Bousso, R., Hall, L. and Nomura, Y., 2009, PhRvD, 80, 063510
Bradford, R. A. W., 2009, JApA, 30, 119
Brandenberger, R. H., 2011, ArXiv: astro-ph/1103.2271
Burgess, C. & Moore, G., 2006, The Standard Model: A Primer (Cambridge: Cambridge University Press)
Cahn, R., 1996, RvMP, 68, 951
Carr, B. J. and Ellis, G. F. R., 2008, A&G, 49, 2.29
Carr, B. J. and Rees, M. J., 1979, Natur, 278, 605
Carroll, S. M., 2001, LRR, 4, 1
Carroll, S. M., 2003, Spacetime and Geometry: An Introduction to General Relativity (San Francisco: Benjamin Cummings)
Carroll, S. M., 2008, SciAm, 298, 48
Carroll, S. M. & Tam, H., 2010, ArXiv: astro-ph/1007.1417
Carter, B., 1974, in IAU Symposium, Vol. 63, Confrontation of Cosmological Theories with Observational Data, ed. M. S. Longair (Boston: D. Reidel Pub. Co.), 291
Clavelli, L. & White, R. E., 2006, ArXiv: hep-ph/0609050
Cohen, B. L., 2008, PhTea, 46, 285
Collins, R., 2003, in The Teleological Argument and Modern Science, ed. N. Manson (London: Routledge), 178
Csótó, A., Oberhummer, H. and Schlattl, H., 2001, NuPhA, 688, 560
Damour, T. and Donoghue, J. F., 2008, PhRvD, 78, 014014
Davies, P. C. W., 1972, JPhA, 5, 1296
Davies, P., 2003, in God and Design: The Teleological Argument and Modern Science, ed. N. A. Manson (London: Routledge), 147
Davies, P. C. W., 2006, The Goldilocks Enigma: Why is the Universe Just Right for Life? (London: Allen Lane)
Davies, C. et al., 2004, PhRvL, 92
Dawkins, R., 1986, The Blind Watchmaker (New York: W. W. Norton & Company)
Dawkins, R., 2006, The God Delusion (New York: Houghton Mifflin Harcourt)
De Boer, W., 1994, PrPNP, 33, 201
De Boer, W. and Sander, C., 2004, PhLB, 585, 276
Donoghue, J. F., 2007, in Universe or Multiverse?, ed. B. J. Carr (Cambridge: Cambridge University Press), 231
Donoghue, J. F., Dutta, K., Ross, A. and Tegmark, M., 2010, PhRvD, 81
Dorling, J., 1970, AmJPh, 38, 539
Dürr, S. et al., 2008, Sci, 322, 1224
Durrer, R. and Maartens, R., 2007, GReGr, 40, 301
Dyson, F. J., 1971, SciAm, 225, 51
Earman, J., 2003, in Symmetries in Physics: Philosophical Reflections, ed. K. Brading & E. Castellani (Cambridge: Cambridge University Press), 140
Ehrenfest, P., 1917, Proc. Amsterdam Academy, 20, 200
Ekström, S., Coc, A., Descouvemont, P., Meynet, G., Olive, K. A., Uzan, J.-P. and Vangioni, E., 2010, A&A, 514, A62
Ellis, G. F. R., 1993, in The Anthropic Principle, ed. F. Bertola & U. Curi (Oxford: Oxford University Press), 27
Ellis, G. F. R., 2011, SciAm, 305, 38
Ellis, G. F. R., Kirchner, U. and Stoeger, W. R., 2004, MNRAS, 347, 921
Feldstein, B., Hall, L. and Watari, T., 2005, PhRvD, 72, 123506
Feldstein, B., Hall, L. and Watari, T., 2006, PhRvD, 74, 095011
Freeman, I. M., 1969, AmJPh, 37, 1222
Garriga, J. and Vilenkin, A., 2006, PThPS, 163, 245
Garriga, J., Livio, M. and Vilenkin, A., 1999, PhRvD, 61, 023503
Gasser, J. and Leutwyler, H., 1982, PhR, 87, 77
Gedalia, O., Jenkins, A. and Perez, G., 2011, PhRvD, 83
Gibbons, G. W. and Turok, N., 2008, PhRvD, 77, 063516
Gibbons, G. W., Hawking, S. W. and Stewart, J. M., 1987, NuPhB, 281, 736
Gingerich, O., 2008, in Fitness of the Cosmos for Life: Biochemistry and Fine-Tuning, ed. J. D. Barrow, S. C. Morris, S. J. Freeland & C. L. Harper (Cambridge: Cambridge University Press), 20
Gould, A., 2010, ArXiv: hep-ph/1011.2761
Graesser, M. L., Hsu, S. D. H., Jenkins, A. and Wise, M. B., 2004, PhLB, 600, 15
Greene, B., 2011, The Hidden Reality: Parallel Universes and the Deep Laws of the Cosmos (New York: Knopf)
Griffiths, D. J., 2008, Introduction to Elementary Particles (Weinheim: Wiley-VCH)
Gurevich, L., 1971, PhLA, 35, 201
Guth, A. H., 1981, PhRvD, 23, 347
Guth, A. H., 2007, JPhA, 40, 6811
Hall, L. and Nomura, Y., 2008, PhRvD, 78, 035001
Hall, L. and Nomura, Y., 2010, JHEP, 2010, 76
Harnik, R., Kribs, G. and Perez, G., 2006, PhRvD, 74, 035006
Harrison, E. R., 1970, PhRvD, 1, 2726
Harrison, E. R., 2003, Masks of the Universe (2nd edition; Cambridge: Cambridge University Press)
Hartle, J. B., 2003, Gravity: An Introduction to Einstein's General Relativity (San Francisco: Addison Wesley)
Hawking, S. W., 1975, CMaPh, 43, 199
Hawking, S. W., 1988, A Brief History of Time (Toronto: Bantam)
Hawking, S. W. & Mlodinow, L., 2010, The Grand Design (Toronto: Bantam)
Hawking, S. W. and Page, D. N., 1988, NuPhB, 298, 789
Healey, R., 2007, Gauging What's Real: The Conceptual Foundations of Gauge Theories (New York: Oxford University Press)
Hogan, C. J., 2000, RvMP, 72, 1149
Hogan, C. J., 2006, PhRvD, 74, 123514
Hogan, C. J., 2007, in Universe or Multiverse?, ed. B. J. Carr (Cambridge: Cambridge University Press), 221
Hollands, S. & Wald, R. M., 2002a, ArXiv: hep-th/0210001
Hollands, S. and Wald, R. M., 2002b, GReGr, 34, 2043
Iwasaki, Y., 2000, PThPS, 138, 1
Jaffe, R., Jenkins, A. and Kimchi, I., 2009, PhRvD, 79, 065014
Jeltema, T. and Sher, M., 1999, PhRvD, 61, 017301
Kaku, M., 1993, Quantum Field Theory: A Modern Introduction (New York: Oxford University Press)
King, R. A., Siddiqi, A., Allen, W. D. and Schaefer, H. F. I., 2010, PhRvA, 81, 042523
Kofman, L., Linde, A. and Mukhanov, V., 2002, JHEP, 2002, 057
Kostelecký, V. and Russell, N., 2011, RvMP, 83, 11
Laiho, J., 2011, ArXiv: hep-ph/1106.0457
Leslie, J., 1989, Universes (London: Routledge)
Liddle, A., 1995, PhRvD, 51, R5347
Lieb, E. and Yau, H.-T., 1988, PhRvL, 61, 1695
Linde, A., 2008, in Lecture Notes in Physics, Vol. 738, Inflationary Cosmology, ed. M. Lemoine, J. Martin & P. Peter (Berlin, Heidelberg: Springer), 1
Linde, A. and Noorbala, M., 2010, JCAP, 2010, 8
Linde, A. & Vanchurin, V., 2010, ArXiv: hep-th/1011.0119
Livio, M., Hollowell, D., Weiss, A. and Truran, J. W., 1989, Natur, 340, 281
Lynden-Bell, D., 1969, Natur, 223, 690
MacDonald, J. and Mullan, D. J., 2009, PhRvD, 80, 043507
Martin, S. P., 1998, in Perspectives on Supersymmetry, ed. G. L. Kane (Singapore: World Scientific Publishing), 1
Martin, C. A., 2003, in Symmetries in Physics: Philosophical Reflections, ed. K. Brading & E. Castellani (Cambridge: Cambridge University Press), 29
Misner, C. W., Thorne, K. S. & Wheeler, J. A., 1973, Gravitation (San Francisco: W. H. Freeman and Co)
Mo, H., van den Bosch, F. C. & White, S. D. M., 2010, Galaxy Formation and Evolution (Cambridge: Cambridge University Press)
Nagashima, Y., 2010, Elementary Particle Physics: Volume 1: Quantum Field Theory and Particles (Wiley-VCH)
Nakamura, K., 2010, JPhG, 37, 075021
Norton, J. D., 1995, Erkenntnis, 42, 223
Oberhummer, H., 2001, NuPhA, 689, 269
Oberhummer, H., Pichler, R. & Csótó, A., 1998, ArXiv: nucl-th/9810057
Oberhummer, H., Csótó, A. & Schlattl, H., 2000a, in The Future of the Universe and the Future of Our Civilization, ed. V. Burdyuzha & G. Khozin (Singapore: World Scientific Publishing), 197
Oberhummer, H., Csótó, A. and Schlattl, H., 2000b, Sci, 289, 88
Padmanabhan, T., 2007, GReGr, 40, 529
Page, D. N., 2011a, JCAP, 2011, 031
Page, D. N., 2011b, ArXiv eprints: 1101.2444
Peacock, J. A., 1999, Cosmological Physics (Cambridge: Cambridge University Press)
Peacock, J. A., 2007, MNRAS, 379, 1067
Penrose, R., 1959, MPCPS, 55, 137
Penrose, R., 1979, in General Relativity: An Einstein Centenary Survey, ed. S. W. Hawking & W. Israel (Cambridge: Cambridge University Press), 581
Penrose, R., 1989, NYASA, 571, 249
Penrose, R., 2004, The Road to Reality: A Complete Guide to the Laws of the Universe (London: Vintage)
Phillips, A. C., 1999, The Physics of Stars (2nd edition; Chichester: Wiley)
Pogosian, L. and Vilenkin, A., 2007, JCAP, 2007, 025
Pokorski, S., 2000, Gauge Field Theories (Cambridge: Cambridge University Press)
Polchinski, J., 2006, ArXiv: hep-th/0603249
Polkinghorne, J. C. & Beale, N., 2009, Questions of Truth: Fifty-One Responses to Questions about God, Science, and Belief (Louisville: Westminster John Knox Press)
Pospelov, M. and Romalis, M., 2004, PhT, 57, 40
Price, H., 1997, in Time's Arrows Today: Recent Physical and Philosophical Work on the Direction of Time, ed. S. F. Savitt (Cambridge: Cambridge University Press), 66
Price, H., 2006, in Time and Matter – Proceedings of the International Colloquium on the Science of Time, ed. I. I. Bigi (Singapore: World Scientific Publishing), 209
Redfern, M., 2006, The Anthropic Universe, ABC Radio National, available at http://www.abc.net.au/rn/scienceshow/stories/2006/1572643.htm
Rees, M. J., 1999, Just Six Numbers: The Deep Forces that Shape the Universe (New York: Basic Books)
Sakharov, A. D., 1967, JETPL, 5, 24
Schellekens, A. N., 2008, RPPh, 71, 072201
Schlattl, H., Heger, A., Oberhummer, H., Rauscher, T. and Csótó, A., 2004, ApSS, 291, 27
Schmidt, M., 1963, Natur, 197, 1040
Schrödinger, E., 1992, What Is Life? (Cambridge: Cambridge University Press)
Shaw, D. and Barrow, J. D., 2011, PhRvD, 83
Smolin, L., 2007, in Universe or Multiverse?, ed. B. Carr (Cambridge: Cambridge University Press), 323
Steinhardt, P. J., 2011, SciAm, 304, 36
Strocchi, F., 2007, Symmetry Breaking (Berlin, Heidelberg: Springer)
Susskind, L., 2003, ArXiv: hep-th/0302219
Susskind, L., 2005, The Cosmic Landscape: String Theory and the Illusion of Intelligent Design (New York: Little, Brown and Company)
Taubes, G., 2002, Interview with Lisa Randall, ESI Special Topics, available at http://www.esitopics.com/brane/interviews/DrLisaRandall.html
Tegmark, M., 1997, CQGra, 14, L69
Tegmark, M., 1998, AnPhy, 270, 1
Tegmark, M., 2005, JCAP, 2005, 001
Tegmark, M. and Rees, M. J., 1998, ApJ, 499, 526
Tegmark, M., Vilenkin, A. and Pogosian, L., 2005, PhRvD, 71, 103523
Tegmark, M., Aguirre, A., Rees, M. J. and Wilczek, F., 2006, PhRvD, 73, 023505
Turok, N., 2002, CQGra, 19, 3449
Vachaspati, T. and Trodden, M., 1999, PhRvD, 61, 023502
Vilenkin, A., 2003, in Astronomy, Cosmology and Fundamental Physics, ed. P. Shaver, L. Dilella & A. Giméne (Berlin: Springer Verlag), 70
Vilenkin, A., 2006, ArXiv eprints: hep-th/0610051
Vilenkin, A., 2010, JPhCS, 203, 012001
Weinberg, S., 1989, RvMP, 61, 1
Weinberg, S., 1994, SciAm, 271, 44
Weinberg, S., 2007, in Universe or Multiverse?, ed. B. J. Carr (Cambridge: Cambridge University Press), 29
Wheeler, J. A., 1996, At Home in the Universe (New York: AIP Press)
Whitrow, G. J., 1955, BrJPhilosSci, VI, 13
Wilczek, F., 1997, in Critical Dialogues in Cosmology, ed. N. Turok (Singapore: World Scientific Publishing), 571
Wilczek, F., 2002, ArXiv: hep-ph/0201222
Wilczek, F., 2005, PhT, 58, 12
Wilczek, F., 2006a, PhT, 59, 10
Wilczek, F., 2006b, PhT, 59, 10
Wilczek, F., 2007, in Universe or Multiverse?, ed. B. J. Carr (Cambridge: Cambridge University Press), 43
Zel'dovich, Y. B., 1964, SPhD, 9, 195
Zel'dovich, Y. B., 1972, MNRAS, 160, 1P
^{1} We may wish to stipulate that a given observer by definition only observes one universe. Such finer points will not affect our discussion.
^{2} The counterargument presented in Stenger’s book (page 252), borrowing from a paper by Ikeda and Jeffreys, does not address this possibility. Rather, it argues against a deity which intervenes to sustain life in this universe. I have discussed this elsewhere: ikedajeff.notlong.com
^{3} Viz Top Tip: http://www.viz.co.uk/toptips.html
^{4} Hereafter, ‘Foft x’ will refer to page x of Stenger’s book.
^{5} References: Barrow & Tipler (1986), Carr & Rees (1979), Carter (1974), Davies (2006), Dawkins (2006), Redfern (2006) for Deutsch’s view on finetuning, Ellis (1993), Greene (2011), Guth (2007), Harrison (2003), Hawking & Mlodinow (2010, p. 161), Linde (2008), Page (2011b), Penrose (2004, p. 758), Polkinghorne & Beale (2009), Rees (1999), Smolin (2007), Susskind (2005), Tegmark et al. (2006), Vilenkin (2006), Weinberg (1994) and Wheeler (1996).
^{6} Note that it isn’t just that the rod appears to be shorter. Length contraction in special relativity is not just an optical illusion resulting from the finite speed of light. See, for example, Penrose (1959).
^{7} That is, the spacetime of a nonrotating, uncharged black hole.
^{8} See also the excellent articles by Martin (2003) and Earman (2003).
^{9} This may not be as clearcut a disaster as is often asserted in the finetuning literature, going back to Dyson (1971). MacDonald & Mullan (2009) and Bradford (2009) have shown that the binding of the diproton is not sufficient to burn all the hydrogen to helium in big bang nucleosynthesis.
For example, MacDonald & Mullan (2009) show that while an increase in the strength of the strong force by 13% will bind the diproton, a ~50% increase is needed to significantly affect the amount of hydrogen left over for stars. Also, Collins (2003) has noted that the decay of the diproton will happen too slowly for the resulting deuteron to be converted into helium, leaving at least some deuterium to power stars and take the place of hydrogen in organic compounds. Finally, with regard to stars, Phillips (1999, p. 118) notes that: ‘It is sometimes suggested that the timescale for hydrogen burning would be shorter if it were initiated by an electromagnetic reaction instead of the weak nuclear reaction [as would be the case if the diproton were bound]. This is not the case, because the overall rate for hydrogen burning is determined by the rate at which energy can escape from the star, i.e. by its opacity. If hydrogen burning were initiated by an electromagnetic reaction, this reaction would proceed at about the same rate as the weak reaction, but at a lower temperature and density.’ However, stars in such a universe would be significantly different to our own, and detailed predictions for their formation and evolution have not been investigated.
^{10} Note that this is independent of x_{max} and y_{max}, and in particular holds in the limit x_{max}, y_{max} → ∞.
^{11} This requirement is set by the homogeneity of our universe. Regions that transition early will expand and dilute, and so for the entire universe to be homogeneous to within Q ≈ 10^{–5}, the regions must begin their classical phase within Δt ≈ Qt.
^{12} This seems very unlikely. Regions of the universe which have collapsed and virialised have decoupled from the overall expansion of the universe, and so would have no way of knowing exactly when the expansion stalled and reversed. However, as Price (1997) lucidly explains, such arguments risk invoking a double standard, as they work just as well when applied backwards in time.
^{13} Carroll has raised this objection to Stenger (Foft 142), whose reply was to point out that the arrow of time always points away from the lowest entropy point, so we can always call that point the beginning of the universe. Once again, Stenger fails to understand the problem. The question is not why the low entropy state was at the beginning of the universe, but why the universe was ever in a low entropy state. The second law of thermodynamics tells us that the most probable world is one in which the entropy is always high. This is precisely what entropy quantifies. See Price (1997, 2006) for an excellent discussion of these issues.
^{14} These requirements can be found in any good cosmology textbook, e.g. Peacock (1999); Mo, van den Bosch & White (2010).
^{15} See also the discussion in Kofman, Linde & Mukhanov (2002) and Hollands & Wald (2002a).
^{16} Cosmic phase transitions are irreversible in the same sense that scrambling an egg is irreversible. The time asymmetry is a consequence of low entropy initial conditions, not the physics itself (Penrose 1989; Hollands & Wald 2002a).
^{17} We should also note that Carroll & Tam (2010) argue that the Gibbons-Hawking-Stewart canonical measure renders an inflationary solution to the flatness problem superfluous. This is a puzzling result — it would seem to show that nonflat FLRW universes are infinitely unlikely, so to speak. This result has been noted before. See Gibbons & Turok (2008) for a different point of view.
^{18} We use the Hubble constant to specify the particular time being considered.
^{19} The ArXiv version of this paper (arxiv.org/abs/1112.4647) includes an appendix that gives further critique of Stenger’s discussion of cosmology.
^{20} http://TegRees.notlong.com
^{21} Stenger’s Equation 12.22 is incorrect, or at least misleading. By the third Friedmann equation, dρ/dt = –3(ȧ/a)(ρ + p) = –3(ȧ/a)ρ(1 + w), one cannot stipulate that the density ρ is constant unless one sets w = –1. Equation 12.22 is thus only valid for w = –1, in which case it reduces to Equation 12.21 and is indistinguishable from a cosmological constant. One can solve the Friedmann equations for w ≠ –1; for example, if the universe contains only quintessence, is spatially flat and w is constant, then a(t) = (t/t_{0})^{2/[3(1+w)]}, where t_{0} is the age of the universe.
^{22} Some of this section follows the excellent discussion by Polchinski (2006).
^{23} More precisely, to use the area element in Figure 5 as the probability measure, one is assuming a probability distribution that is linear in log_{10} G and log_{10} α. There is, of course, no problem in using logarithmic axes to illustrate the lifepermitting region.
^{24} Hoyle’s prediction is not an ‘anthropic prediction’. As Smolin (2007) explains, the prediction can be formulated as follows: a.) Carbon is necessary for life. b.) There are substantial amounts of carbon in our universe. c.) If stars are to produce substantial amounts of carbon, then there must be a specific resonance level in carbon. d.) Thus, the specific resonance level in carbon exists. The conclusion does not depend in any way on the first, ‘anthropic’ premise. The argument would work just as well if the element in question were the inert gas neon, for which the first premise is (probably) false.
^{25} See also Oberhummer, Pichler & Csótó (1998); Oberhummer, Csótó & Schlattl (2000b); Csótó, Oberhummer & Schlattl (2001); Oberhummer (2001).
^{26} In the left plot, we hold m_{p} constant, so we vary β = m_{e}/m_{p} by varying the electron mass.
^{27} As with the stability of the diproton, there is a caveat. Weinberg (2007) notes that if the pp reaction p^{+} + p^{+} → ^{2}H + e^{+} + ν_{e} is rendered energetically unfavourable by changing the fundamental masses, then the reaction p^{+} + e^{–} + p^{+} → ^{2}H + ν_{e} will still be favourable so long as m_{d} – m_{u} – m_{e} < 3.4 MeV. This is a weaker condition. Note, however, that the pep reaction is 400 times less likely to occur in our universe than pp, meaning that pep stars must burn hotter. Such stars have not been simulated in the literature. Note also that the full effect of an unstable deuteron on stars and their formation has not been calculated. Primordial helium burning may create enough carbon, nitrogen and oxygen to allow the CNO cycle to burn hydrogen in later generation stars.
^{28} Even this limit should be noted with caution, as it holds for constant . As appears to depend on α, the corresponding limit on α may be a different plane to the one shown in Figure 6.
^{29} In the absence of weak decay, the weakless universe will conserve each individual quark number.
^{30} The most charitable reading of Stenger’s claim is that he is referring to the constituent quark model, wherein the massenergy of the cloud of virtual quarks and gluons that surround a valence quark in a composite particle is assigned to the quark itself. In this model, the quarks have masses of ~300 MeV.
The constituent quark model is a nonrelativistic phenomenological model which provides a simple approximation to the more fundamental but more difficult theory (QCD) that is useful at low energies. It is completely irrelevant to the cases of finetuning in the literature concerning quark masses (e.g. Agrawal et al. 1998a; Hogan 2000; Barr & Khan 2007), all of which discuss the bare (or current) quark masses. In fact, even a charge of irrelevance is too charitable — Stenger later quotes the quark masses as ~5 MeV, which is the current quark mass.
^{31} A few caveats. This estimate assumes that this small change in α_{U} will not significantly change α. The dependence seems to be flatter than linear, so this assumption appears to hold. Also, be careful in applying the limits on β in Figure 6 to the proton mass, as where appropriate only the electron mass was varied. For example, Region 1 depends on the proton-neutron mass difference, which doesn’t change with Λ_{QCD} and thus does not place a constraint on α_{U}.
^{32} See also Freeman (1969); Dorling (1970); Gurevich (1971), and the popular-level discussion in Hawking (1988, p. 180).
^{33} Or perhaps Euclidean space, or Minkowskian spacetime.
^{34} Actually, there are several things wrong, not least that such a scenario is unstable to gravitational collapse.
^{35} Stenger states that ‘[t]he cold big bang model shows that we don’t necessarily need the Hoyle resonance, or even significant stellar nucleosynthesis, for life’. It shows nothing of the sort. The CBB does not alter nuclear physics and thus still relies on the triple-α process to create carbon in the early universe; see the more detailed discussion of CBB nucleosynthesis in Aguirre (1999, p. 22). Further, the CBB does not negate the need for long-lived, nuclear-fueled stars as an energy source for planetary life. Aguirre (2001) is thus justifiably eager to demonstrate that stars will plausibly form in a CBB universe.