Misapprehensions About the Fine-Tuning Argument

John Hawthorne and Yoaav Isaacs

Introduction: The fine-tuning argument purports to provide evidence––substantial evidence, even––for the existence of God. We think that the fine-tuning argument does exactly what it purports to do. This is not to say we think that the fine-tuning argument establishes the existence of God, makes atheism irrational, or anything like that. The epistemic status of theism depends not only on the status of the fine-tuning argument, but also on the status of just about every other argument in the philosophy of religion. The fine-tuning argument does not accomplish everything, but it does accomplish something (and that's not bad at all for a philosophical argument). The fine-tuning argument is legitimate, yet there are numerous doubts about its legitimacy. There are various misgivings about the fine-tuning argument which are based on misunderstandings. In this paper we will go over several major misapprehensions (from both popular and philosophical sources), and explain why they do not undermine the basic cogency of the fine-tuning argument.

Overview: The standard model of physics presents a theory of the electromagnetic, weak, and strong forces, and a classification of all known elementary particles. The standard model specifies numerous physical laws, but that's not all it does. According to the standard model there are roughly two dozen dimensionless constants that characterize fundamental physical quantities. (A dimensionless quantity is not measured in units and thus is not unit-relative. Height, by contrast, is measured in units (inches, centimeters, and so on) and thus is unit-relative. There is, therefore, nothing particularly deep about someone being exactly one unit of height tall according to some popular system of measurement. Literally everyone is exactly one unit of height tall according to some system of measurement, and there's nothing deep about the difference between popular systems of measurement and unpopular systems of measurement. But the ratio of the mass of the proton to the mass of the electron is not measured in units and thus is not unit-relative. It would be deep if that ratio were exactly one; that would mean that protons and electrons had the same mass.) (Here "fundamental" means something like "non-derived". What is derived from what is obviously theory-dependent, and thus need not reflect metaphysical priority. For example, the ratio of the mass of the proton to the mass of the electron is no metaphysically deeper than the ratio of the mass of the electron to the mass of the proton. It was a matter of convention which ratio made it into the standard model.) Dimensionless constants specify the energy density of the vacuum, the masses of the fundamental particles, and many other things which we won't even pretend to understand properly. The point is just that our best understanding of physics doesn't only involve the simple, elegant formulae that are taught in high school physics classes and the complicated, elegant formulae that are taught in college physics classes. Our best understanding of physics also involves the specification of certain numerical quantities. Physicists have determined the (approximate) values of the fundamental constants by measurement. (There's no way to derive the values of the fundamental constants from other aspects of the standard model. Any quantities that could be so derived wouldn't be fundamental.)
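For concreteness, two standard examples of such dimensionless constants (our illustration; the text itself singles out only the first) are the proton-electron mass ratio and the fine-structure constant:

$$ \frac{m_p}{m_e} \approx 1836.15, \qquad \alpha = \frac{e^{2}}{4\pi\varepsilon_{0}\hbar c} \approx \frac{1}{137.04} $$

Both values come out the same in every system of units, which is why questions about why they take the values they do are not artifacts of any choice of measurement convention.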
Still, the underlying theory favored some sorts of parameter-values over others. A Wilsonian analysis of effective field theory (for more about such physics, see Weinberg (1989)) gave physicists a well-defined sense of what sorts of parameter-values one could expect a priori in a universe which has the sorts of general laws that our universe does. As is customary, we individuate parameter-values somewhat coarsely to avoid triviality. Since parameter-values can vary continuously, nearly any maximally specific parameter-value must have prior probability 0. Of course, we don't actually know the maximally specific value of any parameter, and it's easy enough to divvy possible parameter-values into equivalence classes according to their observational consequences. There's an analogous sense in which someone who is a little over 7' 9'' has a stranger height than someone who is a little under 5' 11'', even if all maximally specific heights have probability 0. The probability of the former height plus-or-minus a nanometer and the probability of the latter height plus-or-minus a nanometer are each non-zero, making comparison unproblematic. Physicists made the startling discovery that––given antecedently plausible assumptions about the nature of the physical world––the probability that a universe with general laws like ours would be habitable was staggeringly low. Thus these antecedently plausible assumptions were called into question. Antecedently implausible hypotheses that afforded modest probability to this evidence from physics were massively confirmed relative to the antecedently plausible hypotheses that afforded minuscule probability to this evidence from physics. One such antecedently implausible hypothesis is that an enormous multiplicity of universes with different physical laws, a multiverse, exists. Another such antecedently implausible hypothesis is that God designed the laws of physics so as to allow for life. (Our point is that this sort of divine artifice of the laws of physics was antecedently implausible, and not that the mere existence of God was antecedently implausible.) Most simply, the fine-tuning argument maintains that these facts of physics are likelier given theism than given atheism and thus that these facts of physics count as evidence for theism and against atheism. It seems plausible that the mere existence of life is a bit more probable given theism than given atheism, and thus that the mere existence of life constitutes a bit of evidence for theism. But if the fine-tuning argument is legitimate (and it is) further facts about physics constitute substantial further evidence for theism.
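This evidential claim has a standard Bayesian formulation (our gloss, not the authors' notation). Writing T for theism, A for atheism, and E for the relevant facts from physics, the odds form of Bayes' theorem gives:

$$ \frac{P(T \mid E)}{P(A \mid E)} \;=\; \frac{P(T)}{P(A)} \times \frac{P(E \mid T)}{P(E \mid A)} $$

If the likelihood ratio P(E|T)/P(E|A) is much greater than 1, then E shifts the odds toward theism no matter where the prior odds stood. That is all the argument, as stated here, claims.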
There's much to be said about the details of the fine-tuning argument. We've even said some of it. For more on the epistemological details see Hawthorne and Isaacs (forthcoming), and for more on the details of the underlying physics see Hawthorne, Isaacs, and Wall (manuscript in progress). But there is also a variety of dismissive objections to the fine-tuning argument that should be addressed. And so we shall.

Pessimistic Induction: One popular dismissal of the fine-tuning argument relates only tangentially to the fine-tuning argument. The thought is that you don't need to take the fine-tuning argument all that seriously; there is already a heap of bad empirical arguments for the existence of God, so you can be confident that fine-tuning is just another argument that belongs on the heap. You've seen this same story play out before––is it really necessary to pay all that much attention to its latest iteration? For example, Herman Philipse writes, "[T]heists of the past . . . argued that many specific natural phenomena yield a strong confirmation of their theory, since theism allegedly provides the best, or even the only possible, explanation of these phenomena. But the history of science taught many contemporary theists that it is too risky to appeal to particular empirical phenomena in support of theism. In countless cases, scientists or scholars came up with more precise and detailed explanations of the phenomena, so that religious explanations were massively superseded. Should we not conclude by a pessimistic induction that this is always likely to happen, or that it is at least a real possibility?" Philipse (2012) He continues, "Theism will risk being disconfirmed as soon as a good scientific or scholarly explanation has been found. As the numerous historical examples show, such explanations are more empirically adequate than theological explanations, so that by now theistic explanations of specific phenomena have been abandoned massively." Philipse (2012) The sad history of empirical arguments for theism certainly looks unpromising for theism. People keep coming up with empirical arguments for theism, and science keeps showing that the arguments don't work. But this whole manner of presentation is tendentious. Consider the sad history of naturalism, which looks comparably unpromising for atheism. People keep trying to provide naturalistic explanations for everything, and no matter how much they change their theories they're never able to get everything right. The exact same pattern that looks bad for theism given one framing looks bad for atheism given a different framing. This shouldn't be surprising. A pessimistic induction (readers are perhaps more familiar with pessimistic induction as an argument against scientific realism; we note that our contentions about fine-tuning do not presuppose realism about contemporary scientific theories, but only confidence about some of the standard model's empirical predictions; for more about the pessimistic induction against scientific realism see Lange (2002)) is, after all, a kind of induction, and induction is dangerously frame-dependent. All the emeralds we've seen so far have been green; naive induction would thus suggest that emeralds will be nice and green in the future. All the emeralds we've seen so far have been grue; naive induction would thus suggest that emeralds will be nice and grue in the future. But since emeralds can't stay both green and grue forever, we just plain have to give up naive inductive inferences and instead actually think through what we expect to happen to emeralds. See Goodman (1955). Now in the case of emeralds, it does seem that we have a nice asymmetry. Green is a far more natural property than grue. See Lewis (1983). But we don't think that there's so stark a disparity between the two stories we told: one in which science keeps being able to figure out more and more and another in which science keeps not being able to figure out everything. The general idea of the pessimistic induction is that empirical arguments for God have been bad thus far, so there's reason to discount any novel empirical arguments for God. The pessimistic induction is a fairly simple bad company argument. Yet it is not entirely clear that the company is so bad.
There are a number of empirical arguments for the existence of God that seem to be worth taking seriously. The argument from consciousness seems very striking (and strangely under-discussed). And historical arguments are obviously legitimate––the mere fact that a religion exists is almost guaranteed to be some evidence that the religion is true. (We assume that these historical arguments do not provide compensatory evidence against theism itself.) These arguments are not failures in the sense of providing no reason to believe in theism. At most, these arguments are failures only in the sense of not providing sufficient reason to warrant belief. But it would be very strange to use that sort of failure for a pessimistic induction. (If reasons to believe in theism were piling up argument after argument then atheism would be in dire straits!) More importantly, bad company arguments are weak. The company that an argument keeps (whatever that means) need not be relevant to it at all. You cannot undermine a hypothesis by making myriad bad arguments for it. You can put the arguments for any hypothesis into bad company, but concocting bad company for an argument accomplishes nothing. The important, the proper thing is to evaluate each argument on its merits. An evaluation of an argument's company might make for a passable evaluative stop-gap, but it is wrongheaded for serious scholarship to focus on an argument's company instead of on the argument itself. There's some room for reasoning by indirection. But it's ultimately important to let each matter speak for itself. Suppose there's a student who is notoriously bad at arithmetic. You could reasonably expect the student's arithmetic assertions to be false. Nonetheless, resolute confidence that some particular arithmetic assertion of his is false should be based on an understanding of the math itself, and not merely on a low opinion of the person doing the math. Note that this pessimistic induction would look embarrassingly foolish in the face of a particularly awesome theistic argument. If we discovered the opening of the Gospel of John written onto the interior of every atom (if you protest that it doesn't make sense to have something written on the interior of an atom, we would remind you that this thought experiment involves physics working rather differently than we anticipated), it would be outlandish to remain nonchalant on the grounds that a naturalistic explanation for the writing would soon be forthcoming. If the Gospel of John were written onto the interior of every atom, atheism would look rather shaky. Richard Dawkins, an exceedingly staunch atheist, conceded that "[a]lthough atheism may have been logically possible before Darwin, Darwin made it possible to be an intellectually fulfilled atheist" Dawkins (1986). And the argument from atomic inscriptions would be rather more forceful than the argument from biological design ever was. And there's no reason to think that a pessimistic induction debunks a good theistic argument if it's an obviously inadequate rejoinder to a particularly awesome theistic argument.

God-of-the-Gaps: There is this phrase, "God-of-the-gaps", which often comes up in contemporary arguments about religion, especially those connected to science. The clearest thing about the phrase is that it's pejorative. You're meant to accuse other people of using God-of-the-gaps reasoning, and to angrily deny that you're using it yourself. But the actual meaning of the phrase is far from clear.
The idea of the God-of-the-gaps seems to be used to advance two very different sorts of criticisms of theistic argumentation. The first use of the God-of-the-gaps accusation is made by theists against other theists. In particular, the accusation is made by theists who think that there is something wrong with making empirical arguments for theism. Specifically, the worry is that if one takes some empirical argument for the existence of God seriously and considers God to feature in the best explanation for that empirical phenomenon, then when one comes across a better naturalistic explanation for the phenomenon the explanatory role for God will shrink, and that will be bad. The thought is that as we know more and more there will be less and less for God to do, until eventually there is nothing at all. Dietrich Bonhoeffer follows exactly this line of thinking: "[H]ow wrong it is to use God as a stop-gap for the incompleteness of our knowledge. If in fact the frontiers of knowledge are being pushed further and further back (and that is bound to be the case), then God is being pushed back with them, and is therefore continually in retreat." Bonhoeffer (1997) This sort of thinking is silly. Notably, it is based on some sort of putative clairvoyance. How can Bonhoeffer or anyone else presuppose that science will eventually figure everything out? We certainly grant that it's possible that it will, but why should a theist take for granted that everything we come across in the world will make perfect sense with or without God? Science can account for more now than it could some hundreds of years ago. But this is not surprising––we didn't forget what we figured out some hundreds of years ago, and we've figured out some more things since then. These facts do not, however, provide a plausible basis for an inference that atheistic science will eventually be able to account for everything. Dismissing the fine-tuning argument out of the blind conviction that it will fall apart eventually is epistemologically risible. Maybe the fine-tuning argument will fall apart and maybe it won't. There's no legitimate alternative to thinking the matter through as best one can. (We note that it is very odd to be certain that, even given the supposition that God exists, there are naturalistic explanations for everything. Why be so confident that God won't do anything that's best explained by God having done it?) Note again that this God-of-the-gaps reasoning would look foolish in the face of a particularly awesome theistic argument. If we discovered the opening of the Gospel of John written in the stars, it would be outlandish to remain nonchalant on the grounds that the gap in our scientific understanding would soon be filled. And there's no reason to think that God-of-the-gaps reasoning debunks a good theistic argument if it's an obviously inadequate rejoinder to a particularly awesome theistic argument. The second use of the God-of-the-gaps accusation is made by atheists against theists. The atheistic version of the God-of-the-gaps accusation is more reasonable than the theistic version of it (though that's not much of an achievement). This accusation is that theists unreasonably take any gap in scientific understanding to constitute strong evidence for the existence of God. There's no known naturalistic explanation, so this sort of theist foolishly assumes that there is a supernaturalistic explanation.
Automatic inference from any gap in scientific understanding to the very hand of God would indeed be quite unreasonable. It is, however, far from clear that any theists actually reason in such an unreasonable way. Someone truly in the grips of God-of-the-gaps irrationality would make far more irrational inferences than anyone does. For example, until quite recently we did not have an adequate explanation for why Swiss cheese has holes in it. The theory that carbon-dioxide-releasing bacteria caused holes in Swiss cheese was traditional, having been first laid out by William Clark in 1917. This theory was undermined by the discovery that over the past 15 years fewer and fewer holes were appearing in Swiss cheese. There was thus a period of time in which we did not know what caused the holes in Swiss cheese and moreover knew that we did not know what caused the holes in Swiss cheese. (As it turns out, the real cause is microscopic particles of hay, which became less common as cheesemaking conditions became more sanitary.) Yet no one argued that the holes in Swiss cheese must therefore have been made by God, and thus must prove the existence of God. Everyone found it plausible that a scientific explanation for the holes in Swiss cheese was out there, even if we didn't have it yet. For another example, the phenomenon of high-temperature superconductivity is still puzzling to contemporary physicists. (Thanks to Aron Wall for this example.) Yet no one argues that high-temperature superconductivity must therefore be caused by God, and thus must prove the existence of God. Everyone finds it plausible that a scientific explanation for high-temperature superconductivity is out there, even though we don't have it yet. It is simply not the case that anyone thinks that any gap in scientific understanding makes for a good theistic argument. There are myriad gaps in scientific understanding––scientists would be out of work without such gaps. And no one thinks there are that many arguments for the existence of God. Admittedly, there are cases where theists fall prey to bad empirical arguments, thinking that there is good evidence for theism in areas where there isn't. In these cases, are they guilty of God-of-the-gaps reasoning? For example, advocates of "intelligent design" contend that the development of bacterial flagella cannot be explained by gradual evolution through genetic mutation and natural selection. Isn't that pernicious God-of-the-gaps reasoning? We don't think it is. First, we're not at all convinced that there is an explanatory gap vis-à-vis the evolution of the bacterial flagellum; evolutionary biologists do have ways to account for the development of the bacterial flagellum. But suppose that evolutionary biologists had no good models for the development of the bacterial flagellum, and were merely confident that a good model was out there. Even then, the "intelligent design" advocate would not be engaging in some distinctive sort of God-of-the-gaps reasoning. The "intelligent design" advocate would merely evince insufficient credence that a naturalistic account of the evolution of the bacterial flagellum could be found. See Davey (2006). There's no phenomenon that warrants the distinctive appellation "God-of-the-gaps". There's a perfectly lovely phrase for the tendency to reason in a way that illegitimately favors one's pre-existent beliefs: confirmation bias. We find it very plausible that confirmation bias is at stake in many bad theistic arguments.
We also find it very plausible that confirmation bias is at stake in many bad atheistic arguments, and in many bad arguments well outside of the philosophy of religion. The term "God-of-the-gaps" suggests that there's something special going on with certain bad theistic arguments, but there isn't. No precisification of God-of-the-gaps reasoning impugns the fine-tuning argument.

Frustrated Expectations: There's something potentially odd about the fine-tuning argument. We're supposed to believe that God fine-tuned our universe's parameters so that life could exist. But if God likes life so much, why did he select laws that needed fine-tuning in the first place? Why didn't God select laws that were friendlier to life? This objection is pressed by Hans Halvorson: "[T]he fine-tuning argument ... would disconfirm God's existence. After all, a benevolent God would want to create the physical laws so that life-conducive universes would be overwhelmingly likely." Halvorson (2017) Might the fine-tuning argument backfire against theism? A preliminary qualification is called for: The fine-tuning argument really doesn't presuppose that God is interested in life. If one was antecedently certain that God would not care one whit about life but thought that God might well care about the existence of rocks, the fine-tuning argument could proceed just the same. Fine-tuning for rocks will do just as well as fine-tuning for life. (Whether one is more inclined to believe in a God who was interested in creating life or a God who was interested in creating rocks will depend on one's prior probabilities in those hypotheses. For more see the sections "The God of Tungsten" and "Back to Tungsten" in Hawthorne and Isaacs (forthcoming).) Additionally, there are other sorts of phenomena that are not so easily realized in other sorts of physics. Most crudely, if God wanted to provide fuel for the fine-tuning argument itself there really was no alternative to fine-tuning. And if God wanted both life and quasars (or some other complicated facet of our physical world) it's far from clear that there was an alternative. People talk about fine-tuning for life, but the fine-tuning argument need not presuppose that much about divine psychology. It needs to presuppose a little, but not that much. But the main problem with Halvorson's line of reasoning isn't that its conception of the fine-tuning argument is a bit narrow. The main problem with Halvorson's line of reasoning is that it has almost no bearing on the status of the fine-tuning argument. It's entirely fair to think that God would probably have created life-friendly laws. The existence of life-unfriendly laws is thus plausibly evidence against the existence of God. (The existence of life-unfriendly laws is evidence against the existence of God if and only if the existence of life-unfriendly laws is less likely given the existence of God than it is given the non-existence of God.) But so what? The life-unfriendliness of the laws may well be modest evidence against the existence of God, but if the fine-tuning of those life-unfriendly laws is powerful evidence for the existence of God, then theism comes out ahead on balance.
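The balance can be made explicit with a toy calculation (ours, with deliberately made-up numbers). Evidence combines multiplicatively: writing T for theism, A for atheism, E1 for the life-unfriendliness of the laws, and E2 for their fine-tuning, the overall likelihood ratio is

$$ \frac{P(E_1, E_2 \mid T)}{P(E_1, E_2 \mid A)} \;=\; \frac{P(E_1 \mid T)}{P(E_1 \mid A)} \times \frac{P(E_2 \mid T, E_1)}{P(E_2 \mid A, E_1)} $$

If the first factor is, say, 0.1 (modest evidence against theism) and the second is, say, 10^6 (powerful evidence for it), the product is 10^5, and theism comes out far ahead on balance.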
Consider an analogous case. Suppose that a friend of yours said that she might come by your house and write something with the leaves on your yard. Of course, your friend might not come by, and then the pattern of leaves on your yard would just be determined by the wind. Suppose that this friend is widely known to have a deep antipathy to the poetry of the 19th century. What should you think upon finding that the leaves on your yard spell out Gerard Manley Hopkins' Spring and Fall?

Margaret, are you grieving
Over Goldengrove unleaving?
Leaves, like the things of man, you
With your fresh thoughts care for, can you?
Ah! as the heart grows older
It will come to such sights colder
By and by, nor spare a sigh
Though worlds of wanwood leafmeal lie;
And yet you will weep and know why.
Now no matter, child, the name:
Sorrow's springs are the same.
Nor mouth had, no nor mind, expressed
What heart heard of, ghost guessed:
It is the blight man was born for,
It is Margaret you mourn for.

Your friend was very unlikely to write out a poem from the 19th century. But so what? Any 19th century poem is still massively more likely to have been written by your friend than to have come about by the random blowing of the wind. (Note that one need not have any substantial theory of explanations in order to make this inference work. One need not claim that everything has to have an explanation, nor that "That's just the way it is." could not count as an explanation. There might be no need to explain why the leaves spell out Spring and Fall, and "That's just the way it is" might be an entirely acceptable candidate explanation for why the leaves spell out Spring and Fall. Regardless, that pattern of leaves is massively more likely to have been written by your friend than to have come about by the random blowing of the wind.) And for our purposes the probabilities are what matter. It's not at all necessary that the facts about fine-tuning be probable given theism. Given how fine-grained facts are, they are hugely improbable given just about any coarse hypothesis. But that doesn't matter. What matters are the comparative probabilities. If the facts about fine-tuning are improbable given theism but are vastly more improbable given atheism, then the fine-tuning argument works just fine. (If one knew a conditional such as "If there were a God then there wouldn't be life-unfriendly laws," then the existence of life-unfriendly laws would entail the non-existence of God. Such claims are obviously tendentious, however, and are (quite properly) not generally part of skeptical responses to the fine-tuning argument.)

Anthropic Complaints: There's an objection to the fine-tuning argument that goes something like this: "The fact that we exist is supposed to be surprising evidence for theism. But the fact that we exist cannot be surprising, and so cannot be evidence for anything. If we didn't exist we couldn't possibly discover that we didn't exist. It's totally obvious that we'd find that we exist; the fact of our existence thus cannot confirm theism." This sort of reasoning is often dubbed "anthropic", although there are many inequivalent propositions that go by the name "the anthropic principle". See Bostrom (2002). This sort of reasoning is flawed on two levels. First, the evidence for the fine-tuning argument is not the fact of our existence. We've known that we existed for quite some time; we didn't need any contemporary physics to arrive at that conclusion. The fine-tuning argument is based on contemporary discoveries about how life is realized in our universe. It was entirely possible for us to have discovered that life was realized in some other way, in some way that did not involve fine-tuning. That was, in fact, what we were expecting.
But instead we discovered fine-tuning, so now we have to reckon with that. Second, there's no principle that in order for us to have something as evidence it has to be possible to have its negation as evidence. It's possible to learn that a piece of litmus paper turned blue and it's possible to learn that a piece of litmus paper didn't turn blue. It's possible to learn that you exist but it's not possible to learn that you don't exist. It's not possible to learn that you don't exist, that you're incapable of learning, that you're completely brain-dead, and so on. But none of that matters epistemologically. A simple case: You can learn that something exists. You could not have learned that nothing exists. Yet the fact that something exists is obviously devastating evidence against the hypothesis that nothing exists. Anything that is likelier given theism than it is given atheism is evidence for theism. People are notoriously bad at probabilistic reasoning. We should not trust vague anthropic slogans. If you just work through the probabilities everything comes out correctly.

Unscientificness: Some people dismiss the very idea of evidence for God on the grounds that theism is not a scientific hypothesis. For example, Lee Smolin demands that any hypothesis we entertain be confirmable, falsifiable, and unique. According to Smolin, a confirmable theory is one that makes definite predictions that could (given favorable experimental results) redound to the theory's credit, a falsifiable theory is one that makes definite predictions that could (given unfavorable experimental results) entail the theory's falsity, and a unique theory is one such that no other simpler or more plausible theory makes the same predictions. He writes, "Any explanation that fails these tests should be abandoned. After all, it is possible to imagine a multitude of possible non-scientific explanations for almost any observation. Unless we accept the stricture that hypotheses must be confirmable, falsifiable, and unique, no rational debate is possible; the proponents of the various explanations will never change their minds. Yet several of the most popular explanations for the fine-tuning problem fail these tests. One such hypothesis is that there is a god who made the world and chose the values of the parameters so that intelligent life would arise. This is widely believed, but it fails the test for a scientific explanation." Smolin (2012) Smolin's qualms are unreasonable. It's worth noting that no ordinary scientific hypothesis––no theory from the Newtonian theory of universal gravitation to quantum electrodynamics––actually satisfies Smolin's criteria. No ordinary scientific theory is actually falsifiable. You can always come up with some ancillary assumptions to save any theory you like; underdetermination is ubiquitous. (If Smolin said that theories had only to be disconfirmable, then this objection would not apply. But in that case he could not thereby claim that theism is an illegitimate hypothesis, as theism is disconfirmable.) Additionally, many perfectly sensible hypotheses are obviously unfalsifiable. We hypothesize that Archimedes was a fun guy to have at parties. There's no possible way to falsify that hypothesis. Similarly, there's no way to falsify the hypothesis that Archimedes wasn't a fun guy to have at parties. But obviously either he was or he wasn't; Smolin's qualms would throw both possibilities away, and that's starkly unreasonable.
Note once again that Smolin's qualms would look embarrassingly foolish in the face of a particularly awesome theistic argument. If we discovered the opening of the Gospel of John written into the structure of DNA, it would be outlandish to remain nonchalant on the grounds that theism is not a scientific hypothesis. And there's no reason to think that Smolin's qualms legitimate rejecting a good theistic argument if they're an obviously inadequate rejoinder to a particularly awesome theistic argument.

Measure-0 Worries: The fine-tuning argument is standardly evaluated in light of some presuppositions, namely that physical laws have the structural form that they do and that only a single universe exists. It is worth keeping in mind that, ultimately, one must drop these presuppositions and see what the evidential significance of fine-tuning is in light of the full panoply of epistemic possibilities. Still, it's worth seeing how fine-tuning shakes out given the standard presuppositions. If given laws like ours and a single universe the fine-tuning argument makes a tremendous case for theism, then the fine-tuning argument should make at least a pretty good case for theism when all things are considered. And if given laws like ours and a single universe the fine-tuning argument still doesn't work, then it's doubtful that it can be made to work by dropping assumptions that are favorable to it. The fine-tuning argument (like all empirical arguments) is based on claims about probabilities. In this case, the fine-tuning argument is based on the claim that the probability of the observed parameter-values is dramatically greater given theism than given atheism. Given theism, it is supposed to be reasonably plausible that the parameter-values would permit life. (Of course, we know more than merely that the actual parameter-values are life-permitting. We have a decently good sense of what those values are––scientists can tell you the values with a modest margin for error. And the comparative likelihoods afforded by theism and atheism to such particular regions of parameter-space need not correspond to the comparative likelihoods afforded by theism and atheism to the entirety of life-permitting parameter-space. But there seems to be nothing particularly significant about the region of parameter-space in which we find ourselves beyond its life-permittingness (and rock-permittingness, and so on), so our additional evidence about what the parameter-values are shouldn't make much of a difference, if any difference at all.) Given atheism, it is supposed to be wildly implausible that the parameter-values would permit life. But some philosophers (working in teams of three) have argued that this underlying claim about probabilities given atheism does not make sense. Timothy McGrew, Lydia McGrew, and Eric Vestrup, and Mark Colyvan, Jay Garfield, and Graham Priest have each pressed a worry about the probabilities invoked in fine-tuning. The worry goes like this: "How do we get to the thought that, given atheism, the probability of life-permitting (again, rock-permitting would work just as well) parameter-values is extremely low? Well, the parameters of interest do not have maximum values; some have minimum values and others do not. That is, some parameters may take any positive real number and some may take any real number.
For example, the ratio of the mass of the proton to the mass of the electron should be a positive number (at least assuming that there are no negative masses). The cosmological constant specifies the energy density of the vacuum, and could sensibly be any real number. Either way, it makes sense for the parameters to take any value in an infinite range. So what probability should we assign to finite ranges of parameter-values? If the range of possible parameter-values were finite it would be natural to give every equally sized region equal probability. (Note that we do not endorse such indifference-driven reasoning even over finite ranges.) And it still seems natural to do something like that with an infinite range of possible parameter-values. Every equally sized region should have equal probability: the probability that the parameter-value would fall between 1 and 2 is the same as the probability that it would fall between 2 and 3, is the same as the probability that it would fall between 3 and 4, and so on. What, then, is the probability, given atheism, that the parameters would be life-permitting? The probability would have to be low––indeed, the probability would have to be 0. Life is only possible within a finite range of parameter-values. (Too much or too little of anything is bad for life.) But the probability of any finite range must be 0. There are, after all, infinitely many non-overlapping regions of any finite size within the total range. They all must have the same probability, and if those probabilities were anything other than 0 they would sum to more than 1, and that's no good. So any finite region must have probability 0. (Colyvan, Garfield, and Priest flirt with the notion that probability 0 events are automatically impossible. This is emphatically not so.) But if any finite region must have probability 0, then the fine-tuning argument looks really weird. First, the fine-tuning argument wouldn't just be a strong argument for the existence of God, it would be a maximally strong argument for the existence of God. It's always unsettling when an argument purports to be the strongest possible argument. Even more troublingly, it looks too easy for the fine-tuning argument to be maximally strong. The life-permitting range could be arbitrarily large and (so long as it was still finite) the argument would go through just the same. You'll note that virtually no physics was required to make that argument go through, just the fact that with too much or too little of just about anything life is impossible. So it looks like the fine-tuning argument isn't a product of fancy discoveries in physics, but is instead something that would come about in any physics with parameters even remotely like ours––in those physics, just like in ours, too much or too little of anything is bad for life, and so only a finite range of any parameter-value will be friendly to life. But it's wildly implausible that such an incredibly strong argument should be so automatically available. So the fine-tuning argument must be insuperably flawed, and thus have no epistemic significance." McGrew, McGrew, and Vestrup go on to note that such a probability assignment would not be countably additive, which probabilities are standardly required to be. Countable additivity requires that the probabilities assigned to countably many disjoint regions of parameter-space sum to the probability assigned to the union of those regions of parameter-space.
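In symbols (our formulation of the textbook definition): a probability function P is countably additive just in case, for any pairwise disjoint regions A_1, A_2, A_3, ...,

$$ P\Big(\bigcup_{i=1}^{\infty} A_i\Big) \;=\; \sum_{i=1}^{\infty} P(A_i). $$

The uniform assignment above violates this, as the next paragraph spells out: with each of the countably many unit intervals getting the same probability c, the right-hand side is 0 when c = 0 and infinite when c > 0, so it can never equal 1.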
The entirety of parameter-space can be divided into countably many equally sized regions (the region between 0 and 1, the region between 1 and 2, the region between 2 and 3, and so on). But there is no way for countably infinitely many regions to each receive the same probability such that the sum of those probabilities is 1. If each region receives probability 0 the sum will be 0, and if each region receives probability greater than 0 the sum will be infinite. McGrew, McGrew, and Vestrup consider this violation of countable additivity to be fatal. (We are less convinced that the violation of countable additivity is fatal; there are some reasons to prefer mere finite additivity, which would not impose unsatisfiable restrictions. But our reasons for being sympathetic to possible violations of countable additivity have nothing to do with this case––we emphatically reject the indifference-driven reasoning which posed problems for countable additivity in the first place.) This worry is profoundly misguided. It is unreasonable for philosophers to dismiss the reasoning of physicists on the basis of a misrepresentation of what the physicists say. Physicists do not claim that the probability of life-permitting parameter-values is 0. The reasoning that led to the judgment that the probability was 0 was flawed, and it's inappropriate to ascribe such flawed reasoning to the professional consensus of a generation of physicists. But for now we want to emphasize that the physicists simply do not claim that the probability of life-permitting parameter-values is 0. For example, physicists say that the cosmological constant (which specifies the energy density of the vacuum) is strikingly fine-tuned, that the odds of it having a life-permitting value were roughly 1 in 10^120. Physicists do not say that the cosmological constant is maximally fine-tuned, that the odds of getting life-permitting values were 0. Physicists did not come up with the number 1 in 10^120 capriciously; the reasoning behind that number involves seriously fancy physics. (In particular, a Wilsonian dimensional analysis of effective field theory. We kind of know what that is.) This criticism of the fine-tuning argument is not based on any understanding of it whatsoever. The rough idea of the physics is this: The values of the constants do not exist in complete isolation. The constants make contributions to the values of the other constants; they nudge each other around, so to speak. So the cosmological constant has received around two dozen contributions from the other constants, and physicists can know how big they are––order 10^120 bigger than the actual value. So we've got a couple dozen numbers of magnitude 10^120, some positive and some negative, they get added together, and the sum is a small, positive value. Physicists did not expect that. They expected that the numbers of magnitude 10^120 would sum to something of magnitude 10^120. Trying to figure out why that sum worked out as conveniently as it did is a major project in physics. But the crucial point here is that this claim of fine-tuning isn't based on any sort of judgment that all parameter-values are equally likely. It is instead based on an expectation––an expectation rooted in a serious understanding of physics––that the cosmological constant would have a hugely different value than it does.
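The scale of the coincidence can be put in crude arithmetic (our illustration; the count of contributions and the choice of units are stylized). Working in units where the observed value is 1, a couple dozen contributions each of magnitude around 10^120 sum to something of order 1:

$$ C_1 + C_2 + \cdots + C_{24} \approx 1, \qquad |C_i| \sim 10^{120}, $$

which requires the leading ~120 digits of the positive and negative contributions to cancel. If the sum could generically have landed anywhere in a range of width ~10^120, the chance of landing within ~1 of zero is around 1 in 10^120, which is the figure physicists quote.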
A toy model is helpful. Suppose that Bill and Melinda Gates decide to start living particularly lavishly, spending billions of dollars every year, buying islands, commissioning movies, and generally living it up. At the end of the year, their accountant finds something remarkable––their expenditures were almost perfectly cancelled out by the appreciation of Microsoft stock. Over the course of the year, their net worth increased by just under a dollar. It's very improbable to have the pluses and minuses cancel out so closely. (Given the magnitude of the fine-tuning of the cosmological constant, it'd be more like the Gates' expenditures being almost perfectly cancelled out by stock gains 10 years in a row.) Now if there were literally no alternative account of why that happened, then there'd be nothing to conclude except that something weird happened. But there are always alternatives. If someone wearing a robe had told Bill and Melinda that he was casting a spell on them to make that happen, we'd be much more inclined to believe that he was a wizard than that it was just a coincidence, and we'd be much more inclined to believe that he was running some sort of scam than that he was a wizard. But the point here is just that such near-perfect cancellation of increases and decreases of net worth is shockingly improbable unless there's something funny going on. That's exactly the kind of reasoning that the fine-tuning argument relies on, and it is beyond reproach. Any criticism of the fine-tuning argument that relies on arguments against assigning probability 0 to the life-permitting range of parameter-values is wildly beside the point. Such criticism involves hearing a claim from physicists, disregarding that claim in favor of a dramatically different and novel claim that seems superficially similar, imputing obviously flawed reasoning as the basis of that novel claim, refuting that flawed reasoning, and thereby being confident that the initial claim was bogus. This really is as though a philosopher heard some physicists claim that there's a supermassive black hole in the center of the Milky Way galaxy and had this response: "Supermassive black hole? They must mean a black hole more massive than which cannot be thought. And their reason for thinking that there's such a black hole must be that a black hole that exists is more massive than a black hole that doesn't exist, thus a black hole more massive than which cannot be thought must exist on pain of contradiction. But we all know that sort of reasoning is flawed. Kant's claim that existence is not a predicate is good enough for me, and fancier work refuting the ontological argument has been done more recently. Silly physicists, getting suckered by such a well-known fallacy! There must be no black hole at the center of the Milky Way galaxy at all." In fact, physicists do not base their judgments about probabilities on anything as crude as the conviction that all areas of parameter-space with equal size must have equal probability. This would be an obviously silly thing to do with parameter-space. Many of the parameters involve ratios of quantities––the ratio of the mass of the proton to the mass of the electron, for example. And there's no particularly deep reason why the parameter should be the ratio of the mass of the proton to the mass of the electron rather than the ratio of the mass of the electron to the mass of the proton––those two parameters are obviously interdefinable and equally natural. But the values between 0 and 1 in one of those parameters will correspond to the values greater than 1 in the other and vice versa. Thus the crude conviction that all areas of parameter-space with equal size must have equal probability is obviously dependent on an arbitrary decision of how to do the parameterization, and physicists know better than to depend on anything remotely like that.
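The parameterization-dependence is a textbook change-of-variables fact (our formalization, not the authors'). If the parameter X = m_p/m_e has probability density f_X, then the equally natural parameter Y = m_e/m_p = 1/X has density

$$ f_Y(y) \;=\; f_X(1/y)\,\frac{1}{y^{2}}, $$

so if f_X is constant over some range, f_Y is proportional to 1/y² over the corresponding range and is anything but constant. "Equal-sized regions get equal probability" in one parameterization flatly contradicts it in the other.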
It is, admittedly, hard to understand the nuances of physicists' reasoning about fine-tuning. But it is not hard to understand that physicists' reasoning about fine-tuning is untouched by McGrew, McGrew, Vestrup, Colyvan, Garfield, and Priest's argumentation. It is worth thinking about how to reason in contexts in which probabilities behave the way these critics have laid out––not because it is relevant to fine-tuning, but just because it is interesting. Let us therefore allow violations of countable additivity. Suppose that we know that a natural number will be generated by one of two processes, and that each process is equally likely to do the generating. The first process is not uniform and does obey countable additivity, while the second process is uniform and does not obey countable additivity. The first process will generate '1' with probability ½, '2' with probability ¼, '3' with probability ⅛, and so on. (The probability that any number 'n' will be generated is 1/2^n.) The second process will generate any number 'n' with probability 0. In effect, the second process randomly selects a natural number. Now suppose that you learn what number was generated. What should you think about whether that number was generated by the first process or the second process? There's a good argument that, no matter what number was generated, you should be certain that it was generated by the first process and not the second process. After all, no matter what number was generated, the first process had non-zero probability of generating it while the second process had 0 probability of generating it.
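Worked through Bayes' theorem (our computation), the argument is this: let H₁ and H₂ be the hypotheses that the first or the second process did the generating, each with prior probability ½, and suppose the number n was generated. Then

$$ P(H_1 \mid n) \;=\; \frac{\tfrac{1}{2}\cdot 2^{-n}}{\tfrac{1}{2}\cdot 2^{-n} + \tfrac{1}{2}\cdot 0} \;=\; 1 \quad \text{for every } n. $$

Whatever number is observed, the posterior on the first process is 1.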
This does seem quite odd, however; it seems wrong for the hypothesis that the first process was selected to be destined for confirmation and for the hypothesis that the second process was selected to be destined for disconfirmation. (The unconditional probability falls outside the range of the conditional probabilities of each element of the outcome space. When this happens, mathematicians say that the distribution is non-conglomerable.) Now it's well-known that violations of countable additivity can easily produce non-conglomerability, so it's not surprising that this happened in the case above. And it is not clear to us how one should reason in the case above. We are open to the possibility that one should simply follow the conditional probabilities where they lead and accept their non-conglomerability. A different––but related––worry about measure-0 probabilities is worth thinking about. Since there are continuum many possible parameter-values, the actual values are likely to have probability 0 given either theism or atheism. But probabilities conditional on measure-0 events are not generally well-defined. Colyvan, Garfield, and Priest (with something else in mind) write, "Accepting that the universe as we find it has probability zero means that the conditional probability of any hypothesis relative to the fine-tuning data is undefined. This makes the next move in the argument from fine tuning––that the hypothesis of an intelligent designer is more likely than not, given the fine-tuning data––untenable." Happily, there are two good responses to this worry. First, it doesn't matter if the probabilities given the actual parameter-values are undefined so long as the probabilities given our evidence about the parameter-values are not undefined. And since our measuring instruments are only finitely sensitive, our evidence is coarse-grained enough to unproblematically receive non-zero prior probability. Second, although probabilities conditional on measure-0 events are not generally well-defined, they are sometimes well-defined––and this is one of the cases in which they are. Probabilities conditional on measure-0 events are well-defined when they can be taken as limits of probabilities conditional on events of non-zero measure. If you can think of a measure-0 event as the limit of other events with non-zero measure, then everything is OK. For example, suppose you have two dart players throwing darts at a continuously dense dartboard. One player, the amateur, will hit a random point on the board. The other player, the expert, will hit a random point in the bullseye. There's a sense in which the expert is no more likely to hit any spot in the bullseye than the amateur is––they each hit each spot in the bullseye with probability 0. But if you think about shrinking regions around some spot in the bullseye, the expert is more likely to get in that region than the amateur is. Because parameter-values vary continuously, even if we did know the actual parameter-values we could use this approach to keep our conditional probabilities well-defined. For more on this approach to measure-0 conditional probabilities, see Urbach & Howson (1993).
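The dartboard recipe, in symbols (our formulation of the standard limit construction): where X is the point hit and B_ε(x) is the region within distance ε of the spot x,

$$ P(\text{expert} \mid X = x) \;=\; \lim_{\varepsilon \to 0^{+}} P\big(\text{expert} \mid X \in B_{\varepsilon}(x)\big). $$

Each conditioning event X ∈ B_ε(x) has non-zero probability, so each term in the limit is well-defined by the ordinary ratio formula, and the limit supplies a well-defined probability conditional on the measure-0 event X = x.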
Conclusion: The fine-tuning argument is impressively complex. A proper assessment of its strength requires a sophisticated understanding of contemporary physics, a method for reasoning about the problematic infinities of inflationary cosmology and multiverse theory, unusually detailed credences about an unusually wide array of possibilities for physics, and general epistemological good sense to boot. We do not claim to have shown what should be made of the fine-tuning argument, but only that something should be made of it. The prominent objections to the fine-tuning argument are insuperably flawed. And we don't hold out much hope for non-prominent objections. There is no legitimate reason to dismiss the fine-tuning argument. We are left with the more difficult task of reckoning with it.

References:
Bonhoeffer, Dietrich (1997). Letters and Papers from Prison. Simon & Schuster.
Bostrom, Nick (2002). Anthropic Bias: Observation Selection Effects in Science and Philosophy. Routledge.
Colyvan, Mark, Garfield, Jay L., and Priest, Graham (2005). Problems With the Argument From Fine Tuning. Synthese 145 (3): 325-338.
Davey, Kevin (2006). Debating Design: From Darwin to DNA - Edited by William A. Dembski and Michael Ruse. Philosophical Books 47 (4): 383-386.
Dawkins, Richard (1986). The Blind Watchmaker. Norton & Company, Inc.
Goodman, Nelson (1955). Fact, Fiction, and Forecast. Harvard University Press.
Halvorson, Hans (2017). Fine-Tuning Does Not Imply a Fine-Tuner. Retrieved from: http://cosmos.nautil.us/short/119/fine-tuning-does-not-imply-a-fine-tuner
Hawthorne, John and Isaacs, Yoaav (forthcoming). Fine-Tuning Fine-Tuning. In Benton, Hawthorne, and Rabinowitz (eds.), Knowledge, Belief, and God: New Insights in Religious Epistemology. Oxford University Press.
Hawthorne, John, Isaacs, Yoaav, and Wall, Aron (manuscript in progress). The Foundations of Fine-Tuning.
Lange, Marc (2002). Baseball, Pessimistic Inductions and the Turnover Fallacy. Analysis 62 (4): 281-285.
Lewis, David (1983). New Work for a Theory of Universals. Australasian Journal of Philosophy 61 (December): 343-377.
McGrew, Timothy, McGrew, Lydia, and Vestrup, Eric (2001). Probabilities and the Fine-Tuning Argument: A Sceptical View. Mind 110 (440): 1027-1038.
Philipse, Herman (2012). God in the Age of Science?: A Critique of Religious Reason. Oxford University Press.
Smolin, Lee (2012). Scientific Approaches to the Fine-Tuning Problem. Retrieved from: http://www.pbs.org/wgbh/nova/blogs/physics/2012/12/scientific-approaches-to-the-fine-tuning-problem/
Urbach, Peter and Howson, Colin (1993). Scientific Reasoning: The Bayesian Approach. Open Court.
Weinberg, Steven (1989). The Cosmological Constant Problem. Reviews of Modern Physics 61 (1): 1-23.